Record fields: id (64-char hash), published (timestamp, 19-25 chars), title (7-262 chars), description (6-54.4k chars), link (31-227 chars), category (6 classes), image (3-247 chars).
49dfbe237fca38573d65bd8a8d35c772f18ee20d424c720003acb03877a8e4c0
2026-01-07T00:00:00-05:00
TreeDiff: AST-Guided Code Generation with Diffusion LLMs
arXiv:2508.01473v3 Announce Type: replace Abstract: Code generation is increasingly critical for real-world applications, yet diffusion-based large language models continue to struggle with it. Unlike free-form text, code requires syntactic precision; even minor structural inconsistencies can render a program non-executable. Existing diffusion-based large language models rely on random token masking for corruption, leading to two key failures: they lack awareness of syntactic boundaries during the iterative denoising process, and they fail to capture the long-range hierarchical dependencies essential for program correctness. We propose TreeDiff to address both issues. Specifically, we propose a syntax-aware diffusion framework that incorporates structural priors from the Abstract Syntax Tree (AST) into the corruption process. Instead of masking individual tokens at random, we selectively mask tokens belonging to key AST nodes. By aligning the corruption process with the underlying structure of code, our method encourages the model to internalize the compositional nature of programming languages, enabling it to reconstruct programs that respect grammatical boundaries and capture long-range dependencies. Our method achieves a 13.3% relative improvement over random masking, demonstrating its effectiveness on the code generation task by leveraging underlying structure.
https://arxiv.org/abs/2508.01473
Academic Papers
svg
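The AST-node masking idea in the abstract above can be sketched in a few lines. This is an illustrative toy using Python's `ast` module, not the paper's implementation; the `<mask>` token and the deterministic node-selection rule are assumptions:

```python
import ast

MASK = "<mask>"

def ast_span_mask(source: str) -> str:
    """Mask the source span of one AST statement node instead of
    masking individual tokens at random (illustrative only)."""
    tree = ast.parse(source)
    # Collect statement nodes that carry source positions (Python 3.8+).
    stmts = [n for n in ast.walk(tree)
             if isinstance(n, ast.stmt) and hasattr(n, "end_lineno")]
    node = stmts[-1]  # deterministic pick here; a diffusion step would sample
    lines = source.splitlines()
    # Replace the node's whole line span with a single mask token.
    masked = lines[: node.lineno - 1] + [MASK] + lines[node.end_lineno:]
    return "\n".join(masked)

code = "x = 1\ny = x + 2\nprint(y)"
masked = ast_span_mask(code)  # the final statement's span becomes <mask>
```

The point of the sketch is that the corrupted region is a syntactic unit, so denoising must reconstruct a whole grammatical subtree rather than scattered tokens.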
2e5c14ac62388abdd7df9c92866c94220eb89ef7b2624759a8d2e15d71021ab1
2026-01-07T00:00:00-05:00
The Homogenizing Effect of Large Language Models on Human Expression and Thought
arXiv:2508.01491v2 Announce Type: replace Abstract: Cognitive diversity, reflected in variations of language, perspective, and reasoning, is essential to creativity and collective intelligence. This diversity is rich and grounded in culture, history, and individual experience. Yet as large language models (LLMs) become deeply embedded in people's lives, they risk standardizing language and reasoning. We synthesize evidence across linguistics, psychology, cognitive science, and computer science to show how LLMs reflect and reinforce dominant styles while marginalizing alternative voices and reasoning strategies. We examine how their design and widespread use contribute to this effect by mirroring patterns in their training data and amplifying convergence as all people increasingly rely on the same models across contexts. Unchecked, this homogenization risks flattening the cognitive landscapes that drive collective intelligence and adaptability.
https://arxiv.org/abs/2508.01491
Academic Papers
svg
6970ddd958db40199d5789abcba9c0863cbc2f7f3032e13157e73adcf3b8cc8f
2026-01-07T00:00:00-05:00
The Bidirectional Process Reward Model
arXiv:2508.01682v2 Announce Type: replace Abstract: Process Reward Models (PRMs), which assign fine-grained scores to intermediate reasoning steps within a solution trajectory, have emerged as a promising approach to enhance the reasoning quality of Large Language Models (LLMs). However, most existing PRMs rely on a unidirectional left-to-right (L2R) evaluation scheme, which restricts their utilization of global context. In light of this challenge, we propose a novel bidirectional evaluation paradigm, named the Bidirectional Process Reward Model (BiPRM). BiPRM incorporates a parallel right-to-left (R2L) evaluation stream, implemented via prompt reversal, alongside the conventional L2R flow. A gating mechanism is then introduced to adaptively fuse the reward scores from both streams into a holistic quality assessment. Remarkably, compared to the original PRM, BiPRM introduces only a 0.3% parameter increase for the gating module, and the parallel execution of the two streams adds merely 5% inference latency. Our extensive empirical evaluations spanning diverse benchmarks, LLM backbones, PRM objectives and sampling policies demonstrate that BiPRM consistently surpasses unidirectional baselines, achieving an average relative gain of 10.6% over 54 solution-level configurations and 37.7% in 12 step-level error detection scenarios. Overall, our results highlight the effectiveness, robustness and general applicability of BiPRM, offering a promising new direction for process-based reward modeling.
https://arxiv.org/abs/2508.01682
Academic Papers
svg
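The gated fusion described in the abstract above reduces to a convex combination of the two streams' scores. A minimal sketch, assuming per-step gate logits (in the paper the gate is a small learned module, not fixed scalars):

```python
import math

def sigmoid(z: float) -> float:
    return 1.0 / (1.0 + math.exp(-z))

def fuse_step_rewards(l2r, r2l, gate_logits):
    """Fuse per-step rewards from the L2R and R2L streams: a gate value g
    near 1 trusts the left-to-right score, near 0 the right-to-left one."""
    return [sigmoid(w) * a + (1.0 - sigmoid(w)) * b
            for a, b, w in zip(l2r, r2l, gate_logits)]

# with zero logits the gate is 0.5, so each fused score is the mean
scores = fuse_step_rewards([0.9, 0.2], [0.5, 0.8], [0.0, 0.0])
```

Because the gate is the only new component, the reported 0.3% parameter overhead is plausible: everything else reuses the two evaluation streams.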
b2fe93cc8c904a54c3b9788676c38d2c841cac62cace979c98fa68ac087071f3
2026-01-07T00:00:00-05:00
Reconsidering Overthinking: Penalizing Internal and External Redundancy in CoT Reasoning
arXiv:2508.02178v2 Announce Type: replace Abstract: Large Reasoning Models (LRMs) often suffer from overthinking, generating verbose reasoning traces that compromise both computational efficiency and interpretability. Unlike prior efforts that rely on global length-based rewards, we propose a semantic-aware decomposition of redundancy into two distinct forms: internal redundancy (informational stagnation within the reasoning process) and external redundancy (superfluous continuation after the final answer). We introduce a dual-penalty reinforcement learning framework that surgically targets these inefficiencies: a sliding-window semantic analysis is employed to penalize low-gain steps within the reasoning trajectory, while a normalized metric suppresses the post-answer tail. Extensive experiments demonstrate that our method significantly compresses Chain-of-Thought traces with minimal accuracy degradation, while maintaining strong generalization to out-of-domain tasks. Crucially, we reveal an asymmetry in redundancy: external redundancy can be safely eliminated without performance loss, whereas internal redundancy removal requires a calibrated trade-off to maintain reasoning fidelity. Our framework enables fine-grained, implicit control over reasoning length, paving the way for more concise and interpretable LRMs.
https://arxiv.org/abs/2508.02178
Academic Papers
svg
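A toy version of the dual penalty in the abstract above, assuming each reasoning step has already been scored for similarity to its predecessor; the threshold and the normalizations are placeholders, not the paper's exact formulas:

```python
def redundancy_penalties(step_sims, answer_idx, n_steps, w=0.8):
    """Internal redundancy: fraction of low-gain steps, i.e. steps whose
    similarity to the previous step exceeds a threshold w (stagnation).
    External redundancy: normalized count of steps emitted after the
    final answer (the superfluous tail)."""
    internal = sum(1 for s in step_sims if s > w) / max(len(step_sims), 1)
    external = (n_steps - 1 - answer_idx) / n_steps
    return internal, external

internal, external = redundancy_penalties(
    step_sims=[0.3, 0.9, 0.95, 0.4],  # similarity of each step to its predecessor
    answer_idx=3,                      # answer appears at step 3 (0-based)
    n_steps=6,                         # trace continues for 2 more steps
)
```

The asymmetry the abstract reports maps cleanly onto these two terms: the `external` tail can be penalized aggressively, while the `internal` threshold needs calibration.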
3f4933f383867d73f9bc320bb2a8d046a5d94f501f9c92858fad12c76e48883e
2026-01-07T00:00:00-05:00
U-PINet: Physics-Informed Hierarchical Learning for Accurate and Fast 3D RCS Prediction
arXiv:2508.03774v2 Announce Type: replace Abstract: Accurate radar cross section (RCS) computation is a fundamental task in radar engineering and electromagnetic (EM) scattering analysis, underpinning target signature characterization, detection, and recognition. Conventional computational electromagnetics (CEM) solvers provide high-fidelity RCS predictions but suffer from prohibitive computational costs when applied to 3-dimensional (3D) targets under multi-aspect configurations. In contrast, purely data-driven neural networks offer high efficiency yet often lack physical consistency and generalization capability. To address these challenges, this paper proposes a U-shaped Physics-Informed Network (U-PINet). To the best of our knowledge, it is the first framework to establish a fully end-to-end, physics-informed hierarchical architecture for fast and accurate RCS computation, grounded in the governing principles of CEM. Inspired by the near-far field decomposition in classical fast solvers, U-PINet explicitly models local EM coupling and long-range radiation effects through a hierarchical operator design. A physics-guided graph construction is further introduced to represent self- and mutual-coupling among mesh elements of complex 3D targets, enabling physically interpretable intermediate representations. By embedding EM governing equations as residual constraints, the proposed framework achieves end-to-end, physically consistent RCS prediction with significantly improved computational efficiency. Extensive numerical experiments demonstrate that U-PINet attains solver-level RCS accuracy with orders-of-magnitude runtime reduction, while exhibiting strong generalization to unseen target geometries under limited training data.
https://arxiv.org/abs/2508.03774
Academic Papers
svg
15a51210ac514a6041427315bcb836946ca8e13fe42b65cd116943ef835c4d4f
2026-01-07T00:00:00-05:00
SAGOnline: Segment Any Gaussians Online
arXiv:2508.08219v2 Announce Type: replace Abstract: 3D Gaussian Splatting has emerged as a powerful paradigm for explicit 3D scene representation, yet achieving efficient and consistent 3D segmentation remains challenging. Existing segmentation approaches typically rely on high-dimensional feature lifting, which causes costly optimization, implicit semantics, and task-specific constraints. We present \textbf{Segment Any Gaussians Online (SAGOnline)}, a unified, zero-shot framework that achieves real-time, cross-view consistent segmentation without scene-specific training. SAGOnline decouples the monolithic segmentation problem into lightweight sub-tasks. By integrating video foundation models (e.g., SAM 2), we first generate temporally consistent 2D masks across rendered views. Crucially, instead of learning continuous feature fields, we introduce a \textbf{Rasterization-aware Geometric Consensus} mechanism that leverages the traceability of the Gaussian rasterization pipeline. This allows us to deterministically map 2D predictions to explicit, discrete 3D primitive labels in real-time. This discrete representation eliminates the memory and computational burden of feature distillation, enabling instant inference. Extensive evaluations on NVOS and SPIn-NeRF benchmarks demonstrate that SAGOnline achieves state-of-the-art accuracy (92.7\% and 95.2\% mIoU) while operating at the fastest speed of 27 ms per frame. By providing a flexible interface for diverse foundation models, our framework supports instant prompt-based, instance, and semantic segmentation, paving the way for interactive 3D understanding in AR/VR and robotics.
https://arxiv.org/abs/2508.08219
Academic Papers
svg
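The geometric-consensus step described above can be sketched as a per-primitive majority vote over mask labels. The data layout here (pixel -> contributing Gaussian IDs) is a hypothetical stand-in for the traceability information the rasterizer exposes:

```python
from collections import Counter, defaultdict

def label_gaussians(pixel_contribs, mask):
    """Assign each Gaussian primitive the majority label among the 2D mask
    labels of the pixels it contributed to (geometric-consensus sketch)."""
    votes = defaultdict(Counter)
    for pix, gaussians in pixel_contribs.items():
        for g in gaussians:
            votes[g][mask[pix]] += 1
    return {g: c.most_common(1)[0][0] for g, c in votes.items()}

# pixel -> IDs of Gaussians the rasterizer blended at that pixel (toy data)
contribs = {(0, 0): [1, 2], (0, 1): [2], (1, 1): [2], (1, 0): [1]}
mask = {(0, 0): "chair", (0, 1): "floor", (1, 1): "floor", (1, 0): "chair"}
labels = label_gaussians(contribs, mask)
```

Because labels attach directly to discrete primitives, there is no feature field to optimize, which is what makes the per-frame cost so low.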
65856ef499cc70c77a5d9718df5b2bc50a59a8ff978a967d3a595842d384de6f
2026-01-07T00:00:00-05:00
Optimal Boost Design for Auto-bidding Mechanism with Publisher Quality Constraints
arXiv:2508.08772v2 Announce Type: replace Abstract: Online bidding serves as a fundamental information system in mobile ecosystems, facilitating real-time ad allocation across billions of devices while optimizing both platform performance and user experience through data-driven decision making. Improving ad allocation efficiency is a long-standing research problem, as it directly enhances the economic outcomes for all participants in advertising platforms. This paper investigates the design of optimal boost factors in online bidding while incorporating quality value (the impact of displayed ads on publishers' long-term benefits). To address the divergent interests on quality, we establish a three-party auction framework with a unified welfare metric of advertiser and publisher. Within this framework, we derive the theoretical efficiency lower bound for C-competitive boost in second-price single-slot auctions, then design a novel quality-involved Boosting (q-Boost) algorithm for computing the optimal boost factor. Experimental validation on Alibaba's public dataset (AuctionNet) demonstrates 2%-6% welfare improvements over conventional approaches, proving our method's effectiveness in real-world settings.
https://arxiv.org/abs/2508.08772
Academic Papers
svg
69f5349ee176c17c725418277b3e44ca7858860d8e826d1c29e9af4223b9b21a
2026-01-07T00:00:00-05:00
Diagnostic-Guided Dynamic Profile Optimization for LLM-based User Simulators in Sequential Recommendation
arXiv:2508.12645v4 Announce Type: replace Abstract: Recent advances in large language models (LLMs) have enabled realistic user simulators for developing and evaluating recommender systems (RSs). However, existing LLM-based simulators for RSs face two major limitations: (1) static, single-step prompt-based inference that leads to inaccurate and incomplete user profile construction; (2) an unrealistic, single-round recommendation-feedback interaction pattern that fails to capture real-world scenarios. To address these limitations, we propose DGDPO (Diagnostic-Guided Dynamic Profile Optimization), a novel framework that constructs user profiles through a dynamic and iterative optimization process to enhance simulation fidelity. Specifically, DGDPO incorporates two core modules within each optimization loop: first, a specialized LLM-based diagnostic module, calibrated through our novel training strategy, accurately identifies specific defects in the user profile. Subsequently, a generalized LLM-based treatment module analyzes the diagnosed defect and generates targeted suggestions to refine the profile. Furthermore, unlike existing LLM-based user simulators that are limited to single-round interactions, we are the first to integrate DGDPO with sequential recommenders, enabling a bidirectional evolution where user profiles and recommendation strategies adapt to each other over multi-round interactions. Extensive experiments conducted on three real-world datasets demonstrate the effectiveness of our proposed framework.
https://arxiv.org/abs/2508.12645
Academic Papers
svg
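The diagnose-then-treat loop described above reduces to a simple control flow. The toy `diagnose`/`treat` callables below are hypothetical stand-ins for the two LLM-based modules:

```python
def optimize_profile(profile, diagnose, treat, max_rounds=3):
    """Iteratively refine a user profile: the diagnostic module names a
    specific defect, the treatment module applies a targeted fix, and the
    loop stops once no defect is found (or the round budget is exhausted)."""
    for _ in range(max_rounds):
        defect = diagnose(profile)
        if defect is None:
            break
        profile = treat(profile, defect)
    return profile

# toy modules standing in for the calibrated LLM-based ones
diagnose = lambda p: "missing genre" if "genre" not in p else None
treat = lambda p, d: p + " genre:sci-fi"
refined = optimize_profile("likes space movies;", diagnose, treat)
```

Separating "name the defect" from "fix the defect" is the key design choice: it lets the diagnostic module be specialized and calibrated while the treatment module stays general-purpose.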
8d605e51630682b9f8efd18ee9498391f6dc6bc0d59c1fb0d5c21a085aebf00b
2026-01-07T00:00:00-05:00
An Informative Planning Framework for Target Tracking and Active Mapping in Dynamic Environments with ASVs
arXiv:2508.14636v3 Announce Type: replace Abstract: Mobile robot platforms are increasingly being used to automate information gathering tasks such as environmental monitoring. Efficient target tracking in dynamic environments is critical for applications such as search and rescue and pollutant cleanups. In this letter, we study active mapping of floating targets that drift due to environmental disturbances such as wind and currents. This is a challenging problem as it involves predicting both spatial and temporal variations in the map due to changing conditions. We introduce an integrated framework combining dynamic occupancy grid mapping and an informative planning approach to actively map and track freely drifting targets with an autonomous surface vehicle. A key component of our adaptive planning approach is a spatiotemporal prediction network that predicts target position distributions over time. We further propose a planning objective for target tracking that leverages these predictions. Simulation experiments show that this planning objective improves target tracking performance compared to existing methods that consider only entropy reduction as the planning objective. Finally, we validate our approach in field tests, showcasing its ability to track targets in real-world monitoring scenarios.
https://arxiv.org/abs/2508.14636
Academic Papers
svg
1fe5c3afae2fd08d7c237d088a131e459450f2488393304f2de2cdb4a83fd77d
2026-01-07T00:00:00-05:00
VocabTailor: Dynamic Vocabulary Selection for Downstream Tasks in Small Language Models
arXiv:2508.15229v2 Announce Type: replace Abstract: Small Language Models (SLMs) provide computational advantages in resource-constrained environments, yet memory limitations remain a critical bottleneck for edge device deployment. A substantial portion of SLMs' memory footprint stems from vocabulary-related components, particularly embeddings and language modeling (LM) heads, due to large vocabulary sizes. Existing static vocabulary pruning, while reducing memory usage, suffers from rigid, one-size-fits-all designs that cause information loss from the prefill stage and a lack of flexibility. In this work, we identify two key principles underlying the vocabulary reduction challenge: the lexical locality principle, the observation that only a small subset of tokens is required during any single inference, and the asymmetry in computational characteristics between the vocabulary-related components of SLMs. Based on these insights, we introduce VocabTailor, a novel decoupled dynamic vocabulary selection framework that addresses memory constraints through embedding offloading and implements a hybrid static-dynamic vocabulary selection strategy for the LM head, enabling on-demand loading of vocabulary components. Comprehensive experiments across diverse downstream tasks demonstrate that VocabTailor achieves a reduction of up to 99% in the memory usage of vocabulary-related components with minimal or no degradation in task performance, substantially outperforming existing static vocabulary pruning.
https://arxiv.org/abs/2508.15229
Academic Papers
svg
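Lexical locality, as described above, can be illustrated with a dict standing in for the offloaded embedding table: only the rows a single request actually touches are made resident. All names here are illustrative, not the paper's API:

```python
def select_vocab_rows(embedding, token_ids):
    """Load only the embedding rows needed for this inference call
    (lexical-locality sketch); `embedding` stands in for an offloaded
    table, here a plain dict {token_id: vector}."""
    needed = sorted(set(token_ids))
    return {t: embedding[t] for t in needed}

full_table = {i: [float(i)] * 2 for i in range(50_000)}  # toy 50k vocab
active = select_vocab_rows(full_table, [5, 5, 17, 3])
# only 3 of 50,000 rows are resident for this request
```

The same locality argument applies to the LM head, except that output logits may need tokens the prompt never mentioned, which is why the paper pairs a static core vocabulary with dynamic selection there.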
9d51fd0c38bc510c9dd0c8217c5c0c96e3822e70e30478a65f5a7792d0acae9b
2026-01-07T00:00:00-05:00
Scalable Scientific Interest Profiling Using Large Language Models
arXiv:2508.15834v2 Announce Type: replace Abstract: Research profiles highlight scientists' research focus, enabling talent discovery and collaborations, but are often outdated. Automated, scalable methods are urgently needed to keep profiles current. We design and evaluate two Large Language Model (LLM)-based methods to generate scientific interest profiles--one summarizing PubMed abstracts and the other using Medical Subject Headings (MeSH) terms--comparing them with researchers' self-summarized interests. We collected titles, MeSH terms, and abstracts of PubMed publications for 595 faculty at Columbia University Irving Medical Center, obtaining human-written profiles for 167. GPT-4o-mini was prompted to summarize each researcher's interests. Manual and automated evaluations characterized similarities between machine-generated and self-written profiles. The similarity study showed low ROUGE-L, BLEU, and METEOR scores, reflecting little terminological overlap. BERTScore analysis revealed moderate semantic similarity (F1: 0.542 for MeSH-based, 0.555 for abstract-based), despite low lexical overlap. In validation, paraphrased summaries achieved a higher F1 of 0.851; comparing original and manually paraphrased summaries thus indicated limitations of such metrics. Kullback-Leibler (KL) divergence of TF-IDF values (8.56 for MeSH-based, 8.58 for abstract-based) suggests machine summaries employ different keywords than human-written ones. Manual reviews showed 77.78% rated MeSH-based profiles "good" or "excellent," with readability rated favorably in 93.44% of cases, though granularity and accuracy varied. Panel reviews favored 67.86% of MeSH-derived profiles over abstract-derived ones. LLMs promise to automate scientific interest profiling at scale. MeSH-derived profiles have better readability than abstract-derived ones. Machine-generated summaries differ from human-written ones in concept choice, with the latter introducing more novel ideas.
https://arxiv.org/abs/2508.15834
Academic Papers
svg
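The KL-divergence comparison above can be reproduced in miniature. The toy keyword distributions below stand in for normalized TF-IDF vectors of a machine-generated and a human-written profile; the epsilon smoothing for unseen terms is an assumption:

```python
import math

def kl_divergence(p, q, eps=1e-12):
    """KL(P || Q) over a shared term set; p and q are dicts mapping
    term -> probability (e.g. normalized TF-IDF weights). Terms absent
    from q are smoothed with eps so the sum stays finite."""
    terms = set(p) | set(q)
    return sum(
        p[t] * math.log((p[t] + eps) / (q.get(t, 0.0) + eps))
        for t in terms
        if p.get(t, 0.0) > 0.0
    )

machine = {"genomics": 0.6, "cancer": 0.4}
human = {"genomics": 0.5, "oncology": 0.5}
d = kl_divergence(machine, human)  # large when keyword choices diverge
```

A divergence of zero means identical keyword distributions, so the reported values around 8.5 indicate substantially different term choices, consistent with the low lexical-overlap scores.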
96ae0650951e16911dc6f2860404baaf052e5e7913afe87c6e715c7b5a13f5a7
2026-01-07T00:00:00-05:00
LVLM-Aware Multimodal Retrieval for RAG-Based Medical Diagnosis with General-Purpose Models
arXiv:2508.17394v4 Announce Type: replace Abstract: Retrieving visual and textual information from medical literature and hospital records can enhance diagnostic accuracy for clinical image interpretation. However, multimodal retrieval-augmented diagnosis is highly challenging. We explore a lightweight mechanism for enhancing diagnostic performance of retrieval-augmented LVLMs. We train a lightweight LVLM-aware multimodal retriever, such that the retriever learns to return images and texts that guide the LVLM toward correct predictions. In our low-resource setting, we perform only lightweight fine-tuning with small amounts of data, and use only general-purpose backbone models, achieving competitive results in clinical classification and VQA tasks compared to medically pre-trained models with extensive training. In a novel analysis, we highlight a previously unexplored class of errors that we term inconsistent retrieval predictions: cases where different top-retrieved images yield different predictions for the same target. We find that these cases are challenging for all models, even for non-retrieval models, and that our retrieval optimization mechanism significantly improves these cases over standard RAG. However, our analysis also sheds light on gaps in the ability of LVLMs to utilize retrieved information for clinical predictions. Code and models available at: https://github.com/Nirmaz/JOMED.
https://arxiv.org/abs/2508.17394
Academic Papers
svg
106576bcfcff7dfa4d9ffad9233e7ad67a799a3415ead377ea616457ff12333f
2026-01-07T00:00:00-05:00
Scene-Aware Vectorized Memory Multi-Agent Framework with Cross-Modal Differentiated Quantization VLMs for Visually Impaired Assistance
arXiv:2508.18177v2 Announce Type: replace Abstract: Visually impaired individuals face significant challenges in environmental perception. Traditional assistive technologies often lack adaptive intelligence, focusing on individual components rather than integrated systems. While Vision-Language Models (VLMs) offer a promising path to richer, integrated understanding, their deployment is severely limited by substantial computational requirements, demanding dozens of gigabytes of memory. To address these gaps in computational efficiency and integrated design, this study proposes a dual technological innovation framework: a cross-modal differentiated quantization framework for VLMs and a scene-aware vectorized memory multi-agent system. The quantization framework implements differentiated strategies, reducing memory from 38GB to 11.3GB. The multi-agent system uses vectorized memory and perception-memory-reasoning workflows to provide environmental information beyond the current view, achieving 2.83-3.52s latency to initial speech output. Experiments show the quantized 19B-parameter model only experiences a 2.05% performance drop on MMBench and maintains 63.7 accuracy on OCR-VQA (original: 64.9), outperforming smaller models with equivalent memory. This research advances computational efficiency and assistive technology, offering comprehensive assistance in scene perception, text recognition, and navigation.
https://arxiv.org/abs/2508.18177
Academic Papers
svg
a48e1a7e69d8b1ea1d2c51c06e19d7f833c4f7ca0c68907ec08977bb984f7176
2026-01-07T00:00:00-05:00
Low-Cost Architecture and Efficient Pattern Synthesis for Polarimetric Phased Array Based on Polarization Coding Reconfigurable Elements
arXiv:2508.19644v3 Announce Type: replace Abstract: Polarimetric phased arrays (PPAs) enhance radar target detection and anti-jamming capabilities, but their conventional dual transmit/receive (T/R) channel architecture leads to high cost and system complexity. To address these limitations, this paper proposes a polarization-coding reconfigurable phased array (PCRPA) and associated pattern synthesis techniques, which reduce the channel count while preserving key performance. In the PCRPA, each antenna element connects to a single T/R channel and is equipped with a two-level RF switch, enabling real-time control of its polarization state and subarray grouping. By optimizing both the element polarization codes and the excitation weights, the array can synthesize arbitrarily polarized and dual-polarized beams. Simulation results show that the proposed approach achieves suppressed cross-polarization and comparable sidelobe levels compared to conventional PPAs across a wide scan range, with performance improvements being more pronounced in larger arrays. The inherent channel reduction does, however, incur a trade-off in terms of radiated power and directivity. Experimental validation using an $8\times 8$ X-band array antenna confirms the feasibility and effectiveness of the proposed system. The PCRPA architecture and the accompanying synthesis methods offer a cost-effective solution for large-scale PPA systems, maintaining sidelobe and polarization control with significantly reduced hardware complexity.
https://arxiv.org/abs/2508.19644
Academic Papers
svg
5294a1fda5b2a313d1db2dbae8afb3f37cb49fecde7a97ba4cd20cf4fb17d597
2026-01-07T00:00:00-05:00
Constructive l2-Discrepancy Minimization with Additive Deviations
arXiv:2508.21423v3 Announce Type: replace Abstract: The \emph{signed series} problem in the $\ell_2$ norm asks: given a set of vectors $v_1,\ldots,v_n\in \mathbf{R}^d$ of at most unit $\ell_2$ norm, does there always exist a sequence $(\varepsilon_i)_{i\in [n]}$ of $\pm 1$ signs such that $\max_{i\in [n]} \|\sum_{j=1}^i \varepsilon_j v_j\|_2 = O(\sqrt{d})$? A result of Banaszczyk [2012, \emph{Rand. Struct. Alg.}] states that there exist signs $\varepsilon_i\in \{-1,1\},\; i\in [n]$, such that $\max_{i\in [n]} \|\sum_{j=1}^i \varepsilon_j v_j\|_2 = O(\sqrt{d+\log n})$. The best constructive bound known so far is $O(\sqrt{d\log n})$, by Bansal and Garg [2017, \emph{STOC}; 2019, \emph{SIAM J. Comput.}]. We give a polynomial-time randomized algorithm to find signs $x(i) \in \{-1,1\},\; i\in [n]$, such that \[ \max_{i\in [n]} \|\sum_{j=1}^i x(j)v_j\|_2 = O(\sqrt{d + \log^2 n}) = O(\sqrt{d}+\log n).\] By the constructive reduction of Harvey and Samadi [\emph{COLT}, 2014], this also yields a constructive bound of $O(\sqrt{d}+\log n)$ for the Steinitz problem in the $\ell_2$-norm. Thus, we algorithmically achieve Banaszczyk's bounds for both problems when $d \geq \log^2 n$, which also matches the conjectured bounds. Our algorithm is based on the framework of Bansal and Garg, together with a new analysis involving $(i)$ additional linear and spectral orthogonality constraints during the construction of the covariance matrix of the random-walk steps, which allow us to control the quadratic variation in the linear as well as the quadratic components of the discrepancy increment vector, along with $(ii)$ a ``Freedman-like'' version of the Hanson--Wright concentration inequality for filtration-dependent sums of subgaussian chaoses.
https://arxiv.org/abs/2508.21423
Academic Papers
svg
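For contrast with the random-walk algorithm in the abstract above, the natural first heuristic for prefix-balanced signs, greedy sign choice, fits in a few lines. This is emphatically not the paper's method, only a baseline illustrating what the problem asks for:

```python
def greedy_signs(vectors):
    """Greedy baseline for the signed series problem: pick each sign to
    minimize the squared norm of the running prefix sum. Choosing the sign
    opposing the current prefix (negative inner product) guarantees the
    squared norm grows by at most ||v_i||^2 per step."""
    d = len(vectors[0])
    prefix = [0.0] * d
    signs = []
    for v in vectors:
        dot = sum(p * x for p, x in zip(prefix, v))
        s = -1 if dot > 0 else 1
        signs.append(s)
        prefix = [p + s * x for p, x in zip(prefix, v)]
    return signs, prefix

signs, prefix = greedy_signs([[1.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
```

Greedy only yields an $O(\sqrt{n})$ prefix bound in the worst case; closing the gap to $O(\sqrt{d} + \log n)$ is exactly what requires the correlated random-walk construction the abstract describes.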
712bdb536defe9f3f323fe7051d430606362db9f5ecb1fc9fc0c5adf870e4d87
2026-01-07T00:00:00-05:00
On discrete Sobolev inequalities for nonconforming finite elements under a semi-regular mesh condition
arXiv:2509.00505v2 Announce Type: replace Abstract: We derive a discrete $ L^q-L^p$ Sobolev inequality tailored for the Crouzeix--Raviart and discontinuous Crouzeix--Raviart finite element spaces on anisotropic meshes in both two and three dimensions. Subject to a semi-regular mesh condition, this discrete Sobolev inequality is applicable to all pairs $(q,p)$ that align with the local Sobolev embedding, including scenarios where $q \leq p$. Importantly, the constant is influenced solely by the domain and the semi-regular parameter, ensuring robustness against variations in aspect ratios and interior angles of the mesh. The proof employs an anisotropy-sensitive trace inequality that leverages the element height, a two-step affine/Piola mapping approach, the stability of the Raviart--Thomas interpolation, and a discrete integration-by-parts identity augmented with weighted jump/trace terms on faces. This Sobolev inequality serves as a mesh-robust foundation for the stability and error analysis of nonconforming and discontinuous Galerkin methods on highly anisotropic meshes.
https://arxiv.org/abs/2509.00505
Academic Papers
svg
002fc64ed6b7675e219f38611626472090e242431ff522b4b1e284a9535f3eb5
2026-01-07T00:00:00-05:00
HADIS: Hybrid Adaptive Diffusion Model Serving for Efficient Text-to-Image Generation
arXiv:2509.00642v2 Announce Type: replace Abstract: Text-to-image diffusion models have achieved remarkable visual quality but incur high computational costs, making latency-aware, scalable deployment challenging. To address this, we advocate a hybrid architecture that achieves query awareness when serving diffusion models. Unlike existing query-aware serving systems that cascade lightweight and heavyweight models with a fixed configuration, our hybrid architecture first routes each query directly to a suitable model variant, then reroutes it to a cascaded heavyweight model only if necessary. We theoretically analyze conditions for the hybrid architecture to outperform non-hybrid alternatives in latency and response quality. Building on this architecture, we design HADIS, a hybrid serving system for latency-aware diffusion models that jointly optimizes cascade model selection, query routing, and resource allocation. To reduce the complexity of resource management, HADIS uses an offline profiling phase to produce a Pareto-optimal cascade configuration table. At runtime, HADIS selects the best cascade configuration and GPU allocation given latency and workload constraints. Empirical evaluations on real-world traces demonstrate that HADIS improves response quality by up to 35% while reducing latency violation rates by 2.7-45$\times$ compared to state-of-the-art model serving systems.
https://arxiv.org/abs/2509.00642
Academic Papers
svg
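The route-then-reroute behavior described above can be sketched with toy quality estimators. The function names, difficulty score, and threshold below are all hypothetical placeholders for HADIS's learned router and profiled configurations:

```python
def serve(query_difficulty, light, heavy, threshold=0.85):
    """Hybrid serving sketch: route the query to the variant matched to it
    first, and cascade to the heavyweight model only when the estimated
    response quality falls below the threshold."""
    name, quality = light(query_difficulty)
    if quality >= threshold:
        return name, quality        # direct route sufficed
    return heavy(query_difficulty)  # reroute to the cascaded heavyweight

light = lambda d: ("light", 1.0 - d)  # toy quality estimator
heavy = lambda d: ("heavy", 0.95)

easy = serve(0.05, light, heavy)   # easy query stays on the light model
hard = serve(0.50, light, heavy)   # hard query is rerouted
```

This differs from a fixed cascade, which would send every query through the light model first; here the initial routing decision is itself query-aware, and the cascade is the fallback.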
66e77ff3aeb725598534758bdf766354955468d10a4811305b84785d5daf1139
2026-01-07T00:00:00-05:00
Re3: Learning to Balance Relevance & Recency for Temporal Information Retrieval
arXiv:2509.01306v2 Announce Type: replace Abstract: Temporal Information Retrieval (TIR) is a critical yet unresolved task for modern search systems, retrieving documents that not only satisfy a query's information need but also adhere to its temporal constraints. This task is shaped by two challenges: Relevance, ensuring alignment with the query's explicit temporal requirements, and Recency, selecting the freshest document among multiple versions. Existing methods often address the two challenges in isolation, relying on brittle heuristics that fail in scenarios where temporal requirements and staleness resistance are intertwined. To address this gap, we introduce Re2Bench, a benchmark specifically designed to disentangle and evaluate Relevance, Recency, and their hybrid combination. Building on this foundation, we propose Re3, a unified and lightweight framework that dynamically balances semantic and temporal information through a query-aware gating mechanism. On Re2Bench, Re3 achieves state-of-the-art results, leading in R@1 across all three subsets. Ablation studies with backbone sensitivity tests confirm robustness, showing strong generalization across diverse encoders and real-world settings. This work provides both a generalizable solution and a principled evaluation suite, advancing the development of temporally aware retrieval systems. Re3 and Re2Bench are available online: https://anonymous.4open.science/r/Re3-0C5A
https://arxiv.org/abs/2509.01306
Academic Papers
svg
2983ed14072d5ea628620c0022c3aa2108b51707b243650e90541be8f7631bd4
2026-01-07T00:00:00-05:00
ViSTA-SLAM: Visual SLAM with Symmetric Two-view Association
arXiv:2509.01584v2 Announce Type: replace Abstract: We present ViSTA-SLAM, a real-time monocular visual SLAM system that operates without requiring camera intrinsics, making it broadly applicable across diverse camera setups. At its core, the system employs a lightweight symmetric two-view association (STA) model as the frontend, which simultaneously estimates relative camera poses and regresses local pointmaps from only two RGB images. This design reduces model complexity significantly: the size of our frontend is only 35\% that of comparable state-of-the-art methods, while the quality of the two-view constraints used in the pipeline is enhanced. In the backend, we construct a specially designed Sim(3) pose graph that incorporates loop closures to address accumulated drift. Extensive experiments demonstrate that our approach achieves superior performance in both camera tracking and dense 3D reconstruction quality compared to current methods. Github repository: https://github.com/zhangganlin/vista-slam
https://arxiv.org/abs/2509.01584
Academic Papers
svg
1a81ed879e378da5309b7a4d4d90fb1bec965de89be5e82f0531c718cd55f7e6
2026-01-07T00:00:00-05:00
Uncertainty-driven Adaptive Exploration
arXiv:2509.03219v3 Announce Type: replace Abstract: Adaptive exploration methods propose ways to learn complex policies via alternating between exploration and exploitation. An important question for such methods is to determine the appropriate moment to switch between exploration and exploitation and vice versa. This is critical in domains that require the learning of long and complex sequences of actions. In this work, we present a generic adaptive exploration framework that employs uncertainty to address this important issue in a principled manner. Our framework includes previous adaptive exploration approaches as special cases. Moreover, we can incorporate in our framework any uncertainty-measuring mechanism of choice, for instance mechanisms used in intrinsic motivation or epistemic uncertainty-based exploration methods. We experimentally demonstrate that our framework gives rise to adaptive exploration strategies that outperform standard ones across several environments.
https://arxiv.org/abs/2509.03219
Academic Papers
svg
7b336a99d9248ac44d0bd20b9c18346185834074e0e79c2a1aad45cc1b7c1074
2026-01-07T00:00:00-05:00
A Multidimensional AI-powered Framework for Analyzing Tourist Perception in Historic Urban Quarters: A Case Study in Shanghai
arXiv:2509.03830v2 Announce Type: replace Abstract: Historic urban quarters play a vital role in preserving cultural heritage while serving as vibrant spaces for tourism and everyday life. Understanding how tourists perceive these environments is essential for sustainable, human-centered urban planning. This study proposes a multidimensional AI-powered framework for analyzing tourist perception in historic urban quarters using multimodal data from social media. Applied to twelve historic quarters in central Shanghai, the framework integrates focal point extraction, color theme analysis, and sentiment mining. Visual focus areas are identified from tourist-shared photos using a fine-tuned semantic segmentation model. To assess aesthetic preferences, dominant colors are extracted using a clustering method, and their spatial distribution across quarters is analyzed. Color themes are further compared between social media photos and real-world street views, revealing notable shifts. This divergence highlights potential gaps between visual expectations and the built environment, reflecting both stylistic preferences and perceptual bias. Tourist reviews are evaluated through a hybrid sentiment analysis approach combining a rule-based method and a multi-task BERT model. Satisfaction is assessed across four dimensions: tourist activities, built environment, service facilities, and business formats. The results reveal spatial variations in aesthetic appeal and emotional response. Rather than focusing on a single technical innovation, this framework offers an integrated, data-driven approach to decoding tourist perception and contributes to informed decision-making in tourism, heritage conservation, and the design of aesthetically engaging public spaces.
https://arxiv.org/abs/2509.03830
Academic Papers
svg
f39732a79acd4a1913acc1f5bdff587a552404546f7e60062e5fccb5fa7a5c1e
2026-01-07T00:00:00-05:00
IPA: An Information-Reconstructive Input Projection Framework for Efficient Foundation Model Adaptation
arXiv:2509.04398v3 Announce Type: replace Abstract: Parameter-efficient fine-tuning (PEFT) methods, such as LoRA, reduce adaptation cost by injecting low-rank updates into pretrained weights. However, LoRA's down-projection is randomly initialized and data-agnostic, discarding potentially useful information. Prior analyses show that this projection changes little during training, while the up-projection carries most of the adaptation, making the random input compression a performance bottleneck. We propose IPA, a feature-aware projection framework that explicitly aims to reconstruct the original input within a reduced hidden space. In the linear case, we instantiate IPA with algorithms approximating top principal components, enabling efficient projector pretraining with negligible inference overhead. Across language and vision benchmarks, IPA consistently improves over LoRA and DoRA, achieving on average 1.5 points higher accuracy on commonsense reasoning and 2.3 points on VTAB-1k, while matching full LoRA performance with roughly half the trainable parameters when the projection is frozen. Code available at https://github.com/valeoai/peft-ipa .
https://arxiv.org/abs/2509.04398
Academic Papers
svg
0c5a0d9194af9b10baa13124f2164f3dee87d6a4101a1854f1dd573c58156eae
2026-01-07T00:00:00-05:00
Predicting Failures of LLMs to Link Biomedical Ontology Terms to Identifiers: Evidence Across Models and Ontologies
arXiv:2509.04458v2 Announce Type: replace Abstract: Large language models often perform well on biomedical NLP tasks but may fail to link ontology terms to their correct identifiers. We investigate why these failures occur by analyzing predictions across two major ontologies, Human Phenotype Ontology and Gene Ontology, and two high-performing models, GPT-4o and LLaMa 3.1 405B. We evaluate nine candidate features related to term familiarity, identifier usage, morphology, and ontology structure. Univariate and multivariate analyses show that exposure to ontology identifiers is the strongest predictor of linking success.
https://arxiv.org/abs/2509.04458
Academic Papers
svg
2e2928130aba94de9e866dbf46461342791f345b8f78e1c517614b905a973e61
2026-01-07T00:00:00-05:00
Benchmarking CNN and Transformer-Based Object Detectors for UAV Solar Panel Inspection
arXiv:2509.05348v2 Announce Type: replace Abstract: Timely and accurate detection of defects and contaminants in solar panels is critical for maintaining the efficiency and reliability of photovoltaic (PV) systems. While recent studies have applied deep learning to PV inspection, fair benchmarking across detector architectures and unbiased handling of class imbalance remain limited. This work presents a comprehensive benchmark of convolutional and transformer-based object detectors on UAV-captured RGB imagery of solar panels. It introduces a class-targeted augmentation strategy applied exclusively to the training split to mitigate imbalance without compromising evaluation integrity. Faster R-CNN with ResNet50 and MobileNetV3 backbones, RetinaNet with ResNet50, YOLOv5, YOLOv8, and Swin Transformer backbones integrated with Faster R-CNN (Tiny, Small, and Base variants) are evaluated. Performance is assessed using mean Average Precision (mAP) across multiple IoU thresholds, precision, recall, F1 score, and inference throughput to enable accuracy-throughput tradeoff analysis relevant to UAV deployment. Experimental results show that Faster R-CNN with a ResNet50 backbone achieves the highest localization accuracy, with mAP@0.5 of 0.893 and mAP@0.5:0.95 of 0.759, whereas the MobileNetV3 variant provides the best overall reliability balance, achieving recall of 0.745, F1-score of 0.809, and accuracy of 0.679 on the test set. The dataset and code will be released upon acceptance of the paper.
https://arxiv.org/abs/2509.05348
Academic Papers
svg
45f0f749e0091dc8232b8c1434d781feffd8ffb7192b8eea48624af7c0ee2dcb
2026-01-07T00:00:00-05:00
Optimal Average Disk-Inspection via Fermat's Principle
arXiv:2509.06334v3 Announce Type: replace Abstract: This work resolves the optimal average-case cost of the Disk-Inspection problem, a variant of Bellman's 1955 lost-in-a-forest problem. In Disk-Inspection, a mobile agent starts at the center of a unit disk and follows a trajectory that inspects perimeter points whenever the disk does not obstruct visibility. The worst-case cost was solved optimally in 1957 by Isbell, but the average-case version remained open, with heuristic upper bounds proposed by Gluss in 1961 and improved only recently. Our approach applies Fermat's Principle of Least Time to a recently proposed discretization framework, showing that optimal solutions are captured by a one-parameter family of recurrences independent of the discretization size. In the continuum limit these recurrences give rise to a single-parameter optimal control problem, whose trajectories coincide with limiting solutions of the original Disk-Inspection problem. A crucial step is proving that the optimal initial condition generates a trajectory that avoids the unit disk, thereby validating the optics formulation and reducing the many-variable optimization to a rigorous one-parameter problem. In particular, this disproves Gluss's conjecture that optimal trajectories must touch the disk. Our analysis determines the exact optimal average-case inspection cost, equal to $3.549259\ldots$ and certified to at least six digits of accuracy.
https://arxiv.org/abs/2509.06334
Academic Papers
svg
675ddeb5a1489fcd40672b1bbfceb20ca613a2f07fd4f0ed5aabb09bc6d7b194
2026-01-07T00:00:00-05:00
Learning Optimal Defender Strategies for CAGE-2 using a POMDP Model
arXiv:2509.06539v2 Announce Type: replace Abstract: CAGE-2 is an accepted benchmark for learning and evaluating defender strategies against cyberattacks. It reflects a scenario where a defender agent protects an IT infrastructure against various attacks. Many defender methods for CAGE-2 have been proposed in the literature. In this paper, we construct a formal model for CAGE-2 using the Partially Observable Markov Decision Process (POMDP) framework. Based on this model, we define an optimal defender strategy for CAGE-2 and introduce a method to efficiently learn this strategy. Our method, called BF-PPO, is based on PPO, and it uses a particle filter to mitigate the computational complexity due to the large state space of the CAGE-2 model. We evaluate our method in the CAGE-2 CybORG environment and compare its performance with that of CARDIFF, the highest-ranked method on the CAGE-2 leaderboard. We find that our method outperforms CARDIFF regarding the learned defender strategy and the required training time.
https://arxiv.org/abs/2509.06539
Academic Papers
svg
616513cc16f686c906d56ca7ad73e942e900a1c0cd64c96911cd7c231a4e0db4
2026-01-07T00:00:00-05:00
A Decade-long Landscape of Advanced Persistent Threats: Longitudinal Analysis and Global Trends
arXiv:2509.07457v2 Announce Type: replace Abstract: An advanced persistent threat (APT) refers to a covert, long-term cyberattack, typically conducted by state-sponsored actors, targeting critical sectors and often remaining undetected for long periods. In response, collective intelligence from around the globe collaborates to identify and trace surreptitious activities, generating substantial documentation on APT campaigns publicly available on the web. While prior works predominantly focus on specific aspects of APT cases, such as detection, evaluation, cyber threat intelligence, and dataset creation, limited attention has been devoted to revisiting and investigating these scattered dossiers in a longitudinal manner. The objective of our study is to fill the gap by offering a macro perspective, connecting key insights and global trends in past APT attacks. We systematically analyze six reliable sources (three focused on technical reports and another three on threat actors), examining 1,509 APT dossiers (24,215 pages) spanning 2014-2023, and identifying 603 unique APT groups worldwide. To efficiently unearth relevant information, we employ a hybrid methodology that combines rule-based information retrieval with large-language-model-based search techniques. Our longitudinal analysis reveals shifts in threat actor activities, global attack vectors, changes in targeted sectors, and relationships between cyberattacks and significant events such as elections or wars, which provide insights into historical patterns in APT evolution. Over the past decade, 154 countries have been affected, primarily through malicious documents and spear phishing as dominant initial infiltration vectors, with a noticeable decline in zero-day exploitation since 2016. Furthermore, we present our findings through interactive visualization tools, such as an APT map or flow diagram, to facilitate intuitive understanding of global patterns and trends in APT activities.
https://arxiv.org/abs/2509.07457
Academic Papers
svg
fedc48308c9c76bc85aca612a0baa0e2e146820b804fa42f729d6e1cd0f960fd
2026-01-07T00:00:00-05:00
EFPIX: A zero-trust encrypted flood protocol
arXiv:2509.08248v3 Announce Type: replace Abstract: We propose EFPIX (Encrypted Flood Protocol for Information eXchange), a flood-based relay communication protocol that achieves end-to-end encryption, plausible deniability for users, and untraceable messages while hiding metadata, such as sender and receiver, from those not involved. It also has built-in spam resistance and multiple optional enhancements. It can be used in privacy-critical communication, infrastructure-loss scenarios, space/research/military communication, where central servers are infeasible, or general-purpose messaging.
https://arxiv.org/abs/2509.08248
Academic Papers
svg
aa1033a4de2f76ddfb78814260835c8c187424a3f26b68532deb4a48bb0232eb
2026-01-07T00:00:00-05:00
Personality-Enhanced Social Recommendations in SAMI: Exploring the Role of Personality Detection in Matchmaking
arXiv:2509.09583v2 Announce Type: replace Abstract: Social belonging is a vital part of learning, yet online course environments present barriers to the organic formation of social groups. SAMI (Social Agent Mediated Interactions) offers one solution by facilitating student connections, but its effectiveness may be constrained by an incomplete Theory of Mind, limiting its ability to create an effective 'mental model' of a student. One facet of this is its inability to intuit personality, which may influence the relevance of its recommendations. To explore this gap, we examine the viability of automated personality inference by proposing a personality detection model utilizing GPT's zero-shot capability to infer Big-Five personality traits from forum introduction posts, often encouraged in online courses. We benchmark its performance against established models, finding that while GPT models show promising results on this specific dataset, performance varies significantly across traits. We identify potential biases toward optimistic trait inference, particularly for traits with skewed distributions. We demonstrate a proof-of-concept integration of personality detection into SAMI's entity-based matchmaking system, focusing on three traits with established connections to positive social formation: Extroversion, Agreeableness, and Openness. This work represents an initial exploration of personality-informed social recommendations in educational settings. While our implementation shows technical feasibility, significant questions remain. We discuss these limitations and outline directions for future work, examining what LLMs specifically capture when performing personality inference and whether personality-based matching meaningfully improves student connections in practice.
https://arxiv.org/abs/2509.09583
Academic Papers
svg
726c07934e085280f5678ca6a1451c12709ce0bec18f8e40badf82fe65a15f2f
2026-01-07T00:00:00-05:00
AgentArch: A Comprehensive Benchmark to Evaluate Agent Architectures in Enterprise
arXiv:2509.10769v2 Announce Type: replace Abstract: While individual components of agentic architectures have been studied in isolation, there remains limited empirical understanding of how different design dimensions interact within complex multi-agent systems. This study aims to address these gaps by providing a comprehensive enterprise-specific benchmark evaluating 18 distinct agentic configurations across state-of-the-art large language models. We examine four critical agentic system dimensions: orchestration strategy, agent prompt implementation (ReAct versus function calling), memory architecture, and thinking tool integration. Our benchmark reveals significant model-specific architectural preferences that challenge the prevalent one-size-fits-all paradigm in agentic AI systems. It also reveals significant weaknesses in overall agentic performance on enterprise tasks with the highest scoring models achieving a maximum of only 35.3\% success on the more complex task and 70.8\% on the simpler task. We hope these findings inform the design of future agentic systems by enabling more empirically backed decisions regarding architectural components and model selection.
https://arxiv.org/abs/2509.10769
Academic Papers
svg
fc5a75d4cab874b6d83ebfc8df01645fbff75a0c3da5bb1f25136d640a250092
2026-01-07T00:00:00-05:00
Patient-Zero: Scaling Synthetic Patient Agents to Real-World Distributions without Real Patient Data
arXiv:2509.11078v2 Announce Type: replace Abstract: Synthetic data generation with Large Language Models (LLMs) has emerged as a promising solution in the medical domain to mitigate data scarcity and privacy constraints. However, existing approaches remain constrained by their derivative nature, relying on real-world records, which pose privacy risks and distribution biases. Furthermore, current patient agents face the Stability-Plasticity Dilemma, struggling to maintain clinical consistency during dynamic inquiries. To address these challenges, we introduce Patient-Zero, a novel framework for ab initio patient simulation that requires no real medical records. Our Medically-Aligned Hierarchical Synthesis framework generates comprehensive and diverse patient records from abstract clinical guidelines via stratified attribute permutation. To support rigorous clinical interaction, we design a Dual-Track Cognitive Memory System that enables agents to dynamically update memory while preserving logical consistency and persona adherence. Extensive evaluations show that Patient-Zero establishes a new state-of-the-art in both data quality and interaction fidelity. In human expert evaluations, senior licensed physicians judge our synthetic data to be statistically indistinguishable from real human-authored data and higher in clinical quality. Furthermore, a downstream medical reasoning model trained on our synthetic dataset shows substantial performance gains (MedQA +24.0%; MMLU +14.5%), demonstrating the practical utility of our framework.
https://arxiv.org/abs/2509.11078
Academic Papers
svg
3f6ce4f840dcf3fd2a7f6e696f059c8c2c190059419e54b17aed68c5e2fa4379
2026-01-07T00:00:00-05:00
OnlineMate: An LLM-Based Multi-Agent Companion System for Cognitive Support in Online Learning
arXiv:2509.14803v3 Announce Type: replace Abstract: In online learning environments, students often lack personalized peer interactions, which are crucial for cognitive development and learning engagement. Although previous studies have employed large language models (LLMs) to simulate interactive learning environments, these interactions are limited to conversational exchanges, failing to adapt to learners' individualized cognitive and psychological states. As a result, students' engagement is low and they struggle to gain inspiration. To address this challenge, we propose OnlineMate, a multi-agent learning companion system driven by LLMs integrated with Theory of Mind (ToM). OnlineMate simulates peer-like roles, infers learners' psychological states such as misunderstandings and confusion during collaborative discussions, and dynamically adjusts interaction strategies to support higher-order thinking. Comprehensive evaluations, including simulation-based experiments, human assessments, and real classroom trials, demonstrate that OnlineMate significantly promotes deep learning and cognitive engagement by elevating students' average cognitive level while substantially improving emotional engagement scores.
https://arxiv.org/abs/2509.14803
Academic Papers
svg
67d84b2138bb24e75c9ab625d34ed317324c218d79d21fe6659ad7124394dd87
2026-01-07T00:00:00-05:00
Exploring How Audio Effects Alter Emotion with Foundation Models
arXiv:2509.15151v3 Announce Type: replace Abstract: Audio effects (FX) such as reverberation, distortion, modulation, and dynamic range processing play a pivotal role in shaping emotional responses during music listening. While prior studies have examined links between low-level audio features and affective perception, the systematic impact of audio FX on emotion remains underexplored. This work investigates how foundation models - large-scale neural architectures pretrained on multimodal data - can be leveraged to analyze these effects. Such models encode rich associations between musical structure, timbre, and affective meaning, offering a powerful framework for probing the emotional consequences of sound design techniques. By applying various probing methods to embeddings from deep learning models, we examine the complex, nonlinear relationships between audio FX and estimated emotion, uncovering patterns tied to specific effects and evaluating the robustness of foundation audio models. Our findings aim to advance understanding of the perceptual impact of audio production practices, with implications for music cognition, performance, and affective computing.
https://arxiv.org/abs/2509.15151
Academic Papers
svg
0b8ea300437e862d585aa6ce12ed6242084f995b9088fb5d7cab4b3fdedb6fe7
2026-01-07T00:00:00-05:00
FragmentRetro: A Quadratic Retrosynthetic Method Based on Fragmentation Algorithms
arXiv:2509.15409v2 Announce Type: replace Abstract: Retrosynthesis, the process of deconstructing a target molecule into simpler precursors, is crucial for computer-aided synthesis planning (CASP). Widely adopted tree-search methods often suffer from exponential computational complexity. In this work, we introduce FragmentRetro, a novel retrosynthetic method that leverages fragmentation algorithms, specifically BRICS and r-BRICS, combined with stock-aware exploration and pattern fingerprint screening to achieve quadratic complexity. FragmentRetro recursively combines molecular fragments and verifies their presence in a building block set, providing sets of fragment combinations as retrosynthetic solutions. We present the first formal computational analysis of retrosynthetic methods, showing that tree search exhibits exponential complexity $O(b^h)$, DirectMultiStep scales as $O(h^6)$, and FragmentRetro achieves $O(h^2)$, where $h$ represents the number of heavy atoms in the target molecule and $b$ is the branching factor for tree search. Evaluations on PaRoutes, USPTO-190, and natural products demonstrate that FragmentRetro achieves high solved rates with competitive runtime, including cases where tree search fails. The method benefits from fingerprint screening, which significantly reduces substructure matching complexity. While FragmentRetro focuses on efficiently identifying fragment-based solutions rather than full reaction pathways, its computational advantages and ability to generate strategic starting candidates establish it as a powerful foundational component for scalable and automated synthesis planning.
https://arxiv.org/abs/2509.15409
Academic Papers
svg
05d79847d56fe749ed9205fa4ab43dd4d61168e91a5e2f04f119d11d77176e92
2026-01-07T00:00:00-05:00
ISCS: Parameter-Guided Feature Pruning for Resource-Constrained Embodied Perception
arXiv:2509.16853v2 Announce Type: replace Abstract: Prior studies in embodied AI consistently show that robust perception is critical for human-robot interaction, yet deploying high-fidelity visual models on resource-constrained agents remains challenging due to limited on-device computational power and transmission latency. Exploiting the redundancy in latent representations could improve system efficiency, yet existing approaches often rely on costly dataset-specific ablation tests or heavy entropy models unsuitable for real-time edge-robot collaboration. We propose a generalizable, dataset-agnostic method to identify and selectively transmit structure-critical channels in pretrained encoders. Instead of brute-force empirical evaluation, our approach leverages intrinsic parameter statistics (weight variances and biases) to estimate channel importance. This analysis reveals a consistent organizational structure, termed the Invariant Salient Channel Space (ISCS), where Salient-Core channels capture dominant structures while Salient-Auxiliary channels encode fine visual details. Building on ISCS, we introduce a deterministic static pruning strategy that enables lightweight split computing. Experiments across different datasets demonstrate that our method achieves a deterministic, ultra-low-latency pipeline by bypassing heavy entropy modeling. Our method reduces end-to-end latency, providing a critical speed-accuracy trade-off for resource-constrained human-aware embodied systems.
https://arxiv.org/abs/2509.16853
Academic Papers
svg
4710ac07237914f2541c2d849aed5e9e908c3e85acb1bc34fda9efe68d529c5e
2026-01-07T00:00:00-05:00
Evolutionary Learning in Spatial Agent-Based Models for Physical Climate Risk Assessment
arXiv:2509.18633v3 Announce Type: replace Abstract: Climate risk assessment requires modelling complex interactions between spatially heterogeneous hazards and adaptive economic systems. We present a novel geospatial agent-based model that integrates climate hazard data with evolutionary learning for economic agents. Our framework combines geospatial agent-based modelling with asset-level damage functions, featuring an illustrative three-sector economy (commodity, manufacturing, retail) with adaptive learning behaviours that allow firms to evolve strategies for budget allocation, pricing, wages, and risk adaptation through fitness-based selection and mutation. We demonstrate the framework using riverine flood projections under RCP8.5 until 2100, comparing four scenarios: baseline and hazard conditions with and without evolutionary learning. Our results show that increasingly frequent and intense acute hazards lower firm production levels, liquidity, and capital, while increasing the prices of goods and unemployment. The framework reveals systemic risks where even agents not directly exposed to floods face impacts through supply chain disruptions. Importantly, evolutionary adaptation enables firms to maintain higher production, capital, liquidity, wages and employment levels while keeping prices lower compared to non-learning counterparts. This open-source framework provides financial institutions and companies with tools to quantify both direct and cascading climate risks while evaluating cost-effective adaptation strategies.
https://arxiv.org/abs/2509.18633
Academic Papers
svg
cdc84172061eb34ad89e1b4935127f2d9cb0ebf9f15a4ab9a5006b372770136c
2026-01-07T00:00:00-05:00
Consistency-Aware Parameter-Preserving Knowledge Editing Framework for Multi-Hop Question Answering
arXiv:2509.18655v2 Announce Type: replace Abstract: Parameter-Preserving Knowledge Editing (PPKE) enables updating models with new information without retraining or parameter adjustment. Recent PPKE approaches used knowledge graphs (KG) to extend knowledge editing (KE) capabilities to multi-hop question answering (MHQA). However, these methods often lack consistency, leading to knowledge contamination, unstable updates, and retrieval behaviors that are misaligned with the intended edits. Such inconsistencies undermine the reliability of PPKE in multi-hop reasoning. We present CAPE-KG, Consistency-Aware Parameter-Preserving Editing with Knowledge Graphs, a novel consistency-aware framework for PPKE on MHQA. CAPE-KG ensures KG construction, update, and retrieval are always aligned with the requirements of the MHQA task, maintaining coherent reasoning over both unedited and edited knowledge. Extensive experiments on the MQuAKE benchmark show accuracy improvements in PPKE performance for MHQA, demonstrating the effectiveness of addressing consistency in PPKE.
https://arxiv.org/abs/2509.18655
Academic Papers
svg
77f5225a87b6daca9c405f0b15e0d784199bea53c7cd54f06e2dc139fa3815ef
2026-01-07T00:00:00-05:00
ImageNet-trained CNNs are not biased towards texture: Revisiting feature reliance through controlled suppression
arXiv:2509.20234v4 Announce Type: replace Abstract: The hypothesis that Convolutional Neural Networks (CNNs) are inherently texture-biased has shaped much of the discourse on feature use in deep learning. We revisit this hypothesis by examining limitations in the cue-conflict experiment by Geirhos et al. To address these limitations, we propose a domain-agnostic framework that quantifies feature reliance through systematic suppression of shape, texture, and color cues, avoiding the confounds of forced-choice conflicts. By evaluating humans and neural networks under controlled suppression conditions, we find that CNNs are not inherently texture-biased but predominantly rely on local shape features. Nonetheless, this reliance can be substantially mitigated through modern training strategies or architectures (ConvNeXt, ViTs). We further extend the analysis across computer vision, medical imaging, and remote sensing, revealing that reliance patterns differ systematically: computer vision models prioritize shape, medical imaging models emphasize color, and remote sensing models exhibit a stronger reliance on texture. Code is available at https://github.com/tomburgert/feature-reliance.
https://arxiv.org/abs/2509.20234
Academic Papers
svg
5ae2bf780df057a71cb8d5b5e0e2b5b8fddc9a3d081f06548e06df48e82fa78f
2026-01-07T00:00:00-05:00
Quantifying LLM Biases Across Instruction Boundary in Mixed Question Forms
arXiv:2509.20278v3 Announce Type: replace Abstract: Datasets annotated by Large Language Models (LLMs) are widely used nowadays; however, large-scale annotation often exhibits biases in low-quality datasets. For example, Multiple-Choice Question (MCQ) datasets commonly assume a single correct option, yet some questions may have no correct option or multiple correct options; likewise, true-or-false questions are supposed to be labeled either True or False, but the text can include unsolvable elements that should instead be labeled Unknown. Problems arise when low-quality datasets with mixed question forms cannot be identified. We refer to these exceptional label forms as Sparse Labels, and LLMs' ability to recognize datasets containing a Sparse Labels mixture is important. Since users may not know the composition of a dataset, their instructions can be biased. To study how different instruction settings affect LLMs' identification of Sparse Labels mixtures, we introduce the concept of the Instruction Boundary, which systematically characterizes the instruction settings that lead to biases. We propose BiasDetector, a diagnostic benchmark that systematically evaluates LLMs on datasets with mixed question forms under Instruction Boundary settings. Experiments show that users' instructions induce large biases on our benchmark, highlighting the need for LLM developers to recognize the risk that biased LLM annotation yields Sparse Labels mixtures, and for users to recognize how their instructions can hinder identifying them. Code, datasets and detailed implementations are available at https://github.com/ZpLing/Instruction-Boundary.
https://arxiv.org/abs/2509.20278
Academic Papers
svg
afbc6daaa581406656c73d9b307d690b8c560eb3a0b19d58e4ddc73e4524f896
2026-01-07T00:00:00-05:00
CaTS-Bench: Can Language Models Describe Time Series?
arXiv:2509.20823v2 Announce Type: replace Abstract: Time series captioning, the task of describing time series in natural language, requires numeric and temporal reasoning, trend interpretation, and contextual understanding. Existing benchmarks, however, often rely on fully synthetic or generic captions, and typically neglect metadata and visual representations. We introduce \textbf{CaTS-Bench}, a comprehensive benchmark for \textbf{C}ontext-\textbf{a}ware \textbf{T}ime \textbf{S}eries reasoning across $11$ diverse domains, centered on a gold-standard evaluation set of $1746$ human-rewritten captions that measure how effectively models translate numeric trends into immediately interpretable narratives. To address the scarcity of human-annotated data, we also propose a scalable pipeline for generating high-fidelity synthetic captions, the quality of which we validate. We evaluate leading Vision-Language Models on our benchmark, revealing that even proprietary models struggle to capture numeric nuances in temporal descriptions, while finetuning open-source models on synthetic data yields substantial performance gains. Finally, we release a diagnostic suite of $910$ multiple-choice questions and tailored numeric metrics to gauge time-series-specific reasoning capabilities, establishing CaTS-Bench as a reliable foundation for grounded, multimodal language generation in numeric domains.
https://arxiv.org/abs/2509.20823
Academic Papers
svg
223914da9c3c175984f9109f390ebc31f022e4128d496e7b8b60bb8158cde6da
2026-01-07T00:00:00-05:00
Think-on-Graph 3.0: Efficient and Adaptive LLM Reasoning on Heterogeneous Graphs via Multi-Agent Dual-Evolving Context Retrieval
arXiv:2509.21710v2 Announce Type: replace Abstract: Graph-based Retrieval-Augmented Generation (GraphRAG) has become an important paradigm for enhancing Large Language Models (LLMs) with external knowledge. However, existing approaches are constrained by their reliance on high-quality knowledge graphs: manually built ones are not scalable, while automatically extracted ones are limited by the performance of LLM extractors, especially when using smaller, locally deployed models. To address this, we introduce Think-on-Graph 3.0 (ToG-3), a novel framework featuring a Multi-Agent Context Evolution and Retrieval (MACER) mechanism. Its core contribution is the dynamic construction and iterative refinement of a Chunk-Triplets-Community heterogeneous graph index, powered by a Dual-Evolution process that adaptively evolves both the query and the retrieved sub-graph during reasoning. ToG-3 dynamically builds a targeted graph index tailored to the query, enabling precise evidence retrieval and reasoning even with lightweight LLMs. Extensive experiments demonstrate that ToG-3 outperforms compared baselines on both deep and broad reasoning benchmarks, and ablation studies confirm the efficacy of the components of the MACER framework. The source code is available at https://github.com/DataArcTech/ToG-3.
https://arxiv.org/abs/2509.21710
Academic Papers
svg
ca876265e1358b8dafed997e63970877f8050fdd8b157d1f23afa32034824d70
2026-01-07T00:00:00-05:00
D-Artemis: A Deliberative Cognitive Framework for Mobile GUI Multi-Agents
arXiv:2509.21799v2 Announce Type: replace Abstract: Graphical User Interface (GUI) agents aim to automate a wide spectrum of human tasks by emulating user interaction. Despite rapid advancements, current approaches are hindered by several critical challenges: a data bottleneck in end-to-end training, the high cost of delayed error detection, and the risk of contradictory guidance. Inspired by the human cognitive loop of Thinking, Alignment, and Reflection, in this paper we present D-Artemis, a novel deliberative framework. D-Artemis leverages a fine-grained, app-specific tip retrieval mechanism to inform its decision-making process. It also employs a proactive Pre-execution Alignment stage, where a Thought-Action Consistency (TAC) Check module and an Action Correction Agent (ACA) work in concert to mitigate the risk of execution failures. A post-execution Status Reflection Agent (SRA) completes the cognitive loop, enabling strategic learning from experience. Crucially, D-Artemis enhances the capabilities of general-purpose multimodal large language models (MLLMs) for GUI tasks without the need for training on complex trajectory datasets, demonstrating strong generalization. D-Artemis establishes new state-of-the-art (SOTA) results across both major benchmarks, achieving a 75.8% success rate on AndroidWorld and 96.8% on ScreenSpot-V2. Extensive ablation studies further demonstrate the significant contribution of each component to the framework.
https://arxiv.org/abs/2509.21799
Academic Papers
svg
50af53734f068c8b9eeae092aff364ddb635d683a662e21cb67ecc89b20b799b
2026-01-07T00:00:00-05:00
CMDAR: A Chinese Multi-scene Dynamic Audio Reasoning Benchmark with Diverse Challenges
arXiv:2509.22461v3 Announce Type: replace Abstract: The ability to reason from audio, including speech, environmental sounds, and music, is essential for AI agents to interact effectively in real-world scenarios. Existing benchmarks mainly focus on static or single-scene settings and English audio data and do not fully capture scenarios where multiple speakers, unfolding events, and heterogeneous audio sources interact. To address these challenges, we introduce CMDAR, a Chinese benchmark for evaluating models on complex, multi-scene, and dynamically evolving audio reasoning tasks. CMDAR comprises 3,000 carefully curated question-answer pairs linked to diverse audio clips, covering five categories of complex reasoning and spanning three question types. We benchmark 26 state-of-the-art audio language models on CMDAR and observe that they exhibit limitations in complex reasoning tasks. In CMDAR-main, Qwen2.5-Omni achieves 76.67% accuracy, whereas GPT-4o Audio reaches 68.47%. However, GPT-4o Audio substantially outperforms Qwen2.5-Omni on the more challenging multiple-choice-with-multiple-audios and open-ended tasks. We also provide detailed analyses and corresponding suggestions for the future development of large audio language models.
https://arxiv.org/abs/2509.22461
Academic Papers
svg
5646439cd91f949886c4cbcad2f73c33bf66694c8391cd5385ee0eef814af40c
2026-01-07T00:00:00-05:00
MARCH: Evaluating the Intersection of Ambiguity Interpretation and Multi-hop Inference
arXiv:2509.22750v2 Announce Type: replace Abstract: Real-world multi-hop QA is naturally linked with ambiguity, where a single query can trigger multiple reasoning paths that require independent resolution. Since ambiguity can occur at any stage, models must navigate layered uncertainty throughout the entire reasoning chain. Despite its prevalence in real-world user queries, previous benchmarks have primarily focused on single-hop ambiguity, leaving the complex interaction between multi-step inference and layered ambiguity underexplored. In this paper, we introduce \textbf{MARCH}, a benchmark for their intersection, with 2,209 multi-hop ambiguous questions curated via multi-LLM verification and validated by human annotation with strong agreement. Our experiments reveal that even state-of-the-art models struggle with MARCH, confirming that combining ambiguity resolution with multi-step reasoning is a significant challenge. To address this, we propose \textbf{CLARION}, a two-stage agentic framework that explicitly decouples ambiguity planning from evidence-driven reasoning, significantly outperforms existing approaches, and paves the way for robust reasoning systems.
https://arxiv.org/abs/2509.22750
Academic Papers
svg
ea91862af8ed0f81e343cf113d17f8ffd0acf5028f600349ede52ff44b430038
2026-01-07T00:00:00-05:00
Gradient Coupling: The Hidden Barrier to Generalization in Agentic Reinforcement Learning
arXiv:2509.23870v3 Announce Type: replace Abstract: Reinforcement learning (RL) is a dominant paradigm for training autonomous agents, yet these agents often exhibit poor generalization, failing to adapt to scenarios not seen during training. In this work, we identify a fundamental cause of this brittleness, a phenomenon which we term "gradient coupling." We hypothesize that in complex agentic tasks, the high similarity between distinct states leads to destructive interference between gradients. Specifically, a gradient update that reinforces an optimal action in one state can inadvertently increase the likelihood of a suboptimal action in a similar, yet different, state. To solve this, we propose a novel objective where the actor is trained to simultaneously function as a classifier that separates good and bad actions. This auxiliary pressure compels the model to learn disentangled embeddings for positive and negative actions, which mitigates negative gradient interference and improves generalization performance. Extensive experiments demonstrate the effectiveness of our method.
https://arxiv.org/abs/2509.23870
Academic Papers
svg
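The Gradient Coupling paper above proposes training the actor to double as a classifier that separates good from bad actions. The abstract gives no implementation details, so the following is a minimal NumPy sketch under assumed choices: a linear actor scoring state-action features, a policy-gradient-style term that raises good-action scores and lowers bad ones, plus a hypothetical logistic-classification auxiliary term weighted by `alpha`.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy (state, action) feature vectors: good actions have positive advantage.
phi_good = rng.normal(0.5, 1.0, size=(64, 8))
phi_bad = rng.normal(-0.5, 1.0, size=(64, 8))

w = np.zeros(8)     # linear actor parameters (assumed architecture)
alpha = 0.5         # weight of the auxiliary classification term (assumed)

for _ in range(200):
    # Policy-style ascent direction: raise good-action scores, lower bad ones.
    g_pg = phi_good.mean(axis=0) - phi_bad.mean(axis=0)
    # Auxiliary classifier: logistic log-likelihood with labels 1 (good) / 0 (bad).
    p_good = sigmoid(phi_good @ w)
    p_bad = sigmoid(phi_bad @ w)
    g_cls = ((1 - p_good)[:, None] * phi_good).mean(axis=0) \
          - (p_bad[:, None] * phi_bad).mean(axis=0)
    w += 0.1 * (g_pg + alpha * g_cls)

# After training, good actions should score higher than bad ones.
margin = (phi_good @ w).mean() - (phi_bad @ w).mean()
```

The classification term pushes the two action populations apart in score space, which is the disentanglement pressure the abstract describes; the exact loss used by the authors is not specified here.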
6d130d8de3fb374d6c93709a88cab441e53588b6f93f323bb952f4380ecf2005
2026-01-07T00:00:00-05:00
Go with Your Gut: Scaling Confidence for Autoregressive Image Generation
arXiv:2509.26376v2 Announce Type: replace Abstract: Test-time scaling (TTS) has demonstrated remarkable success in enhancing large language models, yet its application to next-token prediction (NTP) autoregressive (AR) image generation remains largely uncharted. Existing TTS approaches for visual AR (VAR), which rely on frequent partial decoding and external reward models, are ill-suited for NTP-based image generation due to the inherent incompleteness of intermediate decoding results. To bridge this gap, we introduce ScalingAR, the first TTS framework specifically designed for NTP-based AR image generation that eliminates the need for early decoding or auxiliary rewards. ScalingAR leverages token entropy as a novel signal in visual token generation and operates at two complementary scaling levels: (i) Profile Level, which streams a calibrated confidence state by fusing intrinsic and conditional signals; and (ii) Policy Level, which utilizes this state to adaptively terminate low-confidence trajectories and dynamically schedule guidance for phase-appropriate conditioning strength. Experiments on both general and compositional benchmarks show that ScalingAR (1) improves base models by 12.5% on GenEval and 15.2% on TIIF-Bench, (2) efficiently reduces visual token consumption by 62.0% while outperforming baselines, and (3) successfully enhances robustness, mitigating performance drops by 26.0% in challenging scenarios.
https://arxiv.org/abs/2509.26376
Academic Papers
svg
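The ScalingAR abstract above uses token entropy as a confidence signal and terminates low-confidence trajectories at the policy level. As a hedged illustration only (the threshold, averaging rule, and distributions below are all assumptions, not the paper's actual policy), token entropy and a simple keep/terminate rule can be sketched as:

```python
import numpy as np

def token_entropy(probs):
    """Shannon entropy (nats) of each per-step next-token distribution."""
    p = np.clip(probs, 1e-12, 1.0)
    return -(p * np.log(p)).sum(axis=-1)

# Hypothetical per-step distributions over a 4-token vocabulary
# for two 5-step trajectories: one peaked (confident), one flat.
confident = np.full((5, 4), 0.01)
confident[:, 0] = 0.97
uncertain = np.full((5, 4), 0.25)

THRESHOLD = 1.0  # assumed entropy cutoff in nats

def keep(traj_probs):
    """Keep a trajectory only if its mean token entropy is low."""
    return bool(token_entropy(traj_probs).mean() < THRESHOLD)
```

A flat distribution over 4 tokens has entropy ln 4 ≈ 1.39 nats, so the uncertain trajectory is terminated while the peaked one survives; the real system calibrates this state by fusing intrinsic and conditional signals rather than thresholding raw entropy.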
779f1da354a40dd23529bac764df6929219a2e17d5bda49830031f8185d4edea
2026-01-07T00:00:00-05:00
Framing Unionization on Facebook: Communication around Representation Elections in the United States
arXiv:2510.01757v2 Announce Type: replace Abstract: Digital media have become central to how labor unions communicate, organize, and sustain collective action. Yet little is known about how unions' online discourse relates to concrete outcomes such as representation elections. This study addresses the gap by combining National Labor Relations Board (NLRB) election data with 158k Facebook posts published by U.S. labor unions between 2015 and 2024. We focused on five discourse frames widely recognized in labor and social movement communication research: diagnostic (identifying problems), prognostic (proposing solutions), motivational (mobilizing action), community (emphasizing solidarity), and engagement (promoting social media interaction). Using a fine-tuned RoBERTa classifier, we systematically annotated unions' posts and analyzed patterns of frame usage around election events. Our findings showed that diagnostic and community frames dominated union communication overall, but that frame usage varied substantially across organizations. Greater use of diagnostic, prognostic, and community frames prior to an election was associated with higher odds of a successful outcome. After elections, framing patterns diverged depending on results: after wins, the use of prognostic and motivational frames decreased, whereas after losses, the use of prognostic and engagement frames increased. By examining variation in message-level framing, the study highlights how communication strategies correlate with organizational success, contributing open tools and data, and complementing prior research in understanding digital communication of unions and social movements.
https://arxiv.org/abs/2510.01757
Academic Papers
svg
56714f8ce3407704e97135becf2212c494803324e358c69acba28ee348a99cb7
2026-01-07T00:00:00-05:00
Universal Dynamic Regret and Constraint Violation Bounds for Constrained Online Convex Optimization
arXiv:2510.01867v2 Announce Type: replace Abstract: We consider a generalization of the celebrated Online Convex Optimization (OCO) framework with adversarial online constraints. In this problem, an online learner interacts with an adversary sequentially over multiple rounds. At the beginning of each round, the learner chooses an action from a convex decision set. After that, the adversary reveals a convex cost function and a convex constraint function. The goal of the learner is to minimize the cumulative cost while satisfying the constraints as tightly as possible. We present two efficient algorithms with simple modular structures that give universal dynamic regret and cumulative constraint violation bounds, improving upon state-of-the-art results. While the first algorithm, which achieves the optimal regret bound, involves projection onto the constraint sets, the second algorithm is projection-free and achieves better violation bounds in rapidly varying environments. Our results hold in the most general case when both the cost and constraint functions are chosen arbitrarily, and the constraint functions need not contain any fixed common feasible point. We establish these results by introducing a general framework that reduces the constrained learning problem to an instance of the standard OCO problem with specially constructed surrogate cost functions.
https://arxiv.org/abs/2510.01867
Academic Papers
svg
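The constrained-OCO abstract above reduces the constrained problem to standard OCO via specially constructed surrogate cost functions. The paper's actual construction is not reproduced here; the sketch below is a generic primal-dual baseline under assumed specifics (a scalar decision in [-1, 1], quadratic costs, a linear constraint, and a Lagrangian-style surrogate f_t(x) + λ_t·g_t(x)) that illustrates the regret-vs-violation bookkeeping such algorithms maintain.

```python
import numpy as np

rng = np.random.default_rng(1)
T, eta = 500, 0.05
x, lam = 0.0, 0.0          # primal decision and dual variable
cum_cost, cum_violation = 0.0, 0.0

for t in range(T):
    c = rng.uniform(0.0, 1.0)   # cost target revealed after acting
    b = 0.5                     # constraint g_t(x) = x - b <= 0
    cum_cost += (x - c) ** 2
    viol = max(x - b, 0.0)
    cum_violation += viol
    # Subgradient of the surrogate f_t(x) + lam * max(x - b, 0).
    grad = 2.0 * (x - c) + (lam if x > b else 0.0)
    x = min(max(x - eta * grad, -1.0), 1.0)   # projected primal descent
    lam = max(lam + eta * viol, 0.0)          # dual ascent on violation

avg_violation = cum_violation / T
```

With the dual variable penalizing past violations, the per-round violation stays small even though each constraint is only revealed after the action; the paper's projection-free variant and its universal dynamic-regret guarantees go well beyond this toy loop.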
45477e1deeab57f7dd10aa0dc4fc82440a3d27399a0b357ea38c9929a878a45a
2026-01-07T00:00:00-05:00
Style over Story: Measuring LLM Narrative Preferences via Structured Selection
arXiv:2510.02025v3 Announce Type: replace Abstract: We introduce a constraint-selection-based experiment design for measuring narrative preferences of Large Language Models (LLMs). This design offers an interpretable lens on LLMs' narrative behavior. We developed a library of 200 narratology-grounded constraints and prompted selections from six LLMs under three different instruction types: basic, quality-focused, and creativity-focused. Findings demonstrate that models consistently prioritize Style over narrative content elements like Event, Character, and Setting. Style preferences remain stable across models and instruction types, whereas content elements show cross-model divergence and instructional sensitivity. These results suggest that LLMs have latent narrative preferences, which should inform how the NLP community evaluates and deploys models in creative domains.
https://arxiv.org/abs/2510.02025
Academic Papers
svg
7d464613ada37d07f8be24952b3b8c978603ddd8ba2b3dd4a900af348d5598d3
2026-01-07T00:00:00-05:00
Agentic Additive Manufacturing Alloy Evaluation
arXiv:2510.02567v3 Announce Type: replace Abstract: Agentic systems enable the intelligent use of research tooling, augmenting a researcher's ability to investigate and propose novel solutions to existing problems. Within Additive Manufacturing (AM), alloy selection and evaluation remains a complex challenge, often requiring expertise in the various domains of materials science, thermodynamic simulations, and experimental analysis. Large Language Model (LLM)-enabled agents can facilitate this endeavor by utilizing their extensive knowledge base to dispatch tool calls via Model Context Protocol (MCP) to perform actions such as thermophysical property diagram calculations and lack-of-fusion process map generation. In addition, the multi-agent system can effectively reason through complex user prompts and provide analysis on the lack-of-fusion process window of common alloys such as SS316L and IN718, along with proposed composition variants of known alloys. These agents can dynamically adjust their task trajectory to the outcomes of tool call results, effectively enabling autonomous decision-making in practical environments. This work aims to showcase the benefits of adopting an LLM-enabled multi-agent system to automate and accelerate the task of evaluating proposed additive manufacturing alloys, both novel and known.
https://arxiv.org/abs/2510.02567
Academic Papers
svg
1e98e8a34f9b8028d3dd9371893fdfa260d2397bd11671c3788bf56d8b1093f3
2026-01-07T00:00:00-05:00
The Artificial Intelligence Cognitive Examination: A Survey on the Evolution of Multimodal Evaluation from Recognition to Reasoning
arXiv:2510.04141v2 Announce Type: replace Abstract: This survey paper chronicles the evolution of evaluation in multimodal artificial intelligence (AI), framing it as a progression of increasingly sophisticated "cognitive examinations." We argue that the field is undergoing a paradigm shift, moving from simple recognition tasks that test "what" a model sees, to complex reasoning benchmarks that probe "why" and "how" it understands. This evolution is driven by the saturation of older benchmarks, where high performance often masks fundamental weaknesses. We chart the journey from the foundational "knowledge tests" of the ImageNet era to the "applied logic and comprehension" exams such as GQA and Visual Commonsense Reasoning (VCR), which were designed specifically to diagnose systemic flaws such as shortcut learning and failures in compositional generalization. We then survey the current frontier of "expert-level integration" benchmarks (e.g., MMBench, SEED-Bench, MMMU) designed for today's powerful multimodal large language models (MLLMs), which increasingly evaluate the reasoning process itself. Finally, we explore the uncharted territories of evaluating abstract, creative, and social intelligence. We conclude that the narrative of AI evaluation is not merely a history of datasets, but a continuous, adversarial process of designing better examinations that, in turn, redefine our goals for creating truly intelligent systems.
https://arxiv.org/abs/2510.04141
Academic Papers
svg
3461f03db9c500d7839e3ace6ae1ec71a797b60268eb39df2e2bddb6cdc89244
2026-01-07T00:00:00-05:00
Self-Filtered Distillation with LLMs-generated Trust Indicators for Reliable Patent Classification
arXiv:2510.05431v3 Announce Type: replace Abstract: Large language models (LLMs) increasingly generate natural language rationales to enhance interpretability, but these often contain logical errors, label mismatches, and domain-specific misalignments. Directly using such rationales as supervision risks propagating noise and undermining training stability. To address this challenge, we introduce Self-Filtered Distillation, a framework tailored for patent classification that treats LLM-generated rationales as trust signals rather than ground-truth supervision. The framework employs selective distillation guided by three unsupervised trust metrics: (1) Self-Consistency, which measures the stability of LLM-generated rationales across multiple generations; (2) Class Entailment Alignment, which assesses semantic coherence with patent-specific class definitions; and (3) LLM Agreement Scoring, which validates rationale-label plausibility. These metrics are integrated into a unified trust score that primarily weights training samples while optionally filtering out extremely low-trust cases, enabling reasoning-aware supervision. Experiments on the USPTO-2M dataset show that our method consistently outperforms label-based learning and conventional distillation in accuracy, stability, and interpretability across diverse student architectures, establishing a reliable paradigm for leveraging reasoning-aware trust indicators in patent analytics.
https://arxiv.org/abs/2510.05431
Academic Papers
svg
af3915f0256a674fe222d2642131a0d05458c1c52d84555b1e2327cc0bd090aa
2026-01-07T00:00:00-05:00
When Identity Skews Debate: Anonymization for Bias-Reduced Multi-Agent Reasoning
arXiv:2510.07517v3 Announce Type: replace Abstract: Multi-agent debate (MAD) aims to improve large language model (LLM) reasoning by letting multiple agents exchange answers and then aggregate their opinions. Yet recent studies reveal that agents are not neutral: they are prone to identity-driven sycophancy and self-bias, uncritically adopting a peer's view or stubbornly adhering to their own prior output, undermining the reliability of debate. In this work, we present the first principled framework that jointly addresses sycophancy and self-bias to mitigate and quantify identity bias in MAD. First, we formalize the debate dynamics as an identity-weighted Bayesian update process. Second, we propose response anonymization: by removing identity markers from prompts, agents cannot distinguish "self" from "peer", which forces equal weights on agent identity, thereby reducing bias and improving trustworthiness. Third, we define the Identity Bias Coefficient (IBC), a principled bias metric that measures an agent's tendency to follow its peer versus itself. Empirical studies across multiple models and benchmarks confirm that identity bias is widespread, with sycophancy far more common than self-bias. Our findings highlight the need to ensure that MAD systems reason based on content rather than identity. Code is released at https://github.com/deeplearning-wisc/MAD-identity-bias.
https://arxiv.org/abs/2510.07517
Academic Papers
svg
df2878713f557235d9e2fea193a15e16b84f423533c91eb16ee02165dad29d92
2026-01-07T00:00:00-05:00
SyncLipMAE: Contrastive Masked Pretraining for Audio-Visual Talking-Face Representation
arXiv:2510.10069v2 Announce Type: replace Abstract: We introduce SyncLipMAE, a self-supervised pretraining framework for talking-face video that learns synchronization-aware and transferable facial dynamics from unlabeled audio-visual streams. Our approach couples masked visual modeling with cross-modal contrastive alignment and employs three per-frame prompt tokens that explicitly encode the essential factors of a talking-face frame - identity, vocal motion (speech-synchronized facial dynamics), and ambient motion (audio-agnostic movements such as blinks and head pose). The contrastive objective uses time-aligned vocal-motion and audio tokens as positives and misaligned pairs as negatives, driving both modalities into a shared embedding space and yielding token-level audio-visual stream synchronization. After pretraining, the aligned audio tokens together with the visual prompt tokens (identity, vocal motion, ambient motion) form a unified interface for four disparate downstream settings: (i) audio-visual stream synchronization; (ii) facial emotion and head/face action recognition; (iii) visual speech recognition; and (iv) visual dubbing, for which we enable indistinguishable audio- or video-driven control within a single model. Across four task families that require distinct capabilities, SyncLipMAE achieves state-of-the-art results, underscoring the effectiveness of synchronization-aware, factorized self-supervised pretraining.
https://arxiv.org/abs/2510.10069
Academic Papers
svg
14a8af8e6a415707ed614ba6660744de642dc4d1052486bf5a498adc455d09c9
2026-01-07T00:00:00-05:00
What Makes Looped Transformers Perform Better Than Non-Recursive Ones
arXiv:2510.10089v3 Announce Type: replace Abstract: While looped transformers (termed Looped-Attn) often outperform standard transformers (termed Single-Attn) on complex reasoning tasks, the mechanism for this advantage remains underexplored. In this paper, we explain this phenomenon through the lens of loss landscape geometry, inspired by empirical observations of their distinct dynamics at both sample and Hessian levels. To formalize this, we extend the River-Valley landscape model by distinguishing between U-shaped valleys (flat) and V-shaped valleys (steep). Based on empirical observations, we conjecture that the recursive architecture of Looped-Attn induces a landscape-level inductive bias towards River-V-Valley. This inductive bias suggests better loss convergence along the river due to valley hopping, and further encourages learning complex patterns, compared to the River-U-Valley induced by Single-Attn. Building on this insight, we propose SHIFT (Staged HIerarchical Framework for Progressive Training), a principled training strategy that accelerates the training process of Looped-Attn while achieving comparable performance.
https://arxiv.org/abs/2510.10089
Academic Papers
svg
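The looped-transformer abstract above contrasts a recursive block reused K times (Looped-Attn) with a stack of K distinct blocks (Single-Attn). As a minimal NumPy sketch, with a single-head attention block and all shapes assumed for illustration, the structural difference is just weight tying across iterations, which cuts the parameter count by a factor of K:

```python
import numpy as np

rng = np.random.default_rng(0)
d, K, n = 16, 6, 10  # model dim, loop count, sequence length (assumed)

def attn_block(x, Wq, Wk, Wv):
    """One residual self-attention block (single head, no MLP; a sketch)."""
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    scores = q @ k.T / np.sqrt(d)
    scores -= scores.max(axis=-1, keepdims=True)   # numerically stable softmax
    a = np.exp(scores)
    a /= a.sum(axis=-1, keepdims=True)
    return x + a @ v

x = rng.normal(size=(n, d))

# Looped-Attn: the SAME weights are applied K times.
Wq, Wk, Wv = (rng.normal(scale=0.1, size=(d, d)) for _ in range(3))
h = x
for _ in range(K):
    h = attn_block(h, Wq, Wk, Wv)

looped_params = 3 * d * d       # one shared block
stacked_params = K * 3 * d * d  # K distinct blocks (Single-Attn depth-K stack)
```

The paper's claim is about the loss-landscape bias this recursion induces, not the parameter savings; the sketch only makes the architectural distinction concrete.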
1ded85cce0a6560c0e1ab4ee9ea1ad8373d4c7d5a6fc79743d72e1c84996345d
2026-01-07T00:00:00-05:00
Do You Get the Hint? Benchmarking LLMs on the Board Game Concept
arXiv:2510.13271v2 Announce Type: replace Abstract: Large language models (LLMs) have achieved striking successes on many benchmarks, yet recent studies continue to expose fundamental weaknesses. In this paper, we introduce Concept, a simple word-guessing board game, as a benchmark for probing abductive reasoning. Our results show that this game, easily solved by humans (with a success rate of over 90\%), is still very challenging for state-of-the-art LLMs (no model exceeds a 40\% success rate). Specifically, we observe that LLMs struggle with interpreting other players' strategic intents, and with correcting initial hypotheses given sequential information updates. In addition, we extend the evaluation across multiple languages, and find that LLM performance drops further in lower-resource languages (Dutch, French, and Spanish) compared to English.
https://arxiv.org/abs/2510.13271
Academic Papers
svg
de01f4385ccf18b86fbd2bc1ddd745667bcc4d5a3b8afd26bd23e84918bd9cb7
2026-01-07T00:00:00-05:00
Exploratory Causal Inference in SAEnce
arXiv:2510.14073v2 Announce Type: replace Abstract: Randomized Controlled Trials are one of the pillars of science; nevertheless, they rely on hand-crafted hypotheses and expensive analysis. Such constraints prevent causal effect estimation at scale, potentially anchoring on popular yet incomplete hypotheses. We propose to discover the unknown effects of a treatment directly from data. For this, we turn unstructured data from a trial into meaningful representations via pretrained foundation models and interpret them via a sparse autoencoder. However, discovering significant causal effects at the neural level is not trivial due to multiple-testing issues and effects entanglement. To address these challenges, we introduce Neural Effect Search, a novel recursive procedure solving both issues by progressive stratification. After assessing the robustness of our algorithm on semi-synthetic experiments, we showcase, in the context of experimental ecology, the first successful unsupervised causal effect identification on a real-world scientific trial.
https://arxiv.org/abs/2510.14073
Academic Papers
svg
1534e6a11c5af21c51e6382f3b7595f9ed7507dd6721571678e8165d3381d089
2026-01-07T00:00:00-05:00
CodeEvolve: an open source evolutionary coding agent for algorithm discovery and optimization
arXiv:2510.14150v3 Announce Type: replace Abstract: We introduce CodeEvolve, an open-source framework that combines large language models (LLMs) with evolutionary search to synthesize high-performing algorithmic solutions. CodeEvolve couples an islands-based genetic algorithm with modular LLM orchestration, using execution feedback and task-specific metrics to guide selection and variation. Exploration and exploitation are balanced through context-aware recombination, adaptive meta-prompting, and targeted refinement of promising solutions. We evaluate CodeEvolve on benchmarks previously used to assess Google DeepMind's AlphaEvolve, showing superior performance on several tasks and competitive results overall. Notably, open-weight models often match or exceed closed-source baselines at a fraction of the compute cost. We provide extensive ablations analyzing the contribution of each component and release our framework and experimental results at https://github.com/inter-co/science-codeevolve.
https://arxiv.org/abs/2510.14150
Academic Papers
svg
1f04c2e54c51e1a76a07cdbf43bd7931e72ff89c8060dd0c48df5d4e676e45eb
2026-01-07T00:00:00-05:00
Iterative Topic Taxonomy Induction with LLMs: A Case Study of Electoral Advertising
arXiv:2510.15125v2 Announce Type: replace Abstract: Social media platforms play a pivotal role in shaping political discourse, but analyzing their vast and rapidly evolving content remains a major challenge. We introduce an end-to-end framework for automatically inducing an interpretable topic taxonomy from unlabeled text corpora. By combining unsupervised clustering with prompt-based inference, our method leverages large language models (LLMs) to iteratively construct a taxonomy without requiring seed sets (predefined labels) or domain expertise. We validate the framework through a study of political advertising ahead of the 2024 U.S. presidential election. The induced taxonomy yields semantically rich topic labels and supports downstream analyses, including moral framing, in this setting. Results suggest that structured, iterative labeling yields more consistent and interpretable topic labels than existing approaches under human evaluation, and is practical for analyzing large-scale political advertising data.
https://arxiv.org/abs/2510.15125
Academic Papers
svg
2d19149224278970642c77a048bf774fe2e2f11ff172b245f165f5af66973d63
2026-01-07T00:00:00-05:00
ELMM: Efficient Lightweight Multimodal Large Language Models for Multimodal Knowledge Graph Completion
arXiv:2510.16753v2 Announce Type: replace Abstract: Multimodal Knowledge Graphs (MKGs) extend traditional knowledge graphs by incorporating visual and textual modalities, enabling richer and more expressive entity representations. However, existing MKGs often suffer from incompleteness, which hinders their effectiveness in downstream tasks. Therefore, the multimodal knowledge graph completion (MKGC) task is receiving increasing attention. While large language models (LLMs) have shown promise for knowledge graph completion (KGC), their application to the multimodal setting remains underexplored. Moreover, applying Multimodal Large Language Models (MLLMs) to the task of MKGC introduces significant challenges: (1) the large number of image tokens per entity leads to semantic noise and modality conflicts, and (2) the high computational cost of processing large token inputs. To address these issues, we propose Efficient Lightweight Multimodal Large Language Models (ELMM) for MKGC. ELMM proposes a Multi-view Visual Token Compressor (MVTC) based on a multi-head attention mechanism, which adaptively compresses image tokens from both textual and visual views, thereby effectively reducing redundancy while retaining necessary information and avoiding modality conflicts. Additionally, we design an attention pruning strategy to remove redundant attention layers from MLLMs, thereby significantly reducing the inference cost. We further introduce a linear projection to compensate for the performance degradation caused by pruning. Extensive experiments on four benchmark datasets demonstrate that ELMM achieves state-of-the-art performance.
https://arxiv.org/abs/2510.16753
Academic Papers
svg
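The ELMM abstract above compresses many image tokens into a few via an attention-based compressor (MVTC). The paper's multi-view, multi-head design is not specified in the abstract; the sketch below shows only the generic single-head idea it builds on, cross-attention pooling with k learned queries, with every shape and name assumed for illustration:

```python
import numpy as np

def softmax(z):
    """Row-wise softmax with max-subtraction for numerical stability."""
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(0)
d, n_img, k = 32, 196, 8   # token dim, image tokens in, compressed tokens out

img_tokens = rng.normal(size=(n_img, d))
queries = rng.normal(size=(k, d))   # learned compression queries (hypothetical)

# Each query attends over all image tokens and emits one summary token,
# shrinking the visual input from n_img tokens to k tokens.
attn = softmax(queries @ img_tokens.T / np.sqrt(d))
compressed = attn @ img_tokens
```

Here 196 image tokens become 8, which is the kind of reduction that shortens the LLM's input; ELMM's actual compressor additionally conditions on the textual view and uses multiple heads.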
4b1f9336be9500228e346ba076d3309b92f6a27ed246e3279c52f85dfd60b1cc
2026-01-07T00:00:00-05:00
DDBot: Differentiable Physics-based Digging Robot for Unknown Granular Materials
arXiv:2510.17335v4 Announce Type: replace Abstract: Automating the manipulation of granular materials poses significant challenges due to complex contact dynamics, unpredictable material properties, and intricate system states. Existing approaches often fail to achieve efficiency and accuracy in such tasks. To fill the research gap, this article studies the small-scale and high-precision granular material digging task with unknown physical properties. A key scientific problem addressed is the feasibility of applying first-order gradient-based optimization to complex differentiable granular material simulation and overcoming associated numerical instability. A new framework, named differentiable digging robot (DDBot), is proposed to manipulate granular materials, including sand and soil. Specifically, we equip DDBot with a differentiable physics-based simulator, tailored for granular material manipulation, powered by GPU-accelerated parallel computing and automatic differentiation. DDBot can perform efficient differentiable system identification and high-precision digging skill optimization for unknown granular materials, which is enabled by a differentiable skill-to-action mapping, a task-oriented demonstration method, gradient clipping and line search-based gradient descent. Experimental results show that DDBot can efficiently (converge within 5 to 20 minutes) identify unknown granular material dynamics and optimize digging skills, with high-precision results in zero-shot real-world deployments, highlighting its practicality. Benchmark results against state-of-the-art baselines also confirm the robustness and efficiency of DDBot in such digging tasks.
https://arxiv.org/abs/2510.17335
Academic Papers
svg
73eafc0bb8b550c5f904194c41e3212dc0e7b1a87ad4d22c90bba51bc2b0e877
2026-01-07T00:00:00-05:00
Qomhra: A Bilingual Irish and English Large Language Model
arXiv:2510.17652v2 Announce Type: replace Abstract: Large language model (LLM) research and development has overwhelmingly focused on the world's major languages, leading to under-representation of low-resource languages such as Irish. This paper introduces \textbf{Qomhr\'a}, a bilingual Irish and English LLM, developed under extremely low-resource constraints. A complete pipeline is outlined spanning bilingual continued pre-training, instruction tuning, and the synthesis of human preference data for future alignment training. We focus on the lack of scalable methods to create human preference data by proposing a novel method to synthesise such data by prompting an LLM to generate ``accepted'' and ``rejected'' responses, which we validate as aligning with L1 Irish speakers. To select an LLM for synthesis, we evaluate the top closed-weight LLMs for Irish language generation performance. Gemini-2.5-Pro is ranked highest by L1 and L2 Irish-speakers, diverging from LLM-as-a-judge ratings, indicating a misalignment between current LLMs and the Irish-language community. Subsequently, we leverage Gemini-2.5-Pro to translate a large scale English-language instruction tuning dataset to Irish and to synthesise a first-of-its-kind Irish-language human preference dataset. We comprehensively evaluate Qomhr\'a across several benchmarks, testing translation, gender understanding, topic identification, and world knowledge; these evaluations show gains of up to 29\% in Irish and 44\% in English compared to the existing open-source Irish LLM baseline, UCCIX. The results of our framework provide insight and guidance to developing LLMs for both Irish and other low-resource languages.
https://arxiv.org/abs/2510.17652
Academic Papers
svg
6ad3da0db428eacadfd082077e1084ab983caf0c3df49b6f09dc2cb693ea201e
2026-01-07T00:00:00-05:00
Compositional Monte Carlo Tree Diffusion for Extendable Planning
arXiv:2510.21361v2 Announce Type: replace Abstract: Monte Carlo Tree Diffusion (MCTD) integrates diffusion models with structured tree search to enable effective trajectory exploration through stepwise reasoning. However, MCTD remains fundamentally limited by training trajectory lengths. While periodic replanning allows plan concatenation for longer plan generation, the planning process remains locally confined, as MCTD searches within individual trajectories without access to global context. We propose Compositional Monte Carlo Tree Diffusion (C-MCTD), a framework that elevates planning from individual trajectory optimization to reasoning over complete plan compositions. C-MCTD introduces three complementary components: (1) Online Composer, which performs globally-aware planning by searching across entire plan compositions; (2) Distributed Composer, which reduces search complexity through parallel exploration from multiple starting points; and (3) Preplan Composer, which accelerates inference by leveraging cached plan graphs.
https://arxiv.org/abs/2510.21361
Academic Papers
svg
91ace533db3330924577679bab5d6b1c6dd9ab515b100b9904a51680056546c3
2026-01-07T00:00:00-05:00
Leveraging Design-Aware Context in Large Language Models for Code Comment Generation
arXiv:2510.22338v2 Announce Type: replace Abstract: Comments are essential to the flow of code development. As programming becomes increasingly common, novice coders are producing a significant number of codebases. Due to the lack of commenting standards, their comments are often unhelpful and increase the time required to maintain the code. This study investigates the usefulness of large language models (LLMs) in these cases for generating potentially better comments. It focuses on the feasibility of using design documents as context for the LLMs to generate more useful comments, since design documents are often consulted by maintainers to understand code when comments do not suffice.
https://arxiv.org/abs/2510.22338
Academic Papers
svg
e5fbe758bf333a76a52997de7d6c2d55632a330feb6692410360187200582f39
2026-01-07T00:00:00-05:00
Block-Diagonal LoRA for Eliminating Communication Overhead in Tensor Parallel LoRA Serving
arXiv:2510.23346v2 Announce Type: replace Abstract: When serving a single base LLM with several different LoRA adapters simultaneously, the adapters cannot simply be merged with the base model's weights as the adapter swapping would create overhead and requests using different adapters could not be batched. Rather, the LoRA computations have to be separated from the base LLM computations, and in a multi-device setup the LoRA adapters can be sharded in a way that is well aligned with the base model's tensor parallel execution, as proposed in S-LoRA. However, the S-LoRA sharding strategy encounters some communication overhead, which may be small in theory, but can be large in practice. In this paper, we propose to constrain certain LoRA factors to be block-diagonal, which allows for an alternative way of sharding LoRA adapters that does not require any additional communication for the LoRA computations. We demonstrate in extensive experiments that our block-diagonal LoRA approach is similarly parameter efficient as standard LoRA (i.e., for a similar number of parameters it achieves similar downstream performance) and that it leads to significant end-to-end speed-up over S-LoRA. For example, when serving on eight A100 GPUs, we observe up to 1.79x (1.23x) end-to-end speed-up with 0.87x (1.74x) the number of adapter parameters for Llama-3.1-70B, and up to 1.63x (1.3x) end-to-end speed-up with 0.86x (1.73x) the number of adapter parameters for Llama-3.1-8B.
https://arxiv.org/abs/2510.23346
Academic Papers
svg
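The communication-free sharding claimed in the block-diagonal LoRA abstract above can be checked numerically: if the first LoRA factor is block-diagonal and aligned with a row-parallel partition, each device computes a local partial sum, and combining the partials (which the base layer's existing all-reduce does anyway) matches the full computation. Dimensions and layout here are illustrative, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(0)
p, d, r, dout = 4, 16, 8, 12          # devices, input dim, LoRA rank, output dim
x = rng.normal(size=(1, d))

# Block-diagonal A: device i owns a (d/p x r/p) block; B is row-sharded.
A_blocks = [rng.normal(size=(d // p, r // p)) for _ in range(p)]
B = rng.normal(size=(r, dout))
A = np.zeros((d, r))
for i, Ai in enumerate(A_blocks):
    A[i*(d//p):(i+1)*(d//p), i*(r//p):(i+1)*(r//p)] = Ai

# Row-parallel layer: device i holds its slice of x and its rows of B.
partials = []
for i in range(p):
    x_i = x[:, i*(d//p):(i+1)*(d//p)]
    z_i = x_i @ A_blocks[i]                       # purely local, no communication
    B_i = B[i*(r//p):(i+1)*(r//p), :]
    partials.append(z_i @ B_i)                    # per-device partial sum

# The partials ride along the all-reduce the base layer performs anyway.
y_sharded = sum(partials)
y_full = x @ A @ B
print(np.allclose(y_sharded, y_full))             # True
```

Because A is block-diagonal, `x @ A` is just the concatenation of the local products `x_i @ A_i`, so no extra gather or reduce is needed beyond what the base model already does.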
c3c5dcbe4d010e35549c710c1d596b4b5499a5265f630a4fe38a207380437e38
2026-01-07T00:00:00-05:00
ReCode: Unify Plan and Action for Universal Granularity Control
arXiv:2510.23564v4 Announce Type: replace Abstract: Real-world tasks require decisions at varying granularities, and humans excel at this by leveraging a unified cognitive representation where planning is fundamentally understood as a high-level form of action. However, current Large Language Model (LLM)-based agents lack this crucial capability to operate fluidly across decision granularities. This limitation stems from existing paradigms that enforce a rigid separation between high-level planning and low-level action, which impairs dynamic adaptability and limits generalization. We propose ReCode (Recursive Code Generation), a novel paradigm that addresses this limitation by unifying planning and action within a single code representation. In this representation, ReCode treats high-level plans as abstract placeholder functions, which the agent then recursively decomposes into finer-grained sub-functions until reaching primitive actions. This recursive approach dissolves the rigid boundary between plan and action, enabling the agent to dynamically control its decision granularity. Furthermore, the recursive structure inherently generates rich, multi-granularity training data, enabling models to learn hierarchical decision-making processes. Extensive experiments show ReCode significantly surpasses advanced baselines in inference performance and demonstrates exceptional data efficiency in training, validating our core insight that unifying planning and action through recursive code generation is a powerful and effective approach to achieving universal granularity control. The code is available at https://github.com/FoundationAgents/ReCode.
https://arxiv.org/abs/2510.23564
Academic Papers
svg
9a0be9455fb114392b3b4ccbef14c3c5fcf1fbc58f8fe30484bd78e57573128e
2026-01-07T00:00:00-05:00
Adaptive Data Collection for Latin-American Community-sourced Evaluation of Stereotypes (LACES)
arXiv:2510.24958v2 Announce Type: replace Abstract: The evaluation of societal biases in NLP models is critically hindered by a geo-cultural gap. This leaves regions such as Latin America severely underserved, making it impossible to adequately assess or mitigate the perpetuation of harmful regional stereotypes in language technologies. This paper presents LACES, a stereotype association dataset for 15 Latin American countries. The dataset includes 4,789 stereotype associations manually created and annotated by 83 participants, and was developed through targeted community partnerships across Latin America. Additionally, we propose a novel adaptive data collection methodology that uniquely integrates the sourcing of new stereotype entries and the validation of existing data within a single, unified workflow. This approach yields a resource with more unique stereotypes than previous static collection methods, enabling more efficient stereotype collection. The paper further supports the quality of LACES by demonstrating the reduced efficacy of debiasing methods on this dataset in comparison to existing popular stereotype benchmarks.
https://arxiv.org/abs/2510.24958
Academic Papers
svg
2c6b59fa866da0764acad24b832cef63cefd8d0408adc74d18bb0aecce63484b
2026-01-07T00:00:00-05:00
Do Not Step Into the Same River Twice: Learning to Reason from Trial and Error
arXiv:2510.26109v3 Announce Type: replace Abstract: Reinforcement learning with verifiable rewards (RLVR) has significantly boosted the reasoning capability of language models (LMs) recently. However, existing RLVR approaches merely train LMs on their own generated on-policy responses and are constrained by the initial capability of LMs, and are thus prone to exploration stagnation, in which LMs fail to solve more training problems and cannot further learn from the training data. Some work tries to address this by leveraging off-policy solutions to training problems, but relies on external expert guidance that is limited in availability and scalability. In this work, we propose LTE (Learning to reason from Trial and Error), an approach that provides LMs with hints drawn from their own previous mistakes, requiring no external expert guidance. Experiments validate the effectiveness of LTE, which outperforms vanilla group relative policy optimization (GRPO) by 5.02 in Pass@1 and 9.96 in Pass@k on average across six mathematical reasoning benchmarks for Qwen3-8B-Base and even performs better than methods that require external gold solutions as guidance after aligning the experimental setup. Further analysis confirms that LTE successfully mitigates exploration stagnation and enhances both exploitation and exploration during training. Our code is available at https://anonymous.4open.science/r/Learning-from-Trial-and-Error.
https://arxiv.org/abs/2510.26109
Academic Papers
svg
48c8ee45209ddd61467c570a992a935a8f6265ca3055678f1cecb28b0680244c
2026-01-07T00:00:00-05:00
Cut-free Deductive System for Continuous Intuitionistic Logic
arXiv:2510.26849v2 Announce Type: replace Abstract: We introduce and develop propositional continuous intuitionistic logic and propositional continuous affine logic via complete algebraic semantics. Our approach centres on AC-algebras, which are algebras $USC(\mathcal{L})$ of sup-preserving functions from $[0,1]$ to an integral commutative residuated complete lattice $\mathcal{L}$ (in the intuitionistic case, $\mathcal{L}$ is a locale). We give an algebraic axiomatisation of AC-algebras in the language of continuous logic and prove, using the MacNeille completion, that every Archimedean model embeds into some AC-algebra. We also show that (i) $USC(\mathcal{L})$ satisfies $v \dot + v = 2v$ exactly when $\mathcal{L}$ is a locale, (ii) involutiveness of negation in $USC(\mathcal{L})$ corresponds to that in $\mathcal{L}$, and (iii) adding those conditions recovers classical continuous logic. For each variant (affine, intuitionistic, involutive, classical) we provide a sequent-style deductive system and prove completeness and cut admissibility. This yields the first sequent-style formulation of classical continuous logic enjoying cut admissibility.
https://arxiv.org/abs/2510.26849
Academic Papers
svg
6c6e3ee7ad158f42aaa2852cdfea68e8f4b0f97a150bcb9ccd9842104d0b9669
2026-01-07T00:00:00-05:00
Group-Sensitive Offline Contextual Bandits
arXiv:2510.27123v2 Announce Type: replace Abstract: Offline contextual bandits allow one to learn policies from historical/offline data without requiring online interaction. However, offline policy optimization that maximizes overall expected rewards can unintentionally amplify the reward disparities across groups. As a result, some groups might benefit more than others from the learned policy, raising concerns about fairness, especially when the resources are limited. In this paper, we study a group-sensitive fairness constraint in offline contextual bandits, reducing group-wise reward disparities that may arise during policy learning. We tackle the following common-parity requirements: the reward disparity is constrained within some user-defined threshold or the reward disparity should be minimized during policy optimization. We propose a constrained offline policy optimization framework by introducing group-wise reward disparity constraints into an off-policy gradient-based optimization procedure. To improve the estimation of the group-wise reward disparity during training, we employ a doubly robust estimator and further provide a convergence guarantee for policy optimization. Empirical results in synthetic and real-world datasets demonstrate that our method effectively reduces reward disparities while maintaining competitive overall performance.
https://arxiv.org/abs/2510.27123
Academic Papers
svg
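The doubly robust estimator that the group-sensitive bandits abstract above relies on for estimating group-wise rewards has a compact standard form. A toy sketch of that building block alone (the group-disparity constraint and the policy-gradient machinery are not shown; the setup, action set, and reward model here are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20000
mu = np.array([0.2, 0.7])            # true mean reward per action (toy setup)

# Logged data from a uniform logging policy with Bernoulli rewards.
a = rng.integers(0, 2, size=n)       # logged actions
prop = np.full(n, 0.5)               # logging propensities p(a|x)
r = rng.binomial(1, mu[a])           # observed rewards

pi = np.ones(n, dtype=int)           # target policy: always play action 1
q_hat = np.tile(mu, (n, 1))          # reward model (here: exact, for clarity)

# Doubly robust value estimate:
#   V = mean( q_hat(x, pi(x)) + 1{a == pi(x)} / p(a|x) * (r - q_hat(x, a)) )
dm = q_hat[np.arange(n), pi]                         # direct-method term
correction = (a == pi) / prop * (r - q_hat[np.arange(n), a])
v_dr = np.mean(dm + correction)
print(f"DR estimate of target-policy value: {v_dr:.3f}")
```

With a reasonable reward model or correct propensities, the estimate concentrates near the true value of action 1 (0.7 here); the paper applies the same estimator per group to track reward disparities during constrained optimization.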
7002ecdf2f63a1fd7f0c3006db6020b47d7491e5e18b3a531f124114717ef2e1
2026-01-07T00:00:00-05:00
Orthogonal-by-construction augmentation of physics-based input-output models
arXiv:2511.01321v2 Announce Type: replace Abstract: This paper proposes a novel orthogonal-by-construction parametrization for augmenting physics-based input-output models with a learning component in an additive sense. The parametrization allows the parameters of the physics-based model and the learning component to be optimized jointly. Unlike the commonly applied additive (parallel) augmentation structure, the proposed formulation eliminates overlap in the representation of the system dynamics, thereby preserving the uniqueness of the estimated physical parameters and ultimately leading to enhanced model interpretability. By theoretical analysis, we show that, under mild conditions, the method is statistically consistent and guarantees recovery of the true physical parameters. With further analysis of the asymptotic covariance matrix of the identified parameters, we also prove that the proposed structure provides a clear separation between the physics-based and learning components of the augmentation structure. The effectiveness of the proposed approach is demonstrated through simulation studies, showing accurate reproduction of the data-generating dynamics without sacrificing consistent estimation of the physical parameters.
https://arxiv.org/abs/2511.01321
Academic Papers
svg
4d95fca4a3533834d6906c249cc8fd5681b480c60c91cb3cf3af888b130e3bc3
2026-01-07T00:00:00-05:00
LiCoMemory: Lightweight and Cognitive Agentic Memory for Efficient Long-Term Reasoning
arXiv:2511.01448v2 Announce Type: replace Abstract: Large Language Model (LLM) agents exhibit remarkable conversational and reasoning capabilities but remain constrained by limited context windows and the lack of persistent memory. Recent efforts address these limitations via external memory architectures, often employing graph-based representations, yet most adopt flat, entangled structures that intertwine semantics with topology, leading to redundant representations, unstructured retrieval, and degraded efficiency and accuracy. To resolve these issues, we propose LiCoMemory, an end-to-end agentic memory framework for real-time updating and retrieval, which introduces CogniGraph, a lightweight hierarchical graph that utilizes entities and relations as semantic indexing layers, and employs temporal and hierarchy-aware search with integrated reranking for adaptive and coherent knowledge retrieval. Experiments on long-term dialogue benchmarks, LoCoMo and LongMemEval, show that LiCoMemory not only outperforms established baselines in temporal reasoning, multi-session consistency, and retrieval efficiency, but also notably reduces update latency. Our official code and data are available at https://github.com/EverM0re/LiCoMemory.
https://arxiv.org/abs/2511.01448
Academic Papers
svg
959ddace4bc799a7100ac4bf181426ba132c81e601fbaf91de8b710d62529955
2026-01-07T00:00:00-05:00
Adapting Web Agents with Synthetic Supervision
arXiv:2511.06101v2 Announce Type: replace Abstract: Web agents struggle to adapt to new websites due to the scarcity of environment specific tasks and demonstrations. Recent works have explored synthetic data generation to address this challenge, however, they suffer from data quality issues where synthesized tasks contain hallucinations that cannot be executed, and collected trajectories are noisy with redundant or misaligned actions. In this paper, we propose SynthAgent, a fully synthetic supervision framework that aims at improving synthetic data quality via dual refinement of both tasks and trajectories. Our approach begins by synthesizing diverse tasks through categorized exploration of web elements, ensuring efficient coverage of the target environment. During trajectory collection, tasks are refined only when conflicts with observations are detected, which mitigates hallucinations while preserving task consistency. After collection, we conduct trajectory refinement with global context to mitigate potential noise or misalignments. Finally, we fine-tune open-source web agents on the refined synthetic data to adapt them to the target environment. Experimental results demonstrate that SynthAgent outperforms existing synthetic data methods, validating the importance of high-quality synthetic supervision. The code is publicly available at https://github.com/aiming-lab/SynthAgent.
https://arxiv.org/abs/2511.06101
Academic Papers
svg
f5e220c745b3ce9fd9d3258fdb922be1eba86246841f18d2075231b0589dc313
2026-01-07T00:00:00-05:00
Alignment-Aware Quantization for LLM Safety
arXiv:2511.07842v3 Announce Type: replace Abstract: Safety and efficiency are paramount yet often conflicting requirements for deploying Large Language Models (LLMs). While LLMs are trained to follow human alignment for safety, Post-Training Quantization (PTQ) is applied afterward to ensure efficiency. Here we identify a fundamental flaw in the conventional PTQ paradigm: quantization can turn into a safety vulnerability if it only aims to achieve low perplexity. To address this, we propose Alignment-Aware Quantization (AAQ), a novel approach that integrates an Alignment-Preserving Contrastive (APC) loss into the PTQ pipeline. Our method explicitly preserves alignment by encouraging the quantized model to mimic its safe, instruction-tuned model while diverging from the unaligned, pre-trained counterpart. AAQ achieves robust safety alignment without specialized safety-focused datasets, using only standard calibration data. We show that AAQ is compatible with standard PTQ techniques and enables robust 4-bit (W4A4) quantization across diverse model families. Our work resolves the critical trade-off between efficiency and safety, paving the way toward LLMs that are both efficient and trustworthy. Anonymized code is available in the supplementary material.
https://arxiv.org/abs/2511.07842
Academic Papers
svg
17b31b0ea41c1a95d190f411dd1c9753d7f4960a374f15f9b282b4bcff5dd368
2026-01-07T00:00:00-05:00
The Journal of Prompt-Engineered Philosophy Or: How I Started to Track AI Assistance and Stopped Worrying About Slop
arXiv:2511.08639v2 Announce Type: replace Abstract: Academic publishing increasingly requires authors to disclose AI assistance, yet imposes reputational costs for doing so--especially when such assistance is substantial. This article analyzes that structural contradiction, showing how incentives discourage transparency in precisely the work where it matters most. Traditional venues cannot resolve this tension through policy tweaks alone, as the underlying prestige economy rewards opacity. To address this, the article proposes an alternative publishing infrastructure: a venue outside prestige systems that enforces mandatory disclosure, enables reproduction-based review, and supports ecological validity through detailed documentation. As a demonstration of this approach, the article itself is presented as an example of AI-assisted scholarship under reasonably detailed disclosure, with representative prompt logs and modification records included. Rather than taking a position for or against AI-assisted scholarship, the article outlines conditions under which such work can be evaluated on its own terms: through transparent documentation, verification-oriented review, and participation by methodologically committed scholars. While focused on AI, the framework speaks to broader questions about how academic systems handle methodological innovation.
https://arxiv.org/abs/2511.08639
Academic Papers
svg
cf47331c997189aa51f0ad52fe2e6c8581c7aa83da40144de45c9b45e36454fa
2026-01-07T00:00:00-05:00
DoPE: Denoising Rotary Position Embedding
arXiv:2511.09146v2 Announce Type: replace Abstract: Positional encoding is essential for large language models (LLMs) to represent sequence order, yet recent studies show that Rotary Position Embedding (RoPE) can induce massive activations. We investigate the source of these instabilities via a spectral analysis of RoPE, and show that its low-frequency components concentrate structured energy, producing low-rank, over-aligned attention patterns. We theoretically reveal that this low-frequency alignment manifests as activation noise, degrading stability during long-context extrapolation. To mitigate this effect, we introduce Denoising Rotary Position Embedding (DoPE), a training-free method that identifies and suppresses noisy attention heads using truncated matrix entropy, then reparameterizes their attention maps with an isotropic Gaussian distribution. Across a range of settings, DoPE improves length extrapolation performance without fine-tuning, increases robustness to perturbations, and boosts both needle-in-a-haystack and many-shot in-context learning tasks. These results suggest that selective positional encoding is key to robust extrapolation. Our project page is https://The-physical-picture-of-LLMs.github.io
https://arxiv.org/abs/2511.09146
Academic Papers
svg
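The truncated matrix entropy that the DoPE abstract above uses to flag noisy heads can be sketched as the Shannon entropy of the top-k normalised singular values of an attention map: a low value means the map's energy sits in a few directions (low-rank, over-aligned). The function name, `k`, and the test matrices below are illustrative choices, not the paper's:

```python
import numpy as np

def truncated_matrix_entropy(attn, k=8):
    """Entropy of the top-k normalised singular values of an attention map.

    Sketch under the assumption that 'truncated matrix entropy' keeps the
    k largest singular values; low entropy flags over-aligned heads.
    """
    s = np.linalg.svd(attn, compute_uv=False)[:k]
    p = s / s.sum()
    p = p[p > 0]                       # drop exact zeros before the log
    return float(-(p * np.log(p)).sum())

rng = np.random.default_rng(0)
T = 64
# A rank-1 "over-aligned" map vs. a diffuse random map (rows sum to 1).
rank1 = np.outer(np.ones(T), rng.random(T))
rank1 /= rank1.sum(axis=1, keepdims=True)
diffuse = rng.random((T, T))
diffuse /= diffuse.sum(axis=1, keepdims=True)

print(truncated_matrix_entropy(rank1) < truncated_matrix_entropy(diffuse))  # True
```

Heads whose entropy falls below a threshold would then have their maps reparameterized (an isotropic Gaussian in DoPE's case); that replacement step is not shown here.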
413d12239f9c2a0eac83f6ab071bc2e2af66c7c94a6223cb1809bd4dba9941b3
2026-01-07T00:00:00-05:00
Towards Unbiased Cross-Modal Representation Learning for Food Image-to-Recipe Retrieval
arXiv:2511.15201v2 Announce Type: replace Abstract: This paper addresses the challenges of learning representations for recipes and food images in the cross-modal retrieval problem. As the relationship between a recipe and its cooked dish is cause-and-effect, treating a recipe as a text source describing the visual appearance of a dish for representation learning, as existing approaches do, creates a bias that misleads image-recipe similarity judgment. Specifically, a food image may not equally capture every detail in a recipe, due to factors such as the cooking process, dish presentation, and image-capturing conditions. Current representation learning tends to capture dominant visual-text alignment while overlooking subtle variations that determine retrieval relevance. In this paper, we model such bias in cross-modal representation learning using causal theory. The causal view of this problem suggests ingredients as one source of confounding, and a simple backdoor adjustment can alleviate the bias. By causal intervention, we reformulate the conventional model for food-to-recipe retrieval with an additional term to remove the potential bias in similarity judgment. Based on this theory-informed formulation, we empirically prove the oracle performance of retrieval on the Recipe1M dataset to be MedR=1 across testing data sizes of 1K, 10K, and even 50K. We also propose a plug-and-play neural module, which is essentially a multi-label ingredient classifier, for debiasing. New state-of-the-art search performances are reported on the Recipe1M dataset.
https://arxiv.org/abs/2511.15201
Academic Papers
svg
c6061f798a529a9331eae2f4d424b3b0f12c5a42cfa84893b6cd2cf6668fef54
2026-01-07T00:00:00-05:00
Point-Supervised Facial Expression Spotting with Gaussian-Based Instance-Adaptive Intensity Modeling
arXiv:2511.16952v3 Announce Type: replace Abstract: Automatic facial expression spotting, which aims to identify facial expression instances in untrimmed videos, is crucial for facial expression analysis. Existing methods primarily focus on fully-supervised learning and rely on costly, time-consuming temporal boundary annotations. In this paper, we investigate point-supervised facial expression spotting (P-FES), where only a single timestamp annotation per instance is required for training. We propose a unique two-branch framework for P-FES. First, to mitigate the limitation of hard pseudo-labeling, which often confuses neutral and expression frames with various intensities, we propose a Gaussian-based instance-adaptive intensity modeling (GIM) module to model instance-level expression intensity distribution for soft pseudo-labeling. By detecting the pseudo-apex frame around each point label, estimating the duration, and constructing an instance-level Gaussian distribution, GIM assigns soft pseudo-labels to expression frames for more reliable intensity supervision. The GIM module is incorporated into our framework to optimize the class-agnostic expression intensity branch. Second, we design a class-aware apex classification branch that distinguishes macro- and micro-expressions solely based on their pseudo-apex frames. During inference, the two branches work independently: the class-agnostic expression intensity branch generates expression proposals, while the class-aware apex-classification branch is responsible for macro- and micro-expression classification. Furthermore, we introduce an intensity-aware contrastive loss to enhance discriminative feature learning and suppress neutral noise by contrasting neutral frames with expression frames with various intensities. Extensive experiments on the SAMM-LV, CAS(ME)$^2$, and CAS(ME)$^3$ datasets demonstrate the effectiveness of our proposed framework.
https://arxiv.org/abs/2511.16952
Academic Papers
svg
5fc6c4673fe5e63cfedfd090c656f2d582360a21895db568c7e4e4b6023a7ead
2026-01-07T00:00:00-05:00
FLUID: Training-Free Face De-identification via Latent Identity Substitution
arXiv:2511.17005v2 Announce Type: replace Abstract: Current face de-identification methods, which replace identifiable cues in the face region with others, sacrifice attributes that contribute to realism, such as age and gender. To recover the lost realism, we present FLUID (Face de-identification in the Latent space via Utility-preserving Identity Displacement), a single-input face de-identification framework that directly replaces identity features in the latent space of a pretrained diffusion model without affecting the model's weights. We reinterpret face de-identification as an image editing task in the latent h-space of a pretrained unconditional diffusion model. Our framework estimates identity-editing directions through optimization guided by loss functions that encourage attribute preservation while suppressing identity signals. We further introduce both linear and geodesic (tangent-based) editing schemes to effectively navigate the latent manifold. Experiments on CelebA-HQ and FFHQ show that FLUID achieves a superior balance between identity suppression and attribute preservation, outperforming existing de-identification approaches in both qualitative and quantitative evaluations.
https://arxiv.org/abs/2511.17005
Academic Papers
svg
c18df8b22acfc5b3181ddba83d0aa88d86ba2d25fe79e0d375ebddf29521b10d
2026-01-07T00:00:00-05:00
Intervene-All-Paths: Unified Mitigation of LVLM Hallucinations across Alignment Formats
arXiv:2511.17254v2 Announce Type: replace Abstract: Despite their impressive performance across a wide range of tasks, Large Vision-Language Models (LVLMs) remain prone to hallucination. In this study, we propose a comprehensive intervention framework aligned with the transformer's causal architecture in LVLMs, integrating the effects of different intervention paths on hallucination. We find that hallucinations in LVLMs do not arise from a single causal path, but rather from the interplay among image-to-input-text, image-to-output-text, and text-to-text pathways. For the first time, we also find that LVLMs rely on different pathways depending on the question-answer alignment format. Building on these insights, we propose simple yet effective methods to identify and intervene on critical hallucination heads within each pathway, tailored to discriminative and generative formats. Experiments across multiple benchmarks demonstrate that our approach consistently reduces hallucinations across diverse alignment types.
https://arxiv.org/abs/2511.17254
Academic Papers
svg
5a2260a16b26133de72658f06b34e983c414436a450881d7bfcdafec999bcfe5
2026-01-07T00:00:00-05:00
Musical Score Understanding Benchmark: Evaluating Large Language Models' Comprehension of Complete Musical Scores
arXiv:2511.20697v2 Announce Type: replace Abstract: Understanding complete musical scores entails integrated reasoning over pitch, rhythm, harmony, and large-scale structure, yet the ability of Large Language Models and Vision-Language Models to interpret full musical notation remains insufficiently examined. We introduce the Musical Score Understanding Benchmark (MSU-Bench), the first large-scale, human-curated benchmark for score-level musical understanding across textual (ABC notation) and visual (PDF) modalities. MSU-Bench contains 1,800 generative Question-Answering pairs from works by Bach, Beethoven, Chopin, Debussy, and others, organised into four levels of increasing difficulty, ranging from onset information to texture and form. Evaluations of more than fifteen state-of-the-art models, in both zero-shot and fine-tuned settings, reveal pronounced modality gaps, unstable level-wise performance, and challenges in maintaining multilevel correctness. Fine-tuning substantially improves results across modalities while preserving general knowledge, positioning MSU-Bench as a robust foundation for future research in multimodal reasoning. To facilitate further research, we publicly release MSU-Bench and all associated resources.
https://arxiv.org/abs/2511.20697
Academic Papers
svg
0886899094a3c75c41e7f54e1c596267eee9e680464786070f453349f006ac6d
2026-01-07T00:00:00-05:00
Representation Interventions Enable Lifelong Unstructured Knowledge Control
arXiv:2511.20892v2 Announce Type: replace Abstract: Large language models (LLMs) often produce incorrect or outdated content. Updating their knowledge efficiently and accurately without costly retraining is a major challenge. This problem is particularly challenging for complex, unstructured knowledge in lifelong settings, where many edits must coexist without interference. We introduce RILKE (Representation Intervention for Lifelong KnowledgE Control), a robust and scalable method that treats knowledge control as interventions within the model's representation space. Leveraging representation-space expressiveness, we identify two key properties enabling RILKE to achieve fine-grained control over complex, unstructured knowledge while maintaining general utility with frozen base weights. During training, RILKE learns paraphrase-robust and edit-localized modules that limit each update to a low-dimensional subspace to minimize cross-edit interference. At inference, a query-adaptive router selects the appropriate module to guide the model's generation. Across LLaMA and Qwen models, RILKE scales effectively to large-scale benchmarks, demonstrating high edit success and strong paraphrase generalization while preserving general utility with modest memory overhead. These results show RILKE is an effective and scalable solution for lifelong knowledge control in LLMs.
https://arxiv.org/abs/2511.20892
Academic Papers
svg
3640e46412c44ee3281a45181547a85631d8e56baea79150ee57488f8b2d31aa
2026-01-07T00:00:00-05:00
Emergence and Localisation of Semantic Role Circuits in LLMs
arXiv:2511.20910v2 Announce Type: replace Abstract: Despite displaying semantic competence, large language models' internal mechanisms that ground abstract semantic structure remain insufficiently characterised. We propose a method integrating role-cross minimal pairs, temporal emergence analysis, and cross-model comparison to study how LLMs implement semantic roles. Our analysis uncovers: (i) highly concentrated circuits (89-94% attribution within 28 nodes); (ii) gradual structural refinement rather than phase transitions, with larger models sometimes bypassing localised circuits; and (iii) moderate cross-scale conservation (24-59% component overlap) alongside high spectral similarity. These findings suggest that LLMs form compact, causally isolated mechanisms for abstract semantic structure, and these mechanisms exhibit partial transfer across scales and architectures.
https://arxiv.org/abs/2511.20910
Academic Papers
svg
47358b3a50da733ce70c4530eed49c8b787bf2ece5f31c0ca08c024a6b59db7d
2026-01-07T00:00:00-05:00
Beyond Patch Aggregation: 3-Pass Pyramid Indexing for Vision-Enhanced Document Retrieval
arXiv:2511.21121v2 Announce Type: replace Abstract: Document-centric RAG pipelines usually begin with OCR, followed by brittle heuristics for chunking, table parsing, and layout reconstruction. These text-first workflows are costly to maintain, sensitive to small layout shifts, and often lose the spatial cues that contain the answer. Vision-first retrieval has emerged as a strong alternative. By operating directly on page images, systems like ColPali and ColQwen preserve structure and reduce pipeline complexity while achieving strong benchmark performance. However, these late-interaction models tie retrieval to a specific vision backbone and require storing hundreds of patch embeddings per page, creating high memory overhead and complicating large-scale deployment. We introduce VisionRAG, a multimodal retrieval system that is OCR-free and model-agnostic. VisionRAG indexes documents directly as images, preserving layout, tables, and spatial cues, and builds semantic vectors without committing to a specific extraction. Our three-pass pyramid indexing framework creates vectors using global page summaries, section headers, visual hotspots, and fact-level cues. These summaries act as lightweight retrieval surrogates. At query time, VisionRAG retrieves the most relevant pages using the pyramid index, then forwards the raw page image, encoded as base64, to a multimodal LLM for final question answering. During retrieval, reciprocal rank fusion integrates signals across the pyramid to produce a robust ranking. VisionRAG stores only 17 to 27 vectors per page, matching the efficiency of patch-based methods while staying flexible across multimodal encoders. On financial document benchmarks, it achieves 0.8051 accuracy@10 on FinanceBench and 0.9629 recall@100 on TAT-DQA. These results show that OCR-free, summary-guided multimodal retrieval is a practical and scalable alternative to traditional text extraction pipelines.
https://arxiv.org/abs/2511.21121
Academic Papers
svg
f337b0237044dc3ade44c084e35b409481b508c37a53a7bd777a7a693f4595e2
2026-01-07T00:00:00-05:00
Improving motor imagery decoding methods for an EEG-based mobile brain-computer interface in the context of the 2024 Cybathlon
arXiv:2511.23384v3 Announce Type: replace Abstract: Motivated by the Cybathlon 2024 competition, we developed a modular, online EEG-based brain-computer interface (BCI) that increases accessibility for individuals with severe mobility impairments. Our system uses three mental and motor imagery classes to produce up to five control signals. The pipeline consists of four modules: data acquisition, preprocessing, classification, and the transfer function that maps classification output to control dimensions. We use three diagonalized structured state-space sequence (S4D) layers as a deep learning classifier. We developed a training game for our pilot in which the mental tasks control the game during quick-time events, and we implemented a mobile web application for live user feedback. The components were designed with a human-centred approach in collaboration with the tetraplegic user. We achieve up to 84% classification accuracy in offline analysis using an S4D-layer-based model. In a competition setting, our pilot successfully completed one task; we attribute the reduced performance in this context primarily to factors such as stress and the challenging competition environment. Following the Cybathlon, we further validated our pipeline with the original pilot and an additional participant, achieving a success rate of 73% in real-time gameplay. We also compare our model to the EEGEncoder, which is slower to train but achieves higher performance. The S4D model outperforms the reference machine learning models. We provide insights into developing a framework for portable BCIs, bridging the gap between the laboratory and daily life. Specifically, our framework integrates modular design, real-time data processing, user-centred feedback, and low-cost hardware to deliver an accessible and adaptable BCI solution, addressing critical gaps in current BCI applications.
https://arxiv.org/abs/2511.23384
Academic Papers
svg
0cec0f6c1ef52a52602a2324d3b0b14aaed2617a39e0800a7ad970e2a5fe500a
2026-01-07T00:00:00-05:00
Reward Auditor: Inference on Reward Modeling Suitability in Real-World Perturbed Scenarios
arXiv:2512.00920v2 Announce Type: replace Abstract: Reliable reward models (RMs) are critical for ensuring the safe alignment of large language models (LLMs). However, current RM evaluation methods focus solely on preference-perception accuracy in specific given scenarios, obscuring the critical vulnerabilities of RMs in real-world scenarios. We identify that the true challenge lies in assessing a novel dimension: Suitability, defined as conditional reliability under specific real-world perturbations. To this end, we introduce Reward Auditor, a hypothesis-testing framework specifically designed for RM suitability inference. Rather than answering "How accurate is the RM's preference perception for given samples?", it employs scientific auditing to answer: "Can we infer that RMs exhibit systematic vulnerabilities in specific real-world scenarios?". Under real-world perturbed scenarios, Reward Auditor quantifies statistical significance and effect size by auditing the distribution degradation of RM preference-perception confidence. This enables inference of both the certainty and severity of RM vulnerabilities across diverse real-world scenarios, laying a solid foundation for building next-generation LLM alignment systems that are verifiably safe, more robust, and trustworthy.
https://arxiv.org/abs/2512.00920
Academic Papers
svg
a58f2555eefcb2a7314b6f34ef20dd9ee2eff7186b8f31537e6a01d031312c8c
2026-01-07T00:00:00-05:00
Thucy: An LLM-based Multi-Agent System for Claim Verification across Relational Databases
arXiv:2512.03278v2 Announce Type: replace Abstract: In today's age, it is becoming increasingly difficult to decipher truth from lies. Every day, politicians, media outlets, and public figures make conflicting claims -- often about topics that can, in principle, be verified against structured data. For instance, statements about crime rates, economic growth or healthcare can all be verified against official public records and structured datasets. Building a system that can automatically do that would have sounded like science fiction just a few years ago. Yet, with the extraordinary progress in LLMs and agentic AI, this is now within reach. Still, there remains a striking gap between what is technically possible and what is being demonstrated by recent work. Most existing verification systems operate only on small, single-table databases -- typically a few hundred rows -- that conveniently fit within an LLM's context window. In this paper we report our progress on Thucy, the first cross-database, cross-table multi-agent claim verification system that also provides concrete evidence for each verification verdict. Thucy remains completely agnostic to the underlying data sources before deployment and must therefore autonomously discover, inspect, and reason over all available relational databases to verify claims. Importantly, Thucy also reports the exact SQL queries that support its verdict (whether the claim is accurate or not) offering full transparency to expert users familiar with SQL. When evaluated on the TabFact dataset -- the standard benchmark for fact verification over structured data -- Thucy surpasses the previous state of the art by 5.6 percentage points in accuracy (94.3% vs. 88.7%).
https://arxiv.org/abs/2512.03278
Academic Papers
svg
5f3ec77e733f3cb5c6fdf5fe4bcca441f8c4ba75d51b98b9ad8f6bcb36e16c97
2026-01-07T00:00:00-05:00
Legitimizing, Developing, and Sustaining Feminist HCI in East Asia: Challenges and Opportunities
arXiv:2512.13000v2 Announce Type: replace Abstract: Feminist HCI has been rapidly developing in East Asian contexts in recent years. The region's unique cultural and political backgrounds have contributed valuable, situated knowledge, revealing topics such as localized digital feminism practices or women's complex navigation of social expectations. However, the very factors that ground these perspectives also create significant survival challenges for researchers in East Asia. These include a scarcity of dedicated funding, the stigma of being perceived as less valuable than productivity-oriented technologies, and the lack of senior researchers and established, resilient communities. Grounded in these challenges and our prior collective practices, we propose this meet-up with two focused goals: (1) to provide a legitimized channel for Feminist HCI researchers to connect and build community, and (2) to facilitate an action-oriented dialogue on how to legitimize, develop, and sustain Feminist HCI in the East Asian context. The website for this meet-up is: https://feminist-hci.github.io/
https://arxiv.org/abs/2512.13000
Academic Papers
svg
f929f2ae824773c1581c2f1c4c6e0f29b82e647d8620da2efa730dc8c73d9d6c
2026-01-07T00:00:00-05:00
Socratic Students: Teaching Language Models to Learn by Asking Questions
arXiv:2512.13102v4 Announce Type: replace Abstract: Large Language Models (LLMs) are usually used to answer questions, but many high-stakes applications (e.g., tutoring, clinical support) require the complementary skill of asking questions: detecting missing information, requesting clarifications, and using them to solve tasks. We study this skill in reasoning-heavy domains where progress depends on inquiry rather than factual recall. We define an interactive protocol where a student model engages a stronger teacher under a small turn budget. After each teacher reply, we evaluate the student on the original task with Pass@k. We propose the Outcome-Driven Question optimization Strategy (ODQS), a training framework that learns a questioning policy from downstream task outcomes. At each turn, we sample multiple candidate questions, query the teacher with each, and score the student's resulting performance. Using these scores, we train the student via supervised fine-tuning followed by Direct Preference Optimization (DPO), without any human labels. On GSM8K, HumanEval, and OpenCoder, ODQS produces large gains over interactive baselines, boosting Pass@5 by up to 54.7% (absolute) on math and 22.9% (absolute) on coding, and matching baseline performance in three fewer turns. Thus, question asking can be explicitly trained from task outcomes, improving both accuracy and efficiency in interactive reasoning.
https://arxiv.org/abs/2512.13102
Academic Papers
svg
ed133a9dbe1f5d26ab39ccb910ec0599497629e1600f530b725473895a812888
2026-01-07T00:00:00-05:00
RoboTracer: Mastering Spatial Trace with Reasoning in Vision-Language Models for Robotics
arXiv:2512.13660v2 Announce Type: replace Abstract: Spatial tracing, as a fundamental embodied interaction ability for robots, is inherently challenging as it requires multi-step metric-grounded reasoning compounded with complex spatial referring and real-world metric measurement. However, existing methods struggle with this compositional task. To this end, we propose RoboTracer, a 3D-aware VLM that first achieves both 3D spatial referring and measuring via a universal spatial encoder and a regression-supervised decoder to enhance scale awareness during supervised fine-tuning (SFT). Moreover, RoboTracer advances multi-step metric-grounded reasoning via reinforcement fine-tuning (RFT) with metric-sensitive process rewards, supervising key intermediate perceptual cues to accurately generate spatial traces. To support SFT and RFT training, we introduce TraceSpatial, a large-scale dataset of 30M QA pairs, spanning outdoor/indoor/tabletop scenes and supporting complex reasoning processes (up to 9 steps). We further present TraceSpatial-Bench, a challenging benchmark filling the gap to evaluate spatial tracing. Experimental results show that RoboTracer surpasses baselines in spatial understanding, measuring, and referring, with an average success rate of 79.1%, and also achieves SOTA performance on TraceSpatial-Bench by a large margin, exceeding Gemini-2.5-Pro by 36% accuracy. Notably, RoboTracer can be integrated with various control policies to execute long-horizon, dynamic tasks across diverse robots (UR5, G1 humanoid) in cluttered real-world scenes. See the project page at https://zhoues.github.io/RoboTracer.
https://arxiv.org/abs/2512.13660
Academic Papers
svg
d84ba99b62ee0781d52ff37d8a49db70752dd7fbf24ec7384071e721d73fbfb6
2026-01-07T00:00:00-05:00
Massive Editing for Large Language Models Based on Dynamic Weight Generation
arXiv:2512.14395v3 Announce Type: replace Abstract: Knowledge Editing (KE) is a field that studies how to modify some knowledge in Large Language Models (LLMs) at a low cost (compared to pre-training). Currently, performing large-scale edits on LLMs while ensuring the Reliability, Generality, and Locality metrics of the edits remains a challenge. This paper proposes a Massive editing approach for LLMs based on dynamic weight Generation (MeG). Our MeG attaches a dynamic weight neuron to specific layers of the LLMs and uses a diffusion model to conditionally generate the weights of this neuron based on the input query for the required knowledge. This allows a single added dynamic weight neuron to achieve large-scale knowledge editing. Experiments show that our MeG can significantly improve the performance of large-scale KE in terms of the Reliability, Generality, and Locality metrics compared to existing knowledge editing methods, with a particularly large absolute percentage-point gain on the Locality metric, demonstrating the advantages of our proposed method.
https://arxiv.org/abs/2512.14395
Academic Papers
svg
3ed9a14a9d04bff45fae531661ff7eb71967b8e4a4dcc91f1dc6df8ed7f1ced1
2026-01-07T00:00:00-05:00
Activation Oracles: Training and Evaluating LLMs as General-Purpose Activation Explainers
arXiv:2512.15674v2 Announce Type: replace Abstract: Large language model (LLM) activations are notoriously difficult to understand, with most existing techniques using complex, specialized methods for interpreting them. Recent work has proposed a simpler approach known as LatentQA: training LLMs to directly accept LLM activations as inputs and answer arbitrary questions about them in natural language. However, prior work has focused on narrow task settings for both training and evaluation. In this paper, we instead take a generalist perspective. We evaluate LatentQA-trained models, which we call Activation Oracles (AOs), in far out-of-distribution settings and examine how performance scales with training data diversity. We find that AOs can recover information fine-tuned into a model (e.g., biographical knowledge or malign propensities) that does not appear in the input text, despite never being trained with activations from a fine-tuned model. Our main evaluations are four downstream tasks where we can compare to prior white- and black-box techniques. We find that even narrowly-trained LatentQA models can generalize well, and that adding additional training datasets (such as classification tasks and a self-supervised context prediction task) yields consistent further improvements. Our best AOs match or exceed white-box baselines on all four tasks and the best overall baseline on 3 of 4. These results suggest that diversified training to answer natural-language queries imparts a general capability to verbalize information about LLM activations.
https://arxiv.org/abs/2512.15674
Academic Papers
svg
e1fad90e2593fa2310107a63ba845ec6534c76c0797a4326b9889a6a7586c26f
2026-01-07T00:00:00-05:00
LoFT-LLM: Low-Frequency Time-Series Forecasting with Large Language Models
arXiv:2512.20002v2 Announce Type: replace Abstract: Time-series forecasting in real-world applications such as finance and energy often faces challenges due to limited training data and complex, noisy temporal dynamics. Existing deep forecasting models typically supervise predictions using full-length temporal windows, which include substantial high-frequency noise and obscure long-term trends. Moreover, auxiliary variables containing rich domain-specific information are often underutilized, especially in few-shot settings. To address these challenges, we propose LoFT-LLM, a frequency-aware forecasting pipeline that integrates low-frequency learning with semantic calibration via a large language model (LLM). First, a Patch Low-Frequency forecasting Module (PLFM) extracts stable low-frequency trends from localized spectral patches. Second, a residual learner models high-frequency variations. Finally, a fine-tuned LLM refines the predictions by incorporating auxiliary context and domain knowledge through structured natural language prompts. Extensive experiments on financial and energy datasets demonstrate that LoFT-LLM significantly outperforms strong baselines under both full-data and few-shot regimes, delivering superior accuracy, robustness, and interpretability.
https://arxiv.org/abs/2512.20002
Academic Papers
svg
2551b8df668067e2d4cf90a68a874ed108069658c66f3578043ebddf5366253c
2026-01-07T00:00:00-05:00
D^3ETOR: Debate-Enhanced Pseudo Labeling and Frequency-Aware Progressive Debiasing for Weakly-Supervised Camouflaged Object Detection with Scribble Annotations
arXiv:2512.20260v2 Announce Type: replace Abstract: Weakly-Supervised Camouflaged Object Detection (WSCOD) aims to locate and segment objects that are visually concealed within their surrounding scenes, relying solely on sparse supervision such as scribble annotations. Despite recent progress, existing WSCOD methods still lag far behind fully supervised ones due to two major limitations: (1) the pseudo masks generated by general-purpose segmentation models (e.g., SAM) and filtered via rules are often unreliable, as these models lack the task-specific semantic understanding required for effective pseudo labeling in COD; and (2) the neglect of inherent annotation bias in scribbles, which hinders the model from capturing the global structure of camouflaged objects. To overcome these challenges, we propose D^3ETOR, a two-stage WSCOD framework consisting of Debate-Enhanced Pseudo Labeling and Frequency-Aware Progressive Debiasing. In the first stage, we introduce an adaptive entropy-driven point sampling method and a multi-agent debate mechanism to enhance the capability of SAM for COD, improving the interpretability and precision of pseudo masks. In the second stage, we design FADeNet, which progressively fuses multi-level frequency-aware features to balance global semantic understanding with local detail modeling, while dynamically reweighting supervision strength across regions to alleviate scribble bias. By jointly exploiting the supervision signals from both the pseudo masks and scribble semantics, D^3ETOR significantly narrows the gap between weakly and fully supervised COD, achieving state-of-the-art performance on multiple benchmarks.
https://arxiv.org/abs/2512.20260
Academic Papers
svg
66bdb1b1cf15be6d4518676906d2e081ffaac6e5a6f3dd48d5b56edc57576f09
2026-01-07T00:00:00-05:00
Mixture-of-Experts with Gradient Conflict-Driven Subspace Topology Pruning for Emergent Modularity
arXiv:2512.20291v3 Announce Type: replace Abstract: Mixture-of-Experts (MoE) architectures achieve parameter efficiency through conditional computation, yet contemporary designs suffer from two fundamental limitations: structural parameter isolation that causes catastrophic forgetting, and instruction-overfitting that degrades performance in instruction-free scenarios. We propose CDSP-MoE (Conflict-Driven Subspace Pruning MoE), a framework that addresses these issues through a paradigm shift from isolated expert containers to dynamic expert instantiation within a shared physical subspace. Grounded in the Universal Weight Subspace Hypothesis, CDSP-MoE maintains a super-complete parameter backbone where logical experts are carved out via learnable topology masks. Unlike prior work that uses gradient conflict for token reassignment or optimization surgery, we leverage it as a structural supervisory signal: a Lagged Gradient Game penalizes interfering connections in the shared manifold, enabling the topology to spontaneously prune conflicting pathways and evolve interpretable modular structures. Experimental results demonstrate that CDSP-MoE achieves robust content-driven routing without human-defined task labels, maintaining semantic specialization even under strict blind inference protocols where explicit instructions are absent. Code is available at: https://github.com/konodiodaaaaa1/Conflict-Driven-Subspace-Pruning-Mixture-of-Experts
https://arxiv.org/abs/2512.20291
Academic Papers
svg
2ab37e447e0fb327e48e61ae1ab2106b5c6da2679e41c57a8cfb6149136efc10
2026-01-07T00:00:00-05:00
Efficient and Robust Video Defense Framework against 3D-field Personalized Talking Face
arXiv:2512.21019v3 Announce Type: replace Abstract: State-of-the-art 3D-field video-referenced Talking Face Generation (TFG) methods synthesize high-fidelity personalized talking-face videos in real time by modeling 3D geometry and appearance from a reference portrait video. This capability raises significant privacy concerns regarding malicious misuse of personal portraits. However, no efficient defense framework exists to protect such videos against 3D-field TFG methods. While image-based defenses could apply per-frame 2D perturbations, they incur prohibitive computational costs and severe video-quality degradation, and still fail to disrupt the 3D information that video protection requires. To address this, we propose a novel and efficient video defense framework against 3D-field TFG methods, which protects portrait video by perturbing the 3D information acquisition process while maintaining high-fidelity video quality. Specifically, our method introduces: (1) a similarity-guided parameter sharing mechanism for computational efficiency, and (2) a multi-scale dual-domain attention module to jointly optimize spatial-frequency perturbations. Extensive experiments demonstrate that our proposed framework exhibits strong defense capability and achieves a 47x acceleration over the fastest baseline while maintaining high fidelity. Moreover, it remains robust against scaling operations and state-of-the-art purification attacks, and the effectiveness of our design choices is further validated through ablation studies. Our project is available at https://github.com/Richen7418/VDF.
https://arxiv.org/abs/2512.21019
Academic Papers
svg
93c70607c930b560f860da263670dbaf666a1a3fbd4b67007b8bae8cdf4ffd94
2026-01-07T00:00:00-05:00
Making AI Functional with Workarounds: An Insider's Account of Invisible Labour in Organisational Politics
arXiv:2512.21055v2 Announce Type: replace Abstract: Research on the implementation of Generative Artificial Intelligence (GenAI) in higher education often focuses on strategic goals, overlooking the hidden, and often politically charged, labour required to make it functional. This paper provides an insider's account of the sociotechnical friction that arises when an institutional goal of empowering non-technical staff conflicts with the technical limitations of enterprise Large Language Models (LLMs). Through analytic autoethnography, this study examines a GenAI project pushed to an impasse, focusing on a workaround developed to navigate not only technical constraints but also the combined challenge of organisational territoriality and assertions of positional power. Drawing upon Alter's (2014) theory of workarounds, the analysis interprets "articulation work" as a form of "invisible labour". By engaging with the Information Systems (IS) domains of user innovation and technology-in-practice, this study argues that such user-driven workarounds should be understood not as deviations, but as integral acts of sociotechnical integration. This integration, however, highlights the central paradoxes of modern GenAI where such workarounds for "unfinished" systems can simultaneously create unofficial "shadow" systems and obscure the crucial, yet invisible, sociotechnical labour involved. The findings suggest that the invisible labour required to integrate GenAI within complex organisational politics is an important, rather than peripheral, component of how it becomes functional in practice.
https://arxiv.org/abs/2512.21055
Academic Papers
svg
32e1e4220ffefe397b6618abfc8d447ca2e93d87d20e1c29c131079c9e5fa858
2026-01-07T00:00:00-05:00
NEMO-4-PAYPAL: Leveraging NVIDIA's Nemo Framework for empowering PayPal's Commerce Agent
arXiv:2512.21578v2 Announce Type: replace Abstract: We present the development and optimization of PayPal's Commerce Agent, powered by NEMO-4-PAYPAL, a multi-agent system designed to revolutionize agentic commerce on the PayPal platform. Through our strategic partnership with NVIDIA, we leveraged the NeMo Framework for LLM fine-tuning to enhance agent performance. Specifically, we optimized the Search and Discovery agent by replacing our base model with a fine-tuned Nemotron small language model (SLM). We conducted comprehensive experiments using the llama3.1-nemotron-nano-8B-v1 architecture, training LoRA-based models through systematic hyperparameter sweeps across learning rates, optimizers (Adam, AdamW), cosine annealing schedules, and LoRA ranks. Our contributions include: (1) the first application of NVIDIA's NeMo Framework to commerce-specific agent optimization, (2) an LLM-powered fine-tuning strategy for retrieval-focused commerce tasks, (3) demonstration of significant improvements in latency and cost while maintaining agent quality, and (4) a scalable framework for multi-agent system optimization in production e-commerce environments. Our results demonstrate that the fine-tuned Nemotron SLM effectively resolves the key performance issue in the retrieval component, which represents over 50% of total agent response time, while maintaining or enhancing overall system performance.
https://arxiv.org/abs/2512.21578
Academic Papers
svg
b1a4fbe34a137c9a0a99748e5e782559653c2a9dbc771342319cd2b895c9754a
2026-01-07T00:00:00-05:00
A Comedy of Estimators: On KL Regularization in RL Training of LLMs
arXiv:2512.21852v2 Announce Type: replace Abstract: The reasoning performance of large language models (LLMs) can be substantially improved by training them with reinforcement learning (RL). The RL objective for LLM training involves a regularization term, the reverse Kullback-Leibler (KL) divergence between the trained policy and the reference policy. Since computing the KL divergence exactly is intractable, various estimators are used in practice to estimate it from on-policy samples. Despite its wide adoption, including in several open-source libraries, there is no systematic study analyzing the numerous ways of incorporating KL estimators in the objective and their effect on the downstream performance of RL-trained models. Recent works show that prevailing practices for incorporating KL regularization do not provide correct gradients for the stated objectives, creating a discrepancy between the objective and its implementation. In this paper, we further analyze these practices and study the gradients of several estimator configurations, revealing how design choices shape gradient bias. We substantiate these findings with empirical observations by RL fine-tuning Qwen2.5-7B, Llama-3.1-8B-Instruct, and Qwen3-4B-Instruct-2507 with different configurations and evaluating their performance on both in- and out-of-distribution tasks. Through our analysis, we observe that, in on-policy settings: (1) estimator configurations with biased gradients can result in training instabilities; and (2) estimator configurations with unbiased gradients lead to better performance on in-domain as well as out-of-domain tasks. We also investigate the performance resulting from different KL configurations in off-policy settings and observe that KL regularization can help stabilize off-policy RL training in asynchronous setups.
https://arxiv.org/abs/2512.21852
Academic Papers
svg
1bf2aff0b1bcac273e50f812dfc965d1ea22313449a32a26b4a2fd1e65cb887d
2026-01-07T00:00:00-05:00
DiRL: An Efficient Post-Training Framework for Diffusion Language Models
arXiv:2512.22234v2 Announce Type: replace Abstract: Diffusion Language Models (dLLMs) have emerged as promising alternatives to Auto-Regressive (AR) models. While recent efforts have validated their pre-training potential and accelerated inference speeds, the post-training landscape for dLLMs remains underdeveloped. Existing methods suffer from computational inefficiency and objective mismatches between training and inference, severely limiting performance on complex reasoning tasks such as mathematics. To address this, we introduce DiRL, an efficient post-training framework that tightly integrates FlexAttention-accelerated blockwise training with LMDeploy-optimized inference. This architecture enables a streamlined online model update loop, facilitating efficient two-stage post-training (Supervised Fine-Tuning followed by Reinforcement Learning). Building on this framework, we propose DiPO, the first unbiased Group Relative Policy Optimization (GRPO) implementation tailored for dLLMs. We validate our approach by training DiRL-8B-Instruct on high-quality math data. Our model achieves state-of-the-art math performance among dLLMs and surpasses comparable models in the Qwen2.5 series on several benchmarks.
https://arxiv.org/abs/2512.22234
Academic Papers
svg