| id | published | title | description | link | category | image |
|---|---|---|---|---|---|---|
| ec775776cf608f653f34c7406f0d543ac09a6034877c4079ef68a058ff20b7ff | 2026-02-02T00:00:00-05:00 | DoS Attacks and Defense Technologies in Blockchain Systems: A Hierarchical Analysis | arXiv:2507.22611v2 Announce Type: replace Abstract: Blockchain technology is widely used in various fields due to its ability to provide decentralization and trustless security. Many advocates hold this as a fundamental understanding, yet it is often misunderstood, leading participants to overlook the limits of the security that blockchain can actually provide. Among current network attacks, Denial of Service (DoS) attacks pose significant threats due to their ease of execution and destructive potential. Based on the blockchain architecture hierarchy, this paper categorizes and organizes existing DoS attacks, focusing on the principles and methods of contract-layer and consensus-layer DoS attacks. It further analyzes and compares commonly used detection methods and defense technologies, contributing to the security and stability of blockchain systems and promoting their further innovation and application. | https://arxiv.org/abs/2507.22611 | Academic Papers | svg |
| 43d6e2e78530faf2b67d4d3baf6c0daa265e0deff2570c6c540955470ddc72c2 | 2026-02-02T00:00:00-05:00 | ElectriQ: A Benchmark for Assessing the Response Capability of Large Language Models in Power Marketing | arXiv:2507.22911v2 Announce Type: replace Abstract: As power systems decarbonise and digitalise, high penetrations of distributed energy resources and flexible tariffs make electric power marketing (EPM) a key interface between regulation, system operation and sustainable-energy deployment. Many utilities still rely on human agents and rule- or intent-based chatbots with fragmented knowledge bases that struggle with long, cross-scenario dialogues and fall short of requirements for compliant, verifiable and DR-ready interactions. Meanwhile, frontier large language models (LLMs) show strong conversational ability but are evaluated on generic benchmarks that underweight sector-specific terminology, regulatory reasoning and multi-turn process stability. To address this gap, we present ElectriQ, a large-scale benchmark and evaluation framework for LLMs in EPM. ElectriQ contains over 550k dialogues across six service domains and 24 sub-scenarios and defines a unified protocol that combines human ratings, automatic metrics and two compliance stress tests: Statutory Citation Correctness and Long-Dialogue Consistency. Building on ElectriQ, we propose SEEK-RAG, a retrieval-augmented method that injects policy and domain knowledge during finetuning and inference. Experiments on 13 LLMs show that domain-aligned 7B models with SEEK-RAG match or surpass much larger models while reducing computational cost, providing an auditable, regulation-aware basis for deploying LLM-based EPM assistants that support demand-side management, renewable integration and resilient grid operation. | https://arxiv.org/abs/2507.22911 | Academic Papers | svg |
| 09b5e2d89f7c2b248bc75e1b585e0f9bdc62c97ea53decbe940ed9eebf040c17 | 2026-02-02T00:00:00-05:00 | Thinking Machines: Mathematical Reasoning in the Age of LLMs | arXiv:2508.00459v2 Announce Type: replace Abstract: Large Language Models (LLMs) have demonstrated impressive capabilities in structured reasoning and symbolic tasks, with coding emerging as a particularly successful application. This progress has naturally motivated efforts to extend these models to mathematics, both in its traditional form, expressed through natural-style mathematical language, and in its formalized counterpart, expressed in a symbolic syntax suitable for automatic verification. Yet, despite apparent parallels between programming and proof construction, advances in formalized mathematics have proven significantly more challenging. This gap raises fundamental questions about the nature of reasoning in current LLM architectures, the role of supervision and feedback, and the extent to which such models maintain an internal notion of computational or deductive state. In this article, we review the current state-of-the-art in mathematical reasoning with LLMs, focusing on recent models and benchmarks. We explore three central issues at the intersection of machine learning and mathematical cognition: (i) the trade-offs between traditional and formalized mathematics as training and evaluation domains; (ii) the structural and methodological reasons why proof synthesis remains more brittle than code generation; and (iii) whether LLMs genuinely represent or merely emulate a notion of evolving logical state. Our goal is not to draw rigid distinctions but to clarify the present boundaries of these systems and outline promising directions for their extension. | https://arxiv.org/abs/2508.00459 | Academic Papers | svg |
| b3661ebbd2a5080da616a18171cc52e8f98ab957bc816c082f846b0e1fe8a9c8 | 2026-02-02T00:00:00-05:00 | Benchmarking Foundation Models for Mitotic Figure Classification | arXiv:2508.04441v2 Announce Type: replace Abstract: The performance of deep learning models is known to scale with data quantity and diversity. In pathology, as in many other medical imaging domains, the availability of labeled images for a specific task is often limited. Self-supervised learning techniques have enabled the use of vast amounts of unlabeled data to train large-scale neural networks, i.e., foundation models, that can address the limited-data problem by providing semantically rich feature vectors that generalize well to new tasks with minimal training effort, increasing model performance and robustness. In this work, we investigate the use of foundation models for mitotic figure classification. The mitotic count, which can be derived from this classification task, is an independent prognostic marker for specific tumors and part of certain tumor grading systems. In particular, we investigate the data scaling laws on multiple current foundation models and evaluate their robustness to unseen tumor domains. Next to the commonly used linear probing paradigm, we also adapt the models using low-rank adaptation (LoRA) of their attention mechanisms. We compare all models against end-to-end-trained baselines, both CNNs and Vision Transformers. Our results demonstrate that LoRA-adapted foundation models provide superior performance to those adapted with standard linear probing, reaching performance levels close to those achieved with full data availability using only 10% of the training data. Furthermore, LoRA adaptation of the most recent foundation models almost closes the out-of-domain performance gap when evaluated on unseen tumor domains. However, full fine-tuning of traditional architectures still yields competitive performance. | https://arxiv.org/abs/2508.04441 | Academic Papers | svg |
| a5fed53080868e1ae0ea3f211a3b7826e7c343026a188e8320256080a794ec2e | 2026-02-02T00:00:00-05:00 | Matrix-Driven Identification and Reconstruction of LLM Weight Homology | arXiv:2508.06309v3 Announce Type: replace Abstract: We propose Matrix-Driven Identification and Reconstruction (MDIR), a SOTA large language model homology method that accurately detects weight correspondences between models and provides rigorous $p$-value estimation of the statistical significance of these correspondences. Our method does not require model inference, and allows the detection of unattributed reuse or replication of model weights even on low-resource devices as it compares only a single pair of matrices at a time. We leverage matrix analysis, polar decomposition, and Large Deviation Theory (LDT) to achieve accurate reconstruction of weight relationships between models. Notably, MDIR is the first method to achieve perfect scores on both Area-Under-Curve (AUC) and accuracy metrics across different source models on LeaFBench. | https://arxiv.org/abs/2508.06309 | Academic Papers | svg |
| b5c97ac2c5977d4cad4f6e1f1b67a32d276b09c7478b4df068bad548820a697b | 2026-02-02T00:00:00-05:00 | From Label Error Detection to Correction: A Modular Framework and Benchmark for Object Detection Datasets | arXiv:2508.06556v2 Announce Type: replace Abstract: Object detection has advanced rapidly in recent years, driven by increasingly large and diverse datasets. However, label errors often compromise the quality of these datasets and affect the outcomes of training and benchmark evaluations. Although label error detection methods for object detection datasets now exist, they are typically validated only on synthetic benchmarks or via limited manual inspection. How to correct such errors systematically and at scale remains an open problem. We introduce a semi-automated framework for label error correction called Rechecked. Building on existing label error detection methods, their error proposals are reviewed with lightweight, crowd-sourced microtasks. We apply Rechecked to the class pedestrian in the KITTI dataset, for which we crowdsourced high-quality corrected annotations. We find that 18% of the labels in the original ground truth are missing or inaccurate. We show that current label error detection methods, when combined with our correction framework, can recover hundreds of errors with little human effort compared to annotation from scratch. However, even the best methods still miss up to 66% of the label errors, which motivates further research, now enabled by our released benchmark. | https://arxiv.org/abs/2508.06556 | Academic Papers | svg |
| 0718e03f6a96b89679b50f41c7d14b982082278c3ad7db63bbcb6f5073262f71 | 2026-02-02T00:00:00-05:00 | QuiZSF: A Retrieval-Augmented Framework for Zero-Shot Time Series Forecasting | arXiv:2508.06915v2 Announce Type: replace Abstract: Accurate forecasting of sequential data streams is a cornerstone of modern Web services, supporting applications such as traffic management, user behavior modeling, and online anomaly prevention. However, in many Web environments, new domains emerge rapidly and labeled history data is scarce, which makes zero-shot forecasting particularly challenging. Existing time-series pre-trained models (TSPMs) show promise but they lack the ability to dynamically incorporate external knowledge, while conventional retrieval-augmented generation (RAG) methods are rarely extended beyond text. In this work, we present \textbf{QuiZSF}, a retrieval-augmented forecasting framework that integrates search and forecasting for time series data. The framework performs search by retrieving structurally similar sequences from a large-scale time-series database, and it performs forecasting by integrating the retrieved knowledge into the target sequence. Specifically, QuiZSF introduces a \textbf{ChronoRAG Base}, a hierarchical tree-structured database that enables scalable and domain-aware retrieval, a \textbf{Multi-grained Series Interaction Learner} that captures fine- and coarse-grained dependencies between target and retrieved sequences, and a \textbf{Model Cooperation Coherer} that adapts retrieved knowledge to TSPMs. This design teaches models to actively perform search, align auxiliary information across modalities, and leverage it for more accurate forecasting. Extensive experiments on five public benchmarks demonstrate that QuiZSF consistently outperforms strong baselines, ranking first in up to \textbf{87.5\%} of zero-shot forecasting settings while maintaining high efficiency. | https://arxiv.org/abs/2508.06915 | Academic Papers | svg |
| 436b7befcd50c1b19c9a959a48e047b2f5be0cb3d694f94584eef4fee521b047 | 2026-02-02T00:00:00-05:00 | Emergent morphogenesis via planar fabrication enabled by a reduced model of composites | arXiv:2508.08198v2 Announce Type: replace Abstract: The ability to engineer complex three-dimensional shapes from planar sheets with precise, programmable control underpins emerging technologies in soft robotics, reconfigurable devices, and functional materials. Here, we present a reduced-order numerical and experimental framework for a bilayer system consisting of a stimuli-responsive thermoplastic sheet (Shrinky Dink) bonded to a kirigami-patterned, inert plastic layer. Upon uniform heating, the active layer contracts while the patterned layer constrains in-plane stretch but allows out-of-plane bending, yielding programmable 3D morphologies from simple planar precursors. Our approach enables efficient computational design and scalable manufacturing of 3D forms with a single-layer reduced model that captures the coupled mechanics of stretching and bending. Unlike traditional bilayer modeling, our framework collapses the multilayer composite into a single layer of nodes and elements, reducing the degrees of freedom and enabling simulation on a 2D geometry. This is achieved by introducing a novel energy formulation that captures the coupling between in-plane stretch mismatch and out-of-plane bending, extending beyond simple isotropic linear elastic models. Experimentally, we establish a fully planar, repeatable fabrication protocol using a stimuli-responsive thermoplastic and a laser-cut inert plastic layer. The programmed strain mismatch drives an array of 3D morphologies, such as bowls, canoes, and flower petals, all verified by both simulation and physical prototypes. | https://arxiv.org/abs/2508.08198 | Academic Papers | svg |
| 12481fbf60aa7c9a2af974895afb8d872dcbea625607a55d5aa6315168142521 | 2026-02-02T00:00:00-05:00 | BiasGym: Fantastic LLM Biases and How to Find (and Remove) Them | arXiv:2508.08855v3 Announce Type: replace Abstract: Understanding biases and stereotypes encoded in the weights of Large Language Models (LLMs) is crucial for developing effective mitigation strategies. However, biased behaviour is often subtle and non-trivial to isolate, even when deliberately elicited, making systematic analysis and debiasing particularly challenging. To address this, we introduce \texttt{BiasGym}, a simple, cost-effective, and generalizable framework for reliably and safely injecting, analyzing, and mitigating conceptual associations of biases within LLMs. \texttt{BiasGym} consists of two components: \texttt{BiasInject}, which safely injects specific biases into the model via token-based fine-tuning while keeping the model frozen, and \texttt{BiasScope}, which leverages these injected signals to identify and reliably steer the components responsible for biased behavior. Our method enables consistent bias elicitation for mechanistic analysis, supports targeted debiasing without degrading performance on downstream tasks, and generalizes to biases unseen during fine-tuning. We demonstrate the effectiveness of BiasGym in reducing real-world stereotypes (e.g., people from Italy being `reckless drivers'), showing its utility for both safety interventions and interpretability research. | https://arxiv.org/abs/2508.08855 | Academic Papers | svg |
| 607d31706a32e0f3521570dd5aceb413a990056873a2587a9ec87027b41c2744 | 2026-02-02T00:00:00-05:00 | A Review On Safe Reinforcement Learning Using Lyapunov and Barrier Functions | arXiv:2508.09128v3 Announce Type: replace Abstract: Reinforcement learning (RL) has proven to be particularly effective in solving complex decision-making problems for a wide range of applications. From a control theory perspective, RL can be considered as an adaptive optimal control scheme. Lyapunov and barrier functions are the most commonly used certificates in control theoretic approaches to guarantee system stability for a proposed/derived controller and constraint satisfaction, respectively. However, compared to the theoretical guarantees available in control theoretic methods, RL lacks closed-loop stability of a computed policy and constraint satisfaction guarantees. Safe reinforcement learning refers to a class of constrained problems where constraint violations lead to partial or complete system failure. The goal of this review is to provide an overview of safe RL techniques that use Lyapunov and barrier functions to guarantee this notion of safety (closed-loop stability of a computed policy and constraint satisfaction during training and deployment). The different approaches employed are discussed in detail along with their shortcomings and benefits to provide critique and possible future research directions. A key motivation for this review is to discuss current theoretical approaches for safety and stability guarantees in RL that parallel control theoretic approaches using Lyapunov and barrier functions. The review highlights the proven potential and promising scope of providing safety guarantees for complex dynamical systems with operational constraints using model-based and model-free RL. | https://arxiv.org/abs/2508.09128 | Academic Papers | svg |
| 009da03ee0a2e3b68f723d40d3ec1ebfd6c6f0a56f41b5cdea877a1c81dca66e | 2026-02-02T00:00:00-05:00 | Multi-Level Safety Continual Projection for Fine-Tuned Large Language Models without Retraining | arXiv:2508.09190v4 Announce Type: replace Abstract: While fine-tuning services drive the rapid expansion of task capabilities in large language models (LLMs), they are often accompanied by the degradation and reorganization of safety-aligned representations, making models more prone to deviating from human preferences and exposing them to emerging jailbreak risks. Existing post-fine-tuning defense methods predominantly rely on single-scale safety correction mechanisms, which struggle to achieve a robust balance among safety, model utility, and continual adaptability. We propose Multi-Level Safety Continual Projection (MSCP), a training-free post-fine-tuning safety enhancement method that implicitly aligns global and localized safety activations through coordinated multi-level representations to isolate sparse neuron clusters governing safety-sensitive behaviors. It then applies composable safety-direction projections without retraining, effectively suppressing harmful outputs under minimal parameter perturbations while preserving task performance and improving alignment with human preferences. Extensive experiments across multiple fine-tuned LLMs demonstrate that our method significantly reduces harmfulness scores and attack success rates with minimal parameter modifications, while preserving the model's utility. Furthermore, we introduce a task-specific, multi-dimensional heterogeneous safety activation clustering mechanism that enables continual defense and generalization against unforeseen emerging safety concerns. | https://arxiv.org/abs/2508.09190 | Academic Papers | svg |
| 616c3f5067551df31d5a20abc466343103237a9cd5a2ae7948e751255d44a7c6 | 2026-02-02T00:00:00-05:00 | A Generalized Alternating Anderson Acceleration Method | arXiv:2508.10158v2 Announce Type: replace Abstract: In this work, we propose a generalized alternating Anderson acceleration method, a periodic scheme composed of $t$ fixed-point iteration steps, interleaved with $s$ steps of Anderson acceleration with window size $m$, to solve linear and nonlinear problems. This allows flexibility to use different combinations of fixed-point iteration and Anderson iteration. We present a convergence analysis of the proposed scheme for accelerating the Richardson iteration in the linear case, with a focus on specific parameter choices of interest. Specifically, we prove convergence of the proposed method under contractive fixed-point iteration and provide a sufficient condition for convergence when the Richardson iteration matrix is diagonalizable and noncontractive. To demonstrate the broader applicability of our proposed method, we use it to accelerate Jacobi iteration, Picard iteration, gradient descent, and the alternating direction method of multipliers in solving partial differential equations and nonlinear, nonsmooth optimization problems. The numerical results illustrate that the proposed scheme is more efficient than the existing windowed Anderson acceleration and alternating Anderson ($s=1$) in terms of iteration number and CPU time for careful choice of parameters $m, s, t$. | https://arxiv.org/abs/2508.10158 | Academic Papers | svg |
| ea1e398e077e199fe2823e529809f1868dadf58763398bab2294afa24a2d144b | 2026-02-02T00:00:00-05:00 | A Unified Evaluation Framework for Multi-Annotator Tendency Learning | arXiv:2508.10393v2 Announce Type: replace Abstract: Recent works have emerged in multi-annotator learning that shift focus from Consensus-oriented Learning (CoL), which aggregates multiple annotations into a single ground-truth prediction, to Individual Tendency Learning (ITL), which models annotator-specific labeling behavior patterns (i.e., tendency) to provide explanation analysis for understanding annotator decisions. However, no evaluation framework currently exists to assess whether ITL methods truly capture individual tendencies and provide meaningful behavioral explanations. To address this gap, we propose the first unified evaluation framework with two novel metrics: (1) Difference of Inter-annotator Consistency (DIC) quantifies how well models capture annotator tendencies by comparing predicted inter-annotator similarity structures with ground truth; (2) Behavior Alignment Explainability (BAE) evaluates how well model explanations reflect annotator behavior and decision relevance by aligning explainability-derived similarity structures with ground-truth labeling similarity structures via Multidimensional Scaling (MDS). Extensive experiments validate the effectiveness of our proposed evaluation framework. | https://arxiv.org/abs/2508.10393 | Academic Papers | svg |
| 2e672eed9fdcb52253cdf9690db14f5d92834679aa7f043e40059e7a17801dbb | 2026-02-02T00:00:00-05:00 | Spirals and Beyond: Competitive Plane Search with Multi-Speed Agents | arXiv:2508.10793v2 Announce Type: replace Abstract: We consider the problem of minimizing the worst-case search time for a hidden point target in the plane using multiple mobile agents of differing speeds, all starting from a common origin. The search time is normalized by the target's distance to the origin, following the standard convention in competitive analysis. The goal is to minimize the maximum such normalized time over all target locations, called the search cost. As a base case, we extend the known result for a single unit-speed agent, which achieves an optimal cost of about $\mathcal{U}_1 = 17.28935$ via a logarithmic spiral, to $n$ unit-speed agents. We give a symmetric spiral-based algorithm where each agent follows a logarithmic spiral offset by equal angular phases. This yields a search cost independent of which agent finds the target. We provide a closed-form upper bound $\mathcal{U}_n$ for this setting, which we use in our general result. Our main contribution is an upper bound on the worst-case normalized search time for $n$ agents with arbitrary speeds. We give a framework that selects a subset of agents and assigns spiral-type trajectories with speed-dependent angular offsets, again making the search cost independent of which agent reaches the target. A corollary shows that $n$ multi-speed agents (fastest speed 1) can beat $k$ unit-speed agents (cost below $\mathcal{U}_k$) if the geometric mean of their speeds exceeds $\mathcal{U}_n / \mathcal{U}_k$. This means slow agents may be excluded if they lower the mean too much, motivating non-spiral algorithms. We also give new upper bounds for point search in cones and conic complements using a single unit-speed agent. These are then used to design hybrid spiral-directional strategies, which outperform the spiral-based algorithms when some agents are slow. This suggests that spiral-type trajectories may not be optimal in the general multi-speed setting. | https://arxiv.org/abs/2508.10793 | Academic Papers | svg |
| 92cb60ef47526c6c686dbb94b8d7442c69d39e46154128f9609b45fa5659c795 | 2026-02-02T00:00:00-05:00 | DREAMS: Preserving both Local and Global Structure in Dimensionality Reduction | arXiv:2508.13747v2 Announce Type: replace Abstract: Dimensionality reduction techniques are widely used for visualizing high-dimensional data in two dimensions. Existing methods are typically designed to preserve either local (e.g., $t$-SNE, UMAP) or global (e.g., MDS, PCA) structure of the data, but none of the established methods can represent both aspects well. In this paper, we present DREAMS (Dimensionality Reduction Enhanced Across Multiple Scales), a method that combines the local structure preservation of $t$-SNE with the global structure preservation of PCA via a simple regularization term. Our approach generates a spectrum of embeddings between the locally well-structured $t$-SNE embedding and the globally well-structured PCA embedding, efficiently balancing both local and global structure preservation. We benchmark DREAMS across eleven real-world datasets, showcasing qualitatively and quantitatively its superior ability to preserve structure across multiple scales compared to previous approaches. | https://arxiv.org/abs/2508.13747 | Academic Papers | svg |
| feb635b3dc70cc9e610a9d296cc23d0cb72390a8ced7c9441ef3a9ae6c00dcdc | 2026-02-02T00:00:00-05:00 | GMOR: A Lightweight Robust Point Cloud Registration Framework via Geometric Maximum Overlapping | arXiv:2508.17427v2 Announce Type: replace Abstract: Point cloud registration based on correspondences computes the rigid transformation that maximizes the number of inliers constrained within the noise threshold. Current state-of-the-art (SOTA) methods employing spatial compatibility graphs or branch-and-bound (BnB) search mainly focus on registration under high outlier ratios. However, graph-based methods require at least quadratic space and time complexity for graph construction, while multi-stage BnB search methods often suffer from inaccuracy due to local optima between decomposed stages. This paper proposes a geometric maximum overlapping registration framework via rotation-only BnB search. The rigid transformation is decomposed using Chasles' theorem into a translation along the rotation axis and a 2D rigid transformation. The optimal rotation axis and angle are searched via BnB, with residual parameters formulated as range maximum query (RMQ) problems. Firstly, the top-k candidate rotation axes are searched within a hemisphere parameterized by cube mapping, and the translation along each axis is estimated through interval stabbing of the correspondences projected onto that axis. Secondly, the 2D registration is relaxed to 1D rotation angle search with 2D RMQ of geometric overlapping for axis-aligned rectangles, which is solved deterministically in polynomial time using a sweep line algorithm with a segment tree. Experimental results on indoor 3DMatch/3DLoMatch scanning and outdoor KITTI LiDAR datasets demonstrate superior accuracy and efficiency over SOTA methods, while the time complexity is polynomial and the space complexity increases linearly with the number of points, even in the worst case. | https://arxiv.org/abs/2508.17427 | Academic Papers | svg |
| fc601f612bbc6a73074455b402a7d81b2203e74c957460ebe4d28a8e91a2cbd1 | 2026-02-02T00:00:00-05:00 | MemoryVLA: Perceptual-Cognitive Memory in Vision-Language-Action Models for Robotic Manipulation | arXiv:2508.19236v2 Announce Type: replace Abstract: Temporal context is essential for robotic manipulation because such tasks are inherently non-Markovian, yet mainstream VLA models typically overlook it and struggle with long-horizon, temporally dependent tasks. Cognitive science suggests that humans rely on working memory to buffer short-lived representations for immediate control, while the hippocampal system preserves verbatim episodic details and the semantic gist of past experience for long-term memory. Inspired by these mechanisms, we propose MemoryVLA, a Cognition-Memory-Action framework for long-horizon robotic manipulation. A pretrained VLM encodes the observation into perceptual and cognitive tokens that form working memory, while a Perceptual-Cognitive Memory Bank stores low-level details and high-level semantics consolidated from it. Working memory retrieves decision-relevant entries from the bank, adaptively fuses them with current tokens, and updates the bank by merging redundancies. Using these tokens, a memory-conditioned diffusion action expert yields temporally aware action sequences. We evaluate MemoryVLA on 150+ simulation and real-world tasks across three robots. On the SimplerEnv-Bridge, Fractal, and LIBERO-5 suites and Mikasa-Robo, it achieves 71.9%, 72.7%, 96.5%, and 41.2% success rates, respectively, all outperforming state-of-the-art baselines CogACT and pi-0, with a notable +14.6 gain on Bridge and +11.8 gain on Mikasa-Robo. On 12 real-world tasks spanning general skills and long-horizon temporal dependencies, MemoryVLA achieves an 84.0% success rate, with long-horizon tasks showing a +26 improvement over the state-of-the-art baseline. Project Page: https://shihao1895.github.io/MemoryVLA | https://arxiv.org/abs/2508.19236 | Academic Papers | svg |
| a9f303dc6403d3e4a125094ebd64d1cf98c5b3c58593e3a13465ca37e08aa674 | 2026-02-02T00:00:00-05:00 | Quantum latent distributions in deep generative models | arXiv:2508.19857v2 Announce Type: replace Abstract: Many successful families of generative models leverage a low-dimensional latent distribution that is mapped to a data distribution. Though simple latent distributions are often used, the choice of distribution has a strong impact on model performance. Recent experiments have suggested that the probability distributions produced by quantum processors, which are typically highly correlated and classically intractable, can lead to improved performance on some datasets. However, when and why latent distributions produced by quantum processors can improve performance, and whether these improvements are connected to quantum properties of these distributions, are open questions that we investigate in this work. We show in theory that, under certain conditions, these "quantum latent distributions" enable generative models to produce data distributions that classical latent distributions cannot efficiently produce. We provide intuition as to the underlying mechanisms that could explain a performance advantage on real datasets. Based on this, we perform extensive benchmarking on a synthetic quantum dataset and the QM9 molecular dataset, using both simulated and real photonic quantum processors. We find that the statistics arising from quantum interference lead to improved generative performance compared to classical baselines, suggesting that quantum processors can play a role in expanding the capabilities of deep generative models. | https://arxiv.org/abs/2508.19857 | Academic Papers | svg |
| 7e1b4a6851856ebed9e420854ff4e90ee48787d20b32f9d63b0cf65e7bedbb49 | 2026-02-02T00:00:00-05:00 | Automatic Reviewers Fail to Detect Faulty Reasoning in Research Papers: A New Counterfactual Evaluation Framework | arXiv:2508.21422v2 Announce Type: replace Abstract: Large Language Models (LLMs) have great potential to accelerate and support scholarly peer review and are increasingly used as fully automatic review generators (ARGs). However, potential biases and systematic errors may pose significant risks to scientific integrity; understanding the specific capabilities and limitations of state-of-the-art ARGs is essential. We focus on a core reviewing skill that underpins high-quality peer review: detecting faulty research logic. This involves evaluating the internal consistency between a paper's results, interpretations, and claims. We present a fully automated counterfactual evaluation framework that isolates and tests this skill under controlled conditions. Testing a range of ARG approaches, we find that, contrary to expectation, flaws in research logic have no significant effect on their output reviews. Based on our findings, we derive three actionable recommendations for future work and release our counterfactual dataset and evaluation framework publicly. | https://arxiv.org/abs/2508.21422 | Academic Papers | svg |
a9ea3d982e954fb44f7c62fc21c6851007f9eb93cac2abbba98db50e66b1caba
|
2026-02-02T00:00:00-05:00
|
Social World Models
|
arXiv:2509.00559v2 Announce Type: replace Abstract: Humans intuitively navigate social interactions by simulating unspoken dynamics and reasoning about others' perspectives, even with limited information. In contrast, AI systems struggle to structure and reason about implicit social contexts, as they lack explicit representations for unobserved dynamics such as intentions, beliefs, and evolving social states. In this paper, we introduce the concept of social world models (SWMs) to characterize the complex social dynamics. To operationalize SWMs, we introduce a novel structured social world representation formalism (S3AP), which captures the evolving states, actions, and mental states of agents, addressing the lack of explicit structure in traditional free-text-based inputs. Through comprehensive experiments across five social reasoning benchmarks, we show that S3AP significantly enhances LLM performance-achieving a +51% improvement on FANToM over OpenAI's o1. Our ablations further reveal that these gains are driven by the explicit modeling of hidden mental states, which proves more effective than a wide range of baseline methods. Finally, we introduce an algorithm for social world models using S3AP, which enables AI agents to build models of their interlocutors and predict their next actions and mental states. Empirically, S3AP-enabled social world models yield up to +18% improvement on the SOTOPIA multi-turn social interaction benchmark. Our findings highlight the promise of S3AP as a powerful, general-purpose representation for social world states, enabling the development of more socially-aware systems that better navigate social interactions.
|
https://arxiv.org/abs/2509.00559
|
Academic Papers
|
svg
|
f3d18d9adabd1e72a80f6713d2935c6bbc943b29d6b7b10fa51e1507735506f8
|
2026-02-02T00:00:00-05:00
|
FLM-Audio: Natural Monologues Improves Native Full-Duplex Chatbots via Dual Training
|
arXiv:2509.02521v3 Announce Type: replace Abstract: Full-duplex dialog models aim to listen and speak simultaneously, delivering rapid responses to dynamic user input. Among different solutions to full-duplexity, a native solution merges multiple channels in each time step, achieving the lowest latency. However, prevailing designs break down the textual monologue sentences for word-level alignment with audio streams, which degrades language modeling abilities. To help address this issue, we introduce "contiguous monologues", which are composed of continuous sentences and "waiting" intervals, mimicking human-like cognitive behavior in dialogs. We find a proper training paradigm to be critical for semantically aligning contiguous monologues with audio. To this end, we develop a "dual" training paradigm that alternates the position of the monologues, either leading or trailing the audio, across different training stages. A combination of our contiguous monologue and dual training strategy is applied in developing FLM-Audio, our 7B spoken dialog chatbot with native full-duplexity. As confirmed by experimental results, FLM-Audio achieves superior response qualities and chatting experiences while requiring significantly less training data.
|
https://arxiv.org/abs/2509.02521
|
Academic Papers
|
svg
|
bf53040413310ed58d8d7e9d03aa2b1b3dfab740c331ae070896c89dd6fe9862
|
2026-02-02T00:00:00-05:00
|
TRACE: Unlocking Effective CXL Bandwidth via Lossless Compression and Precision Scaling
|
arXiv:2509.03377v3 Announce Type: replace Abstract: LLM inference is increasingly limited by memory bandwidth, and the bottleneck worsens at long context as the KV cache grows. CXL memory adds capacity to offload weights and KV, but its link and device-side DDR bandwidth are far below HBM, so decoding stalls once traffic shifts to the CXL tier. Many CXL controllers are starting to add generic lossless compression, yet applying commodity codecs directly to standard word-major LLM tensors is largely ineffective, especially for token-major KV streams. We propose TRACE (Traffic-Reduced Architecture for Compression and Elasticity), which preserves the unmodified CXL.mem interface but changes the device-internal representation. It stores tensors in a channel-major, disaggregated bit-plane layout, and applies a KV-specific transform before compression, converting mixed-field words into low-entropy plane streams that commodity codecs can compress. The same substrate enables precision-proportional fetch by reading only the required bit-planes. Across public LLMs, TRACE reduces the BF16 weight footprint by 25.2% and the BF16 KV footprint by 46.9% losslessly, with per-layer KV ratios peaking at 2.69×. In trace-driven system modeling, once KV spills to CXL, GPT-OSS-120B-MXFP4 improves throughput at 128k tokens from 16.28 to 68.99 tok/s (4.24×). DRAMSim3 shows up to 40.3% lower DRAM access energy under plane-aligned fetch. A 7 nm SystemVerilog implementation sustains 256 GB/s device bandwidth. Relative to a CXL controller with generic inline lossless compression, TRACE only adds 7.2% area, 4.7% power, and 6.0% load-to-use latency at 2 GHz and 0.7 V.
|
https://arxiv.org/abs/2509.03377
|
Academic Papers
|
svg
|
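The bit-plane layout at the heart of TRACE can be sketched in a few lines (our own toy illustration, not the authors' controller design; function names are ours): a 16-bit word stream is disaggregated into 16 one-bit planes, reassembly from all planes is lossless, and reading only the top planes models precision-proportional fetch.

```python
import numpy as np

def to_bit_planes(x):
    """Disaggregate a uint16 tensor (e.g. raw BF16 words) into 16 one-bit planes."""
    bits = np.arange(16)                       # plane 0 = least significant bit
    return ((x[..., None].astype(np.uint32) >> bits) & 1).astype(np.uint8)

def from_bit_planes(planes, top_k=16):
    """Reassemble from the top_k most significant planes only
    (top_k=16 is lossless; smaller values model precision-proportional fetch)."""
    bits = np.arange(16)
    keep = bits >= 16 - top_k                  # drop low-order planes first
    return ((planes.astype(np.uint32) << bits) * keep).sum(axis=-1).astype(np.uint16)

# BF16 words of similarly scaled values share sign/exponent bits, so the
# high-order planes are near-constant and easy for a generic lossless codec.
x = np.array([0x3F80, 0x3F81, 0x3F7E], dtype=np.uint16)
assert np.array_equal(from_bit_planes(to_bit_planes(x)), x)
```

The per-plane streams, rather than the original mixed-field words, are what the device-side codec would see in the scheme the abstract describes.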
c6553e3f4dd6a40301b98c0c27de210fb87e58104daa19e0b027dcc0e9eba06b
|
2026-02-02T00:00:00-05:00
|
SpiderNets: Vision Models Predict Human Fear From Aversive Images
|
arXiv:2509.04889v2 Announce Type: replace Abstract: Phobias are common and impairing, and exposure therapy, which involves confronting patients with fear-provoking visual stimuli, is the most effective treatment. Scalable computerized exposure therapy requires automated prediction of fear directly from image content to adapt stimulus selection and treatment intensity. Whether such predictions can be made reliably and generalize across individuals and stimuli, however, remains unknown. Here we show that pretrained convolutional and transformer vision models, adapted via transfer learning, accurately predict group-level perceived fear for spider-related images, even when evaluated on new people and new images, achieving a mean absolute error (MAE) below 10 units on the 0-100 fear scale. Visual explanation analyses indicate that predictions are driven by spider-specific regions in the images. Learning-curve analyses show that transformer models are data efficient and approach performance saturation with the available data (~300 images). Prediction errors increase for very low and very high fear levels and within specific categories of images. These results establish transparent, data-driven fear estimation from images, laying the groundwork for adaptive digital mental health tools.
|
https://arxiv.org/abs/2509.04889
|
Academic Papers
|
svg
|
28731e8b020f0d9384b42889b04312defcc32926397228dee7e3a6998c6140de
|
2026-02-02T00:00:00-05:00
|
AI for Scientific Discovery is a Social Problem
|
arXiv:2509.06580v5 Announce Type: replace Abstract: Artificial intelligence (AI) is being increasingly applied to scientific research, but its benefits remain unevenly distributed across different communities and disciplines. While technical challenges such as limited data, fragmented standards, and unequal access to computational resources are already well known, social and institutional factors are often the primary constraints. Narratives emphasizing autonomous "AI scientists," the underrecognition of data and infrastructure work, misaligned incentives, and gaps between domain experts and machine learning researchers all limit the impact of AI on scientific discovery. Four interconnected challenges are highlighted in this paper: community coordination, the misalignment of research priorities with upstream needs, data fragmentation, and infrastructure inequities. We argue that addressing these challenges requires not only technical innovations but also intentional community-building efforts, cross-disciplinary education, shared benchmarks, and accessible infrastructure. We call for reframing AI for science as a collective social project, where sustainable collaboration and equitable participation are treated as prerequisites for achieving technical progress.
|
https://arxiv.org/abs/2509.06580
|
Academic Papers
|
svg
|
701ffb600215de1673501aa5771fb9bcfb96b94fad4cc018d75e6f26503a56e3
|
2026-02-02T00:00:00-05:00
|
RAFFLES: Reasoning-based Attribution of Faults for LLM Systems
|
arXiv:2509.06822v3 Announce Type: replace Abstract: The advent of complex, interconnected long-horizon LLM systems has made it incredibly tricky to identify where and when these systems break down. Evaluation capabilities that currently exist today are limited in that they often focus on simple metrics, end-to-end outcomes, and are dependent on the perspectives of humans. In order to match the increasing complexity of these many component systems, evaluation frameworks must also be able to reason, probe, iterate, and understand the nuanced logic passing through these systems. In this paper, we present RAFFLES, an offline evaluation architecture that incorporates iterative reasoning. Specifically, RAFFLES operates as an iterative, multi-component pipeline, using a central Judge to systematically identify faults and a set of specialized Evaluators to assess the quality of the candidate faults as well as rationales of the Judge. We evaluated RAFFLES with several benchmarks - the Who&When dataset to identify step-level faults in multi-agent systems and the ReasonEval datasets to diagnose step-level mathematical reasoning errors. RAFFLES outperforms strong baselines, achieving an accuracy of over 20% and 50% on the Who&When Hand-Crafted and Algorithmically-Generated datasets, and over 80% on the ReasonEval datasets. These results demonstrate a key step towards introducing automated fault detection for autonomous systems over labor-intensive manual review.
|
https://arxiv.org/abs/2509.06822
|
Academic Papers
|
svg
|
91ccd4686adb276925cbf461dd489690a435447d3b30267691aae8367900eca0
|
2026-02-02T00:00:00-05:00
|
Leveraging AI Agents for Autonomous Networks: A Reference Architecture and Empirical Studies
|
arXiv:2509.08312v2 Announce Type: replace Abstract: The evolution toward Level 4 (L4) Autonomous Networks (AN) represents a strategic inflection point in telecommunications, where networks must transcend reactive automation to achieve genuine cognitive capabilities--fulfilling TM Forum's vision of self-configuring, self-healing, and self-optimizing systems that deliver zero-wait, zero-touch, and zero-fault services. This work bridges the gap between architectural theory and operational reality by implementing Joseph Sifakis's AN Agent reference architecture in a functional cognitive system, deploying coordinated proactive-reactive runtimes driven by hybrid knowledge representation. Through an empirical case study of a Radio Access Network (RAN) Link Adaptation (LA) Agent, we validate this framework's transformative potential: demonstrating sub-10 ms real-time control in 5G NR sub-6 GHz while achieving 4% higher downlink throughput than Outer Loop Link Adaptation (OLLA) algorithms and 85% Block Error Rate (BLER) reduction for ultra-reliable services through dynamic Modulation and Coding Scheme (MCS) optimization. These improvements confirm the architecture's viability in overcoming traditional autonomy barriers and advancing critical L4-enabling capabilities toward next-generation objectives.
|
https://arxiv.org/abs/2509.08312
|
Academic Papers
|
svg
|
196cb361fdb61cb9af41545bd0fd801fb99feb0e16e1c93aad4027ec5362648a
|
2026-02-02T00:00:00-05:00
|
HyperMOOC: Augmenting MOOC Videos with Concept-based Embedded Visualizations
|
arXiv:2509.08404v3 Announce Type: replace Abstract: Massive Open Online Courses (MOOCs) have become increasingly popular worldwide. However, learners primarily rely on watching videos, easily losing knowledge context and reducing learning effectiveness. We propose HyperMOOC, a novel approach augmenting MOOC videos with concept-based embedded visualizations to help learners maintain knowledge context. Informed by expert interviews and a literature review, HyperMOOC employs multi-glyph designs for different knowledge types and multi-stage interactions for deeper understanding. Using a timeline-based radial visualization, learners can grasp cognitive paths of concepts and navigate courses through hyperlink-based interactions. We evaluated HyperMOOC through a user study with 36 MOOC learners and interviews with two instructors. Results demonstrate that HyperMOOC enhances learners' learning effectiveness and efficiency on MOOCs, with participants showing higher satisfaction and improved course understanding compared to traditional video-based learning approaches.
|
https://arxiv.org/abs/2509.08404
|
Academic Papers
|
svg
|
ea8ee2428017a614d75b484783020499745d694ecfcdad6d968428b7b5ea80bb
|
2026-02-02T00:00:00-05:00
|
Feature Space Topology Control via Hopkins Loss
|
arXiv:2509.11154v2 Announce Type: replace Abstract: Feature space topology refers to the organization of samples within the feature space. Modifying this topology can be beneficial in machine learning applications, including dimensionality reduction, generative modeling, transfer learning, and robustness to adversarial attacks. This paper introduces a novel loss function, Hopkins loss, which leverages the Hopkins statistic to enforce a desired feature space topology, in contrast to existing topology-related methods that aim to preserve input feature topology. We evaluate the effectiveness of Hopkins loss on speech, text, and image data in two scenarios: classification and dimensionality reduction using nonlinear bottleneck autoencoders. Our experiments show that integrating Hopkins loss into classification or dimensionality reduction has only a small impact on classification performance while providing the benefit of modifying feature topology.
|
https://arxiv.org/abs/2509.11154
|
Academic Papers
|
svg
|
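The Hopkins statistic that the loss above builds on can be illustrated as follows (a plain NumPy sketch of the classical statistic, not the paper's differentiable loss; names are ours): compare nearest-neighbour distances of uniform probe points against those of sampled data points, giving H near 0.5 for unstructured data and H near 1 for clustered data.

```python
import numpy as np

def hopkins_statistic(X, m=None, seed=None):
    """Classical Hopkins statistic of a data matrix X (n samples x d features)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    m = m or max(1, n // 10)
    lo, hi = X.min(axis=0), X.max(axis=0)

    # u_i: distance from m uniform probe points to their nearest data point
    U = rng.uniform(lo, hi, size=(m, d))
    u = np.linalg.norm(U[:, None, :] - X[None, :, :], axis=2).min(axis=1)

    # w_i: distance from m sampled data points to their nearest other data point
    idx = rng.choice(n, size=m, replace=False)
    D = np.linalg.norm(X[idx][:, None, :] - X[None, :, :], axis=2)
    D[np.arange(m), idx] = np.inf              # exclude each point's self-distance
    w = D.min(axis=1)

    return u.sum() / (u.sum() + w.sum())       # in (0, 1); ~0.5 means no structure
```

A loss can then pull H toward a target value (say 0.9 to encourage clustered features); making that objective differentiable and trainable is what the paper's Hopkins loss addresses.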
98904e1fde571caa8b7d20fa7e638b48d566235f10159059f11f527d354b554c
|
2026-02-02T00:00:00-05:00
|
EgoMem: Lifelong Memory Agent for Full-duplex Omnimodal Models
|
arXiv:2509.11914v2 Announce Type: replace Abstract: We introduce EgoMem, the first lifelong memory agent tailored for full-duplex models that process real-time omnimodal streams. EgoMem enables real-time models to recognize multiple users directly from raw audiovisual streams, to provide personalized responses, and to maintain long-term knowledge of users' facts, preferences, and social relationships extracted from audiovisual history. EgoMem operates with three asynchronous processes: (i) a retrieval process that dynamically identifies users via face and voice, and gathers relevant context from a long-term memory; (ii) an omnimodal dialog process that generates personalized audio responses based on the retrieved context; and (iii) a memory management process that automatically detects dialog boundaries from omnimodal streams, and extracts necessary information to update the long-term memory. Unlike existing memory agents for LLMs, EgoMem relies entirely on raw audiovisual streams, making it especially suitable for lifelong, real-time, and embodied scenarios. Experimental results demonstrate that EgoMem's retrieval and memory management modules achieve over 95% accuracy on the test set. When integrated with a fine-tuned RoboEgo omnimodal chatbot, the system achieves fact-consistency scores above 87% in real-time personalized dialogs, establishing a strong baseline for future research.
|
https://arxiv.org/abs/2509.11914
|
Academic Papers
|
svg
|
bbc876baf691bb65624400463325de3bb82148bfea08f351202b3cff735b9ee9
|
2026-02-02T00:00:00-05:00
|
Information Loss and Disparate Effects in Network Embeddings
|
arXiv:2509.12396v2 Announce Type: replace Abstract: An extensive line of work studies fairness interventions for network embeddings, but less is known about their baseline behavior. In this work, we ask: how do baseline embeddings (without fairness interventions) produce disparate effects at the representation level? We analyze the asymptotic behavior of low-dimensional embeddings on stochastic block model (SBM) graphs, which encode both homophily and group structure. We characterize exact conditions under which embeddings cause information loss, showing that the amount of information loss depends directly on the graph's density and assortativity. Notably, very different graphs can produce identical embeddings in the limit, and this non-invertibility disproportionately affects smaller and sparser communities. As a result, simple downstream tasks, such as link prediction, introduce higher error rates for these communities, helping explain disparities widely observed in practice.
|
https://arxiv.org/abs/2509.12396
|
Academic Papers
|
svg
|
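The setting analyzed above can be reproduced in a few lines (a generic SBM-plus-spectral-embedding sketch under our own parameter choices, not the paper's exact model or proofs): sample a two-block SBM with one smaller, sparser community and embed the graph in two dimensions with the top eigenvectors of the adjacency matrix.

```python
import numpy as np

def sbm(sizes, P, seed=0):
    """Sample a symmetric stochastic block model adjacency matrix.

    sizes[k] nodes in block k; edge (i, j) appears with probability
    P[label_i, label_j].
    """
    rng = np.random.default_rng(seed)
    labels = np.repeat(np.arange(len(sizes)), sizes)
    probs = P[labels][:, labels]               # per-pair edge probabilities
    coin = rng.random((len(labels), len(labels))) < probs
    A = np.triu(coin, 1)                       # keep one coin flip per pair
    return (A + A.T).astype(float), labels

# Assortative graph with a smaller, sparser second community.
P = np.array([[0.30, 0.05],
              [0.05, 0.15]])
A, labels = sbm([80, 20], P)

# Low-dimensional embedding: top-2 eigenvectors of the adjacency matrix.
vals, vecs = np.linalg.eigh(A)
embedding = vecs[:, -2:]
```

Per the abstract's finding, it is exactly such small, sparse blocks whose structure the limiting embedding can fail to preserve.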
27571b1343060ba33dfbf5fa3f12115594949c7159f2c987febee1eeb6c5f34a
|
2026-02-02T00:00:00-05:00
|
Linear Complexity Computation of Code Distance and Minimum Size of Trapping Sets for LDPC Codes with Bounded Treewidth
|
arXiv:2509.13040v2 Announce Type: replace Abstract: It is well known that, given $b \ge 0$, finding an $(a,b)$-trapping set with the minimum $a$ in a binary linear code is NP-hard. In this paper, we demonstrate that this problem can be solved with linear complexity in the code length for codes with bounded treewidth. Furthermore, suppose a tree decomposition corresponding to the treewidth of the binary linear code is known. In that case, we also provide a specific algorithm to compute the minimum $a$ and the number of corresponding $(a,b)$-trapping sets for a given $b$ with linear complexity. Simulation experiments are presented to verify the correctness of the proposed algorithm.
|
https://arxiv.org/abs/2509.13040
|
Academic Papers
|
svg
|
1f02777b43cf44561aeda511e19396587e8f573e7dc21589a30f33e1012096d3
|
2026-02-02T00:00:00-05:00
|
Optimal Learning from Label Proportions with General Loss Functions
|
arXiv:2509.15145v2 Announce Type: replace Abstract: Motivated by problems in online advertising, we address the task of Learning from Label Proportions (LLP). We introduce a novel and versatile low-variance debiasing methodology to learn from aggregate label information, significantly advancing the state of the art in LLP. Our debiasing approach exhibits remarkable flexibility, seamlessly accommodating a broad spectrum of practically relevant loss functions across both binary and multi-class classification settings. By carefully combining our estimators with standard techniques, we improve sample complexity guarantees for a large class of losses of practical relevance. We also empirically validate the efficacy of our proposed approach across a diverse array of benchmark datasets, demonstrating compelling empirical advantages over standard baselines.
|
https://arxiv.org/abs/2509.15145
|
Academic Papers
|
svg
|
77435f12445f986197960c3518199098d389c4b11ef4afad6005d58be224ec27
|
2026-02-02T00:00:00-05:00
|
Self-Improvement of Language Models by Post-Training on Multi-Agent Debate
|
arXiv:2509.15172v3 Announce Type: replace Abstract: Self-improvement, where models improve beyond their current performance without external supervision, remains a challenge. The core difficulty is sourcing a training signal stronger than what the model itself can currently produce. Majority voting has been shown to provide such a signal by aggregating over multiple samples, helping mitigate some of the inconsistencies in LM reasoning. In this work, we show that multi-agent debate--where models collaborate and exchange reasoning over multiple rounds--provides an even richer signal than single-round majority voting. We introduce Multi-Agent Consensus Alignment (MACA), which uses reinforcement learning (RL) to post-train models to effectively utilize multi-agent debate. We find that preference learning over full reasoning traces, learning to differentiate between majority and minority reasoning, is more effective than binary consensus rewards or SFT-based approaches for leveraging these debate signals. This produces three key improvements: models are (1) better at utilizing the multi-agent debate setting (+26.87% on MATH), (2) individually more accurate (+21.51% on MathQA), and (3) more self-consistent (+27.6% on GSM8K). We also see strong generalization to unseen benchmarks (+16.3% on GPQA, +11.6% on CommonsenseQA).
|
https://arxiv.org/abs/2509.15172
|
Academic Papers
|
svg
|
06eef4782afadb5cc726154b42711637e82e7981700684d4f487c810bfe5abea
|
2026-02-02T00:00:00-05:00
|
Impact of Phonetics on Speaker Identity in Adversarial Voice Attack
|
arXiv:2509.15437v2 Announce Type: replace Abstract: Adversarial perturbations in speech pose a serious threat to automatic speech recognition (ASR) and speaker verification by introducing subtle waveform modifications that remain imperceptible to humans but can significantly alter system outputs. While targeted attacks on end-to-end ASR models have been widely studied, the phonetic basis of these perturbations and their effect on speaker identity remain underexplored. In this work, we analyze adversarial audio at the phonetic level and show that perturbations exploit systematic confusions such as vowel centralization and consonant substitutions. These distortions not only mislead transcription but also degrade phonetic cues critical for speaker verification, leading to identity drift. Using DeepSpeech as our ASR target, we generate targeted adversarial examples and evaluate their impact on speaker embeddings across genuine and impostor samples. Results across 16 phonetically diverse target phrases demonstrate that adversarial audio induces both transcription errors and identity drift, highlighting the need for phonetic-aware defenses to ensure the robustness of ASR and speaker recognition systems.
|
https://arxiv.org/abs/2509.15437
|
Academic Papers
|
svg
|
33900d50afa79c5b16e6169f41e16c78650cf284671d31a696a860b1108d5bc9
|
2026-02-02T00:00:00-05:00
|
Thinking in cocktail party: Chain-of-Thought and reinforcement learning for target speaker automatic speech recognition
|
arXiv:2509.15612v2 Announce Type: replace Abstract: Target Speaker Automatic Speech Recognition (TS-ASR) aims to transcribe the speech of a specified target speaker from multi-speaker mixtures in cocktail party scenarios. Recent advances in Large Audio-Language Models (LALMs) have already brought new insights to TS-ASR. However, significant room for optimization remains for the TS-ASR task within the LALM architecture. While Chain-of-Thought (CoT) and Reinforcement Learning (RL) have proven effective in certain speech tasks, TS-ASR, which requires the model to deeply comprehend speech signals, differentiate speakers, and handle overlapping utterances, is particularly well-suited to a reasoning-guided approach. Therefore, we propose a novel framework that incorporates CoT and RL training into TS-ASR for performance improvement. A novel CoT dataset for TS-ASR is constructed, and the TS-ASR model is first trained on regular data and then fine-tuned on CoT data. Finally, the model is further trained with RL on selected data to enhance generalized reasoning capabilities. Experimental results show a significant improvement in TS-ASR performance with CoT and RL training, demonstrating the effectiveness of the proposed CoT and RL training methods adapted for the TS-ASR task.
|
https://arxiv.org/abs/2509.15612
|
Academic Papers
|
svg
|
5dc07865443a551c18b95b0857af517266a474e940cf7f399e3e64f06eeb8b97
|
2026-02-02T00:00:00-05:00
|
CompSpoof: A Dataset and Joint Learning Framework for Component-Level Audio Anti-spoofing Countermeasures
|
arXiv:2509.15804v2 Announce Type: replace Abstract: Component-level audio spoofing (Comp-Spoof) targets a new form of audio manipulation where only specific components of a signal, such as speech or environmental sound, are forged or substituted while other components remain genuine. Existing anti-spoofing datasets and methods treat an utterance or a segment as entirely bona fide or entirely spoofed, and thus cannot accurately detect component-level spoofing. To address this, we construct a new dataset, CompSpoof, covering multiple combinations of bona fide and spoofed speech and environmental sound. We further propose a separation-enhanced joint learning framework that separates audio components and applies anti-spoofing models to each one. Joint learning is employed, preserving information relevant for detection. Extensive experiments demonstrate that our method outperforms the baseline, highlighting the necessity of separating components and of detecting spoofing for each component individually. Datasets and code are available at: https://github.com/XuepingZhang/CompSpoof.
|
https://arxiv.org/abs/2509.15804
|
Academic Papers
|
svg
|
56d6da56d23e01b187a6a3306e2751cbe05abf74a8853bee17aa49e186f6f981
|
2026-02-02T00:00:00-05:00
|
FESTA: Functionally Equivalent Sampling for Trust Assessment of Multimodal LLMs
|
arXiv:2509.16648v4 Announce Type: replace Abstract: The accurate trust assessment of predictions generated by multimodal large language models (MLLMs), which can enable selective prediction and improve user confidence, is challenging due to the diverse multi-modal input paradigms. We propose Functionally Equivalent Sampling for Trust Assessment (FESTA), a multimodal input sampling technique for MLLMs that generates an uncertainty measure based on equivalent and complementary input samplings. The proposed task-preserving sampling approach for uncertainty quantification expands the input space to probe the consistency (through equivalent samples) and sensitivity (through complementary samples) of the model. FESTA uses only input-output access of the model (black-box), and does not require ground truth (unsupervised). The experiments are conducted with various off-the-shelf multimodal LLMs, on both visual and audio reasoning tasks. The proposed FESTA uncertainty estimate achieves significant improvement (33.3% relative improvement for vision-LLMs and 29.6% relative improvement for audio-LLMs) in selective prediction performance, based on the area-under-receiver-operating-characteristic curve (AUROC) metric in detecting mispredictions. The code implementation is open-sourced.
|
https://arxiv.org/abs/2509.16648
|
Academic Papers
|
svg
|
ca80f721810dc9bc29ac8422f739e9bc419129785fc7637131b5fb805398e964
|
2026-02-02T00:00:00-05:00
|
Accurate and Efficient Low-Rank Model Merging in Core Space
|
arXiv:2509.17786v4 Announce Type: replace Abstract: In this paper, we address the challenges associated with merging low-rank adaptations of large neural networks. With the rise of parameter-efficient adaptation techniques, such as Low-Rank Adaptation (LoRA), model fine-tuning has become more accessible. While fine-tuning models with LoRA is highly efficient, existing merging methods often sacrifice this efficiency by merging fully-sized weight matrices. We propose the Core Space merging framework, which enables the merging of LoRA-adapted models within a common alignment basis, thereby preserving the efficiency of low-rank adaptation while substantially improving accuracy across tasks. We further provide a formal proof that projection into Core Space ensures no loss of information and provide a complexity analysis showing the efficiency gains. Extensive empirical results demonstrate that Core Space significantly improves existing merging techniques and achieves state-of-the-art results on both vision and language tasks while utilizing a fraction of the computational resources. Codebase is available at https://github.com/apanariello4/core-space-merging.
|
https://arxiv.org/abs/2509.17786
|
Academic Papers
|
svg
|
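The efficiency problem the paper addresses is easy to demonstrate (our own toy, not the Core Space method): naively averaging two LoRA-style updates B @ A yields a full matrix whose rank is the sum of the parts, so the low-rank representation, and with it the storage advantage, is lost.

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 64, 4

# Two LoRA-style task updates, each stored as a rank-r product B @ A.
updates = [rng.normal(size=(d, r)) @ rng.normal(size=(r, d)) for _ in range(2)]

merged = 0.5 * (updates[0] + updates[1])      # naive full-matrix merge

assert np.linalg.matrix_rank(updates[0]) == r         # each update is rank r
assert np.linalg.matrix_rank(merged) == 2 * r         # the merge is rank 2r
```

Merging inside a shared low-rank alignment basis, as the abstract describes, is what lets the merged model stay as cheap as a single adaptation.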
143931fd523f3d0b848858058fd2db04e822524f733ca1566dc70788bfc5e8d8
|
2026-02-02T00:00:00-05:00
|
A Scalable Lift-and-Project Differentiable Approach For the Maximum Cut Problem
|
arXiv:2509.18612v2 Announce Type: replace Abstract: We propose a scalable framework for solving the Maximum Cut (MaxCut) problem in large graphs using projected gradient ascent on quadratic objectives. Our approach is differentiable and leverages GPUs for gradient-based optimization. It is not a machine learning method and does not require training data. Starting from a continuous relaxation of the classical quadratic binary formulation, we present a parallelized strategy that explores multiple initialization vectors in batch. We analyze the relaxed objective, showing it is convex and has fixed-points corresponding to local optima, particularly at boundary points, highlighting a key challenge in non-convex optimization. To improve exploration, we introduce a lifted quadratic formulation that over-parameterizes the solution space. We also provide a theoretical characterization of these lifted fixed-points. Finally, we propose DECO, a dimension-alternating algorithm that switches between the unlifted and lifted formulations, combined with importance-based degree initialization and a population-based evolutionary hyper-parameter search. Experiments on diverse graph families show that our methods attain comparable or superior performance relative to recent neural networks and GPU-accelerated sampling approaches.
|
https://arxiv.org/abs/2509.18612
|
Academic Papers
|
svg
|
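The starting point described above, projected gradient ascent on the continuous relaxation with a batch of initializations, can be sketched as follows (a generic illustration under our own parameter choices, not the authors' DECO algorithm):

```python
import numpy as np

def maxcut_pga(W, steps=300, lr=0.05, batch=8, seed=0):
    """Projected gradient ascent on the relaxed MaxCut objective
    f(x) = 0.25 * sum_ij W_ij (1 - x_i x_j) over the box x in [-1, 1]^n."""
    rng = np.random.default_rng(seed)
    n = W.shape[0]
    X = rng.uniform(-1.0, 1.0, size=(batch, n))   # batch of initialization vectors
    for _ in range(steps):
        grad = -0.5 * X @ W                       # df/dx_i = -0.5 * sum_j W_ij x_j
        X = np.clip(X + lr * grad, -1.0, 1.0)     # ascent step, then box projection
    S = np.where(X >= 0, 1.0, -1.0)               # round to a +/-1 cut assignment
    cut_values = 0.25 * (W.sum() - np.einsum('bi,ij,bj->b', S, W, S))
    return S, cut_values

# Triangle graph K3: every 2-vs-1 split cuts two unit edges, so the optimum is 2.
W = np.ones((3, 3)) - np.eye(3)
S, cuts = maxcut_pga(W)
assert np.isclose(cuts.max(), 2.0)
```

The boundary fixed points discussed in the abstract are visible here: trajectories settle on corners of the box, which is what motivates the lifted formulation for escaping poor ones.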
052844e4eb8c206e898192cd526fd80efa489a2c69e2b7527f18432175c8917b
|
2026-02-02T00:00:00-05:00
|
Latent Iterative Refinement Flow: A Geometric Constrained Approach for Few-Shot Generation
|
arXiv:2509.19903v2 Announce Type: replace Abstract: Diffusion and flow-matching models trained with limited data often tend to memorize the training data instead of generalizing, leading to severely reduced diversity. In this paper, we provide a dynamical perspective and identify this "collapse-to-memorization" phenomenon as a consequence of velocity field collapse, where the learned field degenerates into isolated point attractors that trap the sampling trajectories. Inspired by this novel view, we introduce Latent Iterative Refinement Flow (LIRF), a geometry-aware framework for from-scratch training of diffusion models in the limited-data regime. By exploiting the intrinsic geometry of a semantically aligned latent space, LIRF progressively densifies the training data manifold via a generation-correction-augmentation closed loop, thereby effectively resolving the velocity field collapse. A theoretical guarantee on the convergence of this manifold densification procedure is also provided. Experiments on FFHQ subsets and low-shot datasets demonstrate the advantageous performance of LIRF over existing diffusion models for limited-data generation, achieving significantly higher diversity and recall, with comparably good generative performance.
|
https://arxiv.org/abs/2509.19903
|
Academic Papers
|
svg
|
3efa3b029ed9782310f9a31b7cb90ca14081394a12ca82b7e06f6f60508a2a83
|
2026-02-02T00:00:00-05:00
|
Towards Atoms of Large Language Models
|
arXiv:2509.20784v2 Announce Type: replace Abstract: The fundamental representational units (FRUs) of large language models (LLMs) remain undefined, limiting further understanding of their underlying mechanisms. In this paper, we introduce Atom Theory to systematically define, evaluate, and identify such FRUs, which we term atoms. Building on the atomic inner product (AIP), a non-Euclidean metric that captures the underlying geometry of LLM representations, we formally define atoms and propose two key criteria for ideal atoms: faithfulness ($R^2$) and stability ($q^*$). We further prove that atoms are identifiable under threshold-activated sparse autoencoders (TSAEs). Empirically, we uncover a pervasive representation shift in LLMs and demonstrate that the AIP corrects this shift to capture the underlying representational geometry, thereby grounding Atom Theory. We find that two widely used units, neurons and features, fail to qualify as ideal atoms: neurons are faithful ($R^2=1$) but unstable ($q^*=0.5\%$), while features are more stable ($q^*=68.2\%$) but unfaithful ($R^2=48.8\%$). To find atoms of LLMs, leveraging atom identifiability under TSAEs, we show via large-scale experiments that reliable atom identification occurs only when the TSAE capacity matches the data scale. Guided by this insight, we identify FRUs with near-perfect faithfulness ($R^2=99.9\%$) and stability ($q^*=99.8\%$) across layers of Gemma2-2B, Gemma2-9B, and Llama3.1-8B, satisfying the criteria of ideal atoms statistically. Further analysis confirms that these atoms align with theoretical expectations and exhibit substantially higher monosemanticity. Overall, we propose and validate Atom Theory as a foundation for understanding the internal representations of LLMs. Code available at https://github.com/ChenhuiHu/towards_atoms.
|
https://arxiv.org/abs/2509.20784
|
Academic Papers
|
svg
|
660904941339b13e5409732622822979ba208a9d968ee211e7abfcab2d87e594
|
2026-02-02T00:00:00-05:00
|
LAVA: Explainability for Unsupervised Latent Embeddings
|
arXiv:2509.21149v2 Announce Type: replace Abstract: Unsupervised black-box models are drivers of scientific discovery, yet are difficult to interpret, as their output is often a multidimensional embedding rather than a well-defined target. While explainability for supervised learning uncovers how input features contribute to predictions, its unsupervised counterpart should relate input features to the structure of the learned embeddings. However, adaptations of supervised model explainability for unsupervised learning provide either single-sample or dataset-summary explanations, remaining too fine-grained or reductive to be meaningful, and cannot explain embeddings without mapping functions. To bridge this gap, we propose LAVA, a post-hoc model-agnostic method to explain local embedding organization through feature covariation in the original input data. LAVA explanations comprise modules, capturing local subpatterns of input feature correlation that reoccur globally across the embeddings. LAVA delivers stable explanations at a desired level of granularity, revealing domain-relevant patterns such as visual parts of images or disease signals in cellular processes, otherwise missed by existing methods.
|
https://arxiv.org/abs/2509.21149
|
Academic Papers
|
svg
|
4312ca9941ceed5a5124c47438724701ce7082b5d6cbd13a4b0167cc547fcbe4
|
2026-02-02T00:00:00-05:00
|
It's Not You, It's Clipping: A Soft Trust-Region via Probability Smoothing for LLM RL
|
arXiv:2509.21282v2 Announce Type: replace Abstract: Training large language models (LLMs) with reinforcement learning (RL) methods such as PPO and GRPO commonly relies on ratio clipping to stabilise updates. While effective at preventing instability, clipping discards information, introduces gradient discontinuities and can prevent exploration of better policies. Inspired by label smoothing, we propose Probability Smoothing Policy Optimisation (PSPO). PSPO smooths current policy probabilities toward the behaviour policy before computing importance ratios, creating a soft trust region that preserves gradients while preventing destabilising updates. Unlike prior soft clipping approaches that use sigmoid-based transformations which can suffer from vanishing gradients and saturation, our method uses a linear interpolation, providing simpler and more robust gradient preservation. Empirically, GR-PSPO outperforms clipping and sigmoid-based alternatives on mathematical reasoning benchmarks when refining models with prior domain knowledge, achieving an accuracy of 79.9% on GSM8K and 59.6% on MATH for Qwen2-Math-1.5B.
|
https://arxiv.org/abs/2509.21282
|
Academic Papers
|
svg
|
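The smoothing step described in the PSPO abstract above can be sketched as follows. This is a minimal illustration of the idea, not the paper's implementation; `alpha` is an assumed smoothing coefficient.

```python
import math

def pspo_ratio(logp_current, logp_behaviour, alpha=0.1):
    """Soft trust region via probability smoothing (a sketch of the
    paper's idea). Instead of hard-clipping the importance ratio, the
    current policy's probability is linearly interpolated toward the
    behaviour policy's before the ratio is formed, so the ratio is
    bounded away from zero and gradients never cut off abruptly at a
    clip boundary."""
    p_cur = math.exp(logp_current)
    p_beh = math.exp(logp_behaviour)
    p_smooth = (1.0 - alpha) * p_cur + alpha * p_beh  # label-smoothing-style mix
    return p_smooth / p_beh  # importance ratio with a built-in soft trust region
```

When the two policies agree the ratio is exactly 1, and for a vanishing current-policy probability it bottoms out at `alpha` instead of 0, which is the gradient-preservation property the abstract contrasts with clipping.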
5d687456588fe4ae5019b9f1e8e1ea3e7bc535745dcc22d1978abb5f345418ad
|
2026-02-02T00:00:00-05:00
|
Filtering with Confidence: When Data Augmentation Meets Conformal Prediction
|
arXiv:2509.21479v2 Announce Type: replace Abstract: With promising empirical performance across a wide range of applications, synthetic data augmentation appears a viable solution to data scarcity and the demands of increasingly data-intensive models. Its effectiveness lies in expanding the training set in a way that reduces estimator variance while introducing only minimal bias. Controlling this bias is therefore critical: effective data augmentation should generate diverse samples from the same underlying distribution as the training set, with minimal shifts. In this paper, we propose conformal data augmentation, a principled data filtering framework that leverages the power of conformal prediction to produce diverse synthetic data while filtering out poor-quality generations with provable risk control. Our method is simple to implement, requires no access to internal model logits, nor large-scale model retraining. We demonstrate the effectiveness of our approach across multiple tasks, including topic prediction, sentiment analysis, image classification, and fraud detection, showing consistent performance improvements of up to 40 percentage points (pp) in $F_1$ score over unaugmented baselines, and 4~pp over other filtered augmentation baselines.
|
https://arxiv.org/abs/2509.21479
|
Academic Papers
|
svg
|
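The filtering idea in the conformal data augmentation abstract above can be sketched with a standard split-conformal threshold. This is a generic sketch of conformal filtering, not the paper's exact procedure; the choice of nonconformity score is left to the caller.

```python
import numpy as np

def conformal_filter(cal_scores, synth_scores, alpha=0.1):
    """Keep only synthetic samples whose nonconformity score falls
    below a split-conformal threshold fit on held-out calibration data.
    A sketch of the general technique, not the paper's implementation."""
    n = len(cal_scores)
    # finite-sample-corrected quantile level, standard in split conformal
    q_level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
    threshold = np.quantile(cal_scores, q_level)
    # True = sample looks like the calibration distribution, keep it
    return synth_scores <= threshold
```

The key property matching the abstract is risk control without model internals: only scalar scores are needed, no logits or retraining.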
0e62eaae0089441fc02513448347d107049c49a1174dbcccba8517bb2232ed89
|
2026-02-02T00:00:00-05:00
|
Incentives in Federated Learning with Heterogeneous Agents
|
arXiv:2509.21612v2 Announce Type: replace Abstract: Federated learning promises significant sample-efficiency gains by pooling data across multiple agents, yet incentive misalignment is an obstacle: each update is costly to the contributor but boosts every participant. We introduce a game-theoretic framework that captures heterogeneous data: an agent's utility depends on who supplies each sample, not just how many. Agents aim to meet a PAC-style accuracy threshold at minimal personal cost. We show that uncoordinated play yields pathologies: pure equilibria may not exist, and the best equilibrium can be arbitrarily more costly than cooperation. To steer collaboration, we analyze the cost-minimizing contribution vector, prove that computing it is NP-hard, and derive a polynomial-time linear program that achieves a logarithmic approximation. Finally, pairing the LP with a simple pay what you contribute rule, where each agent receives a payment equal to its sample cost, yields a mechanism that is strategy-proof and, within the class of contribution-based transfers, is unique.
|
https://arxiv.org/abs/2509.21612
|
Academic Papers
|
svg
|
8876590e9379745701157eb1242e3713895e0ef61d00aed8db5f40c5a55ce57f
|
2026-02-02T00:00:00-05:00
|
Lifelong Learning with Behavior Consolidation for Vehicle Routing
|
arXiv:2509.21765v3 Announce Type: replace Abstract: Recent neural solvers have demonstrated promising performance in learning to solve routing problems. However, existing studies are primarily based on one-off training on one or a set of predefined problem distributions and scales, i.e., tasks. When a new task arises, they typically rely on either zero-shot generalization, which may be poor due to the discrepancies between the new task and the training task(s), or fine-tuning the pretrained solver on the new task, which possibly leads to catastrophic forgetting of knowledge acquired from previous tasks. This paper explores a novel lifelong learning paradigm for neural VRP solvers, where multiple tasks with diverse distributions and scales arise sequentially over time. Solvers are required to effectively and efficiently learn to solve new tasks while maintaining their performance on previously learned tasks. Consequently, a novel framework called Lifelong Learning Router with Behavior Consolidation (LLR-BC) is proposed. LLR-BC consolidates prior knowledge effectively by aligning behaviors of the solver trained on a new task with the buffered ones in a decision-seeking way. To encourage more focus on crucial experiences, LLR-BC assigns greater consolidated weights to decisions with lower confidence. Extensive experiments on capacitated vehicle routing problems and traveling salesman problems demonstrate LLR-BC's effectiveness in training high-performance neural solvers in a lifelong learning setting, addressing the catastrophic forgetting issue, maintaining their plasticity, and improving zero-shot generalization ability.
|
https://arxiv.org/abs/2509.21765
|
Academic Papers
|
svg
|
c2f05b8f7b2f8c962949ddda41390428dd7e403aece4073cce868e13b7f6e548
|
2026-02-02T00:00:00-05:00
|
SimulSense: Sense-Driven Interpreting for Efficient Simultaneous Speech Translation
|
arXiv:2509.21932v2 Announce Type: replace Abstract: How to make human-interpreter-like read/write decisions for simultaneous speech translation (SimulST) systems? Current state-of-the-art systems formulate SimulST as a multi-turn dialogue task, requiring specialized interleaved training data and relying on computationally expensive large language model (LLM) inference for decision-making. In this paper, we propose SimulSense, a novel framework for SimulST that mimics human interpreters by continuously reading input speech and triggering write decisions to produce translation when a new sense unit is perceived. Experiments against two state-of-the-art baseline systems demonstrate that our proposed method achieves a superior quality-latency tradeoff and substantially improved real-time efficiency, where its decision-making is up to 9.6x faster than the baselines.
|
https://arxiv.org/abs/2509.21932
|
Academic Papers
|
svg
|
d6b8e838a99318a38e95f27f1b93c496ff2035af2eb47ade0770a3822e3a53d3
|
2026-02-02T00:00:00-05:00
|
Collaborative Belief Reasoning with LLMs for Efficient Multi-Agent Collaboration
|
arXiv:2509.21981v3 Announce Type: replace Abstract: Effective real-world multi-agent collaboration requires not only accurate planning but also the ability to reason about collaborators' intents--a crucial capability for avoiding miscoordination and redundant communication under partially observable environments. Due to their strong planning and reasoning capabilities, large language models (LLMs) have emerged as promising autonomous agents for collaborative task solving. However, existing collaboration frameworks for LLMs overlook their reasoning potential for dynamic intent inference, and thus produce inconsistent plans and redundant communication, reducing collaboration efficiency. To bridge this gap, we propose CoBel-World, a novel framework that equips LLM agents with a Collaborative Belief World--an internal representation jointly modeling the physical environment and collaborators' mental states. CoBel-World enables agents to parse external open-world knowledge into structured beliefs via a symbolic belief representation module, and perform zero-shot Bayesian-style belief updates through LLM reasoning. This allows agents to proactively detect potential miscoordination (e.g., conflicting plans) and communicate adaptively. Evaluated on challenging embodied benchmarks (i.e., TDW-MAT and C-WAH), CoBel-World significantly reduces communication costs by 64-79% and improves task completion efficiency by 4-28% compared to the strongest baseline. Our results show that explicit, intent-aware belief modeling is essential for efficient and human-like collaboration in LLM-based multi-agent systems.
|
https://arxiv.org/abs/2509.21981
|
Academic Papers
|
svg
|
48a2f1a559f8dfb71e4fc994bf642239d7031d773dcf2e47a0c15333a6259ac6
|
2026-02-02T00:00:00-05:00
|
Towards a more realistic evaluation of machine learning models for bearing fault diagnosis
|
arXiv:2509.22267v2 Announce Type: replace Abstract: Reliable detection of bearing faults is essential for maintaining the safety and operational efficiency of rotating machinery. While recent advances in machine learning (ML), particularly deep learning, have shown strong performance in controlled settings, many studies fail to generalize to real-world applications due to methodological flaws, most notably data leakage. This paper investigates the issue of data leakage in vibration-based bearing fault diagnosis and its impact on model evaluation. We demonstrate that common dataset partitioning strategies, such as segment-wise and condition-wise splits, introduce spurious correlations that inflate performance metrics. To address this, we propose a rigorous, leakage-free evaluation methodology centered on bearing-wise data partitioning, ensuring no overlap between the physical components used for training and testing. Additionally, we reformulate the classification task as a multi-label problem, enabling the detection of co-occurring fault types and the use of prevalence-independent metrics such as Macro AUROC. Beyond preventing leakage, we also examine the effect of dataset diversity on generalization, showing that the number of unique training bearings is a decisive factor for achieving robust performance. We evaluate our methodology on three widely adopted datasets: CWRU, Paderborn University (PU), and University of Ottawa (UORED-VAFCLS). This study highlights the importance of leakage-aware evaluation protocols and provides practical guidelines for dataset partitioning, model selection, and validation, fostering the development of more trustworthy ML systems for industrial fault diagnosis applications.
|
https://arxiv.org/abs/2509.22267
|
Academic Papers
|
svg
|
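The bearing-wise partitioning protocol described in the abstract above can be sketched in a few lines. `samples` is assumed to be a list of `(bearing_id, segment)` pairs; this is a generic illustration of the leakage-free split, not the paper's exact implementation.

```python
def bearing_wise_split(samples, test_bearings):
    """Leakage-free partition: every vibration segment recorded from a
    given physical bearing goes entirely to train or entirely to test,
    so no component is shared across the split (the failure mode of
    segment-wise and condition-wise splits criticised in the paper)."""
    train, test = [], []
    for bearing_id, segment in samples:
        (test if bearing_id in test_bearings else train).append((bearing_id, segment))
    # sanity check: no bearing appears on both sides of the split
    assert not ({b for b, _ in train} & {b for b, _ in test})
    return train, test
```

Grouping by physical component rather than by recording segment is what removes the spurious correlations the abstract attributes to inflated benchmark scores.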
9f40d1e013de18c15a62e3e6defd51c5c2e9c6764d58cc2b064f5495003d395a
|
2026-02-02T00:00:00-05:00
|
ChatInject: Abusing Chat Templates for Prompt Injection in LLM Agents
|
arXiv:2509.22830v2 Announce Type: replace Abstract: The growing deployment of large language model (LLM) based agents that interact with external environments has created new attack surfaces for adversarial manipulation. One major threat is indirect prompt injection, where attackers embed malicious instructions in external environment output, causing agents to interpret and execute them as if they were legitimate prompts. While previous research has focused primarily on plain-text injection attacks, we find a significant yet underexplored vulnerability: LLMs' dependence on structured chat templates and their susceptibility to contextual manipulation through persuasive multi-turn dialogues. To this end, we introduce ChatInject, an attack that formats malicious payloads to mimic native chat templates, thereby exploiting the model's inherent instruction-following tendencies. Building on this foundation, we develop a persuasion-driven Multi-turn variant that primes the agent across conversational turns to accept and execute otherwise suspicious actions. Through comprehensive experiments across frontier LLMs, we demonstrate three critical findings: (1) ChatInject achieves significantly higher average attack success rates than traditional prompt injection methods, improving from 5.18% to 32.05% on AgentDojo and from 15.13% to 45.90% on InjecAgent, with multi-turn dialogues showing particularly strong performance at average 52.33% success rate on InjecAgent, (2) chat-template-based payloads demonstrate strong transferability across models and remain effective even against closed-source LLMs, despite their unknown template structures, and (3) existing prompt-based defenses are largely ineffective against this attack approach, especially against Multi-turn variants. These findings highlight vulnerabilities in current agent systems.
|
https://arxiv.org/abs/2509.22830
|
Academic Papers
|
svg
|
66146c0e928e28286ee4132eb7e1d6ea50ea7ed61c1a6bd472ab65fde81a38a5
|
2026-02-02T00:00:00-05:00
|
On the Separability of Information in Diffusion Models
|
arXiv:2509.23937v4 Announce Type: replace Abstract: Diffusion models transform noise into data by injecting information that was captured in their neural network during the training phase. In this paper, we ask: \textit{what} is this information? We find that, in pixel-space diffusion models, (1) a large fraction of the total information in the neural network is committed to reconstructing small-scale perceptual details of the image, and (2) the correlations between images and their class labels are informed by the semantic content of the images, and are largely agnostic to the low-level details. We argue that these properties are intrinsically tied to the manifold structure of the data itself. Finally, we show that these facts explain the efficacy of classifier-free guidance: the guidance vector amplifies the mutual information between images and conditioning signals early in the generative process, influencing semantic structure, but tapers out as perceptual details are filled in.
|
https://arxiv.org/abs/2509.23937
|
Academic Papers
|
svg
|
14e295079d58416d6321866becf07ee0d4ef200885e6fa8563f9e87a87ee1e5a
|
2026-02-02T00:00:00-05:00
|
Fidel-TS: A High-Fidelity Multimodal Benchmark for Time Series Forecasting
|
arXiv:2509.24789v3 Announce Type: replace Abstract: The evaluation of time series forecasting models is hindered by a critical lack of high-quality benchmarks, leading to a potential illusion of progress. Existing datasets suffer from issues ranging from pre-training data contamination in the age of LLMs to the temporal and description leakage prevalent in early multimodal designs. To address this, we formalize the core principles of high-fidelity benchmarking, focusing on data sourcing integrity, leak-free and causally sound design, and structural clarity. We introduce Fidel-TS, a new large-scale benchmark built from the ground up on these principles by sourcing data from live APIs. Our experiments reveal the flaws of the previous benchmarks and the biases in model evaluation, providing new insights into multiple existing forecasting models and LLMs across various evaluation tasks.
|
https://arxiv.org/abs/2509.24789
|
Academic Papers
|
svg
|
e0daec9cececf9fb189d841988d85527ae998abde0af4ca573ca1023c3580ccd
|
2026-02-02T00:00:00-05:00
|
Causal-Adapter: Taming Text-to-Image Diffusion for Faithful Counterfactual Generation
|
arXiv:2509.24798v4 Announce Type: replace Abstract: We present Causal-Adapter, a modular framework that adapts frozen text-to-image diffusion backbones for counterfactual image generation. Our method enables causal interventions on target attributes, consistently propagating their effects to causal dependents without altering the core identity of the image. In contrast to prior approaches that rely on prompt engineering without explicit causal structure, Causal-Adapter leverages structural causal modeling augmented with two attribute regularization strategies: prompt-aligned injection, which aligns causal attributes with textual embeddings for precise semantic control, and a conditioned token contrastive loss to disentangle attribute factors and reduce spurious correlations. Causal-Adapter achieves state-of-the-art performance on both synthetic and real-world datasets, with up to 91% MAE reduction on Pendulum for accurate attribute control and 87% FID reduction on ADNI for high-fidelity MRI image generation. These results show that our approach enables robust, generalizable counterfactual editing with faithful attribute modification and strong identity preservation.
|
https://arxiv.org/abs/2509.24798
|
Academic Papers
|
svg
|
43e3bfb7fc1b7ae955f197f66d3b2608c56e02b79c430b7e4dcc7de7f7b5cead
|
2026-02-02T00:00:00-05:00
|
IRIS: Intrinsic Reward Image Synthesis
|
arXiv:2509.25562v2 Announce Type: replace Abstract: Despite the success of Reinforcement Learning from Human Feedback (RLHF) in language reasoning, its application to autoregressive Text-to-Image (T2I) generation is often constrained by the limited availability of human preference data. This paper explores how an autoregressive T2I model can learn from internal signals without relying on external rewards or labeled data. Contrary to recent findings in math and code reasoning, we show that minimizing self-certainty, rather than maximizing it, improves image generation. We observe that autoregressive T2I models with higher certainty are likely to generate simple and uniform images, which are less aligned with human preferences, and models with lower certainty are likely to generate vivid images rich in detail. Based on this observation, we propose IRIS(Intrinsic Reward Image Synthesis), the first framework to improve autoregressive T2I models with reinforcement learning using only an intrinsic reward. Empirical results demonstrate that applying IRIS to autoregressive T2I models achieves performance superior to those trained by individual external rewards, and matching those trained by ensemble external rewards. IRIS also incentivizes the emergence of nuanced CoT reasoning for high-quality image generation.
|
https://arxiv.org/abs/2509.25562
|
Academic Papers
|
svg
|
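The self-certainty signal in the IRIS abstract above can be illustrated with one common proxy: the average negative entropy of the model's next-token distributions. The paper's exact definition may differ; this is only a sketch of the intrinsic-reward idea.

```python
import math

def self_certainty(token_distributions):
    """Average negative entropy of next-token distributions, a common
    proxy for a model's self-certainty (assumption: the paper may use
    a different measure). Peaked distributions score near 0, uniform
    ones score strongly negative."""
    neg_entropies = []
    for dist in token_distributions:
        entropy = -sum(p * math.log(p) for p in dist if p > 0.0)
        neg_entropies.append(-entropy)
    return sum(neg_entropies) / len(neg_entropies)

def intrinsic_reward(token_distributions):
    # IRIS's counterintuitive finding: *minimize* certainty, so the
    # reward is the negation of the certainty score
    return -self_certainty(token_distributions)
```

Under this proxy, an uncertain (high-entropy) generation earns a higher reward than a fully confident one, matching the abstract's observation that low-certainty models produce more vivid, detailed images.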
2ef4e590319cae63740077976d992ae413655b2bb2f8f0c877cc6fe1e7217d44
|
2026-02-02T00:00:00-05:00
|
Think Less, Label Better: Multi-Stage Domain-Grounded Synthetic Data Generation for Fine-Tuning Large Language Models in Telecommunications
|
arXiv:2509.25736v2 Announce Type: replace Abstract: The success of large language models (LLMs) depends heavily on large-scale, high-quality instruction-following and reinforcement datasets. However, generating such data through human annotation is prohibitively time-consuming particularly for domain-specific tasks like telecom network troubleshooting, where accurate responses require deep technical expertise and contextual understanding. In this paper, we present a fully automated, retrieval-augmented pipeline for generating synthetic question-answer (QA) pairs grounded in structured domain knowledge. Our multi-stage framework integrates a retriever, base generator, and refinement model to synthesize and enhance QA pairs using documents retrieved from a domain-specific knowledge graph. To ensure data quality, we employ customized RAGAS-based scoring to filter low-quality samples, producing a high-quality dataset suitable for reinforcement fine-tuning (RFT). We demonstrate our approach in a real-world telecom scenario focused on radio access network (RAN) troubleshooting. The resulting pipeline generates complex, context-rich troubleshooting solution plans without human intervention. This work offers a scalable solution for building instruction and reinforcement datasets in specialized domains, significantly reducing dependence on manual labeling while maintaining high technical fidelity.
|
https://arxiv.org/abs/2509.25736
|
Academic Papers
|
svg
|
6cd361a8b5254e7c7bbbb745d33af4cac2b6c492258e1d00b31eb7d30621446a
|
2026-02-02T00:00:00-05:00
|
A Generalized Information Bottleneck Theory of Deep Learning
|
arXiv:2509.26327v3 Announce Type: replace Abstract: The Information Bottleneck (IB) principle offers a compelling theoretical framework to understand how neural networks (NNs) learn. However, its practical utility has been constrained by unresolved theoretical ambiguities and significant challenges in accurate estimation. In this paper, we present a \textit{Generalized Information Bottleneck (GIB)} framework that reformulates the original IB principle through the lens of synergy, i.e., the information obtainable only through joint processing of features. We provide theoretical and empirical evidence demonstrating that synergistic functions achieve superior generalization compared to their non-synergistic counterparts. Building on these foundations we re-formulate the IB using a computable definition of synergy based on the average interaction information (II) of each feature with those remaining. We demonstrate that the original IB objective is upper bounded by our GIB in the case of perfect estimation, ensuring compatibility with existing IB theory while addressing its limitations. Our experimental results demonstrate that GIB consistently exhibits compression phases across a wide range of architectures (including those with \textit{ReLU} activations where the standard IB fails), while yielding interpretable dynamics in both CNNs and Transformers and aligning more closely with our understanding of adversarial robustness.
|
https://arxiv.org/abs/2509.26327
|
Academic Papers
|
svg
|
3c55846900e7a2da92fa837313f616ccde4298a34f65a5fc9de6eda5ab415914
|
2026-02-02T00:00:00-05:00
|
TAP: Two-Stage Adaptive Personalization of Multi-Task and Multi-Modal Foundation Models in Federated Learning
|
arXiv:2509.26524v2 Announce Type: replace Abstract: In federated learning (FL), local personalization of models has received significant attention, yet personalized fine-tuning of foundation models remains a significant challenge. In particular, there is a lack of understanding in the literature on how to fine-tune and personalize foundation models in settings that are heterogeneous across clients not only in data, but also in tasks and modalities. To address this gap, we propose TAP (Two-Stage Adaptive Personalization), which has two key features: (i) leveraging mismatched model architectures between the clients and server to selectively conduct replacement operations when it benefits a client's local tasks; (ii) engaging in post-FL knowledge distillation for capturing beneficial general knowledge without compromising personalization. In developing TAP, we introduce the first convergence analysis of federated foundation model training at the server under its modality-task pair architecture, and demonstrate that as the number of modality-task pairs increases, its ability to cater to all tasks suffers. Through extensive experiments, we demonstrate the effectiveness of our proposed algorithm across a variety of datasets and tasks in comparison to state-of-the-art federated personalization baselines.
|
https://arxiv.org/abs/2509.26524
|
Academic Papers
|
svg
|
c6876cbe23103b20d89cdf23d54f4423a0e38a6da8127828a37871f8848d1f97
|
2026-02-02T00:00:00-05:00
|
Efficient Approximation Algorithms for Fair Influence Maximization under Maximin Constraint
|
arXiv:2509.26579v2 Announce Type: replace Abstract: Fair Influence Maximization (FIM) seeks to mitigate disparities in influence across different groups and has recently garnered increasing attention. A widely adopted notion of fairness in FIM is the maximin constraint, which directly requires maximizing the utility (influenced ratio within a group) of the worst-off group. Despite its intuitive formulation, designing efficient algorithms with strong theoretical guarantees remains challenging, as the maximin objective does not satisfy submodularity, a key property for designing approximate algorithms in traditional influence maximization settings. In this paper, we address this challenge by proposing a two-step optimization framework consisting of Inner-group Maximization (IGM) and Across-group Maximization (AGM). We first prove that the influence spread within any individual group remains submodular, enabling effective optimization within groups. Based on this, IGM applies a greedy approach to pick high-quality seeds for each group. In the second step, AGM coordinates seed selection across groups by introducing two strategies: Uniform Selection (US) and Greedy Selection (GS). We prove that AGM-GS holds a $(1-1/e-\varepsilon)$ approximation to the optimal solution when groups are completely disconnected, while AGM-US guarantees a roughly $\frac{1}{m}(1-1/e-\varepsilon)$ lower bound regardless of the group structure, with $m$ denoting the number of groups.
|
https://arxiv.org/abs/2509.26579
|
Academic Papers
|
svg
|
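The Inner-group Maximization step in the FIM abstract above relies on within-group submodularity, so the standard greedy rule applies. A minimal sketch, assuming `spread(seed_set)` is an oracle (e.g. a Monte Carlo estimate of influence) supplied by the caller:

```python
def greedy_igm(candidates, k, spread):
    """Greedy seed selection within one group. Because influence spread
    inside a single group is submodular (as the paper proves), greedily
    adding the seed with the largest marginal gain inherits the classic
    (1 - 1/e) guarantee. `spread` is a caller-supplied oracle."""
    seeds = set()
    for _ in range(k):
        best = max((c for c in candidates if c not in seeds),
                   key=lambda c: spread(seeds | {c}))
        seeds.add(best)
    return seeds
```

The across-group coordination (AGM-US / AGM-GS) then decides how many seeds each group receives; that step is where the paper's approximation bounds differ by group structure.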
a7e1ad4f83075244d790f47741c64cc36405a576f9915ecb5103ea27d9d4a8c9
|
2026-02-02T00:00:00-05:00
|
FedLLM-Align: Feature Extraction From Heterogeneous Clients
|
arXiv:2510.00065v2 Announce Type: replace Abstract: Federated learning (FL) enables collaborative model training without sharing raw data, making it attractive for privacy-sensitive domains, e.g., healthcare, finance, and IoT. A major obstacle, however, is the potential heterogeneity of tabular data across clients in practical settings, where schema mismatches and incompatible feature spaces prevent straightforward aggregation. To address this challenge, this paper proposes FedLLM-Align, a federated learning framework that leverages pretrained transformer-based language models for feature extraction. Towards this objective, FedLLM-Align serializes tabular records into text and derives semantically aligned embeddings from a pretrained LLM encoder, e.g., DistilBERT, facilitating lightweight local classifier heads that can be trained in a federated manner using standard aggregation schemes, e.g., FedAvg, while keeping all raw data records local. To quantify the merits and trade-offs of FedLLM-Align, we evaluate the proposed framework on binary classification tasks from two different domains: i) Coronary heart disease prediction on partitioned Framingham Heart Study data, and ii) Customer churn prediction on a financial dataset. FedLLM-Align outperforms state-of-the-art baselines by up to 25% in terms of the F1 score, under simulated schema heterogeneity, and achieves a 65% reduction in the communication overhead. These results establish FedLLM-Align as a privacy-preserving and communication-efficient approach for federated training based on clients with heterogeneous tabular datasets, commonly encountered in practice.
|
https://arxiv.org/abs/2510.00065
|
Academic Papers
|
svg
|
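The serialization step described in the FedLLM-Align abstract above can be sketched as follows. The exact template is an assumption; the paper's serialization format may differ.

```python
def serialize_record(record):
    """Turn one tabular row (a dict of column -> value) into a
    sentence-like string that a pretrained text encoder such as
    DistilBERT can embed. Because mismatched schemas all map into the
    same text space, clients with different columns still produce
    comparable embeddings. Template is a hypothetical illustration."""
    return ". ".join(f"{col} is {val}" for col, val in record.items()) + "."
```

A row like `{"age": 52, "smoker": "yes"}` becomes `"age is 52. smoker is yes."`, and only the resulting embedding feeds the lightweight local classifier head, keeping raw records on-device as the abstract requires.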
b1ffbc0f16be02206226caa3abc448ce0694110b1620af2daf69d4315f2fc24b
|
2026-02-02T00:00:00-05:00
|
Thoughtbubbles: an Unsupervised Method for Parallel Thinking in Latent Space
|
arXiv:2510.00219v2 Announce Type: replace Abstract: Current approaches for scaling inference-time compute in transformers train them to emit explicit chain-of-thought tokens before producing an answer. While these methods are powerful, they are limited because they cannot be applied during pretraining and rely solely on serially-generated, natural-language verbalization. In this work, we propose Thoughtbubbles, a transformer variant that natively performs parallel adaptive computation in latent space by learning to fork or delete residual streams. Thus, tokens requiring more computation can form a "bubble" of cloned residuals in the middle of the network. Crucially, this behavior is learned during pretraining with only language modeling loss. Using half of the training budget, Thoughtbubbles outperforms the perplexity and zero-shot evals of both standard decoder LMs and those using non-adaptive parallel computation approaches. These results hold across model sizes from 150M to 1.9B. Thoughtbubbles achieves competitive GSM8K results using half of the baseline's token budget. The implicit nature of our method enables models to begin learning adaptive computation at pretraining time, paving the way to unified train-time and test-time scaling behaviors.
|
https://arxiv.org/abs/2510.00219
|
Academic Papers
|
svg
|
ace5d4a822e4b7bfaf0a946b196cf19f2325e1dfc4a639a72f0d619c7bc82061
|
2026-02-02T00:00:00-05:00
|
It Takes Two: Your GRPO Is Secretly DPO
|
arXiv:2510.00977v2 Announce Type: replace Abstract: Group Relative Policy Optimization (GRPO) has emerged as a prominent reinforcement learning algorithm for post-training Large Language Models. Different from critic-based methods such as PPO, GRPO estimates the advantage function using group-level statistics to reduce the variance of policy gradient estimators. While the prevailing view attributes GRPO's effectiveness to large group sizes for accurate advantage estimation, we propose a different perspective. We demonstrate that the efficacy of GRPO stems from its implicit contrastive objective in the optimization, which helps reduce variance via the control variate method. This perspective establishes a fundamental connection between GRPO and DPO, wherein group size influences only the Monte Carlo estimators of the contrastive objective. To validate this, we investigate the minimal two-rollout case (2-GRPO), a configuration permissible under the contrastive framework but typically considered insufficient for reward normalization. We provide a rigorous theoretical analysis of 2-GRPO and empirically validate its effectiveness: 2-GRPO retains 98.1% of the performance of 16-GRPO, while requiring only 12.5% of the rollouts and 21% of the training time. This study offers a new perspective for future algorithm design in LLM post-training.
|
https://arxiv.org/abs/2510.00977
|
Academic Papers
|
svg
|
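The group-relative advantage at the heart of the 2-GRPO abstract above can be sketched directly. This is the standard GRPO normalization, written out to show the two-rollout collapse the paper analyzes; `eps` is an assumed numerical-stability constant.

```python
import math

def group_relative_advantages(rewards, eps=1e-8):
    """GRPO-style advantages: rewards standardized within one rollout
    group. With only two rollouts (the paper's 2-GRPO case) this
    collapses to a DPO-like pairwise contrast, yielding +1 / -1
    (or 0 / 0 on ties), which is the connection the paper makes
    explicit."""
    n = len(rewards)
    mean = sum(rewards) / n
    std = math.sqrt(sum((r - mean) ** 2 for r in rewards) / n)
    return [(r - mean) / (std + eps) for r in rewards]
```

With rewards `[1, 0]` the advantages are (up to `eps`) `+1` and `-1`: group size only changes how precisely this contrastive signal is estimated, which is the abstract's argument for why tiny groups still work.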
f20e858ca872bcead753680fc0a6b5142c4c7651f710d2f0171b896c99f79e3b
|
2026-02-02T00:00:00-05:00
|
How Well Can Preference Optimization Generalize Under Noisy Feedback?
|
arXiv:2510.01458v3 Announce Type: replace Abstract: As large language models (LLMs) advance their capabilities, aligning these models with human preferences has become crucial. Preference optimization, which trains models to distinguish between preferred and non-preferred responses based on human feedback, has become a crucial component for aligning LLMs. However, most existing works assume noise-free feedback, which is unrealistic due to the inherent errors and inconsistencies in human judgments. This paper addresses the impact of noisy feedback on preference optimization, providing generalization guarantees under these conditions. In particular, we consider noise models that correspond to common real-world sources of noise, such as mislabeling and uncertainty. Unlike traditional analyses that assume convergence, our work focuses on finite-step preference optimization, offering new insights that are more aligned with practical LLM training. We describe how generalization decays with different types of noise across levels of noise rates based on the preference data distribution and number of samples. Our analysis for noisy preference learning applies to a broad family of preference optimization losses such as DPO, IPO, SLiC, etc. Empirical validation on contemporary LLMs confirms the practical relevance of our findings, offering valuable insights for developing AI systems that align with human preferences.
|
https://arxiv.org/abs/2510.01458
|
Academic Papers
|
svg
|
e83a3786fd5926072b1bf98fff40593b53d316e31c9421c377fbecc7de570dfb
|
2026-02-02T00:00:00-05:00
|
InvThink: Towards AI Safety via Inverse Reasoning
|
arXiv:2510.01569v2 Announce Type: replace Abstract: We present InvThink, a simple yet powerful approach that gives language models the capability of inverse thinking: reasoning through failure modes before generating responses. Unlike existing safety alignment methods that optimize directly for safe responses, InvThink instructs models to 1) enumerate potential harms, 2) analyze their consequences, and 3) generate safe outputs that proactively avoid these risks. Our paper reveals three key findings: (i) InvThink demonstrates significantly improved safety reasoning as model size scales, compared to existing safety methods. (ii) InvThink mitigates safety tax; by training models to systematically consider failure modes, it preserves general reasoning capabilities on standard benchmarks. (iii) beyond general safety tasks, InvThink excels in high-stakes domains including external-facing applications (medicine, finance, law) and agentic risk scenarios (blackmail, murder), achieving up to 17.8% reduction in harmful responses compared to baseline methods like SafetyPrompt. We further equip InvThink with supervised fine-tuning and reinforcement learning across three LLM families. These results suggest that InvThink provides a scalable and generalizable path toward safer, more capable language models.
|
https://arxiv.org/abs/2510.01569
|
Academic Papers
|
svg
|
d599f2ec0ae8ecd685bf249ca37000928058e2cd738b3448e5ed16ab15af68bb
|
2026-02-02T00:00:00-05:00
|
PENEX: AdaBoost-Inspired Neural Network Regularization
|
arXiv:2510.02107v3 Announce Type: replace Abstract: AdaBoost sequentially fits so-called weak learners to minimize an exponential loss, which penalizes misclassified data points more severely than other loss functions like cross-entropy. Paradoxically, AdaBoost generalizes well in practice as the number of weak learners grows. In the present work, we introduce Penalized Exponential Loss (PENEX), a new formulation of the multi-class exponential loss that is theoretically grounded and, in contrast to the existing formulation, amenable to optimization via first-order methods, making it a practical objective for training neural networks. We demonstrate that PENEX effectively increases margins of data points, which can be translated into a generalization bound. Empirically, across computer vision and language tasks, PENEX improves neural network generalization in low-data regimes, often matching or outperforming established regularizers at comparable computational cost. Our results highlight the potential of the exponential loss beyond its application in AdaBoost.
|
https://arxiv.org/abs/2510.02107
|
Academic Papers
|
svg
|
e812b5f236351a92ad4bbd1a4b1d8949d1aab4e186f5eedc1058c531d8d01aef
|
2026-02-02T00:00:00-05:00
|
Test-Time Anchoring for Discrete Diffusion Posterior Sampling
|
arXiv:2510.02291v2 Announce Type: replace Abstract: While continuous diffusion models have achieved remarkable success, discrete diffusion offers a unified framework for jointly modeling text and images. Beyond unification, discrete diffusion provides faster inference, finer control, and principled training-free guidance, making it well-suited for posterior sampling. Existing approaches to posterior sampling using discrete diffusion face severe challenges: derivative-free guidance yields sparse signals, continuous relaxations limit applicability, and split Gibbs samplers suffer from the curse of dimensionality. To overcome these limitations, we introduce Anchored Posterior Sampling (APS), built on two key innovations: quantized expectation for gradient-like guidance in discrete embedding space, and anchored remasking for adaptive decoding. APS achieves state-of-the-art performance among discrete diffusion samplers on both linear and nonlinear inverse problems across the standard image benchmarks. We demonstrate the generality of APS through training-free stylization and text-guided editing. We further apply APS to a large-scale diffusion language model, showing consistent improvement in question answering.
|
https://arxiv.org/abs/2510.02291
|
Academic Papers
|
svg
|
9b4df8c03a84bfdac0bd9daaa623e5549bc9a955a5ec14384d7542d359c871d1
|
2026-02-02T00:00:00-05:00
|
VideoNSA: Native Sparse Attention Scales Video Understanding
|
arXiv:2510.02295v2 Announce Type: replace Abstract: Video understanding in multimodal language models remains limited by context length: models often miss key transition frames and struggle to maintain coherence across long time scales. To address this, we adapt Native Sparse Attention (NSA) to video-language models. Our method, VideoNSA, adapts Qwen2.5-VL through end-to-end training on a 216K video instruction dataset. We employ a hardware-aware hybrid approach to attention, preserving dense attention for text, while employing NSA for video. Compared to token-compression and training-free sparse baselines, VideoNSA achieves improved performance on long-video understanding, temporal reasoning, and spatial benchmarks. Further ablation analysis reveals four key findings: (1) reliable scaling to 128K tokens; (2) an optimal global-local attention allocation at a fixed budget; (3) task-dependent branch usage patterns; and (4) the learnable combined sparse attention helps induce dynamic attention sinks. Project Page: https://enxinsong.com/VideoNSA-web/, Code: https://github.com/Espere-1119-Song/VideoNSA
|
https://arxiv.org/abs/2510.02295
|
Academic Papers
|
svg
|
f49b7c322731f85016ba3467541fd9770060fce6989c88c6ee8fa2879caf2459
|
2026-02-02T00:00:00-05:00
|
ContextFlow: Context-Aware Flow Matching For Trajectory Inference From Spatial Omics Data
|
arXiv:2510.02952v2 Announce Type: replace Abstract: Inferring trajectories from longitudinal spatially-resolved omics data is fundamental to understanding the dynamics of structural and functional tissue changes in development, regeneration and repair, disease progression, and response to treatment. We propose ContextFlow, a novel context-aware flow matching framework that incorporates prior knowledge to guide the inference of structural tissue dynamics from spatially resolved omics data. Specifically, ContextFlow integrates local tissue organization and ligand-receptor communication patterns into a transition plausibility matrix that regularizes the optimal transport objective. By embedding these contextual constraints, ContextFlow generates trajectories that are not only statistically consistent but also biologically meaningful, making it a generalizable framework for modeling spatiotemporal dynamics from longitudinal, spatially resolved omics data. Evaluated on three datasets, ContextFlow consistently outperforms state-of-the-art flow matching methods across multiple quantitative and qualitative metrics of inference accuracy and biological coherence. Our code is available at: \href{https://github.com/santanurathod/ContextFlow}{ContextFlow}
|
https://arxiv.org/abs/2510.02952
|
Academic Papers
|
svg
|
932b6e31aa772a448bf332c3b1d309450f8c8496eefcedddc19de05f5fb7362a
|
2026-02-02T00:00:00-05:00
|
PT$^2$-LLM: Post-Training Ternarization for Large Language Models
|
arXiv:2510.03267v2 Announce Type: replace Abstract: Large Language Models (LLMs) have shown impressive capabilities across diverse tasks, but their large memory and compute demands hinder deployment. Ternarization has gained attention as a promising compression technique, delivering substantial size reduction and high computational efficiency. However, its potential in the post-training quantization (PTQ) setting remains underexplored, due to the challenge of training-free parameter optimization and the quantization difficulty posed by outliers and dispersed weights. To address these issues, we propose PT$^2$-LLM, a post-training ternarization framework tailored for LLMs. At its core is an Asymmetric Ternary Quantizer equipped with a two-stage refinement pipeline: (1) Iterative Ternary Fitting (ITF), which alternates between optimal ternary grid construction and flexible rounding to minimize quantization error, and (2) Activation-aware Grid Alignment (AGA), which further refines the ternary grid to better match full-precision outputs. In addition, we propose a plug-and-play Structural Similarity-based Reordering (SSR) strategy that leverages inter-column structural similarity to ease quantization and mitigate outlier effects, further enhancing overall performance. Extensive experiments demonstrate that PT$^2$-LLM delivers competitive performance against state-of-the-art (SOTA) 2-bit PTQ methods with lower memory cost, while also accelerating both prefill and decoding to achieve end-to-end speedup. The code and models will be available at https://github.com/XIANGLONGYAN/PT2-LLM.
|
https://arxiv.org/abs/2510.03267
|
Academic Papers
|
svg
|
ec79ce08d54b0be315ede1d4b47d8773e63b76df6d4a85614a41867d84cba37d
|
2026-02-02T00:00:00-05:00
|
FrameOracle: Learning What to See and How Much to See in Videos
|
arXiv:2510.03584v2 Announce Type: replace Abstract: Vision-language models (VLMs) advance video understanding but operate under tight computational budgets, making performance dependent on selecting a small, high-quality subset of frames. Existing frame sampling strategies, such as uniform or fixed-budget selection, fail to adapt to variations in content density or task complexity. To address this, we present FrameOracle, a lightweight, plug-and-play module that predicts both (1) which frames are most relevant to a given query and (2) how many frames are needed. FrameOracle is trained via a curriculum that progresses from weak proxy signals, such as cross-modal similarity, to stronger supervision with FrameOracle-41K, the first large-scale VideoQA dataset with validated keyframe annotations specifying minimal sufficient frames per question. Extensive experiments across five VLMs and six benchmarks show that FrameOracle reduces 16-frame inputs to an average of 10.4 frames without accuracy loss. When starting from 64-frame candidates, it reduces inputs to 13.9 frames on average while improving accuracy by 1.5%, achieving state-of-the-art efficiency-accuracy trade-offs for scalable video understanding.
|
https://arxiv.org/abs/2510.03584
|
Academic Papers
|
svg
|
ca2784f791d4cc6863f2c640e1426838473f4a9eaf8a4a6e2ac8f60412aa97da
|
2026-02-02T00:00:00-05:00
|
Security Analysis of Ponzi Schemes in Ethereum Smart Contracts
|
arXiv:2510.03819v2 Announce Type: replace Abstract: The rapid advancement of blockchain technology has precipitated the widespread adoption of Ethereum and smart contracts across a variety of sectors. However, this has also given rise to numerous fraudulent activities, with many speculators embedding Ponzi schemes within smart contracts, resulting in significant financial losses for investors. Currently, there is a lack of effective methods for identifying and analyzing such new types of fraudulent activities. This paper categorizes these scams into four structural types and explores the intrinsic characteristics of Ponzi scheme contract source code from a program analysis perspective. The Mythril tool is employed to conduct static and dynamic analyses of representative cases, thereby revealing their vulnerabilities and operational mechanisms. Furthermore, this paper employs shell scripts and command patterns to conduct batch detection of open-source smart contract code, thereby unveiling the common characteristics of Ponzi scheme smart contracts.
|
https://arxiv.org/abs/2510.03819
|
Academic Papers
|
svg
|
c28f998d84da658e2d4531eda973dccc84e7656d64fcde0807e386c1b4b3a837
|
2026-02-02T00:00:00-05:00
|
Unmasking Backdoors: An Explainable Defense via Gradient-Attention Anomaly Scoring for Pre-trained Language Models
|
arXiv:2510.04347v2 Announce Type: replace Abstract: Pre-trained language models have achieved remarkable success across a wide range of natural language processing (NLP) tasks, particularly when fine-tuned on large, domain-relevant datasets. However, they remain vulnerable to backdoor attacks, where adversaries embed malicious behaviors using trigger patterns in the training data. These triggers remain dormant during normal usage, but, when activated, can cause targeted misclassifications. In this work, we investigate the internal behavior of backdoored pre-trained encoder-based language models, focusing on the consistent shift in attention and gradient attribution when processing poisoned inputs, where the trigger token dominates both attention and gradient signals, overriding the surrounding context. We propose an inference-time defense that constructs anomaly scores by combining token-level attention and gradient information. Extensive experiments on text classification tasks across diverse backdoor attack scenarios demonstrate that our method significantly reduces attack success rates compared to existing baselines. Furthermore, we provide an interpretability-driven analysis of the scoring mechanism, shedding light on trigger localization and the robustness of the proposed defense.
|
https://arxiv.org/abs/2510.04347
|
Academic Papers
|
svg
|
f789bce664fcaa515c509200c13c1b1a2c7ed8ef5af78e0f0d720ecb2770db86
|
2026-02-02T00:00:00-05:00
|
Agentic Context Engineering: Evolving Contexts for Self-Improving Language Models
|
arXiv:2510.04618v2 Announce Type: replace Abstract: Large language model (LLM) applications such as agents and domain-specific reasoning increasingly rely on context adaptation -- modifying inputs with instructions, strategies, or evidence, rather than weight updates. Prior approaches improve usability but often suffer from brevity bias, which drops domain insights for concise summaries, and from context collapse, where iterative rewriting erodes details over time. Building on the adaptive memory introduced by Dynamic Cheatsheet, we introduce ACE (Agentic Context Engineering), a framework that treats contexts as evolving playbooks that accumulate, refine, and organize strategies through a modular process of generation, reflection, and curation. ACE prevents collapse with structured, incremental updates that preserve detailed knowledge and scale with long-context models. Across agent and domain-specific benchmarks, ACE optimizes contexts both offline (e.g., system prompts) and online (e.g., agent memory), consistently outperforming strong baselines: +10.6% on agents and +8.6% on finance, while significantly reducing adaptation latency and rollout cost. Notably, ACE can adapt effectively without labeled supervision, instead leveraging natural execution feedback. On the AppWorld leaderboard, ACE matches the top-ranked production-level agent on the overall average and surpasses it on the harder test-challenge split, despite using a smaller open-source model. These results show that comprehensive, evolving contexts enable scalable, efficient, and self-improving LLM systems with low overhead.
|
https://arxiv.org/abs/2510.04618
|
Academic Papers
|
svg
|
f25a71e052bced36c3be5e1788ce1a12d6d8739332d95b039765436b7a9130e9
|
2026-02-02T00:00:00-05:00
|
Training Dynamics Impact Post-Training Quantization Robustness
|
arXiv:2510.06213v2 Announce Type: replace Abstract: While post-training quantization is widely adopted for efficient deployment of large language models, the mechanisms underlying quantization robustness remain unclear. We conduct a comprehensive analysis of quantization degradation across open-source language model training trajectories up to 32B parameters and 15T training tokens to accurately assess the relationship between training dynamics and quantization performance. Our key finding is that quantization errors in large-scale training runs are driven by a complex interplay between learning rate and other training hyperparameters. Specifically, once learning rates decay, validation loss and quantization error diverge, largely independent of training data scale. To investigate interventions on the training dynamics and identify specific configurations that can modulate quantization robustness favorably, we train our own models in controlled experiments up to 100B tokens. Our results challenge the assumption that increasing dataset scale inherently compromises quantization effectiveness, demonstrating instead that strategic training hyperparameter interventions can improve quantization quality at scale.
|
https://arxiv.org/abs/2510.06213
|
Academic Papers
|
svg
|
1ef0d03eb015f4ff6fd423b79134d47158de1bbaa27fef6fd19ae6eb55810f2c
|
2026-02-02T00:00:00-05:00
|
Quantifying Data Contamination in Psychometric Evaluations of LLMs
|
arXiv:2510.07175v2 Announce Type: replace Abstract: Recent studies apply psychometric questionnaires to Large Language Models (LLMs) to assess high-level psychological constructs such as values, personality, moral foundations, and dark traits. Although prior work has raised concerns about possible data contamination from psychometric inventories, which may threaten the reliability of such evaluations, there has been no systematic attempt to quantify the extent of this contamination. To address this gap, we propose a framework to systematically measure data contamination in psychometric evaluations of LLMs, evaluating three aspects: (1) item memorization, (2) evaluation memorization, and (3) target score matching. Applying this framework to 21 models from major families and four widely used psychometric inventories, we provide evidence that popular inventories such as the Big Five Inventory (BFI-44) and Portrait Values Questionnaire (PVQ-40) exhibit strong contamination, where models not only memorize items but can also adjust their responses to achieve specific target scores.
|
https://arxiv.org/abs/2510.07175
|
Academic Papers
|
svg
|
893e97c473442a2c6e38097c6c6727d44cc58edef193727022d3b177a16c7a38
|
2026-02-02T00:00:00-05:00
|
The Unintended Trade-off of AI Alignment: Balancing Hallucination Mitigation and Safety in LLMs
|
arXiv:2510.07775v2 Announce Type: replace Abstract: Hallucination in large language models (LLMs) has been widely studied in recent years, with progress in both detection and mitigation aimed at improving truthfulness. Yet, a critical side effect remains largely overlooked: enhancing truthfulness can negatively impact safety alignment. In this paper, we investigate this trade-off and show that increasing factual accuracy often comes at the cost of weakened refusal behavior. Our analysis reveals that this arises from overlapping components in the model that simultaneously encode hallucination and refusal information, leading alignment methods to suppress factual knowledge unintentionally. We further examine how fine-tuning on benign datasets, even when curated for safety, can degrade alignment for the same reason. To address this, we propose a method that disentangles refusal-related features from hallucination features using sparse autoencoders, and preserves refusal behavior during fine-tuning through subspace orthogonalization. This approach prevents hallucinations from increasing while maintaining safety alignment. We evaluate our method on commonsense reasoning tasks and harmful benchmarks (AdvBench and StrongReject). Results demonstrate that our approach preserves refusal behavior and task utility, mitigating the trade-off between truthfulness and safety.
|
https://arxiv.org/abs/2510.07775
|
Academic Papers
|
svg
|
4c52b93c9483f8676bb1d5f56d2fffcecff5ec067857a2a821c3388171f8dcd5
|
2026-02-02T00:00:00-05:00
|
Post-Norm can Resharpen Attention
|
arXiv:2510.08341v2 Announce Type: replace Abstract: Length Generalization is the essential capacity of autonomous agents to perform tasks in longer contexts than those encountered during training. To systematically study this feat, we test how well models can approximate the next token distributions in algorithmic tasks. This is to take into account the realistic possibility of multiple next tokens being legal. We present a prototypical benchmark for this line of study: in the Set Complement Task, the model needs to output a uniform distribution over tokens not in the input. We prove a theorem stating that simple transformers can length generalize on this task, albeit with performance degradation due to attention dispersion. A mechanistic reading of how dispersion takes effect lets us discover a remedy: Post-Norm can Resharpen Attention. We present experimental evidence to support this idea. We also show that Exponential Moving Averages can help the issue of noisy gradients that arises when many next tokens are legal. We validate the general applicability of our proposed methods on a suite of formal language experiments. Our source code will be available upon publication.
|
https://arxiv.org/abs/2510.08341
|
Academic Papers
|
svg
|
14e108be8ce8b966938aee014f484ec3ae95b05aef9569c9305a459c47591741
|
2026-02-02T00:00:00-05:00
|
Which Heads Matter for Reasoning? RL-Guided KV Cache Compression
|
arXiv:2510.08525v2 Announce Type: replace Abstract: Reasoning large language models exhibit complex reasoning behaviors via extended chain-of-thought generation that are highly fragile to information loss during decoding, creating critical challenges for KV cache compression. Existing token-dropping methods directly disrupt reasoning chains by removing intermediate steps, while head-reallocation methods, designed for retrieval tasks, fail to preserve the heads essential for generative reasoning. However, no existing method can identify which attention heads genuinely maintain reasoning consistency and control generation termination. To address this, we propose RLKV, which uses reinforcement learning as a probe to discover which heads contribute to reasoning quality by directly optimizing their cache usage against actual generation outcomes. This discovery naturally leads to an efficient compression strategy: we allocate full KV cache to reasoning-critical heads while aggressively compressing others. Experiments reveal that a fraction of heads proves essential for reasoning, enabling 20--50% cache reduction with near-lossless performance and up to 1.21x speedup.
|
https://arxiv.org/abs/2510.08525
|
Academic Papers
|
svg
|
e3ce87150f265c1cd103a8b4713238922f98ddcd01a8f2d8a01ea55549397fc7
|
2026-02-02T00:00:00-05:00
|
GraphGhost: Tracing Structures Behind Large Language Models
|
arXiv:2510.08613v2 Announce Type: replace Abstract: Large Language Models (LLMs) exhibit strong reasoning capabilities on structured tasks, yet the internal mechanisms underlying such behaviors remain poorly understood. Existing interpretation methods mainly focus on token-level attributions, which provide limited insight into multi-step reasoning inside the model. We propose GraphGhost, a graph-based framework that models internal token interactions and neuron activations in LLMs as graphs. By aggregating token dependencies traced across layers, GraphGhost captures global information flow underlying model predictions. We formalize GraphGhost from two complementary perspectives: a sample view, which traces token dependencies for individual predictions, and a dataset view, which aggregates recurring structural patterns learned during training. Through graph analytics and quantitative experiments, we show that graph structural properties are closely associated with influential tokens and neuron nodes, and that perturbations to structurally critical nodes lead to measurable changes in reasoning behavior. These results indicate that the structural patterns captured by GraphGhost reflect meaningful internal organization of LLM reasoning. The code is available in the software section. Artifacts will be made available for research use only.
|
https://arxiv.org/abs/2510.08613
|
Academic Papers
|
svg
|
15f283b591125637f76ff81aaa1062aa8afc4e84439fdb1ce839abc88c3cffc6
|
2026-02-02T00:00:00-05:00
|
On the Provable Performance Guarantee of Efficient Reasoning Models
|
arXiv:2510.09133v2 Announce Type: replace Abstract: Large reasoning models (LRMs) have achieved remarkable progress in complex problem-solving tasks. Despite this success, LRMs typically suffer from high computational costs during deployment, highlighting a need for efficient inference. A practical direction of efficiency improvement is to switch the LRM between thinking and non-thinking modes dynamically. However, such approaches often introduce additional reasoning errors and lack statistical guarantees for the performance loss, which are critical for high-stakes applications. In this work, we propose Probably Approximately Correct (PAC) reasoning that controls the performance loss under the user-specified tolerance. Specifically, we construct an upper confidence bound on the performance loss and determine a threshold for switching to the non-thinking model. Theoretically, using the threshold to switch between the thinking and non-thinking modes ensures bounded performance loss in a distribution-free manner. Our comprehensive experiments on reasoning benchmarks show that the proposed method can save computational budgets and control the user-specified performance loss.
|
https://arxiv.org/abs/2510.09133
|
Academic Papers
|
svg
|
302be08bf4a2a7f178329d33acbbd03dd81ac2eafd9178cb606dc2060ab4ffc9
|
2026-02-02T00:00:00-05:00
|
Herb.jl: A Unifying Program Synthesis Library
|
arXiv:2510.09726v2 Announce Type: replace Abstract: Program synthesis -- the automatic generation of code given a specification -- is one of the most fundamental tasks in artificial intelligence (AI) and the dream of many programmers. Numerous synthesizers have been developed for program synthesis, offering different approaches to the exponentially growing program space. Although such state-of-the-art tools exist, reusing and adapting them remains tedious and time-consuming. We propose Herb.jl, a unifying program synthesis library written in Julia, to address these issues. Since current methods share similar building blocks, we aim to break down the underlying algorithms into extendable, reusable subcomponents. To demonstrate the benefits of using Herb.jl, we show how to implement a simple problem and grammar, and how to solve it with just a few lines of code.
|
https://arxiv.org/abs/2510.09726
|
Academic Papers
|
svg
|
e44405dca51a10a783af745f57238fa2405d37d18f32f89b76239f2d548ee9d7
|
2026-02-02T00:00:00-05:00
|
GOLD PANNING: Iterative Bayesian Signal Anchoring for Many-Document Needle-in-Haystack Reasoning
|
arXiv:2510.09770v2 Announce Type: replace Abstract: Large language models (LLMs) exhibit pronounced position bias in long-context needle-in-haystack problems, systematically prioritizing the location of information over its relevance. While current mitigations rely on white-box access, this is effectively impossible for many state-of-the-art models. We introduce GOLD PANNING, a black-box Bayesian framework that performs inference-time active search over long contexts by (i) reordering documents to concentrate high-belief items in highly diagnostic positions (signal anchoring) and (ii) updating beliefs over document relevance from model outputs. Unlike conventional active learning, which prioritizes uncertainty reduction, GOLD PANNING leverages anchoring -- once flagged, keep it in sight -- to preserve weak cues. We implement this using iterative assignment derived from the model's diagnosticity profile, which provably identifies a target among $N$ documents in $O(\log N)$ rounds, ensuring scalability to many-document settings. On needle-in-a-haystack retrieval and long-context QA, GOLD PANNING matches Permutation Self-Consistency's target identification with 30\%--65\% fewer queries and remains effective under calibration mismatch, suggesting coarse positional ordering drives performance gains. These results demonstrate that inherent model biases need not be failures, but can be used as tools for control.
|
https://arxiv.org/abs/2510.09770
|
Academic Papers
|
svg
|
8003667b430c01e82089fd3002bd91adbc7c6b4daa34e7d3b30a8743012a83aa
|
2026-02-02T00:00:00-05:00
|
Don't Just Fine-tune the Agent, Tune the Environment
|
arXiv:2510.10197v2 Announce Type: replace Abstract: Large Language Model (LLM) agents show great promise for complex, multi-turn tool-use tasks, but their development is often hampered by the extreme scarcity of high-quality training data. Supervised fine-tuning (SFT) on synthetic data leads to overfitting, whereas standard reinforcement learning (RL) struggles with a critical cold-start problem and training instability. To address these challenges, we introduce $\textbf{Environment Tuning}$, a novel training paradigm that enables agents to learn complex behaviors directly from problem instances without relying on pre-collected expert trajectories. $\textbf{Environment Tuning}$ orchestrates this learning process through a structured curriculum, actionable environment augmentation that provides corrective feedback, and fine-grained progress rewards to ensure stable and efficient exploration. Using only 400 problem instances from Berkeley Function-Calling Leaderboard (BFCL) benchmark, our method not only achieves competitive in-distribution performance against strong baselines but also demonstrates superior out-of-distribution generalization, overcoming the performance collapse common to SFT-based approaches. Our work presents a paradigm shift from supervised fine-tuning on static trajectories to dynamic, environment-based exploration, paving the way for training more robust and data-efficient agents. The code is available at https://github.com/inclusionAI/AWorld-RL/tree/main/EnvTuning.
|
https://arxiv.org/abs/2510.10197
|
Academic Papers
|
svg
|
fa7abcbdf32138fdeadc88879d73b0f7e8815c91239d3811d23f1a344c785c14
|
2026-02-02T00:00:00-05:00
|
Understanding and Bridging the Planner-Coder Gap: A Systematic Study on the Robustness of Multi-Agent Systems for Code Generation
|
arXiv:2510.10460v2 Announce Type: replace Abstract: Multi-agent systems (MASs) have emerged as a promising paradigm for automated code generation, demonstrating impressive performance on established benchmarks. Despite their prosperous development, the fundamental mechanisms underlying their robustness remain poorly understood, raising critical concerns for real-world deployment. This paper conducts a systematic empirical study to uncover the internal robustness flaws of MASs using a mutation-based methodology. By designing a testing pipeline incorporating semantic-preserving mutation operators and a novel fitness function, we assess mainstream MASs across multiple datasets and LLMs. Our findings reveal substantial robustness flaws: semantically equivalent inputs cause drastic performance drops, with MASs failing to solve 7.9\%--83.3\% of problems they initially resolved successfully. Through comprehensive failure analysis, we discover a fundamental cause underlying these robustness issues: the \textit{planner-coder gap}, which accounts for 75.3\% of failures. This gap arises from information loss in the multi-stage transformation process where planning agents decompose requirements into underspecified plans, and coding agents subsequently misinterpret intricate logic during code generation. Based on this formulated information transformation process, we propose a \textit{repairing method} that mitigates information loss through multi-prompt generation and introduces a monitor agent to bridge the planner-coder gap. Evaluation shows that our repairing method effectively enhances the robustness of MASs by solving 40.0\%--88.9\% of identified failures. Our work uncovers critical robustness flaws in MASs and provides effective mitigation strategies, contributing essential insights for developing more reliable MASs for code generation.
|
https://arxiv.org/abs/2510.10460
|
Academic Papers
|
svg
|
226c377b437a081915f49111ed63927e1a27b43fcd276bb319e0493d7348ce5e
|
2026-02-02T00:00:00-05:00
|
DUAL-Bench: Measuring Over-Refusal and Robustness in Vision-Language Models
|
arXiv:2510.10846v2 Announce Type: replace Abstract: As vision-language models become increasingly capable, maintaining a balance between safety and usefulness remains a central challenge. Safety mechanisms, while essential, can backfire, causing over-refusal, where models decline benign requests out of excessive caution. Yet, no existing benchmark has systematically addressed over-refusal in the visual modality. This setting introduces unique challenges, such as dual-use cases where an instruction is harmless, but the accompanying image contains harmful content. Models frequently fail in such scenarios, either refusing too conservatively or completing tasks unsafely, which highlights the need for more fine-grained alignment. The ideal behavior is safe completion, i.e., fulfilling the benign parts of a request while explicitly warning about any potentially harmful elements. To address this, we present DUAL-Bench, the first multimodal benchmark focused on over-refusal and safe completion in VLMs. We evaluated 18 VLMs across 12 hazard categories, with a focus on their robustness under semantics-preserving visual perturbations. The results reveal substantial room for improvement: GPT-5-Nano achieves 12.9% safe completion, GPT-5 models average 7.9%, and Qwen models only 3.9%. We hope that DUAL-Bench will foster the development of more nuanced alignment strategies that ensure models remain both safe and useful in complex multimodal settings.
|
https://arxiv.org/abs/2510.10846
|
Academic Papers
|
svg
|
36ad7879feff9091b63bb9886493d40637e98bef3219c19d17ef27cc54e825d2
|
2026-02-02T00:00:00-05:00
|
PaperArena: An Evaluation Benchmark for Tool-Augmented Agentic Reasoning on Scientific Literature
|
arXiv:2510.10909v4 Announce Type: replace Abstract: Understanding and reasoning on the large-scale scientific literature is a crucial touchstone for large language model (LLM) based agents. However, existing works are mainly restricted to tool-free tasks within single papers, largely due to the lack of a benchmark that evaluates cross-paper reasoning and multi-tool orchestration in authentic research scenarios. In this work, we propose PaperArena, a benchmark to evaluate LLM-based agents on questions that require integrating information across multiple papers with the assistance of external tools. Given a research question, agents should formulate a reasoning plan, interact with multiple papers, and invoke appropriate tools to produce a well-grounded answer. To support standardized evaluation, we provide a platform for agent execution, offering a modular tool environment including multimodal parsing, context retrieval, and programmatic computation. Experiments reveal that even the leading LLM powering a well-established agentic workflow achieves merely 38.78% average accuracy, while on the hard subset, accuracy drops to only 18.47%. We also analyze reasoning traces and diagnose agent behavior, providing the community with insights to develop and evaluate more capable scientific agents.
|
https://arxiv.org/abs/2510.10909
|
Academic Papers
|
svg
|
00d1dccdb206f68862c642322983da1bf8bcfcb6d41637fb3b7c00ac20fe8665
|
2026-02-02T00:00:00-05:00
|
Stronger-MAS: Multi-Agent Reinforcement Learning for Collaborative LLMs
|
arXiv:2510.11062v5 Announce Type: replace Abstract: Multi-agent systems (MAS) and reinforcement learning (RL) are widely used to enhance the agentic capabilities of large language models (LLMs). MAS improves task performance through role-based orchestration, while RL uses environmental rewards to learn stronger policies, such as GRPO-style optimization. However, applying on-policy RL to MAS remains underexplored and presents unique challenges. Algorithmically, standard GRPO grouping assumptions break down because prompts vary by role and by turn. System-wise, the training stack must support MAS-workflow rollouts and on-policy updates for both single-policy and multi-policy models. We propose AT-GRPO, which includes (i) an agent- and turn-wise grouped RL algorithm tailored to MAS and (ii) a training system that supports both single- and multi-policy regimes. Across game, planning, coding, and math tasks, AT-GRPO delivers substantial gains. On long-horizon planning, it increases accuracy from a 14.0 to 47.0 percent single-agent RL baseline to 96.0 to 99.5 percent. It also improves reasoning performance, with average gains of 3.87 to 7.62 percent on coding tasks and 9.0 to 17.93 percent on math. Code and environments are available at: https://github.com/pettingllms-ai/PettingLLMs.
|
https://arxiv.org/abs/2510.11062
|
Academic Papers
|
svg
|
7ea555cf4db2128467b58ef7a775b976727950ead594461c549752dee08cb2f1
|
2026-02-02T00:00:00-05:00
|
Thompson Sampling via Fine-Tuning of LLMs
|
arXiv:2510.13328v3 Announce Type: replace Abstract: Bayesian optimization in large unstructured discrete spaces is often hindered by the computational cost of maximizing acquisition functions due to the absence of gradients. We propose a scalable alternative based on Thompson sampling that eliminates the need for acquisition function maximization by directly parameterizing the probability that a candidate yields the maximum reward. Our approach, Thompson Sampling via Fine-Tuning (ToSFiT) leverages the prior knowledge embedded in prompt-conditioned large language models, and incrementally adapts them toward the posterior. Theoretically, we derive a novel regret bound for a variational formulation of Thompson Sampling that matches the strong guarantees of its standard counterpart. Our analysis reveals the critical role of careful adaptation to the posterior probability of maximality -- a principle that underpins our ToSFiT algorithm. Empirically, we validate our method on three diverse tasks: FAQ response refinement, thermally stable protein search, and quantum circuit design. Within a collection of methods covering Bayesian optimization, reinforcement learning, and evolutionary search, ToSFiT exhibits both state-of-the-art sample efficiency and computational efficiency.
|
https://arxiv.org/abs/2510.13328
|
Academic Papers
|
svg
|
2c7191d959c4d409b86db940de4af084939c03278b457acaad127cf0b811ab7a
|
2026-02-02T00:00:00-05:00
|
On Your Own: Pro-level Autonomous Drone Racing in Uninstrumented Arenas
|
arXiv:2510.13644v2 Announce Type: replace Abstract: Drone technology is proliferating in many industries, including agriculture, logistics, defense, infrastructure, and environmental monitoring. Vision-based autonomy is one of its key enablers, particularly for real-world applications. This is essential for operating in novel, unstructured environments where traditional navigation methods may be unavailable. Autonomous drone racing has become the de facto benchmark for such systems. State-of-the-art research has shown that autonomous systems can surpass human-level performance in racing arenas. However, the direct applicability to commercial and field operations is still limited, as current systems are often trained and evaluated in highly controlled environments. In our contribution, the system's capabilities are analyzed within a controlled environment -- where external tracking is available for ground-truth comparison -- but also demonstrated in a challenging, uninstrumented environment -- where ground-truth measurements were never available. We show that our approach can match the performance of professional human pilots in both scenarios.
|
https://arxiv.org/abs/2510.13644
|
Academic Papers
|
svg
|
896517a19d7b1f85ad301b1fc48a51bbe3b3ad3422b2dfbe4417802f84c5cf5d
|
2026-02-02T00:00:00-05:00
|
Identity-GRPO: Optimizing Multi-Human Identity-preserving Video Generation via Reinforcement Learning
|
arXiv:2510.14256v3 Announce Type: replace Abstract: While advanced methods like VACE and Phantom have advanced video generation for specific subjects in diverse scenarios, they struggle with multi-human identity preservation in dynamic interactions, where consistent identities across multiple characters are critical. To address this, we propose Identity-GRPO, a human feedback-driven optimization pipeline for refining multi-human identity-preserving video generation. First, we construct a video reward model trained on a large-scale preference dataset containing human-annotated and synthetic distortion data, with pairwise annotations focused on maintaining human consistency throughout the video. We then employ a GRPO variant tailored for multi-human consistency, which greatly enhances both VACE and Phantom. Through extensive ablation studies, we evaluate the impact of annotation quality and design choices on policy optimization. Experiments show that Identity-GRPO achieves up to 18.9% improvement in human consistency metrics over baseline methods, offering actionable insights for aligning reinforcement learning with personalized video generation.
|
https://arxiv.org/abs/2510.14256
|
Academic Papers
|
svg
|
e461410787c2eb0e8cd9fef3894f66733902617033c335baabf193e9c894cc79
|
2026-02-02T00:00:00-05:00
|
DialectGen: Benchmarking and Improving Dialect Robustness in Multimodal Generation
|
arXiv:2510.14949v2 Announce Type: replace Abstract: Contact languages like English exhibit rich regional variations in the form of dialects, which are often used by dialect speakers interacting with generative models. However, can multimodal generative models effectively produce content given dialectal textual input? In this work, we study this question by constructing a new large-scale benchmark spanning six common English dialects. We work with dialect speakers to collect and verify over 4200 unique prompts and evaluate on 17 image and video generative models. Our automatic and human evaluation results show that current state-of-the-art multimodal generative models exhibit 32.26% to 48.17% performance degradation when a single dialect word is used in the prompt. Common mitigation methods such as fine-tuning and prompt rewriting can only improve dialect performance by small margins (< 7%), while potentially incurring significant performance degradation in Standard American English (SAE). To this end, we design a general encoder-based mitigation strategy for multimodal generative models. Our method teaches the model to recognize new dialect features while preserving SAE performance. Experiments on models such as Stable Diffusion 1.5 show that our method is able to simultaneously raise performance on five dialects to be on par with SAE (+34.4%), while incurring near zero cost to SAE performance.
|
https://arxiv.org/abs/2510.14949
|
Academic Papers
|
svg
|
9485441713b317164e5fb7ef6e5570250eb35ef30f2eb58419dc048b6e69deeb
|
2026-02-02T00:00:00-05:00
|
LLM Latent Reasoning as Chain of Superposition
|
arXiv:2510.15522v2 Announce Type: replace Abstract: Latent reasoning offers a computation-efficient alternative to Chain-of-Thought but often suffers from performance degradation due to distributional misalignment and ambiguous chain definitions. Ideally, latent reasoning should function as a superposition of multiple reasoning paths. To realize this, we introduce Latent-SFT, a unified framework addressing challenges at three levels: token, chain, and learning. First, we define the Latent-Vocab to constrain hidden states within the pre-trained vocab-space. Second, we construct the Latent-Chain via Induction-Supervision Masking to ensure semantic compactness and sufficiency. Third, we employ Latent-Optim with stochastic Gumbel-Softmax to guide the model toward generalizable solutions. Empirical results demonstrate that Latent-SFT consistently outperforms explicit SFT across six mathematical benchmarks (e.g., GSM8k, AIME24) while achieving a 2.7x to 5.5x reduction in reasoning length. Analysis confirms that our method effectively captures a superposition of diverse reasoning trajectories rather than merely compressing a single path.
|
https://arxiv.org/abs/2510.15522
|
Academic Papers
|
svg
|
b9aa18247092b71d3fd429b8ecc377d884118f0e85fad2c40d53e9dba38b472b
|
2026-02-02T00:00:00-05:00
|
Open Shouldn't Mean Exempt: Open-Source Exceptionalism and Generative AI
|
arXiv:2510.16048v2 Announce Type: replace Abstract: Open-source status should not shield generative artificial intelligence systems from ethical or legal accountability. Through a rigorous analysis of regulatory, legal, and policy frameworks, this Article contends that open-source GenAI must be held to the same standards as proprietary systems. While recognizing the value of openness for scientific advancement, I propose a narrowly tailored safe harbor for bona fide, non-commercial research, conditioned on strict compliance with defined criteria. This Article critically examines and refutes the core claims of open-source exceptionalism--namely, that open-source GenAI disrupts entrenched oligopolies, democratizes access, and uniquely drives innovation. The evidence shows that open-source GenAI can facilitate unlawful conduct, exacerbate environmental harms, and reinforce existing power structures. Rhetoric around "democratization" and "innovation" often serves as an unsubstantiated basis for regulatory exemptions not afforded to proprietary systems. This Article ultimately advocates for a framework that promotes responsible AI development, balancing openness with robust legal and ethical safeguards and a clear-eyed assessment of societal impacts.
|
https://arxiv.org/abs/2510.16048
|
Academic Papers
|
svg
|
eb9ea120efc89e123eb7545a9b09ef1e5bbf28b3659534484d7922a4a6c0c6be
|
2026-02-02T00:00:00-05:00
|
In the Mood to Exclude: Revitalizing Trespass to Chattels in the Era of GenAI Scraping
|
arXiv:2510.16049v2 Announce Type: replace Abstract: GenAI companies are strip-mining the web. Their scraping bots harvest content at an unprecedented scale, circumventing technical barriers to fuel billion-dollar models while creators receive nothing. Courts have enabled this exploitation by misunderstanding what property rights protect online. The prevailing view treats websites as mere repositories of intellectual property and dismisses trespass claims absent server damage. That framework grants AI companies presumptive access while ignoring the economic devastation they inflict. But the content is severable from the website itself. This paper reframes the debate: websites are personal property as integrated digital assets subject to the same exclusionary rights as physical chattels. When scrapers bypass access controls and divert traffic that sustains a website's value, they commit actionable trespass. The law need not create new protections; it need only apply existing property principles to digital space. Courts and litigants have struggled to police unwanted, large-scale scraping because copyright preemption often narrows available claims, leaving copyright and its fair use defense as the primary battleground. Trespass to chattels offers a superior path, grounded in the fundamental right to exclude unwanted intrusions. Reviving this tort would protect not only content creators but also the digital ecosystem. Such protection would discourage exploitative scraping, preserve incentives for content creation, help protect privacy and personal data, and safeguard autonomy and expression. Reaffirming website owners' right to exclude is essential to maintaining a fair and sustainable online environment.
|
https://arxiv.org/abs/2510.16049
|
Academic Papers
|
svg
|
f6beb65baa2a2524a440060eb22f03d85355704df5e9163562e94246f7bf20d5
|
2026-02-02T00:00:00-05:00
|
DDSC: Dynamic Dual-Signal Curriculum for Data-Efficient Acoustic Scene Classification under Domain Shift
|
arXiv:2510.17345v2 Announce Type: replace Abstract: Acoustic scene classification (ASC) suffers from device-induced domain shift, especially when labels are limited. Prior work focuses on curriculum-based training schedules that structure data presentation by ordering or reweighting training examples from easy-to-hard to facilitate learning; however, existing curricula are static, fixing the ordering or the weights before training and ignoring that example difficulty and marginal utility evolve with the learned representation. To overcome this limitation, we propose the Dynamic Dual-Signal Curriculum (DDSC), a training schedule that adapts the curriculum online by combining two signals computed each epoch: a domain-invariance signal and a learning-progress signal. A time-varying scheduler fuses these signals into per-example weights that prioritize domain-invariant examples in early epochs and progressively emphasize device-specific cases. DDSC is lightweight, architecture-agnostic, and introduces no additional inference overhead. Under the official DCASE 2024 Task 1 protocol, DDSC consistently improves cross-device performance across diverse ASC baselines and label budgets, with the largest gains on unseen-device splits.
|
https://arxiv.org/abs/2510.17345
|
Academic Papers
|
svg
|
9f4eacc1912bf0f70544710358956b73d4bf92c1aaf87b21249a9a6ebaf6ec18
|
2026-02-02T00:00:00-05:00
|
TopSeg: A Multi-Scale Topological Framework for Data-Efficient Heart Sound Segmentation
|
arXiv:2510.17346v2 Announce Type: replace Abstract: Deep learning approaches for heart-sound (PCG) segmentation built on time-frequency features can be accurate but often rely on large expert-labeled datasets, limiting robustness and deployment. We present TopSeg, a topological representation-centric framework that encodes PCG dynamics with multi-scale topological features and decodes them using a lightweight temporal convolutional network (TCN) with an order- and duration-constrained inference step. To evaluate data efficiency and generalization, we train exclusively on PhysioNet 2016 dataset with subject-level subsampling and perform external validation on CirCor dataset. Under matched-capacity decoders, the topological features consistently outperform spectrogram and envelope inputs, with the largest margins at low data budgets; as a full system, TopSeg surpasses representative end-to-end baselines trained on their native inputs under the same budgets while remaining competitive at full data. Ablations at 10% training confirm that all scales contribute and that combining H_0 and H_1 yields more reliable S1/S2 localization and boundary stability. These results indicate that topology-aware representations provide a strong inductive bias for data-efficient, cross-dataset PCG segmentation, supporting practical use when labeled data are limited.
|
https://arxiv.org/abs/2510.17346
|
Academic Papers
|
svg
|
0ed64f4434424b52346e1664976f0bbdc2399b32bfbc3793f0ec3348baa8fe45
|
2026-02-02T00:00:00-05:00
|
Evaluating LLMs for Career Guidance: Comparative Analysis of Computing Competency Recommendations Across Ten African Countries
|
arXiv:2510.18902v2 Announce Type: replace Abstract: Employers increasingly expect graduates to utilize large language models (LLMs) in the workplace, yet the competencies needed for computing roles across Africa remain unclear given varying national contexts. This study examined how six LLMs, namely ChatGPT 4, DeepSeek, Gemini, Claude 3.5, Llama 3, and Mistral AI, describe entry-level computing career expectations across ten African countries. Using the Computing Curricula 2020 framework and drawing on Digital Colonialism Theory and Ubuntu Philosophy, content analysis of 60 LLM responses to standardized prompts reveals consistent coverage of technical competencies such as cloud computing and programming, but notable differences in non-technical competencies, particularly ethics and responsible AI use. Models vary considerably in recognizing country-specific factors, including local technology ecosystems, language requirements, and national policies, averaging only 35.4% contextual awareness overall. Open-source models demonstrated stronger contextual awareness and better balance between technical and professional skills, with Llama (4.47/5) and DeepSeek (4.25/5) outperforming proprietary alternatives ChatGPT-4 (3.90/5) and Claude (3.46/5). However, Mistral's poor contextual performance (0.00/4) despite being open-source indicates that development philosophy alone does not guarantee contextual responsiveness. This first comprehensive comparison of LLM career guidance for African computing students uncovers entrenched infrastructure assumptions and Western-centric biases that create gaps between technical recommendations and local realities. The findings challenge assumptions about AI tool quality in resource-constrained settings and underscore the need for decolonial approaches to AI in education, emphasizing contextual relevance and hybrid human-AI guidance models.
|
https://arxiv.org/abs/2510.18902
|
Academic Papers
|
svg
|
47695968470dab1039c69394393ff31d1f0663847017767fdf034cf26a750cb4
|
2026-02-02T00:00:00-05:00
|
Context-aware Fairness Evaluation and Mitigation in LLMs
|
arXiv:2510.18914v2 Announce Type: replace Abstract: Large language models often display undesirable behaviors embedded in their internal representations, including unfairness, inconsistency drift, amplification of harmful content, and the propagation of unwanted patterns during extended dialogues and conversations. Although training-time or data-centric methods attempt to reduce these effects, they are computationally expensive, irreversible once deployed, and slow to adapt to new conversational contexts. Pruning-based methods provide a flexible and transparent way to reduce bias by adjusting the neurons responsible for certain behaviors. However, most existing approaches are static; once a neuron is removed, the model loses the ability to adapt when the conversation or context changes. To address this, we propose a dynamic, reversible, pruning-based framework that detects context-aware neuron activations and applies adaptive masking to modulate their influence during generation. Our inference-time solution provides fine-grained, memory-aware mitigation with knowledge-preserved, more coherent behavior across multilingual single- and multi-turn dialogues, enabling dynamic fairness control in real-world conversational AI.
|
https://arxiv.org/abs/2510.18914
|
Academic Papers
|
svg
|
d684f4e7c55c15b35b4e587ae592acba29997b858212395504538bd2a13176ed
|
2026-02-02T00:00:00-05:00
|
Serverless GPU Architecture for Enterprise HR Analytics: A Production-Scale BDaaS Implementation
|
arXiv:2510.19689v2 Announce Type: replace Abstract: Industrial and government organizations increasingly depend on data-driven analytics for workforce, finance, and regulated decision processes, where timeliness, cost efficiency, and compliance are critical. Distributed frameworks such as Spark and Flink remain effective for massive-scale batch or streaming analytics but introduce coordination complexity and auditing overheads that misalign with moderate-scale, latency-sensitive inference. Meanwhile, cloud providers now offer serverless GPUs, and models such as TabNet enable interpretable tabular ML, motivating new deployment blueprints for regulated environments. In this paper, we present a production-oriented Big Data as a Service (BDaaS) blueprint that integrates a single-node serverless GPU runtime with TabNet. The design leverages GPU acceleration for throughput, serverless elasticity for cost reduction, and feature-mask interpretability for IL4/FIPS compliance. We conduct benchmarks on the HR, Adult, and BLS datasets, comparing our approach against Spark and CPU baselines. Our results show that GPU pipelines achieve up to 4.5x higher throughput, 98x lower latency, and 90% lower cost per 1K inferences compared to Spark baselines, while compliance mechanisms add only ~5.7 ms latency with p99 < 22 ms. Interpretability remains stable under peak load, ensuring reliable auditability. Taken together, these findings provide a compliance-aware benchmark, a reproducible Helm-packaged blueprint, and a decision framework that demonstrate the practicality of secure, interpretable, and cost-efficient serverless GPU analytics for regulated enterprise and government settings.
|
https://arxiv.org/abs/2510.19689
|
Academic Papers
|
svg
|
21871a934f825fc6aa473ad9644803a637548ade4b5c85c8c2ad218c21a84e20
|
2026-02-02T00:00:00-05:00
|
MARS-M: When Variance Reduction Meets Matrices
|
arXiv:2510.21800v3 Announce Type: replace Abstract: Matrix-based preconditioned optimizers, such as Muon, have recently been shown to be more efficient than scalar-based optimizers for training large-scale neural networks, including large language models (LLMs). Recent benchmark studies of LLM pretraining optimizers have demonstrated that variance-reduction techniques such as MARS can substantially speed up training compared with standard optimizers that do not employ variance reduction. In this paper, we introduce MARS-M, a new optimizer that integrates MARS-style variance reduction with Muon. Under standard regularity conditions, we prove that MARS-M converges to a first-order stationary point at a rate of $\tilde{\mathcal{O}}(T^{-1/3})$, improving upon the $\tilde{\mathcal{O}}(T^{-1/4})$ rate attained by Muon. Empirical results on language modeling and computer vision tasks demonstrate that MARS-M consistently yields lower losses and improved performance across various downstream benchmarks. The implementation of MARS-M is available at https://github.com/AGI-Arena/MARS/tree/main/MARS_M.
|
https://arxiv.org/abs/2510.21800
|
Academic Papers
|
svg
|
ce90fabc97754ed9e8bb4602a702ff7ea5cae086654eb14db76103e51a81641a
|
2026-02-02T00:00:00-05:00
|
TOM-SWE: User Mental Modeling For Software Engineering Agents
|
arXiv:2510.21903v2 Announce Type: replace Abstract: Recent advances in coding agents have made them capable of planning, editing, running, and testing complex code bases. Despite their growing ability in coding tasks, these systems still struggle to infer and track user intent, especially when instructions are underspecified or context-dependent. To bridge this gap, we introduce ToM-SWE, a dual-agent architecture that pairs a primary software-engineering (SWE) agent with a lightweight theory-of-mind (ToM) partner agent dedicated to modeling the user's mental state. The ToM agent infers user goals, constraints, and preferences from instructions and interaction history, maintains a persistent memory of the user, and provides user-related suggestions to the SWE agent. In two software engineering benchmarks (ambiguous SWE-bench and stateful SWE-bench), ToM-SWE improves task success rates and user satisfaction. Notably, on the stateful SWE benchmark, a newly introduced evaluation that provides agents with a user simulator along with previous interaction histories, ToM-SWE achieves a substantially higher task success rate of 59.7% compared to 18.1% for OpenHands, a state-of-the-art SWE agent. Furthermore, in a three-week study with professional developers using ToM-SWE in their daily work, participants found it useful 86% of the time, underscoring the value of stateful user modeling for practical coding agents.
|
https://arxiv.org/abs/2510.21903
|
Academic Papers
|
svg
|