| id | published | title | description | link | category | image |
|---|---|---|---|---|---|---|
| e21024c3848eeb2ddecc4bdb71b879db7ad0c1dc2116b7f3a1107960dfb9f6c1 | 2026-01-16T00:00:00-05:00 | On the Need to Rethink Trust in AI Assistants for Software Development: A Critical Review | arXiv:2504.12461v3 Announce Type: replace Abstract: Trust is a fundamental concept in human decision-making and collaboration that has long been studied in philosophy and psychology. However, software engineering (SE) articles often use the term trust informally; providing an explicit definition or embedding results in established trust models is rare. In SE research on AI assistants, this practice culminates in equating trust with the likelihood of accepting generated content, which, in isolation, does not capture the full conceptual complexity of trust. Without a common definition, true secondary research on trust is impossible. The objectives of our research were: (1) to present the psychological and philosophical foundations of human trust, (2) to systematically study how trust is conceptualized in SE and the related disciplines human-computer interaction and information systems, and (3) to discuss limitations of equating trust with content acceptance, outlining how SE research can adopt existing trust models to overcome the widespread informal use of the term trust. We conducted a literature review across disciplines and a critical review of recent SE articles with a focus on trust conceptualizations. We found that trust is rarely defined or conceptualized in SE articles. Related disciplines commonly embed their methodology and results in established trust models, clearly distinguishing, for example, between initial trust and trust formation and between appropriate and inappropriate trust. On a meta-scientific level, other disciplines even discuss whether and when trust can be applied to AI assistants at all. Our study reveals a significant maturity gap of trust research in SE compared to other disciplines. We provide concrete recommendations on how SE researchers can adopt established trust models and instruments to study trust in AI assistants beyond the acceptance of generated software artifacts. | https://arxiv.org/abs/2504.12461 | Academic Papers | svg |
| 0943c518f039bc26d9dde8ec3c02392b4fe594c1ff192a6b26910c50367e345e | 2026-01-16T00:00:00-05:00 | Pushing the frontiers of subexponential FPT time for Feedback Vertex Set | arXiv:2504.17708v2 Announce Type: replace Abstract: The paper deals with the Feedback Vertex Set problem parameterized by the solution size. Given a graph $G$ and a parameter $k$, one has to decide if there is a set $S$ of at most $k$ vertices such that $G-S$ is acyclic. Assuming the Exponential Time Hypothesis, it is known that FVS cannot be solved in time $2^{o(k)}n^{\mathcal{O}(1)}$ in general graphs. To overcome this, many recent results considered FVS restricted to particular intersection graph classes and provided such $2^{o(k)}n^{\mathcal{O}(1)}$ algorithms. In this paper we provide generic conditions on a graph class for the existence of an algorithm solving FVS in subexponential FPT time, i.e. time $2^{k^\varepsilon} \mathop{\rm poly}(n)$, for some $\varepsilon<1$, where $n$ denotes the number of vertices of the instance and $k$ the parameter. On the one hand this result unifies algorithms that have been proposed over the years for several graph classes such as planar graphs, map graphs, unit-disk graphs, pseudo-disk graphs, and string graphs of bounded edge-degree. On the other hand it extends the tractability horizon of FVS to new classes that are not amenable to previously used techniques, in particular intersection graphs of "thin" objects like segment graphs or more generally $s$-string graphs. | https://arxiv.org/abs/2504.17708 | Academic Papers | svg |
| 994f9784d74f5730c2f357254f51904f76f299dace8a5b65f30cbfdb72019337 | 2026-01-16T00:00:00-05:00 | Mixed Bernstein-Fourier Approximants for Optimal Trajectory Generation with Periodic Behavior | arXiv:2504.17969v3 Announce Type: replace Abstract: Efficient trajectory generation is crucial for autonomous systems; however, current numerical methods often struggle to handle periodic behaviors effectively, particularly when the onboard sensors require equidistant temporal sampling. This paper introduces a novel mixed Bernstein-Fourier approximation framework tailored explicitly for optimal motion planning. Our proposed methodology leverages the uniform convergence properties of Bernstein polynomials for nonperiodic behaviors while effectively capturing periodic dynamics through the Fourier series. Theoretical results are established, including uniform convergence proofs for approximations of functions, derivatives, and integrals, as well as detailed error bound analyses. We further introduce a regulated least squares approach for determining approximation coefficients, enhancing numerical stability and practical applicability. Within an optimal control context, we establish the feasibility and consistency of approximated solutions to their continuous counterparts. We also extend the covector mapping theorem, providing theoretical guarantees for approximating dual variables crucial in verifying the necessary optimality conditions from Pontryagin's Maximum Principle. Numerical examples illustrate the method's superior performance, demonstrating substantial improvements in computational efficiency and precision in scenarios with complex periodic constraints and dynamics. Our mixed Bernstein-Fourier methodology thus presents a robust, theoretically grounded, and computationally efficient approach for advanced optimal trajectory planning in autonomous systems. | https://arxiv.org/abs/2504.17969 | Academic Papers | svg |
| 7e8391d061e512c00c18c3b32665dcc854d8d3cea2e806ef651b801f2a2c9a77 | 2026-01-16T00:00:00-05:00 | RTV-Bench: Benchmarking MLLM Continuous Perception, Understanding and Reasoning through Real-Time Video | arXiv:2505.02064v4 Announce Type: replace Abstract: Multimodal Large Language Models (MLLMs) have made rapid progress in perception, understanding, and reasoning, yet existing benchmarks fall short in evaluating these abilities under continuous and dynamic real-world video streams. Such settings require models to maintain coherent understanding and reasoning as visual scenes evolve over time. We introduce RTV-Bench, a fine-grained benchmark for real-time video analysis with MLLMs. It is built upon three key principles: multi-timestamp question answering, hierarchical question structures spanning perception and reasoning, and multi-dimensional evaluation of continuous perception, understanding, and reasoning. RTV-Bench comprises 552 diverse videos and 4,608 carefully curated QA pairs covering a wide range of dynamic scenarios. We evaluate a broad range of state-of-the-art MLLMs, including proprietary, open-source offline, and open-source real-time models. Our results show that real-time models generally outperform offline counterparts but still lag behind leading proprietary systems. While scaling model capacity generally yields performance gains, simply increasing the density of sampled input frames does not consistently translate into improved results. These observations suggest inherent limitations in current architectures when handling long-horizon video streams, underscoring the need for models explicitly designed for streaming video processing and analysis. | https://arxiv.org/abs/2505.02064 | Academic Papers | svg |
| b257883c466e341914e3c55dd3fe1dbec928bdcfd8556bea11374eb12c3baf46 | 2026-01-16T00:00:00-05:00 | Towards Understanding Deep Learning Model in Image Recognition via Coverage Test | arXiv:2505.08814v2 Announce Type: replace Abstract: Deep neural networks (DNNs) play a crucial role in the field of artificial intelligence, and their security-related testing has been a prominent research focus. By inputting test cases, the behavior of models is examined for anomalies, and coverage metrics are utilized to determine the extent of neurons covered by these test cases. With the widespread application and advancement of DNNs, different types of neural behaviors have garnered attention, leading to the emergence of various coverage metrics for neural networks. However, there is currently a lack of empirical research on these coverage metrics, specifically in analyzing the relationships and patterns between model depth, configuration information, and neural network coverage. This paper aims to investigate the relationships and patterns of four coverage metrics: primary functionality, boundary, hierarchy, and structural coverage. A series of empirical experiments were conducted, selecting LeNet, VGG, and ResNet as different DNN architectures, along with 10 models of varying depths ranging from 5 to 54 layers, to compare and study the relationships between different depths, configuration information, and various neural network coverage metrics. Additionally, an investigation was carried out on the relationships between modified decision/condition coverage and dataset size. Finally, three potential future directions are proposed to further contribute to the security testing of DNN models. | https://arxiv.org/abs/2505.08814 | Academic Papers | svg |
| dd558162fe166026cbfe371ab4b41ab31adf79502636c69507e481679162bc4b | 2026-01-16T00:00:00-05:00 | On the Failure of Latent State Persistence in Large Language Models | arXiv:2505.10571v4 Announce Type: replace Abstract: While Large Language Models (LLMs) excel in reasoning, whether they can sustain persistent latent states remains under-explored. The capacity to maintain and manipulate unexpressed, internal representations, analogous to human working memory, is a cornerstone of complex reasoning. In this paper, we formalize and quantify the "Latent State Persistence" (LSP) gap through three novel experiments. First, we utilize a Number Guessing Game, demonstrating that across independent queries, LLMs fail to allocate probability mass to a singular hidden choice, violating a fundamental probabilistic principle. Second, we employ a Yes-No Game to show that as the number of questions increases, LLMs suffer from "concept drift," leading to inevitable self-contradictions due to the lack of LSP. Finally, inspired by Mathematical Mentalism, we task models with tracking transformations on hidden variables, revealing a failure in variable binding and state evolution when the initial state is not explicitly present in the context. Collectively, these findings suggest that LLMs function as reactive post-hoc solvers rather than proactive planners with LSP. Our work provides a framework for evaluating the fidelity of internal representations and highlights a fundamental architectural divergence between autoregressive transformers and human-like cognition. | https://arxiv.org/abs/2505.10571 | Academic Papers | svg |
| 11d141f8651d59dd6a6a0cb2e2002d387df8fe931195b25dd05bcdb4587fecd3 | 2026-01-16T00:00:00-05:00 | SageAttention3: Microscaling FP4 Attention for Inference and An Exploration of 8-Bit Training | arXiv:2505.11594v3 Announce Type: replace Abstract: The efficiency of attention is important due to its quadratic time complexity. We enhance the efficiency of attention through two key contributions: First, we leverage the new FP4 Tensor Cores in Blackwell GPUs to accelerate attention computation. Our implementation achieves 1038 TOPS on RTX5090, which is a 5x speedup over the fastest FlashAttention on RTX5090. Experiments show that our FP4 attention can accelerate inference of various models in a plug-and-play way. Second, we pioneer low-bit attention to training tasks. Existing low-bit attention works like FlashAttention3 and SageAttention focus only on inference. However, the efficiency of training large models is also important. To explore whether low-bit attention can be effectively applied to training tasks, we design an accurate and efficient 8-bit attention for both forward and backward propagation. Experiments indicate that 8-bit attention achieves lossless performance in fine-tuning tasks but exhibits slower convergence in pretraining tasks. The code is available at https://github.com/thu-ml/SageAttention. | https://arxiv.org/abs/2505.11594 | Academic Papers | svg |
| d04a73dfa01ac425a8cb5195be3a237cde6e9ef708716589b9519494c347cac3 | 2026-01-16T00:00:00-05:00 | Why Knowledge Distillation Works in Generative Models: A Minimal Working Explanation | arXiv:2505.13111v3 Announce Type: replace Abstract: Knowledge distillation (KD) is a core component in the training and deployment of modern generative models, particularly large language models (LLMs). While its empirical benefits are well documented -- enabling smaller student models to emulate the performance of much larger teachers -- the underlying mechanisms by which KD improves generative quality remain poorly understood. In this work, we present a minimal working explanation of KD in generative modeling. Using a controlled simulation with mixtures of Gaussians, we demonstrate that distillation induces a trade-off between precision and recall in the student model. As the teacher distribution becomes more selective, the student concentrates more probability mass on high-likelihood regions at the expense of coverage, a behavior modulated by a single entropy-controlling parameter. We then validate this effect in a large-scale language modeling setup using the SmolLM2 family of models. Empirical results reveal the same precision-recall dynamics observed in simulation, where precision corresponds to sample quality and recall to distributional coverage. This precision-recall trade-off in LLMs is found to be especially beneficial in scenarios where sample quality is more important than diversity, such as instruction tuning or downstream generation. Our analysis provides a simple and general explanation for the effectiveness of KD in generative modeling. | https://arxiv.org/abs/2505.13111 | Academic Papers | svg |
| 2533a0445f4b74f626c6a154a7b4d19baa64cd76479bf9867d8f3aba6e09b0f2 | 2026-01-16T00:00:00-05:00 | Deep Learning for Continuous-Time Stochastic Control with Jumps | arXiv:2505.15602v3 Announce Type: replace Abstract: In this paper, we introduce a model-based deep-learning approach to solve finite-horizon continuous-time stochastic control problems with jumps. We iteratively train two neural networks: one to represent the optimal policy and the other to approximate the value function. Leveraging a continuous-time version of the dynamic programming principle, we derive two different training objectives based on the Hamilton-Jacobi-Bellman equation, ensuring that the networks capture the underlying stochastic dynamics. Empirical evaluations on different problems illustrate the accuracy and scalability of our approach, demonstrating its effectiveness in solving complex high-dimensional stochastic control tasks. | https://arxiv.org/abs/2505.15602 | Academic Papers | svg |
| 13c9199ec3cb4647c7c5c317b53f548a7505dc75aafd811b509b5cc2443f17ff | 2026-01-16T00:00:00-05:00 | LLM-Based Emulation of the Radio Resource Control Layer: Towards AI-Native RAN Protocols | arXiv:2505.16821v5 Announce Type: replace Abstract: Integrating Large AI Models (LAMs) into 6G mobile networks is a key enabler of the AI-Native Air Interface (AI-AI), where protocol intelligence must scale beyond handcrafted logic. This paper presents, to our knowledge, the first standards-compliant emulation of the Radio Resource Control (RRC) layer using a decoder-only LAM (LLAMA-class) fine-tuned with Low-Rank Adaptation (LoRA) on a multi-vendor corpus of real-world traces spanning both 5G and 4G systems. We treat RRC as a domain-specific language and construct a segmentation-safe question-answer (QA) dataset that preserves Abstract Syntax Notation (ASN.1) structure through linearization prior to Byte Pair Encoding (BPE) tokenization. The proposed approach combines parameter-efficient adaptation with schema-bounded prompting to ensure syntactic and procedural fidelity. Evaluation introduces a standards-aware triad -- ASN.1 conformance, field-level coverage analysis, and uplink-to-downlink state-machine checks -- alongside semantic similarity and latency profiling across 120 configurations. On 30k 5G request-response pairs plus an additional 4.8k QA turns from 4G sessions, our 8B model achieves a median cosine similarity of 0.97, a 61% relative gain over a zero-shot baseline, while sustaining high conformance rates. These results demonstrate that LAMs, when augmented with protocol-aware reasoning, can directly orchestrate control-plane procedures, laying the foundation for the future Artificial Intelligence (AI)-native Radio Access Network (RAN). | https://arxiv.org/abs/2505.16821 | Academic Papers | svg |
| 15b9679b317c32d74de5d811ce00c72fb1eae0311f92f7e7d87e4298bccd5e16 | 2026-01-16T00:00:00-05:00 | PMOA-TTS: Introducing the PubMed Open Access Textual Time Series Corpus | arXiv:2505.20323v2 Announce Type: replace Abstract: Clinical narratives encode temporal dynamics essential for modeling patient trajectories, yet large-scale temporally annotated resources are scarce. We introduce PMOA-TTS, a corpus of 124,699 single-patient PubMed Open Access case reports converted into structured textual timelines of (event, time) pairs using a scalable large-language-model pipeline (Llama 3.3 70B and DeepSeek-R1). The corpus comprises over 5.6 million timestamped events, alongside extracted demographics and diagnoses. Technical validation uses a clinician-curated gold set and three measures: semantic event matching, temporal concordance (c-index), and alignment error summarized with Area Under the Log-Time CDF (AULTC). We benchmark alternative prompting and model choices and provide documentation to support reproduction. PMOA-TTS enables research on timeline extraction, temporal reasoning, survival modeling and event forecasting from narrative text, and offers broad diagnostic and demographic coverage. Data and code are openly available in public repositories. | https://arxiv.org/abs/2505.20323 | Academic Papers | svg |
| a74044ee699bd775940361552da3a96fcf50e133efc15175c2a784bbf8178391 | 2026-01-16T00:00:00-05:00 | GraLoRA: Granular Low-Rank Adaptation for Parameter-Efficient Fine-Tuning | arXiv:2505.20355v2 Announce Type: replace Abstract: Low-Rank Adaptation (LoRA) is a popular method for parameter-efficient fine-tuning (PEFT) of generative models, valued for its simplicity and effectiveness. Despite recent enhancements, LoRA still suffers from a fundamental limitation: overfitting when the bottleneck is widened. It performs best at ranks 32-64, yet its accuracy stagnates or declines at higher ranks, still falling short of full fine-tuning (FFT) performance. We identify the root cause as LoRA's structural bottleneck, which introduces gradient entanglement across unrelated input channels and distorts gradient propagation. To address this, we introduce a novel structure, Granular Low-Rank Adaptation (GraLoRA), which partitions weight matrices into sub-blocks, each with its own low-rank adapter. With negligible computational or storage cost, GraLoRA overcomes LoRA's limitations, effectively increases the representational capacity, and more closely approximates FFT behavior. Experiments on code generation and commonsense reasoning benchmarks show that GraLoRA consistently outperforms LoRA and other baselines, achieving up to +8.5% absolute gain in Pass@1 on HumanEval+. These improvements hold across model sizes and rank settings, making GraLoRA a scalable and robust solution for PEFT. Code, data, and scripts are available at https://github.com/SqueezeBits/GraLoRA.git | https://arxiv.org/abs/2505.20355 | Academic Papers | svg |
| 47ff94b8f16b92eb84efdfbb8d7b8b4315d76975759c0903c1ebec7366d12a8a | 2026-01-16T00:00:00-05:00 | AgriFM: A Multi-source Temporal Remote Sensing Foundation Model for Agriculture Mapping | arXiv:2505.21357v3 Announce Type: replace Abstract: Accurate crop mapping fundamentally relies on modeling multi-scale spatiotemporal patterns, where spatial scales range from individual field textures to landscape-level context, and temporal scales capture both short-term phenological transitions and full growing-season dynamics. Transformer-based remote sensing foundation models (RSFMs) offer promising potential for crop mapping due to their innate ability for unified spatiotemporal processing. However, current RSFMs remain suboptimal for crop mapping: they either employ fixed spatiotemporal windows that ignore the multi-scale nature of crop systems or completely disregard temporal information by focusing solely on spatial patterns. To bridge these gaps, we present AgriFM, a multi-source remote sensing foundation model specifically designed for agricultural crop mapping. Our approach begins by establishing the necessity of simultaneous hierarchical spatiotemporal feature extraction, leading to the development of a modified Video Swin Transformer architecture where temporal down-sampling is synchronized with spatial scaling operations. This modified backbone enables efficient unified processing of long time-series satellite inputs. AgriFM leverages temporally rich data streams from three satellite sources including MODIS, Landsat-8/9 and Sentinel-2, and is pre-trained on a global representative dataset comprising over 25 million image samples supervised by land cover products. The resulting framework incorporates a versatile decoder architecture that dynamically fuses these learned spatiotemporal representations, supporting diverse downstream tasks. Comprehensive evaluations demonstrate AgriFM's superior performance over conventional deep learning approaches and state-of-the-art general-purpose RSFMs across all downstream tasks. Codes will be available at https://github.com/flyakon/AgriFM. | https://arxiv.org/abs/2505.21357 | Academic Papers | svg |
90866432757f5f54a34d18f5e748f1ad9ce1fefcf0fca6afc8b17175697e3316
|
2026-01-16T00:00:00-05:00
|
Optimal kernel regression bounds under energy-bounded noise
|
arXiv:2505.22235v3 Announce Type: replace Abstract: Non-conservative uncertainty bounds are key for both assessing an estimation algorithm's accuracy and in view of downstream tasks, such as its deployment in safety-critical contexts. In this paper, we derive a tight, non-asymptotic uncertainty bound for kernel-based estimation, which can also handle correlated noise sequences. Its computation relies on a mild norm-boundedness assumption on the unknown function and the noise, returning the worst-case function realization within the hypothesis class at an arbitrary query input location. The value of this function is shown to be given in terms of the posterior mean and covariance of a Gaussian process for an optimal choice of the measurement noise covariance. By rigorously analyzing the proposed approach and comparing it with other results in the literature, we show its effectiveness in returning tight and easy-to-compute bounds for kernel-based estimates.
|
https://arxiv.org/abs/2505.22235
|
Academic Papers
|
svg
|
| 2418e728803732ec4f809bc60ed27403d1456d9974256b07ce3f364c346a539d | 2026-01-16T00:00:00-05:00 | From Dormant to Deleted: Tamper-Resistant Unlearning Through Weight-Space Regularization | arXiv:2505.22310v2 Announce Type: replace Abstract: Recent unlearning methods for LLMs are vulnerable to relearning attacks: knowledge believed-to-be-unlearned re-emerges by fine-tuning on a small set of (even seemingly-unrelated) examples. We study this phenomenon in a controlled setting for example-level unlearning in vision classifiers. We make the surprising discovery that forget-set accuracy can recover from around 50% post-unlearning to nearly 100% with fine-tuning on just the retain set -- i.e., zero examples of the forget set. We observe this effect across a wide variety of unlearning methods, whereas for a model retrained from scratch excluding the forget set (gold standard), the accuracy remains at 50%. We observe that resistance to relearning attacks can be predicted by weight-space properties, specifically, $L_2$-distance and linear mode connectivity between the original and the unlearned model. Leveraging this insight, we propose a new class of methods that achieve state-of-the-art resistance to relearning attacks. | https://arxiv.org/abs/2505.22310 | Academic Papers | svg |
| 5bed3f1bb899e61d6142e8968822932738e717f6194f0ae73d14169d6c6e69fd | 2026-01-16T00:00:00-05:00 | MathArena: Evaluating LLMs on Uncontaminated Math Competitions | arXiv:2505.23281v3 Announce Type: replace Abstract: The rapid advancement of reasoning capabilities in large language models (LLMs) has led to notable improvements on mathematical benchmarks. However, many of the most commonly used evaluation datasets (e.g., AIME 2024) are widely available online, making it difficult to disentangle genuine reasoning from potential memorization. Furthermore, these benchmarks do not evaluate proof-writing capabilities, which are crucial for many mathematical tasks. To address this, we introduce MathArena, a new benchmark based on the following key insight: recurring math competitions provide a stream of high-quality, challenging problems that can be used for real-time evaluation of LLMs. By evaluating models as soon as new problems are released, we effectively eliminate the risk of contamination. Using this framework, we find strong signs of contamination in AIME 2024. Nonetheless, evaluations on harder competitions, such as CMIMC 2025, demonstrate impressive reasoning capabilities in top-performing models. MathArena is also the first benchmark for proof-writing capabilities. On IMO 2025, top models achieve slightly less than 40%, demonstrating both notable progress and significant room for improvement. So far, we have evaluated over $50$ models across seven competitions, totaling $162$ problems. As an evolving benchmark, MathArena will continue to track the progress of LLMs on newly released competitions, ensuring rigorous and up-to-date evaluation of mathematical reasoning. | https://arxiv.org/abs/2505.23281 | Academic Papers | svg |
| 811cbb3b34c94c52fd2d37aa8c833afeea6487d2adf59adecb6e71c4d5aab992 | 2026-01-16T00:00:00-05:00 | Exploiting Euclidean Distance Field Properties for Fast and Safe 3D planning with a modified Lazy Theta* | arXiv:2505.24024v2 Announce Type: replace Abstract: This paper presents the FS-Planner, a fast graph-search planner based on a modified Lazy Theta* algorithm that exploits the analytical properties of Euclidean Distance Fields (EDFs). We introduce a new cost function that integrates an EDF-based term proven to satisfy the triangle inequality, enabling efficient parent selection and reducing computation time while generating safe paths with smaller heading variations. We also derive an analytic approximation of the EDF integral along a segment and analyze the influence of the line-of-sight limit on the approximation error, motivating the use of a bounded visibility range. Furthermore, we propose a gradient-based neighbour-selection mechanism that decreases the number of explored nodes and improves computational performance without degrading safety or path quality. The FS-Planner produces safe paths with small heading changes without requiring the use of post-processing methods. Extensive experiments and comparisons in challenging 3D indoor simulation environments, complemented by tests in real-world outdoor environments, are used to evaluate and validate the FS-Planner. The results show consistent improvements in computation time, exploration efficiency, safety, and smoothness in a geometric sense compared with baseline heuristic planners, while maintaining sub-optimality within acceptable bounds. Finally, the proposed EDF-based cost formulation is orthogonal to the underlying search method and can be incorporated into other planning paradigms. | https://arxiv.org/abs/2505.24024 | Academic Papers | svg |
| d64d7fe5278807948c702d712aa0291d6f7ec1a5c484ea3da01e904e6602f971 | 2026-01-16T00:00:00-05:00 | Robot-R1: Reinforcement Learning for Enhanced Embodied Reasoning in Robotics | arXiv:2506.00070v2 Announce Type: replace Abstract: Large Vision-Language Models (LVLMs) have recently shown great promise in advancing robotics by combining embodied reasoning with robot control. A common approach involves training on embodied reasoning tasks related to robot control using Supervised Fine-Tuning (SFT). However, SFT datasets are often heuristically constructed and not explicitly optimized for improving robot control. Furthermore, SFT often leads to issues such as catastrophic forgetting and reduced generalization performance. To address these limitations, we introduce Robot-R1, a novel framework that leverages reinforcement learning to enhance embodied reasoning specifically for robot control. Robot-R1 learns to predict the next keypoint state required for task completion, conditioned on the current scene image and environment metadata derived from expert demonstrations. Inspired by the DeepSeek-R1 learning approach, Robot-R1 samples reasoning-based responses and reinforces those that lead to more accurate predictions. To rigorously evaluate Robot-R1, we also introduce a new benchmark that demands diverse embodied reasoning capabilities for the task. Our experiments show that models trained with Robot-R1 outperform SFT methods on embodied reasoning tasks. Despite having only 7B parameters, Robot-R1 even surpasses GPT-4o on reasoning tasks related to low-level action control, such as spatial and movement reasoning. | https://arxiv.org/abs/2506.00070 | Academic Papers | svg |
| dba62e2a80c87349526a312d2510f866e955772d54e290084207496d77bf8c0d | 2026-01-16T00:00:00-05:00 | NestedFP: High-Performance, Memory-Efficient Dual-Precision Floating Point Support for LLMs | arXiv:2506.02024v3 Announce Type: replace Abstract: Meeting service-level objectives (SLOs) in Large Language Models (LLMs) serving is critical, but managing the high variability in load presents a significant challenge. Recent advancements in FP8 inference, backed by native hardware support, offer a potential solution: executing FP16 models by default, while switching to FP8 models during sudden load surges to achieve higher throughput at the cost of a slight quality degradation. Although this approach facilitates effective SLO management, it introduces additional memory overhead due to storing two versions of the same model. In response, this paper proposes NestedFP, an LLM serving technique that supports both FP16 and FP8 models in a memory-efficient manner by overlaying FP8 parameters onto FP16 parameters, allowing both models to share the same FP16 memory footprint. By leveraging a compact data format for the overlay and a specialized GEMM kernel optimized for this format, NestedFP ensures minimal degradation in both model quality and inference throughput across both FP8 and FP16 modes. NestedFP provides a flexible platform for dynamic, SLO-aware precision selection. The code is available at https://github.com/SNU-ARC/NestedFP. | https://arxiv.org/abs/2506.02024 | Academic Papers | svg |
| 3fd72f1da12bd89274a89754eb9b523cc009ec9c75b15f60519fb7ad46ac9622 | 2026-01-16T00:00:00-05:00 | APEX: Asynchronous Parallel CPU-GPU Execution for Online LLM Inference on Constrained GPUs | arXiv:2506.03296v4 Announce Type: replace Abstract: Deploying large language models (LLMs) for online inference is often constrained by limited GPU memory, particularly due to the growing KV cache during auto-regressive decoding. Hybrid GPU-CPU execution has emerged as a promising solution by offloading KV cache management and parts of attention computation to the CPU. However, a key bottleneck remains: existing schedulers fail to effectively overlap CPU-offloaded tasks with GPU execution during the latency-critical, bandwidth-bound decode phase. This particularly penalizes real-time, decode-heavy applications (e.g., chat, Chain-of-Thought reasoning) which are currently underserved by existing systems, especially under memory pressure typical of edge or low-cost deployments. We present APEX, a novel, profiling-informed scheduling strategy that maximizes CPU-GPU parallelism during hybrid LLM inference. Unlike systems relying on static rules or purely heuristic approaches, APEX dynamically dispatches compute across heterogeneous resources by predicting execution times of CPU and GPU subtasks to maximize overlap while avoiding scheduling overheads. We evaluate APEX on diverse workloads and GPU architectures (NVIDIA T4, A10), using LLaMa-2-7B and LLaMa-3.1-8B models. Compared to GPU-only schedulers like vLLM, APEX improves throughput by 84% - 96% on T4 and 11% - 89% on A10 GPUs, while preserving latency. Against the best existing hybrid schedulers, it delivers up to 72% (T4) and 37% (A10) higher throughput in long-output settings. APEX significantly advances hybrid LLM inference efficiency on such memory-constrained hardware and provides a blueprint for scheduling in heterogeneous AI systems, filling a critical gap for efficient real-time LLM applications. | https://arxiv.org/abs/2506.03296 | Academic Papers | svg |
6163bd0e9e679764362c5e16b546884f4a79f38972c9a215c213f052223c9601
|
2026-01-16T00:00:00-05:00
|
Normalize Filters! Classical Wisdom for Deep Vision
|
arXiv:2506.04401v5 Announce Type: replace Abstract: Classical image filters, such as those for averaging or differencing, are carefully normalized to ensure consistency, interpretability, and to avoid artifacts like intensity shifts, halos, or ringing. In contrast, convolutional filters learned end-to-end in deep networks lack such constraints. Although they may resemble wavelets and blob/edge detectors, they are not normalized in the same or any way. Consequently, when images undergo atmospheric transfer, their responses become distorted, leading to incorrect outcomes. We address this limitation by proposing filter normalization, followed by learnable scaling and shifting, akin to batch normalization. This simple yet effective modification ensures that the filters are atmosphere-equivariant, enabling co-domain symmetry. By integrating classical filtering principles into deep learning (applicable to both convolutional neural networks and convolution-dependent vision transformers), our method achieves significant improvements on artificial and natural intensity variation benchmarks. Our ResNet34 could even outperform CLIP by a large margin. Our analysis reveals that unnormalized filters degrade performance, whereas filter normalization regularizes learning, promotes diversity, and improves robustness and generalization.
|
https://arxiv.org/abs/2506.04401
|
Academic Papers
|
svg
|
3bc657295258fbdaf002a6666c5f7c79bf7604cede36fd5d33b5523e2adb0e68
|
2026-01-16T00:00:00-05:00
|
Learning normalized image densities via dual score matching
|
arXiv:2506.05310v3 Announce Type: replace Abstract: Learning probability models from data is at the heart of many machine learning endeavors, but is notoriously difficult due to the curse of dimensionality. We introduce a new framework for learning \emph{normalized} energy (log probability) models that is inspired by diffusion generative models, which rely on networks optimized to estimate the score. We modify a score network architecture to compute an energy while preserving its inductive biases. The gradient of this energy network with respect to its input image is the score of the learned density, which can be optimized using a denoising objective. Importantly, the gradient with respect to the noise level provides an additional score that can be optimized with a novel secondary objective, ensuring consistent and normalized energies across noise levels. We train an energy network with this \emph{dual} score matching objective on the ImageNet64 dataset, and obtain a cross-entropy (negative log likelihood) value comparable to the state of the art. We further validate our approach by showing that our energy model \emph{strongly generalizes}: log probabilities estimated with two networks trained on non-overlapping data subsets are nearly identical. Finally, we demonstrate that both image probability and dimensionality of local neighborhoods vary substantially depending on image content, in contrast with conventional assumptions such as concentration of measure or support on a low-dimensional manifold.
|
https://arxiv.org/abs/2506.05310
|
Academic Papers
|
svg
|
87836c096a5a8af5515586ff8a27f380eb165a233d82abf1886a6748d7f60c0b
|
2026-01-16T00:00:00-05:00
|
The State-of-the-Art in Lifelog Retrieval: A Review of Progress at the ACM Lifelog Search Challenge Workshop 2022-24
|
arXiv:2506.06743v2 Announce Type: replace Abstract: The ACM Lifelog Search Challenge (LSC) is a venue that welcomes and compares systems that support the exploration of lifelog data, and in particular the retrieval of specific information, through an interactive competition format. This paper reviews the recent advances in interactive lifelog retrieval as demonstrated at the ACM LSC from 2022 to 2024. Through a detailed comparative analysis, we highlight key improvements across three main retrieval tasks: known-item search, question answering, and ad-hoc search. Our analysis identifies trends such as the widespread adoption of embedding-based retrieval methods (e.g., CLIP, BLIP), increased integration of large language models (LLMs) for conversational retrieval, and continued innovation in multimodal and collaborative search interfaces. We further discuss how specific retrieval techniques and user interface (UI) designs have impacted system performance, emphasizing the importance of balancing retrieval complexity with usability. Our findings indicate that embedding-driven approaches combined with LLMs show promise for lifelog retrieval systems. Likewise, improving UI design can enhance usability and efficiency. Additionally, we recommend reconsidering multi-instance system evaluations within the expert track to better manage variability in user familiarity and configuration effectiveness.
|
https://arxiv.org/abs/2506.06743
|
Academic Papers
|
svg
|
8d04c2ab4ca3b258c9974ad43a1deb3712a2c5920a609f802280d824e64a1dc0
|
2026-01-16T00:00:00-05:00
|
Audio Generation Through Score-Based Generative Modeling: Design Principles and Implementation
|
arXiv:2506.08457v2 Announce Type: replace Abstract: Diffusion models have emerged as powerful deep generative techniques, producing high-quality and diverse samples in applications in various domains including audio. While existing reviews provide overviews of diffusion models, in-depth discussion of their specific design choices remains limited. The audio diffusion model literature also lacks principled guidance for the implementation of these design choices and their comparisons for different applications. This survey provides a comprehensive review of diffusion model design with an emphasis on design principles for quality improvement and conditioning for audio applications. We adopt the score modeling perspective as a unifying framework that accommodates various interpretations, including recent approaches like flow matching. We systematically examine the training and sampling procedures of diffusion models, and audio applications through different conditioning mechanisms. To provide an integrated, unified codebase and to promote reproducible research and rapid prototyping, we introduce an open-source codebase (https://github.com/gzhu06/AudioDiffuser) that implements our reviewed framework for various audio applications. We demonstrate its capabilities through three case studies: audio generation, speech enhancement, and text-to-speech synthesis, with benchmark evaluations on standard datasets.
|
https://arxiv.org/abs/2506.08457
|
Academic Papers
|
svg
|
6cf6f0d58c2d3f58718a0f9ae32df9e3c0eb0aeabfc31a1143137c47a36c07f9
|
2026-01-16T00:00:00-05:00
|
Semi-Tensor-Product Based Convolutional Neural Networks
|
arXiv:2506.10407v3 Announce Type: replace Abstract: The semi-tensor product (STP) of vectors generalizes the conventional inner product, enabling algebraic operations between vectors of different dimensions. Building upon this foundation, we introduce a domain-based convolutional product and integrate it with the STP to formulate a padding-free convolutional operation. This new operation inherently avoids zero or other artificial padding, thereby eliminating redundant information and boundary artifacts commonly present in conventional convolutional neural networks. Based on this operation, we further develop an STP-based CNN framework that extends convolutional computation to irregular and cross-dimensional data domains. Applications to image processing and third-order signal identification demonstrate the proposed method's effectiveness in handling irregular, incomplete, and high-dimensional data without the distortions caused by padding.
|
https://arxiv.org/abs/2506.10407
|
Academic Papers
|
svg
|
c56cc60a35f9076dbbdd0a5cf6741cbc6e8d80e47d12ae9f907c2ff6f7b60962
|
2026-01-16T00:00:00-05:00
|
HP2C-DT: High-Precision High-Performance Computer-enabled Digital Twin
|
arXiv:2506.10523v2 Announce Type: replace Abstract: Digital twins are transforming the way we monitor, analyze, and control physical systems, but designing architectures that balance real-time responsiveness with heavy computational demands remains a challenge. Cloud-based solutions often struggle with latency and resource constraints, while edge-based approaches lack the processing power for complex simulations and data-driven optimizations. To address this problem, we propose the High-Precision High-Performance Computer-enabled Digital Twin (HP2C-DT) reference architecture, which integrates High-Performance Computing (HPC) into the computing continuum. Unlike traditional setups that use HPC only for offline simulations, HP2C-DT makes it an active part of digital twin workflows, dynamically assigning tasks to edge, cloud, or HPC resources based on urgency and computational needs. Furthermore, to bridge the gap between theory and practice, we introduce the HP2C-DT framework, a working implementation that uses COMPSs for seamless workload distribution across diverse infrastructures. We test it in a power grid use case, showing how it reduces communication bandwidth by an order of magnitude through edge-side data aggregation, improves response times by up to 2x via dynamic offloading, and maintains near-ideal strong scaling for compute-intensive workflows across a practical range of resources. These results demonstrate how an HPC-driven approach can push digital twins beyond their current limitations, making them smarter, faster, and more capable of handling real-world complexity.
|
https://arxiv.org/abs/2506.10523
|
Academic Papers
|
svg
|
6de42b6b2369893ae86ddf50dabfaab5ebed56505af4f33031f0353d4f6c7c0b
|
2026-01-16T00:00:00-05:00
|
Approximations for Fault-Tolerant Total and Partial Positive Influence Domination
|
arXiv:2506.12828v3 Announce Type: replace Abstract: In $\textit{total domination}$, given a graph $G=(V,E)$, we seek a minimum-size set of nodes $S\subseteq V$, such that every node in $V$ has at least one neighbor in $S$. We define a $\textit{fault-tolerant}$ version of total domination, where we require any node in $V \setminus S$ to have at least $m$ neighbors in $S$. Let $\Delta$ denote the maximum degree in $G$. We prove a first $1 + \ln(\Delta + m - 1)$ approximation for fault-tolerant total domination. We also consider fault-tolerant variants of the weighted $\textit{partial positive influence dominating set}$ problem, where we seek a minimum-size set of nodes $S\subseteq V$, such that every node in $V$ is either a member of $S$ or the sum of weights of its incident edges leading to nodes in $S$ is at least half of the sum of weights over all its incident edges. We prove the first logarithmic approximations for the simple, total, and connected variants of this problem. To prove the result for the connected case, we extend the general approximation framework for non-submodular functions from integer-valued to fractional-valued functions, which we believe is of independent interest.
|
https://arxiv.org/abs/2506.12828
|
Academic Papers
|
svg
|
5f4af74399adfedd25b65b1c76939c8a549e059a4b6f34c7d6953156d29377e1
|
2026-01-16T00:00:00-05:00
|
LittleBit: Ultra Low-Bit Quantization via Latent Factorization
|
arXiv:2506.13771v4 Announce Type: replace Abstract: Deploying large language models (LLMs) often faces challenges from substantial memory and computational costs. Quantization offers a solution, yet performance degradation in the sub-1-bit regime remains particularly difficult. This paper introduces LittleBit, a novel method for extreme LLM compression. It targets levels like 0.1 bits per weight (BPW), achieving nearly 31$\times$ memory reduction, e.g., Llama2-13B to under 0.9 GB. LittleBit represents weights in a low-rank form using latent matrix factorization, subsequently binarizing these factors. To counteract information loss from this extreme precision, it integrates a multi-scale compensation mechanism. This includes row, column, and an additional latent dimension that learns per-rank importance. Two key contributions enable effective training: Dual Sign-Value-Independent Decomposition (Dual-SVID) for quantization-aware training (QAT) initialization, and integrated Residual Compensation to mitigate errors. Extensive experiments confirm LittleBit's superiority in sub-1-bit quantization: e.g., its 0.1 BPW performance on Llama2-7B surpasses the leading method's 0.7 BPW. LittleBit establishes a new, viable size-performance trade-off--unlocking a potential 11.6$\times$ speedup over FP16 at the kernel level--and makes powerful LLMs practical for resource-constrained environments. Our code can be found at https://github.com/SamsungLabs/LittleBit.
|
https://arxiv.org/abs/2506.13771
|
Academic Papers
|
svg
|
56302ede176c1b69cfe9ce42184b0bfadc352a9709ed3043473659788a3ca0b8
|
2026-01-16T00:00:00-05:00
|
Advancing Safe Mechanical Ventilation Using Offline RL With Hybrid Actions and Clinically Aligned Rewards
|
arXiv:2506.14375v2 Announce Type: replace Abstract: Invasive mechanical ventilation (MV) is a life-sustaining therapy commonly used in the intensive care unit (ICU) for patients with severe and acute conditions. These patients frequently rely on MV for breathing. Given the high risk of death in such cases, optimal MV settings can reduce mortality, minimize ventilator-induced lung injury, shorten ICU stays, and ease the strain on healthcare resources. However, optimizing MV settings remains a complex and error-prone process due to patient-specific variability. While Offline Reinforcement Learning (RL) shows promise for optimizing MV settings, current methods struggle with the hybrid (continuous and discrete) nature of MV settings. Discretizing continuous settings leads to exponential growth in the action space, which limits the number of optimizable settings. Converting the predictions back to continuous can cause a distribution shift, compromising safety and performance. To address this challenge, in the IntelliLung project, we are developing an AI-based approach where we constrain the action space and employ factored action critics. This approach allows us to scale to six optimizable settings compared to 2-3 in previous studies. We adapt SOTA offline RL algorithms to operate directly on hybrid action spaces, avoiding the pitfalls of discretization. We also introduce a clinically grounded reward function based on ventilator-free days and physiological targets. Using multiobjective optimization for reward selection, we show that this leads to a more equitable consideration of all clinically relevant objectives. Notably, we develop a system in close collaboration with healthcare professionals that is aligned with real-world clinical objectives and designed with future deployment in mind.
|
https://arxiv.org/abs/2506.14375
|
Academic Papers
|
svg
|
f06133d7aeaa23af6c9eda9984c7dbb3236a11b5d2ac9e73fd9b21824cbce32c
|
2026-01-16T00:00:00-05:00
|
Curating art exhibitions using machine learning
|
arXiv:2506.19813v3 Announce Type: replace Abstract: Here we present four related artificial models, based on machine learning techniques, that attempt to learn from existing exhibitions curated by human experts in order to perform similar curatorial work. Of our four artificial intelligence models, three achieve a reasonable ability to imitate the curators responsible for those exhibitions, with varying degrees of precision and curatorial coherence. In particular, we draw two key insights: first, that there is sufficient information in these exhibitions to construct an artificial intelligence model that replicates past exhibitions with an accuracy well above random choices; and second, that using feature engineering and carefully designing the architecture of modest-size models can make them almost as good as those using so-called large language models such as GPT in a brute-force approach.
|
https://arxiv.org/abs/2506.19813
|
Academic Papers
|
svg
|
f39e3109fa394aaf069335855d2d12232d18662c667ac7d92097e12dada5743e
|
2026-01-16T00:00:00-05:00
|
The Open Proof Corpus: A Large-Scale Study of LLM-Generated Mathematical Proofs
|
arXiv:2506.21621v2 Announce Type: replace Abstract: In recent months, large language models (LLMs) have made significant progress in mathematical proof generation, but further advancement is hindered by the lack of a large-scale, high-quality dataset of human-evaluated proofs. While expensive to create, such a dataset is essential for driving improvements in training and enabling a rigorous analysis of proof generation capabilities. In this work, we present the Open Proof Corpus (OPC), a dataset comprising over 5,000 human-evaluated proofs produced by state-of-the-art LLMs. The OPC was specifically designed for broad applicability and downstream usage in proof generation research and is the first to include a substantial number of correct, LLM-generated solutions to problems from prestigious mathematics competitions such as the USAMO and IMO. Using the OPC, we explore critical questions in automated proof generation: (1) the performance gap between natural language and formal proof generation, (2) the discrepancy between final-answer accuracy and full-proof validity, and (3) the impact of best-of-n selection on proof quality. Finally, to showcase the utility of the OPC, we finetune an 8B-parameter model on the dataset, obtaining a model that performs on par with the best model, Gemini-2.5-Pro, on the task of evaluating proof correctness.
|
https://arxiv.org/abs/2506.21621
|
Academic Papers
|
svg
|
af7d6e48db11f09faba71f738285890b8bb7d0ca30dd9ffffed8e93408dc8aea
|
2026-01-16T00:00:00-05:00
|
Uncovering Systemic and Environment Errors in Autonomous Systems Using Differential Testing
|
arXiv:2507.03870v2 Announce Type: replace Abstract: When an autonomous agent behaves undesirably, including failure to complete a task, it can be difficult to determine whether the behavior is due to a systemic agent error, such as flaws in the model or policy, or an environment error, where a task is inherently infeasible under a given environment configuration, even for an ideal agent. As agents and their environments grow more complex, identifying the error source becomes increasingly difficult but critical for reliable deployment. We introduce AIProbe, a novel black-box testing technique that applies differential testing to attribute undesirable agent behaviors either to agent deficiencies, such as modeling or training flaws, or due to environmental infeasibility. AIProbe first generates diverse environmental configurations and tasks for testing the agent, by modifying configurable parameters using Latin Hypercube sampling. It then solves each generated task using a search-based planner, independent of the agent. By comparing the agent's performance to the planner's solution, AIProbe identifies whether failures are due to errors in the agent's model or policy, or due to unsolvable task conditions. Our evaluation across multiple domains shows that AIProbe significantly outperforms state-of-the-art techniques in detecting both total and unique errors, thereby contributing to a reliable deployment of autonomous agents.
|
https://arxiv.org/abs/2507.03870
|
Academic Papers
|
svg
|
ece58d51b30160fd9e88039a878263aa5bf3a0d5639fc98d6d3933d0d7b7df16
|
2026-01-16T00:00:00-05:00
|
COALA: Numerically Stable and Efficient Framework for Context-Aware Low-Rank Approximation
|
arXiv:2507.07580v2 Announce Type: replace Abstract: Recent studies suggest that context-aware low-rank approximation is a useful tool for compression and fine-tuning of modern large-scale neural networks. In this type of approximation, a norm is weighted by a matrix of input activations, significantly improving metrics over the unweighted case. Nevertheless, existing methods for neural networks suffer from numerical instabilities due to their reliance on classical formulas involving explicit Gram matrix computation and their subsequent inversion. We demonstrate that this can degrade the approximation quality or cause numerically singular matrices. To address these limitations, we propose a novel inversion-free regularized framework that is based entirely on stable decompositions and overcomes the numerical pitfalls of prior art. Our method handles several challenging scenarios: (1) when calibration matrices exceed GPU memory capacity, (2) when input activation matrices are nearly singular, and even (3) when insufficient data prevents unique approximation. For the latter, we prove that our solution converges to a desired approximation and derive explicit error bounds.
|
https://arxiv.org/abs/2507.07580
|
Academic Papers
|
svg
|
1c19b6e4a81c0a7321f3156e37c807f945a297054fa30e7ccaeebf391c34bd00
|
2026-01-16T00:00:00-05:00
|
A simple formalization of alpha-equivalence
|
arXiv:2507.10181v2 Announce Type: replace Abstract: While teaching untyped $\lambda$-calculus to undergraduate students, we were wondering why $\alpha$-equivalence is not directly inductively defined. In this paper, we demonstrate that this is indeed feasible. Specifically, we provide a grounded, inductive definition for $\alpha$-equivalence and show that it conforms to the specification provided in the literature. The work presented in this paper is fully formalized in the Rocq Prover.
|
https://arxiv.org/abs/2507.10181
|
Academic Papers
|
svg
|
8f42fbcdfd0bb6327b1a0feb917146a1b20c06416a69090e8906a9fde96a71b2
|
2026-01-16T00:00:00-05:00
|
CodeAssistBench (CAB): Dataset & Benchmarking for Multi-turn Chat-Based Code Assistance
|
arXiv:2507.10646v5 Announce Type: replace Abstract: Programming assistants powered by large language models have improved dramatically, yet existing benchmarks still evaluate them in narrow code-generation settings. Recent efforts such as InfiBench and StackEval rely on Stack Overflow questions and remain limited to single-turn interactions, manually curated data, and isolated snippets rather than full project environments. We introduce CodeAssistBench (CAB), the first benchmark for evaluating multi-turn, project-grounded programming assistance at scale. CAB automatically constructs datasets from GitHub issues tagged as questions, using an LLM-driven pipeline that filters noise, extracts runnable contexts, builds executable containers, and verifies environment correctness. This enables continuous, automated expansion across diverse repositories without manual intervention. Using CAB, we create a testbed of 3,286 real-world issues across 214 repositories, spanning seven languages. Evaluating state-of-the-art models reveals a substantial gap: while models achieve 70-83% accuracy on Stack Overflow-style questions, they solve only 7.22-16.49% of CAB issues from post-training-cutoff repositories. These results highlight a fundamental challenge: current LLMs struggle to provide assistance in realistic, project-specific contexts despite strong performance on traditional Q&A benchmarks. CAB provides a scalable, reproducible framework for advancing research in multi-turn, codebase-grounded programming agents. The benchmark and pipeline are fully automated and publicly available at https://github.com/amazon-science/CodeAssistBench/.
|
https://arxiv.org/abs/2507.10646
|
Academic Papers
|
svg
|
9a192b758751fedacb8545d0e169c639a41c7f25a9a37d769b269eb76ffa9d7f
|
2026-01-16T00:00:00-05:00
|
Keep the beat going: Automatic drum transcription with momentum
|
arXiv:2507.12596v2 Announce Type: replace Abstract: How can we process a piece of recorded music to detect and visualize the onset of each instrument? A simple, interpretable approach is based on partially fixed nonnegative matrix factorization (NMF). Yet despite the method's simplicity, partially fixed NMF is challenging to apply because the associated optimization problem is high-dimensional and non-convex. This paper explores two optimization approaches that preserve the nonnegative structure: a multiplicative update rule and projected gradient descent with momentum. These techniques are derived from the previous literature, but they have not been fully developed for partially fixed NMF before now. Results indicate that projected gradient descent with momentum achieves the higher accuracy of the two methods and satisfies stronger local convergence guarantees.
|
https://arxiv.org/abs/2507.12596
|
Academic Papers
|
svg
|
eb4490e7084542cb87b2fb95b5a371d49f01de314f90018ba21793ae2ab2316f
|
2026-01-16T00:00:00-05:00
|
Approximation algorithms for scheduling with rejection in green manufacturing
|
arXiv:2507.12635v3 Announce Type: replace Abstract: Motivated by green manufacturing, this paper investigates a scheduling with rejection problem subject to an energy consumption constraint. Machines are associated with non-uniform energy consumption rates, defined as the energy consumed per unit time. Each job is either rejected with a rejection penalty or accepted and scheduled on some machine for processing, which incurs energy consumption. The problem aims to minimize the makespan of the accepted jobs plus the total penalty of the rejected jobs while the total energy consumption is bounded by a given threshold. In this paper, when the number of machines is part of the input, we develop the first $(2+\epsilon)$-approximation algorithm for any fixed constant $\epsilon$ and a simple QPTAS as well as a PTAS for uniform energy consumption rates. Moreover, we present an FPTAS when the number of machines is a fixed constant.
|
https://arxiv.org/abs/2507.12635
|
Academic Papers
|
svg
|
1c79f72dbbc91341ee76d5cd6856701cec46920d0ce3f489cee0ba9b58dbac15
|
2026-01-16T00:00:00-05:00
|
Enhancing Smart Grid Information Exchanges: A Three-Phase Method for Evaluating Information and Data Models during their Development Process
|
arXiv:2507.12649v2 Announce Type: replace Abstract: The ongoing process of smart grid digitalisation is increasing the volume of automated information exchange across distributed energy systems. This has driven the development of new information and data models when existing models fail to offer an optimal description of the requisite information due to be exchanged. To prevent potential operational disruption - i.e. in the provision of flexibility - caused by flaws in these newly designed models, it is essential to conduct evaluations during the development process before these models are deployed. Current practices differ across domains. Beyond smart grid applications, information models are evaluated through explicit reviews using quality characteristics. Within smart grid contexts, evaluation focuses on data models and implicit system-level conformance and interoperability testing. However, no existing approach combines these explicit and implicit evaluation methods for both information and data models during their development. This limits early fault detection and increases potential model correction costs. To address this gap, we propose a three-phase evaluation method based on design science research. Our method integrates explicit and implicit approaches, applies them to information and data models and is adaptable to various design stages. We also introduce a set of quality characteristics to support explicit model evaluation. Overall, our contribution enhances the reliability and interoperability of smart grid information exchange.
|
https://arxiv.org/abs/2507.12649
|
Academic Papers
|
svg
|
437a58669403035b6783a15856b0da1b61029f302d1e894dc20463ead7e216a5
|
2026-01-16T00:00:00-05:00
|
A Framework of Distributed Source Encryption using Mutual Information Security Criterion and the Strong Converse Theorem
|
arXiv:2507.13294v4 Announce Type: replace Abstract: We reinvestigate the general distributed secure source coding based on the common key cryptosystem proposed by Oohama and Santoso (ITW 2021). They proposed a framework of distributed source encryption and derived the necessary and sufficient conditions to have reliable and secure transmission. However, the bounds of the rate region, which specify both necessary and sufficient conditions to have reliable and secure transmission under the proposed cryptosystem, were derived based on a self-tailored, non-standard security criterion. In this paper we adopt the standard security criterion, i.e., standard mutual information. We successfully establish the bounds of the rate region based on this security criterion. The information spectrum method and a variant of the Birkhoff-von Neumann theorem play an important role in deriving the result.
|
https://arxiv.org/abs/2507.13294
|
Academic Papers
|
svg
|
2fc221bc1f4b8afa2711712f683120ea7529643bdb0e0488719c6e527c30ea51
|
2026-01-16T00:00:00-05:00
|
1/2 order convergence rate of Euler-type methods for time-changed stochastic differential equations with super-linearly growing drift and diffusion coefficients
|
arXiv:2507.14562v4 Announce Type: replace Abstract: This paper investigates the strong convergence properties of two Euler-type methods for a class of time-changed stochastic differential equations (TCSDEs) with super-linearly growing drift and diffusion coefficients. Building upon existing research, we propose a backward Euler method (BEM) and introduce its explicit counterpart -- the projected Euler method (PEM). We prove that both methods converge strongly in the $L_2$-sense at the optimal rate of 1/2. This result extends the applicability of both the BEM and the PEM to a broader class of TCSDEs. Moreover, the two methods offer complementary strengths: while BEM possesses wide applicability, PEM is computationally more efficient. Numerical simulations confirm our theoretical findings and illustrate practical performance of both schemes.
|
https://arxiv.org/abs/2507.14562
|
Academic Papers
|
svg
|
11ae998b8f7a53d0468d26ce058367672a075c4a490635667ca5cf502384a036
|
2026-01-16T00:00:00-05:00
|
An intelligent agent-based simulation of human mobility in extreme urban morphologies
|
arXiv:2507.15143v2 Announce Type: replace Abstract: This paper investigates the feasibility of human mobility in extreme urban morphologies, characterized by high-density vertical structures and linear city layouts. To assess whether agents can navigate efficiently within such unprecedented topologies, we develop a hybrid simulation framework that integrates agent-based modeling, reinforcement learning (RL), supervised learning, and graph neural networks (GNNs). The simulation captures multi-modal transportation behaviors across multiple vertical levels and varying density scenarios, using both synthetic data and real-world traces from high-density cities. Experiments show that the full AI-integrated architecture enables agents to achieve an average commute time of 7.8-8.4 minutes, a satisfaction rate exceeding 89%, and a reachability index over 91%, even during peak congestion periods. Ablation studies indicate that removing intelligent modules such as RL or GNN significantly degrades performance, with commute times increasing by up to 85% and reachability falling below 70%. Environmental modeling demonstrates low energy consumption and minimal CO$_2$ emissions when electric modes are prioritized. These results suggest that efficient and sustainable mobility in extreme urban forms is achievable, provided adaptive AI systems, intelligent infrastructure, and real-time feedback mechanisms are implemented.
|
https://arxiv.org/abs/2507.15143
|
Academic Papers
|
svg
|
d57ecb5c8158ed1554d2b5cd9a3f4e14f0417cdd1cb581444e096e40a5950e06
|
2026-01-16T00:00:00-05:00
|
Pareto-Grid-Guided Large Language Models for Fast and High-Quality Heuristics Design in Multi-Objective Combinatorial Optimization
|
arXiv:2507.20923v3 Announce Type: replace Abstract: Multi-objective combinatorial optimization problems (MOCOP) frequently arise in practical applications that require the simultaneous optimization of conflicting objectives. Although traditional evolutionary algorithms can be effective, they typically depend on domain knowledge and repeated parameter tuning, limiting flexibility when applied to unseen MOCOP instances. Recently, integration of Large Language Models (LLMs) into evolutionary computation has opened new avenues for automatic heuristic generation, using their advanced language understanding and code synthesis capabilities. Nevertheless, most existing approaches predominantly focus on single-objective tasks, often neglecting key considerations such as runtime efficiency and heuristic diversity in multi-objective settings. To bridge this gap, we introduce Multi-heuristics for MOCOP via Pareto-Grid-guided Evolution of LLMs (MPaGE), a novel enhancement of the Simple Evolutionary Multiobjective Optimization (SEMO) framework that leverages LLMs and Pareto Front Grid (PFG) technique. By partitioning the objective space into grids and retaining top-performing candidates to guide heuristic generation, MPaGE utilizes LLMs to prioritize heuristics with semantically distinct logical structures during variation, thus promoting diversity and mitigating redundancy within the population. Through extensive evaluations, MPaGE demonstrates superior performance over existing LLM-based frameworks, and achieves competitive results to traditional Multi-objective evolutionary algorithms (MOEAs), with significantly faster runtime. Our code is available at: https://github.com/langkhachhoha/MPaGE.
|
https://arxiv.org/abs/2507.20923
|
Academic Papers
|
svg
|
446a64d2ffa4d6b5825cc987f5b5197bae6de5f360856e15cf4f2f28ed932b5d
|
2026-01-16T00:00:00-05:00
|
Out of Distribution, Out of Luck: How Well Can LLMs Trained on Vulnerability Datasets Detect Top 25 CWE Weaknesses?
|
arXiv:2507.21817v4 Announce Type: replace Abstract: Automated vulnerability detection research has made substantial progress, yet its real-world impact remains limited. Prior work found that current vulnerability datasets suffer from issues including label inaccuracy rates of 20%-71%, extensive duplication, and poor coverage of critical Common Weakness Enumeration (CWE) categories. These issues create a significant generalization gap where models achieve misleading In-Distribution (ID) accuracies (testing on splits from the same dataset) by exploiting spurious correlations rather than learning true vulnerability patterns. To address these limitations, we present a three-part solution. First, we introduce BenchVul, a manually curated and balanced test dataset covering the MITRE Top 25 Most Dangerous CWEs, to enable fair model evaluation. Second, we construct a high-quality training dataset, TitanVul, comprising 38,548 functions by aggregating seven public sources and applying deduplication and validation using a novel multi-agent LLM pipeline. Third, we propose a Realistic Vulnerability Generation (RVG) pipeline, which synthesizes context-aware vulnerability examples for underrepresented but critical CWE types through simulated development workflows. Our evaluation reveals that In-Distribution (ID) performance does not reliably predict Out-of-Distribution (OOD) performance on BenchVul. For example, a model trained on BigVul achieves the highest 0.703 ID accuracy but fails on BenchVul's real-world samples (0.493 OOD accuracy). Conversely, a model trained on our TitanVul achieves the highest OOD performance on both the real-world (0.881) and synthesized (0.785) portions of BenchVul, improving upon the next-best performing dataset by 5.3% and 11.8% respectively, despite a modest ID score (0.590). Augmenting TitanVul with our RVG further boosts this leading OOD performance, improving accuracy on real-world data by 5.8% (to 0.932).
|
https://arxiv.org/abs/2507.21817
|
Academic Papers
|
svg
|
c225985e1b38bd369b7d4eb65724418a0d669825e7a45f9e8fb8919606545055
|
2026-01-16T00:00:00-05:00
|
UEChecker: Detecting Unchecked External Call Vulnerabilities in DApps via Graph Analysis
|
arXiv:2508.01343v2 Announce Type: replace Abstract: The increasing number of attacks on the contract layer of DApps has resulted in economic losses amounting to $66 billion. Vulnerabilities arise when contracts interact with external protocols without verifying the results of the calls, leading to exploit entry points such as flash loan attacks and reentrancy attacks. In this paper, we propose UEChecker, a deep learning-based tool that utilizes a call graph and a Graph Convolutional Network to detect unchecked external call vulnerabilities. We design the following components: an edge prediction module that reconstructs the feature representation of nodes and edges in the call graph; a node aggregation module that captures structural information from both the node itself and its neighbors, thereby enhancing feature representation between nodes and improving the model's understanding of the global graph structure; and a Conformer Block module that integrates multi-head attention, convolutional modules, and feedforward neural networks to more effectively capture dependencies of different scales within the call graph, extending beyond immediate neighbors and enhancing the performance of vulnerability detection. Finally, we combine these modules with a Graph Convolutional Network to detect unchecked external call vulnerabilities. By auditing the smart contracts of 608 DApps, our results show that our tool achieves an accuracy of 87.59% in detecting unchecked external call vulnerabilities. Furthermore, we compare our tool with GAT, LSTM, and GCN baselines, and in the comparison experiments, UEChecker consistently outperforms these models in terms of accuracy.
|
https://arxiv.org/abs/2508.01343
|
Academic Papers
|
svg
|
0f0b73a2e4bd1f468c8d36d7ecd882d419aa33457ab15476dfe2b687beacb2bb
|
2026-01-16T00:00:00-05:00
|
MultiCFV: Detecting Control Flow Vulnerabilities in Smart Contracts Leveraging Multimodal Deep Learning
|
arXiv:2508.01346v2 Announce Type: replace Abstract: The introduction of smart contract functionality marks the advent of the blockchain 2.0 era, enabling blockchain technology to support digital currency transactions and complex distributed applications. However, many smart contracts have been found to contain vulnerabilities and errors, leading to the loss of assets within the blockchain. Despite a range of tools that have been developed to identify vulnerabilities in smart contracts at the source code or bytecode level, most rely on a single modality, which reduces performance and accuracy and limits generalization capability. This paper proposes a multimodal deep learning approach, MultiCFV, which is designed specifically to analyze and detect erroneous control flow vulnerabilities, as well as identify code clones in smart contracts. Bytecode is generated from source code to construct control flow graphs, with graph embedding techniques extracting graph features. Abstract syntax trees are used to obtain syntax features, while code comments capture key commentary words and comment features. These three feature vectors are fused to create a database for code inspection, which is used to detect similar code and identify contract vulnerabilities. Experimental results demonstrate that our method effectively combines structural, syntactic, and semantic information, improving the accuracy of smart contract vulnerability detection and clone detection.
|
https://arxiv.org/abs/2508.01346
|
Academic Papers
|
svg
|
ab1d55670f41cef9d14a4a9f79a041b166675dcd9b844dd582ab986e46c0dfdd
|
2026-01-16T00:00:00-05:00
|
NATLM: Detecting Defects in NFT Smart Contracts Leveraging LLM
|
arXiv:2508.01351v2 Announce Type: replace Abstract: Security issues are becoming increasingly significant with the rapid evolution of Non-fungible Tokens (NFTs). As NFTs are traded as digital assets, they have emerged as prime targets for cyber attackers. In the development of NFT smart contracts, there may exist undiscovered defects that could lead to substantial financial losses if exploited. To tackle this issue, this paper presents a framework called NATLM(NFT Assistant LLM), designed to detect potential defects in NFT smart contracts. The framework effectively identifies four common types of vulnerabilities in NFT smart contracts: ERC-721 Reentrancy, Public Burn, Risky Mutable Proxy, and Unlimited Minting. Relying exclusively on large language models (LLMs) for defect detection can lead to a high false-positive rate. To enhance detection performance, NATLM integrates static analysis with LLMs, specifically Gemini Pro 1.5. Initially, NATLM employs static analysis to extract structural, syntactic, and execution flow information from the code, represented through Abstract Syntax Trees (AST) and Control Flow Graphs (CFG). These extracted features are then combined with vectors of known defect examples to create a matrix for input into the knowledge base. Subsequently, the feature vectors and code vectors of the analyzed contract are compared with the contents of the knowledge base. Finally, the LLM performs deep semantic analysis to enhance detection capabilities, providing a more comprehensive and accurate identification of potential security issues. Experimental results indicate that NATLM analyzed 8,672 collected NFT smart contracts, achieving an overall precision of 87.72%, a recall of 89.58%, and an F1 score of 88.94%. The results outperform other baseline experiments, successfully identifying four common types of defects.
|
https://arxiv.org/abs/2508.01351
|
Academic Papers
|
svg
|
eae41be8a3b280e7356b9ee61459e87fafa35b6d1712d841b45d7aa46608455e
|
2026-01-16T00:00:00-05:00
|
A Study of Commonsense Reasoning over Visual Object Properties
|
arXiv:2508.10956v2 Announce Type: replace Abstract: Inspired by human categorization, object property reasoning involves identifying and recognizing low-level details and higher-level abstractions. While current visual question answering (VQA) studies consider multiple object properties, such as size, they typically blend perception and reasoning and lack representativeness in terms of reasoning and image categories, making it unclear whether and how vision-language models (VLMs) abstract and reason over depicted objects. To this end, we introduce a systematic evaluation framework comprising images of three representative types, three reasoning levels of increasing complexity, and four object property dimensions, informed by prior work on common sense. We develop a procedure to instantiate this framework in two VQA object reasoning benchmarks: OPTICS-CNT, comprising 360 images paired with 1,080 multi-level, count-based questions, and OPTICS-CMP, with 2.1k comparison questions. Experiments with 12 state-of-the-art VLMs in zero-shot settings reveal significant limitations relative to humans, with the best-performing model achieving below 40% counting and 70% comparison accuracy. VLMs struggle particularly with photographic images, counterfactual reasoning, physical and functional properties, and higher counts. We make the OPTICS benchmark data and code available to support future work on scalable benchmarking methods, generalized annotation guidelines, and advanced reasoning VLMs.
|
https://arxiv.org/abs/2508.10956
|
Academic Papers
|
svg
|
d89f770fb47320ef1f94cac5d9f862f1c8765dca66a87e436731721e81a0e07c
|
2026-01-16T00:00:00-05:00
|
Adaptive Model-Predictive Control of a Soft Continuum Robot Using a Physics-Informed Neural Network Based on Cosserat Rod Theory
|
arXiv:2508.12681v2 Announce Type: replace Abstract: Dynamic control of soft continuum robots (SCRs) holds great potential for expanding their applications, but remains a challenging problem due to the high computational demands of accurate dynamic models. While data-driven approaches like Koopman-operator-based methods have been proposed, they typically lack adaptability and cannot reconstruct the full robot shape, limiting their applicability. This work introduces a real-time-capable nonlinear model-predictive control (MPC) framework for SCRs based on a domain-decoupled physics-informed neural network (DD-PINN) with adaptable bending stiffness. The DD-PINN serves as a surrogate for the dynamic Cosserat rod model with a speed-up factor of 44000. It is also used within an unscented Kalman filter for estimating the model states and bending compliance from end-effector position measurements. We implement a nonlinear evolutionary MPC running at 70 Hz on the GPU. In simulation, it demonstrates accurate tracking of dynamic trajectories and setpoint control with end-effector position errors below 3 mm (2.3% of the actuator's length). In real-world experiments, the controller achieves similar accuracy and accelerations up to 3.55 m/s^2.
|
https://arxiv.org/abs/2508.12681
|
Academic Papers
|
svg
|
94eb80687fd88c4b67fc69ccdf75f2f0f253390f95d0964e6b308a5df404cb77
|
2026-01-16T00:00:00-05:00
|
Accelerating Edge Inference for Distributed MoE Models with Latency-Optimized Expert Placement
|
arXiv:2508.12851v3 Announce Type: replace Abstract: The emergence of Mixture-of-Experts (MoE) has transformed the scaling of large language models by enabling vast model capacity through sparse activation. Yet, converting these performance gains into practical edge deployment remains difficult, as the massive memory footprint and communication demands often overwhelm resource-limited environments. While centralized cloud-based solutions are available, they are frequently plagued by prohibitive infrastructure costs, latency issues, and privacy concerns. Moreover, existing edge-oriented optimizations largely overlook the complexities of heterogeneous hardware, focusing instead on isolated or uniform device setups. In response, this paper proposes Prism, an inference framework engineered for collaborative MoE serving across diverse GPU-equipped edge servers. By leveraging the intrinsic sparsity and input locality of MoE workloads, Prism minimizes inter-server communication and optimizes expert placement within diverse resource constraints. The framework integrates an activation-aware placement strategy that balances local request coverage with memory utilization, supplemented by a runtime migration mechanism to adapt expert distribution to dynamic workload changes. Experiments on contemporary MoE models and datasets demonstrate that Prism reduces inference latency by up to 30.6% and significantly lowers communication costs compared to state-of-the-art baselines, confirming the effectiveness of cooperative edge-based MoE serving.
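The activation-aware placement idea above can be caricatured as a greedy per-server choice: each server keeps the experts it activates most often, up to its memory budget. A minimal sketch under assumed inputs and a hypothetical function name; Prism's actual optimiser additionally balances coverage against memory utilisation across servers and migrates experts at runtime.

```python
def place_experts(activation_counts, capacity):
    """Greedy activation-aware placement: each server keeps its most
    frequently activated experts, up to `capacity` expert slots.
    activation_counts maps server -> {expert: activation count}.
    Toy sketch of the local-coverage-vs-memory trade-off, not Prism's method."""
    placement = {}
    for server, counts in activation_counts.items():
        ranked = sorted(counts, key=counts.get, reverse=True)
        placement[server] = ranked[:capacity]
    return placement
```

Hosting hot experts locally is what cuts the inter-server expert fetches that dominate communication cost in distributed MoE serving.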
|
https://arxiv.org/abs/2508.12851
|
Academic Papers
|
svg
|
2a1641679262f9d370f81a87e65fb3b310802279f49fbbc3562e2f910b8d40f5
|
2026-01-16T00:00:00-05:00
|
CASPER: Concept-integrated Sparse Representation for Scientific Retrieval
|
arXiv:2508.13394v2 Announce Type: replace Abstract: Identifying relevant research concepts is crucial for effective scientific search. However, primary sparse retrieval methods often lack concept-aware representations. To address this, we propose CASPER, a sparse retrieval model for scientific search that utilizes both tokens and keyphrases as representation units (i.e., dimensions in the sparse embedding space). This enables CASPER to represent queries and documents via research concepts and match them at both granular and conceptual levels. Furthermore, we construct training data by leveraging abundant scholarly references (including titles, citation contexts, author-assigned keyphrases, and co-citations), which capture how research concepts are expressed in diverse settings. Empirically, CASPER outperforms strong dense and sparse retrieval baselines across eight scientific retrieval benchmarks. We also explore the effectiveness-efficiency trade-off via representation pruning and demonstrate CASPER's interpretability by showing that it can serve as an effective and efficient keyphrase generation model.
|
https://arxiv.org/abs/2508.13394
|
Academic Papers
|
svg
|
945ebba025f05e0abe4a0c16c3d2e87e2c799aa5937664af64cdde9b517fbade
|
2026-01-16T00:00:00-05:00
|
Unleashing Semantic and Geometric Priors for 3D Scene Completion
|
arXiv:2508.13601v2 Announce Type: replace Abstract: Camera-based 3D semantic scene completion (SSC) provides dense geometric and semantic perception for autonomous driving and robotic navigation. However, existing methods rely on a coupled encoder to deliver both semantic and geometric priors, which forces the model to make a trade-off between conflicting demands and limits its overall performance. To tackle these challenges, we propose FoundationSSC, a novel framework that performs dual decoupling at both the source and pathway levels. At the source level, we introduce a foundation encoder that provides rich semantic feature priors for the semantic branch and high-fidelity stereo cost volumes for the geometric branch. At the pathway level, these priors are refined through specialised, decoupled pathways, yielding superior semantic context and depth distributions. Our dual-decoupling design produces disentangled and refined inputs, which are then utilised by a hybrid view transformation to generate complementary 3D features. Additionally, we introduce a novel Axis-Aware Fusion (AAF) module that addresses the often-overlooked challenge of fusing these features by anisotropically merging them into a unified representation. Extensive experiments demonstrate the advantages of FoundationSSC, achieving simultaneous improvements in both semantic and geometric metrics, surpassing prior bests by +0.23 mIoU and +2.03 IoU on SemanticKITTI. Additionally, we achieve state-of-the-art performance on SSCBench-KITTI-360, with 21.78 mIoU and 48.61 IoU.
|
https://arxiv.org/abs/2508.13601
|
Academic Papers
|
svg
|
3db3d4a4d4fd0f97780197c58a18b86af708c746c9c6a47bb82ac6427e41d196
|
2026-01-16T00:00:00-05:00
|
OMHBench: Benchmarking Balanced and Grounded Omni-Modal Multi-Hop Reasoning
|
arXiv:2508.16198v2 Announce Type: replace Abstract: Multimodal Large Language Models (MLLMs) have increasingly supported omni-modal processing across text, vision, and speech. However, existing evaluation frameworks for such models suffer from critical limitations, including modality shortcuts and biased reasoning paths. To address these challenges, we propose OMHBench, a novel benchmark designed to rigorously evaluate omni-modal multi-hop reasoning. It consists of 6,144 questions with balanced reasoning paths that are jointly grounded across all three modalities. Extensive evaluation of 13 state-of-the-art models reveals that (1) a large performance gap exists between proprietary and open-source MLLMs and (2) even proprietary models exhibit high sensitivity to reasoning path variations, resulting in asymmetric omni-modal grounding. Notably, models struggle when processing the speech modality, underscoring the need for balanced, multi-hop evaluation of omni-modal intelligence.
|
https://arxiv.org/abs/2508.16198
|
Academic Papers
|
svg
|
397b24edc65a9b3560be8defcdf673037906cb3af050829eab8fee26ebcd369c
|
2026-01-16T00:00:00-05:00
|
BASIL: Bayesian Assessment of Sycophancy in LLMs
|
arXiv:2508.16846v3 Announce Type: replace Abstract: Sycophancy (overly agreeable or flattering behavior) poses a fundamental challenge for human-AI collaboration, particularly in high-stakes decision-making domains such as health, law, and education. A central difficulty in studying sycophancy in large language models (LLMs) is disentangling sycophantic belief shifts from rational changes in behavior driven by new evidence or user-provided information. Existing approaches either measure descriptive behavior changes or apply normative evaluations that rely on objective ground truth, limiting their applicability to subjective or uncertain tasks. We introduce a Bayesian probabilistic framework, grounded in behavioral economics and rational decision theory, that explicitly separates sycophancy from rational belief updating. Within this framework, we achieve three objectives: (i) a descriptive metric that measures sycophancy while controlling for rational responses to evidence; (ii) a normative metric that quantifies how sycophancy leads models astray from Bayesian-consistent belief updating; and (iii) the ability to apply both metrics in settings without ground-truth labels. Applying our framework across multiple LLMs and three uncertainty-driven tasks, we find robust evidence of sycophantic belief shifts and show that their impact on rationality depends on whether models systematically over- or under-update their beliefs. Finally, we demonstrate that a post-hoc calibration method and two fine-tuning strategies (SFT and DPO) substantially reduce Bayesian inconsistency, with particularly strong improvements under explicit sycophancy prompting.
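The framework's normative idea, comparing a model's observed belief shift against the Bayesian-consistent update, can be illustrated with Bayes' rule in odds form. This is a toy sketch with hypothetical function names, not the paper's metric definitions.

```python
from math import log

def bayes_posterior(prior, likelihood_ratio):
    """Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio."""
    odds = prior / (1.0 - prior) * likelihood_ratio
    return odds / (1.0 + odds)

def sycophancy_gap(p_before, p_after, likelihood_ratio):
    """Log-odds deviation of the observed belief shift from the Bayesian-
    consistent update. A positive gap means the model moved further than
    the evidence warrants (e.g. toward a user's stated view). Toy only."""
    rational = bayes_posterior(p_before, likelihood_ratio)
    log_odds = lambda p: log(p / (1.0 - p))
    return log_odds(p_after) - log_odds(rational)
```

A model that updates exactly as Bayes' rule prescribes has a gap of zero regardless of how much its answer changed, which is how such a measure separates sycophancy from rational belief updating.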
|
https://arxiv.org/abs/2508.16846
|
Academic Papers
|
svg
|
133e682fd97498ff055b1221f3bd63dbb6e6594863e897f8b4f0d2f3cdcd35c9
|
2026-01-16T00:00:00-05:00
|
Some new properties of the PamPa scheme
|
arXiv:2508.17147v2 Announce Type: replace Abstract: In this paper, we provide a few new properties of Active Flux (AF)/Point-Average-Moment PolynomiAl-interpreted (PamPa) schemes. First, we show, in full generality, that the AF/PamPa schemes can be interpreted in such a way that the discontinuous Galerkin (dG) scheme is one of their building blocks. Second, we provide intrinsic bound-preserving properties of the current variant of PamPa. This is also illustrated numerically. Last, we show, at least in one dimension, that the PamPa scheme has the summation-by-parts (SBP) property.
|
https://arxiv.org/abs/2508.17147
|
Academic Papers
|
svg
|
4b0f032bef6d3b13c7057f228b7872815b3092092b50979a866438b8d5158961
|
2026-01-16T00:00:00-05:00
|
How Quantization Shapes Bias in Large Language Models
|
arXiv:2508.18088v2 Announce Type: replace Abstract: This work presents a comprehensive evaluation of how quantization affects model bias, with particular attention to its impact on individual demographic subgroups. We focus on weight and activation quantization strategies and examine their effects across a broad range of bias types, including stereotypes, fairness, toxicity, and sentiment. We employ both probability- and generated text-based metrics across 13 benchmarks and evaluate models that differ in architecture family and reasoning ability. Our findings show that quantization has a nuanced impact on bias: while it can reduce model toxicity and does not significantly impact sentiment, it tends to slightly increase stereotypes and unfairness in generative tasks, especially under aggressive compression. These trends are generally consistent across demographic categories and subgroups, and model types, although their magnitude depends on the specific setting. Overall, our results highlight the importance of carefully balancing efficiency and ethical considerations when applying quantization in practice.
|
https://arxiv.org/abs/2508.18088
|
Academic Papers
|
svg
|
7ea0a1e0819312b46af33b02eb2eb4d00faf885d13f1e624830062a3d1526f54
|
2026-01-16T00:00:00-05:00
|
FastMesh: Efficient Artistic Mesh Generation via Component Decoupling
|
arXiv:2508.19188v3 Announce Type: replace Abstract: Recent mesh generation approaches typically tokenize triangle meshes into sequences of tokens and train autoregressive models to generate these tokens sequentially. Despite substantial progress, such token sequences inevitably reuse vertices multiple times to fully represent manifold meshes, as each vertex is shared by multiple faces. This redundancy leads to excessively long token sequences and inefficient generation processes. In this paper, we propose an efficient framework that generates artistic meshes by treating vertices and faces separately, significantly reducing redundancy. We employ an autoregressive model solely for vertex generation, decreasing the token count to approximately 23% of that required by the most compact existing tokenizer. Next, we leverage a bidirectional transformer to complete the mesh in a single step by capturing inter-vertex relationships and constructing the adjacency matrix that defines the mesh faces. To further improve the generation quality, we introduce a fidelity enhancer to refine vertex positioning into more natural arrangements and propose a post-processing framework to remove undesirable edge connections. Experimental results show that our method achieves more than 8x faster speed on mesh generation compared to state-of-the-art approaches, while producing higher mesh quality.
|
https://arxiv.org/abs/2508.19188
|
Academic Papers
|
svg
|
9f5c1013ad7fb23832a4713a0fa2dc8612d0c7bcf0b539dfb727a3995b6e34c4
|
2026-01-16T00:00:00-05:00
|
Network-Level Prompt and Trait Leakage in Local Research Agents
|
arXiv:2508.20282v3 Announce Type: replace Abstract: We show that Web and Research Agents (WRAs) -- language-model-based systems that investigate complex topics on the Internet -- are vulnerable to inference attacks by passive network observers. Deployment of WRAs locally by organizations and individuals for privacy, legal, or financial purposes exposes them to DNS resolvers, malicious ISPs, VPNs, web proxies, and corporate or government firewalls. However, unlike sporadic and scarce web browsing by humans, WRAs visit 70-140 domains per request with a distinct timing pattern, creating unique privacy risks. Specifically, we demonstrate a novel prompt and user trait leakage attack against WRAs that only leverages their network-level metadata (i.e., visited IP addresses and their timings). We start by building a new dataset of WRA traces based on real user search queries and queries generated by synthetic personas. We define a behavioral metric (called OBELS) to comprehensively assess similarity between original and inferred prompts, showing that our attack recovers over 73% of the functional and domain knowledge of user prompts. Extending to a multi-session setting, we recover up to 19 of 32 latent traits with high accuracy. Our attack remains effective under partial observability and noisy conditions. Finally, we discuss mitigation strategies that constrain domain diversity or obfuscate traces, showing negligible utility impact while reducing attack effectiveness by an average of 29%.
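As a rough intuition for why domain traces leak prompt content, even a crude set-overlap measure separates traces produced by different prompts. The function below is a hypothetical stand-in for trace matching by a passive observer; the paper's OBELS metric is behavioral and far richer, and the attack also exploits timing.

```python
def trace_similarity(trace_a, trace_b):
    """Jaccard overlap of the domain sets appearing in two network traces.
    Crude sketch: real WRA traces of ~70-140 domains make such overlaps
    highly distinctive per prompt, which is what enables inference attacks."""
    a, b = set(trace_a), set(trace_b)
    return len(a & b) / len(a | b) if a | b else 0.0
```
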
|
https://arxiv.org/abs/2508.20282
|
Academic Papers
|
svg
|
6f3cb4044b8f2ccc73b58da326a080c00a5b6ebd16effc270c1eb9478d764480
|
2026-01-16T00:00:00-05:00
|
MindGuard: Intrinsic Decision Inspection for Securing LLM Agents Against Metadata Poisoning
|
arXiv:2508.20412v3 Announce Type: replace Abstract: The Model Context Protocol (MCP) is increasingly adopted to standardize the interaction between LLM agents and external tools. However, this trend introduces a new threat: Tool Poisoning Attacks (TPA), where tool metadata is poisoned to induce the agent to perform unauthorized operations. Existing defenses that primarily focus on behavior-level analysis are fundamentally ineffective against TPA, as poisoned tools need not be executed, leaving no behavioral trace to monitor. Thus, we propose MindGuard, a decision-level guardrail for LLM agents, providing provenance tracking of call decisions, policy-agnostic detection, and poisoning source attribution against TPA. While fully explaining LLM decision remains challenging, our empirical findings uncover a strong correlation between LLM attention mechanisms and tool invocation decisions. Therefore, we choose attention as an empirical signal for decision tracking and formalize this as the Decision Dependence Graph (DDG), which models the LLM's reasoning process as a weighted, directed graph where vertices represent logical concepts and edges quantify the attention-based dependencies. We further design robust DDG construction and graph-based anomaly analysis mechanisms that efficiently detect and attribute TPA attacks. Extensive experiments on real-world datasets demonstrate that MindGuard achieves 94%-99% average precision in detecting poisoned invocations, 95%-100% attribution accuracy, with processing times under one second and no additional token cost. Moreover, DDG can be viewed as an adaptation of the classical Program Dependence Graph (PDG), providing a solid foundation for applying traditional security policies at the decision level.
|
https://arxiv.org/abs/2508.20412
|
Academic Papers
|
svg
|
edc5fe9735207128cd821b52cf3a39023492e92d17c8b6a642a592870bb4ecc6
|
2026-01-16T00:00:00-05:00
|
Encoder-Only Image Registration
|
arXiv:2509.00451v3 Announce Type: replace Abstract: Learning-based techniques have significantly improved the accuracy and speed of deformable image registration. However, challenges such as reducing computational complexity and handling large deformations persist. To address these challenges, we analyze how convolutional neural networks (ConvNets) influence registration performance using the Horn-Schunck optical flow equation. Supported by prior studies and our empirical experiments, we observe that ConvNets play two key roles in registration: linearizing local intensities and harmonizing global contrast variations. Based on these insights, we propose the Encoder-Only Image Registration (EOIR) framework, designed to achieve a better accuracy-efficiency trade-off. EOIR separates feature learning from flow estimation, employing only a 3-layer ConvNet for feature extraction and a set of 3-layer flow estimators to construct a Laplacian feature pyramid, progressively composing diffeomorphic deformations under a large-deformation model. Results on five datasets across different modalities and anatomical regions demonstrate EOIR's effectiveness, achieving superior accuracy-efficiency and accuracy-smoothness trade-offs. With comparable accuracy, EOIR provides better efficiency and smoothness, and vice versa. The source code of EOIR is publicly available on https://github.com/XiangChen1994/EOIR.
|
https://arxiv.org/abs/2509.00451
|
Academic Papers
|
svg
|
7bdbbdde88be6f71ce282de1e82809db263b9a75eb4ab75c26cb80cd9f62deec
|
2026-01-16T00:00:00-05:00
|
Morse sequences on stacks and flooding sequences
|
arXiv:2509.01384v2 Announce Type: replace Abstract: This paper builds upon the framework of Morse sequences, a simple and effective approach to discrete Morse theory. A Morse sequence on a simplicial complex consists of a sequence of nested subcomplexes generated by expansions and fillings, two operations originally introduced by Whitehead. Expansions preserve homotopy, while fillings introduce critical simplexes that capture essential topological features. We extend the notion of Morse sequences to stacks, which are monotonic functions defined on simplicial complexes, and define Morse sequences on stacks as those whose expansions preserve the homotopy of all sublevel sets. This extension leads to a generalization of the fundamental collapse theorem to weighted simplicial complexes. Within this framework, we focus on a refined class of sequences called flooding sequences, which exhibit an ordering behavior similar to that of classical watershed algorithms. Although not every Morse sequence on a stack is a flooding sequence, we show that the gradient vector field associated with any Morse sequence can be recovered through a flooding sequence. Finally, we present algorithmic schemes for computing flooding sequences using cosimplicial complexes.
|
https://arxiv.org/abs/2509.01384
|
Academic Papers
|
svg
|
daabcce116a737c1592d4ad08a2c2628f1cdade73da6a5780dffd3427782ff1e
|
2026-01-16T00:00:00-05:00
|
JudgeAgent: Beyond Static Benchmarks for Knowledge-Driven and Dynamic LLM Evaluation
|
arXiv:2509.02097v4 Announce Type: replace Abstract: Current evaluation methods for large language models (LLMs) primarily rely on static benchmarks, presenting two major challenges: limited knowledge coverage and fixed difficulties that mismatch with the evaluated LLMs. These limitations lead to superficial assessments of LLM knowledge, thereby impeding the targeted model optimizations. To bridge this gap, we propose JudgeAgent, a knowledge-driven and dynamic evaluation framework for LLMs. To address the challenge of limited knowledge coverage, JudgeAgent leverages LLM agents equipped with context graphs to traverse knowledge structures systematically for question generation. Furthermore, to mitigate data contamination and difficulty mismatch, it adopts a difficulty-adaptive and multi-turn interview mechanism. Thereby, JudgeAgent can achieve comprehensive evaluations and facilitate more effective improvement of LLMs. Empirical results demonstrate that JudgeAgent enables more comprehensive evaluations and facilitates effective model iterations, highlighting the potential of this knowledge-driven and dynamic evaluation paradigm. The source code is available on https://github.com/DataArcTech/JudgeAgent.
|
https://arxiv.org/abs/2509.02097
|
Academic Papers
|
svg
|
6a60501e0e2e431f797498dba17afad15b538d1e44e1899f29090b3fc9c52b3f
|
2026-01-16T00:00:00-05:00
|
Small Open Models Achieve Near Parity with Large Models in Low Resource Literary Translation at a Fraction of the Cost
|
arXiv:2509.07829v2 Announce Type: replace Abstract: Literary translation has recently gained attention as a distinct and complex task in machine translation research. However, the translation by small open models remains an open problem. We contribute to this ongoing research by introducing TinyFabulist Translation Framework (TF2), a unified framework for dataset creation, fine-tuning, and evaluation in English-to-Romanian literary translation, centered on the creation and open release of both a compact, fine-tuned language model (TF2-12B) and large-scale synthetic parallel datasets (DS-TF2-EN-RO-3M and DS-TF2-EN-RO-15K). Building on DS-TF1-EN-3M (TF1), the largest collection of synthetic English fables to date, we address the need for rich, high-quality literary datasets in low-resource languages such as Romanian. Our pipeline first generates 15k high-quality Romanian reference translations from the TF1 pool using a high-performing LLM. We then apply a two-stage fine-tuning process to a 12B-parameter open-weight model: (i) instruction tuning to capture genre-specific narrative style, and (ii) adapter compression for efficient deployment. Evaluation combines corpus-level BLEU with a five-dimension LLM-based rubric (accuracy, fluency, coherence, style, and cultural adaptation) to provide a nuanced assessment of translation quality. Results show that our fine-tuned model achieves strong fluency and adequacy, narrowing the gap to top-performing proprietary models under automated and human-anchored evaluation, while being open, accessible, and significantly more cost-effective. Alongside the fine-tuned model and both datasets, we publicly release all scripts and evaluation prompts. TF2 thus provides an end-to-end, reproducible pipeline for research on cost-efficient translation, cross-lingual narrative generation, and the broad adoption of open models for culturally significant literary content in low-resource settings.
|
https://arxiv.org/abs/2509.07829
|
Academic Papers
|
svg
|
a214352cb5c71a16b752e8a1fd07bee9ff0337ec413b341a00dfd86ee30344e2
|
2026-01-16T00:00:00-05:00
|
Compartmentalised Agentic Reasoning for Clinical NLI
|
arXiv:2509.10222v2 Announce Type: replace Abstract: Large language models can produce fluent judgments for clinical natural language inference, yet they frequently fail when the decision requires the correct inferential schema rather than surface matching. We introduce CARENLI, a compartmentalised agentic framework that routes each premise-statement pair to a reasoning family and then applies a specialised solver with explicit verification and targeted refinement. We evaluate on an expanded CTNLI benchmark of 200 instances spanning four reasoning families: Causal Attribution, Compositional Grounding, Epistemic Verification, and Risk State Abstraction. Across four contemporary backbone models, CARENLI improves mean accuracy from about 23% with direct prompting to about 57%, a gain of roughly 34 points, with the largest benefits on structurally demanding reasoning types. These results support compartmentalisation plus verification as a practical route to more reliable and auditable clinical inference.
|
https://arxiv.org/abs/2509.10222
|
Academic Papers
|
svg
|
cc6d034ee0ff325144951d764fdc1e58e297a529d1b8eb2c3b4d89ff1086067d
|
2026-01-16T00:00:00-05:00
|
Judge Q: Trainable Queries for Optimized Information Retention in KV Cache Eviction
|
arXiv:2509.10798v2 Announce Type: replace Abstract: Large language models (LLMs) utilize key-value (KV) cache to store historical information during sequence processing. The size of the KV cache grows linearly with sequence length, which seriously affects memory usage and decoding efficiency. Current methods for KV cache eviction typically utilize the last window from the pre-filling phase as queries to compute the KV importance scores for eviction. Although this scheme is simple to implement, it tends to overly focus on local information, potentially leading to the neglect or omission of crucial global information. To mitigate this issue, we propose Judge Q, a novel training method which incorporates a soft token list. This method only tunes the model's embedding layer at a low training cost. By concatenating the soft token list at the end of the input sequence, we train these tokens' attention over the original input sequence to align with that of the actual decoded tokens. In this way, the queries corresponding to the soft tokens can effectively capture global information and better evaluate the importance of the keys and values within the KV cache, thus maintaining decoding quality when KV cache is evicted. Under the same eviction budget, our method exhibits less performance degradation compared to existing eviction approaches. We validate our approach through experiments conducted on models such as Llama-3.1-8B-Instruct and Mistral-7B-Instruct-v0.3, using benchmarks including LongBench, RULER, and Needle-in-a-Haystack. Results indicate an improvement of approximately 1 point on LongBench and over 3 points on RULER. This proposed methodology can be seamlessly integrated into existing open-source models with minimal training overhead, thereby enhancing performance in KV cache eviction scenarios.
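The score-based eviction scheme this abstract builds on can be sketched as: score each cached key-value pair by the softmax attention mass it receives from a small set of query vectors (e.g. the last-window queries the abstract critiques, or the soft-token queries it proposes), then keep only the top-scoring entries. A toy NumPy illustration with hypothetical names, not the Judge Q training method itself:

```python
import numpy as np

def evict_kv_cache(keys, values, queries, keep):
    """Keep the `keep` cached key-value pairs that receive the most
    softmax attention mass from `queries` (hypothetical toy routine)."""
    d = keys.shape[-1]
    scores = queries @ keys.T / np.sqrt(d)              # (q, n) attention logits
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)                  # softmax per query
    importance = w.sum(axis=0)                          # aggregate over queries
    keep_idx = np.sort(np.argsort(importance)[-keep:])  # preserve token order
    return keys[keep_idx], values[keep_idx]

rng = np.random.default_rng(0)
K = rng.normal(size=(128, 64))                          # 128 cached tokens
V = rng.normal(size=(128, 64))
Q = rng.normal(size=(8, 64))                            # e.g. last-window queries
K2, V2 = evict_kv_cache(K, V, Q, keep=32)
print(K2.shape, V2.shape)
```

Which queries feed this scoring step is exactly what varies between the last-window baseline and the trained soft tokens.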
|
https://arxiv.org/abs/2509.10798
|
Academic Papers
|
svg
|
37c6e8b9f3001c6370e62a165e4d6b66bbdec70bfdfe8913dc4d0fd81324b676
|
2026-01-16T00:00:00-05:00
|
Graph Algorithm Unrolling with Douglas-Rachford Iterations for Image Interpolation with Guaranteed Initialization
|
arXiv:2509.11926v3 Announce Type: replace Abstract: Conventional deep neural nets (DNNs) initialize network parameters at random and then optimize each one via stochastic gradient descent (SGD), resulting in substantial risk of poor-performing local minima. Focusing on the image interpolation problem and leveraging a recent theorem that maps a (pseudo-)linear interpolator $\Theta$ to a directed graph filter that is a solution to a MAP problem regularized with a graph shift variation (GSV) prior, we first initialize a directed graph adjacency matrix $A$ based on a known interpolator $\Theta$, establishing a baseline performance. Then, towards further gain, we learn perturbation matrices $P$ and $P^{(2)}$ from data to augment $A$, whose restoration effects are implemented via Douglas-Rachford (DR) iterations, which we unroll into a lightweight interpretable neural net. Experimental results demonstrate state-of-the-art image interpolation results, while drastically reducing network parameters.
|
https://arxiv.org/abs/2509.11926
|
Academic Papers
|
svg
|
d67b046ccfcd22b1bc5e6a864fa091af304afb436a8dda5bdd131c5ed2120c9d
|
2026-01-16T00:00:00-05:00
|
Multi-Threaded Software Model Checking via Parallel Trace Abstraction Refinement
|
arXiv:2509.13699v2 Announce Type: replace Abstract: Automatic software verification is a valuable means for software quality assurance. However, automatic verification and in particular software model checking can be time-consuming, which hinders their practical applicability, e.g., their use in continuous integration. One solution to address the issue is to reduce the response time of the verification procedure by leveraging today's multi-core CPUs. In this paper, we propose a solution to parallelize trace abstraction, an abstraction-based approach to software model checking. The underlying idea of our approach is to parallelize the abstraction refinement. More concretely, our approach analyzes different traces (syntactic program paths) that could violate the safety property in parallel. We realize our parallelized version of trace abstraction in the verification tool Ultimate Automizer and perform a thorough evaluation. Our evaluation shows that our parallelization is more effective than sequential trace abstraction and can provide results significantly faster on many time-consuming tasks. Also, our approach is more effective than DSS, a recent parallel approach to abstraction-based software model checking.
|
https://arxiv.org/abs/2509.13699
|
Academic Papers
|
svg
|
3c9e83bda93d1ff19d8eede227344aa4869d0ff7d1c9db0116daa85f49f90e21
|
2026-01-16T00:00:00-05:00
|
SPATIALGEN: Layout-guided 3D Indoor Scene Generation
|
arXiv:2509.14981v4 Announce Type: replace Abstract: Creating high-fidelity 3D models of indoor environments is essential for applications in design, virtual reality, and robotics. However, manual 3D modeling remains time-consuming and labor-intensive. While recent advances in generative AI have enabled automated scene synthesis, existing methods often face challenges in balancing visual quality, diversity, semantic consistency, and user control. A major bottleneck is the lack of a large-scale, high-quality dataset tailored to this task. To address this gap, we introduce a comprehensive synthetic dataset, featuring 12,328 structured annotated scenes with 57,431 rooms, and 4.7M photorealistic 2D renderings. Leveraging this dataset, we present SpatialGen, a novel multi-view multi-modal diffusion model that generates realistic and semantically consistent 3D indoor scenes. Given a 3D layout and a reference image (derived from a text prompt), our model synthesizes appearance (color image), geometry (scene coordinate map), and semantic (semantic segmentation map) from arbitrary viewpoints, while preserving spatial consistency across modalities. SpatialGen consistently generates superior results to previous methods in our experiments. We are open-sourcing our data and models to empower the community and advance the field of indoor scene understanding and generation.
|
https://arxiv.org/abs/2509.14981
|
Academic Papers
|
svg
|
268cfa608fe44b9fd78380505f55c4721f255faa128b3b72f7112b4dc5458313
|
2026-01-16T00:00:00-05:00
|
Query-Efficient Locally Private Hypothesis Selection via the Scheffe Graph
|
arXiv:2509.16180v2 Announce Type: replace Abstract: We propose an algorithm with improved query-complexity for the problem of hypothesis selection under local differential privacy constraints. Given a set of $k$ probability distributions $Q$, we describe an algorithm that satisfies local differential privacy, performs $\tilde{O}(k^{3/2})$ non-adaptive queries to individuals who each have samples from a probability distribution $p$, and outputs a probability distribution from the set $Q$ which is nearly the closest to $p$. Previous algorithms required either $\Omega(k^2)$ queries or many rounds of interactive queries. Technically, we introduce a new object we dub the Scheff\'e graph, which captures structure of the differences between distributions in $Q$, and may be of more broad interest for hypothesis selection tasks.
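The Scheffé test underlying such hypothesis-selection algorithms can be illustrated in one dimension: on the set where the first candidate density exceeds the second, compare each candidate's probability mass with the empirical mass of the samples, and pick the candidate whose mass is closer. This toy sketch is illustrative only; the paper's contribution is the query-efficient, locally private version built on the Scheffé graph:

```python
import numpy as np

def gauss(mu):
    """Unit-variance Gaussian density with mean `mu`."""
    return lambda x: np.exp(-(x - mu) ** 2 / 2) / np.sqrt(2 * np.pi)

def scheffe_winner(samples, q1, q2, grid):
    """Scheffé test (toy 1-D sketch): on A = {x : q1(x) > q2(x)}, compare
    each candidate's mass with the empirical mass of the samples; the
    candidate whose mass is closer to the empirical one wins."""
    in_A = q1(grid) > q2(grid)
    dx = grid[1] - grid[0]
    mass1 = q1(grid)[in_A].sum() * dx          # Q1(A) via Riemann sum
    mass2 = q2(grid)[in_A].sum() * dx          # Q2(A)
    emp = np.mean(q1(samples) > q2(samples))   # fraction of samples in A
    return 1 if abs(mass1 - emp) <= abs(mass2 - emp) else 2

rng = np.random.default_rng(0)
samples = rng.normal(0.0, 1.0, size=5000)      # true distribution matches q1
grid = np.linspace(-8.0, 8.0, 4001)
print(scheffe_winner(samples, gauss(0.0), gauss(2.0), grid))  # → 1
```

Running this pairwise comparison over all pairs in $Q$ is what costs $\Omega(k^2)$ queries naively; the Scheffé graph structure is what lets the paper do better.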
|
https://arxiv.org/abs/2509.16180
|
Academic Papers
|
svg
|
1691b83460238ac392c423a19b60f1a332e65d089c3f0209d868592eae5f4299
|
2026-01-16T00:00:00-05:00
|
Filling in the Clinical Gaps in Benchmark: Case for HealthBench for the Japanese medical system
|
arXiv:2509.17444v3 Announce Type: replace Abstract: This study investigates the applicability of HealthBench, a large-scale, rubric-based medical benchmark, to the Japanese context. Although robust evaluation frameworks are essential for the safe development of medical LLMs, resources in Japanese are scarce and often consist of translated multiple-choice questions. Our research addresses this issue in two ways. First, we establish a performance baseline by applying a machine-translated version of HealthBench's 5,000 scenarios to evaluate two models: a high-performing multilingual model (GPT-4.1) and a Japanese-native open-source model (LLM-jp-3.1). Second, we use an LLM-as-a-Judge approach to systematically classify the benchmark's scenarios and rubric criteria. This allows us to identify 'contextual gaps' where the content is misaligned with Japan's clinical guidelines, healthcare systems or cultural norms. Our findings reveal a modest performance drop in GPT-4.1 due to rubric mismatches, as well as a significant failure in the Japanese-native model, which lacked the required clinical completeness. Furthermore, our classification shows that, despite most scenarios being applicable, a significant proportion of the rubric criteria require localisation. This work underscores the limitations of direct benchmark translation and highlights the urgent need for a context-aware, localised adaptation, a "J-HealthBench", to ensure the reliable and safe evaluation of medical LLMs in Japan.
|
https://arxiv.org/abs/2509.17444
|
Academic Papers
|
svg
|
d465b8bcaac0cc9a764f17039dcca83781f783f8ff9907e8cf319d3419b26976
|
2026-01-16T00:00:00-05:00
|
Depth Edge Alignment Loss: DEALing with Depth in Weakly Supervised Semantic Segmentation
|
arXiv:2509.17702v2 Announce Type: replace Abstract: Autonomous robotic systems applied to new domains require an abundance of expensive, pixel-level dense labels to train robust semantic segmentation models under full supervision. This study proposes a model-agnostic Depth Edge Alignment Loss to improve Weakly Supervised Semantic Segmentation models across different datasets. The methodology generates pixel-level semantic labels from image-level supervision, avoiding expensive annotation processes. While weak supervision is widely explored in traditional computer vision, our approach adds supervision with pixel-level depth information, a modality commonly available in robotic systems. We demonstrate how our approach improves segmentation performance across datasets and models, but can also be combined with other losses for even better performance, with improvements up to +5.439, +1.274 and +16.416 points in mean Intersection over Union on the PASCAL VOC / MS COCO validation, and the HOPE static onboarding split, respectively. Our code is made publicly available at https://github.com/DTU-PAS/DEAL.
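The general idea of an edge-alignment penalty — discouraging predicted segmentation boundaries at locations where the depth map is smooth — can be sketched with finite-difference edge maps. This is a hypothetical toy loss illustrating the principle, not the paper's exact formulation:

```python
import numpy as np

def edge_map(img):
    """Finite-difference edge magnitude (toy stand-in for edge extraction)."""
    gx = np.abs(np.diff(img, axis=1, prepend=img[:, :1]))
    gy = np.abs(np.diff(img, axis=0, prepend=img[:1, :]))
    return gx + gy

def depth_edge_alignment(seg, depth):
    """Hypothetical penalty: segmentation edges falling where the depth map
    is smooth pay full cost, while edges coinciding with depth
    discontinuities are discounted (sketch of the idea, not the paper's loss)."""
    return float(np.mean(edge_map(seg) * np.exp(-edge_map(depth))))

depth = np.zeros((8, 8)); depth[:, 4:] = 1.0           # depth edge at column 4
seg_aligned = np.zeros((8, 8)); seg_aligned[:, 4:] = 1.0
seg_shifted = np.zeros((8, 8)); seg_shifted[:, 2:] = 1.0
loss_a = depth_edge_alignment(seg_aligned, depth)
loss_s = depth_edge_alignment(seg_shifted, depth)
print(loss_a < loss_s)  # the depth-aligned boundary is cheaper
```

Because the penalty only needs a depth map alongside the image, it slots in as an extra loss term next to whatever weak image-level supervision is already in use.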
|
https://arxiv.org/abs/2509.17702
|
Academic Papers
|
svg
|
03cef9ebeae86ab5d6c1271ff1bd2cf5411f92d66c5506f4f2ad452da65c4149
|
2026-01-16T00:00:00-05:00
|
Unveiling m-Sharpness Through the Structure of Stochastic Gradient Noise
|
arXiv:2509.18001v3 Announce Type: replace Abstract: Sharpness-aware minimization (SAM) has emerged as a highly effective technique to improve model generalization, but its underlying principles are not fully understood. We investigate m-sharpness, where SAM performance improves monotonically as the micro-batch size for computing perturbations decreases, a phenomenon critical for distributed training yet lacking rigorous explanation. We leverage an extended Stochastic Differential Equation (SDE) framework and analyze stochastic gradient noise (SGN) to characterize the dynamics of SAM variants, including n-SAM and m-SAM. Our analysis reveals that stochastic perturbations induce an implicit variance-based sharpness regularization whose strength increases as m decreases. Motivated by this insight, we propose Reweighted SAM (RW-SAM), which employs sharpness-weighted sampling to mimic the generalization benefits of m-SAM while remaining parallelizable. Comprehensive experiments validate our theory and method.
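The basic SAM update that this analysis builds on first perturbs the parameters along the normalized ascent direction, then descends on the gradient taken at the perturbed point; in m-SAM that perturbation is computed separately per micro-batch. A minimal sketch on a toy quadratic loss (illustrative only, with hypothetical names):

```python
import numpy as np

def sam_step(w, grad_fn, lr=0.1, rho=0.05):
    """One SAM update (toy sketch): perturb `w` by `rho` along the
    normalized gradient, then descend on the gradient evaluated at the
    perturbed point. In m-SAM this perturbation is per micro-batch."""
    g = grad_fn(w)
    eps = rho * g / (np.linalg.norm(g) + 1e-12)   # ascent perturbation
    return w - lr * grad_fn(w + eps)              # descend at perturbed point

# quadratic toy loss L(w) = ||w||^2 / 2, whose gradient is w itself
w = np.array([1.0, -2.0])
for _ in range(100):
    w = sam_step(w, lambda x: x)
print(np.linalg.norm(w))  # settles near zero
```

In the stochastic setting, `grad_fn` is a noisy micro-batch gradient, and it is precisely this noise in the perturbation direction that the SDE analysis identifies as the source of the implicit variance-based sharpness regularization.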
|
https://arxiv.org/abs/2509.18001
|
Academic Papers
|
svg
|
39581d24f02658ce9cb61a496bab4c121bd1f26db7225b0c6304bc0d5e1e3933
|
2026-01-16T00:00:00-05:00
|
Governing Together: Toward Infrastructure for Community-Run Social Media
|
arXiv:2509.19653v2 Announce Type: replace Abstract: Decentralizing the governance of social computing systems to communities promises to empower them to make independent decisions, with nuance and in accordance with their values. Yet, communities do not govern in isolation. Many problems communities face are common, or move across their boundaries. We therefore propose designing for "inter-community governance:" mechanisms that support relationships and interactions between communities to coordinate on governance issues. Drawing from workshops with 24 individuals on decentralized, community-run social media, we present six challenges in designing for inter-community governance surfaced through ideas proposed in workshops. Together, these ideas come together as an ecosystem of resources, infrastructures, and tools that highlight three key principles for designing for inter-community governance: modularity, forkability, and polycentricity. We end with a discussion of how the ideas proposed in workshops might be implemented in future work aiming to support community governance in social computing systems broadly.
|
https://arxiv.org/abs/2509.19653
|
Academic Papers
|
svg
|
caa6a5dc5ec19c5ee93ce48ba6c5541367d23e94fa6ae8e62d903346cb7f6e7c
|
2026-01-16T00:00:00-05:00
|
Functional Critics Are Essential in Off-Policy Actor-Critic: Provable Convergence and Efficient Exploration
|
arXiv:2509.22964v3 Announce Type: replace Abstract: Off-policy reinforcement learning (RL) with function approximation offers an effective way to improve sample efficiency by reusing past experience. Within this setting, the actor-critic (AC) framework has achieved strong empirical success but suffers from the "moving target" problem, where the policy being evaluated changes continually. Functional critics, or policy-conditioned value functions, have been proposed to address this issue by including a representation of the policy as input. While the concept of generalizing value functions across policy space is appealing, previous efforts have struggled to remain competitive against state-of-the-art AC algorithms that do not utilize functional critics. In this work, we revisit functional critics within the off-policy AC framework and identify two aspects that render them a necessity rather than a luxury. First, in off-policy AC, critic learning contends with both the "deadly triad" instability and the "moving target" issue, while actor learning faces the challenge of estimating the exact off-policy policy gradient. This complex interplay makes theoretical convergence extremely difficult for practical algorithms. We demonstrate that a functional critic is essential for addressing this challenge and establish the first convergence proof for an off-policy target-based AC algorithm under linear function approximation. Second, we identify a crucial link between functional critic modeling and efficient exploration. Specifically, we show that approximating posterior sampling for exploration in model-free settings is infeasible without functional critics. Practically, we propose a tailored neural network architecture and a minimal AC algorithm that relies solely on these insights. In experiments on the DeepMind Control Suite, this implementation achieves performance competitive with state-of-the-art methods.
|
https://arxiv.org/abs/2509.22964
|
Academic Papers
|
svg
|
f51fd99265bfc5c8b0f1751a9a11870b1086307a19dca6f968a2e867c2938cfc
|
2026-01-16T00:00:00-05:00
|
Knowledge Homophily in Large Language Models
|
arXiv:2509.23773v2 Announce Type: replace Abstract: Large Language Models (LLMs) have been increasingly studied as neural knowledge bases for supporting knowledge-intensive applications such as question answering and fact checking. However, the structural organization of their knowledge remains unexplored. Inspired by cognitive neuroscience findings, such as semantic clustering and priming, where knowing one fact increases the likelihood of recalling related facts, we investigate an analogous knowledge homophily pattern in LLMs. To this end, we map LLM knowledge into a graph representation through knowledge checking at both the triplet and entity levels. After that, we analyze the knowledgeability relationship between an entity and its neighbors, discovering that LLMs tend to possess a similar level of knowledge about entities positioned closer in the graph. Motivated by this homophily principle, we propose a Graph Neural Network (GNN) regression model to estimate entity-level knowledgeability scores for triplets by leveraging their neighborhood scores. The predicted knowledgeability enables us to prioritize checking less well-known triplets, thereby maximizing knowledge coverage under the same labeling budget. This not only improves the efficiency of active labeling for fine-tuning to inject knowledge into LLMs but also enhances multi-hop path retrieval in reasoning-intensive question answering.
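The homophily pattern itself is easy to illustrate: on a graph whose node scores cluster, a node's knowledgeability is well predicted by the mean score of its neighbors. A toy NumPy example (not the paper's GNN regression model; the graph and scores are invented for illustration):

```python
import numpy as np

# Two three-node cliques: nodes 0-2 about well-known entities, 3-5 obscure.
adj = np.zeros((6, 6))
for group in ([0, 1, 2], [3, 4, 5]):
    for i in group:
        for j in group:
            if i != j:
                adj[i, j] = 1.0

scores = np.array([0.90, 0.85, 0.80, 0.20, 0.25, 0.30])  # knowledgeability
neighbor_mean = adj @ scores / adj.sum(axis=1)           # neighbor average
corr = np.corrcoef(scores, neighbor_mean)[0, 1]
print(round(corr, 3))  # strongly positive under homophily
```

A GNN regressor generalizes this neighbor-averaging predictor with learned aggregation, which is what makes the predicted scores useful for prioritizing which triplets to check under a labeling budget.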
|
https://arxiv.org/abs/2509.23773
|
Academic Papers
|
svg
|
44440b5248f7602ceb53494af947534d833531a39a719312bcb4723d50196ae1
|
2026-01-16T00:00:00-05:00
|
YOLO26: Key Architectural Enhancements and Performance Benchmarking for Real-Time Object Detection
|
arXiv:2509.25164v3 Announce Type: replace Abstract: This study presents a comprehensive analysis of Ultralytics YOLO26 (also known as YOLOv26), highlighting its key architectural enhancements and performance benchmarking for real-time object detection. YOLO26, released in September 2025, stands as the newest and most advanced member of the YOLO family, purpose-built to deliver efficiency, accuracy, and deployment readiness on edge and low-power devices. The paper sequentially details architectural innovations of YOLO26, including the removal of Distribution Focal Loss (DFL), adoption of end-to-end NMS-free inference, integration of ProgLoss and Small-Target-Aware Label Assignment (STAL), and the introduction of the MuSGD optimizer for stable convergence. Beyond architecture, the study positions YOLO26 as a multi-task framework, supporting object detection, instance segmentation, pose/keypoints estimation, oriented detection, and classification. We present performance benchmarks of YOLO26 on edge devices such as NVIDIA Jetson Nano and Orin, comparing its results with YOLOv8, YOLOv11, YOLOv12, YOLOv13, and transformer-based detectors (RF-DETR and RT-DETR). This paper further explores real-time deployment pathways, flexible export options (ONNX, TensorRT, CoreML, TFLite), and quantization for INT8/FP16. Practical use cases of YOLO26 across robotics, manufacturing, and IoT are highlighted to demonstrate cross-industry adaptability. Finally, insights on deployment efficiency and broader implications are discussed, with future directions for YOLO26 and the YOLO lineage outlined.
|
https://arxiv.org/abs/2509.25164
|
Academic Papers
|
svg
|
714d38e77317ca0df22ec92b088f682b9841add14358e153ffae6e32c6b9944f
|
2026-01-16T00:00:00-05:00
|
A Geometric Unification of Generative AI with Manifold-Probabilistic Projection Models
|
arXiv:2510.00666v2 Announce Type: replace Abstract: Most models of generative AI for images assume that images are inherently low-dimensional objects embedded within a high-dimensional space. Additionally, it is often implicitly assumed that thematic image datasets form smooth or piecewise smooth manifolds. Common approaches overlook the geometric structure and focus solely on probabilistic methods, approximating the probability distribution through universal approximation techniques such as the kernel method. In some generative models, the low-dimensional nature of the data manifests itself in the introduction of a lower-dimensional latent space. Yet, the probability distribution in the latent or the manifold's coordinate space is considered uninteresting and is predefined or considered uniform. In this study, we address the problem of Blind Image Denoising (BID), and to some extent, the problem of generating images from noise by unifying geometric and probabilistic perspectives. We introduce a novel framework that improves upon existing probabilistic approaches by incorporating geometric assumptions that enable the effective use of kernel-based probabilistic methods. Furthermore, the proposed framework extends prior geometric approaches by combining explicit and implicit manifold descriptions through the introduction of a distance function. The resulting framework demystifies diffusion models by interpreting them as a projection mechanism onto the manifold of "good images". This interpretation leads to the construction of a new deterministic model, the Manifold-Probabilistic Projection Model (MPPM), which operates in both the representation (pixel) space and the latent space. We demonstrate that the Latent MPPM (LMPPM) outperforms the Latent Diffusion Model (LDM) across various datasets, achieving superior results in terms of image restoration and generation.
|
https://arxiv.org/abs/2510.00666
|
Academic Papers
|
svg
|
36b192b2ebe1e6280bc2c88850478109945266463b7bec19e9e4b658ac37297a
|
2026-01-16T00:00:00-05:00
|
Dual-Uncertainty Guided Policy Learning for Multimodal Reasoning
|
arXiv:2510.01444v2 Announce Type: replace Abstract: Reinforcement learning with verifiable rewards (RLVR) has advanced reasoning capabilities in multimodal large language models. However, existing methods typically treat visual inputs as deterministic, overlooking the perceptual ambiguity inherent to the visual modality. Consequently, they fail to distinguish whether a model's uncertainty stems from complex reasoning or ambiguous perception, preventing the targeted allocation of exploration or learning signals. To address this gap, we introduce DUPL, a dual-uncertainty guided policy learning approach for multimodal RLVR that quantifies and leverages both perceptual uncertainty (via symmetric KL divergence) and output uncertainty (via policy entropy) to guide policy updates. By establishing an uncertainty-driven feedback loop and employing a dynamic branch prioritization mechanism, DUPL recalibrates the policy advantage to focus learning on states with high perceptual or decisional ambiguity, enabling effective targeted exploration beyond passive data augmentation. Implemented on top of GRPO and evaluated on six multimodal mathematical and general-domain reasoning benchmarks, DUPL improves Qwen2.5-VL 3B and 7B models, achieving accuracy gains of up to 11.2% on visual math tasks and up to 7.1% on general-domain reasoning tasks, while consistently outperforming GRPO. These results demonstrate that dual-uncertainty guided policy learning is an effective and generalizable approach for multimodal RLVR.
|
https://arxiv.org/abs/2510.01444
|
Academic Papers
|
svg
|
f8592f73042fe009d0d4e61bcbd9f8d18f6dcf3653539a7bfb4b2f64de88ba41
|
2026-01-16T00:00:00-05:00
|
Data selection: at the interface of PDE-based inverse problem and randomized linear algebra
|
arXiv:2510.01567v2 Announce Type: replace Abstract: All inverse problems rely on data to recover unknown parameters, yet not all data are equally informative. This raises the central question of data selection. A distinctive challenge in PDE-based inverse problems is their inherently infinite-dimensional nature: both the parameter space and the design space are infinite, which greatly complicates the selection process. Somewhat unexpectedly, randomized numerical linear algebra (RNLA), originally developed in very different contexts, has provided powerful tools for addressing this challenge. These methods are inherently probabilistic, with guarantees typically stating that information is preserved with probability at least 1-p when using N randomly selected, weighted samples. Here, the notion of "information" can take different mathematical forms depending on the setting. In this review, we survey the problem of data selection in PDE-based inverse problems, emphasize its unique infinite-dimensional aspects, and highlight how RNLA strategies have been adapted and applied in this context.
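A standard RNLA ingredient in this setting is importance sampling by statistical leverage scores: sample N rows with probability proportional to leverage, reweight, and solve the reduced problem, which preserves the least-squares solution with high probability. A toy sketch using exact scores (practical pipelines approximate them with sketching; names and data here are illustrative):

```python
import numpy as np

def leverage_scores(A):
    """Row leverage scores of A via a thin QR factorization (exact; real
    RNLA pipelines approximate these with sketching)."""
    Q, _ = np.linalg.qr(A)
    return np.sum(Q ** 2, axis=1)

rng = np.random.default_rng(1)
A = rng.normal(size=(2000, 5))
x_true = rng.normal(size=5)
b = A @ x_true + 0.01 * rng.normal(size=2000)

# Sample N=200 rows with probability ~ leverage, reweight, and solve the
# reduced least-squares problem (hypothetical toy data-selection step).
p = leverage_scores(A)
p /= p.sum()
idx = rng.choice(2000, size=200, replace=True, p=p)
w = 1.0 / np.sqrt(200 * p[idx])
x_hat, *_ = np.linalg.lstsq(A[idx] * w[:, None], b[idx] * w, rcond=None)
print(np.linalg.norm(x_hat - x_true))  # small with high probability
```

In the PDE-based inverse-problem setting, the rows of `A` play the role of candidate measurements in an infinite-dimensional design space, which is precisely where the adaptation of such finite-dimensional RNLA guarantees becomes nontrivial.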
|
https://arxiv.org/abs/2510.01567
|
Academic Papers
|
svg
|
87571cc5299ea2d4947c6174f02e49f92d47df4109f3052b5f11597891f6d657
|
2026-01-16T00:00:00-05:00
|
Learning Regularization Functionals for Inverse Problems: A Comparative Study
|
arXiv:2510.01755v2 Announce Type: replace Abstract: In recent years, a variety of learned regularization frameworks for solving inverse problems in imaging have emerged. These offer flexible modeling together with mathematical insights. The proposed methods differ in their architectural design and training strategies, making direct comparison challenging due to non-modular implementations. We address this gap by collecting and unifying the available code into a common framework. This unified view allows us to systematically compare the approaches and highlight their strengths and limitations, providing valuable insights into their future potential. We also provide concise descriptions of each method, complemented by practical guidelines.
|
https://arxiv.org/abs/2510.01755
|
Academic Papers
|
svg
|
f7bc340c03c73af84677f6cee2fbd44cbd863225f27064480ae6243eeb0d5513
|
2026-01-16T00:00:00-05:00
|
Fine-Tuning Diffusion Models via Intermediate Distribution Shaping
|
arXiv:2510.02692v2 Announce Type: replace Abstract: Diffusion models are widely used for generative tasks across domains. While pre-trained diffusion models effectively capture the training data distribution, it is often desirable to shape these distributions using reward functions to align with downstream applications. Policy gradient methods, such as Proximal Policy Optimization (PPO), are widely used in the context of autoregressive generation. However, the marginal likelihoods required for such methods are intractable for diffusion models, leading to alternative proposals and relaxations. In this context, we unify variants of Rejection sAmpling based Fine-Tuning (RAFT) as GRAFT, and show that this implicitly performs KL regularized reward maximization with reshaped rewards. We then introduce P-GRAFT to shape distributions at intermediate noise levels and demonstrate empirically that this can lead to more effective fine-tuning. We mathematically explain this via a bias-variance tradeoff. Motivated by this, we propose inverse noise correction to improve flow models without leveraging explicit rewards. We empirically evaluate our methods on text-to-image (T2I) generation, layout generation, molecule generation and unconditional image generation. Notably, our framework, applied to Stable Diffusion 2, improves over policy gradient methods on popular T2I benchmarks in terms of VQAScore and shows an $8.81\%$ relative improvement over the base model. For unconditional image generation, inverse noise correction improves FID of generated images at lower FLOPs/image.
|
https://arxiv.org/abs/2510.02692
|
Academic Papers
|
svg
|
bba8c6176576cf97d3b85c1f02c9a2645adb876270c211ddf6706dbb0402a03a
|
2026-01-16T00:00:00-05:00
|
Distributionally Robust Causal Abstractions
|
arXiv:2510.04842v2 Announce Type: replace Abstract: Causal Abstraction (CA) theory provides a principled framework for relating causal models that describe the same system at different levels of granularity while ensuring interventional consistency between them. Recently, several approaches for learning CAs have been proposed, but all assume fixed and well-specified exogenous distributions, making them vulnerable to environmental shifts and misspecification. In this work, we address these limitations by introducing the first class of distributionally robust CAs and their associated learning algorithms. The latter cast robust causal abstraction learning as a constrained min-max optimization problem with Wasserstein ambiguity sets. We provide theoretical results, for both empirical and Gaussian environments, leading to principled selection of the level of robustness via the radius of these sets. Furthermore, we present empirical evidence across different problems and CA learning methods, demonstrating our framework's robustness not only to environmental shifts but also to structural model and intervention mapping misspecification.
|
https://arxiv.org/abs/2510.04842
|
Academic Papers
|
svg
|
3e48df477e15682386f96e84ee4ae447fe15898ad9cc2277a8dabf6ed15bc61b
|
2026-01-16T00:00:00-05:00
|
VAL-Bench: Belief Consistency as a measure for Value Alignment in Language Models
|
arXiv:2510.05465v3 Announce Type: replace Abstract: Large language models (LLMs) are increasingly being used for tasks where outputs shape human decisions, so it is critical to verify that their responses consistently reflect desired human values. Humans, as individuals or groups, don't agree on a universal set of values, which makes evaluating value alignment difficult. Existing benchmarks often use hypothetical or commonsensical situations, which don't capture the complexity and ambiguity of real-life debates. We introduce the Value ALignment Benchmark (VAL-Bench), which measures the consistency in language model belief expressions in response to real-life value-laden prompts. VAL-Bench consists of 115K pairs of prompts designed to elicit opposing stances on a controversial issue, extracted from Wikipedia. We use an LLM-as-a-judge, validated against human annotations, to evaluate if the pair of responses consistently expresses either a neutral or a specific stance on the issue. Applied across leading open- and closed-source models, the benchmark shows considerable variation in consistency rates (ranging from ~10% to ~80%), with Claude models the only ones to achieve high levels of consistency. Lack of consistency in this manner risks epistemic harm by making user beliefs dependent on how questions are framed rather than on underlying evidence, and undermines LLM reliability in trust-critical applications. Therefore, we stress the importance of research towards training belief consistency in modern LLMs. By providing a scalable, reproducible benchmark, VAL-Bench enables systematic measurement of necessary conditions for value alignment.
|
https://arxiv.org/abs/2510.05465
|
Academic Papers
|
svg
|
00e1b344bb70fb2bdd292b6263a112bdc2f0ab38addbe3acf56fa3aede611e16
|
2026-01-16T00:00:00-05:00
|
A recursive approach to the construction and enumeration of self-orthogonal and self-dual codes over finite commutative chain rings of even characteristic
|
arXiv:2510.06069v2 Announce Type: replace Abstract: Let $\mathcal{R}_{e,m}$ be a finite commutative chain ring of even characteristic with maximal ideal $\langle u \rangle$ of nilpotency index $e \geq 2,$ Teichm$\ddot{u}$ller set $\mathcal{T}_{m},$ and residue field $\mathcal{R}_{e,m}/\langle u \rangle$ of order $2^m.$ Suppose that $2 \in \langle u^{\kappa}\rangle \setminus \langle u^{\kappa+1}\rangle$ for some even positive integer $ \kappa \leq e.$ In this paper, we provide a recursive method to construct a self-orthogonal code $\mathcal{C}_e$ of type $\{\lambda_1, \lambda_2, \ldots, \lambda_e\}$ and length $n$ over $\mathcal{R}_{e,m}$ from a chain $\mathcal{D}^{(1)}\subseteq \mathcal{D}^{(2)} \subseteq \cdots \subseteq \mathcal{D}^{(\lceil \frac{e}{2} \rceil)}$ of self-orthogonal codes of length $n$ over $\mathcal{T}_{m},$ and vice versa, where $\dim \mathcal{D}^{(i)}=\lambda_1+\lambda_2+\cdots+\lambda_i$ for $1 \leq i \leq \lceil \frac{e}{2} \rceil,$ the codes $\mathcal{D}^{(\lfloor \frac{e+1}{2} \rfloor-\kappa)},\mathcal{D}^{(\lfloor \frac{e+1}{2} \rfloor -\kappa+1)},\ldots,\mathcal{D}^{(\lfloor \frac{e}{2}\rfloor-\lfloor \frac{\kappa}{2} \rfloor)}$ satisfy certain additional conditions, and $\lambda_1,\lambda_2,\ldots,\lambda_e$ are non-negative integers satisfying $2\lambda_1+2\lambda_2+\cdots+2\lambda_{e-i+1}+\lambda_{e-i+2}+\lambda_{e-i+3}+\cdots+\lambda_i \leq n$ for $\lceil \frac{e+1}{2} \rceil \leq i\leq e.$ This construction guarantees that $Tor_i(\mathcal{C}_e)=\mathcal{D}^{(i)}$ for $1 \leq i \leq \lceil \frac{e}{2} \rceil.$ By employing this recursive construction method, together with the results from group theory and finite geometry, we derive explicit enumeration formulae for all self-orthogonal and self-dual codes of an arbitrary length over $\mathcal{R}_{e,m}.$ We also demonstrate these results through examples.
|
https://arxiv.org/abs/2510.06069
|
Academic Papers
|
svg
|
384b5dfeca08a4457d7a91a8eaa54e375861b8ff8371c70b190fd051bfb5013e
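The chain-of-codes construction above is beyond a short sketch, but its simplest ingredient — checking that a binary code is self-orthogonal over the residue field (taking $m = 1$, so the field is GF(2)) — is easy to illustrate. The helper and generator matrix below are hypothetical toys, not the paper's construction:

```python
import numpy as np

def is_self_orthogonal(G: np.ndarray) -> bool:
    """Check whether the binary code generated by the rows of G is
    self-orthogonal, i.e. every pair of codewords has even overlap."""
    # Over GF(2), the code is self-orthogonal iff G @ G^T == 0 (mod 2).
    return not np.any((G @ G.T) % 2)

# Generator of the length-4 code {0000, 1111}: 1111 . 1111 = 4 = 0 mod 2.
G = np.array([[1, 1, 1, 1]])
print(is_self_orthogonal(G))  # -> True
```

In the paper's setting, chains of such codes over the Teichmüller set, together with type conditions on the $\lambda_i$, are lifted recursively to codes over the chain ring.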
|
2026-01-16T00:00:00-05:00
|
Recursive construction and enumeration of self-orthogonal and self-dual codes over finite commutative chain rings of even characteristic
|
arXiv:2510.06082v2 Announce Type: replace Abstract: Let $\mathscr{R}_{e,m}$ denote a finite commutative chain ring of even characteristic with maximal ideal $\langle u \rangle$ of nilpotency index $e \geq 3,$ Teichmüller set $\mathcal{T}_{m},$ and residue field $\mathscr{R}_{e,m}/\langle u \rangle$ of order $2^m.$ Suppose that $2 \in \langle u^{\kappa}\rangle \setminus \langle u^{\kappa+1}\rangle$ for some odd integer $\kappa$ with $3 \leq \kappa \leq e.$ In this paper, we first develop a recursive method to construct a self-orthogonal code $\mathscr{D}_e$ of type $\{\lambda_1, \lambda_2, \ldots, \lambda_e\}$ and length $n$ over $\mathscr{R}_{e,m}$ from a chain $\mathcal{C}^{(1)}\subseteq \mathcal{C}^{(2)} \subseteq \cdots \subseteq \mathcal{C}^{(\lceil \frac{e}{2} \rceil)} $ of self-orthogonal codes of length $n$ over $\mathcal{T}_{m},$ and vice versa, subject to certain conditions, where $\lambda_1,\lambda_2,\ldots,\lambda_e$ are non-negative integers satisfying $2\lambda_1+2\lambda_2+\cdots+2\lambda_{e-i+1}+\lambda_{e-i+2}+\lambda_{e-i+3}+\cdots+\lambda_i \leq n$ for $\lceil \frac{e+1}{2} \rceil \leq i\leq e,$ and $\lfloor \cdot \rfloor$ and $\lceil \cdot \rceil$ denote the floor and ceiling functions, respectively. This construction ensures that $Tor_i(\mathscr{D}_e)=\mathcal{C}^{(i)}$ for $1 \leq i \leq \lceil \frac{e}{2} \rceil.$ With the help of this recursive construction method and by applying results from group theory and finite geometry, we obtain explicit enumeration formulae for all self-orthogonal and self-dual codes of an arbitrary length over $\mathscr{R}_{e,m}.$ We also illustrate these results with some examples.
|
https://arxiv.org/abs/2510.06082
|
Academic Papers
|
svg
|
69fa6fe20b70be79d6469246e1aa217e4c627205eb1161e7737ca3f263c53cc7
|
2026-01-16T00:00:00-05:00
|
Textual Entailment is not a Better Bias Metric than Token Probability
|
arXiv:2510.07662v2 Announce Type: replace Abstract: Measurement of social bias in language models is typically by token probability (TP) metrics, which are broadly applicable but have been criticized for their distance from real-world language model use cases and harms. In this work, we test natural language inference (NLI) as an alternative bias metric. In extensive experiments across seven LM families, we show that NLI and TP bias evaluation behave substantially differently, with very low correlation among different NLI metrics and between NLI and TP metrics. NLI metrics are more brittle and unstable, slightly less sensitive to wording of counterstereotypical sentences, and slightly more sensitive to wording of tested stereotypes than TP approaches. Given this conflicting evidence, we conclude that neither token probability nor natural language inference is a ``better'' bias metric in all cases. We do not find sufficient evidence to justify NLI as a complete replacement for TP metrics in bias evaluation.
|
https://arxiv.org/abs/2510.07662
|
Academic Papers
|
svg
|
22825f41d6ce85091f062400cb5f1e29546924a7a96cdac37ce3d660783840e5
|
2026-01-16T00:00:00-05:00
|
Parallel Test-Time Scaling for Latent Reasoning Models
|
arXiv:2510.07745v3 Announce Type: replace Abstract: Parallel test-time scaling (TTS) is a pivotal approach for enhancing large language models (LLMs), typically by sampling multiple token-based chains-of-thought in parallel and aggregating outcomes through voting or search. Recent advances in latent reasoning, where intermediate reasoning unfolds in continuous vector spaces, offer a more efficient alternative to explicit Chain-of-Thought, yet whether such latent models can similarly benefit from parallel TTS remains open, mainly due to the absence of sampling mechanisms in continuous space and the lack of probabilistic signals for advanced trajectory aggregation. This work enables parallel TTS for latent reasoning models by addressing the above issues. For sampling, we introduce two uncertainty-inspired stochastic strategies: Monte Carlo Dropout and Additive Gaussian Noise. For aggregation, we design a Latent Reward Model (LatentRM) trained with a step-wise contrastive objective to score and guide latent reasoning. Extensive experiments and visualization analyses show that both sampling strategies scale effectively with compute and exhibit distinct exploration dynamics, while LatentRM enables effective trajectory selection. Together, our explorations open a new direction for scalable inference in continuous spaces. Code and checkpoints are released at https://github.com/ModalityDance/LatentTTS
|
https://arxiv.org/abs/2510.07745
|
Academic Papers
|
svg
|
1911b176a1470f592ec024f88fd26d687bc631a4f80c34c04f80bfb7225bf714
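The additive-Gaussian-noise sampling strategy, paired with simple majority voting (standing in for the paper's LatentRM aggregation), can be sketched as follows. The reasoning step, readout, and dimensions below are hypothetical stand-ins, not the paper's models:

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(0)
W = rng.standard_normal((8, 8))          # toy latent-reasoner weights
z0 = rng.standard_normal(8)              # initial latent state

def reason_step(z):
    # Stand-in for one deterministic latent reasoning step (hypothetical).
    return np.tanh(z @ W)

def decode(z):
    # Stand-in readout mapping the final latent to a discrete answer.
    return int(np.argmax(z))

# Parallel TTS via additive Gaussian noise: perturbing each latent step
# turns one deterministic model into K stochastic trajectories.
K, sigma, answers = 16, 0.1, []
for _ in range(K):
    z = z0.copy()
    for _ in range(4):                   # 4 latent reasoning steps
        z = reason_step(z) + sigma * rng.standard_normal(z.shape)
    answers.append(decode(z))

# Aggregate by majority vote; the paper instead scores trajectories
# with a learned Latent Reward Model.
final = Counter(answers).most_common(1)[0][0]
print(final)
```

Monte Carlo Dropout would replace the additive noise with randomly zeroed latent coordinates inside `reason_step`.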
|
2026-01-16T00:00:00-05:00
|
One Sentence, Two Embeddings: Contrastive Learning of Explicit and Implicit Semantic Representations
|
arXiv:2510.09293v2 Announce Type: replace Abstract: Sentence embedding methods have made remarkable progress, yet they still struggle to capture the implicit semantics within sentences. This can be attributed to the inherent limitations of conventional sentence embedding methods that assign only a single vector per sentence. To overcome this limitation, we propose DualCSE, a sentence embedding method that assigns two embeddings to each sentence: one representing the explicit semantics and the other representing the implicit semantics. These embeddings coexist in a shared space, enabling the selection of the desired semantics for specific purposes such as information retrieval and text classification. Experimental results demonstrate that DualCSE can effectively encode both explicit and implicit meanings and improve performance on downstream tasks.
|
https://arxiv.org/abs/2510.09293
|
Academic Papers
|
svg
|
cb3932af8038b3aea9920a7ef7827e6c529ea8678c8acbc1409e5df2445578f5
|
2026-01-16T00:00:00-05:00
|
Classifying and Addressing the Diversity of Errors in Retrieval-Augmented Generation Systems
|
arXiv:2510.13975v2 Announce Type: replace Abstract: Retrieval-augmented generation (RAG) is a prevalent approach for building LLM-based question-answering systems that can take advantage of external knowledge databases. Due to the complexity of real-world RAG systems, there are many potential causes for erroneous outputs. Understanding the range of errors that can occur in practice is crucial for robust deployment. We present a new taxonomy of the error types that can occur in realistic RAG systems, examples of each, and practical advice for addressing them. Additionally, we curate a dataset of erroneous RAG responses annotated by error types. We then propose an auto-evaluation method aligned with our taxonomy that can be used in practice to track and address errors during development. Code and data are available at https://github.com/layer6ai-labs/rag-error-classification.
|
https://arxiv.org/abs/2510.13975
|
Academic Papers
|
svg
|
080bc809e70ab25fd78a935583a639b36481b7ce5be1ebb1bb9b03d5f39230c1
|
2026-01-16T00:00:00-05:00
|
Decorrelation Speeds Up Vision Transformers
|
arXiv:2510.14657v3 Announce Type: replace Abstract: Masked Autoencoder (MAE) pre-training of vision transformers (ViTs) yields strong performance in low-label data regimes but comes with substantial computational costs, making it impractical in time- and resource-constrained industrial settings. We address this by integrating Decorrelated Backpropagation (DBP) into MAE pre-training, an optimization method that iteratively reduces input correlations at each layer to accelerate convergence. Applied selectively to the encoder, DBP achieves faster pre-training without loss of stability. To mimic constrained-data scenarios, we evaluate our approach on ImageNet-1K pre-training and ADE20K fine-tuning using randomly sampled subsets of each dataset. Under this setting, DBP-MAE reduces wall-clock time to baseline performance by 21.1%, lowers carbon emissions by 21.4%, and improves segmentation mIoU by 1.1 points. We observe similar gains when pre-training and fine-tuning on proprietary industrial data, confirming the method's applicability in real-world scenarios. These results demonstrate that DBP can reduce training time and energy use while improving downstream performance for large-scale ViT pre-training. Keywords: Deep learning, Vision transformers, Efficient AI, Decorrelation
|
https://arxiv.org/abs/2510.14657
|
Academic Papers
|
svg
|
36789083ca10ee5d1105a92c7668ea84cd0fd06a0eb2e2e7d35b6897c49556a1
|
2026-01-16T00:00:00-05:00
|
Attn-JGNN: Attention Enhanced Join-Graph Neural Networks
|
arXiv:2510.15583v2 Announce Type: replace Abstract: We propose an Attention Enhanced Join-Graph Neural Network (Attn-JGNN) model for solving #SAT problems, which significantly improves solving accuracy. Inspired by the Iterative Join Graph Propagation (IJGP) algorithm, Attn-JGNN uses tree decomposition to encode the CNF formula into a join-graph, then performs iterative message passing on the join-graph, and finally approximates the number of models by learning partition functions. To further improve accuracy, we apply an attention mechanism within and between clusters of the join-graph, which makes Attn-JGNN focus on the key variables and clusters in probabilistic inference and reduces redundant computation. Finally, our experiments show that Attn-JGNN achieves better results than other neural network methods.
|
https://arxiv.org/abs/2510.15583
|
Academic Papers
|
svg
|
2ebbc691d83b1c121cfe63ec9fcef6d932e9d6a153fa1b903f6b431076f13600
|
2026-01-16T00:00:00-05:00
|
Investigating LLM Capabilities on Long Context Comprehension for Medical Question Answering
|
arXiv:2510.18691v2 Announce Type: replace Abstract: This study is the first to investigate LLM comprehension capabilities over long-context (LC), clinically relevant medical Question Answering (QA) beyond MCQA. Our comprehensive approach considers a range of settings based on content inclusion of varying size and relevance, LLM models of different capabilities, and a variety of datasets across task formulations. We reveal insights on model size effects and their limitations, underlying memorization issues, and the benefits of reasoning models, while demonstrating the value and challenges of leveraging the patient's full long context. Importantly, we examine the effect of Retrieval Augmented Generation (RAG) on medical LC comprehension, showcasing the best settings in single- versus multi-document QA datasets. We shed light on several evaluation aspects using a multi-faceted approach, uncovering common metric challenges. Our quantitative analysis reveals challenging cases where RAG excels while still showing limitations in cases requiring temporal reasoning.
|
https://arxiv.org/abs/2510.18691
|
Academic Papers
|
svg
|
04159755038dce8e51b67de8a2df5245733b5c7b9d5845b0eda8dd225dc87793
|
2026-01-16T00:00:00-05:00
|
CoRECT: A Framework for Evaluating Embedding Compression Techniques at Scale
|
arXiv:2510.19340v3 Announce Type: replace Abstract: Dense retrieval systems have proven to be effective across various benchmarks, but require substantial memory to store large search indices. Recent advances in embedding compression show that index sizes can be greatly reduced with minimal loss in ranking quality. However, existing studies often overlook the role of corpus complexity -- a critical factor, as recent work shows that both corpus size and document length strongly affect dense retrieval performance. In this paper, we introduce CoRECT (Controlled Retrieval Evaluation of Compression Techniques), a framework for large-scale evaluation of embedding compression methods, supported by a newly curated dataset collection. To demonstrate its utility, we benchmark eight representative types of compression methods. Notably, we show that non-learned compression achieves substantial index size reduction, even on up to 100M passages, with statistically insignificant performance loss. However, selecting the optimal compression method remains challenging, as performance varies across models. Such variability highlights the necessity of CoRECT to enable consistent comparison and informed selection of compression methods. All code, data, and results are available on GitHub and HuggingFace.
|
https://arxiv.org/abs/2510.19340
|
Academic Papers
|
svg
|
28cdd9060d1922b219242a20783b9931e251ff95c238d3c29a89820a95d146de
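As a toy illustration of the non-learned compression that the benchmark covers, the sketch below applies int8 scalar quantization to a random embedding index, shrinking it 4x, and compares approximate scores against exact ones. The data, scaling rule, and comparison are illustrative assumptions, not CoRECT's protocol:

```python
import numpy as np

rng = np.random.default_rng(0)
docs = rng.standard_normal((1000, 128)).astype(np.float32)   # toy index
query = rng.standard_normal(128).astype(np.float32)

# Non-learned scalar quantization: map each float32 dimension to int8
# with a single global scale, reducing index memory by a factor of 4.
scale = np.abs(docs).max() / 127.0
docs_q = np.clip(np.round(docs / scale), -127, 127).astype(np.int8)

# Score with the dequantized index and compare against exact scores.
exact = docs @ query
approx = (docs_q.astype(np.float32) * scale) @ query
print(int(exact.argmax()), int(approx.argmax()))
```

Whether such simple schemes preserve ranking at scale — and when learned alternatives are worth their cost — is exactly the kind of question the framework is built to answer.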
|
2026-01-16T00:00:00-05:00
|
User Perceptions vs. Proxy LLM Judges: Privacy and Helpfulness in LLM Responses to Privacy-Sensitive Scenarios
|
arXiv:2510.20721v3 Announce Type: replace Abstract: Large language models (LLMs) are rapidly being adopted for tasks like drafting emails, summarizing meetings, and answering health questions. In these settings, users may need to share private information (e.g., contact details, health records). To evaluate LLMs' ability to identify and redact such information, prior work introduced real-life, scenario-based benchmarks (e.g., ConfAIde, PrivacyLens) and found that LLMs can leak private information in complex scenarios. However, these evaluations relied on proxy LLMs to judge the helpfulness and privacy-preservation quality of LLM responses, rather than directly measuring users' perceptions. To understand how users perceive the helpfulness and privacy-preservation quality of LLM responses to privacy-sensitive scenarios, we conducted a user study ($n=94$) using 90 PrivacyLens scenarios. We found that users had low agreement with each other when evaluating identical LLM responses. In contrast, five proxy LLMs reached high agreement, yet each proxy LLM had low correlation with users' evaluations. These results indicate that proxy LLMs cannot accurately estimate users' wide range of perceptions of utility and privacy in privacy-sensitive scenarios. We discuss the need for more user-centered studies to measure LLMs' ability to help users while preserving privacy, and for improving alignment between LLMs and users in estimating perceived privacy and utility.
|
https://arxiv.org/abs/2510.20721
|
Academic Papers
|
svg
|
b1b0397d4d9f0a9feddf9d7c5492d122c5696189d5a99bb4cc2f08db37e7d1f5
|
2026-01-16T00:00:00-05:00
|
Universal Maximum Likelihood (List) Decoding via Fast Vector-Matrix Multiplication
|
arXiv:2510.21414v2 Announce Type: replace Abstract: Maximum-likelihood (ML) decoding for arbitrary block codes remains fundamentally hard, with worst-case time complexity-measured by the total number of multiplications-being no better than straightforward exhaustive search, which requires $q^{k} n$ operations for an $[n,k]_q$ code. This paper introduces a simple, code-agnostic framework that reduces the worst-case complexity by a factor of $n$, down to $q^{k}$ operations, a highly desirable reduction in practice. The result holds for both linear and nonlinear block codes over general memoryless channels and under both hard-decision and soft-decision decoding. It naturally extends to intersymbol-interference (ISI) channels and ML list decoding with only a negligible increase in complexity. Our core insight is that, upon receipt of each sequence at the receiver, the conditional probability of that sequence for each codeword in the codebook (i.e., the \emph{likelihood}) can be expressed as the inner product of two carefully constructed vectors -- the first depending on the received sequence, and the second on that codeword itself. As a result, evaluating the likelihoods for all codewords in the codebook reduces to a single vector-matrix multiplication, and ML decoding (MLD) becomes the simple task of picking the maximum entry in the resulting vector. The only non-trivial cost lies in the vector-matrix product. However, our matrix construction allows the use of the Mailman algorithm to reduce this cost. This time reduction is achieved at the cost of high space complexity, requiring $\mathcal{O}(q^{k+1} n)$ space to store the pre-computed codebook matrix.
|
https://arxiv.org/abs/2510.21414
|
Academic Papers
|
svg
|
7eb0802fea1546b96e98aabf3efdb8e2b569849ef3846a2abe9f82348942140e
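The core identity — each codeword's log-likelihood as an inner product of a vector built from the received sequence and a vector built from the codeword — can be sketched for a binary symmetric channel ($q = 2$). The codebook below is an arbitrary toy example, and the Mailman speed-up of the final product is omitted:

```python
import numpy as np

p = 0.1                                   # BSC crossover probability
codebook = np.array([[0, 0, 0, 0, 0],
                     [1, 1, 1, 0, 0],
                     [0, 0, 1, 1, 1],
                     [1, 1, 0, 1, 1]])
M, n = codebook.shape
logp = np.log([1 - p, p])                 # log P(match), log P(flip)

# Pre-computed codebook matrix A (2n x M): entry (2i + s, j) holds
# log P(y_i = s | c_j[i]) for codeword j, position i, symbol s.
A = np.zeros((2 * n, M))
for j, c in enumerate(codebook):
    for i, ci in enumerate(c):
        A[2 * i + 0, j] = logp[0] if ci == 0 else logp[1]
        A[2 * i + 1, j] = logp[1] if ci == 0 else logp[0]

def ml_decode(y):
    # One-hot encode the received word; a single vector-matrix product
    # then yields every codeword's log-likelihood, and MLD is an argmax.
    v = np.zeros(2 * n)
    v[2 * np.arange(n) + y] = 1.0
    return int(np.argmax(v @ A))

print(ml_decode(np.array([1, 1, 1, 0, 1])))  # -> 1 (one flip away)
```

For the BSC this reduces to minimum-distance decoding; the construction itself works for any memoryless channel, since only the entries of `A` change.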
|
2026-01-16T00:00:00-05:00
|
Deep Jump Gaussian Processes for Surrogate Modeling of High-Dimensional Piecewise Continuous Functions
|
arXiv:2510.21974v2 Announce Type: replace Abstract: We introduce Deep Jump Gaussian Processes (DJGP), a novel method for surrogate modeling of a piecewise continuous function on a high-dimensional domain. DJGP addresses the limitations of conventional Jump Gaussian Processes (JGP) in high-dimensional input spaces by integrating region-specific, locally linear projections with JGP modeling. These projections employ region-dependent matrices to capture local low-dimensional subspace structures, making them well suited to the inherently localized modeling behavior of JGPs, a variant of local Gaussian processes. To control model complexity, we place a Gaussian Process prior on the projection matrices, allowing them to evolve smoothly across the input space. The projected inputs are then modeled with a JGP to capture piecewise continuous relationships with the response. This yields a distinctive two-layer deep learning of GP/JGP. We further develop a scalable variational inference algorithm to jointly learn the projection matrices and JGP hyperparameters. Rigorous theoretical analysis and extensive empirical studies are provided to justify the proposed approach. In particular, we derive an oracle error bound for DJGP and decompose it into four distinct sources of error, which are then linked to practical implications. Experiments on synthetic and benchmark datasets demonstrate that DJGP achieves superior predictive accuracy and more reliable uncertainty quantification compared with existing methods.
|
https://arxiv.org/abs/2510.21974
|
Academic Papers
|
svg
|
a5a7febee09ce2d7a28c647a953b48822d0ca3a1e262f62385de6f632b1fddca
|
2026-01-16T00:00:00-05:00
|
Learning Without Augmenting: Unsupervised Time Series Representation Learning via Frame Projections
|
arXiv:2510.22655v2 Announce Type: replace Abstract: Self-supervised learning (SSL) has emerged as a powerful paradigm for learning representations without labeled data. Most SSL approaches rely on strong, well-established, handcrafted data augmentations to generate diverse views for representation learning. However, designing such augmentations requires domain-specific knowledge and implicitly imposes representational invariances on the model, which can limit generalization. In this work, we propose an unsupervised representation learning method that replaces augmentations by generating views using orthonormal bases and overcomplete frames. We show that embeddings learned from orthonormal and overcomplete spaces reside on distinct manifolds, shaped by the geometric biases introduced by representing samples in different spaces. By jointly leveraging the complementary geometry of these distinct manifolds, our approach achieves superior performance without artificially increasing data diversity through strong augmentations. We demonstrate the effectiveness of our method on nine datasets across five temporal sequence tasks, where signal-specific characteristics make data augmentations particularly challenging. Without relying on augmentation-induced diversity, our method achieves performance gains of up to 15--20\% over existing self-supervised approaches. Source code: https://github.com/eth-siplab/Learning-with-FrameProjections
|
https://arxiv.org/abs/2510.22655
|
Academic Papers
|
svg
|
c790200f1ff1d99ae2f641e75e79fe1d34bb2490a1489c8202da05b16da1b930
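The contrast between an orthonormal-basis view and an overcomplete-frame view can be sketched on a toy signal. The DCT basis and the identity-plus-DCT tight frame below are illustrative choices under assumed dimensions, not necessarily the paper's constructions:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(16)                # one time-series window

# Orthonormal-basis view: the 16x16 orthonormal DCT-II matrix.
k, n = np.meshgrid(np.arange(16), np.arange(16), indexing="ij")
B = np.cos(np.pi * (2 * n + 1) * k / 32) * np.sqrt(2 / 16)
B[0] /= np.sqrt(2)
view_ortho = B @ x                         # norm-preserving rotation

# Overcomplete-frame view: stacking identity and DCT rows gives a
# tight frame with redundancy 2, i.e. a geometrically different
# embedding of the same sample (normalized to preserve norm).
F = np.vstack([np.eye(16), B])
view_frame = F @ x / np.sqrt(2)

print(np.linalg.norm(view_ortho), np.linalg.norm(view_frame))
```

Both views are deterministic functions of the input, so no handcrafted augmentation or invariance assumption enters; the learning signal comes from the differing geometry of the two spaces.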
|
2026-01-16T00:00:00-05:00
|
Differential Privacy as a Perk: Federated Learning over Multiple-Access Fading Channels with a Multi-Antenna Base Station
|
arXiv:2510.23463v3 Announce Type: replace Abstract: Federated Learning (FL) is a distributed learning paradigm that preserves privacy by eliminating the need to exchange raw data during training. In its prototypical edge instantiation with underlying wireless transmissions enabled by analog over-the-air computing (AirComp), referred to as \emph{over-the-air FL (AirFL)}, the inherent channel noise plays a unique role of \emph{frenemy} in the sense that it degrades training due to noisy global aggregation while providing a natural source of randomness for privacy-preserving mechanisms, formally quantified by \emph{differential privacy (DP)}. It remains, nevertheless, challenging to effectively harness such channel impairments, as prior arts, under assumptions of either simple channel models or restricted types of loss functions, mostly considering (local) DP enhancement with a single-round or non-convergent bound on privacy loss. In this paper, we study AirFL over multiple-access fading channels with a multi-antenna base station (BS) subject to user-level DP requirements. Despite a recent study, which claimed in similar settings that artificial noise (AN) must be injected to ensure DP in general, we demonstrate, on the contrary, that DP can be gained as a \emph{perk} even \emph{without} employing any AN. Specifically, we derive a novel bound on DP that converges under general bounded-domain assumptions on model parameters, along with a convergence bound with general smooth and non-convex loss functions. Next, we optimize over receive beamforming and power allocations to characterize the optimal convergence-privacy trade-offs, which also reveal explicit conditions in which DP is achievable without compromising training. Finally, our theoretical findings are validated by extensive numerical results.
|
https://arxiv.org/abs/2510.23463
|
Academic Papers
|
svg
|
faa91142f638ac03ab17ced6f4ca5ed3110a2740e6dda8bd94c1207f00f5b7b0
|
2026-01-16T00:00:00-05:00
|
Geometric Algorithms for Neural Combinatorial Optimization with Constraints
|
arXiv:2510.24039v3 Announce Type: replace Abstract: Self-Supervised Learning (SSL) for Combinatorial Optimization (CO) is an emerging paradigm for solving combinatorial problems using neural networks. In this paper, we address a central challenge of SSL for CO: solving problems with discrete constraints. We design an end-to-end differentiable framework that enables us to solve discrete constrained optimization problems with neural networks. Concretely, we leverage algorithmic techniques from the literature on convex geometry and Carath\'eodory's theorem to decompose neural network outputs into convex combinations of polytope corners that correspond to feasible sets. This decomposition-based approach enables self-supervised training but also ensures efficient quality-preserving rounding of the neural net output into feasible solutions. Extensive experiments in cardinality-constrained optimization show that our approach can consistently outperform neural baselines. We further provide worked-out examples of how our method can be applied beyond cardinality-constrained problems to a diverse set of combinatorial optimization tasks, including finding independent sets in graphs, and solving matroid-constrained problems.
|
https://arxiv.org/abs/2510.24039
|
Academic Papers
|
svg
|
b02698dbbfb38c73422205a0f9a5c8ef41ff585e4fe8fc016af43b03d4b1127d
|
2026-01-16T00:00:00-05:00
|
Pinwheel Scheduling with Real Periods
|
arXiv:2510.24068v3 Announce Type: replace Abstract: For a sequence of tasks, each with a positive integer period, the pinwheel scheduling problem asks for a valid schedule, one that performs one task per day and performs each task at least once in any window of consecutive days whose length equals its period. Chan and Chin conjectured in 1993 that a valid schedule exists for any sequence of tasks whose density, the sum of the reciprocals of the periods, is at most $\frac{5}{6}$. Recently, Kawamura settled this conjecture affirmatively. In this paper we consider an extended version with real periods proposed by Kawamura, in which a valid schedule must perform each task $i$ having a real period $a_{i}$ at least $l$ times in any $\lceil l a_{i} \rceil$ consecutive days, for every positive integer $l$. We show that any sequence of tasks whose periods take three distinct real values and whose density is at most $\frac{5}{6}$ admits a valid schedule. We hereby conjecture that the conjecture of Chan and Chin also holds for real periods.
|
https://arxiv.org/abs/2510.24068
|
Academic Papers
|
svg
|
ce567857ee07857fb02db4f2e18c96131cb9d18fb0ad1977522c8fbabed7f47b
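The real-period validity condition can be sketched as a brute-force check over one cycle of a cyclic schedule; for simplicity this toy checker only examines windows up to one cycle long:

```python
from math import ceil

def is_valid(schedule, periods):
    """Check the real-period pinwheel condition on a cyclic schedule:
    task i with period a_i must appear >= l times in every window of
    ceil(l * a_i) consecutive days (windows up to one cycle only)."""
    T = len(schedule)
    for i, a in enumerate(periods):
        l = 1
        while ceil(l * a) <= T:
            w = ceil(l * a)
            for s in range(T):           # every cyclic window start
                window = [schedule[(s + d) % T] for d in range(w)]
                if window.count(i) < l:
                    return False
            l += 1
    return True

# Periods (2, 4, 4): repeating [0, 1, 0, 2] performs task 0 every
# 2 days and tasks 1 and 2 every 4 days.
print(is_valid([0, 1, 0, 2], [2, 4, 4]))  # -> True
```

With a real period such as 2.5, the window lengths ceil(2.5), ceil(5), ceil(7.5), ... = 3, 5, 8, ... impose genuinely different constraints for each l, which is what distinguishes the extension from the integer case.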
|
2026-01-16T00:00:00-05:00
|
Relative Scaling Laws for LLMs
|
arXiv:2510.24626v2 Announce Type: replace Abstract: Scaling laws describe how language models improve with additional data, parameters, and compute. While widely used, they are typically measured on aggregate test sets. Aggregate evaluations yield clean trends but average over heterogeneous subpopulations, obscuring performance disparities. We introduce relative scaling laws, which track how performance gaps between test distributions evolve with scale rather than focusing solely on absolute error. Using 255 decoder-only Transformers trained under matched-compute (IsoFLOP) budgets from $10^{18}$--$10^{20}$ FLOPs on standard pretraining datasets, we find diverse trajectories: academic domains on MMLU converge toward parity; regional English dialects shift depending on population size; and clusters of AI risk behaviours split, with capability- and influence-related risks increasing during pretraining while adversarial risks do not. These results show that although scaling improves overall performance, it is not a universal equalizer. To support further study, we release all model checkpoints from this work to enable practitioners to measure relative alongside traditional scaling laws, in order to better prioritize robustness challenges in light of the bitter lesson.
|
https://arxiv.org/abs/2510.24626
|
Academic Papers
|
svg
|