| id | published | title | description | link | category | image |
|---|---|---|---|---|---|---|
| a5db7d7ba2a75c80edb0c270b0c4440956e2f8b5d5cf0c1548bacd1b3f5a48d0 | 2026-02-02T00:00:00-05:00 | Comparing and Contrasting DLWP Backbones on Navier-Stokes and Atmospheric Dynamics | arXiv:2407.14129v3 Announce Type: replace Abstract: A large number of Deep Learning Weather Prediction (DLWP) architectures -- based on various backbones, including U-Net, Transformer, Graph Neural Network, and Fourier Neural Operator (FNO) -- have demonstrated their potential at forecasting atmospheric states. However, due to differences in training protocols, forecast horizons, and data choices, it remains unclear which (if any) of these methods and architectures are most suitable for weather forecasting and for future model development. Here, we step back and provide a detailed empirical analysis, under controlled conditions, comparing and contrasting the most prominent DLWP models, along with their backbones. We accomplish this by predicting synthetic two-dimensional incompressible Navier-Stokes and real-world global weather dynamics. On synthetic data, we observe favorable performance of FNO, while on the real-world WeatherBench dataset, our results demonstrate the suitability of ConvLSTM and SwinTransformer for short-to-mid-ranged forecasts. For long-ranged weather rollouts of up to 50 years, we observe superior stability and physical soundness in architectures that formulate a spherical data representation, i.e., GraphCast and Spherical FNO. The code is available at https://github.com/amazon-science/dlwp-benchmark. | https://arxiv.org/abs/2407.14129 | Academic Papers | svg |
| 1fc8bb7bda512e436d86e5e030ddc498f0eb3db6efe1733e23bebf0c5800de1c | 2026-02-02T00:00:00-05:00 | Are Pose Estimators Ready for the Open World? STAGE: A GenAI Toolkit for Auditing 3D Human Pose Estimators | arXiv:2408.16536v2 Announce Type: replace Abstract: For safety-critical applications, it is crucial to audit 3D human pose estimators before deployment. Will the system break down if the weather or the clothing changes? Is it robust regarding gender and age? To answer these questions and more, we need controlled studies with images that differ in a single attribute, but real benchmarks cannot provide such pairs. We thus present STAGE, a GenAI data toolkit for auditing 3D human pose estimators. For STAGE, we develop the first GenAI image creator with accurate 3D pose control and propose a novel evaluation strategy to isolate and quantify the effects of single factors such as gender, ethnicity, age, clothing, location, and weather. Enabled by STAGE, we generate a series of benchmarks to audit, for the first time, the sensitivity of popular pose estimators towards such factors. Our results show that natural variations can severely degrade pose estimator performance, raising doubts about their readiness for open-world deployment. We aim to highlight these robustness issues and establish STAGE as a benchmark to quantify them. | https://arxiv.org/abs/2408.16536 | Academic Papers | svg |
| e426b02fbdcc7cbd05c312a13b726956e4f988a91d84d63fe3802fcae23e7c0a | 2026-02-02T00:00:00-05:00 | FC-KAN: Function Combinations in Kolmogorov-Arnold Networks | arXiv:2409.01763v4 Announce Type: replace Abstract: In this paper, we introduce FC-KAN, a Kolmogorov-Arnold Network (KAN) that leverages combinations of popular mathematical functions such as B-splines, wavelets, and radial basis functions on low-dimensional data through element-wise operations. We explore several methods for combining the outputs of these functions, including sum, element-wise product, the addition of sum and element-wise product, representations of quadratic and cubic functions, concatenation, linear transformation of the concatenated output, and others. In our experiments, we compare FC-KAN with a multi-layer perceptron network (MLP) and other existing KANs, such as BSRBF-KAN, EfficientKAN, FastKAN, and FasterKAN, on the MNIST and Fashion-MNIST datasets. Two variants of FC-KAN, which use a combination of outputs from B-splines and Difference of Gaussians (DoG) and from B-splines and linear transformations in the form of a quadratic function, overall outperformed the other models on average over 5 independent training runs. We expect that FC-KAN can leverage function combinations to design future KANs. Our repository is publicly available at: https://github.com/hoangthangta/FC_KAN. | https://arxiv.org/abs/2409.01763 | Academic Papers | svg |
| 4d41355a0aa0b2e3646dddf67643174b4913ef83576fb05e5cbd0f7cdb0d7dbb | 2026-02-02T00:00:00-05:00 | Unveiling and Mitigating Bias in Large Language Model Recommendations: A Path to Fairness | arXiv:2409.10825v5 Announce Type: replace Abstract: Large Language Model (LLM)-based recommendation systems excel in delivering comprehensive suggestions by deeply analyzing content and user behavior. However, they often inherit biases from skewed training data, favoring mainstream content while underrepresenting diverse or non-traditional options. This study explores the interplay between bias and LLM-based recommendation systems, focusing on music, song, and book recommendations across diverse demographic and cultural groups. This paper analyzes bias in LLM-based recommendation systems across multiple models (GPT, LLaMA, and Gemini), revealing its deep and pervasive impact on outcomes. Intersecting identities and contextual factors, like socioeconomic status, further amplify biases, complicating fair recommendations across diverse groups. Our findings reveal that bias in these systems is deeply ingrained, yet even simple interventions like prompt engineering can significantly reduce it. We further propose a retrieval-augmented generation strategy to mitigate bias more effectively. Numerical experiments validate these strategies, demonstrating both the pervasive nature of bias and the impact of the proposed solutions. | https://arxiv.org/abs/2409.10825 | Academic Papers | svg |
| 7d455b36b16a894cbb606473ac7f57291846f6c7529a98092e084b25210c4cca | 2026-02-02T00:00:00-05:00 | Impacts of aspect ratio on task accuracy in parallel coordinates | arXiv:2409.12540v2 Announce Type: replace Abstract: Parallel coordinates plots (PCPs) are a widely used visualization method, particularly for exploratory analysis. Previous studies show that PCPs perform much more poorly for estimating positive correlation than for estimating negative correlation, but it is not clear if this is affected by the aspect ratio (AR) of the axes pairs. In this paper, we present the results from an evaluation of the effect of the aspect ratio of axes in static (non-interactive) PCPs for two tasks: a) linear correlation estimation and b) value tracing. For both tasks we find strong evidence that AR influences accuracy, including ARs greater than 1:1 being much more performant for estimation of positive correlations. We provide a set of recommendations for visualization designers using PCPs for correlation or value-tracing tasks, based on the data characteristics and expected use cases. | https://arxiv.org/abs/2409.12540 | Academic Papers | svg |
| ea83138d4b4eb8695724affa1f4f5bdcce3834b029d8772a9b635b359bf998e6 | 2026-02-02T00:00:00-05:00 | ARB-LLM: Alternating Refined Binarizations for Large Language Models | arXiv:2410.03129v3 Announce Type: replace Abstract: Large Language Models (LLMs) have greatly pushed forward advancements in natural language processing, yet their high memory and computational demands hinder practical deployment. Binarization, as an effective compression technique, can shrink model weights to just 1 bit, significantly reducing the high demands on computation and memory. However, current binarization methods struggle to narrow the distribution gap between binarized and full-precision weights, while also overlooking the column deviation in LLM weight distribution. To tackle these issues, we propose ARB-LLM, a novel 1-bit post-training quantization (PTQ) technique tailored for LLMs. To narrow the distribution shift between binarized and full-precision weights, we first design an alternating refined binarization (ARB) algorithm to progressively update the binarization parameters, which significantly reduces the quantization error. Moreover, considering the pivot role of calibration data and the column deviation in LLM weights, we further extend ARB to ARB-X and ARB-RC. In addition, we refine the weight partition strategy with column-group bitmap (CGB), which further enhances performance. Equipping ARB-X and ARB-RC with CGB, we obtain ARB-LLM$_\text{X}$ and ARB-LLM$_\text{RC}$ respectively, which significantly outperform state-of-the-art (SOTA) binarization methods for LLMs. As a binary PTQ method, our ARB-LLM$_\text{RC}$ is the first to surpass FP16 models of the same size. The code and models will be available at https://github.com/ZHITENGLI/ARB-LLM. | https://arxiv.org/abs/2410.03129 | Academic Papers | svg |
| 610bda63311f7027744bd4b32a5d43e0df3f4e8e73a12cefa82877643212678e | 2026-02-02T00:00:00-05:00 | Policies for Fair Exchanges of Resources | arXiv:2410.21214v2 Announce Type: replace Abstract: People increasingly use digital platforms to exchange resources in accordance with some policies stating what resources users offer and what they require in return. In this paper, we propose a formal model of these environments, focussing on how users' policies are defined and enforced, so ensuring that malicious users cannot take advantage of honest ones. To that end, we introduce the declarative policy language MuAC and equip it with a formal semantics. To determine if a resource exchange is fair, i.e., if it respects the MuAC policies in force, we introduce the non-standard logic MuACL that combines non-linear, linear and contractual aspects, and prove it decidable. Notably, the operator for contractual implication of MuACL is not expressible in linear logic. We define a semantics preserving compilation of MuAC policies into MuACL, thus establishing that exchange fairness is reduced to finding a proof in MuACL. Finally, we show how this approach can be put to work on a blockchain to exchange non-fungible tokens. | https://arxiv.org/abs/2410.21214 | Academic Papers | svg |
| acf42376bc1111e9b84c5bec0020364124b602667aeed5ab0dc7ff1f4515fc8d | 2026-02-02T00:00:00-05:00 | A spatiotemporal fused network considering electrode spatial topology and time-window transition for MDD detection | arXiv:2411.08521v4 Announce Type: replace Abstract: Recently, researchers have begun to experiment with deep learning-based methods for detecting major depressive disorder (MDD) using electroencephalogram (EEG) signals in search of a more objective means of diagnosis. However, existing spatiotemporal feature extraction methods only consider the functional correlation between multiple electrodes and temporal correlation of EEG signals, ignoring the spatial position connection information between electrodes and the continuity between time windows, which reduces the model's feature extraction capabilities. To address this issue, a Spatiotemporal fused network for MDD detection with Electrode spatial Topology and adjacent TIME-window transition information (SET-TIME) is proposed in this study. SET-TIME is composed of a common feature extractor, a secondary time-correlation feature extractor, and a domain adaptation (DA) module: the first extractor obtains the temporal and spatial features, the second mines the correlation between multiple time windows, and the DA module enhances cross-subject detection capability. The experimental results of 10-fold cross-validation show that the proposed SET-TIME method outperforms the state-of-the-art (SOTA) method by achieving MDD detection accuracies of 92.00% and 94.00% on the public datasets PRED+CT and MODMA, respectively. Ablation experiments demonstrate the effectiveness of the multiple modules in SET-TIME, which assist in MDD detection by exploring the intrinsic spatiotemporal information of EEG signals. | https://arxiv.org/abs/2411.08521 | Academic Papers | svg |
| 5e310f3db9f44ac70b234d954a220c8b4cad5f5b56cb563ec6ce0fff5a873f91 | 2026-02-02T00:00:00-05:00 | Strengthening False Information Propagation Detection: Leveraging SVM and Sophisticated Text Vectorization Techniques in comparison to BERT | arXiv:2411.12703v4 Announce Type: replace Abstract: The rapid spread of misinformation, particularly through online platforms, underscores the urgent need for reliable detection systems. This study explores the utilization of machine learning and natural language processing, specifically Support Vector Machines (SVM) and BERT, to detect fake news. We employ three distinct text vectorization methods for SVM: Term Frequency Inverse Document Frequency (TF-IDF), Word2Vec, and Bag of Words (BoW), evaluating their effectiveness in distinguishing between genuine and fake news. Additionally, we compare these methods against the transformer large language model, BERT. Our comprehensive approach includes detailed preprocessing steps, rigorous model implementation, and thorough evaluation to determine the most effective techniques. The results demonstrate that while BERT achieves superior accuracy with 99.98% and an F1-score of 0.9998, the SVM model with a linear kernel and BoW vectorization also performs exceptionally well, achieving 99.81% accuracy and an F1-score of 0.9980. These findings highlight that, despite BERT's superior performance, SVM models with BoW and TF-IDF vectorization methods come remarkably close, offering highly competitive performance with the advantage of lower computational requirements. | https://arxiv.org/abs/2411.12703 | Academic Papers | svg |
| dcd98ddb9141f2441fb55972dd58f1603c44a6a1dc66e80985664f692efb63da | 2026-02-02T00:00:00-05:00 | SilentWood: Private Inference Over Gradient-Boosting Decision Forests | arXiv:2411.15494v2 Announce Type: replace Abstract: Gradient boosting decision forests, as used by XGBoost or AdaBoost, offer higher accuracy and lower training times than decision trees for large datasets. Protocols for private inference over decision trees can be used to preserve the privacy of the input data as well as the privacy of the trees. However, naively extending private inference over decision trees to private inference over decision forests by replicating the protocols leads to impractical running times. In this paper, we propose an efficient private decision inference protocol using homomorphic encryption. We present several optimizations that identify and then remove (approximate) duplication between the trees in a forest, thereby achieving significant improvements in communication and computation cost over the naive approach. To the best of our knowledge, we present the first private inference protocol for highly scalable gradient boosting decision forests. Our protocol (SilentWood) achieves inference times up to 42.5x faster than the baseline of running the RCC-PDTE protocol by Mahdavi et al. in parallel, up to 27.8x faster than Zama's Concrete ML XGBoost, and 2.94x faster than SoK-GGG's two-party garbled circuit protocol. | https://arxiv.org/abs/2411.15494 | Academic Papers | svg |
| 1dac54917b3a0faccd652a86643314c765353a44ce12bce43a22a1daf9997918 | 2026-02-02T00:00:00-05:00 | Numerical analysis of a constrained strain energy minimization problem | arXiv:2411.19089v2 Announce Type: replace Abstract: We consider a setting in which an evolving surface is implicitly characterized as the zero level of a level set function. Such an implicit surface does not encode any information about the path of a single point on the evolving surface. In the literature different approaches for determining a velocity that induces corresponding paths of points on the surface have been proposed. One of these is based on minimization of the strain energy functional. This then leads to a constrained minimization problem, which has a corresponding equivalent formulation as a saddle point problem. The main topic of this paper is a detailed analysis of this saddle point problem and of a finite element discretization of this problem. We derive well-posedness results for the continuous and discrete problems and optimal error estimates for a finite element discretization that uses standard $H^1$-conforming finite element spaces. | https://arxiv.org/abs/2411.19089 | Academic Papers | svg |
| 0efc8e213c3e37ee52d6cfa0420105d1bb4e4cefbe31cadcef52d78653b511a4 | 2026-02-02T00:00:00-05:00 | 2DMamba: Efficient State Space Model for Image Representation with Applications on Giga-Pixel Whole Slide Image Classification | arXiv:2412.00678v3 Announce Type: replace Abstract: Efficiently modeling large 2D contexts is essential for various fields including Giga-Pixel Whole Slide Imaging (WSI) and remote sensing. Transformer-based models offer high parallelism but face challenges due to their quadratic complexity for handling long sequences. Recently, Mamba introduced a selective State Space Model (SSM) with linear complexity and high parallelism, enabling effective and efficient modeling of wide context in 1D sequences. However, extending Mamba to vision tasks, which inherently involve 2D structures, results in spatial discrepancies due to the limitations of 1D sequence processing. On the other hand, current 2D SSMs inherently model 2D structures but they suffer from prohibitively slow computation due to the lack of efficient parallel algorithms. In this work, we propose 2DMamba, a novel 2D selective SSM framework that incorporates the 2D spatial structure of images into Mamba, with a highly optimized hardware-aware operator, achieving both spatial continuity and computational efficiency. We validate the versatility of our approach on both WSIs and natural images. Extensive experiments on 10 public datasets for WSI classification and survival analysis show that 2DMamba improves up to 2.48% in AUC, 3.11% in F1 score, 2.47% in accuracy and 5.52% in C-index. Additionally, integrating our method with VMamba for natural imaging yields 0.5 to 0.7 improvements in mIoU on the ADE20k semantic segmentation dataset, and 0.2% accuracy improvement on ImageNet-1K classification dataset. Our code is available at https://github.com/AtlasAnalyticsLab/2DMamba. | https://arxiv.org/abs/2412.00678 | Academic Papers | svg |
| 1e67793dbcfa267f712d3910ab6f124e9f893eb67ba12bc7d4b6686cf99c20bc | 2026-02-02T00:00:00-05:00 | The Narrow Gate: Localized Image-Text Communication in Native Multimodal Models | arXiv:2412.06646v4 Announce Type: replace Abstract: Recent advances in multimodal training have significantly improved the integration of image understanding and generation within a unified model. This study investigates how vision-language models (VLMs) handle image-understanding tasks, focusing on how visual information is processed and transferred to the textual domain. We compare native multimodal VLMs, models trained from scratch on multimodal data to generate both text and images, and non-native multimodal VLMs, models adapted from pre-trained large language models or capable of generating only text, highlighting key differences in information flow. We find that in native multimodal VLMs, image and text embeddings are more separated within the residual stream. Moreover, VLMs differ in how visual information reaches text: non-native multimodal VLMs exhibit a distributed communication pattern, where information is exchanged through multiple image tokens, whereas models trained natively for joint image and text generation tend to rely on a single post-image token that acts as a narrow gate for visual information. We show that ablating this single token significantly deteriorates image-understanding performance, whereas targeted, token-level interventions reliably steer image semantics and downstream text with fine-grained control. | https://arxiv.org/abs/2412.06646 | Academic Papers | svg |
| 3ffee32c68e10359e699956e7a039513502fd2c8426a8be1b33ec55a0ab73f63 | 2026-02-02T00:00:00-05:00 | A Library for Learning Neural Operators | arXiv:2412.10354v5 Announce Type: replace Abstract: We present NeuralOperator, an open-source Python library for operator learning. Neural operators generalize neural networks to maps between function spaces instead of finite-dimensional Euclidean spaces. They can be trained and inferenced on input and output functions given at various discretizations, satisfying discretization convergence properties. Part of the official PyTorch Ecosystem, NeuralOperator provides all the tools for training and deploying neural operator models, as well as developing new ones, in a high-quality, tested, open-source package. It combines cutting-edge models and customizability with a gentle learning curve and simple user interface for newcomers. | https://arxiv.org/abs/2412.10354 | Academic Papers | svg |
| 366928823e5df6ecc9a40447dc7cedef350ea4b12fbf7dbd2ab6c9b22e71ac91 | 2026-02-02T00:00:00-05:00 | Softplus Attention with Re-weighting Boosts Length Extrapolation in Large Language Models | arXiv:2501.13428v5 Announce Type: replace Abstract: Large language models have achieved remarkable success in recent years, primarily due to self-attention. However, traditional Softmax attention suffers from numerical instability and reduced performance as the number of inference tokens increases. This work addresses these issues by proposing a new design principle for attention, viewing it as a two-stage process. The first stage (normalisation) refines standard attention by replacing Softmax with the more numerically stable Softplus followed by $l_{1}$-normalisation. Furthermore, we introduce a dynamic scale factor based on invariance entropy. We show that this novel attention mechanism outperforms conventional Softmax attention and state-of-the-art Softmax-free alternatives. Our second proposal is to introduce a second processing stage (sharpening) which consists of a re-weighting mechanism that amplifies significant attentional weights while diminishing weaker ones. This enables the model to concentrate more effectively on relevant tokens, mitigating the attention sink phenomenon, and fundamentally improving length extrapolation. This novel, two-stage, replacement for self-attention is shown to ensure numerical stability and dramatically improve length extrapolation, maintaining a nearly constant validation loss at 16$\times$ the training length while achieving superior results on challenging long-context retrieval tasks and downstream benchmarks. Furthermore, symbolic regression experiments demonstrate that our method enables models to recover Newton's gravitational law from orbital trajectory sequences, providing evidence that appropriate attention mechanisms are crucial for foundation models to develop genuine physical world models. | https://arxiv.org/abs/2501.13428 | Academic Papers | svg |
| 9aee00427a36b56789285fdbedae5b7eeadfcebb400b8ce7ac307b4cb5e8be3d | 2026-02-02T00:00:00-05:00 | Understanding Transformer Optimization via Gradient Heterogeneity | arXiv:2502.00213v3 Announce Type: replace Abstract: Transformers are difficult to optimize with stochastic gradient descent (SGD) and largely rely on adaptive optimizers such as Adam. Despite their empirical success, the reasons behind Adam's superior performance over SGD remain poorly understood. In this study, we analyze the optimization of Transformer models through the lens of \emph{gradient heterogeneity}, defined as the variation in gradient norms across parameter blocks. We provide a theoretical analysis showing that gradient heterogeneity, together with Hessian heterogeneity, degrades the convergence of gradient-based methods such as SGD, while sign-based methods are substantially less sensitive to this effect. Adam's coordinate-wise normalization makes its update directions depend mainly on gradient signs, so Adam can be interpreted as a soft variant of SignSGD. Our analysis uses the fact that SGD and SignSGD follow steepest descent directions under different norms, and derives upper bounds on the iteration complexity with implications for learning rate scaling in SignSGD. We further investigate the origin of gradient heterogeneity in Transformer architectures and show that it is strongly influenced by the placement of layer normalization, with Post-LN architectures exhibiting particularly pronounced heterogeneity. Experimental results from fine-tuning Transformers in both NLP and vision domains validate our theoretical analysis. Code is available at https://github.com/tom4649/gradient-heterogeneity. | https://arxiv.org/abs/2502.00213 | Academic Papers | svg |
| aa1b97a3213c0f09d33ef2dc6074e475967d010b8341431cf7462ed183115e78 | 2026-02-02T00:00:00-05:00 | Sparsity-Guided Multi-Parameter Selection in $\ell_1$-Regularized Models via a Fixed-Point Proximity Approach | arXiv:2502.00655v2 Announce Type: replace Abstract: We study a regularization framework that combines a convex fidelity term with multiple $\ell_1$-based regularizers, each linked to a distinct linear transform. This multi-penalty model enhances flexibility in promoting structured sparsity. We analyze how the choice of regularization parameters governs the sparsity of solutions under the given transforms and derive a precise relationship between the parameters and resulting sparsity patterns. This insight enables the development of an iterative strategy for selecting parameters to achieve prescribed sparsity levels. A key computational challenge arises in practice: effective parameter tuning requires simultaneous access to the regularized solution and two auxiliary vectors derived from the sparsity analysis. To address this, we propose a fixed-point proximity algorithm that jointly computes all three vectors. Together with our theoretical characterization, this algorithm forms the basis of a practical multi-parameter selection scheme. Numerical experiments demonstrate that the proposed method reliably produces solutions with desired sparsity patterns and strong approximation accuracy. | https://arxiv.org/abs/2502.00655 | Academic Papers | svg |
| b7e3ecb1ae23c15d0551db1d75027c4caf0612e44df230fd129a578e5336ccda | 2026-02-02T00:00:00-05:00 | Preprocessing Disks for Convex Hulls, Revisited | arXiv:2502.03633v2 Announce Type: replace Abstract: In the preprocessing framework one is given a set of regions that one is allowed to preprocess to create some auxiliary structure such that when a realization of these regions is given, consisting of one point per region, this auxiliary structure can be used to reconstruct some desired output geometric structure more efficiently than would have been possible without preprocessing. Prior work showed that a set of $n$ unit disks of constant ply can be preprocessed in $O(n\log n)$ time such that the convex hull of any realization can be reconstructed in $O(n)$ time. (This prior work focused on triangulations and the convex hull was a byproduct.) In this work we show for the first time that we can reconstruct the convex hull in time proportional to the number of \emph{unstable} disks, which may be sublinear, and that such a running time is the best possible. Here a disk is called \emph{stable} if the combinatorial structure of the convex hull does not depend on the location of its realized point. The main tool by which we achieve our results is by using a supersequence as the auxiliary structure constructed in the preprocessing phase, that is we output a supersequence of the disks such that the convex hull of any realization is a subsequence. One advantage of using a supersequence as the auxiliary structure is that it allows us to decouple the preprocessing phase from the reconstruction phase in a stronger sense than was possible in previous work, resulting in two separate algorithmic problems which may be of independent interest. Finally, in the process of obtaining our results for convex hulls, we solve the corresponding problem of creating such supersequences for intervals in one dimension, yielding corresponding results for that case. | https://arxiv.org/abs/2502.03633 | Academic Papers | svg |
| 172b45ca99c5fd5eac53adaafc6fadf3c9efca6a2412e9c836c69cbc7b08bb12 | 2026-02-02T00:00:00-05:00 | FlashVideo: Flowing Fidelity to Detail for Efficient High-Resolution Video Generation | arXiv:2502.05179v4 Announce Type: replace Abstract: DiT models have achieved great success in text-to-video generation, leveraging their scalability in model capacity and data scale. High content and motion fidelity aligned with text prompts, however, often require large model parameters and a substantial number of function evaluations (NFEs). Realistic and visually appealing details are typically reflected in high-resolution outputs, further amplifying computational demands, especially for single-stage DiT models. To address these challenges, we propose a novel two-stage framework, FlashVideo, which strategically allocates model capacity and NFEs across stages to balance generation fidelity and quality. In the first stage, prompt fidelity is prioritized through a low-resolution generation process utilizing large parameters and sufficient NFEs to enhance computational efficiency. The second stage achieves a nearly straight ODE trajectory between low and high resolutions via flow matching, effectively generating fine details and fixing artifacts with minimal NFEs. To ensure a seamless connection between the two independently trained stages during inference, we carefully design degradation strategies during the second-stage training. Quantitative and visual results demonstrate that FlashVideo achieves state-of-the-art high-resolution video generation with superior computational efficiency. Additionally, the two-stage design enables users to preview the initial output and accordingly adjust the prompt before committing to full-resolution generation, thereby significantly reducing computational costs and wait times as well as enhancing commercial viability. | https://arxiv.org/abs/2502.05179 | Academic Papers | svg |
| 461e749189cb7390d6eaa96cc5fbefe3a2ad3fcae82443ada9d56f1ab909905d | 2026-02-02T00:00:00-05:00 | Causal Imitation Learning under Expert-Observable and Expert-Unobservable Confounding | arXiv:2502.07656v2 Announce Type: replace Abstract: We propose a general framework for causal Imitation Learning (IL) with hidden confounders, which subsumes several existing settings. Our framework accounts for two types of hidden confounders: (a) variables observed by the expert but not by the imitator, and (b) confounding noise hidden from both. By leveraging trajectory histories as instruments, we reformulate causal IL in our framework into a Conditional Moment Restriction (CMR) problem. We propose DML-IL, an algorithm that solves this CMR problem via instrumental variable regression, and upper bound its imitation gap. Empirical evaluation on continuous state-action environments, including Mujoco tasks, demonstrates that DML-IL outperforms existing causal IL baselines. | https://arxiv.org/abs/2502.07656 | Academic Papers | svg |
| 6e9bd4d7f07dad0d13ff799e28c31727e3265f572af7c14ae567eff16225a916 | 2026-02-02T00:00:00-05:00 | Ambig-SWE: Interactive Agents to Overcome Underspecificity in Software Engineering | arXiv:2502.13069v2 Announce Type: replace Abstract: AI agents are increasingly being deployed to automate tasks, often based on underspecified user instructions. Making unwarranted assumptions to compensate for the missing information and failing to ask clarifying questions can lead to suboptimal outcomes, safety risks due to tool misuse, and wasted computational resources. In this work, we study the ability of LLM agents to handle underspecified instructions in interactive code generation settings by evaluating proprietary and open-weight models on their performance across three key steps: (a) detecting underspecificity, (b) asking targeted clarification questions, and (c) leveraging the interaction to improve performance in underspecified scenarios. We introduce Ambig-SWE, an underspecified variant of SWE-Bench Verified, specifically designed to evaluate agent behavior under ambiguity and interaction. Our findings reveal that models struggle to distinguish between well-specified and underspecified instructions. However, when models interact for underspecified inputs, they effectively obtain vital information from the user leading to significant improvements in performance, up to 74% over the non-interactive settings, underscoring the value of effective interaction. Our study highlights critical gaps in how current state-of-the-art models handle missing information in complex software engineering tasks and structures the evaluation into distinct steps to enable targeted improvements. | https://arxiv.org/abs/2502.13069 | Academic Papers | svg |
4391589e1312fc6cf320144504585a8d2fd4227ab3ce56c39b86e1ebdd747311
|
2026-02-02T00:00:00-05:00
|
PSDNorm: Test-Time Temporal Normalization for Deep Learning in Sleep Staging
|
arXiv:2503.04582v3 Announce Type: replace Abstract: Distribution shift poses a significant challenge in machine learning, particularly in biomedical applications using data collected across different subjects, institutions, and recording devices, such as sleep data. While existing normalization layers (BatchNorm, LayerNorm, and InstanceNorm) help mitigate distribution shifts, when applied over the time dimension they ignore the dependencies and auto-correlation inherent to the vector coefficients they normalize. In this paper, we propose PSDNorm, which leverages Monge mapping and temporal context to normalize feature maps in deep learning models for signals. Evaluations with architectures based on U-Net or transformer backbones, trained on 10K subjects across 10 datasets, show that PSDNorm achieves state-of-the-art performance on unseen left-out datasets while being more robust to data scarcity.
|
https://arxiv.org/abs/2503.04582
|
Academic Papers
|
svg
|
7865564a9294fb5b4f15b57873e75d08bb921164cabda713212c1e5280886b84
|
2026-02-02T00:00:00-05:00
|
SPEED: Scalable, Precise, and Efficient Concept Erasure for Diffusion Models
|
arXiv:2503.07392v4 Announce Type: replace Abstract: Erasing concepts from large-scale text-to-image (T2I) diffusion models has become increasingly crucial due to the growing concerns over copyright infringement, offensive content, and privacy violations. In scalable applications, fine-tuning-based methods are time-consuming to precisely erase multiple target concepts, while real-time editing-based methods often degrade the generation quality of non-target concepts due to conflicting optimization objectives. To address this dilemma, we introduce SPEED, an efficient concept erasure approach that directly edits model parameters. SPEED searches for a null space, a model editing space where parameter updates do not affect non-target concepts, to achieve scalable and precise erasure. To facilitate accurate null space optimization, we incorporate three complementary strategies: Influence-based Prior Filtering (IPF) to selectively retain the most affected non-target concepts, Directed Prior Augmentation (DPA) to enrich the filtered retain set with semantically consistent variations, and Invariant Equality Constraints (IEC) to preserve key invariants during the T2I generation process. Extensive evaluations across multiple concept erasure tasks demonstrate that SPEED consistently outperforms existing methods in non-target preservation while achieving efficient and high-fidelity concept erasure, successfully erasing 100 concepts within only 5 seconds. Our code and models are available at: https://github.com/Ouxiang-Li/SPEED.
|
https://arxiv.org/abs/2503.07392
|
Academic Papers
|
svg
|
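The SPEED abstract above hinges on null-space model editing: an update direction is projected onto the null space of the retained concepts' features, so the edit cannot change those concepts' outputs. A small illustrative sketch of that projection step, an assumption-laden simplification of the paper's full method (which adds influence-based filtering, prior augmentation, and equality constraints):

```python
import numpy as np

rng = np.random.default_rng(1)
d = 16
K = rng.normal(size=(5, d))   # feature rows of non-target (retain) concepts
delta = rng.normal(size=d)    # raw edit direction for a weight vector

# Orthogonal projector onto null(K): P = I - K^T (K K^T)^{-1} K.
# Any vector P @ v satisfies K @ (P @ v) = 0, so the edit leaves
# the retained concepts' linear responses unchanged.
P = np.eye(d) - K.T @ np.linalg.inv(K @ K.T) @ K
delta_null = P @ delta

# Residual response of retained concepts under the projected edit.
residual = np.abs(K @ delta_null).max()
```

The residual is zero up to floating-point error, which is the sense in which a null-space update "does not affect non-target concepts."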
3d10967e69163beaddcb9fb1578e9fc1076db82dbccb33f431c76fbf618e4a6a
|
2026-02-02T00:00:00-05:00
|
FaVChat: Hierarchical Prompt-Query Guided Facial Video Understanding with Data-Efficient GRPO
|
arXiv:2503.09158v4 Announce Type: replace Abstract: Existing video large language models (VLLMs) primarily leverage prompt-agnostic visual encoders, which extract untargeted facial representations without awareness of the queried information, leading to the loss of task-critical cues. To address this challenge, we propose FaVChat, the first VLLM designed for reasoning over subtle visual and dynamic facial cues. FaVChat introduces a hierarchical, prompt-guided visual feature extraction framework that emphasizes question-relevant information at three complementary levels. These multi-level features are dynamically fused and injected into the LLM, enabling more accurate reasoning over facial details. To further improve learning efficiency under data scarcity, we propose Data-Efficient GRPO, a reinforcement learning strategy that iteratively identifies high-utility samples and maximizes the contribution of each instance via per-instance utility estimation, substantially enhancing performance gains under limited supervision. We construct a large-scale benchmark dataset, FaVChat-170K, comprising approximately 60K high-quality facial videos and 170K question-answer pairs focusing on fine-grained facial details. Extensive experiments, including zero-shot evaluations on four facial understanding tasks, demonstrate that FaVChat consistently outperforms existing VLLMs.
|
https://arxiv.org/abs/2503.09158
|
Academic Papers
|
svg
|
963f2c30dd8930e0994a9374d21abb6ffcf15435af077e09ddaf7245f724aea5
|
2026-02-02T00:00:00-05:00
|
Ethical AI for Young Digital Citizens: A Call to Action on Privacy Governance
|
arXiv:2503.11947v4 Announce Type: replace Abstract: The rapid expansion of Artificial Intelligence (AI) in digital platforms used by youth has created significant challenges related to privacy, autonomy, and data protection. While AI-driven personalization offers enhanced user experiences, it often operates without clear ethical boundaries, leaving young users vulnerable to data exploitation and algorithmic biases. This paper presents a call to action for ethical AI governance, advocating for a structured framework that ensures youth-centred privacy protections, transparent data practices, and regulatory oversight. We outline key areas requiring urgent intervention, including algorithmic transparency, privacy education, parental data-sharing ethics, and accountability measures. Through this approach, we seek to empower youth with greater control over their digital identities and propose actionable strategies for policymakers, AI developers, and educators to build a fairer and more accountable AI ecosystem.
|
https://arxiv.org/abs/2503.11947
|
Academic Papers
|
svg
|
f5b272eac908f4811f010ba6fa85943bc3c6dbf1817ac004097c42d2ee9b6fbf
|
2026-02-02T00:00:00-05:00
|
FactSelfCheck: Fact-Level Black-Box Hallucination Detection for LLMs
|
arXiv:2503.17229v3 Announce Type: replace Abstract: Large Language Models (LLMs) frequently generate hallucinated content, posing significant challenges for applications where factuality is crucial. While existing hallucination detection methods typically operate at the sentence level or passage level, we propose FactSelfCheck, a novel zero-resource black-box sampling-based method that enables fine-grained fact-level detection. Our approach represents text as interpretable knowledge graphs consisting of facts in the form of triples, providing clearer insights into content factuality than traditional approaches. Through analyzing factual consistency across multiple LLM responses, we compute fine-grained hallucination scores without requiring external resources or training data. Our evaluation demonstrates that FactSelfCheck performs competitively with leading sentence-level sampling-based methods while providing more detailed and interpretable insights. Most notably, our fact-level approach significantly improves hallucination correction, achieving a 35.5% increase in factual content compared to the baseline, while sentence-level SelfCheckGPT yields only a 10.6% improvement. The granular nature of our detection enables more precise identification and correction of hallucinated content. Additionally, we contribute FavaMultiSamples, a novel dataset that addresses a gap in the field by providing the research community with a second dataset for evaluating sampling-based methods.
|
https://arxiv.org/abs/2503.17229
|
Academic Papers
|
svg
|
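The FactSelfCheck abstract above describes a concrete scoring scheme: decompose a response into (subject, relation, object) triples, then score each fact by how consistently it appears across sampled responses. A toy sketch of that fact-level consistency score; the whitespace-based triple "extractor" and the set-membership consistency check are stand-in assumptions, since the paper uses an LLM for both steps:

```python
def extract_triples(text):
    # Hypothetical extractor: treats each three-token sentence like
    # "Paris capital_of France." as one (subject, relation, object) triple.
    triples = set()
    for sent in text.split("."):
        parts = sent.split()
        if len(parts) == 3:
            triples.add(tuple(parts))
    return triples

def hallucination_scores(main_response, sampled_responses):
    """Score each fact in main_response by how rarely the samples support it:
    1.0 = never supported (likely hallucinated), 0.0 = always supported."""
    facts = extract_triples(main_response)
    sample_triples = [extract_triples(s) for s in sampled_responses]
    return {
        fact: 1.0 - sum(fact in st for st in sample_triples)
                    / max(len(sample_triples), 1)
        for fact in facts
    }
```

For example, a fact asserted in the main response but absent from every sampled response receives a score of 1.0, flagging it for correction at the level of a single triple rather than a whole sentence.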
5f726e386fc1061d150fe4f1a9fc1040b366ec612efe750019f242795cc9a731
|
2026-02-02T00:00:00-05:00
|
AccidentSim: Generating Vehicle Collision Videos with Physically Realistic Collision Trajectories from Real-World Accident Reports
|
arXiv:2503.20654v2 Announce Type: replace Abstract: Collecting real-world vehicle accident videos for autonomous driving research is challenging due to their rarity and complexity. While existing driving video generation methods may produce visually realistic videos, they often fail to deliver physically realistic simulations because they lack the capability to generate accurate post-collision trajectories. In this paper, we introduce AccidentSim, a novel framework that generates physically realistic vehicle collision videos by extracting and utilizing the physical clues and contextual information available in real-world vehicle accident reports. Specifically, AccidentSim leverages a reliable physical simulator to replicate post-collision vehicle trajectories from the physical and contextual information in the accident reports and to build a vehicle collision trajectory dataset. This dataset is then used to fine-tune a language model, enabling it to respond to user prompts and predict physically consistent post-collision trajectories across various driving scenarios based on user descriptions. Finally, we employ Neural Radiance Fields (NeRF) to render high-quality backgrounds, merging them with the foreground vehicles that exhibit physically realistic trajectories to generate vehicle collision videos. Experimental results demonstrate that the videos produced by AccidentSim excel in both visual and physical authenticity.
|
https://arxiv.org/abs/2503.20654
|
Academic Papers
|
svg
|
e9ddf5b8eb1caac3cac3972eb9d2fc12f1281ca5f7bd9f1b141be05e97cf7d43
|
2026-02-02T00:00:00-05:00
|
Integrating Fourier Neural Operators with Diffusion Models to improve Spectral Representation of Synthetic Earthquake Ground Motion Response
|
arXiv:2504.00757v2 Announce Type: replace Abstract: Nuclear reactor buildings must be designed to withstand the dynamic load induced by strong ground motion earthquakes. For this reason, their structural behavior must be assessed in multiple realistic ground shaking scenarios (e.g., the Maximum Credible Earthquake). However, earthquake catalogs and recorded seismograms may not always be available in the region of interest. Therefore, synthetic earthquake ground motion is progressively being employed, although with some due precautions: earthquake physics is sometimes not well enough understood to be accurately reproduced with numerical tools, and the underlying epistemic uncertainties lead to prohibitive computational costs related to model calibration. In this study, we propose an AI physics-based approach to generate synthetic ground motion, based on the combination of a neural operator that approximates the elastodynamics Green's operator in arbitrary source-geology setups, enhanced by a denoising diffusion probabilistic model. The diffusion model is trained to correct the ground motion time series generated by the neural operator. Our results show that such an approach promisingly enhances the realism of the generated synthetic seismograms, with frequency biases and Goodness-Of-Fit (GOF) scores being improved by the diffusion model. This indicates that the latter is capable of mitigating the mid-frequency spectral falloff observed in the time series generated by the neural operator. Our method showcases fast and cheap inference in different site and source conditions.
|
https://arxiv.org/abs/2504.00757
|
Academic Papers
|
svg
|
75bf3c82762a8a5ee55929447d813100e6b6ac53e8d324604ba59d139b3f7701
|
2026-02-02T00:00:00-05:00
|
Split Federated Learning for Low-Altitude Wireless Networks: Joint Sensing, Communication, Computation, and Control Co-design
|
arXiv:2504.01443v2 Announce Type: replace Abstract: Unmanned aerial vehicles (UAVs) with integrated sensing, communication, computation and control (ISC3) capabilities have become key enablers of next-generation wireless networks. Federated edge learning (FEL) leverages UAVs as mobile learning agents to collect data, perform local model updates, and contribute to global model aggregation. However, existing UAV-assisted FEL systems face critical challenges, including excessive computational demands, privacy risks, and inefficient communication, primarily due to the requirement for full-model training on resource-constrained UAVs. To address the aforementioned challenges, we propose Split Federated Learning for UAV-Enabled ISC3 (SFLSC3), a novel framework that integrates split federated learning (SFL) into UAV-assisted FEL. SFLSC3 optimally partitions model training between UAVs and edge servers, significantly reducing UAVs' computational burden while preserving data privacy. We conduct a theoretical analysis of UAV deployment, split point selection, data sensing volume, and client-side aggregation frequency, deriving closed-form upper bounds for the convergence gap. Based on these insights, we formulate a joint optimization problem to minimize the delay required to achieve a target model accuracy. Given the non-convex nature of the problem, we develop a low-complexity algorithm to efficiently determine UAV deployment, split point selection, and communication frequency. Extensive simulations on a target motion recognition task validate the effectiveness of SFLSC3, demonstrating superior convergence and delay performance compared to baseline methods.
|
https://arxiv.org/abs/2504.01443
|
Academic Papers
|
svg
|
d031cd9155e7fb4a2c3424fd5157d8334a421b393e5e7b870f5284b31139629f
|
2026-02-02T00:00:00-05:00
|
CaLiV: LiDAR-to-Vehicle Calibration of Arbitrary Sensor Setups
|
arXiv:2504.01987v3 Announce Type: replace Abstract: In autonomous systems, sensor calibration is essential for safe and efficient navigation in dynamic environments. Accurate calibration is a prerequisite for reliable perception and planning tasks such as object detection and obstacle avoidance. Many existing LiDAR calibration methods require overlapping fields of view, while others use external sensing devices or postulate a feature-rich environment. In addition, Sensor-to-Vehicle calibration is not supported by the vast majority of calibration algorithms. In this work, we propose a novel target-based technique for extrinsic Sensor-to-Sensor and Sensor-to-Vehicle calibration of multi-LiDAR systems called CaLiV. This algorithm works for non-overlapping fields of view and does not require any external sensing devices. First, we apply motion to produce field of view overlaps and utilize a simple Unscented Kalman Filter to obtain vehicle poses. Then, we use the Gaussian mixture model-based registration framework GMMCalib to align the point clouds in a common calibration frame. Finally, we reduce the task of recovering the sensor extrinsics to a minimization problem. We show that both translational and rotational Sensor-to-Sensor errors can be solved accurately by our method. In addition, all Sensor-to-Vehicle rotation angles can also be calibrated with high accuracy. We validate the simulation results in real-world experiments. The code is open-source and available on https://github.com/TUMFTM/CaLiV.
|
https://arxiv.org/abs/2504.01987
|
Academic Papers
|
svg
|
a132dc8bcd29a63533b87c4603202457b53ab452d436aafcce9ab76d0f4fbf2d
|
2026-02-02T00:00:00-05:00
|
Decentralized Domain Generalization with Style Sharing: Formal Model and Convergence Analysis
|
arXiv:2504.06235v4 Announce Type: replace Abstract: Much of federated learning (FL) focuses on settings where local dataset statistics remain the same between training and testing. However, this assumption often does not hold in practice due to distribution shifts, motivating the development of domain generalization (DG) approaches that leverage source domain data to train models capable of generalizing to unseen target domains. In this paper, we are motivated by two major gaps in existing work on FL and DG: (1) the lack of formal mathematical analysis of DG objectives; and (2) DG research in FL being limited to the star-topology architecture. We develop Decentralized Federated Domain Generalization with Style Sharing ($\textit{StyleDDG}$), a decentralized DG algorithm which allows devices in a peer-to-peer network to achieve DG based on sharing style information inferred from their datasets. Additionally, we provide the first systematic approach to analyzing style-based DG training in decentralized networks. We cast existing centralized DG algorithms within our framework, and employ their formalisms to model $\textit{StyleDDG}$. We then obtain analytical conditions under which convergence of $\textit{StyleDDG}$ can be guaranteed. Through experiments on popular DG datasets, we demonstrate that $\textit{StyleDDG}$ can obtain significant improvements in accuracy across target domains with minimal communication overhead compared to baseline decentralized gradient methods.
|
https://arxiv.org/abs/2504.06235
|
Academic Papers
|
svg
|
663d715b65b283903fdaa53d6c2ce2b5e1c609f20599193822ed0247d7f381ff
|
2026-02-02T00:00:00-05:00
|
DeepGreen: Effective LLM-Driven Greenwashing Monitoring System Designed for Empirical Testing -- Evidence from China
|
arXiv:2504.07733v2 Announce Type: replace Abstract: Motivated by the emerging adoption of Large Language Models (LLMs) in economics and management research, this paper investigates whether LLMs can reliably identify corporate greenwashing narratives and, more importantly, whether and how the greenwashing signals extracted from textual disclosures can be used to empirically identify causal effects. To this end, this paper proposes DeepGreen, a dual-stage LLM-driven system for detecting potential corporate greenwashing in annual reports. Applied to 9369 A-share annual reports published between 2021 and 2023, DeepGreen attains high reliability in random-sample validation at both stages. An ablation experiment shows that Retrieval-Augmented Generation (RAG) reduces hallucinations, as compared to simply lengthening the input window. Empirical tests indicate that the greenwashing measure captured by DeepGreen reveals a positive relationship between greenwashing and environmental penalties; IV, PSM, and placebo tests enhance the robustness and causal interpretation of the empirical evidence. Further study suggests that the presence and number of green investors can weaken the positive correlation between greenwashing and penalties. Heterogeneity analysis shows that the positive greenwashing-penalty relationship is less significant in large corporations and in corporations that have accumulated green assets, indicating that these green assets may be exploited as a credibility shield for greenwashing. Our findings demonstrate that LLMs can standardize ESG oversight by providing early warnings and directing regulators' scarce attention toward the subsets of corporations where monitoring is most warranted.
|
https://arxiv.org/abs/2504.07733
|
Academic Papers
|
svg
|
f80bfc8a65a35c0e9a5dc625e0fffbec140a93df90b5138b9c510accd9d50e7e
|
2026-02-02T00:00:00-05:00
|
Location-Oriented Sound Event Localization and Detection with Spatial Mapping and Regression Localization
|
arXiv:2504.08365v3 Announce Type: replace Abstract: Sound Event Localization and Detection (SELD) combines Sound Event Detection (SED) with the corresponding Direction Of Arrival (DOA). Recently adopted event-oriented multi-track methods limit generality in polyphonic environments due to the restriction on the number of tracks. To enhance generality in polyphonic environments, we propose Spatial Mapping and Regression Localization for SELD (SMRL-SELD). SMRL-SELD segments the 3D spatial space, mapping it to a 2D plane, and a new regression localization loss is proposed to help the results converge toward the location of the corresponding event. SMRL-SELD is location-oriented, allowing the model to learn event features based on orientation. Thus, the method enables the model to process polyphonic sounds regardless of the number of overlapping events. We conducted experiments on the STARSS23 and STARSS22 datasets, and our proposed SMRL-SELD outperforms existing SELD methods both in overall evaluation and in polyphonic environments.
|
https://arxiv.org/abs/2504.08365
|
Academic Papers
|
svg
|
91c070c6f228112fddade3a21cb8997b2eadb162e5d581302bc96ec4a7a69bf5
|
2026-02-02T00:00:00-05:00
|
Detecting Instruction Fine-tuning Attacks using Influence Function
|
arXiv:2504.09026v3 Announce Type: replace Abstract: Instruction fine-tuning attacks pose a serious threat to large language models (LLMs) by subtly embedding poisoned examples in fine-tuning datasets, leading to harmful or unintended behaviors in downstream applications. Detecting such attacks is challenging because poisoned data is often indistinguishable from clean data, and prior knowledge of triggers or attack strategies is rarely available. We present a detection method that requires no prior knowledge of the attack. Our approach leverages influence functions under semantic transformation by comparing influence distributions before and after semantic inversions to identify critical poisons, defined as examples whose influence is strong and remains unchanged across transformations. We introduce a multi-transform ensemble approach that achieves F1 scores between 79.5 and 95.2 percent with precision between 66 and 100 percent on sentiment classification, significantly improving over single-transform methods. Our method generalizes to unseen transformation types with an F1 score of 86 percent through cross-category validation. We demonstrate effectiveness across multiple models, including T5-small and DeepSeek-Coder-1.3B, and across tasks such as sentiment classification and math reasoning. Removing a small fraction of detected poisons, between 1 and 3 percent of the data, restores model performance to near-clean levels. These results demonstrate the practicality of influence-based diagnostics for defending against instruction fine-tuning attacks in real-world large language model deployment. Artifact available at https://github.com/lijiawei20161002/Poison-Detection. Warning: this paper contains offensive data examples.
|
https://arxiv.org/abs/2504.09026
|
Academic Papers
|
svg
|
ddf42165187bffd4242f9d5a6fe117545ec13041252225693941a344bb1585d2
|
2026-02-02T00:00:00-05:00
|
Can you map it to English? The Role of Cross-Lingual Alignment in Multilingual Performance of LLMs
|
arXiv:2504.09378v3 Announce Type: replace Abstract: Large language models (LLMs) can answer prompts in many languages, despite being trained predominantly on English; yet, the mechanisms driving this generalization remain poorly understood. This work asks: How does an LLM's ability to align representations of non-English inputs to English impact its performance on natural language understanding (NLU) tasks? We study the role of representation alignment in instance-level task decisions, complementing prior analyses conducted both at the language level and task-independently. We introduce the Discriminative Alignment Index (DALI) to quantify instance-level alignment across 24 languages other than English and three distinct NLU tasks. Results show that incorrect NLU predictions are strongly associated with lower representation alignment with English in the model's middle layers. Through activation patching, we show that incorrect predictions in languages other than English can be fixed by patching their parallel English activations in the middle layers, thereby demonstrating the causal role of representation (mis)alignment in cross-lingual correctness.
|
https://arxiv.org/abs/2504.09378
|
Academic Papers
|
svg
|
bbc8ffe773303b0bddd2250c541cbcbe5e81d0844d2e00ffd191ef385534a50a
|
2026-02-02T00:00:00-05:00
|
What Matters in Linearizing Language Models? A Comparative Study of Architecture, Scale, and Task Adaptation
|
arXiv:2504.14366v3 Announce Type: replace Abstract: Linearization has emerged as a strategy for developing efficient language models (LMs). Starting from an existing Transformer-based LM, linearization replaces the attention component with computationally efficient subquadratic \textit{token mixers}. However, as an increasing number of mixers are proposed, it remains unclear which inductive biases are best suited to inherit the original Transformer's capabilities. Furthermore, it is unknown how linearization is affected by parameter and token budget scaling. To address these questions, we propose a unified setup to compare seven representative architectures, including xLSTM, GLA, and Gated DeltaNet. Our findings reveal that performance hierarchies remain stable from 140M to 1.7B parameters, with error-correcting update rules demonstrating superior scaling exponents. We show that performance gaps are established early and persist through asymptotic maturity at 10B tokens, suggesting that state resolution is a more fundamental bottleneck than the distillation budget. Finally, while most models adapt to instruction tuning, only gated delta-rule formulations maintain the precision necessary for long-context retrieval, whereas additive models suffer from irreversible state saturation. These results suggest that for successful linearization, architectural inductive biases remain the primary constraint that cannot be overcome by simply scaling training compute.
|
https://arxiv.org/abs/2504.14366
|
Academic Papers
|
svg
|
22d173bcca6ac6996460a6794617e2c20605c1b6a0a00cd33c245d7689971b08
|
2026-02-02T00:00:00-05:00
|
Synthesising Asynchronous Automata from Fair Specifications
|
arXiv:2504.14623v2 Announce Type: replace Abstract: Asynchronous automata are a model of distributed finite state processes synchronising on shared actions. A celebrated result by Zielonka shows how a deterministic asynchronous automaton (AA) can be synthesised, starting from two inputs: a global specification given as a deterministic finite-state automaton (DFA) and a distribution of the alphabet into local alphabets for each process. The DFA to AA translation is particularly complex and has been revisited several times, with no complete prototype tool provided for the full construction. In this work, we revisit this construction on a restricted class of "fair" specifications: a DFA describes a fair specification if in every loop, all processes participate in at least one action, so no process is starved. For fair specifications, we present a new construction to synthesise an AA. Our construction results in an AA where every process has a number of local states that is linear in the number of states of the DFA, and where the only exponential explosion is related to a fairness parameter: the length of the longest word that can be read in the DFA in which not every process participates. We have implemented a prototype tool showing how it can be applied to some examples, in particular, a concrete one: the dining philosophers problem. Finally, we show how this construction can be combined with an existing construction for hierarchical process architectures, in order to relax the fairness assumption.
|
https://arxiv.org/abs/2504.14623
|
Academic Papers
|
svg
|
9c92da79a21ce59592fb8ea6c08ea4a8933f082a1ab18ee7f294f0fa55d89567
|
2026-02-02T00:00:00-05:00
|
Analysis and Elimination of Numerical Pressure Dependency in Coupled Stokes-Darcy Problem
|
arXiv:2504.19116v2 Announce Type: replace Abstract: This paper analyses the classical mixed finite element method (FEM) and a pressure-robust variant with divergence-free reconstruction operators for the coupled Stokes-Darcy problem. Its main contribution is to provide viscosity-explicit a priori error estimates that clearly distinguish the pressure dependence of the two discretizations: the velocity error of the classical scheme depends on both the exact pressure and the viscosity, whereas the pressure-robust method eliminates both entirely. Moreover, we derive pressure error estimates and quantify their dependence on the exact solution and model parameters. Two-dimensional numerical experiments validate the theoretical findings, including higher-order tests up to polynomial degree three and a lid-driven cavity benchmark with a piecewise linear interface. The implementation code is made publicly available to facilitate reproducibility.
|
https://arxiv.org/abs/2504.19116
|
Academic Papers
|
svg
|
cac85ba448cbbbd5726f1689c3663ac77420a76ad9565660b73cc3c70715086f
|
2026-02-02T00:00:00-05:00
|
Vision-Language-Action (VLA) Models: Concepts, Progress, Applications and Challenges
|
arXiv:2505.04769v2 Announce Type: replace Abstract: Vision-Language-Action (VLA) models mark a transformative advancement in artificial intelligence, aiming to unify perception, natural language understanding, and embodied action within a single computational framework. This foundational review presents a comprehensive synthesis of recent advancements in Vision-Language-Action models, systematically organized across five thematic pillars that structure the landscape of this rapidly evolving field. We begin by establishing the conceptual foundations of VLA systems, tracing their evolution from cross-modal learning architectures to generalist agents that tightly integrate vision-language models (VLMs), action planners, and hierarchical controllers. Our methodology adopts a rigorous literature review framework, covering over 80 VLA models published in the past three years. Key progress areas include architectural innovations, efficient training strategies, and real-time inference accelerations. We explore diverse application domains such as autonomous vehicles, medical and industrial robotics, precision agriculture, humanoid robotics, and augmented reality. We analyze challenges and propose solutions, including agentic adaptation and cross-embodiment planning. Furthermore, we outline a forward-looking roadmap where VLA models, VLMs, and agentic AI converge to strengthen socially aligned, adaptive, and general-purpose embodied agents. This work is expected to serve as a foundational reference for advancing intelligent, real-world robotics and artificial general intelligence. The project repository is available on GitHub as https://github.com/Applied-AI-Research-Lab/Vision-Language-Action-Models-Concepts-Progress-Applications-and-Challenges. [Index Terms: Vision Language Action, VLA, Vision Language Models, VLMs, Action Tokenization, NLP]
|
https://arxiv.org/abs/2505.04769
|
Academic Papers
|
svg
|
f0a8d50f9367da39f2da13c993449e6a4acb1618d03ce3203e51f1aae59b2abe
|
2026-02-02T00:00:00-05:00
|
Kalman Filter Enhanced GRPO for Reinforcement Learning-Based Language Model Reasoning
|
arXiv:2505.07527v4 Announce Type: replace Abstract: The advantage function is a central concept in RL that helps reduce variance in policy gradient estimates. Recently, for language modeling, Group Relative Policy Optimization (GRPO) was proposed to compute the advantage for each output by subtracting the mean reward of all outputs in the group as the baseline. However, this can lead to high variance when the reward advantage is inaccurately estimated. In this work, we propose the Kalman Filter Enhanced Group Relative Policy Optimization (KRPO) model, which uses lightweight Kalman filtering to dynamically estimate the latent reward baseline and its uncertainty. This filtering technique replaces the naive group mean, enabling more adaptive advantage normalization. Our method requires no additional learned parameters over GRPO. This approach offers a simple yet effective way to incorporate group-level uncertainty into advantage estimation, improving policy optimization in settings where highly dynamic reward signals are difficult to model for language models. Through accuracies and rewards obtained on math question answering and reasoning tasks, we show that with a more adaptive advantage estimation model, KRPO improves performance and exhibits more stable return curves compared to GRPO. The code is available at https://github.com/billhhh/KRPO_LLMs_RL.
|
https://arxiv.org/abs/2505.07527
|
Academic Papers
|
svg
|
8a8e037644ccfa1ff3caa6e27bd1142dfcecac6ab308f75e134330eda5e8ffcd
|
2026-02-02T00:00:00-05:00
|
Lost in Transmission: When and Why LLMs Fail to Reason Globally
|
arXiv:2505.08140v5 Announce Type: replace Abstract: Despite their many successes, transformer-based large language models (LLMs) continue to struggle with tasks that require complex reasoning over large parts of their input. We argue that these failures arise due to capacity limits on the accurate flow of information within LLMs. To formalize this issue, we introduce the bounded attention prefix oracle (BAPO) model, a new computational framework that models bandwidth constraints on attention heads, the mechanism for internal communication in LLMs. We show that several important reasoning problems like graph reachability require high communication bandwidth for BAPOs to solve; we call these problems BAPO-hard. Our experiments corroborate our theoretical predictions: GPT-4o, Claude, and Gemini succeed on BAPO-easy tasks and fail even on relatively small BAPO-hard tasks. BAPOs also reveal another benefit of chain of thought (CoT): we prove that breaking down a task using CoT can turn any BAPO-hard problem into a BAPO-easy one. Our results offer principled explanations for key LLM failures and suggest directions for architectures and inference methods that mitigate bandwidth limits.
|
https://arxiv.org/abs/2505.08140
|
Academic Papers
|
svg
|
659e4345a64879c575fa0c2a658f05ac98dc590841c01a3ba7143442d7686a44
|
2026-02-02T00:00:00-05:00
|
SuperCoder: Assembly Program Superoptimization with Large Language Models
|
arXiv:2505.11480v3 Announce Type: replace Abstract: Superoptimization is the task of transforming a program into a faster one while preserving its input-output behavior. In this work, we investigate whether large language models (LLMs) can serve as superoptimizers, generating assembly programs that outperform code already optimized by industry-standard compilers. We construct the first large-scale benchmark for this problem, consisting of 8,072 assembly programs averaging 130 lines, in contrast to prior datasets restricted to 2-15 straight-line, loop-free programs. We evaluate 23 LLMs on this benchmark and find that the strongest baseline, Claude-opus-4, achieves a 51.5% test-passing rate and a 1.43x average speedup over gcc -O3. To further enhance performance, we fine-tune models with reinforcement learning, optimizing a reward function that integrates correctness and performance speedup. Starting from Qwen2.5-Coder-7B-Instruct (61.4% correctness, 1.10x speedup), the fine-tuned model SuperCoder attains 95.0% correctness and 1.46x average speedup, with additional improvement enabled by Best-of-N sampling and iterative refinement. Our results demonstrate, for the first time, that LLMs can be applied as superoptimizers for assembly programs, establishing a foundation for future research in program performance optimization beyond compiler heuristics.
|
https://arxiv.org/abs/2505.11480
|
Academic Papers
|
svg
|
6665f9d9928ae40175e1b1feec9c99a3e368a26a5892573046aabf1cc58e1285
|
2026-02-02T00:00:00-05:00
|
From Street View to Visibility Network: Mapping Urban Visual Relationships with Vision-Language Models
|
arXiv:2505.11809v2 Announce Type: replace Abstract: Visibility analysis is one of the fundamental analytics methods in urban planning and landscape research, traditionally conducted through computational simulations based on the Line-of-Sight (LoS) principle. However, when assessing the visibility of named urban objects such as landmarks, geometric intersection alone fails to capture the contextual and perceptual dimensions of visibility as experienced in the real world. The study challenges the traditional LoS-based approaches by introducing a new, image-based visibility analysis method. Specifically, a Vision Language Model (VLM) is applied to detect the target object within a direction-zoomed Street View Image (SVI). Successful detection represents the object's visibility at the corresponding SVI location. Further, a heterogeneous visibility graph is constructed to address the complex interaction between observers and target objects. In the first case study, the method proves its reliability in detecting the visibility of six tall landmark constructions in global cities, with an overall accuracy of 87%. Furthermore, it reveals broader contextual differences when the landmarks are perceived and experienced. In the second case, the proposed visibility graph uncovers the form and strength of connections for multiple landmarks along the River Thames in London, as well as the places where these connections occur. Notably, bridges on the River Thames account for approximately 30% of total connections. Our method complements and enhances traditional LoS-based visibility analysis, and showcases the possibility of revealing the prevalent connection of any visual objects in the urban environment. It opens up new research perspectives for urban planning, heritage conservation, and computational social science.
|
https://arxiv.org/abs/2505.11809
|
Academic Papers
|
svg
|
1b4a7b9fd5e2a4d8ddee4f30a9c2d25ce17563553e83db748b076b4c66925055
|
2026-02-02T00:00:00-05:00
|
SAINT: Attention-Based Policies for Discrete Combinatorial Action Spaces
|
arXiv:2505.12109v3 Announce Type: replace Abstract: The combinatorial structure of many real-world action spaces leads to exponential growth in the number of possible actions, limiting the effectiveness of conventional reinforcement learning algorithms. Recent approaches for combinatorial action spaces impose factorized or sequential structures over sub-actions, failing to capture complex joint behavior. We introduce the Sub-Action Interaction Network using Transformers (SAINT), a novel policy architecture that represents multi-component actions as unordered sets and models their dependencies via self-attention conditioned on the global state. SAINT is permutation-invariant, sample-efficient, and compatible with standard policy optimization algorithms. In 18 distinct combinatorial environments across three task domains, including environments with $1.35 \times 10^{18}$ possible actions, SAINT consistently outperforms strong baselines.
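The permutation-invariance property claimed for SAINT can be illustrated with a toy state-conditioned attention layer. This is a minimal sketch under assumed details (single head, additive state conditioning, random weights), not the paper's architecture: with no positional encodings, attention over the sub-action set is permutation-equivariant.

```python
import numpy as np

# Toy sketch (assumed details, not SAINT itself): sub-actions form an
# unordered set of embeddings, conditioned on the global state, mixed with
# one self-attention layer. No positional encodings -> permuting the set
# permutes the outputs identically (permutation equivariance).

rng = np.random.default_rng(0)
d = 8
W_q, W_k, W_v = (rng.standard_normal((d, d)) / d**0.5 for _ in range(3))

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attend(sub_actions, state):
    # Condition each sub-action embedding on the global state (additively).
    h = sub_actions + state            # (n, d) set of tokens, order-free
    q, k, v = h @ W_q, h @ W_k, h @ W_v
    a = softmax(q @ k.T / d**0.5)      # dependencies between sub-actions
    return a @ v

state = rng.standard_normal(d)
x = rng.standard_normal((3, d))        # 3 sub-action embeddings
out = attend(x, state)
perm = [2, 0, 1]
assert np.allclose(attend(x[perm], state), out[perm])
```

A permutation-invariant policy would follow by pooling these outputs (e.g., a mean) before the action head; the assertion checks the equivariance that makes such pooling order-free.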
|
https://arxiv.org/abs/2505.12109
|
Academic Papers
|
svg
|
5078231d9001dd0f518730f22ec56164e0a827f323a3b0176add42a9bf2be0fb
|
2026-02-02T00:00:00-05:00
|
LightRetriever: A LLM-based Text Retrieval Architecture with Extremely Faster Query Inference
|
arXiv:2505.12260v5 Announce Type: replace Abstract: Large Language Model (LLM)-based text retrieval finds documents relevant to search queries based on vector similarities. Documents are pre-encoded offline, while queries arrive in real-time, necessitating an efficient online query encoder. Although LLMs significantly enhance retrieval capabilities, serving deeply parameterized LLMs slows down query inference throughput and increases demands for online deployment resources. In this paper, we propose LightRetriever, a novel LLM-based retriever with extremely lightweight query encoders. Our method retains a full-sized LLM for document encoding, but reduces the workload of query encoding to no more than an embedding lookup. Compared to serving a full LLM on an A800 GPU, our method achieves over 1000x speedup in query encoding and over 10x increase in end-to-end retrieval throughput. Extensive experiments on large-scale retrieval benchmarks show that LightRetriever generalizes well across diverse tasks, maintaining an average of 95% retrieval performance.
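The asymmetric design described here (full model for documents, embedding lookup for queries) can be sketched in a few lines. This is a hedged toy reconstruction: the embedding table, tokenization, and pooling are all stand-ins, not LightRetriever's actual components.

```python
import numpy as np

# Minimal sketch of the asymmetric retrieval idea (all specifics assumed):
# documents are encoded offline by a full model, while a query is encoded
# online by nothing more than summing precomputed per-token vectors,
# i.e., an embedding-table lookup.

vocab = {"fast": 0, "text": 1, "retrieval": 2, "cooking": 3}
E = np.eye(4)                     # toy query-side embedding table

def encode_query(tokens):
    # No transformer forward pass: lookup + sum + L2-normalize.
    v = E[[vocab[t] for t in tokens]].sum(axis=0)
    return v / np.linalg.norm(v)

# Pretend these vectors came from the full (slow, offline) document encoder;
# the lookup encoder is reused here only to keep the sketch self-contained.
doc_vecs = np.stack([encode_query(["fast", "text", "retrieval"]),
                     encode_query(["cooking"])])

scores = doc_vecs @ encode_query(["text", "retrieval"])
assert scores[0] > scores[1]      # the relevant document ranks first
```

The point of the sketch is the cost profile: online query encoding is O(query length) table reads, while all heavy computation happens offline on the document side.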
|
https://arxiv.org/abs/2505.12260
|
Academic Papers
|
svg
|
056484facc6ad4c0802828b9dc153b2f1018ac6783f0bfb2543bb536d4687887
|
2026-02-02T00:00:00-05:00
|
Language Models That Walk the Talk: A Framework for Formal Fairness Certificates
|
arXiv:2505.12767v2 Announce Type: replace Abstract: As large language models become integral to high-stakes applications, ensuring their robustness and fairness is critical. Despite their success, large language models remain vulnerable to adversarial attacks, where small perturbations, such as synonym substitutions, can alter model predictions, posing risks in fairness-critical areas, such as gender bias mitigation, and safety-critical areas, such as toxicity detection. While formal verification has been explored for neural networks, its application to large language models remains limited. This work presents a holistic verification framework to certify the robustness of transformer-based language models, with a focus on ensuring gender fairness and consistent outputs across different gender-related terms. Furthermore, we extend this methodology to toxicity detection, offering formal guarantees that adversarially manipulated toxic inputs are consistently detected and appropriately censored, thereby ensuring the reliability of moderation systems. By formalizing robustness within the embedding space, this work strengthens the reliability of language models in ethical AI deployment and content moderation.
|
https://arxiv.org/abs/2505.12767
|
Academic Papers
|
svg
|
6effeadebcc2aa6b53228b633cc6ea378c664ecdcd6ea1e83cea1e464355a169
|
2026-02-02T00:00:00-05:00
|
CacheFlow: Fast Human Motion Prediction by Cached Normalizing Flow
|
arXiv:2505.13140v3 Announce Type: replace Abstract: Many density estimation techniques for 3D human motion prediction require a significant amount of inference time, often exceeding the duration of the predicted time horizon. To address the need for faster density estimation for 3D human motion prediction, we introduce a novel flow-based method for human motion prediction called CacheFlow. Unlike previous conditional generative models that suffer from poor time efficiency, CacheFlow takes advantage of an unconditional flow-based generative model that transforms a Gaussian mixture into the density of future motions. The results of the computation of the flow-based generative model can be precomputed and cached. Then, for conditional prediction, we seek a mapping from historical trajectories to samples in the Gaussian mixture. This mapping can be done by a much more lightweight model, thus saving significant computation overhead compared to a typical conditional flow model. In such a two-stage fashion and by caching results from the slow flow model computation, we build our CacheFlow without loss of prediction accuracy and model expressiveness. This inference process is completed in approximately one millisecond, making it 4 times faster than previous VAE methods and 30 times faster than previous diffusion-based methods on standard benchmarks such as Human3.6M and AMASS datasets. Furthermore, our method demonstrates improved density estimation accuracy and comparable prediction accuracy to a SOTA method on Human3.6M. Our code and models are available at https://github.com/meaten/CacheFlow.
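The two-stage caching scheme in the abstract can be sketched as follows. Everything concrete here (the stand-in flow transform, nearest-center mapping, sample counts) is an assumption for illustration, not CacheFlow's actual model.

```python
import numpy as np

# Toy sketch of the caching idea (all specifics assumed): an "unconditional
# flow" is run once, offline, on samples from each Gaussian mixture component,
# and the results are cached; at inference, a lightweight mapping only picks
# the mixture component matching the history, so prediction is a cache lookup.

rng = np.random.default_rng(0)
K, n, d = 4, 32, 2
centers = rng.standard_normal((K, d)) * 3

def slow_flow(z):
    # Stand-in for the expensive flow transform (run once, offline).
    return np.tanh(z) + 0.1 * z

# Offline stage: sample each component, push through the flow, cache results.
cache = {k: slow_flow(centers[k] + rng.standard_normal((n, d)))
         for k in range(K)}

def predict(history_embedding):
    # Online stage: lightweight mapping = nearest mixture center,
    # then a cache lookup instead of any flow computation.
    k = int(np.argmin(np.linalg.norm(centers - history_embedding, axis=1)))
    return cache[k]            # cached set of future-motion samples

samples = predict(centers[2] + 0.01)
assert samples.shape == (n, d)
```

The speedup claim in the abstract corresponds to the online path here touching only a distance computation and a dictionary read.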
|
https://arxiv.org/abs/2505.13140
|
Academic Papers
|
svg
|
6f4255be4c85ef49294ad051e0b9befcbee80807bae7c9cd344978884047a564
|
2026-02-02T00:00:00-05:00
|
Policy-Driven World Model Adaptation for Robust Offline Model-based Reinforcement Learning
|
arXiv:2505.13709v3 Announce Type: replace Abstract: Offline reinforcement learning (RL) offers a powerful paradigm for data-driven control. Compared to model-free approaches, offline model-based RL (MBRL) explicitly learns a world model from a static dataset and uses it as a surrogate simulator, improving data efficiency and enabling potential generalization beyond the dataset support. However, most existing offline MBRL methods follow a two-stage training procedure: first learning a world model by maximizing the likelihood of the observed transitions, then optimizing a policy to maximize its expected return under the learned model. This objective mismatch results in a world model that is not necessarily optimized for effective policy learning. Moreover, we observe that policies learned via offline MBRL often lack robustness during deployment, and small adversarial noise in the environment can lead to significant performance degradation. To address these, we propose a framework that dynamically adapts the world model alongside the policy under a unified learning objective aimed at improving robustness. At the core of our method is a maximin optimization problem, which we solve by innovatively utilizing Stackelberg learning dynamics. We provide theoretical analysis to support our design and introduce computationally efficient implementations. We benchmark our algorithm on twelve noisy D4RL MuJoCo tasks and three stochastic Tokamak Control tasks, demonstrating its state-of-the-art performance.
|
https://arxiv.org/abs/2505.13709
|
Academic Papers
|
svg
|
c2e101205f472bec862e71012a6ff4564419a427cb55e434404e5baca391b729
|
2026-02-02T00:00:00-05:00
|
Warm Up Before You Train: Unlocking General Reasoning in Resource-Constrained Settings
|
arXiv:2505.13718v3 Announce Type: replace Abstract: Designing effective reasoning-capable LLMs typically requires training using Reinforcement Learning with Verifiable Rewards (RLVR) or distillation with carefully curated Long Chain of Thoughts (CoT), both of which depend heavily on extensive training data. This creates a major challenge when the amount of quality training data is scarce. We propose a sample-efficient, two-stage training strategy to develop reasoning LLMs under limited supervision. In the first stage, we "warm up" the model by distilling Long CoTs from a toy domain, namely, Knights \& Knaves (K\&K) logic puzzles to acquire general reasoning skills. In the second stage, we apply RLVR to the warmed-up model using a limited set of target-domain examples. Our experiments demonstrate that this two-phase approach offers several benefits: $(i)$ the warmup phase alone facilitates generalized reasoning, leading to performance improvements across a range of tasks, including MATH, HumanEval$^{+}$, and MMLU-Pro; $(ii)$ When both the base model and the warmed-up model are RLVR trained on the same small dataset ($\leq100$ examples), the warmed-up model consistently outperforms the base model; $(iii)$ Warming up before RLVR training allows a model to maintain cross-domain generalizability even after training on a specific domain; $(iv)$ Introducing warmup in the pipeline improves not only accuracy but also overall sample efficiency during RLVR training. The results in this paper highlight the promise of warmup for building robust reasoning LLMs in data-scarce environments.
|
https://arxiv.org/abs/2505.13718
|
Academic Papers
|
svg
|
8aaf5d8d3fa819287e73bd07c27a6c0e6064cac2faae97c40f2417e180cb23e0
|
2026-02-02T00:00:00-05:00
|
Mechanistic evaluation of Transformers and state space models
|
arXiv:2505.15105v3 Announce Type: replace Abstract: State space models (SSMs) for language modelling promise an efficient and performant alternative to quadratic-attention Transformers, yet show variable performance on recalling basic information from the context. While performance on synthetic tasks like Associative Recall (AR) can point to this deficiency, behavioural metrics provide little information as to \textit{why} -- on a mechanistic level -- certain architectures fail and others succeed. To address this, we conduct experiments on AR, and find that only Transformers and Based SSM models fully succeed at AR, with Mamba and DeltaNet close behind, while the other SSMs (H3, Hyena) fail. We then use causal interventions to explain why. We find that Transformers and Based learn to store key-value associations in-context using induction. By contrast, the SSMs seem to compute these associations only at the last state using a single layer. We further investigate the mechanism underlying the success of Mamba, and find novel evidence that Mamba \textit{does} implement induction: not via the SSM, but instead via short convolutions. Further experiments on a new hierarchical retrieval task, Associative Treecall (ATR), show that all architectures learn the same mechanism as they did for AR. Furthermore, we show that Mamba can learn Attention-like induction on ATR when short convolutions are removed. These results reveal that architectures with similar accuracy may still have substantive differences, motivating the adoption of mechanistic evaluations.
|
https://arxiv.org/abs/2505.15105
|
Academic Papers
|
svg
|
7d214fce3ce42d250886f179aecaad989f0bd974ce4280bc770307bb14663c90
|
2026-02-02T00:00:00-05:00
|
Identification of Probabilities of Causation: from Recursive to Closed-Form Bounds
|
arXiv:2505.15274v3 Announce Type: replace Abstract: Probabilities of causation (PoCs) are fundamental quantities for counterfactual analysis and personalized decision making. However, existing analytical results are largely confined to binary settings. This paper extends PoCs to multi-valued treatments and outcomes by deriving closed form bounds for a representative family of discrete PoCs within Structural Causal Models, using standard experimental and observational distributions. We introduce the notion of equivalence classes of PoCs, which reduces arbitrary discrete PoCs to this family, and establish a replaceability principle that transfers bounds across value permutations. For the resulting bounds, we prove soundness in all dimensions and empirically verify tightness in low dimensional cases via Balke's linear programming method; we further conjecture that this tightness extends to all dimensions. Simulations indicate that our closed form bounds consistently tighten recent recursive bounds while remaining simpler to compute. Finally, we illustrate the practical relevance of our results through toy examples.
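For orientation, the classical binary-case bounds that this family of closed-form results generalizes can be written down directly; these are the well-known experimental-data bounds (Tian and Pearl) for the probability of necessity and sufficiency, stated here only as background, not as the paper's new bounds.

```latex
% Binary case: PNS = P(y_x, y'_{x'}), bounded using experimental
% distributions alone (Tian & Pearl):
\max\{0,\; P(y_x) - P(y_{x'})\}
\;\le\; \mathrm{PNS} \;\le\;
\min\{P(y_x),\; P(y'_{x'})\}
```

The paper's contribution is to extend bounds of this kind from binary to multi-valued treatments and outcomes, with equivalence classes reducing arbitrary discrete PoCs to a representative family.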
|
https://arxiv.org/abs/2505.15274
|
Academic Papers
|
svg
|
30d67b6fffd36437cdb8db7439a1090341de91ce6f45960b95fd5e1ce974c24c
|
2026-02-02T00:00:00-05:00
|
Diverse, not Short: A Length-Controlled Data Selection Strategy for Improving Response Diversity of Language Models
|
arXiv:2505.16245v4 Announce Type: replace Abstract: Diverse language model responses are crucial for creative generation, open-ended tasks, and self-improvement training. We show that common diversity metrics, and even reward models used for preference optimization, systematically bias models toward shorter outputs, limiting expressiveness. To address this, we introduce Diverse, not Short (Diverse-NS), a length-controlled data selection strategy that improves response diversity while maintaining length parity. By generating and filtering preference data that balances diversity, quality, and length, Diverse-NS enables effective training using only 3,000 preference pairs. Applied to LLaMA-3.1-8B and the Olmo-2 family, Diverse-NS substantially enhances lexical and semantic diversity. We show consistent improvement in diversity with minor reduction or gains in response quality on four creative generation tasks: Divergent Associations, Persona Generation, Alternate Uses, and Creative Writing. Surprisingly, experiments with the Olmo-2 model family (7B, and 13B) show that smaller models like Olmo-2-7B can serve as effective "diversity teachers" for larger models. By explicitly addressing length bias, our method efficiently pushes models toward more diverse and expressive outputs.
|
https://arxiv.org/abs/2505.16245
|
Academic Papers
|
svg
|
cbc9b2a044d4e7692a0bebd2c1c105254bef74f09986ba4c183314aae7437e13
|
2026-02-02T00:00:00-05:00
|
An Analysis of Concept Bottleneck Models: Measuring, Understanding, and Mitigating the Impact of Noisy Annotations
|
arXiv:2505.16705v3 Announce Type: replace Abstract: Concept bottleneck models (CBMs) ensure interpretability by decomposing predictions into human interpretable concepts. Yet the annotations used for training CBMs that enable this transparency are often noisy, and the impact of such corruption is not well understood. In this study, we present the first systematic study of noise in CBMs and show that even moderate corruption simultaneously impairs prediction performance, interpretability, and the intervention effectiveness. Our analysis identifies a susceptible subset of concepts whose accuracy declines far more than the average gap between noisy and clean supervision and whose corruption accounts for most performance loss. To mitigate this vulnerability we propose a two-stage framework. During training, sharpness-aware minimization stabilizes the learning of noise-sensitive concepts. During inference, where clean labels are unavailable, we rank concepts by predictive entropy and correct only the most uncertain ones, using uncertainty as a proxy for susceptibility. Theoretical analysis and extensive ablations elucidate why sharpness-aware training confers robustness and why uncertainty reliably identifies susceptible concepts, providing a principled basis that preserves both interpretability and resilience in the presence of noise.
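The inference-stage procedure described here (rank concepts by predictive entropy, correct only the most uncertain) is simple enough to sketch. The threshold, oracle source, and binary-concept assumption are illustrative choices, not the paper's exact setup.

```python
import numpy as np

# Sketch of the entropy-ranked intervention (details assumed): concepts are
# ranked by the entropy of their predicted probabilities, and only the top-k
# most uncertain ones are replaced by a corrected (oracle) value.

def entropy(p):
    # Binary entropy of each concept probability.
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -(p * np.log(p) + (1 - p) * np.log(1 - p))

def correct_most_uncertain(concept_probs, oracle, k):
    probs = concept_probs.copy()
    idx = np.argsort(-entropy(probs))[:k]   # highest-entropy concepts first
    probs[idx] = oracle[idx]                # intervene only on those
    return probs, idx

probs = np.array([0.95, 0.52, 0.10, 0.49])  # two confident, two uncertain
oracle = np.array([1.0, 1.0, 0.0, 0.0])
fixed, idx = correct_most_uncertain(probs, oracle, k=2)
assert set(idx) == {1, 3}                   # the near-0.5 concepts get fixed
```

Uncertainty acts as the proxy for susceptibility: confident concept predictions are left untouched, matching the abstract's claim that entropy reliably identifies the noise-sensitive subset.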
|
https://arxiv.org/abs/2505.16705
|
Academic Papers
|
svg
|
d1aea1ffc03d1b459a513eb79b46f710b1c53ee541de0211ce6a8fedefd6edd9
|
2026-02-02T00:00:00-05:00
|
NeUQI: Near-Optimal Uniform Quantization Parameter Initialization for Low-Bit LLMs
|
arXiv:2505.17595v3 Announce Type: replace Abstract: Large language models (LLMs) achieve impressive performance across domains but face significant challenges when deployed on consumer-grade GPUs or personal devices such as laptops, due to high memory consumption and inference costs. Post-training quantization (PTQ) of LLMs offers a promising solution that reduces their memory footprint and decoding latency. In practice, PTQ with uniform quantization representation is favored due to its efficiency and ease of deployment, as uniform quantization is widely supported by mainstream hardware and software libraries. Recent studies on low-bit uniform quantization have led to noticeable improvements in post-quantization model performance; however, they mainly focus on quantization methodologies, while the initialization of quantization parameters remains underexplored and still relies on the conventional Min-Max formula. In this work, we identify the limitations of the Min-Max formula, move beyond its constraints, and propose NeUQI, a method that efficiently determines near-optimal initialization for uniform quantization. Our NeUQI simplifies the joint optimization of the scale and zero-point by deriving the zero-point for a given scale, thereby reducing the problem to a scale-only optimization. Benefiting from the improved quantization parameters, our NeUQI consistently outperforms existing methods in the experiments with the LLaMA and Qwen families on various settings and tasks. Furthermore, when combined with a lightweight distillation strategy, NeUQI even achieves superior performance to PV-tuning, a considerably more resource-intensive method.
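The reduction to a scale-only search can be sketched as below. The paper derives the zero-point for a given scale in closed form; this sketch substitutes a simple per-scale zero-point heuristic and a grid search, so it illustrates the structure of the method, not its exact formulas.

```python
import numpy as np

# Hedged sketch of scale-only initialization for uniform quantization (the
# paper's closed-form zero-point derivation is not reproduced; a per-scale
# zero-point heuristic stands in for it).

def quantize(x, scale, zero, bits=4):
    qmax = 2**bits - 1
    q = np.clip(np.round(x / scale + zero), 0, qmax)
    return scale * (q - zero)            # dequantized reconstruction

def init_params(x, bits=4, n_grid=64):
    qmax = 2**bits - 1
    base = (x.max() - x.min()) / qmax    # conventional Min-Max scale
    best = (np.inf, None, None)
    # Scale-only search: shrink the Min-Max scale over a grid; for each
    # candidate scale, derive a zero-point, and keep the pair with the
    # lowest reconstruction MSE.
    for f in np.linspace(0.3, 1.0, n_grid):
        s = base * f
        z = np.round(-x.min() / s)       # zero-point given this scale
        err = np.mean((x - quantize(x, s, z, bits)) ** 2)
        if err < best[0]:
            best = (err, s, z)
    return best

x = np.random.default_rng(0).standard_normal(256)
err_opt, s_opt, z_opt = init_params(x)
base = (x.max() - x.min()) / 15
err_minmax = np.mean((x - quantize(x, base, np.round(-x.min() / base))) ** 2)
assert err_opt <= err_minmax   # never worse than plain Min-Max init
```

Because the Min-Max scale itself is in the search grid, the result can only match or improve on the conventional initialization, which mirrors the abstract's argument against the Min-Max formula.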
|
https://arxiv.org/abs/2505.17595
|
Academic Papers
|
svg
|
c130586f787f418284b1132dc2e0e15b1108ee2286108de6b3db242622367485
|
2026-02-02T00:00:00-05:00
|
Rethinking the Sampling Criteria in Reinforcement Learning for LLM Reasoning: A Competence-Difficulty Alignment Perspective
|
arXiv:2505.17652v3 Announce Type: replace Abstract: Reinforcement learning exhibits potential in enhancing the reasoning abilities of large language models, yet it is hard to scale due to low sample efficiency during the rollout phase. Existing methods attempt to improve efficiency by scheduling problems based on problem difficulties. However, these approaches suffer from unstable and biased estimations of problem difficulty and fail to capture the alignment between model competence and problem difficulty in RL training, leading to suboptimal results. To tackle these limitations, this paper introduces $\textbf{C}$ompetence-$\textbf{D}$ifficulty $\textbf{A}$lignment $\textbf{S}$ampling ($\textbf{CDAS}$), which enables accurate and stable estimation of problem difficulties by aggregating historical performance discrepancies of problems. Then the model competence is quantified to adaptively select problems whose difficulty is in alignment with the model's current competence using a fixed-point system. Experimental results across a range of challenging mathematical benchmarks show that CDAS achieves great improvements in both accuracy and efficiency. CDAS attains the highest average accuracy against baselines and exhibits significant speed advantages compared to Dynamic Sampling, a competitive strategy in DAPO, which is 2.33 times slower than CDAS.
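The sampling loop described in the abstract can be sketched roughly as below. The aggregation rule, competence scale, and selection criterion are all assumptions inferred from the abstract, not CDAS's actual fixed-point system.

```python
import numpy as np

# Illustrative sketch (mechanics assumed from the abstract): each problem's
# difficulty is a running aggregate of historical performance discrepancies,
# and rollouts draw the problems whose difficulty is closest to the model's
# current competence.

class CDASampler:
    def __init__(self, n_problems, beta=0.9):
        self.difficulty = np.full(n_problems, 0.5)   # prior: medium
        self.beta = beta

    def update(self, idx, pass_rate):
        # Fold the newly observed failure rate into the running estimate,
        # smoothing out the instability of single-rollout difficulty.
        self.difficulty[idx] = (self.beta * self.difficulty[idx]
                                + (1 - self.beta) * (1.0 - pass_rate))

    def select(self, competence, k):
        # Pick the k problems whose difficulty best matches competence.
        gap = np.abs(self.difficulty - competence)
        return np.argsort(gap)[:k]

s = CDASampler(5)
s.update(0, pass_rate=1.0)    # problem 0 now looks easier (difficulty 0.45)
s.update(4, pass_rate=0.0)    # problem 4 now looks harder (difficulty 0.55)
picked = s.select(competence=0.55, k=1)
assert picked[0] == 4         # the more capable model gets the harder problem
```

The smoothing in `update` is the sketch's analogue of aggregating historical discrepancies; a single noisy rollout moves the estimate only slightly.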
|
https://arxiv.org/abs/2505.17652
|
Academic Papers
|
svg
|
e235d169859ad2855c311a95fb6d7bb65d630c809451b7d2d5761824f1579259
|
2026-02-02T00:00:00-05:00
|
Just as Humans Need Vaccines, So Do Models: Model Immunization to Combat Falsehoods
|
arXiv:2505.17870v2 Announce Type: replace Abstract: Large language models (LLMs) reproduce misinformation by learning the linguistic patterns that make falsehoods persuasive, such as hedging, false presuppositions, and citation fabrication, rather than merely memorizing false facts. We propose model immunization: supervised fine-tuning on curated (false claim, correction) pairs injected as small "vaccine doses" (5-10\% of tokens) alongside truthful data. Unlike post-hoc filtering or preference-based alignment, immunization provides direct negative supervision on labeled falsehoods. Across four open-weight model families, immunization improves TruthfulQA accuracy by 12 points and misinformation rejection by 30 points with negligible capability loss. We outline design requirements, including dosage, labeling, quarantine, and diversity, and call for standardized vaccine corpora and benchmarks that test generalization, making immunization a routine component of responsible LLM development.
|
https://arxiv.org/abs/2505.17870
|
Academic Papers
|
svg
|
c6c3398a7655144dc9c1d8462aa7251f8a4078a422182c4a24c77ca2e94db2c1
|
2026-02-02T00:00:00-05:00
|
Reinforcement Learning for Ballbot Navigation in Uneven Terrain
|
arXiv:2505.18417v2 Announce Type: replace Abstract: Ballbot (i.e., ball-balancing robot) navigation usually relies on methods rooted in control theory (CT), and works that apply Reinforcement learning (RL) to the problem remain rare while generally being limited to specific subtasks (e.g. balance recovery). Unlike CT-based methods, RL does not require (simplifying) assumptions about environment dynamics (e.g. the absence of slippage between the ball and the floor). In addition to this increased accuracy in modeling, RL agents can easily be conditioned on additional observations such as depth-maps without the need for explicit formulations from first principles, leading to increased adaptivity. Despite those advantages, there has been little to no investigation into the capabilities, data-efficiency and limitations of RL-based methods for ballbot control and navigation. Furthermore, there is a notable absence of an open-source, RL-friendly simulator for this task. In this paper, we present an open-source ballbot simulation based on MuJoCo, and show that with appropriate conditioning on exteroceptive observations as well as reward shaping, policies learned by classical model-free RL methods are capable of effectively navigating through randomly generated uneven terrain, using a reasonable amount of data (four to five hours on a system operating at 500 Hz). Our code is made publicly available.
|
https://arxiv.org/abs/2505.18417
|
Academic Papers
|
svg
|
95239865ea110cbf6bd90df4af1db7574167f34895ff6333aa4704f500881123
|
2026-02-02T00:00:00-05:00
|
Surrogate Signals from Format and Length: Reinforcement Learning for Solving Mathematical Problems without Ground Truth Answers
|
arXiv:2505.19439v5 Announce Type: replace Abstract: Large Language Models (LLMs) have achieved remarkable success in natural language processing tasks, with Reinforcement Learning (RL) playing a key role in adapting them to specific applications. In mathematical problem solving, however, the reliance on ground truth answers poses significant challenges due to their high collection cost and limited availability. This work explores the use of simple surrogate signals, format and length, to guide RL training. We find that early training is dominated by format learning, where structural feedback alone accounts for most performance gains. Incorporating length-based rewards further refines outputs by discouraging overly long or short responses, enabling a GRPO approach with format-length signals to approximate, and in some cases surpass, ground-truth-based optimization. For example, our method achieves 40.0% accuracy on AIME2024 with a 7B base model, and generalizes across different model sizes and series. Beyond practical efficiency, these findings provide an inspirational perspective on RL: rather than imparting new knowledge, RL primarily activates reasoning capabilities already embedded in pre-trained models. This insight suggests that lightweight, label-efficient strategies can complement pre-training to unlock LLMs' latent potential in reasoning-intensive tasks.
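A surrogate reward combining format and length signals, as described here, can be sketched directly. The specific format check, length band, and weighting are illustrative assumptions, not the paper's values.

```python
import re

# Hedged sketch of a format-length surrogate reward (weights and thresholds
# are assumptions): responses earn a format reward for producing a parseable
# boxed answer, plus a length term that penalizes responses far outside a
# target band -- no ground-truth answer is needed.

def surrogate_reward(response, min_len=64, max_len=1024):
    # Format signal: does the output contain a final \boxed{...} answer?
    fmt = 1.0 if re.search(r"\\boxed\{[^}]+\}", response) else 0.0
    # Length signal: inside the band -> no penalty; outside -> linear penalty.
    n = len(response)
    if n < min_len:
        length = n / min_len - 1.0          # negative when too short
    elif n > max_len:
        length = 1.0 - n / max_len          # negative when too long
    else:
        length = 0.0
    return fmt + 0.5 * length

good = "Reasoning..." + "x" * 100 + r" Final answer: \boxed{42}"
assert surrogate_reward(good) > surrogate_reward(r"\boxed{42}")     # too short
assert surrogate_reward(good) > surrogate_reward("no answer " * 20)  # no format
```

A reward like this is verifiable without labels, which is the abstract's point: structural feedback alone can drive much of early RL training.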
|
https://arxiv.org/abs/2505.19439
|
Academic Papers
|
svg
|
d501a1041cd3ad2dc2ff704cf7e46725854fab511835d86a17d6a8b39874e4a1
|
2026-02-02T00:00:00-05:00
|
Model Agnostic Differentially Private Causal Inference
|
arXiv:2505.19589v3 Announce Type: replace Abstract: Estimating causal effects from observational data is essential in fields such as medicine, economics and social sciences, where privacy concerns are paramount. We propose a general, model-agnostic framework for differentially private estimation of average treatment effects (ATE) that avoids strong structural assumptions on the data-generating process or the models used to estimate propensity scores and conditional outcomes. In contrast to prior work, which enforces differential privacy by directly privatizing these nuisance components, our approach decouples nuisance estimation from privacy protection. This separation allows the use of flexible, state-of-the-art black-box models, while differential privacy is achieved by perturbing only predictions and aggregation steps within a fold-splitting scheme with ensemble techniques. We instantiate the framework for three classical estimators -- the G-Formula, inverse propensity weighting (IPW), and augmented IPW (AIPW) -- and provide formal utility and privacy guarantees, together with privatized confidence intervals. Empirical results on synthetic and real data show that our methods maintain competitive performance under realistic privacy budgets.
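The decoupling described in the abstract (black-box nuisance models, privacy only at the aggregation step) can be sketched with a clipped-mean Laplace mechanism over per-fold estimates. The clipping bound, sensitivity accounting, and fold scheme are simplified illustrations, not the paper's exact construction or guarantees.

```python
import numpy as np

# Sketch of the decoupling idea (mechanism details assumed): nuisance models
# are fit as arbitrary black boxes per fold; privacy comes only from clipping
# each fold's ATE estimate and adding Laplace noise to the aggregate
# (Laplace mechanism), not from privatizing the models themselves.

rng = np.random.default_rng(0)

def dp_ate(fold_estimates, clip=1.0, epsilon=1.0):
    est = np.clip(np.asarray(fold_estimates, dtype=float), -clip, clip)
    k = len(est)
    # Changing one fold's estimate moves the mean by at most 2*clip/k,
    # which bounds the sensitivity of the aggregate.
    sensitivity = 2 * clip / k
    noise = rng.laplace(0.0, sensitivity / epsilon)
    return est.mean() + noise

# e.g., per-fold G-formula ATE estimates from black-box outcome models:
folds = [0.31, 0.28, 0.35, 0.30]
private_estimate = dp_ate(folds, epsilon=2.0)
```

The design choice this illustrates: any estimator (G-formula, IPW, AIPW) can plug in unchanged, since only the scalar fold outputs ever touch the privacy mechanism.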
|
https://arxiv.org/abs/2505.19589
|
Academic Papers
|
svg
|
7bf46c7e762859a900a6b4d711e9792f40510c05f587da5aa438bc19e7a4e4ef
|
2026-02-02T00:00:00-05:00
|
Spatially-Adaptive Gradient Re-parameterization for 3D Large Kernel Optimization
|
arXiv:2505.19603v2 Announce Type: replace Abstract: Large kernel convolutions offer a scalable alternative to vision transformers for high-resolution 3D volumetric analysis, yet naively increasing kernel size often leads to optimization instability. Motivated by the spatial bias inherent in effective receptive fields (ERFs), we theoretically demonstrate that structurally re-parameterized blocks induce spatially varying learning rates that are crucial for convergence. Leveraging this insight, we introduce Rep3D, a framework that employs a lightweight modulation network to generate receptive-biased scaling masks, adaptively re-weighting kernel updates within a plain encoder architecture. This approach unifies spatial inductive bias with optimization-aware learning, avoiding the complexity of multi-branch designs while ensuring robust local-to-global convergence. Extensive evaluations on five 3D segmentation benchmarks demonstrate that Rep3D consistently outperforms state-of-the-art transformer and fixed-prior baselines. The source code is publicly available at https://github.com/leeh43/Rep3D.
|
https://arxiv.org/abs/2505.19603
|
Academic Papers
|
svg
|
18648ffadfa5e99b4a630521a8fd5285148400c40e4c2511a483203f8f7f7235
|
2026-02-02T00:00:00-05:00
|
VScan: Rethinking Visual Token Reduction for Efficient Large Vision-Language Models
|
arXiv:2505.22654v3 Announce Type: replace Abstract: Recent Large Vision-Language Models (LVLMs) have advanced multi-modal understanding by incorporating finer-grained visual perception and encoding. However, such methods incur significant computational costs due to longer visual token sequences, posing challenges for real-time deployment. To mitigate this, prior studies have explored pruning unimportant visual tokens either at the output layer of the visual encoder or at the early layers of the language model. In this work, we revisit these design choices and reassess their effectiveness through comprehensive empirical studies of how visual tokens are processed throughout the visual encoding and language decoding stages. Guided by these insights, we propose VScan, a two-stage visual token reduction framework that addresses token redundancy by: (1) integrating complementary global and local scans with token merging during visual encoding, and (2) introducing pruning at intermediate layers of the language model. Extensive experimental results across four LVLMs validate the effectiveness of VScan in accelerating inference and demonstrate its superior performance over current state-of-the-art methods on sixteen benchmarks. Notably, when applied to LLaVA-NeXT-7B, VScan achieves a 2.91$\times$ speedup in prefilling and a 10$\times$ reduction in FLOPs, while retaining 95.4\% of the original performance. Code is available at https://github.com/Tencent/SelfEvolvingAgent/tree/main/VScan.
|
https://arxiv.org/abs/2505.22654
|
Academic Papers
|
svg
|
0bb009d3f04637983db354468af0c86901431d2dcf59c4b1b65838b43d9e859f
|
2026-02-02T00:00:00-05:00
|
Genomic-Informed Heterogeneous Graph Learning for Spatiotemporal Avian Influenza Outbreak Forecasting
|
arXiv:2505.22692v5 Announce Type: replace Abstract: Accurate forecasting of Avian Influenza Virus (AIV) outbreaks within wild bird populations necessitates models that account for complex, multi-scale transmission patterns driven by diverse factors. While conventional spatiotemporal epidemic models are robust for human-centric diseases, they rely on spatial homophily and diffusive transmission between geographic regions. This simplification is incomplete for AIV as it neglects valuable genomic information critical for capturing dynamics like high-frequency reassortment and lineage turnover at the case level (e.g., genetic descent across regions), which are essential for understanding AIV spread. To address these limitations, we systematically formulate the AIV forecasting problem and propose BLUE, a bi-layer genomic-aware heterogeneous graph fusion pipeline. This pipeline integrates genetic, spatial, and ecological data to achieve highly accurate outbreak forecasting. It 1) defines a multi-layered graph structure incorporating information from diverse sources and multiple layers (case and location), 2) applies cross-relation smoothing to smooth information flow across edge types, 3) performs graph fusion that preserves critical structural patterns backed by theoretical spectral guarantees, and 4) forecasts future outbreaks using an autoregressive graph sequence model to capture transmission dynamics. To support research, we release the Avian-US dataset, which provides comprehensive genetic, spatial, and ecological data on US avian influenza outbreaks. BLUE demonstrates superior performance over existing baselines, highlighting the efficacy of integrating multi-layer information for infectious disease forecasting. The code is available at: https://github.com/cruiseresearchgroup/BLUE.
|
https://arxiv.org/abs/2505.22692
|
Academic Papers
|
svg
|
b38c282488edf21b36144860edf41a3b9bd08ab2c6f504e99c4c503b6c6676f9
|
2026-02-02T00:00:00-05:00
|
Learning Hierarchical Sparse Transform Coding for 3DGS Compression
|
arXiv:2505.22908v3 Announce Type: replace Abstract: Current 3DGS compression methods largely forego the neural analysis-synthesis transform, which is a crucial component in learned signal compression systems. As a result, redundancy removal is left solely to the entropy coder, overburdening the entropy coding module and reducing rate-distortion (R-D) performance. To fix this critical omission, we propose a training-time transform coding (TTC) method that adds the analysis-synthesis transform and optimizes it jointly with the 3DGS representation and entropy model. Concretely, we adopt a hierarchical design: a channel-wise KLT for decorrelation and energy compaction, followed by a sparsity-aware neural transform that reconstructs the KLT residuals with minimal parameter and computational overhead. Experiments show that our method delivers strong R-D performance with fast decoding, offering a favorable BD-rate-decoding-time trade-off over SOTA 3DGS compressors.
|
https://arxiv.org/abs/2505.22908
|
Academic Papers
|
svg
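The channel-wise KLT stage above amounts to an eigendecomposition of the channel covariance. A minimal sketch (the toy data and function names are mine, and the sparsity-aware neural stage is omitted):

```python
import numpy as np

def channelwise_klt(X):
    """Fit a channel-wise KLT: eigendecomposition of the channel covariance.
    X: (n_points, n_channels) attribute matrix. Returns the orthonormal
    basis (columns sorted by decreasing eigenvalue), the channel mean,
    and the decorrelated coefficients."""
    mu = X.mean(axis=0)
    Xc = X - mu
    cov = Xc.T @ Xc / (len(X) - 1)
    evals, evecs = np.linalg.eigh(cov)          # ascending eigenvalues
    basis = evecs[:, np.argsort(evals)[::-1]]   # descending order
    coeffs = Xc @ basis                         # decorrelated, compacted
    return basis, mu, coeffs

rng = np.random.default_rng(0)
# toy correlated "attributes": 64 channels with strong rank-4 structure
Z = rng.normal(size=(5000, 4))
A = rng.normal(size=(4, 64))
X = Z @ A + 0.01 * rng.normal(size=(5000, 64))
basis, mu, coeffs = channelwise_klt(X)
cov_after = np.cov(coeffs.T)
offdiag = cov_after - np.diag(np.diag(cov_after))
```

After the transform the channels are (numerically) uncorrelated and nearly all the energy sits in the first few coefficients, which is exactly what lightens the entropy coder's load.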
|
d202d14fad836ef103451be16083b43e167bda6a87b4e0db0d170fa028b0a467
|
2026-02-02T00:00:00-05:00
|
Studying the Soupability of Documents in State Space Models
|
arXiv:2505.24033v2 Announce Type: replace Abstract: We investigate whether hidden states from Structured State Space Models (SSMs) can be merged post hoc to support downstream reasoning. Inspired by model souping, we study document souping, a strategy where documents are encoded independently, and their representations are pooled, via simple operations like averaging, into a single context state. This approach enables modular encoding and reuse without reprocessing the full input for each query. We demonstrate that finetuned Mamba2 models with souped representations achieve competitive or superior performance across multi-hop QA, sparse retrieval, and long-document reasoning tasks compared to the standard monolithic encoding approach. For example, on the RACE and QuALITY benchmarks for long document question answering, this method substantially outperforms a traditional concatenation approach. Crucially, this modular design scales to hundreds of documents while delivering substantial savings in inference cost, unlocking new possibilities for large-scale corpus reasoning.
|
https://arxiv.org/abs/2505.24033
|
Academic Papers
|
svg
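Document souping can be sketched with a toy linear state-space recurrence: each document is encoded independently and the resulting states are pooled by averaging. The recurrence and dimensions are illustrative stand-ins for Mamba2's actual state update:

```python
import numpy as np

def encode_doc(tokens, A, B):
    """Run a toy linear state-space recurrence over one document's token
    embeddings and return its final hidden state."""
    h = np.zeros(A.shape[0])
    for x in tokens:
        h = A @ h + B @ x
    return h

def soup_states(states):
    """Document souping: pool independently encoded states by averaging."""
    return np.mean(states, axis=0)

rng = np.random.default_rng(0)
d_state, d_in = 8, 4
A = 0.9 * np.eye(d_state)                  # stable toy transition
B = rng.normal(size=(d_state, d_in))
docs = [rng.normal(size=(rng.integers(5, 15), d_in)) for _ in range(3)]
states = [encode_doc(doc, A, B) for doc in docs]
context = soup_states(states)              # single pooled context state
```

Because each document's state is computed independently, states can be cached and re-pooled for new queries without reprocessing the full corpus.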
|
17f492302d0af5fed6cba780b7488a00dca060c895e51902b22b786c38b29eed
|
2026-02-02T00:00:00-05:00
|
Framing Political Bias in Multilingual LLMs Across Pakistani Languages
|
arXiv:2506.00068v3 Announce Type: replace Abstract: Large Language Models (LLMs) increasingly shape public discourse, yet most evaluations of political and economic bias have focused on high-resource, Western languages and contexts. This leaves critical blind spots in low-resource, multilingual regions such as Pakistan, where linguistic identity is closely tied to political, religious, and regional ideologies. We present a systematic evaluation of political bias in 13 state-of-the-art LLMs across five Pakistani languages: Urdu, Punjabi, Sindhi, Pashto, and Balochi. Our framework integrates a culturally adapted Political Compass Test (PCT) with multi-level framing analysis, capturing both ideological stance (economic/social axes) and stylistic framing (content, tone, emphasis). Prompts are aligned with 11 socio-political themes specific to the Pakistani context. Results show that while LLMs predominantly reflect liberal-left orientations consistent with Western training data, they exhibit more authoritarian framing in regional languages, highlighting language-conditioned ideological modulation. We also identify consistent model-specific bias patterns across languages. These findings show the need for culturally grounded, multilingual bias auditing frameworks in global NLP.
|
https://arxiv.org/abs/2506.00068
|
Academic Papers
|
svg
|
d17f81ed0dff3b1ae76667a725e3300eb56106a0c80b927acade2b4b1e81a925
|
2026-02-02T00:00:00-05:00
|
Unlearning's Blind Spots: Over-Unlearning and Prototypical Relearning Attack
|
arXiv:2506.01318v3 Announce Type: replace Abstract: Machine unlearning (MU) aims to expunge a designated forget set from a trained model without costly retraining, yet the existing techniques overlook two critical blind spots: "over-unlearning" that deteriorates retained data near the forget set, and post-hoc "relearning" attacks that aim to resurrect the forgotten knowledge. Focusing on class-level unlearning, we first derive an over-unlearning metric, OU@epsilon, which quantifies collateral damage in regions proximal to the forget set, where over-unlearning mainly appears. Next, we expose an unforeseen relearning threat on MU, i.e., the Prototypical Relearning Attack, which exploits the per-class prototype of the forget class with just a few samples, and easily restores the pre-unlearning performance. To counter both blind spots in class-level unlearning, we introduce Spotter, a plug-and-play objective that combines (i) a masked knowledge-distillation penalty on the nearby region of forget classes to suppress OU@epsilon, and (ii) an intra-class dispersion loss that scatters forget-class embeddings, neutralizing Prototypical Relearning Attacks. Spotter achieves state-of-the-art results across CIFAR, TinyImageNet, and CASIA-WebFace datasets, offering a practical remedy to unlearning's blind spots.
|
https://arxiv.org/abs/2506.01318
|
Academic Papers
|
svg
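The Prototypical Relearning Attack described above rests on a simple primitive: a per-class prototype computed from just a few samples, followed by nearest-prototype classification. A toy sketch on synthetic 2-D embeddings (the data and names are mine; Spotter's defense is not shown):

```python
import numpy as np

def class_prototype(embeddings):
    """Per-class prototype: the mean embedding of a few samples."""
    return embeddings.mean(axis=0)

def prototype_predict(query, prototypes):
    """Nearest-prototype classification, the mechanism the attack uses to
    restore forget-class predictions from a handful of samples."""
    dists = np.linalg.norm(prototypes - query, axis=1)
    return int(np.argmin(dists))

rng = np.random.default_rng(0)
centers = np.array([[0.0, 0.0], [5.0, 5.0], [0.0, 5.0]])
few_shot = [c + 0.1 * rng.normal(size=(5, 2)) for c in centers]  # 5 per class
prototypes = np.stack([class_prototype(e) for e in few_shot])
pred = prototype_predict(np.array([4.9, 5.2]), prototypes)       # near class 1
```

This is why Spotter's intra-class dispersion loss matters: scattering forget-class embeddings makes the mean a poor prototype.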
|
1c6a08510ee25fb05c506f020af00b3c427a1f1520abb9d4a3b4415a5843198a
|
2026-02-02T00:00:00-05:00
|
A Continual Offline Reinforcement Learning Benchmark for Navigation Tasks
|
arXiv:2506.02883v2 Announce Type: replace Abstract: Autonomous agents operating in domains such as robotics or video game simulations must adapt to changing tasks without forgetting about the previous ones. This process, called Continual Reinforcement Learning, poses non-trivial difficulties, from preventing catastrophic forgetting to ensuring the scalability of the approaches considered. Building on recent advances, we introduce a benchmark providing a suite of video-game navigation scenarios, thus filling a gap in the literature and capturing key challenges: catastrophic forgetting, task adaptation, and memory efficiency. We define a set of various tasks and datasets, evaluation protocols, and metrics to assess the performance of algorithms, including state-of-the-art baselines. Our benchmark is designed not only to foster reproducible research and to accelerate progress in continual reinforcement learning for gaming, but also to provide a reproducible framework for production pipelines -- helping practitioners to identify and to apply effective approaches.
|
https://arxiv.org/abs/2506.02883
|
Academic Papers
|
svg
|
685543a08af532da228fb59efe2ff928432da5ebd044bb443a64bd0e25325b9a
|
2026-02-02T00:00:00-05:00
|
PPO in the Fisher-Rao geometry
|
arXiv:2506.03757v2 Announce Type: replace Abstract: Proximal Policy Optimization (PPO) is widely used in reinforcement learning due to its strong empirical performance, yet it lacks formal guarantees for policy improvement and convergence. PPO's clipped surrogate objective is motivated by a lower bound on a linearization of the value function in a flat-geometry setting. We derive a tighter surrogate objective and introduce Fisher-Rao PPO (FR-PPO) by leveraging the Fisher-Rao (FR) geometry. Our scheme provides strong theoretical guarantees, including monotonic policy improvement. In the direct parametrization setting, we show that FR-PPO achieves sub-linear convergence with no dependence on action or state space dimensions, and for parametrized policies we further obtain sub-linear convergence up to the compatible function approximation error. Finally, although our primary focus is theoretical, we also demonstrate empirically that FR-PPO performs well across a range of standard reinforcement learning tasks.
|
https://arxiv.org/abs/2506.03757
|
Academic Papers
|
svg
|
3a6dba726274a1cde85436549864548c16d02009fc3ef24eed64d082b1c63a9d
|
2026-02-02T00:00:00-05:00
|
Zero-Shot Open-Schema Entity Structure Discovery
|
arXiv:2506.04458v2 Announce Type: replace Abstract: Entity structure extraction, which aims to extract entities and their associated attribute-value structures from text, is an essential task for text understanding and knowledge graph construction. Existing methods based on large language models (LLMs) typically rely heavily on predefined entity attribute schemas or annotated datasets, often leading to incomplete extraction results. To address these challenges, we introduce Zero-Shot Open-schema Entity Structure Discovery (ZOES), a novel approach to entity structure extraction that does not require any schema or annotated samples. ZOES operates via a principled mechanism of enrichment, refinement, and unification, based on the insight that an entity and its associated structure are mutually reinforcing. Experiments demonstrate that ZOES consistently enhances LLMs' ability to extract more complete entity structures across three different domains, showcasing both the effectiveness and generalizability of the method. These findings suggest that such an enrichment, refinement, and unification mechanism may serve as a principled approach to improving the quality of LLM-based entity structure discovery in various scenarios.
|
https://arxiv.org/abs/2506.04458
|
Academic Papers
|
svg
|
881cba546352d50d862b61ac99e9a3995751643a5a0972287b94300b4f7bad31
|
2026-02-02T00:00:00-05:00
|
Are LLMs Stable Formal Logic Translators in Logical Reasoning Across Linguistically Diversified Texts?
|
arXiv:2506.04575v3 Announce Type: replace Abstract: Logical reasoning with large language models (LLMs) has received growing attention. One mainstream approach translates natural language into formal logic and then applies symbolic solvers for deduction. While effective in many tasks, these LLM-based translators often fail to generate consistent symbolic representations when the same concept appears in different linguistic forms. Such inconsistencies break logical coherence and lead to solver errors. However, most existing benchmarks lack this type of linguistic variation, which frequently occurs in real-world text, leaving the problem underexplored. To address this gap, we present SoLT, a benchmark that systematically rewrites reasoning datasets into diverse yet logically equivalent forms across multiple levels. Beyond evaluation, SoLT also provides a general method to enrich any dataset with linguistic diversity while preserving both meaning and logic. To further enhance the stability of LLM-based reasoning, we propose MenTaL, which explicitly guides models to build a concept-symbol mapping table during translation. By linking equivalent expressions to shared symbols, MenTaL maintains consistency and mitigates symbol drift. Experiments on SoLT demonstrate that LLMs indeed suffer from inconsistent symbol mapping under linguistic variation, leading to significant drops in reasoning accuracy. Meanwhile, applying MenTaL brings clear and stable performance improvements across diverse inputs. Overall, our findings reveal that overlooking linguistic diversity hides key weaknesses in LLM-based translators, and our work offers a step toward more reliable logical reasoning in varied real-world scenarios. Our code is available at https://github.com/wufeiwuwoshihua/LinguDiver.
|
https://arxiv.org/abs/2506.04575
|
Academic Papers
|
svg
|
13fdf553b4bc890dd1eb33b965d66bbff46a3ddc13798f191851411b2746b115
|
2026-02-02T00:00:00-05:00
|
Influence Functions for Edge Edits in Non-Convex Graph Neural Networks
|
arXiv:2506.04694v2 Announce Type: replace Abstract: Understanding how individual edges influence the behavior of graph neural networks (GNNs) is essential for improving their interpretability and robustness. Graph influence functions have emerged as promising tools to efficiently estimate the effects of edge deletions without retraining. However, existing influence prediction methods rely on strict convexity assumptions, exclusively consider the influence of edge deletions while disregarding edge insertions, and fail to capture changes in message propagation caused by these modifications. In this work, we propose a proximal Bregman response function specifically tailored for GNNs, relaxing the convexity requirement and enabling accurate influence prediction for standard neural network architectures. Furthermore, our method explicitly accounts for message propagation effects and extends influence prediction to both edge deletions and insertions in a principled way. Experiments with real-world datasets demonstrate accurate influence predictions for different characteristics of GNNs. We further demonstrate that the influence function is versatile in applications such as graph rewiring and adversarial attacks.
|
https://arxiv.org/abs/2506.04694
|
Academic Papers
|
svg
|
f74c4414b2be99af1f787d8a80dbfefe59e957d39d4a56fde3c81c50dd705034
|
2026-02-02T00:00:00-05:00
|
Quasiparticle Interference Kernel Extraction with Variational Autoencoders via Latent Alignment
|
arXiv:2506.05325v2 Announce Type: replace Abstract: Quasiparticle interference (QPI) imaging is a powerful tool for probing electronic structures in quantum materials, but extracting the single-scatterer QPI pattern (i.e., the kernel) from a multi-scatterer image remains a fundamentally ill-posed inverse problem, because many different kernels can combine to produce almost the same observed image, and noise or overlaps further obscure the true signal. Existing solutions to this extraction problem rely on manually zooming into small local regions with isolated single-scatterers. This is infeasible for real cases where scattering conditions are too complex. In this work, we propose the first AI-based framework for QPI kernel extraction, which models the space of physically valid kernels and uses this knowledge to guide the inverse mapping. We introduce a two-step learning strategy that decouples kernel representation learning from observation-to-kernel inference. In the first step, we train a variational autoencoder to learn a compact latent space of scattering kernels. In the second step, we align the latent representation of QPI observations with those of the pre-learned kernels using a dedicated encoder. This design enables the model to infer kernels robustly under complex, entangled scattering conditions. We construct a diverse and physically realistic QPI dataset comprising 100 unique kernels and evaluate our method against a direct one-step baseline. Experimental results demonstrate that our approach achieves significantly higher extraction accuracy and improved generalization to unseen kernels. To further validate its effectiveness, we also apply the method to real QPI data from Ag and FeSe samples, where it reliably extracts meaningful kernels under complex scattering conditions.
|
https://arxiv.org/abs/2506.05325
|
Academic Papers
|
svg
|
712297208d49f197c286e5ac6571a5506e5915ca5a7fb60f962ca3f4a3b1b8be
|
2026-02-02T00:00:00-05:00
|
Ravan: Multi-Head Low-Rank Adaptation for Federated Fine-Tuning
|
arXiv:2506.05568v2 Announce Type: replace Abstract: Large language models (LLMs) have not yet effectively leveraged the vast amounts of edge-device data, and federated learning (FL) offers a promising paradigm to collaboratively fine-tune LLMs without transferring private edge data to the cloud. To operate within the computation and communication constraints of edge devices, recent literature on federated fine-tuning of LLMs proposes the use of low-rank adaptation (LoRA) and similar parameter-efficient methods. However, LoRA-based methods suffer from accuracy degradation in FL settings, primarily because of data and computational heterogeneity across clients. We propose Ravan, an adaptive multi-head LoRA method that balances parameter efficiency and model expressivity by reparameterizing the weight updates as the sum of multiple LoRA heads $s_i\textbf{B}_i\textbf{H}_i\textbf{A}_i$ in which only the core matrices $\textbf{H}_i$ and their lightweight scaling factors $s_i$ are trained. These trainable scaling factors let the optimization focus on the most useful heads, recovering a higher-rank approximation of the full update without increasing the number of communicated parameters since clients upload $s_i\textbf{H}_i$ directly. Experiments on vision and language benchmarks show that Ravan improves test accuracy by $2-8\%$ over prior parameter-efficient baselines, making it a robust and scalable solution for federated fine-tuning of LLMs.
|
https://arxiv.org/abs/2506.05568
|
Academic Papers
|
svg
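The reparameterization above, a weight update of the form sum_i s_i B_i H_i A_i with only the cores H_i and scales s_i trained, can be sketched directly. Dimensions and the identity initialization of the cores are illustrative assumptions:

```python
import numpy as np

def ravan_update(Bs, Hs, As, ss):
    """Multi-head low-rank update dW = sum_i s_i * B_i @ H_i @ A_i.
    B_i and A_i stay frozen; only the r x r cores H_i and the scalar
    scales s_i are trained, and clients upload just s_i * H_i."""
    return sum(s * B @ H @ A for s, B, H, A in zip(ss, Bs, Hs, As))

rng = np.random.default_rng(0)
d_out, d_in, r, n_heads = 64, 64, 4, 3
Bs = [rng.normal(size=(d_out, r)) for _ in range(n_heads)]
As = [rng.normal(size=(r, d_in)) for _ in range(n_heads)]
Hs = [np.eye(r) for _ in range(n_heads)]   # illustrative core init
ss = [1.0, 0.5, 0.25]                      # illustrative scales
dW = ravan_update(Bs, Hs, As, ss)

# communicated parameters per round: the small cores only
comm_ravan = n_heads * r * r               # 48 values
comm_lora = r * (d_out + d_in)             # 512 values for one rank-r LoRA
```

The summed update can reach rank n_heads * r while the per-round communication stays far below a single LoRA head's factor matrices.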
|
3c310237a7855b6a90a0ed9bdfecaf5afe2914ef606e90150628940383dd171c
|
2026-02-02T00:00:00-05:00
|
Antithetic Noise in Diffusion Models
|
arXiv:2506.06185v2 Announce Type: replace Abstract: We systematically study antithetic initial noise in diffusion models, discovering that pairing each noise sample with its negation consistently produces strong negative correlation. This universal phenomenon holds across datasets, model architectures, conditional and unconditional sampling, and even other generative models such as VAEs and Normalizing Flows. To explain it, we combine experiments and theory and propose a \textit{symmetry conjecture} that the learned score function is approximately affine antisymmetric (odd symmetry up to a constant shift), supported by empirical evidence. This negative correlation leads to substantially more reliable uncertainty quantification with up to $90\%$ narrower confidence intervals. We demonstrate these gains on tasks including estimating pixel-wise statistics and evaluating diffusion inverse solvers. We also provide extensions with randomized quasi-Monte Carlo noise designs for uncertainty quantification, and explore additional applications of the antithetic noise design to improve image editing and generation diversity. Our framework is training-free, model-agnostic, and adds no runtime overhead. Code is available at https://github.com/jjia131/Antithetic-Noise-in-Diffusion-Models-page.
|
https://arxiv.org/abs/2506.06185
|
Academic Papers
|
svg
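The antithetic pairing above can be demonstrated on a toy statistic: pairing each noise draw with its negation induces negative correlation and shrinks the variance of the mean estimate. The stand-in function f is my assumption; in the paper the statistic comes from diffusion-model outputs:

```python
import numpy as np

def f(z):
    """Stand-in for a scalar statistic of a sample generated from noise z."""
    return np.exp(0.5 * z)

rng = np.random.default_rng(0)
n = 20_000
z = rng.normal(size=n)

iid_vals = f(z)                             # n independent noise draws
z_half = z[: n // 2]
anti_vals = 0.5 * (f(z_half) + f(-z_half))  # n/2 antithetic pairs

# f(z) and f(-z) are negatively correlated, so the per-pair average has
# lower variance than a single draw, tightening confidence intervals
var_iid_mean = iid_vals.var() / n
var_anti_mean = anti_vals.var() / (n // 2)
```

Both estimators use the same total number of function evaluations; only the pairing differs.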
|
b460be1e5bc846328783a236556251963baea269fca58a594e65d7c6b2d44949
|
2026-02-02T00:00:00-05:00
|
Tokenization Multiplicity Leads to Arbitrary Price Variation in LLM-as-a-service
|
arXiv:2506.06446v2 Announce Type: replace Abstract: Providers of LLM-as-a-service have predominantly adopted a simple pricing model: users pay a fixed price per token. Consequently, one may think that the price two different users would pay for the same output string under the same input prompt is the same. In our work, we show that, surprisingly, this is not (always) true. We find empirical evidence that, particularly for non-English outputs, both proprietary and open-weights LLMs often generate the same (output) string with multiple different tokenizations, even under the same input prompt, and this in turn leads to arbitrary price variation. To address the problem of tokenization multiplicity, we introduce canonical generation, a type of constrained generation that restricts LLMs to only generate canonical tokenizations -- the unique tokenization in which each string is tokenized during the training process of an LLM. Further, we introduce an efficient sampling algorithm for canonical generation based on the Gumbel-Max trick. Experiments on a variety of natural language tasks demonstrate that our sampling algorithm for canonical generation is comparable to standard sampling in terms of performance and runtime, and it solves the problem of tokenization multiplicity.
|
https://arxiv.org/abs/2506.06446
|
Academic Papers
|
svg
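The notion of a canonical tokenization can be illustrated with a toy greedy longest-match tokenizer standing in for a trained one (the vocabulary and the greedy rule are illustrative assumptions; real tokenizers define canonicity via their training-time segmentation): two token sequences can detokenize to the same string while only one is canonical.

```python
VOCAB = {"un", "believ", "able", "unbeliev", "a", "ble", "b", "l", "e", "u", "n"}

def canonical_tokenize(s):
    """Toy stand-in for a tokenizer's canonical segmentation:
    greedy longest-match against the vocabulary."""
    toks, i = [], 0
    while i < len(s):
        for j in range(len(s), i, -1):
            if s[i:j] in VOCAB:
                toks.append(s[i:j])
                i = j
                break
        else:
            raise ValueError(f"untokenizable at position {i}")
    return toks

def is_canonical(tokens):
    """A tokenization is canonical iff re-tokenizing its detokenization
    reproduces it exactly."""
    return canonical_tokenize("".join(tokens)) == tokens

t1 = canonical_tokenize("unbelievable")    # the canonical segmentation
t2 = ["un", "believ", "a", "ble"]          # same string, different tokens
same_string = "".join(t1) == "".join(t2)
```

Under per-token pricing, t1 (2 tokens) and t2 (4 tokens) would cost different amounts for an identical output string, which is the multiplicity problem canonical generation removes.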
|
49ef63083026dc2f1211d5e52ea3e228a49a6b2e36c93ce2ac67166780ee6f47
|
2026-02-02T00:00:00-05:00
|
Video Unlearning via Low-Rank Refusal Vector
|
arXiv:2506.07891v2 Announce Type: replace Abstract: Video generative models achieve high-quality synthesis from natural-language prompts by leveraging large-scale web data. However, this training paradigm inherently exposes them to unsafe biases and harmful concepts, introducing the risk of generating undesirable or illicit content. To mitigate unsafe generations, existing machine unlearning approaches either rely on filtering, and can therefore be bypassed, or they update model weights, but with costly fine-tuning or training-free closed-form edits. We propose the first training-free weight update framework for concept removal in video diffusion models. From five paired safe/unsafe prompts, our method estimates a refusal vector and integrates it into the model weights as a closed-form update. A contrastive low-rank factorization further disentangles the target concept from unrelated semantics, ensuring selective concept suppression without harming generation quality. Our approach reduces unsafe generations on the Open-Sora and ZeroScopeT2V models across the T2VSafetyBench and SafeSora benchmarks, with average reductions of 36.3% and 58.2% respectively, while preserving prompt alignment and video quality. This establishes an efficient and scalable solution for safe video generation without retraining or any inference overhead. Project page: https://www.pinlab.org/video-unlearning.
|
https://arxiv.org/abs/2506.07891
|
Academic Papers
|
svg
|
ece8dac800daeecb92259eecca2008ec3fd16c4350278ccc28b58a4d36f2b913
|
2026-02-02T00:00:00-05:00
|
Diffusion Models under Alternative Noise: Simplified Analysis and Sensitivity
|
arXiv:2506.08337v2 Announce Type: replace Abstract: Diffusion models, typically formulated as discretizations of stochastic differential equations (SDEs), have achieved state-of-the-art performance in generative tasks. However, their theoretical analysis often involves complex proofs. In this work, we present a simplified framework for analyzing the Euler--Maruyama discretization of variance-preserving SDEs (VP-SDEs). Using Gr\"onwall's inequality, we derive a convergence rate of $O(T^{-1/2})$ under standard Lipschitz assumptions, streamlining prior analyses. We then demonstrate that the standard Gaussian noise can be replaced by computationally cheaper discrete random variables (e.g., Rademacher) without sacrificing this convergence guarantee, provided the mean and variance are matched. Our experiments validate this theory, showing that (i) discrete noise achieves sample quality comparable to Gaussian noise provided the variance is matched correctly, and (ii) performance degrades if the noise variance is scaled incorrectly.
|
https://arxiv.org/abs/2506.08337
|
Academic Papers
|
svg
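The noise substitution above can be sketched for the forward VP-SDE dX = -0.5*beta*X dt + sqrt(beta) dW: the Euler--Maruyama increment only needs a zero-mean, unit-variance noise, so Rademacher draws can replace Gaussians. Parameter values here are illustrative:

```python
import numpy as np

def em_vpsde(x0, beta, n_steps, dt, noise_fn, rng):
    """Euler-Maruyama discretization of the forward VP-SDE
    dX = -0.5*beta*X dt + sqrt(beta) dW. The Gaussian increment is
    swapped for any zero-mean, unit-variance noise via noise_fn."""
    x = x0.copy()
    for _ in range(n_steps):
        z = noise_fn(rng, x.shape)          # mean 0, variance 1 per step
        x = x - 0.5 * beta * x * dt + np.sqrt(beta * dt) * z
    return x

gauss = lambda rng, shape: rng.normal(size=shape)
rademacher = lambda rng, shape: rng.choice([-1.0, 1.0], size=shape)

rng = np.random.default_rng(0)
x0 = np.ones(50_000)
xg = em_vpsde(x0, beta=1.0, n_steps=200, dt=0.02, noise_fn=gauss, rng=rng)
xr = em_vpsde(x0, beta=1.0, n_steps=200, dt=0.02, noise_fn=rademacher, rng=rng)
# matched first two moments => closely matching terminal mean and variance
```

Rademacher draws are cheaper to generate than Gaussians, and per the abstract the convergence guarantee survives as long as mean and variance are matched.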
|
b9677aafd04e678b8e94636cf4bf68839f2948dcbeeff78216497b51d537294c
|
2026-02-02T00:00:00-05:00
|
BNMusic: Blending Environmental Noises into Personalized Music
|
arXiv:2506.10754v3 Announce Type: replace Abstract: Acoustic masking is a conventional audio-engineering technique for reducing the annoyance of environmental noises by covering them up with other dominant yet less intrusive sounds. However, misalignment between the dominant sound and the noise (such as mismatched downbeats) often requires an excessive volume increase to achieve effective masking. Motivated by recent advances in cross-modal generation, in this work, we introduce an alternative method to acoustic masking, aiming to reduce the noticeability of environmental noises by blending them into personalized music generated based on user-provided text prompts. Following the paradigm of music generation using mel-spectrogram representations, we propose a Blending Noises into Personalized Music (BNMusic) framework with two key stages. The first stage synthesizes a complete piece of music in a mel-spectrogram representation that encapsulates the musical essence of the noise. In the second stage, we adaptively amplify the generated music segment to further reduce noise perception and enhance the blending effectiveness, while preserving auditory quality. Our experiments with comprehensive evaluations on MusicBench, EPIC-SOUNDS, and ESC-50 demonstrate the effectiveness of our framework, highlighting the ability to blend environmental noise with rhythmically aligned, adaptively amplified, and enjoyable music segments, minimizing the noticeability of the noise, thereby improving overall acoustic experiences. Project page: https://d-fas.github.io/BNMusic_page/.
|
https://arxiv.org/abs/2506.10754
|
Academic Papers
|
svg
|
5cd3791378093f0f0d6c3efd2690ece717c810a9375f096b8c92a81f0a9220a8
|
2026-02-02T00:00:00-05:00
|
Synthetic Socratic Debates: Examining Persona Effects on Moral Decision and Persuasion Dynamics
|
arXiv:2506.12657v2 Announce Type: replace Abstract: As large language models (LLMs) are increasingly used in morally sensitive domains, it is crucial to understand how persona traits affect their moral reasoning and persuasive behavior. We present the first large-scale study of multi-dimensional persona effects in AI-AI debates over real-world moral dilemmas. Using a 6-dimensional persona space (age, gender, country, class, ideology, and personality), we simulate structured debates between AI agents over 131 relationship-based cases. Our results show that personas affect initial moral stances and debate outcomes, with political ideology and personality traits exerting the strongest influence. Persuasive success varies across traits, with liberal and open personalities reaching higher consensus and win rates. While logit-based confidence grows during debates, emotional and credibility-based appeals diminish, indicating more tempered argumentation over time. These trends mirror findings from psychology and cultural studies, reinforcing the need for persona-aware evaluation frameworks for AI moral reasoning.
|
https://arxiv.org/abs/2506.12657
|
Academic Papers
|
svg
|
afad45563d44c33f9339a4b7ee5a2f3d8dd04bcaa29841ac64727173c94de9af
|
2026-02-02T00:00:00-05:00
|
SuperPoint-SLAM3: Augmenting ORB-SLAM3 with Deep Features, Adaptive NMS, and Learning-Based Loop Closure
|
arXiv:2506.13089v2 Announce Type: replace Abstract: Visual simultaneous localization and mapping (SLAM) must remain accurate under extreme viewpoint, scale and illumination variations. The widely adopted ORB-SLAM3 falters in these regimes because it relies on hand-crafted ORB keypoints. We introduce SuperPoint-SLAM3, a drop-in upgrade that (i) replaces ORB with the self-supervised SuperPoint detector--descriptor, (ii) enforces spatially uniform keypoints via adaptive non-maximal suppression (ANMS), and (iii) integrates a lightweight NetVLAD place-recognition head for learning-based loop closure. On the KITTI Odometry benchmark SuperPoint-SLAM3 reduces mean translational error from 4.15% to 0.34% and mean rotational error from 0.0027 deg/m to 0.0010 deg/m. On the EuRoC MAV dataset it roughly halves both errors across every sequence (e.g., V2_03: 1.58% -> 0.79%). These gains confirm that fusing modern deep features with a learned loop-closure module markedly improves ORB-SLAM3 accuracy while preserving its real-time operation. Implementation, pretrained weights and reproducibility scripts are available at https://github.com/shahram95/SuperPointSLAM3.
|
https://arxiv.org/abs/2506.13089
|
Academic Papers
|
svg
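Adaptive non-maximal suppression can be sketched as a binary search over the suppression radius until roughly a target number of spatially uniform keypoints survive. This simplified greedy variant is my illustration, not SuperPoint-SLAM3's exact ANMS algorithm:

```python
import numpy as np

def nms_radius(pts, scores, radius):
    """Greedy radius NMS: keep a point only if it lies farther than
    `radius` from every higher-scored point already kept."""
    kept = []
    for i in np.argsort(scores)[::-1]:
        if not kept or np.linalg.norm(pts[kept] - pts[i], axis=1).min() > radius:
            kept.append(i)
    return kept

def adaptive_nms(pts, scores, target, iters=25):
    """Adaptive NMS: binary-search the suppression radius so that roughly
    `target` spatially uniform keypoints survive."""
    lo, hi = 0.0, float(np.ptp(pts, axis=0).max())
    kept = list(range(len(pts)))
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        cand = nms_radius(pts, scores, mid)
        if len(cand) > target:
            lo = mid                 # radius too small: too many survivors
        else:
            hi, kept = mid, cand     # feasible: shrink radius from above
    return kept, hi

rng = np.random.default_rng(0)
pts = rng.uniform(0.0, 100.0, size=(500, 2))   # toy keypoint locations
scores = rng.uniform(size=500)                 # toy detector scores
kept, radius = adaptive_nms(pts, scores, target=50)
```

Every surviving keypoint is guaranteed to be at least `radius` away from every other survivor, which is what enforces spatial uniformity.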
|
1de373f709b93ad3b58d9e41d72e916f95d264d3736de29364b6ce00eabb0f2e
|
2026-02-02T00:00:00-05:00
|
Direct Reasoning Optimization: Constrained RL with Token-Level Dense Reward and Rubric-Gated Constraints for Open-ended Tasks
|
arXiv:2506.13351v2 Announce Type: replace Abstract: RL training of LLMs on open-ended tasks is challenging due to the lack of direct verifiability. In this paper, we frame such training as constrained RL that (i) optimizes a token-level dense Reasoning Reflection Reward (R3) aligned with reasoning quality, and (ii) enforces rubric-gating as feasibility constraints at the rollout group level. R3 measures the model's token-level certainty of a reference answer under its CoT reasoning prefix while selectively emphasizing reasoning-reflective tokens to capture how likely the generated reasoning is to yield the desired answer. Rubric-gating complements R3 by operationalizing principled task criteria as hard accept/reject checks on final answers. Empirically, across four datasets, our framework outperforms baselines, achieves faster, more sample-efficient learning, and respects feasibility constraints.
|
https://arxiv.org/abs/2506.13351
|
Academic Papers
|
svg
|
754942013a8d1a615ca062011c00f137a12d1cbcfb50c056a3759e05fa8e51a8
|
2026-02-02T00:00:00-05:00
|
A hybrid isogeometric and finite element method: NURBS-enhanced finite element method for hexahedral meshes (NEFEM-HEX)
|
arXiv:2506.13694v3 Announce Type: replace Abstract: In this paper, we present a NURBS-enhanced finite element method that integrates the NURBS-based boundary representation of a geometric domain into a standard finite element framework for hexahedral meshes. We decompose an open, bounded, convex three-dimensional domain with a NURBS boundary into two parts, define NURBS-enhanced finite elements over the boundary layer, and use piecewise-linear Lagrange finite elements in the interior region. We introduce a special quadrature rule and a stable interpolation operator for the NURBS-enhanced elements. We discuss how the h-refinement in finite element analysis and the knot insertion in isogeometric analysis can be utilized in the refinement of the NURBS-enhanced elements. To illustrate an application of our methodology, we utilize a generic weak formulation of a second-order linear elliptic boundary value problem and derive a priori error estimates in the $H^{1}$ norm. In addition, we use the Poisson problem as a model problem and provide numerical results that support the theoretical results. The proposed methodology combines the efficiency of finite element analysis with the geometric precision of NURBS, and may enable more accurate and efficient simulations over complex geometries.
|
https://arxiv.org/abs/2506.13694
|
Academic Papers
|
svg
|
a5689389ee080ac521d58860c051144c8b3ea8308fabb7c31313e476dc0e6cd5
|
2026-02-02T00:00:00-05:00
|
Stretching Beyond the Obvious: A Gradient-Free Framework to Unveil the Hidden Landscape of Visual Invariance
|
arXiv:2506.17040v2 Announce Type: replace Abstract: Uncovering which feature combinations are encoded by visual units is critical to understanding how images are transformed into representations that support recognition. While existing feature visualization approaches typically infer a unit's most exciting images, this is insufficient to reveal the manifold of transformations under which responses remain invariant, which is critical to generalization in vision. Here we introduce Stretch-and-Squeeze (SnS), a model-agnostic, gradient-free framework to systematically characterize a unit's maximally invariant stimuli, and its vulnerability to adversarial perturbations, in both biological and artificial visual systems. SnS frames these transformations as bi-objective optimization problems. To probe invariance, SnS seeks image perturbations that maximally alter (stretch) the representation of a reference stimulus in a given processing stage while preserving unit activation downstream (squeeze). To probe adversarial sensitivity, stretching and squeezing are reversed to maximally perturb unit activation while minimizing changes to the upstream representation. Applied to CNNs, SnS revealed invariant transformations that were farther from a reference image in pixel-space than those produced by affine transformations, while more strongly preserving the target unit's response. The discovered invariant images differed depending on the stage of the image representation used for optimization: pixel-level changes primarily affected luminance and contrast, while stretching mid- and late-layer representations mainly altered texture and pose. By measuring how well the hierarchical invariant images obtained for L2 robust networks were classified by humans and other observer networks, we discovered a substantial drop in their interpretability when the representation was stretched in deep layers, while the opposite trend was found for standard models.
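The gradient-free, bi-objective search can be caricatured as an accept/reject hill-climber (a loose sketch only: the objective names and tolerance are assumptions, and SnS uses a proper bi-objective optimizer rather than this scalarized loop):

```python
import random

def stretch_and_squeeze(x0, stretch_fn, squeeze_fn, steps=500, sigma=0.05):
    """Randomly perturb a stimulus; accept a candidate only if it increases
    the 'stretch' objective (representation change) while keeping the
    'squeeze' objective (downstream unit activation) within tolerance of
    its initial value."""
    x = list(x0)
    best = stretch_fn(x)
    base = squeeze_fn(x)
    for _ in range(steps):
        cand = [v + random.gauss(0.0, sigma) for v in x]
        if stretch_fn(cand) > best and squeeze_fn(cand) >= base - 0.01:
            x, best = cand, stretch_fn(cand)
    return x
```

Swapping the two objectives turns the same loop into the adversarial probe: maximally change the activation while minimally perturbing the upstream representation.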
|
https://arxiv.org/abs/2506.17040
|
Academic Papers
|
svg
|
460b66e3d27fa88b734b8cc45d9e5054f2386b9f1b5cb6413b2be2ece7c2a60f
|
2026-02-02T00:00:00-05:00
|
Offline Goal-Conditioned Reinforcement Learning with Projective Quasimetric Planning
|
arXiv:2506.18847v3 Announce Type: replace Abstract: Offline Goal-Conditioned Reinforcement Learning seeks to train agents to reach specified goals from previously collected trajectories. Scaling that promise to long-horizon tasks remains challenging, notably due to compounding value-estimation errors. Principled geometric reasoning offers a potential solution to these issues. Following this insight, we introduce Projective Quasimetric Planning (ProQ), a compositional framework that learns an asymmetric distance and then repurposes it, firstly as a repulsive energy forcing a sparse set of keypoints to spread uniformly over the learned latent space, and secondly as a structured directional cost guiding towards proximal sub-goals. In particular, ProQ couples this geometry with a Lagrangian out-of-distribution detector to ensure the learned keypoints stay within reachable areas. By unifying metric learning, keypoint coverage, and goal-conditioned control, our approach produces meaningful sub-goals and robustly drives long-horizon goal-reaching on diverse navigation benchmarks.
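The repulsive-energy idea behind keypoint coverage can be illustrated in two dimensions (a toy version with inverse-square Euclidean repulsion clipped to the unit square; ProQ applies this in a learned latent space with a quasimetric, not raw 2D coordinates):

```python
import math

def spread_keypoints(points, steps=100, lr=0.01):
    """Push each keypoint away from every other keypoint with an
    inverse-square force so an initially clustered set spreads out
    to cover the space more uniformly."""
    pts = [list(p) for p in points]
    for _ in range(steps):
        forces = [[0.0, 0.0] for _ in pts]
        for i in range(len(pts)):
            for j in range(len(pts)):
                if i == j:
                    continue
                dx = pts[i][0] - pts[j][0]
                dy = pts[i][1] - pts[j][1]
                d = math.hypot(dx, dy) + 1e-9
                forces[i][0] += dx / d ** 3
                forces[i][1] += dy / d ** 3
        for i, (fx, fy) in enumerate(forces):
            pts[i][0] = min(1.0, max(0.0, pts[i][0] + lr * fx))
            pts[i][1] = min(1.0, max(0.0, pts[i][1] + lr * fy))
    return pts
```

In ProQ the analogous clipping role is played by the Lagrangian out-of-distribution detector, which keeps keypoints inside reachable regions rather than inside a box.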
|
https://arxiv.org/abs/2506.18847
|
Academic Papers
|
svg
|
598614ea8964ee46f740575cb806d17efe726714b9cd291989b54a66a43f65ff
|
2026-02-02T00:00:00-05:00
|
The lightning method for the heat equation
|
arXiv:2506.22576v3 Announce Type: replace Abstract: This paper introduces a new method for solving the planar heat equation based on the Lightning Method. The lightning method is a recent development in the numerical solution of linear PDEs which expresses solutions using sums of polynomials and rational functions, or more generally as sums of fundamental solutions. The method is particularly well suited to handle domains with sharp corners where solution singularities are present. Boundary conditions are formed on a set of collocation points which is then solved as an overdetermined linear system. The approach of the present work is to utilize the Laplace transform to obtain a modified Helmholtz equation which is solved by an application of the lightning method. The numerical inversion of the Laplace transform is then performed by means of Talbot integration. Our validation of the method against existing results and multiple challenging test problems shows the method attains spectral accuracy with root-exponential convergence while being robust across a wide range of time intervals and adaptable to a variety of geometric scenarios.
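The Talbot inversion step can be illustrated with the standard fixed-Talbot quadrature (a generic sketch of the contour-integration idea using the Abate-Valko parameterization, not the paper's implementation):

```python
import cmath
import math

def talbot_invert(F, t, M=32):
    """Invert a Laplace transform F(s) numerically at time t by sampling
    F along the Talbot contour s(theta) = r*theta*(cot(theta) + i)."""
    r = 2.0 * M / (5.0 * t)
    total = 0.5 * (F(r) * math.exp(r * t)).real  # theta = 0 endpoint term
    for k in range(1, M):
        theta = k * math.pi / M
        cot = math.cos(theta) / math.sin(theta)
        s = r * theta * (cot + 1j)
        sigma = theta + (theta * cot - 1.0) * cot
        total += (cmath.exp(s * t) * F(s) * (1 + 1j * sigma)).real
    return (r / M) * total
```

For smooth transforms the error decays roughly exponentially in M, which is why modest node counts already reach near machine precision.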
|
https://arxiv.org/abs/2506.22576
|
Academic Papers
|
svg
|
a6e26840ce564bdc21a1ac1d5b5598a3f00bf90b13936fb904577717f48f24f6
|
2026-02-02T00:00:00-05:00
|
On the rank weight hierarchy of $M$-codes
|
arXiv:2507.00609v3 Announce Type: replace Abstract: We study the rank weight hierarchy of linear codes which are stable under a linear endomorphism defined over the base field, in particular when the endomorphism is cyclic. In this last case, we give a necessary and sufficient condition for such a code to have first rank weight equal to $1$ in terms of its generator polynomial, as well as an explicit formula for its last rank weight.
|
https://arxiv.org/abs/2507.00609
|
Academic Papers
|
svg
|
eec535dbdaf4ca6d1feb8f2732f2a8eb51c7112b7329635102c8a761914000bf
|
2026-02-02T00:00:00-05:00
|
SAFER: Probing Safety in Reward Models with Sparse Autoencoder
|
arXiv:2507.00665v3 Announce Type: replace Abstract: Reinforcement learning from human feedback (RLHF) is a key paradigm for aligning large language models (LLMs) with human values, yet the reward models at its core remain largely opaque. In this work, we present Sparse Autoencoder For Enhanced Reward model (SAFER), a novel framework for interpreting and improving reward models through mechanistic analysis. Leveraging Sparse Autoencoders (SAEs), we uncover human-interpretable features in reward model activations, enabling insight into safety-relevant decision-making. We apply SAFER to safety-oriented preference datasets and quantify the salience of individual features by activation differences between chosen and rejected responses. Using these feature-level signals, we design targeted data poisoning and denoising strategies. Experiments show that SAFER can precisely degrade or enhance safety alignment with minimal data modification, without sacrificing general chat performance. Our approach contributes to interpreting, auditing and refining reward models in high-stakes LLM alignment tasks. Our codes are available at https://github.com/xzy-101/SAFER-code. This paper discusses topics related to reward model safety and may include discussions or examples that highlight potential risks or unsafe outcomes.
|
https://arxiv.org/abs/2507.00665
|
Academic Papers
|
svg
|
3e1d002f4e70533dc3374b8a3efe5d3571fb0064befe5ade57ab2d996a818397
|
2026-02-02T00:00:00-05:00
|
DiffusionLight-Turbo: Accelerated Light Probes for Free via Single-Pass Chrome Ball Inpainting
|
arXiv:2507.01305v2 Announce Type: replace Abstract: We introduce a simple yet effective technique for estimating lighting from a single low-dynamic-range (LDR) image by reframing the task as a chrome ball inpainting problem. This approach leverages a pre-trained diffusion model, Stable Diffusion XL, to overcome the generalization failures of existing methods that rely on limited HDR panorama datasets. While conceptually simple, the task remains challenging because diffusion models often insert incorrect or inconsistent content and cannot readily generate chrome balls in HDR format. Our analysis reveals that the inpainting process is highly sensitive to the initial noise in the diffusion process, occasionally resulting in unrealistic outputs. To address this, we first introduce DiffusionLight, which uses iterative inpainting to compute a median chrome ball from multiple outputs to serve as a stable, low-frequency lighting prior that guides the generation of a high-quality final result. To generate high-dynamic-range (HDR) light probes, an Exposure LoRA is fine-tuned to create LDR images at multiple exposure values, which are then merged. While effective, DiffusionLight is time-intensive, requiring approximately 30 minutes per estimation. To reduce this overhead, we introduce DiffusionLight-Turbo, which reduces the runtime to about 30 seconds with minimal quality loss. This 60x speedup is achieved by training a Turbo LoRA to directly predict the averaged chrome balls from the iterative process. Inference is further streamlined into a single denoising pass using a LoRA swapping technique. Experimental results show that our method produces convincing light estimates across diverse settings and demonstrates superior generalization to in-the-wild scenarios. Our code is available at https://diffusionlight.github.io/turbo
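The exposure-bracketing merge step can be sketched with standard hat-weighted HDR fusion (an illustrative assumption of the pipeline's merge, treating images as flat lists of values in [0, 1] with pixel ~ radiance * 2^EV; the paper's actual merge may differ):

```python
def merge_exposures(ldr_images, exposures):
    """Merge aligned LDR captures taken at different exposure values (EV)
    into a linear HDR estimate, trusting mid-tone pixels the most."""
    hdr = []
    for pix in zip(*ldr_images):
        num = den = 0.0
        for v, ev in zip(pix, exposures):
            w = 1.0 - abs(2.0 * v - 1.0)  # hat weight: 1 at 0.5, 0 at 0 and 1
            num += w * v / (2.0 ** ev)    # undo exposure to linear radiance
            den += w
        hdr.append(num / den if den > 0 else pix[0] / (2.0 ** exposures[0]))
    return hdr
```

Because darker exposures keep bright highlights below clipping, the fused result recovers radiance values well above what a single LDR frame can encode.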
|
https://arxiv.org/abs/2507.01305
|
Academic Papers
|
svg
|
2994e8be15692e4680c82ea8cf3c261a9edc96a593aacf600498f43b52f9de6a
|
2026-02-02T00:00:00-05:00
|
Hybrid Approach to Directed Fuzzing
|
arXiv:2507.04855v2 Announce Type: replace Abstract: Program analysis and automated testing have recently become an essential part of SSDLC. Directed greybox fuzzing is one of the most popular automated testing methods that focuses on error detection in predefined code regions. However, it still lacks the ability to overcome difficult program constraints. This problem can be well addressed by symbolic execution, but at the cost of lower performance. Thus, combining directed fuzzing and symbolic execution techniques can lead to more efficient error detection. In this paper, we propose a hybrid approach to directed fuzzing with a novel seed scheduling algorithm based on target-related interestingness and coverage. The approach also performs minimization and sorting of objective seeds according to target-related information. We implement our approach in the Sydr-Fuzz tool using LibAFL-DiFuzz as the directed fuzzer and Sydr as the dynamic symbolic executor. We evaluate our approach with the Time to Exposure metric and compare it with pure LibAFL-DiFuzz, AFLGo, BEACON, WAFLGo, WindRanger, FishFuzz, and Prospector. The results show an improvement for 3 out of 7 examples with speedup up to 1.86 times over the second best result, as well as a significant improvement for 3 out of 7 examples over the pure LibAFL-DiFuzz fuzzer. The Sydr-Fuzz hybrid approach to directed fuzzing shows high performance and helps to improve directed fuzzing efficiency.
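The seed-scheduling idea can be sketched as a priority queue that favors seeds closer to the target and seeds discovering new coverage (the scoring formula and field names are illustrative assumptions, not Sydr-Fuzz internals):

```python
import heapq

class SeedScheduler:
    """Toy directed-fuzzing seed queue: seeds with a smaller distance to
    the target location, adjusted by a bonus for newly covered edges,
    are scheduled first."""

    def __init__(self):
        self._heap = []
        self._counter = 0  # tie-breaker preserving insertion order

    def add(self, seed_id, target_distance, new_edges):
        score = target_distance - 0.5 * new_edges  # lower is better
        heapq.heappush(self._heap, (score, self._counter, seed_id))
        self._counter += 1

    def next_seed(self):
        return heapq.heappop(self._heap)[2]
```

In a hybrid setup, seeds stuck on hard constraints would additionally be handed to the symbolic executor instead of being rescheduled indefinitely.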
|
https://arxiv.org/abs/2507.04855
|
Academic Papers
|
svg
|
088a27d2da84b0e89edf46918669e304db2f80fce9ccbe56ce67251a34ee4f41
|
2026-02-02T00:00:00-05:00
|
Spattack: Subgroup Poisoning Attacks on Federated Recommender Systems
|
arXiv:2507.06258v2 Announce Type: replace Abstract: Federated recommender systems (FedRec) have emerged as a promising approach to provide personalized recommendations while protecting user privacy. However, recent studies have shown their vulnerability to poisoning attacks, where malicious clients inject crafted gradients to promote target items to benign users. Existing attacks typically target the full user group, which compromises stealth and increases detection risk. In contrast, real-world adversaries may prefer to target specific user subgroups, such as promoting health supplements to older individuals, to maximize effectiveness while preserving stealth. Motivated by this gap, we introduce Spattack, the first poisoning attack designed to manipulate recommendations for specific user subgroups in federated settings. Spattack adopts an approximate-and-promote paradigm, which approximates user embeddings of target and non-target subgroups and then promotes target items to the target subgroup. We further reveal a trade-off between strong attack performance on the target subgroup and limited impact on the non-target subgroup. To achieve a better trade-off, we propose enhanced approximation and promotion strategies. For approximation, we push embeddings of different subgroups apart via contrastive learning and augment the target subgroup's relevant item set through clustering. For promotion, we align embeddings of target items and relevant items to strengthen their semantic connections, together with an adaptive weighting strategy to balance effects across subgroups. Experiments on three real-world datasets demonstrate that Spattack achieves strong attack performance on the target subgroup with minimal impact on non-target users, even when only 0.1% of users are malicious. Moreover, Spattack maintains competitive recommendation performance and shows strong resilience against mainstream defenses.
|
https://arxiv.org/abs/2507.06258
|
Academic Papers
|
svg
|
ab89184d8a37832305ef906fbc29555e26316523151e8868e877863383528fee
|
2026-02-02T00:00:00-05:00
|
Online Navigation Refinement: Achieving Lane-Level Guidance by Associating Standard-Definition and Online Perception Maps
|
arXiv:2507.07487v5 Announce Type: replace Abstract: Lane-level navigation is critical for geographic information systems and navigation-based tasks, offering finer-grained guidance than the road-level navigation provided by standard definition (SD) maps. However, it currently relies on expensive global HD maps that cannot adapt to dynamic road conditions. Recently, online perception (OP) maps have become a research hotspot, providing real-time geometry as an alternative, but they lack the global topology needed for navigation. To address these issues, we introduce Online Navigation Refinement (ONR), a new task that refines SD-map-based road-level routes into accurate lane-level navigation by associating SD maps with OP maps. The map-to-map association must handle many-to-one lane-to-road mappings under two key challenges: (1) no public dataset provides lane-to-road correspondences; and (2) severe misalignment from spatial fluctuations, semantic disparities, and OP map noise invalidates traditional map matching. For these challenges, we contribute: (1) the Online Map Association dataset (OMA), the first ONR benchmark with 30K scenarios and 2.6M annotated lane vectors; (2) MAT, a transformer with path-aware attention that aligns topology despite spatial fluctuations and semantic disparities, and spatial attention that integrates noisy OP features via global context; and (3) NR P-R, a metric evaluating geometric and semantic alignment. Experiments show that MAT outperforms existing methods at 34 ms latency, enabling low-cost and up-to-date lane-level navigation.
|
https://arxiv.org/abs/2507.07487
|
Academic Papers
|
svg
|
9f50bb36a2ab7664befe0b1fd43994a4541ea09ed9a8531e4a4f8151df63d203
|
2026-02-02T00:00:00-05:00
|
FloorplanQA: A Benchmark for Spatial Reasoning in LLMs using Structured Representations
|
arXiv:2507.07644v3 Announce Type: replace Abstract: We introduce FloorplanQA, a diagnostic benchmark for evaluating spatial reasoning in large language models (LLMs). FloorplanQA is grounded in structured representations of indoor scenes (e.g., kitchens, living rooms, bedrooms, and bathrooms), encoded symbolically in JSON or XML layouts. The benchmark covers core spatial tasks, including distance measurement, visibility, path finding, and object placement within constrained spaces. Our results across a variety of frontier open-source and commercial LLMs reveal that while models may succeed at shallow queries, they often fail to respect physical constraints or preserve spatial coherence, though they remain mostly robust to small spatial perturbations. FloorplanQA uncovers a blind spot in today's LLMs: inconsistent reasoning about indoor layouts. We hope this benchmark inspires new work on language models that can accurately infer and manipulate spatial and geometric properties in practical settings.
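A distance-measurement query over such a symbolic layout reduces to simple geometry once the representation is parsed (the JSON schema below is a hypothetical illustration, not the benchmark's actual format):

```python
import json
import math

def object_distance(layout_json, a, b):
    """Center-to-center distance between two axis-aligned objects in a
    symbolic floor-plan layout with fields name, x, y, w, h."""
    layout = json.loads(layout_json)
    objs = {o["name"]: o for o in layout["objects"]}

    def center(o):
        return (o["x"] + o["w"] / 2.0, o["y"] + o["h"] / 2.0)

    (ax, ay), (bx, by) = center(objs[a]), center(objs[b])
    return math.hypot(ax - bx, ay - by)
```

The benchmark's point is precisely that such ground-truth answers are computable from the layout, so LLM responses can be scored exactly.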
|
https://arxiv.org/abs/2507.07644
|
Academic Papers
|
svg
|
b3b337b55eee4c46f7d6b5c06a2b5578806af836ada6aa01e0f561dc854de048
|
2026-02-02T00:00:00-05:00
|
BlindSight: Harnessing Sparsity for Efficient Vision-Language Models
|
arXiv:2507.09071v3 Announce Type: replace Abstract: Large vision-language models (VLMs) enable joint processing of text and images. However, incorporating vision data significantly increases the prompt length, resulting in a longer time to first token (TTFT). This bottleneck can be alleviated by leveraging the inherent sparsity in the attention computation. Analyzing these attention patterns in VLMs when processing a series of images, we observe the absence of inter-image attention in a substantial portion of layers. Based on this, we propose BlindSight: an approach to optimize multi-image VLM inference using an input-template-aware attention sparsity mask with no runtime overhead. We utilize a dataset to derive a prompt-agnostic categorization for attention heads: Dense, Sink, Intra-Image, and Intra-Image+Sink. We develop a Triton-based GPU kernel to leverage this sparsity. BlindSight achieves a 1.8-3.2x speedup in the attention computation (prompt length 36K-300K). BlindSight generalizes across VLMs (Qwen2-VL, Qwen2.5-VL, Gemma 3), with only a 0.78% absolute accuracy degradation on average on multi-image comprehension benchmarks. Finally, we advocate for the design of efficient VLMs that combine BlindSight-inspired sparse and dense layers.
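The input-template-aware sparsity can be sketched as a boolean mask for an "Intra-Image+Sink" head (a hypothetical sketch of the masking idea in plain Python; the paper implements this as a Triton GPU kernel):

```python
def blindsight_mask(image_spans, seq_len, sink_len=4):
    """Causal attention mask where each token may attend to initial sink
    tokens, to non-image (text) tokens, and within its own image span,
    but never across image spans."""
    def span_of(i):
        for s, (lo, hi) in enumerate(image_spans):
            if lo <= i < hi:
                return s
        return None  # token is outside every image span

    mask = [[False] * seq_len for _ in range(seq_len)]
    for q in range(seq_len):
        for k in range(q + 1):  # causal: only current and past positions
            qs, ks = span_of(q), span_of(k)
            if k < sink_len or ks is None or qs == ks:
                mask[q][k] = True
    return mask
```

Since the mask depends only on the prompt template (where the image spans sit), it can be precomputed once per input layout with no runtime overhead.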
|
https://arxiv.org/abs/2507.09071
|
Academic Papers
|
svg
|
96682fa24a38d53d6f7f4202516633e01746fbcd59182e94d7ddc24734b6280e
|
2026-02-02T00:00:00-05:00
|
A Pre-training Framework for Relational Data with Information-theoretic Principles
|
arXiv:2507.09837v2 Announce Type: replace Abstract: Relational databases underpin critical infrastructure across a wide range of domains, yet the design of generalizable pre-training strategies for learning from relational databases remains an open challenge due to task heterogeneity. Specifically, there exist many possible downstream tasks, as tasks are defined based on relational schema graphs, temporal dependencies, and SQL-defined label logics. An effective pre-training framework is desired to take these factors into account in order to obtain task-aware representations. By incorporating knowledge of the underlying distribution that drives label generation, downstream tasks can benefit from relevant side-channel information. To bridge this gap, we introduce Task Vector Estimation (TVE), a novel pre-training framework that constructs predictive supervisory signals via set-based aggregation over schema traversal graphs, explicitly modeling next-window relational dynamics. We formalize our approach through an information-theoretic lens, demonstrating that task-informed representations retain more relevant signals than those obtained without task priors. Extensive experiments on the RelBench benchmark show that TVE consistently outperforms traditional pre-training baselines. Our findings advocate for pre-training objectives that encode task heterogeneity and temporal structure as design principles for predictive modeling on relational databases. Our code is publicly available at https://github.com/quang-truong/task-vector-estimation.
|
https://arxiv.org/abs/2507.09837
|
Academic Papers
|
svg
|
38073effd1880ac8fdd5037130791fc07dc76e019fa3a8b1063c4931836d111c
|
2026-02-02T00:00:00-05:00
|
EquiContact: A Hierarchical SE(3) Vision-to-Force Equivariant Policy for Spatially Generalizable Contact-rich Tasks
|
arXiv:2507.10961v4 Announce Type: replace Abstract: This paper presents a framework for learning vision-based robotic policies for contact-rich manipulation tasks that generalize spatially across task configurations. We focus on achieving robust spatial generalization of the policy for the peg-in-hole (PiH) task trained from a small number of demonstrations. We propose EquiContact, a hierarchical policy composed of a high-level vision planner (Diffusion Equivariant Descriptor Field, Diff-EDF) and a novel low-level compliant visuomotor policy (Geometric Compliant ACT, G-CompACT). G-CompACT operates using only localized observations (geometrically consistent error vectors (GCEV), force-torque readings, and wrist-mounted RGB images) and produces actions defined in the end-effector frame. Through these design choices, we show that the entire EquiContact pipeline is SE(3)-equivariant, from perception to force control. We also outline three key components for spatially generalizable contact-rich policies: compliance, localized policies, and induced equivariance. Real-world experiments on PiH, screwing, and surface wiping tasks demonstrate a near-perfect success rate and robust generalization to unseen spatial configurations, validating the proposed framework and principles. The experimental videos and more details can be found on the project website: https://equicontact.github.io/EquiContact-website/
|
https://arxiv.org/abs/2507.10961
|
Academic Papers
|
svg
|
bc4255f370758c006c822fa53869e51de1ab78eab66c46b1c29ad38d786af523
|
2026-02-02T00:00:00-05:00
|
Foundation Models for Logistics: Toward Certifiable, Conversational Planning Interfaces
|
arXiv:2507.11352v2 Announce Type: replace Abstract: Logistics operators, from battlefield coordinators re-routing airlifts ahead of a storm to warehouse managers juggling late trucks, need to make mission-critical decisions. Prevailing methods for logistics planning such as integer programming yield plans that satisfy user-defined logical constraints, assuming an idealized mathematical model of the environment. On the other hand, foundation models lower the intermediate processing barrier by translating natural-language user utterances into executable plans, yet they remain prone to misinterpretations and hallucinations that jeopardize safety and cost. We introduce a Vision-Language Logistics (VLL) agent, built on a neurosymbolic framework that pairs the accessibility of natural-language dialogue with verifiable guarantees on user-objective interpretation. The agent interprets user requests and converts them into structured planning specifications, quantifies the uncertainty of the interpretation, and invokes an interactive clarification loop when the uncertainty exceeds an adaptive threshold. Drawing on a lightweight airlift logistics planning use case as an illustrative case study, we highlight a practical path toward certifiable and user-aligned decision-making for complex logistics. Our lightweight model, fine-tuned on just 100 training samples, surpasses the zero-shot performance of 20x larger models in logistic planning tasks while cutting inference latency by nearly 50%.
|
https://arxiv.org/abs/2507.11352
|
Academic Papers
|
svg
|
03ee7cc96c9a12c0867986b62b745808bbf8821b43a14ae68483245be83940d6
|
2026-02-02T00:00:00-05:00
|
MetaLint: Generalizable Idiomatic Code Quality Analysis through Instruction-Following and Easy-to-Hard Generalization
|
arXiv:2507.11687v3 Announce Type: replace Abstract: Large Language Models excel at code generation but struggle with code quality analysis, where best practices evolve and cannot be fully captured by static training data. We introduce MetaLint, a training framework that treats code quality analysis as detecting best practice violations from high-level specifications over semantic code fragments (code idioms). Instead of training on a fixed set of rules, MetaLint reorganizes supervision around dynamically specified best practices using synthetic linter-derived labels, integrated with instruction-following and preference optimization. This encourages extrapolation to more complex, unseen best practices at test time, consistent with easy-to-hard generalization without retraining. To evaluate MetaLint, we create a new benchmark of hard-to-detect best practices inspired by Python Enhancement Proposals. Across this benchmark, MetaLint improves generalization to unseen best practices. Qwen3-4B achieves a 2.7x detection F-score gain (25.9% -> 70.4%), the highest recall, and a 26.7% localization F-score, matching larger models such as o3-mini. These gains generalize across programming languages, model families, scales, reasoning settings, and linter sources.
|
https://arxiv.org/abs/2507.11687
|
Academic Papers
|
svg
|
77b1873152a2744ee567f6c932332b50d9cdfa74b70d3ad84ec1a4097ee5b388
|
2026-02-02T00:00:00-05:00
|
PICACO: Pluralistic In-Context Value Alignment of LLMs via Total Correlation Optimization
|
arXiv:2507.16679v2 Announce Type: replace Abstract: In-Context Learning has shown great potential for aligning Large Language Models (LLMs) with human values, helping reduce harmful outputs and accommodate diverse preferences without costly post-training, known as In-Context Alignment (ICA). However, LLMs' comprehension of input prompts remains agnostic, limiting ICA's ability to address value tensions--human values are inherently pluralistic, often imposing conflicting demands, e.g., stimulation vs. tradition. Current ICA methods therefore face the Instruction Bottleneck challenge, where LLMs struggle to reconcile multiple intended values within a single prompt, leading to incomplete or biased alignment. To address this, we propose PICACO, a novel pluralistic ICA method. Without fine-tuning, PICACO optimizes a meta-instruction that navigates multiple values to better elicit LLMs' understanding of them and improve their alignment. This is achieved by maximizing the total correlation between specified values and LLM responses, theoretically reinforcing value correlation while reducing distractive noise, resulting in effective value instructions. Extensive experiments on five value sets show that PICACO works well with both black-box and open-source LLMs, outperforms several recent strong baselines, and achieves a better balance across up to 8 distinct values.
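The total-correlation objective can be made concrete for two discrete variables, where TC(X, Y) = H(X) + H(Y) - H(X, Y) (a plug-in toy estimator on categorical samples; PICACO maximizes this quantity between value specifications and LLM responses, not toy data):

```python
import math
from collections import Counter

def total_correlation(samples):
    """Plug-in estimate of TC(X, Y) from paired samples [(x, y), ...]:
    the sum of marginal entropies minus the joint entropy, in nats."""
    def entropy(counts, n):
        return -sum(c / n * math.log(c / n) for c in counts.values())

    n = len(samples)
    hx = entropy(Counter(x for x, _ in samples), n)
    hy = entropy(Counter(y for _, y in samples), n)
    hxy = entropy(Counter(samples), n)
    return hx + hy - hxy
```

TC is zero exactly when the variables are independent and grows with their statistical dependence, which is why maximizing it tightens the link between specified values and responses.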
|
https://arxiv.org/abs/2507.16679
|
Academic Papers
|
svg
|
2d782fcaa2f8a9e8c233f6584289f46946733006c15df152f9451a8556f720e4
|
2026-02-02T00:00:00-05:00
|
A Zero-overhead Flow for Security Closure
|
arXiv:2507.17385v2 Announce Type: replace Abstract: In the traditional Application-Specific Integrated Circuit (ASIC) design flow, the concept of timing closure implies to reach convergence during physical synthesis such that, under a given area and power budget, the design works at the targeted frequency. However, security has been largely neglected when evaluating the Quality of Results (QoR) from physical synthesis. In general, commercial place & route tools do not understand security goals. In this work, we propose a modified ASIC design flow that is security-aware and, differently from prior research, does not degrade QoR for the sake of security improvement. Therefore, we propose a first-of-its-kind zero-overhead flow for security closure. Our flow is concerned with two distinct threat models: (i) insertion of Hardware Trojans (HTs) and (ii) physical probing/fault injection. Importantly, the flow is entirely executed within a commercial place & route engine and is scalable. In several metrics, our security-aware flow achieves the best-known results for the ISPD`22 set of benchmark circuits while incurring negligible design overheads due to security-related strategies. Finally, we open source the entire methodology (as a set of scripts) and also share the protected circuits (as design databases) for the benefit of the hardware security community.
|
https://arxiv.org/abs/2507.17385
|
Academic Papers
|
svg
|
778fea846a1c42a1c6b013bd9f995401e2020e4db0d91a29305c9721cc910c67
|
2026-02-02T00:00:00-05:00
|
Debating Truth: Debate-driven Claim Verification with Multiple Large Language Model Agents
|
arXiv:2507.19090v3 Announce Type: replace Abstract: State-of-the-art single-agent claim verification methods struggle with complex claims that require nuanced analysis of multifaceted evidence. Inspired by real-world professional fact-checkers, we propose DebateCV, the first debate-driven claim verification framework powered by multiple LLM agents. In DebateCV, two Debaters argue opposing stances to surface subtle errors in single-agent assessments. A decisive Moderator is then required to weigh the evidential strength of conflicting arguments to deliver an accurate verdict. Yet, zero-shot Moderators are biased toward neutral judgments, and no datasets exist for training them. To bridge this gap, we propose Debate-SFT, a post-training framework that leverages synthetic data to enhance agents' ability to effectively adjudicate debates for claim verification. Results show that our methods surpass state-of-the-art non-debate approaches in both accuracy (across various evidence conditions) and justification quality.
|
https://arxiv.org/abs/2507.19090
|
Academic Papers
|
svg
|