Dataset schema (7 columns):
- id: string, length 64
- published: string, length 19-25
- title: string, length 7-262
- description: string, length 6-54.4k
- link: string, length 31-227
- category: string, 6 classes
- image: string, length 3-247
e7a589e4e09943532c0149e86a42879c17b54ae194f4d49139ad760ecca06d1f
2026-01-07T00:00:00-05:00
Algorithmic randomness in harmonic analysis
arXiv:2601.03239v1 Announce Type: cross Abstract: Within the last fifteen years, a program of establishing relationships between algorithmic randomness and almost-everywhere theorems in analysis and ergodic theory has developed. In harmonic analysis, Franklin, McNicholl, and Rute characterized Schnorr randomness using an effective version of Carleson's Theorem. We show here that, for computable $p$ with $1<p<\infty$, the reals at which the Fourier series of a weakly computable vector in $L^p[-\pi,\pi]$ converges are precisely the Martin-L\"{o}f random reals. Furthermore, we show that radial limits of the Poisson integral of an $L^1(\mathbb{R})$-computable function coincide with the values of the function at exactly the Schnorr random reals and that radial limits of the Poisson integral of a weakly $L^1(\mathbb{R})$-computable function coincide with the values of the function at exactly the Martin-L\"{o}f random reals.
https://arxiv.org/abs/2601.03239
Academic Papers
svg
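The convergence phenomenon behind this abstract can be illustrated numerically. The following is a toy sketch (ordinary floating-point partial sums, nothing effective or randomness-theoretic) of pointwise convergence of a Fourier series, here for the sawtooth f(x) = x on [-pi, pi]:

```python
import numpy as np

# Partial sums of the Fourier series of the sawtooth f(x) = x on [-pi, pi]:
#   f(x) ~ sum_{n>=1} 2 * (-1)^(n+1) * sin(n*x) / n,
# which converges to f(x) at every interior point.
def partial_sum(x, N):
    n = np.arange(1, N + 1)
    return np.sum(2.0 * (-1.0) ** (n + 1) * np.sin(n * x) / n)

for N in (10, 100, 1000):
    print(N, abs(partial_sum(1.0, N) - 1.0))  # error shrinks as N grows
```

The paper's results concern at which reals such convergence holds for (weakly) computable vectors; this demo only shows the classical convergence at a fixed interior point.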
1f85f6426e065b2a318f7a45c0e2caa922d4af50fcbd38043af10bb5843aba9c
2026-01-07T00:00:00-05:00
Self-Supervised Learning from Noisy and Incomplete Data
arXiv:2601.03244v1 Announce Type: cross Abstract: Many important problems in science and engineering involve inferring a signal from noisy and/or incomplete observations, where the observation process is known. Historically, this problem has been tackled using hand-crafted regularization (e.g., sparsity, total-variation) to obtain meaningful estimates. Recent data-driven methods often offer better solutions by directly learning a solver from examples of ground-truth signals and associated observations. However, in many real-world applications, obtaining ground-truth references for training is expensive or impossible. Self-supervised learning methods offer a promising alternative by learning a solver from measurement data alone, bypassing the need for ground-truth references. This manuscript provides a comprehensive summary of different self-supervised methods for inverse problems, with a special emphasis on their theoretical underpinnings, and presents practical applications in imaging inverse problems.
https://arxiv.org/abs/2601.03244
Academic Papers
svg
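A minimal numerical sketch of the core idea that a solver can be learned from measurements alone: with two independent noisy views of the same signal, fitting a linear denoiser to predict one view from the other recovers the supervised Wiener coefficient without ever seeing ground truth (toy Gaussian setting, not any specific method from the survey):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
x = rng.normal(0.0, 2.0, n)        # unknown clean signal (never used for training)
y1 = x + rng.normal(0.0, 1.0, n)   # noisy observation 1
y2 = x + rng.normal(0.0, 1.0, n)   # independent noisy observation 2

# Self-supervised fit: choose a to minimize ||a*y1 - y2||^2 using measurements only.
a = (y1 @ y2) / (y1 @ y1)

# The supervised Wiener coefficient for estimating x from y1 is
# s^2 / (s^2 + sigma^2) = 4/5 = 0.8; the self-supervised fit matches it.
print(a)
```

Because the noise in y2 is independent of y1, minimizing the cross-prediction loss has the same minimizer in expectation as the supervised denoising loss; this is the mechanism behind Noise2Noise-style training.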
d8858a13150a648533da99f229fd609a7ebc049f39389a07ea3d148f071e91f4
2026-01-07T00:00:00-05:00
Nonlinear Spectral Modeling and Control of Soft-Robotic Muscles from Data
arXiv:2601.03247v1 Announce Type: cross Abstract: Artificial muscles are essential for compliant musculoskeletal robotics but complicate control due to nonlinear multiphysics dynamics. Hydraulically amplified electrostatic (HASEL) actuators, a class of soft artificial muscles, offer high performance but exhibit memory effects and hysteresis. Here we present a data-driven reduction and control strategy grounded in spectral submanifold (SSM) theory. In the adiabatic regime, where inputs vary slowly relative to intrinsic transients, trajectories rapidly converge to a low-dimensional slow manifold. We learn an explicit input-to-output map on this manifold from forced-response trajectories alone, avoiding decay experiments that can trigger hysteresis. We deploy the SSM-based model for real-time control of an antagonistic HASEL-clutch joint. This approach yields a substantial reduction in tracking error compared to feedback-only and feedforward-only baselines under identical settings. This record-and-control workflow enables rapid characterization and high-performance control of soft muscles and muscle-driven joints without detailed physics-based modeling.
https://arxiv.org/abs/2601.03247
Academic Papers
svg
b510446bc2ba388decc75973c33c2f5301eca3f08829b887aae71d430c928161
2026-01-07T00:00:00-05:00
Auditing for Core Stability in Participatory Budgeting
arXiv:2209.14468v2 Announce Type: replace Abstract: We consider the participatory budgeting problem where each of $n$ voters specifies additive utilities over $m$ candidate projects with given sizes, and the goal is to choose a subset of projects (i.e., a committee) with total size at most $k$. Participatory budgeting mathematically generalizes multiwinner elections, and both have received great attention in computational social choice recently. A well-studied notion of group fairness in this setting is core stability: Each voter is assigned an "entitlement" of $\frac{k}{n}$, so that a subset $S$ of voters can pay for a committee of size at most $|S| \cdot \frac{k}{n}$. A given committee is in the core if no subset of voters can pay for another committee that provides each of them strictly larger utility. This provides proportional representation to all voters in a strong sense. In this paper, we study the following auditing question: Given a committee computed by some preference aggregation method, how close is it to the core? Concretely, how much does the entitlement of each voter need to be scaled down by, so that the core property subsequently holds? As our main contribution, we present computational hardness results for this problem, as well as a logarithmic approximation algorithm via linear program rounding. We show that our analysis is tight against the linear programming bound. Additionally, we consider two related notions of group fairness that have similar audit properties. The first is Lindahl priceability, which audits the closeness of a committee to a market clearing solution. We show that this is related to the linear programming relaxation of auditing the core, leading to efficient exact and approximation algorithms for auditing. The second is a novel weakening of the core that we term the sub-core, and we present computational results for auditing this notion as well.
https://arxiv.org/abs/2209.14468
Academic Papers
svg
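The core property being audited can be checked by brute force on tiny instances. A sketch, assuming unit-size projects and the entitlement scaling described in the abstract (the helper name `in_core` is ours, not the paper's):

```python
from itertools import combinations

def in_core(util, committee, k, scale=1.0):
    """Check the core property for unit-size projects.
    util[i][j] = voter i's utility for project j; committee is a set of projects.
    A coalition S may fund any set T with |T| <= floor(|S| * scale * k / n)."""
    n, m = len(util), len(util[0])
    base = [sum(util[i][j] for j in committee) for i in range(n)]
    for s in range(1, n + 1):
        budget = int(s * scale * k / n)
        if budget == 0:
            continue
        for S in combinations(range(n), s):
            for t in range(1, budget + 1):
                for T in combinations(range(m), t):
                    if all(sum(util[i][j] for j in T) > base[i] for i in S):
                        return False  # blocking coalition found
    return True
```

On a unanimity instance with utilities [3, 2, 1, 0] for every voter and k = 2, the utilitarian committee {0, 1} passes the audit at full entitlement, while {2, 3} is blocked by the grand coalition. The auditing question in the paper asks how far `scale` must be lowered before `in_core` returns True.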
cffb3d25b42ef05388e1f2f7be7fe92a6e02a96a7797a73d922b86e6154b1c37
2026-01-07T00:00:00-05:00
Teeth3DS+: An Extended Benchmark for Intraoral 3D Scans Analysis
arXiv:2210.06094v3 Announce Type: replace Abstract: Intraoral 3D scanning is now widely adopted in modern dentistry and plays a central role in supporting key tasks such as tooth segmentation, detection, labeling, and dental landmark identification. Accurate analysis of these scans is essential for orthodontic and restorative treatment planning, as it enables automated workflows and minimizes the need for manual intervention. However, the development of robust learning-based solutions remains challenging due to the limited availability of high-quality public datasets and standardized benchmarks. This article presents Teeth3DS+, an extended public benchmark dedicated to intraoral 3D scan analysis. Developed in the context of the MICCAI 3DTeethSeg and 3DTeethLand challenges, Teeth3DS+ supports multiple fundamental tasks, including tooth detection, segmentation, labeling, 3D modeling, and dental landmark identification. The dataset consists of rigorously curated intraoral scans acquired using state-of-the-art scanners and validated by experienced orthodontists and dental surgeons. In addition to the data, Teeth3DS+ provides standardized data splits and evaluation protocols to enable fair and reproducible comparison of methods, with the goal of fostering progress in learning-based analysis of 3D dental scans. Detailed instructions for accessing the dataset are available at https://crns-smartvision.github.io/teeth3ds
https://arxiv.org/abs/2210.06094
Academic Papers
svg

3728e66becd8fcd50be0234a82c6c72cd583c253ae621a293c6fbe6378d58e8a
2026-01-07T00:00:00-05:00
MAST: Model-Agnostic Sparsified Training
arXiv:2311.16086v2 Announce Type: replace Abstract: We introduce a novel optimization problem formulation that departs from the conventional way of minimizing machine learning model loss as a black-box function. Unlike traditional formulations, the proposed approach explicitly incorporates an initially pre-trained model and random sketch operators, allowing for sparsification of both the model and gradient during training. We establish several insightful properties of the proposed objective function and highlight its connections to the standard formulation. Furthermore, we present several variants of the Stochastic Gradient Descent (SGD) method adapted to the new problem formulation, including SGD with general sampling, a distributed version, and SGD with variance reduction techniques. We achieve tighter convergence rates and relax assumptions, bridging the gap between theoretical principles and practical applications, covering several important techniques such as Dropout and Sparse training. This work presents promising opportunities to enhance the theoretical understanding of model training through a sparsification-aware optimization approach.
https://arxiv.org/abs/2311.16086
Academic Papers
svg
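A toy sketch of sparsified training in the spirit described here: SGD on a quadratic where each step applies a random diagonal sketch (a Dropout-like mask) to the gradient, rescaled so the step stays unbiased. Illustrative only, not the paper's formulation:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 10
A = np.diag(np.linspace(1.0, 2.0, d))   # simple strongly convex quadratic
x_star = rng.normal(size=d)             # minimizer of f(x) = 0.5 (x-x*)^T A (x-x*)
x = np.zeros(d)
lr, p = 0.1, 0.5                        # step size; keep-probability of the sketch

for _ in range(2000):
    grad = A @ (x - x_star)
    mask = rng.random(d) < p            # random diagonal sketch operator
    x -= lr * (mask / p) * grad         # divide by p to keep the step unbiased

print(np.linalg.norm(x - x_star))
```

Because the masked gradient is an unbiased estimate of the true gradient and the noise vanishes at the optimum, the iterate still converges to x_star despite seeing only half the coordinates per step on average.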
7bb974cb4de3d475044e4071fd307f9c70b365d7c944769cf98365b140d78479
2026-01-07T00:00:00-05:00
Time-Transformer: Integrating Local and Global Features for Better Time Series Generation (Extended Version)
arXiv:2312.11714v4 Announce Type: replace Abstract: Generating time series data is a promising approach to address data deficiency problems. However, it is also challenging due to the complex temporal properties of time series data, including local correlations as well as global dependencies. Most existing generative models have failed to effectively learn both the local and global properties of time series data. To address this open problem, we propose a novel time series generative model named 'Time-Transformer AAE', which consists of an adversarial autoencoder (AAE) and a newly designed architecture named 'Time-Transformer' within the decoder. The Time-Transformer first simultaneously learns local and global features in a layer-wise parallel design, combining the abilities of Temporal Convolutional Networks and Transformer in extracting local features and global dependencies respectively. Second, a bidirectional cross attention is proposed to provide complementary guidance across the two branches and achieve proper fusion between local and global features. Experimental results demonstrate that our model can outperform existing state-of-the-art models in 5 out of 6 datasets, specifically on those with data containing both global and local properties. Furthermore, we highlight our model's advantage on handling this kind of data via an artificial dataset. Finally, we show our model's ability to address a real-world problem: data augmentation to support learning with small datasets and imbalanced datasets.
https://arxiv.org/abs/2312.11714
Academic Papers
svg
bbd46047abd7329b7ed9916f934334ec99bc030fd12c191d1ac732cb34618c7f
2026-01-07T00:00:00-05:00
A Large-Scale Analysis on the Use of Arrival Time Prediction for Automated Shuttle Services in the Real World
arXiv:2401.05322v2 Announce Type: replace Abstract: Urban mobility is on the cusp of transformation with the emergence of shared, connected, and cooperative automated vehicles. Yet, for them to be accepted by customers, trust in their punctuality is vital. Many pilot initiatives operate without a fixed schedule, enhancing the importance of reliable arrival time (AT) predictions. This study presents an AT prediction system for automated shuttles, utilizing separate models for dwell and running time predictions, validated on real-world data from six cities. Alongside established methods such as XGBoost, we explore the benefits of leveraging spatial correlations using graph neural networks (GNN). To accurately handle the case of a shuttle bypassing a stop, we propose a hierarchical model combining a random forest classifier and a GNN. The results for the final AT prediction are promising, showing low errors even when predicting several stops ahead. Yet, no single model emerges as universally superior, and we provide insights into the characteristics of pilot sites that influence the model selection process and prediction performance. Finally, we identify dwell time prediction as the key determinant in overall AT prediction accuracy when automated shuttles are deployed in low-traffic areas or under regulatory speed limits. Our meta-analysis across six pilot sites in different cities provides insights into the current state of autonomous public transport prediction models and paves the way for more data-informed decision-making as the field advances.
https://arxiv.org/abs/2401.05322
Academic Papers
svg
6580bcb1541536960fc8fd1c5a44b1f97b0297c50885836141d6d3539898f555
2026-01-07T00:00:00-05:00
On the permutation automorphisms of binary cubic codes
arXiv:2402.10667v3 Announce Type: replace Abstract: A binary linear code whose permutation automorphism group has a fixed-point-free permutation of order $3$ is called a binary cubic code. The scope of this paper is to investigate the structural properties of binary cubic codes. Let $C$ be a binary cubic $[n,k]$ code. In this paper, we prove that if $n\geq 30$ and $C$ has permutation automorphism group of order three, then $k\geq 6$. Additionally, we show that if $n < 30$ and $k\leq 4$, then the permutation automorphism group of $C$ has order greater than three. Moreover, along the way, we provide some results on the structure of the higher dimensional cubic codes. In particular, we present some results concerning the structure of the putative extremal self-dual $[72,36,16]$ code under the assumption that it is cubic.
https://arxiv.org/abs/2402.10667
Academic Papers
svg
a5ecf2a89f98be240388f81ae2a8b4a5891b9b6ee024bc2371c3da0c667d1d74
2026-01-07T00:00:00-05:00
HAPNet: Toward Superior RGB-Thermal Scene Parsing via Hybrid, Asymmetric, and Progressive Heterogeneous Feature Fusion
arXiv:2404.03527v3 Announce Type: replace Abstract: Data-fusion networks have shown significant promise for RGB-thermal scene parsing. However, the majority of existing studies have relied on symmetric duplex encoders for heterogeneous feature extraction and fusion, paying inadequate attention to the inherent differences between RGB and thermal modalities. Recent progress in vision foundation models (VFMs) trained through self-supervision on vast amounts of unlabeled data has proven their ability to extract informative, general-purpose features. However, this potential has yet to be fully leveraged in the domain. In this study, we take one step toward this new research area by exploring a feasible strategy to fully exploit VFM features for RGB-thermal scene parsing. Specifically, we delve deeper into the unique characteristics of RGB and thermal modalities, thereby designing a hybrid, asymmetric encoder that incorporates both a VFM and a convolutional neural network. This design allows for more effective extraction of complementary heterogeneous features, which are subsequently fused in a dual-path, progressive manner. Moreover, we introduce an auxiliary task to further enrich the local semantics of the fused features, thereby improving the overall performance of RGB-thermal scene parsing. Our proposed HAPNet, equipped with all these components, demonstrates superior performance compared to all other state-of-the-art RGB-thermal scene parsing networks, achieving top ranks across three widely used public RGB-thermal scene parsing datasets. We believe this new paradigm has opened up new opportunities for future developments in data-fusion scene parsing approaches.
https://arxiv.org/abs/2404.03527
Academic Papers
svg
c3c11359141cfad3e14bdbf5f67013106a112fce2d99040390e2e6d28bce8d96
2026-01-07T00:00:00-05:00
Empowering Source-Free Domain Adaptation via MLLM-Guided Reliability-Based Curriculum Learning
arXiv:2405.18376v3 Announce Type: replace Abstract: Existing source-free domain adaptation (SFDA) methods struggle to fully use pre-trained knowledge and often rely on a single model's predictions or handcrafted prompts, limiting robustness under domain shift. Multimodal Large Language Models (MLLMs) offer a promising alternative: they encode rich visual-semantic knowledge and generalize well without task-specific tuning. However, their use in SFDA is hindered by instruction-following failures, inconsistent outputs, and high inference costs. We propose Reliability-based Curriculum Learning (RCL), a novel framework that distills robust supervision from multiple frozen MLLMs into a compact target model. RCL organizes adaptation as a three-stage curriculum that progressively incorporates pseudo-labels based on inter-model agreement and model confidence, enabling stable and noise-aware training. Our approach achieves state-of-the-art performance on standard SFDA datasets, Office-Home, DomainNet-126, and VisDA-C, outperforming zero-shot MLLMs and their ensembles, all without accessing source data or tuning foundation models. Our code is available at: https://github.com/Dong-Jie-Chen/RCL.
https://arxiv.org/abs/2405.18376
Academic Papers
svg
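The reliability-based curriculum idea can be sketched generically: pool predictions from several frozen teacher models and admit pseudo-labels into earlier training stages the higher the inter-model agreement and mean confidence. The interface below is hypothetical, not the RCL code:

```python
import numpy as np

def curriculum_pseudo_labels(probs_list, stages=(0.9, 0.7, 0.5)):
    """Toy reliability-based label selection (assumed interface, not the
    paper's implementation): several frozen teachers emit class
    probabilities; a sample is assigned to the earliest curriculum stage
    whose confidence threshold it clears, provided all teachers agree."""
    votes = np.stack([p.argmax(axis=1) for p in probs_list])       # (T, N)
    conf = np.stack([p.max(axis=1) for p in probs_list]).mean(0)   # mean confidence
    agree = (votes == votes[0]).all(axis=0)                        # full agreement
    label = votes[0]
    stage = np.full(len(label), -1)                                # -1 = never used
    for s, thr in enumerate(stages):
        ok = agree & (conf >= thr) & (stage == -1)
        stage[ok] = s
    return label, stage
```

Samples the teachers disagree on are excluded entirely, which is one simple way to keep noisy pseudo-labels out of early training.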
6426f139036f57f2242944907d887a2e098d4d6e556dbba80d916f5f29854f7f
2026-01-07T00:00:00-05:00
Topological Perspectives on Optimal Multimodal Embedding Spaces
arXiv:2405.18867v2 Announce Type: replace Abstract: Recent strides in multimodal model development have ignited a paradigm shift in the realm of text-to-image generation. Among these advancements, CLIP stands out as a remarkable achievement: a sophisticated autoencoder adept at encoding both textual and visual information within a unified latent space. This paper delves into a comparative analysis between CLIP and its recent counterpart, CLOOB. To unravel the intricate distinctions within the embedding spaces crafted by these models, we employ topological data analysis. Our approach encompasses a comprehensive examination of the modality gap drivers, the clustering structures existing across both high and low dimensions, and the pivotal role that dimension collapse plays in shaping their respective embedding spaces. Empirical experiments substantiate the implications of our analyses on downstream performance across various contextual scenarios. Through this investigation, we aim to shed light on the nuanced intricacies that underlie the comparative efficacy of CLIP and CLOOB, offering insights into their respective strengths and weaknesses, and providing a foundation for further refinement and advancement in multimodal model research.
https://arxiv.org/abs/2405.18867
Academic Papers
svg
fb17f27cfee8ee8d9cb82b17e5d6510a703bb71251ea3cc629b3a5967130669f
2026-01-07T00:00:00-05:00
A Survey on Failure Analysis and Fault Injection in AI Systems
arXiv:2407.00125v2 Announce Type: replace Abstract: The rapid advancement of Artificial Intelligence (AI) has led to its integration into various areas, especially with Large Language Models (LLMs) significantly enhancing capabilities in Artificial Intelligence Generated Content (AIGC). However, the complexity of AI systems has also exposed their vulnerabilities, necessitating robust methods for failure analysis (FA) and fault injection (FI) to ensure resilience and reliability. Despite the importance of these techniques, a comprehensive review of FA and FI methodologies in AI systems has been lacking. This study fills this gap by presenting a detailed survey of existing FA and FI approaches across six layers of AI systems. We systematically analyze 160 papers and repositories to answer three research questions: (1) what are the prevalent failures in AI systems, (2) what types of faults can current FI tools simulate, and (3) what gaps exist between the simulated faults and real-world failures. Our findings reveal a taxonomy of AI system failures, assess the capabilities of existing FI tools, and highlight discrepancies between real-world and simulated failures. Moreover, this survey contributes to the field by providing a framework for fault diagnosis, evaluating the state-of-the-art in FI, and identifying areas for improvement in FI techniques to enhance the resilience of AI systems.
https://arxiv.org/abs/2407.00125
Academic Papers
svg
ed07ed443443f30ea60f3fb90e2ac47b5af5caace769175a1c3038865ad4b907
2026-01-07T00:00:00-05:00
Limits to Predicting Online Speech Using Large Language Models
arXiv:2407.12850v3 Announce Type: replace Abstract: Our paper studies the predictability of online speech -- that is, how well language models learn to model the distribution of user generated content on X (previously Twitter). We define predictability as a measure of the model's uncertainty, i.e. its negative log-likelihood. As the basis of our study, we collect 10M tweets for ``tweet-tuning'' base models and a further 6.25M posts from more than five thousand X users and their peers. Across these subjects, we find that predicting posts of individual users remains surprisingly hard. Moreover, it matters greatly what context is used: models using the users' own history significantly outperform models using posts from their social circle. We validate these results across four large language models ranging in size from 1.5 billion to 70 billion parameters. Moreover, our results replicate if instead of prompting the model with additional context, we finetune on it. We follow up with a detailed investigation on what is learned in-context and a demographic analysis. Up to 20\% of what is learned in-context is the use of @-mentions and hashtags. Our main results hold across the demographic groups we studied.
https://arxiv.org/abs/2407.12850
Academic Papers
svg
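Predictability as negative log-likelihood can be illustrated with a toy unigram model in place of an LLM: a post scored against a matching history gets a lower NLL (nats/token) than against an unrelated one. All strings below are made up for illustration:

```python
import math
from collections import Counter

def nll_per_token(tokens, counts, vocab):
    """Average negative log-likelihood (nats/token) of `tokens` under an
    add-one-smoothed unigram model built from `counts`; lower means more
    predictable. A toy stand-in for the paper's LLM-based measure."""
    total = sum(counts.values())
    return -sum(math.log((counts[t] + 1) / (total + len(vocab)))
                for t in tokens) / len(tokens)

own = "gm gm frens wagmi gm".split()            # hypothetical user history
peer = "market update rates fell today".split() # hypothetical peer history
post = "gm frens".split()                       # the user's next post
vocab = set(own) | set(peer) | set(post)

nll_own = nll_per_token(post, Counter(own), vocab)
nll_peer = nll_per_token(post, Counter(peer), vocab)
print(nll_own, nll_peer)
```

The paper's finding that a user's own history beats their social circle's posts is the LLM-scale analogue of `nll_own < nll_peer` here.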
1f3a531e0816f4c7436eff804d1f867e51cd5da57d3de548c2f0518dc0c1782b
2026-01-07T00:00:00-05:00
An Uncertainty-Aware Generalization Framework for Cardiovascular Image Segmentation
arXiv:2409.14305v2 Announce Type: replace Abstract: Deep learning models have achieved significant success in segmenting cardiovascular structures, but there is a growing need to improve their generalization and robustness. Current methods often face challenges such as overfitting and limited accuracy, largely due to their reliance on large annotated datasets and limited optimization techniques. This paper introduces the UU-Mamba model, an extension of the U-Mamba architecture, designed to address these challenges in both cardiac and vascular segmentation. By incorporating Sharpness-Aware Minimization (SAM), the model enhances generalization by seeking flatter minima in the loss landscape. Additionally, we propose an uncertainty-aware loss function that integrates region-based, distribution-based, and pixel-based components, improving segmentation accuracy by capturing both local and global features. We expand our evaluations on the ImageCAS (coronary artery) and Aorta (aortic branches and zones) datasets, which present more complex segmentation challenges than the ACDC dataset (left and right ventricles) used in prior work, showcasing the model's adaptability and resilience. Our results confirm UU-Mamba's superior performance compared to leading models such as TransUNet, Swin-Unet, nnUNet, and nnFormer. We also provide a more in-depth assessment of the model's robustness and segmentation accuracy through extensive experiments.
https://arxiv.org/abs/2409.14305
Academic Papers
svg
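Sharpness-Aware Minimization, one ingredient of the model above, follows a simple two-step recipe: perturb the weights toward the local worst case within a small ball, then descend using the gradient taken at the perturbed point. A generic sketch on a toy quadratic (not the UU-Mamba implementation):

```python
import numpy as np

def sam_step(w, grad_fn, lr=0.1, rho=0.05):
    """One SAM update: ascend to the approximate worst case within an L2
    ball of radius rho, then apply the gradient from that point."""
    g = grad_fn(w)
    eps = rho * g / (np.linalg.norm(g) + 1e-12)  # first-order ascent direction
    g_sharp = grad_fn(w + eps)                   # gradient at the perturbed weights
    return w - lr * g_sharp

# sketch on f(w) = 0.5 * ||w||^2, whose gradient is simply w
w = np.array([3.0, -2.0])
for _ in range(100):
    w = sam_step(w, lambda v: v)
print(np.linalg.norm(w))
```

On a real loss landscape the perturbed gradient penalizes sharp minima, biasing training toward flat regions, which is the generalization mechanism the abstract refers to.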
30e5368e83c02bd179b5a59eead5346b7706539ca5e4a81b4a6024cdbd401218
2026-01-07T00:00:00-05:00
Conformal Prediction for Dose-Response Models with Continuous Treatments
arXiv:2409.20412v2 Announce Type: replace Abstract: Understanding the dose-response relation between a continuous treatment and the outcome for an individual can greatly drive decision-making, particularly in areas like personalized drug dosing and personalized healthcare interventions. Point estimates are often insufficient in these high-risk environments, highlighting the need for uncertainty quantification to support informed decisions. Conformal prediction, a distribution-free and model-agnostic method for uncertainty quantification, has seen limited application in continuous treatments or dose-response models. To address this gap, we propose a novel methodology that frames the causal dose-response problem as a covariate shift, leveraging weighted conformal prediction. By incorporating propensity estimation, conformal predictive systems, and likelihood ratios, we present a practical solution for generating prediction intervals for dose-response models. Additionally, our method approximates local coverage for every treatment value by applying kernel functions as weights in weighted conformal prediction. Finally, we use a new synthetic benchmark dataset to demonstrate the significance of covariate shift assumptions in achieving robust prediction intervals for dose-response models.
https://arxiv.org/abs/2409.20412
Academic Papers
svg
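For orientation, here is plain split conformal prediction on a toy linear dose-response model. The paper's contribution is the weighted, propensity-based variant with kernel-localized coverage, which this sketch does not implement:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dose-response data: outcome = 2 * dose + N(0, 1) noise.
dose = rng.uniform(0, 10, 2000)
y = 2 * dose + rng.normal(0, 1, 2000)

coef = np.polyfit(dose[:500], y[:500], 1)                      # fit on one split
cal = np.abs(y[500:1000] - np.polyval(coef, dose[500:1000]))   # calibration scores
alpha = 0.1
k = int(np.ceil((len(cal) + 1) * (1 - alpha)))                 # conformal quantile rank
q = np.sort(cal)[k - 1]

# The interval [f(d) - q, f(d) + q] covers roughly 90% of held-out outcomes.
test_pred = np.polyval(coef, dose[1000:])
coverage = np.mean(np.abs(y[1000:] - test_pred) <= q)
print(coverage)
```

The weighted variant in the paper replaces the uniform calibration quantile with one reweighted by likelihood ratios, correcting for the covariate shift induced by intervening on the dose.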
0639935b688063239c30c1527dbf0b913318ffaf80fab5efc4bea2e29c28fe13
2026-01-07T00:00:00-05:00
Large Language Models can Achieve Social Balance
arXiv:2410.04054v3 Announce Type: replace Abstract: Large Language Models (LLMs) can be deployed in situations where they process positive/negative interactions with other agents. We study how this is done under the sociological framework of social balance, which explains the emergence of one faction or multiple antagonistic ones among agents. Across different LLM models, we find that balance depends on the (i) type of interaction, (ii) update mechanism, and (iii) population size. Across (i)-(iii), we characterize the frequency at which social balance is achieved, the justifications for the social dynamics, and the diversity and stability of interactions. Finally, we explain how our findings inform the deployment of agentic systems.
https://arxiv.org/abs/2410.04054
Academic Papers
svg
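Structural balance itself is easy to test: a signed graph is balanced iff the agents split into at most two factions with positive ties inside factions and negative ties across them, which reduces to a two-coloring check (generic algorithm, not the paper's experimental setup):

```python
from collections import deque

def is_balanced(n, signed_edges):
    """Check structural balance of a signed graph on nodes 0..n-1.
    Each edge is (u, v, s) with s = +1 (friendly) or -1 (antagonistic).
    Balanced iff a two-coloring exists where positive edges join same
    colors and negative edges join different colors."""
    adj = {i: [] for i in range(n)}
    for u, v, s in signed_edges:
        adj[u].append((v, s))
        adj[v].append((u, s))
    color = {}
    for start in range(n):
        if start in color:
            continue
        color[start] = 0
        queue = deque([start])
        while queue:
            u = queue.popleft()
            for v, s in adj[u]:
                want = color[u] if s > 0 else 1 - color[u]
                if v not in color:
                    color[v] = want
                    queue.append(v)
                elif color[v] != want:
                    return False  # inconsistent triangle/cycle found
    return True
```

The classic triangle cases follow: all-positive is balanced (one faction), exactly one negative edge is unbalanced, and two negative edges are balanced (two factions).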
6d56c086f417bfca0591fdde71512cf1ba305aafa5ada618def43b3b0c958f35
2026-01-07T00:00:00-05:00
A Machine Learning Model for Solving Lane-Emden Equation using Legendre Wavelet Neural Network
arXiv:2410.05409v2 Announce Type: replace Abstract: Differential equations are very useful to electrical engineers for solving a variety of problems, such as finding the voltage across a capacitor or relating input and output voltages. The goal of this paper is therefore to find solutions of nonlinear differential equations based on the second-order Lane-Emden equation using the Legendre wavelet neural network (LWNN) method. All the considered equations are singular initial value problems. To manage the singularity challenge, we employ an artificial neural network method. This approach utilizes a single-layer neural network, where the hidden layer is omitted by enlarging the input using Legendre wavelet functions. We apply a feed-forward neural network to the proposed problem along with the principle of error backpropagation. The effectiveness of the Legendre wavelet neural network method is validated on Lane-Emden equations.
https://arxiv.org/abs/2410.05409
Academic Papers
svg
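The singular initial value problem can be illustrated with a much simpler scheme than the paper's LWNN: for the index-0 Lane-Emden equation y'' + (2/x) y' + 1 = 0 with y(0) = 1, y'(0) = 0, a polynomial trial function with the initial conditions built in reduces collocation to linear least squares, and the exact solution y = 1 - x^2/6 is recovered:

```python
import numpy as np

# Trial function y = 1 + x^2 * sum_k c_k x^k satisfies y(0)=1, y'(0)=0 by
# construction. For the term c_k x^(k+2), y'' + (2/x) y' contributes
# c_k (k+2)(k+3) x^k, so the residual is linear in the coefficients c.
xs = np.linspace(0.05, 1.0, 20)   # collocation points; avoid the singularity at x=0
K = 3
M = np.stack([(k + 2) * (k + 3) * xs**k for k in range(K)], axis=1)
c, *_ = np.linalg.lstsq(M, -np.ones_like(xs), rcond=None)

y1 = 1 + sum(c[k] * 1.0 ** (k + 2) for k in range(K))  # y(1); exact value is 5/6
print(c, y1)
```

The least-squares solve recovers c = (-1/6, 0, 0) to machine precision. The paper's LWNN plays the role of this trial function, with Legendre wavelets as the enlarged input basis and backpropagation in place of the linear solve.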
bd3745e31dc48cec8dc6c08ddf8a5055451a18d88fb2d6cbd49f350d4e049808
2026-01-07T00:00:00-05:00
Limits to scalable evaluation at the frontier: LLM as Judge won't beat twice the data
arXiv:2410.13341v3 Announce Type: replace Abstract: High quality annotations are increasingly a bottleneck in the explosively growing machine learning ecosystem. Scalable evaluation methods that avoid costly annotation have therefore become an important research ambition. Many hope to use strong existing models in lieu of costly labels to provide cheap model evaluations. Unfortunately, this method of using models as judges introduces biases, such as self-preferencing, that can distort model comparisons. An emerging family of debiasing tools promises to fix these issues by using a few high quality labels to debias a large number of model judgments. In this paper, we study how far such debiasing methods, in principle, can go. Our main result shows that when the judge is no more accurate than the evaluated model, no debiasing method can decrease the required amount of ground truth labels by more than half. Our result speaks to the severe limitations of the LLM-as-a-judge paradigm at the evaluation frontier where the goal is to assess newly released models that are possibly better than the judge. Through an empirical evaluation, we demonstrate that the sample size savings achievable in practice are even more modest than what our theoretical limit suggests. Along the way, our work provides new observations about debiasing methods for model evaluation, and points out promising avenues for future work.
https://arxiv.org/abs/2410.13341
Academic Papers
svg
d4160dbe55d57300e1d670ce95d459ee47dbb2bfd2ab5a70288821118e2a7dcd
2026-01-07T00:00:00-05:00
How Many Images Does It Take? Estimating Imitation Thresholds in Text-to-Image Models
arXiv:2410.15002v2 Announce Type: replace Abstract: Text-to-image models are trained using large datasets of image-text pairs collected from the internet. These datasets often include copyrighted and private images. Training models on such datasets enables them to generate images that might violate copyright laws and individual privacy. This phenomenon is termed imitation -- generation of images with content that has recognizable similarity to its training images. In this work we estimate the point at which a model was trained on enough instances of a concept to be able to imitate it -- the imitation threshold. We posit this question as a new problem and propose an efficient approach that estimates the imitation threshold without incurring the colossal cost of training these models from scratch. We experiment with two domains -- human faces and art styles, and evaluate four text-to-image models that were trained on three pretraining datasets. We estimate the imitation threshold of these models to be in the range of 200-700 images, depending on the domain and the model. The imitation threshold provides an empirical basis for copyright violation claims and acts as a guiding principle for text-to-image model developers that aim to comply with copyright and privacy laws. Website: https://how-many-van-goghs-does-it-take.github.io/. Code: https://github.com/vsahil/MIMETIC-2.
https://arxiv.org/abs/2410.15002
Academic Papers
svg
16d45ea0d04fba85453b297e84ce7aa06e1d072dffe3a561f81e5e6fea8c6511
2026-01-07T00:00:00-05:00
Uncovering Autoregressive LLM Knowledge of Thematic Fit in Event Representation
arXiv:2410.15173v2 Announce Type: replace Abstract: We show closed models possess much thematic fit knowledge and set a new state of the art, while open models also seem to capture much relevant knowledge (in semantic filtering), but yield lower scores. Surprisingly, multi-step reasoning only helped closed models (with few exceptions); generated sentences hurt closed models' performance; and output form had little to no effect. We analyze the reasons for these findings, and conclude that more foundational work is needed for a single LLM to perform the best on all tasks with the same experimental condition, let alone improve results further. Source code is available at: https://github.com/SafeyahShemali/LLM_Thematic_Fit_25
https://arxiv.org/abs/2410.15173
Academic Papers
svg
c17d70df0ac7948e4eb92efc60f7162fd980561d70f8e88c4e4d2769d5092669
2026-01-07T00:00:00-05:00
SaVe-TAG: LLM-based Interpolation for Long-Tailed Text-Attributed Graphs
arXiv:2410.16882v4 Announce Type: replace Abstract: Real-world graph data often follows long-tailed distributions, making it difficult for Graph Neural Networks (GNNs) to generalize well across both head and tail classes. Recent advances in Vicinal Risk Minimization (VRM) have shown promise in mitigating class imbalance with numeric interpolation; however, existing approaches largely rely on embedding-space arithmetic, which fails to capture the rich semantics inherent in text-attributed graphs. In this work, we propose our method, SaVe-TAG (Semantic-aware Vicinal Risk Minimization for Long-Tailed Text-Attributed Graphs), a novel VRM framework that leverages Large Language Models (LLMs) to perform text-level interpolation, generating on-manifold, boundary-enriching synthetic samples for minority classes. To mitigate the risk of noisy generation, we introduce a confidence-based edge assignment mechanism that uses graph topology as a natural filter to ensure structural consistency. We provide theoretical justification for our method and conduct extensive experiments on benchmark datasets, showing that our approach consistently outperforms both numeric interpolation and prior long-tailed node classification baselines. Our results highlight the importance of integrating semantic and structural signals for balanced and effective learning on text-attributed graphs. The source code is publicly available at: https://github.com/LWang-Laura/SaVe-TAG.
https://arxiv.org/abs/2410.16882
Academic Papers
svg
3a0a87ba9532d801a0dbf62d5c00049a5c2950451249eefee8324d7ac1eb8d0d
2026-01-07T00:00:00-05:00
EviRerank: Adaptive Evidence Construction for Long-Document LLM Reranking
arXiv:2411.06254v5 Announce Type: replace Abstract: Decoder-only LLM rerankers struggle with long documents: inference is costly and relevance signals can be diluted by irrelevant context. Motivated by an attention analysis indicating a consistent degradation trend when non-relevant text is appended, we propose EviRerank, an evidence-based long-document reranking framework for decoder-only LLMs. EviRerank (i) scores document blocks with a lightweight selector (BM25, bi-encoder, or cross-encoder), (ii) constructs a compact reranking context under a hard token cap by dynamically budgeting evidence blocks with Adaptive Evidence Budgeting (AEB) and adding a global summary cue via Summary Augmentation (SA), and (iii) reranks with a decoder-only LLM. Across TREC DL'19, DL'23, and MLDR-zh, EviRerank consistently outperforms full-document LLM reranking and strong block-selection baselines while substantially reducing the required input length. On TREC DL'19, EviRerank achieves 0.743 nDCG@10 and 0.307 MAP, establishing a new best result and improving over RankLLaMA (0.701/0.288) by +0.042 nDCG@10 (+6.0%) and +0.019 MAP (+6.6%).
https://arxiv.org/abs/2411.06254
Academic Papers
svg
66320ea8f06f6a4a1d884532422eacbf759ebd7b0391562a3e07d246214c1cfd
2026-01-07T00:00:00-05:00
Communication Compression for Tensor Parallel LLM Inference
arXiv:2411.09510v3 Announce Type: replace Abstract: Large Language Models (LLMs) have pushed the frontier of artificial intelligence but comprise hundreds of billions of parameters and operations. For faster inference latency, LLMs are deployed on multiple hardware accelerators through various Model Parallelism strategies. Our paper examines one such strategy - Tensor Parallel - and proposes to reduce latency by compressing inter-accelerator communication. We leverage fine-grained quantization techniques to compress selected activations by 3.5 - 4.5x. Our proposed method yields up to a 2x reduction in time-to-first-token (TTFT) with negligible model performance degradation.
https://arxiv.org/abs/2411.09510
Academic Papers
svg
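The fine-grained activation quantization described above can be illustrated with a minimal per-group int8 sketch (the group size of 64 and symmetric int8 format are assumptions for illustration, not the paper's exact scheme):

```python
import numpy as np

def quantize_groups(x, group=64):
    """Per-group symmetric int8 quantization of a flat fp32 activation vector."""
    x = x.reshape(-1, group)
    scale = np.abs(x).max(axis=1, keepdims=True) / 127.0
    scale = np.where(scale == 0, 1.0, scale)   # avoid divide-by-zero on all-zero groups
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale.astype(np.float32)

def dequantize_groups(q, scale):
    """Reconstruct the fp32 activations on the receiving accelerator."""
    return (q.astype(np.float32) * scale).reshape(-1)
```

With a group size of 64, each value costs 8 bits plus an amortized fp32 scale (32/64 bits), about 8.5 bits per value, i.e. roughly a 3.8x reduction versus fp32, in line with the 3.5 - 4.5x range the abstract reports.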
2cfb6d91a5fdd5cc851126ed06d47923e5d8123b0cfd5109c07c0374aebe02a7
2026-01-07T00:00:00-05:00
FCC: Fully Connected Correlation for One-Shot Segmentation
arXiv:2411.11917v2 Announce Type: replace Abstract: Few-shot segmentation (FSS) aims to segment the target object in a query image using only a small set of support images and masks. Therefore, having strong prior information for the target object using the support set is essential for guiding the initial training of FSS, which leads to the success of few-shot segmentation in challenging cases, such as when the target object shows considerable variation in appearance, texture, or scale across the support and query images. Previous methods have tried to obtain prior information by creating correlation maps from pixel-level correlation on final-layer or same-layer features. However, we found these approaches can offer limited and partial information when advanced models like Vision Transformers are used as the backbone. Vision Transformer encoders have a multi-layer structure with identical shapes in their intermediate layers. Leveraging the feature comparison from all layers in the encoder can enhance the performance of few-shot segmentation. We introduce FCC (Fully Connected Correlation) to integrate pixel-level correlations between support and query features, capturing associations that reveal target-specific patterns and correspondences in both same-layers and cross-layers. FCC captures previously inaccessible target information, effectively addressing the limitations of support mask. Our approach consistently demonstrates state-of-the-art performance on PASCAL, COCO, and domain shift tests. We conducted an ablation study and cross-layer correlation analysis to validate FCC's core methodology. These findings reveal the effectiveness of FCC in enhancing prior information and overall model performance.
https://arxiv.org/abs/2411.11917
Academic Papers
svg
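As an illustration of the pixel-level same- and cross-layer correlations that FCC integrates, here is a minimal NumPy sketch (cosine correlation and the simple multiplicative masking are simplifying assumptions, not the paper's exact formulation):

```python
import numpy as np

def correlation_map(query_feat, support_feat, support_mask):
    """Pixel-level cosine correlation between query and (masked) support features.
    Features are [C, H, W]; support_mask is [H, W] with values in {0, 1}."""
    C, H, W = query_feat.shape
    q = query_feat.reshape(C, -1)
    s = support_feat.reshape(C, -1) * support_mask.reshape(1, -1)
    q = q / (np.linalg.norm(q, axis=0, keepdims=True) + 1e-8)
    s = s / (np.linalg.norm(s, axis=0, keepdims=True) + 1e-8)
    return (q.T @ s).reshape(H, W, H, W)   # [Hq, Wq, Hs, Ws]

def fully_connected_correlation(query_layers, support_layers, support_mask):
    """Correlation for every (query layer, support layer) pair: same- and cross-layer."""
    return {(i, j): correlation_map(qf, sf, support_mask)
            for i, qf in enumerate(query_layers)
            for j, sf in enumerate(support_layers)}
```

With L encoder layers of identical shape, this produces L x L correlation maps rather than only the L same-layer ones, which is the "fully connected" aspect the abstract describes.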
08e99b8a7113b920cdc167438308d62625686dd2a1dbb4f626a22f082601ab95
2026-01-07T00:00:00-05:00
Learning Visual Hierarchies in Hyperbolic Space for Image Retrieval
arXiv:2411.17490v4 Announce Type: replace Abstract: Structuring latent representations in a hierarchical manner enables models to learn patterns at multiple levels of abstraction. However, most prevalent image understanding models focus on visual similarity, and learning visual hierarchies is relatively unexplored. In this work, for the first time, we introduce a learning paradigm that can encode user-defined multi-level complex visual hierarchies in hyperbolic space without requiring explicit hierarchical labels. As a concrete example, first, we define a part-based image hierarchy using object-level annotations within and across images. Then, we introduce an approach to enforce the hierarchy using contrastive loss with pairwise entailment metrics. Finally, we discuss new evaluation metrics to effectively measure hierarchical image retrieval. Encoding these complex relationships ensures that the learned representations capture semantic and structural information that transcends mere visual similarity. Experiments in part-based image retrieval show significant improvements in hierarchical retrieval tasks, demonstrating the capability of our model in capturing visual hierarchies.
https://arxiv.org/abs/2411.17490
Academic Papers
svg
6a64f2def54761f12b5ef3f7bf9cf64a07c5a7b87196d21424c18087f99d9702
2026-01-07T00:00:00-05:00
AdaVLN: Towards Visual Language Navigation in Continuous Indoor Environments with Moving Humans
arXiv:2411.18539v3 Announce Type: replace Abstract: Visual Language Navigation is a task that challenges robots to navigate in realistic environments based on natural language instructions. While previous research has largely focused on static settings, real-world navigation must often contend with dynamic human obstacles. Hence, we propose an extension to the task, termed Adaptive Visual Language Navigation (AdaVLN), which seeks to narrow this gap. AdaVLN requires robots to navigate complex 3D indoor environments populated with dynamically moving human obstacles, adding a layer of complexity to navigation tasks that mimic the real world. To support exploration of this task, we also present the AdaVLN simulator and AdaR2R datasets. The AdaVLN simulator enables easy inclusion of fully animated human models directly into common datasets like Matterport3D. We also introduce a "freeze-time" mechanism for both the navigation task and simulator, which pauses world state updates during agent inference, enabling fair comparisons and experimental reproducibility across different hardware. We evaluate several baseline models on this task, analyze the unique challenges introduced by AdaVLN, and demonstrate its potential to bridge the sim-to-real gap in VLN research.
https://arxiv.org/abs/2411.18539
Academic Papers
svg
d6596e9b5dd33e3da14311ecbf3e1988c3d1880ea0b3e59a40082b29b7fdf6f2
2026-01-07T00:00:00-05:00
Neural Power-Optimal Magnetorquer Solution for Multi-Agent Formation and Attitude Control
arXiv:2412.00548v2 Announce Type: replace Abstract: This paper presents a learning-based current calculation model to achieve power-optimal magnetic-field interaction for multi-agent formation and attitude control. In aerospace engineering, electromagnetic coils are referred to as magnetorquer (MTQ) coils and used as satellite attitude actuators in Earth's orbit and for long-term formation and attitude control. This study derives a unique, continuous, and power-optimal current solution via sequential convex programming and approximates it using a multilayer perceptron model. The effectiveness of our strategy was demonstrated through numerical simulations and experimental trials on the formation and attitude control.
https://arxiv.org/abs/2412.00548
Academic Papers
svg
7cdc7d0685579fa193c845e0b5025197a54435b9ad5c65f84347e24c3d61bbb4
2026-01-07T00:00:00-05:00
MemHunter: Automated and Verifiable Memorization Detection at Dataset-scale in LLMs
arXiv:2412.07261v3 Announce Type: replace Abstract: Large language models (LLMs) have been shown to memorize and reproduce content from their training data, raising significant privacy concerns, especially with web-scale datasets. Existing methods for detecting memorization are primarily sample-specific, relying on manually crafted or discretely optimized memory-inducing prompts generated on a per-sample basis, which become impractical for dataset-level detection due to the prohibitive computational cost of iterating through all samples. In real-world scenarios, data owners may need to verify whether a susceptible LLM has memorized their dataset, particularly if the LLM may have collected the data from the web without authorization. To address this, we introduce MemHunter, which trains a memory-inducing LLM and employs hypothesis testing to efficiently detect memorization at the dataset level, without requiring sample-specific memory inducing. Experiments on models like Pythia and Llama demonstrate that MemHunter can extract up to 40% more training data than existing methods under constrained time resources and reduce search time by up to 80% when integrated as a plug-in. Crucially, MemHunter is the first method capable of dataset-level memorization detection, providing a critical tool for assessing privacy risks in LLMs powered by large-scale datasets.
https://arxiv.org/abs/2412.07261
Academic Papers
svg
fac8ea32ab43c99b03720fa66333eecce1a3dee7f726244decc9658a6d28f5ed
2026-01-07T00:00:00-05:00
RobotDiffuse: Diffusion-Based Motion Planning for Redundant Manipulators with the ROP Obstacle Avoidance Dataset
arXiv:2412.19500v2 Announce Type: replace Abstract: Redundant manipulators, with their higher Degrees of Freedom (DoFs), offer enhanced kinematic performance and versatility, making them suitable for applications like manufacturing, surgical robotics, and human-robot collaboration. However, motion planning for these manipulators is challenging due to increased DoFs and complex, dynamic environments. While traditional motion planning algorithms struggle with high-dimensional spaces, deep learning-based methods often face instability and inefficiency in complex tasks. This paper introduces RobotDiffuse, a diffusion model-based approach for motion planning in redundant manipulators. By integrating physical constraints with a point cloud encoder and replacing the U-Net structure with an encoder-only transformer, RobotDiffuse improves the model's ability to capture temporal dependencies and generate smoother, more coherent motion plans. We validate the approach using a complex simulator and release a new dataset, Robot-obstacles-panda (ROP), with 35M robot poses and 0.14M obstacle avoidance scenarios. The highest overall score obtained in the experiment demonstrates the effectiveness of RobotDiffuse and the promise of diffusion models for motion planning tasks. The dataset can be accessed at https://github.com/ACRoboT-buaa/RobotDiffuse.
https://arxiv.org/abs/2412.19500
Academic Papers
svg
f1171506fc52442088b3070f0453468222239b851937adc4b6728fcfa98fec92
2026-01-07T00:00:00-05:00
Steering Flexible Linear Objects in Planar Environments by Two Robot Hands Using Euler's Elastica Solutions
arXiv:2501.02874v4 Announce Type: replace Abstract: The manipulation of flexible objects such as cables, wires and fresh food items by robot hands forms a special challenge in robot grasp mechanics. This paper considers the steering of flexible linear objects in planar environments by two robot hands. The flexible linear object, modeled as an elastic non-stretchable rod, is manipulated by varying the gripping endpoint positions while keeping equal endpoint tangents. The flexible linear object shape has a closed form solution in terms of the grasp endpoint positions and tangents, called Euler's elastica. This paper obtains the elastica solutions under the optimal control framework, then uses the elastica solutions to obtain closed-form criteria for non self-intersection, stability and obstacle avoidance of the flexible linear object. The new tools are incorporated into a planning scheme for steering flexible linear objects in planar environments populated by sparsely spaced obstacles. The scheme is fully implemented and demonstrated with detailed examples.
https://arxiv.org/abs/2501.02874
Academic Papers
svg
e113b1ae6d45593fac49177a6196f0f2e8da792a95c197d111a9cfdc5c28f129
2026-01-07T00:00:00-05:00
The structure of polynomial growth for tree automata/transducers and MSO set queries
arXiv:2501.10270v4 Announce Type: replace Abstract: Given an $\mathbb{N}$-weighted tree automaton, we give a decision procedure for exponential vs polynomial growth (with respect to the input size) in quadratic time, and an algorithm that computes the exact polynomial degree of growth in cubic time. As a special case, they apply to the growth of the ambiguity of a nondeterministic tree automaton, i.e. the number of distinct accepting runs over a given input. We deduce analogous decidability results (ignoring complexity) for the growth of the number of results of set queries in Monadic Second-Order logic (MSO) over ranked trees. In the case of polynomial growth of degree $k$, we also prove a reparameterization theorem for such queries: their results can be mapped to $k$-tuples of input nodes in a finite-to-one and MSO-definable fashion. We then apply these tools to study growth rates and subclass membership problems for tree-to-tree functions. Using new proof strategies, we recover and generalize known results concerning polyregular functions, total deterministic macro tree transducers, and partial nondeterministic top-down tree transducers. In particular, we give a procedure to decide polynomial size-to-height increase for both macro tree transducers and MSO set interpretations, and compute the degree. The paper concludes with a survey of a wide range of related work.
https://arxiv.org/abs/2501.10270
Academic Papers
svg
f0e6419086c45d31b08ca48367df88432ad9de1fbdb1528a47e6529d27b9837f
2026-01-07T00:00:00-05:00
SLVC-DIDA: Signature-less Verifiable Credential-based Issuer-hiding and Multi-party Authentication for Decentralized Identity
arXiv:2501.11052v3 Announce Type: replace Abstract: As an emerging paradigm in digital identity, Decentralized Identity (DID) offers advantages over traditional identity management methods in a variety of aspects, e.g., enhancing user-centric online services and ensuring complete user autonomy and control. Verifiable Credential (VC) techniques are used to facilitate decentralized DID-based access control across multiple entities. However, existing DID schemes generally rely on a distributed public key infrastructure that also causes challenges, such as context information deduction, key exposure, and issuer data leakage. To address the issues above, this paper proposes an issuer-hiding and privacy-preserving DID multi-party authentication model with a signature-less VC scheme, named SLVC-DIDA, for the first time. Our proposed scheme avoids the dependence on signing keys by employing hashing and issuer membership proofs, which supports universal zero-knowledge multi-party DID authentications, eliminating additional technical integrations. We adopt a novel zero-knowledge circuit to maintain the anonymity of the issuer set, thereby enabling public verification while safeguarding the privacy of identity attributes via a Merkle tree-based VC list. Furthermore, by eliminating reliance on a Public Key Infrastructure (PKI), SLVC-DIDA enables decentralized and self-sovereign DID authentication. Our experiments further evaluate the effectiveness and practicality of SLVC-DIDA.
https://arxiv.org/abs/2501.11052
Academic Papers
svg
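The Merkle tree-based VC list mentioned above rests on standard Merkle commitments; here is a minimal sketch of root construction and inclusion proofs (illustrative of the primitive only, not SLVC-DIDA's actual construction):

```python
import hashlib

def _h(b):
    return hashlib.sha256(b).digest()

def merkle_root(leaves):
    """Root committing to a list of credential attributes (odd node duplicated)."""
    level = [_h(x) for x in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        level = [_h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(leaves, index):
    """Sibling path proving one attribute is in the list without revealing the rest."""
    level, path = [_h(x) for x in leaves], []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        sib = index ^ 1
        path.append((level[sib], sib < index))   # (sibling hash, sibling-is-left?)
        level = [_h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return path

def verify(leaf, path, root):
    """Recompute the root from a leaf and its sibling path."""
    node = _h(leaf)
    for sib, is_left in path:
        node = _h(sib + node) if is_left else _h(node + sib)
    return node == root
```

A holder can thus disclose a single attribute plus a logarithmic-size path, keeping the other attributes in the list private.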
e710dc095379daf260c75efda2af3546b007fcc8c75aaee10eb2858f42358067
2026-01-07T00:00:00-05:00
Model-checking real-time systems: revisiting the alternating automaton route
arXiv:2501.17576v2 Announce Type: replace Abstract: Alternating timed automata (ATA) are an extension of timed automata, that are closed under complementation and hence amenable to logic-to-automata translations. Several timed logics, including Metric Temporal Logic (MTL), can be converted to equivalent 1-clock ATAs (1-ATAs). Satisfiability of an MTL formula reduces to checking emptiness of a 1-ATA. A straightforward modification of the 1-ATA emptiness algorithm can be applied for model-checking timed automata models against 1-ATA specifications. However, existing emptiness algorithms for 1-ATA proceed by an extended region construction, and are not suitable for implementations. Our goal in this work is to initiate the study of zone-based methods directly for 1-ATAs. We first introduce a deactivation operation on the 1-ATA syntax to allow an explicit deactivation of the clock in transitions. Using the deactivation operation, we improve the existing MTL-to-1-ATA conversion and present a fragment of MTL for which the equivalent 1-ATA generate a bounded number of variables. Secondly, we develop the idea of zones for 1-ATA and present an emptiness algorithm which explores a corresponding zone graph. For termination, a special entailment check between zones is necessary. Our main technical contributions are: (1) an algorithm for the entailment check using simple zone operations and (2) an NP-hardness for the entailment check in the general case. Finally, we adapt our methods to the problem of model-checking timed automata models against 1-ATA specifications. We observe that when the timed automaton is strongly non-Zeno or when the 1-ATA generates a bounded number of variables, a modified entailment check with quadratic complexity can be applied.
https://arxiv.org/abs/2501.17576
Academic Papers
svg
92ad737bf1831ea93fbb7373c19efb8354ab44c15c00423d5498a164adeddbc8
2026-01-07T00:00:00-05:00
Successor-Generator Planning with LLM-generated Heuristics
arXiv:2501.18784v4 Announce Type: replace Abstract: Heuristics are a central component of deterministic planning, particularly in domain-independent settings where general applicability is prioritized over task-specific tuning. This work revisits that paradigm in light of recent advances in large language models (LLMs), which enable the automatic synthesis of heuristics directly from problem definitions -- bypassing the need for handcrafted domain knowledge. We present a method that employs LLMs to generate problem-specific heuristic functions from planning tasks specified through successor generators, goal tests, and initial states written in a general-purpose programming language. These heuristics are compiled and integrated into standard heuristic search algorithms, such as greedy best-first search. Our approach achieves competitive, and in many cases state-of-the-art, performance across a broad range of established planning benchmarks. Moreover, it enables the solution of problems that are difficult to express in traditional formalisms, including those with complex numeric constraints or custom transition dynamics. We provide an extensive empirical evaluation that characterizes the strengths and limitations of the approach across diverse planning settings, demonstrating its effectiveness.
https://arxiv.org/abs/2501.18784
Academic Papers
svg
d09a9c55f06daddd500968432cd5fd9da3a7390ec5671ac1ddfd8af543fb60d7
2026-01-07T00:00:00-05:00
Leveraging the true depth of LLMs
arXiv:2502.02790v3 Announce Type: replace Abstract: The remarkable capabilities of Large Language Models (LLMs) are overshadowed by their immense computational cost. While recent work has shown that many LLM layers can be reordered or even removed with minimal impact on accuracy, these insights have not been translated into significant inference speedups. To bridge this gap, we introduce a novel method that restructures the computational graph by grouping and evaluating consecutive layer pairs in parallel. This approach, requiring no retraining, yields a 1.19x throughput gain on Llama 2 7B while reducing the average benchmark accuracy by only 1.5%. We demonstrate the practical value of this method for large-scale LLM deployment and show that some of the lost accuracy can be recovered with lightweight fine-tuning of the parallelized layers.
https://arxiv.org/abs/2502.02790
Academic Papers
svg
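The grouping idea — evaluating consecutive layer pairs on the same input so their residual updates can run concurrently — can be sketched with toy residual layers (the small linear g_i stand-ins and all sizes are illustrative assumptions, not the paper's Llama setup):

```python
import numpy as np

rng = np.random.default_rng(0)
dim, n_layers = 8, 4
# Toy residual "layers": x -> x + g_i(x), with small random linear g_i.
Ws = [0.01 * rng.standard_normal((dim, dim)) for _ in range(n_layers)]

def sequential(x):
    """Standard depth-wise evaluation: each layer sees the previous layer's output."""
    for W in Ws:
        x = x + W @ x
    return x

def paired_parallel(x):
    """Evaluate consecutive layer pairs on the same input and sum their residual
    updates -- g_i and g_{i+1} are now independent and could run concurrently."""
    for i in range(0, n_layers, 2):
        x = x + Ws[i] @ x + Ws[i + 1] @ x
    return x

x = rng.standard_normal(dim)
err = np.linalg.norm(sequential(x) - paired_parallel(x))  # small when updates are small
```

The approximation error is the product of consecutive residual updates, which stays small when each layer only perturbs its input slightly — the property the abstract's reordering/removal observations suggest.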
9086f07f4b63841c14a25fdca7b707f6c640391e8d8ee3ed1d5d20c21538529b
2026-01-07T00:00:00-05:00
Training Set Reconstruction from Differentially Private Forests: How Effective is DP?
arXiv:2502.05307v4 Announce Type: replace Abstract: Recent research has shown that structured machine learning models such as tree ensembles are vulnerable to privacy attacks targeting their training data. To mitigate these risks, differential privacy (DP) has become a widely adopted countermeasure, as it offers rigorous privacy protection. In this paper, we introduce a reconstruction attack targeting state-of-the-art $\epsilon$-DP random forests. By leveraging a constraint programming model that incorporates knowledge of the forest's structure and DP mechanism characteristics, our approach formally reconstructs the most likely dataset that could have produced a given forest. Through extensive computational experiments, we examine the interplay between model utility, privacy guarantees and reconstruction accuracy across various configurations. Our results reveal that random forests trained with meaningful DP guarantees can still leak portions of their training data. Specifically, while DP reduces the success of reconstruction attacks, the only forests fully robust to our attack exhibit predictive performance no better than a constant classifier. Building on these insights, we also provide practical recommendations for the construction of DP random forests that are more resilient to reconstruction attacks while maintaining a non-trivial predictive performance.
https://arxiv.org/abs/2502.05307
Academic Papers
svg
b235a060ca0aa0a7c2e1c50c96141647cc60257d9989fb2134fee68a055bac54
2026-01-07T00:00:00-05:00
DenseSplat: Densifying Gaussian Splatting SLAM with Neural Radiance Prior
arXiv:2502.09111v2 Announce Type: replace Abstract: Gaussian SLAM systems excel in real-time rendering and fine-grained reconstruction compared to NeRF-based systems. However, their reliance on extensive keyframes is impractical for deployment in real-world robotic systems, which typically operate under sparse-view conditions that can result in substantial holes in the map. To address these challenges, we introduce DenseSplat, the first SLAM system that effectively combines the advantages of NeRF and 3DGS. DenseSplat utilizes sparse keyframes and NeRF priors for initializing primitives that densely populate maps and seamlessly fill gaps. It also implements geometry-aware primitive sampling and pruning strategies to manage granularity and enhance rendering efficiency. Moreover, DenseSplat integrates loop closure and bundle adjustment, significantly enhancing frame-to-frame tracking accuracy. Extensive experiments on multiple large-scale datasets demonstrate that DenseSplat achieves superior performance in tracking and mapping compared to current state-of-the-art methods.
https://arxiv.org/abs/2502.09111
Academic Papers
svg
a08aa3d9a4ff50ccc53b6079171988cf16ea5829a5056f420886f9d41906d231
2026-01-07T00:00:00-05:00
Whose story is it? Personalizing story generation by inferring author styles
arXiv:2502.13028v3 Announce Type: replace Abstract: Personalization is critical for improving user experience in interactive writing and educational applications, yet remains understudied in story generation. We study the task of personalizing story generation, where our goal is to mimic an author's writing style, given other stories written by them. We collect Mythos, a dataset of 3.6k stories from 112 authors, with an average of 16 stories per author, across five distinct sources reflecting diverse story-writing settings. We propose a two-stage pipeline for personalized story generation: first, we infer authors' implicit writing characteristics and organize them into an Author Writing Sheet, which is validated by humans to be of high quality; second, we simulate the author's persona using tailored persona descriptions and personalized story rules. We find that stories personalized using the Author Writing Sheet outperform a non-personalized baseline, achieving a 78% win-rate in capturing authors' past style and 59% in similarity to ground-truth author stories. Human evaluation supports these findings and further highlights trends, such as Reddit stories being easier to personalize, and the Creativity and Language Use aspects of stories being easier to personalize than the Plot.
https://arxiv.org/abs/2502.13028
Academic Papers
svg
f497f6c2ff7db3cf1427bf96be4ec49acd417c00888301d0204120746d8d6b9a
2026-01-07T00:00:00-05:00
Geolocation with Real Human Gameplay Data: A Large-Scale Dataset and Human-Like Reasoning Framework
arXiv:2502.13759v3 Announce Type: replace Abstract: Geolocation, the task of identifying an image's location, requires complex reasoning and is crucial for navigation, monitoring, and cultural preservation. However, current methods often produce coarse, imprecise, and non-interpretable localization. A major challenge lies in the quality and scale of existing geolocation datasets. These datasets are typically small-scale and automatically constructed, leading to noisy data and inconsistent task difficulty, with images that either reveal answers too easily or lack sufficient clues for reliable inference. To address these challenges, we introduce a comprehensive geolocation framework with three key components: GeoComp, a large-scale dataset; GeoCoT, a novel reasoning method; and GeoEval, an evaluation metric, collectively designed to address critical challenges and drive advancements in geolocation research. At the core of this framework is GeoComp (Geolocation Competition Dataset), a large-scale dataset collected from a geolocation game platform involving 740K users over two years. It comprises 25 million entries of metadata and 3 million geo-tagged locations spanning much of the globe, with each location annotated thousands to tens of thousands of times by human users. The dataset offers diverse difficulty levels for detailed analysis and highlights key gaps in current models. Building on this dataset, we propose Geographical Chain-of-Thought (GeoCoT), a novel multi-step reasoning framework designed to enhance the reasoning capabilities of Large Vision Models (LVMs) in geolocation tasks. GeoCoT improves performance by integrating contextual and spatial cues through a multi-step process that mimics human geolocation reasoning. Finally, using the GeoEval metric, we demonstrate that GeoCoT significantly boosts geolocation accuracy by up to 25% while enhancing interpretability.
https://arxiv.org/abs/2502.13759
Academic Papers
svg
2020033a5ec7bb8fcf08d6fd0be5b670015199e5d0b9c919716caf5f2a98a0fb
2026-01-07T00:00:00-05:00
Towards Threshold-Free KV Cache Pruning
arXiv:2502.16886v3 Announce Type: replace Abstract: To reduce memory consumption during LLM inference, prior works have proposed numerous methods that focus on KV cache pruning based on various criteria. While these techniques often accomplish lossless memory reduction on many datasets, they often rely on an under-emphasized condition: a dataset/domain-specific budget size threshold needs to be pre-determined to achieve the optimal performance. However, such input-specific tuning may be considerably limited in real-world scenarios, as open-domain inputs span diverse domains, lengths and difficulty levels, without clear boundaries for pre-tuning. Thus, the dependence on an input-sensitive threshold can be an inherent limitation that may cause large degradation on arbitrary inputs. In this work, we propose a new objective that lifts the threshold constraints for robust KV pruning, calling for "threshold-free" methods that automatically adjust budget sizes while ensuring full-cache performance. We then propose a novel method ReFreeKV as the first solution fulfilling this objective, validated by intensive experiments on 13 datasets of diverse context lengths, task types, and model sizes.
https://arxiv.org/abs/2502.16886
Academic Papers
svg
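The notion of automatically adjusted budget sizes can be illustrated with a simple input-adaptive selection rule based on cumulative attention mass (an illustrative stand-in, not the ReFreeKV algorithm):

```python
import numpy as np

def adaptive_kv_keep(attn_scores, mass=0.99):
    """Input-adaptive KV selection: keep the smallest set of past tokens whose
    normalized attention mass reaches `mass`. The budget size adapts to each
    input instead of being a fixed, pre-tuned threshold."""
    p = attn_scores / attn_scores.sum()
    order = np.argsort(p)[::-1]          # tokens by descending attention
    cum = np.cumsum(p[order])
    k = int(np.searchsorted(cum, mass)) + 1
    return np.sort(order[:k])            # indices of retained cache entries
```

A sharply peaked attention distribution yields a tiny cache, while a flat one keeps nearly everything — the retained budget tracks the input rather than a dataset-specific threshold.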
88e7ac7f7bfb35c4dcceef9d9ba7a42aefd7362371e98a682a595d6fd9a402e3
2026-01-07T00:00:00-05:00
It's Not All Black and White: Degree of Truthfulness for Risk-Avoiding Agents
arXiv:2502.18805v3 Announce Type: replace Abstract: The classic notion of \emph{truthfulness} requires that no agent has a profitable manipulation -- an untruthful report that, for \emph{some} combination of reports of the other agents, increases her utility. This strong notion implicitly assumes that the manipulating agent either knows what all other agents are going to report, or is willing to take the risk and act as-if she knows their reports. Without knowledge of the others' reports, most manipulations are \emph{risky} -- they might decrease the manipulator's utility for some other combinations of reports by the other agents. Accordingly, a recent paper (Bu, Song and Tao, ``On the existence of truthful fair cake cutting mechanisms'', Artificial Intelligence 319 (2023), 103904) suggests a relaxed notion, which we refer to as \emph{risk-avoiding truthfulness (RAT)}, which requires only that no agent can gain from a \emph{safe} manipulation -- one that is sometimes beneficial and never harmful. Truthfulness and RAT are two extremes: the former considers manipulators with complete knowledge of others, whereas the latter considers manipulators with no knowledge at all. In reality, agents often know about some -- but not all -- of the other agents. This paper introduces the \emph{RAT-degree} of a mechanism, defined as the smallest number of agents whose reports, if known, may allow another agent to safely manipulate, or $n$ if there is no such number. This notion interpolates between classic truthfulness (degree $n$) and RAT (degree at least $1$): a mechanism with a higher RAT-degree is harder to manipulate safely. To illustrate the generality and applicability of this concept, we analyze the RAT-degree of prominent mechanisms across various social choice settings, including auctions, indivisible goods allocations, cake-cutting, voting, and two-sided matching.
https://arxiv.org/abs/2502.18805
Academic Papers
svg
51591928363885aceb5298e4114261d87fd250681b0a92ce8f1005c947578a46
2026-01-07T00:00:00-05:00
Protecting multimodal large language models against misleading visualizations
arXiv:2502.20503v5 Announce Type: replace Abstract: Visualizations play a pivotal role in daily communication in an increasingly data-driven world. Research on multimodal large language models (MLLMs) for automated chart understanding has accelerated massively, with steady improvements on standard benchmarks. However, for MLLMs to be reliable, they must be robust to misleading visualizations, i.e., charts that distort the underlying data, leading readers to draw inaccurate conclusions. Here, we uncover an important vulnerability: MLLM question-answering (QA) accuracy on misleading visualizations drops on average to the level of the random baseline. To address this, we provide the first comparison of six inference-time methods to improve QA performance on misleading visualizations, without compromising accuracy on non-misleading ones. We find that two methods, table-based QA and redrawing the visualization, are effective, with improvements of up to 19.6 percentage points. We make our code and data available.
https://arxiv.org/abs/2502.20503
Academic Papers
svg
41165b96d83651f0acb4fe4a4a43ee1a143a6ea548df43c8ed6aca13bda0dc7b
2026-01-07T00:00:00-05:00
Active operator learning with predictive uncertainty quantification for partial differential equations
arXiv:2503.03178v2 Announce Type: replace Abstract: With the increased prevalence of neural operators being used to provide rapid solutions to partial differential equations (PDEs), understanding the accuracy of model predictions and the associated error levels is necessary for deploying reliable surrogate models in scientific applications. Existing uncertainty quantification (UQ) frameworks employ ensembles or Bayesian methods, which can incur substantial computational costs during both training and inference. We propose a lightweight predictive UQ method tailored for Deep operator networks (DeepONets) that also generalizes to other operator networks. Numerical experiments on linear and nonlinear PDEs demonstrate that the framework's uncertainty estimates are unbiased and provide accurate out-of-distribution uncertainty predictions with a sufficiently large training dataset. Our framework provides fast inference and uncertainty estimates that can efficiently drive outer-loop analyses that would be prohibitively expensive with conventional solvers. We demonstrate how predictive uncertainties can be used in the context of Bayesian optimization and active learning problems to yield improvements in accuracy and data-efficiency for outer-loop optimization procedures. In the active learning setup, we extend the framework to Fourier Neural Operators (FNO) and describe a generalized method for other operator networks. To enable real-time deployment, we introduce an inference strategy based on precomputed trunk outputs and a sparse placement matrix, reducing evaluation time by more than a factor of five. Our method provides a practical route to uncertainty-aware operator learning in time-sensitive settings.
https://arxiv.org/abs/2503.03178
Academic Papers
svg
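The precomputed-trunk inference trick can be sketched directly from the DeepONet structure, where a prediction at point $y$ is the inner product of branch and trunk outputs. The networks below are stood in for by random linear maps, and the sparse placement matrix is omitted; this is a sketch of the speedup idea, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
K, n_pts = 16, 200  # latent width, number of fixed evaluation points

W_b = rng.normal(size=(K, 50))
W_t = rng.normal(size=(K, 2))

def branch(f):  # maps an input function's samples to K coefficients
    return W_b @ f

def trunk(y):   # maps a query coordinate to K basis values
    return np.tanh(W_t @ np.array([y, 1.0]))

ys = np.linspace(0.0, 1.0, n_pts)

# Offline: precompute trunk outputs once for the fixed query points.
T = np.stack([trunk(y) for y in ys])  # shape (n_pts, K)

# Online: each new input function costs one branch pass plus one matmul,
# instead of n_pts separate trunk evaluations.
f = rng.normal(size=50)
u_fast = T @ branch(f)
u_slow = np.array([trunk(y) @ branch(f) for y in ys])
print(np.allclose(u_fast, u_slow))  # identical predictions, far fewer trunk passes
```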
b8c20966dc75941e5b78008ba943692a004f35401d3a15b6d441cd27299200df
2026-01-07T00:00:00-05:00
The MASK Benchmark: Disentangling Honesty From Accuracy in AI Systems
arXiv:2503.03750v3 Announce Type: replace Abstract: As large language models (LLMs) become more capable and agentic, the requirement for trust in their outputs grows significantly, yet at the same time concerns have been mounting that models may learn to lie in pursuit of their goals. To address these concerns, a body of work has emerged around the notion of "honesty" in LLMs, along with interventions aimed at mitigating deceptive behaviors. However, some benchmarks claiming to measure honesty in fact simply measure accuracy--the correctness of a model's beliefs--in disguise. Moreover, no benchmarks currently exist for directly measuring whether language models lie. In this work, we introduce a large-scale human-collected dataset for directly measuring lying, allowing us to disentangle accuracy from honesty. Across a diverse set of LLMs, we find that while larger models obtain higher accuracy on our benchmark, they do not become more honest. Surprisingly, most frontier LLMs obtain high scores on truthfulness benchmarks yet exhibit a substantial propensity to lie under pressure, resulting in low honesty scores on our benchmark. We find that simple methods, such as representation engineering interventions, can improve honesty. These results underscore the growing need for robust evaluations and effective interventions to ensure LLMs remain trustworthy.
https://arxiv.org/abs/2503.03750
Academic Papers
svg
7457946d9aced9bc20874b321bbed108168fd73c04961370881f61c77c86957c
2026-01-07T00:00:00-05:00
E$^2$AT: Multimodal Jailbreak Defense via Dynamic Joint Optimization for Multimodal Large Language Models
arXiv:2503.04833v3 Announce Type: replace Abstract: Research efforts have been made to make Multimodal Large Language Models (MLLMs) robust against jailbreak attacks. However, existing methods for improving MLLMs' robustness still face critical challenges: (1) how to efficiently tune massive weight parameters and (2) how to ensure robustness against attacks across both visual and textual modalities. To this end, we propose an \textbf{E}fficient \textbf{E}nd-to-end \textbf{A}dversarial \textbf{T}raining (E$^2$AT) framework for both visual and textual adversarial attacks. Specifically, for the visual aspect, E$^2$AT incorporates an efficient projector-based AT module that aligns the attack samples at the feature level. For training objectives, we propose a Dynamic Joint Multimodal Optimization (DJMO) strategy to enhance generalization ability against jailbreak attacks by dynamically adjusting weights between normal and adversarial objectives. Extensive experiments are conducted with five major jailbreak attack methods across three mainstream MLLMs. Results demonstrate that our E$^2$AT achieves state-of-the-art performance, outperforming existing baselines by an average margin of 34\% across text and image modalities, while maintaining clean task performance. Furthermore, evaluations of real-world embodied intelligent systems highlight the practical applicability of E$^2$AT, paving the way for the development of more secure and reliable multimodal systems. Our code is available on \href{https://anonymous.4open.science/r/E2AT_568}{\textcolor{red}{https://anonymous.4open.science/r/E2AT\_568}}.
https://arxiv.org/abs/2503.04833
Academic Papers
svg
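One plausible reading of "dynamically adjusting weights between normal and adversarial objectives" is self-normalized weighting that emphasizes whichever objective currently lags. This is an illustrative sketch only; the paper's actual DJMO rule may differ.

```python
def joint_loss(clean_loss, adv_loss, eps=1e-8):
    """Dynamic joint objective sketch: weight each term by its share of the
    total loss, so the currently-worse objective receives more gradient.
    (Hypothetical instantiation, not the paper's exact strategy.)"""
    total = clean_loss + adv_loss + eps
    w_clean = clean_loss / total
    w_adv = adv_loss / total
    return w_clean * clean_loss + w_adv * adv_loss

# When adversarial loss dominates (3.0 vs 1.0), it also dominates the
# combined objective, pulling optimization toward robustness.
print(joint_loss(1.0, 3.0))
```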
dc11ceae56d72accb3aaf073b575a306bb03768374b37078b26878544522a890
2026-01-07T00:00:00-05:00
From Intrinsic Toxicity to Reception-Based Toxicity: A Contextual Framework for Prediction and Evaluation
arXiv:2503.16072v3 Announce Type: replace Abstract: Most toxicity detection models treat toxicity as an intrinsic property of text, overlooking the role of context in shaping its impact. In this position paper, drawing on insights from psychology, neuroscience, and computational social science, we reconceptualise toxicity as a socially emergent signal of stress. We formalise this perspective in the Contextual Stress Framework (CSF), which defines toxicity as a stress-inducing norm violation within a given context and introduces an additional dimension for toxicity detection. As one possible realisation of CSF, we introduce PONOS (Proportion Of Negative Observed Sentiments), a metric that quantifies toxicity through collective social reception rather than lexical features. We validate this approach on a novel dataset, demonstrating improved contextual sensitivity and adaptability when used alongside existing models.
https://arxiv.org/abs/2503.16072
Academic Papers
svg
8d422a635e9110c0878378b322b35389a4f53c3898b864888c27cc1436825ec1
2026-01-07T00:00:00-05:00
Offline Model-Based Optimization: Comprehensive Review
arXiv:2503.17286v2 Announce Type: replace Abstract: Offline optimization is a fundamental challenge in science and engineering, where the goal is to optimize black-box functions using only offline datasets. This setting is particularly relevant when querying the objective function is prohibitively expensive or infeasible, with applications spanning protein engineering, material discovery, neural architecture search, and beyond. The main difficulty lies in accurately estimating the objective landscape beyond the available data, where extrapolations are fraught with significant epistemic uncertainty. This uncertainty can lead to objective hacking (reward hacking), exploiting model inaccuracies in unseen regions, or other spurious optimizations that yield misleadingly high performance estimates outside the training distribution. Recent advances in model-based optimization (MBO) have harnessed the generalization capabilities of deep neural networks to develop offline-specific surrogate and generative models. Trained with carefully designed strategies, these models are more robust against out-of-distribution issues, facilitating the discovery of improved designs. Despite its growing impact in accelerating scientific discovery, the field lacks a comprehensive review. To bridge this gap, we present the first thorough review of offline MBO. We begin by formalizing the problem for both single-objective and multi-objective settings and by reviewing recent benchmarks and evaluation metrics. We then categorize existing approaches into two key areas: surrogate modeling, which emphasizes accurate function approximation in out-of-distribution regions, and generative modeling, which explores high-dimensional design spaces to identify high-performing designs. Finally, we examine the key challenges and propose promising directions for advancement in this rapidly evolving field, including safe control of superintelligent systems.
https://arxiv.org/abs/2503.17286
Academic Papers
svg
2727336d363862126a47d592eea3095096761e485128d0e35783b35d436d6750
2026-01-07T00:00:00-05:00
Graph-Structured Driven Dual Adaptation for Mitigating Popularity Bias
arXiv:2503.23358v2 Announce Type: replace Abstract: Popularity bias is a common challenge in recommender systems. It often causes unbalanced item recommendation performance and intensifies the Matthew effect. Due to limited user-item interactions, unpopular items are frequently constrained to the embedding neighborhoods of only a few users, leading to representation collapse and weakening the model's generalization. Although existing supervised alignment and reweighting methods can help mitigate this problem, they still face two major limitations: (1) they overlook the inherent variability among different Graph Convolutional Networks (GCNs) layers, which can result in negative gains in deeper layers; (2) they rely heavily on fixed hyperparameters to balance popular and unpopular items, limiting adaptability to diverse data distributions and increasing model complexity. To address these challenges, we propose Graph-Structured Dual Adaptation Framework (GSDA), a dual adaptive framework for mitigating popularity bias in recommendation. Our theoretical analysis shows that supervised alignment in GCNs is hindered by the over-smoothing effect, where the distinction between popular and unpopular items diminishes as layers deepen, reducing the effectiveness of alignment at deeper levels. To overcome this limitation, GSDA integrates a hierarchical adaptive alignment mechanism that counteracts entropy decay across layers together with a distribution-aware contrastive weighting strategy based on the Gini coefficient, enabling the model to adapt its debiasing strength dynamically without relying on fixed hyperparameters. Extensive experiments on three benchmark datasets demonstrate that GSDA effectively alleviates popularity bias while consistently outperforming state-of-the-art methods in recommendation performance.
https://arxiv.org/abs/2503.23358
Academic Papers
svg
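GSDA's distribution-aware contrastive weighting is driven by the Gini coefficient of the interaction distribution. A minimal computation over item-popularity counts (how the coefficient is mapped to a debiasing weight is left to the paper):

```python
def gini(xs):
    """Gini coefficient of a non-negative distribution:
    0 for a uniform distribution, approaching 1 as mass concentrates."""
    xs = sorted(xs)
    n, s = len(xs), sum(xs)
    return sum((2 * (i + 1) - n - 1) * x for i, x in enumerate(xs)) / (n * s)

# Skewed popularity (a strong Matthew effect) yields a high Gini,
# which a GSDA-style scheme could translate into stronger debiasing.
print(round(gini([1, 1, 1, 1]), 3))    # uniform interactions
print(round(gini([0, 0, 0, 100]), 3))  # one blockbuster item
```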
4049ee9110f5128486a32d02000f55d1545755c58e2877eec3fb20dcecc43afe
2026-01-07T00:00:00-05:00
Self-Routing RAG: Binding Selective Retrieval with Knowledge Verbalization
arXiv:2504.01018v3 Announce Type: replace Abstract: Selective retrieval aims to make retrieval-augmented generation (RAG) more efficient and reliable by skipping retrieval when an LLM's parametric knowledge suffices. Despite promising results, existing methods are constrained by a binary design choice: either retrieve from a single external source or skip retrieval and let the LLM directly produce the final answer. We argue that this fallback underestimates the model's knowledge and obscures the more general multi-source decision problem that arises in practical systems. We propose Self-Routing RAG (SR-RAG), which casts selective retrieval as knowledge source selection and treats the LLM itself as a first-class knowledge source. SR-RAG learns to select an appropriate knowledge source, optionally verbalize parametric knowledge, and answer using the selected source, all within a single left-to-right generation pass. SR-RAG further augments source selection by combining LLM-based uncertainty with a flexible external policy datastore to improve decision calibration. Across four benchmarks and three 7B-class LLMs, SR-RAG outperforms a strong selective retrieval baseline by 8.5%/2.1%/4.7% while performing 26%/40%/21% fewer retrievals, and it achieves favorable accuracy-latency trade-offs without dataset-specific threshold tuning.
https://arxiv.org/abs/2504.01018
Academic Papers
svg
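Selective retrieval, the starting point that SR-RAG generalizes, can be sketched as a confidence-gated skip. The threshold and all stand-in functions below are hypothetical; SR-RAG additionally selects among multiple sources, verbalizes parametric knowledge, and calibrates decisions with a policy datastore.

```python
def answer(question, llm_confidence, retrieve, llm_answer, tau=0.7):
    """Skip retrieval when the model's own (parametric) knowledge suffices;
    otherwise augment the prompt with retrieved context. `tau` is a
    hypothetical confidence threshold."""
    if llm_confidence(question) >= tau:
        return llm_answer(question)  # parametric-knowledge path
    return llm_answer(question + " | context: " + retrieve(question))

# Toy stand-ins for the LLM and the retriever:
kb = {"capital of France?": "Paris"}
out = answer("capital of France?",
             llm_confidence=lambda q: 0.9,   # confident -> no retrieval
             retrieve=kb.get,
             llm_answer=lambda q: "Paris")
print(out)
```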
5139c919f24a6782159f1e71c6a62560e1874a72d7c0622c5ea5678acb8346cb
2026-01-07T00:00:00-05:00
EgoLog: Ego-Centric Fine-Grained Daily Log with Ubiquitous Wearables
arXiv:2504.02624v3 Announce Type: replace Abstract: Despite advances in human activity recognition (HAR) with different modalities, a precise, robust, and accurate daily log system is not yet available. Current solutions primarily rely on controlled, lab-based data collection, which limits their real-world applicability. The challenges towards a fine-grained daily log are 1) contextual awareness, 2) spatial awareness, and 3) effective fusion of multi-modal sensor data. To solve them, we propose EgoLog, which integrates effective audio-IMU fusion for daily logging with ubiquitous wearables. Our approach first fuses audio and IMU data from two perspectives: temporal understanding and spatial understanding. We extract scenario-level features and aggregate them in the time dimension, while using motion compensation to enhance the performance of sound source localization. The knowledge obtained from these steps is then integrated into a multi-modal HAR framework. Here, the scenario provides prior knowledge, and the spatial location helps differentiate the user from the background. Furthermore, we integrate an LLM to enhance scenario recognition through logical reasoning. The knowledge derived from the LLM is subsequently transferred back to the local device to enable efficient, on-device inference. Evaluated on both public and self-collected datasets, EgoLog achieves effective multimodal fusion for both activity and scenario recognition, outperforming the baseline by 12% and 15%, respectively.
https://arxiv.org/abs/2504.02624
Academic Papers
svg
8021f77571630ee258aae0604ff32cd57bf0201f9ad656d53c01f4ccda8460c4
2026-01-07T00:00:00-05:00
Solving the Paint Shop Problem with Flexible Management of Multi-Lane Buffers Using Reinforcement Learning and Action Masking
arXiv:2504.02644v2 Announce Type: replace Abstract: In the paint shop problem, an unordered incoming sequence of cars assigned to different colors has to be reshuffled with the objective of minimizing the number of color changes. To reshuffle the incoming sequence, manufacturers can employ a first-in-first-out multi-lane buffer system allowing store and retrieve operations. So far, prior studies primarily focused on simple decision heuristics like greedy or simplified problem variants that do not allow full flexibility when performing store and retrieve operations. In this study, we propose a reinforcement learning approach to minimize color changes for the flexible problem variant, where store and retrieve operations can be performed in an arbitrary order. After proving that greedy retrieval is optimal, we incorporate this finding into the model using action masking. Our evaluation, based on 170 problem instances with 2-8 buffer lanes and 5-15 colors, shows that our approach reduces color changes compared to existing methods by considerable margins depending on the problem size. Furthermore, we demonstrate the robustness of our approach towards different buffer sizes and imbalanced color distributions.
https://arxiv.org/abs/2504.02644
Academic Papers
svg
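Greedy retrieval, which the paper proves optimal, can be sketched directly: always pull from a lane whose front car matches the last painted color when one exists. The tie-breaking and data layout below are simplifications.

```python
from collections import deque

def greedy_retrieve(lanes, last_color):
    """Prefer a lane whose front car matches the last painted color;
    otherwise take the first non-empty lane."""
    for i, lane in enumerate(lanes):
        if lane and lane[0] == last_color:
            return i
    return next(i for i, lane in enumerate(lanes) if lane)

def paint_sequence(lanes):
    """Empty all FIFO buffer lanes using greedy retrieval."""
    lanes = [deque(l) for l in lanes]
    out = []
    while any(lanes):
        i = greedy_retrieve(lanes, out[-1] if out else None)
        out.append(lanes[i].popleft())
    return out

def color_changes(seq):
    return sum(a != b for a, b in zip(seq, seq[1:]))

seq = paint_sequence([["red", "blue"], ["red", "red"]])
print(seq, color_changes(seq))  # all reds are batched; only one color change
```

The RL component of the paper then only has to learn the store decisions, with greedy retrieval enforced via action masking.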
6a3148f64c5fadcfbfc698bcf8c63db9fec2b90bfc75667286dea9072bd45e7c
2026-01-07T00:00:00-05:00
Heuristic Methods are Good Teachers to Distill MLPs for Graph Link Prediction
arXiv:2504.06193v2 Announce Type: replace Abstract: Link prediction is a crucial graph-learning task with applications including citation prediction and product recommendation. Distilling Graph Neural Network (GNN) teachers into Multi-Layer Perceptron (MLP) students has emerged as an effective approach to achieve strong performance while reducing computational cost by removing the graph dependency. However, existing distillation methods only use standard GNNs and overlook alternative teachers such as specialized models for link prediction (GNN4LP) and heuristic methods (e.g., common neighbors). This paper first explores the impact of different teachers in GNN-to-MLP distillation. Surprisingly, we find that stronger teachers do not always produce stronger students: MLPs distilled from GNN4LP can underperform those distilled from simpler GNNs, while weaker heuristic methods can teach MLPs to near-GNN performance with drastically reduced training costs. Building on these insights, we propose Ensemble Heuristic-Distilled MLPs (EHDM), which eliminates graph dependencies while effectively integrating complementary signals via a gating mechanism. Experiments on ten datasets show an average 7.93% improvement over previous GNN-to-MLP approaches with 1.95-3.32 times less training time, indicating EHDM is an efficient and effective link prediction method.
https://arxiv.org/abs/2504.06193
Academic Papers
svg
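The common-neighbors heuristic used as a distillation teacher is a one-liner over adjacency sets; its scores (rather than a GNN's) supervise the MLP student. A minimal sketch on a toy graph:

```python
def common_neighbors_score(adj, u, v):
    """Common-neighbors heuristic: score a candidate link (u, v) by how many
    neighbors the endpoints share. Cheap to compute, yet an effective
    teacher signal for distilling an MLP link predictor."""
    return len(adj[u] & adj[v])

# Toy undirected graph as adjacency sets.
adj = {0: {1, 2}, 1: {0, 2, 3}, 2: {0, 1, 3}, 3: {1, 2}}
print(common_neighbors_score(adj, 0, 3))  # nodes 0 and 3 share neighbors {1, 2} -> 2
```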
7cf12c4d159d52ae04e7bf88ec3fdcb332e915729720c2b05c09bccf1f2e4c8d
2026-01-07T00:00:00-05:00
Efficient Swept Volume-Based Trajectory Generation for Arbitrary-Shaped Ground Robot Navigation
arXiv:2504.07554v2 Announce Type: replace Abstract: Navigating an arbitrary-shaped ground robot safely in cluttered environments remains a challenging problem. Existing trajectory planners that account for the robot's physical geometry suffer from intractable runtimes. To achieve both computational efficiency and Continuous Collision Avoidance (CCA) in arbitrary-shaped ground robot planning, we propose a novel coarse-to-fine navigation framework that significantly accelerates planning. In the first stage, a sampling-based method selectively generates distinct topological paths that guarantee a minimum inflated margin. In the second stage, a geometry-aware front-end strategy is designed to discretize these topologies into full-state robot motion sequences while concurrently partitioning the paths into SE(2) sub-problems and simpler R2 sub-problems for back-end optimization. In the final stage, an SVSDF-based optimizer generates trajectories tailored to these sub-problems and seamlessly splices them into a continuous final motion plan. Extensive benchmark comparisons show that the proposed method is one to several orders of magnitude faster than the cutting-edge methods in runtime while maintaining a high planning success rate and ensuring CCA.
https://arxiv.org/abs/2504.07554
Academic Papers
svg
14d4bf920589a6ff2396054c67239aedd6d883c33d068e2697c76a4491c47758
2026-01-07T00:00:00-05:00
SignX: Continuous Sign Recognition in Compact Pose-Rich Latent Space
arXiv:2504.16315v2 Announce Type: replace Abstract: The complexity of sign language data processing brings many challenges. The current approach to recognition of ASL signs aims to translate RGB sign language videos through pose information into English-based ID Glosses, which serve to uniquely identify ASL signs. This paper proposes SignX, a novel framework for continuous sign language recognition in compact pose-rich latent space. First, we construct a unified latent representation that encodes heterogeneous pose formats (SMPLer-X, DWPose, Mediapipe, PrimeDepth, and Sapiens Segmentation) into a compact, information-dense space. Second, we train a ViT-based Video2Pose module to extract this latent representation directly from raw videos. Finally, we develop a temporal modeling and sequence refinement method that operates entirely in this latent space. This multi-stage design achieves end-to-end sign language recognition while significantly reducing computational consumption. Experimental results demonstrate that SignX achieves state-of-the-art accuracy on continuous sign language recognition.
https://arxiv.org/abs/2504.16315
Academic Papers
svg
336d91947ab888334728495634d346ade1baf7f6431c526c2aac759a98b8c6d2
2026-01-07T00:00:00-05:00
Beyond Platforms -- Growing Distributed Transaction Networks for Digital Commerce
arXiv:2504.18602v4 Announce Type: replace Abstract: We talk of the internet as digital infrastructure, but we leave the building of rails and roads to the quasi-monopolistic platform providers. Decentralised architectures provide a number of advantages: they are potentially more inclusive for small players, more resilient against adversarial events, and seem to generate more innovation. However, it is not well understood how to evolve, adapt and govern decentralised infrastructures. This article reports qualitative empirical research on the development and governance of the Beckn Protocol, an open source protocol for decentralised transactions, the successful development of domain-specific adaptations, and the implementation and scaling of commercial infrastructures based on it. It explores how the architecture and governance support local innovation for specific business domains, and how the domain-specific innovations feed back into the development of the core concept. The research applied a case study approach, combining interviews with core members of the Beckn community, triangulated by interviews with community leaders of domain-specific adaptations and by analysis of online documents and the protocol itself. The article shows the possibility of such a decentralised approach to IT infrastructures. It analyses the Beckn Protocol, domain-specific adaptations, and networks built as a software ecosystem. Based on this analysis, a number of generative mechanisms, i.e., socio-technical arrangements that support adoption, innovation, and scaling of infrastructures, are highlighted.
https://arxiv.org/abs/2504.18602
Academic Papers
svg
3debc47aa3625d66f57e7a8e7f50c4fbe56d664da34d7687bc4ce40767c1e902
2026-01-07T00:00:00-05:00
PartHOI: Part-based Hand-Object Interaction Transfer via Generalized Cylinders
arXiv:2504.20599v2 Announce Type: replace Abstract: Learning-based methods to understand and model hand-object interactions (HOI) require a large amount of high-quality HOI data. One way to create HOI data is to transfer hand poses from a source object to another based on the objects' geometry. However, current methods for transferring hand poses between objects rely on shape matching, limiting the ability to transfer poses across different categories due to differences in their shapes and sizes. We observe that HOI often involves specific semantic parts of objects, which often have more consistent shapes across categories. In addition, constructing size-invariant correspondences between these parts is important for cross-category transfer. Based on these insights, we introduce a novel method, PartHOI, for part-based HOI transfer. Using a generalized cylinder representation to parameterize the geometry of object parts, PartHOI establishes a robust geometric correspondence between object parts and enables the transfer of contact points. Given the transferred points, we optimize a hand pose to fit the target object well. Qualitative and quantitative results demonstrate that our method generalizes HOI transfers well even across object categories, and produces high-fidelity results that are superior to the existing methods.
https://arxiv.org/abs/2504.20599
Academic Papers
svg
68f6ecbeee524dfba3b4abefd396187db84edf384e0ad1c4bbba9c1bfbddefad
2026-01-07T00:00:00-05:00
UniversalRAG: Retrieval-Augmented Generation over Corpora of Diverse Modalities and Granularities
arXiv:2504.20734v3 Announce Type: replace Abstract: Retrieval-Augmented Generation (RAG) has shown substantial promise in improving factual accuracy by grounding model responses with external knowledge relevant to queries. However, most existing approaches are limited to a text-only corpus, and while recent efforts have extended RAG to other modalities such as images and videos, they typically operate over a single modality-specific corpus. In contrast, real-world queries vary widely in the type of knowledge they require, which a single type of knowledge source cannot address. To address this, we introduce UniversalRAG, designed to retrieve and integrate knowledge from heterogeneous sources with diverse modalities and granularities. Specifically, motivated by the observation that forcing all modalities into a unified representation space derived from a single aggregated corpus causes a modality gap, where the retrieval tends to favor items from the same modality as the query, we propose modality-aware routing, which dynamically identifies the most appropriate modality-specific corpus and performs targeted retrieval within it, and further justify its effectiveness with a theoretical analysis. Moreover, beyond modality, we organize each modality into multiple granularity levels, enabling fine-tuned retrieval tailored to the complexity and scope of the query. We validate UniversalRAG on 10 benchmarks of multiple modalities, showing its superiority over various modality-specific and unified baselines.
https://arxiv.org/abs/2504.20734
Academic Papers
svg
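Modality-aware routing can be sketched as nearest-centroid selection over per-modality corpora. UniversalRAG learns its router, so the centroids and nearest-match rule here are only a stand-in:

```python
import numpy as np

# Hypothetical per-modality corpus centroids in a shared embedding space.
corpora = {
    "text": np.array([1.0, 0.0, 0.0]),
    "image": np.array([0.0, 1.0, 0.0]),
    "video": np.array([0.0, 0.0, 1.0]),
}

def route(query_emb):
    """Send the query to the modality-specific corpus it matches best, then
    retrieve only within that corpus -- avoiding the modality gap that arises
    when all modalities share one unified index."""
    sims = {m: float(query_emb @ c) for m, c in corpora.items()}
    return max(sims, key=sims.get)

print(route(np.array([0.1, 0.9, 0.2])))  # -> "image"
```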
7cd957061df0c01bf2ab90ec7e16b94e76adda75fc774a90e96d3c97369b12d3
2026-01-07T00:00:00-05:00
The Great Data Standoff: Researchers vs. Platforms Under the Digital Services Act
arXiv:2505.01122v2 Announce Type: replace Abstract: To facilitate accountability and transparency, the Digital Services Act (DSA) sets up a process through which Very Large Online Platforms (VLOPs) need to grant vetted researchers access to their internal data (Article 40(4)). Operationalising such access is challenging for at least two reasons. First, data access is only available for research on systemic risks affecting European citizens, a concept with high levels of legal uncertainty. Second, data access suffers from an inherent standoff problem. Researchers need to request specific data but are not in a position to know all internal data processed by VLOPs, who, in turn, expect data specificity for potential access. In light of these limitations, data access under the DSA remains a mystery. To contribute to the discussion of how Article 40 can be interpreted and applied, we provide a concrete illustration of what data access can look like in a real-world systemic risk case study. We focus on the 2024 Romanian presidential election interference incident, the first event of its kind to trigger systemic risk investigations by the European Commission. During the elections, one candidate is said to have benefited from TikTok algorithmic amplification through a complex dis- and misinformation campaign. By analysing this incident, we can comprehend election-related systemic risk to explore practical research tasks and compare necessary data with available TikTok data. In particular, we make two contributions: (i) we combine insights from law, computer science and platform governance to shed light on the complexities of studying systemic risks in the context of election interference, focusing on two relevant factors: platform manipulation and hidden advertising; and (ii) we provide practical insights into various categories of available data for the study of TikTok, based on platform documentation, data donations and the Research API.
https://arxiv.org/abs/2505.01122
Academic Papers
svg
7b5e0a0eeff4ce087815ad0c9d85c068efbf76c71f2b4e59dbd42c3204f7ccfc
2026-01-07T00:00:00-05:00
HONEYBEE: Efficient Role-based Access Control for Vector Databases via Dynamic Partitioning
arXiv:2505.01538v2 Announce Type: replace Abstract: Enterprise deployments of vector databases require access control policies to protect sensitive data. These systems often implement access control through hybrid vector queries that combine nearest-neighbor search with relational predicates based on user permissions. However, existing approaches face a fundamental trade-off: dedicated per-user indexes minimize query latency but incur high memory redundancy, while shared indexes with post-search filtering reduce memory overhead at the cost of increased latency. This paper introduces HONEYBEE, a dynamic partitioning framework that leverages the structure of Role-Based Access Control (RBAC) policies to create a smooth trade-off between these extremes. RBAC policies organize users into roles and assign permissions at the role level, creating a natural ``thin waist'' in the permission structure that is ideal for partitioning decisions. Specifically, HONEYBEE produces overlapping partitions where vectors can be strategically replicated across different partitions to reduce query latency while controlling memory overhead. To guide these decisions, HONEYBEE develops analytical models of vector search performance and recall, and formulates partitioning as a constrained optimization problem that balances memory usage, query efficiency, and recall. Evaluations on RBAC workloads demonstrate that HONEYBEE achieves up to 13.5X lower query latency than row-level security with only a 1.24X increase in memory usage, while achieving comparable query performance to dedicated, per-role indexes with 90.4% reduction in additional memory consumption, offering a practical middle ground for secure and efficient vector search.
https://arxiv.org/abs/2505.01538
Academic Papers
svg
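The baseline that HONEYBEE improves upon, post-search permission filtering over a shared index, can be sketched with brute-force nearest-neighbor search. The role assignment below is synthetic, and real systems would use an ANN index rather than a full scan:

```python
import numpy as np

rng = np.random.default_rng(1)
vecs = rng.normal(size=(100, 8))
item_roles = rng.integers(0, 3, size=100)  # each vector readable by one role

def search(query, role, k=3):
    """Permission-filtered search: scan only vectors the role may read.
    HONEYBEE instead pre-partitions vectors by role (with selective
    replication across overlapping partitions), so each query touches a
    much smaller index while permissions still hold by construction."""
    allowed = np.where(item_roles == role)[0]
    d = np.linalg.norm(vecs[allowed] - query, axis=1)
    return allowed[np.argsort(d)[:k]]

hits = search(rng.normal(size=8), role=1)
print(all(item_roles[h] == 1 for h in hits))  # access control is never violated
```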
fa878917bc6e0df21a4cf638020e58f6d1128f4d578e2a5c3025096d75c03554
2026-01-07T00:00:00-05:00
Characterizing the Robustness of Black-Box LLM Planners Under Perturbed Observations with Adaptive Stress Testing
arXiv:2505.05665v3 Announce Type: replace Abstract: Large language models (LLMs) have recently demonstrated success in decision-making tasks including planning, control, and prediction, but their tendency to hallucinate unsafe and undesired outputs poses risks. This unwanted behavior is further exacerbated in environments where sensors are noisy or unreliable. Characterizing the behavior of LLM planners under varied observations is necessary to proactively avoid failures in safety-critical scenarios. We specifically investigate the response of LLMs along two different perturbation dimensions. Like prior works, one dimension generates semantically similar prompts with varied phrasing by randomizing order of details, modifying access to few-shot examples, etc. Unique to our work, the second dimension simulates access to varied sensors and noise to mimic raw sensor or detection algorithm failures. An initial case study in which perturbations are manually applied shows that both dimensions lead LLMs to hallucinate in a multi-agent driving environment. However, manually covering the entire perturbation space for several scenarios is infeasible. As such, we propose a novel method for efficiently searching the space of prompt perturbations using adaptive stress testing (AST) with Monte-Carlo tree search (MCTS). Our AST formulation enables discovery of scenarios, sensor configurations, and prompt phrasings that cause language models to act with high uncertainty or even crash. By generating MCTS prompt perturbation trees across diverse scenarios, we show through extensive experiments that offline analyses can be used to proactively understand potential failures that may arise at runtime.
https://arxiv.org/abs/2505.05665
Academic Papers
svg
2c004a1d7943c0df399bf1d281faadad12124745bd6528b1f4b054aeca8deb52
2026-01-07T00:00:00-05:00
Reference-Free Evaluation of Taxonomies
arXiv:2505.11470v2 Announce Type: replace Abstract: We introduce two reference-free metrics for quality evaluation of taxonomies in the absence of labels. The first metric evaluates robustness by calculating the correlation between semantic and taxonomic similarity, addressing error types not considered by existing metrics. The second uses Natural Language Inference to assess logical adequacy. Both metrics are tested on five taxonomies and are shown to correlate well with F1 against ground truth taxonomies. We further demonstrate that our metrics can predict downstream performance in hierarchical classification when used with label hierarchies.
https://arxiv.org/abs/2505.11470
Academic Papers
svg
94a94a0cce22a116ed7706506e1b761f15a9e0d214ee8a0b7e272a9ab71445b4
2026-01-07T00:00:00-05:00
DisCO: Reinforcing Large Reasoning Models with Discriminative Constrained Optimization
arXiv:2505.12366v5 Announce Type: replace Abstract: The recent success and openness of DeepSeek-R1 have brought widespread attention to Group Relative Policy Optimization (GRPO) as a reinforcement learning method for large reasoning models (LRMs). In this work, we analyze the GRPO objective under a binary reward setting and reveal an inherent limitation of question-level difficulty bias. We also identify a connection between GRPO and traditional discriminative methods in supervised learning. Motivated by these insights, we introduce a new Discriminative Constrained Optimization (DisCO) framework for reinforcing LRMs, grounded in the principle of discriminative learning. The main differences between DisCO and GRPO and its recent variants are: (1) it replaces the group relative objective with a discriminative objective defined by a scoring function; (2) it abandons clipping-based surrogates in favor of non-clipping RL surrogate objectives used as scoring functions; (3) it employs a simple yet effective constrained optimization approach to enforce the KL divergence constraint. As a result, DisCO offers notable advantages over GRPO and its variants: (i) it completely eliminates difficulty bias by adopting discriminative objectives; (ii) it addresses the entropy instability in GRPO and its variants through the use of non-clipping scoring functions and a constrained optimization approach, yielding long and stable training dynamics; (iii) it allows the incorporation of advanced discriminative learning techniques to address data imbalance, where a significant number of questions have more negative than positive generated answers during training. Our experiments on enhancing the mathematical reasoning capabilities of SFT-finetuned models show that DisCO significantly outperforms GRPO and its improved variants such as DAPO, achieving average gains of 7\% over GRPO and 6\% over DAPO across six benchmark tasks for a 1.5B model.
https://arxiv.org/abs/2505.12366
Academic Papers
svg
9eed8b05576cd0ef3b3b0f6dc1d9cee63eb0cec2b93bfc2063d2d5fc8b0d9d31
2026-01-07T00:00:00-05:00
EvoGPT: Leveraging LLM-Driven Seed Diversity to Improve Search-Based Test Suite Generation
arXiv:2505.12424v2 Announce Type: replace Abstract: Search-Based Software Testing (SBST) is a well-established approach for automated unit test generation, yet it often suffers from premature convergence and limited diversity in the generated test suites. Recently, Large Language Models (LLMs) have emerged as an alternative technique for unit test generation. We present EvoGPT, a hybrid test generation system that integrates LLM-based test generation with SBST-based test suite optimization. EvoGPT uses LLMs to generate an initial population of test suites, and uses an Evolutionary Algorithm (EA) to further optimize this test suite population. A distinguishing feature of EvoGPT is its explicit enforcement of diversity, achieved through the use of multiple temperatures and prompt instructions during test generation. In addition, each LLM-generated test is refined using a generation-repair loop and coverage-guided assertion generation. To address evolutionary plateaus, EvoGPT also detects stagnation during search and injects additional LLM-generated tests aimed at previously uncovered branches. Here, too, diversity is enforced using multiple temperatures and prompt instructions. We evaluate EvoGPT on Defects4J, a standard benchmark for test generation. The results show that EvoGPT achieves, on average, a 10\% improvement in both code coverage and mutation score compared to TestART, an LLM-only baseline, and to EvoSuite, a standard SBST baseline. An ablation study indicates that explicitly enforcing diversity both at initialization and during the search is key to effectively leveraging LLMs for automated unit test generation.
https://arxiv.org/abs/2505.12424
Academic Papers
svg
bd18fad9149a7ae958dc48800954b8e27f970543ba5c6bc3ddea7421605cf202
2026-01-07T00:00:00-05:00
The Virtual Reality Koinos Method: Analysis of Symmetrical Dyadic Collaboration in Virtual Reality from the perspective of communication models
arXiv:2505.14078v2 Announce Type: replace Abstract: Understanding which factors could influence co-presence in Virtual Reality could help develop higher-quality social interactions, or social interactions that generate sensations, emotions, and feelings similar to those generated during Face-to-Face interactions. Co-presence has been studied since the beginning of Virtual Reality (VR); however, no consensus has been reached on which factors influence it, beyond the consensus on its definition as "being there together" inside the Virtual Environment. In this paper, we introduce the Koinos method to explain social interactions in VR through communication models, (i) theoretically, and (ii) on two VR experiments that change the virtual partner's social and physical representations. These analyses lead us to propose an equation to predict and help manage the sense of co-presence in VR.
https://arxiv.org/abs/2505.14078
Academic Papers
svg
55f51c285162fc0567a456a54fdfec87da3ff4018741fe1ccb41810b14af0468
2026-01-07T00:00:00-05:00
Extensible Post Quantum Cryptography Based Authentication
arXiv:2505.16112v2 Announce Type: replace Abstract: Cryptography underpins the security of modern digital infrastructure, from cloud services to health data. However, many widely deployed systems will become vulnerable after the advent of scalable quantum computing. Although quantum-safe cryptographic primitives have been developed, such as lattice-based digital signature algorithms (DSAs) and key encapsulation mechanisms (KEMs), their unique structural and performance characteristics make them unsuitable for existing protocols. In this work, we introduce a quantum-safe single-shot protocol for machine-to-machine authentication and authorization that is specifically designed to leverage the strengths of lattice-based DSAs and KEMs. Operating entirely over insecure channels, this protocol enables the forward-secure establishment of tokens in constrained environments. By demonstrating how new quantum-safe cryptographic primitives can be incorporated into secure systems, this study lays the groundwork for scalable, resilient, and future-proof identity infrastructures in a quantum-enabled world.
https://arxiv.org/abs/2505.16112
Academic Papers
svg
7ce297993c9f7ff606f3f58e035b46ab6c03e1d3edb62ff348936afbf7b27235
2026-01-07T00:00:00-05:00
EduBench: A Comprehensive Benchmarking Dataset for Evaluating Large Language Models in Diverse Educational Scenarios
arXiv:2505.16160v4 Announce Type: replace Abstract: As large language models continue to advance, their application in educational contexts remains underexplored and under-optimized. In this paper, we address this gap by introducing the first diverse benchmark tailored for educational scenarios, incorporating synthetic data containing 9 major scenarios and over 4,000 distinct educational contexts. To enable comprehensive assessment, we propose a set of multi-dimensional evaluation metrics that cover 12 critical aspects relevant to both teachers and students. We further apply human annotation to ensure the effectiveness of the model-generated evaluation responses. Additionally, we successfully train a relatively small-scale model on our constructed dataset and demonstrate that it can achieve performance comparable to state-of-the-art large models (e.g., Deepseek V3, Qwen Max) on the test set. Overall, this work provides a practical foundation for the development and evaluation of education-oriented language models. Code and data are released at https://github.com/ybai-nlp/EduBench.
https://arxiv.org/abs/2505.16160
Academic Papers
svg
5e291809314edc9ec27420bff771b7504ff619fc2af34a1c44a37aa73c7fded4
2026-01-07T00:00:00-05:00
Asynchronous Global Protocols, Precisely: Full Proofs
arXiv:2505.17676v2 Announce Type: replace Abstract: Asynchronous multiparty session types are a type-based framework which ensures the compatibility of components in a distributed system by checking compliance against a specified global protocol. We propose a top-down approach, starting with the global protocol, which is then projected into a set of local specifications. Next, we use an asynchronous refinement relation, precise asynchronous multiparty subtyping, to enable local specifications to be optimised by permuting actions within individual asynchronous components. This supports local reasoning, as each component can be independently developed and refined in isolation, before being integrated into a larger system. We show that this methodology guarantees both type soundness and liveness of the collection of optimised components. In this article, we first propose new operational semantics of global protocols which capture sound optimisations in the context of asynchronous message-passing. Next, we define an asynchronous association between global protocols and a set of optimised local types. Third, we prove, for the first time, the correctness of the most expressive endpoint projection in the literature, coinductive full merging projection. We then show the main theorems of this article: soundness and completeness of the operational correspondence of the asynchronous association. As a consequence, the association acts as an invariant that can be used to transfer key theorems from the bottom-up system to the top-down system. In particular, we use this to prove type soundness, session-fidelity, deadlock-freedom and liveness of the collection of optimised endpoints.
https://arxiv.org/abs/2505.17676
Academic Papers
svg
50d19a611d3cb2eac5cbc2f6cc1b78f45cc636cedb1f65ac658c10bea92350aa
2026-01-07T00:00:00-05:00
PatentMind: A Multi-Aspect Reasoning Graph for Patent Similarity Evaluation
arXiv:2505.19347v3 Announce Type: replace Abstract: Patent similarity evaluation plays a critical role in intellectual property analysis. However, existing methods often overlook the intricate structure of patent documents, which integrate technical specifications, legal boundaries, and application contexts. We introduce PatentMind, a novel framework for patent similarity assessment based on a Multi-Aspect Reasoning Graph (MARG). PatentMind decomposes patents into three dimensions (technical features, application domains, and claim scopes); dimension-specific similarity scores are then calculated over the MARG. These scores are dynamically weighted through a context-aware reasoning process, which integrates contextual signals to emulate expert-level judgment. To support evaluation, we construct PatentSimBench, a human-annotated benchmark comprising 500 patent pairs. Experimental results demonstrate that the PatentMind-generated scores show a strong correlation ($r=0.938$) with expert annotations, significantly outperforming embedding-based models, patent-specific models, and advanced prompt engineering methods. Beyond computational linguistics, our framework provides a structured and semantically grounded foundation for real-world decision-making, particularly for tasks such as infringement risk assessment, underscoring its broader impact on both patent analytics and evaluation.
https://arxiv.org/abs/2505.19347
Academic Papers
svg
5c72f76eddb0ebe0ea73d9b5f02ce9027fefd42bfb6b73d89c493d907dc1a05b
2026-01-07T00:00:00-05:00
VisRet: Visualization Improves Knowledge-Intensive Text-to-Image Retrieval
arXiv:2505.20291v3 Announce Type: replace Abstract: Text-to-image retrieval (T2I retrieval) remains challenging because cross-modal embeddings often behave as bags of concepts, underrepresenting structured visual relationships such as pose and viewpoint. We propose Visualize-then-Retrieve (VisRet), a retrieval paradigm that mitigates this limitation of cross-modal similarity alignment. VisRet first projects textual queries into the image modality via T2I generation, then performs retrieval within the image modality to bypass the weaknesses of cross-modal retrievers in recognizing subtle visual-spatial features. Across four benchmarks (Visual-RAG, INQUIRE-Rerank, Microsoft COCO, and our new Visual-RAG-ME featuring multi-entity comparisons), VisRet substantially outperforms cross-modal similarity matching and baselines that recast T2I retrieval as text-to-text similarity matching, improving nDCG@30 by 0.125 on average with CLIP as the retriever and by 0.121 with E5-V. For downstream question answering, VisRet increases accuracy on Visual-RAG and Visual-RAG-ME by 3.8% and 15.7% in top-1 retrieval, and by 3.9% and 11.1% in top-10 retrieval. Ablation studies show compatibility with different T2I instruction LLMs, T2I generation models, and downstream LLMs. VisRet provides a simple yet effective perspective for advancing text-to-image retrieval. Our code and the new benchmark are publicly available at https://github.com/xiaowu0162/Visualize-then-Retrieve.
https://arxiv.org/abs/2505.20291
Academic Papers
svg
8e30a1e4206deeb931570153cefc4298fdfdc5f3beac702db309c6efb386e25c
2026-01-07T00:00:00-05:00
POLAR: A Benchmark for Multilingual, Multicultural, and Multi-Event Online Polarization
arXiv:2505.20624v2 Announce Type: replace Abstract: Online polarization poses a growing challenge for democratic discourse, yet most computational social science research remains monolingual, culturally narrow, or event-specific. We introduce POLAR, a multilingual, multicultural, and multi-event dataset with over 23k instances in seven languages from diverse online platforms and real-world events. Polarization is annotated along three axes: presence, type, and manifestation, using a variety of annotation platforms adapted to each cultural context. We conduct two main experiments: (1) we fine-tune six multilingual pretrained language models in both monolingual and cross-lingual setups; and (2) we evaluate a range of open and closed large language models (LLMs) in few-shot and zero-shot scenarios. Results show that while most models perform well on binary polarization detection, they achieve substantially lower scores when predicting polarization types and manifestations. These findings highlight the complex, highly contextual nature of polarization and the need for robust, adaptable approaches in NLP and computational social science. All resources will be released to support further research and effective mitigation of digital polarization globally.
https://arxiv.org/abs/2505.20624
Academic Papers
svg
a03d6b4049dee49aca3d7e19a580f08268fb1ea243c40cffa0bb9e34716696b1
2026-01-07T00:00:00-05:00
RoboTransfer: Controllable Geometry-Consistent Video Diffusion for Manipulation Policy Transfer
arXiv:2505.23171v2 Announce Type: replace Abstract: The goal of general-purpose robotics is to create agents that can seamlessly adapt to and operate in diverse, unstructured human environments. Imitation learning has become a key paradigm for robotic manipulation, yet collecting large-scale and diverse demonstrations is prohibitively expensive. Simulators provide a cost-effective alternative, but the sim-to-real gap remains a major obstacle to scalability. We present RoboTransfer, a diffusion-based video generation framework for synthesizing robotic data. By leveraging cross-view feature interactions and globally consistent 3D geometry, RoboTransfer ensures multi-view geometric consistency while enabling fine-grained control over scene elements, such as background editing and object replacement. Extensive experiments demonstrate that RoboTransfer produces videos with superior geometric consistency and visual fidelity. Furthermore, policies trained on this synthetic data exhibit enhanced generalization to novel, unseen scenarios. Project page: https://horizonrobotics.github.io/robot_lab/robotransfer.
https://arxiv.org/abs/2505.23171
Academic Papers
svg
98460fa86903614528e43fa9a565dac77c2c2b27da6d49e8e714e26158ed1eb1
2026-01-07T00:00:00-05:00
Melding the Serverless Control Plane with the Conventional Cluster Manager for Speed and Resource Efficiency
arXiv:2505.24551v4 Announce Type: replace Abstract: Serverless platforms face a trade-off: conventional cluster managers like Kubernetes offer compatibility for co-locating Function-as-a-Service (FaaS) and Backend-as-a-Service (BaaS) components of serverless applications, at the cost of high cold-start latency, whereas specialized FaaS-only systems like Dirigent achieve low latency by sacrificing compatibility, preventing integrated management and optimization. Our analysis reveals that FaaS traffic is bimodal: predictable, sustainable traffic consumes >98% of cluster resources, whereas sporadic, excessive bursts stress the control plane's scaling latency, not its throughput. With these insights, we design PulseNet, a serverless architecture that uses a dual-track control plane tailored to both traffic types. PulseNet's standard track manages sustainable traffic with long-lived, full-featured Regular Instances under a conventional cluster manager, preserving compatibility for the majority of the workload. To handle excessive traffic, an expedited track bypasses the slow manager to rapidly create short-lived, disposable Emergency Instances, minimizing cold-start latency and resource waste from idle instances. This hybrid approach achieves 35% better performance than Dirigent, a FaaS-only system, on a production workload at the same cost and outperforms other Kubernetes-compatible systems by 1.5-3.5x, reducing the cost by up to 70%.
https://arxiv.org/abs/2505.24551
Academic Papers
svg
f6098852382fd2890ce1607f2fbd20c49024d0d0b0527b3f2a272b053ff5a8bc
2026-01-07T00:00:00-05:00
Social Construction of Urban Space: Using LLMs to Identify Neighborhood Boundaries From Craigslist Ads
arXiv:2506.00634v2 Announce Type: replace Abstract: Rental listings offer a window into how urban space is socially constructed through language. We analyze Chicago Craigslist rental advertisements from 2018 to 2024 to examine how listing agents characterize neighborhoods, identifying mismatches between institutional boundaries and neighborhood claims. Through manual and large language model annotation, we classify unstructured listings from Craigslist according to their neighborhood. Further geospatial analysis reveals three distinct patterns: properties with conflicting neighborhood designations due to competing spatial definitions, border properties with valid claims to adjacent neighborhoods, and "reputation laundering" where listings claim association with distant, desirable neighborhoods. Through topic modeling, we identify patterns that correlate with spatial positioning: listings further from neighborhood centers emphasize different amenities than centrally-located units. Natural language processing techniques reveal how definitions of urban spaces are contested in ways that traditional methods overlook.
https://arxiv.org/abs/2506.00634
Academic Papers
svg
39dcc862d12fc632fb35c36ea3926ca7c6307867c807b170d0644332700037b2
2026-01-07T00:00:00-05:00
Quantifying task-relevant representational similarity using decision variable correlation
arXiv:2506.02164v3 Announce Type: replace Abstract: Previous studies have compared neural activities in the visual cortex to representations in deep neural networks trained on image classification. Interestingly, while some suggest that their representations are highly similar, others argue the opposite. Here, we propose a new approach to characterize the similarity of the decision strategies of two observers (models or brains) using decision variable correlation (DVC). DVC quantifies the image-by-image correlation between the decoded decisions based on the internal neural representations in a classification task. Thus, it can capture task-relevant information rather than general representational alignment. We evaluate DVC using monkey V4/IT recordings and network models trained on image classification tasks. We find that model-model similarity is comparable to monkey-monkey similarity, whereas model-monkey similarity is consistently lower. Strikingly, DVC decreases with increasing network performance on ImageNet-1k. Adversarial training does not improve model-monkey similarity in task-relevant dimensions assessed using DVC, although it markedly increases the model-model similarity. Similarly, pre-training on larger datasets does not improve model-monkey similarity. These results suggest a divergence between the task-relevant representations in monkey V4/IT and those learned by models trained on image classification tasks.
https://arxiv.org/abs/2506.02164
Academic Papers
svg
88850293a1b513974655ff869b1a3a1c62ab82ca717847a58ae1377bf6b14b3a
2026-01-07T00:00:00-05:00
Something Just Like TRuST : Toxicity Recognition of Span and Target
arXiv:2506.02326v2 Announce Type: replace Abstract: Toxic language includes content that is offensive, abusive, or that promotes harm. Progress in preventing toxic output from large language models (LLMs) is hampered by inconsistent definitions of toxicity. We introduce TRuST, a large-scale dataset that unifies and expands prior resources through a carefully synthesized definition of toxicity and a corresponding annotation scheme. It consists of ~300k annotations, with high-quality human annotation on ~11k. To ensure high quality, we designed a rigorous, multi-stage human annotation process and evaluated the diversity of the annotators. We then benchmarked state-of-the-art LLMs and pre-trained models on three tasks: toxicity detection, identification of the target group, and identification of toxic words. Our results indicate that fine-tuned PLMs outperform LLMs on the three tasks, and that current reasoning models do not reliably improve performance. TRuST constitutes one of the most comprehensive resources for evaluating and mitigating LLM toxicity and for other research in socially aware and safer language technologies.
https://arxiv.org/abs/2506.02326
Academic Papers
svg
3231c68f3b0d5110ff38eba2f40f98f0f3529b4b4d76742f8b13d23726198d1b
2026-01-07T00:00:00-05:00
OThink-R1: Intrinsic Fast/Slow Thinking Mode Switching for Over-Reasoning Mitigation
arXiv:2506.02397v3 Announce Type: replace Abstract: Human cognition operates through two complementary modes: fast intuitive thinking and slow deliberate thinking. Vanilla large language models (LLMs) predominantly follow the fast-thinking paradigm, producing immediate responses; while recent large reasoning models (LRMs) adopt slow-thinking strategies, generating detailed reasoning chains before arriving at answers. While LRMs often achieve higher accuracy, this comes at the cost of substantially increased token usage. To address this efficiency-accuracy trade-off, we propose OThink-R1, a hybrid reasoning framework that integrates both modes within a single LRM and enables automatic mode switching based on problem characteristics. We first identify three major patterns of essential and redundant reasoning trajectories in LRMs, which guide the design of an auxiliary LLM-based judge that adaptively determines when slow thinking is necessary. Leveraging the judge's decisions, we construct a hybrid fine-tuning dataset by pruning redundant reasoning to produce fast-thinking samples and retaining complete reasoning for slow-thinking samples. This dataset is then used to fine-tune LRMs, equipping them with inherent autonomous mode-selection capabilities. Extensive experiments on mathematical and question-answering benchmarks show that OThink-R1 reduces reasoning token usage significantly while maintaining competitive accuracy. The code is available at https://github.com/AgenticIR-Lab/OThink-R1.
https://arxiv.org/abs/2506.02397
Academic Papers
svg
903a964b4ce4c40fe2875e344981292ad9f2cd89f18b39ad7761b37eb1735523
2026-01-07T00:00:00-05:00
Cyber Security of Sensor Systems for State Sequence Estimation: an AI Approach
arXiv:2506.06572v2 Announce Type: replace Abstract: Sensor systems are extremely popular today and vulnerable to sensor data attacks. Due to possible devastating consequences, counteracting sensor data attacks is an extremely important topic, which has not seen sufficient study. This paper develops the first methods that accurately identify/eliminate only the problematic attacked sensor data presented to a sequence estimation/regression algorithm under a powerful attack model constructed based on known/observed attacks. The approach does not assume a known form for the statistical model of the sensor data, allowing data-driven and machine learning sequence estimation/regression algorithms to be protected. A simple protection approach for attackers not endowed with knowledge of the details of our protection approach is first developed, followed by additional processing for attacks based on protection system knowledge. In the cases tested for which it was designed, experimental results show that the simple approach achieves performance indistinguishable, to two decimal places, from that for an approach which knows which sensors are attacked. For cases where the attacker has knowledge of the protection approach, experimental results indicate the additional processing can be configured so that the worst-case degradation under the additional processing and a large number of sensors attacked can be made significantly smaller than the worst-case degradation of the simple approach, and close to an approach which knows which sensors are attacked, for the same number of attacked sensors with just a slight degradation under no attacks. Mathematical descriptions of the worst-case attacks are used to demonstrate the additional processing will provide similar advantages for cases for which we do not have numerical results. All the data-driven processing used in our approaches employ only unattacked training data.
https://arxiv.org/abs/2506.06572
Academic Papers
svg
0f0014b5c7524dc5b97f1ce8ef249ba67e5961f5d0a78b396e71c1b468e2114d
2026-01-07T00:00:00-05:00
Aligning Text, Images, and 3D Structure Token-by-Token
arXiv:2506.08002v2 Announce Type: replace Abstract: Creating machines capable of understanding the world in 3D is essential for assisting designers who build and edit 3D environments and robots navigating and interacting within a three-dimensional space. Inspired by advances in language and image modeling, we investigate the potential of autoregressive models for a new modality: structured 3D scenes. To this end, we propose a unified LLM framework that aligns language, images, and 3D scenes and provide a detailed "cookbook" outlining critical design choices for achieving optimal training and performance, addressing key questions related to data representation, modality-specific objectives, and more. We show how to tokenize complex 3D objects for incorporation into our structured 3D scene modality. We evaluate performance across four core 3D tasks -- rendering, recognition, instruction-following, and question-answering -- and four 3D datasets, synthetic and real-world. We show our model's effectiveness in reconstructing complete 3D scenes consisting of complex objects from a single image and on real-world 3D object recognition tasks. Project webpage: https://glab-caltech.github.io/kyvo/
https://arxiv.org/abs/2506.08002
Academic Papers
svg
14f7441a694db23d7b137a7c480d274271d44cae7e3f07974e7a4e8e5f1cbbcf
2026-01-07T00:00:00-05:00
TTrace: Lightweight Error Checking and Diagnosis for Distributed Training
arXiv:2506.09280v2 Announce Type: replace Abstract: Distributed training is essential for scaling the training of large neural network models, such as large language models (LLMs), across thousands of GPUs. However, the complexity of distributed training programs makes them particularly prone to silent bugs, which do not produce explicit error signals but lead to incorrect training outcomes. Effectively detecting and localizing such silent bugs in distributed training is challenging. Common debugging practices based on monitoring training loss or gradient norm curves are indirect, inefficient, and provide no way to localize bugs. To address those challenges, we design and implement TTrace, the first systematic differential testing system for detecting and localizing silent bugs in distributed training. TTrace aligns intermediate tensors from distributed training with those from a trusted reference implementation. To properly compare the floating-point values in the corresponding tensors, we propose a novel mathematical analysis that provides a guideline for setting tolerances, enabling TTrace to distinguish bug-induced errors from numerical errors. Experimental results demonstrate that TTrace effectively detects 11 existing bugs and 3 new bugs in the widely used Megatron-LM framework, while requiring fewer than 10 lines of code changes. TTrace is effective in various training recipes, including low-precision recipes involving BF16 and FP8. Notably, a popular open-source training framework has already adopted the method proposed by TTrace in its development workflow.
https://arxiv.org/abs/2506.09280
Academic Papers
svg
43a89748d657d0930fe10a2fe29905fa14ad07fd04e78702bc3a83f1ba0b0037
2026-01-07T00:00:00-05:00
Chain-of-Action: Trajectory Autoregressive Modeling for Robotic Manipulation
arXiv:2506.09990v2 Announce Type: replace Abstract: We present Chain-of-Action (CoA), a novel visuo-motor policy paradigm built upon Trajectory Autoregressive Modeling. Unlike conventional approaches that predict next step action(s) forward, CoA generates an entire trajectory by explicit backward reasoning with task-specific goals through an action-level Chain-of-Thought (CoT) process. This process is unified within a single autoregressive structure: (1) the first token corresponds to a stable keyframe action that encodes the task-specific goals; and (2) subsequent action tokens are generated autoregressively, conditioned on the initial keyframe and previously predicted actions. This backward action reasoning enforces a global-to-local structure, allowing each local action to be tightly constrained by the final goal. To further realize the action reasoning structure, CoA incorporates four complementary designs: continuous action token representation; dynamic stopping for variable-length trajectory generation; reverse temporal ensemble; and multi-token prediction to balance action chunk modeling with global structure. As a result, CoA gives strong spatial generalization capabilities while preserving the flexibility and simplicity of a visuo-motor policy. Empirically, we observe CoA achieves the state-of-the-art performance across 60 RLBench tasks and 8 real-world manipulation tasks.
https://arxiv.org/abs/2506.09990
Academic Papers
svg
420f018c46f0dce0217ac20b6f0985f669148172f3e26632f0686ad681ad8ad7
2026-01-07T00:00:00-05:00
A new type of federated clustering: A non-model-sharing approach
arXiv:2506.10244v3 Announce Type: replace Abstract: In recent years, the growing need to leverage sensitive data across institutions has led to increased attention on federated learning (FL), a decentralized machine learning paradigm that enables model training without sharing raw data. However, existing FL-based clustering methods, known as federated clustering, typically assume simple data partitioning scenarios such as horizontal or vertical splits, and cannot handle more complex distributed structures. This study proposes data collaboration clustering (DC-Clustering), a novel federated clustering method that supports clustering over complex data partitioning scenarios where horizontal and vertical splits coexist. In DC-Clustering, each institution shares only intermediate representations instead of raw data, ensuring privacy preservation while enabling collaborative clustering. The method allows flexible selection between k-means and spectral clustering, and achieves final results with a single round of communication with the central server. We conducted extensive experiments using synthetic and open benchmark datasets. The results show that our method achieves clustering performance comparable to centralized clustering where all data are pooled. DC-Clustering addresses an important gap in current FL research by enabling effective knowledge discovery from distributed heterogeneous data. Its practical properties -- privacy preservation, communication efficiency, and flexibility -- make it a promising tool for privacy-sensitive domains such as healthcare and finance.
https://arxiv.org/abs/2506.10244
Academic Papers
svg
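The one-round collaboration pattern described in the DC-Clustering abstract (share intermediate representations, never raw data, then cluster centrally) can be sketched as follows. This is our toy illustration under assumed details: the intermediate representation here is squared distance to shared public anchor points, and the k-means initialization is deterministic; the paper's actual construction may differ.

```python
# Toy sketch of federated clustering via shared intermediate representations.
# Institutions map private rows to distances from public anchors; only these
# representations are sent to the server, which runs a single k-means pass.

def to_intermediate(rows, anchors):
    """Map each private row to its squared distances from shared anchors."""
    return [[sum((a - b) ** 2 for a, b in zip(r, anc)) for anc in anchors]
            for r in rows]

def kmeans(points, k, iters=20):
    """Plain Lloyd's k-means with deterministic init (first k points)."""
    centroids = points[:k]
    for _ in range(iters):
        labels = [min(range(k), key=lambda j: sum(
            (p - c) ** 2 for p, c in zip(pt, centroids[j]))) for pt in points]
        for j in range(k):
            members = [pt for pt, l in zip(points, labels) if l == j]
            if members:
                centroids[j] = [sum(col) / len(members)
                                for col in zip(*members)]
    return labels

anchors = [[0.0, 0.0], [10.0, 10.0]]   # public reference points (hypothetical)
inst_a = [[0.1, 0.2], [0.3, 0.1]]      # institution A's raw data (stays local)
inst_b = [[9.8, 10.1], [10.2, 9.9]]    # institution B's raw data (stays local)
pooled = to_intermediate(inst_a, anchors) + to_intermediate(inst_b, anchors)
print(kmeans(pooled, 2))               # [0, 0, 1, 1]
```

Only `pooled` ever leaves the institutions; the raw rows in `inst_a` and `inst_b` stay local, matching the privacy-preservation property claimed in the abstract.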
9b51ec48b26e502584f5aa6dc582153cf23b124c6e530a74198ce34cdc3b1906
2026-01-07T00:00:00-05:00
On Differential and Boomerang Properties of a Class of Binomials over Finite Fields of Odd Characteristic
arXiv:2506.11486v2 Announce Type: replace Abstract: In this paper, we investigate the differential and boomerang properties of a class of binomials $F_{r,u}(x) = x^r(1 + u\chi(x))$ over the finite field $\mathbb{F}_{p^n}$, where $r = \frac{p^n+1}{4}$, $p^n \equiv 3 \pmod{4}$, and $\chi(x) = x^{\frac{p^n -1}{2}}$ is the quadratic character in $\mathbb{F}_{p^n}$. We show that $F_{r,\pm1}$ is locally-PN with boomerang uniformity $0$ when $p^n \equiv 3 \pmod{8}$. To the best of our knowledge, it is the second known non-PN function class with boomerang uniformity $0$, and the first such example over odd characteristic fields with $p > 3$. Moreover, we show that $F_{r,\pm1}$ is locally-APN with boomerang uniformity at most $2$ when $p^n \equiv 7 \pmod{8}$. We also provide complete classifications of the differential and boomerang spectra of $F_{r,\pm1}$. Furthermore, we thoroughly investigate the differential uniformity of $F_{r,u}$ for $u\in \mathbb{F}_{p^n}^* \setminus \{\pm1\}$.
https://arxiv.org/abs/2506.11486
Academic Papers
svg
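The locally-PN claim can be checked numerically in the smallest illustrative case, the prime field $\mathbb{F}_{11}$ (so $n=1$, $11 \equiv 3 \pmod 8$, $r = (11+1)/4 = 3$, $u = 1$). This check is our addition, not the paper's code:

```python
# Numerical sanity check of the locally-PN claim for F_{r,u}(x) = x^r (1 + u*chi(x))
# over F_11 with r = 3, u = 1.

p = 11
r = (p + 1) // 4

def chi(x):
    """Quadratic character of F_p: 0 at 0, +1 on squares, -1 otherwise."""
    return 0 if x % p == 0 else pow(x, (p - 1) // 2, p)

def F(x):
    c = chi(x)
    c = -1 if c == p - 1 else c          # map the residue p-1 back to -1
    return (pow(x, r, p) * (1 + c)) % p

# Difference distribution table: ddt[a][b] = #{x : F(x+a) - F(x) = b}
ddt = [[0] * p for _ in range(p)]
for a in range(p):
    for x in range(p):
        ddt[a][(F((x + a) % p) - F(x)) % p] += 1

# Locally-PN: every nonzero derivative direction a hits each *nonzero*
# output difference b at most once.
local_uniformity = max(ddt[a][b] for a in range(1, p) for b in range(1, p))
print(local_uniformity)                  # 1, consistent with the theorem
```

The printed value is 1 for this instance, consistent with the paper's statement for $p^n \equiv 3 \pmod 8$; the count for $b = 0$ is deliberately excluded, which is what "locally" refers to here.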
580eb3eae1c09358981f496084cb96be8fee473050d513f4d4d6c299ff63d2eb
2026-01-07T00:00:00-05:00
Infini-gram mini: Exact n-gram Search at the Internet Scale with FM-Index
arXiv:2506.12229v5 Announce Type: replace Abstract: Language models are trained mainly on massive text data from the Internet, and it becomes increasingly important to understand this data source. Exact-match search engines enable searching in large text corpora - counting string appearances and retrieving the enclosing documents - yet the high storage overhead hinders their application on Internet-scale data. We present infini-gram mini, an efficient and scalable system that can make petabyte-level text corpora searchable. Based on the FM-index data structure (Ferragina and Manzini, 2000), which simultaneously indexes and compresses text, our system creates indexes with size only 44% of the corpus. Infini-gram mini greatly improves upon the best existing implementation of FM-index in terms of indexing speed (18$\times$) and memory use during both indexing (3.2$\times$ reduction) and querying (down to a negligible amount). We index 83TB of Internet text in 99 days with a single CPU node with 128 vCPUs (or 19 hours if using 137 such nodes). We show one important use case of infini-gram mini in a large-scale analysis of benchmark contamination. We find several core LM evaluation benchmarks to be heavily contaminated in Internet crawls (up to 74.2% in GSM8K), which could lead to overestimating the capabilities of language models if trained on such data. We host a benchmark contamination bulletin to share the contamination rate of many core and community-contributed benchmarks. We also release a web interface and an API endpoint to serve general search queries on infini-gram mini indexes.
https://arxiv.org/abs/2506.12229
Academic Papers
svg
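The FM-index primitive behind infini-gram mini (count exact occurrences of a pattern via backward search on the Burrows-Wheeler transform) can be sketched in a few lines. This is a naive illustrative version with quadratic-time construction and linear-scan rank, not the paper's implementation, which compresses the BWT and samples its rank structures:

```python
# Minimal FM-index backward search: count exact occurrences of a pattern.

def bwt_from_text(text):
    """Burrows-Wheeler transform via a naive suffix array."""
    text = text + "\0"                       # unique sentinel terminator
    sa = sorted(range(len(text)), key=lambda i: text[i:])
    return "".join(text[i - 1] for i in sa)

def count_occurrences(text, pattern):
    """Count exact occurrences of `pattern` in `text` by backward search."""
    bwt = bwt_from_text(text)
    first_col = sorted(bwt)
    # C[c] = number of characters in the text strictly smaller than c
    C = {}
    for i, c in enumerate(first_col):
        C.setdefault(c, i)
    def rank(c, i):                          # occurrences of c in bwt[:i]
        return bwt[:i].count(c)
    lo, hi = 0, len(bwt)                     # current suffix-array interval
    for c in reversed(pattern):              # extend the pattern right-to-left
        if c not in C:
            return 0
        lo = C[c] + rank(c, lo)
        hi = C[c] + rank(c, hi)
        if lo >= hi:
            return 0
    return hi - lo

print(count_occurrences("abracadabra", "abra"))  # 2
print(count_occurrences("abracadabra", "cad"))   # 1
```

Counting is O(pattern length) rank queries; retrieving the enclosing documents additionally requires sampled suffix-array positions, which this sketch omits.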
27bbff596b90d90f803dab4dcacaf15098ea0a5f5179bf4b9cb7e9a940762b43
2026-01-07T00:00:00-05:00
BandPilot: Towards Performance- and Contention-Aware GPU Dispatching in AI Clusters
arXiv:2506.15595v4 Announce Type: replace Abstract: Modern multi-tenant AI clusters are increasingly communication-bound, driven by high-volume and multi-round GPU-to-GPU collective communication. Consequently, the GPU dispatcher's choice of a physical GPU subset for each tenant largely determines the job's effective collective bandwidth and thus its performance ceiling. Existing dispatchers predominantly rely on static, topology-aware heuristics that prioritize GPU resource compactness, assuming that minimizing physical distance maximizes communication bandwidth. However, we reveal that this assumption often fails due to complex system-level bottlenecks, such as non-linear NIC saturation and inter-node link heterogeneity. This paper presents BandPilot, a performance- and contention-aware GPU dispatching primitive that optimizes effective collective bandwidth for multi-tenant AI clusters. Specifically, BandPilot learns a data-efficient bandwidth model from sparse NCCL measurements via a hierarchical design. Guided by the model, a fast hybrid search combines an equilibrium-driven constructor with a pruned elimination search to navigate the combinatorial allocation space in real time. To account for multi-tenant interference, BandPilot virtually merges a candidate allocation with co-located cross-host jobs to conservatively estimate shared bottleneck capacity and predict contention-degraded bandwidth. Across a 32-GPU H100 cluster and heterogeneous simulations, BandPilot achieves 92-97% bandwidth efficiency relative to the best-found reference, improving average efficiency by 20-40% over topology-compactness heuristics.
https://arxiv.org/abs/2506.15595
Academic Papers
svg
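The conservative contention estimate the BandPilot abstract describes (virtually merge a candidate allocation with co-located jobs, then take the shared bottleneck) reduces to a simple min-over-links calculation. A minimal sketch with hypothetical capacities and job counts:

```python
# Conservative shared-bottleneck estimate: each link's capacity is split
# evenly among the jobs traversing it; the allocation's effective collective
# bandwidth is the worst link's share.

def contended_bandwidth(link_capacity_gbps, jobs_per_link):
    """Estimate contention-degraded bandwidth for a candidate allocation."""
    return min(cap / jobs
               for cap, jobs in zip(link_capacity_gbps, jobs_per_link))

# Three links on the candidate's path; the middle link is shared with one
# co-located cross-host job (2 jobs total on that link).
print(contended_bandwidth([400.0, 400.0, 200.0], [1, 2, 1]))  # 200.0
```

The real system pairs this estimate with a learned bandwidth model; the even-split assumption here is only the conservative baseline.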
f3a59609832241c776453c099f90e63917fe4e57f5cde735fc9edd593635aba3
2026-01-07T00:00:00-05:00
SLR: Automated Synthesis for Scalable Logical Reasoning
arXiv:2506.15787v5 Announce Type: replace Abstract: We introduce SLR, an end-to-end framework for systematic evaluation and training of Large Language Models (LLMs) via Scalable Logical Reasoning. Given a user's task specification, SLR automatically synthesizes (i) an instruction prompt for an inductive reasoning task, (ii) a validation program, executable on model outputs to provide verifiable rewards, and (iii) the latent ground-truth rule. This process is fully automated, scalable, requires no human annotations, and offers precise control over task difficulty. Using SLR, we create SLR-Bench, a benchmark comprising 19k prompts organized into 20 curriculum levels that progressively increase in relational, arithmetic, and recursive complexity. Large-scale evaluation reveals that contemporary LLMs readily produce syntactically valid rules, yet often fail at correct logical inference. Recent reasoning LLMs demonstrate improved performance but incur very high test-time computation, with costs exceeding $300 for just 1,000 prompts. Finally, curriculum learning via SLR doubles Llama-3-8B accuracy on SLR-Bench, achieving parity with Gemini-Flash-Thinking at a fraction of computational cost. Moreover, these reasoning capabilities generalize to a wide range of established benchmarks, underscoring the effectiveness of SLR for downstream reasoning.
https://arxiv.org/abs/2506.15787
Academic Papers
svg
5f3bd63fd11b9249273ea16dc12b64e6cc6829718a404534a02398495c672d95
2026-01-07T00:00:00-05:00
Unpacking Generative AI in Education: Computational Modeling of Teacher and Student Perspectives in Social Media Discourse
arXiv:2506.16412v2 Announce Type: replace Abstract: Generative AI (GAI) technologies are quickly reshaping the educational landscape. As adoption accelerates, understanding how students and educators perceive these tools is essential. This study presents one of the most comprehensive analyses to date of stakeholder discourse dynamics on GAI in education using social media data. Our dataset includes 1,199 Reddit posts and 13,959 corresponding top-level comments. We apply sentiment analysis, topic modeling, and author classification. To support this, we propose and validate a modular framework that leverages prompt-based large language models (LLMs) for analysis of online social discourse, and we evaluate this framework against classical natural language processing (NLP) models. Our GPT-4o pipeline consistently outperforms prior approaches across all tasks. For example, it achieved 90.6% accuracy in sentiment analysis against gold-standard human annotations. Topic extraction uncovered 12 latent topics in the public discourse with varying sentiment and author distributions. Teachers and students convey optimism about GAI's potential for personalized learning and productivity in higher education. However, key differences emerged: students often voice distress over false accusations of cheating by AI detectors, while teachers generally express concern about job security, academic integrity, and institutional pressures to adopt GAI tools. These contrasting perspectives highlight the tension between innovation and oversight in GAI-enabled learning environments. Our findings suggest a need for clearer institutional policies, more transparent GAI integration practices, and support mechanisms for both educators and students. More broadly, this study demonstrates the potential of LLM-based frameworks for modeling stakeholder discourse within online communities.
https://arxiv.org/abs/2506.16412
Academic Papers
svg
4f442b315b22861f18a4ae23f0dede25c410e528f685855e3ad2aa926cd782b3
2026-01-07T00:00:00-05:00
Aha Moment Revisited: Are VLMs Truly Capable of Self Verification in Inference-time Scaling?
arXiv:2506.17417v3 Announce Type: replace Abstract: Inference time techniques such as decoding time scaling and self refinement have been shown to substantially improve mathematical reasoning in large language models (LLMs), largely attributed to emergent self correction and self verification behaviors often elicited through reinforcement learning (RL). In this work, we ask whether the same recipe transfers to vision language models (VLMs), especially RL finetuned variants that claim strong visual mathematical reasoning. Through extensive evaluation, we reach three main findings that differ markedly from text only models. First, generation time capability matters more than verification and refinement: simple majority voting consistently and substantially outperforms verification centric strategies such as best of N with self verification. Second, behaviors often associated with RL tuned models at inference time, such as the 'Aha moment,' do not yield reliable reasoning performance improvements. Third, visual information is not effectively integrated into the model's self verification process. Overall, our analysis highlights a key limitation: current RL trained VLMs derive limited benefit from self verification in the visual modality, which constrains the effectiveness of inference time scaling for visual mathematical reasoning.
https://arxiv.org/abs/2506.17417
Academic Papers
svg
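The first finding above, that simple majority voting over sampled answers outperforms verification-centric strategies such as best-of-N with self-verification, rests on a very simple primitive. A sketch with hypothetical extracted answers:

```python
# Majority voting over N sampled final answers (e.g., extracted math results).

from collections import Counter

def majority_vote(answers):
    """Return the most frequent answer; ties break by first occurrence."""
    counts = Counter(answers)
    best = max(counts.values())
    for a in answers:                 # preserve sampling order on ties
        if counts[a] == best:
            return a

samples = ["42", "41", "42", "17", "42"]
print(majority_vote(samples))         # "42"
```

Unlike best-of-N with self-verification, this requires no second model pass over each candidate, only N generations and a count.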
1c8d869ffaf9de181e96b88b7d3f45f1bb08cd371f51814f22a11d3597c651db
2026-01-07T00:00:00-05:00
MemeMind: A Large-Scale Multimodal Dataset with Chain-of-Thought Reasoning for Harmful Meme Detection
arXiv:2506.18919v3 Announce Type: replace Abstract: As a multimodal medium combining images and text, memes frequently convey implicit harmful content through metaphors and humor, rendering the detection of harmful memes a complex and challenging task. Although recent studies have made progress in detection accuracy and interpretability, large-scale, high-quality datasets for harmful memes remain scarce, and current methods still struggle to capture implicit risks and nuanced semantics. Thus, we construct MemeMind, a large-scale harmful meme dataset. Aligned with international standards and the context of the internet, MemeMind provides detailed Chain-of-Thought (CoT) reasoning annotations to support fine-grained analysis of implicit intentions in memes. Based on this dataset, we further propose MemeGuard, a reasoning-oriented multimodal detection model that significantly improves both the accuracy of harmful meme detection and the interpretability of model decisions. Extensive experimental results demonstrate that MemeGuard outperforms existing state-of-the-art methods on the MemeMind dataset, establishing a solid foundation for future research in harmful meme detection.
https://arxiv.org/abs/2506.18919
Academic Papers
svg
81c076fcac0a2478844fe98ed11a891ff67de07da476ad3017ba5a8da8434f19
2026-01-07T00:00:00-05:00
MIRAGE: A Benchmark for Multimodal Information-Seeking and Reasoning in Agricultural Expert-Guided Conversations
arXiv:2506.20100v2 Announce Type: replace Abstract: We introduce MIRAGE, a new benchmark for multimodal expert-level reasoning and decision-making in consultative interaction settings. Designed for the agriculture domain, MIRAGE captures the full complexity of expert consultations by combining natural user queries, expert-authored responses, and image-based context, offering a high-fidelity benchmark for evaluating models on grounded reasoning, clarification strategies, and long-form generation in a real-world, knowledge-intensive domain. Grounded in over 35,000 real user-expert interactions and curated through a carefully designed multi-step pipeline, MIRAGE spans diverse crop health, pest diagnosis, and crop management scenarios. The benchmark includes more than 7,000 unique biological entities, covering plant species, pests, and diseases, making it one of the most taxonomically diverse benchmarks available for vision-language models, grounded in the real world. Unlike existing benchmarks that rely on well-specified user inputs and closed-set taxonomies, MIRAGE features underspecified, context-rich scenarios with open-world settings, requiring models to infer latent knowledge gaps, handle rare entities, and either proactively guide the interaction or respond. Project Page: https://mirage-benchmark.github.io
https://arxiv.org/abs/2506.20100
Academic Papers
svg
4d297df40b755a7887003105274bc37778ba85ddb8d18df77bd19ab20ace45bd
2026-01-07T00:00:00-05:00
Agent.xpu: Efficient Scheduling of Agentic LLM Workloads on Heterogeneous SoC
arXiv:2506.24045v2 Announce Type: replace Abstract: Personal LLM agents increasingly combine foreground reactive interactions with background proactive monitoring, forming long-lived, stateful LLM flows that interleave prefill and token-by-token decode. While modern heterogeneous SoCs integrate CPUs, iGPUs, and NPUs to support on-device intelligence, existing LLM engines assume static, single-shot inference and lack mechanisms for flow-level concurrency, prioritization, and efficient accelerator coordination. As a result, commodity SoCs remain poorly matched to the dynamic, mixed-criticality execution patterns of personal agents. This paper presents Agent$.$xpu, the first LLM engine that orchestrates concurrent reactive and proactive LLM flows on commodity SoCs. Extensive profiling uncovers unique SoC characteristics of operator-accelerator affinity, asymmetric DDR contention, and stage-divergent batching behaviors distinct from cloud-serving assumptions. Agent$.$xpu introduces three key techniques: a heterogeneous execution graph (HEG) capturing NPU/iGPU affinity and elastic operator binding; flow-aware NPU-iGPU coordination with stage elasticity, decoupling prefill and decode to reduce bandwidth contention and enforce priorities; and fine-grained preemption with slack-aware piggybacking to guarantee reactive responsiveness without starving proactive work. Across realistic personal-agent workloads, Agent$.$xpu delivers 1.2-4.9$\times$ proactive throughput and reduces reactive latency by at least 91%, compared with both industrial iGPU-only serving engine and NPU-iGPU static inference with optimal tensor-partitioning schemes. Agent$.$xpu also minimizes energy consumption and graphics interference via controlled iGPU usage.
https://arxiv.org/abs/2506.24045
Academic Papers
svg
9b96d13438d43510385675fac637045dde7b1a6233d883141d5cd8bfdf03f2ee
2026-01-07T00:00:00-05:00
Stable Preference Optimization: A Bilevel Approach to Catastrophic Preference Shift
arXiv:2507.07723v2 Announce Type: replace Abstract: Direct Preference Learning has emerged as a dominant offline paradigm for preference optimization. Most of these methods are based on the Bradley-Terry (BT) model for pairwise preference ranking, which directly aligns the language model with human preferences. Prior work has observed a counter-intuitive phenomenon termed likelihood displacement, where the absolute probability of preferred responses decreases simultaneously during training. We demonstrate that such displacement can lead to a more devastating failure mode, which we define as \textit{Catastrophic Preference Shift}, where the lost preference probability mass inadvertently shifts toward out-of-distribution (OOD) responses. Such a failure mode is a key limitation shared across BT-style direct preference learning methods, due to the fundamental conflict between the unconstrained discriminative alignment and generative foundational capabilities, ultimately leading to severe performance degradation (e.g., SimPO suffers a significant drop in reasoning accuracy from 73.5\% to 37.5\%). We analyze existing BT-style methods from the probability evolution perspective and theoretically prove that these methods exhibit over-reliance on model initialization and can lead to preference shift. To resolve these counter-intuitive behaviors, we propose a theoretically grounded Stable Preference Optimization (SPO) framework that constrains preference learning within a safe alignment region. Empirical evaluations demonstrate that SPO effectively stabilizes and enhances the performance of existing BT-style preference learning methods. SPO provides new insights into the design of preference learning objectives and opens up new avenues towards more reliable and interpretable language model alignment.
https://arxiv.org/abs/2507.07723
Academic Papers
svg
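For background on the BT-style objective the abstract analyzes: DPO-style methods score a preference pair through a sigmoid of scaled log-probability margins relative to a reference policy. This is the standard DPO formulation, sketched here with hypothetical numbers, not code from the paper:

```python
# Bradley-Terry / DPO preference probability for a (preferred, rejected) pair.

import math

def dpo_preference_prob(logp_w, logp_l, ref_logp_w, ref_logp_l, beta=0.1):
    """P(y_w preferred to y_l) under the BT model with a reference policy."""
    margin = beta * ((logp_w - ref_logp_w) - (logp_l - ref_logp_l))
    return 1.0 / (1.0 + math.exp(-margin))

# If the policy raises the preferred response's log-prob relative to the
# reference while the rejected one is unchanged, preference prob > 0.5:
p = dpo_preference_prob(logp_w=-9.0, logp_l=-12.0,
                        ref_logp_w=-10.0, ref_logp_l=-12.0)
print(round(p, 3))   # 0.525
```

Because the objective depends only on the relative margin, both absolute log-probabilities can fall while the preference probability still rises, which is exactly the likelihood-displacement behavior the abstract describes.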
3fcc7e9cd4dcba26d2fbd1e45d2f82c279e0c76ce1447d7c0519ec2243b59a31
2026-01-07T00:00:00-05:00
Information-Theoretic Generalization Bounds of Replay-based Continual Learning
arXiv:2507.12043v2 Announce Type: replace Abstract: Continual learning (CL) has emerged as a dominant paradigm for acquiring knowledge from sequential tasks while avoiding catastrophic forgetting. Although many CL methods have been proposed to show impressive empirical performance, the theoretical understanding of their generalization behavior remains limited, particularly for replay-based approaches. This paper establishes a unified theoretical framework for replay-based CL, deriving a series of information-theoretic generalization bounds that explicitly elucidate the impact of the memory buffer alongside the current task on generalization performance. Specifically, our hypothesis-based bounds capture the trade-off between the number of selected exemplars and the information dependency between the hypothesis and the memory buffer. Our prediction-based bounds yield tighter and computationally tractable upper bounds on the generalization error by leveraging low-dimensional variables. Theoretical analysis is general and broadly applicable to a wide range of learning algorithms, exemplified by stochastic gradient Langevin dynamics (SGLD) as a representative method. Comprehensive experimental evaluations demonstrate the effectiveness of our derived bounds in capturing the generalization dynamics in replay-based CL settings.
https://arxiv.org/abs/2507.12043
Academic Papers
svg
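For background (our addition, stating a classical result rather than this paper's bounds): hypothesis-based information-theoretic generalization bounds typically refine the Xu-Raginsky input-output mutual information bound, which for a $\sigma$-sub-Gaussian loss and a training set $S$ of $n$ i.i.d. samples reads

```latex
\left|\,\mathbb{E}\!\left[\operatorname{gen}(W, S)\right]\right|
  \;\le\; \sqrt{\frac{2\sigma^{2}}{n}\, I(W; S)},
```

where $W$ is the learned hypothesis and $I(W;S)$ is the mutual information between hypothesis and data. The replay-based bounds described in the abstract additionally account for the dependency between the hypothesis and the memory buffer.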
c5ac3642ba58dd8dd35411b3041fa393b5426d452d60bf0a147235b11d6880be
2026-01-07T00:00:00-05:00
Constructions of binary self-orthogonal singly-even minimal linear codes violating the Ashikhmin-Barg condition with few weights
arXiv:2507.12240v3 Announce Type: replace Abstract: We first establish a simple yet powerful necessary and sufficient condition for a binary linear code to be self-orthogonal (SO), leading to a complete characterization of singly-even codes in this family. We further derive necessary and sufficient conditions on Boolean and vectorial Boolean functions for generating such codes via a standard construction method. Building on this foundation, we propose three general frameworks for constructing binary SO singly-even minimal linear codes violating the Ashikhmin-Barg (AB) condition with few weights. The first two approaches are based on designing Boolean and vectorial Boolean functions that simultaneously satisfy multiple conditions. The third method generates new SO codes from existing ones. As a result, we obtain many infinite classes of binary self-orthogonal singly-even minimal linear codes violating the AB condition with few weights and fully determined weight distributions. Particularly, numerical results show that some duals of our codes are optimal or near-optimal.
https://arxiv.org/abs/2507.12240
Academic Papers
svg
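The two code properties in the title can be checked concretely on a toy example (our illustration, not from the paper): a binary linear code with generator matrix $G$ is self-orthogonal iff every pair of generator rows has even overlap, and singly-even iff all weights are even but not all divisible by 4.

```python
# Check self-orthogonality and the singly-even condition for a toy [4,2] code.

def is_self_orthogonal(G):
    """C is contained in its dual iff every pair of rows of G has even overlap."""
    return all(sum(a & b for a, b in zip(r, s)) % 2 == 0
               for r in G for s in G)

def codewords(G):
    """All F2-linear combinations of the rows of G."""
    n = len(G[0])
    words = {tuple([0] * n)}
    for row in G:
        words |= {tuple(w[i] ^ row[i] for i in range(n)) for w in words}
    return words

G = [[1, 1, 1, 1],
     [1, 1, 0, 0]]
weights = sorted(sum(w) for w in codewords(G))
print(is_self_orthogonal(G))   # True
print(weights)                 # [0, 2, 2, 4]
```

All weights are even but weight 2 is not divisible by 4, so this toy code is self-orthogonal and singly-even; the paper's frameworks produce infinite classes with these properties plus minimality and few weights.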
39f4d02f01d7d8a46a2b2a68f0e0e7ad15f73f91e91a4e1ef38f522ac734d36a
2026-01-07T00:00:00-05:00
Compositional Discrete Latent Code for High Fidelity, Productive Diffusion Models
arXiv:2507.12318v3 Announce Type: replace Abstract: We argue that diffusion models' success in modeling complex distributions comes, for the most part, from their input conditioning. This paper investigates the representation used to condition diffusion models from the perspective that ideal representations should improve sample fidelity, be easy to generate, and be compositional to allow generation of out-of-training samples. We introduce Discrete Latent Code (DLC), an image representation derived from Simplicial Embeddings trained with a self-supervised learning objective. DLCs are sequences of discrete tokens, as opposed to the standard continuous image embeddings. They are easy to generate and their compositionality enables sampling of novel images beyond the training distribution. Diffusion models trained with DLCs have improved generation fidelity, establishing a new state-of-the-art for unconditional image generation on ImageNet. Additionally, we show that composing DLCs allows the image generator to produce out-of-distribution samples that coherently combine the semantics of images in diverse ways. Finally, we showcase how DLCs can enable text-to-image generation by leveraging large-scale pretrained language models. We efficiently finetune a text diffusion language model to generate DLCs that produce novel samples outside of the image generator training distribution.
https://arxiv.org/abs/2507.12318
Academic Papers
svg
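The step from a Simplicial-Embedding-style representation to discrete tokens can be illustrated as follows (our sketch; the dimensions and values are hypothetical): partition the embedding into groups, and since softmax is monotone, each group's token is simply the argmax index within that group.

```python
# Derive a sequence of discrete tokens from a grouped embedding.

def to_discrete_tokens(embedding, n_groups):
    """Split `embedding` into n_groups chunks; each chunk yields one token."""
    size = len(embedding) // n_groups
    tokens = []
    for g in range(n_groups):
        chunk = embedding[g * size:(g + 1) * size]
        # softmax over the chunk is monotone, so the token is its argmax
        tokens.append(max(range(size), key=lambda i: chunk[i]))
    return tokens

emb = [0.1, 2.0, -1.0, 0.5, 0.0, 3.0]        # hypothetical 6-dim embedding
print(to_discrete_tokens(emb, n_groups=2))   # [1, 2]
```

The resulting token sequence is what a (text) autoregressive or diffusion language model can then learn to generate, as the abstract describes.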
33799b3f66cb2f80ef2853c3c9869db2672117d75e249b6dc69f0c89b0e5eb74
2026-01-07T00:00:00-05:00
BusterX++: Towards Unified Cross-Modal AI-Generated Content Detection and Explanation with MLLM
arXiv:2507.14632v3 Announce Type: replace Abstract: Recent advances in generative AI have dramatically improved image and video synthesis capabilities, significantly increasing the risk of misinformation through sophisticated fake content. In response, detection methods have evolved from traditional approaches to multimodal large language models (MLLMs), offering enhanced transparency and interpretability in identifying synthetic media. However, current detection systems remain fundamentally limited by their single-modality design. These approaches analyze images or videos separately, making them ineffective against synthetic content that combines multiple media formats. To address these challenges, we introduce \textbf{BusterX++}, a framework for unified detection and explanation of synthetic image and video, with a direct reinforcement learning (RL) post-training strategy. To enable comprehensive evaluation, we also present \textbf{GenBuster++}, a unified benchmark leveraging state-of-the-art image and video generation techniques. This benchmark comprises 4,000 images and video clips, meticulously curated by human experts to ensure high quality, diversity, and real-world applicability. Extensive experiments demonstrate the effectiveness and generalizability of our approach.
https://arxiv.org/abs/2507.14632
Academic Papers
svg
e0940e6c0c391b2903e91a5a4c106f20085f9168b6b76dee0a38e8078bd2edfd
2026-01-07T00:00:00-05:00
Awakening LLMs' Reasoning Potential: A Fine-Grained Pipeline to Evaluate and Mitigate Vague Perception
arXiv:2507.16199v5 Announce Type: replace Abstract: Large language models (LLMs) are increasingly trained to abstain on difficult questions by answering "unknown". However, we observe that LLMs often misuse this option: they output "unknown" even when they can actually solve the questions, or they fail to understand why questions are truly unsolvable. We formalize this mismatch between potential ability and the inclination of abstention as the Vague Perception phenomenon. We introduce the WakenLLM pipeline that (1) extracts Vague Perception samples and (2) measures how many of them can be converted to correct answers under stimulation. Based on stage-wise metrics (TCR, OCR, etc.) and the upper-bound accuracy Acc(WakenLLM), we quantify LLMs' reasoning potential beyond one-shot accuracy. Experiments on six LLMs suggest that, without further training or parameter revisions, LLMs can achieve up to a 68.53% increase in accuracy on Vague Perception samples through our designed pipeline. We further analyze how Vague Perception, Conformity and Degradation vary across model families and parameter sizes, and offer model selection strategies in multi-stage reasoning workflows. Finally, by comparing WakenLLM against mainstream reasoning baselines, both training and non-training ones, we show that existing baselines only activate a small portion of LLMs' reasoning potential, pointing to perception-aware reasoning as a promising direction for future LLM design. Code and datasets are available at https://github.com/WakenLLMTeam/WakenLLM-toolkit.
https://arxiv.org/abs/2507.16199
Academic Papers
svg
2a5ea62a6f879d4bc02edde7c674dabfabdab06ddf650ff5a37a70bff2b2b3bc
2026-01-07T00:00:00-05:00
TELEVAL: A Dynamic Benchmark Designed for Spoken Language Models in Chinese Interactive Scenarios
arXiv:2507.18061v2 Announce Type: replace Abstract: Spoken language models (SLMs) have advanced rapidly in recent years, accompanied by a growing number of evaluation benchmarks. However, most existing benchmarks emphasize task completion and capability scaling, while remaining poorly aligned with how users interact with SLMs in real-world spoken conversations. Effective spoken interaction requires not only accurate understanding of user intent and content, but also the ability to respond with appropriate interactional strategies. In this paper, we present TELEVAL, a dynamic, user-centered benchmark for evaluating SLMs in realistic Chinese spoken interaction scenarios. TELEVAL consolidates evaluation into two core aspects. Reliable Content Fulfillment assesses whether models can comprehend spoken inputs and produce semantically correct responses. Interactional Appropriateness evaluates whether models act as socially capable interlocutors, requiring them not only to generate human-like, colloquial responses, but also to implicitly incorporate paralinguistic cues for natural interaction. Experiments reveal that, despite strong performance on semantic and knowledge-oriented tasks, current SLMs still struggle to produce natural and interactionally appropriate responses, highlighting the need for more interaction-faithful evaluation.
https://arxiv.org/abs/2507.18061
Academic Papers
svg
ade4761b92f4a9dd374c709e3d4e97520cbaae0023ede2f6c921283c21ad414a
2026-01-07T00:00:00-05:00
Learning an Efficient Multi-Turn Dialogue Evaluator from Multiple LLM Judges
arXiv:2508.00454v4 Announce Type: replace Abstract: Evaluating the conversational abilities of large language models (LLMs) remains a challenging task. Current mainstream approaches primarily rely on the "LLM-as-a-judge" paradigm, where an LLM is prompted to serve as an evaluator to assess dialogue quality. However, such methods often suffer from various biases, which undermine the reliability and consistency of the evaluation results. To mitigate these biases, recent methods employ multiple LLMs as judges and aggregate their judgments to select the optimal assessment. Although effective, this multi-judge approach incurs significant computational overhead during inference. In this paper, we propose an efficient dialogue evaluator that captures the collective wisdom of multiple LLM judges by aggregating their preference knowledge into a single model. Our approach preserves the advantages of diverse multi-judge feedback while drastically reducing the evaluation cost, enabling fast, flexible, and fine-grained dialogue quality assessment. Extensive experiments on seven single rating and pairwise comparison dialogue evaluation benchmarks demonstrate that our method outperforms existing baselines across diverse scenarios, showcasing its efficiency and robustness.
https://arxiv.org/abs/2508.00454
Academic Papers
svg
d21280ac63310d7508611b3276b5fcba8fff64a27bd5694abdd00b509cdd8c49
2026-01-07T00:00:00-05:00
Pro2Guard: Proactive Runtime Enforcement of LLM Agent Safety via Probabilistic Model Checking
arXiv:2508.00500v2 Announce Type: replace Abstract: Large Language Model (LLM) agents demonstrate strong autonomy, but their stochastic behavior introduces unpredictable safety risks. Existing rule-based enforcement systems, such as AgentSpec, are reactive, intervening only when unsafe behavior is imminent or has occurred, lacking foresight for long-horizon dependencies. To overcome these limitations, we present a proactive runtime enforcement framework for LLM agents. The framework abstracts agent behaviors into symbolic states and learns a Discrete-Time Markov Chain (DTMC) from execution traces. At runtime, it predicts the probability of leading to undesired behaviors and intervenes before violations occur when the estimated risk exceeds a user-defined threshold. Designed to provide PAC-correctness guarantee, the framework achieves statistically reliable enforcement of agent safety. We evaluate the framework across two safety-critical domains: autonomous vehicles and embodied agents. It proactively enforces safety and maintains high task performance, outperforming existing methods.
https://arxiv.org/abs/2508.00500
Academic Papers
svg
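The Pro2Guard pipeline in the abstract (learn a DTMC from symbolic traces, predict the probability of reaching an undesired state, intervene above a threshold) can be sketched in a few lines. The states, traces, and threshold below are hypothetical stand-ins, not the paper's benchmarks:

```python
# Learn a DTMC from traces and estimate bounded-horizon reachability risk.

from collections import defaultdict

def learn_dtmc(traces):
    """Maximum-likelihood transition probabilities from state traces."""
    counts = defaultdict(lambda: defaultdict(int))
    for trace in traces:
        for s, t in zip(trace, trace[1:]):
            counts[s][t] += 1
    return {s: {t: c / sum(nxt.values()) for t, c in nxt.items()}
            for s, nxt in counts.items()}

def risk(dtmc, state, unsafe, k):
    """Probability of hitting `unsafe` within k steps (unsafe is absorbing)."""
    if state == unsafe:
        return 1.0
    if k == 0:
        return 0.0
    return sum(p * risk(dtmc, t, unsafe, k - 1)
               for t, p in dtmc.get(state, {}).items())

traces = [
    ["idle", "browse", "checkout", "done"],
    ["idle", "browse", "delete_file", "unsafe"],
    ["idle", "browse", "checkout", "done"],
    ["idle", "browse", "checkout", "done"],
]
dtmc = learn_dtmc(traces)
r = risk(dtmc, "browse", "unsafe", 3)
print(r)                      # 0.25: one of four browse continuations is risky
THRESHOLD = 0.2               # user-defined risk threshold (hypothetical)
print(r > THRESHOLD)          # True -> intervene before the violation occurs
```

The intervention fires while the agent is still in `browse`, before any unsafe action executes; the PAC-correctness guarantee in the paper additionally bounds the estimation error of these learned probabilities, which this sketch does not.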