2026-01-01T00:00:00-05:00
Text-to-Image Models and Their Representation of People from Different Nationalities Engaging in Activities
arXiv:2504.06313v4 Announce Type: replace Abstract: This paper investigates how popular text-to-image (T2I) models, DALL-E 3 and Gemini 3 Pro Preview, depict people from 206 nationalities when prompted to generate images of individuals engaging in common everyday activities. Five scenarios were developed, and 2,060 images were generated using input prompts that specified nationalities across five activities. When aggregating across activities and models, results showed that 28.4% of the images depicted individuals wearing traditional attire, including attire that is impractical for the specified activities in several cases. This pattern was statistically significantly associated with regions, with the Middle East & North Africa and Sub-Saharan Africa disproportionately affected, and was also associated with World Bank income groups. Similar region- and income-linked patterns were observed for images labeled as depicting impractical attire in two athletics-related activities. To assess image-text alignment, CLIP, ALIGN, and GPT-4.1 mini were used to score 9,270 image-prompt pairs. Images labeled as featuring traditional attire received statistically significantly higher alignment scores when prompts included country names, and this pattern weakened or reversed when country names were removed. Revised prompt analysis showed that one model frequently inserted the word "traditional" (50.3% for traditional-labeled images vs. 16.6% otherwise). These results indicate that these representational patterns can be shaped by several components of the pipeline, including image generator, evaluation models, and prompt revision.
https://arxiv.org/abs/2504.06313
Academic Papers
2026-01-01T00:00:00-05:00
Beyond Degradation Redundancy: Contrastive Prompt Learning for All-in-One Image Restoration
arXiv:2504.09973v3 Announce Type: replace Abstract: All-in-One Image Restoration (AiOIR), which addresses diverse degradation types with a unified model, presents significant challenges in designing task-aware prompts that effectively guide restoration across multiple degradation scenarios. While adaptive prompt learning enables end-to-end optimization, it often yields overlapping or redundant task representations. Conversely, explicit prompts derived from pretrained classifiers enhance discriminability but discard critical visual information needed for reconstruction. To address these limitations, we introduce Contrastive Prompt Learning (CPL), a framework that aims to improve prompt-task alignment through two complementary components: a Sparse Prompt Module (SPM) that efficiently captures degradation-aware representations while reducing redundancy, and a Contrastive Prompt Regularization (CPR) that explicitly strengthens task boundaries by incorporating negative prompt samples across different degradation types. Unlike previous approaches that focus primarily on degradation classification, CPL directly optimizes the interaction between prompts and the restoration model. Extensive experiments across five benchmarks show that CPL consistently boosts the performance of strong AiOIR baselines across diverse scenarios. Our approach achieves state-of-the-art average performance on these benchmarks, providing a general and robust solution for AiOIR. The code is available at https://github.com/Aitical/CPLIR
https://arxiv.org/abs/2504.09973
Academic Papers
2026-01-01T00:00:00-05:00
xVerify: Efficient Answer Verifier for Reasoning Model Evaluations
arXiv:2504.10481v2 Announce Type: replace Abstract: With the release of OpenAI's o1 model, reasoning models that adopt slow-thinking strategies have become increasingly common. Their outputs often contain complex reasoning, intermediate steps, and self-reflection, making existing evaluation methods and reward models inadequate. In particular, they struggle to judge answer equivalence and to reliably extract final answers from long, complex responses. To address this challenge, we propose xVerify, an efficient answer verifier for evaluating reasoning models. xVerify shows strong equivalence judgment capabilities, enabling accurate comparison between model outputs and reference answers across diverse question types. To train and evaluate xVerify, we construct the VAR dataset, which consists of question-answer pairs generated by multiple LLMs across various datasets. The dataset incorporates multiple reasoning models and challenging evaluation sets specifically designed for reasoning assessment, with a multi-round annotation process to ensure label quality. Based on VAR, we train xVerify models at different scales. Experimental results on both test and generalization sets show that all xVerify variants achieve over 95% F1 score and accuracy. Notably, the smallest model, xVerify-0.5B-I, outperforms all evaluation methods except GPT-4o, while xVerify-3B-Ib surpasses GPT-4o in overall performance. In addition, reinforcement learning experiments using xVerify as the reward model yield an 18.4% improvement for Qwen2.5-7B compared with direct generation, exceeding the gains achieved with Math Verify as the reward. These results demonstrate the effectiveness and generalizability of xVerify. All xVerify resources are available on GitHub: https://github.com/IAAR-Shanghai/xVerify
https://arxiv.org/abs/2504.10481
Academic Papers
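As a contrast to the learned verifier described in the xVerify abstract, the sketch below is a minimal rule-based answer-equivalence check of the kind such papers argue is inadequate for long reasoning outputs. The function names and normalization rules are illustrative assumptions, not part of xVerify:

```python
import re
from fractions import Fraction

def normalize(ans):
    """Crude canonicalization: lowercase, strip surrounding whitespace and a
    trailing period, drop internal whitespace and '$', and reduce simple
    numeric forms (e.g. '0.5' and '1/2') to one representation."""
    s = ans.strip().lower().rstrip(".")
    s = re.sub(r"[\s$]", "", s)
    try:
        return str(Fraction(s))  # '0.5' and '1/2' both become '1/2'
    except (ValueError, ZeroDivisionError):
        return s

def answers_equivalent(a, b):
    """Judge two short answers equivalent iff they normalize identically."""
    return normalize(a) == normalize(b)
```

Rules like these work for short, well-formed answers but break down on free-form reasoning traces, which is the gap a trained verifier targets.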
2026-01-01T00:00:00-05:00
Pre-DPO: Improving Data Utilization in Direct Preference Optimization Using a Guiding Reference Model
arXiv:2504.15843v3 Announce Type: replace Abstract: Direct Preference Optimization (DPO) simplifies reinforcement learning from human feedback (RLHF) for large language models (LLMs) by directly optimizing human preferences without an explicit reward model. We find that during DPO training, the reference model plays the role of a data weight adjuster. However, the common practice of initializing the policy and reference models identically in DPO can lead to inefficient data utilization and impose a performance ceiling. Meanwhile, the lack of a reference model in Simple Preference Optimization (SimPO) reduces training robustness and necessitates stricter conditions to prevent catastrophic forgetting. In this work, we propose Pre-DPO, a simple yet effective DPO-based training paradigm that enhances preference optimization performance by leveraging a guiding reference model. This reference model provides foresight into the optimal policy state achievable through the training preference data, serving as a guiding mechanism that adaptively assigns higher weights to samples more suitable for the model and lower weights to those less suitable. Extensive experiments on AlpacaEval 2.0 and Arena-Hard v0.1 benchmarks demonstrate that Pre-DPO consistently improves the performance of both DPO and SimPO, without relying on external models or additional data.
https://arxiv.org/abs/2504.15843
Academic Papers
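For reference, the vanilla DPO objective that Pre-DPO builds on can be written per preference pair as below. This is a generic sketch of the standard loss with made-up log-probabilities, not the paper's Pre-DPO procedure; note how the reference model enters only through the two margins, which is the sense in which it reweights samples:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def dpo_loss(logp_policy_chosen, logp_policy_rejected,
             logp_ref_chosen, logp_ref_rejected, beta=0.1):
    """Standard DPO loss for one preference pair:
    -log sigma(beta * [(log pi - log pi_ref)(chosen)
                       - (log pi - log pi_ref)(rejected)])."""
    chosen_margin = logp_policy_chosen - logp_ref_chosen
    rejected_margin = logp_policy_rejected - logp_ref_rejected
    return -math.log(sigmoid(beta * (chosen_margin - rejected_margin)))
```

The loss shrinks as the policy prefers the chosen response more strongly than the reference does, so a reference model initialized identically to the policy starts every margin at zero.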
2026-01-01T00:00:00-05:00
ParetoHqD: Fast Offline Multiobjective Alignment of Large Language Models using Pareto High-quality Data
arXiv:2504.16628v3 Announce Type: replace Abstract: Aligning large language models with multiple human expectations and values is crucial for ensuring that they adequately serve a variety of user needs. To this end, offline multiobjective alignment algorithms such as the Rewards-in-Context algorithm have shown strong performance and efficiency. However, inappropriate preference representations and training with imbalanced reward scores limit the performance of such algorithms. In this work, we introduce ParetoHqD that addresses the above issues by representing human preferences as preference directions in the objective space and regarding data near the Pareto front as "high-quality" data. For each preference, ParetoHqD follows a two-stage supervised fine-tuning process, where each stage uses an individual Pareto high-quality training set that best matches its preference direction. The experimental results have demonstrated the superiority of ParetoHqD over five baselines on two multiobjective alignment tasks.
https://arxiv.org/abs/2504.16628
Academic Papers
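The ParetoHqD abstract's notion of "high-quality" data near the Pareto front reduces, in its simplest form, to non-dominated filtering over reward scores. The sketch below shows only that simplification; the paper additionally uses preference directions and admits data merely near the front, which this toy ignores:

```python
def pareto_front(points):
    """Return the non-dominated points, assuming every objective is maximized.

    A point is dominated if some other point is >= in every objective and
    differs in at least one (i.e., is strictly better somewhere).
    """
    front = []
    for p in points:
        dominated = any(
            all(qi >= pi for qi, pi in zip(q, p)) and q != p
            for q in points
        )
        if not dominated:
            front.append(p)
    return front
```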
2026-01-01T00:00:00-05:00
Dynamic Approximate Maximum Matching in the Distributed Vertex Partition Model
arXiv:2504.17338v3 Announce Type: replace Abstract: We initiate the study of approximate maximum matching in the vertex partition model, for graphs subject to dynamic changes. We assume that the $n$ vertices of the graph are partitioned among $k$ players, who execute a distributed algorithm and communicate via message passing. An adaptive adversary may perform dynamic updates to the graph topology by inserting or removing edges between the nodes, and the algorithm needs to respond to these changes by adapting the output of the players, with the goal of maintaining an approximate maximum matching. The main performance metric in this setting is the algorithm's update time, which corresponds to the number of rounds required for updating the solution upon an adversarial change. For the standard setting of single-edge insertions and deletions, we give a randomized Las Vegas algorithm with an expected update time of $O( \lceil \frac{\sqrt{m}}{\beta k} \rceil )$ rounds that maintains a $\frac{2}{3}$-approximate maximum matching that is also maximal, where $m$ is the number of edges in the graph and $\beta$ is the available link bandwidth. For batch-dynamic updates, where the adversary may insert up to $\ell\ge 1$ edges at once, we prove the following. There is a randomized algorithm that succeeds with high probability in maintaining a $\frac{2}{3}$-approximate maximum matching and has a worst case update time of $O(\lceil\frac{\ell\log n}{\sqrt{\beta k}}\rceil )$ rounds. Any algorithm for maintaining a maximal matching without 3-augmenting paths under batches of $\ell$-edge insertions has an update time of $\Omega( \frac{\ell}{\beta k \log n} )$ rounds in the worst case.
https://arxiv.org/abs/2504.17338
Academic Papers
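For intuition about the matching abstract above: a single sequential greedy pass already yields a maximal matching, which is a 1/2-approximation to maximum matching; the paper's distributed algorithms do more work, eliminating 3-augmenting paths to reach the 2/3 bound. A minimal centralized sketch (not the paper's algorithm):

```python
def greedy_maximal_matching(edges):
    """Greedily add each edge whose endpoints are both still unmatched.

    The result is maximal (no edge can be added), hence at least half the
    size of a maximum matching.
    """
    matched = set()
    matching = []
    for u, v in edges:
        if u not in matched and v not in matched:
            matching.append((u, v))
            matched.update((u, v))
    return matching
```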
2026-01-01T00:00:00-05:00
ALF: Advertiser Large Foundation Model for Multi-Modal Advertiser Understanding
arXiv:2504.18785v3 Announce Type: replace Abstract: We present ALF (Advertiser Large Foundation model), a multi-modal transformer architecture for understanding advertiser behavior and intent across text, image, video, and structured data modalities. Through contrastive learning and multi-task optimization, ALF creates unified advertiser representations that capture both content and behavioral patterns. Our model achieves state-of-the-art performance on critical tasks including fraud detection, policy violation identification, and advertiser similarity matching. In production deployment, ALF demonstrates significant real-world impact by delivering simultaneous gains in both precision and recall, for instance boosting recall by over 40 percentage points on one critical policy and increasing precision to 99.8% on another. The architecture's effectiveness stems from its novel combination of multi-modal transformations, inter-sample attention mechanism, spectrally normalized projections, and calibrated probabilistic outputs.
https://arxiv.org/abs/2504.18785
Academic Papers
2026-01-01T00:00:00-05:00
Neurosymbolic Association Rule Mining from Tabular Data
arXiv:2504.19354v5 Announce Type: replace Abstract: Association Rule Mining (ARM) is the task of mining patterns among data features in the form of logical rules, with applications across a myriad of domains. However, high-dimensional datasets often result in an excessive number of rules, increasing execution time and negatively impacting downstream task performance. Managing this rule explosion remains a central challenge in ARM research. To address this, we introduce Aerial+, a novel neurosymbolic ARM method. Aerial+ leverages an under-complete autoencoder to create a neural representation of the data, capturing associations between features. It extracts rules from this neural representation by exploiting the model's reconstruction mechanism. Extensive evaluations on five datasets against seven baselines demonstrate that Aerial+ achieves state-of-the-art results by learning more concise, high-quality rule sets with full data coverage. When integrated into rule-based interpretable machine learning models, Aerial+ significantly reduces execution time while maintaining or improving accuracy.
https://arxiv.org/abs/2504.19354
Academic Papers
2026-01-01T00:00:00-05:00
Analysis of Errors in Robotic Surgical Skill Acquisition with Video-Based Detection
arXiv:2504.19571v2 Announce Type: replace Abstract: Robot-assisted minimally invasive surgeries offer many advantages but require complex motor tasks that take surgeons years to master. There is currently a lack of knowledge on how surgeons acquire these robotic surgical skills. Toward bridging this gap, a previous study followed surgical residents learning complex surgical dry lab tasks on a surgical robot over six months. Errors are an important measure for training and skill evaluation, but unlike in virtual simulations, in dry lab training, errors are difficult to monitor automatically. Here, we analyzed errors in the ring tower transfer task, in which surgical residents moved a ring along a curved wire as quickly and accurately as possible. We developed an image-processing algorithm using color and size thresholds, optical flow and short time Fourier transforms to detect collision errors and achieved a detection accuracy of approximately 95%. Using the detected errors and task completion time, we found that the residents reduced their completion time and number of errors over the six months, while the percentage of task time spent making errors remained relatively constant on average. This analysis sheds light on the learning process of the residents and can serve as a step towards providing error-related feedback to robotic surgeons.
https://arxiv.org/abs/2504.19571
Academic Papers
2026-01-01T00:00:00-05:00
Adapting In-Domain Few-Shot Segmentation to New Domains without Source Domain Retraining
arXiv:2504.21414v4 Announce Type: replace Abstract: Cross-domain few-shot segmentation (CD-FSS) aims to segment objects of novel classes in new domains, which is often challenging due to the diverse characteristics of target domains and the limited availability of support data. Most CD-FSS methods redesign and retrain in-domain FSS models using abundant base data from the source domain, which are effective but costly to train. To address these issues, we propose adapting informative model structures of the well-trained FSS model for target domains by learning domain characteristics from few-shot labeled support samples during inference, thereby eliminating the need for source domain retraining. Specifically, we first adaptively identify domain-specific model structures by measuring parameter importance using a novel structure Fisher score in a data-dependent manner. Then, we progressively train the selected informative model structures with hierarchically constructed training samples, progressing from fewer to more support shots. The resulting Informative Structure Adaptation (ISA) method effectively addresses domain shifts and equips existing well-trained in-domain FSS models with flexible adaptation capabilities for new domains, eliminating the need to redesign or retrain CD-FSS models on base data. Extensive experiments validate the effectiveness of our method, demonstrating superior performance across multiple CD-FSS benchmarks. Codes are at https://github.com/fanq15/ISA.
https://arxiv.org/abs/2504.21414
Academic Papers
2026-01-01T00:00:00-05:00
Zoomer: Adaptive Image Focus Optimization for Black-box MLLM
arXiv:2505.00742v2 Announce Type: replace Abstract: Multimodal large language models (MLLMs) such as GPT-4o, Gemini Pro, and Claude 3.5 have enabled unified reasoning over text and visual inputs, yet they often hallucinate in real-world scenarios, especially when small objects or fine spatial context are involved. We pinpoint two core causes of this failure: the absence of region-adaptive attention and inflexible token budgets that force uniform downsampling, leading to critical information loss. To overcome these limitations, we introduce Zoomer, a visual prompting framework that delivers token-efficient, detail-preserving image representations for black-box MLLMs. Zoomer integrates (1) a prompt-aware emphasis module to highlight semantically relevant regions, (2) a spatial-preserving orchestration schema to maintain object relationships, and (3) a budget-aware strategy to adaptively allocate tokens between global context and local details. Extensive experiments on nine benchmarks and three commercial MLLMs demonstrate that Zoomer boosts accuracy by up to 27% while cutting image token usage by up to 67%. Our approach establishes a principled methodology for robust, resource-aware multimodal understanding in settings where model internals are inaccessible.
https://arxiv.org/abs/2505.00742
Academic Papers
2026-01-01T00:00:00-05:00
Multi-Antenna Users in Cell-Free Massive MIMO: Stream Allocation and Necessity of Downlink Pilots
arXiv:2505.02951v2 Announce Type: replace Abstract: We consider a cell-free massive multiple-input multiple-output (MIMO) system with multiple antennas on the users and access points (APs). In previous works, the downlink spectral efficiency (SE) has been evaluated using the hardening bound that requires no downlink pilots. This approach works well for single-antenna users. In this paper, we show that much higher SEs can be achieved by sending downlink pilots when the users have multiple antennas. The reason is that the effective channel matrix does not harden. We propose a pilot-based downlink estimation scheme, derive a new SE expression, and show numerically that it yields substantially higher performance under correlated Rayleigh fading channels. In cases with multi-antenna users, the APs can either transmit the same or different data streams. The latter reduces the fronthaul signaling but comes with an SE loss. We propose precoding and combining schemes for these cases and consider whether channel knowledge is shared between the APs. Finally, we show numerically how the numbers of users, APs, and antennas on users and APs affect the SE.
https://arxiv.org/abs/2505.02951
Academic Papers
2026-01-01T00:00:00-05:00
An Analysis of Hyper-Parameter Optimization Methods for Retrieval Augmented Generation
arXiv:2505.03452v3 Announce Type: replace Abstract: Optimizing Retrieval-Augmented Generation (RAG) configurations for specific tasks is a complex and resource-intensive challenge. Motivated by this challenge, frameworks for RAG hyper-parameter optimization (HPO) have recently emerged, yet their effectiveness has not been rigorously benchmarked. To fill this gap, we present a comprehensive study involving five HPO algorithms over five datasets from diverse domains, including a newly curated real-world product documentation dataset. Our study explores the largest RAG HPO search space to date that includes full grid-search evaluations, and uses three evaluation metrics as optimization targets. Analysis of the results shows that RAG HPO can be done efficiently, either greedily or with random search, and that it significantly boosts RAG performance for all datasets. For greedy HPO approaches, we show that optimizing model selection first is preferable to the common practice of following the RAG pipeline order during optimization.
https://arxiv.org/abs/2505.03452
Academic Papers
2026-01-01T00:00:00-05:00
Selfish, Local and Online Scheduling via Vector Fitting
arXiv:2505.10082v3 Announce Type: replace Abstract: We provide a dual fitting technique on a semidefinite program yielding simple proofs of tight bounds for the robust price of anarchy of several congestion and scheduling games under the sum of weighted completion times objective. The same approach also allows us to bound the approximation ratio of local search algorithms and the competitive ratio of online algorithms for the scheduling problem $R || \sum w_j C_j$. All of our results are obtained through a simple unified dual fitting argument on the same semidefinite programming relaxation, which can essentially be obtained through the first round of the Lasserre/Sum of Squares hierarchy. As our main application, we show that the known coordination ratio bounds of respectively $4, (3 + \sqrt{5})/2 \approx 2.618,$ and $32/15 \approx 2.133$ for the scheduling game $R || \sum w_j C_j$ under the coordination mechanisms Smith's Rule, Proportional Sharing and Rand (STOC 2011) can be extended to congestion games and obtained through this approach. For the natural restriction where the weight of each player is proportional to its processing time on every resource, we show that the last bound can be improved from 2.133 to 2. This improvement can also be made for general instances when considering the price of anarchy of the game, rather than the coordination ratio. As a further application of this technique, we show that it recovers the tight bound of $(3 + \sqrt{5})/2$ for the price of anarchy of weighted affine congestion games and the Kawaguchi-Kyan bound of $(1+ \sqrt{2})/2$ for the pure price of anarchy of $P || \sum w_j C_j$. Moreover, this approach can analyze a simple local search algorithm for $R || \sum w_j C_j$, the best currently known combinatorial approximation algorithm for this problem achieving an approximation ratio of $(5 + \sqrt{5})/4 + \varepsilon$ and an online greedy algorithm which is $4$-competitive.
https://arxiv.org/abs/2505.10082
Academic Papers
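Smith's Rule, one of the coordination mechanisms analyzed in the scheduling abstract above, orders the jobs on a machine by non-increasing weight-to-processing-time ratio; on a single machine it minimizes the sum of weighted completion times exactly. A single-machine sketch with made-up job data:

```python
def smiths_rule_schedule(jobs):
    """Order jobs by non-increasing w_j / p_j (equivalently, non-decreasing
    p_j / w_j) on one machine and return (order, sum of w_j * C_j).

    jobs: list of (weight w_j, processing time p_j) pairs.
    """
    order = sorted(range(len(jobs)), key=lambda j: jobs[j][1] / jobs[j][0])
    t, total = 0, 0
    for j in order:
        w, p = jobs[j]
        t += p          # completion time C_j of job j
        total += w * t  # accumulate weighted completion time
    return order, total
```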
2026-01-01T00:00:00-05:00
Neural Field Equations with random data
arXiv:2505.16343v3 Announce Type: replace Abstract: We study neural field equations, which are prototypical models of large-scale cortical activity, subject to random data. We view this spatially-extended, nonlocal evolution equation as a Cauchy problem on abstract Banach spaces, with randomness in the synaptic kernel, firing rate function, external stimuli, and initial conditions. We determine conditions on the random data that guarantee existence, uniqueness, and measurability of the solution in an appropriate Banach space, and examine the regularity of the solution in relation to the regularity of the inputs. We present results for linear and nonlinear neural fields, and for the two most common functional setups in the numerical analysis of this problem. In addition to the continuous problem, we analyse in abstract form neural fields that have been spatially discretised, setting the foundations for analysing uncertainty quantification (UQ) schemes.
https://arxiv.org/abs/2505.16343
Academic Papers
2026-01-01T00:00:00-05:00
MangaVQA and MangaLMM: A Benchmark and Specialized Model for Multimodal Manga Understanding
arXiv:2505.20298v2 Announce Type: replace Abstract: Manga, or Japanese comics, is a richly multimodal narrative form that blends images and text in complex ways. Teaching large multimodal models (LMMs) to understand such narratives at a human-like level could help manga creators reflect on and refine their stories. To this end, we introduce two benchmarks for multimodal manga understanding: MangaOCR, which targets in-page text recognition, and MangaVQA, a novel benchmark designed to evaluate contextual understanding through visual question answering. MangaVQA consists of 526 high-quality, manually constructed question-answer pairs, enabling reliable evaluation across diverse narrative and visual scenarios. Building on these benchmarks, we develop MangaLMM, a manga-specialized model finetuned from the open-source LMM Qwen2.5-VL to jointly handle both tasks. Through extensive experiments, including comparisons with proprietary models such as GPT-4o and Gemini 2.5, we assess how well LMMs understand manga. Our benchmark and model provide a comprehensive foundation for evaluating and advancing LMMs in the richly narrative domain of manga.
https://arxiv.org/abs/2505.20298
Academic Papers
2026-01-01T00:00:00-05:00
OSVI-WM: One-Shot Visual Imitation for Unseen Tasks using World-Model-Guided Trajectory Generation
arXiv:2505.20425v2 Announce Type: replace Abstract: Visual imitation learning enables robotic agents to acquire skills by observing expert demonstration videos. In the one-shot setting, the agent generates a policy after observing a single expert demonstration without additional fine-tuning. Existing approaches typically train and evaluate on the same set of tasks, varying only object configurations, and struggle to generalize to unseen tasks with different semantic or structural requirements. While some recent methods attempt to address this, they exhibit low success rates on hard test tasks that, despite being visually similar to some training tasks, differ in context and require distinct responses. Additionally, most existing methods lack an explicit model of environment dynamics, limiting their ability to reason about future states. To address these limitations, we propose a novel framework for one-shot visual imitation learning via world-model-guided trajectory generation. Given an expert demonstration video and the agent's initial observation, our method leverages a learned world model to predict a sequence of latent states and actions. This latent trajectory is then decoded into physical waypoints that guide the agent's execution. Our method is evaluated on two simulated benchmarks and three real-world robotic platforms, where it consistently outperforms prior approaches, with over 30% improvement in some cases. The code is available at https://github.com/raktimgg/osvi-wm.
https://arxiv.org/abs/2505.20425
Academic Papers
2026-01-01T00:00:00-05:00
Do LLMs Understand Collaborative Signals? Diagnosis and Repair
arXiv:2505.20730v3 Announce Type: replace Abstract: Collaborative information from user-item interactions is a fundamental source of signal in successful recommender systems. Recently, researchers have attempted to incorporate this knowledge into large language model-based recommender approaches (LLMRec) to enhance their performance. However, there has been little fundamental analysis of whether LLMs can effectively reason over collaborative information. In this paper, we analyze the ability of LLMs to reason about collaborative information in recommendation tasks, comparing their performance to traditional matrix factorization (MF) models. We propose a simple and effective method to improve LLMs' reasoning capabilities using retrieval-augmented generation (RAG) over the user-item interaction matrix with four different prompting strategies. Our results show that the LLM outperforms the MF model whenever we provide relevant information in a clear and easy-to-follow format, and prompt the LLM to reason based on it. We observe that with this strategy, in almost all cases, the more information we provide, the better the LLM performs.
https://arxiv.org/abs/2505.20730
Academic Papers
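The winning strategy in the abstract above, providing relevant interactions in a clear, easy-to-follow format before asking the LLM to reason, can be sketched as a simple prompt builder over a user-item interaction dictionary. The function name, retrieval rule (overlap count), and prompt wording here are hypothetical:

```python
def build_cf_prompt(target_user, interactions, candidate_item, k=3):
    """Retrieve the k users whose liked-item sets overlap most with the
    target user's, then lay out all histories before the question.

    interactions: dict mapping user id -> set of liked item ids.
    """
    target = interactions[target_user]
    neighbors = sorted(
        (u for u in interactions if u != target_user),
        key=lambda u: len(interactions[u] & target),
        reverse=True,
    )[:k]
    lines = [f"User {target_user} liked: {sorted(target)}"]
    for u in neighbors:
        lines.append(f"Similar user {u} liked: {sorted(interactions[u])}")
    lines.append(f"Based only on the lists above, would user {target_user} "
                 f"like item {candidate_item}? Explain, then answer yes/no.")
    return "\n".join(lines)
```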
2026-01-01T00:00:00-05:00
GoMatching++: Parameter- and Data-Efficient Arbitrary-Shaped Video Text Spotting and Benchmarking
arXiv:2505.22228v2 Announce Type: replace Abstract: Video text spotting (VTS) extends image text spotting (ITS) by adding text tracking, significantly increasing task complexity. Despite progress in VTS, existing methods still fall short of the performance seen in ITS. This paper identifies a key limitation in current video text spotters: limited recognition capability, even after extensive end-to-end training. To address this, we propose GoMatching++, a parameter- and data-efficient method that transforms an off-the-shelf image text spotter into a video specialist. The core idea lies in freezing the image text spotter and introducing a lightweight, trainable tracker, which can be optimized efficiently with minimal training data. Our approach includes two key components: (1) a rescoring mechanism to bridge the domain gap between image and video data, and (2) the LST-Matcher, which enhances the frozen image text spotter's ability to handle video text. We explore various architectures for LST-Matcher to ensure efficiency in both parameters and training data. As a result, GoMatching++ sets new performance records on challenging benchmarks such as ICDAR15-video, DSText, and BOVText, while significantly reducing training costs. To address the lack of curved text datasets in VTS, we introduce ArTVideo, a new benchmark featuring over 30% curved text with detailed annotations. We also provide a comprehensive statistical analysis and experimental results for ArTVideo. We believe that GoMatching++ and the ArTVideo benchmark will drive future advancements in video text spotting. The source code, models and dataset are publicly available at https://github.com/Hxyz-123/GoMatching.
https://arxiv.org/abs/2505.22228
Academic Papers
2026-01-01T00:00:00-05:00
Improving Reliability and Explainability of Medical Question Answering through Atomic Fact Checking in Retrieval-Augmented LLMs
arXiv:2505.24830v3 Announce Type: replace Abstract: Large language models (LLMs) exhibit extensive medical knowledge but are prone to hallucinations and inaccurate citations, which pose a challenge to their clinical adoption and regulatory compliance. Current methods, such as Retrieval Augmented Generation, partially address these issues by grounding answers in source documents, but hallucinations and low fact-level explainability persist. In this work, we introduce a novel atomic fact-checking framework designed to enhance the reliability and explainability of LLMs used in medical long-form question answering. This method decomposes LLM-generated responses into discrete, verifiable units called atomic facts, each of which is independently verified against an authoritative knowledge base of medical guidelines. This approach enables targeted correction of errors and direct tracing to source literature, thereby improving the factual accuracy and explainability of medical Q&A. Extensive evaluation using multi-reader assessments by medical experts and an automated open Q&A benchmark demonstrated significant improvements in factual accuracy and explainability. Our framework achieved up to a 40% overall answer improvement and a 50% hallucination detection rate. The ability to trace each atomic fact back to the most relevant chunks from the database provides a granular, transparent explanation of the generated responses, addressing a major gap in current medical AI applications. This work represents a crucial step towards more trustworthy and reliable clinical applications of LLMs, addressing key prerequisites for clinical application and fostering greater confidence in AI-assisted healthcare.
https://arxiv.org/abs/2505.24830
Academic Papers
svg
b3a5e4b5d806bcae3a9c947c419af603f5eae68a11aa04651ed90bd56d95392d
2026-01-01T00:00:00-05:00
TalkingHeadBench: A Multi-Modal Benchmark & Analysis of Talking-Head DeepFake Detection
arXiv:2505.24866v2 Announce Type: replace Abstract: The rapid advancement of talking-head deepfake generation fueled by advanced generative models has elevated the realism of synthetic videos to a level that poses substantial risks in domains such as media, politics, and finance. However, current benchmarks for deepfake talking-head detection fail to reflect this progress, relying on outdated generators and offering limited insight into model robustness and generalization. We introduce TalkingHeadBench, a comprehensive multi-model multi-generator benchmark and curated dataset designed to evaluate the performance of state-of-the-art detectors on the most advanced generators. Our dataset includes deepfakes synthesized by leading academic and commercial models and features carefully constructed protocols to assess generalization under distribution shifts in identity and generator characteristics. We benchmark a diverse set of existing detection methods, including CNNs, vision transformers, and temporal models, and analyze their robustness and generalization capabilities. In addition, we provide error analysis using Grad-CAM visualizations to expose common failure modes and detector biases. TalkingHeadBench is hosted on https://huggingface.co/datasets/luchaoqi/TalkingHeadBench with open access to all data splits and protocols. Our benchmark aims to accelerate research towards more robust and generalizable detection models in the face of rapidly evolving generative techniques.
https://arxiv.org/abs/2505.24866
Academic Papers
svg
1a484339f20fe8f5b707dc6c2c0f5ec0f84d076e7a8f85df68148f82fe8e47a7
2026-01-01T00:00:00-05:00
Automatic Stage Lighting Control: Is it a Rule-Driven Process or Generative Task?
arXiv:2506.01482v2 Announce Type: replace Abstract: Stage lighting is a vital component in live music performances, shaping an engaging experience for both musicians and audiences. In recent years, Automatic Stage Lighting Control (ASLC) has attracted growing interest due to the high costs of hiring or training professional lighting engineers. However, most existing ASLC solutions only classify music into limited categories and map them to predefined light patterns, resulting in formulaic and monotonous outcomes that lack rationality. To address this gap, this paper presents Skip-BART, an end-to-end model that directly learns from experienced lighting engineers and predicts vivid, human-like stage lighting. To the best of our knowledge, this is the first work to conceptualize ASLC as a generative task rather than merely a classification problem. Our method adapts the BART model to take audio music as input and produce light hue and value (intensity) as output, incorporating a novel skip connection mechanism to enhance the relationship between music and light within the frame grid. To address the lack of available datasets, we create the first stage lighting dataset, along with several pre-training and transfer learning techniques to improve model training with limited data. We validate our method through both quantitative analysis and a human evaluation, demonstrating that Skip-BART outperforms conventional rule-based methods across all evaluation metrics and shows only a limited gap compared to real lighting engineers. To support further research, we have made our self-collected dataset, code, and trained model parameters available at https://github.com/RS2002/Skip-BART .
https://arxiv.org/abs/2506.01482
Academic Papers
svg
237122b69e6437e36a1d499082396963fe894de98307afc90aade6bc5c2197eb
2026-01-01T00:00:00-05:00
Controllable Human-centric Keyframe Interpolation with Generative Prior
arXiv:2506.03119v2 Announce Type: replace Abstract: Existing interpolation methods use pre-trained video diffusion priors to generate intermediate frames between sparsely sampled keyframes. In the absence of 3D geometric guidance, these methods struggle to produce plausible results for complex, articulated human motions and offer limited control over the synthesized dynamics. In this paper, we introduce PoseFuse3D Keyframe Interpolator (PoseFuse3D-KI), a novel framework that integrates 3D human guidance signals into the diffusion process for Controllable Human-centric Keyframe Interpolation (CHKI). To provide rich spatial and structural cues for interpolation, our PoseFuse3D, a 3D-informed control model, features a novel SMPL-X encoder that transforms 3D geometry and shape into the 2D latent conditioning space, alongside a fusion network that integrates these 3D cues with 2D pose embeddings. For evaluation, we build CHKI-Video, a new dataset annotated with both 2D poses and 3D SMPL-X parameters. We show that PoseFuse3D-KI consistently outperforms state-of-the-art baselines on CHKI-Video, achieving a 9% improvement in PSNR and a 38% reduction in LPIPS. Comprehensive ablations demonstrate that our PoseFuse3D model improves interpolation fidelity.
https://arxiv.org/abs/2506.03119
Academic Papers
svg
cfd1443a4d77249013010834827878d35a920bebb17d73d530224761d5255031
2026-01-01T00:00:00-05:00
Not All Tokens Are Meant to Be Forgotten
arXiv:2506.03142v2 Announce Type: replace Abstract: Large Language Models (LLMs), pre-trained on massive text corpora, exhibit remarkable human-level language understanding, reasoning, and decision-making abilities. However, they tend to memorize unwanted information, such as private or copyrighted content, raising significant privacy and legal concerns. Unlearning has emerged as a promising solution, but existing methods face a significant challenge of over-forgetting. This issue arises because they indiscriminately suppress the generation of all the tokens in forget samples, leading to a substantial loss of model utility. To overcome this challenge, we introduce the Targeted Information Forgetting (TIF) framework, which consists of (1) a flexible targeted information identifier designed to differentiate between unwanted words (UW) and general words (GW) in the forget samples, and (2) a novel Targeted Preference Optimization approach that leverages Logit Preference Loss to unlearn unwanted information associated with UW and Preservation Loss to retain general information in GW, effectively improving the unlearning process while mitigating utility degradation. Extensive experiments on the TOFU and MUSE benchmarks demonstrate that the proposed TIF framework enhances unlearning effectiveness while preserving model utility and achieving state-of-the-art results.
https://arxiv.org/abs/2506.03142
Academic Papers
svg
03c4f86d740075082f71c3cfb19fa43bdd26a54cbe5b1f1bd49580bea113d311
2026-01-01T00:00:00-05:00
Contextual Integrity in LLMs via Reasoning and Reinforcement Learning
arXiv:2506.04245v4 Announce Type: replace Abstract: As the era of autonomous agents making decisions on behalf of users unfolds, ensuring contextual integrity (CI) -- what is the appropriate information to share while carrying out a certain task -- becomes a central question to the field. We posit that CI demands a form of reasoning where the agent needs to reason about the context in which it is operating. To test this, we first prompt LLMs to reason explicitly about CI when deciding what information to disclose. We then extend this approach by developing a reinforcement learning (RL) framework that further instills in models the reasoning necessary to achieve CI. Using a synthetic, automatically created, dataset of only $\sim700$ examples but with diverse contexts and information disclosure norms, we show that our method substantially reduces inappropriate information disclosure while maintaining task performance across multiple model sizes and families. Importantly, improvements transfer from this synthetic dataset to established CI benchmarks such as PrivacyLens, which has human annotations and evaluates privacy leakage of AI assistants in actions and tool calls. Our code is available at: https://github.com/EricGLan/CI-RL
https://arxiv.org/abs/2506.04245
Academic Papers
svg
45b1949d9e732f7beb9c120121c1824c348de7cf7a7bacdc12c3d82d6c6d25c2
2026-01-01T00:00:00-05:00
BiTrajDiff: Bidirectional Trajectory Generation with Diffusion Models for Offline Reinforcement Learning
arXiv:2506.05762v3 Announce Type: replace Abstract: Recent advances in offline Reinforcement Learning (RL) have proven that effective policy learning can benefit from imposing conservative constraints on pre-collected datasets. However, such static datasets often exhibit distribution bias, resulting in limited generalizability. To address this limitation, a straightforward solution is data augmentation (DA), which leverages generative models to enrich data distribution. Despite the promising results, current DA techniques focus solely on reconstructing future trajectories from given states, while ignoring the exploration of the historical transitions that reach them. This single-direction paradigm inevitably hinders the discovery of diverse behavior patterns, especially those leading to critical states that may have yielded high-reward outcomes. In this work, we introduce Bidirectional Trajectory Diffusion (BiTrajDiff), a novel DA framework for offline RL that models both future and history trajectories from any intermediate states. Specifically, we decompose the trajectory generation task into two independent yet complementary diffusion processes: one generating forward trajectories to predict future dynamics, and the other generating backward trajectories to trace essential historical transitions. BiTrajDiff can efficiently leverage critical states as anchors to expand into potentially valuable yet underexplored regions of the state space, thereby facilitating dataset diversity. Extensive experiments on the D4RL benchmark suite demonstrate that BiTrajDiff achieves superior performance compared to other advanced DA methods across various offline RL backbones.
https://arxiv.org/abs/2506.05762
Academic Papers
svg
cf6ade97d3910509ad072f5d2dc17230dd7e87175aedfd545bee8e6abdcb9a59
2026-01-01T00:00:00-05:00
Guiding Cross-Modal Representations with MLLM Priors via Preference Alignment
arXiv:2506.06970v3 Announce Type: replace Abstract: Despite Contrastive Language-Image Pretraining (CLIP)'s remarkable capability to retrieve content across modalities, a substantial modality gap persists in its feature space. Intriguingly, we discover that off-the-shelf MLLMs (Multimodal Large Language Models) demonstrate powerful inherent modality alignment properties. While recent MLLM-based retrievers with unified architectures partially mitigate this gap, their reliance on coarse modality alignment mechanisms fundamentally limits their potential. In this work, we introduce MAPLE (Modality-Aligned Preference Learning for Embeddings), a novel framework that leverages the fine-grained alignment priors inherent in MLLMs to guide cross-modal representation learning. MAPLE formulates the learning process as reinforcement learning with two key components: (1) automatic preference data construction using an off-the-shelf MLLM, and (2) a new Relative Preference Alignment (RPA) loss, which adapts Direct Preference Optimization (DPO) to the embedding learning setting. Experimental results show that our preference-guided alignment achieves substantial gains in fine-grained cross-modal retrieval, underscoring its effectiveness in handling nuanced semantic distinctions.
https://arxiv.org/abs/2506.06970
Academic Papers
svg
48ce4ee948180f203def92d7594fab438db061406a659ad0500e58853a38129c
2026-01-01T00:00:00-05:00
Reproducibility in the Control of Autonomous Mobility-on-Demand Systems
arXiv:2506.07345v2 Announce Type: replace Abstract: Autonomous Mobility-on-Demand (AMoD) systems, powered by advances in robotics, control, and Machine Learning (ML), offer a promising paradigm for future urban transportation. AMoD offers fast and personalized travel services by leveraging centralized control of autonomous vehicle fleets to optimize operations and enhance service performance. However, the rapid growth of this field has outpaced the development of standardized practices for evaluating and reporting results, leading to significant challenges in reproducibility. As AMoD control algorithms become increasingly complex and data-driven, a lack of transparency in modeling assumptions, experimental setups, and algorithmic implementation hinders scientific progress and undermines confidence in the results. This paper presents a systematic study of reproducibility in AMoD research. We identify key components across the research pipeline, spanning system modeling, control problems, simulation design, algorithm specification, and evaluation, and analyze common sources of irreproducibility. We survey prevalent practices in the literature, highlight gaps, and propose a structured framework to assess and improve reproducibility. Specifically, concrete guidelines are offered, along with a "reproducibility checklist", to support future work in achieving replicable, comparable, and extensible results. While focused on AMoD, the principles and practices we advocate generalize to a broader class of cyber-physical systems that rely on networked autonomy and data-driven control. This work aims to lay the foundation for a more transparent and reproducible research culture in the design and deployment of intelligent mobility systems.
https://arxiv.org/abs/2506.07345
Academic Papers
svg
2c7f977a17e3dcd159eb6806d4dbe4f496d83830c9b5aebbd6212030b3af5792
2026-01-01T00:00:00-05:00
Toward Robust Legal Text Formalization into Defeasible Deontic Logic using LLMs
arXiv:2506.08899v3 Announce Type: replace Abstract: We present a comprehensive approach to the automated formalization of legal texts using large language models (LLMs), targeting their transformation into Defeasible Deontic Logic (DDL). Our method employs a structured pipeline that segments complex normative language into atomic snippets, extracts deontic rules, and evaluates them for syntactic and semantic coherence. We introduce a refined success metric that more precisely captures the completeness of formalizations, and a novel two-stage pipeline with a dedicated refinement step to improve logical consistency and coverage. The evaluation procedure has been strengthened with stricter error assessment, and we provide comparative results across multiple LLM configurations, including newly released models and various prompting and fine-tuning strategies. Experiments on legal norms from the Australian Telecommunications Consumer Protections Code demonstrate that, when guided effectively, LLMs can produce formalizations that align closely with expert-crafted representations, underscoring their potential for scalable legal informatics.
https://arxiv.org/abs/2506.08899
Academic Papers
svg
8c53cebfa09094384fc1088077d2bea60ef3d4d5206e463c578a33b2a20c9aba
2026-01-01T00:00:00-05:00
Rapid prediction of cardiac activation in the left ventricle with geometric deep learning: a step towards cardiac resynchronization therapy planning
arXiv:2506.08987v3 Announce Type: replace Abstract: Cardiac resynchronization therapy (CRT) is a common intervention for patients with dyssynchronous heart failure, yet approximately one-third of recipients fail to respond, partly due to suboptimal lead placement. Identifying optimal pacing sites remains challenging, largely due to patient-specific anatomical variability and limitations of current individualized planning strategies. In a step toward an in-silico approach, we develop two geometric deep learning models, based on graph neural network (GNN) and geometry-informed neural operator (GINO), to predict activation time maps on left ventricular (LV) geometries in real time. Trained on a large dataset generated from finite-element simulations spanning a wide range of synthetic LV shapes, pacing site configurations, and tissue conductivities, the GINO model outperforms the GNN on synthetic cases (1.38% vs 2.44% error), while both demonstrate comparable performance on real-world LV geometries (GINO: 4.79% vs GNN: 4.07%). Using the trained models, we develop a workflow to identify an optimal pacing site on the LV from a given activation time map and show that both models can robustly recover ground-truth subject-specific parameters from noisy inputs. In conjunction with an interactive web-based interface (https://dcsim.egr.msu.edu/), this study shows potential and motivates future extension toward a clinical decision-support tool for personalized pre-procedural CRT optimization.
https://arxiv.org/abs/2506.08987
Academic Papers
svg
c4a85e35e86fc43d6e248252f09781ffee6557362d4d6043c37e7187b34531ee
2026-01-01T00:00:00-05:00
A Geometric Multigrid Preconditioner for Discontinuous Galerkin Shifted Boundary Method
arXiv:2506.12899v2 Announce Type: replace Abstract: This paper introduces a geometric multigrid preconditioner for the Shifted Boundary Method (SBM) designed to solve PDEs on complex geometries. While SBM simplifies mesh generation by using a non-conforming background grid, it often results in non-symmetric and potentially ill-conditioned linear systems that are challenging to solve efficiently. Standard multigrid methods with pointwise smoothers prove ineffective for such systems due to the localized perturbations introduced by the shifted boundary conditions. To address this challenge, we introduce a Discontinuous Galerkin (DG) formulation for SBM that enables the design of a cell-wise multiplicative smoother within an $hp$-multigrid framework. The element-local nature of DG methods naturally facilitates cell-wise correction, which can effectively handle the local complexities arising from the boundary treatment. Numerical results for the Poisson equation demonstrate favorable performance with mesh refinement for linear ($p=1$) and quadratic ($p=2$) elements in both 2D and 3D, with iteration counts showing mild growth. However, challenges emerge for cubic ($p=3$) elements, particularly in 3D, where the current smoother shows reduced effectiveness.
https://arxiv.org/abs/2506.12899
Academic Papers
svg
130535fc0fc5b3e1b9553a721bb1dce8758a03faef165ddc199fb85ed837929c
2026-01-01T00:00:00-05:00
ChartBlender: An Interactive System for Authoring and Synchronizing Visualization Charts in Video
arXiv:2506.13129v2 Announce Type: replace Abstract: Embedding data visualizations in video can enhance the communication of complex information. However, this process is often labor-intensive, requiring designers to manually adjust visualizations frame by frame. In this work, we present ChartBlender, a novel system that streamlines this process by enabling users to create data visualizations, embed them seamlessly into video scenes, and automatically synchronize them with both camera motion and moving objects. In particular, ChartBlender incorporates a tracking algorithm that supports both object and camera tracking, ensuring robust alignment of visualizations with dynamic video content. To maintain visual clarity and aesthetic coherence, we also explore the design space of video-suited visualizations and develop a library of customizable templates optimized for video embedding. We evaluate ChartBlender through two controlled experiments and expert interviews with five domain experts. Results show that our system enables accurate synchronization and accelerates the production of data-driven videos.
https://arxiv.org/abs/2506.13129
Academic Papers
svg
25237066474128641ff9af8a023ed1743dfa217ef4682fa6b30e9a60f42a1c15
2026-01-01T00:00:00-05:00
A Survey on LLM-Assisted Clinical Trial Recruitment
arXiv:2506.15301v3 Announce Type: replace Abstract: Recent advances in LLMs have greatly improved general-domain NLP tasks. Yet, their adoption in critical domains, such as clinical trial recruitment, remains limited. As trials are designed in natural language and patient data is represented as both structured and unstructured text, the task of matching trials and patients benefits from the knowledge aggregation and reasoning abilities of LLMs. Classical approaches are trial-specific, and LLMs, with their ability to consolidate distributed knowledge, hold the potential to build a more general solution. Yet recent applications of LLM-assisted methods rely on proprietary models and weak evaluation benchmarks. In this survey, we are the first to analyze the task of trial-patient matching and contextualize emerging LLM-based approaches in clinical trial recruitment. We critically examine existing benchmarks, approaches, and evaluation frameworks; the challenges of adopting LLM technologies in clinical research; and exciting future directions.
https://arxiv.org/abs/2506.15301
Academic Papers
svg
39b9d68c4b9935201487a8938cc09a2d5420745e28878c0b22bd032651c650d5
2026-01-01T00:00:00-05:00
Robust Robotic Exploration and Mapping Using Generative Occupancy Map Synthesis
arXiv:2506.20049v2 Announce Type: replace Abstract: We present a novel approach for enhancing robotic exploration by using generative occupancy mapping. We implement SceneSense, a diffusion model designed and trained for predicting 3D occupancy maps given partial observations. Our proposed approach probabilistically fuses these predictions into a running occupancy map in real-time, resulting in significant improvements in map quality and traversability. We deploy SceneSense on a quadruped robot and validate its performance with real-world experiments to demonstrate the effectiveness of the model. In these experiments we show that occupancy maps enhanced with SceneSense predictions better estimate the distribution of our fully observed ground truth data ($24.44\%$ FID improvement around the robot and $75.59\%$ improvement at range). We additionally show that integrating SceneSense enhanced maps into our robotic exploration stack as a ``drop-in'' map improvement, utilizing an existing off-the-shelf planner, results in improvements in robustness and traversability time. Finally, we show results of full exploration evaluations with our proposed system in two dissimilar environments and find that locally enhanced maps provide more consistent exploration results than maps constructed only from direct sensor measurements.
https://arxiv.org/abs/2506.20049
Academic Papers
svg
4d22b841d0080cab989b7436fef67a934b83ebeb73225cdea5778949b5b12af3
2026-01-01T00:00:00-05:00
OmniVCus: Feedforward Subject-driven Video Customization with Multimodal Control Conditions
arXiv:2506.23361v3 Announce Type: replace Abstract: Existing feedforward subject-driven video customization methods mainly study single-subject scenarios due to the difficulty of constructing multi-subject training data pairs. Another challenging and less explored problem is how to use signals such as depth, mask, camera, and text prompts to control and edit the subject in the customized video. In this paper, we first propose a data construction pipeline, VideoCus-Factory, to produce training data pairs for multi-subject customization from raw videos without labels, along with control signals such as depth-to-video and mask-to-video pairs. Based on our constructed data, we develop an Image-Video Transfer Mixed (IVTM) training with image editing data to enable instructive editing for the subject in the customized video. Then we propose a diffusion Transformer framework, OmniVCus, with two embedding mechanisms, Lottery Embedding (LE) and Temporally Aligned Embedding (TAE). LE enables inference with more subjects by using the training subjects to activate more frame embeddings. TAE encourages the generation process to extract guidance from temporally aligned control signals by assigning the same frame embeddings to the control and noise tokens. Experiments demonstrate that our method significantly surpasses state-of-the-art methods in both quantitative and qualitative evaluations. Video demos are at our project page: https://caiyuanhao1998.github.io/project/OmniVCus/. Our code, models, and data are released at https://github.com/caiyuanhao1998/Open-OmniVCus
https://arxiv.org/abs/2506.23361
Academic Papers
svg
6710791ae1bcc3e354086fdfae7993b77673ccefe927c572dc0b4f317f50381b
2026-01-01T00:00:00-05:00
Passage-traversing optimal path planning with sampling-based algorithms
arXiv:2506.23614v2 Announce Type: replace Abstract: This paper introduces a new paradigm of optimal path planning, i.e., passage-traversing optimal path planning (PTOPP), that optimizes paths' traversed passages for specified optimization objectives. In particular, PTOPP is utilized to find the path with optimal accessible free space along its entire length, which represents a basic requirement for paths in robotics. As passages are places where free space shrinks and becomes constrained, the core idea is to leverage the path's passage traversal status to characterize its accessible free space comprehensively. To this end, a novel passage detection and free space decomposition method using proximity graphs is proposed, enabling fast detection of sparse but informative passages and environment decompositions. Based on this preprocessing, optimal path planning with accessible free space objectives or constraints is formulated as PTOPP problems compatible with sampling-based optimal planners. Then, sampling-based algorithms for PTOPP, including their dependent primitive procedures, are developed leveraging partitioned environments for fast passage traversal checks. All these methods are implemented and thoroughly tested for effectiveness and efficiency validation. Compared to existing approaches, such as clearance-based methods, PTOPP demonstrates significant advantages in configurability, solution optimality, and efficiency, addressing the limitations of prior methods. It is believed to provide an efficient and versatile solution to accessible free space optimization over conventional avenues and, more generally, to a broad class of path planning problems that can be formulated as PTOPP.
https://arxiv.org/abs/2506.23614
Academic Papers
svg
a67e1890ecc0ea77b210908418cdae060e0b9099d5de247e7c3e8f3492696c2b
2026-01-01T00:00:00-05:00
Learning from Random Subspace Exploration: Generalized Test-Time Augmentation with Self-supervised Distillation
arXiv:2507.01347v2 Announce Type: replace Abstract: We introduce Generalized Test-Time Augmentation (GTTA), a highly effective method for improving the performance of a trained model, which unlike other existing Test-Time Augmentation approaches from the literature is general enough to be used off-the-shelf for many vision and non-vision tasks, such as classification, regression, image segmentation and object detection. By applying a new general data transformation, which randomly perturbs multiple times the PCA subspace projection of a test input, GTTA creates valid augmented samples from the data distribution with high diversity, properties we theoretically show are essential for a Test-Time Augmentation method to be effective. Different from other existing methods, we also propose a final self-supervised learning stage in which the ensemble output, acting as an unsupervised teacher, is used to train the initial single student model, thus significantly reducing the test-time computational cost. Our comparisons to strong TTA approaches and SoTA models on various well-known vision and non-vision datasets and tasks, such as image classification and segmentation, pneumonia detection, speech recognition and house price prediction, validate the generality of the proposed GTTA. Furthermore, we also prove its effectiveness on the more specific real-world task of salmon segmentation and detection in low-visibility underwater videos, for which we introduce DeepSalmon, the largest dataset of its kind in the literature.
https://arxiv.org/abs/2507.01347
Academic Papers
svg
9f671bf2d95d234fcd262155034e10353540fb2320ae908c43c4a09e3113a370
2026-01-01T00:00:00-05:00
MuRating: A High Quality Data Selecting Approach to Multilingual Large Language Model Pretraining
arXiv:2507.01785v2 Announce Type: replace Abstract: Data quality is a critical driver of large language model performance, yet existing model-based selection methods focus almost exclusively on English. We introduce MuRating, a scalable framework that transfers high-quality English data-quality signals into a single rater for 17 target languages. MuRating aggregates multiple English "raters" via pairwise comparisons to learn unified document-quality scores, then projects these judgments through translation to train a multilingual evaluator on monolingual, cross-lingual, and parallel text pairs. Applied to web data, MuRating selects balanced subsets of English and multilingual content to pretrain a 1.2B-parameter LLaMA model. Compared to strong baselines, including QuRater, AskLLM, DCLM and so on, our approach boosts average accuracy on both English benchmarks and multilingual evaluations, with especially large gains on knowledge-intensive tasks. We further analyze translation fidelity, selection biases, and underrepresentation of narrative material, outlining directions for future work.
https://arxiv.org/abs/2507.01785
Academic Papers
svg
71eb515e332deeb393f3db465e8f282674826a3dc6ae2ea071546d8e894535b7
2026-01-01T00:00:00-05:00
Large Language Model-Driven Closed-Loop UAV Operation with Semantic Observations
arXiv:2507.01930v5 Announce Type: replace Abstract: Recent advances in large language models (LLMs) have revolutionized mobile robots, including unmanned aerial vehicles (UAVs), enabling their intelligent operation within Internet of Things (IoT) ecosystems. However, LLMs still face challenges in logical reasoning and complex decision-making, leading to concerns about the reliability of LLM-driven UAV operations in IoT applications. In this paper, we propose a closed-loop LLM-driven UAV operation code generation framework that enables reliable UAV operations powered by effective feedback and refinement using two LLM modules, i.e., a Code Generator and an Evaluator. Our framework transforms numerical state observations from UAV operations into semantic trajectory descriptions to enhance the evaluator LLM's understanding of UAV dynamics for precise feedback generation. Our framework also enables a simulation-based refinement process, and hence eliminates the risks to physical UAVs caused by incorrect code execution during the refinement. Extensive experiments on UAV control tasks with different complexities are conducted. The experimental results show that our framework can achieve reliable UAV operations using LLMs, significantly outperforming baseline methods in terms of success rate and completeness as task complexity increases.
https://arxiv.org/abs/2507.01930
Academic Papers
svg
21485e3297e6cbaa291c48446fd506b588e891292252bb17b84879998958566b
2026-01-01T00:00:00-05:00
Dynamic Strategy Adaptation in Multi-Agent Environments with Large Language Models
arXiv:2507.02002v4 Announce Type: replace Abstract: Large language models (LLMs) demonstrate strong reasoning abilities across mathematical, strategic, and linguistic tasks, yet little is known about how well they reason in dynamic, real-time, multi-agent scenarios, such as collaborative environments in which agents continuously adapt to each other's behavior, as in cooperative gameplay settings. In this paper, we bridge this gap by combining LLM-driven agents with strategic reasoning and real-time adaptation in cooperative, multi-agent environments grounded in game-theoretic principles such as belief consistency and Nash equilibrium. The proposed framework applies broadly to dynamic scenarios in which agents coordinate, communicate, and make decisions in response to continuously changing conditions. We provide real-time strategy refinement and adaptive feedback mechanisms that enable agents to dynamically adjust policies based on immediate contextual interactions, in contrast to previous efforts that evaluate LLM capabilities in static or turn-based settings. Empirical results show that our method achieves up to a 26\% improvement in return over PPO baselines in high-noise environments, while maintaining real-time latency under 1.05 milliseconds. Our approach improves collaboration efficiency, task completion rates, and flexibility, illustrating that game-theoretic guidance integrated with real-time feedback enhances LLM performance, ultimately fostering more resilient and flexible strategic multi-agent systems.
https://arxiv.org/abs/2507.02002
Academic Papers
svg
0217e3c0bfce7eb36e2af07a00d5b9824464cc95e954c8236c8e7cdd24f83701
2026-01-01T00:00:00-05:00
Probabilistically Tightened Linear Relaxation-based Perturbation Analysis for Neural Network Verification
arXiv:2507.05405v2 Announce Type: replace Abstract: We present $\textbf{P}$robabilistically $\textbf{T}$ightened $\textbf{Li}$near $\textbf{R}$elaxation-based $\textbf{P}$erturbation $\textbf{A}$nalysis ($\texttt{PT-LiRPA}$), a novel framework that combines over-approximation techniques from LiRPA-based approaches with a sampling-based method to compute tight intermediate reachable sets. In detail, we show that, with negligible computational overhead, $\texttt{PT-LiRPA}$, by exploiting the estimated reachable sets, significantly tightens the lower and upper linear bounds of a neural network's output, reducing the computational cost of formal verification tools while providing probabilistic guarantees on verification soundness. Extensive experiments on standard formal verification benchmarks, including the International Verification of Neural Networks Competition, show that our $\texttt{PT-LiRPA}$-based verifier improves robustness certificates, i.e., the certified lower bound of $\varepsilon$ perturbation tolerated by the models, by up to 3.31X and 2.26X compared to related work. Importantly, our probabilistic approach results in a valuable solution for challenging competition entries where state-of-the-art formal verification methods fail, allowing us to provide answers with high confidence (i.e., at least 99%).
https://arxiv.org/abs/2507.05405
Academic Papers
svg
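The PT-LiRPA abstract above hinges on sampling-based estimates of intermediate reachable sets. As a loose illustration only (not the paper's method, which combines such estimates with LiRPA's linear relaxations), one can sample the input perturbation ball, push the samples through a layer, and take per-neuron min/max; the function name and the toy ReLU layer below are invented for the sketch:

```python
import numpy as np

def sampled_reachable_interval(layer_fn, x_center, eps, n_samples=1000, seed=0):
    """Probabilistic per-neuron bounds for one layer via input sampling.

    Draws inputs uniformly from the l_inf ball of radius eps around x_center,
    pushes them through layer_fn, and returns per-neuron (min, max). These
    estimated reachable sets are probabilistic, not sound -- the point is to
    use such estimates to tighten LiRPA-style bounds, not replace them.
    """
    rng = np.random.default_rng(seed)
    xs = x_center + rng.uniform(-eps, eps, size=(n_samples, x_center.shape[0]))
    ys = layer_fn(xs)
    return ys.min(axis=0), ys.max(axis=0)

# Toy ReLU layer y = max(W x, 0) with a hand-picked weight matrix.
W = np.array([[1.0, -1.0],
              [0.5, 2.0]])
relu_layer = lambda xs: np.maximum(xs @ W.T, 0.0)
lo, hi = sampled_reachable_interval(relu_layer, np.zeros(2), eps=1.0)
```

With enough samples the estimated interval approaches the true reachable interval from the inside, which is what makes the bounds probabilistic rather than formally sound.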
6966d7eadd65c8536d274a986f293a45284edf668967718d1c31a9b2798ebd20
2026-01-01T00:00:00-05:00
PERK: Long-Context Reasoning as Parameter-Efficient Test-Time Learning
arXiv:2507.06415v2 Announce Type: replace Abstract: Long-context reasoning requires accurately identifying relevant information in extensive, noisy input contexts. Previous research shows that using test-time learning to encode context directly into model parameters can effectively enable reasoning over noisy information. However, meta-learning methods for enabling test-time learning are prohibitively memory-intensive, preventing their application to long context settings. In this work, we propose PERK (Parameter Efficient Reasoning over Knowledge), a scalable approach for learning to encode long input contexts using gradient updates to a lightweight model adapter at test time. Specifically, PERK employs two nested optimization loops in a meta-training phase. The inner loop rapidly encodes contexts into a low-rank adapter (LoRA) that serves as a parameter-efficient memory module for the base model. Concurrently, the outer loop learns to use the updated adapter to accurately recall and reason over relevant information from the encoded long context. Our evaluations on several long-context reasoning tasks show that PERK significantly outperforms the standard prompt-based long-context baseline, achieving average absolute performance gains of up to 90% for smaller models (GPT-2) and up to 27% for our largest evaluated model, Qwen-2.5-0.5B. In general, PERK is more robust to reasoning complexity, length extrapolation, and the locations of relevant information in contexts. Finally, we show that while PERK is memory-intensive during training, it scales more efficiently at inference time than prompt-based long-context inference.
https://arxiv.org/abs/2507.06415
Academic Papers
svg
043bb593a8f1a8f4520ce2e282afb95249399c81e3b3cd6c443b667435013e08
2026-01-01T00:00:00-05:00
Mathematical artificial data for operator learning
arXiv:2507.06752v2 Announce Type: replace Abstract: Machine learning has emerged as a transformative tool for solving differential equations (DEs), yet prevailing methodologies remain constrained by dual limitations: data-driven methods demand costly labeled datasets while model-driven techniques face efficiency-accuracy trade-offs. We present the Mathematical Artificial Data (MAD) framework, a new paradigm that integrates physical laws with data-driven learning to facilitate large-scale operator discovery. By exploiting DEs' intrinsic mathematical structure to generate physics-embedded analytical solutions and associated synthetic data, MAD fundamentally eliminates dependence on experimental or simulated training data. This enables computationally efficient operator learning across multi-parameter systems while maintaining mathematical rigor. Through numerical demonstrations spanning 2D parametric problems where both the boundary values and source term are functions, we showcase MAD's generalizability and superior efficiency/accuracy across various DE scenarios. This physics-embedded, data-driven framework, with its capacity to handle complex parameter spaces, has the potential to become a universal paradigm for physics-informed machine intelligence in scientific computing.
https://arxiv.org/abs/2507.06752
Academic Papers
svg
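The MAD abstract above rests on generating analytical solutions together with exactly matching synthetic data. A minimal sketch of that idea for the 1D Poisson problem -u'' = f (my toy instance, not the paper's 2D setup): pick u in closed form and derive f symbolically, so a labeled (f, u) operator-learning pair costs no solver time:

```python
import numpy as np

def manufactured_pair(coeffs, x):
    """One 'mathematical artificial data'-style pair for 1D Poisson.

    Pick an analytical solution u(x) = sum_k c_k sin(k*pi*x), which already
    satisfies u(0) = u(1) = 0, and derive the exactly matching source term
    f(x) = -u''(x) = sum_k c_k (k*pi)^2 sin(k*pi*x) in closed form -- no
    PDE solver is needed to label the training pair.
    """
    u = np.zeros_like(x)
    f = np.zeros_like(x)
    for k, c in enumerate(coeffs, start=1):
        u += c * np.sin(k * np.pi * x)
        f += c * (k * np.pi) ** 2 * np.sin(k * np.pi * x)
    return u, f

x = np.linspace(0.0, 1.0, 101)
u, f = manufactured_pair([1.0, 0.5], x)
```

Sampling random coefficient vectors then yields arbitrarily many physics-consistent training pairs, which is the data-generation step an operator learner would consume.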
0c3cd49514ea3bece2d0cee93d1d91024b963e2665972fcf3e37375749d14da6
2026-01-01T00:00:00-05:00
One Graph to Track Them All: Dynamic GNNs for Single- and Multi-View Tracking
arXiv:2507.08494v2 Announce Type: replace Abstract: This work presents a unified, fully differentiable model for multi-people tracking that learns to associate detections into trajectories without relying on pre-computed tracklets. The model builds a dynamic spatiotemporal graph that aggregates spatial, contextual, and temporal information, enabling seamless information propagation across entire sequences. To improve occlusion handling, the graph can also encode scene-specific information. We also introduce a new large-scale dataset with 25 partially overlapping views, detailed scene reconstructions, and extensive occlusions. Experiments show the model achieves state-of-the-art performance on public benchmarks and the new dataset, with flexibility across diverse conditions. Both the dataset and approach will be publicly released to advance research in multi-people tracking.
https://arxiv.org/abs/2507.08494
Academic Papers
svg
b26ac6ce23551b1c66757c7bc55ed68cafbe7e4c49a9967e959f44b3ed183616
2026-01-01T00:00:00-05:00
Lightweight Deep Learning-Based Channel Estimation for RIS-Aided Extremely Large-Scale MIMO Systems on Resource-Limited Edge Devices
arXiv:2507.09627v2 Announce Type: replace Abstract: Next-generation wireless technologies such as 6G aim to meet demanding requirements such as ultra-high data rates, low latency, and enhanced connectivity. Extremely Large-Scale MIMO (XL-MIMO) and Reconfigurable Intelligent Surface (RIS) are key enablers, with XL-MIMO boosting spectral and energy efficiency through numerous antennas, and RIS offering dynamic control over the wireless environment via passive reflective elements. However, realizing their full potential depends on accurate Channel State Information (CSI). Recent advances in deep learning have facilitated efficient cascaded channel estimation. However, the scalability and practical deployment of existing estimation models in XL-MIMO systems remain limited. The growing number of antennas and RIS elements introduces a significant barrier to real-time and efficient channel estimation, drastically increasing data volume, escalating computational complexity, requiring advanced hardware, and resulting in substantial energy consumption. To address these challenges, we propose a lightweight deep learning framework for efficient cascaded channel estimation in XL-MIMO systems, designed to minimize computational complexity and make it suitable for deployment on resource-constrained edge devices. Using spatial correlations in the channel, we introduce a patch-based training mechanism that reduces the dimensionality of input to patch-level representations while preserving essential information, allowing scalable training for large-scale systems. Simulation results under diverse conditions demonstrate that our framework significantly improves estimation accuracy and reduces computational complexity, regardless of the increasing number of antennas and RIS elements in XL-MIMO systems.
https://arxiv.org/abs/2507.09627
Academic Papers
svg
035376205c279c212c3919c525f60df19ce4b90e6568a67eef2984154006538e
2026-01-01T00:00:00-05:00
Compressed data structures for Heegaard splittings
arXiv:2507.11406v2 Announce Type: replace Abstract: Heegaard splittings provide a natural representation of closed 3-manifolds by gluing two handlebodies along a common surface. These splittings can be equivalently given by two finite sets of meridians lying on the surface, which define a Heegaard diagram. We present a data structure to effectively represent Heegaard diagrams as normal curves with respect to triangulations of a surface, where the complexity is measured by the space required to express the normal coordinates' vectors in binary. This structure can be significantly more compact than triangulations of 3-manifolds, yielding exponential gains for certain families. Even with this succinct definition of complexity, we establish polynomial-time algorithms for comparing and manipulating diagrams, performing stabilizations, detecting trivial stabilizations and reductions, and computing topological invariants of the underlying manifolds, such as their fundamental and homology groups. We also contrast early implementations of our techniques with standard software programs for 3-manifolds, achieving faster algorithms for the average cases and exponential gains in speed for some particular presentations of the inputs.
https://arxiv.org/abs/2507.11406
Academic Papers
svg
960f1f91594c49e107da7f6ce6e71ea8e13edbb0389837f2b20812b451cb9909
2026-01-01T00:00:00-05:00
An Ecosystem for Ontology Interoperability
arXiv:2507.12311v5 Announce Type: replace Abstract: Ontology interoperability is one of the complicated issues that restricts the use of ontologies in knowledge graphs (KGs). Different ontologies with conflicting and overlapping concepts make it difficult to design, develop, and deploy an interoperable ontology for downstream tasks. We propose an ecosystem for ontology interoperability. The ecosystem employs three state-of-the-art semantic techniques in different phases of the ontology engineering life cycle: ontology design patterns (ODPs) in the design phase, ontology matching and versioning (OM&OV) in the develop phase, and data-driven ontology validation (DOVA) in the deploy phase, to achieve better ontology interoperability and data integration in real-world applications. A case study of sensor observation in the building domain validates the usefulness of the proposed ecosystem.
https://arxiv.org/abs/2507.12311
Academic Papers
svg
81b0b91952abacdcd9c5895e33dd720518b49fd9d7ce49216580766f0e82cb15
2026-01-01T00:00:00-05:00
Sampling from Gaussian Processes: A Tutorial and Applications in Global Sensitivity Analysis and Optimization
arXiv:2507.14746v2 Announce Type: replace Abstract: High-fidelity simulations and physical experiments are essential for engineering analysis and design, yet their high cost often makes two critical tasks--global sensitivity analysis (GSA) and optimization--prohibitively expensive. This limitation motivates the common use of Gaussian processes (GPs) as proxy regression models that provide uncertainty-aware predictions from a limited number of high-quality observations. GPs naturally enable efficient sampling strategies that support informed decision-making under uncertainty by extracting information from a subset of possible functions for the model of interest. However, direct sampling from GPs is inefficient due to their infinite-dimensional nature and the high cost associated with large covariance matrix operations. Despite their popularity in machine learning and statistics communities, sampling from GPs has received little attention in the community of engineering optimization. In this paper, we present the formulation and detailed implementation of two notable sampling methods--random Fourier features and pathwise conditioning--for generating posterior samples from GPs at reduced computational cost. Alternative approaches are briefly described. Importantly, we detail how the generated samples can be applied in GSA, single-objective optimization, and multi-objective optimization. We show successful applications of these sampling methods through a series of numerical examples.
https://arxiv.org/abs/2507.14746
Academic Papers
svg
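The tutorial abstract above names random Fourier features as one of its two sampling methods. A minimal sketch for drawing an approximate RBF-kernel GP prior sample (hyperparameter values and the function name are illustrative choices, not the tutorial's code): frequencies come from the RBF kernel's spectral density N(0, 1/lengthscale^2), and cost is linear in the number of query points, versus cubic for exact sampling through the covariance matrix:

```python
import numpy as np

def rff_gp_sample(x, lengthscale=0.2, variance=1.0, n_features=500, seed=0):
    """Approximate sample from a zero-mean RBF-kernel GP prior at points x.

    Random Fourier features: draw frequencies from the kernel's spectral
    density and random phases, then weight the cosine features with i.i.d.
    standard normal coefficients. The result is a cheap, globally consistent
    sample path usable inside GSA or Bayesian optimization loops.
    """
    rng = np.random.default_rng(seed)
    omega = rng.normal(0.0, 1.0 / lengthscale, size=n_features)  # frequencies
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)           # phases
    w = rng.normal(size=n_features)                              # weights
    phi = np.sqrt(2.0 * variance / n_features) * np.cos(np.outer(x, omega) + b)
    return phi @ w

x = np.linspace(0.0, 1.0, 50)
f_sample = rff_gp_sample(x)
```

Because the sample is an explicit function of the feature weights, it can be evaluated at new inputs consistently, which is exactly what downstream sensitivity or optimization routines need.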
00d34d71d10f318da5726a5816f01bebe7471e220cc94f68f3b46f860540dcec
2026-01-01T00:00:00-05:00
One Step is Enough: Multi-Agent Reinforcement Learning based on One-Step Policy Optimization for Order Dispatch on Ride-Sharing Platforms
arXiv:2507.15351v2 Announce Type: replace Abstract: Order dispatch is a critical task in ride-sharing systems with Autonomous Vehicles (AVs), directly influencing efficiency and profits. Recently, Multi-Agent Reinforcement Learning (MARL) has emerged as a promising solution to this problem by decomposing the large state and action spaces among individual agents, effectively addressing the Curse of Dimensionality (CoD) in the transportation market, which is caused by the substantial number of vehicles, passengers, and orders. However, conventional MARL-based approaches heavily rely on accurate estimation of the value function, which becomes problematic in large-scale, highly uncertain environments. To address this issue, we propose two novel methods that bypass value function estimation, leveraging the homogeneous property of AV fleets. First, we draw an analogy between AV fleets and groups in Group Relative Policy Optimization (GRPO), adapting it to the order dispatch task. By replacing the Proximal Policy Optimization (PPO) baseline with the group average reward-to-go, GRPO eliminates critic estimation errors and reduces training bias. Inspired by this baseline replacement, we further propose One-Step Policy Optimization (OSPO), demonstrating that the optimal policy can be trained using only one-step group rewards under a homogeneous fleet. Experiments on a real-world ride-hailing dataset show that both GRPO and OSPO achieve promising performance across all scenarios, efficiently optimizing pickup times and the number of served orders using simple Multilayer Perceptron (MLP) networks. Furthermore, OSPO outperforms GRPO in all scenarios, attributed to its elimination of bias caused by the bounded time horizon of GRPO. Our code, trained models, and processed data are provided at https://github.com/RS2002/OSPO .
https://arxiv.org/abs/2507.15351
Academic Papers
svg
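The abstract above replaces PPO's learned critic baseline with the group average reward. A minimal sketch of that critic-free advantage computation for a homogeneous fleet (the standard-deviation normalisation is a common GRPO-style extra, not necessarily the paper's exact formula):

```python
import numpy as np

def group_relative_advantages(rewards, eps=1e-8):
    """Critic-free advantages for a homogeneous group of agents.

    Each agent's advantage is its reward minus the group average, so no
    value function has to be estimated -- the group mean plays the role of
    the learned baseline. Dividing by the group standard deviation is a
    common normalisation on top.
    """
    r = np.asarray(rewards, dtype=float)
    adv = r - r.mean()            # group average replaces the learned critic
    return adv / (r.std() + eps)

# One dispatch round: rewards collected by four homogeneous AV agents.
adv = group_relative_advantages([1.0, 2.0, 3.0, 6.0])
```

These advantages would then weight the usual policy-gradient log-probability terms; the homogeneity of the fleet is what justifies sharing a single group baseline.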
fe6db51ef47e726db4acb4667f8f9330a4ba57049ce8e20df09b06906b566fd7
2026-01-01T00:00:00-05:00
Natural Language Processing for Tigrinya: Current State and Future Directions
arXiv:2507.17974v3 Announce Type: replace Abstract: Despite being spoken by millions of people, Tigrinya remains severely underrepresented in Natural Language Processing (NLP) research. This work presents a comprehensive survey of NLP research for Tigrinya, analyzing over 50 studies from 2011 to 2025. We systematically review the current state of computational resources, models, and applications across fifteen downstream tasks, including morphological processing, part-of-speech tagging, named entity recognition, machine translation, question-answering, speech recognition, and synthesis. Our analysis reveals a clear trajectory from foundational, rule-based systems to modern neural architectures, with progress consistently driven by milestones in resource creation. We identify key challenges rooted in Tigrinya's morphological properties and resource scarcity, and highlight promising research directions, including morphology-aware modeling, cross-lingual transfer, and community-centered resource development. This work serves both as a reference for researchers and as a roadmap for advancing Tigrinya NLP. An anthology of surveyed studies and resources is publicly available.
https://arxiv.org/abs/2507.17974
Academic Papers
svg
28ff3530d62b75f2a1f22c51ef3b952e1191ed1b0c6fe87e89d87f672ecc6d04
2026-01-01T00:00:00-05:00
GestureHYDRA: Semantic Co-speech Gesture Synthesis via Hybrid Modality Diffusion Transformer and Cascaded-Synchronized Retrieval-Augmented Generation
arXiv:2507.22731v2 Announce Type: replace Abstract: While increasing attention has been paid to co-speech gesture synthesis, most previous works neglect to investigate hand gestures with explicit and essential semantics. In this paper, we study co-speech gesture generation with an emphasis on specific hand gesture activation, which can deliver more instructional information than common body movements. To achieve this, we first build a high-quality dataset of 3D human body movements including a set of semantically explicit hand gestures that are commonly used by live streamers. Then we present a hybrid-modality gesture generation system GestureHYDRA built upon a hybrid-modality diffusion transformer architecture with newly designed motion-style injective transformer layers, which enables advanced gesture modeling ability and versatile gesture operations. To guarantee these specific hand gestures can be activated, we introduce a cascaded retrieval-augmented generation strategy built upon a semantic gesture repository annotated for each subject and an adaptive audio-gesture synchronization mechanism, which substantially improves semantic gesture activation and production efficiency. Quantitative and qualitative experiments demonstrate that our proposed approach achieves superior performance over all the counterparts. The project page can be found at https://mumuwei.github.io/GestureHYDRA/.
https://arxiv.org/abs/2507.22731
Academic Papers
svg
883ae0146d9306e13d32db4dd27da7ec3f0f59e819a6aeb0239f4fa17b17b374
2026-01-01T00:00:00-05:00
Learning Network Dismantling Without Handcrafted Inputs
arXiv:2508.00706v2 Announce Type: replace Abstract: The application of message-passing Graph Neural Networks has been a breakthrough for important network science problems. However, the competitive performance often relies on using handcrafted structural features as inputs, which increases computational cost and introduces bias into the otherwise purely data-driven network representations. Here, we eliminate the need for handcrafted features by introducing an attention mechanism and utilizing message-iteration profiles, in addition to an effective algorithmic approach to generate a structurally diverse training set of small synthetic networks. Thereby, we build an expressive message-passing framework and use it to efficiently solve the NP-hard problem of Network Dismantling, virtually equivalent to vital node identification, with significant real-world applications. Trained solely on diversified synthetic networks, our proposed model -- MIND: Message Iteration Network Dismantler -- generalizes to large, unseen real networks with millions of nodes, outperforming state-of-the-art network dismantling methods. Increased efficiency and generalizability of the proposed model can be leveraged beyond dismantling in a range of complex network problems.
https://arxiv.org/abs/2508.00706
Academic Papers
svg
6d29ce2f3cd271fed08e6bc3a64898c14fb818d62a16ceb85baabe0d5ced88fa
2026-01-01T00:00:00-05:00
Multi-step retrieval and reasoning improves radiology question answering with large language models
arXiv:2508.00743v4 Announce Type: replace Abstract: Clinical decision-making in radiology increasingly benefits from artificial intelligence (AI), particularly through large language models (LLMs). However, traditional retrieval-augmented generation (RAG) systems for radiology question answering (QA) typically rely on single-step retrieval, limiting their ability to handle complex clinical reasoning tasks. Here we propose radiology Retrieval and Reasoning (RaR), a multi-step retrieval and reasoning framework designed to improve diagnostic accuracy, factual consistency, and clinical reliability of LLMs in radiology question answering. We evaluated 25 LLMs spanning diverse architectures, parameter scales (0.5B to >670B), and training paradigms (general-purpose, reasoning-optimized, clinically fine-tuned), using 104 expert-curated radiology questions from previously established RSNA-RadioQA and ExtendedQA datasets. To assess generalizability, we additionally tested on an unseen internal dataset of 65 real-world radiology board examination questions. RaR significantly improved mean diagnostic accuracy over zero-shot prompting and conventional online RAG. The greatest gains occurred in small-scale models, while very large models (>200B parameters) demonstrated minimal changes (<2% improvement). Additionally, RaR retrieval reduced hallucinations (mean 9.4%) and retrieved clinically relevant context in 46% of cases, substantially aiding factual grounding. Even clinically fine-tuned models showed gains from RaR (e.g., MedGemma-27B), indicating that retrieval remains beneficial despite embedded domain knowledge. These results highlight the potential of RaR to enhance factuality and diagnostic accuracy in radiology QA, warranting future studies to validate its clinical utility. All datasets, code, and the full RaR framework are publicly available to support open research and clinical translation.
https://arxiv.org/abs/2508.00743
Academic Papers
svg
639e16e34a8fff18e94879e4f0c90c04f53fcae7629c36cbae81ec320c6d1d2e
2026-01-01T00:00:00-05:00
SplatSSC: Decoupled Depth-Guided Gaussian Splatting for Semantic Scene Completion
arXiv:2508.02261v3 Announce Type: replace Abstract: Monocular 3D Semantic Scene Completion (SSC) is a challenging yet promising task that aims to infer dense geometric and semantic descriptions of a scene from a single image. While recent object-centric paradigms significantly improve efficiency by leveraging flexible 3D Gaussian primitives, they still rely heavily on a large number of randomly initialized primitives, which inevitably leads to 1) inefficient primitive initialization and 2) outlier primitives that introduce erroneous artifacts. In this paper, we propose SplatSSC, a novel framework that resolves these limitations with a depth-guided initialization strategy and a principled Gaussian aggregator. Instead of random initialization, SplatSSC utilizes a dedicated depth branch composed of a Group-wise Multi-scale Fusion (GMF) module, which integrates multi-scale image and depth features to generate a sparse yet representative set of initial Gaussian primitives. To mitigate noise from outlier primitives, we develop the Decoupled Gaussian Aggregator (DGA), which enhances robustness by decomposing geometric and semantic predictions during the Gaussian-to-voxel splatting process. Complemented with a specialized Probability Scale Loss, our method achieves state-of-the-art performance on the Occ-ScanNet dataset, outperforming prior approaches by over 6.3% in IoU and 4.1% in mIoU, while reducing both latency and memory cost by more than 9.3%.
https://arxiv.org/abs/2508.02261
Academic Papers
svg
37320b475bf74a9b1c895b458f9c025a3a06f210d33273423894cb5d379c5106
2026-01-01T00:00:00-05:00
BadBlocks: Lightweight and Stealthy Backdoor Threat in Text-to-Image Diffusion Models
arXiv:2508.03221v4 Announce Type: replace Abstract: Diffusion models have recently achieved remarkable success in image generation, yet growing evidence shows their vulnerability to backdoor attacks, where adversaries implant covert triggers to manipulate outputs. While existing defenses can detect many such attacks via visual inspection and neural network-based analysis, we identify a more lightweight and stealthy threat, termed BadBlocks. BadBlocks selectively contaminates specific blocks within the UNet architecture while preserving the normal behavior of the remaining components. Compared with prior methods, it requires only about 30% of the computation and 20% of the GPU time, yet achieves high attack success rates with minimal perceptual degradation. Extensive experiments demonstrate that BadBlocks can effectively evade state-of-the-art defenses, particularly attention-based detection frameworks. Ablation studies further reveal that effective backdoor injection does not require fine-tuning the entire network and highlight the critical role of certain layers in backdoor mapping. Overall, BadBlocks substantially lowers the barrier for backdooring large-scale diffusion models, even on consumer-grade GPUs.
https://arxiv.org/abs/2508.03221
Academic Papers
svg
97528cd59bb316c52fdee4883a0036a9ad17bb81ed02065e6a6d693284155fd7
2026-01-01T00:00:00-05:00
evTransFER: A Transfer Learning Framework for Event-based Facial Expression Recognition
arXiv:2508.03609v2 Announce Type: replace Abstract: Event-based cameras are bio-inspired sensors that asynchronously capture pixel intensity changes with microsecond latency, high temporal resolution, and high dynamic range, providing information on the spatiotemporal dynamics of a scene. We propose evTransFER, a transfer learning-based framework for facial expression recognition using event-based cameras. The main contribution is a feature extractor designed to encode facial spatiotemporal dynamics, built by training an adversarial generative method on facial reconstruction and transferring the encoder weights to the facial expression recognition system. We demonstrate that the proposed transfer learning method improves facial expression recognition compared to training a network from scratch. We propose an architecture that incorporates an LSTM to capture longer-term facial expression dynamics and introduces a new event-based representation called TIE. We evaluated the framework using both the synthetic event-based facial expression database e-CK+ and the real neuromorphic dataset NEFER. On e-CK+, evTransFER achieved a recognition rate of 93.6%, surpassing state-of-the-art methods. For NEFER, which comprises event sequences with real sensor noise and sparse activity, the proposed transfer learning strategy achieved an accuracy of up to 76.7%. On both datasets, the results surpassed current methods as well as models trained from scratch.
https://arxiv.org/abs/2508.03609
Academic Papers
svg
7de00af585deef6c98b69cc597c64ba987e3ab499eb8524a5ea7a7cd7ab90fcf
2026-01-01T00:00:00-05:00
Forgetting: A New Mechanism Towards Better Large Language Model Fine-tuning
arXiv:2508.04329v4 Announce Type: replace Abstract: Supervised fine-tuning (SFT) plays a critical role for pretrained large language models (LLMs), notably enhancing their capacity to acquire domain-specific knowledge while preserving or potentially augmenting their general-purpose capabilities. However, the efficacy of SFT hinges on data quality as well as data volume, otherwise it may result in limited performance gains or even degradation relative to the associated baselines. To mitigate such reliance, we suggest categorizing tokens within each corpus into two parts -- positive and negative tokens -- based on whether they are useful to improve model performance. Positive tokens can be trained in common ways, whereas negative tokens, which may lack essential semantics or be misleading, should be explicitly forgotten. Overall, the token categorization helps the model avoid learning from less informative content, and the forgetting process shapes a knowledge boundary that guides the model on what information to learn more precisely. We conduct experiments across diverse and well-established benchmarks using various model architectures, demonstrating that this forgetting mechanism enhances model performance.
https://arxiv.org/abs/2508.04329
Academic Papers
svg
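The abstract above trains positive tokens normally and explicitly forgets negative ones. One hedged way to write that as a loss (the sign-flipped NLL on negative tokens, the `forget_weight` knob, and the per-group averaging are my assumptions for illustration, not the paper's exact objective):

```python
import numpy as np

def learn_forget_loss(token_nll, positive_mask, forget_weight=1.0):
    """Mix ordinary learning with explicit forgetting at the token level.

    token_nll: per-token negative log-likelihoods from the language model.
    positive_mask: 1 for tokens to learn, 0 for tokens to forget.
    Positive tokens are trained as usual (minimise NLL); negative tokens
    enter with flipped sign, so minimising the total pushes probability
    mass away from them. forget_weight balances the two terms.
    """
    nll = np.asarray(token_nll, dtype=float)
    m = np.asarray(positive_mask, dtype=float)
    learn = (m * nll).sum() / max(m.sum(), 1.0)
    forget = ((1.0 - m) * nll).sum() / max((1.0 - m).sum(), 1.0)
    return learn - forget_weight * forget

# Three tokens: the last one is labelled negative and gets forgotten.
loss = learn_forget_loss([2.0, 1.0, 4.0], [1, 1, 0])
```

In an actual fine-tuning loop the same per-token NLLs would come from the model's logits, and gradients of this scalar drive learning and forgetting simultaneously.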
cf3489a31712d466589f4aa5a98496bd5a9a29f166cfffaff7a0e80c905e5358
2026-01-01T00:00:00-05:00
ITDR: An Instruction Tuning Dataset for Enhancing Large Language Models in Recommendations
arXiv:2508.05667v2 Announce Type: replace Abstract: Large language models (LLMs) have demonstrated outstanding performance in natural language processing tasks. However, in the field of recommender systems, due to the inherent structural discrepancy between user behavior data and natural language, LLMs struggle to effectively model the associations between user preferences and items. Although prompt-based methods can generate recommendation results, their inadequate understanding of recommendation tasks leads to constrained performance. To address this gap, we construct a comprehensive instruction tuning dataset, ITDR, which encompasses seven subtasks across two root tasks: user-item interaction and user-item understanding. The dataset integrates data from 13 public recommendation datasets and is built using manually crafted standardized templates, comprising approximately 200,000 instances. Experimental results demonstrate that ITDR significantly enhances the performance of mainstream open-source LLMs such as GLM-4, Qwen2.5, Qwen2.5-Instruct and LLaMA-3.2 on recommendation tasks. Furthermore, we analyze the correlations between tasks and explore the impact of task descriptions and data scale on instruction tuning effectiveness. Finally, we perform comparative experiments against closed-source LLMs with massive parameters. Our tuning dataset ITDR, the fine-tuned large recommendation models, all LoRA modules, and the complete experimental results are available at https://github.com/hellolzk/ITDR.
https://arxiv.org/abs/2508.05667
Academic Papers
svg
e3e2cd9a6c96904aeabd18eb0c5f083eedb9247627250b81f4d6f413a365313d
2026-01-01T00:00:00-05:00
MCITlib: Multimodal Continual Instruction Tuning Library and Benchmark
arXiv:2508.07307v3 Announce Type: replace Abstract: Continual learning enables AI systems to acquire new knowledge while retaining previously learned information. While traditional unimodal methods have made progress, the rise of Multimodal Large Language Models (MLLMs) brings new challenges in Multimodal Continual Learning (MCL), where models are expected to address both catastrophic forgetting and cross-modal coordination. To advance research in this area, we present MCITlib, a comprehensive library for Multimodal Continual Instruction Tuning. MCITlib currently implements 8 representative algorithms and conducts evaluations on 3 benchmarks under 2 backbone models. The library will be continuously updated to support future developments in MCL. The codebase is released at https://github.com/Ghy0501/MCITlib.
https://arxiv.org/abs/2508.07307
Academic Papers
svg
0a56cca35b52a48cc23eddab37981b7da589198382ce970dd4a2b0b5ef739a43
2026-01-01T00:00:00-05:00
Online Convex Optimization with Heavy Tails: Old Algorithms, New Regrets, and Applications
arXiv:2508.07473v2 Announce Type: replace Abstract: In Online Convex Optimization (OCO), when the stochastic gradient has a finite variance, many algorithms provably work and guarantee a sublinear regret. However, limited results are known if the gradient estimate has a heavy tail, i.e., the stochastic gradient only admits a finite $\mathsf{p}$-th central moment for some $\mathsf{p}\in\left(1,2\right]$. Motivated by this, this work examines several old algorithms for OCO (e.g., Online Gradient Descent) in the more challenging heavy-tailed setting. Under the standard bounded domain assumption, we establish new regrets for these classical methods without any algorithmic modification. Remarkably, these regret bounds are fully optimal in all parameters (can be achieved even without knowing $\mathsf{p}$), suggesting that OCO with heavy tails can be solved effectively without any extra operation (e.g., gradient clipping). Our new results have several applications. A particularly interesting one is the first provable and optimal convergence result for nonsmooth nonconvex optimization under heavy-tailed noise without gradient clipping. Furthermore, we explore broader settings (e.g., smooth OCO) and extend our ideas to optimistic algorithms to handle different cases simultaneously.
https://arxiv.org/abs/2508.07473
Academic Papers
svg
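The abstract above studies classical algorithms such as Online Gradient Descent unchanged, with no clipping. A minimal sketch of projected OGD on a bounded (Euclidean-ball) domain; the lr/sqrt(t) step-size schedule is the textbook choice, not necessarily the paper's:

```python
import numpy as np

def projected_ogd(grads, x0, radius, lr=0.5):
    """Projected Online Gradient Descent on a Euclidean ball.

    x_{t+1} = Proj_{||x|| <= radius}(x_t - (lr / sqrt(t)) * g_t).
    The update is completely standard -- the abstract's point is that this
    unmodified rule already copes with heavy-tailed gradient noise under
    the bounded-domain assumption.
    """
    x = np.asarray(x0, dtype=float)
    iterates = [x.copy()]
    for t, g in enumerate(grads, start=1):
        x = x - (lr / np.sqrt(t)) * np.asarray(g, dtype=float)
        norm = np.linalg.norm(x)
        if norm > radius:                 # project back onto the ball
            x = x * (radius / norm)
        iterates.append(x.copy())
    return iterates

# Two rounds with hand-picked gradients on the unit ball.
its = projected_ogd([[1.0, 0.0], [0.0, 1.0]], x0=[0.0, 0.0], radius=1.0)
```

The projection step is what enforces the bounded domain that the regret analysis relies on; nothing here clips or otherwise preprocesses the gradients.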
44c9143ca924c632fa1527307d749287c8eaa01b215c46abff39721caf523efe
2026-01-01T00:00:00-05:00
Generalising Traffic Forecasting to Regions without Traffic Observations
arXiv:2508.08947v2 Announce Type: replace Abstract: Traffic forecasting is essential for intelligent transportation systems. Accurate forecasting relies on continuous observations collected by traffic sensors. However, due to high deployment and maintenance costs, not all regions are equipped with such sensors. This paper aims to forecast for regions without traffic sensors, where the lack of historical traffic observations challenges the generalisability of existing models. We propose a model named GenCast, the core idea of which is to exploit external knowledge to compensate for the missing observations and to enhance generalisation. We integrate physics-informed neural networks into GenCast, enabling physical principles to regularise the learning process. We introduce an external signal learning module to explore correlations between traffic states and external signals such as weather conditions, further improving model generalisability. Additionally, we design a spatial grouping module to filter localised features that hinder model generalisability. Extensive experiments show that GenCast consistently reduces forecasting errors on multiple real-world datasets.
https://arxiv.org/abs/2508.08947
Academic Papers
svg
cc5b340254726b6f1dd7abb27fad755958e3824748f875ee83352cca5080e069
2026-01-01T00:00:00-05:00
CLF-RL: Control Lyapunov Function Guided Reinforcement Learning
arXiv:2508.09354v2 Announce Type: replace Abstract: Reinforcement learning (RL) has shown promise in generating robust locomotion policies for bipedal robots, but often suffers from tedious reward design and sensitivity to poorly shaped objectives. In this work, we propose a structured reward shaping framework that leverages model-based trajectory generation and control Lyapunov functions (CLFs) to guide policy learning. We explore two model-based planners for generating reference trajectories: a reduced-order linear inverted pendulum (LIP) model for velocity-conditioned motion planning, and a precomputed gait library based on hybrid zero dynamics (HZD) using full-order dynamics. These planners define desired end-effector and joint trajectories, which are used to construct CLF-based rewards that penalize tracking error and encourage rapid convergence. This formulation provides meaningful intermediate rewards, and is straightforward to implement once a reference is available. Both the reference trajectories and CLF shaping are used only during training, resulting in a lightweight policy at deployment. We validate our method both in simulation and through extensive real-world experiments on a Unitree G1 robot. CLF-RL demonstrates significantly improved robustness relative to the baseline RL policy and better performance than a classic tracking reward RL formulation.
https://arxiv.org/abs/2508.09354
Academic Papers
svg
229dfe9ccfb53eda4b42c36c8d90389725352437e96987afd5a64a7e1d697548
2026-01-01T00:00:00-05:00
RAJ-PGA: Reasoning-Activated Jailbreak and Principle-Guided Alignment Framework for Large Reasoning Models
arXiv:2508.12897v2 Announce Type: replace Abstract: Large Reasoning Models (LRMs) face a distinct safety vulnerability: their internal reasoning chains may generate harmful content even when the final output appears benign. To address this overlooked risk, we first propose a novel attack paradigm, Reasoning-Activated Jailbreak (RAJ) via Concretization, which demonstrates that refining malicious prompts to be more specific can trigger step-by-step logical reasoning that overrides the model's safety protocols. To systematically mitigate this vulnerability, we further develop a scalable framework for constructing high-quality safety alignment datasets. This framework first leverages the RAJ attack to elicit challenging harmful reasoning chains from LRMs, then transforms these high-risk traces into safe, constructive, and educational responses through a tailored Principle-Guided Alignment (PGA) mechanism. Using this method, we construct the PGA dataset, a verified alignment dataset of 3,989 samples. Extensive experiments show that fine-tuning LRMs with the PGA dataset significantly enhances model safety, achieving up to a 29.5% improvement in defense success rates across multiple jailbreak benchmarks. Critically, our approach not only defends against sophisticated reasoning-based attacks but also preserves, and even enhances, the model's general reasoning capabilities. This work provides a scalable and effective pathway for safety alignment in reasoning-intensive AI systems, addressing the core trade-off between safety and functional performance.
https://arxiv.org/abs/2508.12897
Academic Papers
svg
a98fda5948bd399d20fd89425d3bc62cfd8b75de875b969754403ece9eb0beed
2026-01-01T00:00:00-05:00
Holistic Evaluation of Multimodal LLMs on Spatial Intelligence
arXiv:2508.13142v5 Announce Type: replace Abstract: Multimodal models have achieved remarkable progress in recent years. Nevertheless, they continue to exhibit notable limitations in spatial understanding and reasoning, the very capability that anchors artificial general intelligence in the physical world. With the recent release of GPT-5, allegedly the most powerful AI model to date, it is timely to examine where the leading models (GPT, Gemini, Grok, Seed, Qwen, and Intern) stand on the path toward spatial intelligence (SI). We thus propose EASI for holistic Evaluation of multimodAl LLMs on Spatial Intelligence. EASI conceptualizes a comprehensive taxonomy of spatial tasks that unifies existing benchmarks and a growing collection of newly curated ones, enabling systematic evaluation of state-of-the-art models. In this report, we conduct the study across eight key benchmarks, at a cost exceeding ten billion total tokens. Our empirical study then reveals that (1) GPT-5 demonstrates unprecedented strength in SI, yet (2) still falls short of human performance significantly across a broad spectrum of SI-tasks. Moreover, we (3) show that SI-tasks expose greater model capability deficiency than non-SI tasks, to the extent that (4) proprietary models do not exhibit a decisive advantage when facing the most difficult ones. In addition, we conduct a qualitative evaluation across a diverse set of scenarios that are intuitive for humans, yet fail the most advanced multimodal models. EASI is an ongoing community effort: we have open-sourced the EASI codebase that provides a one-stop and reproducible solution with standardized interfaces, integrated protocols and prompts that significantly reduce the friction of configuring and running multiple benchmarks; we have also launched an accompanying EASI leaderboard to provide a continually updated snapshot of model performance across the full SI spectrum, accelerating collective progress toward robust SI.
https://arxiv.org/abs/2508.13142
Academic Papers
svg
98cd00bc9ab96d6033373d43b2e6c718a7d316770377d1e23a01bd4c44cfeaef
2026-01-01T00:00:00-05:00
Mamba2 Meets Silence: Robust Vocal Source Separation for Sparse Regions
arXiv:2508.14556v2 Announce Type: replace Abstract: We introduce a new music source separation model tailored for accurate vocal isolation. Unlike Transformer-based approaches, which often fail to capture intermittently occurring vocals, our model leverages Mamba2, a recent state space model, to better capture long-range temporal dependencies. To handle long input sequences efficiently, we combine a band-splitting strategy with a dual-path architecture. Experiments show that our approach outperforms recent state-of-the-art models, achieving a cSDR of 11.03 dB, the best reported to date, and delivering substantial gains in uSDR. Moreover, the model exhibits stable and consistent performance across varying input lengths and vocal occurrence patterns. These results demonstrate the effectiveness of Mamba-based models for high-resolution audio processing and open up new directions for broader applications in audio research.
https://arxiv.org/abs/2508.14556
Academic Papers
svg
1a031f147baad7a8a996f9a827403e558a15a6c2dec41c3cb3de4aad311031ed
2026-01-01T00:00:00-05:00
MedQARo: A Large-Scale Benchmark for Evaluating Large Language Models on Medical Question Answering in Romanian
arXiv:2508.16390v3 Announce Type: replace Abstract: Question answering (QA) is an actively studied topic, being a core natural language processing (NLP) task that needs to be addressed before achieving Artificial General Intelligence (AGI). However, the lack of QA datasets in specific domains and languages hinders the development of robust AI models able to generalize across various domains and languages. To this end, we introduce MedQARo, the first large-scale medical QA benchmark in Romanian, alongside a comprehensive evaluation of state-of-the-art (SOTA) large language models (LLMs). We construct a high-quality and large-scale dataset comprising 105,880 QA pairs related to cancer patients from two medical centers. The questions regard medical case summaries of 1,242 patients, requiring either keyword extraction or reasoning to be answered correctly. MedQARo is the result of a time-consuming manual annotation process carried out by seven physicians specialized in oncology or radiotherapy, who spent a total of about 3,000 work hours to generate the QA pairs. Our benchmark contains both in-domain and cross-domain (cross-center and cross-cancer) test collections, enabling a precise assessment of generalization capabilities. We experiment with four open-source LLMs from distinct families of models on MedQARo. Each model is employed in two scenarios, namely one based on zero-shot prompting and one based on supervised fine-tuning. We also evaluate two state-of-the-art LLMs exposed only through APIs, namely GPT-5.2 and Gemini 3 Flash. Our results show that fine-tuned models significantly outperform zero-shot models, clearly indicating that pretrained models fail to generalize on MedQARo. Our findings demonstrate the importance of both domain-specific and language-specific fine-tuning for reliable clinical QA in Romanian. We publicly release our dataset and code at https://github.com/ana-rogoz/MedQARo.
https://arxiv.org/abs/2508.16390
Academic Papers
svg
2e33b89de9d1c93f67e56254da9d2ea90537e427770e8d4b4321e0ad5bc00272
2026-01-01T00:00:00-05:00
CrystalDiT: A Diffusion Transformer for Crystal Generation
arXiv:2508.16614v3 Announce Type: replace Abstract: We present CrystalDiT, a diffusion transformer for crystal structure generation that achieves state-of-the-art performance by challenging the trend of architectural complexity. Instead of intricate, multi-stream designs, CrystalDiT employs a unified transformer that imposes a powerful inductive bias: treating lattice and atomic properties as a single, interdependent system. Combined with a periodic table-based atomic representation and a balanced training strategy, our approach achieves 8.78% SUN (Stable, Unique, Novel) rate on MP-20, substantially outperforming recent methods including FlowMM (4.21%) and MatterGen (3.66%). Notably, CrystalDiT generates 63.28% unique and novel structures while maintaining comparable stability rates, demonstrating that architectural simplicity can be more effective than complexity for materials discovery. Our results suggest that in data-limited scientific domains, carefully designed simple architectures outperform sophisticated alternatives that are prone to overfitting.
https://arxiv.org/abs/2508.16614
Academic Papers
svg
c6115d5068d65648f663390007ff4e92f65a6ce9df0daedee54980f9223450e9
2026-01-01T00:00:00-05:00
STRelay: A Universal Spatio-Temporal Relaying Framework for Location Prediction over Human Trajectory Data
arXiv:2508.16620v2 Announce Type: replace Abstract: Next location prediction is a critical task in human mobility modeling, enabling applications like travel planning and urban mobility management. Existing methods mainly rely on historical spatiotemporal trajectory data to train sequence models that directly forecast future locations. However, they often overlook the importance of the future spatiotemporal contexts, which are highly informative for the future locations. For example, knowing how much time and distance a user will travel could serve as a critical clue for predicting the user's next location. Against this background, we propose \textbf{STRelay}, a universal \textbf{\underline{S}}patio\textbf{\underline{T}}emporal \textbf{\underline{Relay}}ing framework explicitly modeling the future spatiotemporal context given a human trajectory, to boost the performance of different location prediction models. Specifically, STRelay models future spatiotemporal contexts in a relaying manner, which is subsequently integrated with the encoded historical representation from a base location prediction model, enabling multi-task learning by simultaneously predicting the next time interval, next moving distance interval, and finally the next location. We evaluate STRelay integrated with five state-of-the-art location prediction base models on four real-world trajectory datasets. Results demonstrate that STRelay consistently improves prediction performance across all cases by 2.49\%-11.30\%. Additionally, we find that the future spatiotemporal contexts are particularly helpful for entertainment-related locations and also for user groups who prefer traveling longer distances. The performance gain on such non-daily-routine activities, which often suffer from higher uncertainty, is indeed complementary to the base location prediction models that often excel at modeling regular daily routine patterns.
https://arxiv.org/abs/2508.16620
Academic Papers
svg
db18c2eaff95b52e43d5814a4e09d00af582adb92f98ce8cf8ec5b582bbae3d8
2026-01-01T00:00:00-05:00
RAST: A Retrieval Augmented Spatio-Temporal Framework for Traffic Prediction
arXiv:2508.16623v2 Announce Type: replace Abstract: Traffic prediction is a cornerstone of modern intelligent transportation systems and a critical task in spatio-temporal forecasting. Although advanced Spatio-temporal Graph Neural Networks (STGNNs) and pre-trained models have achieved significant progress in traffic prediction, two key challenges remain: (i) limited contextual capacity when modeling complex spatio-temporal dependencies, and (ii) low predictability at fine-grained spatio-temporal points due to heterogeneous patterns. Inspired by Retrieval-Augmented Generation (RAG), we propose RAST, a universal framework that integrates retrieval-augmented mechanisms with spatio-temporal modeling to address these challenges. Our framework consists of three key designs: 1) Decoupled Encoder and Query Generator to capture decoupled spatial and temporal features and construct a fusion query via residual fusion; 2) Spatio-temporal Retrieval Store and Retrievers to maintain and retrieve vectorized fine-grained patterns; and 3) Universal Backbone Predictor that flexibly accommodates pre-trained STGNNs or simple MLP predictors. Extensive experiments on six real-world traffic networks, including large-scale datasets, demonstrate that RAST achieves superior performance while maintaining computational efficiency.
https://arxiv.org/abs/2508.16623
Academic Papers
svg
e0c2f3b4a89a4d0cdf7f24cec1e2138f5bd29cfb2f68017295a22139dacba392
2026-01-01T00:00:00-05:00
Natural Image Classification via Quasi-Cyclic Graph Ensembles and Random-Bond Ising Models at the Nishimori Temperature
arXiv:2508.18717v2 Announce Type: replace Abstract: Modern multi-class image classification relies on high-dimensional CNN feature vectors, which are computationally expensive and obscure the underlying data geometry. Conventional graph-based classifiers degrade on natural multi-class images because typical graphs fail to preserve separability on feature manifolds with complex topology. We address this with a physics-inspired pipeline: frozen MobileNetV2 embeddings are treated as Ising spins on a sparse Multi-Edge Type QC-LDPC graph, forming a Random Bond Ising Model. The system is tuned to its Nishimori temperature, identified where the smallest Bethe-Hessian eigenvalue vanishes. Our method rests on two innovations. First, we prove a spectral-topological correspondence linking graph trapping sets to invariants via the Ihara-Bass zeta function; removing these structures boosts top-1 accuracy over four-fold in multi-class settings. Second, we develop a quadratic-Newton estimator for the Nishimori temperature, converging in around 9 Arnoldi iterations for a 6-times speedup and enabling spectral embedding at scales like ImageNet-100. The resulting graphs compress 1280-dimensional MobileNetV2 features to 32 dimensions for ImageNet-10 and 64 for ImageNet-100. We achieve 98.7% top-1 accuracy on ImageNet-10 and 84.92% on ImageNet-100 with a three-graph soft ensemble. Versus MobileNetV2, our hard ensemble increases top-1 by 0.1% while cutting FLOPs by 2.67-times; compared to ResNet50, the soft ensemble drops top-1 by only 1.09% yet reduces FLOPs by 29-times. The novelty lies in (a) rigorously linking trapping sets to topological defects, (b) an efficient Nishimori temperature estimator, and (c) demonstrating that topology-guided LDPC embedding produces highly compressed, accurate classifiers for resource-constrained deployment.
https://arxiv.org/abs/2508.18717
Academic Papers
svg
284124fde3d8af27675bbb8696c5c8fb9873597863ce8e95b10e59f308b2717a
2026-01-01T00:00:00-05:00
CVBench: Benchmarking Cross-Video Synergies for Complex Multimodal Reasoning
arXiv:2508.19542v3 Announce Type: replace Abstract: While multimodal large language models (MLLMs) exhibit strong performance on single-video tasks (e.g., video question answering), their capability for spatiotemporal pattern reasoning across multiple videos remains a critical gap in pattern recognition research. However, this capability is essential for real-world applications, including multi-camera surveillance and cross-video procedural learning. To bridge this gap, we present CVBench, the first diagnostic benchmark designed to assess cross-video relational reasoning rigorously. CVBench comprises 1,000 question-answer pairs spanning three hierarchical tiers: cross-video object association (identifying shared entities), cross-video event association (linking temporal or causal event chains), and cross-video complex reasoning (integrating commonsense and domain knowledge). Built from five domain-diverse video clusters (e.g., sports, life records), the benchmark challenges models to analyze and integrate spatiotemporal patterns from dynamic visual streams. We extensively evaluate 10+ leading MLLMs (including GPT-4o, Gemini-2.0-flash, and Qwen2.5-VL) under zero-shot and chain-of-thought prompting paradigms. Key findings reveal stark performance gaps: even top models, such as GPT-4o, achieve only 63.5% accuracy on causal reasoning tasks, compared to 91.3% accuracy for human performance. Crucially, our analysis reveals fundamental bottlenecks inherent in current MLLM architectures, notably deficient inter-video context retention and poor disambiguation of overlapping entities. CVBench establishes a rigorous framework for advancing pattern recognition methodologies in multi-video scenarios, providing architectural insights for next-generation models. The data and evaluation code are available at: https://github.com/Hokhim2/CVBench.
https://arxiv.org/abs/2508.19542
Academic Papers
svg
6d0498aeec78d825931a9388adfb9f902007e91f58d66c7e630c6b690f66f266
2026-01-01T00:00:00-05:00
Towards Operational Validation of LLM-Agent Social Simulations: A Replicated Study of a Reddit-like Technology Forum
arXiv:2508.21740v2 Announce Type: replace Abstract: Large Language Models (LLMs) enable generative social simulations that can capture culturally informed, norm-guided interaction on online social platforms. We build a technology community simulation modeled on Voat, a Reddit-like alt-right news aggregator and discussion platform active from 2014 to 2020. Using the YSocial framework, we seed the simulation with a fixed catalog of technology links sampled from Voat's shared URLs (covering 30+ domains) and calibrate parameters to Voat's v/technology using samples from the MADOC dataset. Agents use a base, uncensored model (Dolphin 3.0, based on Llama 3.1 8B) and concise personas (demographics, political leaning, interests, education, toxicity propensity) to generate posts, replies, and reactions under platform rules for link and text submissions, threaded replies and daily activity cycles. We run a 30-day simulation and evaluate operational validity by comparing distributions and structures with matched Voat data: activity patterns, interaction networks, toxicity, and topic coverage. Results indicate familiar online regularities: similar activity rhythms, heavy-tailed participation, sparse low-clustering interaction networks, core-periphery structure, topical alignment with Voat, and elevated toxicity. Limitations of the current study include the stateless agent design and evaluation based on a single 30-day run, which constrains external validity and variance estimates. The simulation generates realistic discussions, often featuring toxic language, primarily centered on technology topics such as Big Tech and AI. This approach offers a valuable method for examining toxicity dynamics and testing moderation strategies within a controlled environment.
https://arxiv.org/abs/2508.21740
Academic Papers
svg
234cbf658716c77ec97b994914de5b99774e354abf34064fc4fc09937e9672ff
2026-01-01T00:00:00-05:00
Aligned Anchor Groups Guided Line Segment Detector
arXiv:2509.00786v2 Announce Type: replace Abstract: This paper introduces a novel line segment detector, the Aligned Anchor Groups guided Line Segment Detector (AAGLSD), designed to detect line segments from images with high precision and completeness. The algorithm employs a hierarchical approach to extract candidate pixels with different saliency levels, including regular anchors and aligned anchor groups. AAGLSD initiates from these aligned anchor groups, sequentially linking anchors and updating the currently predicted line segment simultaneously. The final predictions are derived through straightforward validation and merging of adjacent line segments, avoiding complex refinement strategies. AAGLSD is evaluated on various datasets and quantitative experiments demonstrate that the proposed method can effectively extract complete line segments from input images compared to other advanced line segment detectors. The implementation is available at https://github.com/zyl0609/AAGLSD.
https://arxiv.org/abs/2509.00786
Academic Papers
svg
329f131c08d574db2af8776b95e5319d731c557d813e4bcb11fd33db5d37bbca
2026-01-01T00:00:00-05:00
Bidirectional Sparse Attention for Faster Video Diffusion Training
arXiv:2509.01085v4 Announce Type: replace Abstract: Video diffusion Transformer (DiT) models excel in generative quality but hit major computational bottlenecks when producing high-resolution, long-duration videos. The quadratic complexity of full attention leads to prohibitively high training and inference costs. Full attention inefficiency stems from two key challenges: excessive computation due to the inherent sparsity of Queries and Key-Value pairs, and redundant computation as fixed sparse patterns fail to leverage DiT's dynamic attention. To overcome this limitation, we propose a Bidirectional Sparse Attention (BSA) framework for faster video DiT training, the first to dynamically sparsify both Queries and Key-Value pairs within 3D full attention, thereby substantially improving training and inference efficiency. BSA addresses these issues through two key components. Query sparsity is optimized by selecting the most informative query tokens via semantic similarity and with a dynamic spatial-time training strategy, while KV sparsity is achieved by computing a statistical dynamic threshold to retain only the most salient KV blocks for computation. Extensive experiments demonstrate that BSA significantly accelerates DiT training across long sequences, reducing FLOPs by up to 20x and achieving 17.79x faster attention training, while preserving or even surpassing the generative quality of full attention.
https://arxiv.org/abs/2509.01085
Academic Papers
svg
e394cf39f1cfb4cc38b9f67a28af4a83470d227e88ef0d3341da79079c756f90
2026-01-01T00:00:00-05:00
Towards Data-Driven Metrics for Social Robot Navigation Benchmarking
arXiv:2509.01251v2 Announce Type: replace Abstract: This paper presents a joint effort towards the development of a data-driven Social Robot Navigation metric to facilitate benchmarking and policy optimization for ground robots. We compiled a dataset with 4427 trajectories -- 182 real and 4245 simulated -- and presented it to human raters, yielding a total of 4402 rated trajectories after data quality assurance. Notably, we provide the first all-encompassing learned social robot navigation metric, along with qualitative and quantitative results, including the test loss achieved, a comparison against hand-crafted metrics, and an ablation study. All data, software, and model weights are publicly available.
https://arxiv.org/abs/2509.01251
Academic Papers
svg
e61834a7b21d3e657a06d57c215673e7bba46796104651465709e77cde7b2bdd
2026-01-01T00:00:00-05:00
In-N-Out: A Parameter-Level API Graph Dataset for Tool Agents
arXiv:2509.01560v3 Announce Type: replace Abstract: Tool agents--LLM-based systems that interact with external APIs--offer a way to execute real-world tasks. However, as tasks become increasingly complex, these agents struggle to identify and call the correct APIs in the proper order. To tackle this problem, we investigate converting API documentation into a structured API graph that captures API dependencies and leveraging it for multi-tool queries that require compositional API calls. To support this, we introduce In-N-Out, the first expert-annotated dataset of API graphs built from two real-world API benchmarks and their documentation. Using In-N-Out significantly improves performance on both tool retrieval and multi-tool query generation, nearly doubling that of LLMs using documentation alone. Moreover, graphs generated by models fine-tuned on In-N-Out close 90% of this gap, showing that our dataset helps models learn to comprehend API documentation and parameter relationships. Our findings highlight the promise of using explicit API graphs for tool agents and the utility of In-N-Out as a valuable resource. We release our dataset and code at https://github.com/holi-lab/In-N-Out-API-Graph.
https://arxiv.org/abs/2509.01560
Academic Papers
svg
c8b034fb147ebf3f610f9148a0ce4640da649b72bd3d1cc94d2eca2b3248c58c
2026-01-01T00:00:00-05:00
Plan Verification for LLM-Based Embodied Task Completion Agents
arXiv:2509.02761v4 Announce Type: replace Abstract: Large language model (LLM) based task plans and corresponding human demonstrations for embodied AI may be noisy, with unnecessary actions, redundant navigation, and logical errors that reduce policy quality. We propose an iterative verification framework in which a Judge LLM critiques action sequences and a Planner LLM applies the revisions, yielding progressively cleaner and more spatially coherent trajectories. Unlike rule-based approaches, our method relies on natural language prompting, enabling broad generalization across error types including irrelevant actions, contradictions, and missing steps. On a set of manually annotated actions from the TEACh embodied AI dataset, our framework achieves up to 90% recall and 100% precision across four state-of-the-art LLMs (GPT o4-mini, DeepSeek-R1, Gemini 2.5, LLaMA 4 Scout). The refinement loop converges quickly, with 96.5% of sequences requiring at most three iterations, while improving both temporal efficiency and spatial action organization. Crucially, the method preserves human error-recovery patterns rather than collapsing them, supporting future work on robust corrective behavior. By establishing plan verification as a reliable LLM capability for spatial planning and action refinement, we provide a scalable path to higher-quality training data for imitation learning in embodied AI.
https://arxiv.org/abs/2509.02761
Academic Papers
svg
255a1955fd4b90098ccb23368064df3ca0bab7d37978783f1452e723f43290ef
2026-01-01T00:00:00-05:00
Hybrid dynamical systems modeling of power systems
arXiv:2509.02822v2 Announce Type: replace Abstract: The increasing integration of renewable energy sources has introduced complex dynamic behavior in power systems that challenges the adequacy of traditional continuous-time modeling approaches. These developments call for modeling frameworks that can capture the intricate interplay between continuous dynamics and discrete events characterizing modern grid operations. Hybrid dynamical systems offer a rigorous foundation for representing such mixed dynamics and have emerged as a valuable tool in power system analysis. Despite their potential, existing studies remain focused on isolated applications or case-specific implementations, offering limited generalizability and guidance for model selection. This paper addresses that gap by providing a comprehensive overview of hybrid modeling approaches relevant to power systems. It critically examines key formalisms, including hybrid automata, switched systems, and piecewise affine models, evaluating their respective strengths, limitations, and suitability across control, stability, and system design tasks. In doing so, the paper identifies open challenges and outlines future research directions to support the systematic application of hybrid methods in renewable-rich, converter-dominated power systems.
https://arxiv.org/abs/2509.02822
Academic Papers
svg
3ff4b024c1ab535e1f62718fa299fe335e0a0194a01ea65a7cc3bca3b5d63b21
2026-01-01T00:00:00-05:00
STSR: High-Fidelity Speech Super-Resolution via Spectral-Transient Context Modeling
arXiv:2509.03913v4 Announce Type: replace Abstract: Speech super-resolution (SR) reconstructs high-fidelity wideband speech from low-resolution inputs, a task that necessitates reconciling global harmonic coherence with local transient sharpness. While diffusion-based generative models yield impressive fidelity, their practical deployment is often stymied by prohibitive computational demands. Conversely, efficient time-domain architectures lack the explicit frequency representations essential for capturing long-range spectral dependencies and ensuring precise harmonic alignment. We introduce STSR, a unified end-to-end framework formulated in the MDCT domain to circumvent these limitations. STSR employs a Spectral-Contextual Attention mechanism that harnesses hierarchical windowing to adaptively aggregate non-local spectral context, enabling consistent harmonic reconstruction up to 48 kHz. Concurrently, a sparse-aware regularization strategy is employed to mitigate the suppression of transient components inherent in compressed spectral representations. STSR consistently outperforms state-of-the-art baselines in both perceptual fidelity and zero-shot generalization, providing a robust, real-time paradigm for high-quality speech restoration.
https://arxiv.org/abs/2509.03913
Academic Papers
svg
e46bbd18e9634e447870c8afffae505c1db0b79e2a07c05e2c1f09dbfc7a2537
2026-01-01T00:00:00-05:00
ACE-RL: Adaptive Constraint-Enhanced Reward for Long-form Generation Reinforcement Learning
arXiv:2509.04903v3 Announce Type: replace Abstract: Long-form generation has become a critical and challenging application for Large Language Models (LLMs). Existing studies are limited by their reliance on scarce, high-quality long-form response data and their focus on coarse-grained, general-purpose metrics (e.g., coherence and helpfulness), overlooking the nuanced, scenario-specific requirements of real-world tasks. To address these limitations, we propose a framework utilizing Adaptive Constraint-Enhanced reward for long-form generation Reinforcement Learning (ACE-RL). ACE-RL first decomposes each instruction into a set of fine-grained, adaptive constraint criteria spanning key dimensions of long-form generation tasks. Subsequently, we design a reward mechanism to quantify the response quality based on their satisfaction over corresponding constraints, converting subjective quality evaluation into constraint verification. Finally, we leverage reinforcement learning to optimize LLMs using these fine-grained signals. Experimental results show that ACE-RL significantly outperforms existing SFT and RL baselines by 18.63% and 7.61% on WritingBench, and our top-performing model even surpasses proprietary systems like GPT-4o by 8.76%, providing a more effective training paradigm in long-form generation scenarios.
https://arxiv.org/abs/2509.04903
Academic Papers
svg
c7d78623771697d67930ea885a8ed0c52df8ea5068b02bbc638d758963c8a1c5
2026-01-01T00:00:00-05:00
Open-sci-ref-0.01: open and reproducible reference baselines for language model and dataset comparison
arXiv:2509.09009v3 Announce Type: replace Abstract: We introduce open-sci-ref, a family of dense transformer models trained as research baselines across multiple model (0.13B to 1.7B parameters) and token scales (up to 1T) on 8 recent open reference datasets. Evaluating the models on various standardized benchmarks, our set of training runs establishes reference points that enable researchers to assess the sanity and quality of alternative training approaches across scales and datasets. Intermediate checkpoints allow comparison and study of the training dynamics. The established reference baselines allow training procedures to be compared through their scaling trends, aligning them on a common compute axis. Comparison of open reference datasets reveals that training on NemoTron-CC HQ consistently outperforms other reference datasets, followed by DCLM-baseline and FineWeb-Edu. In addition to intermediate training checkpoints, the release includes logs, code, and downstream evaluations to simplify reproduction, standardize comparison, and facilitate future research.
https://arxiv.org/abs/2509.09009
Academic Papers
svg
b73b4c56d71f9aceca97457556cc8b7fb928372ff12901712646a38811d4dfca
2026-01-01T00:00:00-05:00
Vital Signs Monitoring with mmWave OFDM JCAS System
arXiv:2509.11767v2 Announce Type: replace Abstract: Wireless techniques for monitoring human vital signs, such as heart and breathing rates, offer a promising solution in the context of joint communication and sensing (JCAS) with applications in medicine, sports, safety, security, and even the military. This paper reports experimental results obtained at the Fraunhofer Institute for Integrated Circuits in Ilmenau, demonstrating the effectiveness of an indoor orthogonal frequency-division multiplexing (OFDM) JCAS system for detecting human heart and breathing rates. The system operated in a bistatic configuration at an FR2 frequency of 26.5 GHz with a variable bandwidth of up to 1 GHz. Measurements were taken under various scenarios, including a subject lying down, sitting, or walking, in both line-of-sight and non-line-of-sight conditions, and with one or two subjects present simultaneously. The results indicate that while vital sign detection is generally feasible, its effectiveness is influenced by several factors, such as the subject's clothing and activity, as well as the distance and angle relative to the sensing system. In addition, no significant influence of bandwidth was detected, since the vital signs information is encoded in the phase of the signal.
https://arxiv.org/abs/2509.11767
Academic Papers
svg
c793344dea503512ac1a9e862ca5abaf7fcaf5df475ad0ab931d91a487f9337f
2026-01-01T00:00:00-05:00
A Novel Compression Framework for YOLOv8: Achieving Real-Time Aerial Object Detection on Edge Devices via Structured Pruning and Channel-Wise Distillation
arXiv:2509.12918v3 Announce Type: replace Abstract: Efficient deployment of deep learning models for aerial object detection on resource-constrained devices requires significant compression without compromising performance. In this study, we propose a novel three-stage compression pipeline for the YOLOv8 object detection model, integrating sparsity-aware training, structured channel pruning, and Channel-Wise Knowledge Distillation (CWD). First, sparsity-aware training introduces dynamic sparsity during model optimization, effectively balancing parameter reduction and detection accuracy. Second, we apply structured channel pruning by leveraging batch normalization scaling factors to eliminate redundant channels, significantly reducing model size and computational complexity. Finally, to mitigate the accuracy drop caused by pruning, we employ CWD to transfer knowledge from the original model, using an adjustable temperature and loss weighting scheme tailored for small and medium object detection. Extensive experiments on the VisDrone dataset demonstrate the effectiveness of our approach across multiple YOLOv8 variants. For YOLOv8m, our method reduces model parameters from 25.85M to 6.85M (a 73.51% reduction), FLOPs from 49.6G to 13.3G, and MACs from 101G to 34.5G, while reducing AP50 by only 2.7%. The resulting compressed model achieves 47.9 AP50 and boosts inference speed from 26 FPS (YOLOv8m baseline) to 45 FPS, enabling real-time deployment on edge devices. We further apply TensorRT as a lightweight optimization step. While this introduces a minor drop in AP50 (from 47.9 to 47.6), it significantly improves inference speed from 45 to 68 FPS, demonstrating the practicality of our approach for high-throughput, resource-constrained scenarios.
https://arxiv.org/abs/2509.12918
Academic Papers
svg
5c8dbcc173a140c9ef4c5136becbaa66f8cb85bcf0ef953b2d24135d86041ef3
2026-01-01T00:00:00-05:00
Towards Privacy-Preserving and Heterogeneity-aware Split Federated Learning via Probabilistic Masking
arXiv:2509.14603v2 Announce Type: replace Abstract: Split Federated Learning (SFL) has emerged as an efficient alternative to traditional Federated Learning (FL) by reducing client-side computation through model partitioning. However, exchanging of intermediate activations and model updates introduces significant privacy risks, especially from data reconstruction attacks that recover original inputs from intermediate representations. Existing defenses using noise injection often degrade model performance. To overcome these challenges, we present PM-SFL, a scalable and privacy-preserving SFL framework that incorporates Probabilistic Mask training to add structured randomness without relying on explicit noise. This mitigates data reconstruction risks while maintaining model utility. To address data heterogeneity, PM-SFL employs personalized mask learning that tailors submodel structures to each client's local data. For system heterogeneity, we introduce a layer-wise knowledge compensation mechanism, enabling clients with varying resources to participate effectively under adaptive model splitting. Theoretical analysis confirms its privacy protection, and experiments on image and wireless sensing tasks demonstrate that PM-SFL consistently improves accuracy, communication efficiency, and robustness to privacy attacks, with particularly strong performance under data and system heterogeneity.
https://arxiv.org/abs/2509.14603
Academic Papers
svg
f10739b5374a4d85933552b210b7c6abef9b08359cd2b274ec0683267237b59e
2026-01-01T00:00:00-05:00
Chunk Based Speech Pre-training with High Resolution Finite Scalar Quantization
arXiv:2509.15579v2 Announce Type: replace Abstract: Low-latency speech human-machine communication is becoming increasingly necessary as speech technology has advanced rapidly over the last decade. One of the primary factors behind the advancement of speech technology is self-supervised learning. Most self-supervised learning algorithms are designed with a full-utterance assumption, and compromises have to be made if partial utterances are presented, which are common in streaming applications. In this work, we propose a chunk based self-supervised learning (Chunk SSL) algorithm as a unified solution for both streaming and offline speech pre-training. Chunk SSL is optimized with the masked prediction loss, and an acoustic encoder is encouraged to restore indices of those masked speech frames with help from unmasked frames in the same chunk and preceding chunks. A copy and append data augmentation approach is proposed to conduct efficient chunk based pre-training. Chunk SSL utilizes a finite scalar quantization (FSQ) module to discretize input speech features, and our study shows a high resolution FSQ codebook, i.e., a codebook with vocabulary size up to a few million, is beneficial for transferring knowledge from the pre-training task to the downstream tasks. A group masked prediction loss is employed during pre-training to alleviate the high memory and computation cost introduced by the large codebook. The proposed approach is examined in two speech to text tasks, i.e., speech recognition and speech translation. Experimental results on the \textsc{Librispeech} and \textsc{Must-C} datasets show that the proposed method achieves very competitive results for speech to text tasks in both streaming and offline modes.
https://arxiv.org/abs/2509.15579
Academic Papers
svg
21dbd1631c8c62256027ff7946e4f47011ada88ee0274a972ae46f8e6c88bd64
2026-01-01T00:00:00-05:00
Personalized Enhanced Federated Multi-View Clustering via Heat-Kernel Tensor Decomposition
arXiv:2509.16101v3 Announce Type: replace Abstract: This paper introduces mathematical frameworks that address the challenges of multi-view clustering in federated learning environments. The objective is to integrate optimization techniques based on new objective functions employing heat-kernel coefficients to replace conventional distance metrics with quantum-inspired measures. The proposed frameworks utilize advanced tensor decomposition methods, specifically PARAFAC2 and Tucker decomposition, to efficiently represent high-dimensional, multi-view data while preserving inter-view relationships. The research has yielded four novel algorithms: an efficient federated kernel multi-view clustering (E-FKMVC) model, FedHK-PARAFAC2, FedHK-Tucker, and FedHK-MVC-Person with PARAFAC2 Decomposition (Personalized FedHK-PARAFAC2). The primary objective of these algorithms is to enhance the efficacy of clustering processes while ensuring confidentiality and efficient communication in federated learning environments. Theoretical analyses of convergence guarantees, privacy bounds, and complexity are provided to validate the effectiveness of the proposed methods. In essence, this paper makes a significant academic contribution to the field of federated multi-view clustering through its innovative integration of mathematical modeling and algorithm design. This approach addresses the critical challenges of data heterogeneity and privacy concerns, paving the way for enhanced data management and analytics in various contexts.
https://arxiv.org/abs/2509.16101
Academic Papers
svg
b5f35651fdd5c7cb7cb630982773a839273ea55e401f518a44fb83becbbab65e
2026-01-01T00:00:00-05:00
Audio Super-Resolution with Latent Bridge Models
arXiv:2509.17609v3 Announce Type: replace Abstract: Audio super-resolution (SR), i.e., upsampling the low-resolution (LR) waveform to the high-resolution (HR) version, has recently been explored with diffusion and bridge models, while previous methods often suffer from sub-optimal upsampling quality due to their uninformative generation prior. Towards high-quality audio super-resolution, we present a new system with latent bridge models (LBMs), where we compress the audio waveform into a continuous latent space and design an LBM to enable a latent-to-latent generation process that naturally matches the LR-to-HR upsampling process, thereby fully exploiting the instructive prior information contained in the LR waveform. To further enhance the training results despite the limited availability of HR samples, we introduce frequency-aware LBMs, where the prior and target frequency are taken as model input, enabling LBMs to explicitly learn an any-to-any upsampling process at the training stage. Furthermore, we design cascaded LBMs and present two prior augmentation strategies, where we make the first attempt to unlock audio upsampling beyond 48 kHz and empower a seamless cascaded SR process, providing higher flexibility for audio post-production. Comprehensive experimental results evaluated on the VCTK, ESC-50, and Song-Describer benchmark datasets and two internal test sets demonstrate that we achieve state-of-the-art objective and perceptual quality for any-to-48kHz SR across speech, audio, and music signals, as well as setting the first record for any-to-192kHz audio SR. Demo at https://AudioLBM.github.io/.
https://arxiv.org/abs/2509.17609
Academic Papers
svg
87e8aebd7e7aad6092cb6b828c2dc0c75828f0e7a2161a5901b11ed0f2caf3fb
2026-01-01T00:00:00-05:00
SiDiaC: Sinhala Diachronic Corpus
arXiv:2509.17912v2 Announce Type: replace Abstract: SiDiaC, the first comprehensive Sinhala Diachronic Corpus, covers a historical span from the 5th to the 20th century CE. SiDiaC comprises 58k words across 46 literary works, annotated carefully based on the written date, after filtering based on availability, authorship, copyright compliance, and data attribution. Texts from the National Library of Sri Lanka were digitised using Google Document AI OCR, followed by post-processing to correct formatting and modernise the orthography. The construction of SiDiaC was informed by practices from other corpora, such as FarPaHC, particularly in syntactic annotation and text normalisation strategies, due to the shared characteristics of low-resourced language status. This corpus is categorised based on genres into two layers: primary and secondary. Primary categorisation is binary, classifying each book into Non-Fiction or Fiction, while the secondary categorisation is more specific, grouping texts under Religious, History, Poetry, Language, and Medical genres. Despite challenges including limited access to rare texts and reliance on secondary date sources, SiDiaC serves as a foundational resource for Sinhala NLP, significantly extending the resources available for Sinhala, enabling diachronic studies in lexical change, neologism tracking, historical syntax, and corpus-based lexicography.
https://arxiv.org/abs/2509.17912
Academic Papers
svg
8777685fce2c4cc2a675bcf9f55cd65362619d9973ea8d55561c9ad42e8986a3
2026-01-01T00:00:00-05:00
Secure and Efficient Access Control for Computer-Use Agents via Context Space
arXiv:2509.22256v3 Announce Type: replace Abstract: Large language model (LLM)-based computer-use agents represent a convergence of AI and OS capabilities, enabling natural language to control system- and application-level functions. However, due to LLMs' inherent uncertainty issues, granting agents control over computers poses significant security risks. When agent actions deviate from user intentions, they can cause irreversible consequences. Existing mitigation approaches, such as user confirmation and LLM-based dynamic action validation, still suffer from limitations in usability, security, and performance. To address these challenges, we propose CSAgent, a system-level, static policy-based access control framework for computer-use agents. To bridge the gap between static policy and dynamic context and user intent, CSAgent introduces intent- and context-aware policies, and provides an automated toolchain to assist developers in constructing and refining them. CSAgent enforces these policies through an optimized OS service, ensuring that agent actions can only be executed under specific user intents and contexts. CSAgent supports protecting agents that control computers through diverse interfaces, including API, CLI, and GUI. We implement and evaluate CSAgent, which successfully defends against more than 99.56% of attacks while introducing only 1.99% performance overhead.
https://arxiv.org/abs/2509.22256
Academic Papers
svg
185e85ffce54c9958955186e839fa44c3cdac3205a1356f45871fd2d7f7df9e8
2026-01-01T00:00:00-05:00
Dynamical feedback control with operator learning for the Vlasov-Poisson system
arXiv:2509.23063v2 Announce Type: replace Abstract: To meet the demands of instantaneous control of instabilities over long time horizons in plasma fusion, we design a dynamic feedback control strategy for the Vlasov-Poisson system by constructing an operator that maps state perturbations to an external control field. In the first part of the paper, we propose learning such an operator using a neural network. Inspired by optimal control theory for linearized dynamics, we introduce a low-rank neural operator architecture and train it via adjoint state method. The resulting controller is effective at suppressing instabilities well beyond the training time horizon. To generalize control across varying initial data, we further introduce a novel cancellation-based control strategy that removes the destabilizing component of the electric field. This approach naturally defines an operator without requiring any training, ensures perturbation decay over infinite time, and demonstrates strong robustness under noisy feedback. Numerical experiments confirm the effectiveness of the method in both one- and multidimensional settings.
https://arxiv.org/abs/2509.23063
Academic Papers
svg
b55a6836b6bc99d70a880de8f87539b2acb83be24ebfa7af050eca0c224073b1
2026-01-01T00:00:00-05:00
Towards Comprehensive Interactive Change Understanding in Remote Sensing: A Large-scale Dataset and Dual-granularity Enhanced VLM
arXiv:2509.23105v2 Announce Type: replace Abstract: Remote sensing change understanding (RSCU) is essential for analyzing remote sensing images and understanding how human activities affect the environment. However, existing datasets lack deep understanding and interactions in the diverse change captioning, counting, and localization tasks. To tackle these gaps, we construct ChangeIMTI, a new large-scale interactive multi-task instruction dataset that encompasses four complementary tasks including change captioning, binary change classification, change counting, and change localization. Building upon this new dataset, we further design a novel vision-guided vision-language model (ChangeVG) with dual-granularity awareness for bi-temporal remote sensing images (i.e., two remote sensing images of the same area at different times). The introduced vision-guided module is a dual-branch architecture that synergistically combines fine-grained spatial feature extraction with high-level semantic summarization. These enriched representations further serve as auxiliary prompts to guide large vision-language models (VLMs) (e.g., Qwen2.5-VL-7B) during instruction tuning, thereby facilitating hierarchical cross-modal learning. We extensively conduct experiments across four tasks to demonstrate the superiority of our approach. Remarkably, on the change captioning task, our method outperforms the strongest method Semantic-CC by 1.39 points on the comprehensive S*m metric, which integrates semantic similarity and descriptive accuracy to provide an overall evaluation of change captions. Moreover, we also perform a series of ablation studies to examine the critical components of our method. The source code and associated data for this work are publicly available on GitHub.
https://arxiv.org/abs/2509.23105
Academic Papers
svg
f95b2f7475ab16d46bd86fc2b30498bdf1242a033fddb252843e53b47a3e4916
2026-01-01T00:00:00-05:00
Unsupervised Online 3D Instance Segmentation with Synthetic Sequences and Dynamic Loss
arXiv:2509.23194v2 Announce Type: replace Abstract: Unsupervised online 3D instance segmentation is a fundamental yet challenging task, as it requires maintaining consistent object identities across LiDAR scans without relying on annotated training data. Existing methods, such as UNIT, have made progress in this direction but remain constrained by limited training diversity, rigid temporal sampling, and heavy dependence on noisy pseudo-labels. We propose a new framework that enriches the training distribution through synthetic point cloud sequence generation, enabling greater diversity without relying on manual labels or simulation engines. To better capture temporal dynamics, our method incorporates a flexible sampling strategy that leverages both adjacent and non-adjacent frames, allowing the model to learn from long-range dependencies as well as short-term variations. In addition, a dynamic-weighting loss emphasizes confident and informative samples, guiding the network toward more robust representations. Through extensive experiments on SemanticKITTI, nuScenes, and PandaSet, our method consistently outperforms UNIT and other unsupervised baselines, achieving higher segmentation accuracy and more robust temporal associations. The code will be publicly available at github.com/Eaphan/SFT3D.
https://arxiv.org/abs/2509.23194
Academic Papers
svg
c0936817609dcd098f38cc5661e690a0410738950cf5386094617796c4ba4744
2026-01-01T00:00:00-05:00
Adversarial Reinforcement Learning Framework for ESP Cheater Simulation
arXiv:2509.24274v2 Announce Type: replace Abstract: Extra-Sensory Perception (ESP) cheats, which reveal hidden in-game information such as enemy locations, are difficult to detect because their effects are not directly observable in player behavior. The lack of observable evidence makes it difficult to collect reliably labeled data, which is essential for training effective anti-cheat systems. Furthermore, cheaters often adapt their behavior by limiting or disguising their cheat usage, which further complicates detection and detector development. To address these challenges, we propose a simulation framework for controlled modeling of ESP cheaters, non-cheaters, and trajectory-based detectors. We model cheaters and non-cheaters as reinforcement learning agents with different levels of observability, while detectors classify their behavioral trajectories. Next, we formulate the interaction between the cheater and the detector as an adversarial game, allowing both players to co-adapt over time. To reflect realistic cheater strategies, we introduce a structured cheater model that dynamically switches between cheating and non-cheating behaviors based on detection risk. Experiments demonstrate that our framework successfully simulates adaptive cheater behaviors that strategically balance reward optimization and detection evasion. This work provides a controllable and extensible platform for studying adaptive cheating behaviors and developing effective cheat detectors.
https://arxiv.org/abs/2509.24274
Academic Papers
svg
50bff6f4f95343fea97bca22d89a2776a19aabbb5773343a852df3025101c970
2026-01-01T00:00:00-05:00
Deep Learning Accelerated Algebraic Multigrid Methods for Polytopal Discretizations of Second-Order Differential Problems
arXiv:2510.01442v2 Announce Type: replace Abstract: Algebraic Multigrid (AMG) methods are state-of-the-art algebraic solvers for partial differential equations. Still, their efficiency depends heavily on the choice of suitable parameters and/or ingredients. Paradigmatic examples include the so-called strong threshold parameter $\theta$, which controls the algebraic coarse-grid hierarchy, as well as the smoother, i.e., the relaxation methods used on the fine grid to damp out high-frequency errors. In AMG, since the coarse grids are constructed algebraically (without geometric intuition), the smoother's performance is even more critical. For the linear systems stemming from polytopal discretizations, such as Polytopal Discontinuous Galerkin (PolyDG) and Virtual Element Methods (VEM), AMG sensitivity to such choices is even more critical due to the significant variability of the underlying meshes, which results in algebraic systems with different sparsity patterns. We propose a novel deep learning approach that automatically tunes the strong threshold parameter, as well as the smoother choice in AMG solvers, for linear systems of equations arising from polytopal discretizations, thereby maximizing AMG performance. We interpret the sparse matrix resulting from polytopal discretization as a grayscale image, and by applying pooling, our neural network extracts compact features that preserve the necessary information at a low computational cost. We test various differential problems in both two- and three-dimensional settings, with heterogeneous coefficients and polygonal/polyhedral meshes, and demonstrate that the proposed approach generalizes well. In practice, we demonstrate that we can reduce AMG solver time by up to $27\%$ with minimal changes to existing PolyDG and VEM codes.
https://arxiv.org/abs/2510.01442
Academic Papers
svg
0e6786e9c54367c7ac4973056a73d9588d7ab762b93e1d8a0bcc8853fe419e89
2026-01-01T00:00:00-05:00
Triple-BERT: Do We Really Need MARL for Order Dispatch on Ride-Sharing Platforms?
arXiv:2510.03257v2 Announce Type: replace Abstract: On-demand ride-sharing platforms, such as Uber and Lyft, face the intricate real-time challenge of bundling and matching passengers-each with distinct origins and destinations-to available vehicles, all while navigating significant system uncertainties. Due to the extensive observation space arising from the large number of drivers and orders, order dispatching, though fundamentally a centralized task, is often addressed using Multi-Agent Reinforcement Learning (MARL). However, independent MARL methods fail to capture global information and exhibit poor cooperation among workers, while Centralized Training Decentralized Execution (CTDE) MARL methods suffer from the curse of dimensionality. To overcome these challenges, we propose Triple-BERT, a centralized Single Agent Reinforcement Learning (SARL) method designed specifically for large-scale order dispatching on ride-sharing platforms. Built on a variant of TD3, our approach addresses the vast action space through an action decomposition strategy that breaks down the joint action probability into individual driver action probabilities. To handle the extensive observation space, we introduce a novel BERT-based network, where parameter reuse mitigates parameter growth as the number of drivers and orders increases, and the attention mechanism effectively captures the complex relationships among the large pool of drivers and orders. We validate our method using a real-world ride-hailing dataset from Manhattan. Triple-BERT achieves approximately an 11.95% improvement over current state-of-the-art methods, with a 4.26% increase in served orders and a 22.25% reduction in pickup times. Our code, trained model parameters, and processed data are publicly available at the repository https://github.com/RS2002/Triple-BERT .
https://arxiv.org/abs/2510.03257
Academic Papers
svg
9e5791e909e86d6091362c3b53022272c48f0dd5a550c08caae7a8d4fc599046
2026-01-01T00:00:00-05:00
Distributed Information Bottleneck Theory for Multi-Modal Task-Aware Semantic Communication
arXiv:2510.04000v3 Announce Type: replace Abstract: Semantic communication shifts the focus from bit-level accuracy to task-relevant semantic delivery, enabling efficient and intelligent communication for next-generation networks. However, existing multi-modal solutions often process all available data modalities indiscriminately, ignoring that their contributions to downstream tasks are often unequal. This not only leads to severe resource inefficiency but also degrades task inference performance due to irrelevant or redundant information. To tackle this issue, we propose a novel task-aware distributed information bottleneck (TADIB) framework, which quantifies the contribution of any set of modalities to given tasks. Based on this theoretical framework, we design a practical coding scheme that intelligently selects and compresses only the most task-relevant modalities at the transmitter. To find the optimal selection and the codecs in the network, we adopt the probabilistic relaxation of discrete selection, enabling distributed encoders to make coordinated decisions with score function estimation and common randomness. Extensive experiments on public datasets demonstrate that our solution matches or surpasses the inference quality of full-modal baselines while significantly reducing communication and computational costs.
https://arxiv.org/abs/2510.04000
Academic Papers
svg
8e634d8d45d6d842e9511b70e47cf55c2080f7d6e1d825a3b084dae8b73fc33a
2026-01-01T00:00:00-05:00
LiRA: A Multi-Agent Framework for Reliable and Readable Literature Review Generation
arXiv:2510.05138v3 Announce Type: replace Abstract: The rapid growth of scientific publications has made it increasingly difficult to keep literature reviews comprehensive and up-to-date. Though prior work has focused on automating retrieval and screening, the writing phase of systematic reviews remains largely under-explored, especially with regard to readability and factual accuracy. To address this, we present LiRA (Literature Review Agents), a multi-agent collaborative workflow which emulates the human literature review process. LiRA utilizes specialized agents for content outlining, subsection writing, editing, and reviewing, producing cohesive and comprehensive review articles. Evaluated on SciReviewGen and a proprietary ScienceDirect dataset, LiRA outperforms current baselines such as AutoSurvey and MASS-Survey in writing and citation quality, while maintaining competitive similarity to human-written reviews. We further evaluate LiRA in real-world scenarios using document retrieval and assess its robustness to reviewer model variation. Our findings highlight the potential of agentic LLM workflows, even without domain-specific tuning, to improve the reliability and usability of automated scientific writing.
https://arxiv.org/abs/2510.05138
Academic Papers
svg
5b5281a8cfda1719dc527916fa7c4a1c7b392880a998170ba1fe20e8060781ba
2026-01-01T00:00:00-05:00
Smoother-type a posteriori error estimates for finite element methods
arXiv:2510.07677v2 Announce Type: replace Abstract: This work develops user-friendly a posteriori error estimates of finite element methods, based on smoothers of linear iterative solvers. The proposed method employs simple smoothers, such as Jacobi or Gauss--Seidel iteration, on an auxiliary finer mesh to process the finite element residual for a posteriori error control. The implementation has linear complexity and requires only a coarse-to-fine prolongation operator. For symmetric problems, we prove the reliability and efficiency of smoother-type error estimators under a saturation assumption. Numerical experiments for various PDEs demonstrate that the proposed smoother-type error estimators outperform residual-type estimators in accuracy and exhibit robustness with respect to parameters and polynomial degrees.
https://arxiv.org/abs/2510.07677
Academic Papers
svg
b2ffe23e24a4a6fafd3dfccda7bc0f860419ef0ac9e81f8428bdf002a0e2d016
2026-01-01T00:00:00-05:00
Large Language Model Sourcing: A Survey
arXiv:2510.10161v2 Announce Type: replace Abstract: Due to the black-box nature of large language models (LLMs) and the realism of their generated content, issues such as hallucinations, bias, unfairness, and copyright infringement have become significant. In this context, sourcing information from multiple perspectives is essential. This survey presents a systematic investigation organized around four interrelated dimensions: Model Sourcing, Model Structure Sourcing, Training Data Sourcing, and External Data Sourcing. Moreover, a unified dual-paradigm taxonomy is proposed that classifies existing sourcing methods into prior-based (proactive traceability embedding) and posterior-based (retrospective inference) approaches. Traceability across these dimensions enhances the transparency, accountability, and trustworthiness of LLMs deployment in real-world applications.
https://arxiv.org/abs/2510.10161
Academic Papers
svg
93f5698176db4b451e4671e5712d444470ad3e4853abe464304347413a9ac86e
2026-01-01T00:00:00-05:00
Bridging the Consistency Gap: Explicit Structured Memory for Interleaved Image-Text Generation
arXiv:2510.10969v3 Announce Type: replace Abstract: Existing Vision Language Models (VLMs) often struggle to preserve logic, entity identity, and artistic style during extended, interleaved image-text interactions. We identify this limitation as "Multimodal Context Drift", which stems from the inherent tendency of implicit neural representations to decay or become entangled over long sequences. To bridge this gap, we propose IUT-Plug, a model-agnostic Neuro-Symbolic Structured State Tracking mechanism. Unlike purely neural approaches that rely on transient attention maps, IUT-Plug introduces the Image Understanding Tree (IUT) as an explicit, persistent memory module. The framework operates by (1) parsing visual scenes into hierarchical symbolic structures (entities, attributes, and relationships); (2) performing incremental state updates to logically lock invariant properties while modifying changing elements; and (3) guiding generation through topological constraints. We evaluate our approach on a novel benchmark comprising 3,000 human-annotated samples. Experimental results demonstrate that IUT-Plug effectively mitigates context drift, achieving significantly higher consistency scores compared to unstructured text-prompting baselines. This confirms that explicit symbolic grounding is essential for maintaining robust long-horizon consistency in multimodal generation.
https://arxiv.org/abs/2510.10969
Academic Papers
svg