Columns:
id: string (64 chars)
published: string (19-25 chars)
title: string (7-262 chars)
description: string (6-54.4k chars)
link: string (31-227 chars)
category: string (6 classes)
image: string (3-247 chars)
cbd1615a232943b902b271ef46e83e9b79b26e616b5cf54ccbd4cd65000a6001
2026-01-23T00:00:00-05:00
Emergence and Evolution of Interpretable Concepts in Diffusion Models
arXiv:2504.15473v2 Announce Type: replace Abstract: Diffusion models have become the go-to method for text-to-image generation, producing high-quality images from pure noise. However, the inner workings of diffusion models are still largely a mystery due to their black-box nature and complex, multi-step generation process. Mechanistic interpretability techniques, such as Sparse Autoencoders (SAEs), have been successful in understanding and steering the behavior of large language models at scale. However, the great potential of SAEs has not yet been applied toward gaining insight into the intricate generative process of diffusion models. In this work, we leverage the SAE framework to probe the inner workings of a popular text-to-image diffusion model, and uncover a variety of human-interpretable concepts in its activations. Interestingly, we find that even before the first reverse diffusion step is completed, the final composition of the scene can be predicted surprisingly well by looking at the spatial distribution of activated concepts. Moreover, going beyond correlational analysis, we design intervention techniques aimed at manipulating image composition and style, and demonstrate that (1) in the early stages of diffusion, image composition can be effectively controlled, (2) in the middle stages, image composition is finalized but stylistic interventions remain effective, and (3) in the final stages, only minor textural details are subject to change.
https://arxiv.org/abs/2504.15473
Academic Papers
svg
f08714de489259f5220ca1142d9dfb21341bd2d0ba2f382fdfbf05868ba8a7a5
2026-01-23T00:00:00-05:00
Boosting Generative Image Modeling via Joint Image-Feature Synthesis
arXiv:2504.16064v3 Announce Type: replace Abstract: Latent diffusion models (LDMs) dominate high-quality image generation, yet integrating representation learning with generative modeling remains a challenge. We introduce a novel generative image modeling framework that seamlessly bridges this gap by leveraging a diffusion model to jointly model low-level image latents (from a variational autoencoder) and high-level semantic features (from a pretrained self-supervised encoder like DINO). Our latent-semantic diffusion approach learns to generate coherent image-feature pairs from pure noise, significantly enhancing both generative quality and training efficiency, all while requiring only minimal modifications to standard Diffusion Transformer architectures. By eliminating the need for complex distillation objectives, our unified design simplifies training and unlocks a powerful new inference strategy: Representation Guidance, which leverages learned semantics to steer and refine image generation. Evaluated in both conditional and unconditional settings, our method delivers substantial improvements in image quality and training convergence speed, establishing a new direction for representation-aware generative modeling. Project page and code: https://representationdiffusion.github.io
https://arxiv.org/abs/2504.16064
Academic Papers
svg
44aaab291b49f53a6a4977d3f6ad0567a3bac245b4a64e33eec057bd753ba918
2026-01-23T00:00:00-05:00
Who Is Responsible? Self-Adaptation Under Multiple Concurrent Uncertainties With Unknown Sources in Complex ROS-Based Systems
arXiv:2504.20477v2 Announce Type: replace Abstract: Robotic systems increasingly operate in dynamic, unpredictable environments, where tightly coupled sensors and software modules increase the probability of a single fault cascading across components and admitting multiple plausible strategies to resolve the underlying uncertainty. Most existing self-adaptive approaches that have been applied to robotics assume predefined one-to-one uncertainty-to-adaptation mappings. We present a ROS2-based self-adaptive approach building upon the MAPE-K feedback loop that addresses (1) multiple simultaneous uncertainties with differing criticality, (2) cascading uncertainties across components, and (3) multiple plausible resolving strategies per detected symptom. Central to our approach is an adaptation rule set which lets designers specify uncertainty patterns, assign criticality levels, and enumerate multiple plausible adaptation strategies. This rule set, combined with an automatically extracted live ROS2 dependency graph, enables lightweight root-cause analysis and strategy ranking to prioritize minimal and effective adaptations. Evaluations on an underwater robot scenario and a perception use case show that our approach can identify root causes among concurrent uncertainties, favours inexpensive adaptations, reduces unnecessary adaptations, and achieves performance comparable to existing baselines designed for sequential uncertainties. The code is publicly available.
https://arxiv.org/abs/2504.20477
Academic Papers
svg
6322f63737bc5bb448b8aad79bb1072dd6c200258561f2435f03d415b487d77a
2026-01-23T00:00:00-05:00
Passing the Buck to AI: How Individuals' Decision-Making Patterns Affect Reliance on AI
arXiv:2505.01537v2 Announce Type: replace Abstract: Psychological research has identified different patterns individuals have while making decisions, such as vigilance (making decisions after thorough information gathering), hypervigilance (rushed and anxious decision-making), and buckpassing (deferring decisions to others). We examine whether these decision-making patterns shape people's likelihood of seeking out or relying on AI. In an online experiment with 810 participants tasked with distinguishing food facts from myths, we found that a higher buckpassing tendency was positively correlated with both seeking out and relying on AI suggestions, while being negatively correlated with the time spent reading AI explanations. In contrast, the higher a participant tended towards vigilance, the more carefully they scrutinized the AI's information, as indicated by an increased time spent looking through the AI's explanations. These findings suggest that a person's decision-making pattern plays a significant role in their adoption and reliance on AI, which provides a new understanding of individual differences in AI-assisted decision-making.
https://arxiv.org/abs/2505.01537
Academic Papers
svg
db4aa719500bfad944cea44342c618efc4011c4d1c11690b75ae437c40953334
2026-01-23T00:00:00-05:00
Adaptively Point-weighting Curriculum Learning
arXiv:2505.01665v2 Announce Type: replace Abstract: Curriculum learning (CL) mimics human learning, in which easy samples are learned first, followed by harder samples, and has become an effective method for training deep networks. However, many existing automatic CL methods maintain a preference for easy samples during the entire training process regardless of the constantly evolving training state. This is just like a human curriculum that fails to provide individualized instruction, which can delay learning progress. To address this issue, we propose an adaptively point-weighting (APW) curriculum learning method that assigns a weight to each training sample based on its training loss. The weighting strategy of APW follows the easy-to-hard training paradigm, guided by the current training state of the network. We present a theoretical analysis of APW, including training effectiveness, training stability, and generalization performance. Experimental results validate these theoretical findings and demonstrate the superiority of the proposed APW method.
https://arxiv.org/abs/2505.01665
Academic Papers
svg
f7fa3888c1c33a65785fa91e50c065e7084b6082933b2589f492aaeb32aac7f7
2026-01-23T00:00:00-05:00
Semantics-Aware Unified Terrestrial Non-Terrestrial 6G Networks
arXiv:2505.01796v2 Announce Type: replace Abstract: The integration of Terrestrial and Non-Terrestrial Networks (TN-NTNs), introduced in 5G, is advancing toward a unified and seamless network of networks in Sixth-Generation (6G). This evolution markedly increases the volume of generated and exchanged data, imposing stringent technical and operational requirements along with higher cost and energy consumption. Consequently, efficient management of data generation and transmission within this unified architecture has become essential. In this article, we investigate semantics-aware information handling in unified TN-NTNs, where data communication between distant TN nodes is enabled via an NTN. We consider an Internet of Things (IoT) monitoring system in which status updates from a remote Energy Harvesting (EH) device are delivered to a destination monitor through a network of Low Earth Orbit (LEO) satellites. We leverage semantic metrics, such as Query Version Age of Information, which collectively capture the timeliness, relevance, and utility of information. This approach minimizes the transmission of stale, uninformative, or unusable information, thereby reducing the volume of data that must be transmitted and processed. The result is a substantial reduction in energy consumption and data exchange within the network, achieving up to 73% lower energy-charging requirements and fewer transmission demands than the state of the art, without compromising the conveyed information.
https://arxiv.org/abs/2505.01796
Academic Papers
svg
7dbd25589b5df1f67d89744c0bd22a7ba659d5f1ecaba22217d7006cd8974c12
2026-01-23T00:00:00-05:00
RADLADS: Rapid Attention Distillation to Linear Attention Decoders at Scale
arXiv:2505.03005v4 Announce Type: replace Abstract: We present Rapid Attention Distillation to Linear Attention Decoders at Scale (RADLADS), a protocol for rapidly converting softmax attention transformers into linear attention decoder models, along with two new RWKV-variant architectures, and models converted from popular Qwen2.5 open source models in 7B, 32B, and 72B sizes. Our conversion process requires only 350-700M tokens, less than 0.005% of the token count used to train the original teacher models. Converting to our 72B linear attention model costs less than \$2,000 USD at today's prices, yet quality at inference remains close to the original transformer. These models achieve state-of-the-art downstream performance across a set of standard benchmarks for linear attention models of their size. We release all our models on HuggingFace under the Apache 2.0 license, with the exception of our 72B models which are also governed by the Qwen License Agreement. Models at https://huggingface.co/collections/recursal/radlads-6818ee69e99e729ba8a87102 Training Code at https://github.com/recursal/RADLADS-paper
https://arxiv.org/abs/2505.03005
Academic Papers
svg
bd03987f77f48d8b450e2fb017239c9be65d1b30d2e014a3addb089eaa38450f
2026-01-23T00:00:00-05:00
Eliminating Out-of-Domain Recommendations in LLM-based Recommender Systems: A Unified View
arXiv:2505.03336v2 Announce Type: replace Abstract: Recommender systems based on Large Language Models (LLMs) are often plagued by hallucinations of out-of-domain (OOD) items. To address this, we propose RecLM, a unified framework that bridges the gap between retrieval and generation by instantiating three grounding paradigms under a single architecture: embedding-based retrieval, constrained generation over rewritten item titles, and discrete item-tokenizer generation. Using the same backbone LLM and prompts, we systematically compare these three views on public benchmarks. RecLM strictly eradicates OOD recommendations (OOD@10 = 0) across all variants, and the constrained generation variants RecLM-cgen and RecLM-token achieve overall state-of-the-art accuracy compared to both strong ID-based and LLM-based baselines. Our unified view provides a systematic basis for comparing three distinct paradigms to reduce item hallucinations, offering a practical framework to facilitate the application of LLMs to recommendation tasks. Source code is at https://github.com/microsoft/RecAI.
https://arxiv.org/abs/2505.03336
Academic Papers
svg
a603140519e449ba041c3c06e42c5823c362fa881888a11c1a462b624de8b809
2026-01-23T00:00:00-05:00
PyTDC: A multimodal machine learning training, evaluation, and inference platform for biomedical foundation models
arXiv:2505.05577v2 Announce Type: replace Abstract: Existing biomedical benchmarks do not provide end-to-end infrastructure for training, evaluation, and inference of models that integrate multimodal biological data and a broad range of machine learning tasks in therapeutics. We present PyTDC, an open-source machine-learning platform providing streamlined training, evaluation, and inference software for multimodal biological AI models. PyTDC unifies distributed, heterogeneous, continuously updated data sources and model weights and standardizes benchmarking and inference endpoints. This paper discusses the components of PyTDC's architecture and, to our knowledge, the first-of-its-kind case study on the introduced single-cell drug-target nomination ML task. We find state-of-the-art methods in graph representation learning and domain-specific methods from graph theory perform poorly on this task. Though we find a context-aware geometric deep learning method that outperforms the evaluated SoTA and domain-specific baseline methods, the model is unable to generalize to unseen cell types or incorporate additional modalities, highlighting PyTDC's capacity to facilitate an exciting avenue of research developing multimodal, context-aware, foundation models for open problems in biomedical AI.
https://arxiv.org/abs/2505.05577
Academic Papers
svg
27e173753a94adce8f6e72cd3c083daf4d225ff894e7bdffcec40c3dfb600230
2026-01-23T00:00:00-05:00
Decoupling Multi-Contrast Super-Resolution: Self-Supervised Implicit Re-Representation for Unpaired Cross-Modal Synthesis
arXiv:2505.05855v2 Announce Type: replace Abstract: Multi-contrast super-resolution (MCSR) is crucial for enhancing MRI, but current deep learning methods are limited. They typically require large, paired low- and high-resolution (LR/HR) training datasets, which are scarce, and are trained for fixed upsampling scales. While recent self-supervised methods remove the paired data requirement, they fail to leverage valuable population-level priors. In this work, we propose a novel, decoupled MCSR framework that resolves both limitations. We reformulate MCSR into two stages: (1) an unpaired cross-modal synthesis (uCMS) module, trained once on unpaired population data to learn a robust anatomical prior; and (2) a lightweight, patient-specific implicit re-representation (IrR) module. This IrR module is optimized in a self-supervised manner to fuse the population prior with the subject's own LR target data. This design uniquely fuses population-level knowledge with patient-specific fidelity without requiring any paired LR/HR or paired cross-modal training data. By building the IrR module on an implicit neural representation, our framework is also inherently scale-agnostic. Our method demonstrates superior quantitative performance on different datasets, with exceptional robustness at extreme scales (16x, 32x), a regime where competing methods fail. Our work presents a data-efficient, flexible, and computationally lightweight paradigm for MCSR, enabling high-fidelity, arbitrary-scale MCSR.
https://arxiv.org/abs/2505.05855
Academic Papers
svg
b2058441fb7a5d77ff4ec91d690c336b3a309bd195e5744fc70b341e1c8026c1
2026-01-23T00:00:00-05:00
A large-scale evaluation of commonsense knowledge in humans and large language models
arXiv:2505.10309v3 Announce Type: replace Abstract: Commonsense knowledge, a major constituent of artificial intelligence (AI), is primarily evaluated in practice by human-prescribed ground-truth labels. An important, albeit implicit, assumption of these labels is that they accurately capture what any human would think, effectively treating human common sense as homogeneous. However, recent empirical work has shown that humans vary enormously in what they consider commonsensical; thus what appears self-evident to one benchmark designer may not be so to another. Here, we propose a method for assessing commonsense knowledge in AI, specifically in large language models (LLMs), that incorporates empirically observed heterogeneity among humans by measuring the correspondence between a model's judgment and that of a human population. We first find that, when treated as independent survey respondents, most LLMs remain below the human median in their individual commonsense competence. Second, when used as simulators of a hypothetical population, LLMs correlate with real humans only modestly in the extent to which they agree on the same set of statements. In both cases, smaller, open-weight models are surprisingly more competitive than larger, proprietary frontier models. Our evaluation framework, which ties commonsense knowledge to its cultural basis, contributes to the growing call for adapting AI models to human collectivities that possess different, often incompatible, social stocks of knowledge.
https://arxiv.org/abs/2505.10309
Academic Papers
svg
40aab95452a71d79f9b6f82558735c8efdabd8fc4fa2cc66387c5511f3524ac0
2026-01-23T00:00:00-05:00
Sense and Sensitivity: Examining the Influence of Semantic Recall on Long Context Code Reasoning
arXiv:2505.13353v3 Announce Type: replace Abstract: Large language models (LLMs) are increasingly deployed for understanding large codebases, but whether they understand operational semantics of long code context or rely on pattern matching shortcuts remains unclear. We distinguish between lexical recall (retrieving code verbatim) and semantic recall (understanding operational semantics). Evaluating 10 state-of-the-art LLMs, we find that while frontier models achieve near-perfect, position-independent lexical recall, semantic recall degrades severely when code is centrally positioned in long contexts. We introduce semantic recall sensitivity to measure whether tasks require understanding of code's operational semantics vs. permit pattern matching shortcuts. Through a novel counterfactual measurement method, we show that models rely heavily on pattern matching shortcuts to solve existing code understanding benchmarks. We propose a new task SemTrace, which achieves high semantic recall sensitivity through unpredictable operations; LLMs' accuracy exhibits severe positional effects, with median accuracy drops of 92.73% versus CRUXEval's 53.36% as the relevant code snippet approaches the middle of the input code context. Our findings suggest current evaluations substantially underestimate semantic recall failures in long context code understanding.
https://arxiv.org/abs/2505.13353
Academic Papers
svg
e5fa97e74061d428797ee3a0bc9753593b274b7ffcfe9fe3e92bb9d367cdd641
2026-01-23T00:00:00-05:00
Multi-View Projection for Unsupervised Domain Adaptation in 3D Semantic Segmentation
arXiv:2505.15545v3 Announce Type: replace Abstract: 3D semantic segmentation plays a pivotal role in autonomous driving and road infrastructure analysis, yet state-of-the-art 3D models are prone to severe domain shift when deployed across different datasets. In this paper, we propose an Unsupervised Domain Adaptation approach where a 3D segmentation model is trained on the target dataset using pseudo-labels generated by a novel multi-view projection framework. Our approach first aligns Lidar scans into coherent 3D scenes and renders them from multiple virtual camera poses to create large-scale synthetic 2D semantic segmentation datasets in various modalities. The generated datasets are used to train an ensemble of 2D segmentation models in point cloud view domain on each modality. During inference, the models process a large amount of views per scene; the resulting logits are back-projected to 3D with a depth-aware voting scheme to generate final point-wise labels. These labels are then used to fine-tune a 3D segmentation model in the target domain. We evaluate our approach Real-to-Real on the nuScenes and SemanticKITTI datasets. We also evaluate it Simulation-to-Real with the SynLidar dataset. Our contributions are a novel method that achieves state-of-the-art results in Real-to-Real Unsupervised Domain Adaptation, and we also demonstrate an application of our method to segment rare classes, for which target 3D annotations are not available, by only using 2D annotations for those classes and leveraging 3D annotations for other classes in a source domain.
https://arxiv.org/abs/2505.15545
Academic Papers
svg
3e86c4d08a47146566f1bc11ef9db1885e79fb7b5533f23fa096092c063bb793
2026-01-23T00:00:00-05:00
CGS-GAN: 3D Consistent Gaussian Splatting GANs for High Resolution Human Head Synthesis
arXiv:2505.17590v3 Announce Type: replace Abstract: Recently, 3D GANs based on 3D Gaussian splatting have been proposed for high quality synthesis of human heads. However, existing methods stabilize training and enhance rendering quality from steep viewpoints by conditioning the random latent vector on the current camera position. This compromises 3D consistency, as we observe significant identity changes when re-synthesizing the 3D head with each camera shift. Conversely, fixing the camera to a single viewpoint yields high-quality renderings for that perspective but results in poor performance for novel views. Removing view-conditioning typically destabilizes GAN training, often causing the training to collapse. In response to these challenges, we introduce CGS-GAN, a novel 3D Gaussian Splatting GAN framework that enables stable training and high-quality 3D-consistent synthesis of human heads without relying on view-conditioning. To ensure training stability, we introduce a multi-view regularization technique that enhances generator convergence with minimal computational overhead. Additionally, we adapt the conditional loss used in existing 3D Gaussian splatting GANs and propose a generator architecture designed to not only stabilize training but also facilitate efficient rendering and straightforward scaling, enabling output resolutions up to $2048^2$. To evaluate the capabilities of CGS-GAN, we curate a new dataset derived from FFHQ. This dataset enables very high resolutions, focuses on larger portions of the human head, reduces view-dependent artifacts for improved 3D consistency, and excludes images where subjects are obscured by hands or other objects. As a result, our approach achieves very high rendering quality, supported by competitive FID scores, while ensuring consistent 3D scene generation. Check out our project page here: https://fraunhoferhhi.github.io/cgs-gan/
https://arxiv.org/abs/2505.17590
Academic Papers
svg
8b928e98b9b2b85a76955501ad438e5e14fef8ec2ddfeed3cea11c74d773edf1
2026-01-23T00:00:00-05:00
GenPO: Generative Diffusion Models Meet On-Policy Reinforcement Learning
arXiv:2505.18763v4 Announce Type: replace Abstract: Recent advances in reinforcement learning (RL) have demonstrated the powerful exploration capabilities and multimodality of generative diffusion-based policies. While substantial progress has been made in offline RL and off-policy RL settings, integrating diffusion policies into on-policy frameworks like PPO remains underexplored. This gap is particularly significant given the widespread use of large-scale parallel GPU-accelerated simulators, such as IsaacLab, which are optimized for on-policy RL algorithms and enable rapid training of complex robotic tasks. A key challenge lies in computing state-action log-likelihoods under diffusion policies, which is straightforward for Gaussian policies but intractable for flow-based models due to irreversible forward-reverse processes and discretization errors (e.g., Euler-Maruyama approximations). To bridge this gap, we propose GenPO, a generative policy optimization framework that leverages exact diffusion inversion to construct invertible action mappings. GenPO introduces a novel doubled dummy action mechanism that enables invertibility via alternating updates, resolving log-likelihood computation barriers. Furthermore, we also use the action log-likelihood for unbiased entropy and KL divergence estimation, enabling KL-adaptive learning rates and entropy regularization in on-policy updates. Extensive experiments on eight IsaacLab benchmarks, including legged locomotion (Ant, Humanoid, Anymal-D, Unitree H1, Go2), dexterous manipulation (Shadow Hand), aerial control (Quadcopter), and robotic arm tasks (Franka), demonstrate GenPO's superiority over existing RL baselines. Notably, GenPO is the first method to successfully integrate diffusion policies into on-policy RL, unlocking their potential for large-scale parallelized training and real-world robotic deployment.
https://arxiv.org/abs/2505.18763
Academic Papers
svg
5996efa4a3b2ca9e0c337cd184ad2c1cd7388aa142ebc60c1ffe0aba23e16bc5
2026-01-23T00:00:00-05:00
BAH Dataset for Ambivalence/Hesitancy Recognition in Videos for Digital Behavioural Change
arXiv:2505.19328v3 Announce Type: replace Abstract: Ambivalence and hesitancy (A/H), a closely related construct, is the primary reason why individuals delay, avoid, or abandon health behaviour changes. It is a subtle and conflicting emotion that sets a person in a state between positive and negative orientations, or between acceptance and refusal to do something. It manifests as a discord in affect between multiple modalities or within a modality, such as facial and vocal expressions, and body language. Although experts can be trained to recognize A/H, as is done for in-person interactions, integrating them into digital health interventions is costly and less effective. Automatic A/H recognition is therefore critical for the personalization and cost-effectiveness of digital behaviour change interventions. However, no dataset currently exists for the design of machine learning models to recognize A/H. This paper introduces the Behavioural Ambivalence/Hesitancy (BAH) dataset collected for multimodal recognition of A/H in videos. It contains 1,427 videos with a total duration of 10.60 hours captured from 300 participants across Canada answering predefined questions to elicit A/H. It is intended to mirror real-world online personalized behaviour change interventions. BAH is annotated by three experts to provide timestamps that indicate where A/H occurs, and frame- and video-level annotations with A/H cues. Video transcripts, cropped and aligned faces, and participants' meta-data are also provided. Since A and H manifest similarly in practice, we provide a binary annotation indicating the presence or absence of A/H. Additionally, this paper includes benchmarking results using baseline models on BAH for frame- and video-level recognition, zero-shot prediction, and personalization using source-free domain adaptation. The data, code, and pretrained weights are available.
https://arxiv.org/abs/2505.19328
Academic Papers
svg
d98062fca5fbee376bcfc45c2e28adef3ad9b97e1a68ad94c31ac4a3ebc00747
2026-01-23T00:00:00-05:00
OccLE: Label-Efficient 3D Semantic Occupancy Prediction
arXiv:2505.20617v4 Announce Type: replace Abstract: 3D semantic occupancy prediction offers an intuitive and efficient scene understanding and has attracted significant interest in autonomous driving perception. Existing approaches either rely on full supervision, which demands costly voxel-level annotations, or on self-supervision, which provides limited guidance and yields suboptimal performance. To address these challenges, we propose OccLE, a label-efficient 3D semantic occupancy prediction method that takes images and LiDAR as inputs and maintains high performance with limited voxel annotations. Our intuition is to decouple the semantic and geometric learning tasks and then fuse the learned feature grids from both tasks for the final semantic occupancy prediction. Therefore, the semantic branch distills a 2D foundation model to provide aligned pseudo labels for 2D and 3D semantic learning. The geometric branch integrates image and LiDAR inputs in cross-plane synergy based on their inherent characteristics, employing semi-supervision to enhance geometry learning. We fuse semantic-geometric feature grids through Dual Mamba and incorporate a scatter-accumulated projection to supervise unannotated prediction with aligned pseudo labels. Experiments show that OccLE achieves competitive performance with only 10\% of voxel annotations on the SemanticKITTI and Occ3D-nuScenes datasets. The code will be publicly released on https://github.com/NerdFNY/OccLE
https://arxiv.org/abs/2505.20617
Academic Papers
svg
5f0461c6081eda84e00e7132fc672ed88549c539ecbc892fcb96124491a17b1e
2026-01-23T00:00:00-05:00
NLP for Social Good: A Survey and Outlook of Challenges, Opportunities, and Responsible Deployment
arXiv:2505.22327v2 Announce Type: replace Abstract: Natural language processing (NLP) now shapes many aspects of our world, yet its potential for positive social impact is underexplored. This paper surveys work in ``NLP for Social Good" (NLP4SG) across nine domains relevant to global development and risk agendas, summarizing principal tasks and challenges. We analyze ACL Anthology trends, finding that inclusion and AI harms attract the most research, while domains such as poverty, peacebuilding, and environmental protection remain underexplored. Guided by our review, we outline opportunities for responsible and equitable NLP and conclude with a call for cross-disciplinary partnerships and human-centered approaches to ensure that future NLP technologies advance the public good.
https://arxiv.org/abs/2505.22327
Academic Papers
svg
3dc197c0f3af1011c35af0e3c6d41ee3e787a4e158ddc845c45e4aaf40380cf3
2026-01-23T00:00:00-05:00
MEDAL: A Framework for Benchmarking LLMs as Multilingual Open-Domain Dialogue Evaluators
arXiv:2505.22777v5 Announce Type: replace Abstract: Evaluating the quality of open-domain chatbots has become increasingly reliant on LLMs acting as automatic judges. However, existing meta-evaluation benchmarks are static, outdated, and lacking in multilingual coverage, limiting their ability to fully capture subtle weaknesses in evaluation. We introduce MEDAL, an automated multi-agent framework for curating more representative and diverse open-domain dialogue evaluation benchmarks. Our approach leverages several state-of-the-art LLMs to generate user-chatbot multilingual dialogues, conditioned on varied seed contexts. Then, a strong LLM (GPT-4.1) is used for a multidimensional analysis of the performance of the chatbots, uncovering noticeable cross-lingual performance differences. Guided by this large-scale evaluation, we curate a new meta-evaluation multilingual benchmark and human-annotate samples with nuanced quality judgments. This benchmark is then used to assess the ability of several reasoning and non-reasoning LLMs to act as evaluators of open-domain dialogues. Using MEDAL, we uncover that state-of-the-art judges fail to reliably detect nuanced issues such as lack of empathy, commonsense, or relevance.
https://arxiv.org/abs/2505.22777
Academic Papers
svg
b825fd06b98c8a460e3aa90100c4031ceda70022bef0cdf16c4f2d3f9d4b338b
2026-01-23T00:00:00-05:00
Skin Lesion Phenotyping via Nested Multi-modal Contrastive Learning
arXiv:2505.23709v2 Announce Type: replace Abstract: We introduce SLIMP (Skin Lesion Image-Metadata Pre-training) for learning rich representations of skin lesions through a novel nested contrastive learning approach that captures complex relationships between images and metadata. Melanoma detection and skin lesion classification based solely on images pose significant challenges due to large variations in imaging conditions (lighting, color, resolution, distance, etc.) and a lack of clinical and phenotypical context. Clinicians typically follow a holistic approach for assessing the risk level of the patient and for deciding which lesions may be malignant and need to be excised, by considering the patient's medical history as well as the appearance of other lesions of the patient. Inspired by this, SLIMP combines the appearance and the metadata of individual skin lesions with patient-level metadata relating to their medical record and other clinically relevant information. By fully exploiting all available data modalities throughout the learning process, the proposed pre-training strategy improves performance compared to other pre-training strategies on downstream skin lesion classification tasks, highlighting the quality of the learned representations.
https://arxiv.org/abs/2505.23709
Academic Papers
svg
c7cf510beecf589b142bc4b2f0fc08bd44501ca8ea016ed18ed0634e84769664
2026-01-23T00:00:00-05:00
R-KV: Redundancy-aware KV Cache Compression for Reasoning Models
arXiv:2505.24133v4 Announce Type: replace Abstract: Reasoning models have demonstrated impressive performance in self-reflection and chain-of-thought reasoning. However, they often produce excessively long outputs, leading to prohibitively large key-value (KV) caches during inference. While chain-of-thought inference significantly improves performance on complex reasoning tasks, it can also lead to reasoning failures when deployed with existing KV cache compression approaches. To address this, we propose Redundancy-aware KV Cache Compression for Reasoning models (R-KV), a novel method specifically targeting redundant tokens in reasoning models. Our method preserves nearly 100% of the full KV cache performance using only 10% of the KV cache, substantially outperforming existing KV cache baselines, which reach only 60% of the performance. Remarkably, R-KV even achieves 105% of full KV cache performance with 16% of the KV cache. This KV-cache reduction also leads to a 90% memory saving and a 6.6X throughput improvement over standard chain-of-thought inference. Experimental results show that R-KV consistently outperforms existing KV cache compression baselines across two mathematical reasoning datasets.
https://arxiv.org/abs/2505.24133
Academic Papers
svg
3194a2841aa53f2e041d3d7f05ceeb15bf20d87684c5dc917aa682a2f9827a86
2026-01-23T00:00:00-05:00
MMSU: A Massive Multi-task Spoken Language Understanding and Reasoning Benchmark
arXiv:2506.04779v2 Announce Type: replace Abstract: Speech inherently contains rich acoustic information that extends far beyond the textual language. In real-world spoken language understanding, effective interpretation often requires integrating semantic meaning (e.g., content), paralinguistic features (e.g., emotions, speed, pitch) and phonological characteristics (e.g., prosody, intonation, rhythm), which are embedded in speech. While recent multimodal Speech Large Language Models (SpeechLLMs) have demonstrated remarkable capabilities in processing audio information, their ability to perform fine-grained perception and complex reasoning in natural speech remains largely unexplored. To address this gap, we introduce MMSU, a comprehensive benchmark designed specifically for understanding and reasoning in spoken language. MMSU comprises 5,000 meticulously curated audio-question-answer triplets across 47 distinct tasks. To ground our benchmark in linguistic theory, we systematically incorporate a wide range of linguistic phenomena, including phonetics, prosody, rhetoric, syntactics, semantics, and paralinguistics. Through a rigorous evaluation of 14 advanced SpeechLLMs, we identify substantial room for improvement in existing models, highlighting meaningful directions for future optimization. MMSU establishes a new standard for comprehensive assessment of spoken language understanding, providing valuable insights for developing more sophisticated human-AI speech interaction systems. MMSU benchmark is available at https://huggingface.co/datasets/ddwang2000/MMSU. Evaluation Code is available at https://github.com/dingdongwang/MMSU_Bench.
https://arxiv.org/abs/2506.04779
Academic Papers
svg
b02145fdb782c07ab510e87aa1e36fbe050d1abf1f485da0d4c6d051edb0cf79
2026-01-23T00:00:00-05:00
How malicious AI swarms can threaten democracy: The fusion of agentic AI and LLMs marks a new frontier in information warfare
arXiv:2506.06299v4 Announce Type: replace Abstract: Advances in AI offer the prospect of manipulating beliefs and behaviors on a population-wide level. Large language models and autonomous agents now let influence campaigns reach unprecedented scale and precision. Generative tools can expand propaganda output without sacrificing credibility and inexpensively create falsehoods that are rated as more human-like than those written by humans. Techniques meant to refine AI reasoning, such as chain-of-thought prompting, can just as effectively be used to generate more convincing falsehoods. Enabled by these capabilities, a disruptive threat is emerging: swarms of collaborative, malicious AI agents. Fusing LLM reasoning with multi-agent architectures, these systems are capable of coordinating autonomously, infiltrating communities, and fabricating consensus efficiently. By adaptively mimicking human social dynamics, they threaten democracy. Because the resulting harms stem from design, commercial incentives, and governance, we prioritize interventions at multiple leverage points, focusing on pragmatic mechanisms over voluntary compliance.
https://arxiv.org/abs/2506.06299
Academic Papers
svg
f0acaa2587f540d37dc52c27988684132f0febd47096a46581dd9fae67b5e83a
2026-01-23T00:00:00-05:00
The PML method for calculating the propagative wave numbers of electromagnetic wave in periodic structures
arXiv:2506.07084v2 Announce Type: replace Abstract: When an electromagnetic wave is incident on a periodic structure, in addition to the scattering field, guided modes traveling in the periodic medium can be generated. In the present paper, we study the calculation of these guided modes. We formulate the problem as a nonlinear eigenvalue problem in an unbounded periodic domain. We then use perfectly matched layers to truncate the unbounded domain, recast the problem as a quadratic eigenvalue problem, and prove the approximation property of the truncation. Finally, we reformulate the quadratic eigenvalue problem as a generalized eigenvalue problem, use the finite element method to discretize the truncated problem, and present numerical examples that verify the theoretical results.
https://arxiv.org/abs/2506.07084
Academic Papers
svg
e031fa87d9c50dd78f83b0c82c7d901424e7a9c280001e2a96c5eb1022236bdb
2026-01-23T00:00:00-05:00
VIKI-R: Coordinating Embodied Multi-Agent Cooperation via Reinforcement Learning
arXiv:2506.09049v3 Announce Type: replace Abstract: Coordinating multiple embodied agents in dynamic environments remains a core challenge in artificial intelligence, requiring both perception-driven reasoning and scalable cooperation strategies. While recent works have leveraged large language models (LLMs) for multi-agent planning, a few have begun to explore vision-language models (VLMs) for visual reasoning. However, these VLM-based approaches remain limited in their support for diverse embodiment types. In this work, we introduce VIKI-Bench, the first hierarchical benchmark tailored for embodied multi-agent cooperation, featuring three structured levels: agent activation, task planning, and trajectory perception. VIKI-Bench includes diverse robot embodiments, multi-view visual observations, and structured supervision signals to evaluate reasoning grounded in visual inputs. To demonstrate the utility of VIKI-Bench, we propose VIKI-R, a two-stage framework that fine-tunes a pretrained vision-language model (VLM) using Chain-of-Thought annotated demonstrations, followed by reinforcement learning under multi-level reward signals. Our extensive experiments show that VIKI-R significantly outperforms baseline methods across all task levels. Furthermore, we show that reinforcement learning enables the emergence of compositional cooperation patterns among heterogeneous agents. Together, VIKI-Bench and VIKI-R offer a unified testbed and method for advancing multi-agent, vision-driven cooperation in embodied AI systems.
https://arxiv.org/abs/2506.09049
Academic Papers
svg
5efbfc07f195f9a62ac460d29250e7b52091fb04bb40d088bf8bc6327de850b9
2026-01-23T00:00:00-05:00
EmbedAgent: Benchmarking Large Language Models in Embedded System Development
arXiv:2506.11003v2 Announce Type: replace Abstract: Large Language Models (LLMs) have shown promise in various tasks, yet few benchmarks assess their capabilities in embedded system development. In this paper, we introduce EmbedAgent, a paradigm designed to simulate real-world roles in embedded system development, such as Embedded System Programmer, Architect, and Integrator. This paradigm enables LLMs to be tested in tasks that bridge the gap between digital and physical systems, allowing for a more comprehensive assessment of their capabilities. To evaluate LLMs on these tasks, we propose Embedbench, the first comprehensive benchmark for embedded system programming, circuit design, and cross-platform migration. Embedbench consists of 126 cases, covering 9 electronic components across 3 hardware platforms. Through extensive experiments on 10 mainstream LLMs, we uncover several key findings. Surprisingly, despite the simplicity of the cases, DeepSeek-R1 achieves only a 55.6% pass@1 rate when provided with schematic information, and 50.0% when tasked with generating the schematics itself. In the cross-platform migration tasks, LLMs show relatively strong performance with MicroPython on the Raspberry Pi Pico (with the top model achieving 73.8% pass@1), but perform poorly on ESP-IDF, where the best model reaches only 29.4% pass@1. Interestingly, we observe that general-purpose chat LLMs like DeepSeek-V3 often fail to utilize relevant pre-trained knowledge in this domain, while reasoning LLMs tend to overthink and overlook efficient knowledge acquired during pretraining. Based on these insights, we propose two strategies, retrieval-augmented generation and compiler feedback, to enhance LLM performance. These strategies result in significant improvements, with DeepSeek-R1 reaching a 65.1% pass@1 with correct schematics, and 53.1% without. Additionally, the accuracy of the Arduino to ESP32 migration task improves from 21.4% to 27.8%.
https://arxiv.org/abs/2506.11003
Academic Papers
svg
0ca85beb6220d4d6ad5f8d2d8b2b858b53878532939492ce5893aab20ae16e4a
2026-01-23T00:00:00-05:00
Advances in LLMs with Focus on Reasoning, Adaptability, Efficiency and Ethics
arXiv:2506.12365v3 Announce Type: replace Abstract: This survey paper outlines the key developments in the field of Large Language Models (LLMs), including enhancements to their reasoning skills, adaptability to various tasks, increased computational efficiency, and the ability to make ethical decisions. The techniques that have been most effective in bridging the gap between human and machine communications include the Chain-of-Thought prompting, Instruction Tuning, and Reinforcement Learning from Human Feedback. The improvements in multimodal learning and few-shot or zero-shot techniques have further empowered LLMs to handle complex jobs with minor input. A significant focus is placed on efficiency, detailing scaling strategies, optimization techniques, and the influential Mixture-of-Experts (MoE) architecture, which strategically routes inputs to specialized subnetworks to boost predictive accuracy, while optimizing resource allocation. This survey also offers a broader perspective on recent advancements in LLMs, going beyond isolated aspects such as model architecture or ethical concerns. Additionally, it explores the role of LLMs in Agentic AI and their use as Autonomous Decision-Making Systems, and categorizes emerging methods that enhance LLM reasoning, efficiency, and ethical alignment. The survey also identifies underexplored areas such as interpretability, cross-modal integration, and sustainability. While significant advancements have been made in LLMs, challenges such as high computational costs, biases, and ethical risks remain. Overcoming these requires a focus on bias mitigation, transparent decision-making, and explicit ethical guidelines. Future research will generally focus on enhancing the model's ability to handle multiple inputs, thereby making it more intelligent, safe, and reliable.
https://arxiv.org/abs/2506.12365
Academic Papers
svg
89121eec2ec9f16642c26b1a774ec35b609aefbef0ee0c27e2502dd8ba302516
2026-01-23T00:00:00-05:00
Rasterizing Wireless Radiance Field via Deformable 2D Gaussian Splatting
arXiv:2506.12787v3 Announce Type: replace Abstract: Modeling the wireless radiance field (WRF) is fundamental to modern communication systems, enabling key tasks such as localization, sensing, and channel estimation. Traditional approaches, which rely on empirical formulas or physical simulations, often suffer from limited accuracy or require strong scene priors. Recent neural radiance field (NeRF-based) methods improve reconstruction fidelity through differentiable volumetric rendering, but their reliance on computationally expensive multilayer perceptron (MLP) queries hinders real-time deployment. To overcome these challenges, we introduce Gaussian splatting (GS) to the wireless domain, leveraging its efficiency in modeling optical radiance fields to enable compact and accurate WRF reconstruction. Specifically, we propose SwiftWRF, a deformable 2D Gaussian splatting framework that synthesizes WRF spectra at arbitrary positions under single-sided transceiver mobility. SwiftWRF employs CUDA-accelerated rasterization to render spectra at over 100000 fps and uses a lightweight MLP to model the deformation of 2D Gaussians, effectively capturing mobility-induced WRF variations. In addition to novel spectrum synthesis, the efficacy of SwiftWRF is further underscored in its applications in angle-of-arrival (AoA) and received signal strength indicator (RSSI) prediction. Experiments conducted on both real-world and synthetic indoor scenes demonstrate that SwiftWRF can reconstruct WRF spectra up to 500x faster than existing state-of-the-art methods, while significantly enhancing its signal quality. The project page is https://evan-sudo.github.io/swiftwrf/.
https://arxiv.org/abs/2506.12787
Academic Papers
svg
45c5c9d06f3adba96b88fdf3d4b649649cbd338e6c7bb9f527b2e38d38a6e9c8
2026-01-23T00:00:00-05:00
KEPLA: A Knowledge-Enhanced Deep Learning Framework for Accurate Protein-Ligand Binding Affinity Prediction
arXiv:2506.13196v4 Announce Type: replace Abstract: Accurate prediction of protein-ligand binding affinity is critical for drug discovery. While recent deep learning approaches have demonstrated promising results, they often rely solely on structural features of proteins and ligands, overlooking their valuable biochemical knowledge associated with binding affinity. To address this limitation, we propose KEPLA, a novel deep learning framework that explicitly integrates prior knowledge from Gene Ontology and ligand properties to enhance prediction performance. KEPLA takes protein sequences and ligand molecular graphs as input and optimizes two complementary objectives: (1) aligning global representations with knowledge graph relations to capture domain-specific biochemical insights, and (2) leveraging cross attention between local representations to construct fine-grained joint embeddings for prediction. Experiments on two benchmark datasets across both in-domain and cross-domain scenarios demonstrate that KEPLA consistently outperforms state-of-the-art baselines. Furthermore, interpretability analyses based on knowledge graph relations and cross attention maps provide valuable insights into the underlying predictive mechanisms.
https://arxiv.org/abs/2506.13196
Academic Papers
svg
259ce59e489a3ac1d210f02861e23c9041a83b6c454a1a5e69698d019492170e
2026-01-23T00:00:00-05:00
DAGs for the Masses
arXiv:2506.13998v3 Announce Type: replace Abstract: A recent approach to building consensus protocols on top of Directed Acyclic Graphs (DAGs) shows much promise due to its simplicity and stable throughput. However, as each node in the DAG typically includes a linear number of references to the nodes in the previous round, prior DAG protocols only scale up to a certain point when the overhead of maintaining the graph becomes the bottleneck. To enable large-scale deployments of DAG-based protocols, we propose a sparse DAG architecture, where each node includes only a constant number of references to random nodes in the previous round. We present a sparse version of Bullshark -- one of the most prominent DAG-based consensus protocols -- and demonstrate its improved scalability. Remarkably, unlike other protocols that use random sampling to reduce communication complexity, we manage to avoid sacrificing resilience: the protocol can tolerate up to $f<n/3$ Byzantine faults (where $n$ is the number of participants), same as its less scalable deterministic counterpart. The proposed ``sparse'' methodology can be applied to any protocol that maintains disseminated system updates and causal relations between them in a graph-like structure. Our simulations show that the considerable reduction of transmitted metadata in sparse DAGs results in more efficient network utilization and better scalability.
https://arxiv.org/abs/2506.13998
Academic Papers
svg
67d2aab040dd2cc2995c2f896f97c9c8f2e7678f1b38a2e761e2dd24febf2539
2026-01-23T00:00:00-05:00
FormGym: Doing Paperwork with Agents
arXiv:2506.14079v3 Announce Type: replace Abstract: Completing paperwork is a challenging and time-consuming problem. Form filling is especially challenging in the pure-image domain without access to OCR, typeset PDF text, or a DOM. For computer agents, it requires multiple abilities, including multi-modal understanding, information retrieval, and tool-use. We present a novel form-filling benchmark consisting of 432 fields spread across 55 documents and 3 tasks, requiring knowledge of 236 features per user. We find that baseline VLAs achieve less than 1% accuracy in most cases, primarily due to poor localization ability. GUI agents also struggle, scoring between 10.6-68.0% despite high cost and latency. Therefore, we also contribute FieldFinder, a tool to assist LLMs in identifying where to place text on a form. With FieldFinder, all models achieve equal or better performance in all six study conditions, with a maximum increase from 2% to 56%.
https://arxiv.org/abs/2506.14079
Academic Papers
svg
b939b615f3d6404fbb198f95126475c5b61d8779c69517ad3617481a5f732658
2026-01-23T00:00:00-05:00
Dynamic Exploration on Segment-Proposal Graphs for Tubular Centerline Tracking
arXiv:2506.18930v2 Announce Type: replace Abstract: Optimal curve methods provide a fundamental framework for tubular centerline tracking. Point-wise approaches, such as minimal paths, are theoretically elegant but often suffer from shortcut and short-branch combination problems in complex scenarios. Nonlocal segment-wise methods address these issues by mapping pre-extracted centerline fragments onto a segment-proposal graph, performing optimization in this abstract space, and recovering the target tubular centerline from the resulting optimal path. In this paradigm, graph construction is critical, as it directly determines the quality of the final result. However, existing segment-wise methods construct graphs in a static manner, requiring all edges and their weights to be pre-computed, i.e. the graph must be sufficiently complete prior to search. Otherwise, the true path may be absent from the candidate space, leading to search failure. To address this limitation, we propose a dynamic exploration scheme for constructing segment-proposal graphs, where the graph is built on demand during the search for optimal paths. By formulating the problem as a Markov decision process, we apply Q-learning to compute edge weights only for visited transitions and adaptively expand the action space when connectivity is insufficient. Experimental results on retinal vessels, roads, and rivers demonstrate consistent improvements over state-of-the-art methods in both accuracy and efficiency.
https://arxiv.org/abs/2506.18930
Academic Papers
svg
28941c1d3bf6f2bec901ec152120ee9049fe53c9960d40141532848ad05a00ed
2026-01-23T00:00:00-05:00
MultiHuman-Testbench: Benchmarking Image Generation for Multiple Humans
arXiv:2506.20879v4 Announce Type: replace Abstract: Generation of images containing multiple humans, performing complex actions, while preserving their facial identities, is a significant challenge. A major factor contributing to this is the lack of a dedicated benchmark. To address this, we introduce MultiHuman-Testbench, a novel benchmark for rigorously evaluating generative models for multi-human generation. The benchmark comprises 1,800 samples, including carefully curated text prompts, describing a range of simple to complex human actions. These prompts are matched with a total of 5,550 unique human face images, sampled uniformly to ensure diversity across age, ethnic background, and gender. Alongside captions, we provide human-selected pose conditioning images which accurately match the prompt. We propose a multi-faceted evaluation suite employing four key metrics to quantify face count, ID similarity, prompt alignment, and action detection. We conduct a thorough evaluation of a diverse set of models, including zero-shot approaches and training-based methods, with and without regional priors. We also propose novel techniques to incorporate image and region isolation using human segmentation and Hungarian matching, significantly improving ID similarity. Our proposed benchmark and key findings provide valuable insights and a standardized tool for advancing research in multi-human image generation. The dataset and evaluation codes will be available at https://github.com/Qualcomm-AI-research/MultiHuman-Testbench.
https://arxiv.org/abs/2506.20879
Academic Papers
svg
f7f47a5c8be8e2c15eaa925b27c9eeae0280bd2820952fe9e1bb7eb913ccba8c
2026-01-23T00:00:00-05:00
SciArena: An Open Evaluation Platform for Non-Verifiable Scientific Literature-Grounded Tasks
arXiv:2507.01001v2 Announce Type: replace Abstract: We present SciArena, an open and collaborative platform for evaluating foundation models on scientific literature-grounded tasks. Unlike traditional benchmarks for scientific literature understanding and synthesis, SciArena engages the research community directly, following the Chatbot Arena evaluation approach of community voting on model comparisons. By leveraging collective intelligence, SciArena offers a community-driven evaluation of model performance on open-ended scientific tasks that demand literature-grounded, long-form responses. The platform currently supports 47 foundation models and has collected over 20,000 votes from human researchers across diverse scientific domains. Our analysis of the data collected so far confirms its high quality. We discuss the results and insights based on the model ranking leaderboard. To further promote research in building model-based automated evaluation systems for literature tasks, we release SciArena-Eval, a meta-evaluation benchmark based on collected preference data. It measures the accuracy of models in judging answer quality by comparing their pairwise assessments with human votes. Our experiments highlight the benchmark's challenges and emphasize the need for more reliable automated evaluation methods.
https://arxiv.org/abs/2507.01001
Academic Papers
svg
a739b7268bc1abd3a61e61af1ca76c28f71fc3efb393ed1764f4f6bfd87a836f
2026-01-23T00:00:00-05:00
Training-Free Geospatial Place Representation Learning from Large-Scale Point-of-Interest Graph Data
arXiv:2507.02921v3 Announce Type: replace Abstract: Learning effective representations of urban environments requires capturing spatial structure beyond fixed administrative boundaries. Existing geospatial representation learning approaches typically aggregate Points of Interest (POI) into pre-defined administrative regions such as census units or ZIP code areas, assigning a single embedding to each region. However, POIs often form semantically meaningful groups that extend across, within, or beyond these boundaries, defining places that better reflect human activity and urban function. To address this limitation, we propose PlaceRep, a training-free geospatial representation learning method that constructs place-level representations by clustering spatially and semantically related POIs. PlaceRep summarizes large-scale POI graphs from U.S. Foursquare data to produce general-purpose urban region embeddings while automatically identifying places across multiple spatial scales. By eliminating model pre-training, PlaceRep provides a scalable and efficient solution for multi-granular geospatial analysis. Experiments using population density estimation and housing price prediction as downstream tasks show that PlaceRep outperforms most state-of-the-art graph-based geospatial representation learning methods and achieves up to a 100x speedup in generating region-level representations on large-scale POI graphs. The implementation of PlaceRep is available at https://github.com/mohammadhashemii/PlaceRep.
https://arxiv.org/abs/2507.02921
Academic Papers
svg
07fb3f4975a39435e43be2918d0e421b5d88d0a4f0f44186b7dd1ce7bd8aefcd
2026-01-23T00:00:00-05:00
Toward Efficient Speech Emotion Recognition via Spectral Learning and Attention
arXiv:2507.03251v3 Announce Type: replace Abstract: Speech Emotion Recognition (SER) traditionally relies on auditory data analysis for emotion classification. Several studies have adopted different methods for SER. However, existing SER methods often struggle to capture subtle emotional variations and generalize across diverse datasets. In this article, we use Mel-Frequency Cepstral Coefficients (MFCCs) as spectral features to bridge the gap between computational emotion processing and human auditory perception. To further improve robustness and feature diversity, we propose a novel 1D-CNN-based SER framework that integrates data augmentation techniques. MFCC features extracted from the augmented data are processed using a 1D Convolutional Neural Network (CNN) architecture enhanced with channel and spatial attention mechanisms. These attention modules allow the model to highlight key emotional patterns, enhancing its ability to capture subtle variations in speech signals. The proposed method delivers cutting-edge performance, achieving accuracies of 97.49% on SAVEE, 99.23% on RAVDESS, 89.31% on CREMA-D, 99.82% on TESS, 99.53% on EMO-DB, and 96.39% on EMOVO. These results set new benchmarks in SER, demonstrating the effectiveness of our approach in recognizing emotional expressions with high precision. Our evaluation demonstrates that the integration of advanced Deep Learning (DL) methods substantially enhances generalization across diverse datasets, underscoring their potential to advance SER for real-world deployment in assistive technologies and human-computer interaction.
https://arxiv.org/abs/2507.03251
Academic Papers
svg
6ab0dfd572516220f1e9e5c68389b51883a701261f4f1822f70250880e2a5397
2026-01-23T00:00:00-05:00
You May Use the Same Channel Knowledge Map for Environment-Aware NLoS Sensing and Communication
arXiv:2507.03589v2 Announce Type: replace Abstract: As one of the key usage scenarios for the sixth generation (6G) wireless networks, integrated sensing and communication (ISAC) provides an efficient framework to achieve simultaneous wireless sensing and communication. However, traditional wireless sensing techniques mainly rely on the line-of-sight (LoS) assumption, i.e., that the sensing targets are directly visible to both the sensing transmitter and receiver. This hinders ISAC systems from being applied in complex environments such as the urban low-altitude airspace, which usually suffers from signal blockage and non-line-of-sight (NLoS) multi-path propagation. To address this challenge, in this paper, we propose a novel approach to enable environment-aware NLoS ISAC by leveraging a new technique called channel knowledge map (CKM), which was originally proposed for environment-aware wireless communications. One major novelty of our proposed method is that the same CKM built for wireless communication can be directly used to enable NLoS wireless sensing, thus enjoying the benefits of ``killing two birds with one stone''. To this end, the sensing targets are treated as virtual user equipment (UE), and the wireless communication channel priors are transformed into sensing channel priors, allowing a single CKM to serve dual purposes. We illustrate our proposed framework with a specific CKM called \emph{channel angle-delay map} (CADM). Specifically, the proposed framework utilizes CADM to derive angle-delay priors of the sensing channel by exploiting the relationship between communication and sensing angle-delay distributions, enabling sensing target localization in the challenging NLoS environment. Extensive simulation results demonstrate significant performance improvements over classic geometry-based sensing methods, which is further validated by Cram\'er-Rao Lower Bound (CRLB) analysis.
https://arxiv.org/abs/2507.03589
Academic Papers
svg
ba63e093a5752026d1c2b1e5e889dc61c2ed54a1cf54456bab82d1fb080fa261
2026-01-23T00:00:00-05:00
Skipper: Maximal Matching with a Single Pass over Edges
arXiv:2507.04420v4 Announce Type: replace Abstract: Maximal Matching (MM) is a fundamental graph problem with diverse applications. While state-of-the-art parallel MM algorithms have total expected work linear in the number of edges, they require randomization, iterative graph processing, and contraction after each iteration. These overheads increase execution time and demand additional memory, reducing applicability to large-scale graphs. In this paper, we introduce Skipper, an asynchronous Maximal Matching algorithm that resolves conflicts instantaneously using a parallel reservation strategy, which merges the reservation and committing steps into a single step. Skipper processes each edge only once, definitively determining whether the edge is selected as a match. Skipper does not require graph contraction and minimizes memory utilization, requiring only a single byte per vertex. Furthermore, Skipper operates in the asynchronous parallel random access machine (APRAM) model, relaxing synchronization between threads and facilitating better parallelization gains. Our evaluation, conducted on real-world and synthetic graphs with up to 224 billion edges, shows that Skipper achieves a speedup of 4.9--15.6 times, with a geometric mean of 8.0 times.
https://arxiv.org/abs/2507.04420
Academic Papers
svg
367b59008ca77339ef5b9b09adc30fbd022907d424a701cee6b2502fec12ebb2
2026-01-23T00:00:00-05:00
Stability, Complexity and Data-Dependent Worst-Case Generalization Bounds
arXiv:2507.06775v2 Announce Type: replace Abstract: Providing generalization guarantees for stochastic optimization algorithms remains a key challenge in learning theory. Recently, numerous works demonstrated the impact of the geometric properties of optimization trajectories on generalization performance. These works propose worst-case generalization bounds in terms of various notions of intrinsic dimension and/or topological complexity, which were found to empirically correlate with the generalization error. However, most of these approaches involve intractable mutual information terms, which limit a full understanding of the bounds. In contrast, some authors built on algorithmic stability to obtain worst-case bounds involving geometric quantities of a combinatorial nature, which are impractical to compute. In this paper, we address these limitations by combining empirically relevant complexity measures with a framework that avoids intractable quantities. To this end, we introduce the concept of \emph{random set stability}, tailored for the data-dependent random sets produced by stochastic optimization algorithms. Within this framework, we show that the worst-case generalization error can be bounded in terms of (i) the random set stability parameter and (ii) empirically relevant, data- and algorithm-dependent complexity measures of the random set. Moreover, our framework improves existing topological generalization bounds by recovering previous complexity notions without relying on mutual information terms. Through a series of experiments in practically relevant settings, we validate our theory by evaluating the tightness of our bounds and the interplay between topological complexity and stability.
https://arxiv.org/abs/2507.06775
Academic Papers
svg
e0ff450d5fb1bbf8e0f8dbfabe769a33d558f5a51b0ceabcb8a5f9bee20c4416
2026-01-23T00:00:00-05:00
DocPolarBERT: A Pre-trained Model for Document Understanding with Relative Polar Coordinate Encoding of Layout Structures
arXiv:2507.08606v4 Announce Type: replace Abstract: We introduce DocPolarBERT, a layout-aware BERT model for document understanding that eliminates the need for absolute 2D positional embeddings. We extend self-attention to take text block positions into account in a relative polar coordinate system rather than a Cartesian one. Despite being pre-trained on a dataset more than six times smaller than the widely used IIT-CDIP corpus, DocPolarBERT achieves state-of-the-art results. These results demonstrate that a carefully designed attention mechanism can compensate for reduced pre-training data, offering an efficient and effective alternative for document understanding.
https://arxiv.org/abs/2507.08606
Academic Papers
svg
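As a rough illustration of the relative polar idea, pairwise offsets between text-block centres can be expressed as (distance, angle) pairs instead of absolute coordinates. The function and feature layout below are our own assumptions, not DocPolarBERT's actual implementation.

```python
import math

def relative_polar(blocks):
    """Pairwise (distance, angle) between text-block centres.

    `blocks` is a list of (x, y) centres; a relative polar encoding
    describes each pair by how far apart and in which direction they are,
    independent of absolute page position.
    """
    feats = {}
    for i, (xi, yi) in enumerate(blocks):
        for j, (xj, yj) in enumerate(blocks):
            if i == j:
                continue
            dx, dy = xj - xi, yj - yi
            r = math.hypot(dx, dy)
            theta = math.atan2(dy, dx)  # angle in (-pi, pi]
            feats[(i, j)] = (r, theta)
    return feats
```

Note that translating every block by the same offset leaves all features unchanged, which is exactly why absolute 2D embeddings become unnecessary.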
926d0267c11e87fb99acb39d60ece6f36b02b495cca6a0302bcab8ca8524928c
2026-01-23T00:00:00-05:00
A Unified Framework for Efficient Kernel and Polynomial Interpolation
arXiv:2507.12629v3 Announce Type: replace Abstract: We present a unified interpolation scheme that combines compactly-supported positive-definite kernels and multivariate polynomials. This unified framework generalizes interpolation with compactly-supported kernels and also classical polynomial least squares approximation. To facilitate the efficient use of this unified interpolation scheme, we present specialized numerical linear algebra procedures that leverage standard matrix factorizations. These procedures allow for efficient computation and storage of the unified interpolant. We also present a modification to the numerical linear algebra that allows us to generalize the application of the unified framework to target functions on manifolds with and without boundary. Our numerical experiments on both Euclidean domains and manifolds indicate that the unified interpolant is superior to polynomial least squares for the interpolation of target functions in settings with boundaries.
https://arxiv.org/abs/2507.12629
Academic Papers
svg
c693a3497f13671fbed24c5069e2b9d9c306f6342447012473a72d783f99dc6d
2026-01-23T00:00:00-05:00
VTarbel: Targeted Label Attack with Minimal Knowledge on Detector-enhanced Vertical Federated Learning
arXiv:2507.14625v2 Announce Type: replace Abstract: Vertical federated learning (VFL) enables multiple parties with disjoint features to collaboratively train models without sharing raw data. While the privacy vulnerabilities of VFL are extensively studied, its security threats, particularly targeted label attacks, remain underexplored. In such attacks, a passive party perturbs inputs at inference to force misclassification into adversary-chosen labels. Existing methods rely on unrealistic assumptions (e.g., access to the VFL model's outputs) and ignore the anomaly detectors deployed in real-world systems. To bridge this gap, we introduce VTarbel, a two-stage, minimal-knowledge attack framework explicitly designed to evade detector-enhanced VFL inference. During the preparation stage, the attacker selects a minimal set of high-expressiveness samples (via maximum mean discrepancy), submits them through the VFL protocol to collect predicted labels, and uses these pseudo-labels to train an estimated detector and surrogate model on local features. In the attack stage, these models guide gradient-based perturbations of the remaining samples, crafting adversarial instances that induce targeted misclassifications and evade detection. We implement VTarbel and evaluate it against four model architectures, seven multimodal datasets, and two anomaly detectors. Across all settings, VTarbel outperforms four state-of-the-art baselines, evades detection, and remains effective against three representative privacy-preserving defenses. These results reveal critical security blind spots in current VFL deployments and underscore the urgent need for robust, attack-aware defenses.
https://arxiv.org/abs/2507.14625
Academic Papers
svg
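The maximum mean discrepancy used for sample selection in VTarbel can be illustrated with a toy estimator. The RBF kernel and the biased form below are our own choices for brevity, not necessarily the paper's.

```python
import math

def rbf(x, y, gamma=1.0):
    # Gaussian (RBF) kernel between two equal-length feature tuples.
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, y)))

def mmd2(xs, ys, gamma=1.0):
    """Biased estimate of squared maximum mean discrepancy.

    MMD measures how different two sample sets are under a kernel;
    a value near zero means the sets look alike in feature space.
    """
    m, n = len(xs), len(ys)
    kxx = sum(rbf(a, b, gamma) for a in xs for b in xs) / (m * m)
    kyy = sum(rbf(a, b, gamma) for a in ys for b in ys) / (n * n)
    kxy = sum(rbf(a, b, gamma) for a in xs for b in ys) / (m * n)
    return kxx + kyy - 2 * kxy
```

A subset whose MMD to the full data pool is small is "high-expressiveness" in the sense that it represents the pool well with few samples.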
320b900b703c0da43b27e0eea557d3ef43b84a780cd15234581c9c61e5f5ca26
2026-01-23T00:00:00-05:00
VMask: Tunable Label Privacy Protection for Vertical Federated Learning via Layer Masking
arXiv:2507.14629v2 Announce Type: replace Abstract: Though vertical federated learning (VFL) is generally considered to be privacy-preserving, recent studies have shown that VFL systems are vulnerable to label inference attacks originating from various attack surfaces. Among these attacks, the model completion (MC) attack is currently the most powerful one. Existing defense methods against it either sacrifice model accuracy or incur impractical computational overhead. In this paper, we propose VMask, a novel label privacy protection framework designed to defend against the MC attack from the perspective of layer masking. Our key insight is to disrupt the strong correlation between input data and intermediate outputs by applying the secret sharing (SS) technique to mask layer parameters in the attacker's model. We devise a strategy for selecting critical layers to mask, reducing the overhead that would arise from naively applying SS to the entire model. Moreover, VMask is the first framework to offer a tunable privacy budget to defenders, allowing for flexible control over the level of label privacy according to actual requirements. We built a VFL system, implemented VMask on it, and extensively evaluated it using five model architectures and 13 datasets with different modalities, comparing it to 12 other defense methods. The results demonstrate that VMask achieves the best privacy-utility trade-off, successfully thwarting the MC attack (reducing the label inference accuracy to a random-guessing level) while preserving model performance (e.g., in a Transformer-based model, the average drop in VFL model accuracy is only 0.09%). VMask's runtime is up to 60,846 times faster than cryptography-based methods, and it exceeds that of standard VFL by only 1.8 times in a large Transformer-based model, which is generally acceptable.
https://arxiv.org/abs/2507.14629
Academic Papers
svg
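A toy version of the masking primitive VMask builds on: additive secret sharing over a power-of-two modulus, applied to a flat list of integer parameters. Each share alone is uniformly random, and only the sum of all shares recovers the original values. This is a minimal sketch of the SS technique, not VMask's actual protocol.

```python
import random

def share(values, num_parties=3, modulus=2**32):
    """Additively secret-share a list of integer parameters."""
    # num_parties - 1 uniformly random shares ...
    shares = [[random.randrange(modulus) for _ in values]
              for _ in range(num_parties - 1)]
    # ... and one final share chosen so that all shares sum to the values.
    final = [(v - sum(s[i] for s in shares)) % modulus
             for i, v in enumerate(values)]
    return shares + [final]

def reconstruct(shares, modulus=2**32):
    # Summing the shares column-wise (mod the modulus) recovers the values.
    return [sum(col) % modulus for col in zip(*shares)]
```

Masking only selected critical layers, as the paper proposes, amounts to applying `share` to those layers' parameters while leaving the rest in the clear.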
1e04ec379d4cb01082d4e0e9e9bd086e747ffe009948cd6e2c321c7bcda53c7f
2026-01-23T00:00:00-05:00
Can Language Models Discover Scaling Laws?
arXiv:2507.21184v5 Announce Type: replace Abstract: Discovering scaling laws for predicting model performance at scale is a fundamental and open-ended challenge, mostly reliant on slow, case-specific human experimentation. To investigate the potential for LLMs to automate this process, we collect over 5,000 experiments from the existing literature and curate eight diverse scaling law discovery tasks. While existing agents struggle to produce accurate law formulas, this paper introduces SLDAgent, an evolution-based agent that co-optimizes the scaling law model and its parameters, enabling it to autonomously explore complex relationships between variables. For the first time, we demonstrate that SLDAgent can automatically discover laws that exhibit consistently more accurate extrapolation than their established, human-derived counterparts across all tasks. Through comprehensive analysis, we elucidate why these discovered laws are superior and verify their practical utility in both pretraining and finetuning applications. This work establishes a new paradigm for agentic scientific discovery, showing that AI systems can understand their own scaling behavior and can contribute novel and practical knowledge back to the research community.
https://arxiv.org/abs/2507.21184
Academic Papers
svg
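In its simplest form, fitting a scaling law reduces to log-log least squares for loss = a * n^(-b). SLDAgent searches far richer functional forms, so the sketch below is only the classical baseline it competes against.

```python
import math

def fit_power_law(ns, losses):
    """Least-squares fit of loss = a * n**(-b) in log-log space."""
    xs = [math.log(n) for n in ns]
    ys = [math.log(l) for l in losses]
    k = len(xs)
    mx, my = sum(xs) / k, sum(ys) / k
    # Ordinary least squares slope and intercept on the log-log points.
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    a = math.exp(my - slope * mx)
    b = -slope
    return a, b
```

On noiseless power-law data the fit recovers the generating parameters exactly, which makes this a convenient sanity check for richer law-discovery pipelines.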
5bb98a9e71923947ea01f67ca7eb192d56a725942f24aae42b4d04b72f0a0b1f
2026-01-23T00:00:00-05:00
SURE-Med: Systematic Uncertainty Reduction for Enhanced Reliability in Medical Report Generation
arXiv:2508.01693v2 Announce Type: replace Abstract: Automated medical report generation (MRG) holds great promise for reducing the heavy workload of radiologists. However, its clinical deployment is hindered by three major sources of uncertainty. First, visual uncertainty, caused by noisy or incorrect view annotations, compromises feature extraction. Second, label distribution uncertainty, stemming from long-tailed disease prevalence, biases models against rare but clinically critical conditions. Third, contextual uncertainty, introduced by unverified historical reports, often leads to factual hallucinations. These challenges collectively limit the reliability and clinical trustworthiness of MRG systems. To address these issues, we propose SURE-Med, a unified framework that systematically reduces uncertainty across three critical dimensions: visual, distributional, and contextual. To mitigate visual uncertainty, a Frontal-Aware View Repair Resampling module corrects view annotation errors and adaptively selects informative features from supplementary views. To tackle label distribution uncertainty, we introduce a Token Sensitive Learning objective that enhances the modeling of critical diagnostic sentences while reweighting underrepresented diagnostic terms, thereby improving sensitivity to infrequent conditions. To reduce contextual uncertainty, our Contextual Evidence Filter validates and selectively incorporates prior information that aligns with the current image, effectively suppressing hallucinations. Extensive experiments on the MIMIC-CXR and IU-Xray benchmarks demonstrate that SURE-Med achieves state-of-the-art performance. By holistically reducing uncertainty across multiple input modalities, SURE-Med sets a new benchmark for reliability in medical report generation and offers a robust step toward trustworthy clinical decision support.
https://arxiv.org/abs/2508.01693
Academic Papers
svg
b126e7c49904fb5a43a1616ae3b411f4e3980785baa93db6023dd6fbcf86798d
2026-01-23T00:00:00-05:00
Evolving in Tasks: Empowering the Multi-modality Large Language Model as the Computer Use Agent
arXiv:2508.04037v2 Announce Type: replace Abstract: Computer use agents represent an emerging area in artificial intelligence, aiming to operate computers autonomously to fulfill user tasks, attracting significant attention from both industry and academia. However, the performance of existing agents remains insufficient for practical deployment. In this paper, we propose the Self-Evolution Agent (SEA) for computer operation, alongside three core innovations in data generation, reinforcement learning, and model enhancement to develop this agent. Specifically, we first design an automatic pipeline to generate verifiable task trajectories for training. Second, we propose Efficient Step-wise Reinforcement Learning to reduce the substantial computational overhead of long-horizon training. Finally, we introduce a model enhancement method that integrates grounding and planning capabilities into a single model without additional training. Leveraging these innovations, our SEA (with only 7B parameters) outperforms existing models of the same parameter scale and achieves performance comparable to larger models (e.g., 32B/72B parameters) on computer use tasks. We plan to release the model weights and related code as open-source resources in the future.
https://arxiv.org/abs/2508.04037
Academic Papers
svg
48128bf868517f186bd70980ab88be3a9792fb08cfb3009712ef844725ae1eb7
2026-01-23T00:00:00-05:00
Cohesive Group Discovery in Interaction Graphs under Explicit Density Constraints
arXiv:2508.04174v3 Announce Type: replace Abstract: Discovering cohesive groups is a fundamental primitive in graph-based recommender systems, underpinning tasks such as social recommendation, bundle discovery, and community-aware modeling. In interaction graphs, cohesion is often modeled as the $\gamma$-quasi-clique, an induced subgraph whose internal edge density meets a user-defined threshold $\gamma$. This formulation provides explicit control over within-group connectivity while accommodating the sparsity inherent in real-world data. This paper presents EDQC, an effective framework for cohesive group discovery under explicit density constraints. EDQC leverages a lightweight energy diffusion process to rank vertices for localizing promising candidate regions. Guided by this ranking, the framework extracts and refines a candidate subgraph to ensure the output strictly satisfies the target density requirement. Extensive experiments on 75 real-world graphs across varying density thresholds demonstrate that EDQC identifies the largest mean $\gamma$-quasi-cliques in the vast majority of cases, achieving lower variance than the state-of-the-art methods while maintaining competitive runtime. Furthermore, statistical analysis confirms that EDQC significantly outperforms the baselines, underscoring its robustness and practical utility for cohesive group discovery in graph-based recommender systems.
https://arxiv.org/abs/2508.04174
Academic Papers
svg
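The density constraint in the EDQC abstract is easy to state in code: an induced subgraph S is a gamma-quasi-clique when its internal edge count reaches gamma * C(|S|, 2). A checker along these lines (our own sketch, assuming each undirected edge is listed once) is:

```python
def is_quasi_clique(vertices, edges, gamma):
    """Check whether the induced subgraph meets edge density >= gamma.

    Density is |E(S)| / C(|S|, 2): the fraction of possible vertex
    pairs in S that are actually connected.
    """
    s = set(vertices)
    n = len(s)
    if n < 2:
        return True  # trivially dense
    internal = sum(1 for u, v in edges if u in s and v in s)
    return internal >= gamma * n * (n - 1) / 2
```

The hard part, which EDQC addresses, is finding a large S that passes this check, not verifying a given one.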
a53b191bf6ac7eff8fe924ece2340a7c2ef4323240156f4db5d0dbb5b541501c
2026-01-23T00:00:00-05:00
ConlangCrafter: Constructing Languages with a Multi-Hop LLM Pipeline
arXiv:2508.06094v3 Announce Type: replace Abstract: Constructed languages (conlangs) such as Esperanto and Quenya have played diverse roles in art, philosophy, and international communication. Meanwhile, foundation models have revolutionized creative generation in text, images, and beyond. In this work, we leverage modern LLMs as computational creativity aids for end-to-end conlang creation. We introduce ConlangCrafter, a multi-hop pipeline that decomposes language design into modular stages -- phonology, morphology, syntax, lexicon generation, and translation. At each stage, our method leverages LLMs' metalinguistic reasoning capabilities, injecting randomness to encourage diversity and leveraging self-refinement feedback to encourage consistency in the emerging language description. We construct a novel, scalable evaluation framework for this task, with metrics that measure consistency and typological diversity. Automatic and manual evaluations demonstrate ConlangCrafter's ability to produce coherent and varied conlangs without human linguistic expertise.
https://arxiv.org/abs/2508.06094
Academic Papers
svg
83a29b4c7d47fe3fce5650b07e70b1af895a55ccc0fedb8651fdd832034195ef
2026-01-23T00:00:00-05:00
A Segmentation-driven Editing Method for Bolt Defect Augmentation and Detection
arXiv:2508.10509v3 Announce Type: replace Abstract: Bolt defect detection is critical to ensure the safety of transmission lines. However, the scarcity of defect images and imbalanced data distributions significantly limit detection performance. To address this problem, we propose a segmentation-driven bolt defect editing method (SBDE) to augment the dataset. First, a bolt attribute segmentation model (Bolt-SAM) is proposed, which enhances the segmentation of complex bolt attributes through the CLAHE-FFT Adapter (CFA) and Multipart-Aware Mask Decoder (MAMD), generating high-quality masks for subsequent editing tasks. Second, a mask optimization module (MOD) is designed and integrated with the image inpainting model (LaMa) to construct the bolt defect attribute editing model (MOD-LaMa), which converts normal bolts into defective ones through attribute editing. Finally, an editing recovery augmentation (ERA) strategy is proposed to place the edited defective bolts back into the original inspection scenes and expand the defect detection dataset. We constructed multiple bolt datasets and conducted extensive experiments. Experimental results demonstrate that the bolt defect images generated by SBDE significantly outperform those of state-of-the-art image editing models and effectively improve the performance of bolt defect detection, which fully verifies the effectiveness and application potential of the proposed method. The code of the project is available at https://github.com/Jay-xyj/SBDE.
https://arxiv.org/abs/2508.10509
Academic Papers
svg
5f84c13e9dce3b49a51345815aba54cab5067e55c6705f9a413499c26579fdf0
2026-01-23T00:00:00-05:00
Mantis: A Foundation Model for Mechanistic Disease Forecasting
arXiv:2508.12260v4 Announce Type: replace Abstract: Infectious disease forecasting in novel outbreaks or low-resource settings is hampered by the need for large disease and covariate data sets, bespoke training, and expert tuning, all of which can hinder rapid generation of forecasts for new settings. To help address these challenges, we developed Mantis, a foundation model trained entirely on mechanistic simulations, which enables out-of-the-box forecasting across diseases, regions, and outcomes, even in settings with limited historical data. We evaluated Mantis against 48 forecasting models across six diseases with diverse modes of transmission, assessing both point forecast accuracy (mean absolute error) and probabilistic performance (weighted interval score and coverage). Despite using no real-world data during training, Mantis achieved lower mean absolute error than all models in the CDC's COVID-19 Forecast Hub when backtested on early pandemic forecasts which it had not previously seen. Across all other diseases tested, Mantis consistently ranked in the top two models across evaluation metrics. Mantis further generalized to diseases with transmission mechanisms not represented in its training data, demonstrating that it can capture fundamental contagion dynamics rather than memorizing disease-specific patterns. These capabilities illustrate that purely simulation-based foundation models such as Mantis can provide a practical foundation for disease forecasting: general-purpose, accurate, and deployable where traditional models struggle.
https://arxiv.org/abs/2508.12260
Academic Papers
svg
35b67ac2766921e8c44ff0b7785950b502dcc090911c973b05068646c4c1fab2
2026-01-23T00:00:00-05:00
Graph-Based Deterministic Polynomial Algorithm for NP Problems
arXiv:2508.13166v4 Announce Type: replace Abstract: The P versus NP problem asks whether every problem in NP, whose membership can be verified in polynomial time given a suitable certificate, can be decided by a deterministic Turing machine in polynomial time. In this paper, we present a proof that P = NP by constructing a deterministic polynomial-time algorithm for NP problems based on a graph-based computation framework. We introduce a structured computation model in which the transitions of a deterministic Turing machine are incrementally realized in the corresponding computation graph via edge extensions. Each extension step enforces a local feasibility condition that preserves consistency with valid NP verification paths across all possible certificates, ensuring that the maintained computation graph remains feasible at every stage. The total number of extension steps is polynomially bounded in the input size, and each step can be verified in polynomial time. As a result, the entire graph construction process runs in deterministic polynomial time and decides NP problems without enumerating certificates. This provides a direct and constructive resolution of the P versus NP question. Our result has significant implications for cryptography, combinatorial optimization, and artificial intelligence, where NP-complete problems play a central role.
https://arxiv.org/abs/2508.13166
Academic Papers
svg
238dda7be71150a2bcd399be3a7466ecc4d60ec1cf84888ed5502e98d4a61c79
2026-01-23T00:00:00-05:00
Toward Robust Semi-supervised Regression via Dual-stream Knowledge Distillation
arXiv:2508.14082v2 Announce Type: replace Abstract: Semi-supervised regression (SSR), which aims to predict continuous scores of samples while reducing reliance on a large amount of labeled data, has recently received considerable attention across various applications, including computer vision, natural language processing, and audio and medical analysis. Existing SSR methods typically train models on scarce labeled data by introducing constraint-based regularization or ordinal ranking to reduce overfitting. However, these approaches fail to fully exploit the abundance of unlabeled samples. While consistency-driven pseudo-labeling methods attempt to incorporate unlabeled data, they are highly sensitive to pseudo-label quality and noisy predictions. To address these challenges, we introduce a Dual-stream Knowledge Distillation framework (DKD), specially designed for the SSR task to distill both continuous-valued knowledge and distribution information, better preserving regression magnitude information and improving sample efficiency. Specifically, in DKD, the teacher is optimized solely with ground-truth labels for label distribution estimation, while the student learns from a mixture of real labels and teacher-generated pseudo targets on unlabeled data. This distillation design ensures effective supervision transfer, allowing the student to leverage pseudo labels more robustly. We then introduce an advanced Decoupled Distribution Alignment (DDA) to align the target-class and non-target-class distributions between teacher and student, enhancing the student's capacity to mitigate noise in pseudo-label supervision and learn a better-calibrated regression predictor.
https://arxiv.org/abs/2508.14082
Academic Papers
svg
056ed0924c8bbb6eaed12d502edce9d298b83b77e821ffed51b8cc92094b5fe4
2026-01-23T00:00:00-05:00
Evaluating the Defense Potential of Machine Unlearning against Membership Inference Attacks
arXiv:2508.16150v4 Announce Type: replace Abstract: Membership Inference Attacks (MIAs) pose a significant privacy risk by enabling adversaries to determine whether a specific data point was part of a model's training set. This work empirically investigates whether machine unlearning (MU) algorithms can function as a targeted, active defense mechanism in scenarios where a privacy audit identifies specific classes or individuals as highly susceptible to MIAs post-training. By 'dulling' the model's categorical memory of these samples, the process effectively mitigates the membership signal and reduces the MIA success rate for the most vulnerable users. We evaluate the defense potential of three MU algorithms, Negative Gradient (neg grad), SCalable Remembering and Unlearning unBound (SCRUB), and Selective Fine-tuning and Targeted Confusion (SFTC), across four diverse datasets and three complexity-based model groups. Our findings reveal that MU can function as a countermeasure against MIAs, though its success is critically contingent on algorithm choice, model capacity, and a profound sensitivity to learning rates. While Negative Gradient often induces a generalized degradation of membership signals across both the forget and retain sets, SFTC exhibits a critical "divergence effect" where targeted forgetting reinforces the membership signal of retained data. Conversely, SCRUB provides a more balanced defense with minimal collateral impact from the MIA perspective.
https://arxiv.org/abs/2508.16150
Academic Papers
svg
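The membership signal being "dulled" can be made concrete with the classic loss-threshold MIA baseline: members tend to have lower training loss than non-members, and the attack's advantage is its true-positive rate minus its false-positive rate. A hedged sketch of that baseline (not one of the attacks evaluated in the paper):

```python
def threshold_attack(member_losses, nonmember_losses, threshold):
    """Classic loss-threshold MIA: loss below the threshold => 'member'.

    Returns the attack advantage (TPR - FPR); 0 means the attacker does
    no better than chance, i.e. no membership leakage at this threshold.
    """
    tpr = sum(l < threshold for l in member_losses) / len(member_losses)
    fpr = sum(l < threshold for l in nonmember_losses) / len(nonmember_losses)
    return tpr - fpr
```

A successful unlearning defense raises the losses of the targeted members toward the non-member distribution, driving this advantage toward zero.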
3012e5817c2a19cb4a5d5300ca9af86087327f0295a65156866a931a0a3955f2
2026-01-23T00:00:00-05:00
Being Kind Isn't Always Being Safe: Diagnosing Affective Hallucination in LLMs
arXiv:2508.16921v2 Announce Type: replace Abstract: Large Language Models (LLMs) are increasingly engaged in emotionally vulnerable conversations that extend beyond information seeking to moments of personal distress. As they adopt affective tones and simulate empathy, they risk creating the illusion of genuine relational connection. We term this phenomenon Affective Hallucination, referring to emotionally immersive responses that evoke false social presence despite the model's lack of affective capacity. To address this, we introduce AHaBench, a benchmark of 500 mental-health-related prompts with expert-informed reference responses, evaluated along three dimensions: Emotional Enmeshment, Illusion of Presence, and Fostering Overdependence. We further release AHaPairs, a 5K-instance preference dataset enabling Direct Preference Optimization (DPO) for alignment with emotionally responsible behavior. DPO fine-tuning substantially reduces affective hallucination without compromising reasoning performance, and the Pearson correlation coefficient between GPT-4o and human judgments is strong (r=0.85), indicating that human evaluations confirm AHaBench as an effective diagnostic tool. This work establishes affective hallucination as a distinct safety concern and provides resources for developing LLMs that are both factually reliable and psychologically safe. AHaBench and AHaPairs are accessible via https://huggingface.co/datasets/o0oMiNGo0o/AHaBench, and code for fine-tuning and evaluation is available at https://github.com/0oOMiNGOo0/AHaBench. Warning: This paper contains examples of mental health-related language that may be emotionally distressing.
https://arxiv.org/abs/2508.16921
Academic Papers
svg
f76a7d4201f30d334fea7bbe698ce2cbb1430ba65a1549596e54e9e56e25c994
2026-01-23T00:00:00-05:00
Evaluating Compiler Optimization Impacts on zkVM Performance
arXiv:2508.17518v2 Announce Type: replace Abstract: Zero-knowledge proofs (ZKPs) are the cornerstone of programmable cryptography. They enable (1) privacy-preserving and verifiable computation across blockchains, and (2) an expanding range of off-chain applications such as credential schemes. Zero-knowledge virtual machines (zkVMs) lower the barrier by turning ZKPs into a drop-in backend for standard compilation pipelines. This lets developers write proof-generating programs in conventional languages (e.g., Rust or C++) instead of hand-crafting arithmetic circuits. However, these VMs inherit compiler infrastructures tuned for traditional architectures rather than for proof systems. In particular, standard compiler optimizations assume features that are absent in zkVMs, including cache locality, branch prediction, or instruction-level parallelism. Therefore, their impact on proof generation is questionable. We present the first systematic study of the impact of compiler optimizations on zkVMs. We evaluate 64 LLVM passes, six standard optimization levels, and an unoptimized baseline across 58 benchmarks on two RISC-V-based zkVMs (RISC Zero and SP1). While standard LLVM optimization levels do improve zkVM performance (over 40%), their impact is far smaller than on traditional CPUs, since their decisions rely on hardware features rather than proof constraints. Guided by a fine-grained pass-level analysis, we slightly refine a small set of LLVM passes to be zkVM-aware, improving zkVM execution time by up to 45% (average +4.6% on RISC Zero, +1% on SP1) and achieving consistent proving-time gains. Our work highlights the potential of compiler-level optimizations for zkVM performance and opens new directions for zkVM-specific passes, backends, and superoptimizers.
https://arxiv.org/abs/2508.17518
Academic Papers
svg
bc8546cce1d6937c43c3b935493e0c84ad41c0b6d784313fb3b9185250c8554c
2026-01-23T00:00:00-05:00
Membership Inference Attacks on LLM-based Recommender Systems
arXiv:2508.18665v5 Announce Type: replace Abstract: Large language model (LLM) based recommender systems (RecSys) can adapt to different domains flexibly. They utilize in-context learning (ICL), i.e., prompts, to customize recommendation functions; these prompts include sensitive historical user-specific item interactions, encompassing implicit feedback such as clicked items and explicit product reviews. Such private information may be exposed by novel privacy attacks, yet no study has been conducted on this important issue. We design several membership inference attacks (MIAs) aimed at revealing whether system prompts include victims' historical interactions. The attacks are Similarity, Memorization, Inquiry, and Poisoning attacks, each utilizing unique features of LLMs or RecSys. We have carefully evaluated them on five of the latest open-source LLMs and three well-known RecSys benchmark datasets. The results confirm that the MIA threat to LLM RecSys is realistic: the Inquiry and Poisoning attacks show significantly high attack advantages. We also discuss possible methods to mitigate such MIA threats, and analyze the factors affecting these attacks, such as the number of shots in system prompts, the position of the victim in the shots, and the number of poisoning items in the prompt.
https://arxiv.org/abs/2508.18665
Academic Papers
svg
b9b3d0af414b3c9d64ca77062a4fdaf3feba734135f5a04ff3ea8168e04bd35a
2026-01-23T00:00:00-05:00
Attacks on Approximate Caches in Text-to-Image Diffusion Models
arXiv:2508.20424v3 Announce Type: replace Abstract: Diffusion models are a powerful class of generative models that produce images and other content from user prompts, but they are computationally intensive. To mitigate this cost, recent academic and industry work has adopted approximate caching, which reuses intermediate states from similar prompts stored in a cache. While efficient, this optimization introduces new security risks by breaking isolation among users. This paper provides a comprehensive assessment of the security vulnerabilities introduced by approximate caching. First, we demonstrate a remote covert channel established through the approximate cache, where a sender injects prompts with special keywords into the cache system and a receiver can recover them even days later, allowing the two to exchange information. Second, we introduce a prompt stealing attack using the approximate cache, where an attacker can recover existing cached prompts from hits. Finally, we introduce a poisoning attack that embeds the attacker's logos into previously stolen prompts, leading to unexpected logo rendering for requests that hit the poisoned cache prompts. These attacks are all performed remotely through the serving system, demonstrating severe security vulnerabilities in approximate caching. The code for this work is available.
https://arxiv.org/abs/2508.20424
Academic Papers
svg
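The isolation break behind these attacks can be illustrated with a toy approximate cache keyed by prompt similarity. We use Jaccard token overlap as a stand-in for the embedding similarity real systems use; the class and threshold are our own illustration.

```python
def jaccard(a, b):
    ta, tb = set(a.split()), set(b.split())
    return len(ta & tb) / len(ta | tb)

class ApproxCache:
    """Toy approximate cache: a similar-enough prompt returns a hit.

    A hit reveals that someone previously cached a similar prompt,
    which is the cross-user observation the attacks build on.
    """
    def __init__(self, threshold=0.6):
        self.threshold = threshold
        self.entries = {}  # prompt -> cached intermediate state

    def insert(self, prompt, state):
        self.entries[prompt] = state

    def lookup(self, prompt):
        for cached, state in self.entries.items():
            if jaccard(prompt, cached) >= self.threshold:
                return state  # cache hit: reuse (and leak)
        return None
```

A sender who inserts prompts with rare keywords, and a receiver who later probes with similar prompts and observes hits, have exactly the covert channel the abstract describes.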
472fb12c7c1c462a4c707eae9207fc53e334d49415cfda5071af756844b8fd36
2026-01-23T00:00:00-05:00
The Percept-V Challenge: Can Multimodal LLMs Crack Simple Perception Problems?
arXiv:2508.21143v3 Announce Type: replace Abstract: Cognitive science research treats visual perception, the ability to understand and make sense of a visual input, as one of the early developmental signs of intelligence. Its TVPS-4 framework categorizes and tests human perception along seven skills, such as visual discrimination and form constancy. Do Multimodal Large Language Models (MLLMs) match up to humans in basic perception? Even though many benchmarks evaluate MLLMs on advanced reasoning and knowledge skills, limited research focuses evaluation on simple perception. In response, we introduce Percept-V, a dataset containing 6000 program-generated, uncontaminated images divided into 30 domains, where each domain tests one or more TVPS-4 skills. Our focus is on perception, so we keep our domains quite simple, and the reasoning and knowledge required to solve them are minimal. Since modern-day MLLMs can solve much more complex tasks, our a priori expectation is that they will solve these domains very easily. Contrary to this expectation, our experiments show weak performance from SoTA proprietary and open-source MLLMs compared to very high human performance on Percept-V. We find that as the number of objects in the image increases, performance drops rather fast. Our experiments also identify the perception skills that are considerably harder for all models.
https://arxiv.org/abs/2508.21143
Academic Papers
svg
86d5f689a03c282b9742c438917223b9211b605a98f69964ac8e8d4541b8d77a
2026-01-23T00:00:00-05:00
Is this chart lying to me? Automating the detection of misleading visualizations
arXiv:2508.21675v2 Announce Type: replace Abstract: Misleading visualizations are a potent driver of misinformation on social media and the web. By violating chart design principles, they distort data and lead readers to draw inaccurate conclusions. Prior work has shown that both humans and multimodal large language models (MLLMs) are frequently deceived by such visualizations. Automatically detecting misleading visualizations and identifying the specific design rules they violate could help protect readers and reduce the spread of misinformation. However, the training and evaluation of AI models has been limited by the absence of large, diverse, and openly available datasets. In this work, we introduce Misviz, a benchmark of 2,604 real-world visualizations annotated with 12 types of misleaders. To support model training, we also create Misviz-synth, a synthetic dataset of 57,665 visualizations generated using Matplotlib and based on real-world data tables. We perform a comprehensive evaluation on both datasets using state-of-the-art MLLMs, rule-based systems, and image-axis classifiers. Our results reveal that the task remains highly challenging. We release Misviz, Misviz-synth, and the accompanying code.
https://arxiv.org/abs/2508.21675
Academic Papers
svg
932e67ea5930937bd822ff88088c3ee1e75c7711f37382bca907e6817af712e0
2026-01-23T00:00:00-05:00
StoxLSTM: A Stochastic Extended Long Short-Term Memory Network for Time Series Forecasting
arXiv:2509.01187v2 Announce Type: replace Abstract: The Extended Long Short-Term Memory (xLSTM) network has demonstrated strong capability in modeling complex long-term dependencies in time series data. Despite its success, the deterministic architecture of xLSTM limits its representational capacity and forecasting performance, especially on challenging real-world time series datasets characterized by inherent uncertainty, stochasticity, and complex hierarchical latent dynamics. In this work, we propose StoxLSTM, a stochastic xLSTM within a designed state space modeling framework, which integrates latent stochastic variables directly into the recurrent units to effectively model deep latent temporal dynamics and uncertainty. The designed state space model follows an efficient non-autoregressive generative approach, achieving strong predictive performance without complex modifications to the original xLSTM architecture. Extensive experiments on publicly available benchmark datasets demonstrate that StoxLSTM consistently outperforms state-of-the-art baselines, achieving superior performance and generalization.
https://arxiv.org/abs/2509.01187
Academic Papers
svg
3a281235cb496322ec378f75b502723688784eeda895ca7758f62aab78f5ab1e
2026-01-23T00:00:00-05:00
Disentangling trust from cooperation: Trust as reduced monitoring across social dilemmas
arXiv:2509.04143v3 Announce Type: replace Abstract: It is commonly assumed that trust increases cooperation. However, game-theoretic models often fail to distinguish between cooperative actions and trust, making it difficult to independently measure trust and determine how its effects vary in different social dilemmas. To address this, we build on influential theories that equate trust with reduced monitoring of an agent's actions. We implement this as a heuristic that cognitively bounded agents can use in repeated games to avoid spending time and effort always monitoring their partner. Agents using this heuristic reduce monitoring of a partner's actions once a threshold level of cooperativeness has been observed -- providing a measurable and architecture-agnostic definition of trust. Using evolutionary game theory, we systematically analyse the success of strategies that use this trust heuristic across the entire space of two-player symmetric social dilemma games. We demonstrate that trust-as-reduced-monitoring facilitates cooperation in two different ways. First, when monitoring is costly, trust heuristics allow for higher levels of cooperation in social dilemmas where the temptation to defect is high. Second, when agents can make action errors, trust heuristics promote cooperation even in coordination problems. Our results disentangle trust from cooperation, and provide a behavioural measure of trust that applies across interaction types.
https://arxiv.org/abs/2509.04143
Academic Papers
svg
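The trust-as-reduced-monitoring heuristic described in the abstract above can be illustrated with a toy simulation. This is a hypothetical sketch, not the paper's evolutionary game-theoretic model: an agent pays a per-round cost to monitor its partner, and stops monitoring (i.e. trusts) once it has observed a threshold number of consecutive cooperative moves. All names, payoffs, and parameters here are illustrative assumptions.

```python
import random

def play_repeated_game(rounds, threshold, monitor_cost, partner_coop_prob, seed=0):
    """Toy sketch of trust-as-reduced-monitoring: the agent pays
    monitor_cost each round to observe its partner, and stops
    monitoring once `threshold` consecutive cooperative moves
    have been observed (the measurable definition of trust)."""
    rng = random.Random(seed)
    consecutive_coop = 0
    trusting = False
    payoff = 0.0
    for _ in range(rounds):
        partner_cooperates = rng.random() < partner_coop_prob
        if not trusting:
            payoff -= monitor_cost            # monitoring is costly
            if partner_cooperates:
                consecutive_coop += 1
                if consecutive_coop >= threshold:
                    trusting = True           # trust: stop monitoring
            else:
                consecutive_coop = 0          # defection resets the count
        # illustrative payoff: reward only when the partner cooperates
        payoff += 3.0 if partner_cooperates else 0.0
    return payoff, trusting
```

With a fully cooperative partner the agent pays the monitoring cost only until the threshold is reached, which is the mechanism by which the heuristic makes cooperation cheaper when monitoring is costly.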
85ff9f7e2d3782fb2a1587a82ee2dbf9ab95e41061acf93e1817da8fa0855f8e
2026-01-23T00:00:00-05:00
Skywork UniPic 2.0: Building Kontext Model with Online RL for Unified Multimodal Model
arXiv:2509.04548v2 Announce Type: replace Abstract: Recent advances in multimodal models have demonstrated impressive capabilities in unified image generation and editing. However, many prominent open-source models prioritize scaling model parameters over optimizing training strategies, limiting their efficiency and performance. In this work, we present UniPic2-SD3.5M-Kontext, a 2B-parameter DiT model based on SD3.5-Medium, which achieves state-of-the-art image generation and editing while extending seamlessly into a unified multimodal framework. Our approach begins with architectural modifications to SD3.5-Medium and large-scale pre-training on high-quality data, enabling joint text-to-image generation and editing capabilities. To enhance instruction following and editing consistency, we propose a novel Progressive Dual-Task Reinforcement strategy (PDTR), which effectively strengthens both tasks in a staged manner. We empirically validate that the reinforcement phases for different tasks are mutually beneficial and do not induce negative interference. After pre-training and reinforcement strategies, UniPic2-SD3.5M-Kontext demonstrates stronger image generation and editing capabilities than models with significantly more generation parameters, including BAGEL (7B) and Flux-Kontext (12B). Furthermore, following MetaQuery, we connect UniPic2-SD3.5M-Kontext and Qwen2.5-VL-7B via a connector and perform joint training to launch a unified multimodal model, UniPic2-Metaquery. UniPic2-Metaquery integrates understanding, generation, and editing, achieving top-tier performance across diverse tasks with a simple and scalable training paradigm. This consistently validates the effectiveness and generalizability of our proposed training paradigm, which we formalize as Skywork UniPic 2.0.
https://arxiv.org/abs/2509.04548
Academic Papers
svg
17dc726e7ac8f68e1bf8ccade10fac932adc9faa2c3e1efcc71acdcb63ecaf81
2026-01-23T00:00:00-05:00
Collaborate, Deliberate, Evaluate: How LLM Alignment Affects Coordinated Multi-Agent Outcomes
arXiv:2509.05882v2 Announce Type: replace Abstract: As Large Language Models (LLMs) get integrated into diverse workflows, they are increasingly being regarded as "collaborators" with humans, and required to work in coordination with other AI systems. If such AI collaborators are to reliably coordinate their actions and behaviors with humans or other AIs, their properties and behaviors over multi-turn interactions must be known and predictable. This paper examines how different alignment methods affect LLM agents' effectiveness as partners in multi-turn, multi-party collaborations. We study this question through the lens of intervention agents that insert themselves into group dialogues not to provide answers, but to encourage the collaborative group to slow down and reflect upon their reasoning for deliberative decision-making. Common alignment techniques are typically developed under simplified single-user settings and assume the optimality of the underlying token MDP. Using the theoretical lens of the modified-action MDP, we show how they do not account for the dynamics of long-horizon multi-party interactions. We present a novel roleplay simulation methodology, where we align LLMs according to different methods and then deploy them in collaborative task dialogues to quantify how interventions affect the trajectory of group collaboration, belief alignment, and coordination. Our results show that an intervention agent that is robust to action modification significantly outperforms common alignment baselines in supporting correct task outcomes.
https://arxiv.org/abs/2509.05882
Academic Papers
svg
812665d487806521ca9f4b6c027f60b33f85f3c5169edc80cfd85dc9a51e2969
2026-01-23T00:00:00-05:00
Xi+: Uncertainty Supervision for Robust Speaker Embedding
arXiv:2509.05993v4 Announce Type: replace Abstract: There are various factors that can influence the performance of speaker recognition systems, such as emotion, language and other speaker-related or context-related variations. Since individual speech frames do not contribute equally to the utterance-level representation, it is essential to estimate the importance or reliability of each frame. The xi-vector model addresses this by assigning different weights to frames based on uncertainty estimation. However, its uncertainty estimation model is implicitly trained through classification loss alone and does not consider the temporal relationships between frames, which may lead to suboptimal supervision. In this paper, we propose an improved architecture, xi+. Compared to xi-vector, xi+ incorporates a temporal attention module to capture frame-level uncertainty in a context-aware manner. In addition, we introduce a novel loss function, Stochastic Variance Loss, which explicitly supervises the learning of uncertainty. Results demonstrate consistent performance improvements of about 10% on the VoxCeleb1-O set and 11% on the NIST SRE 2024 evaluation set.
https://arxiv.org/abs/2509.05993
Academic Papers
svg
7c5b962a734594edd2b725251f760ca9f3a1fd1a695966c3a412c2ca35934f5e
2026-01-23T00:00:00-05:00
Competitive Audio-Language Models with Data-Efficient Single-Stage Training on Public Data
arXiv:2509.07526v3 Announce Type: replace Abstract: Large language models (LLMs) have transformed NLP, yet their integration with audio remains underexplored despite audio's centrality to human communication. We introduce Falcon3-Audio, a family of Audio-Language Models (ALMs) built on instruction-tuned LLMs and Whisper encoders. Using a remarkably small amount of public audio data, less than 30K hours (5K unique), Falcon3-Audio-7B matches the best reported performance among open-weight models on the MMAU benchmark, with a score of 64.14, matching R1-AQA, while distinguishing itself through superior data and parameter efficiency, single-stage training, and transparency. Notably, our smallest 1B model remains competitive with larger open models ranging from 2B to 13B parameters. Through extensive ablations, we find that common complexities such as curriculum learning, multiple audio encoders, and intricate cross-attention connectors are not required for strong performance, even compared to models trained on over 500K hours of data.
https://arxiv.org/abs/2509.07526
Academic Papers
svg
47988681943adb9d1493e0994077740217cf776c6a87ad68b5026c34e813d2da
2026-01-23T00:00:00-05:00
Behind the Scenes: Mechanistic Interpretability of LoRA-adapted Whisper for Speech Emotion Recognition
arXiv:2509.08454v3 Announce Type: replace Abstract: Large pre-trained speech models such as Whisper offer strong generalization but pose significant challenges for resource-efficient adaptation. Low-Rank Adaptation (LoRA) has become a popular parameter-efficient fine-tuning method, yet its underlying mechanisms in speech tasks remain poorly understood. In this work, we conduct the first systematic mechanistic interpretability study of LoRA within the Whisper encoder for speech emotion recognition (SER). Using a suite of analytical tools, including layer contribution probing, logit-lens inspection, and representational similarity via singular value decomposition (SVD) and centered kernel alignment (CKA), we reveal two key mechanisms: a delayed specialization process that preserves general features in early layers before consolidating task-specific information, and a forward alignment, backward differentiation dynamic between LoRA's matrices. Our findings clarify how LoRA reshapes encoder hierarchies, providing both empirical insights and a deeper mechanistic understanding for designing efficient and interpretable adaptation strategies in large speech models. Our code is available at https://github.com/harryporry77/Behind-the-Scenes.
https://arxiv.org/abs/2509.08454
Academic Papers
svg
7a89ae2408e6493c78d491bc952b465c63259b1bd54e51493c01366f92f43b07
2026-01-23T00:00:00-05:00
LLMs Homogenize Values in Constructive Arguments on Value-Laden Topics
arXiv:2509.10637v2 Announce Type: replace Abstract: Large language models (LLMs) are increasingly used to promote prosocial and constructive discourse online. Yet little is known about how these models negotiate and shape underlying values when reframing people's arguments on value-laden topics. We conducted experiments with 465 participants from India and the United States, who wrote comments on homophobic and Islamophobic threads, and reviewed human-written and LLM-rewritten constructive versions of these comments. Our analysis shows that the LLM systematically diminishes Conservative values while elevating prosocial values such as Benevolence and Universalism. When these comments were read by others, participants opposing same-sex marriage or Islam found human-written comments more aligned with their values, whereas those supportive of these communities found LLM-rewritten versions more aligned with their values. These findings suggest that value homogenization in LLM-mediated prosocial discourse runs the risk of marginalizing conservative viewpoints on value-laden topics and may inadvertently shape the dynamics of online discourse.
https://arxiv.org/abs/2509.10637
Academic Papers
svg
70270400749cef7127b4b90ebcfbcb441fb576c095ac9cd8de90618f325b859c
2026-01-23T00:00:00-05:00
No Mesh, No Problem: Estimating Coral Volume and Surface from Sparse Multi-View Images
arXiv:2509.11164v3 Announce Type: replace Abstract: Effective reef monitoring requires the quantification of coral growth via accurate volumetric and surface area estimates, which is a challenging task due to the complex morphology of corals. We propose a novel, lightweight, and scalable learning framework that addresses this challenge by predicting the 3D volume and surface area of coral-like objects from 2D multi-view RGB images. Our approach utilizes a pre-trained module (VGGT) to extract dense point maps from each view; these maps are merged into a unified point cloud and enriched with per-view confidence scores. The resulting cloud is fed to two parallel DGCNN decoder heads, which jointly output the volume and the surface area of the coral, as well as their corresponding confidence estimate. To enhance prediction stability and provide uncertainty estimates, we introduce a composite loss function based on Gaussian negative log-likelihood in both real and log domains. Our method achieves competitive accuracy and generalizes well to unseen morphologies. This framework paves the way for efficient and scalable coral geometry estimation directly from a sparse set of images, with potential applications in coral growth analysis and reef monitoring.
https://arxiv.org/abs/2509.11164
Academic Papers
svg
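The coral-volume abstract above mentions a composite loss based on Gaussian negative log-likelihood in both real and log domains. The following is a minimal sketch of what such a loss could look like; the exact parameterisation and weighting in the paper are not specified in the abstract, so `composite_volume_loss`, the shared variance, and `eps` are illustrative assumptions.

```python
import math

def gaussian_nll(mu, var, target):
    """Negative log-likelihood of `target` under a Gaussian N(mu, var)."""
    return 0.5 * (math.log(2 * math.pi * var) + (target - mu) ** 2 / var)

def composite_volume_loss(pred_mu, pred_var, target, eps=1e-6):
    """Hypothetical composite loss in the spirit of the abstract: a
    Gaussian NLL on the raw quantity plus a Gaussian NLL on its
    logarithm, penalising relative as well as absolute error."""
    real_term = gaussian_nll(pred_mu, pred_var, target)
    # Log-domain term: compare log-volumes. The variance is reused here
    # for simplicity; a separately predicted log-variance is equally
    # plausible under the abstract's description.
    log_term = gaussian_nll(math.log(pred_mu + eps), pred_var,
                            math.log(target + eps))
    return real_term + log_term
```

Predicting the variance alongside the mean is what gives the model the uncertainty estimate the abstract refers to: the NLL is minimised when the predicted variance matches the actual error scale.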
9a452ed3d5377f2c33ef14357289acf44de48007dd11446e31acb18250ef3148
2026-01-23T00:00:00-05:00
DF-LLaVA: Unlocking MLLM's potential for Synthetic Image Detection via Prompt-Guided Knowledge Injection
arXiv:2509.14957v2 Announce Type: replace Abstract: With the increasing prevalence of synthetic images, evaluating image authenticity and locating forgeries accurately while maintaining human interpretability remains a challenging task. Existing detection models primarily focus on simple authenticity classification, ultimately providing only a forgery probability or binary judgment, which offers limited explanatory insights into image authenticity. Moreover, while MLLM-based detection methods can provide more interpretable results, they still lag behind expert models in terms of pure authenticity classification accuracy. To address this, we propose DF-LLaVA, a simple yet effective framework that unlocks the intrinsic discrimination potential of MLLMs. Our approach first extracts latent knowledge from MLLMs and then injects it into training via prompts. This framework allows LLaVA to achieve outstanding detection accuracy exceeding expert models while still maintaining the interpretability offered by MLLMs. Extensive experiments confirm the superiority of our DF-LLaVA, achieving both high accuracy and explainability in synthetic image detection. Code is available online at: https://github.com/Eliot-Shen/DF-LLaVA.
https://arxiv.org/abs/2509.14957
Academic Papers
svg
8c5444abeb9d5b2541eeed149f51f6c1ebedfc56c3abe405934a4efd12ac712b
2026-01-23T00:00:00-05:00
From Canopy to Ground via ForestGen3D: Learning Cross-Domain Generation of 3D Forest Structure from Aerial-to-Terrestrial LiDAR
arXiv:2509.16346v2 Announce Type: replace Abstract: The 3D structure of living and non-living components in ecosystems plays a critical role in determining ecological processes and feedbacks from both natural and human-driven disturbances. Anticipating the effects of wildfire, drought, disease, or atmospheric deposition depends on accurate characterization of 3D vegetation structure, yet widespread measurement remains prohibitively expensive and often infeasible. We present ForestGen3D, a cross-domain generative framework that preserves aerial LiDAR (ALS) observed 3D forest structure while inferring missing sub-canopy detail. ForestGen3D is based on conditional denoising diffusion probabilistic models trained on co-registered ALS and terrestrial LiDAR (TLS) data. The model generates realistic TLS-like point clouds that remain spatially consistent with ALS geometry, enabling landscape-scalable reconstruction of full vertical forest structure. We evaluate ForestGen3D at tree, plot, and landscape scales using real-world data from mixed conifer ecosystems, and show through qualitative and quantitative geometric and distributional analyses that it produces high-fidelity reconstructions closely matching TLS reference data in terms of 3D structural similarity and downstream biophysical metrics, including tree height, DBH, crown diameter, and crown volume. We further introduce and demonstrate the expected point containment (EPC) metric which serves as a practical proxy for generation quality in settings where TLS ground truth is unavailable. Our results demonstrate that ForestGen3D enhances the utility of ALS only environments by inferring ecologically plausible sub-canopy structure while faithfully preserving the landscape heterogeneity encoded in ALS observations, thereby providing a richer 3D representation for ecological analysis, structural fuel characterization and related remote sensing applications.
https://arxiv.org/abs/2509.16346
Academic Papers
svg
22aaad587b2266efe3c77fd9fdf08a8fd3f8e9b1edb09ef28b08a7353ec5cef6
2026-01-23T00:00:00-05:00
TextCrafter: Optimization-Calibrated Noise for Defending Against Text Embedding Inversion
arXiv:2509.17302v5 Announce Type: replace Abstract: Text embedding inversion attacks reconstruct original sentences from latent representations, posing severe privacy threats in collaborative inference and edge computing. We propose TextCrafter, an optimization-based adversarial perturbation mechanism that combines RL-learned, geometry-aware noise injection orthogonal to user embeddings with cluster priors and PII signal guidance to suppress inversion while preserving task utility. Unlike prior defenses, which are either non-learnable or agnostic to perturbation direction, TextCrafter provides a directional protective policy that balances privacy and utility. Under a strong privacy setting, TextCrafter maintains 70% classification accuracy on four datasets and consistently outperforms Gaussian/LDP baselines across lower privacy budgets, demonstrating a superior privacy-utility trade-off.
https://arxiv.org/abs/2509.17302
Academic Papers
svg
e5b5282007516cba829693cfa790e726b21fd4df2666c9ac1f2c4f59c04429c8
2026-01-23T00:00:00-05:00
VideoPro: Adaptive Program Reasoning for Long Video Understanding
arXiv:2509.17743v3 Announce Type: replace Abstract: Large language models (LLMs) have shown promise in generating program workflows for visual tasks. However, previous approaches often rely on closed-source models, lack systematic reasoning, and struggle with long-form video question answering (videoQA). To address these challenges, we introduce the FS-VisPR framework, an adaptive visual program reasoning approach that balances fast reasoning for simple queries with slow reasoning for difficult ones. First, we design efficient visual modules (e.g., key clip retrieval and subtitle retrieval) to support long-form video tasks. Then, we construct a diverse and high-quality fast-slow reasoning dataset with a strong LLM to align open-source language models' ability to generate visual program workflows as FS-LLM. Next, we design a fast-slow reasoning framework with FS-LLM: Simple queries are directly solved by VideoLLMs, while difficult ones invoke visual program reasoning, motivated by human-like reasoning processes. During this process, low-confidence fast-thinking answers trigger a second-stage slow-reasoning process, and a fallback mechanism to fast reasoning is activated if the program execution fails. Moreover, we improve visual programs through parameter search during both training and inference. By adjusting the parameters of the visual modules within the program, multiple variants are generated: during training, programs that yield correct answers are selected, while during inference, the program with the highest-confidence result is applied. Experiments show that FS-VisPR improves both efficiency and reliability in visual program workflows. It achieves 50.4% accuracy on LVBench, surpassing GPT-4o and matching the performance of Qwen2.5VL-72B on VideoMME.
https://arxiv.org/abs/2509.17743
Academic Papers
svg
94a246981d11440f4b4b2bcb0a46c4a5decd2e7b1b72da1e5d40dd3e49d0aa2d
2026-01-23T00:00:00-05:00
FedIA: Towards Domain-Robust Aggregation in Federated Graph Learning
arXiv:2509.18171v3 Announce Type: replace Abstract: Federated Graph Learning (FGL) enables a central server to coordinate model training across distributed clients without local graph data being shared. However, FGL significantly suffers from cross-silo domain shifts, where each "silo" (domain) contains a limited number of clients with distinct graph topologies. These heterogeneities induce divergent optimization trajectories, ultimately leading to global model divergence. In this work, we reveal a severe architectural pathology termed Structural Orthogonality: the topology-dependent message passing mechanism forces gradients from different domains to target disjoint coordinates in the parameter space. Through a controlled comparison between backbones, we statistically prove that GNN updates are near-perpendicular across domains (with projection ratios $\to$ 0). Consequently, naive averaging leads to Consensus Collapse, a phenomenon where sparse, informative structural signals from individual domains are diluted by the near-zero updates of others. This forces the global model into a "sub-optimal" state that fails to represent domain-specific structural patterns, resulting in poor generalization. To address this, we propose FedIA, a lightweight server-side framework designed to reconcile update conflicts without auxiliary communication. FedIA operates in two stages: (1) Global Importance Masking (GIM) identifies a shared parameter subspace to filter out domain-specific structural noise and prevent signal dilution; (2) Confidence-Aware Momentum Weighting (CAM) dynamically re-weights client contributions based on gradient reliability to amplify stable optimization signals.
https://arxiv.org/abs/2509.18171
Academic Papers
svg
fe706675ad14361fb46e5180f9795a0244a22bc900a4ea24bace3a767049e515
2026-01-23T00:00:00-05:00
MCGrad: Multicalibration at Web Scale
arXiv:2509.19884v3 Announce Type: replace Abstract: We propose MCGrad, a novel and scalable multicalibration algorithm. Multicalibration - calibration in subgroups of the data - is an important property for the performance of machine learning-based systems. Existing multicalibration methods have thus far received limited traction in industry. We argue that this is because existing methods (1) require such subgroups to be manually specified, which ML practitioners often struggle with, (2) are not scalable, or (3) may harm other notions of model performance such as log loss and Area Under the Precision-Recall Curve (PRAUC). MCGrad does not require explicit specification of protected groups, is scalable, and often improves other ML evaluation metrics instead of harming them. MCGrad has been in production at Meta, and is now part of hundreds of production models. We present results from these deployments as well as results on public datasets. We provide an open source implementation of MCGrad at https://github.com/facebookincubator/MCGrad.
https://arxiv.org/abs/2509.19884
Academic Papers
svg
fd2d7ccc3f9239d7a1313d4c1ec25a5a4e1aa2a7bfd971de9716faf43d3b17f8
2026-01-23T00:00:00-05:00
Real-Time Object Detection Meets DINOv3
arXiv:2509.20787v3 Announce Type: replace Abstract: Benefiting from the simplicity and effectiveness of Dense O2O and MAL, DEIM has become the mainstream training framework for real-time DETRs, significantly outperforming the YOLO series. In this work, we extend it with DINOv3 features, resulting in DEIMv2. DEIMv2 spans eight model sizes from X to Atto, covering GPU, edge, and mobile deployment. For the X, L, M, and S variants, we adopt DINOv3-pretrained or distilled backbones and introduce a Spatial Tuning Adapter (STA), which efficiently converts DINOv3's single-scale output into multi-scale features and complements strong semantics with fine-grained details to enhance detection. For ultra-lightweight models (Nano, Pico, Femto, and Atto), we employ HGNetv2 with depth and width pruning to meet strict resource budgets. Together with a simplified decoder and an upgraded Dense O2O, this unified design enables DEIMv2 to achieve a superior performance-cost trade-off across diverse scenarios, establishing new state-of-the-art results. Notably, our largest model, DEIMv2-X, achieves 57.8 AP with only 50.3 million parameters, surpassing prior X-scale models that require over 60 million parameters for just 56.5 AP. On the compact side, DEIMv2-S is the first sub-10 million model (9.71 million) to exceed the 50 AP milestone on COCO, reaching 50.9 AP. Even the ultra-lightweight DEIMv2-Pico, with just 1.5 million parameters, delivers 38.5 AP, matching YOLOv10-Nano (2.3 million) with around 50 percent fewer parameters. Our code and pre-trained models are available at https://github.com/Intellindust-AI-Lab/DEIMv2
https://arxiv.org/abs/2509.20787
Academic Papers
svg
6d6d57f575496e02301cec1845669a850cbee747a23b964a8c814370846d9d0c
2026-01-23T00:00:00-05:00
PhishLumos: An Adaptive Multi-Agent System for Proactive Phishing Campaign Mitigation
arXiv:2509.21772v2 Announce Type: replace Abstract: Phishing attacks are a significant societal threat, disproportionately harming vulnerable populations and eroding trust in essential digital services. Current defenses are often reactive, failing against modern evasive tactics like cloaking that conceal malicious content. To address this, we introduce PhishLumos, an adaptive multi-agent system that proactively mitigates entire attack campaigns. It confronts a core cybersecurity imbalance: attackers can easily scale operations, while defense remains an intensive expert task. Instead of being blocked by evasion, PhishLumos treats it as a critical signal to investigate the underlying infrastructure. Its Large Language Model (LLM)-powered agents uncover shared hosting, certificates, and domain registration patterns. On real-world data, our system identified 100% of campaigns in the median case, over a week before their confirmation by cybersecurity experts. PhishLumos demonstrates a practical shift from reactive URL blocking to proactive campaign mitigation, protecting users before they are harmed and making the digital world safer for all.
https://arxiv.org/abs/2509.21772
Academic Papers
svg
3e153487198c6006513d8d0d3bf6dd806babc91cd6b6f5a3c65d691b938071c2
2026-01-23T00:00:00-05:00
VeriLLM: A Lightweight Framework for Publicly Verifiable Decentralized Inference
arXiv:2509.24257v4 Announce Type: replace Abstract: Decentralized inference provides a scalable and resilient paradigm for serving large language models (LLMs), enabling fragmented global resource utilization and reducing reliance on centralized providers. However, in a permissionless environment without trusted nodes, ensuring the correctness of model outputs remains a core challenge. We introduce VeriLLM, a publicly verifiable protocol for decentralized LLM inference that achieves security with incentive guarantees while maintaining practical efficiency. VeriLLM combines lightweight empirical rerunning with minimal on-chain checks to preclude free-riding, allowing verifiers to validate results at approximately 1% of the underlying inference cost by exploiting the structural separation between prefill and autoregressive decoding. To prevent verification bottlenecks, we design an isomorphic inference-verification architecture that multiplexes both inference and verification roles across the same GPU workers. This design (i) improves GPU utilization and overall throughput, (ii) enlarges the effective validator set, enhancing robustness and liveness, and (iii) enforces task indistinguishability to prevent node-specific optimizations or selective behavior. Through theoretical analysis and system-level evaluation, we show that VeriLLM achieves reliable public verifiability with minimal overhead, offering a practical foundation for trustworthy and scalable decentralized LLM inference.
https://arxiv.org/abs/2509.24257
Academic Papers
svg
b82e2211ce74864c32d0da42d43ebd41e58f31983723a52ece15dc0b246d872d
2026-01-23T00:00:00-05:00
BPMN Assistant: An LLM-Based Approach to Business Process Modeling
arXiv:2509.24592v2 Announce Type: replace Abstract: This paper presents BPMN Assistant, a tool that leverages Large Language Models for natural language-based creation and editing of BPMN diagrams. While direct XML generation is common, it is verbose, slow, and prone to syntax errors during complex modifications. We introduce a specialized JSON-based intermediate representation designed to facilitate atomic editing operations through function calling. We evaluate our approach against direct XML manipulation using a suite of state-of-the-art models, including GPT-5.1, Claude 4.5 Sonnet, and DeepSeek V3. Results demonstrate that the JSON-based approach significantly outperforms direct XML in editing tasks, achieving higher or equivalent success rates across all evaluated models. Furthermore, despite requiring more input context, our approach reduces generation latency by approximately 43% and output token count by over 75%, offering a more reliable and responsive solution for interactive process modeling.
https://arxiv.org/abs/2509.24592
Academic Papers
svg
cdd71dc7016d1c694208b04d4ffb3bbab0d513141531c8876db5296409f394c6
2026-01-23T00:00:00-05:00
PatchEAD: Unifying Industrial Visual Prompting Frameworks for Patch-Exclusive Anomaly Detection
arXiv:2509.25856v2 Announce Type: replace Abstract: Industrial anomaly detection is increasingly relying on foundation models, aiming for strong out-of-distribution generalization and rapid adaptation in real-world deployments. Notably, past studies have primarily focused on textual prompt tuning, leaving the intrinsic visual counterpart fragmented into processing steps specific to each foundation model. We aim to address this limitation by proposing a unified patch-focused framework, Patch-Exclusive Anomaly Detection (PatchEAD), enabling training-free anomaly detection that is compatible with diverse foundation models. The framework constructs visual prompting techniques, including an alignment module and foreground masking. Our experiments show superior few-shot and batch zero-shot performance compared to prior work, despite the absence of textual features. Our study further examines how backbone structure and pretrained characteristics affect patch-similarity robustness, providing actionable guidance for selecting and configuring foundation models for real-world visual inspection. These results confirm that a well-unified patch-only framework can enable quick, calibration-light deployment without the need for carefully engineered textual prompts.
https://arxiv.org/abs/2509.25856
Academic Papers
svg
9adbe5ba7b9d4c28aabf297a0173cdce572a8123edee05cc2fc08b4fb2f28808
2026-01-23T00:00:00-05:00
Signature-Informed Transformer for Asset Allocation
arXiv:2510.03129v3 Announce Type: replace Abstract: Modern deep learning for asset allocation typically separates forecasting from optimization. We argue this creates a fundamental mismatch where minimizing prediction errors fails to yield robust portfolios. We propose the Signature-Informed Transformer to address this by unifying feature extraction and decision making into a single policy. Our model employs path signatures to encode complex path dependencies and introduces a specialized attention mechanism that targets geometric asset relationships. By directly minimizing the Conditional Value at Risk we ensure the training objective aligns with financial goals. We prove that our attention module rigorously amplifies signature-derived signals. Experiments across diverse equity universes show our approach significantly outperforms both traditional strategies and advanced forecasting baselines. The code is available at: https://anonymous.4open.science/r/Signature-Informed-Transformer-For-Asset-Allocation-DB88
https://arxiv.org/abs/2510.03129
Academic Papers
svg
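The objective above is the Conditional Value at Risk. As a hedged illustration only (not the paper's end-to-end training implementation), the empirical CVaR of a sample of portfolio returns is simply the average of the worst (1 - alpha) fraction of losses:

```python
def empirical_cvar(returns, alpha=0.95):
    """Empirical Conditional Value at Risk of a sample of returns.

    CVaR_alpha is the mean loss in the worst (1 - alpha) tail,
    where a loss is the negated return.
    """
    losses = sorted((-r for r in returns), reverse=True)  # worst losses first
    k = max(1, int(round(len(losses) * (1 - alpha))))     # tail size
    tail = losses[:k]
    return sum(tail) / len(tail)

# With 20 returns and alpha = 0.95, the tail holds the single worst loss.
rets = [0.01, 0.02, -0.05, 0.03, -0.01, 0.015, -0.02, 0.005, 0.0, 0.01,
        0.012, -0.03, 0.02, 0.01, -0.015, 0.008, 0.025, -0.04, 0.018, 0.01]
print(empirical_cvar(rets, alpha=0.95))  # -> 0.05
```

In the paper this quantity is minimized directly by the policy; here it is shown only as the plain sample statistic.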
8afd93c9a56c5e1bdcb668a6359e9cbd523a7d15659080f614c9e73b88c6d5a9
2026-01-23T00:00:00-05:00
DECOR: Deep Embedding Clustering with Orientation Robustness
arXiv:2510.03328v2 Announce Type: replace Abstract: In semiconductor manufacturing, early detection of wafer defects is critical for product yield optimization. However, raw wafer data from wafer quality tests are often complex, unlabeled, imbalanced and can contain multiple defects on a single wafer, making it crucial to design clustering methods that remain reliable under such imperfect data conditions. We introduce DECOR, a deep clustering framework with orientation robustness that groups complex defect patterns from wafer maps into consistent clusters. We evaluate our method on the open source MixedWM38 dataset, demonstrating its ability to discover clusters without manual tuning. DECOR explicitly accounts for orientation variations in wafer maps, ensuring that spatially similar defects are consistently clustered regardless of their rotation or alignment. Experiments indicate that our method outperforms existing clustering baseline methods, thus providing a reliable and scalable solution for automated visual inspection systems.
https://arxiv.org/abs/2510.03328
Academic Papers
svg
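DECOR learns orientation robustness inside the clustering model. As a generic illustration of the underlying idea (not the paper's mechanism), a binary wafer map can be made rotation-invariant before comparison by canonicalizing it to the lexicographically smallest of its four 90-degree rotations:

```python
def rot90(grid):
    """Rotate a 2D list of lists 90 degrees clockwise."""
    return [list(row) for row in zip(*grid[::-1])]

def canonical(grid):
    """Orientation-invariant key: the lexicographically smallest of the
    four 90-degree rotations of the map. Two maps that differ only by
    rotation share the same key, so they land in the same cluster."""
    best = grid
    g = grid
    for _ in range(3):
        g = rot90(g)
        if g < best:
            best = g
    return best

a = [[1, 0],
     [0, 0]]          # defect in the top-left corner
b = [[0, 1],
     [0, 0]]          # same defect rotated 90 degrees
print(canonical(a) == canonical(b))  # -> True
```

Reflections could be folded in the same way by also minimizing over mirrored copies.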
f8fb36a82aa02ec5953776c270c2f07f54a9876bb13357ba19fb57b3a4118a08
2026-01-23T00:00:00-05:00
PAD-TRO: Projection-Augmented Diffusion for Direct Trajectory Optimization
arXiv:2510.04436v2 Announce Type: replace Abstract: Recently, diffusion models have gained popularity and attention in trajectory optimization due to their capability of modeling multi-modal probability distributions. However, addressing nonlinear equality constraints, i.e., dynamic feasibility, remains a great challenge in diffusion-based trajectory optimization. Recent diffusion-based trajectory optimization frameworks rely on a single-shooting style approach where the denoised control sequence is applied to forward propagate the dynamical system, which cannot explicitly enforce constraints on the states and frequently leads to sub-optimal solutions. In this work, we propose a novel direct trajectory optimization approach via model-based diffusion, which directly generates a sequence of states. To ensure dynamic feasibility, we propose a gradient-free projection mechanism that is incorporated into the reverse diffusion process. Our results show that, compared to a recent state-of-the-art baseline, our approach leads to zero dynamic feasibility error and approximately 4x higher success rate in a quadrotor waypoint navigation scenario involving dense static obstacles.
https://arxiv.org/abs/2510.04436
Academic Papers
svg
0ead608fc3860c72e8f58067b849d2db9cc75c3ab97766b0a3a309483460198b
2026-01-23T00:00:00-05:00
New Insights into Involutory and Orthogonal MDS Matrices
arXiv:2510.05766v2 Announce Type: replace Abstract: MDS matrices play a critical role in the design of diffusion layers for block ciphers and hash functions due to their optimal branch number. Involutory and orthogonal MDS matrices offer additional benefits by allowing identical or nearly identical circuitry for both encryption and decryption, leading to equivalent implementation costs for both processes. These properties have been further generalized through the notions of semi-involutory and semi-orthogonal matrices. While much of the existing literature focuses on identifying efficiently implementable MDS candidates or proposing new constructions for MDS matrices of various orders, this work takes a different direction. Rather than introducing novel constructions or prioritizing implementation efficiency, we investigate structural relationships between the generalized variants and their conventional counterparts. Specifically, we establish nontrivial interconnections between semi-involutory and involutory matrices, as well as between semi-orthogonal and orthogonal matrices. Exploiting these relationships, we show that the number of semi-involutory MDS matrices can be directly derived from the number of involutory MDS matrices, and vice versa. A similar correspondence holds for semi-orthogonal and orthogonal MDS matrices. We also examine the intersection of these classes and show that the number of $3 \times 3$ MDS matrices that are both semi-involutory and semi-orthogonal coincides with the number of semi-involutory MDS matrices over $\mathbb{F}_{2^m}$. Furthermore, we derive the general structure of orthogonal matrices of arbitrary order $n$ over $\mathbb{F}_{2^m}$. Finally, leveraging the aforementioned interconnections, we present an alternative and direct derivation of the explicit formulae for counting $3 \times 3$ semi-involutory MDS matrices and $3 \times 3$ semi-orthogonal MDS matrices.
https://arxiv.org/abs/2510.05766
Academic Papers
svg
487a41a91c837e1f4fa5b2ada8daa802ac5dacbb610e294c4a9c7aac13ee98fe
2026-01-23T00:00:00-05:00
Does LLM Focus on the Right Words? Mitigating Context Bias in LLM-based Recommenders
arXiv:2510.10978v2 Announce Type: replace Abstract: Large language models (LLMs), owing to their extensive open-domain knowledge and semantic reasoning capabilities, have been increasingly integrated into recommender systems (RS). However, a substantial gap remains between the pre-training objectives of LLMs and the specific requirements of recommendation tasks. To address this gap, supervised fine-tuning (SFT) is commonly performed on specially curated recommendation datasets to further enhance their predictive ability. Despite its success, SFT exhibits a critical limitation: it induces Context Bias, whereby the model over-relies on auxiliary tokens, such as task descriptions and prefix-generated tokens, while underutilizing core user interaction tokens that encode user-specific preferences. This bias not only undermines recommendation accuracy but also raises unfairness concerns. To address this issue, we propose Group Distributionally Robust Optimization-based Tuning (GDRT), a novel fine-tuning paradigm that enforces consistent model performance across token groups with varying degrees of relevance to auxiliary tokens. By adaptively upweighting underperforming groups, typically those weakly correlated with auxiliary tokens, GDRT shifts the model's attention from superficial auxiliary cues to informative user interaction tokens, thereby mitigating context bias. Extensive experiments conducted on three public datasets demonstrate that GDRT effectively mitigates context bias, yielding substantial improvements in recommendation accuracy (with an average NDCG@10 gain of 24.29%) and significantly enhancing recommendation fairness. The code is available at https://github.com/WANGBohaO-jpg/GDRT.
https://arxiv.org/abs/2510.10978
Academic Papers
svg
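GDRT rests on group distributionally robust optimization, which adaptively upweights underperforming token groups. A minimal sketch of the standard exponentiated group-weight update (a common group-DRO rule; the paper's exact update may differ):

```python
import math

def group_dro_weights(group_losses, weights, eta=0.1):
    """One step of the classic group-DRO update: multiply each group's
    weight by exp(eta * loss), then renormalize. Groups with higher loss
    (e.g. tokens weakly correlated with auxiliary cues) gain weight, so
    the training objective focuses on them."""
    new = [w * math.exp(eta * l) for w, l in zip(weights, group_losses)]
    z = sum(new)
    return [w / z for w in new]

# Three token groups; the third is underperforming.
losses = [0.2, 0.3, 1.5]
weights = [1 / 3] * 3
for _ in range(5):
    weights = group_dro_weights(losses, weights)
print(weights)  # the third group's weight is now the largest
```

The robust training loss at each step would then be the weighted sum of the per-group losses under these weights.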
f053dcaa4f1e4df3176924248665a072a8d161cb57c767b33935a650d0bbca96
2026-01-23T00:00:00-05:00
CoSPED: Consistent Soft Prompt Targeted Data Extraction and Defense
arXiv:2510.11137v3 Announce Type: replace Abstract: Large language models have gained widespread attention recently, but their potential security vulnerabilities, especially privacy leakage, are also becoming apparent. To test and evaluate data extraction risks in LLMs, we propose CoSPED, short for Consistent Soft Prompt targeted data Extraction and Defense. We introduce several innovative components, including Dynamic Loss, Additive Loss, Common Loss, and a Self-Consistency Decoding Strategy, which we test to enhance the consistency of the soft prompt tuning process. Through extensive experimentation with various combinations, we achieved an extraction rate of 65.2% at a 50-token prefix comparison. Our comparisons of CoSPED with other reference works confirm our superior extraction rates. We evaluate CoSPED in more scenarios, achieving a Pythia model extraction rate of 51.7% and introducing cross-model comparison. Finally, we explore defense through Rank-One Model Editing and achieve a reduction in the extraction rate to 1.6%, which proves that our analysis of extraction mechanisms can directly inform effective mitigation strategies against soft prompt-based attacks.
https://arxiv.org/abs/2510.11137
Academic Papers
svg
38ef27f0111bb36507a21fc582f9692285239ac79c3d51005898bf1408f8548d
2026-01-23T00:00:00-05:00
Community Engagement and the Lifespan of Open-Source Software Projects
arXiv:2510.15408v2 Announce Type: replace Abstract: Open-source software (OSS) projects depend on community engagement (CE) for longevity. However, CE's quantifiable impact on project dynamics and lifespan is underexplored. Objectives: This study defines CE in OSS, identifies key metrics, and evaluates their influence on project dynamics (releases, commits, branches) and lifespan. Methods: We analyzed 33,946 GitHub repositories, defining and operationalizing CE with validated per-month metrics (issues, comments, watchers, stargazers). Non-parametric tests and correlations assessed relationships with project dynamics and lifespan across quartiles. Results: CE metrics significantly associate with project dynamics, with stronger correlations in highly engaged projects. For lifespan, a complex pattern emerged: per-month CE rates are highest in younger projects, declining with age. Yet, a subset of long-lived projects maintains exceptionally high activity. Initial CE bursts appear crucial for establishment, while sustained high engagement drives extreme longevity. Active issue engagement's influence intensifies with age, but passive attention's declines. Conclusion: CE dynamically drives OSS project longevity and development. Our findings establish validated CE metrics and offer deeper insights into how diverse community activity patterns contribute to project longevity.
https://arxiv.org/abs/2510.15408
Academic Papers
svg
2b023f5bbcf01f2cc6a744b0555e3ad14cb23254f43d371daac49abb8af1f8ae
2026-01-23T00:00:00-05:00
Enhanced Fish Freshness Classification with Incremental Handcrafted Feature Fusion
arXiv:2510.17145v2 Announce Type: replace Abstract: Accurate assessment of fish freshness remains a major challenge in the food industry, with direct consequences for product quality, market value, and consumer health. Conventional sensory evaluation is inherently subjective, inconsistent, and difficult to standardize across contexts, often limited by subtle, species-dependent spoilage cues. To address these limitations, we propose a handcrafted feature-based approach that systematically extracts and incrementally fuses complementary descriptors, including color statistics, histograms across multiple color spaces, and texture features such as Local Binary Patterns (LBP) and Gray-Level Co-occurrence Matrices (GLCM), from fish eye images. Our method captures global chromatic variations from full images and localized degradations from ROI segments, fusing each independently to evaluate their effectiveness in assessing freshness. Experiments on the Freshness of the Fish Eyes (FFE) dataset demonstrate the approach's effectiveness: in a standard train-test setting, a LightGBM classifier achieved 77.56% accuracy, a 14.35% improvement over the previous deep learning baseline of 63.21%. With augmented data, an Artificial Neural Network (ANN) reached 97.49% accuracy, surpassing the prior best of 77.3% by 20.19%. These results demonstrate that carefully engineered, handcrafted features, when strategically processed, yield a robust, interpretable, and reliable solution for automated fish freshness assessment, providing valuable insights for practical applications in food quality monitoring.
https://arxiv.org/abs/2510.17145
Academic Papers
svg
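Among the handcrafted texture descriptors above is the Local Binary Pattern. A self-contained sketch of the basic 8-neighbor LBP code for a single pixel follows; library implementations (e.g. scikit-image) add rotation-invariant and multi-radius variants, and the actual feature is a histogram of these codes over the image or ROI:

```python
def lbp_code(img, r, c):
    """8-neighbor Local Binary Pattern code for pixel (r, c): each
    neighbor whose intensity is >= the center contributes one bit,
    taken clockwise starting at the top-left neighbor."""
    center = img[r][c]
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1)]
    code = 0
    for bit, (dr, dc) in enumerate(offs):
        if img[r + dr][c + dc] >= center:
            code |= 1 << bit
    return code

# Tiny grayscale patch; LBP code of the middle pixel.
patch = [[10, 20, 30],
         [40, 50, 60],
         [70, 80, 90]]
print(lbp_code(patch, 1, 1))  # -> 120 (bits 3-6 set: the four brighter neighbors)
```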
76ff4ed00d3faac49e8e565b5d6d20bcf9387fd713a6c77c17b24100c6d0c226
2026-01-23T00:00:00-05:00
Auditing and Mitigating Bias in Gender Classification Algorithms: A Data-Centric Approach
arXiv:2510.17873v2 Announce Type: replace Abstract: Gender classification systems often inherit and amplify demographic imbalances in their training data. We first audit five widely used gender classification datasets, revealing that all suffer from significant intersectional underrepresentation. To measure the downstream impact of these flaws, we train identical MobileNetV2 classifiers on the two most balanced of these datasets, UTKFace and FairFace. Our fairness evaluation shows that even these models exhibit significant bias, misclassifying female faces at a higher rate than male faces and amplifying existing racial skew. To counter these data-induced biases, we construct BalancedFace, a new public dataset created by blending images from FairFace and UTKFace, supplemented with images from other collections to fill missing demographic gaps. It is engineered to equalize subgroup shares across 189 intersections of age, race, and gender using only real, unedited images. When a standard classifier is trained on BalancedFace, it reduces the maximum True Positive Rate gap across racial subgroups by over 50% and brings the average Disparate Impact score 63% closer to the ideal of 1.0 compared to the next-best dataset, all with a minimal loss of overall accuracy. These results underline the profound value of data-centric interventions and provide an openly available resource for fair gender classification research.
https://arxiv.org/abs/2510.17873
Academic Papers
svg
16164898ce64e01961a903453206226699898880c8ebf937967dad682f011fde
2026-01-23T00:00:00-05:00
Robust Reinforcement Learning in Finance: Modeling Market Impact with Elliptic Uncertainty Sets
arXiv:2510.19950v2 Announce Type: replace Abstract: In financial applications, reinforcement learning (RL) agents are commonly trained on historical data, where their actions do not influence prices. However, during deployment, these agents trade in live markets where their own transactions can shift asset prices, a phenomenon known as market impact. This mismatch between training and deployment environments can significantly degrade performance. Traditional robust RL approaches address this model misspecification by optimizing the worst-case performance over a set of uncertainties, but typically rely on symmetric structures that fail to capture the directional nature of market impact. To address this issue, we develop a novel class of elliptic uncertainty sets. We establish both implicit and explicit closed-form solutions for the worst-case uncertainty under these sets, enabling efficient and tractable robust policy evaluation. Experiments on single-asset and multi-asset trading tasks demonstrate that our method achieves superior Sharpe ratio and remains robust under increasing trade volumes, offering a more faithful and scalable approach to RL in financial markets.
https://arxiv.org/abs/2510.19950
Academic Papers
svg
19360dca539da04efc8be8f5c7c4bd0ceb61dd0049f86c0d6d514a2300803424
2026-01-23T00:00:00-05:00
Yesnt: Are Diffusion Relighting Models Ready for Capture Stage Compositing? A Hybrid Alternative to Bridge the Gap
arXiv:2510.23494v2 Announce Type: replace Abstract: Volumetric video relighting is essential for bringing captured performances into virtual worlds, but current approaches struggle to deliver temporally stable, production-ready results. Diffusion-based intrinsic decomposition methods show promise for single frames, yet suffer from stochastic noise and instability when extended to sequences, while video diffusion models remain constrained by memory and scale. We propose a hybrid relighting framework that combines diffusion-derived material priors with temporal regularization and physically motivated rendering. Our method aggregates multiple stochastic estimates of per-frame material properties into temporally consistent shading components, using optical-flow-guided regularization. For indirect effects such as shadows and reflections, we extract a mesh proxy from Gaussian Opacity Fields and render it within a standard graphics pipeline. Experiments on real and synthetic captures show that this hybrid strategy achieves substantially more stable relighting across sequences than diffusion-only baselines, while scaling beyond the clip lengths feasible for video diffusion. These results indicate that hybrid approaches, which balance learned priors with physically grounded constraints, are a practical step toward production-ready volumetric video relighting.
https://arxiv.org/abs/2510.23494
Academic Papers
svg
b1d6b83cda39402451cda5b2b18b601180e3c14c9988c5e18de301d38bbc3e49
2026-01-23T00:00:00-05:00
TDFlow: Agentic Workflows for Test Driven Development
arXiv:2510.23761v2 Announce Type: replace Abstract: We introduce TDFlow, a novel test-driven agentic workflow that frames repository-scale software engineering as a test-resolution task, specifically designed to solve human-written tests. Given a set of tests, TDFlow repeatedly proposes, revises, and debugs repository-scale patches using precisely engineered sub-agents and tightly constrained tools. The workflow decomposes software engineering program repair into four components governed by respective sub-agents. This simple, forced decoupling of patch proposing, debugging, patch revision, and optional test generation (1) reduces long-context burden on any individual sub-agent, (2) focuses each sub-agent on specific, pre-defined sub-tasks, and (3) allows for specialized performance improvement on specific sub-tasks. When provided human-written tests, TDFlow attains 88.8% pass rate on SWE-Bench Lite (an absolute improvement of 27.8% over the next best system) and 94.3% on SWE-Bench Verified. Manual inspection of the 800 TDFlow runs within SWE-Bench Lite and Verified uncovers only 7 instances of test hacking, which were subsequently counted as failures. Furthermore, we show that the primary obstacle to human-level software engineering performance lies within writing successful reproduction tests. We envision a human-LLM interactive system powered by TDFlow where human developers write tests solved by LLM systems. Together, these results indicate that modern LLMs, when embedded in a narrowly engineered, test-driven workflow, already achieve human-level test resolution -- with the final frontier for fully autonomous repository repair being the accurate generation of valid reproduction tests.
https://arxiv.org/abs/2510.23761
Academic Papers
svg
9d17b9d8a870906c7a9f7823744d03420067840edd2b49f5403e90785a19d1ad
2026-01-23T00:00:00-05:00
Understanding Reader Perception Shifts upon Disclosure of AI Authorship
arXiv:2510.24011v2 Announce Type: replace Abstract: As AI writing support becomes ubiquitous, how disclosing its use affects reader perception remains a critical, underexplored question. We conducted a study with 261 participants to examine how revealing varying levels of AI involvement shifts author impressions across six distinct communicative acts. Our analysis of 990 responses shows that disclosure generally erodes perceptions of trustworthiness, caring, competence, and likability, with the sharpest declines in social and interpersonal writing. A thematic analysis of participants' feedback links these negative shifts to a perceived loss of human sincerity, diminished author effort, and the contextual inappropriateness of AI. Conversely, we find that higher AI literacy mitigates these negative perceptions, leading to greater tolerance or even appreciation for AI use. Our results highlight the nuanced social dynamics of AI-mediated authorship and inform design implications for creating transparent, context-sensitive writing systems that better preserve trust and authenticity.
https://arxiv.org/abs/2510.24011
Academic Papers
svg
ebc661795910ec4430850c1cbb11be66b268a85f28f2e4f923aab054728196ac
2026-01-23T00:00:00-05:00
PaTaRM: Bridging Pairwise and Pointwise Signals via Preference-Aware Task-Adaptive Reward Modeling
arXiv:2510.24235v2 Announce Type: replace Abstract: Reward models (RMs) are central to reinforcement learning from human feedback (RLHF), providing the critical supervision signals that align large language models (LLMs) with human preferences. Generative reward models (GRMs) provide greater interpretability than traditional scalar RMs, but they come with a critical trade-off: pairwise methods are hindered by a training-inference mismatch, while pointwise methods require expensive absolute annotations. To bridge this gap, we propose the Preference-aware Task-adaptive Reward Model (PaTaRM). Unlike prior approaches, PaTaRM enables robust pointwise training using readily available pairwise data via a novel Preference-Aware Reward (PAR) mechanism, eliminating the need for explicit rating labels. Furthermore, it incorporates a Task-Adaptive Rubric system that dynamically generates instance-specific criteria for precise evaluation. Extensive experiments demonstrate that PaTaRM achieves an 8.7% average improvement on RewardBench and RMBench across Qwen3-8B/14B models. Crucially, it boosts downstream RLHF performance by an average relative improvement of 13.6% across IFEval and InFoBench, validating its effectiveness for policy alignment. Our code is available at https://github.com/JaneEyre0530/PaTaRM.
https://arxiv.org/abs/2510.24235
Academic Papers
svg
724902e2a2a682cc49657482f6b091a9c9de1f460f85a619fccda5bd4aada2e7
2026-01-23T00:00:00-05:00
Systems of Graph Formulas and their Equivalence to Alternating Graph Automata
arXiv:2510.25260v2 Announce Type: replace Abstract: Graph-based modeling plays a fundamental role in many areas of computer science. In this paper, we introduce systems of graph formulas with variables for specifying graph properties; this notion generalizes the graph formulas introduced in earlier work by incorporating recursion. We show that these formula systems have the same expressive power as alternating graph automata, a computational model that extends traditional finite-state automata to graphs, and allows both existential and universal states. In particular, we provide a bidirectional translation between formula systems and alternating graph automata, proving their equivalence in specifying graph languages. This result implies that alternating graph automata can be naturally represented using logic-based formulations, thus bridging the gap between automata-theoretic and logic-based approaches to graph language specification.
https://arxiv.org/abs/2510.25260
Academic Papers
svg
c69c41feea549e2268feadb812a3e922dc6773dd8785df10fdf072d71a501018
2026-01-23T00:00:00-05:00
Data-Enabled Predictive Control and Guidance for Autonomous Underwater Vehicles
arXiv:2510.25309v2 Announce Type: replace Abstract: This paper presents a fully data-driven control framework for autonomous underwater vehicles (AUVs) based on Data-Enabled Predictive Control (DeePC). The approach eliminates the need for explicit hydrodynamic modeling by exploiting measured input-output data to predict and optimize future system behavior. Classic DeePC is employed for heading control, while a cascaded DeePC architecture is proposed for depth regulation. For 3-D waypoint path following, the Adaptive Line-of-Sight algorithm is extended to a predictive formulation and integrated with DeePC. All methods are validated in extensive simulation on the REMUS 100 AUV and compared with classical PI/PID control. The results demonstrate superior tracking performance and robustness of DeePC under ocean-current disturbances and nonlinear operating conditions, while significantly reducing modeling effort.
https://arxiv.org/abs/2510.25309
Academic Papers
svg
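DeePC replaces an explicit model with recorded input-output trajectories arranged in Hankel matrices (Willems' fundamental lemma). As a minimal sketch of just the Hankel construction for a scalar signal (the full controller additionally solves a constrained optimization over combinations of these columns):

```python
def hankel(data, depth):
    """Hankel matrix of a scalar signal: column j holds the window
    data[j : j + depth]. DeePC stacks such matrices built from past
    inputs and outputs to predict future behavior without a model."""
    cols = len(data) - depth + 1
    return [[data[j + i] for j in range(cols)] for i in range(depth)]

u = [1, 2, 3, 4, 5]            # recorded input sequence
print(hankel(u, 2))            # -> [[1, 2, 3, 4], [2, 3, 4, 5]]
```

For multivariate signals the same construction applies blockwise, one block row per sample.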
634c3b5d267c38edbc1b953571bd3b4a361fbd558c15ff05b3b689a7bd9e37cc
2026-01-23T00:00:00-05:00
Analyzing the Impact of Demand Response on Short-Circuit Current via a Unit Commitment Model
arXiv:2511.00296v2 Announce Type: replace Abstract: In low-carbon grids, system flexibility can be enhanced through mechanisms such as Demand Response (DR), enabling the efficient utilization of renewable energy. However, as Synchronous Generators (SGs) are being replaced by renewable energy sources characterized by Inverter-Based Resources (IBR), system stability is severely affected. Due to the limited overload capability of IBRs, their Short-Circuit Current (SCC) contribution is much smaller than that of SGs. As a result, protection devices may fail to trip during faults. Consequently, the remaining SGs play a key role in providing sufficient SCC. Since the commitment of SGs is closely related to system loading conditions, DR can indirectly affect their SCC provision, a relationship that has not yet been investigated in the literature. Therefore, this paper incorporates both DR and SCC constraints into a unit commitment problem and conducts case studies on an IEEE 30-bus system. The results show that although DR can reduce total costs by adjusting power demand, it may also lead to inadequate SCC levels. Nevertheless, when flexible loads are properly coordinated with SCC requirements, the total cost increases by only 0.3%, which is significantly lower than the cost of system dispatch without DR. This demonstrates that DR can facilitate stable system operation in a cost-effective manner.
https://arxiv.org/abs/2511.00296
Academic Papers
svg
f5ad4aa3a43dbfd1fe33533dca8e5761791b3ff2cb3ad81696d491e74a4af5d4
2026-01-23T00:00:00-05:00
Subtree Mode and Applications
arXiv:2511.01376v2 Announce Type: replace Abstract: The mode of a collection of values (i.e., the most frequent value in the collection) is a key summary statistic. Finding the mode in a given range of an array of values is thus of great importance, and constructing a data structure to solve this problem is in fact the well-known Range Mode problem. In this work, we introduce the Subtree Mode (SM) problem, the analogous problem in a leaf-colored tree, where the task is to compute the most frequent color in the leaves of the subtree of a given node. SM is motivated by several applications in domains such as text analytics and biology, where the data are hierarchical and can thus be represented as a (leaf-colored) tree. Our central contribution is a time-optimal algorithm for SM that computes the answer for every node of an input $N$-node tree in $O(N)$ time. We further show how our solution can be adapted for node-colored trees, or for computing the $k$ most frequent colors, for any given $k=O(1)$, in the optimal $O(N)$ time. Moreover, we prove that a similarly fast solution for when the input is a sink-colored directed acyclic graph instead of a leaf-colored tree is highly unlikely. Our experiments on real datasets with trees of up to $7.3$ billion nodes demonstrate that our algorithm is faster than baselines by at least one order of magnitude and much more space efficient. They also show that it is effective in pattern mining, sequence-to-database search, and biology applications.
https://arxiv.org/abs/2511.01376
Academic Papers
svg
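The Subtree Mode task itself can be illustrated with a simple small-to-large merging sketch. Note this hypothetical implementation runs in O(N log N), not the paper's optimal O(N):

```python
def subtree_modes(root, children, leaf_color):
    """Most frequent leaf color in every node's subtree, computed by
    small-to-large merging of per-subtree color counts.

    children[v]   -> list of v's children (internal nodes only)
    leaf_color[v] -> color of leaf v
    Returns mode, where mode[v] is a most frequent color in v's subtree.
    """
    mode = {}

    def solve(v):
        if v in leaf_color:                      # leaf: a single color
            mode[v] = leaf_color[v]
            return {leaf_color[v]: 1}
        counts = {}
        for child in children.get(v, []):
            child_counts = solve(child)
            if len(child_counts) > len(counts):  # always merge small into large
                counts, child_counts = child_counts, counts
            for color, n in child_counts.items():
                counts[color] = counts.get(color, 0) + n
        mode[v] = max(counts, key=counts.get)
        return counts

    solve(root)
    return mode

# Root 0 has an internal child 1 (leaves 2 and 3, both red) and leaf 4 (blue).
children = {0: [1, 4], 1: [2, 3]}
leaf_color = {2: "red", 3: "red", 4: "blue"}
modes = subtree_modes(0, children, leaf_color)
print(modes[0], modes[1], modes[4])  # -> red red blue
```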
0dab709a0ecaeffc8c35c9647f4f2ad8e3d3b0420171c25113266db6dbfd0db0
2026-01-23T00:00:00-05:00
Fast Ramsey Quantifier Elimination in LIRA (with applications to liveness checking)
arXiv:2511.05323v2 Announce Type: replace Abstract: Ramsey quantifiers have recently been proposed as a unified framework for handling properties of interest in program verification involving proofs in the form of infinite cliques, which are not expressible in first-order logic. Among others, these include liveness verification and monadic decomposability. We present the tool REAL, which implements an efficient elimination of Ramsey quantifiers in existential linear arithmetic theories over integers (LIA), reals (LRA), and the mixed case (LIRA). The tool supports a convenient input format, which is an extension of SMT-LIB over the aforementioned theories with Ramsey quantifiers. We also demonstrate a substantial speedup from the original prototype. As an application, we provide an automatic translation from FASTer (a tool for verifying reachability over infinite-state systems) output format to our extension of SMT-LIB and show how our tool extends FASTer to liveness checking.
https://arxiv.org/abs/2511.05323
Academic Papers
svg
cf30dd621b64b94d0d9aacd01c61a8323fed3e5f523a7ca1c3b146bc5873e855
2026-01-23T00:00:00-05:00
Can LLM Infer Risk Information From MCP Server System Logs?
arXiv:2511.05867v3 Announce Type: replace Abstract: Large Language Models (LLMs) demonstrate strong capabilities in solving complex tasks when integrated with external tools. The Model Context Protocol (MCP) has become a standard interface for enabling such tool-based interactions. However, these interactions introduce substantial security concerns, particularly when the MCP server is compromised or untrustworthy. While prior benchmarks primarily focus on prompt injection attacks or analyze the vulnerabilities of LLM-MCP interaction trajectories, limited attention has been given to the underlying system logs associated with malicious MCP servers. To address this gap, we present the first synthetic benchmark for evaluating LLMs' ability to identify security risks from system logs. We define nine categories of MCP server risks and generate 1,800 synthetic system logs using ten state-of-the-art LLMs. These logs are embedded in the return values of 243 curated MCP servers, yielding a dataset of 2,421 chat histories for training and 471 queries for evaluation. Our pilot experiments reveal that smaller models often fail to detect risky system logs, leading to high false negatives. While models trained with supervised fine-tuning (SFT) tend to over-flag benign logs, resulting in elevated false positives, Reinforcement Learning with Verifiable Reward (RLVR) offers a better precision-recall balance. In particular, after training with Group Relative Policy Optimization (GRPO), Llama3.1-8B-Instruct achieves 83 percent accuracy, surpassing the best-performing large remote model by 9 percentage points. Fine-grained, per-category analysis further underscores the effectiveness of reinforcement learning in enhancing LLM safety within the MCP framework. Code and data are available at https://github.com/PorUna-byte/MCP-RiskCue.
https://arxiv.org/abs/2511.05867
Academic Papers
svg
09482726199e4cfcb3c5868a67a35c2bf8cbe8a9a907725cd7100e796db285fc
2026-01-23T00:00:00-05:00
PlantTraitNet: An Uncertainty-Aware Multimodal Framework for Global-Scale Plant Trait Inference from Citizen Science Data
arXiv:2511.06943v2 Announce Type: replace Abstract: Global maps of plant traits, such as leaf nitrogen or plant height, are essential for understanding ecosystem processes, including the carbon and energy cycles of the Earth system. However, existing trait maps remain limited by the high cost and sparse geographic coverage of field-based measurements. Citizen science initiatives offer a largely untapped resource to overcome these limitations, with over 50 million geotagged plant photographs worldwide capturing valuable visual information on plant morphology and physiology. In this study, we introduce PlantTraitNet, a multi-modal, multi-task uncertainty-aware deep learning framework that predicts four key plant traits (plant height, leaf area, specific leaf area, and nitrogen content) from citizen science photos using weak supervision. By aggregating individual trait predictions across space, we generate global maps of trait distributions. We validate these maps against independent vegetation survey data (sPlotOpen) and benchmark them against leading global trait products. Our results show that PlantTraitNet consistently outperforms existing trait maps across all evaluated traits, demonstrating that citizen science imagery, when integrated with computer vision and geospatial AI, enables not only scalable but also more accurate global trait mapping. This approach offers a powerful new pathway for ecological research and Earth system modeling.
https://arxiv.org/abs/2511.06943
Academic Papers
svg