Dataset schema (column names and value-length ranges from the viewer header): id (string, 9-16 chars), title (string, 4-278 chars), categories (list, 1-13 entries), abstract (string, 3-4.08k chars), filtered_category_membership (dict).
2502.11483
No-regret incentive-compatible online learning under exact truthfulness with non-myopic experts
[ "cs.LG", "cs.GT", "stat.ML" ]
We study an online forecasting setting in which, over $T$ rounds, $N$ strategic experts each report a forecast to a mechanism, the mechanism selects one forecast, and then the outcome is revealed. In any given round, each expert has a belief about the outcome, but the expert wishes to select its report so as to maximize the total number of times it is selected. The goal of the mechanism is to obtain low belief regret: the difference between its cumulative loss (based on its selected forecasts) and the cumulative loss of the best expert in hindsight (as measured by the experts' beliefs). We consider exactly truthful mechanisms for non-myopic experts, meaning that truthfully reporting its belief strictly maximizes the expert's subjective probability of being selected in any future round. Even in the full-information setting, obtaining a no-regret exactly truthful mechanism was an open problem. We develop the first no-regret mechanism for this setting via an online extension of the Independent-Event Lotteries Forecasting Competition Mechanism (I-ELF). By viewing this online I-ELF as a novel instance of Follow the Perturbed Leader (FPL) with noise based on random walks with loss-dependent perturbations, we obtain $\tilde{O}(\sqrt{T N})$ regret. Our results are fueled by new tail bounds for Poisson binomial random variables that we develop. We extend our results to the bandit setting, where we give an exactly truthful mechanism obtaining $\tilde{O}(T^{2/3} N^{1/3})$ regret; this is the first no-regret result even among approximately truthful mechanisms.
{ "Other": 1, "cs.AI": 0, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 1, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
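The regret guarantee comes from viewing the mechanism as Follow the Perturbed Leader. A minimal generic FPL sketch (exponential perturbations for illustration only; the paper's I-ELF uses random-walk, loss-dependent noise, and `run_fpl` is a hypothetical name):

```python
import numpy as np

def fpl_select(cum_losses, rng, scale=1.0):
    # Follow the Perturbed Leader: pick the expert whose cumulative loss,
    # minus a fresh random perturbation, is smallest.
    noise = rng.exponential(scale, size=len(cum_losses))
    return int(np.argmin(cum_losses - noise))

def run_fpl(loss_matrix, scale=1.0, seed=0):
    # loss_matrix: T x N array of per-round losses of the N experts.
    rng = np.random.default_rng(seed)
    T, N = loss_matrix.shape
    cum = np.zeros(N)
    total = 0.0
    for t in range(T):
        i = fpl_select(cum, rng, scale)
        total += loss_matrix[t, i]
        cum += loss_matrix[t]
    return total
```

On a toy instance where one expert is always best, the algorithm's cumulative loss stays close to that expert's.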
2502.11484
Dictionary-Learning-Based Data Pruning for System Identification
[ "cs.LG", "cs.SY", "eess.SY" ]
System identification normally involves augmenting time series data by time shifting and nonlinearisation (via a polynomial basis), which introduces redundancy both feature-wise and sample-wise. Much research focuses on reducing feature-wise redundancy, while less attention is paid to sample-wise redundancy. This paper proposes a novel data pruning method, called (mini-batch) FastCan, to reduce sample-wise redundancy based on dictionary learning. The time series data are represented by a set of representative samples, called atoms, via dictionary learning, and useful samples are then selected based on their correlation with the atoms. The method is tested on one simulated dataset and two benchmark datasets. The R-squared between the coefficients of models trained on the full datasets and the coefficients of models trained on the pruned datasets is adopted to evaluate the performance of data pruning methods. The proposed method is found to significantly outperform random pruning.
{ "Other": 0, "cs.AI": 0, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 1, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 1 }
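The selection step described above (represent the data by atoms, keep the samples most correlated with them) can be sketched as follows; this is a toy stand-in for FastCan, with a tiny Lloyd's k-means in place of a real dictionary learner, and all names are hypothetical:

```python
import numpy as np

def kmeans_atoms(X, k, iters=20, seed=0):
    # Tiny Lloyd's k-means standing in for dictionary learning.
    rng = np.random.default_rng(seed)
    atoms = X[rng.choice(len(X), k, replace=False)].copy()
    for _ in range(iters):
        d = ((X[:, None, :] - atoms[None]) ** 2).sum(-1)
        lab = d.argmin(1)
        for j in range(k):
            if (lab == j).any():
                atoms[j] = X[lab == j].mean(0)
    return atoms

def prune_by_atoms(X, n_atoms=3, n_keep=10, seed=0):
    # Keep the n_keep samples with the highest cosine correlation
    # to any learned atom; the rest are pruned as redundant.
    atoms = kmeans_atoms(X, n_atoms, seed=seed)
    Xn = X / (np.linalg.norm(X, axis=1, keepdims=True) + 1e-12)
    An = atoms / (np.linalg.norm(atoms, axis=1, keepdims=True) + 1e-12)
    score = np.abs(Xn @ An.T).max(axis=1)
    return np.argsort(score)[::-1][:n_keep]
```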
2502.11486
Anti-Degeneracy Scheme for Lidar SLAM based on Particle Filter in Geometry Feature-Less Environments
[ "cs.RO" ]
Simultaneous localization and mapping (SLAM) based on particle filtering has been extensively employed in indoor scenarios due to its high efficiency. However, in geometry feature-less scenes, accuracy is severely reduced due to the lack of constraints. In this article, we propose an anti-degeneracy system based on deep learning. First, we design a scale-invariant linear mapping that converts coordinates in continuous space into discrete indexes, together with a data augmentation method based on a Gaussian model that preserves model performance by effectively mitigating the impact of changes in the number of particles on the feature distribution. Second, we develop a degeneracy detection model using residual neural networks (ResNet) and a transformer, which identifies degeneracy by scrutinizing the distribution of the particle population. Third, we design an adaptive anti-degeneracy strategy that first performs fusion and perturbation on the resampling process to provide rich and accurate initial values for pose optimization, and then applies a hierarchical pose optimization combining coarse and fine matching, which adaptively adjusts the optimization frequency and the sensor trustworthiness according to the degree of degeneracy, in order to enhance the ability to search for the globally optimal pose. Finally, we demonstrate the optimality of the model, as well as the improvement in computation time from the image matrix method and the GPU, through ablation experiments, and verify the performance of the anti-degeneracy system in different scenarios through simulation experiments and real-world experiments. This work has been submitted to IEEE for publication. Copyright may be transferred without notice, after which this version may no longer be available.
{ "Other": 0, "cs.AI": 0, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 1, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
2502.11487
Non-Binary LDPC Arithmetic Error Correction For Processing-in-Memory
[ "cs.AR", "cs.IT", "math.IT" ]
Processing-in-memory (PIM) based on emerging devices such as memristors is more vulnerable to noise than traditional memories, due to physical non-idealities and complex operations in the analog domain. To ensure high reliability, efficient error-correcting codes (ECC) are highly desired. However, state-of-the-art ECC schemes for PIM suffer drawbacks including dataflow interruptions, low code rates, and limited error correction patterns. In this work, we propose non-binary low-density parity-check (NB-LDPC) error correction running over a Galois field. This NB-LDPC scheme, with a long word length of 1024 bits, can correct up to 8-bit errors with a code rate over 88%. Non-binary Galois-field operations can support both memory mode and PIM mode, even with multi-level memory cells. We fabricate a 40nm prototype PIM chip equipped with the proposed NB-LDPC scheme for validation purposes. Experiments show that PIM with NB-LDPC error correction demonstrates up to a 59.65x bit error rate (BER) improvement over the original PIM without such error correction. The test chip delivers a 2.978x power-efficiency enhancement over prior works.
{ "Other": 1, "cs.AI": 0, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 1, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
2502.11490
GPU-accelerated Multi-relational Parallel Graph Retrieval for Web-scale Recommendations
[ "cs.LG", "cs.DC", "cs.IR" ]
Web recommendations provide personalized items from massive catalogs to users, relying heavily on retrieval stages to trade off the effectiveness and efficiency of selecting a small relevant set from billion-scale candidates on online digital platforms. As one of the largest Chinese search-engine and news-feed providers, Baidu resorts to deep neural network (DNN) and graph-based approximate nearest neighbor search (ANNS) algorithms for accurate relevance estimation and efficient search for relevant items. However, current retrieval at Baidu falls short of comprehensive user-item relational understanding due to dissected interaction modeling, and performs inefficiently in large-scale graph-based ANNS because of suboptimal traversal navigation and the GPU computational bottleneck under high concurrency. To this end, we propose a GPU-accelerated Multi-relational Parallel Graph Retrieval (GMP-GR) framework to achieve effective yet efficient retrieval in web-scale recommendations. First, we propose a multi-relational user-item relevance metric learning method that unifies diverse user behaviors through multi-objective optimization and employs a self-covariant loss to enhance pathfinding performance. Second, we develop a hierarchical parallel graph-based ANNS to boost graph retrieval throughput, which conducts breadth-depth-balanced searches on a large-scale item graph and cost-effectively handles irregular neural computation via adaptive aggregation on GPUs. In addition, we integrate system optimization strategies in the deployment of GMP-GR at Baidu. Extensive experiments demonstrate the superiority of GMP-GR in retrieval accuracy and efficiency. Deployed across more than twenty applications at Baidu, GMP-GR serves hundreds of millions of users with a throughput exceeding one hundred million requests per second.
{ "Other": 1, "cs.AI": 0, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 1, "cs.IT": 0, "cs.LG": 1, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
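The graph-based ANNS that this system builds on boils down to a greedy walk over a proximity graph: move to whichever neighbor is closer to the query until no neighbor improves. A minimal generic sketch (not Baidu's GMP-GR, which parallelizes and batches such searches on GPUs):

```python
import numpy as np

def greedy_search(vectors, neighbors, query, start=0):
    # Greedy best-first walk on a proximity graph: from the current node,
    # hop to any neighbor strictly closer to the query; stop at a local
    # minimum of the distance.
    cur = start
    cur_d = np.linalg.norm(vectors[cur] - query)
    improved = True
    while improved:
        improved = False
        for nxt in neighbors[cur]:
            d = np.linalg.norm(vectors[nxt] - query)
            if d < cur_d:
                cur, cur_d, improved = nxt, d, True
    return cur, cur_d
```

On a simple chain graph the walk converges to the true nearest neighbor.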
2502.11491
Ontology-Guided Reverse Thinking Makes Large Language Models Stronger on Knowledge Graph Question Answering
[ "cs.CL", "cs.AI" ]
Large language models (LLMs) have shown remarkable capabilities in natural language processing. However, in knowledge graph question answering (KGQA) tasks, answering questions that require multi-hop reasoning remains a challenge. Existing methods rely on entity vector matching, but the purpose of a question is abstract and difficult to match with specific entities. As a result, it is difficult to establish reasoning paths to the purpose, which leads to information loss and redundancy. To address this issue, inspired by human reverse thinking, we propose Ontology-Guided Reverse Thinking (ORT), a novel framework that constructs reasoning paths from purposes back to conditions. ORT operates in three key phases: (1) using an LLM to extract purpose labels and condition labels, (2) constructing label reasoning paths based on the KG ontology, and (3) using the label reasoning paths to guide knowledge retrieval. Experiments on the WebQSP and CWQ datasets show that ORT achieves state-of-the-art performance and significantly enhances the capability of LLMs for KGQA.
{ "Other": 0, "cs.AI": 1, "cs.CE": 0, "cs.CL": 1, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
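Phase (2) above, building a label path from the purpose back to the conditions, can be illustrated with a toy reverse search over an ontology graph, assuming the labels were already extracted in phase (1); the ontology contents and function name here are hypothetical:

```python
from collections import deque

def reverse_paths(ontology, purpose, conditions):
    # Breadth-first search backwards from the purpose label to any
    # condition label; ontology maps a label to the set of labels
    # it can be reached from. Returns one condition->purpose path.
    frontier = deque([[purpose]])
    seen = {purpose}
    while frontier:
        path = frontier.popleft()
        if path[-1] in conditions:
            return list(reversed(path))
        for prev in ontology.get(path[-1], ()):
            if prev not in seen:
                seen.add(prev)
                frontier.append(path + [prev])
    return None  # no label path connects purpose to the conditions
```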
2502.11492
Why Vision Language Models Struggle with Visual Arithmetic? Towards Enhanced Chart and Geometry Understanding
[ "cs.AI", "cs.CL", "cs.CV" ]
Vision Language Models (VLMs) have achieved remarkable progress in multimodal tasks, yet they often struggle with visual arithmetic: seemingly simple capabilities such as object counting or length comparison that are essential for complex tasks like chart understanding and geometric reasoning. In this work, we first investigate the root causes of this deficiency through a suite of probing tasks focusing on basic visual arithmetic. Our analysis reveals that while pre-trained vision encoders typically capture sufficient information, the text decoder often fails to decode it correctly for arithmetic reasoning. To address this, we propose CogAlign, a novel post-training strategy inspired by Piaget's theory of cognitive development. CogAlign trains VLMs to recognize invariant properties under visual transformations. We demonstrate that this approach significantly improves the performance of three diverse VLMs on our proposed probing tasks. Furthermore, CogAlign enhances performance by an average of 4.6% on CHOCOLATE and 2.9% on MATH-VISION, outperforming or matching supervised fine-tuning methods while requiring 60% less training data. These results highlight the effectiveness and generalizability of CogAlign in improving fundamental visual arithmetic capabilities and their transfer to downstream tasks.
{ "Other": 0, "cs.AI": 1, "cs.CE": 0, "cs.CL": 1, "cs.CR": 0, "cs.CV": 1, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
2502.11493
DAST: Context-Aware Compression in LLMs via Dynamic Allocation of Soft Tokens
[ "cs.CL" ]
Large Language Models (LLMs) face computational inefficiencies and redundant processing when handling long context inputs, prompting a focus on compression techniques. While existing semantic vector-based compression methods achieve promising performance, these methods fail to account for the intrinsic information density variations between context chunks, instead allocating soft tokens uniformly across context chunks. This uniform distribution inevitably diminishes allocation to information-critical regions. To address this, we propose Dynamic Allocation of Soft Tokens (DAST), a simple yet effective method that leverages the LLM's intrinsic understanding of contextual relevance to guide compression. DAST combines perplexity-based local information with attention-driven global information to dynamically allocate soft tokens to the informative-rich chunks, enabling effective, context-aware compression. Experimental results across multiple benchmarks demonstrate that DAST surpasses state-of-the-art methods.
{ "Other": 0, "cs.AI": 0, "cs.CE": 0, "cs.CL": 1, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
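The core allocation idea (give information-dense chunks a larger share of the soft-token budget) can be sketched with a perplexity-proportional rule; this is an illustrative simplification, not DAST's exact combination of perplexity-based local and attention-based global signals:

```python
import math

def allocate_soft_tokens(chunk_nlls, budget):
    # Allocate a total soft-token budget across context chunks in
    # proportion to per-chunk perplexity, so that information-dense
    # chunks keep more tokens after compression.
    ppl = [math.exp(nll) for nll in chunk_nlls]  # per-chunk perplexity
    total = sum(ppl)
    alloc = [int(budget * p / total) for p in ppl]
    # hand any leftover tokens to the highest-perplexity chunks
    leftover = budget - sum(alloc)
    for i in sorted(range(len(ppl)), key=lambda i: -ppl[i])[:leftover]:
        alloc[i] += 1
    return alloc
```

A chunk with twice the average negative log-likelihood ends up with the largest share.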
2502.11494
Stop Looking for Important Tokens in Multimodal Language Models: Duplication Matters More
[ "cs.CL", "cs.CV" ]
Vision tokens in multimodal large language models often incur huge computational overhead due to their excessive length compared to the linguistic modality. Many recent methods aim to solve this problem with token pruning, which first defines an importance criterion for tokens and then prunes the unimportant vision tokens during inference. However, in this paper, we show that importance is not an ideal indicator for deciding whether a token should be pruned. Surprisingly, it usually results in worse performance than random token pruning and leads to incompatibility with efficient attention computation operators. Instead, we propose DART (Duplication-Aware Reduction of Tokens), which prunes tokens based on their duplication with other tokens, leading to significant and training-free acceleration. Concretely, DART selects a small subset of pivot tokens and then retains the tokens with low duplication relative to the pivots, ensuring minimal information loss during token pruning. Experiments demonstrate that DART can prune 88.9% of vision tokens while maintaining comparable performance, leading to 1.99$\times$ and 2.99$\times$ speed-ups in total time and the prefilling stage, respectively, with good compatibility with efficient attention operators. Our code is available at https://github.com/ZichenWen1/DART.
{ "Other": 0, "cs.AI": 0, "cs.CE": 0, "cs.CL": 1, "cs.CR": 0, "cs.CV": 1, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
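DART's retention rule (keep a few pivot tokens plus the tokens least duplicated relative to them) can be sketched as follows; the random pivot choice and parameter values are illustrative, not the paper's implementation:

```python
import numpy as np

def dart_prune(tokens, n_pivots=2, n_keep=5, seed=0):
    # Duplication-aware pruning sketch: measure each token's maximum
    # cosine similarity to the pivots and retain the least-duplicated
    # tokens alongside the pivots themselves.
    rng = np.random.default_rng(seed)
    T = tokens / (np.linalg.norm(tokens, axis=1, keepdims=True) + 1e-12)
    pivots = rng.choice(len(tokens), n_pivots, replace=False)
    dup = (T @ T[pivots].T).max(axis=1)  # max cosine sim to any pivot
    dup[pivots] = np.inf                 # pivots are kept separately
    rest = np.argsort(dup)[: n_keep - n_pivots]
    return np.sort(np.concatenate([pivots, rest]))
```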
2502.11495
Balanced Multi-Factor In-Context Learning for Multilingual Large Language Models
[ "cs.CL" ]
Multilingual large language models (MLLMs) can use in-context learning (ICL) to achieve high performance by leveraging cross-lingual knowledge transfer without parameter updates. However, their effectiveness is highly sensitive to example selection, particularly in multilingual settings. Based on the findings of existing work, three key factors influence multilingual ICL: (1) semantic similarity, (2) linguistic alignment, and (3) language-specific performance. However, existing approaches address these factors independently, without explicitly disentangling their combined impact, leaving optimal example selection underexplored. To address this gap, we propose balanced multi-factor ICL (\textbf{BMF-ICL}), a method that quantifies and optimally balances these factors for improved example selection. Experiments on mCSQA and TYDI across four MLLMs demonstrate that BMF-ICL outperforms existing methods. Further analysis highlights the importance of incorporating all three factors and of selecting examples from multiple languages.
{ "Other": 0, "cs.AI": 0, "cs.CE": 0, "cs.CL": 1, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
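In its simplest form, balancing the three factors reduces to scoring candidate examples by a weighted sum; the weights and field names below are hypothetical placeholders, not BMF-ICL's learned quantification:

```python
def select_examples(candidates, weights=(0.4, 0.3, 0.3), k=2):
    # Score each candidate in-context example by a weighted sum of
    # semantic similarity, linguistic alignment, and source-language
    # performance, then return the ids of the top-k.
    w_sem, w_align, w_perf = weights
    scored = sorted(
        candidates,
        key=lambda c: -(w_sem * c["sem"] + w_align * c["align"] + w_perf * c["perf"]),
    )
    return [c["id"] for c in scored[:k]]
```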
2502.11501
Token Pruning in Multimodal Large Language Models: Are We Solving the Right Problem?
[ "cs.CL", "cs.CV" ]
Multimodal large language models (MLLMs) have shown remarkable performance for cross-modal understanding and generation, yet still suffer from severe inference costs. Recently, abundant works have been proposed to solve this problem with token pruning, which identifies the redundant tokens in MLLMs and then prunes them to reduce the computation and KV storage costs, leading to significant acceleration without training. While these methods claim efficiency gains, critical questions about their fundamental design and evaluation remain unanswered: Why do many existing approaches underperform even naive random token selection? Is attention-based scoring sufficient for reliably identifying redundant tokens? Is language information really helpful during token pruning? What makes a good trade-off between token importance and duplication? Are current evaluation protocols comprehensive and unbiased? The neglect of these problems in previous research hinders the long-term development of token pruning. In this paper, we answer these questions one by one, providing insights into the design of future token pruning methods.
{ "Other": 0, "cs.AI": 0, "cs.CE": 0, "cs.CL": 1, "cs.CR": 0, "cs.CV": 1, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
2502.11504
Accelerated Gradient-based Design Optimization Via Differentiable Physics-Informed Neural Operator: A Composites Autoclave Processing Case Study
[ "cs.LG", "cs.AI", "cs.NA", "math.NA" ]
Simulation and optimization are crucial for advancing the engineering design of complex systems and processes. Traditional optimization methods require substantial computational time and effort due to their reliance on resource-intensive simulations, such as finite element analysis, and the complexity of rigorous optimization algorithms. Data-agnostic AI-based surrogate models, such as Physics-Informed Neural Operators (PINOs), offer a promising alternative to these conventional simulations, providing drastically reduced inference time, unparalleled data efficiency, and zero-shot super-resolution capability. However, the predictive accuracy of these models is often constrained to small, low-dimensional design spaces or systems with relatively simple dynamics. To address this, we introduce a novel Physics-Informed DeepONet (PIDON) architecture, which extends the capabilities of conventional neural operators to effectively model the nonlinear behavior of complex engineering systems across high-dimensional design spaces and a wide range of dynamic design configurations. This new architecture outperforms existing state-of-the-art (SOTA) models, enabling better predictions across broader design spaces. Leveraging PIDON's differentiability, we integrate a gradient-based optimization approach using the Adam optimizer to efficiently determine optimal design variables. This forms an end-to-end gradient-based optimization framework that accelerates the design process while enhancing scalability and efficiency. We demonstrate the effectiveness of this framework in the optimization of aerospace-grade composites curing processes, achieving a 3x speedup in obtaining optimal design variables compared to gradient-free methods. Beyond composites processing, the proposed model has the potential to be used as a scalable and efficient optimization tool for broader applications in advanced engineering and digital twin systems.
{ "Other": 1, "cs.AI": 1, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 1, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
2502.11505
A GNN-based Spectral Filtering Mechanism for Imbalance Classification in Network Digital Twin
[ "cs.LG", "cs.NI" ]
Graph Neural Networks are gaining attention in Fifth-Generation (5G) core network digital twins, which are data-driven complex systems with numerous components. Analyzing these data can be challenging due to rare failure types, leading to imbalanced classification in multiclass settings. Digital twins of 5G networks increasingly employ graph classification as the main method for identifying failure types. However, the skewed distribution of failure occurrences is a major class imbalance issue that prevents effective graph data mining. Previous studies have not sufficiently tackled this complex problem. In this paper, we propose the Class-Fourier Graph Neural Network (CF-GNN), which introduces a class-oriented spectral filtering mechanism that ensures precise classification by estimating a unique spectral filter for each class. We employ eigenvalue and eigenvector spectral filtering to capture and adapt to variations in the minority classes, ensuring accurate class-specific feature discrimination; the model is also adept at graph representation learning for complex local structures among neighbors in an end-to-end setting. Extensive experiments demonstrate that the proposed CF-GNN can help both with the creation of new techniques for enhancing classifiers and with the investigation of the characteristics of multi-class imbalanced data in a network digital twin system.
{ "Other": 1, "cs.AI": 0, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 1, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
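The underlying operation, filtering a graph signal through a function of the Laplacian spectrum, can be sketched generically; CF-GNN learns one such filter per class, whereas here `response` is a fixed illustrative function:

```python
import numpy as np

def spectral_filter(adj, signal, response):
    # Apply a spectral filter g(lambda) to a graph signal x:
    # U g(Lambda) U^T x, with (Lambda, U) the eigendecomposition
    # of the combinatorial graph Laplacian L = D - A.
    deg = adj.sum(1)
    lap = np.diag(deg) - adj
    lam, U = np.linalg.eigh(lap)
    return U @ (response(lam) * (U.T @ signal))
```

A low-pass response that keeps only the zero eigenvalue passes a constant signal through unchanged, since the constant vector spans the Laplacian's null space.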
2502.11506
Learning Surrogate Potential Mean Field Games via Gaussian Processes: A Data-Driven Approach to Ill-Posed Inverse Problems
[ "cs.LG", "math.OC", "stat.ML" ]
Mean field games (MFGs) describe the collective behavior of large populations of interacting agents. In this work, we tackle ill-posed inverse problems in potential MFGs, aiming to recover the agents' population, momentum, and environmental setup from limited, noisy measurements and partial observations. These problems are ill-posed because multiple MFG configurations can explain the same data, or different parameters can yield nearly identical observations. Nonetheless, they remain crucial in practice for real-world scenarios where data are inherently sparse or noisy, or where the MFG structure is not fully determined. Our focus is on finding surrogate MFGs that accurately reproduce the observed data despite these challenges. We propose two Gaussian process (GP)-based frameworks: an inf-sup formulation and a bilevel approach. The choice between them depends on whether the unknown parameters introduce concavity in the objective. In the inf-sup framework, we use the linearity of GPs and their parameterization structure to maintain convex-concave properties, allowing us to apply standard convex optimization algorithms. In the bilevel framework, we employ a gradient-descent-based algorithm and introduce two methods for computing the outer gradient. The first method leverages an existing solver for the inner potential MFG and applies automatic differentiation, while the second adopts an adjoint-based strategy that computes the outer gradient independently of the inner solver. Our numerical experiments show that when sufficient prior information is available, the unknown parameters can be accurately recovered. Otherwise, if prior information is limited, the inverse problem is ill-posed, but our frameworks can still produce surrogate MFG models that closely match observed data.
{ "Other": 0, "cs.AI": 0, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 1, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
2502.11508
Chinese Spelling Correction: A Comprehensive Survey of Progress, Challenges, and Opportunities
[ "cs.CL", "cs.AI" ]
Chinese Spelling Correction (CSC) is a critical task in natural language processing, aimed at detecting and correcting spelling errors in Chinese text. This survey provides a comprehensive overview of CSC, tracing its evolution from pre-trained language models to large language models, and critically analyzing their respective strengths and weaknesses in this domain. Moreover, we further present a detailed examination of existing benchmark datasets, highlighting their inherent challenges and limitations. Finally, we propose promising future research directions, particularly focusing on leveraging the potential of LLMs and their reasoning capabilities for improved CSC performance. To the best of our knowledge, this is the first comprehensive survey dedicated to the field of CSC. We believe this work will serve as a valuable resource for researchers, fostering a deeper understanding of the field and inspiring future advancements.
{ "Other": 0, "cs.AI": 1, "cs.CE": 0, "cs.CL": 1, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
2502.11509
DifCluE: Generating Counterfactual Explanations with Diffusion Autoencoders and modal clustering
[ "cs.LG", "cs.AI" ]
Generating multiple counterfactual explanations for different modes within a class presents a significant challenge, as these modes are distinct yet converge under the same classification. Diffusion probabilistic models (DPMs) have demonstrated a strong ability to capture the underlying modes of data distributions. In this paper, we harness the power of a Diffusion Autoencoder to generate multiple distinct counterfactual explanations. By clustering in the latent space, we uncover the directions corresponding to the different modes within a class, enabling the generation of diverse and meaningful counterfactuals. We introduce a novel methodology, DifCluE, which consistently identifies these modes and produces more reliable counterfactual explanations. Our experimental results demonstrate that DifCluE outperforms the current state-of-the-art in generating multiple counterfactual explanations, offering a significant advancement in model interpretability.
{ "Other": 0, "cs.AI": 1, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 1, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
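The clustering step (find the modes of a class in latent space; a counterfactual direction for a sample z is then centroid - z) can be sketched with plain k-means as a toy stand-in for DifCluE's latent-space pipeline; all names are hypothetical:

```python
import numpy as np

def mode_directions(latents, k=2, iters=25, seed=0):
    # Find k modes in latent space with Lloyd's k-means and return
    # their centroids; moving a sample toward a centroid gives a
    # mode-specific counterfactual direction.
    rng = np.random.default_rng(seed)
    cent = latents[rng.choice(len(latents), k, replace=False)].copy()
    for _ in range(iters):
        lab = ((latents[:, None] - cent[None]) ** 2).sum(-1).argmin(1)
        for j in range(k):
            if (lab == j).any():
                cent[j] = latents[lab == j].mean(0)
    return cent
```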
2502.11513
MaZO: Masked Zeroth-Order Optimization for Multi-Task Fine-Tuning of Large Language Models
[ "cs.LG", "cs.AI" ]
Large language models have demonstrated exceptional capabilities across diverse tasks, but their fine-tuning demands significant memory, posing challenges for resource-constrained environments. Zeroth-order (ZO) optimization provides a memory-efficient alternative by eliminating the need for backpropagation. However, ZO optimization suffers from high gradient variance, and prior research has largely focused on single-task learning, leaving its application to multi-task learning unexplored. Multi-task learning is crucial for leveraging shared knowledge across tasks to improve generalization, yet it introduces unique challenges under ZO settings, such as amplified gradient variance and collinearity. In this paper, we present MaZO, the first framework specifically designed for multi-task LLM fine-tuning under ZO optimization. MaZO tackles these challenges at the parameter level through two key innovations: a weight importance metric to identify critical parameters and a multi-task weight update mask to selectively update these parameters, reducing the dimensionality of the parameter space and mitigating task conflicts. Experiments demonstrate that MaZO achieves state-of-the-art performance, surpassing even multi-task learning methods designed for first-order optimization.
{ "Other": 0, "cs.AI": 1, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 1, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
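A masked zeroth-order update can be illustrated with a two-point SPSA-style estimator restricted to selected parameters; this sketches the general idea, not MaZO's weight-importance metric or exact update rule:

```python
import numpy as np

def masked_zo_step(params, loss_fn, mask, lr=0.05, eps=1e-3, seed=0):
    # One masked zeroth-order step: estimate a directional gradient
    # from two loss evaluations and update only the parameters the
    # mask selects; no backpropagation is needed.
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(params.shape) * mask  # perturb masked coords only
    g = (loss_fn(params + eps * z) - loss_fn(params - eps * z)) / (2 * eps)
    return params - lr * g * z
```

On a quadratic loss, iterating the step drives the masked coordinate toward zero while leaving unmasked coordinates exactly untouched.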
2502.11514
Investigating Inference-time Scaling for Chain of Multi-modal Thought: A Preliminary Study
[ "cs.CL" ]
Recently, inference-time scaling of chain-of-thought (CoT) has been demonstrated as a promising approach for addressing multi-modal reasoning tasks. While existing studies have predominantly centered on text-based thinking, the integration of both visual and textual modalities within the reasoning process remains unexplored. In this study, we pioneer the exploration of inference-time scaling with multi-modal thought, aiming to bridge this gap. To provide a comprehensive analysis, we systematically investigate popular sampling-based and tree search-based inference-time scaling methods on 10 challenging tasks spanning various domains. In addition, we uniformly adopt a consistency-enhanced verifier to ensure effective guidance for both methods across different thought paradigms. Results show that multi-modal thought yields better performance than conventional text-only thought, and that blending the two types of thought fosters more diverse thinking. Despite these advantages, multi-modal thoughts necessitate higher token consumption for processing richer visual inputs, which raises concerns in practical applications. We hope that our findings on the merits and drawbacks of this research line will inspire future work in the field.
{ "Other": 0, "cs.AI": 0, "cs.CE": 0, "cs.CL": 1, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
2502.11515
SayAnything: Audio-Driven Lip Synchronization with Conditional Video Diffusion
[ "cs.CV" ]
Recent advances in diffusion models have led to significant progress in audio-driven lip synchronization. However, existing methods typically rely on constrained audio-visual alignment priors or multi-stage learning of intermediate representations to force lip motion synthesis. This leads to complex training pipelines and limited motion naturalness. In this paper, we present SayAnything, a conditional video diffusion framework that directly synthesizes lip movements from audio input while preserving speaker identity. Specifically, we propose three specialized modules including identity preservation module, audio guidance module, and editing control module. Our novel design effectively balances different condition signals in the latent space, enabling precise control over appearance, motion, and region-specific generation without requiring additional supervision signals or intermediate representations. Extensive experiments demonstrate that SayAnything generates highly realistic videos with improved lip-teeth coherence, enabling unseen characters to say anything, while effectively generalizing to animated characters.
{ "Other": 0, "cs.AI": 0, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 1, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
2502.11516
CRB-Rate Tradeoff in RSMA-enabled Near-Field Integrated Multi-Target Sensing and Multi-User Communications
[ "cs.IT", "math.IT" ]
Extremely large-scale antenna arrays enhance spectral efficiency and spatial resolution in integrated sensing and communication (ISAC) networks while expanding the Rayleigh distance, triggering a shift from conventional far-field plane waves to near-field (NF) spherical waves. However, full-digital beamforming is infeasible due to the need for dedicated radio frequency (RF) chains. To address this, NF-ISAC with a rate-splitting multiple access (RSMA) scheme is developed for advanced interference management, considering fully-connected and partially-connected hybrid analog and digital (HAD) beamforming architectures. Specifically, the Cram\'{e}r-Rao bound (CRB) for joint distance and angle sensing is derived, and the achievable performance region between the max-min communication rate and the multi-target CRB is defined. To fully characterize the Pareto boundary of the CRB-rate region, a sensing-centric minimization problem is formulated under communication rate constraints for the two HAD beamforming architectures. A penalty dual decomposition (PDD)-based double-loop algorithm is developed to optimize fully-connected HAD beamformers. To reduce computational complexity, a two-stage design algorithm for fully-connected HAD beamforming is also proposed. Additionally, the PDD-based double-loop algorithm is extended to the partially-connected HAD architecture. Simulations demonstrate that the proposed schemes and algorithms: 1) achieve performance comparable to a fully digital beamformer with fewer RF chains, 2) outperform space division multiple access and far-field ISAC, and 3) yield enhanced CRB-rate trade-off performance.
{ "Other": 0, "cs.AI": 0, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 1, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
2502.11517
Learning to Keep a Promise: Scaling Language Model Decoding Parallelism with Learned Asynchronous Decoding
[ "cs.CL", "cs.DC", "cs.LG" ]
Decoding with autoregressive large language models (LLMs) traditionally occurs sequentially, generating one token after another. An emerging line of work has explored parallel decoding by identifying and simultaneously generating semantically independent chunks of LLM responses. However, these techniques rely on hand-crafted heuristics tied to syntactic structures like lists and paragraphs, making them rigid and imprecise. We present PASTA, a learning-based system that teaches LLMs to identify semantic independence and express parallel decoding opportunities in their own responses. At its core are PASTA-LANG and its interpreter: PASTA-LANG is an annotation language that enables LLMs to express semantic independence in their own responses; the language interpreter acts on these annotations to orchestrate parallel decoding on-the-fly at inference time. Through a two-stage finetuning process, we train LLMs to generate PASTA-LANG annotations that optimize both response quality and decoding speed. Evaluation on AlpacaEval, an instruction-following benchmark, shows that our approach Pareto-dominates existing methods in terms of decoding speed and response quality; our results demonstrate geometric mean speedups ranging from 1.21x to 1.93x with corresponding quality changes of +2.2% to -7.1%, measured by length-controlled win rates against a sequential decoding baseline.
{ "Other": 1, "cs.AI": 0, "cs.CE": 0, "cs.CL": 1, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 1, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
2502.11518
Generative Multi-Agent Collaboration in Embodied AI: A Systematic Review
[ "cs.MA", "cs.AI", "cs.LG" ]
Embodied multi-agent systems (EMAS) have attracted growing attention for their potential to address complex, real-world challenges in areas such as logistics and robotics. Recent advances in foundation models pave the way for generative agents capable of richer communication and adaptive problem-solving. This survey provides a systematic examination of how EMAS can benefit from these generative capabilities. We propose a taxonomy that categorizes EMAS by system architectures and embodiment modalities, emphasizing how collaboration spans both physical and virtual contexts. Central building blocks, perception, planning, communication, and feedback, are then analyzed to illustrate how generative techniques bolster system robustness and flexibility. Through concrete examples, we demonstrate the transformative effects of integrating foundation models into embodied, multi-agent frameworks. Finally, we discuss challenges and future directions, underlining the significant promise of EMAS to reshape the landscape of AI-driven collaboration.
{ "Other": 0, "cs.AI": 1, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 1, "cs.MA": 1, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
2502.11519
UniGO: A Unified Graph Neural Network for Modeling Opinion Dynamics on Graphs
[ "cs.SI", "cs.AI" ]
Polarization and fragmentation in social media amplify user biases, making it increasingly important to understand the evolution of opinions. Opinion dynamics provide interpretability for studying opinion evolution, yet incorporating these insights into predictive models remains challenging. This challenge arises from the diversity of opinion fusion rules and the difficulty of capturing equilibrium states while avoiding over-smoothing. This paper constructs a unified opinion dynamics model to integrate different opinion fusion rules and generates corresponding synthetic datasets. To fully leverage the advantages of unified opinion dynamics, we introduce UniGO, a framework for modeling opinion evolution on graphs. Using a coarsen-refine mechanism, UniGO efficiently models opinion dynamics through a graph neural network, mitigating over-smoothing while preserving equilibrium phenomena. UniGO leverages pretraining on synthetic datasets, which enhances its ability to generalize to real-world scenarios, providing a viable paradigm for applications of opinion dynamics. Experimental results on both synthetic and real-world datasets demonstrate UniGO's effectiveness in capturing complex opinion formation processes and predicting future evolution. The pretrained model also shows strong generalization capability, validating the benefits of using synthetic data to boost real-world performance.
{ "Other": 0, "cs.AI": 1, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 1, "cs.SY": 0 }
2502.11520
AURORA: Automated Training Framework of Universal Process Reward Models via Ensemble Prompting and Reverse Verification
[ "cs.CL" ]
The reasoning capabilities of advanced large language models (LLMs) like o1 have revolutionized artificial intelligence applications. Nevertheless, evaluating and optimizing complex reasoning processes remain significant challenges due to diverse policy distributions and the inherent limitations of human effort and accuracy. In this paper, we present AURORA, a novel automated framework for training universal process reward models (PRMs) using ensemble prompting and reverse verification. The framework employs a two-phase approach: First, it uses diverse prompting strategies and ensemble methods to perform automated annotation and evaluation of processes, ensuring robust assessments for reward learning. Second, it leverages practical reference answers for reverse verification, enhancing the model's ability to validate outputs and improving training accuracy. To assess the framework's performance, we extend beyond the existing ProcessBench benchmark by introducing UniversalBench, which evaluates reward predictions across full trajectories under diverse policy distributions with long Chain-of-Thought (CoT) outputs. Experimental results demonstrate that AURORA enhances process evaluation accuracy and improves PRMs' accuracy for diverse policy distributions and long-CoT responses. The project will be open-sourced at https://auroraprm.github.io/. The Universal-PRM-7B is available at https://huggingface.co/infly/Universal-PRM-7B.
{ "Other": 0, "cs.AI": 0, "cs.CE": 0, "cs.CL": 1, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
2502.11521
DeFiScope: Detecting Various DeFi Price Manipulations with LLM Reasoning
[ "cs.CR", "cs.AI" ]
DeFi (Decentralized Finance) is one of the most important applications of today's cryptocurrencies and smart contracts. It manages hundreds of billions in Total Value Locked (TVL) on-chain, yet it remains susceptible to common DeFi price manipulation attacks. Despite state-of-the-art (SOTA) systems like DeFiRanger and DeFort, we found that they are less effective against non-standard price models in custom DeFi protocols, which account for 44.2% of the 95 DeFi price manipulation attacks reported over the past three years. In this paper, we introduce the first LLM-based approach, DeFiScope, for detecting DeFi price manipulation attacks in both standard and custom price models. Our insight is that large language models (LLMs) have certain intelligence to abstract price calculation from code and infer the trend of token price changes based on the extracted price models. To further strengthen LLMs in this aspect, we leverage Foundry to synthesize on-chain data and use it to fine-tune a DeFi price-specific LLM. Together with the high-level DeFi operations recovered from low-level transaction data, DeFiScope detects various DeFi price manipulations according to systematically mined patterns. Experimental results show that DeFiScope achieves a high precision of 96% and a recall rate of 80%, significantly outperforming SOTA approaches. Moreover, we evaluate DeFiScope's cost-effectiveness and demonstrate its practicality by helping our industry partner confirm 147 real-world price manipulation attacks, including discovering 81 previously unknown historical incidents.
{ "Other": 0, "cs.AI": 1, "cs.CE": 0, "cs.CL": 0, "cs.CR": 1, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
2502.11525
Training Large Language Models to be Better Rule Followers
[ "cs.CL" ]
Large language models (LLMs) have shown impressive performance across a wide range of tasks. However, they often exhibit unexpected failures in seemingly straightforward tasks, suggesting a reliance on case-based reasoning rather than rule-based reasoning. While the vast training corpus of LLMs contains numerous textual "rules", current training methods fail to leverage these rules effectively. Crucially, the relationships between these "rules" and their corresponding "instances" are not explicitly modeled. As a result, while LLMs can often recall rules with ease, they fail to apply these rules strictly and consistently in relevant reasoning scenarios. In this paper, we investigate the rule-following capabilities of LLMs and propose Meta Rule-Following Fine-Tuning (Meta-RFFT) to enhance the cross-task transferability of rule-following abilities. We first construct a dataset of 88 tasks requiring following rules, encompassing diverse reasoning domains. We demonstrate through extensive experiments that models trained on large-scale rule-following tasks are better rule followers, outperforming the baselines in both downstream fine-tuning and few-shot prompting scenarios. This highlights the cross-task transferability of models with the aid of Meta-RFFT. Furthermore, we examine the influence of factors such as dataset size, rule formulation, and in-context learning.
{ "Other": 0, "cs.AI": 0, "cs.CE": 0, "cs.CL": 1, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
2502.11528
A Survey of Personalized Large Language Models: Progress and Future Directions
[ "cs.AI" ]
Large Language Models (LLMs) excel in handling general knowledge tasks, yet they struggle with user-specific personalization, such as understanding individual emotions, writing styles, and preferences. Personalized Large Language Models (PLLMs) tackle these challenges by leveraging individual user data, such as user profiles, historical dialogues, content, and interactions, to deliver responses that are contextually relevant and tailored to each user's specific needs. This is a highly valuable research topic, as PLLMs can significantly enhance user satisfaction and have broad applications in conversational agents, recommendation systems, emotion recognition, medical assistants, and more. This survey reviews recent advancements in PLLMs from three technical perspectives: prompting for personalized context (input level), finetuning for personalized adapters (model level), and alignment for personalized preferences (objective level). To provide deeper insights, we also discuss current limitations and outline several promising directions for future research. Updated information about this survey can be found at https://github.com/JiahongLiu21/Awesome-Personalized-Large-Language-Models.
{ "Other": 0, "cs.AI": 1, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
2502.11532
Control-CLIP: Decoupling Category and Style Guidance in CLIP for Specific-Domain Generation
[ "cs.CV" ]
Text-to-image diffusion models have shown remarkable capabilities of generating high-quality images closely aligned with textual inputs. However, the effectiveness of text guidance heavily relies on the CLIP text encoder, which is trained to pay more attention to general content but struggles to capture semantics in specific domains like styles. As a result, generation models tend to fail on prompts like "a photo of a cat in Pokemon style" by simply producing images depicting "a photo of a cat". To fill this gap, we propose Control-CLIP, a novel decoupled CLIP fine-tuning framework that enables the CLIP model to learn the meaning of category and style in a complementary manner. With specially designed fine-tuning tasks on minimal data and a modified cross-attention mechanism, Control-CLIP can precisely guide the diffusion model to a specific domain. Moreover, the parameters of the diffusion model remain entirely unchanged, preserving the original generation performance and diversity. Experiments across multiple domains confirm the effectiveness of our approach, particularly highlighting its robust plug-and-play capability in generating content with various specific styles.
{ "Other": 0, "cs.AI": 0, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 1, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
2502.11533
Be Cautious When Merging Unfamiliar LLMs: A Phishing Model Capable of Stealing Privacy
[ "cs.CL" ]
Model merging is a widespread technology in large language models (LLMs) that integrates multiple task-specific LLMs into a unified one, enabling the merged model to inherit the specialized capabilities of these LLMs. Most task-specific LLMs are sourced from open-source communities and have not undergone rigorous auditing, potentially imposing risks in model merging. This paper highlights an overlooked privacy risk: \textit{an unsafe model could compromise the privacy of other LLMs involved in the model merging.} Specifically, we propose PhiMM, a privacy attack approach that trains a phishing model capable of stealing privacy using a crafted privacy phishing instruction dataset. Furthermore, we introduce a novel model cloaking method that mimics a specialized capability to conceal attack intent, luring users into merging the phishing model. Once victims merge the phishing model, the attacker can extract personally identifiable information (PII) or infer membership information (MI) by querying the merged model with the phishing instruction. Experimental results show that merging a phishing model increases the risk of privacy breaches. Compared to the results before merging, PII leakage increased by 3.9\% and MI leakage increased by 17.4\% on average. We release the code of PhiMM through a link.
{ "Other": 0, "cs.AI": 0, "cs.CE": 0, "cs.CL": 1, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
2502.11534
SurgPose: a Dataset for Articulated Robotic Surgical Tool Pose Estimation and Tracking
[ "cs.RO", "cs.CV" ]
Accurate and efficient surgical robotic tool pose estimation is of fundamental significance to downstream applications such as augmented reality (AR) in surgical training and learning-based autonomous manipulation. While significant advancements have been made in pose estimation for humans and animals, it is still a challenge in surgical robotics due to the scarcity of published data. The relatively large absolute error of the da Vinci end effector kinematics and arduous calibration procedure make calibrated kinematics data collection expensive. Driven by this limitation, we collected a dataset, dubbed SurgPose, providing instance-aware semantic keypoints and skeletons for visual surgical tool pose estimation and tracking. By marking keypoints using ultraviolet (UV) reactive paint, which is invisible under white light and fluorescent under UV light, we execute the same trajectory under different lighting conditions to collect raw videos and keypoint annotations, respectively. The SurgPose dataset consists of approximately 120k surgical instrument instances (80k for training and 40k for validation) of 6 categories. Each instrument instance is labeled with 7 semantic keypoints. Since the videos are collected in stereo pairs, the 2D pose can be lifted to 3D based on stereo-matching depth. In addition to releasing the dataset, we test a few baseline approaches to surgical instrument tracking to demonstrate the utility of SurgPose. More details can be found at surgpose.github.io.
{ "Other": 0, "cs.AI": 0, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 1, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 1, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
2502.11535
Disentangled Iterative Surface Fitting for Contact-stable Grasp Planning
[ "cs.RO" ]
In this work, we address the limitation of surface fitting-based grasp planning algorithm, which primarily focuses on geometric alignment between the gripper and object surface while overlooking the stability of contact point distribution, often resulting in unstable grasps due to inadequate contact configurations. To overcome this limitation, we propose a novel surface fitting algorithm that integrates contact stability while preserving geometric compatibility. Inspired by human grasping behavior, our method disentangles the grasp pose optimization into three sequential steps: (1) rotation optimization to align contact normals, (2) translation refinement to improve Center of Mass (CoM) alignment, and (3) gripper aperture adjustment to optimize contact point distribution. We validate our approach through simulations on ten YCB dataset objects, demonstrating an 80% improvement in grasp success over conventional surface fitting methods that disregard contact stability. Further details can be found on our project page: https://tomoya-yamanokuchi.github.io/disf-project-page/.
{ "Other": 0, "cs.AI": 0, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 1, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
2502.11537
$\text{M}^{\text{3}}$: A Modular World Model over Streams of Tokens
[ "cs.LG", "cs.AI" ]
Token-based world models emerged as a promising modular framework, modeling dynamics over token streams while optimizing tokenization separately. While successful in visual environments with discrete actions (e.g., Atari games), their broader applicability remains uncertain. In this paper, we introduce $\text{M}^{\text{3}}$, a $\textbf{m}$odular $\textbf{w}$orld $\textbf{m}$odel that extends this framework, enabling flexible combinations of observation and action modalities through independent modality-specific components. $\text{M}^{\text{3}}$ integrates several improvements from existing literature to enhance agent performance. Through extensive empirical evaluation across diverse benchmarks, $\text{M}^{\text{3}}$ achieves state-of-the-art sample efficiency for planning-free world models. Notably, among these methods, it is the first to reach a human-level median score on Atari 100K, with superhuman performance on 13 games. Our code and model weights are publicly available at https://github.com/leor-c/M3.
{ "Other": 0, "cs.AI": 1, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 1, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
2502.11538
How to Divide: A Set Partitioning Strategy Balancing the Trade-off Between Intra-Subset Correlation and Inter-Subset Gain Mutual Influence in Distributed Attack Detection Scheduling Task
[ "cs.DC", "cs.SY", "eess.SY" ]
Recently, the efficiency of attack detection in large-scale sensor networks has remained a critical research challenge. Studies have shown that while distributed algorithms offer higher efficiency compared to centralized approaches, they often come at the cost of reduced performance. To strike a balance between detection efficiency and performance in large-scale sensor networks, this paper explores the feasibility of extending existing algorithms to a distributed framework. Starting from the perspective of set partitioning strategies, this study analyzes the key factor that contributes to the performance differences between distributed and centralized algorithms. By examining the gain mutual influence of sensor subsets, an optimal set partitioning strategy is designed to minimize inter-subset mutual influence while enhancing intra-subset correlation. To further reduce the computational cost of gain updates, a suboptimal partitioning strategy based on Grassmann distance is proposed, improving the efficiency of selecting suspicious sensors. Theoretical analysis demonstrates that this approach effectively reduces the computational cost of gain updates while maintaining detection performance. Finally, simulation results validate the effectiveness of the proposed method in enhancing attack detection performance.
{ "Other": 1, "cs.AI": 0, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 1 }
2502.11541
MuSC: Improving Complex Instruction Following with Multi-granularity Self-Contrastive Training
[ "cs.CL", "cs.AI" ]
Complex instruction-following with elaborate constraints is imperative for Large Language Models (LLMs). While existing methods have constructed data for complex instruction alignment, they all rely on a more advanced model, especially GPT-4, limiting their application. In this paper, we propose a Multi-granularity Self-Contrastive Training (MuSC) framework, to improve the complex instruction alignment without relying on a stronger model. Our method is conducted on both coarse and fine granularity. On coarse-granularity, we construct constraint-aware preference data based on instruction decomposition and recombination. On fine-granularity, we perform token-aware preference optimization with dynamic token-level supervision. Our method is evaluated on open-sourced models, and experiment results show our method achieves significant improvement on both complex and general instruction-following benchmarks, surpassing previous self-alignment methods.
{ "Other": 0, "cs.AI": 1, "cs.CE": 0, "cs.CL": 1, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
2502.11544
Evaluating o1-Like LLMs: Unlocking Reasoning for Translation through Comprehensive Analysis
[ "cs.CL" ]
The o1-Like LLMs are transforming AI by simulating human cognitive processes, but their performance in multilingual machine translation (MMT) remains underexplored. This study examines: (1) how o1-Like LLMs perform in MMT tasks and (2) what factors influence their translation quality. We evaluate multiple o1-Like LLMs and compare them with traditional models like ChatGPT and GPT-4o. Results show that o1-Like LLMs establish new multilingual translation benchmarks, with DeepSeek-R1 surpassing GPT-4o in contextless tasks. They demonstrate strengths in historical and cultural translation but exhibit a tendency toward rambling in Chinese-centric outputs. Further analysis reveals three key insights: (1) High inference costs and slower processing speeds make complex translation tasks more resource-intensive. (2) Translation quality improves with model size, enhancing commonsense reasoning and cultural translation. (3) The temperature parameter significantly impacts output quality: lower temperatures yield more stable and accurate translations, while higher temperatures reduce coherence and precision.
{ "Other": 0, "cs.AI": 0, "cs.CE": 0, "cs.CL": 1, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
2502.11546
DCAD-2000: A Multilingual Dataset across 2000+ Languages with Data Cleaning as Anomaly Detection
[ "cs.CL" ]
The rapid development of multilingual large language models (LLMs) highlights the need for high-quality, diverse, and clean multilingual datasets. In this paper, we introduce DCAD-2000 (Data Cleaning as Anomaly Detection), a large-scale multilingual corpus built using newly extracted Common Crawl data and existing multilingual datasets. DCAD-2000 includes over 2,282 languages, 46.72TB of data, and 8.63 billion documents, spanning 155 high- and medium-resource languages and 159 writing scripts. To overcome the limitations of current data cleaning methods, which rely on manual heuristic thresholds, we propose reframing data cleaning as an anomaly detection task. This dynamic filtering approach significantly enhances data quality by identifying and removing noisy or anomalous content. We evaluate the quality of DCAD-2000 on the FineTask benchmark, demonstrating substantial improvements in multilingual dataset quality and task performance.
{ "Other": 0, "cs.AI": 0, "cs.CE": 0, "cs.CL": 1, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
2502.11554
Toward Metaphor-Fluid Conversation Design for Voice User Interfaces
[ "cs.HC", "cs.AI", "cs.CL", "cs.CY", "cs.ET" ]
Metaphors play a critical role in shaping user experiences with Voice User Interfaces (VUIs), yet existing designs often rely on static, human-centric metaphors that fail to adapt to diverse contexts and user needs. This paper introduces Metaphor-Fluid Design, a novel approach that dynamically adjusts metaphorical representations based on conversational use-contexts. We compare this approach to a Default VUI, which characterizes the present implementation of commercial VUIs commonly designed around the persona of an assistant, offering a uniform interaction style across contexts. In Study 1 (N=130), metaphors were mapped to four key use-contexts (commands, information seeking, sociality, and error recovery) along the dimensions of formality and hierarchy, revealing distinct preferences for task-specific metaphorical designs. Study 2 (N=91) evaluates a Metaphor-Fluid VUI against a Default VUI, showing that the Metaphor-Fluid VUI enhances perceived intention to adopt, enjoyment, and likability by aligning better with user expectations for different contexts. However, individual differences in metaphor preferences highlight the need for personalization. These findings challenge the one-size-fits-all paradigm of VUI design and demonstrate the potential of Metaphor-Fluid Design to create more adaptive and engaging human-AI interactions.
{ "Other": 1, "cs.AI": 1, "cs.CE": 0, "cs.CL": 1, "cs.CR": 0, "cs.CV": 0, "cs.CY": 1, "cs.DB": 0, "cs.HC": 1, "cs.IR": 0, "cs.IT": 0, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
2502.11555
Equilibrate RLHF: Towards Balancing Helpfulness-Safety Trade-off in Large Language Models
[ "cs.AI" ]
Fine-tuning large language models (LLMs) based on human preferences, commonly achieved through reinforcement learning from human feedback (RLHF), has been effective in improving their performance. However, maintaining LLM safety throughout the fine-tuning process remains a significant challenge, as resolving conflicts between safety and helpfulness can be non-trivial. Typically, the safety alignment of LLM is trained on data with safety-related categories. However, our experiments find that naively increasing the scale of safety training data usually leads the LLMs to an ``overly safe'' state rather than a ``truly safe'' state, boosting the refusal rate through extensive safety-aligned data without genuinely understanding the requirements for safe responses. Such an approach can inadvertently diminish the models' helpfulness. To understand the phenomenon, we first investigate the role of safety data by categorizing them into three different groups, and observe that each group behaves differently as training data scales up. To boost the balance between safety and helpfulness, we propose an Equilibrate RLHF framework including a Fine-grained Data-centric (FDC) approach that achieves better safety alignment even with fewer training data, and an Adaptive Message-wise Alignment (AMA) approach, which selectively highlights the key segments through a gradient masking strategy. Extensive experimental results demonstrate that our approach significantly enhances the safety alignment of LLMs while balancing safety and helpfulness.
{ "Other": 0, "cs.AI": 1, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
2502.11557
Fast Maximum Common Subgraph Search: A Redundancy-Reduced Backtracking Approach
[ "cs.DB", "cs.DS" ]
Given two input graphs, finding the largest subgraph that occurs in both, i.e., finding the maximum common subgraph, is a fundamental operator for evaluating the similarity between two graphs in graph data analysis. Existing works for solving the problem are of either theoretical or practical interest, but not both. Specifically, the algorithms with a theoretical guarantee on the running time are known to be not practically efficient; algorithms following the recently proposed backtracking framework called McSplit, run fast in practice but do not have any theoretical guarantees. In this paper, we propose a new backtracking algorithm called RRSplit, which at once achieves better practical efficiency and provides a non-trivial theoretical guarantee on the worst-case running time. To achieve the former, we develop a series of reductions and upper bounds for reducing redundant computations, i.e., the time spent exploring unpromising branches that hold no maximum common subgraph. To achieve the latter, we formally prove that RRSplit incurs a worst-case time complexity which matches the best-known complexity for the problem. Finally, we conduct extensive experiments on four benchmark graph collections, and the results demonstrate that our algorithm outperforms the practical state-of-the-art by several orders of magnitude.
{ "Other": 1, "cs.AI": 0, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 1, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
2502.11559
Auto-Search and Refinement: An Automated Framework for Gender Bias Mitigation in Large Language Models
[ "cs.CL", "cs.AI" ]
Pre-training large language models (LLMs) on vast text corpora enhances natural language processing capabilities but risks encoding social biases, particularly gender bias. While parameter-modification methods like fine-tuning mitigate bias, they are resource-intensive, unsuitable for closed-source models, and lack adaptability to evolving societal norms. Instruction-based approaches offer flexibility but often compromise task performance. To address these limitations, we propose $\textit{FaIRMaker}$, an automated and model-independent framework that employs an $\textbf{auto-search and refinement}$ paradigm to adaptively generate Fairwords, which act as instructions integrated into input queries to reduce gender bias and enhance response quality. Extensive experiments demonstrate that $\textit{FaIRMaker}$ automatically searches for and dynamically refines Fairwords, effectively mitigating gender bias while preserving task integrity and ensuring compatibility with both API-based and open-source LLMs.
{ "Other": 0, "cs.AI": 1, "cs.CE": 0, "cs.CL": 1, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
2502.11560
A Survey of Automatic Prompt Engineering: An Optimization Perspective
[ "cs.AI", "cs.LG" ]
The rise of foundation models has shifted focus from resource-intensive fine-tuning to prompt engineering, a paradigm that steers model behavior through input design rather than weight updates. While manual prompt engineering faces limitations in scalability, adaptability, and cross-modal alignment, automated methods, spanning foundation model (FM) based optimization, evolutionary methods, gradient-based optimization, and reinforcement learning, offer promising solutions. Existing surveys, however, remain fragmented across modalities and methodologies. This paper presents the first comprehensive survey on automated prompt engineering through a unified optimization-theoretic lens. We formalize prompt optimization as a maximization problem over discrete, continuous, and hybrid prompt spaces, systematically organizing methods by their optimization variables (instructions, soft prompts, exemplars), task-specific objectives, and computational frameworks. By bridging theoretical formulation with practical implementations across text, vision, and multimodal domains, this survey establishes a foundational framework for both researchers and practitioners, while highlighting underexplored frontiers in constrained optimization and agent-oriented prompt design.
{ "Other": 0, "cs.AI": 1, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 1, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
2502.11562
Reinforced Information Retrieval
[ "cs.CL" ]
While retrieval techniques are widely used in practice, they still face significant challenges in cross-domain scenarios. Recently, generation-augmented methods have emerged as a promising solution to this problem. These methods enhance raw queries by incorporating additional information from an LLM-based generator, facilitating more direct retrieval of relevant documents. However, existing methods struggle with highly specialized situations that require extensive domain expertise. To address this problem, we present \textbf{Reinforced-IR}, a novel approach that jointly adapts a pre-trained retriever and generator for precise cross-domain retrieval. A key innovation of Reinforced-IR is its \textbf{Self-Boosting} framework, which enables retriever and generator to learn from each other's feedback. Specifically, the generator is reinforced to generate query augmentations that enhance the retriever's performance, while the retriever is trained to better discriminate the relevant documents identified by the generator. This iterative process allows the end-to-end retrieval performance to be progressively optimized using an unlabeled corpus from the target domain. In our experiment, Reinforced-IR outperforms existing domain adaptation methods by a large margin, leading to substantial improvements in retrieval quality across a wide range of application scenarios.
{ "Other": 0, "cs.AI": 0, "cs.CE": 0, "cs.CL": 1, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
2502.11563
Leader and Follower: Interactive Motion Generation under Trajectory Constraints
[ "cs.RO", "cs.AI" ]
With the rapid advancement of game and film production, generating interactive motion from texts has garnered significant attention due to its potential to revolutionize content creation processes. In many practical applications, there is a need to impose strict constraints on the motion range or trajectory of virtual characters. However, existing methods that rely solely on textual input face substantial challenges in accurately capturing the user's intent, particularly in specifying the desired trajectory. As a result, the generated motions often lack plausibility and accuracy. Moreover, existing trajectory-based methods for customized motion generation rely on retraining for single-actor scenarios, which limits flexibility and adaptability to different datasets, as well as interactivity in two-actor motions. To generate interactive motion following specified trajectories, this paper decouples complex motion into a Leader-Follower dynamic, inspired by role allocation in partner dancing. Based on this framework, this paper explores the motion range refinement process in interactive motion generation and proposes a training-free approach, integrating a Pace Controller and a Kinematic Synchronization Adapter. The framework enhances the ability of existing models to generate motion that adheres to trajectory by controlling the leader's movement and correcting the follower's motion to align with the leader. Experimental results show that the proposed approach, by better leveraging trajectory information, outperforms existing methods in both realism and accuracy.
{ "Other": 0, "cs.AI": 1, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 1, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
2502.11564
Continuous Diffusion Model for Language Modeling
[ "cs.LG" ]
Diffusion models have emerged as a promising alternative to autoregressive models in modeling discrete categorical data. Yet diffusion models that directly work on discrete data space do not fully exploit the power of iterative refinement, as the signals are lost during the transition between discrete states. Existing continuous diffusion models for discrete data have limited performance compared to discrete approaches, and the unclear link between them restricts the development of diffusion models for discrete data. In this work, we propose a continuous diffusion model for language modeling that incorporates the geometry of the underlying categorical distribution. We establish a connection between the discrete diffusion and continuous flow on the statistical manifold, and building on the analogy, we introduce a simple design for the diffusion process that generalizes previous discrete diffusion models. We further propose a simulation-free training framework based on radial symmetry and a simple technique to address the high dimensionality of the manifold. Comprehensive experiments on language modeling benchmarks and other modalities show that our method outperforms existing discrete diffusion models and approaches the performance of autoregressive models. Codes available at \href{https://github.com/harryjo97/RDLM}{https://github.com/harryjo97/RDLM}.
{ "Other": 0, "cs.AI": 0, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 1, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
2502.11565
STARS-Enabled Full-Duplex Two-Way mMIMO System Under Spatially-Correlated Channels
[ "cs.IT", "math.IT" ]
\underline{S}imultaneous \underline{t}ransmitting \underline{a}nd \underline{r}eflecting \underline{s}urface (STARS)-assisted systems have emerged to fill this gap by providing $ 360^{\circ}$ wireless coverage. In parallel, full-duplex (FD) communication offers a higher achievable rate through efficient spectrum utilization compared to the half-duplex (HD) counterpart. Moreover, two-way/bi-directional communications in an FD system can further enhance the system's spectral efficiency. Hence, in this paper, we propose a STARS-enabled massive MIMO deployment in an FD two-way communication network for highly efficient spectrum utilization, while covering the dead zones around the STARS. This model enables simultaneous information exchange between multiple nodes, while \emph{potentially} doubling the spectral efficiency (SE). By invoking the use-and-then-forget (UaTF) combining scheme, we derive a closed-form expression for an achievable SE at each user of the system considering both uplink and downlink communications based on statistical channel state information (CSI), while also accounting for imperfect CSI and correlated fading conditions. Moreover, we formulate an optimization problem to obtain an optimal passive beamforming matrix design at the STARS that maximizes the sum achievable SE. The considered problem is non-convex and we propose a provably-convergent low-complexity algorithm, termed as \underline{pro}jected \underline{gr}adient \underline{a}scent \underline{m}ethod (ProGrAM), to obtain a stationary solution. Extensive numerical results are provided to establish the performance superiority of the FD STARS-enabled system over the HD STARS-enabled and FD conventional RIS (cRIS)-enabled counterparts, and also to show the effect of different parameters of interest on the system performance.
{ "Other": 0, "cs.AI": 0, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 1, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
2502.11569
Towards Reasoning Ability of Small Language Models
[ "cs.CL", "cs.AI", "cs.LG" ]
Reasoning has long been viewed as an emergent property of large language models (LLMs), appearing at or above a certain scale ($\sim$100B parameters). However, recent studies challenge this assumption, showing that small language models (SLMs) can also achieve competitive reasoning performance. SLMs are increasingly favored for their efficiency and deployability. However, there is a lack of systematic study on the reasoning abilities of diverse SLMs, including those trained from scratch or derived from LLMs through quantization, pruning, and distillation. This raises a critical question: Can SLMs achieve reasoning abilities comparable to LLMs? In this work, we systematically survey, benchmark, and analyze 72 SLMs from six model families across 14 reasoning benchmarks. For reliable evaluation, we examine four evaluation methods and compare four LLM judges against human evaluations on 800 data points. We repeat all experiments three times to ensure a robust performance assessment. Additionally, we analyze the impact of different prompting strategies in small models. Beyond accuracy, we also evaluate model robustness under adversarial conditions and intermediate reasoning steps. Our findings challenge the assumption that scaling is the only way to achieve strong reasoning. Instead, we foresee a future where SLMs with strong reasoning capabilities can be developed through structured training or post-training compression. They can serve as efficient alternatives to LLMs for reasoning-intensive tasks.
{ "Other": 0, "cs.AI": 1, "cs.CE": 0, "cs.CL": 1, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 1, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
2502.11570
Towards a Trustworthy Anomaly Detection for Critical Applications through Approximated Partial AUC Loss
[ "cs.LG", "cs.CV" ]
Anomaly Detection is a crucial step for critical applications such as in the industrial, medical or cybersecurity domains. These sectors share the same requirement of handling the different types of classification errors differently. Indeed, even if false positives are acceptable, false negatives are not, because they would reflect a missed detection of a quality issue, a disease or a cyber threat. To fulfill this requirement, we propose a method that dynamically applies a trustworthy approximated partial AUC ROC loss (tapAUC). A binary classifier is trained to optimize the specific range of the AUC ROC curve that prevents the True Positive Rate (TPR) from reaching 100% while minimizing the False Positive Rate (FPR). The optimal threshold that does not trigger any false negative is then kept and used at the test step. The results show a TPR of 92.52% at a 20.43% FPR for an average across 6 datasets, representing a TPR improvement of 4.3% for a FPR cost of 12.2% against other state-of-the-art methods. The code is available at https://github.com/ArnaudBougaham/tapAUC.
{ "Other": 0, "cs.AI": 0, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 1, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 1, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
2502.11571
FaMTEB: Massive Text Embedding Benchmark in Persian Language
[ "cs.CL", "cs.IR", "cs.LG" ]
In this paper, we introduce a comprehensive benchmark for Persian (Farsi) text embeddings, built upon the Massive Text Embedding Benchmark (MTEB). Our benchmark includes 63 datasets spanning seven different tasks: classification, clustering, pair classification, reranking, retrieval, summary retrieval, and semantic textual similarity. The datasets are formed as a combination of existing, translated, and newly generated data, offering a diverse evaluation framework for Persian language models. Given the increasing use of text embedding models in chatbots, evaluation datasets are becoming inseparable ingredients in chatbot challenges and Retrieval-Augmented Generation systems. As a contribution, we include chatbot evaluation datasets in the MTEB benchmark for the first time. In addition, in this paper, we introduce the new task of summary retrieval which is not part of the tasks included in standard MTEB. Another contribution of this paper is the introduction of a substantial number of new Persian language NLP datasets suitable for training and evaluation, some of which have no previous counterparts in Persian. We evaluate the performance of several Persian and multilingual embedding models in a range of tasks. This work introduces an open-source benchmark with datasets, code and a public leaderboard.
{ "Other": 0, "cs.AI": 0, "cs.CE": 0, "cs.CL": 1, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 1, "cs.IT": 0, "cs.LG": 1, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
2502.11573
InfiR : Crafting Effective Small Language Models and Multimodal Small Language Models in Reasoning
[ "cs.CL", "cs.AI" ]
Large Language Models (LLMs) and Multimodal Large Language Models (MLLMs) have made significant advancements in reasoning capabilities. However, they still face challenges such as high computational demands and privacy concerns. This paper focuses on developing efficient Small Language Models (SLMs) and Multimodal Small Language Models (MSLMs) that retain competitive reasoning abilities. We introduce a novel training pipeline that enhances reasoning capabilities and facilitates deployment on edge devices, achieving state-of-the-art performance while minimizing development costs. InfiR aims to advance AI systems by improving reasoning, reducing adoption barriers, and addressing privacy concerns through smaller model sizes. Resources are available at https://github.com/Reallm-Labs/InfiR.
{ "Other": 0, "cs.AI": 1, "cs.CE": 0, "cs.CL": 1, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
2502.11574
Large Language Models and Mathematical Reasoning Failures
[ "cs.AI" ]
This paper investigates the mathematical reasoning capabilities of large language models (LLMs) using 50 newly constructed high-school-level word problems. Unlike prior studies that focus solely on answer correctness, we rigorously analyze both final answers and solution steps to identify reasoning failures. Evaluating eight state-of-the-art models - including Mixtral, Llama, Gemini, GPT-4o, and OpenAI's o1 variants - we find that while newer models (e.g., o3-mini, deepseek-r1) achieve higher accuracy, all models exhibit errors in spatial reasoning, strategic planning, and arithmetic, sometimes producing correct answers through flawed logic. Common failure modes include unwarranted assumptions, over-reliance on numerical patterns, and difficulty translating physical intuition into mathematical steps. Manual analysis reveals that models struggle with problems requiring multi-step deduction or real-world knowledge, despite possessing broad mathematical knowledge. Our results underscore the importance of evaluating reasoning processes, not just answers, and caution against overestimating LLMs' problem-solving proficiency. The study highlights persistent gaps in LLMs' generalization abilities, emphasizing the need for targeted improvements in structured reasoning and constraint handling.
{ "Other": 0, "cs.AI": 1, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
2502.11578
Language Complexity Measurement as a Noisy Zero-Shot Proxy for Evaluating LLM Performance
[ "cs.CL", "cs.AI" ]
Large Language Models (LLMs) have made significant strides in natural language generation but often face challenges in tasks requiring precise calculations and structural analysis. This paper investigates the performance of state-of-the-art LLMs on language complexity measurement tasks, through the computation of the LIX readability metric and Average Dependency Distance (ADD). Using Swedish high school and university-level essays, we evaluate the models' abilities to compute LIX scores and perform dependency parsing, comparing their results to established ground truths. Our findings reveal that while all models demonstrate some capacity for these tasks, ChatGPT-o1-mini performs most consistently, achieving the highest accuracy in both LIX computation and dependency parsing. Additionally, we observe a strong significant correlation (r = -0.875, p = 0.026, N = 6) between the models' accuracy in computing LIX and their overall performance on the Massive Multitask Language Understanding (MMLU) benchmark. These results suggest that language complexity measurement abilities can serve as a noisy zero-shot proxy for assessing the general capabilities of LLMs, providing a practical method for model evaluation without the need for extensive benchmarking datasets.
{ "Other": 0, "cs.AI": 1, "cs.CE": 0, "cs.CL": 1, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
2502.11583
Distributional autoencoders know the score
[ "stat.ML", "cs.LG" ]
This work presents novel and desirable properties of a recently introduced class of autoencoders -- the Distributional Principal Autoencoder (DPA) -- that combines distributionally correct reconstruction with principal components-like interpretability of the encodings. First, we show that the level sets of the encoder orient themselves exactly with regard to the score of the data distribution. This both explains the method's often remarkable performance in disentangling the factors of variation of the data, and opens up possibilities of recovering its distribution while having access to samples only. In settings where the score itself has physical meaning -- such as when the data obey the Boltzmann distribution -- we demonstrate that the method can recover scientifically important quantities such as the \textit{minimum free energy path}. Second, we show that if the data lie on a manifold that can be approximated by the encoder, the optimal encoder's components beyond the dimension of the manifold will carry absolutely no additional information about the data distribution. This promises new ways of determining the number of relevant dimensions of the data beyond common heuristics such as the scree plot. Finally, the fact that the method is learning the score means that it could have promise as a generative model, potentially rivaling approaches such as diffusion, which similarly attempts to approximate the score of the data distribution.
{ "Other": 0, "cs.AI": 0, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 1, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
2502.11584
Runtime Enforcement of CPS against Signal Temporal Logic
[ "eess.SY", "cs.SY" ]
Cyber-Physical Systems (CPSs), especially those involving autonomy, need guarantees of their safety. Runtime Enforcement (RE) is a lightweight method to formally ensure that some specified properties are satisfied over the executions of the system. Hence, there is recent interest in the RE of CPS. However, existing methods are not designed to tackle specifications suitable for the hybrid dynamics of CPS. With this in mind, we develop runtime enforcement of CPS using properties defined in Signal Temporal Logic (STL). In this work, we aim to construct a runtime enforcer for a given STL formula to minimally modify a signal to satisfy the formula. To achieve this, the STL formula to be enforced is first translated into a timed transducer, while the signal to be corrected is encoded as timed words. We provide timed transducers for the temporal operators \emph{until} and \emph{release} noting that other temporal operators can be expressed using these two. Our approach enables effective enforcement of STL properties for CPS. A case study is provided to illustrate the approach and generate empirical evidence of its suitability for CPS.
{ "Other": 0, "cs.AI": 0, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 1 }
2502.11585
Calibration of Vehicular Traffic Simulation Models by Local Optimization
[ "cs.AI" ]
Simulation is a valuable tool for traffic management experts to assist them in refining and improving transportation systems and anticipating the impact of possible changes in the infrastructure network before their actual implementation. Calibrating simulation models using traffic count data is challenging because of the complexity of the environment, the lack of data, and the uncertainties in traffic dynamics. This paper introduces a novel stochastic simulation-based traffic calibration technique. The novelty of the proposed method is: (i) it performs local traffic calibration, (ii) it allows calibrating simulated traffic in large-scale environments, (iii) it requires only the traffic count data. The local approach enables decentralizing the calibration task to reach near real-time performance, enabling the fostering of digital twins. Using only traffic count data makes the proposed method generic so that it can be applied in different traffic scenarios at various scales (from neighborhood to region). We assess the proposed technique on a model of Brussels, Belgium, using data from real traffic monitoring devices. The proposed method has been implemented using the open-source traffic simulator SUMO. Experimental results show that the traffic model calibrated using the proposed method is on average 16% more accurate than those obtained by the state-of-the-art methods, using the same dataset. We also make available the output traffic model obtained from real data.
{ "Other": 0, "cs.AI": 1, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
2502.11586
Syllables to Scenes: Literary-Guided Free-Viewpoint 3D Scene Synthesis from Japanese Haiku
[ "cs.CV" ]
In the era of the metaverse, where immersive technologies redefine human experiences, translating abstract literary concepts into navigable 3D environments presents a fundamental challenge in preserving semantic and emotional fidelity. This research introduces HaikuVerse, a novel framework for transforming poetic abstraction into spatial representation, with Japanese Haiku serving as an ideal test case due to its sophisticated encapsulation of profound emotions and imagery within minimal text. While existing text-to-3D methods struggle with nuanced interpretations, we present a literary-guided approach that synergizes traditional poetry analysis with advanced generative technologies. Our framework centers on two key innovations: (1) Hierarchical Literary-Criticism Theory Grounded Parsing (H-LCTGP), which captures both explicit imagery and implicit emotional resonance through structured semantic decomposition, and (2) Progressive Dimensional Synthesis (PDS), a multi-stage pipeline that systematically transforms poetic elements into coherent 3D scenes through sequential diffusion processes, geometric optimization, and real-time enhancement. Extensive experiments demonstrate that HaikuVerse significantly outperforms conventional text-to-3D approaches in both literary fidelity and visual quality, establishing a new paradigm for preserving cultural heritage in immersive digital spaces. Project website at: https://syllables-to-scenes.github.io/
{ "Other": 0, "cs.AI": 0, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 1, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
2502.11588
A Unified Modeling Framework for Automated Penetration Testing
[ "cs.AI", "cs.NI" ]
The integration of artificial intelligence into automated penetration testing (AutoPT) has highlighted the necessity of simulation modeling for the training of intelligent agents, due to its cost-efficiency and swift feedback capabilities. Despite the proliferation of AutoPT research, there is a recognized gap in the availability of a unified framework for simulation modeling methods. This paper presents a systematic review and synthesis of existing techniques, introducing MDCPM to categorize studies based on literature objectives, network simulation complexity, dependency of technical and tactical operations, and scenario feedback and variation. To bridge the gap in unified methods for multi-dimensional and multi-level simulation modeling and dynamic environment modeling, and to address the scarcity of public datasets, we introduce AutoPT-Sim, a novel modeling framework that is based on policy automation and encompasses the combination of all sub-dimensions. AutoPT-Sim offers a comprehensive approach to modeling network environments, attackers, and defenders, transcending the constraints of static modeling and accommodating networks of diverse scales. We publicly release a generated standard network environment dataset and the code of Network Generator. By flexibly integrating publicly available datasets, AutoPT-Sim supports the various simulation modeling levels focused on policy automation in MDCPM, and the network generator helps researchers output customized target network data by adjusting parameters or fine-tuning the network generator.
{ "Other": 1, "cs.AI": 1, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
2502.11594
iMOVE: Instance-Motion-Aware Video Understanding
[ "cs.CV" ]
Enhancing the fine-grained instance spatiotemporal motion perception capabilities of Video Large Language Models is crucial for improving their temporal and general video understanding. However, current models struggle to perceive detailed and complex instance motions. To address these challenges, we have made improvements from both data and model perspectives. In terms of data, we have meticulously curated iMOVE-IT, the first large-scale instance-motion-aware video instruction-tuning dataset. This dataset is enriched with comprehensive instance motion annotations and spatiotemporal mutual-supervision tasks, providing extensive training for the model's instance-motion-awareness. Building on this foundation, we introduce iMOVE, an instance-motion-aware video foundation model that utilizes Event-aware Spatiotemporal Efficient Modeling to retain informative instance spatiotemporal motion details while maintaining computational efficiency. It also incorporates Relative Spatiotemporal Position Tokens to ensure awareness of instance spatiotemporal positions. Evaluations indicate that iMOVE excels not only in video temporal understanding and general video understanding but also demonstrates significant advantages in long-term video understanding.
{ "Other": 0, "cs.AI": 0, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 1, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
2502.11596
LLM Embeddings for Deep Learning on Tabular Data
[ "cs.LG", "cs.AI" ]
Tabular deep-learning methods require embedding numerical and categorical input features into high-dimensional spaces before processing them. Existing methods deal with this heterogeneous nature of tabular data by employing separate type-specific encoding approaches. This limits the cross-table transfer potential and the exploitation of pre-trained knowledge. We propose a novel approach that first transforms tabular data into text, and then leverages pre-trained representations from LLMs to encode this data, resulting in a plug-and-play solution to improving deep-learning tabular methods. We demonstrate that our approach improves accuracy over competitive models, such as MLP, ResNet and FT-Transformer, by validating on seven classification datasets.
{ "Other": 0, "cs.AI": 1, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 1, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
2502.11598
Can LLM Watermarks Robustly Prevent Unauthorized Knowledge Distillation?
[ "cs.CL" ]
The radioactive nature of Large Language Model (LLM) watermarking enables the detection of watermarks inherited by student models when trained on the outputs of watermarked teacher models, making it a promising tool for preventing unauthorized knowledge distillation. However, the robustness of watermark radioactivity against adversarial actors remains largely unexplored. In this paper, we investigate whether student models can acquire the capabilities of teacher models through knowledge distillation while avoiding watermark inheritance. We propose two categories of watermark removal approaches: pre-distillation removal through untargeted and targeted training data paraphrasing (UP and TP), and post-distillation removal through inference-time watermark neutralization (WN). Extensive experiments across multiple model pairs, watermarking schemes and hyper-parameter settings demonstrate that both TP and WN thoroughly eliminate inherited watermarks, with WN achieving this while maintaining knowledge transfer efficiency and low computational overhead. Given the ongoing deployment of watermarking techniques in production LLMs, these findings emphasize the urgent need for more robust defense strategies. Our code is available at https://github.com/THU-BPM/Watermark-Radioactivity-Attack.
{ "Other": 0, "cs.AI": 0, "cs.CE": 0, "cs.CL": 1, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
2502.11599
Self-orthogonal codes from plateaued functions and their applications in quantum codes and LCD codes
[ "cs.IT", "math.IT" ]
Self-orthogonal codes have received great attention due to their important applications in quantum codes, LCD codes and lattices. Recently, several families of self-orthogonal codes containing the all-$1$ vector were constructed by augmentation technique. In this paper, utilizing plateaued functions, we construct some classes of linear codes which do not contain the all-$1$ vector. We also investigate their punctured codes. The weight distributions of the constructed codes are explicitly determined. Under certain conditions, these codes are proved to be self-orthogonal. Furthermore, some classes of optimal linear codes are obtained from their duals. Using the self-orthogonal punctured codes, we also construct several new families of at least almost optimal quantum codes and optimal LCD codes.
{ "Other": 0, "cs.AI": 0, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 1, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
2502.11603
DR.GAP: Mitigating Bias in Large Language Models using Gender-Aware Prompting with Demonstration and Reasoning
[ "cs.CL", "cs.AI" ]
Large Language Models (LLMs) exhibit strong natural language processing capabilities but also inherit and amplify societal biases, including gender bias, raising fairness concerns. Existing debiasing methods face significant limitations: parameter tuning requires access to model weights, prompt-based approaches often degrade model utility, and optimization-based techniques lack generalizability. To address these challenges, we propose DR.GAP (Demonstration and Reasoning for Gender-Aware Prompting), an automated and model-agnostic approach that mitigates gender bias while preserving model performance. DR.GAP selects bias-revealing examples and generates structured reasoning to guide models toward more impartial responses. Extensive experiments on coreference resolution and QA tasks across multiple LLMs (GPT-3.5, Llama3, and Llama2-Alpaca) demonstrate its effectiveness, generalization ability, and robustness. DR.GAP can generalize to vision-language models (VLMs), achieving significant bias reduction.
{ "Other": 0, "cs.AI": 1, "cs.CE": 0, "cs.CL": 1, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
2502.11604
An Actor-Critic Algorithm with Function Approximation for Risk Sensitive Cost Markov Decision Processes
[ "cs.LG", "stat.ML" ]
In this paper, we consider the risk-sensitive cost criterion with exponentiated costs for Markov decision processes and develop a model-free policy gradient algorithm in this setting. Unlike additive cost criteria such as average or discounted cost, the risk-sensitive cost criterion is less studied due to the complexity resulting from the multiplicative structure of the resulting Bellman equation. We develop an actor-critic algorithm with function approximation in this setting and provide its asymptotic convergence analysis. We also show the results of numerical experiments that demonstrate the superiority in performance of our algorithm over other recent algorithms in the literature.
{ "Other": 0, "cs.AI": 0, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 1, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
2502.11607
GraphThought: Graph Combinatorial Optimization with Thought Generation
[ "cs.LG" ]
Large language models (LLMs) have demonstrated remarkable capabilities across various domains, especially in text processing and generative tasks. Recent advancements in the reasoning capabilities of state-of-the-art LLMs, such as OpenAI-o1, have significantly broadened their applicability, particularly in complex problem-solving and logical inference. However, most existing LLMs struggle with notable limitations in handling graph combinatorial optimization (GCO) problems. To bridge this gap, we formally define the Optimal Thoughts Design (OTD) problem, including its state and action thought space. We then introduce a novel framework, GraphThought, designed to generate high-quality thought datasets for GCO problems. Leveraging these datasets, we fine-tune the Llama-3-8B-Instruct model to develop Llama-GT. Notably, despite its compact 8B-parameter architecture, Llama-GT matches the performance of state-of-the-art LLMs on the GraphArena benchmark. Experimental results show that our approach outperforms both proprietary and open-source models, even rivaling specialized models like o1-mini. This work sets a new state-of-the-art benchmark while challenging the prevailing notion that model scale is the primary driver of reasoning capability.
{ "Other": 0, "cs.AI": 0, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 1, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
2502.11609
Exploiting Task Relationships for Continual Learning Using Transferability-Aware Task Embeddings
[ "cs.LG" ]
Continual learning (CL) has been an essential topic in the contemporary application of deep neural networks, where catastrophic forgetting (CF) can impede a model's ability to acquire knowledge progressively. Existing CL strategies primarily address CF by regularizing model updates or separating task-specific and shared components. However, these methods focus on task model elements while overlooking the potential of leveraging inter-task relationships for learning enhancement. To address this, we propose a transferability-aware task embedding named H-embedding and train a hypernet under its guidance to learn task-conditioned model weights for CL tasks. Particularly, H-embedding is introduced based on an information theoretical transferability measure and is designed to be online and easy to compute. The framework is also notably practical: it only requires storing a low-dimensional task embedding for each task and can be efficiently trained in an end-to-end way. Extensive evaluations and experimental analyses on datasets including Permuted MNIST, Cifar10/100, and ImageNet-R demonstrate that our framework performs strongly compared to various baseline methods, showing great potential for exploiting intrinsic task relationships.
{ "Other": 0, "cs.AI": 0, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 1, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
2502.11610
Accuracy Assessment of OpenAlex and Clarivate Scholar ID with an LLM-Assisted Benchmark
[ "cs.IR" ]
In quantitative SciSci (science of science) studies, accurately identifying individual scholars is paramount for scientific data analysis. However, the variability in how names are represented (due to commonality, abbreviations, and different spelling conventions) complicates this task. While identifier systems like ORCID are being developed, many scholars remain unregistered, and numerous publications are not included. Scholarly databases such as Clarivate and OpenAlex have introduced their own ID systems as preliminary name disambiguation solutions. This study evaluates the effectiveness of these systems across different groups to determine their suitability for various application scenarios. We sampled authors from the top quartile (Q1) of Web of Science (WOS) journals based on country, discipline, and number of corresponding author papers. For each group, we selected 100 scholars and meticulously annotated all their papers using a Search-enhanced Large Language Model method. Using these annotations, we identified the corresponding IDs in OpenAlex and Clarivate, extracted all associated papers, filtered for Q1 WOS journals, and calculated precision and recall by comparing against the annotated dataset.
{ "Other": 0, "cs.AI": 0, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 1, "cs.IT": 0, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
2502.11611
Identifying Gender Stereotypes and Biases in Automated Translation from English to Italian using Similarity Networks
[ "cs.CL", "cs.AI" ]
This paper is a collaborative effort between Linguistics, Law, and Computer Science to evaluate stereotypes and biases in automated translation systems. We advocate gender-neutral translation as a means to promote gender inclusion and improve the objectivity of machine translation. Our approach focuses on identifying gender bias in English-to-Italian translations. First, we define gender bias following human rights law and linguistics literature. Then we proceed by identifying gender-specific terms such as she/lei and he/lui as key elements. We then evaluate the cosine similarity between these target terms and others in the dataset to reveal the model's perception of semantic relations. Using numerical features, we effectively evaluate the intensity and direction of the bias. Our findings provide tangible insights for developing and training gender-neutral translation algorithms.
{ "Other": 0, "cs.AI": 1, "cs.CE": 0, "cs.CL": 1, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
2502.11612
Maximum Entropy Reinforcement Learning with Diffusion Policy
[ "cs.LG", "cs.AI" ]
The Soft Actor-Critic (SAC) algorithm with a Gaussian policy has become a mainstream implementation for realizing the Maximum Entropy Reinforcement Learning (MaxEnt RL) objective, which incorporates entropy maximization to encourage exploration and enhance policy robustness. While the Gaussian policy performs well on simpler tasks, its exploration capacity and potential performance in complex multi-goal RL environments are limited by its inherent unimodality. In this paper, we employ the diffusion model, a powerful generative model capable of capturing complex multimodal distributions, as the policy representation to fulfill the MaxEnt RL objective, developing a method named MaxEnt RL with Diffusion Policy (MaxEntDP). Our method enables efficient exploration and brings the policy closer to the optimal MaxEnt policy. Experimental results on Mujoco benchmarks show that MaxEntDP outperforms the Gaussian policy and other generative models within the MaxEnt RL framework, and performs comparably to other state-of-the-art diffusion-based online RL algorithms. Our code is available at https://github.com/diffusionyes/MaxEntDP.
{ "Other": 0, "cs.AI": 1, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 1, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
2502.11614
Is Human-Like Text Liked by Humans? Multilingual Human Detection and Preference Against AI
[ "cs.CL", "cs.AI" ]
Prior studies have shown that distinguishing text generated by large language models (LLMs) from human-written text is highly challenging, and often no better than random guessing. To verify the generalizability of this finding across languages and domains, we perform an extensive case study to identify the upper bound of human detection accuracy. Across 16 datasets covering 9 languages and 9 domains, 19 annotators achieved an average detection accuracy of 87.6%, thus challenging previous conclusions. We find that major gaps between human and machine text lie in concreteness, cultural nuances, and diversity. Explicitly explaining these distinctions in the prompts can partially bridge the gaps in over 50% of the cases. However, we also find that humans do not always prefer human-written text, particularly when they cannot clearly identify its source.
{ "Other": 0, "cs.AI": 1, "cs.CE": 0, "cs.CL": 1, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
2502.11617
In-Context Parametric Inference: Point or Distribution Estimators?
[ "cs.LG", "cs.AI", "stat.ML" ]
Bayesian and frequentist inference are two fundamental paradigms in statistical estimation. Bayesian methods treat hypotheses as random variables, incorporating priors and updating beliefs via Bayes' theorem, whereas frequentist methods assume fixed but unknown hypotheses, relying on estimators like maximum likelihood. While extensive research has compared these approaches, the frequentist paradigm of obtaining point estimates has become predominant in deep learning, as Bayesian inference is challenging due to the computational complexity and the approximation gap of posterior estimation methods. However, a good understanding of trade-offs between the two approaches is lacking in the regime of amortized estimators, where in-context learners are trained to estimate either point values via maximum likelihood or maximum a posteriori estimation, or full posteriors using normalizing flows, score-based diffusion samplers, or diagonal Gaussian approximations, conditioned on observations. To help resolve this, we conduct a rigorous comparative analysis spanning diverse problem settings, from linear models to shallow neural networks, with a robust evaluation framework assessing both in-distribution and out-of-distribution generalization on tractable tasks. Our experiments indicate that amortized point estimators generally outperform posterior inference, though the latter remain competitive in some low-dimensional problems, and we further discuss why this might be the case.
{ "Other": 0, "cs.AI": 1, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 1, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
2502.11618
Real-time Neural Rendering of LiDAR Point Clouds
[ "cs.CV", "cs.GR" ]
Static LiDAR scanners produce accurate, dense, colored point clouds, but often contain obtrusive artifacts that make them ill-suited for direct display. We propose an efficient method to render photorealistic images of such scans without any expensive preprocessing or training of a scene-specific model. A naive projection of the point cloud to the output view using 1x1 pixels is fast and retains the available detail, but also results in unintelligible renderings as background points leak in between the foreground pixels. The key insight is that these projections can be transformed into a realistic result using a deep convolutional model in the form of a U-Net, and a depth-based heuristic that prefilters the data. The U-Net also handles LiDAR-specific problems such as missing parts due to occlusion, color inconsistencies and varying point densities. We also describe a method to generate synthetic training data to deal with imperfectly-aligned ground truth images. Our method achieves real-time rendering rates using an off-the-shelf GPU and outperforms the state-of-the-art in both speed and quality.
{ "Other": 1, "cs.AI": 0, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 1, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
2502.11619
Membership Inference Attacks for Face Images Against Fine-Tuned Latent Diffusion Models
[ "cs.CV" ]
The rise of generative image models leads to privacy concerns when it comes to the huge datasets used to train such models. This paper investigates the possibility of inferring if a set of face images was used for fine-tuning a Latent Diffusion Model (LDM). A Membership Inference Attack (MIA) method is presented for this task. Using generated auxiliary data for the training of the attack model leads to significantly better performance, and so does the use of watermarks. The guidance scale used for inference was found to have a significant influence. If an LDM is fine-tuned for long enough, the text prompt used for inference has no significant influence. The proposed MIA is found to be viable in a realistic black-box setup against LDMs fine-tuned on face images.
{ "Other": 0, "cs.AI": 0, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 1, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
2502.11633
CLASS: Enhancing Cross-Modal Text-Molecule Retrieval Performance and Training Efficiency
[ "cs.CL" ]
Cross-modal text-molecule retrieval task bridges molecule structures and natural language descriptions. Existing methods predominantly focus on aligning text modality and molecule modality, yet they overlook adaptively adjusting the learning states at different training stages and enhancing training efficiency. To tackle these challenges, this paper proposes a Curriculum Learning-bAsed croSS-modal text-molecule training framework (CLASS), which can be integrated with any backbone to yield promising performance improvement. Specifically, we quantify the sample difficulty considering both text modality and molecule modality, and design a sample scheduler to introduce training samples via an easy-to-difficult paradigm as the training advances, remarkably reducing the scale of training samples at the early stage of training and improving training efficiency. Moreover, we introduce adaptive intensity learning to increase the training intensity as the training progresses, which adaptively controls the learning intensity across all curriculum stages. Experimental results on the ChEBI-20 dataset demonstrate that our proposed method achieves superior performance while simultaneously yielding substantial time savings.
{ "Other": 0, "cs.AI": 0, "cs.CE": 0, "cs.CL": 1, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
2502.11638
Enhancing Out-of-Distribution Detection in Medical Imaging with Normalizing Flows
[ "cs.CV" ]
Out-of-distribution (OOD) detection is crucial in AI-driven medical imaging to ensure reliability and safety by identifying inputs outside a model's training distribution. Existing methods often require retraining or modifications to pre-trained models, which is impractical for clinical applications. This study introduces a post-hoc normalizing flow-based approach that seamlessly integrates with pre-trained models. By leveraging normalizing flows, it estimates the likelihood of feature vectors extracted from pre-trained models, capturing semantically meaningful representations without relying on pixel-level statistics. The method was evaluated using the MedMNIST benchmark and a newly curated MedOOD dataset simulating clinically relevant distributional shifts. Performance was measured using standard OOD detection metrics (e.g., AUROC, FPR@95, AUPR_IN, AUPR_OUT), with statistical analyses comparing it against ten baseline methods. On MedMNIST, the proposed model achieved an AUROC of 93.80%, outperforming state-of-the-art methods. On MedOOD, it achieved an AUROC of 84.61%, demonstrating superior performance against other methods. Its post-hoc nature ensures compatibility with existing clinical workflows, addressing the limitations of previous approaches. The model and code to build OOD datasets are available at https://github.com/dlotfi/MedOODFlow.
{ "Other": 0, "cs.AI": 0, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 1, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
2502.11639
Neural Interpretable Reasoning
[ "cs.LG", "cs.AI", "cs.NE" ]
We formalize a novel modeling framework for achieving interpretability in deep learning, anchored in the principle of inference equivariance. While the direct verification of interpretability scales exponentially with the number of variables of the system, we show that this complexity can be mitigated by treating interpretability as a Markovian property and employing neural re-parametrization techniques. Building on these insights, we propose a new modeling paradigm -- neural generation and interpretable execution -- that enables scalable verification of equivariance. This paradigm provides a general approach for designing Neural Interpretable Reasoners that are not only expressive but also transparent.
{ "Other": 0, "cs.AI": 1, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 1, "cs.MA": 0, "cs.NE": 1, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
2502.11641
A Zero-Knowledge Proof for the Syndrome Decoding Problem in the Lee Metric
[ "cs.CR", "cs.IT", "math.IT" ]
The syndrome decoding problem is one of the NP-complete problems lying at the foundation of code-based cryptography. The variant thereof where the distance between vectors is measured with respect to the Lee metric, rather than the more commonly used Hamming metric, has been analyzed recently in several works due to its potential relevance for building more efficient code-based cryptosystems. The purpose of this article is to describe a zero-knowledge proof for this variant of the problem.
{ "Other": 0, "cs.AI": 0, "cs.CE": 0, "cs.CL": 0, "cs.CR": 1, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 1, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
2502.11642
GaussianMotion: End-to-End Learning of Animatable Gaussian Avatars with Pose Guidance from Text
[ "cs.CV" ]
In this paper, we introduce GaussianMotion, a novel human rendering model that generates fully animatable scenes aligned with textual descriptions using Gaussian Splatting. Although existing methods achieve reasonable text-to-3D generation of human bodies using various 3D representations, they often face limitations in fidelity and efficiency, or primarily focus on static models with limited pose control. In contrast, our method generates fully animatable 3D avatars by combining deformable 3D Gaussian Splatting with text-to-3D score distillation, achieving high fidelity and efficient rendering for arbitrary poses. By densely generating diverse random poses during optimization, our deformable 3D human model learns to capture a wide range of natural motions distilled from a pose-conditioned diffusion model in an end-to-end manner. Furthermore, we propose Adaptive Score Distillation that effectively balances realistic detail and smoothness to achieve optimal 3D results. Experimental results demonstrate that our approach outperforms existing baselines by producing high-quality textures in both static and animated results, and by generating diverse 3D human models from various textual inputs.
{ "Other": 0, "cs.AI": 0, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 1, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
2502.11644
InTec: integrated things-edge computing: a framework for distributing machine learning pipelines in edge AI systems
[ "cs.DC", "cs.AI" ]
With the rapid expansion of the Internet of Things (IoT), sensors, smartphones, and wearables have become integral to daily life, powering smart applications in home automation, healthcare, and intelligent transportation. However, these advancements face significant challenges due to latency and bandwidth constraints imposed by traditional cloud-based machine learning (ML) frameworks. The need for innovative solutions is evident as cloud computing struggles with increased latency and network congestion. Previous attempts to offload parts of the ML pipeline to edge and cloud layers have yet to fully resolve these issues, often worsening system response times and network congestion due to the computational limitations of edge devices. In response to these challenges, this study introduces the InTec (Integrated Things-Edge Computing) framework, a groundbreaking innovation in IoT architecture. Unlike existing methods, InTec fully leverages the potential of a three-tier architecture by strategically distributing ML tasks across the Things, Edge, and Cloud layers. This comprehensive approach enables real-time data processing at the point of data generation, significantly reducing latency, optimizing network traffic, and enhancing system reliability. InTec's effectiveness is validated through empirical evaluation using the MHEALTH dataset for human motion detection in smart homes, demonstrating notable improvements in key metrics: an 81.56 percent reduction in response time, a 10.92 percent decrease in network traffic, a 9.82 percent improvement in throughput, a 21.86 percent reduction in edge energy consumption, and a 25.83 percent reduction in cloud energy consumption. These advancements establish InTec as a new benchmark for scalable, responsive, and energy-efficient IoT applications, demonstrating its potential to revolutionize how the ML pipeline is integrated into Edge AI (EI) systems.
{ "Other": 1, "cs.AI": 1, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
2502.11645
Deviation Ratings: A General, Clone-Invariant Rating Method
[ "cs.GT", "cs.CL", "cs.MA", "stat.OT" ]
Many real-world multi-agent or multi-task evaluation scenarios can be naturally modelled as normal-form games due to inherent strategic (adversarial, cooperative, and mixed motive) interactions. These strategic interactions may be agentic (e.g. players trying to win), fundamental (e.g. cost vs quality), or complementary (e.g. niche finding and specialization). In such a formulation, it is the strategies (actions, policies, agents, models, tasks, prompts, etc.) that are rated. However, the rating problem is complicated by redundancy and complexity of N-player strategic interactions. Repeated or similar strategies can distort ratings for those that counter or complement them. Previous work proposed "clone invariant" ratings to handle such redundancies, but this was limited to two-player zero-sum (i.e. strictly competitive) interactions. This work introduces the first N-player general-sum clone invariant rating, called deviation ratings, based on coarse correlated equilibria. The rating is explored on several domains including LLM evaluation.
{ "Other": 1, "cs.AI": 0, "cs.CE": 0, "cs.CL": 1, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 0, "cs.MA": 1, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
2502.11646
Hyperspherical Energy Transformer with Recurrent Depth
[ "cs.LG" ]
Transformer-based foundation models have achieved unprecedented success with a gigantic amount of parameters and computational resources. Yet, the core building blocks of these models, the Transformer layers, and how they are arranged and configured are primarily engineered from the bottom up and driven by heuristics. Advancing next-generation architectures demands exploring a prototypical model that is highly interpretable and practically competent. To this end, we take a step from the top-down view and design neural networks from an energy minimization perspective. Specifically, to promote isotropic token distribution on the sphere, we formulate a modified Hopfield energy function on the subspace-embedded hypersphere, based on which Transformer layers with symmetric structures are designed as the iterative optimization for the energy function. By integrating layers with the same parameters, we propose \textit{Hyper-Spherical Energy Transformer} (Hyper-SET), an alternative to the vanilla Transformer with recurrent depth. This design inherently provides greater interpretability and allows for scaling to deeper layers without a significant increase in the number of parameters. We also empirically demonstrate that Hyper-SET achieves comparable or even superior performance on both synthetic and real-world tasks, such as solving Sudoku and masked image modeling, while utilizing fewer parameters.
{ "Other": 0, "cs.AI": 0, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 1, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
2502.11647
DELMAN: Dynamic Defense Against Large Language Model Jailbreaking with Model Editing
[ "cs.CR", "cs.AI" ]
Large Language Models (LLMs) are widely applied in decision making, but their deployment is threatened by jailbreak attacks, where adversarial users manipulate model behavior to bypass safety measures. Existing defense mechanisms, such as safety fine-tuning and model editing, either require extensive parameter modifications or lack precision, leading to performance degradation on general tasks, which makes them unsuitable for post-deployment safety alignment. To address these challenges, we propose DELMAN (Dynamic Editing for LLMs JAilbreak DefeNse), a novel approach leveraging direct model editing for precise, dynamic protection against jailbreak attacks. DELMAN directly updates a minimal set of relevant parameters to neutralize harmful behaviors while preserving the model's utility. To avoid triggering a safe response in benign contexts, we incorporate KL-divergence regularization to ensure the updated model remains consistent with the original model when processing benign queries. Experimental results demonstrate that DELMAN outperforms baseline methods in mitigating jailbreak attacks while preserving the model's utility, and adapts seamlessly to new attack instances, providing a practical and efficient solution for post-deployment model protection.
{ "Other": 0, "cs.AI": 1, "cs.CE": 0, "cs.CL": 0, "cs.CR": 1, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
2502.11649
Competing LLM Agents in a Non-Cooperative Game of Opinion Polarisation
[ "cs.AI", "cs.SI" ]
We introduce a novel non-cooperative game to analyse opinion formation and resistance, incorporating principles from social psychology such as confirmation bias, resource constraints, and influence penalties. Our simulation features Large Language Model (LLM) agents competing to influence a population, with penalties imposed for generating messages that propagate or counter misinformation. This framework integrates resource optimisation into the agents' decision-making process. Our findings demonstrate that while higher confirmation bias strengthens opinion alignment within groups, it also exacerbates overall polarisation. Conversely, lower confirmation bias leads to fragmented opinions and limited shifts in individual beliefs. Investing heavily in a high-resource debunking strategy can initially align the population with the debunking agent, but risks rapid resource depletion and diminished long-term influence.
{ "Other": 0, "cs.AI": 1, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 1, "cs.SY": 0 }
2502.11651
MMXU: A Multi-Modal and Multi-X-ray Understanding Dataset for Disease Progression
[ "cs.CV", "cs.AI" ]
Large vision-language models (LVLMs) have shown great promise in medical applications, particularly in visual question answering (MedVQA) and diagnosis from medical images. However, existing datasets and models often fail to consider critical aspects of medical diagnostics, such as the integration of historical records and the analysis of disease progression over time. In this paper, we introduce MMXU (Multi-Modal and Multi-X-ray Understanding), a novel dataset for MedVQA that focuses on identifying changes in specific regions between two patient visits. Unlike previous datasets that primarily address single-image questions, MMXU enables multi-image questions, incorporating both current and historical patient data. We demonstrate the limitations of current LVLMs in identifying disease progression on MMXU-\textit{test}, even those that perform well on traditional benchmarks. To address this, we propose a MedRecord-Augmented Generation (MAG) approach, incorporating both global and regional historical records. Our experiments show that integrating historical records significantly enhances diagnostic accuracy by at least 20\%, bridging the gap between current LVLMs and human expert performance. Additionally, we fine-tune models with MAG on MMXU-\textit{dev}, which demonstrates notable improvements. We hope this work illuminates a path toward advancing the use of LVLMs in medical diagnostics by emphasizing the importance of historical context in interpreting medical images. Our dataset is released at \href{https://github.com/linjiemu/MMXU}{https://github.com/linjiemu/MMXU}.
{ "Other": 0, "cs.AI": 1, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 1, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
2502.11655
Object-Centric Image to Video Generation with Language Guidance
[ "cs.CV" ]
Accurate and flexible world models are crucial for autonomous systems to understand their environment and predict future events. Object-centric models, with structured latent spaces, have shown promise in modeling object dynamics and interactions, but often face challenges in scaling to complex datasets and incorporating external guidance, limiting their applicability in robotics. To address these limitations, we propose TextOCVP, an object-centric model for image-to-video generation guided by textual descriptions. TextOCVP parses an observed scene into object representations, called slots, and utilizes a text-conditioned transformer predictor to forecast future object states and video frames. Our approach jointly models object dynamics and interactions while incorporating textual guidance, thus leading to accurate and controllable predictions. Our method's structured latent space offers enhanced control over the prediction process, outperforming several image-to-video generative baselines. Additionally, we demonstrate that structured object-centric representations provide superior controllability and interpretability, facilitating the modeling of object dynamics and enabling more precise and understandable predictions. Videos and code are available at https://play-slot.github.io/TextOCVP/.
{ "Other": 0, "cs.AI": 0, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 1, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
2502.11656
Uncovering the Impact of Chain-of-Thought Reasoning for Direct Preference Optimization: Lessons from Text-to-SQL
[ "cs.CL", "cs.DB" ]
Direct Preference Optimization (DPO) has proven effective in complex reasoning tasks like math word problems and code generation. However, when applied to Text-to-SQL datasets, it often fails to improve performance and can even degrade it. Our investigation reveals the root cause: unlike math and code tasks, which naturally integrate Chain-of-Thought (CoT) reasoning with DPO, Text-to-SQL datasets typically include only final answers (gold SQL queries) without detailed CoT solutions. By augmenting Text-to-SQL datasets with synthetic CoT solutions, we achieve, for the first time, consistent and significant performance improvements using DPO. Our analysis shows that CoT reasoning is crucial for unlocking DPO's potential, as it mitigates reward hacking, strengthens discriminative capabilities, and improves scalability. These findings offer valuable insights for building more robust Text-to-SQL models. To support further research, we publicly release the code and CoT-enhanced datasets.
{ "Other": 0, "cs.AI": 0, "cs.CE": 0, "cs.CL": 1, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 1, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
2502.11657
How does ion temperature gradient turbulence depend on magnetic geometry? Insights from data and machine learning
[ "physics.plasm-ph", "cs.LG" ]
Magnetic geometry has a significant effect on the level of turbulent transport in fusion plasmas. Here, we model and analyze this dependence using multiple machine learning methods and a dataset of > 200,000 nonlinear simulations of ion-temperature-gradient turbulence in diverse non-axisymmetric geometries. The dataset is generated using a large collection of both optimized and randomly generated stellarator equilibria. At fixed gradients, the turbulent heat flux varies between geometries by several orders of magnitude. Trends are apparent among the configurations with particularly high or low heat flux. Regression and classification techniques from machine learning are then applied to extract patterns in the dataset. Due to a symmetry of the gyrokinetic equation, the heat flux and regressions thereof should be invariant to translations of the raw features in the parallel coordinate, similar to translation invariance in computer vision applications. Multiple regression models including convolutional neural networks (CNNs) and decision trees can achieve reasonable predictive power for the heat flux in held-out test configurations, with highest accuracy for the CNNs. Using Spearman correlation, sequential feature selection, and Shapley values to measure feature importance, it is consistently found that the most important geometric lever on the heat flux is the flux surface compression in regions of bad curvature. The second most important feature relates to the magnitude of geodesic curvature. These two features align remarkably with surrogates that have been proposed based on theory, while the methods here allow a natural extension to more features for increased accuracy. The dataset, released with this publication, may also be used to test other proposed surrogates, and we find many previously published proxies do correlate well with both the heat flux and stability boundary.
{ "Other": 0, "cs.AI": 0, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 1, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
2502.11658
"I'm not for sale" -- Perceptions and limited awareness of privacy risks by digital natives about location data
[ "cs.CY", "cs.AI", "cs.CR" ]
Although mobile devices benefit users in their daily lives in numerous ways, they also raise several privacy concerns. For instance, they can reveal sensitive information that can be inferred from location data. This location data is shared through service providers as well as mobile applications. Understanding how and with whom users share their location data -- as well as users' perception of the underlying privacy risks -- is essential for designing usable privacy-enhancing technologies. In this work, we perform a quantitative and qualitative analysis of smartphone users' awareness, perception and self-reported behavior towards location data-sharing through a survey of n=99 young adult participants (i.e., digital natives). We compare stated practices with actual behaviors to better understand their mental models, and survey participants' understanding of privacy risks before and after the inspection of location traces and the information that can be inferred therefrom. Our empirical results show that participants have risky privacy practices: about 54% of participants underestimate the number of mobile applications to which they have granted access to their data, and 33% forget or do not think of revoking access to their data. Also, by using a demonstrator to perform inferences from location data, we observe that slightly more than half of participants (57%) are surprised by the extent of potentially inferred information, and that 47% intend to reduce access to their data via permissions as a result of using the demonstrator. Last, a majority of participants have little knowledge of the tools to better protect themselves, but are nonetheless willing to follow suggestions to improve privacy (51%). Educating people, including digital natives, about privacy risks through transparency tools seems a promising approach.
{ "Other": 0, "cs.AI": 1, "cs.CE": 0, "cs.CL": 0, "cs.CR": 1, "cs.CV": 0, "cs.CY": 1, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
2502.11663
MaskGWM: A Generalizable Driving World Model with Video Mask Reconstruction
[ "cs.CV" ]
World models that forecast environmental changes from actions are vital for autonomous driving models with strong generalization. The prevailing driving world models mainly build on video prediction models. Although these models can produce high-fidelity video sequences with advanced diffusion-based generators, they are constrained in their predictive duration and overall generalization capability. In this paper, we explore solving this problem by combining a generation loss with MAE-style feature-level context learning. In particular, we instantiate this target with three key designs: (1) a more scalable Diffusion Transformer (DiT) structure trained with an extra mask construction task; (2) diffusion-related mask tokens devised to deal with the fuzzy relations between mask reconstruction and the generative diffusion process; (3) an extension of the mask construction task to the spatial-temporal domain that utilizes row-wise masks for shifted self-attention, rather than the masked self-attention of MAE. We then adopt a row-wise cross-view module to align with this mask design. Based on the above improvements, we propose MaskGWM: a Generalizable driving World Model embodied with Video Mask reconstruction. Our model comes in two variants: MaskGWM-long, focusing on long-horizon prediction, and MaskGWM-mview, dedicated to multi-view generation. Comprehensive experiments on standard benchmarks validate the effectiveness of the proposed method, comprising standard validation on the nuScenes dataset, long-horizon rollout on the OpenDV-2K dataset, and zero-shot validation on the Waymo dataset. Quantitative metrics on these datasets show that our method notably improves on state-of-the-art driving world models.
{ "Other": 0, "cs.AI": 0, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 1, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
2502.11664
VRoPE: Rotary Position Embedding for Video Large Language Models
[ "cs.AI" ]
Rotary Position Embedding (RoPE) has shown strong performance in text-based Large Language Models (LLMs), but extending it to video remains a challenge due to the intricate spatiotemporal structure of video frames. Existing adaptations, such as RoPE-3D, attempt to encode spatial and temporal dimensions separately but suffer from two major limitations: positional bias in attention distribution and disruptions in video-text transitions. To overcome these issues, we propose Video Rotary Position Embedding (VRoPE), a novel positional encoding method tailored for Video-LLMs. Our approach restructures positional indices to preserve spatial coherence and ensure a smooth transition between video and text tokens. Additionally, we introduce a more balanced encoding strategy that mitigates attention biases, ensuring a more uniform distribution of spatial focus. Extensive experiments on Vicuna and Qwen2 across different model scales demonstrate that VRoPE consistently outperforms previous RoPE variants, achieving significant improvements in video understanding, temporal reasoning, and retrieval tasks. Code will be available at https://github.com/johncaged/VRoPE
{ "Other": 0, "cs.AI": 1, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
2502.11665
On the kernel learning problem
[ "stat.ML", "cs.LG", "math.CA", "math.FA", "math.OC" ]
The classical kernel ridge regression problem aims to find the best fit for the output $Y$ as a function of the input data $X\in \mathbb{R}^d$, with a fixed choice of regularization term imposed by a given choice of a reproducing kernel Hilbert space, such as a Sobolev space. Here we consider a generalization of the kernel ridge regression problem, by introducing an extra matrix parameter $U$, which aims to detect the scale parameters and the feature variables in the data, and thereby improve the efficiency of kernel ridge regression. This naturally leads to a nonlinear variational problem to optimize the choice of $U$. We study various foundational mathematical aspects of this variational problem, and in particular how this behaves in the presence of multiscale structures in the data.
{ "Other": 0, "cs.AI": 0, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 1, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
2502.11669
Deep Subspace Learning for Surface Anomaly Classification Based on 3D Point Cloud Data
[ "stat.ML", "cs.LG" ]
Surface anomaly classification is critical for manufacturing system fault diagnosis and quality control. However, the following challenges always hinder accurate anomaly classification in practice: (i) Anomaly patterns exhibit intra-class variation and inter-class similarity, presenting challenges in the accurate classification of each sample. (ii) Despite the predefined classes, new types of anomalies can occur during production and need to be detected accurately. (iii) Anomalous data is rare in manufacturing processes, leading to limited data for model learning. To tackle the above challenges simultaneously, this paper proposes a novel deep subspace learning-based 3D anomaly classification model. Specifically, starting from a lightweight encoder to extract the latent representations, we model each class as a subspace to account for the intra-class variation, while promoting distinct subspaces of different classes to tackle the inter-class similarity. Moreover, the explicit modeling of subspaces enables the detection of out-of-distribution samples, i.e., new types of anomalies, and our proposed subspace classifier provides a regularization effect with far fewer learnable parameters than the popular Multi-Layer Perceptrons (MLPs). Extensive numerical experiments demonstrate that our method achieves better anomaly classification results than benchmark methods, and can effectively identify new types of anomalies.
{ "Other": 0, "cs.AI": 0, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 1, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
2502.11671
Diversity-Oriented Data Augmentation with Large Language Models
[ "cs.CL", "cs.AI", "cs.LG" ]
Data augmentation is an essential technique in natural language processing (NLP) for enriching training datasets by generating diverse samples. This process is crucial for improving the robustness and generalization capabilities of NLP models. However, a significant challenge remains: \textit{Insufficient Attention to Sample Distribution Diversity}. Most existing methods focus on increasing the sample numbers while neglecting the sample distribution diversity, which can lead to model overfitting. In response, we explore data augmentation's impact on dataset diversity and propose a \textbf{\underline{D}}iversity-\textbf{\underline{o}}riented data \textbf{\underline{Aug}}mentation framework (\textbf{DoAug}). Specifically, we utilize a diversity-oriented fine-tuning approach to train an LLM as a diverse paraphraser, which is capable of augmenting textual datasets by generating diversified paraphrases. Then, we apply the LLM paraphraser to a selected coreset of highly informative samples and integrate the paraphrases with the original data to create a more diverse augmented dataset. Finally, we conduct extensive experiments on 12 real-world textual datasets. The results show that our fine-tuned LLM augmenter improves diversity while preserving label consistency, thereby enhancing the robustness and performance of downstream tasks. Specifically, it achieves an average performance gain of \(10.52\%\), surpassing the runner-up baseline by more than three percentage points.
{ "Other": 0, "cs.AI": 1, "cs.CE": 0, "cs.CL": 1, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 1, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
2502.11672
Exact Upper and Lower Bounds for the Output Distribution of Neural Networks with Random Inputs
[ "cs.LG", "stat.ME", "stat.ML" ]
We derive exact upper and lower bounds for the cumulative distribution function (cdf) of the output of a neural network over its entire support subject to noisy (stochastic) inputs. The upper and lower bounds converge to the true cdf over its domain as the resolution increases. Our method applies to any feedforward NN using continuous monotonic piecewise differentiable activation functions (e.g., ReLU, tanh and softmax) and convolutional NNs, which were beyond the scope of competing approaches. The novelty and an instrumental tool of our approach is to bound general NNs with ReLU NNs. The ReLU NN based bounds are then used to derive upper and lower bounds of the cdf of the NN output. Experiments demonstrate that our method delivers guaranteed bounds of the predictive output distribution over its support, thus providing exact error guarantees, in contrast to competing approaches.
{ "Other": 0, "cs.AI": 0, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 1, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
2502.11673
Best of Both Worlds: Regret Minimization versus Minimax Play
[ "cs.LG", "stat.ML" ]
In this paper, we investigate the existence of online learning algorithms with bandit feedback that simultaneously guarantee $O(1)$ regret compared to a given comparator strategy, and $O(\sqrt{T})$ regret compared to the best strategy in hindsight, where $T$ is the number of rounds. We provide the first affirmative answer to this question. In the context of symmetric zero-sum games, both in normal- and extensive form, we show that our results allow us to guarantee to risk at most $O(1)$ loss while being able to gain $\Omega(T)$ from exploitable opponents, thereby combining the benefits of both no-regret algorithms and minimax play.
{ "Other": 0, "cs.AI": 0, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 1, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
2502.11677
Towards Fully Exploiting LLM Internal States to Enhance Knowledge Boundary Perception
[ "cs.CL" ]
Large language models (LLMs) exhibit impressive performance across diverse tasks but often struggle to accurately gauge their knowledge boundaries, leading to confident yet incorrect responses. This paper explores leveraging LLMs' internal states to enhance their perception of knowledge boundaries from efficiency and risk perspectives. We investigate whether LLMs can estimate their confidence using internal states before response generation, potentially saving computational resources. Our experiments on datasets like Natural Questions, HotpotQA, and MMLU reveal that LLMs demonstrate significant pre-generation perception, which is further refined post-generation, with perception gaps remaining stable across varying conditions. To mitigate risks in critical domains, we introduce Consistency-based Confidence Calibration ($C^3$), which assesses confidence consistency through question reformulation. $C^3$ significantly improves LLMs' ability to recognize their knowledge gaps, enhancing the unknown perception rate by 5.6\% on NQ and 4.9\% on HotpotQA. Our findings suggest that pre-generation confidence estimation can optimize efficiency, while $C^3$ effectively controls output risks, advancing the reliability of LLMs in practical applications.
{ "Other": 0, "cs.AI": 0, "cs.CE": 0, "cs.CL": 1, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
2502.11678
Exploring LLM-based Student Simulation for Metacognitive Cultivation
[ "cs.CY", "cs.CL" ]
Metacognitive education plays a crucial role in cultivating students' self-regulation and reflective thinking, providing essential support for those with learning difficulties through academic advising. Simulating students with insufficient learning capabilities using large language models offers a promising approach to refining pedagogical methods without ethical concerns. However, existing simulations often fail to authentically represent students' learning struggles and face challenges in evaluation due to the lack of reliable metrics and ethical constraints in data collection. To address these issues, we propose a pipeline for automatically generating and filtering high-quality simulated student agents. Our approach leverages a two-round automated scoring system validated by human experts and employs a score propagation module to obtain more consistent scores across the student graph. Experimental results demonstrate that our pipeline efficiently identifies high-quality student agents, and we discuss the traits that influence the simulation's effectiveness. By simulating students with varying degrees of learning difficulties, our work paves the way for broader applications in personalized learning and educational assessment.
{ "Other": 0, "cs.AI": 0, "cs.CE": 0, "cs.CL": 1, "cs.CR": 0, "cs.CV": 0, "cs.CY": 1, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
2502.11680
Spectral structure learning for clinical time series
[ "cs.LG" ]
We develop and evaluate a structure learning algorithm for clinical time series. Clinical time series are multivariate time series observed in multiple patients and irregularly sampled, challenging existing structure learning algorithms. We assume that our time series are realizations of StructGP, a k-dimensional multi-output or multi-task stationary Gaussian process (GP), with independent patients sharing the same covariance function. StructGP encodes ordered conditional relations between time series, represented in a directed acyclic graph. We implement an adapted NOTEARS algorithm, which, based on a differentiable definition of acyclicity, recovers the graph by solving a series of continuous optimization problems. Simulation results show that up to mean degree 3 and 20 tasks, we reach a median recall of 0.93 [IQR 0.86-0.97] while keeping a median precision of 0.71 [IQR 0.57-0.84] for recovering directed edges. We further show that the regularization path is key to identifying the graph. With StructGP, we propose a model of time series dependencies that flexibly adapts to different degrees of time series regularity, while enabling us to learn these dependencies from observations.
{ "Other": 0, "cs.AI": 0, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 1, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
2502.11681
RIDE: Enhancing Large Language Model Alignment through Restyled In-Context Learning Demonstration Exemplars
[ "cs.CL", "cs.AI" ]
Alignment tuning is crucial for ensuring large language models (LLMs) behave ethically and helpfully. Current alignment approaches require high-quality annotations and significant training resources. This paper proposes a low-cost, tuning-free method using in-context learning (ICL) to enhance LLM alignment. Through an analysis of high-quality ICL demos, we identified style as a key factor influencing LLM alignment capabilities and explicitly restyled ICL exemplars based on this stylistic framework. Additionally, we combined the restyled demos to achieve a balance between the two conflicting aspects of LLM alignment--factuality and safety. We packaged the restyled examples as prompts to trigger few-shot learning, improving LLM alignment. Compared to the best baseline approach, on scales where 5.00 is the maximum average score, our method achieves a maximum increase of 0.10 on the Alpaca task (from 4.50 to 4.60), an improvement of 0.22 on the Just-eval benchmark (from 4.34 to 4.56), and a maximum improvement of 0.32 (from 3.53 to 3.85) on the MT-Bench dataset. We release the code and data at https://github.com/AnonymousCode-ComputerScience/RIDE.
{ "Other": 0, "cs.AI": 1, "cs.CE": 0, "cs.CL": 1, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
2502.11682
Double Momentum and Error Feedback for Clipping with Fast Rates and Differential Privacy
[ "cs.LG", "math.OC", "stat.ML" ]
Strong Differential Privacy (DP) and Optimization guarantees are two desirable properties for a method in Federated Learning (FL). However, existing algorithms do not achieve both properties at once: they either have optimal DP guarantees but rely on restrictive assumptions such as bounded gradients/bounded data heterogeneity, or they ensure strong optimization performance but lack DP guarantees. To address this gap in the literature, we propose and analyze a new method called Clip21-SGD2M based on a novel combination of clipping, heavy-ball momentum, and Error Feedback. In particular, for non-convex smooth distributed problems with clients having arbitrarily heterogeneous data, we prove that Clip21-SGD2M has optimal convergence rate and also near optimal (local-)DP neighborhood. Our numerical experiments on non-convex logistic regression and training of neural networks highlight the superiority of Clip21-SGD2M over baselines in terms of the optimization performance for a given DP-budget.
{ "Other": 0, "cs.AI": 0, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 1, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
2502.11684
MathFimer: Enhancing Mathematical Reasoning by Expanding Reasoning Steps through Fill-in-the-Middle Task
[ "cs.CL", "cs.AI" ]
Mathematical reasoning represents a critical frontier in advancing large language models (LLMs). While step-by-step approaches have emerged as the dominant paradigm for mathematical problem-solving in LLMs, the quality of reasoning steps in training data fundamentally constrains the performance of the models. Recent studies have demonstrated that more detailed intermediate steps can enhance model performance, yet existing methods for step expansion either require more powerful external models or incur substantial computational costs. In this paper, we introduce MathFimer, a novel framework for mathematical reasoning step expansion inspired by the "Fill-in-the-middle" task from code completion. By decomposing solution chains into prefix-suffix pairs and training models to reconstruct missing intermediate steps, we develop a specialized model, MathFimer-7B, on our carefully curated NuminaMath-FIM dataset. We then apply these models to enhance existing mathematical reasoning datasets by inserting detailed intermediate steps into their solution chains, creating MathFimer-expanded versions. Through comprehensive experiments on multiple mathematical reasoning datasets, including MathInstruct and MetaMathQA, we demonstrate that models trained on MathFimer-expanded data consistently outperform their counterparts trained on original data across various benchmarks such as GSM8K and MATH. Our approach offers a practical, scalable solution for enhancing mathematical reasoning capabilities in LLMs without relying on powerful external models or expensive inference procedures.
{ "Other": 0, "cs.AI": 1, "cs.CE": 0, "cs.CL": 1, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
2502.11687
ReVeil: Unconstrained Concealed Backdoor Attack on Deep Neural Networks using Machine Unlearning
[ "cs.CR", "cs.AI", "cs.LG" ]
Backdoor attacks embed hidden functionalities in deep neural networks (DNN), triggering malicious behavior with specific inputs. Advanced defenses monitor anomalous DNN inferences to detect such attacks. However, concealed backdoors evade detection by maintaining a low pre-deployment attack success rate (ASR) and restoring high ASR post-deployment via machine unlearning. Existing concealed backdoors are often constrained by requiring white-box or black-box access or auxiliary data, limiting their practicality when such access or data is unavailable. This paper introduces ReVeil, a concealed backdoor attack targeting the data collection phase of the DNN training pipeline, requiring no model access or auxiliary data. ReVeil maintains low pre-deployment ASR across four datasets and four trigger patterns, successfully evades three popular backdoor detection methods, and restores high ASR post-deployment through machine unlearning.
{ "Other": 0, "cs.AI": 1, "cs.CE": 0, "cs.CL": 0, "cs.CR": 1, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 1, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }