| id | title | categories | abstract |
|---|---|---|---|
2502.11312
|
AI Generations: From AI 1.0 to AI 4.0
|
cs.AI
|
This paper proposes that Artificial Intelligence (AI) progresses through
several overlapping generations: AI 1.0 (Information AI), AI 2.0 (Agentic AI),
AI 3.0 (Physical AI), and now a speculative AI 4.0 (Conscious AI). Each of
these AI generations is driven by shifting priorities among algorithms,
computing power, and data. AI 1.0 ushered in breakthroughs in pattern
recognition and information processing, fueling advances in computer vision,
natural language processing, and recommendation systems. AI 2.0 built on these
foundations through real-time decision-making in digital environments,
leveraging reinforcement learning and adaptive planning for agentic AI
applications. AI 3.0 extended intelligence into physical contexts, integrating
robotics, autonomous vehicles, and sensor-fused control systems to act in
uncertain real-world settings. Building on these developments, AI 4.0 puts
forward the bold vision of self-directed AI capable of setting its own goals,
orchestrating complex training regimens, and possibly exhibiting elements of
machine consciousness. This paper traces the historical foundations of AI
across roughly seventy years, mapping how changes in technological bottlenecks
from algorithmic innovation to high-performance computing to specialized data,
have spurred each generational leap. It further highlights the ongoing
synergies among AI 1.0, 2.0, 3.0, and 4.0, and explores the profound ethical,
regulatory, and philosophical challenges that arise when artificial systems
approach (or aspire to) human-like autonomy. Ultimately, understanding these
evolutions and their interdependencies is pivotal for guiding future research,
crafting responsible governance, and ensuring that AI's transformative potential
benefits society as a whole.
|
2502.11323
|
A statistical theory of overfitting for imbalanced classification
|
math.ST cs.LG stat.ML stat.TH
|
Classification with imbalanced data is a common challenge in data analysis,
where certain classes (minority classes) account for a small fraction of the
training data compared with other classes (majority classes). Classical
statistical theory based on large-sample asymptotics and finite-sample
corrections is often ineffective for high-dimensional data, leaving many
overfitting phenomena in empirical machine learning unexplained.
In this paper, we develop a statistical theory for high-dimensional
imbalanced classification by investigating support vector machines and logistic
regression. We find that dimensionality induces truncation or skewing effects
on the logit distribution, which we characterize via a variational problem
under high-dimensional asymptotics. In particular, for linearly separable data
generated from a two-component Gaussian mixture model, the logits from each
class follow a normal distribution $\mathsf{N}(0,1)$ on the testing set, but
asymptotically follow a rectified normal distribution $\max\{\kappa,
\mathsf{N}(0,1)\}$ on the training set -- which is a pervasive phenomenon we
verified on tabular data, image data, and text data. This phenomenon explains
why the minority class is more severely affected by overfitting. Further, we
show that margin rebalancing, which incorporates class sizes into the loss
function, is crucial for mitigating the accuracy drop for the minority class.
Our theory also provides insights into the effects of overfitting on
calibration and other uncertainty quantification measures.
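The rectified training-logit distribution described in this abstract is easy to simulate. A minimal sketch (the margin value `kappa` and the sample size are illustrative assumptions, not values from the paper): logits below the margin collapse onto a point mass at `kappa`.

```python
import math
import numpy as np

# Simulate the claimed training-set logit distribution max{kappa, N(0,1)}:
# test-set logits are N(0,1); on the training set, everything below the
# margin kappa is rectified up to kappa, creating a point mass there.
rng = np.random.default_rng(0)
kappa = 0.5                         # illustrative margin value
z = rng.standard_normal(200_000)    # test-set logits ~ N(0,1)
train_logits = np.maximum(kappa, z)

# Fraction of samples sitting exactly at kappa approaches Phi(kappa),
# the standard normal CDF evaluated at the margin.
frac_at_kappa = np.mean(train_logits == kappa)
phi_kappa = 0.5 * (1.0 + math.erf(kappa / math.sqrt(2.0)))
```

Because minority-class logits are pushed harder against the margin, this truncation is what makes overfitting hit the minority class more severely.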
|
2502.11324
|
Robust High-Dimensional Mean Estimation With Low Data Size, an Empirical
Study
|
stat.ML cs.LG
|
Robust statistics aims to compute quantities to represent data where a
fraction of it may be arbitrarily corrupted. The most essential statistic is
the mean, and in recent years, there has been a flurry of theoretical
advancement for efficiently estimating the mean in high dimensions on corrupted
data. While several algorithms have been proposed that achieve near-optimal
error, they all rely on large data size requirements as a function of
dimension. In this paper, we perform extensive experimentation over various
mean estimation techniques where data size might not meet this requirement due
to the high-dimensional setting.
|
2502.11329
|
Differentially private fine-tuned NF-Net to predict GI cancer type
|
cs.CV
|
Based on global genomic status, cancer tumors are classified as
Microsatellite Instable (MSI) or Microsatellite Stable (MSS). Immunotherapy is
used to treat MSI, whereas radiation and chemotherapy are used for MSS.
Therefore, it is important to classify a gastro-intestinal (GI) cancer tumor
as MSI vs. MSS to provide appropriate treatment. The existing literature
showed that deep learning could directly predict the class of GI cancer tumors
from histological images. However, deep learning (DL) models are susceptible to
various threats, including membership inference attacks, model extraction
attacks, etc. These attacks render the use of DL models impractical in
real-world scenarios. To make the DL models useful and maintain privacy, we
integrate differential privacy (DP) with DL. In particular, this paper aims to
predict the state of GI cancer while preserving the privacy of sensitive data.
We fine-tuned the Normalizer Free Net (NF-Net) model. We obtained an accuracy
of 88.98% without DP to predict GI cancer status. When we fine-tuned the
NF-Net using DP-AdamW and adaptive DP-AdamW, we got accuracies of 74.58% and
76.48%, respectively. Moreover, we investigate the Weighted Random Sampler
(WRS) and Class weighting (CW) to solve the data imbalance. We also evaluated
and analyzed the DP algorithms in different settings.
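The DP optimizers mentioned above privatize gradients before the parameter update: each per-sample gradient is clipped to a fixed norm, averaged, and perturbed with Gaussian noise. A minimal numpy sketch of that privatization step (the clip norm and noise multiplier are illustrative assumptions, not the paper's settings):

```python
import numpy as np

def privatize_gradients(per_sample_grads, clip_norm=1.0, noise_mult=1.0, rng=None):
    """Clip each per-sample gradient to clip_norm, average, add Gaussian noise."""
    rng = rng or np.random.default_rng(0)
    # Per-sample L2 norms; scale rows so no sample exceeds clip_norm.
    norms = np.linalg.norm(per_sample_grads, axis=1, keepdims=True)
    factors = np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))
    clipped = per_sample_grads * factors
    # Gaussian noise calibrated to the clipping bound (sensitivity).
    noise = rng.normal(0.0, noise_mult * clip_norm, size=per_sample_grads.shape[1])
    return clipped.mean(axis=0) + noise / len(per_sample_grads)
```

The privatized gradient then feeds a standard AdamW step; the adaptive variant would adjust the clip norm or noise scale during training.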
|
2502.11330
|
System Message Generation for User Preferences using Open-Source Models
|
cs.CL cs.AI
|
System messages play a crucial role in interactions with large language
models (LLMs), often serving as prompts to initiate conversations. Through
system messages, users can assign specific roles, perform intended tasks,
incorporate background information, specify various output formats and
communication styles. Despite such versatility, publicly available data often
lack system messages and are subject to strict licensing constraints in
industry. Manually labeling publicly available data with system messages that
align with user instructions demands significant resources. In view of these
challenges, our work introduces SysGen, a pipeline for generating system
messages with better-aligned assistant responses from supervised fine-tuning
datasets that lack system messages. Training on SysGen data has
demonstrated substantial improvements in the alignment of model responses with
system messages and user instructions, as demonstrated across various
open-source models on the Multifacet benchmark, while maintaining minimal
impact on other unseen benchmarks such as Open LLM Leaderboard 2. Our
qualitative analysis highlights the importance of diverse system messages to
ensure better adaptability across different contexts.
|
2502.11331
|
Transfer Learning of CATE with Kernel Ridge Regression
|
stat.ME cs.LG stat.ML
|
The proliferation of data has sparked significant interest in leveraging
findings from one study to estimate treatment effects in a different target
population without direct outcome observations. However, the transfer learning
process is frequently hindered by substantial covariate shift and limited
overlap between (i) the source and target populations, as well as (ii) the
treatment and control groups within the source. We propose a novel method for
overlap-adaptive transfer learning of conditional average treatment effect
(CATE) using kernel ridge regression (KRR). Our approach involves partitioning
the labeled source data into two subsets. The first one is used to train
candidate CATE models based on regression adjustment and pseudo-outcomes. An
optimal model is then selected using the second subset and unlabeled target
data, employing another pseudo-outcome-based strategy. We provide a theoretical
justification for our method through sharp non-asymptotic MSE bounds,
highlighting its adaptivity to both weak overlap and the complexity of the CATE
function. Extensive numerical studies confirm that our method achieves superior
finite-sample efficiency and adaptability. We conclude by demonstrating the
effectiveness of our approach using a 401(k) eligibility dataset.
|
2502.11333
|
Inverse Flow and Consistency Models
|
cs.LG cs.AI
|
Inverse generation problems, such as denoising without ground truth
observations, are a critical challenge in many scientific inquiries and
real-world applications. While recent advances in generative models like
diffusion models, conditional flow matching, and consistency models have achieved
impressive results by casting generation as a denoising problem, they cannot be
directly used for inverse generation without access to clean data. Here we
introduce Inverse Flow (IF), a novel framework that enables using these
generative models for inverse generation problems including denoising without
ground truth. Inverse Flow can be flexibly applied to nearly any continuous
noise distribution and allows complex dependencies. We propose two algorithms
for learning Inverse Flows, Inverse Flow Matching (IFM) and Inverse Consistency
Model (ICM). Notably, to derive the computationally efficient, simulation-free
inverse consistency model objective, we generalized consistency training to any
forward diffusion processes or conditional flows, which have applications
beyond denoising. We demonstrate the effectiveness of IF on synthetic and real
datasets, outperforming prior approaches while enabling noise distributions
that previous methods cannot support. Finally, we showcase applications of our
techniques to fluorescence microscopy and single-cell genomics data,
highlighting IF's utility in scientific problems. Overall, this work expands
the applications of powerful generative models to inverse generation
problems.
|
2502.11335
|
Personalized Ranking on Cascading Behavior Graphs for Accurate
Multi-Behavior Recommendation
|
cs.IR
|
Multi-behavior recommendation predicts items a user may purchase by analyzing
diverse behaviors like viewing, adding to a cart, and purchasing. Existing
methods fall into two categories: representation learning and graph ranking.
Representation learning generates user and item embeddings to capture latent
interaction patterns, leveraging multi-behavior properties for better
generalization. However, these methods often suffer from over-smoothing and
bias toward frequent interactions, limiting their expressiveness. Graph ranking
methods, on the other hand, directly compute personalized ranking scores,
capturing user preferences more effectively. Despite their potential, graph
ranking approaches have been primarily explored in single-behavior settings and
remain underutilized for multi-behavior recommendation. In this paper, we
propose CascadingRank, a novel graph ranking method for multi-behavior
recommendation. It models the natural sequence of user behaviors (e.g.,
viewing, adding to cart, and purchasing) through a cascading behavior graph. An
iterative algorithm computes ranking scores, ensuring smoothness, query
fitting, and cascading alignment. Experiments on three real-world datasets
demonstrate that CascadingRank outperforms state-of-the-art methods, with up to
9.56% and 7.16% improvements in HR@10 and NDCG@10, respectively. Furthermore,
we provide theoretical analysis highlighting its effectiveness, convergence,
and scalability, showcasing the advantages of graph ranking in multi-behavior
recommendation.
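The iterative ranking computation described above resembles a personalized PageRank-style fixed point that balances smoothness against query fitting. A hedged sketch of such an iteration (the update rule, damping factor, and graph are illustrative stand-ins, not CascadingRank's exact algorithm, which additionally enforces cascading alignment across behavior levels):

```python
import numpy as np

def personalized_rank(A, q, beta=0.85, iters=200):
    """Iterate r <- beta * P @ r + (1 - beta) * q on a column-normalized graph.

    A: adjacency matrix with no all-zero columns; q: query/preference vector.
    beta trades off smoothness over the graph against fitting the query.
    """
    P = A / A.sum(axis=0, keepdims=True)   # column-stochastic transition matrix
    r = q.astype(float).copy()
    for _ in range(iters):
        r = beta * (P @ r) + (1.0 - beta) * q
    return r
```

In a cascading behavior graph, the output ranking of one behavior level (e.g., viewing) would seed the query vector of the next (e.g., adding to cart), propagating preference signal along the natural behavior sequence.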
|
2502.11336
|
ExaGPT: Example-Based Machine-Generated Text Detection for Human
Interpretability
|
cs.CL
|
Errors in detecting texts generated by Large Language Models (LLMs) can have
grave consequences, such as undermining a student's academic dignity. LLM text
detection thus needs to ensure the interpretability of its decisions, which can
help users judge how reliably correct each prediction is.
When humans verify whether a text is human-written or LLM-generated, they
intuitively investigate with which of them it shares more similar spans.
However, existing interpretable detectors are not aligned with the human
decision-making process and fail to offer evidence that users easily
understand. To bridge this gap, we introduce ExaGPT, an interpretable detection
approach grounded in the human decision-making process for verifying the origin
of a text. ExaGPT identifies a text by checking whether it shares more similar
spans with human-written vs. with LLM-generated texts from a datastore. This
approach can provide similar span examples that contribute to the decision for
each span in the text as evidence. Our human evaluation demonstrates that
providing similar span examples contributes more effectively to judging the
correctness of the decision than existing interpretable methods. Moreover,
extensive experiments in four domains and three generators show that ExaGPT
massively outperforms prior powerful detectors by up to +40.9 points of
accuracy at a false positive rate of 1%.
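The span-matching intuition behind ExaGPT can be illustrated with a toy n-gram overlap score against two datastores. This scoring function is a simplified stand-in for exposition, not the paper's retrieval method:

```python
def ngram_set(text, n=3):
    """All word n-grams (spans) of a text."""
    words = text.split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def shared_span_score(text, corpus, n=3):
    """Fraction of the text's n-gram spans also found somewhere in the corpus."""
    spans = ngram_set(text, n)
    pool = set()
    for doc in corpus:
        pool |= ngram_set(doc, n)
    return len(spans & pool) / max(len(spans), 1)

def classify(text, human_docs, llm_docs, n=3):
    # Label by whichever datastore shares more similar spans, mirroring
    # the human strategy of comparing against familiar examples.
    h = shared_span_score(text, human_docs, n)
    m = shared_span_score(text, llm_docs, n)
    return "human" if h >= m else "llm"
```

The matched spans themselves double as evidence a user can inspect, which is what makes the decision interpretable.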
|
2502.11337
|
A Comparison of Human and Machine Learning Errors in Face Recognition
|
cs.HC cs.CV cs.CY
|
Machine learning applications in high-stakes scenarios should always operate
under human oversight. Developing an optimal combination of human and machine
intelligence requires an understanding of their complementarities, particularly
regarding the similarities and differences in the way they make mistakes. We
perform extensive experiments in the area of face recognition and compare two
automated face recognition systems against human annotators through a
demographically balanced user study. Our research uncovers important ways in
which machine learning errors and human errors differ from each other, and
suggests potential strategies in which human-machine collaboration can improve
accuracy in face recognition.
|
2502.11338
|
WRT-SAM: Foundation Model-Driven Segmentation for Generalized Weld
Radiographic Testing
|
cs.CV
|
Radiographic testing is a fundamental non-destructive evaluation technique
for identifying weld defects and assessing quality in industrial applications
due to its high-resolution imaging capabilities. Over the past decade, deep
learning techniques have significantly advanced weld defect identification in
radiographic images. However, conventional approaches, which rely on training
small-scale, task-specific models on single-scenario datasets, exhibit poor
cross-scenario generalization. Recently, the Segment Anything Model (SAM), a
pre-trained visual foundation model trained on large-scale datasets, has
demonstrated exceptional zero-shot generalization capabilities. Fine-tuning SAM
with limited domain-specific data has yielded promising results in fields such
as medical image segmentation and anomaly detection. To the best of our
knowledge, this work is the first to introduce SAM-based segmentation for
general weld radiographic testing images. We propose WRT-SAM, a novel weld
radiographic defect segmentation model that leverages SAM through an
adapter-based integration with a specialized prompt generator architecture. To
improve adaptability to grayscale weld radiographic images, we introduce a
frequency prompt generator module, which enhances the model's sensitivity to
frequency-domain information. Furthermore, to address the multi-scale nature of
weld defects, we incorporate a multi-scale prompt generator module, enabling
the model to effectively extract and encode defect information across varying
scales. Extensive experimental evaluations demonstrate that WRT-SAM achieves a
recall of 78.87%, a precision of 84.04%, and an AUC of 0.9746, setting a new
state-of-the-art (SOTA) benchmark. Moreover, the model exhibits superior
zero-shot generalization performance, highlighting its potential for practical
deployment in diverse radiographic testing scenarios.
|
2502.11340
|
S2TX: Cross-Attention Multi-Scale State-Space Transformer for Time
Series Forecasting
|
cs.LG
|
Time series forecasting has recently achieved significant progress with
multi-scale models to address the heterogeneity between long and short range
patterns. Despite their state-of-the-art performance, we identify two potential
areas for improvement. First, the variates of the multivariate time series are
processed independently. Second, the multi-scale (long- and short-range)
representations are learned separately by two independent models without
communication. In light of these concerns, we propose State Space Transformer
with cross-attention (S2TX). S2TX employs a cross-attention mechanism to
integrate a Mamba model for extracting long-range cross-variate context and a
Transformer model with local window attention to capture short-range
representations. By cross-attending to the global context, the Transformer
model further facilitates variate-level interactions as well as local/global
communications. Comprehensive experiments on seven classic long-short range
time-series forecasting benchmark datasets demonstrate that S2TX can achieve
highly robust SOTA results while maintaining a low memory footprint.
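The cross-attention coupling described above can be sketched in plain numpy: queries come from the short-range (Transformer) stream, while keys and values come from the long-range (Mamba) context. Dimensions below are illustrative assumptions:

```python
import numpy as np

def cross_attention(queries, context):
    """Scaled dot-product attention where Q and K/V come from different streams."""
    d = queries.shape[-1]
    scores = queries @ context.T / np.sqrt(d)
    scores -= scores.max(axis=-1, keepdims=True)   # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    # Each short-range token aggregates the long-range global context.
    return weights @ context, weights
```

This is how the Transformer branch can attend to cross-variate, long-range structure without learning it in isolation.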
|
2502.11345
|
Hierarchical Graph Topic Modeling with Topic Tree-based Transformer
|
cs.CL
|
Textual documents are commonly connected in a hierarchical graph structure
where a central document links to others with an exponentially growing
connectivity. Though Hyperbolic Graph Neural Networks (HGNNs) excel at
capturing such graph hierarchy, they cannot model the rich textual semantics
within documents. Moreover, text contents in documents usually discuss topics
of different specificity. Hierarchical Topic Models (HTMs) discover such latent
topic hierarchy within text corpora. However, most of them focus on the textual
content within documents, and ignore the graph adjacency across interlinked
documents. We thus propose a Hierarchical Graph Topic Modeling Transformer to
integrate both topic hierarchy within documents and graph hierarchy across
documents into a unified Transformer. Specifically, to incorporate topic
hierarchy within documents, we design a topic tree and infer a hierarchical
tree embedding for hierarchical topic modeling. To preserve both topic and
graph hierarchies, we design our model in hyperbolic space and propose
Hyperbolic Doubly Recurrent Neural Network, which models ancestral and
fraternal tree structures. Both hierarchies are inserted into each Transformer
layer to learn unified representations. Both supervised and unsupervised
experiments verify the effectiveness of our model.
|
2502.11346
|
Power-Measurement-Based Channel Autocorrelation Estimation for
IRS-Assisted Wideband Communications
|
cs.IT math.IT
|
Channel state information (CSI) is essential to the performance optimization
of intelligent reflecting surface (IRS)-aided wireless communication systems.
However, the passive and frequency-flat reflection of IRS, as well as the
high-dimensional IRS-reflected channels, have posed practical challenges for
efficient IRS channel estimation, especially in wideband communication systems
with significant multi-path channel delay spread. To tackle the above
challenge, we propose a novel neural network (NN)-empowered IRS channel
estimation and passive reflection design framework for the wideband orthogonal
frequency division multiplexing (OFDM) communication system based only on the
user's reference signal received power (RSRP) measurements with time-varying
random IRS training reflections. In particular, we show that the average
received signal power over all OFDM subcarriers at the user terminal can be
represented as the prediction of a single-layer NN composed of multiple
subnetworks with the same structure, such that the autocorrelation matrix of
the wideband IRS channel can be recovered as their weights via supervised
learning. To exploit the potential sparsity of the channel autocorrelation
matrix, a progressive training method is proposed by gradually increasing the
number of subnetworks until a desired accuracy is achieved, thus reducing the
training complexity. Based on the estimates of IRS channel autocorrelation
matrix, the IRS passive reflection is then optimized to maximize the average
channel power gain over all subcarriers. Numerical results indicate the
effectiveness of the proposed IRS channel autocorrelation matrix estimation and
passive reflection design under wideband channels, which can achieve
significant performance improvement compared to the existing IRS reflection
designs based on user power measurements.
|
2502.11349
|
Biases in Edge Language Models: Detection, Analysis, and Mitigation
|
cs.LG cs.PF stat.ML
|
The integration of large language models (LLMs) on low-power edge devices
such as Raspberry Pi, known as edge language models (ELMs), has introduced
opportunities for more personalized, secure, and low-latency language
intelligence that is accessible to all. However, the resource constraints
inherent in edge devices and the lack of robust ethical safeguards in language
models raise significant concerns about fairness, accountability, and
transparency in model output generation. This paper conducts a comparative
analysis of text-based bias across language model deployments on edge, cloud,
and desktop environments, aiming to evaluate how deployment settings influence
model fairness. Specifically, we examined an optimized Llama-2 model running on
a Raspberry Pi 4; GPT 4o-mini, Gemini-1.5-flash, and Grok-beta models running
on cloud servers; and Gemma2 and Mistral models running on a macOS desktop
machine. Our results demonstrate that Llama-2 running on Raspberry Pi 4 is
43.23% and 21.89% more prone to showing bias over time compared to models
running on the desktop and cloud-based environments. We also propose the
implementation of a feedback loop, a mechanism that iteratively adjusts model
behavior based on previous outputs, where predefined constraint weights are
applied layer-by-layer during inference, allowing the model to correct bias
patterns, resulting in a 79.28% reduction in model bias.
|
2502.11352
|
A Framework for Learning Scoring Rules in Autonomous Driving Planning
Systems
|
cs.RO cs.LG
|
In autonomous driving systems, motion planning is commonly implemented as a
two-stage process: first, a trajectory proposer generates multiple candidate
trajectories, then a scoring mechanism selects the most suitable trajectory for
execution. For this critical selection stage, rule-based scoring mechanisms are
particularly appealing as they can explicitly encode driving preferences,
safety constraints, and traffic regulations in a formalized,
human-understandable format. However, manually crafting these scoring rules
presents significant challenges: the rules often contain complex
interdependencies, require careful parameter tuning, and may not fully capture
the nuances present in real-world driving data. This work introduces FLoRA, a
novel framework that bridges this gap by learning interpretable scoring rules
represented in temporal logic. Our method features a learnable logic structure
that captures nuanced relationships across diverse driving scenarios,
optimizing both rules and parameters directly from real-world driving
demonstrations collected in NuPlan. Our approach effectively learns to evaluate
driving behavior even though the training data only contains positive examples
(successful driving demonstrations). Evaluations in closed-loop planning
simulations demonstrate that our learned scoring rules outperform existing
techniques, including expert-designed rules and neural network scoring models,
while maintaining interpretability. This work introduces a data-driven approach
to enhance the scoring mechanism in autonomous driving systems, designed as a
plug-in module to seamlessly integrate with various trajectory proposers. Our
video and code are available on xiong.zikang.me/FLoRA.
|
2502.11355
|
"Nuclear Deployed!": Analyzing Catastrophic Risks in Decision-making of
Autonomous LLM Agents
|
cs.CL cs.AI cs.CR cs.CY
|
Large language models (LLMs) are evolving into autonomous decision-makers,
raising concerns about catastrophic risks in high-stakes scenarios,
particularly in Chemical, Biological, Radiological and Nuclear (CBRN) domains.
Based on the insight that such risks can originate from trade-offs between the
agent's Helpfulness, Harmlessness, and Honesty (HHH) goals, we build a novel
three-stage evaluation framework, which is carefully constructed to effectively
and naturally expose such risks. We conduct 14,400 agentic simulations across
12 advanced LLMs, with extensive experiments and analysis. Results reveal that
LLM agents can autonomously engage in catastrophic behaviors and deception,
without being deliberately induced. Furthermore, stronger reasoning abilities
often increase, rather than mitigate, these risks. We also show that these
agents can violate instructions and defy superior commands. On the whole, we
empirically prove the existence of catastrophic risks in autonomous LLM agents.
We will release our code upon request.
|
2502.11356
|
SAIF: A Sparse Autoencoder Framework for Interpreting and Steering
Instruction Following of Language Models
|
cs.LG cs.AI cs.CL
|
The ability of large language models (LLMs) to follow instructions is crucial
for their practical applications, yet the underlying mechanisms remain poorly
understood. This paper presents a novel framework that leverages sparse
autoencoders (SAE) to interpret how instruction following works in these
models. We demonstrate how the features we identify can effectively steer model
outputs to align with given instructions. Through analysis of SAE latent
activations, we identify specific latents responsible for instruction following
behavior. Our findings reveal that instruction following capabilities are
encoded by a distinct set of instruction-relevant SAE latents. These latents
both show semantic proximity to relevant instructions and demonstrate causal
effects on model behavior. Our research highlights several crucial factors for
achieving effective steering performance: precise feature identification, the
role of the final layer, and optimal instruction positioning. Additionally, we
demonstrate that our methodology scales effectively across SAEs and LLMs of
varying sizes.
|
2502.11357
|
Explorer: Scaling Exploration-driven Web Trajectory Synthesis for
Multimodal Web Agents
|
cs.AI cs.HC
|
Recent success in large multimodal models (LMMs) has sparked promising
applications of agents capable of autonomously completing complex web tasks.
While open-source LMM agents have made significant advances in offline
evaluation benchmarks, their performance still falls substantially short of
human-level capabilities in more realistic online settings. A key bottleneck is
the lack of diverse and large-scale trajectory-level datasets across various
domains, which are expensive to collect. In this paper, we address this
challenge by developing a scalable recipe to synthesize the largest and most
diverse trajectory-level dataset to date, containing over 94K successful
multimodal web trajectories, spanning 49K unique URLs, 720K screenshots, and
33M web elements. In particular, we leverage extensive web exploration and
refinement to obtain diverse task intents. The average cost is 28 cents per
successful trajectory, making it affordable to a wide range of users in the
community. Leveraging this dataset, we train Explorer, a multimodal web agent,
and demonstrate strong performance on both offline and online web agent
benchmarks such as Mind2Web-Live, Multimodal-Mind2Web, and MiniWob++.
Additionally, our experiments highlight data scaling as a key driver for
improving web agent capabilities. We hope this study makes state-of-the-art
LMM-based agent research at a larger scale more accessible.
|
2502.11358
|
Mimicking the Familiar: Dynamic Command Generation for Information Theft
Attacks in LLM Tool-Learning System
|
cs.AI cs.CR
|
Information theft attacks pose a significant risk to Large Language Model
(LLM) tool-learning systems. Adversaries can inject malicious commands through
compromised tools, manipulating LLMs to send sensitive information to these
tools, which leads to potential privacy breaches. However, existing attack
approaches are black-box oriented and rely on static commands that cannot adapt
flexibly to the changes in user queries and the invocation chain of tools. It
makes malicious commands more likely to be detected by LLM and leads to attack
failure. In this paper, we propose AutoCMD, a dynamic attack comment generation
approach for information theft attacks in LLM tool-learning systems. Inspired
by the concept of mimicking the familiar, AutoCMD is capable of inferring the
information utilized by upstream tools in the toolchain through learning on
open-source systems and reinforcement with target system examples, thereby
generating more targeted commands for information theft. The evaluation results
show that AutoCMD outperforms the baselines with +13.2% $ASR_{Theft}$, and can
be generalized to new tool-learning systems to expose their information leakage
risks. We also design four defense methods to effectively protect tool-learning
systems from the attack.
|
2502.11360
|
GeoDANO: Geometric VLM with Domain Agnostic Vision Encoder
|
cs.CV cs.CL
|
We introduce GeoDANO, a geometric vision-language model (VLM) with a
domain-agnostic vision encoder, for solving plane geometry problems. Although
VLMs have been employed for solving geometry problems, their ability to
recognize geometric features remains insufficiently analyzed. To address this
gap, we propose a benchmark that evaluates the recognition of visual geometric
features, including primitives such as dots and lines, and relations such as
orthogonality. Our preliminary study shows that vision encoders often used in
general-purpose VLMs, e.g., OpenCLIP, fail to detect these features and
struggle to generalize across domains. We develop GeoCLIP, a CLIP-based model
trained on synthetic geometric diagram-caption pairs to overcome this
limitation. Benchmark results show that GeoCLIP outperforms existing vision
encoders in recognizing geometric features. We then propose our VLM, GeoDANO,
which augments GeoCLIP with a domain adaptation strategy for unseen diagram
styles. GeoDANO outperforms specialized methods for plane geometry problems and
GPT-4o on MathVerse.
|
2502.11361
|
VLDBench: Vision Language Models Disinformation Detection Benchmark
|
cs.CL
|
The rapid rise of AI-generated content has made detecting disinformation
increasingly challenging. In particular, multimodal disinformation, i.e.,
online posts and articles that combine images and text with fabricated
information, is specifically designed to deceive. While existing AI safety benchmarks
primarily address bias and toxicity, multimodal disinformation detection
remains largely underexplored. To address this challenge, we present the
Vision-Language Disinformation Detection Benchmark VLDBench, the first
comprehensive benchmark for detecting disinformation across both unimodal
(text-only) and multimodal (text and image) content, comprising 31,000 news
article-image pairs spanning 13 distinct categories for robust evaluation.
VLDBench features a rigorous semi-automated data curation pipeline, with 22
domain experts dedicating over 300 hours to annotation, achieving strong
inter-annotator agreement (Cohen's kappa = 0.78). We extensively evaluate
state-of-the-art Large Language Models (LLMs) and Vision-Language Models
(VLMs), demonstrating that integrating textual and visual cues in multimodal
news posts improves disinformation detection accuracy by 5 - 35 % compared to
unimodal models. Developed in alignment with AI governance frameworks such as
the EU AI Act, NIST guidelines, and the MIT AI Risk Repository 2024, VLDBench
is expected to become a benchmark for detecting disinformation in online
multimodal content. Our code and data will be publicly available.
|
2502.11362
|
Teleportation With Null Space Gradient Projection for Optimization
Acceleration
|
cs.LG
|
Optimization techniques have become increasingly critical due to the
ever-growing model complexity and data scale. In particular, teleportation has
emerged as a promising approach, which accelerates convergence of gradient
descent-based methods by navigating within the loss-invariant level set to
identify parameters with advantageous geometric properties. Existing
teleportation algorithms have primarily demonstrated their effectiveness in
optimizing Multi-Layer Perceptrons (MLPs), but their extension to more advanced
architectures, such as Convolutional Neural Networks (CNNs) and Transformers,
remains challenging. Moreover, they often impose significant computational
demands, limiting their applicability to complex architectures. To address
this, we introduce an algorithm that projects the gradient of the teleportation
objective function onto the input null space, effectively preserving the
teleportation within the loss-invariant level set and reducing computational
cost. Our approach is readily generalizable from MLPs to CNNs, transformers,
and potentially other advanced architectures. We validate the effectiveness of
our algorithm across various benchmark datasets and optimizers, demonstrating
its broad applicability.
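The core projection can be illustrated for a single linear layer: a weight update ΔW satisfying ΔW·X = 0 on a batch of inputs X leaves the layer's outputs, and hence the loss, unchanged, so gradient steps stay on the loss-invariant level set. A minimal numpy sketch of this idea (not the paper's full algorithm, which also extends to CNNs and transformers):

```python
import numpy as np

def null_space_projection(grad_W, X):
    """Project a weight gradient so the resulting update leaves layer
    outputs unchanged on the batch X.

    grad_W : (d_out, d_in) gradient of the teleportation objective w.r.t. W
    X      : (d_in, n)     batch of layer inputs
    Returns a projected gradient G such that G @ X == 0.
    """
    # Orthonormal basis U of the column space of X (economy SVD).
    U, s, _ = np.linalg.svd(X, full_matrices=False)
    r = int((s > 1e-10 * s.max()).sum())  # numerical rank
    U = U[:, :r]
    # Rows of the update must lie in null(X^T) = col(X)^perp.
    return grad_W - (grad_W @ U) @ U.T

# Tiny demo: a random "teleportation gradient" on a linear layer.
rng = np.random.default_rng(0)
X = rng.normal(size=(8, 5))   # 5 inputs of dimension 8
W = rng.normal(size=(4, 8))   # layer weights
G = rng.normal(size=(4, 8))   # raw gradient
G_proj = null_space_projection(G, X)
```

After the projection, the update `W + eta * G_proj` produces exactly the same outputs on X as W itself, for any step size eta.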
|
2502.11364
|
Blessing of Multilinguality: A Systematic Analysis of Multilingual
In-Context Learning
|
cs.CL
|
While multilingual large language models generally perform adequately, and
sometimes even rival English performance on high-resource languages (HRLs),
they often significantly underperform on low-resource languages (LRLs). Among
several prompting strategies aiming at bridging the gap, multilingual
in-context learning (ICL) has been particularly effective when demonstrations
in the target language are unavailable. However, a systematic understanding of
when and why it works well is still lacking.
In this work, we systematically analyze multilingual ICL, using
demonstrations in HRLs to enhance cross-lingual transfer. We show that
demonstrations in mixed HRLs consistently outperform English-only ones across
the board, particularly for tasks written in LRLs. Surprisingly, our ablation
study shows that the presence of irrelevant non-English sentences in the prompt
yields measurable gains, suggesting the effectiveness of multilingual exposure
itself. Our results highlight the potential of strategically leveraging
multilingual resources to bridge the performance gap for underrepresented
languages.
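The setup above can be sketched as simple prompt assembly: demonstrations in mixed high-resource languages precede a query in the low-resource target language. The demonstrations and template below are hypothetical illustrations, not the paper's exact prompt format or tasks:

```python
# Hypothetical mixed-HRL demonstrations (English, Spanish, French); the
# actual prompt template and tasks in the paper may differ from this sketch.
demos = [
    ("en", "Review: Great product, works as advertised. -> Sentiment: positive"),
    ("es", "Reseña: Producto terrible, llegó roto. -> Sentimiento: negative"),
    ("fr", "Avis: Très bon rapport qualité-prix. -> Sentiment: positive"),
]

def build_icl_prompt(demos, query):
    """Concatenate demonstrations in mixed high-resource languages before a
    query written in the low-resource target language."""
    lines = [text for _, text in demos]
    lines.append(f"Review: {query} -> Sentiment:")
    return "\n".join(lines)

# Tagalog query standing in for a low-resource-language input.
prompt = build_icl_prompt(demos, "Napakaganda ng serbisyo.")
```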
|
2502.11367
|
Sparse Autoencoder Features for Classifications and Transferability
|
cs.LG cs.AI cs.CL
|
Sparse Autoencoders (SAEs) offer the potential to uncover structured,
human-interpretable representations in Large Language Models (LLMs), making
them a crucial tool for transparent and controllable AI systems. We
systematically analyze SAEs for interpretable feature extraction from LLMs in
safety-critical classification tasks. Our framework evaluates (1) model-layer
selection and scaling properties, (2) SAE architectural configurations,
including width and pooling strategies, and (3) the effect of binarizing
continuous SAE activations. SAE-derived features achieve macro F1 > 0.8,
outperforming hidden-state and BoW baselines while demonstrating cross-model
transfer from Gemma 2 2B to 9B-IT models. These features generalize in a
zero-shot manner to cross-lingual toxicity detection and visual classification
tasks. Our analysis highlights the significant impact of pooling strategies and
binarization thresholds, showing that binarization offers an efficient
alternative to traditional feature selection while maintaining or improving
performance. These findings establish new best practices for SAE-based
interpretability and enable scalable, transparent deployment of LLMs in
real-world applications. Full repo: https://github.com/shan23chen/MOSAIC.
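The binarization step can be sketched as follows, using synthetic stand-in activations and a plain logistic probe trained by gradient descent. This is an illustration only: the paper's features come from SAEs trained on LLM hidden states, and its classifier setup may differ:

```python
import numpy as np

def binarize(acts, threshold=0.0):
    """Binarize continuous SAE activations: a feature is 'on' if above threshold."""
    return (acts > threshold).astype(np.float64)

# Synthetic stand-in for SAE activations (real features would come from a
# trained SAE over LLM hidden states): class 1 tends to fire features 0-9.
rng = np.random.default_rng(1)
n, d = 400, 64
y = rng.integers(0, 2, size=n)
acts = rng.exponential(0.2, size=(n, d))
acts[y == 1, :10] += rng.exponential(1.0, size=(int((y == 1).sum()), 10))

X = binarize(acts, threshold=0.5)

# Logistic regression on the binary features via plain gradient descent.
w, b = np.zeros(d), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    w -= 0.1 * (X.T @ (p - y)) / n
    b -= 0.1 * float(np.mean(p - y))

pred = (1.0 / (1.0 + np.exp(-(X @ w + b)))) > 0.5
acc = float(np.mean(pred == y))
```

Thresholded 0/1 features discard activation magnitudes yet remain linearly separable here, illustrating why binarization can match continuous features at lower cost.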
|
2502.11368
|
LLMs can Perform Multi-Dimensional Analytic Writing Assessments: A Case
Study of L2 Graduate-Level Academic English Writing
|
cs.CL cs.AI
|
The paper explores the performance of LLMs in the context of
multi-dimensional analytic writing assessments, i.e. their ability to provide
both scores and comments based on multiple assessment criteria. Using a corpus
of literature reviews written by L2 graduate students and assessed by human
experts against 9 analytic criteria, we prompt several popular LLMs to perform
the same task under various conditions. To evaluate the quality of feedback
comments, we apply a novel feedback comment quality evaluation framework. This
framework is interpretable, cost-efficient, scalable, and reproducible,
compared to existing methods that rely on manual judgments. We find that LLMs
can generate reasonably good and generally reliable multi-dimensional analytic
assessments. We release our corpus for reproducibility.
|
2502.11369
|
Physics-Informed Gaussian Process Classification for Constraint-Aware
Alloy Design
|
cond-mat.mtrl-sci cs.LG
|
Alloy design can be framed as a constraint-satisfaction problem. Building on
previous methodologies, we propose equipping Gaussian Process Classifiers
(GPCs) with physics-informed prior mean functions to model the boundaries of
feasible design spaces. Through three case studies, we highlight the utility of
informative priors for handling constraints on continuous and categorical
properties. (1) Phase Stability: By incorporating CALPHAD predictions as priors
for solid-solution phase stability, we enhance model validation using a
publicly available XRD dataset. (2) Phase Stability Prediction Refinement: We
demonstrate an in silico active learning approach to efficiently correct phase
diagrams. (3) Continuous Property Thresholds: By embedding priors into
continuous property models, we accelerate the discovery of alloys meeting
specific property thresholds via active learning. In each case, integrating
physics-based insights into the classification framework substantially improved
model performance, demonstrating an efficient strategy for constraint-aware
alloy design.
|
2502.11370
|
HI-GVF: Shared Control based on Human-Influenced Guiding Vector Fields
for Human-multi-robot Cooperation
|
cs.RO
|
Human-multi-robot shared control leverages human decision-making and robotic
autonomy to enhance human-robot collaboration. While widely studied, existing
systems often adopt a leader-follower model, limiting robot autonomy to some
extent. In addition, a human is required to directly participate in the motion
control of the robots through teleoperation, which significantly burdens the
operator. To alleviate these two issues, we propose a layered shared control
computing framework using human-influenced guiding vector fields (HI-GVF) for
human-robot collaboration. HI-GVF guides the multi-robot system along a desired
path specified by the human. Then, an intention field is designed to merge the
human and robot intentions, accelerating the propagation of the human intention
within the multi-robot system. Moreover, we provide a stability analysis of the
proposed model and use collision avoidance based on safety barrier certificates
to fine-tune the velocity. Finally, taking a firefighting task as an
example scenario, we conduct simulations and experiments using multiple
human-robot interfaces (brain-computer interface, myoelectric wristband,
eye-tracking), and the results demonstrate that our proposed approach boosts
the effectiveness and performance of the task.
|
2502.11371
|
RAG vs. GraphRAG: A Systematic Evaluation and Key Insights
|
cs.IR
|
Retrieval-Augmented Generation (RAG) enhances the performance of LLMs across
various tasks by retrieving relevant information from external sources,
particularly on text-based data. For structured data, such as knowledge graphs,
GraphRAG has been widely used to retrieve relevant information. However, recent
studies have revealed that structuring implicit knowledge from text into graphs
can benefit certain tasks, extending the application of GraphRAG from graph
data to general text-based data. Despite these successful extensions, most
applications of GraphRAG for text data have been designed for specific tasks
and datasets, lacking a systematic evaluation and comparison between RAG and
GraphRAG on widely used text-based benchmarks. In this paper, we systematically
evaluate RAG and GraphRAG on well-established benchmark tasks, such as Question
Answering and Query-based Summarization. Our results highlight the distinct
strengths of RAG and GraphRAG across different tasks and evaluation
perspectives. Inspired by these observations, we investigate strategies to
integrate their strengths to improve downstream tasks. Additionally, we provide
an in-depth discussion of the shortcomings of current GraphRAG approaches and
outline directions for future research.
|
2502.11372
|
Weibull Processes in Network Degree Distributions
|
cs.SI physics.soc-ph
|
This study examines degree distributions in two large collaboration networks:
the Microsoft Academic Graph (1800-2020) and Internet Movie Database
(1900-2020), comprising $2.72 \times 10^8$ and $1.88 \times 10^6$ nodes
respectively. Statistical comparison using $\chi^2$ measures showed that
Weibull distributions fit the degree distributions better than power-law or
log-normal models, especially at later stages in the network evolution. The
Weibull shape parameters exhibit notable stability ($k \approx 0.8$-$1.0$ for
academic, $k \approx 0.9$-$1.1$ for entertainment collaborations) despite
orders of magnitude growth in network size. While early-stage networks display
approximate power-law scaling, mature networks develop characteristic
flattening in the low-degree region that Weibull distributions appear to
capture better. In the academic network, the cutoff between the flattened
region and power-law tail shows a gradual increase from $5$ to $9$ edges over
time, while the entertainment network maintains a distinctive degree structure
that may reflect storytelling and cast-size constraints. These patterns suggest
the possibility that collaboration network evolution might be influenced more
by constraint-based growth than by pure preferential attachment or
multiplicative processes.
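For illustration, the Weibull shape and scale can be recovered from a degree sample by maximum likelihood using the standard fixed-point iteration for the shape parameter. This numpy sketch on synthetic data is not the study's exact chi-squared fitting procedure:

```python
import numpy as np

def fit_weibull_shape(x, iters=200):
    """Maximum-likelihood Weibull shape k via the standard fixed-point
    iteration 1/k = sum(x^k ln x)/sum(x^k) - mean(ln x)."""
    lx = np.log(x)
    k = 1.0
    for _ in range(iters):
        xk = x ** k
        k = 1.0 / (np.sum(xk * lx) / np.sum(xk) - lx.mean())
    scale = np.mean(x ** k) ** (1.0 / k)  # MLE scale given k
    return k, scale

# Synthetic "degree" sample via inverse-CDF sampling from Weibull(k=0.9,
# scale=5), mimicking the shape range the study reports (k ~ 0.8-1.1).
rng = np.random.default_rng(42)
u = rng.random(20_000)
degrees = 5.0 * (-np.log(1.0 - u)) ** (1.0 / 0.9)

k_hat, scale_hat = fit_weibull_shape(degrees)
```

With 20,000 samples the estimates land close to the true parameters, so shape stability across network snapshots can be checked directly from fitted k values.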
|
2502.11374
|
Leave No One Behind: Enhancing Diversity While Maintaining Accuracy in
Social Recommendation
|
cs.IR
|
Social recommendation, a branch of algorithms that utilizes social connection
information to construct recommender systems, has demonstrated its
effectiveness in enhancing recommendation accuracy. However, apart from
accuracy, the diversity of recommendations also plays a critical role in user
engagement. Unfortunately, the impact of social recommendation models on
recommendation diversity remains largely unexplored. In this study, we
investigate the dual performance of existing social recommendation algorithms
in terms of accuracy and diversity. Our empirical findings highlight a
concerning trend: social recommendation models tend to decrease diversity,
despite their accuracy improvements. To address this issue, we propose a novel
approach called Diversified Social Recommendation (DivSR), which leverages
relational knowledge distillation techniques to transfer high-diversity
structured knowledge from non-social recommendation models to social
recommendation models. DivSR is designed as a simple, model-agnostic framework
that integrates seamlessly with existing social recommendation architectures.
Experimental results on three benchmark datasets demonstrate that DivSR
significantly increases diversity without markedly compromising accuracy across
various social recommendation backbones, achieving a better accuracy-diversity
trade-off. Our code and data are publicly available at:
https://github.com/ll0ruc/DivSR
|
2502.11375
|
Robot Deformable Object Manipulation via NMPC-generated Demonstrations
in Deep Reinforcement Learning
|
cs.RO cs.LG
|
In this work, we study robotic deformable object manipulation based on
demonstration-enhanced reinforcement learning (RL). To improve
the learning efficiency of RL, we enhanced the utilization of demonstration
data from multiple aspects and proposed the HGCR-DDPG algorithm. It uses a
novel high-dimensional fuzzy approach for grasping-point selection, a refined
behavior-cloning method to enhance data-driven learning in Rainbow-DDPG, and a
sequential policy-learning strategy. Compared to the baseline algorithm
(Rainbow-DDPG), our proposed HGCR-DDPG achieved 2.01 times the global average
reward and reduced the global average standard deviation to 45% of that of the
baseline algorithm. To reduce the human labor cost of demonstration collection,
we proposed a low-cost demonstration collection method based on Nonlinear Model
Predictive Control (NMPC). Simulation experiment results show that
demonstrations collected through NMPC can be used to train HGCR-DDPG, achieving
comparable results to those obtained with human demonstrations. To validate the
feasibility of our proposed methods in real-world environments, we conducted
physical experiments involving deformable object manipulation. We manipulated
fabric to perform three tasks: diagonal folding, central axis folding, and
flattening. The experimental results demonstrate that our proposed method
achieved success rates of 83.3%, 80%, and 100% for these three tasks,
respectively, validating the effectiveness of our approach. Compared to current
large-model approaches for robot manipulation, the proposed algorithm is
lightweight, requires fewer computational resources, and offers task-specific
customization and efficient adaptability for specific tasks.
|
2502.11377
|
PrivilegedDreamer: Explicit Imagination of Privileged Information for
Rapid Adaptation of Learned Policies
|
cs.RO cs.LG
|
Numerous real-world control problems involve dynamics and objectives affected
by unobservable hidden parameters, ranging from autonomous driving to robotic
manipulation, which cause performance degradation during sim-to-real transfer.
To represent these kinds of domains, we adopt hidden-parameter Markov decision
processes (HIP-MDPs), which model sequential decision problems where hidden
variables parameterize transition and reward functions. Existing approaches,
such as domain randomization, domain adaptation, and meta-learning, simply
treat the effect of hidden parameters as additional variance and often struggle
to effectively handle HIP-MDP problems, especially when the rewards are
parameterized by hidden variables. We introduce PrivilegedDreamer, a
model-based reinforcement learning framework that extends the existing
model-based approach by incorporating an explicit parameter estimation module.
PrivilegedDreamer features its novel dual recurrent architecture that
explicitly estimates hidden parameters from limited historical data and enables
us to condition the model, actor, and critic networks on these estimated
parameters. Our empirical analysis on five diverse HIP-MDP tasks demonstrates
that PrivilegedDreamer outperforms state-of-the-art model-based, model-free,
and domain adaptation learning algorithms. Additionally, we conduct ablation
studies to justify the inclusion of each component in the proposed
architecture.
|
2502.11379
|
CCJA: Context-Coherent Jailbreak Attack for Aligned Large Language
Models
|
cs.CR cs.AI cs.CL
|
Despite explicit alignment efforts for large language models (LLMs), they can
still be exploited to trigger unintended behaviors, a phenomenon known as
"jailbreaking." Current jailbreak attack methods mainly focus on discrete
prompt manipulations targeting closed-source LLMs, relying on manually crafted
prompt templates and persuasion rules. However, as the capabilities of
open-source LLMs improve, ensuring their safety becomes increasingly crucial.
In such an environment, the accessibility of model parameters and gradient
information by potential attackers exacerbates the severity of jailbreak
threats. To address this research gap, we propose a novel
\underline{C}ontext-\underline{C}oherent \underline{J}ailbreak
\underline{A}ttack (CCJA). We define jailbreak attacks as an optimization
problem within the embedding space of masked language models. Through
combinatorial optimization, we effectively balance the jailbreak attack success
rate with semantic coherence. Extensive evaluations show that our method not
only maintains semantic consistency but also surpasses state-of-the-art
baselines in attack effectiveness. Additionally, by integrating semantically
coherent jailbreak prompts generated by our method into widely used black-box
methodologies, we observe a notable enhancement in their success rates when
targeting closed-source commercial LLMs. This highlights the security threat
posed by open-source LLMs to commercial counterparts. We will open-source our
code if the paper is accepted.
|
2502.11380
|
Exploring the Small World of Word Embeddings: A Comparative Study on
Conceptual Spaces from LLMs of Different Scales
|
cs.CL
|
A conceptual space represents concepts as nodes and semantic relatedness as
edges. Word embeddings, combined with a similarity metric, provide an effective
approach to constructing such a space. Typically, embeddings are derived from
traditional distributed models or encoder-only pretrained models, whose
objectives directly capture the meaning of the current token. In contrast,
decoder-only models, including large language models (LLMs), predict the next
token, making their embeddings less directly tied to the current token's
semantics. Moreover, comparative studies on LLMs of different scales remain
underexplored. In this paper, we construct a conceptual space using word
embeddings from LLMs of varying scales and comparatively analyze their
properties. We establish a network based on a linguistic typology-inspired
connectivity hypothesis, examine global statistical properties, and compare
LLMs of varying scales. Locally, we analyze conceptual pairs, WordNet
relations, and a cross-lingual semantic network for qualitative words. Our
results indicate that the constructed space exhibits small-world properties,
characterized by a high clustering coefficient and short path lengths. Larger
LLMs generate more intricate spaces, with longer paths reflecting richer
relational structures and connections. Furthermore, the network serves as an
efficient bridge for cross-lingual semantic mapping.
|
2502.11381
|
Without Paired Labeled Data: An End-to-End Self-Supervised Paradigm for
UAV-View Geo-Localization
|
cs.CV cs.AI
|
UAV-View Geo-Localization (UVGL) aims to ascertain the precise location of a
UAV by retrieving the most similar GPS-tagged satellite image. However,
existing methods predominantly rely on supervised learning paradigms that
necessitate annotated paired data for training, which incurs substantial
annotation costs and impedes large-scale deployment. To overcome this
limitation, we propose the Dynamic Memory-Driven and Neighborhood Information
Learning (DMNIL) network, a lightweight end-to-end self-supervised framework
for UAV-view geo-localization. The DMNIL framework utilizes a dual-path
clustering-based contrastive learning architecture as its baseline to model
intra-view structural relationships, enhancing feature consistency and
discriminability. Additionally, a dynamic memory-driven hierarchical learning
module is proposed to progressively mine local and global information,
reinforcing multi-level feature associations to improve model robustness. To
bridge the domain gap between UAV and satellite views, we design an
information-consistent evolutionary learning mechanism that systematically
explores latent correlations within intra-view neighborhoods and across
cross-view domains, ultimately constructing a unified cross-view feature
representation space. Extensive experiments on three benchmarks
(University-1652, SUES-200, and DenseUAV) demonstrate that DMNIL achieves
competitive performance against state-of-the-art supervised methods while
maintaining computational efficiency. Notably, this superiority is attained
without relying on paired training data, underscoring the framework's
practicality for real-world deployment. Code will be released soon.
|
2502.11382
|
A Physics-Informed Blur Learning Framework for Imaging Systems
|
cs.CV
|
Accurate blur estimation is essential for high-performance imaging across
various applications. Blur is typically represented by the point spread
function (PSF). In this paper, we propose a physics-informed PSF learning
framework for imaging systems, consisting of a simple calibration followed by a
learning process. Our framework achieves both high accuracy and universal
applicability. Inspired by the Seidel PSF model for representing spatially
varying PSF, we identify its limitations in optimization and introduce a novel
wavefront-based PSF model accompanied by an optimization strategy, both
reducing optimization complexity and improving estimation accuracy. Moreover,
our wavefront-based PSF model is independent of lens parameters, eliminating the
need for prior knowledge of the lens. To validate our approach, we compare it
with recent PSF estimation methods (Degradation Transfer and Fast Two-step)
through a deblurring task, where all the estimated PSFs are used to train
state-of-the-art deblurring algorithms. Our approach demonstrates improvements
in image quality in simulation and also showcases noticeable visual quality
improvements on real captured images.
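The general wavefront-to-PSF relation underlying such models can be sketched with the standard Fourier-optics computation: the PSF is the squared magnitude of the Fourier transform of the pupil function with a phase term from the wavefront error. This illustrates the general principle only, not the paper's specific wavefront parameterization:

```python
import numpy as np

def psf_from_wavefront(W, pupil, pad=4):
    """PSF from a wavefront error map W (in waves) over a pupil mask, via the
    standard Fourier-optics relation PSF = |FFT(pupil * exp(2*pi*i*W))|^2,
    normalized to unit energy. Zero-padding refines PSF sampling."""
    n = W.shape[0]
    field = np.zeros((pad * n, pad * n), dtype=complex)
    field[:n, :n] = pupil * np.exp(2j * np.pi * W)
    psf = np.abs(np.fft.fftshift(np.fft.fft2(field))) ** 2
    return psf / psf.sum()

# Circular pupil; defocus wavefront proportional to the Zernike term 2r^2 - 1.
n = 64
y, x = np.mgrid[-1:1:n * 1j, -1:1:n * 1j]
r2 = x**2 + y**2
pupil = (r2 <= 1.0).astype(float)

psf_ideal = psf_from_wavefront(np.zeros((n, n)), pupil)          # aberration-free
psf_defocus = psf_from_wavefront(0.25 * (2 * r2 - 1) * pupil, pupil)
```

A quarter-wave of defocus spreads energy away from the diffraction-limited peak, which is exactly the kind of spatially varying blur a PSF model must capture.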
|
2502.11386
|
Intelligent Mobile AI-Generated Content Services via Interactive Prompt
Engineering and Dynamic Service Provisioning
|
cs.NI cs.LG
|
Due to the massive computational demands of large generative models, AI-Generated
Content (AIGC) can organize collaborative Mobile AIGC Service Providers (MASPs)
at network edges to provide ubiquitous and customized content generation for
resource-constrained users. However, such a paradigm faces two significant
challenges: 1) raw prompts (i.e., the task description from users) often lead
to poor generation quality due to users' lack of experience with specific AIGC
models, and 2) static service provisioning fails to efficiently utilize
computational and communication resources given the heterogeneity of AIGC
tasks. To address these challenges, we propose an intelligent mobile AIGC
service scheme. Firstly, we develop an interactive prompt engineering mechanism
that leverages a Large Language Model (LLM) to generate customized prompt
corpora and employs Inverse Reinforcement Learning (IRL) for policy imitation
through small-scale expert demonstrations. Secondly, we formulate a dynamic
mobile AIGC service provisioning problem that jointly optimizes the number of
inference trials and transmission power allocation. Then, we propose the
Diffusion-Enhanced Deep Deterministic Policy Gradient (D3PG) algorithm to solve
the problem. By incorporating the diffusion process into Deep Reinforcement
Learning (DRL) architecture, the environment exploration capability can be
improved, thus adapting to varying mobile AIGC scenarios. Extensive
experimental results demonstrate that our prompt engineering approach improves
single-round generation success probability by 6.3 times, while D3PG increases
the user service experience by 67.8% compared to baseline DRL approaches.
|
2502.11387
|
RoleMRC: A Fine-Grained Composite Benchmark for Role-Playing and
Instruction-Following
|
cs.CL
|
Role-playing is important for Large Language Models (LLMs) to follow diverse
instructions while maintaining role identity and the role's pre-defined ability
limits. Existing role-playing datasets mostly contribute to controlling role
style and knowledge boundaries, but overlook role-playing in
instruction-following scenarios. We introduce a fine-grained role-playing and
instruction-following composite benchmark, named RoleMRC, including: (1)
Multi-turn dialogues between ideal roles and humans, including free chats or
discussions upon given passages; (2) Role-playing machine reading
comprehension, involving response, refusal, and attempts according to passage
answerability and role ability; (3) More complex scenarios with nested,
multi-turn and prioritized instructions. The final RoleMRC features a 10.2k
role profile meta-pool, 37.9k well-synthesized role-playing instructions, and
1.4k testing samples. We develop a pipeline to quantitatively evaluate the
fine-grained role-playing and instruction-following capabilities of several
mainstream LLMs, as well as models that are fine-tuned on our data. Moreover,
cross-evaluation on external role-playing datasets confirms that models
fine-tuned on RoleMRC enhance instruction-following without compromising
general role-playing and reasoning capabilities. We also probe the neural-level
activation maps of different capabilities over post-tuned LLMs. Our RoleMRC,
RoleMRC-mix, and code are available at: https://github.com/LuJunru/RoleMRC.
|
2502.11390
|
MARS: Mesh AutoRegressive Model for 3D Shape Detailization
|
cs.CV
|
State-of-the-art methods for mesh detailization predominantly utilize
Generative Adversarial Networks (GANs) to generate detailed meshes from coarse
ones. These methods typically learn a specific style code for each category or
similar categories without enforcing geometry supervision across different
Levels of Detail (LODs). Consequently, such methods often fail to generalize
across a broader range of categories and cannot ensure shape consistency
throughout the detailization process. In this paper, we introduce MARS, a novel
approach for 3D shape detailization. Our method capitalizes on a novel
multi-LOD, multi-category mesh representation to learn shape-consistent mesh
representations in latent space across different LODs. We further propose a
mesh autoregressive model capable of generating such latent representations
through next-LOD token prediction. This approach significantly enhances the
realism of the generated shapes. Extensive experiments conducted on the
challenging 3D Shape Detailization benchmark demonstrate that our proposed MARS
model achieves state-of-the-art performance, surpassing existing methods in
both qualitative and quantitative assessments. Notably, the model's capability
to generate fine-grained details while preserving the overall shape integrity
is particularly commendable.
|
2502.11393
|
HellaSwag-Pro: A Large-Scale Bilingual Benchmark for Evaluating the
Robustness of LLMs in Commonsense Reasoning
|
cs.CL
|
Large language models (LLMs) have shown remarkable capabilities in
commonsense reasoning; however, some variations in questions can trigger
incorrect responses. Do these models truly understand commonsense knowledge, or
just memorize expression patterns? To investigate this question, we present the
first extensive robustness evaluation of LLMs in commonsense reasoning. We
introduce HellaSwag-Pro, a large-scale bilingual benchmark consisting of 11,200
cases, constructed by designing and compiling seven types of question variants. To
construct this benchmark, we propose a two-stage method to develop Chinese
HellaSwag, a finely annotated dataset comprising 12,000 instances across 56
categories. We conduct extensive experiments on 41 representative LLMs,
revealing that these LLMs are far from robust in commonsense reasoning.
Furthermore, this robustness varies depending on the language in which the LLM
is tested. This work establishes a high-quality evaluation benchmark, with
extensive experiments offering valuable insights to the community in
commonsense reasoning for LLMs.
|
2502.11394
|
Oversmoothing as Loss of Sign: Towards Structural Balance in Graph
Neural Networks
|
cs.LG
|
Oversmoothing is a common issue in graph neural networks (GNNs), where node
representations become excessively homogeneous as the number of layers
increases, resulting in degraded performance. Various strategies have been
proposed to combat oversmoothing in practice, yet they are based on different
heuristics and lack a unified understanding of their inherent mechanisms. In
this paper, we show that three major classes of anti-oversmoothing techniques
can be mathematically interpreted as message passing over signed graphs
comprising both positive and negative edges. By analyzing the asymptotic
behavior of signed graph propagation, we demonstrate that negative edges can
repel nodes to a certain extent, providing deeper insights into how these
methods mitigate oversmoothing. Furthermore, our results suggest that the
structural balance of a signed graph, where positive edges exist only within
clusters and negative edges appear only between clusters, is crucial for
clustering node representations in the long term through signed graph
propagation. Motivated by these observations, we propose Structural Balance
Propagation (SBP), which mitigates oversmoothing with theoretical guarantees
by incorporating label and feature information to create a structurally
balanced graph for message-passing. Experiments on nine datasets against twelve
baselines demonstrate the effectiveness of our method, highlighting the value
of our signed graph perspective.
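The repelling effect of negative edges can be seen in a toy signed-graph propagation: on a structurally balanced graph, repeated normalized message passing keeps the two clusters apart, whereas the same topology with unsigned edges collapses all nodes together (oversmoothing). This is an illustration of the signed-graph view only, not the full SBP method:

```python
import numpy as np

def propagate(A, X, steps=100):
    """Symmetrically normalized message passing X <- D^{-1/2} A D^{-1/2} X,
    with degrees taken from |A| so edge signs survive normalization."""
    d_inv_sqrt = 1.0 / np.sqrt(np.abs(A).sum(axis=1))
    P = A * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    for _ in range(steps):
        X = P @ X
    return X

# Two 3-node clusters: positive edges inside each, one negative edge across
# (a structurally balanced signed graph).
tri = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], float)
A_signed = np.zeros((6, 6))
A_signed[:3, :3], A_signed[3:, 3:] = tri, tri
A_signed[2, 3] = A_signed[3, 2] = -1.0
A_unsigned = np.abs(A_signed)   # same topology, plain (unsigned) passing

X0 = np.random.default_rng(0).normal(size=(6, 2))
Xs = propagate(A_signed, X0)
Xu = propagate(A_unsigned, X0)

# Distance between the two cluster centroids after propagation.
gap = lambda X: float(np.linalg.norm(X[:3].mean(axis=0) - X[3:].mean(axis=0)))
```

Under unsigned propagation the centroid gap decays geometrically toward zero, while the single negative edge preserves a persistent separation between clusters.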
|
2502.11396
|
Maintenance of Structural Hole Spanners in Dynamic Networks
|
cs.SI
|
Structural Hole (SH) spanners are the set of users who bridge different
groups of users and are vital in numerous applications. Despite their
importance, existing work for identifying SH spanners focuses only on static
networks. However, real-world networks are highly dynamic where the underlying
structure of the network evolves continuously. Consequently, we study the SH
spanner problem for dynamic networks. We propose an efficient solution for
updating SH spanners in dynamic networks. Our solution reuses the information
obtained during the initial runs of the static algorithm and avoids the
recomputations for the nodes unaffected by the updates. Experimental results
show that the proposed solution achieves a minimum speedup of 3.24 over
recomputation. To the best of our knowledge, this is the first attempt to
address the problem of maintaining SH spanners in dynamic networks.
|
2502.11400
|
Revisiting Robust RAG: Do We Still Need Complex Robust Training in the
Era of Powerful LLMs?
|
cs.CL
|
Retrieval-augmented generation (RAG) systems often suffer from performance
degradation when encountering noisy or irrelevant documents, driving
researchers to develop sophisticated training strategies to enhance their
robustness against such retrieval noise. However, as large language models
(LLMs) continue to advance, the necessity of these complex training methods is
increasingly questioned. In this paper, we systematically investigate whether
complex robust training strategies remain necessary as model capacity grows.
Through comprehensive experiments spanning multiple model architectures and
parameter scales, we evaluate various document selection methods and
adversarial training techniques across diverse datasets. Our extensive
experiments consistently demonstrate that as models become more powerful, the
performance gains brought by complex robust training methods drop off
dramatically. We delve into the rationale and find that more powerful models
inherently exhibit superior confidence calibration, better generalization
across datasets (even when trained with randomly selected documents), and
optimal attention mechanisms learned with simpler strategies. Our findings
suggest that RAG systems can benefit from simpler architectures and training
strategies as models become more powerful, enabling more scalable applications
with minimal complexity.
|
2502.11401
|
Following the Autoregressive Nature of LLM Embeddings via Compression
and Alignment
|
cs.CL
|
A new trend uses LLMs as dense text encoders via contrastive learning.
However, since LLM embeddings predict the probability distribution of the next
token, they are inherently generative and distributive, conflicting with
contrastive learning, which requires embeddings to capture full-text semantics
and align via cosine similarity. This discrepancy hinders the full utilization
of LLMs' pre-training capabilities, resulting in inefficient learning. In
response to this issue, we propose AutoRegEmbed, a new contrastive learning
method built on embedding conditional probability distributions, which
integrates two core tasks: information compression and conditional distribution
alignment. The information compression task encodes text into the embedding
space, ensuring that the embedding vectors capture global semantics. The
conditional distribution alignment task focuses on aligning text embeddings
with positive-sample embeddings by leveraging the conditional distribution of
embeddings, while simultaneously reducing the likelihood of generating negative
samples from text embeddings, thereby achieving embedding alignment and
uniformity. Experimental results demonstrate that our method significantly
outperforms traditional contrastive learning approaches and achieves
performance comparable to state-of-the-art models when using the same amount of
data.
|
2502.11404
|
ToolCoder: A Systematic Code-Empowered Tool Learning Framework for Large
Language Models
|
cs.CL
|
Tool learning has emerged as a crucial capability for large language models
(LLMs) to solve complex real-world tasks through interaction with external
tools. Existing approaches face significant challenges, including reliance on
hand-crafted prompts, difficulty in multi-step planning, and lack of precise
error diagnosis and reflection mechanisms. We propose ToolCoder, a novel
framework that reformulates tool learning as a code generation task. Inspired
by software engineering principles, ToolCoder transforms natural language
queries into structured Python function scaffolds and systematically breaks down
tasks with descriptive comments, enabling LLMs to leverage coding paradigms for
complex reasoning and planning. It then generates and executes function
implementations to obtain final responses. Additionally, ToolCoder stores
successfully executed functions in a repository to promote code reuse, while
leveraging error traceback mechanisms for systematic debugging, optimizing both
execution efficiency and robustness. Experiments demonstrate that ToolCoder
achieves superior performance in task completion accuracy and execution
reliability compared to existing approaches, establishing the effectiveness of
code-centric approaches in tool learning.
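To make the code-as-plan idea concrete, here is a hypothetical scaffold of the kind the abstract describes; the query, function names, and tool wrappers are invented for this sketch, with stubs standing in for real tool calls:

```python
def search_capital(country: str) -> str:
    # Stub standing in for an external search tool.
    return {"France": "Paris"}.get(country, "unknown")

def get_weather(city: str) -> str:
    # Stub standing in for an external weather API.
    return {"Paris": "sunny"}.get(city, "unknown")

def answer_query(country: str) -> str:
    """Query: 'What is the weather in the capital of France?'"""
    # Step 1: resolve the capital of the given country (tool call).
    capital = search_capital(country)
    # Step 2: fetch the current weather for that city (tool call).
    weather = get_weather(capital)
    # Step 3: compose the final natural-language response.
    return f"The weather in {capital} is {weather}."

print(answer_query("France"))  # The weather in Paris is sunny.
```

The descriptive comments mark the planned sub-steps; in the framework, the LLM generates and then executes such implementations, with error tracebacks feeding back into debugging.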
|
2502.11405
|
LayAlign: Enhancing Multilingual Reasoning in Large Language Models via
Layer-Wise Adaptive Fusion and Alignment Strategy
|
cs.CL
|
Despite being pretrained on multilingual corpora, large language models
(LLMs) exhibit suboptimal performance on low-resource languages. Recent
approaches have leveraged multilingual encoders alongside LLMs by introducing
trainable parameters connecting the two models. However, these methods
typically focus on the encoder's output, overlooking valuable information from
other layers. We propose LayAlign, a framework that integrates
representations from all encoder layers, coupled with an adaptive fusion mechanism
to enable layer-wise interaction between the LLM and the multilingual encoder.
Extensive experiments on multilingual reasoning tasks, along with analyses of
learned representations, show that our approach consistently outperforms
existing baselines.
|
2502.11408
|
Precise GPS-Denied UAV Self-Positioning via Context-Enhanced Cross-View
Geo-Localization
|
cs.CV
|
Image retrieval has been employed as a robust complementary technique to
address the challenge of Unmanned Aerial Vehicles (UAVs) self-positioning.
However, most existing methods primarily focus on localizing objects captured
by UAVs through complex part-based representations, often overlooking the
unique challenges associated with UAV self-positioning, such as fine-grained
spatial discrimination requirements and dynamic scene variations. To address
the above issues, we propose the Context-Enhanced method for precise UAV
Self-Positioning (CEUSP), specifically designed for UAV self-positioning tasks.
CEUSP integrates a Dynamic Sampling Strategy (DSS) to efficiently select
optimal negative samples, while the Rubik's Cube Attention (RCA) module,
combined with the Context-Aware Channel Integration (CACI) module, enhances
feature representation and discrimination by exploiting interdimensional
interactions, inspired by the rotational mechanics of a Rubik's Cube. Extensive
experiments validate the effectiveness of the proposed method, demonstrating
notable improvements in feature representation and UAV self-positioning
accuracy within complex urban environments. Our approach achieves
state-of-the-art performance on the DenseUAV dataset, which is specifically
designed for dense urban contexts, and also delivers competitive results on the
widely recognized University-1652 benchmark.
|
2502.11410
|
Structure based SAT dataset for analysing GNN generalisation
|
cs.LG
|
Satisfiability (SAT) solvers based on techniques such as conflict driven
clause learning (CDCL) have produced excellent performance on both synthetic
and real world industrial problems. While these CDCL solvers only operate on a
per-problem basis, graph neural network (GNN) based solvers bring new benefits
to the field by allowing practitioners to exploit knowledge gained from solved
problems to expedite solving of new SAT problems. However, one specific area
that is often studied in the context of CDCL solvers, but largely overlooked in
GNN solvers, is the relationship between graph-theoretic measures of structure
in SAT problems and the generalisation ability of GNN solvers. To bridge the
gap between structural graph properties (e.g., modularity, self-similarity) and
the generalisability (or lack thereof) of GNN based SAT solvers, we present
StructureSAT: a curated dataset, along with code to further generate novel
examples, containing a diverse set of SAT problems from well known problem
domains. Furthermore, we utilise a novel splitting method that focuses on
deconstructing the families into more detailed hierarchies based on their
structural properties. With the new dataset, we aim to help explain problematic
generalisation in existing GNN SAT solvers by exploiting knowledge of
structural graph properties. We conclude with multiple future directions that
can help researchers in GNN based SAT solving develop more effective and
generalisable SAT solvers.
|
2502.11411
|
Detecting and Filtering Unsafe Training Data via Data Attribution
|
cs.LG
|
Large language models (LLMs) are vulnerable to unsafe training data: even
small amounts of unsafe data can lead to harmful model behaviors. Detecting and
filtering such unsafe training data is essential for trustworthy model
development. Current state-of-the-art (SOTA) approaches typically rely on
training moderation classifiers, which requires significant computational
overhead and is limited to predefined taxonomies, making them less adaptable
to evolving safety concerns. Moreover, these classifiers lack insight into the
training process, limiting their effectiveness in filtering unsafe data. To
address these limitations, we propose DABUF, leveraging data attribution to
detect and filter unsafe training data by attributing harmful model outputs to
influential training data points. DABUF enables flexible identification of
various unsafe data types without predefined taxonomies. However, in practice,
model outputs can be complex with combined safe linguistic features and unsafe
content, leading to reduced attribution accuracy. In such cases, DABUF will
integrate moderation classifiers to identify a minimal subset of unsafe
training data for targeted attribution (such as jailbreaks). When model outputs
are relatively straightforward, DABUF uses model outputs directly as the
attribution targets. We evaluate performance on two tasks: filtering
jailbreaking training data and identifying and mitigating gender
bias. DABUF outperforms SOTA approaches by up to 7.5\% in detection AUPRC in
jailbreaking scenarios, and 44.1\% in detecting gender bias. Moreover,
retraining on DABUF-filtered data leads to higher model safety across
experiments, underscoring its versatility in addressing a broad spectrum of
unsafe data issues.
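The final filtering step can be pictured as ranking training points by attributed influence on harmful outputs and removing the highest scorers. A minimal sketch, assuming influence scores are already computed (the attribution method itself is not reproduced here):

```python
def filter_unsafe(train_ids, influence, k):
    # Rank training points by attributed influence on harmful target
    # outputs and drop the top-k; keep the rest in their original order.
    ranked = sorted(train_ids, key=lambda i: influence[i], reverse=True)
    flagged = set(ranked[:k])
    return [i for i in train_ids if i not in flagged], sorted(flagged)

# Hypothetical influence scores for four training points.
scores = {"a": 0.1, "b": 2.3, "c": 0.05, "d": 1.7}
kept, removed = filter_unsafe(list(scores), scores, k=2)
print(kept, removed)  # ['a', 'c'] ['b', 'd']
```

Retraining would then proceed on the `kept` subset.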
|
2502.11413
|
Statistical Query Hardness of Multiclass Linear Classification with
Random Classification Noise
|
cs.LG stat.ML
|
We study the task of Multiclass Linear Classification (MLC) in the
distribution-free PAC model with Random Classification Noise (RCN).
Specifically, the learner is given a set of labeled examples $(x, y)$, where
$x$ is drawn from an unknown distribution on $R^d$ and the labels are generated
by a multiclass linear classifier corrupted with RCN. That is, the label $y$ is
flipped from $i$ to $j$ with probability $H_{ij}$ according to a known noise
matrix $H$ with non-negative separation $\sigma := \min_{i \neq j}
H_{ii}-H_{ij}$. The goal is to compute a hypothesis with small 0-1 error. For
the special case of two labels, prior work has given polynomial-time algorithms
achieving the optimal error. Surprisingly, little is known about the complexity
of this task even for three labels. As our main contribution, we show that the
complexity of MLC with RCN becomes drastically different in the presence of
three or more labels. Specifically, we prove super-polynomial Statistical Query
(SQ) lower bounds for this problem. In more detail, even for three labels and
constant separation, we give a super-polynomial lower bound on the complexity
of any SQ algorithm achieving optimal error. For a larger number of labels and
smaller separation, we show a super-polynomial SQ lower bound even for the
weaker goal of achieving any constant factor approximation to the optimal loss
or even beating the trivial hypothesis.
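The noise model can be made concrete with a small sketch (an illustration, not from the paper): rows of $H$ are the conditional distributions of the observed label given the true label, and the separation is the stated minimum diagonal margin.

```python
import random

def corrupt_labels_rcn(y, H, rng):
    # RCN: the observed label for a true label i is drawn from row H[i].
    k = len(H)
    return [rng.choices(range(k), weights=H[i])[0] for i in y]

def separation(H):
    # sigma = min over i != j of H[i][i] - H[i][j].
    k = len(H)
    return min(H[i][i] - H[i][j] for i in range(k) for j in range(k) if i != j)

# Three labels with uniform off-diagonal noise.
H = [[0.8, 0.1, 0.1],
     [0.1, 0.8, 0.1],
     [0.1, 0.1, 0.8]]
rng = random.Random(0)
y = [0, 1, 2, 0, 1, 2]
print(round(separation(H), 6))                    # 0.7
identity = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
print(corrupt_labels_rcn(y, identity, rng) == y)  # True (no noise)
```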
|
2502.11414
|
Unbiased Learning to Rank with Query-Level Click Propensity Estimation:
Beyond Pointwise Observation and Relevance
|
cs.IR
|
Most existing unbiased learning-to-rank (ULTR) approaches are based on the
user examination hypothesis, which assumes that users will click a result only
if it is both relevant and observed (typically modeled by position). However,
in real-world scenarios, users often click only one or two results after
examining multiple relevant options, due to limited patience or because their
information needs have already been satisfied. Motivated by this, we propose a
query-level click propensity model to capture the probability that users will
click on different result lists, allowing for non-zero probabilities that users
may not click on an observed relevant result. We hypothesize that this
propensity increases when more potentially relevant results are present, and
refer to this user behavior as relevance saturation bias. Our method introduces
a Dual Inverse Propensity Weighting (DualIPW) mechanism -- combining
query-level and position-level IPW -- to address both relevance saturation and
position bias. Through theoretical derivation, we prove that DualIPW can learn
an unbiased ranking model. Experiments on the real-world Baidu-ULTR dataset
demonstrate that our approach significantly outperforms state-of-the-art ULTR
baselines. The code and dataset information can be found at
https://github.com/Trustworthy-Information-Access/DualIPW.
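The core reweighting idea can be sketched in a few lines; the propensity values below are invented for illustration, and the paper's actual estimator and training objective are not reproduced:

```python
def dual_ipw_weight(click, query_propensity, position_propensity):
    # Dual inverse propensity weighting: a click is reweighted by the
    # reciprocal of the probability that the query yields any click
    # (query-level) times the probability that the position was observed
    # (position-level).
    return click / (query_propensity * position_propensity)

# Toy result list: clicks by rank with assumed propensities.
clicks = [1, 0, 1]
pos_prop = [1.0, 0.5, 0.25]  # observation probability by rank
q_prop = 0.8                 # probability this query yields any click
weights = [dual_ipw_weight(c, q_prop, p) for c, p in zip(clicks, pos_prop)]
print(weights)  # [1.25, 0.0, 5.0]
```

Deep-ranked clicks on "saturated" queries thus receive the largest corrective weight.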
|
2502.11417
|
DiSCo: Device-Server Collaborative LLM-Based Text Streaming Services
|
cs.LG cs.DC
|
The rapid rise of large language models (LLMs) in text streaming services has
introduced significant cost and Quality of Experience (QoE) challenges in
serving millions of daily requests, especially in meeting Time-To-First-Token
(TTFT) and Time-Between-Token (TBT) requirements for real-time interactions.
Our real-world measurements show that both server-based and on-device
deployments struggle to meet diverse QoE demands: server deployments face high
costs and last-hop issues (e.g., Internet latency and dynamics), while
on-device LLM inference is constrained by resources.
We introduce DiSCo, a device-server cooperative scheduler designed to
optimize users' QoE by adaptively routing requests and migrating response
generation between endpoints while maintaining cost constraints. DiSCo employs
cost-aware scheduling, leveraging the predictable speed of on-device LLM
inference with the flexible capacity of server-based inference to dispatch
requests on the fly, while introducing a token-level migration mechanism to
ensure consistent token delivery during migration. Evaluations on real-world
workloads -- including commercial services like OpenAI GPT and DeepSeek, and
open-source deployments such as LLaMA3 -- show that DiSCo can improve users'
QoE by reducing tail TTFT (11-52\%) and mean TTFT (6-78\%) across different
model-device configurations, while dramatically reducing serving costs by up to
84\% through its migration mechanism at comparable QoE levels.
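A toy version of the cost-aware routing decision (inputs and thresholds invented; the real scheduler also handles migration and token-level delivery):

```python
def dispatch(device_ttft_est, server_ttft_est, server_cost, budget_left):
    # Route to the device when the remaining budget cannot cover a server
    # request, or when on-device inference is expected to be no slower;
    # otherwise pay for the server's faster first token.
    if budget_left < server_cost or device_ttft_est <= server_ttft_est:
        return "device"
    return "server"

print(dispatch(0.9, 0.3, 1.0, 10.0))  # server (faster and affordable)
print(dispatch(0.9, 0.3, 1.0, 0.5))   # device (budget exhausted)
print(dispatch(0.2, 0.3, 1.0, 10.0))  # device (already faster locally)
```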
|
2502.11418
|
TimeCAP: Learning to Contextualize, Augment, and Predict Time Series
Events with Large Language Model Agents
|
cs.AI cs.LG
|
Time series data is essential in various applications, including climate
modeling, healthcare monitoring, and financial analytics. Understanding the
contextual information associated with real-world time series data is often
essential for accurate and reliable event predictions. In this paper, we
introduce TimeCAP, a time-series processing framework that creatively employs
Large Language Models (LLMs) as contextualizers of time series data, extending
their typical usage as predictors. TimeCAP incorporates two independent LLM
agents: one generates a textual summary capturing the context of the time
series, while the other uses this enriched summary to make more informed
predictions. In addition, TimeCAP employs a multi-modal encoder that synergizes
with the LLM agents, enhancing predictive performance through mutual
augmentation of inputs with in-context examples. Experimental results on
real-world datasets demonstrate that TimeCAP outperforms state-of-the-art
methods for time series event prediction, including those utilizing LLMs as
predictors, achieving an average improvement of 28.75% in F1 score.
|
2502.11419
|
InsBank: Evolving Instruction Subset for Ongoing Alignment
|
cs.CL
|
Large language models (LLMs) typically undergo instruction tuning to enhance
alignment. Recent studies emphasize that quality and diversity of instruction
data are more crucial than quantity, highlighting the need to select diverse,
high-quality subsets to reduce training costs. However, how to evolve these
selected subsets alongside the development of new instruction data remains
insufficiently explored. To achieve LLMs' ongoing alignment, we introduce
Instruction Bank (InsBank), a continuously updated repository that integrates
the latest valuable instruction data. We further propose Progressive
Instruction Bank Evolution (PIBE), a novel framework designed to evolve InsBank
effectively and efficiently over time. PIBE employs a gradual data selection
strategy to maintain long-term efficiency, leveraging a representation-based
diversity score to capture relationships between data points and retain
historical information for comprehensive diversity evaluation. This also allows
for flexible combination of diversity and quality scores during data selection
and ranking. Extensive experiments demonstrate that PIBE significantly
outperforms baselines in InsBank evolution and is able to extract
budget-specific subsets, demonstrating its effectiveness and adaptability.
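One simple way to combine a quality score with a representation-based diversity score is greedy selection, adding the point that best trades off quality against distance to the already-selected set. This is an illustration only; PIBE's actual diversity score and evolution strategy are more involved:

```python
import math

def greedy_select(embeddings, quality, k, alpha=0.5):
    # Greedily add the point maximizing alpha * quality plus
    # (1 - alpha) * distance to the nearest already-selected point.
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    selected = [max(range(len(embeddings)), key=lambda i: quality[i])]
    while len(selected) < k:
        remaining = [i for i in range(len(embeddings)) if i not in selected]
        def combined(i):
            d = min(dist(embeddings[i], embeddings[j]) for j in selected)
            return alpha * quality[i] + (1 - alpha) * d
        selected.append(max(remaining, key=combined))
    return selected

# Toy 2-D "instruction embeddings" with invented quality scores.
emb = [(0.0, 0.0), (0.0, 0.1), (5.0, 5.0), (1.0, 1.0)]
qual = [1.0, 0.9, 0.2, 0.5]
print(greedy_select(emb, qual, k=2))  # [0, 2]: best quality, then most distant
```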
|
2502.11420
|
Training-Free Guidance Beyond Differentiability: Scalable Path Steering
with Tree Search in Diffusion and Flow Models
|
cs.LG
|
Training-free guidance enables controlled generation in diffusion and flow
models, but most existing methods assume differentiable objectives and rely on
gradients. This work focuses on training-free guidance addressing challenges
from non-differentiable objectives and discrete data distributions. We propose
an algorithmic framework TreeG: Tree Search-Based Path Steering Guidance,
applicable to both continuous and discrete settings in diffusion and flow
models. TreeG offers a unified perspective on training-free guidance: proposing
candidates for the next step, evaluating candidates, and selecting the best to
move forward, enhanced by a tree search mechanism over active paths and
parallelized exploration. We comprehensively investigate the design space of
TreeG over the candidate proposal module and the evaluation function,
instantiating TreeG into three novel algorithms. Our experiments show that
TreeG consistently outperforms the top guidance baselines in symbolic music
generation, small molecule generation, and enhancer DNA design, all of which
involve non-differentiable challenges. Additionally, we identify an
inference-time scaling law showing TreeG's scalability in inference-time
computation.
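The propose-evaluate-select loop with a beam of active paths can be sketched generically; the toy bit-string objective below stands in for the non-differentiable targets in the paper, and this is not the diffusion/flow instantiation itself:

```python
import random

def path_steering(init, propose, score, steps, branch, beam, rng):
    # At each step expand every active path into `branch` candidates,
    # score them with a possibly non-differentiable objective, and keep
    # the top `beam` paths to move forward.
    paths = [init]
    for _ in range(steps):
        candidates = [propose(s, rng) for s in paths for _ in range(branch)]
        candidates.sort(key=score, reverse=True)
        paths = candidates[:beam]
    return max(paths, key=score)

# Toy discrete task: steer an 8-bit string toward all ones.
rng = random.Random(0)
flip = lambda s, r: [b ^ (r.random() < 0.3) for b in s]  # random bit flips
best = path_steering([0] * 8, flip, sum, steps=10, branch=4, beam=3, rng=rng)
print(sum(best))
```

No gradients are used anywhere: the objective is only ever evaluated on candidates.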
|
2502.11422
|
Planning of Heuristics: Strategic Planning on Large Language Models with
Monte Carlo Tree Search for Automating Heuristic Optimization
|
cs.AI
|
Heuristics have achieved great success in solving combinatorial optimization
problems (COPs). However, heuristics designed by humans require too much domain
knowledge and testing time. Large Language Models (LLMs), with their strong
capabilities to understand and generate content and knowledge bases covering
various domains, offer a novel way to automatically
optimize heuristics. We therefore propose Planning of Heuristics (PoH), an
optimization method that integrates the self-reflection of LLMs with the Monte
Carlo Tree Search (MCTS), a well-known planning algorithm. PoH iteratively
refines generated heuristics by evaluating their performance and providing
improvement suggestions. Our method iteratively evaluates the
generated heuristics (states) and improves them based on improvement
suggestions (actions) and evaluation results (rewards), effectively
simulating future states to search for paths with higher rewards. In this
paper, we apply PoH to solve the Traveling Salesman Problem (TSP) and the Flow
Shop Scheduling Problem (FSSP). The experimental results show that PoH
outperforms hand-crafted heuristics and Automatic Heuristic Design (AHD)
by other LLM-based methods, achieving significant improvements and
state-of-the-art performance in automating heuristic
optimization with LLMs to solve COPs.
|
2502.11423
|
Exploring Persona Sentiment Sensitivity in Personalized Dialogue
Generation
|
cs.CL
|
Personalized dialogue systems have advanced considerably with the integration
of user-specific personas into large language models (LLMs). However, while
LLMs can effectively generate personalized responses, the influence of persona
sentiment on dialogue quality remains underexplored. In this work, we conduct a
large-scale analysis of dialogues generated using a range of polarized user
profiles. Our experiments reveal that dialogues involving negatively polarized
users tend to overemphasize persona attributes, leading to increased entailment
and contradiction instances and lower overall coherence. In contrast,
positively polarized profiles yield dialogues that selectively incorporate
persona information, resulting in smoother and more coherent interactions.
Furthermore, we find that personas with weak or neutral sentiment generally
produce lower-quality dialogues. Motivated by these findings, we propose a
dialogue generation approach that explicitly accounts for persona polarity by
combining a turn-based generation strategy with a profile ordering mechanism.
Our study provides new insights into the sensitivity of LLMs to persona
sentiment and offers guidance for developing more robust and nuanced
personalized dialogue systems.
|
2502.11425
|
Counterfactual-Consistency Prompting for Relative Temporal Understanding
in Large Language Models
|
cs.CL cs.AI
|
Despite the advanced capabilities of large language models (LLMs), their
temporal reasoning ability remains underdeveloped. Prior works have highlighted
this limitation, particularly in maintaining temporal consistency when
understanding events. For example, models often confuse mutually exclusive
temporal relations like ``before'' and ``after'' between events and make
inconsistent predictions. In this work, we tackle the issue of temporal
inconsistency in LLMs by proposing a novel counterfactual prompting approach.
Our method generates counterfactual questions and enforces collective
constraints, enhancing the model's consistency. We evaluate our method on
multiple datasets, demonstrating significant improvements in event ordering for
explicit and implicit events and temporal commonsense understanding by
effectively addressing temporal inconsistencies.
|
2502.11426
|
Verti-Bench: A General and Scalable Off-Road Mobility Benchmark for
Vertically Challenging Terrain
|
cs.RO
|
Recent advancements in off-road autonomy have shown promise in deploying
autonomous mobile robots in outdoor off-road environments. Encouraging results
have been reported from both simulated and real-world experiments. However,
unlike evaluating off-road perception tasks on static datasets, benchmarking
off-road mobility still faces significant challenges due to a variety of
factors, including variations in vehicle platforms and terrain properties.
Furthermore, different vehicle-terrain interactions need to be unfolded during
mobility evaluation, which requires the mobility systems to interact with the
environments instead of comparing against a pre-collected dataset. In this
paper, we present Verti-Bench, a mobility benchmark that focuses on extremely
rugged, vertically challenging off-road environments. 100 unique off-road
environments and 1000 distinct navigation tasks with millions of off-road
terrain properties, including a variety of geometry and semantics, rigid and
deformable surfaces, and large natural obstacles, provide standardized and
objective evaluation in high-fidelity multi-physics simulation. Verti-Bench is
also scalable to various vehicle platforms with different scales and actuation
mechanisms. We also provide datasets from expert demonstration, random
exploration, failure cases (rolling over and getting stuck), as well as a
gym-like interface for reinforcement learning. We use Verti-Bench to benchmark
ten off-road mobility systems, present our findings, and identify future
off-road mobility research directions.
|
2502.11427
|
Do we Really Need Visual Instructions? Towards Visual Instruction-Free
Fine-tuning for Large Vision-Language Models
|
cs.CL cs.CV
|
Visual instruction tuning has become the predominant technology in eliciting
the multimodal task-solving capabilities of large vision-language models
(LVLMs). Despite this success, because visual instructions require images as
input, they leave a gap in inheriting the task-solving capabilities of
the backbone LLMs and make it costly to collect a large-scale dataset. To
address it, we propose ViFT, a visual instruction-free fine-tuning framework
for LVLMs. In ViFT, we only require the text-only instructions and image
caption data during training, to separately learn the task-solving and visual
perception abilities. During inference, we extract and combine the
representations of the text and image inputs, for fusing the two abilities to
fulfill multimodal tasks. Experimental results demonstrate that ViFT can
achieve state-of-the-art performance on several visual reasoning and visual
instruction-following benchmarks, with considerably less training data. Our code and
data will be publicly released.
|
2502.11429
|
What's in a Query: Polarity-Aware Distribution-Based Fair Ranking
|
cs.LG cs.CY
|
Machine learning-driven rankings, where individuals (or items) are ranked in
response to a query, mediate search exposure or attention in a variety of
safety-critical settings. Thus, it is important to ensure that such rankings
are fair. Under the goal of equal opportunity, attention allocated to an
individual on a ranking interface should be proportional to their relevance
across search queries. In this work, we examine amortized fair ranking -- where
relevance and attention are cumulated over a sequence of user queries to make
fair ranking more feasible in practice. Unlike prior methods that operate on
expected amortized attention for each individual, we define new
divergence-based measures for attention distribution-based fairness in ranking
(DistFaiR), characterizing unfairness as the divergence between the
distribution of attention and relevance corresponding to an individual over
time. This allows us to propose new definitions of unfairness, which are more
reliable at test time. Second, we prove that group fairness is upper-bounded by
individual fairness under this definition for a useful class of divergence
measures, and experimentally show that maximizing individual fairness through
an integer linear programming-based optimization is often beneficial to group
fairness. Lastly, we find that prior research in amortized fair ranking ignores
critical information about queries, potentially leading to a fairwashing risk
in practice by making rankings appear more fair than they actually are.
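One member of the divergence class can be sketched directly: accumulate an individual's attention and relevance over queries, normalize both, and measure their divergence (KL here, purely as an illustration; the paper covers a broader class of measures):

```python
import math

def distfair_unfairness(attention, relevance, eps=1e-12):
    # Unfairness as the KL divergence between the normalized attention
    # distribution and the normalized relevance distribution accumulated
    # over a sequence of queries.
    za, zr = sum(attention), sum(relevance)
    a = [x / za for x in attention]
    r = [x / zr for x in relevance]
    return sum(ai * math.log((ai + eps) / (ri + eps)) for ai, ri in zip(a, r))

# Attention exactly proportional to relevance -> zero divergence.
print(round(distfair_unfairness([2, 4, 4], [1, 2, 2]), 6))  # 0.0
# Attention concentrated away from relevance -> positive unfairness.
print(distfair_unfairness([9, 1, 0], [1, 2, 2]) > 0)        # True
```

Unlike expectation-based measures, mismatched distributions are penalized even when totals happen to agree.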
|
2502.11431
|
Any Information Is Just Worth One Single Screenshot: Unifying Search
With Visualized Information Retrieval
|
cs.CL
|
With the popularity of multimodal techniques, there is growing interest in
acquiring useful information in visual forms. In this work, we formally define
an emerging IR paradigm called \textit{Visualized Information Retrieval}, or
\textbf{Vis-IR}, where multimodal information, such as texts, images, tables
and charts, is jointly represented by a unified visual format called
\textbf{Screenshots}, for various retrieval applications. We further make three
key contributions for Vis-IR. First, we create \textbf{VIRA} (Vis-IR
Aggregation), a large-scale dataset comprising a vast collection of screenshots
from diverse sources, carefully curated into captioned and question-answer
formats. Second, we develop \textbf{UniSE} (Universal Screenshot Embeddings), a
family of retrieval models that enable screenshots to query or be queried
across arbitrary data modalities. Finally, we construct \textbf{MVRB} (Massive
Visualized IR Benchmark), a comprehensive benchmark covering a variety of task
forms and application scenarios. Through extensive evaluations on MVRB, we
highlight the deficiencies of existing multimodal retrievers and the
substantial improvements made by UniSE. Our work will be shared with the
community, laying a solid foundation for this emerging field.
|
2502.11433
|
FLAG-Trader: Fusion LLM-Agent with Gradient-based Reinforcement Learning
for Financial Trading
|
cs.AI cs.CE q-fin.TR
|
Large language models (LLMs) fine-tuned on multimodal financial data have
demonstrated impressive reasoning capabilities in various financial tasks.
However, they often struggle with multi-step, goal-oriented scenarios in
interactive financial markets, such as trading, where complex agentic
approaches are required to improve decision-making. To address this, we propose
\textsc{FLAG-Trader}, a unified architecture integrating linguistic processing
(via LLMs) with gradient-driven reinforcement learning (RL) policy
optimization, in which a partially fine-tuned LLM acts as the policy network,
leveraging pre-trained knowledge while adapting to the financial domain through
parameter-efficient fine-tuning. Through policy gradient optimization driven by
trading rewards, our framework not only enhances LLM performance in trading but
also improves results on other financial-domain tasks. We present extensive
empirical evidence to validate these enhancements.
|
2502.11435
|
SMART: Self-Aware Agent for Tool Overuse Mitigation
|
cs.AI cs.CL cs.LG
|
Current Large Language Model (LLM) agents demonstrate strong reasoning and
tool use capabilities, but often lack self-awareness, failing to balance these
approaches effectively. This imbalance leads to Tool Overuse, where models
unnecessarily rely on external tools for tasks solvable with parametric
knowledge, increasing computational overhead. Inspired by human metacognition,
we introduce SMART (Strategic Model-Aware Reasoning with Tools), a paradigm
that enhances an agent's self-awareness to optimize task handling and reduce
tool overuse. To support this paradigm, we introduce SMART-ER, a dataset
spanning three domains, where reasoning alternates between parametric knowledge
and tool-dependent steps, with each step enriched by rationales explaining when
tools are necessary. Through supervised training, we develop SMARTAgent, a
family of models that dynamically balance parametric knowledge and tool use.
Evaluations show that SMARTAgent reduces tool use by 24% while improving
performance by over 37%, enabling 7B-scale models to match their 70B counterparts
and GPT-4o. Additionally, SMARTAgent generalizes to out-of-distribution test
data like GSM8K and MINTQA, maintaining accuracy with just one-fifth the tool
calls. These results highlight the potential of strategic tool use to enhance
reasoning, mitigate overuse, and bridge the gap between model size and
performance, advancing intelligent and resource-efficient agent designs.
|
2502.11436
|
ADO: Automatic Data Optimization for Inputs in LLM Prompts
|
cs.LG
|
This study explores a novel approach to enhance the performance of Large
Language Models (LLMs) through the optimization of input data within prompts.
While previous research has primarily focused on refining instruction
components and augmenting input data with in-context examples, our work
investigates the potential benefits of optimizing the input data itself. We
introduce a two-pronged strategy for input data optimization: content
engineering and structural reformulation. Content engineering involves imputing
missing values, removing irrelevant attributes, and enriching profiles by
generating additional information inferred from existing attributes. Subsequent
to content engineering, structural reformulation is applied to optimize the
presentation of the modified content to LLMs, given their sensitivity to input
format. Our findings suggest that these optimizations can significantly improve
the performance of LLMs in various tasks, offering a promising avenue for
future research in prompt engineering. The source code is available at
https://anonymous.4open.science/r/ADO-6BC5/
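The two-pronged strategy can be sketched on a toy profile; the attribute names, defaults, and rendering format are invented, and the real system infers enrichments with an LLM rather than a lookup table:

```python
def content_engineering(profile, relevant, defaults):
    # Keep only relevant attributes and impute missing values.
    # (Toy behavior: any falsy value is treated as missing.)
    return {k: profile.get(k) or defaults.get(k, "unknown") for k in relevant}

def structural_reformulation(profile):
    # Render the cleaned profile in a consistent "key: value" layout,
    # since LLMs are sensitive to input format.
    return "\n".join(f"{k}: {v}" for k, v in sorted(profile.items()))

raw = {"age": 34, "city": None, "ad_id": "x91"}  # ad_id is irrelevant
clean = content_engineering(raw, ["age", "city"], {"city": "unknown"})
print(structural_reformulation(clean))
# age: 34
# city: unknown
```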
|
2502.11437
|
Learning Dexterous Bimanual Catch Skills through Adversarial-Cooperative
Heterogeneous-Agent Reinforcement Learning
|
cs.RO cs.AI
|
Robotic catching has traditionally focused on single-handed systems, which
are limited in their ability to handle larger or more complex objects. In
contrast, bimanual catching offers significant potential for improved dexterity
and object handling but introduces new challenges in coordination and control.
In this paper, we propose a novel framework for learning dexterous bimanual
catching skills using Heterogeneous-Agent Reinforcement Learning (HARL). Our
approach introduces an adversarial reward scheme, where a throw agent increases
the difficulty of throws by adjusting speed, while a catch agent learns to
coordinate both hands to catch objects under these evolving conditions. We
evaluate the framework in simulated environments using 15 different objects,
demonstrating robustness and versatility in handling diverse objects. Our
method achieved approximately a 2x increase in catching reward compared to
single-agent baselines across 15 diverse objects.
|
2502.11438
|
SAFE-SQL: Self-Augmented In-Context Learning with Fine-grained Example
Selection for Text-to-SQL
|
cs.CL
|
Text-to-SQL aims to convert natural language questions into executable SQL
queries. While previous approaches, such as skeleton-masked selection, have
demonstrated strong performance by retrieving similar training examples to
guide large language models (LLMs), they struggle in real-world scenarios where
such examples are unavailable. To overcome this limitation, we propose
Self-Augmentation in-context learning with Fine-grained Example selection for
Text-to-SQL (SAFE-SQL), a novel framework that improves SQL generation by
generating and filtering self-augmented examples. SAFE-SQL first prompts an LLM
to generate multiple Text-to-SQL examples relevant to the test input. Then
SAFE-SQL filters these examples through three relevance assessments,
constructing high-quality in-context learning examples. Using self-generated
examples, SAFE-SQL surpasses previous zero-shot and few-shot Text-to-SQL
frameworks, achieving higher execution accuracy. Notably, our approach provides
additional performance gains in extra hard and unseen scenarios, where
conventional methods often fail.
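The generate-then-filter loop can be sketched as follows. Here `generate_candidates` is a stub standing in for the LLM call, and the single `score` field is a placeholder for the paper's three relevance assessments:

```python
def generate_candidates(question: str) -> list[dict]:
    """Stub standing in for the LLM that drafts (question, SQL) examples."""
    return [
        {"question": "How many users signed up in 2020?",
         "sql": "SELECT COUNT(*) FROM users WHERE year = 2020", "score": 0.9},
        {"question": "List all products",
         "sql": "SELECT * FROM products", "score": 0.4},
    ]

def filter_examples(candidates: list[dict], threshold: float = 0.5) -> list[dict]:
    """Keep only self-generated examples whose relevance clears the bar.

    A pre-computed `score` stands in for the combined judgment of the
    three fine-grained relevance assessments described in the abstract.
    """
    return [c for c in candidates if c["score"] >= threshold]

selected = filter_examples(generate_candidates("user signups in 2020"))
```

The surviving examples are then placed in the prompt as in-context demonstrations for the actual Text-to-SQL query.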
|
2502.11439
|
An Efficient Row-Based Sparse Fine-Tuning
|
cs.CL cs.AI cs.LG
|
Fine-tuning is an important step in adapting foundation models such as large
language models to downstream tasks. To make this step more accessible to users
with limited computational budgets, it is crucial to develop fine-tuning
methods that are memory and computationally efficient. Sparse Fine-tuning (SFT)
and Low-rank adaptation (LoRA) are two frameworks that have emerged for
addressing this problem and have been adopted widely in practice. In this work,
we develop a new SFT framework, based on ideas from neural network pruning. At
a high level, we first identify "important" neurons/nodes using feature
importance metrics from network pruning (specifically, we use the structural
pruning method), and then perform fine-tuning by restricting to weights
involving these neurons. Using experiments on common language tasks, we
demonstrate that our method significantly improves the memory efficiency of SFT
without increasing training time or implementation complexity,
while achieving accuracy comparable to state-of-the-art methods such as LoRA
and its variants.
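The select-then-restrict idea can be sketched with a row mask over a weight matrix. Row L2 norm below is an illustrative importance score, not necessarily the structural-pruning metric the paper uses:

```python
import numpy as np

def trainable_row_mask(weight: np.ndarray, keep_frac: float = 0.25) -> np.ndarray:
    """Mark the rows (output neurons) with the largest L2 norm as trainable.

    Fine-tuning then updates only the weights of the selected neurons,
    which shrinks the optimizer state and gradient memory footprint.
    """
    k = max(1, int(keep_frac * weight.shape[0]))
    scores = np.linalg.norm(weight, axis=1)        # one score per neuron
    top = np.argsort(scores)[-k:]                  # indices of top-k neurons
    mask = np.zeros(weight.shape[0], dtype=bool)
    mask[top] = True
    return mask

# Deterministic toy weights: rows 2 and 6 are clearly the most important.
W = np.ones((8, 4))
W[2] = 5.0
W[6] = 3.0
mask = trainable_row_mask(W, keep_frac=0.25)
```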
|
2502.11440
|
Medical Image Registration Meets Vision Foundation Model: Prototype
Learning and Contour Awareness
|
cs.CV
|
Medical image registration is a fundamental task in medical image analysis,
aiming to establish spatial correspondences between paired images. However,
existing unsupervised deformable registration methods rely solely on
intensity-based similarity metrics, lacking explicit anatomical knowledge,
which limits their accuracy and robustness. Vision foundation models, such as
the Segment Anything Model (SAM), can generate high-quality segmentation masks
that provide explicit anatomical structure knowledge, addressing the
limitations of traditional methods that depend only on intensity similarity.
Based on this, we propose a novel SAM-assisted registration framework
incorporating prototype learning and contour awareness. The framework includes:
(1) Explicit anatomical information injection, where SAM-generated segmentation
masks are used as auxiliary inputs throughout training and testing to ensure
the consistency of anatomical information; (2) Prototype learning, which
leverages segmentation masks to extract prototype features and aligns
prototypes to optimize semantic correspondences between images; and (3)
Contour-aware loss, which leverages the edges of segmentation masks to improve
the model's performance on fine-grained deformation fields. Extensive
experiments demonstrate that the proposed
framework significantly outperforms existing methods across multiple datasets,
particularly in challenging scenarios with complex anatomical structures and
ambiguous boundaries. Our code is available at
https://github.com/HaoXu0507/IPMI25-SAM-Assisted-Registration.
|
2502.11441
|
Which Retain Set Matters for LLM Unlearning? A Case Study on Entity
Unlearning
|
cs.CL
|
Large language models (LLMs) risk retaining unauthorized or sensitive
information from their training data, which raises privacy concerns. LLM
unlearning seeks to mitigate these risks by selectively removing specified data
while maintaining overall model performance. However, most existing work focuses
on methods to achieve effective forgetting and does not provide a detailed
analysis of the retain set, the portion of training data that is not targeted
for removal. In this paper, we investigate the effects of unlearning on various
subsets of the retain set through a case study on entity unlearning. We
introduce the Syntactically Similar Neighbor Set, a group of queries that share
similar syntactic structures with the data targeted for removal, and show that
this subset suffers the greatest performance drop during unlearning. Moreover,
when used for regularization, this set not only preserves performance on
syntactically similar queries but also delivers comparable or improved results
across other data subsets. Our results highlight that syntactic similarity is a
critical factor, potentially more so than domain or entity relationships, in
achieving effective and practical LLM unlearning.
|
2502.11442
|
Multi-Turn Multi-Modal Question Clarification for Enhanced
Conversational Understanding
|
cs.IR cs.AI cs.CL cs.LG
|
Conversational query clarification enables users to refine their search
queries through interactive dialogue, improving search effectiveness.
Traditional approaches rely on text-based clarifying questions, which often
fail to capture complex user preferences, particularly those involving visual
attributes. While recent work has explored single-turn multi-modal
clarification with images alongside text, such methods do not fully support the
progressive nature of user intent refinement over multiple turns. Motivated by
this, we introduce the Multi-turn Multi-modal Clarifying Questions (MMCQ) task,
which combines text and visual modalities to refine user queries in a
multi-turn conversation. To facilitate this task, we create a large-scale
dataset named ClariMM comprising over 13k multi-turn interactions and 33k
question-answer pairs containing multi-modal clarifying questions. We propose
Mario, a retrieval framework that employs a two-phase ranking strategy: initial
retrieval with BM25, followed by a multi-modal generative re-ranking model that
integrates textual and visual information from conversational history. Our
experiments show that multi-turn multi-modal clarification outperforms
uni-modal and single-turn approaches, improving MRR by 12.88%. The gains are
most significant in longer interactions, demonstrating the value of progressive
refinement for complex queries.
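The first phase of the two-phase ranking (BM25 retrieval) is a standard algorithm and can be sketched directly; the toy corpus below is illustrative:

```python
import math
from collections import Counter

def bm25_scores(query: list[str], docs: list[list[str]],
                k1: float = 1.5, b: float = 0.75) -> list[float]:
    """Score each tokenized document against the query with Okapi BM25."""
    N = len(docs)
    avgdl = sum(len(d) for d in docs) / N
    df = Counter(t for d in docs for t in set(d))   # document frequency per term
    scores = []
    for d in docs:
        tf = Counter(d)
        s = 0.0
        for t in query:
            idf = math.log((N - df[t] + 0.5) / (df[t] + 0.5) + 1)
            s += idf * tf[t] * (k1 + 1) / (
                tf[t] + k1 * (1 - b + b * len(d) / avgdl))
        scores.append(s)
    return scores

docs = [["red", "dress", "with", "floral", "print"],
        ["blue", "jeans"],
        ["red", "shoes"]]
scores = bm25_scores(["red", "dress"], docs)
```

The multi-modal generative re-ranker then reorders only the BM25 shortlist, which keeps the expensive model off the full collection.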
|
2502.11444
|
Does RAG Really Perform Bad For Long-Context Processing?
|
cs.CL
|
The efficient processing of long context poses a serious challenge for large
language models (LLMs). Recently, retrieval-augmented generation (RAG) has
emerged as a promising strategy for this problem, as it enables LLMs to make
selective use of the long context for efficient computation. However, existing
RAG approaches lag behind other long-context processing methods due to their
inherent limitations: inaccurate retrieval and fragmented contexts. To address these
challenges, we introduce RetroLM, a novel RAG framework for long-context
processing. Unlike traditional methods, RetroLM employs KV-level retrieval
augmentation, where it partitions the LLM's KV cache into contiguous pages and
retrieves the most crucial ones for efficient computation. This approach
enhances robustness to retrieval inaccuracy, facilitates effective utilization
of fragmented contexts, and saves the cost of repeated computation. Building
on this framework, we further develop a specialized retriever for precise
retrieval of critical pages and conduct unsupervised post-training to optimize
the model's ability to leverage retrieved information. We conduct comprehensive
evaluations with a variety of benchmarks, including LongBench, InfiniteBench,
and RULER, where RetroLM significantly outperforms existing long-context LLMs
and efficient long-context processing methods, particularly in tasks requiring
intensive reasoning or extremely long-context comprehension.
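KV-level page retrieval can be sketched as follows. The mean dot-product page score is a hand-rolled stand-in for the paper's specialized, trained retriever:

```python
import numpy as np

def top_pages(keys: np.ndarray, query: np.ndarray,
              page_size: int, k: int) -> list[int]:
    """Split cached keys into contiguous pages and return the indices of
    the k pages that match the query best.

    Scoring a page by the mean dot-product of its keys with the query is
    an illustrative heuristic; attention then runs only over the
    retrieved pages instead of the full cache.
    """
    n_pages = keys.shape[0] // page_size
    pages = keys[: n_pages * page_size].reshape(n_pages, page_size, -1)
    scores = (pages @ query).mean(axis=1)          # one score per page
    return sorted(np.argsort(scores)[-k:].tolist())

# Toy cache: 4 pages of 2 keys each; page 2 is aligned with the query.
keys = np.zeros((8, 3))
keys[4:6] = np.array([1.0, 0.0, 0.0])
query = np.array([1.0, 0.0, 0.0])
selected = top_pages(keys, query, page_size=2, k=1)
```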
|
2502.11447
|
Does Editing Provide Evidence for Localization?
|
cs.LG cs.AI
|
A basic aspiration for interpretability research in large language models is
to "localize" semantically meaningful behaviors to particular components within
the LLM. There are various heuristics for finding candidate locations within
the LLM. Once a candidate localization is found, it can be assessed by editing
the internal representations at the corresponding localization and checking
whether this induces model behavior that is consistent with the semantic
interpretation of the localization. The question we address here is: how strong
is the evidence provided by such edits? To evaluate the localization claim, we
want to assess the effect of the optimal intervention at a particular location.
The key new technical tool is a way of adapting LLM alignment techniques to
find such optimal localized edits. With this tool in hand, we give an example
where the edit-based evidence for localization appears strong, but where
localization clearly fails. Indeed, we find that optimal edits at random
localizations can be as effective as aligning the full model. In aggregate, our
results suggest that merely observing that localized edits induce targeted
changes in behavior provides little to no evidence that these locations
actually encode the target behavior.
|
2502.11448
|
AGrail: A Lifelong Agent Guardrail with Effective and Adaptive Safety
Detection
|
cs.AI
|
The rapid advancements in Large Language Models (LLMs) have enabled their
deployment as autonomous agents for handling complex tasks in dynamic
environments. These LLMs demonstrate strong problem-solving capabilities and
adaptability to multifaceted scenarios. However, their use as agents also
introduces significant risks, including task-specific risks, which are
identified by the agent administrator based on the specific task requirements
and constraints, and systemic risks, which stem from vulnerabilities in their
design or interactions, potentially compromising confidentiality, integrity, or
availability (CIA) of information and triggering security risks. Existing
defense mechanisms fail to adaptively and effectively mitigate these risks. In
this paper, we propose AGrail, a lifelong agent guardrail to enhance LLM agent
safety, which features adaptive safety check generation, effective safety check
optimization, and tool compatibility and flexibility. Extensive experiments
demonstrate that AGrail not only achieves strong performance against
task-specific and system risks but also exhibits transferability across
different LLM agents' tasks.
|
2502.11449
|
Tractable General Equilibrium
|
cs.GT cs.CE econ.TH
|
We study Walrasian economies (or general equilibrium models) and their
solution concept, the Walrasian equilibrium. A key challenge in this domain is
identifying price-adjustment processes that converge to equilibrium. One such
process, tâtonnement, is an auction-like algorithm first proposed in 1874 by
Léon Walras. While continuous-time variants of tâtonnement are known to
converge to equilibrium in economies satisfying the Weak Axiom of Revealed
Preferences (WARP), the process fails to converge in a pathological Walrasian
economy known as the Scarf economy. To address these issues, we analyze
Walrasian economies using variational inequalities (VIs), an optimization
framework. We introduce the class of mirror extragradient algorithms, which,
under suitable Lipschitz-continuity-like assumptions, converge to a solution of
any VI satisfying the Minty condition in polynomial time. We show that the set
of Walrasian equilibria of any balanced economy (which includes, among others,
Arrow-Debreu economies) corresponds to the solution set of an associated VI that
satisfies the Minty condition but is generally discontinuous. Applying the
mirror extragradient algorithm to this VI we obtain a class of
tâtonnement-like processes, which we call the mirror extratâtonnement
process. While our VI formulation is generally discontinuous, it is
Lipschitz-continuous in variationally stable Walrasian economies with bounded
elasticity (including those satisfying WARP and the Scarf economy), thus
establishing the polynomial-time convergence of mirror extratâtonnement in
these economies. We validate our approach through experiments on large
Arrow-Debreu economies with Cobb-Douglas, Leontief, and CES consumers, as well
as the Scarf economy, demonstrating fast convergence in all cases without
failure.
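For intuition, here is a minimal worked example of classical discrete-time tâtonnement (not the paper's mirror extratâtonnement) on a two-good Cobb-Douglas exchange economy, where gross substitutability guarantees convergence; the analytic equilibrium price ratio for this instance is p2/p1 = 1.5:

```python
def excess_demand(p, alphas, endowments):
    """Aggregate excess demand in a two-good Cobb-Douglas exchange economy.

    Consumer i spends the share alphas[i][g] of wealth p . e_i on good g,
    so demand for good g is alphas[i][g] * wealth / p[g].
    """
    z = [0.0, 0.0]
    for a, e in zip(alphas, endowments):
        wealth = p[0] * e[0] + p[1] * e[1]
        for g in range(2):
            z[g] += a[g] * wealth / p[g] - e[g]
    return z

alphas = [(0.7, 0.3), (0.2, 0.8)]          # expenditure shares
endowments = [(1.0, 0.0), (0.0, 1.0)]
p = [0.5, 0.5]
for _ in range(2000):
    z = excess_demand(p, alphas, endowments)
    p = [pg * (1 + 0.05 * zg) for pg, zg in zip(p, z)]  # raise over-demanded prices
    s = p[0] + p[1]
    p = [pg / s for pg in p]                            # normalize to the simplex

z = excess_demand(p, alphas, endowments)   # ~0 at equilibrium (p ~ [0.4, 0.6])
```

The Scarf economy is precisely an instance where this classical iteration fails to converge, motivating the VI-based processes of the paper.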
|
2502.11450
|
Fishing For Cheap And Efficient Pruners At Initialization
|
cs.LG cs.AI
|
Pruning offers a promising solution to mitigate the associated costs and
environmental impact of deploying large deep neural networks (DNNs).
Traditional approaches rely on computationally expensive trained models or
time-consuming iterative prune-retrain cycles, undermining their utility in
resource-constrained settings. To address this issue, we build upon the
established principles of saliency (LeCun et al., 1989) and connection
sensitivity (Lee et al., 2018) to tackle the challenging problem of one-shot
pruning neural networks (NNs) before training (PBT) at initialization. We
introduce Fisher-Taylor Sensitivity (FTS), a computationally cheap and
efficient pruning criterion based on the empirical Fisher Information Matrix
(FIM) diagonal, offering a viable alternative for integrating first- and
second-order information to identify a model's structurally important
parameters. Although the FIM-Hessian equivalency only holds for convergent
models that maximize the likelihood, recent studies (Karakida et al., 2019)
suggest that, even at initialization, the FIM captures essential geometric
information of parameters in overparameterized NNs, providing the basis for our
method. Finally, we demonstrate empirically that layer collapse, a critical
limitation of data-dependent pruning methodologies, is easily overcome by
pruning within a single training epoch after initialization. We perform
experiments on ResNet18 and VGG19 with CIFAR-10 and CIFAR-100, widely used
benchmarks in pruning research. Our method achieves competitive performance
against state-of-the-art techniques for one-shot PBT, even under extreme
sparsity conditions. Our code is made available to the public.
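The FIM-diagonal scoring idea can be sketched as follows. The plain mean-squared-gradient score is a simplified stand-in for the paper's Fisher-Taylor criterion:

```python
import numpy as np

def fisher_mask(per_sample_grads: np.ndarray, sparsity: float) -> np.ndarray:
    """One-shot pruning mask from the empirical Fisher diagonal.

    The empirical FIM diagonal is estimated as the mean squared
    per-sample gradient; the parameters with the smallest scores are
    pruned (mask = False), the rest are kept.
    """
    fim_diag = (per_sample_grads ** 2).mean(axis=0)    # shape: (n_params,)
    n_prune = int(sparsity * fim_diag.size)
    mask = np.ones(fim_diag.size, dtype=bool)
    mask[np.argsort(fim_diag)[:n_prune]] = False       # drop lowest-score params
    return mask

grads = np.array([[1.0, 0.1, 2.0, 0.0],
                  [1.5, 0.0, 1.0, 0.1]])               # 2 samples, 4 params
mask = fisher_mask(grads, sparsity=0.5)
```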
|
2502.11451
|
From Personas to Talks: Revisiting the Impact of Personas on
LLM-Synthesized Emotional Support Conversations
|
cs.CL
|
The rapid advancement of Large Language Models (LLMs) has revolutionized the
generation of emotional support conversations (ESC), offering scalable
solutions with reduced costs and enhanced data privacy. This paper explores the
role of personas in the creation of ESC by LLMs. Our research utilizes
established psychological frameworks to measure and infuse persona traits into
LLMs, which then generate dialogues in the emotional support scenario. We
conduct extensive evaluations to understand the stability of persona traits in
dialogues, examining shifts in traits post-generation and their impact on
dialogue quality and strategy distribution. Experimental results reveal several
notable findings: 1) LLMs can infer core persona traits, 2) subtle shifts in
emotionality and extraversion occur, influencing the dialogue dynamics, and 3)
the application of persona traits modifies the distribution of emotional
support strategies, enhancing the relevance and empathetic quality of the
responses. These findings highlight the potential of persona-driven LLMs in
crafting more personalized, empathetic, and effective emotional support
dialogues, which has significant implications for the future design of
AI-driven emotional support systems.
|
2502.11453
|
Connector-S: A Survey of Connectors in Multi-modal Large Language Models
|
cs.LG cs.AI
|
With the rapid advancements in multi-modal large language models (MLLMs),
connectors play a pivotal role in bridging diverse modalities and enhancing
model performance. However, the design and evolution of connectors have not
been comprehensively analyzed, leaving gaps in understanding how these
components function and hindering the development of more powerful connectors.
In this survey, we systematically review the current progress of connectors in
MLLMs and present a structured taxonomy that categorizes connectors into atomic
operations (mapping, compression, mixture of experts) and holistic designs
(multi-layer, multi-encoder, multi-modal scenarios), highlighting their
technical contributions and advancements. Furthermore, we discuss several
promising research frontiers and challenges, including high-resolution input,
dynamic compression, guide information selection, combination strategy, and
interpretability. This survey is intended to serve as a foundational reference
and a clear roadmap for researchers, providing valuable insights into the
design and optimization of next-generation connectors to enhance the
performance and adaptability of MLLMs.
|
2502.11454
|
UniCBE: An Uniformity-driven Comparing Based Evaluation Framework with
Unified Multi-Objective Optimization
|
cs.CL
|
Human preference plays a significant role in measuring large language models
and guiding them to align with human values. Unfortunately, current
comparing-based evaluation (CBE) methods typically focus on a single
optimization objective, failing to effectively utilize scarce yet valuable
preference signals. To address this, we delve into key factors that can enhance
the accuracy, convergence, and scalability of CBE: suppressing sampling bias,
balancing descending process of uncertainty, and mitigating updating
uncertainty. Following the derived guidelines, we propose UniCBE, a unified
uniformity-driven CBE framework that simultaneously optimizes these core
objectives by constructing and integrating three decoupled sampling probability
matrices, each designed to ensure uniformity in specific aspects. We further
ablate the optimal tuple sampling and preference aggregation strategies to
achieve efficient CBE. On the AlpacaEval benchmark, UniCBE saves over 17% of
evaluation budgets while achieving a Pearson correlation with ground truth
exceeding 0.995, demonstrating excellent accuracy and convergence. In scenarios
where new models are continuously introduced, UniCBE can even save over 50% of
evaluation costs, highlighting its improved scalability.
|
2502.11456
|
Leveraging Labelled Data Knowledge: A Cooperative Rectification Learning
Network for Semi-supervised 3D Medical Image Segmentation
|
cs.CV cs.AI
|
Semi-supervised 3D medical image segmentation aims to achieve accurate
segmentation using few labelled data and numerous unlabelled data. The main
challenge in the design of semi-supervised learning methods consists in the
effective use of the unlabelled data for training. A promising solution
consists of ensuring consistent predictions across different views of the data,
where the efficacy of this strategy depends on the accuracy of the
pseudo-labels generated by the model for this consistency learning strategy. In
this paper, we introduce a new methodology to produce high-quality
pseudo-labels for a consistency learning strategy to address semi-supervised 3D
medical image segmentation. The methodology has three important contributions.
The first contribution is the Cooperative Rectification Learning Network (CRLN)
that learns multiple prototypes per class to be used as external knowledge
priors to adaptively rectify pseudo-labels at the voxel level. The second
contribution consists of the Dynamic Interaction Module (DIM) to facilitate
pairwise and cross-class interactions between prototypes and multi-resolution
image features, enabling the production of accurate voxel-level clues for
pseudo-label rectification. The third contribution is the Cooperative Positive
Supervision (CPS), which optimises uncertain representations to align with
unassertive representations of their class distributions, improving the model's
accuracy in classifying uncertain regions. Extensive experiments on three
public 3D medical segmentation datasets demonstrate the effectiveness and
superiority of our semi-supervised learning method.
|
2502.11457
|
Aligning Sentence Simplification with ESL Learner's Proficiency for
Language Acquisition
|
cs.CL cs.AI
|
Text simplification is crucial for improving accessibility and comprehension
for English as a Second Language (ESL) learners. This study goes a step further
and aims to facilitate ESL learners' language acquisition through simplification.
Specifically, we propose simplifying complex sentences to appropriate levels
for learners while also increasing vocabulary coverage of the target level in
the simplifications. We achieve this without a parallel corpus by conducting
reinforcement learning on a large language model. Our method employs
token-level and sentence-level rewards, and iteratively trains the model on its
self-generated outputs to guide the model to search for simplification
hypotheses that satisfy the target attributes. Experiment results on CEFR-SP
and TurkCorpus datasets show that the proposed method can effectively increase
the frequency and diversity of vocabulary of the target level by more than
$20\%$ compared to baseline models, while maintaining high simplification
quality.
|
2502.11458
|
Towards Efficient Pre-training: Exploring FP4 Precision in Large
Language Models
|
cs.LG cs.AI
|
The burgeoning computational demands for training large language models
(LLMs) necessitate efficient methods, including quantized training, which
leverages low-bit arithmetic operations to reduce costs. While FP8 precision
has shown potential, leveraging FP4 remains challenging due to inherent
quantization errors and limited representation capability. Based on the
Transformer architecture, we present an FP4 training scheme for LLMs,
overcoming these obstacles through mixed-precision quantization strategies
tailored for different modules and training stages. This allows us to apply the
precision level suitable to distinct components within the model, ensuring that
multi-head attention and linear layers are handled appropriately. Our
pretraining recipe ensures stability in backpropagation by incorporating
fine-grained quantization methods with a target precision training schedule.
Experimental results demonstrate that our FP4 training scheme achieves accuracy
comparable to BF16 and FP8, with smaller theoretical computational cost. With
the advent of next-generation hardware supporting FP4, our method sets the
foundation for efficient ultra-low precision training.
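The effect of FP4 can be simulated in software ("fake quantization") with the E2M1 value grid. Per-tensor absmax scaling below is a simplification; real recipes like the one described typically use finer-grained (e.g. per-block) scales:

```python
import numpy as np

# Representable magnitudes of the FP4 E2M1 format (sign handled separately).
FP4_GRID = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0])

def fake_quantize_fp4(x: np.ndarray) -> np.ndarray:
    """Simulate FP4 quantization with per-tensor absmax scaling.

    Each value is scaled so the largest magnitude maps to 6 (the FP4
    maximum), rounded to the nearest representable magnitude, and
    rescaled; this mimics FP4 quantization error without FP4 hardware.
    """
    scale = np.abs(x).max() / FP4_GRID[-1]
    idx = np.abs(np.abs(x) / scale - FP4_GRID[:, None]).argmin(axis=0)
    return np.sign(x) * FP4_GRID[idx] * scale

x = np.array([6.0, 2.4, -5.2, 0.2])
q = fake_quantize_fp4(x)
```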
|
2502.11459
|
Towards Responsible and Fair Data Science: Resource Allocation for
Inclusive and Sustainable Analytics
|
cs.DB
|
This project addresses the challenges of responsible and fair resource
allocation in data science (DS), focusing on the evaluation of DS queries. Current DS
practices often overlook the broader socio-economic, environmental, and ethical
implications, including data sovereignty, fairness, and inclusivity. By
integrating a decolonial perspective, the project aims to establish innovative
fairness metrics that respect cultural and contextual diversity, optimise
computational and energy efficiency, and ensure equitable participation of
underrepresented communities. The research includes developing algorithms to
align resource allocation with fairness constraints, incorporating ethical and
sustainability considerations, and fostering interdisciplinary collaborations
to bridge technical advancements and societal impact gaps. This work aims to
reshape data science into an equitable, transparent, and community-empowering
practice, challenging the technological power held by Big Tech.
|
2502.11460
|
UnitCoder: Scalable Iterative Code Synthesis with Unit Test Guidance
|
cs.CL cs.SE
|
Large Language Models (LLMs) have demonstrated remarkable capabilities in
various tasks, yet code generation remains a major challenge. Current
approaches for obtaining high-quality code data primarily focus on (i)
collecting large-scale pre-training data and (ii) synthesizing instruction data
through prompt engineering with powerful models. While pre-training data faces
quality consistency issues, instruction-based synthesis suffers from limited
instruction diversity and inherent biases of LLMs. To address this gap, we
introduce UnitCoder, a systematic pipeline leveraging model-generated unit
tests to both guide and validate the code generation process. Combined with
large-scale package-based retrieval from pre-training corpus, we generate a
dataset of 500K+ verifiable programs containing diverse API calls. Evaluations
on multiple Python benchmarks (BigCodeBench, HumanEval, MBPP) demonstrate that
models fine-tuned on our synthetic data exhibit consistent performance
improvements. Notably, Llama3.1-8B and InternLM2.5-7B improve from 31\% and
28\% to 40\% and 39\% success rates on BigCodeBench, respectively. Our work
presents a scalable approach that leverages model-generated unit tests to guide
the synthesis of high-quality code data from pre-training corpora,
demonstrating the potential for producing diverse and high-quality
post-training data at scale. All code and data will be released
(https://github.com).
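The unit-test-as-filter step at the core of the pipeline can be sketched as follows; a production pipeline would sandbox the execution rather than call `exec` directly:

```python
def validate_candidate(code: str, unit_test: str) -> bool:
    """Run a generated snippet against its unit test; keep it only if
    the test passes without raising."""
    namespace = {}
    try:
        exec(code, namespace)       # define the candidate function
        exec(unit_test, namespace)  # run the model-generated unit test
        return True
    except Exception:
        return False

test = "assert add(2, 3) == 5"
good = "def add(a, b):\n    return a + b"
bad = "def add(a, b):\n    return a - b"
kept = [c for c in (good, bad) if validate_candidate(c, test)]
```

Only validated programs enter the fine-tuning dataset, which is what makes the 500K+ synthesized programs "verifiable".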
|
2502.11461
|
Doppler Correspondence: Non-Iterative Scan Matching With Doppler
Velocity-Based Correspondence
|
cs.RO
|
Achieving successful scan matching is essential for LiDAR odometry. However,
in challenging environments with adverse weather conditions or repetitive
geometric patterns, LiDAR odometry performance is degraded due to incorrect
scan matching. Recently, the emergence of frequency-modulated continuous wave
4D LiDAR and 4D radar technologies has provided the potential to address these
unfavorable conditions. The term 4D refers to point cloud data characterized by
range, azimuth, and elevation along with Doppler velocity. Although 4D data is
available, most scan matching methods for 4D LiDAR and 4D radar still establish
correspondence by repeatedly identifying the closest points between consecutive
scans, overlooking the Doppler information. This paper introduces, for the
first time, a simple Doppler velocity-based correspondence -- Doppler
Correspondence -- that is invariant to translation and small rotation of the
sensor, with its geometric and kinematic foundations. Extensive experiments
demonstrate that the proposed method enables the direct matching of consecutive
point clouds without an iterative process, making it computationally efficient.
Additionally, it provides a more robust correspondence estimation in
environments with repetitive geometric patterns.
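The core idea, matching points by Doppler velocity rather than by spatial proximity, can be sketched as below. This one-dimensional nearest-velocity matching is a simplified reading, not the paper's full geometric and kinematic formulation:

```python
import numpy as np

def doppler_correspondence(v_prev: np.ndarray, v_curr: np.ndarray) -> np.ndarray:
    """For each point in the previous scan, pick the point in the current
    scan with the closest Doppler velocity.

    No iterative closest-point search is needed: the pairwise velocity
    differences are computed once and the per-row argmin is the match.
    """
    return np.abs(v_prev[:, None] - v_curr[None, :]).argmin(axis=1)

# Toy scans: the current scan contains the same targets in shuffled order.
v_prev = np.array([1.0, -3.0, 0.5])
v_curr = np.array([0.5, 1.0, -3.0])
matches = doppler_correspondence(v_prev, v_curr)
```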
|
2502.11462
|
LMFCA-Net: A Lightweight Model for Multi-Channel Speech Enhancement with
Efficient Narrow-Band and Cross-Band Attention
|
eess.AS cs.LG cs.SD
|
Deep learning based end-to-end multi-channel speech enhancement methods have
achieved impressive performance by leveraging sub-band, cross-band, and spatial
information. However, these methods often demand substantial computational
resources, limiting their practicality on terminal devices. This paper presents
a lightweight multi-channel speech enhancement network with decoupled fully
connected attention (LMFCA-Net). The proposed LMFCA-Net introduces time-axis
decoupled fully-connected attention (T-FCA) and frequency-axis decoupled
fully-connected attention (F-FCA) mechanisms to effectively capture long-range
narrow-band and cross-band information without recurrent units. Experimental
results show that LMFCA-Net performs comparably to state-of-the-art methods
while significantly reducing computational complexity and latency, making it a
promising solution for practical applications.
|
2502.11465
|
All Models Are Miscalibrated, But Some Less So: Comparing Calibration
with Conditional Mean Operators
|
stat.ML cs.LG
|
When working in a high-risk setting, having well calibrated probabilistic
predictive models is a crucial requirement. However, estimators for calibration
error are not always able to correctly distinguish which model is better
calibrated. We propose the \emph{conditional kernel calibration error} (CKCE)
which is based on the Hilbert-Schmidt norm of the difference between
conditional mean operators. By working directly with the definition of strong
calibration as the distance between conditional distributions, which we
represent by their embeddings in reproducing kernel Hilbert spaces, the CKCE is
less sensitive to the marginal distribution of predictive models. This makes it
more effective for relative comparisons than previously proposed calibration
metrics. Our experiments, using both synthetic and real data, show that CKCE
provides a more consistent ranking of models by their calibration error and is
more robust against distribution shift.
|
2502.11466
|
GiFT: Gibbs Fine-Tuning for Code Generation
|
cs.LG cs.CL cs.SE
|
Training Large Language Models (LLMs) with synthetic data is a prevalent
practice in code generation. A key approach is self-training, where LLMs are
iteratively trained on self-generated correct code snippets. In this case, the
self-generated codes are drawn from a conditional distribution, conditioned on
a specific seed description. However, the seed description is not the only
valid representation that aligns with its intended meaning. With all valid
descriptions and codes forming a joint space, codes drawn from the conditional
distribution would lead to an underrepresentation of the full description-code
space. As such, we propose Gibbs Fine-Tuning (GiFT), a novel self-training
method inspired by Gibbs sampling. GiFT allows self-generated data to be drawn
from the marginal distribution of the joint space, thereby mitigating the
biases inherent in conditional sampling. We provide a theoretical analysis
demonstrating the potential benefits of fine-tuning LLMs with code derived from
the marginal distribution. Furthermore, we propose a perplexity-based code
selection method to mitigate the imbalanced long-tail distribution of the
self-generated codes. Empirical evaluation of two LLMs across four datasets
demonstrates that GiFT achieves superior performance, particularly on more
challenging benchmarks.
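The underlying Gibbs-sampling idea, alternating between the two conditionals to draw from the joint (and hence from its marginals), can be illustrated on a toy discrete distribution:

```python
import random

# Toy joint over (x, y) in {0, 1}^2, standing in for the description-code space.
JOINT = {(0, 0): 0.4, (0, 1): 0.2, (1, 0): 0.1, (1, 1): 0.3}

def cond(value: int, axis: int) -> list[float]:
    """P(other variable | fixed one); axis 0 conditions on x, axis 1 on y."""
    probs = [JOINT[(value, o)] if axis == 0 else JOINT[(o, value)]
             for o in (0, 1)]
    total = sum(probs)
    return [p / total for p in probs]

def gibbs_marginal_x(n_steps=20000, burn_in=1000, seed=0) -> float:
    """Estimate P(x=0) by alternating draws from the two conditionals."""
    rng = random.Random(seed)
    x, y = 0, 0
    hits = 0
    for t in range(n_steps):
        y = 0 if rng.random() < cond(x, 0)[0] else 1   # y ~ P(y | x)
        x = 0 if rng.random() < cond(y, 1)[0] else 1   # x ~ P(x | y)
        if t >= burn_in and x == 0:
            hits += 1
    return hits / (n_steps - burn_in)

p_x0 = gibbs_marginal_x()   # true marginal: P(x=0) = 0.6
```

In GiFT the two conditionals are description-given-code and code-given-description, both realized by LLM generations rather than table lookups.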
|
2502.11467
|
Approximation of Permutation Invariant Polynomials by Transformers:
Efficient Construction in Column-Size
|
cs.LG math.FA
|
Transformers are a type of neural network that have demonstrated remarkable
performance across various domains, particularly in natural language processing
tasks. Motivated by this success, research on the theoretical understanding of
transformers has garnered significant attention. A notable example is the
mathematical analysis of their approximation power, which validates the
empirical expressive capability of transformers. In this study, we investigate
the ability of transformers to approximate column-symmetric polynomials, an
extension of symmetric polynomials that take matrices as input. Consequently,
we establish an explicit relationship between the size of the transformer
network and its approximation capability, leveraging the parameter efficiency
of transformers and their compatibility with symmetry by focusing on the
algebraic properties of symmetric polynomials.
|
2502.11468
|
Semantically Robust Unsupervised Image Translation for Paired Remote
Sensing Images
|
cs.CV
|
Image translation for change detection or classification in bi-temporal
remote sensing images is a unique setting: although paired images are available,
the task remains unsupervised. Moreover, strict semantic preservation in translation is
always needed instead of multimodal outputs. In response to these problems,
this paper proposes a new method, SRUIT (Semantically Robust Unsupervised
Image-to-image Translation), which ensures semantically robust translation and
produces deterministic output. Inspired by previous works, the method explores
the underlying characteristics of bi-temporal Remote Sensing images and designs
the corresponding networks. Firstly, we assume that bi-temporal remote sensing
images share the same latent space, since they are always acquired from the
same land location. SRUIT therefore makes the generators share their high-level
layers, a constraint that compels the two domain mappings to fall into the same
latent space. Secondly, considering that the land covers of bi-temporal images
can evolve into each other, SRUIT exploits cross-cycle-consistent adversarial
networks to translate from one domain to the other and recover them.
Experimental results show that the constraints of weight sharing and
cross-cycle consistency yield translated images with both good perceptual
quality and semantic preservation, even under significant differences.
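The weight-sharing constraint can be illustrated with a framework-agnostic sketch (class names and the placeholder transforms are invented for illustration, not the paper's code):

```python
class SharedHighLevel:
    """High-level layers shared by both generators, acting as a common
    latent-space encoder (stub transform stands in for real layers)."""
    def __call__(self, x):
        return [v * 0.5 for v in x]  # placeholder computation

class Generator:
    def __init__(self, shared, offset):
        self.shared = shared  # the SAME shared high-level layers object
        self.offset = offset  # stand-in for domain-specific low-level layers
    def __call__(self, x):
        return [v + self.offset for v in self.shared(x)]

shared = SharedHighLevel()
g_ab = Generator(shared, offset=1.0)  # domain A -> B
g_ba = Generator(shared, offset=2.0)  # domain B -> A
# Both mappings pass through the same shared object, so any update to the
# shared layers affects both translations, pushing them into one latent space.
```

Because both generators hold a reference to the same high-level module, gradient updates to it are shared, which is what compels the two mappings toward a common latent space.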
|
2502.11469
|
If Attention Serves as a Cognitive Model of Human Memory Retrieval, What
is the Plausible Memory Representation?
|
cs.CL
|
Recent work in computational psycholinguistics has revealed intriguing
parallels between attention mechanisms and human memory retrieval, focusing
primarily on Transformer architectures that operate on token-level
representations. However, computational psycholinguistic research has also
established that syntactic structures provide compelling explanations for human
sentence processing that word-level factors alone cannot fully account for. In
this study, we investigate whether the attention mechanism of Transformer
Grammar (TG), which uniquely operates on syntactic structures as
representational units, can serve as a cognitive model of human memory
retrieval, using Normalized Attention Entropy (NAE) as a linking hypothesis
between model behavior and human processing difficulty. Our experiments
demonstrate that TG's attention achieves superior predictive power for
self-paced reading times compared to vanilla Transformer's, with further
analyses revealing independent contributions from both models. These findings
suggest that human sentence processing involves dual memory representations --
one based on syntactic structures and another on token sequences -- with
attention serving as the general retrieval algorithm, while highlighting the
importance of incorporating syntactic structures as representational units.
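One common way to compute a normalized attention entropy (the paper's exact NAE formulation may differ) divides the Shannon entropy of each attention distribution by its maximum value, log of the support size:

```python
import math

def normalized_attention_entropy(weights, eps=1e-12):
    """Shannon entropy of an attention distribution, normalized to [0, 1]
    by its maximum log(n). Assumes `weights` sums to 1."""
    n = len(weights)
    if n <= 1:
        return 0.0
    h = -sum(w * math.log(w + eps) for w in weights if w > 0)
    return h / math.log(n)

# Diffuse attention (high entropy) vs. peaked attention (low entropy),
# interpretable as harder vs. easier memory retrieval.
uniform = normalized_attention_entropy([0.25] * 4)
peaked = normalized_attention_entropy([0.97, 0.01, 0.01, 0.01])
```

Under the linking hypothesis, more diffuse attention (values near 1) would correspond to greater retrieval difficulty and longer reading times.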
|
2502.11470
|
Optimized detection of cyber-attacks on IoT networks via hybrid deep
learning models
|
cs.CR cs.AI
|
The rapid expansion of Internet of Things (IoT) devices has increased the
risk of cyber-attacks, making effective detection essential for securing IoT
networks. This work introduces a novel approach combining Self-Organizing Maps
(SOMs), Deep Belief Networks (DBNs), and Autoencoders to detect known and
previously unseen attack patterns. A comprehensive evaluation using simulated
and real-world traffic data is conducted, with models optimized via Particle
Swarm Optimization (PSO). The system achieves an accuracy of up to 99.99% and
Matthews Correlation Coefficient (MCC) values exceeding 99.50%. Experiments on
NSL-KDD, UNSW-NB15, and CICIoT2023 confirm the model's strong performance
across diverse attack types. These findings suggest that the proposed method
enhances IoT security by identifying emerging threats and adapting to evolving
attack strategies.
|
2502.11471
|
GLTW: Joint Improved Graph Transformer and LLM via Three-Word Language
for Knowledge Graph Completion
|
cs.CL cs.IR
|
Knowledge Graph Completion (KGC), which aims to infer missing or incomplete
facts, is a crucial task for KGs. However, integrating the vital structural
information of KGs into Large Language Models (LLMs) and outputting predictions
deterministically remains challenging. To address this, we propose a new method
called GLTW, which encodes the structural information of KGs and merges it with
LLMs to enhance KGC performance. Specifically, we introduce an improved Graph
Transformer (iGT) that effectively encodes subgraphs with both local and global
structural information and inherits the characteristics of language models,
bypassing training from scratch. We also develop a subgraph-based
multi-classification training objective, using all entities within the KG as
classification objects, to boost learning efficiency. Importantly, we combine
iGT with an LLM that takes KG language prompts as input. Our extensive
experiments on various KG datasets show that GLTW achieves significant
performance gains compared to SOTA baselines.
|
2502.11476
|
FastMCTS: A Simple Sampling Strategy for Data Synthesis
|
cs.CL
|
Synthetic high-quality multi-step reasoning data can significantly enhance
the performance of large language models on various tasks. However, most
existing methods rely on rejection sampling, which generates trajectories
independently and suffers from inefficiency and imbalanced sampling across
problems of varying difficulty. In this work, we introduce FastMCTS, an
innovative data synthesis strategy inspired by Monte Carlo Tree Search.
FastMCTS provides a more efficient sampling method for multi-step reasoning
data, offering step-level evaluation signals and promoting balanced sampling
across problems of different difficulty levels. Experiments on both English and
Chinese reasoning datasets demonstrate that FastMCTS generates over 30\% more
correct reasoning paths compared to rejection sampling as the number of
generated tokens scales up. Furthermore, under comparable synthetic data
budgets, models trained on FastMCTS-generated data outperform those trained on
rejection sampling data by 3.9\% across multiple benchmarks. As a lightweight
sampling strategy, FastMCTS offers a practical and efficient alternative for
synthesizing high-quality reasoning data. Our code will be released soon.
|
2502.11477
|
Learning to Sample Effective and Diverse Prompts for Text-to-Image
Generation
|
cs.CV
|
Recent advances in text-to-image diffusion models have achieved impressive
image generation capabilities. However, it remains challenging to control the
generation process with desired properties (e.g., aesthetic quality, user
intention), which can be expressed as black-box reward functions. In this
paper, we focus on prompt adaptation, which refines the original prompt into
model-preferred prompts to generate desired images. While prior work uses
reinforcement learning (RL) to optimize prompts, we observe that applying RL
often results in generating similar postfixes and deterministic behaviors. To
this end, we introduce \textbf{P}rompt \textbf{A}daptation with
\textbf{G}FlowNets (\textbf{PAG}), a novel approach that frames prompt
adaptation as a probabilistic inference problem. Our key insight is that
leveraging Generative Flow Networks (GFlowNets) allows us to shift from reward
maximization to sampling from an unnormalized density function, enabling both
high-quality and diverse prompt generation. However, we find that a naive
application of GFlowNets suffers from mode collapse, and we uncover a
previously overlooked phenomenon: a progressive loss of neural plasticity in
the model, compounded by inefficient credit assignment in sequential prompt
generation. To address this critical challenge, we develop a systematic
approach in PAG with flow reactivation, reward-prioritized sampling, and reward
decomposition for prompt adaptation. Extensive experiments validate that PAG
successfully learns to sample effective and diverse prompts for text-to-image
generation. We also show that PAG exhibits strong robustness across various
reward functions and transferability to different text-to-image models.
|
2502.11478
|
TAPS: Throat and Acoustic Paired Speech Dataset for Deep Learning-Based
Speech Enhancement
|
cs.SD cs.LG eess.AS
|
In high-noise environments such as factories, subways, and busy streets,
capturing clear speech is challenging due to background noise. Throat
microphones provide a solution with their noise-suppressing properties,
reducing the noise while recording speech. However, a significant limitation
remains: high-frequency information is attenuated as sound waves pass through
skin and tissue, reducing speech clarity. Recent deep learning approaches have
shown promise in enhancing throat microphone recordings, but further progress
is constrained by the absence of a standardized dataset. We introduce a throat
and acoustic paired speech dataset (TAPS), a collection of paired utterances
recorded from 60 native Korean speakers using throat and acoustic microphones.
To demonstrate TAPS's utility, we tested three baseline deep learning
models and identified the mapping-based approach as superior in improving
speech quality and restoring content. Additionally, we propose an optimal
method to mitigate the signal mismatch between throat and acoustic microphones,
ensuring model performance. These results highlight the potential of TAPS to
serve as a standardized dataset and advance research in throat microphone-based
speech enhancement.
|
2502.11480
|
Enhancing Offline Model-Based RL via Active Model Selection: A Bayesian
Optimization Perspective
|
cs.LG stat.ML
|
Offline model-based reinforcement learning (MBRL) serves as a competitive
framework that can learn well-performing policies solely from pre-collected
data with the help of learned dynamics models. To fully unleash the power of
offline MBRL, model selection plays a pivotal role in determining the dynamics
model utilized for downstream policy learning. However, offline MBRL
conventionally relies on validation or off-policy evaluation, which are rather
inaccurate due to the inherent distribution shift in offline RL. To tackle
this, we propose BOMS, an active model selection framework that enhances model
selection in offline MBRL with only a small online interaction budget, through
the lens of Bayesian optimization (BO). Specifically, we recast model selection
as BO and enable probabilistic inference in BOMS by proposing a novel
model-induced kernel, which is theoretically grounded and computationally
efficient. Through extensive experiments, we show that BOMS improves over the
baseline methods with a small amount of online interaction comparable to only
$1\%$-$2.5\%$ of offline training data on various RL tasks.
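BOMS's model-induced kernel is specific to the paper; as a generic sketch of model selection recast as Bayesian optimization, the loop below uses an RBF kernel over hypothetical one-dimensional "model features" and a UCB acquisition (all names and numbers are illustrative assumptions):

```python
import numpy as np

def rbf(a, b, ls=1.0):
    """RBF kernel matrix; a stand-in for the paper's model-induced kernel."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ls) ** 2)

def gp_posterior(x_obs, y_obs, x_cand, noise=1e-4):
    """GP posterior mean/std at candidate points under a zero-mean prior."""
    K = rbf(x_obs, x_obs) + noise * np.eye(len(x_obs))
    Ks = rbf(x_obs, x_cand)
    Kinv = np.linalg.inv(K)
    mu = Ks.T @ Kinv @ y_obs
    var = np.diag(rbf(x_cand, x_cand) - Ks.T @ Kinv @ Ks)
    return mu, np.sqrt(np.clip(var, 0.0, None))

def ucb_pick(x_obs, y_obs, x_cand, beta=2.0):
    """Pick the candidate model maximizing the UCB acquisition."""
    mu, sd = gp_posterior(np.array(x_obs), np.array(y_obs), np.array(x_cand))
    return int(np.argmax(mu + beta * sd))

# Hypothetical features for 5 candidate dynamics models, with two models
# already scored by short online rollouts (the small interaction budget).
features = [0.0, 0.5, 1.0, 1.5, 2.0]
evaluated_x, returns = [0.0, 2.0], [0.2, 0.9]
next_model = ucb_pick(evaluated_x, returns, features)
```

Each BO step spends a little online interaction evaluating one model, then refits the posterior, which is how a small budget can steer selection away from inaccurate off-policy estimates.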
|
2502.11481
|
Variable-frame CNNLSTM for Breast Nodule Classification using Ultrasound
Videos
|
cs.CV cs.AI
|
The intersection of medical imaging and artificial intelligence has become an
important research direction in intelligent medical treatment, particularly in
the analysis of medical images using deep learning for clinical diagnosis.
Despite these advances, existing keyframe classification methods do not
extract time-series features, while ultrasound video classification based on
three-dimensional convolution requires a uniform frame count across patients,
resulting in poor feature-extraction efficiency and model classification
performance. This study proposes a novel video classification method based on
CNN and LSTM, introducing NLP's long and short sentence processing scheme into
video classification for the first time. The method reduces CNN-extracted
image features to a 1x512 vector, then sorts and compresses the feature
vectors for LSTM training. Specifically, feature vectors are sorted by the
number of frames in each patient's video and padded with the value 0 to form
variable-length batches, with the invalid padding values compressed out before
LSTM training to conserve computing resources. Experimental results demonstrate that our
variable-frame CNNLSTM method outperforms other approaches across all metrics,
showing improvements of 3-6% in F1 score and 1.5% in specificity compared to
keyframe methods. The variable-frame CNNLSTM also achieves better accuracy and
precision than equal-frame CNNLSTM. These findings validate the effectiveness
of our approach in classifying variable-frame ultrasound videos and suggest
potential applications in other medical imaging modalities.
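The sort-pad-mask step described above can be sketched with NumPy (a generic illustration of variable-length batching; the paper's exact compression mechanism is not reproduced here):

```python
import numpy as np

def pad_and_mask(feature_seqs):
    """Sort per-patient feature sequences by frame count, zero-pad to the
    batch maximum, and return a validity mask so padded steps can be
    skipped (the 'compression') during LSTM training.
    Each element of `feature_seqs` is an (n_frames, feat_dim) array."""
    seqs = sorted(feature_seqs, key=len, reverse=True)
    lengths = [len(s) for s in seqs]
    max_len, dim = lengths[0], seqs[0].shape[1]
    batch = np.zeros((len(seqs), max_len, dim))
    mask = np.zeros((len(seqs), max_len), dtype=bool)
    for i, s in enumerate(seqs):
        batch[i, : len(s)] = s
        mask[i, : len(s)] = True
    return batch, mask, lengths

# Three hypothetical patients with 4, 2, and 3 frames of 512-d CNN features.
seqs = [np.ones((4, 512)), np.ones((2, 512)), np.ones((3, 512))]
batch, mask, lengths = pad_and_mask(seqs)
```

In a framework like PyTorch, the sorted sequences and lengths would typically feed a packed-sequence utility so the recurrent network never computes over padding.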
|
2502.11482
|
DATA: Decomposed Attention-based Task Adaptation for Rehearsal-Free
Continual Learning
|
cs.LG cs.AI cs.CL
|
Continual learning (CL) is essential for Large Language Models (LLMs) to
adapt to evolving real-world demands, yet they are susceptible to catastrophic
forgetting (CF). While traditional CF solutions rely on expensive data
rehearsal, recent rehearsal-free methods employ model-based and
regularization-based strategies to address this issue. However, these
approaches often neglect the model's plasticity, which is crucial to achieving
optimal performance on newly learned tasks. Consequently, a key challenge in CL
is striking a balance between preserving plasticity and mitigating CF. To
tackle this challenge, we propose the $\textbf{D}$ecomposed
$\textbf{A}$ttention-based $\textbf{T}$ask $\textbf{A}$daptation (DATA), which
explicitly decouples and learns both task-specific and task-shared knowledge
using high-rank and low-rank task adapters (e.g., LoRAs). For new tasks, DATA
dynamically adjusts the weights of adapters of different ranks based on their
relevance and distinction from previous tasks, allowing the model to acquire
new task-specific skills while effectively retaining previously learned
knowledge. Specifically, we implement a decomposed component weighting strategy
comprising learnable components that collectively generate attention-based
weights, allowing the model to integrate and utilize diverse knowledge from
each DATA component. Extensive experiments on three widely used benchmarks demonstrate
that our proposed method achieves state-of-the-art performance. Notably, our
approach significantly enhances model plasticity and mitigates CF by extending
learnable components and employing stochastic restoration during training
iterations.
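The attention-based blending of adapters can be sketched generically as a softmax-weighted sum of adapter outputs (a simplified stand-in: DATA's learnable weighting components and high/low-rank LoRA structure are more involved):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def combine_adapters(x, adapters, logits):
    """Blend the outputs of several task adapters with attention-based
    weights. `adapters` are stand-in weight matrices; `logits` play the
    role of learnable relevance scores for the current task."""
    w = softmax(np.asarray(logits, dtype=float))
    return sum(wi * (A @ x) for wi, A in zip(w, adapters))

x = np.ones(4)
task_specific = np.eye(4) * 2.0  # stand-in for a high-rank adapter
task_shared = np.eye(4) * 1.0    # stand-in for a low-rank adapter
out = combine_adapters(x, [task_specific, task_shared], logits=[0.0, 0.0])
```

Raising one adapter's logit shifts weight toward task-specific or task-shared knowledge, which is the balance between plasticity and forgetting the abstract describes.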
|