| id | title | categories | abstract |
|---|---|---|---|
2502.12499
|
GPU Memory Usage Optimization for Backward Propagation in Deep Network
Training
|
cs.LG cs.DS
|
In modern deep learning, it has become a trend to design larger Deep Neural
Networks (DNNs) to execute more complex tasks with better accuracy. On
the other hand, Convolutional Neural Networks (CNNs) have become the standard
method for most computer vision tasks. However, the memory allocation for
the intermediate data in convolution layers can cause severe memory pressure
during model training. Many solutions have been proposed to resolve the
problem. Besides hardware-dependent solutions, a general methodology
rematerialization can reduce GPU memory usage by trading computation for memory
efficiently. The idea is to select a set of intermediate results during the
forward phase as checkpoints, and only save them in memory to reduce memory
usage. The backward phase recomputes the intermediate data from the closest
checkpoints in memory as needed. This recomputation increases execution time
but saves memory by not storing all intermediate results in memory during the
forward phase. In this paper, we will focus on efficiently finding the optimal
checkpoint subset to achieve the least peak memory usage during the model
training. We first describe the theoretical background of the training of a
neural network using mathematical equations. We use these equations to identify
all essential data required during both forward and backward phases to compute
the gradient of the model's weights. We then formalize the checkpoint
selection problem and propose a dynamic programming algorithm with time
complexity $O(n^3)$ for finding the optimal checkpoint subset.
Guided by extensive experiments and our theoretical analysis, we further
refine the problem description, revise the objective function based on
execution traces, and propose an $O(n)$-time algorithm for finding the optimal
checkpoint subset.
|
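The checkpoint-selection problem in the abstract above can be illustrated with a toy cost model. The sketch below is hypothetical and far simpler than the paper's dynamic-programming and $O(n)$-time algorithms: it scores a checkpoint subset as the memory held by the checkpoints plus the largest inter-checkpoint segment that must be rematerialized at once, then brute-forces the optimal subset for a small chain.

```python
from itertools import combinations

def peak_memory(mem, ckpts):
    """Peak memory of a toy layer chain with activation sizes `mem`:
    checkpointed activations stay resident, and the backward pass
    rematerializes one inter-checkpoint segment at a time, so the
    peak is (stored checkpoints) + (largest segment)."""
    stored = sum(mem[i] for i in ckpts)
    bounds = sorted(set(ckpts) | {-1, len(mem)})
    seg_peak = 0
    for a, b in zip(bounds, bounds[1:]):
        seg_peak = max(seg_peak, sum(mem[i] for i in range(a + 1, b)))
    return stored + seg_peak

def best_checkpoints(mem):
    """Brute-force the checkpoint subset minimizing peak memory."""
    best = (peak_memory(mem, ()), ())
    for k in range(1, len(mem) + 1):
        for c in combinations(range(len(mem)), k):
            best = min(best, (peak_memory(mem, c), c))
    return best
```

For eight unit-size activations, storing nothing costs 8 units at the backward peak, while an optimal pair of checkpoints brings the peak down to 4, which is the memory-for-compute trade the abstract describes.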
2502.12501
|
Crowd Comparative Reasoning: Unlocking Comprehensive Evaluations for
LLM-as-a-Judge
|
cs.CL
|
LLM-as-a-Judge, which generates chain-of-thought (CoT) judgments, has become
a widely adopted auto-evaluation method. However, its reliability is
compromised by the CoT reasoning's inability to capture comprehensive and
deeper details, often leading to incomplete outcomes. Existing methods mainly
rely on majority voting or criteria expansion, which are insufficient to
address this limitation of CoT. We propose Crowd-based Comparative Evaluation, which
introduces additional crowd responses to compare with the candidate responses,
thereby exposing deeper and more comprehensive details within the candidate
responses. This process effectively guides LLM-as-a-Judge to provide a more
detailed CoT judgment. Extensive experiments demonstrate that our approach
enhances evaluation reliability, achieving an average accuracy gain of 6.7%
across five benchmarks. Moreover, our method produces higher-quality CoTs that
facilitate judge distillation and exhibit superior performance in rejection
sampling for supervised fine-tuning (SFT), referred to as crowd rejection
sampling, thereby enabling more efficient SFT. Our analysis confirms that CoTs
generated by our method are more comprehensive and of higher quality, and that evaluation
accuracy improves as inference scales.
|
2502.12502
|
Efficient OpAmp Adaptation for Zoom Attention to Golden Contexts
|
cs.CL
|
Large language models (LLMs) have shown significant promise in
question-answering (QA) tasks, particularly in retrieval-augmented generation
(RAG) scenarios and long-context applications. However, their performance is
hindered by noisy reference documents, which often distract from essential
information. Despite fine-tuning efforts, Transformer-based architectures
struggle to prioritize relevant content. This is evidenced by their tendency to
allocate disproportionate attention to irrelevant or later-positioned
documents. Recent work proposes the differential attention mechanism to address
this issue, but this mechanism is limited by an unsuitable common-mode
rejection ratio (CMRR) and high computational costs. Inspired by the
operational amplifier (OpAmp), we propose the OpAmp adaptation to address these
challenges, implemented efficiently with adapters. By integrating the
adapter into pre-trained Transformer blocks, our approach enhances focus on the
golden context without costly training from scratch. Empirical evaluations on
noisy-context benchmarks reveal that our Qwen2.5-OpAmp-72B model, trained with
our OpAmp adaptation, surpasses the performance of state-of-the-art LLMs,
including DeepSeek-V3 and GPT-4o.
|
2502.12507
|
Mixture of Attention Yields Accurate Results for Tabular Data
|
cs.LG cs.AI
|
Tabular data inherently exhibits significant feature heterogeneity, but
existing transformer-based methods lack specialized mechanisms to handle this
property. To bridge the gap, we propose MAYA, an encoder-decoder
transformer-based framework. In the encoder, we design a Mixture of Attention
(MOA) that constructs multiple parallel attention branches and averages the
features at each branch, effectively fusing heterogeneous features while
limiting parameter growth. Additionally, we employ collaborative learning with
a dynamic consistency weight constraint to produce more robust representations.
In the decoder stage, cross-attention is utilized to seamlessly integrate
tabular data with corresponding label features. This dual-attention mechanism
effectively captures both intra-instance and inter-instance interactions. We
evaluate the proposed method on a wide range of datasets and compare it with
other state-of-the-art transformer-based methods. Extensive experiments
demonstrate that our model achieves superior performance among
transformer-based methods in both tabular classification and regression tasks.
|
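The parallel-branch idea in the MAYA abstract above can be sketched in a few lines of NumPy. This is a hypothetical reading, not the paper's exact MOA design: the weight shapes and the simple mean fusion are assumptions made for illustration.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(q, k, v):
    # Scaled dot-product attention over a single sequence.
    scores = q @ k.swapaxes(-1, -2) / np.sqrt(q.shape[-1])
    return softmax(scores) @ v

def mixture_of_attention(x, branch_weights):
    """Average the outputs of several parallel attention branches.
    `branch_weights` is a list of (Wq, Wk, Wv) triples, one per branch."""
    outs = [attention(x @ wq, x @ wk, x @ wv) for wq, wk, wv in branch_weights]
    return np.mean(outs, axis=0)
```

Because the branches share the input and are averaged, adding a branch grows the parameter count linearly while leaving the output dimensionality unchanged, which matches the "limiting parameter growth" claim.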
2502.12508
|
Understanding Generalization in Transformers: Error Bounds and Training
Dynamics Under Benign and Harmful Overfitting
|
cs.LG
|
Transformers serve as the foundational architecture for many successful
large-scale models, demonstrating the ability to overfit the training data
while maintaining strong generalization on unseen data, a phenomenon known as
benign overfitting. However, research on how the training dynamics influence
error bounds within the context of benign overfitting has been limited. This
paper addresses this gap by developing a generalization theory for a two-layer
transformer under label-flipping noise. Specifically, we present generalization
error bounds for both benign and harmful overfitting under varying
signal-to-noise ratios (SNR), where the training dynamics are categorized into
three distinct stages, each with its corresponding error bounds. Additionally,
we conduct extensive experiments to identify key factors that influence test
errors in transformers. Our experimental results align closely with the
theoretical predictions, validating our findings.
|
2502.12509
|
LegalCore: A Dataset for Legal Documents Event Coreference Resolution
|
cs.CL cs.AI
|
Recognizing events and their coreferential mentions in a document is
essential for understanding the semantic meaning of text. Existing research on
event coreference resolution is mostly limited to news articles. In this paper,
we present the first dataset for the legal domain, LegalCore, which has been
annotated with comprehensive event and event coreference information. The legal
contract documents we annotated in this dataset are several times longer than
news articles, with an average length of around 25k tokens per document. The
annotations show that legal documents have dense event mentions and feature
both short-distance and super long-distance coreference links between event
mentions. We further benchmark mainstream Large Language Models (LLMs) on this
dataset for both event detection and event coreference resolution tasks, and
find that this dataset poses significant challenges for state-of-the-art
open-source and proprietary LLMs, which perform significantly worse than a
supervised baseline. We will publish the dataset as well as the code.
|
2502.12510
|
Aspect-Guided Multi-Level Perturbation Analysis of Large Language Models
in Automated Peer Review
|
cs.CL
|
We propose an aspect-guided, multi-level perturbation framework to evaluate
the robustness of Large Language Models (LLMs) in automated peer review. Our
framework explores perturbations in three key components of the peer review
process (papers, reviews, and rebuttals) across several quality aspects,
including contribution, soundness, presentation, tone, and completeness. By
applying targeted perturbations and examining their effects on both
LLM-as-Reviewer and LLM-as-Meta-Reviewer, we investigate how aspect-based
manipulations, such as omitting methodological details from papers or altering
reviewer conclusions, can introduce significant biases in the review process.
We identify several potential vulnerabilities: review conclusions that
recommend a strong reject may significantly influence meta-reviews, negative or
misleading reviews may be wrongly interpreted as thorough, and incomplete or
hostile rebuttals can unexpectedly lead to higher acceptance rates. Statistical
tests show that these biases persist under various Chain-of-Thought prompting
strategies, highlighting the lack of robust critical evaluation in current
LLMs. Our framework offers a practical methodology for diagnosing these
vulnerabilities, thereby contributing to the development of more reliable and
robust automated reviewing systems.
|
2502.12511
|
Myna: Masking-Based Contrastive Learning of Musical Representations
|
cs.SD cs.AI cs.LG
|
We present Myna, a simple yet effective approach for self-supervised musical
representation learning. Built on a contrastive learning framework, Myna
introduces two key innovations: (1) the use of a Vision Transformer (ViT) on
mel-spectrograms as the backbone and (2) a novel data augmentation strategy,
token masking, that masks 90 percent of spectrogram tokens. These innovations
deliver both effectiveness and efficiency: (i) Token masking enables a
significant increase in per-GPU batch size, from 48 or 120 in prior methods
(CLMR, MULE) to 4096. (ii) By avoiding traditional augmentations, Myna retains
pitch sensitivity, enhancing performance in tasks like key detection. (iii) The
use of vertical patches allows the model to better capture critical features
for key detection. Our hybrid model, Myna-22M-Hybrid, processes both 16x16 and
128x2 patches, achieving state-of-the-art results. Trained on a single GPU, it
outperforms MULE (62M) on average and rivals MERT-95M, which was trained on 16
and 64 GPUs, respectively. Additionally, it surpasses MERT-95M-public,
establishing itself as the best-performing model trained on publicly available
data. We release our code and models to promote reproducibility and facilitate
future research.
|
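The token-masking augmentation in the Myna abstract above can be sketched directly: keeping roughly 10% of tokens (masking 90%) shrinks each training sample and thus permits much larger per-GPU batches. The function below is a minimal hypothetical illustration, not Myna's actual implementation.

```python
import numpy as np

def mask_tokens(tokens, keep_ratio=0.1, rng=None):
    """Keep a random `keep_ratio` fraction of spectrogram tokens
    (Myna masks ~90%). Returns the kept tokens and their sorted positions,
    which a ViT-style model would consume together with position embeddings."""
    rng = np.random.default_rng(0) if rng is None else rng
    n = tokens.shape[0]
    n_keep = max(1, int(round(n * keep_ratio)))
    idx = np.sort(rng.choice(n, size=n_keep, replace=False))
    return tokens[idx], idx
```

Unlike pitch-shifting or time-stretching augmentations, dropping tokens leaves the surviving spectrogram content untouched, which is consistent with the abstract's claim that masking preserves pitch sensitivity.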
2502.12513
|
RealSyn: An Effective and Scalable Multimodal Interleaved Document
Transformation Paradigm
|
cs.CV
|
After pre-training on extensive image-text pairs, Contrastive Language-Image
Pre-training (CLIP) demonstrates promising performance on a wide variety of
benchmarks. However, a substantial volume of non-paired data, such as
multimodal interleaved documents, remains underutilized for vision-language
representation learning. To fully leverage these unpaired documents, we
initially establish a Real-World Data Extraction pipeline to extract
high-quality images and texts. Then we design a hierarchical retrieval method
to efficiently associate each image with multiple semantically relevant
realistic texts. To further enhance fine-grained visual information, we propose
an image semantic augmented generation module for synthetic text production.
Furthermore, we employ a semantic balance sampling strategy to improve dataset
diversity, enabling better learning of long-tail concepts. Based on these
innovations, we construct RealSyn, a dataset combining realistic and synthetic
texts, available in three scales: 15M, 30M, and 100M. Extensive experiments
demonstrate that RealSyn effectively advances vision-language representation
learning and exhibits strong scalability. Models pre-trained on RealSyn achieve
state-of-the-art performance on multiple downstream tasks. To facilitate future
research, the RealSyn dataset and pre-trained model weights are released at
https://github.com/deepglint/RealSyn.
|
2502.12514
|
Memory-updated-based Framework for 100% Reliable Flexible Flat Cables
Insertion
|
cs.RO
|
Automatic assembly lines have increasingly replaced human labor in various
tasks; however, the automation of Flexible Flat Cable (FFC) insertion remains
unrealized due to its high requirement for effective feedback and dynamic
operation, limiting approximately 11% of global industrial capacity. Although
many approaches, such as vision-based tactile sensors and reinforcement
learning, have been proposed, achieving human-like, highly reliable insertion
(i.e., a 100% success rate across completed insertions) remains a major
challenge. Drawing inspiration from human behavior in FFC insertion, which
involves sensing three-dimensional forces, translating them into physical
concepts, and continuously improving estimates, we propose a novel framework.
This framework includes a sensing module for collecting three-dimensional
tactile data, a perception module for interpreting this data into meaningful
physical signals, and a memory module based on Bayesian theory for reliability
estimation and control. This strategy enables the robot to accurately assess
its physical state and generate reliable status estimations and corrective
actions. Experimental results demonstrate that the robot using this framework
can detect alignment errors of 0.5 mm with an accuracy of 97.92% and then
achieve a 100% success rate in all completed tests after a few iterations. This
work addresses the challenges of unreliable perception and control in complex
insertion tasks, highlighting the path toward the development of fully
automated production lines.
|
2502.12516
|
Can LLMs Extract Frame-Semantic Arguments?
|
cs.CL
|
Frame-semantic parsing is a critical task in natural language understanding,
yet the ability of large language models (LLMs) to extract frame-semantic
arguments remains underexplored. This paper presents a comprehensive evaluation
of LLMs on frame-semantic argument identification, analyzing the impact of
input representation formats, model architectures, and generalization to unseen
and out-of-domain samples. Our experiments, spanning models from 0.5B to 78B
parameters, reveal that JSON-based representations significantly enhance
performance, and while larger models generally perform better, smaller models
can achieve competitive results through fine-tuning. We also introduce a novel
approach to frame identification leveraging predicted frame elements, achieving
state-of-the-art performance on ambiguous targets. Despite strong
generalization capabilities, our analysis finds that LLMs still struggle with
out-of-domain data.
|
2502.12518
|
New Constant Dimension Codes From the Inserting Mixed Dimension
Construction and Multilevel Construction
|
cs.IT math.IT
|
Constant dimension codes (CDCs) are essential for error correction in random
network coding. A fundamental problem of CDCs is to determine their maximal
possible size for given parameters. Inserting construction and multilevel
construction are two effective techniques for constructing CDCs. We first
provide a sufficient condition for a subspace to be added to the code from the
mixed dimension construction in Lao et al. (IEEE Trans. Inf. Theory 69(7):
4333-4344, 2023). By appropriately combining matrix blocks from small CDCs and
rank-metric codes, we introduce three inserting constructions based on the
mixed dimension construction. Furthermore, the mixed dimension construction and
these inserting constructions are improved by the multilevel construction that
is based on lifting rank-restricted Ferrers diagram rank-metric codes. Our
constructions yield some new lower bounds for CDCs, which are superior to the
previously best-known ones.
|
2502.12520
|
SAFEERASER: Enhancing Safety in Multimodal Large Language Models through
Multimodal Machine Unlearning
|
cs.CV
|
As Multimodal Large Language Models (MLLMs) develop, their potential security
issues have become increasingly prominent. Machine Unlearning (MU), as an
effective strategy for forgetting specific knowledge in training data, has been
widely used in privacy protection. However, MU for safety in MLLMs has yet to be
fully explored. To address this issue, we propose SAFEERASER, a safety
unlearning benchmark for MLLMs, consisting of 3,000 images and 28.8K VQA pairs.
We comprehensively evaluate unlearning methods from two perspectives: forget
quality and model utility. Our findings show that existing MU methods struggle
to maintain model performance while implementing the forget operation and often
suffer from over-forgetting. Hence, we introduce Prompt Decouple (PD) Loss to
alleviate over-forgetting by decoupling prompts during the unlearning process. To
quantitatively measure over-forgetting mitigated by PD Loss, we propose a new
metric called Safe Answer Refusal Rate (SARR). Experimental results demonstrate
that combining PD Loss with existing unlearning methods can effectively prevent
over-forgetting and achieve a 79.5% decrease in the SARR metric for LLaVA-7B
and LLaVA-13B, while maintaining forget quality and model utility. Our code and
dataset will be released upon acceptance. Warning: This paper contains examples
of harmful language and images, and reader discretion is recommended.
|
2502.12521
|
Inference-Time Computations for LLM Reasoning and Planning: A Benchmark
and Insights
|
cs.AI cs.LG
|
We examine the reasoning and planning capabilities of large language models
(LLMs) in solving complex tasks. Recent advances in inference-time techniques
demonstrate the potential to enhance LLM reasoning without additional training
by exploring intermediate steps during inference. Notably, OpenAI's o1 model
shows promising performance through its novel use of multi-step reasoning and
verification. Here, we explore how scaling inference-time techniques can
improve reasoning and planning, focusing on understanding the tradeoff between
computational cost and performance. To this end, we construct a comprehensive
benchmark, known as Sys2Bench, and perform extensive experiments evaluating
existing inference-time techniques on eleven diverse tasks across five
categories, including arithmetic reasoning, logical reasoning, common sense
reasoning, algorithmic reasoning, and planning. Our findings indicate that
simply scaling inference-time computation has limitations, as no single
inference-time technique consistently performs well across all reasoning and
planning tasks.
|
2502.12523
|
Cohesive Subgraph Discovery in Hypergraphs: A Locality-Driven Indexing
Framework
|
cs.SI
|
Hypergraphs are increasingly employed to model complex, diverse relationships
in modern networks, effectively capturing higher-order interactions. A critical
challenge in this domain is the discovery of cohesive subgraphs, which provides
valuable insights into hypergraph structures. However, selecting suitable
parameters for this task remains unresolved. To address this, we propose an
efficient indexing framework designed for online retrieval of cohesive
subgraphs. Our approach enables rapid identification of desired structures
without requiring exhaustive graph traversals, thus ensuring scalability and
practicality. This framework has broad applicability, supporting informed
decision-making across various domains by offering a comprehensive view of
network landscapes. Extensive experiments on real-world datasets demonstrate
the effectiveness and efficiency of our proposed indexing technique.
|
2502.12524
|
YOLOv12: Attention-Centric Real-Time Object Detectors
|
cs.CV cs.AI
|
Enhancing the network architecture of the YOLO framework has been crucial for
a long time, but has focused on CNN-based improvements despite the proven
superiority of attention mechanisms in modeling capabilities. This is because
attention-based models cannot match the speed of CNN-based models. This paper
proposes an attention-centric YOLO framework, namely YOLOv12, that matches the
speed of previous CNN-based ones while harnessing the performance benefits of
attention mechanisms. YOLOv12 surpasses all popular real-time object detectors
in accuracy with competitive speed. For example, YOLOv12-N achieves 40.6% mAP
with an inference latency of 1.64 ms on a T4 GPU, outperforming advanced
YOLOv10-N / YOLOv11-N by 2.1%/1.2% mAP with a comparable speed. This advantage
extends to other model scales. YOLOv12 also surpasses end-to-end real-time
detectors that improve DETR, such as RT-DETR / RT-DETRv2: YOLOv12-S beats
RT-DETR-R18 / RT-DETRv2-R18 while running 42% faster, using only 36% of the
computation and 45% of the parameters. More comparisons are shown in Figure 1.
|
2502.12525
|
From Abstract to Actionable: Pairwise Shapley Values for Explainable AI
|
cs.LG cs.AI
|
Explainable AI (XAI) is critical for ensuring transparency, accountability,
and trust in machine learning systems as black-box models are increasingly
deployed within high-stakes domains. Among XAI methods, Shapley values are
widely used for their fairness and consistency axioms. However, prevalent
Shapley value approximation methods commonly rely on abstract baselines or
computationally intensive calculations, which can limit their interpretability
and scalability. To address such challenges, we propose Pairwise Shapley
Values, a novel framework that grounds feature attributions in explicit,
human-relatable comparisons between pairs of data instances proximal in feature
space. Our method introduces pairwise reference selection combined with
single-value imputation to deliver intuitive, model-agnostic explanations while
significantly reducing computational overhead. Here, we demonstrate that
Pairwise Shapley Values enhance interpretability across diverse regression and
classification scenarios--including real estate pricing, polymer property
prediction, and drug discovery datasets. We conclude that the proposed methods
enable more transparent AI systems and advance the real-world applicability of
XAI.
|
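Reading the abstract above, the pairwise idea replaces an abstract baseline with a concrete reference instance: features outside a coalition take the reference's values. A brute-force Shapley computation under that convention (exponential in the number of features, so an illustration only, not the paper's efficient method) might look like:

```python
from itertools import combinations
from math import factorial

def pairwise_shapley(model, x, x_ref):
    """Exact Shapley attribution of model(x) - model(x_ref), using a
    paired reference instance as the baseline: features outside the
    coalition take their values from x_ref."""
    n = len(x)
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(n):
            # Standard Shapley weight for a coalition of size k.
            w = factorial(k) * factorial(n - k - 1) / factorial(n)
            for s in combinations(others, k):
                base = [x[j] if j in s else x_ref[j] for j in range(n)]
                with_i = list(base)
                with_i[i] = x[i]
                phi[i] += w * (model(with_i) - model(base))
    return phi
```

For a linear model the attributions reduce to coefficient times the feature difference from the reference, and by the efficiency axiom they always sum to model(x) - model(x_ref), which is what makes the pairwise comparison human-relatable.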
2502.12527
|
Comprehensive Assessment and Analysis for NSFW Content Erasure in
Text-to-Image Diffusion Models
|
cs.CV
|
Text-to-image (T2I) diffusion models have gained widespread application
across various domains, demonstrating remarkable creative potential. However,
the strong generalization capabilities of these models can inadvertently lead
them to generate NSFW content, even when NSFW content is filtered from the
training dataset, posing risks to their safe deployment. While several
concept erasure methods have been proposed to mitigate this issue, a
comprehensive evaluation of their effectiveness remains absent. To bridge this
gap, we present the first systematic investigation of concept erasure methods
for NSFW content and its sub-themes in text-to-image diffusion models. At the
task level, we provide a holistic evaluation of 11 state-of-the-art baseline
methods with 14 variants. Specifically, we analyze these methods from six
distinct assessment perspectives, including three conventional perspectives,
i.e., erasure proportion, image quality, and semantic alignment, and three new
perspectives, i.e., excessive erasure, the impact of explicit and implicit
unsafe prompts, and robustness. At the tool level, we perform a detailed
toxicity analysis of NSFW datasets and compare the performance of different
NSFW classifiers, offering deeper insights into their performance alongside a
compilation of comprehensive evaluation metrics. Our benchmark not only
systematically evaluates concept erasure methods, but also delves into the
underlying factors influencing their performance at the insight level. By
synthesizing insights from various evaluation perspectives, we provide a deeper
understanding of the challenges and opportunities in the field, offering
actionable guidance and inspiration for advancing research and practical
applications in concept erasure.
|
2502.12528
|
Contextual Linear Bandits with Delay as Payoff
|
cs.LG
|
A recent work by Schlisselberg et al. (2024) studies a delay-as-payoff model
for stochastic multi-armed bandits, where the payoff (either loss or reward) is
delayed for a period that is proportional to the payoff itself. While this
captures many real-world applications, the simple multi-armed bandit setting
limits the practicality of their results. In this paper, we address this
limitation by studying the delay-as-payoff model for contextual linear bandits.
Specifically, we start from the case with a fixed action set and propose an
efficient algorithm whose regret overhead compared to the standard no-delay
case is at most $D\Delta_{\max}\log T$, where $T$ is the total horizon, $D$ is
the maximum delay, and $\Delta_{\max}$ is the maximum suboptimality gap. When
payoff is loss, we also show further improvement of the bound, demonstrating a
separation between reward and loss similar to Schlisselberg et al. (2024).
Contrary to standard linear bandit algorithms that construct a least squares
estimator and confidence ellipsoid, the main novelty of our algorithm is to
apply a phased arm elimination procedure by only picking actions in a
volumetric spanner of the action set, which addresses challenges arising from
both payoff-dependent delays and large action sets. We further extend our
results to the case with varying action sets by adopting the reduction from
Hanna et al. (2023). Finally, we implement our algorithm and showcase its
effectiveness and superior performance in experiments.
|
2502.12529
|
Alternating Regret for Online Convex Optimization
|
cs.LG
|
Motivated by alternating learning dynamics in two-player games, a recent work
by Cevher et al. (2024) shows that $o(\sqrt{T})$ alternating regret is possible
for any $T$-round adversarial Online Linear Optimization (OLO) problem, and
left as an open question whether the same is true for general Online Convex
Optimization (OCO). We answer this question in the affirmative by showing that
the continuous Hedge algorithm achieves
$\tilde{\mathcal{O}}(d^{\frac{2}{3}}T^{\frac{1}{3}})$ alternating regret for
any adversarial $d$-dimensional OCO problem. We show that this implies an
alternating learning dynamic that finds a Nash equilibrium for any
convex-concave zero-sum games or a coarse correlated equilibrium for any convex
two-player general-sum games at a rate of
$\tilde{\mathcal{O}}(d^{\frac{2}{3}}/T^{\frac{2}{3}})$. To further improve the
time complexity and/or the dimension dependence, we propose another simple
algorithm, Follow-the-Regularized-Leader with a regularizer whose convex
conjugate is 3rd-order smooth, for OCO with smooth and self-concordant loss
functions (such as linear or quadratic losses). We instantiate our algorithm
with different regularizers and show that, for example, when the decision set
is the $\ell_2$ ball, our algorithm achieves
$\tilde{\mathcal{O}}(T^{\frac{2}{5}})$ alternating regret with no dimension
dependence (and a better $\tilde{\mathcal{O}}(T^{\frac{1}{3}})$ bound for
quadratic losses). We complement our results by showing some algorithm-specific
alternating regret lower bounds, including a somewhat surprising
$\Omega(\sqrt{T})$ lower bound for a Regret Matching variant that is widely
used in alternating learning dynamics.
|
2502.12530
|
Policy-to-Language: Train LLMs to Explain Decisions with Flow-Matching
Generated Rewards
|
cs.CL cs.LG
|
As humans increasingly share environments with diverse agents powered by RL,
LLMs, and beyond, the ability to explain their policies in natural language
will be vital for reliable coexistence. In this paper, we build a
model-agnostic explanation generator based on an LLM. The technical novelty is
that the rewards for training this LLM are generated by a generative flow
matching model. This model has a specially designed structure with a hidden
layer merged with an LLM to harness the linguistic cues of explanations into
generating appropriate rewards. Experiments on both RL and LLM tasks
demonstrate that our method can generate dense and effective rewards while
saving on expensive human feedback; it thus enables effective explanations and
even improves the accuracy of the decisions in original tasks.
|
2502.12531
|
GSCE: A Prompt Framework with Enhanced Reasoning for Reliable LLM-driven
Drone Control
|
cs.RO cs.AI
|
The integration of Large Language Models (LLMs) into robotic control,
including drones, has the potential to revolutionize autonomous systems.
Research studies have demonstrated that LLMs can be leveraged to support
robotic operations. However, when facing tasks with complex reasoning, concerns
and challenges are raised about the reliability of solutions produced by LLMs.
In this paper, we propose a prompt framework with enhanced reasoning to enable
reliable LLM-driven control for drones. Our framework consists of novel
technical components designed using Guidelines, Skill APIs, Constraints, and
Examples, namely GSCE. GSCE is distinguished by its reliable and
constraint-compliant code generation. We performed thorough experiments using
GSCE for drone control across a wide range of task complexities. Our
experiment results demonstrate that GSCE can significantly improve task success
rates and completeness compared to baseline approaches, highlighting its
potential for reliable LLM-driven autonomous drone systems.
|
2502.12532
|
CityEQA: A Hierarchical LLM Agent on Embodied Question Answering
Benchmark in City Space
|
cs.AI
|
Embodied Question Answering (EQA) has primarily focused on indoor
environments, leaving the complexities of urban settings (spanning
environment, action, and perception) largely unexplored. To bridge this gap,
we introduce CityEQA, a new task where an embodied agent answers
open-vocabulary questions through active exploration in dynamic city spaces. To
support this task, we present CityEQA-EC, the first benchmark dataset featuring
1,412 human-annotated tasks across six categories, grounded in a realistic 3D
urban simulator. Moreover, we propose Planner-Manager-Actor (PMA), a novel
agent tailored for CityEQA. PMA enables long-horizon planning and hierarchical
task execution: the Planner breaks down the question answering into sub-tasks,
the Manager maintains an object-centric cognitive map for spatial reasoning
during the process control, and the specialized Actors handle navigation,
exploration, and collection sub-tasks. Experiments demonstrate that PMA
achieves 60.7% of human-level answering accuracy, significantly outperforming
frontier-based baselines. While promising, the performance gap compared to
humans highlights the need for enhanced visual reasoning in CityEQA. This work
paves the way for future advancements in urban spatial intelligence. Dataset
and code are available at https://github.com/BiluYong/CityEQA.git.
|
2502.12534
|
NoKSR: Kernel-Free Neural Surface Reconstruction via Point Cloud
Serialization
|
cs.CV
|
We present a novel approach to large-scale point cloud surface reconstruction
by developing an efficient framework that converts an irregular point cloud
into a signed distance field (SDF). Our backbone builds upon recent
transformer-based architectures (i.e., PointTransformerV3) that serialize the
point cloud into a locality-preserving sequence of tokens. We efficiently
predict the SDF value at a point by aggregating nearby tokens, where fast
approximate neighbors can be retrieved thanks to the serialization. We
serialize the point cloud at different levels/scales, and non-linearly
aggregate a feature to predict the SDF value. We show that aggregating across
multiple scales is critical to overcome the approximations introduced by the
serialization (i.e., false negatives in the neighborhood). Our framework sets
a new state of the art in terms of accuracy and efficiency (better or similar
performance with half the latency of the best prior method, coupled with a
simpler implementation), particularly on outdoor datasets where sparse-grid
methods have shown limited performance.
|
2502.12535
|
Learning Transformation-Isomorphic Latent Space for Accurate Hand Pose
Estimation
|
cs.CV
|
Vision-based regression tasks, such as hand pose estimation, have achieved
higher accuracy and faster convergence through representation learning.
However, existing representation learning methods often encounter the following
issues: the high semantic level of features extracted from images is inadequate
for regressing low-level information, and the extracted features include
task-irrelevant information, reducing their compactness and interfering with
regression tasks. To address these challenges, we propose TI-Net, a highly
versatile visual Network backbone designed to construct a Transformation
Isomorphic latent space. Specifically, we employ linear transformations to
model geometric transformations in the latent space and ensure that {\rm
TI-Net} aligns them with those in the image space. This ensures that the latent
features capture compact, low-level information beneficial for pose estimation
tasks. We evaluated TI-Net on the hand pose estimation task to demonstrate the
network's superiority. On the DexYCB dataset, TI-Net achieved a 10% improvement
in the PA-MPJPE metric compared to specialized state-of-the-art (SOTA) hand
pose estimation methods. Our code will be released in the future.
|
2502.12536
|
An Algorithm Board in Neural Decoding
|
cs.NE cs.AI
|
Understanding the mechanisms of neural encoding and decoding has always been
a highly interesting research topic in fields such as neuroscience and
cognitive intelligence. In prior studies, some researchers identified a
symmetry in neural data decoded by unsupervised methods in motor scenarios and
constructed a cognitive learning system based on this pattern (i.e., symmetry).
Nevertheless, the distribution state of the data flow that significantly
influences neural decoding positions still remains a mystery within the system,
which further restricts the enhancement of the system's interpretability.
Building on this, this paper explores changes in the distribution state within
the system from the perspectives of machine learning and mathematical statistics.
In the experiment, we assessed the correctness of this symmetry using various
tools and indicators commonly utilized in mathematics and statistics. According
to the experimental results, the normal distribution (or Gaussian distribution)
plays a crucial role in the decoding of prediction positions within the system.
Eventually, an algorithm board similar to the Galton board was built to serve
as the mathematical foundation of the discovered symmetry.
|
2502.12537
|
Finding Optimal Trading History in Reinforcement Learning for Stock
Market Trading
|
cs.LG cs.AI
|
This paper investigates the optimization of temporal windows in Financial
Deep Reinforcement Learning (DRL) models using 2D Convolutional Neural Networks
(CNNs). We introduce a novel approach that treats the temporal field of the
CNN policy as a hyperparameter and examine its impact on model performance
across various datasets and feature arrangements, proposing that this temporal
field can and should be treated as a hyperparameter for these models. We
examine the significance of this temporal
field by iteratively expanding the window of observations presented to the CNN
policy during the deep reinforcement learning process. Our iterative process
involves progressively increasing the observation period from two weeks to
twelve weeks, allowing us to examine the effects of different temporal windows
on the model's performance. This window expansion is implemented in two
settings. In one setting, we rearrange the features in the dataset to group
them by company, allowing the model to have a full view of company data in its
observation window and CNN kernel. In the second setting, we do not group the
features by company, and features are arranged by category. Our study reveals
that shorter temporal windows are most effective when no feature rearrangement
to group per company is in effect. However, the model will utilize longer
temporal windows and yield better performance once we introduce the feature
rearrangement. To examine the consistency of our findings, we repeated our
experiment on two datasets containing the same thirty companies from the Dow
Jones Index but with different features in each dataset and consistently
observed the above-mentioned patterns. The result is a trading model that
significantly outperforms funds of global financial services firms, such as
the Global X Guru fund by the established Mirae Asset.
|
2502.12539
|
Design and Implementation of a Dual Uncrewed Surface Vessel Platform for
Bathymetry Research under High-flow Conditions
|
cs.RO cs.LG cs.SY eess.SY
|
Bathymetry, the study of underwater topography, relies on sonar mapping of
submerged structures. These measurements, critical for infrastructure health
monitoring, often require expensive instrumentation. The high financial risk
associated with sensor damage or vessel loss creates a reluctance to deploy
uncrewed surface vessels (USVs) for bathymetry. However, crewed-boat
bathymetry operations are costly, pose hazards to personnel, and frequently
fail to achieve the stable conditions necessary for bathymetry data collection,
especially under high currents. Further research is essential to advance
autonomous control, navigation, and data processing technologies, with a
particular focus on bathymetry. There is a notable lack of accessible hardware
platforms that allow for integrated research in both bathymetry-focused
autonomous control and navigation, as well as data evaluation and processing.
This paper addresses this gap through the design and implementation of two
complementary USV systems tailored for uncrewed bathymetry research. This
includes a low-cost USV for Navigation And Control research (NAC-USV) and a
second, high-end USV equipped with a high-resolution multi-beam sonar and the
associated hardware for Bathymetry data quality Evaluation and Post-processing
research (BEP-USV). The NAC-USV facilitates the investigation of autonomous,
fail-safe navigation and control, emphasizing the stability requirements for
high-quality bathymetry data collection while minimizing the risk to equipment.
The BEP-USV, which mirrors the NAC-USV hardware, is then used for additional
control validation and in-depth exploration of bathymetry data evaluation and
post-processing methodologies. We detail the design and implementation of both
systems and open-source the designs. Furthermore, we demonstrate the systems'
effectiveness in a range of operational scenarios.
|
2502.12541
|
When Segmentation Meets Hyperspectral Image: New Paradigm for
Hyperspectral Image Classification
|
cs.CV
|
Hyperspectral image (HSI) classification is a cornerstone of remote sensing,
enabling precise material and land-cover identification through rich spectral
information. While deep learning has driven significant progress in this task,
small patch-based classifiers, which account for over 90% of the progress, face
limitations: (1) the small patch (e.g., 7x7, 9x9)-based sampling approach
considers a limited receptive field, resulting in insufficient spatial
structural information critical for object-level identification and noise-like
misclassifications even within uniform regions; (2) undefined optimal patch
sizes lead to coarse label predictions, which degrade performance; and (3) a
lack of multi-shape awareness around objects. To address these challenges, we
draw inspiration from large-scale image segmentation techniques, which excel at
handling object boundaries-a capability essential for semantic labeling in HSI
classification. However, their application remains under-explored in this task
due to (1) the prevailing notion that larger patch sizes degrade performance,
(2) the extensive unlabeled regions in HSI groundtruth, and (3) the
misalignment of input shapes between HSI data and segmentation models. Thus, in
this study, we propose a novel paradigm and baseline, HSIseg, for HSI
classification that leverages segmentation techniques combined with a novel
Dynamic Shifted Regional Transformer (DSRT) to overcome these challenges. We
also introduce an intuitive progressive learning framework with adaptive
pseudo-labeling to iteratively incorporate unlabeled regions into the training
process, thereby advancing the application of segmentation techniques.
Additionally, we incorporate auxiliary data through multi-source data
collaboration, promoting better feature interaction. Validated on five public
HSI datasets, our proposal outperforms state-of-the-art methods.
|
2502.12542
|
Computing Voting Rules with Improvement Feedback
|
cs.GT cs.AI
|
Aggregating preferences under incomplete or constrained feedback is a
fundamental problem in social choice and related domains. While prior work has
established strong impossibility results for pairwise comparisons, this paper
extends the inquiry to improvement feedback, where voters express incremental
adjustments rather than complete preferences. We provide a complete
characterization of the positional scoring rules that can be computed given
improvement feedback. Interestingly, while plurality is learnable under
improvement feedback--unlike with pairwise feedback--strong impossibility
results persist for many other positional scoring rules. Furthermore, we show
that improvement feedback, unlike pairwise feedback, does not suffice for the
computation of any Condorcet-consistent rule. We complement our theoretical
findings with experimental results, providing further insights into the
practical implications of improvement feedback for preference aggregation.
|
2502.12545
|
IM360: Textured Mesh Reconstruction for Large-scale Indoor Mapping with
360$^\circ$ Cameras
|
cs.CV
|
We present a novel 3D reconstruction pipeline for 360$^\circ$ cameras for 3D
mapping and rendering of indoor environments. Traditional Structure-from-Motion
(SfM) methods may not work well in large-scale indoor scenes due to the
prevalence of textureless and repetitive regions. To overcome these challenges,
our approach (IM360) leverages the wide field of view of omnidirectional images
and integrates the spherical camera model into every core component of the SfM
pipeline. In order to develop a comprehensive 3D reconstruction solution, we
integrate a neural implicit surface reconstruction technique to generate
high-quality surfaces from sparse input data. Additionally, we utilize a
mesh-based neural rendering approach to refine texture maps and accurately
capture view-dependent properties by combining diffuse and specular components.
We evaluate our pipeline on large-scale indoor scenes from the Matterport3D and
Stanford2D3D datasets. In practice, IM360 demonstrates superior performance in
textured mesh reconstruction over SOTA methods. We observe accuracy
improvements in camera localization and registration, as well as in
rendering high-frequency details.
|
2502.12546
|
Spatiotemporal Multi-Camera Calibration using Freely Moving People
|
cs.CV
|
We propose a novel method for spatiotemporal multi-camera calibration using
freely moving people in multiview videos. Since calibrating multiple cameras
and finding matches across their views are inherently interdependent,
performing both in a unified framework poses a significant challenge. We
address these issues as a single registration problem of matching two sets of
3D points, leveraging human motion in dynamic multi-person scenes. To this end,
we utilize 3D human poses obtained from an off-the-shelf monocular 3D human
pose estimator and transform them into 3D points on a unit sphere, solving for
the rotation, time offset, and association in an alternating manner. We employ a
probabilistic approach that can jointly solve both problems of aligning
spatiotemporal data and establishing correspondences through soft assignment
between two views. The translation is determined by applying coplanarity
constraints. The pairwise registration results are integrated into a multiview
setup, and then a nonlinear optimization method is used to improve the accuracy
of the camera poses, temporal offsets, and multi-person associations. Extensive
experiments on synthetic and real data demonstrate the effectiveness and
flexibility of the proposed method as a practical marker-free calibration tool.
|
2502.12548
|
Improving the Stability of GNN Force Field Models by Reducing Feature
Correlation
|
cs.LG cs.AI
|
Recently, Graph Neural Network based Force Field (GNNFF) models are widely
used in Molecular Dynamics (MD) simulation, which is one of the most
cost-effective means in semiconductor material research. However, even such
models provide high accuracy in energy and force Mean Absolute Error (MAE) over
trained (in-distribution) datasets, they often become unstable during long-time
MD simulation when used for out-of-distribution datasets. In this paper, we
propose a feature correlation based method for GNNFF models to enhance the
stability of MD simulation. We reveal the negative relationship between feature
correlation and the stability of GNNFF models, and design a loss function with
a dynamic loss coefficient scheduler to reduce edge feature correlation that
can be applied in general GNNFF training. We also propose an empirical metric
to evaluate the stability in MD simulation. Experiments show our method can
significantly improve stability for GNNFF models especially in
out-of-distribution data with less than 3% computational overhead. For example,
our method extends the stable MD simulation time from 0.03 ps to 10 ps for the
Allegro model.
|
2502.12552
|
LLM Safety for Children
|
cs.CY cs.AI
|
This paper analyzes the safety of Large Language Models (LLMs) in
interactions with children below the age of 18. Despite the transformative
applications of LLMs in various aspects of children's lives such as education
and therapy, there remains a significant gap in understanding and mitigating
potential content harms specific to this demographic. The study acknowledges
the diverse nature of children often overlooked by standard safety evaluations
and proposes a comprehensive approach to evaluating LLM safety specifically for
children. We list potential risks that children may encounter when using
LLM-powered applications. Additionally, we develop Child User Models that
reflect the varied personalities and interests of children informed by
literature in child care and psychology. These user models aim to bridge the
existing gap in child safety literature across various fields. We utilize Child
User Models to evaluate the safety of six state-of-the-art LLMs. Our
observations reveal significant safety gaps in LLMs, particularly in categories
harmful to children but not to adults.
|
2502.12555
|
Warm Starting of CMA-ES for Contextual Optimization Problems
|
cs.NE
|
Several practical applications of evolutionary computation possess objective
functions that receive the design variables and externally given parameters.
Such problems are termed contextual optimization problems. These problems
require finding the optimal solutions corresponding to the given context
vectors. Existing contextual optimization methods train a policy model to
predict the optimal solution from context vectors. However, the performance of
such models is limited by their representation ability. By contrast, warm
starting methods have been used to initialize evolutionary algorithms on a
given problem using the optimization results on similar problems. Because warm
starting methods do not consider the context vectors, their performance can be
improved on contextual optimization problems. Herein, we propose a covariance
matrix adaptation evolution strategy with contextual warm starting (CMA-ES-CWS)
to efficiently optimize the contextual optimization problem with a given
context vector. The CMA-ES-CWS utilizes the optimization results of past
context vectors to train a multivariate Gaussian process regression model.
Subsequently, the CMA-ES-CWS performs warm starting for a given context vector
by initializing the search distribution using the posterior distribution of the
Gaussian process regression.
that CMA-ES-CWS outperforms the existing contextual optimization and warm
starting methods.
|
2502.12556
|
From Maneuver to Mishap: A Systematic Literature Review on U-Turn Safety
Risks
|
eess.SY cs.SY
|
Understanding the impacts of U-turn configurations on intersection safety and
traffic operations is essential for developing effective strategies to enhance
road safety and efficiency. Extensive research has been conducted to
investigate the role of geometric designs, driver behavior, and advanced
technologies in mitigating crash risks and improving traffic flow at U-turn
facilities. By synthesizing this collective body of work through the guidelines
of Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA),
this paper provides a valuable resource for transportation professionals,
policymakers, and researchers seeking evidence-based solutions. This systematic
review draws on studies from diverse traffic environments and regional
contexts, focusing on innovative design interventions, such as restricted
crossing U-turns (RCUTs) and median U-turn intersections (MUTs), as well as
integrated strategies leveraging technological advancements. By presenting a
comprehensive analysis of U-turn-related challenges and opportunities, this
review contributes to advancing transportation safety research and guiding the
development of adaptive strategies tailored to varied traffic conditions and
evolving technologies.
|
2502.12558
|
MomentSeeker: A Comprehensive Benchmark and A Strong Baseline For Moment
Retrieval Within Long Videos
|
cs.CV cs.AI
|
Retrieval augmented generation (RAG) holds great promise in addressing
challenges associated with long video understanding. These methods retrieve
useful moments from long videos for their presented tasks, thereby enabling
multimodal large language models (MLLMs) to generate high-quality answers in a
cost-effective way. In this work, we present MomentSeeker, a comprehensive
benchmark to evaluate retrieval models' performance in handling general
long-video moment retrieval (LVMR) tasks. MomentSeeker offers three key
advantages. First, it incorporates long videos of over 500 seconds on average,
making it the first benchmark specialized for long-video moment retrieval.
Second, it covers a wide range of task categories (including Moment Search,
Caption Alignment, Image-conditioned Moment Search, and Video-conditioned
Moment Search) and diverse application scenarios (e.g., sports, movies,
cartoons, and egocentric videos), making it a comprehensive tool for assessing retrieval
models' general LVMR performance. Additionally, the evaluation tasks are
carefully curated through human annotation, ensuring the reliability of
assessment. We further fine-tune an MLLM-based LVMR retriever on synthetic
data, which demonstrates strong performance on our benchmark. We perform
extensive experiments with various popular multimodal retrievers based on our
benchmark, whose results highlight the challenges of LVMR and the limitations
of existing methods. Our resources will be shared with the community to
advance future research in this field.
|
2502.12560
|
How does a Language-Specific Tokenizer affect LLMs?
|
cs.CL
|
The necessity of language-specific tokenizers intuitively appears crucial for
effective natural language processing, yet empirical analyses on their
significance and underlying reasons are lacking. This study explores how
language-specific tokenizers influence the behavior of Large Language Models
predominantly trained with English text data, through the case study of Korean.
The research unfolds in two main stages: (1) the development of a
Korean-specific extended tokenizer and (2) experiments to compare models with
the basic tokenizer and the extended tokenizer through various Next Token
Prediction tasks. Our in-depth analysis reveals that the extended tokenizer
decreases confidence in incorrect predictions during generation and reduces
cross-entropy in complex tasks, indicating a tendency to produce less
nonsensical outputs. Consequently, the extended tokenizer provides stability
during generation, potentially leading to higher performance in downstream
tasks.
|
2502.12561
|
UXAgent: An LLM Agent-Based Usability Testing Framework for Web Design
|
cs.HC cs.CL
|
Usability testing is a fundamental yet challenging research method for user
experience (UX) researchers to evaluate a web design: it is inflexible for
iterating on study design flaws, and recruiting study participants is hard. Recent
advances in Large Language Model-simulated Agent (LLM-Agent) research inspired
us to design UXAgent to support UX researchers in evaluating and iterating on
their usability testing study design before they conduct the real human-subject
study. Our system features an LLM-Agent module and a universal browser
connector module so that UX researchers can automatically generate thousands of
simulated users to test the target website. The results are shown in
qualitative (e.g., interviewing an agent about how it thinks), quantitative (e.g., # of
actions), and video recording formats for UX researchers to analyze. Through a
heuristic user evaluation with five UX researchers, participants praised the
innovation of our system but also expressed concerns about the future of LLM
Agent-assisted UX study.
|
2502.12562
|
SEA: Low-Resource Safety Alignment for Multimodal Large Language Models
via Synthetic Embeddings
|
cs.CL cs.CR cs.MM
|
Multimodal Large Language Models (MLLMs) have serious security
vulnerabilities. While safety alignment using multimodal datasets consisting of
text and data of additional modalities can effectively enhance MLLMs' security,
it is costly to construct these datasets. Existing low-resource security
alignment methods, including textual alignment, have been found to struggle
with the security risks posed by additional modalities. To address this, we
propose Synthetic Embedding augmented safety Alignment (SEA), which optimizes
embeddings of additional modality through gradient updates to expand textual
datasets. This enables multimodal safety alignment training even when only
textual data is available. Extensive experiments on image, video, and
audio-based MLLMs demonstrate that SEA can synthesize a high-quality embedding
on a single RTX3090 GPU within 24 seconds. SEA significantly improves the
security of MLLMs when faced with threats from additional modalities. To assess
the security risks introduced by video and audio, we also introduced a new
benchmark called VA-SafetyBench. High attack success rates across multiple
MLLMs validate its challenge. Our code and data will be available at
https://github.com/ZeroNLP/SEA.
|
2502.12563
|
Evaluating Language Models on Grooming Risk Estimation Using Fuzzy
Theory
|
cs.CL cs.AI cs.LG
|
Encoding implicit language presents a challenge for language models,
especially in high-risk domains where maintaining high precision is important.
Automated detection of online child grooming is one such critical domain, where
predators manipulate victims using a combination of explicit and implicit
language to convey harmful intentions. While recent studies have shown the
potential of Transformer language models like SBERT for preemptive grooming
detection, they primarily depend on surface-level features and approximate real
victim grooming processes using vigilante and law enforcement conversations.
The question of whether these features and approximations are reasonable has
not been addressed thus far. In this paper, we address this gap and study
whether SBERT can effectively discern varying degrees of grooming risk inherent
in conversations, and evaluate its results across different participant groups.
Our analysis reveals that while fine-tuning aids language models in learning to
assign grooming scores, they show high variance in predictions, especially for
contexts containing higher degrees of grooming risk. These errors appear in
cases that 1) utilize indirect speech pathways to manipulate victims and 2)
lack sexually explicit content. This finding underscores the necessity for
robust modeling of indirect speech acts by language models, particularly those
employed by predators.
|
2502.12564
|
Sample Efficient Omniprediction and Downstream Swap Regret for
Non-Linear Losses
|
cs.LG cs.GT
|
We define "decision swap regret" which generalizes both prediction for
downstream swap regret and omniprediction, and give algorithms for obtaining it
for arbitrary multi-dimensional Lipschitz loss functions in online adversarial
settings. We also give sample complexity bounds in the batch setting via an
online-to-batch reduction. When applied to omniprediction, our algorithm gives
the first polynomial sample-complexity bounds for Lipschitz loss functions --
prior bounds either applied only to linear loss (or binary outcomes) or scaled
exponentially with the error parameter even under the assumption that the loss
functions were convex. When applied to prediction for downstream regret, we
give the first algorithm capable of guaranteeing swap regret bounds for all
downstream agents with non-linear loss functions over a multi-dimensional
outcome space: prior work applied only to linear loss functions, modeling risk
neutral agents. Our general bounds scale exponentially with the dimension of
the outcome space, but we give improved regret and sample complexity bounds for
specific families of multidimensional functions of economic interest: constant
elasticity of substitution (CES), Cobb-Douglas, and Leontief utility functions.
|
2502.12565
|
Self Iterative Label Refinement via Robust Unlabeled Learning
|
cs.CL
|
Recent advances in large language models (LLMs) have yielded impressive
performance on various tasks, yet they often depend on high-quality feedback
that can be costly. Self-refinement methods attempt to leverage LLMs' internal
evaluation mechanisms with minimal human supervision; however, these approaches
frequently suffer from inherent biases and overconfidence, especially in
domains where the models lack sufficient internal knowledge, resulting in
performance degradation. As an initial step toward enhancing self-refinement
for broader applications, we introduce an iterative refinement pipeline that
employs the Unlabeled-Unlabeled learning framework to improve LLM-generated
pseudo-labels for classification tasks. By exploiting two unlabeled datasets
with differing positive class ratios, our approach iteratively denoises and
refines the initial pseudo-labels, thereby mitigating the adverse effects of
internal biases with minimal human supervision. Evaluations on diverse
datasets, including low-resource language corpora, patent classifications, and
protein structure categorizations, demonstrate that our method consistently
outperforms both initial LLM's classification performance and the
self-refinement approaches by cutting-edge models (e.g., GPT-4o and
DeepSeek-R1).
|
2502.12566
|
Exploring the Impact of Personality Traits on LLM Bias and Toxicity
|
cs.AI
|
With the different roles that AI is expected to play in human life, imbuing
large language models (LLMs) with different personalities has attracted
increasing research interest. While this "personification" enhances human
experiences of interactivity and adaptability of LLMs, it gives rise to
critical concerns about content safety, particularly regarding bias, sentiment
and toxicity of LLM generation. This study explores how assigning different
personality traits to LLMs affects the toxicity and biases of their outputs.
Leveraging the widely accepted HEXACO personality framework developed in social
psychology, we design experimentally sound prompts to test three LLMs'
performance on three toxicity and bias benchmarks. The findings demonstrate the
sensitivity of all three models to HEXACO personality traits and, more
importantly, a consistent variation in the biases, negative sentiment and
toxicity of their output. In particular, adjusting the levels of several
personality traits can effectively reduce bias and toxicity in model
outputs, mirroring the correlations between personality traits and toxic
behaviors observed in humans. The findings highlight the additional need to examine content
safety besides the efficiency of training or fine-tuning methods for LLM
personification. They also suggest a potential for the adjustment of
personalities to be a simple and low-cost method to conduct controlled text
generation.
|
2502.12567
|
DeltaDiff: A Residual-Guided Diffusion Model for Enhanced Image
Super-Resolution
|
cs.CV
|
Recently, the application of diffusion models in super-resolution tasks has
become a popular research direction. Existing work focuses on fully
migrating diffusion models to SR tasks. Diffusion models were originally
proposed for image generation, where, to make the generated results diverse,
they combine random Gaussian noise and distributed sampling to
increase the randomness of the model.
However, the essence of super-resolution tasks requires the model to generate
high-resolution images with fidelity. Excessive addition of random factors can
result in the model generating detailed information that does not belong to the
HR image. To address this issue, we propose a new diffusion model called
DeltaDiff, which uses only the residuals between images for diffusion, making the
entire diffusion process more stable. The experimental results show that our
method surpasses state-of-the-art models and generates results with better
fidelity. Our code and model are publicly available at
https://github.com/continueyang/DeltaDiff
|
2502.12568
|
A Cognitive Writing Perspective for Constrained Long-Form Text
Generation
|
cs.CL cs.AI
|
Like humans, Large Language Models (LLMs) struggle to generate high-quality
long-form text that adheres to strict requirements in a single pass. This
challenge is unsurprising, as successful human writing, according to the
Cognitive Writing Theory, is a complex cognitive process involving iterative
planning, translating, reviewing, and monitoring. Motivated by these cognitive
principles, we aim to equip LLMs with human-like cognitive writing capabilities
through CogWriter, a novel training-free framework that transforms LLM
constrained long-form text generation into a systematic cognitive writing
paradigm. Our framework consists of two key modules: (1) a Planning Agent that
performs hierarchical planning to decompose the task, and (2) multiple
Generation Agents that execute these plans in parallel. The system maintains
quality via continuous monitoring and reviewing mechanisms, which evaluate
outputs against specified requirements and trigger necessary revisions.
CogWriter demonstrates exceptional performance on LongGenBench, a benchmark for
complex constrained long-form text generation. Even when using Qwen-2.5-14B as
its backbone, CogWriter surpasses GPT-4o by 22% in complex instruction
completion accuracy while reliably generating texts exceeding 10,000 words. We
hope this cognitive science-inspired approach provides a paradigm for LLM
writing advancements:
\href{https://github.com/KaiyangWan/CogWriter}{CogWriter}.
|
2502.12569
|
Maximizing Value in Challenge the Champ Tournaments
|
cs.DS cs.GT cs.MA
|
A tournament is a method to decide the winner in a competition, and describes
the overall sequence in which matches between the players are held. While
deciding a worthy winner is the primary goal of a tournament, a close second is
to maximize the value generated for the matches played, with value for a match
measured in terms of tickets sold, television viewership, advertising
revenue, or other means. Tournament organizers often seed the players -- i.e.,
decide which matches are played -- to increase this value.
We study the value maximization objective in a particular tournament format
called Challenge the Champ. This is a simple tournament format where an
ordering of the players is decided. The first player in this order is the
initial champion. The remaining players in order challenge the current
champion; if a challenger wins, she replaces the current champion. We model the
outcome of a match between two players using a complete directed graph, called
a strength graph, with each player represented as a vertex, and the direction
of an edge indicating the winner in a match. The value-maximization objective
has been recently explored for knockout tournaments when the strength graph is
a directed acyclic graph (DAG).
We extend the investigation to Challenge the Champ tournaments and general
strength graphs. We study different representations of the value of each match,
and completely characterize the computational complexity of the problem.
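The challenge sequence described above can be simulated in a few lines (an illustrative sketch, not code from the paper; the `beats` and `value` structures for the strength graph and match values are assumptions):

```python
def challenge_the_champ(order, beats, value):
    """Simulate a Challenge the Champ tournament.

    order: list of players; order[0] is the initial champion.
    beats: set of (winner, loser) pairs -- the strength graph.
    value: dict mapping frozenset({a, b}) to the value of that match.
    Returns the final champion and the total value generated.
    """
    champ = order[0]
    total = 0
    for challenger in order[1:]:
        total += value[frozenset((champ, challenger))]
        if (challenger, champ) in beats:
            champ = challenger  # challenger dethrones the champion
    return champ, total
```

The seeding problem studied in the paper amounts to choosing `order` so that `total` is maximized over all orderings.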
|
2502.12570
|
GVTNet: Graph Vision Transformer For Face Super-Resolution
|
cs.CV
|
Recent advances in face super-resolution research have utilized the
Transformer architecture. This method processes the input image into a series
of small patches. However, because of the strong correlation between different
facial components in facial images, existing algorithms cannot handle the
relationships between patches well when super-resolving low-resolution images,
resulting in distorted facial components in the super-resolution results. To
solve this problem, we propose a transformer
architecture based on graph neural networks called graph vision transformer
network. We treat each patch as a graph node and establish an adjacency matrix
based on the information between patches. In this way, each patch interacts
only with its neighboring patches, better modeling the relationships among
facial components. Quantitative and visualization experiments have underscored the
superiority of our algorithm over state-of-the-art techniques. Through detailed
comparisons, we have demonstrated that our algorithm possesses more advanced
super-resolution capabilities, particularly in enhancing facial components. The
PyTorch code is available at https://github.com/continueyang/GVTNet
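The patch-graph construction can be sketched as a k-nearest-neighbour adjacency in feature space (an illustration of the idea, not the paper's implementation; the function name and distance choice are assumptions):

```python
import numpy as np

def patch_adjacency(patches, k=4):
    """Build a k-nearest-neighbour adjacency matrix over image patches,
    so each patch (graph node) interacts only with its closest
    neighbours in feature space.
    patches: (n, d) array of flattened patch features."""
    dist = ((patches[:, None] - patches[None]) ** 2).sum(-1)
    np.fill_diagonal(dist, np.inf)          # no self-edges
    nn = np.argsort(dist, axis=1)[:, :k]    # k nearest neighbours per patch
    A = np.zeros(dist.shape, dtype=int)
    A[np.arange(len(patches))[:, None], nn] = 1
    return A
```

A graph attention layer would then mask interactions by this adjacency instead of attending over all patch pairs.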
|
2502.12571
|
A Novel Gain Modeling Technique for LLC Resonant Converters based on The
Hybrid Deep-Learning/GMDH Neural Network
|
eess.SY cs.SY
|
This paper presents a novel hybrid approach for modeling the voltage gain of
LLC resonant converters by combining deep-learning neural networks with the
polynomial-based Group Method of Data Handling (GMDH). While deep learning
offers high accuracy in predicting nonlinear converter behavior, it produces
complex network models. GMDH neural networks, in contrast, yield simpler
algebraic equations that can be more convenient in converter design. By
training a deep network on data from an FPGA-based real-time simulator and then
using the network's predictions to train a GMDH model, the proposed hybrid
method achieves both high accuracy and design-friendly simplicity. Experimental
results show significant improvements over traditional methods such as First
Harmonic Approximation (FHA) and frequency domain corrections, particularly for
wide operating ranges.
|
2502.12574
|
HeadInfer: Memory-Efficient LLM Inference by Head-wise Offloading
|
cs.LG cs.AI
|
Transformer-based large language models (LLMs) demonstrate impressive
performance in long context generation. Extending the context length has
disproportionately shifted the memory footprint of LLMs during inference to the
key-value cache (KV cache). In this paper, we propose HEADINFER, which offloads
the KV cache to CPU RAM while avoiding the need to fully store the KV cache for
any transformer layer on the GPU. HEADINFER employs a fine-grained, head-wise
offloading strategy, maintaining only selected attention heads' KV cache on the
GPU while computing attention output dynamically. Through roofline analysis, we
demonstrate that HEADINFER maintains computational efficiency while
significantly reducing memory footprint. We evaluate HEADINFER on the
Llama-3-8B model with a 1-million-token sequence, reducing the GPU memory
footprint of the KV cache from 128 GB to 1 GB and the total GPU memory usage
from 207 GB to 17 GB, achieving a 92% reduction compared to BF16 baseline
inference. Notably, HEADINFER enables 4-million-token inference with an 8B
model on a single consumer GPU with 24GB memory (e.g., NVIDIA RTX 4090) without
approximation methods.
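The head-wise idea can be sketched as follows (an illustration in numpy, not the paper's implementation; `kv_store` stands in for the CPU-resident cache and the function name is an assumption):

```python
import numpy as np

def headwise_attention(q, kv_store, d_head):
    """Sketch of head-wise KV-cache offloading: the full cache lives in
    `kv_store` (standing in for CPU RAM), and only one head's K/V is
    "fetched" onto the device at a time, so the device never holds the
    whole layer's cache.
    q: (n_heads, d_head) queries for the current token.
    kv_store: dict mapping head index -> (K, V), each (seq_len, d_head)."""
    n_heads = len(kv_store)
    out = np.zeros((n_heads, d_head))
    for h in range(n_heads):
        K, V = kv_store[h]                   # fetch one head's cache
        scores = K @ q[h] / np.sqrt(d_head)
        w = np.exp(scores - scores.max())
        w /= w.sum()                         # softmax attention weights
        out[h] = w @ V                       # per-head attention output
    return out
```

Peak device memory then scales with one head's cache rather than the full layer's, at the cost of per-head transfers.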
|
2502.12575
|
DemonAgent: Dynamically Encrypted Multi-Backdoor Implantation Attack on
LLM-based Agent
|
cs.CR cs.AI
|
As LLM-based agents become increasingly prevalent, backdoors can be implanted
into agents through user queries or environment feedback, raising critical
concerns regarding safety vulnerabilities. However, backdoor attacks are
typically detectable by safety audits that analyze the reasoning process of
agents. To this end, we propose a novel backdoor implantation strategy called
\textbf{Dynamically Encrypted Multi-Backdoor Implantation Attack}.
Specifically, we introduce dynamic encryption, which maps the backdoor into
benign content, effectively circumventing safety audits. To enhance
stealthiness, we further decompose the backdoor into multiple sub-backdoor
fragments. Together, these designs allow backdoors to largely bypass safety
audits. Additionally, we present AgentBackdoorEval, a dataset
designed for the comprehensive evaluation of agent backdoor attacks.
Experimental results across multiple datasets demonstrate that our method
achieves an attack success rate nearing 100\% while maintaining a detection
rate of 0\%, illustrating its effectiveness in evading safety audits. Our
findings highlight the limitations of existing safety mechanisms in detecting
advanced attacks, underscoring the urgent need for more robust defenses against
backdoor threats. Code and data are available at
https://github.com/whfeLingYu/DemonAgent.
|
2502.12576
|
A Fuzzy Evaluation of Sentence Encoders on Grooming Risk Classification
|
cs.CL cs.AI cs.LG
|
With the advent of social media, children are becoming increasingly
vulnerable to the risk of grooming in online settings. Detecting grooming
instances in an online conversation poses a significant challenge as the
interactions are not necessarily sexually explicit, since the predators take
time to build trust and a relationship with their victim. Moreover, predators
evade detection using indirect and coded language. While previous studies have
fine-tuned Transformers to automatically identify grooming in chat
conversations, they overlook the impact of coded and indirect language on model
predictions, and how these align with human perceptions of grooming. In this
paper, we address this gap and evaluate bi-encoders on the task of classifying
different degrees of grooming risk in chat contexts, for three different
participant groups, i.e. law enforcement officers, real victims, and decoys.
Using a fuzzy-theoretic framework, we map human assessments of grooming
behaviors to estimate the actual degree of grooming risk. Our analysis reveals
that fine-tuned models fail to tag instances where the predator uses indirect
speech pathways and coded language to evade detection. Further, we find that
such instances are characterized by a higher presence of out-of-vocabulary
(OOV) words in samples, causing the model to misclassify. Our findings
highlight the need for more robust models to identify coded language from noisy
chat inputs in grooming contexts.
|
2502.12579
|
CHATS: Combining Human-Aligned Optimization and Test-Time Sampling for
Text-to-Image Generation
|
cs.CV
|
Diffusion models have emerged as a dominant approach for text-to-image
generation. Key components such as the human preference alignment and
classifier-free guidance play a crucial role in ensuring generation quality.
However, their independent application in current text-to-image models
continues to face significant challenges in achieving strong text-image
alignment, high generation quality, and consistency with human aesthetic
standards. In this work, we explore, for the first time, facilitating the
collaboration of human preference alignment and test-time sampling to unlock
the potential of text-to-image models. Consequently, we introduce CHATS
(Combining Human-Aligned optimization and Test-time Sampling), a novel
generative framework that separately models the preferred and dispreferred
distributions and employs a proxy-prompt-based sampling strategy to utilize the
useful information contained in both distributions. We observe that CHATS
exhibits exceptional data efficiency, achieving strong performance with only a
small, high-quality fine-tuning dataset. Extensive experiments demonstrate that
CHATS surpasses traditional preference alignment methods, setting new
state-of-the-art across various standard benchmarks.
|
2502.12581
|
The Majority Vote Paradigm Shift: When Popular Meets Optimal
|
stat.ML cs.AI cs.LG
|
Reliably labelling data typically requires annotations from multiple human
workers. However, humans are far from being perfect. Hence, it is a common
practice to aggregate labels gathered from multiple annotators to make a more
confident estimate of the true label. Among many aggregation methods, the
simple and well known Majority Vote (MV) selects the class label polling the
highest number of votes. However, despite its importance, the optimality of
MV's label aggregation has not been extensively studied. We address this gap in
our work by characterising the conditions under which MV achieves the
theoretically optimal lower bound on label estimation error. Our results
capture the tolerable limits on annotation noise under which MV can optimally
recover labels for a given class distribution. This certificate of optimality
provides a more principled approach to model selection for label aggregation as
an alternative to otherwise inefficient practices that sometimes rely on more
senior experts, gold labels, etc., all of which are marred by the same human
uncertainty despite huge time and monetary costs. Experiments on both synthetic
and real-world data corroborate our theoretical findings.
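The aggregation rule under study is simple enough to state in one line (a minimal sketch; ties here break by first-seen vote order, one of several possible conventions):

```python
from collections import Counter

def majority_vote(annotations):
    """Aggregate noisy labels from multiple annotators by Majority Vote:
    each item receives the class polling the highest number of votes.
    annotations: list of per-item vote lists."""
    return [Counter(votes).most_common(1)[0][0] for votes in annotations]
```

The paper's question is when this estimator matches the theoretically optimal lower bound on label-estimation error, given the annotation-noise level and class distribution.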
|
2502.12582
|
Adaptive Prototype Model for Attribute-based Multi-label Few-shot Action
Recognition
|
cs.CV
|
In real-world action recognition systems, incorporating more attributes helps
achieve a more comprehensive understanding of human behavior. However, using a
single model to simultaneously recognize multiple attributes can lead to a
decrease in accuracy. In this work, we propose a novel method, the Adaptive
Attribute Prototype Model (AAPM), for human action recognition, which captures
rich action-relevant attribute information and strikes a balance between
accuracy and robustness. Firstly, we introduce the Text-Constrain Module (TCM)
to incorporate textual information from potential labels, and constrain the
construction of different attributes' prototype representations. In addition, we
explore the Attribute Assignment Method (AAM) to address the issue of training
bias and increase robustness during the training process. Furthermore, we
construct a new video dataset with attribute-based multi-label called
Multi-Kinetics for evaluation, which contains various attribute labels (e.g.
action, scene, object, etc.) related to human behavior. Extensive experiments
demonstrate that our AAPM achieves the state-of-the-art performance in both
attribute-based multi-label few-shot action recognition and single-label
few-shot action recognition. The project and dataset are available at an
anonymous account https://github.com/theAAPM/AAPM
|
2502.12583
|
LongFaith: Enhancing Long-Context Reasoning in LLMs with Faithful
Synthetic Data
|
cs.CL
|
Despite the growing development of long-context large language models (LLMs),
data-centric approaches relying on synthetic data have been hindered by issues
related to faithfulness, which limit their effectiveness in enhancing model
performance on tasks such as long-context reasoning and question answering
(QA). These challenges are often exacerbated by misinformation caused by lack
of verification, reasoning without attribution, and potential knowledge
conflicts. We propose LongFaith, a novel pipeline for synthesizing faithful
long-context reasoning instruction datasets. By integrating ground truth and
citation-based reasoning prompts, we eliminate distractions and improve the
accuracy of reasoning chains, thus mitigating the need for costly verification
processes. We open-source two synthesized datasets, LongFaith-SFT and
LongFaith-PO, which systematically address multiple dimensions of faithfulness,
including verified reasoning, attribution, and contextual grounding. Extensive
experiments on multi-hop reasoning datasets and LongBench demonstrate that
models fine-tuned on these datasets significantly improve performance. Our
ablation studies highlight the scalability and adaptability of the LongFaith
pipeline, showcasing its broad applicability in developing long-context LLMs.
|
2502.12584
|
Enhancing Semi-supervised Learning with Noisy Zero-shot Pseudolabels
|
cs.LG cs.AI
|
Semi-supervised learning (SSL) leverages limited labeled data alongside
abundant unlabeled data to address labeling costs in machine learning. While
recent foundation models enable zero-shot inference, attempts to integrate
these capabilities into SSL through pseudo-labeling have shown mixed results
due to unreliable zero-shot predictions. We present ZMT (Zero-Shot Multi-Task
Learning), a framework that jointly optimizes zero-shot pseudo-labels and
unsupervised representation learning objectives from contemporary SSL
approaches. Our method introduces a multi-task learning-based mechanism that
incorporates pseudo-labels while ensuring robustness to varying pseudo-label
quality. Experiments across 8 datasets in vision, language, and audio domains
demonstrate that ZMT reduces error by up to 56% compared to traditional SSL
methods, with particularly compelling results when pseudo-labels are noisy and
unreliable. ZMT represents a significant step toward making semi-supervised
learning more effective and accessible in resource-constrained environments.
|
2502.12586
|
G-Refer: Graph Retrieval-Augmented Large Language Model for Explainable
Recommendation
|
cs.IR cs.CL
|
Explainable recommendation has demonstrated significant advantages in
informing users about the logic behind recommendations, thereby increasing
system transparency, effectiveness, and trustworthiness. To provide
personalized and interpretable explanations, existing works often combine the
generation capabilities of large language models (LLMs) with collaborative
filtering (CF) information. CF information extracted from the user-item
interaction graph captures the user behaviors and preferences, which is crucial
for providing informative explanations. However, due to the complexity of graph
structure, effectively extracting the CF information from graphs still remains
a challenge. Moreover, existing methods often struggle with the integration of
extracted CF information with LLMs due to its implicit representation and the
modality gap between graph structures and natural language explanations. To
address these challenges, we propose G-Refer, a framework using graph
retrieval-augmented large language models (LLMs) for explainable
recommendation. Specifically, we first employ a hybrid graph retrieval
mechanism to retrieve explicit CF signals from both structural and semantic
perspectives. The retrieved CF information is explicitly formulated as
human-understandable text by the proposed graph translation and grounds the
explanations generated by LLMs. To bridge the modality gap, we introduce
knowledge pruning and retrieval-augmented fine-tuning to enhance the ability of
LLMs to process and utilize the retrieved CF information to generate
explanations. Extensive experiments show that G-Refer achieves superior
performance compared with existing methods in both explainability and
stability. Codes and data are available at https://github.com/Yuhan1i/G-Refer.
|
2502.12587
|
RSMLP: A light Sampled MLP Structure for Incomplete Utterance Rewrite
|
cs.CL cs.AI
|
The Incomplete Utterance Rewriting (IUR) task has garnered significant
attention in recent years. Its goal is to reconstruct conversational utterances
to better align with the current context, thereby enhancing comprehension. In
this paper, we introduce a novel and versatile lightweight method,
Rewritten-Sampled MLP (RSMLP). By employing an MLP based architecture with a
carefully designed down-sampling strategy, RSMLP effectively extracts latent
semantic information between utterances and makes appropriate edits to restore
incomplete utterances. Due to its simple yet efficient structure, our method
achieves competitive performance on public IUR datasets and in real-world
applications.
|
2502.12589
|
RM-PoT: Reformulating Mathematical Problems and Solving via Program of
Thoughts
|
cs.AI
|
Recently, substantial advancements have been made in training language models
to carry out step-by-step reasoning for solving intricate numerical reasoning
tasks. Beyond the methods used to solve these problems, the structure and
formulation of the problems themselves also play a crucial role in determining
the performance of large language models. We observe that even small changes in
the surface form of mathematical problems can have a profound impact on both
the answer distribution and solve rate. This highlights the vulnerability of
LLMs to surface-level variations, revealing their limited robustness when
reasoning through complex problems. In this paper, we propose RM-PoT, a
three-stage framework that integrates problem reformulation (RM), code-aided
reasoning (PoT), and domain-aware few-shot learning to address these
limitations. Our approach first reformulates the input problem into diverse
surface forms to reduce structural bias, then retrieves five semantically
aligned examples from a pre-constructed domain-specific question bank to
provide contextual guidance, and finally generates executable Python code for
precise computation.
|
2502.12591
|
CutPaste&Find: Efficient Multimodal Hallucination Detector with
Visual-aid Knowledge Base
|
cs.CV cs.CL
|
Large Vision-Language Models (LVLMs) have demonstrated impressive multimodal
reasoning capabilities, but they remain susceptible to hallucination,
particularly object hallucination where non-existent objects or incorrect
attributes are fabricated in generated descriptions. Existing detection methods
achieve strong performance but rely heavily on expensive API calls and
iterative LVLM-based validation, making them impractical for large-scale or
offline use. To address these limitations, we propose CutPaste\&Find, a
lightweight and training-free framework for detecting hallucinations in
LVLM-generated outputs. Our approach leverages off-the-shelf visual and
linguistic modules to perform multi-step verification efficiently without
requiring LVLM inference. At the core of our framework is a Visual-aid
Knowledge Base that encodes rich entity-attribute relationships and associated
image representations. We introduce a scaling factor to refine similarity
scores, mitigating the issue of suboptimal alignment values even for
ground-truth image-text pairs. Comprehensive evaluations on benchmark datasets,
including POPE and R-Bench, demonstrate that CutPaste\&Find achieves
competitive hallucination detection performance while being significantly more
efficient and cost-effective than previous methods.
|
2502.12594
|
PASER: Post-Training Data Selection for Efficient Pruned Large Language
Model Recovery
|
cs.CL
|
Model pruning is an effective approach for compressing large language models.
However, this process often leads to significant degradation of model
capabilities. While post-training techniques such as instruction tuning are
commonly employed to recover model performance, existing methods often overlook
the uneven deterioration of model capabilities and incur high computational
costs. Moreover, some instruction data irrelevant to model capability recovery
may introduce negative effects. To address these challenges, we propose the
\textbf{P}ost-training d\textbf{A}ta \textbf{S}election method for
\textbf{E}fficient pruned large language model \textbf{R}ecovery
(\textbf{PASER}). PASER aims to identify instructions where model capabilities
are most severely compromised within a certain recovery data budget. Our
approach first applies manifold learning and spectral clustering to group
recovery data in the semantic space, revealing capability-specific instruction
sets. We then adaptively allocate the data budget to different clusters based
on the degrees of model capability degradation. In each cluster, we prioritize
data samples where model performance has declined dramatically. To mitigate
potential negative transfer, we also detect and filter out conflicting or
irrelevant recovery data. Extensive experiments demonstrate that PASER
significantly outperforms conventional baselines, effectively recovering the
general capabilities of pruned LLMs while utilizing merely 4\%-20\% of the
original post-training data.
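The capability-aware budget split can be sketched as a proportional allocation (a minimal illustration of the idea, not the paper's algorithm; the function name and rounding are assumptions):

```python
def allocate_budget(degradation, total_budget):
    """Allocate a recovery-data budget across capability clusters in
    proportion to each cluster's measured degradation, so the most
    damaged capabilities receive the most recovery data.
    degradation: dict cluster -> nonnegative degradation score."""
    total = sum(degradation.values())
    return {cluster: round(total_budget * d / total)
            for cluster, d in degradation.items()}
```

Within each cluster, samples would then be ranked by how sharply model performance declined, and conflicting or irrelevant data filtered out.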
|
2502.12598
|
Bring Your Own Knowledge: A Survey of Methods for LLM Knowledge
Expansion
|
cs.CL
|
Adapting large language models (LLMs) to new and diverse knowledge is
essential for their lasting effectiveness in real-world applications. This
survey provides an overview of state-of-the-art methods for expanding the
knowledge of LLMs, focusing on integrating various knowledge types, including
factual information, domain expertise, language proficiency, and user
preferences. We explore techniques, such as continual learning, model editing,
and retrieval-based explicit adaptation, while discussing challenges like
knowledge consistency and scalability. Designed as a guide for researchers and
practitioners, this survey sheds light on opportunities for advancing LLMs as
adaptable and robust knowledge systems.
|
2502.12599
|
Learning a High-quality Robotic Wiping Policy Using Systematic Reward
Analysis and Visual-Language Model Based Curriculum
|
cs.RO cs.LG
|
Autonomous robotic wiping is an important task in various industries, ranging
from industrial manufacturing to sanitization in healthcare. Deep reinforcement
learning (Deep RL) has emerged as a promising algorithm, however, it often
suffers from a high demand for repetitive reward engineering. Instead of
relying on manual tuning, we first analyze the convergence of quality-critical
robotic wiping, which requires both high-quality wiping and fast task
completion, to show the poor convergence of the problem and propose a new
bounded reward formulation to make the problem feasible. Then, we further
improve the learning process by proposing a novel visual-language model (VLM)
based curriculum, which actively monitors the progress and suggests
hyperparameter tuning. We demonstrate that the combined method can find a
desirable wiping policy on surfaces with various curvatures, frictions, and
waypoints, which cannot be learned with the baseline formulation. The demo of
this project can be found at: https://sites.google.com/view/highqualitywiping.
|
2502.12600
|
Revisiting the Generalization Problem of Low-level Vision Models Through
the Lens of Image Deraining
|
cs.CV
|
Generalization remains a significant challenge for low-level vision models,
which often struggle with unseen degradations in real-world scenarios despite
their success in controlled benchmarks. In this paper, we revisit the
generalization problem in low-level vision models. Image deraining is selected
as a case study due to its well-defined and easily decoupled structure,
allowing for more effective observation and analysis. Through comprehensive
experiments, we reveal that the generalization issue is not primarily due to
limited network capacity but rather the failure of existing training
strategies, which leads networks to overfit specific degradation patterns. Our
findings show that guiding networks to focus on learning the underlying image
content, rather than the degradation patterns, is key to improving
generalization. We demonstrate that balancing the complexity of background
images and degradations in the training data helps networks better fit the
image distribution. Furthermore, incorporating content priors from pre-trained
generative models significantly enhances generalization. Experiments on both
image deraining and image denoising validate the proposed strategies. We
believe the insights and solutions will inspire further research and improve
the generalization of low-level vision models.
|
2502.12601
|
COPU: Conformal Prediction for Uncertainty Quantification in Natural
Language Generation
|
cs.CL
|
Uncertainty Quantification (UQ) for Natural Language Generation (NLG) is
crucial for assessing the performance of Large Language Models (LLMs), as it
reveals confidence in predictions, identifies failure modes, and gauges output
reliability. Conformal Prediction (CP), a model-agnostic method that generates
prediction sets with a specified error rate, has been adopted for UQ in
classification tasks, where the size of the prediction set indicates the
model's uncertainty. However, when adapting CP to NLG, the sampling-based
method for generating candidate outputs cannot guarantee the inclusion of the
ground truth, limiting its applicability across a wide range of error rates. To
address this, we propose COPU, a method that explicitly adds the ground
truth to the candidate outputs and uses logit scores to measure nonconformity.
Our experiments with six LLMs on four NLG tasks show that COPU
outperforms baseline methods in calibrating error rates and empirical coverage
rates, offering accurate UQ across a wide range of user-specified error rates.
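The conformal-set construction can be sketched with split conformal prediction (an illustration of the general recipe, not the paper's exact procedure; nonconformity here is simply a per-candidate score, and the function name is an assumption):

```python
import numpy as np

def conformal_sets(cal_scores, test_candidates, alpha=0.1):
    """Split conformal prediction sketch: the (1 - alpha) quantile of
    calibration nonconformity scores (e.g., negative logit scores of
    the ground truth) gives a threshold; every candidate whose
    nonconformity falls at or below it enters the prediction set.
    cal_scores: nonconformity scores on a calibration set.
    test_candidates: list of [(candidate, nonconformity), ...] lists."""
    n = len(cal_scores)
    level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
    q = np.quantile(cal_scores, level, method="higher")
    return [[c for c, s in cands if s <= q] for cands in test_candidates]
```

Larger prediction sets at a fixed `alpha` then indicate higher model uncertainty on that input.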
|
2502.12602
|
Learning-based Dynamic Robot-to-Human Handover
|
cs.RO
|
This paper presents a novel learning-based approach to dynamic robot-to-human
handover, addressing the challenges of delivering objects to a moving receiver.
We hypothesize that dynamic handover, where the robot adjusts to the receiver's
movements, results in more efficient and comfortable interaction compared to
static handover, where the receiver is assumed to be stationary. To validate
this, we developed a nonparametric method for generating continuous handover
motion, conditioned on the receiver's movements, and trained the model using a
dataset of 1,000 human-to-human handover demonstrations. We integrated
preference learning for improved handover effectiveness and applied impedance
control to ensure user safety and adaptiveness. The approach was evaluated in
both simulation and real-world settings, with user studies demonstrating that
dynamic handover significantly reduces handover time and improves user comfort
compared to static methods. Videos and demonstrations of our approach are
available at https://zerotohero7886.github.io/dyn-r2h-handover .
|
2502.12603
|
Disentangling Long-Short Term State Under Unknown Interventions for
Online Time Series Forecasting
|
cs.LG cs.AI
|
Current methods for time series forecasting struggle in the online scenario,
since it is difficult to preserve long-term dependency while adapting
short-term changes when data are arriving sequentially. Although some recent
methods solve this problem by controlling the updates of latent states, they
cannot disentangle the long/short-term states, leading to the inability to
effectively adapt to nonstationarity. To tackle this challenge, we propose a
general framework to disentangle long/short-term states for online time series
forecasting. Our idea is inspired by the observation that short-term changes
can be caused by unknown interventions, such as abrupt policy shifts in the
stock market.
Based on this insight, we formalize a data generation process with unknown
interventions on short-term states. Under mild assumptions, we further leverage
the independence of short-term states caused by unknown interventions to establish
the identification theory to achieve the disentanglement of long/short-term
states. Built on this theory, we develop a long short-term disentanglement
model (LSTD) to extract the long/short-term states with long/short-term
encoders, respectively. Furthermore, the LSTD model incorporates a smooth
constraint to preserve the long-term dependencies and an interrupted dependency
constraint to enforce the forgetting of short-term dependencies, together
boosting the disentanglement of long/short-term states. Experimental results on
several benchmark datasets show that our \textbf{LSTD} model outperforms
existing methods for online time series forecasting, validating its efficacy in
real-world applications.
|
2502.12604
|
S2C: Learning Noise-Resistant Differences for Unsupervised Change
Detection in Multimodal Remote Sensing Images
|
cs.CV
|
Unsupervised Change Detection (UCD) in multimodal Remote Sensing (RS) images
remains a difficult challenge due to the inherent spatio-temporal complexity
within data, and the heterogeneity arising from different imaging sensors.
Inspired by recent advancements in Visual Foundation Models (VFMs) and
Contrastive Learning (CL) methodologies, this research aims to develop CL
methodologies to translate implicit knowledge in VFM into change
representations, thus eliminating the need for explicit supervision. To this
end, we introduce a Semantic-to-Change (S2C) learning framework for UCD in both
homogeneous and multimodal RS images. Differently from existing CL
methodologies that typically focus on learning multi-temporal similarities, we
introduce a novel triplet learning strategy that explicitly models temporal
differences, which are crucial to the CD task. Furthermore, random spatial and
spectral perturbations are introduced during the training to enhance robustness
to temporal noise. In addition, a grid sparsity regularization is defined to
suppress insignificant changes, and an IoU-matching algorithm is developed to
refine the CD results. Experiments on four benchmark CD datasets demonstrate
that the proposed S2C learning framework achieves significant improvements in
accuracy, surpassing current state-of-the-art by over 31\%, 9\%, 23\%, and
15\%, respectively. It also demonstrates robustness and sample efficiency,
suitable for training and adaptation of various Visual Foundation Models (VFMs)
or backbone neural networks. The relevant code will be available at:
github.com/DingLei14/S2C.
|
2502.12605
|
Hypernetwork-based approach for optimal composition design in partially
controlled multi-agent systems
|
cs.MA cs.LG
|
Partially Controlled Multi-Agent Systems (PCMAS) are comprised of
controllable agents, managed by a system designer, and uncontrollable agents,
operating autonomously. This study addresses an optimal composition design
problem in PCMAS, which involves the system designer's problem, determining the
optimal number and policies of controllable agents, and the uncontrollable
agents' problem, identifying their best-response policies. Solving this
bi-level optimization problem is computationally intensive, as it requires
repeatedly solving multi-agent reinforcement learning problems under various
compositions for both types of agents. To address these challenges, we propose
a novel hypernetwork-based framework that jointly optimizes the system's
composition and agent policies. Unlike traditional methods that train separate
policy networks for each composition, the proposed framework generates policies
for both controllable and uncontrollable agents through a unified hypernetwork.
This approach enables efficient information sharing across similar
configurations, thereby reducing computational overhead. Additional
improvements are achieved by incorporating reward parameter optimization and
mean action networks. Using real-world New York City taxi data, we demonstrate
that our framework outperforms existing methods in approximating equilibrium
policies. Our experimental results show significant improvements in key
performance metrics, such as order response rate and served demand,
highlighting the practical utility of controlling agents and their potential to
enhance decision-making in PCMAS.
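The core idea of generating composition-conditioned policies from a single hypernetwork can be sketched in a few lines. All dimensions and the linear softmax policy below are illustrative assumptions, not the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (not from the paper).
COMP_DIM = 2                 # composition descriptor, e.g. normalized agent counts
OBS_DIM, ACT_DIM, HIDDEN = 4, 3, 16

# Hypernetwork: a small MLP that emits the weights of a linear softmax policy.
W1 = rng.normal(0.0, 0.1, (COMP_DIM, HIDDEN))
W2 = rng.normal(0.0, 0.1, (HIDDEN, OBS_DIM * ACT_DIM + ACT_DIM))

def generate_policy(composition):
    """Map a composition descriptor to the parameters of a linear policy."""
    h = np.tanh(composition @ W1)
    theta = h @ W2
    W_pi = theta[: OBS_DIM * ACT_DIM].reshape(OBS_DIM, ACT_DIM)
    b_pi = theta[OBS_DIM * ACT_DIM:]
    return W_pi, b_pi

def act(obs, composition):
    """Action distribution conditioned on both observation and composition."""
    W_pi, b_pi = generate_policy(composition)
    logits = obs @ W_pi + b_pi
    e = np.exp(logits - logits.max())
    return e / e.sum()

obs = rng.normal(size=OBS_DIM)
p_few = act(obs, np.array([0.2, 0.8]))    # few controllable agents
p_many = act(obs, np.array([0.8, 0.2]))   # many controllable agents
```

One set of hypernetwork weights serves every composition, which is what allows information sharing across similar configurations instead of training a separate policy network per composition.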
|
2502.12607
|
Generalized Kernel Inducing Points by Duality Gap for Dataset
Distillation
|
stat.ML cs.LG
|
We propose Duality Gap KIP (DGKIP), an extension of the Kernel Inducing
Points (KIP) method for dataset distillation. While existing dataset
distillation methods often rely on bi-level optimization, DGKIP eliminates the
need for such optimization by leveraging duality theory in convex programming.
The KIP method has been introduced as a way to avoid bi-level optimization;
however, it is limited to the squared loss and does not support other loss
functions (e.g., cross-entropy or hinge loss) that are more suitable for
classification tasks. DGKIP addresses this limitation by exploiting an upper
bound on parameter changes after dataset distillation using the duality gap,
enabling its application to a wider range of loss functions. We also
characterize theoretical properties of DGKIP by providing upper bounds on the
test error and prediction consistency after dataset distillation. Experimental
results on standard benchmarks such as MNIST and CIFAR-10 demonstrate that
DGKIP retains the efficiency of KIP while offering broader applicability and
robust performance.
|
2502.12608
|
Unveiling Mode Connectivity in Graph Neural Networks
|
cs.LG cs.AI
|
A fundamental challenge in understanding graph neural networks (GNNs) lies in
characterizing their optimization dynamics and loss landscape geometry,
critical for improving interpretability and robustness. While mode
connectivity, a lens for analyzing the geometric properties of loss landscapes, has proven insightful for other deep learning architectures, its implications for
GNNs remain unexplored. This work presents the first investigation of mode
connectivity in GNNs. We uncover that GNNs exhibit distinct non-linear mode
connectivity, diverging from patterns observed in fully-connected networks or
CNNs. Crucially, we demonstrate that graph structure, rather than model
architecture, dominates this behavior, with graph properties like homophily
correlating with mode connectivity patterns. We further establish a link
between mode connectivity and generalization, proposing a generalization bound
based on loss barriers and revealing its utility as a diagnostic tool. Our
findings further bridge theoretical insights with practical implications: they
rationalize domain alignment strategies in graph learning and provide a
foundation for refining GNN training paradigms.
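The loss-barrier quantity underlying mode connectivity is easy to state concretely. The sketch below measures it for a toy 1-2-1 ReLU network, where two functionally identical minima (related by a hidden-unit permutation) are separated by a barrier on the linear path between them; the paper studies the analogous quantity for GNNs:

```python
import numpy as np

def mse_loss(theta, X, t):
    """MSE of a tiny 1-2-1 ReLU net; theta = [w1, w2, v1, v2]."""
    w1, w2, v1, v2 = theta
    pred = v1 * np.maximum(w1 * X, 0.0) + v2 * np.maximum(w2 * X, 0.0)
    return float(np.mean((pred - t) ** 2))

def loss_barrier(theta_a, theta_b, X, t, steps=51):
    """Max loss on the linear path between two solutions, minus the larger
    endpoint loss; a zero barrier indicates linear mode connectivity."""
    alphas = np.linspace(0.0, 1.0, steps)
    path = [mse_loss((1 - a) * theta_a + a * theta_b, X, t) for a in alphas]
    return max(path) - max(path[0], path[-1])

X = np.linspace(-1.0, 1.0, 101)
t = X                                       # target: the identity function
theta_a = np.array([1.0, -1.0, 1.0, -1.0])  # one zero-loss minimum
theta_b = np.array([-1.0, 1.0, -1.0, 1.0])  # permuted, functionally identical minimum
barrier = loss_barrier(theta_a, theta_b, X, t)  # positive: the path leaves the valley
```

At the midpoint all parameters cancel to zero, so the interpolated network outputs a constant and the loss spikes, even though both endpoints compute exactly the same function.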
|
2502.12611
|
Who Writes What: Unveiling the Impact of Author Roles on AI-generated
Text Detection
|
cs.CL
|
The rise of Large Language Models (LLMs) necessitates accurate AI-generated
text detection. However, current approaches largely overlook the influence of
author characteristics. We investigate how sociolinguistic attributes (gender, CEFR proficiency, academic field, and language environment) impact
state-of-the-art AI text detectors. Using the ICNALE corpus of human-authored
texts and parallel AI-generated texts from diverse LLMs, we conduct a rigorous
evaluation employing multi-factor ANOVA and weighted least squares (WLS). Our
results reveal significant biases: CEFR proficiency and language environment
consistently affected detector accuracy, while gender and academic field showed
detector-dependent effects. These findings highlight the crucial need for
socially aware AI text detection to avoid unfairly penalizing specific
demographic groups. We offer novel empirical evidence, a robust statistical
framework, and actionable insights for developing more equitable and reliable
detection systems in real-world, out-of-domain contexts. This work paves the
way for future research on bias mitigation, inclusive evaluation benchmarks,
and socially responsible LLM detectors.
|
2502.12614
|
Label Drop for Multi-Aspect Relation Modeling in Universal Information
Extraction
|
cs.CL cs.AI
|
Universal Information Extraction (UIE) has garnered significant attention due
to its ability to address model explosion problems effectively. Extractive UIE
can achieve strong performance using a relatively small model, making it widely
adopted. Extractive UIE methods generally rely on task instructions for different
tasks, including single-target instructions and multiple-target instructions.
Single-target instruction UIE enables the extraction of only one type of
relation at a time, limiting its ability to model correlations between
relations and thus restricting its capability to extract complex relations.
While multiple-target instruction UIE allows for the extraction of multiple
relations simultaneously, the inclusion of irrelevant relations introduces
decision complexity and impacts extraction accuracy. Therefore, for
multi-relation extraction, we propose LDNet, which incorporates multi-aspect
relation modeling and a label drop mechanism. By assigning different relations
to different levels for understanding and decision-making, we reduce decision
confusion. Additionally, the label drop mechanism effectively mitigates the
impact of irrelevant relations. Experiments show that LDNet outperforms or
achieves competitive performance with state-of-the-art systems on 9 tasks, 33
datasets, in both single-modal and multi-modal, few-shot and zero-shot
settings.\footnote{https://github.com/Lu-Yang666/LDNet}
|
2502.12616
|
Improving Chain-of-Thought Reasoning via Quasi-Symbolic Abstractions
|
cs.CL
|
Chain-of-Thought (CoT) represents a common strategy for reasoning in Large
Language Models (LLMs) by decomposing complex tasks into intermediate inference
steps. However, explanations generated via CoT are susceptible to content
biases that negatively affect their robustness and faithfulness. To mitigate
existing limitations, recent work has proposed using logical formalisms coupled
with external symbolic solvers. However, fully symbolic approaches possess the
bottleneck of requiring a complete translation from natural language to formal
languages, a process that affects efficiency and flexibility. To achieve a
trade-off, this paper investigates methods to disentangle content from logical
reasoning without a complete formalisation. In particular, we present QuaSAR
(for Quasi-Symbolic Abstract Reasoning), a variation of CoT that guides LLMs to
operate at a higher level of abstraction via quasi-symbolic explanations. Our
framework leverages the capability of LLMs to formalise only relevant variables
and predicates, enabling the coexistence of symbolic elements with natural
language. We show the impact of QuaSAR for in-context learning and for
constructing demonstrations to improve the reasoning capabilities of smaller
models. Our experiments show that quasi-symbolic abstractions can improve
CoT-based methods by up to 8% in accuracy, enhancing robustness and consistency on challenging adversarial variations of both natural language (i.e., MMLU-Redux) and symbolic reasoning tasks (i.e., GSM-Symbolic).
|
2502.12617
|
A Graph-Enhanced Deep-Reinforcement Learning Framework for the Aircraft
Landing Problem
|
cs.LG cs.AI cs.SY eess.SY
|
The Aircraft Landing Problem (ALP) is one of the challenging problems in
aircraft transportation and management. The challenge is to schedule the
arriving aircraft in a sequence so that the cost and delays are optimized.
There are various solution approaches to solving this problem, most of which
are based on operations research algorithms and meta-heuristics. Although
traditional methods perform well on one factor or another, none handles real-time rescheduling and computational scalability together. This paper presents a novel deep reinforcement learning (DRL)
framework that combines graph neural networks with actor-critic architectures
to address the ALP. This paper introduces three key contributions: A
graph-based state representation that efficiently captures temporal and spatial
relationships between aircraft, a specialized actor-critic architecture
designed to handle multiple competing objectives in landing scheduling, and a
runway balance strategy that ensures efficient resource utilization while
maintaining safety constraints. The trained algorithm generalizes to different problem sets, producing results competitive with operations research algorithms. Experimental results on standard benchmark data sets demonstrate a 99.95% reduction in computational time compared to Mixed Integer Programming (MIP) and 38% higher runway throughput than First Come First Serve (FCFS) approaches. Therefore, the proposed solution is competitive with
traditional approaches and achieves substantial advancements. Notably, it does
not require retraining, making it particularly suitable for industrial
deployment. The framework's capability to generate solutions within 1 second
enables real-time rescheduling, addressing critical requirements of air traffic
management.
|
2502.12618
|
Uncertainty-Aware Graph Structure Learning
|
cs.LG
|
Graph Neural Networks (GNNs) have become a prominent approach for learning
from graph-structured data. However, their effectiveness can be significantly
compromised when the graph structure is suboptimal. To address this issue,
Graph Structure Learning (GSL) has emerged as a promising technique that
refines node connections adaptively. Nevertheless, we identify two key
limitations in existing GSL methods: 1) Most methods primarily focus on node
similarity to construct relationships, while overlooking the quality of node
information. Blindly connecting low-quality nodes and aggregating their
ambiguous information can degrade the performance of other nodes. 2) The
constructed graph structures are often constrained to be symmetric, which may
limit the model's flexibility and effectiveness. To overcome these limitations,
we propose an Uncertainty-aware Graph Structure Learning (UnGSL) strategy.
UnGSL estimates the uncertainty of node information and utilizes it to adjust
the strength of directional connections, where the influence of nodes with high
uncertainty is adaptively reduced. Importantly, UnGSL serves as a plug-in
module that can be seamlessly integrated into existing GSL methods with minimal
additional computational cost. In our experiments, we implement UnGSL into six
representative GSL methods, demonstrating consistent performance improvements.
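The uncertainty-to-edge-weight idea can be illustrated with a minimal sketch that uses predictive entropy as the uncertainty estimate and an exponential confidence map (plausible choices for illustration; the paper's estimator and adjustment rule may differ):

```python
import numpy as np

def entropy(p, eps=1e-12):
    """Shannon entropy of each node's predicted class distribution."""
    return -np.sum(p * np.log(p + eps), axis=1)

def uncertainty_reweight(A, probs, beta=1.0):
    """Scale each edge j -> i by the confidence of its source node j.

    A[i, j] is the weight of the edge from j to i; high-entropy (uncertain)
    source nodes are adaptively down-weighted."""
    H = entropy(probs)
    H_max = np.log(probs.shape[1])          # maximum possible entropy
    confidence = np.exp(-beta * H / H_max)  # in (0, 1], lower for noisier nodes
    return A * confidence[None, :]          # scale columns (source nodes)

A = np.array([[0.0, 1.0],
              [1.0, 0.0]])                      # symmetric input adjacency
probs = np.array([[0.98, 0.01, 0.01],           # node 0: confident
                  [1 / 3, 1 / 3, 1 / 3]])       # node 1: maximally uncertain
A_directed = uncertainty_reweight(A, probs)
```

Because only source columns are scaled, a symmetric input adjacency becomes asymmetric, directly addressing the second limitation the abstract identifies.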
|
2502.12623
|
DeepResonance: Enhancing Multimodal Music Understanding via
Music-centric Multi-way Instruction Tuning
|
cs.SD cs.AI cs.CL cs.MM eess.AS
|
Recent advancements in music large language models (LLMs) have significantly
improved music understanding tasks, which involve the model's ability to
analyze and interpret various musical elements. These improvements primarily
focused on integrating both music and text inputs. However, the potential of
incorporating additional modalities such as images, videos and textual music
features to enhance music understanding remains unexplored. To bridge this gap,
we propose DeepResonance, a multimodal music understanding LLM fine-tuned via
multi-way instruction tuning with multi-way aligned music, text, image, and
video data. To this end, we construct Music4way-MI2T, Music4way-MV2T, and
Music4way-Any2T, three 4-way training and evaluation datasets designed to
enable DeepResonance to integrate both visual and textual music feature
content. We also introduce multi-sampled ImageBind embeddings and a
pre-alignment Transformer to enhance modality fusion prior to input into text
LLMs, tailoring DeepResonance for multi-way instruction tuning. Our model
achieves state-of-the-art performances across six music understanding tasks,
highlighting the benefits of the auxiliary modalities and the structural
superiority of DeepResonance. We plan to open-source the models and the newly
constructed datasets.
|
2502.12624
|
Implicit Repair with Reinforcement Learning in Emergent Communication
|
cs.LG cs.MA
|
Conversational repair is a mechanism used to detect and resolve
miscommunication and misinformation problems when two or more agents interact.
One particular and underexplored form of repair in emergent communication is
the implicit repair mechanism, where the interlocutor purposely conveys the
desired information in such a way as to prevent misinformation from any other
interlocutor. This work explores how redundancy can modify the emergent
communication protocol to continue conveying the necessary information to
complete the underlying task, even with additional external environmental
pressures such as noise. We focus on extending the signaling game, called the
Lewis Game, by adding noise in the communication channel and inputs received by
the agents. Our analysis shows that agents add redundancy to the transmitted
messages as an outcome to prevent the negative impact of noise on the task
success. Additionally, we observe that the emerging communication protocol's
generalization capabilities remain equivalent to architectures employed in
simpler games that are entirely deterministic. Moreover, our method is the
only one suitable for producing robust communication protocols that can handle
cases with and without noise while maintaining increased generalization
performance levels.
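The redundancy effect the agents discover can be reproduced with a hand-coded repetition code over a noisy discrete channel, a deliberately simple stand-in for the learned protocol:

```python
import numpy as np

rng = np.random.default_rng(7)

def noisy_channel(symbols, flip_prob, n_symbols):
    """Each symbol is independently replaced by a random one with prob flip_prob."""
    noise = rng.random(symbols.shape) < flip_prob
    random_syms = rng.integers(0, n_symbols, symbols.shape)
    return np.where(noise, random_syms, symbols)

def transmit(msg, repeat, flip_prob, n_symbols=10):
    """Send each symbol `repeat` times; the listener decodes by plurality vote."""
    sent = np.repeat(msg[None, :], repeat, axis=0)
    received = noisy_channel(sent, flip_prob, n_symbols)
    decoded = np.array([np.bincount(received[:, i], minlength=n_symbols).argmax()
                        for i in range(msg.size)])
    return decoded

msg = rng.integers(0, 10, size=1000)
err_plain = np.mean(transmit(msg, repeat=1, flip_prob=0.3) != msg)
err_redundant = np.mean(transmit(msg, repeat=5, flip_prob=0.3) != msg)
```

A five-fold repetition drives the symbol error rate well below that of a single transmission, which is the pressure under which the agents in the noisy Lewis game converge to redundant messages.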
|
2502.12627
|
DAMamba: Vision State Space Model with Dynamic Adaptive Scan
|
cs.CV
|
State space models (SSMs) have recently garnered significant attention in
computer vision. However, due to the unique characteristics of image data,
SSMs adapted from natural language processing to computer vision have not outperformed state-of-the-art convolutional neural networks (CNNs) and
Vision Transformers (ViTs). Existing vision SSMs primarily leverage manually
designed scans to flatten image patches into sequences locally or globally.
This approach disrupts the original semantic spatial adjacency of the image and
lacks flexibility, making it difficult to capture complex image structures. To
address this limitation, we propose Dynamic Adaptive Scan (DAS), a data-driven
method that adaptively allocates scanning orders and regions. This enables more
flexible modeling capabilities while maintaining linear computational
complexity and global modeling capacity. Based on DAS, we further propose the
vision backbone DAMamba, which significantly outperforms current
state-of-the-art vision Mamba models in vision tasks such as image
classification, object detection, instance segmentation, and semantic
segmentation. Notably, it surpasses some of the latest state-of-the-art CNNs
and ViTs. Code will be available at https://github.com/ltzovo/DAMamba.
|
2502.12629
|
Rate Maximization for Downlink Pinching-Antenna Systems
|
cs.IT eess.SP math.IT
|
In this letter, we consider a new type of flexible-antenna system, termed
pinching-antenna, where multiple low-cost pinching antennas, realized by
activating small dielectric particles on a dielectric waveguide, are jointly
used to serve a single-antenna user. Our goal is to maximize the downlink
transmission rate by optimizing the locations of the pinching antennas.
However, these locations affect both the path losses and the phase shifts of
the user's effective channel gain, making the problem challenging to solve. To
address this challenge and solve the problem in a low complexity manner, a
relaxed optimization problem is developed that minimizes the impact of path
loss while ensuring that the received signals at the user are constructive.
This approach leads to a two-stage algorithm: in the first stage, the locations
of the pinching antennas are optimized to minimize the large-scale path loss;
in the second stage, the antenna locations are refined to maximize the received
signal strength. Simulation results show that pinching-antenna systems
significantly outperform conventional fixed-location antenna systems, and the
proposed algorithm achieves nearly the same performance as the highly complex
exhaustive search-based benchmark.
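The two-stage idea translates directly into a toy placement routine. The free-space channel model, wavelength, and geometry below are assumptions for illustration (in particular, in-waveguide propagation is ignored):

```python
import numpy as np

LAM = 0.1                     # carrier wavelength in metres (assumed)
USER = np.array([5.0, 2.0])   # user offset from the waveguide axis (assumed)
H = 3.0                       # waveguide height above the user plane (assumed)

def channel_gain(xs):
    """|sum of per-antenna contributions|: amplitude 1/d, phase 2*pi*d/lambda."""
    d = np.sqrt((xs - USER[0]) ** 2 + USER[1] ** 2 + H ** 2)
    return np.abs(np.sum(np.exp(-2j * np.pi * d / LAM) / d))

# Stage 1: minimise large-scale path loss by clustering the antennas around
# the waveguide point closest to the user, at a minimum spacing.
N, SPACING = 4, 0.5
xs = USER[0] + SPACING * (np.arange(N) - (N - 1) / 2)

# Stage 2: refine each location within half a wavelength so the received
# signals add constructively (greedy coordinate search; the zero shift is in
# the candidate grid, so the gain never decreases).
gain_stage1 = channel_gain(xs)
for n in range(N):
    candidates = xs[n] + np.linspace(-LAM / 2, LAM / 2, 81)
    gains = []
    for c in candidates:
        trial = xs.copy()
        trial[n] = c
        gains.append(channel_gain(trial))
    xs[n] = candidates[int(np.argmax(gains))]
gain_stage2 = channel_gain(xs)
```

Half-wavelength refinements barely change the path loss but can realign the phases, which is why separating the two stages keeps the search low-complexity.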
|
2502.12630
|
Automating Prompt Leakage Attacks on Large Language Models Using Agentic
Approach
|
cs.CR cs.AI
|
This paper presents a novel approach to evaluating the security of large
language models (LLMs) against prompt leakage: the exposure of system-level
prompts or proprietary configurations. We define prompt leakage as a critical
threat to secure LLM deployment and introduce a framework for testing the
robustness of LLMs using agentic teams. Leveraging AG2 (formerly AutoGen), we
implement a multi-agent system where cooperative agents are tasked with probing
and exploiting the target LLM to elicit its prompt.
Guided by traditional definitions of security in cryptography, we further
define a prompt leakage-safe system as one in which an attacker cannot
distinguish between two agents: one initialized with an original prompt and the
other with a prompt stripped of all sensitive information. In a safe system,
the agents' outputs will be indistinguishable to the attacker, ensuring that
sensitive information remains secure. This cryptographically inspired framework
provides a rigorous standard for evaluating and designing secure LLMs.
This work establishes a systematic methodology for adversarial testing of
prompt leakage, bridging the gap between automated threat modeling and
practical LLM security.
You can find the implementation of our prompt leakage probing on GitHub.
|
2502.12631
|
Score-Based Diffusion Policy Compatible with Reinforcement Learning via
Optimal Transport
|
cs.LG cs.AI
|
Diffusion policies have shown promise in learning complex behaviors from
demonstrations, particularly for tasks requiring precise control and long-term
planning. However, they face challenges in robustness when encountering
distribution shifts. This paper explores improving diffusion-based imitation
learning models through online interactions with the environment. We propose
OTPR (Optimal Transport-guided score-based diffusion Policy for Reinforcement
learning fine-tuning), a novel method that integrates diffusion policies with
RL using optimal transport theory. OTPR leverages the Q-function as a transport
cost and views the policy as an optimal transport map, enabling efficient and
stable fine-tuning. Moreover, we introduce masked optimal transport to guide
state-action matching using expert keypoints and a compatibility-based
resampling strategy to enhance training stability. Experiments on three
simulation tasks demonstrate OTPR's superior performance and robustness
compared to existing methods, especially in complex and sparse-reward
environments. In sum, OTPR provides an effective framework for combining IL and
RL, achieving versatile and reliable policy learning. The code will be released
at https://github.com/Sunmmyy/OTPR.git.
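As a rough illustration of the coupling OTPR exploits, the sketch below computes an entropy-regularized optimal transport plan (Sinkhorn iterations) with negated Q-values as the transport cost, so high-value state-action pairs become cheap to match. This is only the generic OT machinery; the paper's construction (masked OT with expert keypoints, compatibility-based resampling) is considerably richer:

```python
import numpy as np

def sinkhorn(cost, a, b, reg=0.5, iters=200):
    """Entropy-regularized optimal transport plan between marginals a and b."""
    K = np.exp(-cost / reg)
    u = np.ones_like(a)
    for _ in range(iters):
        v = b / (K.T @ u)   # satisfy column marginals
        u = a / (K @ v)     # satisfy row marginals
    return u[:, None] * K * v[None, :]

rng = np.random.default_rng(3)
Q = rng.random((4, 5))        # toy Q-values over 4 states x 5 actions
a = np.full(4, 1 / 4)         # state marginal
b = np.full(5, 1 / 5)         # action marginal
plan = sinkhorn(-Q, a, b)     # mass concentrates on high-Q pairs
```

Viewing the policy as the transport map that realizes this plan is what lets the Q-function steer fine-tuning without abandoning the diffusion-learned behavior prior.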
|
2502.12632
|
MALT Diffusion: Memory-Augmented Latent Transformers for Any-Length
Video Generation
|
cs.CV cs.LG
|
Diffusion models are successful for synthesizing high-quality videos but are
limited to generating short clips (e.g., 2-10 seconds). Synthesizing sustained
footage (e.g., minutes long) remains an open research question. In this
paper, we propose MALT Diffusion (using Memory-Augmented Latent Transformers),
a new diffusion model specialized for long video generation. MALT Diffusion (or
just MALT) handles long videos by subdividing them into short segments and
doing segment-level autoregressive generation. To achieve this, we first
propose recurrent attention layers that encode multiple segments into a compact
memory latent vector; by maintaining this memory vector over time, MALT is able
to condition on it and continuously generate new footage based on a long
temporal context. We also present several training techniques that enable the
model to generate frames over a long horizon with consistent quality and
minimal degradation. We validate the effectiveness of MALT through experiments
on long video benchmarks. We first perform extensive analysis of MALT in
long-contextual understanding capability and stability using popular long video
benchmarks. For example, MALT achieves an FVD score of 220.4 on 128-frame video
generation on UCF-101, outperforming the previous state-of-the-art of 648.4.
Finally, we explore MALT's capabilities in a text-to-video generation setting
and show that it compares favorably with recent techniques for long text-to-video generation.
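The segment-level recurrence can be caricatured in a few lines: attention-pool each segment of frame latents with the current memory as the query, then blend the result into the memory. Dimensions, the pooling rule, and the gate are all illustrative assumptions; the paper uses learned recurrent attention layers:

```python
import numpy as np

rng = np.random.default_rng(1)
D = 8  # memory/latent dimension (hypothetical)

def update_memory(memory, segment, gate=0.5):
    """Compress a segment of frame latents into the running memory vector:
    attention-pool with the memory as query, then a gated blend."""
    scores = segment @ memory
    w = np.exp(scores - scores.max())
    w /= w.sum()
    pooled = w @ segment
    return (1.0 - gate) * memory + gate * pooled

memory = rng.normal(size=D)          # initial memory latent
for _ in range(6):                   # six short segments of 16 frames each
    segment = rng.normal(size=(16, D))
    memory = update_memory(memory, segment)
```

Conditioning the generation of segment t+1 on this fixed-size memory is what gives the model long temporal context at the cost of generating only short segments at a time.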
|
2502.12633
|
One Size doesn't Fit All: A Personalized Conversational Tutoring Agent
for Mathematics Instruction
|
cs.CL cs.AI
|
Large language models (LLMs) have been increasingly employed in various
intelligent educational systems, simulating human tutors to facilitate
effective human-machine interaction. However, previous studies often overlook
the significance of recognizing and adapting to individual learner
characteristics. Such adaptation is crucial for enhancing student engagement
and learning efficiency, particularly in mathematics instruction, where diverse
learning styles require personalized strategies to promote comprehension and
enthusiasm. In this paper, we propose a \textbf{P}erson\textbf{A}lized
\textbf{C}onversational tutoring ag\textbf{E}nt (PACE) for mathematics
instruction. PACE simulates students' learning styles based on the Felder and
Silverman learning style model, aligning with each student's persona. In this
way, our PACE can effectively assess the personality of students, allowing it to
develop individualized teaching strategies that resonate with their unique
learning styles. To further enhance students' comprehension, PACE employs the
Socratic teaching method to provide instant feedback and encourage deep
thinking. By constructing personalized teaching data and training models, PACE
demonstrates the ability to identify and adapt to the unique needs of each
student, significantly improving the overall learning experience and outcomes.
Moreover, we establish multi-aspect evaluation criteria and conduct extensive
analysis to assess the performance of personalized teaching. Experimental
results demonstrate the superiority of our model in personalizing the
educational experience and motivating students compared to existing methods.
|
2502.12634
|
Introducing Context Information in Lifelong Sequential Modeling using
Temporal Convolutional Networks
|
cs.IR
|
The importance of lifelong sequential modeling (LSM) is growing in the realm
of social media recommendation systems. A key component in this process is the
attention module, which derives interest representations with respect to
candidate items from the sequence. Typically, attention modules function in a
point-wise fashion, concentrating only on the relevance of individual items in
the sequence to the candidate item. However, the context information in the
neighboring items that is useful for more accurately evaluating the
significance of each item has not been taken into account. In this study, we
introduce a novel network which employs the Temporal Convolutional Network
(TCN) to generate context-aware representations for each item throughout the
lifelong sequence. These improved representations are then utilized in the
attention module to produce context-aware interest representations. Expanding
on this TCN framework, we present an enhancement module which includes multiple
TCN layers and their respective attention modules to capture interest
representations across different context scopes. Additionally, we also
incorporate a lightweight sub-network to create convolution filters based on
users' basic profile features. These personalized filters are then applied in
the TCN layers instead of the original global filters to produce more
user-specific representations. We performed experiments on both a public
dataset and a proprietary dataset. The findings indicate that the proposed
network surpasses existing methods in terms of prediction accuracy and online
performance metrics.
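The context-aware attention idea reduces to two steps: run a causal convolution over the item sequence so each representation absorbs its preceding neighbors, then attend over the result with respect to the candidate item. The single shared depthwise filter below is a simplification of the paper's TCN layers:

```python
import numpy as np

def causal_conv(seq, kernel):
    """Depthwise causal convolution: position t mixes items t-k+1 .. t.
    seq: (T, D) item embeddings; kernel: (k,) weights shared across dims,
    kernel[0] applying to the oldest item in the window."""
    k = len(kernel)
    padded = np.vstack([np.zeros((k - 1, seq.shape[1])), seq])
    return np.stack([(kernel[:, None] * padded[t:t + k]).sum(axis=0)
                     for t in range(seq.shape[0])])

def context_aware_interest(seq, candidate, kernel):
    """Attention over context-aware item representations w.r.t. a candidate."""
    ctx = causal_conv(seq, kernel)
    scores = ctx @ candidate
    w = np.exp(scores - scores.max())
    w /= w.sum()
    return w @ ctx   # interest representation

seq = np.random.default_rng(2).normal(size=(6, 4))   # 6 items, 4-dim embeddings
interest = context_aware_interest(seq, np.ones(4), np.array([0.2, 0.3, 0.5]))
```

The personalized variant in the abstract would replace the fixed `kernel` with one produced by a small sub-network from the user's profile features.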
|
2502.12635
|
Corrupted but Not Broken: Rethinking the Impact of Corrupted Data in
Visual Instruction Tuning
|
cs.CV
|
Visual Instruction Tuning (VIT) enhances Multimodal Large Language Models
(MLLMs) but it is hindered by corrupted datasets containing hallucinated
content, incorrect responses, and poor OCR quality. While prior works focus on
dataset refinement through high-quality data collection or rule-based
filtering, they are costly or limited to specific types of corruption. To
deeply understand how corrupted data affects MLLMs, in this paper, we
systematically investigate this issue and find that while corrupted data
degrades the performance of MLLMs, its effects are largely superficial in that
the performance of MLLMs can be largely restored by either disabling a small
subset of parameters or post-training with a small amount of clean data.
Additionally, corrupted MLLMs exhibit improved ability to distinguish clean
samples from corrupted ones, enabling dataset cleaning without external
help. Based on those insights, we propose a corruption-robust training paradigm
combining self-validation and post-training, which significantly outperforms
existing corruption mitigation strategies.
|
2502.12638
|
NExT-Mol: 3D Diffusion Meets 1D Language Modeling for 3D Molecule
Generation
|
q-bio.QM cs.LG q-bio.BM
|
3D molecule generation is crucial for drug discovery and material design.
While prior efforts focus on 3D diffusion models for their benefits in modeling
continuous 3D conformers, they overlook the advantages of 1D SELFIES-based
Language Models (LMs), which can generate 100% valid molecules and leverage the
billion-scale 1D molecule datasets. To combine these advantages for 3D molecule
generation, we propose a foundation model -- NExT-Mol: 3D Diffusion Meets 1D
Language Modeling for 3D Molecule Generation. NExT-Mol uses an extensively
pretrained molecule LM for 1D molecule generation, and subsequently predicts
the generated molecule's 3D conformers with a 3D diffusion model. We enhance
NExT-Mol's performance by scaling up the LM's model size, refining the
diffusion neural architecture, and applying 1D to 3D transfer learning.
Notably, our 1D molecule LM significantly outperforms baselines in
distributional similarity while ensuring validity, and our 3D diffusion model
achieves leading performances in conformer prediction. Given these improvements
in 1D and 3D modeling, NExT-Mol achieves a 26% relative improvement in 3D FCD
for de novo 3D generation on GEOM-DRUGS, and a 13% average relative gain for
conditional 3D generation on QM9-2014. Our codes and pretrained checkpoints are
available at https://github.com/acharkq/NExT-Mol.
|
2502.12640
|
RecDreamer: Consistent Text-to-3D Generation via Uniform Score
Distillation
|
cs.CV
|
Current text-to-3D generation methods based on score distillation often
suffer from geometric inconsistencies, leading to repeated patterns across
different poses of 3D assets. This issue, known as the Multi-Face Janus
problem, arises because existing methods struggle to maintain consistency
across varying poses and are biased toward a canonical pose. While recent work
has improved pose control and approximation, these efforts are still limited by
this inherent bias, which skews the guidance during generation. To address
this, we propose a solution called RecDreamer, which reshapes the underlying
data distribution to achieve a more consistent pose representation. The core
idea behind our method is to rectify the prior distribution, ensuring that pose
variation is uniformly distributed rather than biased toward a canonical form.
By modifying the prescribed distribution through an auxiliary function, we can
reconstruct the density of the distribution to ensure compliance with specific
marginal constraints. In particular, we ensure that the marginal distribution
of poses follows a uniform distribution, thereby eliminating the biases
introduced by the prior knowledge. We incorporate this rectified data
distribution into existing score distillation algorithms, a process we refer to
as uniform score distillation. To efficiently compute the posterior
distribution required for the auxiliary function, RecDreamer introduces a
training-free classifier that estimates pose categories in a plug-and-play
manner. Additionally, we utilize various approximation techniques for noisy
states, significantly improving system performance. Our experimental results
demonstrate that RecDreamer effectively mitigates the Multi-Face Janus problem,
leading to more consistent 3D asset generation across different poses.
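The marginal-rectification step has a simple discrete analogue: weight each pose class by the ratio of the uniform density to its estimated probability, so the reweighted marginal is exactly uniform. This is only the distributional bookkeeping; estimating the pose marginal (the paper's training-free classifier) and folding the weights into score distillation are where the actual work lies:

```python
import numpy as np

def uniform_pose_weights(pose_probs):
    """Importance weights w_c = (1/K) / p(c): reweighting by w makes the
    pose marginal uniform over the K pose classes."""
    K = len(pose_probs)
    return 1.0 / (K * np.asarray(pose_probs))

pose_probs = np.array([0.7, 0.1, 0.1, 0.1])  # canonical-pose bias in the prior
w = uniform_pose_weights(pose_probs)
rectified = pose_probs * w                    # each entry is exactly 1/K
```

Under these weights the over-represented canonical pose is discounted and the rare poses are amplified, removing the bias that skews the guidance signal.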
|
2502.12654
|
Free Energy and Network Structure: Breaking Scale-Free Behaviour Through
Information Processing Constraints
|
cs.SI physics.soc-ph
|
In this paper we show how The Free Energy Principle (FEP) can provide an
explanation for why real-world networks deviate from scale-free behaviour, and
how these characteristic deviations can emerge from constraints on information
processing. We propose a minimal FEP model for node behaviour that reveals three
distinct regimes: when detection noise dominates, agents seek better
information, reducing isolated agents compared to expectations from classical
preferential attachment. In the optimal detection regime, super-linear growth
emerges from compounded improvements in detection, belief, and action, which
produce a preferred cluster scale. Finally, saturation effects occur as limits
on the agent's information processing capabilities prevent indefinite cluster
growth. These regimes produce the knee-shaped degree distributions observed in
real networks, explaining them as signatures of agents with optimal information
processing under constraints. We show that agents evolving under FEP principles
provide a mechanism for preferential attachment, connecting agent psychology
with the macroscopic network features that underpin the structure of real-world
networks.
|
2502.12655
|
LiMo-Calib: On-Site Fast LiDAR-Motor Calibration for Quadruped
Robot-Based Panoramic 3D Sensing System
|
cs.RO
|
Conventional single LiDAR systems are inherently constrained by their limited
field of view (FoV), leading to blind spots and incomplete environmental
awareness, particularly on robotic platforms with strict payload limitations.
Integrating a motorized LiDAR offers a practical solution by significantly
expanding the sensor's FoV and enabling adaptive panoramic 3D sensing. However,
the high-frequency vibrations of the quadruped robot introduce calibration
challenges, causing variations in the LiDAR-motor transformation that degrade
sensing accuracy. Existing calibration methods that use artificial targets or
dense feature extraction lack feasibility for on-site applications and
real-time implementation. To overcome these limitations, we propose LiMo-Calib,
an efficient on-site calibration method that eliminates the need for external
targets by leveraging geometric features directly from raw LiDAR scans.
LiMo-Calib optimizes feature selection based on normal distribution to
accelerate convergence while maintaining accuracy and incorporates a
reweighting mechanism that evaluates local plane fitting quality to enhance
robustness. We integrate and validate the proposed method on a motorized LiDAR
system mounted on a quadruped robot, demonstrating significant improvements in
calibration efficiency and 3D sensing accuracy, making LiMo-Calib well-suited
for real-world robotic applications. The demo video is available at:
https://youtu.be/FMINa-sap7g
|
2502.12658
|
R.R.: Unveiling LLM Training Privacy through Recollection and Ranking
|
cs.CL
|
Large Language Models (LLMs) pose significant privacy risks, potentially
leaking training data due to implicit memorization. Existing privacy attacks
primarily focus on membership inference attacks (MIAs) or data extraction
attacks, but reconstructing specific personally identifiable information (PII)
in LLM's training data remains challenging. In this paper, we propose R.R.
(Recollect and Rank), a novel two-step privacy stealing attack that enables
attackers to reconstruct PII entities from scrubbed training data where the PII
entities have been masked. In the first stage, we introduce a prompt paradigm
named recollection, which instructs the LLM to repeat a masked text while
filling in the masks. We then use PII identifiers to extract the recollected
PII candidates.
In the second stage, we design a new criterion to score each PII candidate and
rank them. Motivated by membership inference, we leverage the reference model
as a calibration to our criterion. Experiments across three popular PII
datasets demonstrate that R.R. achieves better PII reconstruction performance
than the baselines. These results highlight the vulnerability of LLMs to PII
leakage even when the training data has been scrubbed. We release the
replication package of R.R.
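The second-stage ranking criterion described above (scoring each candidate with the target model, calibrated by a reference model in membership-inference style) can be sketched roughly as below; the function names and score form are illustrative assumptions, not the paper's exact criterion:

```python
def rank_pii_candidates(candidates, logp_target, logp_ref):
    """Rank recollected PII candidates by the target model's
    log-likelihood minus a reference model's, so strings that any model
    finds probable are discounted and memorized ones rise to the top.
    `logp_target`/`logp_ref` are hypothetical scoring callables."""
    scored = [(c, logp_target(c) - logp_ref(c)) for c in candidates]
    scored.sort(key=lambda pair: pair[1], reverse=True)
    return scored
```

The attacker would then take the top-ranked candidate as the reconstructed PII entity.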
|
2502.12659
|
The Hidden Risks of Large Reasoning Models: A Safety Assessment of R1
|
cs.CY cs.AI
|
The rapid development of large reasoning models, such as OpenAI-o3 and
DeepSeek-R1, has led to significant improvements in complex reasoning over
non-reasoning large language models~(LLMs). However, their enhanced
capabilities, combined with the open-source access of models like DeepSeek-R1,
raise serious safety concerns, particularly regarding their potential for
misuse. In this work, we present a comprehensive safety assessment of these
reasoning models, leveraging established safety benchmarks to evaluate their
compliance with safety regulations. Furthermore, we investigate their
susceptibility to adversarial attacks, such as jailbreaking and prompt
injection, to assess their robustness in real-world applications. Through our
multi-faceted analysis, we uncover four key findings: (1) There is a
significant safety gap between the open-source R1 models and the o3-mini model
on both safety benchmarks and under attack, suggesting that more safety effort
is needed on R1. (2) Distilled reasoning models show poorer safety performance
than their safety-aligned base models. (3) The stronger the model's
reasoning ability, the greater the potential harm it may cause when answering
unsafe questions. (4) The thinking process in R1 models poses greater safety
concerns than their final answers. Our study provides insights into the
security implications of reasoning models and highlights the need for further
advancements in R1 models' safety to close the gap.
|
2502.12663
|
Demystifying Multilingual Chain-of-Thought in Process Reward Modeling
|
cs.CL
|
Large language models (LLMs) are designed to perform a wide range of tasks.
To improve their ability to solve complex problems requiring multi-step
reasoning, recent research leverages process reward modeling to provide
fine-grained feedback at each step of the reasoning process for reinforcement
learning (RL), but it predominantly focuses on English. In this paper, we
tackle the critical challenge of extending process reward models (PRMs) to
multilingual settings. To achieve this, we train multilingual PRMs on a dataset
spanning seven languages, which is translated from English. Through
comprehensive evaluations on two widely used reasoning benchmarks across 11
languages, we demonstrate that multilingual PRMs not only improve average
accuracy but also reduce early-stage reasoning errors. Furthermore, our results
highlight the sensitivity of multilingual PRMs to both the number of training
languages and the volume of English data, while also uncovering the benefits
arising from more candidate responses and trainable parameters. This work opens
promising avenues for robust multilingual applications in complex, multi-step
reasoning tasks. In addition, we release the code to foster research along this
line.
|
2502.12665
|
A$^2$ATS: Retrieval-Based KV Cache Reduction via Windowed Rotary
Position Embedding and Query-Aware Vector Quantization
|
cs.CL
|
Long context large language models (LLMs) pose significant challenges for
efficient serving due to the large memory footprint and high access overhead of
KV cache. Retrieval-based KV cache reduction methods can mitigate these
challenges, typically by offloading the complete KV cache to CPU and retrieving
necessary tokens on demand during inference. However, these methods still
suffer from noticeable accuracy degradation and extra retrieval overhead.
To address these limitations, this paper proposes A$^2$ATS, a novel
retrieval-based KV cache reduction method. A$^2$ATS aims to obtain an accurate
approximation of attention scores by applying the vector quantization technique
to key states, thereby enabling efficient and precise retrieval of the top-K
tokens. First, we propose Windowed Rotary Position Embedding, which decouples
the positional dependency from query and key states after position embedding.
Then, we propose query-aware vector quantization that optimizes the objective
of attention score approximation directly. Finally, we design the heterogeneous
inference architecture for KV cache offloading, enabling long context serving
with larger batch sizes. Experimental results demonstrate that A$^2$ATS can
achieve a lower performance degradation with similar or lower overhead compared
to existing methods, thereby increasing long context serving throughput by up
to $2.7 \times$.
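The core retrieval step described above (approximating attention scores against vector-quantized keys and keeping only the top-K tokens) can be sketched as follows; the function name and a single shared codebook are simplifying assumptions, not the paper's API:

```python
import numpy as np

def topk_via_vq(q, codes, centroids, k):
    """Approximate each attention logit q.k_i by q.c_{code(i)}, where
    code(i) indexes a learned codebook, then return the indices of the
    k tokens with the largest approximate scores. Illustrative sketch:
    q is (d,), codes is (n_tokens,), centroids is (n_codes, d)."""
    approx_scores = centroids[codes] @ q   # (n_tokens,)
    return np.argsort(approx_scores)[-k:][::-1]
```

Only the KV entries at the returned indices would then be fetched from CPU memory for exact attention.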
|
2502.12668
|
Evaluation of Best-of-N Sampling Strategies for Language Model Alignment
|
cs.CL
|
Best-of-N (BoN) sampling with a reward model has been shown to be an
effective strategy for aligning Large Language Models (LLMs) with human
preferences at the time of decoding. BoN sampling is susceptible to a problem
known as reward hacking. Since the reward model is an imperfect proxy for the
true objective, an excessive focus on optimizing its value can lead to a
compromise of its performance on the true objective. Previous work proposes
Regularized BoN sampling (RBoN), a variant of BoN sampling that adds a
regularization term to the objective, and shows empirically that it
outperforms BoN sampling by mitigating reward hacking (Jinnai et al., 2024).
However, Jinnai et al. (2024) introduce RBoN heuristically and do not analyze
why such a regularization strategy improves the performance of BoN sampling.
The aim of this study is to analyze the effect of regularization strategies
on BoN sampling. Using these regularization strategies corresponds to robust
optimization, which maximizes the worst case over a set of possible
perturbations of the proxy reward. Although the theoretical guarantees are not
directly applicable to RBoN, RBoN corresponds to a practical implementation.
This paper proposes an extension of the RBoN framework, called Stochastic RBoN
sampling (SRBoN), which is a theoretically guaranteed approach to the
worst case in the proxy reward. We then perform an empirical evaluation using
the AlpacaFarm and Anthropic's hh-rlhf datasets to evaluate which factors of
the regularization strategies contribute to the improvement of the true
reward. In addition, we also propose another simple RBoN method, Sentence
Length Regularized BoN, which performs better in our experiments than the
previous methods.
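Plain BoN and a regularized variant in the spirit of the abstract can be sketched as follows; `generate` and `reward` are placeholders for an LLM sampler and a proxy reward model, and the length penalty is an illustrative regularizer, not the paper's exact formulation:

```python
def best_of_n(prompt, generate, reward, n=8):
    """Plain Best-of-N: draw n candidate responses and return the one
    with the highest proxy reward."""
    candidates = [generate(prompt) for _ in range(n)]
    return max(candidates, key=reward)

def regularized_bon(prompt, generate, reward, n=8, beta=1.0, penalty=len):
    """RBoN-style selection: subtract a regularization term (here a
    hypothetical length penalty, scaled by beta) from the proxy reward
    before taking the argmax, to temper reward hacking."""
    candidates = [generate(prompt) for _ in range(n)]
    return max(candidates, key=lambda y: reward(y) - beta * penalty(y))
```

With beta = 0 the regularized variant reduces to plain BoN; larger beta trades proxy reward for the regularizer.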
|
2502.12669
|
Perovskite-LLM: Knowledge-Enhanced Large Language Models for Perovskite
Solar Cell Research
|
cs.AI
|
The rapid advancement of perovskite solar cells (PSCs) has led to an
exponential growth in research publications, creating an urgent need for
efficient knowledge management and reasoning systems in this domain. We present
a comprehensive knowledge-enhanced system for PSCs that integrates three key
components. First, we develop Perovskite-KG, a domain-specific knowledge graph
constructed from 1,517 research papers, containing 23,789 entities and 22,272
relationships. Second, we create two complementary datasets: Perovskite-Chat,
comprising 55,101 high-quality question-answer pairs generated through a novel
multi-agent framework, and Perovskite-Reasoning, containing 2,217 carefully
curated materials science problems. Third, we introduce two specialized large
language models: Perovskite-Chat-LLM for domain-specific knowledge assistance
and Perovskite-Reasoning-LLM for scientific reasoning tasks. Experimental
results demonstrate that our system significantly outperforms existing models
in both domain-specific knowledge retrieval and scientific reasoning tasks,
providing researchers with effective tools for literature review, experimental
design, and complex problem-solving in PSC research.
|
2502.12671
|
Baichuan-M1: Pushing the Medical Capability of Large Language Models
|
cs.CL
|
The current generation of large language models (LLMs) is typically designed
for broad, general-purpose applications, while domain-specific LLMs, especially
in vertical fields like medicine, remain relatively scarce. In particular, the
development of highly efficient and practical LLMs for the medical domain is
challenging due to the complexity of medical knowledge and the limited
availability of high-quality data. To bridge this gap, we introduce
Baichuan-M1, a series of large language models specifically optimized for
medical applications. Unlike traditional approaches that simply continue
pretraining on existing models or apply post-training to a general base model,
Baichuan-M1 is trained from scratch with a dedicated focus on enhancing medical
capabilities. Our model is trained on 20 trillion tokens and incorporates a
range of effective training methods that strike a balance between general
capabilities and medical expertise. As a result, Baichuan-M1 not only performs
strongly across general domains such as mathematics and coding but also excels
in specialized medical fields. We have open-sourced Baichuan-M1-14B, a mini
version of our model, which can be accessed through the following links.
|
2502.12672
|
Speech-FT: A Fine-tuning Strategy for Enhancing Speech Representation
Models Without Compromising Generalization Ability
|
cs.CL cs.AI
|
Speech representation models are highly effective at extracting general
features for various tasks. While fine-tuning can enhance these representations
for specific applications, it often compromises their generalization ability.
To address this challenge, we propose Speech-FT, a fine-tuning strategy for
speech representation models that leverages model merging to preserve
generalization ability while still benefiting from fine-tuning. Speech-FT is
effective across different fine-tuning scenarios and is compatible with various
types of speech representation models, providing a versatile solution.
Speech-FT offers an efficient and practical approach to further improving
general speech representations after pre-training.
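The model-merging mechanism the abstract leans on is, in its simplest form, linear interpolation of the pre-trained and fine-tuned weights; the sketch below shows that general idea only, with a flat parameter dict and a uniform `alpha` as simplifying assumptions rather than Speech-FT's actual procedure:

```python
def merge_models(pretrained, finetuned, alpha=0.5):
    """Weight-space merging sketch: interpolate each parameter between
    the pre-trained and fine-tuned checkpoints. alpha near 0 favors
    generalization, alpha near 1 favors the fine-tuned task."""
    return {name: (1 - alpha) * pretrained[name] + alpha * finetuned[name]
            for name in pretrained}
```

In practice the parameters would be tensors, but the interpolation is applied elementwise in the same way.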
|
2502.12673
|
ROI-NeRFs: Hi-Fi Visualization of Objects of Interest within a Scene by
NeRFs Composition
|
cs.CV cs.GR
|
Efficient and accurate 3D reconstruction is essential for applications in
cultural heritage. This study addresses the challenge of visualizing objects
within large-scale scenes at a high level of detail (LOD) using Neural Radiance
Fields (NeRFs). The aim is to improve the visual fidelity of chosen objects
while maintaining the efficiency of the computations by focusing on details
only for relevant content. The proposed ROI-NeRFs framework divides the scene
into a Scene NeRF, which represents the overall scene at moderate detail, and
multiple ROI NeRFs that focus on user-defined objects of interest. An
object-focused camera selection module automatically groups relevant cameras
for each NeRF training during the decomposition phase. In the composition
phase, a Ray-level Compositional Rendering technique combines information from
the Scene NeRF and ROI NeRFs, allowing simultaneous multi-object rendering
composition. Quantitative and qualitative experiments conducted on two
real-world datasets, including one of a complex eighteenth-century cultural
heritage room, demonstrate superior performance compared to baseline methods,
improving LOD in object regions and minimizing artifacts without
significantly increasing inference time.
|