| id | title | categories | abstract |
|---|---|---|---|
2502.14178
|
NeRF-3DTalker: Neural Radiance Field with 3D Prior Aided Audio
Disentanglement for Talking Head Synthesis
|
cs.GR cs.CV cs.MM cs.SD eess.AS
|
Talking head synthesis aims to synthesize a lip-synchronized talking head video
from audio. Recently, the capability of NeRF to enhance the realism and
texture details of synthesized talking heads has attracted the attention of
researchers. However, most current NeRF methods based on audio are exclusively
concerned with the rendering of frontal faces. These methods are unable to
generate clear talking heads in novel views. Another prevalent challenge in
current 3D talking head synthesis is the difficulty in aligning acoustic and
visual spaces, which often results in suboptimal lip-syncing of the generated
talking heads. To address these issues, we propose Neural Radiance Field with
3D Prior Aided Audio Disentanglement for Talking Head Synthesis
(NeRF-3DTalker). Specifically, the proposed method employs 3D prior information
to synthesize clear talking heads with free views. Additionally, we propose a
3D Prior Aided Audio Disentanglement module, which is designed to disentangle
the audio into two distinct categories: features related to 3D-aware speech
movements and features related to speaking style. Moreover, to reposition the
generated frames that are distant from the speaker's motion space in the real
space, we have devised a local-global Standardized Space. This method
normalizes the irregular positions in the generated frames from both global and
local semantic perspectives. Through comprehensive qualitative and quantitative
experiments, it has been demonstrated that our NeRF-3DTalker outperforms
state-of-the-art in synthesizing realistic talking head videos, exhibiting
superior image quality and lip synchronization. Project page:
https://nerf-3dtalker.github.io/NeRF-3Dtalker.
|
2502.14180
|
On the logical skills of large language models: evaluations using
arbitrarily complex first-order logic problems
|
cs.LG cs.CL
|
We present a method of generating first-order logic statements whose
complexity can be controlled along multiple dimensions. We use this method to
automatically create several datasets consisting of questions asking for the
truth or falsity of first-order logic statements in Zermelo-Fraenkel set
theory. While the resolution of these questions does not require any knowledge
beyond basic notation of first-order logic and set theory, it does require a
degree of planning and logical reasoning, which can be controlled up to
arbitrarily high difficulty by the complexity of the generated statements.
Furthermore, we conduct extensive evaluations of the performance of various large
language models, including recent models such as DeepSeek-R1 and OpenAI's
o3-mini, on these datasets. All of the datasets, along with the code used for
generating them and all data from the evaluations, are publicly available
at https://github.com/bkuckuck/logical-skills-of-llms.
|
2502.14182
|
Multi-Faceted Studies on Data Poisoning can Advance LLM Development
|
cs.CR cs.LG
|
The lifecycle of large language models (LLMs) is far more complex than that
of traditional machine learning models, involving multiple training stages,
diverse data sources, and varied inference methods. While prior research on
data poisoning attacks has primarily focused on the safety vulnerabilities of
LLMs, these attacks face significant challenges in practice. Secure data
collection, rigorous data cleaning, and the multistage nature of LLM training
make it difficult to inject poisoned data or reliably influence LLM behavior as
intended. Given these challenges, this position paper proposes rethinking the
role of data poisoning and argues that multi-faceted studies on data poisoning
can advance LLM development. From a threat perspective, practical strategies
for data poisoning attacks can help evaluate and address real safety risks to
LLMs. From a trustworthiness perspective, data poisoning can be leveraged to
build more robust LLMs by uncovering and mitigating hidden biases, harmful
outputs, and hallucinations. Moreover, from a mechanism perspective, data
poisoning can provide valuable insights into LLMs, particularly the interplay
between data and model behavior, driving a deeper understanding of their
underlying mechanisms.
|
2502.14183
|
Type 1 Diabetes Management using GLIMMER: Glucose Level Indicator Model
with Modified Error Rate
|
cs.LG cs.AI
|
Managing Type 1 Diabetes (T1D) demands constant vigilance as individuals
strive to regulate their blood glucose levels to avert the dangers of
dysglycemia (hyperglycemia or hypoglycemia). Despite the advent of
sophisticated technologies such as automated insulin delivery (AID) systems,
achieving optimal glycemic control remains a formidable task. AID systems
integrate continuous subcutaneous insulin infusion (CSII) and continuous
glucose monitors (CGM) data, offering promise in reducing variability and
increasing glucose time-in-range. However, these systems often fail to prevent
dysglycemia, partly due to limitations in prediction algorithms that lack the
precision to avert abnormal glucose events. This gap highlights the need for
proactive behavioral adjustments. We address this need with GLIMMER, Glucose
Level Indicator Model with Modified Error Rate, a machine learning approach for
forecasting blood glucose levels. GLIMMER categorizes glucose values into
normal and abnormal ranges and devises a novel custom loss function to
prioritize accuracy in dysglycemic events where patient safety is critical. To
evaluate the potential of GLIMMER for T1D management, we use both a publicly
available dataset and newly collected data from 25 patients with T1D. In
predicting next-hour glucose values, GLIMMER achieved a root mean square error
(RMSE) of 23.97 (+/-3.77) and a mean absolute error (MAE) of 15.83 (+/-2.09)
mg/dL. These results reflect a 23% improvement in RMSE and a 31% improvement in
MAE compared to the best-reported error rates.
|
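The GLIMMER abstract above centers on a custom loss that prioritizes accuracy during dysglycemic events. The paper's exact formulation is not given in the abstract; below is a minimal sketch assuming a simple piecewise weighting around the standard 70-180 mg/dL target range (the function name, range bounds, and penalty factor are all illustrative):

```python
def glimmer_style_loss(y_true, y_pred, low=70.0, high=180.0, penalty=4.0):
    """Mean squared error with extra weight when the true glucose value
    falls outside the normal 70-180 mg/dL range (hypothetical weighting;
    the paper's actual loss may differ)."""
    total = 0.0
    for t, p in zip(y_true, y_pred):
        w = penalty if (t < low or t > high) else 1.0
        total += w * (t - p) ** 2
    return total / len(y_true)
```

With this weighting, a 10 mg/dL error in the hypoglycemic range is penalized four times as heavily as the same error in the normal range, pushing the model toward accuracy where patient safety is critical.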
2502.14184
|
Bayesian SegNet for Semantic Segmentation with Improved Interpretation
of Microstructural Evolution During Irradiation of Materials
|
cs.CV cs.LG
|
Understanding the relationship between the evolution of microstructures of
irradiated LiAlO2 pellets and tritium diffusion, retention and release could
improve predictions of tritium-producing burnable absorber rod performance.
Given expert-labeled segmented images of irradiated and unirradiated pellets,
we trained Deep Convolutional Neural Networks to segment images into defect,
grain, and boundary classes. Quantitative microstructural information was
calculated from these segmented images to facilitate the comparison of
unirradiated and irradiated pellets. We tested modifications to improve the
sensitivity of the model, including incorporating meta-data into the model and
utilizing uncertainty quantification. The predicted segmentation was similar to
the expert-labeled segmentation for most measures of microstructural
quantification, including pixel proportion, defect area, and defect density.
Overall, the high performance metrics of the best models for both irradiated
and unirradiated images show that neural network models are a viable
alternative to expert labeling.
|
2502.14185
|
REFLEX Dataset: A Multimodal Dataset of Human Reactions to Robot
Failures and Explanations
|
cs.RO
|
This work presents REFLEX: Robotic Explanations to FaiLures and Human
EXpressions, a comprehensive multimodal dataset capturing human reactions to
robot failures and subsequent explanations in collaborative settings. It aims
to facilitate research into human-robot interaction dynamics, addressing the
need to study reactions to both initial failures and explanations, as well as
the evolution of these reactions in long-term interactions. By providing rich,
annotated data on human responses to different types of failures, explanation
levels, and varying explanation strategies, the dataset contributes to the
development of more robust, adaptive, and satisfying robotic systems capable of
maintaining positive relationships with human collaborators, even during
challenges like repeated failures.
|
2502.14187
|
Federated Fine-Tuning of Large Language Models: Kahneman-Tversky vs.
Direct Preference Optimization
|
cs.LG cs.CL
|
We evaluate Kahneman-Tversky Optimization (KTO) as a fine-tuning method for
large language models (LLMs) in federated learning (FL) settings, comparing it
against Direct Preference Optimization (DPO). Using Alpaca-7B as the base
model, we fine-tune on a realistic dataset under both methods and evaluate
performance using MT-Bench-1, Vicuna, and AdvBench benchmarks. Additionally, we
introduce a redistributed dataset setup, where only KTO is applicable due to
its ability to handle single-response feedback, unlike DPO's reliance on paired
responses. Our results demonstrate that KTO, in both its original (KTOO) and
redistributed (KTOR) configurations, consistently outperforms DPO across all
benchmarks. In the redistributed setup, KTO further validates its flexibility
and resilience by maintaining superior performance in scenarios where DPO
cannot be applied. These findings establish KTO as a robust and scalable
fine-tuning method for FL, motivating its adoption for privacy-preserving,
decentralized, and heterogeneous environments.
|
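For context on why DPO cannot be used in the single-response redistributed setup described above, here is the standard per-pair DPO loss as a minimal sketch; the log-probability values and the β value are placeholders, and this is not the paper's federated training code:

```python
import math

def dpo_pair_loss(logp_chosen, logp_rejected,
                  ref_logp_chosen, ref_logp_rejected, beta=0.1):
    """Standard DPO loss for one (chosen, rejected) pair:
    -log sigmoid(beta * (chosen log-ratio - rejected log-ratio)).
    It requires BOTH responses; single-response feedback, as handled
    by KTO, has no rejected counterpart to form the margin."""
    margin = beta * ((logp_chosen - ref_logp_chosen)
                     - (logp_rejected - ref_logp_rejected))
    return -math.log(1.0 / (1.0 + math.exp(-margin)))
```

When the policy matches the reference model the margin is zero and the loss equals log 2; the loss decreases only as the policy raises the chosen response's log-ratio relative to the rejected one.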
2502.14189
|
QUAD-LLM-MLTC: Large Language Models Ensemble Learning for Healthcare
Text Multi-Label Classification
|
cs.CL
|
The escalating volume of collected healthcare textual data presents a unique
challenge for automated Multi-Label Text Classification (MLTC), which is
primarily due to the scarcity of annotated texts for training and their nuanced
nature. Traditional machine learning models often fail to fully capture the
array of expressed topics. However, Large Language Models (LLMs) have
demonstrated remarkable effectiveness across numerous Natural Language
Processing (NLP) tasks in various domains, showing impressive computational
efficiency and suitability for unsupervised learning through prompt
engineering. Consequently, these LLMs promise effective MLTC of medical
narratives. However, when dealing with various labels, different prompts can be
relevant depending on the topic. To address these challenges, the proposed
approach, QUAD-LLM-MLTC, leverages the strengths of four LLMs: GPT-4o, BERT,
PEGASUS, and BART. QUAD-LLM-MLTC operates in a sequential pipeline in which
BERT extracts key tokens, PEGASUS augments textual data, GPT-4o classifies, and
BART provides topics' assignment probabilities, which results in four
classifications, all in a 0-shot setting. The outputs are then combined using
ensemble learning and processed through a meta-classifier to produce the final
MLTC result. The approach is evaluated on three samples of annotated texts,
contrasting it with traditional and single-model methods. The results show
significant improvements across the majority of the topics in the
classification's F1 score and consistency (F1 and Micro-F1 scores of 78.17% and
80.16% with standard deviations of 0.025 and 0.011, respectively). This
research advances MLTC using LLMs and provides an efficient and scalable
solution to rapidly categorize healthcare-related text data without further
training.
|
2502.14190
|
Stereo Image Coding for Machines with Joint Visual Feature Compression
|
cs.CV eess.IV
|
2D image coding for machines (ICM) has achieved great success in coding
efficiency, while less effort has been devoted to the stereo image field. To
promote the efficiency of stereo image compression (SIC) and intelligent
analysis, the stereo image coding for machines (SICM) is formulated and
explored in this paper. More specifically, a machine vision-oriented stereo
feature compression network (MVSFC-Net) is proposed for SICM, where the stereo
visual features are effectively extracted, compressed, and transmitted for 3D
visual tasks. To efficiently compress stereo visual features in MVSFC-Net, a
stereo multi-scale feature compression (SMFC) module is designed to gradually
transform sparse stereo multi-scale features into compact joint visual
representations by removing spatial, inter-view, and cross-scale redundancies
simultaneously. Experimental results show that the proposed MVSFC-Net obtains
superior compression efficiency as well as 3D visual task performance, when
compared with the existing ICM anchors recommended by MPEG and the
state-of-the-art SIC method.
|
2502.14191
|
Multimodal RewardBench: Holistic Evaluation of Reward Models for Vision
Language Models
|
cs.CV cs.AI
|
Reward models play an essential role in training vision-language models
(VLMs) by assessing output quality to enable alignment with human preferences.
Despite their importance, the research community lacks comprehensive open
benchmarks for evaluating multimodal reward models in VLMs. To address this
gap, we introduce Multimodal RewardBench, an expert-annotated benchmark
covering six domains: general correctness, preference, knowledge, reasoning,
safety, and visual question-answering. Our dataset comprises 5,211 annotated
(prompt, chosen response, rejected response) triplets collected from various
VLMs. In evaluating a range of VLM judges, we find that even the top-performing
models, Gemini 1.5 Pro and Claude 3.5 Sonnet, achieve only 72% overall
accuracy. Notably, most models struggle in the reasoning and safety domains.
These findings suggest that Multimodal RewardBench offers a challenging testbed
for advancing reward model development across multiple domains. We release the
benchmark at https://github.com/facebookresearch/multimodal_rewardbench.
|
2502.14192
|
NLP-AKG: Few-Shot Construction of NLP Academic Knowledge Graph Based on
LLM
|
cs.CL cs.DL
|
Large language models (LLMs) have been widely applied in question answering
over scientific research papers. To enhance the professionalism and accuracy of
responses, many studies employ external knowledge augmentation. However,
existing structures of external knowledge in scientific literature often focus
solely on either paper entities or domain concepts, neglecting the intrinsic
connections between papers through shared domain concepts. This results in less
comprehensive and specific answers when addressing questions that combine
papers and concepts. To address this, we propose a novel knowledge graph
framework that captures deep conceptual relations between academic papers,
constructing a relational network via intra-paper semantic elements and
inter-paper citation relations. Using a few-shot knowledge graph construction
method based on LLM, we develop NLP-AKG, an academic knowledge graph for the
NLP domain, by extracting 620,353 entities and 2,271,584 relations from 60,826
papers in ACL Anthology. Based on this, we propose a 'sub-graph community
summary' method and validate its effectiveness on three NLP scientific
literature question answering datasets.
|
2502.14195
|
Bridging Text and Vision: A Multi-View Text-Vision Registration Approach
for Cross-Modal Place Recognition
|
cs.CV
|
Mobile robots necessitate advanced natural language understanding
capabilities to accurately identify locations and perform tasks such as package
delivery. However, traditional visual place recognition (VPR) methods rely
solely on single-view visual information and cannot interpret human language
descriptions. To overcome this challenge, we bridge text and vision by
proposing a multi-view (360° views of the surroundings) text-vision
registration approach called Text4VPR for the place recognition task, which is the
first method that exclusively utilizes textual descriptions to match a database
of images. Text4VPR employs the frozen T5 language model to extract global
textual embeddings. Additionally, it utilizes the Sinkhorn algorithm with
temperature coefficient to assign local tokens to their respective clusters,
thereby aggregating visual descriptors from images. During the training stage,
Text4VPR emphasizes the alignment between individual text-image pairs for
precise textual description. In the inference stage, Text4VPR uses the Cascaded
Cross-Attention Cosine Alignment (CCCA) to address the internal mismatch
between text and image groups. Subsequently, Text4VPR performs precise place
matching based on the descriptions of text-image groups. On Street360Loc, the
first text-to-image VPR dataset, which we created, Text4VPR builds a robust baseline,
achieving a leading top-1 accuracy of 57% and a leading top-10 accuracy of 92%
within a 5-meter radius on the test set, which indicates that localization from
textual descriptions to images is not only feasible but also holds significant
potential for further advancement, as shown in Figure 1.
|
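The temperature-scaled Sinkhorn step mentioned above can be sketched as alternating row/column normalization of exp(scores/τ), driving the assignment matrix toward a doubly stochastic one. This is a generic illustration of the algorithm, not Text4VPR's actual token-to-cluster assignment code (function name, τ, and iteration count are assumptions):

```python
import math

def sinkhorn(scores, tau=0.5, n_iters=50):
    """Alternately normalize rows and columns of exp(scores / tau).
    A lower tau sharpens the assignment toward the highest-scoring
    cluster for each token; the result is (approximately) doubly
    stochastic for square score matrices."""
    K = [[math.exp(s / tau) for s in row] for row in scores]
    for _ in range(n_iters):
        K = [[v / sum(row) for v in row] for row in K]  # row normalize
        col_sums = [sum(K[i][j] for i in range(len(K)))
                    for j in range(len(K[0]))]
        K = [[K[i][j] / col_sums[j] for j in range(len(K[0]))]
             for i in range(len(K))]                    # column normalize
    return K
```

In a feature-aggregation setting, the resulting soft assignments would weight each local token's contribution to its cluster descriptor.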
2502.14197
|
Adaptive Sparsified Graph Learning Framework for Vessel Behavior
Anomalies
|
cs.LG cs.AI
|
Graph neural networks have emerged as a powerful tool for learning
spatiotemporal interactions. However, conventional approaches often rely on
predefined graphs, which may obscure the precise relationships being modeled.
Additionally, existing methods typically define nodes based on fixed spatial
locations, a strategy that is ill-suited for dynamic settings such as the
maritime domain. Our method introduces an innovative graph representation where
timestamps are modeled as distinct nodes, allowing temporal dependencies to be
explicitly captured through graph edges. This setup is extended to construct a
multi-ship graph that effectively captures spatial interactions while
preserving graph sparsity. The graph is processed using Graph Convolutional
Network layers to capture spatiotemporal patterns, with a forecasting layer for
feature prediction and a Variational Graph Autoencoder for reconstruction,
enabling robust anomaly detection.
|
2502.14198
|
Antenna Position and Beamforming Optimization for Movable Antenna
Enabled ISAC: Optimal Solutions and Efficient Algorithms
|
cs.IT eess.SP math.IT
|
In this paper, we propose an integrated sensing and communication (ISAC)
system enabled by movable antennas (MAs), which can dynamically adjust antenna
positions to enhance both sensing and communication performance for future
wireless networks. To characterize the benefits of MA-enabled ISAC systems, we
first derive the Cramér-Rao bound (CRB) for angle estimation error, which is
then minimized for optimizing the antenna position vector (APV) and beamforming
design, subject to a pre-defined signal-to-noise ratio (SNR) constraint to
ensure the communication performance. In particular, for the case with receive
MAs only, we provide a closed-form optimal antenna position solution, and show
that employing MAs over conventional fixed-position antennas (FPAs) can achieve
a sensing performance gain upper-bounded by 4.77 dB. On the other hand, for the
case with transmit MAs only, we develop a boundary traversal breadth-first
search (BT-BFS) algorithm to obtain the global optimal solution in the
line-of-sight (LoS) channel scenario, along with a lower-complexity boundary
traversal depth-first search (BT-DFS) algorithm to find a local optimal
solution efficiently. While in the scenario with non-LoS (NLoS) channels, a
majorization-minimization (MM) based Rosen's gradient projection (RGP)
algorithm with an efficient initialization method is proposed to obtain
stationary solutions for the considered problem, which can be extended to the
general case with both transmit and receive MAs. Extensive numerical results
are presented to verify the effectiveness of the proposed algorithms, and
demonstrate the superiority of the considered MA-enabled ISAC system over
conventional ISAC systems with FPAs in terms of sensing and communication
performance trade-off.
|
2502.14200
|
Causal Mean Field Multi-Agent Reinforcement Learning
|
cs.AI cs.MA
|
Scalability remains a challenge in multi-agent reinforcement learning and is
currently under active research. A framework named mean-field reinforcement
learning (MFRL) could alleviate the scalability problem by employing the Mean
Field Theory to turn a many-agent problem into a two-agent problem. However,
this framework lacks the ability to identify essential interactions under
nonstationary environments. Causality contains relatively invariant mechanisms
behind interactions, though environments are nonstationary. Therefore, we
propose an algorithm called causal mean-field Q-learning (CMFQ) to address the
scalability problem. CMFQ inherits MFRL's compressed representation of the
action-state space while being far more robust to changes in the number of
agents. First, we model the causality behind the decision-making
process of MFRL into a structural causal model (SCM). Then the essential degree
of each interaction is quantified via intervening on the SCM. Furthermore, we
design the causality-aware compact representation for behavioral information of
agents as the weighted sum of all behavioral information according to their
causal effects. We test CMFQ in a mixed cooperative-competitive game and a
cooperative game. The results show that our method scales well, both when
trained in environments containing a large number of agents and when tested in
environments containing many more agents.
|
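The causality-aware compact representation described above, a weighted sum of neighbors' behavioral information according to causal effects, can be sketched as follows. How CMFQ actually quantifies those effects by intervening on the SCM is not reproduced here; the function name and weights are placeholders:

```python
def causal_mean_action(actions, causal_effects):
    """Compress neighbors' behavioral information (e.g. one-hot action
    vectors) into one vector, weighting each neighbor by its estimated
    causal effect instead of plain mean-field averaging (sketch only)."""
    total = sum(causal_effects)
    return [sum(w * a[i] for w, a in zip(causal_effects, actions)) / total
            for i in range(len(actions[0]))]
```

With uniform weights this reduces to the ordinary mean-field average; unequal weights let essential interactions dominate the compact representation.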
2502.14202
|
Do LLMs Consider Security? An Empirical Study on Responses to
Programming Questions
|
cs.SE cs.AI cs.CL cs.LG
|
The widespread adoption of conversational LLMs for software development has
raised new security concerns regarding the safety of LLM-generated content. Our
motivational study outlines ChatGPT's potential for volunteering
context-specific information to developers, promoting safe coding
practices. Motivated by this finding, we conduct a study to evaluate the degree
of security awareness exhibited by three prominent LLMs: Claude 3, GPT-4, and
Llama 3. We prompt these LLMs with Stack Overflow questions that contain
vulnerable code to evaluate whether they merely provide answers to the
questions or if they also warn users about the insecure code, thereby
demonstrating a degree of security awareness. Further, we assess whether LLM
responses provide information about the causes, exploits, and the potential
fixes of the vulnerability, to help raise users' awareness. Our findings show
that all three models struggle to accurately detect and warn users about
vulnerabilities, achieving a detection rate of only 12.6% to 40% across our
datasets. We also observe that the LLMs tend to identify certain types of
vulnerabilities related to sensitive information exposure and improper input
neutralization much more frequently than other types, such as those involving
external control of file names or paths. Furthermore, when LLMs do issue
security warnings, they often provide more information on the causes, exploits,
and fixes of vulnerabilities compared to Stack Overflow responses. Finally, we
provide an in-depth discussion on the implications of our findings and present
a CLI-based prompting tool that can be used to generate significantly more
secure LLM responses.
|
2502.14204
|
On-the-fly Preference Alignment via Principle-Guided Decoding
|
cs.CL cs.AI
|
With the rapidly expanding landscape of large language models, aligning model
generations with human values and preferences is becoming increasingly
important. Popular alignment methods, such as Reinforcement Learning from Human
Feedback, have shown significant success in guiding models with greater
control. However, these methods require considerable computational resources,
which is inefficient, and substantial collection of training data to
accommodate the diverse and pluralistic nature of human preferences, which is
impractical. These limitations significantly constrain the scope and efficacy
of both task-specific and general preference alignment methods. In this work,
we introduce On-the-fly Preference Alignment via Principle-Guided Decoding
(OPAD) to directly align model outputs with human preferences during inference,
eliminating the need for fine-tuning. Our approach involves first curating a
surrogate solution to an otherwise infeasible optimization problem and then
designing a principle-guided reward function based on this surrogate. The final
aligned policy is derived by maximizing this customized reward, which exploits
the discrepancy between the constrained policy and its unconstrained
counterpart. OPAD directly modifies the model's predictions during inference,
ensuring principle adherence without incurring the computational overhead of
retraining or fine-tuning. Experiments show that OPAD achieves competitive or
superior performance in both general and personalized alignment tasks,
demonstrating its efficiency and effectiveness compared to state-of-the-art
baselines.
|
2502.14205
|
Accurate Forgetting for Heterogeneous Federated Continual Learning
|
cs.LG cs.AI
|
Recent years have witnessed a burgeoning interest in federated learning (FL).
However, the contexts in which clients engage in sequential learning remain
under-explored. Bridging FL and continual learning (CL) gives rise to a
challenging practical problem: federated continual learning (FCL). Existing
research in FCL primarily focuses on mitigating the catastrophic forgetting
issue of continual learning while collaborating with other clients. We argue
that the forgetting phenomena are not invariably detrimental. In this paper, we
consider a more practical and challenging FCL setting characterized by
potentially unrelated or even antagonistic data/tasks across different clients.
In the FL scenario, statistical heterogeneity and data noise among clients may
exhibit spurious correlations which result in biased feature learning. While
existing CL strategies focus on complete utilization of previous knowledge,
our study finds that forgetting biased information can be beneficial.
Therefore, we propose a new concept, accurate forgetting (AF), and develop a
novel generative-replay method which selectively utilizes previous
knowledge in federated networks. We employ a probabilistic framework based on a
normalizing flow model to quantify the credibility of previous knowledge.
Comprehensive experiments affirm the superiority of our method over baselines.
|
2502.14208
|
A Non-Asymptotic Theory of Seminorm Lyapunov Stability: From
Deterministic to Stochastic Iterative Algorithms
|
cs.LG math.OC stat.ML
|
We study the problem of solving fixed-point equations for
seminorm-contractive operators and establish foundational results on the
non-asymptotic behavior of iterative algorithms in both deterministic and
stochastic settings. Specifically, in the deterministic setting, we prove a
fixed-point theorem for seminorm-contractive operators, showing that iterates
converge geometrically to the kernel of the seminorm. In the stochastic
setting, we analyze the corresponding stochastic approximation (SA) algorithm
under seminorm-contractive operators and Markovian noise, providing a
finite-sample analysis for various stepsize choices.
A benchmark for equation solving is linear systems of equations, where the
convergence behavior of fixed-point iteration is closely tied to the stability
of linear dynamical systems. In this special case, our results provide a
complete characterization of system stability with respect to a seminorm,
linking it to the solution of a Lyapunov equation in terms of positive
semi-definite matrices. In the stochastic setting, we establish a finite-sample
analysis for linear Markovian SA without requiring the Hurwitzness assumption.
Our theoretical results offer a unified framework for deriving finite-sample
bounds for various reinforcement learning algorithms in the average reward
setting, including TD($\lambda$) for policy evaluation (which is a special case
of solving a Poisson equation) and Q-learning for control.
|
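A toy numeric illustration of the deterministic result above: under a seminorm-contractive operator, iterates converge geometrically to the kernel of the seminorm, and the fixed points are exactly that kernel. The operator and seminorm below are our own minimal example, not one from the paper:

```python
def seminorm(x):
    """p(x) = |x[0]|; its kernel is the subspace {x : x[0] = 0}."""
    return abs(x[0])

def T(x):
    """A seminorm-contractive map: p(T(x)) = 0.5 * p(x).
    Its fixed points (x[0] = 0.5*x[0], x[1] = x[0] + x[1]) are
    exactly the kernel of p."""
    return (0.5 * x[0], x[0] + x[1])

x = (1.0, 0.0)
for _ in range(30):
    x = T(x)
# seminorm(x) = 0.5**30 * seminorm(x0): geometric decay to the kernel
```

Note that the iterates need not converge in an ordinary norm sense to a single predetermined point; what the contraction guarantees is geometric decay of the seminorm, i.e. convergence to the kernel.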
2502.14209
|
Spatial and Frequency Domain Adaptive Fusion Network for Image
Deblurring
|
cs.CV
|
Image deblurring aims to reconstruct a latent sharp image from its
corresponding blurred one. Although existing methods have achieved good
performance, most of them operate exclusively in either the spatial domain or
the frequency domain, rarely exploring solutions that fuse both domains. In
this paper, we propose a spatial-frequency domain adaptive fusion network
(SFAFNet) to address this limitation. Specifically, we design a gated
spatial-frequency domain feature fusion block (GSFFBlock), which consists of
three key components: a spatial domain information module, a frequency domain
information dynamic generation module (FDGM), and a gated fusion module (GFM).
The spatial domain information module employs the NAFBlock to integrate local
information. Meanwhile, in the FDGM, we design a learnable low-pass filter that
dynamically decomposes features into separate frequency subbands, capturing the
image-wide receptive field and enabling the adaptive exploration of global
contextual information. Additionally, to facilitate information flow and the
learning of complementary representations, in the GFM we present a gating
mechanism (GATE) to re-weight spatial and frequency domain features, which are
then fused through the cross-attention mechanism (CAM). Experimental results
demonstrate that our SFAFNet performs favorably compared to state-of-the-art
approaches on commonly used benchmarks.
|
2502.14210
|
Sample Complexity of Linear Quadratic Regulator Without Initial
Stability
|
math.OC cs.LG cs.SY eess.SY
|
Inspired by REINFORCE, we introduce a novel receding-horizon algorithm for
the Linear Quadratic Regulator (LQR) problem with unknown parameters. Unlike
prior methods, our algorithm avoids reliance on two-point gradient estimates
while maintaining the same order of sample complexity. Furthermore, it
eliminates the restrictive requirement of starting with a stable initial
policy, broadening its applicability. Beyond these improvements, we introduce a
refined analysis of error propagation through the contraction of the Riemannian
distance over the Riccati operator. This refinement leads to a better sample
complexity and ensures improved convergence guarantees. Numerical simulations
validate the theoretical results, demonstrating the method's practical
feasibility and performance in realistic scenarios.
|
2502.14211
|
Transfer-Prompting: Enhancing Cross-Task Adaptation in Large Language
Models via Dual-Stage Prompts Optimization
|
cs.CL
|
Large language models (LLMs) face significant challenges when balancing
multiple high-level objectives, such as generating coherent, relevant, and
high-quality responses while maintaining efficient task adaptation across
diverse tasks. To address these challenges, we introduce Transfer-Prompting, a
novel two-stage framework designed to enhance cross-task adaptation in prompt
generation. The framework comprises two key components: (1) source prompt
construction, which refines the original prompts on source task datasets to
generate source prompts with enhanced generalization ability, and (2) target
prompt generation, which enhances cross-task adaptation of target prompts by
fine-tuning a set of high-scored source prompts on task-specific datasets. In
each optimization cycle, a reference LLM generates candidate prompts based on
historical prompt-score pairs and task descriptions in our designed reference
prompt. These candidate prompts are refined iteratively, while a scorer LLM
evaluates their effectiveness using the multi-dimensional metrics designed in
the objective prompts evaluator-a novel contribution in this work that provides
a holistic evaluation of prompt quality and task performance. This feedback
loop facilitates continuous refinement, optimizing both prompt quality and
task-specific outcomes. We validate Transfer-Prompting through extensive
experiments across 25 LLMs, including 7 foundational models and 18 specialized
models, evaluated on 9 diverse datasets. The results demonstrate that
Transfer-Prompting significantly improves task-specific performance,
highlighting its potential for enhancing cross-task adaptation in LLMs. The
code is available at https://github.com/llm172/Transfer-Prompting.
|
2502.14212
|
Less is More: On the Importance of Data Quality for Unit Test Generation
|
cs.SE cs.IR
|
Unit testing is crucial for software development and maintenance. Effective
unit testing ensures and improves software quality, but writing unit tests is
time-consuming and labor-intensive. Recent studies have proposed deep learning
(DL) techniques or large language models (LLMs) to automate unit test
generation. These models are usually trained or fine-tuned on large-scale
datasets. Despite growing awareness of the importance of data quality, there
has been limited research on the quality of datasets used for test generation.
To bridge this gap, we systematically examine the impact of noise on the
performance of learning-based test generation models. We first apply the open
card sorting method to analyze the most popular and largest test generation
dataset, Methods2Test, to categorize eight distinct types of noise. Further, we
conduct detailed interviews with 17 domain experts to validate and assess the
importance, reasonableness, and correctness of the noise taxonomy. Then, we
propose CleanTest, an automated noise-cleaning framework designed to improve
the quality of test generation datasets. CleanTest comprises three filters: a
rule-based syntax filter, a rule-based relevance filter, and a model-based
coverage filter. To evaluate its effectiveness, we apply CleanTest on two
widely-used test generation datasets, i.e., Methods2Test and Atlas. Our
findings indicate that 43.52% and 29.65% of the two datasets, respectively, contain noise,
highlighting its prevalence. Finally, we conduct comparative experiments using
four LLMs (i.e., CodeBERT, AthenaTest, StarCoder, and CodeLlama7B) to assess
the impact of noise on test generation performance. The results show that
filtering noise positively influences the test generation ability of the
models.
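CleanTest's actual filters operate on Java test methods from Methods2Test; as a loose illustration of what a rule-based syntax/relevance filter can look like, the sketch below (function name and rules are hypothetical, far simpler than the paper's) rejects test snippets with unbalanced braces, empty bodies, or no assertion call:

```python
import re

def rule_based_filter(test_src: str) -> bool:
    """Return True if the test snippet passes simple syntactic/relevance rules.

    Illustrative rules only (CleanTest's filters are more elaborate):
    - balanced braces (crude syntax check for Java-like snippets)
    - non-empty method body
    - at least one assertion call in the body
    """
    if test_src.count("{") != test_src.count("}"):
        return False  # crude syntax check fails
    body = test_src[test_src.find("{") + 1 : test_src.rfind("}")]
    if not body.strip():
        return False  # empty test body carries no supervision signal
    return bool(re.search(r"\bassert\w*\s*\(", body))
```

A dataset-cleaning pass would simply keep the records for which this predicate holds.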
|
2502.14214
|
Asymmetric Co-Training for Source-Free Few-Shot Domain Adaptation
|
cs.LG cs.CV
|
Source-free unsupervised domain adaptation (SFUDA) has gained significant
attention as an alternative to traditional unsupervised domain adaptation
(UDA), which relies on the constant availability of labeled source data.
However, SFUDA approaches come with inherent limitations that are frequently
overlooked. These challenges include performance degradation when the unlabeled
target data fails to meet critical assumptions, such as having a closed-set
label distribution identical to that of the source domain, or when sufficient
unlabeled target data is unavailable, a common situation in real-world
applications. To address these issues, we propose an asymmetric co-training
(ACT) method specifically designed for the source-free few-shot domain
adaptation (SFFSDA) scenario. SFFSDA presents a
more practical alternative to SFUDA, as gathering a few labeled target
instances is more feasible than acquiring large volumes of unlabeled target
data in many real-world contexts. Our ACT method begins by employing a
weak-strong augmentation to enhance data diversity. Then we use a two-step
optimization process to train the target model. In the first step, we optimize
the label smoothing cross-entropy loss, the entropy of the class-conditional
distribution, and the reverse-entropy loss to bolster the model's
discriminative ability while mitigating overfitting. The second step focuses on
reducing redundancy in the output space by minimizing classifier determinacy
disparity. Extensive experiments across four benchmarks demonstrate the
superiority of our ACT approach, which outperforms state-of-the-art SFUDA
methods and transfer learning techniques. Our findings suggest that adapting a
source pre-trained model using only a small amount of labeled target data
offers a practical and dependable solution. The code is available at
https://github.com/gengxuli/ACT.
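The first optimization step names two standard ingredients: label-smoothing cross-entropy and the entropy of the class-conditional distribution. A minimal NumPy sketch of these generic losses (not the paper's exact formulation; the reverse-entropy term and the training loop are omitted):

```python
import numpy as np

def _softmax(z):
    # numerically stable row-wise softmax
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def label_smoothing_ce(logits, labels, eps=0.1):
    """Cross-entropy against one-hot targets smoothed toward the uniform distribution."""
    n, k = logits.shape
    p = _softmax(logits)
    q = np.full((n, k), eps / k)           # eps mass spread uniformly
    q[np.arange(n), labels] += 1.0 - eps   # remaining mass on the true class
    return float(-(q * np.log(p + 1e-12)).sum(axis=1).mean())

def entropy_loss(logits):
    """Mean entropy of the predicted class-conditional distribution."""
    p = _softmax(logits)
    return float(-(p * np.log(p + 1e-12)).sum(axis=1).mean())
```

With eps=0 the smoothed loss reduces to ordinary cross-entropy, which makes the smoothing term easy to sanity-check.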
|
2502.14215
|
Towards Secure Program Partitioning for Smart Contracts with LLM's
In-Context Learning
|
cs.SE cs.AI
|
Smart contracts are highly susceptible to manipulation attacks due to the
leakage of sensitive information. Addressing manipulation vulnerabilities is
particularly challenging because they stem from inherent data confidentiality
issues rather than straightforward implementation bugs. To tackle this by
preventing sensitive information leakage, we present PartitionGPT, the first
LLM-driven approach that combines static analysis with the in-context learning
capabilities of large language models (LLMs) to partition smart contracts into
privileged and normal codebases, guided by a few annotated sensitive data
variables. We evaluated PartitionGPT on 18 annotated smart contracts containing
99 sensitive functions. The results demonstrate that PartitionGPT successfully
generates compilable and verified partitions for 78% of the sensitive
functions while reducing code size by approximately 30% compared to a
function-level partitioning approach. Furthermore, we evaluated PartitionGPT
on nine real-world manipulation attacks that led to a total loss of 25 million
dollars; PartitionGPT effectively prevents eight of them, highlighting its
potential for broad applicability and the necessity for secure program
partitioning during smart contract development to diminish manipulation
vulnerabilities.
|
2502.14218
|
Rethinking Spiking Neural Networks from an Ensemble Learning Perspective
|
cs.LG cs.AI
|
Spiking neural networks (SNNs) exhibit superior energy efficiency but suffer
from limited performance. In this paper, we consider SNNs as ensembles of
temporal subnetworks that share architectures and weights, and highlight a
crucial issue that affects their performance: excessive differences in initial
states (neuronal membrane potentials) across timesteps lead to unstable
subnetwork outputs, resulting in degraded performance. To mitigate this, we
promote the consistency of the initial membrane potential distribution and
output through membrane potential smoothing and temporally adjacent subnetwork
guidance, respectively, to improve overall stability and performance. Moreover,
membrane potential smoothing facilitates forward propagation of information and
backward propagation of gradients, mitigating the notorious temporal gradient
vanishing problem. Our method requires only minimal modification of the spiking
neurons without adapting the network structure, making our method generalizable
and showing consistent performance gains in 1D speech, 2D object, and 3D point
cloud recognition tasks. In particular, on the challenging CIFAR10-DVS dataset,
we achieved 83.20% accuracy with only four timesteps. This provides valuable
insights into unleashing the potential of SNNs.
|
2502.14219
|
Investigating the Impact of LLM Personality on Cognitive Bias
Manifestation in Automated Decision-Making Tasks
|
cs.AI
|
Large Language Models (LLMs) are increasingly used in decision-making, yet
their susceptibility to cognitive biases remains a pressing challenge. This
study explores how personality traits influence these biases and evaluates the
effectiveness of mitigation strategies across various model architectures. Our
findings identify six prevalent cognitive biases, while the sunk cost and group
attribution biases exhibit minimal impact. Personality traits play a crucial
role in either amplifying or reducing biases, significantly affecting how LLMs
respond to debiasing techniques. Notably, Conscientiousness and Agreeableness
may generally enhance the efficacy of bias mitigation strategies, suggesting
that LLMs exhibiting these traits are more receptive to corrective measures.
These findings address the importance of personality-driven bias dynamics and
highlight the need for targeted mitigation approaches to improve fairness and
reliability in AI-assisted decision-making.
|
2502.14221
|
H3DE-Net: Efficient and Accurate 3D Landmark Detection in Medical
Imaging
|
cs.CV
|
3D landmark detection is a critical task in medical image analysis, and
accurately detecting anatomical landmarks is essential for subsequent medical
imaging tasks. However, mainstream deep learning methods in this field struggle
to simultaneously capture fine-grained local features and model global spatial
relationships, while maintaining a balance between accuracy and computational
efficiency. Local feature extraction requires capturing fine-grained anatomical
details, while global modeling requires understanding the spatial relationships
within complex anatomical structures. The high-dimensional nature of 3D volume
further exacerbates these challenges, as landmarks are sparsely distributed,
leading to significant computational costs. Therefore, achieving efficient and
precise 3D landmark detection remains a pressing challenge in medical image
analysis.
In this work, we propose the \textbf{H}ybrid \textbf{3}D \textbf{DE}tection
\textbf{Net} (H3DE-Net), a novel framework that combines CNNs for local feature
extraction with a lightweight attention mechanism designed to efficiently
capture global dependencies in 3D volumetric data. This mechanism employs a
hierarchical routing strategy to reduce computational cost while maintaining
global context modeling. To our knowledge, H3DE-Net is the first 3D landmark
detection model that integrates such a lightweight attention mechanism with
CNNs. Additionally, integrating multi-scale feature fusion further enhances
detection accuracy and robustness. Experimental results on a public CT dataset
demonstrate that H3DE-Net achieves state-of-the-art (SOTA) performance,
significantly improving accuracy and robustness, particularly in scenarios with
missing landmarks or complex anatomical variations. We have open-sourced our
project, including code, data, and model weights.
|
2502.14222
|
Enhancing Pavement Sensor Data Acquisition for AI-Driven Transportation
Research
|
cs.DB cs.AI eess.SP
|
Effective strategies for sensor data management are essential for advancing
transportation research, especially in the current data-driven era, due to the
advent of novel applications in artificial intelligence. This paper presents
comprehensive guidelines for managing transportation sensor data, encompassing
both archived static data and real-time data streams. The real-time system
architecture integrates various applications with data acquisition systems
(DAQ). By deploying the in-house designed, open-source Avena software platform
alongside the NATS messaging system as a secure communication broker, reliable
data exchange is ensured. While robust databases like TimescaleDB facilitate
organized storage, visualization platforms like Grafana provide real-time
monitoring capabilities.
In contrast, static data standards address the challenges in handling
unstructured, voluminous datasets. The standards advocate for a combination of
cost-effective bulk cloud storage for unprocessed sensor data and relational
databases for recording summarized analyses. They highlight the role of cloud
data transfer tools like FME for efficient migration of sensor data from local
storages onto the cloud. Further, integration of robust visualization tools
into the framework helps in deriving patterns and trends from these complex
datasets.
The proposals were applied to INDOT's real-world case studies involving the
I-65 and I-69 Greenfield districts. For real-time data collection, Campbell
Scientific DAQ systems were used, enabling continuous generation and monitoring
of sensor metrics. In the case of the archived I-69 database, summary data was
compiled in Oracle, while the unprocessed data was stored in SharePoint. The
results underline the effectiveness of the proposed guidelines and motivate
their adoption in research projects.
|
2502.14226
|
Designing Parameter and Compute Efficient Diffusion Transformers using
Distillation
|
cs.CV eess.IV
|
Diffusion Transformers (DiTs) with billions of model parameters form the
backbone of popular image and video generation models like DALL.E,
Stable-Diffusion and SORA. Though these models are necessary in many
low-latency applications like Augmented/Virtual Reality, they cannot be
deployed on resource-constrained Edge devices (like Apple Vision Pro or Meta
Ray-Ban glasses) due to their huge computational complexity. To overcome this,
we turn to knowledge distillation and perform a thorough design-space
exploration to achieve the best DiT for a given parameter size. In particular,
we provide principles for how to choose design knobs such as depth, width,
attention heads and distillation setup for a DiT. During the process, a
three-way trade-off emerges between model performance, size and speed that is
crucial for Edge implementation of diffusion. We also propose two distillation
approaches - Teaching Assistant (TA) method and Multi-In-One (MI1) method - to
perform feature distillation in the DiT context. Unlike existing solutions, we
demonstrate and benchmark the efficacy of our approaches on practical Edge
devices such as NVIDIA Jetson Orin Nano.
|
2502.14227
|
SleepGMUformer: A gated multimodal temporal neural network for sleep
staging
|
cs.LG cs.AI
|
Sleep staging is a key method for assessing sleep quality and diagnosing
sleep disorders. However, current deep learning methods face challenges: 1)
postfusion techniques ignore the varying contributions of different modalities;
2) unprocessed sleep data can interfere with frequency-domain information. To
tackle these issues, this paper proposes a gated multimodal temporal neural
network for multidomain sleep data, including heart rate, motion, steps, EEG
(Fpz-Cz, Pz-Oz), and EOG from WristHR-Motion-Sleep and SleepEDF-78. The model
integrates: 1) a pre-processing module for feature alignment, missing value
handling, and EEG de-trending; 2) a feature extraction module for complex sleep
features in the time dimension; and 3) a dynamic fusion module for real-time
modality weighting. Experiments show classification accuracies of 85.03% on
SleepEDF-78 and 94.54% on WristHR-Motion-Sleep datasets. The model handles
heterogeneous datasets and outperforms state-of-the-art models by 1.00%-4.00%.
|
2502.14231
|
Real-Time Sampling-based Online Planning for Drone Interception
|
cs.RO cs.LG cs.SY eess.SY
|
This paper studies high-speed online planning in dynamic environments. The
problem requires finding time-optimal trajectories that conform to system
dynamics, meeting computational constraints for real-time adaptation, and
accounting for uncertainty from environmental changes. To address these
challenges, we propose a sampling-based online planning algorithm that
leverages neural network inference to replace time-consuming nonlinear
trajectory optimization, enabling rapid exploration of multiple trajectory
options under uncertainty. The proposed method is applied to the drone
interception problem, where a defense drone must intercept a target while
avoiding collisions and handling imperfect target predictions. The algorithm
efficiently generates trajectories toward multiple potential target drone
positions in parallel. It then assesses trajectory reachability by comparing
traversal times with the target drone's predicted arrival time, ultimately
selecting the minimum-time reachable trajectory. Through extensive validation
in both simulated and real-world environments, we demonstrate our method's
capability for high-rate online planning and its adaptability to unpredictable
movements in unstructured settings.
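The selection rule described above, pick among the sampled candidates the minimum-time trajectory whose traversal time beats the target's predicted arrival, reduces to a small masked argmin. A toy sketch (the actual planner additionally handles dynamics and collision constraints):

```python
import numpy as np

def select_trajectory(traversal_times, target_arrival_times):
    """Pick the fastest trajectory that reaches its interception point in time.

    traversal_times[i]: time for candidate i to reach its target position.
    target_arrival_times[i]: predicted time for the target drone to arrive there.
    Returns the index of the minimum-time reachable candidate, or None.
    """
    reachable = traversal_times <= target_arrival_times
    if not reachable.any():
        return None                      # no candidate arrives before the target
    idx = np.flatnonzero(reachable)
    return idx[np.argmin(traversal_times[idx])]
```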
|
2502.14234
|
OBELiX: A Curated Dataset of Crystal Structures and Experimentally
Measured Ionic Conductivities for Lithium Solid-State Electrolytes
|
cond-mat.mtrl-sci cs.LG
|
Solid-state electrolyte batteries are expected to replace liquid electrolyte
lithium-ion batteries in the near future thanks to their higher theoretical
energy density and improved safety. However, their adoption is currently
hindered by their lower effective ionic conductivity, a quantity that governs
charge and discharge rates. Identifying highly ion-conductive materials using
conventional theoretical calculations and experimental validation is both
time-consuming and resource-intensive. While machine learning holds the promise
to expedite this process, relevant ionic conductivity and structural data is
scarce. Here, we present OBELiX, a domain-expert-curated database of $\sim$600
synthesized solid electrolyte materials and their experimentally measured room
temperature ionic conductivities gathered from literature. Each material is
described by its measured composition, space group, and lattice parameters. A
full-crystal description in the form of a crystallographic information file
(CIF) is provided for $\sim$320 structures for which atomic positions were
available. We discuss various statistics and features of the dataset and
provide training and testing splits that avoid data leakage. Finally, we
benchmark seven existing ML models on the task of predicting ionic conductivity
and discuss their performance. The goal of this work is to facilitate the use
of machine learning for solid-state electrolyte materials discovery.
|
2502.14235
|
OG-Gaussian: Occupancy Based Street Gaussians for Autonomous Driving
|
cs.CV cs.AI
|
Accurate and realistic 3D scene reconstruction enables the lifelike creation
of autonomous driving simulation environments. With advancements in 3D Gaussian
Splatting (3DGS), previous studies have applied it to reconstruct complex
dynamic driving scenes. These methods typically require expensive LiDAR sensors
and pre-annotated datasets of dynamic objects. To address these challenges, we
propose OG-Gaussian, a novel approach that replaces LiDAR point clouds with
Occupancy Grids (OGs) generated from surround-view camera images using
Occupancy Prediction Network (ONet). Our method leverages the semantic
information in OGs to separate dynamic vehicles from static street background,
converting these grids into two distinct sets of initial point clouds for
reconstructing both static and dynamic objects. Additionally, we estimate the
trajectories and poses of dynamic objects through a learning-based approach,
eliminating the need for complex manual annotations. Experiments on Waymo Open
dataset demonstrate that OG-Gaussian is on par with the current
state-of-the-art in terms of reconstruction quality and rendering speed,
achieving an average PSNR of 35.13 and a rendering speed of 143 FPS, while
significantly reducing computational costs and economic overhead.
|
2502.14238
|
No Minima, No Collisions: Combining Modulation and Control Barrier
Function Strategies for Feasible Dynamical Collision Avoidance
|
cs.RO cs.SY eess.SY
|
As prominent real-time safety-critical reactive control techniques, Control
Barrier Function Quadratic Programs (CBF-QPs) work for control affine systems
in general but result in local minima in the generated trajectories and
consequently cannot ensure convergence to the goals. In contrast, Modulation of
Dynamical Systems (Mod-DSs), including normal, reference, and on-manifold
Mod-DS, achieve obstacle avoidance with few or even no local minima but have
trouble optimally minimizing the difference between the constrained and the
unconstrained controller outputs, and their applications are limited to
fully-actuated systems. We dive into the theoretical foundations of CBF-QP and
Mod-DS, proving that despite their distinct origins, normal Mod-DS is a special
case of CBF-QP, and reference Mod-DS's solutions are mathematically connected
to that of the CBF-QP through one equation. Building on top of the unveiled
theoretical connections between CBF-QP and Mod-DS, reference Mod-based CBF-QP
and on-manifold Mod-based CBF-QP controllers are proposed to combine the
strength of CBF-QP and Mod-DS approaches and realize local-minimum-free
reactive obstacle avoidance for control affine systems in general. We validate
our methods in both simulated hospital environments and real-world experiments
using Ridgeback for fully-actuated systems and Fetch robots for underactuated
systems. Mod-based CBF-QPs outperform CBF-QPs as well as the optimally
constrained-enforcing Mod-DS approaches we proposed in all experiments.
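For a single relative-degree-one barrier constraint, the CBF-QP that this line of work builds on has a well-known closed form: project the nominal control onto the half-space L_f h + L_g h u + alpha(h) >= 0. A minimal sketch with our own variable names (real controllers stack many constraints and re-solve at every control step):

```python
import numpy as np

def cbf_qp_single(u_nom, a, b):
    """Closed-form solution of  min ||u - u_nom||^2  s.t.  a @ u + b >= 0.

    For a relative-degree-1 CBF h: a = Lg h(x), b = Lf h(x) + alpha(h(x)).
    """
    slack = a @ u_nom + b
    if slack >= 0:
        return u_nom.copy()                  # nominal control is already safe
    return u_nom - (slack / (a @ a)) * a     # minimal-norm push onto the boundary
```

With multiple constraints no closed form exists in general and a QP solver is used instead; the single-constraint case is what makes the connection to modulation-style corrections easy to see.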
|
2502.14242
|
On the Contraction Analysis of Nonlinear System with Multiple
Equilibrium Points
|
eess.SY cs.SY
|
In this work, we leverage the 2-contraction theory, which extends the
capabilities of classical contraction theory, to develop a global stability
framework. Coupled with powerful geometric tools such as the Poincare index
theory, the 2-contraction theory enables us to analyze the stability of planar
nonlinear systems without relying on local equilibrium analysis. By utilizing
index theory and 2-contraction results, we efficiently characterize the nature
of equilibrium points and delineate regions in 2-dimensional state space where
periodic solutions, closed orbits, or stable dynamics may exist. A key focus of
this work is the identification of regions in the state space where periodic
solutions may occur, as well as 2-contraction regions that guarantee the
nonexistence of such solutions. Additionally, we address a critical problem in
engineering: the determination of the basin of attraction (BOA) for stable
equilibrium points. For systems with multiple equilibria, identifying candidate
BOAs becomes highly nontrivial. We propose a novel methodology leveraging the
2-contraction theory to approximate a common BOA for a class of nonlinear
systems with multiple stable equilibria. Theoretical findings are substantiated
through benchmark examples and numerical simulations, demonstrating the
practical utility of the proposed approach. Furthermore, we extend our
framework to analyze networked systems, showcasing its efficacy in an opinion
dynamics problem.
|
2502.14245
|
Mitigating Lost-in-Retrieval Problems in Retrieval Augmented Multi-Hop
Question Answering
|
cs.CL
|
In this paper, we identify a critical problem, "lost-in-retrieval", in
retrieval-augmented multi-hop question answering (QA): the key entities are
missed in LLMs' sub-question decomposition. "Lost-in-retrieval" significantly
degrades the retrieval performance, which disrupts the reasoning chain and
leads to incorrect answers. To resolve this problem, we propose a
progressive retrieval and rewriting method, namely ChainRAG, which sequentially
handles each sub-question by completing missing key entities and retrieving
relevant sentences from a sentence graph for answer generation. Each step in
our retrieval and rewriting process builds upon the previous one, creating a
seamless chain that leads to accurate retrieval and answers. Finally, all
retrieved sentences and sub-question answers are integrated to generate a
comprehensive answer to the original question. We evaluate ChainRAG on three
multi-hop QA datasets$\unicode{x2013}$MuSiQue, 2Wiki, and
HotpotQA$\unicode{x2013}$using three large language models: GPT4o-mini,
Qwen2.5-72B, and GLM-4-Plus. Empirical results demonstrate that ChainRAG
consistently outperforms baselines in both effectiveness and efficiency.
|
2502.14247
|
Pandora3D: A Comprehensive Framework for High-Quality 3D Shape and
Texture Generation
|
cs.GR cs.AI cs.CV
|
This report presents a comprehensive framework for generating high-quality 3D
shapes and textures from diverse input prompts, including single images,
multi-view images, and text descriptions. The framework consists of 3D shape
generation and texture generation. (1) The 3D shape generation pipeline
employs a Variational Autoencoder (VAE) to encode implicit 3D geometries into a
latent space and a diffusion network to generate latents conditioned on input
prompts, with modifications to enhance model capacity. An alternative
Artist-Created Mesh (AM) generation approach is also explored, yielding
promising results for simpler geometries. (2) Texture generation involves a
multi-stage process starting with frontal image generation, followed by
multi-view image generation, RGB-to-PBR texture conversion, and
high-resolution multi-view texture refinement. A consistency scheduler is
plugged into every stage to enforce pixel-wise consistency among multi-view
textures during inference, ensuring seamless integration.
The pipeline demonstrates effective handling of diverse input formats,
leveraging advanced neural architectures and novel methodologies to produce
high-quality 3D content. This report details the system architecture,
experimental results, and potential future directions to improve and expand the
framework. The source code and pretrained weights are released at:
\url{https://github.com/Tencent/Tencent-XR-3DGen}.
|
2502.14251
|
Bayesian Parameter Inference and Uncertainty Quantification for a
Computational Pulmonary Hemodynamics Model Using Gaussian Processes
|
stat.AP cs.CE physics.bio-ph
|
Patient-specific modeling is a valuable tool in cardiovascular disease
research, offering insights beyond what current clinical equipment can measure.
Given the limitations of available clinical data, models that incorporate
uncertainty can provide clinicians with better guidance for tailored
treatments. However, such modeling must align with clinical time frameworks to
ensure practical applicability. In this study, we employ a one-dimensional
fluid dynamics model integrated with data from a canine model of chronic
thromboembolic pulmonary hypertension (CTEPH) to investigate microvascular
disease, which is believed to involve complex mechanisms. To enhance
computational efficiency during model calibration, we implement a Gaussian
process emulator. This approach enables us to explore the relationship between
disease severity and microvascular parameters, offering new insights into the
progression and treatment of CTEPH within a timeframe compatible with
clinical practice.
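A Gaussian process emulator replaces the expensive fluid-dynamics solver with a cheap regression surrogate. The core of plain GP regression with a squared-exponential kernel is a few lines of NumPy (a generic sketch, not the study's calibrated emulator; hyperparameters are illustrative):

```python
import numpy as np

def rbf_kernel(x1, x2, length=1.0, amp=1.0):
    """Squared-exponential covariance between two 1-D input sets."""
    d = x1[:, None] - x2[None, :]
    return amp**2 * np.exp(-0.5 * (d / length) ** 2)

def gp_posterior_mean(x_train, y_train, x_test, noise=1e-6, length=1.0):
    """Posterior mean of a zero-mean GP conditioned on (x_train, y_train)."""
    K = rbf_kernel(x_train, x_train, length) + noise * np.eye(len(x_train))
    K_star = rbf_kernel(x_test, x_train, length)
    return K_star @ np.linalg.solve(K, y_train)
```

In an emulation setting, x_train would be sampled microvascular parameter values, y_train the corresponding solver outputs, and calibration would then query the cheap posterior mean instead of the solver.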
|
2502.14252
|
Towards efficient quantum algorithms for diffusion probability models
|
quant-ph cs.LG
|
A diffusion probabilistic model (DPM) is a generative model renowned for its
ability to produce high-quality outputs in tasks such as image and audio
generation. However, training DPMs on large, high-dimensional datasets such as
high-resolution images or audio incurs significant computational, energy, and
hardware costs. In this work, we introduce efficient quantum algorithms for
implementing DPMs through various quantum ODE solvers. These algorithms
highlight the potential of quantum Carleman linearization for diverse
mathematical structures, leveraging state-of-the-art quantum linear system
solvers (QLSS) or linear combination of Hamiltonian simulations (LCHS).
Specifically, we focus on two approaches: DPM-solver-$k$ which employs exact
$k$-th order derivatives to compute a polynomial approximation of
$\epsilon_\theta(x_\lambda,\lambda)$; and UniPC which uses finite difference of
$\epsilon_\theta(x_\lambda,\lambda)$ at different points $(x_{s_m},
\lambda_{s_m})$ to approximate higher-order derivatives. As such, this work
represents one of the most direct and pragmatic applications of quantum
algorithms to large-scale machine learning models, presumably taking
substantial steps towards demonstrating the practical utility of quantum
computing.
|
2502.14254
|
Mem2Ego: Empowering Vision-Language Models with Global-to-Ego Memory for
Long-Horizon Embodied Navigation
|
cs.RO cs.AI
|
Recent advancements in Large Language Models (LLMs) and Vision-Language
Models (VLMs) have made them powerful tools in embodied navigation, enabling
agents to leverage commonsense and spatial reasoning for efficient exploration
in unfamiliar environments. Existing LLM-based approaches convert global
memory, such as semantic or topological maps, into language descriptions to
guide navigation. While this improves efficiency and reduces redundant
exploration, the loss of geometric information in language-based
representations hinders spatial reasoning, especially in intricate
environments. To address this, VLM-based approaches directly process
ego-centric visual inputs to select optimal directions for exploration.
However, relying solely on a first-person perspective makes navigation a
partially observed decision-making problem, leading to suboptimal decisions in
complex environments. In this paper, we present a novel vision-language model
(VLM)-based navigation framework that addresses these challenges by adaptively
retrieving task-relevant cues from a global memory module and integrating them
with the agent's egocentric observations. By dynamically aligning global
contextual information with local perception, our approach enhances spatial
reasoning and decision-making in long-horizon tasks. Experimental results
demonstrate that the proposed method surpasses previous state-of-the-art
approaches in object navigation tasks, providing a more effective and scalable
solution for embodied navigation.
|
2502.14255
|
Effects of Prompt Length on Domain-specific Tasks for Large Language
Models
|
cs.CL cs.AI cs.ET cs.LG
|
In recent years, Large Language Models have garnered significant attention
for their strong performance in various natural language tasks, such as machine
translation and question answering. These models demonstrate an impressive
ability to generalize across diverse tasks. However, their effectiveness in
tackling domain-specific tasks, such as financial sentiment analysis and
monetary policy understanding, remains a topic of debate, as these tasks often
require specialized knowledge and precise reasoning. To address such
challenges, researchers design various prompts to unlock the models' abilities.
By carefully crafting input prompts, researchers can guide these models to
produce more accurate responses. Consequently, prompt engineering has become a
key focus of study. Despite the advancements in both models and prompt
engineering, the relationship between the two, specifically how prompt design
impacts models' ability to perform domain-specific tasks-remains underexplored.
This paper aims to bridge this research gap.
|
2502.14258
|
Does Time Have Its Place? Temporal Heads: Where Language Models Recall
Time-specific Information
|
cs.CL cs.AI
|
While the ability of language models to elicit facts has been widely
investigated, how they handle temporally changing facts remains underexplored.
We discover Temporal Heads, specific attention heads primarily responsible for
processing temporal knowledge through circuit analysis. We confirm that these
heads are present across multiple models, though their specific locations may
vary, and their responses differ depending on the type of knowledge and its
corresponding years. Disabling these heads degrades the model's ability to
recall time-specific knowledge while maintaining its general capabilities
without compromising time-invariant and question-answering performances.
Moreover, the heads are activated not only by numeric conditions ("In 2004") but
also by textual aliases ("In the year ..."), indicating that they encode a
temporal dimension beyond simple numerical representation. Furthermore, we
expand the potential of our findings by demonstrating how temporal knowledge
can be edited by adjusting the values of these heads.
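Head ablation of the kind used here, disabling a specific attention head and measuring the drop in recall, can be emulated by multiplying each head's output by a 0/1 mask before the output projection. A toy single-layer sketch (shapes and names are ours, not any real model's):

```python
import numpy as np

def masked_self_attention(x, Wq, Wk, Wv, Wo, n_heads, head_mask):
    """Self-attention where head_mask[h] = 0 ablates head h's contribution."""
    T, d = x.shape
    hd = d // n_heads
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    outs = []
    for h in range(n_heads):
        s = slice(h * hd, (h + 1) * hd)
        scores = q[:, s] @ k[:, s].T / np.sqrt(hd)
        scores -= scores.max(axis=-1, keepdims=True)  # stable softmax
        att = np.exp(scores)
        att /= att.sum(axis=-1, keepdims=True)
        outs.append(head_mask[h] * (att @ v[:, s]))   # zero out ablated heads
    return np.concatenate(outs, axis=-1) @ Wo
```

Comparing model outputs under the all-ones mask and under a mask zeroing a candidate head is the basic measurement behind circuit-style analyses; value editing, as in the abstract's last claim, would scale rather than zero the masked entries.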
|
2502.14259
|
LabTOP: A Unified Model for Lab Test Outcome Prediction on Electronic
Health Records
|
cs.LG
|
Lab tests are fundamental for diagnosing diseases and monitoring patient
conditions. However, frequent testing can be burdensome for patients, and test
results may not always be immediately available. To address these challenges,
we propose LabTOP, a unified model that predicts lab test outcomes by
leveraging a language modeling approach on EHR data. Unlike conventional
methods that estimate only a subset of lab tests or classify discrete value
ranges, LabTOP performs continuous numerical predictions for a diverse range of
lab items. We evaluate LabTOP on three publicly available EHR datasets and
demonstrate that it outperforms existing methods, including traditional machine
learning models and state-of-the-art large language models. We also conduct
extensive ablation studies to confirm the effectiveness of our design choices.
We believe that LabTOP will serve as an accurate and generalizable framework
for lab test outcome prediction, with potential applications in clinical
decision support and early detection of critical conditions.
|
2502.14260
|
EyeBench: A Call for More Rigorous Evaluation of Retinal Image
Enhancement
|
eess.IV cs.AI cs.CV
|
Over the past decade, generative models have achieved significant success in
enhancing fundus images. However, the evaluation of these models still
presents a considerable challenge. A comprehensive evaluation benchmark for
fundus image enhancement is indispensable for three main reasons: 1) Existing
denoising metrics (e.g., PSNR, SSIM) hardly extend to
downstream real-world clinical research (e.g., vessel morphology consistency).
2) There is a lack of comprehensive evaluation for both paired and unpaired
enhancement methods, along with the need for expert protocols to accurately
assess clinical value. 3) An ideal evaluation system should provide insights to
inform future developments of fundus image enhancement. To this end, we propose
a novel comprehensive benchmark, EyeBench, to provide insights that align
enhancement models with clinical needs, offering a foundation for future work
to improve the clinical relevance and applicability of generative models for
fundus image enhancement. EyeBench has three appealing properties: 1)
multi-dimensional clinical alignment downstream evaluation: In addition to
evaluating the enhancement task, we provide several clinically significant
downstream tasks for fundus images, including vessel segmentation, DR grading,
denoising generalization, and lesion segmentation. 2) Medical expert-guided
evaluation design: We introduce a novel dataset that promotes comprehensive and
fair comparisons between paired and unpaired methods and includes a manual
evaluation protocol by medical experts. 3) Valuable insights: Our benchmark
study provides a comprehensive and rigorous evaluation of existing methods
across different downstream tasks, assisting medical experts in making informed
choices. Additionally, we offer further analysis of the challenges faced by
existing methods. The code is available at
\url{https://github.com/Retinal-Research/EyeBench}
|
2502.14264
|
SPRIG: Stackelberg Perception-Reinforcement Learning with Internal Game
Dynamics
|
cs.AI
|
Deep reinforcement learning agents often struggle to effectively
coordinate perception and decision-making components, particularly in
environments with high-dimensional sensory inputs where feature relevance
varies. This work introduces SPRIG (Stackelberg Perception-Reinforcement
learning with Internal Game dynamics), a framework that models the internal
perception-policy interaction within a single agent as a cooperative
Stackelberg game. In SPRIG, the perception module acts as a leader,
strategically processing raw sensory states, while the policy module follows,
making decisions based on extracted features. SPRIG provides theoretical
guarantees through a modified Bellman operator while preserving the benefits of
modern policy optimization. Experimental results on the Atari BeamRider
environment demonstrate SPRIG's effectiveness, achieving around 30% higher
returns than standard PPO through its game-theoretical balance of feature
extraction and decision-making.
|
2502.14267
|
Money Recognition for the Visually Impaired: A Case Study on Sri Lankan
Banknotes
|
cs.CV
|
Currency note recognition is a critical accessibility need for blind
individuals, as identifying banknotes accurately can impact their independence
and security in financial transactions. Several traditional and technological
initiatives have been taken to date. Nevertheless, these approaches remain
difficult to use, leaving banknote identification a challenge for blind people.
This research proposes a user-friendly stand-alone system for the
identification of Sri Lankan currency notes. A custom-created dataset of images
of Sri Lankan currency notes was used to fine-tune an EfficientDet model. The
currency note recognition model achieved 0.9847 AP on the validation dataset
and performs exceptionally well in real-world scenarios. The high accuracy and
the intuitive interface have enabled blind individuals to quickly and
accurately identify currency denominations, ultimately encouraging
accessibility and independence.
|
2502.14268
|
MCQA-Eval: Efficient Confidence Evaluation in NLG with Gold-Standard
Correctness Labels
|
cs.CL cs.AI
|
Large Language Models (LLMs) require robust confidence estimation,
particularly in critical domains like healthcare and law where unreliable
outputs can lead to significant consequences. Despite much recent work in
confidence estimation, current evaluation frameworks rely on correctness
functions -- various heuristics that are often noisy, expensive, and may
introduce systematic biases. These methodological weaknesses tend to distort
evaluation metrics and thus the comparative ranking of confidence measures. We
introduce MCQA-Eval, an evaluation framework for assessing confidence measures
in Natural Language Generation (NLG) that eliminates dependence on an explicit
correctness function by leveraging gold-standard correctness labels from
multiple-choice datasets. MCQA-Eval enables systematic comparison of both
internal state-based white-box (e.g. logit-based) and consistency-based
black-box confidence measures, providing a unified evaluation methodology
across different approaches. Through extensive experiments on multiple LLMs and
widely used QA datasets, we report that MCQA-Eval provides efficient and more
reliable assessments of confidence estimation methods than existing approaches.
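A toy sketch of the idea (not the MCQA-Eval implementation): with gold correctness labels from a multiple-choice answer key, a confidence measure can be scored directly, e.g. by AUROC, with no heuristic correctness function in the loop. The confidence values below are made up:

```python
def auroc(confidences, correct):
    """AUROC of a confidence measure against gold correctness labels:
    the probability that a correctly answered question receives higher
    confidence than an incorrectly answered one (ties count half)."""
    pos = [c for c, y in zip(confidences, correct) if y]
    neg = [c for c, y in zip(confidences, correct) if not y]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Gold labels come free from the multiple-choice answer key.
conf = [0.9, 0.8, 0.3, 0.6]
gold = [True, True, False, False]
score = auroc(conf, gold)  # 1.0: confidence perfectly ranks right over wrong
```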
|
2502.14270
|
Predicting Fetal Birthweight from High Dimensional Data using Advanced
Machine Learning
|
cs.LG
|
Birth weight serves as a fundamental indicator of neonatal health, closely
linked to both early medical interventions and long-term developmental risks.
Traditional predictive models, often constrained by limited feature selection
and incomplete datasets, struggle to capture the complex maternal and fetal
interactions in diverse clinical settings. This research explores machine
learning to address these limitations, utilizing a structured methodology that
integrates advanced imputation strategies, supervised feature selection
techniques, and predictive modeling. Given the constraints of the dataset, the
research strengthens the role of data preprocessing in improving the model
performance. Among the various methodologies explored, tree-based feature
selection methods demonstrated superior capability in identifying the most
relevant predictors, while ensemble-based regression models proved highly
effective in capturing non-linear relationships and complex maternal-fetal
interactions within the data. Beyond model performance, the study highlights
the clinical significance of key physiological determinants, offering insights
into maternal and fetal health factors that influence birth weight, with
implications that extend beyond statistical modeling. By bridging computational
intelligence with perinatal research, this work underscores the transformative
role of machine learning in enhancing predictive accuracy, refining risk
assessment and informing data-driven decision-making in maternal and neonatal
care. Keywords: Birth weight prediction, maternal-fetal health, MICE, BART,
Gradient Boosting, neonatal outcomes, Clinipredictive.
|
2502.14271
|
PaperHelper: Knowledge-Based LLM QA Paper Reading Assistant
|
cs.CL
|
In this paper, we introduce a paper reading assistant, PaperHelper, a potent
tool designed to enhance the capabilities of researchers in efficiently
browsing and understanding scientific literature. Utilizing the
Retrieval-Augmented Generation (RAG) framework, PaperHelper effectively
minimizes hallucinations commonly encountered in large language models (LLMs),
optimizing the extraction of accurate, high-quality knowledge. The
implementation of advanced technologies such as RAFT and RAG Fusion
significantly boosts the performance, accuracy, and reliability of the
LLMs-based literature review process. Additionally, PaperHelper features a
user-friendly interface that facilitates the batch downloading of documents and
uses the Mermaid format to illustrate structural relationships between
documents. Experimental results demonstrate that PaperHelper, based on a
fine-tuned GPT-4 API, achieves an F1 Score of 60.04, with a latency of only 5.8
seconds, outperforming the basic RAG model by 7\% in F1 Score.
|
2502.14272
|
Capturing Nuanced Preferences: Preference-Aligned Distillation for Small
Language Models
|
cs.CL cs.AI
|
Aligning small language models (SLMs) with human values typically involves
distilling preference knowledge from large language models (LLMs). However,
existing distillation methods model preference knowledge in teacher LLMs by
comparing pairwise responses, overlooking the extent of difference between
responses. This limitation hinders student SLMs from capturing the nuanced
preferences for multiple responses. In this paper, we propose a
Preference-Aligned Distillation (PAD) framework, which models teacher's
preference knowledge as a probability distribution over all potential
preferences, thereby providing more nuanced supervisory signals. Our insight in
developing PAD is rooted in the demonstration that language models can serve as
reward functions, reflecting their intrinsic preferences. Based on this, PAD
comprises three key steps: (1) sampling diverse responses at high temperature;
(2) computing rewards for both teacher and student to
construct their intrinsic preference; and (3) training the student's intrinsic
preference distribution to align with the teacher's. Experiments on four
mainstream alignment benchmarks demonstrate that PAD consistently and
significantly outperforms existing approaches, achieving over 20\% improvement
on AlpacaEval 2 and Arena-Hard, indicating superior alignment with human
preferences. Notably, on MT-Bench, using the \textsc{Gemma} model family, the
student trained by PAD surpasses its teacher, further validating the
effectiveness of our PAD.
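A minimal sketch of the distributional view behind steps (2) and (3), with made-up rewards (not the paper's implementation): per-response rewards are softmax-normalized into a preference distribution, and a KL term pulls the student's distribution toward the teacher's:

```python
import math

def preference_distribution(rewards, tau=1.0):
    """Softmax over per-response rewards -> a distribution over preferences."""
    m = max(r / tau for r in rewards)
    exps = [math.exp(r / tau - m) for r in rewards]  # shift for stability
    z = sum(exps)
    return [e / z for e in exps]

def kl_divergence(p, q):
    """KL(p || q), usable as a loss aligning student to teacher."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

teacher = preference_distribution([2.0, 1.0, -1.0])  # teacher rewards (made up)
student = preference_distribution([1.5, 1.2, -0.5])  # student rewards (made up)
loss = kl_divergence(teacher, student)  # driven toward zero during distillation
```

Unlike a pairwise comparison, the distribution encodes *how much* one response is preferred over another, which is the nuance the abstract highlights.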
|
2502.14273
|
LLM-EvRep: Learning an LLM-Compatible Event Representation Using a
Self-Supervised Framework
|
cs.CV cs.AI cs.MM
|
Recent advancements in event-based recognition have demonstrated significant
promise, yet most existing approaches rely on extensive training, limiting
their adaptability for efficient processing of event-driven visual content.
Meanwhile, large language models (LLMs) have exhibited remarkable zero-shot
capabilities across diverse domains, but their application to event-based
visual recognition remains largely unexplored. To bridge this gap, we propose
\textbf{LLM-EvGen}, an event representation generator that produces
LLM-compatible event representations \textbf{LLM-EvRep}, thereby enhancing the
performance of LLMs on event recognition tasks. The generator is trained using
a self-supervised framework, aligning the generated representations with
semantic consistency and structural fidelity. Comprehensive experiments were
conducted on three datasets: N-ImageNet, N-Caltech101, and N-MNIST. The results
demonstrate that our method, \textbf{LLM-EvRep}, outperforms the event-to-video
method, E2VID, by 15.93\%, 0.82\%, and 50.21\%, respectively, in recognition
tasks when evaluated using GPT-4o.
|
2502.14275
|
Fact or Guesswork? Evaluating Large Language Model's Medical Knowledge
with Structured One-Hop Judgment
|
cs.CL cs.LG
|
Large language models (LLMs) have been widely adopted in various downstream
task domains. However, their ability to directly recall and apply factual
medical knowledge remains under-explored. Most existing medical QA benchmarks
assess complex reasoning or multi-hop inference, making it difficult to isolate
LLMs' inherent medical knowledge from their reasoning capabilities. Given the
high-stakes nature of medical applications, where incorrect information can
have critical consequences, it is essential to evaluate how well LLMs encode,
retain, and recall fundamental medical facts.
To bridge this gap, we introduce the Medical Knowledge Judgment (MKJ) dataset,
specifically designed to measure LLMs' one-hop factual medical knowledge. MKJ
is constructed from the Unified Medical Language System (UMLS), a large-scale
repository of standardized biomedical vocabularies and knowledge graphs. We
frame knowledge assessment as a binary judgment task, requiring LLMs to verify
the correctness of medical statements extracted from reliable and structured
knowledge sources.
Our experiments reveal that LLMs struggle with factual medical knowledge
retention, exhibiting significant performance variance across different
semantic categories, particularly for rare medical conditions. Furthermore,
LLMs show poor calibration, often being overconfident in incorrect answers. To
mitigate these issues, we explore retrieval-augmented generation, demonstrating
its effectiveness in improving factual accuracy and reducing uncertainty in
medical decision-making.
|
2502.14276
|
STeCa: Step-level Trajectory Calibration for LLM Agent Learning
|
cs.LG cs.AI cs.CL
|
Large language model (LLM)-based agents have shown promise in tackling
complex tasks by interacting dynamically with the environment. Existing work
primarily focuses on behavior cloning from expert demonstrations and preference
learning through exploratory trajectory sampling. However, these methods often
struggle in long-horizon tasks, where suboptimal actions accumulate step by
step, causing agents to deviate from correct task trajectories. To address
this, we highlight the importance of timely calibration and the need to
automatically construct calibration trajectories for training agents. We
propose Step-Level Trajectory Calibration (STeCa), a novel framework for LLM
agent learning. Specifically, STeCa identifies suboptimal actions through a
step-level reward comparison during exploration. It constructs calibrated
trajectories using LLM-driven reflection, enabling agents to learn from
improved decision-making processes. These calibrated trajectories, together
with successful trajectory data, are utilized for reinforced training.
Extensive experiments demonstrate that STeCa significantly outperforms existing
methods. Further analysis highlights that step-level calibration enables agents
to complete tasks with greater robustness. Our code and data are available at
https://github.com/WangHanLinHenry/STeCa.
|
2502.14279
|
OrchardDepth: Precise Metric Depth Estimation of Orchard Scene from
Monocular Camera Images
|
cs.CV
|
Monocular depth estimation is a fundamental task in robotic perception.
Recently, with the development of more accurate and robust neural network
models and different types of datasets, monocular depth estimation has
significantly improved performance and efficiency. However, most of the
research in this area focuses on very concentrated domains. In particular, most
of the benchmarks in outdoor scenarios belong to urban environments for the
improvement of autonomous driving devices, and these benchmarks have a massive
disparity with the orchard/vineyard environment, which is hardly helpful for
research in the primary industry. Therefore, we propose OrchardDepth, which
fills the gap in the estimation of the metric depth of the monocular camera in
the orchard/vineyard environment. In addition, we present a new retraining
method to improve the training result by monitoring the consistent
regularization between dense depth maps and sparse points. Our method improves
the RMSE of depth estimation in the orchard environment from 1.5337 to 0.6738,
validating the effectiveness of our approach.
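A hedged sketch of a consistency regularizer of the kind described (dense map vs. sparse points); the arrays, mask convention, and L1 penalty here are illustrative assumptions, not the paper's exact loss:

```python
import numpy as np

def sparse_consistency_loss(dense_depth, sparse_depth, mask):
    """Penalize disagreement between a predicted dense depth map and sparse
    reference depths, evaluated only where sparse measurements exist.

    dense_depth, sparse_depth: (H, W) arrays; mask: True where sparse is valid.
    """
    diff = np.abs(dense_depth - sparse_depth)
    return diff[mask].mean() if mask.any() else 0.0

dense = np.array([[1.0, 2.0], [3.0, 4.0]])
sparse = np.array([[1.2, 0.0], [0.0, 3.5]])
mask = np.array([[True, False], [False, True]])
loss = sparse_consistency_loss(dense, sparse, mask)  # mean of |1-1.2|, |4-3.5|
```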
|
2502.14280
|
EpMAN: Episodic Memory AttentioN for Generalizing to Longer Contexts
|
cs.CL cs.AI
|
Recent advances in Large Language Models (LLMs) have yielded impressive
successes on many language tasks. However, efficient processing of long
contexts using LLMs remains a significant challenge. We introduce
\textbf{EpMAN} -- a method for processing long contexts in an \textit{episodic
memory} module while \textit{holistically attending to} semantically relevant
context chunks. The output of \textit{episodic attention} is then used to
reweight the decoder's self-attention to the stored KV cache of the context
during training and generation. When an LLM decoder is trained using
\textbf{EpMAN}, its performance on multiple challenging single-hop long-context
recall and question-answering benchmarks is found to be stronger and more
robust across the range from 16k to 256k tokens than baseline decoders trained
with self-attention, and popular retrieval-augmented generation frameworks.
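To make the reweighting idea concrete (a sketch under assumed shapes, not the EpMAN architecture): token-level attention over the cached keys can be rescaled by a per-chunk episodic relevance score and renormalized, so irrelevant chunks contribute nothing:

```python
import numpy as np

def softmax(x):
    x = x - x.max()  # shift for numerical stability
    e = np.exp(x)
    return e / e.sum()

def episodic_reweighted_attention(q, K, V, chunk_ids, relevance):
    """Reweight a query's attention over cached keys by the episodic
    relevance of the chunk each key belongs to, then renormalize.

    q: (d,) query; K, V: (n, d) cached keys/values;
    chunk_ids: chunk index per key; relevance: score per chunk.
    """
    attn = softmax(q @ K.T / np.sqrt(K.shape[1]))
    w = attn * np.asarray([relevance[c] for c in chunk_ids])
    w = w / w.sum()
    return w @ V

rng = np.random.default_rng(1)
q, K, V = rng.normal(size=8), rng.normal(size=(6, 8)), rng.normal(size=(6, 8))
chunk_ids = [0, 0, 0, 1, 1, 1]
out = episodic_reweighted_attention(q, K, V, chunk_ids, relevance=[1.0, 0.0])
```

With relevance `[1.0, 0.0]`, the second chunk is effectively dropped, so the result matches attention computed over the first chunk alone.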
|
2502.14281
|
Correcting Noisy Multilabel Predictions: Modeling Label Noise through
Latent Space Shifts
|
cs.LG cs.AI
|
Noise in data is all but inevitable in most real-world machine learning
applications and can cause severe overfitting problems. Not only can data
features contain noise, but labels are also prone to be noisy due to human
input. In this paper, rather than noisy label learning in multiclass
classifications, we instead focus on the less explored area of noisy label
learning for multilabel classifications. Specifically, we investigate the
post-correction of predictions generated from classifiers learned with noisy
labels. The reasons are two-fold. Firstly, this approach can directly work with
the trained models to save computational resources. Secondly, it could be
applied on top of other noisy label correction techniques to achieve further
improvements. To handle this problem, we appeal to deep generative approaches
that are possible for uncertainty estimation. Our model posits that label noise
arises from a stochastic shift in the latent variable, providing a more robust
and beneficial means for noisy learning. We develop both unsupervised and
semi-supervised learning methods for our model. The extensive empirical study
presents solid evidence that our approach is able to consistently improve
the independent models and performs better than a number of existing methods
across various noisy label settings. Moreover, a comprehensive empirical
analysis of the proposed method is carried out to validate its robustness,
including sensitivity analysis and an ablation study, among other elements.
|
2502.14282
|
PC-Agent: A Hierarchical Multi-Agent Collaboration Framework for Complex
Task Automation on PC
|
cs.CV
|
In the field of MLLM-based GUI agents, compared to smartphones, the PC
scenario not only features a more complex interactive environment, but also
involves more intricate intra- and inter-app workflows. To address these
issues, we propose a hierarchical agent framework named PC-Agent. Specifically,
from the perception perspective, we devise an Active Perception Module (APM) to
overcome the inadequate abilities of current MLLMs in perceiving screenshot
content. From the decision-making perspective, to handle complex user
instructions and interdependent subtasks more effectively, we propose a
hierarchical multi-agent collaboration architecture that decomposes
decision-making processes into Instruction-Subtask-Action levels. Within this
architecture, three agents (i.e., Manager, Progress and Decision) are set up
for instruction decomposition, progress tracking and step-by-step
decision-making respectively. Additionally, a Reflection agent is adopted to
enable timely bottom-up error feedback and adjustment. We also introduce a new
benchmark PC-Eval with 25 real-world complex instructions. Empirical results on
PC-Eval show that our PC-Agent achieves a 32% absolute improvement of task
success rate over previous state-of-the-art methods. The code will be publicly
available.
|
2502.14285
|
Vulnerability of Text-to-Image Models to Prompt Template Stealing: A
Differential Evolution Approach
|
cs.CL
|
Prompt trading has emerged as a significant intellectual property concern in
recent years, where vendors entice users by showcasing sample images before
selling prompt templates that can generate similar images. This work
investigates a critical security vulnerability: attackers can steal prompt
templates using only a limited number of sample images. To investigate this
threat, we introduce Prism, a prompt-stealing benchmark consisting of 50
templates and 450 images, organized into Easy and Hard difficulty levels. To
identify the vulnerability of VLMs to prompt stealing, we propose EvoStealer, a
novel template stealing method that operates without model fine-tuning by
leveraging differential evolution algorithms. The system first initializes
population sets using multimodal large language models (MLLMs) based on
predefined patterns, then iteratively generates enhanced offspring through
MLLMs. During evolution, EvoStealer identifies common features across offspring
to derive generalized templates. Our comprehensive evaluation conducted across
open-source (INTERNVL2-26B) and closed-source models (GPT-4o and GPT-4o-mini)
demonstrates that EvoStealer's stolen templates can reproduce images highly
similar to originals and effectively generalize to other subjects,
significantly outperforming baseline methods with an average improvement of
over 10%. Moreover, our cost analysis reveals that EvoStealer achieves template
stealing with negligible computational expenses. Our code and dataset are
available at https://github.com/whitepagewu/evostealer.
|
2502.14289
|
Drift: Decoding-time Personalized Alignments with Implicit User
Preferences
|
cs.CL
|
Personalized alignments for individual users have been a long-standing goal
in large language models (LLMs). We introduce Drift, a novel framework that
personalizes LLMs at decoding time with implicit user preferences. Traditional
Reinforcement Learning from Human Feedback (RLHF) requires thousands of
annotated examples and expensive gradient updates. In contrast, Drift
personalizes LLMs in a training-free manner, using only a few dozen examples to
steer a frozen model through efficient preference modeling. Our approach models
user preferences as a composition of predefined, interpretable attributes and
aligns them at decoding time to enable personalized generation. Experiments on
both a synthetic persona dataset (Perspective) and a real human-annotated
dataset (PRISM) demonstrate that Drift significantly outperforms RLHF baselines
while using only 50-100 examples. Our results and analysis show that Drift is
both computationally efficient and interpretable.
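A toy sketch of decoding-time steering in the spirit of the abstract (the attribute name, offsets, and weights are hypothetical, not Drift's actual mechanism): a frozen model's next-token logits are shifted by a weighted combination of per-attribute logit offsets before the token is chosen:

```python
import numpy as np

def drift_step(base_logits, attribute_offsets, weights):
    """Steer a frozen model by adding a weighted combination of
    per-attribute logit offsets, then pick a token greedily.

    base_logits: (vocab,) next-token logits from the frozen LLM
    attribute_offsets: {attr: (vocab,) offset favoring that attribute}
    weights: {attr: inferred user preference weight}
    """
    steered = base_logits.astype(float).copy()
    for attr, w in weights.items():
        steered += w * attribute_offsets[attr]
    return int(np.argmax(steered))

base = np.array([2.0, 1.9, 0.1])
offsets = {"formal": np.array([-1.0, 1.0, 0.0])}
tok = drift_step(base, offsets, {"formal": 0.5})  # steering flips argmax to 1
```

Because only the weights change per user, no gradient update of the model is needed, which is what makes the approach training-free.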
|
2502.14293
|
Graph Anomaly Detection via Adaptive Test-time Representation Learning
across Out-of-Distribution Domains
|
cs.LG cs.AI cs.SI
|
Graph Anomaly Detection (GAD) has demonstrated great effectiveness in
identifying unusual patterns within graph-structured data. However, while
labeled anomalies are often scarce in emerging applications, existing
supervised GAD approaches are either ineffective or not applicable when moved
across graph domains due to distribution shifts and heterogeneous feature
spaces. To address these challenges, we present AdaGraph-T3, a novel test-time
training framework for cross-domain GAD. AdaGraph-T3 combines supervised and
self-supervised learning during training while adapting to a new domain during
test time using only self-supervised learning by leveraging a homophily-based
affinity score that captures domain-invariant properties of anomalies. Our
framework introduces four key innovations to cross-domain GAD: an effective
self-supervision scheme, an attention-based mechanism that dynamically learns
edge importance weights during message passing, domain-specific encoders for
handling heterogeneous features, and class-aware regularization to address
imbalance. Experiments across multiple cross-domain settings demonstrate that
AdaGraph-T3 significantly outperforms existing approaches, achieving average
improvements of over 6.6% in AUROC and 7.9% in AUPRC compared to the best
competing model.
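One plausible reading of a homophily-based affinity score (a sketch, not AdaGraph-T3's definition): measure each node's mean feature similarity to its neighbors, so that nodes breaking homophily score low regardless of the feature space. The toy graph and features are made up:

```python
import numpy as np

def homophily_affinity(x, adj):
    """Per-node affinity: mean cosine similarity between each node's features
    and those of its neighbors. Low affinity flags anomaly candidates.

    x: (n, d) node features; adj: (n, n) symmetric 0/1 adjacency, no self-loops.
    """
    xn = x / np.linalg.norm(x, axis=1, keepdims=True)
    sims = xn @ xn.T
    deg = adj.sum(axis=1)
    return (adj * sims).sum(axis=1) / np.maximum(deg, 1)

# Toy graph: node 3's features oppose its neighbors', so it scores lowest.
x = np.array([[1.0, 0.1], [0.9, 0.2], [1.1, 0.0], [-1.0, -0.1]])
adj = np.array([[0, 1, 1, 1],
                [1, 0, 1, 1],
                [1, 1, 0, 1],
                [1, 1, 1, 0]], dtype=float)
scores = homophily_affinity(x, adj)
```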
|
2502.14294
|
DAG: Deep Adaptive and Generative $K$-Free Community Detection on
Attributed Graphs
|
cs.SI
|
Community detection on attributed graphs with rich semantic and topological
information offers great potential for real-world network analysis, especially
user matching in online games. Graph Neural Networks (GNNs) have recently
enabled Deep Graph Clustering (DGC) methods to learn cluster assignments from
semantic and topological information. However, their success depends on the
prior knowledge related to the number of communities $K$, which is unrealistic
due to the high costs and privacy issues of acquisition. In this paper, we
investigate the community detection problem without prior $K$, referred to as
$K$-Free Community Detection problem. To address this problem, we propose a
novel Deep Adaptive and Generative model~(DAG) for community detection without
specifying the prior $K$. DAG consists of three key components, \textit{i.e.,}
a node representation learning module with masked attribute reconstruction, a
community affiliation readout module, and a community number search module with
group sparsity. These components enable DAG to convert the process of
non-differentiable grid search for the community number, \textit{i.e.,} a
discrete hyperparameter in existing DGC methods, into a differentiable learning
process. In such a way, DAG can simultaneously perform community detection and
community number search end-to-end. To alleviate the cost of acquiring
community labels in real-world applications, we design a new metric, EDGE, to
evaluate community detection methods even when the labels are not feasible.
Extensive offline experiments on five public datasets and a real-world online
mobile game dataset demonstrate the superiority of our DAG over the existing
state-of-the-art (SOTA) methods. In a Tencent online game, DAG achieves a
relative increase of 7.35\% in teams compared with the best competitor.
|
2502.14297
|
An Evaluation of Sakana's AI Scientist for Autonomous Research: Wishful
Thinking or an Emerging Reality Towards 'Artificial General Research
Intelligence' (AGRI)?
|
cs.IR cs.AI cs.LG
|
A major step toward Artificial General Intelligence (AGI) and Super
Intelligence is AI's ability to autonomously conduct research - what we term
Artificial General Research Intelligence (AGRI). If machines could generate
hypotheses, conduct experiments, and write research papers without human
intervention, it would transform science. Recently, Sakana.ai introduced the AI
Scientist, a system claiming to automate the research lifecycle, generating
both excitement and skepticism.
We evaluated the AI Scientist and found it a milestone in AI-driven research.
While it streamlines some aspects, it falls short of expectations. Literature
reviews are weak, nearly half the experiments failed, and manuscripts sometimes
contain hallucinated results. Most notably, users must provide an experimental
pipeline, limiting the AI Scientist's autonomy in research design and
execution.
Despite its limitations, the AI Scientist advances research automation. Many
reviewers or instructors who assess work superficially may not recognize its
output as AI-generated. The system produces research papers with minimal human
effort and low cost. Our analysis suggests a paper costs a few USD with a few
hours of human involvement, making it significantly faster than human
researchers. Compared to AI capabilities from a few years ago, this marks
progress toward AGRI.
The rise of AI-driven research systems requires urgent discussion within
Information Retrieval (IR) and broader scientific communities. Enhancing
literature retrieval, citation validation, and evaluation benchmarks could
improve AI-generated research reliability. We propose concrete steps, including
AGRI-specific benchmarks, refined peer review, and standardized attribution
frameworks. Whether AGRI becomes a stepping stone to AGI depends on how the
academic and AI communities shape its development.
|
2502.14298
|
Generalization Certificates for Adversarially Robust Bayesian Linear
Regression
|
cs.LG stat.ML
|
Adversarial robustness of machine learning models is critical to ensuring
reliable performance under data perturbations. Recent progress has been on
point estimators, and this paper considers distributional predictors. First,
using the link between exponential families and Bregman divergences, we
formulate an adversarial Bregman divergence loss as an adversarial negative
log-likelihood. Using the geometric properties of Bregman divergences, we
compute the adversarial perturbation for such models in closed-form. Second,
under such losses, we introduce \emph{adversarially robust posteriors}, by
exploiting the optimization-centric view of generalized Bayesian inference.
Third, we derive the \emph{first} rigorous generalization certificates in the
context of an adversarial extension of Bayesian linear regression by leveraging
the PAC-Bayesian framework. Finally, experiments on real and synthetic datasets
demonstrate the superior robustness of the derived adversarially robust
posterior over Bayes posterior, and also validate our theoretical guarantees.
|
2502.14301
|
SEA-HELM: Southeast Asian Holistic Evaluation of Language Models
|
cs.CL cs.AI
|
With the rapid emergence of novel capabilities in Large Language Models
(LLMs), the need for rigorous, integrated multilingual and multicultural
benchmarks has become more pronounced. Though existing LLM benchmarks are
capable of evaluating specific capabilities of LLMs in English as well as in
various mid- to low-resource languages, including those in the Southeast Asian
(SEA) region, a comprehensive and authentic evaluation suite for the SEA
languages has not been developed thus far. Here, we present SEA-HELM, a
holistic linguistic and cultural LLM evaluation suite that emphasizes SEA
languages, comprising five core pillars: (1) NLP Classics, (2) LLM-specifics,
(3) SEA Linguistics, (4) SEA Culture, and (5) Safety. SEA-HELM currently supports
Filipino, Indonesian, Tamil, Thai, and Vietnamese. We also introduce the
SEA-HELM leaderboard, which allows users to understand models' multilingual and
multicultural performance in a systematic and user-friendly manner.
|
2502.14302
|
MedHallu: A Comprehensive Benchmark for Detecting Medical Hallucinations
in Large Language Models
|
cs.CL cs.AI cs.LG
|
Advancements in Large Language Models (LLMs) and their increasing use in
medical question-answering necessitate rigorous evaluation of their
reliability. A critical challenge lies in hallucination, where models generate
plausible yet factually incorrect outputs. In the medical domain, this poses
serious risks to patient safety and clinical decision-making. To address this,
we introduce MedHallu, the first benchmark specifically designed for medical
hallucination detection. MedHallu comprises 10,000 high-quality question-answer
pairs derived from PubMedQA, with hallucinated answers systematically generated
through a controlled pipeline. Our experiments show that state-of-the-art LLMs,
including GPT-4o, Llama-3.1, and the medically fine-tuned UltraMedical,
struggle with this binary hallucination detection task, with the best model
achieving an F1 score as low as 0.625 for detecting "hard" category
hallucinations. Using bidirectional entailment clustering, we show that
harder-to-detect hallucinations are semantically closer to ground truth.
Through experiments, we also show that incorporating domain-specific knowledge and
introducing a "not sure" category as one of the answer categories improves the
precision and F1 scores by up to 38% relative to baselines.
|
2502.14305
|
Efficient AI in Practice: Training and Deployment of Efficient LLMs for
Industry Applications
|
cs.IR cs.LG
|
Large language models (LLMs) have demonstrated remarkable performance across
a wide range of industrial applications, from search and recommendations to
generative tasks. Although scaling laws indicate that larger models generally
yield better generalization and performance, their substantial computational
requirements often render them impractical for many real-world scenarios at
scale. In this paper, we present methods and insights for training small
language models (SLMs) that deliver high performance and efficiency in
deployment. We focus on two key techniques: (1) knowledge distillation and (2)
model compression via quantization and pruning. These approaches enable SLMs to
retain much of the quality of their larger counterparts while significantly
reducing training, serving costs, and latency. We detail the impact of these
techniques on a variety of use cases at a large professional social network
platform and share deployment lessons - including hardware optimization
strategies that enhance speed and throughput for both predictive and
reasoning-based applications.
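The abstract names knowledge distillation without detail; the classic soft-target formulation (Hinton et al.) is a reasonable sketch of the technique, though not necessarily the variant used here (temperature and mixing weight are illustrative):

```python
import math

def softmax(logits, T=1.0):
    exps = [math.exp(z / T) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def distillation_loss(student_logits, teacher_logits, label, T=2.0, alpha=0.5):
    # hard-label cross-entropy term on the student
    ce = -math.log(softmax(student_logits)[label])
    # soft-target term: KL(teacher_T || student_T), scaled by T^2
    p_t = softmax(teacher_logits, T)
    p_s = softmax(student_logits, T)
    kl = sum(pt * math.log(pt / ps) for pt, ps in zip(p_t, p_s))
    return alpha * ce + (1 - alpha) * (T ** 2) * kl
```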
|
2502.14307
|
{\mu}RL: Discovering Transient Execution Vulnerabilities Using
Reinforcement Learning
|
cs.CR cs.AR cs.LG
|
We propose using reinforcement learning to address the challenges of
discovering microarchitectural vulnerabilities, such as Spectre and Meltdown,
which exploit subtle interactions in modern processors. Traditional methods
like random fuzzing fail to efficiently explore the vast instruction space and
often miss vulnerabilities that manifest under specific conditions. To overcome
this, we introduce an intelligent, feedback-driven approach using RL. Our RL
agents interact with the processor, learning from real-time feedback to
prioritize instruction sequences more likely to reveal vulnerabilities,
significantly improving the efficiency of the discovery process.
We also demonstrate that RL systems adapt effectively to various
microarchitectures, providing a scalable solution across processor generations.
By automating the exploration process, we reduce the need for human
intervention, enabling continuous learning that uncovers hidden
vulnerabilities. Additionally, our approach detects subtle signals, such as
timing anomalies or unusual cache behavior, that may indicate
microarchitectural weaknesses. This proposal advances hardware security testing
by introducing a more efficient, adaptive, and systematic framework for
protecting modern processors.
When unleashed on Intel Skylake-X and Raptor Lake microarchitectures, our RL
agent was indeed able to generate instruction sequences that cause significant
observable byte leakages through transient execution without generating any
$\mu$code assists, faults, or interrupts. The newly identified leaky sequences
stem from a variety of Intel instructions, including SERIALIZE, VERR/VERW,
CLMUL, MMX-x87 transitions, LSL+RDTSCP, and LAR. These initial results give
credence to the proposed approach.
|
2502.14309
|
On Theoretical Limits of Learning with Label Differential Privacy
|
cs.LG cs.IT math.IT
|
Label differential privacy (DP) is designed for learning problems involving
private labels and public features. While various methods have been proposed
for learning under label DP, the theoretical limits remain largely unexplored.
In this paper, we investigate the fundamental limits of learning with label DP
in both local and central models for both classification and regression tasks,
characterized by minimax convergence rates. We establish lower bounds by
converting each task into a multiple hypothesis testing problem and bounding
the test error. Additionally, we develop algorithms that yield matching upper
bounds. Our results demonstrate that under label local DP (LDP), the risk has a
significantly faster convergence rate than that under full LDP, i.e. protecting
both features and labels, indicating the advantages of relaxing the DP
definition to focus solely on labels. In contrast, under the label central DP
(CDP), the risk is only reduced by a constant factor compared to full DP,
indicating that the relaxation of CDP only has limited benefits on the
performance.
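The abstract gives rates rather than mechanisms, but the canonical label-LDP primitive for binary labels is randomized response, which makes the "labels only" relaxation concrete. A sketch with the standard unbiased de-biasing estimator (names are illustrative, not the paper's algorithms):

```python
import math, random

def randomized_response(label, epsilon, rng=random):
    """eps-label-LDP for a binary label: report the true label with
    probability e^eps / (1 + e^eps), otherwise flip it."""
    keep = math.exp(epsilon) / (1.0 + math.exp(epsilon))
    return label if rng.random() < keep else 1 - label

def debias_mean(private_labels, epsilon):
    # unbiased estimate of the true label mean from privatized labels:
    # E[p_hat] = (1 - keep) + p * (2*keep - 1), solved for p
    keep = math.exp(epsilon) / (1.0 + math.exp(epsilon))
    p_hat = sum(private_labels) / len(private_labels)
    return (p_hat - (1 - keep)) / (2 * keep - 1)
```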
|
2502.14311
|
The Impact and Feasibility of Self-Confidence Shaping for AI-Assisted
Decision-Making
|
cs.HC cs.CL cs.CY
|
In AI-assisted decision-making, it is crucial but challenging for humans to
appropriately rely on AI, especially in high-stakes domains such as finance and
healthcare. This paper addresses this problem from a human-centered perspective
by presenting an intervention for self-confidence shaping, designed to
calibrate self-confidence at a targeted level. We first demonstrate the impact
of self-confidence shaping by quantifying the upper-bound improvement in
human-AI team performance. Our behavioral experiments with 121 participants
show that self-confidence shaping can improve human-AI team performance by
nearly 50% by mitigating both over- and under-reliance on AI. We then introduce
a self-confidence prediction task to identify when our intervention is needed.
Our results show that simple machine-learning models achieve 67% accuracy in
predicting self-confidence. We further illustrate the feasibility of such
interventions. The observed relationship between sentiment and self-confidence
suggests that modifying sentiment could be a viable strategy for shaping
self-confidence. Finally, we outline future research directions to support the
deployment of self-confidence shaping in a real-world scenario for effective
human-AI collaboration.
|
2502.14314
|
ODVerse33: Is the New YOLO Version Always Better? A Multi Domain
benchmark from YOLO v5 to v11
|
cs.CV
|
You Only Look Once (YOLO) models have been widely used for building real-time
object detectors across various domains. With the increasing frequency of new
YOLO versions being released, key questions arise. Are the newer versions
always better than their previous versions? What are the core innovations in
each YOLO version and how do these changes translate into real-world
performance gains? In this paper, we summarize the key innovations from YOLOv1
to YOLOv11, introduce a comprehensive benchmark called ODverse33, which
includes 33 datasets spanning 11 diverse domains (Autonomous driving,
Agricultural, Underwater, Medical, Videogame, Industrial, Aerial, Wildlife,
Retail, Microscopic, and Security), and explore the practical impact of model
improvements in real-world, multi-domain applications through extensive
experimental results. We hope this study can provide some guidance to the
extensive users of object detection models and give some references for future
real-time object detector development.
|
2502.14315
|
Unveiling Cultural Blind Spots: Analyzing the Limitations of mLLMs in
Procedural Text Comprehension
|
cs.CL
|
Despite the impressive performance of multilingual large language models
(mLLMs) in various natural language processing tasks, their ability to
understand procedural texts, particularly those with culture-specific content,
remains largely unexplored. Texts describing cultural procedures, including
rituals, traditional craftsmanship, and social etiquette, require an inherent
understanding of cultural context, presenting a significant challenge for
mLLMs. In this work, we introduce CAPTex, a benchmark designed to evaluate
mLLMs' ability to process and reason about culturally diverse procedural texts
across multiple languages using various methodologies to assess their
performance. Our findings indicate that (1) mLLMs face difficulties with
culturally contextualized procedural texts, showing notable performance
declines in low-resource languages, (2) model performance fluctuates across
cultural domains, with some areas presenting greater difficulties, and (3)
language models exhibit better performance on multiple-choice tasks within
conversational frameworks compared to direct questioning. These results
underscore the current limitations of mLLMs in handling culturally nuanced
procedural texts and highlight the need for culturally aware benchmarks like
CAPTex to enhance their adaptability and comprehension across diverse
linguistic and cultural landscapes.
|
2502.14316
|
Textured 3D Regenerative Morphing with 3D Diffusion Prior
|
cs.CV cs.AI
|
Textured 3D morphing creates smooth and plausible interpolation sequences
between two 3D objects, focusing on transitions in both shape and texture. This
is important for creative applications like visual effects in filmmaking.
Previous methods rely on establishing point-to-point correspondences and
determining smooth deformation trajectories, which inherently restrict them to
shape-only morphing on untextured, topologically aligned datasets. This
restriction leads to labor-intensive preprocessing and poor generalization. To
overcome these challenges, we propose a method for 3D regenerative morphing
using a 3D diffusion prior. Unlike previous methods that depend on explicit
correspondences and deformations, our method eliminates the additional need for
obtaining correspondence and uses the 3D diffusion prior to generate morphing.
Specifically, we introduce a 3D diffusion model and interpolate the source and
target information at three levels: initial noise, model parameters, and
condition features. We then explore an Attention Fusion strategy to generate
smoother morphing sequences. To further improve the plausibility of semantic
interpolation and the generated 3D surfaces, we propose two strategies: (a)
Token Reordering, where we match approximate tokens based on semantic analysis
to guide implicit correspondences in the denoising process of the diffusion
model, and (b) Low-Frequency Enhancement, where we enhance low-frequency
signals in the tokens to improve the quality of generated surfaces.
Experimental results show that our method achieves superior smoothness and
plausibility in 3D morphing across diverse cross-category object pairs,
offering a novel regenerative method for 3D morphing with textured
representations.
|
2502.14317
|
ParallelComp: Parallel Long-Context Compressor for Length Extrapolation
|
cs.CL
|
Efficiently handling long contexts is crucial for large language models
(LLMs). While rotary position embeddings (RoPEs) enhance length generalization,
effective length extrapolation remains challenging and often requires costly
fine-tuning. In contrast, recent training-free approaches suffer from the
attention sink phenomenon, leading to severe performance degradation. In this
paper, we introduce ParallelComp, a novel training-free method for long-context
extrapolation that extends LLMs' context length from 4K to 128K while
maintaining high throughput and preserving perplexity, and integrates
seamlessly with Flash Attention. Our analysis offers new insights into
attention biases in parallel attention mechanisms and provides practical
solutions to tackle these challenges. To mitigate the attention sink issue, we
propose an attention calibration strategy that reduces biases, ensuring more
stable long-range attention. Additionally, we introduce a chunk eviction
strategy to efficiently manage ultra-long contexts on a single A100 80GB GPU.
To further enhance efficiency, we propose a parallel KV cache eviction
technique, which improves chunk throughput by 1.76x, thereby achieving a 23.50x
acceleration in the prefilling stage with negligible performance loss due to
attention calibration. Furthermore, ParallelComp achieves 91.17% of GPT-4's
performance on long-context tasks using an 8B model trained on 8K-length
context, outperforming powerful closed-source models such as Claude-2 and
Kimi-Chat.
|
2502.14318
|
Line Goes Up? Inherent Limitations of Benchmarks for Evaluating Large
Language Models
|
cs.CL cs.AI cs.LG
|
Large language models (LLMs) regularly demonstrate new and impressive
performance on a wide range of language, knowledge, and reasoning benchmarks.
Such rapid progress has led many commentators to argue that LLM general
cognitive capabilities have likewise rapidly improved, with the implication
that such models are becoming progressively more capable on various real-world
tasks. Here I summarise theoretical and empirical considerations to challenge
this narrative. I argue that inherent limitations with the benchmarking
paradigm, along with specific limitations of existing benchmarks, render
benchmark performance highly unsuitable as a metric for generalisable
competence over cognitive tasks. I also contend that alternative methods for
assessing LLM capabilities, including adversarial stimuli and interpretability
techniques, have shown that LLMs do not have robust competence in many language
and reasoning tasks, and often fail to learn representations which facilitate
generalisable inferences. I conclude that benchmark performance should not be
used as a reliable indicator of general LLM cognitive capabilities.
|
2502.14321
|
Beyond Self-Talk: A Communication-Centric Survey of LLM-Based
Multi-Agent Systems
|
cs.MA cs.CL
|
Large Language Models (LLMs) have recently demonstrated remarkable
capabilities in reasoning, planning, and decision-making. Building upon these
strengths, researchers have begun incorporating LLMs into multi-agent systems
(MAS), where agents collaborate or compete through natural language
interactions to tackle tasks beyond the scope of single-agent setups. In this
survey, we present a communication-centric perspective on LLM-based multi-agent
systems, examining key system-level features such as architecture design and
communication goals, as well as internal mechanisms like communication
strategies, paradigms, objects and content. We illustrate how these
communication elements interplay to enable collective intelligence and flexible
collaboration. Furthermore, we discuss prominent challenges, including
scalability, security, and multimodal integration, and propose directions for
future work to advance research in this emerging domain. Ultimately, this
survey serves as a catalyst for further innovation, fostering more robust,
scalable, and intelligent multi-agent systems across diverse application
domains.
|
2502.14327
|
ChemHTS: Hierarchical Tool Stacking for Enhancing Chemical Agents
|
cs.CE
|
Large Language Models (LLMs) have demonstrated remarkable potential in
scientific research, particularly in chemistry-related tasks such as molecular
design, reaction prediction, and property estimation. While tool-augmented LLMs
have been introduced to enhance reasoning and computation in these domains,
existing approaches suffer from tool invocation errors and lack effective
collaboration among diverse tools, limiting their overall performance. To
address these challenges, we propose ChemHTS (Chemical Hierarchical Tool
Stacking), a novel method that optimizes tool invocation pathways through a
hierarchical stacking strategy. ChemHTS consists of two key stages: tool
self-stacking warmup and multi-layer decision optimization, enabling LLMs to
refine tool usage dynamically. We evaluate ChemHTS across four classical
chemistry tasks and demonstrate its superiority over strong baselines,
including GPT-4o, DeepSeek-R1, and chemistry-specific models such as ChemDFM.
Furthermore, we define four distinct tool-stacking behaviors to
enhance interpretability, providing insights into the effectiveness of tool
collaboration. Our dataset and code are publicly available at
\url{https://github.com/Chang-pw/ChemHTS}.
|
2502.14332
|
A Collaborative Jade Recognition System for Mobile Devices Based on
Lightweight and Large Models
|
cs.CV cs.IR
|
With the widespread adoption and development of mobile devices, vision-based
recognition applications have become a hot topic in research. Jade, as an
important cultural heritage and artistic item, has significant applications in
fields such as jewelry identification and cultural relic preservation. However,
existing jade recognition systems still face challenges in mobile
implementation, such as limited computing resources, real-time requirements,
and accuracy issues. To address these challenges, this paper proposes a jade
recognition system based on size model collaboration, aiming to achieve
efficient and accurate jade identification using mobile devices such as
smartphones. First, we design a size model based on multi-scale image
processing, extracting key visual information by analyzing jade's dimensions,
shapes, and surface textures. Then, a collaborative multi-model classification
framework is built by combining deep learning and traditional computer vision
algorithms. This framework can effectively select and adjust models based on
different jade characteristics, providing high accuracy results across various
environments and devices. Experimental results show that the proposed system can
provide high recognition accuracy and fast processing time on mobile devices,
while consuming relatively low computational resources. The system not only
holds great application potential but also provides new ideas and technical
support for the intelligent development of jade identification.
|
2502.14333
|
A Survey on Feedback-based Multi-step Reasoning for Large Language
Models on Mathematics
|
cs.CL cs.AI
|
Recent progress in large language models (LLMs) has shown that chain-of-thought
prompting strategies improve the reasoning ability of LLMs by encouraging
problem solving through multiple steps. Therefore, subsequent research aimed to
integrate the multi-step reasoning process into the LLM itself through process
rewards as feedback and achieved improvements over prompting strategies. Due to
the cost of step-level annotation, some turn to outcome rewards as feedback.
Aside from these training-based approaches, training-free techniques leverage
frozen LLMs or external tools for feedback at each step to enhance the
reasoning process. With the abundance of work in mathematics due to its logical
nature, we present a survey of strategies utilizing feedback at the step and
outcome levels to enhance multi-step math reasoning for LLMs. As multi-step
reasoning emerges as a crucial component in scaling LLMs, we hope to establish
its foundation for easier understanding and empower further research.
|
2502.14334
|
Purest Quantum State Identification
|
quant-ph cs.AI
|
Precise identification of quantum states under noise constraints is essential
for quantum information processing. In this study, we generalize the classical
best arm identification problem to quantum domains, designing methods for
identifying the purest one within $K$ unknown $n$-qubit quantum states using
$N$ samples. We propose two distinct algorithms: (1) an algorithm employing
incoherent measurements, achieving error $\exp\left(- \Omega\left(\frac{N
H_1}{\log(K) 2^n }\right) \right)$, and (2) an algorithm utilizing coherent
measurements, achieving error $\exp\left(- \Omega\left(\frac{N H_2}{\log(K)
}\right) \right)$, highlighting the power of quantum memory. Furthermore, we
establish a lower bound by proving that all strategies with fixed two-outcome
incoherent POVM must suffer error probability exceeding $ \exp\left( -
O\left(\frac{NH_1}{2^n}\right)\right)$. This framework provides concrete design
principles for overcoming sampling bottlenecks in quantum technologies.
|
2502.14335
|
Information Types in Product Reviews
|
cs.CL
|
Information in text is communicated in a way that supports a goal for its
reader. Product reviews, for example, contain opinions, tips, product
descriptions, and many other types of information that provide both direct
insights, as well as unexpected signals for downstream applications. We devise
a typology of 24 communicative goals in sentences from the product review
domain, and employ a zero-shot multi-label classifier that facilitates
large-scale analyses of review data. In our experiments, we find that the
combination of classes in the typology forecasts helpfulness and sentiment of
reviews, while supplying explanations for these decisions. In addition, our
typology enables analysis of review intent, effectiveness and rhetorical
structure. Characterizing the types of information in reviews unlocks many
opportunities for more effective consumption of this genre.
|
2502.14338
|
English Please: Evaluating Machine Translation for Multilingual Bug
Reports
|
cs.CL cs.SE
|
Accurate translation of bug reports is critical for efficient collaboration
in global software development. In this study, we conduct the first
comprehensive evaluation of machine translation (MT) performance on bug
reports, analyzing the capabilities of DeepL, AWS Translate, and ChatGPT using
data from the Visual Studio Code GitHub repository, specifically focusing on
reports labeled with the english-please tag. To thoroughly assess the accuracy
and effectiveness of each system, we employ multiple machine translation
metrics, including BLEU, BERTScore, COMET, METEOR, and ROUGE. Our findings
indicate that DeepL consistently outperforms the other systems across most
automatic metrics, demonstrating strong lexical and semantic alignment. AWS
Translate performs competitively, particularly in METEOR, while ChatGPT lags in
key metrics. This study underscores the importance of domain adaptation for
translating technical texts and offers guidance for integrating automated
translation into bug-triaging workflows. Moreover, our results establish a
foundation for future research to refine machine translation solutions for
specialized engineering contexts. The code and dataset for this paper are
available at GitHub: https://github.com/av9ash/gitbugs/tree/main/multilingual.
|
2502.14340
|
Earlier Tokens Contribute More: Learning Direct Preference Optimization
From Temporal Decay Perspective
|
cs.CL
|
Direct Preference Optimization (DPO) has gained attention as an efficient
alternative to reinforcement learning from human feedback (RLHF) for aligning
large language models (LLMs) with human preferences. Despite its advantages,
DPO suffers from a length bias, generating responses longer than those from the
reference model. Existing solutions like SimPO and SamPO address this issue but
uniformly treat the contribution of rewards across sequences, overlooking
temporal dynamics. To address this, we propose an enhanced preference optimization
method that incorporates a temporal decay factor controlled by a gamma
parameter. This dynamic weighting mechanism adjusts the influence of each
reward based on its position in the sequence, prioritizing earlier tokens that
are more critical for alignment. By adaptively focusing on more relevant
feedback, our approach mitigates overfitting to less pertinent data and remains
responsive to evolving human preferences. Experimental results on several
benchmarks show that our approach consistently outperforms vanilla DPO by
5.9-8.8 points on AlpacaEval 2 and 3.3-9.7 points on Arena-Hard across
different model architectures and sizes. Furthermore, additional experiments on
mathematical and reasoning benchmarks (MMLU, GSM8K, and MATH) confirm that our
method enhances performance without compromising general capabilities. Our
codebase would be available at \url{https://github.com/LotuSrc/D2PO}.
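The gamma-controlled decay described above can be sketched as position-based weighting of per-token rewards inside the preference margin (a minimal illustration; the function names and the plain DPO loss form are assumptions, not the paper's exact formulation):

```python
import math

def decayed_margin(chosen_logratios, rejected_logratios, gamma=0.95):
    """Weight the per-token log-probability ratios by gamma**t so that
    earlier tokens dominate the preference margin (for gamma < 1)."""
    def decay_sum(ratios):
        return sum((gamma ** t) * r for t, r in enumerate(ratios))
    return decay_sum(chosen_logratios) - decay_sum(rejected_logratios)

def dpo_loss(chosen_logratios, rejected_logratios, beta=0.1, gamma=0.95):
    # standard DPO objective: -log sigmoid(beta * margin)
    margin = decayed_margin(chosen_logratios, rejected_logratios, gamma)
    return -math.log(1.0 / (1.0 + math.exp(-beta * margin)))
```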
|
2502.14344
|
Towards Accurate Binary Spiking Neural Networks: Learning with Adaptive
Gradient Modulation Mechanism
|
cs.CV
|
Binary Spiking Neural Networks (BSNNs) inherit the event-driven paradigm of
SNNs, while also adopting the reduced storage burden of binarization
techniques. These distinct advantages grant BSNNs lightweight and
energy-efficient characteristics, rendering them ideal for deployment on
resource-constrained edge devices. However, due to the binary synaptic weights
and non-differentiable spike function, effectively training BSNNs remains an
open question. In this paper, we conduct an in-depth analysis of the challenge
for BSNN learning, namely the frequent weight sign flipping problem. To
mitigate this issue, we propose an Adaptive Gradient Modulation Mechanism
(AGMM), which is designed to reduce the frequency of weight sign flipping by
adaptively adjusting the gradients during the learning process. The proposed
AGMM enables BSNNs to achieve faster convergence and higher accuracy,
effectively narrowing the gap between BSNNs and their full-precision
equivalents. We validate AGMM on both static and neuromorphic datasets, and
results indicate that it achieves state-of-the-art results among BSNNs. This
work substantially reduces storage demands and enhances SNNs' inherent energy
efficiency, making them highly feasible for resource-constrained environments.
|
2502.14345
|
FlowAgent: Achieving Compliance and Flexibility for Workflow Agents
|
cs.AI
|
The integration of workflows with large language models (LLMs) enables
LLM-based agents to execute predefined procedures, enhancing automation in
real-world applications. Traditional rule-based methods tend to limit the
inherent flexibility of LLMs, as their predefined execution paths restrict the
models' action space, particularly when unexpected, out-of-workflow (OOW)
queries are encountered. Conversely, prompt-based methods allow LLMs to fully
control the flow, which can lead to diminished enforcement of procedural
compliance. To address these challenges, we introduce FlowAgent, a novel agent
framework designed to maintain both compliance and flexibility. We propose the
Procedure Description Language (PDL), which combines the adaptability of
natural language with the precision of code to formulate workflows. Building on
PDL, we develop a comprehensive framework that empowers LLMs to manage OOW
queries effectively, while keeping the execution path under the supervision of
a set of controllers. Additionally, we present a new evaluation methodology to
rigorously assess an LLM agent's ability to handle OOW scenarios, going beyond
routine flow compliance tested in existing benchmarks. Experiments on three
datasets demonstrate that FlowAgent not only adheres to workflows but also
effectively manages OOW queries, highlighting its dual strengths in compliance
and flexibility. The code is available at
https://github.com/Lightblues/FlowAgent.
|
2502.14350
|
Optimize Cardinality Estimation Model Pretraining by Simplifying the
Training Datasets
|
cs.DB cs.LG
|
Cardinality estimation is a key aspect of query optimization research,
and its performance has significantly improved with the integration of machine
learning. To overcome the "cold start" problem or the lack of model
transferability in learned cardinality estimators, some pre-training
cardinality estimation models have been proposed that use learning across
multiple datasets and corresponding workloads. These models typically train on
a dataset created by uniformly sampling from many datasets, but this approach
may not be optimal. By applying the Group Distributionally Robust Optimization
(Group DRO) algorithm to training datasets, we find that some specific training
datasets contribute more significantly to model performance than others. Based
on this observation, we conduct extensive experiments to delve deeper into
pre-training cardinality estimators. Our results show how the performance of
these models can be influenced by the datasets and corresponding workloads.
Finally, we introduce a simplified training dataset, which has been reduced to
a fraction of the size of existing pre-training datasets. Extensive
experimental results demonstrate that the pre-trained cardinality estimator
based on this simplified dataset can still achieve comparable performance to
existing models in zero-shot setups.
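Group DRO itself, the exponentiated-gradient update over group weights from Sagawa et al., can be sketched as follows; applying it with each source dataset as a "group", as the abstract describes, surfaces the datasets whose losses dominate training (variable names are illustrative):

```python
import math

def group_dro_step(group_weights, group_losses, eta=0.1):
    """One exponentiated-gradient update: upweight high-loss groups,
    then renormalize so the weights remain a distribution."""
    updated = [w * math.exp(eta * l) for w, l in zip(group_weights, group_losses)]
    total = sum(updated)
    weights = [w / total for w in updated]
    # the model is then trained on the weighted loss: sum_g w_g * L_g
    weighted_loss = sum(w * l for w, l in zip(weights, group_losses))
    return weights, weighted_loss
```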
|
2502.14351
|
SegAnyPET: Universal Promptable Segmentation from Positron Emission
Tomography Images
|
cs.CV
|
Positron Emission Tomography (PET) imaging plays a crucial role in modern
medical diagnostics by revealing the metabolic processes within a patient's
body, which is essential for quantification of therapy response and monitoring
treatment progress. However, the segmentation of PET images presents unique
challenges due to their lower contrast and less distinct boundaries compared to
other structural medical modalities. Recent developments in segmentation
foundation models have shown superior versatility across diverse natural image
segmentation tasks. Despite the efforts of medical adaptations, these works
primarily focus on structural medical images with detailed physiological
structural information and exhibit poor generalization ability when adapted to
molecular PET imaging. In this paper, we collect and construct PETS-5k, the
largest PET segmentation dataset to date, comprising 5,731 three-dimensional
whole-body PET images and encompassing over 1.3M 2D images. Based on the
established dataset, we develop SegAnyPET, a modality-specific 3D foundation
model for universal promptable segmentation from PET images. To address the
challenge of discrepant annotation quality of PET images, we adopt a cross
prompting confident learning (CPCL) strategy with an uncertainty-guided
self-rectification process to robustly learn segmentation from high-quality
labeled data and low-quality noisy labeled data. Experimental results
demonstrate that SegAnyPET can correctly segment seen and unseen targets using
only one or a few prompt points, outperforming state-of-the-art foundation
models and task-specific fully supervised models with higher accuracy and
strong generalization ability for universal segmentation. As the first
foundation model for PET images, we believe that SegAnyPET will advance the
applications to various downstream tasks for molecular imaging.
|
2502.14352
|
SR-LLM: Rethinking the Structured Representation in Large Language Model
|
cs.CL
|
Structured representations, exemplified by Abstract Meaning Representation
(AMR), have long been pivotal in computational linguistics. However, their role
remains ambiguous in the Large Language Models (LLMs) era. Initial attempts to
integrate structured representation into LLMs via a zero-shot setting yielded
inferior performance. We hypothesize that such a decline stems from the
structure information being passed into LLMs in a code format unfamiliar to
LLMs' training corpora. Consequently, we propose SR-LLM, an innovative
framework with two settings to explore a superior way of integrating structured
representation with LLMs from training-free and training-dependent
perspectives. The former integrates structural information through natural
language descriptions in LLM prompts, whereas its counterpart augments the
model's inference capability through fine-tuning on linguistically described
structured representations. Performance improvements were observed across a
wide range of downstream datasets, with particularly notable gains of 3.17% and
12.38% in
PAWS. To the best of our knowledge, this work represents the pioneering
demonstration that leveraging structural representations can substantially
enhance LLMs' inference capability. We hope that our work sheds light on this
direction and encourages future research to enhance the reasoning and
interoperability of LLMs with structured data.
|
2502.14353
|
Eliminating Majority Illusions
|
cs.CC cs.SI
|
An opinion illusion refers to a phenomenon in social networks where agents
may witness distributions of opinions among their neighbours that do not
accurately reflect the true distribution of opinions in the population as a
whole. A specific case of this occurs when there are only two possible choices,
such as whether to receive the COVID-19 vaccine or vote on EU membership, which
is commonly referred to as a majority illusion. In this work, we study the
topological properties of social networks that lead to opinion illusions and
focus on minimizing the number of agents that need to be influenced to
eliminate these illusions. To do so, we propose an initial, but systematic
study of the algorithmic behaviour of this problem.
We show that the problem is NP-hard even for underlying topologies that are
rather restrictive, being planar and of bounded diameter. We then look for
exact algorithms that scale well as the input grows (FPT). We argue that no
such algorithms exist even when the number of vertices that must be influenced
is bounded, or when the social network is arranged in a
``path-like'' fashion (has bounded pathwidth). On the positive side, we present
an FPT algorithm for networks with ``star-like'' structure (bounded vertex
cover number). Finally, we construct an FPT algorithm for ``tree-like''
networks (bounded treewidth) when the number of vertices that must be
influenced is bounded. This algorithm is then used to provide a PTAS for planar
graphs.
|
2502.14354
|
Self-Improvement Towards Pareto Optimality: Mitigating Preference
Conflicts in Multi-Objective Alignment
|
cs.LG cs.CL
|
Multi-Objective Alignment (MOA) aims to align LLMs' responses with multiple
human preference objectives, with Direct Preference Optimization (DPO) emerging
as a prominent approach. However, we find that DPO-based MOA approaches suffer
from widespread preference conflicts in the data, where different objectives
favor different responses. This results in conflicting optimization directions,
hindering the optimization on the Pareto Front. To address this, we propose to
construct Pareto-optimal responses to resolve preference conflicts. To
efficiently obtain and utilize such responses, we propose a self-improving DPO
framework that enables LLMs to self-generate and select Pareto-optimal
responses for self-supervised preference alignment. Extensive experiments on
two datasets demonstrate the superior Pareto Front achieved by our framework
compared to various baselines. Code is available at
\url{https://github.com/zyttt-coder/SIPO}.
|
2502.14355
|
Triply Laplacian Scale Mixture Modeling for Seismic Data Noise
Suppression
|
cs.CV
|
Sparsity-based tensor recovery methods have shown great potential in
suppressing seismic data noise. These methods exploit tensor sparsity measures
capturing the low-dimensional structures inherent in seismic data tensors to
remove noise by applying sparsity constraints through soft-thresholding or
hard-thresholding operators. However, in these methods, considering that real
seismic data are non-stationary and affected by noise, the variances of tensor
coefficients are unknown and may be difficult to accurately estimate from the
degraded seismic data, leading to undesirable noise suppression performance. In
this paper, we propose a novel triply Laplacian scale mixture (TLSM) approach
for seismic data noise suppression, which significantly improves the estimation
accuracy of both the sparse tensor coefficients and hidden scalar parameters.
To make the optimization problem manageable, an alternating direction method of
multipliers (ADMM) algorithm is employed to solve the proposed TLSM-based
seismic data noise suppression problem. Extensive experimental results on
synthetic and field seismic data demonstrate that the proposed TLSM algorithm
outperforms many state-of-the-art seismic data noise suppression methods in
both quantitative and qualitative evaluations while providing exceptional
computational efficiency.
|
2502.14356
|
Full-Step-DPO: Self-Supervised Preference Optimization with Step-wise
Rewards for Mathematical Reasoning
|
cs.CL
|
Direct Preference Optimization (DPO) often struggles with long-chain
mathematical reasoning. Existing approaches, such as Step-DPO, typically
improve this by focusing on the first erroneous step in the reasoning chain.
However, they overlook all other steps and rely heavily on humans or GPT-4 to
identify erroneous steps. To address these issues, we propose Full-Step-DPO, a
novel DPO framework tailored for mathematical reasoning. Instead of optimizing
only the first erroneous step, it leverages step-wise rewards from the entire
reasoning chain. This is achieved by training a self-supervised process reward
model, which automatically scores each step, providing rewards while avoiding
reliance on external signals. Furthermore, we introduce a novel step-wise DPO
loss, which dynamically updates gradients based on these step-wise rewards.
This endows stronger reasoning capabilities to language models. Extensive
evaluations on both in-domain and out-of-domain mathematical reasoning
benchmarks across various base language models, demonstrate that Full-Step-DPO
achieves superior performance compared to state-of-the-art baselines.
|
2502.14358
|
An exposition of recent list-size bounds of FRS Codes
|
cs.CC cs.IT math.CO math.IT
|
In the last year, there have been some remarkable improvements in the
combinatorial list-size bounds of Folded Reed Solomon codes and multiplicity
codes. Starting from the work of Kopparty, Ron-Zewi, Saraf and Wootters (SIAM
J. Comput. 2023) (and subsequent simplifications due to Tamo (IEEE Trans.
Inform. Theory 2024)), we have seen dramatic improvements in the list-size
bounds of FRS codes due to Srivastava (SODA 2025) and Chen & Zhang (STOC 2025). In
this note, we give a short exposition of these three results (Tamo, Srivastava
and Chen-Zhang).
|
2502.14359
|
Triangulating LLM Progress through Benchmarks, Games, and Cognitive
Tests
|
cs.CL
|
We examine three evaluation paradigms: large question-answering benchmarks
(e.g., MMLU and BBH), interactive games (e.g., Signalling Games or Taboo), and
cognitive tests (e.g., for working memory or theory of mind). First, we
investigate which of the first two paradigms (benchmarks or games) is most
effective at discriminating LLMs of varying quality. Then, inspired by human cognitive
assessments, we compile a suite of targeted tests that measure cognitive
abilities deemed essential for effective language use, and we investigate their
correlation with model performance in benchmarks and games. Our analyses reveal
that interactive games are superior to standard benchmarks in discriminating
models. Causal and logical reasoning correlate with both static and interactive
tests, while differences emerge regarding core executive functions and
social/emotional skills, which correlate more with games. We advocate the
development of new interactive benchmarks and targeted cognitive tasks inspired
by assessing human abilities but designed specifically for LLMs.
|
2502.14360
|
Weed Detection using Convolutional Neural Network
|
cs.CV
|
In this paper we use convolutional neural networks (CNNs) for weed detection
in agricultural land. We specifically investigate the application of two CNN
layer types, Conv2d and dilated Conv2d, for weed detection in crop fields. The
suggested method extracts features from the input images using pre-trained
models, which are subsequently fine-tuned for weed detection. The experiments
used a sizable dataset of 15,336 segments: 3,249 of soil, 7,376 of soybean,
3,520 of grass, and 1,191 of broadleaf weeds. The results show that the
suggested approach can accurately and successfully detect weeds with an
accuracy of 94%. This study has significant ramifications for
lowering the usage of toxic herbicides and increasing the effectiveness of weed
management in agriculture.
|
2502.14361
|
Retrieval-Augmented Process Reward Model for Generalizable Mathematical
Reasoning
|
cs.AI cs.IR
|
While large language models (LLMs) have significantly advanced mathematical
reasoning, Process Reward Models (PRMs) have been developed to evaluate the
logical validity of reasoning steps. However, PRMs still struggle with
out-of-distribution (OOD) challenges. This paper identifies key OOD issues,
including step OOD, caused by differences in reasoning patterns across model
types and sizes, and question OOD, which arises from dataset shifts between
training data and real-world problems. To address these issues, we introduce
Retrieval-Augmented Process Reward Model (RetrievalPRM), a novel framework
designed to tackle these OOD issues. By utilizing a two-stage
retrieval-enhanced mechanism, RetrievalPRM retrieves semantically similar
questions and steps as a warmup, enhancing PRM's ability to evaluate target
steps and improving generalization and reasoning consistency across different
models and problem types. Our extensive experiments demonstrate that
RetrievalPRM outperforms existing baselines across multiple real-world
datasets. Our open-source contributions include a retrieval-enhanced dataset, a
tuning framework for PRM training, and the RetrievalPRM model, establishing a
new standard for PRM performance.
|
2502.14363
|
Topology-Aware Wavelet Mamba for Airway Structure Segmentation in
Postoperative Recurrent Nasopharyngeal Carcinoma CT Scans
|
eess.IV cs.CV
|
Nasopharyngeal carcinoma (NPC) patients often undergo radiotherapy and
chemotherapy, which can lead to postoperative complications such as limited
mouth opening and joint stiffness, particularly in recurrent cases that require
re-surgery. These complications can affect airway function, making accurate
postoperative airway risk assessment essential for managing patient care.
Accurate segmentation of airway-related structures in postoperative CT scans is
crucial for assessing these risks. This study introduces TopoWMamba
(Topology-aware Wavelet Mamba), a novel segmentation model specifically
designed to address the challenges of postoperative airway risk evaluation in
recurrent NPC patients. TopoWMamba combines wavelet-based multi-scale feature
extraction, state-space sequence modeling, and topology-aware modules to
segment airway-related structures in CT scans robustly. By leveraging the
Wavelet-based Mamba Block (WMB) for hierarchical frequency decomposition and
the Snake Conv VSS (SCVSS) module to preserve anatomical continuity, TopoWMamba
effectively captures both fine-grained boundaries and global structural
context, crucial for accurate segmentation in complex postoperative scenarios.
Through extensive testing on the NPCSegCT dataset, TopoWMamba achieves an
average Dice score of 88.02%, outperforming existing models such as UNet,
Attention UNet, and SwinUNet. Additionally, TopoWMamba is tested on the SegRap
2023 Challenge dataset, where it shows a significant improvement in trachea
segmentation with a Dice score of 95.26%. The proposed model provides a strong
foundation for automated segmentation, enabling more accurate postoperative
airway risk evaluation.
|
2502.14365
|
Is Q-learning an Ill-posed Problem?
|
cs.LG cs.AI
|
This paper investigates the instability of Q-learning in continuous
environments, a challenge frequently encountered by practitioners.
Traditionally, this instability is attributed to bootstrapping and regression
model errors. Using a representative reinforcement learning benchmark, we
systematically examine the effects of bootstrapping and model inaccuracies by
incrementally eliminating these potential error sources. Our findings reveal
that even in relatively simple benchmarks, the fundamental task of Q-learning -
iteratively learning a Q-function from policy-specific target values - can be
inherently ill-posed and prone to failure. These insights cast doubt on the
reliability of Q-learning as a universal solution for reinforcement learning
problems.
|
2502.14366
|
Entropy-UID: A Method for Optimizing Information Density
|
cs.CL cs.AI
|
Balanced and efficient information flow is essential for optimizing language
generation models. In this work, we propose Entropy-UID, a new token selection
method that balances entropy and Uniform Information Density (UID) principles
for enhanced efficiency of text generation. Our approach adaptively adjusts
token selection by jointly minimizing entropy and surprisal, promoting more
even information distribution across generated sequences. Theoretical
validation demonstrates that Entropy-UID optimally reduces information spikes
while maintaining fluency and coherence. The method has been evaluated using
information-theoretic metrics on multiple benchmark datasets, including
WikiText-2, OpenWebText, and WMT. Experimental results show that Entropy-UID
achieves lower surprisal and entropy variance compared to standard GPT-2 and
alternative heuristics, leading to more balanced and human-like text
generation. Our findings point towards the potential of leveraging
information-theoretic constraints to refine token selection strategies in
autoregressive language models.
|
2502.14370
|
PPO-MI: Efficient Black-Box Model Inversion via Proximal Policy
Optimization
|
cs.LG cs.CV
|
Model inversion attacks pose a significant privacy risk by attempting to
reconstruct private training data from trained models. Most of the existing
methods either depend on gradient estimation or require white-box access to
model parameters, which limits their applicability in practical scenarios. In
this paper, we propose PPO-MI, a novel reinforcement learning-based framework
for black-box model inversion attacks. Our approach formulates the inversion
task as a Markov Decision Process, where an agent navigates the latent space of
a generative model to reconstruct private training samples using only model
predictions. By employing Proximal Policy Optimization (PPO) with a
momentum-based state transition mechanism, along with a reward function
balancing prediction accuracy and exploration, PPO-MI ensures efficient latent
space exploration and high query efficiency. Extensive experiments illustrate
that PPO-MI outperforms existing methods while requiring less attack
knowledge, and that it is robust across various model architectures and
datasets. These results underline its effectiveness and generalizability in
practical black-box scenarios, raising important considerations for the privacy
vulnerabilities of deployed machine learning models.
|