| id | title | categories | abstract |
|---|---|---|---|
2502.03654
|
Gompertz Linear Units: Leveraging Asymmetry for Enhanced Learning
Dynamics
|
cs.LG cs.AI cs.CV
|
Activation functions are fundamental elements of deep learning architectures
as they significantly influence training dynamics. ReLU, while widely used, is
prone to the dying neuron problem, which has been mitigated by variants such as
LeakyReLU, PReLU, and ELU that better handle negative neuron outputs. Recently,
self-gated activations like GELU and Swish have emerged as state-of-the-art
alternatives, leveraging their smoothness to ensure stable gradient flow and
prevent neuron inactivity. In this work, we introduce the Gompertz Linear Unit
(GoLU), a novel self-gated activation function defined as $\mathrm{GoLU}(x) = x
\, \mathrm{Gompertz}(x)$, where $\mathrm{Gompertz}(x) = e^{-e^{-x}}$. The GoLU
activation leverages the asymmetry in the Gompertz function to reduce variance
in the latent space more effectively compared to GELU and Swish, while
preserving robust gradient flow. Extensive experiments across diverse tasks,
including Image Classification, Language Modeling, Semantic Segmentation,
Object Detection, Instance Segmentation, and Diffusion, highlight GoLU's
superior performance relative to state-of-the-art activation functions,
establishing GoLU as a robust alternative to existing activation functions.
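A minimal numerical sketch of the activation exactly as defined in the abstract; the GELU tanh approximation is included only for comparison and is not part of the paper's contribution:

```python
import math

def golu(x: float) -> float:
    """Gompertz Linear Unit: x * Gompertz(x) with Gompertz(x) = exp(-exp(-x))."""
    return x * math.exp(-math.exp(-x))

def gelu(x: float) -> float:
    """Tanh approximation of GELU, shown only for comparison."""
    return 0.5 * x * (1.0 + math.tanh(math.sqrt(2.0 / math.pi) * (x + 0.044715 * x ** 3)))

# The Gompertz gate is asymmetric: it decays much faster on the negative side,
# e.g. Gompertz(-2) = exp(-exp(2)) ~ 6.2e-4, while Gompertz(2) ~ 0.873.
for x in (-2.0, 0.0, 2.0):
    print(f"x={x:+.1f}  GoLU={golu(x):+.4f}  GELU={gelu(x):+.4f}")
```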
|
2502.03656
|
A Study in Dataset Distillation for Image Super-Resolution
|
cs.CV cs.AI cs.LG
|
Dataset distillation is the concept of condensing large datasets into smaller
but highly representative synthetic samples. While previous research has
primarily focused on image classification, its application to image
Super-Resolution (SR) remains underexplored. This exploratory work studies
multiple dataset distillation techniques applied to SR, including pixel- and
latent-space approaches under different aspects. Our experiments demonstrate
that a 91.12% dataset size reduction can be achieved while maintaining
comparable SR performance to the full dataset. We further analyze
initialization strategies and distillation methods to optimize memory
efficiency and computational costs. Our findings provide new insights into
dataset distillation for SR and set the stage for future advancements.
|
2502.03658
|
Advancing Weight and Channel Sparsification with Enhanced Saliency
|
cs.LG cs.CV
|
Pruning aims to accelerate and compress models by removing redundant
parameters, identified by specifically designed importance scores which are
usually imperfect. This removal is irreversible, often leading to subpar
performance in pruned models. Dynamic sparse training, while attempting to
adjust sparse structures during training for continual reassessment and
refinement, has several limitations including criterion inconsistency between
pruning and growth, unsuitability for structured sparsity, and short-sighted
growth strategies. Our paper introduces an efficient, innovative paradigm to
enhance a given importance criterion for either unstructured or structured
sparsity. Our method separates the model into an active structure for
exploitation and an exploration space for potential updates. During
exploitation, we optimize the active structure, whereas in exploration, we
reevaluate and reintegrate parameters from the exploration space through a
pruning and growing step consistently guided by the same given importance
criterion. To prepare for exploration, we briefly "reactivate" all parameters
in the exploration space and train them for a few iterations while keeping the
active part frozen, offering a preview of the potential performance gains from
reintegrating these parameters. We show on various datasets and configurations
that an existing importance criterion, even one as simple as magnitude, can be
enhanced with our method to achieve state-of-the-art performance and training
cost reductions. Notably, on ImageNet with ResNet50, our method achieves a +1.3
increase in Top-1
accuracy over prior art at 90% ERK sparsity. Compared with the SOTA latency
pruning method HALP, we reduced its training cost by over 70% while attaining a
faster and more accurate pruned model.
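The prune-and-grow step guided by one shared criterion can be sketched with a toy magnitude rule; the function and parameter names below are illustrative and not the paper's implementation:

```python
def magnitude_prune_grow(weights: dict, sparsity: float) -> dict:
    """One consistent prune/grow step: rank ALL parameters (currently active
    plus briefly reactivated ones) by the same magnitude criterion, keep the
    top (1 - sparsity) fraction, and zero out the rest. This is a toy sketch
    of the paradigm, using magnitude as the single importance criterion."""
    k = int(len(weights) * sparsity)                       # weights to remove
    ranked = sorted(weights, key=lambda n: abs(weights[n]))
    dropped = set(ranked[:k])                              # least important
    return {n: (0.0 if n in dropped else w) for n, w in weights.items()}

w = {"a": 0.9, "b": -0.05, "c": 0.4, "d": 0.01}
print(magnitude_prune_grow(w, 0.5))   # zeroes the two smallest: "b" and "d"
```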
|
2502.03660
|
Energy & Force Regression on DFT Trajectories is Not Enough for
Universal Machine Learning Interatomic Potentials
|
cond-mat.mtrl-sci cs.AI cs.LG
|
Universal Machine Learning Interatomic Potentials (MLIPs) enable accelerated
simulations for materials discovery. However, current research efforts fail to
impactfully utilize MLIPs due to: 1. Overreliance on Density Functional Theory
(DFT) for MLIP training data creation; 2. MLIPs' inability to reliably and
accurately perform large-scale molecular dynamics (MD) simulations for diverse
materials; 3. Limited understanding of MLIPs' underlying capabilities. To
address these shortcomings, we argue that MLIP research efforts should
prioritize: 1. Employing more accurate simulation methods for large-scale MLIP
training data creation (e.g. Coupled Cluster Theory) that cover a wide range of
materials design spaces; 2. Creating MLIP metrology tools that leverage
large-scale benchmarking, visualization, and interpretability analyses to
provide a deeper understanding of MLIPs' inner workings; 3. Developing
computationally efficient MLIPs to execute MD simulations that accurately model
a broad set of materials properties. Together, these interdisciplinary research
directions can help further the real-world application of MLIPs to accurately
model complex materials at device scale.
|
2502.03662
|
EC-SBM Synthetic Network Generator
|
cs.SI
|
Generating high-quality synthetic networks with realistic community structure
is vital to effectively evaluate community detection algorithms. In this study,
we propose a new synthetic network generator called the Edge-Connected
Stochastic Block Model (EC-SBM). The goal of EC-SBM is to take a given
clustered real-world network and produce a synthetic network that resembles the
clustered real-world network with respect to both network and
community-specific criteria. In particular, we focus on simulating the internal
edge connectivity of the clusters in the reference clustered network. Our
extensive performance study on large real-world networks shows that EC-SBM has
high accuracy in both network and community-specific criteria, and is generally
more accurate than current alternative approaches for this problem.
Furthermore, EC-SBM is fast enough to scale to real-world networks with
millions of nodes.
|
2502.03664
|
Contrastive Learning for Cold Start Recommendation with Adaptive Feature
Fusion
|
cs.IR cs.LG
|
This paper proposes a cold start recommendation model that integrates
contrastive learning, aiming to solve the problem of performance degradation of
recommendation systems in cold start scenarios due to the scarcity of user and
item interaction data. The model dynamically adjusts the weights of key
features through an adaptive feature selection module and effectively
integrates user attributes, item meta-information, and contextual features by
combining a multimodal feature fusion mechanism, thereby improving
recommendation performance. In addition, the model introduces a contrastive
learning mechanism to enhance the robustness and generalization ability of
feature representation by constructing positive and negative sample pairs.
Experiments are conducted on the MovieLens-1M dataset. The results show that
the proposed model significantly outperforms mainstream recommendation methods
such as Matrix Factorization, LightGBM, DeepFM, and AutoRec in terms of HR,
NDCG, MRR, and Recall, especially in cold start scenarios. Ablation experiments
further verify the key role of each module in improving model performance, and
the learning rate sensitivity analysis shows that a moderate learning rate is
crucial to the optimization effect of the model. This study not only provides a
new solution to the cold start problem but also provides an important reference
for the application of contrastive learning in recommendation systems. In the
future, this model is expected to play a role in a wider range of scenarios,
such as real-time recommendation and cross-domain recommendation.
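The positive/negative pair mechanism can be illustrated with a generic InfoNCE-style contrastive loss; the abstract does not specify the paper's exact loss, encoders, or temperature, so everything below is a hedged stand-in:

```python
import math

def info_nce(anchor, positive, negatives, temperature=0.1):
    """InfoNCE-style loss over one positive and several negative pairs:
    -log softmax of the positive's similarity against all similarities."""
    def cos(u, v):
        dot = sum(a * b for a, b in zip(u, v))
        nu = math.sqrt(sum(a * a for a in u))
        nv = math.sqrt(sum(b * b for b in v))
        return dot / (nu * nv)
    logits = [cos(anchor, positive) / temperature] + [
        cos(anchor, n) / temperature for n in negatives
    ]
    m = max(logits)                                   # stable log-sum-exp
    lse = m + math.log(sum(math.exp(l - m) for l in logits))
    return lse - logits[0]

# A well-aligned positive pair yields a near-zero loss:
loss = info_nce([1.0, 0.0], [0.9, 0.1], [[-1.0, 0.2], [0.0, 1.0]])
print(loss)
```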
|
2502.03668
|
Privacy-Preserving Generative Models: A Comprehensive Survey
|
cs.LG cs.CR
|
Despite the groundbreaking success of generative models, the need to study
their implications for privacy and utility has become more urgent. Although many studies
have demonstrated the privacy threats brought by GANs, no existing survey has
systematically categorized the privacy and utility perspectives of GANs and
VAEs. In this article, we comprehensively study privacy-preserving generative
models, articulating the novel taxonomies for both privacy and utility metrics
by analyzing 100 research publications. Finally, we discuss the current
challenges and future research directions that help new researchers gain
insight into the underlying concepts.
|
2502.03669
|
Unrealized Expectations: Comparing AI Methods vs Classical Algorithms
for Maximum Independent Set
|
cs.LG cs.AI cs.DM math.OC stat.ML
|
AI methods, such as generative models and reinforcement learning, have
recently been applied to combinatorial optimization (CO) problems, especially
NP-hard ones. This paper compares such GPU-based methods with classical
CPU-based methods on Maximum Independent Set (MIS). Experiments on standard
graph families show that AI-based algorithms fail to outperform and, in many
cases, even to match the solution quality of the state-of-the-art classical solver KaMIS
running on a single CPU. Some GPU-based methods even perform similarly to the
simplest heuristic, degree-based greedy. Even with post-processing techniques
like local search, AI-based methods still perform worse than CPU-based solvers.
We develop a new mode of analysis to reveal that non-backtracking AI methods,
e.g. LTFT (which is based on GFlowNets), end up reasoning similarly to the
simplest degree-based greedy approach, and thus worse than KaMIS. We also find
that CPU-based algorithms, notably KaMIS, have strong performance on sparse
random graphs, which appears to refute a well-known conjectured upper bound for
efficient algorithms from Coja-Oghlan & Efthymiou (2015).
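The degree-based greedy baseline mentioned above is simple enough to state exactly:

```python
def degree_greedy_mis(adj):
    """Degree-based greedy for Maximum Independent Set: repeatedly take a
    minimum-degree vertex, then delete it and its neighbors from the graph.
    Ties are broken by vertex id for determinism."""
    adj = {v: set(ns) for v, ns in adj.items()}        # mutable local copy
    independent = []
    while adj:
        v = min(adj, key=lambda u: (len(adj[u]), u))
        independent.append(v)
        for u in list(adj[v]) + [v]:                   # v and all its neighbors
            for w in adj.get(u, ()):
                if w in adj:
                    adj[w].discard(u)
            adj.pop(u, None)
    return independent

# Path graph 0-1-2-3-4: greedy recovers the optimal set {0, 2, 4}.
path = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}
print(degree_greedy_mis(path))
```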
|
2502.03670
|
Chaos into Order: Neural Framework for Expected Value Estimation of
Stochastic Partial Differential Equations
|
cs.LG
|
Stochastic Partial Differential Equations (SPDEs) are fundamental to modeling
complex systems in physics, finance, and engineering, yet their numerical
estimation remains a formidable challenge. Traditional methods rely on
discretization, introducing computational inefficiencies, and limiting
applicability in high-dimensional settings. In this work, we introduce a novel
neural framework for SPDE estimation that eliminates the need for
discretization, enabling direct estimation of expected values across arbitrary
spatio-temporal points. We develop and compare two distinct neural
architectures: Loss Enforced Conditions (LEC), which integrates physical
constraints into the loss function, and Model Enforced Conditions (MEC), which
embeds these constraints directly into the network structure. Through extensive
experiments on the stochastic heat equation, Burgers' equation, and
Kardar-Parisi-Zhang (KPZ) equation, we reveal a trade-off: while LEC achieves
superior residual minimization and generalization, MEC enforces initial
conditions exactly and boundary conditions with exceptionally high accuracy.
Our findings highlight the immense potential of
neural-based SPDE solvers, particularly for high-dimensional problems where
conventional techniques falter. By circumventing discretization and explicitly
modeling uncertainty, our approach opens new avenues for solving SPDEs in
fields ranging from quantitative finance to turbulence modeling. To the best of
our knowledge, this is the first neural framework capable of directly
estimating the expected values of SPDEs in an entirely non-discretized manner,
offering a step forward in scientific computing.
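The LEC/MEC distinction can be illustrated on an initial condition; the "network" below is a fixed stand-in function, since the abstract does not specify architectures, and the condition and domain are illustrative:

```python
import math

def ic(x):                        # initial condition u(x, 0), illustrative
    return math.sin(math.pi * x)

def net(x, t, w=0.3):             # stand-in for a (trained) network output
    return w * x * (1.0 - x)

# MEC-style: build the constraint into the model structure, so the initial
# condition holds exactly by construction (the "hard" route).
def u_mec(x, t):
    return ic(x) + t * net(x, t)

# LEC-style: leave the model unconstrained and add a penalty term to the
# loss, so the constraint is only minimized approximately (the "soft" route).
def lec_ic_penalty(xs):
    return sum((net(x, 0.0) - ic(x)) ** 2 for x in xs) / len(xs)

print(u_mec(0.5, 0.0))                     # equals ic(0.5) = 1.0 exactly
print(lec_ic_penalty([0.25, 0.5, 0.75]))   # > 0: constraint only penalized
```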
|
2502.03671
|
Advancing Reasoning in Large Language Models: Promising Methods and
Approaches
|
cs.CL cs.AI
|
Large Language Models (LLMs) have succeeded remarkably in various natural
language processing (NLP) tasks, yet their reasoning capabilities remain a
fundamental challenge. While LLMs exhibit impressive fluency and factual
recall, their ability to perform complex reasoning-spanning logical deduction,
mathematical problem-solving, commonsense inference, and multi-step
reasoning-often falls short of human expectations. This survey provides a
comprehensive review of emerging techniques enhancing reasoning in LLMs. We
categorize existing methods into key approaches, including prompting strategies
(e.g., Chain-of-Thought reasoning, Self-Consistency, and Tree-of-Thought
reasoning), architectural innovations (e.g., retrieval-augmented models,
modular reasoning networks, and neuro-symbolic integration), and learning
paradigms (e.g., fine-tuning with reasoning-specific datasets, reinforcement
learning, and self-supervised reasoning objectives). Additionally, we explore
evaluation frameworks used to assess reasoning in LLMs and highlight open
challenges, such as hallucinations, robustness, and reasoning generalization
across diverse tasks. By synthesizing recent advancements, this survey aims to
provide insights into promising directions for future research and practical
applications of reasoning-augmented LLMs.
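One of the prompting strategies surveyed above, Self-Consistency, reduces to a majority vote over the final answers of independently sampled reasoning chains; the answers below are mock stand-ins for LLM samples:

```python
from collections import Counter

def self_consistency(answers):
    """Self-Consistency: sample several reasoning chains, keep only their
    final answers, and return the majority answer plus its agreement rate."""
    answer, votes = Counter(answers).most_common(1)[0]
    return answer, votes / len(answers)

# Mock final answers from five sampled chains:
answer, agreement = self_consistency(["42", "42", "41", "42", "7"])
print(answer, agreement)
```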
|
2502.03672
|
Physically consistent predictive reduced-order modeling by enhancing
Operator Inference with state constraints
|
physics.comp-ph cs.LG cs.NA math.NA
|
Numerical simulations of complex multiphysics systems, such as char
combustion considered herein, yield numerous state variables that inherently
exhibit physical constraints. This paper presents a new approach to augment
Operator Inference -- a methodology within scientific machine learning that
enables learning from data a low-dimensional representation of a
high-dimensional system governed by nonlinear partial differential equations --
by embedding such state constraints in the reduced-order model predictions. In
the model learning process, we propose a new way to choose regularization
hyperparameters based on a key performance indicator. Since embedding state
constraints improves the stability of the Operator Inference reduced-order
model, we compare the proposed state constraints-embedded Operator Inference
with the standard Operator Inference and other stability-enhancing approaches.
For an application to char combustion, we demonstrate that the proposed
approach yields state predictions superior to the other methods regarding
stability and accuracy. It extrapolates over 200% past the training regime
while being computationally efficient and physically consistent.
|
2502.03674
|
An Empirical Study of Methods for Small Object Detection from Satellite
Imagery
|
cs.CV cs.AI
|
This paper reviews object detection methods for finding small objects from
remote sensing imagery and provides an empirical evaluation of four
state-of-the-art methods to gain insights into method performance and technical
challenges. In particular, we use car detection from urban satellite images and
bee box detection from satellite images of agricultural lands as application
scenarios. Drawing on existing surveys and literature, we identify
several top-performing methods for the empirical study. Public, high-resolution
satellite image datasets are used in our experiments.
|
2502.03676
|
Anytime Planning for End-Effector Trajectory Tracking
|
cs.RO
|
End-effector trajectory tracking algorithms find joint motions that drive
robot manipulators to track reference trajectories. In practical scenarios,
anytime algorithms are preferred for their ability to quickly generate initial
motions and continuously refine them over time. In this paper, we present an
algorithmic framework that adapts common graph-based trajectory tracking
algorithms to be anytime and enhances their efficiency and effectiveness. Our
key insight is to identify guide paths that approximately track the reference
trajectory and strategically bias sampling toward the guide paths. We
demonstrate the effectiveness of the proposed framework by restructuring two
existing graph-based trajectory tracking algorithms and evaluating the updated
algorithms in three experiments.
|
2502.03678
|
Reflection-Window Decoding: Text Generation with Selective Refinement
|
cs.CL cs.AI cs.LG
|
The autoregressive decoding for text generation in large language models
(LLMs), while widely used, is inherently suboptimal due to the lack of a
built-in mechanism to perform refinement and/or correction of the generated
content. In this paper, we consider optimality in terms of the joint
probability over the generated response, when jointly considering all tokens at
the same time. We theoretically characterize the potential deviation of the
autoregressively generated response from its globally optimal counterpart that
is of the same length. Our analysis suggests that we need to be cautious when
noticeable uncertainty arises during text generation, which may signal the
sub-optimality of the generation history. To address the pitfall of
autoregressive decoding for text generation, we propose an approach that
incorporates a sliding reflection window and a pausing criterion, such that
refinement and generation can be carried out interchangeably as the decoding
proceeds. Our selective refinement framework strikes a balance between
efficiency and optimality, and our extensive experimental results demonstrate
the effectiveness of our approach.
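A pausing criterion of the kind described can be sketched as an entropy threshold on the next-token distribution; the paper's actual criterion and threshold may differ, so this is only a hedged illustration:

```python
import math

def token_entropy(probs):
    """Shannon entropy (in nats) of a next-token probability distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def should_pause(probs, threshold=1.0):
    """Trigger reflection when next-token uncertainty is high, signaling
    possible sub-optimality of the generation history. The threshold here
    is an assumption for illustration."""
    return token_entropy(probs) > threshold

print(should_pause([0.97, 0.01, 0.01, 0.01]))  # confident -> keep decoding
print(should_pause([0.25, 0.25, 0.25, 0.25]))  # uncertain -> pause and refine
```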
|
2502.03681
|
On the effects of angular acceleration in orientation estimation using
inertial measurement units
|
eess.SY cs.SY
|
Determining the orientation of a rigid body using an inertial measurement
unit is a common problem in many engineering applications. However, sensor
fusion algorithms suffer from performance loss when other motions besides the
gravitational acceleration affect the accelerometer. In this paper, we show
that linear accelerations caused by rotational accelerations lead to additional
zeros in the linearized transfer functions, which are strongly dependent on the
operating point. These zeros lead to non-minimum phase systems, which are known
to be challenging to control. In addition, we demonstrate how Mahony and
Madgwick filters can mitigate the effects of the additional acceleration, but
at the cost of reduced bandwidth. This generates insights into a fundamental
problem in estimation, that are transferable to many practical applications.
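The bandwidth trade-off can be seen even in a scalar complementary filter, a simpler cousin of the Mahony/Madgwick filters; the gains and signals below are illustrative, not taken from the paper:

```python
def complementary_filter(angle, gyro_rate, accel_angle, dt, alpha=0.98):
    # Blend the integrated gyro (trusted at high frequency) with the
    # accelerometer tilt estimate (trusted at low frequency). A higher alpha
    # suppresses accelerometer disturbances but corrects more slowly,
    # i.e. reduced bandwidth.
    return alpha * (angle + gyro_rate * dt) + (1.0 - alpha) * accel_angle

def run(alpha, steps=200):
    # True tilt stays at 0 (the gyro reads 0), but a linear acceleration
    # makes the accelerometer report a spurious +0.5 rad of tilt.
    angle = 0.0
    for _ in range(steps):
        angle = complementary_filter(angle, 0.0, 0.5, 0.01, alpha)
    return angle

err_low_alpha, err_high_alpha = run(alpha=0.90), run(alpha=0.99)
print(err_low_alpha, err_high_alpha)  # higher alpha absorbs less disturbance
```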
|
2502.03685
|
Controlled LLM Decoding via Discrete Auto-regressive Biasing
|
cs.CL cs.LG stat.ML
|
Controlled text generation allows for enforcing user-defined constraints on
large language model outputs, an increasingly important field as LLMs become
more prevalent in everyday life. One common approach uses energy-based
decoding, which defines a target distribution through an energy function that
combines multiple constraints into a weighted average. However, these methods
often struggle to balance fluency with constraint satisfaction, even with
extensive tuning of the energy function's coefficients. In this paper, we
identify that this suboptimal balance arises from sampling in continuous space
rather than the natural discrete space of text tokens. To address this, we
propose Discrete Auto-regressive Biasing, a controlled decoding algorithm that
leverages gradients while operating entirely in the discrete text domain.
Specifically, we introduce a new formulation for controlled text generation by
defining a joint distribution over the generated sequence and an auxiliary bias
sequence. To efficiently sample from this joint distribution, we propose a
Langevin-within-Gibbs sampling algorithm using gradient-based discrete MCMC.
Our method significantly improves constraint satisfaction while maintaining
comparable or better fluency, all with even lower computational costs. We
demonstrate the advantages of our controlled decoding method on sentiment
control, language detoxification, and keyword-guided generation.
|
2502.03686
|
Variational Control for Guidance in Diffusion Models
|
cs.LG cs.AI cs.CV stat.ML
|
Diffusion models exhibit excellent sample quality, but existing guidance
methods often require additional model training or are limited to specific
tasks. We revisit guidance in diffusion models from the perspective of
variational inference and control, introducing Diffusion Trajectory Matching
(DTM) that enables guiding pretrained diffusion trajectories to satisfy a
terminal cost. DTM unifies a broad class of guidance methods and enables novel
instantiations. We introduce a new method within this framework that achieves
state-of-the-art results on several linear and (blind) non-linear inverse
problems without requiring additional model training or modifications. For
instance, in ImageNet non-linear deblurring, our model achieves an FID score of
34.31, significantly improving over the best pretrained-method baseline (FID
78.07). We will make the code available in a future update.
|
2502.03687
|
Conditional Diffusion Models are Medical Image Classifiers that Provide
Explainability and Uncertainty for Free
|
cs.CV cs.LG
|
Discriminative classifiers have become a foundational tool in deep learning
for medical imaging, excelling at learning separable features of complex data
distributions. However, these models often need careful design, augmentation,
and training techniques to ensure safe and reliable deployment. Recently,
diffusion models have become synonymous with generative modeling in 2D. These
models showcase robustness across a range of tasks including natural image
classification, where classification is performed by comparing reconstruction
errors across images generated for each possible conditioning input. This work
presents the first exploration of the potential of class conditional diffusion
models for 2D medical image classification. First, we develop a novel majority
voting scheme shown to improve the performance of medical diffusion
classifiers. Next, extensive experiments on the CheXpert and ISIC Melanoma skin
cancer datasets demonstrate that foundation and trained-from-scratch diffusion
models achieve competitive performance against SOTA discriminative classifiers
without the need for explicit supervision. In addition, we show that diffusion
classifiers are intrinsically explainable, and can be used to quantify the
uncertainty of their predictions, increasing their trustworthiness and
reliability in safety-critical, clinical contexts. Further information is
available on our project page:
https://faverogian.github.io/med-diffusion-classifier.github.io/
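The reconstruction-error classification rule and a simple majority-vote wrapper can be sketched as follows; the error values and label names are mock data, and the paper's voting scheme may weight trials differently:

```python
from collections import Counter

def diffusion_classify(recon_errors):
    """Pick the conditioning label whose generation best reconstructs the
    image, i.e. the label with the lowest reconstruction error."""
    return min(recon_errors, key=recon_errors.get)

def majority_vote(trial_errors):
    """Toy majority-voting scheme: classify under several noise draws and
    return the most frequent label."""
    votes = [diffusion_classify(e) for e in trial_errors]
    return Counter(votes).most_common(1)[0][0]

trials = [  # mock per-class reconstruction errors from three noise draws
    {"melanoma": 0.12, "benign": 0.30},
    {"melanoma": 0.25, "benign": 0.21},
    {"melanoma": 0.10, "benign": 0.28},
]
print(majority_vote(trials))   # "melanoma" wins 2 of 3 votes
```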
|
2502.03688
|
A Comparison of DeepSeek and Other LLMs
|
cs.CL cs.AI
|
Recently, DeepSeek has been the focus of attention in and beyond the AI
community. An interesting problem is how DeepSeek compares to other large
language models (LLMs). There are many tasks an LLM can do, and in this paper,
we use the task of predicting an outcome using a short text for comparison. We
consider two settings, an authorship classification setting and a citation
classification setting. In the first one, the goal is to determine whether a
short text is written by a human or by AI. In the second one, the goal is to
classify a citation to one of four types using the textual content. For each
experiment, we compare DeepSeek with $4$ popular LLMs: Claude, Gemini, GPT, and
Llama.
We find that, in terms of classification accuracy, DeepSeek outperforms
Gemini, GPT, and Llama in most cases, but underperforms Claude. We also find
that DeepSeek is comparatively slower than the others but cheaper to use,
while Claude is much more expensive than all the others. Finally, we find that
in terms of similarity, the output of DeepSeek is most similar to those of
Gemini and Claude (and among all $5$ LLMs, Claude and Gemini have the most
similar outputs).
In this paper, we also present a fully-labeled dataset collected by
ourselves, and propose a recipe that uses the LLMs and a recent dataset,
MADStat, to generate new datasets. The datasets in our paper can be used
as benchmarks for future study on LLMs.
|
2502.03692
|
DocMIA: Document-Level Membership Inference Attacks against DocVQA
Models
|
cs.LG cs.CL cs.CR
|
Document Visual Question Answering (DocVQA) has introduced a new paradigm for
end-to-end document understanding, and has quickly become one of the standard
benchmarks for multimodal LLMs. Automating document processing workflows,
driven by DocVQA models, presents significant potential for many business
sectors. However, documents tend to contain highly sensitive information,
raising concerns about privacy risks associated with training such DocVQA
models. One significant privacy vulnerability, exploited by the membership
inference attack, is the possibility for an adversary to determine if a
particular record was part of the model's training data. In this paper, we
introduce two novel membership inference attacks tailored specifically to
DocVQA models. These attacks are designed for two different adversarial
scenarios: a white-box setting, where the attacker has full access to the model
architecture and parameters, and a black-box setting, where only the model's
outputs are available. Notably, our attacks assume the adversary lacks access
to auxiliary datasets, which is more realistic in practice but also more
challenging. Our unsupervised methods outperform existing state-of-the-art
membership inference attacks across a variety of DocVQA models and datasets,
demonstrating their effectiveness and highlighting the privacy risks in this
domain.
|
2502.03695
|
Reduce Lap Time for Autonomous Racing with Curvature-Integrated MPCC
Local Trajectory Planning Method
|
cs.RO cs.SY eess.SY
|
The widespread application of autonomous driving technology has significantly
advanced the field of autonomous racing. Model Predictive Contouring Control
(MPCC) is a highly effective local trajectory planning method for autonomous
racing. However, the traditional MPCC method struggles with racetracks that
have significant curvature changes, limiting the performance of the vehicle
during autonomous racing. To address this issue, we propose a
curvature-integrated MPCC (CiMPCC) local trajectory planning method for
autonomous racing. This method optimizes the velocity of the local trajectory
based on the curvature of the racetrack centerline. The specific implementation
involves mapping the curvature of the racetrack centerline to a reference
velocity profile, which is then incorporated into the cost function for
optimizing the velocity of the local trajectory. This reference velocity
profile is created by normalizing and mapping the curvature of the racetrack
centerline, thereby ensuring efficient and performance-oriented local
trajectory planning in racetracks with significant curvature. The proposed
CiMPCC method has been validated on a self-built 1:10 scale F1TENTH racing
vehicle running the ROS platform. The experimental results demonstrate that
the proposed method achieves outstanding results on a challenging racetrack
with sharp curvature, improving the overall lap time by 11.4%-12.5% compared to
other autonomous racing trajectory planning methods. Our code is available at
https://github.com/zhouhengli/CiMPCC.
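The curvature-to-velocity mapping described above can be sketched by normalizing curvature magnitudes; the velocity bounds here are illustrative, not the paper's tuned values:

```python
def reference_velocity(curvatures, v_min=1.0, v_max=5.0):
    """Map centerline curvature to a reference velocity profile by
    normalization: high curvature -> low speed, low curvature -> high speed."""
    kappa = [abs(k) for k in curvatures]
    k_lo, k_hi = min(kappa), max(kappa)
    span = (k_hi - k_lo) or 1.0    # guard against a constant-curvature track
    return [v_max - (v_max - v_min) * (k - k_lo) / span for k in kappa]

# Straightaway, gentle bend, hairpin:
print(reference_velocity([0.0, 0.05, 0.5]))   # [5.0, 4.6, 1.0]
```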
|
2502.03696
|
Cascaded Learned Bloom Filter for Optimal Model-Filter Size Balance and
Fast Rejection
|
cs.DS cs.CC cs.LG
|
Recent studies have demonstrated that learned Bloom filters, which combine
machine learning with the classical Bloom filter, can achieve superior memory
efficiency. However, existing learned Bloom filters face two critical
unresolved challenges: the balance between the machine learning model size and
the Bloom filter size is not optimal, and the reject time cannot be minimized
effectively. We propose the Cascaded Learned Bloom Filter (CLBF) to address
these issues. Our dynamic programming-based optimization automatically selects
configurations that achieve an optimal balance between the model and filter
sizes while minimizing reject time. Experiments on real-world datasets show
that CLBF reduces memory usage by up to 24% and decreases reject time by up to
14 times compared to state-of-the-art learned Bloom filters.
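A single model-plus-backup stage of a learned Bloom filter looks roughly as follows; CLBF cascades several such stages with optimized sizes, and the score model below is a mock stand-in:

```python
import hashlib

class BloomFilter:
    """Plain Bloom filter used as the backup stage."""
    def __init__(self, m: int, k: int):
        self.m, self.k, self.bits = m, k, 0
    def _positions(self, item: str):
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m
    def add(self, item: str):
        for pos in self._positions(item):
            self.bits |= 1 << pos
    def __contains__(self, item: str) -> bool:
        return all(self.bits >> pos & 1 for pos in self._positions(item))

def learned_lookup(item, model_score, backup, threshold=0.5):
    """Two-stage lookup: accept high model scores immediately (this is what
    enables fast accept/reject decisions), otherwise fall back to a Bloom
    filter holding the model's false negatives."""
    return True if model_score(item) >= threshold else item in backup

score = lambda s: 1.0 if s.startswith("key") else 0.0   # mock learned model
backup = BloomFilter(m=1024, k=3)
backup.add("oddball")                # a member the mock model misses
print(learned_lookup("key-1", score, backup))    # accepted by the model
print(learned_lookup("oddball", score, backup))  # caught by the backup filter
```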
|
2502.03698
|
How vulnerable is my policy? Adversarial attacks on modern behavior
cloning policies
|
cs.LG cs.CR cs.RO
|
Learning from Demonstration (LfD) algorithms have shown promising results in
robotic manipulation tasks, but their vulnerability to adversarial attacks
remains underexplored. This paper presents a comprehensive study of adversarial
attacks on both classic and recently proposed algorithms, including Behavior
Cloning (BC), LSTM-GMM, Implicit Behavior Cloning (IBC), Diffusion Policy (DP),
and VQ-Behavior Transformer (VQ-BET). We study the vulnerability of these
methods to untargeted, targeted and universal adversarial perturbations. While
explicit policies, such as BC, LSTM-GMM and VQ-BET can be attacked in the same
manner as standard computer vision models, we find that attacks for implicit
and denoising policy models are nuanced and require developing novel attack
methods. Our experiments on several simulated robotic manipulation tasks reveal
that most of the current methods are highly vulnerable to adversarial
perturbations. We also show that these attacks are transferable across
algorithms, architectures, and tasks, raising concerning security
vulnerabilities, potentially under a white-box threat model. In addition, we
test the efficacy of randomized smoothing, a widely used adversarial defense
technique, and highlight its limitations in defending against attacks on the
complex, multi-modal action distributions common in control tasks. In summary,
our findings highlight the vulnerabilities of modern BC algorithms, paving the
way for future work addressing such limitations.
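For explicit policies, the attacks reduce to standard gradient-sign perturbations; below is a toy FGSM-style step on a linear scorer standing in for a policy network (the weights and inputs are mock data):

```python
def fgsm_perturb(x, weights, epsilon=0.1):
    # Untargeted FGSM-style step on a linear scorer s(x) = w.x: to drive the
    # score down, move each feature opposite to the sign of its weight,
    # i.e. along the sign of the gradient of the loss -s.
    return [xi - epsilon * (1.0 if wi > 0 else -1.0)
            for xi, wi in zip(x, weights)]

w = [0.5, -0.2, 0.8]
x = [1.0, 1.0, 1.0]
score = lambda v: sum(wi * vi for wi, vi in zip(w, v))
x_adv = fgsm_perturb(x, w)
print(score(x), score(x_adv))   # the perturbation lowers the policy's score
```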
|
2502.03699
|
LLM Alignment as Retriever Optimization: An Information Retrieval
Perspective
|
cs.CL cs.AI cs.IR
|
Large Language Models (LLMs) have revolutionized artificial intelligence with
capabilities in reasoning, coding, and communication, driving innovation across
industries. Their true potential depends on effective alignment to ensure
correct, trustworthy and ethical behavior, addressing challenges like
misinformation, hallucinations, bias and misuse. While existing Reinforcement
Learning (RL)-based alignment methods are notoriously complex, direct
optimization approaches offer a simpler alternative. In this work, we introduce
a novel direct optimization approach for LLM alignment by drawing on
established Information Retrieval (IR) principles. We present a systematic
framework that bridges LLM alignment and IR methodologies, mapping LLM
generation and reward models to IR's retriever-reranker paradigm. Building on
this foundation, we propose LLM Alignment as Retriever Preference Optimization
(LarPO), a new alignment method that enhances overall alignment quality.
Extensive experiments validate LarPO's effectiveness, with 38.9% and 13.7%
average improvements on AlpacaEval2 and MixEval-Hard, respectively. Our work
opens new avenues for advancing LLM alignment by integrating IR foundations,
offering a promising direction for future research.
|
2502.03701
|
First-ish Order Methods: Hessian-aware Scalings of Gradient Descent
|
math.OC cs.LG
|
Gradient descent is the primary workhorse for optimizing large-scale problems
in machine learning. However, its performance is highly sensitive to the choice
of the learning rate. A key limitation of gradient descent is its lack of
natural scaling, which often necessitates expensive line searches or heuristic
tuning to determine an appropriate step size. In this paper, we address this
limitation by incorporating Hessian information to scale the gradient
direction. By accounting for the curvature of the function along the gradient,
our adaptive, Hessian-aware scaling method ensures a local unit step size
guarantee, even in nonconvex settings. Near a local minimum that satisfies the
second-order sufficient conditions, our approach achieves linear convergence
with a unit step size. We show that our method converges globally under a
significantly weaker version of the standard Lipschitz gradient smoothness
assumption. Even when Hessian information is inexact, the local unit step size
guarantee and global convergence properties remain valid under mild conditions.
Finally, we validate our theoretical results empirically on a range of convex
and nonconvex machine learning tasks, showcasing the effectiveness of the
approach.
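The abstract does not spell out the exact scaling rule; as a rough, hypothetical sketch of the general idea (scaling the gradient by the curvature along the gradient direction), the classical Cauchy step needs only a Hessian-vector product, never the full Hessian. The non-positive-curvature fallback below is an illustrative safeguard, not the paper's method:

```python
import numpy as np

def hessian_aware_step(grad, hess_vec, eps=1e-12):
    """Scale gradient descent by curvature along the gradient.

    Returns the classical Cauchy step length
        alpha = ||g||^2 / (g^T H g),
    computed from a Hessian-vector product only (no full Hessian).
    Falls back to a unit step when curvature is non-positive
    (nonconvex direction) -- a hypothetical safeguard, not the
    paper's rule.
    """
    Hg = hess_vec(grad)
    curvature = grad @ Hg
    if curvature <= eps:
        return 1.0
    return (grad @ grad) / curvature

# Demo: minimize the strongly convex quadratic f(x) = 0.5 x^T A x - b^T x.
A = np.array([[3.0, 0.5], [0.5, 1.0]])
b = np.array([1.0, -2.0])
x = np.zeros(2)
for _ in range(100):
    g = A @ x - b
    x = x - hessian_aware_step(g, lambda v: A @ v) * g

x_star = np.linalg.solve(A, b)  # exact minimizer for comparison
```

On a quadratic, this step coincides with exact line search, so no learning-rate tuning is needed, matching the "local unit step size" intuition in the abstract.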
|
2502.03703
|
On the Expressive Power of Subgraph Graph Neural Networks for Graphs
with Bounded Cycles
|
cs.LG
|
Graph neural networks (GNNs) have been widely used in graph-related contexts.
It is known that the separation power of GNNs is equivalent to that of the
Weisfeiler-Lehman (WL) test; hence, GNNs are imperfect at identifying all
non-isomorphic graphs, which severely limits their expressive power. This work
investigates $k$-hop subgraph GNNs that aggregate information from neighbors
with distances up to $k$ and incorporate the subgraph structure. We prove that
under appropriate assumptions, the $k$-hop subgraph GNNs can approximate any
permutation-invariant/equivariant continuous function over graphs without
cycles of length greater than $2k+1$ within any error tolerance. We also
provide an extension to $k$-hop GNNs without incorporating the subgraph
structure. Our numerical experiments on established benchmarks and novel
architectures validate our theory on the relationship between the information
aggregation distance and the cycle size.
|
2502.03708
|
Aggregate and conquer: detecting and steering LLM concepts by combining
nonlinear predictors over multiple layers
|
cs.CL cs.AI stat.ML
|
A trained Large Language Model (LLM) contains much of human knowledge. Yet,
it is difficult to gauge the extent or accuracy of that knowledge, as LLMs do
not always ``know what they know'' and may even be actively misleading. In this
work, we give a general method for detecting semantic concepts in the internal
activations of LLMs. Furthermore, we show that our methodology can be easily
adapted to steer LLMs toward desirable outputs. Our innovations are the
following: (1) we use a nonlinear feature learning method to identify important
linear directions for predicting concepts from each layer; (2) we aggregate
features across layers to build powerful concept detectors and steering
mechanisms. We showcase the power of our approach by attaining state-of-the-art
results for detecting hallucinations, harmfulness, toxicity, and untruthful
content on seven benchmarks. We highlight the generality of our approach by
steering LLMs towards new concepts that, to the best of our knowledge, have not
been previously considered in the literature, including: semantic
disambiguation, human languages, programming languages, hallucinated responses,
science subjects, poetic/Shakespearean English, and even multiple concepts
simultaneously. Moreover, our method can steer concepts with numerical
attributes such as product reviews. We provide our code (including a simple API
for our methods) at https://github.com/dmbeaglehole/neural_controllers .
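The paper's detectors are nonlinear and aggregated over many layers; as a deliberately simplified, one-layer linear sketch of the underlying idea (a concept direction in activation space used both to detect and to steer, with synthetic activations and an arbitrary steering strength):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "layer activations": two clusters separated along a hidden
# concept direction (the real method learns nonlinear features over
# several layers; this is the one-layer linear special case).
concept = np.array([1.0, 0.0, 0.0, 0.0])
pos = rng.normal(size=(200, 4)) + 2.0 * concept   # concept present
neg = rng.normal(size=(200, 4)) - 2.0 * concept   # concept absent

# Detection: a mean-difference linear probe.
probe = pos.mean(axis=0) - neg.mean(axis=0)
probe /= np.linalg.norm(probe)
scores = np.concatenate([pos, neg]) @ probe
labels = np.array([1] * 200 + [0] * 200)
acc = ((scores > 0).astype(int) == labels).mean()

# Steering: push an activation along the probe direction; the
# strength 4.0 is an arbitrary illustrative knob.
h = neg[0]
h_steered = h + 4.0 * probe
```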
|
2502.03711
|
MultiQ&A: An Analysis in Measuring Robustness via Automated
Crowdsourcing of Question Perturbations and Answers
|
cs.CL cs.AI cs.LG
|
One critical challenge in the institutional adoption journey of Large
Language Models (LLMs) stems from their propensity to hallucinate in generated
responses. To address this, we propose MultiQ&A, a systematic approach for
evaluating the robustness and consistency of LLM-generated answers. We
demonstrate MultiQ&A's ability to crowdsource question perturbations and their
respective answers through independent LLM agents at scale. Our experiments
culminated in the examination of 1.9 million question perturbations and 2.3
million answers. Furthermore, MultiQ&A shows that ensembled LLMs, such as
gpt-3.5-turbo, remain relatively robust and consistent under perturbations.
MultiQ&A provides clarity in the response generation space, offering an
effective method for inspecting disagreements and variability. Therefore, our
system offers a potential framework for institutional LLM adoption with the
ability to measure confidence, consistency, and the quantification of
hallucinations.
|
2502.03714
|
Universal Sparse Autoencoders: Interpretable Cross-Model Concept
Alignment
|
cs.CV cs.LG
|
We present Universal Sparse Autoencoders (USAEs), a framework for uncovering
and aligning interpretable concepts spanning multiple pretrained deep neural
networks. Unlike existing concept-based interpretability methods, which focus
on a single model, USAEs jointly learn a universal concept space that can
reconstruct and interpret the internal activations of multiple models at once.
Our core insight is to train a single, overcomplete sparse autoencoder (SAE)
that ingests activations from any model and decodes them to approximate the
activations of any other model under consideration. By optimizing a shared
objective, the learned dictionary captures common factors of variation
(concepts) across different tasks, architectures, and datasets. We show
that USAEs discover semantically coherent and important universal concepts
across vision models, ranging from low-level features (e.g., colors and
textures) to higher-level structures (e.g., parts and objects). Overall, USAEs
provide a powerful new method for interpretable cross-model analysis and offer
novel applications, such as coordinated activation maximization, that open
avenues for deeper insights into multi-model AI systems.
|
2502.03715
|
Boosting Knowledge Graph-based Recommendations through Confidence-Aware
Augmentation with Large Language Models
|
cs.IR cs.AI
|
Knowledge Graph-based recommendations have gained significant attention due
to their ability to leverage rich semantic relationships. However, constructing
and maintaining Knowledge Graphs (KGs) is resource-intensive, and the accuracy
of KGs can suffer from noisy, outdated, or irrelevant triplets. Recent
advancements in Large Language Models (LLMs) offer a promising way to improve
the quality and relevance of KGs for recommendation tasks. Despite this,
integrating LLMs into KG-based systems presents challenges, such as efficiently
augmenting KGs, addressing hallucinations, and developing effective joint
learning methods. In this paper, we propose the Confidence-aware KG-based
Recommendation Framework with LLM Augmentation (CKG-LLMA), a novel framework
that combines KGs and LLMs for recommendation tasks. The framework includes: (1)
an LLM-based subgraph augmenter for enriching KGs with high-quality
information, (2) a confidence-aware message propagation mechanism to filter
noisy triplets, and (3) a dual-view contrastive learning method to integrate
user-item interactions and KG data. Additionally, we employ a confidence-aware
explanation generation process to guide LLMs in producing realistic
explanations for recommendations. Finally, extensive experiments demonstrate
the effectiveness of CKG-LLMA across multiple public datasets.
|
2502.03717
|
Efficiently Generating Expressive Quadruped Behaviors via
Language-Guided Preference Learning
|
cs.RO cs.AI
|
Expressive robotic behavior is essential for the widespread acceptance of
robots in social environments. Recent advancements in learned legged locomotion
controllers have enabled more dynamic and versatile robot behaviors. However,
determining the optimal behavior for interactions with different users across
varied scenarios remains a challenge. Current methods either rely on natural
language input, which is efficient but low-resolution, or learn from human
preferences, which, although high-resolution, is sample inefficient. This paper
introduces a novel approach that leverages priors generated by pre-trained LLMs
alongside the precision of preference learning. Our method, termed
Language-Guided Preference Learning (LGPL), uses LLMs to generate initial
behavior samples, which are then refined through preference-based feedback to
learn behaviors that closely align with human expectations. Our core insight is
that LLMs can guide the sampling process for preference learning, leading to a
substantial improvement in sample efficiency. We demonstrate that LGPL can
quickly learn accurate and expressive behaviors with as few as four queries,
outperforming both purely language-parameterized models and traditional
preference learning approaches. Website with videos:
https://lgpl-gaits.github.io/
|
2502.03721
|
Detecting Backdoor Attacks via Similarity in Semantic Communication
Systems
|
cs.CR cs.LG
|
Semantic communication systems, which leverage Generative AI (GAI) to
transmit semantic meaning rather than raw data, are poised to revolutionize
modern communications. However, they are vulnerable to backdoor attacks, a type
of poisoning manipulation that embeds malicious triggers into training
datasets. As a result, backdoor attacks mislead inference on poisoned
samples while clean samples remain unaffected. Existing defenses may alter
the model structure (such as neuron pruning, which potentially degrades
inference performance on clean inputs) or impose strict requirements on data
formats (such as ``Semantic Shield", which requires image-text pairs). To address these
limitations, this work proposes a defense mechanism that leverages semantic
similarity to detect backdoor attacks without modifying the model structure or
imposing data format constraints. By analyzing deviations in semantic feature
space and establishing a threshold-based detection framework, the proposed
approach effectively identifies poisoned samples. The experimental results
demonstrate high detection accuracy and recall across varying poisoning ratios,
underlining the significant effectiveness of our proposed solution.
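The paper's exact feature space and threshold rule are not given in the abstract; a minimal, hypothetical sketch of threshold-based detection via semantic (cosine) similarity, on synthetic features, might look like:

```python
import numpy as np

def detect_poisoned(features, threshold=0.5):
    """Flag samples whose semantic features deviate from the centroid.

    A sample is suspicious when its cosine similarity to the mean
    feature of the batch falls below a threshold (an illustrative
    stand-in for the paper's detection rule).
    """
    centroid = features.mean(axis=0)
    centroid /= np.linalg.norm(centroid)
    normed = features / np.linalg.norm(features, axis=1, keepdims=True)
    sims = normed @ centroid
    return sims < threshold

# Synthetic demo: 95 clean samples, 5 poisoned ones shifted in feature space.
rng = np.random.default_rng(1)
clean = rng.normal(loc=[1, 1, 1, 1], scale=0.1, size=(95, 4))
poisoned = rng.normal(loc=[-1, 1, -1, 1], scale=0.1, size=(5, 4))
flags = detect_poisoned(np.vstack([clean, poisoned]), threshold=0.8)
```

Note that nothing here touches the model's weights or input format, which is the selling point the abstract emphasizes.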
|
2502.03723
|
Speaking the Language of Teamwork: LLM-Guided Credit Assignment in
Multi-Agent Reinforcement Learning
|
cs.MA
|
Credit assignment, the process of attributing credit or blame to individual
agents for their contributions to a team's success or failure, remains a
fundamental challenge in multi-agent reinforcement learning (MARL),
particularly in environments with sparse rewards. Commonly-used approaches such
as value decomposition often lead to suboptimal policies in these settings, and
designing dense reward functions that align with human intuition can be complex
and labor-intensive. In this work, we propose a novel framework where a large
language model (LLM) generates dense, agent-specific rewards based on a natural
language description of the task and the overall team goal. By learning a
potential-based reward function over multiple queries, our method reduces the
impact of ranking errors while allowing the LLM to evaluate each agent's
contribution to the overall task. Through extensive experiments, we demonstrate
that our approach achieves faster convergence and higher policy returns
compared to state-of-the-art MARL baselines.
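The abstract describes learning a potential-based reward function; the standard shaping form it refers to (Ng et al.'s result that such shaping preserves optimal policies) can be sketched with hypothetical LLM-scored potentials:

```python
def shaped_reward(r, phi_s, phi_s_next, gamma=0.99):
    """Potential-based shaping: r' = r + gamma * phi(s') - phi(s).

    This form (Ng et al., 1999) is guaranteed not to change the
    optimal policy, so a dense LLM-derived potential can only speed
    up learning, not bias what is optimal.
    """
    return r + gamma * phi_s_next - phi_s

# With gamma = 1 the shaping terms telescope along a trajectory:
# total shaped return = total env reward + phi(end) - phi(start).
potentials = [0.0, 1.0, 2.5, 4.0]   # hypothetical LLM progress scores
rewards = [0.0, 0.0, 1.0]           # sparse environment reward
total = sum(shaped_reward(r, potentials[t], potentials[t + 1], gamma=1.0)
            for t, r in enumerate(rewards))
```

In the multi-agent setting described above, each agent would receive its own potential, letting the LLM express per-agent credit while the telescoping property keeps the team objective intact.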
|
2502.03724
|
MD-BERT: Action Recognition in Dark Videos via Dynamic Multi-Stream
Fusion and Temporal Modeling
|
cs.CV cs.AI cs.HC cs.LG cs.MM
|
Action recognition in dark, low-light (under-exposed) or noisy videos is a
challenging task due to visibility degradation, which can hinder critical
spatiotemporal details. This paper proposes MD-BERT, a novel multi-stream
approach that integrates complementary pre-processing techniques such as gamma
correction and histogram equalization alongside raw dark frames to address
these challenges. We introduce the Dynamic Feature Fusion (DFF) module,
extending existing attentional fusion methods to a three-stream setting,
thereby capturing fine-grained and global contextual information across
different brightness and contrast enhancements. The fused spatiotemporal
features are then processed by a BERT-based temporal model, which leverages its
bidirectional self-attention to effectively capture long-range dependencies and
contextual relationships across frames. Extensive experiments on the ARID V1.0
and ARID V1.5 dark video datasets show that MD-BERT outperforms existing
methods, establishing a new state-of-the-art performance. Ablation studies
further highlight the individual contributions of each input stream and the
effectiveness of the proposed DFF and BERT modules. The official website of
this work is available at: https://github.com/HrishavBakulBarua/DarkBERT
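The two enhancement streams the abstract names are standard image operations; a minimal sketch (with hypothetical parameter choices) of how the three input streams could be produced from a raw dark frame:

```python
import numpy as np

def gamma_correct(frame, gamma=0.4):
    """Brighten a dark frame: out = in**gamma on intensities in [0, 1]."""
    x = frame.astype(np.float64) / 255.0
    return (x ** gamma * 255.0).astype(np.uint8)

def hist_equalize(frame):
    """Spread intensities via the cumulative histogram (one channel)."""
    hist = np.bincount(frame.ravel(), minlength=256)
    cdf = hist.cumsum().astype(np.float64)
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min() + 1e-12)
    return (cdf[frame] * 255.0).astype(np.uint8)

# Three input streams: raw dark frame plus the two enhanced views,
# which the fusion module would then combine.
dark = np.full((4, 4), 20, dtype=np.uint8)
streams = [dark, gamma_correct(dark), hist_equalize(dark)]
```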
|
2502.03725
|
Optimal Control of Fluid Restless Multi-armed Bandits: A Machine
Learning Approach
|
cs.LG
|
We propose a machine learning approach to the optimal control of fluid
restless multi-armed bandits (FRMABs) with state equations that are either
affine or quadratic in the state variables. By deriving fundamental properties
of FRMAB problems, we design an efficient machine-learning-based algorithm.
Using this algorithm, we solve multiple instances with varying initial states
to generate a comprehensive training set. We then learn a state feedback policy
using Optimal Classification Trees with hyperplane splits (OCT-H). We test our
approach on machine maintenance, epidemic control and fisheries control
problems. Our method yields high-quality state feedback policies and achieves a
speed-up of up to 26 million times compared to a direct numerical algorithm for
fluid problems.
|
2502.03726
|
DICE: Distilling Classifier-Free Guidance into Text Embeddings
|
cs.CV
|
Text-to-image diffusion models are capable of generating high-quality images,
but these images often fail to align closely with the given text prompts.
Classifier-free guidance (CFG) is a popular and effective technique for
improving text-image alignment in the generative process. However, using CFG
introduces significant computational overhead and deviates from the established
theoretical foundations of diffusion models. In this paper, we present
DIstilling CFG by enhancing text Embeddings (DICE), a novel approach that
removes the reliance on CFG in the generative process while maintaining the
benefits it provides. DICE distills a CFG-based text-to-image diffusion model
into a CFG-free version by refining text embeddings to replicate CFG-based
directions. In this way, we avoid the computational and theoretical drawbacks
of CFG, enabling high-quality, well-aligned image generation at a fast sampling
speed. Extensive experiments on multiple Stable Diffusion v1.5 variants, SDXL
and PixArt-$\alpha$ demonstrate the effectiveness of our method. Furthermore,
DICE supports negative prompts for image editing to improve image quality
further. Code will be available soon.
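The computational overhead DICE removes comes from CFG's two denoiser passes per sampling step; the standard CFG combination the abstract refers to is (with illustrative toy values):

```python
import numpy as np

def cfg_noise(eps_uncond, eps_cond, w):
    """Classifier-free guidance: extrapolate from the unconditional
    noise prediction toward the conditional one with scale w.

    Requires two model evaluations per sampling step -- the overhead
    that distilling CFG into the text embeddings removes.
    """
    return eps_uncond + w * (eps_cond - eps_uncond)

eps_u = np.array([0.1, 0.2])   # illustrative noise predictions
eps_c = np.array([0.3, 0.0])
guided = cfg_noise(eps_u, eps_c, w=7.5)
```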
|
2502.03729
|
Action-Free Reasoning for Policy Generalization
|
cs.RO cs.AI
|
End-to-end imitation learning offers a promising approach for training robot
policies. However, generalizing to new settings remains a significant
challenge. Although large-scale robot demonstration datasets have shown
potential for inducing generalization, they are resource-intensive to scale. In
contrast, human video data is abundant and diverse, presenting an attractive
alternative. Yet, these human-video datasets lack action labels, complicating
their use in imitation learning. Existing methods attempt to extract grounded
action representations (e.g., hand poses), but resulting policies struggle to
bridge the embodiment gap between human and robot actions. We propose an
alternative approach: leveraging language-based reasoning from human
videos, which is essential for guiding robot actions, to train generalizable
robot policies. Building on recent advances in reasoning-based policy architectures,
we introduce Reasoning through Action-free Data (RAD). RAD learns from both
robot demonstration data (with reasoning and action labels) and action-free
human video data (with only reasoning labels). The robot data teaches the model
to map reasoning to low-level actions, while the action-free data enhances
reasoning capabilities. Additionally, we will release a new dataset of 3,377
human-hand demonstrations with reasoning annotations compatible with the Bridge
V2 benchmark and aimed at facilitating future research on reasoning-driven
robot learning. Our experiments show that RAD enables effective transfer across
the embodiment gap, allowing robots to perform tasks seen only in action-free
data. Furthermore, scaling up action-free reasoning data significantly improves
policy performance and generalization to novel tasks. These results highlight
the promise of reasoning-driven learning from action-free datasets for
advancing generalizable robot control. Project page:
https://rad-generalization.github.io
|
2502.03737
|
Mitigating the Participation Bias by Balancing Extreme Ratings
|
cs.LG cs.GT
|
Rating aggregation plays a crucial role in various fields, such as product
recommendations, hotel rankings, and teaching evaluations. However, traditional
averaging methods can be affected by participation bias, where some raters do
not participate in the rating process, leading to potential distortions. In
this paper, we consider a robust rating aggregation task under the
participation bias. We assume that raters may not reveal their ratings with a
certain probability depending on their individual ratings, resulting in
partially observed samples. Our goal is to minimize the expected squared loss
between the aggregated ratings and the average of all underlying ratings
(possibly unobserved) in the worst-case scenario.
We focus on two settings based on whether the sample size (i.e. the number of
raters) is known. In the first setting, where the sample size is known, we
propose an aggregator, named as the Balanced Extremes Aggregator. It estimates
unrevealed ratings with a balanced combination of extreme ratings. When the
sample size is unknown, we derive another aggregator, the Polarizing-Averaging
Aggregator, which becomes optimal as the sample size grows to infinity.
Numerical results demonstrate the superiority of our proposed aggregators in
mitigating participation bias, compared to simple averaging and the spectral
method. Furthermore, we validate the effectiveness of our aggregators on a
real-world dataset.
|
2502.03738
|
Scaling Laws in Patchification: An Image Is Worth 50,176 Tokens And More
|
cs.CV
|
Since the introduction of Vision Transformer (ViT), patchification has long
been regarded as the de facto image tokenization approach for plain visual
architectures. By compressing the spatial size of images, this approach can
effectively shorten the token sequence and reduce the computational cost of
ViT-like plain architectures. In this work, we aim to thoroughly examine the
information loss caused by this patchification-based compressive encoding
paradigm and how it affects visual understanding. We conduct extensive patch
size scaling experiments and observe an intriguing scaling law in
patchification: models consistently benefit from decreased patch sizes
and attain improved predictive performance, down to the minimum patch
size of 1x1, i.e., pixel tokenization. This conclusion is broadly applicable
across different vision tasks, various input scales, and diverse architectures
such as ViT and the recent Mamba models. Moreover, as a by-product, we discover
that with smaller patches, task-specific decoder heads become less critical for
dense prediction. In the experiments, we successfully scale up the visual
sequence to an exceptional length of 50,176 tokens, achieving a competitive
test accuracy of 84.6% with a base-sized model on the ImageNet-1k benchmark. We
hope this study can provide insights and theoretical foundations for future
works of building non-compressive vision models. Code is available at
https://github.com/wangf3014/Patch_Scaling.
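The 50,176-token figure in the title follows directly from pixel tokenization of a standard 224x224 input; the token count at each patch size is:

```python
def num_tokens(image_size, patch_size):
    """Patch-token count for a square image and square patches."""
    assert image_size % patch_size == 0
    return (image_size // patch_size) ** 2

# Standard ViT patchification vs. pixel tokenization at 224x224:
sizes = {p: num_tokens(224, p) for p in (16, 8, 4, 2, 1)}
# 16x16 patches give 196 tokens; 1x1 "patches" give 224*224 = 50,176.
```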
|
2502.03740
|
Multiple Invertible and Partial-Equivariant Function for Latent Vector
Transformation to Enhance Disentanglement in VAEs
|
cs.LG cs.AI
|
Disentanglement learning is a core issue for understanding and re-using
trained information in Variational AutoEncoder (VAE), and effective inductive
bias has been reported as a key factor. However, the actual implementation of
such bias is still vague. In this paper, we propose a novel method, called
Multiple Invertible and partial-equivariant transformation
(MIPE-transformation), to inject inductive bias by 1) guaranteeing the
invertibility of latent-to-latent vector transformation while preserving a
certain portion of equivariance of input-to-latent vector transformation,
called Invertible and partial-equivariant transformation (IPE-transformation),
2) extending the form of prior and posterior in VAE frameworks to an
unrestricted form through a learnable conversion to an approximated exponential
family, called Exponential Family conversion (EF-conversion), and 3)
integrating multiple units of IPE-transformation and EF-conversion, and their
training. In experiments on 3D Cars, 3D Shapes, and dSprites datasets,
MIPE-transformation improves the disentanglement performance of
state-of-the-art VAEs.
|
2502.03746
|
Brain Tumor Identification using Improved YOLOv8
|
cs.CV cs.LG
|
Identifying the extent of brain tumors is a significant challenge in brain
cancer treatment. The main difficulty is in the approximate detection of tumor
size. Magnetic resonance imaging (MRI) has become a critical diagnostic tool.
However, manually detecting the boundaries of brain tumors from MRI scans is a
labor-intensive task that requires extensive expertise. Deep learning and
computer-aided detection techniques have led to notable advances in machine
learning for this purpose. In this paper, we propose a modified You Only Look
Once (YOLOv8) model to accurately detect the tumors within the MRI images. The
proposed model replaced the Non-Maximum Suppression (NMS) algorithm with a
Real-Time Detection Transformer (RT-DETR) in the detection head. NMS filters
out redundant or overlapping bounding boxes among the detected tumors, but its
rules are hand-designed and pre-set; RT-DETR removes these hand-designed components. The
second improvement was made by replacing the normal convolution block with
ghost convolution. Ghost Convolution reduces computational and memory costs
while maintaining high accuracy and enabling faster inference, making it ideal
for resource-constrained environments and real-time applications. The third
improvement was made by introducing a vision transformer block in the backbone
of YOLOv8 to extract context-aware features. We used a publicly available
dataset of brain tumors in the proposed model. The proposed model performed
better than the original YOLOv8 model and also performed better than other
object detectors (Faster R-CNN, Mask R-CNN, YOLO, YOLOv3, YOLOv4, YOLOv5, SSD,
RetinaNet, EfficientDet, and DETR), achieving a mean Average Precision
(mAP@0.5) of 0.91.
|
2502.03748
|
Rethinking the Residual Distribution of Locate-then-Editing Methods in
Model Editing
|
cs.CL
|
Model editing is a powerful technique for updating the knowledge of Large
Language Models (LLMs). Locate-then-edit methods are a popular class of
approaches that first identify the critical layers storing knowledge, then
compute the residual of the last critical layer based on the edited knowledge,
and finally perform multi-layer updates using a least-squares solution by
evenly distributing the residual from the first critical layer to the last.
Although these methods achieve promising results, they have been shown to
degrade the original knowledge of LLMs. We argue that residual distribution
leads to this issue. To explore this, we conduct a comprehensive analysis of
residual distribution in locate-then-edit methods from both empirical and
theoretical perspectives, revealing that residual distribution introduces
editing errors, leading to inaccurate edits. To address this issue, we propose
the Boundary Layer UpdatE (BLUE) strategy to enhance locate-then-edit methods.
Sequential batch editing experiments on three LLMs and two datasets demonstrate
that BLUE not only delivers an average performance improvement of 35.59\%,
significantly advancing the state of the art in model editing, but also
enhances the preservation of LLMs' general capabilities. Our code is available
at https://github.com/xpq-tech/BLUE.
|
2502.03749
|
PINS: Proximal Iterations with Sparse Newton and Sinkhorn for Optimal
Transport
|
cs.LG math.OC
|
Optimal transport (OT) is a critical problem in optimization and machine
learning, where accuracy and efficiency are paramount. Although entropic
regularization and the Sinkhorn algorithm improve scalability, they frequently
encounter numerical instability and slow convergence, especially when the
regularization parameter is small. In this work, we introduce Proximal
Iterations with Sparse Newton and Sinkhorn methods (PINS) to efficiently
compute highly accurate solutions for large-scale OT problems. Rigorous
theoretical analysis guarantees reduced computational complexity through
overall sparsity, as well as global convergence. Our approach offers three key
advantages: it achieves accuracy comparable to exact solutions, progressively
accelerates each iteration for greater efficiency, and enhances robustness by
reducing sensitivity to regularization parameters. Extensive experiments
confirm these advantages, demonstrating superior performance compared to
related methods.
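PINS itself is not detailed in the abstract; the baseline it refines, the entropic-regularized Sinkhorn iteration, can be sketched as follows. The exp(-C/reg) kernel underflows as reg shrinks, which is exactly the instability the paper targets:

```python
import numpy as np

def sinkhorn(C, a, b, reg, n_iter=1000):
    """Entropic-regularized OT via Sinkhorn matrix scaling.

    C: cost matrix; a, b: source/target marginals; reg: entropic
    regularization strength. The kernel exp(-C/reg) underflows as
    reg -> 0, the instability that proximal/Newton-type refinements
    are designed to avoid.
    """
    K = np.exp(-C / reg)
    u = np.ones_like(a)
    v = np.ones_like(b)
    for _ in range(n_iter):
        v = b / (K.T @ u)
        u = a / (K @ v)
    return u[:, None] * K * v[None, :]   # transport plan

C = np.array([[0.0, 1.0], [1.0, 0.0]])
a = np.array([0.5, 0.5])
b = np.array([0.5, 0.5])
P = sinkhorn(C, a, b, reg=0.05)   # near-exact plan: mass stays on the diagonal
```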
|
2502.03750
|
Principal Curvatures Estimation with Applications to Single Cell Data
|
cs.LG cs.AI
|
The rapidly growing field of single-cell transcriptomic sequencing (scRNAseq)
presents challenges for data analysis due to its massive datasets. A common
approach in manifold learning is to hypothesize that datasets lie on a
lower-dimensional manifold. This makes it possible to study the geometry of
point clouds by extracting meaningful descriptors like curvature. In this work,
we present Adaptive Local PCA (AdaL-PCA), a data-driven method for accurately
estimating various notions of intrinsic curvature on data manifolds, in
particular principal curvatures for surfaces. The model relies on local PCA to
estimate the tangent spaces. The evaluation of AdaL-PCA on sampled surfaces
shows state-of-the-art results. Combined with a PHATE embedding, the model
applied to single-cell RNA sequencing data allows us to identify key variations
in cellular differentiation.
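AdaL-PCA's adaptive ingredients are not specified in the abstract; the base step it builds on, estimating tangent spaces by local PCA, can be sketched as:

```python
import numpy as np

def local_pca_tangent(points, idx, k=20, d=2):
    """Estimate the tangent space at points[idx] via local PCA.

    Center the k nearest neighbors and keep the top-d principal
    directions; the discarded directions span the estimated normal
    space, from which curvature descriptors can then be derived.
    """
    dists = np.linalg.norm(points - points[idx], axis=1)
    nbrs = points[np.argsort(dists)[:k]]
    centered = nbrs - nbrs.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[:d]   # rows span the estimated tangent plane

# Sanity check on a flat surface (the z = 0 plane embedded in R^3):
rng = np.random.default_rng(0)
pts = np.c_[rng.uniform(-1, 1, (500, 2)), np.zeros(500)]
T = local_pca_tangent(pts, idx=0)   # should span the xy-plane
```

The "adaptive" part of AdaL-PCA presumably tunes the neighborhood size k per point; here k is fixed for simplicity.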
|
2502.03752
|
PRISM: A Robust Framework for Skill-based Meta-Reinforcement Learning
with Noisy Demonstrations
|
cs.LG cs.AI
|
Meta-reinforcement learning (Meta-RL) facilitates rapid adaptation to unseen
tasks but faces challenges in long-horizon environments. Skill-based approaches
tackle this by decomposing state-action sequences into reusable skills and
employing hierarchical decision-making. However, these methods are highly
susceptible to noisy offline demonstrations, resulting in unstable skill
learning and degraded performance. To overcome this, we propose Prioritized
Refinement for Skill-Based Meta-RL (PRISM), a robust framework that integrates
exploration near noisy data to generate online trajectories and combines them
with offline data. Through prioritization, PRISM extracts high-quality data to
learn task-relevant skills effectively. By addressing the impact of noise, our
method ensures stable skill learning and achieves superior performance in
long-horizon tasks, even with noisy and sub-optimal data.
|
2502.03755
|
Regularization via f-Divergence: An Application to Multi-Oxide
Spectroscopic Analysis
|
cs.LG
|
In this paper, we address the task of characterizing the chemical composition
of planetary surfaces using convolutional neural networks (CNNs). Specifically,
we seek to predict the multi-oxide weights of rock samples based on
spectroscopic data collected under Martian conditions. We frame this problem as
a multi-target regression task and propose a novel regularization method based
on f-divergence. The f-divergence regularization is designed to constrain the
distributional discrepancy between predictions and noisy targets. This
regularizer serves a dual purpose: on the one hand, it mitigates overfitting by
enforcing a constraint on the distributional difference between predictions and
noisy targets. On the other hand, it acts as an auxiliary loss function,
penalizing the neural network when the divergence between the predicted and
target distributions becomes too large. To enable backpropagation during neural
network training, we develop a differentiable f-divergence and incorporate it
into the f-divergence regularization, making the network training feasible. We
conduct experiments using spectra collected in a Mars-like environment by the
remote-sensing instruments aboard the Curiosity and Perseverance rovers.
Experimental results on multi-oxide weight prediction demonstrate that the
proposed $f$-divergence regularization performs better than, or comparably to,
standard regularization methods including $L_1$, $L_2$, and dropout. Notably,
combining the $f$-divergence regularization with these standard regularizers
further enhances performance, outperforming each regularization method used
independently.
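The specific f and how it enters training are not given in the abstract; one common member of the family, the KL divergence (f(t) = t log t) over normalized oxide weights added to an MSE objective, might be sketched as follows (lam and the choice of KL are illustrative assumptions):

```python
import numpy as np

def kl_regularized_loss(pred, target, lam=0.1, eps=1e-8):
    """MSE plus a KL penalty between normalized oxide compositions.

    KL is one member of the f-divergence family (f(t) = t log t);
    the penalty constrains the distributional gap between predicted
    and target multi-oxide weights, not just per-oxide errors.
    lam and the choice of KL are illustrative, not the paper's.
    """
    mse = np.mean((pred - target) ** 2)
    p = pred / (pred.sum(axis=-1, keepdims=True) + eps)
    q = target / (target.sum(axis=-1, keepdims=True) + eps)
    kl = np.sum(p * np.log((p + eps) / (q + eps)), axis=-1).mean()
    return mse + lam * kl

pred = np.array([[0.5, 0.3, 0.2]])
exact = kl_regularized_loss(pred, pred.copy())          # ~0 when matched
off = kl_regularized_loss(pred, np.array([[0.2, 0.3, 0.5]]))
```

Every operation here is differentiable away from zero weights, which is the property the paper needs for backpropagation.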
|
2502.03758
|
Improving Adversarial Robustness via Phase and Amplitude-aware Prompting
|
cs.CV
|
Deep neural networks are found to be vulnerable to adversarial noises. The
prompt-based defense has been increasingly studied due to its high efficiency.
However, existing prompt-based defenses mainly exploited mixed prompt patterns,
where critical patterns closely related to object semantics lack sufficient
focus. The phase and amplitude spectra have been proven to be highly related to
specific semantic patterns and crucial for robustness. To this end, in this
paper, we propose a Phase and Amplitude-aware Prompting (PAP) defense.
Specifically, we construct phase-level and amplitude-level prompts for each
class, and adjust weights for prompting according to the model's robust
performance under these prompts during training. During testing, we select
prompts for each image using its predicted label to obtain the prompted image,
which is then fed to the model to obtain the final prediction. Experimental
results demonstrate the effectiveness of our method.
|
2502.03760
|
RAMOTS: A Real-Time System for Aerial Multi-Object Tracking based on
Deep Learning and Big Data Technology
|
cs.CV
|
Multi-object tracking (MOT) in UAV-based video is challenging due to
variations in viewpoint, low resolution, and the presence of small objects.
While prior MOT research on aerial videos primarily focuses on the academic
side, developing ever more sophisticated algorithms, the practical deployment
of these systems receives little attention. In this paper, we propose a
novel real-time MOT framework that integrates Apache Kafka and Apache Spark for
efficient and fault-tolerant video stream processing, along with
state-of-the-art deep learning models YOLOv8/YOLOv10 and BYTETRACK/BoTSORT for
accurate object detection and tracking. Our work highlights the importance of
not only the advanced algorithms but also the integration of these methods with
scalable and distributed systems. By leveraging these technologies, our system
achieves a HOTA of 48.14 and a MOTA of 43.51 on the Visdrone2019-MOT test set
while maintaining a real-time processing speed of 28 FPS on a single GPU. Our
work demonstrates the potential of big data technologies and deep learning for
addressing the challenges of MOT in UAV applications.
|
2502.03762
|
Learning Reward Machines from Partially Observed Optimal Policies
|
cs.LG cs.FL
|
Inverse reinforcement learning is the problem of inferring a reward function
from an optimal policy. In this work, it is assumed that the reward is
expressed as a reward machine whose transitions depend on atomic propositions
associated with the state of a Markov Decision Process (MDP). Our goal is to
identify the true reward machine using finite information. To this end, we
first introduce the notion of a prefix tree policy which associates a
distribution of actions to each state of the MDP and each attainable finite
sequence of atomic propositions. Then, we characterize an equivalence class of
reward machines that can be identified given the prefix tree policy. Finally,
we propose a SAT-based algorithm that uses information extracted from the
prefix tree policy to solve for a reward machine. It is proved that if the
prefix tree policy is known up to a sufficient (but finite) depth, our
algorithm recovers the exact reward machine up to the equivalence class. This
sufficient depth is derived as a function of the number of MDP states and (an
upper bound on) the number of states of the reward machine. Several examples
are used to demonstrate the effectiveness of the approach.
|
2502.03765
|
Replacing K-infinity Function with Leaky ReLU in Barrier Function
Design: A Union of Invariant Sets Approach for ReLU-Based Dynamical Systems
|
eess.SY cs.SY
|
In this paper, a systematic framework is presented for determining piecewise
affine (PWA) barrier functions and their corresponding invariant sets for
dynamical systems identified via Rectified Linear Unit (ReLU) neural networks
or their equivalent PWA representations. A common approach to determining the
invariant set is to use Nagumo's condition, or to utilize the barrier function
with a class K-infinity function. It may be challenging to find a suitable
class K-infinity function in some cases. We propose leaky ReLU as an efficient
substitute for the complex nonlinear K-infinity function in our formulation.
Moreover, we propose the Union of Invariant Sets (UIS) method, which combines
information from multiple invariant sets in order to compute the largest
possible PWA invariant set. The proposed framework is validated through
multiple examples, showcasing its potential to enhance the analysis of
invariant sets in ReLU-based dynamical systems. Our code is available at:
https://github.com/PouyaSamanipour/UIS.git.
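As a toy numerical illustration of the substitution described above (the system, barrier function, and scaling factor below are hypothetical, not taken from the paper), a discrete-time barrier condition of the form B(f(x)) - B(x) >= -gamma * LeakyReLU(B(x)) can be sampled on a candidate invariant set:

```python
def leaky_relu(s, slope=0.1):
    """Leaky ReLU standing in for the class K-infinity function alpha."""
    return s if s >= 0.0 else slope * s

def satisfies_barrier(f, B, x, gamma=0.5):
    """Discrete-time barrier condition: B(f(x)) - B(x) >= -gamma * leaky_relu(B(x))."""
    return B(f(x)) - B(x) >= -gamma * leaky_relu(B(x))

# Hypothetical PWA-like system: a contraction f(x) = 0.5 x, with candidate
# invariant set {x : B(x) >= 0} = [-1, 1] for the barrier B(x) = 1 - x^2.
f = lambda x: 0.5 * x
B = lambda x: 1.0 - x * x

inside = [satisfies_barrier(f, B, x) for x in (-1.0, -0.5, 0.0, 0.5, 1.0)]
print(all(inside))  # True: the sampled barrier condition holds on the set
```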
|
2502.03766
|
Hierarchical Contextual Manifold Alignment for Structuring Latent
Representations in Large Language Models
|
cs.CL
|
The organization of latent token representations plays a crucial role in
determining the stability, generalization, and contextual consistency of
language models, yet conventional approaches to embedding refinement often rely
on parameter modifications that introduce additional computational overhead. A
hierarchical alignment method was introduced to restructure token embeddings
without altering core model weights, ensuring that representational
distributions maintained coherence across different linguistic contexts.
Experimental evaluations demonstrated improvements in rare token retrieval,
adversarial robustness, and long-range dependency tracking, highlighting the
advantages of hierarchical structuring in mitigating inconsistencies in latent
space organization. The comparative analysis against conventional fine-tuning
and embedding perturbation methods revealed that hierarchical restructuring
maintained computational efficiency while achieving measurable gains in
representation quality. Structural refinements introduced through the alignment
process resulted in improved contextual stability across varied linguistic
tasks, reducing inconsistencies in token proximity relationships and enhancing
interpretability in language generation. A detailed computational assessment
confirmed that the realignment process introduced minimal inference overhead,
ensuring that representational improvements did not compromise model
efficiency. The findings reinforced the broader significance of structured
representation learning, illustrating that hierarchical embedding modifications
could serve as an effective strategy for refining latent space distributions
while preserving pre-learned semantic associations.
|
2502.03771
|
Adaptive Semantic Prompt Caching with VectorQ
|
cs.LG cs.CL
|
Semantic prompt caches reduce the latency and cost of large language model
(LLM) inference by reusing cached LLM-generated responses for semantically
similar prompts. Vector similarity metrics assign a numerical score to quantify
the similarity between an embedded prompt and its nearest neighbor in the
cache. Existing systems rely on a static threshold to classify whether the
similarity score is sufficiently high to result in a cache hit. We show that
this one-size-fits-all threshold is insufficient across different prompts. We
propose VectorQ, a framework to learn embedding-specific threshold regions that
adapt to the complexity and uncertainty of an embedding. Through evaluations on
a combination of four diverse datasets, we show that VectorQ consistently
outperforms state-of-the-art systems across all static thresholds, achieving up
to 12x increases in cache hit rate and error rate reductions of up to 92%.
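A minimal sketch of embedding-specific thresholds (the cache structure, threshold values, and toy 2-D embeddings below are illustrative assumptions; VectorQ's actual threshold-region learning is not reproduced here):

```python
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

class SemanticCache:
    """Toy semantic prompt cache. Instead of one global static threshold,
    each cached embedding carries its own threshold (the idea behind
    embedding-specific threshold regions; the learning rule is stubbed out)."""
    def __init__(self):
        self.entries = []  # (embedding, response, per-entry threshold)

    def put(self, emb, response, threshold=0.9):
        self.entries.append((emb, response, threshold))

    def get(self, emb):
        if not self.entries:
            return None
        e, resp, thr = max(self.entries, key=lambda t: cosine(emb, t[0]))
        return resp if cosine(emb, e) >= thr else None  # per-entry decision

cache = SemanticCache()
cache.put([1.0, 0.0], "answer A", threshold=0.95)  # brittle prompt: strict
cache.put([0.0, 1.0], "answer B", threshold=0.70)  # robust prompt: loose
print(cache.get([0.1, 0.99]))  # near B, above its loose threshold -> answer B
print(cache.get([0.9, 0.5]))   # near A, below its strict threshold -> None
```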
|
2502.03772
|
A Retrospective Systematic Study on Hierarchical Sparse Query
Transformer-assisted Ultrasound Screening for Early Hepatocellular Carcinoma
|
cs.CV cs.AI
|
Hepatocellular carcinoma (HCC) ranks as the third leading cause of
cancer-related mortality worldwide, with early detection being crucial for
improving patient survival rates. However, early screening for HCC using
ultrasound suffers from insufficient sensitivity and is highly dependent on the
expertise of radiologists for interpretation. Leveraging the latest
advancements in artificial intelligence (AI) in medical imaging, this study
proposes an innovative Hierarchical Sparse Query Transformer (HSQformer) model
that combines the strengths of Convolutional Neural Networks (CNNs) and Vision
Transformers (ViTs) to enhance the accuracy of HCC diagnosis in ultrasound
screening. The HSQformer leverages sparse latent space representations to
capture hierarchical details at various granularities without the need for
complex adjustments, and adopts a modular, plug-and-play design philosophy,
ensuring the model's versatility and ease of use. The HSQformer's performance
was rigorously tested across three distinct clinical scenarios: single-center,
multi-center, and high-risk patient testing. In each of these settings, it
consistently outperformed existing state-of-the-art models, such as ConvNext
and SwinTransformer. Notably, the HSQformer even matched the diagnostic
capabilities of senior radiologists and comprehensively surpassed those of
junior radiologists. The experimental results from this study strongly
demonstrate the effectiveness and clinical potential of AI-assisted tools in
HCC screening. The full code is available at
https://github.com/Asunatan/HSQformer.
|
2502.03773
|
ExpProof : Operationalizing Explanations for Confidential Models with
ZKPs
|
cs.LG cs.AI cs.CR
|
In principle, explanations are intended as a way to increase trust in machine
learning models and are often obligated by regulations. However, many
circumstances where these are demanded are adversarial in nature, meaning the
involved parties have misaligned interests and are incentivized to manipulate
explanations for their purpose. As a result, explainability methods fail to be
operational in such settings despite the demand \cite{bordt2022post}. In this
paper, we take a step towards operationalizing explanations in adversarial
scenarios with Zero-Knowledge Proofs (ZKPs), a cryptographic primitive.
Specifically, we explore ZKP-amenable versions of the popular explainability
algorithm LIME and evaluate their performance on Neural Networks and Random
Forests.
|
2502.03774
|
High-Rate Spatially Coupled LDPC Codes Based on Massey's Convolutional
Self-Orthogonal Codes
|
cs.IT math.IT
|
In this paper, we study a new class of high-rate spatially coupled LDPC
(SC-LDPC) codes based on the convolutional self-orthogonal codes (CSOCs) first
introduced by Massey. The SC-LDPC codes are constructed by treating the
irregular graph corresponding to the parity-check matrix of a systematic rate R
= (n - 1)/n CSOC as a convolutional protograph. The protograph can then be
lifted using permutation matrices to generate a high-rate SC-LDPC code whose
strength depends on the lifting factor. The SC-LDPC codes constructed in this
fashion can be decoded using iterative belief propagation (BP) based sliding
window decoding (SWD).
A non-systematic version of a CSOC parity-check matrix is then proposed by
making a slight modification to the systematic construction. The non-systematic
parity-check matrix corresponds to a regular protograph whose degree profile
depends on the rate and error-correcting capability of the underlying CSOC.
Even though the parity-check matrix is in non-systematic form, we show how
systematic encoding can still be performed. We also show that the
non-systematic convolutional protograph has a guaranteed girth and free
distance and that these properties carry over to the lifted versions.
Finally, numerical results are included demonstrating that CSOC-based SC-LDPC
codes (i) achieve excellent performance at very high rates, (ii) have
performance at least as good as that of SC-LDPC codes constructed from
convolutional protographs commonly found in the literature, and (iii) have
iterative decoding thresholds comparable to those of existing SC-LDPC code
designs.
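The protograph-lifting step mentioned above, in which each nonzero protograph entry is replaced by an M x M circulant permutation matrix, can be sketched as follows (the base matrix and random shift choices are illustrative; real constructions pick shifts to optimize girth and free distance):

```python
import numpy as np

def lift(base, M, shifts=None):
    """Lift a binary protograph: each 1 becomes an M x M circulant permutation
    (the identity cyclically shifted), each 0 an M x M zero block."""
    rng = np.random.default_rng(0)
    rows, cols = base.shape
    H = np.zeros((rows * M, cols * M), dtype=int)
    I = np.eye(M, dtype=int)
    for i in range(rows):
        for j in range(cols):
            if base[i, j]:
                s = rng.integers(M) if shifts is None else shifts[i][j]
                H[i*M:(i+1)*M, j*M:(j+1)*M] = np.roll(I, s, axis=1)
    return H

base = np.array([[1, 1, 1, 0],
                 [0, 1, 1, 1]])       # toy protograph, not from a CSOC
H = lift(base, M=4)
print(H.shape)                        # (8, 16): 2x4 base lifted by M = 4
print(H.sum(axis=1).tolist())         # every lifted row keeps base row weight 3
```

Each permutation block contributes exactly one 1 per row and column, so the lifted code inherits the degree profile of the protograph.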
|
2502.03776
|
StarMAP: Global Neighbor Embedding for Faithful Data Visualization
|
cs.LG
|
Neighbor embedding is widely employed to visualize high-dimensional data;
however, it frequently overlooks the global structure, e.g., intercluster
similarities, thereby impeding accurate visualization. To address this problem,
this paper presents Star-attracted Manifold Approximation and Projection
(StarMAP), which incorporates the advantage of principal component analysis
(PCA) in neighbor embedding. Inspired by the property of PCA embedding, which
can be viewed as the largest shadow of the data, StarMAP introduces the concept
of \textit{star attraction} by leveraging the PCA embedding. This approach
yields faithful global structure preservation while maintaining the
interpretability and computational efficiency of neighbor embedding. StarMAP
was compared with existing methods in the visualization tasks of toy datasets,
single-cell RNA sequencing data, and deep representation. The experimental
results show that StarMAP is simple but effective in realizing faithful
visualizations.
|
2502.03777
|
Multi-Label Test-Time Adaptation with Bound Entropy Minimization
|
cs.CV
|
Mainstream test-time adaptation (TTA) techniques endeavor to mitigate
distribution shifts via entropy minimization for multi-class classification,
inherently increasing the probability of the most confident class. However,
when encountering multi-label instances, the primary challenge stems from the
varying number of labels per image, and prioritizing only the highest
probability class inevitably undermines the adaptation of other positive
labels. To address this issue, we investigate TTA in the multi-label scenario
(ML-TTA), developing a Bound Entropy Minimization (BEM) objective that
simultaneously increases the confidence of multiple top predicted labels.
Specifically, to determine the number of labels for each augmented view, we
retrieve a paired caption whose textual labels are assigned to that view. These
labels form a weak label set for the view and a strong label set for the
caption, both of size k. The proposed BEM then treats the top-k predicted
labels from the view and from the caption each as a single entity, learning
view and caption prompts concurrently. By
binding top-k predicted labels, BEM overcomes the limitation of vanilla entropy
minimization, which exclusively optimizes the most confident class. Across the
MSCOCO, VOC, and NUSWIDE multi-label datasets, our ML-TTA framework equipped
with BEM exhibits superior performance compared to the latest SOTA methods,
across various model architectures, prompt initialization, and varying label
scenarios. The code is available at https://github.com/Jinx630/ML-TTA.
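One plausible reading of the "bind top-k labels into a single entity" idea can be sketched numerically (the logits and the exact binding rule below are illustrative assumptions, not the paper's implementation):

```python
import math

def softmax(logits):
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def entropy_min_loss(logits):
    """Vanilla entropy minimization: gradient mass concentrates on the
    single most confident class."""
    p = softmax(logits)
    return -sum(pi * math.log(pi) for pi in p)

def bound_entropy_loss(logits, k):
    """Bound the top-k probabilities into one merged mass, so minimizing
    entropy raises the confidence of all k positive labels jointly."""
    p = sorted(softmax(logits), reverse=True)
    merged = [sum(p[:k])] + p[k:]     # top-k collapsed into a single entity
    return -sum(pi * math.log(pi) for pi in merged if pi > 0)

logits = [2.0, 1.9, 0.1, -1.0]        # two plausible positive labels (k = 2)
print(bound_entropy_loss(logits, k=2) < entropy_min_loss(logits))  # True
```

Binding the two near-tied labels removes the entropy penalty that vanilla minimization would impose for splitting confidence between them.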
|
2502.03781
|
Gaze-Assisted Human-Centric Domain Adaptation for Cardiac Ultrasound
Image Segmentation
|
cs.CV eess.IV
|
Domain adaptation (DA) for cardiac ultrasound image segmentation is
clinically significant and valuable. However, previous domain adaptation
methods are vulnerable to incomplete pseudo-labels and low-quality
target-to-source translated images. Human-centric domain adaptation has the
great advantage of human cognitive guidance, which helps the model adapt to the
target domain and reduces reliance on labels. Doctor gaze trajectories contain
a large amount of cross-domain human guidance. To leverage gaze information and
human cognition for guiding domain adaptation, we propose gaze-assisted
human-centric domain adaptation (GAHCDA), which reliably guides the domain
adaptation of cardiac ultrasound images. GAHCDA includes the following modules:
(1) Gaze Augment Alignment (GAA), which enables the model to obtain general
human-cognition features so that it recognizes segmentation targets across
domains of cardiac ultrasound images as humans do; and (2) Gaze Balance Loss
(GBL), which fuses the gaze heatmap with model outputs, making the
segmentation result structurally closer to
the target domain. The experimental results illustrate that our proposed
framework is able to segment cardiac ultrasound images more effectively in the
target domain than GAN-based methods and other self-train based methods,
showing great potential in clinical application.
|
2502.03783
|
UltraBones100k: An Ultrasound Image Dataset with CT-Derived Labels for
Lower Extremity Long Bone Surface Segmentation
|
eess.IV cs.CV
|
Ultrasound-based bone surface segmentation is crucial in computer-assisted
orthopedic surgery. However, ultrasound images have limitations, including a
low signal-to-noise ratio, and acoustic shadowing, which make interpretation
difficult. Existing deep learning models for bone segmentation rely primarily
on costly manual labeling by experts, limiting dataset size and model
generalizability. Additionally, the complexity of ultrasound physics and
acoustic shadow makes the images difficult for humans to interpret, leading to
incomplete labels in anechoic regions and limiting model performance. To
advance ultrasound bone segmentation and establish effective model benchmarks,
larger and higher-quality datasets are needed.
We propose a methodology for collecting ex-vivo ultrasound datasets with
automatically generated bone labels, including anechoic regions. The proposed
labels are derived by accurately superimposing tracked bone CT models onto the
tracked ultrasound images. These initial labels are refined to account for
ultrasound physics. A clinical evaluation is conducted by an expert physician
specialized in orthopedic sonography to assess the quality of the generated
bone labels. A neural network for bone segmentation is trained on the collected
dataset and its predictions are compared to expert manual labels, evaluating
accuracy, completeness, and F1-score.
We collected the largest known dataset of 100k ultrasound images of human
lower limbs with bone labels, called UltraBones100k. A Wilcoxon signed-rank
test with Bonferroni correction confirmed that the bone alignment after our
method significantly improved the quality of bone labeling (p < 0.001). The
model trained on UltraBones100k consistently outperforms manual labeling in all
metrics, particularly in low-intensity regions (320% improvement in
completeness at a distance threshold of 0.5 mm).
|
2502.03785
|
Reed-Muller Codes on CQ Channels via a New Correlation Bound for Quantum
Observables
|
cs.IT math.IT quant-ph
|
The question of whether Reed-Muller (RM) codes achieve capacity on binary
memoryless symmetric (BMS) channels has drawn attention since it was resolved
positively for the binary erasure channel by Kudekar et al. in 2016. In 2021,
Reeves and Pfister extended this to prove the bit-error probability vanishes on
BMS channels when the code rate is less than capacity. In 2023, Abbe and Sandon
improved this to show the block-error probability also goes to zero. These
results analyze decoding functions using symmetry and the nested structure of
RM codes. In this work, we focus on binary-input symmetric classical-quantum
(BSCQ) channels and the Holevo capacity. For a BSCQ, we consider observables
that estimate the channel input in the sense of minimizing the mean-squared
error (MSE). Using the orthogonal decomposition of these observables under a
weighted inner product, we establish a recursive relation for the minimum MSE
estimate of a single bit in the RM code. Our results show that any set of
$2^{o(\sqrt{\log N})}$ bits can be decoded with a high probability when the
code rate is less than the Holevo capacity.
|
2502.03787
|
Iterate to Accelerate: A Unified Framework for Iterative Reasoning and
Feedback Convergence
|
cs.LG
|
We introduce a unified framework for iterative reasoning that leverages
non-Euclidean geometry via Bregman divergences, higher-order operator
averaging, and adaptive feedback mechanisms. Our analysis establishes that,
under mild smoothness and contractivity assumptions, a generalized update
scheme not only unifies classical methods such as mirror descent and dynamic
programming but also captures modern chain-of-thought reasoning processes in
large language models. In particular, we prove that our accelerated iterative
update achieves an $O(1/t^2)$ convergence rate in the absence of persistent
perturbations, and we further demonstrate that feedback (iterative)
architectures are necessary to approximate certain fixed-point functions
efficiently. These theoretical insights bridge classical acceleration
techniques with contemporary applications in neural computation and
optimization.
|
2502.03792
|
Guiding Two-Layer Neural Network Lipschitzness via Gradient Descent
Learning Rate Constraints
|
stat.ML cs.LG
|
We demonstrate that applying an eventual decay to the learning rate (LR) in
empirical risk minimization (ERM), where the mean-squared-error loss is
minimized using standard gradient descent (GD) for training a two-layer neural
network with Lipschitz activation functions, ensures that the resulting network
exhibits a high degree of Lipschitz regularity, that is, a small Lipschitz
constant. Moreover, we show that this decay does not hinder the convergence
rate of the empirical risk, now measured with the Huber loss, toward a critical
point of the non-convex empirical risk. From these findings, we derive
generalization bounds for two-layer neural networks trained with GD and a
decaying LR with a sub-linear dependence on its number of trainable parameters,
suggesting that the statistical behaviour of these networks is independent of
overparameterization. We validate our theoretical results with a series of toy
numerical experiments, where surprisingly, we observe that networks trained
with constant step size GD exhibit similar learning and regularity properties
to those trained with a decaying LR. This suggests that neural networks trained
with standard GD may already be highly regular learners.
|
2502.03793
|
It's All in The [MASK]: Simple Instruction-Tuning Enables BERT-like
Masked Language Models As Generative Classifiers
|
cs.CL cs.AI
|
While encoder-only models such as BERT and ModernBERT are ubiquitous in
real-world NLP applications, their conventional reliance on task-specific
classification heads can limit their applicability compared to decoder-based
large language models (LLMs). In this work, we introduce
ModernBERT-Large-Instruct, a 0.4B-parameter encoder model that leverages its
masked language modelling (MLM) head for generative classification. Our
approach employs an intentionally simple training loop and inference mechanism
that requires no heavy pre-processing, heavily engineered prompting, or
architectural modifications. ModernBERT-Large-Instruct exhibits strong
zero-shot performance on both classification and knowledge-based tasks,
outperforming similarly sized LLMs on MMLU and achieving 93% of Llama3-1B's
MMLU performance with 60% fewer parameters. We also demonstrate that, when
fine-tuned, the generative approach using the MLM head matches or even
surpasses traditional classification-head methods across diverse NLU tasks. This
capability emerges specifically in models trained on contemporary, diverse data
mixes, with models trained on lower volume, less-diverse data yielding
considerably weaker performance. Although preliminary, these results
demonstrate the potential of using the original generative masked language
modelling head over traditional task-specific heads for downstream tasks. Our
work suggests that further exploration into this area is warranted,
highlighting many avenues for future improvements.
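The core mechanism of classifying with an MLM head, scoring label verbalizer tokens at the [MASK] position instead of attaching a classification head, can be sketched with a toy vocabulary (the vocabulary, logits, and verbalizer mapping below are hypothetical, not ModernBERT's actual tokenizer or weights):

```python
# Tiny stand-in vocabulary and mask-position logits from a hypothetical MLM head.
VOCAB = {"positive": 0, "negative": 1, "the": 2, "movie": 3}

def classify_with_mlm_head(mask_logits, verbalizers):
    """Generative classification via the MLM head: compare the [MASK]-position
    scores of the label verbalizer tokens and return the best label."""
    scores = {label: mask_logits[VOCAB[tok]] for label, tok in verbalizers.items()}
    return max(scores, key=scores.get)

# Hypothetical prompt: "The movie was great. Overall it was [MASK]."
mask_logits = [3.2, 0.5, -1.0, -2.0]   # the MLM head favours "positive" here
label = classify_with_mlm_head(mask_logits,
                               {"pos": "positive", "neg": "negative"})
print(label)  # pos
```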
|
2502.03795
|
Distribution learning via neural differential equations: minimal energy
regularization and approximation theory
|
cs.LG math.CA stat.ME stat.ML
|
Neural ordinary differential equations (ODEs) provide expressive
representations of invertible transport maps that can be used to approximate
complex probability distributions, e.g., for generative modeling, density
estimation, and Bayesian inference. We show that for a large class of transport
maps $T$, there exists a time-dependent ODE velocity field realizing a
straight-line interpolation $(1-t)x + tT(x)$, $t \in [0,1]$, of the
displacement induced by the map. Moreover, we show that such velocity fields
are minimizers of a training objective containing a specific minimum-energy
regularization. We then derive explicit upper bounds for the $C^k$ norm of the
velocity field that are polynomial in the $C^k$ norm of the corresponding
transport map $T$; in the case of triangular (Knothe--Rosenblatt) maps, we also
show that these bounds are polynomial in the $C^k$ norms of the associated
source and target densities. Combining these results with stability arguments
for distribution approximation via ODEs, we show that Wasserstein or
Kullback--Leibler approximation of the target distribution to any desired
accuracy $\epsilon > 0$ can be achieved by a deep neural network representation
of the velocity field whose size is bounded explicitly in terms of $\epsilon$,
the dimension, and the smoothness of the source and target densities. The same
neural network ansatz yields guarantees on the value of the regularized
training objective.
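For the scalar linear map T(x) = a*x, the straight-line interpolation (1-t)x + tT(x) yields an explicit velocity field that can be checked numerically (a toy instance for illustration, not the paper's general construction):

```python
def velocity(t, y, a):
    """Velocity field realizing the straight-line interpolation for T(x) = a*x:
    along the path y(t) = (1 + t*(a-1))*x, so
    dy/dt = (a-1)*x = (a-1)*y / (1 + t*(a-1))."""
    return (a - 1.0) * y / (1.0 + t * (a - 1.0))

def integrate(x0, a, steps=10000):
    """Euler integration of the ODE from t = 0 to t = 1; the flow map at
    time 1 should coincide with the transport map T."""
    y, dt = x0, 1.0 / steps
    for i in range(steps):
        y += dt * velocity(i * dt, y, a)
    return y

x0, a = 1.5, 2.0
print(abs(integrate(x0, a) - a * x0) < 1e-3)  # True: endpoint matches T(x0)
```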
|
2502.03798
|
Network-Wide Traffic Flow Estimation Across Multiple Cities with Global
Open Multi-Source Data: A Large-Scale Case Study in Europe and North America
|
cs.LG
|
Network-wide traffic flow, which captures dynamic traffic volume on each link
of a general network, is fundamental to smart mobility applications. However,
the observed traffic flow from sensors is usually limited across the entire
network due to the associated high installation and maintenance costs. To
address this issue, existing research uses various supplementary data sources
to compensate for insufficient sensor coverage and estimate the unobserved
traffic flow. Although these studies have shown promising results, the
inconsistent availability and quality of supplementary data across cities force
these methods into a trade-off between accuracy and generality. In this
research, we advocate, for the first time, using Global Open Multi-Source
(GOMS) data within an advanced deep learning framework to break this
trade-off. The GOMS data primarily encompass geographical and demographic
information, including road topology, building footprints, and population
density, which can be consistently collected across cities. More importantly,
these GOMS data are either causes or consequences of transportation activities,
thereby creating opportunities for accurate network-wide flow estimation.
Furthermore, we use map images to represent GOMS data, instead of traditional
tabular formats, to capture richer and more comprehensive geographical and
demographic information. To address multi-source data fusion, we develop an
attention-based graph neural network that effectively extracts and synthesizes
information from GOMS maps while simultaneously capturing spatiotemporal
traffic dynamics from observed traffic data. A large-scale case study across 15
cities in Europe and North America was conducted. The results demonstrate
stable and satisfactory estimation accuracy across these cities, which suggests
that the trade-off challenge can be successfully addressed using our approach.
|
2502.03799
|
Enhancing Hallucination Detection through Noise Injection
|
cs.CL cs.SY eess.SY
|
Large Language Models (LLMs) are prone to generating plausible yet incorrect
responses, known as hallucinations. Effectively detecting hallucinations is
therefore crucial for the safe deployment of LLMs. Recent research has linked
hallucinations to model uncertainty, suggesting that hallucinations can be
detected by measuring dispersion over answer distributions obtained from a set
of samples drawn from a model. While drawing from the distribution over tokens
defined by the model is a natural way to obtain samples, in this work, we argue
that it is sub-optimal for the purpose of detecting hallucinations. We show
that detection can be improved significantly by taking into account model
uncertainty in the Bayesian sense. To this end, we propose a very simple and
efficient approach that perturbs an appropriate subset of model parameters, or
equivalently hidden unit activations, during sampling. We demonstrate its
effectiveness across a wide range of datasets and model architectures.
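The parameter-perturbation sampling idea can be sketched with a stand-in model (the model, noise scale, and dispersion measure below are illustrative assumptions, not the paper's setup):

```python
import random
from collections import Counter

def toy_model(x, w):
    """Stand-in 'model': a thresholded linear score mapped to a discrete answer."""
    return "yes" if w[0] * x + w[1] > 0 else "no"

def answer_dispersion(x, weights, n_samples=200, noise=0.5, seed=0):
    """Perturb the model parameters with Gaussian noise before each sample and
    measure dispersion of the resulting answer distribution; high dispersion
    flags the answer as a possible hallucination."""
    rng = random.Random(seed)
    answers = [toy_model(x, [wi + rng.gauss(0.0, noise) for wi in weights])
               for _ in range(n_samples)]
    top = Counter(answers).most_common(1)[0][1]
    return 1.0 - top / n_samples  # 0 = fully consistent, ~0.5 = coin flip

w = [1.0, 0.0]
confident = answer_dispersion(5.0, w)    # score far from the decision boundary
uncertain = answer_dispersion(0.05, w)   # score near the decision boundary
print(confident < uncertain)  # True: near-boundary answers disperse more
```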
|
2502.03801
|
SoK: Benchmarking Poisoning Attacks and Defenses in Federated Learning
|
cs.CR cs.AI cs.LG
|
Federated learning (FL) enables collaborative model training while preserving
data privacy, but its decentralized nature exposes it to client-side data
poisoning attacks (DPAs) and model poisoning attacks (MPAs) that degrade global
model performance. While numerous proposed defenses claim substantial
effectiveness, their evaluation is typically done in isolation with limited
attack strategies, raising concerns about their validity. Additionally,
existing studies overlook the mutual effectiveness of defenses against both
DPAs and MPAs, causing fragmentation in this field. This paper aims to provide
a unified benchmark and analysis of defenses against DPAs and MPAs, clarifying
the distinction between these two similar but slightly distinct domains. We
present a systematic taxonomy of poisoning attacks and defense strategies,
outlining their design, strengths, and limitations. Then, a unified comparative
evaluation across FL algorithms and data heterogeneity is conducted to validate
their individual and mutual effectiveness and derive key insights for design
principles and future research. Along with the analysis, we frame our work to a
unified benchmark, FLPoison, with high modularity and scalability to evaluate
15 representative poisoning attacks and 17 defense strategies, facilitating
future research in this domain. Code is available at
https://github.com/vio1etus/FLPoison.
|
2502.03802
|
MXMap: A Multivariate Cross Mapping Framework for Causal Discovery in
Dynamical Systems
|
cs.LG math.DS stat.ME
|
Convergent Cross Mapping (CCM) is a powerful method for detecting causality
in coupled nonlinear dynamical systems, providing a model-free approach to
capture dynamic causal interactions. Partial Cross Mapping (PCM) was introduced
as an extension of CCM to address indirect causality in three-variable systems
by comparing cross-mapping quality between direct cause-effect mapping and
indirect mapping through an intermediate conditioning variable. However, PCM
remains limited to univariate delay embeddings in its cross-mapping processes.
In this work, we extend PCM to the multivariate setting, introducing multiPCM,
which leverages multivariate embeddings to more effectively distinguish
indirect causal relationships. We further propose a multivariate cross-mapping
framework (MXMap) for causal discovery in dynamical systems. This two-phase
framework combines (1) pairwise CCM tests to establish an initial causal graph
and (2) multiPCM to refine the graph by pruning indirect causal connections.
Through experiments on simulated data and the ERA5 Reanalysis weather dataset,
we demonstrate the effectiveness of MXMap. Additionally, MXMap is compared
against several baseline methods, showing advantages in accuracy and causal
graph refinement.
|
2502.03803
|
Graph Neural Network-Driven Hierarchical Mining for Complex Imbalanced
Data
|
cs.LG
|
This study presents a hierarchical mining framework for high-dimensional
imbalanced data, leveraging a deep graph model to address the inherent
performance limitations of conventional approaches in handling complex,
high-dimensional data distributions with imbalanced sample representations. By
constructing a structured graph representation of the dataset and integrating
graph neural network (GNN) embeddings, the proposed method effectively captures
global interdependencies among samples. Furthermore, a hierarchical strategy is
employed to enhance the characterization and extraction of minority class
feature patterns, thereby facilitating precise and robust imbalanced data
mining. Empirical evaluations across multiple experimental scenarios validate
the efficacy of the proposed approach, demonstrating substantial improvements
over traditional methods in key performance metrics, including pattern
discovery count, average support, and minority class coverage. Notably, the
method exhibits superior capabilities in minority-class feature extraction and
pattern correlation analysis. These findings underscore the potential of deep
graph models, in conjunction with hierarchical mining strategies, to
significantly enhance the efficiency and accuracy of imbalanced data analysis.
This research contributes a novel computational framework for high-dimensional
complex data processing and lays the foundation for future extensions to
dynamically evolving imbalanced data and multi-modal data applications, thereby
expanding the applicability of advanced data mining methodologies to more
intricate analytical domains.
|
2502.03804
|
Understanding and Supporting Formal Email Exchange by Answering
AI-Generated Questions
|
cs.HC cs.AI
|
Replying to formal emails is time-consuming and cognitively demanding, as it
requires crafting polite phrasing and providing an adequate response to the
sender's demands. Although systems with Large Language Models (LLMs) were
designed to simplify the email replying process, users still need to provide
detailed prompts to obtain the expected output. Therefore, we proposed and
evaluated an LLM-powered question-and-answer (QA)-based approach for users to
reply to emails by answering a set of simple and short questions generated from
the incoming email. We developed a prototype system, ResQ, and conducted
controlled and field experiments with 12 and 8 participants. Our results
demonstrated that the QA-based approach improves the efficiency of replying to
emails and reduces workload while maintaining email quality, compared to a
conventional prompt-based approach that requires users to craft appropriate
prompts to obtain email drafts. We discuss how the QA-based approach influences
the email reply process and interpersonal relationship dynamics, as well as the
opportunities and challenges associated with using a QA-based approach in
AI-mediated communication.
|
2502.03805
|
Identify Critical KV Cache in LLM Inference from an Output Perturbation
Perspective
|
cs.CL
|
Large language models have revolutionized natural language processing but
face significant challenges of high storage and runtime costs, due to the
transformer architecture's reliance on self-attention, particularly the large
Key-Value (KV) cache for long-sequence inference. Recent efforts to reduce KV
cache size by pruning less critical entries based on attention weights remain
empirical and lack formal grounding. This paper presents a formal study on
identifying critical KV cache entries by analyzing attention output
perturbation. Our analysis reveals that, beyond attention weights, the value
states within KV entries and pretrained parameter matrices are also crucial.
Based on this, we propose a perturbation-constrained selection algorithm that
optimizes the worst-case output perturbation to identify critical entries.
Evaluations on the Needle-in-a-Haystack test and Longbench benchmark show our
algorithm enhances state-of-the-art cache eviction methods. Further empirical
analysis confirms that our algorithm achieves lower output perturbations in
over 92% of attention heads in the Llama model, providing a significant
improvement over existing methods.
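The abstract does not give the selection algorithm itself; a minimal sketch of its underlying observation, that value-state magnitude matters alongside attention weight when ranking cache entries, might look like the following (the scoring rule is an illustrative assumption, not the paper's exact perturbation bound):

```python
import numpy as np

def select_critical_kv(attn_weights, values, keep):
    """Rank KV cache entries by an output-perturbation proxy.

    Evicting entry i changes the attention output by roughly
    attn_weights[i] * values[i], so each entry is scored by
    attn_weights[i] * ||values[i]|| rather than attention weight alone.
    """
    scores = attn_weights * np.linalg.norm(values, axis=-1)
    order = np.argsort(scores)[::-1]      # largest perturbation first
    return np.sort(order[:keep])          # indices of entries to keep

attn = np.array([0.5, 0.3, 0.1, 0.1])
vals = np.array([[0.1, 0.0],              # high weight, tiny value vector
                 [1.0, 1.0],
                 [4.0, 3.0],              # low weight, large value vector
                 [0.2, 0.1]])
print(select_critical_kv(attn, vals, keep=2))  # → [1 2]
```

Note that a weight-only criterion would have kept entries 0 and 1; weighting by value-state norm promotes entry 2 instead.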
|
2502.03806
|
Should Code Models Learn Pedagogically? A Preliminary Evaluation of
Curriculum Learning for Real-World Software Engineering Tasks
|
cs.SE cs.LG
|
Learning-based techniques, especially advanced pre-trained models for code,
have demonstrated capabilities in code understanding and generation, solving
diverse software engineering (SE) tasks. Despite the promising results, current
training approaches may not fully optimize model performance, as they typically
involve learning from randomly shuffled training data. Recent work shows that
Curriculum Learning (CL) can improve performance on code-related tasks through
incremental learning based on the difficulty of synthetic code. Yet, the
effectiveness of CL with conventional difficulty measures in SE tasks remains
largely unexplored. In this study, we explore two conventional code metrics,
code length and cyclomatic complexity, to determine difficulty levels. We
investigate how the pre-trained code model (CodeT5) learns under CL, through
the tasks of code clone detection and code summarization. Our empirical study
on the CodeXGLUE benchmark showed contrasting results to prior studies, where
the model exhibited signs of catastrophic forgetting and shortcut learning.
Surprisingly, model performance saturates after only the first quartile of
training, potentially indicating a limit in the model's representation capacity
and/or the task's inherent difficulty. Future work should further explore
various CL strategies with different code models across a wider range of SE
tasks for a more holistic understanding.
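A difficulty-ordered curriculum over code samples can be illustrated as follows (the metrics below are crude stand-ins; the study's exact length and complexity computations are not given in the abstract):

```python
import re

def cyclomatic_complexity(code: str) -> int:
    """Crude proxy: 1 + count of branching keywords, echoing McCabe's metric."""
    return 1 + len(re.findall(r"\b(?:if|elif|for|while|and|or|case)\b", code))

def curriculum_order(snippets):
    """Order training samples easy-to-hard by (length, complexity)."""
    return sorted(snippets, key=lambda s: (len(s), cyclomatic_complexity(s)))

batch = [
    "for i in xs:\n    if i and f(i): g(i)",
    "return x",
    "if x:\n    y = 1",
]
print(curriculum_order(batch))
```

A CL training loop would then feed `curriculum_order(batch)` in order, rather than shuffling, which is exactly the setup whose benefits the study calls into question.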
|
2502.03810
|
DeblurDiff: Real-World Image Deblurring with Generative Diffusion Models
|
cs.CV
|
Diffusion models have achieved significant progress in image generation. The
pre-trained Stable Diffusion (SD) models are helpful for image deblurring by
providing clear image priors. However, directly using a blurry image or
pre-deblurred one as a conditional control for SD will either hinder accurate
structure extraction or make the results overly dependent on the deblurring
network. In this work, we propose a Latent Kernel Prediction Network (LKPN) to
achieve robust real-world image deblurring. Specifically, we co-train the LKPN
in latent space with conditional diffusion. The LKPN learns a spatially variant
kernel to guide the restoration of sharp images in the latent space. By
applying element-wise adaptive convolution (EAC), the learned kernel is
utilized to adaptively process the input feature, effectively preserving the
structural information of the input. This process thereby more effectively
guides the generative process of Stable Diffusion (SD), enhancing both the
deblurring efficacy and the quality of detail reconstruction. Moreover, the
results at each diffusion step are utilized to iteratively estimate the kernels
in LKPN to better restore the sharp latent by EAC. This iterative refinement
enhances the accuracy and robustness of the deblurring process. Extensive
experimental results demonstrate that the proposed method outperforms
state-of-the-art image deblurring methods on both benchmark and real-world
images.
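The element-wise adaptive convolution (EAC) can be pictured as applying a different predicted kernel at every spatial location; below is a single-channel, loop-based sketch (the actual EAC operates on latent features and would be vectorized):

```python
import numpy as np

def element_wise_adaptive_conv(feat, kernels):
    """Apply a per-pixel k x k kernel to a single-channel feature map.

    feat:    (H, W) feature map
    kernels: (H, W, k, k) spatially variant kernels predicted per pixel
    """
    H, W = feat.shape
    k = kernels.shape[-1]
    pad = k // 2
    padded = np.pad(feat, pad, mode="edge")
    out = np.empty_like(feat)
    for i in range(H):
        for j in range(W):
            patch = padded[i:i + k, j:j + k]
            out[i, j] = np.sum(patch * kernels[i, j])
    return out

# Sanity check: per-pixel identity kernels reproduce the input.
feat = np.arange(9.0).reshape(3, 3)
kernels = np.zeros((3, 3, 3, 3))
kernels[:, :, 1, 1] = 1.0
```

Because each location gets its own kernel, a blur that varies across the image (e.g. motion blur) can be inverted locally, which a single shared kernel cannot do.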
|
2502.03813
|
Optimized Unet with Attention Mechanism for Multi-Scale Semantic
Segmentation
|
cs.CV
|
Semantic segmentation is one of the core tasks in the field of computer
vision, and its goal is to accurately classify each pixel in an image. The
traditional Unet model achieves efficient feature extraction and fusion through
an encoder-decoder structure, but it still has certain limitations when dealing
with complex backgrounds, long-distance dependencies, and multi-scale targets.
To this end, this paper proposes an improved Unet model combined with an
attention mechanism, introduces channel attention and spatial attention
modules, enhances the model's ability to focus on important features, and
optimizes skip connections through a multi-scale feature fusion strategy,
thereby improving the combination of global semantic information and
fine-grained features. The experiment is based on the Cityscapes dataset and
compared with classic models such as FCN, SegNet, DeepLabv3+, and PSPNet. The
improved model performs well in terms of mIoU and pixel accuracy (PA), reaching
76.5% and 95.3% respectively. The experimental results verify the superiority
of this method in dealing with complex scenes and blurred target boundaries. In
addition, this paper discusses the potential of the improved model in practical
applications and future expansion directions, indicating that it has broad
application value in fields such as autonomous driving, remote sensing image
analysis, and medical image processing.
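The channel- and spatial-attention modules follow the familiar squeeze-and-excitation / CBAM pattern; a NumPy sketch of that pattern is shown below (the paper's exact module design is not specified in the abstract):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(feat, w1, w2):
    """feat: (C, H, W). Global-average-pool per channel, re-weight via a 2-layer MLP."""
    squeeze = feat.mean(axis=(1, 2))                       # (C,)
    gate = sigmoid(w2 @ np.maximum(w1 @ squeeze, 0.0))     # (C,)
    return feat * gate[:, None, None]

def spatial_attention(feat):
    """Gate each location with a sigmoid over channel-wise mean and max maps."""
    gate = sigmoid(feat.mean(axis=0) + feat.max(axis=0))   # (H, W)
    return feat * gate[None, :, :]
```

In an attention-augmented Unet these gates would typically be applied to encoder features before the skip connection, so the decoder fuses re-weighted rather than raw features.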
|
2502.03814
|
Large Language Models for Multi-Robot Systems: A Survey
|
cs.RO cs.AI
|
The rapid advancement of Large Language Models (LLMs) has opened new
possibilities in Multi-Robot Systems (MRS), enabling enhanced communication,
task planning, and human-robot interaction. Unlike traditional single-robot and
multi-agent systems, MRS poses unique challenges, including coordination,
scalability, and real-world adaptability. This survey provides the first
comprehensive exploration of LLM integration into MRS. It systematically
categorizes their applications across high-level task allocation, mid-level
motion planning, low-level action generation, and human intervention. We
highlight key applications in diverse domains, such as household robotics,
construction, formation control, target tracking, and robot games, showcasing
the versatility and transformative potential of LLMs in MRS. Furthermore, we
examine the challenges that limit the adoption of LLMs in MRS, including mathematical
reasoning limitations, hallucination, latency issues, and the need for robust
benchmarking systems. Finally, we outline opportunities for future research,
emphasizing advancements in fine-tuning, reasoning techniques, and
task-specific models. This survey aims to guide researchers in the intelligent
and real-world deployment of MRS powered by LLMs. Given the fast-evolving
nature of research in this field, we continuously update the paper list in an
open-source GitHub repository.
|
2502.03817
|
Knowing When to Stop Matters: A Unified Algorithm for Online Conversion
under Horizon Uncertainty
|
cs.DS cs.LG
|
This paper investigates the online conversion problem, which involves
sequentially trading a divisible resource (e.g., energy) under dynamically
changing prices to maximize profit. A key challenge in online conversion is
managing decisions under horizon uncertainty, where the duration of trading is
either known, revealed partway, or entirely unknown. We propose a unified
algorithm that achieves optimal competitive guarantees across these horizon
models, accounting for practical constraints such as box constraints, which
limit the maximum allowable trade per step. Additionally, we extend the
algorithm to a learning-augmented version, leveraging horizon predictions to
adaptively balance performance: achieving near-optimal results when predictions
are accurate while maintaining strong guarantees when predictions are
unreliable. These results advance the understanding of online conversion under
various degrees of horizon uncertainty and provide more practical strategies to
address real-world constraints.
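The unified algorithm itself is not given in the abstract; the classical threshold-based one-way trading policy that this line of work builds on can be sketched as follows (known price bounds [p_min, p_max] and a known horizon are assumed):

```python
import numpy as np

def threshold_policy(prices, p_min, p_max, capacity=1.0):
    """One-way trading via a threshold function (El-Yaniv-style sketch).

    At each new best price p, sell just enough so that the total sold
    reaches phi(p) = ln(p / p_min) / ln(p_max / p_min), the fraction of
    capacity committed once price p has been observed.
    """
    theta = p_max / p_min
    sold, profit, best = 0.0, 0.0, p_min
    for p in prices:
        if p > best:
            target = capacity * np.log(p / p_min) / np.log(theta)
            amount = max(0.0, target - sold)
            profit += amount * p
            sold += amount
            best = p
    # sell any remainder at the final price once the horizon ends
    profit += (capacity - sold) * prices[-1]
    return profit
```

The horizon-uncertainty question studied here is precisely what happens to such thresholds when the "final price" step above may arrive at an unknown time.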
|
2502.03821
|
PsyPlay: Personality-Infused Role-Playing Conversational Agents
|
cs.CL
|
The current research on Role-Playing Conversational Agents (RPCAs) with Large
Language Models (LLMs) primarily focuses on imitating specific speaking styles
and utilizing character backgrounds, neglecting the depiction of deeper
personality traits. In this study, we introduce personality-infused
role-playing for LLM agents, which encourages agents to accurately portray
their designated personality traits during dialogues. We then propose PsyPlay,
a dialogue generation framework that facilitates the expression of rich
personalities among multiple LLM agents. Specifically, PsyPlay enables agents
to assume roles with distinct personality traits and engage in discussions
centered around specific topics, consistently exhibiting their designated
personality traits throughout the interactions. Validation on generated
dialogue data demonstrates that PsyPlay can accurately portray the intended
personality traits, achieving an overall success rate of 80.31% on GPT-3.5.
Notably, we observe that LLMs aligned with positive values are more successful
in portraying positive personality roles compared to negative ones. Moreover,
we construct a dialogue corpus for personality-infused role-playing, called
PsyPlay-Bench. The corpus, which consists of 4745 instances of correctly
portrayed dialogues using PsyPlay, aims to further facilitate research in
personalized role-playing and dialogue personality detection.
|
2502.03822
|
Dynamic Rank Adjustment in Diffusion Policies for Efficient and Flexible
Training
|
cs.RO
|
Diffusion policies trained via offline behavioral cloning have recently
gained traction in robotic motion generation. While effective, these policies
typically require a large number of trainable parameters. This model size
affords powerful representations but also incurs high computational cost during
training. Ideally, it would be beneficial to dynamically adjust the trainable
portion as needed, balancing representational power with computational
efficiency. For example, while overparameterization enables diffusion policies
to capture complex robotic behaviors via offline behavioral cloning, the
increased computational demand makes online interactive imitation learning
impractical due to longer training time. To address this challenge, we present
a framework, called DRIFT, that uses the Singular Value Decomposition to enable
dynamic rank adjustment during diffusion policy training. We implement and
demonstrate the benefits of this framework in DRIFT-DAgger, an imitation
learning algorithm that can seamlessly slide between an offline bootstrapping
phase and an online interactive phase. We perform extensive experiments to
better understand the proposed framework, and demonstrate that DRIFT-DAgger
achieves improved sample efficiency and faster training with minimal impact on
model performance.
|
2502.03824
|
Syntriever: How to Train Your Retriever with Synthetic Data from LLMs
|
cs.CL cs.AI
|
LLMs have boosted progress in many AI applications. Recently, there have been
attempts to distill the vast knowledge of LLMs into information retrieval
systems. Those distillation methods mostly use the output probabilities of
LLMs, which are unavailable in the latest black-box LLMs. We propose
Syntriever, a training framework for retrievers using synthetic data from
black-box LLMs. Syntriever consists of two stages. First, in the distillation
stage, we synthesize relevant and plausibly irrelevant passages and augmented
queries using chain-of-thought reasoning for the given queries. The LLM is
asked to self-verify the synthetic data for possible hallucinations, after
which retrievers are trained with a loss designed to cluster the embeddings of
relevant passages. Second, in the alignment stage, we align the retriever with
the preferences of LLMs. We propose a preference model called partial
Plackett-Luce ranking to learn LLM preferences with regularization, which
prevents the model from deviating excessively from the one trained in the
distillation stage. Experiments show that Syntriever achieves state-of-the-art
performance on benchmark datasets from
various domains in nDCG@$K$. The code is available at
\href{https://github.com/kmswin1/Syntriever}{https://github.com/kmswin1/Syntriever}.
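As an illustration of the Plackett-Luce objective family that the alignment stage draws on, here is a full-ranking negative log-likelihood sketch (the paper's partial variant and its regularizer are not reproduced here):

```python
import numpy as np

def plackett_luce_nll(scores, ranking):
    """Negative log-likelihood of an LLM-preferred ordering of passages
    under the Plackett-Luce model, given retriever scores."""
    s = np.asarray(scores, dtype=float)[list(ranking)]
    nll = 0.0
    for i in range(len(s)):
        # log-probability that item i is picked first among the remaining ones
        nll -= s[i] - np.log(np.exp(s[i:]).sum())
    return nll

scores = [3.0, 2.0, 1.0]
good = plackett_luce_nll(scores, (0, 1, 2))   # ranking agrees with scores
bad = plackett_luce_nll(scores, (2, 1, 0))    # ranking reversed
```

Minimizing this loss pushes the retriever's scores toward the LLM's preferred ordering, since agreeing rankings receive lower NLL than reversed ones.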
|
2502.03825
|
Synthetic Poisoning Attacks: The Impact of Poisoned MRI Image on U-Net
Brain Tumor Segmentation
|
eess.IV cs.CR cs.CV
|
Deep learning-based medical image segmentation models, such as U-Net, rely on
high-quality annotated datasets to achieve accurate predictions. However, the
increasing use of generative models for synthetic data augmentation introduces
potential risks, particularly in the absence of rigorous quality control. In
this paper, we investigate the impact of synthetic MRI data on the robustness
and segmentation accuracy of U-Net models for brain tumor segmentation.
Specifically, we generate synthetic T1-contrast-enhanced (T1-Ce) MRI scans
using a GAN-based model with a shared encoding-decoding framework and
shortest-path regularization. To quantify the effect of synthetic data
contamination, we train U-Net models on progressively "poisoned" datasets,
where synthetic data proportions range from 16.67% to 83.33%. Experimental
results on a real MRI validation set reveal a significant performance
degradation as synthetic data increases, with Dice coefficients dropping from
0.8937 (33.33% synthetic) to 0.7474 (83.33% synthetic). Accuracy and
sensitivity exhibit similar downward trends, demonstrating the detrimental
effect of synthetic data on segmentation robustness. These findings underscore
the importance of quality control in synthetic data integration and highlight
the risks of unregulated synthetic augmentation in medical image analysis. Our
study provides critical insights for the development of more reliable and
trustworthy AI-driven medical imaging systems.
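The Dice coefficients reported above follow the standard overlap definition; for reference:

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice = 2|A ∩ B| / (|A| + |B|) on binary segmentation masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

a = np.array([[1, 1, 0], [0, 1, 0]])
b = np.array([[1, 0, 0], [0, 1, 1]])
# |A ∩ B| = 2, |A| = 3, |B| = 3  →  Dice = 4/6 ≈ 0.667
```

The drop from 0.8937 to 0.7474 reported in the study thus corresponds to a substantial loss of predicted/ground-truth mask overlap as the synthetic fraction grows.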
|
2502.03826
|
FairT2I: Mitigating Social Bias in Text-to-Image Generation via Large
Language Model-Assisted Detection and Attribute Rebalancing
|
cs.CV
|
The proliferation of Text-to-Image (T2I) models has revolutionized content
creation, providing powerful tools for diverse applications ranging from
artistic expression to educational material development and marketing. Despite
these technological advancements, significant ethical concerns arise from these
models' reliance on large-scale datasets that often contain inherent societal
biases. These biases are further amplified when AI-generated content is
included in training data, potentially reinforcing and perpetuating stereotypes
in the generated outputs. In this paper, we introduce FairT2I, a novel
framework that harnesses large language models to detect and mitigate social
biases in T2I generation. Our framework comprises two key components: (1) an
LLM-based bias detection module that identifies potential social biases in
generated images based on text prompts, and (2) an attribute rebalancing module
that fine-tunes sensitive attributes within the T2I model to mitigate
identified biases. Our extensive experiments across various T2I models and
datasets show that FairT2I can significantly reduce bias while maintaining
high-quality image generation. We conducted both qualitative user studies and
quantitative non-parametric analyses in the generated image feature space,
building upon the occupational dataset introduced in the Stable Bias study. Our
results show that FairT2I successfully mitigates social biases and enhances the
diversity of sensitive attributes in generated images. We further demonstrate,
using the P2 dataset, that our framework can detect subtle biases that are
challenging for human observers to perceive, extending beyond
occupation-related prompts. On the basis of these findings, we introduce a new
benchmark dataset for evaluating bias in T2I models.
|
2502.03827
|
A comprehensive survey of contemporary Arabic sentiment analysis:
Methods, Challenges, and Future Directions
|
cs.CL cs.AI
|
Sentiment Analysis, a popular subtask of Natural Language Processing, employs
computational methods to extract sentiment, opinions, and other subjective
aspects from linguistic data. Given its crucial role in understanding human
sentiment, research in sentiment analysis has witnessed significant growth in
recent years. However, the majority of approaches target the English
language, and research on Arabic sentiment analysis remains relatively
unexplored. This paper presents a comprehensive and contemporary survey of
Arabic Sentiment Analysis, identifies the challenges and limitations of
existing literature in this field and presents avenues for future research. We
present a systematic review of Arabic sentiment analysis methods, focusing
specifically on research utilizing deep learning. We then situate Arabic
Sentiment Analysis within the broader context, highlighting research gaps in
Arabic sentiment analysis as compared to general sentiment analysis. Finally,
we outline the main challenges and promising future directions for research in
Arabic sentiment analysis.
|
2502.03829
|
FE-UNet: Frequency Domain Enhanced U-Net with Segment Anything
Capability for Versatile Image Segmentation
|
cs.CV
|
Image segmentation is a critical task in visual understanding. Convolutional
Neural Networks (CNNs) are predisposed to capture high-frequency features in
images, while Transformers exhibit a contrasting focus on low-frequency
features. In this paper, we experimentally quantify the contrast sensitivity
function of CNNs and compare it with that of the human visual system, informed
by the seminal experiments of Mannos and Sakrison. Leveraging these insights,
we propose the Wavelet-Guided Spectral Pooling Module (WSPM) to enhance and
balance image features across the frequency domain. To further emulate the
human visual system, we introduce the Frequency Domain Enhanced Receptive Field
Block (FE-RFB), which integrates WSPM to extract enriched features from the
frequency domain. Building on these innovations, we develop FE-UNet, a model
that utilizes SAM2 as its backbone and incorporates Hiera-Large as a
pre-trained block, designed to enhance generalization capabilities while
ensuring high segmentation accuracy. Experimental results demonstrate that
FE-UNet achieves state-of-the-art performance in diverse tasks, including
marine animal and polyp segmentation, underscoring its versatility and
effectiveness.
|
2502.03835
|
Single-Domain Generalized Object Detection by Balancing Domain Diversity
and Invariance
|
cs.CV
|
Single-domain generalization for object detection (S-DGOD) aims to transfer
knowledge from a single source domain to unseen target domains. In recent
years, many models have focused primarily on achieving feature invariance to
enhance robustness. However, due to the inherent diversity across domains, an
excessive emphasis on invariance can cause the model to overlook the actual
differences between images. This overemphasis may complicate the training
process and lead to a loss of valuable information. To address this issue, we
propose the Diversity Invariance Detection Model (DIDM), which balances
domain-specific diversity against cross-domain invariance.
Recognizing that domain diversity introduces variations in domain-specific
features, we introduce a Diversity Learning Module (DLM). The DLM is designed
to preserve the diversity of domain-specific information with proposed feature
diversity loss while limiting the category semantics in the features. In
addition, to maintain domain invariance, we incorporate a Weighted Aligning
Module (WAM), which aligns features without compromising feature diversity. We
evaluated our model on five distinct datasets, and the results illustrate the
superior performance and effectiveness of the proposed model.
|
2502.03836
|
Adapting Human Mesh Recovery with Vision-Language Feedback
|
cs.CV
|
Human mesh recovery can be approached using either regression-based or
optimization-based methods. Regression models achieve high pose accuracy but
struggle with model-to-image alignment due to the lack of explicit 2D-3D
correspondences. In contrast, optimization-based methods align 3D models to 2D
observations but are prone to local minima and depth ambiguity. In this work,
we leverage large vision-language models (VLMs) to generate interactive body
part descriptions, which serve as implicit constraints to enhance 3D perception
and limit the optimization space. Specifically, we formulate monocular human
mesh recovery as a distribution adaptation task by integrating both 2D
observations and language descriptions. To bridge the gap between text and 3D
pose signals, we first train a text encoder and a pose VQ-VAE, aligning texts
to body poses in a shared latent space using contrastive learning.
Subsequently, we employ a diffusion-based framework to refine the initial
parameters guided by gradients derived from both 2D observations and text
descriptions. Finally, the model can produce poses with accurate 3D perception
and image consistency. Experimental results on multiple benchmarks validate its
effectiveness. The code will be made publicly available.
|
2502.03839
|
On the Number of Control Nodes in Boolean Networks with Degree
Constraints
|
eess.SY cs.SY
|
This paper studies the minimum control node set problem for Boolean networks
(BNs) with degree constraints. The main contribution is to derive the
nontrivial lower and upper bounds on the size of the minimum control node set
through combinatorial analysis of four types of BNs (i.e., $k$-$k$-XOR-BNs,
simple $k$-$k$-AND-BNs, $k$-$k$-AND-BNs with negation and $k$-$k$-NC-BNs, where
the $k$-$k$-AND-BN with negation is an extension of the simple $k$-$k$-AND-BN
that considers the occurrence of negation and NC means nested canalyzing). More
specifically, four bounds for the size of the minimum control node set: general
lower bound, best case upper bound, worst case lower bound, and general upper
bound are studied, where the general lower bound is a value that is not less
than the size of the control node set for any BN, the general upper bound is
the maximum value of the size of the minimum control node set for any BN, while
the best case upper bound (resp., the worst case lower bound) is the minimum
(resp., maximum) value currently found, which is obtained from some BN. By
dividing nodes into three disjoint sets, extending the time to reach the target
state, and utilizing necessary conditions for controllability, these bounds are
obtained, and further meaningful results and phenomena are discovered. Notably,
all of the above results involving the AND function also apply to the OR
function.
|
2502.03843
|
Improving Natural Language Understanding for LLMs via Large-Scale
Instruction Synthesis
|
cs.CL cs.AI
|
High-quality, large-scale instructions are crucial for aligning large
language models (LLMs); however, there is a severe shortage of instructions in
the field of natural language understanding (NLU). Previous works on
constructing NLU instructions mainly focus on information extraction (IE),
neglecting tasks such as machine reading comprehension, question answering, and
text classification. Furthermore, the lack of diversity in the data has led to
a decreased generalization ability of trained LLMs in other NLU tasks and a
noticeable decline in the fundamental model's general capabilities. To address
this issue, we propose Hum, a large-scale, high-quality synthetic instruction
corpus for NLU tasks, designed to enhance the NLU capabilities of LLMs.
Specifically, Hum includes IE (either closed IE or open IE), machine reading
comprehension, text classification, and instruction generalist tasks, thereby
enriching task diversity. Additionally, we introduce a human-LLMs collaborative
mechanism to synthesize instructions, which enriches instruction diversity by
incorporating guidelines, preference rules, and format variants. We conduct
extensive experiments on 5 NLU tasks and 28 general capability evaluation
datasets for LLMs. Experimental results show that Hum enhances the NLU
capabilities of six LLMs by an average of 3.1\%, with no significant decline
observed in other general capabilities.
|
2502.03845
|
PAGNet: Pluggable Adaptive Generative Networks for Information
Completion in Multi-Agent Communication
|
cs.MA
|
For partially observable cooperative tasks, multi-agent systems must develop
effective communication and understand the interplay among agents in order to
achieve cooperative goals. However, existing multi-agent reinforcement learning
(MARL) with communication methods lack evaluation metrics for information
weights and information-level communication modeling. This causes agents to
neglect the aggregation of multiple messages, thereby significantly reducing
policy learning efficiency. In this paper, we propose pluggable adaptive
generative networks (PAGNet), a novel framework that integrates generative
models into MARL to enhance communication and decision-making. PAGNet enables
agents to synthesize global state representations from weighted local
observations and use these representations alongside learned communication
weights for coordinated decision-making. This pluggable approach reduces the
computational demands typically associated with the joint training of
communication and policy networks. Extensive experimental evaluations across
diverse benchmarks and communication scenarios demonstrate the significant
performance improvements achieved by PAGNet. Furthermore, we analyze the
emergent communication patterns and the quality of generated global states,
providing insights into operational mechanisms.
|
2502.03850
|
Electromagnetic Channel Modeling and Capacity Analysis for HMIMO
Communications
|
cs.IT eess.SP math.IT
|
Advancements in emerging technologies, e.g., reconfigurable intelligent
surfaces and holographic MIMO (HMIMO), facilitate unprecedented manipulation of
electromagnetic (EM) waves, significantly enhancing the performance of wireless
communication systems. To accurately characterize the achievable performance
limits of these systems, it is crucial to develop a universal EM-compliant
channel model. This paper addresses this necessity by proposing a comprehensive
EM channel model tailored for realistic multi-path environments, accounting for
the combined effects of antenna array configurations and propagation conditions
in HMIMO communications. Both polarization phenomena and spatial correlation
are incorporated into this probabilistic channel model. Additionally, physical
constraints of antenna configurations, such as mutual coupling effects and
energy consumption, are integrated into the channel modeling framework.
Simulation results validate the effectiveness of the proposed probabilistic
channel model, indicating that traditional Rician and Rayleigh fading models
cannot accurately depict the channel characteristics and underestimate the
channel capacity. More importantly, the proposed channel model outperforms
free-space Green's functions in accurately depicting both near-field gain and
multi-path effects in radiative near-field regions. These gains are much more
evident in tri-polarized systems, highlighting the necessity of polarization
interference elimination techniques. Moreover, the theoretical analysis
verifies that capacity decreases as the communication region expands in
two-user communications.
|
2502.03852
|
Pursuing Better Decision Boundaries for Long-Tailed Object Detection via
Category Information Amount
|
cs.CV cs.AI
|
In object detection, the instance count is typically used to define whether a
dataset exhibits a long-tail distribution, implicitly assuming that models will
underperform on categories with fewer instances. This assumption has led to
extensive research on category bias in datasets with imbalanced instance
counts. However, models still exhibit category bias even in datasets where
instance counts are relatively balanced, clearly indicating that instance count
alone cannot explain this phenomenon. In this work, we first introduce the
concept and measurement of category information amount. We observe a
significant negative correlation between category information amount and
accuracy, suggesting that category information amount more accurately reflects
the learning difficulty of a category. Based on this observation, we propose
Information Amount-Guided Angular Margin (IGAM) Loss. The core idea of IGAM is
to dynamically adjust the decision space of each category based on its
information amount, thereby reducing category bias in long-tail datasets. IGAM
Loss not only performs well on long-tailed benchmark datasets such as LVIS v1.0
and COCO-LT but also shows significant improvement for underrepresented
categories in the non-long-tailed dataset Pascal VOC. Comprehensive experiments
demonstrate the potential of category information amount as a tool and the
generality of our proposed method.
|
2502.03854
|
Mirror Descent Actor Critic via Bounded Advantage Learning
|
cs.LG
|
Regularization is a core component of recent Reinforcement Learning (RL)
algorithms. Mirror Descent Value Iteration (MDVI) uses both Kullback-Leibler
divergence and entropy as regularizers in its value and policy updates. Despite
its empirical success in discrete action domains and strong theoretical
guarantees, the performance of a MDVI-based method does not surpass an
entropy-only-regularized method in continuous action domains. In this study, we
propose Mirror Descent Actor Critic (MDAC) as an actor-critic style
instantiation of MDVI for continuous action domains, and show that its
empirical performance is significantly boosted by bounding the actor's
log-density terms in the critic's loss function, compared to a non-bounded
naive instantiation. Further, we relate MDAC to Advantage Learning by recalling
that the actor's log-probability is equal to the regularized advantage function
in tabular cases, and theoretically discuss when and why bounding the advantage
terms is validated and beneficial. We also empirically explore a good choice
for the bounding function, and show that MDAC performs better than strong
non-regularized and entropy-only-regularized methods with an appropriate choice
of the bounding function.
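The bounding idea can be illustrated as a smooth clipping of the actor's log-density term inside the critic target (a sketch assuming a tanh bounding function; the study explores the choice of bounding function empirically):

```python
import numpy as np

def bounded(x, b=10.0):
    """Smoothly bound a log-density term to (-b, b) via tanh."""
    return b * np.tanh(np.asarray(x, dtype=float) / b)

def critic_target(reward, gamma, next_q, log_pi, alpha, b=10.0):
    """Soft-style critic target where the entropy term uses the bounded
    log-density instead of the raw, possibly exploding, one."""
    return reward + gamma * (next_q - alpha * bounded(log_pi, b))
```

For typical log-densities the bounded term is nearly identical to the raw value, but extreme values (e.g. near the edges of a squashed Gaussian policy's support) are capped at ±b instead of destabilizing the critic loss.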
|
2502.03855
|
Semi-rPPG: Semi-Supervised Remote Physiological Measurement with
Curriculum Pseudo-Labeling
|
cs.CV
|
Remote Photoplethysmography (rPPG) is a promising technique to monitor
physiological signals such as heart rate from facial videos. However, the
labeled facial videos in this research are challenging to collect. Current rPPG
research is mainly based on several small public datasets collected in simple
environments, which limits the generalization and scale of the AI models.
Semi-supervised methods that leverage a small amount of labeled data and
abundant unlabeled data can fill this gap for rPPG learning. In this study, a
novel semi-supervised learning method named Semi-rPPG that combines curriculum
pseudo-labeling and consistency regularization is proposed to extract intrinsic
physiological features from unlabelled data without the model being impaired
by noise. Specifically, a curriculum pseudo-labeling strategy with
signal-to-noise ratio (SNR) criteria is proposed to annotate the unlabelled
data while adaptively filtering out the low-quality unlabelled data. Besides, a
novel consistency regularization term for quasi-periodic signals is proposed
through weak and strong augmented clips. To benefit the research on
semi-supervised rPPG measurement, we establish a novel semi-supervised
benchmark for rPPG learning through intra-dataset and cross-dataset evaluation
on four public datasets. The proposed Semi-rPPG method achieves the best
results compared with three classical semi-supervised methods under different
protocols. Ablation studies are conducted to prove the effectiveness of the
proposed methods.
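The SNR criterion used to filter low-quality pseudo-labels can be sketched as the ratio of spectral power near the predicted heart-rate frequency (and its first harmonic) to the remaining power in the physiological band (the band edges below are common conventions, not the paper's stated values):

```python
import numpy as np

def rppg_snr_db(signal, fs, hr_hz, half_width=0.2):
    """SNR (dB) of an rPPG trace around a heart-rate estimate hr_hz."""
    x = np.asarray(signal, dtype=float)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    power = np.abs(np.fft.rfft(x - x.mean())) ** 2
    near_hr = ((np.abs(freqs - hr_hz) < half_width) |
               (np.abs(freqs - 2 * hr_hz) < half_width))
    in_band = (freqs >= 0.5) & (freqs <= 4.0)     # plausible HR range
    sig_p = power[near_hr & in_band].sum()
    noise_p = power[~near_hr & in_band].sum()
    return 10.0 * np.log10((sig_p + 1e-12) / (noise_p + 1e-12))

fs = 30.0
t = np.arange(0, 10, 1 / fs)
clean = np.sin(2 * np.pi * 1.2 * t)               # 72 bpm pulse
noisy = np.random.default_rng(0).standard_normal(t.size)
```

Clips whose pseudo-label SNR falls below a threshold would be adaptively discarded, which is how the curriculum filters out low-quality unlabelled data.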
|
2502.03856
|
Taking A Closer Look at Interacting Objects: Interaction-Aware Open
Vocabulary Scene Graph Generation
|
cs.CV
|
Today's open vocabulary scene graph generation (OVSGG) extends traditional
SGG by recognizing novel objects and relationships beyond predefined
categories, leveraging the knowledge from pre-trained large-scale models. Most
existing methods adopt a two-stage pipeline: weakly supervised pre-training
with image captions and supervised fine-tuning (SFT) on fully annotated scene
graphs. Nonetheless, they omit explicit modeling of interacting objects and
treat all objects equally, resulting in mismatched relation pairs. To this end,
we propose an interaction-aware OVSGG framework INOVA. During pre-training,
INOVA employs an interaction-aware target generation strategy to distinguish
interacting objects from non-interacting ones. In SFT, INOVA devises an
interaction-guided query selection tactic to prioritize interacting objects
during bipartite graph matching. In addition, INOVA is equipped with an
interaction-consistent knowledge distillation to enhance the robustness by
pushing interacting object pairs away from the background. Extensive
experiments on two benchmarks (VG and GQA) show that INOVA achieves
state-of-the-art performance, demonstrating the potential of interaction-aware
mechanisms for real-world applications.
|
2502.03859
|
Stabilizing scheduling logic for networked control systems under limited
capacity and lossy communication networks
|
eess.SY cs.SY
|
In this paper we address the problem of designing scheduling logic for
stabilizing Networked Control Systems (NCSs) with plants and controllers
remotely-located over a limited capacity communication network subject to data
losses. Our specific contributions include characterization of stability under
worst-case data loss using an inequality associated with a cycle on a graph.
This is eventually formulated as a feasibility problem to solve for certain
parameters (\(T\)-factors) used to design a periodic scheduling logic. We show
that given a solution to the feasibility problem, the designed scheduling logic
guarantees \emph{global asymptotic stability} for all plants of the network
under all admissible data losses. We also derive sufficient conditions on the
number of plants and the capacity of the network for the existence of a
solution to the feasibility problem. Given that a sufficient condition is
satisfied, we discuss the procedure to obtain the feasible \(T\)-factors. We
use tools from switched systems theory and graph theory in this work. A
numerical experiment is provided to verify our results.
|
2502.03860
|
BOLT: Bootstrap Long Chain-of-Thought in Language Models without
Distillation
|
cs.CL
|
Large language models (LLMs), such as o1 from OpenAI, have demonstrated
remarkable reasoning capabilities. o1 generates a long chain-of-thought
(LongCoT) before answering a question. LongCoT allows LLMs to analyze problems,
devise plans, reflect, and backtrack effectively. These actions empower LLMs to
solve complex problems. After the release of o1, many teams have attempted to
replicate its LongCoT and reasoning capabilities. In terms of methods, they
primarily rely on knowledge distillation with data from existing models with
LongCoT capacities (e.g., OpenAI-o1, Qwen-QwQ, DeepSeek-R1-Preview), leaving
significant uncertainties on systematically developing such reasoning
abilities. In terms of data domains, these works focus narrowly on math while a
few others include coding, limiting their generalizability. This paper
introduces a novel approach to enable LLMs' LongCoT capacity without
distillation from o1-like models or expensive human annotations, where we
bootstrap LongCoT (BOLT) from a standard instruct model. BOLT involves three
stages: 1) LongCoT data bootstrapping with in-context learning on a standard
instruct model; 2) LongCoT supervised finetuning; 3) online training to further
refine LongCoT capacities. In BOLT, only a few in-context examples need to be
constructed during the bootstrapping stage; in our experiments, we created 10
examples, demonstrating the feasibility of this approach. We use
Llama-3.1-70B-Instruct to bootstrap LongCoT and apply our method to various
model scales (7B, 8B, 70B). We achieve impressive performance on a variety of
benchmarks, including Arena-Hard, MT-Bench, WildBench, ZebraLogic, and MATH500, which
evaluate diverse task-solving and reasoning capabilities.
|
2502.03866
|
Weyl symmetry of the gradient-flow in information geometry
|
gr-qc cs.IT math-ph math.IT math.MP
|
We have revisited the gradient-flow in information geometry from the
perspective of Weyl symmetry. The gradient-flow equations are derived from the
proposed action, which is invariant under Weyl gauge transformations. In
Weyl integrable geometry, we have related Amari's $\alpha$-connections in
information geometry to the Weyl-invariant connection on the Riemannian
manifold equipped with the scaled metric.
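For orientation, the gradient-flow equation in information geometry is commonly written (a standard textbook form, not quoted from this paper) as

```latex
\[
  \frac{d\theta^{i}}{dt}
    = -\, g^{ij}(\theta)\, \frac{\partial \Phi(\theta)}{\partial \theta^{j}},
\]
```

where \(g_{ij}(\theta)\) is the Fisher information metric and \(\Phi\) is a potential function; the abstract's claim is that equations of this type follow from an action invariant under Weyl gauge transformations.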
|
2502.03876
|
Position: Untrained Machine Learning for Anomaly Detection
|
cs.LG
|
Anomaly detection based on 3D point cloud data is an important research
problem and has received increasing attention in recent years. Untrained
anomaly detection based on only one sample is an emerging research problem
motivated by real manufacturing settings, such as personalized manufacturing,
in which only one sample can be collected and no additional labels are
available. How to accurately identify anomalies from a single 3D point cloud
sample is a critical challenge in both industrial applications and the field
of machine learning. This paper aims to provide a formal definition of the
untrained anomaly detection problem based on 3D point cloud data and to
discuss the differences between untrained anomaly detection and current
unsupervised anomaly detection methods. Unlike unsupervised learning,
untrained methods do not rely on any data, including unlabeled data. Instead,
they leverage prior knowledge about the manufacturing surfaces and anomalies.
Examples are used to illustrate this prior knowledge and untrained machine
learning models. Afterwards, a literature review on untrained anomaly
detection based on 3D point cloud data is provided, and the potential of
untrained deep neural networks for anomaly detection is discussed as an
outlook.
|
2502.03877
|
Advanced Object Detection and Pose Estimation with Hybrid Task Cascade
and High-Resolution Networks
|
cs.CV
|
In the field of computer vision, 6D object detection and pose estimation are
critical for applications such as robotics, augmented reality, and autonomous
driving. Traditional methods often struggle with achieving high accuracy in
both object detection and precise pose estimation simultaneously. This study
proposes an improved 6D object detection and pose estimation pipeline based on
the existing 6D-VNet framework, enhanced by integrating a Hybrid Task Cascade
(HTC) and a High-Resolution Network (HRNet) backbone. By leveraging the
strengths of HTC's multi-stage refinement process and HRNet's ability to
maintain high-resolution representations, our approach significantly improves
detection accuracy and pose estimation precision. Furthermore, we introduce
advanced post-processing techniques and a novel model integration strategy that
collectively contribute to superior performance on public and private
benchmarks. Our method demonstrates substantial improvements over
state-of-the-art models, making it a valuable contribution to the domain of 6D
object detection and pose estimation.
|
2502.03884
|
Rank Also Matters: Hierarchical Configuration for Mixture of Adapter
Experts in LLM Fine-Tuning
|
cs.LG cs.AI
|
Large language models (LLMs) have demonstrated remarkable success across
various tasks, accompanied by a continuous increase in their parameter size.
Parameter-efficient fine-tuning (PEFT) methods, such as Low-Rank Adaptation
(LoRA), address the challenges of fine-tuning LLMs by significantly reducing
the number of trainable parameters. Recent studies have integrated LoRA with
Mixture of Experts (MoE) architectures, leveraging multiple adapter experts and
gating mechanisms to further improve fine-tuning performance. However, existing
approaches primarily focus on adjusting the allocations of adapter experts per
layer to optimize the introduced trainable parameter size, while neglecting a
critical factor: the rank of the adapters. To this end, we propose a hierarchical
scheme for expert allocation and rank configuration, HILO, which dynamically
adjusts the number and rank of adapter experts across layers, matching the
varying representational complexity of model layers at adapter granularity.
Extensive experiments on multiple benchmark tasks demonstrate that HILO
outperforms existing methods in accuracy while introducing fewer trainable
parameters, providing an efficient and practical solution for fine-tuning LLMs.
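A toy sketch of hierarchical expert-and-rank allocation; the scaling heuristic, the complexity scores, and the parameter accounting below are illustrative assumptions, not HILO's actual algorithm:

```python
from dataclasses import dataclass

@dataclass
class LayerConfig:
    n_experts: int  # number of adapter experts in this layer
    rank: int       # LoRA rank shared by the experts of this layer

def hierarchical_allocation(complexity, base_experts=2, base_rank=4):
    """Give layers with higher (normalized) complexity scores more adapter
    experts and larger ranks. A toy heuristic, not HILO's algorithm."""
    max_c = max(complexity)
    configs = []
    for c in complexity:
        scale = 1 + round(3 * c / max_c)  # integer scale in 1..4
        configs.append(LayerConfig(n_experts=base_experts * scale,
                                   rank=base_rank * scale))
    return configs

def trainable_params(configs, d_model=768):
    """Each rank-r LoRA adapter on a d x d weight adds 2 * d * r parameters."""
    return sum(cfg.n_experts * 2 * d_model * cfg.rank for cfg in configs)
```

This captures the core idea that both the number of experts and their rank vary across layers, so the introduced trainable parameter size depends on both knobs jointly.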
|
2502.03885
|
InfinitePOD: Building Datacenter-Scale High-Bandwidth Domain for LLM
with Optical Circuit Switching Transceivers
|
cs.NI cs.DC cs.LG
|
Scaling Large Language Model (LLM) training relies on multi-dimensional
parallelism, where High-Bandwidth Domains (HBDs) are critical for
communication-intensive parallelism like Tensor Parallelism (TP) and Expert
Parallelism (EP). However, existing HBD architectures face fundamental
limitations in scalability, cost, and fault resiliency: switch-centric HBDs
(e.g., NVL-72) incur prohibitive scaling costs, while GPU-centric HBDs (e.g.,
TPUv3/Dojo) suffer from severe fault propagation. Switch-GPU hybrid HBDs such
as TPUv4 take a middle-ground approach by leveraging Optical Circuit Switches,
but the fault explosion radius remains large at the cube level (e.g., 64 TPUs).
We propose InfinitePOD, a novel transceiver-centric HBD architecture that
unifies connectivity and dynamic switching at the transceiver level using
Optical Circuit Switching (OCS). By embedding OCS within each transceiver,
InfinitePOD achieves reconfigurable point-to-multipoint connectivity, allowing
the topology to adapt into variable-size rings. This design provides: i)
datacenter-wide scalability without cost explosion; ii) fault resilience by
isolating failures to a single node, and iii) full bandwidth utilization for
fault-free GPUs. Key innovations include a Silicon Photonic (SiPh) based
low-cost OCS transceiver (OCSTrx), a reconfigurable k-hop ring topology
co-designed with intra-/inter-node communication, and an HBD-DCN orchestration
algorithm maximizing GPU utilization while minimizing cross-ToR datacenter
network traffic. The evaluation demonstrates that InfinitePOD achieves 31% of
the cost of NVL-72, near-zero GPU waste ratio (over one order of magnitude
lower than NVL-72 and TPUv4), near-zero cross-ToR traffic when node fault
ratios are under 7%, and improves Model FLOPs Utilization by 3.37x compared to
NVIDIA DGX (8 GPUs per Node).
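The fault-isolation property of the reconfigurable ring can be illustrated with a toy sketch (hypothetical helper; this is not the paper's HBD-DCN orchestration algorithm): when a node fails, the transceivers simply re-form the ring over the remaining healthy nodes, so the blast radius is one node.

```python
def build_ring(nodes, failed, k=1):
    """Form a ring over healthy nodes only, so a failure removes just that
    node rather than a whole cube/pod. Returns each healthy node's next
    k ring neighbors. A toy sketch of variable-size ring reconfiguration."""
    healthy = [n for n in nodes if n not in failed]
    size = len(healthy)
    ring = {}
    for i, node in enumerate(healthy):
        ring[node] = [healthy[(i + hop) % size] for hop in range(1, k + 1)]
    return ring
```

For example, with 8 nodes and node 3 failed, node 2's ring neighbors skip directly to nodes 4 and 5, and every fault-free node keeps its full complement of ring links.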
|