| id | title | categories | abstract |
|---|---|---|---|
2502.01701
|
Learning with Differentially Private (Sliced) Wasserstein Gradients
|
cs.LG math.ST stat.TH
|
In this work, we introduce a novel framework for privately optimizing
objectives that rely on Wasserstein distances between data-dependent empirical
measures. Our main theoretical contribution is a bound on the sensitivity of
this gradient to individual data points, derived from an explicit formulation
of the Wasserstein gradient in a fully discrete setting; this bound enables
strong privacy guarantees at minimal utility cost. Building on these insights,
we develop a deep learning approach that incorporates gradient and activation
clipping, techniques originally designed for DP training of problems with a finite-sum
structure. We further demonstrate that privacy accounting methods extend to
Wasserstein-based objectives, facilitating large-scale private training.
Empirical results confirm that our framework effectively balances accuracy and
privacy, offering a theoretically sound solution for privacy-preserving machine
learning tasks relying on optimal transport distances such as Wasserstein
distance or sliced-Wasserstein distance.
|
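The mechanism this abstract describes, clipping each data point's contribution to a (sliced) Wasserstein gradient before adding Gaussian noise, can be illustrated in a few lines. This is a hedged sketch of the general idea only: the projection count, clipping norm, and noise scale are illustrative placeholders, not the paper's calibrated mechanism, and the two empirical measures are assumed to have equally many points.

```python
import numpy as np

def dp_sliced_wasserstein_grad(x, y, n_proj=50, clip=1.0, sigma=0.5, seed=None):
    # Sketch: gradient of the sliced Wasserstein-2 distance between two
    # empirical measures, with DP-SGD-style per-point clipping and Gaussian
    # noise. Illustrative only; not the paper's calibrated mechanism.
    rng = np.random.default_rng(seed)
    n, d = x.shape
    grad = np.zeros_like(x)
    for _ in range(n_proj):
        theta = rng.normal(size=d)
        theta /= np.linalg.norm(theta)           # random projection direction
        xp, yp = x @ theta, y @ theta
        ix, iy = np.argsort(xp), np.argsort(yp)  # 1D optimal matching = sorting
        diff = np.zeros(n)
        diff[ix] = 2.0 * (xp[ix] - yp[iy]) / n   # 1D W2^2 gradient per point
        grad += np.outer(diff, theta) / n_proj
    # Clip each data point's gradient contribution, then add Gaussian noise.
    norms = np.linalg.norm(grad, axis=1, keepdims=True)
    grad = grad * np.minimum(1.0, clip / np.maximum(norms, 1e-12))
    grad += rng.normal(scale=sigma * clip, size=grad.shape)
    return grad
```

The clipping step is what the sensitivity bound justifies: once each point's contribution has bounded norm, standard Gaussian-mechanism accounting applies.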
2502.01702
|
Al-Khwarizmi: Discovering Physical Laws with Foundation Models
|
cs.LG
|
Inferring physical laws from data is a central challenge in science and
engineering, including but not limited to healthcare, physical sciences,
biosciences, social sciences, sustainability, climate, and robotics. Deep
networks offer high-accuracy results but lack interpretability, prompting
interest in models built from simple components. The Sparse Identification of
Nonlinear Dynamics (SINDy) method has become the go-to approach for building
such modular and interpretable models. SINDy leverages sparse regression with
L1 regularization to identify key terms from a library of candidate functions.
However, SINDy's choice of candidate library and optimization method requires
significant technical expertise, limiting its widespread applicability. This
work introduces Al-Khwarizmi, a novel agentic framework for physical law
discovery from data, which integrates foundation models with SINDy.
Leveraging LLMs, VLMs, and Retrieval-Augmented Generation (RAG), our approach
automates physical law discovery, incorporating prior knowledge and iteratively
refining candidate solutions via reflection. Al-Khwarizmi operates in two
steps: it first summarizes system observations (textual descriptions, raw
data, and plots), then generates candidate feature libraries and optimizer
configurations to correctly identify hidden physical laws. Evaluating our
algorithm on over 198 models, we demonstrate state-of-the-art performance,
achieving a 20 percent improvement over the best-performing alternative.
|
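For readers unfamiliar with the SINDy step that Al-Khwarizmi configures automatically, a minimal sparse-regression sketch helps. This uses sequentially thresholded least squares, a standard SINDy optimizer, as a stand-in for the L1-regularized variant the abstract mentions; both promote sparsity over the candidate library.

```python
import numpy as np

def stlsq(theta, dxdt, threshold=0.1, n_iter=10):
    # Sequentially thresholded least squares: fit, zero out small
    # coefficients, refit on the surviving library terms, repeat.
    xi = np.linalg.lstsq(theta, dxdt, rcond=None)[0]
    for _ in range(n_iter):
        small = np.abs(xi) < threshold
        xi[small] = 0.0
        for k in range(dxdt.shape[1]):
            big = ~small[:, k]
            if big.any():
                xi[big, k] = np.linalg.lstsq(theta[:, big], dxdt[:, k],
                                             rcond=None)[0]
    return xi
```

Usage: for data from dx/dt = -2x with library [1, x, x^2], the recovered coefficient vector is sparse, with only the x term active.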
2502.01703
|
QLESS: A Quantized Approach for Data Valuation and Selection in Large
Language Model Fine-Tuning
|
cs.LG cs.AI cs.CL
|
Fine-tuning large language models (LLMs) is often constrained by the
computational costs of processing massive datasets. We propose \textbf{QLESS}
(Quantized Low-rank Gradient Similarity Search), which integrates gradient
quantization with the LESS framework to enable memory-efficient data valuation
and selection. QLESS employs a two-step compression process: first, it obtains
low-dimensional gradient representations through LoRA-based random projection;
then, it quantizes these gradients to low-bitwidth representations. Experiments
on multiple LLM architectures (LLaMA, Mistral, Qwen) and benchmarks (MMLU, BBH,
TyDiQA) show that QLESS achieves comparable data selection performance to LESS
while reducing memory usage by up to 16x. Even 1-bit gradient quantization
preserves data valuation quality. These findings establish QLESS as a
practical, scalable approach to identifying informative examples within strict
memory constraints.
|
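The two-step compression described above can be sketched as follows. This is an assumed illustration: a plain Gaussian random projection stands in for the LoRA-based projection, absmax quantization stands in for the paper's scheme, and a cosine similarity between dequantized gradients stands in for the full LESS-style valuation pipeline.

```python
import numpy as np

def project_and_quantize(grad, proj, bits=8):
    # Step 1: random projection to a low dimension (stand-in for the
    # LoRA-based projection). Step 2: symmetric absmax quantization.
    z = proj @ grad
    scale = np.max(np.abs(z)) + 1e-12
    if bits == 1:
        return np.sign(z), scale            # 1-bit: keep only signs
    levels = 2 ** (bits - 1) - 1
    return np.round(z / scale * levels), scale / levels

def influence(train_q, train_s, val_q, val_s):
    # LESS-style score: cosine similarity between dequantized gradients.
    a, b = train_q * train_s, val_q * val_s
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
```

Only the small quantized vector and one scale per gradient need to be stored, which is the source of the memory savings the abstract reports.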
2502.01704
|
Adaptive Observation Cost Control for Variational Quantum Eigensolvers
|
quant-ph cs.LG
|
The objective to be minimized in the variational quantum eigensolver (VQE)
has a restricted form, which allows a specialized sequential minimal
optimization (SMO) that requires only a few observations in each iteration.
However, the SMO iteration is still costly due to the observation noise -- one
observation at a point typically requires averaging over hundreds to thousands
of repeated quantum measurement shots for achieving a reasonable noise level.
In this paper, we propose an adaptive cost control method, named subspace in
confident region (SubsCoRe), for SMO. SubsCoRe uses the Gaussian process (GP)
surrogate, and requires it to have low uncertainty over the subspace being
updated, so that optimization in each iteration is performed with guaranteed
accuracy. The adaptive cost control is performed by first setting the required
accuracy according to the progress of the optimization, and then choosing the
minimum number of measurement shots and their distribution such that the
required accuracy is satisfied. We demonstrate that SubsCoRe significantly
improves the efficiency of SMO, and outperforms the state-of-the-art methods.
|
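The shot-budget arithmetic underlying adaptive cost control is easy to make concrete: averaging M shots reduces the observation variance by a factor of M. The sketch below is only this generic shot-noise relation, not SubsCoRe's GP-based subspace criterion; all names are illustrative.

```python
import math

def shots_for_std(single_shot_var, required_std):
    # Averaging M shots gives variance single_shot_var / M, so the smallest
    # shot count meeting a target standard deviation is ceil(var / std^2).
    return max(1, math.ceil(single_shot_var / required_std ** 2))

def schedule(single_shot_var, required_stds):
    # Per-point budget for one iteration: tighter accuracy targets
    # (smaller required std) receive more measurement shots.
    return [shots_for_std(single_shot_var, s) for s in required_stds]
```

Loosening the required accuracy early in the optimization, as SubsCoRe does, translates directly into fewer shots per iteration.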
2502.01705
|
Progressive Binarization with Semi-Structured Pruning for LLMs
|
cs.LG
|
Large language models (LLMs) have achieved remarkable success in natural
language processing tasks, but their high computational and memory demands pose
challenges for deployment on resource-constrained devices. Binarization, as an
efficient compression method that reduces model weights to just 1 bit,
significantly lowers both computational and memory requirements. Despite this,
the binarized LLM still contains redundancy and can be further compressed.
Semi-structured pruning provides a promising way to achieve this, as it
offers a better trade-off between model performance and hardware efficiency.
However, simply combining binarization with semi-structured pruning can lead to
a significant performance drop. To address this issue, we propose a Progressive
Binarization with Semi-Structured Pruning (PBS$^2$P) method for LLM
compression. We first propose Stepwise semi-structured Pruning with
Binarization Optimization (SPBO), an optimization strategy that significantly
reduces the total error caused by pruning and binarization, even below that of
the no-pruning scenario. Furthermore, we design a Coarse-to-Fine Search (CFS)
method to select pruning elements more effectively. Extensive experiments
demonstrate that PBS$^2$P achieves superior accuracy across various LLM
families and evaluation metrics, noticeably outperforming state-of-the-art
(SOTA) binary PTQ methods. The code and models will be available at
https://github.com/XIANGLONGYAN/PBS2P.
|
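A minimal illustration of combining 2:4 semi-structured pruning with 1-bit binarization, i.e. the naive combination whose error SPBO is designed to reduce. The scaling choice (alpha as the mean magnitude of kept weights, XNOR-Net-style) is an assumption for illustration, not the paper's method.

```python
import numpy as np

def prune24_and_binarize(w):
    # 2:4 semi-structured pruning: keep the two largest-magnitude weights in
    # each group of four; then binarize survivors to alpha * sign(w).
    assert w.size % 4 == 0
    g = w.reshape(-1, 4)
    mask = np.zeros_like(g, dtype=bool)
    top2 = np.argsort(np.abs(g), axis=1)[:, 2:]      # indices of top-2 per group
    np.put_along_axis(mask, top2, True, axis=1)
    alpha = np.mean(np.abs(g[mask]))                  # one shared scale
    return (np.sign(g) * alpha * mask).reshape(w.shape), mask.reshape(w.shape)
```

The result is 50% sparse with a single magnitude level, which is exactly the hardware-friendly structure the abstract refers to.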
2502.01706
|
Comply: Learning Sentences with Complex Weights inspired by Fruit Fly
Olfaction
|
cs.CL cs.AI cs.LG cs.NE
|
Biologically inspired neural networks offer alternative avenues to model data
distributions. FlyVec is a recent example that draws inspiration from the fruit
fly's olfactory circuit to tackle the task of learning word embeddings.
Surprisingly, this model performs competitively even against deep learning
approaches specifically designed to encode text, and it does so with the
highest degree of computational efficiency. We pose the question of whether
this performance can be improved further. For this, we introduce Comply. By
incorporating positional information through complex weights, we enable a
single-layer neural network to learn sequence representations. Our experiments
show that Comply not only surpasses FlyVec but also performs on par with
significantly larger state-of-the-art models. We achieve this without
additional parameters. Comply yields sparse contextual representations of
sentences that can be interpreted explicitly from the neuron weights.
|
2502.01707
|
CLIP-DQA: Blindly Evaluating Dehazed Images from Global and Local
Perspectives Using CLIP
|
cs.CV cs.AI
|
Blind dehazed image quality assessment (BDQA), which aims to accurately
predict the visual quality of dehazed images without any reference information,
is essential for the evaluation, comparison, and optimization of image dehazing
algorithms. Existing learning-based BDQA methods have achieved remarkable
success, but the small scale of DQA datasets limits their performance. To
address this issue, in this paper, we propose to adapt Contrastive
Language-Image Pre-Training (CLIP), pre-trained on large-scale image-text
pairs, to the BDQA task. Specifically, inspired by the fact that the human
visual system understands images based on hierarchical features, we take global
and local information of the dehazed image as the input to CLIP. To accurately
map the input hierarchical information of dehazed images into the quality
score, we tune both the vision branch and language branch of CLIP with prompt
learning. Experimental results on two authentic DQA datasets demonstrate that
our proposed approach, named CLIP-DQA, achieves more accurate quality
predictions over existing BDQA methods. The code is available at
https://github.com/JunFu1995/CLIP-DQA.
|
2502.01708
|
Aspects of Artificial Intelligence: Transforming Machine Learning
Systems Naturally
|
cs.LG cs.AI cs.DB cs.DM
|
In this paper, we study machine learning elements of interest together as a
machine learning system, consisting of a collection of machine learning
elements and a collection of relations between the elements. The relations we
consider are algebraic operations, binary relations, and binary relations
with composition, all of which can be reasoned about categorically. A machine
learning system transformation between two systems is a map between the
systems that preserves the relations we consider. The system transformations
given by quotients or clustering, representable functors, and the Yoneda
embedding are highlighted and discussed through machine learning examples. An
adjunction between machine learning systems, a special loop of system
transformations, provides an optimal way of solving problems. Machine
learning system transformations are linked and compared by their maps at the
2-cell level, namely natural transformations. New insights and structures can
be obtained from universal properties and from the algebraic structures given
by monads, which are generated from adjunctions.
|
2502.01709
|
Adapter-Based Multi-Agent AVSR Extension for Pre-Trained ASR Models
|
cs.SD cs.LG eess.AS
|
We present an approach to Audio-Visual Speech Recognition that builds on a
pre-trained Whisper model. To infuse visual information into this audio-only
model, we extend it with an AV fusion module and LoRA adapters, one of the most
up-to-date adapter approaches. One advantage of adapter-based approaches is
that only a relatively small number of parameters are trained, while the basic
model remains unchanged. Common AVSR approaches train single models to handle
several noise categories and noise levels simultaneously. Taking advantage of
the lightweight nature of adapter approaches, we train noise-scenario-specific
adapter-sets, each covering individual noise-categories or a specific
noise-level range. The most suitable adapter-set is selected by first
classifying the noise-scenario. This enables our models to achieve optimal
coverage across different noise-categories and noise-levels, while training
only a minimum number of parameters.
Compared to a full fine-tuning approach with SOTA performance, our models
achieve almost comparable results over the majority of the tested
noise-categories and noise-levels, with up to 88.5% fewer trainable parameters.
Our approach can be extended by further noise-specific adapter-sets to cover
additional noise scenarios. It is also possible to utilize the underlying
powerful ASR model when no visual information is available, as it remains
unchanged.
|
2502.01710
|
A Multi-Scale Feature Fusion Framework Integrating Frequency Domain and
Cross-View Attention for Dual-View X-ray Security Inspections
|
cs.CV
|
With the rapid development of modern transportation systems and the
exponential growth of logistics volumes, intelligent X-ray-based security
inspection systems play a crucial role in public safety. Although single-view
X-ray equipment is widely deployed, it struggles to accurately identify
contraband in complex stacking scenarios due to strong viewpoint dependency and
inadequate feature representation. To address this, we propose an innovative
multi-scale interactive feature fusion framework tailored for dual-view X-ray
security inspection image classification. The framework comprises three core
modules: the Frequency Domain Interaction Module (FDIM) enhances
frequency-domain features through Fourier transform; the Multi-Scale Cross-View
Feature Enhancement (MSCFE) leverages cross-view attention mechanisms to
strengthen feature interactions; and the Convolutional Attention Fusion Module
(CAFM) efficiently fuses features by integrating channel attention with
depthwise-separable convolutions. Experimental results demonstrate that our
method outperforms existing state-of-the-art approaches across multiple
backbone architectures, particularly excelling in complex scenarios with
occlusions and object stacking.
|
2502.01711
|
Expected Return Symmetries
|
cs.MA
|
Symmetry is an important inductive bias that can improve model robustness and
generalization across many deep learning domains. In multi-agent settings, a
priori known symmetries have been shown to address a fundamental coordination
failure mode known as mutually incompatible symmetry breaking; e.g. in a game
where two independent agents can choose to move "left" or "right", and where
a reward of +1 or -1 is received when the agents choose the same action or
different actions, respectively. However, the efficient and automatic discovery
of environment symmetries, in particular for decentralized partially observable
Markov decision processes, remains an open problem. Furthermore, environmental
symmetry breaking constitutes only one type of coordination failure, which
motivates the search for a more accessible and broader symmetry class. In this
paper, we introduce such a broader group of previously unexplored symmetries,
called expected return symmetries, which contains environment symmetries
as a subgroup. We show that agents trained to be compatible under the group of
expected return symmetries achieve better zero-shot coordination results than
those using environment symmetries. As an additional benefit, our method makes
minimal a priori assumptions about the structure of their environment and does
not require access to ground truth symmetries.
|
2502.01713
|
Auditing a Dutch Public Sector Risk Profiling Algorithm Using an
Unsupervised Bias Detection Tool
|
cs.CY cs.LG
|
Algorithms are increasingly used to automate or aid human decisions, yet
recent research shows that these algorithms may exhibit bias across legally
protected demographic groups. However, data on these groups may be unavailable
to organizations or external auditors due to privacy legislation. This paper
studies bias detection using an unsupervised clustering tool when data on
demographic groups are unavailable. We collaborate with the Dutch Executive
Agency for Education to audit an algorithm that was used to assign risk scores
to college students at the national level in the Netherlands between 2012 and 2023.
Our audit covers more than 250,000 students from the whole country. The
unsupervised clustering tool highlights known disparities between students with
a non-European migration background and students of Dutch origin. Our contributions are
three-fold: (1) we assess bias in a real-world, large-scale and high-stakes
decision-making process by a governmental organization; (2) we use simulation
studies to highlight potential pitfalls of using the unsupervised clustering
tool to detect true bias when demographic group data are unavailable and
provide recommendations for valid inferences; (3) we provide the unsupervised
clustering tool in an open-source library. Our work serves as a starting point
for a deliberative assessment by human experts to evaluate potential
discrimination in algorithmic-supported decision-making processes.
|
2502.01714
|
Position: Towards a Responsible LLM-empowered Multi-Agent Systems
|
cs.MA cs.AI
|
The rise of Agent AI and Large Language Model-powered Multi-Agent Systems
(LLM-MAS) has underscored the need for responsible and dependable system
operation. Tools like LangChain and Retrieval-Augmented Generation have
expanded LLM capabilities, enabling deeper integration into MAS through
enhanced knowledge retrieval and reasoning. However, these advancements
introduce critical challenges: LLM agents exhibit inherent unpredictability,
and uncertainties in their outputs can compound across interactions,
threatening system stability. To address these risks, a human-centered design
approach with active dynamic moderation is essential. Such an approach enhances
traditional passive oversight by facilitating coherent inter-agent
communication and effective system governance, allowing MAS to achieve desired
outcomes more efficiently.
|
2502.01715
|
Process-Supervised Reinforcement Learning for Code Generation
|
cs.SE cs.AI
|
Existing reinforcement learning strategies based on outcome supervision have
proven effective in enhancing the performance of large language models (LLMs)
for code generation. While reinforcement learning based on process supervision
has shown great promise in handling multi-step reasoning tasks, its
effectiveness in code generation remains largely underexplored and
underjustified. The primary obstacle stems from the resource-intensive nature
of constructing high-quality process-supervised data, which demands substantial
human expertise and computational resources. In response to this challenge, we
propose a "statement mutation/refactoring-compile and execution verification"
strategy: mutating and refactoring code line-by-line through a teacher model,
and utilizing compiler execution results to automatically label each line,
resulting in line-by-line process-supervised data, which is pivotal for
training a process-supervised reward model. The trained reward model is then
integrated into the PRLCoder framework, followed by experimental validation on
several benchmarks. Experimental results demonstrate that process-supervised
reinforcement learning significantly surpasses methods relying solely on
outcome supervision. Notably, in tackling complex code generation tasks,
process-supervised reinforcement learning shows a clear advantage, ensuring
both the integrity of the code generation process and the correctness of the
generation results.
|
2502.01717
|
Choose Your Model Size: Any Compression by a Single Gradient Descent
|
cs.LG
|
The adoption of Foundation Models in resource-constrained environments
remains challenging due to their large size and inference costs. A promising
way to overcome these limitations is post-training compression, which aims to
balance reduced model size against performance degradation. This work presents
Any Compression via Iterative Pruning (ACIP), a novel algorithmic approach to
determine a compression-performance trade-off from a single stochastic gradient
descent run. To ensure parameter efficiency, we use an SVD-reparametrization of
linear layers and iteratively prune their singular values with a
sparsity-inducing penalty. The resulting pruning order gives rise to a global
parameter ranking that allows us to materialize models of any target size.
Importantly, the compressed models exhibit strong predictive downstream
performance without the need for costly fine-tuning. We evaluate ACIP on a
large selection of open-weight LLMs and tasks, and demonstrate state-of-the-art
results compared to existing factorisation-based compression methods. We also
show that ACIP seamlessly complements common quantization-based compression
techniques.
|
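The SVD reparametrization at the heart of the approach can be sketched as follows. Ranking singular values by magnitude is a simplified stand-in for ACIP's learned pruning order, which the paper obtains with a sparsity-inducing penalty during a single SGD run.

```python
import numpy as np

def compress_layer(w, keep_ratio):
    # SVD-reparametrize a linear layer and materialize a model of the target
    # size by keeping only the top singular values. Magnitude ranking is a
    # stand-in for ACIP's learned, penalty-driven pruning order.
    u, s, vt = np.linalg.svd(w, full_matrices=False)
    k = max(1, int(round(keep_ratio * s.size)))
    return (u[:, :k] * s[:k]) @ vt[:k]           # rank-k factorized layer
```

Because the ranking is global over all singular values, any target size can be materialized from the same run, which is the "any compression" property the abstract emphasizes.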
2502.01718
|
ACECODER: Acing Coder RL via Automated Test-Case Synthesis
|
cs.SE cs.AI cs.CL
|
Most progress in recent coder models has been driven by supervised
fine-tuning (SFT), while the potential of reinforcement learning (RL) remains
largely unexplored, primarily due to the lack of reliable reward data/model in
the code domain. In this paper, we address this challenge by leveraging
automated large-scale test-case synthesis to enhance code model training.
Specifically, we design a pipeline that generates extensive (question,
test-cases) pairs from existing code data. Using these test cases, we construct
preference pairs based on pass rates over sampled programs to train reward
models with a Bradley-Terry loss. The resulting reward model yields an average 10-point improvement for
Llama-3.1-8B-Ins and 5-point improvement for Qwen2.5-Coder-7B-Ins through
best-of-32 sampling, putting the 7B model on par with the 236B DeepSeek-V2.5.
Furthermore, we conduct reinforcement learning with both reward models and
test-case pass rewards, leading to consistent improvements across HumanEval,
MBPP, BigCodeBench, and LiveCodeBench (V4). Notably, we follow the R1-style
training to start from Qwen2.5-Coder-base directly and show that our RL
training can improve the model on HumanEval-plus by over 25\% and MBPP-plus by 6\%
for merely 80 optimization steps. We believe our results highlight the huge
potential of reinforcement learning in coder models.
|
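The reward-data construction named in the abstract, preference pairs from pass rates scored with a Bradley-Terry loss, can be sketched in a few lines. The actual pipeline trains an LLM-based reward model; here plain score vectors stand in for model outputs, and the pair-construction rule is an assumed simplification.

```python
import numpy as np

def preference_pairs(pass_rates, margin=0.0):
    # Program i is preferred to program j whenever its test-case pass rate
    # exceeds j's by more than `margin`.
    n = len(pass_rates)
    return [(i, j) for i in range(n) for j in range(n)
            if pass_rates[i] > pass_rates[j] + margin]

def bradley_terry_nll(scores, pairs):
    # Bradley-Terry negative log-likelihood of the observed preferences under
    # reward scores; training minimizes this over reward-model parameters.
    s = np.asarray(scores, dtype=float)
    d = np.array([s[i] - s[j] for i, j in pairs])
    return float(np.mean(np.log1p(np.exp(-d))))
```

Scores that respect the pass-rate ordering achieve lower loss than scores that invert it, which is what drives the reward model toward the pass-rate signal.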
2502.01719
|
MJ-VIDEO: Fine-Grained Benchmarking and Rewarding Video Preferences in
Video Generation
|
cs.CV
|
Recent advancements in video generation have significantly improved the
ability to synthesize videos from text instructions. However, existing models
still struggle with key challenges such as instruction misalignment, content
hallucination, safety concerns, and bias. Addressing these limitations, we
introduce MJ-BENCH-VIDEO, a large-scale video preference benchmark designed to
evaluate video generation across five critical aspects: Alignment, Safety,
Fineness, Coherence & Consistency, and Bias & Fairness. This benchmark
incorporates 28 fine-grained criteria to provide a comprehensive evaluation of
video preference. Building upon this dataset, we propose MJ-VIDEO, a
Mixture-of-Experts (MoE)-based video reward model designed to deliver
fine-grained rewards. MJ-VIDEO can dynamically select relevant experts to
accurately judge the preference based on the input text-video pair. This
architecture enables more precise and adaptable preference judgments. Through
extensive benchmarking on MJ-BENCH-VIDEO, we analyze the limitations of
existing video reward models and demonstrate the superior performance of
MJ-VIDEO in video preference assessment, achieving 17.58% and 15.87%
improvements in overall and fine-grained preference judgments, respectively.
Additionally, introducing MJ-VIDEO for preference tuning in video generation
enhances the alignment performance. All our code, data, and models are
available at https://aiming-lab.github.io/MJ-VIDEO.github.io/.
|
2502.01720
|
Generating Multi-Image Synthetic Data for Text-to-Image Customization
|
cs.CV cs.GR cs.LG
|
Customization of text-to-image models enables users to insert custom concepts
and generate the concepts in unseen settings. Existing methods either rely on
costly test-time optimization or train encoders on single-image training
datasets without multi-image supervision, leading to worse image quality. We
propose a simple approach that addresses both limitations. We first leverage
existing text-to-image models and 3D datasets to create a high-quality
Synthetic Customization Dataset (SynCD) consisting of multiple images of the
same object in different lighting, backgrounds, and poses. We then propose a
new encoder architecture based on shared attention mechanisms that better
incorporate fine-grained visual details from input images. Finally, we propose
a new technique that mitigates overexposure issues during inference
by normalizing the text and image guidance vectors. Through extensive
experiments, we show that our model, trained on the synthetic dataset with the
proposed encoder and inference algorithm, outperforms existing tuning-free
methods on standard customization benchmarks.
|
2502.01739
|
Grokking vs. Learning: Same Features, Different Encodings
|
cs.LG cond-mat.dis-nn cs.AI
|
Grokking typically achieves similar loss to ordinary, "steady", learning. We
ask whether these different learning paths - grokking versus ordinary training
- lead to fundamental differences in the learned models. To do so we compare
the features, compressibility, and learning dynamics of models trained via each
path in two tasks. We find that grokked and steadily trained models learn the
same features, but there can be large differences in the efficiency with which
these features are encoded. In particular, we find a novel "compressive regime"
of steady training in which there emerges a linear trade-off between model loss
and compressibility, and which is absent in grokking. In this regime, we can
achieve compression factors of 25x relative to the base model, and 5x the
compression achieved in grokking. We then track how model features and
compressibility develop through training. We show that model development in
grokking is task-dependent, and that peak compressibility is achieved
immediately after the grokking plateau. Finally, novel information-geometric
measures are introduced which demonstrate that models undergoing grokking
follow a straight path in information space.
|
2502.01754
|
Evaluation of Large Language Models via Coupled Token Generation
|
cs.CL cs.AI cs.LG
|
State-of-the-art large language models rely on randomization to respond to a
prompt. As an immediate consequence, a model may respond differently to the
same prompt if asked multiple times. In this work, we argue that the evaluation
and ranking of large language models should control for the randomization
underpinning their functioning. Our starting point is the development of a
causal model for coupled autoregressive generation, which allows different
large language models to sample responses with the same source of randomness.
Building upon our causal model, we first show that, on evaluations based on
benchmark datasets, coupled autoregressive generation leads to the same
conclusions as vanilla autoregressive generation but using provably fewer
samples. However, we further show that, on evaluations based on (human)
pairwise comparisons, coupled and vanilla autoregressive generation can
surprisingly lead to different rankings when comparing more than two models,
even with an infinite amount of samples. This suggests that the apparent
advantage of a model over others in existing evaluation protocols may not be
genuine but rather confounded by the randomness inherent to the generation
process. To illustrate and complement our theoretical results, we conduct
experiments with several large language models from the Llama family. We find
that, across multiple knowledge areas from the popular MMLU benchmark dataset,
coupled autoregressive generation requires up to 40% fewer samples to reach the
same conclusions as vanilla autoregressive generation. Further, using data from
the LMSYS Chatbot Arena platform, we find that the win-rates derived from
pairwise comparisons of responses to prompts, as judged by a strong large
language model, differ under coupled and vanilla autoregressive generation.
|
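One concrete way to couple the randomness of several models, consistent with the abstract's description, is the Gumbel-max trick: sampling becomes a deterministic function of shared noise. This is an assumed illustration of the coupling idea for a single generation step, not the paper's full causal model.

```python
import numpy as np

def coupled_next_token(logits_per_model, shared_seed):
    # Gumbel-max trick: argmax(logits + Gumbel noise) samples from the
    # softmax distribution. Sharing the same noise vector across models
    # couples their randomness at this step.
    rng = np.random.default_rng(shared_seed)
    g = rng.gumbel(size=len(logits_per_model[0]))
    return [int(np.argmax(np.asarray(l) + g)) for l in logits_per_model]
```

Under this coupling, two identical models always emit the same token, so any disagreement between coupled models reflects genuine differences in their distributions rather than sampling noise.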
2502.01755
|
Robust Federated Finetuning of LLMs via Alternating Optimization of LoRA
|
cs.LG cs.AI
|
Parameter-Efficient Fine-Tuning (PEFT) methods like Low-Rank Adaptation
(LoRA) optimize federated training by reducing computational and communication
costs. We propose RoLoRA, a federated framework using alternating optimization
to fine-tune LoRA adapters. Our approach emphasizes the importance of learning
both the up- and down-projection matrices to enhance expressiveness and
robustness. Through theoretical analysis on a simplified linear model, we
demonstrate the advantages of RoLoRA over prior approaches that either
generate imperfect model updates or limit the expressiveness of the model. We
further provide extensive experimental evaluations, on a toy neural network
trained on MNIST as well as large language models including RoBERTa-Large and
Llama-2-7B, across diverse tasks to demonstrate the advantages of RoLoRA over
other methods.
|
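The alternating-optimization idea can be shown on a toy problem: fit LoRA factors B (up-projection) and A (down-projection) to a target weight update by alternating closed-form least-squares solves. RoLoRA's federated scheme alternates across communication rounds and clients; this sketch only illustrates why learning both matrices matters.

```python
import numpy as np

def alternating_lora_fit(delta_w, r, n_iter=20, seed=0):
    # Alternating optimization so that B @ A ~ delta_w: fix the
    # down-projection A and solve for the up-projection B in closed form,
    # then fix B and solve for A. Toy illustration, not RoLoRA itself.
    rng = np.random.default_rng(seed)
    A = rng.normal(size=(r, delta_w.shape[1]))
    for _ in range(n_iter):
        B = delta_w @ np.linalg.pinv(A)   # update up-projection
        A = np.linalg.pinv(B) @ delta_w   # update down-projection
    return B, A
```

If only one factor were trained (as in some prior federated LoRA variants), the fit would be restricted to the random subspace fixed by the other factor; alternating both recovers a low-rank target exactly.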
2502.01763
|
On The Concurrence of Layer-wise Preconditioning Methods and Provable
Feature Learning
|
cs.LG math.OC stat.ML
|
Layer-wise preconditioning methods are a family of memory-efficient
optimization algorithms that introduce preconditioners per axis of each layer's
weight tensors. These methods have seen a recent resurgence, demonstrating
impressive performance relative to entry-wise ("diagonal") preconditioning
methods such as Adam(W) on a wide range of neural network optimization tasks.
Complementary to their practical performance, we demonstrate that layer-wise
preconditioning methods are provably necessary from a statistical perspective.
To showcase this, we consider two prototypical models, linear representation
learning and single-index learning, which are widely used to study how typical
algorithms efficiently learn useful features to enable generalization. In these
problems, we show SGD is a suboptimal feature learner when extending beyond
ideal isotropic inputs $\mathbf{x} \sim \mathsf{N}(\mathbf{0}, \mathbf{I})$ and
well-conditioned settings typically assumed in prior work. We demonstrate
theoretically and numerically that this suboptimality is fundamental, and that
layer-wise preconditioning emerges naturally as the solution. We further show
that standard tools like Adam preconditioning and batch-norm only mildly
mitigate these issues, supporting the unique benefits of layer-wise
preconditioning.
|
2502.01770
|
Hamming Attention Distillation: Binarizing Keys and Queries for
Efficient Long-Context Transformers
|
cs.LG cs.AI eess.IV
|
Pre-trained transformer models with extended context windows are notoriously
expensive to run at scale, often limiting real-world deployment due to their
high computational and memory requirements. In this paper, we introduce Hamming
Attention Distillation (HAD), a novel framework that binarizes keys and queries
in the attention mechanism to achieve significant efficiency gains. By
converting keys and queries into {-1, +1} vectors and replacing dot-product
operations with efficient Hamming distance computations, our method drastically
reduces computational overhead. Additionally, we incorporate attention matrix
sparsification to prune low-impact activations, which further reduces the cost
of processing long-context sequences. Despite these aggressive compression
strategies, our distilled approach preserves a high degree of representational
power, leading to substantially improved accuracy compared to prior transformer
binarization methods. We evaluate HAD on a range of tasks and models, including
the GLUE benchmark, ImageNet, and QuALITY, demonstrating state-of-the-art
performance among binarized Transformers while drastically reducing the
computational costs of long-context inference. We implement HAD in custom
hardware simulations, demonstrating superior performance characteristics
compared to a custom hardware implementation of standard attention. HAD
achieves just $\mathbf{1.78}\%$ performance losses on GLUE compared to $9.08\%$
in state-of-the-art binarization work, and $\mathbf{2.5}\%$ performance losses
on ImageNet compared to $12.14\%$, all while targeting custom hardware with a
$\mathbf{79}\%$ area reduction and $\mathbf{87}\%$ power reduction compared to
its standard attention counterpart.
|
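The identity that makes this efficient is that for vectors in {-1, +1}^d the dot product equals d minus twice the Hamming distance, so attention scores reduce to XOR/popcount operations in hardware. A minimal sketch of the scoring step only (no distillation, sparsification, or value aggregation):

```python
import numpy as np

def hamming_attention(q, k):
    # Binarize queries/keys to {-1, +1}; for such vectors,
    # q . k = d - 2 * Hamming(q, k), so the score matrix below could be
    # computed with XOR/popcount instead of floating-point dot products.
    qb, kb = np.sign(q), np.sign(k)
    d = q.shape[-1]
    scores = (qb @ kb.T) / np.sqrt(d)
    e = np.exp(scores - scores.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)     # softmax over keys
```

The distillation step in the paper trains the binarized model to match a full-precision teacher; the sketch above only shows why the binarized score computation is cheap.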
2502.01772
|
On Bob Dylan: A Computational Perspective
|
cs.CL cs.AI cs.IR cs.SI
|
Cass Sunstein's essay 'On Bob Dylan' describes Dylan's 'dishabituating' style
-- a constant refusal to conform to expectation and a penchant for reinventing
his musical and lyrical identity. In this paper, I extend Sunstein's
observations through a large-scale computational analysis of Dylan's lyrics
from 1962 to 2012. Using o3-mini-high (a large language model), I extract
concept-to-concept relationships from the lyrics and construct directed
knowledge graphs that capture Dylan's thematic structure. I then quantify
shifts in sentiment, metaphorical expression, thematic diversity, and network
complexity over time. The results indicate that Dylan's lyrics increasingly
rely on metaphor, display an evolving sentiment profile, and exhibit heightened
dishabituation -- measured here as a growing variance in the network centrality
of key concepts. I also find that references to movement, protest, and mythic
imagery fluctuate in ways that align with well-known phases of Dylan's career,
reflecting the dynamic and unpredictable quality of his art. These findings not
only deepen our empirical understanding of Sunstein's thesis but also introduce
a novel computational method for analyzing an artist's evolution, offering
broader applicability to the study of cultural and creative change.
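The dishabituation measure described above, variance in the network centrality of key concepts, can be illustrated on a toy concept graph (the graph, the out-degree centrality choice, and the numbers are hypothetical stand-ins for the paper's knowledge graphs):

```python
import statistics

# Toy directed concept graph for one album: concept -> concepts it links to.
album_graph = {
    "road": {"freedom", "rain"},
    "freedom": {"road"},
    "rain": set(),
}

def out_degree_centrality(graph):
    """Out-degree centrality: outgoing links normalized by n - 1."""
    n = len(graph)
    return {v: len(nbrs) / (n - 1) for v, nbrs in graph.items()}

# Dishabituation proxy: population variance of concept centralities.
cent = out_degree_centrality(album_graph)
dishabituation = statistics.pvariance(cent.values())
```

Tracking this variance album by album would then yield the kind of time series the abstract describes.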
|
2502.01773
|
Coarse-to-Fine 3D Keyframe Transporter
|
cs.RO cs.CV
|
Recent advances in Keyframe Imitation Learning (IL) have enabled
learning-based agents to solve a diverse range of manipulation tasks. However,
most approaches ignore the rich symmetries in the problem setting and, as a
consequence, are sample-inefficient. This work identifies and utilizes the
bi-equivariant symmetry within Keyframe IL to design a policy that generalizes
to transformations of both the workspace and the objects grasped by the
gripper. We make two main contributions: First, we analyze the bi-equivariance
properties of the keyframe action scheme and propose a Keyframe Transporter
derived from the Transporter Networks, which evaluates actions using
cross-correlation between the features of the grasped object and the features
of the scene. Second, we propose a computationally efficient coarse-to-fine
SE(3) action evaluation scheme for reasoning about the intertwined translation
and rotation actions. The resulting method outperforms strong Keyframe IL baselines
by an average of >10% on a wide range of simulation tasks, and by an average of
55% in 4 physical experiments.
|
2502.01774
|
Grokking Explained: A Statistical Phenomenon
|
cs.LG cs.AI
|
Grokking, or delayed generalization, is an intriguing learning phenomenon
where test set loss decreases sharply only after a model's training set loss
has converged. This challenges conventional understanding of the training
dynamics in deep learning networks. In this paper, we formalize and investigate
grokking, highlighting that a key factor in its emergence is a distribution
shift between training and test data. We introduce two synthetic datasets
specifically designed to analyze grokking. One dataset examines the impact of
limited sampling, and the other investigates transfer learning's role in
grokking. By inducing distribution shifts through controlled imbalanced
sampling of sub-categories, we systematically reproduce the phenomenon,
demonstrating that while small-sampling is strongly associated with grokking,
it is not its cause. Instead, small-sampling serves as a convenient mechanism
for achieving the necessary distribution shift. We also show that when classes
form an equivariant map, grokking can be explained by the model's ability to
learn from similar classes or sub-categories. Unlike earlier work suggesting
that grokking primarily arises from high regularization and sparse data, we
demonstrate that it can also occur with dense data and minimal hyper-parameter
tuning. Our findings deepen the understanding of grokking and pave the way for
developing better stopping criteria in future training processes.
|
2502.01776
|
Sparse VideoGen: Accelerating Video Diffusion Transformers with
Spatial-Temporal Sparsity
|
cs.CV cs.LG
|
Diffusion Transformers (DiTs) dominate video generation but their high
computational cost severely limits real-world applicability, usually requiring
tens of minutes to generate a few seconds of video even on high-performance
GPUs. This inefficiency primarily arises from the quadratic computational
complexity of 3D Full Attention with respect to the context length. In this
paper, we propose a training-free framework termed Sparse VideoGen (SVG) that
leverages the inherent sparsity in 3D Full Attention to boost inference
efficiency. We reveal that the attention heads can be dynamically classified
into two groups depending on distinct sparse patterns: (1) Spatial Head, where
only spatially-related tokens within each frame dominate the attention output,
and (2) Temporal Head, where only temporally-related tokens across different
frames dominate. Based on this insight, SVG proposes an online profiling
strategy to capture the dynamic sparse patterns and predicts the type of
attention head. Combined with a novel hardware-efficient tensor layout
transformation and customized kernel implementations, SVG achieves up to 2.28x
and 2.33x end-to-end speedup on CogVideoX-v1.5 and HunyuanVideo, respectively,
while preserving generation quality.
|
2502.01777
|
CTC-DRO: Robust Optimization for Reducing Language Disparities in Speech
Recognition
|
cs.LG cs.CL eess.AS
|
Modern deep learning models often achieve high overall performance, but
consistently fail on specific subgroups. Group distributionally robust
optimization (group DRO) addresses this problem by minimizing the worst-group
loss, but it fails when group losses misrepresent performance differences
between groups. This is common in domains like speech, where the widely used
connectionist temporal classification (CTC) loss scales with input length and
varies with linguistic and acoustic properties, leading to spurious differences
between group losses. We present CTC-DRO, which addresses the shortcomings of
the group DRO objective by smoothing the group weight update to prevent
overemphasis on consistently high-loss groups, while using input length-matched
batching to mitigate CTC's scaling issues. We evaluate CTC-DRO on the task of
multilingual automatic speech recognition (ASR) across five language sets from
the ML-SUPERB 2.0 benchmark. CTC-DRO consistently outperforms group DRO and
CTC-based baseline models, reducing the worst-language error by up to 65.9% and
the average error by up to 47.7%. CTC-DRO can be applied to ASR with minimal
computational costs, and offers the potential for reducing group disparities in
other domains with similar challenges.
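As a rough illustration of smoothing the group weight update, here is a generic exponentiated-gradient group-DRO step mixed with the uniform distribution; the abstract does not give CTC-DRO's exact update rule, so the `smooth` mixing scheme below is an assumption:

```python
import numpy as np

def group_dro_update(weights, group_losses, eta=0.1, smooth=0.2):
    """Exponentiated-gradient group-DRO weight update with a simple
    uniform-mixing smoother (illustrative; the paper's CTC-DRO update
    may differ in form)."""
    w = weights * np.exp(eta * np.asarray(group_losses))
    w = w / w.sum()  # renormalize to a distribution over groups
    k = len(w)
    # Mixing with uniform keeps any single high-loss group from
    # monopolizing the weight mass.
    return (1.0 - smooth) * w + smooth * np.ones(k) / k

w = group_dro_update(np.ones(3) / 3, [2.0, 1.0, 0.5])
```

The highest-loss group still receives the largest weight, but every group retains at least `smooth / k` of the mass.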
|
2502.01778
|
GNN-DT: Graph Neural Network Enhanced Decision Transformer for Efficient
Optimization in Dynamic Environments
|
cs.LG cs.SY eess.SY
|
Reinforcement Learning (RL) methods used for solving real-world optimization
problems often involve dynamic state-action spaces, larger scale, and sparse
rewards, leading to significant challenges in convergence, scalability, and
efficient exploration of the solution space. This study introduces GNN-DT, a
novel Decision Transformer (DT) architecture that integrates Graph Neural
Network (GNN) embedders with a novel residual connection between input and
output tokens crucial for handling dynamic environments. By learning from
previously collected trajectories, GNN-DT reduces dependence on accurate
simulators and tackles the sparse rewards limitations of online RL algorithms.
We evaluate GNN-DT on the complex electric vehicle (EV) charging optimization
problem and prove that its performance is superior and requires significantly
fewer training trajectories, thus improving sample efficiency compared to
existing DT baselines. Furthermore, GNN-DT exhibits robust generalization to
unseen environments and larger action spaces, addressing a critical gap in
prior DT-based approaches.
|
2502.01780
|
Graph Canonical Correlation Analysis
|
stat.ML cs.LG
|
Canonical correlation analysis (CCA) is a widely used technique for
estimating associations between two sets of multi-dimensional variables. Recent
advancements in CCA methods have expanded their application to decipher the
interactions of multiomics datasets, imaging-omics datasets, and more. However,
conventional CCA methods are limited in their ability to incorporate structured
patterns in the cross-correlation matrix, potentially leading to suboptimal
estimations. To address this limitation, we propose the graph Canonical
Correlation Analysis (gCCA) approach, which calculates canonical correlations
based on the graph structure of the cross-correlation matrix between the two
sets of variables. We develop computationally efficient algorithms for gCCA,
and provide theoretical results for finite sample analysis of best subset
selection and canonical correlation estimation by introducing concentration
inequalities and a stopping-time rule based on martingale theory. Extensive
simulations demonstrate that gCCA outperforms competing CCA methods.
Additionally, we apply gCCA to a multiomics dataset of DNA methylation and
RNA-seq transcriptomics, identifying gene expression pathways that are
positively and negatively regulated by DNA methylation.
|
2502.01784
|
VILP: Imitation Learning with Latent Video Planning
|
cs.RO cs.CV
|
In the era of generative AI, integrating video generation models into
robotics opens new possibilities for the general-purpose robot agent. This
paper introduces imitation learning with latent video planning (VILP). We
propose a latent video diffusion model to generate predictive robot videos that
maintain a good degree of temporal consistency. Our method is able to generate
highly time-aligned videos from multiple views, which is crucial for robot
policy learning. Our video generation model is highly time-efficient. For
example, it can generate videos from two distinct perspectives, each consisting
of six frames with a resolution of 96x160 pixels, at a rate of 5 Hz. In the
experiments, we demonstrate that VILP outperforms the existing video generation
robot policy across several metrics: training costs, inference speed, temporal
consistency of generated videos, and the performance of the policy. We also
compared our method with other imitation learning methods. Our findings
indicate that VILP can rely less on extensive high-quality task-specific robot
action data while still maintaining robust performance. In addition, VILP
possesses robust capabilities in representing multi-modal action distributions.
Our paper provides a practical example of how to effectively integrate video
generation models into robot policies, potentially offering insights for
related fields and directions. For more details, please refer to our
open-source repository https://github.com/ZhengtongXu/VILP.
|
2502.01785
|
AquaticCLIP: A Vision-Language Foundation Model for Underwater Scene
Analysis
|
cs.CV cs.AI
|
The preservation of aquatic biodiversity is critical in mitigating the
effects of climate change. Aquatic scene understanding plays a pivotal role in
aiding marine scientists in their decision-making processes. In this paper, we
introduce AquaticCLIP, a novel contrastive language-image pre-training model
tailored for aquatic scene understanding. AquaticCLIP presents a new
unsupervised learning framework that aligns images and texts in aquatic
environments, enabling tasks such as segmentation, classification, detection,
and object counting. By leveraging our large-scale underwater image-text paired
dataset without the need for ground-truth annotations, our model enriches
existing vision-language models in the aquatic domain. For this purpose, we
construct a dataset of 2 million underwater image-text pairs using heterogeneous
resources, including YouTube, Netflix, NatGeo, etc. To fine-tune AquaticCLIP,
we propose a prompt-guided vision encoder that progressively aggregates patch
features via learnable prompts, while a vision-guided mechanism enhances the
language encoder by incorporating visual context. The model is optimized
through a contrastive pretraining loss to align visual and textual modalities.
AquaticCLIP achieves notable performance improvements in zero-shot settings
across multiple underwater computer vision tasks, outperforming existing
methods in both robustness and interpretability. Our model sets a new benchmark
for vision-language applications in underwater environments. The code and
dataset for AquaticCLIP are publicly available on GitHub at xxx.
|
2502.01787
|
The Effects of Enterprise Social Media on Communication Networks
|
cs.CY cs.SI
|
Enterprise social media platforms (ESMPs) are web-based platforms with
standard social media functionality, e.g., communicating with others, posting
links and files, liking content, etc., yet all users are part of the same
company. The first contribution of this work is the use of a
difference-in-differences analysis of $99$ companies to measure the causal
impact of ESMPs on companies' communication networks across the full spectrum
of communication technologies used within companies: email, instant messaging,
and ESMPs. Adoption caused companies' communication networks to grow denser
and better connected by adding new, novel ties that often, but not exclusively,
involve communication from one to many employees. Importantly, some new ties
also bridge otherwise separate parts of the corporate communication network.
The second contribution of this work, utilizing data on Microsoft's own
communication network, is understanding how these communication technologies
connect people across the corporate hierarchy. Compared to email and instant
messaging, ESMPs excel at connecting nodes distant in the corporate hierarchy
both vertically (between leaders and employees) and horizontally (between
employees in similar roles but different sectors). Also, influence in ESMPs is
more 'democratic' than elsewhere, with high-influence nodes well-distributed
across the corporate hierarchy. Overall, our results suggest that ESMPs boost
information flow within companies and increase employees' attention to what is
happening outside their immediate working group, above and beyond email and
instant messaging.
|
2502.01789
|
An Agentic AI Workflow for Detecting Cognitive Concerns in Real-world
Data
|
cs.AI cs.MA
|
Early identification of cognitive concerns is critical but often hindered by
subtle symptom presentation. This study developed and validated a fully
automated, multi-agent AI workflow using LLaMA 3 8B to identify cognitive
concerns in 3,338 clinical notes from Mass General Brigham. The agentic
workflow, leveraging task-specific agents that dynamically collaborate to
extract meaningful insights from clinical notes, was compared to an
expert-driven benchmark. Both workflows achieved high classification
performance, with F1-scores of 0.90 and 0.91, respectively. The agentic
workflow demonstrated improved specificity (1.00) and achieved prompt
refinement in fewer iterations. Although both workflows showed reduced
performance on validation data, the agentic workflow maintained perfect
specificity. These findings highlight the potential of fully automated
multi-agent AI workflows to achieve expert-level accuracy with greater
efficiency, offering a scalable and cost-effective solution for detecting
cognitive concerns in clinical settings.
|
2502.01792
|
Policy Design for Two-sided Platforms with Participation Dynamics
|
cs.GT cs.IR cs.LG cs.SY eess.SY
|
In two-sided platforms (e.g., video streaming or e-commerce), viewers and
providers engage in interactive dynamics, where an increased provider
population results in higher viewer utility and an increased viewer
population results in higher provider utility. Despite the importance of such
"population effects" on long-term platform health, recommendation policies do
not generally take the participation dynamics into account. This paper thus
studies the dynamics and policy design on two-sided platforms under the
population effects for the first time. Our control- and game-theoretic findings
warn against the use of a myopic-greedy policy and shed light on the importance
of provider-side considerations (i.e., effectively distributing exposure among
provider groups) to improve social welfare via population growth. We also
present a simple algorithm to optimize long-term objectives by considering the
population effects, and demonstrate its effectiveness in synthetic and
real-data experiments.
|
2502.01800
|
Flow-based Domain Randomization for Learning and Sequencing Robotic
Skills
|
cs.RO cs.AI cs.LG
|
Domain randomization in reinforcement learning is an established technique
for increasing the robustness of control policies trained in simulation. By
randomizing environment properties during training, the learned policy can
become robust to uncertainties along the randomized dimensions. While the
environment distribution is typically specified by hand, in this paper we
investigate automatically discovering a sampling distribution via
entropy-regularized reward maximization of a normalizing-flow-based neural
sampling distribution. We show that this architecture is more flexible and
provides greater robustness than existing approaches that learn simpler,
parameterized sampling distributions, as demonstrated in six simulated and one
real-world robotics domain. Lastly, we explore how these learned sampling
distributions, combined with a privileged value function, can be used for
out-of-distribution detection in an uncertainty-aware multi-step manipulation
planner.
|
2502.01803
|
Discovering Chunks in Neural Embeddings for Interpretability
|
cs.LG cs.AI
|
Understanding neural networks is challenging due to their high-dimensional,
interacting components. Inspired by human cognition, which processes complex
sensory data by chunking it into recurring entities, we propose leveraging this
principle to interpret artificial neural population activities. Biological and
artificial intelligence share the challenge of learning from structured,
naturalistic data, and we hypothesize that the cognitive mechanism of chunking
can provide insights into artificial systems. We first demonstrate this concept
in recurrent neural networks (RNNs) trained on artificial sequences with
imposed regularities, observing that their hidden states reflect these
patterns, which can be extracted as a dictionary of chunks that influence
network responses. Extending this to large language models (LLMs) like LLaMA,
we identify similar recurring embedding states corresponding to concepts in the
input, with perturbations to these states activating or inhibiting the
associated concepts. By exploring methods to extract dictionaries of
identifiable chunks across neural embeddings of varying complexity, our
findings introduce a new framework for interpreting neural networks, framing
their population activity as structured reflections of the data they process.
|
2502.01804
|
Soup-of-Experts: Pretraining Specialist Models via Parameters Averaging
|
cs.LG cs.CL
|
Machine learning models are routinely trained on a mixture of different data
domains. Different domain weights yield very different downstream performances.
We propose the Soup-of-Experts, a novel architecture that can instantiate a
model at test time for any domain weights with minimal computational cost and
without re-training the model. Our architecture consists of a bank of expert
parameters, which are linearly combined to instantiate one model. We learn the
linear combination coefficients as a function of the input domain weights. To
train this architecture, we sample random domain weights, instantiate the
corresponding model, and backprop through one batch of data sampled with these
domain weights. We demonstrate how our approach quickly obtains small
specialized models on several language modeling tasks. Soup-of-Experts are
particularly appealing when one needs to ship many different specialist models
quickly under a model size constraint.
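The instantiation step, linearly combining a bank of expert parameters with coefficients computed from domain weights, can be sketched as follows (the shapes and the single linear coefficient map are illustrative assumptions, not the paper's architecture details):

```python
import numpy as np

rng = np.random.default_rng(0)
n_experts, n_domains, n_params = 4, 3, 10

# Bank of expert parameter vectors (flattened for illustration).
experts = rng.standard_normal((n_experts, n_params))
# Stand-in for the learned map from domain weights to coefficients;
# here a single linear layer.
coef_map = rng.standard_normal((n_domains, n_experts))

def instantiate(domain_weights):
    """Instantiate one model's parameters for the given domain weights."""
    alphas = domain_weights @ coef_map  # combination coefficients
    return alphas @ experts             # linear "soup" of expert parameters

theta = instantiate(np.array([0.7, 0.2, 0.1]))
```

Training would backpropagate through this instantiation for randomly sampled domain weights, as the abstract describes.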
|
2502.01806
|
Toward Neurosymbolic Program Comprehension
|
cs.SE cs.AI
|
Recent advancements in Large Language Models (LLMs) have paved the way for
Large Code Models (LCMs), enabling automation in complex software engineering
tasks, such as code generation, software testing, and program comprehension,
among others. Tools like GitHub Copilot and ChatGPT have shown substantial
benefits in supporting developers across various practices. However, the
ambition to scale these models to trillion-parameter sizes, exemplified by
GPT-4, poses significant challenges that limit the usage of Artificial
Intelligence (AI)-based systems powered by large Deep Learning (DL) models.
These include rising computational demands for training and deployment and
issues related to trustworthiness, bias, and interpretability. Such factors can
make managing these models impractical for many organizations, while their
"black-box" nature undermines key aspects, including transparency and
accountability. In this paper, we question the prevailing assumption that
increasing model parameters is always the optimal path forward, provided there
is sufficient new data to learn additional patterns. In particular, we advocate
for a Neurosymbolic research direction that combines the strengths of existing
DL techniques (e.g., LLMs) with traditional symbolic methods--renowned for
their reliability, speed, and determinism. To this end, we outline the core
features and present preliminary results for our envisioned approach, aimed at
establishing the first Neurosymbolic Program Comprehension (NsPC) framework to
aid in identifying defective code components.
|
2502.01809
|
Self-supervised Subgraph Neural Network With Deep Reinforcement Walk
Exploration
|
cs.LG
|
Graph data, with its structurally variable nature, represents complex
real-world phenomena like chemical compounds, protein structures, and social
networks. Traditional Graph Neural Networks (GNNs) primarily utilize the
message-passing mechanism, but their expressive power is limited and their
prediction lacks explainability. To address these limitations, researchers have
focused on graph substructures. Subgraph neural networks (SGNNs) and GNN
explainers have emerged as potential solutions, but each has its limitations.
SGNNs compute graph representations from bags of subgraphs to enhance
expressive power, but they often rely on predefined, algorithm-based
sampling strategies, which can be inefficient. GNN explainers adopt
data-driven approaches to generate important subgraphs as explanations;
however, these explanations are difficult to translate into practical
improvements on GNNs. To overcome these issues, we propose a novel
self-supervised framework that integrates SGNNs with the generation approach of
GNN explainers, named the Reinforcement Walk Exploration SGNN (RWE-SGNN). Our
approach features a sampling model trained in an explainer fashion, optimizing
subgraphs to enhance model performance. To achieve a data-driven sampling
approach, unlike traditional subgraph generation approaches, we propose a novel
walk exploration process, which efficiently extracts important substructures,
simplifying the embedding process and avoiding isomorphism problems. Moreover,
we prove that our proposed walk exploration process has equivalent generation
capability to the traditional subgraph generation process. Experimental results
on various graph datasets validate the effectiveness of our proposed method,
demonstrating significant improvements in performance and precision.
|
2502.01810
|
Estimating Network Models using Neural Networks
|
cs.SI econ.EM stat.CO stat.ML
|
Exponential random graph models (ERGMs) are very flexible for modeling
network formation but pose difficult estimation challenges due to their
intractable normalizing constant. Existing methods, such as MCMC-MLE, rely on
sequential simulation at every optimization step. We propose a neural network
approach that trains on a single, large set of parameter-simulation pairs to
learn the mapping from parameters to average network statistics. Once trained,
this map can be inverted, yielding a fast and parallelizable estimation method.
The procedure also accommodates extra network statistics to mitigate model
misspecification. Some simple illustrative examples show that the method
performs well in practice.
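The pipeline, simulate one large batch of parameter/statistic pairs, fit the forward map from parameters to average statistics, then invert it, can be sketched in miniature (a polynomial fit stands in for the neural network, and a toy one-parameter simulator for the ERGM sampler):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in simulator: average network statistic as a noisy function of a
# scalar parameter (a real ERGM would be simulated, e.g., via MCMC).
def simulate_avg_stat(theta):
    return np.tanh(theta) + 0.01 * rng.standard_normal()

# 1) One large batch of parameter/statistic pairs, simulated up front.
thetas = rng.uniform(-2, 2, 500)
stats = np.array([simulate_avg_stat(t) for t in thetas])

# 2) Fit the parameter -> statistic map (polynomial fit as a cheap
#    surrogate for the neural network in the paper).
forward = np.poly1d(np.polyfit(thetas, stats, deg=5))

# 3) Invert the map: estimate theta from an observed statistic.
def estimate(observed_stat, grid=np.linspace(-2, 2, 2001)):
    return grid[np.argmin((forward(grid) - observed_stat) ** 2)]

theta_hat = estimate(np.tanh(1.0))
```

The key property exploited is that all simulation happens once, before estimation, so the inversion step is fast and parallelizable.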
|
2502.01812
|
SelfCheckAgent: Zero-Resource Hallucination Detection in Generative
Large Language Models
|
cs.CL cs.LG
|
Detecting hallucinations in Large Language Models (LLMs) remains a critical
challenge for their reliable deployment in real-world applications. To address
this, we introduce SelfCheckAgent, a novel framework integrating three
different agents: the Symbolic Agent, the Specialized Detection Agent, and the
Contextual Consistency Agent. These agents provide a robust multi-dimensional
approach to hallucination detection. Notable results include the Contextual
Consistency Agent leveraging Llama 3.1 with Chain-of-Thought (CoT) to achieve
outstanding performance on the WikiBio dataset, with NonFactual hallucination
detection scoring 93.64%, Factual 70.26%, and Ranking 78.48%. On
the AIME dataset, GPT-4o with CoT excels in NonFactual detection with 94.89%
but reveals trade-offs in Factual with 30.58% and Ranking with 30.68%,
underscoring the complexity of hallucination detection in the complex
mathematical domains. The framework also incorporates a triangulation strategy,
which reinforces the strengths of SelfCheckAgent, yielding significant
improvements in real-world hallucination identification. The comparative
analysis demonstrates SelfCheckAgent's applicability across diverse domains,
positioning it as a crucial advancement for trustworthy LLMs. These findings
highlight the potential of consistency-driven methodologies in detecting
hallucinations in LLMs.
|
2502.01814
|
PolyhedronNet: Representation Learning for Polyhedra with
Surface-attributed Graph
|
cs.CV cs.LG
|
Ubiquitous geometric objects can be precisely and efficiently represented as
polyhedra. The transformation of a polyhedron into a vector, known as polyhedra
representation learning, is crucial for manipulating these shapes with
mathematical and statistical tools for tasks like classification, clustering,
and generation. Recent years have witnessed significant strides in this domain,
yet most efforts focus on the vertex sequence of a polyhedron, neglecting the
complex surface modeling crucial in real-world polyhedral objects. This study
proposes \textbf{PolyhedronNet}, a general framework tailored for learning
representations of 3D polyhedral objects. We propose the concept of the
surface-attributed graph to seamlessly model the vertices, edges, faces, and
their geometric interrelationships within a polyhedron. To effectively learn
the representation of the entire surface-attributed graph, we first propose to
break it down into local rigid representations, learning each local region's
position relative to the remaining regions without geometric information loss.
Subsequently, we propose PolyhedronGNN to hierarchically
aggregate the local rigid representation via intra-face and inter-face
geometric message passing modules, to obtain a global representation that
minimizes information loss while maintaining rotation and translation
invariance. Our experimental evaluations on four distinct datasets,
encompassing both classification and retrieval tasks, substantiate
PolyhedronNet's efficacy in capturing comprehensive and informative
representations of 3D polyhedral objects. Code and data are available at
{https://github.com/dyu62/3D_polyhedron}.
|
2502.01816
|
Low Resource Video Super-resolution using Memory and Residual Deformable
Convolutions
|
cs.CV cs.LG
|
Transformer-based video super-resolution (VSR) models have set new benchmarks
in recent years, but their substantial computational demands make most of them
unsuitable for deployment on resource-constrained devices. Achieving a balance
between model complexity and output quality remains a formidable challenge in
VSR. Although lightweight models have been introduced to address this issue,
they often struggle to deliver state-of-the-art performance. We propose a novel
lightweight, parameter-efficient deep residual deformable convolution network
for VSR. Unlike prior methods, our model enhances feature utilization through
residual connections and employs deformable convolution for precise frame
alignment, addressing motion dynamics effectively. Furthermore, we introduce a
single memory tensor to capture information accrued from the past frames and
improve motion estimation across frames. This design enables an efficient
balance between computational cost and reconstruction quality. With just 2.3
million parameters, our model achieves state-of-the-art SSIM of 0.9175 on the
REDS4 dataset, surpassing existing lightweight and many heavy models in both
accuracy and resource efficiency. Architectural insights from our model pave
the way for real-time VSR on streaming data.
|
2502.01819
|
Score as Action: Fine-Tuning Diffusion Generative Models by
Continuous-time Reinforcement Learning
|
cs.LG cs.AI math.OC
|
Reinforcement learning from human feedback (RLHF), which aligns a diffusion
model with input prompt, has become a crucial step in building reliable
generative AI models. Most works in this area use a discrete-time formulation,
which is prone to induced errors and is often not applicable to models with
higher-order/black-box solvers. The objective of this study is to develop a
disciplined approach to fine-tune diffusion models using continuous-time RL,
formulated as a stochastic control problem with a reward function that aligns
the end result (terminal state) with input prompt. The key idea is to treat
score matching as controls or actions, and thereby making connections to policy
optimization and regularization in continuous-time RL. To carry out this idea,
we lay out a new policy optimization framework for continuous-time RL, and
illustrate its potential in enhancing the value networks design space via
leveraging the structural property of diffusion models. We validate the
advantages of our method by experiments in downstream tasks of fine-tuning
large-scale Text2Image models of Stable Diffusion v1.5.
|
2502.01820
|
Physics-Informed Surrogates for Temperature Prediction of Multi-Tracks
in Laser Powder Bed Fusion
|
cs.CE
|
Modeling plays a critical role in additive manufacturing (AM), enabling a
deeper understanding of underlying processes. Parametric solutions for such
models are of great importance, enabling the optimization of production
processes and considerable cost reductions. However, the complexity of the
problem and diversity of spatio-temporal scales involved in the process pose
significant challenges for traditional numerical methods. Surrogate models
offer a powerful alternative by accelerating simulations and facilitating
real-time monitoring and control. The present study presents an operator
learning approach that relies on the deep operator network (DeepONet) and
physics-informed neural networks (PINN) to predict the three-dimensional
temperature distribution during melting and consolidation in laser powder bed
fusion (LPBF). Parametric solutions for both single-track and multi-track
scenarios with respect to tool path are obtained. To address the challenges in
obtaining parametric solutions for multi-track scenarios using the DeepONet
architecture, a sequential PINN approach is proposed to efficiently manage the
increased training complexity inherent in those scenarios. The accuracy and
consistency of the model are verified against finite-difference computations.
The developed surrogate allows us to efficiently analyze the effect of scanning
paths and laser parameters on the thermal history.
|
2502.01821
|
Agentic Bug Reproduction for Effective Automated Program Repair at
Google
|
cs.SE cs.AI
|
Bug reports often lack sufficient detail for developers to reproduce and fix
the underlying defects. Bug Reproduction Tests (BRTs), tests that fail when the
bug is present and pass when it has been resolved, are crucial for debugging,
but they are rarely included in bug reports, both in open-source and in
industrial settings. Thus, automatically generating BRTs from bug reports has
the potential to accelerate the debugging process and lower time to repair.
This paper investigates automated BRT generation within an industry setting,
specifically at Google, focusing on the challenges of a large-scale,
proprietary codebase and considering real-world industry bugs extracted from
Google's internal issue tracker. We adapt and evaluate a state-of-the-art BRT
generation technique, LIBRO, and present our agent-based approach, BRT Agent,
which makes use of a fine-tuned Large Language Model (LLM) for code editing.
Our BRT Agent significantly outperforms LIBRO, achieving a 28% plausible BRT
generation rate, compared to 10% by LIBRO, on 80 human-reported bugs from
Google's internal issue tracker. We further investigate the practical value of
generated BRTs by integrating them with an Automated Program Repair (APR)
system at Google. Our results show that providing BRTs to the APR system
results in 30% more bugs with plausible fixes. Additionally, we introduce
Ensemble Pass Rate (EPR), a metric which leverages the generated BRTs to select
the most promising fixes from all fixes generated by the APR system. Our evaluation
on EPR for Top-K and threshold-based fix selections demonstrates promising
results and trade-offs. For example, EPR correctly selects a plausible fix from
a pool of 20 candidates in 70% of cases, based on its top-1 ranking.
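The EPR-style selection described above can be sketched as follows; this is a hypothetical illustration, not the paper's implementation, in which each candidate fix is scored by the fraction of generated BRTs it makes pass and the top-ranked fix is selected (all names are assumptions):

```python
# Hypothetical sketch of an Ensemble Pass Rate (EPR)-style fix ranking:
# each candidate fix is scored by the fraction of generated bug
# reproduction tests (BRTs) it makes pass, and the top-1 fix is selected.

def ensemble_pass_rate(fix_results):
    """fix_results: list of booleans, one per BRT (True = BRT passes)."""
    if not fix_results:
        return 0.0
    return sum(fix_results) / len(fix_results)

def select_top_fix(candidates):
    """candidates: dict mapping fix id -> list of per-BRT pass booleans."""
    scored = {fix: ensemble_pass_rate(results)
              for fix, results in candidates.items()}
    return max(scored, key=scored.get), scored
```

A threshold-based variant would instead keep every fix whose score exceeds a cutoff rather than taking the top-1.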
|
2502.01825
|
Assessing Data Augmentation-Induced Bias in Training and Testing of
Machine Learning Models
|
cs.SE cs.AI
|
Data augmentation has become a standard practice in software engineering to
address limited or imbalanced data sets, particularly in specialized domains
like test classification and bug detection where data can be scarce. Although
techniques such as SMOTE and mutation-based augmentation are widely used in
software testing and debugging applications, a rigorous understanding of how
augmented training data impacts model bias is lacking. It is especially
critical to consider bias in scenarios where augmented data sets are used not
just in training but also in testing models. Through a comprehensive case study
of flaky test classification, we demonstrate how to test for bias and
understand the impact that the inclusion of augmented samples in testing sets
can have on model evaluation.
|
2502.01827
|
Relatively-Secure LLM-Based Steganography via Constrained Markov
Decision Processes
|
cs.IT math.IT
|
Linguistic steganography aims to conceal information within natural language
text without being detected. An effective steganography approach should encode
the secret message into a minimal number of language tokens while preserving
the natural appearance and fluidity of the stego-texts. We present a new
framework to enhance the embedding efficiency of stego-texts generated by
modifying the output of a large language model (LLM). The novelty of our
approach is in abstracting the sequential steganographic embedding process as a
Constrained Markov Decision Process (CMDP), which takes into consideration the
long-term dependencies instead of merely the immediate effects. We constrain
the solution space such that the discounted accumulative total variation
divergence between the selected probability distribution and the original
distribution given by the LLM is below a threshold. To find the optimal policy,
we first show that the functional optimization problem can be simplified to a
convex optimization problem with a finite number of variables. A closed-form
solution for the optimal policy is then presented to this equivalent problem.
It is remarkable that the optimal policy is deterministic and resembles
water-filling in some cases. The solution suggests that adjusting the
probability distribution for the state with the least random transition
probabilities should usually be prioritized, but the choice should take into
account the transition probabilities at all states rather than only the
current state.
|
2502.01828
|
From Foresight to Forethought: VLM-In-the-Loop Policy Steering via
Latent Alignment
|
cs.RO cs.LG
|
While generative robot policies have demonstrated significant potential in
learning complex, multimodal behaviors from demonstrations, they still exhibit
diverse failures at deployment-time. Policy steering offers an elegant solution
to reducing the chance of failure by using an external verifier to select from
low-level actions proposed by an imperfect generative policy. Here, one might
hope to use a Vision Language Model (VLM) as a verifier, leveraging its
open-world reasoning capabilities. However, off-the-shelf VLMs struggle to
understand the consequences of low-level robot actions as they are represented
fundamentally differently than the text and images the VLM was trained on. In
response, we propose FOREWARN, a novel framework to unlock the potential of
VLMs as open-vocabulary verifiers for runtime policy steering. Our key idea is
to decouple the VLM's burden of predicting action outcomes (foresight) from
evaluation (forethought). For foresight, we leverage a latent world model to
imagine future latent states given diverse low-level action plans. For
forethought, we align the VLM with these predicted latent states to reason
about the consequences of actions in its native representation--natural
language--and effectively filter proposed plans. We validate our framework
across diverse robotic manipulation tasks, demonstrating its ability to bridge
representational gaps and provide robust, generalizable policy steering. Videos
can be found on the project website: https://yilin-wu98.github.io/forewarn/.
|
2502.01830
|
Meta-neural Topology Optimization: Knowledge Infusion with Meta-learning
|
cs.CE physics.comp-ph
|
Engineers learn from every design they create, building intuition that helps
them quickly identify promising solutions for new problems. Topology
optimization (TO) - a well-established computational method for designing
structures with optimized performance - lacks this ability to learn from
experience. Existing approaches treat design tasks in isolation, starting from
a "blank canvas" design for each new problem, often requiring many
computationally expensive steps to converge. We propose a meta-learning
strategy, termed meta-neural TO, that finds effective initial designs through a
systematic transfer of knowledge between related tasks, building on the
mesh-agnostic representation provided by neural reparameterization. We compare
our approach against established TO methods, demonstrating efficient
optimization across diverse test cases without compromising design quality.
Further, we demonstrate powerful cross-resolution transfer capabilities, where
initializations learned on lower-resolution discretizations lead to superior
convergence in 74.1% of tasks on a higher-resolution test set, reducing the
average number of iterations by 33.6% compared to standard neural TO.
Remarkably, we discover that meta-learning naturally gravitates toward the
strain energy patterns found in uniform density designs as effective starting
points, aligning with engineering intuition.
|
2502.01834
|
Building a Cognitive Twin Using a Distributed Cognitive System and an
Evolution Strategy
|
cs.AI cs.NE
|
This work presents a technique to build interaction-based Cognitive Twins (a
computational version of an external agent) using input-output training and an
Evolution Strategy on top of a framework for distributed Cognitive
Architectures. Here, we show that it is possible to orchestrate many simple
physical and virtual devices to achieve good approximations of a person's
interaction behavior by training the system in an end-to-end fashion and
present performance metrics. The generated Cognitive Twin may later be used to
automate tasks, generate more realistic human-like artificial agents or further
investigate its behaviors.
|
2502.01836
|
LeaFi: Data Series Indexes on Steroids with Learned Filters
|
cs.DB
|
The ever-growing collections of data series create a pressing need for
efficient similarity search, which serves as the backbone for various analytics
pipelines. Recent studies have shown that tree-based series indexes excel in
many scenarios. However, we observe a significant waste of effort during
search, due to suboptimal pruning. To address this issue, we introduce LeaFi, a
novel framework that uses machine learning models to boost pruning
effectiveness of tree-based data series indexes. These models act as learned
filters, which predict tight node-wise distance lower bounds that are used to
make pruning decisions, thus improving pruning effectiveness. We describe the
LeaFi-enhanced index building algorithm, which selects leaf nodes and generates
training data to insert and train machine learning models, as well as the
LeaFi-enhanced search algorithm, which calibrates learned filters at query time
to support the user-defined quality target of each query. Our experimental
evaluation, using two different tree-based series indexes and five diverse
datasets, demonstrates the advantages of the proposed approach. LeaFi-enhanced
data-series indexes improve pruning ratio by up to 20x and search time by up to
32x, while maintaining a target recall of 99%.
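The pruning mechanism described above can be illustrated with a hedged sketch (names and the gating rule are assumptions, not LeaFi's actual architecture): a node is skipped whenever its predicted distance lower bound already exceeds the best distance found so far.

```python
# Illustrative sketch of learned-filter pruning in a tree-based index:
# `predict_lb` stands in for a trained per-node model that predicts a
# tight lower bound on the query-to-node distance.

def search_with_filters(nodes, query, best_so_far, predict_lb, exact_dist):
    """Return the nodes actually visited and the final best distance."""
    visited = []
    for node in nodes:
        if predict_lb(node, query) >= best_so_far:
            continue  # pruned: the lower bound cannot beat the current best
        visited.append(node)
        best_so_far = min(best_so_far, exact_dist(node, query))
    return visited, best_so_far
```

The calibration step mentioned in the abstract would adjust the predicted bounds at query time so that the user-defined recall target is met despite filter errors.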
|
2502.01837
|
TESS: A Scalable Temporally and Spatially Local Learning Rule for
Spiking Neural Networks
|
cs.NE cs.AI cs.LG
|
The demand for low-power inference and training of deep neural networks
(DNNs) on edge devices has intensified the need for algorithms that are both
scalable and energy-efficient. While spiking neural networks (SNNs) allow for
efficient inference by processing complex spatio-temporal dynamics in an
event-driven fashion, training them on resource-constrained devices remains
challenging due to the high computational and memory demands of conventional
error backpropagation (BP)-based approaches. In this work, we draw inspiration
from biological mechanisms such as eligibility traces, spike-timing-dependent
plasticity, and neural activity synchronization to introduce TESS, a temporally
and spatially local learning rule for training SNNs. Our approach addresses
both temporal and spatial credit assignment by relying solely on locally
available signals within each neuron, thereby allowing computational and memory
overheads to scale linearly with the number of neurons, independently of the
number of time steps. Despite relying on local mechanisms, we demonstrate
performance comparable to the backpropagation through time (BPTT) algorithm,
within $\sim1.4$ accuracy points on challenging computer vision scenarios
relevant at the edge, such as the IBM DVS Gesture dataset, CIFAR10-DVS, and
temporal versions of CIFAR10 and CIFAR100. Being able to produce comparable
performance to BPTT while keeping low time and memory complexity, TESS enables
efficient and scalable on-device learning at the edge.
|
2502.01839
|
Sample, Scrutinize and Scale: Effective Inference-Time Search by Scaling
Verification
|
cs.LG cs.AI
|
Sampling-based search, a simple paradigm for utilizing test-time compute,
involves generating multiple candidate responses and selecting the best one --
typically by having models self-verify each response for correctness. In this
paper, we study the scaling trends governing sampling-based search. Among our
findings is that simply scaling up a minimalist implementation of
sampling-based search, using only random sampling and direct self-verification,
provides a practical inference method that, for example, elevates the reasoning
capabilities of Gemini v1.5 Pro above that of o1-Preview on popular benchmarks.
We partially attribute the scalability of sampling-based search to a phenomenon
of implicit scaling, where sampling a larger pool of responses in turn improves
self-verification accuracy. We further identify two useful principles for
improving self-verification capabilities with test-time compute: (1) comparing
across responses provides helpful signals about the locations of errors and
hallucinations, and (2) different model output styles are useful for different
contexts -- chains of thought are useful for reasoning but harder to verify. We
also find that, though accurate verification can be elicited, frontier models
demonstrate remarkably weak out-of-the-box verification capabilities and introduce
a benchmark to measure progress on these deficiencies.
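The minimalist implementation described above, random sampling plus direct self-verification, can be sketched as follows; `generate` and `verify` are caller-supplied stand-ins (assumptions, not the paper's code) for sampling a candidate response and scoring its correctness:

```python
# Minimal sketch of sampling-based search with self-verification:
# sample k candidate responses and return the one the verifier
# scores highest.
import random

def sampling_based_search(generate, verify, k=8, seed=0):
    """generate(rng) -> candidate; verify(candidate) -> score."""
    rng = random.Random(seed)
    candidates = [generate(rng) for _ in range(k)]
    return max(candidates, key=verify)
```

The implicit-scaling phenomenon the paper reports corresponds to the observation that increasing `k` can improve not only the candidate pool but also the reliability of the verification step itself.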
|
2502.01842
|
Texture Image Synthesis Using Spatial GAN Based on Vision Transformers
|
cs.CV cs.AI
|
Texture synthesis is a fundamental task in computer vision, whose goal is to
generate visually realistic and structurally coherent textures for a wide range
of applications, from graphics to scientific simulations. While traditional
methods like tiling and patch-based techniques often struggle with complex
textures, recent advancements in deep learning have transformed this field. In
this paper, we propose ViT-SGAN, a new hybrid model that fuses Vision
Transformers (ViTs) with a Spatial Generative Adversarial Network (SGAN) to
address the limitations of previous methods. By incorporating specialized
texture descriptors such as mean-variance (mu, sigma) and textons into the
self-attention mechanism of ViTs, our model achieves superior texture
synthesis. This approach enhances the model's capacity to capture complex
spatial dependencies, leading to improved texture quality that is superior to
state-of-the-art models, especially for regular and irregular textures.
Comparison experiments with metrics such as FID, IS, SSIM, and LPIPS
demonstrate the substantial improvement of ViT-SGAN, underscoring its
effectiveness in generating diverse, realistic textures.
|
2502.01846
|
UVGS: Reimagining Unstructured 3D Gaussian Splatting using UV Mapping
|
cs.CV
|
3D Gaussian Splatting (3DGS) has demonstrated superior quality in modeling 3D
objects and scenes. However, generating 3DGS remains challenging due to their
discrete, unstructured, and permutation-invariant nature. In this work, we
present a simple yet effective method to overcome these challenges. We utilize
spherical mapping to transform 3DGS into a structured 2D representation, termed
UVGS. UVGS can be viewed as multi-channel images, with feature dimensions as a
concatenation of Gaussian attributes such as position, scale, color, opacity,
and rotation. We further find that these heterogeneous features can be
compressed into a lower-dimensional (e.g., 3-channel) shared feature space
using a carefully designed multi-branch network. The compressed UVGS can be
treated as typical RGB images. Remarkably, we discover that typical VAEs
trained with latent diffusion models can directly generalize to this new
representation without additional training. Our novel representation makes it
effortless to leverage foundational 2D models, such as diffusion models, to
directly model 3DGS. Additionally, one can simply increase the 2D UV resolution
to accommodate more Gaussians, making UVGS a scalable solution compared to
typical 3D backbones. This approach immediately unlocks various novel
generation applications of 3DGS by inherently utilizing the already developed
superior 2D generation capabilities. In our experiments, we demonstrate various
unconditional, conditional generation, and inpainting applications of 3DGS
based on diffusion models, which were previously non-trivial.
|
2502.01847
|
Containment Control Approach for Steering Opinion in a Social Network
|
eess.SY cs.MA cs.SY math.DS math.OC
|
The paper studies the problem of steering multi-dimensional opinion in a
social network. Assuming the society consists of stubborn and regular agents,
stubborn agents are considered leaders who specify the desired opinion
distribution as a distributed reward or utility function. In this
context, each regular agent is seen as a follower, updating its bias on the
initial opinion and influence weights by averaging their observations of the
rewards their influencers have received. Assuming random graphs with reducible
and irreducible topology specify the influences on regular agents, opinion
evolution is represented as a containment control problem in which stability
and convergence to the final opinion are proven.
|
2502.01850
|
Foundation Model-Based Apple Ripeness and Size Estimation for Selective
Harvesting
|
cs.CV
|
Harvesting is a critical task in the tree fruit industry, demanding extensive
manual labor and substantial costs, and exposing workers to potential hazards.
Recent advances in automated harvesting offer a promising solution by enabling
efficient, cost-effective, and ergonomic fruit picking within tight harvesting
windows. However, existing harvesting technologies often indiscriminately
harvest all visible and accessible fruits, including those that are unripe or
undersized. This study introduces a novel foundation model-based framework for
efficient apple ripeness and size estimation. Specifically, we curated two
public RGBD-based Fuji apple image datasets, integrating expanded annotations
for ripeness ("Ripe" vs. "Unripe") based on fruit color and image capture
dates. The resulting comprehensive dataset, Fuji-Ripeness-Size Dataset,
includes 4,027 images and 16,257 annotated apples with ripeness and size
labels. Using Grounding-DINO, a language-model-based object detector, we
achieved robust apple detection and ripeness classification, outperforming
other state-of-the-art models. Additionally, we developed and evaluated six
size estimation algorithms, selecting the one with the lowest error and
variation for optimal performance. The Fuji-Ripeness-Size Dataset and the apple
detection and size estimation algorithms are made publicly available,
providing valuable benchmarks for future studies in automated and selective
harvesting.
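One simple size-estimation baseline for RGB-D detections, shown here purely for illustration (the paper evaluates six algorithms and does not necessarily use this one), back-projects a detected bounding-box width to metric size via the pinhole camera model:

```python
# Illustrative pinhole-model size estimate from an RGB-D detection:
# diameter ~= pixel_width * depth / focal_length.

def estimate_diameter_mm(bbox_width_px, depth_mm, focal_length_px):
    """Back-project a bounding-box width to metric size at the fruit's depth."""
    return bbox_width_px * depth_mm / focal_length_px
```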
|
2502.01853
|
Security and Quality in LLM-Generated Code: A Multi-Language,
Multi-Model Analysis
|
cs.CR cs.LG cs.SE
|
Artificial Intelligence (AI)-driven code generation tools are increasingly
used throughout the software development lifecycle to accelerate coding tasks.
However, the security of AI-generated code using Large Language Models (LLMs)
remains underexplored, with studies revealing various risks and weaknesses.
This paper analyzes the security of code generated by LLMs across different
programming languages. We introduce a dataset of 200 tasks grouped into six
categories to evaluate the performance of LLMs in generating secure and
maintainable code. Our research shows that while LLMs can automate code
creation, their security effectiveness varies by language. Many models fail to
utilize modern security features in recent compiler and toolkit updates, such
as Java 17. Moreover, outdated methods are still commonly used, particularly in
C++. This highlights the need for advancing LLMs to enhance security and
quality while incorporating emerging best practices in programming languages.
|
2502.01854
|
How to warm-start your unfolding network
|
cs.LG eess.IV eess.SP
|
We present a new ensemble framework for boosting the performance of
overparameterized unfolding networks solving the compressed sensing problem. We
combine a state-of-the-art overparameterized unfolding network with a
continuation technique, to warm-start a crucial quantity of the said network's
architecture; we coin the resulting continued network C-DEC. Moreover, for
training and evaluating C-DEC, we incorporate the log-cosh loss function, which
enjoys both linear and quadratic behavior. Finally, we numerically assess
C-DEC's performance on real-world images. Results showcase that the combination
of continuation with the overparameterized unfolded architecture, trained and
evaluated with the chosen loss function, yields smoother loss landscapes and
improved reconstruction and generalization performance of C-DEC, consistently
for all datasets.
|
2502.01855
|
Learning Fine-to-Coarse Cuboid Shape Abstraction
|
cs.CV cs.GR
|
The abstraction of 3D objects with simple geometric primitives like cuboids
allows structural information to be inferred from complex geometry. It is important
for 3D shape understanding, structural analysis and geometric modeling. We
introduce a novel fine-to-coarse unsupervised learning approach to abstract
collections of 3D shapes. Our architectural design allows us to reduce the
number of primitives from hundreds (fine reconstruction) to only a few (coarse
abstraction) during training. This allows our network to optimize the
reconstruction error and adhere to a user-specified number of primitives per
shape while simultaneously learning a consistent structure across the whole
collection of data. We achieve this through our abstraction loss formulation
which increasingly penalizes redundant primitives. Furthermore, we introduce a
reconstruction loss formulation to account not only for surface approximation
but also volume preservation. Combining both contributions allows us to
represent 3D shapes more precisely with fewer cuboid primitives than previous
work. We evaluate our method on collections of man-made and humanoid shapes
comparing with previous state-of-the-art learning methods on commonly used
benchmarks. Our results confirm an improvement over previous cuboid-based shape
abstraction techniques. Furthermore, we demonstrate our cuboid abstraction in
downstream tasks like clustering, retrieval, and partial symmetry detection.
|
2502.01856
|
Reliability-Driven LiDAR-Camera Fusion for Robust 3D Object Detection
|
cs.CV cs.LG
|
Accurate and robust 3D object detection is essential for autonomous driving,
where fusing data from sensors like LiDAR and camera enhances detection
accuracy. However, sensor malfunctions such as corruption or disconnection can
degrade performance, and existing fusion models often struggle to maintain
reliability when one modality fails. To address this, we propose ReliFusion, a
novel LiDAR-camera fusion framework operating in the bird's-eye view (BEV)
space. ReliFusion integrates three key components: the Spatio-Temporal Feature
Aggregation (STFA) module, which captures dependencies across frames to
stabilize predictions over time; the Reliability module, which assigns
confidence scores to quantify the dependability of each modality under
challenging conditions; and the Confidence-Weighted Mutual Cross-Attention
(CW-MCA) module, which dynamically balances information from LiDAR and camera
modalities based on these confidence scores. Experiments on the nuScenes
dataset show that ReliFusion significantly outperforms state-of-the-art
methods, achieving superior robustness and accuracy in scenarios with limited
LiDAR fields of view and severe sensor malfunctions.
|
2502.01857
|
Learning Human Perception Dynamics for Informative Robot Communication
|
cs.RO cs.AI
|
Human-robot cooperative navigation is challenging in environments with
incomplete information. We introduce CoNav-Maze, a simulated robotics
environment where a robot navigates using local perception while a human
operator provides guidance based on an inaccurate map. The robot can share its
camera views to improve the operator's understanding of the environment. To
enable efficient human-robot cooperation, we propose Information Gain Monte
Carlo Tree Search (IG-MCTS), an online planning algorithm that balances
autonomous movement and informative communication. Central to IG-MCTS is a
neural human perception dynamics model that estimates how humans distill
information from robot communications. We collect a dataset through a
crowdsourced mapping task in CoNav-Maze and train this model using a fully
convolutional architecture with data augmentation. User studies show that
IG-MCTS outperforms teleoperation and instruction-following baselines,
achieving comparable task performance with significantly less communication and
lower human cognitive load, as evidenced by eye-tracking metrics.
|
2502.01858
|
Rethinking Energy Management for Autonomous Ground Robots on a Budget
|
cs.RO cs.SY eess.SY
|
Autonomous Ground Robots (AGRs) face significant challenges due to limited
energy reserves, which restrict their overall performance and availability.
Prior research has focused separately on energy-efficient approaches and fleet
management strategies for task allocation to extend operational time. A
fleet-level scheduler, however, assumes a specific energy consumption during
task allocation, requiring the AGR to fully utilize the energy for maximum
performance, which contrasts with energy-efficient practices. This paper
addresses this gap by investigating the combined impact of computing frequency
and locomotion speed on energy consumption and performance. We analyze these
variables through experiments on our prototype AGR, laying the foundation for
an integrated approach that optimizes cyber-physical resources within the
constraints of a specified energy budget. To tackle this challenge, we
introduce PECC (Predictable Energy Consumption Controller), a framework
designed to optimize computing frequency and locomotion speed to maximize
performance while ensuring the system operates within the specified energy
budget. We conducted extensive experiments with PECC using a real AGR and in
simulations, comparing it to an energy-efficient baseline. Our results show
that the AGR travels up to 17\% faster than the baseline in real-world tests
and up to 31\% faster in simulations, while consuming 95\% and 91\% of the
given energy budget, respectively. These results prove that PECC can
effectively enhance AGR performance in scenarios where prioritizing the energy
budget outweighs the need for energy efficiency.
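The budget-constrained selection PECC addresses can be sketched abstractly as follows; the energy model here is a made-up illustrative function, and the exhaustive search stands in for whatever optimization the framework actually performs:

```python
# Hedged sketch of budget-constrained configuration selection: pick a
# (computing frequency, locomotion speed) pair that maximizes speed
# while the predicted mission energy stays within the budget.

def pick_config(freqs, speeds, energy_model, budget):
    """Return the fastest feasible (freq, speed) pair, or None."""
    feasible = [(f, s) for f in freqs for s in speeds
                if energy_model(f, s) <= budget]
    if not feasible:
        return None
    return max(feasible, key=lambda fs: fs[1])
```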
|
2502.01860
|
SE Arena: Benchmarking Software Engineering Chatbots with Iterative
Interactions
|
cs.SE cs.LG
|
Foundation models (FMs), particularly large language models (LLMs), have
shown significant promise in various software engineering (SE) tasks, including
code generation, debugging, and requirement refinement. Despite these advances,
existing evaluation frameworks are insufficient for assessing model performance
in iterative, context-rich workflows characteristic of SE activities. To
address this limitation, we introduce SE Arena, an interactive platform
designed to evaluate SE-focused chatbots. SE Arena provides a transparent,
open-source leaderboard, supports multi-round conversational workflows, and
enables end-to-end model comparisons. Moreover, SE Arena incorporates a new
feature called RepoChat, which automatically injects repository-related context
(e.g., issues, commits, pull requests) into the conversation, further aligning
evaluations with real-world development processes. This paper outlines the
design and capabilities of SE Arena, emphasizing its potential to advance the
evaluation and practical application of FMs in software engineering.
|
2502.01861
|
Learning Hyperparameters via a Data-Emphasized Variational Objective
|
cs.LG stat.ML
|
When training large flexible models, practitioners often rely on grid search
to select hyperparameters that control over-fitting. This grid search has
several disadvantages: the search is computationally expensive, requires
carving out a validation set that reduces the available data for training, and
requires users to specify candidate values. In this paper, we propose an
alternative: directly learning regularization hyperparameters on the full
training set via the evidence lower bound ("ELBo") objective from variational
methods. For deep neural networks with millions of parameters, we recommend a
modified ELBo that upweights the influence of the data likelihood relative to
the prior. Our proposed technique overcomes all three disadvantages of grid
search. In a case study on transfer learning of image classifiers, we show how
our method reduces the 88+ hour grid search of past work to under 3 hours while
delivering comparable accuracy. We further demonstrate how our approach enables
efficient yet accurate approximations of Gaussian processes with learnable
length-scale kernels.
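The idea of upweighting the data likelihood relative to the prior can be illustrated on a toy 1-D Gaussian model; this is not the paper's implementation, and the factor `kappa`, the learnable prior precision `alpha`, and the unit-noise assumption are all choices made for the sketch:

```python
# Illustrative modified ELBo for a toy model: q(w) = N(mu_q, sigma_q^2),
# prior p(w) = N(0, 1/alpha), unit observation noise. The data term is
# upweighted by `kappa` relative to the KL term; alpha plays the role of
# the regularization hyperparameter being learned.
import math

def modified_elbo(data, mu_q, sigma_q, alpha, kappa):
    # E_q[log p(data | w)] in closed form for Gaussian q and unit noise.
    exp_loglik = sum(-0.5 * math.log(2 * math.pi)
                     - 0.5 * ((x - mu_q) ** 2 + sigma_q ** 2)
                     for x in data)
    # KL( N(mu_q, sigma_q^2) || N(0, 1/alpha) ) in closed form.
    kl = 0.5 * (alpha * (sigma_q ** 2 + mu_q ** 2)
                - 1.0 - math.log(alpha * sigma_q ** 2))
    return kappa * exp_loglik - kl
```

Maximizing this objective over `alpha` (and the variational parameters) on the full training set replaces the grid search over candidate regularization strengths.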
|
2502.01865
|
Enhancing Generalization via Sharpness-Aware Trajectory Matching for
Dataset Condensation
|
cs.LG
|
Dataset condensation aims to synthesize datasets with a few representative
samples that can effectively represent the original datasets. This enables
efficient training and produces models with performance close to those trained
on the original sets. Most existing dataset condensation methods conduct
dataset learning under the bilevel (inner- and outer-loop) based optimization.
However, the preceding methods perform with limited dataset generalization due
to the notoriously complicated loss landscape and expensive time-space
complexity of the inner-loop unrolling of bilevel optimization. These issues
deteriorate when the datasets are learned via matching the trajectories of
networks trained on the real and synthetic datasets with a long horizon
inner-loop. To address these issues, we introduce Sharpness-Aware Trajectory
Matching (SATM), which enhances the generalization capability of learned
synthetic datasets by optimizing the sharpness of the loss landscape and
objective simultaneously. Moreover, our approach is coupled with an efficient
hypergradient approximation that is mathematically well-supported and
straightforward to implement along with controllable computational overhead.
Empirical evaluations of SATM demonstrate its effectiveness across various
applications, including in-domain benchmarks and out-of-domain settings.
Moreover, its easy-to-implement properties afford flexibility, allowing it to
integrate with other advanced sharpness-aware minimizers. Our code will be
released.
|
2502.01866
|
Online Curvature-Aware Replay: Leveraging $\mathbf{2^{nd}}$ Order
Information for Online Continual Learning
|
cs.LG cs.AI
|
Online Continual Learning (OCL) models continuously adapt to nonstationary
data streams, usually without task information. These settings are complex and
many traditional CL methods fail, while online methods (mainly replay-based)
suffer from instabilities after the task shift. To address this issue, we
formalize replay-based OCL as a second-order online joint optimization with
explicit KL-divergence constraints on replay data. We propose Online
Curvature-Aware Replay (OCAR) to solve the problem: a method that leverages
second-order information of the loss using a K-FAC approximation of the Fisher
Information Matrix (FIM) to precondition the gradient. The FIM acts as a
stabilizer to prevent forgetting while also accelerating the optimization in
non-interfering directions. We show how to adapt the estimation of the FIM to a
continual setting stabilizing second-order optimization for non-iid data,
uncovering the role of the Tikhonov regularization in the stability-plasticity
tradeoff. Empirical results show that OCAR outperforms state-of-the-art methods
in continual metrics, achieving higher average accuracy throughout the training
process in three different benchmarks.
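The core preconditioning step can be sketched with a dense, damped Fisher matrix; OCAR itself uses a K-FAC approximation rather than an explicit solve, so this is only an illustration of the update direction $(F + \lambda I)^{-1} g$ and of how the Tikhonov term $\lambda$ enters:

```python
# Minimal sketch (not OCAR's K-FAC implementation): precondition a
# gradient with a damped Fisher information matrix. The Tikhonov
# damping term controls the stability-plasticity tradeoff noted
# in the abstract.
import numpy as np

def preconditioned_step(grad, fisher, damping):
    """Solve (F + damping * I) x = grad for the update direction x."""
    dim = fisher.shape[0]
    return np.linalg.solve(fisher + damping * np.eye(dim), grad)
```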
|
2502.01867
|
Optimizing Online Advertising with Multi-Armed Bandits: Mitigating the
Cold Start Problem under Auction Dynamics
|
cs.LG
|
Online advertising platforms often face a common challenge: the cold start
problem. Insufficient behavioral data (clicks) makes accurate click-through
rate (CTR) forecasting of new ads challenging. CTR for "old" items can also be
significantly underestimated due to their early performance influencing their
long-term behavior on the platform.
The cold start problem has far-reaching implications for businesses,
including missed long-term revenue opportunities. To mitigate this issue, we
developed a UCB-like algorithm under the multi-armed bandit (MAB) setting for
the position-based model (PBM), specifically tailored to auction pay-per-click
systems.
Our proposed algorithm successfully combines theory and practice: we obtain
theoretical upper estimates of budget regret, and conduct a series of
experiments on synthetic and real-world data that confirm the applicability of
the method on the real platform.
In addition to increasing the platform's long-term profitability, we also
propose a mechanism for maintaining short-term profits through controlled
exploration and exploitation of items.
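A minimal sketch of how a UCB-style score handles cold-start ads under a position-based model; the ad names, examination probabilities, and bonus form are illustrative assumptions, not the paper's algorithm:

```python
import math

# Toy PBM setup: clicks are attenuated by position examination probability.
exam_prob = [1.0, 0.6, 0.3]                # best to worst slot
clicks = {"ad_a": 30, "ad_b": 2, "ad_new": 0}
shows = {"ad_a": 1000, "ad_b": 50, "ad_new": 0}
t = sum(shows.values()) + 1                # total rounds so far

def ucb_ctr(ad: str) -> float:
    """Empirical CTR plus an exploration bonus; new ads score optimistically."""
    n = shows[ad]
    if n == 0:
        return 1.0                         # optimism for cold-start items
    return clicks[ad] / n + math.sqrt(2 * math.log(t) / n)

# Rank ads by UCB score; the highest score gets the best slot.
ranking = sorted(shows, key=ucb_ctr, reverse=True)
slots = dict(zip(ranking, exam_prob))
```

The never-shown ad wins the top slot here, which is exactly the controlled exploration the abstract describes.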
|
2502.01873
|
Explaining Automatic Image Assessment
|
cs.CV
|
Previous work in aesthetic categorization and explainability utilizes manual
labeling and classification to explain aesthetic scores. These methods require
a complex labeling process and are limited in size. Our proposed approach
attempts to explain aesthetic assessment models through visualizing dataset
trends and automatic categorization of visual aesthetic features through
training neural networks on different versions of the same dataset. By
evaluating the models adapted to each specific modality using existing and
novel metrics, we can capture and visualize aesthetic features and trends.
|
2502.01874
|
Countering Election Sway: Strategic Algorithms in Friedkin-Johnsen
Dynamics
|
cs.SI
|
Social influence profoundly impacts individual choices and collective
behaviors in politics. In this work, driven by the goal of protecting elections
from improper influence, we consider the following scenario: an individual, who
has vested interests in political party $Y$, is aware through reliable surveys
that parties $X$ and $Y$ are likely to get 50.1\% and 49.9\% of the vote,
respectively. Could this individual employ strategies to alter public opinions
and consequently invert these polling numbers in favor of party $Y$?
We address this question by employing: (i) the Friedkin-Johnsen (FJ) opinion
dynamics model, which is mathematically sophisticated and effectively captures
the way individual biases and social interactions shape opinions, making it
crucial for examining social influence, and (ii) interventions similar to those
in Asch's experiments, which involve selecting a group of stooges within the
network to spread a specific opinion. We mathematically formalize the
aforementioned motivation as an optimization framework and establish that it is
NP-hard and inapproximable within any constant factor. We introduce three
efficient polynomial-time algorithms. The first two utilize a continuous
approach: one employs gradient descent with Huber's estimator to approximate
the median, and the other uses a sigmoid threshold influence function. The
third utilizes a combinatorial greedy algorithm for targeted interventions.
Through comparative analysis against various natural baselines and using
real-world data, our results demonstrate that in numerous cases a small
fraction of nodes chosen as stooges can significantly sway election outcomes
under the Friedkin-Johnsen model.
|
2502.01876
|
Reinforcement Learning with Segment Feedback
|
cs.LG
|
Standard reinforcement learning (RL) assumes that an agent can observe a
reward for each state-action pair. However, in practical applications, it is
often difficult and costly to collect a reward for each state-action pair.
While there have been several works considering RL with trajectory feedback, it
is unclear whether trajectory feedback becomes inefficient for learning when
trajectories are long. In this work, we consider a model named RL with segment feedback,
which offers a general paradigm filling the gap between per-state-action
feedback and trajectory feedback. In this model, we consider an episodic Markov
decision process (MDP), where each episode is divided into $m$ segments, and
the agent observes reward feedback only at the end of each segment. Under this
model, we study two popular feedback settings: binary feedback and sum
feedback, where the agent observes a binary outcome and a reward sum according
to the underlying reward function, respectively. To investigate the impact of
the number of segments $m$ on learning performance, we design efficient
algorithms and establish regret upper and lower bounds for both feedback
settings. Our theoretical and experimental results show that: under binary
feedback, increasing the number of segments $m$ decreases the regret at an
exponential rate; in contrast, surprisingly, under sum feedback, increasing $m$
does not reduce the regret significantly.
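The two feedback settings can be sketched as follows (illustrative only; the sigmoid link used to generate binary feedback is an assumption):

```python
import math
import random

random.seed(3)
H, m = 12, 3                     # episode horizon and number of segments
seg_len = H // m

# Hidden per-step rewards: the agent never observes these individually.
true_rewards = [random.random() for _ in range(H)]

# Sum feedback: only each segment's reward sum is revealed.
segment_sums = [sum(true_rewards[i * seg_len:(i + 1) * seg_len])
                for i in range(m)]

def sigmoid(z: float) -> float:
    return 1.0 / (1.0 + math.exp(-z))

# Binary feedback: one 0/1 outcome per segment, drawn with a probability
# that depends on the segment's (hidden) reward sum.
binary_feedback = [int(random.random() < sigmoid(s)) for s in segment_sums]
```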
|
2502.01882
|
Latent Lexical Projection in Large Language Models: A Novel Approach to
Implicit Representation Refinement
|
cs.CL
|
Generating semantically coherent text requires a robust internal
representation of linguistic structures, which traditional embedding techniques
often fail to capture adequately. A novel approach, Latent Lexical Projection
(LLP), is introduced to refine lexical representations through a structured
transformation into a latent space, thereby enhancing the alignment between
input embeddings and their contextual meanings. The method integrates an
optimized projection mechanism within an existing language model architecture,
enabling more accurate token selection while maintaining syntactic integrity.
Evaluations across multiple benchmarks indicate a reduction in perplexity and
an increase in BLEU scores, suggesting improvements in predictive accuracy and
fluency. The analysis of lexical diversity reveals a more varied vocabulary in
generated text, addressing common issues of redundancy and repetitive phrase
structures. Further assessments of entropy distributions demonstrate a decline
in uncertainty during decoding, reflecting enhanced confidence in word
selection. Additionally, long-range dependency retention exhibits measurable
gains, with increased classification accuracy at extended token distances.
Computational efficiency remains within manageable constraints, despite the
added projection mechanism, highlighting the practicality of LLP for
integration into existing architectures.
|
2502.01885
|
A Privacy-Preserving Domain Adversarial Federated learning for
multi-site brain functional connectivity analysis
|
cs.LG cs.AI eess.IV
|
Resting-state functional magnetic resonance imaging (rs-fMRI) and its derived
functional connectivity networks (FCNs) have become critical for understanding
neurological disorders. However, collaborative analyses and the
generalizability of models still face significant challenges due to privacy
regulations and the non-IID (non-independent and identically distributed)
property of multiple data sources. To mitigate these difficulties, we propose
Domain Adversarial Federated Learning (DAFed), a novel federated deep learning
framework specifically designed for non-IID fMRI data analysis in multi-site
settings. DAFed addresses these challenges through feature disentanglement,
decomposing the latent feature space into domain-invariant and domain-specific
components, to ensure robust global learning while preserving local data
specificity. Furthermore, adversarial training facilitates effective knowledge
transfer between labeled and unlabeled datasets, while a contrastive learning
module enhances the global representation of domain-invariant features. We
evaluated DAFed on the diagnosis of ASD and further validated its
generalizability in the classification of AD, demonstrating its superior
classification accuracy compared to state-of-the-art methods. Additionally, an
enhanced Score-CAM module identifies key brain regions and functional
connectivity significantly associated with ASD and MCI, respectively,
uncovering shared neurobiological patterns across sites. These findings
highlight the potential of DAFed to advance multi-site collaborative research
in neuroimaging while protecting data confidentiality.
|
2502.01889
|
Displacement-Sparse Neural Optimal Transport
|
cs.LG cs.AI
|
Optimal Transport (OT) theory seeks to determine the map $T:X \to Y$ that
transports a source measure $P$ to a target measure $Q$, minimizing the cost
$c(\mathbf{x}, T(\mathbf{x}))$ between $\mathbf{x}$ and its image
$T(\mathbf{x})$. Building upon the Input Convex Neural Network OT solver and
incorporating the concept of displacement-sparse maps, we introduce a sparsity
penalty into the minimax Wasserstein formulation to promote sparsity in
displacement vectors $\Delta(\mathbf{x}) := T(\mathbf{x}) - \mathbf{x}$ and
enhance the interpretability of the resulting map. However, increasing sparsity
often reduces feasibility, causing $T_{\#}(P)$ to deviate more significantly
from the target measure. In low-dimensional settings, we propose a heuristic
framework to balance the trade-off between sparsity and feasibility by
dynamically adjusting the sparsity intensity parameter during training. For
high-dimensional settings, we directly constrain the dimensionality of
displacement vectors by enforcing $\dim(\Delta(\mathbf{x})) \leq l$, where $l <
d$ for $X \subseteq \mathbb{R}^d$. Among maps satisfying this constraint, we
aim to identify the most feasible one. This goal can be effectively achieved by
adapting our low-dimensional heuristic framework without resorting to
dimensionality reduction. We validate our method on both synthesized sc-RNA and
real 4i cell perturbation datasets, demonstrating improvements over existing
methods.
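A toy illustration of the displacement-sparsity idea, not the ICNN-based solver; the transport map below is a hand-written assumption:

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.normal(size=(100, 4))    # source samples in R^4

def transport(pts: np.ndarray) -> np.ndarray:
    """A hand-written map that moves points along only two coordinates."""
    delta = np.zeros_like(pts)
    delta[:, 0] = 1.0            # constant shift in the first coordinate
    delta[:, 1] = 0.01 * pts[:, 1]
    return pts + delta

delta = transport(x) - x                              # displacement vectors
l1_penalty = float(np.abs(delta).sum(axis=1).mean())  # sparsity penalty term

# Fraction of (near-)zero displacement coordinates: sparser displacements
# are more interpretable, at the price of T_#(P) drifting from the target.
sparsity = float((np.abs(delta) < 1e-6).mean())
```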
|
2502.01890
|
Geometric Framework for 3D Cell Segmentation Correction
|
cs.CV cs.LG
|
3D cellular image segmentation methods are commonly divided into non-2D-based
and 2D-based approaches, the latter reconstructing 3D shapes from the
segmentation results of 2D layers. However, errors in 2D results often
propagate, leading to oversegmentations in the final 3D results. To tackle this
issue, we introduce an interpretable geometric framework that addresses the
oversegmentations by correcting the 2D segmentation results based on geometric
information from adjacent layers. Leveraging both geometric (layer-to-layer,
2D) and topological (3D shape) features, we use binary classification to
determine whether neighboring cells should be stitched. We develop a
pre-trained classifier on public plant cell datasets and validate its
performance on animal cell datasets, confirming its effectiveness in correcting
oversegmentations under the transfer learning setting. Furthermore, we
demonstrate that our framework can be extended to correct oversegmentations
produced by non-2D-based methods. A clear pipeline is provided for end-users to
build the pre-trained model from any labeled dataset.
|
2502.01891
|
Training and Evaluating with Human Label Variation: An Empirical Study
|
cs.LG cs.CL
|
Human label variation (HLV) challenges the standard assumption that an
example has a single ground truth, instead embracing the natural variation in
human labelling to train and evaluate models. While various training methods
and metrics for HLV have been proposed, there has been no systematic
meta-evaluation of HLV evaluation metrics, contributing to the lack of clarity
in the best HLV training method. We propose new evaluation metrics and training
methods and empirically meta-evaluate HLV evaluation metrics. We find that
training on either disaggregated annotations or soft labels often performs best
across metrics, and that our proposed soft metric correlates best with human
preference.
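How soft labels arise from disaggregated annotations can be sketched directly; the label names and the soft cross-entropy objective are illustrative assumptions, not the paper's exact metrics:

```python
import math

# Three annotators disagree on an example: the soft label is simply the
# empirical distribution over their annotations.
annotations = ["toxic", "toxic", "not_toxic"]
labels = sorted(set(annotations))
soft_target = [annotations.count(l) / len(annotations) for l in labels]

# Model's predicted distribution over the same (sorted) label set.
predicted = [0.4, 0.6]

# Soft cross-entropy: the model is rewarded for matching the full
# annotation distribution, not a single majority-vote "ground truth".
soft_ce = -sum(t * math.log(p) for t, p in zip(soft_target, predicted))
```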
|
2502.01894
|
SimBEV: A Synthetic Multi-Task Multi-Sensor Driving Data Generation Tool
and Dataset
|
cs.CV cs.LG cs.RO
|
Bird's-eye view (BEV) perception for autonomous driving has garnered
significant attention in recent years, in part because BEV representation
facilitates the fusion of multi-sensor data. This enables a variety of
perception tasks including BEV segmentation, a concise view of the environment
that can be used to plan a vehicle's trajectory. However, this representation
is not fully supported by existing datasets, and creation of new datasets can
be a time-consuming endeavor. To address this problem, in this paper we
introduce SimBEV, an extensively configurable and scalable randomized synthetic
data generation tool that incorporates information from multiple sources to
capture accurate BEV ground truth data, supports a comprehensive array of
sensors, and enables a variety of perception tasks including BEV segmentation
and 3D object detection. We use SimBEV to create the SimBEV dataset, a large
collection of annotated perception data from diverse driving scenarios.
|
2502.01896
|
INTACT: Inducing Noise Tolerance through Adversarial Curriculum Training
for LiDAR-based Safety-Critical Perception and Autonomy
|
cs.CV cs.RO
|
In this work, we present INTACT, a novel two-phase framework designed to
enhance the robustness of deep neural networks (DNNs) against noisy LiDAR data
in safety-critical perception tasks. INTACT combines meta-learning with
adversarial curriculum training (ACT) to systematically address challenges
posed by data corruption and sparsity in 3D point clouds. The meta-learning
phase equips a teacher network with task-agnostic priors, enabling it to
generate robust saliency maps that identify critical data regions. The ACT
phase leverages these saliency maps to progressively expose a student network
to increasingly complex noise patterns, ensuring targeted perturbation and
improved noise resilience. INTACT's effectiveness is demonstrated through
comprehensive evaluations on object detection, tracking, and classification
benchmarks using diverse datasets, including KITTI, Argoverse, and ModelNet40.
Results indicate that INTACT improves model robustness by up to 20% across all
tasks, outperforming standard adversarial and curriculum training methods. This
framework not only addresses the limitations of conventional training
strategies but also offers a scalable and efficient solution for real-world
deployment in resource-constrained safety-critical systems. INTACT's principled
integration of meta-learning and adversarial training establishes a new
paradigm for noise-tolerant 3D perception in safety-critical applications.
INTACT improved KITTI Multiple Object Tracking Accuracy (MOTA) by 9.6% (64.1%
-> 75.1%) and by 12.4% under Gaussian noise (52.5% -> 73.7%). Similarly, KITTI
mean Average Precision (mAP) rose from 59.8% to 69.8% (50% point drop) and
49.3% to 70.9% (Gaussian noise), highlighting the framework's ability to
enhance deep learning model resilience in safety-critical object tracking
scenarios.
|
2502.01901
|
Conceptual Metaphor Theory as a Prompting Paradigm for Large Language
Models
|
cs.CL
|
We introduce Conceptual Metaphor Theory (CMT) as a framework for enhancing
large language models (LLMs) through cognitive prompting in complex reasoning
tasks. CMT leverages metaphorical mappings to structure abstract reasoning,
improving models' ability to process and explain intricate concepts. By
incorporating CMT-based prompts, we guide LLMs toward more structured and
human-like reasoning patterns. To evaluate this approach, we compare four
native models (Llama3.2, Phi3, Gemma2, and Mistral) against their CMT-augmented
counterparts on benchmark tasks spanning domain-specific reasoning, creative
insight, and metaphor interpretation. Responses were automatically evaluated
using the Llama3.3 70B model. Experimental results indicate that CMT prompting
significantly enhances reasoning accuracy, clarity, and metaphorical coherence,
outperforming baseline models across all evaluated tasks.
|
2502.01904
|
Common Neighborhood Estimation over Bipartite Graphs under Local
Differential Privacy
|
cs.DB
|
Bipartite graphs, formed by two vertex layers, arise as a natural fit for
modeling the relationships between two groups of entities. In bipartite graphs,
common neighborhood computation between two vertices on the same vertex layer
is a basic operator, which is easily solvable in general settings. However, it
inevitably involves releasing the neighborhood information of vertices, posing
a significant privacy risk for users in real-world applications. To protect
edge privacy in bipartite graphs, in this paper, we study the problem of
estimating the number of common neighbors of two vertices on the same layer
under edge local differential privacy (edge LDP). The problem is challenging in
the context of edge LDP since each vertex on the opposite layer of the query
vertices can potentially be a common neighbor. To obtain efficient and accurate
estimates, we propose a multiple-round framework that significantly reduces the
candidate pool of common neighbors and enables the query vertices to construct
unbiased estimators locally. Furthermore, we improve data utility by
incorporating the estimators built from the neighbors of both query vertices
and devise privacy budget allocation optimizations. These improve the
estimator's robustness and consistency, particularly against query vertices
with imbalanced degrees. Extensive experiments on 15 datasets validate the
effectiveness and efficiency of our proposed techniques.
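The edge-LDP building block, randomized response on adjacency bits followed by an unbiased debiasing step, can be sketched as follows (single-round and illustrative; the paper's multiple-round framework goes well beyond this primitive):

```python
import math
import random

random.seed(0)
eps = 1.0
p = math.exp(eps) / (math.exp(eps) + 1)   # probability of reporting truthfully

def perturb(bit: int) -> int:
    """Randomized response on one adjacency bit (edge LDP primitive)."""
    return bit if random.random() < p else 1 - bit

def unbiased_count(noisy_bits) -> float:
    """Debias the noisy sum so its expectation equals the true count."""
    n = len(noisy_bits)
    return (sum(noisy_bits) - n * (1 - p)) / (2 * p - 1)

# Toy example: a vertex's adjacency over 10000 candidate common neighbors,
# of which 3000 are true neighbors.
true_bits = [1] * 3000 + [0] * 7000
noisy_bits = [perturb(b) for b in true_bits]
est = unbiased_count(noisy_bits)
```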
|
2502.01905
|
When not to target negative ties? Studying competitive influence
maximisation in signed networks
|
cs.SI
|
We explore the influence maximisation problem in networks with negative ties.
Where prior work has focused on unsigned networks, we investigate the need to
consider negative ties in networks while trying to maximise spread in a
population - particularly under competitive conditions. Given a signed network
we optimise the strategies of a focal controller, against competing influence
in the network, using two approaches - either the focal controller uses a
sign-agnostic approach or they factor in the sign of the edges while optimising
their strategy. We compare the difference in vote-shares (or the share of
population) obtained by both these methods to determine the need to navigate
negative ties in these settings. More specifically, we study the impact of: (a)
network topology, (b) resource conditions and (c) competitor strategies on the
difference in vote shares obtained across both methodologies. We observe that
gains are maximum when resources available to the focal controller are low and
the competitor avoids negative edges in their strategy. Conversely, gains are
insignificant irrespective of resource conditions when the competitor targets
the network indiscriminately. Finally, we study the problem in a game-theoretic
setting, where we simultaneously optimise the strategies of both competitors.
Interestingly, we observe that strategising with the knowledge of negative ties
can occasionally also lead to a loss in vote-shares.
|
2502.01906
|
Rethinking Homogeneity of Vision and Text Tokens in Large
Vision-and-Language Models
|
cs.CV
|
Large vision-and-language models (LVLMs) typically treat visual and textual
embeddings as homogeneous inputs to a large language model (LLM). However,
these inputs are inherently different: visual inputs are multi-dimensional and
contextually rich, often pre-encoded by models like CLIP, while textual inputs
lack this structure. In this paper, we propose Decomposed Attention (D-Attn), a
novel method that processes visual and textual embeddings differently by
decomposing the 1-D causal self-attention in LVLMs. After the attention
decomposition, D-Attn diagonalizes visual-to-visual self-attention, reducing
computation from $\mathcal{O}(|V|^2)$ to $\mathcal{O}(|V|)$ for $|V|$ visual
embeddings without compromising performance. Moreover, D-Attn debiases
positional encodings in textual-to-visual cross-attention, further enhancing
visual understanding. Finally, we introduce an $\alpha$-weighting strategy to
merge visual and textual information, maximally preserving the pre-trained
LLM's capabilities with minimal modifications. Extensive experiments and
rigorous analyses validate the effectiveness of D-Attn, demonstrating
significant improvements on multiple image benchmarks while significantly
reducing computational costs. Code, data, and models will be publicly
available.
|
2502.01908
|
Unlocking Efficient Large Inference Models: One-Bit Unrolling Tips the
Scales
|
cs.LG
|
Recent advancements in Large Language Model (LLM) compression, such as BitNet
and BitNet b1.58, have marked significant strides in reducing the computational
demands of LLMs through innovative one-bit quantization techniques. We extend
this frontier by looking at Large Inference Models (LIMs) that have become
indispensable across various applications. However, their scale and complexity
often come at a significant computational cost. We introduce a novel approach
that leverages one-bit algorithm unrolling, effectively integrating information
from the physical world in the model architecture. Our method achieves a
bit-per-link rate significantly lower than the 1.58 bits reported in prior
work, thanks to the natural sparsity that emerges in our network architectures.
We numerically demonstrate that the proposed one-bit algorithm unrolling scheme
can improve both training and test outcomes by effortlessly increasing the
number of layers while substantially compressing the network. Additionally, we
provide theoretical results on the generalization gap, convergence rate,
stability, and sensitivity of our proposed one-bit algorithm unrolling.
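The underlying one-bit quantization step, in the BitNet spirit the paper builds on, can be sketched as follows (illustrative; the unrolled architecture and its natural sparsity are not modeled here):

```python
import numpy as np

rng = np.random.default_rng(4)

# A dense weight matrix and its one-bit counterpart: each weight keeps only
# its sign, with a single shared scale factor.
W = rng.normal(size=(8, 8))
scale = float(np.abs(W).mean())
W_1bit = scale * np.sign(W)

# Storage drops from 32 bits per link to ~1 bit plus one shared scale.
quant_error = float(np.abs(W - W_1bit).mean())
```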
|
2502.01912
|
PATCH: a deep learning method to assess heterogeneity of artistic
practice in historical paintings
|
cs.CV cs.AI cs.LG
|
The history of art has seen significant shifts in the manner in which
artworks are created, making understanding of creative processes a central
question in technical art history. In the Renaissance and Early Modern period,
paintings were largely produced by master painters directing workshops of
apprentices who often contributed to projects. The masters varied significantly
in artistic and managerial styles, meaning different combinations of artists
and implements might be seen both between masters and within workshops or even
individual canvases. Information on how different workshops were managed and
the processes by which artworks were created remains elusive. Machine learning
methods have potential to unearth new information about artists' creative
processes by extending the analysis of brushwork to a microscopic scale.
Analysis of workshop paintings, however, presents a challenge in that
documentation of the artists and materials involved is sparse, meaning external
examples are not available to train networks to recognize their contributions.
Here we present a novel machine learning approach we call pairwise assignment
training for classifying heterogeneity (PATCH) that is capable of identifying
individual artistic practice regimes with no external training data, or "ground
truth." The method achieves unsupervised results by supervised means, and
outperforms both simple statistical procedures and unsupervised machine
learning methods. We apply this method to two historical paintings by the
Spanish Renaissance master, El Greco: The Baptism of Christ and Christ on the
Cross with Landscape, and our findings regarding the former potentially
challenge previous work that has assigned the painting to workshop members.
Further, the results of our analyses create a measure of heterogeneity of
artistic practice that can be used to characterize artworks across time and
space.
|
2502.01913
|
Composite Gaussian Processes Flows for Learning Discontinuous Multimodal
Policies
|
cs.RO cs.LG
|
Learning control policies for real-world robotic tasks often involve
challenges such as multimodality, local discontinuities, and the need for
computational efficiency. These challenges arise from the complexity of robotic
environments, where multiple solutions may coexist. To address these issues, we
propose Composite Gaussian Processes Flows (CGP-Flows), a novel semi-parametric
model for robotic policy. CGP-Flows integrate Overlapping Mixtures of Gaussian
Processes (OMGPs) with the Continuous Normalizing Flows (CNFs), enabling them
to model complex policies addressing multimodality and local discontinuities.
This hybrid approach retains the computational efficiency of OMGPs while
incorporating the flexibility of CNFs. Experiments conducted in both simulated
and real-world robotic tasks demonstrate that CGP-Flows significantly improve
performance in modeling control policies. In a simulation task, we confirmed
that CGP-Flows achieved a higher success rate than the baseline methods, and
chi-square tests showed that the difference in success rates was statistically
significant.
|
2502.01916
|
Generalizable and Fast Surrogates: Model Predictive Control of
Articulated Soft Robots using Physics-Informed Neural Networks
|
cs.RO cs.LG
|
Soft robots can revolutionize several applications with high demands on
dexterity and safety. When operating these systems, real-time estimation and
control require fast and accurate models. However, prediction with
first-principles (FP) models is slow, and learned black-box models have poor
generalizability. Physics-informed machine learning offers excellent advantages
here, but it is currently limited to simple, often simulated systems without
considering changes after training. We propose physics-informed neural networks
(PINNs) for articulated soft robots (ASRs) with a focus on data efficiency. The
amount of expensive real-world training data is reduced to a minimum - one
dataset in one system domain. Two hours of data in different domains are used
for a comparison against two gold-standard approaches: In contrast to a
recurrent neural network, the PINN provides a high generalizability. The
prediction speed of an accurate FP model is improved with the PINN by up to a
factor of 466 at slightly reduced accuracy. This enables nonlinear model
predictive control (MPC) of the pneumatic ASR. In nine dynamic MPC experiments,
an average joint-tracking error of 1.3{\deg} is achieved.
|
2502.01918
|
Wake-Informed 3D Path Planning for Autonomous Underwater Vehicles Using
A* and Neural Network Approximations
|
cs.RO cs.AI cs.LG
|
Autonomous Underwater Vehicles (AUVs) encounter significant energy, control
and navigation challenges in complex underwater environments, particularly
during close-proximity operations, such as launch and recovery (LAR), where
fluid interactions and wake effects present additional navigational and energy
challenges. Traditional path planning methods fail to incorporate these
detailed wake structures, resulting in increased energy consumption, reduced
control stability, and heightened safety risks. This paper presents a novel
wake-informed, 3D path planning approach that fully integrates localized wake
effects and global currents into the planning algorithm. Two variants of the A*
algorithm - a current-informed planner and a wake-informed planner - are
created to assess the approach's validity, and two neural network models are
then trained to approximate these planners for real-time applications. Both the A* planners
and NN models are evaluated using important metrics such as energy expenditure,
path length, and encounters with high-velocity and turbulent regions. The
results demonstrate a wake-informed A* planner consistently achieves the lowest
energy expenditure and minimizes encounters with high-velocity regions,
reducing energy consumption by up to 11.3%. The neural network models offer a
computational speedup of six orders of magnitude but exhibit 4.51 - 19.79%
higher energy expenditures and 9.81 - 24.38% less optimal paths.
These findings underscore the importance of incorporating detailed wake
structures into traditional path planning algorithms and the benefits of neural
network approximations to enhance energy efficiency and operational safety for
AUVs in complex 3D domains.
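A toy 2D version of a wake-informed A* cost can be sketched as follows (the grid, wake field, and penalty values are assumptions; the paper's planners operate in 3D with modeled wake and current structures):

```python
import heapq

# Grid world with a "wake intensity" field added to the step cost.
W, H = 6, 6
wake = {(x, y): 0.0 for x in range(W) for y in range(H)}
for y in range(H):
    wake[(3, y)] = 5.0                      # a costly wake column

def neighbors(p):
    x, y = p
    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        if 0 <= x + dx < W and 0 <= y + dy < H:
            yield (x + dx, y + dy)

def astar(start, goal):
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # admissible
    frontier = [(h(start), 0.0, start, [start])]
    best = {start: 0.0}
    while frontier:
        _, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        for nb in neighbors(node):
            ng = g + 1.0 + wake[nb]         # distance + wake energy penalty
            if ng < best.get(nb, float("inf")):
                best[nb] = ng
                heapq.heappush(frontier, (ng + h(nb), ng, nb, path + [nb]))
    return None, float("inf")

path, cost = astar((0, 0), (5, 5))
```

The optimal plan still crosses the wake column (it must), but only once and at minimum total energy.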
|
2502.01919
|
Poisson Hierarchical Indian Buffet Processes for Within and Across Group
Sharing of Latent Features-With Indications for Microbiome Species Sampling
Models
|
stat.ML cs.LG math.PR math.ST stat.TH
|
In this work, we present a comprehensive Bayesian posterior analysis of what
we term Poisson Hierarchical Indian Buffet Processes, designed for complex
random sparse count species sampling models that allow for the sharing of
information across and within groups. This analysis covers a potentially
infinite number of species and unknown parameters, which, within a Bayesian
machine learning context, we are able to learn from as more information is
sampled. To achieve our refined results, we employ a range of methodologies
drawn from Bayesian latent feature models, random occupancy models, and
excursion theory. Despite this complexity, our goal is to make our findings
accessible to practitioners, including those who may not be familiar with these
areas. To facilitate understanding, we adopt a pseudo-expository style that
emphasizes clarity and practical utility. We aim to express our findings in a
language that resonates with experts in microbiome and ecological studies,
addressing gaps in modeling capabilities while acknowledging that we are not
experts ourselves in these fields. This approach encourages the use of our
models as basic components of more sophisticated frameworks employed by domain
experts, embodying the spirit of the seminal work on the Dirichlet Process.
Ultimately, our refined posterior analysis not only yields tractable
computational procedures but also enables practical statistical implementation
and provides a clear mapping to relevant quantities in microbiome analysis.
|
2502.01920
|
Anomaly Detection via Autoencoder Composite Features and NCE
|
cs.LG
|
Unsupervised anomaly detection is a challenging task. Autoencoders (AEs) or
generative models are often employed to model the data distribution of normal
inputs and subsequently identify anomalous, out-of-distribution inputs by high
reconstruction error or low likelihood, respectively. However, AEs may
generalize and achieve small reconstruction errors on abnormal inputs. We
propose a decoupled training approach for anomaly detection that combines an AE
with a likelihood model trained via noise contrastive estimation (NCE). After
training the AE, NCE estimates a probability density function, serving as the
anomaly score, on the joint space of the AE's latent representation combined
with features of the reconstruction quality. To further reduce the false
negative rate in NCE, we systematically vary the reconstruction features to
augment the training and optimize the contrastive Gaussian noise distribution.
Experimental assessments on multiple benchmark datasets demonstrate that the
proposed approach matches the performance of prevalent state-of-the-art anomaly
detection algorithms.
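The joint-feature scoring idea can be sketched with a Gaussian density standing in for the NCE-trained likelihood model (everything here is illustrative: random draws replace a trained AE's latent codes and reconstruction errors):

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-ins for an already-trained AE on normal data: 2-D latent codes plus
# a reconstruction-error feature per input.
normal_latent = rng.normal(0.0, 1.0, size=(500, 2))
normal_recon_err = np.abs(rng.normal(0.1, 0.02, size=(500, 1)))
features = np.hstack([normal_latent, normal_recon_err])

# Fit a density on the joint (latent, reconstruction-quality) space.
mu = features.mean(axis=0)
cov = np.cov(features, rowvar=False) + 1e-6 * np.eye(3)
cov_inv = np.linalg.inv(cov)

def anomaly_score(x: np.ndarray) -> float:
    """Squared Mahalanobis distance in the joint space: higher = more anomalous."""
    d = x - mu
    return float(d @ cov_inv @ d)

typical = anomaly_score(np.array([0.0, 0.0, 0.1]))
# An input the AE reconstructs well but that maps to an unusual latent region
# (the failure mode joint scoring is meant to catch):
outlier = anomaly_score(np.array([5.0, -5.0, 0.1]))
```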
|
2502.01922
|
LAST SToP For Modeling Asynchronous Time Series
|
cs.LG cs.AI
|
We present a novel prompt design for Large Language Models (LLMs) tailored to
Asynchronous Time Series. Unlike regular time series, which assume values at
evenly spaced time points, asynchronous time series consist of timestamped
events occurring at irregular intervals, each described in natural language.
Our approach effectively utilizes the rich natural language of event
descriptions, allowing LLMs to benefit from their broad world knowledge for
reasoning across different domains and tasks. This allows us to extend the
scope of asynchronous time series analysis beyond forecasting to include tasks
like anomaly detection and data imputation. We further introduce Stochastic
Soft Prompting, a novel prompt-tuning mechanism that significantly improves
model performance, outperforming existing fine-tuning methods such as QLoRA.
Through extensive experiments on real world datasets, we demonstrate that our
approach achieves state-of-the-art performance across different tasks and
datasets.
|
2502.01924
|
DualGuard MPPI: Safe and Performant Optimal Control by Combining
Sampling-Based MPC and Hamilton-Jacobi Reachability
|
eess.SY cs.RO cs.SY
|
Designing controllers that are both safe and performant is inherently
challenging. This co-optimization can be formulated as a constrained optimal
control problem, where the cost function represents the performance criterion
and safety is specified as a constraint. While sampling-based methods, such as
Model Predictive Path Integral (MPPI) control, have shown great promise in
tackling complex optimal control problems, they often struggle to enforce
safety constraints. To address this limitation, we propose DualGuard-MPPI, a
novel framework for solving safety-constrained optimal control problems. Our
approach integrates Hamilton-Jacobi reachability analysis within the MPPI
sampling process to ensure that all generated samples are provably safe for the
system. On the one hand, this integration allows DualGuard-MPPI to enforce
strict safety constraints; on the other, it facilitates a more effective
exploration of the environment with the same number of samples, reducing the
effective sampling variance and leading to better performance optimization.
Through several simulations and hardware experiments, we demonstrate that the
proposed approach achieves much higher performance compared to existing MPPI
methods, without compromising safety.
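As a toy illustration of the general idea only (a hand-coded state bound stands in for the paper's Hamilton-Jacobi reachability machinery, and the dynamics, cost, and all parameters below are my own assumptions), a vanilla MPPI step on a 1D double integrator can discard sampled control sequences that leave the safe set before averaging:

```python
import numpy as np

def rollout(x, controls, dt=0.1):
    """Simulate a 1D double integrator state [position, velocity]."""
    traj = []
    for u in controls:
        x = np.array([x[0] + x[1] * dt, x[1] + u * dt])
        traj.append(x)
    return np.array(traj)

def is_safe(traj, pos_limit=1.0):
    """Stand-in safety check (DualGuard-MPPI uses HJ reachability instead)."""
    return bool(np.all(np.abs(traj[:, 0]) <= pos_limit))

def mppi_step(x, horizon=20, n_samples=256, lam=1.0, sigma=0.5, seed=0):
    rng = np.random.default_rng(seed)
    samples = rng.normal(0.0, sigma, size=(n_samples, horizon))
    costs, safe_controls = [], []
    for u_seq in samples:
        traj = rollout(x, u_seq)
        if not is_safe(traj):       # keep only samples that pass the safety check
            continue
        costs.append(np.sum(traj[:, 0] ** 2) + 0.01 * np.sum(u_seq ** 2))
        safe_controls.append(u_seq)
    costs = np.array(costs)
    w = np.exp(-(costs - costs.min()) / lam)
    w /= w.sum()                    # softmax weights over safe rollouts only
    return (w[:, None] * np.array(safe_controls)).sum(axis=0)

x0 = np.array([0.8, 0.0])           # start near the |pos| <= 1 boundary
plan = mppi_step(x0)
# Linear dynamics + convex safe set: a weighted average of safe plans is safe.
print(is_safe(rollout(x0, plan)))
```

Note the design point the abstract alludes to: because unsafe rollouts are excluded before the exponential averaging, the same sample budget is spent entirely on admissible trajectories, which is the sense in which filtering can reduce effective sampling variance.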
|
2502.01925
|
PANDAS: Improving Many-shot Jailbreaking via Positive Affirmation,
Negative Demonstration, and Adaptive Sampling
|
cs.CL cs.CR cs.LG
|
Many-shot jailbreaking circumvents the safety alignment of large language
models by exploiting their ability to process long input sequences. To achieve
this, the malicious target prompt is prefixed with hundreds of fabricated
conversational turns between the user and the model. These fabricated exchanges
are randomly sampled from a pool of malicious questions and responses, making
it appear as though the model has already complied with harmful instructions.
In this paper, we present PANDAS: a hybrid technique that improves many-shot
jailbreaking by modifying these fabricated dialogues with positive
affirmations, negative demonstrations, and an optimized adaptive sampling
method tailored to the target prompt's topic. Extensive experiments on AdvBench
and HarmBench, using state-of-the-art LLMs, demonstrate that PANDAS
significantly outperforms baseline methods in long-context scenarios. Through
an attention analysis, we provide insights on how long-context vulnerabilities
are exploited and show how PANDAS further improves upon many-shot jailbreaking.
|
2502.01926
|
Fairness through Difference Awareness: Measuring Desired Group
Discrimination in LLMs
|
cs.CY cs.CL
|
Algorithmic fairness has conventionally adopted a perspective of racial
color-blindness (i.e., difference unaware treatment). We contend that in a
range of important settings, group difference awareness matters. For example,
differentiating between groups may be necessary in legal contexts (e.g., the
U.S. compulsory draft applies to men but not women) and harm assessments (e.g.,
calling a girl a terrorist may be less harmful than calling a Muslim person
one). In our work we first introduce an important distinction between
descriptive (fact-based), normative (value-based), and correlation
(association-based) benchmarks. This distinction is significant because each
category requires distinct interpretation and mitigation tailored to its
specific characteristics. Then, we present a benchmark suite composed of eight
different scenarios for a total of 16k questions that enables us to assess
difference awareness. Finally, we show results across ten models that
demonstrate difference awareness is a distinct dimension of fairness where
existing bias mitigation strategies may backfire.
|
2502.01930
|
Distributionally Robust Direct Preference Optimization
|
cs.LG cs.AI
|
A major challenge in aligning large language models (LLMs) with human
preferences is the issue of distribution shift. LLM alignment algorithms rely
on static preference datasets, assuming that they accurately represent
real-world user preferences. However, user preferences vary significantly
across geographical regions, demographics, linguistic patterns, and evolving
cultural trends. This preference distribution shift leads to catastrophic
alignment failures in many real-world applications. We address this problem
using the principled framework of distributionally robust optimization, and
develop two novel distributionally robust direct preference optimization (DPO)
algorithms, namely, Wasserstein DPO (WDPO) and Kullback-Leibler DPO (KLDPO). We
characterize the sample complexity of learning the optimal policy parameters
for WDPO and KLDPO. Moreover, we propose scalable gradient descent-style
learning algorithms by developing suitable approximations for the challenging
minimax loss functions of WDPO and KLDPO. Our empirical experiments demonstrate
the superior performance of WDPO and KLDPO in substantially improving the
alignment when there is a preference distribution shift.
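For orientation, WDPO and KLDPO are robust variants built on top of the standard DPO loss. The sketch below shows only that vanilla per-pair loss (the paper's minimax Wasserstein/KL formulations are not reproduced here, and `beta` and the log-probabilities are illustrative values):

```python
import math

def dpo_loss(logp_w, logp_l, ref_logp_w, ref_logp_l, beta=0.1):
    """Vanilla DPO loss for one preference pair (chosen yw, rejected yl).

    logp_*: policy log-probability of the chosen / rejected response.
    ref_logp_*: same quantities under the frozen reference policy.
    """
    margin = beta * ((logp_w - ref_logp_w) - (logp_l - ref_logp_l))
    return -math.log(1.0 / (1.0 + math.exp(-margin)))  # -log sigmoid(margin)

# If the policy prefers the chosen response more than the reference does,
# the margin is positive and the loss drops below log(2).
print(dpo_loss(-1.0, -3.0, -2.0, -2.0) < math.log(2.0))
```

A distributionally robust variant replaces the expectation of this loss over a fixed preference dataset with a worst case over distributions near it, which is where the Wasserstein or KL uncertainty set enters.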
|
2502.01932
|
VolleyBots: A Testbed for Multi-Drone Volleyball Game Combining Motion
Control and Strategic Play
|
cs.RO cs.AI cs.LG
|
Multi-agent reinforcement learning (MARL) has made significant progress,
largely fueled by the development of specialized testbeds that enable
systematic evaluation of algorithms in controlled yet challenging scenarios.
However, existing testbeds often focus on purely virtual simulations or limited
robot morphologies such as robotic arms, quadrupeds, and humanoids, leaving
high-mobility platforms with real-world physical constraints like drones
underexplored. To bridge this gap, we present VolleyBots, a new MARL testbed
where multiple drones cooperate and compete in the sport of volleyball under
physical dynamics. VolleyBots features a turn-based interaction model under
volleyball rules, a hierarchical decision-making process that combines motion
control and strategic play, and a high-fidelity simulation for seamless
sim-to-real transfer. We provide a comprehensive suite of tasks ranging from
single-drone drills to multi-drone cooperative and competitive tasks,
accompanied by baseline evaluations of representative MARL and game-theoretic
algorithms. Results in simulation show that while existing algorithms handle
simple tasks effectively, they encounter difficulty in complex tasks that
require both low-level control and high-level strategy. We further demonstrate
zero-shot deployment of a simulation-learned policy to real-world drones,
highlighting VolleyBots' potential to propel MARL research involving agile
robotic platforms. The project page is at
https://sites.google.com/view/thu-volleybots/home.
|
2502.01936
|
Query-Based and Unnoticeable Graph Injection Attack from Neighborhood
Perspective
|
cs.LG cs.CR
|
The robustness of Graph Neural Networks (GNNs) has become an increasingly
important topic due to their expanding range of applications. Various attack
methods have been proposed to explore the vulnerabilities of GNNs, ranging from
Graph Modification Attacks (GMA) to the more practical and flexible Graph
Injection Attacks (GIA). However, existing methods face two key challenges: (i)
their reliance on surrogate models, which often leads to reduced attack
effectiveness due to structural differences and prior biases, and (ii) existing
GIA methods often sacrifice attack success rates in undefended settings to
bypass certain defense models, thereby limiting their overall effectiveness. To
overcome these limitations, we propose QUGIA, a Query-based and Unnoticeable
Graph Injection Attack. QUGIA injects nodes by first selecting edges based on
victim node connections and then generating node features using a Bayesian
framework. This ensures that the injected nodes are similar to the original
graph nodes, implicitly preserving homophily and making the attack more
unnoticeable. Unlike previous methods, QUGIA does not rely on surrogate models,
thereby avoiding performance degradation and achieving better generalization.
Extensive experiments on six real-world datasets with diverse characteristics
demonstrate that QUGIA achieves unnoticeable attacks and outperforms
state-of-the-art attackers. The code will be released upon acceptance.
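The unnoticeability claim rests on injected nodes preserving homophily. As a quick illustrative check (my own sketch, not QUGIA's procedure), edge homophily of a labeled graph is simply the fraction of edges joining same-label endpoints, which one could compare before and after injection:

```python
def edge_homophily(edges, labels):
    """Fraction of edges whose two endpoints share a label."""
    same = sum(1 for u, v in edges if labels[u] == labels[v])
    return same / len(edges)

# Tiny example: a 4-cycle with two label classes.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
labels = {0: "a", 1: "a", 2: "b", 3: "b"}
print(edge_homophily(edges, labels))  # 2 of 4 edges are same-label: 0.5
```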
|
2502.01937
|
A Comprehensive Study of Bug-Fix Patterns in Autonomous Driving Systems
|
cs.SE cs.RO
|
As autonomous driving systems (ADSes) become increasingly complex and
integral to daily life, the importance of understanding the nature and
mitigation of software bugs in these systems has grown correspondingly.
Addressing the challenges of software maintenance in autonomous driving systems
(e.g., handling real-time system decisions and ensuring safety-critical
reliability) is crucial due to the unique combination of real-time
decision-making requirements and the high stakes of operational failures in
ADSes. The potential of automated tools in this domain is promising, yet there
remains a gap in our comprehension of the challenges faced and the strategies
employed during manual debugging and repair of such systems. In this paper, we
present an empirical study that investigates bug-fix patterns in ADSes, with
the aim of improving reliability and safety. We have analyzed the commit
histories and bug reports of two major autonomous driving projects, Apollo and
Autoware, from 1,331 bug fixes with the study of bug symptoms, root causes, and
bug-fix patterns. Our study reveals several dominant bug-fix patterns,
including those related to path planning, data flow, and configuration
management. Additionally, we find that the frequency distribution of bug-fix
patterns varies significantly depending on their nature and types and that
certain categories of bugs are recurrent and more challenging to eliminate.
Based on our findings, we propose a hierarchy of ADS bugs and two taxonomies of
15 syntactic bug-fix patterns and 27 semantic bug-fix patterns that offer
guidance for bug identification and resolution. We also contribute a benchmark
of 1,331 ADS bug-fix instances.
|
2502.01940
|
Toward a Low-Cost Perception System in Autonomous Vehicles: A Spectrum
Learning Approach
|
cs.CV eess.IV
|
We present a cost-effective new approach for generating denser depth maps for
Autonomous Driving (AD) and Autonomous Vehicles (AVs) by integrating the images
obtained from deep neural network (DNN) 4D radar detectors with conventional
camera RGB images. Our approach introduces a novel pixel positional encoding
algorithm inspired by Bartlett's spatial spectrum estimation technique. This
algorithm transforms both radar depth maps and RGB images into a unified pixel
image subspace called the Spatial Spectrum, facilitating effective learning
based on their similarities and differences. Our method effectively leverages
high-resolution camera images to train radar depth map generative models,
addressing the limitations of conventional radar detectors in complex vehicular
environments, thus sharpening the radar output. We develop spectrum estimation
algorithms tailored for radar depth maps and RGB images, a comprehensive
training framework for data-driven generative models, and a camera-radar
deployment scheme for AV operation. Our results demonstrate that our approach
also outperforms the state-of-the-art (SOTA) by 27.95% in terms of
Unidirectional Chamfer Distance (UCD).
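The pixel positional encoding above is inspired by Bartlett's spatial spectrum estimation. As a rough illustration of that classical beamforming technique (not the paper's actual pixel-encoding algorithm; the array geometry and simulation values are my own assumptions), the Bartlett spectrum scans steering vectors of a uniform linear array against the sample covariance of the sensor snapshots:

```python
import numpy as np

def bartlett_spectrum(snapshots, n_angles=181, spacing=0.5):
    """Bartlett (conventional) beamformer for a uniform linear array.

    snapshots: (n_sensors, n_snapshots) complex sensor readings.
    spacing: element spacing in wavelengths.
    Returns scan angles (degrees) and the spatial spectrum P(theta).
    """
    n_sensors = snapshots.shape[0]
    R = snapshots @ snapshots.conj().T / snapshots.shape[1]  # sample covariance
    angles = np.linspace(-90.0, 90.0, n_angles)
    spectrum = np.empty(n_angles)
    for i, theta in enumerate(np.deg2rad(angles)):
        # Steering vector for a plane wave arriving from angle theta.
        a = np.exp(-2j * np.pi * spacing * np.arange(n_sensors) * np.sin(theta))
        spectrum[i] = np.real(a.conj() @ R @ a) / n_sensors
    return angles, spectrum

# Simulate one source at +20 degrees on an 8-element array, light noise.
rng = np.random.default_rng(0)
n_sensors, n_snap, theta0 = 8, 200, np.deg2rad(20.0)
a0 = np.exp(-2j * np.pi * 0.5 * np.arange(n_sensors) * np.sin(theta0))
s = rng.standard_normal(n_snap) + 1j * rng.standard_normal(n_snap)
x = np.outer(a0, s) + 0.1 * (rng.standard_normal((n_sensors, n_snap))
                             + 1j * rng.standard_normal((n_sensors, n_snap)))
angles, spec = bartlett_spectrum(x)
print(angles[np.argmax(spec)])  # spectral peak near +20 degrees
```

The analogy in the paper is that projecting both radar depth maps and RGB images into such a spectrum-like subspace exposes their similarities and differences to the generative model.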
|