| id (string, 9-16 chars) | title (string, 4-278 chars) | categories (string, 5-104 chars) | abstract (string, 6-4.09k chars) |
|---|---|---|---|
2501.18873
|
Best Policy Learning from Trajectory Preference Feedback
|
cs.LG
|
We address the problem of best policy identification in preference-based
reinforcement learning (PbRL), where learning occurs from noisy binary
preferences over trajectory pairs rather than explicit numerical rewards. This
approach is useful for post-training optimization of generative AI models
during multi-turn user interactions, where preference feedback is more robust
than handcrafted reward models. In this setting, learning is driven by both an
offline preference dataset -- collected from a rater of unknown 'competence' --
and online data collected with pure exploration. Since offline datasets may
exhibit out-of-distribution (OOD) biases, principled online data collection is
necessary. To address this, we propose Posterior Sampling for Preference
Learning ($\mathsf{PSPL}$), a novel algorithm inspired by Top-Two Thompson
Sampling that maintains independent posteriors over the true reward model and
transition dynamics. We provide the first theoretical guarantees for PbRL in
this setting, establishing an upper bound on the simple Bayesian regret of
$\mathsf{PSPL}$. Since the exact algorithm can be computationally impractical,
we also provide an approximate version that outperforms existing baselines.
|
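The Top-Two Thompson Sampling idea that inspires $\mathsf{PSPL}$ can be sketched on a simple Bernoulli bandit: sample from each arm's posterior and keep the best arm as the "leader", or, with some probability, resample until a different arm (the "challenger") comes out on top. This is a generic best-arm-identification sketch, not the paper's algorithm; the arm means, Beta priors, and the 0.5 leader probability are illustrative assumptions.

```python
import numpy as np

def ttts_step(alpha, beta_, rng, beta=0.5, max_resample=100):
    """One Top-Two Thompson Sampling step for a Bernoulli bandit.

    alpha, beta_: Beta posterior parameters per arm.
    Returns the index of the arm to pull next.
    """
    theta = rng.beta(alpha, beta_)
    leader = int(np.argmax(theta))
    if rng.random() < beta:
        return leader
    # Resample until a different arm is on top (the "challenger").
    for _ in range(max_resample):
        theta = rng.beta(alpha, beta_)
        challenger = int(np.argmax(theta))
        if challenger != leader:
            return challenger
    return leader

rng = np.random.default_rng(1)
true_means = np.array([0.3, 0.5, 0.7])   # assumed, unknown to the learner
alpha = np.ones(3)                        # Beta(1, 1) priors
beta_ = np.ones(3)
for _ in range(2000):
    arm = ttts_step(alpha, beta_, rng)
    reward = rng.random() < true_means[arm]
    alpha[arm] += reward
    beta_[arm] += 1 - reward

best_arm = int(np.argmax(alpha / (alpha + beta_)))
```

The resampling step is what concentrates pulls on the top two candidates rather than only on the current leader, which is what makes the identification sample-efficient.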
2501.18875
|
Self-Supervised Learning Using Nonlinear Dependence
|
cs.LG cs.CV stat.ML
|
Self-supervised learning (SSL) has gained significant attention in contemporary
applications, particularly due to the scarcity of labeled data. While existing
SSL methodologies primarily address feature variance and linear correlations,
they often neglect the intricate relations between samples and the nonlinear
dependencies inherent in complex data. In this paper, we introduce
Correlation-Dependence Self-Supervised Learning (CDSSL), a novel framework that
unifies and extends existing SSL paradigms by integrating both linear
correlations and nonlinear dependencies, encapsulating sample-wise and
feature-wise interactions. Our approach incorporates the Hilbert-Schmidt
Independence Criterion (HSIC) to robustly capture nonlinear dependencies within
a Reproducing Kernel Hilbert Space, enriching representation learning.
Experimental evaluations on diverse benchmarks demonstrate the efficacy of
CDSSL in improving representation quality.
|
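The HSIC dependence measure at the core of CDSSL has a simple empirical estimator: with kernel matrices $K$ and $L$ over the two views and centering matrix $H$, the (biased) estimate is $\mathrm{tr}(KHLH)/(n-1)^2$. The sketch below is a generic HSIC computation with RBF kernels, not the authors' CDSSL loss; the bandwidth and toy data are assumptions.

```python
import numpy as np

def rbf_kernel(X, sigma=1.0):
    # Pairwise squared distances, then Gaussian (RBF) kernel.
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2 * X @ X.T
    return np.exp(-d2 / (2 * sigma**2))

def hsic(X, Y, sigma=1.0):
    """Biased empirical HSIC estimator: tr(K H L H) / (n - 1)^2."""
    n = X.shape[0]
    K = rbf_kernel(X, sigma)
    L = rbf_kernel(Y, sigma)
    H = np.eye(n) - np.ones((n, n)) / n   # centering matrix
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
Y_dep = X**2 + 0.1 * rng.normal(size=(200, 3))   # nonlinearly dependent view
Y_ind = rng.normal(size=(200, 3))                 # independent view

hsic_dep = hsic(X, Y_dep)
hsic_ind = hsic(X, Y_ind)
```

Because $Y = X^2$ is uncorrelated with $X$ in the linear sense, a covariance-based criterion would miss it, while HSIC with an RBF kernel assigns it a clearly larger score than an independent view.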
2501.18876
|
QMe14S, A Comprehensive and Efficient Spectral Dataset for Small Organic
Molecules
|
physics.chem-ph cs.LG
|
Developing machine learning protocols for molecular simulations requires
comprehensive and efficient datasets. Here we introduce the QMe14S dataset,
comprising 186,102 small organic molecules featuring 14 elements (H, B, C, N,
O, F, Al, Si, P, S, Cl, As, Se, Br) and 47 functional groups. Using density
functional theory at the B3LYP/TZVP level, we optimized the geometries and
calculated properties including energy, atomic charge, atomic force, dipole
moment, quadrupole moment, polarizability, octupole moment, first
hyperpolarizability, and Hessian. At the same level, we obtained the harmonic
IR, Raman, and NMR spectra. Furthermore, we conducted ab initio molecular
dynamics simulations to generate dynamic configurations and extract
nonequilibrium properties, including energy, forces, and Hessians. By
leveraging our E(3)-equivariant message-passing neural network (DetaNet), we
demonstrated that models trained on QMe14S outperform those trained on the
previously developed QM9S dataset in simulating molecular spectra. The QMe14S
dataset thus serves as a comprehensive benchmark for molecular simulations,
offering valuable insights into structure-property relationships.
|
2501.18877
|
Distorting Embedding Space for Safety: A Defense Mechanism for
Adversarially Robust Diffusion Models
|
cs.CV cs.CR cs.LG
|
Text-to-image diffusion models show remarkable generation performance
following text prompts, but risk generating Not Safe For Work (NSFW) content
from unsafe prompts. Existing approaches, such as prompt filtering or concept
unlearning, fail to defend against adversarial attacks while maintaining benign
image quality. In this paper, we propose a novel approach called Distorting
Embedding Space (DES), a text encoder-based defense mechanism that effectively
tackles these issues through innovative embedding space control. DES transforms
unsafe embeddings, extracted from a text encoder using unsafe prompts, toward
carefully calculated safe embedding regions to prevent unsafe content
generation, while reproducing the original safe embeddings. DES also
neutralizes the nudity embedding, extracted using the prompt ``nudity'', by
aligning it with a neutral embedding to enhance robustness against adversarial
attacks.
These methods ensure both robust defense and high-quality image generation.
Additionally, DES can be adopted in a plug-and-play manner and requires zero
inference overhead, facilitating its deployment. Extensive experiments on
diverse attack types, including black-box and white-box scenarios, demonstrate
DES's state-of-the-art performance in both defense capability and benign image
generation quality. Our model is available at https://github.com/aei13/DES.
|
2501.18879
|
Understanding Generalization in Physics Informed Models through Affine
Variety Dimensions
|
cs.LG math.ST stat.ML stat.TH
|
In recent years, physics-informed machine learning has gained significant
attention for its ability to enhance statistical performance and sample
efficiency by integrating physical structures into machine learning models.
These structures, such as differential equations, conservation laws, and
symmetries, serve as inductive biases that can improve the generalization
capacity of the hybrid model. However, the mechanisms by which these physical
structures enhance generalization capacity are not fully understood, limiting
the ability to guarantee the performance of the models. In this study, we show
that the generalization performance of linear regressors incorporating
differential equation structures is determined by the dimension of the
associated affine variety, rather than the number of parameters. This finding
enables a unified analysis of various equations, including nonlinear ones. We
introduce a method to approximate the dimension of the affine variety and
provide experimental evidence to validate our theoretical insights.
|
2501.18880
|
RLS3: RL-Based Synthetic Sample Selection to Enhance Spatial Reasoning
in Vision-Language Models for Indoor Autonomous Perception
|
cs.CV cs.LG
|
Vision-language model (VLM) fine-tuning for application-specific visual
grounding based on natural language instructions has become one of the most
popular approaches for learning-enabled autonomous systems. However, such
fine-tuning relies heavily on high-quality datasets to achieve successful
performance in various downstream tasks. Additionally, VLMs often encounter
limitations due to insufficient and imbalanced fine-tuning data. To address
these issues, we propose a new generalizable framework to improve VLM
fine-tuning by integrating it with a reinforcement learning (RL) agent. Our
method utilizes the RL agent to manipulate objects within an indoor setting to
create synthetic data for fine-tuning to address certain vulnerabilities of the
VLM. Specifically, we use the performance of the VLM to provide feedback to the
RL agent to generate informative data that efficiently fine-tunes the VLM on
the targeted task (e.g., spatial reasoning). The key contribution of this work
is developing a framework where the RL agent serves as an informative data
sampling tool and assists the VLM in order to enhance performance and address
task-specific vulnerabilities. By targeting the data sampling process to
address the weaknesses of the VLM, we can effectively train a more
context-aware model. In addition, generating synthetic data allows us to have
precise control over each scene and generate granular ground truth captions.
Our results show that the proposed data generation approach improves the
spatial reasoning performance of VLMs, which demonstrates the benefits of using
RL-guided data generation in vision-language tasks.
|
2501.18883
|
Can We Predict the Effect of Prompts?
|
cs.SE cs.LG
|
Large Language Models (LLMs) are machine learning models that have seen
widespread adoption due to their ability to handle previously difficult
tasks. Due to their training, LLMs are sensitive to exactly how a question is
presented, i.e., to the prompt. However, prompting well is challenging, as
it has been difficult to uncover principles behind prompting -- generally,
trial-and-error is the most common way of improving prompts, despite its
significant computational cost. In this context, we argue it would be useful to
perform `predictive prompt analysis', in which an automated technique would
perform a quick analysis of a prompt and predict how the LLM would react to it,
relative to a goal provided by the user. As a demonstration of the concept, we
present Syntactic Prevalence Analyzer (SPA), a predictive prompt analysis
approach based on sparse autoencoders (SAEs). SPA accurately predicted how
often an LLM would generate target syntactic structures during code synthesis,
with up to 0.994 Pearson correlation between the predicted and actual
prevalence of the target structure. At the same time, SPA requires only 0.4\%
of the time it takes to run the LLM on a benchmark. As LLMs are increasingly
used during and integrated into modern software development, our proposed
predictive prompt analysis concept has the potential to significantly ease the
use of LLMs for both practitioners and researchers.
|
2501.18887
|
Building Bridges, Not Walls -- Advancing Interpretability by Unifying
Feature, Data, and Model Component Attribution
|
cs.LG cs.AI
|
The increasing complexity of AI systems has made understanding their behavior
a critical challenge. Numerous methods have been developed to attribute model
behavior to three key aspects: input features, training data, and internal
model components. However, these attribution methods are studied and applied
rather independently, resulting in a fragmented landscape of approaches and
terminology. This position paper argues that feature, data, and component
attribution methods share fundamental similarities, and bridging them can
benefit interpretability research. We conduct a detailed analysis of successful
methods of these three attribution aspects and present a unified view to
demonstrate that these seemingly distinct methods employ similar approaches,
such as perturbations, gradients, and linear approximations, differing
primarily in their perspectives rather than core techniques. Our unified
perspective enhances understanding of existing attribution methods, identifies
shared concepts and challenges, makes this field more accessible to newcomers,
and highlights new directions not only for attribution and interpretability but
also for broader AI research, including model editing, steering, and
regulation.
|
2501.18889
|
Fully Distributed and Quantized Algorithm for MPC-based Autonomous
Vehicle Platooning Optimization
|
eess.SY cs.MA cs.SY eess.SP math.OC
|
Intelligent transportation systems have recently emerged to address the
growing interest in safer, more efficient, and sustainable transportation
solutions. In this direction, this paper presents distributed algorithms for
control and optimization over vehicular networks. First, we formulate the
autonomous vehicle platooning framework based on model-predictive-control (MPC)
strategies and present its objective optimization as a cooperative quadratic
cost function. Then, we propose a distributed algorithm to locally optimize
this objective at every vehicle subject to data quantization over the
communication network of vehicles. In contrast to most existing literature that
assumes ideal communication channels, log-scale data quantization over the
network is addressed in this work, which is more realistic and practical. In
particular, we show by simulation that the proposed log-quantized algorithm
converges to the optimum with a smaller residual and optimality gap. This
outperforms uniform quantization schemes from the existing literature, which
lead to a large optimality gap and residual.
|
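Log-scale quantization, in contrast to uniform quantization, rounds the magnitude in log-space, so the *relative* error is bounded (by $\sqrt{b}-1$ for base $b$) regardless of the magnitude of the value being transmitted. A minimal sketch of such a quantizer, assuming base-2 levels; it is illustrative, not the paper's exact scheme:

```python
import numpy as np

def log_quantize(x, base=2.0, eps=1e-12):
    """Logarithmically-scaled quantizer: round the magnitude in log-space.

    Small values get fine resolution, large values coarse resolution,
    and the relative error is bounded by sqrt(base) - 1.
    """
    x = np.asarray(x, dtype=float)
    sign = np.sign(x)
    mag = np.maximum(np.abs(x), eps)          # avoid log(0)
    exponent = np.round(np.log(mag) / np.log(base))
    return sign * base ** exponent

x = np.array([0.03, 0.9, 5.0, -17.0])
q = log_quantize(x)   # e.g. 5.0 -> 4.0, -17.0 -> -16.0
```

The bounded relative error is what keeps the residual small near the optimum, where the exchanged values themselves shrink; a uniform quantizer's fixed step size would dominate there.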
2501.18890
|
Distributed Observer Design for Tracking Platoon of Connected and
Autonomous Vehicles
|
eess.SY cs.MA cs.SY eess.SP math.OC
|
Intelligent transportation systems (ITS) aim to advance innovative strategies
relating to different modes of transport, traffic management, and autonomous
vehicles. This paper studies the platoon of connected and autonomous vehicles
(CAV) and proposes a distributed observer to track the state of the CAV
dynamics. First, we model the CAV dynamics via an LTI interconnected system.
Then, a consensus-based strategy is proposed to infer the state of the CAV
dynamics based on local information exchange over the communication network of
vehicles. A linear-matrix-inequality (LMI) technique is adopted for the
block-diagonal observer gain design, such that the gain is assigned in a
distributed way, locally at every vehicle. The distributed observer error
dynamics is then shown to follow the structure of the Kronecker matrix product
of the system dynamics and the adjacency matrix of the CAV network. The notions
of survivable network design and redundant observer scheme are further
discussed in the paper to address resilience to link and node failure. Finally,
we verify our theoretical contributions via numerical simulations.
|
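The Kronecker structure of the networked error dynamics can be checked numerically: if each vehicle's error follows dynamics $A$ and information mixes through consensus weights $W$, the stacked error evolves as $e(k{+}1)=(W\otimes A)\,e(k)$, the mixed-product property gives $(W\otimes A)^2 = W^2\otimes A^2$, and the eigenvalues of $W\otimes A$ are all pairwise products $\lambda_i(W)\,\mu_j(A)$. The matrices below are illustrative assumptions, not the paper's CAV model:

```python
import numpy as np

rng = np.random.default_rng(0)
A = 0.3 * rng.normal(size=(3, 3))    # per-vehicle error dynamics (assumed)
W = np.array([[0.5, 0.5, 0.0],
              [0.5, 0.0, 0.5],
              [0.0, 0.5, 0.5]])      # row-stochastic consensus weights (assumed)

M = np.kron(W, A)                    # stacked network error dynamics

# Mixed-product property: (W x A)(W x A) = (W W) x (A A)
lhs = M @ M
rhs = np.kron(W @ W, A @ A)

# Spectrum of the Kronecker product = all pairwise eigenvalue products.
eig_M = np.sort(np.abs(np.linalg.eigvals(M)))
eig_prod = np.sort(np.abs(np.outer(np.linalg.eigvals(W),
                                   np.linalg.eigvals(A))).ravel())
```

This spectral factorization is why stability of the distributed observer can be argued from the spectra of the local dynamics and the network weights separately.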
2501.18891
|
CAAT-EHR: Cross-Attentional Autoregressive Transformer for Multimodal
Electronic Health Record Embeddings
|
cs.LG
|
Electronic health records (EHRs) provide a comprehensive source of
longitudinal patient data, encompassing structured modalities such as
laboratory results, imaging data, and vital signs, and unstructured clinical
notes. These datasets, after necessary preprocessing to clean and format the
data for analysis, often remain in their raw EHR form, representing numerical
or categorical values without further transformation into task-agnostic
embeddings. While such raw EHR data enables predictive modeling, its reliance
on manual feature engineering or downstream task-specific optimization limits
its utility for general-purpose applications. Deep learning (DL) techniques,
such as recurrent neural networks (RNNs) and Transformers, have facilitated
predictive tasks like disease progression and diagnosis prediction. However,
these methods often struggle to fully exploit the temporal and multimodal
dependencies inherent in EHR data due to their reliance on pre-processed but
untransformed raw EHR inputs. In this study, we introduce CAAT-EHR, a novel
architecture designed to bridge this gap by generating robust, task-agnostic
longitudinal embeddings from raw EHR data. CAAT-EHR leverages self- and
cross-attention mechanisms in its encoder to integrate temporal and contextual
relationships across multiple modalities, transforming the data into enriched
embeddings that capture complex dependencies. An autoregressive decoder
complements the encoder by predicting future time-point data during
pre-training, ensuring that the resulting embeddings maintain temporal
consistency and alignment. CAAT-EHR eliminates the need for manual feature
engineering and enables seamless transferability across diverse downstream
tasks. Extensive evaluations on benchmark datasets demonstrate the superiority
of CAAT-EHR-generated embeddings over pre-processed raw EHR data and other
baseline approaches.
|
2501.18893
|
A machine learning approach for Premature Coronary Artery Disease
Diagnosis according to Different Ethnicities in Iran
|
cs.LG
|
Premature coronary artery disease (PCAD) refers to the early onset of the
disease, usually before the age of 55 for men and 65 for women. Coronary Artery
Disease (CAD) develops when coronary arteries, the major blood vessels
supplying the heart with blood, oxygen, and nutrients, become clogged or
diseased. This is often due to many risk factors, including lifestyle and
cardiometabolic ones, but few studies have examined ethnicity as one of these
risk factors, especially in PCAD. In this study, we tested the rank of
ethnicity among the major risk factors of PCAD, including age, gender, body
mass index (BMI), visceral obesity presented as waist circumference (WC),
diabetes mellitus (DM), high blood pressure (HBP), high low-density lipoprotein
cholesterol (LDL-C), and smoking in a large national sample of patients with
PCAD from different ethnicities. All patients who met the age criteria
underwent coronary angiography to confirm CAD diagnosis. The weight of
ethnicity was compared to the other eight features using feature weighting
algorithms in PCAD diagnosis. In addition, we conducted an experiment where we
ran predictive models (classification algorithms) to predict PCAD. We compared
the performance of these models under two conditions: training the
classification algorithms with and without ethnicity as a feature. This study
analyzed various factors to determine their predictive power influencing PCAD
prediction. Among these factors, gender and age were the most significant
predictors, with ethnicity being the third most important. The results also
showed that if ethnicity is used as one of the input risk factors for
classification algorithms, it can improve their efficiency. Our results show
that ethnicity ranks as an influential factor in predicting PCAD. Therefore, it
needs to be addressed in the PCAD diagnostic and preventive measures.
|
2501.18895
|
Efficient Supernet Training with Orthogonal Softmax for Scalable ASR
Model Compression
|
cs.CL
|
ASR systems are deployed across diverse environments, each with specific
hardware constraints. We use supernet training to jointly train multiple
encoders of varying sizes, enabling dynamic model size adjustment to fit
hardware constraints without redundant training. Moreover, we introduce a novel
method called OrthoSoftmax, which applies multiple orthogonal softmax functions
to efficiently identify optimal subnets within the supernet, avoiding
resource-intensive search. This approach also enables more flexible and precise
subnet selection by allowing selection based on various criteria and levels of
granularity. Our results with CTC on Librispeech and TED-LIUM-v2 show that
FLOPs-aware component-wise selection achieves the best overall performance.
With the same number of training updates from one single job, WERs for all
model sizes are comparable to or slightly better than those of individually
trained models. Furthermore, we analyze patterns in the selected components and
reveal interesting insights.
|
2501.18897
|
Trustworthy Evaluation of Generative AI Models
|
stat.ML cs.LG
|
Generative AI (GenAI) models have recently achieved remarkable empirical
performance in various applications; however, their evaluations still lack
uncertainty quantification. In this paper, we propose a method to compare two
generative models based on an unbiased estimator of their relative performance
gap. Statistically, our estimator achieves parametric convergence rate and
asymptotic normality, which enables valid inference. Computationally, our
method is efficient and can be accelerated by parallel computing and leveraging
pre-storing intermediate results. On simulated datasets with known ground
truth, we show our approach effectively controls type I error and achieves
power comparable with commonly used metrics. Furthermore, we demonstrate the
performance of our method in evaluating diffusion models on real image datasets
with statistical confidence.
|
2501.18898
|
GestureLSM: Latent Shortcut based Co-Speech Gesture Generation with
Spatial-Temporal Modeling
|
cs.CV cs.GR
|
Controlling human gestures based on speech signals presents a significant
challenge in computer vision. While existing works have made preliminary
studies of generating holistic co-speech gestures from speech, the spatial
interaction of each body region during speech remains barely explored. This
leads to unnatural body-part interactions given the speech signal. Furthermore, the slow
generation speed limits the construction of real-world digital avatars. To
resolve these problems, we propose \textbf{GestureLSM}, a Latent Shortcut based
approach for Co-Speech Gesture Generation with spatial-temporal modeling. We
tokenize various body regions and explicitly model their interactions with
spatial and temporal attention. To achieve real-time gesture generation, we
examine the denoising patterns and design an effective time distribution to
speed up sampling while improving the generation quality of the shortcut model. Extensive
quantitative and qualitative experiments demonstrate the effectiveness of
GestureLSM, showcasing its potential for various applications in the
development of digital humans and embodied agents. Project Page:
https://andypinxinliu.github.io/GestureLSM
|
2501.18899
|
Minimum Time Strategies for a Differential Drive Robot Escaping from a
Circular Detection Region
|
cs.RO math.OC
|
A Differential Drive Robot (DDR) located inside a circular detection region
in the plane wants to escape from it in minimum time. Various robotics
applications can be modeled like the previous problem, such as a DDR escaping
as soon as possible from a forbidden/dangerous region in the plane or running
out from the sensor footprint of an unmanned vehicle flying at a constant
altitude. In this paper, we find the motion strategies to accomplish its goal
under two scenarios. In one, the detection region moves slower than the DDR and
seeks to prevent escape; in another, its position is fixed. We formulate the
problem as a zero-sum pursuit-evasion game, and using differential games
theory, we compute the players' time-optimal motion strategies. Given the DDR's
speed advantage, it can always escape by translating away from the center of
the detection region at maximum speed. In this work, we show that the previous
strategy could be optimal in some cases; however, other motion strategies
emerge based on the player's speed ratio and the players' initial
configurations.
|
2501.18901
|
Lightspeed Geometric Dataset Distance via Sliced Optimal Transport
|
cs.LG cs.AI stat.CO stat.ME stat.ML
|
We introduce sliced optimal transport dataset distance (s-OTDD), a
model-agnostic, embedding-agnostic approach for dataset comparison that
requires no training, is robust to variations in the number of classes, and can
handle disjoint label sets. The core innovation is Moment Transform Projection
(MTP), which maps a label, represented as a distribution over features, to a
real number. Using MTP, we derive a data point projection that transforms
datasets into one-dimensional distributions. The s-OTDD is defined as the
expected Wasserstein distance between the projected distributions, with respect
to random projection parameters. Leveraging the closed-form solution of
one-dimensional optimal transport, s-OTDD achieves (near-)linear computational
complexity in the number of data points and feature dimensions and is
independent of the number of classes. With its geometrically meaningful
projection, s-OTDD strongly correlates with the optimal transport dataset
distance while being more efficient than existing dataset discrepancy measures.
Moreover, it correlates well with the performance gap in transfer learning and
classification accuracy in data augmentation.
|
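The efficiency of s-OTDD rests on the closed-form solution of one-dimensional optimal transport: for two equal-size empirical distributions, the $p$-Wasserstein distance is just the $\ell_p$ mean difference of sorted samples, so the cost is dominated by an $O(n\log n)$ sort. A minimal sketch of that one-dimensional step (the MTP projection itself is omitted):

```python
import numpy as np

def wasserstein_1d(u, v, p=2):
    """Closed-form p-Wasserstein distance between two 1-D empirical
    distributions with equal sample counts: sort both, then compare
    order statistics. Cost is O(n log n) from sorting."""
    u = np.sort(np.asarray(u, dtype=float))
    v = np.sort(np.asarray(v, dtype=float))
    return (np.mean(np.abs(u - v) ** p)) ** (1.0 / p)

a = np.array([0.0, 1.0, 2.0])
b = np.array([1.0, 2.0, 3.0])   # a shifted by +1
d = wasserstein_1d(a, b)        # shift by 1 gives distance 1
```

Because each random projection reduces both datasets to such one-dimensional distributions, averaging this quantity over projections avoids the cubic-cost linear programs of general optimal transport.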
2501.18911
|
Integrated Communication and Binary State Detection Under Unequal Error
Constraints
|
cs.IT eess.SP math.IT
|
This work considers a problem of integrated sensing and communication (ISAC)
in which the goal of sensing is to detect a binary state. Unlike most
approaches that minimize the total detection error probability, in our work, we
disaggregate the error probability into false alarm and missed detection
probabilities and investigate their information-theoretic three-way tradeoff
including communication data rate. We consider a broadcast channel that
consists of a transmitter, a communication receiver, and a detector where the
receiver's and the detector's channels are affected by an unknown binary state.
We consider and present results on two different state-dependent models. In the
first setting, the state is fixed throughout the entire transmission, for which
we fully characterize the optimal three-way tradeoff between the coding rate
for communication and the two possibly nonidentical error exponents for sensing
in the asymptotic regime. The achievability and converse proofs rely on the
analysis of the cumulant-generating function of the log-likelihood ratio. In
the second setting, the state changes every symbol in an independently and
identically distributed (i.i.d.) manner, for which we characterize the optimal
tradeoff region based on the analysis of the receiver operating characteristic
(ROC) curves.
|
2501.18912
|
Analyzing Classroom Interaction Data Using Prompt Engineering and
Network Analysis
|
stat.AP cs.SI
|
Classroom interactions play a vital role in developing critical thinking,
collaborative problem-solving abilities, and enhanced learning outcomes. While
analyzing these interactions is crucial for improving educational practices,
the examination of classroom dialogues presents significant challenges due to
the complexity and high-dimensionality of conversational data. This study
presents an integrated framework that combines prompt engineering with network
analysis to investigate classroom interactions comprehensively. Our approach
automates utterance classification through prompt engineering, enabling
efficient and scalable dialogue analysis without requiring pre-labeled
datasets. The classified interactions are subsequently transformed into network
representations, facilitating the analysis of classroom dynamics as structured
social networks. To uncover complex interaction patterns and how underlying
interaction structures relate to student learning, we utilize network mediation
analysis. In this approach, latent interaction structures, derived from the
additive and multiplicative effects network (AMEN) model that places students
within a latent social space, act as mediators. In particular, we investigate
how the gender gap in mathematics performance may be mediated by students'
classroom interaction structures.
|
2501.18913
|
Rethinking Diffusion Posterior Sampling: From Conditional Score
Estimator to Maximizing a Posterior
|
cs.CV
|
Recent advancements in diffusion models have been leveraged to address
inverse problems without additional training, and Diffusion Posterior Sampling
(DPS) (Chung et al., 2022a) is among the most popular approaches. Previous
analyses suggest that DPS accomplishes posterior sampling by approximating the
conditional score. In this paper, however, we demonstrate that the conditional
score approximation employed by DPS is not as effective as previously assumed;
rather, it aligns more closely with the principle of maximizing a posterior
(MAP). This assertion is substantiated through an examination of DPS on 512x512
ImageNet images, revealing that: 1) DPS's conditional score estimation
significantly diverges from the score of a well-trained conditional diffusion
model and is even inferior to the unconditional score; 2) The mean of DPS's
conditional score estimation deviates significantly from zero, rendering it an
invalid score estimation; 3) DPS generates high-quality samples with
significantly lower diversity. In light of the above findings, we posit that
DPS more closely resembles MAP than a conditional score estimator, and
accordingly propose the following enhancements to DPS: 1) we explicitly
maximize the posterior through multi-step gradient ascent and projection; 2) we
utilize a lightweight conditional score estimator trained with only 100
images and 8 GPU hours. Extensive experimental results indicate that these
proposed improvements significantly enhance DPS's performance. The source code
for these improvements is provided in
https://github.com/tongdaxu/Rethinking-Diffusion-Posterior-Sampling-From-Conditional-Score-Estimator-to-Maximizing-a-Posterior.
|
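The "maximize the posterior through multi-step gradient ascent" idea can be illustrated on a toy Gaussian inverse problem, where the MAP point has a closed form to check against. The observation, noise level, and step size below are illustrative assumptions, not the paper's diffusion setting:

```python
# Toy MAP via gradient ascent: prior x ~ N(0, 1), observation y = x + noise
# with noise ~ N(0, sigma2). The posterior is Gaussian, so the MAP point
# has the closed form y / (1 + sigma2) to verify against.
y, sigma2 = 2.0, 0.5

def grad_log_posterior(x):
    # d/dx [ log p(y | x) + log p(x) ] = (y - x) / sigma2 - x
    return (y - x) / sigma2 - x

x = 0.0
for _ in range(500):          # multi-step gradient ascent on the log-posterior
    x += 0.05 * grad_log_posterior(x)

x_map_closed = y / (1 + sigma2)
```

In DPS-style methods the likelihood gradient comes from the measurement model and the prior gradient from the (learned) score, but the ascent loop has this same shape.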
2501.18914
|
Scaling Laws for Differentially Private Language Models
|
cs.LG cs.CR
|
Scaling laws have emerged as important components of large language model
(LLM) training as they can predict performance gains through scale, and provide
guidance on important hyper-parameter choices that would otherwise be
expensive. LLMs also rely on large, high-quality training datasets, like those
sourced from (sometimes sensitive) user data. Training models on this sensitive
user data requires careful privacy protections like differential privacy (DP).
However, the dynamics of DP training are significantly different, and
consequently their scaling laws are not yet fully understood. In this work, we
establish scaling laws that accurately model the intricacies of DP LLM
training, providing a complete picture of the compute-privacy-utility tradeoffs
and the optimal training configurations in many settings.
|
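Scaling laws of this kind are typically obtained by fitting a power law $L(N) = a N^{-b}$ to measured losses, which is linear regression in log-log space. A minimal sketch on synthetic, noise-free data; the exponent 0.076 and coefficient are illustrative assumptions, not the paper's DP scaling law:

```python
import numpy as np

# Hypothetical losses at increasing model sizes, following an ideal power law.
N = np.array([1e6, 1e7, 1e8, 1e9])
loss = 5.0 * N ** -0.076

# Fit log(loss) = log(a) - b * log(N) by least squares.
slope, log_a = np.polyfit(np.log(N), np.log(loss), 1)
a_hat, b_hat = np.exp(log_a), -slope
```

Once fitted, such a law extrapolates the loss to model sizes that were never trained, which is what makes it useful for choosing hyper-parameters before committing compute.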
2501.18915
|
An Invitation to Neuroalgebraic Geometry
|
cs.LG math.AG
|
In this expository work, we promote the study of function spaces
parameterized by machine learning models through the lens of algebraic
geometry. To this end, we focus on algebraic models, such as neural networks
with polynomial activations, whose associated function spaces are
semi-algebraic varieties. We outline a dictionary between algebro-geometric
invariants of these varieties, such as dimension, degree, and singularities,
and fundamental aspects of machine learning, such as sample complexity,
expressivity, training dynamics, and implicit bias. Along the way, we review
the literature and discuss ideas beyond the algebraic domain. This work lays
the foundations of a research direction bridging algebraic geometry and deep
learning, that we refer to as neuroalgebraic geometry.
|
2501.18916
|
LLM Program Optimization via Retrieval Augmented Search
|
cs.LG
|
With the advent of large language models (LLMs), there has been a great deal
of interest in applying them to solve difficult programming tasks. Recent work
has demonstrated their potential at program optimization, a key challenge in
programming languages research. We propose a blackbox adaptation method called
Retrieval Augmented Search (RAS) that performs beam search over candidate
optimizations; at each step, it retrieves in-context examples from a given
training dataset of slow-fast program pairs to guide the LLM. Critically, we
find that performing contextual retrieval based on an LLM-generated natural
language description significantly outperforms retrieval based on the source
code. In addition, we propose a method called AEGIS for improving
interpretability by decomposing training examples into "atomic edits" that are
significantly more incremental in nature. We show that RAS performs 1.8$\times$
better than prior state-of-the-art blackbox adaptation strategies, and that
AEGIS performs 1.37$\times$ better while performing significantly smaller
edits.
|
2501.18919
|
Deepfake Detection of Singing Voices With Whisper Encodings
|
cs.SD cs.AI eess.AS
|
The deepfake generation of singing vocals is a concerning issue for artists
in the music industry. In this work, we propose a singing voice deepfake
detection (SVDD) system, which uses noise-variant encodings of OpenAI's
Whisper model. As counter-intuitive as it may sound, even though the Whisper
model is known to be noise-robust, the encodings are rich in non-speech
information, and are noise-variant. This leads us to evaluate Whisper encodings
as feature representations for the SVDD task. Therefore, in this work, the SVDD
task is performed on vocals and mixtures, and the performance is evaluated in
\%EER over varying Whisper model sizes and two classifiers (CNN and ResNet34),
under different testing conditions.
|
2501.18921
|
Full-scale Representation Guided Network for Retinal Vessel Segmentation
|
eess.IV cs.CV
|
The U-Net architecture and its variants have remained state-of-the-art (SOTA)
for retinal vessel segmentation over the past decade. In this study, we
introduce a Full Scale Guided Network (FSG-Net), where the feature
representation network with modernized convolution blocks extracts full-scale
information and the guided convolution block refines that information.
Attention-guided filter is introduced to the guided convolution block under the
interpretation that the filter behaves like the unsharp mask filter. Passing
full-scale information to the attention block allows for the generation of
improved attention maps, which are then passed to the attention-guided filter,
resulting in performance enhancement of the segmentation network. The structure
preceding the guided convolution block can be replaced by any U-Net variant,
which enhances the scalability of the proposed approach. For a fair comparison,
we re-implemented recent studies available in public repositories to evaluate
their scalability and reproducibility. Our experiments also show that the
proposed network demonstrates competitive results compared to current SOTA
models on various public datasets. Ablation studies demonstrate that the
proposed model is competitive with much smaller parameter sizes. Lastly, by
applying the proposed model to facial wrinkle segmentation, we confirmed the
potential for scalability to similar tasks in other domains. Our code is
available on https://github.com/ZombaSY/FSG-Net-pytorch.
|
2501.18922
|
KBQA-o1: Agentic Knowledge Base Question Answering with Monte Carlo Tree
Search
|
cs.CL cs.AI cs.DB
|
Knowledge Base Question Answering (KBQA) aims to answer natural language
questions with a large-scale structured knowledge base (KB). Despite
advancements with large language models (LLMs), KBQA still faces challenges in
weak KB awareness, imbalance between effectiveness and efficiency, and high
reliance on annotated data. To address these challenges, we propose KBQA-o1, a
novel agentic KBQA method with Monte Carlo Tree Search (MCTS). It introduces a
ReAct-based agent process for stepwise logical form generation with KB
environment exploration. Moreover, it employs MCTS, a heuristic search method
driven by policy and reward models, to balance the performance and search space
of agentic exploration. With heuristic exploration, KBQA-o1 generates
high-quality annotations for further improvement by incremental fine-tuning.
Experimental results show that KBQA-o1 outperforms previous low-resource KBQA
methods with limited annotated data, boosting Llama-3.1-8B model's GrailQA F1
performance to 78.5% compared to 48.5% of the previous SOTA method with
GPT-3.5-turbo.
|
2501.18924
|
Language Games as the Pathway to Artificial Superhuman Intelligence
|
cs.AI cs.CL cs.MA
|
The evolution of large language models (LLMs) toward artificial superhuman
intelligence (ASI) hinges on data reproduction, a cyclical process in which
models generate, curate and retrain on novel data to refine capabilities.
Current methods, however, risk getting stuck in a data reproduction trap:
optimizing outputs within fixed human-generated distributions in a closed loop
leads to stagnation, as models merely recombine existing knowledge rather than
explore new frontiers. In this paper, we propose language games as a pathway to
expanded data reproduction, breaking this cycle through three mechanisms: (1)
\textit{role fluidity}, which enhances data diversity and coverage by enabling
multi-agent systems to dynamically shift roles across tasks; (2) \textit{reward
variety}, embedding multiple feedback criteria that can drive complex
intelligent behaviors; and (3) \textit{rule plasticity}, iteratively evolving
interaction constraints to foster learnability, thereby injecting continual
novelty. By scaling language games into global sociotechnical ecosystems,
human-AI co-evolution generates unbounded data streams that drive open-ended
exploration. This framework redefines data reproduction not as a closed loop
but as an engine for superhuman intelligence.
|
2501.18929
|
Training-free Quantum-Inspired Image Edge Extraction Method
|
cs.CV
|
Edge detection is a cornerstone of image processing, yet existing methods
often face critical limitations. Traditional deep learning edge detection
methods require extensive training datasets and fine-tuning, while classical
techniques often fail in complex or noisy scenarios, limiting their real-world
applicability. To address these limitations, we propose a training-free,
quantum-inspired edge detection model. Our approach integrates classical Sobel
edge detection, the Schr\"odinger wave equation refinement, and a hybrid
framework combining Canny and Laplacian operators. By eliminating the need for
training, the model is lightweight and adaptable to diverse applications. The
Schr\"odinger wave equation refines gradient-based edge maps through iterative
diffusion, significantly enhancing edge precision. The hybrid framework further
strengthens the model by synergistically combining local and global features,
ensuring robustness even under challenging conditions. Extensive evaluations on
datasets like BIPED, Multicue, and NYUD demonstrate superior performance of the
proposed model, achieving state-of-the-art metrics, including ODS, OIS, AP, and
F-measure. Noise robustness experiments highlight its reliability, showcasing
its practicality for real-world scenarios. Due to its versatile and adaptable
nature, our model is well-suited for applications such as medical imaging,
autonomous systems, and environmental monitoring, setting a new benchmark for
edge detection.
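The Sobel-plus-refinement pipeline can be loosely sketched as follows. The diffusion step here is a plain 4-neighbour Laplacian update standing in for the Schr\"odinger-equation refinement, which is not reproduced; all function and kernel names are illustrative.

```python
# Sketch: Sobel gradient magnitude plus a simple iterative diffusion step.
# The diffusion is a generic stand-in (assumption), not the paper's operator.
SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
SOBEL_Y = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

def convolve3(img, kernel):
    # Valid 3x3 convolution; border pixels stay zero.
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y][x] = sum(kernel[j][i] * img[y + j - 1][x + i - 1]
                            for j in range(3) for i in range(3))
    return out

def sobel_magnitude(img):
    gx, gy = convolve3(img, SOBEL_X), convolve3(img, SOBEL_Y)
    return [[(gx[y][x] ** 2 + gy[y][x] ** 2) ** 0.5 for x in range(len(img[0]))]
            for y in range(len(img))]

def diffuse(edges, steps=1, rate=0.1):
    # Explicit diffusion update (4-neighbour Laplacian) per step.
    for _ in range(steps):
        h, w = len(edges), len(edges[0])
        nxt = [row[:] for row in edges]
        for y in range(1, h - 1):
            for x in range(1, w - 1):
                lap = (edges[y - 1][x] + edges[y + 1][x] + edges[y][x - 1]
                       + edges[y][x + 1] - 4 * edges[y][x])
                nxt[y][x] = edges[y][x] + rate * lap
        edges = nxt
    return edges
```

On a synthetic step edge, the Sobel response concentrates at the discontinuity, and the diffusion pass spreads that response to neighbouring pixels, mimicking the "iterative refinement of gradient-based edge maps" idea.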
|
2501.18935
|
TabFSBench: Tabular Benchmark for Feature Shifts in Open Environment
|
cs.LG
|
Tabular data is widely utilized in various machine learning tasks. Current
tabular learning research predominantly focuses on closed environments, while
in real-world applications, open environments are often encountered, where
distribution and feature shifts occur, leading to significant degradation in
model performance. Previous research has primarily concentrated on mitigating
distribution shifts, whereas feature shifts, a distinctive and unexplored
challenge of tabular data, have garnered limited attention. To this end, this
paper conducts the first comprehensive study on feature shifts in tabular data
and introduces the first tabular feature-shift benchmark (TabFSBench).
TabFSBench evaluates the impacts of four distinct feature-shift scenarios on four
tabular model categories across various datasets and assesses the performance
of large language models (LLMs) and tabular LLMs in the tabular benchmark for
the first time. Our study demonstrates three main observations: (1) most
tabular models have limited applicability in feature-shift scenarios; (2)
the importance of the shifted feature set has a linear relationship with model
performance degradation; (3) model performance in closed environments
correlates with feature-shift performance. A future research direction is also
explored for each observation. TabFSBench is released for public access by
using a few lines of Python code at https://github.com/LAMDASZ-ML/TabFSBench.
|
2501.18936
|
Adaptive Prompt: Unlocking the Power of Visual Prompt Tuning
|
cs.LG cs.CV
|
Visual Prompt Tuning (VPT) has recently emerged as a powerful method for
adapting pre-trained vision models to downstream tasks. By introducing
learnable prompt tokens as task-specific instructions, VPT effectively guides
pre-trained transformer models with minimal overhead. Despite its empirical
success, a comprehensive theoretical understanding of VPT remains an active
area of research. Building on recent insights into the connection between
mixture of experts and prompt-based approaches, we identify a key limitation in
VPT: the restricted functional expressiveness in prompt formulation. To address
this limitation, we propose Visual Adaptive Prompt Tuning (VAPT), a new
generation of prompts that redefines prompts as adaptive functions of the
input. Our theoretical analysis shows that this simple yet intuitive approach
achieves optimal sample efficiency. Empirical results on VTAB-1K and FGVC
further demonstrate VAPT's effectiveness, with performance gains of 7.34% and
1.04% over fully fine-tuning baselines, respectively. Notably, VAPT also
surpasses VPT by a substantial margin while using fewer parameters. These
results highlight both the effectiveness and efficiency of our method and pave
the way for future research to explore the potential of adaptive prompts.
|
2501.18940
|
TV-Dialogue: Crafting Theme-Aware Video Dialogues with Immersive
Interaction
|
cs.CV
|
Recent advancements in LLMs have accelerated the development of dialogue
generation across text and images, yet video-based dialogue generation remains
underexplored and presents unique challenges. In this paper, we introduce
Theme-aware Video Dialogue Crafting (TVDC), a novel task aimed at generating
new dialogues that align with video content and adhere to user-specified
themes. We propose TV-Dialogue, a novel multi-modal agent framework that
ensures both theme alignment (i.e., the dialogue revolves around the theme) and
visual consistency (i.e., the dialogue matches the emotions and behaviors of
characters in the video) by enabling real-time immersive interactions among
video characters, thereby accurately understanding the video content and
generating new dialogue that aligns with the given themes. To assess the
generated dialogues, we present a multi-granularity evaluation benchmark with
high accuracy, interpretability and reliability, demonstrating the
effectiveness of TV-Dialogue on a self-collected dataset over directly using
existing LLMs. Extensive experiments reveal that TV-Dialogue can generate
dialogues for videos of any length and any theme in a zero-shot manner without
training. Our findings underscore the potential of TV-Dialogue for various
applications, such as video re-creation, film dubbing and its use in downstream
multimodal tasks.
|
2501.18942
|
Open-Source Autonomous Driving Software Platforms: Comparison of
Autoware and Apollo
|
cs.RO
|
A full-stack autonomous driving system spans diverse technological domains,
including perception, planning, and control, each of which requires in-depth
research. Moreover, validating such a system's technologies necessitates
extensive supporting infrastructure, from simulators and sensors to
high-definition maps. These complexities and barriers to entry pose substantial
limitations for individual developers and research groups. Recently,
open-source autonomous driving software platforms have emerged to address this
challenge by providing autonomous driving technologies and practical supporting
infrastructure for implementing and evaluating autonomous driving
functionalities. Among the prominent open-source platforms, Autoware and Apollo
are frequently adopted in both academia and industry. While previous studies
have assessed each platform independently, few have offered a quantitative and
detailed head-to-head comparison of their capabilities. In this paper, we
systematically examine the core modules of Autoware and Apollo and evaluate
their middleware performance to highlight key differences. These insights serve
as a practical reference for researchers and engineers, guiding them in
selecting the most suitable platform for their specific development
environments and advancing the field of full-stack autonomous driving systems.
|
2501.18943
|
HeLiOS: Heterogeneous LiDAR Place Recognition via Overlap-based Learning
and Local Spherical Transformer
|
cs.RO
|
LiDAR place recognition is a crucial module in localization that matches the
current location with previously observed environments. Most existing
approaches in LiDAR place recognition dominantly focus on the spinning type
LiDAR to exploit its large FOV for matching. However, with the recent emergence
of various LiDAR types, the importance of matching data across different LiDAR
types has grown significantly, a challenge that has been largely overlooked for
many years. To address these challenges, we introduce HeLiOS, a deep network
tailored for heterogeneous LiDAR place recognition, which utilizes small local
windows with spherical transformers and optimal transport-based cluster
assignment for robust global descriptors. Our overlap-based data mining and
guided-triplet loss overcome the limitations of traditional distance-based
mining and discrete class constraints. HeLiOS is validated on public datasets,
demonstrating its performance in heterogeneous LiDAR place recognition while
including an evaluation for long-term recognition, showcasing its ability to
handle unseen LiDAR types. We release the HeLiOS code as an open source for the
robotics community at https://github.com/minwoo0611/HeLiOS.
|
2501.18944
|
O-MAPL: Offline Multi-agent Preference Learning
|
cs.LG cs.MA
|
Inferring reward functions from demonstrations is a key challenge in
reinforcement learning (RL), particularly in multi-agent RL (MARL), where large
joint state-action spaces and complex inter-agent interactions complicate the
task. While prior single-agent studies have explored recovering reward
functions and policies from human preferences, similar work in MARL is limited.
Existing methods often involve separate stages of supervised reward learning
and MARL algorithms, leading to unstable training. In this work, we introduce a
novel end-to-end preference-based learning framework for cooperative MARL,
leveraging the underlying connection between reward functions and soft
Q-functions. Our approach uses a carefully-designed multi-agent value
decomposition strategy to improve training efficiency. Extensive experiments on
SMAC and MAMuJoCo benchmarks show that our algorithm outperforms existing
methods across various tasks.
|
2501.18945
|
Solving Inverse Problem for Multi-armed Bandits via Convex Optimization
|
cs.CE cs.LG math.OC q-bio.NC
|
We consider the inverse problem of multi-armed bandits (IMAB), which are widely
used in neuroscience and psychology research for behavior modelling. We first
show that the IMAB problem is not convex in general, but can be relaxed to a
convex problem via variable transformation. Based on this result, we propose a
two-step sequential heuristic for (approximately) solving the IMAB problem. We
discuss a condition under which our method provides a global solution to the
IMAB problem with a certificate, as well as approximations to further save
computing
time. Numerical experiments indicate that our heuristic method is more robust
than directly solving the IMAB problem via repeated local optimization, and can
achieve the performance of Monte Carlo methods within a significantly decreased
running time. We provide the implementation of our method based on CVXPY, which
allows straightforward application by users not well versed in convex
optimization.
|
2501.18950
|
Fantastic Targets for Concept Erasure in Diffusion Models and Where To
Find Them
|
cs.LG cs.AI cs.CV
|
Concept erasure has emerged as a promising technique for mitigating the risk
of harmful content generation in diffusion models by selectively unlearning
undesirable concepts. The common principle of previous works to remove a
specific concept is to map it to a fixed generic concept, such as a neutral
concept or just an empty text prompt. In this paper, we demonstrate that this
fixed-target strategy is suboptimal, as it fails to account for the impact of
erasing one concept on the others. To address this limitation, we model the
concept space as a graph and empirically analyze the effects of erasing one
concept on the remaining concepts. Our analysis uncovers intriguing geometric
properties of the concept space, where the influence of erasing a concept is
confined to a local region. Building on this insight, we propose the Adaptive
Guided Erasure (AGE) method, which \emph{dynamically} selects optimal target
concepts tailored to each undesirable concept, minimizing unintended side
effects. Experimental results show that AGE significantly outperforms
state-of-the-art erasure methods on preserving unrelated concepts while
maintaining effective erasure performance. Our code is published at
{https://github.com/tuananhbui89/Adaptive-Guided-Erasure}.
|
2501.18954
|
LLMDet: Learning Strong Open-Vocabulary Object Detectors under the
Supervision of Large Language Models
|
cs.CV
|
Recent open-vocabulary detectors achieve promising performance with abundant
region-level annotated data. In this work, we show that an open-vocabulary
detector co-training with a large language model by generating image-level
detailed captions for each image can further improve performance. To achieve
the goal, we first collect a dataset, GroundingCap-1M, wherein each image is
accompanied by associated grounding labels and an image-level detailed caption.
With this dataset, we finetune an open-vocabulary detector with training
objectives including a standard grounding loss and a caption generation loss.
We take advantage of a large language model to generate both region-level short
captions for each region of interest and image-level long captions for the
whole image. Under the supervision of the large language model, the resulting
detector, LLMDet, outperforms the baseline by a clear margin, enjoying superior
open-vocabulary ability. Further, we show that the improved LLMDet can in turn
build a stronger large multi-modal model, achieving mutual benefits. The code,
model, and dataset are available at https://github.com/iSEE-Laboratory/LLMDet.
|
2501.18955
|
Deep Learning based Quasi-consciousness Training for Robot Intelligent
Model
|
cs.RO cs.AI
|
This paper explores a deep learning based robot intelligent model that enables
robots to learn and reason about complex tasks. First, a network of
environmental factor matrices is constructed to stimulate the learning process
of the robot intelligent model; the model parameters are subjected to coarse
and fine tuning to optimize the loss function and minimize the loss score.
Meanwhile, the robot intelligent model can fuse all previously known concepts
together to represent things never experienced before, which requires that the
model generalize extensively. Secondly, in order to progressively develop a
robot intelligent model with primary consciousness, every robot must undergo at
least one to three years of special schooling to train anthropomorphic
behaviour patterns, so that it can understand and process complex environmental
information and make rational decisions. This work explores and delivers the
potential application of deep learning-based quasi-consciousness training in
the field of robot intelligent models.
|
2501.18956
|
Differentiable Simulation of Soft Robots with Frictional Contacts
|
cs.RO
|
In recent years, soft robotics simulators have evolved to offer various
functionalities, including the simulation of different material types (e.g.,
elastic, hyper-elastic) and actuation methods (e.g., pneumatic, cable-driven,
servomotor). These simulators also provide tools for various tasks, such as
calibration, design, and control. However, efficiently and accurately computing
derivatives within these simulators remains a challenge, particularly in the
presence of physical contact interactions. Incorporating these derivatives can,
for instance, significantly improve the convergence speed of control methods
like reinforcement learning and trajectory optimization, enable gradient-based
techniques for design, or facilitate end-to-end machine-learning approaches for
model reduction. This paper addresses these challenges by introducing a unified
method for computing the derivatives of mechanical equations within the finite
element method framework, including contact interactions modeled as a nonlinear
complementarity problem. The proposed approach handles both collision and
friction phases, accounts for their nonsmooth dynamics, and leverages the
sparsity introduced by mesh-based models. Its effectiveness is demonstrated
through several examples of controlling and calibrating soft systems.
|
2501.18957
|
Intrinsic Tensor Field Propagation in Large Language Models: A Novel
Approach to Contextual Information Flow
|
cs.CL
|
Context propagation remains a central challenge in language model
architectures, particularly in tasks requiring the retention of long-range
dependencies. Conventional attention mechanisms, while effective in many
applications, exhibit limitations in maintaining coherent contextual
representations over extended sequences due to their reliance on discrete token
interactions. A novel approach is introduced through the formulation of
Intrinsic Tensor Field Propagation (ITFP), which models contextual
relationships as continuous tensor fields distributed across token embeddings.
The propagation dynamics are governed through differential equations that
enable a structured flow of contextual information, augmenting the standard
attention mechanism to enhance coherence and recall. A series of experiments
conducted on an open-source transformer-based model demonstrate that ITFP
provides measurable improvements in contextual retention, dependency
resolution, and inference stability across various linguistic structures.
Comparisons with baseline models reveal a reduction in syntactic
inconsistencies and factual errors, while ablation studies indicate that the
choice of propagation depth and integration strength significantly impacts
model performance. Additional evaluations assessing domain generalization
suggest that ITFP effectively adapts across different text genres, reinforcing
its applicability beyond conventional language modeling tasks. Although
computational trade-offs are introduced through the inclusion of tensor field
computations, empirical findings suggest that the benefits in accuracy and
coherence outweigh the increased processing demands.
|
2501.18959
|
Enhancing Neural Function Approximation: The XNet Outperforming KAN
|
cs.LG cs.AI
|
XNet is a single-layer neural network architecture that leverages Cauchy
integral-based activation functions for high-order function approximation.
Through theoretical analysis, we show that the Cauchy activation functions used
in XNet can achieve arbitrary-order polynomial convergence, fundamentally
outperforming traditional MLPs and Kolmogorov-Arnold Networks (KANs) that rely
on increased depth or B-spline activations. Our extensive experiments on
function approximation, PDE solving, and reinforcement learning demonstrate
XNet's superior performance - reducing approximation error by up to 50000 times
and accelerating training by up to 10 times compared to existing approaches.
These results establish XNet as a highly efficient architecture for both
scientific computing and AI applications.
|
2501.18962
|
Spend Wisely: Maximizing Post-Training Gains in Iterative Synthetic Data
Bootstrapping
|
cs.LG
|
Modern foundation models often undergo iterative ``bootstrapping'' in their
post-training phase: a model generates synthetic data, an external verifier
filters out low-quality samples, and the high-quality subset is used for
further fine-tuning. Over multiple iterations, the model's performance
improves--raising a crucial question: how should the total budget on generation
and training be allocated across iterations to maximize final performance? In
this work, we develop a theoretical framework to analyze budget allocation
strategies. Specifically, we show that constant policies fail to converge with
high probability, while increasing policies--particularly exponential growth
policies--exhibit significant theoretical advantages. Experiments on image
denoising with diffusion probabilistic models and math reasoning with large
language models show that both exponential and polynomial growth policies
consistently outperform constant policies, with exponential policies often
providing more stable performance.
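The constant versus exponential-growth allocation policies compared above can be sketched as follows; the normalization of each policy to a fixed total budget is an illustrative assumption, not the paper's exact formulation.

```python
# Sketch of budget-allocation policies for iterative bootstrapping.
# Policy shapes follow the abstract's terminology; normalization is assumed.

def constant_policy(total_budget, iterations):
    # Same budget at every iteration.
    per_step = total_budget / iterations
    return [per_step] * iterations

def exponential_policy(total_budget, iterations, ratio=2.0):
    # Budget grows geometrically: b_t proportional to ratio**t,
    # scaled so the allocations sum to the total budget.
    raw = [ratio ** t for t in range(iterations)]
    scale = total_budget / sum(raw)
    return [r * scale for r in raw]
```

Under the abstract's analysis, the exponential shape (later iterations receive a constant multiple of earlier ones) is the family with favorable convergence guarantees, while the constant shape fails to converge with high probability.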
|
2501.18963
|
Optimizing Through Change: Bounds and Recommendations for Time-Varying
Bayesian Optimization Algorithms
|
stat.ML cs.LG
|
Time-Varying Bayesian Optimization (TVBO) is the go-to framework for
optimizing a time-varying, expensive, noisy black-box function. However, most
of the solutions proposed so far either rely on unrealistic assumptions on the
nature of the objective function or do not offer any theoretical guarantees. We
propose the first analysis that asymptotically bounds the cumulative regret of
TVBO algorithms under mild and realistic assumptions only. In particular, we
provide an algorithm-independent lower regret bound and an upper regret bound
that holds for a large class of TVBO algorithms. Based on this analysis, we
formulate recommendations for TVBO algorithms and show how an algorithm (BOLT)
that follows them performs better than the state-of-the-art of TVBO through
experiments on synthetic and real-world problems.
|
2501.18965
|
The Surprising Agreement Between Convex Optimization Theory and
Learning-Rate Scheduling for Large Model Training
|
cs.LG math.OC stat.ML
|
We show that learning-rate schedules for large model training behave
surprisingly similarly to a performance bound from non-smooth convex optimization
theory. We provide a bound for the constant schedule with linear cooldown; in
particular, the practical benefit of cooldown is reflected in the bound due to
the absence of logarithmic terms. Further, we show that this surprisingly close
match between optimization theory and practice can be exploited for
learning-rate tuning: we achieve noticeable improvements for training 124M and
210M Llama-type models by (i) extending the schedule for continued training
with optimal learning-rate, and (ii) transferring the optimal learning-rate
across schedules.
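A minimal sketch of the constant schedule with linear cooldown discussed above; the 20% cooldown fraction is an illustrative choice, not a value taken from the paper.

```python
# Sketch: constant learning rate followed by a linear cooldown to zero.
# The cooldown fraction is an assumption for illustration.
def lr_schedule(step, total_steps, base_lr, cooldown_frac=0.2):
    cooldown_start = int(total_steps * (1 - cooldown_frac))
    if step < cooldown_start:
        return base_lr
    # Linear decay from base_lr down to 0 over the cooldown window.
    remaining = total_steps - step
    window = total_steps - cooldown_start
    return base_lr * remaining / window
```

The two tuning ideas from the abstract map directly onto this function: (i) continued training extends `total_steps` while keeping the constant phase, and (ii) the tuned `base_lr` is transferred across different schedule lengths.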
|
2501.18972
|
BCAT: A Block Causal Transformer for PDE Foundation Models for Fluid
Dynamics
|
cs.LG cs.NA math.NA
|
We introduce BCAT, a PDE foundation model designed for autoregressive
prediction of solutions to two dimensional fluid dynamics problems. Our
approach uses a block causal transformer architecture to model next frame
predictions, leveraging previous frames as contextual priors rather than
relying solely on sub-frames or pixel-based inputs commonly used in image
generation methods. This block causal framework more effectively captures the
spatial dependencies inherent in nonlinear spatiotemporal dynamics and physical
phenomena. In an ablation study, next frame prediction demonstrated a 2.9x
accuracy improvement over next token prediction. BCAT is trained on a diverse
range of fluid dynamics datasets, including incompressible and compressible
Navier-Stokes equations across various geometries and parameter regimes, as
well as the shallow-water equations. The model's performance was evaluated on 6
distinct downstream prediction tasks and tested on about 8K trajectories to
measure robustness on a variety of fluid dynamics simulations. BCAT achieved an
average relative error of 1.92% across all evaluation tasks, outperforming
prior approaches on standard benchmarks.
|
2501.18973
|
GPO-VAE: Modeling Explainable Gene Perturbation Responses utilizing
GRN-Aligned Parameter Optimization
|
cs.LG cs.AI
|
Motivation: Predicting cellular responses to genetic perturbations is
essential for understanding biological systems and developing targeted
therapeutic strategies. While variational autoencoders (VAEs) have shown
promise in modeling perturbation responses, their limited explainability poses
a significant challenge, as the learned features often lack clear biological
meaning. Nevertheless, model explainability is one of the most important
aspects in the realm of biological AI. One of the most effective ways to
achieve explainability is incorporating the concept of gene regulatory networks
(GRNs) in designing deep learning models such as VAEs. GRNs elicit the
underlying causal relationships between genes and are capable of explaining the
transcriptional responses caused by genetic perturbation treatments. Results:
We propose GPO-VAE, an explainable VAE enhanced by GRN-aligned Parameter
Optimization that explicitly models gene regulatory networks in the latent
space. Our key approach is to optimize the learnable parameters related to
latent perturbation effects towards GRN-aligned explainability. Experimental
results on perturbation prediction show our model achieves state-of-the-art
performance in predicting transcriptional responses across multiple benchmark
datasets. Furthermore, additional results on evaluating the GRN inference task
reveal our model's ability to generate meaningful GRNs compared to other
methods. According to qualitative analysis, GPO-VAE possesses the ability to
construct biologically explainable GRNs that align with experimentally
validated regulatory pathways. GPO-VAE is available at
https://github.com/dmis-lab/GPO-VAE
|
2501.18975
|
Meta-learning of shared linear representations beyond well-specified
linear regression
|
cs.LG stat.ML
|
Motivated by multi-task and meta-learning approaches, we consider the problem
of learning structure shared by tasks or users, such as shared low-rank
representations or clustered structures. While all previous works focus on
well-specified linear regression, we consider more general convex objectives,
where the structural low-rank and cluster assumptions are expressed on the
optima of each function. We show that under mild assumptions such as
\textit{Hessian concentration} and \textit{noise concentration at the optimum},
rank and clustered regularized estimators recover such structure, provided the
number of samples per task and the number of tasks are large enough. We then
study the problem of recovering the subspace in which all the solutions lie, in
the setting where there is only a single sample per task: we show that in that
case, the rank-constrained estimator can recover the subspace, but that the
number of tasks needs to scale exponentially with the dimension of the
subspace. Finally, we provide a polynomial-time algorithm via nuclear norm
constraints for learning a shared linear representation in the context of
convex learning objectives.
|
2501.18977
|
Blocked Bloom Filters with Choices
|
cs.DB cs.DS
|
Probabilistic filters are approximate set membership data structures that
represent a set of keys in small space, and answer set membership queries
without false negative answers, but with a certain allowed false positive
probability. Such filters are widely used in database systems, networks,
storage systems and in biological sequence analysis because of their fast query
times and low space requirements. Starting with Bloom filters in the 1970s,
many filter data structures have been developed, each with its own advantages
and disadvantages, e.g., Blocked Bloom filters, Cuckoo filters, XOR filters,
Ribbon filters, and more.
We introduce Blocked Bloom filters with choices that work similarly to
Blocked Bloom filters, except that for each key there are two (or more)
alternative choices of blocks where the key's information may be stored. The
result is a filter that partially inherits the advantages of a Blocked Bloom
filter, such as the ability to insert keys rapidly online or the ability to
slightly overload the filter with only a small penalty to the false positive
rate. At the same time, it avoids the major disadvantage of a Blocked Bloom
filter, namely the larger space consumption. Our new data structure uses less
space at the same false positive rate, or has a lower false positive rate at
the same space consumption as a Blocked Bloom filter. We discuss the
methodology, engineered implementation, a detailed performance evaluation and
use cases in bioinformatics of Blocked Bloom filters with choices, showing that
they can be of practical value.
The implementation of the evaluated filters and the workflows used are
provided via Gitlab at https://gitlab.com/rahmannlab/blowchoc-filters.
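A simplified sketch of the two-choice idea: insertion places a key's bits in the less loaded of its two candidate blocks, and a query must check both blocks since it cannot know which one was chosen. Block sizing, hashing, and the load heuristic below are simplifications, not the engineered implementation from the paper.

```python
# Sketch of a Blocked Bloom filter with two block choices.
# Parameters and the load heuristic are illustrative assumptions.
import hashlib

class TwoChoiceBlockedBloom:
    def __init__(self, num_blocks=64, block_bits=512, k=4):
        self.num_blocks, self.block_bits, self.k = num_blocks, block_bits, k
        self.blocks = [0] * num_blocks  # each block is an integer bitmask

    def _hashes(self, key):
        # Derive two candidate blocks and k bit positions from one hash.
        v = int.from_bytes(
            hashlib.blake2b(key.encode(), digest_size=16).digest(), "big")
        b1 = v % self.num_blocks
        b2 = (v >> 16) % self.num_blocks
        bits = [(v >> (32 + 9 * i)) % self.block_bits for i in range(self.k)]
        return b1, b2, bits

    def insert(self, key):
        b1, b2, bits = self._hashes(key)
        mask = 0
        for b in bits:
            mask |= 1 << b
        # Two choices: store the key's bits in the less loaded block.
        if bin(self.blocks[b1]).count("1") <= bin(self.blocks[b2]).count("1"):
            self.blocks[b1] |= mask
        else:
            self.blocks[b2] |= mask

    def __contains__(self, key):
        b1, b2, bits = self._hashes(key)
        mask = 0
        for b in bits:
            mask |= 1 << b
        # The key may be in either candidate block, so check both.
        return (self.blocks[b1] & mask) == mask or \
               (self.blocks[b2] & mask) == mask
```

As in any Bloom-style filter, false negatives are impossible by construction, while checking two blocks per query roughly doubles the false-positive exposure; the space savings from better load balancing are what make the trade worthwhile.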
|
2501.18980
|
Symmetric Pruning of Large Language Models
|
cs.LG cs.AI
|
Popular post-training pruning methods such as Wanda and RIA are known for
their simple, yet effective, designs that have shown exceptional empirical
performance. Wanda optimizes performance through calibrated activations during
pruning, while RIA emphasizes the relative, rather than absolute, importance of
weight elements. Despite their practical success, a thorough theoretical
foundation explaining these outcomes has been lacking. This paper introduces
new theoretical insights that redefine the standard minimization objective for
pruning, offering a deeper understanding of the factors contributing to their
success. Our study extends beyond these insights by proposing complementary
strategies that consider both input activations and weight significance. We
validate these approaches through rigorous experiments, demonstrating
substantial enhancements over existing methods. Furthermore, we introduce a
novel training-free fine-tuning approach $R^2$-DSnoT that incorporates relative
weight importance and a regularized decision boundary within a dynamic
pruning-and-growing framework, significantly outperforming strong baselines and
establishing a new state of the art.
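For reference, the Wanda-style importance score that this line of work builds on can be sketched in a few lines of NumPy (an illustrative per-row magnitude pruner, not the paper's $R^2$-DSnoT method):

```python
import numpy as np

def wanda_prune(W, X, sparsity=0.5):
    """Zero out the lowest-importance weights per output row using the
    Wanda score |W_ij| * ||X_j||_2: weight magnitude scaled by the
    calibration-activation norm of the corresponding input feature."""
    act_norm = np.linalg.norm(X, axis=0)        # (in_features,)
    score = np.abs(W) * act_norm[None, :]       # same shape as W
    k = int(W.shape[1] * sparsity)              # weights to drop per row
    pruned = W.copy()
    drop = np.argsort(score, axis=1)[:, :k]     # lowest-scoring columns
    np.put_along_axis(pruned, drop, 0.0, axis=1)
    return pruned
```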
|
2501.18982
|
OmniPhysGS: 3D Constitutive Gaussians for General Physics-Based Dynamics
Generation
|
cs.CV
|
Recently, significant advancements have been made in the reconstruction and
generation of 3D assets, including static cases and those with physical
interactions. To recover the physical properties of 3D assets, existing methods
typically assume that all materials belong to a specific predefined category
(e.g., elasticity). However, such assumptions ignore the complex composition of
multiple heterogeneous objects in real scenarios and tend to render less
physically plausible animation given a wider range of objects. We propose
OmniPhysGS for synthesizing a physics-based 3D dynamic scene composed of more
general objects. A key design of OmniPhysGS is treating each 3D asset as a
collection of constitutive 3D Gaussians. For each Gaussian, its physical
material is represented by an ensemble of 12 physical domain-expert sub-models
(rubber, metal, honey, water, etc.), which greatly enhances the flexibility of
the proposed model. In the implementation, we define a scene by user-specified
prompts and supervise the estimation of material weighting factors via a
pretrained video diffusion model. Comprehensive experiments demonstrate that
OmniPhysGS achieves more general and realistic physical dynamics across a
broader spectrum of materials, including elastic, viscoelastic, plastic, and
fluid substances, as well as interactions between different materials. Our
method surpasses existing methods by approximately 3% to 16% in metrics of
visual quality and text alignment.
|
2501.18984
|
Context Matters: Query-aware Dynamic Long Sequence Modeling of Gigapixel
Images
|
cs.CV
|
Whole slide image (WSI) analysis presents significant computational
challenges due to the massive number of patches in gigapixel images. While
transformer architectures excel at modeling long-range correlations through
self-attention, their quadratic computational complexity makes them impractical
for computational pathology applications. Existing solutions like local-global
or linear self-attention reduce computational costs but compromise the strong
modeling capabilities of full self-attention. In this work, we propose Querent,
a query-aware long-context dynamic modeling framework, which
maintains the expressive power of full self-attention while achieving practical
efficiency. Our method adaptively predicts which surrounding regions are most
relevant for each patch, enabling focused yet unrestricted attention
computation only with potentially important contexts. By using efficient
region-wise metadata computation and importance estimation, our approach
dramatically reduces computational overhead while preserving global perception
to model fine-grained patch correlations. Through comprehensive experiments on
biomarker prediction, gene mutation prediction, cancer subtyping, and survival
analysis across over 10 WSI datasets, our method demonstrates superior
performance compared to the state-of-the-art approaches. Code will be made
available at https://github.com/dddavid4real/Querent.
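The core pattern (cheap region-level importance estimation followed by full attention restricted to the selected regions) can be sketched as follows; region summaries via mean keys and the top-r selection rule are illustrative assumptions, not Querent's actual design:

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def query_aware_region_attention(q, K, V, region_ids, top_r=2):
    """For one query vector q: score each region by the query's affinity
    to the region's mean key (cheap metadata), then run full attention
    only over the keys in the top_r most relevant regions."""
    regions = np.unique(region_ids)
    centroids = np.stack([K[region_ids == r].mean(axis=0) for r in regions])
    importance = centroids @ q                     # coarse region scores
    keep = regions[np.argsort(importance)[-top_r:]]
    sel = np.isin(region_ids, keep)
    w = softmax(K[sel] @ q / np.sqrt(q.shape[0]))  # fine-grained attention
    return w @ V[sel]
```

When top_r covers all regions this reduces exactly to full attention; with small top_r the cost scales with the selected context rather than the whole slide.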
|
2501.18989
|
Extension of Optimal Locally Repairable codes
|
cs.IT math.IT
|
Recent studies have delved into the construction of locally repairable codes
(LRCs) with optimal minimum distance from function fields. In this paper, we
present several novel constructions by extending the findings of optimally
designed locally repairable codes documented in the literature. Let $C$ denote
an optimal LRC of locality $r$, implying that every repairable block of $C$ is
a $[r+1, r]$ MDS code, and $C$ maximizes its minimum distance. By extending a
single coordinate of one of these blocks, we demonstrate that the resulting
code remains an optimally designed locally repairable code. This suggests that
the maximal length of an optimal LRC from rational function fields can be
extended up to $q+2$ over a finite field $\mathbb{F}_q$. In addition, we give a
new construction of optimal $(r, 3)$-LRC by extending one coordinate in each
block within $C$. Furthermore, we propose a novel family of LRCs with
Roth-Lempel type that are optimal under certain conditions. Finally, we explore
optimal LRCs derived from elliptic function fields and extend a single
coordinate of such codes. This approach leads us to confirm that the new codes
are also optimal, thereby allowing their lengths to reach $q + 2\sqrt{q} - 2r -
2$ with locality $r$. We also consider the construction of optimal $(r,
3)$-LRCs in elliptic function fields by exploring one additional condition.
|
2501.18990
|
Permutation-Based Rank Test in the Presence of Discretization and
Application in Causal Discovery with Mixed Data
|
cs.LG
|
Recent advances have shown that statistical tests for the rank of
cross-covariance matrices play an important role in causal discovery. These
rank tests include partial correlation tests as special cases and provide
further graphical information about latent variables. Existing rank tests
typically assume that all the continuous variables can be perfectly measured,
and yet, in practice many variables can only be measured after discretization.
For example, in psychometric studies, the continuous level of certain
personality dimensions of a person can only be measured after being discretized
into order-preserving options such as disagree, neutral, and agree. Motivated
by this, we propose Mixed data Permutation-based Rank Test (MPRT), which
properly controls the statistical errors even when some or all variables are
discretized. Theoretically, we establish the exchangeability and estimate the
asymptotic null distribution by permutations; as a consequence, MPRT can
effectively control the Type I error in the presence of discretization while
previous methods cannot. Empirically, our method is validated by extensive
experiments on synthetic data and real-world data to demonstrate its
effectiveness as well as applicability in causal discovery.
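For intuition, the simplest instance of a permutation-based rank test (the rank-0, i.e. no-linear-dependence, hypothesis, where row permutation is exactly exchangeable) can be sketched as below; MPRT's additional machinery for handling discretized variables and general rank hypotheses is omitted:

```python
import numpy as np

def perm_rank0_test(X, Y, n_perm=200, seed=0):
    """Permutation p-value for H0: rank(cov(X, Y)) = 0, using the largest
    singular value of the empirical cross-covariance as the statistic.
    Permuting the rows of Y is exchangeable under H0."""
    rng = np.random.default_rng(seed)
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean(axis=0)

    def stat(Yp):
        C = Xc.T @ Yp / len(Xc)
        return np.linalg.svd(C, compute_uv=False)[0]

    observed = stat(Yc)
    null = np.array([stat(Yc[rng.permutation(len(Yc))])
                     for _ in range(n_perm)])
    return (1 + np.sum(null >= observed)) / (1 + n_perm)
```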
|
2501.18991
|
Optimal Transport-based Conformal Prediction
|
stat.ML cs.LG
|
Conformal Prediction (CP) is a principled framework for quantifying
uncertainty in blackbox learning models, by constructing prediction sets with
finite-sample coverage guarantees. Traditional approaches rely on scalar
nonconformity scores, which fail to fully exploit the geometric structure of
multivariate outputs, such as in multi-output regression or multiclass
classification. Recent methods addressing this limitation impose predefined
convex shapes for the prediction sets, potentially misaligning with the
intrinsic data geometry. We introduce a novel CP procedure handling
multivariate score functions through the lens of optimal transport.
Specifically, we leverage Monge-Kantorovich vector ranks and quantiles to
construct prediction regions with flexible, potentially non-convex shapes,
better suited to the complex uncertainty patterns encountered in multivariate
learning tasks. We prove that our approach ensures finite-sample,
distribution-free coverage properties, similar to typical CP methods. We then
adapt our method for multi-output regression and multiclass classification, and
also propose simple adjustments to generate adaptive prediction regions with
asymptotic conditional coverage guarantees. Finally, we evaluate our method on
practical regression and classification problems, illustrating its advantages
in terms of (conditional) coverage and efficiency.
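For contrast, the classic scalar-score split CP baseline that this work generalizes to multivariate scores can be sketched in a few lines (illustrative absolute-residual scores, not the optimal-transport construction):

```python
import numpy as np

def split_conformal_interval(resid_cal, y_pred, alpha=0.1):
    """Split conformal prediction with scalar scores: the (1-alpha)
    finite-sample quantile of the calibration residuals |y - yhat|
    yields intervals with marginal coverage >= 1 - alpha."""
    n = len(resid_cal)
    q_level = np.ceil((n + 1) * (1 - alpha)) / n
    q = np.quantile(resid_cal, min(q_level, 1.0), method="higher")
    return y_pred - q, y_pred + q
```

The limitation the abstract targets is visible here: the interval is a fixed symmetric band around the prediction, which cannot adapt to the geometry of multivariate outputs.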
|
2501.18993
|
Visual Autoregressive Modeling for Image Super-Resolution
|
cs.CV
|
Image Super-Resolution (ISR) has seen significant progress with the
introduction of remarkable generative models. However, challenges such as the
trade-off issues between fidelity and realism, as well as computational
complexity, have also posed limitations on their application. Building upon the
tremendous success of autoregressive models in the language domain, we propose
\textbf{VARSR}, a novel visual autoregressive modeling framework for ISR based
on next-scale prediction. To effectively integrate and preserve
semantic information in low-resolution images, we propose using prefix tokens
to incorporate the condition. Scale-aligned Rotary Positional Encodings are
introduced to capture spatial structures and the diffusion refiner is utilized
for modeling quantization residual loss to achieve pixel-level fidelity.
Image-based Classifier-free Guidance is proposed to guide the generation of
more realistic images. Furthermore, we collect large-scale data and design a
training process to obtain robust generative priors. Quantitative and
qualitative results show that VARSR is capable of generating high-fidelity and
high-realism images with more efficiency than diffusion-based methods. Our
codes will be released at https://github.com/qyp2000/VARSR.
|
2501.18994
|
VKFPos: A Learning-Based Monocular Positioning with Variational Bayesian
Extended Kalman Filter Integration
|
cs.CV cs.AI
|
This paper addresses the challenges in learning-based monocular positioning
by proposing VKFPos, a novel approach that integrates Absolute Pose Regression
(APR) and Relative Pose Regression (RPR) via an Extended Kalman Filter (EKF)
within a variational Bayesian inference framework. Our method shows that the
essential posterior probability of the monocular positioning problem can be
decomposed into APR and RPR components. This decomposition is embedded in the
deep learning model by predicting covariances in both APR and RPR branches,
allowing them to account for associated uncertainties. These covariances
enhance the loss functions and facilitate EKF integration. Experimental
evaluations on both indoor and outdoor datasets show that the single-shot APR
branch achieves accuracy on par with state-of-the-art methods. Furthermore, for
temporal positioning, where consecutive images allow for RPR and EKF
integration, VKFPos outperforms temporal APR and model-based integration
methods, achieving superior accuracy.
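The fusion step can be illustrated with a 1D linear toy (where the EKF reduces to a plain Kalman filter): the relative-pose increment with its covariance drives the prediction, and the absolute-pose estimate with its covariance drives the update. VKFPos runs this on 6-DoF poses with learned covariances; the numbers below are purely illustrative:

```python
def ekf_step(x, P, delta, Q, z_abs, R):
    """One predict/update cycle fusing a relative-pose increment
    (delta, with covariance Q) and an absolute-pose measurement
    (z_abs, with covariance R), for a scalar state."""
    # predict with the relative (RPR-style) estimate
    x_pred = x + delta
    P_pred = P + Q
    # update with the absolute (APR-style) estimate
    K = P_pred / (P_pred + R)          # Kalman gain
    x_new = x_pred + K * (z_abs - x_pred)
    P_new = (1 - K) * P_pred
    return x_new, P_new
```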
|
2501.18997
|
Collaborative Diffusion Model for Recommender System
|
cs.IR
|
Diffusion-based recommender systems (DR) have gained increasing attention for
their advanced generative and denoising capabilities. However, existing DRs face
two central limitations: (i) a trade-off between enhancing generative capacity
via noise injection and preserving personalized information, and (ii) the
underutilization of rich item-side information. To address these
challenges, we present a Collaborative Diffusion model for Recommender System
(CDiff4Rec). Specifically, CDiff4Rec generates pseudo-users from item features
and leverages collaborative signals from both real and pseudo personalized
neighbors identified through behavioral similarity, thereby effectively
reconstructing nuanced user preferences. Experimental results on three public
datasets show that CDiff4Rec outperforms competitors by effectively mitigating
the loss of personalized information through the integration of item content
and collaborative signals.
|
2501.18998
|
Adversarial Attacks on AI-Generated Text Detection Models: A Token
Probability-Based Approach Using Embeddings
|
cs.CL cs.AI cs.LG
|
In recent years, text generation tools utilizing Artificial Intelligence (AI)
have occasionally been misused across various domains, such as generating
student reports or creative writings. This issue prompts plagiarism detection
services to enhance their capabilities in identifying AI-generated content.
Adversarial attacks are often used to test the robustness of AI-generated text
detectors. This work proposes a novel textual adversarial attack on detection
models such as Fast-DetectGPT. The method employs embedding models for data
perturbation, aiming to reconstruct AI-generated texts so as to reduce the
likelihood that their true origin is detected. Specifically, we employ
different embedding techniques for this purpose, including the Tsetlin Machine
(TM), an interpretable machine learning approach. By combining synonyms and
embedding similarity vectors, we demonstrate a state-of-the-art reduction in
detection scores against Fast-DetectGPT.
Particularly, in the XSum dataset, the detection score decreased from 0.4431 to
0.2744 AUROC, and in the SQuAD dataset, it dropped from 0.5068 to 0.3532 AUROC.
|
2501.19001
|
Quantum SMOTE with Angular Outliers: Redefining Minority Class Handling
|
quant-ph cs.LG
|
This paper introduces Quantum-SMOTEV2, an advanced variant of the
Quantum-SMOTE method, leveraging quantum computing to address class imbalance
in machine learning datasets without K-Means clustering. Quantum-SMOTEV2
synthesizes data samples using swap tests and quantum rotation centered around
a single data centroid, concentrating on the angular distribution of minority
data points and the concept of angular outliers (AOL). Experimental results
show significant enhancements in model performance metrics at moderate SMOTE
levels (30-36%), which previously required up to 50% with the original method.
Quantum-SMOTEV2 maintains essential features of its predecessor
(arXiv:2402.17398), such as rotation angle, minority percentage, and splitting
factor, allowing for tailored adaptation to specific dataset needs. The method
is scalable, utilizing compact swap tests and low-depth quantum circuits to
accommodate a large number of features. Evaluation on the public Cell-to-Cell
Telecom dataset with Random Forest (RF), K-Nearest Neighbours (KNN) Classifier,
and Neural Network (NN) illustrates that integrating Angular Outliers modestly
boosts classification metrics like accuracy, F1 Score, AUC-ROC, and AUC-PR
across different proportions of synthetic data, highlighting the effectiveness
of Quantum-SMOTEV2 in enhancing model performance for edge cases.
|
2501.19003
|
Virtual airways heatmaps to optimize point of entry location in lung
biopsy planning systems
|
cs.CV cs.AI
|
Purpose: We present a virtual model to optimize point of entry (POE) in lung
biopsy planning systems. Our model makes it possible to compute the quality of
a biopsy sample taken from a potential POE, taking into account the margin of error that
arises from discrepancies between the orientation in the planning simulation
and the actual orientation during the operation. Additionally, the study
examines the impact of the characteristics of the lesion. Methods: The quality
of the biopsy is given by a heatmap projected onto the skeleton of a
patient-specific model of airways. The skeleton provides a 3D representation of
airways structure, while the heatmap intensity represents the potential amount
of tissue that could be extracted from each POE. This amount of tissue is
determined by the intersection of the lesion with a cone that represents the
uncertainty area in the introduction of biopsy instruments. The cone, lesion,
and skeleton are modelled as graphical objects that define a 3D scene of the
intervention. Results: We have simulated different settings of the intervention
scene from a single anatomy extracted from a CT scan and two lesions with
regular and irregular shapes. The different scenarios are simulated by
systematic rotation of each lesion placed at different distances from airways.
Analysis of the heatmaps for the different settings shows a strong impact of
lesion orientation for the irregular shape and of distance for both shapes.
Conclusion: The proposed heatmaps help to visually assess the optimal POE and
identify whether multiple optimal POEs exist in different zones of the bronchi.
They also allow us to model the maximum allowable error in navigation systems
and study which variables have the greatest influence on the success of the
operation. Additionally, they help determine at what point this influence could
potentially jeopardize the operation.
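The heatmap intensity described above (tissue reachable within the uncertainty cone) can be approximated by Monte Carlo integration; the spherical lesion, ideal cone, and sampling scheme below are simplifying assumptions for illustration:

```python
import numpy as np

def cone_lesion_overlap(apex, axis, half_angle, length,
                        lesion_center, lesion_radius, n=40000, seed=0):
    """Monte Carlo estimate of the volume of the intersection between
    the uncertainty cone (apex at the POE, given axis, half-angle in
    radians, and length) and a spherical lesion."""
    rng = np.random.default_rng(seed)
    axis = axis / np.linalg.norm(axis)
    # orthonormal basis completing the cone axis
    tmp = np.array([1.0, 0.0, 0.0]) if abs(axis[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    u = np.cross(axis, tmp); u /= np.linalg.norm(u)
    v = np.cross(axis, u)
    # uniform samples inside the cone: depth density ~ z^2, uniform disc slice
    z = length * rng.random(n) ** (1 / 3)
    r = z * np.tan(half_angle) * np.sqrt(rng.random(n))
    phi = 2 * np.pi * rng.random(n)
    pts = (apex + z[:, None] * axis
           + (r * np.cos(phi))[:, None] * u
           + (r * np.sin(phi))[:, None] * v)
    inside = np.linalg.norm(pts - lesion_center, axis=1) <= lesion_radius
    cone_volume = np.pi * (length * np.tan(half_angle)) ** 2 * length / 3
    return inside.mean() * cone_volume
```

Evaluating this quantity for every candidate POE along the airway skeleton yields exactly the kind of heatmap the abstract describes.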
|
2501.19004
|
CPU vs. GPU for Community Detection: Performance Insights from
GVE-Louvain and $\nu$-Louvain
|
cs.DC cs.SI
|
Community detection involves identifying natural divisions in networks, a
crucial task for many large-scale applications. This report presents
GVE-Louvain, one of the most efficient multicore implementations of the Louvain
algorithm, a high-quality method for community detection. Running on a dual
16-core Intel Xeon Gold 6226R server, GVE-Louvain outperforms Vite, Grappolo,
NetworKit Louvain, and cuGraph Louvain (on an NVIDIA A100 GPU) by factors of
50x, 22x, 20x, and 5.8x, respectively, achieving a processing rate of 560M
edges per second on a 3.8B-edge graph. Additionally, it scales efficiently,
improving performance by 1.6x for every thread doubling. The paper also
presents $\nu$-Louvain, a GPU-based implementation. When evaluated on an NVIDIA
A100 GPU, $\nu$-Louvain performs only on par with GVE-Louvain, largely due to
reduced workload and parallelism in later algorithmic passes. These results
suggest that CPUs, with their flexibility in handling irregular workloads, may
be better suited for community detection tasks.
|
2501.19010
|
DyPCL: Dynamic Phoneme-level Contrastive Learning for Dysarthric Speech
Recognition
|
cs.CL cs.SD eess.AS
|
Dysarthric speech recognition often suffers from performance degradation due
to the intrinsic diversity of dysarthric severity and extrinsic disparity from
normal speech. To bridge these gaps, we propose a Dynamic Phoneme-level
Contrastive Learning (DyPCL) method, which leads to obtaining invariant
representations across diverse speakers. We decompose the speech utterance into
phoneme segments for phoneme-level contrastive learning, leveraging dynamic
connectionist temporal classification alignment. Unlike prior studies focusing
on utterance-level embeddings, our granular learning allows discrimination of
subtle parts of speech. In addition, we introduce dynamic curriculum learning,
which progressively transitions from easy negative samples to
hard-to-distinguish ones based on the phonetic similarity of phonemes.
Training by difficulty level alleviates the inherent variability across
speakers and better identifies challenging speech. Evaluated on
the UASpeech dataset, DyPCL outperforms baseline models, achieving an average
22.10\% relative reduction in word error rate (WER) across the overall
dysarthria group.
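The phoneme-level contrastive objective at the heart of this approach follows the standard InfoNCE form; a minimal single-anchor sketch (cosine similarity and temperature are common choices, not necessarily DyPCL's exact configuration):

```python
import numpy as np

def phoneme_info_nce(anchor, positive, negatives, temperature=0.1):
    """InfoNCE loss for one phoneme segment: pull the anchor embedding
    toward another realization of the same phoneme and push it away
    from embeddings of other (possibly phonetically similar) phonemes."""
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    logits = np.array([cos(anchor, positive)] +
                      [cos(anchor, n) for n in negatives]) / temperature
    logits -= logits.max()                      # numerical stability
    return -np.log(np.exp(logits[0]) / np.exp(logits).sum())
```

The dynamic curriculum then amounts to ordering the negatives fed into this loss from phonetically distant (easy) to phonetically close (hard).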
|
2501.19012
|
Importing Phantoms: Measuring LLM Package Hallucination Vulnerabilities
|
cs.LG cs.CL cs.CR
|
Large Language Models (LLMs) have become an essential tool in the
programmer's toolkit, but their tendency to hallucinate code can be used by
malicious actors to introduce vulnerabilities to broad swathes of the software
supply chain. In this work, we analyze package hallucination behaviour in LLMs
across popular programming languages, examining both existing package
references and fictional dependencies. From this analysis, we identify
potential attacks and suggest defensive strategies against them. We discover
that the package hallucination rate depends not only on model choice, but also
on programming language, model size, and
specificity of the coding task request. The Pareto optimality boundary between
code generation performance and package hallucination is sparsely populated,
suggesting that coding models are not being optimized for secure code.
Additionally, we find an inverse correlation between package hallucination rate
and the HumanEval coding benchmark, offering a heuristic for evaluating the
propensity of a model to hallucinate packages. Our metrics, findings and
analyses provide a base for future models, securing AI-assisted software
development workflows against package supply chain attacks.
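One of the cheapest defensive strategies in this spirit is to check every package a model imports against a trusted registry snapshot before installing anything; a minimal sketch for Python sources (the registry set is an assumption supplied by the caller):

```python
import ast

def undeclared_imports(source, known_packages):
    """Return top-level imported package names in generated code that
    are not in a trusted registry snapshot, i.e. candidate
    hallucinated dependencies."""
    tree = ast.parse(source)
    roots = set()
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            roots.update(a.name.split(".")[0] for a in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module and node.level == 0:
            roots.add(node.module.split(".")[0])
    return sorted(roots - set(known_packages))
```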
|
2501.19013
|
On the efficiency of explicit and semi-explicit immersed boundary finite
element methods for wave propagation problems
|
cs.CE cs.NA math.NA
|
Immersed boundary methods have attracted substantial interest in the last
decades due to their potential for computations involving complex geometries.
Often these cannot be efficiently discretized using boundary-fitted finite
elements. Immersed boundary methods provide a simple and fully automatic
discretization based on Cartesian grids and tailored quadrature schemes that
account for the geometric model. The geometry can thus be described
independently of the grid, e.g., by image data obtained from computed
tomography scans. The drawback
of such a discretization lies in the potentially small overlap between certain
elements in the grid and the geometry. These badly cut elements with small
physical support pose a particular challenge for nonlinear and/or dynamic
simulations. In this work, we focus on problems in structural dynamics and
acoustics and concentrate on solving them with explicit time-marching schemes.
In this context, badly cut elements can lead to unfeasibly small critical time
step sizes. We investigate the performance of implicit-explicit time marching
schemes and two stabilization methods developed in previous works as potential
remedies. While these have been studied before with regard to their
effectiveness in increasing the critical time step size, their numerical
efficiency has only been considered in terms of accuracy per degree of freedom.
In this paper, we evaluate the computation time required for a given accuracy,
which depends not only on the number of degrees of freedom but also on the
selected spatial discretization, the sparsity patterns of the system matrices,
and the employed time-marching scheme.
|
2501.19016
|
Modelling Infodemics on a Global Scale: A 30 Countries Study using
Epidemiological and Social Listening Data
|
cs.SI physics.soc-ph
|
Infodemics are a threat to public health, arising from multiple interacting
phenomena occurring both online and offline. The continuous feedback loops
between the digital information ecosystem and offline contingencies make
infodemics particularly challenging to define operationally, measure, and
eventually model in quantitative terms. In this study, we present evidence of
the effect of various epidemic-related variables on the dynamics of infodemics,
using a robust modelling framework applied to data from 30 countries across
diverse income groups. We use WHO COVID-19 surveillance data on new cases and
deaths, vaccination data from the Oxford COVID-19 Government Response Tracker,
infodemic data (volume of public conversations and social media content) from
the WHO EARS platform, and Google Trends data to represent information demand.
Our findings show that new deaths are the strongest predictor of the infodemic,
measured as new document production including social media content and public
conversations, and that the epidemic burden in neighbouring countries appears
to have a greater impact on document production than the domestic one. Building
on these results, we propose a taxonomy that highlights country-specific
discrepancies between the evolution of the infodemic and the epidemic. Further,
an analysis of the temporal evolution of the relationship between the two
phenomena quantifies how much the discussions around vaccine rollouts may have
shaped the development of the infodemic. The insights from our quantitative
model contribute to advancing infodemic research, highlighting the importance
of a holistic approach integrating both online and offline dimensions.
|
2501.19017
|
Calling a Spade a Heart: Gaslighting Multimodal Large Language Models
via Negation
|
cs.CL
|
Multimodal Large Language Models (MLLMs) have exhibited remarkable
advancements in integrating different modalities, excelling in complex
understanding and generation tasks. Despite their success, MLLMs remain
vulnerable to conversational adversarial inputs, particularly negation
arguments. This paper systematically evaluates state-of-the-art MLLMs across
diverse benchmarks, revealing significant performance drops when negation
arguments are introduced to initially correct responses. We show critical
vulnerabilities in the reasoning and alignment mechanisms of these models.
Proprietary models such as GPT-4o and Claude-3.5-Sonnet demonstrate better
resilience compared to open-source counterparts like Qwen2-VL and LLaVA.
However, all evaluated MLLMs struggle to maintain logical consistency under
negation arguments during conversation. This paper aims to offer valuable
insights for improving the robustness of MLLMs against adversarial inputs,
contributing to the development of more reliable and trustworthy multimodal AI
systems.
|
2501.19018
|
Scalable Multi-phase Word Embedding Using Conjunctive Propositional
Clauses
|
cs.LG cs.CL
|
The Tsetlin Machine (TM) architecture has recently demonstrated effectiveness
in Machine Learning (ML), particularly within Natural Language Processing
(NLP). It has been utilized to construct word embedding using conjunctive
propositional clauses, thereby significantly enhancing our understanding and
interpretation of machine-derived decisions. The previous approach performed
the word embedding over a sequence of input words to consolidate the
information into a cohesive and unified representation. However, that approach
encounters scalability challenges as the input size increases. In this study,
we introduce a novel approach incorporating two-phase training to discover
contextual embeddings of input sequences. Specifically, this method
encapsulates the knowledge for each input word within the dataset's vocabulary,
subsequently constructing embeddings for a sequence of input words utilizing
the extracted knowledge. This technique not only facilitates the design of a
scalable model but also preserves interpretability. Our experimental findings
revealed that the proposed method yields competitive performance compared to
the previous approaches, demonstrating promising results in contrast to
human-generated benchmarks. Furthermore, we applied the proposed approach to
sentiment analysis on the IMDB dataset, where the TM embedding and the TM
classifier, along with other interpretable classifiers, offered a transparent
end-to-end solution with competitive performance.
|
2501.19022
|
On the Impact of Noise in Differentially Private Text Rewriting
|
cs.CL
|
The field of text privatization often leverages the notion of
$\textit{Differential Privacy}$ (DP) to provide formal guarantees in the
rewriting or obfuscation of sensitive textual data. A common and nearly
ubiquitous form of DP application necessitates the addition of calibrated noise
to vector representations of text, either at the data- or model-level, which is
governed by the privacy parameter $\varepsilon$. However, noise addition almost
inevitably leads to considerable utility loss, thereby highlighting one major
drawback of DP in NLP. In this work, we introduce a new sentence infilling
privatization technique, and we use this method to explore the effect of noise
in DP text rewriting. We empirically demonstrate that non-DP privatization
techniques excel in utility preservation and can find an acceptable empirical
privacy-utility trade-off, yet cannot outperform DP methods in empirical
privacy protections. Our results highlight the significant impact of noise in
current DP rewriting mechanisms, leading to a discussion of the merits and
challenges of DP in NLP, as well as the opportunities that non-DP methods
present.
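The noise-addition mechanism discussed above is, in its simplest form, the Laplace mechanism applied to a text's vector representation; a minimal sketch (coordinate-wise noise calibrated to an assumed L1 sensitivity, with epsilon governing the privacy/utility trade-off):

```python
import numpy as np

def laplace_privatize(vec, epsilon, sensitivity=1.0, seed=None):
    """Add Laplace noise with scale sensitivity/epsilon to each
    coordinate of an embedding vector. Smaller epsilon means stronger
    privacy, more noise, and (as the abstract argues) more utility loss."""
    rng = np.random.default_rng(seed)
    scale = sensitivity / epsilon
    return vec + rng.laplace(0.0, scale, size=vec.shape)
```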
|
2501.19025
|
Recognize then Resolve: A Hybrid Framework for Understanding Interaction
and Cooperative Conflict Resolution in Mixed Traffic
|
cs.MA
|
A lack of understanding of interactions and the inability to effectively
resolve conflicts continue to impede the progress of Connected Autonomous
Vehicles (CAVs) in their interactions with Human-Driven Vehicles (HDVs). To
address this challenge, we propose the Recognize then Resolve (RtR) framework.
First, a Bilateral Intention Progression Graph (BIPG) is constructed based on
CAV-HDV interaction data to model the evolution of interactions and identify
potential HDV intentions. Three typical interaction breakdown scenarios are
then categorized, and key moments are defined for triggering cooperative
conflict resolution. On this basis, a constrained Monte Carlo Tree Search
(MCTS) algorithm is introduced to determine the optimal passage order while
accommodating HDV intentions. Experimental results demonstrate that the
proposed RtR framework outperforms other cooperative approaches in terms of
safety and efficiency across various penetration rates, achieving results close
to consistent cooperation while significantly reducing computational resources.
Our code and data are available at:
https://github.com/FanGShiYuu/RtR-Recognize-then-Resolve/.
|
2501.19027
|
True Online TD-Replan(lambda) Achieving Planning through Replaying
|
cs.LG
|
In this paper, we develop a new planning method that extends true online
TD({\lambda}) to allow an agent to efficiently replay all or part of its past
experience online, in the sequence in which it appeared, either at every step
or sparsely according to the usual {\lambda} parameter. In this new method,
which we call True Online TD-Replan({\lambda}), the {\lambda} parameter plays
a new role: it specifies the density of the replay process in addition to its
usual role of specifying the depth of the target's updates. We demonstrate
that, for problems that benefit from experience replay, our new method
outperforms true online TD({\lambda}), albeit with quadratic complexity due to
its replay capabilities. In addition, we demonstrate that our method
outperforms other methods with similar quadratic complexity such as Dyna
Planning and TD({\lambda})-Replan algorithms. We test our method on two
benchmarking environments, a random walk problem that uses simple binary
features and a myoelectric control domain that uses both simple sEMG features
and deeply extracted features to showcase its capabilities.
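The base learner being extended here is true online TD({\lambda}) with dutch eligibility traces (van Seijen and Sutton); a minimal linear-function-approximation sketch of that base algorithm, without the replay extension this paper adds:

```python
import numpy as np

def true_online_td(episodes, n_features, alpha=0.1, gamma=0.9, lam=0.8):
    """True online TD(lambda) with dutch traces for a linear value
    function v(s) = w @ x(s). Terminal states are encoded as zero
    feature vectors; each episode is a list of (x, r, x_next) triples."""
    w = np.zeros(n_features)
    for episode in episodes:
        e = np.zeros(n_features)
        v_old = 0.0
        for x, r, x_next in episode:
            v = w @ x
            v_next = w @ x_next
            delta = r + gamma * v_next - v
            # dutch eligibility trace
            e = gamma * lam * e + (1.0 - alpha * gamma * lam * (e @ x)) * x
            w = w + alpha * (delta + v - v_old) * e - alpha * (v - v_old) * x
            v_old = v_next
    return w
```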
|
2501.19032
|
Error Slice Discovery via Manifold Compactness
|
cs.LG
|
Despite the great performance of deep learning models in many areas, they
still make mistakes and underperform on certain subsets of data, i.e. error
slices. Given a trained model, it is important to identify its semantically
coherent error slices that are easy to interpret, which is referred to as the
error slice discovery problem. However, there is no proper metric of slice
coherence without relying on extra information like predefined slice labels.
Current evaluation of slice coherence requires access to predefined slices
formulated by metadata like attributes or subclasses. Its validity heavily
relies on the quality and abundance of metadata, where some possible patterns
could be ignored. Besides, current algorithms cannot directly incorporate the
constraint of coherence into their optimization objective due to the absence of
an explicit coherence metric, which could potentially hinder their
effectiveness. In this paper, we propose manifold compactness, a coherence
metric without reliance on extra information by incorporating the data geometry
property into its design, and experiments on typical datasets empirically
validate the rationality of the metric. Then we develop Manifold Compactness
based error Slice Discovery (MCSD), a novel algorithm that directly treats risk
and coherence as the optimization objective, and is flexible to be applied to
models of various tasks. Extensive experiments on the benchmark and case
studies on other typical datasets demonstrate the superiority of MCSD.
|
2501.19034
|
XRF V2: A Dataset for Action Summarization with Wi-Fi Signals, and IMUs
in Phones, Watches, Earbuds, and Glasses
|
cs.CV
|
Human Action Recognition (HAR) plays a crucial role in applications such as
health monitoring, smart home automation, and human-computer interaction. While
HAR has been extensively studied, action summarization, which involves
identifying and summarizing continuous actions, remains an emerging task. This
paper introduces the novel XRF V2 dataset, designed for indoor daily activity
Temporal Action Localization (TAL) and action summarization. XRF V2 integrates
multimodal data from Wi-Fi signals, IMU sensors (smartphones, smartwatches,
headphones, and smart glasses), and synchronized video recordings, offering a
diverse collection of indoor activities from 16 volunteers across three
distinct environments. To tackle TAL and action summarization, we propose the
XRFMamba neural network, which excels at capturing long-term dependencies in
untrimmed sensory sequences and outperforms state-of-the-art methods, such as
ActionFormer and WiFiTAD. We envision XRF V2 as a valuable resource for
advancing research in human action localization, action forecasting, pose
estimation, multimodal foundation models pre-training, synthetic data
generation, and more.
|
2501.19035
|
SynthmanticLiDAR: A Synthetic Dataset for Semantic Segmentation on LiDAR
Imaging
|
cs.CV
|
Semantic segmentation on LiDAR imaging is increasingly gaining attention, as
it can provide useful knowledge for perception systems and potential for
autonomous driving. However, collecting and labeling real LiDAR data is an
expensive and time-consuming task. While datasets such as SemanticKITTI have
been manually collected and labeled, the introduction of simulation tools such
as CARLA, has enabled the creation of synthetic datasets on demand.
In this work, we present a modified CARLA simulator designed with LiDAR
semantic segmentation in mind, with new classes, more consistent object
labeling with their counterparts from real datasets such as SemanticKITTI, and
the possibility to adjust the object class distribution. Using this tool, we
have generated SynthmanticLiDAR, a synthetic dataset for semantic segmentation
on LiDAR imaging, designed to be similar to SemanticKITTI, and we evaluate its
contribution to the training process of different semantic segmentation
algorithms by using a naive transfer learning approach. Our results show that
incorporating SynthmanticLiDAR into the training process improves the overall
performance of tested algorithms, proving the usefulness of our dataset, and
therefore, our adapted CARLA simulator.
The dataset and simulator are available in
https://github.com/vpulab/SynthmanticLiDAR.
|
2501.19036
|
RedundancyLens: Revealing and Exploiting Visual Token Processing
Redundancy for Efficient Decoder-Only MLLMs
|
cs.CV
|
Current Multimodal Large Language Model (MLLM) architectures face a critical
tradeoff between performance and efficiency: decoder-only architectures achieve
higher performance but lower efficiency, while cross-attention-based
architectures offer greater efficiency but lower performance. The key
distinction lies in how visual tokens are processed. Decoder-only architectures
apply self-attention and FFN operations on visual tokens, while cross-attention
architectures skip these computations. To investigate whether redundancy exists
in this computationally expensive process, we propose a training-free framework
for analyzing trained MLLMs. It consists of Probe-Activated Dynamic FFN and
Hollow Attention, which enable adjustable reductions in computations for visual
tokens, as well as a Layer Ranking Algorithm that prioritizes layers for these
reductions. Extensive experiments demonstrate substantial, structured, and
clustered redundancy unique to decoder-only MLLMs, offering valuable insights
for future MLLM architecture design. Furthermore, by leveraging our reduction
framework as a training-free inference acceleration approach, we achieve
performance comparable to or better than state-of-the-art methods while
remaining compatible with them. Code will be publicly available at
https://github.com/L-Hugh/RedundancyLens.
|
2501.19038
|
Conformal Prediction in Hierarchical Classification
|
stat.ML cs.LG
|
Conformal prediction has emerged as a widely used framework for constructing
valid prediction sets in classification and regression tasks. In this work, we
extend the split conformal prediction framework to hierarchical classification,
where prediction sets are commonly restricted to internal nodes of a predefined
hierarchy, and propose two computationally efficient inference algorithms. The
first algorithm returns internal nodes as prediction sets, while the second
relaxes this restriction, using the notion of representation complexity,
yielding a more general and combinatorial inference problem, but smaller set
sizes. Empirical evaluations on several benchmark datasets demonstrate the
effectiveness of the proposed algorithms in achieving nominal coverage.
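As background for the extension described above, the basic split conformal recipe for flat (non-hierarchical) classification can be sketched as follows; this is the standard construction, not the paper's hierarchical algorithms, and the nonconformity scores and labels are made up for illustration:

```python
import math

def conformal_threshold(cal_scores, alpha):
    """Return the conformal quantile: the k-th smallest calibration score
    with k = ceil((n + 1) * (1 - alpha)), giving >= 1 - alpha coverage."""
    n = len(cal_scores)
    k = math.ceil((n + 1) * (1 - alpha))
    return sorted(cal_scores)[min(k, n) - 1]

def prediction_set(label_scores, threshold):
    """Keep every candidate label whose nonconformity score is within the threshold."""
    return {label for label, s in label_scores.items() if s <= threshold}

# Toy example: nonconformity = 1 - predicted probability of the true class.
cal_scores = [0.05, 0.10, 0.20, 0.30, 0.40, 0.50, 0.60, 0.70, 0.80]
q = conformal_threshold(cal_scores, alpha=0.2)  # target 80% coverage
S = prediction_set({"cat": 0.15, "dog": 0.55, "car": 0.95}, q)
```

The hierarchical algorithms in the paper restrict or generalize such sets to internal nodes of the label hierarchy; the quantile calibration step itself is unchanged.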
|
2501.19040
|
Towards the Worst-case Robustness of Large Language Models
|
cs.LG
|
Recent studies have revealed the vulnerability of Large Language Models
(LLMs) to adversarial attacks, where the adversary crafts specific input
sequences to induce harmful, violent, private, or incorrect outputs. Although
various defenses have been proposed, they have not been evaluated by strong
adaptive attacks, leaving the worst-case robustness of LLMs still intractable.
By developing a stronger white-box attack, our evaluation results indicate that
most typical defenses achieve nearly 0\% robustness. To solve this, we propose
\textit{DiffTextPure}, a general defense that diffuses the (adversarial) input
prompt using any pre-defined smoothing distribution, and purifies the diffused
input using a pre-trained language model. Theoretically, we derive tight
robustness lower bounds for all smoothing distributions using fractional knapsack
or 0-1 knapsack solvers. Under this framework, we certify the robustness of a
specific case -- smoothing LLMs using a uniform kernel -- against \textit{any
possible attack} with an average $\ell_0$ perturbation of 2.02 or an average
suffix length of 6.41.
|
2501.19042
|
Swarm-Gen: Fast Generation of Diverse Feasible Swarm Behaviors
|
cs.RO cs.AI
|
Coordination behavior in robot swarms is inherently multi-modal in nature.
That is, there are numerous ways in which a swarm of robots can avoid
inter-agent collisions and reach their respective goals. However, the problem
of generating diverse and feasible swarm behaviors in a scalable manner remains
largely unaddressed. In this paper, we fill this gap by combining generative
models with a safety-filter (SF). Specifically, we sample diverse trajectories
from a learned generative model which is subsequently projected onto the
feasible set using the SF. We experiment with two choices for generative
models, namely: Conditional Variational Autoencoder (CVAE) and Vector-Quantized
Variational Autoencoder (VQ-VAE). We highlight the trade-offs these two models
provide in terms of computation time and trajectory diversity. We develop a
custom solver for our SF and equip it with a neural network that predicts
context-specific initialization. The initialization network is trained in a
self-supervised manner, taking advantage of the differentiability of the SF
solver. We provide two sets of empirical results. First, we demonstrate that we
can generate a large set of multi-modal, feasible trajectories, simulating
diverse swarm behaviors, within a few tens of milliseconds. Second, we show
that our initialization network provides faster convergence of our SF solver
vis-a-vis other alternative heuristics.
|
2501.19043
|
Self-Supervised Cross-Modal Text-Image Time Series Retrieval in Remote
Sensing
|
cs.CV
|
The development of image time series retrieval (ITSR) methods is a growing
research interest in remote sensing (RS). Given a user-defined image time
series (i.e., the query time series), the ITSR methods search and retrieve from
large archives the image time series that have similar content to the query
time series. The existing ITSR methods in RS are designed for unimodal
retrieval problems, limiting their usability and versatility. To overcome this
issue, for the first time in RS we introduce the task of cross-modal text-ITSR. In
particular, we present a self-supervised cross-modal text-image time series
retrieval (text-ITSR) method that enables the retrieval of image time series
using text sentences as queries, and vice versa. In detail, we focus our
attention on text-ITSR in pairs of images (i.e., bitemporal images). The
proposed text-ITSR method consists of two key components: 1) modality-specific
encoders to model the semantic content of bitemporal images and text sentences
with discriminative features; and 2) modality-specific projection heads to
align textual and image representations in a shared embedding space. To
effectively model the temporal information within the bitemporal images, we
introduce two fusion strategies: i) global feature fusion (GFF) strategy that
combines global image features through simple yet effective operators; and ii)
transformer-based feature fusion (TFF) strategy that leverages transformers for
fine-grained temporal integration. Extensive experiments conducted on two
benchmark RS archives demonstrate the effectiveness of the proposed method in
accurately retrieving semantically relevant bitemporal images (or text
sentences) to a query text sentence (or bitemporal image). The code of this
work is publicly available at
https://git.tu-berlin.de/rsim/cross-modal-text-tsir.
|
2501.19045
|
Trajectory Optimization Under Stochastic Dynamics Leveraging Maximum
Mean Discrepancy
|
cs.RO
|
This paper addresses sampling-based trajectory optimization for risk-aware
navigation under stochastic dynamics. Typically such approaches operate by
computing $\tilde{N}$ perturbed rollouts around the nominal dynamics to
estimate the collision risk associated with a sequence of control commands. We
consider a setting where it is expensive to estimate risk using perturbed
rollouts, for example, due to expensive collision-checks. We put forward two
key contributions. First, we develop an algorithm that distills the statistical
information from a larger set of rollouts to a reduced-set with sample size
$N \ll \tilde{N}$. Consequently, we estimate collision risk using just $N$
rollouts instead of $\tilde{N}$. Second, we formulate a novel surrogate for the
collision risk that can leverage the distilled statistical information
contained in the reduced-set. We formalize both algorithmic contributions using
distribution embedding in Reproducing Kernel Hilbert Space (RKHS) and Maximum
Mean Discrepancy (MMD). We perform extensive benchmarking to demonstrate that
our MMD-based approach leads to safer trajectories in the low-sample regime than
existing baselines using Conditional Value-at-Risk (CVaR) based collision risk
estimate.
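For readers unfamiliar with MMD, a minimal sketch of the standard (biased) two-sample MMD^2 estimate with an RBF kernel is shown below; the kernel choice, bandwidth, and 1-D samples are illustrative assumptions, and the paper's reduced-set distillation is not reproduced here:

```python
import math

def rbf(x, y, gamma=1.0):
    # Gaussian (RBF) kernel between two scalars.
    return math.exp(-gamma * (x - y) ** 2)

def mmd2(xs, ys, gamma=1.0):
    """Biased estimator: mean k(x,x') + mean k(y,y') - 2 * mean k(x,y)."""
    kxx = sum(rbf(a, b, gamma) for a in xs for b in xs) / (len(xs) ** 2)
    kyy = sum(rbf(a, b, gamma) for a in ys for b in ys) / (len(ys) ** 2)
    kxy = sum(rbf(a, b, gamma) for a in xs for b in ys) / (len(xs) * len(ys))
    return kxx + kyy - 2.0 * kxy

same = mmd2([0.0, 0.1, 0.2], [0.0, 0.1, 0.2])  # identical samples -> 0
far = mmd2([0.0, 0.1, 0.2], [5.0, 5.1, 5.2])   # distant samples -> larger
```

A reduced-set method would seek a small weighted sample whose MMD to the full rollout set is small, so that the cheap sample stands in for the expensive one.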
|
2501.19047
|
Understanding Model Calibration -- A gentle introduction and visual
exploration of calibration and the expected calibration error (ECE)
|
stat.ME cs.AI cs.CV cs.LG stat.ML
|
To be considered reliable, a model must be calibrated so that its confidence
in each decision closely reflects its true outcome. In this blogpost we'll take
a look at the most commonly used definition for calibration and then dive into
a frequently used evaluation measure for model calibration. We'll then cover
some of the drawbacks of this measure and how these surfaced the need for
additional notions of calibration, which require their own new evaluation
measures. This post is not intended to be an in-depth dissection of all works
on calibration, nor does it focus on how to calibrate models. Instead, it is
meant to provide a gentle introduction to the different notions and their
evaluation measures as well as to re-highlight some issues with a measure that
is still widely used to evaluate calibration.
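Since the expected calibration error (ECE) is the measure under discussion, a small sketch of its usual equal-width-bin form may help; the bin count and toy predictions below are invented for the example:

```python
def ece(confidences, correct, n_bins=5):
    """Equal-width-bin ECE: weighted average of |accuracy - mean confidence|
    over confidence bins."""
    bins = [[] for _ in range(n_bins)]
    for conf, ok in zip(confidences, correct):
        idx = min(int(conf * n_bins), n_bins - 1)  # conf == 1.0 -> last bin
        bins[idx].append((conf, ok))
    total, err = len(confidences), 0.0
    for b in bins:
        if not b:
            continue
        avg_conf = sum(c for c, _ in b) / len(b)
        acc = sum(1 for _, ok in b if ok) / len(b)
        err += (len(b) / total) * abs(acc - avg_conf)
    return err

# A well-calibrated toy model: 90% accurate at 0.9 confidence -> ECE near 0.
perfect = ece([0.9] * 10, [True] * 9 + [False])
```

The drawbacks the post discusses (sensitivity to binning, only measuring top-label confidence) are visible here: the result depends on `n_bins`, and only the predicted class's confidence enters the computation.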
|
2501.19048
|
The Role of Graph-based MIL and Interventional Training in the
Generalization of WSI Classifiers
|
eess.IV cs.CV
|
Whole Slide Imaging (WSI), which involves high-resolution digital scans of
pathology slides, has become the gold standard for cancer diagnosis, but its
gigapixel resolution and the scarcity of annotated datasets present challenges
for deep learning models. Multiple Instance Learning (MIL), a widely-used
weakly supervised approach, bypasses the need for patch-level annotations.
However, conventional MIL methods overlook the spatial relationships between
patches, which are crucial for tasks such as cancer grading and diagnosis. To
address this, graph-based approaches have gained prominence by incorporating
spatial information through node connections. Despite their potential, both MIL
and graph-based models are vulnerable to learning spurious associations, like
color variations in WSIs, affecting their robustness. In this dissertation, we
conduct an extensive comparison of multiple graph construction techniques, MIL
models, graph-MIL approaches, and interventional training, introducing a new
framework, Graph-based Multiple Instance Learning with Interventional Training
(GMIL-IT), for WSI classification. We evaluate their impact on model
generalization through domain shift analysis and demonstrate that graph-based
models alone achieve the generalization initially anticipated from
interventional training. Our code is available here:
github.com/ritamartinspereira/GMIL-IT
|
2501.19050
|
Norm-Bounded Low-Rank Adaptation
|
cs.LG
|
In this work, we propose norm-bounded low-rank adaptation (NB-LoRA) for
parameter-efficient fine tuning. We introduce two parameterizations that allow
explicit bounds on each singular value of the weight adaptation matrix, which
can therefore satisfy any prescribed unitarily invariant norm bound, including
the Schatten norms (e.g., nuclear, Frobenius, spectral norm). The proposed
parameterizations are unconstrained and complete, i.e. they cover all matrices
satisfying the prescribed rank and norm constraints. Experiments on vision
fine-tuning benchmarks show that the proposed approach can achieve good
adaptation performance while avoiding model catastrophic forgetting and also
substantially improve robustness to a wide range of hyper-parameters, including
adaptation rank, learning rate and number of training epochs. We also explore
applications in privacy-preserving model merging and low-rank matrix
completion.
|
2501.19054
|
Text-to-CAD Generation Through Infusing Visual Feedback in Large
Language Models
|
cs.CV cs.LG
|
Creating Computer-Aided Design (CAD) models requires significant expertise
and effort. Text-to-CAD, which converts textual descriptions into CAD
parametric sequences, is crucial in streamlining this process. Recent studies
have utilized ground-truth parametric sequences, known as sequential signals,
as supervision to achieve this goal. However, CAD models are inherently
multimodal, comprising parametric sequences and corresponding rendered visual
objects. Besides, the rendering process from parametric sequences to visual
objects is many-to-one. Therefore, both sequential and visual signals are
critical for effective training. In this work, we introduce CADFusion, a
framework that uses Large Language Models (LLMs) as the backbone and alternates
between two training stages: the sequential learning (SL) stage and the visual
feedback (VF) stage. In the SL stage, we train LLMs using ground-truth
parametric sequences, enabling the generation of logically coherent parametric
sequences. In the VF stage, we reward parametric sequences that render into
visually preferred objects and penalize those that do not, allowing LLMs to
learn how rendered visual objects are perceived and evaluated. These two stages
alternate throughout the training, ensuring balanced learning and preserving
benefits of both signals. Experiments demonstrate that CADFusion significantly
improves performance, both qualitatively and quantitatively.
|
2501.19055
|
Towards Physiologically Sensible Predictions via the Rule-based
Reinforcement Learning Layer
|
cs.LG cs.AI
|
This paper adds to the growing literature of reinforcement learning (RL) for
healthcare by proposing a novel paradigm: augmenting any predictor with
Rule-based RL Layer (RRLL) that corrects the model's physiologically impossible
predictions. Specifically, RRLL takes the predictor's labels as input states and
outputs corrected labels as actions. The reward of the state-action pair is
evaluated by a set of general rules. RRLL is efficient, general and
lightweight: it does not require heavy expert knowledge like prior work but
only a set of impossible transitions. This set is much smaller than all
possible transitions; yet it can effectively reduce physiologically impossible
mistakes made by the state-of-the-art predictor models. We verify the utility
of RRLL on a variety of important healthcare classification problems and
observe significant improvements using the same setup, with only the
domain-specific set of impossible transitions changed. In-depth analysis shows that RRLL
indeed improves accuracy by effectively reducing the presence of
physiologically impossible predictions.
|
2501.19056
|
Enabling Autonomic Microservice Management through Self-Learning Agents
|
cs.SE cs.AI cs.CL cs.MA
|
The increasing complexity of modern software systems necessitates robust
autonomic self-management capabilities. While Large Language Models (LLMs)
demonstrate potential in this domain, they often face challenges in adapting
their general knowledge to specific service contexts. To address this
limitation, we propose ServiceOdyssey, a self-learning agent system that
autonomously manages microservices without requiring prior knowledge of
service-specific configurations. By leveraging curriculum learning principles
and iterative exploration, ServiceOdyssey progressively develops a deep
understanding of operational environments, reducing dependence on human input
or static documentation. A prototype built with the Sock Shop microservice
demonstrates the potential of this approach for autonomic microservice
management.
|
2501.19057
|
TeZO: Empowering the Low-Rankness on the Temporal Dimension in the
Zeroth-Order Optimization for Fine-tuning LLMs
|
cs.LG
|
Zeroth-order optimization (ZO) has demonstrated remarkable promise in
efficient fine-tuning tasks for Large Language Models (LLMs). In particular,
recent advances incorporate the low-rankness of gradients, introducing low-rank
ZO estimators to further reduce GPU memory consumption. However, most existing
works focus solely on the low-rankness of each individual gradient, overlooking
a broader property shared by all gradients throughout the training, i.e., all
gradients approximately reside within a similar subspace. In this paper, we
consider two factors together and propose a novel low-rank ZO estimator, TeZO,
which captures the low-rankness across both the model and temporal dimension.
Specifically, we represent ZO perturbations along the temporal dimension as a
3D tensor and employ Canonical Polyadic Decomposition (CPD) to extract each
low-rank 2D matrix, significantly reducing the training cost. TeZO can also be
easily extended to the Adam variant while consuming less memory than MeZO-SGD,
and requiring only about 35% of the memory of MeZO-Adam. Both comprehensive
theoretical analysis and extensive experimental research have validated its
efficiency, achieving SOTA-comparable results with lower overhead of time and
memory.
|
2501.19058
|
Gravity Compensation of the dVRK-Si Patient Side Manipulator based on
Dynamic Model Identification
|
cs.RO cs.SY eess.SY
|
The da Vinci Research Kit (dVRK, also known as dVRK Classic) is an
open-source teleoperated surgical robotic system whose hardware is obtained
from the first generation da Vinci Surgical System (Intuitive, Sunnyvale, CA,
USA). The dVRK has greatly facilitated research in robot-assisted surgery over
the past decade and helped researchers address multiple major challenges in
this domain. Recently, the dVRK-Si system, a new version of the dVRK which uses
mechanical components from the da Vinci Si Surgical System, became available to
the community. The major difference between the first generation da Vinci and
the da Vinci Si is in the structural upgrade of the Patient Side Manipulator
(PSM). Because of this upgrade, the gravity of the dVRK-Si PSM can no longer be
ignored as in the dVRK Classic. The high gravity offset may lead to relatively
low control accuracy and longer response time. In addition, although
substantial progress has been made in addressing the dynamic model
identification problem for the dVRK Classic, further research is required on
model-based control for the dVRK-Si, due to differences in mechanical
components and the demand for enhanced control performance. To address these
problems, in this work, we present (1) a novel full kinematic model of the
dVRK-Si PSM, and (2) a gravity compensation approach based on the dynamic model
identification.
|
2501.19059
|
Controllable Neural Architectures for Multi-Task Control
|
eess.SY cs.SY
|
This paper studies a multi-task control problem where multiple linear systems
are to be regulated by a single non-linear controller. In particular, motivated
by recent advances in multi-task learning and the design of brain-inspired
architectures, we consider a neural controller with (smooth) ReLU activation
function. The parameters of the controller are a connectivity matrix and a bias
vector: although both parameters can be designed, the connectivity matrix is
constant while the bias vector can be varied and is used to adapt the
controller across different control tasks. The bias vector determines the
equilibrium of the neural controller and, consequently, of its linearized
dynamics. Our multi-task control strategy consists of designing the
connectivity matrix and a set of bias vectors in a way that the linearized
dynamics of the neural controller for the different bias vectors provide a good
approximation of a set of desired controllers. We show that, by properly
choosing the bias vector, the linearized dynamics of the neural controller can
replicate the dynamics of any single, linear controller. Further, we design
gradient-based algorithms to train the parameters of the neural controller, and
we provide upper and lower bounds for the performance of our neural controller.
Finally, we validate our results using different numerical examples.
|
2501.19060
|
Contrast-Aware Calibration for Fine-Tuned CLIP: Leveraging Image-Text
Alignment
|
cs.CV cs.LG
|
Vision-language models (VLMs), such as CLIP, have demonstrated exceptional
generalization capabilities and can quickly adapt to downstream tasks through
prompt fine-tuning. Unfortunately, in classification tasks involving
non-training classes, known as the open-vocabulary setting, fine-tuned VLMs often
overfit to train classes, resulting in a misalignment between confidence scores
and actual accuracy on unseen classes, which significantly undermines their
reliability in real-world deployments. Existing confidence calibration methods
typically require training parameters or analyzing features from the training
dataset, restricting their ability to generalize to unseen classes without
corresponding train data. Moreover, VLM-specific calibration methods rely
solely on text features from train classes as calibration indicators, which
inherently limits their ability to calibrate train classes. To address these
challenges, we propose an effective multimodal calibration method
Contrast-Aware Calibration (CAC). Building on the original CLIP's zero-shot
adaptability and the conclusion from empirical analysis that poor intra-class
and inter-class discriminative ability on unseen classes is the root cause, we
calculate calibration weights based on the contrastive difference between the
original and fine-tuned CLIP. This method not only adapts to calibrating unseen
classes but also overcomes the limitations of previous VLM calibration methods
that could not calibrate train classes. In experiments involving 11 datasets
with 5 fine-tuning methods, CAC consistently achieved the best calibration
effect on both train and unseen classes without sacrificing accuracy and
inference speed.
|
2501.19061
|
EgoMe: Follow Me via Egocentric View in Real World
|
cs.CV
|
When interacting with the real world, humans often take the egocentric
(first-person) view as a benchmark, naturally transferring behaviors observed
from an exocentric (third-person) view to their own. This cognitive theory
provides a foundation for researching how robots can more effectively imitate
human behavior. However, current research either employs multiple cameras with
different views focusing on the same individual's behavior simultaneously or
encounters unpaired ego-exo view scenarios; there has been no effort to fully exploit
human cognitive behavior in the real world. To fill this gap, in this paper, we
introduce a novel large-scale egocentric dataset, called EgoMe, which follows
the process of human imitation learning via the egocentric view in the
real world. Our dataset includes 7902 pairs of videos (15804 videos) for
diverse daily behaviors in real-world scenarios. For a pair of videos, one
video captures an exocentric view of the imitator observing the demonstrator's
actions, while the other captures an egocentric view of the imitator
subsequently following those actions. Notably, our dataset also contains exo-ego
eye gaze, angular velocity, acceleration, magnetic strength and other sensor
multi-modal data for assisting in establishing correlations between observing
and following process. In addition, we also propose eight challenging benchmark
tasks for fully leveraging this data resource and promoting the research of
robot imitation learning ability. Extensive statistical analysis demonstrates
significant advantages compared to existing datasets. The proposed EgoMe
dataset and benchmark will be released soon.
|
2501.19063
|
Optimizing Job Allocation using Reinforcement Learning with Graph Neural
Networks
|
cs.LG
|
Efficient job allocation in complex scheduling problems poses significant
challenges in real-world applications. In this report, we propose a novel
approach that leverages the power of Reinforcement Learning (RL) and Graph
Neural Networks (GNNs) to tackle the Job Allocation Problem (JAP). The JAP
involves allocating a maximum set of jobs to available resources while
considering several constraints. Our approach enables learning of adaptive
policies through trial-and-error interactions with the environment while
exploiting the graph-structured data of the problem. By leveraging RL, we
eliminate the need for manual annotation, a major bottleneck in supervised
learning approaches. Experimental evaluations on synthetic and real-world data
demonstrate the effectiveness and generalizability of our proposed approach,
outperforming baseline algorithms and showcasing its potential for optimizing
job allocation in complex scheduling problems.
|
2501.19064
|
Machine Learning in Gamma Astronomy
|
astro-ph.IM astro-ph.HE cs.LG
|
The purpose of this paper is to review the most popular deep learning methods
used to analyze astroparticle data obtained with Imaging Atmospheric Cherenkov
Telescopes and provide references to the original papers.
|
2501.19065
|
BEAT: Balanced Frequency Adaptive Tuning for Long-Term Time-Series
Forecasting
|
cs.LG cs.AI
|
Time-series forecasting is crucial for numerous real-world applications
including weather prediction and financial market modeling. While
temporal-domain methods remain prevalent, frequency-domain approaches can
effectively capture multi-scale periodic patterns, reduce sequence
dependencies, and naturally denoise signals. However, existing approaches
typically train model components for all frequencies under a unified training
objective, often leading to mismatched learning speeds: high-frequency
components converge faster and risk overfitting, while low-frequency components
underfit due to insufficient training time. To deal with this challenge, we
propose BEAT (Balanced frEquency Adaptive Tuning), a novel framework that
dynamically monitors the training status for each frequency and adaptively
adjusts their gradient updates. By recognizing convergence, overfitting, or
underfitting for each frequency, BEAT dynamically reallocates learning
priorities, moderating gradients for rapid learners and increasing those for
slower ones, alleviating the tension between competing objectives across
frequencies and synchronizing the overall learning process. Extensive
experiments on seven real-world datasets demonstrate that BEAT consistently
outperforms state-of-the-art approaches.
|
2501.19066
|
Concept Steerers: Leveraging K-Sparse Autoencoders for Controllable
Generations
|
cs.CV
|
Despite the remarkable progress in text-to-image generative models, they are
prone to adversarial attacks and inadvertently generate unsafe, unethical
content. Existing approaches often rely on fine-tuning models to remove
specific concepts, which is computationally expensive, lacks scalability, and/or
compromises generation quality. In this work, we propose a novel framework
leveraging k-sparse autoencoders (k-SAEs) to enable efficient and interpretable
concept manipulation in diffusion models. Specifically, we first identify
interpretable monosemantic concepts in the latent space of text embeddings and
leverage them to precisely steer the generation away or towards a given concept
(e.g., nudity) or to introduce a new concept (e.g., photographic style).
Through extensive experiments, we demonstrate that our approach is very simple,
requires no retraining of the base model nor LoRA adapters, does not compromise
the generation quality, and is robust to adversarial prompt manipulations. Our
method yields an improvement of $\mathbf{20.01\%}$ in unsafe concept removal,
is effective in style manipulation, and is $\mathbf{\sim5}$x faster than
current state-of-the-art.
|
2501.19067
|
Deep Multi-Task Learning Has Low Amortized Intrinsic Dimensionality
|
cs.LG stat.ML
|
Deep learning methods are known to generalize well from training to future
data, even in an overparametrized regime, where they could easily overfit. One
explanation for this phenomenon is that even when their *ambient
dimensionality*, (i.e. the number of parameters) is large, the models'
*intrinsic dimensionality* is small, i.e. their learning takes place in a small
subspace of all possible weight configurations.
In this work, we confirm this phenomenon in the setting of *deep multi-task
learning*. We introduce a method to parametrize multi-task networks directly in
the low-dimensional space, facilitated by the use of *random expansion*
techniques. We then show that high-accuracy multi-task solutions can be found
with much smaller intrinsic dimensionality (fewer free parameters) than what
single-task learning requires. Subsequently, we show that the low-dimensional
representations in combination with *weight compression* and *PAC-Bayesian*
reasoning lead to the first *non-vacuous generalization bounds* for deep
multi-task networks.
|
2501.19069
|
Improving vision-language alignment with graph spiking hybrid Networks
|
cs.CV cs.AI
|
To bridge the semantic gap between vision and language (VL), it is necessary
to develop a good alignment strategy, which includes handling semantic
diversity, abstract representation of visual information, and generalization
ability of models. Recent works use detector-based bounding boxes or patches
with regular partitions to represent visual semantics. While current paradigms
have made strides, they are still insufficient for fully capturing the nuanced
contextual relations among various objects. This paper proposes a comprehensive
visual semantic representation module that uses panoptic segmentation to
generate coherent fine-grained semantic features.
Furthermore, we propose a novel Graph Spiking Hybrid Network (GSHN) that
integrates the complementary advantages of Spiking Neural Networks (SNNs) and
Graph Attention Networks (GATs) to encode visual semantic information.
Intriguingly, the model not only encodes the discrete and continuous latent
variables of instances but also adeptly captures both local and global
contextual features, thereby significantly enhancing the richness and diversity
of semantic representations. Leveraging the spatiotemporal properties inherent
in SNNs, we employ contrastive learning (CL) to enhance the similarity-based
representation of embeddings. This strategy alleviates the computational
overhead of the model and enriches meaningful visual representations by
constructing positive and negative sample pairs. We design an innovative
pre-training method, Spiked Text Learning (STL), which uses text features to
improve the encoding ability of discrete semantics. Experiments show that the
proposed GSHN exhibits promising results on multiple VL downstream tasks.
|
2501.19072
|
SpikingSoft: A Spiking Neuron Controller for Bio-inspired Locomotion
with Soft Snake Robots
|
cs.RO cs.LG
|
Inspired by the dynamic coupling of moto-neurons and physical elasticity in
animals, this work explores the possibility of generating locomotion gaits by
utilizing physical oscillations in a soft snake by means of a low-level spiking
neural mechanism. To achieve this goal, we introduce the Double Threshold
Spiking neuron model with adjustable thresholds to generate varied output
patterns. This neuron model can excite the natural dynamics of soft robotic
snakes, and it enables distinct movements, such as turning or moving forward,
by simply altering the neural thresholds. Finally, we demonstrate that our
approach, termed SpikingSoft, naturally pairs and integrates with reinforcement
learning. The high-level agent only needs to adjust the two thresholds to
generate complex movement patterns, thus strongly simplifying the learning of
reactive locomotion. Simulation results demonstrate that the proposed
architecture significantly enhances the performance of the soft snake robot,
enabling it to achieve target objectives with a 21.6% increase in success rate,
a 29% reduction in time to reach the target, and smoother movements compared to
the vanilla reinforcement learning controller or a Central Pattern Generator
controller acting in torque space.
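A minimal sketch of the Double Threshold Spiking neuron idea described above, under assumed dynamics (a leaky integrator with illustrative leak and reset behavior, not the paper's exact model): the membrane potential integrates input, and crossing the upper or lower threshold emits a positive or negative spike, respectively:

```python
def double_threshold_step(v, inp, th_hi, th_lo, leak=0.9):
    """One step of a leaky integrator with two adjustable thresholds.
    Crossing the upper threshold emits a +1 spike, the lower a -1 spike;
    the membrane potential resets to zero after either spike."""
    v = leak * v + inp
    if v >= th_hi:
        return 0.0, +1
    if v <= th_lo:
        return 0.0, -1
    return v, 0

v, spikes = 0.0, []
for inp in [0.5, 0.5, 0.5, -1.2, -1.2]:
    v, s = double_threshold_step(v, inp, th_hi=1.0, th_lo=-1.0)
    spikes.append(s)
print(spikes)  # [0, 0, 1, -1, -1]
```

Shifting `th_hi` and `th_lo` changes the spike pattern without changing the input, which is the mechanism the abstract exploits: a high-level agent only adjusts the two thresholds to shape locomotion.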
|
2501.19073
|
Pareto-frontier Entropy Search with Variational Lower Bound Maximization
|
cs.LG stat.ML
|
This study considers multi-objective Bayesian optimization (MOBO) through the
information gain of the Pareto-frontier. To calculate the information gain, a
predictive distribution conditioned on the Pareto-frontier plays a key role,
which is defined as a distribution truncated by the Pareto-frontier. However,
it is usually impossible to obtain the entire Pareto-frontier in a continuous
domain, and therefore, the complete truncation cannot be known. We consider an
approximation of the truncated distribution using a mixture of the two
approximate truncations obtainable from a subset of the Pareto-frontier, which
we call the over- and under-truncation. Since the
optimal balance of the mixture is unknown beforehand, we propose optimizing the
balancing coefficient through the variational lower bound maximization
framework, by which the approximation error of the information gain can be
minimized. Our empirical evaluation demonstrates the effectiveness of the
proposed method, particularly when the number of objective functions is large.
|
2501.19077
|
Temperature-Annealed Boltzmann Generators
|
cs.LG
|
Efficient sampling of unnormalized probability densities such as the
Boltzmann distribution of molecular systems is a longstanding challenge. Next
to conventional approaches like molecular dynamics or Markov chain Monte Carlo,
variational approaches, such as training normalizing flows with the reverse
Kullback-Leibler divergence, have been introduced. However, such methods are
prone to mode collapse and often do not learn to sample the full
configurational space. Here, we present temperature-annealed Boltzmann
generators (TA-BG) to address this challenge. First, we demonstrate that
training a normalizing flow with the reverse Kullback-Leibler divergence at
high temperatures is possible without mode collapse. Furthermore, we introduce
a reweighting-based training objective to anneal the distribution to lower
target temperatures. We apply this methodology to three molecular systems of
increasing complexity and, compared to the baseline, achieve better results in
almost all metrics while requiring up to three times fewer target energy
evaluations. For the largest system, our approach is the only method that
accurately resolves the metastable states of the system.
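The reweighting step behind temperature annealing can be sketched with standard self-normalized importance weights between two Boltzmann temperatures (a generic sketch, not the paper's specific training objective): samples drawn at a high temperature are reweighted toward the lower-temperature distribution via their energies:

```python
import numpy as np

def anneal_weights(energies, T_high, T_low):
    """Self-normalized importance weights that reweight samples drawn from a
    Boltzmann distribution at T_high toward the distribution at T_low:
    w(x) ~ exp(-(1/T_low - 1/T_high) * E(x))."""
    log_w = -(1.0 / T_low - 1.0 / T_high) * energies
    log_w -= log_w.max()          # stabilize before exponentiating
    w = np.exp(log_w)
    return w / w.sum()

rng = np.random.default_rng(0)
E = rng.normal(5.0, 1.0, size=1000)   # mock energies of flow samples at T_high
w = anneal_weights(E, T_high=2.0, T_low=1.0)
print(round(w.sum(), 6))              # 1.0; low-energy samples get larger weight
```

Since `T_low < T_high`, the weights decrease monotonically in energy, so the lowest-energy sample receives the largest weight, concentrating the annealed distribution on low-energy configurations.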
|
2501.19080
|
Differentially Private Policy Gradient
|
cs.LG
|
Motivated by the increasing deployment of reinforcement learning in the real
world, involving a large consumption of personal data, we introduce a
differentially private (DP) policy gradient algorithm. We show that, in this
setting, the introduction of Differential Privacy can be reduced to the
computation of appropriate trust regions, thus avoiding the sacrifice of
theoretical properties of the DP-less methods. Therefore, we show that it is
possible to find the right trade-off between privacy noise and trust-region
size to obtain a performant differentially private policy gradient algorithm.
We then demonstrate its performance empirically on various benchmarks. Our
results and the complexity of the tasks addressed represent a significant
improvement over existing DP algorithms in online RL.
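For intuition only: the paper reduces DP to computing appropriate trust regions, but a generic DP-SGD-style way to privatize a policy gradient (clip per-episode gradients, average, add calibrated Gaussian noise) can be sketched as follows; the clipping norm and noise multiplier here are illustrative assumptions, not the authors' mechanism:

```python
import numpy as np

def dp_gradient(per_episode_grads, clip_norm, noise_mult, rng):
    """Clip each per-episode gradient to clip_norm, average, and add Gaussian
    noise scaled to the clipped sensitivity (DP-SGD-style sketch)."""
    clipped = []
    for g in per_episode_grads:
        norm = np.linalg.norm(g)
        clipped.append(g * min(1.0, clip_norm / norm))
    mean = np.mean(clipped, axis=0)
    noise = rng.normal(0.0, noise_mult * clip_norm / len(clipped),
                       size=mean.shape)
    return mean + noise

rng = np.random.default_rng(0)
grads = [rng.normal(size=8) for _ in range(32)]   # mock per-episode gradients
g_priv = dp_gradient(grads, clip_norm=1.0, noise_mult=1.1, rng=rng)
print(g_priv.shape)   # (8,)
```

Clipping bounds each episode's influence on the update (the sensitivity), which is what lets the added noise translate into a differential-privacy guarantee; the paper's contribution is to fold this trade-off into trust-region sizes instead.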
|