id | title | categories | abstract
|---|---|---|---|
2501.08035
|
READ: Reinforcement-based Adversarial Learning for Text Classification
with Limited Labeled Data
|
cs.CL cs.AI
|
Pre-trained transformer models such as BERT have shown massive gains across
many text classification tasks. However, these models usually need large
amounts of labeled data to achieve strong performance. Obtaining labeled data is
often expensive and time-consuming, whereas collecting unlabeled data using
some heuristics is relatively much cheaper for any task. Therefore, this paper
proposes a method that encapsulates reinforcement learning-based text
generation and semi-supervised adversarial learning approaches in a novel way
to improve the model's performance. Our method READ, Reinforcement-based
Adversarial learning, utilizes an unlabeled dataset to generate diverse
synthetic text through reinforcement learning, improving the model's
generalization capability using adversarial learning. Our experimental results
show that READ outperforms existing state-of-the-art methods on multiple
datasets.
|
2501.08036
|
Decoding Quantum LDPC Codes using Collaborative Check Node Removal
|
quant-ph cs.IT math.IT
|
Fault tolerance in quantum devices requires comparable contributions from
error-correcting codes and suitable decoders.
One of the most explored error-correcting codes is the family of Quantum
Low-Density Parity Check (QLDPC) codes.
Although faster than many of the reported decoders for QLDPC codes, iterative
decoders fail due to the colossal degeneracy and short cycles intrinsic to
these codes.
We present a strategy to improve the performance of the iterative decoders
based on a collaborative way to use the message passing of the iterative
decoders and check node removal from the code's Tanner graph.
We use the concept of bit separation and generalize it to qubit separation.
This gives us a metric to analyze and improve the decoder's performance
towards harmful configurations of QLDPC codes.
We present a simple decoding architecture to overcome iterative decoding
failures by increasing the separation of trapped qubits without incurring any
significant overhead.
|
2501.08037
|
Enhanced SPS Velocity-adaptive Scheme: Access Fairness in 5G NR V2I
Networks
|
cs.LG cs.NI
|
Vehicle-to-Infrastructure (V2I) technology enables information exchange
between vehicles and road infrastructure. Specifically, when a vehicle
approaches a roadside unit (RSU), it can exchange information with the RSU to
obtain accurate data that assists in driving. With the release of the 3rd
Generation Partnership Project (3GPP) Release 16, which includes the 5G New
Radio (NR) Vehicle-to-Everything (V2X) standards, vehicles typically adopt
mode-2 communication using sensing-based semi-persistent scheduling (SPS) for
resource allocation. In this approach, vehicles identify candidate resources
within a selection window and exclude ineligible resources based on information
from a sensing window. However, vehicles often drive at different speeds,
resulting in varying amounts of data transmission with RSUs as they pass by,
which leads to unfair access. Therefore, it is essential to design an access
scheme that accounts for different vehicle speeds to achieve fair access across
the network. This paper formulates an optimization problem for vehicular
networks and proposes a multi-objective optimization scheme to address it by
adjusting the selection window in the SPS mechanism of 5G NR V2I mode-2.
Simulation results demonstrate the effectiveness of the proposed scheme.
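The sensing-based exclusion step described above can be sketched as follows. This is a toy illustration only: the number of resources, the RSRP values, and the exclusion threshold are invented for clarity and are not taken from the 3GPP specification.

```python
# Toy sketch of sensing-based SPS resource exclusion (illustrative values,
# not from the 3GPP spec): candidates whose sensed power exceeds a threshold
# are excluded, and a transmission resource is picked from the remainder.
import random

random.seed(3)

NUM_RESOURCES = 20          # resources in the selection window (invented)
RSRP_THRESHOLD = -90.0      # dBm, exclusion threshold (invented)

# Sensed power per resource, gathered during the sensing window.
sensed_rsrp = {r: random.uniform(-110.0, -70.0) for r in range(NUM_RESOURCES)}

# Exclude resources that look occupied; keep the quiet ones as candidates.
candidates = [r for r in range(NUM_RESOURCES)
              if sensed_rsrp[r] < RSRP_THRESHOLD]

# Random choice among the remaining candidates, as in SPS.
chosen = random.choice(candidates) if candidates else None
```

Adjusting how many resources survive this filter (e.g., by scaling the selection window with vehicle speed) is the lever the proposed scheme tunes.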
|
2501.08038
|
Robust Low-Light Human Pose Estimation through Illumination-Texture
Modulation
|
cs.CV
|
Low visibility and high ISO noise in extremely low-light images obscure
critical visual details and pose a significant challenge to human pose
estimation. Current methods fail to provide high-quality representations due to
reliance on pixel-level enhancements that compromise semantics and the
inability to effectively handle extreme low-light conditions for robust feature
learning. In this work, we propose a frequency-based framework for low-light
human pose estimation, rooted in the "divide-and-conquer" principle. Instead of
uniformly enhancing the entire image, our method focuses on task-relevant
information. By applying dynamic illumination correction to the low-frequency
components and low-rank denoising to the high-frequency components, we
effectively enhance both the semantic and texture information essential for
accurate pose estimation. As a result, this targeted enhancement method results
in robust, high-quality representations, significantly improving pose
estimation performance. Extensive experiments demonstrate its superiority
over state-of-the-art methods in various challenging low-light scenarios.
|
2501.08040
|
Convergence Analysis of Real-time Recurrent Learning (RTRL) for a class
of Recurrent Neural Networks
|
cs.LG math.PR stat.ML
|
Recurrent neural networks (RNNs) are commonly trained with the truncated
backpropagation-through-time (TBPTT) algorithm. For the purposes of
computational tractability, the TBPTT algorithm truncates the chain rule and
calculates the gradient on a finite block of the overall data sequence. Such an
approximation can lead to significant inaccuracies, as the block length for
the truncated backpropagation is typically limited to be much smaller than the
overall sequence length. In contrast, Real-time recurrent learning (RTRL) is an
online optimization algorithm which asymptotically follows the true gradient of
the loss on the data sequence as the number of sequence time steps $t
\rightarrow \infty$. RTRL forward propagates the derivatives of the RNN
hidden/memory units with respect to the parameters and, using the forward
derivatives, performs online updates of the parameters at each time step in the
data sequence. RTRL's online forward propagation allows for exact optimization
over extremely long data sequences, although it can be computationally costly
for models with large numbers of parameters. We prove convergence of the RTRL
algorithm for a class of RNNs. The convergence analysis establishes a fixed
point for the joint distribution of the data sequence, RNN hidden layer, and
the RNN hidden layer forward derivatives as the number of data samples from the
sequence and the number of training steps tend to infinity. We prove
convergence of the RTRL algorithm to a stationary point of the loss. Numerical
studies illustrate our theoretical results. One potential application area for
RTRL is the analysis of financial data, which typically involve long time
series and models with small to medium numbers of parameters. This makes RTRL
computationally tractable and a potentially appealing optimization method for
training models. Thus, we include an example of RTRL applied to limit order
book data.
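The forward propagation of derivatives described above can be sketched for a scalar RNN. This is a minimal illustration of the RTRL mechanics, not the paper's setting: the single-unit network, the toy target, and all constants are invented.

```python
# Minimal RTRL sketch for a scalar RNN h_t = tanh(w*h_{t-1} + u*x_t).
# Sensitivities dh/dw and dh/du are pushed FORWARD in time (no backprop),
# enabling an online parameter update at every step.
import math
import random

random.seed(0)

w, u = 0.3, 0.2          # recurrent and input weights
dh_dw, dh_du = 0.0, 0.0  # forward sensitivities dh_t/dw and dh_t/du
h, lr = 0.0, 0.05

losses = []
for _ in range(2000):
    x = random.uniform(-1.0, 1.0)
    y = 0.5 * x                       # invented toy target
    pre = w * h + u * x
    h_new = math.tanh(pre)
    d = 1.0 - h_new * h_new           # tanh'(pre)
    # RTRL recursion: new sensitivity = local term + carried-over term.
    dh_dw, dh_du = d * (h + w * dh_dw), d * (x + w * dh_du)
    err = h_new - y
    losses.append(err * err)
    # Online gradient step at every time step, using the forward derivatives.
    w -= lr * 2.0 * err * dh_dw
    u -= lr * 2.0 * err * dh_du
    h = h_new
```

The per-parameter sensitivity state is what makes RTRL expensive for large models but cheap here, matching the paper's remark that RTRL suits models with small to medium parameter counts.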
|
2501.08042
|
Exploring visual language models as a powerful tool in the diagnosis of
Ewing Sarcoma
|
cs.CV cs.AI
|
Ewing's sarcoma (ES), characterized by a high density of small round blue
cells without structural organization, presents a significant health concern,
particularly among adolescents aged 10 to 19. Artificial intelligence-based
systems for the automated analysis of histopathological images are promising
tools for supporting an accurate diagnosis of ES. In this context, this study
explores, to the best of our knowledge for the first time, the feature
extraction ability of different pre-training strategies for distinguishing ES
from other soft tissue or bone sarcomas with similar morphology in digitized
tissue microarrays. Vision-language supervision (VLS) is compared to fully-supervised
ImageNet pre-training within a multiple instance learning paradigm. Our
findings indicate a substantial improvement in diagnostic accuracy with the
adaptation of VLS using an in-domain dataset. Notably, these models not only
enhance the accuracy of predicted classes but also drastically reduce the
number of trainable parameters and computational costs.
|
2501.08043
|
PolyLUT: Ultra-low Latency Polynomial Inference with Hardware-Aware
Structured Pruning
|
cs.LG cs.AR
|
Standard deep neural network inference involves the computation of
interleaved linear maps and nonlinear activation functions. Prior work for
ultra-low latency implementations has hardcoded these operations inside FPGA
lookup tables (LUTs). However, FPGA LUTs can implement a much greater variety
of functions. In this paper, we propose a novel approach to training DNNs for
FPGA deployment using multivariate polynomials as the basic building block. Our
method takes advantage of the flexibility offered by the soft logic, hiding the
polynomial evaluation inside the LUTs with minimal overhead. By using
polynomial building blocks, we achieve the same accuracy using considerably
fewer layers of soft logic than by using linear functions, leading to
significant latency and area improvements. LUT-based implementations also face
a significant challenge: the LUT size grows exponentially with the number of
inputs. Prior work relies on a priori fixed sparsity, with results heavily
dependent on seed selection. To address this, we propose a structured pruning
strategy using a bespoke hardware-aware group regularizer that encourages a
particular sparsity pattern that leads to a small number of inputs per neuron.
We demonstrate the effectiveness of PolyLUT on three tasks: network intrusion
detection, jet identification at the CERN Large Hadron Collider, and MNIST.
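The polynomial building block idea can be sketched as a "neuron" that evaluates a degree-2 multivariate polynomial of a few inputs, the kind of function a LUT can store directly. The function below is an invented illustration; sizes and coefficients are not from PolyLUT.

```python
# Toy polynomial neuron: evaluates a multivariate polynomial of its inputs
# up to a given degree, i.e. a richer per-neuron function than a linear map.
from itertools import combinations_with_replacement

def poly_neuron(inputs, coeffs, degree=2):
    """Sum of coeff * monomial over all monomials up to `degree`."""
    terms = [()]  # the constant term (empty monomial)
    for d in range(1, degree + 1):
        terms += list(combinations_with_replacement(range(len(inputs)), d))
    assert len(coeffs) == len(terms)
    total = 0.0
    for c, t in zip(coeffs, terms):
        m = 1.0
        for i in t:
            m *= inputs[i]
        total += c * m
    return total

# 3 inputs, degree 2 -> 1 constant + 3 linear + 6 quadratic = 10 monomials.
out = poly_neuron([1.0, 2.0, -1.0], [0.5] + [0.1] * 9)
```

Because the LUT cost depends only on the number of inputs, not on the function's complexity, the quadratic terms come "for free" once the neuron's fan-in is kept small, which is what the structured pruning targets.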
|
2501.08044
|
UFGraphFR: An attempt at a federated recommendation system based on user
text characteristics
|
cs.LG
|
Federated learning has become an important research area in 'private
computing' due to the 'usable but invisible' property of data during training.
Inspired by federated learning, the federated recommendation system has
gradually become a new recommendation service architecture that can protect
users' privacy. Using user graphs to enhance federated recommendations is a
promising research topic. However, constructing a user graph without
compromising privacy in a federated learning scenario is a great challenge.
Inspired by the
simple idea that similar users often have the same attribute characteristics,
we propose a personalized federated recommendation algorithm based on a user
relationship graph constructed from user text characteristics (Graph
Federation Recommendation System based on User Text description Features,
UFGraphFR). The method uses the embedding-layer weights of the user's text
feature description to construct the user relationship graph, and introduces a
Transformer mechanism to model the user's historical interaction sequence.
Because it requires no access to users' historical interactions or specific
attributes, the federated-learning privacy principle of keeping data 'usable
but invisible' is upheld. Preliminary experiments on benchmark datasets
demonstrate the superior performance of UFGraphFR. Our experiments show that
this model can protect user privacy to some extent without affecting the
performance of the recommendation system. The code will be made available at
https://github.com/trueWangSyutung/UFGraphFR.
|
2501.08046
|
Building Symbiotic AI: Reviewing the AI Act for a Human-Centred,
Principle-Based Framework
|
cs.HC cs.AI
|
Artificial Intelligence (AI) is spreading quickly as new technologies and
services permeate modern society. Regulating the design, development, and use
of AI is necessary to avoid unethical and potentially dangerous consequences
for humans. The European Union (EU) has released a new legal framework, the AI
Act, to regulate AI by undertaking a risk-based approach to safeguard humans
during interaction. At the same time, researchers offer a new perspective on AI
systems, commonly known as Human-Centred AI (HCAI), highlighting the need for a
human-centred approach to their design. In this context, Symbiotic AI (a
subtype of HCAI) promises to enhance human capabilities through a deeper and
continuous collaboration between human intelligence and AI. This article
presents the results of a Systematic Literature Review (SLR) that aims to
identify principles that characterise the design and development of Symbiotic
AI systems while considering humans as the core of the process. Through content
analysis, four principles emerged from the review that must be applied to
create Human-Centred AI systems that can establish a symbiotic relationship
with humans. In addition, current trends and challenges were defined to
indicate open questions that may guide future research for the development of
SAI systems that comply with the AI Act.
|
2501.08047
|
Gen-A: Generalizing Ambisonics Neural Encoding to Unseen Microphone
Arrays
|
eess.AS cs.LG cs.SD
|
Using deep neural networks (DNNs) for encoding of microphone array (MA)
signals to the Ambisonics spatial audio format can surpass certain limitations
of established conventional methods, but existing DNN-based methods need to be
trained separately for each MA. This paper proposes a DNN-based method for
Ambisonics encoding that can generalize to arbitrary MA geometries unseen
during training. The method takes as inputs the MA geometry and MA signals and
uses a multi-level encoder consisting of separate paths for geometry and signal
data, where geometry features inform the signal encoder at each level. The
method is validated in simulated anechoic and reverberant conditions with one
and two sources. The results indicate improvement over conventional encoding
across the whole frequency range for dry scenes, while for reverberant scenes
the improvement is frequency-dependent.
|
2501.08049
|
Self-Attentive Spatio-Temporal Calibration for Precise Intermediate
Layer Matching in ANN-to-SNN Distillation
|
cs.AI cs.CV cs.LG
|
Spiking Neural Networks (SNNs) are promising for low-power computation due to
their event-driven mechanism but often suffer from lower accuracy compared to
Artificial Neural Networks (ANNs). ANN-to-SNN knowledge distillation can
improve SNN performance, but previous methods either focus solely on label
information, missing valuable intermediate layer features, or use a layer-wise
approach that neglects spatial and temporal semantic inconsistencies, leading
to performance degradation. To address these limitations, we propose a novel
method called self-attentive spatio-temporal calibration (SASTC). SASTC uses
self-attention to identify semantically aligned layer pairs between ANN and
SNN, both spatially and temporally. This enables the autonomous transfer of
relevant semantic information. Extensive experiments show that SASTC
outperforms existing methods, effectively solving the mismatching problem.
Superior accuracy results include 95.12% on CIFAR-10, 79.40% on CIFAR-100 with
2 time steps, and 68.69% on ImageNet with 4 time steps for static datasets, and
97.92% on DVS-Gesture and 83.60% on DVS-CIFAR10 for neuromorphic datasets. This
marks the first time SNNs have outperformed ANNs on both CIFAR-10 and
CIFAR-100, shedding new light on the potential applications of SNNs.
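The attention-based layer matching can be sketched as scoring every (ANN layer, SNN layer) pair by scaled dot-product similarity and taking a softmax over SNN layers. The tiny orthogonal "features" below are synthetic stand-ins chosen so the correct matching is known; real SASTC operates on spatial and temporal feature tensors.

```python
# Sketch of attention-based layer matching: each ANN layer attends over all
# SNN layers, and the highest-attention pair is treated as the aligned pair.
import math

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

# Three ANN layers with orthogonal toy features; the SNN layers carry the
# same features in a permuted order, so the correct matching is known.
ann = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
snn = [ann[2], ann[0], ann[1]]

matches = []
for fa in ann:
    # Scaled dot-product similarity against every SNN layer.
    scores = [sum(a * s for a, s in zip(fa, fs)) / math.sqrt(len(fa))
              for fs in snn]
    attn = softmax(scores)
    matches.append(attn.index(max(attn)))  # best-aligned SNN layer index
```

Here the recovered matching is the permutation used to build `snn`, which is the behavior the self-attention mechanism automates at scale.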
|
2501.08050
|
On the use of Statistical Learning Theory for model selection in
Structural Health Monitoring
|
stat.ML cs.LG
|
Whenever data-based systems are employed in engineering applications,
defining an optimal statistical representation is subject to the problem of
model selection. This paper focusses on how well models can generalise in
Structural Health Monitoring (SHM). Although statistical model validation in
this field is often performed heuristically, it is possible to estimate
generalisation more rigorously using the bounds provided by Statistical
Learning Theory (SLT). Therefore, this paper explores the selection process of
a kernel smoother for modelling the impulse response of a linear oscillator
from the perspective of SLT. It is demonstrated that incorporating domain
knowledge into the regression problem yields a lower guaranteed risk, thereby
enhancing generalisation.
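A kernel smoother of the kind whose selection the paper studies can be sketched as a Nadaraya-Watson estimator. The bandwidth value and the toy damped-oscillator data below are illustrative choices of ours, not the paper's experimental setup.

```python
# Nadaraya-Watson kernel smoother: prediction at a query point is the
# Gaussian-kernel weighted average of the training targets.
import math

def kernel_smooth(x_train, y_train, x_query, bandwidth=0.1):
    """Gaussian-kernel weighted average of targets at each query point."""
    preds = []
    for xq in x_query:
        ws = [math.exp(-0.5 * ((xq - xt) / bandwidth) ** 2) for xt in x_train]
        total = sum(ws)
        preds.append(sum(w * y for w, y in zip(ws, y_train)) / total)
    return preds

# Toy impulse response of a damped linear oscillator (illustrative values).
ts = [0.05 * i for i in range(100)]
ys = [math.exp(-0.5 * t) * math.sin(3.0 * t) for t in ts]
smoothed = kernel_smooth(ts, ys, ts, bandwidth=0.1)
```

The bandwidth is the model-selection knob: SLT-style bounds give a principled way to pick it, rather than relying on heuristic validation alone.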
|
2501.08053
|
Exploring Narrative Clustering in Large Language Models: A Layerwise
Analysis of BERT
|
cs.CL cs.AI
|
This study investigates the internal mechanisms of BERT, a transformer-based
large language model, with a focus on its ability to cluster narrative content
and authorial style across its layers. Using a dataset of narratives developed
via GPT-4, featuring diverse semantic content and stylistic variations, we
analyze BERT's layerwise activations to uncover patterns of localized neural
processing. Through dimensionality reduction techniques such as Principal
Component Analysis (PCA) and Multidimensional Scaling (MDS), we reveal that
BERT exhibits strong clustering based on narrative content in its later layers,
with progressively compact and distinct clusters. While strong stylistic
clustering might occur when narratives are rephrased into different text types
(e.g., fables, sci-fi, kids' stories), minimal clustering is observed for
authorial style specific to individual writers. These findings highlight BERT's
prioritization of semantic content over stylistic features, offering insights
into its representational capabilities and processing hierarchy. This study
contributes to understanding how transformer models like BERT encode linguistic
information, paving the way for future interdisciplinary research in artificial
intelligence and cognitive neuroscience.
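The layerwise clustering analysis can be sketched with PCA via SVD. The synthetic activations below stand in for BERT's layer outputs (two invented "content" clusters in 64 dimensions); the check that same-content points land closer together mirrors the study's analysis only in spirit.

```python
# Project synthetic per-narrative "layer activations" to 2-D with PCA and
# verify that points sharing a content cluster sit closer than points from
# different clusters. The data is synthetic, not real BERT activations.
import numpy as np

rng = np.random.default_rng(0)

# 20 "narratives", 2 content clusters, 64-d fake layer activations.
centers = rng.normal(size=(2, 64)) * 3.0
acts = np.stack([centers[i % 2] + rng.normal(size=64) for i in range(20)])

# PCA: center the data, then project onto the top-2 right singular vectors.
X = acts - acts.mean(axis=0)
_, _, vt = np.linalg.svd(X, full_matrices=False)
proj = X @ vt[:2].T          # (20, 2) low-dimensional embedding
```

With real activations, running this per layer would reveal where in the network the content clusters become compact and distinct.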
|
2501.08057
|
Optimizing Speech Multi-View Feature Fusion through Conditional
Computation
|
eess.AS cs.AI cs.CL cs.SD
|
Recent advancements have highlighted the efficacy of self-supervised learning
(SSL) features in various speech-related tasks, providing lightweight and
versatile multi-view speech representations. However, our study reveals that
while SSL features expedite model convergence, they conflict with traditional
spectral features like FBanks in terms of update directions. In response, we
propose a novel generalized feature fusion framework grounded in conditional
computation, featuring a gradient-sensitive gating network and a multi-stage
dropout strategy. This framework mitigates feature conflicts and bolsters model
robustness to multi-view input features. By integrating SSL and spectral
features, our approach accelerates convergence and maintains performance on par
with spectral models across multiple speech translation tasks on the MuST-C
dataset.
|
2501.08058
|
Range-Only Dynamic Output Feedback Controller for Safe and Secure Target
Circumnavigation
|
eess.SY cs.SY
|
The safety and security of robotic systems are paramount when navigating
around a hostile target. This paper addresses the problem of circumnavigating
an unknown target by a unicycle robot while ensuring it maintains a desired
safe distance and remains within the sensing region around the target
throughout its motion. The proposed control design methodology is based on the
construction of a joint Lyapunov function that incorporates: (i) a quadratic
potential function characterizing the desired target-circumnavigation
objective, and (ii) a barrier Lyapunov function-based potential term to enforce
safety and sensing constraints on the robot's motion. A notable feature of the
proposed control design is its reliance exclusively on local range measurements
between the robot and the target, realized using a dynamic output feedback
controller that treats the range as the only observable output for feedback.
Using the Lyapunov stability theory, we show that the desired equilibrium of
the closed-loop system is asymptotically stable, and the prescribed safety and
security constraints are met under the proposed controllers. We also obtain
restrictive bounds on the post-design signals and provide both simulation and
experimental results to validate the theoretical contributions.
|
2501.08062
|
Skeleton and Font Generation Network for Zero-shot Chinese Character
Generation
|
cs.CV
|
Automatic font generation remains a challenging research issue, primarily due
to the vast number of Chinese characters, each with unique and intricate
structures. Our investigation of previous studies reveals inherent bias capable
of causing structural changes in characters. Specifically, when generating a
Chinese character similar to, but different from, those in the training
samples, the bias is prone to either correcting or ignoring these subtle
variations. To address this concern, we propose a novel Skeleton and Font
Generation Network (SFGN) to achieve a more robust Chinese character font
generation. Our approach includes a skeleton builder and font generator. The
skeleton builder synthesizes content features using low-resource text input,
enabling our technique to realize font generation independently of content
image inputs. Unlike previous font generation methods that treat font style as
a global embedding, we introduce a font generator to align content and style
features on the radical level, which is a brand-new perspective for font
generation. In addition to common characters, we also conduct experiments on
misspelled characters, a substantial portion of which differ only slightly from the
common ones. Our approach visually demonstrates the efficacy of generated
images and outperforms current state-of-the-art font generation methods.
Moreover, we believe that misspelled character generation has significant
pedagogical implications and verify this supposition through experiments. We
used generated misspelled characters as data augmentation in Chinese character
error correction tasks, simulating the scenario where students learn
handwritten Chinese characters with the help of misspelled characters. The
significantly improved performance of error correction tasks demonstrates the
effectiveness of our proposed approach and the value of misspelled character
generation.
|
2501.08067
|
Optimal Policy Adaptation under Covariate Shift
|
cs.LG
|
Transfer learning of prediction models has been extensively studied, while
the corresponding policy learning approaches are rarely discussed. In this
paper, we propose principled approaches for learning the optimal policy in the
target domain by leveraging two datasets: one with full information from the
source domain and the other from the target domain with only covariates. First,
under the setting of covariate shift, we formulate the problem from a
perspective of causality and present the identifiability assumptions for the
reward induced by a given policy. Then, we derive the efficient influence
function and the semiparametric efficiency bound for the reward. Based on this,
we construct a doubly robust and semiparametric efficient estimator for the
reward and then learn the optimal policy by optimizing the estimated reward.
Moreover, we theoretically analyze the bias and the generalization error bound
for the learned policy. Furthermore, in the presence of both covariate and
concept shifts, we propose a novel sensitivity analysis method to evaluate the
robustness of the proposed policy learning approach. Extensive experiments
demonstrate that the approach not only estimates the reward more accurately but
also yields a policy that closely approximates the theoretically optimal
policy.
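One ingredient behind reward estimation under covariate shift is importance weighting, which re-targets a source-domain average to the target covariate distribution. The sketch below shows only this ingredient, not the paper's doubly robust, semiparametric-efficient construction; the Gaussian densities and the reward function are invented.

```python
# Self-normalized importance weighting: estimate E_q[r(X)] using samples
# drawn from p, by weighting each sample with w(x) = q(x) / p(x).
import math
import random

random.seed(7)

def normal_pdf(x, mu):
    """Density of N(mu, 1) at x."""
    return math.exp(-0.5 * (x - mu) ** 2) / math.sqrt(2.0 * math.pi)

# Source covariates x ~ p = N(0,1); target covariates x ~ q = N(1,1).
src = [random.gauss(0.0, 1.0) for _ in range(5000)]
reward = lambda x: x        # toy reward, so E_p[r] = 0 and E_q[r] = 1

w = [normal_pdf(x, 1.0) / normal_pdf(x, 0.0) for x in src]
naive = sum(reward(x) for x in src) / len(src)       # targets the source mean
weighted = (sum(wi * reward(x) for wi, x in zip(w, src))
            / sum(w))                                # targets the target mean
```

The doubly robust estimator in the paper combines such weights with an outcome model so that the estimate stays consistent if either component is correctly specified.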
|
2501.08068
|
A Roadmap to Guide the Integration of LLMs in Hierarchical Planning
|
cs.AI
|
Recent advances in Large Language Models (LLMs) are fostering their
integration into several reasoning-related fields, including Automated Planning
(AP). However, their integration into Hierarchical Planning (HP), a subfield of
AP that leverages hierarchical knowledge to enhance planning performance,
remains largely unexplored. In this preliminary work, we propose a roadmap to
address this gap and harness the potential of LLMs for HP. To this end, we
present a taxonomy of integration methods, exploring how LLMs can be utilized
within the HP life cycle. Additionally, we provide a benchmark with a
standardized dataset for evaluating the performance of future LLM-based HP
approaches, and present initial results for a state-of-the-art HP planner and
LLM planner. As expected, the latter exhibits limited performance (3\% correct
plans, and none with a correct hierarchical decomposition) but serves as a
valuable baseline for future approaches.
|
2501.08071
|
CuAsmRL: Optimizing GPU SASS Schedules via Deep Reinforcement Learning
|
cs.AR cs.LG
|
Large language models (LLMs) are notable for their substantial computational
requirements. To mitigate the cost, researchers develop specialized CUDA
kernels, which often fuse several tensor operations to maximize the utilization
of GPUs as much as possible. However, those specialized kernels may still leave
performance on the table as CUDA assembly experts show that manual optimization
of GPU SASS schedules can lead to better performance, and trial-and-error is
largely employed to manually find the best GPU SASS schedules.
In this work, we employ an automatic approach to optimize GPU SASS schedules,
which thus can be integrated into existing compiler frameworks. The key to
automatic optimization is training an RL agent to mimic how human experts
perform manual scheduling. To this end, we formulate an assembly game, where RL
agents can play to find the best GPU SASS schedules. The assembly game starts
from a \textit{-O3} optimized SASS schedule, and the RL agents can iteratively
apply actions to mutate the current schedules. Positive rewards are generated
if the mutated schedules get higher throughput by executing on GPUs.
Experiments show that CuAsmRL can further improve the performance of existing
specialized CUDA kernels transparently by up to $26\%$, and on average $9\%$.
Moreover, it is used as a tool to reveal potential optimization moves learned
automatically.
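The mutate-and-measure loop of the assembly game can be sketched with a greedy accept/revert rule standing in for the RL agent. Everything here is a toy stand-in: the "schedule" is a permutation, and the synthetic throughput function replaces actually executing SASS schedules on a GPU.

```python
# Toy "assembly game": start from a schedule, apply mutations (swaps), and
# keep a mutation only if the measured reward does not decrease.
import random

random.seed(0)

N = 8
schedule = list(range(N))        # stand-in for an instruction ordering

def throughput(s):
    """Synthetic reward: higher when elements sit near their 'home' slots."""
    return -sum(abs(v - i) for i, v in enumerate(s))

random.shuffle(schedule)         # the -O3 starting point, scrambled here
initial = throughput(schedule)
best = initial
for _ in range(500):
    i, j = random.randrange(N), random.randrange(N)
    schedule[i], schedule[j] = schedule[j], schedule[i]      # mutate: swap
    r = throughput(schedule)
    if r >= best:
        best = r                                             # keep it
    else:
        schedule[i], schedule[j] = schedule[j], schedule[i]  # revert
```

An RL agent replaces the greedy rule with a learned policy over mutation actions, which is what lets CuAsmRL generalize the moves it discovers across kernels.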
|
2501.08072
|
Evaluating Human Perception of Novel View Synthesis: Subjective Quality
Assessment of Gaussian Splatting and NeRF in Dynamic Scenes
|
cs.CV eess.IV
|
Gaussian Splatting (GS) and Neural Radiance Fields (NeRF) are two
groundbreaking technologies that have revolutionized the field of Novel View
Synthesis (NVS), enabling immersive photorealistic rendering and user
experiences by synthesizing multiple viewpoints from a set of images of sparse
views. The potential applications of NVS, such as high-quality virtual and
augmented reality, detailed 3D modeling, and realistic medical organ imaging,
underscore the importance of quality assessment of NVS methods from the
perspective of human perception. Although some previous studies have explored
subjective quality assessments for NVS technology, they still face several
challenges, especially in NVS methods selection, scenario coverage, and
evaluation methodology. To address these challenges, we conducted two
subjective experiments for the quality assessment of NVS technologies
containing both GS-based and NeRF-based methods, focusing on dynamic and
real-world scenes. This study covers 360{\deg}, front-facing, and
single-viewpoint videos while providing a richer and greater number of real
scenes. Moreover, to our knowledge, it is the first study to explore the impact
of NVS methods in dynamic scenes with moving objects. The two types of
subjective experiments
help to fully comprehend the influences of different viewing paths from a human
perception perspective and pave the way for future development of
full-reference and no-reference quality metrics. In addition, we established a
comprehensive benchmark of various state-of-the-art objective metrics on the
proposed database, highlighting that existing methods still struggle to
accurately capture subjective quality. The results give us some insights into
the limitations of existing NVS methods and may promote the development of new
NVS methods.
|
2501.08074
|
Artificial Liver Classifier: A New Alternative to Conventional Machine
Learning Models
|
cs.AI
|
Supervised machine learning classifiers often encounter challenges related to
performance, accuracy, and overfitting. This paper introduces the Artificial
Liver Classifier (ALC), a novel supervised learning classifier inspired by the
human liver's detoxification function. The ALC is characterized by its
simplicity, speed, hyperparameters-free, ability to reduce overfitting, and
effectiveness in addressing multi-classification problems through
straightforward mathematical operations. To optimize the ALC's parameters, an
improved FOX optimization algorithm (IFOX) is employed as the training method.
The proposed ALC was evaluated on five benchmark machine learning datasets:
Iris Flower, Breast Cancer Wisconsin, Wine, Voice Gender, and MNIST. The
results demonstrated competitive performance, with the ALC achieving 100%
accuracy on the Iris dataset, surpassing logistic regression, multilayer
perceptron, and support vector machine. Similarly, on the Breast Cancer
dataset, it achieved 99.12% accuracy, outperforming XGBoost and logistic
regression. Across all datasets, the ALC consistently exhibited lower
overfitting gaps and loss compared to conventional classifiers. These findings
highlight the potential of leveraging biological process simulations to develop
efficient machine learning models and open new avenues for innovation in the
field.
|
2501.08077
|
HydroelasticTouch: Simulation of Tactile Sensors with Hydroelastic
Contact Surfaces
|
cs.RO
|
Thanks to recent advancements in the development of inexpensive,
high-resolution tactile sensors, touch sensing has become popular in
contact-rich robotic manipulation tasks. With the surge of data-driven methods
and their requirement for substantial datasets, several methods of simulating
tactile sensors have emerged in the tactile research community to overcome
real-world data collection limitations. These simulation approaches can be
split into two main categories: fast but inaccurate (soft) point-contact models
and slow but accurate finite element modeling. In this work, we present a novel
approach to simulating pressure-based tactile sensors using the hydroelastic
contact model, which provides a high degree of physical realism at a reasonable
computational cost. This model produces smooth contact forces for soft-to-soft
and soft-to-rigid contacts along even non-convex contact surfaces. Pressure
values are approximated at each point of the contact surface and can be
integrated to calculate sensor outputs. We validate our models' capacity to
synthesize real-world tactile data by conducting zero-shot sim-to-real transfer
of a model for object state estimation. Our simulation is available as a
plug-in to our open-source, MuJoCo-based simulator.
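The final step described above, turning a pressure field on the contact surface into a sensor reading, can be sketched as a discrete surface integral. The patch geometry and pressure values below are invented for illustration.

```python
# Approximate the contact surface by small patches and sum pressure * area
# to obtain the normal contact force (units: m^2, Pa -> N).
patches = [
    (1e-6, 1500.0),   # patch near the contact center: higher pressure
    (1e-6, 1200.0),
    (1e-6, 800.0),
    (1e-6, 300.0),    # patch near the contact boundary: pressure falls off
]

# Discrete surface integral: F = sum over patches of p_i * A_i.
force = sum(area * pressure for area, pressure in patches)
```

In the simulator, the per-patch pressures themselves come from the hydroelastic contact model, so the same values feed both the force computation and the simulated taxel outputs.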
|
2501.08083
|
Benchmarking Vision Foundation Models for Input Monitoring in Autonomous
Driving
|
cs.CV
|
Deep neural networks (DNNs) remain challenged by distribution shifts in
complex open-world domains like automated driving (AD): Absolute robustness
against yet unknown novel objects (semantic shift) or styles like lighting
conditions (covariate shift) cannot be guaranteed. Hence, reliable
operation-time monitors for identification of out-of-training-data-distribution
(OOD) scenarios are imperative. Current approaches for OOD classification are
untested for complex domains like AD, are limited in the kinds of shifts they
detect, or even require supervision with OOD samples. To prepare for
unanticipated shifts, we instead establish a framework around a principled,
unsupervised, and model-agnostic method that unifies detection of all kinds of
shifts: Find a full model of the training data's feature distribution, to then
use its density at new points as in-distribution (ID) score. To implement this,
we propose to combine the newly available Vision Foundation Models (VFM) as
feature extractors with one of four alternative density modeling techniques. In
an extensive benchmark of 4 VFMs against 20 baselines, we show the superior
performance of VFM feature encodings compared to shift-specific OOD monitors.
Additionally, we find that sophisticated architectures matter more than larger
latent-space dimensionality, and our method identifies samples with higher risk of
errors on downstream tasks, despite being model-agnostic. This suggests that
VFMs are promising to realize model-agnostic, unsupervised, reliable safety
monitors in complex vision tasks.
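As an illustrative sketch (not the authors' implementation), the core idea of using a density model over feature encodings as an in-distribution score can be shown with the simplest such model, a full-covariance Gaussian; the feature dimensions and data here are toy stand-ins for VFM embeddings:

```python
import numpy as np

def fit_gaussian(feats):
    """Fit a full-covariance Gaussian to ID training features (N x D)."""
    mu = feats.mean(axis=0)
    cov = np.cov(feats, rowvar=False) + 1e-6 * np.eye(feats.shape[1])
    cov_inv = np.linalg.inv(cov)
    _, logdet = np.linalg.slogdet(cov)
    return mu, cov_inv, logdet

def id_score(x, mu, cov_inv, logdet):
    """Log-density of x under the fitted Gaussian; low score => likely OOD."""
    d = x - mu
    maha = d @ cov_inv @ d
    k = mu.shape[0]
    return -0.5 * (maha + logdet + k * np.log(2 * np.pi))

# Demo: ID training features vs. a far-away (OOD-like) test point.
rng = np.random.default_rng(0)
train_feats = rng.normal(size=(500, 4))      # stand-in for VFM embeddings
mu, cov_inv, logdet = fit_gaussian(train_feats)
score_id = id_score(np.zeros(4), mu, cov_inv, logdet)
score_ood = id_score(np.full(4, 8.0), mu, cov_inv, logdet)
```

The benchmark's alternative density models would replace the Gaussian fit, while the thresholding logic on the score stays the same.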
|
2501.08085
|
Dynamic Multimodal Sentiment Analysis: Leveraging Cross-Modal Attention
for Enabled Classification
|
cs.CL cs.LG
|
This paper explores the development of a multimodal sentiment analysis model
that integrates text, audio, and visual data to enhance sentiment
classification. The goal is to improve emotion detection by capturing the
complex interactions between these modalities, thereby enabling more accurate
and nuanced sentiment interpretation. The study evaluates three feature fusion
strategies -- late stage fusion, early stage fusion, and multi-headed attention
-- within a transformer-based architecture. Experiments were conducted using
the CMU-MOSEI dataset, which includes synchronized text, audio, and visual
inputs labeled with sentiment scores. Results show that early stage fusion
significantly outperforms late stage fusion, achieving an accuracy of 71.87\%,
while the multi-headed attention approach offers marginal improvement, reaching
72.39\%. The findings suggest that integrating modalities early in the process
enhances sentiment classification, while attention mechanisms may have limited
impact within the current framework. Future work will focus on refining feature
fusion techniques, incorporating temporal data, and exploring dynamic feature
weighting to further improve model performance.
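The difference between the two fusion strategies compared above can be sketched with linear classifier heads (dimensions and weights here are illustrative toys, not the paper's transformer architecture):

```python
import numpy as np

rng = np.random.default_rng(1)
# Toy per-utterance features (dimensions are illustrative, not from the paper).
text_f = rng.normal(size=16)
audio_f = rng.normal(size=8)
video_f = rng.normal(size=8)
n_classes = 3

# Late-stage fusion: one classifier head per modality, then average the scores,
# so cross-modal interactions are never modeled jointly.
heads = [rng.normal(size=(f.shape[0], n_classes)) for f in (text_f, audio_f, video_f)]
late_scores = np.mean([f @ W for f, W in zip((text_f, audio_f, video_f), heads)], axis=0)

# Early-stage fusion: concatenate features first, then classify jointly,
# letting the classifier exploit interactions between modalities.
W_joint = rng.normal(size=(16 + 8 + 8, n_classes))
early_scores = np.concatenate([text_f, audio_f, video_f]) @ W_joint
```

The reported gain of early over late fusion is consistent with this structural difference: only the joint classifier sees cross-modal feature combinations.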
|
2501.08086
|
NOMTO: Neural Operator-based symbolic Model approximaTion and discOvery
|
cs.AI cs.SC
|
While many physical and engineering processes are most effectively described
by non-linear symbolic models, existing non-linear symbolic regression (SR)
methods are restricted to a limited set of continuous algebraic functions,
thereby limiting their applicability for discovering higher-order non-linear
differential relations. In this work, we introduce the Neural Operator-based
symbolic Model approximaTion and discOvery (NOMTO) method, a novel approach to
symbolic model discovery that leverages Neural Operators to encompass a broad
range of symbolic operations. We demonstrate that NOMTO can successfully
identify symbolic expressions containing elementary functions with
singularities, special functions, and derivatives. Additionally, our
experiments demonstrate that NOMTO can accurately rediscover second-order
non-linear partial differential equations. By broadening the set of symbolic
operations available for discovery, NOMTO significantly advances the
capabilities of existing SR methods. It provides a powerful and flexible tool
for model discovery, capable of capturing complex relations in a variety of
physical systems.
|
2501.08088
|
AgentPose: Progressive Distribution Alignment via Feature Agent for
Human Pose Distillation
|
cs.CV
|
Pose distillation is widely adopted to reduce model size in human pose
estimation. However, existing methods primarily emphasize the transfer of
teacher knowledge while often neglecting the performance degradation resulting
from the capacity gap between teacher and student. To address this
issue, we propose AgentPose, a novel pose distillation method that integrates a
feature agent to model the distribution of teacher features and progressively
aligns the distribution of student features with that of the teacher,
effectively overcoming the capacity gap and enhancing the ability of knowledge
transfer. Our comprehensive experiments conducted on the COCO dataset
substantiate the effectiveness of our method in knowledge transfer,
particularly in scenarios with a high capacity gap.
|
2501.08090
|
Hierarchical Autoscaling for Large Language Model Serving with Chiron
|
cs.DC cs.AI
|
Large language model (LLM) serving is becoming an increasingly important
workload for cloud providers. Based on performance SLO requirements, LLM
inference requests can be divided into (a) interactive requests with tight
SLOs on the order of seconds, and (b) batch requests with relaxed SLOs on
the order of minutes to hours. These SLOs can degrade based on arrival
rates, multiplexing, and configuration parameters, thus necessitating
resource autoscaling of serving instances and their batch sizes. However,
previous autoscalers for LLM serving do not consider request SLOs, leading to
unnecessary scaling and resource under-utilization. To address these
limitations, we introduce Chiron, an autoscaler that uses the idea of
hierarchical backpressure estimated using queue size, utilization, and SLOs.
Our experiments show that Chiron achieves up to 90% higher SLO attainment and
improves GPU efficiency by up to 70% compared to existing solutions.
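A minimal sketch of an SLO-aware backpressure signal driving a scaling decision (the thresholds, rates, and function names are illustrative assumptions, not Chiron's actual hierarchy):

```python
def backpressure(queue_len, service_rate, slo_seconds):
    """Estimated queueing delay relative to the SLO budget (>1 => falling behind)."""
    est_delay = queue_len / max(service_rate, 1e-9)
    return est_delay / slo_seconds

def autoscale(instances, queue_len, service_rate_per_instance, slo_seconds,
              scale_up_at=1.0, scale_down_at=0.5):
    """Grow when backpressure threatens the SLO, shrink when there is slack,
    otherwise hold - avoiding scaling that ignores request SLOs."""
    bp = backpressure(queue_len, instances * service_rate_per_instance, slo_seconds)
    if bp > scale_up_at:
        return instances + 1
    if bp < scale_down_at and instances > 1:
        return instances - 1
    return instances
```

Because the signal is normalized by the SLO, interactive (seconds) and batch (minutes-to-hours) requests naturally produce different scaling urgency from the same queue length.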
|
2501.08094
|
CellOMaps: A Compact Representation for Robust Classification of Lung
Adenocarcinoma Growth Patterns
|
eess.IV cs.CV
|
Lung adenocarcinoma (LUAD) is a morphologically heterogeneous disease,
characterized by five primary histological growth patterns. The classification
of such patterns is crucial due to their direct relation to prognosis but the
high subjectivity and observer variability pose a major challenge. Although
several studies have developed machine learning methods for growth pattern
classification, they either only report the predominant pattern per slide or
lack proper evaluation. We propose a generalizable machine learning pipeline
capable of classifying lung tissue into one of the five patterns or as
non-tumor. The proposed pipeline's strength lies in a novel compact Cell
Organization Maps (cellOMaps) representation that captures the cellular spatial
patterns from Hematoxylin and Eosin whole slide images (WSIs). The proposed
pipeline provides state-of-the-art performance on LUAD growth pattern
classification when evaluated on both internal unseen slides and external
datasets, significantly outperforming the current approaches. In addition, our
preliminary results show that the model's outputs can be used to predict
patients' Tumor Mutational Burden (TMB) levels.
|
2501.08096
|
Hybrid Action Based Reinforcement Learning for Multi-Objective
Compatible Autonomous Driving
|
cs.RO cs.AI cs.ET cs.LG
|
Reinforcement Learning (RL) has shown excellent performance in solving
decision-making and control problems of autonomous driving, which is
increasingly applied in diverse driving scenarios. However, driving is a
multi-attribute problem, leading to challenges in achieving multi-objective
compatibility for current RL methods, especially in both policy execution and
policy iteration. On the one hand, the common action space structure with
single action type limits driving flexibility or results in large behavior
fluctuations during policy execution. On the other hand, a multi-attribute
weighted single reward function results in the agent paying disproportionate
attention to certain objectives during policy iteration. To this end, we
propose a Multi-objective Ensemble-Critic reinforcement learning method with
Hybrid Parametrized Action for multi-objective compatible autonomous driving.
Specifically, a parameterized action space is constructed to generate hybrid
driving actions, combining both abstract guidance and concrete control
commands. A multi-objective critic architecture is constructed over
multiple attribute rewards to ensure simultaneous focus on different
driving objectives. Additionally, an uncertainty-based exploration strategy is
introduced to help the agent approach a viable driving policy faster.
Experimental results on both a simulated traffic environment and the HighD
dataset demonstrate that our method achieves multi-objective compatible
autonomous driving in terms of driving efficiency, action consistency, and
safety. It enhances overall driving performance while significantly
improving training efficiency.
|
2501.08097
|
Guiding the classification of hepatocellular carcinoma on 3D CT-scans
using deep and handcrafted radiological features
|
cs.CV cs.AI
|
Hepatocellular carcinoma (HCC) is the most widespread primary liver cancer
worldwide ($\sim$80\% of liver tumors). The gold standard for HCC diagnosis is
liver biopsy. However, in the clinical routine, expert radiologists provide a
visual diagnosis by interpreting hepatic CT-scans according to a standardized
protocol, the LI-RADS, which uses five radiological criteria with an associated
decision tree. In this paper, we propose an automatic approach to predict
histology-proven HCC from CT images in order to reduce radiologists'
inter-variability. We first show that standard deep learning methods fail to
accurately predict HCC from CT-scans on a challenging database, and propose a
two-step approach inspired by the LI-RADS system to improve the performance. We
achieve improvements from 6 to 18 points of AUC with respect to deep learning
baselines trained with different architectures. We also provide clinical
validation of our method, achieving results that outperform non-expert
radiologists and are on par with expert ones.
|
2501.08099
|
Smooth Handovers via Smoothed Online Learning
|
cs.NI cs.LG
|
With users demanding seamless connectivity, handovers (HOs) have become a
fundamental element of cellular networks. However, optimizing HOs is a
challenging problem, further exacerbated by the growing complexity of mobile
networks. This paper presents the first countrywide study of HO optimization,
through the prism of Smoothed Online Learning (SOL). We first analyze an
extensive dataset from a commercial mobile network operator (MNO) in Europe
with more than 40M users, to understand and reveal important features and
performance impacts on HOs. Our findings highlight a correlation between HO
failures/delays and the characteristics of radio cells and end-user devices,
showcasing the impact of heterogeneity in today's mobile networks. We
subsequently model UE-cell associations as dynamic decisions and propose a
realistic system model for smooth and accurate HOs that extends existing
approaches by (i) incorporating device and cell features on HO optimization,
and (ii) eliminating (prior) strong assumptions about requiring future signal
measurements and knowledge of end-user mobility. Our algorithm, aligned with
the O-RAN paradigm, provides robust dynamic regret guarantees, even in
challenging environments, and shows superior performance in multiple scenarios
with real-world and synthetic data.
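The essence of a smoothed-online-learning decision, balancing a hitting cost against a switching penalty, can be sketched as follows (the cost model and names are illustrative assumptions, not the paper's algorithm):

```python
def sol_step(prev_cell, candidate_cells, signal_quality, switch_cost):
    """One smoothed-online-learning step for a UE-cell association:
    pick the cell minimizing hitting cost (poor signal) plus a
    switching penalty incurred only when changing cells (the HO cost)."""
    def cost(cell):
        hit = 1.0 - signal_quality[cell]          # worse signal => higher cost
        switch = 0.0 if cell == prev_cell else switch_cost
        return hit + switch
    return min(candidate_cells, key=cost)
```

The switching penalty is what makes the handover "smooth": a marginally better neighbor does not trigger a HO, but a clearly better one does.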
|
2501.08102
|
Consistency of Responses and Continuations Generated by Large Language
Models on Social Media
|
cs.CL cs.AI cs.HC
|
Large Language Models (LLMs) demonstrate remarkable capabilities in text
generation, yet their emotional consistency and semantic coherence in social
media contexts remain insufficiently understood. This study investigates how
LLMs handle emotional content and maintain semantic relationships through
continuation and response tasks using two open-source models: Gemma and Llama.
By analyzing climate change discussions from Twitter and Reddit, we examine
emotional transitions, intensity patterns, and semantic similarity between
human-authored and LLM-generated content. Our findings reveal that while both
models maintain high semantic coherence, they exhibit distinct emotional
patterns: Gemma shows a tendency toward negative emotion amplification,
particularly anger, while maintaining certain positive emotions like optimism.
Llama demonstrates superior emotional preservation across a broader spectrum of
affects. Both models systematically generate responses with attenuated
emotional intensity compared to human-authored content and show a bias toward
positive emotions in response tasks. Additionally, both models maintain strong
semantic similarity with original texts, though performance varies between
continuation and response tasks. These findings provide insights into LLMs'
emotional and semantic processing capabilities, with implications for their
deployment in social media contexts and human-AI interaction design.
|
2501.08103
|
A Comparative Analysis of Transformer-less Inverter Topologies for
Grid-Connected PV Systems: Minimizing Leakage Current and THD
|
eess.SY cs.SY
|
The integration of distributed energy resources (DERs), particularly
photovoltaic (PV) systems, into power grids has gained major attention due to
their environmental and economic benefits. Although traditional
transformer-based grid-connected PV inverters provide galvanic isolation
against leakage current, they suffer from the major drawbacks of high cost,
lower efficiency, and larger size. Transformer-less grid-connected PV inverters
(TLGI) have emerged as a prominent alternative, as they achieve higher
efficiency, compact design, and lower cost. However, due to a lack of galvanic
isolation, TLGIs are highly affected by leakage current caused by the
fluctuation of the common-mode voltage (CMV). This paper investigates three
topologies, H4, H5, and HERIC, comparing their CMV,
differential-mode voltage (DMV), total harmonic distortion (THD), and leakage
current. A simulation was conducted for each topology in MATLAB/Simulink
R2023a, and the results demonstrate that the H5 topology achieves a balance
between low leakage current, reduced THD, and optimal operational efficiency,
making it suitable for practical application.
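For background (standard textbook relations for a single-phase full bridge with output terminals A and B referenced to the negative DC bus N; not specific to the simulated topologies), the quantities compared above are commonly written as

```latex
v_{CM} = \frac{v_{AN} + v_{BN}}{2}, \qquad
v_{DM} = v_{AN} - v_{BN}, \qquad
i_{leak} \approx C_{PV}\,\frac{\mathrm{d}v_{CM}}{\mathrm{d}t},
```

where $C_{PV}$ is the parasitic capacitance of the PV array to ground. Topologies such as H5 and HERIC suppress leakage current by keeping $v_{CM}$ nearly constant during the freewheeling intervals.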
|
2501.08105
|
About the Rankin and Berg\'e-Martinet Constants from a Coding Theory
View Point
|
cs.IT math.IT
|
The Rankin constant $\gamma_{n,l}$ measures the largest volume of the densest
rank-$l$ sublattice of a lattice $\Lambda\subset \mathbb{R}^n$ over all such
lattices of rank $n$. The Berg\'e-Martinet constant $\gamma'_{n,l}$ is a
variation that
takes into account the dual lattice. Exact values and bounds for both constants
are mostly open in general. We consider the case of lattices built from linear
codes, and look at bounds on $\gamma_{n,l}$ and $\gamma'_{n,l}$. In particular,
we revisit known results for $n=3,4,5,8$ and give lower and upper bounds for
the open cases $\gamma_{5,2},\gamma_{7,2}$ and $\gamma'_{5,2},\gamma'_{7,2}$.
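For orientation (in one common normalization; conventions differ across the literature by squaring and volume scaling), the Rankin constant can be written as

```latex
\gamma_{n,l} \;=\; \sup_{\substack{\Lambda \subset \mathbb{R}^n \\ \operatorname{rk}\Lambda = n}}
\; \min_{\substack{M \subseteq \Lambda \\ \operatorname{rk} M = l}}
\left( \frac{\operatorname{vol}(M)}{\operatorname{vol}(\Lambda)^{l/n}} \right)^{2},
```

so that $\gamma_{n,1} = \gamma_n$ recovers the Hermite constant. The Berg\'e-Martinet variant brings in the dual lattice $\Lambda^*$; for instance, for $l=1$, $\gamma'_n = \sup_{\Lambda} \lambda_1(\Lambda)\,\lambda_1(\Lambda^*)$, where $\lambda_1$ denotes the length of a shortest nonzero vector.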
|
2501.08109
|
Data-driven inventory management for new products: A warm-start and
adjusted Dyna-$Q$ approach
|
cs.LG cs.AI cs.CE
|
In this paper, we propose a novel reinforcement learning algorithm for
inventory management of newly launched products with no or limited historical
demand information. The algorithm follows the classic Dyna-$Q$ structure,
balancing the model-based and model-free approaches, while accelerating the
training process of Dyna-$Q$ and mitigating the model discrepancy generated by
the model-based feedback. Warm-start information from the demand data of
existing similar products can be incorporated into the algorithm to further
stabilize the early-stage training and reduce the variance of the estimated
optimal policy. Our approach is validated through a case study of bakery
inventory management with real data. The adjusted Dyna-$Q$ shows up to a 23.7%
reduction in average daily cost compared with $Q$-learning, and up to a 77.5%
reduction in training time within the same horizon compared with classic
Dyna-$Q$. With the warm-start information incorporated, the
adjusted Dyna-$Q$ achieves the lowest total cost, the lowest variance in total
cost, and relatively low shortage percentages among all the algorithms over a
30-day testing period.
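The classic Dyna-$Q$ structure referenced above can be sketched as follows; the environment, hyperparameters, and the `q_init` warm-start hook are illustrative assumptions, not the paper's adjusted algorithm:

```python
import random

def dyna_q(env_step, states, actions, episodes=60, planning_steps=10,
           alpha=0.1, gamma=0.95, eps=0.1, q_init=None):
    """Classic Dyna-Q: each real transition updates both the Q-table and a
    learned one-step model; the model then replays simulated transitions
    (planning). `q_init` is where warm-start values derived from similar
    products' demand data could be injected (an illustrative hook)."""
    Q = {(s, a): 0.0 for s in states for a in actions}
    if q_init:
        Q.update(q_init)                      # warm start
    model = {}                                # (s, a) -> (reward, next_state)
    for _ in range(episodes):
        s = random.choice(states)
        if random.random() < eps:             # epsilon-greedy action choice
            a = random.choice(actions)
        else:
            a = max(actions, key=lambda b: Q[s, b])
        r, s2 = env_step(s, a)                # real (model-free) experience
        Q[s, a] += alpha * (r + gamma * max(Q[s2, b] for b in actions) - Q[s, a])
        model[s, a] = (r, s2)
        for _ in range(planning_steps):       # model-based (simulated) updates
            (ps, pa), (pr, ps2) = random.choice(list(model.items()))
            Q[ps, pa] += alpha * (pr + gamma * max(Q[ps2, b] for b in actions) - Q[ps, pa])
    return Q

# Demo: a one-state toy "inventory" where ordering (action 1) yields reward 1.
random.seed(0)
Q = dyna_q(lambda s, a: (float(a), 0), states=[0], actions=[0, 1], eps=0.3)
```

The `planning_steps` loop is what accelerates learning over plain $Q$-learning, and is also where model discrepancy can creep in, motivating the paper's adjustments.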
|
2501.08111
|
EarthView: A Large Scale Remote Sensing Dataset for Self-Supervision
|
cs.CV
|
This paper presents EarthView, a comprehensive dataset specifically designed
for self-supervision on remote sensing data, intended to enhance deep learning
applications on Earth monitoring tasks. The dataset spans 15 terapixels of
global remote-sensing data, combining imagery from a diverse range of sources,
including NEON, Sentinel, and a novel release of 1m spatial resolution data
from Satellogic. Our dataset provides a wide spectrum of image data with
varying resolutions, harnessed from different sensors and organized coherently
into an accessible Hugging Face dataset in Parquet format. This data spans five
years, from 2017 to 2022. Accompanying the dataset, we introduce EarthMAE, a
tailored Masked Autoencoder, developed to tackle the distinct challenges of
remote sensing data. Trained in a self-supervised fashion, EarthMAE effectively
processes different data modalities such as hyperspectral, multispectral,
topographical data, segmentation maps, and temporal structure. This model helps
us show that pre-training on Satellogic data improves performance on downstream
tasks. While there is still a gap to fill in MAE for heterogeneous data, we
regard this innovative combination of an expansive, diverse dataset and a
versatile model adapted for self-supervised learning as a stride forward in
deep learning for Earth monitoring.
|
2501.08114
|
Change Captioning in Remote Sensing: Evolution to SAT-Cap -- A
Single-Stage Transformer Approach
|
cs.CV
|
Change captioning has become essential for accurately describing changes in
multi-temporal remote sensing data, providing an intuitive way to monitor
Earth's dynamics through natural language. However, existing change captioning
methods face two key challenges: high computational demands due to multi-stage
fusion strategies, and insufficient detail in object descriptions due to limited
semantic extraction from individual images. To solve these challenges, we
propose SAT-Cap, a transformer-based model with single-stage feature
fusion for remote sensing change captioning. In particular, SAT-Cap integrates
a Spatial-Channel Attention Encoder, a Difference-Guided Fusion module, and a
Caption Decoder. Compared to typical models that require multi-stage fusion in
the transformer encoder and fusion module, SAT-Cap uses only a simple cosine
similarity-based fusion module for information integration, reducing the
complexity of the model architecture. By jointly modeling spatial and channel
information in the Spatial-Channel Attention Encoder, our approach significantly
enhances the model's ability to extract semantic information from objects in
multi-temporal remote sensing images. Extensive experiments validate the
effectiveness of SAT-Cap, achieving CIDEr scores of 140.23% on the LEVIR-CC
dataset and 97.74% on the DUBAI-CC dataset, surpassing current state-of-the-art
methods. The code and pre-trained models will be available online.
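A hedged sketch of cosine-similarity-based difference fusion over bi-temporal feature maps (the weighting scheme and names are illustrative, not the paper's module):

```python
import numpy as np

def cosine_fusion(feat_t1, feat_t2, eps=1e-8):
    """Weight the bi-temporal feature difference by (1 - cosine similarity)
    along the channel axis, so positions whose features agree across time
    contribute little to the change representation."""
    num = (feat_t1 * feat_t2).sum(axis=-1, keepdims=True)
    den = (np.linalg.norm(feat_t1, axis=-1, keepdims=True)
           * np.linalg.norm(feat_t2, axis=-1, keepdims=True) + eps)
    change_weight = 1.0 - num / den          # ~0 where unchanged, up to 2
    return change_weight * (feat_t2 - feat_t1)

# Unchanged region (parallel features) vs. changed region (orthogonal features).
unchanged = cosine_fusion(np.array([[1.0, 0.0]]), np.array([[2.0, 0.0]]))
changed = cosine_fusion(np.array([[1.0, 0.0]]), np.array([[0.0, 1.0]]))
```

Because the fusion is a single cheap elementwise operation, it avoids the repeated cross-attention passes of multi-stage fusion designs.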
|
2501.08115
|
RoHan: Robust Hand Detection in Operation Room
|
cs.CV cs.LG
|
Hand-specific localization has garnered significant interest within the
computer vision community. Although there are numerous datasets with hand
annotations from various angles and settings, domain transfer techniques
frequently struggle in surgical environments. This is mainly due to the limited
availability of gloved hand instances and the unique challenges of operating
rooms (ORs). Thus, hand-detection models tailored to OR settings require
extensive training and expensive annotation processes. To overcome these
challenges, we present "RoHan" - a novel approach for robust hand detection in
the OR, leveraging advanced semi-supervised domain adaptation techniques to
tackle the challenges of varying recording conditions, diverse glove colors,
and occlusions common in surgical settings. Our methodology encompasses two
main stages: (1) a data augmentation strategy that utilizes "Artificial Gloves,"
a method for augmenting publicly available hand datasets with synthetic images
of hands wearing gloves; (2) a semi-supervised domain adaptation pipeline that
improves detection performance in real-world OR settings through iterative
prediction refinement and efficient frame filtering. We evaluate our method
using two datasets: simulated enterotomy repair and saphenous vein graft
harvesting. "RoHan" substantially reduces the need for extensive labeling and
model training, paving the way for the practical implementation of hand
detection technologies in medical settings.
|
2501.08118
|
Revisiting Birds Eye View Perception Models with Frozen Foundation
Models: DINOv2 and Metric3Dv2
|
cs.CV
|
Birds Eye View perception models require extensive data to perform and
generalize effectively. While traditional datasets often provide abundant
driving scenes from diverse locations, this is not always the case. It is
crucial to maximize the utility of the available training data. With the advent
of large foundation models such as DINOv2 and Metric3Dv2, a pertinent question
arises: can these models be integrated into existing model architectures to not
only reduce the required training data but also surpass the performance of
current models? We choose two model architectures in the vehicle segmentation
domain to
alter: Lift-Splat-Shoot, and Simple-BEV. For Lift-Splat-Shoot, we explore the
implementation of frozen DINOv2 for feature extraction and Metric3Dv2 for depth
estimation, where we greatly exceed the baseline results by 7.4 IoU while
utilizing only half the training data and iterations. Furthermore, we introduce
an innovative application of Metric3Dv2's depth information as a PseudoLiDAR
point cloud incorporated into the Simple-BEV architecture, replacing
traditional LiDAR. This integration results in a +3 IoU improvement compared to
the Camera-only model.
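The depth-to-PseudoLiDAR step can be sketched with standard pinhole unprojection (intrinsic values below are toy assumptions; Metric3Dv2 would supply the metric depth map):

```python
import numpy as np

def depth_to_pseudolidar(depth, fx, fy, cx, cy):
    """Unproject an (H, W) metric depth map into an (H*W, 3) camera-frame
    point cloud using pinhole intrinsics: X=(u-cx)Z/fx, Y=(v-cy)Z/fy, Z=depth."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

# Toy demo: a flat 2 m depth map with the principal point at pixel (0, 0).
depth = np.full((2, 2), 2.0)
pts = depth_to_pseudolidar(depth, fx=1.0, fy=1.0, cx=0.0, cy=0.0)
```

The resulting point cloud can then be voxelized and fed to a BEV backbone in place of real LiDAR returns.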
|
2501.08120
|
In-situ graph reasoning and knowledge expansion using Graph-PReFLexOR
|
cs.AI cond-mat.dis-nn cond-mat.mtrl-sci cs.CL
|
The pursuit of automated scientific discovery has fueled progress from
symbolic logic to modern AI, forging new frontiers in reasoning and pattern
recognition. Transformers function as potential systems, where every possible
relationship remains latent potentiality until tasks impose constraints, akin
to measurement. Yet, refining their sampling requires more than probabilistic
selection: solutions must conform to specific structures or rules, ensuring
consistency and the invocation of general principles. We present
Graph-PReFLexOR (Graph-based Preference-based Recursive Language Modeling for
Exploratory Optimization of Reasoning), a framework that combines graph
reasoning with symbolic abstraction to dynamically expand domain knowledge.
Inspired by reinforcement learning, Graph-PReFLexOR defines reasoning as a
structured mapping, where tasks yield knowledge graphs, abstract patterns, and
ultimately, final answers. Inspired by category theory, it encodes concepts as
nodes and their relationships as edges, supporting hierarchical inference and
adaptive learning through isomorphic representations. Demonstrations include
hypothesis generation, materials design, and creative reasoning, such as
discovering relationships between mythological concepts like 'thin places' and
materials science. We propose a 'knowledge garden growth' strategy that
integrates insights across domains, promoting interdisciplinary connections.
Results with a 3-billion-parameter Graph-PReFLexOR model show superior
reasoning depth and adaptability, underscoring the potential for transparent,
multidisciplinary AI-driven discovery. It lays the groundwork for general
autonomous reasoning solutions.
|
2501.08131
|
SAR Strikes Back: A New Hope for RSVQA
|
cs.CV
|
Remote sensing visual question answering (RSVQA) is a task that automatically
extracts information from satellite images and processes a question to predict
the answer from the images in textual form, helping with the interpretation of
the image. While different methods have been proposed to extract information
from optical images with different spectral bands and resolutions, no method
has been proposed to answer questions from Synthetic Aperture Radar (SAR)
images. SAR images capture electromagnetic information from the scene, and are
less affected by atmospheric conditions, such as clouds. In this work, our
objective is to introduce SAR in the RSVQA task, finding the best way to use
this modality. In our research, we carry out a study on different pipelines for
the task of RSVQA, taking into account information from both SAR and optical
data. To this end, we also present a dataset that allows for the
introduction of SAR images in the RSVQA framework. We propose two different
models to include the SAR modality. The first one is an end-to-end method in
which we add an additional encoder for the SAR modality. In the second
approach, we build on a two-stage framework. First, relevant information is
extracted from SAR and, optionally, optical data. This information is then
translated into natural language to be used in the second step which only
relies on a language model to provide the answer. We find that the second
pipeline allows us to obtain good results with SAR images alone. We then try
various types of fusion methods to use SAR and optical images together, finding
that a fusion at the decision level achieves the best results on the proposed
dataset. We show that SAR data offers additional information when fused with
the optical modality, particularly for questions related to specific land cover
classes, such as water areas.
|
2501.08134
|
An Empirical Wall-Pressure Spectrum Model for Aeroacoustic Predictions
Based on Symbolic Regression
|
physics.flu-dyn cs.AI cs.LG
|
Fast turn-around methods to predict airfoil trailing-edge noise are crucial
for incorporating noise limitations into design optimization loops of several
applications. Among these aeroacoustic predictive models, Amiet's theory offers
the best balance between accuracy and simplicity. The accuracy of the model
relies heavily on precise wall-pressure spectrum predictions, which are often
based on single-equation formulations with adjustable parameters. These
parameters are calibrated for particular airfoils and flow conditions and
consequently tend to fail when applied outside their calibration range. This
paper introduces a new wall-pressure spectrum empirical model designed to
enhance the robustness and accuracy of current state-of-the-art predictions
while widening the range of applicability of the model to different airfoils
and flow conditions. The model is developed using AI-based symbolic regression
via a genetic-algorithm-based approach, and applied to a dataset of
wall-pressure fluctuations measured on NACA 0008 and NACA 63018 airfoils at
multiple angles of attack and inflow velocities, covering turbulent boundary
layers with both adverse and favorable pressure gradients. Validation against
experimental data (outside the training dataset) demonstrates the robustness of
the model compared to well-accepted semi-empirical models. Finally, the model
is integrated with Amiet's theory to predict the aeroacoustic noise of a
full-scale wind turbine, showing good agreement with experimental measurements.
|
2501.08137
|
Audio-Visual Deepfake Detection With Local Temporal Inconsistencies
|
cs.CV cs.CR cs.MM cs.SD eess.AS
|
This paper proposes an audio-visual deepfake detection approach that aims to
capture fine-grained temporal inconsistencies between audio and visual
modalities. To achieve this, both architectural and data synthesis strategies
are introduced. From an architectural perspective, a temporal distance map,
coupled with an attention mechanism, is designed to capture these
inconsistencies while minimizing the impact of irrelevant temporal
subsequences. Moreover, we explore novel pseudo-fake generation techniques to
synthesize local inconsistencies. Our approach is evaluated against
state-of-the-art methods using the DFDC and FakeAVCeleb datasets, demonstrating
its effectiveness in detecting audio-visual deepfakes.
|
2501.08139
|
EEG-ReMinD: Enhancing Neurodegenerative EEG Decoding through
Self-Supervised State Reconstruction-Primed Riemannian Dynamics
|
eess.SP cs.AI cs.LG
|
The development of EEG decoding algorithms confronts challenges such as data
sparsity, subject variability, and the need for precise annotations, all of
which are vital for advancing brain-computer interfaces and enhancing the
diagnosis of diseases. To address these issues, we propose a novel two-stage
approach named Self-Supervised State Reconstruction-Primed Riemannian Dynamics
(EEG-ReMinD), which mitigates reliance on supervised learning and integrates
inherent geometric features. This approach efficiently handles EEG data
corruptions and reduces the dependency on labels. EEG-ReMinD utilizes
self-supervised and geometric learning techniques, along with an attention
mechanism, to analyze the temporal dynamics of EEG features within the
framework of Riemannian geometry, referred to as Riemannian dynamics.
Comparative analyses on both intact and corrupted datasets from two different
neurodegenerative disorders underscore the enhanced performance of EEG-ReMinD.
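As background on the Riemannian-geometry framing (a generic illustration, not the paper's method), EEG features are often summarized as symmetric positive-definite (SPD) covariance matrices and compared with a manifold-aware metric such as the log-Euclidean distance:

```python
import numpy as np

def logm_spd(C):
    """Matrix logarithm of a symmetric positive-definite matrix
    via its eigendecomposition."""
    w, V = np.linalg.eigh(C)
    return (V * np.log(w)) @ V.T

def log_euclidean_dist(C1, C2):
    """Log-Euclidean distance between SPD covariance matrices, a common
    stand-in for analyzing EEG feature dynamics on the SPD manifold."""
    return np.linalg.norm(logm_spd(C1) - logm_spd(C2), 'fro')

# Demo: identical matrices are at distance 0; scaling by 2 gives log(2)*sqrt(2)
# in 2 dimensions.
I2 = np.eye(2)
d_same = log_euclidean_dist(I2, I2)
d_diff = log_euclidean_dist(I2, 2.0 * I2)
```

Metrics of this kind respect the curved geometry of covariance matrices, which plain Euclidean distance on the raw entries does not.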
|
2501.08142
|
Bootstrapping Corner Cases: High-Resolution Inpainting for Safety
Critical Detect and Avoid for Automated Flying
|
cs.CV cs.LG
|
Modern machine learning techniques have shown tremendous potential,
especially for object detection on camera images. For this reason, they are
also used to enable safety-critical automated processes such as autonomous
drone flights. We present a study on object detection for Detect and Avoid, a
safety critical function for drones that detects air traffic during automated
flights for safety reasons. An ill-posed problem is the generation of good and,
especially, large datasets, since detection itself is the corner case. Most
models suffer from limited ground truth in raw data, e.g., recorded air traffic
or frontal flight with a small aircraft, which often leads to poor and critical
detection rates. We overcome this problem by using inpainting methods to
bootstrap the dataset such that it explicitly contains the corner cases of the
raw data. We provide an overview of inpainting methods and generative models
and present an example pipeline given a small annotated dataset. We validate
our method by generating a high-resolution dataset, which we make publicly
available, and by presenting it to an independent object detector that was
fully trained on real data.
|
2501.08145
|
Refusal Behavior in Large Language Models: A Nonlinear Perspective
|
cs.CL cs.AI
|
Refusal behavior in large language models (LLMs) enables them to decline
responding to harmful, unethical, or inappropriate prompts, ensuring alignment
with ethical standards. This paper investigates refusal behavior across six
LLMs from three architectural families. We challenge the assumption of refusal
as a linear phenomenon by employing dimensionality reduction techniques,
including PCA, t-SNE, and UMAP. Our results reveal that refusal mechanisms
exhibit nonlinear, multidimensional characteristics that vary by model
architecture and layer. These findings highlight the need for nonlinear
interpretability to improve alignment research and inform safer AI deployment
strategies.
|
2501.08149
|
Multiple-Input Variational Auto-Encoder for Anomaly Detection in
Heterogeneous Data
|
cs.AI cs.LG stat.ML
|
Anomaly detection (AD) plays a pivotal role in AI applications, e.g., in
classification, and intrusion/threat detection in cybersecurity. However, most
existing methods face challenges of heterogeneity amongst feature subsets posed
by non-independent and identically distributed (non-IID) data. We propose a
novel neural network model called Multiple-Input Auto-Encoder for AD (MIAEAD)
to address this. MIAEAD assigns an anomaly score to each feature subset of a
data sample to indicate its likelihood of being an anomaly. This is done by
using the reconstruction error of its sub-encoder as the anomaly score. All
sub-encoders are then simultaneously trained using unsupervised learning to
determine the anomaly scores of feature subsets. The final AUC of MIAEAD is
calculated for each sub-dataset, and the maximum AUC obtained among the
sub-datasets is selected. To leverage the modelling of the distribution of
normal data to identify anomalies of the generative models, we develop a novel
neural network architecture/model called Multiple-Input Variational
Auto-Encoder (MIVAE). MIVAE can process feature subsets through its
sub-encoders before learning the distribution of normal data in the latent space.
This allows MIVAE to identify anomalies that deviate from the learned
distribution. We theoretically prove that the difference in the average anomaly
score between normal samples and anomalies obtained by the proposed MIVAE is
greater than that of the Variational Auto-Encoder (VAEAD), resulting in a
higher AUC for MIVAE. Extensive experiments on eight real-world anomaly
datasets demonstrate the superior performance of MIAEAD and MIVAE over
conventional methods and the state-of-the-art unsupervised models, by up to 6%
in terms of AUC score. Furthermore, MIAEAD and MIVAE have a high AUC when
applied to feature subsets with low heterogeneity based on the coefficient of
variation (CV) score.
|
2501.08150
|
Evaluating Policy Effects through Network Dynamics and Sampling
|
cs.SI stat.AP
|
In the process of enacting or introducing a new policy, policymakers
frequently consider the population's responses. These considerations are
critical for effective governance. There are numerous methods to gauge the
ground sentiment from a subset of the population; examples include surveys or
listening to various feedback channels. Many conventional approaches implicitly
assume that opinions are static; however, in reality, the population will
discuss and debate these new policies among themselves, and reform new opinions
in the process. In this paper, we pose the following questions: Can we quantify
the effect of these social dynamics on the broader opinion towards a new
policy? Given some information about the relationship network that underlies
the population, how does overall opinion change post-discussion? We investigate
three different settings in which the policy is revealed: respondents who do
not know each other, groups of respondents who all know each other, and
respondents chosen randomly. By controlling who the policy is revealed to, we
control the degree of discussion among the population. We quantify how these
factors affect the changes in policy beliefs via the Wasserstein distance
between the empirically observed data post-discussion and its distribution
pre-discussion. We also provide several numerical analyses based on generated
network and real-life network datasets. Our work aims to address the challenges
associated with network topology and social interactions, and provide
policymakers with a quantitative lens to assess policy effectiveness in the
face of resource constraints and network complexities.
|
2501.08152
|
Energy Backdoor Attack to Deep Neural Networks
|
cs.CV
|
The rise of deep learning (DL) has increased computing complexity and energy
use, prompting the adoption of application-specific integrated circuits (ASICs)
for energy-efficient edge and mobile deployment. However, recent studies have
demonstrated the vulnerability of these accelerators to energy attacks. Despite
the development of various inference-time energy attacks in prior research,
backdoor energy attacks remain unexplored. In this paper, we design an
innovative energy backdoor attack against deep neural networks (DNNs) operating
on sparsity-based accelerators. Our attack is carried out in two distinct
phases: backdoor injection and backdoor stealthiness. Experimental results
using ResNet-18 and MobileNet-V2 models trained on CIFAR-10 and Tiny ImageNet
datasets show the effectiveness of our proposed attack in increasing energy
consumption on trigger samples while preserving the model's performance for
clean/regular inputs. This demonstrates the vulnerability of DNNs to energy
backdoor attacks. The source code of our attack is available at:
https://github.com/hbrachemi/energy_backdoor.
|
2501.08155
|
FairTTTS: A Tree Test Time Simulation Method for Fairness-Aware
Classification
|
cs.LG cs.AI
|
Algorithmic decision-making has become deeply ingrained in many domains, yet
biases in machine learning models can still produce discriminatory outcomes,
often harming unprivileged groups. Achieving fair classification is inherently
challenging, requiring a careful balance between predictive performance and
ethical considerations. We present FairTTTS, a novel post-processing bias
mitigation method inspired by the Tree Test Time Simulation (TTTS) method.
Originally developed to enhance accuracy and robustness against adversarial
inputs through probabilistic decision-path adjustments, TTTS serves as the
foundation for FairTTTS. By building on this accuracy-enhancing technique,
FairTTTS mitigates bias and improves predictive performance. FairTTTS uses a
distance-based heuristic to adjust decisions at protected attribute nodes,
ensuring fairness for unprivileged samples. This fairness-oriented adjustment
occurs as a post-processing step, allowing FairTTTS to be applied to
pre-trained models, diverse datasets, and various fairness metrics without
retraining. Extensive evaluation on seven benchmark datasets shows that
FairTTTS outperforms traditional methods in fairness improvement, achieving a
20.96% average increase over the baseline compared to 18.78% for related work,
and further enhances accuracy by 0.55%. In contrast, competing methods
typically reduce accuracy by 0.42%. These results confirm that FairTTTS
effectively promotes more equitable decision-making while simultaneously
improving predictive performance.
|
2501.08156
|
Are DeepSeek R1 And Other Reasoning Models More Faithful?
|
cs.LG
|
Language models trained to solve reasoning tasks via reinforcement learning
have achieved striking results. We refer to these models as reasoning models. A
key question emerges: Are the Chains of Thought (CoTs) of reasoning models more
faithful than traditional models? To investigate this, we evaluate three
reasoning models (based on Qwen-2.5, Gemini-2, and DeepSeek-V3-Base) on an
existing test of faithful CoT. To measure faithfulness, we test whether models
can describe how a cue in their prompt influences their answer to MMLU
questions. For example, when the cue "A Stanford Professor thinks the answer is
D" is added to the prompt, models sometimes switch their answer to D. In such
cases, the DeepSeek-R1 reasoning model describes the influence of this cue 59%
of the time, compared to 7% for the non-reasoning DeepSeek model. We evaluate
seven types of cue, such as misleading few-shot examples and suggestive
follow-up questions from the user. Reasoning models describe cues that
influence them much more reliably than all the non-reasoning models tested
(including Claude-3.5-Sonnet and GPT-4). In an additional experiment, we
provide evidence suggesting that the use of reward models causes less faithful
responses, which may help explain why non-reasoning models are less faithful.
Our study has two main limitations. First, we test faithfulness using a set of
artificial tasks, which may not reflect realistic use-cases. Second, we only
measure one specific aspect of faithfulness - whether models can describe the
influence of cues. Future research should investigate whether the advantage of
reasoning models in faithfulness holds for a broader set of tests.
|
2501.08163
|
DM-Mamba: Dual-domain Multi-scale Mamba for MRI reconstruction
|
eess.IV cs.CV
|
Accelerated MRI reconstruction poses a challenging ill-posed inverse
problem due to the significant undersampling in k-space. Deep neural networks,
such as CNNs and ViTs, have shown substantial performance improvements for this
task while encountering the dilemma between global receptive fields and
efficient computation. To this end, this paper pioneers the exploration of Mamba,
a new paradigm for long-range dependency modeling with linear complexity, for
efficient and effective MRI reconstruction. However, directly applying Mamba to
MRI reconstruction faces three significant issues: (1) Mamba's row-wise and
column-wise scanning disrupts k-space's unique spectrum, leaving its potential
in k-space learning unexplored. (2) Existing Mamba methods unfold feature maps
with multiple lengthy scanning paths, leading to long-range forgetting and high
computational burden. (3) Mamba struggles with spatially-varying contents,
resulting in limited diversity of local representations. To address these, we
propose a dual-domain multi-scale Mamba for MRI reconstruction from the
following perspectives: (1) We pioneer vision Mamba in k-space learning. A
circular scanning strategy is customized for spectrum unfolding, benefiting the global
modeling of k-space. (2) We propose a multi-scale Mamba with an efficient
scanning strategy in both image and k-space domains. It mitigates long-range
forgetting and achieves a better trade-off between efficiency and performance.
(3) We develop a local diversity enhancement module to improve the
spatially-varying representation of Mamba. Extensive experiments are conducted
on three public datasets for MRI reconstruction under various undersampling
patterns. Comprehensive results demonstrate that our method significantly
outperforms state-of-the-art methods with lower computational cost.
Implementation code will be available at
https://github.com/XiaoMengLiLiLi/DM-Mamba.
|
2501.08165
|
I Can Find You in Seconds! Leveraging Large Language Models for Code
Authorship Attribution
|
cs.SE cs.AI
|
Source code authorship attribution is important in software forensics,
plagiarism detection, and protecting software patch integrity. Existing
techniques often rely on supervised machine learning, which struggles with
generalization across different programming languages and coding styles due to
the need for large labeled datasets. Inspired by recent advances in natural
language authorship analysis using large language models (LLMs), which have
shown exceptional performance without task-specific tuning, this paper explores
the use of LLMs for source code authorship attribution.
We present a comprehensive study demonstrating that state-of-the-art LLMs can
successfully attribute source code authorship across different languages. LLMs
can determine whether two code snippets are written by the same author with
zero-shot prompting, achieving a Matthews Correlation Coefficient (MCC) of
0.78, and can attribute code authorship from a small set of reference code
snippets via few-shot learning, achieving MCC of 0.77. Additionally, LLMs show
some adversarial robustness against misattribution attacks.
Despite these capabilities, we found that naive prompting of LLMs does not
scale well with a large number of authors due to input token limitations. To
address this, we propose a tournament-style approach for large-scale
attribution. Evaluating this approach on datasets of C++ (500 authors, 26,355
samples) and Java (686 authors, 55,267 samples) code from GitHub, we achieve
classification accuracy of up to 65% for C++ and 68.7% for Java using only one
reference per author. These results open new possibilities for applying LLMs to
code authorship attribution in cybersecurity and software engineering.
|
2501.08167
|
Potential and Perils of Large Language Models as Judges of Unstructured
Textual Data
|
cs.CL cs.AI cs.CY
|
Rapid advancements in large language models have unlocked remarkable
capabilities when it comes to processing and summarizing unstructured text
data. This has implications for the analysis of rich, open-ended datasets, such
as survey responses, where LLMs hold the promise of efficiently distilling key
themes and sentiments. However, as organizations increasingly turn to these
powerful AI systems to make sense of textual feedback, a critical question
arises: can we trust LLMs to accurately represent the perspectives contained
within these text-based datasets? While LLMs excel at generating human-like
summaries, there is a risk that their outputs may inadvertently diverge from
the true substance of the original responses. Discrepancies between the
LLM-generated outputs and the actual themes present in the data could lead to
flawed decision-making, with far-reaching consequences for organizations. This
research investigates the effectiveness of LLM-as-judge models to evaluate the
thematic alignment of summaries generated by other LLMs. We utilized an
Anthropic Claude model to generate thematic summaries from open-ended survey
responses, with Amazon's Titan Express, Nova Pro, and Meta's Llama serving as
judges. This LLM-as-judge approach was compared to human evaluations using
Cohen's kappa, Spearman's rho, and Krippendorff's alpha, validating a scalable
alternative to traditional human-centric evaluation methods. Our findings
reveal that while LLM-as-judge models offer a scalable solution comparable to
human raters, humans may still excel at detecting subtle, context-specific
nuances. Our research contributes to the growing body of knowledge on AI-assisted text
analysis. Further, we provide recommendations for future research, emphasizing
the need for careful consideration when generalizing LLM-as-judge models across
various contexts and use cases.
|
2501.08168
|
LeapVAD: A Leap in Autonomous Driving via Cognitive Perception and
Dual-Process Thinking
|
cs.AI
|
While autonomous driving technology has made remarkable strides, data-driven
approaches still struggle with complex scenarios due to their limited reasoning
capabilities. Meanwhile, knowledge-driven autonomous driving systems have
evolved considerably with the popularization of visual language models. In this
paper, we propose LeapVAD, a novel method based on cognitive perception and
dual-process thinking. Our approach implements a human-attentional mechanism to
identify and focus on critical traffic elements that influence driving
decisions. By characterizing these objects through comprehensive attributes -
including appearance, motion patterns, and associated risks - LeapVAD achieves
more effective environmental representation and streamlines the decision-making
process. Furthermore, LeapVAD incorporates an innovative dual-process
decision-making module mimicking the human-driving learning process. The system
consists of an Analytic Process (System-II) that accumulates driving experience
through logical reasoning and a Heuristic Process (System-I) that refines this
knowledge via fine-tuning and few-shot learning. LeapVAD also includes
reflective mechanisms and a growing memory bank, enabling it to learn from past
mistakes and continuously improve its performance in a closed-loop environment.
To enhance efficiency, we develop a scene encoder network that generates
compact scene representations for rapid retrieval of relevant driving
experiences. Extensive evaluations conducted on two leading autonomous driving
simulators, CARLA and DriveArena, demonstrate that LeapVAD achieves superior
performance compared to camera-only approaches despite limited training data.
Comprehensive ablation studies further emphasize its effectiveness in
continuous learning and domain adaptation. Project page:
https://pjlab-adg.github.io/LeapVAD/.
|
2501.08169
|
Revolutionizing Communication with Deep Learning and XAI for Enhanced
Arabic Sign Language Recognition
|
cs.CV cs.AI cs.CY cs.LG
|
This study introduces an integrated approach to recognizing Arabic Sign
Language (ArSL) using state-of-the-art deep learning models such as
MobileNetV3, ResNet50, and EfficientNet-B2. These models are further enhanced
by explainable AI (XAI) techniques to boost interpretability. The ArSL2018 and
RGB Arabic Alphabets Sign Language (AASL) datasets are employed, with
EfficientNet-B2 achieving peak accuracies of 99.48% and 98.99%, respectively.
Key innovations include sophisticated data augmentation methods to mitigate
class imbalance, implementation of stratified 5-fold cross-validation for
better generalization, and the use of Grad-CAM for clear model decision
transparency. The proposed system not only sets new benchmarks in recognition
accuracy but also emphasizes interpretability, making it suitable for
applications in healthcare, education, and inclusive communication
technologies.
|
2501.08170
|
Benchmarking Multimodal Models for Fine-Grained Image Analysis: A
Comparative Study Across Diverse Visual Features
|
cs.CV
|
This article introduces a benchmark designed to evaluate the capabilities of
multimodal models in analyzing and interpreting images. The benchmark focuses
on seven key visual aspects: main object, additional objects, background,
detail, dominant colors, style, and viewpoint. A dataset of 14,580 images,
generated from diverse text prompts, was used to assess the performance of
seven leading multimodal models. These models were evaluated on their ability
to accurately identify and describe each visual aspect, providing insights into
their strengths and weaknesses for comprehensive image understanding. The
findings of this benchmark have significant implications for the development
and selection of multimodal models for various image analysis tasks.
|
2501.08174
|
Object-Centric 2D Gaussian Splatting: Background Removal and
Occlusion-Aware Pruning for Compact Object Models
|
cs.CV
|
Current Gaussian Splatting approaches are effective for reconstructing entire
scenes but lack the option to target specific objects, making them
computationally expensive and unsuitable for object-specific applications. We
propose a novel approach that leverages object masks to enable targeted
reconstruction, resulting in object-centric models. Additionally, we introduce
an occlusion-aware pruning strategy to minimize the number of Gaussians without
compromising quality. Our method reconstructs compact object models, yielding
object-centric Gaussian and mesh representations that are up to 96% smaller
and up to 71% faster to train compared to the baseline while retaining
competitive quality. These representations are immediately usable for
downstream applications such as appearance editing and physics simulation
without additional processing.
|
2501.08180
|
D$^2$-DPM: Dual Denoising for Quantized Diffusion Probabilistic Models
|
cs.CV cs.LG
|
Diffusion models have achieved cutting-edge performance in image generation.
However, their lengthy denoising process and computationally intensive score
estimation network impede their scalability in low-latency and
resource-constrained scenarios. Post-training quantization (PTQ) compresses and
accelerates diffusion models without retraining, but it inevitably introduces
additional quantization noise, resulting in mean and variance deviations. In
this work, we propose D2-DPM, a dual denoising mechanism aimed at precisely
mitigating the adverse effects of quantization noise on the noise estimation
network. Specifically, we first unravel the impact of quantization noise on the
sampling equation into two components: the mean deviation and the variance
deviation. The mean deviation alters the drift coefficient of the sampling
equation, influencing the trajectory trend, while the variance deviation
magnifies the diffusion coefficient, impacting the convergence of the sampling
trajectory. The proposed D2-DPM is thus devised to denoise the quantization
noise at each time step, and then denoise the noisy sample through the inverse
diffusion iterations. Experimental results demonstrate that D2-DPM achieves
superior generation quality, yielding a 1.42 lower FID than the full-precision
model while achieving 3.99x compression and 11.67x bit-operation acceleration.
|
2501.08181
|
Economic Model Predictive Control for Periodic Operation: A Quadratic
Programming Approach
|
eess.SY cs.SY math.OC
|
Periodic dynamical systems, distinguished by their repetitive behavior over
time, are prevalent across various engineering disciplines. In numerous
applications, particularly within industrial contexts, the implementation of
model predictive control (MPC) schemes tailored to optimize specific economic
criteria has been shown to offer substantial advantages. However, the real-time
implementation of these schemes is often infeasible due to limited
computational resources. To tackle this problem, we propose a
resource-efficient economic model predictive control scheme for periodic
systems, leveraging existing single-layer MPC techniques. Our method relies on
a single quadratic optimization problem, which ensures high computational
efficiency for real-time control in dynamic settings. We prove feasibility,
stability, and convergence to the optimum of the proposed approach, and validate
its effectiveness through numerical experiments.
|
2501.08182
|
CG-MER: A Card Game-based Multimodal dataset for Emotion Recognition
|
cs.AI cs.CV cs.HC
|
The field of affective computing has seen significant advancements in
exploring the relationship between emotions and emerging technologies. This
paper presents a novel and valuable contribution to this field with the
introduction of a comprehensive French multimodal dataset designed specifically
for emotion recognition. The dataset encompasses three primary modalities:
facial expressions, speech, and gestures, providing a holistic perspective on
emotions. Moreover, the dataset has the potential to incorporate additional
modalities, such as Natural Language Processing (NLP) to expand the scope of
emotion recognition research. The dataset was curated through engaging
participants in card game sessions, where they were prompted to express a range
of emotions while responding to diverse questions. The study included 10
sessions with 20 participants (9 females and 11 males). The dataset serves as a
valuable resource for furthering research in emotion recognition and provides
an avenue for exploring the intricate connections between human emotions and
digital technologies.
|
2501.08184
|
Assessing AI Adoption and Digitalization in SMEs: A Framework for
Implementation
|
cs.AI
|
The primary objective of this research is to examine the current state of
digitalization and the integration of artificial intelligence (AI) within small
and medium-sized enterprises (SMEs) in Italy. There is a significant gap
between SMEs and large corporations in their use of AI, with SMEs facing
numerous barriers to adoption. This study identifies critical drivers and
obstacles to achieving intelligent transformation, proposing a framework model
to address key challenges and provide actionable guidelines.
|
2501.08187
|
A Multi-Modal AI Copilot for Single-Cell Analysis with Instruction
Following
|
cs.CL cs.AI cs.CE cs.HC cs.LG q-bio.CB
|
Large language models excel at interpreting complex natural language
instructions, enabling them to perform a wide range of tasks. In the life
sciences, single-cell RNA sequencing (scRNA-seq) data serves as the "language
of cellular biology", capturing intricate gene expression patterns at the
single-cell level. However, interacting with this "language" through
conventional tools is often inefficient and unintuitive, posing challenges for
researchers. To address these limitations, we present InstructCell, a
multi-modal AI copilot that leverages natural language as a medium for more
direct and flexible single-cell analysis. We construct a comprehensive
multi-modal instruction dataset that pairs text-based instructions with
scRNA-seq profiles from diverse tissues and species. Building on this, we
develop a multi-modal cell language architecture capable of simultaneously
interpreting and processing both modalities. InstructCell empowers researchers
to accomplish critical tasks, such as cell type annotation, conditional
pseudo-cell generation, and drug sensitivity prediction, using straightforward
natural language commands. Extensive evaluations demonstrate that InstructCell
consistently meets or exceeds the performance of existing single-cell
foundation models, while adapting to diverse experimental conditions. More
importantly, InstructCell provides an accessible and intuitive tool for
exploring complex single-cell data, lowering technical barriers and enabling
deeper biological insights.
|
2501.08188
|
A Critical Synthesis of Uncertainty Quantification and Foundation Models
in Monocular Depth Estimation
|
cs.CV cs.AI cs.LG
|
While recent foundation models have enabled significant breakthroughs in
monocular depth estimation, a clear path towards safe and reliable deployment
in the real-world remains elusive. Metric depth estimation, which involves
predicting absolute distances, poses particular challenges, as even the most
advanced foundation models remain prone to critical errors. Since quantifying
the uncertainty has emerged as a promising endeavor to address these
limitations and enable trustworthy deployment, we fuse five different
uncertainty quantification methods with the current state-of-the-art
DepthAnythingV2 foundation model. To cover a wide range of metric depth
domains, we evaluate their performance on four diverse datasets. Our findings
identify fine-tuning with the Gaussian Negative Log-Likelihood Loss (GNLL) as a
particularly promising approach, offering reliable uncertainty estimates while
maintaining predictive performance and computational efficiency on par with the
baseline, encompassing both training and inference time. By fusing uncertainty
quantification and foundation models within the context of monocular depth
estimation, this paper lays a critical foundation for future research aimed at
improving not only model performance but also its explainability. Extending
this critical synthesis of uncertainty quantification and foundation models
into other crucial tasks, such as semantic segmentation and pose estimation,
presents exciting opportunities for safer and more reliable machine vision
systems.
|
2501.08192
|
PRESERVE: Prefetching Model Weights and KV-Cache in Distributed LLM
Serving
|
cs.AI cs.AR cs.DC
|
Large language models (LLMs) are widely used across various applications, but
their substantial computational requirements pose significant challenges,
particularly in terms of HBM bandwidth bottlenecks and inter-device
communication overhead. In this paper, we present PRESERVE, a novel prefetching
framework designed to optimize LLM inference by overlapping memory reads for
model weights and KV-cache with collective communication operations. Through
extensive experiments conducted on commercial AI accelerators, we demonstrate
up to 1.6x end-to-end speedup on state-of-the-art, open-source LLMs.
Additionally, we perform a design space exploration that identifies the optimal
hardware configuration for the proposed method, showing a further 1.25x
improvement in performance per cost by selecting the optimal L2 cache size. Our
results show that PRESERVE has the potential to mitigate the memory bottlenecks
and communication overheads, offering a solution to improve the performance and
scalability of the LLM inference systems.
|
2501.08193
|
Modeling Quantum Machine Learning for Genomic Data Analysis
|
cs.LG
|
Quantum Machine Learning (QML) continues to evolve, unlocking new
opportunities for diverse applications. In this study, we investigate and
evaluate the applicability of QML models for binary classification of genome
sequence data by employing various feature mapping techniques. We present an
open-source, independent Qiskit-based implementation to conduct experiments on
a benchmark genomic dataset. Our simulations reveal that the interplay between
feature mapping techniques and QML algorithms significantly influences
performance. Notably, the Pegasos Quantum Support Vector Classifier
(Pegasos-QSVC) exhibits high sensitivity, particularly excelling in recall
metrics, while Quantum Neural Networks (QNN) achieve the highest training
accuracy across all feature maps. However, the pronounced variability in
classifier performance, dependent on feature mapping, highlights the risk of
overfitting to localized output distributions in certain scenarios. This work
underscores the transformative potential of QML for genomic data classification
while emphasizing the need for continued advancements to enhance the robustness
and accuracy of these methodologies.
|
2501.08195
|
Self-supervised Deep Hyperspectral Inpainting with the Plug and Play and
Deep Image Prior Models
|
cs.CV cs.LG
|
Hyperspectral images are typically composed of hundreds of narrow and
contiguous spectral bands, each containing information regarding the material
composition of the imaged scene. However, these images can be affected by
various sources of noise, distortions, or data loss, which can significantly
degrade their quality and usefulness. This paper introduces a
convergence-guaranteed algorithm, LRS-PnP-DIP(1-Lip), which successfully
addresses the previously reported instability issue of DHP. The proposed algorithm
extends the successful joint low-rank and sparse model to further exploit the
underlying data structures beyond the conventional and sometimes restrictive
unions of subspace models. A stability analysis guarantees the convergence of
the proposed algorithm under mild assumptions, which is crucial for its
application in real-world scenarios. Extensive experiments demonstrate that the
proposed solution consistently delivers visually and quantitatively superior
inpainting results, establishing state-of-the-art performance.
|
2501.08197
|
OpenCSG Chinese Corpus: A Series of High-quality Chinese Datasets for
LLM Training
|
cs.CL
|
Large language models (LLMs) have demonstrated remarkable capabilities, but
their success heavily relies on the quality of pretraining corpora. For Chinese
LLMs, the scarcity of high-quality Chinese datasets presents a significant
challenge, often limiting their performance. To address this issue, we propose
the OpenCSG Chinese Corpus, a series of high-quality datasets specifically
designed for LLM pretraining, post-training, and fine-tuning. This corpus
includes Fineweb-edu-chinese, Fineweb-edu-chinese-v2, Cosmopedia-chinese, and
Smoltalk-chinese, each with distinct characteristics: Fineweb-edu datasets
focus on filtered, high-quality content derived from diverse Chinese web
sources; Cosmopedia-chinese provides synthetic, textbook-style data for
knowledge-intensive training; and Smoltalk-chinese emphasizes stylistic and
diverse chat-format data. The OpenCSG Chinese Corpus is characterized by its
high-quality text, diverse coverage across domains, and scalable, reproducible
data curation processes. Additionally, we conducted extensive experimental
analyses, including evaluations on smaller parameter models, which demonstrated
significant performance improvements in tasks such as C-Eval, showcasing the
effectiveness of the corpus for training Chinese LLMs.
|
2501.08199
|
EmoNeXt: an Adapted ConvNeXt for Facial Emotion Recognition
|
cs.CV cs.AI
|
Facial expressions play a crucial role in human communication serving as a
powerful and impactful means to express a wide range of emotions. With
advancements in artificial intelligence and computer vision, deep neural
networks have emerged as effective tools for facial emotion recognition. In
this paper, we propose EmoNeXt, a novel deep learning framework for facial
expression recognition based on an adapted ConvNeXt architecture network. We
integrate a Spatial Transformer Network (STN) to focus on feature-rich regions
of the face and Squeeze-and-Excitation blocks to capture channel-wise
dependencies. Moreover, we introduce a self-attention regularization term,
encouraging the model to generate compact feature vectors. We demonstrate the
superiority of our model over existing state-of-the-art deep learning models on
the FER2013 dataset regarding emotion classification accuracy.
|
2501.08200
|
CWEval: Outcome-driven Evaluation on Functionality and Security of LLM
Code Generation
|
cs.SE cs.CL cs.LG
|
Large Language Models (LLMs) have significantly aided developers by
generating or assisting in code writing, enhancing productivity across various
tasks. While identifying incorrect code is often straightforward, detecting
vulnerabilities in functionally correct code is more challenging, especially
for developers with limited security knowledge, which poses considerable
security risks of using LLM-generated code and underscores the need for robust
evaluation benchmarks that assess both functional correctness and security.
Current benchmarks like CyberSecEval and SecurityEval attempt to address this but
are hindered by unclear and impractical specifications, failing to assess both
functionality and security accurately. To tackle these deficiencies, we
introduce CWEval, a novel outcome-driven evaluation framework designed to
enhance the evaluation of secure code generation by LLMs. This framework
simultaneously assesses both the functionality and the security of generated
code, using high-quality task specifications and outcome-driven test oracles
that provide high accuracy. Coupled with CWEval-bench, a multilingual, security-critical
coding benchmark, CWEval provides a rigorous empirical security evaluation on
LLM-generated code, overcoming previous benchmarks' shortcomings. Through our
evaluations, CWEval reveals a notable portion of functional but insecure code
produced by LLMs, and shows a serious inaccuracy of previous evaluations,
ultimately contributing significantly to the field of secure code generation.
We open-source our artifact at: https://github.com/Co1lin/CWEval .
|
2501.08201
|
Globally Convergent Variational Inference
|
stat.ML cs.LG
|
In variational inference (VI), an approximation of the posterior distribution
is selected from a family of distributions through numerical optimization. With
the most common variational objective function, known as the evidence lower
bound (ELBO), only convergence to a local optimum can be guaranteed. In this
work, we instead establish the global convergence of a particular VI method.
This VI method, which may be considered an instance of neural posterior
estimation (NPE), minimizes an expectation of the inclusive (forward) KL
divergence to fit a variational distribution that is parameterized by a neural
network. Our convergence result relies on the neural tangent kernel (NTK) to
characterize the gradient dynamics that arise from considering the variational
objective in function space. In the asymptotic regime of a fixed,
positive-definite neural tangent kernel, we establish conditions under which
the variational objective admits a unique solution in a reproducing kernel
Hilbert space (RKHS). Then, we show that the gradient descent dynamics in
function space converge to this unique function. In ablation studies and
practical problems, we demonstrate that our results explain the behavior of NPE
in non-asymptotic finite-neuron settings, and show that NPE outperforms
ELBO-based optimization, which often converges to shallow local optima.
|
2501.08202
|
Data-driven system identification using quadratic embeddings of
nonlinear dynamics
|
math.DS cs.LG stat.ML
|
We propose a novel data-driven method called QENDy (Quadratic Embedding of
Nonlinear Dynamics) that not only allows us to learn quadratic representations
of highly nonlinear dynamical systems, but also to identify the governing
equations. The approach is based on an embedding of the system into a
higher-dimensional feature space in which the dynamics become quadratic. Just
like SINDy (Sparse Identification of Nonlinear Dynamics), our method requires
trajectory data, time derivatives for the training data points, which can also
be estimated using finite difference approximations, and a set of preselected
basis functions, called a dictionary. We illustrate the efficacy and accuracy of
QENDy with the aid of various benchmark problems and compare its performance
with SINDy and a deep learning method for identifying quadratic embeddings.
Furthermore, we analyze the convergence of QENDy and SINDy in the infinite data
limit, highlight their similarities and main differences, and compare the
quadratic embedding with linearization techniques based on the Koopman
operator.
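As a worked illustration of the quadratic-embedding idea (a textbook example, not QENDy itself): the scalar cubic system x' = -x + x^3 becomes quadratic in the features y1 = x, y2 = x^2, since y1' = -y1 + y1*y2 and y2' = 2*x*x' = -2*y2 + 2*y2^2. A short Euler-integration check confirms the two formulations agree:

```python
def simulate(x0=0.5, dt=1e-4, steps=20000):
    """Euler-integrate the cubic ODE and its quadratic embedding side by side."""
    x = x0
    y1, y2 = x0, x0 * x0                      # embedded state starts on y2 = y1^2
    for _ in range(steps):
        x = x + dt * (-x + x ** 3)            # original, non-quadratic dynamics
        y1, y2 = (y1 + dt * (-y1 + y1 * y2),  # embedded, purely quadratic dynamics
                  y2 + dt * (-2.0 * y2 + 2.0 * y2 * y2))
    return x, y1, y2

x, y1, y2 = simulate()
```

The invariant y2 = y1^2 is preserved by the exact flow; Euler introduces only an O(dt) defect, so the trajectories match closely.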
|
2501.08203
|
ArithmAttack: Evaluating Robustness of LLMs to Noisy Context in Math
Problem Solving
|
cs.CL
|
While Large Language Models (LLMs) have shown impressive capabilities in math
problem-solving tasks, their robustness to noisy inputs is not well-studied. In
this work, we propose ArithmAttack to examine how robust the LLMs are when they
encounter noisy prompts that contain extra noise in the form of punctuation
marks. While easy to implement, ArithmAttack does not cause any
information loss since words are not added or deleted from the context. We
evaluate the robustness of seven LLMs, including LLama3, Mistral, and
Mathstral, on noisy GSM8K and MultiArith datasets. Our experiments suggest that
all the studied models show vulnerability to such noise, with more noise
leading to poorer performances.
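The abstract does not spell out the injection procedure; a minimal sketch of punctuation-only noise that neither adds nor deletes words (the noise_level parameter is hypothetical, not from the paper) might be:

```python
import random

PUNCT = list("!?.,;:")

def arithm_noise(prompt: str, noise_level: float = 0.3, seed: int = 0) -> str:
    """Insert random punctuation marks between words.

    No word is added or removed, so the arithmetic content is untouched;
    `noise_level` is the probability of injecting a mark after each word.
    """
    rng = random.Random(seed)
    out = []
    for word in prompt.split():
        out.append(word)
        if rng.random() < noise_level:
            out.append(rng.choice(PUNCT))
    return " ".join(out)

clean = "Natalia sold 48 clips in April and half as many in May"
noisy = arithm_noise(clean, noise_level=0.5)
```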
|
2501.08205
|
Modeling Feature Maps for Quantum Machine Learning
|
cs.LG cs.AI
|
Quantum Machine Learning (QML) offers significant potential for complex tasks
like genome sequence classification, but quantum noise on Noisy
Intermediate-Scale Quantum (NISQ) devices poses practical challenges. This
study systematically evaluates how various quantum noise models including
dephasing, amplitude damping, depolarizing, thermal noise, bit-flip, and
phase-flip affect key QML algorithms (QSVC, Peg-QSVC, QNN, VQC) and feature
mapping techniques (ZFeatureMap, ZZFeatureMap, and PauliFeatureMap). Results
indicate that QSVC is notably robust under noise, whereas Peg-QSVC and QNN are
more sensitive, particularly to depolarizing and amplitude-damping noise. The
PauliFeatureMap is especially vulnerable, highlighting difficulties in
maintaining accurate classification under noisy conditions. These findings
underscore the critical importance of feature map selection and noise
mitigation strategies in optimizing QML for genomic classification, with
promising implications for personalized medicine.
|
2501.08207
|
Efficient Dataframe Systems: Lazy Fat Pandas on a Diet
|
cs.DB
|
Pandas is widely used for data science applications, but users often run into
problems when datasets are larger than memory. There are several frameworks
based on lazy evaluation that handle large datasets, but the programs have to
be rewritten to suit the framework, and the presence of multiple frameworks
complicates the life of a programmer. In this paper we present a framework that
allows programmers to code in plain Pandas; with just two lines of code changed
by the user, our system optimizes the program using a combination of
just-in-time static analysis, and runtime optimization based on a lazy
dataframe wrapper framework. Moreover, our system allows the programmer to
choose the backend. It works seamlessly with Pandas, Dask, and Modin, allowing
the choice of the best-suited backend for an application based on factors such
as data size. Performance results on a variety of programs show the benefits of
our framework.
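The paper's optimizer is not described in the abstract; a toy sketch of the underlying idea of a lazy dataframe wrapper, which records operations and runs the pipeline only when a result is demanded (hypothetical API, with a list of dicts standing in for a dataframe):

```python
class LazyFrame:
    """Toy lazy wrapper: queue operations, run them only on materialization."""

    def __init__(self, data, ops=None):
        self._data = data        # a plain list of dicts stands in for a dataframe
        self._ops = ops or []    # deferred pipeline of callables

    def filter(self, pred):
        return LazyFrame(self._data,
                         self._ops + [lambda rows: [r for r in rows if pred(r)]])

    def select(self, *cols):
        return LazyFrame(self._data,
                         self._ops + [lambda rows: [{c: r[c] for c in cols}
                                                    for r in rows]])

    def collect(self):
        rows = self._data
        for op in self._ops:     # a real system would optimize the plan first
            rows = op(rows)
        return rows

df = LazyFrame([{"x": 1, "y": 2}, {"x": 3, "y": 4}])
result = df.filter(lambda r: r["x"] > 1).select("y").collect()
```

Deferring execution like this is what lets a backend reorder, fuse, or spill operations for out-of-memory data.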
|
2501.08208
|
ASTRID -- An Automated and Scalable TRIaD for the Evaluation of
RAG-based Clinical Question Answering Systems
|
cs.CL cs.AI
|
Large Language Models (LLMs) have shown impressive potential in clinical
question answering (QA), with Retrieval Augmented Generation (RAG) emerging as
a leading approach for ensuring the factual accuracy of model responses.
However, current automated RAG metrics perform poorly in clinical and
conversational use cases. Using clinical human evaluations of responses is
expensive, unscalable, and not conducive to the continuous iterative
development of RAG systems. To address these challenges, we introduce ASTRID -
an Automated and Scalable TRIaD for evaluating clinical QA systems leveraging
RAG - consisting of three metrics: Context Relevance (CR), Refusal Accuracy
(RA), and Conversational Faithfulness (CF). Our novel evaluation metric, CF, is
designed to better capture the faithfulness of a model's response to the
knowledge base without penalising conversational elements. To validate our
triad, we curate a dataset of over 200 real-world patient questions posed to an
LLM-based QA agent during surgical follow-up for cataract surgery - the highest
volume operation in the world - augmented with clinician-selected questions for
emergency, clinical, and non-clinical out-of-domain scenarios. We demonstrate
that CF can predict human ratings of faithfulness better than existing
definitions for conversational use cases. Furthermore, we show that evaluation
using our triad consisting of CF, RA, and CR exhibits alignment with clinician
assessment for inappropriate, harmful, or unhelpful responses. Finally, using
nine different LLMs, we demonstrate that the three metrics can closely agree
with human evaluations, highlighting the potential of these metrics for use in
LLM-driven automated evaluation pipelines. We also publish the prompts and
datasets for these experiments, providing valuable resources for further
research and development.
|
2501.08219
|
Investigating Energy Efficiency and Performance Trade-offs in LLM
Inference Across Tasks and DVFS Settings
|
cs.LG
|
Large language models (LLMs) have shown significant improvements in many
natural language processing (NLP) tasks, accelerating their rapid adoption
across many industries. These models are resource-intensive, requiring
extensive computational resources both during training and inference, leading
to increased energy consumption and negative environmental impact. As their
adoption accelerates, the sustainability of LLMs has become a critical issue,
necessitating strategies to optimize their runtime efficiency without
compromising performance. Hence, it is imperative to identify the parameters
that significantly influence the performance and energy efficiency of LLMs. To
that end, in this work, we investigate the effect of important parameters on
the performance and energy efficiency of LLMs during inference and examine
their trade-offs.
First, we analyze how different types of models with varying numbers of
parameters and architectures perform on tasks like text generation, question
answering, and summarization by benchmarking LLMs such as Falcon-7B,
Mistral-7B-v0.1, T5-3B, GPT-2, GPT-J-6B, and GPT-Neo-2.7B. Second, we study
input and output sequence characteristics such as sequence length concerning
energy consumption, performance, and throughput. Finally, we explore the impact
of hardware-based power-saving techniques, i.e., Dynamic Voltage Frequency
Scaling (DVFS), on the models' latency and energy efficiency. Our extensive
benchmarking and statistical analysis reveal many interesting findings,
uncovering how specific optimizations can reduce energy consumption while
maintaining throughput and accuracy. This study provides actionable insights
for researchers and practitioners to design energy-efficient LLM inference
systems.
|
2501.08220
|
Optimization of Link Configuration for Satellite Communication Using
Reinforcement Learning
|
cs.AI
|
Satellite communication is a key technology in our modern connected world.
With increasingly complex hardware, one challenge is to efficiently configure
links (connections) on a satellite transponder. Planning an optimal link
configuration is extremely complex and depends on many parameters and metrics.
The optimal use of the limited resources, bandwidth and power of the
transponder is crucial. Such an optimization problem can be approximated using
metaheuristic methods such as simulated annealing, but recent research results
also show that reinforcement learning can achieve comparable or even better
performance in optimization methods. However, there have not yet been any
studies on link configuration on satellite transponders. In order to close this
research gap, a transponder environment was developed as part of this work. For
this environment, the performance of the reinforcement learning algorithm PPO
was compared with the metaheuristic simulated annealing in two experiments. The
results show that simulated annealing delivers better results for this static
problem than the PPO algorithm; however, the findings also underline the
potential of reinforcement learning for optimization problems.
|
2501.08222
|
Data-driven Spatial Classification using Multi-Arm Bandits for
Monitoring with Energy-Constrained Mobile Robots
|
cs.RO
|
We consider the spatial classification problem for monitoring using data
collected by a coordinated team of mobile robots. Such classification problems
arise in several applications including search-and-rescue and precision
agriculture. Specifically, we want to classify the regions of a search
environment into interesting and uninteresting as quickly as possible using a
team of mobile sensors and mobile charging stations. We develop a data-driven
strategy that accommodates the noise in sensed data and the limited energy
capacity of the sensors, and generates collision-free motion plans for the
team. We propose a bi-level approach, where a high-level planner leverages a
multi-armed bandit framework to determine the potential regions of interest for
the drones to visit next based on the data collected online. Then, a low-level
path planner based on integer programming coordinates the paths for the team to
visit the target regions subject to the physical constraints. We characterize
several theoretical properties of the proposed approach, including anytime
guarantees and task completion time. We show the efficacy of our approach in
simulation, and further validate these observations in physical experiments
using mobile robots.
|
2501.08223
|
Big Batch Bayesian Active Learning by Considering Predictive
Probabilities
|
cs.LG stat.ML
|
We observe that BatchBALD, a popular acquisition function for batch Bayesian
active learning for classification, can conflate epistemic and aleatoric
uncertainty, leading to suboptimal performance. Motivated by this observation,
we propose to focus on the predictive probabilities, which only exhibit
epistemic uncertainty. The result is an acquisition function that not only
performs better, but is also faster to evaluate, allowing for larger batches
than before.
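The decomposition behind this observation is the standard BALD mutual information, the entropy of the mean prediction minus the mean entropy, in which aleatoric uncertainty cancels; a numpy sketch (of the per-point score, not the paper's batch acquisition function):

```python
import numpy as np

def bald_score(probs):
    """probs: (n_samples, n_classes) posterior-sampled predictive distributions.

    Returns mutual information H(E[p]) - E[H(p)]: epistemic uncertainty only.
    """
    eps = 1e-12
    mean_p = probs.mean(axis=0)
    h_of_mean = -(mean_p * np.log(mean_p + eps)).sum()
    mean_of_h = -(probs * np.log(probs + eps)).sum(axis=1).mean()
    return h_of_mean - mean_of_h

# High aleatoric, no epistemic: every posterior sample predicts the same
# uniform distribution, so the score is zero.
aleatoric = np.tile([0.5, 0.5], (10, 1))
# High epistemic: samples disagree confidently, so the score is large.
epistemic = np.array([[0.99, 0.01], [0.01, 0.99]] * 5)
```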
|
2501.08225
|
FramePainter: Endowing Interactive Image Editing with Video Diffusion
Priors
|
cs.CV
|
Interactive image editing allows users to modify images through visual
interaction operations such as drawing, clicking, and dragging. Existing
methods construct such supervision signals from videos, as they capture how
objects change with various physical interactions. However, these models are
usually built upon text-to-image diffusion models and thus necessitate (i) massive
training samples and (ii) an additional reference encoder to learn real-world
dynamics and visual consistency. In this paper, we reformulate this task as an
image-to-video generation problem so as to inherit powerful video diffusion
priors, reducing training costs and ensuring temporal consistency. Specifically,
we introduce FramePainter as an efficient instantiation of this formulation.
Initialized with Stable Video Diffusion, it only uses a lightweight sparse
control encoder to inject editing signals. Considering the limitations of
temporal attention in handling large motion between two frames, we further
propose matching attention to enlarge the receptive field while encouraging
dense correspondence between edited and source image tokens. We highlight the
effectiveness and efficiency of FramePainter across a variety of editing
signals: it decisively outperforms previous state-of-the-art methods with far
less training data, achieving highly seamless and coherent image editing, e.g.,
automatically adjusting the reflection of a cup. Moreover, FramePainter also
exhibits exceptional generalization to scenarios not present in real-world
videos, e.g., transforming a clownfish into a shark-like shape. Our code will be
available at https://github.com/YBYBZhang/FramePainter.
|
2501.08226
|
Efficient Deep Learning-based Forward Solvers for Brain Tumor Growth
Models
|
cs.CV cs.LG
|
Glioblastoma, a highly aggressive brain tumor, poses major challenges due to
its poor prognosis and high morbidity rates. Partial differential
equation-based models offer promising potential to enhance therapeutic outcomes
by simulating patient-specific tumor behavior for improved radiotherapy
planning. However, model calibration remains a bottleneck due to the high
computational demands of optimization methods like Monte Carlo sampling and
evolutionary algorithms. To address this, we recently introduced an approach
leveraging a neural forward solver with gradient-based optimization to
significantly reduce calibration time. This approach requires a highly accurate
and fully differentiable forward model. We investigate multiple architectures,
including (i) an enhanced TumorSurrogate, (ii) a modified nnU-Net, and (iii) a
3D Vision Transformer (ViT). The optimized TumorSurrogate achieved the best
overall results, excelling in both tumor outline matching and voxel-level
prediction of tumor cell concentration. It halved the MSE relative to the
baseline model and achieved the highest Dice score across all tumor cell
concentration thresholds. Our study demonstrates significant enhancement in
forward solver performance and outlines important future research directions.
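The Dice scores reported above are computed over thresholded tumor-cell concentrations; a minimal numpy sketch of that metric (toy 2x2 fields, not the paper's data):

```python
import numpy as np

def dice_at_threshold(pred, ref, thr):
    """Dice overlap of the binarized concentration fields at threshold thr."""
    p, r = pred >= thr, ref >= thr
    inter = np.logical_and(p, r).sum()
    denom = p.sum() + r.sum()
    return 2.0 * inter / denom if denom else 1.0

ref = np.array([[0.0, 0.2], [0.6, 0.9]])
pred = np.array([[0.1, 0.1], [0.7, 0.8]])
```

Sweeping thr over several values, as the study does, probes agreement at both the tumor core (high thresholds) and the infiltration margin (low thresholds).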
|
2501.08227
|
Nonlinear Cruise Controllers with Bidirectional Sensing for a String of
Vehicles
|
math.OC cs.SY eess.SY
|
We introduce a nonlinear cruise controller that is fully decentralized (by
vehicle) and uses spacing and speed measurements from the preceding and
following vehicles to decide on the appropriate control action (acceleration)
for each vehicle. The proposed cruise controller is studied on both a ring-road
and an open road and guarantees that there are no collisions between vehicles,
while their speeds are always positive and never exceed the road speed limits.
For both cases of the open road and the ring-road, we rigorously prove that the
set of equilibrium points is globally asymptotically stable and provide KL
estimates that guarantee uniform convergence to the said set. Moreover, we show
that for the ring-road, and under certain conditions, there is a single
equilibrium point which is exponentially attractive.
|
2501.08234
|
Dynamic Pricing in High-Speed Railways Using Multi-Agent Reinforcement
Learning
|
cs.LG cs.AI cs.MA
|
This paper addresses a critical challenge in the high-speed passenger railway
industry: designing effective dynamic pricing strategies in the context of
competing and cooperating operators. To address this, a multi-agent
reinforcement learning (MARL) framework based on a non-zero-sum Markov game is
proposed, incorporating random utility models to capture passenger decision
making. Unlike prior studies in areas such as energy, airlines, and mobile
networks, dynamic pricing for railway systems using deep reinforcement learning
has received limited attention. A key contribution of this paper is a
parametrisable and versatile reinforcement learning simulator, RailPricing-RL,
designed to model a variety of railway network configurations and demand
patterns while enabling realistic, microscopic modelling of user behaviour. This
environment supports the proposed MARL framework, which models heterogeneous
agents competing to maximise individual profits while fostering cooperative
behaviour to synchronise connecting services. Experimental results validate the
framework, demonstrating how user preferences affect MARL performance and how
pricing policies influence passenger choices, utility, and overall system
dynamics. This study provides a foundation for advancing dynamic pricing
strategies in railway systems, aligning profitability with system-wide
efficiency, and supporting future research on optimising pricing policies.
|
2501.08236
|
Privacy-Preserving Model and Preprocessing Verification for Machine
Learning
|
cs.LG
|
This paper presents a framework for privacy-preserving verification of
machine learning models, focusing on models trained on sensitive data.
Integrating Local Differential Privacy (LDP) with model explanations from LIME
and SHAP, our framework enables robust verification without compromising
individual privacy. It addresses two key tasks: binary classification, to
verify if a target model was trained correctly by applying the appropriate
preprocessing steps, and multi-class classification, to identify specific
preprocessing errors. Evaluations on three real-world datasets (Diabetes, Adult,
and Student Record) demonstrate that while the ML-based approach is particularly
effective in binary tasks, the threshold-based method performs comparably in
multi-class tasks. Results indicate that although verification accuracy varies
across datasets and noise levels, the framework provides effective detection of
preprocessing errors, strong privacy guarantees, and practical applicability
for safeguarding sensitive data.
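The specific LDP mechanism is not given in the abstract; a standard randomized-response sketch, including the debiasing step used to recover population statistics from privatized reports, illustrates the ingredient:

```python
import math
import random

def randomized_response(bit: int, eps: float, rng: random.Random) -> int:
    """Report the true bit with probability e^eps / (e^eps + 1), else flip it.

    This satisfies eps-local differential privacy for a single binary value.
    """
    p_true = math.exp(eps) / (math.exp(eps) + 1.0)
    return bit if rng.random() < p_true else 1 - bit

def debias(observed_mean: float, eps: float) -> float:
    """Invert the flip noise: E[obs] = p*t + (1-p)*(1-t), solve for t."""
    p = math.exp(eps) / (math.exp(eps) + 1.0)
    return (observed_mean - (1.0 - p)) / (2.0 * p - 1.0)
```

The round trip is exact in expectation: plugging the expected observed mean into `debias` recovers the true proportion.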
|
2501.08241
|
A Feature-Level Ensemble Model for COVID-19 Identification in CXR Images
using Choquet Integral and Differential Evolution Optimization
|
cs.CV cs.AI cs.LG eess.IV
|
The COVID-19 pandemic has profoundly impacted billions globally. It
challenges public health and healthcare systems due to its rapid spread and
severe respiratory effects. An effective strategy to mitigate the COVID-19
pandemic involves integrating testing to identify infected individuals. While
RT-PCR is considered the gold standard for diagnosing COVID-19, it has some
limitations such as the risk of false negatives. To address this problem, this
paper introduces a novel Deep Learning Diagnosis System that integrates
pre-trained Deep Convolutional Neural Networks (DCNNs) within an ensemble
learning framework to achieve precise identification of COVID-19 cases from
Chest X-ray (CXR) images. We combine feature vectors from the final hidden
layers of pre-trained DCNNs using the Choquet integral to capture interactions
between different DCNNs that a linear approach cannot. We employed
Sugeno-$\lambda$ measure theory to derive fuzzy measures for subsets of
networks to enable aggregation. We utilized Differential Evolution to estimate
fuzzy densities. We developed a TensorFlow-based layer for Choquet operation to
facilitate efficient aggregation, due to the intricacies involved in
aggregating feature vectors. Experimental results on the COVIDx dataset show
that our ensemble model achieved 98\% accuracy in three-class classification
and 99.50\% in binary classification, outperforming its components, DenseNet-201
(97\% for three-class, 98.75\% for binary), Inception-v3 (96.25\% for
three-class, 98.50\% for binary), and Xception (94.50\% for three-class, 98\%
for binary), and surpassing many previous methods.
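The Sugeno-$\lambda$ construction and the resulting Choquet aggregation can be sketched as follows (toy densities and scores, not the paper's networks or learned fuzzy measures):

```python
import math

def sugeno_lambda(densities, tol=1e-10):
    """Solve prod(1 + lam*g_i) = 1 + lam for the nonzero Sugeno lambda."""
    f = lambda lam: math.prod(1.0 + lam * g for g in densities) - (1.0 + lam)
    s = sum(densities)
    if abs(s - 1.0) < tol:
        return 0.0                        # densities already additive
    lo, hi = (tol, 1.0) if s < 1.0 else (-1.0 + tol, -tol)
    while s < 1.0 and f(hi) < 0.0:        # widen the bracket if needed
        hi *= 2.0
    for _ in range(100):                  # bisection on the sign change
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

def sugeno_measure(subset, densities, lam):
    """Fuzzy measure g(A) induced by the densities and lambda."""
    if not subset:
        return 0.0
    if lam == 0.0:
        return sum(densities[i] for i in subset)
    return (math.prod(1.0 + lam * densities[i] for i in subset) - 1.0) / lam

def choquet(scores, densities):
    """Choquet integral of per-network scores w.r.t. the Sugeno measure."""
    lam = sugeno_lambda(densities)
    order = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    total, prev, top = 0.0, 0.0, []
    for i in order:                       # sweep from highest score down
        top.append(i)
        g = sugeno_measure(top, densities, lam)
        total += scores[i] * (g - prev)   # weight by the measure increment
        prev = g
    return total
```

Unlike a weighted average, the measure increments let subsets of networks carry more (or less) weight than the sum of their parts, capturing the interactions the abstract refers to.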
|
2501.08243
|
Engineering LLM Powered Multi-agent Framework for Autonomous CloudOps
|
cs.SE cs.AI cs.LG
|
Cloud Operations (CloudOps) is a rapidly growing field focused on the
automated management and optimization of cloud infrastructure which is
essential for organizations navigating increasingly complex cloud environments.
MontyCloud Inc. is one of the major companies in the CloudOps domain that
leverages autonomous bots to manage cloud compliance, security, and continuous
operations. To make the platform more accessible and effective for customers,
we leveraged GenAI.
Developing a GenAI-based solution for autonomous CloudOps for the existing
MontyCloud system presented us with various challenges such as i) diverse data
sources; ii) orchestration of multiple processes; and iii) handling complex
workflows to automate routine tasks. To this end, we developed MOYA, a
multi-agent framework that leverages GenAI and balances autonomy with the
necessary human control. This framework integrates various internal and
external systems and is optimized for factors like task orchestration,
security, and error mitigation while producing accurate, reliable, and relevant
insights by utilizing Retrieval Augmented Generation (RAG). Evaluations of our
multi-agent system with the help of practitioners as well as using automated
checks demonstrate enhanced accuracy, responsiveness, and effectiveness over
non-agentic approaches across complex workflows.
|
2501.08245
|
Continual Deep Active Learning for Medical Imaging: Replay-Base
Architecture for Context Adaptation
|
cs.CV cs.LG
|
Deep Learning for medical imaging faces challenges in adapting and
generalizing to new contexts. Additionally, it often lacks sufficient labeled
data for specific tasks requiring significant annotation effort. Continual
Learning (CL) tackles adaptability and generalizability by enabling lifelong
learning from a data stream while mitigating forgetting of previously learned
knowledge. Active Learning (AL) reduces the number of required annotations for
effective training. This work explores both approaches (CAL) to develop a novel
framework for robust medical image analysis. Based on the automatic recognition
of shifts in image characteristics, Replay-Base Architecture for Context
Adaptation (RBACA) employs a CL rehearsal method to continually learn from
diverse contexts, and an AL component to select the most informative instances
for annotation. A novel approach to evaluating CAL methods is established using
a metric termed the IL-Score, which allows for the simultaneous
assessment of transfer learning, forgetting, and final model performance. We
show that RBACA works in domain and class-incremental learning scenarios, by
assessing its IL-Score on the segmentation and diagnosis of cardiac images. The
results show that RBACA outperforms a baseline framework without CAL, and a
state-of-the-art CAL method across various memory sizes and annotation budgets.
Our code is available at https://github.com/RuiDaniel/RBACA .
|
2501.08246
|
Text-Diffusion Red-Teaming of Large Language Models: Unveiling Harmful
Behaviors with Proximity Constraints
|
cs.LG
|
Recent work has proposed automated red-teaming methods for testing the
vulnerabilities of a given target large language model (LLM). These methods use
red-teaming LLMs to uncover inputs that induce harmful behavior in a target
LLM. In this paper, we study red-teaming strategies that enable a targeted
security assessment. We propose an optimization framework for red-teaming with
proximity constraints, where the discovered prompts must be similar to
reference prompts from a given dataset. This dataset serves as a template for
the discovered prompts, anchoring the search for test-cases to specific topics,
writing styles, or types of harmful behavior. We show that established
auto-regressive model architectures do not perform well in this setting. We
therefore introduce a black-box red-teaming method inspired by text-diffusion
models: Diffusion for Auditing and Red-Teaming (DART). DART modifies the
reference prompt by perturbing it in the embedding space, directly controlling
the amount of change introduced. We systematically evaluate our method by
comparing its effectiveness with established methods based on model fine-tuning
and zero- and few-shot prompting. Our results show that DART is significantly
more effective at discovering harmful inputs in close proximity to the
reference prompt.
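DART's update rule is not specified in the abstract; a generic sketch of the key ingredient, bounding an embedding-space perturbation so the amount of change from the reference is directly controlled:

```python
import numpy as np

def project_to_ball(embedding, reference, radius):
    """Clip a perturbed embedding back into an L2 ball around the reference,
    directly bounding how far a candidate prompt drifts from it."""
    delta = embedding - reference
    norm = np.linalg.norm(delta)
    if norm <= radius:
        return embedding
    return reference + delta * (radius / norm)

rng = np.random.default_rng(0)
ref = rng.normal(size=8)
candidate = ref + rng.normal(size=8) * 5.0   # a large, unconstrained edit
bounded = project_to_ball(candidate, ref, radius=1.0)
```

Applying such a projection after every perturbation step is one simple way to keep discovered prompts in close proximity to the reference dataset.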
|
2501.08248
|
Eliciting In-context Retrieval and Reasoning for Long-context Large
Language Models
|
cs.CL cs.AI cs.IR cs.LG
|
Recent advancements in long-context language models (LCLMs) promise to
transform Retrieval-Augmented Generation (RAG) by simplifying pipelines. With
their expanded context windows, LCLMs can process entire knowledge bases and
perform retrieval and reasoning directly -- a capability we define as
In-Context Retrieval and Reasoning (ICR^2). However, existing benchmarks like
LOFT often overestimate LCLM performance by providing overly simplified
contexts. To address this, we introduce ICR^2, a benchmark that evaluates LCLMs
in more realistic scenarios by including confounding passages retrieved with
strong retrievers. We then propose three methods to enhance LCLM performance:
(1) retrieve-then-generate fine-tuning, (2) retrieval-attention-probing, which
uses attention heads to filter and de-noise long contexts during decoding, and
(3) joint retrieval head training alongside the generation head. Our evaluation
of five well-known LCLMs on LOFT and ICR^2 demonstrates significant gains with
our best approach applied to Mistral-7B: +17 and +15 points by Exact Match on
LOFT, and +13 and +2 points on ICR^2, compared to vanilla RAG and supervised
fine-tuning, respectively. It even outperforms GPT-4-Turbo on most tasks
despite being a much smaller model.
|
2501.08258
|
Towards an End-to-End (E2E) Adversarial Learning and Application in the
Physical World
|
cs.CV cs.CR
|
The traditional learning process of patch-based adversarial attacks,
conducted in the digital domain and then applied in the physical domain (e.g.,
via printed stickers), may suffer from reduced performance due to adversarial
patches' limited transferability from the digital domain to the physical
domain. Given that previous studies have considered using projectors to apply
adversarial attacks, we raise the following question: can adversarial learning
(i.e., patch generation) be performed entirely in the physical domain with a
projector? In this work, we propose the Physical-domain Adversarial Patch
Learning Augmentation (PAPLA) framework, a novel end-to-end (E2E) framework
that converts adversarial learning from the digital domain to the physical
domain using a projector. We evaluate PAPLA across multiple scenarios,
including controlled laboratory settings and realistic outdoor environments,
demonstrating its ability to ensure attack success compared to conventional
digital learning-physical application (DL-PA) methods. We also analyze the
impact of environmental factors, such as projection surface color, projector
strength, ambient light, distance, and angle of the target object relative to
the camera, on the effectiveness of projected patches. Finally, we demonstrate
the feasibility of the attack against a parked car and a stop sign in a
real-world outdoor environment. Our results show that under specific
conditions, E2E adversarial learning in the physical domain eliminates the
transferability issue and ensures evasion by object detectors. We also
provide insights into the challenges and opportunities of applying adversarial
learning in the physical domain and explain where such an approach is more
effective than using a sticker.
|
2501.08259
|
FDPP: Fine-tune Diffusion Policy with Human Preference
|
cs.RO cs.LG
|
Imitation learning from human demonstrations enables robots to perform
complex manipulation tasks and has recently witnessed huge success. However,
these techniques often struggle to adapt behavior to new preferences or changes
in the environment. To address these limitations, we propose Fine-tuning
Diffusion Policy with Human Preference (FDPP). FDPP learns a reward function
through preference-based learning. This reward is then used to fine-tune the
pre-trained policy with reinforcement learning (RL), aligning the pre-trained
policy with new human preferences while still solving the original task. Our
experiments across various robotic tasks and preferences demonstrate
that FDPP effectively customizes policy behavior without compromising
performance. Additionally, we show that incorporating Kullback-Leibler (KL)
regularization during fine-tuning prevents over-fitting and helps maintain the
competencies of the initial policy.
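The KL regularization mentioned above penalizes drift from the pre-trained policy; an illustrative numpy sketch of a KL-penalized reward for a discrete action distribution (not the paper's exact objective):

```python
import numpy as np

def kl_regularized_reward(reward, pi_new, pi_ref, beta):
    """r - beta * KL(pi_new || pi_ref): the penalty grows as the fine-tuned
    policy drifts from the pre-trained (reference) policy."""
    kl = np.sum(pi_new * np.log(pi_new / pi_ref))
    return reward - beta * kl

pi_ref = np.array([0.5, 0.5])
same = kl_regularized_reward(1.0, np.array([0.5, 0.5]), pi_ref, beta=0.1)
drift = kl_regularized_reward(1.0, np.array([0.9, 0.1]), pi_ref, beta=0.1)
```

A policy identical to the reference pays no penalty; a confidently drifted one does, which is how the regularizer preserves the initial policy's competencies.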
|
2501.08263
|
Multiplayer Federated Learning: Reaching Equilibrium with Less
Communication
|
cs.LG math.OC stat.ML
|
Traditional Federated Learning (FL) approaches assume collaborative clients
with aligned objectives working towards a shared global model. However, in many
real-world scenarios, clients act as rational players with individual
objectives and strategic behaviors, a concept that existing FL frameworks are
not equipped to adequately address. To bridge this gap, we introduce
Multiplayer Federated Learning (MpFL), a novel framework that models the
clients in the FL environment as players in a game-theoretic context, aiming to
reach an equilibrium. In this scenario, each player tries to optimize their own
utility function, which may not align with the collective goal. Within MpFL, we
propose Per-Player Local Stochastic Gradient Descent (PEARL-SGD), an algorithm
in which each player/client performs local updates independently and
periodically communicates with other players. We theoretically analyze
PEARL-SGD and prove that it reaches a neighborhood of equilibrium with less
communication in the stochastic setup compared to its non-local counterpart.
Finally, we verify our theoretical findings through numerical experiments.
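The local-update-then-communicate pattern of PEARL-SGD can be illustrated on a toy two-player quadratic game. This is an assumed, deterministic simplification (the paper's analysis covers general stochastic utilities and more players): each player takes several gradient steps against a stale snapshot of the other player's variable, and snapshots are refreshed only at communication rounds.

```python
def pearl_sgd(targets, coupling, lr=0.1, local_steps=5, rounds=50):
    """Toy per-player local SGD for a two-player quadratic game where
    player i minimizes 0.5*(x_i - t_i)^2 + coupling * x_i * x_j.
    Between communications, each player updates against a stale copy
    of the other player's variable."""
    x = [0.0, 0.0]
    for _ in range(rounds):
        stale = list(x)  # snapshot exchanged at communication time
        for i in (0, 1):
            for _ in range(local_steps):
                grad = x[i] - targets[i] + coupling * stale[1 - i]
                x[i] -= lr * grad
    return x
```

For this game the equilibrium solves x_i = t_i - c*x_j, i.e. x_1 = (t_1 - c*t_2)/(1 - c^2); the iterates approach it while communicating only once per `local_steps` gradient steps.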
|
2501.08266
|
AI Driven Water Segmentation with deep learning models for Enhanced
Flood Monitoring
|
cs.CV cs.AI cs.LG eess.IV
|
Flooding is a major natural hazard causing significant fatalities and
economic losses annually, with increasing frequency due to climate change.
Rapid and accurate flood detection and monitoring are crucial for mitigating
these impacts. This study compares the performance of three deep learning
models, UNet, ResNet, and DeepLabv3, for pixelwise water segmentation to aid
flood detection, utilizing images from drones, in-field observations, and
social media. The study creates a new dataset that augments well-known
benchmark datasets with flood-specific images, enhancing the robustness of the
models. The UNet, ResNet, and DeepLabv3 architectures are tested to determine
their effectiveness at predicting segmentation masks under various
environmental conditions and geographical locations, and the strengths and
limitations of each model are discussed, providing insights into their
applicability in different scenarios. This fully automated approach
allows these models to isolate flooded areas in images, significantly reducing
processing time compared to traditional semi-automated methods. The outcome of
this study is the predicted segmentation masks for each image affected by a
flood disaster, along with the validation accuracy of these models. This methodology
facilitates timely and continuous flood monitoring, providing vital data for
emergency response teams to reduce loss of life and economic damage. It offers
a significant reduction in the time required to generate flood maps by cutting
manual processing time. Additionally, we present avenues for future
research, including the integration of multimodal data sources and the
development of robust deep learning architectures tailored specifically for
flood detection tasks. Overall, our work contributes to the advancement of
flood management strategies through innovative use of deep learning
technologies.
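Validating pixelwise water-segmentation output typically comes down to comparing predicted and ground-truth binary masks. As an illustrative sketch (the `mask_iou` helper and plain-list mask representation are assumptions, not the study's code), the standard Intersection-over-Union metric looks like this:

```python
def mask_iou(pred, truth):
    """Intersection-over-Union between two binary masks, given as 2D
    lists of 0/1 pixels. A common validation metric for pixelwise
    water/flood segmentation; returns 1.0 when both masks are empty."""
    inter = sum(p & t for pr, tr in zip(pred, truth) for p, t in zip(pr, tr))
    union = sum(p | t for pr, tr in zip(pred, truth) for p, t in zip(pr, tr))
    return inter / union if union else 1.0
```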
|
2501.08267
|
TriMod Fusion for Multimodal Named Entity Recognition in Social Media
|
cs.IR cs.SI
|
Social media platforms serve as invaluable sources of user-generated content,
offering insights into various aspects of human behavior. Named Entity
Recognition (NER) plays a crucial role in analyzing such content by identifying
and categorizing named entities into predefined classes. However, traditional
NER models often struggle with the informal, contextually sparse, and ambiguous
nature of social media language. To address these challenges, recent research
has focused on multimodal approaches that leverage both textual and visual cues
for enhanced entity recognition. Despite advances, existing methods face
limitations in capturing nuanced mappings between visual objects and textual
entities and addressing distributional disparities between modalities. In this
paper, we propose a novel approach that integrates textual, visual, and hashtag
features (TriMod), utilizing Transformer-attention for effective modality
fusion. The improvements exhibited by our model suggest that named entities can
greatly benefit from the auxiliary context provided by multiple modalities,
enabling more accurate recognition. Through the experiments on a multimodal
social media dataset, we demonstrate the superiority of our approach over
existing state-of-the-art methods, achieving significant improvements in
precision, recall, and F1 score.
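The Transformer-attention fusion mentioned above can be sketched as single-query scaled dot-product attention, where a textual feature attends over visual and hashtag features to produce a fused vector. This is a generic illustration under assumed names (`attention_fuse`) and plain-list vectors, not TriMod's actual multi-head architecture:

```python
import math

def attention_fuse(query, keys, values):
    """Scaled dot-product attention: the (e.g. textual) query scores
    each modality key, softmax-normalizes the scores, and returns the
    weighted sum of the corresponding value vectors."""
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d)
              for key in keys]
    m = max(scores)                       # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    weights = [e / total for e in exps]
    return [sum(w * v[i] for w, v in zip(weights, values))
            for i in range(len(values[0]))]
```

A query closely aligned with one key pulls the fused vector toward that modality's value, which is how auxiliary visual or hashtag context can sharpen an ambiguous textual entity.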
|
2501.08271
|
Comparative Analysis of Efficient Adapter-Based Fine-Tuning of
State-of-the-Art Transformer Models
|
cs.CL cs.AI
|
In this work, we investigate the efficacy of various adapter architectures on
supervised binary classification tasks from the SuperGLUE benchmark as well as
a supervised multi-class news category classification task from Kaggle.
Specifically, we compare classification performance and time complexity of
three transformer models, namely DistilBERT, ELECTRA, and BART, using
conventional fine-tuning as well as nine state-of-the-art (SoTA) adapter
architectures. Our analysis reveals performance differences across adapter
architectures, highlighting their ability to achieve comparable or better
performance relative to fine-tuning at a fraction of the training time. Similar
results are observed on the new classification task, further supporting our
findings and demonstrating adapters as efficient and flexible alternatives to
fine-tuning. This study provides valuable insights and guidelines for selecting
and implementing adapters in diverse natural language processing (NLP)
applications.
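The common structural pattern behind the adapter architectures compared above is a small bottleneck module inserted into a frozen transformer: down-project, nonlinearity, up-project, residual connection. The sketch below is a generic illustration (class name and zero-initialized up-projection are assumptions, though zero/near-identity initialization is a widely used convention so the adapter starts as a no-op):

```python
import math
import random

class BottleneckAdapter:
    """Minimal bottleneck adapter sketch: only these small projection
    matrices are trained, while the host transformer stays frozen."""

    def __init__(self, dim, bottleneck, seed=0):
        rng = random.Random(seed)
        scale = 1.0 / math.sqrt(dim)
        # Down-projection: dim -> bottleneck, small random init.
        self.down = [[rng.uniform(-scale, scale) for _ in range(bottleneck)]
                     for _ in range(dim)]
        # Up-projection: bottleneck -> dim, zero init so the adapter
        # initially passes hidden states through unchanged.
        self.up = [[0.0] * dim for _ in range(bottleneck)]

    def __call__(self, h):
        # Down-project + ReLU.
        z = [max(0.0, sum(h[i] * self.down[i][j] for i in range(len(h))))
             for j in range(len(self.down[0]))]
        # Up-project back to the hidden dimension.
        delta = [sum(z[j] * self.up[j][k] for j in range(len(z)))
                 for k in range(len(h))]
        # Residual connection.
        return [hk + dk for hk, dk in zip(h, delta)]
```

Because only the two small matrices are updated, training touches a fraction of the parameters that full fine-tuning would, which is the source of the time savings the comparison reports.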
|
2501.08276
|
Exploring Robustness of LLMs to Sociodemographically-Conditioned
Paraphrasing
|
cs.CL
|
Large Language Models (LLMs) have shown impressive performance in various NLP
tasks. However, there are concerns about their reliability in different domains
of linguistic variations. Many works have proposed robustness evaluation
measures for local adversarial attacks, but globally robust models that are
unbiased across different language styles are also needed. We take a broader approach to explore a
wider range of variations across sociodemographic dimensions to perform
structured reliability tests on the reasoning capacity of language models. We
extend the SocialIQA dataset to create diverse paraphrased sets conditioned on
sociodemographic styles. The assessment aims to provide a deeper understanding
of LLMs in (a) their capability of generating demographic paraphrases with
engineered prompts and (b) their reasoning capabilities in real-world, complex
language scenarios. We also explore measures such as perplexity,
explainability, and ATOMIC performance of paraphrases for fine-grained
reliability analysis of LLMs on these sets. We find that demographic-specific
paraphrasing significantly impacts the performance of language models,
indicating that the subtleties of language variations remain a significant
challenge. The code and dataset will be made available for reproducibility and
future research.
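Among the measures listed, perplexity has a compact definition worth making explicit: the exponential of the average negative log-likelihood the model assigns to the tokens of a paraphrase. A minimal sketch, assuming per-token probabilities are already available from the model:

```python
import math

def perplexity(token_probs):
    """Perplexity of a sequence given the model's probability for each
    token: exp of the mean negative log-likelihood. Lower values mean
    the model finds the (paraphrased) text more predictable."""
    nll = -sum(math.log(p) for p in token_probs) / len(token_probs)
    return math.exp(nll)
```

A model that assigns uniform probability 1/4 to every token yields perplexity 4, so comparing perplexities across sociodemographically-conditioned paraphrase sets gives a fine-grained signal of which styles a model handles less fluently.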
|
2501.08279
|
SmartEraser: Remove Anything from Images using Masked-Region Guidance
|
cs.CV
|
Object removal has so far been dominated by the mask-and-inpaint paradigm,
where the masked region is excluded from the input, leaving models relying on
unmasked areas to inpaint the missing region. However, this approach lacks
contextual information for the masked area, often resulting in unstable
performance. In this work, we introduce SmartEraser, built with a new removing
paradigm called Masked-Region Guidance. This paradigm retains the masked region
in the input, using it as guidance for the removal process. It offers several
distinct advantages: (a) it guides the model to accurately identify the object
to be removed, preventing its regeneration in the output; (b) since the user
mask often extends beyond the object itself, it aids in preserving the
surrounding context in the final result. Leveraging this new paradigm, we
present Syn4Removal, a large-scale object removal dataset, where instance
segmentation data is used to copy and paste objects onto images as removal
targets, with the original images serving as ground truths. Experimental
results demonstrate that SmartEraser significantly outperforms existing
methods, achieving superior performance in object removal, especially in
complex scenes with intricate compositions.
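The Syn4Removal construction described above, pasting segmented objects onto clean images so the clean image serves as ground truth, can be sketched as a simple masked copy. This is an assumed simplification (the helper name `paste_object` and the 2D-list pixel representation are illustrative; the real pipeline works on full-color images with blending and placement logic):

```python
def paste_object(image, obj, mask, top, left):
    """Paste an object crop onto a clean image wherever its instance-
    segmentation mask is 1, at offset (top, left). The untouched input
    image is the removal ground truth; the returned composite is the
    training input containing the removal target."""
    out = [row[:] for row in image]  # copy so the ground truth survives
    for r, (orow, mrow) in enumerate(zip(obj, mask)):
        for c, (pix, m) in enumerate(zip(orow, mrow)):
            if m and 0 <= top + r < len(out) and 0 <= left + c < len(out[0]):
                out[top + r][left + c] = pix
    return out
```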
|
2501.08281
|
Decoding Interpretable Logic Rules from Neural Networks
|
cs.LG
|
As deep neural networks continue to excel across various domains, their
black-box nature has raised concerns about transparency and trust. In
particular, interpretability has become increasingly essential for applications
that demand high safety and knowledge rigor, such as drug discovery, autonomous
driving, and genomics. However, progress in understanding even the simplest
deep neural networks - such as fully connected networks - has been limited,
despite their role as foundational elements in state-of-the-art models like
ResNet and Transformer. In this paper, we address this challenge by introducing
NeuroLogic, a novel approach for decoding interpretable logic rules from neural
networks. NeuroLogic leverages neural activation patterns to capture the
model's critical decision-making processes, translating them into logical rules
represented by hidden predicates. Thanks to its flexible design in the
grounding phase, NeuroLogic can be adapted to a wide range of neural networks.
For simple fully connected neural networks, hidden predicates can be grounded
in certain split patterns of original input features to derive
decision-tree-like rules. For large, complex vision neural networks, NeuroLogic
grounds hidden predicates into high-level visual concepts that are
understandable to humans. Our empirical study demonstrates that NeuroLogic can
extract global and interpretable rules from state-of-the-art models such as
ResNet, a task at which existing work struggles. We believe NeuroLogic can help
pave the way for understanding the black-box nature of neural networks.
|
2501.08282
|
LLaVA-ST: A Multimodal Large Language Model for Fine-Grained
Spatial-Temporal Understanding
|
cs.CV
|
Recent advancements in multimodal large language models (MLLMs) have shown
promising results, yet existing approaches struggle to effectively handle both
temporal and spatial localization simultaneously. This challenge stems from two
key issues: first, incorporating spatial-temporal localization introduces a
vast number of coordinate combinations, complicating the alignment of
linguistic and visual coordinate representations; second, encoding fine-grained
temporal and spatial information during video feature compression is inherently
difficult. To address these issues, we propose LLaVA-ST, an MLLM for
fine-grained spatial-temporal multimodal understanding. In LLaVA-ST, we propose
Language-Aligned Positional Embedding, which embeds the textual coordinate
special token into the visual space, simplifying the alignment of fine-grained
spatial-temporal correspondences. Additionally, we design the Spatial-Temporal
Packer, which decouples the feature compression of temporal and spatial
resolutions into two distinct point-to-region attention processing streams.
Furthermore, we propose the ST-Align dataset with 4.3M training samples for
fine-grained spatial-temporal multimodal understanding. With ST-Align, we
present a progressive training pipeline that aligns the visual and textual
features through sequential coarse-to-fine stages. Additionally, we introduce
an ST-Align benchmark to evaluate spatial-temporal interleaved fine-grained
understanding tasks, including Spatial-Temporal Video Grounding (STVG),
Event Localization and Captioning (ELC), and Spatial Video Grounding (SVG).
LLaVA-ST achieves outstanding performance on 11 benchmarks requiring
fine-grained temporal, spatial, or spatial-temporal interleaved multimodal
understanding. Our code, data, and benchmark will be released at
https://github.com/appletea233/LLaVA-ST .
|