| id | title | categories | abstract |
|---|---|---|---|
2502.03891
|
Counterfactual Query Rewriting to Use Historical Relevance Feedback
|
cs.IR
|
When a retrieval system receives a query it has encountered before, previous
relevance feedback, such as clicks or explicit judgments, can help to improve
retrieval results. However, the content of a previously relevant document may
have changed, or the document might not be available anymore. Despite this
evolved corpus, we counterfactually use these previously relevant documents as
relevance signals. In this paper, we propose approaches to rewrite user queries
and compare them against a system that directly uses the previous qrels for the
ranking. We expand queries with terms extracted from the previously relevant
documents or derive so-called keyqueries that rank the previously relevant
documents to the top of the current corpus. Our evaluation in the CLEF LongEval
scenario shows that rewriting queries with historical relevance feedback
improves the retrieval effectiveness and even outperforms computationally
expensive transformer-based approaches.
|
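The query expansion approach described in the abstract above can be sketched roughly as follows. This is a minimal illustration, not the authors' exact method: the term-frequency scoring, stop list, and cutoff `k` are all assumptions.

```python
import re
from collections import Counter

# Hypothetical stop list; a real system would use a standard one.
STOP = {"the", "a", "an", "of", "and", "to", "in", "is", "for", "with", "are"}

def expand_query(query, relevant_docs, k=3):
    """Expand a query with the k most frequent non-stopword terms
    found in previously relevant documents (simple TF scoring)."""
    query_terms = set(query.lower().split())
    counts = Counter()
    for doc in relevant_docs:
        for term in re.findall(r"[a-z]+", doc.lower()):
            if term not in STOP and term not in query_terms:
                counts[term] += 1
    expansion = [t for t, _ in counts.most_common(k)]
    return query + " " + " ".join(expansion)

docs = ["neural ranking models for web search",
        "ranking with neural networks improves search"]
print(expand_query("web search", docs))
```

The keyquery variant mentioned in the abstract would instead search for a query that ranks the previously relevant documents at the top of the current corpus, which requires access to the retrieval system itself.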
2502.03895
|
Rule-Based Modeling of Low-Dimensional Data with PCA and Binary Particle
Swarm Optimization (BPSO) in ANFIS
|
cs.CV
|
Fuzzy rule-based systems interpret data in low-dimensional domains, providing
transparency and interpretability. In contrast, deep learning excels in complex
tasks like image and speech recognition but is prone to overfitting in sparse,
unstructured, or low-dimensional data. This interpretability is crucial in
fields like healthcare and finance. Traditional rule-based systems, especially
ANFIS with grid partitioning, suffer from exponential rule growth as
dimensionality increases. We propose a strategic rule-reduction model that
applies Principal Component Analysis (PCA) on normalized firing strengths to
obtain linearly uncorrelated components. Binary Particle Swarm Optimization
(BPSO) selectively refines these components, significantly reducing the number
of rules while preserving precision in decision-making. A custom parameter
update mechanism fine-tunes specific ANFIS layers by dynamically adjusting BPSO
parameters, avoiding local minima. We validated our approach on standard UCI
respiratory datasets, KEEL classification and regression datasets, and a real-world
ischemic stroke dataset, demonstrating adaptability and practicality. Results
indicate fewer rules, shorter training, and high accuracy, underscoring the
method's effectiveness for low-dimensional interpretability and complex data
scenarios. This synergy of fuzzy logic and optimization fosters robust
solutions. Our method contributes a powerful framework for interpretable AI in
multiple domains, addressing dimensionality while ensuring a compact rule base.
|
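The core reduction step named in the abstract above, PCA on normalized firing strengths, can be sketched as follows. The sample data, the number of rules, and the number of retained components are illustrative assumptions; the paper additionally refines the components with BPSO, which is omitted here.

```python
import numpy as np

# Toy stand-in for rule firing strengths: 200 samples, 32 fuzzy rules.
rng = np.random.default_rng(0)
firing = rng.random((200, 32))
firing /= firing.sum(axis=1, keepdims=True)   # normalize per sample

# PCA via eigendecomposition of the covariance matrix: project the
# normalized firing strengths onto linearly uncorrelated components.
centered = firing - firing.mean(axis=0)
cov = centered.T @ centered / (len(centered) - 1)
eigval, eigvec = np.linalg.eigh(cov)
order = np.argsort(eigval)[::-1]              # descending variance
components = eigvec[:, order[:5]]             # keep 5 components (assumed)
reduced = centered @ components               # shape (200, 5)
print(reduced.shape)
```

Only these few uncorrelated components, rather than all 32 rule outputs, would then feed the consequent layer, which is what curbs the exponential rule growth of grid partitioning.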
2502.03897
|
UniForm: A Unified Diffusion Transformer for Audio-Video Generation
|
cs.MM cs.AI cs.CV cs.SD eess.AS
|
As a form of natural multimodal content, audible video delivers an immersive sensory
experience. Consequently, audio-video generation systems have substantial
potential. However, existing diffusion-based studies mainly employ relatively
independent modules for generating each modality, leaving shared-weight
generative modules unexplored. This approach may under-use the intrinsic
correlations between audio and visual modalities, potentially resulting in
sub-optimal generation quality. To address this, we propose UniForm, a unified
diffusion transformer designed to enhance cross-modal consistency. By
concatenating auditory and visual information, UniForm learns to generate audio
and video simultaneously within a unified latent space, facilitating the
creation of high-quality and well-aligned audio-visual pairs. Extensive
experiments demonstrate the superior performance of our method in joint
audio-video generation, audio-guided video generation, and video-guided audio
generation tasks. Our demos are available at https://uniform-t2av.github.io/.
|
2502.03900
|
How to introduce an initial crack in phase field simulations to
accurately predict the linear elastic fracture propagation threshold?
|
cs.CE
|
Variational phase field fracture models are now widely used to simulate crack
propagation in structures. A critical aspect of these simulations is the
correct determination of the propagation threshold of pre-existing cracks, as
this threshold depends strongly on how the initial cracks are implemented. While prior studies
briefly discuss initial crack implementation techniques, we present here a
systematic investigation. Various techniques to introduce initial cracks in
phase field fracture simulations are tested, from explicit meshing of the crack
to its replacement by a fully damaged phase field, including different variants
for the boundary conditions. Our focus here is on phase field models aiming to
approximate, in the $\Gamma$-convergence limit, Griffith quasi-static
propagation in the framework of Linear Elastic Fracture Mechanics. Therefore, a
sharp crack model from classic linear elastic fracture mechanics based on
Griffith criterion is the reference in this work. To assess the different
techniques to introduce initial cracks, we rely on path-following methods to
compute the sharp crack and the phase field smeared crack solutions. The
underlying idea is that path-following ensures staying at equilibrium at each
instant so that any difference between phase field and sharp crack models can
be attributed to numerical artifacts. Thus, by comparing the results from both
models, we can provide practical recommendations for reliably incorporating
initial cracks in phase field fracture simulations. The comparison shows that
an improper initial crack implementation often requires the smeared crack to
transition to a one-element-wide phase band to adequately represent a
displacement jump along a crack. This transition increases the energy required
to propagate the crack, leading to a significant overshoot in the
force-displacement response. The take-home message is that to predict the
propagation threshold accurately and avoid artificial toughening, the crack
must be initialized either by setting the phase field to its fully damaged state
over a one-element-wide band or by meshing the crack explicitly as a one-element-wide
slit and imposing the fully cracked state on the crack surface.
|
2502.03901
|
LeAP: Consistent multi-domain 3D labeling using Foundation Models
|
cs.CV cs.RO
|
Availability of datasets is a strong driver for research on 3D semantic
understanding, and whilst obtaining unlabeled 3D point cloud data is
straightforward, manually annotating this data with semantic labels is
time-consuming and costly. Recently, Vision Foundation Models (VFMs) enable
open-set semantic segmentation on camera images, potentially aiding automatic
labeling. However, VFMs for 3D data have been limited to adaptations of 2D
models, which can introduce inconsistencies to 3D labels. This work introduces
Label Any Pointcloud (LeAP), leveraging 2D VFMs to automatically label 3D data
with any set of classes in any kind of application whilst ensuring label
consistency. Using a Bayesian update, point labels are combined into voxels to
improve spatio-temporal consistency. A novel 3D Consistency Network (3D-CN)
exploits 3D information to further improve label quality. Through various
experiments, we show that our method can generate high-quality 3D semantic
labels across diverse fields without any manual labeling. Further, models
adapted to new domains using our labels show up to a 34.2 mIoU increase in
semantic segmentation tasks.
|
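The Bayesian update mentioned in the abstract above, which combines per-point labels into voxels, can be sketched as follows. The exact update rule is not specified in the abstract, so this is a hypothetical version: per-point class probabilities are fused by multiplying likelihoods and renormalizing.

```python
import numpy as np

def fuse_voxel_labels(point_probs):
    """Fuse per-point class probabilities (n_points, n_classes) for one
    voxel into a single posterior by multiplying likelihoods in log
    space (for numerical stability) and renormalizing."""
    log_post = np.sum(np.log(point_probs + 1e-12), axis=0)
    log_post -= log_post.max()
    post = np.exp(log_post)
    return post / post.sum()

# Three points falling in the same voxel, three candidate classes.
points = np.array([[0.7, 0.2, 0.1],
                   [0.6, 0.3, 0.1],
                   [0.5, 0.4, 0.1]])
post = fuse_voxel_labels(points)
print(post.argmax())  # class 0 dominates after fusion
```

Fusing across points (and, per the abstract, across time) suppresses the frame-to-frame inconsistencies that a 2D model projected into 3D would otherwise introduce.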
2502.03902
|
Geometric Stabilization of Virtual Nonlinear Nonholonomic Constraints
|
eess.SY cs.SY math.DS math.OC
|
In this paper, we address the problem of stabilizing a system around a
desired manifold determined by virtual nonlinear nonholonomic constraints.
Virtual constraints are relationships imposed on a control system that are
rendered invariant through feedback control. Virtual nonholonomic constraints
represent a specific class of virtual constraints that depend on the system's
velocities in addition to its configurations. We derive a control law under
which a mechanical control system achieves exponential convergence to the
virtual constraint submanifold, while rendering it control-invariant. The
proposed controller's performance is validated through simulation results in
two distinct applications: flocking motion in multi-agent systems and the
control of an unmanned surface vehicle (USV) navigating a stream.
|
2502.03904
|
A Gaussian-Sinc Pulse Shaping Filter for Zak-OTFS
|
cs.IT eess.SP math.IT
|
The choice of delay-Doppler domain (DD) pulse shaping filter plays an
important role in determining the performance of Zak-OTFS. The sinc filter has
good main lobe characteristics (with nulls at information grid points), which is
good for equalization/detection, but has high side lobes, which are detrimental
for input-output (I/O) relation estimation. In contrast, the Gaussian filter is
highly localized with very low side lobes, which is good for I/O relation
estimation, but has poor main lobe characteristics, which is not good for
equalization/detection. In this paper, we propose a new filter, termed the {\em
Gaussian-sinc (GS) filter}, which inherits the complementary strengths of both
Gaussian and sinc filters. The proposed filter does not incur time or bandwidth
expansion. We derive closed-form expressions for the I/O relation and noise
covariance of Zak-OTFS with the proposed GS filter. We evaluate the Zak-OTFS
performance for different pulse shaping filters with I/O relation estimated
using exclusive and embedded pilots. Our results show that the proposed GS
filter achieves better bit error rate (BER) performance compared to other
filters reported in the literature. For example, with model-free I/O relation
estimation using embedded pilot and 8-QAM, the proposed GS filter achieves an
SNR gain of about 4 dB at $10^{-2}$ uncoded BER compared to Gaussian and sinc
filters, and the SNR gain becomes more than 6 dB at a coded BER of $10^{-4}$
with rate-1/2 coding.
|
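The complementary behaviour described in the abstract above can be illustrated by forming a pulse as a sinc multiplied by a Gaussian envelope: the sinc's nulls at the grid points survive, while the envelope crushes the far side lobes. The exact parameterization in the paper may differ; the envelope width `sigma` below is an assumption.

```python
import numpy as np

t = np.linspace(-8, 8, 1601)                 # grid step 0.01
sigma = 2.0                                  # assumed envelope width
sinc = np.sinc(t)                            # sin(pi t)/(pi t), nulls at integers
gs = sinc * np.exp(-t**2 / (2 * sigma**2))   # Gaussian-windowed sinc

# The null structure is preserved, but a far side lobe (here at t = 4.5)
# is attenuated by the Gaussian factor exp(-4.5^2 / (2 sigma^2)) ~ 0.08.
i = np.argmin(np.abs(t - 4.5))
print(abs(gs[i]) < 0.1 * abs(sinc[i]))
```

This is only a qualitative picture of why such a filter can serve both equalization (main-lobe nulls) and I/O relation estimation (low side lobes) without time or bandwidth expansion.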
2502.03907
|
No Free Lunch in Annotation either: An objective evaluation of
foundation models for streamlining annotation in animal tracking
|
cs.CV
|
We analyze the capabilities of foundation models addressing the tedious task
of generating annotations for animal tracking. Annotating a large amount of
data is vital and can be a make-or-break factor for the robustness of a
tracking model. Robustness is particularly crucial in animal tracking, as
accurate tracking over long time horizons is essential for capturing the
behavior of animals. However, generating additional annotations using
foundation models can be counterproductive, as the quality of the annotations
is just as important. Poorly annotated data can introduce noise and
inaccuracies, ultimately compromising the performance and accuracy of the
trained model. Over-reliance on automated annotations without ensuring
precision can lead to diminished results, making careful oversight and quality
control essential in the annotation process. Ultimately, we demonstrate that a
thoughtful combination of automated annotations and manually annotated data is
a valuable strategy, yielding an IDF1 score of 80.8, compared with an IDF1
score of 65.6 for blind usage of SAM2 video.
|
2502.03909
|
Technical Report: Generating the WEB-IDS23 Dataset
|
cs.NI cs.CR cs.LG
|
Anomaly-based Network Intrusion Detection Systems (NIDS) require correctly
labelled, representative and diverse datasets for an accurate evaluation and
development. However, several widely used datasets do not include labels which
are fine-grained enough and, together with small sample sizes, can lead to
overfitting issues that also remain undetected when using test data.
Additionally, the cybersecurity sector is evolving fast, and new attack
mechanisms require the continuous creation of up-to-date datasets. To address
these limitations, we developed a modular traffic generator that can simulate a
wide variety of benign and malicious traffic. It incorporates multiple
protocols, variability through randomization techniques and can produce attacks
alongside corresponding benign traffic, as occurs in real-world scenarios. Using
the traffic generator, we create a dataset capturing over 12 million samples
with 82 flow-level features and 21 fine-grained labels. Additionally, we
include several web attack types which are often underrepresented in other
datasets.
|
2502.03914
|
A Flexible FBG-Based Contact Force Sensor for Robotic Gripping Systems
|
cs.RO cs.SY eess.SY
|
Soft robotic grippers demonstrate great potential for gently and safely
handling objects; however, their full potential for executing precise and
secure grasping has been limited by the lack of integrated sensors, leading to
problems such as slippage and excessive force exertion. To address this
challenge, we present a small and highly sensitive Fiber Bragg Grating-based
force sensor designed for accurate contact force measurement. The flexible
force sensor comprises a 3D-printed TPU casing with a small bump and uvula
structure, a dual FBG array, and a protective tube. A series of tests have been
conducted to evaluate the effectiveness of the proposed force sensor, including
force calibration, repeatability test, hysteresis study, force measurement
comparison, and temperature calibration and compensation tests. The results
demonstrated good repeatability, with a force measurement range of 4.69 N, a
high sensitivity of approximately 1169.04 pm/N, a root mean square error (RMSE)
of 0.12 N, and a maximum hysteresis of 4.83%. When compared to a commercial
load cell, the sensor showed a percentage error of 2.56% and an RMSE of 0.14 N.
In addition, the proposed sensor demonstrated effective temperature
compensation, with a force RMSE of 0.01 N over a temperature change of 11
degrees Celsius. The sensor was integrated with a soft grow-and-twine gripper to
monitor interaction forces between different objects and the robotic gripper.
Closed-loop force control was applied during automated pick-and-place tasks and
significantly improved gripping stability, as demonstrated in tests. This force
sensor can be used across manufacturing, agriculture, healthcare (like
prosthetic hands), logistics, and packaging, to provide situation awareness and
higher operational efficiency.
|
2502.03916
|
Experiments with Large Language Models on Retrieval-Augmented Generation
for Closed-Source Simulation Software
|
cs.CL cs.AI
|
Large Language Models (LLMs) are increasingly helpful in text generation,
even writing code in programming languages based on user prompts written in
natural language. They are even applied to generate simulation models for
multibody systems from natural language. Research results suggest that LLMs
surpass the mere replication of existing code examples, where some LLMs have
been trained on an open-source multibody simulation code. However, for
closed-source simulation software, such results are not to be expected, as its
ideas and concepts might differ from publicly available ones. LLMs can
hallucinate for knowledge-intensive tasks, such as model creation, which can
lead to wrong responses. This is especially the case for closed-source
simulation software that is unknown to the LLM. The same applies to other internal knowledge
kept private to protect intellectual property or data privacy. The
Retrieval-Augmented Generation (RAG) approach might yield a solution for these
knowledge-intensive tasks. This paper explores the application of RAG to
closed-source simulation software and presents first experiments. After a brief
introduction to LLMs, the RAG approach, and the simulation method applied by
the closed-source simulation software, several examples are provided to test
LLMs' knowledge of the simulation software and the creation of simulation
models using two RAG systems. The examples show promising results indicating
the benefits of applying RAG systems to closed-source simulation software,
helping to access their knowledge. Nevertheless, they also reveal gaps in the
applied information and open questions for further research.
|
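The retrieval step of the RAG approach discussed in the abstract above can be sketched minimally as follows. The documentation chunks are invented for illustration, and a bag-of-words vector stands in for the learned embeddings a real RAG system would use.

```python
import numpy as np

# Hypothetical documentation chunks from a simulation manual.
chunks = ["The solver uses an implicit time-stepping scheme.",
          "Contact elements are defined via the CONTACT keyword.",
          "Mass matrices are lumped by default."]

vocab = sorted({w.strip(".?").lower() for c in chunks for w in c.split()})

def vec(text):
    """Bag-of-words count vector over the chunk vocabulary."""
    words = [w.strip(".?").lower() for w in text.split()]
    return np.array([words.count(v) for v in vocab], dtype=float)

def retrieve(query):
    """Return the chunk with the highest cosine similarity to the query."""
    q = vec(query)
    sims = [q @ vec(c) / (np.linalg.norm(q) * np.linalg.norm(vec(c)) + 1e-9)
            for c in chunks]
    return chunks[int(np.argmax(sims))]

print(retrieve("how do I define contact elements?"))
```

The retrieved chunk would then be prepended to the LLM prompt, grounding the model in the closed-source software's own documentation instead of its pre-training data.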
2502.03917
|
On the existence of strong functional observer
|
eess.SY cs.SY
|
For arbitrary linear time-invariant systems, the existence of a strong
functional observer is investigated. Such an observer determines, from the
available measurements on the plant, an estimate of a function of the state and
the input. This estimate converges irrespective of the initial state and input.
This formulation encompasses the cases of observer existence for known or unknown
inputs and generalizes the state of the art. Necessary and sufficient conditions for
such an existence are proposed, in the framework of state-space representation.
These conditions are based on functional detectability property and its
generalizations for arbitrary input, which include considerations on
convergence of the estimation, irrespective of the initial state and the input.
Known results on state detectability, input reconstruction or functional
detectability are retrieved by particularizing the proposed conditions.
|
2502.03918
|
Adaptation of Task Goal States from Prior Knowledge
|
cs.RO cs.AI
|
This paper presents a framework to define a task with freedom and variability
in its goal state. A robot could use this to observe the execution of a task
and target a different goal from the observed one; a goal that is still
compatible with the task description but would be easier for the robot to
execute. We define the model of an environment state and an environment
variation, and present experiments on how to interactively create the variation
from a single task demonstration and how to use this variation to create an
execution plan for bringing any environment into the goal state.
|
2502.03919
|
Blackwell's Approachability with Approximation Algorithms
|
math.OC cs.LG
|
We revisit Blackwell's celebrated approachability problem which considers a
repeated vector-valued game between a player and an adversary. Motivated by
settings in which the action set of the player or adversary (or both) is
difficult to optimize over, for instance when it corresponds to the set of all
possible solutions to some NP-Hard optimization problem, we ask what can the
player guarantee \textit{efficiently}, when only having access to these sets
via approximation algorithms with ratios $\alpha_{\mathcal{X}} \geq 1$ and $1 \geq
\alpha_{\mathcal{Y}} > 0$, respectively. Assuming the player has monotone preferences,
in the sense that he does not prefer a vector-valued loss $\ell_1$ over
$\ell_2$ if $\ell_2 \leq \ell_1$, we establish that given a Blackwell instance
with an approachable target set $S$, the downward closure of the
appropriately-scaled set $\alpha_{\mathcal{X}}\alpha_{\mathcal{Y}}^{-1}S$ is
\textit{efficiently} approachable with optimal rate. In case only the player's
or adversary's set is equipped with an approximation algorithm, we give simpler
and more efficient algorithms.
|
2502.03930
|
DiTAR: Diffusion Transformer Autoregressive Modeling for Speech
Generation
|
eess.AS cs.AI cs.CL cs.LG cs.SD
|
Several recent studies have attempted to autoregressively generate continuous
speech representations without discrete speech tokens by combining diffusion
and autoregressive models, yet they often face challenges with excessive
computational loads or suboptimal outcomes. In this work, we propose Diffusion
Transformer Autoregressive Modeling (DiTAR), a patch-based autoregressive
framework combining a language model with a diffusion transformer. This
approach significantly enhances the efficacy of autoregressive models for
continuous tokens and reduces computational demands. DiTAR utilizes a
divide-and-conquer strategy for patch generation, where the language model
processes aggregated patch embeddings and the diffusion transformer
subsequently generates the next patch based on the output of the language
model. For inference, we propose defining temperature as the time point of
introducing noise during the reverse diffusion ODE to balance diversity and
determinism. We also show in the extensive scaling analysis that DiTAR has
superb scalability. In zero-shot speech generation, DiTAR achieves
state-of-the-art performance in robustness, speaker similarity, and
naturalness.
|
2502.03933
|
HEP-JEPA: A foundation model for collider physics using joint embedding
predictive architecture
|
cs.LG hep-ex hep-ph
|
We present a transformer architecture-based foundation model for tasks at
high-energy particle colliders such as the Large Hadron Collider. We train the
model to classify jets using a self-supervised strategy inspired by the Joint
Embedding Predictive Architecture. We use the JetClass dataset containing 100M
jets of various known particles to pre-train the model with a data-centric
approach -- the model uses a fraction of the jet constituents as the context to
predict the embeddings of the unseen target constituents. Our pre-trained model
fares well with other datasets for standard classification benchmark tasks. We
test our model on two additional downstream tasks: top tagging and
differentiating light-quark jets from gluon jets. We also evaluate our model
with task-specific metrics and baselines and compare it with state-of-the-art
models in high-energy physics. Project site: https://hep-jepa.github.io/
|
2502.03935
|
Thermal Model Calibration of a Squirrel-Cage Induction Machine
|
cs.CE
|
Accurate and efficient thermal simulations of induction machines are
indispensable for detecting thermal hot spots and hence avoiding potential
material failure in an early design stage. A goal is the better utilization of
the machines with reduced safety margins due to a better knowledge of the
critical conditions. In this work, the parameters of a two-dimensional
induction machine model are calibrated according to evidence from measurements,
by solving an inverse field problem. The set of parameters comprises material
parameters as well as parameters that model three-dimensional effects. This
allows physical effects to be considered without explicit knowledge of their
quantities. First, the accuracy of the approach is studied using an academic
example in combination with synthetic data. Afterwards, it is successfully
applied to a realistic induction machine model.
|
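The inverse field problem mentioned in the abstract above can be sketched in miniature: choose model parameters so that the simulated response matches measurements. The exponential forward model and the single parameter `k` below are purely illustrative stand-ins; the paper calibrates a two-dimensional induction machine model with many parameters.

```python
import numpy as np
from scipy.optimize import least_squares

def forward(k, t):
    """Stand-in forward model (toy thermal response, not the paper's)."""
    return np.exp(-k * t)

t = np.linspace(0, 5, 50)
true_k = 0.8                                  # assumed "true" parameter
rng = np.random.default_rng(2)
measured = forward(true_k, t) + 0.01 * rng.standard_normal(t.size)

# Calibrate k by minimizing the misfit between simulation and measurement.
fit = least_squares(lambda p: forward(p[0], t) - measured, x0=[0.1])
print(round(fit.x[0], 1))
```

The study's academic example with synthetic data plays exactly this role: the "true" parameter is known, so the accuracy of the recovered value can be checked before moving to real measurements.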
2502.03937
|
Quantifying Correlations of Machine Learning Models
|
cs.LG cs.NA math.NA
|
Machine Learning models are being extensively used in safety critical
applications where errors from these models could cause harm to the user. Such
risks are amplified when multiple machine learning models, which are deployed
concurrently, interact and make errors simultaneously. This paper explores
three scenarios where error correlations between multiple models arise,
resulting in such aggregated risks. Using real-world data, we simulate these
scenarios and quantify the correlations in errors of different models. Our
findings indicate that aggregated risks are substantial, particularly when
models share similar algorithms, training datasets, or foundational models.
Overall, we observe that correlations across models are pervasive and likely to
intensify with increased reliance on foundational models and widely used public
datasets, highlighting the need for effective mitigation strategies to address
these challenges.
|
2502.03938
|
Unravelling Causal Genetic Biomarkers of Alzheimer's Disease via Neuron
to Gene-token Backtracking in Neural Architecture: A Groundbreaking
Reverse-Gene-Finder Approach
|
cs.LG
|
Alzheimer's Disease (AD) affects over 55 million people globally, yet the key
genetic contributors remain poorly understood. Leveraging recent advancements
in genomic foundation models, we present the innovative Reverse-Gene-Finder
technology, a ground-breaking neuron-to-gene-token backtracking approach in a
neural network architecture to elucidate the novel causal genetic biomarkers
driving AD onset. Reverse-Gene-Finder comprises three key innovations. Firstly,
we exploit the observation that genes with the highest probability of causing
AD, defined as the most causal genes (MCGs), must have the highest probability
of activating those neurons with the highest probability of causing AD, defined
as the most causal neurons (MCNs). Secondly, we utilize a gene token
representation at the input layer to allow each gene (known or novel to AD) to
be represented as a discrete and unique entity in the input space. Lastly, in
contrast to the existing neural network architectures, which track neuron
activations from the input layer to the output layer in a feed-forward manner,
we develop an innovative backtracking method to track backwards from the MCNs
to the input layer, identifying the Most Causal Tokens (MCTs) and the
corresponding MCGs. Reverse-Gene-Finder is highly interpretable, generalizable,
and adaptable, providing a promising avenue for application in other disease
scenarios.
|
2502.03943
|
Multimodal Data-Driven Classification of Mental Disorders: A
Comprehensive Approach to Diagnosing Depression, Anxiety, and Schizophrenia
|
cs.LG
|
This study investigates the potential of multimodal data integration, which
combines electroencephalogram (EEG) data with sociodemographic characteristics
like age, sex, education, and intelligence quotient (IQ), to diagnose mental
diseases like schizophrenia, depression, and anxiety. Using Apache Spark and
convolutional neural networks (CNNs), a data-driven classification pipeline has
been developed for a big data environment to effectively analyze massive
datasets. In order to evaluate brain activity and connection patterns
associated with mental disorders, EEG parameters such as power spectral density
(PSD) and coherence are examined. The importance of coherence features is
highlighted by comparative analysis, which shows significant improvement in
classification accuracy and robustness. This study emphasizes the significance
of holistic approaches for efficient diagnostic tools by integrating a variety
of data sources. The findings open the door for creative, data-driven
approaches to treating psychiatric diseases by demonstrating the potential of
utilizing big data, sophisticated deep learning methods, and multimodal
datasets to enhance the precision, usability, and comprehension of mental
health diagnostics.
|
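The two EEG features named in the abstract above, power spectral density (PSD) and coherence, can be computed with standard signal-processing tools. The synthetic two-channel signal, sampling rate, and Welch segment length below are assumptions for illustration.

```python
import numpy as np
from scipy import signal

fs = 256                                      # assumed sampling rate, Hz
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(1)
# Two channels sharing a 10 Hz (alpha-band) component plus independent noise.
ch1 = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)
ch2 = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)

# PSD via Welch's method, and magnitude-squared coherence between channels.
freqs, psd = signal.welch(ch1, fs=fs, nperseg=512)
f_coh, coh = signal.coherence(ch1, ch2, fs=fs, nperseg=512)

alpha_peak = freqs[np.argmax(psd)]
print(round(alpha_peak))                      # the shared 10 Hz component dominates
```

In the study's setting, such per-band PSD values and inter-channel coherence values are the inputs from which the CNN learns to separate diagnostic groups.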
2502.03944
|
Exact Covariance Characterization for Controlled Linear Systems subject
to Stochastic Parametric and Additive Uncertainties
|
eess.SY cs.SY
|
This work addresses the exact characterization of the covariance dynamics
related to linear discrete-time systems subject to both additive and parametric
stochastic uncertainties that are potentially unbounded. The derived exact
representation makes it possible to understand how the covariance of the
multiplicative parametric uncertainties affects the stability of the state
covariance dynamics through a transformation of the parameter covariance
matrix, and therefore to address the problem of control design for the state
covariance dynamics in this context. Numerical results assess this new characterization by
comparing it to the empirical covariance and illustrating the control design
problem.
|
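As a simple illustrative special case of the setting in the abstract above (the single scalar multiplicative parameter below is an assumption for exposition, not the paper's general model), consider $x_{k+1} = (A + \delta_k B)x_k + w_k$ with i.i.d. zero-mean $\delta_k$, $\mathbb{E}[\delta_k^2] = \sigma^2$, a zero-mean state, and additive noise covariance $W$. The state covariance then evolves exactly as

$$\Sigma_{k+1} = A\,\Sigma_k A^\top + \sigma^2 B\,\Sigma_k B^\top + W,$$

since the cross terms vanish by $\mathbb{E}[\delta_k] = 0$ and the independence of $\delta_k$ from $x_k$. The multiplicative uncertainty thus contributes the extra term $\sigma^2 B\,\Sigma_k B^\top$, which can destabilize the covariance dynamics even when $A$ is Schur stable.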
2502.03945
|
Afrispeech-Dialog: A Benchmark Dataset for Spontaneous English
Conversations in Healthcare and Beyond
|
cs.CL
|
Speech technologies are transforming interactions across various sectors,
from healthcare to call centers and robots, yet their performance on
African-accented conversations remains underexplored. We introduce
Afrispeech-Dialog, a benchmark dataset of 50 simulated medical and non-medical
African-accented English conversations, designed to evaluate automatic speech
recognition (ASR) and related technologies. We assess state-of-the-art (SOTA)
speaker diarization and ASR systems on long-form, accented speech, compare
their performance with native accents, and find a performance degradation of
more than 10%. Additionally, we explore medical conversation summarization
capabilities of large language models (LLMs) to demonstrate the impact of ASR
errors on downstream medical summaries, providing insights into the challenges
and opportunities for speech technologies in the Global South. Our work
highlights the need for more inclusive datasets to advance conversational AI in
low-resource settings.
|
2502.03946
|
CleanSurvival: Automated data preprocessing for time-to-event models
using reinforcement learning
|
cs.LG
|
Data preprocessing is a critical yet frequently neglected aspect of machine
learning, despite its potentially significant impact on model performance.
While automated machine learning pipelines are
starting to recognize and integrate data preprocessing into their solutions for
classification and regression tasks, this integration is lacking for more
specialized tasks like survival or time-to-event models. As a result, survival
analysis not only faces the general challenges of data preprocessing but also
suffers from the lack of tailored, automated solutions in this area.
To address this gap, this paper presents 'CleanSurvival', a
reinforcement-learning-based solution for optimizing preprocessing pipelines,
extended specifically for survival analysis. The framework can handle
continuous and categorical variables, using Q-learning to select which
combination of data imputation, outlier detection and feature extraction
techniques achieves optimal performance for a Cox, random forest, neural
network or user-supplied time-to-event model. The package is available on
GitHub: https://github.com/datasciapps/CleanSurvival
Experimental benchmarks on real-world datasets show that the Q-learning-based
data preprocessing results in superior predictive performance to standard
approaches, finding such a model up to 10 times faster than undirected random
grid search. Furthermore, a simulation study demonstrates the effectiveness in
different types and levels of missingness and noise in the data.
|
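The search over preprocessing combinations described in the abstract above can be sketched in a drastically simplified, bandit-style form. The reward table below is hypothetical; the real system scores pipelines by the performance of a downstream time-to-event model (e.g. a Cox model's concordance), and its Q-learning operates over sequential pipeline states rather than a single choice.

```python
# Candidate preprocessing choices (a tiny assumed action space).
imputers = ["mean", "knn", "drop"]
outlier_steps = ["none", "iqr"]
actions = [(i, o) for i in imputers for o in outlier_steps]

# Hypothetical downstream-model scores for each pipeline.
REWARD = {("knn", "iqr"): 0.80, ("knn", "none"): 0.75,
          ("mean", "iqr"): 0.70, ("mean", "none"): 0.65,
          ("drop", "iqr"): 0.60, ("drop", "none"): 0.55}

q = {a: 0.0 for a in actions}
alpha = 0.5                                   # learning rate
for sweep in range(20):                       # round-robin evaluation
    for a in actions:
        q[a] += alpha * (REWARD[a] - q[a])    # incremental value update

best = max(q, key=q.get)
print(best)  # ('knn', 'iqr')
```

The point of learning values rather than exhaustively grid-searching is that, with many more actions and noisy rewards, the search can concentrate evaluations on promising pipelines, which is where the reported 10x speedup over undirected random grid search comes from.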
2502.03948
|
Enhancing Online Learning Efficiency Through Heterogeneous Resource
Integration with a Multi-Agent RAG System
|
cs.AI cs.CL cs.MA
|
Efficient online learning requires seamless access to diverse resources such
as videos, code repositories, documentation, and general web content. This
poster paper introduces early-stage work on a Multi-Agent Retrieval-Augmented
Generation (RAG) System designed to enhance learning efficiency by integrating
these heterogeneous resources. Using specialized agents tailored for specific
resource types (e.g., YouTube tutorials, GitHub repositories, documentation
websites, and search engines), the system automates the retrieval and synthesis
of relevant information. By streamlining the process of finding and combining
knowledge, this approach reduces manual effort and enhances the learning
experience. A preliminary user study confirmed the system's strong usability
and moderate-high utility, demonstrating its potential to improve the
efficiency of knowledge acquisition.
|
2502.03950
|
LR0.FM: Low-Resolution Zero-shot Classification Benchmark For Foundation
Models
|
cs.CV
|
Visual-language foundation Models (FMs) exhibit remarkable zero-shot
generalization across diverse tasks, largely attributed to extensive
pre-training on large-scale datasets. However, their robustness on
low-resolution/pixelated (LR) images, a common challenge in real-world
scenarios, remains underexplored. We introduce LR0.FM, a comprehensive
benchmark evaluating the impact of low resolution on the zero-shot
classification performance of 10 FMs across 66 backbones and 15 datasets. We
propose a novel metric, Weighted Aggregated Robustness, to address the
limitations of existing metrics and better evaluate model performance across
resolutions and datasets. Our key findings show that: (i) model size positively
correlates with robustness to resolution degradation, (ii) pre-training dataset
quality is more important than its size, and (iii) fine-tuned and higher
resolution models are less robust against LR. Our analysis further reveals that
the model makes semantically reasonable predictions at LR, and the lack of
fine-grained details in input adversely impacts the model's initial layers more
than the deeper layers. We use these insights and introduce a simple strategy,
LR-TK0, to enhance the robustness of models without compromising their
pre-trained weights. We demonstrate the effectiveness of LR-TK0 for robustness
against low-resolution across several datasets and its generalization
capability across backbones and other approaches. Code is available at
https://github.com/shyammarjit/LR0.FM
|
2502.03952
|
Bridging the inference gap in Multimodal Variational Autoencoders
|
cs.LG stat.ML
|
From medical diagnosis to autonomous vehicles, critical applications rely on
the integration of multiple heterogeneous data modalities. Multimodal
Variational Autoencoders offer versatile and scalable methods for generating
unobserved modalities from observed ones. Recent models using
mixtures-of-experts aggregation suffer from theoretically grounded limitations
that restrict their generation quality on complex datasets. In this article, we
propose a novel interpretable model able to learn both joint and conditional
distributions without introducing mixture aggregation. Our model follows a
multistage training process: first modeling the joint distribution with
variational inference and then modeling the conditional distributions with
Normalizing Flows to better approximate true posteriors. Importantly, we also
propose to extract and leverage the information shared between modalities to
improve the conditional coherence of generated samples. Our method achieves
state-of-the-art results on several benchmark datasets.
|
2502.03953
|
Fairness Aware Reinforcement Learning via Proximal Policy Optimization
|
cs.MA cs.LG
|
Fairness in multi-agent systems (MAS) focuses on equitable reward
distribution among agents in scenarios involving sensitive attributes such as
race, gender, or socioeconomic status. This paper introduces fairness in
Proximal Policy Optimization (PPO) with a penalty term derived from demographic
parity, counterfactual fairness, and conditional statistical parity. The
proposed method balances reward maximisation with fairness by integrating two
penalty components: a retrospective component that minimises disparities in
past outcomes and a prospective component that ensures fairness in future
decision-making. We evaluate our approach in the Allelopathic Harvest game, a
cooperative and competitive MAS focused on resource collection, where some
agents possess a sensitive attribute. Experiments demonstrate that fair-PPO
achieves fairer policies than classic PPO across all fairness metrics. Fairness
comes at the cost of reduced rewards, namely the Price of Fairness, although
agents with and without the sensitive attribute forgo comparable amounts of
rewards. Additionally, the retrospective and prospective penalties effectively
change the agents' behaviour and improve fairness. These findings underscore
the potential of fair-PPO to address fairness challenges in MAS.
|
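As an illustrative sketch only (the function names and the simple mean-gap surrogate are assumptions, not the paper's exact formulation), a demographic-parity-style penalty subtracted from a PPO-style objective could look like:

```python
import numpy as np

def demographic_parity_penalty(rewards, sensitive):
    """Absolute gap in mean reward between agents with and without the
    sensitive attribute (a common demographic-parity surrogate);
    0 means perfectly equal average outcomes across the two groups."""
    rewards = np.asarray(rewards, dtype=float)
    sensitive = np.asarray(sensitive, dtype=bool)
    return abs(rewards[sensitive].mean() - rewards[~sensitive].mean())

def fair_objective(ppo_objective, rewards, sensitive, lam=0.5):
    """PPO surrogate objective minus a weighted fairness penalty;
    lam trades reward maximisation against fairness (Price of Fairness)."""
    return ppo_objective - lam * demographic_parity_penalty(rewards, sensitive)
```

Raising `lam` makes the learned policy fairer at the cost of total reward, mirroring the trade-off the abstract reports.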
2502.03954
|
MAQInstruct: Instruction-based Unified Event Relation Extraction
|
cs.CL cs.AI
|
Extracting event relations that deviate from known schemas has proven
challenging for previous methods based on multi-class classification, MASK
prediction, or prototype matching. Recent advancements in large language models
have shown impressive performance through instruction tuning. Nevertheless, in
the task of event relation extraction, instruction-based methods face several
challenges: there are a vast number of inference samples, and the relations
between events are non-sequential. To tackle these challenges, we present an
improved instruction-based event relation extraction framework named
MAQInstruct. Firstly, we transform the task from extracting event relations
using given event-event instructions to selecting events using given
event-relation instructions, which reduces the number of samples required for
inference. Then, by incorporating a bipartite matching loss, we reduce the
dependency of the instruction-based method on the generation sequence. Our
experimental results demonstrate that MAQInstruct significantly improves the
performance of event relation extraction across multiple LLMs.
|
2502.03957
|
Improving the Perturbation-Based Explanation of Deepfake Detectors
Through the Use of Adversarially-Generated Samples
|
cs.CV cs.AI cs.CR
|
In this paper, we introduce the idea of using adversarially-generated samples
of the input images that were classified as deepfakes by a detector, to form
perturbation masks for inferring the importance of different input features and
produce visual explanations. We generate these samples based on Natural
Evolution Strategies, aiming to flip the original deepfake detector's decision
and classify these samples as real. We apply this idea to four
perturbation-based explanation methods (LIME, SHAP, SOBOL and RISE) and
evaluate the performance of the resulting modified methods using a SOTA
deepfake detection model, a benchmarking dataset (FaceForensics++) and a
corresponding explanation evaluation framework. Our quantitative assessments
document the mostly positive contribution of the proposed perturbation approach
to the performance of explanation methods. Our qualitative analysis shows the
capacity of the modified explanation methods to demarcate the manipulated image
regions more accurately, and thus to provide more useful explanations.
|
2502.03958
|
Non-convex composite federated learning with heterogeneous data
|
cs.LG cs.DC
|
We propose an innovative algorithm for non-convex composite federated
learning that decouples the proximal operator evaluation and the communication
between server and clients. Moreover, each client uses local updates to
communicate less frequently with the server, sends only a single d-dimensional
vector per communication round, and overcomes issues with client drift. In the
analysis, challenges arise from the use of decoupling strategies and local
updates in the algorithm, as well as from the non-convex and non-smooth nature
of the problem. We establish sublinear and linear convergence to a bounded
residual error under general non-convexity and the proximal Polyak-Lojasiewicz
inequality, respectively. In the numerical experiments, we demonstrate the
superiority of our algorithm over state-of-the-art methods on both synthetic
and real datasets.
|
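The decoupled proximal-operator evaluation acts on the non-smooth composite term; as a hypothetical example (the paper's exact regularizer is not specified here), for an L1 penalty the proximal operator is elementwise soft-thresholding:

```python
import numpy as np

def prox_l1(v, step):
    """Proximal operator of step * ||.||_1 (soft-thresholding):
    argmin_x 0.5 * ||x - v||^2 + step * ||x||_1, applied elementwise.
    Entries smaller than the threshold are set exactly to zero."""
    return np.sign(v) * np.maximum(np.abs(v) - step, 0.0)
```

In a decoupled scheme, the server can apply such a prox step once per round on the aggregated d-dimensional vector instead of each client evaluating it locally.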
2502.03960
|
Bilevel Multi-Armed Bandit-Based Hierarchical Reinforcement Learning for
Interaction-Aware Self-Driving at Unsignalized Intersections
|
cs.RO
|
In this work, we present BiM-ACPPO, a bilevel multi-armed bandit-based
hierarchical reinforcement learning framework for interaction-aware
decision-making and planning at unsignalized intersections. Essentially, it
proactively takes the uncertainties associated with surrounding vehicles (SVs)
into consideration, which encompass those stemming from the driver's intention,
interactive behaviors, and the varying number of SVs. Intermediate decision
variables are introduced to enable the high-level RL policy to provide an
interaction-aware reference, for guiding low-level model predictive control
(MPC) and further enhancing the generalization ability of the proposed
framework. By leveraging the structured nature of self-driving at unsignalized
intersections, the training problem of the RL policy is modeled as a bilevel
curriculum learning task, which is addressed by the proposed Exp3.S-based BiMAB
algorithm. It is noteworthy that the training curricula are dynamically
adjusted, thereby improving the sample efficiency of the RL training
process. Comparative experiments are conducted in the high-fidelity CARLA
simulator, and the results indicate that our approach achieves superior
performance compared to all baseline methods. Furthermore, experimental results
in two new urban driving scenarios clearly demonstrate the commendable
generalization performance of the proposed method.
|
2502.03962
|
Quantum Circuit Design using a Progressive Widening Monte Carlo Tree
Search
|
quant-ph cs.AI cs.ET
|
The performance of Variational Quantum Algorithms (VQAs) strongly depends on
the choice of the parameterized quantum circuit to optimize. One of the biggest
challenges in VQAs is designing quantum circuits tailored to the particular
problem and to the quantum hardware. This article proposes a gradient-free
Monte Carlo Tree Search (MCTS) technique to automate the process of quantum
circuit design. It introduces a novel formulation of the action space based on
a sampling scheme and a progressive widening technique to explore the space
dynamically. When testing our MCTS approach on the domain of random quantum
circuits, MCTS approximates unstructured circuits under different values of
stabilizer R\'enyi entropy. It turns out that MCTS manages to approximate the
benchmark quantum states independently of their degree of nonstabilizerness.
Next, our technique exhibits robustness across various application domains,
including quantum chemistry and systems of linear equations. Compared to
previous MCTS research, our technique reduces the number of quantum circuit
evaluations by a factor of 10 to 100 while achieving equal or better results.
In addition, the resulting quantum circuits have up to three times fewer CNOT
gates.
|
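A minimal sketch of the progressive-widening rule the abstract refers to (the constants `c` and `alpha` are illustrative defaults; the paper's sampling-based action space is not modeled):

```python
import math

def should_widen(num_children, num_visits, c=1.0, alpha=0.5):
    """Progressive widening for MCTS: permit a new child action only
    once the node's visit count justifies it, so a large or sampled
    action space is explored gradually rather than enumerated up front."""
    return num_children < math.ceil(c * num_visits ** alpha)
```

During selection, a node either samples a new action (when `should_widen` is true) or descends into an existing child, keeping the branching factor sublinear in the visit count.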
2502.03963
|
AL-PINN: Active Learning-Driven Physics-Informed Neural Networks for
Efficient Sample Selection in Solving Partial Differential Equations
|
cs.LG
|
Physics-Informed Neural Networks (PINNs) have emerged as a promising approach
for solving Partial Differential Equations (PDEs) by incorporating physical
constraints into deep learning models. However, standard PINNs often require a
large number of training samples to achieve high accuracy, leading to increased
computational costs. To address this issue, we propose Active Learning-Driven
PINNs (AL-PINN), which integrates Uncertainty Quantification (UQ) and Active
Learning (AL) strategies to optimize sample selection dynamically.
AL-PINN utilizes Monte Carlo Dropout to estimate epistemic uncertainty in the
model predictions, enabling the adaptive selection of high-uncertainty regions
for additional training. This approach significantly enhances learning
efficiency by focusing computational resources on the most informative data
points. We evaluate AL-PINN on benchmark PDE problems with known analytical
solutions and real-world WeatherBench climate data. Our results demonstrate
that AL-PINN achieves comparable or superior accuracy compared to traditional
PINNs while reducing the number of required training samples.
The proposed framework is particularly beneficial for scientific and
engineering applications where data collection is expensive or limited, such as
climate modeling, medical simulations, and material science. Our findings
highlight the potential of active learning in accelerating PINN-based PDE
solvers while maintaining high accuracy and computational efficiency.
|
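The uncertainty-driven selection step can be sketched as follows (a generic Monte Carlo Dropout sketch; `predict_fn` stands in for a PINN forward pass with dropout kept active, and all names are illustrative):

```python
import numpy as np

def mc_dropout_uncertainty(predict_fn, x, n_samples=50):
    """Epistemic uncertainty via Monte Carlo Dropout: run the stochastic
    model n_samples times and take the per-point mean and standard
    deviation of the predictions; high-std collocation points are the
    candidates for additional training."""
    preds = np.stack([predict_fn(x) for _ in range(n_samples)])
    return preds.mean(axis=0), preds.std(axis=0)
```

Points whose predictive standard deviation exceeds a threshold (or the top-k by std) would then be added to the training set, concentrating compute on the most informative regions.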
2502.03965
|
Innovative Framework for Early Estimation of Mental Disorder Scores to
Enable Timely Interventions
|
cs.LG
|
Individuals' general well-being is greatly impacted by mental health
conditions including depression and Post-Traumatic Stress Disorder (PTSD),
underscoring the importance of early detection and precise diagnosis in order
to facilitate prompt clinical intervention. An advanced multimodal deep
learning system for the automated classification of PTSD and depression is
presented in this paper. Utilizing textual and audio data from clinical
interview datasets, the method combines features taken from both modalities by
combining the architectures of LSTM (Long Short-Term Memory) and BiLSTM
(Bidirectional Long Short-Term Memory). While text features focus on speech's
semantic and grammatical components, audio features capture vocal traits
including rhythm, tone, and pitch. This combination of modalities enhances the
model's capacity to identify minute patterns connected to mental health
conditions. Using test datasets, the proposed method achieves classification
accuracies of 92% for depression and 93% for PTSD, outperforming traditional
unimodal approaches and demonstrating its accuracy and robustness.
|
2502.03966
|
MultiFloodSynth: Multi-Annotated Flood Synthetic Dataset Generation
|
cs.CV cs.AI cs.LG
|
In this paper, we present a synthetic data generation framework for flood
hazard detection systems. For high fidelity and quality, we characterize
several real-world properties in a virtual world and simulate flood situations
by controlling them. For efficiency, recent generative models for image-to-3D
and urban city synthesis are leveraged to easily composite flood environments,
avoiding the data bias introduced by hand-crafted construction. Based on our
framework, we build a five-level synthetic flood dataset, dubbed
MultiFloodSynth, which contains rich annotation types such as normal maps,
segmentation masks, and 3D bounding boxes for a variety of downstream tasks. In
experiments, our dataset yields enhanced flood hazard detection performance
with on-par realism compared with real datasets.
|
2502.03971
|
RWKV-UI: UI Understanding with Enhanced Perception and Reasoning
|
cs.CV cs.HC
|
Existing Visual Language Models often struggle with information loss and
limited reasoning abilities when handling high-resolution web interfaces that
combine complex visual, textual, and interactive elements. These challenges are
particularly evident in tasks requiring webpage layout comprehension and
multi-step interactive reasoning. To address these challenges, we propose
RWKV-UI, a Visual Language Model based on the RWKV architecture, specifically
designed to handle high-resolution UI images. During model training, we
introduce layout detection as a visual prompt to help the model better
understand the webpage layout structures. Additionally, we design a visual
prompt based on the Chain-of-Thought (CoT) mechanism, which enhances the model's
ability to understand and reason about webpage content through reasoning
chains. Experimental results show that RWKV-UI demonstrates significant
performance improvements in high-resolution UI understanding and interactive
reasoning tasks.
|
2502.03974
|
Spatiotemporal Trajectory Tracking Method for Vehicles Incorporating
Lead-Lag Judgement
|
eess.SY cs.SY
|
In the domain of intelligent transportation systems, especially within the
context of autonomous vehicle control, the preemptive holistic collaborative
system has been presented as a promising solution to bring a remarkable
enhancement in traffic efficiency and a substantial reduction in the accident
rate, demonstrating a great potential of development. In order to ensure this
system operates as intended, accurate tracking of the spatiotemporal trajectory
is of crucial significance. Moreover, minimizing the tracking error is a
necessary step in this process. To this end, a novel lead-lag judgment
mechanism is proposed. This mechanism precisely quantifies the longitudinal
positional deviation between the vehicle and the target trajectory over time;
the deviation is then corrected with a real-time acceleration compensation
strategy, significantly enhancing the accuracy and reliability of trajectory
tracking. Real-vehicle experiments were conducted in a dedicated test field to
empirically validate the feasibility of this approach. The obtained tracking
data was subsequently processed using the lead-lag judgment mechanism, and the
spatiotemporal error patterns between the vehicle and the target trajectory
were carefully analyzed under different alignments and speeds. Finally, using
real highway speed and alignment data, we conducted comprehensive
spatiotemporal trajectory tracking simulations. Through experiments and
simulations, tracking errors remained within an acceptable range and a
reasonable spatiotemporal distance was maintained during the preemptive merging
process on highway ramps. Overall, this study offers valuable insights for
highway ramp merging safety. Future work can expand on these findings.
|
2502.03976
|
Small Signal Stability Analysis of Kurdistan Regional Power System
|
eess.SY cs.SY
|
This paper presents for the first time a mathematical model for evaluating
the Planned Kurdistan Regional Power System (KRPS) for its ability to maintain
stability under small disturbances and fluctuations during normal operating
conditions. To achieve this objective, practical field data, manufacturers'
datasheets, and related IEEE task force reports have been used to build a
complete mathematical model in the MATLAB/Simulink/SimPowerSystems environment.
New modules have been established and added to the platform wherever it does
not support special types of elements. The model accurately represents all the
power system
components involved in physical phenomena of system dynamic oscillations. The
model consists of 53 transmission lines, 35 nodes and 6 generating stations.
The system is simulated under different configurations and settings; the
dynamic behaviors associated with each configuration are recorded and analyzed
accordingly.
|
2502.03979
|
Towards Unified Music Emotion Recognition across Dimensional and
Categorical Models
|
cs.SD cs.AI eess.AS
|
One of the most significant challenges in Music Emotion Recognition (MER)
comes from the fact that emotion labels can be heterogeneous across datasets
with regard to the emotion representation, including categorical (e.g., happy,
sad) versus dimensional labels (e.g., valence-arousal). In this paper, we
present a unified multitask learning framework that combines these two types of
labels and is thus able to be trained on multiple datasets. This framework uses
an effective input representation that combines musical features (i.e., key and
chords) and MERT embeddings. Moreover, knowledge distillation is employed to
transfer the knowledge of teacher models trained on individual datasets to a
student model, enhancing its ability to generalize across multiple tasks. To
validate our proposed framework, we conducted extensive experiments on a
variety of datasets, including MTG-Jamendo, DEAM, PMEmo, and EmoMusic.
According to our experimental results, the inclusion of musical features,
multitask learning, and knowledge distillation significantly enhances
performance. In particular, our model outperforms the state-of-the-art models,
including the best-performing model from the MediaEval 2021 competition on the
MTG-Jamendo dataset. Our work makes a significant contribution to MER by
allowing the combination of categorical and dimensional emotion labels in one
unified framework, thus enabling training across datasets.
|
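The knowledge-distillation step can be sketched with the standard temperature-softened KL objective (a generic single-teacher formulation; the paper's multi-teacher, multitask setup is not modeled, and the names are illustrative):

```python
import numpy as np

def softmax(logits, t=1.0):
    """Temperature-scaled softmax (numerically stabilized)."""
    z = np.asarray(logits, dtype=float) / t
    e = np.exp(z - z.max())
    return e / e.sum()

def distillation_loss(student_logits, teacher_logits, t=2.0):
    """KL(teacher || student) over temperature-softened distributions,
    scaled by t^2 as is standard in knowledge distillation; it is zero
    when the student matches the teacher exactly."""
    p = softmax(teacher_logits, t)
    q = softmax(student_logits, t)
    return float(np.sum(p * np.log(p / q))) * t * t
```

In a multi-dataset setting, one such term per teacher would be added to the student's task losses, transferring each dataset-specific teacher's knowledge into the single unified model.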
2502.03982
|
Temporal Distribution Shift in Real-World Pharmaceutical Data:
Implications for Uncertainty Quantification in QSAR Models
|
cs.LG
|
The estimation of uncertainties associated with predictions from quantitative
structure-activity relationship (QSAR) models can accelerate the drug discovery
process by identifying promising experiments and allowing an efficient
allocation of resources. Several computational tools exist that estimate the
predictive uncertainty in machine learning models. However, deviations from the
i.i.d. setting have been shown to impair the performance of these uncertainty
quantification methods. We use a real-world pharmaceutical dataset to address
the pressing need for a comprehensive, large-scale evaluation of uncertainty
estimation methods in the context of realistic distribution shifts over time.
We investigate the performance of several uncertainty estimation methods,
including ensemble-based and Bayesian approaches. Furthermore, we use this
real-world setting to systematically assess the distribution shifts in label
and descriptor space and their impact on the capability of the uncertainty
estimation methods. Our study reveals significant shifts over time in both
label and descriptor space and a clear connection between the magnitude of the
shift and the nature of the assay. Moreover, we show that pronounced
distribution shifts impair the performance of popular uncertainty estimation
methods used in QSAR models. This work highlights the challenges of identifying
uncertainty quantification methods that remain reliable under distribution
shifts introduced by real-world data.
|
2502.03984
|
PGB: One-Shot Pruning for BERT via Weight Grouping and Permutation
|
cs.CL cs.AI
|
Large pretrained language models such as BERT suffer from slow inference and
high memory usage, due to their huge size. Recent approaches to compressing
BERT rely on iterative pruning and knowledge distillation, which, however, are
often too complicated and computationally intensive. This paper proposes a
novel semi-structured one-shot pruning method for BERT, called
$\textit{Permutation and Grouping for BERT}$ (PGB), which achieves high
compression efficiency and sparsity while preserving accuracy. To this end, PGB
identifies important groups of individual weights by permutation and prunes all
other weights as a structure in both multi-head attention and feed-forward
layers. Furthermore, if no important group is formed in a particular layer, PGB
drops the entire layer to produce an even more compact model. Our experimental
results on BERT$_{\text{BASE}}$ demonstrate that PGB outperforms the
state-of-the-art structured pruning methods in terms of computational cost and
accuracy preservation.
|
2502.03988
|
Tight Bounds on Jensen's Gap: Novel Approach with Applications in
Generative Modeling
|
cs.LG
|
Among various mathematical tools of particular interest are those that
provide a common basis for researchers in different scientific fields. One of
them is Jensen's inequality, which states that the expectation of a convex
function is greater than or equal to the function evaluated at the expectation.
The resulting difference, known as Jensen's gap, became the subject of
investigation by both the statistical and machine learning communities. Among
many related topics, finding lower and upper bounds on Jensen's gap (under
different assumptions on the underlying function and distribution) has recently
become a problem of particular interest. In our paper, we take another step in
this direction by providing a novel general and mathematically rigorous
technique, motivated by the recent results of Struski et al. (2023). In
addition, by studying in detail the case of the logarithmic function and the
log-normal distribution, we explore a method for tightly estimating the
log-likelihood of generative models trained on real-world datasets.
Furthermore, we present both analytical and experimental arguments in support
of the superiority of our approach in comparison to existing state-of-the-art
solutions, contingent upon fulfillment of the criteria set forth by theoretical
studies and corresponding experiments on synthetic data.
|
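For reference, Jensen's inequality and its gap, with the log/log-normal case the abstract studies worked out in closed form (the paper's actual bounds are not reproduced here):

```latex
E[\varphi(X)] \;\ge\; \varphi(E[X]) \quad \text{(Jensen, convex } \varphi\text{)},
\qquad
J(\varphi, X) := E[\varphi(X)] - \varphi(E[X]) \;\ge\; 0.
% For concave \varphi = \log the inequality reverses. With log-normal
% X = e^Z, \; Z \sim \mathcal{N}(\mu, \sigma^2):
%   E[\log X] = \mu, \qquad \log E[X] = \mu + \sigma^2/2,
% so the (reversed) gap is exactly \sigma^2/2.
```

This closed-form case illustrates why the logarithm/log-normal pairing is a natural test bed for log-likelihood estimation in generative models.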
2502.03990
|
Frequency Control and Power Sharing in Combined Heat and Power Networks
|
eess.SY cs.SY
|
We consider the problem of using district heating systems as ancillary
services for primary frequency control in power networks. We propose a novel
power sharing scheme for heating systems based on the average temperature,
which enables an optimal power allocation among the diverse heat sources
without prior knowledge of the disturbances. We then discuss two
approaches for heating systems to contribute to frequency regulation in power
networks. We show that both approaches ensure stability in the combined heat
and power network and facilitate optimal power allocation among the different
energy sources.
|
2502.03992
|
Ontology-Guided, Hybrid Prompt Learning for Generalization in Knowledge
Graph Question Answering
|
cs.CL cs.AI
|
Most existing Knowledge Graph Question Answering (KGQA) approaches are
designed for a specific KG, such as Wikidata, DBpedia or Freebase. Due to the
heterogeneity of the underlying graph schema, topology and assertions, most
KGQA systems cannot be transferred to unseen Knowledge Graphs (KGs) without
resource-intensive training data. We present OntoSCPrompt, a novel Large
Language Model (LLM)-based KGQA approach with a two-stage architecture that
separates semantic parsing from KG-dependent interactions. OntoSCPrompt first
generates a SPARQL query structure (including SPARQL keywords such as SELECT,
ASK, WHERE and placeholders for missing tokens) and then fills them with
KG-specific information. To enhance the understanding of the underlying KG, we
present an ontology-guided, hybrid prompt learning strategy that integrates KG
ontology into the learning process of hybrid prompts (e.g., discrete and
continuous vectors). We also present several task-specific decoding strategies
to ensure the correctness and executability of generated SPARQL queries in both
stages. Experimental results demonstrate that OntoSCPrompt performs as well as
SOTA approaches without retraining on a number of KGQA datasets such as CWQ,
WebQSP and LC-QuAD 1.0 in a resource-efficient manner and can generalize well
to unseen domain-specific KGs like DBLP-QuAD and CoyPu KG. Code:
\href{https://github.com/LongquanJiang/OntoSCPrompt}{https://github.com/LongquanJiang/OntoSCPrompt}
|
2502.03997
|
CAD-Editor: A Locate-then-Infill Framework with Automated Training Data
Synthesis for Text-Based CAD Editing
|
cs.CV
|
Computer Aided Design (CAD) is indispensable across various industries.
\emph{Text-based CAD editing}, which automates the modification of CAD models
based on textual instructions, holds great potential but remains underexplored.
Existing methods primarily focus on design variation generation or text-based
CAD generation, either lacking support for text-based control or neglecting
existing CAD models as constraints. We introduce \emph{CAD-Editor}, the first
framework for text-based CAD editing. To address the challenge of demanding
triplet data with accurate correspondence for training, we propose an automated
data synthesis pipeline. This pipeline utilizes design variation models to
generate pairs of original and edited CAD models and employs Large
Vision-Language Models (LVLMs) to summarize their differences into editing
instructions. To tackle the composite nature of text-based CAD editing, we
propose a locate-then-infill framework that decomposes the task into two
focused sub-tasks: locating regions requiring modification and infilling these
regions with appropriate edits. Large Language Models (LLMs) serve as the
backbone for both sub-tasks, leveraging their capabilities in natural language
understanding and CAD knowledge. Experiments show that CAD-Editor achieves
superior performance both quantitatively and qualitatively.
|
2502.03998
|
Online Learning of Counter Categories and Ratings in PvP Games
|
cs.LG cs.AI cs.GT cs.MA
|
In competitive games, strength ratings like Elo are widely used to quantify
player skill and support matchmaking by accounting for skill disparities better
than simple win rate statistics. However, scalar ratings cannot handle complex
intransitive relationships, such as counter strategies seen in
Rock-Paper-Scissors. To address this, recent work introduced Neural Rating
Table and Neural Counter Table, which combine scalar ratings with discrete
counter categories to model intransitivity. While effective, these methods rely
on neural network training and cannot perform real-time updates. In this paper,
we propose an online update algorithm that extends Elo principles to
incorporate real-time learning of counter categories. Our method dynamically
adjusts both ratings and counter relationships after each match, preserving the
explainability of scalar ratings while addressing intransitivity. Experiments
on zero-sum competitive games demonstrate its practicality, particularly in
scenarios without complex team compositions.
|
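For context, the classic Elo update that the proposed algorithm extends can be sketched as follows (`k` is an illustrative K-factor; the online counter-category learning is the paper's contribution and is not modeled here):

```python
def elo_update(r_a, r_b, score_a, k=32.0):
    """One standard Elo update after a match between players A and B.
    score_a is 1.0 for an A win, 0.5 for a draw, 0.0 for a loss.
    Returns the new (r_a, r_b); gains and losses are symmetric, so the
    total rating mass is conserved."""
    expected_a = 1.0 / (1.0 + 10.0 ** ((r_b - r_a) / 400.0))
    delta = k * (score_a - expected_a)
    return r_a + delta, r_b - delta
```

Because the update is a scalar rule, it cannot express intransitive counter relationships (A beats B, B beats C, C beats A), which is exactly the limitation the counter-category extension addresses.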
2502.03999
|
A Self-supervised Multimodal Deep Learning Approach to Differentiate
Post-radiotherapy Progression from Pseudoprogression in Glioblastoma
|
eess.IV cs.CV
|
Accurate differentiation of pseudoprogression (PsP) from True Progression
(TP) following radiotherapy (RT) in glioblastoma (GBM) patients is crucial for
optimal treatment planning. However, this task remains challenging due to the
overlapping imaging characteristics of PsP and TP. This study therefore
proposes a multimodal deep-learning approach utilizing complementary
information from routine anatomical MR images, clinical parameters, and RT
treatment planning information for improved predictive accuracy. The approach
utilizes a self-supervised Vision Transformer (ViT) to encode multi-sequence MR
brain volumes to effectively capture both global and local context from the
high-dimensional input. The encoder is trained in a self-supervised upstream
task on unlabeled glioma MRI datasets from the open BraTS2021, UPenn-GBM, and
UCSF-PDGM datasets to generate compact, clinically relevant representations
from FLAIR and T1 post-contrast sequences. These encoded MR inputs are then
integrated with clinical data and RT treatment planning information through
guided cross-modal attention, improving progression classification accuracy.
This work was developed using two datasets from different centers: the Burdenko
Glioblastoma Progression Dataset (n = 59) for training and validation, and the
GlioCMV progression dataset from the University Hospital Erlangen (UKER) (n =
20) for testing. The proposed method achieved an AUC of 75.3%, outperforming
the current state-of-the-art data-driven approaches. Importantly, the proposed
approach relies on readily available anatomical MRI sequences, clinical data,
and RT treatment planning information, enhancing its clinical feasibility. The
proposed approach addresses the challenge of limited data availability for PsP
and TP differentiation and could allow for improved clinical decision-making
and optimized treatment plans for GBM patients.
|
2502.04004
|
Near-optimal Regret Using Policy Optimization in Online MDPs with
Aggregate Bandit Feedback
|
cs.LG
|
We study online finite-horizon Markov Decision Processes with adversarially
changing loss and aggregate bandit feedback (a.k.a full-bandit). Under this
type of feedback, the agent observes only the total loss incurred over the
entire trajectory, rather than the individual losses at each intermediate step
within the trajectory. We introduce the first Policy Optimization algorithms
for this setting. In the known-dynamics case, we achieve the first
\textit{optimal} regret bound of $\tilde \Theta(H^2\sqrt{SAK})$, where $K$ is
the number of episodes, $H$ is the episode horizon, $S$ is the number of
states, and $A$ is the number of actions. In the unknown-dynamics case, we
establish a regret bound of $\tilde O(H^3 S \sqrt{AK})$, significantly improving
the best known result by a factor of $H^2 S^5 A^2$.
|
2502.04008
|
Automating a Complete Software Test Process Using LLMs: An Automotive
Case Study
|
cs.SE cs.AI
|
Vehicle API testing verifies whether the interactions between a vehicle's
internal systems and external applications meet expectations, ensuring that
users can access and control various vehicle functions and data. However, this
task is inherently complex, requiring the alignment and coordination of API
systems, communication protocols, and even vehicle simulation systems to
develop valid test cases. In practical industrial scenarios, inconsistencies,
ambiguities, and interdependencies across various documents and system
specifications pose significant challenges. This paper presents a system
designed for the automated testing of in-vehicle APIs. By clearly defining and
segmenting the testing process, we enable Large Language Models (LLMs) to focus
on specific tasks, ensuring a stable and controlled testing workflow.
Experiments conducted on over 100 APIs demonstrate that our system effectively
automates vehicle API testing. The results also confirm that LLMs can
efficiently handle mundane tasks requiring human judgment, making them suitable
for complete automation in similar industrial contexts.
|
2502.04012
|
Malleable Robots
|
cs.RO
|
This chapter is about the fundamentals of fabrication, control, and
human-robot interaction of a new type of collaborative robotic manipulators,
called malleable robots, which are based on adjustable architectures of varying
stiffness for achieving high dexterity with lower mobility arms. Collaborative
robots, or cobots, commonly integrate six or more degrees of freedom (DOF) in a
serial arm in order to allow positioning in constrained spaces and adaptability
across tasks. Increasing the dexterity of robotic arms has indeed traditionally
been accomplished by increasing the number of degrees of freedom of the system;
however, once a robotic task has been established (e.g., a pick-and-place
operation), the motion of the end-effector can normally be achieved using fewer
than 6-DOF (i.e., lower mobility). The aim of malleable
robots is to close the technological gap that separates current cobots from
achieving flexible, accessible manufacturing automation with a reduced number
of actuators.
|
2502.04014
|
Enhancing people localisation in drone imagery for better crowd
management by utilising every pixel in high-resolution images
|
cs.CV cs.RO
|
Accurate people localisation using drones is crucial for effective crowd
management, not only during massive events and public gatherings but also for
monitoring daily urban crowd flow. Traditional methods for tiny object
localisation using high-resolution drone imagery often face limitations in
precision and efficiency, primarily due to constraints in image scaling and
sliding window techniques. To address these challenges, a novel approach
dedicated to point-oriented object localisation is proposed. Along with this
approach, the Pixel Distill module is introduced to enhance the processing of
high-definition images by extracting spatial information from individual pixels
at once. Additionally, a new dataset named UP-COUNT, tailored to contemporary
drone applications, is shared. It addresses a wide range of challenges in drone
imagery, such as simultaneous camera and object movement during the image
acquisition process, pushing forward the capabilities of crowd management
applications. A comprehensive evaluation of the proposed method on the proposed
dataset and the commonly used DroneCrowd dataset demonstrates the superiority
of our approach over existing methods and highlights its efficacy in
drone-based crowd object localisation tasks. These improvements markedly
increase the algorithm's applicability to operate in real-world scenarios,
enabling more reliable localisation and counting of individuals in dynamic
environments.
|
2502.04018
|
PINT: Physics-Informed Neural Time Series Models with Applications to
Long-term Inference on WeatherBench 2m-Temperature Data
|
cs.LG
|
This paper introduces PINT (Physics-Informed Neural Time Series Models), a
framework that integrates physical constraints into neural time series models
to improve their ability to capture complex dynamics. We apply PINT to the ERA5
WeatherBench dataset, focusing on long-term forecasting of 2m-temperature data.
PINT incorporates the Simple Harmonic Oscillator Equation as a physics-informed
prior, embedding its periodic dynamics into RNN, LSTM, and GRU architectures.
This equation's analytical solutions (sine and cosine functions) facilitate
rigorous evaluation of the benefits of incorporating physics-informed
constraints. By benchmarking against a linear regression baseline derived from
its exact solutions, we quantify the impact of embedding physical principles in
data-driven models. Unlike traditional time series models that rely on future
observations, PINT is designed for practical forecasting. Using only the first
90 days of observed data, it iteratively predicts the next two years,
addressing challenges posed by limited real-time updates. Experiments on the
WeatherBench dataset demonstrate PINT's ability to generalize, capture periodic
trends, and align with physical principles. This study highlights the potential
of physics-informed neural models in bridging machine learning and
interpretable climate applications.
Our models and datasets are publicly available on GitHub:
https://github.com/KV-Park.
|
2502.04021
|
Variational Quantum Optimization with Continuous Bandits
|
cs.LG quant-ph
|
We introduce a novel approach to variational quantum algorithms (VQA) via
continuous bandits. VQA are a class of hybrid quantum-classical algorithms
where the parameters of quantum circuits are optimized by classical algorithms.
Previous work has used zero- and first-order gradient-based methods; however,
such algorithms suffer from the barren plateau (BP) problem, where gradients and
loss differences are exponentially small. We introduce an approach using
bandit methods which combines global exploration with local exploitation. We
show how VQA can be formulated as a best arm identification problem in a
continuous space of arms with Lipschitz smoothness. While regret minimization
has been addressed in this setting, existing methods for pure exploration only
cover discrete spaces. We give the first results for pure exploration in a
continuous setting and derive a fixed-confidence, information-theoretic,
instance-specific lower bound. Under certain assumptions on the expected
payoff, we derive a simple algorithm, which is near-optimal with respect to our
lower bound. Finally, we apply our continuous bandit algorithm to two VQA
schemes: a PQC and a QAOA quantum circuit, showing that we significantly
outperform the previously known state-of-the-art methods (which used
gradient-based methods).
|
2502.04022
|
Quantification of Biodiversity from Historical Survey Text with
LLM-based Best-Worst Scaling
|
cs.CL
|
In this study, we evaluate methods to determine the frequency of species via
quantity estimation from historical survey text. To that end, we formulate
classification tasks and finally show that this problem can be adequately
framed as a regression task using Best-Worst Scaling (BWS) with Large Language
Models (LLMs). We test Ministral-8B, DeepSeek-V3, and GPT-4, finding that the
latter two have reasonable agreement with humans and each other. We conclude
that this approach is more cost-effective and similarly robust compared to a
fine-grained multi-class approach, allowing automated quantity estimation
across species.
|
2502.04024
|
A Robust Optimization Model for Cost-Efficient and Fast Electric Vehicle
Charging with L2-norm Uncertainty
|
eess.SY cs.SY
|
In this paper, we propose a robust optimization model that addresses both the
cost-efficiency and fast charging requirements for electric vehicles (EVs) at
charging stations. By combining elements from traditional cost-minimization
models and a fast charging objective, we construct an optimization model that
balances user costs with rapid power allocation. Additionally, we incorporate
L2-norm uncertainty into the charging cost, ensuring that the model remains
resilient under cost fluctuations. The proposed model is tested under
real-world scenarios and demonstrates its potential for efficient and flexible
EV charging solutions.
|
2502.04028
|
Deep Meta Coordination Graphs for Multi-agent Reinforcement Learning
|
cs.LG
|
This paper presents deep meta coordination graphs (DMCG) for learning
cooperative policies in multi-agent reinforcement learning (MARL). Coordination
graph formulations encode local interactions and accordingly factorize the
joint value function of all agents to improve efficiency in MARL. However,
existing approaches rely solely on pairwise relations between agents, which
potentially oversimplifies complex multi-agent interactions. DMCG goes beyond
these simple direct interactions by also capturing useful higher-order and
indirect relationships among agents. It generates novel graph structures
accommodating multiple types of interactions and arbitrary lengths of multi-hop
connections in coordination graphs to model such interactions. It then employs
a graph convolutional network module to learn powerful representations in an
end-to-end manner. We demonstrate its effectiveness in multiple coordination
problems in MARL where other state-of-the-art methods can suffer from sample
inefficiency or fail entirely. All codes can be found here:
https://github.com/Nikunj-Gupta/dmcg-marl.
|
2502.04030
|
Fine, I'll Merge It Myself: A Multi-Fidelity Framework for Automated
Model Merging
|
cs.AI cs.LG
|
Reasoning capabilities represent a critical frontier for large language
models (LLMs), but developing them requires extensive proprietary datasets and
computational resources. One way to efficiently supplement capabilities is
model merging, which offers a promising alternative by combining multiple
models without retraining. However, current merging approaches rely on
manually-designed strategies for merging hyperparameters, limiting the
exploration of potential model combinations and requiring significant human
effort. We propose an Automated Model Merging Framework that enables
fine-grained exploration of merging strategies while reducing costs through
multi-fidelity approximations. We support both single and multi-objective
optimization and introduce two novel search spaces: layerwise fusion (LFS) and
depth-wise integration (DIS). Evaluating across a number of benchmarks, we find
that the search autonomously finds 1) Merges that further boost
single-objective performance, even on tasks the model has already been
finetuned on, and 2) Merges that optimize multi-objective frontiers across
tasks. Effective merges are found with limited compute, e.g., within fewer than
500 search steps.
|
2502.04034
|
Generalize Drug Response Prediction by Latent Independent Projection for
Asymmetric Constrained Domain Generalization
|
cs.LG cs.AI
|
The accurate prediction of drug responses remains a formidable challenge,
particularly at the single-cell level and in clinical treatment contexts. Some
studies employ transfer learning techniques to predict drug responses in
individual cells and patients, but they require access to target-domain data
during training, which is often unavailable or obtainable only in the future. In
this study, we propose a novel domain generalization framework, termed
panCancerDR, to address this challenge. We conceptualize each cancer type as a
distinct source domain, with its cell lines serving as domain-specific samples.
Our primary objective is to extract domain-invariant features from the
expression profiles of cell lines across diverse cancer types, thereby
generalizing the predictive capacity to out-of-distribution samples. To enhance
robustness, we introduce a latent independence projection (LIP) module that
encourages the encoder to extract informative yet non-redundant features. Also,
we propose an asymmetric adaptive clustering constraint, which clusters
drug-sensitive samples into a compact group while driving resistant samples to
disperse across separate clusters in the latent space. Our empirical
experiments demonstrate that panCancerDR effectively learns task-relevant
features from diverse source domains, and achieves accurate predictions of drug
response for unseen cancer type during training. Furthermore, when evaluated on
single-cell and patient-level prediction tasks, our model-trained solely on in
vitro cell line data without access to target-domain information-consistently
outperforms and matched current state-of-the-art methods. These findings
highlights the potential of our method for real-world clinical applications.
|
2502.04037
|
Exploring Imbalanced Annotations for Effective In-Context Learning
|
cs.CL cs.LG
|
Large language models (LLMs) have shown impressive performance on downstream
tasks through in-context learning (ICL), which heavily relies on the
demonstrations selected from annotated datasets. Existing selection methods may
hinge on the distribution of annotated datasets, which can often be long-tailed
in real-world scenarios. In this work, we show that imbalanced class
distributions in annotated datasets significantly degrade the performance of
ICL across various tasks and selection methods. Moreover, traditional
rebalancing methods fail to ameliorate the issue of class imbalance in ICL. Our
method is
motivated by decomposing the distributional differences between annotated and
test datasets into two-component weights: class-wise weights and conditional
bias. The key idea behind our method is to estimate the conditional bias by
minimizing the empirical error on a balanced validation dataset and to employ
the two-component weights to modify the original scoring functions during
selection. Our approach can prevent selecting too many demonstrations from a
single class while preserving the effectiveness of the original selection
methods. Extensive experiments demonstrate the effectiveness of our method,
improving the average accuracy by up to 5.46 on common benchmarks with
imbalanced datasets.
|
2502.04038
|
Simulating the Emergence of Differential Case Marking with Communicating
Neural-Network Agents
|
cs.CL
|
Differential Case Marking (DCM) refers to the phenomenon where grammatical
case marking is applied selectively based on semantic, pragmatic, or other
factors. The emergence of DCM has been studied in artificial language learning
experiments with human participants, which were specifically aimed at
disentangling the effects of learning from those of communication (Smith &
Culbertson, 2020). Multi-agent reinforcement learning frameworks based on
neural networks have gained significant interest for simulating the emergence of
human-like linguistic phenomena. In this study, we employ such a framework in
which agents first acquire an artificial language before engaging in
communicative interactions, enabling direct comparisons to human results. Using
a very generic communication optimization algorithm and neural-network learners
that have no prior experience with language or semantic preferences, our
results demonstrate that learning alone does not lead to DCM, but when agents
communicate, differential use of markers arises. This supports the findings of
Smith and Culbertson (2020), which highlight the critical role of communication
in shaping DCM and showcases the potential of neural-agent models to complement
experimental research on language evolution.
|
2502.04039
|
A Cloud-native Agile approach to cyber platform prototyping and
integration for astronomy: the ENGAGE SKA case
|
astro-ph.IM cs.SY eess.SY physics.med-ph
|
The Square Kilometre Array (SKA) Observatory is gearing up for the formal
construction of its two radio interferometers in Australia and South Africa
after the end of the design and pre-construction phases. Agile methodologies,
Cloud-native computing technologies, and DevOps software practices are
influencing the design of compute infrastructures that will be key to reducing
the operational costs of SKA while improving the control and monitoring of the
SKA antennas and ancillary systems, Correlators, HPC facilities or related data
centre tiered systems. These tools will likely include advanced power metering
technologies and efficient distribution automation and Network Operation
Centres (NOC). SKA will become the world's largest radio telescope and is
expected to achieve its first science by 2026. To cope with this dimension and
complexity, a key part of this distributed Observatory is the overall software
control and monitoring system embodied in the Observatory Management and
Control (OMC) and the Services Teams that requires specialized Agile Teams to
assist in software and cyber infrastructure building using an Agile development
environment that includes test automation, Continuous Integration, and
Continuous Deployment. To manage such a large and distributed machine, the
Agile approach was adopted for the core software package of the SKA Telescope
aimed at scheduling observations, controlling their execution, monitoring the
telescope status and ensuring scalability and reliability. Here, we report on
the ENGAGE SKA cyberinfrastructure prototyping support for the SKA Agile
Software Development Life Cycle (SDLC).
|
2502.04040
|
Leveraging Reasoning with Guidelines to Elicit and Utilize Knowledge for
Enhancing Safety Alignment
|
cs.LG cs.AI cs.CL
|
Training safe LLMs is one of the most critical research challenges. However,
the commonly used method, Refusal Training (RT), struggles to generalize
against various OOD jailbreaking attacks. Many safety training methods have
been proposed to address this issue. While they offer valuable insights, we aim
to complement this line of research by investigating whether OOD attacks truly
exceed the capability of the RT model. Conducting evaluations with BoN, we
observe significant improvements in generalization as N increases. This
underscores
that the model possesses sufficient safety-related latent knowledge, but RT
fails to consistently elicit this knowledge when addressing OOD attacks.
Further analysis based on domain adaptation reveals that training with direct
refusal causes the model to rely on superficial shortcuts, resulting in the
learning of non-robust representation mappings. Based on our findings, we
propose training the model to perform safety reasoning for each query.
Reasoning supervision encourages the model to perform more computation,
explicitly eliciting and using
latent knowledge through reasoning. To achieve this, we synthesize reasoning
supervision based on pre-guidelines, training the model to reason in alignment
with them, thereby effectively eliciting and utilizing latent knowledge from
diverse perspectives. Extensive experiments show that our method significantly
improves generalization performance against OOD attacks.
|
2502.04043
|
Probe-Free Low-Rank Activation Intervention
|
cs.LG cs.AI
|
Language models (LMs) can produce texts that appear accurate and coherent but
contain untruthful or toxic content. Inference-time interventions that edit the
hidden activations have shown promising results in steering the LMs towards
desirable generations. Existing activation intervention methods often comprise
an activation probe to detect undesirable generation, triggering the activation
modification to steer subsequent generation. This paper proposes a probe-free
intervention method FLORAIN for all attention heads in a specific activation
layer. It eliminates the need to train classifiers for probing purposes. The
intervention function is parametrized by a sample-wise nonlinear low-rank
mapping, which is trained by minimizing the distance between the modified
activations and their projection onto the manifold of desirable content. Under
specific constructions of the manifold and projection distance, we show that
the intervention strategy can be computed efficiently by solving a smooth
optimization problem. The empirical results, benchmarked on multiple base
models, demonstrate that FLORAIN consistently outperforms several baseline
methods in enhancing model truthfulness and quality across generation and
multiple-choice tasks.
|
2502.04045
|
Comparing privacy notions for protection against reconstruction attacks
in machine learning
|
cs.LG cs.CR cs.IT math.IT
|
Within the machine learning community, reconstruction attacks are a principal
concern and have been identified even in federated learning (FL), which was
designed with privacy preservation in mind. In response to these threats, the
privacy community recommends the use of differential privacy (DP) in the
stochastic gradient descent algorithm, termed DP-SGD. However, the
proliferation of variants of DP in recent years, such as metric privacy, has
made it challenging to conduct a fair comparison between
different mechanisms due to the different meanings of the privacy parameters
$\epsilon$ and $\delta$ across different variants. Thus, interpreting the
practical implications of $\epsilon$ and $\delta$ in the FL context and amongst
variants of DP remains ambiguous. In this paper, we lay a foundational
framework for comparing mechanisms with differing notions of privacy
guarantees, namely $(\epsilon,\delta)$-DP and metric privacy. We provide two
foundational means of comparison: firstly, via the well-established
$(\epsilon,\delta)$-DP guarantees, made possible through the Rényi
differential privacy framework; and secondly, via Bayes' capacity, which we
identify as an appropriate measure for reconstruction threats.
|
2502.04048
|
Adaptive Output Feedback MPC with Guaranteed Stability and Robustness
|
eess.SY cs.SY math.OC
|
This work proposes an adaptive output feedback model predictive control (MPC)
framework for uncertain systems subject to external disturbances. In the
absence of exact knowledge about the plant parameters and complete state
measurements, the MPC optimization problem is reformulated in terms of their
estimates derived from a suitably designed robust adaptive observer. The MPC
routine returns a homothetic tube for the state estimate trajectory. Sets that
characterize the state estimation errors are then added to the homothetic tube
sections, resulting in a larger tube containing the true state trajectory. The
two-tier tube architecture provides robustness to uncertainties due to
imperfect parameter knowledge, external disturbances, and incomplete state
information. Additionally, recursive feasibility and robust exponential
stability are guaranteed and validated using a numerical example.
|
2502.04050
|
PartEdit: Fine-Grained Image Editing using Pre-Trained Diffusion Models
|
cs.CV
|
We present the first text-based image editing approach for object parts based
on pre-trained diffusion models. Diffusion-based image editing approaches
capitalize on diffusion models' deep understanding of image semantics to
perform a variety of edits. However, existing diffusion models lack sufficient
understanding of many object parts, hindering fine-grained edits requested by
users. To address this, we propose to expand the knowledge of pre-trained
diffusion models to allow them to understand various object parts, enabling
them to perform fine-grained edits. We achieve this by learning special textual
tokens that correspond to different object parts through an efficient token
optimization process. These tokens are optimized to produce reliable
localization masks at each inference step to localize the editing region.
Leveraging these masks, we design feature-blending and adaptive thresholding
strategies to execute the edits seamlessly. To evaluate our approach, we
establish a benchmark and an evaluation protocol for part editing. Experiments
show that our approach outperforms existing editing methods on all metrics and
is preferred by users 77-90% of the time in conducted user studies.
|
2502.04052
|
Decision Trees That Remember: Gradient-Based Learning of Recurrent
Decision Trees with Memory
|
cs.LG
|
Neural architectures such as Recurrent Neural Networks (RNNs), Transformers,
and State-Space Models have shown great success in handling sequential data by
learning temporal dependencies. Decision Trees (DTs), on the other hand, remain
a widely used class of models for structured tabular data but are typically not
designed to capture sequential patterns directly. Instead, DT-based approaches
for time-series data often rely on feature engineering, such as manually
incorporating lag features, which can be suboptimal for capturing complex
temporal dependencies. To address this limitation, we introduce ReMeDe Trees, a
novel recurrent DT architecture that integrates an internal memory mechanism,
similar to RNNs, to learn long-term dependencies in sequential data. Our model
learns hard, axis-aligned decision rules for both output generation and state
updates, optimizing them efficiently via gradient descent. We provide a
proof-of-concept study on synthetic benchmarks to demonstrate the effectiveness
of our approach.
|
2502.04054
|
Precision Agriculture Revolution: Integrating Digital Twins and Advanced
Crop Recommendation for Optimal Yield
|
cs.LG
|
Using a digital twin structure together with Agriculture 4.0 technologies such
as weather APIs (Application Programming Interfaces), GPS (Global Positioning
System) modules, NPK (Nitrogen, Phosphorus, and Potassium) soil sensors, and
machine learning recommendation models, we seek to revolutionize agricultural
production. In addition to providing precise crop growth
forecasts, the combination of real-time data on soil composition,
meteorological dynamics, and geographic coordinates aims to support crop
recommendation models and simulate predictive scenarios for improved water and
pesticide management.
|
2502.04055
|
Evaluating Inter-Column Logical Relationships in Synthetic Tabular Data
Generation
|
cs.LG
|
Current evaluations of synthetic tabular data mainly focus on how well joint
distributions are modeled, often overlooking the assessment of their
effectiveness in preserving realistic event sequences and coherent entity
relationships across columns. This paper proposes three evaluation metrics
designed to assess the preservation of logical relationships among columns in
synthetic tabular data. We validate these metrics by assessing the performance
of both classical and state-of-the-art generation methods on a real-world
industrial dataset. Experimental results reveal that existing methods often fail
to rigorously maintain logical consistency (e.g., hierarchical relationships in
geography or organization) and dependencies (e.g., temporal sequences or
mathematical relationships), which are crucial for preserving the fine-grained
realism of real-world tabular data. Building on these insights, this study also
discusses possible pathways to better capture logical relationships while
modeling the distribution of synthetic tabular data.
|
2502.04056
|
TQ-DiT: Efficient Time-Aware Quantization for Diffusion Transformers
|
cs.LG eess.SP
|
Diffusion transformers (DiTs) combine transformer architectures with
diffusion models. However, their computational complexity imposes significant
limitations on real-time applications and sustainability of AI systems. In this
study, we aim to enhance the computational efficiency through model
quantization, which represents the weights and activation values with lower
precision. Multi-region quantization (MRQ) is introduced to address the
asymmetric distribution of network values in DiT blocks by allocating two
scaling parameters to sub-regions. Additionally, time-grouping quantization
(TGQ) is proposed to reduce quantization error caused by temporal variation in
activations. The experimental results show that the proposed algorithm achieves
performance comparable to the original full-precision model with only a 0.29
increase in FID at W8A8. Furthermore, it outperforms other baselines at W6A6,
thereby confirming its suitability for low-bit quantization. These results
highlight the potential of our method to enable efficient real-time generative
models.
|
2502.04057
|
Smart IoT Security: Lightweight Machine Learning Techniques for
Multi-Class Attack Detection in IoT Networks
|
cs.LG
|
In the growing landscape of the Internet of Things (IoT), it is vital that
networks are secured against a range of cyber threats. Building on a strong
machine learning framework, this study proposes novel lightweight ensemble
approaches for improving multi-class attack detection for IoT devices.
Using the large CICIoT 2023 dataset with 34 attack types distributed amongst 10
attack categories, we systematically evaluated the performance of a wide
variety of modern machine learning methods with the aim of establishing the
best-performing algorithmic choice to secure IoT applications. In particular,
we explore approaches based on ML classifiers to tackle the challenges posed
by the heterogeneous nature of attack vectors in
IoT environments. The method that performed best was the Decision Tree, with an
accuracy of 99.56% and an F1 score of 99.62%, showing that this model is
capable of accurately and reliably detecting threats. The Random Forest model
was the next best performer, with an accuracy of 98.22% and an F1 score of
98.24%, suggesting that ML methods are quite effective with high-dimensional
data. Our results highlight the potential for using ML
classifiers in bolstering security for IoT devices and also serve as
motivations for future investigations targeting scalable, keystroke-based
attack detection systems. We believe that our method provides a new path to
develop complex machine learning algorithms for low-resource IoT devices,
balancing both accuracy and time efficiency needs. In summary, these
contributions enrich the state of the art of the IoT security literature,
laying down solid ground and guidelines for the deployment of smart, adaptive
security in IoT settings.
|
2502.04058
|
Strategic Learning with Local Explanations as Feedback
|
cs.AI
|
We investigate algorithmic decision problems where agents can respond
strategically to the decision maker's (DM) models. The demand for clear and
actionable explanations from DMs to (potentially strategic) agents continues to
rise. While prior work often treats explanations as full model disclosures,
explanations in practice might convey only partial information, which can lead
to misinterpretations and harmful responses. When full disclosure of the
predictive model is neither feasible nor desirable, a key open question is how
DMs can use explanations to maximise their utility without compromising agent
welfare. In this work, we explore well-known local and global explanation
methods, and establish a necessary condition to prevent explanations from
misleading agents into self-harming actions. Moreover, with conditional
homogeneity, we establish that action recommendation (AR)-based explanations
are sufficient for non-harmful responses, akin to the revelation principle in
information design. To operationalise AR-based explanations, we propose a
simple algorithm to jointly optimise the predictive model and AR policy to
balance DM outcomes with agent welfare. Our empirical results demonstrate the
benefits of this approach as a more refined strategy for safe and effective
partial model disclosure in algorithmic decision-making.
|
2502.04062
|
On Sufficient Richness for Linear Time-Invariant Systems
|
eess.SY cs.SY
|
Persistent excitation (PE) is a necessary and sufficient condition for
uniform exponential parameter convergence in several adaptive, identification,
and learning schemes. In this article, we consider, in the context of
multi-input linear time-invariant (LTI) systems, the problem of guaranteeing PE
of commonly-used regressors by applying a sufficiently rich (SR) input signal.
Exploiting the analogies between time shifts and time derivatives, we state
simple necessary and sufficient PE conditions for the discrete- and
continuous-time frameworks. Moreover, we characterize the shape of the set of
SR input signals for both single-input and multi-input systems. Finally, we
show with a numerical example that the derived conditions are tight and cannot
be improved without including additional knowledge of the considered LTI
system.
|
2502.04064
|
Artificial Intelligence for Multi-Class Wildlife Classification in Automatic
Photographs Used in Scientific Research
|
cs.CV
|
The management of natural environments, whether for conservation or
production, requires a deep understanding of wildlife. The number, location,
and behavior of wild animals are among the main subjects of study in ecology
and wildlife research. The use of camera traps offers the opportunity to
quickly collect large quantities of photographs that capture wildlife in its
natural habitat, avoiding factors that could alter their behavior. In Tierra
del Fuego, Argentina, research is being conducted on forest use by different
herbivores (guanacos, cows, sheep) to optimize management and protect these
natural ecosystems. Although camera traps allow for the collection of millions
of images, interpreting such photographs presents a scalability challenge for
manual processing. As a result, much of the valuable knowledge stored in these
vast data repositories remains untapped. Neural Networks and Deep Learning are
areas of study within Artificial Intelligence. Over the past decade, these two
disciplines have made significant contributions to image recognition on a
global scale. Ecological and wildlife conservation studies can be combined with
these new technologies to extract important information from the photographs
obtained by camera traps, contributing to the understanding of various natural
processes and improving the management of the involved wild areas. Our project
aims to develop neural network models to classify animal species in photographs
taken with camera traps, addressing large-scale challenges in scientific
research.
|
2502.04066
|
Predicting Large Language Model Capabilities on Closed-Book QA Tasks
Using Only Information Available Prior to Training
|
cs.CL cs.AI
|
The GPT-4 technical report from OpenAI suggests that model performance on
specific tasks can be predicted prior to training, though methodologies remain
unspecified. This approach is crucial for optimizing resource allocation and
ensuring data alignment with target tasks. To achieve this vision, we focus on
predicting performance on Closed-book Question Answering (CBQA) tasks, which
are closely tied to pre-training data and knowledge retention. We address three
major challenges: 1) mastering the entire pre-training process, especially data
construction; 2) evaluating a model's knowledge retention; and 3) predicting
task-specific knowledge retention using only information available prior to
training. To tackle these challenges, we pre-train three large language models
(i.e., 1.6B, 7B, and 13B) using 560k dollars and 520k GPU hours. We analyze the
pre-training data with knowledge triples and assess knowledge retention using
established methods. Additionally, we introduce the SMI metric, an
information-theoretic measure that quantifies the relationship between
pre-training data, model size, and task-specific knowledge retention. Our
experiments reveal a strong linear correlation ($\text{R}^2 > 0.84$) between
the SMI metric and the model's accuracy on CBQA tasks across models of varying
sizes (i.e., 1.1B, 1.6B, 7B, and 13B). The dataset, model, and code are
available at https://github.com/yuhui1038/SMI.
|
2502.04074
|
3D Prior is All You Need: Cross-Task Few-shot 2D Gaze Estimation
|
cs.CV
|
3D and 2D gaze estimation share the fundamental objective of capturing eye
movements but are traditionally treated as two distinct research domains. In
this paper, we introduce a novel cross-task few-shot 2D gaze estimation
approach, aiming to adapt a pre-trained 3D gaze estimation network for 2D gaze
prediction on unseen devices using only a few training images. This task is
highly challenging due to the domain gap between 3D and 2D gaze, unknown screen
poses, and limited training data. To address these challenges, we propose a
novel framework that bridges the gap between 3D and 2D gaze. Our framework
contains a physics-based differentiable projection module with learnable
parameters to model screen poses and project 3D gaze into 2D gaze. The
framework is fully differentiable and can integrate into existing 3D gaze
networks without modifying their original architecture. Additionally, we
introduce a dynamic pseudo-labelling strategy for flipped images, which is
particularly challenging for 2D labels due to unknown screen poses. To overcome
this, we reverse the projection process by converting 2D labels to 3D space,
where flipping is performed. Notably, this 3D space is not aligned with the
camera coordinate system, so we learn a dynamic transformation matrix to
compensate for this misalignment. We evaluate our method on MPIIGaze, EVE, and
GazeCapture datasets, collected respectively on laptops, desktop computers, and
mobile devices. The superior performance highlights the effectiveness of our
approach, and demonstrates its strong potential for real-world applications.
|
2502.04075
|
Controllable Emotion Generation with Emotion Vectors
|
cs.CL
|
In recent years, technologies based on large-scale language models (LLMs)
have made remarkable progress in many fields, especially in customer service,
content creation, and embodied intelligence, showing broad application
potential. However, the ability of LLMs to express emotions with proper tone,
timing, and in both direct and indirect forms remains insufficient despite its
importance. Few works have studied how to build controllable emotional
expression capabilities into LLMs. In this work, we propose a method for
emotion expression output by LLMs that is universal, highly flexible, and well
controllable, as demonstrated by extensive experiments and verifications. This
method has broad application prospects in fields involving emotions output by
LLMs, such as intelligent customer service, literary creation, and home
companion robots. The extensive experiments on various LLMs with different
model-scales and architectures prove the versatility and the effectiveness of
the proposed method.
|
2502.04076
|
Content-Rich AIGC Video Quality Assessment via Intricate Text Alignment
and Motion-Aware Consistency
|
cs.CV
|
The advent of next-generation video generation models like \textit{Sora}
poses challenges for AI-generated content (AIGC) video quality assessment
(VQA). These models substantially mitigate the flickering artifacts prevalent
in prior models, support longer and more complex text prompts, and generate
longer videos with intricate, diverse motion patterns. Conventional VQA methods designed for
simple text and basic motion patterns struggle to evaluate these content-rich
videos. To this end, we propose \textbf{CRAVE}
(\underline{C}ontent-\underline{R}ich \underline{A}IGC \underline{V}ideo
\underline{E}valuator), specifically for the evaluation of Sora-era AIGC
videos. CRAVE introduces multi-granularity text-temporal fusion that aligns
long-form complex textual semantics with video dynamics. Additionally, CRAVE
leverages hybrid motion-fidelity modeling to assess temporal artifacts.
Furthermore, given the straightforward prompts and content in current AIGC VQA
datasets, we introduce \textbf{CRAVE-DB}, a benchmark featuring content-rich
videos from next-generation models paired with elaborate prompts. Extensive
experiments have shown that the proposed CRAVE achieves excellent results on
multiple AIGC VQA benchmarks, demonstrating a high degree of alignment with
human perception. All data and code will be publicly available at
https://github.com/littlespray/CRAVE.
|
2502.04077
|
AttentionPredictor: Temporal Pattern Matters for Efficient LLM Inference
|
cs.CL cs.LG
|
With the development of large language models (LLMs), efficient inference
through Key-Value (KV) cache compression has attracted considerable attention,
especially for long-context generation. To compress the KV cache, recent
methods identify critical KV tokens through heuristic ranking with attention
scores. However, these methods often struggle to accurately determine critical
tokens as they neglect the \textit{temporal patterns} in attention scores,
resulting in a noticeable degradation in LLM performance. To address this
challenge, we propose AttentionPredictor, the first learning-based approach to
critical token identification. Specifically, AttentionPredictor learns
a lightweight convolution model to capture spatiotemporal patterns and predict
the next-token attention score. An appealing feature of AttentionPredictor is
that it accurately predicts the attention score while consuming negligible
memory. Moreover, we propose a cross-token critical cache prefetching framework
that hides the token estimation time overhead to accelerate the decoding stage.
By retaining most of the attention information, AttentionPredictor achieves
16$\times$ KV cache compression with comparable LLM performance, significantly
outperforming the state-of-the-art.
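The paper's model and training data are not detailed in the abstract, so the following is only a hedged toy illustration of the underlying idea: fit a lightweight shared causal temporal filter (a tiny 1-D convolution over time) to past per-token attention scores, then use the predicted next-step scores to select the critical KV tokens. The synthetic score model and filter width are assumptions of this sketch.

```python
import numpy as np

T, N, W = 64, 32, 4          # past steps, cached tokens, filter width
base = np.linspace(0.2, 1.0, N)

def score(t):
    # Synthetic per-token attention scores with smooth temporal drift --
    # the kind of temporal pattern the predictor is meant to exploit.
    return base * (0.5 + 0.5 * np.cos(0.1 * t + np.arange(N)))

scores = np.stack([score(t) for t in range(T)])

# Fit one shared causal FIR filter by least squares: predict scores[t]
# from the previous W steps, pooling rows over all tokens and times.
X = np.concatenate([scores[t - W:t].T for t in range(W, T)])
y = np.concatenate([scores[t] for t in range(W, T)])
w, *_ = np.linalg.lstsq(X, y, rcond=None)

# Predict next-step scores; keep the top-k tokens as the critical cache.
pred = scores[-W:].T @ w
critical = set(np.argsort(pred)[-8:])
print(np.max(np.abs(pred - score(T))))   # prediction error on the next step
```

Because the toy scores follow a low-order linear recurrence, the fitted filter predicts the next step essentially exactly; real attention traces are noisier, which is why the paper pairs prediction with prefetching.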
|
2502.04079
|
DEALing with Image Reconstruction: Deep Attentive Least Squares
|
eess.IV cs.CV cs.LG
|
State-of-the-art image reconstruction often relies on complex, highly
parameterized deep architectures. We propose an alternative: a data-driven
reconstruction method inspired by the classic Tikhonov regularization. Our
approach iteratively refines intermediate reconstructions by solving a sequence
of quadratic problems. These updates have two key components: (i) learned
filters to extract salient image features, and (ii) an attention mechanism that
locally adjusts the penalty of filter responses. Our method achieves
performance on par with leading plug-and-play and learned regularizer
approaches while offering interpretability, robustness, and convergent
behavior. In effect, we bridge traditional regularization and deep learning
with a principled reconstruction approach.
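A minimal 1-D numpy sketch of this family of updates, under stated assumptions: the "learned filters" are replaced by a single fixed finite-difference filter, the attention mechanism by a hand-set weight that damps the penalty where filter responses are strong (likely true edges), and the forward operator by the identity (denoising). Each iteration solves one quadratic problem, as in the abstract.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 128

# Piecewise-constant signal + noise (1-D stand-in for an image).
x_true = np.concatenate([np.zeros(n // 2), np.ones(n // 2)])
y = x_true + 0.1 * rng.standard_normal(n)

# "Filter bank": a single first-order finite-difference filter.
L = np.eye(n, k=1)[:-1] - np.eye(n)[:-1]

x = y.copy()
lam = 5.0
for _ in range(10):
    # Attention-like weights: lower the penalty where the current
    # reconstruction has strong filter responses (edges survive).
    r = L @ x
    w = 1.0 / (1.0 + (r / 0.1) ** 2)
    # Quadratic subproblem: argmin |x - y|^2 + lam * sum_i w_i (Lx)_i^2
    A = np.eye(n) + lam * L.T @ (w[:, None] * L)
    x = np.linalg.solve(A, y)

print(np.mean((x - x_true) ** 2) < np.mean((y - x_true) ** 2))
```

Each update is a Tikhonov-type linear solve; only the locally adjusted penalty weights change between iterations.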
|
2502.04083
|
Automatic quantification of breast cancer biomarkers from multiple
18F-FDG PET image segmentation
|
cs.CV cs.AI
|
Neoadjuvant chemotherapy (NAC) has become a standard clinical practice for
tumor downsizing in breast cancer with 18F-FDG Positron Emission Tomography
(PET). Our work aims to leverage PET imaging for the segmentation of breast
lesions. The focus is on developing an automated system that accurately
segments primary tumor regions and extracts key biomarkers from these areas to
provide insights into the evolution of breast cancer following the first course
of NAC. 243 baseline 18F-FDG PET scans (PET_Bl) and 180 follow-up 18F-FDG PET
scans (PET_Fu) were acquired before and after the first course of NAC,
respectively. Firstly, a deep learning-based breast tumor segmentation method
was developed. The optimal baseline model (model trained on baseline exams) was
fine-tuned on 15 follow-up exams and adapted using active learning to segment
tumor areas in PET_Fu. The pipeline computes biomarkers such as maximum
standardized uptake value (SUVmax), metabolic tumor volume (MTV), and total
lesion glycolysis (TLG) to evaluate tumor evolution between PET_Fu and PET_Bl.
Quality control measures were employed to exclude aberrant outliers. The nnUNet
deep learning model performed best for tumor segmentation on PET_Bl, achieving a
Dice similarity coefficient (DSC) of 0.89 and a Hausdorff distance (HD) of 3.52
mm. After fine-tuning, the model demonstrated a DSC of 0.78 and an HD of 4.95 mm
on PET_Fu exams. Biomarker analysis revealed very strong correlations, for every
biomarker, between manually segmented and automatically predicted regions.
The significant average decreases of SUVmax, MTV, and TLG were 5.22, 11.79 cm3,
and 19.23 cm3, respectively. The presented approach demonstrates an automated
system for breast tumor segmentation from 18F-FDG PET. Thanks to the extracted
biomarkers, our method enables the automatic assessment of cancer progression.
|
2502.04088
|
Quantifying imperfect cognition via achieved information gain
|
cs.IT math.IT
|
Cognition, the process of information processing in the form of inference,
communication, and memorization, is the central activity of any intelligence.
Its physical realization in a brain, computer, or in any other intelligent
system requires resources like time, energy, memory, bandwidth, money, and
others. Due to limited resources, many real world intelligent systems perform
only imperfect cognition. For understanding the trade-off between accuracy and
resource investments in existing systems, e.g. in biology, as well as for the
resource-aware optimal design of information processing systems, like computer
algorithms and artificial neural networks, a quantification of information
obtained in an imperfect cognitive operation is desirable. To this end, we
propose the concept of achieved information gain (AIG) of a belief update,
which is given by the amount of information obtained by updating from the
initial knowledge state to the ideal one, minus the amount a change from the
imperfect to the ideal state would yield. AIG has many properties desired for
quantifying imperfect cognition. The ratio of achieved to ideally obtainable
information measures cognitive fidelity and that of AIG to the necessary
cognitive effort measures cognitive efficiency. We provide an axiomatic
derivation of AIG, illustrate its application at common scenarios of posterior
inaccuracies, and discuss the implication of cognitive efficiency for
sustainable resource allocation in computational inference.
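The abstract defines AIG verbally; as a hedged sketch, if the information gained in a belief update is measured by KL divergence (an assumption of this illustration, not stated in the abstract), the definition reads AIG = D_KL(ideal || initial) - D_KL(ideal || imperfect), with cognitive fidelity the ratio of achieved to ideally obtainable information.

```python
import numpy as np

def kl(p, q):
    """D_KL(p || q) for discrete distributions."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

def aig(initial, imperfect, ideal):
    # Information gained reaching `ideal` from `initial`, minus the gain
    # a correction from `imperfect` to `ideal` would still yield.
    return kl(ideal, initial) - kl(ideal, imperfect)

prior     = [0.25, 0.25, 0.25, 0.25]
ideal     = [0.70, 0.20, 0.05, 0.05]   # exact posterior
imperfect = [0.55, 0.25, 0.10, 0.10]   # cheap, approximate update

gain = aig(prior, imperfect, ideal)
fidelity = gain / kl(ideal, prior)     # achieved / ideally obtainable
print(gain, fidelity)
```

The approximate update here captures most, but not all, of the ideally obtainable information, so its fidelity lies strictly between 0 and 1.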
|
2502.04094
|
Soft and Highly-Integrated Optical Fiber Bending Sensors for
Proprioception in Multi-Material 3D Printed Fingers
|
cs.RO
|
Accurate shape sensing, only achievable through distributed proprioception,
is a key requirement for closed-loop control of soft robots. Low-cost power
efficient optoelectronic sensors manufactured from flexible materials represent
a natural choice as they can cope with the large deformations of soft robots
without loss of performance. However, existing integration approaches are
cumbersome and require manual steps and complex assembly. We propose a
semi-automated printing process where plastic optical fibers are embedded with
readout electronics in 3D printed flexures. The fibers become locked in place
and the readout electronics remain optically coupled to them while the flexures
undergo large bending deformations, creating a repeatable, monolithically
manufactured bending transducer with only 10 minutes required in total for the
manual embedding steps. We demonstrate the process by manufacturing
multi-material 3D printed fingers and extensively evaluating the performance of
each proprioceptive joint. The sensors achieve 70% linearity and 4.81{\deg} RMS
error on average. Furthermore, the distributed architecture allows for
maintaining an average fingertip position estimation accuracy of 12 mm in the
presence of external static forces. To demonstrate the potential of the
distributed sensor architecture in robotics applications, we build a
data-driven model independent of actuation feedback to detect contact with
objects in the environment.
|
2502.04095
|
LLMs to Support a Domain Specific Knowledge Assistant
|
cs.CL cs.AI
|
This work presents a custom approach to developing a domain specific
knowledge assistant for sustainability reporting using the International
Financial Reporting Standards (IFRS). In this domain, there is no publicly
available question-answer dataset, which has impeded the development of a
high-quality chatbot to support companies with IFRS reporting. The two key
contributions of this project therefore are:
(1) A high-quality synthetic question-answer (QA) dataset based on IFRS
sustainability standards, created using a novel generation and evaluation
pipeline leveraging Large Language Models (LLMs). This comprises 1,063 diverse
QA pairs that address a wide spectrum of potential user queries in
sustainability reporting. Various LLM-based techniques are employed to create
the dataset, including chain-of-thought reasoning and few-shot prompting. A
custom evaluation framework is developed to assess question and answer quality
across multiple dimensions, including faithfulness, relevance, and domain
specificity. The dataset achieves an average score of 8.16 out of 10 across
these metrics.
(2) Two architectures for question-answering in the sustainability reporting
domain - a RAG pipeline and a fully LLM-based pipeline. The architectures are
developed by experimenting, fine-tuning, and training on the QA dataset. The
final pipelines feature an LLM fine-tuned on domain specific data and an
industry classification component to improve the handling of complex queries.
The RAG architecture achieves an accuracy of 85.32% on single-industry and
72.15% on cross-industry multiple-choice questions, outperforming the baseline
approach by 4.67 and 19.21 percentage points, respectively. The LLM-based
pipeline achieves an accuracy of 93.45% on single-industry and 80.30% on
cross-industry multiple-choice questions, an improvement of 12.80 and 27.36
percentage points over the baseline, respectively.
|
2502.04098
|
Efficient Few-Shot Continual Learning in Vision-Language Models
|
cs.CV cs.AI
|
Vision-language models (VLMs) excel in tasks such as visual question
answering and image captioning. However, VLMs are often limited by their use of
pretrained image encoders, like CLIP, leading to image understanding errors
that hinder overall performance. On top of that, real-world applications often
require the model to be continuously adapted as new and often limited data
continuously arrive. To address this, we propose LoRSU (Low-Rank Adaptation
with Structured Updates), a robust and computationally efficient method for
selectively updating image encoders within VLMs. LoRSU introduces structured
and localized parameter updates, effectively correcting performance on
previously error-prone data while preserving the model's general robustness.
Our approach leverages theoretical insights to identify and update only the
most critical parameters, achieving significant resource efficiency.
Specifically, we demonstrate that LoRSU reduces computational overhead by over
25x compared to full VLM updates, without sacrificing performance. Experimental
results on VQA tasks in the few-shot continual learning setting validate
LoRSU's scalability, efficiency, and effectiveness, making it a compelling
solution for image encoder adaptation in resource-constrained environments.
|
2502.04101
|
Safe Quadrotor Navigation using Composite Control Barrier Functions
|
cs.RO
|
This paper introduces a safety filter to ensure collision avoidance for
multirotor aerial robots. The proposed formalism leverages a single Composite
Control Barrier Function from all position constraints acting on a third-order
nonlinear representation of the robot's dynamics. We analyze the recursive
feasibility of the safety filter under the composite constraint and demonstrate
that the infeasible set is negligible. The proposed method allows computational
scalability against thousands of constraints and, thus, complex scenes with
numerous obstacles. We experimentally demonstrate its ability to guarantee the
safety of a quadrotor with an onboard LiDAR, operating in both indoor and
outdoor cluttered environments against both naive and adversarial nominal
policies.
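The paper's contribution is composing thousands of position constraints into one composite CBF on third-order dynamics; as a hedged, much simpler sketch of the underlying safety-filter idea, consider a single constraint on single-integrator dynamics, where the filter's QP has a closed form (projection of the nominal input onto the safe half-space). The obstacle, gain, and dynamics below are illustrative assumptions.

```python
import numpy as np

def cbf_filter(u_nom, grad_h, alpha_h):
    """Minimally modify u_nom so that grad_h . u + alpha * h >= 0
    (CBF condition for single-integrator dynamics x_dot = u).
    With one affine constraint, the QP solution is a projection."""
    slack = grad_h @ u_nom + alpha_h
    if slack >= 0:
        return u_nom                      # nominal input already safe
    return u_nom - (slack / (grad_h @ grad_h)) * grad_h

# Circular obstacle at the origin, radius 1: h(x) = |x|^2 - 1 >= 0.
x = np.array([1.5, 0.0])
h = x @ x - 1.0
grad_h = 2 * x
alpha = 1.0

u_nom = np.array([-2.0, 0.0])            # nominal: fly straight at obstacle
u = cbf_filter(u_nom, grad_h, alpha * h)
print(u, grad_h @ u + alpha * h)         # constraint active, holds with equality
```

With many constraints (as in the paper) the projection no longer has a closed form, which is one motivation for collapsing them into a single composite barrier.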
|
2502.04103
|
VTutor: An Open-Source SDK for Generative AI-Powered Animated
Pedagogical Agents with Multi-Media Output
|
cs.HC cs.AI cs.SE
|
The rapid evolution of large language models (LLMs) has transformed
human-computer interaction (HCI), but interaction with LLMs currently remains
mainly text-based, while multi-modal approaches are under-explored. This paper
introduces VTutor, an open-source Software Development Kit (SDK) that combines
generative AI with advanced animation technologies to create engaging,
adaptable, and realistic animated pedagogical agents (APAs) for human-AI
multi-media interactions. VTutor leverages LLMs for real-time personalized
feedback, advanced lip synchronization for natural speech alignment, and WebGL
rendering for seamless web integration. Supporting various 2D and 3D character
models, VTutor enables researchers and developers to design emotionally
resonant, contextually adaptive learning agents. This toolkit enhances learner
engagement, feedback receptivity, and human-AI interaction while promoting
trustworthy AI principles in education. VTutor sets a new standard for
next-generation APAs, offering an accessible, scalable solution for fostering
meaningful and immersive human-AI interaction experiences. The VTutor project
is open-sourced and welcomes community-driven contributions and showcases.
|
2502.04110
|
Ancient Greek Technology: An Immersive Learning Use Case Described Using
a Co-Intelligent Custom ChatGPT Assistant
|
cs.HC cs.AI
|
Achieving consistency in immersive learning case descriptions is essential
but challenging due to variations in research focus, methodology, and
researchers' background. We address these challenges by leveraging the
Immersive Learning Case Sheet (ILCS), a methodological instrument to
standardize case descriptions, that we applied to an immersive learning case on
ancient Greek technology in VRChat. Research team members had differing levels
of familiarity with the ILCS and the case content, so we developed a custom
ChatGPT assistant to facilitate consistent terminology and process alignment
across the team. This paper constitutes an example of how structured case
reports can be a novel contribution to immersive learning literature. Our
findings demonstrate how the ILCS supports structured reflection and
interpretation of the case. Further, we report that the use of a ChatGPT
assistant significantly supports the coherence and quality of the team
members' development of the final ILCS. This highlights the potential of employing
AI-driven tools to enhance collaboration and standardization of research
practices in qualitative educational research. However, we also discuss the
limitations and challenges, including reliance on AI for interpretive tasks and
managing varied levels of expertise within the team. This study thus provides
insights into the practical application of AI in standardizing immersive
learning research processes.
|
2502.04111
|
Adaptive Margin Contrastive Learning for Ambiguity-aware 3D Semantic
Segmentation
|
cs.CV
|
In this paper, we propose an adaptive margin contrastive learning method for
3D point cloud semantic segmentation, namely AMContrast3D. Most existing
methods use equally penalized objectives, which ignore per-point ambiguities
and the less discriminative features stemming from transition regions. However, as
highly ambiguous points may be indistinguishable even for humans, their
manually annotated labels are less reliable, and hard constraints over these
points would lead to sub-optimal models. To address this, we design adaptive
objectives for individual points based on their ambiguity levels, aiming to
ensure the correctness of low-ambiguity points while allowing mistakes for
high-ambiguity points. Specifically, we first estimate ambiguities based on
position embeddings. Then, we develop a margin generator to shift decision
boundaries for contrastive feature embeddings, so margins are narrowed due to
increasing ambiguities with even negative margins for extremely high-ambiguity
points. Experimental results on large-scale datasets, S3DIS and ScanNet,
demonstrate that our method outperforms state-of-the-art methods.
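A toy numpy sketch of the margin-generation idea described above: the per-point margin shrinks as ambiguity grows and becomes negative for extremely ambiguous points, so mistakes there are tolerated. The linear margin schedule and the hinge-style contrastive loss are assumptions of this sketch, not the paper's exact formulation.

```python
import numpy as np

def adaptive_margin(ambiguity, m_max=0.5):
    """Margin shrinks linearly with ambiguity in [0, 1]; points with
    ambiguity > 0.5 get a negative margin (mistakes tolerated)."""
    return m_max * (1.0 - 2.0 * np.asarray(ambiguity))

def margin_contrastive_loss(sim_pos, sim_neg, ambiguity):
    # Hinge loss: own-class similarity must beat the hardest negative
    # by the per-point margin.
    m = adaptive_margin(ambiguity)
    return np.maximum(0.0, sim_neg - sim_pos + m)

sim_pos = np.array([0.9, 0.6, 0.4])      # similarity to own-class embedding
sim_neg = np.array([0.6, 0.5, 0.5])      # similarity to hardest other class
ambiguity = np.array([0.05, 0.5, 0.95])  # e.g. estimated near transitions

print(adaptive_margin(ambiguity))        # margins: 0.45, 0.0, -0.45
print(margin_contrastive_loss(sim_pos, sim_neg, ambiguity))
```

Note the asymmetry: the confident point is still penalized despite separating its classes by 0.3, while the highly ambiguous point escapes penalty even though it is nearly confused, which is exactly the intended relaxation of hard constraints.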
|
2502.04115
|
A Neural Network-based Multi-timestep Command Governor for Nonlinear
Systems with Constraints
|
eess.SY cs.SY
|
The multi-timestep command governor (MCG) is an add-on algorithm that
enforces constraints by modifying, at each timestep, the reference command to a
pre-stabilized control system. The MCG can be interpreted as a Model-Predictive
Control scheme operating on the reference command. The implementation of MCG on
nonlinear systems carries a heavy computational burden as it requires solving a
nonlinear program with multiple decision variables at each timestep. This paper
proposes a less computationally demanding alternative, based on approximating
the MCG control law using a neural network (NN) trained on offline data.
However, since the NN output may not always be constraint-admissible due to
training errors, its output is adjusted using a sensitivity-based method. We
thus refer to the resulting control strategy as the neural network-based MCG
(NN-MCG). As validation, the proposed controller is applied as a load governor
for constraint management in an automotive fuel cell system. It is shown that
the proposed strategy is significantly more computationally efficient than the
traditional MCG, while achieving nearly identical performance if the NN is
well-trained.
|
2502.04116
|
Generative Adversarial Networks Bridging Art and Machine Intelligence
|
cs.LG cs.CV
|
Generative Adversarial Networks (GAN) have greatly influenced the development
of computer vision and artificial intelligence in the past decade and also
connected art and machine intelligence together. This book begins with a
detailed introduction to the fundamental principles and historical development
of GANs, contrasting them with traditional generative models and elucidating
the core adversarial mechanisms through illustrative Python examples. The text
systematically addresses the mathematical and theoretical underpinnings
including probability theory, statistics, and game theory providing a solid
framework for understanding the objectives, loss functions, and optimisation
challenges inherent to GAN training. Subsequent chapters review classic
variants such as Conditional GANs, DCGANs, InfoGAN, and LAPGAN before
progressing to advanced training methodologies like Wasserstein GANs, GANs with
gradient penalty, least squares GANs, and spectral normalisation techniques.
The book further examines architectural enhancements and task-specific
adaptations in generators and discriminators, showcasing practical
implementations in high resolution image generation, artistic style transfer,
video synthesis, text to image generation and other multimedia applications.
The concluding sections offer insights into emerging research trends, including
self-attention mechanisms, transformer-based generative models, and a
comparative analysis with diffusion models, thus charting promising directions
for future developments in both academic and applied settings.
|
2502.04121
|
Optimizing Perturbations for Improved Training of Machine Learning
Models
|
cs.LG cond-mat.dis-nn physics.chem-ph
|
Machine learning models have become indispensable tools in applications
across the physical sciences. Their training is often time-consuming, vastly
exceeding the inference timescales. Several protocols have been developed to
perturb the learning process and improve the training, such as shrink and
perturb, warm restarts, and stochastic resetting. For classifiers, these
perturbations have been shown to result in enhanced speedups or improved
generalization. However, the design of such perturbations is usually done
\textit{ad hoc} by intuition and trial and error. To rationally optimize
training protocols, we frame them as first-passage processes and consider their
response to perturbations. We show that if the unperturbed learning process
reaches a quasi-steady state, the response at a single perturbation frequency
can predict the behavior at a wide range of frequencies. We demonstrate that
this is the case when training a CIFAR-10 classifier using the ResNet-18 model
and use this approach to identify an optimal perturbation and frequency. Our
work allows optimization of training protocols of machine learning models using
a statistical mechanical approach.
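Of the protocols named above, shrink and perturb is the simplest to sketch: periodically scale the parameters toward zero and inject fresh noise. The toy quadratic objective, schedule, and magnitudes below are illustrative assumptions; the perturbation frequency is the knob the paper's first-passage response analysis would tune.

```python
import numpy as np

rng = np.random.default_rng(3)

def shrink_and_perturb(theta, lam=0.6, sigma=0.01):
    """Scale the parameters toward zero and add fresh noise."""
    return lam * theta + sigma * rng.standard_normal(theta.shape)

# Toy training: gradient descent on a quadratic, perturbed every `period`
# steps; the loss relaxes between perturbations and jumps at each one.
theta = rng.standard_normal(5)
target = np.ones(5)
period, lr = 50, 0.05
losses = []
for step in range(1, 401):
    grad = theta - target
    theta -= lr * grad
    if step % period == 0:
        theta = shrink_and_perturb(theta)
    losses.append(0.5 * np.sum((theta - target) ** 2))

print(losses[48], losses[49])   # just before vs. just after a perturbation
```

On a convex toy problem the perturbation only sets training back; the regimes of interest in the paper are non-convex, where such kicks can improve speed or generalization.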
|
2502.04126
|
RC Measurement Uncertainty Estimation Method for Directive Antennas and
Turntable Stirring
|
eess.SY cs.SY
|
This paper investigates measurement uncertainty in a Reverberation Chamber
(RC) within the lower FR2 bands (24.25-29.5 GHz). The study focuses on the
impact of several factors contributing to RC measurement uncertainty, including
finite sample size, polarization imbalance, and spatial non-uniformity. A
series of 24 measurements were conducted using a horn antenna, known for its
directivity in mmWave frequencies, varying antenna parameters such as height,
orientation, position on the turntable, and polarization within a predefined
chamber volume. The measurement uncertainty was evaluated by a method based on
the standardized 3GPP and CTIA approaches, incorporating uncorrelated
measurements and analyzing Pearson correlation coefficients between measurement
pairs. An analysis of variance (ANOVA) was performed on the frequency-averaged
power transfer function to identify the significance and impact of each
variable on measurement variability. Additionally, the K-factor was estimated
for each measurement set as part of the RC characterization, using an
alternative approach to account for the turntable stirring effect. The findings
highlight which variables most significantly influence measurement uncertainty,
where the antenna orientation emerges as the most significant factor for the
mmWave directive antenna setup.
|
2502.04128
|
Llasa: Scaling Train-Time and Inference-Time Compute for Llama-based
Speech Synthesis
|
eess.AS cs.AI cs.CL cs.MM cs.SD
|
Recent advances in text-based large language models (LLMs), particularly in
the GPT series and the o1 model, have demonstrated the effectiveness of scaling
both training-time and inference-time compute. However, current
state-of-the-art TTS systems leveraging LLMs are often multi-stage, requiring
separate models (e.g., diffusion models after LLM), complicating the decision
of whether to scale a particular model during training or testing. This work
makes the following contributions: First, we explore the scaling of train-time
and inference-time compute for speech synthesis. Second, we propose a simple
framework Llasa for speech synthesis that employs a single-layer vector
quantizer (VQ) codec and a single Transformer architecture to fully align with
standard LLMs such as Llama. Our experiments reveal that scaling train-time
compute for Llasa consistently improves the naturalness of synthesized speech
and enables the generation of more complex and accurate prosody patterns.
Furthermore, from the perspective of scaling inference-time compute, we employ
speech understanding models as verifiers during the search, finding that
scaling inference-time compute shifts the sampling modes toward the preferences
of specific verifiers, thereby improving emotional expressiveness, timbre
consistency, and content accuracy. In addition, we publicly release the
checkpoints and training code for our TTS models (1B, 3B, 8B) and codec model.
|
2502.04131
|
On the importance of structural identifiability for machine learning
with partially observed dynamical systems
|
cs.LG
|
The successful application of modern machine learning for time series
classification is often hampered by limitations in quality and quantity of
available training data. To overcome these limitations, domain expert
knowledge in the form of parametrised mechanistic dynamical models can be used
whenever available, and time series observations may be represented as elements
of a given class of parametrised dynamical models. This makes the
learning process interpretable and allows the modeller to deal with sparsely
and irregularly sampled data in a natural way. However, the internal processes
of a dynamical model are often only partially observed. This can lead to
ambiguity regarding which particular model realization best explains a given
time series observation. This problem is well-known in the literature, and a
dynamical model with this issue is referred to as structurally unidentifiable.
Training a classifier that incorporates knowledge about a structurally
unidentifiable dynamical model can negatively influence classification
performance. To address this issue, we employ structural identifiability
analysis to explicitly relate parameter configurations that are associated with
identical system outputs. Using the derived relations in classifier training,
we demonstrate that this method significantly improves the classifier's ability
to generalize to unseen data on a number of example models from the biomedical
domain. This effect is especially pronounced when the number of training
instances is limited. Our results demonstrate the importance of accounting for
structural identifiability, a topic that has received relatively little
attention from the machine learning community.
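A minimal illustration of structural unidentifiability and the kind of parameter relation the analysis derives (the toy model below is an assumption for illustration, not from the paper): when only the product of two parameters enters the output, distinct parameter configurations are indistinguishable from data, and reparameterizing by the identifiable combination collapses them to one point.

```python
import numpy as np

def output(a, b, t):
    # Partially observed system: only y = (a*b) * exp(-t) is measured,
    # so (a, b) and (2a, b/2) are indistinguishable from observations.
    return a * b * np.exp(-t)

t = np.linspace(0, 2, 20)
y1 = output(1.0, 6.0, t)
y2 = output(3.0, 2.0, t)    # different parameters, identical output
print(np.allclose(y1, y2))  # True

# Identifiable reparameterization: use c = a * b as the model feature
# instead of raw (a, b), so equivalent fits map to the same point.
configs = [(1.0, 6.0), (3.0, 2.0), (0.5, 12.0)]
features = {a * b for a, b in configs}
print(features)             # a single value
```

Training a classifier on raw (a, b) would scatter equivalent fits across parameter space; training on the identifiable combination removes that ambiguity, which is the effect exploited above.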
|
2502.04132
|
Transfer Learning for Covert Speech Classification Using EEG Hilbert
Envelope and Temporal Fine Structure
|
cs.LG
|
Brain-Computer Interfaces (BCIs) can decode imagined speech from neural
activity. However, these systems typically require extensive training sessions
where participants repeatedly imagine words, leading to mental fatigue and
difficulty identifying word onsets, especially when imagining
sequences of words. This paper addresses these challenges by transferring a
classifier trained on overt speech data to covert speech classification. We
used electroencephalogram (EEG) features derived from the Hilbert envelope and
temporal fine structure, and used them to train a bidirectional long-short-term
memory (BiLSTM) model for classification. Our method reduces the burden of
extensive training and achieves state-of-the-art classification accuracy:
86.44% for overt speech and 79.82% for covert speech using the overt speech
classifier.
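The two feature types named in this abstract both derive from the analytic signal; a minimal sketch on a synthetic narrowband trace (the sampling rate and signal are illustrative stand-ins for real EEG):

```python
import numpy as np
from scipy.signal import hilbert

# Decompose a single channel into its Hilbert envelope and temporal fine
# structure (TFS) via the analytic signal. The 256 Hz rate and the synthetic
# amplitude-modulated trace below are assumptions, not the paper's data.
fs = 256
t = np.arange(0, 2.0, 1 / fs)
eeg = np.sin(2 * np.pi * 10 * t) * (1 + 0.5 * np.sin(2 * np.pi * 1 * t))

analytic = hilbert(eeg)
envelope = np.abs(analytic)        # slow amplitude modulation
tfs = np.cos(np.angle(analytic))   # fast carrier / fine structure

# Envelope times TFS reconstructs the narrowband signal, confirming the
# decomposition is lossless for this band.
reconstruction = envelope * tfs
assert np.max(np.abs(reconstruction - eeg)) < 1e-6
```

Per-channel envelope and TFS sequences like these are what a BiLSTM would consume as parallel time series.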
|
2502.04134
|
The Order Effect: Investigating Prompt Sensitivity in Closed-Source LLMs
|
cs.CL
|
As large language models (LLMs) become integral to diverse applications,
ensuring their reliability under varying input conditions is crucial. One key
issue affecting this reliability is order sensitivity, wherein slight
variations in input arrangement can lead to inconsistent or biased outputs.
Although recent advances have reduced this sensitivity, the problem remains
unresolved. This paper investigates the extent of order sensitivity in
closed-source LLMs by conducting experiments across multiple tasks, including
paraphrasing, relevance judgment, and multiple-choice questions. Our results
show that input order significantly affects performance across tasks, with
shuffled inputs leading to measurable declines in output accuracy. Few-shot
prompting demonstrates mixed effectiveness: it offers partial mitigation but
fails to fully resolve the problem. These findings highlight
persistent risks, particularly in high-stakes applications, and point to the
need for more robust LLMs or improved input-handling techniques in future
development.
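A measurement harness for the order sensitivity studied here can be sketched by presenting one multiple-choice question under every option ordering; `ask_model` below is a hypothetical stub standing in for a real closed-source LLM call:

```python
import itertools

# Probe order sensitivity: ask the same question under every option ordering
# and count distinct answers. A robust model yields one answer regardless of
# order; more than one indicates position bias.
def ask_model(question, options):
    # Hypothetical stub of a maximally position-biased model that always
    # picks the first option; replace with a real LLM API call in practice.
    return options[0]

question = "Which planet is largest?"
options = ["Jupiter", "Mars", "Venus"]

answers = set()
for perm in itertools.permutations(options):
    answers.add(ask_model(question, list(perm)))

order_sensitive = len(answers) > 1
print(order_sensitive)  # True for this deliberately biased stub
```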
|
2502.04139
|
Beyond the Final Layer: Hierarchical Query Fusion Transformer with
Agent-Interpolation Initialization for 3D Instance Segmentation
|
cs.CV
|
3D instance segmentation aims to predict a set of object instances in a scene
and represent them as binary foreground masks with corresponding semantic
labels. Currently, transformer-based methods are gaining increasing attention
due to their elegant pipelines, reduced manual selection of geometric
properties, and superior performance. However, transformer-based methods fail
to simultaneously maintain strong position and content information during query
initialization. Additionally, because supervision is applied at each decoder
layer, object instances can disappear as the layers deepen. To
overcome these hurdles, we introduce Beyond the Final Layer: Hierarchical Query
Fusion Transformer with Agent-Interpolation Initialization for 3D Instance
Segmentation (BFL). Specifically, an Agent-Interpolation Initialization Module
is designed to generate resilient queries capable of achieving a balance
between foreground coverage and content learning. Additionally, a Hierarchical
Query Fusion Decoder is designed to retain low overlap queries, mitigating the
decrease in recall with the deepening of layers. Extensive experiments on
ScanNetV2, ScanNet200, ScanNet++ and S3DIS datasets demonstrate the superior
performance of BFL.
|
2502.04140
|
Synthetic Datasets for Machine Learning on Spatio-Temporal Graphs using
PDEs
|
cs.LG cs.AI
|
Many physical processes can be expressed through partial differential
equations (PDEs). Real-world measurements of such processes are often collected
at irregularly distributed points in space, which can be effectively
represented as graphs; however, there are currently only a few existing
datasets. Our work aims to make advances in PDE modeling
accessible to the temporal graph machine learning community and to address
the data scarcity problem by creating and using synthetic datasets based on
PDEs to support spatio-temporal graph modeling across different applications.
More precisely, we showcase three equations to model different types of
disasters and hazards in the fields of epidemiology, atmospheric particles, and
tsunami waves. Further, we show how such created datasets can be used by
benchmarking several machine learning models on the epidemiological dataset.
Additionally, we show how pre-training on this dataset can improve model
performance on real-world epidemiological data. The presented methods enable
others to create datasets and benchmarks customized to individual requirements.
The source code for our methodology and the three created datasets can be found
on https://github.com/github-usr-ano/Temporal_Graph_Data_PDEs.
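The dataset-generation idea can be sketched end to end with a diffusion PDE discretized on an irregular point cloud; the k-NN graph construction, diffusion coefficient, and step sizes below are illustrative assumptions, not the paper's setup:

```python
import numpy as np

# Generate a spatio-temporal graph dataset from the diffusion PDE
# u_t = c * Laplacian(u), discretized on irregular points via a graph
# Laplacian and explicit Euler time stepping.
rng = np.random.default_rng(0)
n = 30
pos = rng.random((n, 2))                 # irregular sensor locations

# Symmetrized k-nearest-neighbour adjacency as a crude spatial discretization
d = np.linalg.norm(pos[:, None] - pos[None, :], axis=-1)
k = 4
A = np.zeros((n, n))
for i in range(n):
    for j in np.argsort(d[i])[1:k + 1]:
        A[i, j] = A[j, i] = 1.0
L = np.diag(A.sum(1)) - A                # combinatorial graph Laplacian

u = rng.random(n)                        # initial field at the nodes
c, dt, steps = 0.1, 0.05, 100
snapshots = [u.copy()]
for _ in range(steps):
    u = u - dt * c * (L @ u)             # explicit Euler diffusion step
    snapshots.append(u.copy())

data = np.stack(snapshots)               # (steps + 1, n) node time series
# Diffusion smooths the field, so node-wise variance shrinks over time.
assert data[-1].var() < data[0].var()
```

Stacking such snapshot matrices, one per simulated scenario, yields exactly the kind of spatio-temporal graph training data the abstract describes.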
|
2502.04141
|
Behavioral Entropy-Guided Dataset Generation for Offline Reinforcement
Learning
|
cs.LG
|
Entropy-based objectives are widely used to perform state space exploration
in reinforcement learning (RL) and dataset generation for offline RL.
Behavioral entropy (BE), a rigorous generalization of classical entropies that
incorporates cognitive and perceptual biases of agents, was recently proposed
for discrete settings and shown to be a promising metric for robotic
exploration problems. In this work, we propose using BE as a principled
exploration objective for systematically generating datasets that provide
diverse state space coverage in complex, continuous, potentially
high-dimensional domains. To achieve this, we extend the notion of BE to
continuous settings, derive tractable $k$-nearest neighbor estimators, provide
theoretical guarantees for these estimators, and develop practical reward
functions that can be used with standard RL methods to learn BE-maximizing
policies. Using standard MuJoCo environments, we experimentally compare the
performance of offline RL algorithms for a variety of downstream tasks on
datasets generated using BE, Rényi, and Shannon entropy-maximizing
policies, as well as the SMM and RND algorithms. We find that offline RL
algorithms trained on datasets collected using BE outperform those trained on
datasets collected using Shannon entropy, SMM, and RND on all tasks considered,
and on 80% of the tasks compared to datasets collected using Rényi entropy.
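The "tractable k-nearest neighbor estimators" mentioned here generalize the classical Kozachenko-Leonenko k-NN estimator of differential Shannon entropy, which can be sketched as follows (the estimator shown is the standard Shannon special case, not the paper's behavioral-entropy extension):

```python
import math
import numpy as np
from scipy.special import digamma

def knn_entropy(x, k=3):
    """Kozachenko-Leonenko k-NN estimator of differential entropy (nats)."""
    n, d = x.shape
    dists = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
    np.fill_diagonal(dists, np.inf)            # exclude self-distances
    eps = np.sort(dists, axis=1)[:, k - 1]     # distance to k-th neighbour
    log_cd = (d / 2) * np.log(np.pi) - math.lgamma(d / 2 + 1)  # unit d-ball
    return digamma(n) - digamma(k) + log_cd + d * np.mean(np.log(eps))

rng = np.random.default_rng(0)
x = rng.normal(size=(2000, 1))                 # samples from N(0, 1)
true_h = 0.5 * np.log(2 * np.pi * np.e)        # exact Gaussian entropy
est = knn_entropy(x, k=3)
assert abs(est - true_h) < 0.1                 # estimator lands near truth
```

Reward functions built from such local-distance terms are what let standard RL methods maximize an entropy objective without ever evaluating the density itself.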
|