| id | title | categories | abstract |
|---|---|---|---|
2501.11361
|
Block Flow: Learning Straight Flow on Data Blocks
|
cs.LG cs.CV
|
Flow-matching models provide a powerful framework for various applications,
offering efficient sampling and flexible probability path modeling. These
models are characterized by flows with low curvature in learned generative
trajectories, which results in reduced truncation error at each sampling step.
To further reduce curvature, we propose block matching. This novel approach
leverages label information to partition the data distribution into blocks and
match them with a prior distribution parameterized using the same label
information, thereby learning straighter flows. We demonstrate that the
variance of the prior distribution can control the curvature upper bound of
forward trajectories in flow-matching models. By designing flexible
regularization strategies to adjust this variance, we achieve optimal
generation performance, effectively balancing the trade-off between maintaining
diversity in generated samples and minimizing numerical solver errors. Our
results demonstrate competitive performance with models of the same parameter
scale. Code is available at \url{https://github.com/wpp13749/block_flow}.
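A minimal numpy sketch of the standard linear-path flow-matching objective with a label-conditioned Gaussian prior, loosely mirroring the block idea (the paper's actual block-matching losses and regularizers are not reproduced; all names and values here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: two labelled "blocks" in 1-D.
labels = rng.integers(0, 2, size=512)
x1 = np.where(labels == 0, -2.0, 2.0) + 0.1 * rng.normal(size=512)

# Label-parameterized prior: per-block mean, small variance sigma.
prior_mu = np.where(labels == 0, -2.0, 2.0)
sigma = 0.5
x0 = prior_mu + sigma * rng.normal(size=512)

# Linear interpolation path x_t = (1-t) x0 + t x1; regression target is x1 - x0.
t = rng.uniform(size=512)
xt = (1.0 - t) * x0 + t * x1
v_target = x1 - x0

# Tiny linear velocity model v(x, t) = w0*x + w1*t + b, fit by least squares.
A = np.stack([xt, t, np.ones_like(t)], axis=1)
w, *_ = np.linalg.lstsq(A, v_target, rcond=None)
mse = np.mean((A @ w - v_target) ** 2)

# For comparison: a standard N(0, 1) prior yields much longer (more curved)
# transport targets than the block-matched prior.
x0_std = rng.normal(size=512)
```

Matching each block of the data to a prior component with the same label shrinks the average transport distance `|x1 - x0|`, which is the intuition behind straighter flows.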
|
2501.11371
|
Reed-Solomon Codes Against Insertions and Deletions: Full-Length and
Rate-$1/2$ Codes
|
cs.IT math.IT
|
The performance of Reed-Solomon codes (RS codes, for short) in the presence
of insertion and deletion errors has been studied recently in several papers.
In this work, we further study this intriguing mathematical problem, focusing
on two regimes. First, we study the question of how well full-length RS codes
perform against insertions and deletions. For 2-dimensional RS codes, we fully
characterize which codes cannot correct even a single insertion or deletion and
show that (almost) all 2-dimensional RS codes correct at least $1$ insertion or
deletion error. Moreover, for large enough field size $q$, and for any $k \ge
2$, we show that there exists a full-length $k$-dimensional RS code that
corrects $q/10k$ insertions and deletions. Second, we focus on rate $1/2$ RS
codes that can correct a single insertion or deletion error. We present a
polynomial time algorithm that constructs such codes for $q = O(k^4)$. This
result matches the existential bound given in \cite{con2023reed}.
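A brute-force check of the full-length, 2-dimensional regime over a small field, consistent with the paper's characterization that some such codes cannot correct even one deletion (the choice q = 7 and the natural evaluation order are assumptions for illustration; e.g. the codewords of f(x) = x and f(x) = x + 1 share a length-6 subsequence):

```python
from itertools import product

q, k = 7, 2                 # full-length RS code over GF(7), dimension 2
evals = list(range(q))      # natural evaluation order 0, 1, ..., q-1

# All codewords: evaluations of f(x) = a + b*x over GF(q).
codewords = [tuple((a + b * x) % q for x in evals)
             for a, b in product(range(q), repeat=2)]

def deletion_ball(c):
    """All length n-1 subsequences obtained by deleting one symbol."""
    return {c[:i] + c[i + 1:] for i in range(len(c))}

# The code corrects one deletion iff all deletion balls are pairwise disjoint.
seen, conflict = {}, None
for c in codewords:
    for s in deletion_ball(c):
        if s in seen and seen[s] != c:
            conflict = (seen[s], c, s)
        seen.setdefault(s, c)
```

Here `conflict` ends up non-empty, witnessing two codewords whose single-deletion balls intersect.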
|
2501.11374
|
Linear ADRC is equivalent to PID with set-point weighting and
measurement filter
|
eess.SY cs.SY
|
We show that linear Active Disturbance-Rejection Control (ADRC) tuned using
the "bandwidth method" is equivalent to PI(D) control with set-point weighting
and a lowpass filter on the measurement signal. We also provide simple
expressions that make it possible to implement linear ADRC for first and
second-order systems using commonplace two degree-of-freedom PID
implementations. The expressions are equivalent to ADRC in the response from
measurements, and a slight approximation in the response from references.
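A simulation sketch of textbook first-order linear ADRC with the bandwidth method (observer gains l1 = 2*wo, l2 = wo**2); the plant, bandwidths, and disturbance are illustrative assumptions, and the paper's explicit PID-equivalent expressions are not reproduced here:

```python
# First-order plant  y' = -a*y + b*u + d  with constant disturbance d.
a, b, d = 1.0, 1.0, 0.5
b0 = 1.0                      # input gain assumed by the controller
wo, wc = 20.0, 5.0            # observer / controller bandwidths
l1, l2 = 2.0 * wo, wo ** 2    # standard ESO gains for first-order linear ADRC
kp = wc

dt, r = 1e-3, 1.0
y, z1, z2, u = 0.0, 0.0, 0.0, 0.0
for _ in range(int(10.0 / dt)):
    u = (kp * (r - z1) - z2) / b0      # control law cancelling the estimated disturbance
    e = y - z1
    z1 += dt * (z2 + b0 * u + l1 * e)  # extended state observer (Euler step)
    z2 += dt * (l2 * e)
    y += dt * (-a * y + b * u + d)     # plant
```

Despite the unmodelled dynamics and constant disturbance, the output settles at the reference, which is the behaviour the PID-with-filter equivalent must reproduce.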
|
2501.11378
|
Investigation of Whisper ASR Hallucinations Induced by Non-Speech Audio
|
cs.SD cs.AI eess.AS
|
Hallucinations of deep neural models are amongst key challenges in automatic
speech recognition (ASR). In this paper, we investigate hallucinations of the
Whisper ASR model induced by non-speech audio segments present during
inference. By inducing hallucinations with various types of sounds, we show
that there exists a set of hallucinations that appear frequently. We then study
hallucinations caused by the augmentation of speech with such sounds. Finally,
we describe the creation of a bag of hallucinations (BoH) that allows the
effect of hallucinations to be removed through the post-processing of text
transcriptions. The results of our experiments show that such post-processing
is capable of reducing word error rate (WER) and acts as a good safeguard
against problematic hallucinations.
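A minimal sketch of BoH-style post-processing as described: strip known recurring hallucinated phrases from transcriptions (the phrases below are illustrative stand-ins, not the paper's actual bag):

```python
import re

# Hypothetical "bag of hallucinations": phrases observed to recur in
# ASR outputs on non-speech audio (illustrative examples only).
BAG_OF_HALLUCINATIONS = [
    "thanks for watching",
    "subtitles by the amara.org community",
]

def remove_hallucinations(transcript: str, bag=BAG_OF_HALLUCINATIONS) -> str:
    """Strip known hallucinated phrases, then tidy whitespace."""
    out = transcript
    for phrase in bag:
        out = re.sub(re.escape(phrase), "", out, flags=re.IGNORECASE)
    return re.sub(r"\s+", " ", out).strip()
```

Removing such phrases can only delete tokens, so it acts as a safeguard that lowers WER whenever the matched text is indeed hallucinated.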
|
2501.11384
|
Transductive Conformal Inference for Ranking
|
cs.LG stat.ME stat.ML
|
We introduce a method based on Conformal Prediction (CP) to quantify the
uncertainty of full ranking algorithms. We focus on a specific scenario where
$n + m$ items are to be ranked by some ``black box'' algorithm. It is assumed
that the relative (ground truth) ranking of $n$ of them is known. The objective
is then to quantify the error made by the algorithm on the ranks of the $m$ new
items among the total $(n + m)$. In such a setting, the true ranks of the $n$
original items in the total $(n + m)$ depend on the (unknown) true ranks of the
$m$ new ones. Consequently, we have no direct access to a calibration set to
apply a classical CP method. To address this challenge, we propose to construct
distribution-free bounds of the unknown conformity scores using recent results
on the distribution of conformal p-values. Using these upper bounds on the scores, we
provide valid prediction sets for the rank of any item. We also control the
false coverage proportion, a crucial quantity when dealing with multiple
prediction sets. Finally, we empirically show on both synthetic and real data
the efficiency of our CP method.
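For reference, the two basic split-CP ingredients the construction builds on, a conformal p-value and the calibration quantile (the paper's transductive setting replaces direct access to calibration scores with distribution-free bounds; this generic sketch only shows the underlying machinery):

```python
import math

def conformal_pvalue(cal_scores, test_score):
    """Classic (split) conformal p-value for one test score."""
    n = len(cal_scores)
    return (1 + sum(s >= test_score for s in cal_scores)) / (n + 1)

def conformal_threshold(cal_scores, alpha):
    """Score threshold giving marginal coverage >= 1 - alpha."""
    n = len(cal_scores)
    k = math.ceil((n + 1) * (1 - alpha))   # empirical quantile index
    return sorted(cal_scores)[k - 1]
```

With calibration scores 1..19 and alpha = 0.1, the threshold is the 18th smallest score, and a test score equal to the maximum gets p-value 2/20.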
|
2501.11388
|
UniTrans: A Unified Vertical Federated Knowledge Transfer Framework for
Enhancing Cross-Hospital Collaboration
|
cs.LG cs.DC
|
Cross-hospital collaboration has the potential to address disparities in
medical resources across different regions. However, strict privacy regulations
prohibit the direct sharing of sensitive patient information between hospitals.
Vertical federated learning (VFL) offers a novel privacy-preserving machine
learning paradigm that maximizes data utility across multiple hospitals.
Traditional VFL methods, however, primarily benefit patients with overlapping
data, leaving vulnerable non-overlapping patients without guaranteed
improvements in medical prediction services. While some knowledge transfer
techniques can enhance the prediction performance for non-overlapping patients,
they fall short in addressing scenarios where overlapping and non-overlapping
patients belong to different domains, resulting in challenges such as feature
heterogeneity and label heterogeneity. To address these issues, we propose a
novel unified vertical federated knowledge transfer framework (UniTrans). Our
framework consists of three key steps. First, we extract the federated
representation of overlapping patients by employing an effective vertical
federated representation learning method to model multi-party joint features
online. Next, each hospital learns a local knowledge transfer module offline,
enabling the transfer of knowledge from the federated representation of
overlapping patients to the enriched representation of local non-overlapping
patients in a domain-adaptive manner. Finally, hospitals utilize these enriched
local representations to enhance performance across various downstream medical
prediction tasks. Experiments on real-world medical datasets validate the
framework's dual effectiveness in both intra-domain and cross-domain knowledge
transfer. The code of UniTrans is available at
\url{https://github.com/Chung-ju/Unitrans}.
|
2501.11391
|
Revisiting Language Models in Neural News Recommender Systems
|
cs.IR
|
Neural news recommender systems (RSs) have integrated language models (LMs)
to encode news articles with rich textual information into representations,
thereby improving the recommendation process. Most studies suggest that (i)
news RSs achieve better performance with larger pre-trained language models
(PLMs) than shallow language models (SLMs), and (ii) that large language models
(LLMs) outperform PLMs. However, other studies indicate that PLMs sometimes
lead to worse performance than SLMs. Thus, it remains unclear whether using
larger LMs consistently improves the performance of news RSs. In this paper, we
revisit, unify, and extend these comparisons of the effectiveness of LMs in
news RSs using the real-world MIND dataset. We find that (i) larger LMs do not
necessarily translate to better performance in news RSs, and (ii) they require
stricter fine-tuning hyperparameter selection and greater computational
resources to achieve optimal recommendation performance than smaller LMs. On
the positive side, our experiments show that larger LMs lead to better
recommendation performance for cold-start users: they alleviate dependency on
extensive user interaction history and make recommendations more reliant on the
news content.
|
2501.11393
|
Trace Reconstruction of First-Order Reed-Muller Codewords Using Run
Statistics
|
cs.IT math.IT math.PR
|
In this paper, we derive an expression for the expected number of runs in a
trace of a binary sequence $x \in \{0,1\}^n$ obtained by passing $x$ through a
deletion channel that independently deletes each bit with probability $q$. We
use this expression to show that if $x$ is a codeword of a first-order
Reed-Muller code, and the deletion probability $q$ is 1/2, then $x$ can be
reconstructed, with high probability, from $\tilde{O}(n)$ many of its traces.
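A simulation sketch of the two objects in the statement, the run count of a binary sequence and the i.i.d. deletion channel (the paper's closed-form expression for the expected number of runs is not reproduced; here it is only estimated empirically):

```python
import random

def num_runs(x):
    """Number of maximal runs of equal symbols in a sequence."""
    return 0 if not x else 1 + sum(x[i] != x[i - 1] for i in range(1, len(x)))

def deletion_channel(x, q, rng):
    """Delete each bit independently with probability q."""
    return [b for b in x if rng.random() >= q]

rng = random.Random(0)
x = [1, 1, 0, 0, 0, 1, 0, 1]            # 5 runs
traces = [deletion_channel(x, 0.5, rng) for _ in range(20000)]
mean_runs = sum(num_runs(t) for t in traces) / len(traces)
```

Aggregating such run statistics over $\tilde{O}(n)$ traces is the quantity the reconstruction argument works with.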
|
2501.11395
|
To BEE or not to BEE: Estimating more than Entropy with Biased Entropy
Estimators
|
cs.IT cs.SE math.IT
|
Entropy estimation plays a significant role in biology, economics, physics,
communication engineering and other disciplines. It is increasingly used in
software engineering, e.g. in software confidentiality, software testing,
predictive analysis, machine learning, and software improvement. However
accurate estimation is demonstrably expensive in many contexts, including
software. Statisticians have consequently developed biased estimators that aim
to accurately estimate entropy on the basis of a sample. In this paper we apply
18 widely employed entropy estimators to Shannon measures useful to the
software engineer: entropy, mutual information and conditional mutual
information. Moreover, we investigate how the estimators are affected by two
main influential factors: sample size and domain size. Our experiments range
over a large set of randomly generated joint probability distributions and
varying sample sizes, rather than choosing just one or two well known
probability distributions as in previous investigations.
Our most important result is identifying that the Chao-Shen and
Chao-Wang-Jost estimators stand out for consistently converging more quickly to
the ground truth, regardless of domain size and regardless of the measure used.
They also tend to outperform the others in terms of accuracy as sample sizes
increase. This discovery enables a significant reduction in data collection
effort without compromising performance.
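A sketch of the Chao-Shen estimator singled out above, as commonly stated (coverage-adjusted plug-in with a Horvitz-Thompson correction); the paper's full comparison covers 18 estimators and joint measures, none of which are reproduced here:

```python
import math
from collections import Counter

def chao_shen_entropy(sample):
    """Chao-Shen coverage-adjusted entropy estimator (in nats)."""
    n = len(sample)
    counts = Counter(sample)
    f1 = sum(1 for c in counts.values() if c == 1)   # number of singletons
    coverage = 1.0 - f1 / n                          # Good-Turing sample coverage
    h = 0.0
    for c in counts.values():
        p = coverage * c / n                         # coverage-adjusted probability
        if p > 0:
            h -= p * math.log(p) / (1.0 - (1.0 - p) ** n)  # HT correction
    return h
```

With no singletons the coverage term is 1 and only the Horvitz-Thompson factor remains; e.g. the sample `[0, 0, 1, 1]` gives ln(2)/(1 - 0.5**4).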
|
2501.11403
|
Verifying Cross-modal Entity Consistency in News using Vision-language
Models
|
cs.CL cs.IR cs.MM
|
The web has become a crucial source of information, but it is also used to
spread disinformation, often conveyed through multiple modalities like images
and text. The identification of inconsistent cross-modal information, in
particular entities such as persons, locations, and events, is critical to
detect disinformation. Previous works either identify out-of-context
disinformation by assessing the consistency of images to the whole document,
neglecting relations of individual entities, or focus on generic entities that
are not relevant to news. So far, only a few approaches have addressed the task
of validating entity consistency between images and text in news. However, the
potential of large vision-language models (LVLMs) has not been explored yet. In
this paper, we propose an LVLM-based framework for verifying Cross-modal Entity
Consistency~(LVLM4CEC), to assess whether persons, locations and events in news
articles are consistent across both modalities. We suggest effective prompting
strategies for LVLMs for entity verification that leverage reference images
crawled from the web. Moreover, we extend three existing datasets for the task of
entity verification in news by providing manual ground-truth data. Our results
show the potential of LVLMs for automating cross-modal entity verification,
showing improved accuracy in identifying persons and events when using evidence
images. Moreover, our method outperforms a baseline for location and event
verification in documents. The datasets and source code are available on GitHub
at https://github.com/TIBHannover/LVLM4CEC.
|
2501.11406
|
Efficient Reduction of Interconnected Subsystem Models using Abstracted
Environments
|
eess.SY cs.SY
|
We present two frameworks for structure-preserving model order reduction of
interconnected subsystems, improving tractability of the reduction methods
while ensuring stability and accuracy bounds of the reduced interconnected
model. Instead of reducing each subsystem independently, we take a low-order
abstraction of its environment into account to better capture the dynamics
relevant to the external input-output behaviour of the interconnected system,
thereby increasing accuracy of the reduced interconnected model. This approach
significantly reduces the computational costs of reduction by abstracting
instead of fully retaining the environment. The two frameworks differ in how
they generate these abstracted environments: one abstracts the environment as a
whole, whereas the other abstracts each individual subsystem. By relating
low-level errors introduced by reduction and abstraction to the resulting
high-level error on the interconnected system, we are able to translate
high-level accuracy requirements (on the reduced interconnected system) to
low-level specifications (on abstraction and reduction errors) using techniques
from robust performance analysis. By adhering to these low-level
specifications, i.e., by restricting the introduced low-level errors, both frameworks
automatically guarantee the accuracy and stability of the reduced
interconnected system. We demonstrate the effectiveness of both frameworks by
applying them to a structural dynamics model of a two-stroke wafer stage,
achieving improved accuracy and/or greater reduction compared to an existing
method from literature.
|
2501.11407
|
A Truly Sparse and General Implementation of Gradient-Based Synaptic
Plasticity
|
cs.NE cs.AI cs.LG
|
Online synaptic plasticity rules derived from gradient descent achieve high
accuracy on a wide range of practical tasks. However, their software
implementation often requires tediously hand-derived gradients or using
gradient backpropagation which sacrifices the online capability of the rules.
In this work, we present a custom automatic differentiation (AD) pipeline for
sparse and online implementation of gradient-based synaptic plasticity rules
that generalizes to arbitrary neuron models. Our work combines the programming
ease of backpropagation-type methods for forward AD while being
memory-efficient. To achieve this, we exploit the advantageous compute and
memory scaling of online synaptic plasticity by providing an inherently sparse
implementation of AD where expensive tensor contractions are replaced with
simple element-wise multiplications if the tensors are diagonal. Gradient-based
synaptic plasticity rules such as eligibility propagation (e-prop) have exactly
this property and thus profit immensely from this feature. We demonstrate the
alignment of our gradients with those from gradient backpropagation on a
synthetic task where e-prop gradients are exact, as well as on audio speech
classification benchmarks. We demonstrate how memory utilization scales with
network size without dependence on the sequence length, as expected from
forward AD methods.
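A numpy sketch of the diagonal-Jacobian trick described above, for a plain leaky integrator rather than a full e-prop spiking model (the neuron model and sizes are illustrative assumptions):

```python
import numpy as np

# Leaky integrator: v[t+1] = alpha * v[t] + W @ x[t].
# The Jacobian dv[t+1]/dv[t] = alpha * I is diagonal, so the forward-AD
# eligibility trace for dv_i/dW_ij collapses to an element-wise update,
# with no expensive tensor contraction:
#   E[t+1] = alpha * E[t] + x[t]  (broadcast, one entry per weight).
rng = np.random.default_rng(0)
N, M, T = 4, 3, 10
alpha = 0.9
x = rng.normal(size=(T, M))

E = np.zeros((N, M))
for t in range(T):
    E = alpha * E + x[t][None, :]     # element-wise, O(N*M) memory and time

# Ground truth by unrolling: dv_i[T]/dW_ij = sum_t alpha**(T-1-t) * x[t, j].
E_ref = np.zeros((N, M))
for t in range(T):
    E_ref += alpha ** (T - 1 - t) * x[t][None, :]
```

The online trace matches the unrolled gradient exactly, while its memory footprint is independent of the sequence length, as claimed for forward-AD methods.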
|
2501.11409
|
Unsupervised Learning in Echo State Networks for Input Reconstruction
|
cs.LG cs.AI eess.SP nlin.CD q-bio.NC
|
Conventional echo state networks (ESNs) require supervised learning to train
the readout layer, using the desired outputs as training data. In this study,
we focus on input reconstruction (IR), which refers to training the readout
layer to reproduce the input time series in its output. We reformulate the
learning algorithm of the ESN readout layer to perform IR using unsupervised
learning (UL). By conducting theoretical analysis and numerical experiments, we
demonstrate that IR in ESNs can be effectively implemented under realistic
conditions without explicitly using the desired outputs as training data; in
this way, UL is enabled. Furthermore, we demonstrate that applications relying
on IR, such as dynamical system replication and noise filtering, can be
reformulated within the UL framework. Our findings establish a theoretically
sound and universally applicable IR formulation, along with its related tasks
in ESNs. This work paves the way for novel predictions and highlights
unresolved theoretical challenges in ESNs, particularly in the context of
time-series processing methods and computational models of the brain.
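For context, a sketch of the conventional *supervised* IR formulation that the paper reformulates: drive a random reservoir and ridge-regress the readout onto the input itself (reservoir size, spectral radius, and regularization are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)
N, T, washout = 200, 2000, 100

# Random reservoir scaled to spectral radius 0.9 (echo state property heuristic).
W = rng.normal(size=(N, N))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))
Win = rng.uniform(-1, 1, size=N)

u = rng.uniform(-1, 1, size=T)          # input time series
X = np.zeros((T, N))
x = np.zeros(N)
for t in range(T):
    x = np.tanh(W @ x + Win * u[t])
    X[t] = x

# Supervised IR baseline: ridge regression of the readout onto the input.
A, b = X[washout:], u[washout:]
Wout = np.linalg.solve(A.T @ A + 1e-6 * np.eye(N), A.T @ b)
mse = np.mean((A @ Wout - b) ** 2)
```

The reconstruction error is far below the input variance, illustrating that the readout can reproduce the input; the paper's contribution is achieving this without explicitly supplying `u` as the training target.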
|
2501.11411
|
Beyond the Hype: Benchmarking LLM-Evolved Heuristics for Bin Packing
|
cs.NE
|
Coupling Large Language Models (LLMs) with Evolutionary Algorithms has
recently shown significant promise as a technique to design new heuristics that
outperform existing methods, particularly in the field of combinatorial
optimisation. An escalating arms race is both rapidly producing new heuristics
and improving the efficiency of the processes evolving them. However, driven by
the desire to quickly demonstrate the superiority of new approaches, evaluation
of the new heuristics produced for a specific domain is often cursory: testing
on very few datasets in which instances all belong to a specific class from the
domain, and on few instances per class. Taking bin-packing as an example, to
the best of our knowledge we conduct the first rigorous benchmarking study of
new LLM-generated heuristics, comparing them to well-known existing heuristics
across a large suite of benchmark instances using three performance metrics.
For each heuristic, we then evolve new instances won by the heuristic and
perform an instance space analysis to understand where in the feature space
each heuristic performs well. We show that most of the LLM heuristics do not
generalise well when evaluated across a broad range of benchmarks in contrast
to existing simple heuristics, and suggest that any gains from generating very
specialist heuristics that only work in small areas of the instance space need
to be weighed carefully against the considerable cost of generating these
heuristics.
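Two of the well-known simple baselines referenced above, sketched for concreteness (the LLM-evolved heuristics and the benchmark metrics themselves are not reproduced):

```python
def first_fit(items, capacity=1.0):
    """Place each item into the first bin with room; open a new bin otherwise."""
    bins = []
    for it in items:
        for i, load in enumerate(bins):
            if load + it <= capacity + 1e-12:
                bins[i] += it
                break
        else:
            bins.append(it)
    return bins

def best_fit(items, capacity=1.0):
    """Place each item into the feasible bin with the least remaining space."""
    bins = []
    for it in items:
        best = min((i for i, l in enumerate(bins) if l + it <= capacity + 1e-12),
                   key=lambda i: capacity - bins[i], default=None)
        if best is None:
            bins.append(it)
        else:
            bins[best] += it
    return bins
```

On the instance `[0.5, 0.7, 0.5, 0.2, 0.4, 0.2, 0.5, 0.1]` both heuristics use 4 bins, matching the lower bound of ceil(3.1) bins.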
|
2501.11413
|
Generalization and Informativeness of Weighted Conformal Risk Control
Under Covariate Shift
|
cs.LG cs.AI cs.IT math.IT
|
Predictive models are often required to produce reliable predictions under
statistical conditions that are not matched to the training data. A common type
of training-testing mismatch is covariate shift, where the conditional
distribution of the target variable given the input features remains fixed,
while the marginal distribution of the inputs changes. Weighted conformal risk
control (W-CRC) uses data collected during the training phase to convert point
predictions into prediction sets with valid risk guarantees at test time
despite the presence of a covariate shift. However, while W-CRC provides
statistical reliability, its efficiency -- measured by the size of the
prediction sets -- can only be assessed at test time. In this work, we relate
the generalization properties of the base predictor to the efficiency of W-CRC
under covariate shifts. Specifically, we derive a bound on the inefficiency of
the W-CRC predictor that depends on algorithmic hyperparameters and
task-specific quantities available at training time. This bound offers insights
on relationships between the informativeness of the prediction sets, the extent
of the covariate shift, and the size of the calibration and training sets.
Experiments on fingerprinting-based localization validate the theoretical
results.
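A minimal sketch of the weighted-quantile primitive underlying weighted conformal methods (in W-CRC the weights are likelihood ratios between test and training covariate distributions; the risk-control and bounding machinery of the paper is not reproduced):

```python
def weighted_quantile(scores, weights, level):
    """Smallest score s whose cumulative normalized weight reaches `level`."""
    total = sum(weights)
    acc = 0.0
    for s, w in sorted(zip(scores, weights)):
        acc += w / total
        if acc >= level:
            return s
    return float("inf")
```

With equal weights this recovers the ordinary empirical quantile; upweighting calibration points from the shifted test region pushes the threshold up, which is exactly the inefficiency (larger prediction sets) the bound above quantifies.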
|
2501.11414
|
Algorithm Selection with Probing Trajectories: Benchmarking the Choice
of Classifier Model
|
cs.LG cs.NE
|
Recent approaches to training algorithm selectors in the black-box
optimisation domain have advocated for the use of training data that is
algorithm-centric in order to encapsulate information about how an algorithm
performs on an instance, rather than relying on information derived from
features of the instance itself. Probing-trajectories that consist of a
sequence of objective performance per function evaluation obtained from a short
run of an algorithm have recently shown particular promise in training accurate
selectors. However, training models on this type of data requires an
appropriately chosen classifier given the sequential nature of the data. There
are currently no clear guidelines for choosing the most appropriate classifier
for algorithm selection using time-series data from the plethora of models
available. To address this, we conduct a large benchmark study using 17
different classifiers and three types of trajectory on a classification task
using the BBOB benchmark suite using both leave-one-instance out and
leave-one-problem out cross-validation. In contrast to previous studies using
tabular data, we find that the choice of classifier has a significant impact,
showing that feature-based and interval-based models are the best choices.
|
2501.11416
|
Mapping network structures and dynamics of decentralised
cryptocurrencies: The evolution of Bitcoin (2009-2023)
|
cs.CE
|
Cryptocurrencies have recently been in the spotlight of public debate due to
their embrace by the new US President, with crypto fans expecting a 'bull run'.
The global cryptocurrency market capitalisation is more than \$3.50 trillion,
with 1 Bitcoin exchanging for more than \$97,000 at the end of November 2024.
Monitoring the evolution of these systems is key to understanding whether the
popular perception of cryptocurrencies as a new, sustainable economic
infrastructure is well-founded. In this paper, we have reconstructed the
network structures and dynamics of Bitcoin from its launch in January 2009 to
December 2023 and identified its key evolutionary phases. Our results show that
network centralisation and wealth concentration increased from the very early
years, following a richer-get-richer mechanism. This trend was endogenous to
the system, beyond any subsequent institutional or exogenous influence. The
evolution of Bitcoin is characterised by three periods, Exploration, Adaptation
and Maturity, each showing coherent network patterns. Our findings suggest
that Bitcoin is a highly centralised structure, with high levels of wealth
inequality and internally crystallised power dynamics, which may have negative
implications for its long-term sustainability.
|
2501.11417
|
Neural Contextual Reinforcement Framework for Logical Structure Language
Generation
|
cs.CL cs.AI
|
The Neural Contextual Reinforcement Framework introduces an innovative
approach to enhancing the logical coherence and structural consistency of text
generated by large language models. Leveraging reinforcement learning
principles, the framework integrates custom reward functions and dynamic
context alignment mechanisms to address challenges inherent in maintaining
long-range dependencies across extended sequences. The architecture
incorporates multi-head attention layers and hierarchical encoding modules,
enabling the model to produce outputs that align closely with human
expectations of logical structure and semantic flow. Quantitative evaluations
across diverse datasets demonstrate substantial improvements in coherence
metrics, perplexity reduction, and semantic alignment, showcasing the
framework's ability to outperform baseline models in both general and
domain-specific tasks. Qualitative analyses further highlight the framework's
capacity to generate text with improved narrative clarity and reduced
redundancy, reflecting its effectiveness in balancing fluency with structural
precision. In addition to its performance gains, the framework exhibits
robustness in handling noisy input data and scalability across varying model
sizes, reinforcing its versatility in practical applications. Experimental
results reveal that optimal context window sizes significantly influence
coherence outcomes, showing the importance of architectural flexibility in
adapting to diverse linguistic structures. Cross-lingual performance
evaluations affirm the framework's adaptability to multiple languages,
extending its utility beyond monolingual contexts. Resource efficiency analyses
indicate a reduction in computational overhead compared to traditional
approaches, emphasizing the practicality of the framework for large-scale
deployment.
|
2501.11419
|
An Analysis of the Correctness and Computational Complexity of Path
Planning in Payment Channel Networks
|
cs.DM cs.CE
|
Payment Channel Networks (PCNs) are a method for improving the scaling and
latency of cryptocurrency transactions. For a payment to be made between two
peers in a PCN, a feasible low-fee path in the network must be planned. Many
PCN path planning algorithms use a search algorithm that is a variant of
Dijkstra's algorithm. In this article, we prove the correctness and
computational complexity of this algorithm. Specifically, we show that, if the
PCN satisfies a consistency property relating to the fees charged by payment
channels, the algorithm is correct and has polynomial computational complexity.
However, in the general case, the algorithm is not correct and the path
planning problem is NP-hard. These newly developed results can be used to
inform the development of new or existing PCNs amenable to path planning. For
example, we show that the Lightning Network, which is the most widely used PCN
and is built on the Bitcoin cryptocurrency, currently satisfies the above
consistency property. As a second contribution, we present a small
modification to the above path planning algorithm which, although having the
same asymptotic computational complexity, empirically shows better performance.
This modification involves the use of a bidirectional search and is empirically
evaluated by simulating transactions on the Lightning Network.
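A sketch of the Dijkstra-variant setting described above, with the common PCN fee model `fee = base_fee + rate * amount` (this simplified version charges fees on a fixed amount; real PCN routing compounds fees along the path, and the paper's consistency property is assumed to hold so that the greedy choice is valid):

```python
import heapq

def cheapest_path(graph, src, dst, amount):
    """Min-fee path search; graph: {node: [(neighbor, base_fee, rate), ...]}."""
    dist, prev = {src: 0.0}, {}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue                      # stale queue entry
        for v, base, rate in graph.get(u, []):
            nd = d + base + rate * amount # proportional + base fee per channel
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    path, node = [], dst
    while node != src:                    # reconstruct the path backwards
        path.append(node)
        node = prev[node]
    return [src] + path[::-1], dist[dst]
```

On a toy four-node network the search picks the route whose combined base and proportional fees are lowest for the given amount.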
|
2501.11421
|
Online Clustering with Bandit Information
|
cs.LG cs.IT math.IT math.ST stat.TH
|
We study the problem of online clustering within the multi-armed bandit
framework under the fixed confidence setting. In this multi-armed bandit
problem, we have $M$ arms, each providing i.i.d. samples that follow a
multivariate Gaussian distribution with an {\em unknown} mean and a known unit
covariance. The arms are grouped into $K$ clusters based on the distance
between their means using the Single Linkage (SLINK) clustering algorithm on
the means of the arms. Since the true means are unknown, the objective is to
obtain the above clustering of the arms with the minimum number of samples
drawn from the arms, subject to an upper bound on the error probability. We
introduce a novel algorithm, Average Tracking Bandit Online Clustering (ATBOC),
and prove that this algorithm is order optimal, meaning that the upper bound on
its expected sample complexity for a given error probability $\delta$ is within a
factor of 2 of an instance-dependent lower bound as $\delta \rightarrow 0$.
Furthermore, we propose a computationally more efficient algorithm, Lower and
Upper Confidence Bound-based Bandit Online Clustering (LUCBBOC), inspired by
the LUCB algorithm for best arm identification. Simulation results demonstrate
that the performance of LUCBBOC is comparable to that of ATBOC. We further
assess the effectiveness of the proposed algorithms through numerical
experiments on both synthetic datasets and the real-world MovieLens dataset. To
the best of our knowledge, this is the first work on bandit online clustering
that allows arms with different means in a cluster and $K$ greater than 2.
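For intuition, the single-linkage grouping step applied to (estimated) arm means, here in 1-D for brevity (the bandit algorithms ATBOC/LUCBBOC, which decide how many samples to draw before running this step, are not reproduced):

```python
def single_linkage(points, k):
    """Naive single-linkage agglomeration of 1-D points into k clusters."""
    clusters = [[p] for p in sorted(points)]
    while len(clusters) > k:
        # For sorted 1-D data the single-linkage distance between adjacent
        # clusters is just the gap between their closest endpoints.
        gaps = [clusters[i + 1][0] - clusters[i][-1]
                for i in range(len(clusters) - 1)]
        i = gaps.index(min(gaps))
        clusters[i:i + 2] = [clusters[i] + clusters[i + 1]]
    return clusters
```

With means `[0.0, 0.1, 5.0, 5.1, 10.0]` and `k = 3`, the close pairs merge first and the outlier stays alone.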
|
2501.11422
|
Multi-View Spectral Clustering for Graphs with Multiple View Structures
|
cs.LG cs.AI
|
Despite the fundamental importance of clustering, to this day, much of the
relevant research is still based on ambiguous foundations, leading to an
unclear understanding of whether or how the various clustering methods are
connected with each other. In this work, we provide an additional stepping
stone towards resolving such ambiguities by presenting a general clustering
framework that subsumes a series of seemingly disparate clustering methods,
including various methods belonging to the widely popular spectral clustering
framework. In fact, the generality of the proposed framework additionally
sheds light on the largely unexplored area of multi-view graphs,
where each view may have differently clustered nodes. In turn, we propose
GenClus: a method that is simultaneously an instance of this framework and a
generalization of spectral clustering, while also being closely related to
k-means as well. This results in a principled alternative to the few existing
methods studying this special type of multi-view graphs. Then, we conduct
in-depth experiments, which demonstrate that GenClus is more computationally
efficient than existing methods, while also attaining similar or better
clustering performance. Lastly, a qualitative real-world case-study further
demonstrates the ability of GenClus to produce meaningful clusterings.
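As a point of reference for the framework GenClus generalizes, a sketch of vanilla unnormalized spectral clustering (GenClus itself is not specified in the abstract; real usage would replace the trivial rounding step below with k-means on the embedding):

```python
import numpy as np

def spectral_clusters(A, k):
    """Embed with the k smallest Laplacian eigenvectors, then group rows."""
    L = np.diag(A.sum(axis=1)) - A                 # unnormalized graph Laplacian
    _, vecs = np.linalg.eigh(L)                    # eigenvalues ascending
    emb = vecs[:, :k]
    # For well-separated components, rows of `emb` coincide within a component.
    labels, reps = [], []
    for row in emb:
        for idx, r in enumerate(reps):
            if np.allclose(row, r, atol=1e-6):
                labels.append(idx)
                break
        else:
            reps.append(row)
            labels.append(len(reps) - 1)
    return labels

# Two disconnected triangles: nodes 0-2 and 3-5.
A = np.zeros((6, 6))
for group in ([0, 1, 2], [3, 4, 5]):
    for i in group:
        for j in group:
            if i != j:
                A[i, j] = 1.0
labels = spectral_clusters(A, 2)
```

The nullspace of the Laplacian is spanned by the component indicator vectors, so the embedding is constant on each component and the two triangles are recovered exactly.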
|
2501.11425
|
Agent-R: Training Language Model Agents to Reflect via Iterative
Self-Training
|
cs.AI
|
Large Language Model (LLM) agents are increasingly pivotal for addressing
complex tasks in interactive environments. Existing work mainly focuses on
enhancing performance through behavior cloning from stronger experts, yet such
approaches often falter in real-world applications, mainly due to the inability
to recover from errors. However, step-level critique data is difficult and
expensive to collect. Automating and dynamically constructing self-critique
datasets is thus crucial to empowering models with intelligent agent
capabilities. In this work, we propose an iterative self-training framework,
Agent-R, that enables language Agent to Reflect on the fly. Unlike traditional
methods that reward or penalize actions based on correctness, Agent-R leverages
Monte Carlo Tree Search (MCTS) to construct training data that recovers correct trajectories from
erroneous ones. A key challenge of agent reflection lies in the necessity for
timely revision rather than waiting until the end of a rollout. To address
this, we introduce a model-guided critique construction mechanism: the actor
model identifies the first error step (within its current capability) in a
failed trajectory. Starting from it, we splice it with the adjacent correct
path, which shares the same parent node in the tree. This strategy enables the
model to learn reflection based on its current policy, therefore yielding
better learning efficiency. To further explore the scalability of this
self-improvement paradigm, we investigate iterative refinement of both error
correction capabilities and dataset construction. Our findings demonstrate that
Agent-R continuously improves the model's ability to recover from errors and
enables timely error correction. Experiments on three interactive environments
show that Agent-R effectively equips agents to correct erroneous actions while
avoiding loops, achieving superior performance compared to baseline methods
(+5.59%).
|
2501.11428
|
Enhancing Coronary Artery Calcium Scoring via Multi-Organ Segmentation
on Non-Contrast Cardiac Computed Tomography
|
cs.CV cs.AI cs.LG
|
Despite coronary artery calcium scoring being considered a largely solved
problem within the realm of medical artificial intelligence, this paper argues
that significant improvements can still be made. By shifting the focus from
pathology detection to a deeper understanding of anatomy, the novel algorithm
proposed in the paper both achieves high accuracy in coronary artery calcium
scoring and offers enhanced interpretability of the results. This approach not
only aids in the precise quantification of calcifications in coronary arteries,
but also provides valuable insights into the underlying anatomical structures.
Through this anatomically-informed methodology, the paper shows how a nuanced
understanding of the heart's anatomy can lead to more accurate and
interpretable results in the field of cardiovascular health. We demonstrate the
superior accuracy of the proposed method by evaluating it on an open-source
multi-vendor dataset, where we obtain results at the inter-observer level,
surpassing the current state of the art. Finally, the qualitative analyses show
the practical value of the algorithm in such tasks as labeling coronary artery
calcifications, identifying aortic calcifications, and filtering out false
positive detections due to noise.
|
2501.11429
|
The Explanation Game -- Rekindled (Extended Version)
|
cs.AI
|
Recent work demonstrated the existence of critical flaws in the current use
of Shapley values in explainable AI (XAI), i.e. the so-called SHAP scores.
These flaws are significant in that the scores provided to a human
decision-maker can be misleading. Although these negative results might appear
to indicate that Shapley values ought not be used in XAI, this paper argues
otherwise. Concretely, this paper proposes a novel definition of SHAP scores
that overcomes existing flaws. Furthermore, the paper outlines a practically
efficient solution for the rigorous estimation of the novel SHAP scores.
Preliminary experimental results confirm our claims, and further underscore the
flaws of the current SHAP scores.
|
2501.11430
|
A Survey on Diffusion Models for Anomaly Detection
|
cs.LG cs.AI
|
Diffusion models (DMs) have emerged as a powerful class of generative AI
models, showing remarkable potential in anomaly detection (AD) tasks across
various domains, such as cybersecurity, fraud detection, healthcare, and
manufacturing. The intersection of these two fields, termed diffusion models
for anomaly detection (DMAD), offers promising solutions for identifying
deviations in increasingly complex and high-dimensional data. In this survey,
we review recent advances in DMAD research. We begin by presenting the
fundamental concepts of AD and DMs, followed by a comprehensive analysis of
classic DM architectures including DDPMs, DDIMs, and Score SDEs. We further
categorize existing DMAD methods into reconstruction-based, density-based, and
hybrid approaches, providing detailed examinations of their methodological
innovations. We also explore the diverse tasks across different data
modalities, encompassing image, time series, video, and multimodal data
analysis. Furthermore, we discuss critical challenges and emerging research
directions, including computational efficiency, model interpretability,
robustness enhancement, edge-cloud collaboration, and integration with large
language models. The collection of DMAD research papers and resources is
available at https://github.com/fdjingliu/DMAD.
|
2501.11434
|
An Incremental Sampling and Segmentation-Based Approach for Motion
Planning Infeasibility
|
cs.RO
|
We present a simple and easy-to-implement algorithm to detect plan
infeasibility in kinematic motion planning. Our method involves approximating
the robot's configuration space to a discrete space, where each degree of
freedom has a finite set of values. The obstacle region separates the free
configuration space into different connected regions. For a path to exist
between the start and goal configurations, they must lie in the same connected
region of the free space. Thus, to ascertain plan infeasibility, we merely need
to sample enough points from the obstacle region to isolate the start and goal.
Accordingly, we progressively construct the configuration space by sampling
from the discretized space and updating the bitmap cells representing obstacle
regions. Subsequently, we partition this partially built configuration space to
identify different connected components within it and assess the connectivity
of the start and goal cells. We illustrate this methodology on five different
scenarios with configuration spaces having up to 5 degrees of freedom (DOF).
|
2501.11440
|
RACCOON: A Retrieval-Augmented Generation Approach for Location
Coordinate Capture from News Articles
|
cs.CL
|
Geocoding involves automatic extraction of location coordinates of incidents
reported in news articles, and can be used for epidemic intelligence or
disaster management. This paper introduces Retrieval-Augmented Coordinate
Capture Of Online News articles (RACCOON), an open-source geocoding approach
that extracts geolocations from news articles. RACCOON uses a
retrieval-augmented generation (RAG) approach where candidate locations and
associated information are retrieved in the form of context from a location
database, and a prompt containing the retrieved context, location mentions and
news articles is fed to an LLM to generate the location coordinates. Our
evaluation on three datasets, two underlying LLMs, three baselines and several
ablation tests based on the components of RACCOON demonstrate the utility of
RACCOON. To the best of our knowledge, RACCOON is the first RAG-based approach
for geocoding using pre-trained LLMs.
|
2501.11441
|
Ontology Matching with Large Language Models and Prioritized Depth-First
Search
|
cs.IR cs.CL
|
Ontology matching (OM) plays a key role in enabling data interoperability and
knowledge sharing, but it remains challenging due to the need for large
training datasets and limited vocabulary processing in machine learning
approaches. Recently, methods based on Large Language Models (LLMs) have shown
great promise in OM, particularly through the use of a retrieve-then-prompt
pipeline. In this approach, relevant target entities are first retrieved and
then used to prompt the LLM to predict the final matches. Despite their
potential, these systems still present limited performance and high
computational overhead. To address these issues, we introduce MILA, a novel
approach that embeds a retrieve-identify-prompt pipeline within a prioritized
depth-first search (PDFS) strategy. This approach efficiently identifies a
large number of semantic correspondences with high accuracy, limiting LLM
requests to only the most borderline cases. We evaluated MILA using the
biomedical challenge proposed in the 2023 and 2024 editions of the Ontology
Alignment Evaluation Initiative. Our method achieved the highest F-Measure in
four of the five unsupervised tasks, outperforming state-of-the-art OM systems
by up to 17%. It also performed better than or comparable to the leading
supervised OM systems. MILA further exhibited task-agnostic performance,
remaining stable across all tasks and settings, while significantly reducing
LLM requests. These findings highlight that high-performance LLM-based OM can
be achieved through a combination of programmed (PDFS), learned (embedding
vectors), and prompting-based heuristics, without the need for domain-specific
heuristics or fine-tuning.
|
2501.11447
|
Decomposing Interventional Causality into Synergistic, Redundant, and
Unique Components
|
cs.AI cs.IT math.IT physics.data-an
|
We introduce a novel framework for decomposing interventional causal effects
into synergistic, redundant, and unique components, building on the intuition
of Partial Information Decomposition (PID) and the principle of M\"obius
inversion. While recent work has explored a similar decomposition of an
observational measure, we argue that a proper causal decomposition must be
interventional in nature. We develop a mathematical approach that
systematically quantifies how causal power is distributed among variables in a
system, using a recently derived closed-form expression for the M\"obius
function of the redundancy lattice. The formalism is then illustrated by
decomposing the causal power in logic gates, cellular automata, and chemical
reaction networks. Our results reveal how the distribution of causal power can
be context- and parameter-dependent. This decomposition provides new insights
into complex systems by revealing how causal influences are shared and combined
among multiple variables, with potential applications ranging from attribution
of responsibility in legal or AI systems, to the analysis of biological
networks or climate models.
|
2501.11453
|
Integrate-and-Fire from a Mathematical and Signal Processing Perspective
|
eess.SP cs.NE
|
Integrate-and-Fire (IF) is an idealized model of the spike-triggering
mechanism of a biological neuron. It is used to realize the bio-inspired
event-based principle of information processing in neuromorphic computing. We
show that IF is closely related to the concept of Send-on-Delta (SOD) as used
in threshold-based sampling. It turns out that the IF model can be adjusted in
a way that SOD can be understood as a differential version of IF. As a result, we
gain insight into the underlying metric structure based on the Alexiewicz norm
with consequences for clarifying the underlying signal space including bounded
integrable signals with superpositions of finitely many Dirac impulses, the
identification of a maximum sparsity property, error bounds for signal
reconstruction and a characterization in terms of sparse regularization.
|
2501.11454
|
Improving thermal state preparation of Sachdev-Ye-Kitaev model with
reinforcement learning on quantum hardware
|
quant-ph cs.AI cs.LG hep-lat hep-th
|
The Sachdev-Ye-Kitaev (SYK) model, known for its strong quantum correlations
and chaotic behavior, serves as a key platform for quantum gravity studies.
However, variationally preparing thermal states on near-term quantum processors
for large systems (N>12, where N is the number of Majorana fermions) presents a
significant challenge due to the rapid growth in the complexity of
parameterized quantum circuits. This paper addresses this challenge by
integrating reinforcement learning (RL) with convolutional neural networks,
employing an iterative approach to optimize the quantum circuit and its
parameters. The refinement process is guided by a composite reward signal
derived from entropy and the expectation values of the SYK Hamiltonian. This
approach reduces the number of CNOT gates by two orders of magnitude for
systems N>10 compared to traditional methods like first-order Trotterization.
We demonstrate the effectiveness of the RL framework in both noiseless and
noisy quantum hardware environments, maintaining high accuracy in thermal state
preparation. This work contributes to the advancement of a scalable, RL-based
framework with applications for computations of thermal out-of-time-order
correlators in quantum many-body systems and quantum gravity studies on
near-term quantum hardware.
|
2501.11459
|
Multi-Stage Active Sequential Hypothesis Testing with Clustered
Hypotheses
|
cs.IT math.IT
|
We consider the problem where an active Decision-Maker (DM) is tasked to
identify the true hypothesis using as few observations as possible while
maintaining accuracy. The DM collects observations according to its determined
actions and knows the distributions under each hypothesis. We propose a
deterministic and adaptive multi-stage hypothesis-elimination strategy where
the DM selects an action, applies it repeatedly, and discards hypotheses in
light of its obtained observations. The DM selects actions based on maximal
separation expressed by the distance between the parameter vectors of each
distribution under each hypothesis. Close distributions can be clustered,
simplifying the search and significantly reducing the number of required
observations.
Our algorithms achieve vanishing Average Bayes Risk (ABR) as the error
probability approaches zero, i.e., the algorithm is asymptotically optimal.
Furthermore, we show that the ABR is bounded when the number of hypotheses
grows. Simulations are carried out to evaluate the algorithm's performance
compared to another multi-stage hypothesis-elimination algorithm, where an
improvement of several orders of magnitude in the mean number of observations
required is observed.
|
2501.11462
|
On the Adversarial Vulnerabilities of Transfer Learning in Remote
Sensing
|
cs.CV eess.IV
|
The use of pretrained models from general computer vision tasks is widespread
in remote sensing, significantly reducing training costs and improving
performance. However, this practice also introduces vulnerabilities to
downstream tasks, where publicly available pretrained models can be used as a
proxy to compromise downstream models. This paper presents a novel Adversarial
Neuron Manipulation method, which generates transferable perturbations by
selectively manipulating single or multiple neurons in pretrained models.
Unlike existing attacks, this method eliminates the need for domain-specific
information, making it more broadly applicable and efficient. By targeting
multiple fragile neurons, the perturbations achieve superior attack
performance, revealing critical vulnerabilities in deep learning models.
Experiments on diverse models and remote sensing datasets validate the
effectiveness of the proposed method. This low-access adversarial neuron
manipulation technique highlights a significant security risk in transfer
learning models, emphasizing the urgent need for more robust defenses in their
design when addressing safety-critical remote sensing tasks.
|
2501.11463
|
Curiosity-Driven Reinforcement Learning from Human Feedback
|
cs.CL
|
Reinforcement learning from human feedback (RLHF) has proven effective in
aligning large language models (LLMs) with human preferences, but often at the
cost of reduced output diversity. This trade-off between diversity and
alignment quality remains a significant challenge. Drawing inspiration from
curiosity-driven exploration in reinforcement learning, we introduce
curiosity-driven RLHF (CD-RLHF), a framework that incorporates intrinsic
rewards for novel states, alongside traditional sparse extrinsic rewards, to
optimize both output diversity and alignment quality. We demonstrate the
effectiveness of CD-RLHF through extensive experiments on a range of tasks,
including text summarization and instruction following. Our approach achieves
significant gains in diversity on multiple diversity-oriented metrics while
maintaining alignment with human preferences comparable to standard RLHF. We
make our code publicly available at https://github.com/ernie-research/CD-RLHF.
|
2501.11467
|
Fixed Point Certificates for Reachability and Expected Rewards in MDPs
|
cs.LO cs.DM cs.SY eess.SY
|
The possibility of errors in human-engineered formal verification software,
such as model checkers, poses a serious threat to the purpose of these tools.
An established approach to mitigating this problem is certificates --
lightweight, easy-to-check proofs of the verification results. In this paper,
we develop novel certificates for model checking of Markov decision processes
(MDPs) with quantitative reachability and expected reward properties. Our
approach is conceptually simple and relies almost exclusively on elementary
fixed point theory. Our certificates work for arbitrary finite MDPs and can be
readily computed with little overhead using standard algorithms. We formalize
the soundness of our certificates in Isabelle/HOL and provide a formally
verified certificate checker. Moreover, we augment existing algorithms in the
probabilistic model checker Storm with the ability to produce certificates and
demonstrate practical applicability by conducting the first formal
certification of the reference results in the Quantitative Verification
Benchmark Set.
|
2501.11469
|
MASS: Overcoming Language Bias in Image-Text Matching
|
cs.CV cs.LG
|
Pretrained visual-language models have made significant advancements in
multimodal tasks, including image-text retrieval. However, a major challenge in
image-text matching lies in language bias, where models predominantly rely on
language priors and neglect to adequately consider the visual content. We thus
present Multimodal ASsociation Score (MASS), a framework that reduces the
reliance on language priors for better visual accuracy in image-text matching
problems. It can be seamlessly incorporated into existing visual-language
models without necessitating additional training. Our experiments have shown
that MASS effectively lessens language bias without losing an understanding of
linguistic compositionality. Overall, MASS offers a promising solution for
enhancing image-text matching performance in visual-language models.
|
2501.11473
|
Strong Data Processing Properties of R\'enyi-divergences via
Pinsker-type Inequalities
|
cs.IT math.IT
|
We investigate strong data processing inequalities (SDPIs) of the
R\'enyi-divergence between two discrete distributions when both distributions
are passed through a fixed channel. We provide a condition on the channel for
which the DPI holds with equality given two arbitrary distributions in the
probability simplex. Motivated by this, we examine the contraction behavior for
restricted sets of prior distributions via $f$-divergence inequalities: We
provide an alternative proof of the optimal reverse Pinsker's inequality for
R\'enyi-divergences first shown by Binette. We further present an improved
Pinsker's inequality for R\'enyi-divergence based on the joint range technique
by Harremo\"es and Vajda. The presented bound is tight whenever the value of
the total variation distance is larger than $\frac{1}{\alpha}$. By framing
these inequalities in a cross-channel setting, we arrive at SDPIs that can be
adapted to use-case specific restrictions of input distribution and channel. We
apply these results to the R\'enyi local differential privacy amplification
through post-processing by channels that satisfy no local differential privacy
guarantee.
|
2501.11477
|
QGAIC: Quantum Inspired Genetic Algorithm for Image Classification
|
cs.NE
|
This study uses two meta-heuristics methodologies to introduce two novel
quantum-inspired meta heuristic approaches: quantum-inspired genetic algorithm
(QIGA1) and quantum-inspired genetic algorithm with dynamic approach (QIGA2).
The two suggested methods combine classical and quantum genetic algorithm
approaches. Both use the correlation coefficient as an assessment function to
identify the best (optimal) values for binary images. From quantum computing,
they borrow simple ideas such as qubits and state superposition. Owing to these
characteristics, they exhibit parallelism, which exploits the time discreteness
of quantum mechanical systems. For five distinct MNIST datasets, the
performance of all participating algorithms has been assessed by comparing the
suggested approaches first with their traditional approach counterparts and
then with the proposed methods QIGA1 and QIGA2. Each method's ideal threshold
value, associated fitness value (best and average), loss, and accuracy for each
MNIST dataset have all been published. The outcomes demonstrate the superior
efficiency of the suggested approaches over their traditional equivalents.
|
2501.11478
|
Each Graph is a New Language: Graph Learning with LLMs
|
cs.CL cs.AI cs.LG
|
Recent efforts leverage Large Language Models (LLMs) for modeling
text-attributed graph structures in node classification tasks. These approaches
describe graph structures for LLMs to understand or aggregate LLM-generated
textual attribute embeddings through graph structure. However, these approaches
face two main limitations in modeling graph structures with LLMs. (i) Graph
descriptions become verbose in describing high-order graph structure. (ii)
Textual attributes alone do not contain adequate graph structure information.
It is challenging to model graph structure concisely and adequately with LLMs.
LLMs lack built-in mechanisms to model graph structures directly. They also
struggle with complex long-range dependencies between high-order nodes and
target nodes.
Inspired by the observation that LLMs pre-trained on one language can achieve
exceptional performance on another with minimal additional training, we propose
\textbf{G}raph-\textbf{D}efined \textbf{L}anguage for \textbf{L}arge
\textbf{L}anguage \textbf{M}odel (GDL4LLM). This novel framework enables LLMs
to transfer their powerful language understanding capabilities to
graph-structured data. GDL4LLM translates graphs into a graph language corpus
instead of graph descriptions and pre-trains LLMs on this corpus to adequately
understand graph structures. During fine-tuning, this corpus describes the
structural information of target nodes concisely with only a few tokens. By
treating graphs as a new language, GDL4LLM enables LLMs to model graph
structures adequately and concisely for node classification tasks. Extensive
experiments on three real-world datasets demonstrate that GDL4LLM outperforms
description-based and textual attribute embeddings-based baselines by
efficiently modeling different orders of graph structure with LLMs.
|
2501.11485
|
SimLabel: Consistency-Guided OOD Detection with Pretrained
Vision-Language Models
|
cs.CV
|
Detecting out-of-distribution (OOD) data is crucial in real-world machine
learning applications, particularly in safety-critical domains. Existing
methods often leverage language information from vision-language models (VLMs)
to enhance OOD detection by improving confidence estimation through rich
class-wise text information. However, when building an OOD detection score upon
in-distribution (ID) text-image affinity, existing works focus either on each
ID class or on the whole ID label set, overlooking the inherent connections
among ID classes.
We find that the semantic information across different ID classes is beneficial
for effective OOD detection. We thus investigate the ability of image-text
comprehension among different semantic-related ID labels in VLMs and propose a
novel post-hoc strategy called SimLabel. SimLabel enhances the separability
between ID and OOD samples by establishing a more robust image-class similarity
metric that considers consistency over a set of similar class labels. Extensive
experiments demonstrate the superior performance of SimLabel on various
zero-shot OOD detection benchmarks. The proposed model is also extended to
various VLM-backbones, demonstrating its good generalization ability. Our
demonstration and implementation codes are available at:
https://github.com/ShuZou-1/SimLabel.
|
2501.11487
|
Detecting Convolutional Codes: A Markovian Approach with LRT and DNN
|
cs.IT math.IT
|
Identifying the unknown convolutional code corresponding to the given
intercepted data is an important problem in military surveillance and in
wireless communication. While a variety of code identification algorithms are
available in the literature, the key contribution of our work lies in the novel
solution and the corresponding analysis. In this paper, we focus on the
situation when the given data corresponds to either of the two potential
convolutional codes and the goal is to detect the correct code. We first
provide a new interpretation of the convolutional code as a Markov chain, which
is more suitable for analyzing the code detection problem. Our problem then
gets reduced to distinguishing between the two Markov chains. We provide the
closed-form expressions for the corresponding state transition matrices and
estimate the error exponent for the underlying likelihood ratio test (LRT). We
also provide a computationally efficient BCJR-based method for computing the
likelihoods required for the LRT. We observe that BCJR-based likelihoods suffer
from numerical issues for a longer data sequence, and hence, in this case, we
design neural networks that have been found to achieve the optimal performance
of the LRT.
|
2501.11493
|
Communication-Efficient Federated Learning Based on Explanation-Guided
Pruning for Remote Sensing Image Classification
|
cs.CV cs.AI
|
Federated learning (FL) is a decentralized machine learning paradigm, where
multiple clients collaboratively train a global model by exchanging only model
updates with the central server without sharing the local data of clients. Due
to the large volume of model updates required to be transmitted between clients
and the central server, most FL systems are associated with high transfer costs
(i.e., communication overhead). This issue is more critical for operational
applications in remote sensing (RS), especially when large-scale RS data is
processed and analyzed through FL systems with restricted communication
bandwidth. To address this issue, we introduce an explanation-guided pruning
strategy for communication-efficient FL in the context of RS image
classification. Our pruning strategy is defined based on the layerwise
relevance propagation (LRP) driven explanations to: 1) efficiently and
effectively identify the most relevant and informative model parameters (to be
exchanged between clients and the central server); and 2) eliminate the
non-informative ones to minimize the volume of model updates. The experimental
results on the BigEarthNet-S2 dataset demonstrate that our strategy effectively
reduces the number of shared model updates, while increasing the generalization
ability of the global model. The code of this work will be publicly available
at https://git.tu-berlin.de/rsim/FL-LRP
|
2501.11495
|
Discrete-Time Passivity-Based Control using Hermite-Obreschkoff Methods
|
eess.SY cs.SY
|
The motivation for this paper is the implementation of nonlinear state
feedback control, designed based on the continuous-time plant model, in a
sampled control loop under relatively slow sampling. In previous work we have
shown that using one-step predictions of the target dynamics with higher order
integration schemes, together with possibly higher order input shaping, is a
simple and effective way to increase the feasible sampling times until
performance degradation and instability occur. In this contribution we present
a unifying derivation for arbitrary orders of the previously used Lobatto IIIA
collocation and Hermite interpolation schemes through the Hermite-Obreschkoff
formula. We derive, moreover, an IDA-PBC controller for a magnetic levitation
system, which requires a non-constant target interconnection matrix, and show
experimental results.
|
2501.11496
|
Generative AI and Large Language Models in Language Preservation:
Opportunities and Challenges
|
cs.CL cs.AI cs.LG
|
Generative AI and large-scale language models (LLMs) have emerged as powerful
tools in language preservation, particularly for near-native and endangered
languages. With the increasing reliance on technology for communication,
education, and cultural documentation, new opportunities have emerged to
mitigate the dramatic decline of linguistic diversity worldwide. This paper
examines the role of generative AI and LLMs in preserving endangered
languages, highlighting the risks and challenges associated with their use. We
analyze the underlying technologies driving these models, including natural
language processing (NLP) and deep learning, and explore several cases where
these technologies have been applied to low-resource languages. Additionally,
we discuss ethical considerations, data scarcity issues, and technical
challenges while proposing solutions to enhance AI-driven language
preservation.
|
2501.11498
|
Dialect2SQL: A Novel Text-to-SQL Dataset for Arabic Dialects with a
Focus on Moroccan Darija
|
cs.SE cs.AI cs.CL cs.DB
|
The task of converting natural language questions (NLQs) into executable SQL
queries, known as text-to-SQL, has gained significant interest in recent years,
as it enables non-technical users to interact with relational databases. Many
benchmarks, such as SPIDER and WikiSQL, have contributed to the development of
new models and the evaluation of their performance. In addition, other
datasets, like SEDE and BIRD, have introduced more challenges and complexities
to better map real-world scenarios. However, these datasets primarily focus on
high-resource languages such as English and Chinese. In this work, we introduce
Dialect2SQL, the first large-scale, cross-domain text-to-SQL dataset in an
Arabic dialect. It consists of 9,428 NLQ-SQL pairs across 69 databases in
various domains. Along with SQL-related challenges such as long schemas, dirty
values, and complex queries, our dataset also incorporates the complexities of
the Moroccan dialect, which is known for its diverse source languages, numerous
borrowed words, and unique expressions. We believe that our dataset will be a
valuable contribution to both the text-to-SQL community and the
development of resources for low-resource languages.
|
2501.11499
|
KEIR @ ECIR 2025: The Second Workshop on Knowledge-Enhanced Information
Retrieval
|
cs.IR
|
Pretrained language models (PLMs) like BERT and GPT-4 have become the
foundation for modern information retrieval (IR) systems. However, existing
PLM-based IR models primarily rely on the knowledge learned during training for
prediction, limiting their ability to access and incorporate external,
up-to-date, or domain-specific information. Therefore, current information
retrieval systems struggle with semantic nuances, context relevance, and
domain-specific issues. To address these challenges, we propose the second
Knowledge-Enhanced Information Retrieval workshop (KEIR @ ECIR 2025) as a
platform to discuss innovative approaches that integrate external knowledge,
aiming to enhance the effectiveness of information retrieval in a rapidly
evolving technological landscape. The goal of this workshop is to bring
together researchers from academia and industry to discuss various aspects of
knowledge-enhanced information retrieval.
|
2501.11502
|
Hierarchical Coded Caching in High Memory Regime with Coded Placement
|
cs.IT math.IT
|
We consider a two-layer hierarchical coded caching network where a server
with a library of $N$ files is connected to $K_1$ mirrors, each having a cache
memory of size $M_1$. Each mirror is further connected to $K_2$ users, each
equipped with a dedicated cache of size $M_2$. In this paper, we propose two
distinct coded caching schemes based on coded placement, corresponding to two
distinct memory pairs, \( (M_1, M_2) \). We show that the proposed schemes
outperform the existing schemes at these memory points for smaller values of
$K_2$. In setups where mirrors are positioned
near each other, avoiding signal interference is crucial. This can be ensured
by having all mirrors transmit using orthogonal carrier frequencies. To compare
our schemes with existing ones, we used the composite rate metric, which
accurately represents the total bandwidth utilized in such setups. The
composite rate is given by $\overline{R} = R_1 + K_1 R_2$, where $R_1$ is the
rate from the server to the mirrors, and $R_2$ is the rate from the mirrors to
the users, with respect to $M_1$ and $M_2$.
|
2501.11505
|
Sun-Jafar-Type Schemes for Weak Private Information Retrieval
|
cs.IT math.IT
|
In information-theoretic private information retrieval (PIR), a client wants
to retrieve one desired file out of $M$ files, stored across $N$ servers, while
keeping the index of the desired file private from each $T$-sized subset of
servers. A PIR protocol must ideally maximize the rate, which is the ratio of
the file size to the total quantum of the download from the servers, while
ensuring such privacy. In Weak-PIR (WPIR), the criterion of perfect
information-theoretic privacy is relaxed. This enables higher rates to be
achieved, while some information about the desired file index leaks to the
servers. This leakage is captured by various known privacy metrics. By
leveraging the well-established capacity-achieving schemes of Sun and Jafar
under non-colluding ($T=1$) and colluding ($1<T\leq N$) scenarios, we present
WPIR protocols for these scenarios. We also present a new WPIR scheme for the
MDS scenario, by building upon the scheme by Banawan and Ulukus for this
scenario. We present corresponding explicit rate-privacy trade-offs for these
setups, under the mutual-information and the maximal leakage privacy metrics.
In the collusion-free setup, our presented rate-privacy trade-off under maximal
leakage matches that of the previous state of the art. With respect to the MDS
scenario under the maximal leakage metric, we compare with the non-explicit
trade-off in the literature, and show that our scheme performs better for some
numerical examples. For the $T$-collusion setup (under both privacy metrics)
and for the MDS setup under the mutual information metric, our rate-privacy
trade-offs are the first in the literature, to the best of our knowledge.
|
2501.11508
|
See In Detail: Enhancing Sparse-view 3D Gaussian Splatting with Local
Depth and Semantic Regularization
|
cs.CV
|
3D Gaussian Splatting (3DGS) has shown remarkable performance in novel view
synthesis. However, its rendering quality deteriorates with sparse input
views, leading to distorted content and reduced details. This limitation
hinders its practical application. To address this issue, we propose a
sparse-view 3DGS method. Given the inherently ill-posed nature of sparse-view
rendering, incorporating prior information is crucial. We propose a semantic
regularization technique, using features extracted from the pretrained DINO-ViT
model, to ensure multi-view semantic consistency. Additionally, we propose
local depth regularization, which constrains depth values to improve
generalization on unseen views. Our method outperforms state-of-the-art novel
view synthesis approaches, achieving up to 0.4 dB improvement in PSNR
on the LLFF dataset, with reduced distortion and enhanced visual quality.
|
2501.11511
|
Subjective and Objective Quality Assessment of Non-Uniformly Distorted
Omnidirectional Images
|
eess.IV cs.CV
|
Omnidirectional image quality assessment (OIQA) has been one of the hot
topics in IQA with the continuous development of VR techniques, and achieved
much success in the past few years. However, most studies devote themselves to
the uniform distortion issue, i.e., all regions of an omnidirectional image are
perturbed by the ``same amount'' of noise, while ignoring the non-uniform
distortion issue, i.e., partial regions undergo a ``different amount'' of
perturbation than the other regions in the same omnidirectional image.
Additionally, nearly all OIQA models are verified on platforms containing a
limited number of samples, which largely increases the over-fitting risk and
therefore impedes the development of OIQA. To alleviate these issues, we
elaborately explore this topic from both subjective and objective perspectives.
Specifically, we construct a large OIQA database containing 10,320
non-uniformly distorted omnidirectional images, each of which is generated by
considering quality impairments on one or two camera lens(es). Then we
meticulously conduct psychophysical experiments and delve into the influence of
both holistic and individual factors (i.e., distortion range and viewing
condition) on omnidirectional image quality. Furthermore, we propose a
perception-guided OIQA model for non-uniform distortion by adaptively
simulating users' viewing behavior. Experimental results demonstrate that the
proposed model outperforms state-of-the-art methods. The source code is
available at https://github.com/RJL2000/OIQAND.
|
2501.11512
|
Multitask Auxiliary Network for Perceptual Quality Assessment of
Non-Uniformly Distorted Omnidirectional Images
|
eess.IV cs.CV
|
Omnidirectional image quality assessment (OIQA) has been widely investigated
in the past few years and achieved much success. However, most existing
studies are dedicated to solving the uniform distortion problem in OIQA, which
has a natural gap with the non-uniform distortion problem, and their ability in
capturing non-uniform distortion is far from satisfactory. To narrow this gap,
in this paper, we propose a multitask auxiliary network for non-uniformly
distorted omnidirectional images, where the parameters are optimized by jointly
training the main task and other auxiliary tasks. The proposed network mainly
consists of three parts: a backbone for extracting multiscale features from the
viewport sequence, a multitask feature selection module for dynamically
allocating specific features to different tasks, and auxiliary sub-networks for
guiding the proposed model to capture local distortion and global quality
change. Extensive experiments conducted on two large-scale OIQA databases
demonstrate that the proposed model outperforms other state-of-the-art OIQA
metrics, and that these auxiliary sub-networks contribute to improving the performance
of the proposed model. The source code is available at
https://github.com/RJL2000/MTAOIQA.
|
2501.11513
|
Transferability of labels between multilens cameras
|
cs.CV
|
In this work, a new method for automatically extending Bounding Box (BB) and
mask labels across different channels on multilens cameras is presented. For
that purpose, the proposed method combines the well known phase correlation
method with a refinement process. During the first step, images are aligned by
localizing the peak of intensity obtained in the spatial domain after
performing the cross correlation process in the frequency domain. The second
step consists of obtaining the best possible transformation by using an
iterative process maximising the IoU (Intersection over Union) metric. Results
show that, by using this method, labels can be transferred across the
different lenses of a camera with an accuracy over 90% in most cases, with the
whole process taking just 65 ms. Once the transformations are obtained,
artificial RGB images are generated and labeled, so as to transfer this
information to each of the other lenses. This work will allow users to use
this type of camera in fields beyond satellite or medical imagery, giving the
chance of labeling even objects that are invisible in the visible spectrum.
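The IoU-maximising refinement step can be illustrated with a toy sketch; the `refine_shift` helper and its brute-force translation search are our own simplification, not the paper's exact procedure:

```python
def iou(box_a, box_b):
    """Intersection over Union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def refine_shift(src_box, dst_box, search=3):
    """Brute-force the integer (dx, dy) translation of src_box that maximises
    IoU with dst_box -- a toy stand-in for an iterative refinement step."""
    return max(
        ((dx, dy) for dx in range(-search, search + 1)
                  for dy in range(-search, search + 1)),
        key=lambda s: iou((src_box[0] + s[0], src_box[1] + s[1],
                           src_box[2] + s[0], src_box[3] + s[1]), dst_box),
    )
```

For example, `refine_shift((0, 0, 2, 2), (1, 1, 3, 3))` recovers the unit translation `(1, 1)` that makes the boxes coincide.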
|
2501.11515
|
UltraFusion: Ultra High Dynamic Imaging using Exposure Fusion
|
cs.CV
|
Capturing high dynamic range (HDR) scenes is one of the most important issues
in camera design. The majority of cameras use the exposure fusion technique,
which fuses images captured at different exposure levels, to increase dynamic
range. However, this approach can only handle images with a limited exposure
difference, normally 3-4 stops. When applied to very high dynamic range scenes
where a large exposure difference is required, this approach often fails due
to incorrect alignment, inconsistent lighting between inputs, or tone mapping
artifacts. In this work, we propose UltraFusion, the first exposure fusion
technique that can merge inputs with a difference of 9 stops. The key idea is
that we model exposure fusion as a guided inpainting problem, where the
under-exposed image is used as guidance to fill in the missing information in
the over-exposed regions. Using the under-exposed image as a soft guidance,
instead of a hard constraint, makes our model robust to potential alignment
issues or lighting variations. Moreover, utilizing the image prior of the
generative model, our model also generates natural tone mapping, even for very
high dynamic range scenes. Our approach outperforms HDR-Transformer on the
latest HDR benchmarks. Moreover, to test its performance in ultra high dynamic
range scenes, we capture a new real-world exposure fusion benchmark, the
UltraFusion Dataset, with exposure differences up to 9 stops, and experiments
show that UltraFusion can generate beautiful and high-quality fusion results
under various
scenarios. An online demo is provided at
https://openimaginglab.github.io/UltraFusion/.
|
2501.11520
|
Fundus Image Quality Assessment and Enhancement: a Systematic Review
|
eess.IV cs.CV
|
As an affordable and convenient eye scan, fundus photography holds the
potential for preventing vision impairment, especially in resource-limited
regions. However, fundus image degradation is common under intricate imaging
environments, impacting subsequent diagnosis and treatment. Consequently, image
quality assessment (IQA) and enhancement (IQE) are essential for ensuring the
clinical value and reliability of fundus images. While existing reviews offer
some overview of this field, a comprehensive analysis of the interplay between
IQA and IQE, along with their clinical deployment challenges, is lacking. This
paper addresses this gap by providing a thorough review of fundus IQA and IQE
algorithms, research advancements, and practical applications. We outline the
fundamentals of the fundus photography imaging system and the associated
interferences, and then systematically summarize the paradigms in fundus IQA
and IQE. Furthermore, we discuss the practical challenges and solutions in
deploying IQA and IQE, as well as offer insights into potential future research
directions.
|
2501.11522
|
Optimal Trajectory Control of Geometrically Exact Strings with
Space-Time Finite Elements
|
eess.SY cs.SY
|
In this contribution, we present a variational space-time formulation which
generates an optimal feed-forward controller for geometrically exact strings.
More concretely, the optimization problem is solved with an indirect approach,
and the space-time finite element method translates the problem to a set of
algebraic equations. Thereby, only the positional field and the corresponding
adjoint variable field are approximated by continuous shape functions, which
makes the discretization of a velocity field unnecessary. In addition, the
variational formulation can be solved using commercial or open source finite
element packages. The entire approach can also be interpreted as a
multiple-shooting method for solving the optimality conditions based on the
semi-discrete problem. The performance of our approach is demonstrated by a
numerical test.
|
2501.11525
|
Technical Report for the Forgotten-by-Design Project: Targeted
Obfuscation for Machine Learning
|
cs.LG cs.AI cs.CR
|
The right to privacy, enshrined in various human rights declarations, faces
new challenges in the age of artificial intelligence (AI). This paper explores
the concept of the Right to be Forgotten (RTBF) within AI systems, contrasting
it with traditional data erasure methods. We introduce Forgotten by Design, a
proactive approach to privacy preservation that integrates instance-specific
obfuscation techniques during the AI model training process. Unlike machine
unlearning, which modifies models post-training, our method prevents sensitive
data from being embedded in the first place. Using the LIRA membership
inference attack, we identify vulnerable data points and propose defenses that
combine additive gradient noise and weighting schemes. Our experiments on the
CIFAR-10 dataset demonstrate that our techniques reduce privacy risks by at
least an order of magnitude while maintaining model accuracy (at 95%
significance). Additionally, we present visualization methods for the
privacy-utility trade-off, providing a clear framework for balancing privacy
risk and model accuracy. This work contributes to the development of
privacy-preserving AI systems that align with human cognitive processes of
motivated forgetting, offering a robust framework for safeguarding sensitive
information and ensuring compliance with privacy regulations.
|
2501.11526
|
Meta-Instance Selection. Instance Selection as a Classification Problem
with Meta-Features
|
cs.LG cs.AI
|
Data pruning, or instance selection, is an important problem in machine
learning, especially for the nearest neighbour classifier. However, while data
pruning speeds up the prediction phase, the selection process itself raises
issues of speed and efficiency. In response, the study proposes an
approach involving transforming the instance selection process into a
classification task conducted in a unified meta-feature space where each
instance can be classified and assigned to either the "to keep" or "to remove"
class. This approach requires training an appropriate meta-classifier, which
can be developed based on historical instance selection results from other
datasets using reference instance selection methods as a labeling tool. This
work proposes constructing the meta-feature space based on properties extracted
from the nearest neighbor graph. Experiments conducted on 17 datasets of
varying sizes and five reference instance selection methods (ENN, Drop3, ICF,
HMN-EI, and CCIS) demonstrate that the proposed solution achieves results
comparable to reference instance selection methods while significantly reducing
computational complexity. In the proposed approach, the computational
complexity of the system depends only on identifying the k-nearest neighbors
for each data sample and running the meta-classifier. Additionally, the study
discusses the choice of meta-classifier, recommending the use of Balanced
Random Forest.
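One plausible nearest-neighbour-graph meta-feature can be sketched as follows; the `knn_agreement` function is a hypothetical illustration of the kind of property extracted, not the study's exact feature set:

```python
def knn_agreement(points, labels, k=2):
    """Toy meta-feature from the nearest-neighbour graph: for each instance,
    the fraction of its k nearest neighbours (by squared Euclidean distance)
    that share its label. Low agreement may flag noisy points to remove."""
    feats = []
    for i, p in enumerate(points):
        dists = sorted(
            (sum((a - b) ** 2 for a, b in zip(p, q)), j)
            for j, q in enumerate(points) if j != i
        )
        neigh = [labels[j] for _, j in dists[:k]]
        feats.append(sum(l == labels[i] for l in neigh) / k)
    return feats
```

A meta-classifier would then consume such per-instance features and output the "to keep" / "to remove" decision.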
|
2501.11532
|
Early Stopping Bayesian Optimization for Controller Tuning
|
eess.SY cs.SY
|
Manual tuning of performance-critical controller parameters can be tedious
and sub-optimal. Bayesian Optimization (BO) is an increasingly popular
practical alternative to automatically optimize controller parameters from few
experiments. Standard BO practice is to evaluate the closed-loop performance of
parameters proposed during optimization on an episode with a fixed length.
However, fixed-length episodes can be wasteful. For example, continuing an
episode whose start already shows undesirable behavior, such as strong
oscillations, seems pointless. Therefore, we propose a BO method that stops an
episode early if suboptimality becomes apparent before an episode is completed.
Such early stopping results in partial observations of the controller's
performance, which cannot directly be included in standard BO. We propose three
heuristics to facilitate partially observed episodes in BO. Through five
numerical and one hardware experiment, we demonstrate that early stopping BO
can substantially reduce the time needed for optimization.
|
2501.11533
|
The impact of intrinsic rewards on exploration in Reinforcement Learning
|
cs.AI cs.LG
|
One of the open challenges in Reinforcement Learning is the hard exploration
problem in sparse reward environments. Various types of intrinsic rewards have
been proposed to address this challenge by pushing towards diversity. This
diversity might be imposed at different levels, favouring the agent to explore
different states, policies or behaviours (State, Policy and Skill level
diversity, respectively). However, the impact of diversity on the agent's
behaviour remains unclear. In this work, we aim to fill this gap by studying
the effect of different levels of diversity imposed by intrinsic rewards on the
exploration patterns of RL agents. We select four intrinsic rewards (State
Count, Intrinsic Curiosity Module (ICM), Maximum Entropy, and Diversity is all
you need (DIAYN)), each pushing for a different diversity level. We conduct an
empirical study in the MiniGrid environment to compare their impact on exploration
considering various metrics related to the agent's exploration, namely:
episodic return, observation coverage, agent's position coverage, policy
entropy, and timeframes to reach the sparse reward. The main outcome of the
study is that State Count leads to the best exploration performance in the case
of low-dimensional observations. However, in the case of RGB observations, the
performance of State Count is highly degraded mostly due to representation
learning challenges. Conversely, Maximum Entropy is less impacted, resulting in
more robust exploration, despite not always being optimal. Lastly, our
empirical study revealed that learning diverse skills with DIAYN, often linked
to improved robustness and generalisation, does not promote exploration in
MiniGrid environments. This is because: i) learning the skill space itself can
be challenging, and ii) exploration within the skill space prioritises
differentiating between behaviours rather than achieving uniform state
visitation.
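The State Count bonus discussed above is commonly implemented as an inverse-square-root visit count; a minimal sketch, assuming that standard form:

```python
from collections import Counter

def state_count_bonus(visits: Counter, state) -> float:
    """Count-based intrinsic reward 1/sqrt(N(s)): update the visit count for
    `state` and return a bonus that decays with repeated visits, pushing the
    agent toward rarely seen states."""
    visits[state] += 1
    return 1.0 / visits[state] ** 0.5
```

The first visit to a state yields a bonus of 1.0; subsequent visits yield progressively smaller bonuses.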
|
2501.11535
|
A baseline for machine-learning-based hepatocellular carcinoma diagnosis
using multi-modal clinical data
|
cs.CV
|
The objective of this paper is to provide a baseline for performing
multi-modal data classification on a novel open multimodal dataset of
hepatocellular carcinoma (HCC), which includes both image data
(contrast-enhanced CT and MRI images) and tabular data (the clinical laboratory
test data as well as case report forms). TNM staging is the classification
task. Features from the vectorized preprocessed tabular data and radiomics
features from contrast-enhanced CT and MRI images are collected. Feature
selection is performed based on mutual information. An XGBoost classifier
predicts the TNM staging and it shows a prediction accuracy of $0.89 \pm 0.05$
and an AUC of $0.93 \pm 0.03$. The classifier shows that this high level of
prediction accuracy can only be obtained by combining image and clinical
laboratory data and therefore is a good example case where multi-modal
classification is mandatory to achieve accurate results.
|
2501.11538
|
DenoMAE: A Multimodal Autoencoder for Denoising Modulation Signals
|
cs.LG
|
We propose the Denoising Masked Autoencoder (DenoMAE), a novel multimodal
autoencoder framework for denoising modulation signals during pretraining.
DenoMAE extends the concept of masked autoencoders by incorporating multiple
input modalities, including noise as an explicit modality, to enhance
cross-modal learning and improve denoising performance. The network is
pre-trained using unlabeled noisy modulation signals and constellation
diagrams, effectively learning to reconstruct their equivalent noiseless
signals and diagrams. DenoMAE achieves state-of-the-art accuracy in automatic
modulation classification tasks with significantly fewer training samples,
demonstrating a 10% reduction in unlabeled pretraining data and a 3% reduction
in labeled fine-tuning data compared to existing approaches. Moreover, our
model exhibits robust performance across varying signal-to-noise ratios (SNRs)
and supports extrapolation on unseen lower SNRs. The results indicate that
DenoMAE is an efficient, flexible, and data-efficient solution for denoising
and classifying modulation signals in challenging noise-intensive environments.
|
2501.11540
|
A Hands-free Spatial Selection and Interaction Technique using Gaze and
Blink Input with Blink Prediction for Extended Reality
|
cs.HC cs.LG
|
Gaze-based interaction techniques have created significant interest in the
field of spatial interaction. Many of these methods require additional input
modalities, such as hand gestures (e.g., gaze coupled with pinch). Those can be
uncomfortable and difficult to perform in public or limited spaces, and pose
challenges for users who are unable to execute pinch gestures. To address these
aspects, we propose a novel, hands-free Gaze+Blink interaction technique that
leverages the user's gaze and intentional eye blinks. This technique enables
users to perform selections by executing intentional blinks. It facilitates
continuous interactions, such as scrolling or drag-and-drop, through eye blinks
coupled with head movements. So far, this concept has not been explored for
hands-free spatial interaction techniques. We evaluated the performance and
user experience (UX) of our Gaze+Blink method with two user studies and
compared it with Gaze+Pinch in a realistic user interface setup featuring
common menu interaction tasks. Study 1 demonstrated that while Gaze+Blink
achieved comparable selection speeds, it was prone to accidental selections
resulting from unintentional blinks. In Study 2 we explored an enhanced
technique employing a deep learning algorithm to filter out unintentional
blinks.
|
2501.11542
|
DLinear-based Prediction of Remaining Useful Life of Lithium-Ion
Batteries: Feature Engineering through Explainable Artificial Intelligence
|
eess.SY cs.LG cs.SY
|
Accurate prediction of the Remaining Useful Life (RUL) of lithium-ion
batteries is essential for ensuring safety, reducing maintenance costs, and
optimizing usage. However, predicting RUL is challenging due to the nonlinear
characteristics of the degradation caused by complex chemical reactions.
Machine learning allows precise predictions by learning the latent functions of
degradation relationships based on cycling behavior. This study introduces an
accurate RUL prediction approach based on feature engineering and DLinear,
applied to the dataset from NASA's Prognostics Center of Excellence. Among the
20 features generated from current, voltage, temperature, and time provided in
this dataset, key features contributing to degradation are selected using
Pearson correlation coefficient and Shapley values. Shapley value-based feature
selection effectively reflects cell-to-cell variability, showing similar
importance rankings across all cells. The DLinear-based RUL prediction using
key features efficiently captures the time-series trend, demonstrating
significantly better performance compared to Long Short-Term Memory and
Transformer models.
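Pearson-based feature ranking of the kind described can be sketched in pure Python; the toy data and the `rank_features` helper are our illustration, not the study's pipeline:

```python
def pearson(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def rank_features(features, target):
    """Rank named features by |Pearson r| with the degradation target,
    strongest correlation first."""
    return sorted(features, key=lambda name: -abs(pearson(features[name], target)))
```

Features whose absolute correlation with the degradation target is highest would be kept as inputs to the DLinear predictor.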
|
2501.11549
|
Whose Boat Does it Float? Improving Personalization in Preference Tuning
via Inferred User Personas
|
cs.CL
|
LLMs are tuned to follow instructions (aligned) by learning which of two
outputs users prefer for a prompt. However, this preference data format does
not convey why users prefer responses that are chosen or rejected, so LLMs
trained on these datasets cannot tailor responses to varied user needs. To
surface these parameters of personalization, we apply abductive reasoning to
preference data, inferring needs and interests of users, i.e. personas, that
may prefer each output. We test this idea in two steps: Persona Inference
(PI)-abductively inferring personas of users who prefer chosen or rejected
outputs-and Persona Tailoring (PT)-training models to tailor responses to
personas from PI. We find: 1) LLMs infer personas that accurately explain why
different users may prefer either the chosen or the rejected outputs; 2) Training on
preference data augmented with PI personas via PT boosts personalization,
enabling models to support user-written personas; and 3) Rejected response
personas form harder personalization evaluations, showing PT better aids users
with uncommon preferences versus typical alignment methods. We argue for an
abductive view of preferences for personalization, asking not only which
response is better but when, why, and for whom.
|
2501.11551
|
PIKE-RAG: sPecIalized KnowledgE and Rationale Augmented Generation
|
cs.CL
|
Despite notable advancements in Retrieval-Augmented Generation (RAG) systems
that expand large language model (LLM) capabilities through external retrieval,
these systems often struggle to meet the complex and diverse needs of
real-world industrial applications. The reliance on retrieval alone proves
insufficient for extracting deep, domain-specific knowledge and performing
logical reasoning over specialized corpora. To address this, we introduce
sPecIalized KnowledgE and Rationale Augmented Generation (PIKE-RAG),
focusing on extracting, understanding, and applying specialized knowledge,
while constructing coherent rationale to incrementally steer LLMs toward
accurate responses. Recognizing the diverse challenges of industrial tasks, we
introduce a new paradigm that classifies tasks based on their complexity in
knowledge extraction and application, allowing for a systematic evaluation of
RAG systems' problem-solving capabilities. This strategic approach offers a
roadmap for the phased development and enhancement of RAG systems, tailored to
meet the evolving demands of industrial applications. Furthermore, we propose
knowledge atomizing and knowledge-aware task decomposition to effectively
extract multifaceted knowledge from the data chunks and iteratively construct
the rationale based on the original query and the accumulated knowledge,
respectively, showcasing exceptional performance across various benchmarks.
|
2501.11553
|
Clinically Ready Magnetic Microrobots for Targeted Therapies
|
cs.RO cond-mat.mtrl-sci cs.SY eess.SY physics.app-ph physics.bio-ph physics.med-ph
|
Systemic drug administration often causes off-target effects, limiting the
efficacy of advanced therapies. Targeted drug delivery approaches increase
local drug concentrations at the diseased site while minimizing systemic drug
exposure. We present a magnetically guided microrobotic drug delivery system
capable of precise navigation under physiological conditions. This platform
integrates a clinical electromagnetic navigation system, a custom-designed
release catheter, and a dissolvable capsule for accurate therapeutic delivery.
In vitro tests showed precise navigation in human vasculature models, and in
vivo experiments confirmed tracking under fluoroscopy and successful navigation
in large animal models. The microrobot balances magnetic material
concentration, contrast agent loading, and therapeutic drug capacity, enabling
effective hosting of therapeutics despite the integration complexity of its
components, offering a promising solution for precise targeted drug delivery.
|
2501.11554
|
Event-based vision for egomotion estimation using precise event timing
|
cs.CV cs.AR cs.RO
|
Egomotion estimation is crucial for applications such as autonomous
navigation and robotics, where accurate and real-time motion tracking is
required. However, traditional methods relying on inertial sensors are highly
sensitive to external conditions, and suffer from drifts leading to large
inaccuracies over long distances. Vision-based methods, particularly those
utilising event-based vision sensors, provide an efficient alternative by
capturing data only when changes are perceived in the scene. This approach
minimises power consumption while delivering high-speed, low-latency feedback.
In this work, we propose a fully event-based pipeline for egomotion estimation
that processes the event stream directly within the event-based domain. This
method eliminates the need for frame-based intermediaries, allowing for
low-latency and energy-efficient motion estimation. We construct a shallow
spiking neural network using a synaptic gating mechanism to convert precise
event timing into bursts of spikes. These spikes encode local optical flow
velocities, and the network provides an event-based readout of egomotion. We
evaluate the network's performance on a dedicated chip, demonstrating strong
potential for low-latency, low-power motion estimation. Additionally,
simulations of larger networks show that the system achieves state-of-the-art
accuracy in egomotion estimation tasks with event-based cameras, making it a
promising solution for real-time, power-constrained robotics applications.
|
2501.11555
|
Beyond R-barycenters: an effective averaging method on Stiefel and
Grassmann manifolds
|
stat.ML cs.LG
|
In this paper, the issue of averaging data on a manifold is addressed. While
the Fr\'echet mean resulting from Riemannian geometry appears ideal, it is
unfortunately not always available and often computationally very expensive. To
overcome this, R-barycenters have been proposed and successfully applied to
Stiefel and Grassmann manifolds. However, R-barycenters still suffer severe
limitations as they rely on iterative algorithms and complicated operators. We
propose simpler, yet efficient, barycenters that we call RL-barycenters. We
show that, in the setting relevant to most applications, our framework yields
astonishingly simple barycenters: arithmetic means projected onto the manifold.
We apply this approach to the Stiefel and Grassmann manifolds. On simulated
data, our approach is competitive with existing averaging methods while being
computationally cheaper.
|
2501.11557
|
Secure Resource Allocation via Constrained Deep Reinforcement Learning
|
cs.LG
|
The proliferation of Internet of Things (IoT) devices and the advent of 6G
technologies have introduced computationally intensive tasks that often surpass
the processing capabilities of user devices. Efficient and secure resource
allocation in serverless multi-cloud edge computing environments is essential
for supporting these demands and advancing distributed computing. However,
existing solutions frequently struggle with the complexity of multi-cloud
infrastructures, robust security integration, and effective application of
traditional deep reinforcement learning (DRL) techniques under system
constraints. To address these challenges, we present SARMTO, a novel framework
that integrates an action-constrained DRL model. SARMTO dynamically balances
resource allocation, task offloading, security, and performance by utilizing a
Markov decision process formulation, an adaptive security mechanism, and
sophisticated optimization techniques. Extensive simulations across varying
scenarios, including different task loads, data sizes, and MEC capacities, show
that SARMTO consistently outperforms five baseline approaches, achieving up to
a 40% reduction in system costs and a 41.5% improvement in energy efficiency
over state-of-the-art methods. These enhancements highlight SARMTO's potential
to revolutionize resource management in intricate distributed computing
environments, opening the door to more efficient and secure IoT and edge
computing applications.
|
2501.11560
|
Explainable Lane Change Prediction for Near-Crash Scenarios Using
Knowledge Graph Embeddings and Retrieval Augmented Generation
|
cs.LG cs.AI cs.CL cs.IR
|
Lane-changing maneuvers, particularly those executed abruptly or in risky
situations, are a significant cause of road traffic accidents. However, current
research mainly focuses on predicting safe lane changes. Furthermore, existing
accident datasets are often based on images only and lack comprehensive sensory
data. In this work, we focus on predicting risky lane changes using the CRASH
dataset (our own collected dataset specifically for risky lane changes), and
safe lane changes (using the HighD dataset). Then, we leverage knowledge graphs (KGs) and Bayesian
inference to predict these maneuvers using linguistic contextual information,
enhancing the model's interpretability and transparency. The model achieved a
91.5% f1-score with anticipation time extending to four seconds for risky lane
changes, and a 90.0% f1-score for predicting safe lane changes with the same
anticipation time. We validate our model by integrating it into a vehicle
within the CARLA simulator in scenarios that involve risky lane changes. The
model managed to anticipate sudden lane changes, thus providing automated
vehicles with further time to plan and execute appropriate safe reactions.
Finally, to enhance the explainability of our model, we utilize
retrieval-augmented generation (RAG) to provide clear and natural language
explanations for the given prediction.
|
2501.11561
|
Teaching Large Language Models to Regress Accurate Image Quality Scores
using Score Distribution
|
cs.CV
|
With the rapid advancement of Multi-modal Large Language Models (MLLMs),
MLLM-based Image Quality Assessment (IQA) methods have shown promising
performance in linguistic quality description. However, current methods still
fall short in accurately scoring image quality. In this work, we aim to
leverage MLLMs to regress accurate quality scores. A key challenge is that the
quality score is inherently continuous, typically modeled as a Gaussian
distribution, whereas MLLMs generate discrete token outputs. This mismatch
necessitates score discretization. Previous approaches discretize the mean
score into a one-hot label, resulting in information loss and failing to
capture inter-image relationships. We propose a distribution-based approach
that discretizes the score distribution into a soft label. This method
preserves the characteristics of the score distribution, achieving high
accuracy and maintaining inter-image relationships. Moreover, to address
dataset variation, where different IQA datasets exhibit various distributions,
we introduce a fidelity loss based on Thurstone's model. This loss captures
intra-dataset relationships, facilitating co-training across multiple IQA
datasets. With these designs, we develop the distribution-based Depicted image
Quality Assessment model for Score regression (DeQA-Score). Experiments across
multiple benchmarks show that DeQA-Score stably outperforms baselines in score
regression. Also, DeQA-Score can predict the score distribution that closely
aligns with human annotations. Code and model weights have been released at
https://depictqa.github.io/deqa-score/.
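The soft-label discretization described above can be sketched as follows: a Gaussian score distribution is binned over discrete score levels, preserving its shape instead of collapsing to a one-hot mean. The five-level scale, mean, and standard deviation below are illustrative assumptions, not the paper's implementation.

```python
import math

def soft_label(mu, sigma, levels=(1, 2, 3, 4, 5)):
    """Discretize a Gaussian score distribution N(mu, sigma^2) into a
    soft label: each level receives the probability mass within +/-0.5
    of it, and the vector is renormalized to sum to one."""
    def cdf(x):  # Gaussian CDF via the error function
        return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))
    masses = [cdf(l + 0.5) - cdf(l - 0.5) for l in levels]
    total = sum(masses)
    return [m / total for m in masses]

label = soft_label(mu=3.2, sigma=0.7)
print([round(p, 3) for p in label])  # peaks at level 3, spread to neighbors
```

Unlike a one-hot label, nearby images with overlapping score distributions get overlapping soft labels, which is what preserves inter-image relationships.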
|
2501.11568
|
Graph Defense Diffusion Model
|
cs.LG
|
Graph Neural Networks (GNNs) demonstrate significant potential in various
applications but remain highly vulnerable to adversarial attacks, which can
greatly degrade their performance. Existing graph purification methods attempt
to address this issue by filtering attacked graphs; however, they struggle to
effectively defend against multiple types of adversarial attacks simultaneously
due to their limited flexibility, and they lack comprehensive modeling of graph
data due to their heavy reliance on heuristic prior knowledge. To overcome
these challenges, we propose a more versatile approach for defending against
adversarial attacks on graphs. In this work, we introduce the Graph Defense
Diffusion Model (GDDM), a flexible purification method that leverages the
denoising and modeling capabilities of diffusion models. The iterative nature
of diffusion models aligns well with the stepwise process of adversarial
attacks, making them particularly suitable for defense. By iteratively adding
and removing noise, GDDM effectively purifies attacked graphs, restoring their
original structure and features. Our GDDM consists of two key components: (1)
Graph Structure-Driven Refiner, which preserves the basic fidelity of the graph
during the denoising process, and ensures that the generated graph remains
consistent with the original scope; and (2) Node Feature-Constrained
Regularizer, which removes residual impurities from the denoised graph, further
enhancing the purification effect. Additionally, we design tailored denoising
strategies to handle different types of adversarial attacks, improving the
model's adaptability to various attack scenarios. Extensive experiments
conducted on three real-world datasets demonstrate that GDDM outperforms
state-of-the-art methods in defending against a wide range of adversarial
attacks, showcasing its robustness and effectiveness.
|
2501.11570
|
Uncertainty Estimation in the Real World: A Study on Music Emotion
Recognition
|
cs.SD cs.IR cs.LG eess.AS
|
Any data annotation for subjective tasks shows potential variations between
individuals. This is particularly true for annotations of emotional responses
to musical stimuli. While older approaches to music emotion recognition systems
frequently addressed this uncertainty problem through probabilistic modeling,
modern systems based on neural networks tend to ignore the variability and
focus only on predicting central tendencies of human subjective responses. In
this work, we explore several methods for estimating not only the central
tendencies of the subjective responses to a musical stimulus, but also for
estimating the uncertainty associated with these responses. In particular, we
investigate probabilistic loss functions and inference-time random sampling.
Experimental results indicate that while the modeling of the central tendencies
is achievable, modeling of the uncertainty in subjective responses proves
significantly more challenging with currently available approaches even when
empirical estimates of variations in the responses are available.
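Inference-time random sampling, one of the methods investigated above, can be illustrated as a minimal sketch: a stochastic predictor is run repeatedly and the spread of its outputs serves as an uncertainty estimate. The noisy valence predictor and sample count below are hypothetical stand-ins, not the paper's models.

```python
import random

def predict_with_uncertainty(stochastic_model, x, n_samples=100, seed=0):
    """Run a stochastic predictor repeatedly; the sample mean estimates
    the central tendency and the sample spread the uncertainty."""
    random.seed(seed)
    preds = [stochastic_model(x) for _ in range(n_samples)]
    mean = sum(preds) / n_samples
    std = (sum((p - mean) ** 2 for p in preds) / n_samples) ** 0.5
    return mean, std

# Hypothetical noisy valence predictor standing in for a trained model.
noisy_model = lambda x: 0.6 * x + random.gauss(0.0, 0.1)
mean, std = predict_with_uncertainty(noisy_model, x=1.0)
print(round(mean, 2), round(std, 2))  # mean near 0.6, std near 0.1
```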
|
2501.11577
|
Rethinking Membership Inference Attacks Against Transfer Learning
|
cs.CR cs.LG
|
Transfer learning, successful in knowledge translation across related tasks,
faces a substantial privacy threat from membership inference attacks (MIAs).
These attacks, despite posing a significant risk to an ML model's training
data, remain under-explored in transfer learning. The interaction between teacher
and student models in transfer learning has not been thoroughly explored in
MIAs, potentially resulting in an under-examined aspect of privacy
vulnerabilities within transfer learning. In this paper, we propose a new MIA
vector against transfer learning, to determine whether a specific data point
was used to train the teacher model while only accessing the student model in a
white-box setting. Our method delves into the intricate relationship between
teacher and student models, analyzing the discrepancies in hidden layer
representations between the student model and its shadow counterpart. These
identified differences are then adeptly utilized to refine the shadow model's
training process and to inform membership inference decisions effectively. Our
method, evaluated across four datasets in diverse transfer learning tasks,
reveals that even when an attacker only has access to the student model, the
teacher model's training data remains susceptible to MIAs. We believe our work
unveils the unexplored risk of membership inference in transfer learning.
|
2501.11584
|
GCSAM: Gradient Centralized Sharpness Aware Minimization
|
cs.LG
|
The generalization performance of deep neural networks (DNNs) is a critical
factor in achieving robust model behavior on unseen data. Recent studies have
highlighted the importance of sharpness-based measures in promoting
generalization by encouraging convergence to flatter minima. Among these
approaches, Sharpness-Aware Minimization (SAM) has emerged as an effective
optimization technique for reducing the sharpness of the loss landscape,
thereby improving generalization. However, SAM's computational overhead and
sensitivity to noisy gradients limit its scalability and efficiency. To address
these challenges, we propose Gradient-Centralized Sharpness-Aware Minimization
(GCSAM), which incorporates Gradient Centralization (GC) to stabilize gradients
and accelerate convergence. GCSAM normalizes gradients before the ascent step,
reducing noise and variance, and improving stability during training. Our
evaluations indicate that GCSAM consistently outperforms SAM and the Adam
optimizer in terms of generalization and computational efficiency. These
findings demonstrate GCSAM's effectiveness across diverse domains, including
general and medical imaging tasks.
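The gradient-centralization step that GCSAM applies before the ascent step can be sketched as follows (a generic GC operation on a weight-matrix gradient; the exact placement and tensor layout in GCSAM are not reproduced here):

```python
def centralize_gradient(grad_rows):
    """Subtract each row's mean so the gradient of a weight matrix has
    zero mean per output neuron (the GC operation, in plain Python)."""
    return [[g - sum(row) / len(row) for g in row] for row in grad_rows]

grad = [[0.5, 0.1, -0.3], [1.0, 1.0, 1.0]]
centralized = centralize_gradient(grad)
print(centralized)  # each row now sums to (numerically) zero
```

Removing the per-row mean reduces gradient variance, which is the mechanism the abstract credits for stabilizing SAM's perturbation step.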
|
2501.11586
|
Compressibility Analysis for the differentiable shift-variant Filtered
Backprojection Model
|
cs.CV eess.IV
|
The differentiable shift-variant filtered backprojection (FBP) model enables
the reconstruction of cone-beam computed tomography (CBCT) data for arbitrary
non-circular trajectories. This method employs deep learning techniques to
estimate the redundancy weights required for reconstruction, given knowledge of
the specific trajectory at optimization time. However, computing the redundancy
weight for each projection remains computationally intensive. This paper
presents a novel approach to compress and optimize the differentiable
shift-variant FBP model based on Principal Component Analysis (PCA). We apply
PCA to the redundancy weights learned from sinusoidal trajectory projection
data, revealing significant parameter redundancy in the original model. By
integrating PCA directly into the differentiable shift-variant FBP
reconstruction pipeline, we develop a method that decomposes the redundancy
weight layer parameters into a trainable eigenvector matrix, compressed
weights, and a mean vector. This innovative technique achieves a remarkable
97.25% reduction in trainable parameters without compromising reconstruction
accuracy. As a result, our algorithm significantly decreases the complexity of
the differentiable shift-variant FBP model and greatly improves training speed.
These improvements make the model substantially more practical for real-world
applications.
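The PCA decomposition described above, splitting a redundancy-weight layer into a mean vector, an eigenvector basis, and compressed weights, can be sketched as follows. The matrix sizes and the number of kept components are illustrative, not the paper's dimensions.

```python
import numpy as np

# Toy redundancy-weight layer: 200 projections x 64 weights each
# (illustrative sizes, not the paper's dimensions).
rng = np.random.default_rng(0)
weights = rng.normal(size=(200, 64))

mean = weights.mean(axis=0)                  # mean vector
centered = weights - mean
_, _, vt = np.linalg.svd(centered, full_matrices=False)
k = 8                                        # number of kept components
eigvecs = vt[:k].T                           # 64 x k trainable eigenvector matrix
compressed = centered @ eigvecs              # 200 x k compressed weights
reconstructed = compressed @ eigvecs.T + mean

original_params = weights.size
kept_params = eigvecs.size + compressed.size + mean.size
print(compressed.shape, kept_params < original_params)  # (200, 8) True
```

Only the three small factors need to be stored and trained, which is how the parameter-count reduction reported above is obtained.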
|
2501.11587
|
Recurrent Diffusion for Large-Scale Parameter Generation
|
cs.LG cs.AI
|
Parameter generation has long struggled to match the scale of today's large
vision and language models, curbing its broader utility. In this paper, we
introduce Recurrent Diffusion for Large-Scale Parameter Generation (RPG), a
novel framework that generates full neural network parameters, up to hundreds of
millions, on a single GPU. Our approach first partitions a network's parameters
into non-overlapping tokens, each corresponding to a distinct portion of the
model. A recurrent mechanism then learns the inter-token relationships,
producing prototypes that serve as conditions for a diffusion process that
ultimately synthesizes the full parameters. Across a spectrum of architectures
and tasks, including ResNets, ConvNeXts, and ViTs on ImageNet-1K and COCO, and
even LoRA-based LLMs, RPG achieves performance on par with fully trained
networks while avoiding excessive memory overhead. Notably, it generalizes
beyond its training set to generate valid parameters for previously unseen
tasks, highlighting its flexibility in dynamic and open-ended scenarios. By
overcoming the longstanding memory and scalability barriers, RPG serves as a
critical advance in AI generating AI, potentially enabling efficient weight
generation at scales previously deemed infeasible.
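RPG's first step, partitioning a flat parameter vector into non-overlapping tokens, can be sketched as follows (the token size and zero-padding policy are assumptions for illustration):

```python
def tokenize_parameters(flat_params, token_size):
    """Partition a flat parameter vector into non-overlapping,
    fixed-size tokens, zero-padding the tail (assumed policy)."""
    pad = (-len(flat_params)) % token_size
    padded = list(flat_params) + [0.0] * pad
    return [padded[i:i + token_size]
            for i in range(0, len(padded), token_size)]

tokens = tokenize_parameters([0.1] * 10, token_size=4)
print(len(tokens), len(tokens[-1]))  # 3 4
```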
|
2501.11592
|
Training-free Ultra Small Model for Universal Sparse Reconstruction in
Compressed Sensing
|
cs.LG cs.AI cs.CL
|
Pre-trained large models attract widespread attention in recent years, but
they face challenges in applications that require high interpretability or have
limited resources, such as physical sensing, medical imaging, and
bioinformatics. Compressed Sensing (CS) is a well-proven theory that drives
many recent breakthroughs in these applications. However, as a typical
under-determined linear system, CS suffers from excessively long sparse
reconstruction times when using traditional iterative methods, particularly
with large-scale data. Current AI methods like deep unfolding fail to
substitute them because pre-trained models exhibit poor generality beyond their
training conditions and dataset distributions, or lack interpretability.
Instead of following the big model fervor, this paper proposes ultra-small
artificial neural models called coefficients learning (CL), enabling
training-free and rapid sparse reconstruction while perfectly inheriting the
generality and interpretability of traditional iterative methods, while adding
the new capability of incorporating prior knowledge. In CL, a signal of length
$n$ needs only a minimum of $n$ trainable parameters. A case study model called CLOMP is
implemented for evaluation. Experiments are conducted on both synthetic and
real one-dimensional and two-dimensional signals, demonstrating significant
improvements in efficiency and accuracy. Compared to representative iterative
methods, CLOMP improves efficiency by 100- to 1000-fold for large-scale data.
Test results on eight diverse image datasets indicate that CLOMP improves
structural similarity index by 292%, 98%, 45% for sampling rates of 0.1, 0.3,
0.5, respectively. We believe this method can truly usher CS reconstruction
into the AI era, benefiting countless under-determined linear systems that rely
on sparse solutions.
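The coefficients-learning idea, treating the length-$n$ signal itself as the only trainable parameters, can be sketched with a simple proximal-gradient (ISTA) fit. This is a generic stand-in, not the paper's CLOMP model, and the system sizes and hyperparameters are illustrative.

```python
import numpy as np

def cl_reconstruct(A, y, steps=1000, lr=0.01, lam=0.05):
    """Fit the sparse signal x directly by proximal gradient descent
    (ISTA): the n entries of x are the only trainable parameters."""
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        x = x - lr * (A.T @ (A @ x - y))  # gradient of 0.5*||Ax - y||^2
        x = np.sign(x) * np.maximum(np.abs(x) - lr * lam, 0.0)  # soft threshold
    return x

rng = np.random.default_rng(1)
A = rng.normal(size=(40, 100)) / np.sqrt(40)  # under-determined system
x_true = np.zeros(100)
x_true[[3, 50, 77]] = [1.0, -0.8, 0.5]
y = A @ x_true
x_hat = cl_reconstruct(A, y)
print(round(float(np.linalg.norm(x_hat - x_true)), 3))
```

Because nothing is pre-trained, the same routine applies unchanged to any measurement matrix, which is the generality the abstract emphasizes.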
|
2501.11593
|
Optimal User and Target Scheduling, User-Target Pairing, and
Low-Resolution Phase-Only Beamforming for ISAC Systems
|
eess.SP cs.IT cs.NI math.IT
|
We investigate the joint user and target scheduling, user-target pairing, and
low-resolution phase-only beamforming design for integrated sensing and
communications (ISAC). Scheduling determines which users and targets are
served, while pairing specifies which users and targets are grouped into pairs.
Additionally, the beamformers are designed using few-bit constant-modulus phase
shifts. This resource allocation problem is a nonconvex mixed-integer nonlinear
program (MINLP) and challenging to solve. To address it, we propose an exact
mixed-integer linear program (MILP) reformulation, which leads to a globally
optimal solution. Our results demonstrate the superiority of an optimal joint
design compared to heuristic stage-wise approaches, which are highly sensitive
to scenario characteristics.
|
2501.11597
|
Fairness Testing through Extreme Value Theory
|
cs.SE cs.AI cs.CY cs.LG
|
Data-driven software is increasingly being used as a critical component of
automated decision-support systems. Since this class of software learns its
logic from historical data, it can encode or amplify discriminatory practices.
Previous research on algorithmic fairness has focused on improving average-case
fairness. On the other hand, fairness at the extreme ends of the spectrum,
which often signifies lasting and impactful shifts in societal attitudes, has
received significantly less emphasis.
Leveraging the statistics of extreme value theory (EVT), we propose a novel
fairness criterion called extreme counterfactual discrimination (ECD). This
criterion estimates the worst-case amounts of disadvantage in outcomes for
individuals solely based on their memberships in a protected group. Utilizing
tools from search-based software engineering and generative AI, we present a
randomized algorithm that samples a statistically significant set of points
from the tail of ML outcome distributions even if the input dataset lacks a
sufficient number of relevant samples.
We conducted several experiments on four ML models (deep neural networks,
logistic regression, and random forests) over 10 socially relevant tasks from
the literature on algorithmic fairness. First, we evaluate the generative AI
methods and find that they generate sufficient samples to infer valid EVT
distribution in 95% of cases. Remarkably, we found that the prevalent bias
mitigators reduce the average-case discrimination but increase the worst-case
discrimination significantly in 5% of cases. We also observed that even the
tail-aware mitigation algorithm -- MiniMax-Fairness -- increased the worst-case
discrimination in 30% of cases. We propose a novel ECD-based mitigator that
improves fairness in the tail in 90% of cases with no degradation of the
average-case discrimination.
|
2501.11599
|
SR-FoT: A Syllogistic-Reasoning Framework of Thought for Large Language
Models Tackling Knowledge-based Reasoning Tasks
|
cs.AI cs.CL
|
Deductive reasoning is a crucial logical capability that assists us in
solving complex problems based on existing knowledge. Even when augmented with
Chain-of-Thought prompts, Large Language Models (LLMs) might not follow the
correct reasoning paths. Enhancing the deductive reasoning abilities of LLMs,
and leveraging their extensive built-in knowledge for various reasoning tasks,
remains an open question. Attempting to mimic the human deductive reasoning
paradigm, we propose a multi-stage Syllogistic-Reasoning Framework of Thought
(SR-FoT) that enables LLMs to perform syllogistic deductive reasoning to handle
complex knowledge-based reasoning tasks. Our SR-FoT begins by interpreting the
question and then uses the interpretation and the original question to propose
a suitable major premise. It proceeds by generating and answering minor premise
questions in two stages to match the minor premises. Finally, it guides LLMs to
use the previously generated major and minor premises to perform syllogistic
deductive reasoning to derive the answer to the original question. Extensive
and thorough experiments on knowledge-based reasoning tasks have demonstrated
the effectiveness and advantages of our SR-FoT.
|
2501.11605
|
Bootstrapping Social Networks: Lessons from Bluesky Starter Packs
|
cs.SI cs.NI
|
Microblogging is a crucial mode of online communication. However, launching a
new microblogging platform remains challenging, largely due to network effects.
This has resulted in entrenched (and undesirable) dominance by established
players, such as X/Twitter. To overcome these network effects, Bluesky, an
emerging microblogging platform, introduced starter packs -- curated lists of
accounts that users can follow with a single click. We ask whether starter packs
have the potential to tackle the critical problem of social bootstrapping in
new online social networks. This paper is the first to address this question:
we assess whether starter packs have indeed been helpful in supporting Bluesky's
growth. Our dataset includes $25.05 \times 10^6$ users and $335.42 \times 10^3$
starter packs with $1.73 \times 10^6$ members, covering the entire lifecycle of
Bluesky. We study the usage of these starter packs, their ability to drive
network and activity growth, and their potential downsides. We also quantify
the benefits of starter packs for members and creators on user visibility and
activity while identifying potential challenges. By evaluating starter packs'
effectiveness and limitations, we contribute to the broader discourse on
platform growth strategies and competitive innovation in the social media
landscape.
|
2501.11613
|
Conversation Routines: A Prompt Engineering Framework for Task-Oriented
Dialog Systems
|
cs.CL cs.AI cs.ET cs.HC cs.PL
|
This study introduces Conversation Routines (CR), a structured prompt
engineering framework for developing task-oriented dialog systems using Large
Language Models (LLMs). While LLMs demonstrate remarkable natural language
understanding capabilities, engineering them to reliably execute complex
business workflows remains challenging. The proposed CR framework enables the
development of Conversation Agentic Systems (CAS) through natural language
specifications, embedding task-oriented logic within LLM prompts. This approach
provides a systematic methodology for designing and implementing complex
conversational workflows while maintaining behavioral consistency. We
demonstrate the framework's effectiveness through two proof-of-concept
implementations: a Train Ticket Booking System and an Interactive
Troubleshooting Copilot. These case studies validate CR's capability to encode
sophisticated behavioral patterns and decision logic while preserving natural
conversational flexibility. Results show that CR enables domain experts to
design conversational workflows in natural language while leveraging custom
functions (tools) developed by software engineers, creating an efficient
division of responsibilities where developers focus on core API implementation
and domain experts handle conversation design. While the framework shows
promise in accessibility and adaptability, we identify key challenges including
computational overhead, non-deterministic behavior, and domain-specific logic
optimization. Future research directions include CR evaluation methods based on
prompt engineering frameworks driven by goal-oriented grading criteria,
improving scalability for complex multi-agent interactions, and enhancing
system robustness to address the identified limitations across diverse business
applications.
|
2501.11621
|
Trojan Detection Through Pattern Recognition for Large Language Models
|
cs.CL cs.LG
|
Trojan backdoors can be injected into large language models at various
stages, including pretraining, fine-tuning, and in-context learning, posing a
significant threat to the model's alignment. Due to the nature of causal
language modeling, detecting these triggers is challenging given the vast
search space. In this study, we propose a multistage framework for detecting
Trojan triggers in large language models consisting of token filtration,
trigger identification, and trigger verification. We discuss existing trigger
identification methods and propose two variants of a black-box trigger
inversion method that rely on output logits, utilizing beam search and greedy
decoding respectively. We show that the verification stage is critical in the
process and propose semantic-preserving prompts and special perturbations to
differentiate between actual Trojan triggers and other adversarial strings that
display similar characteristics. The evaluation of our approach on the TrojAI
and RLHF poisoned model datasets demonstrates promising results.
|
2501.11622
|
Causal Learning for Heterogeneous Subgroups Based on Nonlinear Causal
Kernel Clustering
|
cs.LG stat.ML
|
Due to the challenge posed by multi-source and heterogeneous data collected
from diverse environments, causal relationships among features can exhibit
variations influenced by different time spans, regions, or strategies. This
diversity makes a single causal model inadequate for accurately representing
complex causal relationships in all observational data, a crucial consideration
in causal learning. To address this challenge, the nonlinear Causal Kernel
Clustering method is introduced for heterogeneous subgroup causal learning,
highlighting variations in causal relationships across diverse subgroups. The
main component for clustering heterogeneous subgroups lies in the construction
of the $u$-centered sample mapping function with the property of unbiased
estimation, which assesses the differences in potential nonlinear causal
relationships in various samples and is supported by causal identifiability
theory. Experimental results indicate that the method performs well in
identifying heterogeneous subgroups and enhancing causal learning, leading to a
reduction in prediction error.
|
2501.11623
|
Early evidence of how LLMs outperform traditional systems on OCR/HTR
tasks for historical records
|
cs.CV cs.AI cs.LG
|
We explore the ability of two LLMs -- GPT-4o and Claude Sonnet 3.5 -- to
transcribe historical handwritten documents in a tabular format and compare
their performance to traditional OCR/HTR systems: EasyOCR, Keras, Pytesseract,
and TrOCR. Considering the tabular form of the data, two types of experiments
are executed: one where the images are split line by line and the other where
the entire scan is used as input. Based on CER and BLEU, we demonstrate that
LLMs outperform the conventional OCR/HTR methods. Moreover, we also compare the
evaluated CER and BLEU scores to human evaluations to better judge the outputs
of whole-scan experiments and understand influential factors for CER and BLEU.
Combining judgments from all the evaluation metrics, we conclude that two-shot
GPT-4o for line-by-line images and two-shot Claude Sonnet 3.5 for whole-scan
images yield the transcriptions of the historical records most similar to the
ground truth.
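The CER metric used in this comparison is the Levenshtein edit distance between the transcription and the ground truth, normalized by reference length; a minimal sketch (the standard definition, not the authors' evaluation script):

```python
def edit_distance(a, b):
    """Levenshtein distance via a rolling two-row dynamic program."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                # deletion
                            curr[j - 1] + 1,            # insertion
                            prev[j - 1] + (ca != cb)))  # substitution
        prev = curr
    return prev[-1]

def cer(reference, hypothesis):
    """Character Error Rate: edits needed, normalized by reference length."""
    return edit_distance(reference, hypothesis) / max(len(reference), 1)

print(round(cer("1875 birth record", "1975 birth record"), 3))  # 0.059
```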
|
2501.11626
|
DRL-Based Maximization of the Sum Cross-Layer Achievable Rate for
Networks Under Jamming
|
eess.SY cs.SY
|
In quasi-static wireless networks characterized by infrequent changes in the
transmission schedules of user equipment (UE), malicious jammers can easily
deteriorate network performance. Accordingly, a key challenge in these networks
is managing channel access amidst jammers and under dynamic channel conditions.
In this context, we propose a robust learning-based mechanism for channel
access in multi-cell quasi-static networks under jamming. The network comprises
multiple legitimate UEs, including predefined UEs (pUEs) with stochastic
predefined schedules and an intelligent UE (iUE) with an undefined transmission
schedule, all transmitting over a shared, time-varying uplink channel. Jammers
transmit unwanted packets to disturb the pUEs' and the iUE's communication. The
iUE's learning process is based on the deep reinforcement learning (DRL)
framework, utilizing a residual network (ResNet)-based deep Q-Network (DQN). To
coexist in the network and maximize the network's sum cross-layer achievable
rate (SCLAR), the iUE must learn the unknown network dynamics while
concurrently adapting to dynamic channel conditions. Our simulation results
reveal that, with properly defined state space, action space, and rewards in
DRL, the iUE can effectively coexist in the network, maximizing channel
utilization and the network's SCLAR by judiciously selecting transmission time
slots and thus avoiding collisions and jamming.
|
2501.11628
|
Investigating the Scalability of Approximate Sparse Retrieval Algorithms
to Massive Datasets
|
cs.IR
|
Learned sparse text embeddings have gained popularity due to their
effectiveness in top-k retrieval and inherent interpretability. Their
distributional idiosyncrasies, however, have long hindered their use in
real-world retrieval systems. That changed with the recent development of
approximate algorithms that leverage the distributional properties of sparse
embeddings to speed up retrieval. Nonetheless, in much of the existing
literature, evaluation has been limited to datasets with only a few million
documents such as MS MARCO. It remains unclear how these systems behave on much
larger datasets and what challenges lurk in larger scales. To bridge that gap,
we investigate the behavior of state-of-the-art retrieval algorithms on massive
datasets. We compare and contrast the recently proposed Seismic and graph-based
solutions adapted from dense retrieval. We extensively evaluate Splade
embeddings of 138M passages from MS MARCO v2 and report indexing time and other
efficiency and effectiveness metrics.
|
2501.11631
|
Noise-Agnostic Multitask Whisper Training for Reducing False Alarm
Errors in Call-for-Help Detection
|
cs.SD cs.AI eess.AS
|
Keyword spotting is often implemented by attaching a keyword classifier to the
encoder of an acoustic model, enabling the classification of predefined or
open-vocabulary keywords. Although keyword spotting is a crucial task in
various applications and can be extended to call-for-help detection in
emergencies, previous methods often suffer from scalability limitations due to
the retraining required to introduce new keywords or adapt to changing contexts. We explore a
simple yet effective approach that leverages off-the-shelf pretrained ASR
models to address these challenges, especially in call-for-help detection
scenarios. Furthermore, we observed a substantial increase in false alarms when
deploying a call-for-help detection system in real-world scenarios due to noise
introduced by microphones or different environments. To address this, we
propose a novel noise-agnostic multitask learning approach that integrates a
noise classification head into the ASR encoder. Our method enhances the model's
robustness to noisy environments, leading to a significant reduction in false
alarms and improved overall call-for-help performance. Despite the added
complexity of multitask learning, our approach is computationally efficient and
provides a promising solution for call-for-help detection in real-world
scenarios.
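The multitask objective described above, an ASR loss combined with a cross-entropy loss from a noise-classification head on the encoder, can be sketched as a weighted sum; the weighting factor and the logits below are illustrative assumptions, not the paper's values.

```python
import math

def noise_multitask_loss(asr_loss, noise_logits, noise_label, alpha=0.3):
    """Combine the ASR loss with cross-entropy from a noise-classification
    head via a weighted sum; alpha is an assumed weighting."""
    m = max(noise_logits)  # stable log-sum-exp
    log_z = m + math.log(sum(math.exp(l - m) for l in noise_logits))
    noise_ce = log_z - noise_logits[noise_label]
    return asr_loss + alpha * noise_ce

loss = noise_multitask_loss(asr_loss=1.2, noise_logits=[2.0, 0.1, -1.0],
                            noise_label=0)
print(round(loss, 3))  # slightly above the ASR loss alone
```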
|
2501.11632
|
Biomedical Knowledge Graph: A Survey of Domains, Tasks, and Real-World
Applications
|
cs.CL cs.AI cs.CE cs.IR
|
Biomedical knowledge graphs (BKGs) have emerged as powerful tools for
organizing and leveraging the vast and complex data found across the biomedical
field. Yet, current reviews of BKGs often limit their scope to specific domains
or methods, overlooking the broader landscape and the rapid technological
progress reshaping it. In this survey, we address this gap by offering a
systematic review of BKGs from three core perspectives: domains, tasks, and
applications. We begin by examining how BKGs are constructed from diverse data
sources, including molecular interactions, pharmacological datasets, and
clinical records. Next, we discuss the essential tasks enabled by BKGs,
focusing on knowledge management, retrieval, reasoning, and interpretation.
Finally, we highlight real-world applications in precision medicine, drug
discovery, and scientific research, illustrating the translational impact of
BKGs across multiple sectors. By synthesizing these perspectives into a unified
framework, this survey not only clarifies the current state of BKG research but
also establishes a foundation for future exploration, enabling both innovative
methodological advances and practical implementations.
|
2501.11633
|
PSO-based Sliding Mode Current Control of Grid-Forming Inverter in
Rotating Frame
|
eess.SY cs.SY
|
The Grid-Forming Inverter (GFMI) is an emerging topic that is attracting
significant attention from both academic and industrial communities,
particularly in the area of control design. The Decoupled Average Model-based
Sliding Mode Current Controller (DAM-SMC) has been used to address needs
such as fast response, fixed switching frequency, and no overshoot to avoid
exceeding current limits. Typically, the control parameters for DAM-SMC are
chosen based on expert knowledge and certain assumptions. However, these
parameters may not achieve optimized performance due to system dynamics and
uncertainties. To address this, this paper proposes a Particle Swarm
Optimization (PSO)-based DAM-SMC controller, which inherits the control laws
from DAM-SMC but optimizes the control parameters offline using PSO. The main
goal is to reduce chattering and achieve smaller tracking errors. The proposed
method is compared with other metaheuristic optimization algorithms, such as
Genetic Algorithm (GA) and Simulated Annealing (SA). Simulations are performed
in MATLAB/Simulink across various scenarios to evaluate the effectiveness of
the proposed controller. The proposed approach achieves a substantial reduction
in convergence time, decreasing it by 86.36% compared to the GA and by 88.89%
compared to SA. Furthermore, the tracking error is reduced by 11.61% compared
to the conventional DAM-SMC algorithm. The robustness of the proposed method is
validated under critical conditions, where plant and control model parameters
varied by up to 40%.
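The offline PSO tuning loop can be sketched with a textbook particle-swarm implementation minimizing a tracking-error cost; the surrogate cost, search bounds, and swarm hyperparameters below are illustrative, not the paper's setup.

```python
import random

def pso_minimize(cost, dim, n_particles=20, iters=100, seed=0):
    """Textbook particle swarm optimization (illustrative sketch)."""
    random.seed(seed)
    w, c1, c2 = 0.7, 1.5, 1.5  # inertia, cognitive, and social weights
    pos = [[random.uniform(-5.0, 5.0) for _ in range(dim)]
           for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_cost = [cost(p) for p in pos]
    g = min(range(n_particles), key=pbest_cost.__getitem__)
    gbest, gbest_cost = pbest[g][:], pbest_cost[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            c = cost(pos[i])
            if c < pbest_cost[i]:
                pbest[i], pbest_cost[i] = pos[i][:], c
                if c < gbest_cost:
                    gbest, gbest_cost = pos[i][:], c
    return gbest

# Hypothetical tracking-error surrogate with its optimum at gains (2.0, 0.5).
surrogate = lambda g: (g[0] - 2.0) ** 2 + (g[1] - 0.5) ** 2
gains = pso_minimize(surrogate, dim=2)
print([round(g, 2) for g in gains])  # close to [2.0, 0.5]
```

In the paper's setting, the cost would be evaluated by simulating the closed-loop controller; here a quadratic surrogate keeps the sketch self-contained.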
|
2501.11636
|
Characterization of the Arithmetic Complexity of the Secrecy Capacity of
Fast-Fading Gaussian Channels
|
cs.IT math.IT
|
This paper studies the computability of the secrecy capacity of fast-fading
wiretap channels from an algorithmic perspective, examining whether it can be
computed algorithmically or not. To address this question, the concept of
Turing machines is used, which establishes fundamental performance limits of
digital computers. It is shown that certain computable continuous fading
probability distribution functions yield secrecy capacities that are
non-computable numbers. Additionally, we assess the secrecy capacity's
classification within the arithmetical hierarchy, revealing the absence of
computable achievability and converse bounds.
|
2501.11638
|
Class Imbalance in Anomaly Detection: Learning from an Exactly Solvable
Model
|
cs.LG cond-mat.dis-nn stat.ML
|
Class imbalance (CI) is a longstanding problem in machine learning, slowing
down training and reducing performance. Although empirical remedies exist, it
is often unclear which ones work best and when, due to the lack of an
overarching theory. We address a common case of imbalance, that of anomaly (or
outlier) detection. We provide a theoretical framework to analyze, interpret
and address CI. It is based on an exact solution of the teacher-student
perceptron model, through replica theory. Within this framework, one can
distinguish several sources of CI: either intrinsic, train or test imbalance.
Our analysis reveals that the optimal train imbalance is generally different
from 50%, with a nontrivial dependence on the intrinsic imbalance, the
abundance of data, and the noise in the learning. Moreover, there is a
crossover from a small-noise training regime, where results are independent of
the noise level, to a high-noise regime where performance quickly degrades
with noise. Our results challenge some of the conventional wisdom on CI and
offer practical guidelines to address it.
|
2501.11639
|
StAyaL | Multilingual Style Transfer
|
cs.CL cs.AI
|
Stylistic text generation plays a vital role in enhancing communication by
reflecting the nuances of individual expression. This paper presents a novel
approach for generating text in a specific speaker's style across different
languages. We show that by leveraging only 100 lines of text, an individual's
unique style can be captured as a high-dimensional embedding, which can be used
for both text generation and stylistic translation. This methodology breaks
down the language barrier by transferring the style of a speaker between
languages. The paper is structured into three main phases: augmenting the
speaker's data with stylistically consistent external sources, separating style
from content using machine learning and deep learning techniques, and
generating an abstract style profile by mean pooling the learned embeddings.
The proposed approach is shown to be topic-agnostic, with test accuracy and F1
scores of 74.9% and 0.75, respectively. The results demonstrate the potential
of the style profile for multilingual communication, paving the way for further
applications in personalized content generation and cross-linguistic stylistic
transfer.
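The third phase described above, mean pooling learned embeddings into an abstract style profile, can be illustrated with a minimal numpy sketch. This is not the paper's code: the 8-dimensional random vectors stand in for real sentence-encoder embeddings, and `style_profile` is a hypothetical helper name.

```python
import numpy as np

def style_profile(sentence_embeddings: np.ndarray) -> np.ndarray:
    """Mean-pool per-sentence embeddings into a single style vector,
    then unit-normalize it for cosine comparisons."""
    profile = sentence_embeddings.mean(axis=0)
    return profile / np.linalg.norm(profile)

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy stand-in for ~100 lines of text embedded by any sentence encoder.
rng = np.random.default_rng(0)
speaker_a = rng.normal(loc=1.0, size=(100, 8))   # hypothetical 8-dim embeddings
speaker_b = rng.normal(loc=-1.0, size=(100, 8))

profile_a = style_profile(speaker_a)
profile_b = style_profile(speaker_b)
```

A held-out half of speaker A's lines should produce a profile closer to `profile_a` than `profile_b` is, which is the property the stylistic-translation step relies on.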
|
2501.11641
|
A Common Ancestor of PDL, Conjunctive Queries, and Unary Negation
First-order
|
cs.LO cs.DB
|
We introduce and study UCPDL+, a family of expressive logics rooted in
Propositional Dynamic Logic (PDL) with converse (CPDL) and universal modality
(UCPDL). In terms of expressive power, UCPDL+ strictly contains PDL extended
with intersection and converse (a.k.a. ICPDL), as well as Conjunctive Queries
(CQ), Conjunctive Regular Path Queries (CRPQ), or some known extensions thereof
(Regular Queries and CQPDL). Further, it is equivalent to the extension of the
unary-negation fragment of first-order logic (UNFO) with unary transitive
closure, which we denote by UNFO*, which in turn strictly contains a previously
studied extension of UNFO with regular expressions known as UNFO^reg.
We investigate the expressive power, indistinguishability via bisimulations,
satisfiability, and model checking for UCPDL+ and CPDL+. We argue that natural
subclasses of CPDL+ can be defined in terms of the tree-width of the underlying
graphs of the formulas. We show that the class of CPDL+ formulas of tree-width
2 is equivalent to ICPDL, and that it also coincides with CPDL+ formulas of
tree-width 1. However, beyond tree-width 2, incrementing the tree-width
strictly increases the expressive power. We characterize the expressive power
for every class of fixed tree-width formulas in terms of a bisimulation game
with pebbles. Based on this characterization, we show that CPDL+ has a
tree-like model property. We prove that the satisfiability problem for UCPDL+
is decidable in 2ExpTime, coinciding with the complexity of ICPDL. As a
consequence, the satisfiability problem for UNFO* is shown to be
2ExpTime-complete as well. We also exhibit classes for which satisfiability is
reduced to ExpTime. Finally, we establish that the model checking problem for
fixed tree-width formulas is in PTime, contrary to the full class CPDL+.
|
2501.11651
|
Advancing Language Model Reasoning through Reinforcement Learning and
Inference Scaling
|
cs.LG cs.CL
|
Large language models (LLMs) have demonstrated remarkable capabilities in
complex reasoning tasks. However, existing approaches mainly rely on imitation
learning and struggle to achieve effective test-time scaling. While
reinforcement learning (RL) holds promise for enabling self-exploration and
learning from feedback, recent attempts yield only modest improvements in
complex reasoning. In this paper, we present T1, which scales RL by encouraging
exploration and provides an understanding of inference scaling. We first initialize the LLM using
synthesized chain-of-thought data that integrates trial-and-error and
self-verification. To scale RL training, we promote increased sampling
diversity through oversampling. We further employ an entropy bonus as an
auxiliary loss, alongside a dynamic anchor for regularization to facilitate
reward optimization. We demonstrate that T1 with open LLMs as its base exhibits
inference scaling behavior and achieves superior performance on challenging
math reasoning benchmarks. For example, T1 with Qwen2.5-32B as the base model
outperforms the recent Qwen QwQ-32B-Preview model on MATH500, AIME2024, and
Omni-math-500. More importantly, we present a simple strategy to examine
inference scaling, where increased inference budgets directly lead to T1's
better performance without any additional verification. We will open-source the
T1 models and the data used to train them at \url{https://github.com/THUDM/T1}.
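The two regularizers mentioned above, an entropy bonus and a KL penalty toward a dynamic anchor, can be sketched in isolation. This is a minimal numpy sketch over toy token logits, not T1's actual training code; the coefficient names `beta_ent` and `beta_kl` are assumptions.

```python
import numpy as np

def softmax(logits: np.ndarray) -> np.ndarray:
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def entropy_bonus(logits: np.ndarray) -> float:
    """Mean entropy of the token distribution; subtracting it from the
    loss rewards higher-entropy (more diverse) sampling."""
    p = softmax(logits)
    return float(-(p * np.log(p + 1e-12)).sum(axis=-1).mean())

def kl_to_anchor(logits: np.ndarray, anchor_logits: np.ndarray) -> float:
    """KL(policy || anchor) regularizer; the anchor would be updated
    periodically during training, hence 'dynamic'."""
    p, q = softmax(logits), softmax(anchor_logits)
    return float((p * (np.log(p + 1e-12) - np.log(q + 1e-12))).sum(axis=-1).mean())

logits = np.array([[2.0, 0.0, -2.0]])   # toy policy logits for one position
anchor = np.array([[1.0, 0.0, -1.0]])   # toy anchor logits
reward_term = 0.0                        # placeholder for the RL objective
beta_ent, beta_kl = 0.01, 0.1            # hypothetical coefficients
total_loss = reward_term - beta_ent * entropy_bonus(logits) + beta_kl * kl_to_anchor(logits, anchor)
```

The entropy term pushes toward the oversampling-friendly diversity the abstract describes, while the anchor KL keeps reward optimization from drifting too far from a reference policy.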
|
2501.11653
|
Dynamic Scene Understanding from Vision-Language Representations
|
cs.CV cs.LG
|
Images depicting complex, dynamic scenes are challenging to parse
automatically, requiring both high-level comprehension of the overall situation
and fine-grained identification of participating entities and their
interactions. Current approaches use distinct methods tailored to sub-tasks
such as Situation Recognition and detection of Human-Human and Human-Object
Interactions. However, recent advances in image understanding have often
leveraged web-scale vision-language (V&L) representations to obviate
task-specific engineering. In this work, we propose a framework for dynamic
scene understanding tasks by leveraging knowledge from modern, frozen V&L
representations. By framing these tasks in a generic manner - as predicting and
parsing structured text, or by directly concatenating representations to the
input of existing models - we achieve state-of-the-art results while using a
minimal number of trainable parameters relative to existing approaches.
Moreover, our analysis of dynamic knowledge of these representations shows that
recent, more powerful representations effectively encode dynamic scene
semantics, making this approach newly possible.
|
2501.11655
|
KKL Observer Synthesis for Nonlinear Systems via Physics-Informed
Learning
|
eess.SY cs.LG cs.SY
|
This paper proposes a novel learning approach for designing
Kazantzis-Kravaris/Luenberger (KKL) observers for autonomous nonlinear systems.
The design of a KKL observer involves finding an injective map that transforms
the system state into a higher-dimensional observer state, whose dynamics is
linear and stable. The observer's state is then mapped back to the original
system coordinates via the inverse map to obtain the state estimate. However,
finding this transformation and its inverse is quite challenging. We propose to
sequentially approximate these maps by neural networks that are trained using
physics-informed learning. We generate synthetic data for training by
numerically solving the system and observer dynamics. Theoretical guarantees
for the robustness of state estimation against approximation error and system
uncertainties are provided. Additionally, a systematic method for optimizing
observer performance through parameter selection is presented. The
effectiveness of the proposed approach is demonstrated through numerical
simulations on benchmark examples and its application to sensor fault detection
and isolation in a network of Kuramoto oscillators using learned KKL observers.
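The physics-informed training objective can be made concrete on a toy scalar system. The transformation T must satisfy the KKL PDE dT/dx * f(x) = A T(x) + B h(x); a network approximating T would be trained to drive this residual to zero on sampled states. The sketch below checks the residual with finite differences rather than training a network, and the specific system, A, and B are illustrative choices, not from the paper.

```python
import numpy as np

# Toy autonomous system: x' = f(x), measured output y = h(x).
f = lambda x: -x
h = lambda x: x

# Observer dynamics z' = A z + B y, chosen linear and stable (A < 0).
A, B = -2.0, 1.0

def pde_residual(T, x, eps=1e-5):
    """Residual of dT/dx * f(x) - (A*T(x) + B*h(x)); it is zero
    everywhere iff T solves the KKL transformation PDE. A
    physics-informed loss would minimize its squared norm."""
    dTdx = (T(x + eps) - T(x - eps)) / (2 * eps)  # finite-difference gradient
    return dTdx * f(x) - (A * T(x) + B * h(x))

xs = np.linspace(-1.0, 1.0, 11)
T_exact = lambda x: x          # analytic solution for this toy system
T_wrong = lambda x: 2.0 * x    # a candidate map with nonzero residual
res_exact = float(np.abs(pde_residual(T_exact, xs)).max())
res_wrong = float(np.abs(pde_residual(T_wrong, xs)).max())
```

For `T_exact`, dT/dx * f = -x and A*T + B*h = -2x + x = -x, so the residual vanishes; the wrong candidate leaves a residual of x, which a training loop would penalize.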
|
2501.11657
|
Classification of HI Galaxy Profiles Using Unsupervised Learning and
Convolutional Neural Networks: A Comparative Analysis and Methodological
Cases of Studies
|
astro-ph.GA cs.LG
|
Hydrogen, the most abundant element in the universe, is crucial for
understanding galaxy formation and evolution. The 21 cm neutral atomic hydrogen
- HI spectral line maps the gas kinematics within galaxies, providing key
insights into interactions, galactic structure, and star formation processes.
With new radio instruments, the volume and complexity of data are increasing. To
analyze and classify integrated HI spectral profiles in an efficient way, this
work presents a framework that integrates Machine Learning techniques,
combining unsupervised methods and CNNs. To this end, we apply our framework to
a selected subsample of 318 spectral HI profiles of the CIG and 30,780 profiles
from the Arecibo Legacy Fast ALFA Survey catalogue. Data pre-processing
involved the Busyfit package and iterative fitting with polynomial, Gaussian,
and double-Lorentzian models. Clustering methods, including K-means, spectral
clustering, DBSCAN, and agglomerative clustering, were used for feature
extraction and to bootstrap classification we applied K-NN, SVM, and Random
Forest classifiers, optimizing accuracy with CNN. Additionally, we introduced a
2D model of the profiles to enhance classification by adding dimensionality to
the data. Three 2D models were generated based on transformations and
normalised versions to quantify the level of asymmetry. These methods were
tested in a previous analytical classification study conducted by the Analysis
of the Interstellar Medium in Isolated Galaxies group. This approach enhances
classification accuracy and aims to establish a methodology that could be
applied to data analysis in future surveys conducted with the Square Kilometre
Array (SKA), currently under construction. All materials, code, and models have
been made publicly available in an open-access repository, adhering to FAIR
principles.
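The unsupervised-then-supervised bootstrapping described above can be sketched with scikit-learn: cluster the fitted profile features without labels, then train a classifier on the resulting pseudo-labels. This is a minimal sketch on synthetic 2-D features, not the group's pipeline; real features would come from the Busyfit / Gaussian / double-Lorentzian fits.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(1)
# Toy stand-in for fitted HI profile features (two well-separated profile types).
narrow = rng.normal(loc=[0.0, 0.0], scale=0.2, size=(60, 2))
broad = rng.normal(loc=[3.0, 3.0], scale=0.2, size=(60, 2))
features = np.vstack([narrow, broad])

# 1) Unsupervised step: cluster the profiles to obtain pseudo-labels.
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(features)
pseudo_labels = kmeans.labels_

# 2) Bootstrap a supervised classifier (K-NN here; SVM or Random Forest
#    would slot in the same way) on the pseudo-labels.
knn = KNeighborsClassifier(n_neighbors=5).fit(features, pseudo_labels)
agreement = float((knn.predict(features) == pseudo_labels).mean())
```

High agreement between the classifier and the clustering indicates the pseudo-labels are learnable, which is the precondition for the later CNN refinement stage.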
|
2501.11671
|
Exploring Preference-Guided Diffusion Model for Cross-Domain
Recommendation
|
cs.IR
|
Cross-domain recommendation (CDR) has been proven as a promising way to
alleviate the cold-start issue, in which the most critical problem is how to
draw an informative user representation in the target domain via the transfer
of user preference existing in the source domain. Prior efforts mostly follow
the embedding-and-mapping paradigm, which first integrates the preference into
the user representation in the source domain and then applies a mapping function
to carry this representation to the target domain. However, they focus on mapping
features across domains, neglecting to explicitly model the preference
integration process, which may lead to learning coarse user representation.
Diffusion models (DMs), which contribute to more accurate user/item
representations due to their explicit information injection capability, have
achieved promising performance in recommendation systems. Nevertheless, these
DMs-based methods cannot directly account for valuable user preference in other
domains, leading to challenges in adapting to the transfer of preference for
cold-start users. Consequently, the feasibility of DMs for CDR remains
underexplored. To this end, we explore utilizing the explicit information
injection capability of DMs for user preference integration and propose a
Preference-Guided Diffusion Model for CDR to cold-start users, termed DMCDR.
Specifically, we leverage a preference encoder to establish the preference
guidance signal with the user's interaction history in the source domain. Then,
we explicitly inject the preference guidance signal into the user
representation step by step to guide the reverse process, and ultimately
generate the personalized user representation in the target domain, thus
achieving the transfer of user preference across domains. Furthermore, we
comprehensively explore the impact of six DMs-based variants on CDR.
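The step-by-step injection of the preference guidance signal into the reverse process can be caricatured as follows. This is a highly simplified numpy sketch, not DMCDR: the denoiser is a placeholder, and `preference_encoder`, `gamma`, and the schedule that strengthens guidance toward t = 0 are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

def preference_encoder(history: np.ndarray) -> np.ndarray:
    """Hypothetical encoder: mean-pool source-domain interaction
    embeddings into a unit-norm guidance signal."""
    g = history.mean(axis=0)
    return g / np.linalg.norm(g)

def reverse_step(x_t: np.ndarray, t: float, guidance: np.ndarray,
                 gamma: float = 0.5) -> np.ndarray:
    """One simplified reverse step: shrink toward a denoised estimate
    (placeholder denoiser) and inject the preference signal, more
    strongly as t approaches 0."""
    denoised = 0.9 * x_t
    return denoised + gamma * (1.0 - t) * guidance

history = rng.normal(size=(20, 4))  # source-domain interaction embeddings
guide = preference_encoder(history)
x = rng.normal(size=4)              # start the reverse process from noise
for step in range(10, 0, -1):
    x = reverse_step(x, t=step / 10, guidance=guide)
user_repr = x                       # personalized target-domain representation
```

By the final step, the accumulated guidance dominates the decayed initial noise, so the generated representation is aligned with the transferred source-domain preference.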
|