| id (string, 9-16 chars) | title (string, 4-278 chars) | categories (string, 5-104 chars) | abstract (string, 6-4.09k chars) |
|---|---|---|---|
2501.17343
|
Post-Training Quantization for 3D Medical Image Segmentation: A
Practical Study on Real Inference Engines
|
cs.CV cs.AI
|
Quantizing deep neural networks, i.e., reducing the precision (bit-width) of their
computations, can remarkably decrease memory usage and accelerate processing,
making these models more suitable for large-scale medical imaging applications
with limited computational resources. However, many existing methods studied
"fake quantization", which simulates lower precision operations during
inference, but does not actually reduce model size or improve real-world
inference speed. Moreover, the potential of deploying real 3D low-bit
quantization on modern GPUs is still unexplored. In this study, we introduce a
real post-training quantization (PTQ) framework that successfully implements
true 8-bit quantization on state-of-the-art (SOTA) 3D medical segmentation
models, i.e., U-Net, SegResNet, SwinUNETR, nnU-Net, UNesT, TransUNet,
ST-UNet, and VISTA3D. Our approach involves two main steps. First, we use
TensorRT to perform fake quantization for both weights and activations with an
unlabeled calibration dataset. Second, we convert this fake quantization into
real quantization via a TensorRT engine on real GPUs, resulting in real-world
reductions in model size and inference latency. Extensive experiments
demonstrate that our framework effectively performs 8-bit quantization on GPUs
without sacrificing model performance. This advancement enables the deployment
of efficient deep learning models in medical imaging applications where
computational resources are constrained. The code and models have been
released, including U-Net, TransUNet pretrained on the BTCV dataset for
abdominal (13-label) segmentation, UNesT pretrained on the Whole Brain Dataset
for whole brain (133-label) segmentation, and nnU-Net, SegResNet, SwinUNETR and
VISTA3D pretrained on TotalSegmentator V2 for full body (104-label)
segmentation. https://github.com/hrlblab/PTQ.
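For illustration, here is a hedged, TensorRT-8.x-style sketch of the two-step recipe (entropy calibration over unlabeled volumes, then building a real INT8 engine from an exported ONNX model). This is not the released code at the repository above; the ONNX path, input shape, and random calibration volumes are placeholders, and it assumes a GPU with TensorRT and PyCUDA installed.

```python
# Hypothetical INT8 post-training quantization sketch (not the released PTQ code).
import numpy as np
import pycuda.autoinit  # noqa: F401  (creates a CUDA context)
import pycuda.driver as cuda
import tensorrt as trt

class RandomVolumeCalibrator(trt.IInt8EntropyCalibrator2):
    """Feeds a few random 3D patches as a stand-in for an unlabeled calibration set."""

    def __init__(self, n_batches=8, shape=(1, 1, 96, 96, 96)):
        trt.IInt8EntropyCalibrator2.__init__(self)
        self.shape = shape
        self.batches = iter(np.random.rand(n_batches, *shape).astype(np.float32))
        self.device_input = cuda.mem_alloc(int(np.prod(shape)) * 4)

    def get_batch_size(self):
        return self.shape[0]

    def get_batch(self, names):
        batch = next(self.batches, None)
        if batch is None:
            return None                      # calibration finished
        cuda.memcpy_htod(self.device_input, np.ascontiguousarray(batch))
        return [int(self.device_input)]

    def read_calibration_cache(self):
        return None

    def write_calibration_cache(self, cache):
        pass

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))  # TRT 8.x-style
parser = trt.OnnxParser(network, logger)
with open("segmentation_model.onnx", "rb") as f:    # placeholder ONNX export
    assert parser.parse(f.read()), "failed to parse ONNX model"

config = builder.create_builder_config()
config.set_flag(trt.BuilderFlag.INT8)               # request real 8-bit kernels
config.int8_calibrator = RandomVolumeCalibrator()

engine = builder.build_serialized_network(network, config)
with open("segmentation_int8.engine", "wb") as f:   # deploy with trt.Runtime
    f.write(engine)
```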
|
2501.17345
|
Testing Conditional Mean Independence Using Generative Neural Networks
|
stat.ML cs.LG
|
Conditional mean independence (CMI) testing is crucial for statistical tasks
including model determination and variable importance evaluation. In this work,
we introduce a novel population CMI measure and a bootstrap-based testing
procedure that utilizes deep generative neural networks to estimate the
conditional mean functions involved in the population measure. The test
statistic is thoughtfully constructed to ensure that even slowly decaying
nonparametric estimation errors do not affect the asymptotic accuracy of the
test. Our approach demonstrates strong empirical performance in scenarios with
high-dimensional covariates and response variable, can handle multivariate
responses, and maintains nontrivial power against local alternatives outside an
$n^{-1/2}$ neighborhood of the null hypothesis. We also use numerical
simulations and real-world imaging data applications to highlight the efficacy
and versatility of our testing procedure.
|
2501.17347
|
Deep-and-Wide Learning: Enhancing Data-Driven Inference via Synergistic
Learning of Inter- and Intra-Data Representations
|
cs.LG cs.AI
|
Advancements in deep learning are revolutionizing science and engineering.
The immense success of deep learning is largely due to its ability to extract
essential high-dimensional (HD) features from input data and make inference
decisions based on this information. However, current deep neural network (DNN)
models face several challenges, such as the requirements of extensive amounts
of data and computational resources. Here, we introduce a new learning scheme,
referred to as deep-and-wide learning (DWL), to systematically capture features
not only within individual input data (intra-data features) but also across the
data (inter-data features). Furthermore, we propose a dual-interactive-channel
network (D-Net) to realize the DWL, which leverages our Bayesian formulation of
low-dimensional (LD) inter-data feature extraction and its synergistic
interaction with the conventional HD representation of the dataset, for
substantially enhanced computational efficiency and inference. The proposed
technique has been applied to data across various disciplines for both
classification and regression tasks. Our results demonstrate that DWL surpasses
state-of-the-art DNNs in accuracy by a substantial margin with limited training
data and improves the computational efficiency by order(s) of magnitude. The
proposed DWL strategy dramatically alters data-driven learning techniques,
including emerging large foundation models, and offers significant insights into
the evolving field of AI.
|
2501.17348
|
Better Slow than Sorry: Introducing Positive Friction for Reliable
Dialogue Systems
|
cs.CL cs.HC
|
While theories of discourse and cognitive science have long recognized the
value of unhurried pacing, recent dialogue research tends to minimize friction
in conversational systems. Yet, frictionless dialogue risks fostering
uncritical reliance on AI outputs, which can obscure implicit assumptions and
lead to unintended consequences. To meet this challenge, we propose integrating
positive friction into conversational AI, which promotes user reflection on
goals, critical thinking on system response, and subsequent re-conditioning of
AI systems. We hypothesize systems can improve goal alignment, modeling of user
mental states, and task success by deliberately slowing down conversations in
strategic moments to ask questions, reveal assumptions, or pause. We present an
ontology of positive friction and collect expert human annotations on
multi-domain and embodied goal-oriented corpora. Experiments on these corpora,
along with simulated interactions using state-of-the-art systems, suggest
incorporating friction not only fosters accountable decision-making, but also
enhances machine understanding of user beliefs and goals, and increases task
success rates.
|
2501.17349
|
An Efficient Numerical Function Optimization Framework for Constrained
Nonlinear Robotic Problems
|
cs.RO math.OC
|
This paper presents a numerical function optimization framework designed for
constrained optimization problems in robotics. The tool is designed with
real-time considerations and is suitable for online trajectory and control
input optimization problems. The proposed framework does not require any
analytical representation of the problem and works with constrained black-box
optimization functions. The method combines first-order gradient-based line
search algorithms with constraint prioritization through nullspace projections
onto constraint Jacobian space. The tool is implemented in C++ and provided
online for community use, along with some numerical and robotic example
implementations presented in this paper.
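As a minimal illustration of the constraint-prioritization idea, the numpy sketch below projects a first-order descent direction into the nullspace of a higher-priority constraint Jacobian. The toy objective, constraint, and step size are stand-ins and this is not the released C++ tool.

```python
import numpy as np

def nullspace_projected_step(grad_f, J_c, alpha=0.1):
    """One prioritized descent step: remove from the objective gradient any
    component that would violate the higher-priority constraint by projecting
    onto the nullspace of the constraint Jacobian J_c."""
    P = np.eye(J_c.shape[1]) - np.linalg.pinv(J_c) @ J_c   # nullspace projector
    return -alpha * P @ grad_f

# Toy example: minimize ||x - target||^2 while keeping x[0] + x[1] fixed at 0.
x = np.zeros(3)
target = np.array([1.0, 2.0, 3.0])
J_c = np.array([[1.0, 1.0, 0.0]])                          # constraint Jacobian
for _ in range(200):
    grad_f = 2.0 * (x - target)                            # objective gradient
    x = x + nullspace_projected_step(grad_f, J_c)
print(x, "constraint value:", x[0] + x[1])                  # constraint stays ~0
```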
|
2501.17351
|
Realtime Limb Trajectory Optimization for Humanoid Running Through
Centroidal Angular Momentum Dynamics
|
cs.RO
|
One of the essential aspects of humanoid robot running is determining the
limb-swinging trajectories. During the flight phases, where the ground reaction
forces are not available for regulation, the limb swinging trajectories are
significant for the stability of the next stance phase. Due to the conservation
of angular momentum, improper leg and arm swinging results in highly tilted and
unsustainable body configurations at the next stance phase landing. In such
cases, the robotic system fails to maintain locomotion independent of the
stability of the center of mass trajectories. This problem is more apparent for
fast and high flight time trajectories. This paper proposes a real-time
nonlinear limb trajectory optimization problem for humanoid running. The
optimization problem is tested on two different humanoid robot models, and the
generated trajectories are verified using a running algorithm for both robots
in a simulation environment.
|
2501.17354
|
Fundamental Computational Limits in Pursuing Invariant Causal Prediction
and Invariance-Guided Regularization
|
math.ST cs.LG stat.ME stat.ML stat.TH
|
Pursuing invariant prediction from heterogeneous environments opens the door
to learning causality in a purely data-driven way and has several applications
in causal discovery and robust transfer learning. However, existing methods
such as ICP [Peters et al., 2016] and EILLS [Fan et al., 2024] that can attain
sample-efficient estimation are based on exponential time algorithms. In this
paper, we show that such a problem is intrinsically hard in computation: the
decision problem, testing whether a non-trivial prediction-invariant solution
exists across two environments, is NP-hard even for the linear causal
relationship. In the world where P$\neq$NP, our results imply that the
estimation error rate can be arbitrarily slow using any computationally
efficient algorithm. This suggests that pursuing causality is fundamentally
harder than detecting associations when no prior assumptions are available.
Given there is almost no hope of computational improvement under the worst
case, this paper proposes a method capable of attaining both computationally
and statistically efficient estimation under additional conditions.
Furthermore, our estimator is a distributionally robust estimator with an
ellipse-shaped uncertainty set in which more uncertainty is placed on spurious
directions than on invariant directions, resulting in a smooth interpolation
between the most predictive solution and the causal solution by varying the
invariance hyper-parameter. Non-asymptotic results and empirical applications
support the claim.
|
2501.17356
|
On the Coexistence and Ensembling of Watermarks
|
cs.CV cs.AI cs.CY
|
Watermarking, the practice of embedding imperceptible information into media
such as images, videos, audio, and text, is essential for intellectual property
protection, content provenance and attribution. The growing complexity of
digital ecosystems necessitates watermarks for different uses to be embedded in
the same media. However, to detect and decode all watermarks, they need to
coexist well with one another. We perform the first study of coexistence of
deep image watermarking methods and, contrary to intuition, we find that
various open-source watermarks can coexist with only minor impacts on image
quality and decoding robustness. The coexistence of watermarks also opens the
avenue for ensembling watermarking methods. We show how ensembling can increase
the overall message capacity and enable new trade-offs between capacity,
accuracy, robustness and image quality, without needing to retrain the base
models.
|
2501.17361
|
The M-factor: A Novel Metric for Evaluating Neural Architecture Search
in Resource-Constrained Environments
|
cs.LG cs.AI
|
Neural Architecture Search (NAS) aims to automate the design of deep neural
networks. However, existing NAS techniques often focus on maximising accuracy,
neglecting model efficiency. This limitation restricts their use in
resource-constrained environments like mobile devices and edge computing
systems. Moreover, current evaluation metrics prioritise performance over
efficiency, lacking a balanced approach for assessing architectures suitable
for constrained scenarios. To address these challenges, this paper introduces
the M-factor, a novel metric combining model accuracy and size. Four diverse
NAS techniques are compared: Policy-Based Reinforcement Learning, Regularised
Evolution, Tree-structured Parzen Estimator (TPE), and Multi-trial Random
Search. These techniques represent different NAS paradigms, providing a
comprehensive evaluation of the M-factor. The study analyses ResNet
configurations on the CIFAR-10 dataset, with a search space of 19,683
configurations. Experiments reveal that Policy-Based Reinforcement Learning and
Regularised Evolution achieved M-factor values of 0.84 and 0.82, respectively,
while Multi-trial Random Search attained 0.75, and TPE reached 0.67.
Policy-Based Reinforcement Learning exhibited performance changes after 39
trials, while Regularised Evolution optimised within 20 trials. The research
investigates the optimisation dynamics and trade-offs between accuracy and
model size for each strategy. Findings indicate that, in some cases, random
search performed comparably to more complex algorithms when assessed using the
M-factor. These results highlight how the M-factor addresses the limitations of
existing metrics by guiding NAS towards balanced architectures, offering
valuable insights for selecting strategies in scenarios requiring both
performance and efficiency.
|
2501.17366
|
Forecasting S&P 500 Using LSTM Models
|
cs.LG cs.AI q-fin.CP q-fin.TR
|
With the volatile and complex nature of financial data influenced by external
factors, forecasting the stock market is challenging. Traditional models such
as ARIMA and GARCH perform well with linear data but struggle with non-linear
dependencies. Machine learning and deep learning models, particularly Long
Short-Term Memory (LSTM) networks, address these challenges by capturing
intricate patterns and long-term dependencies. This report compares ARIMA and
LSTM models in predicting the S&P 500 index, a major financial benchmark.
Using historical price data and technical indicators, we evaluated these
models using Mean Absolute Error (MAE) and Root Mean Squared Error (RMSE). The
ARIMA model showed reasonable performance with an MAE of 462.1, RMSE of 614,
and 89.8 percent accuracy, effectively capturing short-term trends but limited
by its linear assumptions. The LSTM model, leveraging sequential processing
capabilities, outperformed ARIMA with an MAE of 369.32, RMSE of 412.84, and
92.46 percent accuracy, capturing both short- and long-term dependencies.
Notably, the LSTM model without additional features performed best, achieving
an MAE of 175.9, RMSE of 207.34, and 96.41 percent accuracy, showcasing its
ability to handle market data efficiently.
Accurately predicting stock movements is crucial for investment strategies,
risk assessments, and market stability. Our findings confirm the potential of
deep learning models in handling volatile financial data compared to
traditional ones. The results highlight the effectiveness of LSTM and suggest
avenues for further improvements. This study provides insights into financial
forecasting, offering a comparative analysis of ARIMA and LSTM while outlining
their strengths and limitations.
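As a minimal sketch of the evaluation protocol described here (fit an ARIMA baseline and report MAE/RMSE), the snippet below uses a synthetic price-like series; the actual S&P 500 data, technical indicators, and LSTM model are not reproduced.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
# Synthetic random-walk series standing in for S&P 500 closes (assumption).
prices = 4000 + np.cumsum(rng.normal(1.0, 20.0, size=600))
train, test = prices[:-50], prices[-50:]

model = ARIMA(train, order=(5, 1, 0)).fit()      # simple ARIMA(5,1,0) baseline
forecast = model.forecast(steps=len(test))

mae = np.mean(np.abs(forecast - test))
rmse = np.sqrt(np.mean((forecast - test) ** 2))
print(f"MAE={mae:.2f}  RMSE={rmse:.2f}")
```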
|
2501.17370
|
Breaking the $\log(1/\Delta_2)$ Barrier: Better Batched Best Arm
Identification with Adaptive Grids
|
cs.LG
|
We investigate the problem of batched best arm identification in multi-armed
bandits, where we aim to identify the best arm from a set of $n$ arms while
minimizing both the number of samples and batches. We introduce an algorithm
that achieves near-optimal sample complexity and features an instance-sensitive
batch complexity, which breaks the $\log(1/\Delta_2)$ barrier. The main
contribution of our algorithm is a novel sample allocation scheme that
effectively balances exploration and exploitation for batch sizes. Experimental
results indicate that our approach is more batch-efficient across various
setups. We also extend this framework to the problem of batched best arm
identification in linear bandits and achieve similar improvements.
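For context, the sketch below implements a generic batched successive-elimination baseline in which every batch pulls all surviving arms and then eliminates clearly suboptimal ones; it illustrates the batched sampling setting but does not implement the paper's adaptive-grid scheme or its batch-complexity guarantee.

```python
import numpy as np

def batched_successive_elimination(means, batch_pulls=200, delta=0.05, seed=0):
    """Generic batched elimination baseline (not the paper's algorithm)."""
    rng = np.random.default_rng(seed)
    alive = list(range(len(means)))
    totals, counts = np.zeros(len(means)), np.zeros(len(means))
    batches = 0
    while len(alive) > 1:
        batches += 1
        for a in alive:                                   # one parallel batch
            totals[a] += rng.normal(means[a], 1.0, batch_pulls).sum()
            counts[a] += batch_pulls
        mu = totals[alive] / counts[alive]
        radius = np.sqrt(2 * np.log(2 / delta) / counts[alive])
        best = mu.max()
        # Keep an arm only if its confidence interval still overlaps the best.
        alive = [a for a, m, r in zip(alive, mu, radius)
                 if m + r >= best - radius.min()]
    return alive[0], batches

arm, n_batches = batched_successive_elimination([0.5, 0.45, 0.3, 0.1])
print("identified arm", arm, "in", n_batches, "batches")
```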
|
2501.17372
|
Data-Informed Model Complexity Metric for Optimizing Symbolic Regression
Models
|
cs.LG cs.NE
|
Choosing, from a well-fitted evolved population, models that generalize beyond
training data is difficult. We introduce a pragmatic method to estimate model
complexity using Hessian rank for post-processing selection. Complexity is
approximated by averaging the model output Hessian rank across a few points
(N=3), offering efficient and accurate rank estimates. This method aligns model
selection with input data complexity, calculated using intrinsic dimensionality
(ID) estimators. Using the StackGP system, we develop symbolic regression
models for the Penn Machine Learning Benchmark and employ twelve
scikit-dimension library methods to estimate ID, aligning model expressiveness
with dataset ID. Our data-informed complexity metric finds the ideal complexity
window, balancing model expressiveness and accuracy, enhancing generalizability
without bias common in methods reliant on user-defined parameters, such as
parsimony pressure in weight selection.
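The following is a numerical sketch of the complexity estimate described above: the output Hessian is approximated by finite differences at a few points (N=3) and its rank is averaged. The toy model is a stand-in, not a StackGP expression.

```python
import numpy as np

def hessian_rank(f, x, eps=1e-4, tol=1e-6):
    """Numerical Hessian of the scalar model output f at point x, then its rank."""
    d = len(x)
    H = np.zeros((d, d))
    for i in range(d):
        for j in range(d):
            e_i, e_j = np.eye(d)[i] * eps, np.eye(d)[j] * eps
            H[i, j] = (f(x + e_i + e_j) - f(x + e_i - e_j)
                       - f(x - e_i + e_j) + f(x - e_i - e_j)) / (4 * eps ** 2)
    return np.linalg.matrix_rank(H, tol=tol)

def complexity(f, points):
    """Average Hessian rank over a handful of points (N=3 in the abstract)."""
    return np.mean([hessian_rank(f, p) for p in points])

# Toy symbolic-regression-style model: nonlinear in 2 of the 4 inputs.
model = lambda x: x[0] ** 2 + np.sin(x[1]) + 0.5 * x[2]
pts = np.random.default_rng(1).normal(size=(3, 4))
print("estimated complexity:", complexity(model, pts))   # ~2 for this model
```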
|
2501.17374
|
A Geometric Perspective for High-Dimensional Multiplex Graphs
|
cs.LG cs.AI
|
High-dimensional multiplex graphs are characterized by their high number of
complementary and divergent dimensions. The existence of multiple hierarchical
latent relations between the graph dimensions poses significant challenges to
embedding methods. In particular, the geometric distortions that might occur in
the representational space have been overlooked in the literature. This work
studies the problem of high-dimensional multiplex graph embedding from a
geometric perspective. We find that the node representations reside on highly
curved manifolds, thus rendering their exploitation more challenging for
downstream tasks. Moreover, our study reveals that increasing the number of
graph dimensions can cause further distortions to the highly curved manifolds.
To address this problem, we propose a novel multiplex graph embedding method
that harnesses hierarchical dimension embedding and Hyperbolic Graph Neural
Networks. The proposed approach hierarchically extracts hyperbolic node
representations that reside on Riemannian manifolds while gradually learning
fewer and more expressive latent dimensions of the multiplex graph.
Experimental results on real-world high-dimensional multiplex graphs show that
the synergy between hierarchical and hyperbolic embeddings incurs much fewer
geometric distortions and brings notable improvements over state-of-the-art
approaches on downstream tasks.
|
2501.17377
|
ASAP: Learning Generalizable Online Bin Packing via Adaptive Selection
After Pruning
|
cs.LG cs.AI
|
Recently, deep reinforcement learning (DRL) has achieved promising results in
solving online 3D Bin Packing Problems (3D-BPP). However, these DRL-based
policies may perform poorly on new instances due to distribution shift. Besides
generalization, we also consider adaptation, completely overlooked by previous
work, which aims at rapidly finetuning these policies to a new test
distribution. To tackle both generalization and adaptation issues, we propose
Adaptive Selection After Pruning (ASAP), which decomposes a solver's
decision-making into two policies, one for pruning and one for selection. The
role of the pruning policy is to remove inherently bad actions, which allows
the selection policy to choose among the remaining most valuable actions. To
learn these policies, we propose a training scheme based on a meta-learning
phase of both policies followed by a finetuning phase of the sole selection
policy to rapidly adapt it to a test distribution. Our experiments demonstrate
that ASAP exhibits excellent generalization and adaptation capabilities on
in-distribution and out-of-distribution instances under both discrete and
continuous setups.
|
2501.17379
|
Stable Tree Labelling for Accelerating Distance Queries on Dynamic Road
Networks
|
cs.DS cs.DB
|
Finding the shortest-path distance between two arbitrary vertices is an
important problem in road networks. Due to real-time traffic conditions, road
networks undergo dynamic changes all the time. Current state-of-the-art methods
incrementally maintain a distance labelling based on a hierarchy among vertices
to support efficient distance computation. However, their labelling sizes are
often large and cannot be efficiently maintained. To combat these issues, we
present a simple yet efficient labelling method, namely \emph{Stable Tree
Labelling} (STL), for answering distance queries on dynamic road networks. We
observe that the properties of an underlying hierarchy play an important role
in improving and balancing query and update performance. Thus, we introduce the
notion of \emph{stable tree hierarchy} which lays the ground for developing
efficient maintenance algorithms on dynamic road networks. Based on stable tree
hierarchy, STL can be efficiently constructed as a 2-hop labelling. A crucial
ingredient of STL is to only store distances within subgraphs in labels, rather
than distances in the entire graph, which restricts the labels affected by
dynamic changes. We further develop two efficient maintenance algorithms upon
STL: \emph{Label Search algorithm} and \emph{Pareto Search algorithm}. Label
Search algorithm identifies affected ancestors in a stable tree hierarchy and
performs efficient searches to update labels from those ancestors. Pareto
Search algorithm explores the interaction between search spaces of different
ancestors, and combines searches from multiple ancestors into only two searches
for each update, eliminating duplicate graph traversals. The experiments show
that our algorithms significantly outperform state-of-the-art dynamic methods
in maintaining the labelling and query processing, while requiring an order of
magnitude less space.
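For reference, the snippet below shows the generic 2-hop labelling query that STL builds on, where dist(u, v) is the minimum of d(u, h) + d(h, v) over shared hub vertices h; STL's subgraph-restricted labels and maintenance algorithms are not shown, and the toy labels are made up.

```python
def two_hop_distance(label_u, label_v):
    """Generic 2-hop labelling query: each label maps hub vertex -> distance."""
    best = float("inf")
    for hub, du in label_u.items():
        dv = label_v.get(hub)
        if dv is not None:
            best = min(best, du + dv)
    return best

# Toy labels for the path graph a-b-c-d, using b and c as hubs.
labels = {
    "a": {"a": 0, "b": 1, "c": 2},
    "b": {"b": 0, "c": 1},
    "c": {"c": 0},
    "d": {"c": 1, "d": 0},
}
print(two_hop_distance(labels["a"], labels["d"]))  # 3, via hub c
```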
|
2501.17381
|
Do We Really Need to Design New Byzantine-robust Aggregation Rules?
|
cs.CR cs.DC cs.LG
|
Federated learning (FL) allows multiple clients to collaboratively train a
global machine learning model through a server, without exchanging their
private training data. However, the decentralized aspect of FL makes it
susceptible to poisoning attacks, where malicious clients can manipulate the
global model by sending altered local model updates. To counter these attacks,
a variety of aggregation rules designed to be resilient to Byzantine failures
have been introduced. Nonetheless, these methods can still be vulnerable to
sophisticated attacks or depend on unrealistic assumptions about the server. In
this paper, we demonstrate that there is no need to design new Byzantine-robust
aggregation rules; instead, FL can be secured by enhancing the robustness of
well-established aggregation rules. To this end, we present FoundationFL, a
novel defense mechanism against poisoning attacks. FoundationFL involves the
server generating synthetic updates after receiving local model updates from
clients. It then applies existing Byzantine-robust foundational aggregation
rules, such as Trimmed-mean or Median, to combine clients' model updates with
the synthetic ones. We theoretically establish the convergence performance of
FoundationFL under Byzantine settings. Comprehensive experiments across several
real-world datasets validate the efficiency of our FoundationFL method.
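A simplified numpy illustration of the core idea follows: mix server-generated synthetic updates with the received client updates and apply a coordinate-wise trimmed mean. How FoundationFL actually generates synthetic updates is not specified here; drawing them around the coordinate-wise median is an assumption made for the example.

```python
import numpy as np

def trimmed_mean(updates, trim_frac=0.2):
    """Coordinate-wise trimmed mean: sort each coordinate across updates and
    average after discarding the largest/smallest trim_frac fraction."""
    k = int(len(updates) * trim_frac)
    s = np.sort(np.asarray(updates), axis=0)
    return s[k:len(updates) - k].mean(axis=0)

rng = np.random.default_rng(0)
honest = [rng.normal(0.0, 0.1, size=5) for _ in range(8)]
malicious = [np.full(5, 50.0) for _ in range(2)]            # poisoned updates
# Assumed stand-in for the synthetic updates: draw them near the coordinate-wise
# median of the received updates.
median_est = np.median(np.asarray(honest + malicious), axis=0)
synthetic = [median_est + rng.normal(0.0, 0.05, size=5) for _ in range(6)]

aggregate = trimmed_mean(honest + malicious + synthetic, trim_frac=0.2)
print(aggregate)   # stays close to the honest updates despite the outliers
```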
|
2501.17384
|
A Dual-Agent Adversarial Framework for Robust Generalization in Deep
Reinforcement Learning
|
cs.LG cs.AI
|
Recently, empowered with the powerful capabilities of neural networks,
reinforcement learning (RL) has successfully tackled numerous challenging
tasks. However, while these models demonstrate enhanced decision-making
abilities, they are increasingly prone to overfitting. For instance, a trained
RL model often fails to generalize to even minor variations of the same task,
such as a change in background color or other minor semantic differences. To
address this issue, we propose a dual-agent adversarial policy learning
framework, which allows agents to spontaneously learn the underlying semantics
without introducing any human prior knowledge. Specifically, our framework
involves a game process between two agents: each agent seeks to maximize the
impact of perturbations on the opponent's policy by producing representation
differences for the same state, while maintaining its own stability against
such perturbations. This interaction encourages agents to learn generalizable
policies, capable of handling irrelevant features from the high-dimensional
observations. Extensive experimental results on the Procgen benchmark
demonstrate that the adversarial process significantly improves the
generalization performance of both agents, while also being applicable to various
RL algorithms, e.g., Proximal Policy Optimization (PPO). With the adversarial
framework, the RL agent outperforms the baseline methods by a significant
margin, especially in hard-level tasks, marking a significant step forward in
the generalization capabilities of deep reinforcement learning.
|
2501.17386
|
Context-Aware Semantic Recomposition Mechanism for Large Language Models
|
cs.CL cs.AI
|
Context-aware processing mechanisms have increasingly become a critical area
of exploration for improving the semantic and contextual capabilities of
language generation models. The Context-Aware Semantic Recomposition Mechanism
(CASRM) was introduced as a novel framework designed to address limitations in
coherence, contextual adaptability, and error propagation in large-scale text
generation tasks. Through the integration of dynamically generated context
vectors and attention modulation layers, CASRM enhances the alignment between
token-level representations and broader contextual dependencies. Experimental
evaluations demonstrated significant improvements in semantic coherence across
multiple domains, including technical, conversational, and narrative text. The
ability to adapt to unseen domains and ambiguous inputs was evaluated using a
diverse set of test scenarios, highlighting the robustness of the proposed
mechanism. A detailed computational analysis revealed that while CASRM
introduces additional processing overhead, the gains in linguistic precision
and contextual relevance outweigh the marginal increase in complexity. The
framework also successfully mitigates error propagation in sequential tasks,
improving performance in dialogue continuation and multi-step text synthesis.
Additional investigations into token-level attention distribution emphasized
the dynamic focus shifts enabled through context-aware enhancements. The
findings suggest that CASRM offers a scalable and flexible solution for
integrating contextual intelligence into existing language model architectures.
|
2501.17387
|
Assessing the Capability of YOLO- and Transformer-based Object Detectors
for Real-time Weed Detection
|
cs.CV
|
Spot spraying represents an efficient and sustainable method for reducing the
amount of pesticides, particularly herbicides, used in agricultural fields. To
achieve this, it is of utmost importance to reliably differentiate between
crops and weeds, and even between individual weed species in situ and under
real-time conditions. To assess suitability for real-time application,
different object detection models that are currently state-of-the-art are
compared. All available models of YOLOv8, YOLOv9, YOLOv10, and RT-DETR are
trained and evaluated with images from a real field situation. The images are
separated into two distinct datasets: In the initial data set, each species of
plants is trained individually; in the subsequent dataset, a distinction is
made between monocotyledonous weeds, dicotyledonous weeds, and three chosen
crops. The results demonstrate that while all models perform equally well in
the metrics evaluated, the YOLOv9 models, particularly the YOLOv9s and YOLOv9e,
stand out in terms of their strong recall scores (66.58 % and 72.36 %), as well
as mAP50 (73.52 % and 79.86 %), and mAP50-95 (43.82 % and 47.00 %) in dataset
2. However, the RT-DETR models, especially RT-DETR-l, excel in precision,
reaching 82.44 % on dataset 1 and 81.46 % on dataset 2, making them
particularly suitable for scenarios where minimizing false positives is
critical. In particular, the smallest variants of the YOLO models (YOLOv8n,
YOLOv9t, and YOLOv10n) achieve substantially faster inference times down to
7.58 ms for dataset 2 on the NVIDIA GeForce RTX 4090 GPU for analyzing one
frame, while maintaining competitive accuracy, highlighting their potential for
deployment in resource-constrained embedded computing devices as typically used
in productive setups.
|
2501.17391
|
Learning Free Token Reduction for Multi-Modal LLM
|
cs.CV cs.AI cs.CL
|
Vision-Language Models (VLMs) have achieved remarkable success across a range
of multimodal tasks; however, their practical deployment is often constrained
by high computational costs and prolonged inference times. Since the vision
modality typically carries more information than the text modality, compressing
visual prompts offers a promising solution to alleviate these challenges.
Existing approaches predominantly focus on refining model architectures or
directly reducing the number of visual tokens. However, these methods often
compromise inference performance due to a lack of consideration for the unique
spatial and temporal characteristics of visual data. In this work, we propose a
token compression paradigm that operates on both spatial and temporal
dimensions. Our approach includes a learning-free, plug-and-play compression
pipeline that can be seamlessly integrated into most Multimodal Large Language
Model (MLLM) frameworks. By leveraging this method, we enhance the model
inference capability while simultaneously reducing its computational cost.
Experimental results on the Video-QA task demonstrate the effectiveness of the
proposed approach, showcasing significant improvements in efficiency without
sacrificing performance.
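As a generic illustration of learning-free temporal token reduction (a stand-in, not the paper's spatial-temporal pipeline), the sketch below averages temporally adjacent visual tokens whose frames are near-duplicates; the similarity threshold and token shapes are assumptions.

```python
import numpy as np

def merge_similar_frame_tokens(tokens, sim_thresh=0.9):
    """Average temporally adjacent frame tokens whose mean cosine similarity
    exceeds sim_thresh; tokens has shape (num_frames, num_patches, dim)."""
    kept = [tokens[0]]
    for t in range(1, len(tokens)):
        prev, cur = kept[-1], tokens[t]
        cos = np.sum(prev * cur, axis=-1) / (
            np.linalg.norm(prev, axis=-1) * np.linalg.norm(cur, axis=-1) + 1e-8)
        if cos.mean() > sim_thresh:           # near-duplicate frame: merge it
            kept[-1] = (prev + cur) / 2.0
        else:
            kept.append(cur)
    return np.stack(kept)

video_tokens = np.random.default_rng(0).normal(size=(16, 64, 32))
video_tokens[1] = video_tokens[0]             # a duplicated frame to be merged
print(merge_similar_frame_tokens(video_tokens).shape)   # fewer than 16 frames
```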
|
2501.17392
|
Byzantine-Robust Federated Learning over Ring-All-Reduce Distributed
Computing
|
cs.CR cs.LG
|
Federated learning (FL) has gained attention as a distributed learning
paradigm for its data privacy benefits and accelerated convergence through
parallel computation. Traditional FL relies on a server-client (SC)
architecture, where a central server coordinates multiple clients to train a
global model, but this approach faces scalability challenges due to server
communication bottlenecks. To overcome this, the ring-all-reduce (RAR)
architecture has been introduced, eliminating the central server and achieving
bandwidth optimality. However, the tightly coupled nature of RAR's ring
topology exposes it to unique Byzantine attack risks not present in SC-based
FL. Despite its potential, designing Byzantine-robust RAR-based FL algorithms
remains an open problem. To address this gap, we propose BRACE
(Byzantine-robust ring-all-reduce), the first RAR-based FL algorithm to achieve
both Byzantine robustness and communication efficiency. We provide theoretical
guarantees for the convergence of BRACE under Byzantine attacks, demonstrate
its bandwidth efficiency, and validate its practical effectiveness through
experiments. Our work offers a foundational understanding of Byzantine-robust
RAR-based FL design.
|
2501.17393
|
Intensional Inheritance Between Concepts: An Information-Theoretic
Interpretation
|
cs.AI cs.IT math.IT
|
This paper addresses the problem of formalizing and quantifying the concept
of "intensional inheritance" between two concepts. We begin by conceiving the
intensional inheritance of $W$ from $F$ as the amount of information the
proposition "x is $F$ " provides about the proposition "x is $W$. To flesh this
out, we consider concepts $F$ and $W$ defined by sets of properties
$\left\{F_{1}, F_{2}, \ldots, F_{n}\right\}$ and $\left\{W_{1}, W_{2}, \ldots,
W_{m}\right\}$ with associated degrees $\left\{d_{1}, d_{2}, \ldots,
d_{n}\right\}$ and $\left\{e_{1}, e_{2}, \ldots, e_{m}\right\}$, respectively,
where the properties may overlap. We then derive formulas for the intensional
inheritance using both Shannon information theory and algorithmic information
theory, incorporating interaction information among properties. We examine a
special case where all properties are mutually exclusive and calculate the
intensional inheritance in this case in both frameworks. We also derive
expressions for $P(W \mid F)$ based on the mutual information formula. Finally,
we consider the relationship between intensional inheritance and conventional
set-theoretic "extensional" inheritance, concluding that in our
information-theoretic framework, extensional inheritance emerges as a special
case of intensional inheritance.
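A tiny Shannon-information illustration of the headline quantity (the information "x is $F$" provides about "x is $W$") follows, computed from a made-up joint distribution over the two propositions; the paper's property sets, degrees, and algorithmic-information treatment are not modeled here.

```python
import numpy as np

# Joint distribution over the binary propositions F = "x is F", W = "x is W".
# Rows index F in {0, 1}, columns index W in {0, 1}; the numbers are made up.
P = np.array([[0.35, 0.15],
              [0.10, 0.40]])
P_F, P_W = P.sum(axis=1), P.sum(axis=0)

# Pointwise information that observing "x is F" provides about "x is W":
# log2 [ P(W=1 | F=1) / P(W=1) ].
pointwise = np.log2((P[1, 1] / P_F[1]) / P_W[1])

# Mutual information I(F; W) in bits, a symmetric summary of the same idea.
mi = sum(P[f, w] * np.log2(P[f, w] / (P_F[f] * P_W[w]))
         for f in (0, 1) for w in (0, 1))

print(f"P(W|F) = {P[1, 1] / P_F[1]:.3f}, "
      f"pointwise info = {pointwise:.3f} bits, I(F;W) = {mi:.3f} bits")
```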
|
2501.17396
|
Poisoning Attacks and Defenses to Federated Unlearning
|
cs.CR cs.DC cs.LG
|
Federated learning allows multiple clients to collaboratively train a global
model with the assistance of a server. However, its distributed nature makes it
susceptible to poisoning attacks, where malicious clients can compromise the
global model by sending harmful local model updates to the server. To unlearn
an accurate global model from a poisoned one after identifying malicious
clients, federated unlearning has been introduced. Yet, current research on
federated unlearning has primarily concentrated on its effectiveness and
efficiency, overlooking the security challenges it presents. In this work, we
bridge this gap by proposing BadUnlearn, the first poisoning attack targeting
federated unlearning. In BadUnlearn, malicious clients send specifically
designed local model updates to the server during the unlearning process,
aiming to ensure that the resulting unlearned model remains poisoned. To
mitigate these threats, we propose UnlearnGuard, a robust federated unlearning
framework that is provably robust against both existing poisoning attacks and
our BadUnlearn. The core concept of UnlearnGuard is for the server to estimate
the clients' local model updates during the unlearning process and employ a
filtering strategy to verify the accuracy of these estimations. Theoretically,
we prove that the model unlearned through UnlearnGuard closely resembles one
obtained by training from scratch. Empirically, we show that BadUnlearn can
effectively corrupt existing federated unlearning methods, while UnlearnGuard
remains secure against poisoning attacks.
|
2501.17397
|
Leveraging In-Context Learning and Retrieval-Augmented Generation for
Automatic Question Generation in Educational Domains
|
cs.CL
|
Question generation in education is a time-consuming and cognitively
demanding task, as it requires creating questions that are both contextually
relevant and pedagogically sound. Current automated question generation methods
often generate questions that are out of context. In this work, we explore
advanced techniques for automated question generation in educational contexts,
focusing on In-Context Learning (ICL), Retrieval-Augmented Generation (RAG),
and a novel Hybrid Model that merges both methods. We implement GPT-4 for ICL
using few-shot examples and BART with a retrieval module for RAG. The Hybrid
Model combines RAG and ICL to address these issues and improve question
quality. Evaluation is conducted using automated metrics, followed by human
evaluation metrics. Our results show that both the ICL approach and the Hybrid
Model consistently outperform other methods, including baseline models, by
generating more contextually accurate and relevant questions.
|
2501.17399
|
MultiChallenge: A Realistic Multi-Turn Conversation Evaluation Benchmark
Challenging to Frontier LLMs
|
cs.CL cs.AI
|
We present MultiChallenge, a pioneering benchmark evaluating large language
models (LLMs) on conducting multi-turn conversations with human users, a
crucial yet underexamined capability for their applications. MultiChallenge
identifies four categories of challenges in multi-turn conversations that are
not only common and realistic among current human-LLM interactions, but are
also challenging to all current frontier LLMs. All 4 challenges require
accurate instruction-following, context allocation, and in-context reasoning at
the same time. We also develop LLM as judge with instance-level rubrics to
facilitate an automatic evaluation method with fair agreement with experienced
human raters. Despite achieving near-perfect scores on existing multi-turn
evaluation benchmarks, all frontier models have less than 50% accuracy on
MultiChallenge, with the top-performing Claude 3.5 Sonnet (June 2024) achieving
just a 41.4% average accuracy.
|
2501.17400
|
A Model-Free Data-Driven Algorithm for Continuous-Time Control
|
math.OC cs.SY eess.SY
|
Presented is an algorithm to synthesize an infinite-horizon LQR optimal
feedback controller for continuous-time systems. The algorithm does not require
knowledge of the system dynamics, but instead uses only a finite-length
sampling of (possibly suboptimal) input-output data. The algorithm is based on
a constrained optimization problem that enforces a necessary condition on the
dynamics of the optimal value function along an arbitrary trajectory. This
paper presents the derivation as well as shows examples applied to both linear
and nonlinear systems inspired by air vehicles.
|
2501.17403
|
General Scene Adaptation for Vision-and-Language Navigation
|
cs.CV cs.AI cs.CL
|
Vision-and-Language Navigation (VLN) tasks mainly evaluate agents based on
one-time execution of individual instructions across multiple environments,
aiming to develop agents capable of functioning in any environment in a
zero-shot manner. However, real-world navigation robots often operate in
persistent environments with relatively consistent physical layouts, visual
observations, and language styles from instructors. Such a gap in the task
setting presents an opportunity to improve VLN agents by incorporating
continuous adaptation to specific environments. To better reflect these
real-world conditions, we introduce GSA-VLN, a novel task requiring agents to
execute navigation instructions within a specific scene and simultaneously
adapt to it for improved performance over time. To evaluate the proposed task,
one has to address two challenges in existing VLN datasets: the lack of OOD
data, and the limited number and style diversity of instructions for each
scene. Therefore, we propose a new dataset, GSA-R2R, which significantly
expands the diversity and quantity of environments and instructions for the R2R
dataset to evaluate agent adaptability in both ID and OOD contexts.
Furthermore, we design a three-stage instruction orchestration pipeline that
leverages LLMs to refine speaker-generated instructions and apply role-playing
techniques to rephrase instructions into different speaking styles. This is
motivated by the observation that each individual user often has consistent
signatures or preferences in their instructions. We conducted extensive
experiments on GSA-R2R to thoroughly evaluate our dataset and benchmark various
methods. Based on our findings, we propose a novel method, GR-DUET, which
incorporates memory-based navigation graphs with an environment-specific
training strategy, achieving state-of-the-art results on all GSA-R2R splits.
|
2501.17409
|
Value Function Decomposition in Markov Recommendation Process
|
cs.IR
|
Recent advances in recommender systems have shown that user-system
interaction essentially formulates long-term optimization problems, and online
reinforcement learning can be adopted to improve recommendation performance.
The general solution framework incorporates a value function that estimates the
user's expected cumulative rewards in the future and guides the training of the
recommendation policy. To avoid local maxima, the policy may explore potential
high-quality actions during inference to increase the chance of finding better
future rewards. To accommodate the stepwise recommendation process, one widely
adopted approach to learning the value function is learning from the difference
between the values of two consecutive states of a user. However, we argue that
this paradigm involves a challenge of Mixing Random Factors: there exist two
random factors from the stochastic policy and the uncertain user environment,
but they are not separately modeled in the standard temporal difference (TD)
learning, which may result in a suboptimal estimation of the long-term rewards
and less effective action exploration. As a solution, we show that these two
factors can be separately approximated by decomposing the original temporal
difference loss. The disentangled learning framework can achieve a more
accurate estimation with faster learning and improved robustness against action
exploration. As an empirical verification of our proposed method, we conduct
offline experiments with simulated online environments built on the basis of
public datasets.
|
2501.17411
|
A Genetic Algorithm-Based Approach for Automated Optimization of
Kolmogorov-Arnold Networks in Classification Tasks
|
cs.NE cs.AI cs.LG
|
To address the issue of interpretability in multilayer perceptrons (MLPs),
Kolmogorov-Arnold Networks (KANs) were introduced in 2024. However, optimizing
KAN structures is labor-intensive, typically requiring manual intervention and
parameter tuning. This paper proposes GA-KAN, a genetic algorithm-based
approach that automates the optimization of KANs, requiring no human
intervention in the design process. To the best of our knowledge, this is the
first time that evolutionary computation is explored to optimize KANs
automatically. Furthermore, inspired by the use of sparse connectivity in MLPs
in effectively reducing the number of parameters, GA-KAN further explores
sparse connectivity to tackle the challenge of extensive parameter spaces in
KANs. GA-KAN is validated on two toy datasets, achieving optimal results
without the manual tuning required by the original KAN. Additionally, GA-KAN
demonstrates superior performance across five classification datasets,
outperforming traditional methods on all datasets and providing interpretable
symbolic formulae for the Wine and Iris datasets, thereby enhancing model
transparency. Furthermore, GA-KAN significantly reduces the number of
parameters over the standard KAN across all five datasets. The core
contributions of GA-KAN include automated optimization, a new encoding
strategy, and a new decoding process, which together improve the accuracy and
interpretability, and reduce the number of parameters.
|
2501.17412
|
Randomized Scheduling for Periodic Multi-Source Systems with PAoI
Violation Guarantees
|
cs.IT math.IT
|
The Age of Information (AoI) has been recognized as a critical metric for
assessing the freshness of information in modern communication systems. In this
work, we examine an information update system where multiple information
sources transmit updates to their respective destinations via a shared base
station. Our main contribution is the proposal of a randomized scheduling
algorithm that offers distinct statistical AoI guarantees for heterogeneous
sources. Specifically, we rigorously derive an analytical upper bound on peak
age of information (PAoI) violation probability by leveraging properties of the
multivariate noncentral hypergeometric Wallenius distribution. Building on
these analytical results, two designs of coefficients for the randomized policy
are proposed to meet the outage constraints for all sources, tailored to the
long and short sampling delay cases, respectively. Simulation results
demonstrate the accuracy of our analysis of the PAoI violation probability and also
show that our proposed designs provide a feasible solution in most
cases.
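A Monte Carlo sketch of the setting follows: in each slot the shared base station serves one source drawn from fixed scheduling probabilities, and the empirical PAoI-violation frequency is measured per source. The generate-at-will, unit-delay model, probabilities, and thresholds are illustrative assumptions; the Wallenius-distribution analysis is not reproduced.

```python
import numpy as np

def paoi_violation(probs, thresholds, slots=200_000, seed=0):
    """Estimate P(PAoI > threshold) per source under randomized scheduling."""
    rng = np.random.default_rng(seed)
    n = len(probs)
    age = np.ones(n)                     # AoI at each destination
    peaks = [[] for _ in range(n)]
    for _ in range(slots):
        i = rng.choice(n, p=probs)       # randomized scheduling decision
        age += 1.0                       # one slot elapses for every source
        peaks[i].append(age[i])          # peak AoI just before the update lands
        age[i] = 1.0                     # fresh sample delivered after one slot
    return [float(np.mean(np.asarray(p) > t)) for p, t in zip(peaks, thresholds)]

# Illustrative scheduling probabilities and per-source PAoI thresholds.
print(paoi_violation(probs=[0.5, 0.3, 0.2], thresholds=[8, 15, 25]))
```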
|
2501.17414
|
Reqo: A Robust and Explainable Query Optimization Cost Model
|
cs.DB cs.AI cs.LG
|
In recent years, there has been a growing interest in using machine learning
(ML) in query optimization to select more efficient plans. Existing
learning-based query optimizers use certain model architectures to convert
tree-structured query plans into representations suitable for downstream ML
tasks. As the design of these architectures significantly impacts cost
estimation, we propose a tree model architecture based on Bidirectional Graph
Neural Networks (Bi-GNN) aggregated by Gated Recurrent Units (GRUs) to achieve
more accurate cost estimates. The inherent uncertainty of data and model
parameters also leads to inaccurate cost estimates, resulting in suboptimal
plans and less robust query performance. To address this, we implement a novel
learning-to-rank cost model that effectively quantifies the uncertainty in cost
estimates using approximate probabilistic ML. This model adaptively integrates
quantified uncertainty with estimated costs and learns from comparing pairwise
plans, achieving more robust performance. In addition, we propose the first
explainability technique specifically designed for learning-based cost models.
This technique explains the contribution of any subgraphs in the query plan to
the final predicted cost, which can be integrated and trained with any
learning-based cost model to significantly boost the model's explainability. By
incorporating these innovations, we propose a cost model for a Robust and
Explainable Query Optimizer, Reqo, that improves the accuracy, robustness, and
explainability of cost estimation, outperforming state-of-the-art approaches in
all three dimensions.
|
2501.17415
|
si4onnx: A Python package for Selective Inference in Deep Learning
Models
|
cs.LG stat.ML
|
In this paper, we introduce si4onnx, a package for performing selective
inference on deep learning models. Techniques such as CAM in XAI and
reconstruction-based anomaly detection using VAE can be interpreted as methods
for identifying significant regions within input images. However, the
identified regions may not always carry meaningful significance. Therefore,
evaluating the statistical significance of these regions represents a crucial
challenge in establishing the reliability of AI systems. si4onnx is a Python
package that enables straightforward implementation of hypothesis testing with
controlled type I error rates through selective inference. It is compatible
with deep learning models constructed using common frameworks such as PyTorch
and TensorFlow.
|
2501.17420
|
Actions Speak Louder than Words: Agent Decisions Reveal Implicit Biases
in Language Models
|
cs.CL cs.AI cs.HC
|
While advances in fairness and alignment have helped mitigate overt biases
exhibited by large language models (LLMs) when explicitly prompted, we
hypothesize that these models may still exhibit implicit biases when simulating
human behavior. To test this hypothesis, we propose a technique to
systematically uncover such biases across a broad range of sociodemographic
categories by assessing decision-making disparities among agents with
LLM-generated, sociodemographically-informed personas. Using our technique, we
tested six LLMs across three sociodemographic groups and four decision-making
scenarios. Our results show that state-of-the-art LLMs exhibit significant
sociodemographic disparities in nearly all simulations, with more advanced
models exhibiting greater implicit biases despite reducing explicit biases.
Furthermore, when comparing our findings to real-world disparities reported in
empirical studies, we find that the biases we uncovered are directionally
aligned but markedly amplified. This directional alignment highlights the
utility of our technique in uncovering systematic biases in LLMs rather than
random variations; moreover, the presence and amplification of implicit biases
emphasizes the need for novel strategies to address these biases.
|
2501.17422
|
SIGN: A Statistically-Informed Gaze Network for Gaze Time Prediction
|
cs.CV stat.AP
|
We propose a first version of SIGN, a Statistically-Informed Gaze Network, to
predict aggregate gaze times on images. We develop a foundational statistical
model for which we derive a deep learning implementation involving CNNs and
Visual Transformers, which enables the prediction of overall gaze times. The
model enables us to derive from the aggregate gaze times the underlying gaze
pattern as a probability map over all regions in the image, where each region's
probability represents the likelihood of being gazed at across all possible
scan-paths. We test SIGN's performance on AdGaze3500, a dataset of images of
ads with aggregate gaze times, and on COCO-Search18, a dataset with
individual-level fixation patterns collected during search. We demonstrate that
SIGN (1) improves gaze duration prediction significantly over state-of-the-art
deep learning benchmarks on both datasets, and (2) can deliver plausible gaze
patterns that correspond to empirical fixation patterns in COCO-Search18. These
results suggest that the first version of SIGN holds promise for gaze-time
predictions and deserves further development.
|
2501.17424
|
Certificated Actor-Critic: Hierarchical Reinforcement Learning with
Control Barrier Functions for Safe Navigation
|
cs.RO cs.LG
|
Control Barrier Functions (CBFs) have emerged as a prominent approach to
designing safe navigation systems of robots. Despite their popularity, current
CBF-based methods exhibit some limitations: optimization-based safe control
techniques tend to be either myopic or computationally intensive, and they rely
on simplified system models; conversely, the learning-based methods suffer from
the lack of quantitative indication in terms of navigation performance and
safety. In this paper, we present a new model-free reinforcement learning
algorithm called Certificated Actor-Critic (CAC), which introduces a
hierarchical reinforcement learning framework and well-defined reward functions
derived from CBFs. We provide a theoretical analysis and proofs for our
algorithm, and propose several improvements in algorithm implementation. Our
analysis is validated by two simulation experiments, showing the effectiveness
of our proposed CAC algorithm.
|
2501.17428
|
WCDT: Systematic WCET Optimization for Decision Tree Implementations
|
cs.LG cs.PF
|
Machine-learning models are increasingly deployed on resource-constrained
embedded systems with strict timing constraints. In such scenarios, the
worst-case execution time (WCET) of the models is required to ensure safe
operation. Specifically, decision trees are a prominent class of
machine-learning models and the main building blocks of tree-based ensemble
models (e.g., random forests), which are commonly employed in
resource-constrained embedded systems.
In this paper, we develop a systematic approach for WCET optimization of
decision tree implementations. To this end, we introduce a linear surrogate
model that estimates the execution time of individual paths through a decision
tree based on the path's length and the number of taken branches. We provide an
optimization algorithm that constructively builds a WCET-optimal implementation
of a given decision tree with respect to this surrogate model. We
experimentally evaluate both the surrogate model and the WCET-optimization
algorithm. The evaluation shows that the optimization algorithm improves
analytically determined WCET by up to $17\%$ compared to an unoptimized
implementation.
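The sketch below evaluates the linear surrogate described above, path cost ≈ a·(path length) + b·(taken branches) + c, over all root-to-leaf paths of a toy decision tree and reports the worst path as the surrogate WCET; the coefficients and tree are illustrative, not fitted values from the paper.

```python
A_CYCLES_PER_NODE = 4.0      # assumed cost per comparison on the path
B_CYCLES_PER_TAKEN = 2.0     # assumed extra cost per taken (non-fall-through) branch
C_CYCLES_BASE = 10.0         # assumed fixed call/return overhead

# Tiny decision tree: internal nodes are (feature, threshold, left, right),
# leaves are class labels.
tree = ("x0", 0.5,
        ("x1", 0.2, "A", "B"),
        ("x2", 0.7, ("x1", 0.9, "C", "D"), "E"))

def paths(node, length=0, taken=0):
    """Yield (path length, taken branches) for every root-to-leaf path,
    counting the right child as the 'taken' branch direction."""
    if not isinstance(node, tuple):           # reached a leaf
        yield length, taken
        return
    _, _, left, right = node
    yield from paths(left, length + 1, taken)
    yield from paths(right, length + 1, taken + 1)

def surrogate_wcet(tree):
    return max(A_CYCLES_PER_NODE * l + B_CYCLES_PER_TAKEN * t + C_CYCLES_BASE
               for l, t in paths(tree))

print("surrogate WCET estimate:", surrogate_wcet(tree), "cycles")
```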
|
2501.17429
|
Algorithmic Segmentation and Behavioral Profiling for Ransomware
Detection Using Temporal-Correlation Graphs
|
cs.CR cs.AI
|
The rapid evolution of cyber threats has outpaced traditional detection
methodologies, necessitating innovative approaches capable of addressing the
adaptive and complex behaviors of modern adversaries. A novel framework was
introduced, leveraging Temporal-Correlation Graphs to model the intricate
relationships and temporal patterns inherent in malicious operations. The
approach dynamically captured behavioral anomalies, offering a robust mechanism
for distinguishing between benign and malicious activities in real-time
scenarios. Extensive experiments demonstrated the framework's effectiveness
across diverse ransomware families, with consistently high precision, recall,
and overall detection accuracy. Comparative evaluations highlighted its superior
performance over traditional signature-based and heuristic methods,
particularly in handling polymorphic and previously unseen ransomware variants.
The architecture was designed with scalability and modularity in mind, ensuring
compatibility with enterprise-scale environments while maintaining resource
efficiency. Analysis of encryption speeds, anomaly patterns, and temporal
correlations provided deeper insights into the operational strategies of
ransomware, validating the framework's adaptability to evolving threats. The
research contributes to advancing cybersecurity technologies by integrating
dynamic graph analytics and machine learning for future innovations in threat
detection. Results from this study underline the potential for transforming the
way organizations detect and mitigate complex cyberattacks.
|
2501.17431
|
Human-Aligned Skill Discovery: Balancing Behaviour Exploration and
Alignment
|
cs.LG cs.RO
|
Unsupervised skill discovery in Reinforcement Learning aims to mimic humans'
ability to autonomously discover diverse behaviors. However, existing methods
are often unconstrained, making it difficult to find useful skills, especially
in complex environments, where discovered skills are frequently unsafe or
impractical. We address this issue by proposing Human-aligned Skill Discovery
(HaSD), a framework that incorporates human feedback to discover safer, more
aligned skills. HaSD simultaneously optimises skill diversity and alignment
with human values. This approach ensures that alignment is maintained
throughout the skill discovery process, eliminating the inefficiencies
associated with exploring unaligned skills. We demonstrate its effectiveness in
both 2D navigation and SafetyGymnasium environments, showing that HaSD
discovers diverse, human-aligned skills that are safe and useful for downstream
tasks. Finally, we extend HaSD by learning a range of configurable skills with
varying degrees of diversity-alignment trade-offs that could be useful in
practical scenarios.
|
2501.17433
|
Virus: Harmful Fine-tuning Attack for Large Language Models Bypassing
Guardrail Moderation
|
cs.CR cs.AI cs.CL cs.LG
|
Recent research shows that Large Language Models (LLMs) are vulnerable to
harmful fine-tuning attacks -- models lose their safety alignment ability after
fine-tuning on a few harmful samples. For risk mitigation, a guardrail is
typically used to filter out harmful samples before fine-tuning. By designing a
new red-teaming method, we in this paper show that purely relying on the
moderation guardrail for data filtration is not reliable. Our proposed attack
method, dubbed Virus, easily bypasses the guardrail moderation by slightly
modifying the harmful data. Experimental results show that the harmful data
optimized by Virus is not detectable by the guardrail with up to 100\% leakage
ratio, and can simultaneously achieve superior attack performance. Finally, the
key message we want to convey through this paper is that: \textbf{counting on
guardrail moderation to stop harmful fine-tuning attacks is clutching at
straws}, as it cannot solve the inherent safety issue of the
pre-trained LLMs. Our code is available at https://github.com/git-disl/Virus
|
2501.17437
|
Bayesian BIM-Guided Construction Robot Navigation with NLP Safety
Prompts in Dynamic Environments
|
cs.RO
|
Construction robotics increasingly relies on natural language processing for
task execution, creating a need for robust methods to interpret commands in
complex, dynamic environments. While existing research primarily focuses on
what tasks robots should perform, less attention has been paid to how these
tasks should be executed safely and efficiently. This paper presents a novel
probabilistic framework that uses sentiment analysis from natural language
commands to dynamically adjust robot navigation policies in construction
environments. The framework leverages Building Information Modeling (BIM) data
and natural language prompts to create adaptive navigation strategies that
account for varying levels of environmental risk and uncertainty. We introduce
an object-aware path planning approach that combines exponential potential
fields with a grid-based representation of the environment, where the potential
fields are dynamically adjusted based on the semantic analysis of user prompts.
The framework employs Bayesian inference to consolidate multiple information
sources: the static data from BIM, the semantic content of natural language
commands, and the implied safety constraints from user prompts. We demonstrate
our approach through experiments comparing three scenarios: baseline
shortest-path planning, safety-oriented navigation, and risk-aware routing.
Results show that our method successfully adapts path planning based on natural
language sentiment, achieving a 50\% improvement in minimum distance to
obstacles when safety is prioritized, while maintaining reasonable path
lengths. Scenarios with contrasting prompts, such as "dangerous" and "safe",
demonstrate the framework's ability to modify paths. This approach provides a
flexible foundation for integrating human knowledge and safety considerations
into construction robot navigation.
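A numpy sketch of the obstacle-potential idea follows: each obstacle contributes an exponential potential on a grid whose strength is scaled by a risk factor derived from the command. The keyword-based sentiment stand-in and the constants are assumptions; the Bayesian fusion with BIM data is not reproduced.

```python
import numpy as np

def risk_factor(prompt: str) -> float:
    """Toy stand-in for the sentiment-analysis step: map safety-related words
    in the command to a risk multiplier (the real framework uses NLP + Bayes)."""
    if "danger" in prompt.lower():
        return 3.0
    if "safe" in prompt.lower():
        return 1.5
    return 1.0

def potential_grid(shape, obstacles, decay=2.0, prompt=""):
    """Exponential potential field on a grid: each obstacle contributes
    k * exp(-decay * distance), with k scaled by the prompt's risk factor."""
    k = risk_factor(prompt)
    ys, xs = np.indices(shape)
    field = np.zeros(shape)
    for oy, ox in obstacles:
        dist = np.hypot(ys - oy, xs - ox)
        field += k * np.exp(-decay * dist)
    return field

field_safe = potential_grid((20, 20), [(10, 10)], prompt="keep a safe distance")
field_risky = potential_grid((20, 20), [(10, 10)], prompt="this area is dangerous")
print(field_safe[10, 12], field_risky[10, 12])   # stronger repulsion when risky
```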
|
2501.17441
|
Towards Making Flowchart Images Machine Interpretable
|
cs.CV cs.AI cs.CL cs.DL cs.SE
|
Computer programming textbooks and software documentations often contain
flowcharts to illustrate the flow of an algorithm or procedure. Modern OCR
engines often tag these flowcharts as graphics and ignore them in further
processing. In this paper, we work towards making flowchart images
machine-interpretable by converting them to executable Python codes. To this
end, inspired by the recent success in natural language to code generation
literature, we present a novel transformer-based framework, namely FloCo-T5.
Our model is well-suited for this task, as it can effectively learn the
semantics, structure, and patterns of programming languages, which it
leverages to generate syntactically correct code. We also use a task-specific
pre-training objective to pre-train FloCo-T5 on a large number of
logic-preserving augmented code samples. Further, to perform a rigorous study
of this problem, we introduce the FloCo dataset, which contains 11,884
flowchart images and their
corresponding Python codes. Our experiments show promising results, and
FloCo-T5 clearly outperforms related competitive baselines on code generation
metrics. We make our dataset and implementation publicly available.
|
2501.17443
|
Gradual Domain Adaptation for Graph Learning
|
cs.LG
|
Existing literature lacks a graph domain adaptation technique for handling
large distribution shifts, primarily due to the difficulty in simulating an
evolving path from source to target graph. To make a breakthrough, we present a
graph gradual domain adaptation (GGDA) framework with the construction of a
compact domain sequence that minimizes information loss in adaptations. Our
approach starts with an efficient generation of knowledge-preserving
intermediate graphs over the Fused Gromov-Wasserstein (FGW) metric. With the
bridging data pool, GGDA domains are then constructed via a novel vertex-based
domain progression, which comprises "close" vertex selections and adaptive
domain advancement to enhance inter-domain information transferability.
Theoretically, our framework concretizes the intractable inter-domain distance
$W_p(\mu_t,\mu_{t+1})$ via implementable upper and lower bounds, enabling
flexible adjustments of this metric for optimizing domain formation. Extensive
experiments under various transfer scenarios validate the superior performance
of our GGDA framework.
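For a concrete reference point, the FGW metric that drives the intermediate-graph generation can be computed with the POT library; the snippet below is a generic FGW computation between two toy attributed graphs (assuming POT's ot.gromov.fused_gromov_wasserstein2 API), not the paper's interpolation procedure:

```python
import numpy as np
import ot  # POT: Python Optimal Transport

rng = np.random.default_rng(0)
n1, n2 = 5, 6
C1 = rng.random((n1, n1)); C1 = (C1 + C1.T) / 2    # intra-graph structure matrices
C2 = rng.random((n2, n2)); C2 = (C2 + C2.T) / 2
F1, F2 = rng.random((n1, 3)), rng.random((n2, 3))  # node features
M = ot.dist(F1, F2)                                # cross-graph feature cost
p, q = np.full(n1, 1 / n1), np.full(n2, 1 / n2)    # uniform vertex weights

# alpha trades off structure (Gromov-Wasserstein term) vs. features (Wasserstein term).
fgw_dist = ot.gromov.fused_gromov_wasserstein2(
    M, C1, C2, p, q, loss_fun="square_loss", alpha=0.5)
print(fgw_dist)
```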
|
2501.17449
|
Cross-Language Approach for Quranic QA
|
cs.CL cs.IR
|
Question answering systems face critical limitations in languages with
limited resources and scarce data, making the development of robust models
especially challenging. The Quranic QA system holds significant importance as
it facilitates a deeper understanding of the Quran, a Holy text for over a
billion people worldwide. However, these systems face unique challenges,
including the linguistic disparity between questions written in Modern Standard
Arabic and answers found in Quranic verses written in Classical Arabic, and the
small size of existing datasets, which further restricts model performance. To
address these challenges, we adopt a cross-language approach by (1) Dataset
Augmentation: expanding and enriching the dataset through machine translation
to convert Arabic questions into English, paraphrasing questions to create
linguistic diversity, and retrieving answers from an English translation of the
Quran to align with multilingual training requirements; and (2) Language Model
Fine-Tuning: utilizing pre-trained models such as BERT-Medium, RoBERTa-Base,
DeBERTa-v3-Base, ELECTRA-Large, Flan-T5, Bloom, and Falcon to address the
specific requirements of Quranic QA. Experimental results demonstrate that this
cross-language approach significantly improves model performance, with
RoBERTa-Base achieving the highest MAP@10 (0.34) and MRR (0.52), while
DeBERTa-v3-Base excels in Recall@10 (0.50) and Precision@10 (0.24). These
findings underscore the effectiveness of cross-language strategies in
overcoming linguistic barriers and advancing Quranic QA systems.
|
2501.17450
|
NF-MKV Net: A Constraint-Preserving Neural Network Approach to Solving
Mean-Field Games Equilibrium
|
cs.LG
|
Neural network-based methods for solving Mean-Field Games (MFGs) equilibria
have garnered significant attention for their effectiveness in high-dimensional
problems. However, many algorithms struggle with ensuring that the evolution of
the density distribution adheres to the required mathematical constraints. This
paper investigates a neural network approach to solving MFGs equilibria through
a stochastic process perspective. It integrates process-regularized Normalizing
Flow (NF) frameworks with state-policy-connected time-series neural networks to
address McKean-Vlasov-type Forward-Backward Stochastic Differential Equation
(MKV FBSDE) fixed-point problems, equivalent to MFGs equilibria.
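For orientation, a generic MKV FBSDE system of the kind referenced above can be written as follows (our notation, not necessarily the paper's):
\[
\begin{aligned}
dX_t &= b\big(t, X_t, \mathcal{L}(X_t), \alpha_t\big)\,dt + \sigma\big(t, X_t, \mathcal{L}(X_t)\big)\,dW_t, & X_0 &= \xi,\\
dY_t &= -f\big(t, X_t, Y_t, Z_t, \mathcal{L}(X_t)\big)\,dt + Z_t\,dW_t, & Y_T &= g\big(X_T, \mathcal{L}(X_T)\big),
\end{aligned}
\]
where $\mathcal{L}(X_t)$ denotes the law of $X_t$; the MFG equilibrium corresponds to the fixed point at which the flow of laws assumed in the coefficients coincides with the law of the resulting solution.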
|
2501.17456
|
A review on the novelty measurements of academic papers
|
cs.DL cs.CL
|
Novelty evaluation is vital for the promotion and management of innovation.
With the advancement of information techniques and the open data movement, some
progress has been made in novelty measurements. Tracking and reviewing novelty
measures provides a data-driven way to assess contributions, progress, and
emerging directions in the science field. As academic papers serve as the
primary medium for the dissemination, validation, and discussion of scientific
knowledge, this review aims to offer a systematic analysis of novelty
measurements for scientific papers. We began by comparing the differences
between scientific novelty and four similar concepts, including originality,
scientific innovation, creativity, and scientific breakthrough. Next, we
reviewed the types of scientific novelty. Then, we classified existing novelty
measures according to data types and reviewed the measures for each type.
Subsequently, we surveyed the approaches employed in validating novelty
measures and examined the current tools and datasets associated with these
measures. Finally, we proposed several open issues for future studies.
|
2501.17459
|
Large Language Models for Single-Step and Multi-Step Flight Trajectory
Prediction
|
cs.AI cs.CL
|
Flight trajectory prediction is a critical time series task in aviation.
While deep learning methods have shown significant promise, the application of
large language models (LLMs) to this domain remains underexplored. This study
pioneers the use of LLMs for flight trajectory prediction by reframing it as a
language modeling problem. Specifically, we extract features representing the
aircraft's position and status from ADS-B flight data to construct a
prompt-based dataset, where trajectory waypoints are converted into language
tokens. The dataset is then employed to fine-tune LLMs, enabling them to learn
complex spatiotemporal patterns for accurate predictions. Comprehensive
experiments demonstrate that LLMs achieve notable performance improvements in
both single-step and multi-step predictions compared to traditional methods,
with the LLaMA-3.1 model achieving the highest overall accuracy. However, the high
inference latency of LLMs poses a challenge for real-time applications,
underscoring the need for further research in this promising direction.
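A minimal sketch of the waypoint-to-prompt conversion described above (the field names, number formatting, and instruction template are our illustrative assumptions):

```python
def waypoints_to_prompt(track, horizon=1):
    """Serialize ADS-B waypoints into a language-model prompt.
    Each waypoint is a dict with hypothetical keys lat/lon/alt/spd."""
    lines = [
        f"t={i}: lat={w['lat']:.3f} lon={w['lon']:.3f} "
        f"alt={w['alt']:.0f}ft spd={w['spd']:.0f}kt"
        for i, w in enumerate(track)
    ]
    task = f"Predict the next {horizon} waypoint(s) in the same format."
    return "\n".join(lines) + "\n" + task

track = [
    {"lat": 40.641, "lon": -73.778, "alt": 12000, "spd": 310},
    {"lat": 40.702, "lon": -73.690, "alt": 13500, "spd": 325},
]
print(waypoints_to_prompt(track))
# Prompt/completion pairs built this way would then be used to fine-tune the LLM.
```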
|
2501.17468
|
Solving Inverse Problems using Diffusion with Fast Iterative Renoising
|
cs.CV
|
Imaging inverse problems can be solved in an unsupervised manner using
pre-trained diffusion models. In most cases, that involves approximating the
gradient of the measurement-conditional score function in the reverse process.
Since the approximations produced by existing methods are quite poor,
especially early in the reverse process, we propose a new approach that
re-estimates and renoises the image several times per diffusion step. Renoising
adds carefully shaped colored noise that ensures the pre-trained diffusion
model sees white-Gaussian error, in accordance with how it was trained. We
demonstrate the effectiveness of our "DDfire" method at 20, 100, and 1000
neural function evaluations on linear inverse problems and phase retrieval.
|
2501.17473
|
Remote State Estimation over a Wearing Channel: Information Freshness
vs. Channel Aging
|
cs.IT cs.SY eess.SY math.IT
|
We study the remote estimation of a linear Gaussian system over a
nonstationary channel that wears out over time and with every use. The sensor
can either transmit a fresh measurement in the current time slot, restore the
channel quality at the cost of downtime, or remain silent. More frequent
transmissions yield accurate estimates but incur significant wear on the
channel. Renewing the channel too often improves channel conditions but results
in poor estimation quality. What is the optimal timing to transmit measurements
and restore the channel? We formulate the problem as a Markov decision process
(MDP) and show the monotonicity properties of an optimal policy. A structured
policy iteration algorithm is proposed to find the optimal policy.
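A toy value-iteration sketch for an MDP with the three actions described above; the state space, cost terms, and wear dynamics are hypothetical stand-ins for the paper's model:

```python
import numpy as np

# State: (age a of the latest received measurement, channel wear w).
A_MAX, W_MAX = 10, 10
TRANSMIT, RESTORE, SILENT = 0, 1, 2

def step(a, w, action):
    """Hypothetical dynamics: transmitting resets the age but wears the channel;
    restoring resets the wear at the cost of downtime; silence only ages the state."""
    if action == TRANSMIT:
        success = 1.0 - w / (W_MAX + 1)  # worn channels fail more often
        return success, (0, min(w + 1, W_MAX)), (min(a + 1, A_MAX), min(w + 1, W_MAX))
    if action == RESTORE:
        return 1.0, (min(a + 1, A_MAX), 0), None
    return 1.0, (min(a + 1, A_MAX), w), None  # SILENT

V = np.zeros((A_MAX + 1, W_MAX + 1))
for _ in range(500):  # value iteration with discount 0.95
    V_new = np.empty_like(V)
    for a in range(A_MAX + 1):
        for w in range(W_MAX + 1):
            costs = []
            for act in (TRANSMIT, RESTORE, SILENT):
                p, s_ok, s_fail = step(a, w, act)
                nxt = p * V[s_ok] + (0.0 if s_fail is None else (1 - p) * V[s_fail])
                downtime = 2.0 if act == RESTORE else 0.0
                costs.append(a + downtime + 0.95 * nxt)  # age as error proxy
            V_new[a, w] = min(costs)
    V = V_new
# The monotonicity results in the paper suggest threshold policies in (a, w).
```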
|
2501.17476
|
Hybrid Channel- and Coding-Based Challenge-Response Physical-Layer
Authentication
|
cs.IT eess.SP math.IT
|
This letter proposes a new physical layer authentication mechanism operating
at the physical layer of a communication system where the receiver has partial
control of the channel conditions (e.g., using an intelligent reflecting
surface). We aim to exploit both instantaneous channel state information (CSI)
and a secret shared key for authentication. This is achieved by both
transmitting an identifying key by wiretap coding (to conceal the key from the
attacker) and checking that the instantaneous CSI corresponds to the channel
configuration randomly selected by the receiver. We investigate the trade-off
between the pilot signals used for CSI estimation and the coding rate (or key
length) to improve the overall security of the authentication procedure.
|
2501.17479
|
DFPE: A Diverse Fingerprint Ensemble for Enhancing LLM Performance
|
cs.LG cs.AI cs.CL
|
Large Language Models (LLMs) have shown remarkable capabilities across
various natural language processing tasks but often struggle to excel uniformly
in diverse or complex domains. We propose a novel ensemble method - Diverse
Fingerprint Ensemble (DFPE), which leverages the complementary strengths of
multiple LLMs to achieve more robust performance. Our approach involves: (1)
clustering models based on their response "fingerprint" patterns, (2) applying a
quantile-based filtering mechanism to remove underperforming models at a
per-subject level, and (3) assigning adaptive weights to remaining models based
on their subject-wise validation accuracy. In experiments on the Massive
Multitask Language Understanding (MMLU) benchmark, DFPE outperforms the best
single model by 3% overall accuracy and 5% in discipline-level accuracy. This
method increases the robustness and generalization of LLMs and underscores how
model selection, diversity preservation, and performance-driven weighting can
effectively address challenging, multi-faceted language understanding tasks.
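An illustrative sketch of the three steps listed above, i.e., fingerprint clustering, quantile filtering, and accuracy-based weighting (the encodings, cluster count, and threshold are our assumptions, not the paper's exact procedure):

```python
import numpy as np
from sklearn.cluster import KMeans

def dfpe_weights(fingerprints, val_acc, q=0.25):
    """fingerprints: (n_models, n_probe_questions) encoded response patterns.
    val_acc: per-model validation accuracy for one subject, shape (n_models,)."""
    # (1) Cluster models by response-fingerprint similarity.
    labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(fingerprints)
    # (2) Quantile filtering: drop models below the q-quantile of accuracy.
    keep = val_acc >= np.quantile(val_acc, q)
    # (3) Adaptive weights proportional to validation accuracy, normalized.
    w = np.where(keep, val_acc, 0.0)
    return labels, w / w.sum()

rng = np.random.default_rng(1)
fps = rng.integers(0, 4, size=(8, 50)).astype(float)  # toy answer fingerprints
acc = rng.uniform(0.4, 0.9, size=8)
labels, weights = dfpe_weights(fps, acc)
# The final answer would be a weighted majority vote across the surviving models.
```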
|
2501.17484
|
Capacity Expansion Planning under Uncertainty subject to Expected Energy
Not Served Constraints
|
eess.SY cs.SY
|
We present a method for solving a large-scale stochastic capacity expansion
problem which explicitly considers reliability constraints, in particular
constraints on expected energy not served. Our method tackles this problem by a
Lagrange relaxation of the expected energy not served constraints. We solve the
relaxed formulation in an iterative manner, using a subgradient-based method.
Each iteration requires the solution of a stochastic capacity expansion
problem, for which we implement a subgradient decomposition scheme in a
high-performance computing infrastructure. We apply the proposed methodology to
the Economic Viability Assessment model that is used by ENTSO-E in the annual
European Resource Adequacy Assessment, extended to include explicit reliability
constraints. The approach is able to solve this model achieving a 1.3%
optimality gap. We compare our approach against accounting for reliability
by penalizing load shedding at the value of lost load (VOLL), and find that
the former results in 1.6% savings in total cost. We are also able to
quantify the cost savings from allowing some load curtailment in the capacity
planning process, which range from 1.6% to 6% in the cases analyzed.
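The projected-subgradient multiplier update implied by the Lagrangian relaxation of the expected-energy-not-served (EENS) constraints can be sketched as follows (the inner solver and step-size rule are placeholders, not the paper's exact scheme):

```python
def lagrangian_loop(solve_expansion, eens_cap, n_zones, iters=50):
    """solve_expansion(lmbda) -> (capacity_plan, eens_per_zone): a placeholder
    for the stochastic capacity expansion subproblem with EENS penalized at lmbda."""
    lmbda = [0.0] * n_zones
    for k in range(1, iters + 1):
        plan, eens = solve_expansion(lmbda)
        step = 1.0 / k  # diminishing step size
        # Projected subgradient ascent on the dual: violation raises the price.
        lmbda = [max(0.0, l + step * (e - cap))
                 for l, e, cap in zip(lmbda, eens, eens_cap)]
    return plan, lmbda
```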
|
2501.17486
|
DINT Transformer
|
cs.CL cs.AI cs.LG
|
DIFF Transformer addresses the issue of irrelevant context interference by
introducing a differential attention mechanism that enhances the robustness of
local attention. However, it has two critical limitations: the lack of global
context modeling, which is essential for identifying globally significant
tokens, and numerical instability due to the absence of strict row
normalization in the attention matrix. To overcome these challenges, we propose
DINT Transformer, which extends DIFF Transformer by incorporating a
differential-integral mechanism. By computing global importance scores and
integrating them into the attention matrix, DINT Transformer improves its
ability to capture global dependencies. Moreover, the unified parameter design
enforces row-normalized attention matrices, improving numerical stability.
Experimental results demonstrate that DINT Transformer excels in accuracy and
robustness across various practical applications, such as long-context language
modeling and key information retrieval. These results position DINT Transformer
as a highly effective and promising architecture.
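A rough single-head sketch of a differential attention map augmented with a global-importance (integral) term and explicit row normalization, in the spirit of the mechanism described; this is our reading of the idea, not the authors' reference code:

```python
import torch
import torch.nn.functional as F

def dint_attention(q1, k1, q2, k2, v, lam=0.5, gamma=0.5):
    """q*, k*: (batch, seq, d); v: (batch, seq, d_v). Differential map minus
    lam times a second map, mixed with column-wise global importance scores;
    rows are renormalized to sum to one (our construction)."""
    d = q1.shape[-1]
    a1 = F.softmax(q1 @ k1.transpose(-1, -2) / d**0.5, dim=-1)
    a2 = F.softmax(q2 @ k2.transpose(-1, -2) / d**0.5, dim=-1)
    attn = a1 - lam * a2                            # differential term (DIFF-style)
    global_imp = attn.mean(dim=1, keepdim=True)     # global importance per token
    attn = (1 - gamma) * attn + gamma * global_imp  # integral term, broadcast to rows
    attn = attn / attn.sum(dim=-1, keepdim=True)    # strict row normalization
    return attn @ v

x = torch.randn(2, 16, 32)
out = dint_attention(x, x, x, x, torch.randn(2, 16, 64))
```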
|
2501.17489
|
Neural Spelling: A Spell-Based BCI System for Language Neural Decoding
|
cs.HC cs.AI
|
Brain-computer interfaces (BCIs) present a promising avenue by translating
neural activity directly into text, eliminating the need for physical actions.
However, existing non-invasive BCI systems have not successfully covered the
entire alphabet, limiting their practicality. In this paper, we propose a novel
non-invasive EEG-based BCI system with Curriculum-based Neural Spelling
Framework, which recognizes all 26 alphabet letters by decoding neural signals
associated with handwriting first, and then applies Generative AI (GenAI) to
enhance spell-based neural language decoding tasks. Our approach combines the
ease of handwriting with the accessibility of EEG technology, utilizing
advanced neural decoding algorithms and pre-trained large language models
(LLMs) to translate EEG patterns into text with high accuracy. This system shows
how GenAI can improve the performance of typical spelling-based neural language
decoding tasks, and it addresses the limitations of previous methods, offering a
scalable and user-friendly solution for individuals with communication
impairments, thereby enhancing inclusive communication options.
|
2501.17493
|
Certifying Pareto-Optimality in Multi-Objective Maximum Satisfiability
|
cs.AI
|
Due to the wide employment of automated reasoning in the analysis and
construction of correct systems, the results reported by automated reasoning
engines must be trustworthy. For Boolean satisfiability (SAT) solvers - and
more recently SAT-based maximum satisfiability (MaxSAT) solvers -
trustworthiness is obtained by integrating proof logging into solvers, making
solvers capable of emitting machine-verifiable proofs to certify correctness of
the reasoning steps performed. In this work, we enable for the first time proof
logging based on the VeriPB proof format for multi-objective MaxSAT (MO-MaxSAT)
optimization techniques. Although VeriPB does not offer direct support for
multi-objective problems, we detail how preorders in VeriPB can be used to
provide certificates for MO-MaxSAT algorithms computing a representative
solution for each element in the non-dominated set of the search space under
Pareto-optimality, without extending the VeriPB format or the proof checker. By
implementing VeriPB proof logging into a state-of-the-art multi-objective
MaxSAT solver, we show empirically that proof logging can be made scalable for
MO-MaxSAT with reasonable overhead.
|
2501.17496
|
SemML: Enhancing Automata-Theoretic LTL Synthesis with Machine Learning
|
cs.AI cs.SY eess.SY
|
Synthesizing a reactive system from specifications given in linear temporal
logic (LTL) is a classical problem, finding its applications in safety-critical
systems design. We present our tool SemML, which won this year's LTL
realizability tracks of SYNTCOMP, after years of domination by Strix. While
both tools are based on the automata-theoretic approach, ours relies heavily on
(i) Semantic labelling, additional information of logical nature, coming from
recent LTL-to-automata translations and decorating the resulting parity game,
and (ii) Machine Learning approaches turning this information into a guidance
oracle for on-the-fly exploration of the parity game (whence the name SemML).
Our tool fills the gaps left by previous suggestions to use such an oracle
and provides an efficient implementation with additional algorithmic
improvements. We evaluate SemML on both the entire SYNTCOMP benchmark set and
a synthetic data set, compare it to Strix, and analyze the advantages and
limitations. As SemML solves more instances on SYNTCOMP and does so
significantly faster on larger instances, this demonstrates for the first time
that machine-learning-aided approaches can outperform state-of-the-art tools
in real LTL synthesis.
|
2501.17499
|
A Sampling Complexity-aware Framework for Discrete-time Fractional-Order
Dynamical System Identification
|
eess.SY cs.SY
|
A variety of complex biological, natural, and man-made systems exhibit
non-Markovian dynamics that can be modeled through fractional-order
differential equations; yet, we lack sample-complexity-aware system
identification strategies. Towards this end, we propose an affine discrete-time
fractional order dynamical system (FoDS) identification algorithm and provide a
detailed sample complexity analysis. The algorithm effectively addresses the
challenges of FoDS identification in the presence of noisy data. The proposed
algorithm consists of two key steps. Firstly, it avoids solving higher-order
polynomial equations, which would otherwise result in multiple potential
solutions for the fractional orders. Secondly, the identification problem is
reformulated as a least squares estimation, allowing us to infer the system
parameters. We derive the expectation and probabilistic bounds for the FoDS
parameter estimation error, assuming prior knowledge of the functions \( f \)
and \( g \) in the FoDS model. The estimation error reaches a desired accuracy
\( \epsilon \) with \( N = O\left( \frac{d}{\epsilon} \right) \) samples, where
\( N \) is the number of samples, \( d \) is the dimension of the state
variable, and \( \epsilon \) represents the
desired estimation accuracy. Simulation results demonstrate that our
theoretical bounds are tight, validating the accuracy and robustness of this
algorithm.
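For concreteness, a standard Grünwald-Letnikov discrete fractional difference, commonly used in discrete-time FoDS models (our notation; the paper's exact formulation may differ), is
\[
\Delta^{\alpha} x[k] = \sum_{j=0}^{k} (-1)^{j} \binom{\alpha}{j}\, x[k-j],
\]
and with known functions $f$ and $g$, an affine model of the form $\Delta^{\alpha} x[k+1] = A f(x[k]) + B g(x[k])\,u[k] + \varepsilon[k]$ is linear in the unknown parameters $(A, B)$, so stacking the $N$ sampled equations yields an ordinary least-squares estimate.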
|
2501.17507
|
Reflections on "Can AI Understand Our Universe?"
|
cs.AI astro-ph.HE astro-ph.IM
|
This article briefly discusses the philosophical and technical aspects of AI.
It focuses on two concepts of understanding: intuition and causality, and
highlights three AI technologies: Transformers, chain-of-thought reasoning, and
multimodal processing. We anticipate that in principle AI could form
understanding, with these technologies representing promising advancements.
|
2501.17510
|
LLM Assistance for Pediatric Depression
|
cs.LG cs.AI cs.CL
|
Traditional depression screening methods, such as the PHQ-9, are particularly
challenging for children in pediatric primary care due to practical
limitations. AI has the potential to help, but the scarcity of annotated
datasets in mental health, combined with the computational costs of training,
highlights the need for efficient, zero-shot approaches. In this work, we
investigate the feasibility of state-of-the-art LLMs for depressive symptom
extraction in pediatric settings (ages 6-24). This approach aims to complement
traditional screening and minimize diagnostic errors.
Our findings show that all LLMs are 60% more efficient than word matching, with
Flan leading in precision (average F1: 0.65, precision: 0.78) and excelling in
the extraction of rarer symptoms like "sleep problems" (F1: 0.92) and
"self-loathing" (F1: 0.8). Phi strikes a balance between precision (0.44) and
recall (0.60), performing well in categories like "Feeling depressed" (0.69)
and "Weight change" (0.78). Llama 3, with the highest recall (0.90),
overgeneralizes symptoms, making it less suitable for this type of analysis.
The main challenges faced by LLMs include navigating the complex structure of
clinical notes, which mix content from different times in the patient
trajectory, as well as misinterpreting elevated PHQ-9 scores.
We finally demonstrate the utility of symptom annotations provided by Flan as
features in an ML algorithm, which differentiates depression cases from
controls with a precision of 0.78, showing a major performance boost
compared to a baseline that does not use these features.
|
2501.17512
|
A Survey on Cluster-based Federated Learning
|
stat.ML cs.LG
|
As the industrial and commercial use of Federated Learning (FL) has expanded,
so has the need for optimized algorithms.
In settings where FL clients' data is non-independently and identically
distributed (non-IID) and has highly heterogeneous distributions, the baseline
FL approach seems to fall short. To tackle this issue, recent studies have
looked into personalized FL (PFL), which relaxes the implicit single-model
constraint and allows for multiple hypotheses to be learned from the data or
local models. Among the personalized FL approaches, cluster-based solutions
(CFL) are particularly interesting whenever it is clear, through domain
knowledge, that the clients can be separated into groups.
In this paper, we study recent works on CFL, proposing: i) a classification
of CFL solutions for personalization; ii) a structured review of the
literature; and iii) a review of alternative use cases for CFL. CCS Concepts: $\bullet$ General
and reference $\rightarrow$ Surveys and overviews; $\bullet$ Computing
methodologies $\rightarrow$ Machine learning; $\bullet$ Information systems
$\rightarrow$ Clustering; $\bullet$ Security and privacy $\rightarrow$
Privacy-preserving protocols.
|
2501.17513
|
Sequential Learning of the Pareto Front for Multi-objective Bandits
|
stat.ML cs.LG
|
We study the problem of sequential learning of the Pareto front in
multi-objective multi-armed bandits. An agent is faced with K possible arms to
pull. At each turn she picks one, and receives a vector-valued reward. When she
thinks she has enough information to identify the Pareto front of the different
arm means, she stops the game and gives an answer. We are interested in
designing algorithms such that the answer given is correct with probability at
least $1-\delta$. Our main contribution is an efficient implementation of an
algorithm achieving the optimal sample complexity when the risk $\delta$ is
small. With $K$ arms in $d$ dimensions, $p$ of which are in the Pareto set, the
algorithm runs in time $O(Kp^d)$ per round.
|
2501.17518
|
RegD: Hierarchical Embeddings via Distances over Geometric Regions
|
cs.LG cs.AI
|
Hierarchical data are common in many domains like life sciences and
e-commerce, and their embeddings often play a critical role. Although
hyperbolic embeddings offer a grounded approach to representing hierarchical
structures in low-dimensional spaces, their utility is hindered by optimization
difficulties in hyperbolic space and dependence on handcrafted structural
constraints. We propose RegD, a novel Euclidean framework that addresses these
limitations by representing hierarchical data as geometric regions with two new
metrics: (1) depth distance, which preserves the representational power of
hyperbolic spaces for hierarchical data, and (2) boundary distance, which
explicitly encodes set-inclusion relationships between regions in a general
way. Our empirical evaluation on diverse real-world datasets shows consistent
performance gains over state-of-the-art methods and demonstrates RegD's
potential for broader applications beyond hierarchy-only tasks.
|
2501.17529
|
Accelerated DC loadflow solver for topology optimization
|
eess.SY cs.SY
|
We present a massively parallel solver that accelerates DC loadflow
computations for power grid topology optimization tasks. Our approach leverages
low-rank updates of the Power Transfer Distribution Factors (PTDFs) to
represent substation splits, line outages, and reconfigurations without ever
refactorizing the system. Furthermore, we implement the core routines on
Graphics Processing Units (GPUs), thereby exploiting their high-throughput
architecture for linear algebra. A two-level decomposition separates changes in
branch topology from changes in nodal injections, enabling additional speed-ups
by an in-the-loop brute force search over injection variations at minimal
additional cost. We demonstrate billion-loadflow-per-second performance on
power grids of varying sizes in workload settings which are typical for
gradient-free topology optimization such as Reinforcement Learning or Quality
Diversity methods. While adopting the DC approximation sacrifices some accuracy
and prohibits the computation of voltage magnitudes, we show that this
sacrifice unlocks new scales of computational feasibility, offering a powerful
tool for large-scale grid planning and operational topology optimization.
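The flavour of low-rank update such solvers exploit can be illustrated with the textbook line outage distribution factor (LODF) identity (the paper's substation-split and reconfiguration updates generalize this idea):
\[
f = \mathrm{PTDF}\, p, \qquad f_{\ell}^{\mathrm{post}} = f_{\ell} + \mathrm{LODF}_{\ell k}\, f_{k}, \qquad \mathrm{LODF}_{\ell k} = \frac{\mathrm{PTDF}_{\ell, ij}}{1 - \mathrm{PTDF}_{k, ij}},
\]
where $p$ is the nodal injection vector, line $k$ with terminals $(i, j)$ is outaged, and $\mathrm{PTDF}_{\ell, ij}$ is the flow induced on line $\ell$ by injecting one unit at $i$ and withdrawing it at $j$; no refactorization of the network matrices is required.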
|
2501.17534
|
3DSES: an indoor Lidar point cloud segmentation dataset with real and
pseudo-labels from a 3D model
|
cs.CV
|
Semantic segmentation of indoor point clouds has found various applications
in the creation of digital twins for robotics, navigation and building
information modeling (BIM). However, most existing datasets of labeled indoor
point clouds have been acquired by photogrammetry. In contrast, Terrestrial
Laser Scanning (TLS) can acquire dense sub-centimeter point clouds and has
become the standard for surveyors. We present 3DSES (3D Segmentation of ESGT
point clouds), a new dataset of indoor dense TLS colorized point clouds
covering 427 m$^2$ of an engineering school. 3DSES has a unique double annotation
format: semantic labels annotated at the point level alongside a full 3D CAD
model of the building. We introduce a model-to-cloud algorithm for automated
labeling of indoor point clouds using an existing 3D CAD model. 3DSES has 3
variants of various semantic and geometrical complexities. We show that our
model-to-cloud alignment can produce pseudo-labels on our point clouds with
>95% accuracy, allowing us to train deep models with significant time
savings compared to manual labeling. First baselines on 3DSES show the
difficulties encountered by existing models when segmenting objects relevant to
BIM, such as light and safety utilities. We show that segmentation accuracy can
be improved by leveraging pseudo-labels and Lidar intensity, an information
rarely considered in current datasets. Code and data will be open sourced.
|
2501.17544
|
Pole-Zero Identification: Unveiling the Critical Dynamics of Microwave
Circuits Beyond Stability Analysis
|
eess.SY cs.SY
|
Pole-zero identification refers to the obtaining of the poles and zeros of a
linear (or linearized) system described by its frequency response. This is
usually done using optimization techniques (such as least squares, maximum
likelihood estimation, or vector fitting) that fit a given frequency response
of the linear system to a transfer function defined as the ratio of two
polynomials. This kind of linear system identification in the frequency domain
has numerous applications in a wide variety of engineering fields (such as
mechanical systems, power systems and Electromagnetic Compatibility). In the
microwave domain, rational approximation is increasingly used to obtain
black-box models of complex passive structures for model order reduction and
efficient transient simulation. In this paper we will focus on a different
application of pole-zero identification. We will review the different ways in
which pole-zero identification can be applied to nonlinear circuit design (for
power amplifier stability analysis and beyond). We will give a comprehensive
view on recent approaches through illustrative application examples. Other uses
of rational approximation techniques are beyond the scope of this paper.
|
2501.17546
|
Is Conversational XAI All You Need? Human-AI Decision Making With a
Conversational XAI Assistant
|
cs.HC cs.AI
|
Explainable artificial intelligence (XAI) methods are being proposed to help
interpret and understand how AI systems reach specific predictions. Inspired by
prior work on conversational user interfaces, we argue that augmenting existing
XAI methods with conversational user interfaces can increase user engagement
and boost user understanding of the AI system. In this paper, we explored the
impact of a conversational XAI interface on users' understanding of the AI
system, their trust, and reliance on the AI system. In comparison to an XAI
dashboard, we found that the conversational XAI interface can bring about a
better understanding of the AI system among users and higher user trust.
However, users of both the XAI dashboard and conversational XAI interfaces
showed clear overreliance on the AI system. Enhanced conversations powered by
large language model (LLM) agents amplified this overreliance. Based on our
findings, we reason that the potential cause of such overreliance is the
illusion of explanatory depth that is concomitant with both XAI interfaces. Our
findings have important implications for designing effective conversational XAI
interfaces to facilitate appropriate reliance and improve human-AI
collaboration. Code can be found at
https://github.com/delftcrowd/IUI2025_ConvXAI
|
2501.17547
|
Towards Training-Free Open-World Classification with 3D Generative
Models
|
cs.CV
|
3D open-world classification is a challenging yet essential task in dynamic
and unstructured real-world scenarios, requiring both open-category and
open-pose recognition. To address these challenges, recent wisdom often takes
sophisticated 2D pre-trained models to provide enriched and stable
representations. However, these methods largely rely on how 3D objects can be
projected into 2D space, which is unfortunately not well solved, and thus
significantly limits their performance. Unlike these present efforts, in this
paper we make a pioneering exploration of 3D generative models for 3D
open-world classification. Drawing on abundant prior knowledge from 3D
generative models, we additionally craft a rotation-invariant feature
extractor. This innovative synergy endows our pipeline with the advantages of
being training-free, open-category, and pose-invariant, thus well suited to 3D
open-world classification. Extensive experiments on benchmark datasets
demonstrate the potential of generative models in 3D open-world classification,
achieving state-of-the-art performance on ModelNet10 and McGill with 32.0% and
8.7% overall accuracy improvement, respectively.
|
2501.17549
|
Query-Aware Learnable Graph Pooling Tokens as Prompt for Large Language
Models
|
cs.CL
|
Graph-structured data plays a vital role in numerous domains, such as social
networks, citation networks, commonsense reasoning graphs and knowledge graphs.
While graph neural networks have been employed for graph processing, recent
advancements have explored integrating large language models for graph-based
tasks. In this paper, we propose a novel approach named Learnable Graph Pooling
Token (LGPT), which addresses the limitations of the scalability issues in
node-level projection and information loss in graph-level projection. LGPT
enables flexible and efficient graph representation by introducing learnable
parameters that act as tokens in large language models, balancing fine-grained
and global graph information. Additionally, we investigate an Early Query
Fusion technique, which fuses query context before constructing the graph
representation, leading to more effective graph embeddings. Our method achieves
a 4.13\% performance improvement on the GraphQA benchmark without training the
large language model, demonstrating significant gains in handling complex
textual-attributed graph data.
|
2501.17550
|
Action Recognition Using Temporal Shift Module and Ensemble Learning
|
cs.CV
|
This paper presents the first-rank solution for the Multi-Modal Action
Recognition Challenge, part of the Multi-Modal Visual Pattern Recognition
Workshop at the International Conference on Pattern Recognition (ICPR) 2024.
The competition aimed to recognize human
actions using a diverse dataset of 20 action classes, collected from
multi-modal sources. The proposed approach is built upon the Temporal Shift Module (TSM), a
technique aimed at efficiently capturing temporal dynamics in video data,
incorporating multiple data input types. Our strategy included transfer
learning to leverage pre-trained models, followed by meticulous fine-tuning on
the challenge's specific dataset to optimize performance for the 20 action
classes. We carefully selected a backbone network to balance computational
efficiency and recognition accuracy and further refined the model using an
ensemble technique that integrates outputs from different modalities. This
ensemble approach proved crucial in boosting the overall performance. Our
solution achieved a perfect top-1 accuracy on the test set, demonstrating the
effectiveness of the proposed approach in recognizing human actions across 20
classes. Our code is available online https://github.com/ffyyytt/TSM-MMVPR.
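For reference, the core temporal-shift operation of TSM can be sketched as follows (following the formulation popularized by the original TSM paper; the challenge solution's exact configuration may differ):

```python
import torch

def temporal_shift(x, n_segment, fold_div=8):
    """x: (batch*time, channels, H, W). Shifts 1/fold_div of the channels one
    step backward in time, another 1/fold_div forward; the rest stay in place."""
    nt, c, h, w = x.size()
    x = x.view(nt // n_segment, n_segment, c, h, w)
    fold = c // fold_div
    out = torch.zeros_like(x)
    out[:, :-1, :fold] = x[:, 1:, :fold]                  # shift towards the past
    out[:, 1:, fold:2 * fold] = x[:, :-1, fold:2 * fold]  # shift towards the future
    out[:, :, 2 * fold:] = x[:, :, 2 * fold:]             # untouched channels
    return out.view(nt, c, h, w)

frames = torch.randn(2 * 8, 64, 14, 14)  # 2 clips of 8 frames each
shifted = temporal_shift(frames, n_segment=8)
```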
|
2501.17552
|
Efficient Calculation of Stabilization Parameters in RF Power Amplifiers
|
eess.SY cs.SY
|
This paper proposes an efficient method for the calculation of the
stabilization parameters in RF power amplifiers operating in periodic
large-signal regimes. Stabilization is achieved by applying the principles of
linear control theory for Periodic Linear Time-Varying (PLTV) systems. A
numerical method is proposed to obtain the Harmonic Transfer Function that
represents the system linearized around the large-signal steady state. Then, a
feedback analysis is performed to calculate the closed-loop poles of the PLTV
system. The proposed approach is demonstrated with two examples. Firstly, a
three-stage amplifier that exhibits a low-frequency oscillation for increasing
values of input power is correctly stabilized. Next, the stabilization of an
unstable design that exhibits an odd-mode parametric oscillation is presented.
The results of the proposed technique are compared to those obtained with the
conventional parametric stability simulation. These examples serve to
illustrate the capability and efficiency of the proposed approach.
|
2501.17553
|
Closing the Gap Between Synthetic and Ground Truth Time Series
Distributions via Neural Mapping
|
cs.LG stat.ML
|
In this paper, we introduce Neural Mapper for Vector Quantized Time Series
Generator (NM-VQTSG), a novel method aimed at addressing fidelity challenges in
vector quantized (VQ) time series generation. VQ-based methods, such as
TimeVQVAE, have demonstrated success in generating time series but are hindered
by two critical bottlenecks: information loss during compression into discrete
latent spaces and deviations in the learned prior distribution from the ground
truth distribution. These challenges result in synthetic time series with
compromised fidelity and distributional accuracy. To overcome these
limitations, NM-VQTSG leverages a U-Net-based neural mapping model to bridge
the distributional gap between synthetic and ground truth time series. To be
more specific, the model refines synthetic data by addressing artifacts
introduced during generation, effectively aligning the distributions of
synthetic and real data. Importantly, NM-VQTSG can be used for synthetic time
series generated by any VQ-based generative method. We evaluate NM-VQTSG across
diverse datasets from the UCR Time Series Classification archive, demonstrating
its capability to consistently enhance fidelity in both unconditional and
conditional generation tasks. The improvements are evidenced by significant
gains in FID, IS, and conditional FID, additionally backed up by visual
inspection in the data and latent spaces. Our findings establish NM-VQTSG
as a new method to improve the quality of synthetic time series. Our
implementation is available on \url{https://github.com/ML4ITS/TimeVQVAE}.
|
2501.17554
|
Information Theory for Expectation Measures
|
cs.IT math.IT math.PR
|
Shannon based his information theory on the notion of probability measures as
it was developed by Kolmogorov. In this paper we study some fundamental problems
in information theory based on expectation measures. In the theory of
expectation measures it is natural to study data sets where no randomness is
present and it is also natural to study information theory for point processes
as well as sampling where the sample size is not fixed. Expectation measures in
combination with Kraft's Inequality can be used to clarify in which cases
probability measures can be used to quantify randomness.
|
2501.17555
|
An Exceptional Dataset For Rare Pancreatic Tumor Segmentation
|
cs.CV cs.AI
|
Pancreatic NEuroendocrine Tumors (pNETs) are very rare endocrine neoplasms
that account for less than 5% of all pancreatic malignancies, with an incidence
of only 1-1.5 cases per 100,000. Early detection of pNETs is critical for
improving patient survival, but the rarity of pNETs makes segmenting them from
CT a very challenging problem. So far, there has not been a dataset
specifically for pNETs available to researchers. To address this issue, we
propose a pNETs dataset, a well-annotated Contrast-Enhanced Computed Tomography
(CECT) dataset focused exclusively on Pancreatic Neuroendocrine Tumors,
containing data from 469 patients. This is the first dataset solely dedicated
to pNETs, distinguishing it from previous collections. Additionally, we provide
baseline detection networks with a new slice-wise weighted loss function
designed for UNet-based models, improving overall pNET segmentation
performance. We hope that our dataset can enhance the understanding and
diagnosis of pNETs within the medical community, facilitate the
development of more accurate diagnostic tools, and ultimately improve patient
outcomes and advance the field of oncology.
|
2501.17557
|
Heuristic-Informed Mixture of Experts for Link Prediction in Multilayer
Networks
|
cs.LG cs.SI physics.soc-ph
|
Link prediction algorithms for multilayer networks are in principle required
to effectively account for the entire layered structure while capturing the
unique contexts offered by each layer. However, many existing approaches excel
at predicting specific links in certain layers but struggle with others, as
they fail to effectively leverage the diverse information encoded across
different network layers. In this paper, we present MoE-ML-LP, the first
Mixture-of-Experts (MoE) framework specifically designed for multilayer link
prediction. Building on top of multilayer heuristics for link prediction,
MoE-ML-LP synthesizes the decisions taken by diverse experts, resulting in
significantly enhanced predictive capabilities. Our extensive experimental
evaluation on real-world and synthetic networks demonstrates that MoE-ML-LP
consistently outperforms several baselines and competing methods, achieving
remarkable improvements of +60% in Mean Reciprocal Rank, +82% in Hits@1, +55%
in Hits@5, and +41% in Hits@10. Furthermore, MoE-ML-LP features a modular
architecture that enables the seamless integration of newly developed experts
without necessitating the re-training of the entire framework, fostering
efficiency and scalability to new experts, paving the way for future
advancements in link prediction.
|
2501.17559
|
Solving Urban Network Security Games: Learning Platform, Benchmark, and
Challenge for AI Research
|
cs.AI cs.GT
|
After the great achievement of solving two-player zero-sum games, more and
more AI researchers focus on solving multiplayer games. To facilitate the
development of designing efficient learning algorithms for solving multiplayer
games, we propose a multiplayer game platform for solving Urban Network
Security Games (\textbf{UNSG}) that model real-world scenarios. Preventing
criminal activity is a highly significant responsibility of police officers in
cities, who must allocate their limited security resources to interdict an
escaping criminal when a crime takes place. This interaction between multiple
police officers and the escaping
criminal can be modeled as a UNSG. The variants of UNSGs can model different
real-world settings, e.g., whether real-time information is available or not,
and whether police officers can communicate or not. The main challenges of
solving this game include the large size of the game and the co-existence of
cooperation and competition. While previous efforts have been made to tackle
UNSGs, they have been hampered by performance and scalability issues.
Therefore, we propose an open-source UNSG platform (\textbf{GraphChase}) for
designing efficient learning algorithms for solving UNSGs. Specifically,
GraphChase offers a unified and flexible game environment for modeling various
variants of UNSGs, supporting the development, testing, and benchmarking of
algorithms. We believe that GraphChase not only facilitates the development of
efficient algorithms for solving real-world problems but also paves the way for
significant advancements in algorithmic development for solving general
multiplayer games.
|
2501.17561
|
Coalitional model predictive control of an irrigation canal
|
eess.SY cs.MA cs.SY math.OC
|
We present a hierarchical control scheme for large-scale systems whose
components can exchange information through a data network. The main goal of
the supervisory layer is to find the best compromise between control
performance and communicational costs by actively modifying the network
topology. The actions taken at the supervisory layer alter the control agents'
knowledge of the complete system, and the set of agents with which they can
communicate. Each group of linked subsystems, or coalition, is independently
controlled based on a decentralized model predictive control (MPC) scheme,
managed at the bottom layer. Hard constraints on the inputs are imposed, while
soft constraints on the states are considered to avoid feasibility issues. The
performance of the proposed control scheme is validated on a model of the Dez
irrigation canal, implemented on the accurate simulator for water systems
SOBEK. Finally, the results are compared with those obtained using a
centralized MPC controller.
|
2501.17567
|
Exploring the Potential of Wireless-enabled Multi-Chip AI Accelerators
|
cs.AR cs.AI
|
The insatiable appetite of Artificial Intelligence (AI) workloads for
computing power is pushing the industry to develop faster and more efficient
accelerators. The rigidity of custom hardware, however, conflicts with the need
for scalable and versatile architectures capable of catering to the needs of
the evolving and heterogeneous pool of Machine Learning (ML) models in the
literature. In this context, multi-chiplet architectures assembling multiple
(perhaps heterogeneous) accelerators are an appealing option that is
unfortunately hindered by the still rigid and inefficient chip-to-chip
interconnects. In this paper, we explore the potential of wireless technology
as a complement to existing wired interconnects in this multi-chiplet approach.
Using an evaluation framework from the state-of-the-art, we show that wireless
interconnects can lead to speedups of 10% on average and 20% maximum. We also
highlight the importance of load balancing between the wired and wireless
interconnects, which will be further explored in future work.
|
2501.17568
|
Histogram approaches for imbalanced data streams regression
|
cs.LG
|
Handling imbalanced data streams in regression tasks presents a significant
challenge, as rare instances can appear anywhere in the target distribution
rather than being confined to its extreme values. In this paper, we introduce
novel data-level sampling strategies, \texttt{HistUS} and \texttt{HistOS}, that
utilize histogram-based approaches to dynamically balance data streams. Unlike
previous methods based on Chebyshev\textquotesingle s inequality, our proposed
techniques identify and handle rare cases across the entire distribution
effectively. We demonstrate that \texttt{HistUS} and \texttt{HistOS} outperform
traditional methods through extensive experiments on synthetic and real-world
datasets, leading to more accurate and robust regression models in streaming
environments.
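A minimal sketch of histogram-based undersampling in the spirit of \texttt{HistUS} (the binning and acceptance rule are our simplification; a streaming variant would maintain the histogram incrementally):

```python
import numpy as np

def hist_undersample(X, y, n_bins=10, seed=0):
    """Downsample instances in over-full target bins so that every histogram
    bin of y carries a comparable number of examples."""
    rng = np.random.default_rng(seed)
    counts, edges = np.histogram(y, bins=n_bins)
    target = counts[counts > 0].min()          # size of the rarest non-empty bin
    bin_idx = np.clip(np.digitize(y, edges[1:-1]), 0, n_bins - 1)
    keep = np.zeros(len(y), dtype=bool)
    for b in range(n_bins):
        members = np.flatnonzero(bin_idx == b)
        if len(members):
            keep[rng.choice(members, size=min(target, len(members)),
                            replace=False)] = True
    return X[keep], y[keep]

X = np.random.randn(1000, 4)
y = np.random.randn(1000) ** 3                 # skewed regression target
Xb, yb = hist_undersample(X, y)
```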
|
2501.17569
|
A linguistically-motivated evaluation methodology for unraveling model's
abilities in reading comprehension tasks
|
cs.CL
|
We introduce an evaluation methodology for reading comprehension tasks based
on the intuition that certain examples, by the virtue of their linguistic
complexity, consistently yield lower scores regardless of model size or
architecture. We capitalize on semantic frame annotation for characterizing
this complexity, and study seven complexity factors that may account for
model's difficulty. We first deploy this methodology on a carefully annotated
French reading comprehension benchmark showing that two of those complexity
factors are indeed good predictors of models' failure, while others are less
so. We further deploy our methodology on a well studied English benchmark by
using Chat-GPT as a proxy for semantic annotation. Our study reveals that
fine-grained, linguistically-motivated automatic evaluation of a reading
comprehension task is not only possible, but also helps to understand models' abilities
to handle specific linguistic characteristics of input examples. It also shows
that current state-of-the-art models fail on some of those characteristics,
which suggests that adequately handling them requires more than merely
increasing model size.
|
2501.17570
|
Trustworthy image-to-image translation: evaluating uncertainty
calibration in unpaired training scenarios
|
eess.IV cs.CV physics.med-ph
|
Mammographic screening is an effective method for detecting breast cancer,
facilitating early diagnosis. However, the current need to manually inspect
images places a heavy burden on healthcare systems, spurring a desire for
automated diagnostic protocols. Techniques based on deep neural networks have
been shown effective in some studies, but their tendency to overfit leaves
considerable risk for poor generalisation and misdiagnosis, preventing their
widespread adoption in clinical settings. Data augmentation schemes based on
unpaired neural style transfer models have been proposed that improve
generalisability by diversifying the representations of training image features
in the absence of paired training data (images of the same tissue in either
image style). But these models are similarly prone to various pathologies, and
evaluating their performance is challenging without ground truths/large
datasets (as is often the case in medical imaging). Here, we consider two
frameworks/architectures: a GAN-based cycleGAN, and the more recently developed
diffusion-based SynDiff. We evaluate their performance when trained on image
patches parsed from three open access mammography datasets and one non-medical
image dataset. We consider the use of uncertainty quantification to assess
model trustworthiness, and propose a scheme to evaluate calibration quality in
unpaired training scenarios. This ultimately helps facilitate the trustworthy
use of image-to-image translation models in domains where ground truths are not
typically available.
|
2501.17578
|
Music2Latent2: Audio Compression with Summary Embeddings and
Autoregressive Decoding
|
cs.SD cs.AI cs.LG eess.AS
|
Efficiently compressing high-dimensional audio signals into a compact and
informative latent space is crucial for various tasks, including generative
modeling and music information retrieval (MIR). Existing audio autoencoders,
however, often struggle to achieve high compression ratios while preserving
audio fidelity and facilitating efficient downstream applications. We introduce
Music2Latent2, a novel audio autoencoder that addresses these limitations by
leveraging consistency models and a novel approach to representation learning
based on unordered latent embeddings, which we call summary embeddings. Unlike
conventional methods that encode local audio features into ordered sequences,
Music2Latent2 compresses audio signals into sets of summary embeddings, where
each embedding can capture distinct global features of the input sample. This
enables higher reconstruction quality at the same compression ratio.
To handle arbitrary audio lengths, Music2Latent2 employs an autoregressive
consistency model trained on two consecutive audio chunks with causal masking,
ensuring coherent reconstruction across segment boundaries. Additionally, we
propose a novel two-step decoding procedure that leverages the denoising
capabilities of consistency models to further refine the generated audio at no
additional cost. Our experiments demonstrate that Music2Latent2 outperforms
existing continuous audio autoencoders regarding audio quality and performance
on downstream tasks. Music2Latent2 paves the way for new possibilities in audio
compression.
|
2501.17581
|
CSEval: Towards Automated, Multi-Dimensional, and Reference-Free
Counterspeech Evaluation using Auto-Calibrated LLMs
|
cs.CL cs.AI cs.CY cs.SI
|
Counterspeech has emerged as a popular and effective strategy for combating
online hate speech, sparking growing research interest in automating its
generation using language models. However, the field still lacks standardised
evaluation protocols and reliable automated evaluation metrics that align with
human judgement. Current automatic evaluation methods, primarily based on
similarity metrics, do not effectively capture the complex and independent
attributes of counterspeech quality, such as contextual relevance,
aggressiveness, or argumentative coherence. This has led to an increased
dependency on labor-intensive human evaluations to assess automated
counterspeech generation methods. To address these challenges, we introduce
CSEval, a novel dataset and framework for evaluating counterspeech quality
across four dimensions: contextual-relevance, aggressiveness,
argument-coherence, and suitableness. Furthermore, we propose Auto-Calibrated
COT for Counterspeech Evaluation (Auto-CSEval), a prompt-based method with
auto-calibrated chain-of-thoughts (CoT) for scoring counterspeech using large
language models. Our experiments show that Auto-CSEval outperforms traditional
metrics like ROUGE, METEOR, and BertScore in correlating with human judgement,
indicating a significant improvement in automated counterspeech evaluation.
|
2501.17582
|
Coalitional Control: Cooperative game theory and control
|
eess.SY cs.GT cs.SY math.OC
|
The evolution of information and communication technologies has yielded the
means of sharing measurements and other information in an efficient and
flexible way, which has enabled the size and complexity of control applications
to increase. At the same time, the improvements in the computational and
communicational capabilities of control devices have fostered the development
of noncentralized control architectures, already motivated by the inherent
structural constraints of large-scale systems. Computer-based control
approaches such as model predictive control (MPC) are visible beneficiaries of
these advances and have registered a significant growth regarding both
theoretical and applied fields. Coalitional control focuses on the local
interests that motivate the controllers to assemble, an aspect so far rarely
contemplated in the distributed control literature. This article presents the
main concepts and challenges in coalitional control, and the links with
cooperative network game theory.
|
2501.17584
|
GLLM: Self-Corrective G-Code Generation using Large Language Models with
User Feedback
|
cs.SE cs.CL cs.LG
|
This paper introduces GLLM, an innovative tool that leverages Large Language
Models (LLMs) to automatically generate G-code from natural language
instructions for Computer Numerical Control (CNC) machining. GLLM addresses the
challenges of manual G-code writing by bridging the gap between human-readable
task descriptions and machine-executable code. The system incorporates a
fine-tuned StarCoder-3B model, enhanced with domain-specific training data and
a Retrieval-Augmented Generation (RAG) mechanism. GLLM employs advanced
prompting strategies and a novel self-corrective code generation approach to
ensure both syntactic and semantic correctness of the generated G-code. The
architecture includes robust validation mechanisms, including syntax checks,
G-code-specific verifications, and functional correctness evaluations using
Hausdorff distance. By combining these techniques, GLLM aims to democratize CNC
programming, making it more accessible to users without extensive programming
experience while maintaining high accuracy and reliability in G-code
generation.
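To illustrate the Hausdorff-distance check mentioned above, the following generic comparison of two toolpaths sampled as point sets may be helpful (SciPy provides the directed Hausdorff distance; the extraction of toolpath points from G-code is assumed):

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def hausdorff(path_a, path_b):
    """Symmetric Hausdorff distance between two (n, 2) toolpath point sets."""
    return max(directed_hausdorff(path_a, path_b)[0],
               directed_hausdorff(path_b, path_a)[0])

# Hypothetical sampled toolpaths: reference vs. LLM-generated G-code output.
t = np.linspace(0, 2 * np.pi, 200)
reference = np.c_[np.cos(t), np.sin(t)]
generated = np.c_[1.02 * np.cos(t), 0.99 * np.sin(t)]
err = hausdorff(reference, generated)
# Accept the generated program only if err falls below a machining tolerance.
```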
|
2501.17586
|
Boosting Weak Positives for Text Based Person Search
|
cs.CV cs.LG
|
Large vision-language models have revolutionized cross-modal object
retrieval, but text-based person search (TBPS) remains a challenging task due
to limited data and fine-grained nature of the task. Existing methods primarily
focus on aligning image-text pairs into a common representation space, often
disregarding the fact that real world positive image-text pairs share a varied
degree of similarity in between them. This leads models to prioritize easy
pairs, and in some recent approaches, challenging samples are discarded as
noise during training. In this work, we introduce a boosting technique that
dynamically identifies and emphasizes these challenging samples during
training. Our approach is motivated by the classical boosting technique and
dynamically updates the weights of weak positives, i.e., pairs whose rank-1
match does not share the identity of the query. The increased weight allows
these misranked pairs to contribute more towards the loss, so the network has to pay
more attention towards such samples. Our method achieves improved performance
across four pedestrian datasets, demonstrating the effectiveness of our
proposed module.
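A schematic of the weak-positive reweighting described above (the misrank test and exponential update are our illustrative reading of the boosting-style scheme):

```python
import torch

def boost_weak_positives(sim, ids_q, ids_g, weights, eta=0.1):
    """sim: (B_q, B_g) image-text similarity matrix; ids_*: identity labels.
    Upweight queries whose rank-1 gallery match has the wrong identity."""
    rank1 = sim.argmax(dim=1)
    misranked = ids_g[rank1] != ids_q                        # weak positives
    weights = weights * torch.exp(eta * misranked.float())   # boosting-style bump
    return weights / weights.sum()

# The returned per-query weights would rescale the alignment loss each epoch.
```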
|
2501.17589
|
Extracting Inter-Protein Interactions Via Multitasking Graph Structure
Learning
|
q-bio.QM cs.ET cs.LG
|
Identifying protein-protein interactions (PPI) is crucial for gaining
in-depth insights into numerous biological processes within cells and holds
significant guiding value in areas such as drug development and disease
treatment. Currently, most PPI prediction methods focus primarily on the study
of protein sequences, neglecting the critical role of the internal structure of
proteins. This paper proposes a novel PPI prediction method named MgslaPPI,
which utilizes graph attention to mine protein structural information and
enhances the expressive power of the protein encoder through a multitask learning
strategy. Specifically, we decompose the end-to-end PPI prediction process into
two stages: amino acid residue reconstruction (A2RR) and protein interaction
prediction (PIP). In the A2RR stage, we employ a graph attention-based residue
reconstruction method to explore the internal relationships and features of
proteins. In the PIP stage, in addition to the basic interaction prediction
task, we introduce two auxiliary tasks, i.e., protein feature reconstruction
(PFR) and masked interaction prediction (MIP). The PFR task aims to reconstruct
the representation of proteins in the PIP stage, while the MIP task uses
partially masked protein features for PPI prediction, with both working in
concert to prompt MgslaPPI to capture more useful information. Experimental
results demonstrate that MgslaPPI significantly outperforms existing
state-of-the-art methods under various data partitioning schemes.
|
2501.17594
|
Watch Your STEPP: Semantic Traversability Estimation using Pose
Projected Features
|
cs.RO cs.CV
|
Understanding the traversability of terrain is essential for autonomous robot
navigation, particularly in unstructured environments such as natural
landscapes. Although traditional methods, such as occupancy mapping, provide a
basic framework, they often fail to account for the complex mobility
capabilities of some platforms such as legged robots. In this work, we propose
a method for estimating terrain traversability by learning from demonstrations
of human walking. Our approach leverages dense, pixel-wise feature embeddings
generated using the DINOv2 vision Transformer model, which are processed
through an encoder-decoder MLP architecture to analyze terrain segments. The
averaged feature vectors, extracted from the masked regions of interest, are
used to train the model in a reconstruction-based framework. By minimizing
reconstruction loss, the network distinguishes between familiar terrain with a
low reconstruction error and unfamiliar or hazardous terrain with a higher
reconstruction error. This approach facilitates the detection of anomalies,
allowing a legged robot to navigate more effectively through challenging
terrain. We run real-world experiments on the ANYmal legged robot, both indoors
and outdoors, to validate our proposed method. The code is open-source, and video
demonstrations can be found on our website: https://rpl-cs-ucl.github.io/STEPP
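To make the reconstruction-based scoring concrete, a minimal sketch of the idea (the layer sizes, feature dimension, and scoring rule are assumptions rather than the paper's exact architecture) could be:

```python
import torch
import torch.nn as nn

class TerrainAutoencoder(nn.Module):
    # Encoder-decoder MLP over per-segment DINOv2 feature vectors (sizes are illustrative).
    def __init__(self, feat_dim=768, hidden=256, bottleneck=64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(feat_dim, hidden), nn.ReLU(),
                                     nn.Linear(hidden, bottleneck))
        self.decoder = nn.Sequential(nn.Linear(bottleneck, hidden), nn.ReLU(),
                                     nn.Linear(hidden, feat_dim))

    def forward(self, x):
        return self.decoder(self.encoder(x))

def traversability_score(model, segment_feats):
    # High reconstruction error -> terrain unlike the walked-on training data,
    # i.e., unfamiliar or potentially hazardous.
    with torch.no_grad():
        recon = model(segment_feats)
        return ((recon - segment_feats) ** 2).mean(dim=-1)
```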
|
2501.17595
|
Technical report on label-informed logit redistribution for better
domain generalization in low-shot classification with foundation models
|
cs.CV
|
Confidence calibration is an emerging challenge in real-world decision
systems built on foundation models and used for downstream vision
classification tasks. For various reasons, logit scores on the CLIP head remain
large irrespective of whether the image-text pairs agree, and this is difficult
to address in data space given the few-shot regime. We propose a penalty,
incorporated into the loss objective, that penalizes incorrect classifications
whenever they are made during finetuning, by moving an amount of log-likelihood
to the true class commensurate with the relative amplitudes of the two
likelihoods. We refer to it as \textit{confidence misalignment penalty (CMP)}.
Extensive experiments on $12$ vision datasets and $5$ domain generalization
datasets support the calibration performance of our method against the
state-of-the-art. CMP outperforms the benchmarked prompt learning methods,
improving the Expected Calibration Error (ECE) by $6.01$\% on average, $4.01$\%
at minimum and $9.72$\% at maximum.
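One plausible way to instantiate such a penalty (an illustrative guess at the formulation, not the exact CMP definition from the paper) is to scale a misclassified sample's penalty by the ratio of the predicted class's likelihood to the true class's likelihood:

```python
import torch
import torch.nn.functional as F

def confidence_misalignment_penalty(logits, targets):
    # Penalize misclassified samples in proportion to how much likelihood the wrongly
    # predicted class holds relative to the true class (illustrative formulation only).
    probs = F.softmax(logits, dim=-1)
    pred = probs.argmax(dim=-1)
    idx = torch.arange(len(targets))
    p_pred, p_true = probs[idx, pred], probs[idx, targets]
    wrong = pred != targets
    penalty = torch.zeros_like(p_true)
    penalty[wrong] = torch.log(p_pred[wrong] / (p_true[wrong] + 1e-8))
    return penalty.mean()

# total finetuning loss: F.cross_entropy(logits, targets) + lam * confidence_misalignment_penalty(logits, targets)
```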
|
2501.17597
|
Economic Nonlinear Model Predictive Control of Prosumer District Heating
Networks: The Extended Version
|
eess.SY cs.SY
|
In this paper, we propose an economic nonlinear model predictive control
(MPC) algorithm for district heating networks (DHNs). The proposed method
features prosumers, multiple producers, and storage systems, which are
essential components of 4th generation DHNs. These networks are characterized
by their ability to optimize their operations, aiming to reduce supply
temperatures, accommodate distributed heat sources, and leverage the
flexibility provided by thermal inertia and storage, all crucial for achieving
a fossil-fuel-free energy supply. Developing a smart energy management system
to accomplish these goals requires detailed models of highly complex nonlinear
systems and computational algorithms able to handle large-scale optimization
problems. To address this, we introduce a graph-based optimization-oriented
model that efficiently integrates distributed producers, prosumers, storage
buffers, and bidirectional pipe flows, such that it can be implemented in a
real-time MPC setting. Furthermore, we conduct several numerical experiments to
evaluate the performance of the proposed algorithms in closed loop. Our
findings demonstrate that the MPC methods achieved up to a 9% cost improvement
over traditional rule-based controllers while better satisfying system
constraints.
|
2501.17598
|
Semantic Consistency Regularization with Large Language Models for
Semi-supervised Sentiment Analysis
|
cs.CL cs.LG
|
Accurate sentiment analysis of texts is crucial for a variety of
applications, such as understanding customer feedback, monitoring market
trends, and detecting public sentiment. However, manually annotating large
sentiment corpora for supervised learning is labor-intensive and
time-consuming. Therefore, it is essential to develop effective semi-supervised
methods for the sentiment analysis task. Although some methods have been
proposed for semi-supervised text classification, they rely on the intrinsic
information within the unlabeled data and the learning capability of the NLP
model, which limits generalization to the sentiment analysis scenario and makes
them prone to overfitting. Inspired by the ability of pretrained Large
Language Models (LLMs) in following instructions and generating coherent text,
we propose a Semantic Consistency Regularization with Large Language Models
(SCR) framework for semi-supervised sentiment analysis. We introduce two
prompting strategies to semantically enhance unlabeled text using LLMs. The
first is Entity-based Enhancement (SCR-EE), which involves extracting entities
and numerical information, and querying the LLM to reconstruct the textual
information. The second is Concept-based Enhancement (SCR-CE), which directly
queries the LLM with the original sentence for semantic reconstruction.
Subsequently, the LLM-augmented data is utilized for a consistency loss with
confidence thresholding, which preserves high-quality agreement samples to
provide additional supervision signals during training. Furthermore, to fully
utilize the uncertain unlabeled data samples, we propose a class re-assembling
strategy inspired by the class space shrinking theorem. Experiments show our
method achieves remarkable performance over prior semi-supervised methods.
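As a rough sketch of the consistency loss with confidence thresholding (the threshold value and the use of the original text as the pseudo-label source are assumptions):

```python
import torch
import torch.nn.functional as F

def consistency_loss(logits_orig, logits_aug, tau=0.95):
    # logits_orig: model outputs on the original unlabeled text
    # logits_aug:  model outputs on the LLM-augmented version of the same text
    # tau:         confidence threshold (an assumed value)
    probs = F.softmax(logits_orig, dim=-1)
    conf, pseudo = probs.max(dim=-1)       # pseudo-labels from the original text
    mask = conf >= tau                     # keep only high-confidence agreement samples
    if mask.sum() == 0:
        return logits_orig.new_zeros(())
    return F.cross_entropy(logits_aug[mask], pseudo[mask])
```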
|
2501.17599
|
RegionGCN: Spatial-Heterogeneity-Aware Graph Convolutional Networks
|
cs.LG
|
Modeling spatial heterogeneity in the data generation process is essential
for understanding and predicting geographical phenomena. Despite their
prevalence in geospatial tasks, neural network models usually assume spatial
stationarity, which could limit their performance in the presence of spatial
process heterogeneity. By allowing model parameters to vary over space, several
approaches have been proposed to incorporate spatial heterogeneity into neural
networks. However, current geographically weighting approaches are ineffective
on graph neural networks, yielding no significant improvement in prediction
accuracy. We posit that the crux lies in the overfitting risk brought by a large
number of local parameters. Accordingly, we propose to model spatial process
heterogeneity at the regional level rather than at the individual level, which
largely reduces the number of spatially varying parameters. We further develop
a heuristic optimization procedure to learn the region partition adaptively in
the process of model training. Our proposed spatial-heterogeneity-aware graph
convolutional network, named RegionGCN, is applied to the spatial prediction of
county-level vote share in the 2016 US presidential election based on
socioeconomic attributes. Results show that RegionGCN achieves significant
improvement over the basic and geographically weighted GCNs. We also offer an
exploratory analysis tool for the spatial variation of non-linear relationships
through ensemble learning of regional partitions from RegionGCN. Our work
contributes to the practice of Geospatial Artificial Intelligence (GeoAI) in
tackling spatial heterogeneity.
|
2501.17604
|
nabqr: Python package for improving probabilistic forecasts
|
cs.LG stat.AP stat.CO
|
We introduce the open-source Python package NABQR: Neural Adaptive Basis for
(time-adaptive) Quantile Regression that provides reliable probabilistic
forecasts. NABQR corrects ensembles (scenarios) with LSTM networks and then
applies time-adaptive quantile regression to the corrected ensembles to obtain
improved and more reliable forecasts. With the suggested package, accuracy
improvements of up to 40% in mean absolute terms can be achieved in day-ahead
forecasting of onshore and offshore wind power production in Denmark.
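The underlying objective of quantile regression is the pinball loss; as a generic illustration (this is not the NABQR API, just the standard loss it builds on, with made-up numbers):

```python
import numpy as np

def pinball_loss(y_true, y_pred, q):
    # Standard pinball (quantile) loss for quantile level q in (0, 1).
    diff = y_true - y_pred
    return np.mean(np.maximum(q * diff, (q - 1.0) * diff))

# Example: score a 0.9-quantile day-ahead wind power forecast.
y_true = np.array([100.0, 80.0, 120.0])
y_pred = np.array([110.0, 75.0, 115.0])
print(pinball_loss(y_true, y_pred, q=0.9))
```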
|
2501.17612
|
VoicePrompter: Robust Zero-Shot Voice Conversion with Voice Prompt and
Conditional Flow Matching
|
cs.SD cs.AI eess.AS eess.SP
|
Despite remarkable advancements in recent voice conversion (VC) systems,
enhancing speaker similarity in zero-shot scenarios remains challenging. This
challenge arises from the difficulty of generalizing and adapting speaker
characteristics in speech within zero-shot environments, which is further
complicated by the mismatch between the training and inference processes. To
address these challenges, we propose VoicePrompter, a robust zero-shot VC model
that leverages in-context learning with voice prompts. VoicePrompter is
composed of (1) a factorization method that disentangles speech components and
(2) a DiT-based conditional flow matching (CFM) decoder that conditions on
these factorized features and voice prompts. Additionally, (3) latent mixup is
used to enhance in-context learning by combining various speaker features. This
approach improves speaker similarity and naturalness in zero-shot VC by
applying mixup to latent representations. Experimental results demonstrate that
VoicePrompter outperforms existing zero-shot VC systems in terms of speaker
similarity, speech intelligibility, and audio quality. Our demo is available at
\url{https://hayeong0.github.io/VoicePrompter-demo/}.
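Latent mixup itself is simple to sketch (the Beta parameter and the point of application in the pipeline are assumptions):

```python
import torch

def latent_mixup(z_a, z_b, alpha=0.4):
    # Blend the latent speaker representations of two utterances with a Beta-sampled
    # ratio, exposing the decoder to interpolated speaker characteristics in training.
    lam = torch.distributions.Beta(alpha, alpha).sample()
    return lam * z_a + (1.0 - lam) * z_b
```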
|
2501.17614
|
Coalitional control: a bottom-up approach
|
eess.SY cs.GT cs.SY math.OC
|
The recent major developments in information technologies have opened
interesting possibilities for the effective management of multi-agent systems.
In many cases, the important role of central control nodes can now be
undertaken by several controllers in a distributed topology that suits better
the structure of the system. This also opens the possibility of promoting
cooperation between control agents in competitive environments, establishing
links between controllers in order to adapt the exchange of critical
information to the degree of interaction between subsystems. In this paper, a
bottom-up approach to coalitional control is presented, where the structure of
each agent's model predictive controller is adapted to the time-variant
coupling conditions, promoting the formation of coalitions - clusters of
control agents in which communication is essential to ensure cooperation -
whenever it can benefit the overall system performance.
|
2501.17615
|
Cross-lingual Embedding Clustering for Hierarchical Softmax in
Low-Resource Multilingual Speech Recognition
|
cs.CL cs.SD eess.AS
|
We present a novel approach centered on the decoding stage of Automatic
Speech Recognition (ASR) that enhances multilingual performance, especially for
low-resource languages. It utilizes a cross-lingual embedding clustering method
to construct a hierarchical Softmax (H-Softmax) decoder, which enables similar
tokens across different languages to share similar decoder representations. It
addresses the limitations of the previous Huffman-based H-Softmax method, which
relied on shallow features in token similarity assessments. Through experiments
on a downsampled dataset of 15 languages, we demonstrate the effectiveness of
our approach in improving low-resource multilingual ASR accuracy.
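A minimal sketch of the cluster-then-decode idea (a flat two-level grouping built with k-means is an assumption; the paper's tree construction may differ):

```python
import numpy as np
from sklearn.cluster import KMeans

def build_two_level_hsoftmax(token_embeddings, n_clusters=64):
    # Cluster cross-lingual token embeddings so that similar tokens from different
    # languages fall into the same Softmax group.
    km = KMeans(n_clusters=n_clusters, n_init=10).fit(token_embeddings)
    groups = {c: np.where(km.labels_ == c)[0] for c in range(n_clusters)}
    # Decoding first predicts the group, then the token within that group, so similar
    # tokens share the group-level decoder representation.
    return km.labels_, groups
```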
|
2501.17617
|
Structured Context Recomposition for Large Language Models Using
Probabilistic Layer Realignment
|
cs.CL
|
Extended sequence generation often leads to degradation in contextual
consistency due to the inability of conventional self-attention mechanisms to
effectively retain long-range dependencies. Existing approaches, including
memory compression and retrieval-augmented conditioning, introduce
computational trade-offs that either increase inference latency or impose
additional storage overhead. Structured Context Recomposition (SCR) introduces
a probabilistic layer realignment strategy that dynamically adjusts learned
representations within transformer layers, ensuring that semantically relevant
embeddings persist throughout extended transformations. The proposed method
enhances coherence retention through a recursive weighting function that
redistributes representational emphasis based on inferred contextual relevance
rather than relying on fixed token-level attention scores. Empirical results
indicate that probabilistic realignment mitigates abrupt topic shifts and
logical inconsistencies, particularly in scenarios where sequences exceed
standard attention window constraints. Sequence-level entropy analysis further
reveals that SCR moderates representational variability without introducing
excessive output regularization, allowing models to sustain generative
diversity while preserving contextual alignment. Attention head deviation
measurements confirm that hierarchical reweighting contributes to smoother
token dependency transitions across transformer layers, reinforcing the
stability of multi-turn interactions and document-level reasoning.
Computational resource assessments show that while SCR incurs a moderate
increase in processing time, memory overhead remains within feasible limits,
making it suitable for practical deployment in autoregressive generative
applications.
|
2501.17621
|
Physics-Informed Neural Networks in Power System Dynamics: Improving
Simulation Accuracy
|
eess.SY cs.SY
|
The importance and cost of time-domain simulations when studying power
systems have exponentially increased in the last decades. With the growing
share of renewable energy sources, the slow and predictable responses from
large turbines are replaced by the fast and unpredictable dynamics from power
electronics. Existing simulation tools require new solutions designed for these
faster dynamics. Physics-Informed Neural Networks (PINNs) have
recently emerged in power systems to accelerate such simulations. By
incorporating knowledge during the up-front training, PINNs provide more
accurate results over larger time steps than traditional numerical methods.
This paper introduces PINNs as an alternative approximation method that
seamlessly integrates with the current simulation framework. We replace a
synchronous machine with a trained PINN in the IEEE 9-, 14-, and 30-bus systems
and simulate several network disturbances. Including PINNs systematically
boosts the simulations' accuracy, providing more accurate results for both the
PINN-modeled component and the whole multi-machine system states.
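As a generic illustration of the physics-informed training idea for machine dynamics (a single-machine swing equation with made-up parameter values; the paper's actual machine model and loss are more detailed):

```python
import torch
import torch.nn as nn

class SwingPINN(nn.Module):
    # Small network mapping time t to rotor angle delta(t) for one machine.
    def __init__(self, hidden=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(1, hidden), nn.Tanh(),
                                 nn.Linear(hidden, hidden), nn.Tanh(),
                                 nn.Linear(hidden, 1))

    def forward(self, t):
        return self.net(t)

def physics_residual(model, t, M=0.1, D=0.05, P_m=0.8, B=1.0):
    # Residual of M * delta'' + D * delta' - (P_m - B * sin(delta)); it should be
    # close to zero at collocation points if the network respects the physics.
    t = t.requires_grad_(True)
    delta = model(t)
    d_delta = torch.autograd.grad(delta.sum(), t, create_graph=True)[0]
    d2_delta = torch.autograd.grad(d_delta.sum(), t, create_graph=True)[0]
    return M * d2_delta + D * d_delta - (P_m - B * torch.sin(delta))

# training loss = data mismatch on simulated trajectories + mean(physics_residual(...)**2)
```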
|
2501.17628
|
Dual Invariance Self-training for Reliable Semi-supervised Surgical
Phase Recognition
|
eess.IV cs.CV
|
Accurate surgical phase recognition is crucial for advancing
computer-assisted interventions, yet the scarcity of labeled data hinders
training reliable deep learning models. Semi-supervised learning (SSL),
particularly with pseudo-labeling, shows promise over fully supervised methods
but often lacks reliable pseudo-label assessment mechanisms. To address this
gap, we propose a novel SSL framework, Dual Invariance Self-Training (DIST),
that incorporates both Temporal and Transformation Invariance to enhance
surgical phase recognition. Our two-step self-training process dynamically
selects reliable pseudo-labels, ensuring robust pseudo-supervision. Our
approach mitigates the risk of noisy pseudo-labels, steering decision
boundaries toward true data distribution and improving generalization to unseen
data. Evaluations on Cataract and Cholec80 datasets show our method outperforms
state-of-the-art SSL approaches, consistently surpassing both supervised and
SSL baselines across various network architectures.
|
2501.17629
|
The Imitation Game According To Turing
|
cs.HC cs.AI cs.CY
|
The current cycle of hype and anxiety concerning the benefits and risks to
human society of Artificial Intelligence is fuelled, not only by the increasing
use of generative AI and other AI tools by the general public, but also by
claims made on behalf of such technology by popularizers and scientists. In
particular, recent studies have claimed that Large Language Models (LLMs) can
pass the Turing Test-a goal for AI since the 1950s-and therefore can "think".
Large-scale impacts on society have been predicted as a result. Upon detailed
examination, however, none of these studies has faithfully applied Turing's
original instructions. Consequently, we conducted a rigorous Turing Test with
GPT-4-Turbo that adhered closely to Turing's instructions for a three-player
imitation game. We followed established scientific standards where Turing's
instructions were ambiguous or missing. For example, we performed a
Computer-Imitates-Human Game (CIHG) without constraining the time duration and
conducted a Man-Imitates-Woman Game (MIWG) as a benchmark. All but one
participant correctly identified the LLM, showing that one of today's most
advanced LLMs is unable to pass a rigorous Turing Test. We conclude that recent
extravagant claims for such models are unsupported, and do not warrant either
optimism or concern about the social impact of thinking machines.
|
2501.17630
|
Uncertainty Quantification and Decomposition for LLM-based
Recommendation
|
cs.IR cs.CL
|
Despite the widespread adoption of large language models (LLMs) for
recommendation, we demonstrate that LLMs often exhibit uncertainty in their
recommendations. To ensure the trustworthy use of LLMs in generating
recommendations, we emphasize the importance of assessing the reliability of
recommendations generated by LLMs. We start by introducing a novel framework
for estimating the predictive uncertainty to quantitatively measure the
reliability of LLM-based recommendations. We further propose to decompose the
predictive uncertainty into recommendation uncertainty and prompt uncertainty,
enabling in-depth analyses of the primary source of uncertainty. Through
extensive experiments, we (1) demonstrate predictive uncertainty effectively
indicates the reliability of LLM-based recommendations, (2) investigate the
origins of uncertainty with decomposed uncertainty measures, and (3) propose
uncertainty-aware prompting for a lower predictive uncertainty and enhanced
recommendation. Our source code and model weights are available at
https://github.com/WonbinKweon/UNC_LLM_REC_WWW2025
|
2501.17634
|
Federated Learning With Individualized Privacy Through Client Sampling
|
cs.LG cs.CR cs.CV
|
With growing concerns about user data collection, individualized privacy has
emerged as a promising solution to balance protection and utility by accounting
for diverse user privacy preferences. Instead of enforcing a uniform level of
anonymization for all users, this approach allows individuals to choose privacy
settings that align with their comfort levels. Building on this idea, we
propose an adapted method for enabling Individualized Differential Privacy
(IDP) in Federated Learning (FL) by handling clients according to their
personal privacy preferences. By extending the SAMPLE algorithm from
centralized settings to FL, we calculate client-specific sampling rates based
on their heterogeneous privacy budgets and integrate them into a modified
IDP-FedAvg algorithm. We test this method under realistic privacy distributions
and on multiple datasets. The experimental results demonstrate that our approach
achieves clear improvements over uniform DP baselines, reducing the trade-off
between privacy and utility. Compared to the alternative SCALE method in
related work, which assigns differing noise scales to clients, our method
performs notably better. However, challenges remain for complex tasks with
non-i.i.d. data, primarily stemming from the constraints of the decentralized
setting.
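The client-handling idea can be sketched as follows (the proportional scaling rule and values below are illustrative stand-ins, not the SAMPLE algorithm's exact formula):

```python
import numpy as np

def client_sampling_rates(epsilons, base_rate=0.1):
    # Derive per-client participation rates from individual privacy budgets: clients
    # with tighter budgets (smaller epsilon) are sampled less often, so each client's
    # cumulative privacy loss stays within its own budget.
    eps = np.asarray(epsilons, dtype=float)
    rates = base_rate * eps / eps.max()
    return np.clip(rates, 0.0, 1.0)

# Example: three privacy groups (strict, medium, relaxed budgets).
print(client_sampling_rates([1.0, 2.0, 8.0]))
```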
|