| id | title | categories | abstract |
|---|---|---|---|
2502.00064
|
Evaluating Large Language Models in Vulnerability Detection Under
Variable Context Windows
|
cs.CR cs.LG
|
This study examines the impact of tokenized Java code length on the accuracy
and explicitness of ten major LLMs in vulnerability detection. Using chi-square
tests and known ground truth, we found inconsistencies across models: some,
like GPT-4, Mistral, and Mixtral, showed robustness, while others exhibited a
significant link between tokenized length and performance. We recommend future
LLM development focus on minimizing the influence of input length for better
vulnerability detection. Additionally, preprocessing techniques that reduce
token count while preserving code structure could enhance LLM accuracy and
explicitness in these tasks.
|
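The chi-square test of independence this abstract applies to token-length versus detection-outcome data can be sketched in plain Python. The contingency counts below are invented for illustration, not taken from the paper.

```python
def chi_square(table):
    """Pearson chi-square statistic for a contingency table (list of rows)."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            expected = row_totals[i] * col_totals[j] / grand
            stat += (obs - expected) ** 2 / expected
    return stat

# Rows: short / medium / long token-length buckets; columns: correct / incorrect
# detections. These counts are hypothetical.
counts = [[90, 10], [80, 20], [60, 40]]
stat = chi_square(counts)
print(round(stat, 2))  # → 26.09
```

Compared against a chi-square critical value with (rows − 1)(cols − 1) = 2 degrees of freedom, a statistic this large would reject the hypothesis that input length and detection accuracy are independent.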
2502.00065
|
Blood Glucose Level Prediction in Type 1 Diabetes Using Machine Learning
|
q-bio.QM cs.LG
|
Type 1 Diabetes is a chronic autoimmune condition in which the immune system
attacks and destroys insulin-producing beta cells in the pancreas, resulting in
little to no insulin production. Insulin helps glucose in your blood enter your
muscle, fat, and liver cells so they can use it for energy or store it for
later use. If insulin is insufficient, it causes sugar to build up in the blood
and leads to serious health problems. People with Type 1 Diabetes need
synthetic insulin every day. In diabetes management, continuous glucose
monitoring is an important feature that provides near real-time blood glucose
data. It is useful in deciding the synthetic insulin dose. In this research
work, we used machine learning tools, deep neural networks, deep reinforcement
learning, and voting and stacking regressors to predict blood glucose levels at
30-min time intervals using the latest DiaTrend dataset. Accurate blood glucose
prediction is useful for building better diabetes management systems. The trained
models were compared using several evaluation metrics. Our evaluation results
demonstrate the performance of various models across different glycemic
conditions for blood glucose prediction. The source code for this work can be
found at: https://github.com/soon-jynn-chu/t1d_bg_prediction
|
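The voting-regressor ensemble this abstract mentions simply averages the base models' forecasts. The stand-in lambda "models" below are invented placeholders, not the paper's trained networks.

```python
def voting_predict(models, x):
    """Average the predictions of the base regressors (voting for regression)."""
    preds = [m(x) for m in models]
    return sum(preds) / len(preds)

# Stand-in base models; the paper uses trained neural and tree-based regressors.
models = [
    lambda bg: bg + 5.0,    # e.g. a linear trend extrapolator
    lambda bg: bg * 1.05,   # e.g. a gradient-boosted tree
    lambda bg: bg - 2.0,    # e.g. a recurrent network
]
print(voting_predict(models, 120.0))  # → 123.0 (mg/dL, 30 min ahead)
```

A stacking regressor differs only in replacing the plain average with a trained meta-model over the base predictions.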
2502.00068
|
Privacy Preserving Charge Location Prediction for Electric Vehicles
|
cs.CR cs.AI
|
By 2050, electric vehicles (EVs) are projected to account for 70% of global
vehicle sales. While EVs provide environmental benefits, they also pose
challenges for energy generation, grid infrastructure, and data privacy.
Current research on EV routing and charge management often overlooks privacy
when predicting energy demands, leaving sensitive mobility data vulnerable. To
address this, we developed a Federated Learning Transformer Network (FLTN) to
predict EVs' next charge location with enhanced privacy measures. Each EV
operates as a client, training an onboard FLTN model that shares only model
weights, not raw data, with a community-based Distributed Energy Resource
Management System (DERMS), which aggregates them into a community global model.
To further enhance privacy, non-transitory EVs use peer-to-peer weight sharing
and augmentation within their community, obfuscating individual contributions
and improving model accuracy. Community DERMS global model weights are then
redistributed to EVs for continuous training. Our FLTN approach achieved up to
92% accuracy while preserving data privacy, compared to our baseline
centralised model, which achieved 98% accuracy with no data privacy.
Simulations conducted across diverse charge levels confirm the FLTN's ability
to forecast energy demands over extended periods. We present a privacy-focused
solution for EV charge location prediction, effectively mitigating data leakage
risks.
|
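The aggregation step described above is federated averaging: the DERMS combines client weight vectors without ever seeing raw trip data. The client weights and dataset sizes below are invented toy values.

```python
def fed_avg(client_weights, client_sizes):
    """FedAvg: average client weight vectors, weighted by local dataset size."""
    total = sum(client_sizes)
    global_w = [0.0] * len(client_weights[0])
    for w, n in zip(client_weights, client_sizes):
        for i, v in enumerate(w):
            global_w[i] += v * n / total
    return global_w

# Three EV clients sharing 2-parameter "models"; values are illustrative only.
clients = [[0.2, 0.4], [0.6, 0.8], [0.4, 0.6]]
sizes = [100, 300, 100]   # local training-set sizes
global_model = fed_avg(clients, sizes)
print([round(w, 2) for w in global_model])  # → [0.48, 0.68]
```

The paper's peer-to-peer weight sharing adds a further obfuscation layer before this aggregation, so no single client's contribution is identifiable.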
2502.00070
|
Can AI Solve the Peer Review Crisis? A Large Scale Experiment on LLM's
Performance and Biases in Evaluating Economics Papers
|
cs.CY cs.AI econ.GN q-fin.EC
|
We investigate whether artificial intelligence can address the peer review
crisis in economics by analyzing 27,090 evaluations of 9,030 unique submissions
using a large language model (LLM). The experiment systematically varies author
characteristics (e.g., affiliation, reputation, gender) and publication quality
(e.g., top-tier, mid-tier, low-tier, AI generated papers). The results indicate
that LLMs effectively distinguish paper quality but exhibit biases favoring
prominent institutions, male authors, and renowned economists. Additionally,
LLMs struggle to differentiate high-quality AI-generated papers from genuine
top-tier submissions. While LLMs offer efficiency gains, their susceptibility
to bias necessitates cautious integration and hybrid peer review models to
balance equity and accuracy.
|
2502.00072
|
LLM Cyber Evaluations Don't Capture Real-World Risk
|
cs.CR cs.AI cs.CL cs.LG
|
Large language models (LLMs) are demonstrating increasing prowess in
cybersecurity applications, creating inherent risks alongside their
potential for strengthening defenses. In this position paper, we argue that
current efforts to evaluate risks posed by these capabilities are misaligned
with the goal of understanding real-world impact. Evaluating LLM cybersecurity
risk requires more than just measuring model capabilities -- it demands a
comprehensive risk assessment that incorporates analysis of threat actor
adoption behavior and potential for impact. We propose a risk assessment
framework for LLM cyber capabilities and apply it to a case study of language
models used as cybersecurity assistants. Our evaluation of frontier models
reveals high compliance rates but moderate accuracy on realistic cyber
assistance tasks. However, our framework suggests that this particular use case
presents only moderate risk due to limited operational advantages and impact
potential. Based on these findings, we recommend several improvements to align
research priorities with real-world impact assessment, including closer
academia-industry collaboration, more realistic modeling of attacker behavior,
and inclusion of economic metrics in evaluations. This work represents an
important step toward more effective assessment and mitigation of LLM-enabled
cybersecurity risks.
|
2502.00074
|
SpikingRTNH: Spiking Neural Network for 4D Radar Object Detection
|
cs.CV cs.AI cs.NE
|
Recently, 4D Radar has emerged as a crucial sensor for 3D object detection in
autonomous vehicles, offering both stable perception in adverse weather and
high-density point clouds for object shape recognition. However, processing
such high-density data demands substantial computational resources and energy
consumption. We propose SpikingRTNH, the first spiking neural network (SNN) for
3D object detection using 4D Radar data. By replacing conventional ReLU
activation functions with leaky integrate-and-fire (LIF) spiking neurons,
SpikingRTNH achieves significant energy efficiency gains. Furthermore, inspired
by human cognitive processes, we introduce biological top-down inference (BTI),
which processes point clouds sequentially from higher to lower densities. This
approach effectively utilizes points with lower noise and higher importance for
detection. Experiments on the K-Radar dataset demonstrate that SpikingRTNH with BTI
significantly reduces energy consumption by 78% while achieving comparable
detection performance to its ANN counterpart (51.1% AP 3D, 57.0% AP BEV). These
results establish the viability of SNNs for energy-efficient 4D Radar-based
object detection in autonomous driving systems. All codes are available at
https://github.com/kaist-avelab/k-radar.
|
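The ReLU-to-spiking substitution described above replaces a continuous activation with a leaky integrate-and-fire neuron: the membrane potential leaks each step and emits a binary spike when it crosses a threshold. The leak and threshold constants here are illustrative, not the paper's.

```python
def lif(inputs, leak=0.9, threshold=1.0):
    """Leaky integrate-and-fire: leak the membrane, fire and reset at threshold."""
    v, spikes = 0.0, []
    for x in inputs:
        v = leak * v + x        # integrate input with membrane leak
        if v >= threshold:
            spikes.append(1)    # emit a binary spike
            v = 0.0             # hard reset
        else:
            spikes.append(0)
    return spikes

print(lif([0.6, 0.6, 0.1, 0.9, 0.3]))  # → [0, 1, 0, 0, 1]
```

The energy saving comes from this sparsity: downstream work is only done on time steps that actually spike.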
2502.00075
|
BTS: Harmonizing Specialized Experts into a Generalist LLM
|
cs.CL cs.LG
|
We present Branch-Train-Stitch (BTS), an efficient and flexible training
algorithm for combining independently trained large language model (LLM)
experts into a single, capable generalist model. Following Li et al., we start
with a single seed language model which is branched into domain-specific (e.g.,
coding or math) experts with continual pretraining. BTS combines experts into a
generalist model using lightweight stitch layers, which are inserted between
frozen experts and the seed LLM, and trained on a small datamix of the expert
domains. Stitch layers enable the seed LLM to integrate representations from
any number of experts during the forward pass, allowing it to generalize to new
domains, despite remaining frozen. Because BTS does not alter the constituent
LLMs, BTS provides a modular and flexible approach: experts can be easily
removed and new experts can be added with only a small amount of training.
Compared to alternative model merging approaches, BTS yields the best
generalist performance on a variety of downstream tasks, retaining the
specialized capabilities of each of the experts.
|
2502.00076
|
Influence of color correction on pathology detection in Capsule
Endoscopy
|
cs.CV cs.AI cs.LG
|
Pathology detection in Wireless Capsule Endoscopy (WCE) using deep learning
has been explored in the recent past. However, deep learning models can be
influenced by the color quality of the dataset used to train them, impacting
detection, segmentation and classification tasks. In this work, we evaluate the
impact of color correction on pathology detection using two prominent object
detection models: Retinanet and YOLOv5. We first generate two color corrected
versions of a popular WCE dataset (i.e., SEE-AI dataset) using two different
color correction functions. We then evaluate the performance of the Retinanet
and YOLOv5 on the original and color corrected versions of the dataset. The
results reveal that color correction makes the models generate larger bounding
boxes and larger intersection areas with the ground truth annotations.
Furthermore, color correction leads to an increased number of false positives
for certain pathologies. However, these effects do not translate into a
consistent improvement in performance metrics such as F1-scores, IoU, and AP50.
The code is available at https://github.com/agossouema2011/WCE2024. Keywords:
Wireless Capsule Endoscopy, Color correction, Retinanet, YOLOv5, Detection
|
2502.00077
|
Robot localization aided by quantum algorithms
|
cs.RO
|
Localization is a critical aspect of mobile robotics, enabling robots to
navigate their environment efficiently and avoid obstacles. Current
probabilistic localization methods, such as the Adaptive-Monte Carlo
localization (AMCL) algorithm, are computationally intensive and may struggle
with large maps or high-resolution sensor data. This paper explores the
application of quantum computing in robotics, focusing on the use of Grover's
search algorithm to improve the efficiency of localization in mobile robots. We
propose a novel approach to utilize Grover's algorithm in a 2D map, enabling
faster and more efficient localization. Despite the limitations of current
physical quantum computers, our experimental results demonstrate a significant
speedup over classical methods, highlighting the potential of quantum computing
to improve robotic localization. This work bridges the gap between quantum
computing and robotics, providing a practical solution for robotic localization
and paving the way for future research in quantum robotics.
|
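Grover's amplitude amplification, the primitive the paper maps onto localization over a 2D map, can be simulated classically for a small grid. The 8-cell map and marked cell index below are invented for illustration.

```python
import math

def grover(n_states, marked, iterations):
    """Classical state-vector simulation of Grover iterations."""
    amp = [1 / math.sqrt(n_states)] * n_states   # uniform superposition
    for _ in range(iterations):
        amp[marked] *= -1                        # oracle flips the marked cell
        mean = sum(amp) / n_states
        amp = [2 * mean - a for a in amp]        # diffusion: invert about mean
    return [a * a for a in amp]                  # measurement probabilities

probs = grover(8, marked=3, iterations=2)       # ~pi/4 * sqrt(8) ≈ 2 iterations
print(round(probs[3], 3))  # → 0.945
```

The quadratic speedup comes from needing only about √N oracle queries to concentrate probability on the matching map cell, versus N for a linear scan.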
2502.00078
|
Deep Ensembling with Multimodal Image Fusion for Efficient
Classification of Lung Cancer
|
eess.IV cs.CV
|
This study focuses on the classification of cancerous and healthy slices from
multimodal lung images. The data used in the research comprises Computed
Tomography (CT) and Positron Emission Tomography (PET) images. The proposed
strategy achieves the fusion of PET and CT images by utilizing Principal
Component Analysis (PCA) and an Autoencoder. Subsequently, a new ensemble-based
classifier, Deep Ensembled Multimodal Fusion (DEMF), was developed, employing
majority voting to classify the sample images under examination.
Gradient-weighted Class Activation Mapping (Grad-CAM) was employed to visualize
the classification accuracy of cancer-affected images. Given the limited sample
size, a random image augmentation strategy was employed during the training
phase. The DEMF network helps mitigate the challenges of scarce data in
computer-aided medical image analysis. The proposed network was compared with
state-of-the-art networks across three publicly available datasets. The network
outperforms the others on the metrics Accuracy, F1-Score, Precision, and
Recall. The investigation results highlight the effectiveness of the proposed
network.
|
2502.00079
|
Advanced Assessment of Stroke in Retinal Fundus Imaging with Deep
Multi-view Learning
|
eess.IV cs.CV
|
Stroke is a major global cause of mortality and morbidity, and hence accurate
and rapid diagnosis of stroke is valuable. Retinal fundus imaging
reveals the known markers of elevated stroke risk in the eyes, which are
retinal venular widening, arteriolar narrowing, and increased tortuosity. In
contrast to other imaging techniques used for stroke diagnosis, the acquisition
of fundus images is easy, non-invasive, fast, and inexpensive. Therefore, in
this study, we propose a multi-view stroke network (MVS-Net) to detect stroke
and transient ischemic attack (TIA) using retinal fundus images. In contrast to
existing studies, ours is the first to discriminate stroke and TIA with deep
multi-view learning, proposing an end-to-end deep network that takes
multi-view inputs of fundus images
captured from both right and left eyes. Accordingly, the proposed MVS-Net
defines representative features from fundus images of both eyes and determines
the relation within their macula-centered and optic nerve head-centered views.
Experiments performed on a dataset collected from stroke and TIA patients, in
addition to healthy controls, show that the proposed framework achieves an AUC
score of 0.84 for stroke and TIA detection.
|
2502.00083
|
CerraData-4MM: A multimodal benchmark dataset on Cerrado for land use
and land cover classification
|
cs.CV eess.IV
|
The Cerrado faces increasing environmental pressures, necessitating accurate
land use and land cover (LULC) mapping despite challenges such as class
imbalance and visually similar categories. To address this, we present
CerraData-4MM, a multimodal dataset combining Sentinel-1 Synthetic Aperture
Radar (SAR) and Sentinel-2 MultiSpectral Imagery (MSI) with 10m spatial
resolution. The dataset includes two hierarchical classification levels with 7
and 14 classes, respectively, focusing on the diverse Bico do Papagaio
ecoregion. We highlight CerraData-4MM's capacity to benchmark advanced semantic
segmentation techniques by evaluating a standard U-Net and a more sophisticated
Vision Transformer (ViT) model. The ViT achieves superior performance in
multimodal scenarios, with the highest macro F1-score of 57.60% and a mean
Intersection over Union (mIoU) of 49.05% at the first hierarchical level. Both
models struggle with minority classes, particularly at the second hierarchical
level, where U-Net's performance drops to an F1-score of 18.16%. Class
balancing improves representation for underrepresented classes but reduces
overall accuracy, underscoring the trade-off in weighted training.
CerraData-4MM offers a challenging benchmark for advancing deep learning models
to handle class imbalance and multimodal data fusion. Code, trained models, and
data are publicly available at https://github.com/ai4luc/CerraData-4MM.
|
2502.00085
|
Efficient Beam Search for Large Language Models Using Trie-Based
Decoding
|
cs.CL
|
In Transformer-based sequence-to-sequence generation, beam search has proven
effective in enhancing the quality of generated sequences compared to greedy
decoding. Conventional beam search methods typically adopt either a sequential
or batch-based approach. The sequential approach, while memory-efficient,
requires multiple decoding passes to construct a complete search tree, leading
to significantly slower inference. On the other hand, the batch-based approach
enables parallel computation across beams, but at the expense of high memory
consumption due to the need to maintain separate key-value (KV) caches for each
beam. In this study, we introduce a novel trie (prefix-tree)-based parallel
decoding method that addresses the memory inefficiency of batch-based beam
search. By sharing a single KV cache among all beams that share the same
prefix, the proposed method not only reduces memory consumption dramatically
but also enables parallel decoding across all branches. This innovative use of
a prefix tree offers an efficient alternative for beam search, achieving
significant memory savings while preserving inference speed, making it
particularly well-suited for memory-constrained environments or large-scale
model deployments.
|
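The memory saving described in this abstract comes from storing KV-cache entries per unique prefix node rather than per beam. In this sketch the "KV" payload is a placeholder string standing in for the real cached tensors, and the token sequences are invented.

```python
class TrieNode:
    def __init__(self):
        self.children = {}   # token id -> TrieNode
        self.kv = None       # KV cache entry, computed once per unique prefix

def insert(root, tokens):
    """Insert a beam's token sequence; return how many new KV entries it cost."""
    node, created = root, 0
    for t in tokens:
        if t not in node.children:
            child = TrieNode()
            child.kv = f"kv({t})"   # placeholder for the real cached tensors
            node.children[t] = child
            created += 1
        node = node.children[t]
    return created

root = TrieNode()
beams = [[1, 2, 3], [1, 2, 4], [1, 5, 6]]
new_entries = sum(insert(root, b) for b in beams)
print(new_entries)  # → 6 shared entries, vs 9 with one KV cache per beam
```

The more the beams agree on a prefix, the larger the saving, which is exactly the common case in beam search.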
2502.00088
|
Re-Visiting Explainable AI Evaluation Metrics to Identify The Most
Informative Features
|
cs.LG stat.ML
|
The functionality- or proxy-based approach is one of the approaches used to
evaluate the quality of explainable artificial intelligence methods. It uses
statistical methods, definitions, and newly developed metrics for the
evaluation without human intervention. Among these, Selectivity, or RemOve And
Retrain (ROAR), and Permutation Importance (PI) are the most commonly used
metrics for evaluating the quality of explainable artificial intelligence
methods that highlight the most significant features in machine learning
models. Both assume that model performance should drop sharply if the most
informative feature is removed from the model or permuted. However, the
efficiency of both metrics is significantly affected by multicollinearity, the
number of significant features in the model, and the accuracy of the model.
This paper shows with empirical examples that both metrics suffer from these
limitations. Accordingly, we propose the expected accuracy interval (EAI), a
metric that predicts the upper and lower bounds of the model's accuracy when
ROAR or PI is applied. The proposed metric was found to be very useful,
especially with collinear features.
|
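Permutation importance, one of the two metrics discussed above, can be sketched as shuffling one feature column and measuring the drop in a score. The toy model and data here are invented; a model that ignores a feature shows zero importance for it.

```python
import random

random.seed(0)

def score(model, X, y):
    """Accuracy of a callable model on rows X with labels y."""
    return sum(1 for row, t in zip(X, y) if model(row) == t) / len(y)

def permutation_importance(model, X, y, col):
    """Drop in accuracy after shuffling one feature column."""
    base = score(model, X, y)
    shuffled = [row[:] for row in X]
    column = [row[col] for row in shuffled]
    random.shuffle(column)
    for row, v in zip(shuffled, column):
        row[col] = v
    return base - score(model, shuffled, y)

model = lambda row: int(row[0] > 0.5)            # uses only feature 0
X = [[0.9, 0.1], [0.2, 0.8], [0.7, 0.3], [0.1, 0.9]]
y = [1, 0, 1, 0]
print(permutation_importance(model, X, y, col=1))  # → 0.0 (ignored feature)
print(permutation_importance(model, X, y, col=0))  # informative feature
```

With collinear features, shuffling one column while a correlated copy remains intact masks the drop, which is the failure mode the abstract's EAI metric targets.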
2502.00089
|
Ensembles of Low-Rank Expert Adapters
|
cs.CL cs.AI cs.LG
|
The training and fine-tuning of large language models (LLMs) often involve
diverse textual data from multiple sources, which poses challenges due to
conflicting gradient directions, hindering optimization and specialization.
These challenges can undermine model generalization across tasks, resulting in
reduced downstream performance. Recent research suggests that fine-tuning LLMs
on carefully selected, task-specific subsets of data can match or even surpass
the performance of using the entire dataset. Building on these insights, we
propose the Ensembles of Low-Rank Expert Adapters (ELREA) framework to improve
the model's capability to handle diverse tasks. ELREA clusters the training
instructions based on their gradient directions, representing different areas
of expertise and thereby reducing conflicts during optimization. Expert
adapters are then trained on these clusters, utilizing the low-rank adaptation
(LoRA) technique to ensure training efficiency and model scalability. During
inference, ELREA combines predictions from the most relevant expert adapters
based on the input data's gradient similarity to the training clusters,
ensuring optimal adapter selection for each task. Experiments show that our
method outperforms baseline LoRA adapters trained on the full dataset and other
ensemble approaches with similar training and inference complexity across a
range of domain-specific tasks.
|
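The routing step described above can be sketched as nearest-centroid selection by gradient cosine similarity. The two-dimensional "gradients" and cluster names are invented, and ELREA additionally combines several relevant adapters rather than picking a single one.

```python
import math

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Mean gradient direction of each expert's training cluster (toy values).
centroids = {"code": [1.0, 0.0], "math": [0.0, 1.0]}
grad = [0.9, 0.2]   # gradient computed for the incoming instruction
expert = max(centroids, key=lambda name: cosine(grad, centroids[name]))
print(expert)  # → code
```

Because the experts are LoRA adapters over a shared base model, selecting (or weighting) them at inference adds little overhead compared to a full ensemble of models.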
2502.00090
|
Disambiguating Numeral Sequences to Decipher Ancient Accounting Corpora
|
cs.CL
|
A numeration system encodes abstract numeric quantities as concrete strings
of written characters. The numeration systems used by modern scripts tend to be
precise and unambiguous, but this was not so for the ancient and
partially-deciphered proto-Elamite (PE) script, where written numerals can have
up to four distinct readings depending on the system that is used to read them.
We consider the task of disambiguating between these readings in order to
determine the values of the numeric quantities recorded in this corpus. We
algorithmically extract a list of possible readings for each PE numeral
notation, and contribute two disambiguation techniques based on structural
properties of the original documents and classifiers learned with the
bootstrapping algorithm. We also contribute a test set for evaluating
disambiguation techniques, as well as a novel approach to cautious rule
selection for bootstrapped classifiers. Our analysis confirms existing
intuitions about this script and reveals previously-unknown correlations
between tablet content and numeral magnitude. This work is crucial to
understanding and deciphering PE, as the corpus is heavily accounting-focused
and contains many more numeric tokens than tokens of text.
|
2502.00094
|
AIN: The Arabic INclusive Large Multimodal Model
|
cs.CV cs.AI cs.CL cs.HC cs.LG
|
Amid the swift progress of large language models (LLMs) and their evolution
into large multimodal models (LMMs), significant strides have been made in
high-resource languages such as English and Chinese. While Arabic LLMs have
seen notable progress, Arabic LMMs remain largely unexplored, often narrowly
focusing on a few specific aspects of the language and visual understanding. To
bridge this gap, we introduce AIN, the Arabic INclusive Multimodal Model,
designed to excel across diverse domains. AIN is an English-Arabic bilingual
LMM designed to excel in English and Arabic, leveraging a carefully constructed
dataset of 3.6 million high-quality Arabic-English multimodal samples.
AIN demonstrates state-of-the-art Arabic performance, while also possessing
strong English-language visual capabilities. On the recent CAMEL-Bench
benchmark, comprising 38 sub-domains including multi-image understanding,
complex visual perception, handwritten document understanding, video
understanding, medical imaging, plant diseases, and remote sensing-based land
use understanding, our AIN demonstrates strong performance, with the 7B model
outperforming GPT-4o by an absolute gain of 3.4% averaged over eight domains
and 38 sub-domains. AIN's superior capabilities position it as a significant
step toward empowering Arabic speakers with advanced multimodal generative AI
tools across diverse applications.
|
2502.00108
|
Tracking Most Significant Shifts in Infinite-Armed Bandits
|
cs.LG stat.ML
|
We study an infinite-armed bandit problem where actions' mean rewards are
initially sampled from a reservoir distribution. Most prior works in this
setting focused on stationary rewards (Berry et al., 1997; Wang et al., 2008;
Bonald and Proutiere, 2013; Carpentier and Valko, 2015) with the more
challenging adversarial/non-stationary variant only recently studied in the
context of rotting/decreasing rewards (Kim et al., 2022; 2024). Furthermore,
optimal regret upper bounds were only achieved using parameter knowledge of
non-stationarity and only known for certain regimes of regularity of the
reservoir. This work shows the first parameter-free optimal regret bounds for
all regimes while also relaxing distributional assumptions on the reservoir.
We first introduce a blackbox scheme to convert a finite-armed MAB algorithm
designed for near-stationary environments into a parameter-free algorithm for
the infinite-armed non-stationary problem with optimal regret guarantees. We
next study a natural notion of significant shift for this problem inspired by
recent developments in finite-armed MAB (Suk & Kpotufe, 2022). We show that
tighter regret bounds in terms of significant shifts can be adaptively attained
by employing a randomized variant of elimination within our blackbox scheme.
Our enhanced rates only depend on the rotting non-stationarity and thus exhibit
an interesting phenomenon for this problem where rising rewards do not factor
into the difficulty of non-stationarity.
|
2502.00112
|
SAGRAD: A Program for Neural Network Training with Simulated Annealing
and the Conjugate Gradient Method
|
cs.LG cs.NE
|
SAGRAD (Simulated Annealing GRADient), a Fortran 77 program for computing
neural networks for classification using batch learning, is discussed. Neural
network training in SAGRAD is based on a combination of simulated annealing and
Møller's scaled conjugate gradient algorithm, the latter a variation of the
traditional conjugate gradient method, better suited for the nonquadratic
nature of neural networks. Different aspects of the implementation of the
training process in SAGRAD are discussed, such as the efficient computation of
gradients and multiplication of vectors by Hessian matrices that are required
by Møller's algorithm; the (re)initialization of weights with simulated
annealing, required to (re)start Møller's algorithm the first time and each
time thereafter that it shows insufficient progress in reaching a possibly
local minimum; and the use of simulated annealing when Møller's algorithm,
after possibly making considerable progress, becomes stuck at a local minimum
or flat area of weight space. Outlines of the scaled conjugate gradient
algorithm, the simulated annealing procedure and the training process used in
SAGRAD are presented together with results from running SAGRAD on two examples
of training data.
|
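The simulated-annealing component described above accepts worse candidates with a Boltzmann probability while the temperature cools, which is what lets it escape the flat regions and local minima where conjugate gradient stalls. This is a minimal 1D sketch with an invented objective, not SAGRAD's weight-space procedure.

```python
import math
import random

random.seed(1)

def anneal(f, x, t=2.0, cooling=0.95, steps=200):
    """Return the best point seen while annealing from starting point x."""
    fx = f(x)
    best_x, best_fx = x, fx
    for _ in range(steps):
        cand = x + random.uniform(-0.5, 0.5)
        fc = f(cand)
        # accept improvements always, worse moves with Boltzmann probability
        if fc < fx or random.random() < math.exp((fx - fc) / t):
            x, fx = cand, fc
            if fc < best_fx:
                best_x, best_fx = cand, fc
        t *= cooling                    # geometric cooling schedule
    return best_x, best_fx

f = lambda w: (w - 1.5) ** 2 + 1.0      # stand-in "training error" surface
best_x, best_fx = anneal(f, x=-2.0)
print(best_fx <= f(-2.0))  # → True: the best point is never worse than the start
```

In SAGRAD the candidate perturbations are applied to the network's weight vector and the accepted point seeds the next run of the scaled conjugate gradient method.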
2502.00114
|
Mobile Robot Navigation Using Hand-Drawn Maps: A Vision Language Model
Approach
|
cs.RO cs.CV
|
Hand-drawn maps can be used to convey navigation instructions between humans
and robots in a natural and efficient manner. However, these maps can often
contain inaccuracies such as scale distortions and missing landmarks which
present challenges for mobile robot navigation. This paper introduces a novel
Hand-drawn Map Navigation (HAM-Nav) architecture that leverages pre-trained
vision language models (VLMs) for robot navigation across diverse environments,
hand-drawing styles, and robot embodiments, even in the presence of map
inaccuracies. HAM-Nav integrates a unique Selective Visual Association
Prompting approach for topological map-based position estimation and navigation
planning as well as a Predictive Navigation Plan Parser to infer missing
landmarks. Extensive experiments were conducted in photorealistic simulated
environments, using both wheeled and legged robots, demonstrating the
effectiveness of HAM-Nav in terms of navigation success rates and Success
weighted by Path Length. Furthermore, a user study in real-world environments
highlighted the practical utility of hand-drawn maps for robot navigation as
well as successful navigation outcomes.
|
2502.00115
|
A Direct Semi-Exhaustive Search Method for Robust, Partial-to-Full Point
Cloud Registration
|
cs.RO cs.CV
|
Point cloud registration refers to the problem of finding the rigid
transformation that aligns two given point clouds, and is crucial for many
applications in robotics and computer vision. The main insight of this paper is
that we can directly optimize the point cloud registration problem without
correspondences by utilizing an algorithmically simple, yet computationally
complex, semi-exhaustive search approach that is very well-suited for
parallelization on modern GPUs. Our proposed algorithm, Direct Semi-Exhaustive
Search (DSES), iterates over potential rotation matrices and efficiently
computes the inlier-maximizing translation associated with each rotation. It
then computes the optimal rigid transformation based on any desired distance
metric by directly computing the error associated with each transformation
candidate $\{R, t\}$. By leveraging the parallelism of modern GPUs, DSES
outperforms state-of-the-art methods for partial-to-full point cloud
registration on the simulated ModelNet40 benchmark and demonstrates high
performance and robustness for pose estimation on a real-world robotics problem
(https://youtu.be/q0q2-s2KSuA).
|
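A 2D toy of the semi-exhaustive idea: score candidate rotations exhaustively and, for each, pick a translation, keeping the inlier-maximizing pair {R, t}. The points, the 90-degree rotation grid, and the single-correspondence translation heuristic are simplifications; the paper works in 3D with a GPU-parallel inlier-maximizing translation search.

```python
import math

src = [(1.0, 0.0), (0.0, 1.0), (2.0, 1.0)]
# target = src rotated 90 degrees, then translated by (3, 0)
tgt = [(3.0, 1.0), (2.0, 0.0), (2.0, 2.0)]

best = None
for deg in range(0, 360, 90):                     # candidate rotations
    a = math.radians(deg)
    rot = [(x * math.cos(a) - y * math.sin(a),
            x * math.sin(a) + y * math.cos(a)) for x, y in src]
    # toy heuristic: translation voted by the first correspondence candidate
    tx, ty = tgt[0][0] - rot[0][0], tgt[0][1] - rot[0][1]
    moved = [(x + tx, y + ty) for x, y in rot]
    inliers = sum(1 for p in moved
                  if any(math.dist(p, q) < 1e-6 for q in tgt))
    if best is None or inliers > best[0]:
        best = (inliers, deg, (tx, ty))

print(best[1], best[2])  # → 90 (3.0, 0.0)
```

The brute-force structure, one independent score per transformation candidate, is what makes the method map so cleanly onto GPU parallelism.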
2502.00127
|
Sparse Autoencoder Insights on Voice Embeddings
|
cs.CL
|
Recent advances in explainable machine learning have highlighted the
potential of sparse autoencoders in uncovering mono-semantic features in
densely encoded embeddings. While most research has focused on Large Language
Model (LLM) embeddings, the applicability of this technique to other domains
remains largely unexplored. This study applies sparse autoencoders to speaker
embeddings generated from a Titanet model, demonstrating the effectiveness of
this technique in extracting mono-semantic features from non-textual embedded
data. The results show that the extracted features exhibit characteristics
similar to those found in LLM embeddings, including feature splitting and
steering. The analysis reveals that the autoencoder can identify and manipulate
features such as language and music, which are not evident in the original
embedding. The findings suggest that sparse autoencoders can be a valuable tool
for understanding and interpreting embedded data in many domains, including
audio-based speaker recognition.
|
2502.00129
|
ProtoSnap: Prototype Alignment for Cuneiform Signs
|
cs.CV cs.LG
|
The cuneiform writing system served as the medium for transmitting knowledge
in the ancient Near East for a period of over three thousand years. Cuneiform
signs have a complex internal structure which is the subject of expert
paleographic analysis, as variations in sign shapes bear witness to historical
developments and transmission of writing and culture over time. However, prior
automated techniques mostly treat sign types as categorical and do not
explicitly model their highly varied internal configurations. In this work, we
present an unsupervised approach for recovering the fine-grained internal
configuration of cuneiform signs by leveraging powerful generative models and
the appearance and structure of prototype font images as priors. Our approach,
ProtoSnap, enforces structural consistency on matches found with deep image
features to estimate the diverse configurations of cuneiform characters,
snapping a skeleton-based template to photographed cuneiform signs. We provide
a new benchmark of expert annotations and evaluate our method on this task. Our
evaluation shows that our approach succeeds in aligning prototype skeletons to
a wide variety of cuneiform signs. Moreover, we show that conditioning on
structures produced by our method allows for generating synthetic data with
correct structural configurations, significantly boosting the performance of
cuneiform sign recognition beyond existing techniques, in particular over rare
signs. Our code, data, and trained models are available at the project page:
https://tau-vailab.github.io/ProtoSnap/
|
2502.00131
|
Middleman Bias in Advertising: Aligning Relevance of Keyphrase
Recommendations with Search
|
cs.IR
|
E-commerce sellers are recommended keyphrases based on their inventory on
which they advertise to increase buyer engagement (clicks/sales). Keyphrases
must be pertinent to items; otherwise, they can result in seller
dissatisfaction and poor targeting -- toward that end, relevance filters are
employed. In this
work, we describe the shortcomings of training relevance filter models on
biased click/sales signals. We re-conceptualize advertiser keyphrase relevance
as interaction between two dynamical systems -- Advertising which produces the
keyphrases and Search which acts as a middleman to reach buyers. We discuss the
bias of search relevance systems (middleman bias) and the need to align
advertiser keyphrases with search relevance signals. We also compare the
performance of cross encoders and bi-encoders in modeling this alignment and
the scalability of such a solution for sellers at eBay.
|
2502.00133
|
Exploring Transfer Learning for Deep Learning Polyp Detection in
Colonoscopy Images Using YOLOv8
|
cs.CV cs.AI
|
Deep learning methods have demonstrated strong performance in object detection
tasks; however, their ability to learn domain-specific applications with
limited training data remains a significant challenge. Transfer learning
techniques address this issue by leveraging knowledge from pre-training on
related datasets, enabling faster and more efficient learning for new tasks.
Finding the right dataset for pre-training can play a critical role in
determining the success of transfer learning and overall model performance. In
this paper, we investigate the impact of pre-training a YOLOv8n model on seven
distinct datasets, evaluating their effectiveness when transferred to the task
of polyp detection. We compare whether large, general-purpose datasets with
diverse objects outperform niche datasets with characteristics similar to
polyps. In addition, we assess the influence of the size of the dataset on the
efficacy of transfer learning. Experiments on the polyp datasets show that
models pre-trained on relevant datasets consistently outperform those trained
from scratch, highlighting the benefit of pre-training on datasets with shared
domain-specific features.
|
2502.00136
|
A Three-Branch Checks-and-Balances Framework for Context-Aware Ethical
Alignment of Large Language Models
|
cs.CL cs.AI
|
This paper introduces a three-branch checks-and-balances framework for
ethical alignment of Large Language Models (LLMs), inspired by governmental
systems. It implements three independent yet interacting components: LLMs as
the executive branch for knowledge generation, DIKE as the legislative branch
establishing ethical guardrails, and ERIS as the judicial branch for contextual
interpretation. The adversarial DIKE-ERIS duality enables adaptation to diverse
cultural contexts while upholding consistent ethical principles. This
architecture addresses limitations of reinforcement learning with human
feedback (RLHF) by providing interpretable, adaptable, and culturally-aware
ethical reasoning. Through self-supervised learning and adversarial testing,
our framework demonstrates how emotional modeling can guide linguistic
behaviors toward ethical outcomes while preserving independence across
knowledge generation, ethical oversight, and contextual interpretation.
|
2502.00138
|
JustAct+: Justified and Accountable Actions in Policy-Regulated,
Multi-Domain Data Processing
|
cs.LO cs.DC cs.MA cs.PL
|
Inter-organisational data exchange is regulated by norms originating from
sources ranging from (inter)national laws, to processing agreements, and
individual consent. Verifying norm compliance is complex because laws (e.g.,
GDPR) distribute responsibility and require accountability. Moreover, in some
application domains (e.g., healthcare), privacy requirements extend the norms
(e.g., patient consent). In contrast, existing solutions such as smart
contracts, access- and usage-control assume policies to be public, or
otherwise, statically partition policy information at the cost of
accountability and flexibility. Instead, our framework prescribes how
decentralised agents justify their actions with policy fragments that the
agents autonomously create, gossip, and assemble. Crucially, the permission of
actions is always reproducible by any observer, even with a partial view of all
the dynamic policies. Actors can be sure that future auditors will confirm
their permissions. Systems centralise control by (re)configuring externally
synchronised agreements, the bases of all justifications. As a result, control
is centralised only to the extent desired by the agents.
In this paper, we define the JustAct framework, detail its implementation in
a particular data-processing system, and design a suitable policy language
based on logic programming. A case study reproduces Brane - an existing
policy-regulated, inter-domain, medical data processing system - and serves to
demonstrate and assess the qualities of the framework.
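The framework's core guarantee, that any observer can reproduce a permission from the assembled policy fragments, can be sketched as forward chaining over logic-programming-style rules. The predicates and helper below are hypothetical illustrations, not JustAct's actual policy language:

```python
# Minimal forward-chaining sketch (illustrative, not JustAct's language):
# an action is permitted iff "permit(...)" is derivable from the facts
# and rules assembled from gossiped policy fragments.
def derive(facts, rules):
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for head, body in rules:
            if head not in facts and all(b in facts for b in body):
                facts.add(head)
                changed = True
    return facts

# Hypothetical fragments contributed by different agents.
facts = {"consent(patient, study)", "role(alice, researcher)"}
rules = [
    ("authorized(alice)", ["role(alice, researcher)"]),
    ("permit(read(alice, record))",
     ["authorized(alice)", "consent(patient, study)"]),
]
derived = derive(facts, rules)
print("permit(read(alice, record))" in derived)   # True
```

Because derivation is deterministic given the same fragments, any future auditor re-running it confirms the same permission.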
|
2502.00139
|
Beamforming with Joint Phase and Time Array: System Design, Prototyping
and Performance
|
cs.IT eess.SP math.IT
|
Joint phase-time arrays (JPTA) is a new mmWave radio frequency front-end
architecture constructed by appending time-delay elements to phase shifters
for analog beamforming. JPTA allows the mmWave base station (BS) to form
multiple frequency-dependent beams with a single RF chain, exploiting the extra
degrees of freedom the time-delay elements offer. Without requiring extra
power-hungry RF chains, a BS with JPTA can schedule multiple users in different
directions in a frequency-division multiplexing (FDM) manner. A BS with JPTA
offers several advantages over traditional analog beamforming systems.
Simulation results show that JPTA can bring significant system-level benefits,
e.g., extending uplink throughput coverage by 100%. To realize these system
benefits of JPTA, high-resolution delay elements with a wide delay dynamic
range are essential. With newly developed delay elements, we demonstrate that a
single TRX RF chain can serve four users in four different directions in the
mmWave band.
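The frequency dependence the delay elements buy can be illustrated with a uniform linear array: phase shifters align the beam only at the carrier (beam squint elsewhere), while true-time delays keep it aligned at every subcarrier. The array geometry and numbers below are illustrative assumptions, not the prototype's parameters:

```python
import numpy as np

c, N, d = 3e8, 16, 0.005     # speed of light (m/s), antennas, spacing (m)
fc = 30e9                     # carrier frequency (Hz)
theta0 = np.deg2rad(30)       # target user direction
n = np.arange(N)

def gain(f, phase, delay, theta):
    """Normalized array gain at frequency f toward angle theta."""
    prop = 2 * np.pi * f * n * d * np.sin(theta) / c
    w = np.exp(1j * (phase + 2 * np.pi * f * delay))
    return abs(np.sum(w * np.exp(-1j * prop))) / N

# Phase shifters alone are matched only at the carrier fc.
phase_only = 2 * np.pi * fc * n * d * np.sin(theta0) / c
# Per-antenna true-time delays make the alignment frequency-independent.
delays = n * d * np.sin(theta0) / c

for f in (29e9, 30e9, 31e9):
    print(round(gain(f, phase_only, np.zeros(N), theta0), 3),
          round(gain(f, np.zeros(N), delays, theta0), 3))
```

The phase-only column peaks at 1.0 only at 30 GHz; the delay column stays at 1.0 across the band, which is what lets a single RF chain steer different subcarriers to different users.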
|
2502.00140
|
Demystifying MPNNs: Message Passing as Merely Efficient Matrix
Multiplication
|
cs.LG cs.AI cs.NE cs.SI
|
While Graph Neural Networks (GNNs) have achieved remarkable success, their
design largely relies on empirical intuition rather than theoretical
understanding. In this paper, we present a comprehensive analysis of GNN
behavior through three fundamental aspects: (1) we establish that
\textbf{$k$-layer} Message Passing Neural Networks efficiently aggregate
\textbf{$k$-hop} neighborhood information through iterative computation, (2)
analyze how different loop structures influence neighborhood computation, and
(3) examine behavior across structure-feature hybrid and structure-only tasks.
For deeper GNNs, we demonstrate that gradient-related issues, rather than just
over-smoothing, can significantly impact performance in sparse graphs. We also
analyze how different normalization schemes affect model performance and how
GNNs make predictions with uniform node features, providing a theoretical
framework that bridges the gap between empirical success and theoretical
understanding.
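The first point, that k message-passing layers aggregate exactly the k-hop neighborhood via iterated matrix products, can be verified directly on a toy graph (identity weights and activation, for illustration only):

```python
import numpy as np

# Hypothetical 4-node path graph 0-1-2-3.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
A_hat = A + np.eye(4)      # self-loops so a node keeps its own state
X = np.eye(4)              # one-hot node features

# One message-passing layer with identity weights/activation is just a
# matrix product; k layers therefore mix exactly the k-hop neighborhood.
H = X
for _ in range(2):         # k = 2
    H = A_hat @ H

print(H[0])                # node 0 sees nodes 0-2 but not node 3: [2. 2. 1. 0.]
```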
|
2502.00145
|
Counting and Reasoning with Plans
|
cs.AI
|
Classical planning asks for a sequence of operators reaching a given goal.
While the most common case is to compute a plan, many scenarios require more
than that. However, quantitative reasoning on the plan space remains mostly
unexplored. A fundamental problem is to count plans, which relates to the
conditional probability on the plan space. Indeed, qualitative and quantitative
approaches are well-established in various other areas of automated reasoning.
We present the first study of quantitative and qualitative reasoning on the
plan space. In particular, we focus on polynomially bounded plans. On the
theoretical side, we study the complexity of plan counting, which gives rise to rich reasoning
modes. Since counting is hard in general, we introduce the easier notion of
facets, which enables understanding the significance of operators. On the
practical side, we implement quantitative reasoning for planning. Thereby, we
transform a planning task into a propositional formula and use knowledge
compilation to count different plans. This framework scales well to large plan
spaces, while enabling rich reasoning capabilities such as learning pruning
functions and explainable planning.
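The pipeline's last step, counting the models of a propositional encoding of the planning task, can be sketched with brute-force enumeration (the variables and clauses below are a hypothetical toy encoding; real systems use knowledge compilation, e.g. d-DNNF, instead):

```python
from itertools import product

def count_models(n_vars, clauses):
    """Count satisfying assignments of a CNF given as lists of
    (variable, polarity) literals -- brute force for illustration."""
    return sum(
        all(any(bits[v] == pos for v, pos in clause) for clause in clauses)
        for bits in product([False, True], repeat=n_vars)
    )

# (x0 or x1) and (not x0 or x2): e.g. "apply op a or op b" and
# "op a requires op c" in a bounded-plan encoding.
clauses = [[(0, True), (1, True)], [(0, False), (2, True)]]
print(count_models(3, clauses))   # 4
```

Each satisfying assignment corresponds to one valid bounded plan, so the model count is the plan count.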
|
2502.00146
|
Multimodal MRI-Ultrasound AI for Prostate Cancer Detection Outperforms
Radiologist MRI Interpretation: A Multi-Center Study
|
eess.IV cs.AI cs.CV
|
Pre-biopsy magnetic resonance imaging (MRI) is increasingly used to target
suspicious prostate lesions. This has led to artificial intelligence (AI)
applications improving MRI-based detection of clinically significant prostate
cancer (CsPCa). However, MRI-detected lesions must still be mapped to
transrectal ultrasound (TRUS) images during biopsy, which results in missing
CsPCa. This study systematically evaluates a multimodal AI framework
integrating MRI and TRUS image sequences to enhance CsPCa identification. The
study included 3110 patients from three cohorts across two institutions who
underwent prostate biopsy. The proposed framework, based on the 3D UNet
architecture, was evaluated on 1700 test cases, comparing performance to
unimodal AI models that use either MRI or TRUS alone. Additionally, the
proposed model was compared to radiologists in a cohort of 110 patients. The
multimodal AI approach achieved superior sensitivity (80%) and Lesion Dice
(42%) compared to unimodal MRI (73%, 30%) and TRUS models (49%, 27%). Compared
to radiologists, the multimodal model showed higher specificity (88% vs. 78%)
and Lesion Dice (38% vs. 33%), with equivalent sensitivity (79%). Our findings
demonstrate the potential of multimodal AI to improve CsPCa lesion targeting
during biopsy and treatment planning, surpassing current unimodal models and
radiologists; ultimately improving outcomes for prostate cancer patients.
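The Lesion Dice figures quoted above come from the standard Dice overlap, which is simple to state precisely (the masks below are tiny hypothetical examples, not study data):

```python
import numpy as np

def dice(pred, gt):
    """Dice overlap 2|A∩B| / (|A| + |B|) between two binary masks."""
    inter = np.logical_and(pred, gt).sum()
    return 2 * inter / (pred.sum() + gt.sum())

pred = np.zeros((8, 8), dtype=bool); pred[2:5, 2:5] = True   # 9 voxels
gt = np.zeros((8, 8), dtype=bool);   gt[3:6, 3:6] = True     # 9 voxels
print(round(dice(pred, gt), 3))   # 4 overlapping voxels -> 8/18 ≈ 0.444
```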
|
2502.00151
|
A Comprehensive Review: Applicability of Deep Neural Networks in
Business Decision Making and Market Prediction Investment
|
econ.GN cs.AI q-fin.EC
|
Big data, in both its structured and unstructured formats, has brought
unforeseen challenges to economics and business. How to organize, classify, and
then analyze such data to obtain meaningful insights remains an ongoing
research topic for business leaders and academic researchers. This paper
studies recent applications of deep neural networks to decision making in
business and investment, especially in risk management, portfolio
optimization, and algorithmic trading. Setting aside limitations in data
privacy and cross-market analysis, the article establishes that deep neural
networks have performed remarkably well in financial classification and
prediction. Moreover, the study suggests that by combining multiple neural
networks spanning different
data type modalities, a more robust, efficient, and scalable financial
prediction framework can be constructed.
|
2502.00156
|
ALBAR: Adversarial Learning approach to mitigate Biases in Action
Recognition
|
cs.CV cs.CR
|
Bias in machine learning models can lead to unfair decision making, and while
it has been well-studied in the image and text domains, it remains
underexplored in action recognition. Action recognition models often suffer
from background bias (i.e., inferring actions based on background cues) and
foreground bias (i.e., relying on subject appearance), which can be detrimental
to real-life applications such as autonomous vehicles or assisted living
monitoring. While prior approaches have mainly focused on mitigating background
bias using specialized augmentations, we thoroughly study both biases. We
propose ALBAR, a novel adversarial training method that mitigates foreground
and background biases without requiring specialized knowledge of the bias
attributes. Our framework applies an adversarial cross-entropy loss to the
sampled static clip (where all the frames are the same) and aims to make its
class probabilities uniform using a proposed entropy maximization loss.
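The idea of that term, penalizing any class confidence on a static clip, can be sketched as a negative-entropy loss. This is a simplified stand-in in the spirit of the proposed loss, not the paper's exact implementation:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def entropy_max_loss(static_logits):
    """Negative entropy of the static-clip prediction: minimizing this
    pushes the class distribution toward uniform."""
    p = softmax(static_logits)
    return np.sum(p * np.log(p + 1e-12))   # = -entropy

confident = np.array([8.0, 0.1, 0.1, 0.1])   # background shortcut in use
uniform = np.zeros(4)                         # no static-cue information
print(entropy_max_loss(uniform) < entropy_max_loss(confident))   # True
```

A debiased model should score lower (better) on static input, since a clip with no motion should carry no action information.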
Additionally, we introduce a gradient penalty loss for regularization against
the debiasing process. We evaluate our method on established background and
foreground bias protocols, setting a new state-of-the-art and strongly
improving combined debiasing performance by over 12% on HMDB51. Furthermore, we
identify an issue of background leakage in the existing UCF101 protocol for
bias evaluation which provides a shortcut to predict actions and does not
provide an accurate measure of the debiasing capability of a model. We address
this issue by proposing more fine-grained segmentation boundaries for the
actor, where our method also outperforms existing approaches. Project Page:
https://joefioresi718.github.io/ALBAR_webpage/
|
2502.00158
|
Resolving Editing-Unlearning Conflicts: A Knowledge Codebook Framework
for Large Language Model Updating
|
cs.CL
|
Large Language Models (LLMs) excel in natural language processing by encoding
extensive human knowledge, but their utility relies on timely updates as
knowledge evolves. Updating LLMs involves two key tasks simultaneously:
unlearning to remove unwanted knowledge and editing to incorporate new
information. Existing methods face two major challenges: ineffective knowledge
storage (either too sparse or too dense) and task conflicts between editing and
unlearning, as validated through our theoretical and experimental results. To
address these issues, we propose LOKA, a conflict-free framework for LLM
updating based on a knowledge codebook. During training, updated knowledge is
stored in multiple codebook memories. To optimize knowledge storage, a
similarity-aware knowledge mapping ensures that related knowledge pieces are
clustered and allocated to the same memory. Additionally, LOKA resolves task
conflicts by employing task-specific and multi-task memories guided by a
conflict score. In the inference stage, LOKA retrieves the most relevant memory
from the codebook and plugs it into the original LLM to apply the updated
knowledge. A learning-based router controls codebook activation to further
improve knowledge utilization. Extensive experiments demonstrate the
effectiveness of LOKA in LLM knowledge updating tasks.
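The inference step, retrieving the most relevant codebook memory for a query, can be sketched as nearest-key lookup by cosine similarity. The keys, embeddings, and "patches" below are hypothetical; LOKA's actual router is learned:

```python
import numpy as np

# Toy codebook: knowledge updates stored as (key, memory) pairs.
def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

keys = {
    "capital_facts": np.array([1.0, 0.1, 0.0]),
    "medical_facts": np.array([0.0, 1.0, 0.2]),
}
memories = {
    "capital_facts": "patch: capital of X is now Y",
    "medical_facts": "patch: drug Z dosage updated",
}

query = np.array([0.9, 0.2, 0.0])   # embedding of an incoming prompt
best = max(keys, key=lambda k: cosine(query, keys[k]))
print(memories[best])                # -> patch: capital of X is now Y
```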
|
2502.00160
|
Improving Quality Control Of MRI Images Using Synthetic Motion Data
|
eess.IV cs.CV
|
MRI quality control (QC) is challenging due to unbalanced and limited
datasets, as well as subjective scoring, which hinder the development of
reliable automated QC systems. To address these issues, we introduce an
approach that pretrains a model on synthetically generated motion artifacts
before applying transfer learning for QC classification. This method not only
improves the accuracy in identifying poor-quality scans but also reduces
training time and resource requirements compared to training from scratch. By
leveraging synthetic data, we provide a more robust and resource-efficient
solution for QC automation in MRI, paving the way for broader adoption in
diverse research settings.
|
2502.00162
|
Physics-informed Split Koopman Operators for Data-efficient Soft Robotic
Simulation
|
cs.RO
|
Koopman operator theory provides a powerful data-driven technique for
modeling nonlinear dynamical systems in a linear framework, in comparison to
computationally expensive and highly nonlinear physics-based simulations.
However, Koopman operator-based models for soft robots are very high
dimensional and require considerable amounts of data to properly resolve.
Inspired by physics-informed techniques from machine learning, we present a
novel physics-informed Koopman operator identification method that improves
simulation accuracy for small dataset sizes. Through Strang splitting, the
method takes advantage of both continuous and discrete Koopman operator
approximation to obtain information both from trajectory and phase space data.
The method is validated on a tendon-driven soft robotic arm, showing orders of
magnitude improvement over standard methods in terms of the shape error. We
envision this method can significantly reduce the data requirement of Koopman
operators for systems with partially known physical models, and thus reduce the
cost of obtaining data.
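The underlying fitting step, approximating a Koopman operator from data, can be sketched with plain EDMD-style least squares (this is the baseline idea, not the paper's physics-informed split method; the dynamics and observable dictionary are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)

def lift(x):
    # Hypothetical dictionary of observables: [x, x^2, 1]
    return np.array([x, x**2, 1.0])

# Data from the nonlinear map x' = 0.9 x - 0.1 x^2.
xs = rng.uniform(-1, 1, 200)
X = np.stack([lift(x) for x in xs], axis=1)                   # (3, 200)
Y = np.stack([lift(0.9 * x - 0.1 * x**2) for x in xs], axis=1)

# Least-squares linear operator in the lifted space: K = argmin ||Y - KX||.
K = Y @ np.linalg.pinv(X)

x0 = 0.5
pred = (K @ lift(x0))[0]           # one-step prediction via the LINEAR model
true = 0.9 * x0 - 0.1 * x0**2
print(round(pred, 3), round(true, 3))
```

Because the next state is expressible in the chosen observables, the linear operator reproduces the nonlinear map exactly here; in general the dictionary choice controls the approximation quality, which is where physics priors help.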
|
2502.00168
|
Supervised Quadratic Feature Analysis: An Information Geometry Approach
to Dimensionality Reduction
|
stat.ML cs.LG math.DG math.ST stat.TH
|
Supervised dimensionality reduction aims to map labeled data to a
low-dimensional feature space while maximizing class discriminability. Despite
the availability of methods for learning complex non-linear features (e.g. Deep
Learning), there is an enduring demand for dimensionality reduction methods
that learn linear features due to their interpretability, low computational
cost, and broad applicability. However, there is a gap between methods that
optimize linear separability (e.g. LDA), and more flexible but computationally
expensive methods that optimize over arbitrary class boundaries (e.g.
metric-learning methods). Here, we present Supervised Quadratic Feature
Analysis (SQFA), a dimensionality reduction method for learning linear features
that maximize the differences between class-conditional first- and second-order
statistics, which allow for quadratic discrimination. SQFA exploits the
information geometry of second-order statistics in the symmetric positive
definite manifold. We show that SQFA features support quadratic
discriminability in real-world problems. We also provide a theoretical link,
based on information geometry, between SQFA and the Quadratic Discriminant
Analysis (QDA) classifier.
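The kind of structure at stake, classes that differ in second-order statistics even when their means coincide, can be illustrated with a QDA-style score; linear methods such as LDA cannot separate this example at all. The data and dimensions are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)

# Two classes with the SAME mean but different covariances.
n = 2000
x0 = rng.normal(scale=0.5, size=(n, 2))   # tight cloud
x1 = rng.normal(scale=2.0, size=(n, 2))   # wide cloud

def qda_score(x, mean, cov):
    """Gaussian log-density score up to a constant (quadratic in x)."""
    d = x - mean
    return -0.5 * (np.log(np.linalg.det(cov)) + d @ np.linalg.solve(cov, d))

stats = [(x.mean(0), np.cov(x.T)) for x in (x0, x1)]

def classify(x):
    return int(qda_score(x, *stats[1]) > qda_score(x, *stats[0]))

test = np.vstack([rng.normal(scale=0.5, size=(200, 2)),
                  rng.normal(scale=2.0, size=(200, 2))])
labels = np.array([0] * 200 + [1] * 200)
acc = np.mean(np.array([classify(x) for x in test]) == labels)
print(acc)   # well above chance despite identical class means
```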
|
2502.00172
|
Distribution-Specific Agnostic Conditional Classification With
Halfspaces
|
cs.LG cs.CC stat.ML
|
We study ``selective'' or ``conditional'' classification problems under an
agnostic setting. Classification tasks commonly focus on modeling the
relationship between features and categories that captures the vast majority of
data. In contrast to common machine learning frameworks, conditional
classification intends to model such relationships only on a subset of the data
defined by some selection rule. Most work on conditional classification either
solves the problem in a realizable setting or does not guarantee the error is
bounded compared to an optimal solution. In this work, we consider
selective/conditional classification by sparse linear classifiers for subsets
defined by halfspaces, and give both positive as well as negative results for
Gaussian feature distributions. On the positive side, we present the first
PAC-learning algorithm for homogeneous halfspace selectors with error guarantee
$O^*(\sqrt{\mathrm{opt}})$, where $\mathrm{opt}$ is the smallest conditional
classification error over the given class of classifiers and homogeneous
halfspaces. On the negative side, we find that, under cryptographic
assumptions, approximating the conditional classification loss within a small
additive error is computationally hard even under Gaussian distribution. We
prove that approximating conditional classification is at least as hard as
approximating agnostic classification in both additive and multiplicative form.
|
2502.00173
|
Lifting by Gaussians: A Simple, Fast and Flexible Method for 3D Instance
Segmentation
|
cs.CV
|
We introduce Lifting By Gaussians (LBG), a novel approach for open-world
instance segmentation of 3D Gaussian Splatted Radiance Fields (3DGS). Recently,
3DGS Fields have emerged as a highly efficient and explicit alternative to
Neural Field-based methods for high-quality Novel View Synthesis. Our 3D
instance segmentation method lifts 2D segmentation masks from SAM (or
alternatives such as FastSAM), together with features from CLIP and DINOv2,
fusing them directly onto 3DGS (or similar Gaussian radiance fields such as
2DGS). Unlike previous approaches, LBG requires no per-scene training, allowing
it to operate seamlessly on any existing 3DGS reconstruction. Our approach is
not only an order of magnitude faster and simpler than existing approaches; it
is also highly modular, enabling 3D semantic segmentation of existing 3DGS
fields without requiring a specific parametrization of the 3D Gaussians.
Furthermore, our technique achieves superior semantic segmentation for 2D
semantic novel view synthesis and 3D asset extraction results while maintaining
flexibility and efficiency. We further introduce a novel approach to evaluate
individually segmented 3D assets from 3D radiance field segmentation methods.
|
2502.00174
|
The role of positional encodings in the ARC benchmark
|
cs.AI cs.LG
|
The Abstraction and Reasoning Corpus challenges AI systems to perform
abstract reasoning with minimal training data, a task intuitive for humans but
demanding for machine learning models. Using CodeT5+ as a case study, we
demonstrate how limitations in positional encoding hinder reasoning and impact
performance. This work further examines the role of positional encoding across
transformer architectures, highlighting its critical influence on models of
varying sizes and configurations. Comparing several strategies, we find that
while 2D positional encoding and Rotary Position Embedding offer competitive
performance, 2D encoding excels in data-constrained scenarios, emphasizing its
effectiveness for ARC tasks.
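One common form of 2D positional encoding, assumed here for illustration, splits the channels between a row code and a column code, so a grid cell's embedding factorizes into (row, col); this is the kind of inductive bias ARC's grid inputs reward:

```python
import numpy as np

def sincos_1d(n, dim):
    """Standard sinusoidal encoding for positions 0..n-1."""
    pos = np.arange(n)[:, None]
    i = np.arange(dim // 2)[None, :]
    angles = pos / (10000 ** (2 * i / dim))
    return np.concatenate([np.sin(angles), np.cos(angles)], axis=1)

def sincos_2d(h, w, dim):
    """2D encoding: half the channels encode the row index, half the
    column index."""
    row = sincos_1d(h, dim // 2)
    col = sincos_1d(w, dim // 2)
    grid = np.zeros((h, w, dim))
    grid[:, :, : dim // 2] = row[:, None, :]
    grid[:, :, dim // 2 :] = col[None, :, :]
    return grid

pe = sincos_2d(5, 7, 32)
print(pe.shape)   # (5, 7, 32); cells in one row share their first 16 channels
```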
|
2502.00177
|
Evaluating Deep Human-in-the-Loop Optimization for Retinal Implants
Using Sighted Participants
|
cs.LG cs.CV cs.HC
|
Human-in-the-loop optimization (HILO) is a promising approach for
personalizing visual prostheses by iteratively refining stimulus parameters
based on user feedback. Previous work demonstrated HILO's efficacy in
simulation, but its performance with human participants remains untested. Here
we evaluate HILO using sighted participants viewing simulated prosthetic vision
to assess its ability to optimize stimulation strategies under realistic
conditions. Participants selected between phosphenes generated by competing
encoders to iteratively refine a deep stimulus encoder (DSE). We tested HILO in
three conditions: standard optimization, threshold misspecifications, and
out-of-distribution parameter sampling. Participants consistently preferred
HILO-generated stimuli over both a na\"ive encoder and the DSE alone, with log
odds favoring HILO across all conditions. We also observed key differences
between human and simulated decision-making, highlighting the importance of
validating optimization strategies with human participants. These findings
support HILO as a viable approach for adapting visual prostheses to
individuals.
|
2502.00180
|
Designing Scheduling for Diffusion Models via Spectral Analysis
|
cs.LG stat.ML
|
Diffusion models (DMs) have emerged as powerful tools for modeling complex
data distributions and generating realistic new samples. Over the years,
advanced architectures and sampling methods have been developed to make these
models practically usable. However, certain synthesis process decisions still
rely on heuristics without a solid theoretical foundation. In our work, we
offer a novel analysis of the DM's inference process, introducing a
comprehensive frequency response perspective. Specifically, by relying on
Gaussianity and shift-invariance assumptions, we present the inference process
as a closed-form spectral transfer function, capturing how the generated signal
evolves in response to the initial noise. We demonstrate how the proposed
analysis can be leveraged for optimizing the noise schedule, ensuring the best
alignment with the original dataset's characteristics. Our results lead to
scheduling curves that are dependent on the frequency content of the data,
offering a theoretical justification for some of the heuristics taken by
practitioners.
|
2502.00182
|
Understanding Federated Learning from IID to Non-IID dataset: An
Experimental Study
|
cs.LG cs.AI stat.ML
|
As privacy concerns and data regulations grow, federated learning (FL) has
emerged as a promising approach for training machine learning models across
decentralized data sources without sharing raw data. However, a significant
challenge in FL is that client data are often non-IID (non-independent and
identically distributed), leading to reduced performance compared to
centralized learning. While many methods have been proposed to address this
issue, their underlying mechanisms are often viewed from different
perspectives. Through a comprehensive investigation from gradient descent to
FL, and from IID to non-IID data settings, we find that inconsistencies in
client loss landscapes primarily cause performance degradation in non-IID
scenarios. From this understanding, we observe that existing methods can be
grouped into two main strategies: (i) adjusting parameter update paths and (ii)
modifying client loss landscapes. These findings offer a clear perspective on
addressing non-IID challenges in FL and help guide future research in the
field.
|
2502.00185
|
Optimal Coupled Sensor Placement and Path-Planning in Unknown
Time-Varying Environments
|
eess.SY cs.SY
|
We address path-planning for a mobile agent to navigate in an unknown
environment with minimum exposure to a spatially and temporally varying threat
field. The threat field is estimated using pointwise noisy measurements from a
mobile sensor network. For this problem, we present a new information gain
measure for optimal sensor placement that quantifies reduction in uncertainty
in the path cost rather than the environment state. This measure, which we call
the context-relevant mutual information (CRMI), couples the sensor placement
and path-planning problem. We propose an iterative coupled sensor configuration
and path-planning (CSCP) algorithm. At each iteration, this algorithm places
sensors to maximize CRMI, updates the threat estimate using new measurements,
and recalculates the path with minimum expected exposure to the threat. The
iterations converge when the path cost variance, which is an indicator of risk,
falls below a desired threshold. We show that CRMI is submodular, and
therefore, greedy optimization provides near-optimal sensor placements while
maintaining computational efficiency of the CSCP algorithm. Distance-based
sensor reconfiguration costs are introduced in a modified CRMI measure, which
we also show to be submodular. Through numerical simulations, we demonstrate
that the principal advantage of this algorithm is that near-optimal
low-variance paths are achieved using far fewer sensor measurements as compared
to a standard sensor placement method.
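The greedy step that submodularity justifies can be sketched with a simple coverage objective standing in for CRMI; monotone submodularity is exactly what gives greedy its (1 - 1/e) near-optimality. The sensors and observed cells below are hypothetical:

```python
# Greedy selection for a monotone submodular objective: repeatedly pick
# the sensor with the largest marginal gain.
def greedy_placement(candidates, coverage, k):
    chosen, covered = [], set()
    for _ in range(k):
        best = max(
            (c for c in candidates if c not in chosen),
            key=lambda c: len(coverage[c] - covered),
        )
        chosen.append(best)
        covered |= coverage[best]
    return chosen, covered

# Threat-field cells each hypothetical sensor observes.
coverage = {
    "s1": {1, 2, 3},
    "s2": {3, 4},
    "s3": {4, 5, 6, 7},
    "s4": {1, 7},
}
chosen, covered = greedy_placement(list(coverage), coverage, k=2)
print(chosen, sorted(covered))   # ['s3', 's1'] [1, 2, 3, 4, 5, 6, 7]
```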
|
2502.00186
|
Formalising Propositional Information via Implication Hypergraphs
|
math.LO cs.IT math.IT
|
This work introduces a framework for quantifying the information content of
logical propositions through the use of implication hypergraphs. We posit that
a proposition's informativeness is primarily determined by its relationships
with other propositions -- specifically, the extent to which it implies or
derives other propositions. To formalize this notion, we develop a framework
based on implication hypergraphs that seeks to capture these relationships.
Within this framework, we define propositional information, derive some key
properties, and illustrate the concept through examples. While the approach is
broadly applicable, mathematical propositions emerge as an ideal domain for its
application due to their inherently rich and interconnected structure. We
provide several examples to illustrate this and subsequently discuss the
limitations of the framework, along with suggestions for potential refinements.
|
2502.00190
|
On the Effectiveness of Random Weights in Graph Neural Networks
|
cs.LG stat.ML
|
Graph Neural Networks (GNNs) have achieved remarkable success across diverse
tasks on graph-structured data, primarily through the use of learned weights in
message passing layers. In this paper, we demonstrate that random weights can
be surprisingly effective, achieving performance comparable to end-to-end
trained counterparts, across various tasks and datasets. Specifically, we show
that by replacing learnable weights with random weights, GNNs can retain strong
predictive power, while significantly reducing training time by up to 6$\times$
and memory usage by up to 3$\times$. Moreover, the random weights combined with
our construction yield random graph propagation operators, which we show to
reduce the problem of feature rank collapse in GNNs. These understandings and
empirical results highlight random weights as a lightweight and efficient
alternative, offering a compelling perspective on the design and training of
GNN architectures.
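A training-free GNN layer of the kind described, propagation followed by a fixed random projection, is easy to sketch (the toy graph and dimensions below are illustrative assumptions, not the paper's construction):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy graph: two triangles joined by one edge.
A = np.zeros((6, 6))
for i, j in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]:
    A[i, j] = A[j, i] = 1.0
A_norm = A / A.sum(1)[:, None]     # row-normalized propagation operator

X = rng.normal(size=(6, 8))        # node features

# A "training-free" GNN layer: propagate, then project with a FIXED
# random weight matrix instead of a learned one.
W = rng.normal(size=(8, 4)) / np.sqrt(8)
H = np.tanh(A_norm @ X @ W)

print(H.shape)   # (6, 4) embeddings usable by any downstream classifier
```

Only the lightweight downstream head would be trained, which is where the reported time and memory savings come from.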
|
2502.00193
|
Byzantine-Resilient Zero-Order Optimization for Communication-Efficient
Heterogeneous Federated Learning
|
cs.LG cs.CR cs.DC stat.ML
|
We introduce CyBeR-0, a Byzantine-resilient federated zero-order optimization
method that is robust under Byzantine attacks and provides significant savings
in uplink and downlink communication costs. We introduce transformed robust
aggregation to give convergence guarantees for general non-convex objectives
under client data heterogeneity. Empirical evaluations for standard learning
tasks and fine-tuning large language models show that CyBeR-0 exhibits stable
performance with a per-round communication cost of only a few scalars and reduced
memory requirements.
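The scalar-per-round communication comes from the structure of zero-order optimization itself, sketched here on a toy objective (a generic two-point estimator, not CyBeR-0's robust aggregation):

```python
import numpy as np

rng = np.random.default_rng(42)

def loss(w):
    return np.sum((w - 1.0) ** 2)      # toy objective with optimum at w = 1

w = np.zeros(5)
mu, lr = 1e-4, 0.05

for _ in range(500):
    u = rng.normal(size=w.shape)       # direction drawn from a shared seed
    # Two-point zero-order estimate: only this SCALAR would need to be
    # uploaded, since a server sharing the seed can regenerate u itself.
    g_scalar = (loss(w + mu * u) - loss(w)) / mu
    w -= lr * g_scalar * u             # reconstructed gradient step

print(np.round(w, 2))                  # approaches the optimum w = 1
```

No full gradient vector ever crosses the network, which is the source of the uplink savings; Byzantine resilience then has to be added on top of these scalar updates.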
|
2502.00194
|
Physics-Informed Neural Network based Damage Identification for Truss
Railroad Bridges
|
cs.LG cs.AI physics.comp-ph
|
Railroad bridges are a crucial component of the U.S. freight rail system,
which moves over 40 percent of the nation's freight and plays a critical role
in the economy. However, aging bridge infrastructure and increasing train
traffic pose significant safety hazards and risk service disruptions. The U.S.
rail network includes over 100,000 railroad bridges, averaging one every 1.4
miles of track, with steel bridges comprising over 50% of the network's total
bridge length. Early identification and assessment of damage in these bridges
remain challenging tasks. This study proposes a physics-informed neural network
(PINN) based approach for damage identification in steel truss railroad
bridges. The proposed approach employs unsupervised learning,
eliminating the need for large datasets typically required by supervised
methods. The approach utilizes train wheel load data and bridge response during
train crossing events as inputs for damage identification. The PINN model
explicitly incorporates the governing differential equations of the linear
time-varying (LTV) bridge-train system. Herein, this model employs a recurrent
neural network (RNN) based architecture incorporating a custom Runge-Kutta (RK)
integrator cell, designed for gradient-based learning. The proposed approach
updates the bridge finite element model while also quantifying damage severity
and localizing the affected structural members. A case study on the Calumet
Bridge in Chicago, Illinois, with simulated damage scenarios, is used to
demonstrate the model's effectiveness in identifying damage while maintaining
low false-positive rates. Furthermore, the damage identification pipeline is
designed to seamlessly integrate prior knowledge from inspections and drone
surveys, also enabling context-aware updating and assessment of the bridge's
condition.
|
2502.00196
|
DermaSynth: Rich Synthetic Image-Text Pairs Using Open Access
Dermatology Datasets
|
cs.CV cs.AI cs.CL
|
A major barrier to developing vision large language models (LLMs) in
dermatology is the lack of large image--text pairs dataset. We introduce
DermaSynth, a dataset comprising 92,020 synthetic image--text pairs curated
from 45,205 images (13,568 clinical and 35,561 dermatoscopic) for
dermatology-related clinical tasks. Leveraging a state-of-the-art LLM, Gemini
2.0, we used clinically related prompts and a self-instruct method to generate
diverse and rich synthetic texts. Metadata of the datasets were incorporated
into the input prompts to reduce potential hallucinations. The resulting
dataset builds upon open access dermatological
image repositories (DERM12345, BCN20000, PAD-UFES-20, SCIN, and HIBA) that have
permissive CC-BY-4.0 licenses. We also fine-tuned a preliminary
Llama-3.2-11B-Vision-Instruct model, DermatoLlama 1.0, on 5,000 samples. We
anticipate this dataset to support and accelerate AI research in dermatology.
Data and code underlying this work are accessible at
https://github.com/abdurrahimyilmaz/DermaSynth.
|
2502.00197
|
Model Successor Functions
|
cs.LG stat.ML
|
The notion of generalization has moved away from the classical one defined in
statistical learning theory towards an emphasis on out-of-domain generalization
(OODG). Recently, there has been a growing focus on inductive generalization,
where a progression of difficulty implicitly governs the direction of domain
shifts. In inductive generalization, it is often assumed that the training data
lie on the easier side, while the testing data lie on the harder side. The challenge is
that training data are always finite, but a learner is expected to infer an
inductive principle that could be applied in an unbounded manner. This emerging
regime has appeared in the literature under different names, such as
length/logical/algorithmic extrapolation, but a formal definition is lacking.
This work provides such a formalization that centers on the concept of model
successors. Then we outline directions to adapt well-established techniques
towards the learning of model successors. This work calls for a restructuring
of the research discussion around inductive generalization, moving from
fragmented task-centric communities to a more unified effort focused on
universal properties of learning and computation.
|
2502.00198
|
Fairshare Data Pricing for Large Language Models
|
cs.GT cs.CL
|
Training data is a pivotal resource for building large language models
(LLMs), but unfair pricing in data markets poses a serious challenge for both
data buyers (e.g., LLM builders) and sellers (e.g., human annotators), which
discourages market participation, reducing data quantity and quality. In this
paper, we propose a fairshare pricing framework that sets training data prices
using data valuation methods to quantify their contribution to LLMs. In our
framework, buyers make purchasing decisions using data valuation and sellers
set prices to maximize their profits based on the anticipated buyer purchases.
We theoretically show that pricing derived from our framework is tightly linked
to data valuation and buyers' budgets, and is optimal for both buyers and
sellers.
Through market simulations using current LLMs and datasets (math problems,
medical diagnosis, and physical reasoning), we show that our framework is
fairshare for buyers, ensuring that their purchased data reflects its
model-training value and yields higher LLM task performance per dollar spent on
data, and fairshare for sellers, ensuring that they sell their data at optimal
prices. Our framework lays the foundation for future research on equitable and
sustainable data markets for large-scale AI.
|
2502.00199
|
Optimal Construction of Data Injection Attacks on Process Systems
|
eess.SY cs.SY
|
An information-theoretic framework for constructing data injection attacks on
process systems, from the attacker's standpoint, is studied. The attack
construction aims to perturb the stationary distributions of the process
variables while simultaneously remaining stealthy. The problem is formulated as
designing a multivariate Gaussian distribution to maximize the Kullback-Leibler
divergence between the stationary distributions of states and state estimates
under attacks and without attacks, while minimizing that between the
distributions of sensor measurements. When the attacker has limited access to
sensors, sparse attacks are proposed by incorporating a sparsity constraint on
the attack. We conduct a theoretical analysis on the convexity of the attack
construction problem and present a greedy algorithm, which allows for a
systematic quantification of the vulnerability of process-system measurements. We
numerically evaluate the performance of proposed constructions on a two-reactor
process.
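The KL-divergence objective at the core of this construction has a closed form
for multivariate Gaussians. The following numpy sketch is illustrative only
(the function name and toy numbers are ours, and the actual optimization over
attack covariances is not shown):

```python
import numpy as np

def kl_gaussian(mu0, S0, mu1, S1):
    """KL divergence KL(N(mu0, S0) || N(mu1, S1)) between two
    multivariate Gaussians -- the quantity the attack construction
    maximizes (between attacked and nominal state distributions) and
    minimizes (between measurement distributions). Closed form:
    0.5 * [tr(S1^-1 S0) + (mu1-mu0)^T S1^-1 (mu1-mu0) - k
           + ln(det S1 / det S0)]
    """
    k = len(mu0)
    S1_inv = np.linalg.inv(S1)
    diff = mu1 - mu0
    _, ld0 = np.linalg.slogdet(S0)
    _, ld1 = np.linalg.slogdet(S1)
    return 0.5 * (np.trace(S1_inv @ S0) + diff @ S1_inv @ diff - k + ld1 - ld0)

# nominal vs. attacked stationary distributions (toy numbers, not from the paper)
mu_nom, S_nom = np.zeros(2), np.eye(2)
mu_att, S_att = np.array([0.5, -0.3]), 1.5 * np.eye(2)
d = kl_gaussian(mu_att, S_att, mu_nom, S_nom)
assert d > 0  # KL is nonnegative, and zero iff the distributions coincide
```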
|
2502.00201
|
Year-over-Year Developments in Financial Fraud Detection via Deep
Learning: A Systematic Literature Review
|
cs.LG cs.AI q-fin.ST
|
This paper systematically reviews advancements in deep learning (DL)
techniques for financial fraud detection, a critical issue in the financial
sector. Using the Kitchenham systematic literature review approach, 57 studies
published between 2019 and 2024 were analyzed. The review highlights the
effectiveness of various deep learning models such as Convolutional Neural
Networks, Long Short-Term Memory, and transformers across domains such as
credit card transactions, insurance claims, and financial statement audits.
Performance metrics such as precision, recall, F1-score, and AUC-ROC were
evaluated. Key themes explored include the impact of data privacy frameworks
and advancements in feature engineering and data preprocessing. The study
emphasizes challenges such as imbalanced datasets, model interpretability, and
ethical considerations, alongside opportunities for automation and
privacy-preserving techniques such as blockchain integration and Principal
Component Analysis. By examining trends over the past five years, this review
identifies critical gaps and promising directions for advancing DL applications
in financial fraud detection, offering actionable insights for researchers and
practitioners.
|
2502.00203
|
Reward-aware Preference Optimization: A Unified Mathematical Framework
for Model Alignment
|
cs.LG cs.CL
|
The rapid development of large language model (LLM) alignment algorithms has
resulted in a complex and fragmented landscape, with limited clarity on the
effectiveness of different methods and their inter-connections. This paper
introduces Reward-Aware Preference Optimization (RPO), a mathematical framework
that unifies popular preference optimization techniques in LLM alignment,
including DPO, IPO, SimPO, and REINFORCE (LOO), among others. RPO provides a
structured approach to disentangle and systematically study the impact of
various design choices, such as the optimization objective, the number of
responses per prompt, and the use of implicit versus explicit reward models, on
LLM preference optimization. We additionally propose a new experimental setup
that enables the clean and direct ablation of such design choices. Through an
extensive series of ablation studies within the RPO framework, we gain insights
into the critical factors shaping model alignment, offering practical guidance
on the most effective strategies for improving LLM alignment.
|
2502.00204
|
Nearly-Optimal Bandit Learning in Stackelberg Games with Side
Information
|
cs.LG cs.GT
|
We study the problem of online learning in Stackelberg games with side
information between a leader and a sequence of followers. In every round the
leader observes contextual information and commits to a mixed strategy, after
which the follower best-responds. We provide learning algorithms for the leader
which achieve $O(T^{1/2})$ regret under bandit feedback, improving on the
previously best-known rate of $O(T^{2/3})$. Our algorithms rely on a
reduction to linear contextual bandits in the utility space: In each round, a
linear contextual bandit algorithm recommends a utility vector, which our
algorithm inverts to determine the leader's mixed strategy. We extend our
algorithms to the setting in which the leader's utility function is unknown,
and also apply them to the problems of bidding in second-price auctions with side
information and online Bayesian persuasion with public and private states.
Finally, we observe that our algorithms empirically outperform previous methods
in numerical simulations.
|
2502.00205
|
EcoWeedNet: A Lightweight and Automated Weed Detection Method for
Sustainable Next-Generation Agricultural Consumer Electronics
|
cs.CV cs.AI
|
Sustainable agriculture plays a crucial role in ensuring world food security
for consumers. A critical challenge faced by sustainable precision agriculture
is weed growth, as weeds share essential resources with the crops, such as
water, soil nutrients, and sunlight, which notably affect crop yields. The
traditional methods employed to combat weeds include the usage of chemical
herbicides and manual weed removal methods. However, these could damage the
environment and pose health hazards. The adoption of automated computer vision
technologies and ground agricultural consumer electronic vehicles in precision
agriculture offers sustainable, low-carbon solutions. However, prior works
suffer from issues such as low accuracy and precision and high computational
expense. This work proposes EcoWeedNet, a novel model with enhanced weed
detection performance without adding significant computational complexity,
aligning with the goals of low-carbon agricultural practices. Additionally, our
model is lightweight and optimal for deployment on ground-based consumer
electronic agricultural vehicles and robots. The effectiveness of the proposed
model is demonstrated through comprehensive experiments on the CottonWeedDet12
benchmark dataset reflecting real-world scenarios. EcoWeedNet achieves
performance close to that of large models yet with far fewer parameters
(approximately 4.21% of the parameters and 6.59% of the GFLOPs of YOLOv4). This
work contributes effectively to the development of automated weed detection
methods for next-generation agricultural consumer electronics featuring lower
energy consumption and lower carbon footprint. This work paves the way forward
for sustainable agricultural consumer technologies.
|
2502.00206
|
BICompFL: Stochastic Federated Learning with Bi-Directional Compression
|
cs.LG cs.DC cs.IT math.IT stat.ML
|
We address the prominent communication bottleneck in federated learning (FL).
We specifically consider stochastic FL, in which models or compressed model
updates are specified by distributions rather than deterministic parameters.
Stochastic FL offers a principled approach to compression, and has been shown
to reduce the communication load under perfect downlink transmission from the
federator to the clients. However, in practice, both the uplink and downlink
communications are constrained. We show that bi-directional compression for
stochastic FL has inherent challenges, which we address by introducing
BICompFL. Our BICompFL is experimentally shown to reduce the communication cost
by an order of magnitude compared to multiple benchmarks, while maintaining
state-of-the-art accuracies. Theoretically, we study the communication cost of
BICompFL through a new analysis of an importance-sampling based technique,
which exposes the interplay between uplink and downlink communication costs.
|
2502.00208
|
Discovering Dataset Nature through Algorithmic Clustering based on
String Compression
|
cs.IT math.IT
|
Text datasets can be represented using models that do not preserve text
structure, or using models that preserve text structure. Our hypothesis is that
depending on the dataset nature, there can be advantages using a model that
preserves text structure over one that does not, and vice versa. The key is to
determine the best way of representing a particular dataset, based on the
dataset itself. In this work, we propose to investigate this problem by
combining text distortion and algorithmic clustering based on string
compression. Specifically, a distortion technique previously developed by the
authors is applied to destroy text structure progressively. Following this, a
clustering algorithm based on string compression is used to analyze the effects
of the distortion on the information contained in the texts. Several
experiments are carried out on text datasets and artificially-generated
datasets. The results show that in strongly structural datasets the clustering
results worsen as text structure is progressively destroyed. Moreover, they show
that using a compressor which enables the choice of the size of the
left-context symbols helps to determine the nature of the datasets. Finally,
the results are contrasted with a method based on multidimensional projections
and analogous conclusions are obtained.
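The compression-based clustering above rests on distances such as the
Normalized Compression Distance, NCD(x, y) = (C(xy) - min(C(x), C(y))) /
max(C(x), C(y)). A minimal illustration using zlib as a stand-in compressor
(the authors' compressor, which allows choosing the left-context size, is not
used here, and the sample texts are ours):

```python
import zlib

def csize(data: bytes) -> int:
    """Compressed size under zlib -- a stand-in for the paper's
    compressor (any real-world compressor can be plugged in)."""
    return len(zlib.compress(data, 9))

def ncd(x: bytes, y: bytes) -> float:
    """Normalized Compression Distance:
    NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y)).
    Values near 0 indicate similar objects; values near 1, dissimilar."""
    cx, cy, cxy = csize(x), csize(y), csize(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)

# structurally similar texts vs. unrelated byte content
doc_a = b"the quick brown fox jumps over the lazy dog " * 20
doc_b = b"the quick brown fox jumps over the lazy cat " * 20
noise = bytes(range(256)) * 8

assert ncd(doc_a, doc_b) < ncd(doc_a, noise)  # similar texts are closer
```

A hierarchical clustering algorithm can then be run directly on the pairwise
NCD matrix, which is how compression-based clustering typically proceeds.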
|
2502.00212
|
STP: Self-play LLM Theorem Provers with Iterative Conjecturing and
Proving
|
cs.LG cs.AI cs.LO
|
A fundamental challenge in formal theorem proving by LLMs is the lack of
high-quality training data. Although reinforcement learning or expert iteration
partially mitigates this issue by alternating between the LLM generating proofs
and finetuning it on the correctly generated ones, performance quickly plateaus
due
to the scarcity of correct proofs (sparse rewards). To keep improving the
models with limited data, we draw inspiration from mathematicians, who
continuously develop new results, partly by proposing novel conjectures or
exercises (which are often variants of known results) and attempting to solve
them. We design the Self-play Theorem Prover (STP) that simultaneously takes on
two roles, conjecturer and prover, each providing training signals to the
other. The conjecturer is trained iteratively on previously generated
conjectures that are barely provable by the current prover, which incentivizes
it to generate increasingly challenging conjectures over time. The prover
attempts to prove the conjectures with standard expert iteration. We evaluate
STP with both Lean and Isabelle formal verifiers. With 19.8 billion tokens
generated during the training in Lean, STP proves 26.3% of the statements in
the LeanWorkbook dataset, doubling the previous best result of 13.2% achieved
through expert iteration. The final model achieves state-of-the-art performance
among whole-proof generation methods on miniF2F-test (61.7%, pass@3200),
Proofnet-test (23.1%, pass@3200) and PutnamBench (8/644, pass@3200).
|
2502.00213
|
Understanding Why Adam Outperforms SGD: Gradient Heterogeneity in
Transformers
|
cs.LG cs.AI cs.NE
|
Transformer models are challenging to optimize with SGD and typically require
adaptive optimizers such as Adam. However, the reasons behind the superior
performance of Adam over SGD remain unclear. In this study, we investigate the
optimization of transformer models by focusing on \emph{gradient
heterogeneity}, defined as the disparity in gradient norms among parameters.
Our analysis shows that gradient heterogeneity hinders gradient-based
optimization, including SGD, while sign-based optimization, a simplified
variant of Adam, is less affected. We further examine gradient heterogeneity in
transformer models and show that it is influenced by the placement of layer
normalization. Additionally, we show that the momentum term in sign-based
optimization is important for preventing the excessive growth of linear-head
parameters in tasks with many classes. Experimental results from fine-tuning
transformer models in both NLP and vision domains validate our theoretical
analyses. This study provides insights into the optimization challenges of
transformer models and offers guidance for designing future optimization
algorithms. Code is available at
\url{https://github.com/tom4649/gradient-heterogeneity}.
|
2502.00215
|
Impulsive Relative Motion Control with Continuous-Time Constraint
Satisfaction for Cislunar Space Missions
|
eess.SY cs.SY
|
Recent investments in cislunar applications open new frontiers for space
missions within highly nonlinear dynamical regimes. In this paper, we propose a
method based on Sequential Convex Programming (SCP) to loiter around a given
target with impulsive actuation while satisfying path constraints continuously
over the finite time horizon, i.e., independently of the number of nodes into
which the domain is discretized. Location, timing, magnitude, and direction of a
fixed number of impulses are optimized in a model predictive framework,
exploiting the exact nonlinear dynamics of non-stationary orbital regimes. The
proposed approach is validated on a relative orbiting problem with respect to a
selenocentric Near Rectilinear Halo Orbit.
|
2502.00217
|
Fantastic Multi-Task Gradient Updates and How to Find Them In a Cone
|
cs.LG cs.AI cs.CV
|
Balancing competing objectives remains a fundamental challenge in multi-task
learning (MTL), primarily due to conflicting gradients across individual tasks.
A common solution relies on computing a dynamic gradient update vector that
balances competing tasks as optimization progresses. Building on this idea, we
propose ConicGrad, a principled, scalable, and robust MTL approach formulated
as a constrained optimization problem. Our method introduces an angular
constraint to dynamically regulate gradient update directions, confining them
within a cone centered on the reference gradient of the overall objective. By
balancing task-specific gradients without over-constraining their direction or
magnitude, ConicGrad effectively resolves inter-task gradient conflicts.
Moreover, our framework ensures computational efficiency and scalability to
high-dimensional parameter spaces. We conduct extensive experiments on standard
supervised learning and reinforcement learning MTL benchmarks, and demonstrate
that ConicGrad achieves state-of-the-art performance across diverse tasks.
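The angular constraint described above can be pictured as keeping the update
direction inside a cone around a reference gradient. The numpy sketch below is
our illustrative simplification (a direct rotation onto the cone boundary),
not the paper's constrained-optimization formulation; the half-angle and toy
gradients are arbitrary choices:

```python
import numpy as np

def confine_to_cone(d, g_ref, theta_deg=25.0):
    """Keep the update direction d inside a cone of half-angle theta
    around the reference gradient g_ref. If d already lies inside the
    cone it is returned unchanged; otherwise it is rotated onto the
    cone boundary with its magnitude preserved (illustrative only)."""
    u = g_ref / np.linalg.norm(g_ref)              # cone axis
    theta = np.deg2rad(theta_deg)
    cos_angle = (d @ u) / np.linalg.norm(d)
    if cos_angle >= np.cos(theta):                 # already inside the cone
        return d
    d_perp = d - (d @ u) * u                       # component normal to the axis
    w = np.cos(theta) * u + np.sin(theta) * d_perp / np.linalg.norm(d_perp)
    return np.linalg.norm(d) * w                   # boundary vector, same magnitude

# two conflicting task gradients; the reference is their average
g1, g2 = np.array([1.0, 0.2]), np.array([-0.6, 1.0])
g_ref = 0.5 * (g1 + g2)
d_new = confine_to_cone(g1, g_ref)
cos_new = d_new @ g_ref / (np.linalg.norm(d_new) * np.linalg.norm(g_ref))
assert cos_new >= np.cos(np.deg2rad(25.0)) - 1e-9  # within the cone
```

Preserving the magnitude while regulating only the direction mirrors the
abstract's point about not over-constraining task-specific gradients.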
|
2502.00219
|
Team Size and Its Negative Impact on the Disruption Index
|
cs.SI
|
As science transitions from the age of lone geniuses to an era of
collaborative teams, the question of whether large teams can sustain the
creativity of individuals and continue driving innovation has become
increasingly important. Our previous research first revealed a negative
relationship between team size and the Disruption Index-a network-based metric
of innovation-by analyzing 65 million projects across papers, patents, and
software over half a century. This work has sparked lively debates within the
scientific community about the robustness of the Disruption Index in capturing
the impact of team size on innovation. Here, we present additional evidence
that the negative link between team size and disruption holds, even when
accounting for factors such as reference length, citation impact, and
historical time. We further show how a narrow 5-year window for measuring
disruption can misrepresent this relationship as positive, underestimating the
long-term disruptive potential of small teams. Like "sleeping beauties," small
teams need a decade or more to see their transformative contributions to
science.
|
2502.00220
|
Algorithmic Clustering based on String Compression to Extract P300
Structure in EEG Signals
|
cs.LG cs.IT eess.SP math.IT
|
P300 is an Event-Related Potential widely used in Brain-Computer Interfaces,
but its detection is challenging due to inter-subject and temporal variability.
This work introduces a clustering methodology based on Normalized Compression
Distance (NCD) to extract the P300 structure, ensuring robustness against
variability. We propose a novel signal-to-ASCII transformation to generate
compression-friendly objects, which are then clustered using a hierarchical
tree-based method and a multidimensional projection approach. Experimental
results on two datasets demonstrate the method's ability to reveal relevant
P300 structures, showing clustering performance comparable to state-of-the-art
approaches. Furthermore, analysis at the electrode level suggests that the
method could assist in electrode selection for P300 detection. This
compression-driven clustering methodology offers a complementary tool for EEG
analysis and P300 identification.
|
2502.00221
|
Social Robots as Social Proxies for Fostering Connection and Empathy
Towards Humanity
|
cs.HC cs.RO
|
Despite living in an increasingly connected world, social isolation is a
prevalent issue today. While social robots have been explored as tools to
enhance social connection through companionship, their potential as
asynchronous social platforms for fostering connection towards humanity has
received less attention. In this work, we introduce the design of a social
support companion that facilitates the exchange of emotionally relevant stories
and scaffolds reflection to enhance feelings of connection via five design
dimensions. We investigate how social robots can serve as "social proxies"
facilitating human stories, passing stories from other human narrators to the
user. To this end, we conduct a real-world deployment of 40 robot stations in
users' homes over the course of two weeks. Through thematic analysis of user
interviews, we find that social proxy robots can foster connection towards
other people's experiences via mechanisms such as identifying connections
across stories or offering diverse perspectives. We present design guidelines
from our study insights on the use of social robot systems that serve as social
platforms to enhance human empathy and connection.
|
2502.00222
|
The Free Termination Property of Queries Over Time
|
cs.DB cs.DC cs.PL
|
Building on prior work on distributed databases and the CALM Theorem, we
define and study the question of free termination: in the absence of
distributed coordination, what query properties allow nodes in a distributed
(database) system to unilaterally terminate execution even though they may
receive additional data or messages in the future? This completeness question
is complementary to the soundness questions studied in the CALM literature. We
also develop a new model based on semiautomata that allows us to bridge from
the relational transducer model of the CALM papers to algebraic models that are
popular among software engineers (e.g. CRDTs) and of increasing interest to
database theory for datalog extensions and incremental view maintenance.
|
2502.00225
|
Should You Use Your Large Language Model to Explore or Exploit?
|
cs.LG cs.AI cs.CL
|
We evaluate the ability of the current generation of large language models
(LLMs) to help a decision-making agent facing an exploration-exploitation
tradeoff. We use LLMs to explore and exploit in silos in various (contextual)
bandit tasks. We find that while the current LLMs often struggle to exploit,
in-context mitigations may be used to substantially improve performance for
small-scale tasks. However, even then, LLMs perform worse than a simple linear
regression. On the other hand, we find that LLMs do help at exploring large
action spaces with inherent semantics, by suggesting suitable candidates to
explore.
|
2502.00226
|
HackerRank-ASTRA: Evaluating Correctness & Consistency of Large Language
Models on cross-domain multi-file project problems
|
cs.LG cs.SE
|
Evaluating the real-world applicability of large language models (LLMs)
provides valuable insights for their development and use in software
development tasks. Existing benchmarks often focus on standalone coding
problems or specific libraries, overlooking multi-file, project-based scenarios
and lacking a rigorous evaluation of consistency. The HackerRank-ASTRA
Benchmark introduces project-based coding problems that mirror real-world
scenarios. It evaluates model consistency through 32 runs (k = 32) and median
standard deviation while incorporating taxonomy-level analysis to assess
sub-skill capabilities. Initial evaluations on 65 problems show that the top
three models -- o1, o1-preview, and Claude-3.5-Sonnet-1022 -- achieved
comparable average scores of 75%, with no statistically significant differences
in performance. Notably, Claude-3.5-Sonnet-1022 demonstrated the highest
consistency across problems, with low variability (SD = 0.0497), which was
statistically significant compared to other models, highlighting its
reliability for real-world software development tasks.
|
2502.00227
|
AK-SLRL: Adaptive Krylov Subspace Exploration Using Single-Life
Reinforcement Learning for Sparse Linear System
|
cs.CE
|
This paper presents a single-life reinforcement learning (SLRL) approach to
adaptively select the dimension of the Krylov subspace during the generalized
minimal residual (GMRES) iteration. GMRES is an iterative algorithm for solving
large and sparse linear systems of equations in the form of \(Ax = b\) which
are mainly derived from partial differential equations (PDEs). The proposed
framework uses RL to adjust the Krylov subspace dimension (m) in the GMRES(m)
algorithm. This research demonstrates that altering the dimension of the Krylov
subspace in an online setup using SLRL can accelerate the convergence of the
GMRES algorithm by more than an order of magnitude. A comparison of different
matrix sizes and sparsity levels is performed to demonstrate the effectiveness
of adaptive Krylov subspace exploration using single-life RL (AK-SLRL). We
compare AK-SLRL with constant-restart GMRES by applying the highest restart
value used in AK-SLRL to the GMRES method. The results show that using an
adjustable restart parameter with single-life soft-actor critic (SLSAC) and an
experience replay buffer sized to half the matrix dimension converges
significantly faster than the constant restart GMRES with higher values. Higher
values of the restart parameter are equivalent to a higher number of Arnoldi
iterations to construct an orthonormal basis for the Krylov subspace $ K_m(A,
r_0) $. This process includes constructing $m$ orthonormal vectors and updating
the Hessenberg matrix $H$. Therefore, lower values of $m$ result in reduced
computation needed in GMRES minimization to solve the least-squares problem in
the smaller Hessenberg matrix. The robustness of the result is validated
through a wide range of matrix dimensions and sparsity levels. This paper
contributes to the growing line of work combining RL with numerical solvers to
accelerate scientific computing.
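The cost trade-off behind the restart parameter can be seen in the Arnoldi
process that GMRES(m) runs each restart cycle. A minimal numpy sketch (ours,
not the authors' implementation; the RL component is omitted):

```python
import numpy as np

def arnoldi(A, r0, m):
    """Build an orthonormal basis V of the Krylov subspace
    K_m(A, r0) = span{r0, A r0, ..., A^{m-1} r0} together with the
    (m+1) x m upper-Hessenberg matrix H satisfying A V_m = V_{m+1} H.
    GMRES(m) then solves a small least-squares problem in H, so larger m
    means more Arnoldi steps and a larger Hessenberg system per restart."""
    n = len(r0)
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    V[:, 0] = r0 / np.linalg.norm(r0)
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):          # modified Gram-Schmidt orthogonalization
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        V[:, j + 1] = w / H[j + 1, j]
    return V, H

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 50))       # toy dense stand-in for a sparse system
r0 = rng.standard_normal(50)
m = 10
V, H = arnoldi(A, r0, m)
assert np.allclose(A @ V[:, :m], V @ H)                 # Arnoldi relation
assert np.allclose(V.T @ V, np.eye(m + 1), atol=1e-8)   # orthonormal basis
```

The quadratic growth of orthogonalization work in m is what makes an adaptively
chosen restart parameter attractive.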
|
2502.00232
|
A Hybrid Random Forest and CNN Framework for Tile-Wise Oil-Water
Classification in Hyperspectral Images
|
cs.CV cs.AI
|
A novel hybrid Random Forest and Convolutional Neural Network (CNN) framework
is presented for oil-water classification in hyperspectral images (HSI). To
address the challenge of preserving spatial context, the images were divided
into smaller, non-overlapping tiles, which served as the basis for training,
validation, and testing. Random Forest demonstrated strong performance in
pixel-wise classification, outperforming models such as XGBoost,
Attention-Based U-Net, and HybridSN. However, Random Forest loses spatial
context, limiting its ability to fully exploit the spatial relationships in
hyperspectral data. To improve performance, a CNN was trained on the
probability maps generated by the Random Forest, leveraging the CNN's capacity
to incorporate spatial context. The hybrid approach achieved a 7.6% improvement
in recall (to 0.85), a 2.4% improvement in F1 score (to 0.84), and a 0.54%
improvement in AUC (to 0.99) over the baseline. These results highlight
the effectiveness of combining probabilistic outputs with spatial feature
learning for context-aware analysis of hyperspectral images.
|
2502.00233
|
Vision-Based Fuzzy Control System for Smart Walkers: Enhancing Usability
for Stroke Survivors with Unilateral Upper Limb Impairments
|
cs.RO cs.SY eess.SY
|
Mobility impairments, particularly those caused by stroke-induced
hemiparesis, significantly impact independence and quality of life. Current
smart walker controllers operate by using input forces from the user to control
linear motion and input torques to dictate rotational movement; however,
because they predominantly rely on user-applied torque exerted on the device
handle as an indicator of user intent to turn, they fail to adequately
accommodate users with unilateral upper limb impairments. This leads to
increased physical strain and cognitive load. This paper introduces a novel
smart walker equipped with a fuzzy control algorithm that leverages shoulder
abduction angles to intuitively interpret user intentions using just one
functional hand. By integrating a force sensor and stereo camera, the system
enhances walker responsiveness and usability. Experimental evaluations with
five participants showed that the fuzzy controller outperformed the traditional
admittance controller, reducing the wrist torque needed to operate the walker
with the right hand by 12.65% for left turns, 80.36% for straight paths, and
81.16% for right turns. Additionally, average user comfort ratings on a Likert
scale increased from 1 to 4. Results confirmed a strong correlation between
shoulder abduction angles and directional intent, with users reporting
decreased effort and enhanced ease of use. This study contributes to assistive
robotics by providing an adaptable control mechanism for smart walkers,
suggesting a pathway towards enhancing mobility and independence for
individuals with mobility impairments.
|
2502.00234
|
Fast Solvers for Discrete Diffusion Models: Theory and Applications of
High-Order Algorithms
|
cs.LG cs.CV cs.NA math.NA physics.comp-ph stat.ML
|
Discrete diffusion models have emerged as a powerful generative modeling
framework for discrete data with successful applications spanning from text
generation to image synthesis. However, their deployment faces challenges due
to the high dimensionality of the state space, necessitating the development of
efficient inference algorithms. Current inference approaches mainly fall into
two categories: exact simulation and approximate methods such as
$\tau$-leaping. While exact methods suffer from unpredictable inference time
and redundant function evaluations, $\tau$-leaping is limited by its
first-order accuracy. In this work, we advance the latter category by
developing the first extension of high-order numerical inference schemes to
discrete diffusion models, enabling larger step sizes while reducing error. We
rigorously analyze the proposed schemes and establish the second-order accuracy
of the $\theta$-trapezoidal method in KL divergence. Empirical evaluations on
GPT-2 level text and ImageNet-level image generation tasks demonstrate that our
method achieves superior sample quality compared to existing approaches under
equivalent computational constraints.
|
2502.00240
|
Learning Difference-of-Convex Regularizers for Inverse Problems: A
Flexible Framework with Theoretical Guarantees
|
stat.ML cs.LG eess.IV math.OC
|
Learning effective regularization is crucial for solving ill-posed inverse
problems, which arise in a wide range of scientific and engineering
applications. While data-driven methods that parameterize regularizers using
deep neural networks have demonstrated strong empirical performance, they often
result in highly nonconvex formulations that lack theoretical guarantees.
Recent work has shown that incorporating structured nonconvexity into neural
network-based regularizers, such as weak convexity, can strike a balance
between empirical performance and theoretical tractability. In this paper, we
demonstrate that a broader class of nonconvex functions, difference-of-convex
(DC) functions, can yield improved empirical performance while retaining strong
convergence guarantees. The DC structure enables the use of well-established
optimization algorithms, such as the Difference-of-Convex Algorithm (DCA) and a
Proximal Subgradient Method (PSM), which extend beyond standard gradient
descent. Furthermore, we provide theoretical insights into the conditions under
which optimal regularizers can be expressed as DC functions. Extensive
experiments on computed tomography (CT) reconstruction tasks show that our
approach achieves strong performance across sparse and limited-view settings,
consistently outperforming other weakly supervised learned regularizers. Our
code is available at \url{https://github.com/YasminZhang/ADCR}.
|
2502.00241
|
Mordal: Automated Pretrained Model Selection for Vision Language Models
|
cs.LG cs.AI cs.CL cs.CV
|
Incorporating multiple modalities into large language models (LLMs) is a
powerful way to enhance their understanding of non-textual data, enabling them
to perform multimodal tasks. Vision language models (VLMs) form the fastest
growing category of multimodal models because of their many practical use
cases, including in healthcare, robotics, and accessibility. Unfortunately,
even though different VLMs in the literature demonstrate impressive visual
capabilities in different benchmarks, they are handcrafted by human experts;
there is no automated framework to create task-specific multimodal models.
We introduce Mordal, an automated multimodal model search framework that
efficiently finds the best VLM for a user-defined task without manual
intervention. Mordal achieves this both by reducing the number of candidates to
consider during the search process and by minimizing the time required to
evaluate each remaining candidate. Our evaluation shows that Mordal can find
the best VLM for a given problem using up to $8.9\times$--$11.6\times$ fewer
GPU hours than grid search. In the process of our evaluation, we have also
discovered new VLMs that outperform their state-of-the-art counterparts.
|
2502.00242
|
Digital-Twin assisted Network Energy Optimization during Low Traffic
Hours
|
cs.NI cs.SY eess.SY
|
As wireless network technology advances towards the sixth generation (6G),
increasing network energy consumption has become a critical concern due to the
growing demand for diverse services, radio deployments at various frequencies,
larger bandwidths, and more antennas. Network operators must manage energy
usage not only to reduce operational cost and improve revenue but also to
minimize environmental impact by reducing the carbon footprint. The 3rd
Generation Partnership Project (3GPP) has introduced several network energy
savings (NES) features. However, the implementation details and system-level
aspects of these features have not been thoroughly investigated. In this paper,
we explore system-level resource optimization for network energy savings in
low-traffic scenarios. We introduce multiple NES optimization formulations and
strategies, and further analyze their performance using a detailed network
digital twin. Our results demonstrate promising NES gains of up to 44%.
Additionally, we provide practical considerations for implementing the proposed
schemes and examine their impacts on user equipment (UE) operation.
|
2502.00245
|
Contrastive Private Data Synthesis via Weighted Multi-PLM Fusion
|
cs.LG
|
Substantial quantity and high quality are the golden rules for building a good
training dataset, with sample privacy protection being equally important. Generating
synthetic samples that resemble high-quality private data while ensuring
Differential Privacy (DP), a formal privacy guarantee, promises scalability and
practicality. However, existing methods that rely on pre-trained models for
data synthesis often struggle in data-deficient scenarios, suffering from
limited sample size, inevitable generation noise, and pre-trained model bias.
To address
these challenges, we propose a novel contrAstive private data Synthesis via
Weighted multiple Pre-trained language models (PLMs) framework, named WASP.
WASP utilizes limited private samples for more accurate private data
distribution estimation via a Top-Q voting mechanism, and leverages low-quality
synthetic samples for contrastive generation via collaboration among
dynamically weighted multiple pre-trained models. Extensive experiments on 6
well-developed datasets with 6 open-source and 3 closed-source PLMs demonstrate
the superiority of WASP in improving model performance over diverse downstream
tasks. Code is available at https://anonymous.4open.science/r/WASP.
|
2502.00246
|
Context-Preserving Tensorial Reconfiguration in Large Language Model
Training
|
cs.CL
|
Handling long-range dependencies in neural architectures has remained a
persistent challenge due to computational limitations and inefficient
contextual retention mechanisms. Tensorial operations have provided a
foundation for restructuring model representations, yet conventional
architectures have struggled to incorporate such techniques without introducing
excessive complexity. A novel approach, Context-Preserving Tensorial
Reconfiguration (CPTR), enables dynamic reorganization of weight tensors
through structured factorization and adaptive contraction, allowing for
enhanced contextual integration without substantial computational overhead.
Empirical evaluations demonstrate that CPTR improves coherence retention across
extended sequences, leading to measurable reductions in perplexity and improved
recall accuracy for long-context tasks. Performance comparisons reveal that
CPTR-enhanced models exhibit greater computational efficiency and reduced
memory consumption while maintaining competitive language generation fluency
and accuracy. Gradient stability metrics further validate the improved training
efficiency, revealing more controlled variance in weight updates. Comparative
studies across baseline and CPTR-enhanced models confirm that tensorial
reconfiguration contributes to more stable and computationally efficient
language modeling. The findings support the potential of CPTR in refining
contemporary neural architectures for tasks requiring long-range contextual
understanding and efficient memory utilization.
|
2502.00248
|
Provably-Stable Neural Network-Based Control of Nonlinear Systems
|
math.OC cs.LG cs.SY eess.SY
|
In recent years, Neural Networks (NNs) have been employed to control
nonlinear systems due to their potential capability in dealing with situations
that might be difficult for conventional nonlinear control schemes. However, to
the best of our knowledge, the current literature on NN-based control lacks
theoretical guarantees for stability and tracking performance. This precludes
the application of NN-based control schemes to systems where stringent
stability and performance guarantees are required. To address this gap, this
paper proposes a systematic and comprehensive methodology to design
provably-stable NN-based control schemes for affine nonlinear systems. Rigorous
analysis is provided to show that the proposed approach guarantees stability of
the closed-loop system with the NN in the loop. Also, it is shown that the
resulting NN-based control scheme ensures that system states asymptotically
converge to a neighborhood around the desired equilibrium point, with a tunable
proximity threshold. The proposed methodology is validated and evaluated via
simulation studies on an inverted pendulum and experimental studies on a Parrot
Bebop 2 drone.
|
2502.00250
|
Transformer-Based Vector Font Classification Using Different Font
Formats: TrueType versus PostScript
|
cs.CV
|
Modern fonts adopt vector-based formats, which ensure scalability without
loss of quality. While many deep learning studies on fonts focus on bitmap
formats, deep learning for vector fonts remains underexplored. In studies
involving deep learning for vector fonts, the choice of font representation has
often been made conventionally. However, the font representation format is one
of the factors that can influence the computational performance of machine
learning models in font-related tasks. Here we show that font representations
based on PostScript outlines outperform those based on TrueType outlines in
Transformer-based vector font classification. TrueType outlines represent
character shapes as sequences of points and their associated flags, whereas
PostScript outlines represent them as sequences of commands. In previous
research, PostScript outlines have been predominantly used when fonts are
treated as part of vector graphics, while TrueType outlines are mainly employed
when focusing on fonts alone. Whether to use PostScript or TrueType outlines
has been mainly determined by file format specifications and precedent settings
in previous studies, rather than performance considerations. To date, few
studies have compared which outline format provides better embedding
representations. Our findings suggest that information aggregation is crucial
in Transformer-based deep learning for vector graphics, as in tokenization in
language models and patch division in bitmap-based image recognition models.
This insight provides valuable guidance for selecting outline formats in future
research on vector graphics.
|
2502.00253
|
Patch Triplet Similarity Purification for Guided Real-World Low-Dose CT
Image Denoising
|
eess.IV cs.CV
|
Image denoising of low-dose computed tomography (LDCT) is an important
problem for clinical diagnosis with reduced radiation exposure. Previous
methods are mostly trained with pairs of synthetic or misaligned LDCT and
normal-dose CT (NDCT) images. However, trained with synthetic noise or
misaligned LDCT/NDCT image pairs, the denoising networks would suffer from
blurry structure or motion artifacts. Since non-contrast CT (NCCT) images share
content characteristics with the corresponding NDCT images in a three-phase
scan, they can potentially provide useful information for real-world LDCT image
denoising. To exploit this aspect, in this paper, we propose to incorporate
clean NCCT images as useful guidance for the learning of real-world LDCT image
denoising networks. To alleviate the issue of spatial misalignment in training
data, we design a new Patch Triplet Similarity Purification (PTSP) strategy to
select highly similar patch (instead of image) triplets of LDCT, NDCT, and NCCT
images for network training. Furthermore, we modify two image denoising
transformers of SwinIR and HAT to accommodate the NCCT image guidance, by
replacing vanilla self-attention with cross-attention. On our collected
clinical dataset, the modified transformers trained with the data selected by
our PTSP strategy show better performance than 15 comparison methods on
real-world LDCT image denoising. Ablation studies validate the effectiveness of
our NCCT image guidance and PTSP strategy. We will publicly release our data
and code.
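The filtering idea behind PTSP can be sketched in a few lines. The snippet below is a toy illustration under assumed details (8x8 patches, normalized cross-correlation as the similarity measure, a fixed threshold); the paper's actual purification strategy and similarity measure may differ:

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two patches."""
    a = (a - a.mean()) / (a.std() + 1e-8)
    b = (b - b.mean()) / (b.std() + 1e-8)
    return float((a * b).mean())

def select_triplets(ldct, ndct, ncct, patch=8, thresh=0.9):
    """Toy patch-triplet purification: slide a window over the three
    images and keep only co-located patch triplets whose pairwise
    similarity exceeds `thresh`, discarding misaligned regions."""
    triplets = []
    H, W = ldct.shape
    for i in range(0, H - patch + 1, patch):
        for j in range(0, W - patch + 1, patch):
            p = (slice(i, i + patch), slice(j, j + patch))
            a, b, c = ldct[p], ndct[p], ncct[p]
            if min(ncc(a, b), ncc(b, c), ncc(a, c)) >= thresh:
                triplets.append((a, b, c))
    return triplets

rng = np.random.default_rng(0)
base = rng.standard_normal((32, 32))
ld = base + 0.05 * rng.standard_normal((32, 32))   # well aligned
nd = base.copy()
nc = base + 0.05 * rng.standard_normal((32, 32))
nc[16:, :] = rng.standard_normal((16, 32))          # bottom half misaligned
kept = select_triplets(ld, nd, nc)
print(f"kept {len(kept)} of 16 patch triplets")
```

On this synthetic example, only the aligned top-half triplets survive the purification step.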
|
2502.00258
|
ProxSparse: Regularized Learning of Semi-Structured Sparsity Masks for
Pretrained LLMs
|
cs.LG cs.CL
|
Large Language Models (LLMs) have demonstrated exceptional performance in
natural language processing tasks, yet their massive size makes serving them
inefficient and costly. Semi-structured pruning has emerged as an effective
method for model acceleration, but existing approaches are suboptimal because
they focus on local, layer-wise optimizations using heuristic rules, failing to
leverage global feedback. We present ProxSparse, a learning-based framework for
mask selection enabled by regularized optimization. ProxSparse transforms the
rigid, non-differentiable mask selection process into a smoother optimization
procedure, allowing gradual mask exploration with flexibility. ProxSparse does
not involve additional weight updates once the mask is determined. Our
extensive evaluations on 7 widely used models show that ProxSparse consistently
outperforms previously proposed semi-structured mask selection methods with
significant improvement, demonstrating the effectiveness of our learned
approach towards semi-structured pruning.
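For context, the semi-structured (2:4) sparsity pattern that such methods target keeps 2 of every 4 consecutive weights. The sketch below implements only the local magnitude-based heuristic baseline that ProxSparse is designed to improve upon, not ProxSparse's learned, regularized mask selection:

```python
import numpy as np

def mask_2_of_4(weights: np.ndarray) -> np.ndarray:
    """Magnitude-based 2:4 semi-structured mask: in every group of 4
    consecutive weights along the last axis, keep the 2 largest by |w|.
    This is the layer-local heuristic that learned mask-selection methods
    aim to improve on with global feedback."""
    w = weights.reshape(-1, 4)
    # Indices of the 2 smallest-magnitude entries in each group of 4.
    drop = np.argsort(np.abs(w), axis=1)[:, :2]
    mask = np.ones_like(w)
    np.put_along_axis(mask, drop, 0.0, axis=1)
    return mask.reshape(weights.shape)

W = np.array([[0.1, -2.0, 0.3, 1.5, -0.2, 0.05, 4.0, -1.0]])
M = mask_2_of_4(W)
print(M)          # each group of 4 keeps exactly 2 nonzeros
print(W * M)      # pruned weights
```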
|
2502.00262
|
INSIGHT: Enhancing Autonomous Driving Safety through Vision-Language
Models on Context-Aware Hazard Detection and Edge Case Evaluation
|
cs.CV cs.AI
|
Autonomous driving systems face significant challenges in handling
unpredictable edge-case scenarios, such as adversarial pedestrian movements,
dangerous vehicle maneuvers, and sudden environmental changes. Current
end-to-end driving models struggle with generalization to these rare events due
to limitations in traditional detection and prediction approaches. To address
this, we propose INSIGHT (Integration of Semantic and Visual Inputs for
Generalized Hazard Tracking), a hierarchical vision-language model (VLM)
framework designed to enhance hazard detection and edge-case evaluation. By
using multimodal data fusion, our approach integrates semantic and visual
representations, enabling precise interpretation of driving scenarios and
accurate forecasting of potential dangers. Through supervised fine-tuning of
VLMs, we optimize spatial hazard localization using attention-based mechanisms
and coordinate regression techniques. Experimental results on the BDD100K
dataset demonstrate a substantial improvement in the accuracy and directness of
hazard prediction over existing models, achieving a notable
increase in generalization performance. This advancement enhances the
robustness and safety of autonomous driving systems, ensuring improved
situational awareness and potential decision-making in complex real-world
scenarios.
|
2502.00264
|
Beyond the Permutation Symmetry of Transformers: The Role of Rotation
for Model Fusion
|
cs.LG cs.CV
|
Symmetry in the parameter space of deep neural networks (DNNs) has proven
beneficial for various deep learning applications. A well-known example is the
permutation symmetry in Multi-Layer Perceptrons (MLPs), where permuting the
rows of weight matrices in one layer and applying the inverse permutation to
adjacent layers yields a functionally equivalent model. While permutation
symmetry fully characterizes the equivalence set for MLPs, its discrete nature
limits its utility for transformers. In this paper, we introduce rotation
symmetry, a novel form of parameter space symmetry for transformers that
generalizes permutation symmetry by rotating parameter matrices in
self-attention layers. Unlike permutation symmetry, rotation symmetry operates
in a continuous domain, thereby significantly expanding the equivalence set for
transformers. Based on this property, we propose a theoretically optimal
parameter matching algorithm as a plug-and-play module to enhance model fusion.
We evaluate our approach using pre-trained transformers across diverse natural
language and vision tasks. Experimental results demonstrate that our rotation
symmetry-based matching algorithm substantially improves model fusion,
highlighting the potential of parameter space symmetry to facilitate model
fusion. Our code is available on
https://github.com/zhengzaiyi/RotationSymmetry.
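The key algebraic fact behind rotation symmetry can be checked numerically: right-multiplying the query and key projections by the same orthogonal matrix leaves the attention scores unchanged, since R R^T = I. A minimal sketch with a simplified single head (no scaling or softmax, which preserve the equivalence anyway):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 4, 8                       # tokens, head dimension

X = rng.standard_normal((n, d))
W_Q = rng.standard_normal((d, d))
W_K = rng.standard_normal((d, d))

# A random orthogonal matrix R (from the QR decomposition of a Gaussian).
R, _ = np.linalg.qr(rng.standard_normal((d, d)))

# Attention scores before and after rotating both projections by R.
scores = (X @ W_Q) @ (X @ W_K).T
scores_rot = (X @ W_Q @ R) @ (X @ W_K @ R).T

# Since R @ R.T = I, the scores (hence the softmax weights and the
# attention output) are preserved: a continuous family of equivalent models.
print("max |difference|:", np.abs(scores - scores_rot).max())
```

Because R ranges over a continuous group rather than a finite set of permutations, the equivalence set is much larger than in the MLP case, which is what the matching algorithm exploits.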
|
2502.00265
|
RADx Data Hub: A Cloud Platform for FAIR, Harmonized COVID-19 Data
|
cs.DB
|
The COVID-19 pandemic highlighted the urgent need for robust systems to
enable rapid data collection, integration, and analysis for public health
responses. Existing approaches often relied on disparate, non-interoperable
systems, creating bottlenecks in comprehensive analyses and timely
decision-making. To address these challenges, the U.S. National Institutes of
Health (NIH) launched the Rapid Acceleration of Diagnostics (RADx) initiative
in 2020, with the RADx Data Hub, a centralized repository for de-identified and
curated COVID-19 data, as its cornerstone. The RADx Data Hub hosts diverse
study data, including clinical data, testing results, smart sensor outputs,
self-reported symptoms, and information on social determinants of health. Built
on cloud infrastructure, the RADx Data Hub integrates metadata standards,
interoperable formats, and ontology-based tools to adhere to the FAIR
(Findable, Accessible, Interoperable, Reusable) principles for data sharing.
Initially developed for COVID-19 research, its architecture and processes are
adaptable to other scientific disciplines. This paper provides an overview of
the data hosted by the RADx Data Hub and describes the platform's capabilities
and architecture.
|
2502.00266
|
MCM: Multi-layer Concept Map for Efficient Concept Learning from Masked
Images
|
cs.CV cs.LG
|
Masking strategies commonly employed in natural language processing are still
underexplored in vision tasks such as concept learning, where conventional
methods typically rely on full images. However, using masked images diversifies
perceptual inputs, potentially offering significant advantages in concept
learning with large-scale Transformer models. To this end, we propose
Multi-layer Concept Map (MCM), the first work to devise an efficient concept
learning method based on masked images. In particular, we introduce an
asymmetric concept learning architecture by establishing correlations between
different encoder and decoder layers, updating concept tokens using backward
gradients from reconstruction tasks. The learned concept tokens at various
levels of granularity help either reconstruct the masked image patches by
filling in gaps or guide the reconstruction results in a direction that
reflects specific concepts. Moreover, we present both quantitative and
qualitative results across a wide range of metrics, demonstrating that MCM
significantly reduces computational costs by training on fewer than 75% of the
total image patches while enhancing concept prediction performance.
Additionally, editing specific concept tokens in the latent space enables
targeted image generation from masked images, aligning both the visible
contextual patches and the provided concepts. By further adjusting the
test-time mask ratio, we can produce a range of reconstructions that blend the
visible patches with the provided concepts, proportional to the chosen ratios.
|
2502.00270
|
DUET: Optimizing Training Data Mixtures via Feedback from Unseen
Evaluation Tasks
|
cs.LG cs.AI stat.ML
|
The performance of a machine learning (ML) model depends heavily on the
relevance of its training data to the domain of the downstream evaluation task.
However, in practice, the data involved in an unseen evaluation task is often
not known to us (e.g., conversations between an LLM and a user are end-to-end
encrypted). So, it is not obvious what data would be relevant for
training/fine-tuning the ML model to maximize its task performance. Instead,
one can only deploy the ML model in the unseen evaluation task to gather
multiple rounds of coarse feedback on how well the model has performed. This
paper presents a novel global-to-local algorithm called DUET that can exploit
the feedback loop by interleaving a data selection method with Bayesian
optimization. As a result, DUET can efficiently refine the training data
mixture from a pool of data domains to maximize the model's performance on the
unseen evaluation task and its convergence to the optimal data mixture can be
theoretically guaranteed by analyzing its cumulative regret. Empirical
evaluation on image and LLM evaluation tasks shows that DUET finds better
training data mixtures than conventional baselines.
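As rough intuition for the feedback loop, the toy sketch below searches mixture weights on the probability simplex against a black-box feedback score. It substitutes plain random search for DUET's Bayesian optimization, and the synthetic `feedback` function is a hypothetical stand-in for deploying the model on the unseen task:

```python
import numpy as np

rng = np.random.default_rng(1)
n_domains = 4

# Hypothetical stand-in for coarse deployment feedback: a noisy black-box
# score of a mixture over data domains. The (unknown) best mixture in this
# toy problem is [0.1, 0.2, 0.6, 0.1].
target = np.array([0.1, 0.2, 0.6, 0.1])
def feedback(mix):
    return -float(np.sum((mix - target) ** 2)) + 0.01 * rng.standard_normal()

# Global search over the probability simplex; DUET interleaves a data
# selection step with Bayesian optimization, but plain random search keeps
# this sketch short.
best_mix, best_score = None, -np.inf
for _ in range(200):
    mix = rng.dirichlet(np.ones(n_domains))      # candidate mixture
    score = feedback(mix)
    if score > best_score:
        best_mix, best_score = mix, score

print("best mixture found:", np.round(best_mix, 2))
```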
|
2502.00271
|
Scaling Flaws of Verifier-Guided Search in Mathematical Reasoning
|
cs.CL
|
Large language models (LLMs) struggle with multi-step reasoning, where
inference-time scaling has emerged as a promising strategy for performance
improvement. Verifier-guided search outperforms repeated sampling when sample
size is limited by selecting and prioritizing valid reasoning paths. However,
we identify a critical limitation: scaling flaws, prevalent across different
models (Mistral 7B and DeepSeekMath 7B), benchmarks (GSM8K and MATH), and
verifiers (outcome value models and process reward models). As sample size
increases, verifier-guided search exhibits diminishing advantages and
eventually underperforms repeated sampling. Our analysis attributes this to
verifier failures, where imperfect verifiers misrank candidates and erroneously
prune all valid paths. These issues are further exacerbated in challenging and
out-of-distribution problems, restricting search effectiveness. To mitigate
verifier failures, we explore reducing reliance on verifiers and conduct
preliminary investigations using two simple methods. Our findings reveal
fundamental limitations in verifier-guided search and suggest future
directions.
|
2502.00274
|
AoI in M/G/1/1 Queues with Probabilistic Preemption
|
cs.IT math.IT
|
We consider a status update system consisting of one source, one server, and
one sink. The source generates packets according to a Poisson process and the
packets are served according to a generally distributed service time. We
consider a system with a capacity of one packet, i.e., there is no waiting
buffer in the system, and model it as an M/G/1/1 queueing system. We introduce
a probabilistically preemptive packet management policy and calculate the
moment generating functions (MGFs) of the age of information (AoI) and peak AoI
(PAoI) under the policy. According to the probabilistically preemptive policy,
when a packet arrives, the possible packet in the system is replaced by the
arriving packet with a fixed probability. Numerical results show the
effectiveness of the packet management policy.
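The policy is easy to simulate. The sketch below estimates the time-average AoI by Monte Carlo, instantiating the general service distribution as exponential and preempting the in-service packet with probability p on each arrival; it illustrates the model, not the paper's MGF derivation. For the special case p = 1 with exponential service, the known M/M/1/1 preemptive result, average AoI = 1/lambda + 1/mu, serves as a sanity check.

```python
import random

def simulate_avg_aoi(lam=1.0, mu=2.0, p=0.5, horizon=100000, seed=0):
    """Monte-Carlo estimate of the time-average AoI in an M/G/1/1 queue
    with probabilistic preemption: an arriving packet replaces the packet
    in service with probability p (and is discarded otherwise, since
    there is no waiting buffer). Service times are exponential(mu) here,
    as one instance of a general service distribution."""
    rng = random.Random(seed)
    t = age = area = 0.0
    in_service = None        # generation time of the packet in service
    remaining = 0.0          # its remaining service time
    while t < horizon:
        dt = rng.expovariate(lam)                 # next interarrival
        if in_service is None or dt < remaining:
            # Arrival occurs first.
            t += dt
            area += dt * (age + dt / 2)           # AoI grows linearly
            age += dt
            if in_service is None:
                in_service, remaining = t, rng.expovariate(mu)
            else:
                remaining -= dt
                if rng.random() < p:              # probabilistic preemption
                    in_service, remaining = t, rng.expovariate(mu)
        else:
            # Service completes first; AoI drops to the delivered packet's age.
            t += remaining
            area += remaining * (age + remaining / 2)
            age = t - in_service
            in_service = None
    return area / t

print("estimated average AoI (p=0.5):", round(simulate_avg_aoi(), 3))
```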
|
2502.00275
|
Simultaneous Estimation of Manipulation Skill and Hand Grasp Force from
Forearm Ultrasound Images
|
cs.RO cs.CV cs.ET cs.HC
|
Accurate estimation of human hand configuration and the forces they exert is
critical for effective teleoperation and skill transfer in robotic
manipulation. A deeper understanding of human interactions with objects can
further enhance teleoperation performance. To address this need, researchers
have explored methods to capture and translate human manipulation skills and
applied forces to robotic systems. Among these, biosignal-based approaches,
particularly those using forearm ultrasound data, have shown significant
potential for estimating hand movements and finger forces. In this study, we
present a method for simultaneously estimating manipulation skills and applied
hand force using forearm ultrasound data. Data collected from seven
participants were used to train deep learning models for classifying
manipulation skills and estimating grasp force. Our models achieved an average
classification accuracy of 94.87% ± 10.16% for manipulation skills and an
average root mean square error (RMSE) of 0.51 ± 0.19 N for force estimation,
as evaluated using five-fold
cross-validation. These results highlight the effectiveness of forearm
ultrasound in advancing human-machine interfacing and robotic teleoperation for
complex manipulation tasks. This work enables new and effective possibilities
for human-robot skill transfer and tele-manipulation, bridging the gap between
human dexterity and robotic control.
|
2502.00277
|
Regularized Langevin Dynamics for Combinatorial Optimization
|
cs.LG stat.ML
|
This work proposes a simple yet effective sampling framework for
combinatorial optimization (CO). Our method builds on discrete Langevin
dynamics (LD), an efficient gradient-guided generative algorithm. However, we
observed that directly applying LD often leads to limited exploration. To
overcome this limitation, we propose the Regularized Langevin Dynamics (RLD),
which enforces an expected distance between the sampled and current solutions,
effectively avoiding local minima. We develop two CO solvers on top of RLD, one
based on simulated annealing (SA) and the other based on neural networks
(NNs). Empirical results on three classical CO problems demonstrate that both of
our methods can achieve comparable or better performance against the previous
state-of-the-art (SOTA) SA and NN-based solvers. In particular, our SA
algorithm reduces the running time of the previous SOTA SA method by up to
80\%, while achieving equal or superior performance. In summary, RLD offers a
promising framework for enhancing both traditional heuristics and NN models to
solve CO problems.
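To convey the idea, the sketch below applies a distance-regularized proposal inside plain simulated annealing on a toy MaxCut instance: every proposal flips exactly d bits, forcing a fixed Hamming distance between the sampled and current solutions so that exploration does not stall. This is a simplified stand-in; RLD itself regularizes an expected distance inside discrete Langevin dynamics.

```python
import numpy as np

def maxcut_value(adj, x):
    """Cut value of a +/-1 assignment x on a weighted adjacency matrix."""
    return float(np.sum(adj * (1 - np.outer(x, x))) / 4)

def regularized_sa_maxcut(adj, d=3, T0=2.0, steps=4000, seed=0):
    """Simulated annealing whose proposals flip exactly d bits, enforcing
    a fixed Hamming distance between the sampled and current solutions to
    keep exploration alive (a simplified stand-in for RLD's expected-
    distance regularization)."""
    rng = np.random.default_rng(seed)
    n = adj.shape[0]
    x = rng.choice([-1, 1], size=n)
    best, best_val = x.copy(), maxcut_value(adj, x)
    for step in range(steps):
        T = T0 * (1 - step / steps) + 1e-3            # linear cooling
        y = x.copy()
        y[rng.choice(n, size=d, replace=False)] *= -1  # distance-d proposal
        dv = maxcut_value(adj, y) - maxcut_value(adj, x)
        if dv >= 0 or rng.random() < np.exp(dv / T):   # Metropolis rule
            x = y
            if maxcut_value(adj, x) > best_val:
                best, best_val = x.copy(), maxcut_value(adj, x)
    return best, best_val

# Toy instance: a 6-cycle, whose maximum cut is 6 (the graph is bipartite).
n = 6
adj = np.zeros((n, n))
for i in range(n):
    adj[i, (i + 1) % n] = adj[(i + 1) % n, i] = 1.0
sol, val = regularized_sa_maxcut(adj)
print("best cut value found:", val)
```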
|
2502.00279
|
Improving realistic semi-supervised learning with doubly robust
estimation
|
cs.LG stat.ML
|
A major challenge in Semi-Supervised Learning (SSL) is the limited
information available about the class distribution in the unlabeled data. In
many real-world applications this arises from the prevalence of long-tailed
distributions, where the standard pseudo-label approach to SSL is biased
towards the labeled class distribution and thus performs poorly on unlabeled
data. Existing methods typically assume that the unlabeled class distribution
is either known a priori, which is unrealistic in most situations, or estimate
it on-the-fly using the pseudo-labels themselves. We propose to explicitly
estimate the unlabeled class distribution, which is a finite-dimensional
parameter, \emph{as an initial step}, using a doubly robust estimator with a
strong theoretical guarantee; this estimate can then be integrated into
existing methods to pseudo-label the unlabeled data during training more
accurately. Experimental results demonstrate that incorporating our techniques
into common pseudo-labeling approaches improves their performance.
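The augmented (doubly robust) form of such a prevalence estimate can be illustrated on synthetic data. The sketch below uses the classical correction term mean(1[y=k] - f(x)) under the simplifying assumption that labeled and unlabeled covariates share a distribution; the paper's estimator targets the harder mismatched, long-tailed setting:

```python
import numpy as np

rng = np.random.default_rng(0)

# Binary toy task; true class-1 prevalence is 0.7 everywhere.
n_lab, n_unlab = 500, 5000
y_lab = rng.binomial(1, 0.7, size=n_lab)
y_unlab = rng.binomial(1, 0.7, size=n_unlab)      # hidden from the learner

def predict(y, rng):
    """A deliberately miscalibrated classifier's P(y=1|x)."""
    f = 0.9 * y + 0.5 * (1 - y) + 0.05 * rng.standard_normal(len(y))
    return np.clip(f, 0.0, 1.0)

f_lab, f_unlab = predict(y_lab, rng), predict(y_unlab, rng)

# Naive plug-in estimate of the class-1 prevalence: biased when f is off.
plug_in = f_unlab.mean()
# Doubly robust (augmented) estimate: plug-in plus the labeled-data
# correction term mean(1[y=1] - f(x)), which cancels the calibration bias.
dr = f_unlab.mean() + (y_lab - f_lab).mean()

print(f"plug-in: {plug_in:.3f}   doubly robust: {dr:.3f}   truth: 0.700")
```

The corrected estimate of the class distribution can then be plugged into a pseudo-labeling scheme in place of the labeled-data distribution.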
|
2502.00280
|
On the study of frequency control and spectral bias in Wavelet-Based
Kolmogorov Arnold networks: A path to physics-informed KANs
|
cs.LG cs.NA math.NA
|
Spectral bias, the tendency of neural networks to prioritize learning
low-frequency components of functions during the initial training stages, poses
a significant challenge when approximating solutions with high-frequency
details. This issue is particularly pronounced in physics-informed neural
networks (PINNs), widely used to solve differential equations that describe
physical phenomena. In the literature, contributions such as Wavelet Kolmogorov
Arnold Networks (Wav-KANs) have demonstrated promising results in capturing
both low- and high-frequency components. Similarly, Fourier features (FF) are
often employed to address this challenge. However, the theoretical foundations
of Wav-KANs, particularly the relationship between the frequency of the mother
wavelet and spectral bias, remain underexplored. A more in-depth understanding
of how Wav-KANs manage high-frequency terms could offer valuable insights for
addressing oscillatory phenomena encountered in parabolic, elliptic, and
hyperbolic differential equations. In this work, we analyze the eigenvalues of
the neural tangent kernel (NTK) of Wav-KANs to enhance their ability to
converge on high-frequency components, effectively mitigating spectral bias.
Our theoretical findings are validated through numerical experiments, where we
also discuss the limitations of traditional approaches, such as standard PINNs
and Fourier features, in addressing multi-frequency problems.
|
2502.00281
|
Sigmoid Self-Attention is Better than Softmax Self-Attention: A
Mixture-of-Experts Perspective
|
cs.LG cs.AI
|
At the core of the popular Transformer architecture is the self-attention
mechanism, which dynamically assigns softmax weights to each input token so
that the model can focus on the most salient information. However, the softmax
structure slows down the attention computation due to its row-wise nature, and
inherently introduces competition among tokens: as the weight assigned to one
token increases, the weights of others decrease. This competitive dynamic may
narrow the focus of self-attention to a limited set of features, potentially
overlooking other informative characteristics. Recent experimental studies have
shown that using the element-wise sigmoid function helps eliminate token
competition and reduce the computational overhead. Despite these promising
empirical results, a rigorous comparison between sigmoid and softmax
self-attention mechanisms remains absent in the literature. This paper closes
this gap by theoretically demonstrating that sigmoid self-attention is more
sample-efficient than its softmax counterpart. Toward that goal, we illustrate
that each row of the self-attention matrix can be represented as a mixture of
experts. Our analysis shows that "experts" in sigmoid self-attention require
significantly less data to achieve the same approximation error as those in
softmax self-attention. We corroborate our theoretical findings through
extensive experiments on both synthetic and real-world datasets.
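The competition argument is easy to see numerically: softmax rows are constrained to sum to one, so raising one score necessarily lowers the other weights in that row, while element-wise sigmoid weights are independent. A minimal sketch with a single head and pre-activation scores only:

```python
import numpy as np

def softmax(z, axis=-1):
    e = np.exp(z - z.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
n, d = 5, 16
X = rng.standard_normal((n, d))
W_Q = rng.standard_normal((d, d)) / np.sqrt(d)
W_K = rng.standard_normal((d, d)) / np.sqrt(d)
scores = (X @ W_Q) @ (X @ W_K).T / np.sqrt(d)   # attention scores

A_soft = softmax(scores)      # row-wise: weights in each row sum to 1
A_sig = sigmoid(scores)       # element-wise: no coupling across a row

# Raise one score and see what happens to the *other* weights in its row.
bumped = scores.copy()
bumped[0, 0] += 5.0
print("softmax: other weights changed?",
      not np.allclose(softmax(bumped)[0, 1:], A_soft[0, 1:]))
print("sigmoid: other weights changed?",
      not np.allclose(sigmoid(bumped)[0, 1:], A_sig[0, 1:]))
```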
|
2502.00282
|
GraphMinNet: Learning Dependencies in Graphs with Light Complexity
Minimal Architecture
|
cs.LG
|
Graph Neural Networks (GNNs) have demonstrated remarkable success in various
applications, yet they often struggle to capture long-range dependencies (LRD)
effectively. This paper introduces GraphMinNet, a novel GNN architecture that
generalizes the idea of minimal Gated Recurrent Units to graph-structured data.
Our approach achieves efficient LRD modeling with linear computational
complexity while maintaining permutation equivariance and stability. The model
incorporates both structural and positional information through a unique
combination of feature and positional encodings, leading to provably stronger
expressiveness than the 1-WL test. Theoretical analysis establishes that
GraphMinNet maintains non-decaying gradients over long distances, ensuring
effective long-range information propagation. Extensive experiments on ten
diverse datasets, including molecular graphs, image graphs, and synthetic
networks, demonstrate that GraphMinNet achieves state-of-the-art performance
while being computationally efficient. Our results show superior performance on
6 out of 10 datasets and competitive results on the others, validating the
effectiveness of our approach in capturing both local and global graph
structures.
|
2502.00284
|
Bounded-Confidence Models of Multi-Dimensional Opinions with
Topic-Weighted Discordance
|
physics.soc-ph cs.SI math.DS
|
People's opinions on a wide range of topics often evolve over time through
their interactions with others. Models of opinion dynamics primarily focus on
one-dimensional opinions which represent opinions on one topic. However,
opinions on various topics are rarely isolated; instead, they can be
interdependent and exhibit correlations. In a bounded-confidence model (BCM) of
opinion dynamics, agents influence each other's opinions only if their opinions
are sufficiently similar. We extend classical agent-based BCMs -- namely, the
Hegselmann--Krause BCM, which has synchronous interactions, and the
Deffuant--Weisbuch BCM, which has asynchronous interactions -- to a
multidimensional setting, in which opinions are multidimensional vectors whose
components represent views on different topics, and opinions on different
topics are interdependent. To measure opinion differences between agents, we introduce
topic-weighted discordance functions that account for opinion differences in
all topics. We use the regions of receptiveness to characterize the
steady-state opinion clusters and provide an analytical approach to compute
these regions. In addition, we numerically simulate our models on various
networks with initial opinions drawn from a variety of distributions. When
initial opinions are correlated across different topics, our topic-weighted
BCMs yield significantly different results in both transient and steady states
compared to baseline models, where the dynamics of each opinion topic are
independent.
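A minimal version of the asynchronous (Deffuant--Weisbuch-style) model with a topic-weighted discordance function can be simulated directly. The parameter values below (topic weights, confidence bound, compromise rate) are illustrative choices, not taken from the paper:

```python
import numpy as np

def weighted_dw_step(opinions, w, conf=0.3, mu=0.5, rng=None):
    """One asynchronous Deffuant--Weisbuch-style update with a
    topic-weighted discordance d(x, y) = sqrt(sum_k w_k (x_k - y_k)^2):
    a randomly chosen pair compromises on all topics only if their
    weighted discordance is below the confidence bound."""
    rng = rng or np.random.default_rng()
    i, j = rng.choice(len(opinions), size=2, replace=False)
    diff = opinions[i] - opinions[j]
    if np.sqrt(np.sum(w * diff ** 2)) < conf:
        opinions[i] -= mu * diff
        opinions[j] += mu * diff
    return opinions

rng = np.random.default_rng(0)
n_agents, n_topics = 50, 2
opinions = rng.uniform(0, 1, size=(n_agents, n_topics))
w = np.array([0.8, 0.2])                 # illustrative topic weights
init_spread = opinions.std(axis=0)
for _ in range(20000):
    opinions = weighted_dw_step(opinions, w, rng=rng)
print("opinion spread per topic:", opinions.std(axis=0).round(3),
      "initially:", init_spread.round(3))
```

Because each compromise moves a pair toward its midpoint, per-topic dispersion is non-increasing and opinion clusters emerge whose geometry depends on the topic weights.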
|
2502.00285
|
K Nearest Neighbor-Guided Trajectory Similarity Learning
|
cs.LG cs.CV cs.DB
|
Trajectory similarity is fundamental to many spatio-temporal data mining
applications. Recent studies propose deep learning models to approximate
conventional trajectory similarity measures, exploiting their fast inference
time once trained. Although efficient inference has been reported, challenges
remain in similarity approximation accuracy due to difficulties in trajectory
granularity modeling and in exploiting similarity signals in the training data.
To fill this gap, we propose TSMini, a highly effective trajectory similarity
model with a sub-view modeling mechanism capable of learning multi-granularity
trajectory patterns and a k nearest neighbor-based loss that guides TSMini to
learn not only absolute similarity values between trajectories but also their
relative similarity ranks. Together, these two innovations enable highly
accurate trajectory similarity approximation. Experiments show that TSMini can
outperform the state-of-the-art models by 22% in accuracy on average when
learning trajectory similarity measures.
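The kNN-guided training signal can be sketched as a loss that combines absolute-value regression with rank hinges among each anchor's k nearest neighbors under the target measure. The snippet below is a toy forward-pass illustration with assumed details (margin value, similarity-matrix form), not TSMini's actual loss:

```python
import numpy as np

def knn_guided_loss(pred, target, k=3, margin=0.01):
    """Toy kNN-guided similarity loss: an MSE term on predicted similarity
    values plus a hinge term that penalizes mis-ordered pairs among each
    anchor's k nearest neighbors under the target measure, so the model
    learns relative ranks as well as absolute values.
    `pred` and `target` are (n, n) similarity matrices."""
    mse = np.mean((pred - target) ** 2)
    rank = 0.0
    n = pred.shape[0]
    for a in range(n):
        # k most similar trajectories to anchor a under the target measure.
        nbrs = np.argsort(-target[a])[:k]
        for u in range(k):
            for v in range(u + 1, k):
                i, j = nbrs[u], nbrs[v]   # target says sim(a,i) >= sim(a,j)
                rank += max(0.0, margin - (pred[a, i] - pred[a, j]))
    return mse + rank / n

rng = np.random.default_rng(0)
target = rng.uniform(size=(6, 6))
target = (target + target.T) / 2               # symmetric similarity
good = knn_guided_loss(target, target)         # perfect predictions
bad = knn_guided_loss(rng.uniform(size=(6, 6)), target)
print(f"loss(perfect)={good:.4f}  loss(random)={bad:.4f}")
```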
|
2502.00288
|
Learning from Suboptimal Data in Continuous Control via Auto-Regressive
Soft Q-Network
|
cs.LG cs.RO
|
Reinforcement learning (RL) for continuous control often requires large
amounts of online interaction data. Value-based RL methods can mitigate this
burden by offering relatively high sample efficiency. Some studies further
enhance sample efficiency by incorporating offline demonstration data to
"kick-start" training, achieving promising results in continuous control.
However, they typically compute the Q-function independently for each action
dimension, neglecting interdependencies and making it harder to identify
optimal actions when learning from suboptimal data, such as non-expert
demonstration and online-collected data during the training process. To address
these issues, we propose Auto-Regressive Soft Q-learning (ARSQ), a value-based
RL algorithm that models Q-values in a coarse-to-fine, auto-regressive manner.
First, ARSQ decomposes the continuous action space into discrete spaces in a
coarse-to-fine hierarchy, enhancing sample efficiency for fine-grained
continuous control tasks. Next, it auto-regressively predicts dimensional
action advantages within each decision step, enabling more effective
decision-making in continuous control tasks. We evaluate ARSQ on two continuous
control benchmarks, RLBench and D4RL, integrating demonstration data into
online training. On D4RL, which includes non-expert demonstrations, ARSQ
achieves an average $1.62\times$ performance improvement over the SOTA
value-based baseline. On RLBench, which incorporates expert demonstrations,
ARSQ surpasses
various baselines, demonstrating its effectiveness in learning from suboptimal
online-collected data.
|
2502.00290
|
Estimating LLM Uncertainty with Logits
|
cs.CL cs.AI
|
In recent years, Large Language Models (LLMs) have seen remarkable
advancements and have been extensively integrated across various fields.
Despite their progress, LLMs are prone to hallucinations, producing responses
that may not be dependable if the models lack sufficient grounding knowledge.
To mitigate this issue, methods for estimating uncertainty have been adopted,
with a focus on critical tokens as indicators of reliability. Nevertheless,
probability-based approaches have shown limitations in assessing token-level
reliability due to the erosion of evidence strength information acquired during
training. In this paper, we introduce Logits-induced Token Uncertainty (LogU),
a novel framework designed to estimate token-specific uncertainty in LLMs in
real time, without the need for multiple sampling rounds. By leveraging
evidence modeling for the implementation of LogU, we utilize the derived
uncertainty measures to steer downstream tasks. Our experimental findings
highlight the substantial effectiveness and potential of LogU, marking a
significant advancement in addressing the challenge of model hallucinations.
|
2502.00294
|
On the Source Model Key Agreement Problem
|
cs.IT math.IT
|
We consider the source model key agreement problem involving two legitimate
parties and an eavesdropper who observe n i.i.d. samples of X, Y, and Z,
respectively. The best-known upper bound on the key capacity is characterized
by an inf-max optimization problem that generally lacks a closed-form solution.
In this paper, we solve the optimization for some class of sources, thereby
providing simple expressions for the upper bound. We provide general conditions
under which the upper bound reduces to I(X;Y). As an example, we consider the
XOR setting in which X and Y are binary, and Z is the XOR of X and Y. The
upper bound reduces to I(X;Y) for this source. Next, we conjecture that the
rate I(X;Y) is not achievable for the XOR source, and provide some ideas that
might be useful for developing a new upper bound on the source model problem.
|
2502.00298
|
The Price of Linear Time: Error Analysis of Structured Kernel
Interpolation
|
cs.LG stat.ML
|
Structured Kernel Interpolation (SKI) (Wilson et al. 2015) helps scale
Gaussian Processes (GPs) by approximating the kernel matrix via interpolation
at inducing points, achieving linear computational complexity. However, it
lacks rigorous theoretical error analysis. This paper bridges the gap: we prove
error bounds for the SKI Gram matrix and examine the error's effect on
hyperparameter estimation and posterior inference. We further provide a
practical guide to selecting the number of inducing points under convolutional
cubic interpolation: they should grow as $n^{d/3}$ for error control.
Crucially, we identify two dimensionality regimes governing the trade-off
between SKI Gram matrix spectral norm error and computational complexity. For
$d \leq 3$, any error tolerance can be achieved in linear time for sufficiently
large sample sizes. For $d > 3$, the error must increase with sample size to maintain
linear time. Our analysis provides key insights into SKI's scalability-accuracy
trade-offs, establishing precise conditions for achieving linear-time GP
inference with controlled approximation error.
|