| id | title | categories | abstract |
|---|---|---|---|
2502.09082
|
CoSER: Coordinating LLM-Based Persona Simulation of Established Roles
|
cs.CL cs.AI
|
Role-playing language agents (RPLAs) have emerged as promising applications
of large language models (LLMs). However, simulating established characters
presents a challenging task for RPLAs, due to the lack of authentic character
datasets and nuanced evaluation methods using such data. In this paper, we
present CoSER, a collection of a high-quality dataset, open models, and an
evaluation protocol towards effective RPLAs of established characters. The
CoSER dataset covers 17,966 characters from 771 renowned books. It provides
authentic dialogues with real-world intricacies, as well as diverse data types
such as conversation setups, character experiences and internal thoughts.
Drawing from acting methodology, we introduce given-circumstance acting for
training and evaluating role-playing LLMs, where LLMs sequentially portray
multiple characters in book scenes. Using our dataset, we develop CoSER 8B and
CoSER 70B, two advanced open role-playing LLMs built on LLaMA-3.1 models.
Extensive experiments demonstrate the value of the CoSER dataset for RPLA
training, evaluation and retrieval. Moreover, CoSER 70B exhibits
state-of-the-art performance, surpassing or matching GPT-4o on our evaluation
and three existing benchmarks, e.g., achieving 75.80% and 93.47% accuracy on
the InCharacter and LifeChoice benchmarks, respectively.
|
2502.09083
|
Show Me the Work: Fact-Checkers' Requirements for Explainable Automated
Fact-Checking
|
cs.HC cs.AI cs.CL
|
The pervasiveness of large language models and generative AI in online media
has amplified the need for effective automated fact-checking to assist
fact-checkers in tackling the increasing volume and sophistication of
misinformation. The complex nature of fact-checking demands that automated
fact-checking systems provide explanations that enable fact-checkers to
scrutinise their outputs. However, it is unclear how these explanations should
align with the decision-making and reasoning processes of fact-checkers to be
effectively integrated into their workflows. Through semi-structured interviews
with fact-checking professionals, we bridge this gap by: (i) providing an
account of how fact-checkers assess evidence, make decisions, and explain their
processes; (ii) examining how fact-checkers use automated tools in practice;
and (iii) identifying fact-checker explanation requirements for automated
fact-checking tools. The findings show unmet explanation needs and identify
important criteria for replicable fact-checking explanations that trace the
model's reasoning path, reference specific evidence, and highlight uncertainty
and information gaps.
|
2502.09084
|
Application of Tabular Transformer Architectures for Operating System
Fingerprinting
|
cs.CR cs.LG cs.NI
|
Operating System (OS) fingerprinting is essential for network management and
cybersecurity, enabling accurate device identification based on network traffic
analysis. Traditional rule-based tools such as Nmap and p0f face challenges in
dynamic environments due to frequent OS updates and obfuscation techniques.
While Machine Learning (ML) approaches have been explored, Deep Learning (DL)
models, particularly Transformer architectures, remain largely unexplored in this
domain. This study investigates the application of Tabular Transformer
architectures, specifically TabTransformer and FT-Transformer, for OS
fingerprinting, leveraging structured network data from three publicly
available datasets. Our experiments demonstrate that FT-Transformer generally
outperforms traditional ML models, previous approaches and TabTransformer
across multiple classification levels (OS family, major, and minor versions).
The results establish a strong foundation for DL-based OS fingerprinting,
improving accuracy and adaptability in complex network environments.
Furthermore, we ensure the reproducibility of our research by providing an
open-source implementation.
|
2502.09085
|
Multi-user Visible Light Communications with Probabilistic Constellation
Shaping and Precoding
|
eess.SY cs.SY
|
This paper proposes a joint design of probabilistic constellation shaping
(PCS) and precoding to enhance the sum-rate performance of multi-user visible
light communications (VLC) broadcast channels subject to signal amplitude
constraint. In the proposed design, the transmission probabilities of bipolar
$M$-pulse amplitude modulation ($M$-PAM) symbols for each user and the transmit
precoding matrix are jointly optimized to improve the sum-rate performance. The
joint design problem is shown to be a complex multivariate non-convex problem
due to the non-convexity of the objective function. To tackle the original
non-convex optimization problem, the firefly algorithm (FA), a nature-inspired
heuristic optimization approach, is employed to find a local optimum.
FA-based approach, however, suffers from high computational complexity. Thus,
using zero-forcing (ZF) precoding, we propose a low-complexity design, which is
solved using an alternating optimization approach. Additionally, considering
the channel uncertainty, a robust design based on the concept of end-to-end
learning with autoencoder (AE) is also presented. Simulation results reveal
that the proposed joint design with PCS significantly improves the sum-rate
performance compared to the conventional design with uniform signaling. For
instance, the joint design achieves $\mathbf{17.5\%}$ and $\mathbf{19.2\%}$
higher sum-rate for 8-PAM and 16-PAM, respectively, at 60 dB peak
amplitude-to-noise ratio. Some insights into the optimal symbol distributions
of the two joint design approaches are also provided. Furthermore, our results
show the advantage of the proposed robust design over the non-robust one under
uncertain channel conditions.
|
2502.09086
|
A Hybrid Model for Few-Shot Text Classification Using Transfer and
Meta-Learning
|
cs.CL
|
With the continuous development of natural language processing (NLP)
technology, text classification tasks have been widely used in multiple
application fields. However, obtaining labeled data is often expensive and
difficult, especially in few-shot learning scenarios. To solve this problem,
this paper proposes a few-shot text classification model based on transfer
learning and meta-learning. The model uses the knowledge of the pre-trained
model for transfer and optimizes the model's rapid adaptability in few-sample
tasks through a meta-learning mechanism. Through a series of comparative
experiments and ablation experiments, we verified the effectiveness of the
proposed method. The experimental results show that under the conditions of few
samples and medium samples, the model based on transfer learning and
meta-learning significantly outperforms traditional machine learning and deep
learning methods. In addition, ablation experiments further analyzed the
contribution of each component to the model performance and confirmed the key
role of transfer learning and meta-learning in improving model accuracy.
Finally, this paper discusses future research directions and looks forward to
the potential of this method in practical applications.
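As a hedged sketch of the meta-learning mechanism described above: a first-order MAML-style loop on a one-parameter linear regressor shows how a meta-learned initialization adapts quickly to new tasks. The toy tasks and hyperparameters here are invented; the paper's actual model is a pretrained text classifier.

```python
def grad(w, task):
    """Gradient of mean squared loss for y = w*x on a task of (x, y) pairs."""
    return sum(2 * x * (w * x - y) for x, y in task) / len(task)

def loss(w, task):
    return sum((w * x - y) ** 2 for x, y in task) / len(task)

def fomaml(tasks, w=0.0, inner_lr=0.05, outer_lr=0.1, steps=200):
    """First-order MAML: the outer update uses the gradient evaluated
    at the adapted (post-inner-step) weights."""
    for _ in range(steps):
        outer_g = 0.0
        for task in tasks:
            w_adapted = w - inner_lr * grad(w, task)   # one inner SGD step
            outer_g += grad(w_adapted, task)           # first-order approximation
        w -= outer_lr * outer_g / len(tasks)
    return w

# Two toy "tasks": y = 2x and y = 4x; a good initialization sits in between.
tasks = [[(1.0, 2.0), (2.0, 4.0)], [(1.0, 4.0), (2.0, 8.0)]]
w_meta = fomaml(tasks)
# One adaptation step from w_meta fits task 0 better than w_meta itself does.
adapted = w_meta - 0.05 * grad(w_meta, tasks[0])
```

With these symmetric tasks the meta-initialization converges to the midpoint w = 3, from which a single gradient step moves sharply toward either task's optimum.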
|
2502.09088
|
Unsupervised Anomaly Detection on Implicit Shape representations for
Sarcopenia Detection
|
cs.CV cs.LG
|
Sarcopenia is an age-related progressive loss of muscle mass and strength
that significantly impacts daily life. A commonly studied criterion for
characterizing the muscle mass has been the combination of 3D imaging and
manual segmentations. In this paper, we instead study the muscles' shape. We
rely on an implicit neural representation (INR) to model normal muscle shapes.
We then introduce an unsupervised anomaly detection method to identify
sarcopenic muscles based on the reconstruction error of the implicit model.
Relying on a conditional INR with an auto-decoding strategy, we also learn a
latent representation of the muscles that clearly separates normal from
abnormal muscles in an unsupervised fashion. Experimental results on a dataset
of 103 segmented volumes indicate that our double anomaly detection strategy
effectively discriminates sarcopenic and non-sarcopenic muscles.
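The paper's anomaly score comes from an INR's reconstruction error; the same principle can be sketched with a simpler stand-in model, namely reconstruction error against a PCA subspace fit by power iteration. All data below is invented toy 2-D "shape" data, not muscle segmentations.

```python
def top_component(data, iters=200):
    """Leading principal direction of centered data via power iteration."""
    dim = len(data[0])
    mean = [sum(x[j] for x in data) / len(data) for j in range(dim)]
    centered = [[x[j] - mean[j] for j in range(dim)] for x in data]
    # Covariance matrix (unnormalized; only the direction matters).
    cov = [[sum(r[i] * r[j] for r in centered) for j in range(dim)]
           for i in range(dim)]
    v = [1.0] * dim
    for _ in range(iters):
        w = [sum(cov[i][j] * v[j] for j in range(dim)) for i in range(dim)]
        norm = sum(c * c for c in w) ** 0.5
        v = [c / norm for c in w]
    return mean, v

def recon_error(x, mean, v):
    """Squared distance between x and its projection onto the learned subspace."""
    centered = [xi - mi for xi, mi in zip(x, mean)]
    coeff = sum(ci * vi for ci, vi in zip(centered, v))
    return sum((ci - coeff * vi) ** 2 for ci, vi in zip(centered, v))

# "Normal" samples vary along one direction; the anomaly deviates off it.
normal = [[t, 2 * t] for t in [0.0, 1.0, 2.0, 3.0, 4.0]]
mean, v = top_component(normal)
anomaly = [2.0, -1.0]
scores_normal = [recon_error(x, mean, v) for x in normal]
score_anomaly = recon_error(anomaly, mean, v)
```

Normal samples reconstruct almost exactly, while the off-subspace anomaly gets a large error; thresholding this score is the unsupervised detection step.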
|
2502.09089
|
Semantic Ads Retrieval at Walmart eCommerce with Language Models
Progressively Trained on Multiple Knowledge Domains
|
cs.IR
|
Sponsored search in e-commerce poses several unique and complex challenges.
These challenges stem from factors such as the asymmetric language structure
between search queries and product names, the inherent ambiguity in user search
intent, and the vast volume of sparse and imbalanced search corpus data. The
role of the retrieval component within a sponsored search system is pivotal,
serving as the initial step that directly affects the subsequent ranking and
bidding systems. In this paper, we present an end-to-end solution tailored to
optimize the ads retrieval system on Walmart.com. First, we pretrain a
BERT-like classification model with product category information, enhancing
the model's understanding of Walmart product semantics. Second, we design a
two-tower Siamese network structure for the embedding model to improve
training efficiency. Third, we introduce a Human-in-the-loop Progressive Fusion
Training method to ensure robust model performance. Our results demonstrate the
effectiveness of this pipeline. It enhances the search relevance metric by up
to 16% compared to a baseline DSSM-based model. Moreover, our large-scale
online A/B testing demonstrates that our approach surpasses the ad revenue of
the existing production model.
|
2502.09093
|
From Visuals to Vocabulary: Establishing Equivalence Between Image and
Text Token Through Autoregressive Pre-training in MLLMs
|
cs.CV
|
While MLLMs perform well on perceptual tasks, they lack precise multimodal
alignment, limiting performance. To address this challenge, we propose Vision
Dynamic Embedding-Guided Pretraining (VDEP), a hybrid autoregressive training
paradigm for MLLMs. Utilizing dynamic embeddings from the MLP following the
visual encoder, this approach supervises image hidden states and integrates
image tokens into autoregressive training. Existing MLLMs have primarily focused on
recovering information from textual inputs, often neglecting the effective
processing of image data. In contrast, the key improvement of this work is the
reinterpretation of multimodal alignment as a process of recovering information
from input data, with particular emphasis on reconstructing detailed visual
features. The proposed method seamlessly integrates into standard models without
architectural changes. Experiments on 13 benchmarks show VDEP outperforms
baselines, surpassing existing methods.
|
2502.09097
|
A Hybrid Transformer Model for Fake News Detection: Leveraging Bayesian
Optimization and Bidirectional Recurrent Unit
|
cs.CL
|
In this paper, we propose an optimized Transformer model that integrates
Bayesian algorithms with a Bidirectional Gated Recurrent Unit (BiGRU), and
apply it to fake news classification for the first time. First, we employ the
TF-IDF method to extract features from news texts and transform them into
numeric representations to facilitate subsequent machine learning tasks. Two
sets of experiments are then conducted for fake news detection and
classification: one using a Transformer model optimized only with BiGRU, and
the other incorporating Bayesian algorithms into the BiGRU-based Transformer.
Experimental results show that the BiGRU-optimized Transformer achieves 100%
accuracy on the training set and 99.67% on the test set, while the addition of
the Bayesian algorithm maintains 100% accuracy on the training set and slightly
improves test-set accuracy to 99.73%. This indicates that the Bayesian
algorithm boosts test accuracy by 0.06 percentage points, further enhancing the detection
capability for fake news. Moreover, the proposed algorithm converges rapidly at
around the 10th training epoch with accuracy nearing 100%, demonstrating both
its effectiveness and its fast classification ability. Overall, the optimized
Transformer model, enhanced by the Bayesian algorithm and BiGRU, exhibits
excellent continuous learning and detection performance, offering a robust
technical means to combat the spread of fake news in the current era of
information overload.
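The TF-IDF feature-extraction step mentioned above is standard; as a self-contained illustration (a minimal implementation with an invented toy corpus, not the paper's code):

```python
import math

def tf_idf(corpus):
    """Compute TF-IDF vectors for a list of tokenized documents."""
    n_docs = len(corpus)
    # Document frequency: in how many documents each term appears.
    df = {}
    for doc in corpus:
        for term in set(doc):
            df[term] = df.get(term, 0) + 1
    vocab = sorted(df)
    # Inverse document frequency with a +1 floor so common terms keep weight > 0.
    idf = {t: math.log(n_docs / df[t]) + 1.0 for t in vocab}
    vectors = []
    for doc in corpus:
        counts = {}
        for term in doc:
            counts[term] = counts.get(term, 0) + 1
        # Term frequency normalized by document length, scaled by IDF.
        vectors.append({t: (counts.get(t, 0) / len(doc)) * idf[t] for t in vocab})
    return vectors

corpus = [
    "breaking news markets rally".split(),
    "fake news spreads fast online".split(),
    "markets fall on fake report".split(),
]
features = tf_idf(corpus)
```

Rare terms (appearing in one document) receive higher weight than terms shared across documents, which is the property that makes TF-IDF features useful inputs for downstream classifiers.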
|
2502.09100
|
Logical Reasoning in Large Language Models: A Survey
|
cs.AI cs.CL
|
With the emergence of advanced reasoning models like OpenAI o3 and
DeepSeek-R1, large language models (LLMs) have demonstrated remarkable
reasoning capabilities. However, their ability to perform rigorous logical
reasoning remains an open question. This survey synthesizes recent advancements
in logical reasoning within LLMs, a critical area of AI research. It outlines
the scope of logical reasoning in LLMs, its theoretical foundations, and the
benchmarks used to evaluate reasoning proficiency. We analyze existing
capabilities across different reasoning paradigms (deductive, inductive,
abductive, and analogical) and assess strategies to enhance reasoning
performance, including data-centric tuning, reinforcement learning, decoding
strategies, and neuro-symbolic approaches. The review concludes with future
directions, emphasizing the need for further exploration to strengthen logical
reasoning in AI systems.
|
2502.09104
|
One-shot Federated Learning Methods: A Practical Guide
|
cs.LG cs.AI
|
One-shot Federated Learning (OFL) is a distributed machine learning paradigm
that constrains client-server communication to a single round, addressing
privacy and communication overhead issues associated with multiple rounds of
data exchange in traditional Federated Learning (FL). OFL demonstrates the
practical potential for integration with future approaches that require
collaborative training models, such as large language models (LLMs). However,
current OFL methods face two major challenges: data heterogeneity and model
heterogeneity, which result in subpar performance compared to conventional FL
methods. Worse still, despite numerous studies addressing these limitations, a
comprehensive summary is still lacking. To address these gaps, this paper
presents a systematic analysis of the challenges faced by OFL and thoroughly
reviews the current methods. We also offer an innovative categorization method
and analyze the trade-offs of various techniques. Additionally, we discuss the
most promising future directions and the technologies that should be integrated
into the OFL field. This work aims to provide guidance and insights for future
research.
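The single communication round that defines OFL can be illustrated by its simplest baseline, one-shot weight averaging. Real OFL methods typically rely on distillation or ensembles instead, and the toy clients below are invented.

```python
def local_train(w, data, lr=0.1, epochs=100):
    """Each client fits a scalar linear model y = w*x on its own local data."""
    for _ in range(epochs):
        g = sum(2 * x * (w * x - y) for x, y in data) / len(data)
        w -= lr * g
    return w

def one_shot_aggregate(client_weights):
    """Single server round: clients upload their models once, server averages."""
    return sum(client_weights) / len(client_weights)

# Two clients whose data is drawn from the same law y = 3x, split unevenly.
client_data = [[(1.0, 3.0), (2.0, 6.0)], [(3.0, 9.0)]]
uploads = [local_train(0.0, d) for d in client_data]
w_global = one_shot_aggregate(uploads)
```

With homogeneous data the averaged model matches the clients' shared optimum; the data-heterogeneity challenge discussed above arises precisely when the clients' local optima diverge and a plain average degrades.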
|
2502.09106
|
Scaling Law for Stochastic Gradient Descent in Quadratically
Parameterized Linear Regression
|
cs.LG
|
In machine learning, the scaling law describes how the model performance
improves with the model and data size scaling up. From a learning theory
perspective, this class of results establishes upper and lower generalization
bounds for a specific learning algorithm. Here, the exact algorithm running
using a specific model parameterization often offers a crucial implicit
regularization effect, leading to good generalization. To characterize the
scaling law, previous theoretical studies mainly focus on linear models,
whereas feature learning, a notable process behind the remarkable empirical
success of neural networks, remains largely unexplored. This paper studies
the scaling law over a linear regression with the model being quadratically
parameterized. We consider infinite-dimensional data and slope ground truth,
both exhibiting certain power-law decay rates. We study convergence
rates for Stochastic Gradient Descent and demonstrate that the learning rates
for the variables automatically adapt to the ground truth. As a result, in the
canonical linear regression, we provide explicit separations for generalization
curves between SGD with and without feature learning, and the
information-theoretic lower bound that is agnostic to the parameterization method
and the algorithm. Our analysis for decaying ground truth provides a new
characterization for the learning dynamic of the model.
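The quadratic parameterization studied here is commonly written with the regression coefficients as an elementwise product of trainable factors; a standard form for this model class (the notation is illustrative, not necessarily the paper's) is:

```latex
% Linear model with quadratically parameterized coefficients:
% theta is an elementwise difference of squares, allowing signed entries.
f_\theta(x) = \langle \theta, x \rangle,
\qquad \theta = w \odot w - v \odot v,
\qquad w, v \in \mathbb{R}^d .
```

Running gradient descent on $(w, v)$ rather than directly on $\theta$ is known to induce an implicit bias toward sparse solutions, which is the kind of implicit regularization effect the abstract contrasts with canonical linear regression.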
|
2502.09110
|
Pulling Back the Curtain: Unsupervised Adversarial Detection via
Contrastive Auxiliary Networks
|
cs.CV
|
Deep learning models are widely employed in safety-critical applications yet
remain susceptible to adversarial attacks -- imperceptible perturbations that
can significantly degrade model performance. Conventional defense mechanisms
predominantly focus on either enhancing model robustness or detecting
adversarial inputs independently. In this work, we propose an Unsupervised
adversarial detection via Contrastive Auxiliary Networks (U-CAN) to uncover
adversarial behavior within auxiliary feature representations, without the need
for adversarial examples. U-CAN is embedded within selected intermediate layers
of the target model. These auxiliary networks, comprising projection layers and
ArcFace-based linear layers, refine feature representations to more effectively
distinguish between benign and adversarial inputs. Comprehensive experiments
across multiple datasets (CIFAR-10, Mammals, and a subset of ImageNet) and
architectures (ResNet-50, VGG-16, and ViT) demonstrate that our method
surpasses existing unsupervised adversarial detection techniques, achieving
superior F1 scores against four distinct attack methods. The proposed framework
provides a scalable and effective solution for enhancing the security and
reliability of deep learning systems.
|
2502.09111
|
DenseSplat: Densifying Gaussian Splatting SLAM with Neural Radiance
Prior
|
cs.CV
|
Gaussian SLAM systems excel in real-time rendering and fine-grained
reconstruction compared to NeRF-based systems. However, their reliance on
extensive keyframes is impractical for deployment in real-world robotic
systems, which typically operate under sparse-view conditions that can result
in substantial holes in the map. To address these challenges, we introduce
DenseSplat, the first SLAM system that effectively combines the advantages of
NeRF and 3DGS. DenseSplat utilizes sparse keyframes and NeRF priors for
initializing primitives that densely populate maps and seamlessly fill gaps. It
also implements geometry-aware primitive sampling and pruning strategies to
manage granularity and enhance rendering efficiency. Moreover, DenseSplat
integrates loop closure and bundle adjustment, significantly enhancing
frame-to-frame tracking accuracy. Extensive experiments on multiple large-scale
datasets demonstrate that DenseSplat achieves superior performance in tracking
and mapping compared to current state-of-the-art methods.
|
2502.09120
|
The influence of visual and linguistic cues on ignorance inference in
Vision-Language Models
|
cs.CL
|
This study explored how Vision-Language Models (VLMs) process ignorance
implicatures with visual and linguistic cues. Particularly, we focused on the
effects of contexts (precise and approximate contexts) and modifier types (bare
numerals, superlative, and comparative modifiers), which were considered
pragmatic and semantic factors respectively. Methodologically, we conducted a
truth-value judgment task in visually grounded settings using GPT-4o and Gemini
1.5 Pro. The results indicate that while both models exhibited sensitivity to
linguistic cues (modifier), they failed to process ignorance implicatures with
visual cues (context) as humans do. Specifically, the influence of context was
weaker and inconsistent across models, indicating challenges in pragmatic
reasoning for VLMs. On the other hand, superlative modifiers were more strongly
associated with ignorance implicatures as compared to comparative modifiers,
supporting the semantic view. These findings highlight the need for further
advancements in VLMs to process language-vision information in a
context-dependent way to achieve human-like pragmatic inference.
|
2502.09122
|
Improving Deep Regression with Tightness
|
cs.LG cs.AI cs.CV
|
For deep regression, preserving the ordinality of the targets with respect to
the feature representation improves performance across various tasks. However,
a theoretical explanation for the benefits of ordinality is still lacking. This
work reveals that preserving ordinality reduces the conditional entropy
$H(Z|Y)$ of representation $Z$ conditional on the target $Y$. However, our
findings reveal that typical regression losses do little to reduce $H(Z|Y)$,
even though it is vital for generalization performance. With this motivation,
we introduce an optimal transport-based regularizer to preserve the similarity
relationships of targets in the feature space to reduce $H(Z|Y)$. Additionally,
we introduce a simple yet efficient strategy of duplicating the regressor
targets, also with the aim of reducing $H(Z|Y)$. Experiments on three
real-world regression tasks verify the effectiveness of our strategies to
improve deep regression. Code:
https://github.com/needylove/Regression_tightness.
|
2502.09125
|
Automatic Pruning via Structured Lasso with Class-wise Information
|
cs.CV cs.AI
|
Most pruning methods concentrate on unimportant filters of neural networks.
However, they face the loss of statistical information due to a lack of
consideration for class-wise data. In this paper, from the perspective of
leveraging precise class-wise information for model pruning, we utilize
structured lasso with guidance from Information Bottleneck theory. Our approach
ensures that statistical information is retained during the pruning process.
With these techniques, we introduce two innovative adaptive network pruning
schemes: sparse graph-structured lasso pruning with Information Bottleneck
(\textbf{sGLP-IB}) and sparse tree-guided lasso pruning with Information
Bottleneck (\textbf{sTLP-IB}). The key aspect is pruning model filters using
sGLP-IB and sTLP-IB to better capture class-wise relatedness. Compared to
multiple state-of-the-art methods, our approaches demonstrate superior
performance across three datasets and six model architectures in extensive
experiments. For instance, using the VGG16 model on the CIFAR-10 dataset, we
achieve a parameter reduction of 85%, a decrease in FLOPs by 61%, and maintain
an accuracy of 94.10% (0.14 percentage points higher than the original model); we
reduce the parameters by 55% with an accuracy of 76.12% using the ResNet
architecture on ImageNet (a drop of only 0.03 percentage points). In summary, we
successfully reduce model size and
computational resource usage while maintaining accuracy. Our codes are at
https://anonymous.4open.science/r/IJCAI-8104.
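Structured-lasso pruning removes whole filters whose group norms a sparsity penalty has driven toward zero. A generic sketch of that final pruning step (not the paper's sGLP-IB/sTLP-IB schemes; the toy weights are invented):

```python
def group_norms(filters):
    """L2 norm of each filter's weights (one group per filter)."""
    return [sum(w * w for w in f) ** 0.5 for f in filters]

def prune_filters(filters, keep_ratio=0.5):
    """Keep the filters with the largest group norms. Structured pruning
    removes whole filters, so the remaining network stays dense."""
    norms = group_norms(filters)
    order = sorted(range(len(filters)), key=lambda i: norms[i], reverse=True)
    keep = sorted(order[: max(1, int(len(filters) * keep_ratio))])
    return [filters[i] for i in keep], keep

# Four toy filters; two have been driven near zero by a group-lasso penalty.
filters = [
    [0.9, -0.8, 0.7],
    [0.01, 0.02, -0.01],
    [-1.1, 0.5, 0.6],
    [0.0, 0.01, 0.0],
]
pruned, kept_indices = prune_filters(filters, keep_ratio=0.5)
```

The class-wise information described in the abstract would enter through how the group penalty is structured during training, not through this thresholding step itself.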
|
2502.09128
|
A Novel Dialect-Aware Framework for the Classification of Arabic
Dialects and Emotions
|
cs.CL cs.LG
|
Arabic is one of the oldest languages still in use today. As a result,
several Arabic-speaking regions have developed dialects that are unique to
them. Dialect and emotion recognition have various uses in Arabic text
analysis, such as determining an online customer's origin based on their
comments. Furthermore, intelligent chatbots that are aware of a user's emotions
can respond appropriately to the user. Current research in emotion detection in
the Arabic language lacks awareness of how emotions are exhibited in different
dialects, which motivates the work found in this study. This research addresses
the problems of dialect and emotion classification in Arabic. Specifically,
this is achieved by building a novel framework that can identify and predict
Arabic dialects and emotions from a given text. The framework consists of three
modules: A text-preprocessing module, a classification module, and a clustering
module with the novel capability of building new dialect-aware emotion
lexicons. The proposed framework generated a new emotional lexicon for
different dialects. It achieved an accuracy of 88.9% in classifying Arabic
dialects, which outperforms the state-of-the-art results by 6.45 percentage
points. Furthermore, the framework achieved 89.1% and 79% accuracy in detecting
emotions in the Egyptian and Gulf dialects, respectively.
|
2502.09130
|
Finite-Time Analysis of Discrete-Time Stochastic Interpolants
|
cs.LG
|
The stochastic interpolant framework offers a powerful approach for
constructing generative models based on ordinary differential equations (ODEs)
or stochastic differential equations (SDEs) to transform arbitrary data
distributions. However, prior analyses of this framework have primarily focused
on the continuous-time setting, assuming a perfect solution of the underlying
equations. In this work, we present the first discrete-time analysis of the
stochastic interpolant framework, where we introduce an innovative
discrete-time sampler and derive a finite-time upper bound on its distribution
estimation error. Our result provides a novel quantification of how different
factors, including the distance between source and target distributions and
estimation accuracy, affect the convergence rate and also offers a new
principled way to design efficient schedules for convergence acceleration.
Finally, numerical experiments are conducted on the discrete-time sampler to
corroborate our theoretical findings.
|
2502.09131
|
A Stochastic Fundamental Lemma with Reduced Disturbance Data
Requirements
|
eess.SY cs.SY math.OC
|
Recently, the fundamental lemma by Willems et al. has been extended to
stochastic LTI systems subject to process disturbances. Using this lemma
requires previously recorded data of inputs, outputs, and disturbances. In this
paper, we exploit causality concepts of stochastic control to propose a variant
of the stochastic fundamental lemma that does not require past disturbance data
in the Hankel matrices. Our developments rely on polynomial chaos expansions
and on the knowledge of the disturbance distribution. Similar to our previous
results, the proposed variant of the fundamental lemma allows one to predict
future input-output trajectories of stochastic LTI systems. We draw upon a
numerical example to illustrate the proposed variant in a data-driven control
context.
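The Hankel matrices referenced above collect sliding windows of recorded trajectories; a minimal construction for a scalar signal (generic to the fundamental-lemma literature, not specific to the stochastic variant):

```python
def hankel(signal, depth):
    """Depth-L Hankel matrix of a scalar signal: column j holds the
    window signal[j : j + depth]."""
    cols = len(signal) - depth + 1
    return [[signal[i + j] for j in range(cols)] for i in range(depth)]

u = [1.0, 2.0, 3.0, 4.0, 5.0]
H = hankel(u, depth=2)
# H = [[1.0, 2.0, 3.0, 4.0],
#      [2.0, 3.0, 4.0, 5.0]]
```

In the deterministic lemma, the columns of such input/output Hankel matrices span all length-L system trajectories provided the input is persistently exciting of sufficient order; the paper's contribution is removing the disturbance data from these matrices in the stochastic setting.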
|
2502.09135
|
Interpreting and Steering Protein Language Models through Sparse
Autoencoders
|
cs.LG q-bio.BM
|
The rapid advancements in transformer-based language models have
revolutionized natural language processing, yet understanding the internal
mechanisms of these models remains a significant challenge. This paper explores
the application of sparse autoencoders (SAE) to interpret the internal
representations of protein language models, specifically focusing on the ESM-2
8M parameter model. By performing a statistical analysis on each latent
component's relevance to distinct protein annotations, we identify potential
interpretations linked to various protein characteristics, including
transmembrane regions, binding sites, and specialized motifs.
We then leverage these insights to guide sequence generation, shortlisting
the relevant latent components that can steer the model towards desired targets
such as zinc finger domains. This work contributes to the emerging field of
mechanistic interpretability in biological sequence models, offering new
perspectives on model steering for sequence design.
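A sparse autoencoder of the kind applied here maps activations to an overcomplete latent space with a ReLU encoder, an L1 sparsity penalty, and a linear decoder. A minimal forward pass with illustrative toy weights (not actual ESM-2 activations):

```python
def relu(v):
    return [max(0.0, x) for x in v]

def matvec(m, v):
    return [sum(row[j] * v[j] for j in range(len(v))) for row in m]

def sae_forward(x, w_enc, b_enc, w_dec, l1_coeff=0.01):
    """Encode an activation vector into a sparse latent code, decode it back,
    and return (reconstruction, latent, loss = MSE + L1 sparsity penalty)."""
    z = relu([h + b for h, b in zip(matvec(w_enc, x), b_enc)])
    x_hat = matvec(w_dec, z)
    mse = sum((a - b) ** 2 for a, b in zip(x, x_hat)) / len(x)
    loss = mse + l1_coeff * sum(z)
    return x_hat, z, loss

# Toy setup: 2-dim activations, 3 overcomplete latent components.
w_enc = [[1.0, 0.0], [0.0, 1.0], [-1.0, -1.0]]
b_enc = [0.0, 0.0, 0.0]
w_dec = [[1.0, 0.0, -0.5], [0.0, 1.0, -0.5]]
x = [0.5, 0.0]
x_hat, z, loss = sae_forward(x, w_enc, b_enc, w_dec)
```

Only one latent component fires for this input, illustrating the sparsity that makes individual components candidates for interpretation; steering then amounts to clamping or scaling selected components of z before decoding.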
|
2502.09137
|
Trust Me, I Know the Way: Predictive Uncertainty in the Presence of
Shortcut Learning
|
cs.LG
|
The correct way to quantify predictive uncertainty in neural networks remains
a topic of active discussion. In particular, it is unclear whether the
state-of-the-art entropy decomposition leads to a meaningful representation of
model, or epistemic, uncertainty (EU) in light of a debate that pits
ignorance against disagreement perspectives. We aim to reconcile the
conflicting viewpoints by arguing that both are valid but arise from different
learning situations. Notably, we show that the presence of shortcuts is
decisive for EU manifesting as disagreement.
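The entropy decomposition under debate splits total predictive entropy into an expected-entropy (aleatoric) term and a mutual-information (epistemic, i.e. disagreement) term over ensemble members. A minimal sketch with invented predictive distributions:

```python
from math import log

def entropy(p):
    """Shannon entropy of a discrete distribution (natural log)."""
    return -sum(pi * log(pi) for pi in p if pi > 0)

def decompose(member_probs):
    """Split total predictive entropy into aleatoric (expected member entropy)
    and epistemic (mutual information, i.e. disagreement) parts."""
    n = len(member_probs)
    k = len(member_probs[0])
    mean_p = [sum(p[c] for p in member_probs) / n for c in range(k)]
    total = entropy(mean_p)
    aleatoric = sum(entropy(p) for p in member_probs) / n
    epistemic = total - aleatoric
    return total, aleatoric, epistemic

# Members that disagree confidently: uncertainty is mostly epistemic.
disagree = [[0.99, 0.01], [0.01, 0.99]]
# Members that agree on a 50/50 prediction: uncertainty is purely aleatoric.
agree = [[0.5, 0.5], [0.5, 0.5]]
_, _, eu_disagree = decompose(disagree)
_, _, eu_agree = decompose(agree)
```

The two crafted ensembles show the disagreement reading of EU: confident-but-conflicting members yield large epistemic uncertainty, while unanimous hedging yields none, which is exactly the behavior the paper ties to the presence of shortcuts.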
|
2502.09140
|
Replay-free Online Continual Learning with Self-Supervised MultiPatches
|
cs.LG cs.CV
|
Online Continual Learning (OCL) methods train a model on a non-stationary
data stream where only a few examples are available at a time, often leveraging
replay strategies. However, the use of replay is sometimes forbidden, especially
in applications with strict privacy regulations. Therefore, we propose
Continual MultiPatches (CMP), an effective plug-in for existing OCL
self-supervised learning strategies that avoids the use of replay samples. CMP
generates multiple patches from a single example and projects them into a
shared feature space, where patches coming from the same example are pushed
together without collapsing into a single point. CMP surpasses replay and other
SSL-based strategies on OCL streams, challenging the role of replay as a go-to
solution for self-supervised OCL.
|
2502.09142
|
LLM-Driven Augmented Reality Puppeteer: Controller-Free Voice-Commanded
Robot Teleoperation
|
cs.HC cs.RO
|
The integration of robotics and augmented reality (AR) presents
transformative opportunities for advancing human-robot interaction (HRI) by
improving usability, intuitiveness, and accessibility. This work introduces a
controller-free, LLM-driven voice-commanded AR puppeteering system, enabling
users to teleoperate a robot by manipulating its virtual counterpart in real
time. By leveraging natural language processing (NLP) and AR technologies, our
system -- prototyped using Meta Quest 3 -- eliminates the need for physical
controllers, enhancing ease of use while minimizing potential safety risks
associated with direct robot operation. A preliminary user demonstration
successfully validated the system's functionality, demonstrating its potential
for safer, more intuitive, and immersive robotic control.
|
2502.09143
|
Feature-based Graph Attention Networks Improve Online Continual Learning
|
cs.CV cs.LG
|
Online continual learning for image classification is crucial for models to
adapt to new data while retaining knowledge of previously learned tasks. This
capability is essential to address real-world challenges involving dynamic
environments and evolving data distributions. Traditional approaches
predominantly employ Convolutional Neural Networks, which are limited to
processing images as grids and primarily capture local patterns rather than
relational information. Although the emergence of transformer architectures has
improved the ability to capture relationships, these models often require
significantly larger resources. In this paper, we present a novel online
continual learning framework based on Graph Attention Networks (GATs), which
effectively capture contextual relationships and dynamically update the
task-specific representation via learned attention weights. Our approach
utilizes a pre-trained feature extractor to convert images into graphs using
hierarchical feature maps, representing information at varying levels of
granularity. These graphs are then processed by a GAT and incorporate an
enhanced global pooling strategy to improve classification performance for
continual learning. In addition, we propose the rehearsal memory duplication
technique that improves the representation of the previous tasks while
maintaining the memory budget. Comprehensive evaluations on benchmark datasets,
including SVHN, CIFAR10, CIFAR100, and MiniImageNet, demonstrate the
superiority of our method compared to the state-of-the-art methods.
|
2502.09148
|
Multimodal HIE Lesion Segmentation in Neonates: A Comparative Study of
Loss Functions
|
cs.CV
|
Segmentation of Hypoxic-Ischemic Encephalopathy (HIE) lesions in neonatal MRI
is a crucial but challenging task due to diffuse multifocal lesions with
varying volumes and the limited availability of annotated HIE lesion datasets.
Using the BONBID-HIE dataset, we implemented a 3D U-Net with optimized
preprocessing, augmentation, and training strategies to overcome data
constraints. The goal of this study is to identify the optimal loss function
specifically for the HIE lesion segmentation task. To this end, we evaluated
various loss functions, including Dice, Dice-Focal, Tversky, Hausdorff Distance
(HausdorffDT) Loss, and two proposed compound losses -- Dice-Focal-HausdorffDT
and Tversky-HausdorffDT -- to enhance segmentation performance. The results
show that different loss functions yield distinct segmentation masks, with
compound losses outperforming standalone losses. Tversky-HausdorffDT Loss
achieves the highest Dice and Normalized Surface Dice scores, while
Dice-Focal-HausdorffDT Loss minimizes Mean Surface Distance. This work
underscores the significance of task-specific loss function optimization,
demonstrating that combining region-based and boundary-aware losses leads to
more accurate HIE lesion segmentation, even with limited training data.
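The compound region-plus-boundary idea above can be sketched as a weighted sum of a soft Dice term and a distance-transform surrogate for the Hausdorff distance. This is our own minimal illustration, not the paper's implementation: the exact HausdorffDT formulation and weighting used in the study may differ.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt as edt

def dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss on probability maps (region-based term)."""
    inter = (pred * target).sum()
    return 1.0 - (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

def hausdorff_dt_loss(pred, target, alpha=2.0):
    """Distance-transform surrogate for the Hausdorff distance (one common
    formulation; the paper's exact variant may differ): voxel errors are
    weighted by their distance to the segmentation boundaries."""
    d_target = edt(target) + edt(1 - target)            # unsigned distance map
    d_pred = edt(pred > 0.5) + edt(1 - (pred > 0.5))
    return np.mean((pred - target) ** 2 * (d_target ** alpha + d_pred ** alpha))

def dice_hausdorff_dt(pred, target, w=0.5):
    """Hypothetical compound loss: region-based (Dice) plus boundary-aware
    (HausdorffDT) terms, with an illustrative 50/50 weighting."""
    return w * dice_loss(pred, target) + (1 - w) * hausdorff_dt_loss(pred, target)

# Toy 2D example: a perfect prediction yields (near-)zero compound loss.
t = np.zeros((16, 16)); t[4:12, 4:12] = 1.0
loss_perfect = dice_hausdorff_dt(t, t)
```

In practice such losses are applied to 3D probability volumes from the network's softmax output; the 2D toy arrays here only demonstrate the arithmetic.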
|
2502.09150
|
Shortcut Learning Susceptibility in Vision Classifiers
|
cs.LG cs.CV
|
Shortcut learning, where machine learning models exploit spurious
correlations in data instead of capturing meaningful features, poses a
significant challenge to building robust and generalizable models. This
phenomenon is prevalent across various machine learning applications, including
vision, natural language processing, and speech recognition, where models may
find unintended cues that minimize training loss but fail to capture the
underlying structure of the data. Vision classifiers such as Convolutional
Neural Networks (CNNs), Multi-Layer Perceptrons (MLPs), and Vision Transformers
(ViTs) leverage distinct architectural principles to process spatial and
structural information, making them differently susceptible to shortcut
learning. In this study, we systematically evaluate these architectures by
introducing deliberate shortcuts into the dataset that are positionally
correlated with class labels, creating a controlled setup to assess whether
models rely on these artificial cues or learn actual distinguishing features.
We perform quantitative evaluation by training models on the shortcut-modified
dataset and testing them on two different test sets -- one containing the same
shortcuts and another without them -- to determine the extent of reliance on
shortcuts. Additionally, qualitative evaluation is performed using network
inversion-based reconstruction techniques to analyze what the models
internalize in their weights, aiming to reconstruct the training data as
perceived by the classifiers. We evaluate shortcut learning behavior across
multiple benchmark datasets, including MNIST, Fashion-MNIST, SVHN, and
CIFAR-10, to compare the susceptibility of different vision classifier
architectures to shortcut reliance and assess their varying degrees of
sensitivity to spurious correlations.
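The controlled setup described above, in which a deliberate shortcut is positionally correlated with the class label, might be injected as follows. This is a hypothetical sketch: the function name, patch size, and per-class position scheme are our own illustration, not the paper's.

```python
import numpy as np

def add_positional_shortcut(images, labels, num_classes=10, patch=3):
    """Inject a bright patch whose position is determined by the class label,
    creating a spurious cue a classifier can exploit instead of real features.

    images: array of shape (n, height, width) with values in [0, 1].
    """
    out = images.copy()
    h, w = images.shape[1], images.shape[2]
    for cls in range(num_classes):
        # Deterministic per-class column along the top rows of the image.
        col = (cls * (w - patch)) // max(num_classes - 1, 1)
        idx = np.where(labels == cls)[0]
        out[idx, 0:patch, col:col + patch] = 1.0  # maximal-intensity patch
    return out

# Toy example: four blank 28x28 "images" with labels 0..3.
imgs = np.zeros((4, 28, 28), dtype=np.float32)
lbls = np.array([0, 1, 2, 3])
shortcut_imgs = add_positional_shortcut(imgs, lbls)
```

Training on such data and evaluating on a clean copy (without the patches) exposes how much of a model's accuracy rides on the artificial cue.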
|
2502.09151
|
Regularization can make diffusion models more efficient
|
cs.LG math.ST stat.ML stat.TH
|
Diffusion models are one of the key architectures of generative AI. Their
main drawback, however, is the computational costs. This study indicates that
the concept of sparsity, well known especially in statistics, can provide a
pathway to more efficient diffusion pipelines. Our mathematical guarantees
prove that sparsity can reduce the input dimension's influence on the
computational complexity to that of a much smaller intrinsic dimension of the
data. Our empirical findings confirm that inducing sparsity can indeed lead to
better samples at a lower cost.
|
2502.09152
|
Vertical Federated Continual Learning via Evolving Prototype Knowledge
|
cs.LG cs.NE
|
Vertical Federated Learning (VFL) has garnered significant attention as a
privacy-preserving machine learning framework for sample-aligned feature
federation. However, traditional VFL approaches do not address the challenges
of class and feature continual learning, resulting in catastrophic forgetting
of knowledge from previous tasks. To address the above challenge, we propose a
novel vertical federated continual learning method, named Vertical Federated
Continual Learning via Evolving Prototype Knowledge (V-LETO), which primarily
facilitates the transfer of knowledge from previous tasks through the evolution
of prototypes. Specifically, we propose an evolving prototype knowledge method,
enabling the global model to retain both previous and current task knowledge.
Furthermore, we introduce a model optimization technique that mitigates the
forgetting of previous task knowledge by restricting updates to specific
parameters of the local model, thereby enhancing overall performance. Extensive
experiments conducted in both class-incremental (CIL) and feature-incremental
(FIL) settings demonstrate that our method, V-LETO, outperforms the other state-of-the-art methods. For example, our method
outperforms the state-of-the-art method by 10.39% and 35.15% for CIL and FIL
tasks, respectively. Our code is available at
https://anonymous.4open.science/r/V-LETO-0108/README.md.
|
2502.09155
|
Use of Air Quality Sensor Network Data for Real-time Pollution-Aware POI
Suggestion
|
cs.IR
|
This demo paper presents AirSense-R, a privacy-preserving mobile application
that provides real-time, pollution-aware recommendations for points of interest
(POIs) in urban environments. By combining real-time air quality monitoring
data with user preferences, the proposed system aims to help users make
health-conscious decisions about the locations they visit. The application
utilizes collaborative filtering for personalized suggestions and federated
learning for privacy protection, and integrates air pollutant readings from
AirSENCE sensor networks in cities such as Bari, Italy, and Cork, Ireland.
Additionally, the AirSENCE prediction engine can be employed to detect anomalous
readings and interpolate air quality readings in areas with sparse sensor
coverage. This system offers a promising, health-oriented POI recommendation
solution that adapts dynamically to current urban air quality conditions while
safeguarding user privacy. The code of AirTOWN and a demonstration video are
made available at the following repo:
https://github.com/AirtownApp/Airtown-Application.git.
|
2502.09156
|
Improving TCM Question Answering through Tree-Organized Self-Reflective
Retrieval with LLMs
|
cs.CL
|
Objectives: Large language models (LLMs) can harness medical knowledge for
intelligent question answering (Q&A), promising support for auxiliary diagnosis
and medical talent cultivation. However, there is a deficiency of highly
efficient retrieval-augmented generation (RAG) frameworks within the domain of
Traditional Chinese Medicine (TCM). Our purpose is to observe the effect of the
Tree-Organized Self-Reflective Retrieval (TOSRR) framework on LLMs in TCM Q&A
tasks.
Materials and Methods: We introduce a novel knowledge-organization approach,
constructing a hierarchical, tree-structured knowledge base. At
inference time, our self-reflection framework retrieves from this knowledge
base, integrating information across chapters. Questions from the TCM Medical
Licensing Examination (MLE) and the college Classics Course Exam (CCE) were
randomly selected as benchmark datasets.
Results: By coupling with GPT-4, the framework can improve the best
performance on the TCM MLE benchmark by 19.85% in absolute accuracy, and
improve recall accuracy from 27% to 38% on CCE datasets. In manual evaluation,
the framework improves a total of 18.52 points across dimensions of safety,
consistency, explainability, compliance, and coherence.
Conclusion: The TOSRR framework can effectively improve LLM's capability in
Q&A tasks of TCM.
|
2502.09164
|
E-MD3C: Taming Masked Diffusion Transformers for Efficient Zero-Shot
Object Customization
|
cs.CV cs.LG
|
We propose E-MD3C ($\underline{E}$fficient $\underline{M}$asked
$\underline{D}$iffusion Transformer with Disentangled $\underline{C}$onditions
and $\underline{C}$ompact $\underline{C}$ollector), a highly efficient
framework for zero-shot object image customization. Unlike prior works reliant
on resource-intensive Unet architectures, our approach employs lightweight
masked diffusion transformers operating on latent patches, offering
significantly improved computational efficiency. The framework integrates three
core components: (1) an efficient masked diffusion transformer for processing
autoencoder latents, (2) a disentangled condition design that ensures
compactness while preserving background alignment and fine details, and (3) a
learnable Conditions Collector that consolidates multiple inputs into a compact
representation for efficient denoising and learning. E-MD3C outperforms the
existing approach on the VITON-HD dataset across metrics such as PSNR, FID,
SSIM, and LPIPS, demonstrating clear advantages in parameters, memory
efficiency, and inference speed. With only $\frac{1}{4}$ of the parameters, our
Transformer-based 468M model delivers $2.5\times$ faster inference and uses
$\frac{2}{3}$ of the GPU memory compared to a 1720M Unet-based latent
diffusion model.
|
2502.09166
|
Integrated Sensing and Communication with Distributed Rate-Limited
Helpers
|
cs.IT math.IT
|
This paper studies integrated sensing and communication (ISAC) systems with
two rate-limited helpers who observe the channel state sequence and the
feedback sequence, respectively. Depending on the timing of compressing and
using the state information, our proposed coding scheme gives an inner bound of
the capacity-compression-distortion tradeoff region. The tradeoff is realized
by sending part of the state information at the beginning of the transmission
to facilitate the communication and compressing the remaining part together
with the feedback signal. The inner bound becomes tight in several special
cases.
|
2502.09168
|
Musical Heritage Historical Entity Linking
|
cs.CL
|
Linking named entities occurring in text to their corresponding entity in a
Knowledge Base (KB) is challenging, especially when dealing with historical
texts. In this work, we introduce Musical Heritage named Entities Recognition,
Classification and Linking (MHERCL), a novel benchmark consisting of manually
annotated sentences extrapolated from historical periodicals of the music
domain. MHERCL contains named entities under-represented or absent in the most
famous KBs. We experiment with several state-of-the-art models on the Entity
Linking (EL) task and show that MHERCL is a challenging dataset for all of
them. We propose a novel unsupervised EL model and a method to extend
supervised entity linkers by using Knowledge Graphs (KGs) to tackle the main
difficulties posed by historical documents. Our experiments reveal that relying
on unsupervised techniques and improving models with logical constraints based
on KGs and heuristics to predict NIL entities (entities not represented in the
KB of reference) results in better EL performance on historical documents.
|
2502.09170
|
LimSim Series: An Autonomous Driving Simulation Platform for Validation
and Enhancement
|
cs.RO
|
Closed-loop simulation environments play a crucial role in the validation and
enhancement of autonomous driving systems (ADS). However, certain challenges
warrant significant attention, including balancing simulation accuracy with
duration, reconciling functionality with practicality, and establishing
comprehensive evaluation mechanisms. This paper addresses these challenges by
introducing the LimSim Series, a comprehensive simulation platform designed to
support the rapid deployment and efficient iteration of ADS. The LimSim Series
integrates multi-type information from road networks, employs human-like
decision-making and planning algorithms for background vehicles, and introduces
the concept of the Area of Interest (AoI) to optimize computational resources.
The platform offers a variety of baseline algorithms and user-friendly
interfaces, facilitating flexible validation of multiple technical pipelines.
Additionally, the LimSim Series incorporates multi-dimensional evaluation
metrics, delivering thorough insights into system performance, thus enabling
researchers to promptly identify issues for further improvements. Experiments
demonstrate that the LimSim Series is compatible with modular, end-to-end, and
VLM-based knowledge-driven systems. It can assist in the iteration and updating
of ADS by evaluating performance across various scenarios. The code of the
LimSim Series is released at: https://github.com/PJLab-ADG/LimSim.
|
2502.09172
|
LOB-Bench: Benchmarking Generative AI for Finance -- an Application to
Limit Order Book Data
|
cs.LG cs.CE q-fin.CP q-fin.TR
|
While financial data presents one of the most challenging and interesting
sequence modelling tasks due to high noise, heavy tails, and strategic
interactions, progress in this area has been hindered by the lack of consensus
on quantitative evaluation paradigms. To address this, we present LOB-Bench, a
benchmark implemented in Python, designed to evaluate the quality and realism
of generative message-by-order data for limit order books (LOB) in the LOBSTER
format. Our framework measures distributional differences in conditional and
unconditional statistics between generated and real LOB data, supporting
flexible multivariate statistical evaluation. The benchmark also includes
commonly used LOB statistics such as spread, order book volumes, order
imbalance, and message inter-arrival times, along with scores from a trained
discriminator network. Lastly, LOB-Bench contains "market impact metrics", i.e.
the cross-correlations and price response functions for specific events in the
data. We benchmark generative autoregressive state-space models, a (C)GAN, as
well as a parametric LOB model and find that the autoregressive GenAI approach
beats traditional model classes.
|
2502.09173
|
Two-Stage Representation Learning for Analyzing Movement Behavior
Dynamics in People Living with Dementia
|
cs.LG cs.AI
|
In remote healthcare monitoring, time series representation learning reveals
critical patient behavior patterns from high-frequency data. This study
analyzes home activity data from individuals living with dementia by proposing
a two-stage, self-supervised learning approach tailored to uncover low-rank
structures. The first stage converts time-series activities into text sequences
encoded by a pre-trained language model, providing a rich, high-dimensional
latent state space using a PageRank-based method. This PageRank vector captures
latent state transitions, effectively compressing complex behaviour data into a
succinct form. The resulting low-rank representation not only enhances model
interpretability but also facilitates clustering and transition analysis,
revealing key behavioral patterns correlated with clinical metrics such as MMSE
and ADAS-COG scores. Our findings demonstrate the
framework's potential in supporting cognitive status prediction, personalized
care interventions, and large-scale health monitoring.
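The idea of compressing a sequence of latent state transitions into a single PageRank vector can be sketched as below. This is our own illustration of the general technique; the paper's exact graph construction and state definition may differ.

```python
import numpy as np

def pagerank_vector(states, n_states, damping=0.85, iters=100):
    """Compress a discrete state sequence into a PageRank vector over the
    state-transition graph (a sketch of the idea, not the paper's code)."""
    # Build a row-stochastic transition matrix from observed transitions.
    M = np.zeros((n_states, n_states))
    for a, b in zip(states[:-1], states[1:]):
        M[a, b] += 1.0
    row_sums = M.sum(axis=1, keepdims=True)
    # Rows with no outgoing transitions fall back to a uniform distribution.
    M = np.where(row_sums > 0, M / np.maximum(row_sums, 1e-12), 1.0 / n_states)
    # Standard damped power iteration.
    r = np.full(n_states, 1.0 / n_states)
    for _ in range(iters):
        r = (1 - damping) / n_states + damping * (r @ M)
    return r

# Toy example: a short sequence of three hypothetical latent activity states.
seq = [0, 1, 1, 2, 0, 1, 2, 2, 0]
r = pagerank_vector(seq, n_states=3)
```

The resulting vector is a fixed-length summary of arbitrarily long behaviour sequences, which is what makes downstream clustering and transition analysis tractable.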
|
2502.09175
|
FLAME: Flexible LLM-Assisted Moderation Engine
|
cs.CR cs.AI cs.CL
|
The rapid advancement of Large Language Models (LLMs) has introduced
significant challenges in moderating user-model interactions. While LLMs
demonstrate remarkable capabilities, they remain vulnerable to adversarial
attacks, particularly ``jailbreaking'' techniques that bypass content safety
measures. Current content moderation systems, which primarily rely on input
prompt filtering, have proven insufficient, with techniques like Best-of-N
(BoN) jailbreaking achieving success rates of 80% or more against popular LLMs.
In this paper, we introduce Flexible LLM-Assisted Moderation Engine (FLAME): a
new approach that shifts the focus from input filtering to output moderation.
Unlike traditional circuit-breaking methods that analyze user queries, FLAME
evaluates model responses, offering several key advantages: (1) computational
efficiency in both training and inference, (2) enhanced resistance to BoN
jailbreaking attacks, and (3) flexibility in defining and updating safety
criteria through customizable topic filtering. Our experiments demonstrate that
FLAME significantly outperforms current moderation systems. For example, FLAME
reduces the attack success rate in GPT-4o-mini and DeepSeek-v3 by a factor of ~9,
while maintaining low computational overhead. We provide comprehensive
evaluation on various LLMs and analyze the engine's efficiency against
state-of-the-art jailbreaking techniques. This work contributes to the development of more
robust and adaptable content moderation systems for LLMs.
|
2502.09180
|
A Machine Learning Approach to Sensor Substitution for Non-Prehensile
Manipulation
|
cs.RO
|
Mobile manipulators are increasingly deployed in complex environments,
requiring diverse sensors to perceive and interact with their surroundings.
However, equipping every robot with every possible sensor is often impractical
due to cost and physical constraints. A critical challenge arises when robots
with differing sensor capabilities need to collaborate or perform similar
tasks. For example, consider a scenario where a mobile manipulator equipped
with high-resolution tactile skin is skilled at non-prehensile manipulation
tasks like pushing. If this robot needs to be replaced or augmented by a robot
lacking such tactile sensing, the learned manipulation policies become
inapplicable. This paper addresses the problem of sensor substitution in
non-prehensile manipulation. We propose a novel machine learning-based
framework that enables a robot with a limited sensor set (e.g., LiDAR or RGB-D
camera) to effectively perform tasks previously reliant on a richer sensor
suite (e.g., tactile skin). Our approach learns a mapping between the available
sensor data and the information provided by the substituted sensor, effectively
synthesizing the missing sensory input. Specifically, we demonstrate the
efficacy of our framework by training a model to substitute tactile skin data
for the task of non-prehensile pushing using a mobile manipulator. We show that
a manipulator equipped only with LiDAR or RGB-D can, after training, achieve
pushing performance comparable to, and sometimes even better than, a mobile
base utilizing direct tactile feedback.
|
2502.09183
|
RefineCoder: Iterative Improving of Large Language Models via Adaptive
Critique Refinement for Code Generation
|
cs.CL cs.AI
|
Code generation has attracted increasing attention with the rise of Large
Language Models (LLMs). Many studies have developed powerful code LLMs by
synthesizing code-related instruction data and applying supervised fine-tuning.
However, these methods are limited by teacher model distillation and ignore the
potential of iterative refinement by self-generated code. In this paper, we
propose Adaptive Critique Refinement (ACR), which enables the model to refine
itself by self-generated code and external critique, rather than directly
imitating the code responses of the teacher model. Concretely, ACR includes a
composite scoring system with LLM-as-a-Judge to evaluate the quality of code
responses and a selective critique strategy with LLM-as-a-Critic to critique
self-generated low-quality code responses. We develop the RefineCoder series by
iteratively applying ACR, achieving continuous performance improvement on
multiple code generation benchmarks. Compared to the baselines of the same
size, our proposed RefineCoder series can achieve comparable or even superior
performance using less data.
|
2502.09184
|
Array-Fed RIS: Validation of Friis-Based Modeling Using Full-Wave
Simulations
|
eess.SP cs.SY eess.SY
|
Space-fed large antenna arrays offer superior efficiency, simplicity, and
reductions in size, weight, power, and cost (SWaP-C) compared to
constrained-feed systems. Historically, horn antennas have been used for space
feeding, but they suffer from limitations such as bulky designs, low aperture
efficiency ($\approx 50\%$), and restricted degrees of freedom at the
continuous aperture. In contrast, planar patch arrays achieve significantly
higher aperture efficiency ($>90\%$) due to their more uniform aperture
distribution, reduced weight, and increased degrees of freedom from the
discretized aperture. Building on these advantages, we proposed an array-fed
Reflective Intelligent Surface (RIS) system, where an active multi-antenna
feeder (AMAF) optimizes power transfer by aligning with the principal eigenmode
of the AMAF-RIS propagation matrix $\mathbf{T}$. While our previous studies
relied on the Friis transmission formula for system modeling, we now validate
this approach through full-wave simulations in CST Microwave Studio. By
comparing the Friis-based matrix, $\mathbf{T}_{\rm Friis}$, with the full-wave
solution, $\mathbf{T}_{\rm full\text{-}wave}$, we validate the relevance of the
Friis-based modeling for top-level system design. Our findings confirm the
feasibility of the proposed AMAF-RIS architecture for next-generation
communication systems.
|
2502.09188
|
Matina: A Large-Scale 73B Token Persian Text Corpus
|
cs.CL cs.AI
|
Text corpora are essential for training models used in tasks like
summarization, translation, and large language models (LLMs). While various
efforts have been made to collect monolingual and multilingual datasets in many
languages, Persian has often been underrepresented due to limited resources for
data collection and preprocessing. Existing Persian datasets are typically
small and lack content diversity, consisting mainly of weblogs and news
articles. This shortage of high-quality, varied data has slowed the development
of NLP models and open-source LLMs for Persian. Since model performance depends
heavily on the quality of training data, we address this gap by introducing the
Matina corpus, a new Persian dataset of 72.9B tokens, carefully preprocessed
and deduplicated to ensure high data quality. We further assess its
effectiveness by training and evaluating transformer-based models on key NLP
tasks. Both the dataset and preprocessing codes are publicly available,
enabling researchers to build on and improve this resource for future Persian
NLP advancements.
|
2502.09192
|
Thinking beyond the anthropomorphic paradigm benefits LLM research
|
cs.CL
|
Anthropomorphism, or the attribution of human traits to technology, is an
automatic and unconscious response that occurs even in those with advanced
technical expertise. In this position paper, we analyze hundreds of thousands
of computer science research articles from the past decade and present
empirical evidence of the prevalence and growth of anthropomorphic terminology
in research on large language models (LLMs). This terminology reflects deeper
anthropomorphic conceptualizations which shape how we think about and conduct
LLM research. We argue these conceptualizations may be limiting, and that
challenging them opens up new pathways for understanding and improving LLMs
beyond human analogies. To illustrate this, we identify and analyze five core
anthropomorphic assumptions shaping prominent methodologies across the LLM
development lifecycle, from the assumption that models must use natural
language for reasoning tasks to the assumption that model capabilities should
be evaluated through human-centric benchmarks. For each assumption, we
demonstrate how non-anthropomorphic alternatives can open new directions for
research and development.
|
2502.09193
|
Generalizability through Explainability: Countering Overfitting with
Counterfactual Examples
|
cs.LG
|
Overfitting is a well-known issue in machine learning that occurs when a
model struggles to generalize its predictions to new, unseen data beyond the
scope of its training set. Traditional techniques to mitigate overfitting
include early stopping, data augmentation, and regularization. In this work, we
demonstrate that the degree of overfitting of a trained model is correlated
with the ability to generate counterfactual examples. The higher the
overfitting, the easier it will be to find a valid counterfactual example for a
randomly chosen input data point. Therefore, we introduce CF-Reg, a novel
regularization term in the training loss that controls overfitting by ensuring
enough margin between each instance and its corresponding counterfactual.
Experiments conducted across multiple datasets and models show that our
counterfactual regularizer generally outperforms existing regularization
techniques.
|
2502.09194
|
XAInomaly: Explainable and Interpretable Deep Contractive Autoencoder
for O-RAN Traffic Anomaly Detection
|
cs.IT math.IT
|
Generative Artificial Intelligence (AI) techniques have become an integral part
of advancing next-generation wireless communication systems by enabling
sophisticated data modeling and feature extraction for enhanced network
performance. In the realm of open radio access networks (O-RAN), characterized
by their disaggregated architecture and heterogeneous components from multiple
vendors, the deployment of generative models offers significant advantages for
network management such as traffic analysis, traffic forecasting and anomaly
detection. However, the complex and dynamic nature of O-RAN introduces
challenges that necessitate not only accurate detection mechanisms but also
reduced complexity, scalability, and most importantly interpretability to
facilitate effective network management. In this study, we introduce the
XAInomaly framework, an explainable and interpretable Semi-supervised (SS) Deep
Contractive Autoencoder (DeepCAE) design for anomaly detection in O-RAN. Our
approach leverages the generative modeling capabilities of our SS-DeepCAE model
to learn compressed, robust representations of normal network behavior, which
captures essential features, enabling the identification of deviations
indicative of anomalies. To address the black-box nature of deep learning
models, we propose a reactive Explainable AI (XAI) technique called fastshap-C.
|
2502.09198
|
Understanding High-Dimensional Bayesian Optimization
|
cs.LG
|
Recent work reported that simple Bayesian optimization methods perform well
for high-dimensional real-world tasks, seemingly contradicting prior work and
tribal knowledge. This paper investigates the 'why'. We identify fundamental
challenges that arise in high-dimensional Bayesian optimization and explain why
recent methods succeed. Our analysis shows that vanishing gradients caused by
Gaussian process initialization schemes play a major role in the failures of
high-dimensional Bayesian optimization and that methods that promote local
search behaviors are better suited for the task. We find that maximum
likelihood estimation of Gaussian process length scales suffices for
state-of-the-art performance. Based on this, we propose a simple variant of
maximum likelihood estimation called MSR that leverages these findings to
achieve state-of-the-art performance on a comprehensive set of real-world
applications. We also present targeted experiments to illustrate and confirm
our findings.
|
2502.09202
|
Faster than real-time detection of shot boundaries, sampling structure
and dynamic keyframes in video
|
cs.CV
|
The detection of shot boundaries (hardcuts and short dissolves), sampling
structure (progressive / interlaced / pulldown) and dynamic keyframes in a
video are fundamental video analysis tasks which have to be done before any
further high-level analysis tasks. We present a novel algorithm which does all
these analysis tasks in a unified way, by utilizing a combination of
inter-frame and intra-frame measures derived from the motion field and
normalized cross correlation. The algorithm runs four times faster than
real-time due to sparse and selective calculation of these measures. An initial
evaluation furthermore shows that the proposed algorithm is extremely robust
even for challenging content showing large camera or object motion,
flashlights, flicker or low contrast / noise.
|
2502.09203
|
Revisiting Euclidean Alignment for Transfer Learning in EEG-Based
Brain-Computer Interfaces
|
cs.HC cs.LG
|
Due to the non-stationarity and large individual differences of EEG signals,
EEG-based brain-computer interfaces (BCIs) usually need subject-specific
calibration to tailor the decoding algorithm for each new subject, which is
time-consuming and user-unfriendly, hindering their real-world applications.
Transfer learning (TL) has been extensively used to expedite the calibration,
by making use of EEG data from other subjects/sessions. An important
consideration in TL for EEG-based BCIs is to reduce the data distribution
discrepancies among different subjects/sessions, to avoid negative transfer.
Euclidean alignment (EA) was proposed in 2020 to address this challenge.
Numerous experiments from 10 different BCI paradigms demonstrated its
effectiveness and efficiency. This paper revisits the EA, explaining its
procedure and correct usage, introducing its applications and extensions, and
pointing out potential new research directions. It should be very helpful to
BCI researchers, especially those who are working on EEG signal decoding.
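The EA procedure revisited above is, as commonly described in the literature, a two-step whitening: average the per-trial spatial covariance matrices to form a reference matrix, then left-multiply every trial by the reference's inverse square root. A minimal sketch (variable names are our own; random data stands in for real EEG trials):

```python
import numpy as np
from scipy.linalg import fractional_matrix_power

def euclidean_alignment(trials):
    """Euclidean alignment (EA): whiten all trials of a subject/session with
    the inverse square root of their mean spatial covariance matrix.

    trials: array of shape (n_trials, n_channels, n_samples).
    """
    covs = np.array([X @ X.T / X.shape[1] for X in trials])
    R_bar = covs.mean(axis=0)                              # reference matrix
    R_inv_sqrt = np.real(fractional_matrix_power(R_bar, -0.5))
    return np.array([R_inv_sqrt @ X for X in trials])

# Synthetic stand-in: 20 trials, 8 channels, 128 samples each.
rng = np.random.default_rng(0)
X = rng.standard_normal((20, 8, 128))
X_aligned = euclidean_alignment(X)
# After EA, the mean covariance of the aligned trials is the identity,
# which is what makes data from different subjects/sessions comparable.
mean_cov = np.mean([Xi @ Xi.T / Xi.shape[1] for Xi in X_aligned], axis=0)
```

EA is applied per subject or session before pooling data for transfer learning, so each source distribution is centred at the same reference.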
|
2502.09204
|
Logical Lease Litigation: Prolog and LLMs for Rental Law Compliance in
New York
|
cs.AI cs.LO
|
Legal cases require careful logical reasoning following the laws, whereas
interactions with non-technical users must be in natural language. As an
application combining logical reasoning using Prolog and natural language
processing using large language models (LLMs), this paper presents a novel
approach and system, LogicLease, to automate the analysis of landlord-tenant
legal cases in the state of New York. LogicLease determines compliance with
relevant legal requirements by analyzing case descriptions and citing all
relevant laws. It leverages LLMs for information extraction and Prolog for
legal reasoning. By separating information extraction from legal reasoning,
LogicLease achieves greater transparency and control over the legal logic
applied to each case. We evaluate the accuracy, efficiency, and robustness of
LogicLease through a series of tests, achieving 100% accuracy and an average
processing time of 2.57 seconds. LogicLease presents advantages over
state-of-the-art LLM-based legal analysis systems by providing clear,
step-by-step reasoning, citing specific laws, and distinguishing itself by its
ability to avoid hallucinations -- a common issue in LLMs.
|
2502.09205
|
Counterfactual Explanations as Plans
|
cs.AI cs.LO
|
There has been considerable recent interest in explainability in AI,
especially with black-box machine learning models. As correctly observed by the
planning community, when the application at hand is not a single-shot decision
or prediction, but a sequence of actions that depend on observations, a richer
notion of explanation is desirable.
In this paper, we look to provide a formal account of ``counterfactual
explanations,'' in terms of action sequences. We then show that this
naturally leads to an account of model reconciliation, which might take the
form of the user correcting the agent's model, or suggesting actions to add to
the agent's plan. For this, we will need to articulate what is true versus what is
known, and we appeal to a modal fragment of the situation calculus to formalise
these intuitions. We consider various settings: the agent knowing partial
truths, weakened truths and having false beliefs, and show that our definitions
easily generalize to these different settings.
|
2502.09206
|
Efficient OWL2QL Meta-reasoning Using ASP-based Hybrid Knowledge Bases
|
cs.LO cs.AI cs.SC
|
Metamodeling refers to scenarios in ontologies in which classes and roles can
be members of classes or occur in roles. This is a desirable modelling feature
in several applications, but allowing it without restrictions is problematic
for several reasons, mainly because it causes undecidability. Therefore,
practical languages either forbid metamodeling explicitly or treat occurrences
of classes as instances to be semantically different from other occurrences,
thereby not allowing metamodeling semantically. Several extensions have been
proposed to provide metamodeling to some extent. Building on earlier work that
reduces metamodeling query answering to Datalog query answering, recently
reductions to query answering over hybrid knowledge bases were proposed with
the aim of using the Datalog transformation only where necessary. Preliminary
work showed that the approach works, but the hoped-for performance improvements
were not observed yet. In this work we expand on this body of work by improving
the theoretical basis of the reductions and by using alternative tools that
show competitive performance.
|
2502.09209
|
On LLM-generated Logic Programs and their Inference Execution Methods
|
cs.AI
|
Large Language Models (LLMs) trained on petabytes of data are highly
compressed repositories of a significant proportion of the knowledge
accumulated and distilled so far. In this paper we study techniques to elicit
this knowledge in the form of several classes of logic programs, including
propositional Horn clauses, Dual Horn clauses, relational triplets and Definite
Clause Grammars. Exposing this knowledge as logic programs enables sound
reasoning methods that can verify alignment of LLM outputs to their intended
uses and extend their inference capabilities. We study new execution methods
for the generated programs, including soft-unification of abducible facts
against LLM-generated content stored in a vector database as well as GPU-based
acceleration of minimal model computation that supports inference with large
LLM-generated programs.
|
2502.09211
|
Visual Graph Question Answering with ASP and LLMs for Language Parsing
|
cs.AI cs.CV cs.LO
|
Visual Question Answering (VQA) is a challenging problem that requires
processing multimodal input. Answer-Set Programming (ASP) has shown great
potential in this regard to add interpretability and explainability to modular
VQA architectures. In this work, we address the problem of how to integrate ASP
with modules for vision and natural language processing to solve a new and
demanding VQA variant that is concerned with images of graphs (not graphs in
symbolic form). Images containing graph-based structures are a ubiquitous and
popular form of visualisation. Here, we deal with the particular problem of
graphs inspired by transit networks, and we introduce a novel dataset that
amends an existing one by adding images of graphs that resemble metro lines.
Our modular neuro-symbolic approach combines optical graph recognition for
graph parsing, a pretrained optical character recognition neural network for
parsing labels, Large Language Models (LLMs) for language processing, and ASP
for reasoning. This method serves as a first baseline and achieves an overall
average accuracy of 73% on the dataset. Our evaluation provides further
evidence of the potential of modular neuro-symbolic systems, in particular with
pretrained models that do not involve any further training and logic
programming for reasoning, to solve complex VQA tasks.
|
2502.09212
|
LP-LM: No Hallucinations in Question Answering with Logic Programming
|
cs.AI cs.CL
|
Large language models (LLMs) are able to generate human-like responses to
user queries. However, LLMs exhibit inherent limitations, especially because
they hallucinate. This paper introduces LP-LM, a system that grounds answers to
questions in known facts contained in a knowledge base (KB), facilitated
through semantic parsing in Prolog, and always produces answers that are
reliable.
LP-LM generates a most probable constituency parse tree along with a
corresponding Prolog term for an input question via Prolog definite clause
grammar (DCG) parsing. The term is then executed against a KB of natural
language sentences also represented as Prolog terms for question answering. By
leveraging DCG and tabling, LP-LM runs in linear time in the size of input
sentences for sufficiently many grammar rules. Performing experiments comparing
LP-LM with current well-known LLMs in accuracy, we show that LLMs hallucinate
on even simple questions, unlike LP-LM.
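A hypothetical toy in the spirit of the LP-LM pipeline (not its actual Prolog DCG implementation; the question form, predicate names, and KB below are invented for illustration): parse a fixed subject-verb-object question into a term and evaluate it only against known facts, so every answer is grounded in the KB rather than generated.

```python
# Toy sketch of KB-grounded question answering (illustrative only, not LP-LM).
# Facts are stored as tuples standing in for Prolog terms.
KB = {("likes", "john", "mary"), ("likes", "mary", "sushi")}

def parse(question):
    # "does john like mary" -> ("likes", "john", "mary")
    words = question.lower().rstrip("?").split()
    if len(words) == 4 and words[0] == "does":
        return (words[2] + "s", words[1], words[3])
    raise ValueError("unsupported question form")

def answer(question):
    # Grounded lookup: returns True/False from the KB, never a guess.
    return parse(question) in KB

print(answer("does john like mary"))   # -> True
print(answer("does mary like john"))   # -> False
```

Because the only answer path is a lookup over parsed terms, an unanswerable or malformed question fails loudly instead of producing a fabricated reply.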
|
2502.09213
|
Neuro-Symbolic Contrastive Learning for Cross-domain Inference
|
cs.LG cs.CL
|
Pre-trained language models (PLMs) have made significant advances in natural
language inference (NLI) tasks, however their sensitivity to textual
perturbations and dependence on large datasets indicate an over-reliance on
shallow heuristics. In contrast, inductive logic programming (ILP) excels at
inferring logical relationships across diverse, sparse and limited datasets,
but its discrete nature requires the inputs to be precisely specified, which
limits its applicability. This paper proposes a bridge between the two
approaches: neuro-symbolic contrastive learning. This allows for smooth and
differentiable optimisation that improves logical accuracy across an otherwise
discrete, noisy, and sparse topological space of logical functions. We show
that abstract logical relationships can be effectively embedded within a
neuro-symbolic paradigm, by representing data as logic programs and sets of
logic rules. The embedding space captures highly varied textual information
with similar semantic logical relations, but can also separate similar textual
relations that have dissimilar logical relations. Experimental results
demonstrate that our approach significantly improves the inference capabilities
of the models in terms of generalisation and reasoning.
|
2502.09215
|
Architecture for Simulating Behavior Mode Changes in Norm-Aware
Autonomous Agents
|
cs.LO cs.AI
|
This paper presents an architecture for simulating the actions of a
norm-aware intelligent agent whose behavior with respect to norm compliance is
set, and can later be changed, by a human controller. Updating an agent's
behavior mode from a norm-abiding to a riskier one may be relevant when the
agent is involved in time-sensitive rescue operations, for example. We base our
work on the Authorization and Obligation Policy Language AOPL designed by
Gelfond and Lobo for the specification of norms. We introduce an architecture
and a prototype software system that can be used to simulate an agent's plans
under different behavior modes that can later be changed by the controller. We
envision such software to be useful to policy makers, as they can more readily
understand how agents may act in certain situations based on the agents'
attitudes towards norm-compliance. Policy makers may then refine their policies
if simulations show unwanted consequences.
|
2502.09216
|
Mind the Gaps: Logical English, Prolog, and Multi-agent Systems for
Autonomous Vehicles
|
cs.AI cs.CL cs.LO cs.MA
|
In this paper, we present a modular system for representing and reasoning
with legal aspects of traffic rules for autonomous vehicles. We focus on a
subset of the United Kingdom's Highway Code (HC) related to junctions. As human
drivers and automated vehicles (AVs) will interact on the roads, especially in
urban environments, we claim that an accessible, unitary, high-level
computational model should exist and be applicable to both users. Autonomous
vehicles introduce a shift in liability that should not bring disadvantages or
increased burden on human drivers. We develop a system "in silico" of the
model. The proposed system is built of three main components: a natural
language interface, using Logical English, which encodes the rules; an internal
representation of the rules in Prolog; and a multi-agent-based simulation
environment, built in NetLogo. The three components interact: Logical English
is translated into and out of Prolog (along with some support code); Prolog and
NetLogo interface via predicates. Such a modular approach enables the different
components to carry different "burdens" in the overall system; it also allows
swapping of modules. Given NetLogo, we can visualize the effect of the modeled
rules as well as validate the system with a simple dynamic running scenario.
Designated agents monitor the behaviour of the vehicles for compliance and
record potential violations where they occur. The information on potential
violations is then utilized by Validators, to determine whether the violation
is punishable, differentiating between exceptions and cases.
|
2502.09218
|
Data2Concept2Text: An Explainable Multilingual Framework for Data
Analysis Narration
|
cs.LO cs.AI
|
This paper presents a complete explainable system that interprets a set of
data, abstracts the underlying features and describes them in a natural
language of choice. The system relies on two crucial stages: (i) identifying
emerging properties from data and transforming them into abstract concepts, and
(ii) converting these concepts into natural language. Despite the impressive
natural language generation capabilities demonstrated by Large Language Models,
their statistical nature and the intricacy of their internal mechanism still
force us to employ these techniques as black boxes, forgoing trustworthiness.
Developing an explainable pipeline for data interpretation would facilitate its
use in safety-critical environments, like processing medical information, and
would allow non-experts and visually impaired people to access
narrated information. To this end, we believe that the fields of knowledge
representation and automated reasoning research could present a valid
alternative. Expanding on prior research that tackled the first stage (i), we
focus on the second stage, named Concept2Text. Being explainable, data
translation is easily modeled through logic-based rules, once again emphasizing
the role of declarative programming in achieving AI explainability. This paper
explores a Prolog/CLP-based rewriting system to interpret concepts (articulated
in terms of classes and relations, plus common knowledge derived from a generic
ontology), generating natural language text. Its main features include
hierarchical tree rewritings, modular multilingual generation, support for
equivalent variants across semantic, grammar, and lexical levels, and a
transparent rule-based system. We outline the architecture and demonstrate its
flexibility through some examples capable of generating numerous diverse and
equivalent rewritings based on the input concept.
|
2502.09219
|
Abduction of Domain Relationships from Data for VQA
|
cs.LO cs.AI cs.LG
|
In this paper, we study the problem of visual question answering (VQA) where
the image and query are represented by ASP programs that lack domain data. We
provide an approach that is orthogonal and complementary to existing knowledge
augmentation techniques where we abduce domain relationships of image
constructs from past examples. After framing the abduction problem, we provide
a baseline approach, and an implementation that significantly improves the
accuracy of query answering yet requires few examples.
|
2502.09220
|
Graphical Conditions for the Existence, Unicity and Number of Regular
Models
|
cs.LO cs.AI cs.DM
|
The regular models of a normal logic program are a particular type of partial
(i.e. 3-valued) models which correspond to stable partial models with minimal
undefinedness. In this paper, we explore graphical conditions on the dependency
graph of a finite ground normal logic program to analyze the existence, unicity
and number of regular models for the program. We show three main results: 1) a
necessary condition for the existence of non-trivial (i.e. non-2-valued)
regular models, 2) a sufficient condition for the unicity of regular models,
and 3) two upper bounds for the number of regular models based on positive
feedback vertex sets. The first two conditions generalize the finite cases of
the two existing results obtained by You and Yuan (1994) for normal logic
programs with well-founded stratification. The third result is also new to the
best of our knowledge. Key to our proofs is a connection that we establish
between finite ground normal logic programs and Boolean network theory.
|
2502.09221
|
Pearce's Characterisation in an Epistemic Domain
|
cs.AI cs.LO cs.PL
|
Answer-set programming (ASP) is a successful problem-solving approach in
logic-based AI. In ASP, problems are represented as declarative logic programs,
and solutions are identified through their answer sets. Equilibrium logic (EL)
is a general-purpose nonmonotonic reasoning formalism, based on a monotonic
logic called here-and-there logic. EL was originally proposed by Pearce as a
foundational framework of ASP. Epistemic specifications (ES) are extensions of
ASP-programs with subjective literals. These new modal constructs in the
ASP-language make it possible to check whether a regular literal of ASP is true
in every (or some) answer-set of a program. ES-programs are interpreted by
world-views, which are essentially collections of answer-sets. (Reflexive)
autoepistemic logic is a nonmonotonic formalism, modeling self-belief
(knowledge) of ideally rational agents. A relatively new semantics for ES is
based on a combination of EL and (reflexive) autoepistemic logic. In this
paper, we first propose an overarching framework in the epistemic ASP domain.
We then establish a correspondence between existing (reflexive) (auto)epistemic
equilibrium logics and our easily-adaptable comprehensive framework, building
on Pearce's characterisation of answer-sets as equilibrium models. We achieve
this by extending Ferraris' work on answer sets for propositional theories to
the epistemic case and reveal the relationship between some ES-semantic
proposals.
|
2502.09222
|
ASP-driven User-interaction with Clinguin
|
cs.AI cs.HC cs.LO cs.SE
|
We present clinguin, a system for ASP-driven user interface design. Clinguin
streamlines the development of user interfaces for ASP developers by letting
them build interactive prototypes directly in ASP, eliminating the need for
separate frontend languages. To this end, clinguin uses a few dedicated
predicates to define user interfaces and the treatment of user-triggered
events. This simple design greatly facilitates the specification of user
interactions with an ASP system, in our case clingo.
|
2502.09223
|
A Prolog Program for Bottom-up Evaluation
|
cs.PL cs.DB cs.LO
|
This short paper describes a simple and intuitive Prolog program, a
metainterpreter, that computes the bottom-up meaning of a simple positive Horn
clause definition. It involves a simple transformation of the object program
rules into metarules, which are then used by a metainterpreter to compute the
model of the original program bottom-up. The resulting algorithm is a form
of semi-naive bottom-up evaluation. We discuss various reasons why this Prolog
program is particularly interesting.
In particular, this is perhaps the only Prolog program for which I find the
use of Prolog's assert/1 to be intrinsic, easily understood, and the best, most
perspicuous, way to program an algorithm. This short paper might be best
characterized as a Prolog programming pearl.
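A minimal sketch of the evaluation strategy the abstract names (in Python rather than Prolog, and for the propositional case only; the function and rule encoding are illustrative assumptions, not the paper's metainterpreter): semi-naive bottom-up evaluation fires a rule only when its body is fully derived and touches at least one fact from the previous round.

```python
# Semi-naive bottom-up evaluation of a positive propositional Horn program.
# Rules are (head, [body atoms]); a rule with an empty body is a fact.
def seminaive(rules):
    """Compute the least model of the program."""
    facts = {head for head, body in rules if not body}
    delta = set(facts)                        # facts derived in the last round
    while delta:
        new = set()
        for head, body in rules:
            # fire only rules whose body uses at least one newly derived fact
            if body and delta.intersection(body) and set(body) <= facts:
                if head not in facts:
                    new.add(head)
        facts |= new
        delta = new
    return facts

program = [
    ("a", []),            # fact:  a.
    ("b", ["a"]),         # rule:  b :- a.
    ("c", ["a", "b"]),    # rule:  c :- a, b.
    ("d", ["e"]),         # rule:  d :- e.   (never fires: e is underivable)
]
print(sorted(seminaive(program)))   # -> ['a', 'b', 'c']
```

The `delta` set plays the role that asserted metarules play in the Prolog pearl: it restricts each round to derivations that could not have been made before, avoiding naive re-evaluation of the whole program.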
|
2502.09224
|
Order-Sorted Intensional Logic: Expressing Subtyping Polymorphism with
Typing Assertions and Quantification over Concepts
|
cs.AI cs.LO
|
Subtyping, also known as subtype polymorphism, is a concept extensively
studied in programming language theory, delineating the substitutability
relation among datatypes. This property ensures that programs designed for
supertype objects remain compatible with their subtypes.
In this paper, we explore the capability of order-sorted logic for utilizing
these ideas in the context of Knowledge Representation. We recognize two
fundamental limitations: First, the inability of this logic to address the
concept rather than the value of non-logical symbols, and second, the lack of
language constructs for constraining the type of terms. Consequently, we
propose guarded order-sorted intensional logic, where guards are language
constructs for annotating typing information and intensional logic provides
support for quantification over concepts.
|
2502.09226
|
Generating Causally Compliant Counterfactual Explanations using ASP
|
cs.AI
|
This research is focused on generating achievable counterfactual
explanations. Given a negative outcome computed by a machine learning model or
a decision system, the novel CoGS approach generates (i) a counterfactual
solution that represents a positive outcome and (ii) a path that will take us
from the negative outcome to the positive one, where each node in the path
represents a change in an attribute (feature) value. CoGS computes paths that
respect the causal constraints among features. Thus, the counterfactuals
computed by CoGS are realistic. CoGS utilizes rule-based machine learning
algorithms to model causal dependencies between features. The paper discusses
the current status of the research and the preliminary results obtained.
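One way to picture the path idea (a hypothetical sketch, not the CoGS implementation, which uses rule-based learning for the causal model): search over single-feature edits, keeping only intermediate states that satisfy the causal constraints, until a positive-outcome state is reached.

```python
# Illustrative counterfactual-path search (not CoGS itself): BFS over
# one-feature-at-a-time changes, pruned by a causal-consistency check.
from collections import deque

def counterfactual_path(start, domains, causal_ok, is_positive):
    """Return a list of states from `start` to a positive outcome, or None."""
    frontier = deque([(start, [start])])
    seen = {start}
    while frontier:
        state, path = frontier.popleft()
        if is_positive(state):
            return path
        for i, dom in enumerate(domains):      # change one feature at a time
            for v in dom:
                nxt = state[:i] + (v,) + state[i + 1:]
                if nxt not in seen and causal_ok(nxt):
                    seen.add(nxt)
                    frontier.append((nxt, path + [nxt]))
    return None

# Toy loan example: features (high_income, has_job); causal constraint:
# high income requires having a job, so income cannot be flipped first.
domains = [(0, 1), (False, True)]
causal_ok = lambda s: not (s[0] == 1 and not s[1])
is_positive = lambda s: s[0] == 1
path = counterfactual_path((0, False), domains, causal_ok, is_positive)
# path visits (0, False) -> (0, True) -> (1, True): get a job, then raise income
```

The pruning step is what makes the returned sequence "achievable": states that violate the causal dependencies between features are never placed on the path.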
|
2502.09228
|
Computational methods for Dynamic Answer Set Programming
|
cs.AI cs.FL cs.LO
|
In our daily lives and industrial settings, we often encounter dynamic
problems that require reasoning over time and metric constraints. These include
tasks such as scheduling, routing, and production sequencing. Dynamic logics
have traditionally addressed these needs but often lack the flexibility and
integration required for comprehensive problem modeling. This research aims to
extend Answer Set Programming (ASP), a powerful declarative problem-solving
approach, to handle dynamic domains effectively. By integrating concepts from
dynamic, temporal, and metric logics into ASP, we seek to develop robust
systems capable of modeling complex dynamic problems and performing efficient
reasoning tasks, thereby enhancing ASP's applicability in industrial contexts.
|
2502.09230
|
Relating Answer Set Programming and Many-sorted Logics for Formal
Verification
|
cs.LO cs.AI cs.PL
|
Answer Set Programming (ASP) is an important logic programming paradigm
within the field of Knowledge Representation and Reasoning. As a concise,
human-readable, declarative language, ASP is an excellent tool for developing
trustworthy (especially, artificially intelligent) software systems. However,
formally verifying ASP programs offers some unique challenges, such as
1. a lack of modularity (the meanings of rules are difficult to define in
isolation from the enclosing program),
2. the ground-and-solve semantics (the meanings of rules are dependent on the
input data with which the program is grounded), and
3. limitations of existing tools.
My research agenda has been focused on addressing these three issues with the
intention of making ASP verification an accessible, routine task that is
regularly performed alongside program development. In this vein, I have
investigated alternative semantics for ASP based on translations into the logic
of here-and-there and many-sorted first-order logic. These semantics promote a
modular understanding of logic programs, bypass grounding, and enable us to use
automated theorem provers to automatically verify properties of programs.
|
2502.09231
|
Answer Set Counting and its Applications
|
cs.CL cs.LO
|
We have focused on Answer Set Programming (ASP), more specifically, answer
set counting, exploring both exact and approximate methodologies. We developed
an exact ASP counter, sharpASP, which utilizes a compact encoding for
propositional formulas, significantly enhancing efficiency compared to existing
methods that often struggle with inefficient encodings. Our evaluations
indicate that sharpASP outperforms current ASP counters on several benchmarks.
In addition, we proposed an approximate ASP counter, named ApproxASP, a
hashing-based counter integrating Gauss-Jordan elimination within the ASP
solver, clingo. As a practical application, we employed ApproxASP for network
reliability estimation, demonstrating superior performance over both
traditional reliability estimators and #SAT-based methods.
|
2502.09232
|
Logical foundations of Smart Contracts
|
cs.LO cs.AI
|
Nowadays, sophisticated domains are emerging which require appropriate
formalisms to be specified accurately in order to reason about them. One such
domain is constituted of smart contracts that have emerged in cyber physical
systems as a way of enforcing formal agreements between components of these
systems. Smart contracts self-execute to run and share business processes
through blockchain, in decentralized systems, with many different participants.
Legal contracts are in many cases complex documents, with a number of
exceptions, and many subcontracts. The implementation of smart contracts based
on legal contracts is a long and laborious task, that needs to include all
actions, procedures, and the effects of actions related to the execution of the
contract. An ongoing open problem in this area is to formally account for smart
contracts using a uniform and somewhat universal formalism. This thesis
proposes logical foundations for smart contracts using the Situation Calculus, a
logic for reasoning about actions. Situation Calculus is one of the prominent
logic-based artificial intelligence approaches that provides enough logical
mechanism to specify and implement dynamic and complex systems such as
contracts. Situation Calculus is suitable to show how worlds dynamically
change. Smart contracts will be implemented in Golog (written in
Prolog), a Situation Calculus-based programming language for modeling complex
and dynamic behaviors.
|
2502.09233
|
Commonsense Reasoning-Aided Autonomous Vehicle Systems
|
cs.AI
|
Autonomous Vehicle (AV) systems have been developed with a strong reliance on
machine learning techniques. While machine learning approaches, such as deep
learning, are extremely effective at tasks that involve observation and
classification, they struggle when it comes to performing higher level
reasoning about situations on the road. This research involves incorporating
commonsense reasoning models that use image data to improve AV systems. This
will allow AV systems to perform more accurate reasoning while also making them
more adjustable, explainable, and ethical. This paper will discuss the findings
so far and motivate its direction going forward.
|
2502.09235
|
Hybrid Answer Set Programming: Foundations and Applications
|
cs.AI cs.LO
|
Answer Set Programming (ASP) is a powerful tool for solving real-world
problems. However, many problems involve numeric values and complex constraints
beyond the capabilities of standard ASP solvers. Hybrid solvers like CLINGCON
and CLINGO[DL] address this by using specialized methods for specific
constraints. However, these solvers lack a strong theoretical foundation.
This issue has first been addressed by introducing the Logic of
Here-and-There with constraints (HT_c) as an extension of the Logic of
Here-and-There (HT) and its non-monotone extension Equilibrium Logic. Nowadays,
HT serves as a logical foundation for ASP and has facilitated a broader
understanding of this paradigm. The idea is that HT_c (and other extensions)
play an analogous role for hybrid ASP.
There remain many open questions about these logics regarding their
fundamental characteristics as well as their practical use in solvers, i.e., how
they can guide the implementation.
Having a formal understanding of these hybrid logics is also needed to better
understand the inherent structure of the (real-world) problems they are applied
to and to improve their representations in ASP. As an example of an application
of ASP we use product configuration.
|
2502.09237
|
Reliable Conversational Agents under ASP Control that Understand Natural
Language
|
cs.LO cs.CL
|
Efforts have been made to make machines converse like humans in the past few
decades. The recent techniques of Large Language Models (LLMs) make it possible
to have human-like conversations with machines, but LLM's flaws of lacking
understanding and reliability are well documented. We believe that the best way
to eliminate this problem is to use LLMs only as parsers to translate text to
knowledge and vice versa and carry out the conversation by reasoning over this
knowledge using answer set programming (ASP). I have been developing a framework
based on LLMs and ASP to realize reliable chatbots that "understand" human
conversation. This framework has been used to develop task-specific chatbots as
well as socialbots. My future research is focused on making these chatbots
scalable and trainable.
|
2502.09238
|
OpenBench: A New Benchmark and Baseline for Semantic Navigation in Smart
Logistics
|
cs.RO
|
The increasing demand for efficient last-mile delivery in smart logistics
underscores the role of autonomous robots in enhancing operational efficiency
and reducing costs. Traditional navigation methods, which depend on
high-precision maps, are resource-intensive, while learning-based approaches
often struggle with generalization in real-world scenarios. To address these
challenges, this work proposes the Openstreetmap-enhanced oPen-air sEmantic
Navigation (OPEN) system that combines foundation models with classic
algorithms for scalable outdoor navigation. The system uses off-the-shelf
OpenStreetMap (OSM) for flexible map representation, thereby eliminating the
need for extensive pre-mapping efforts. It also employs Large Language Models
(LLMs) to comprehend delivery instructions and Vision-Language Models (VLMs)
for global localization, map updates, and house number recognition. To
compensate for the limitations of existing benchmarks, which are inadequate for
assessing last-mile delivery, this work introduces a new benchmark specifically
designed for outdoor navigation in residential areas, reflecting the real-world
challenges faced by autonomous delivery systems. Extensive experiments in
simulated and real-world environments demonstrate the proposed system's
efficacy in enhancing navigation efficiency and reliability. To facilitate
further research, our code and benchmark are publicly available.
|
2502.09241
|
Safety Evaluation of Human Arm Operations Using IMU Sensors with a
Spring-Damper-Mass Predictive Model
|
cs.RO
|
This paper presents a novel approach to real-time safety monitoring in
human-robot collaborative manufacturing environments through a wrist-mounted
Inertial Measurement Unit (IMU) system integrated with a Predictive Safety
Model (PSM). The proposed system extends previous PSM implementations through
the adaptation of a spring-damper-mass model specifically optimized for wrist
motions, employing probabilistic safety assessment through impedance-based
computations. We analyze our proposed impedance-based safety approach with
frequency domain methods, establishing quantitative safety thresholds through
comprehensive comparative analysis. Experimental validation was performed across
three manufacturing tasks: tool manipulation, visual inspection, and
pick-and-place operations. Results show robust performance across diverse manufacturing
scenarios while maintaining computational efficiency through optimized
parameter selection. This work establishes a foundation for future developments
in adaptive risk assessment in real-time for human-robot collaborative
manufacturing environments.
|
2502.09242
|
From large language models to multimodal AI: A scoping review on the
potential of generative AI in medicine
|
cs.AI
|
Generative artificial intelligence (AI) models, such as diffusion models and
OpenAI's ChatGPT, are transforming medicine by enhancing diagnostic accuracy
and automating clinical workflows. The field has advanced rapidly, evolving
from text-only large language models for tasks such as clinical documentation
and decision support to multimodal AI systems capable of integrating diverse
data modalities, including imaging, text, and structured data, within a single
model. The diverse landscape of these technologies, along with rising interest,
highlights the need for a comprehensive review of their applications and
potential. This scoping review explores the evolution of multimodal AI,
highlighting its methods, applications, datasets, and evaluation in clinical
settings. Adhering to PRISMA-ScR guidelines, we systematically queried PubMed,
IEEE Xplore, and Web of Science, prioritizing recent studies published up to
the end of 2024. After rigorous screening, 144 papers were included, revealing
key trends and challenges in this dynamic field. Our findings underscore a
shift from unimodal to multimodal approaches, driving innovations in diagnostic
support, medical report generation, drug discovery, and conversational AI.
However, critical challenges remain, including the integration of heterogeneous
data types, improving model interpretability, addressing ethical concerns, and
validating AI systems in real-world clinical settings. This review summarizes
the current state of the art, identifies critical gaps, and provides insights
to guide the development of scalable, trustworthy, and clinically impactful
multimodal AI solutions in healthcare.
|
2502.09244
|
Memristor-Based Meta-Learning for Fast mmWave Beam Prediction in
Non-Stationary Environments
|
cs.IT math.IT
|
Traditional machine learning techniques have achieved great success in
improving data-rate performance and reducing latency in millimeter wave
(mmWave) communications. However, these methods still face two key challenges:
(i) their reliance on large-scale paired data for model training and tuning
which limits performance gains and makes beam predictions outdated, especially
in multi-user mmWave systems with large antenna arrays, and (ii) meta-learning
(ML)-based beamforming solutions are prone to overfitting when trained on a
limited number of tasks. To address these issues, we propose a memristor-based
meta-learning (M-ML) framework for predicting mmWave beam in real time. The
M-ML framework generates optimal initialization parameters during the training
phase, providing a strong starting point for adapting to unknown environments
during the testing phase. By leveraging memory to store key data, M-ML ensures
the predicted beamforming vectors are well-suited to episodically dynamic
channel distributions, even when testing and training environments do not
align. Simulation results show that our approach delivers high prediction
accuracy in new environments, without relying on large datasets. Moreover, M-ML
enhances the model's generalization ability and adaptability.
|
2502.09245
|
You Do Not Fully Utilize Transformer's Representation Capacity
|
cs.LG cs.CL
|
In contrast to RNNs, which compress previous tokens into a single hidden
state, Transformers can attend to all previous tokens directly. However,
standard Transformers only use representations from the immediately preceding
layer. In this paper, we show that this design choice causes representation
collapse and leads to suboptimal performance. To address this issue, we
introduce Layer-Integrated Memory (LIMe), a simple yet powerful approach that
preserves the model's overall memory footprint while expanding its
representational capacity by allowing access to hidden states from earlier
layers. Through extensive experiments across various architectures and
different lookup mechanisms, we demonstrate consistent performance improvements
on a wide range of tasks. Moreover, our analysis of the learned representation
dynamics and our exploration of depthwise circuits reveal how LIMe integrates
information across layers, pointing to promising directions for future
research.
|
2502.09247
|
The Joint Entity-Relation Extraction Model Based on Span and Interactive
Fusion Representation for Chinese Medical Texts with Complex Semantics
|
cs.CL cs.AI
|
Joint entity-relation extraction is a critical task in transforming
unstructured or semi-structured text into triplets, facilitating the
construction of large-scale knowledge graphs, and supporting various downstream
applications. Despite its importance, research on Chinese text, particularly
with complex semantics in specialized domains like medicine, remains limited.
To address this gap, we introduce the CH-DDI, a Chinese drug-drug interactions
dataset designed to capture the intricacies of medical text. Leveraging the
strengths of attention mechanisms in capturing long-range dependencies, we
propose the SEA module, which enhances the extraction of complex contextual
semantic information, thereby improving entity recognition and relation
extraction. Additionally, to address the inefficiencies of existing methods in
facilitating information exchange between entity recognition and relation
extraction, we present an interactive fusion representation module. This module
employs Cross Attention for bidirectional information exchange between the
tasks and further refines feature extraction through BiLSTM. Experimental
results on both our CH-DDI dataset and public CoNLL04 dataset demonstrate that
our model exhibits strong generalization capabilities. On the CH-DDI dataset,
our model achieves an F1-score of 96.73% for entity recognition and 78.43% for
relation extraction. On the CoNLL04 dataset, it attains an entity recognition
precision of 89.54% and a relation extraction accuracy of 71.64%.
|
2502.09252
|
On the Importance of Embedding Norms in Self-Supervised Learning
|
cs.LG
|
Self-supervised learning (SSL) allows training data representations without a
supervised signal and has become an important paradigm in machine learning.
Most SSL methods employ the cosine similarity between embedding vectors and
hence effectively embed data on a hypersphere. While this seemingly implies
that embedding norms cannot play any role in SSL, a few recent works have
suggested that embedding norms have properties related to network convergence
and confidence. In this paper, we resolve this apparent contradiction and
systematically establish the embedding norm's role in SSL training. Using
theoretical analysis, simulations, and experiments, we show that embedding
norms (i) govern SSL convergence rates and (ii) encode network confidence, with
smaller norms corresponding to unexpected samples. Additionally, we show that
manipulating embedding norms can have large effects on convergence speed. Our
findings demonstrate that SSL embedding norms are integral to understanding and
optimizing network behavior.
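The apparent contradiction is easy to state in code: cosine similarity is scale-invariant, so norms never enter the loss directly, yet they remain measurable properties of each embedding (a minimal sketch, not the paper's experimental setup):

```python
import math

def norm(v):
    return math.sqrt(sum(x * x for x in v))

def cosine(u, v):
    return sum(a * b for a, b in zip(u, v)) / (norm(u) * norm(v))

# Rescaling an embedding leaves a cosine-based SSL objective untouched...
u, v = [1.0, 2.0, 2.0], [3.0, 0.0, 4.0]
same = abs(cosine(u, v) - cosine([10 * x for x in u], v)) < 1e-12

# ...yet the norm itself still varies and can be read out as a separate
# signal, e.g. as a per-sample confidence proxy (smaller = more unexpected).
confidence = norm(u)  # 3.0
```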
|
2502.09254
|
AnomalyGFM: Graph Foundation Model for Zero/Few-shot Anomaly Detection
|
cs.LG cs.AI
|
Graph anomaly detection (GAD) aims to identify abnormal nodes that differ
from the majority of the nodes in a graph, which has been attracting
significant attention in recent years. Existing generalist graph models have
achieved remarkable success in different graph tasks but struggle to generalize
to the GAD task. This limitation arises from their difficulty in learning
generalized knowledge for capturing the inherently infrequent, irregular and
heterogeneous abnormality patterns in graphs from different domains. To address
this challenge, we propose AnomalyGFM, a GAD-oriented graph foundation model
that supports zero-shot inference and few-shot prompt tuning for GAD in diverse
graph datasets. One key insight is that graph-agnostic representations for
normal and abnormal classes are required to support effective zero/few-shot GAD
across different graphs. Motivated by this, AnomalyGFM is pre-trained to align
data-independent, learnable normal and abnormal class prototypes with node
representation residuals (i.e., representation deviation of a node from its
neighbors). The residual features essentially project the node information into
a unified feature space where we can effectively measure the abnormality of
nodes from different graphs in a consistent way. This provides a driving force
for the learning of graph-agnostic, discriminative prototypes for the normal
and abnormal classes, which can be used to enable zero-shot GAD on new graphs,
including very large-scale graphs. If there are few-shot labeled normal nodes
available in the new graphs, AnomalyGFM can further support prompt tuning to
leverage these nodes for better adaptation. Comprehensive experiments on 11
widely-used GAD datasets with real anomalies demonstrate that AnomalyGFM
significantly outperforms state-of-the-art competing methods under both zero-
and few-shot GAD settings.
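The key insight, scoring a node by the deviation of its representation from its neighbors measured against class prototypes, reduces to a short computation. This sketch uses hand-picked prototypes; in the paper they are learnable and aligned during pre-training:

```python
def residual(node_feat, neighbor_feats):
    """Representation deviation of a node from the mean of its neighbors."""
    d = len(node_feat)
    mean = [sum(nf[j] for nf in neighbor_feats) / len(neighbor_feats)
            for j in range(d)]
    return [node_feat[j] - mean[j] for j in range(d)]

def anomaly_score(res, normal_proto, abnormal_proto):
    """Score a residual by its alignment with the two class prototypes;
    higher means closer to the abnormal prototype."""
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    return dot(res, abnormal_proto) - dot(res, normal_proto)

# A node far above its neighborhood yields a large positive residual,
# which the (here hand-picked) prototypes flag as anomalous.
res = residual([5.0, 5.0], [[1.0, 1.0], [1.0, 1.0]])   # -> [4.0, 4.0]
score = anomaly_score(res, normal_proto=[0.0, 0.0], abnormal_proto=[1.0, 1.0])
```

Because the residual lives in a feature space relative to each node's own neighborhood, the same prototypes can be applied to nodes from entirely different graphs.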
|
2502.09256
|
DynSegNet: Dynamic Architecture Adjustment for Adversarial Learning in
Segmenting Hemorrhagic Lesions from Fundus Images
|
cs.CV cs.AI
|
The hemorrhagic lesion segmentation plays a critical role in ophthalmic
diagnosis, directly influencing early disease detection, treatment planning,
and therapeutic efficacy evaluation. However, the task faces significant
challenges due to lesion morphological variability, indistinct boundaries, and
low contrast with background tissues. To improve diagnostic accuracy and
treatment outcomes, developing advanced segmentation techniques remains
imperative. This paper proposes an adversarial learning-based dynamic
architecture adjustment approach that integrates hierarchical U-shaped
encoder-decoder, residual blocks, attention mechanisms, and ASPP modules. By
dynamically optimizing feature fusion, our method enhances segmentation
performance. Experimental results demonstrate a Dice coefficient of 0.6802, IoU
of 0.5602, Recall of 0.766, Precision of 0.6525, and Accuracy of 0.9955,
effectively addressing the challenges in fundus image hemorrhage
segmentation.
|
2502.09257
|
Bandit Multiclass List Classification
|
cs.LG cs.AI stat.ML
|
We study the problem of multiclass list classification with (semi-)bandit
feedback, where input examples are mapped into subsets of size $m$ of a
collection of $K$ possible labels, and the feedback consists of the predicted
labels which lie in the set of true labels of the given example. Our main
result is for the $(\varepsilon,\delta)$-PAC variant of the problem for which
we design an algorithm that returns an $\varepsilon$-optimal hypothesis with
high probability using a sample complexity of $O \big( (\mathrm{poly}(K/m) + sm
/ \varepsilon^2) \log (|H|/\delta) \big)$ where $H$ is the underlying (finite)
hypothesis class and $s$ is an upper bound on the number of true labels for a
given example. This bound improves upon known bounds for combinatorial
semi-bandits whenever $s \ll K$. Moreover, in the regime where $s = O(1)$ the
leading terms in our bound match the corresponding full-information rates,
implying that bandit feedback essentially comes at no cost. Our PAC learning
algorithm is also computationally efficient given access to an ERM oracle for
$H$. Additionally, we consider the regret minimization setting where data can
be generated adversarially, and establish a regret bound of $\widetilde O(|H| +
\sqrt{smT \log |H|})$. Our results generalize and extend those of Erez et al.
(2024) who consider the simpler single-label setting corresponding to $s=m=1$,
and in fact hold for the more general contextual combinatorial semi-bandit
problem with $s$-sparse rewards.
|
2502.09263
|
Unlocking the Potential of Classic GNNs for Graph-level Tasks: Simple
Architectures Meet Excellence
|
cs.LG
|
Message-passing Graph Neural Networks (GNNs) are often criticized for their
limited expressiveness, issues like over-smoothing and over-squashing, and
challenges in capturing long-range dependencies, while Graph Transformers (GTs)
are considered superior due to their global attention mechanisms. Literature
frequently suggests that GTs outperform GNNs, particularly in graph-level tasks
such as graph classification and regression. In this study, we explore the
untapped potential of GNNs through an enhanced framework, GNN+, which
integrates six widely used techniques: edge feature integration, normalization,
dropout, residual connections, feed-forward networks, and positional encoding,
to effectively tackle graph-level tasks. We conduct a systematic evaluation of
three classic GNNs, namely GCN, GIN, and GatedGCN, enhanced by the GNN+
framework across 14 well-known graph-level datasets. Our results show that,
contrary to the prevailing belief, classic GNNs excel in graph-level tasks,
securing top three rankings across all datasets and achieving first place in
eight, while also demonstrating greater efficiency than GTs. This highlights
the potential of simple GNN architectures, challenging the belief that complex
mechanisms in GTs are essential for superior graph-level performance.
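Two of the six GNN+ ingredients, residual connections and a feed-forward network, can be illustrated in a minimal message-passing layer (normalization, dropout, edge features, and positional encoding omitted for brevity; this is a sketch, not the paper's exact layer):

```python
def gnn_plus_layer(h, adj, ffn):
    """One message-passing step with a residual connection and an FFN.

    h:   list of node feature vectors.
    adj: adjacency list (adj[i] = neighbors of node i).
    ffn: per-node feed-forward network (any callable on a feature vector).
    """
    out = []
    for i, nbrs in enumerate(adj):
        d = len(h[i])
        # Mean-aggregate neighbor features (GCN-style, unweighted).
        agg = [sum(h[j][k] for j in nbrs) / max(len(nbrs), 1)
               for k in range(d)]
        combined = [a + b for a, b in zip(h[i], agg)]  # residual connection
        out.append(ffn(combined))                      # feed-forward network
    return out

# Toy run: identity FFN on a two-node path graph.
h = [[1.0, 0.0], [3.0, 2.0]]
new_h = gnn_plus_layer(h, adj=[[1], [0]], ffn=lambda x: x)
```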
|
2502.09268
|
GEVRM: Goal-Expressive Video Generation Model For Robust Visual
Manipulation
|
cs.RO cs.LG
|
With the rapid development of embodied artificial intelligence, significant
progress has been made in vision-language-action (VLA) models for general robot
decision-making. However, the majority of existing VLAs fail to account for the
inevitable external perturbations encountered during deployment. These
perturbations introduce unforeseen state information to the VLA, resulting in
inaccurate actions and consequently, a significant decline in generalization
performance. The classic internal model control (IMC) principle demonstrates
that a closed-loop system with an internal model that includes external input
signals can accurately track the reference input and effectively offset the
disturbance. We propose a novel closed-loop VLA method GEVRM that integrates
the IMC principle to enhance the robustness of robot visual manipulation. The
text-guided video generation model in GEVRM can generate highly expressive
future visual planning goals. Simultaneously, we evaluate perturbations by
simulating responses, which are called internal embeddings and optimized
through prototype contrastive learning. This allows the model to implicitly
infer and distinguish perturbations from the external environment. The proposed
GEVRM achieves state-of-the-art performance on both standard and perturbed
CALVIN benchmarks and shows significant improvements in realistic robot tasks.
|
2502.09269
|
Memory-based Ensemble Learning in CMR Semantic Segmentation
|
cs.CV
|
Existing models typically segment either the entire 3D frame or 2D slices
independently to derive clinical functional metrics from ventricular
segmentation in cardiac cine sequences. While performing well overall, they
struggle at the end slices. To address this, we leverage spatial continuity to
extract global uncertainty from segmentation variance and use it as memory in
our ensemble learning method, Streaming, for classifier weighting, balancing
overall and end-slice performance. Additionally, we introduce the End
Coefficient (EC) to quantify end-slice accuracy. Experiments on ACDC and M&Ms
datasets show that our framework achieves near-state-of-the-art Dice Similarity
Coefficient (DSC) and outperforms all models on end-slice performance,
improving patient-specific segmentation accuracy.
|
2502.09271
|
LiSA: Leveraging Link Recommender to Attack Graph Neural Networks via
Subgraph Injection
|
cs.LG cs.AI
|
Graph Neural Networks (GNNs) have demonstrated remarkable proficiency in
modeling data with graph structures, yet recent research reveals their
susceptibility to adversarial attacks. Traditional attack methodologies, which
rely on manipulating the original graph or adding links to artificially created
nodes, often prove impractical in real-world settings. This paper introduces a
novel adversarial scenario involving the injection of an isolated subgraph to
deceive both the link recommender and the node classifier within a GNN system.
Specifically, the link recommender is misled into proposing links between
targeted victim nodes and the subgraph, encouraging users to unintentionally
establish connections that degrade node classification accuracy, thereby
facilitating a successful attack. To address this, we present the LiSA
framework, which employs a dual surrogate model and bi-level optimization to
simultaneously meet two adversarial objectives. Extensive experiments on
real-world datasets demonstrate the effectiveness of our method.
|
2502.09274
|
FLARES: Fast and Accurate LiDAR Multi-Range Semantic Segmentation
|
cs.CV
|
3D scene understanding is a critical yet challenging task in autonomous
driving, primarily due to the irregularity and sparsity of LiDAR data, as well
as the computational demands of processing large-scale point clouds. Recent
methods leverage the range-view representation to improve processing
efficiency. To mitigate the performance drop caused by information loss
inherent to the "many-to-one" problem, where multiple nearby 3D points are
mapped to the same 2D grids and only the closest is retained, prior works tend
to choose a higher azimuth resolution for range-view projection. However, this
reduces the proportion of pixels that carry information and increases
computation within the network. We argue that this is not
the optimal solution and show that, in contrast, decreasing the resolution is
more advantageous in both efficiency and accuracy. In this work, we present a
comprehensive re-design of the workflow for range-view-based LiDAR semantic
segmentation. Our approach addresses data representation, augmentation, and
post-processing methods for improvements. Through extensive experiments on two
public datasets, we demonstrate that our pipeline significantly enhances the
performance of various network architectures over their baselines, paving the
way for more effective LiDAR-based perception in autonomous systems.
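The many-to-one projection at the heart of this trade-off is straightforward to sketch. The version below is simplified (no vertical field-of-view clipping, no saved point indices) but shows why resolution directly controls how many points collide per cell:

```python
import math

def range_view_project(points, h, w):
    """Spherical ("range-view") projection of a LiDAR point cloud onto an
    h x w depth image. Several 3D points may map to the same 2D cell (the
    "many-to-one" problem); only the closest survives, which is the
    information loss the resolution choice trades against."""
    depth = [[None] * w for _ in range(h)]
    for x, y, z in points:
        r = math.sqrt(x * x + y * y + z * z)
        yaw = math.atan2(y, x)                        # azimuth in [-pi, pi]
        pitch = math.asin(z / r)                      # inclination
        u = min(w - 1, int((yaw / math.pi + 1.0) / 2.0 * w))
        v = min(h - 1, int((0.5 - pitch / math.pi) * h))
        if depth[v][u] is None or r < depth[v][u]:
            depth[v][u] = r
    return depth

# Two points on the same ray collide in one cell; the nearer one is kept.
img = range_view_project([(2.0, 0.0, 0.0), (4.0, 0.0, 0.0)], h=4, w=8)
```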
|
2502.09278
|
ConsistentDreamer: View-Consistent Meshes Through Balanced Multi-View
Gaussian Optimization
|
cs.CV
|
Recent advances in diffusion models have significantly improved 3D
generation, enabling the use of assets generated from an image for embodied AI
simulations. However, the one-to-many nature of the image-to-3D problem limits
their use due to inconsistent content and quality across views. Previous models
optimize a 3D model by sampling views from a view-conditioned diffusion prior,
but diffusion models cannot guarantee view consistency. Instead, we present
ConsistentDreamer, where we first generate a set of fixed multi-view prior
images and sample random views between them with another diffusion model
through a score distillation sampling (SDS) loss. Thereby, we limit the
discrepancies between the views guided by the SDS loss and ensure a consistent
rough shape. In each iteration, we also use our generated multi-view prior
images for fine-detail reconstruction. To balance between the rough shape and
the fine-detail optimizations, we introduce dynamic task-dependent weights
based on homoscedastic uncertainty, updated automatically in each iteration.
Additionally, we employ opacity, depth distortion, and normal alignment losses
to refine the surface for mesh extraction. Our method ensures better view
consistency and visual quality compared to the state-of-the-art.
|
2502.09280
|
Adaptive Multi-Objective Bayesian Optimization for Capacity Planning of
Hybrid Heat Sources in Electric-Heat Coupling Systems of Cold Regions
|
eess.SY cs.NE cs.SY
|
The traditional heat-load generation pattern of combined heat and power
generators has become a problem leading to renewable energy source (RES) power
curtailment in cold regions, motivating the proposal of a planning model for
alternative heat sources. The model aims to identify non-dominant capacity
allocation schemes for heat pumps, thermal energy storage, electric boilers,
and combined storage heaters to construct a Pareto front, considering both
economic and sustainable objectives. The integration of various heat sources
from both generation and consumption sides enhances flexibility in utilization.
The study introduces a novel optimization algorithm, the adaptive
multi-objective Bayesian optimization (AMBO). Compared to other widely used
multi-objective optimization algorithms, AMBO eliminates predefined parameters
that may introduce subjectivity from planners. Beyond the algorithm, the
proposed model incorporates a noise term to account for inevitable simulation
deviations, enabling the identification of better-performing planning results
that meet the unique requirements of cold regions. Moreover, the
characteristics of electric-thermal coupling scenarios are captured and
reflected in the operation simulation model to ensure the simulation closely
matches reality. Numerical simulation verifies the superiority of the proposed
approach in generating a more diverse and evenly distributed Pareto front in a
sample-efficient manner, providing comprehensive and objective planning
choices.
|
2502.09282
|
FE-LWS: Refined Image-Text Representations via Decoder Stacking and
Fused Encodings for Remote Sensing Image Captioning
|
cs.CV cs.HC cs.LG
|
Remote sensing image captioning aims to generate descriptive text from remote
sensing images, typically employing an encoder-decoder framework. In this
setup, a convolutional neural network (CNN) extracts feature representations
from the input image, which then guide the decoder in a sequence-to-sequence
caption generation process. Although much research has focused on refining the
decoder, the quality of image representations from the encoder remains crucial
for accurate captioning. This paper introduces a novel approach that integrates
features from two distinct CNN-based encoders, capturing complementary
information to enhance caption generation. Additionally, we propose a weighted
averaging technique to combine the outputs of all GRUs in the stacked decoder.
Furthermore, a comparison-based beam search strategy is incorporated to refine
caption selection. The results demonstrate that our fusion-based approach,
along with the enhanced stacked decoder, significantly outperforms both the
transformer-based state-of-the-art model and other LSTM-based baselines.
|
2502.09284
|
SparQLe: Speech Queries to Text Translation Through LLMs
|
cs.CL cs.AI
|
With the growing influence of Large Language Models (LLMs), there is
increasing interest in integrating speech representations with them to enable
more seamless multi-modal processing and speech understanding. This study
introduces a novel approach that leverages self-supervised speech
representations in combination with instruction-tuned LLMs for speech-to-text
translation. The proposed approach leverages a modality adapter to align
extracted speech features with instruction-tuned LLMs using English-language
data. Our experiments demonstrate that this method effectively preserves the
semantic content of the input speech and serves as an effective bridge between
self-supervised speech models and instruction-tuned LLMs, offering a promising
solution for various speech understanding applications.
|
2502.09285
|
EmoAssist: Emotional Assistant for Visual Impairment Community
|
cs.CV cs.CY
|
The rapid advancement of large multi-modality models (LMMs) has significantly
propelled the integration of artificial intelligence into practical
applications. Visual Question Answering (VQA) systems, which can process
multi-modal data including vision, text, and audio, hold great potential for
assisting the Visual Impairment (VI) community in navigating complex and
dynamic real-world environments. However, existing VI assistive LMMs overlook
the emotional needs of VI individuals, and current benchmarks lack emotional
evaluation of these LMMs. To address these gaps, this paper introduces the
EmoAssist Benchmark, a comprehensive benchmark designed to evaluate the
assistive performance of LMMs for the VI community. To the best of our
knowledge, this is the first benchmark that incorporates emotional intelligence
as a key consideration. Furthermore, we propose the EmoAssist Model, an
Emotion-Assistive LMM specifically designed for the VI community. The EmoAssist
Model utilizes Direct Preference Optimization (DPO) to align outputs with human
emotional preferences. Experiment results demonstrate that the EmoAssist Model
significantly enhances the recognition of implicit emotions and intentions of
VI users, delivers empathetic responses, and provides actionable guidance.
Specifically, it shows respective improvements of 147.8% and 89.7% in the
Empathy and Suggestion metrics on the EmoAssist Benchmark, compared to the
pre-tuning LMM, and even outperforms state-of-the-art LLMs such as GPT-4o.
|
2502.09287
|
An Uncertainty Principle for Linear Recurrent Neural Networks
|
cs.LG
|
We consider linear recurrent neural networks, which have become a key
building block of sequence modeling due to their ability to perform stable and
effective long-range modeling. In this paper, we aim to characterize this
ability on a simple but core copy task, whose goal is to build a linear filter
of order $S$ that approximates the filter that looks $K$ time steps in the past
(which we refer to as the shift-$K$ filter), where $K$ is larger than $S$.
Using classical signal models and quadratic cost, we fully characterize the
problem by providing lower bounds of approximation, as well as explicit filters
that achieve this lower bound up to constants. The optimal performance
highlights an uncertainty principle: the optimal filter has to average values
around the $K$-th time step in the past with a range~(width) that is
proportional to $K/S$.
|
2502.09290
|
Dynamic Rolling Horizon Optimization for Network-Constrained V2X Value
Stacking of Electric Vehicles Under Uncertainties
|
math.OC cs.LG cs.SY eess.SY
|
Electric vehicle (EV) coordination can provide significant benefits through
vehicle-to-everything (V2X) by interacting with the grid, buildings, and other
EVs. This work aims to develop a V2X value-stacking framework, including
vehicle-to-building (V2B), vehicle-to-grid (V2G), and energy trading, to
maximize economic benefits for residential communities while maintaining
distribution voltage. This work also seeks to quantify the impact of prediction
errors related to building load, renewable energy, and EV arrivals. A dynamic
rolling-horizon optimization (RHO) method is employed to leverage multiple
revenue streams and maximize the potential of EV coordination. To address
energy uncertainties, including hourly local building load, local photovoltaic
(PV) generation, and EV arrivals, this work develops a Transformer-based
forecasting model named Gated Recurrent Units-Encoder-Temporal Fusion Decoder
(GRU-EN-TFD). The simulation results, using real data from Australia's National
Electricity Market, and the Independent System Operators in New England and New
York in the US, reveal that V2X value stacking can significantly reduce energy
costs. The proposed GRU-EN-TFD model outperforms the benchmark forecast model.
Uncertainties in EV arrivals have a more substantial impact on value-stacking
performance, highlighting the significance of its accurate forecast. This work
provides new insights into the dynamic interactions among residential
communities, unlocking the full potential of EV batteries.
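The dynamic RHO procedure follows a standard receding-horizon pattern, sketched generically below (the `plan`/`step` callbacks are a hypothetical interface; the paper's planner solves a network-constrained value-stacking problem with updated forecasts at each step):

```python
def rolling_horizon(initial_state, horizon, total_steps, plan, step):
    """Generic rolling-horizon loop: optimize over the next `horizon`
    slots, commit only the first decision, then advance and re-plan so
    each iteration can incorporate fresh forecasts."""
    state, committed = initial_state, []
    for t in range(total_steps):
        window = plan(state, t, horizon)   # decisions for t .. t+horizon-1
        committed.append(window[0])        # only the first one is executed
        state = step(state, window[0])     # advance, then re-plan next loop
    return committed

# Toy run: the planner proposes consecutive time indices for its window.
actions = rolling_horizon(
    initial_state=0,
    horizon=3,
    total_steps=4,
    plan=lambda s, t, h: [t + k for k in range(h)],
    step=lambda s, a: s + a,
)  # -> [0, 1, 2, 3]
```

Committing only the head of each planned window is what lets forecast errors (building load, PV, EV arrivals) be corrected before they propagate.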
|
2502.09291
|
Joint Attention Mechanism Learning to Facilitate Opto-physiological
Monitoring during Physical Activity
|
eess.SP cs.LG
|
Opto-physiological monitoring is a non-contact technique for measuring
cardiac signals, i.e., photoplethysmography (PPG). Quality PPG signals directly
lead to reliable physiological readings. However, PPG signal acquisition
procedures are often accompanied by spurious motion artefacts (MAs), especially
during low-to-high-intensity physical activity. This study proposes a practical
adversarial learning approach for opto-physiological monitoring by using a
generative adversarial network with an attention mechanism (AM-GAN) to model
motion noise and to allow MA removal. The AM-GAN learns an MA-resistant mapping
from raw and noisy signals to clear PPG signals in an adversarial manner,
guided by an attention mechanism to directly translate the motion reference of
triaxial acceleration to the MAs appearing in the raw signal. The AM-GAN was
evaluated on three different protocols involving 39 subjects engaged in
various physical activities. The average absolute error for heart rate (HR)
derived from the MA-free PPG signal via the AM-GAN is 1.81 beats/min for the IEEE-SPC
dataset and 3.86 beats/min for the PPGDalia dataset. The same procedure applied
to an in-house LU dataset resulted in average absolute errors for HR and
respiratory rate (RR) of less than 1.37 beats/min and 2.49 breaths/min,
respectively. The study demonstrates the robustness and resilience of AM-GAN,
particularly during low-to-high-intensity physical activities.
|
2502.09294
|
Indeterminacy in Affective Computing: Considering Meaning and Context in
Data Collection Practices
|
cs.AI
|
Automatic Affect Prediction (AAP) uses computational analysis of input data
such as text, speech, images, and physiological signals to predict various
affective phenomena (e.g., emotions or moods). These models are typically
constructed using supervised machine-learning algorithms, which rely heavily on
labeled training datasets. In this position paper, we posit that all AAP
training data are derived from human Affective Interpretation Processes,
resulting in a form of Affective Meaning. Research on human affect indicates a
form of complexity that is fundamental to such meaning: it can possess what we
refer to here broadly as Qualities of Indeterminacy (QIs) - encompassing
Subjectivity (meaning depends on who is interpreting), Uncertainty (lack of
confidence regarding meanings' correctness), Ambiguity (meaning contains
mutually exclusive concepts) and Vagueness (meaning is situated at different
levels in a nested hierarchy). Failing to appropriately consider QIs leads to
results incapable of meaningful and reliable predictions. Based on this
premise, we argue that a crucial step in adequately addressing indeterminacy in
AAP is the development of data collection practices for modeling corpora that
involve the systematic consideration of 1) a relevant set of QIs and 2) context
for the associated interpretation processes. To this end, we 1) outline a
conceptual model of Affective Interpretation Processes (AIPs) and the QIs
associated with the meaning they produce, along with a conceptual structure of
relevant context that supports understanding of its role. Finally, we use our
framework to 2) discuss examples of
context-sensitivity-related challenges for addressing QIs in data collection
setups. We believe our efforts can stimulate a structured discussion of both
the role of aspects of indeterminacy and context in research on AAP, informing
the development of better practices for data collection and analysis.
|
2502.09296
|
A Physics-Informed Deep Learning Model for MRI Brain Motion Correction
|
cs.CV physics.med-ph
|
Background: MRI is crucial for brain imaging but is highly susceptible to
motion artifacts due to long acquisition times. This study introduces
PI-MoCoNet, a physics-informed motion correction network that integrates
spatial and k-space information to remove motion artifacts without explicit
motion parameter estimation, enhancing image fidelity and diagnostic
reliability. Materials and Methods: PI-MoCoNet consists of a motion detection
network (U-net with spatial averaging) to identify corrupted k-space lines and
a motion correction network (U-net with Swin Transformer blocks) to reconstruct
motion-free images. The correction is guided by three loss functions:
reconstruction (L1), perceptual (LPIPS), and data consistency (Ldc). Motion
artifacts were simulated via rigid phase encoding perturbations and evaluated
on IXI and MR-ART datasets against Pix2Pix, CycleGAN, and U-net using PSNR,
SSIM, and NMSE. Results: PI-MoCoNet significantly improved image quality. On
IXI, for minor artifacts, PSNR increased from 34.15 dB to 45.95 dB, SSIM from
0.87 to 1.00, and NMSE reduced from 0.55% to 0.04%. For moderate artifacts,
PSNR improved from 30.23 dB to 42.16 dB, SSIM from 0.80 to 0.99, and NMSE from
1.32% to 0.09%. For heavy artifacts, PSNR rose from 27.99 dB to 36.01 dB, SSIM
from 0.75 to 0.97, and NMSE decreased from 2.21% to 0.36%. On MR-ART,
PI-MoCoNet achieved PSNR gains of ~10 dB and SSIM improvements of up to 0.20,
with NMSE reductions of ~6%. Ablation studies confirmed the importance of data
consistency and perceptual losses, yielding a 1 dB PSNR gain and 0.17% NMSE
reduction. Conclusions: PI-MoCoNet effectively mitigates motion artifacts in
brain MRI, outperforming existing methods. Its ability to integrate spatial and
k-space information makes it a promising tool for clinical use in motion-prone
settings. Code: https://github.com/mosaf/PI-MoCoNet.git.
|
2502.09297
|
When do neural networks learn world models?
|
cs.LG
|
Humans develop world models that capture the underlying generation process of
data. Whether neural networks can learn similar world models remains an open
problem. In this work, we provide the first theoretical results for this
problem, showing that in a multi-task setting, models with a low-degree bias
provably recover latent data-generating variables under mild assumptions --
even if proxy tasks involve complex, non-linear functions of the latents.
However, such recovery is also sensitive to model architecture. Our analysis
leverages Boolean models of task solutions via the Fourier-Walsh transform and
introduces new techniques for analyzing invertible Boolean transforms, which
may be of independent interest. We illustrate the algorithmic implications of
our results and connect them to related research areas, including
self-supervised learning, out-of-distribution generalization, and the linear
representation hypothesis in large language models.
|
2502.09298
|
Convex Is Back: Solving Belief MDPs With Convexity-Informed Deep
Reinforcement Learning
|
cs.LG
|
We present a novel method for Deep Reinforcement Learning (DRL),
incorporating the convex property of the value function over the belief space
in Partially Observable Markov Decision Processes (POMDPs). We introduce hard-
and soft-enforced convexity as two different approaches, and compare their
performance against standard DRL on two well-known POMDP environments, namely
the Tiger and FieldVisionRockSample problems. Our findings show that including
the convexity feature can substantially increase performance of the agents, as
well as increase robustness over the hyperparameter space, especially when
testing on out-of-distribution domains. The source code for this work can be
found at https://github.com/Dakout/Convex_DRL.
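One way to read "soft-enforced convexity" is as a penalty on violations of the midpoint inequality $V(\frac{b_1+b_2}{2}) \le \frac{V(b_1)+V(b_2)}{2}$, which holds for the (convex) optimal value function over beliefs. A sketch of that idea follows; the paper's actual formulation may differ:

```python
def convexity_penalty(value_fn, b1, b2):
    """Soft convexity regularizer over two beliefs: for a convex value
    function, V at the midpoint must not exceed the chord average, so any
    positive gap is penalized."""
    mid = [(a + b) / 2.0 for a, b in zip(b1, b2)]
    gap = value_fn(mid) - (value_fn(b1) + value_fn(b2)) / 2.0
    return max(0.0, gap)

convex_v = lambda b: sum(x * x for x in b)     # convex  -> zero penalty
concave_v = lambda b: -sum(x * x for x in b)   # concave -> positive penalty
p_convex = convexity_penalty(convex_v, [0.0, 1.0], [1.0, 0.0])
p_concave = convexity_penalty(concave_v, [0.0, 1.0], [1.0, 0.0])
```

Averaging such penalties over sampled belief pairs yields a differentiable regularizer that can be added to a standard DRL loss.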
|
2502.09299
|
Moving Matter: Efficient Reconfiguration of Tile Arrangements by a
Single Active Robot
|
cs.CG cs.DS cs.RO
|
We consider the problem of reconfiguring a two-dimensional connected grid
arrangement of passive building blocks from a start configuration to a goal
configuration, using a single active robot that can move on the tiles, remove
individual tiles from a given location and physically move them to a new
position by walking on the remaining configuration. The objective is to
determine a reconfiguration schedule that minimizes the overall makespan, while
ensuring that the tile configuration remains connected. We provide both
negative and positive results. (1) We present a generalized version of the
problem, parameterized by weighted costs for moving with or without tiles, and
show that this is NP-complete. (2) We give a polynomial-time constant-factor
approximation algorithm for the case of disjoint start and target bounding
boxes. In addition, our approach yields optimal carry distance for 2-scaled
instances.
|