| id | title | categories | abstract |
|---|---|---|---|
2501.07762
|
PSReg: Prior-guided Sparse Mixture of Experts for Point Cloud
Registration
|
cs.CV cs.AI
|
Discriminative features are crucial for point cloud registration. Recent
methods improve feature discriminativeness by distinguishing between
non-overlapping and overlapping region points. However, they still struggle to
distinguish ambiguous structures within the overlapping regions, so the
ambiguous features they extract yield a significant number of outlier matches
from those regions. To solve this problem, we propose a prior-guided SMoE-based
registration method that improves feature distinctiveness by dispatching
potential correspondences to the same experts. Specifically, we propose a
prior-guided SMoE module that fuses prior overlap and potential correspondence
embeddings for routing, assigning tokens to the most suitable experts for
processing. In addition, we propose a registration framework built from a
specific combination of Transformer layers and prior-guided SMoE modules. The
proposed method not only attends to locating the overlapping areas of point
clouds but also commits to finding more accurate correspondences within them.
Extensive experiments demonstrate the effectiveness of our method, which
achieves state-of-the-art registration recall (95.7%/79.3%) on the
3DMatch/3DLoMatch benchmarks. We also evaluate on ModelNet40, where the method
likewise performs excellently.
|
2501.07763
|
On the Statistical Capacity of Deep Generative Models
|
stat.ML cs.AI cs.LG math.ST stat.TH
|
Deep generative models are routinely used in generating samples from complex,
high-dimensional distributions. Despite their apparent successes, their
statistical properties are not well understood. A common assumption is that
with enough training data and sufficiently large neural networks, deep
generative model samples will have arbitrarily small errors in sampling from
any continuous target distribution. We set up a unifying framework that debunks
this belief. We demonstrate that broad classes of deep generative models,
including variational autoencoders and generative adversarial networks, are not
universal generators. Under the predominant case of Gaussian latent variables,
these models can only generate concentrated samples that exhibit light tails.
Using tools from concentration of measure and convex geometry, we give
analogous results for more general log-concave and strongly log-concave latent
variable distributions. We extend our results to diffusion models via a
reduction argument. We use the Gromov--Levy inequality to give similar
guarantees when the latent variables lie on manifolds with positive Ricci
curvature. These results shed light on the limited capacity of common deep
generative models to handle heavy tails. We illustrate the empirical relevance
of our work with simulations and financial data.
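The light-tail phenomenon the abstract describes can be illustrated with a small simulation: pushing Gaussian latents through a Lipschitz map (here a random two-layer tanh network, an illustrative stand-in for a trained generator, with all architecture choices assumed) yields samples whose tails are far lighter than a heavy-tailed Student-t target.

```python
import numpy as np

rng = np.random.default_rng(0)

# Random two-layer tanh network: a Lipschitz map, mimicking a generator
# driven by Gaussian latent variables (illustrative stand-in, not trained).
w1 = rng.standard_normal((8, 16)) / 4.0
w2 = rng.standard_normal((16, 1)) / 4.0

def toy_generator(z):
    return (np.tanh(z @ w1) @ w2).ravel()

def excess_kurtosis(x):
    x = (x - x.mean()) / x.std()
    return float((x ** 4).mean() - 3.0)

z = rng.standard_normal((100_000, 8))       # Gaussian latent variables
gen_samples = toy_generator(z)              # concentrated, light-tailed output
heavy = rng.standard_t(df=3, size=100_000)  # heavy-tailed target distribution

k_gen, k_heavy = excess_kurtosis(gen_samples), excess_kurtosis(heavy)
```

The generated samples' excess kurtosis stays near zero while the Student-t samples' is large, matching the claim that Gaussian-latent models cannot reproduce heavy tails.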
|
2501.07764
|
Deep Learning for Disease Outbreak Prediction: A Robust Early Warning
Signal for Transcritical Bifurcations
|
cs.LG cs.AI
|
Early Warning Signals (EWSs) are vital for implementing preventive measures
before a disease turns into a pandemic. While new diseases exhibit unique
behaviors, they often share fundamental characteristics from a dynamical
systems perspective. Moreover, measurements during disease outbreaks are often
corrupted by different noise sources, posing challenges for Time Series
Classification (TSC) tasks. In this study, we address the problem of having a
robust EWS for disease outbreak prediction using a best-performing deep
learning model in the domain of TSC. We employed two simulated datasets to
train the model: one representing generated dynamical systems with randomly
selected polynomial terms to model new disease behaviors, and another
simulating noise-induced disease dynamics to account for noisy measurements.
The model's performance was analyzed using both simulated data from different
disease models and real-world data, including influenza and COVID-19. Results
demonstrate that the proposed model outperforms previous models, effectively
providing EWSs of impending outbreaks across various scenarios. This study
bridges advancements in deep learning with the ability to provide robust early
warning signals in noisy environments, making it highly applicable to
real-world crises involving emerging disease outbreaks.
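For context, the deep-learning EWS studied here is an alternative to the classical statistical indicators of an approaching transcritical bifurcation: rising variance and rising lag-1 autocorrelation. A toy simulation of the transcritical normal form with a drifting control parameter shows both indicators (all parameter values are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
dt, sigma, n = 0.1, 0.02, 4000
# Control parameter drifting toward the bifurcation at r = 0.
r = np.linspace(-1.0, -0.2, n)
x = np.zeros(n)
for t in range(n - 1):
    # Transcritical normal form around x = 0: dx = (r x - x^2) dt + noise.
    x[t + 1] = x[t] + (r[t] * x[t] - x[t] ** 2) * dt \
        + sigma * np.sqrt(dt) * rng.standard_normal()

def lag1_autocorr(s):
    s = s - s.mean()
    return float((s[:-1] * s[1:]).mean() / s.var())

early, late = x[: n // 4], x[-(n // 4):]
var_early, var_late = float(early.var()), float(late.var())
ac_early, ac_late = lag1_autocorr(early), lag1_autocorr(late)
```

Both variance and lag-1 autocorrelation increase as r approaches zero; this "critical slowing down" signal, buried in noise, is what a time series classifier must detect.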
|
2501.07765
|
PINN-FEM: A Hybrid Approach for Enforcing Dirichlet Boundary Conditions
in Physics-Informed Neural Networks
|
cs.LG physics.comp-ph stat.ML
|
Physics-Informed Neural Networks (PINNs) solve partial differential equations
(PDEs) by embedding governing equations and boundary/initial conditions into
the loss function. However, enforcing Dirichlet boundary conditions accurately
remains challenging, often leading to soft enforcement that compromises
convergence and reliability in complex domains. We propose a hybrid approach,
PINN-FEM, which combines PINNs with finite element methods (FEM) to impose
strong Dirichlet boundary conditions via domain decomposition. This method
incorporates FEM-based representations near the boundary, ensuring exact
enforcement without compromising convergence. Through six experiments of
increasing complexity, PINN-FEM outperforms standard PINN models, showcasing
superior accuracy and robustness. While distance functions and similar
techniques have been proposed for boundary condition enforcement, they lack
generality for real-world applications. PINN-FEM bridges this gap by leveraging
FEM near boundaries, making it well-suited for industrial and scientific
problems.
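The distance-function construction the abstract cites as prior work makes the contrast concrete: writing the ansatz as u(x) = g(x) + d(x) N(x), where d vanishes on the boundary, satisfies the Dirichlet data exactly no matter what the network outputs. A 1-D sketch with the network replaced by an arbitrary smooth stand-in (all functions below are illustrative assumptions):

```python
import numpy as np

# Toy domain [0, 1] with Dirichlet data u(0) = 1 and u(1) = 3.
def g(x):
    return 1.0 + 2.0 * x      # any smooth extension of the boundary data

def d(x):
    return x * (1.0 - x)      # distance-like function vanishing on the boundary

def net(x):
    return np.sin(5.0 * x)    # hypothetical stand-in for a neural network

def u_hat(x):
    # Hard enforcement: boundary values are exact regardless of the network.
    return g(x) + d(x) * net(x)
```

PINN-FEM replaces the d(x)·N(x) construction near the boundary with an FEM representation, which the authors argue generalizes better to complex real-world domains.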
|
2501.07766
|
Large Language Models for Knowledge Graph Embedding Techniques, Methods,
and Challenges: A Survey
|
cs.CL cs.AI
|
Large Language Models (LLMs), which train hundreds of millions or more
parameters on large amounts of text data to understand and generate natural
language, have attracted considerable attention across fields for their strong
performance. As that performance has become apparent, LLMs are increasingly
being applied to knowledge graph embedding (KGE) related tasks to improve the
processing results. As deep learning models for Natural Language Processing
(NLP), they learn from large volumes of textual data to predict the next word
or generate content related to a given text. Recently, LLMs have been applied
to varying degrees in different types of KGE scenarios, such as multi-modal KGE
and open KGE, according to their task characteristics. In this paper, we
investigate a wide range of approaches for performing LLM-related tasks in
different types of KGE scenarios. To better compare the various approaches, we
summarize each KGE scenario in a classification. Beyond this categorization, we
provide a tabular overview of the methods and links to their source code for a
more direct comparison. We also discuss the applications in which these methods
are mainly used and suggest several forward-looking directions for the
development of this new research area.
|
2501.07769
|
BMIP: Bi-directional Modality Interaction Prompt Learning for VLM
|
cs.LG cs.CV
|
Vision-language models (VLMs) have exhibited remarkable generalization
capabilities, and prompt learning for VLMs has attracted great attention for
its ability to adapt pre-trained VLMs to specific downstream tasks. However,
existing studies mainly focus on single-modal prompts or uni-directional
modality interaction, overlooking the powerful alignment effects resulting from
the interaction between the vision and language modalities. To this end, we
propose a novel prompt learning method called
$\underline{\textbf{B}}i-directional \underline{\textbf{M}}odality
\underline{\textbf{I}}nteraction \underline{\textbf{P}}rompt (BMIP)$, which
dynamically weights bi-modal information through learning the information of
the attention layer, enhancing trainability and inter-modal consistency
compared to simple information aggregation methods. To evaluate the
effectiveness of prompt learning methods, we propose a more realistic
evaluation paradigm called open-world generalization complementing the widely
adopted cross-dataset transfer and domain generalization tasks. Comprehensive
experiments on various datasets reveal that BMIP not only outperforms current
state-of-the-art methods across all three evaluation paradigms but is also
flexible enough to be combined with other prompt-based methods for consistent
performance enhancement.
|
2501.07771
|
An Empirical Evaluation of Serverless Cloud Infrastructure for
Large-Scale Data Processing
|
cs.DB
|
Data processing systems are increasingly deployed in the cloud. While
monolithic systems run fully on virtual servers, recent systems embrace cloud
infrastructure and utilize the disaggregation of compute and storage to scale
them independently. The introduction of serverless compute services, such as
AWS Lambda, enables finer-grained and elastic scalability within these systems.
Prior work shows the viability of serverless infrastructure for scalable data
processing yet also sees limitations due to variable performance and cost
overhead, in particular for networking and storage.
In this paper, we perform a detailed analysis of the performance and cost
characteristics of serverless infrastructure in the data processing context. We
base our analysis on a large series of micro-benchmarks across different
compute and storage services, as well as end-to-end workloads. To enable our
analysis, we propose the Skyrise serverless evaluation platform. For the widely
used serverless infrastructure of AWS, our analysis reveals distinct boundaries
for performance variability in serverless networks and storage. We further
present cost break-even points for serverless compute and storage. These
insights provide guidance on when and how serverless infrastructure can be
efficiently used for data processing.
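The cost break-even idea can be sketched with back-of-envelope arithmetic. The prices below are placeholder assumptions, not the paper's measured values:

```python
# Placeholder prices (assumptions, not the paper's measurements).
FN_PRICE_PER_GB_S = 0.0000167   # $ per GB-second of serverless function runtime
VM_PRICE_PER_HOUR = 0.10        # $ per hour for an always-on virtual server

def serverless_cost(requests, mem_gb, seconds_per_request):
    # Pay-per-use: cost scales with actual compute consumed.
    return requests * mem_gb * seconds_per_request * FN_PRICE_PER_GB_S

def vm_cost(hours):
    # Provisioned: cost is flat regardless of load.
    return hours * VM_PRICE_PER_HOUR

# Break-even load: number of 1 GB, 1 s requests per hour at which
# the always-on VM becomes the cheaper option.
break_even_requests = VM_PRICE_PER_HOUR / (1.0 * 1.0 * FN_PRICE_PER_GB_S)
```

Below roughly 6,000 such requests per hour the pay-per-use model wins; sustained higher load favors provisioned capacity. Quantifying exactly this kind of boundary, with real measured prices and performance, is what the Skyrise analysis does.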
|
2501.07773
|
Symmetry-Aware Generative Modeling through Learned Canonicalization
|
cs.LG
|
Generative modeling of symmetric densities has a range of applications in AI
for science, from drug discovery to physics simulations. The existing
generative modeling paradigm for invariant densities combines an invariant
prior with an equivariant generative process. However, we observe that this
technique is not necessary and has several drawbacks resulting from the
limitations of equivariant networks. Instead, we propose to model a learned
slice of the density so that only one representative element per orbit is
learned. To accomplish this, we learn a group-equivariant canonicalization
network that maps training samples to a canonical pose and train a
non-equivariant generative model over these canonicalized samples. We implement
this idea in the context of diffusion models. Our preliminary experimental
results on molecular modeling are promising, demonstrating improved sample
quality and faster inference time.
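The slice-of-the-density idea is that every rotated copy of a shape should map to one orbit representative before the generative model ever sees it. A 2-D sketch using PCA with a moment-based sign fix as an illustrative stand-in for the paper's learned equivariant canonicalization network:

```python
import numpy as np

def canonicalize(points):
    """Rotate a 2-D point cloud into a canonical pose: principal axes give
    the rotation, and third moments fix the per-axis sign ambiguity."""
    centred = points - points.mean(axis=0)
    _, eigvecs = np.linalg.eigh(centred.T @ centred)
    canon = centred @ eigvecs[:, ::-1]       # columns ordered major, minor axis
    for k in range(2):
        if (canon[:, k] ** 3).sum() < 0:     # sign convention via third moment
            canon[:, k] *= -1
    return canon

rng = np.random.default_rng(2)
cloud = rng.standard_normal((200, 2)) * np.array([3.0, 1.0])  # anisotropic shape
theta = 0.6
rot = np.array([[np.cos(theta), -np.sin(theta)],
                [np.sin(theta),  np.cos(theta)]])
rotated = cloud @ rot.T                       # an arbitrarily rotated copy
```

Here `canonicalize(cloud)` and `canonicalize(rotated)` coincide, so a non-equivariant diffusion model trained on canonicalized samples never spends capacity on the symmetry.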
|
2501.07774
|
Transforming Indoor Localization: Advanced Transformer Architecture for
NLOS Dominated Wireless Environments with Distributed Sensors
|
cs.LG cs.AI eess.SP
|
Indoor localization in challenging non-line-of-sight (NLOS) environments
often leads to mediocre accuracy with traditional approaches. Deep learning
(DL) has been applied to tackle these challenges; however, many DL approaches
overlook computational complexity, especially for floating-point operations
(FLOPs), making them unsuitable for resource-limited devices. Transformer-based
models have achieved remarkable success in natural language processing (NLP)
and computer vision (CV) tasks, motivating their use in wireless applications.
However, their use in indoor localization remains nascent, and directly
applying Transformers to it can be both computationally intensive and limited
in accuracy. To address these challenges, in
this work, we introduce a novel tokenization approach, referred to as Sensor
Snapshot Tokenization (SST), which preserves variable-specific representations
of power delay profile (PDP) and enhances attention mechanisms by effectively
capturing multi-variate correlation. Complementing this, we propose a
lightweight Swish-Gated Linear Unit-based Transformer (L-SwiGLU Transformer)
model, designed to reduce computational complexity without compromising
localization accuracy. Together, these contributions mitigate the computational
burden and dependency on large datasets, making Transformer models more
efficient and suitable for resource-constrained scenarios. The proposed
tokenization method enables the Vanilla Transformer to achieve a 90th
percentile positioning error of 0.388 m in a highly NLOS indoor factory,
surpassing conventional tokenization methods. The L-SwiGLU ViT further reduces
the error to 0.355 m, achieving an 8.51% improvement. Additionally, the
proposed model outperforms a 14.1 times larger model with a 46.13% improvement,
underscoring its computational efficiency.
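The tokenization contrast can be shown at the shape level: instead of flattening the whole multi-sensor power delay profile into one vector, Sensor Snapshot Tokenization keeps one token per sensor, so attention can model cross-sensor correlation. The dimensions and the linear projection below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(3)
n_sensors, pdp_len, d_model = 8, 64, 32   # hypothetical sizes

pdp = rng.standard_normal((n_sensors, pdp_len))   # one PDP snapshot per sensor

# Naive tokenization: a single flattened token loses per-sensor structure.
flat_token = pdp.reshape(1, -1)

# Sensor Snapshot Tokenization (schematic): one token per sensor, each
# projected into the model dimension, preserving variable-specific structure
# so attention can capture multi-variate (cross-sensor) correlation.
proj = rng.standard_normal((pdp_len, d_model)) / np.sqrt(pdp_len)
tokens = pdp @ proj
```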
|
2501.07783
|
Parameter-Inverted Image Pyramid Networks for Visual Perception and
Multimodal Understanding
|
cs.CV cs.CL
|
Image pyramids are widely adopted in top-performing methods to obtain
multi-scale features for precise visual perception and understanding. However,
current image pyramids use the same large-scale model to process multiple
resolutions of images, leading to significant computational cost. To address
this challenge, we propose a novel network architecture, called
Parameter-Inverted Image Pyramid Networks (PIIP). Specifically, PIIP uses
pretrained models (ViTs or CNNs) as branches to process multi-scale images,
where images of higher resolutions are processed by smaller network branches to
balance computational cost and performance. To integrate information from
different spatial scales, we further propose a novel cross-branch feature
interaction mechanism. To validate PIIP, we apply it to various perception
models and a representative multimodal large language model called LLaVA, and
conduct extensive experiments on various tasks such as object detection,
segmentation, image classification and multimodal understanding. PIIP achieves
superior performance compared to single-branch and existing multi-resolution
approaches with lower computational cost. When applied to InternViT-6B, a
large-scale vision foundation model, PIIP can improve its performance by 1%-2%
on detection and segmentation with only 40%-60% of the original computation,
finally achieving 60.0 box AP on MS COCO and 59.7 mIoU on ADE20K. For
multimodal understanding, our PIIP-LLaVA achieves 73.0% accuracy on TextVQA and
74.5% on MMBench with only 2.8M training data. Our code is released at
https://github.com/OpenGVLab/PIIP.
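The parameter-inverted pairing can be seen in back-of-envelope FLOP arithmetic: token count grows with resolution, so routing the highest resolution to the cheapest branch dominates the total cost. The per-token costs and token counts below are hypothetical:

```python
# Hypothetical relative per-token costs of three branch sizes.
COST_SMALL, COST_MID, COST_LARGE = 1.0, 4.0, 16.0
# Hypothetical token counts for three pyramid resolutions.
TOKENS_HI, TOKENS_MID, TOKENS_LO = 4096, 1024, 256

# Conventional pyramid: one large model processes every resolution.
conventional = COST_LARGE * (TOKENS_HI + TOKENS_MID + TOKENS_LO)

# Parameter-inverted pairing: high resolution -> small branch, and so on.
inverted = (COST_SMALL * TOKENS_HI
            + COST_MID * TOKENS_MID
            + COST_LARGE * TOKENS_LO)

relative_cost = inverted / conventional
```

Under these toy numbers the inverted assignment needs about a seventh of the compute, the same direction as the 40%-60% of the original computation reported for InternViT-6B.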
|
2501.07793
|
Unsupervised Query Routing for Retrieval Augmented Generation
|
cs.IR
|
Query routing for retrieval-augmented generation aims to assign an input
query to the most suitable search engine. Existing works rely heavily on
supervised datasets that require extensive manual annotation, resulting in high
costs and limited scalability, as well as poor generalization to
out-of-distribution scenarios. To address these challenges, we introduce a
novel unsupervised method that constructs the "upper-bound" response to
evaluate the quality of retrieval-augmented responses. This evaluation enables
the decision of the most suitable search engine for a given query. By
eliminating manual annotations, our approach can automatically process
large-scale real user queries and create training data. We conduct extensive
experiments across five datasets, demonstrating that our method significantly
enhances scalability and generalization capabilities.
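The routing decision, scoring each engine's retrieval-augmented response against the constructed "upper-bound" response and picking the best, can be sketched with a simple unigram-overlap score. The abstract does not specify the scoring function, so the scorer, engine names, and example data here are all illustrative assumptions:

```python
def overlap_score(response, reference):
    """Crude unigram overlap; a stand-in for however the upper-bound
    response is actually compared against candidates in the paper."""
    r, ref = set(response.lower().split()), set(reference.lower().split())
    return len(r & ref) / max(len(ref), 1)

def route(responses_by_engine, upper_bound):
    """Assign the query to the engine whose retrieval-augmented
    response comes closest to the constructed upper-bound response."""
    return max(responses_by_engine,
               key=lambda e: overlap_score(responses_by_engine[e], upper_bound))

upper_bound = "the eiffel tower is 330 metres tall"
responses_by_engine = {
    "web_search": "the eiffel tower is 330 metres tall after its antenna extension",
    "local_wiki": "paris is the capital city of france",
}
chosen = route(responses_by_engine, upper_bound)
```

Because the upper-bound response is constructed without human labels, this decision can be made over large-scale real user queries to produce training data automatically.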
|
2501.07794
|
Linearly Convergent Mixup Learning
|
cs.LG
|
Learning in a reproducing kernel Hilbert space (RKHS), as in the support
vector machine, has long been recognized as a promising technique. It remains
highly effective and competitive in numerous prediction tasks, particularly in
settings with a shortage of training data or computational limitations, and
these methods are especially valued for their ability to work with small
datasets and for their interpretability. Mixup data augmentation, widely used
in deep learning to address limited training data, has nonetheless remained
challenging to apply to learning in an RKHS because it generates intermediate
class labels. Although gradient descent methods handle these labels
effectively, dual optimization approaches are typically not directly
applicable. In this study, we present two novel algorithms that extend mixup to a
broader range of binary classification models. Unlike gradient-based
approaches, our algorithms do not require hyperparameters like learning rates,
simplifying their implementation and optimization. Both the number of
iterations to converge and the computational cost per iteration scale linearly
with respect to the dataset size. The numerical experiments demonstrate that
our algorithms achieve faster convergence to the optimal solution compared to
gradient descent approaches, and that mixup data augmentation consistently
improves the predictive performance across various loss functions.
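The intermediate class labels that make dual solvers awkward are easy to see: mixup forms convex combinations of both inputs and one-hot labels, so targets stop being integers. A minimal sketch of standard mixup:

```python
import numpy as np

rng = np.random.default_rng(4)

def mixup(x1, y1, x2, y2, alpha=0.2):
    """Standard mixup: one convex weight blends both the inputs
    and the one-hot labels of two training examples."""
    lam = rng.beta(alpha, alpha)
    return lam * x1 + (1 - lam) * x2, lam * y1 + (1 - lam) * y2

x1, x2 = rng.standard_normal(5), rng.standard_normal(5)
y1, y2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
x_mix, y_mix = mixup(x1, y1, x2, y2)   # y_mix is fractional, not one-hot
```

Gradient methods can regress onto `y_mix` directly, while dual formulations such as the SVM assume hard ±1 labels; closing that gap without gradient-style hyperparameters is what the paper's two algorithms do.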
|
2501.07800
|
BioPose: Biomechanically-accurate 3D Pose Estimation from Monocular
Videos
|
cs.CV cs.AI cs.LG
|
Recent advancements in 3D human pose estimation from single-camera images and
videos have relied on parametric models, like SMPL. However, these models
oversimplify anatomical structures, limiting their accuracy in capturing true
joint locations and movements, which reduces their applicability in
biomechanics, healthcare, and robotics. Biomechanically accurate pose
estimation, on the other hand, typically requires costly marker-based motion
capture systems and optimization techniques in specialized labs. To bridge this
gap, we propose BioPose, a novel learning-based framework for predicting
biomechanically accurate 3D human pose directly from monocular videos. BioPose
includes three key components: a Multi-Query Human Mesh Recovery model
(MQ-HMR), a Neural Inverse Kinematics (NeurIK) model, and a 2D-informed pose
refinement technique. MQ-HMR leverages a multi-query deformable transformer to
extract multi-scale fine-grained image features, enabling precise human mesh
recovery. NeurIK treats the mesh vertices as virtual markers, applying a
spatial-temporal network to regress biomechanically accurate 3D poses under
anatomical constraints. To further improve 3D pose estimations, a 2D-informed
refinement step optimizes the query tokens during inference by aligning the 3D
structure with 2D pose observations. Experiments on benchmark datasets
demonstrate that BioPose significantly outperforms state-of-the-art methods.
Project website:
\url{https://m-usamasaleem.github.io/publication/BioPose/BioPose.html}.
|
2501.07801
|
A Comparative Analysis of DNN-based White-Box Explainable AI Methods in
Network Security
|
cs.CR cs.AI
|
New research focuses on creating artificial intelligence (AI) solutions for
network intrusion detection systems (NIDS), inspired by the ever-growing
number of intrusions on networked systems and their increasing complexity. The
use of explainable AI (XAI) techniques in real-world intrusion detection
systems thus stems from the requirement to comprehend and elucidate black-box
AI models for security analysts. In an
effort to meet such requirements, this paper focuses on applying and evaluating
White-Box XAI techniques (particularly LRP, IG, and DeepLift) for NIDS via an
end-to-end framework for neural network models, using three widely used network
intrusion datasets (NSL-KDD, CICIDS-2017, and RoEduNet-SIMARGL2021), assessing
their global and local scopes, and examining six distinct assessment measures
(descriptive accuracy, sparsity, stability, robustness, efficiency, and
completeness). We also compare the performance of white-box XAI methods with
black-box XAI methods. The results show that using White-box XAI techniques
scores high in robustness and completeness, which are crucial metrics for IDS.
Moreover, the source codes for the programs developed for our XAI evaluation
framework are available to be improved and used by the research community.
|
2501.07802
|
Visual Language Models as Operator Agents in the Space Domain
|
cs.AI physics.space-ph
|
This paper explores the application of Vision-Language Models (VLMs) as
operator agents in the space domain, focusing on both software and hardware
operational paradigms. Building on advances in Large Language Models (LLMs) and
their multimodal extensions, we investigate how VLMs can enhance autonomous
control and decision-making in space missions. In the software context, we
employ VLMs within the Kerbal Space Program Differential Games (KSPDG)
simulation environment, enabling the agent to interpret visual screenshots of
the graphical user interface to perform complex orbital maneuvers. In the
hardware context, we integrate VLMs with robotic systems equipped with cameras
to inspect and diagnose physical space objects, such as satellites. Our results
demonstrate that VLMs can effectively process visual and textual data to
generate contextually appropriate actions, competing with traditional methods
and non-multimodal LLMs in simulation tasks, and showing promise in real-world
applications.
|
2501.07804
|
Balance Divergence for Knowledge Distillation
|
cs.CV
|
Knowledge distillation has been widely adopted in computer vision task
processing, since it can effectively enhance the performance of lightweight
student networks by leveraging the knowledge transferred from cumbersome
teacher networks. Most existing knowledge distillation methods utilize
Kullback-Leibler divergence to mimic the logit output probabilities between the
teacher network and the student network. Nonetheless, these methods may neglect
the negative parts of the teacher's ''dark knowledge'' because the divergence
calculations may ignore the effect of the minute probabilities from the
teacher's logit output. This deficiency may lead to suboptimal performance in
logit mimicry during the distillation process and result in an imbalance of
information acquired by the student network. In this paper, we investigate the
impact of this imbalance and propose a novel method, named Balance Divergence
Distillation. By introducing a compensatory operation using reverse
Kullback-Leibler divergence, our method can improve the modeling of the
extremely small values in the teacher's negative logits while preserving the
learning capacity for the positive ones. Furthermore, we test the impact of
different temperature-coefficient adjustments, which can further balance
knowledge transfer. We evaluate the proposed method on several
computer vision tasks, including image classification and semantic
segmentation. The evaluation results show that our method achieves an accuracy
improvement of 1%-3% for lightweight students on both the CIFAR-100 and
ImageNet datasets, and a 4.55% improvement in mIoU for PSP-ResNet18 on the Cityscapes
dataset. The experiments show that our method is a simple yet highly effective
solution that can be smoothly applied to different knowledge distillation
methods.
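The compensatory operation can be sketched as a weighted sum of forward and reverse Kullback-Leibler divergence on temperature-softened logits. The weight `beta` and the exact combination below are assumptions for illustration, not the authors' released formulation:

```python
import numpy as np

def softmax(z, T=1.0):
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)   # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def kl(p, q, eps=1e-12):
    return np.sum(p * (np.log(p + eps) - np.log(q + eps)), axis=-1)

def balance_divergence_loss(t_logits, s_logits, T=4.0, beta=0.5):
    """Forward KL mimics the teacher's large probabilities; the reverse KL
    term compensates for the tiny ones in the 'dark knowledge'
    (beta and T are assumed values)."""
    p, q = softmax(t_logits, T), softmax(s_logits, T)
    return float((kl(p, q) + beta * kl(q, p)).mean() * T * T)

teacher = np.array([[5.0, 1.0, -3.0]])
student = np.array([[4.0, 2.0, -2.0]])
loss = balance_divergence_loss(teacher, student)
```

The reverse term KL(q||p) grows when the student puts mass where the teacher's probability is tiny, which is exactly the region the forward term under-weights.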
|
2501.07806
|
Learning Motion and Temporal Cues for Unsupervised Video Object
Segmentation
|
cs.CV
|
In this paper, we address the challenges in unsupervised video object
segmentation (UVOS) by proposing an efficient algorithm, termed MTNet, which
concurrently exploits motion and temporal cues. Unlike previous methods that
focus solely on integrating appearance with motion or on modeling temporal
relations, our method combines both aspects by integrating them within a
unified framework. MTNet is devised by effectively merging appearance and
motion features during the feature extraction process within encoders,
promoting a more complementary representation. To capture the intricate
long-range contextual dynamics and information embedded within videos, a
temporal transformer module is introduced, facilitating efficacious inter-frame
interactions throughout a video clip. Furthermore, we employ a cascade of
decoders across all feature levels to optimally exploit the derived features,
aiming to generate increasingly precise segmentation masks. As a result, MTNet
provides a strong and compact framework that explores both temporal and
cross-modality knowledge to localize and track the primary object robustly,
accurately, and efficiently in various challenging scenarios.
Extensive experiments across diverse benchmarks conclusively show that our
method not only attains state-of-the-art performance in unsupervised video
object segmentation but also delivers competitive results in video salient
object detection. These findings highlight the method's robust versatility and
its adeptness in adapting to a range of segmentation tasks. Source code is
available on https://github.com/hy0523/MTNet.
|
2501.07808
|
A Low-cost and Ultra-lightweight Binary Neural Network for Traffic
Signal Recognition
|
cs.AI cs.CV eess.IV
|
The deployment of neural networks in vehicle platforms and wearable
Artificial Intelligence-of-Things (AIOT) scenarios has become a research area
that has attracted much attention. With the continuous evolution of deep
learning technology, many image classification models are committed to
improving recognition accuracy, but this is often accompanied by problems such
as large model resource usage, complex structure, and high power consumption,
which makes it challenging to deploy on resource-constrained platforms. Herein,
we propose an ultra-lightweight binary neural network (BNN) model designed for
hardware deployment, and conduct image classification research based on the
German Traffic Sign Recognition Benchmark (GTSRB) dataset. In addition, we also
verify it on the Chinese Traffic Sign (CTS) and Belgian Traffic Sign (BTS)
datasets. The proposed model shows excellent recognition performance with an
accuracy of up to 97.64%, making it one of the best performing BNN models in
the GTSRB dataset. Compared with the full-precision model, the accuracy loss is
controlled within 1%, and the parameter storage overhead of the model is only
10% of that of the full-precision model. More importantly, our network model
only relies on logical operations and low-bit width fixed-point addition and
subtraction operations during the inference phase, which greatly simplifies the
design complexity of the processing element (PE). Our research shows the great
potential of BNN in the hardware deployment of computer vision models,
especially in the field of computer vision tasks related to autonomous driving.
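The claim that inference reduces to logical operations and low-bit addition is the standard XNOR-popcount identity for ±1 vectors: dot(a, w) = 2·popcount(XNOR(a, w)) − n. A pure-Python sketch of the identity (an illustration of the general BNN trick, not the paper's specific processing-element design):

```python
def binarize(v):
    """Map real values to {+1, -1} by sign (zero maps to +1)."""
    return [1 if x >= 0 else -1 for x in v]

def xnor_popcount_dot(a_bits, w_bits):
    """Dot product of +/-1 vectors using only XNOR and counting:
    encode +1 as bit 1 and -1 as bit 0, then dot = 2 * popcount(xnor) - n."""
    n = len(a_bits)
    matches = sum(1 for a, w in zip(a_bits, w_bits) if not ((a > 0) ^ (w > 0)))
    return 2 * matches - n

a = binarize([0.3, -1.2, 0.7, -0.1])   # binarized activations
w = binarize([1.0, -0.5, -2.0, 0.4])   # binarized weights
ref = sum(x * y for x, y in zip(a, w)) # full multiply-accumulate reference
out = xnor_popcount_dot(a, w)          # logic ops plus one small addition
```

Replacing every multiply-accumulate with an XNOR and a popcount is what collapses the processing-element design to gates and narrow adders.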
|
2501.07809
|
Conformal mapping Coordinates Physics-Informed Neural Networks
(CoCo-PINNs): learning neural networks for designing neutral inclusions
|
cs.LG cs.AI math.AP
|
We focus on designing and solving the neutral inclusion problem via neural
networks. The neutral inclusion problem has a long history in the theory of
composite materials, and it is exceedingly challenging to identify the precise
condition that renders a general-shaped inclusion neutral. Physics-informed
neural networks (PINNs) have recently become a
highly successful approach to addressing both forward and inverse problems
associated with partial differential equations. We found that traditional PINNs
perform inadequately when applied to the inverse problem of designing neutral
inclusions with arbitrary shapes. In this study, we introduce a novel approach,
Conformal mapping Coordinates Physics-Informed Neural Networks (CoCo-PINNs),
which integrates complex analysis techniques into PINNs. This method exhibits
strong performance in solving forward-inverse problems to construct neutral
inclusions of arbitrary shapes in two dimensions, where the imperfect interface
condition on the inclusion's boundary is modeled by training neural networks.
Notably, we mathematically prove that training with a single linear field is
sufficient to achieve neutrality for untrained linear fields in arbitrary
directions, given a minor assumption. We demonstrate that CoCo-PINNs offer
enhanced performance in terms of credibility, consistency, and stability.
|
2501.07810
|
AVS-Mamba: Exploring Temporal and Multi-modal Mamba for Audio-Visual
Segmentation
|
cs.CV
|
The essence of audio-visual segmentation (AVS) lies in locating and
delineating sound-emitting objects within a video stream. While
Transformer-based methods have shown promise, they struggle with long-range
dependencies due to quadratic computational costs, presenting a bottleneck in
complex scenarios. To overcome this limitation and facilitate
complex multi-modal comprehension with linear complexity, we introduce
AVS-Mamba, a selective state space model to address the AVS task. Our framework
incorporates two key components for video understanding and cross-modal
learning: Temporal Mamba Block for sequential video processing and
Vision-to-Audio Fusion Block for advanced audio-vision integration. Building on
this, we develop the Multi-scale Temporal Encoder, aimed at enhancing the
learning of visual features across scales, facilitating the perception of
intra- and inter-frame information. To perform multi-modal fusion, we propose
the Modality Aggregation Decoder, leveraging the Vision-to-Audio Fusion Block
to integrate visual features into audio features across both frame and temporal
levels. Further, we adopt the Contextual Integration Pyramid to perform
audio-to-vision spatial-temporal context collaboration. Through these
innovative contributions, our approach achieves new state-of-the-art results on
the AVSBench-object and AVSBench-semantic datasets. Our source code and model
weights are available at AVS-Mamba.
|
2501.07813
|
Talk to Right Specialists: Routing and Planning in Multi-agent System
for Question Answering
|
cs.MA cs.AI cs.CL
|
Leveraging large language models (LLMs), an agent can utilize
retrieval-augmented generation (RAG) techniques to integrate external knowledge
and increase the reliability of its responses. Current RAG-based agents
integrate single, domain-specific knowledge sources, limiting their ability and
leading to hallucinated or inaccurate responses when addressing cross-domain
queries. Integrating multiple knowledge bases into a unified RAG-based agent
raises significant challenges, including increased retrieval overhead and data
sovereignty when sensitive data is involved. In this work, we propose RopMura,
a novel multi-agent system that addresses these limitations by incorporating
highly efficient routing and planning mechanisms. RopMura features two key
components: a router that intelligently selects the most relevant agents based
on knowledge boundaries and a planner that decomposes complex multi-hop queries
into manageable steps, allowing for coordinating cross-domain responses.
Experimental results demonstrate that RopMura effectively handles both
single-hop and multi-hop queries, with the routing mechanism enabling precise
answers for single-hop queries and the combined routing and planning mechanisms
achieving accurate, multi-step resolutions for complex queries.
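The router's job of selecting agents by knowledge boundary can be caricatured with keyword sets. RopMura's actual routing is learned, so everything below (agent names, keyword sets, the scoring rule) is a hypothetical stand-in:

```python
# Knowledge boundaries as keyword sets: a crude stand-in for RopMura's
# learned routing; all agent names and keywords are hypothetical.
agents = {
    "medical": {"symptom", "drug", "dosage", "diagnosis"},
    "legal":   {"contract", "liability", "statute", "court"},
}

def route(query):
    """Send the query to the agent whose knowledge boundary it overlaps most."""
    words = set(query.lower().split())
    return max(agents, key=lambda a: len(agents[a] & words))

choice = route("What dosage of this drug is safe?")
```

A planner would sit in front of this router, decomposing a multi-hop question into single-hop steps and routing each step to the agent best placed to answer it.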
|
2501.07814
|
STTS-EAD: Improving Spatio-Temporal Learning Based Time Series
Prediction via Embedded Anomaly Detection
|
cs.LG cs.AI
|
Handling anomalies is a critical preprocessing step in multivariate time
series prediction. However, existing approaches that separate anomaly
preprocessing from model training for multivariate time series prediction
encounter significant limitations. Specifically, these methods fail to utilize
auxiliary information crucial for identifying latent anomalies associated with
spatiotemporal factors during the preprocessing stage. Instead, they rely
solely on data distribution for anomaly detection, which can result in the
incorrect processing of numerous samples that could otherwise contribute
positively to model training. To address this, we propose STTS-EAD, an
end-to-end method that seamlessly integrates anomaly detection into the
training process of multivariate time series forecasting and aims to improve
Spatio-Temporal learning based Time Series prediction via Embedded Anomaly
Detection. Our proposed STTS-EAD leverages spatio-temporal information for
forecasting and anomaly detection, with the two parts alternately executed and
optimized for each other. To the best of our knowledge, STTS-EAD is the first
to integrate anomaly detection and forecasting tasks in the training phase for
improving the accuracy of multivariate time series forecasting. Extensive
experiments on a public stock dataset and two real-world sales datasets from a
renowned coffee chain enterprise show that our proposed method can effectively
process detected anomalies in the training stage to improve forecasting
performance in the inference stage and significantly outperform baselines.
|
2501.07815
|
Agent-Centric Projection of Prompting Techniques and Implications for
Synthetic Training Data for Large Language Models
|
cs.AI cs.CL cs.MA
|
Recent advances in prompting techniques and multi-agent systems for Large
Language Models (LLMs) have produced increasingly complex approaches. However,
we lack a framework for characterizing and comparing prompting techniques or
understanding their relationship to multi-agent LLM systems. This position
paper introduces and explains the concepts of linear contexts (a single,
continuous sequence of interactions) and non-linear contexts (branching or
multi-path) in LLM systems. These concepts enable the development of an
agent-centric projection of prompting techniques, a framework that can reveal
deep connections between prompting strategies and multi-agent systems. We
propose three conjectures based on this framework: (1) results from non-linear
prompting techniques can predict outcomes in equivalent multi-agent systems,
(2) multi-agent system architectures can be replicated through single-LLM
prompting techniques that simulate equivalent interaction patterns, and (3)
these equivalences suggest novel approaches for generating synthetic training
data. We argue that this perspective enables systematic cross-pollination of
research findings between prompting and multi-agent domains, while providing
new directions for improving both the design and training of future LLM
systems.
|
2501.07818
|
A Multi-Encoder Frozen-Decoder Approach for Fine-Tuning Large Language
Models
|
cs.CL cs.AI cs.LG
|
Among parameter-efficient fine-tuning methods, freezing has emerged as a
popular strategy for speeding up training, reducing catastrophic forgetting,
and improving downstream performance. We investigate the impact of freezing the
decoder in a multi-task setup comprising diverse natural language tasks, aiming
to reduce deployment overhead and enhance portability to novel tasks. Our
experiments, conducted by fine-tuning both individual and multi-task setups on
the AlexaTM model, reveal that freezing decoders is highly effective for tasks
with natural language outputs and mitigates catastrophic forgetting in
multilingual tasks. However, we find that pairing frozen decoders with a larger
model can effectively maintain or even enhance performance in structured and QA
tasks, making it a viable strategy for a broader range of task types.
|
2501.07819
|
3UR-LLM: An End-to-End Multimodal Large Language Model for 3D Scene
Understanding
|
cs.CV
|
Multi-modal Large Language Models (MLLMs) exhibit impressive capabilities in
2D tasks, yet encounter challenges in discerning the spatial positions,
interrelations, and causal logic in scenes when transitioning from 2D to 3D
representations. We find that the limitations mainly lie in: i) the high
annotation cost restricting the scale-up of volumes of 3D scene data, and ii)
the lack of a straightforward and effective way to perceive 3D information
which results in prolonged training durations and complicates the streamlined
framework. To this end, we develop a pipeline based on open-source 2D MLLMs and
LLMs to generate high-quality 3D-text pairs and construct 3DS-160K to enhance
the pre-training process. Leveraging this high-quality pre-training data, we
introduce the 3UR-LLM model, an end-to-end 3D MLLM designed for precise
interpretation of 3D scenes, showcasing exceptional capability in navigating
the complexities of the physical world. 3UR-LLM directly receives 3D point
clouds as input and projects 3D features fused with text instructions into a
manageable set of tokens. Considering the computation burden derived from these
hybrid tokens, we design a 3D compressor module to cohesively compress the 3D
spatial cues and textual narrative. 3UR-LLM achieves promising performance with
respect to the previous SOTAs, for instance, 3UR-LLM exceeds its counterparts
by 7.1\% CIDEr on ScanQA, while utilizing fewer training resources. The code
and model weights for 3UR-LLM and the 3DS-160K benchmark are available at
3UR-LLM.
|
2501.07824
|
Real-time Verification and Refinement of Language Model Text Generation
|
cs.CL cs.AI cs.LG
|
Large language models (LLMs) have shown remarkable performance across a wide
range of natural language tasks. However, a critical challenge remains in that
they sometimes generate factually incorrect answers. While much previous work
has addressed this by identifying errors in LLM generations and refining them,
such methods are slow to deploy since they verify a response only after its
entire generation (from the first to the last token) is complete. Further, we
observe that once LLMs generate
incorrect tokens early on, there is a higher likelihood that subsequent tokens
will also be factually incorrect. To this end, in this work, we propose
Streaming-VR (Streaming Verification and Refinement), a novel approach designed
to enhance the efficiency of verification and refinement of LLM outputs.
Specifically, the proposed Streaming-VR enables on-the-fly verification and
correction of tokens as they are being generated, similar to a streaming
process, ensuring that each subset of tokens is checked and refined in
real-time by another LLM as the LLM constructs its response. Through
comprehensive evaluations on multiple datasets, we demonstrate that our
approach not only enhances the factual accuracy of LLMs, but also offers a more
efficient solution compared to prior refinement methods.
|
2501.07827
|
Prediction Interval Construction Method for Electricity Prices
|
cs.LG cs.SY eess.SY
|
Accurate prediction of electricity prices plays an essential role in the
electricity market. To reflect the uncertainty of electricity prices, price
intervals are predicted. This paper proposes a novel prediction interval
construction method. A conditional generative adversarial network is first
presented to generate electricity price scenarios, with which the prediction
intervals can be constructed. Then, different generated scenarios are stacked
to obtain the probability densities, which can be applied to accurately reflect
the uncertainty of electricity prices. Furthermore, a reinforced prediction
mechanism based on the volatility level of weather factors is introduced to
address the spikes or volatile prices. A case study is conducted to verify the
effectiveness of the proposed novel prediction interval construction method.
The method also provides the probability density of each price scenario
within the prediction interval and, with the reinforced prediction mechanism,
is better able to address volatile prices and price spikes.
|
2501.07832
|
Low-Contact Grasping of Soft Tissue with Complex Geometry using a Vortex
Gripper
|
cs.RO
|
Soft tissue manipulation is an integral aspect of most surgical procedures;
however, the vast majority of surgical graspers used today are made of hard
materials, such as metals or hard plastics. Furthermore, these graspers
predominately function by pinching tissue between two hard objects as a method
for tissue manipulation. As such, the potential to apply too much force during
contact, and thus damage tissue, is inherently high. As an alternative
approach, graspers based on a pneumatic vortex could potentially levitate
soft tissue, enabling manipulation with low or even no contact force. In this
paper, we present the design, as well as a full factorial study, of the force
characteristics of the vortex gripper grasping soft surfaces with four common
shapes, with convex and concave curvature, and ranging over 10 different radii
of curvature, for a total of 40 unique surfaces. By changing the parameters of
the nozzle elements in the design of the gripper, it was possible to
investigate the influence of the mass flow parameters of the vortex gripper on
the lifting force for all of these different soft surfaces. An $\pmb{ex}$
$\pmb{vivo}$ experiment was conducted on grasping biological tissues and soft
balls of various shapes to show the advantages and disadvantages of the
proposed technology. The obtained results allowed us to identify limitations of
vortex technology and the next stages of its improvement for medical use.
|
2501.07834
|
Flow: A Modular Approach to Automated Agentic Workflow Generation
|
cs.AI cs.LG cs.MA
|
Multi-agent frameworks powered by large language models (LLMs) have
demonstrated great success in automated planning and task execution. However,
the effective adjustment of agentic workflows during execution has not been
well studied. Effective workflow adjustment is crucial, as in many real-world
scenarios the initial plan must adapt to unforeseen challenges and changing
conditions in real time to ensure the efficient execution of complex tasks. In
this paper, we define workflows as activity-on-vertex (AOV) graphs. We
continuously refine the workflow by dynamically adjusting task allocations
based on historical performance and previous AOV graphs, using LLM agents.
enhance system performance, we emphasize modularity in workflow design based on
measuring parallelism and dependence complexity. Our proposed multi-agent
framework achieved efficient sub-task concurrent execution, goal achievement,
and error tolerance. Empirical results across different practical tasks
demonstrate dramatic improvements in the efficiency of multi-agent frameworks
through dynamic workflow updating and modularization.
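The AOV formulation above can be sketched minimally: tasks are vertices, dependencies are edges, and at each step every task whose predecessors have completed may execute concurrently. All names below are illustrative, not the paper's implementation.

```python
def ready_tasks(deps, done):
    """Return tasks whose prerequisites are all completed (AOV semantics)."""
    return [t for t, pre in deps.items()
            if t not in done and all(p in done for p in pre)]

def execute_workflow(deps):
    """Run an AOV workflow layer by layer; each layer could run concurrently."""
    done, layers = set(), []
    while len(done) < len(deps):
        layer = ready_tasks(deps, done)
        if not layer:  # no runnable task left: the graph contains a cycle
            raise ValueError("workflow graph contains a cycle")
        layers.append(sorted(layer))
        done.update(layer)
    return layers

# Toy workflow: research and outline can run in parallel, then write, then review.
deps = {"research": [], "outline": [], "write": ["research", "outline"],
        "review": ["write"]}
print(execute_workflow(deps))  # [['outline', 'research'], ['write'], ['review']]
```

Dynamic refinement would amount to an LLM agent editing `deps` between layers based on observed task outcomes.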
|
2501.07837
|
A Driver Advisory System Based on Large Language Model for High-speed
Train
|
cs.AI
|
With the rapid development of China's high-speed railway, drivers face
increasingly significant technical challenges during operations, such as fault
handling. Currently, drivers depend on the onboard mechanic when facing
technical issues, for instance, traction loss or sensor faults. This dependency
can hinder effective operation, and even lead to accidents, while waiting for
faults to be addressed. To enhance the accuracy and explainability of actions
during fault handling, an Intelligent Driver Advisory System (IDAS) framework
based on a large language model (LLM) named IDAS-LLM, is introduced. Initially,
domain-fine-tuning of the LLM is performed using a constructed railway
knowledge question-and-answer dataset to improve answer accuracy in
railway-related questions. Subsequently, integration of the Retrieval-augmented
Generation (RAG) architecture is pursued for system design to enhance the
explainability of generated responses. Comparative experiments are conducted
using the constructed railway driving knowledge assessment dataset. Results
indicate that domain-fine-tuned LLMs show an improvement in answer accuracy by
an average of 10%, outperforming some current mainstream LLMs. Additionally,
the inclusion of the RAG framework increases the average recall rate of
question-and-answer sessions by about 4%. Finally, the fault handling
capability of IDAS-LLM is demonstrated through simulations of real operational
scenarios, proving that the proposed framework has practical application
prospects.
|
2501.07839
|
Social Media Data Mining With Natural Language Processing on Public
Dream Contents
|
cs.CY cs.AI cs.CL cs.SI
|
The COVID-19 pandemic has significantly transformed global lifestyles,
enforcing physical isolation and accelerating digital adoption for work,
education, and social interaction. This study examines the pandemic's impact on
mental health by analyzing dream content shared on the Reddit r/Dreams
community. With over 374,000 subscribers, this platform offers a rich dataset
for exploring subconscious responses to the pandemic. Using statistical
methods, we assess shifts in dream positivity, negativity, and neutrality from
the pre-pandemic to post-pandemic era. To enhance our analysis, we fine-tuned
the LLaMA 3.1-8B model with labeled data, enabling precise sentiment
classification of dream content. Our findings aim to uncover patterns in dream
content, providing insights into the psychological effects of the pandemic and
its influence on subconscious processes. This research highlights the profound
changes in mental landscapes and the role of dreams as indicators of public
well-being during unprecedented times.
|
2501.07845
|
Reasoning with Graphs: Structuring Implicit Knowledge to Enhance LLMs
Reasoning
|
cs.CL
|
Large language models (LLMs) have demonstrated remarkable success across a
wide range of tasks; however, they still encounter challenges in reasoning
tasks that require understanding and inferring relationships between distinct
pieces of information within text sequences. This challenge is particularly
pronounced in tasks involving multi-step processes, such as logical reasoning
and multi-hop question answering, where understanding implicit relationships
between entities and leveraging multi-hop connections in the given context are
crucial. Graphs, as fundamental data structures, explicitly represent pairwise
relationships between entities, thereby offering the potential to enhance LLMs'
reasoning capabilities. External graphs have proven effective in supporting
LLMs across multiple tasks. However, in many reasoning tasks, no pre-existing
graph structure is provided. Can we structure implicit knowledge derived from
context into graphs to assist LLMs in reasoning? In this paper, we propose
Reasoning with Graphs (RwG) by first constructing explicit graphs from the
context and then leveraging these graphs to enhance LLM reasoning performance
on reasoning tasks. Extensive experiments demonstrate the effectiveness of the
proposed method in improving both logical reasoning and multi-hop question
answering tasks.
|
2501.07849
|
Unveiling Provider Bias in Large Language Models for Code Generation
|
cs.SE cs.AI cs.CR
|
Large Language Models (LLMs) have emerged as the new recommendation engines,
outperforming traditional methods in both capability and scope, particularly in
code generation applications. Our research reveals a novel provider bias in
LLMs: namely, without explicit input prompts, these models show systematic
preferences for services from specific providers in their recommendations
(e.g., favoring Google Cloud over Microsoft Azure). This bias holds significant
implications for market dynamics and societal equilibrium, potentially
promoting digital monopolies. It may also deceive users and violate their
expectations, leading to various consequences. This paper presents the first
comprehensive empirical study of provider bias in LLM code generation. We
develop a systematic methodology encompassing an automated pipeline for dataset
generation, incorporating 6 distinct coding task categories and 30 real-world
application scenarios. Our analysis encompasses over 600,000 LLM-generated
responses across seven state-of-the-art models, utilizing approximately 500
million tokens (equivalent to \$5,000+ in computational costs). The study
evaluates both the generated code snippets and their embedded service provider
selections to quantify provider bias. Additionally, we conduct a comparative
analysis of seven debiasing prompting techniques to assess their efficacy in
mitigating these biases. Our findings demonstrate that LLMs exhibit significant
provider preferences, predominantly favoring services from Google and Amazon,
and can autonomously modify input code to incorporate their preferred providers
without users' requests. Notably, we observe discrepancies between providers
recommended in conversational contexts versus those implemented in generated
code. The complete dataset and analysis results are available in our
repository.
|
2501.07850
|
An Intra- and Cross-frame Topological Consistency Scheme for
Semi-supervised Atherosclerotic Coronary Plaque Segmentation
|
eess.IV cs.CV cs.LG
|
Enhancing the precision of segmenting coronary atherosclerotic plaques from
CT Angiography (CTA) images is pivotal for advanced Coronary Atherosclerosis
Analysis (CAA), which distinctively relies on the analysis of vessel
cross-section images reconstructed via Curved Planar Reformation. This task
presents significant challenges due to the indistinct boundaries and structures
of plaques and blood vessels, leading to the inadequate performance of current
deep learning models, compounded by the inherent difficulty in annotating such
complex data. To address these issues, we propose a novel dual-consistency
semi-supervised framework that integrates Intra-frame Topological Consistency
(ITC) and Cross-frame Topological Consistency (CTC) to leverage labeled and
unlabeled data. ITC employs a dual-task network for simultaneous segmentation
mask and Skeleton-aware Distance Transform (SDT) prediction, achieving similar
prediction of topology structure through consistency constraint without
additional annotations. Meanwhile, CTC utilizes an unsupervised estimator for
analyzing pixel flow between skeletons and boundaries of adjacent frames,
ensuring spatial continuity. Experiments on two CTA datasets show that our
method surpasses existing semi-supervised methods and approaches the
performance of supervised methods on CAA. In addition, our method also performs
better than other methods on the ACDC dataset, demonstrating its
generalization.
|
2501.07853
|
Optimizing Language Models for Grammatical Acceptability: A Comparative
Study of Fine-Tuning Techniques
|
cs.CL cs.AI
|
This study explores the fine-tuning (FT) of the Open Pre-trained Transformer
(OPT-125M) for grammatical acceptability tasks using the CoLA dataset. By
comparing Vanilla-Fine-Tuning (VFT), Pattern-Based-Fine-Tuning (PBFT), and
Parameter-Efficient Fine-Tuning techniques (PEFT) like Low-Rank Adaptation
(LoRA), we demonstrate significant improvements in computational efficiency
while maintaining high accuracy. Our experiments reveal that while VFT achieves
the highest accuracy (81.2%), LoRA enhances FT by reducing memory usage and
iteration time by more than 50%, and increases accuracy in the PBFT case. Context
Distillation (CD), though computationally efficient, underperformed with
accuracy around 31%. Our findings contribute to democratizing access to large
language models (LLM) by reducing computational barriers.
|
2501.07855
|
State-of-the-Art Transformer Models for Image Super-Resolution:
Techniques, Challenges, and Applications
|
cs.CV cs.AI cs.ET cs.LG cs.NE
|
Image Super-Resolution (SR) aims to recover a high-resolution image from its
low-resolution counterpart, which has been affected by a specific degradation
process. This is achieved by enhancing detail and visual quality. Recent
advancements in transformer-based methods have reshaped image super-resolution
by enabling high-quality reconstructions that surpass previous deep-learning
approaches such as CNN- and GAN-based models. This effectively addresses the limitations
of previous methods, such as limited receptive fields, poor global context
capture, and challenges in high-frequency detail recovery. Additionally, the
paper reviews recent trends and advancements in transformer-based SR models,
exploring various innovative techniques and architectures that combine
transformers with traditional networks to balance global and local contexts.
These neoteric methods are critically analyzed, revealing promising yet
unexplored gaps and potential directions for future research. Several
visualizations of models and techniques are included to foster a holistic
understanding of recent trends. This work seeks to offer a structured roadmap
for researchers at the forefront of deep learning, specifically exploring the
impact of transformers on super-resolution techniques.
|
2501.07857
|
Hierarchical Repository-Level Code Summarization for Business
Applications Using Local LLMs
|
cs.SE cs.AI
|
In large-scale software development, understanding the functionality and
intent behind complex codebases is critical for effective development and
maintenance. While code summarization has been widely studied, existing methods
primarily focus on smaller code units, such as functions, and struggle with
larger code artifacts like files and packages. Additionally, current
summarization models tend to emphasize low-level implementation details, often
overlooking the domain and business context that are crucial for real-world
applications. This paper proposes a two-step hierarchical approach for
repository-level code summarization, tailored to business applications. First,
smaller code units such as functions and variables are identified using syntax
analysis and summarized with local LLMs. These summaries are then aggregated to
generate higher-level file and package summaries. To ensure the summaries are
grounded in business context, we design custom prompts that capture the
intended purpose of code artifacts based on the domain and problem context of
the business application. We evaluate our approach on a business support system
(BSS) for the telecommunications domain, showing that syntax analysis-based
hierarchical summarization improves coverage, while business-context grounding
enhances the relevance of the generated summaries.
|
2501.07859
|
deepTerra -- AI Land Classification Made Easy
|
cs.CV cs.AI cs.LG
|
deepTerra is a comprehensive platform designed to facilitate the
classification of land surface features using machine learning and satellite
imagery. The platform includes modules for data collection, image augmentation,
training, testing, and prediction, streamlining the entire workflow for image
classification tasks. This paper presents a detailed overview of the
capabilities of deepTerra, shows how it has been applied to various research
areas, and discusses the future directions it might take.
|
2501.07861
|
ReARTeR: Retrieval-Augmented Reasoning with Trustworthy Process
Rewarding
|
cs.CL
|
Retrieval-Augmented Generation (RAG) systems for Large Language Models (LLMs)
hold promise in knowledge-intensive tasks but face limitations in complex
multi-step reasoning. While recent methods have integrated RAG with
chain-of-thought reasoning or test-time search using Process Reward Models
(PRMs), these approaches encounter challenges such as a lack of explanations,
bias in PRM training data, early-step bias in PRM scores, and insufficient
post-training optimization of reasoning potential. To address these issues, we
propose Retrieval-Augmented Reasoning through Trustworthy Process Rewarding
(ReARTeR), a framework that enhances RAG systems' reasoning capabilities
through post-training and test-time scaling. At test time, ReARTeR introduces
Trustworthy Process Rewarding via a Process Reward Model for accurate scalar
scoring and a Process Explanation Model (PEM) for generating natural language
explanations, enabling step refinement. During post-training, it utilizes Monte
Carlo Tree Search guided by Trustworthy Process Rewarding to collect
high-quality step-level preference data, optimized through Iterative Preference
Optimization. ReARTeR addresses three core challenges: (1) misalignment between
PRM and PEM, tackled through off-policy preference learning; (2) bias in PRM
training data, mitigated by balanced annotation methods and stronger
annotations for challenging examples; and (3) early-step bias in PRM, resolved
through a temporal-difference-based look-ahead search strategy. Experimental
results on multi-step reasoning benchmarks demonstrate significant
improvements, underscoring ReARTeR's potential to advance the reasoning
capabilities of RAG systems.
|
2501.07870
|
Make-A-Character 2: Animatable 3D Character Generation From a Single
Image
|
cs.CV
|
This report introduces Make-A-Character 2, an advanced system for generating
high-quality 3D characters from single portrait photographs, ideal for game
development and digital human applications. Make-A-Character 2 builds upon its
predecessor by incorporating several significant improvements for image-based
head generation. We utilize the IC-Light method to correct non-ideal
illumination in input photos and apply neural network-based color correction to
harmonize skin tones between the photos and game engine renders. We also employ
the Hierarchical Representation Network to capture high-frequency facial
structures and conduct adaptive skeleton calibration for accurate and
expressive facial animations. The entire image-to-3D-character generation
process takes less than 2 minutes. Furthermore, we leverage transformer
architecture to generate co-speech facial and gesture actions, enabling
real-time conversation with the generated character. These technologies have
been integrated into our conversational AI avatar products.
|
2501.07875
|
Continual Learning with Embedding Layer Surgery and Task-wise Beam
Search using Whisper
|
cs.CL cs.AI
|
Current Multilingual ASR models only support a fraction of the world's
languages. Continual Learning (CL) aims to tackle this problem by adding new
languages to pre-trained models while avoiding the loss of performance on
existing languages, also known as Catastrophic Forgetting (CF). However,
existing CL methods overlook the adaptation of the token embedding lookup table
at the decoder, despite its significant contribution to CF. We propose
Embedding Layer Surgery, where separate copies of the token embeddings are
created for each new language, and one of the copies is selected to replace
the old languages' embeddings when transcribing the corresponding new language.
Unfortunately, this approach means LID errors also cause incorrect ASR
embedding selection. Our Task-wise Beam Search allows self-correction for such
mistakes. By adapting Whisper to 10 hours of data for each of 10 unseen
languages from Common Voice, results show that our method reduces the Average
WER (AWER) of pre-trained languages from 14.2% to 11.9% compared with
Experience Replay, without compromising the AWER of the unseen languages.
|
2501.07879
|
Distributed Nonparametric Estimation: from Sparse to Dense Samples per
Terminal
|
cs.LG cs.IT math.IT math.ST stat.TH
|
Consider the communication-constrained problem of nonparametric function
estimation, in which each distributed terminal holds multiple i.i.d. samples.
Under certain regularity assumptions, we characterize the minimax optimal rates
for all regimes, and identify phase transitions of the optimal rates as the
samples per terminal vary from sparse to dense. This fully solves the problem
left open by previous works, whose scopes are limited to regimes with either
dense samples or a single sample per terminal. To achieve the optimal rates, we
design a layered estimation protocol by exploiting protocols for the parametric
density estimation problem. We show the optimality of the protocol using
information-theoretic methods and strong data processing inequalities, and
incorporating the classic balls and bins model. The optimal rates are immediate
for various special cases such as density estimation, Gaussian, binary, Poisson
and heteroskedastic regression models.
|
2501.07884
|
MD-Syn: Synergistic drug combination prediction based on the
multidimensional feature fusion method and attention mechanisms
|
cs.LG q-bio.QM
|
Drug combination therapies have shown promising therapeutic efficacy in
complex diseases and have demonstrated the potential to reduce drug resistance.
However, the huge number of possible drug combinations makes it difficult to
screen them all in traditional experiments. In this study, we proposed MD-Syn,
a computational framework, which is based on the multidimensional feature
fusion method and multi-head attention mechanisms. Given drug pair-cell line
triplets, MD-Syn considers one-dimensional and two-dimensional feature spaces
simultaneously. It consists of a one-dimensional feature embedding module
(1D-FEM), a two-dimensional feature embedding module (2D-FEM), and a deep
neural network-based classifier for synergistic drug combination prediction.
MD-Syn achieved an AUROC of 0.919 in 5-fold cross-validation, outperforming
state-of-the-art methods. Further, MD-Syn showed comparable results on
two independent datasets. In addition, the multi-head attention mechanisms not
only learn embeddings from different feature aspects but also focus on
essential interactive feature elements, improving the interpretability of
MD-Syn. In summary, MD-Syn is an interpretable framework to prioritize
synergistic drug combination pairs with chemicals and cancer cell line gene
expression profiles. To facilitate broader community access to this model, we
have developed a web portal (https://labyeh104-2.life.nthu.edu.tw/) that
enables customized predictions of drug combination synergy effects based on
user-specified compounds.
|
2501.07885
|
Mitigating Algorithmic Bias in Multiclass CNN Classifications Using
Causal Modeling
|
cs.LG cs.CV
|
This study describes a procedure for applying causal modeling to detect and
mitigate algorithmic bias in a multiclass classification problem. The dataset
was derived from the FairFace dataset, supplemented with emotional labels
generated by the DeepFace pre-trained model. A custom Convolutional Neural
Network (CNN) was developed, consisting of four convolutional blocks, followed
by fully connected layers and dropout layers to mitigate overfitting. Gender
bias was identified in the CNN model's classifications: Females were more
likely to be classified as "happy" or "sad," while males were more likely to be
classified as "neutral." To address this, the one-vs-all (OvA) technique was
applied. A causal model was constructed for each emotion class to adjust the
CNN model's predicted class probabilities. The adjusted probabilities for the
various classes were then aggregated by selecting the class with the highest
probability. The resulting debiased classifications demonstrated enhanced
gender fairness across all classes, with negligible impact--or even a slight
improvement--on overall accuracy. This study highlights that algorithmic
fairness and accuracy are not necessarily trade-offs. All data and code for
this study are publicly available for download.
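The one-vs-all adjustment described above can be sketched as follows. The per-group correction factors here are illustrative placeholders, not the outputs of the fitted causal models: each class probability is scaled by its group-specific factor, the distribution is renormalized, and the highest-probability class is selected.

```python
import numpy as np

CLASSES = ["happy", "sad", "neutral"]
# Hypothetical per-group, per-class correction factors that a causal model
# might estimate (down-weighting "happy"/"sad" for females, "neutral" for males).
CORRECTION = {"female": np.array([0.85, 0.90, 1.20]),
              "male":   np.array([1.15, 1.10, 0.80])}

def debiased_prediction(probs, group):
    """Adjust each class probability one-vs-all, renormalize, take the argmax."""
    adjusted = probs * CORRECTION[group]   # per-class (OvA) adjustment
    adjusted /= adjusted.sum()             # renormalize to a distribution
    return CLASSES[int(np.argmax(adjusted))], adjusted

# A borderline "happy" prediction for a female subject flips to "neutral".
label, p = debiased_prediction(np.array([0.40, 0.25, 0.35]), "female")
```

In the study, one causal model per emotion class supplies these adjustments; the sketch collapses that into a single factor per class.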
|
2501.07886
|
Iterative Label Refinement Matters More than Preference Optimization
under Weak Supervision
|
cs.LG cs.AI cs.CL
|
Language model (LM) post-training relies on two stages of human supervision:
task demonstrations for supervised finetuning (SFT), followed by preference
comparisons for reinforcement learning from human feedback (RLHF). As LMs
become more capable, the tasks they are given become harder to supervise. Will
post-training remain effective under unreliable supervision? To test this, we
simulate unreliable demonstrations and comparison feedback using small LMs and
time-constrained humans. We find that in the presence of unreliable
supervision, SFT still retains some effectiveness, but DPO (a common RLHF
algorithm) fails to improve the model beyond SFT. To address this, we propose
iterative label refinement (ILR) as an alternative to RLHF. ILR improves the
SFT data by using comparison feedback to decide whether human demonstrations
should be replaced by model-generated alternatives, then retrains the model via
SFT on the updated data. SFT+ILR outperforms SFT+DPO on several tasks with
unreliable supervision (math, coding, and safe instruction-following). Our
findings suggest that as LMs are used for complex tasks where human supervision
is unreliable, RLHF may no longer be the best use of human comparison feedback;
instead, it is better to direct feedback towards improving the training data
rather than continually training the model. Our code and data are available at
https://github.com/helloelwin/iterative-label-refinement.
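The ILR loop can be caricatured in a few lines. Everything below is a toy stand-in, not the paper's implementation: "labels" are numbers (higher means a better demonstration), `generate` is the current model writing an alternative, `prefer` is the unreliable comparison feedback, and fitting the mean stands in for SFT.

```python
# Toy sketch of iterative label refinement (ILR): use comparison feedback to
# decide whether each demonstration should be replaced by a model-generated
# alternative, then "retrain" (here: refit the mean) on the updated data.
import random

random.seed(0)

def generate(model):
    return model + random.gauss(0, 0.1)          # model-written alternative

def prefer(a, b, noise=0.2):
    """Unreliable comparison: picks the better label except with prob. noise."""
    better, worse = (a, b) if a > b else (b, a)
    return worse if random.random() < noise else better

def ilr_round(dataset, model):
    # Decide, per example, whether to replace the demonstration.
    return [prefer(generate(model), y) for y in dataset]

def sft(dataset):
    return sum(dataset) / len(dataset)           # "training" = fit the mean

dataset = [0.3, 0.4, 0.2, 0.5]                   # unreliable demonstrations
model = sft(dataset)
for _ in range(5):                               # ILR: refine data, re-run SFT
    dataset = ilr_round(dataset, model)
    model = sft(dataset)
```

The key structural point survives the caricature: the feedback edits the training data rather than directly updating the model, and the model is always produced by plain SFT on the current dataset.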
|
2501.07888
|
Tarsier2: Advancing Large Vision-Language Models from Detailed Video
Description to Comprehensive Video Understanding
|
cs.CV cs.AI
|
We introduce Tarsier2, a state-of-the-art large vision-language model (LVLM)
designed for generating detailed and accurate video descriptions, while also
exhibiting superior general video understanding capabilities. Tarsier2 achieves
significant advancements through three key upgrades: (1) Scaling pre-training
data from 11M to 40M video-text pairs, enriching both volume and diversity; (2)
Performing fine-grained temporal alignment during supervised fine-tuning; (3)
Using model-based sampling to automatically construct preference data and
applying DPO training for optimization. Extensive experiments show that
Tarsier2-7B consistently outperforms leading proprietary models, including
GPT-4o and Gemini 1.5 Pro, in detailed video description tasks. On the DREAM-1K
benchmark, Tarsier2-7B improves F1 by 2.8% over GPT-4o and 5.8% over
Gemini-1.5-Pro. In human side-by-side evaluations, Tarsier2-7B shows a +8.6%
performance advantage over GPT-4o and +24.9% over Gemini-1.5-Pro. Tarsier2-7B
also sets new state-of-the-art results across 15 public benchmarks, spanning
tasks such as video question-answering, video grounding, hallucination test,
and embodied question-answering, demonstrating its versatility as a robust
generalist vision-language model.
|
2501.07890
|
GRAPHMOE: Amplifying Cognitive Depth of Mixture-of-Experts Network via
Introducing Self-Rethinking Mechanism
|
cs.CL cs.AI
|
Traditional Mixture-of-Experts (MoE) networks benefit from utilizing multiple
smaller expert models as opposed to a single large network. However, these
experts typically operate independently, leaving open the question of whether
interconnecting these models could enhance the performance of MoE networks. In
response, we introduce GRAPHMOE, a novel method aimed at augmenting the
cognitive depth of language models via a self-rethinking mechanism constructed
on Pseudo GraphMoE networks. GRAPHMOE employs a recurrent routing strategy to
simulate iterative thinking steps, thereby facilitating the flow of information
among expert nodes. We implement the GRAPHMOE architecture using Low-Rank
Adaptation techniques (LoRA) and conduct extensive experiments on various
benchmark datasets. The experimental results reveal that GRAPHMOE outperforms
other LoRA-based models, achieving state-of-the-art (SOTA) performance.
Additionally, this study explores a novel recurrent routing strategy that may
inspire further advancements in enhancing the reasoning capabilities of
language models.
|
2501.07892
|
Leveraging Metamemory Mechanisms for Enhanced Data-Free Code Generation
in LLMs
|
cs.SE cs.AI
|
Automated code generation using large language models (LLMs) has gained
attention due to its efficiency and adaptability. However, real-world coding
tasks or benchmarks like HumanEval and StudentEval often lack dedicated
training datasets, challenging existing few-shot prompting approaches that rely
on reference examples. Inspired by human metamemory--a cognitive process
involving recall and evaluation--we present a novel framework (namely M^2WF) for
improving LLMs' one-time code generation. This approach enables LLMs to
autonomously generate, evaluate, and utilize synthetic examples to enhance
reliability and performance. Unlike prior methods, it minimizes dependency on
curated data and adapts flexibly to various coding scenarios. Our experiments
demonstrate significant improvements in coding benchmarks, offering a scalable
and robust solution for data-free environments. The code and framework will be
publicly available on GitHub and HuggingFace.
|
2501.07896
|
Anytime Cooperative Implicit Hitting Set Solving
|
cs.AI
|
The Implicit Hitting Set (HS) approach has been shown to be very effective for
MaxSAT, Pseudo-Boolean optimization, and other Boolean frameworks. Very
recently, it has also shown its potential in the closely related Weighted CSP
framework by means of so-called cost-function merging. The original
formulation of the HS approach focuses on obtaining increasingly better lower
bounds (HS-lb). However, and as shown for Pseudo-Boolean Optimization, this
approach can also be adapted to compute increasingly better upper bounds
(HS-ub). In this paper we consider both HS approaches and show how they can be
easily combined in a multithreaded architecture where cores discovered by either
component are made available to the other, which, interestingly, generates
synergy between them. We show that the resulting algorithm (HS-lub) is
consistently superior to both HS-lb and HS-ub in isolation. Most importantly,
HS-lub has
an effective anytime behaviour with which the optimality gap is reduced during
the execution. We tested our approach on the Weighted CSP framework and show on
three different benchmarks that our very simple implementation sometimes
outperforms the parallel hybrid best-first search implementation of the far
more developed state-of-the-art Toulbar2.
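The lower-bound side of the HS scheme can be sketched as follows. This is a hedched toy under explicit assumptions: a hypothetical oracle simply reveals a fixed list of cores one at a time, and brute force stands in for the real minimum-cost hitting-set solver; the point is only that each revealed core tightens a nondecreasing lower bound.

```python
# Toy sketch of the implicit hitting set (HS-lb) loop: cores (sets of
# weighted elements) arrive one at a time, and the minimum-cost hitting set
# of all cores seen so far gives a nondecreasing lower bound on the optimum.
from itertools import combinations

def min_hitting_set_cost(cores, weights):
    """Brute-force minimum-cost hitting set (toy instance sizes only)."""
    elems = sorted({e for core in cores for e in core})
    best = sum(weights[e] for e in elems)        # hitting everything works
    for r in range(len(elems) + 1):
        for subset in combinations(elems, r):
            s = set(subset)
            if all(core & s for core in cores):
                best = min(best, sum(weights[e] for e in s))
    return best

weights = {"a": 3, "b": 1, "c": 2, "d": 1}
oracle_cores = [{"a", "b"}, {"c", "d"}, {"a", "d"}]  # revealed incrementally

seen, bounds = [], []
for core in oracle_cores:
    seen.append(core)
    bounds.append(min_hitting_set_cost(seen, weights))
```

In the multithreaded HS-lub setting of the abstract, cores like these would be shared between the lower-bound and upper-bound components as soon as either discovers them.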
|
2501.07898
|
Demographic Variability in Face Image Quality Measures
|
cs.CV
|
Face image quality assessment (FIQA) algorithms are being integrated into
online identity management applications. These applications allow users to
upload a face image as part of their document issuance process, where the image
is then run through a quality assessment process to make sure it meets the
quality and compliance requirements. Concerns have been raised about
demographic bias in biometric systems, given the societal implications it may
cause.
It is therefore important that demographic variability in FIQA algorithms is
assessed such that mitigation measures can be created. In this work, we study
the demographic variability of all face image quality measures included in the
ISO/IEC 29794-5 international standard across three demographic variables: age,
gender, and skin tone. The results are rather promising and show no clear bias
toward any specific demographic group for most measures. Only two quality
measures are found to have considerable variations in their outcomes for
different groups on the skin tone variable.
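A minimal sketch of the kind of per-group summary such a study computes, assuming invented scores and a single illustrative quality measure (the scores, group labels, and measure name below are not from the standard or the paper):

```python
# Illustrative only: summarize one face-image quality measure per demographic
# group and report the largest between-group gap as a crude variability signal.
from collections import defaultdict
from statistics import mean

def group_gap(records, group_key, score_key):
    """Return (per-group means, max difference between any two group means)."""
    by_group = defaultdict(list)
    for rec in records:
        by_group[rec[group_key]].append(rec[score_key])
    means = {g: mean(v) for g, v in by_group.items()}
    return means, max(means.values()) - min(means.values())

records = [
    {"skin_tone": "light", "sharpness": 0.82},
    {"skin_tone": "light", "sharpness": 0.78},
    {"skin_tone": "dark", "sharpness": 0.80},
    {"skin_tone": "dark", "sharpness": 0.76},
]
means, gap = group_gap(records, "skin_tone", "sharpness")
```

A small `gap` across all measures is the "rather promising" outcome the abstract reports; a large gap on a particular measure flags it for mitigation.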
|
2501.07901
|
Cloud Removal With PolSAR-Optical Data Fusion Using A Two-Flow Residual
Network
|
cs.CV eess.IV
|
Optical remote sensing images play a crucial role in the observation of the
Earth's surface. However, obtaining complete optical remote sensing images is
challenging due to cloud cover. Reconstructing cloud-free optical images has
become a major task in recent years. This paper presents a two-flow
Polarimetric Synthetic Aperture Radar (PolSAR)-Optical data fusion cloud
removal algorithm (PODF-CR), which achieves the reconstruction of missing
optical images. PODF-CR consists of an encoding module and a decoding module.
The encoding module includes two parallel branches that extract PolSAR image
features and optical image features. To address speckle noise in PolSAR images,
we introduce dynamic filters in the PolSAR branch for image denoising. To
better facilitate the fusion between multimodal optical images and PolSAR
images, we propose fusion blocks based on cross-skip connections to enable
interaction of multimodal data information. The obtained fusion features are
refined through an attention mechanism to provide better conditions for the
subsequent decoding of the fused images. In the decoding module, multi-scale
convolution is introduced to obtain multi-scale information. Additionally, to
better utilize comprehensive scattering information and polarization
characteristics to assist in the restoration of optical images, we use a
dataset for cloud restoration called OPT-BCFSAR-PFSAR, which includes
backscatter coefficient feature images and polarization feature images obtained
from PolSAR data and optical images. Experimental results demonstrate that this
method outperforms existing methods in both qualitative and quantitative
evaluations.
|
2501.07903
|
Optimal Classification Trees for Continuous Feature Data Using Dynamic
Programming with Branch-and-Bound
|
cs.LG cs.AI cs.DS
|
Computing an optimal classification tree that provably maximizes training
performance within a given size limit is NP-hard, and in practice, most
state-of-the-art methods do not scale beyond computing optimal trees of depth
three. Therefore, most methods rely on a coarse binarization of continuous
features to maintain scalability. We propose a novel algorithm that optimizes
trees directly on the continuous feature data using dynamic programming with
branch-and-bound. We develop new pruning techniques that eliminate many
sub-optimal splits from the search when they are similar to previously computed
splits, and we provide an efficient subroutine for computing optimal depth-two
trees. Our
experiments demonstrate that these techniques improve runtime by one or more
orders of magnitude over state-of-the-art optimal methods and improve test
accuracy by 5% over greedy heuristics.
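The depth-one case illustrates why no coarse binarization is needed: an optimal split on a continuous feature can be found exactly by sorting the values and scoring only the midpoints between consecutive distinct values. The sketch below assumes binary labels and training accuracy as the objective; the paper's depth-two subroutine builds on this idea.

```python
# Exact best axis-aligned split on one continuous feature (toy sketch).

def best_split(xs, ys):
    """Return (threshold, misclassifications) of the best split x <= t.
    ys must be 0/1 labels; each side predicts its majority class."""
    pts = sorted(zip(xs, ys))
    n, pos = len(pts), sum(ys)
    best_t, best_err = None, min(pos, n - pos)   # no split: majority vote
    left_pos = 0
    for i in range(n - 1):
        left_pos += pts[i][1]
        if pts[i][0] == pts[i + 1][0]:
            continue                              # not a valid threshold
        t = (pts[i][0] + pts[i + 1][0]) / 2
        left_n = i + 1
        err_left = min(left_pos, left_n - left_pos)
        right_pos = pos - left_pos
        err_right = min(right_pos, (n - left_n) - right_pos)
        if err_left + err_right < best_err:
            best_t, best_err = t, err_left + err_right
    return best_t, best_err

t, err = best_split([1.0, 2.0, 3.0, 4.0], [0, 0, 1, 1])
```

Scanning the sorted order keeps the running class counts incremental, so all candidate thresholds are scored in O(n log n) overall rather than re-counting per threshold.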
|
2501.07905
|
Logarithmic Memory Networks (LMNs): Efficient Long-Range Sequence
Modeling for Resource-Constrained Environments
|
cs.AI cs.LG
|
Long-range sequence modeling is a crucial aspect of natural language
processing and time series analysis. However, traditional models like Recurrent
Neural Networks (RNNs) and Transformers suffer from computational and memory
inefficiencies, especially when dealing with long sequences. This paper
introduces Logarithmic Memory Networks (LMNs), a novel architecture that
leverages a hierarchical logarithmic tree structure to efficiently store and
retrieve past information. LMNs dynamically summarize historical context,
significantly reducing the memory footprint and computational complexity of
attention mechanisms from O(n^2) to O(log(n)). The model employs a
single-vector, targeted attention mechanism to access stored information, and
the memory block construction worker (summarizer) layer operates in two modes:
a parallel execution mode during training for efficient processing of
hierarchical tree structures and a sequential execution mode during inference,
which acts as a memory management system. It also implicitly encodes positional
information, eliminating the need for explicit positional encodings. These
features make LMNs a robust and scalable solution for processing long-range
sequences in resource-constrained environments, offering practical improvements
in efficiency and scalability. The code is publicly available under the MIT
License on GitHub: https://github.com/AhmedBoin/LogarithmicMemory.
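The logarithmic storage idea can be sketched with a binary-counter scheme. This is an assumption-laden toy, not the paper's architecture: plain averaging stands in for the learned summarizer layer, and each slot holds one summary per power-of-two block, so n tokens occupy O(log n) slots.

```python
# Toy logarithmic memory: merge equal-sized summaries like carries in a
# binary counter, so the number of stored slots stays O(log n).

def push(slots, token):
    """slots: list of (block_size, summary); merge while sizes match."""
    size, summary = 1, float(token)
    while slots and slots[-1][0] == size:
        old_size, old_summary = slots.pop()
        summary = (old_summary * old_size + summary * size) / (old_size + size)
        size += old_size
    slots.append((size, summary))
    return slots

slots = []
for t in range(1, 1025):          # stream 1024 tokens
    push(slots, t)
```

After 2^k tokens the whole history collapses into a single summary slot; at any point during the stream the slot count is at most the popcount of the number of tokens seen, which is what makes a single-vector targeted attention over the slots cheap.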
|
2501.07911
|
Deep Learning and Natural Language Processing in the Field of
Construction
|
cs.AI
|
This article presents a complete process to extract hypernym relationships in
the field of construction using two main steps: terminology extraction and
detection of hypernyms from these terms. We first describe the corpus analysis
method to extract terminology from a collection of technical specifications in
the field of construction. Using statistics and word n-grams analysis, we
extract the domain's terminology and then perform pruning steps with linguistic
patterns and internet queries to improve the quality of the final terminology.
Second, we present a machine-learning approach based on various word embedding
models and their combinations to detect hypernyms from the
extracted terminology. The extracted terminology is evaluated manually by 6
experts in the domain, and the hypernym
identification method is evaluated with different datasets. The global approach
provides relevant and promising results.
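One ingredient of the embedding-based hypernym step can be sketched as cosine scoring over term vectors. The terms and 3-d vectors below are invented; a real system would use trained word embeddings and a learned classifier over combined features rather than raw similarity.

```python
# Toy sketch: rank candidate hypernyms of a term by cosine similarity
# between (invented) embedding vectors.
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

embeddings = {
    "beam":     [0.9, 0.1, 0.0],
    "girder":   [0.85, 0.15, 0.05],
    "material": [0.1, 0.9, 0.2],
}

def rank_hypernyms(term, candidates):
    return sorted(candidates,
                  key=lambda c: cosine(embeddings[term], embeddings[c]),
                  reverse=True)

ranking = rank_hypernyms("girder", ["beam", "material"])
```

Cosine similarity alone cannot distinguish hypernymy from mere relatedness, which is why the abstract combines several embedding models and evaluates against expert judgments.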
|
2501.07913
|
Governing AI Agents
|
cs.AI
|
The field of AI is undergoing a fundamental transition from generative models
that can produce synthetic content to artificial agents that can plan and
execute complex tasks with only limited human involvement. Companies that
pioneered the development of language models have now built AI agents that can
independently navigate the internet, perform a wide range of online tasks, and
increasingly serve as AI personal assistants and virtual coworkers. The
opportunities presented by this new technology are tremendous, as are the
associated risks. Fortunately, there exist robust analytic frameworks for
confronting many of these challenges, namely, the economic theory of
principal-agent problems and the common law doctrine of agency relationships.
Drawing on these frameworks, this Article makes three contributions. First, it
uses agency law and theory to identify and characterize problems arising from
AI agents, including issues of information asymmetry, discretionary authority,
and loyalty. Second, it illustrates the limitations of conventional solutions
to agency problems: incentive design, monitoring, and enforcement might not be
effective for governing AI agents that make uninterpretable decisions and
operate at unprecedented speed and scale. Third, the Article explores the
implications of agency law and theory for designing and regulating AI agents,
arguing that new technical and legal infrastructure is needed to support
governance principles of inclusivity, visibility, and liability.
|
2501.07919
|
Large Language Model Interface for Home Energy Management Systems
|
cs.AI
|
Home Energy Management Systems (HEMSs) help households tailor their
electricity usage based on power system signals such as energy prices. This
technology helps to reduce energy bills and offers greater demand-side
flexibility that supports the power system stability. However, residents who
lack a technical background may find it difficult to use HEMSs effectively,
because HEMSs require well-formatted parameterization that reflects the
characteristics of the energy resources, houses, and users' needs. Recently,
Large-Language Models (LLMs) have demonstrated an outstanding ability in
language understanding. Motivated by this, we propose an LLM-based interface
that interacts with users to understand and parameterize their
``badly-formatted answers'', and then outputs well-formatted parameters to
implement an HEMS. We further use the Reason and Act (ReAct) method and few-shot
prompting to enhance the LLM performance. Evaluating the interface performance
requires multiple user--LLM interactions. To avoid the effort of recruiting
volunteer users and to reduce the evaluation time, we additionally propose a
method that uses another LLM to simulate users with varying expertise, ranging
from knowledgeable to non-technical. By comprehensive evaluation, the proposed
LLM-based HEMS interface achieves an average parameter retrieval accuracy of
88\%, outperforming benchmark models without ReAct and/or few-shot prompting.
|
2501.07922
|
VENOM: Text-driven Unrestricted Adversarial Example Generation with
Diffusion Models
|
cs.CV
|
Adversarial attacks have proven effective in deceiving machine learning
models by subtly altering input images, motivating extensive research in recent
years. Traditional methods constrain perturbations within $l_p$-norm bounds,
but advancements in Unrestricted Adversarial Examples (UAEs) allow for more
complex, generative-model-based manipulations. Diffusion models now lead UAE
generation due to superior stability and image quality over GANs. However,
existing diffusion-based UAE methods are limited to using reference images and
face challenges in generating Natural Adversarial Examples (NAEs) directly from
random noise, often producing uncontrolled or distorted outputs. In this work,
we introduce VENOM, the first text-driven framework for high-quality
unrestricted adversarial examples generation through diffusion models. VENOM
unifies image content generation and adversarial synthesis into a single
reverse diffusion process, enabling high-fidelity adversarial examples without
sacrificing attack success rate (ASR). To stabilize this process, we
incorporate an adaptive adversarial guidance strategy with momentum, ensuring
that the generated adversarial examples $x^*$ align with the distribution
$p(x)$ of natural images. Extensive experiments demonstrate that VENOM achieves
superior ASR and image quality compared to prior methods, marking a significant
advancement in adversarial example generation and providing insights into model
vulnerabilities for improved defense development.
|
2501.07923
|
Aviation Safety Enhancement via NLP & Deep Learning: Classifying Flight
Phases in ATSB Safety Reports
|
cs.LG cs.CL
|
Aviation safety is paramount, demanding precise analysis of safety
occurrences during different flight phases. This study employs Natural Language
Processing (NLP) and Deep Learning models, including LSTM, CNN, Bidirectional
LSTM (BLSTM), and simple Recurrent Neural Networks (sRNN), to classify flight
phases in safety reports from the Australian Transport Safety Bureau (ATSB).
The models exhibited high accuracy, precision, recall, and F1 scores, with LSTM
achieving the highest performance of 87%, 88%, 87%, and 88%, respectively. This
performance highlights their effectiveness in automating safety occurrence
analysis. The integration of NLP and Deep Learning technologies promises
transformative enhancements in aviation safety analysis, enabling targeted
safety measures and streamlined report handling.
|
2501.07924
|
Exploring Aviation Incident Narratives Using Topic Modeling and
Clustering Techniques
|
cs.AI cs.CL
|
Aviation safety is a global concern, requiring detailed investigations into
incidents to understand contributing factors comprehensively. This study uses
the National Transportation Safety Board (NTSB) dataset. It applies advanced
natural language processing (NLP) techniques, including Latent Dirichlet
Allocation (LDA), Non-Negative Matrix Factorization (NMF), Latent Semantic
Analysis (LSA), Probabilistic Latent Semantic Analysis (pLSA), and K-means
clustering. The main objectives are identifying latent themes, exploring
semantic relationships, assessing probabilistic connections, and clustering
incidents based on shared characteristics. This research contributes to
aviation safety by providing insights into incident narratives and
demonstrating the versatility of NLP and topic modelling techniques in
extracting valuable information from complex datasets. The results, including
topics identified from various techniques, provide an understanding of
recurring themes. Comparative analysis reveals that LDA performed best with a
coherence value of 0.597, followed by pLSA at 0.583, LSA at 0.542, and NMF at
0.437.
K-means clustering further reveals commonalities and unique insights into
incident narratives. In conclusion, this study uncovers latent patterns and
thematic structures within incident narratives, offering a comparative analysis
of multiple-topic modelling techniques. Future research avenues include
exploring temporal patterns, incorporating additional datasets, and developing
predictive models for early identification of safety issues. This research lays
the groundwork for enhancing the understanding and improvement of aviation
safety by utilising the wealth of information embedded in incident narratives.
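Of the techniques listed, K-means is the most self-contained; a minimal sketch on toy 2-d "document vectors" with fixed initial centroids for reproducibility (real narrative vectors would come from the topic models above):

```python
# Plain K-means: alternate nearest-centroid assignment and mean update.

def kmeans(points, centroids, iters=10):
    clusters = [[] for _ in centroids]
    for _ in range(iters):
        clusters = [[] for _ in centroids]
        for p in points:
            j = min(range(len(centroids)),
                    key=lambda j: sum((a - b) ** 2
                                      for a, b in zip(p, centroids[j])))
            clusters[j].append(p)
        centroids = [
            tuple(sum(col) / len(cl) for col in zip(*cl)) if cl else centroids[i]
            for i, cl in enumerate(clusters)
        ]
    return centroids, clusters

points = [(0.1, 0.2), (0.0, 0.1), (0.9, 1.0), (1.0, 0.8)]
centroids, clusters = kmeans(points, [(0.0, 0.0), (1.0, 1.0)])
```

The two toy clusters separate cleanly; on real incident vectors the clusters expose the "commonalities and unique insights" the abstract refers to.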
|
2501.07925
|
Phase of Flight Classification in Aviation Safety using LSTM, GRU, and
BiLSTM: A Case Study with ASN Dataset
|
cs.LG
|
Safety is the main concern in the aviation industry, where even minor
operational issues can lead to serious consequences. This study addresses the
need for comprehensive aviation accident analysis by leveraging natural
language processing (NLP) and advanced AI models to classify the phase of
flight from unstructured aviation accident analysis narratives. The research
aims to determine whether the phase of flight can be inferred from narratives
of post-accident events using NLP techniques. The classification performance of
various deep learning models was evaluated. For single RNN-based models, LSTM
achieved an accuracy of 63%, precision 60%, and recall 61%. BiLSTM recorded an
accuracy of 64%, precision 63%, and a recall of 64%. GRU exhibited balanced
performance with an accuracy and recall of 60% and a precision of 63%. Joint
RNN-based models further enhanced predictive capabilities. GRU-LSTM,
LSTM-BiLSTM, and GRU-BiLSTM demonstrated accuracy rates of 62%, 67%, and 60%,
respectively, showcasing the benefits of combining these architectures. To
provide a comprehensive overview of model performance, single and combined
models were compared in terms of the various metrics. These results underscore
the models' capacity to classify the phase of flight from raw text narratives,
equipping aviation industry stakeholders with valuable insights for proactive
decision-making. Therefore, this research signifies a substantial advancement
in the application of NLP and deep learning models to enhance aviation safety.
|
2501.07927
|
Gandalf the Red: Adaptive Security for LLMs
|
cs.LG cs.AI cs.CL cs.CR
|
Current evaluations of defenses against prompt attacks in large language
model (LLM) applications often overlook two critical factors: the dynamic
nature of adversarial behavior and the usability penalties imposed on
legitimate users by restrictive defenses. We propose D-SEC (Dynamic Security
Utility Threat Model), which explicitly separates attackers from legitimate
users, models multi-step interactions, and expresses the security-utility
trade-off in an optimizable form. We further address the shortcomings in
existing evaluations by introducing Gandalf, a crowd-sourced, gamified
red-teaming platform designed to generate realistic, adaptive attacks. Using
Gandalf, we collect and release a
dataset of 279k prompt attacks. Complemented by benign user data, our analysis
reveals the interplay between security and utility, showing that defenses
integrated in the LLM (e.g., system prompts) can degrade usability even without
blocking requests. We demonstrate that restricted application domains,
defense-in-depth, and adaptive defenses are effective strategies for building
secure and useful LLM applications.
|
2501.07930
|
An Adaptive Orthogonal Convolution Scheme for Efficient and Flexible CNN
Architectures
|
cs.AI cs.NE
|
Orthogonal convolutional layers are the workhorse of multiple areas in
machine learning, such as adversarial robustness, normalizing flows, GANs, and
Lipschitz-constrained models. Their ability to preserve norms and ensure stable
gradient propagation makes them valuable for a large range of problems. Despite
their promise, the deployment of orthogonal convolution in large-scale
applications is a significant challenge due to computational overhead and
limited support for modern features like strides, dilations, group
convolutions, and transposed convolutions. In this paper, we introduce AOC
(Adaptive Orthogonal Convolution), a scalable method for constructing
orthogonal convolutions, effectively overcoming these limitations. This
advancement unlocks the construction of architectures that were previously
considered impractical. We demonstrate through our experiments that our method
produces expressive models that become increasingly efficient as they scale. To
foster further advancement, we provide an open-source library implementing this
method, available at https://github.com/thib-s/orthogonium.
|
2501.07931
|
Advice for Diabetes Self-Management by ChatGPT Models: Challenges and
Recommendations
|
cs.AI
|
Given their capacity for advanced reasoning, extensive contextual
understanding, and robust question answering, large language models
have become prominent in healthcare management research. Despite adeptly
handling a broad spectrum of healthcare inquiries, these models face
significant challenges in delivering accurate and practical advice for chronic
conditions such as diabetes. We evaluate the responses of ChatGPT versions 3.5
and 4 to diabetes patient queries, assessing their depth of medical knowledge
and their capacity to deliver personalized, context-specific advice for
diabetes self-management. Our findings reveal discrepancies in accuracy and
embedded biases, emphasizing the models' limitations in providing tailored
advice unless activated by sophisticated prompting techniques. Additionally, we
observe that both models often provide advice without seeking necessary
clarification, a practice that can result in potentially dangerous advice. This
underscores the limited practical effectiveness of these models without human
oversight in clinical settings. To address these issues, we propose a
commonsense evaluation layer for prompt evaluation and incorporating
disease-specific external memory using an advanced Retrieval Augmented
Generation technique. This approach aims to improve information quality and
reduce misinformation risks, contributing to more reliable AI applications in
healthcare settings. Our findings seek to influence the future direction of AI
in healthcare, enhancing both the scope and quality of its integration.
|
2501.07945
|
Early prediction of the transferability of bovine embryos from
videomicroscopy
|
eess.IV cs.AI cs.CV q-bio.QM
|
Videomicroscopy is a promising tool combined with machine learning for
studying the early development of in vitro fertilized bovine embryos and
assessing their transferability as soon as possible. We aim to predict the embryo
transferability within four days at most, taking 2D time-lapse microscopy
videos as input. We formulate this problem as a supervised binary
classification problem for the classes transferable and not transferable. The
challenges are three-fold: 1) poorly discriminating appearance and motion, 2)
class ambiguity, 3) small amount of annotated data. We propose a 3D
convolutional neural network involving three pathways, which makes it
multi-scale in time and able to handle appearance and motion in different ways.
For training, we retain the focal loss. Our model, named SFR, compares
favorably to other methods. Experiments demonstrate its effectiveness and
accuracy for our challenging biological task.
|
2501.07947
|
"Wait, did you mean the doctor?": Collecting a Dialogue Corpus for
Topical Analysis
|
cs.CL
|
Dialogue is at the core of human behaviour and being able to identify the
topic at hand is crucial to take part in conversation. Yet, there are few
accounts of the topical organisation in casual dialogue and of how people
recognise the current topic in the literature. Moreover, analysing topics in
dialogue requires conversations long enough to contain several topics and types
of topic shifts. Such data is complicated to collect and annotate. In this
paper we present a dialogue collection experiment which aims to build a corpus
suitable for topical analysis. We will carry out the collection with a
messaging tool we developed.
|
2501.07948
|
Synchronization of Kuramoto oscillators via HEOL, and a discussion on AI
|
math.OC cs.SY eess.SY
|
Artificial neural networks and their applications in deep learning have
recently made an incursion into the field of control. Deep learning techniques
in control are often related to optimal control, which relies on the Pontryagin
maximum principle or the Hamilton-Jacobi-Bellman equation. They lead to control
schemes that are tedious to implement. We show here that the new HEOL setting,
resulting from the fusion of the two established approaches, namely
differential flatness and model-free control, provides a solution to control
problems that is more sober in terms of computational resources. This
communication is devoted to the synchronization of the popular Kuramoto's
coupled oscillators, which was already considered via artificial neural
networks (B\"ottcher et al., Nature Communications 2022), where, contrary to
this communication, only a single control variable is examined. One
establishes the flatness of Kuramoto's coupled oscillator model with
multiplicative control and develops the resulting HEOL control. Unlike many
examples, this system reveals singularities that are avoided by a clever
generation of phase angle trajectories. The results obtained, verified in
simulation, show that it is not only possible to synchronize these oscillators
in finite time, and even to follow angular frequency profiles, but also to
exhibit robustness concerning model mismatches. To the best of our knowledge
this has never been done before. Concluding remarks advocate a viewpoint, which
might be traced back to Wiener's cybernetics: control theory belongs to AI.
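An uncontrolled baseline helps make "synchronization" concrete: below is a plain Euler simulation of the Kuramoto model with a constant coupling K, where a strong K already drives the order parameter r = |mean(exp(i*theta))| toward 1. The HEOL controller discussed above would instead shape the multiplicative control (here frozen as K) to reach synchrony in finite time and track angular-frequency profiles; the frequencies and initial phases are illustrative.

```python
# Euler simulation of Kuramoto's coupled oscillators with constant coupling.
import cmath
import math

def order_parameter(thetas):
    return abs(sum(cmath.exp(1j * t) for t in thetas) / len(thetas))

def simulate(omegas, thetas, K, dt=0.01, steps=2000):
    n = len(thetas)
    for _ in range(steps):
        dtheta = [
            omegas[i] + (K / n) * sum(math.sin(thetas[j] - thetas[i])
                                      for j in range(n))
            for i in range(n)
        ]
        thetas = [t + dt * d for t, d in zip(thetas, dtheta)]
    return thetas

omegas = [0.9, 1.0, 1.1, 1.05]        # natural frequencies
thetas0 = [0.0, 1.5, 3.0, 4.5]        # spread-out initial phases
r0 = order_parameter(thetas0)
rT = order_parameter(simulate(omegas, thetas0, K=4.0))
```

Starting from a nearly incoherent state (small r0), the strongly coupled system locks phases and r approaches 1.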
|
2501.07952
|
Spiking Neural Network Accelerator Architecture for Differential-Time
Representation using Learned Encoding
|
cs.NE eess.SP
|
Spiking Neural Networks (SNNs) have garnered attention over recent years due
to their increased energy efficiency and advantages in terms of operational
complexity compared to traditional Artificial Neural Networks (ANNs). Two
important questions when implementing SNNs are how to best encode existing data
into spike trains and how to efficiently process these spike trains in
hardware. This paper addresses both of these problems by incorporating the
encoding into the learning process, thus allowing the network to learn the
spike encoding alongside the weights. Furthermore, this paper proposes a
hardware architecture based on a recently introduced differential-time
representation for spike trains allowing decoupling of spike time and
processing time. Together these contributions lead to a feedforward SNN using
only Leaky-Integrate and Fire (LIF) neurons that surpasses 99% accuracy on the
MNIST dataset while still being implementable on medium-sized FPGAs with
inference times of less than 295 µs.
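The LIF dynamics such an accelerator implements reduce to a leak, an accumulate, and a threshold test per time step. A minimal software sketch with illustrative parameters (not the paper's hardware constants):

```python
# Minimal leaky integrate-and-fire (LIF) neuron: the membrane potential
# decays by a leak factor, integrates the input current, and on crossing
# the threshold the neuron emits a spike and resets.

def lif_run(currents, v_thresh=1.0, leak=0.9, v_reset=0.0):
    """Simulate one LIF neuron; return the binary spike train."""
    v, spikes = 0.0, []
    for i_in in currents:
        v = leak * v + i_in              # leaky integration
        if v >= v_thresh:                # threshold crossing: fire
            spikes.append(1)
            v = v_reset
        else:
            spikes.append(0)
    return spikes

spikes = lif_run([0.4] * 10)             # constant input current
```

A constant sub-threshold input produces a regular spike train (here every third step), which is exactly the kind of timing regularity a differential-time representation encodes compactly.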
|
2501.07953
|
Robust Hyperspectral Image Pansharpening via Sparse Spatial-Spectral
Representation
|
cs.CV eess.IV
|
High-resolution hyperspectral imaging plays a crucial role in various remote
sensing applications, yet its acquisition often faces fundamental limitations
due to hardware constraints. This paper introduces S$^{3}$RNet, a novel
framework for hyperspectral image pansharpening that effectively combines
low-resolution hyperspectral images (LRHSI) with high-resolution multispectral
images (HRMSI) through sparse spatial-spectral representation. The core of
S$^{3}$RNet is the Multi-Branch Fusion Network (MBFN), which employs parallel
branches to capture complementary features at different spatial and spectral
scales. Unlike traditional approaches that treat all features equally, our
Spatial-Spectral Attention Weight Block (SSAWB) dynamically adjusts feature
weights to maintain sparse representation while suppressing noise and
redundancy. To enhance feature propagation, we incorporate the Dense Feature
Aggregation Block (DFAB), which efficiently aggregates input features
through dense connectivity patterns. This integrated design enables S$^{3}$RNet
to selectively emphasize the most informative features from different scales
while maintaining computational efficiency. Comprehensive experiments
demonstrate that S$^{3}$RNet achieves state-of-the-art performance across
multiple evaluation metrics, showing particular strength in maintaining high
reconstruction quality even under challenging noise conditions. The code will
be made publicly available.
|
2501.07954
|
Many-Objective Neuroevolution for Testing Games
|
cs.SE cs.NE
|
Generating tests for games is challenging due to the high degree of
randomisation inherent to games and hard-to-reach program states that require
sophisticated gameplay. The test generator NEATEST tackles these challenges by
combining search-based software testing principles with neuroevolution to
optimise neural networks that serve as test cases. However, since NEATEST is
designed as a single-objective algorithm, it may require a long time to cover
fairly simple program states or may even get stuck trying to reach unreachable
program states. In order to resolve these shortcomings of NEATEST, this work
aims to transform the algorithm into a many-objective search algorithm that
targets several program states simultaneously. To this end, we combine the
neuroevolution algorithm NEATEST with two established search-based software
testing algorithms, MIO and MOSA. Moreover, we adapt the existing
many-objective neuroevolution algorithm NEWS/D to serve as a test generator.
Our experiments on a dataset of 20 SCRATCH programs show that extending NEATEST
to target several objectives simultaneously increases the average branch
coverage from 75.88% to 81.33% while reducing the required search time by
93.28%.
|
2501.07957
|
AI Guide Dog: Egocentric Path Prediction on Smartphone
|
cs.RO cs.AI cs.CV cs.HC cs.LG
|
This paper presents AI Guide Dog (AIGD), a lightweight egocentric
(first-person) navigation system for visually impaired users, designed for
real-time deployment on smartphones. AIGD employs a vision-only multi-label
classification approach to predict directional commands, ensuring safe
navigation across diverse environments. We introduce a novel technique for
goal-based outdoor navigation by integrating GPS signals and high-level
directions, while also handling uncertain multi-path predictions for
destination-free indoor navigation. As the first navigation assistance system
to handle both goal-oriented and exploratory navigation across indoor and
outdoor settings, AIGD establishes a new benchmark in blind navigation. We
present methods, datasets, evaluations, and deployment insights to encourage
further innovations in assistive navigation systems.
|
2501.07959
|
Self-Instruct Few-Shot Jailbreaking: Decompose the Attack into Pattern
and Behavior Learning
|
cs.AI
|
Recently, several works have been conducted on jailbreaking Large Language
Models (LLMs) with few-shot malicious demos. In particular, Zheng et al. focus
on improving the efficiency of Few-Shot Jailbreaking (FSJ) by injecting special
tokens into the demos and employing demo-level random search, known as Improved
Few-Shot Jailbreaking (I-FSJ). Nevertheless, we notice that this method may
still require a long context to jailbreak advanced models, e.g., 32 shots of
demos for Meta-Llama-3-8B-Instruct (Llama-3) \cite{llama3modelcard}. In this
paper, we discuss the limitations of I-FSJ and propose Self-Instruct Few-Shot
Jailbreaking (Self-Instruct-FSJ) facilitated with the demo-level greedy search.
This framework decomposes the FSJ attack into pattern and behavior learning to
exploit the model's vulnerabilities in a more generalized and efficient way. We
conduct elaborate experiments to evaluate our method on common open-source
models and compare it with baseline algorithms. Our code is available at
https://github.com/iphosi/Self-Instruct-FSJ.
|
2501.07960
|
SkipClick: Combining Quick Responses and Low-Level Features for
Interactive Segmentation in Winter Sports Contexts
|
cs.CV
|
In this paper, we present a novel architecture for interactive segmentation
in winter sports contexts. The field of interactive segmentation deals with the
prediction of high-quality segmentation masks by informing the network about
the object's position with the help of user guidance. In our case the guidance
consists of click prompts. For this task, we first present a baseline
architecture which is specifically geared towards quickly responding after each
click. Afterwards, we motivate and describe a number of architectural
modifications which improve the performance when tasked with segmenting winter
sports equipment on the WSESeg dataset. With regards to the average NoC@85
metric on the WSESeg classes, we outperform SAM and HQ-SAM by 2.336 and 7.946
clicks, respectively. When applied to the HQSeg-44k dataset, our system
delivers state-of-the-art results with a NoC@90 of 6.00 and NoC@95 of 9.89. In
addition to that, we test our model on a novel dataset containing masks for
humans during skiing.
|
2501.07964
|
Derivation of Output Correlation Inferences for Multi-Output (aka
Multi-Task) Gaussian Process
|
cs.LG cs.AI stat.ML
|
Gaussian process (GP) is arguably one of the most widely used machine
learning algorithms in practice. One of its prominent applications is Bayesian
optimization (BO). Although the vanilla GP itself is already a powerful tool
for BO, it is often beneficial to be able to consider the dependencies of
multiple outputs. To do so, the Multi-task GP (MTGP) is formulated, but it is
not trivial to fully understand the derivations of its formulations and their
gradients from the previous literature. This paper provides accessible
derivations of the MTGP formulations and their gradients.
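For readers wanting a starting point for such derivations, one common multi-output formulation (an assumption here; the paper may treat a more general one) is the intrinsic coregionalization model, in which the covariance between output $i$ at input $x$ and output $j$ at input $x'$ factorizes into a task part and an input part:

```latex
% Intrinsic coregionalization model (ICM) kernel for a multi-task GP:
% B is a positive semi-definite T x T task-covariance matrix encoding
% output correlations, and k is an ordinary base kernel on inputs.
K\big((x, i), (x', j)\big) = B_{ij}\, k(x, x')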
|
2501.07970
|
Comprehensive Metapath-based Heterogeneous Graph Transformer for
Gene-Disease Association Prediction
|
cs.AI
|
Discovering gene-disease associations is crucial for understanding disease
mechanisms, yet identifying these associations remains challenging due to the
time and cost of biological experiments. Computational methods are increasingly
vital for efficient and scalable gene-disease association prediction.
Graph-based learning models, which leverage node features and network
relationships, are commonly employed for biomolecular predictions. However,
existing methods often struggle to effectively integrate node features,
heterogeneous structures, and semantic information. To address these
challenges, we propose the COmprehensive MEtapath-based heterogeneous graph
Transformer (COMET) for predicting gene-disease associations. COMET integrates
diverse datasets to construct comprehensive heterogeneous networks,
initializing node features with BioGPT. We define seven metapaths and utilize a
transformer framework to aggregate metapath instances, capturing global
contexts and long-distance dependencies. Through intra- and inter-metapath
aggregation using attention mechanisms, COMET fuses latent vectors from
multiple metapaths to enhance GDA prediction accuracy.
superior robustness compared to state-of-the-art approaches. Ablation studies
and visualizations validate COMET's effectiveness, providing valuable insights
for advancing human health research.
|
2501.07972
|
Zero-shot Video Moment Retrieval via Off-the-shelf Multimodal Large
Language Models
|
cs.MM cs.CV
|
The target of video moment retrieval (VMR) is predicting temporal spans
within a video that semantically match a given linguistic query. Existing VMR
methods based on multimodal large language models (MLLMs) overly rely on
expensive high-quality datasets and time-consuming fine-tuning. Although some
recent studies introduce a zero-shot setting to avoid fine-tuning, they
overlook inherent language bias in the query, leading to erroneous
localization. To tackle the aforementioned challenges, this paper proposes
Moment-GPT, a tuning-free pipeline for zero-shot VMR utilizing frozen MLLMs.
Specifically, we first employ LLaMA-3 to correct and rephrase the query to
mitigate language bias. Subsequently, we design a span generator combined with
MiniGPT-v2 to produce candidate spans adaptively. Finally, to leverage the
video comprehension capabilities of MLLMs, we apply VideoChatGPT and span
scorer to select the most appropriate spans. Our proposed method substantially
outperforms the state-of-the-art MLLM-based and zero-shot models on several
public datasets, including QVHighlights, ActivityNet-Captions, and
Charades-STA.
|
2501.07973
|
An Open Source Validation System for Continuous Arterial Blood Pressure
Measuring Sensors
|
physics.med-ph cs.SY eess.SY
|
Measuring the blood pressure waveform is becoming a more frequently studied
area. The development of sensor technologies opens many new ways to be able to
measure high-quality signals. The development of such an aim-specific sensor
can be time-consuming, expensive, and difficult to test or validate with known
and consistent waveforms. In this paper, we present an open source blood
pressure waveform simulator with an open source Python validation package to
reduce development costs for early-stage sensor development and research. The
simulator consists mainly of 3D-printed parts, a technology that has become
widely available and cheap. The core part of the simulator is a 3D
printed cam that can be generated based on real blood pressure waveforms. The
validation framework can create a detailed comparison between the signal
waveform used to design the cam and the measured time series from the sensor
being validated. The presented simulator proved to be robust and accurate in
short- and long-term use, as it produced the signal waveform consistently and
accurately. To validate this solution, a 3D force sensor was used, which was
proven earlier to be able to measure high-quality blood pressure waveforms on
the radial artery at the wrist. The results showed high similarity between the
measured and the nominal waveforms: for the normalized signals, the RMSE
ranged from $0.0276 \pm 0.0047$ to $0.0212 \pm 0.0023$,
and the Pearson correlation ranged from $0.9933 \pm 0.0027$ to $0.9978 \pm
0.0005$. Our validation framework is available at
https://github.com/repat8/cam-bpw-sim. Our hardware framework, which allows
reproduction of the presented solution, is available at
https://github.com/repat8/cam-bpw-sim-hardware. The entire design is an open
source project and was developed using free software.
|
2501.07975
|
Some observations on the ambivalent role of symmetries in Bayesian
inference problems
|
cond-mat.dis-nn cond-mat.stat-mech cs.IT math.IT math.PR math.ST stat.TH
|
We collect in this note some observations on the role of symmetries in
Bayesian inference problems, that can be useful or detrimental depending on the
way they act on the signal and on the observations. We emphasize in particular
the need to gauge away unobservable invariances in the definition of a distance
between a signal and its estimator, and the consequences this implies for the
statistical mechanics treatment of such models, taking as a motivating example
the extensive rank matrix factorization problem.
|
2501.07978
|
Facial Dynamics in Video: Instruction Tuning for Improved Facial
Expression Perception and Contextual Awareness
|
cs.CV cs.AI
|
Facial expression captioning has found widespread application across various
domains. Recently, the emergence of video Multimodal Large Language Models
(MLLMs) has shown promise in general video understanding tasks. However,
describing facial expressions within videos poses two major challenges for
these models: (1) the lack of adequate datasets and benchmarks, and (2) the
limited visual token capacity of video MLLMs. To address these issues, this
paper introduces a new instruction-following dataset tailored for dynamic
facial expression captioning. The dataset comprises 5,033 high-quality video clips
annotated manually, containing over 700,000 tokens. Its purpose is to improve
the capability of video MLLMs to discern subtle facial nuances. Furthermore, we
propose FaceTrack-MM, which leverages a limited number of tokens to encode the
main character's face. This model demonstrates superior performance in tracking
faces and focusing on the facial expressions of the main characters, even in
intricate multi-person scenarios. Additionally, we introduce a novel evaluation
metric combining event extraction, relation classification, and the longest
common subsequence (LCS) algorithm to assess the content consistency and
temporal sequence consistency of generated text. Moreover, we present
FEC-Bench, a benchmark designed to assess the performance of existing video
MLLMs in this specific task. All data and source code will be made publicly
available.
|
2501.07981
|
A resource management approach for concurrent operation of RF
functionalities
|
eess.SP cs.SY eess.SY
|
Future multifunction RF systems will be able to not only perform various
different radar, communication and electronic warfare functionalities but also
to perform them simultaneously on the same aperture. This ability of concurrent
operations requires new, cognitive approaches of resource management compared
to classical methods. This paper presents such a new approach using a
combination of quality of service based resource management and Monte Carlo
tree search.
|
2501.07983
|
V-Trans4Style: Visual Transition Recommendation for Video Production
Style Adaptation
|
cs.CV
|
We introduce V-Trans4Style, an innovative algorithm tailored for dynamic
video content editing needs. It is designed to adapt videos to different
production styles like documentaries, dramas, feature films, or a specific
YouTube channel's video-making technique. Our algorithm recommends optimal
visual transitions to help achieve this flexibility using a more bottom-up
approach. We first employ a transformer-based encoder-decoder network to learn
recommending temporally consistent and visually seamless sequences of visual
transitions using only the input videos. We then introduce a style conditioning
module that leverages this model to iteratively adjust the visual transitions
obtained from the decoder through activation maximization. We demonstrate the
efficacy of our method through experiments conducted on our newly introduced
AutoTransition++ dataset, a 6k-video version of the AutoTransition dataset
that additionally categorizes its videos into different production style
categories. Our encoder-decoder model outperforms the state-of-the-art
transition recommendation method, achieving improvements of 10% to 80% in
Recall@K and mean rank values over the baseline. Our style conditioning module
results in visual transitions that improve the capture of the desired video
production style characteristics by an average of around 12% in comparison to
other methods when measured with similarity metrics. We hope that our work
serves as a foundation for exploring and understanding video production styles
further.
|
2501.07984
|
Threshold Attention Network for Semantic Segmentation of Remote Sensing
Images
|
cs.CV
|
Semantic segmentation of remote sensing images is essential for various
applications, including vegetation monitoring, disaster management, and urban
planning. Previous studies have demonstrated that the self-attention mechanism
(SA) is an effective approach for designing segmentation networks that can
capture long-range pixel dependencies. SA enables the network to model the
global dependencies between the input features, resulting in improved
segmentation outcomes. However, the high density of attentional feature maps
used in this mechanism causes exponential increases in computational
complexity. Additionally, it introduces redundant information that negatively
impacts the feature representation. Inspired by traditional threshold
segmentation algorithms, we propose a novel threshold attention mechanism
(TAM). This mechanism significantly reduces computational effort while also
better modeling the correlation between different regions of the feature map.
Based on TAM, we present a threshold attention network (TANet) for semantic
segmentation. TANet consists of an attentional feature enhancement module
(AFEM) for global feature enhancement of shallow features and a threshold
attention pyramid pooling module (TAPP) for acquiring feature information at
different scales for deep features. We have conducted extensive experiments on
the ISPRS Vaihingen and Potsdam datasets. The results demonstrate the validity
and superiority of our proposed TANet compared to state-of-the-art
models.
|
2501.07985
|
CHEQ-ing the Box: Safe Variable Impedance Learning for Robotic Polishing
|
cs.RO cs.LG
|
Robotic systems are increasingly employed for industrial automation, with
contact-rich tasks like polishing requiring dexterity and compliant behaviour.
These tasks are difficult to model, making classical control challenging. Deep
reinforcement learning (RL) offers a promising solution by enabling the
learning of models and control policies directly from data. However, its
application to real-world problems is limited by data inefficiency and unsafe
exploration. Adaptive hybrid RL methods blend classical control and RL
adaptively, combining the strengths of both: structure from control and
learning from RL. This has led to improvements in data efficiency and
exploration safety. However, their potential for hardware applications remains
underexplored, with no evaluations on physical systems to date. Such
evaluations are critical to fully assess the practicality and effectiveness of
these methods in real-world settings. This work presents an experimental
demonstration of the hybrid RL algorithm CHEQ for robotic polishing with
variable impedance, a task requiring precise force and velocity tracking. In
simulation, we show that variable impedance enhances polishing performance. We
compare standalone RL with adaptive hybrid RL, demonstrating that CHEQ achieves
effective learning while adhering to safety constraints. On hardware, CHEQ
achieves effective polishing behaviour, requiring only eight hours of training
and incurring just five failures. These results highlight the potential of
adaptive hybrid RL for real-world, contact-rich tasks trained directly on
hardware.
|
2501.07988
|
GAC-Net_Geometric and attention-based Network for Depth Completion
|
cs.CV cs.AI
|
Depth completion is a key task in autonomous driving, aiming to complete
sparse LiDAR depth measurements into high-quality dense depth maps through
image guidance. However, existing methods usually treat depth maps as an
additional channel of color images, or directly perform convolution on sparse
data, failing to fully exploit the 3D geometric information in depth maps,
especially with limited performance in complex boundaries and sparse areas. To
address these issues, this paper proposes a depth completion network combining
channel attention mechanism and 3D global feature perception (CGA-Net). The
main innovations include: 1) Utilizing PointNet++ to extract global 3D
geometric features from sparse depth maps, enhancing the scene perception
ability of low-line LiDAR data; 2) Designing a channel-attention-based
multimodal feature fusion module to efficiently integrate sparse depth, RGB
images, and 3D geometric features; 3) Combining residual learning with CSPN++
to optimize the depth refinement stage, further improving the completion
quality in edge areas and complex scenes. Experiments on the KITTI depth
completion dataset show that CGA-Net can significantly improve the prediction
accuracy of dense depth maps, achieving a new state-of-the-art (SOTA), and
demonstrating strong robustness to sparse and complex scenes.
|
2501.07991
|
Training Hybrid Neural Networks with Multimode Optical Nonlinearities
Using Digital Twins
|
physics.optics cs.AI
|
The ability to train ever-larger neural networks brings artificial
intelligence to the forefront of scientific and technical discoveries. However,
their exponentially increasing size creates a proportionally greater demand for
energy and computational hardware. Incorporating complex physical events in
networks as fixed, efficient computation modules can address this demand by
decreasing the complexity of trainable layers. Here, we utilize ultrashort
pulse propagation in multimode fibers, which perform large-scale nonlinear
transformations, for this purpose. Training the hybrid architecture is achieved
through a neural model that differentiably approximates the optical system. The
training algorithm updates the neural simulator and backpropagates the error
signal over this proxy to optimize layers preceding the optical one. Our
experimental results achieve state-of-the-art image classification accuracies
and simulation fidelity. Moreover, the framework demonstrates exceptional
resilience to experimental drifts. By integrating low-energy physical systems
into neural networks, this approach enables scalable, energy-efficient AI
models with significantly reduced computational demands.
|
2501.07992
|
LLM-Enhanced Holonic Architecture for Ad-Hoc Scalable SoS
|
cs.AI cs.ET cs.MA cs.SE
|
As modern systems of systems (SoS) become increasingly adaptive and
human-centred, traditional architectures often struggle to support interoperability,
reconfigurability, and effective human system interaction. This paper addresses
these challenges by advancing the state of the art holonic architecture for
SoS, offering two main contributions to support these adaptive needs. First, we
propose a layered architecture for holons, which includes reasoning,
communication, and capabilities layers. This design facilitates seamless
interoperability among heterogeneous constituent systems by improving data
exchange and integration. Second, inspired by principles of intelligent
manufacturing, we introduce specialised holons, namely supervisor, planner,
task, and resource holons, aimed at enhancing the adaptability and
reconfigurability of SoS. These specialised holons utilise large language
models within their reasoning layers to support decision-making and ensure
real-time adaptability. We demonstrate our approach through a 3D mobility case study
focused on smart city transportation, showcasing its potential for managing
complex, multimodal SoS environments. Additionally, we propose evaluation
methods to assess the architecture's efficiency and scalability, laying the
groundwork for future empirical validations through simulations and real-world
implementations.
|
2501.07994
|
Combining imaging and shape features for prediction tasks of Alzheimer's
disease classification and brain age regression
|
cs.CV cs.LG eess.IV
|
We investigate combining imaging and shape features extracted from MRI for
the clinically relevant tasks of brain age prediction and Alzheimer's disease
classification. Our proposed model fuses ResNet-extracted image embeddings with
shape embeddings from a bespoke graph neural network. The shape embeddings are
derived from surface meshes of 15 brain structures, capturing detailed
geometric information. Combined with the appearance features from T1-weighted
images, we observe improvements in the prediction performance on both tasks,
with substantial gains for classification. We evaluate the model using public
datasets, including CamCAN, IXI, and OASIS3, demonstrating the effectiveness of
fusing imaging and shape features for brain analysis.
|
2501.07995
|
MMAPs to model complex multi-state systems with vacation policies in the
repair facility
|
stat.ME cs.SY eess.SY
|
Two complex multi-state systems subject to multiple events are built in an
algorithmic and computational way by considering phase-type distributions and
Markovian arrival processes with marked arrivals. The internal performance of
the system is composed of different degradation levels and internal repairable
and non-repairable failures can occur. Also, the system is subject to external
shocks that may provoke repairable or non-repairable failure. A multiple
vacation policy is introduced in the system for the repairperson. Preventive
maintenance is included in the system to improve its behaviour. Two types of
task may be performed by the repairperson: corrective repair and preventive
maintenance. The systems are modelled, the transient and stationary
distributions are built and different performance measures are calculated in a
matrix-algorithmic form. Cost and rewards are included in the model in a vector
matrix way. Several economic measures are worked out and the net reward per
unit of time is used to optimize the system. A numerical example shows that the
system can be optimized according to the existence of preventive maintenance
and the distribution of vacation time. The results have been implemented
computationally with Matlab and R (packages: expm, optim).
|
2501.07996
|
Reward Compatibility: A Framework for Inverse RL
|
cs.LG
|
We provide an original theoretical study of Inverse Reinforcement Learning
(IRL) through the lens of reward compatibility, a novel framework to quantify
the compatibility of a reward with the given expert's demonstrations.
Intuitively, a reward is more compatible with the demonstrations the closer the
performance of the expert's policy computed with that reward is to the optimal
performance for that reward. This generalizes the notion of feasible reward
set, the most common framework in the theoretical IRL literature, for which a
reward is either compatible or not compatible. The grayscale introduced by the
reward compatibility is the key to extend the realm of provably efficient IRL
far beyond what is attainable with the feasible reward set: from tabular to
large-scale MDPs. We analyze the IRL problem across various settings, including
optimal and suboptimal expert's demonstrations and both online and offline data
collection. For all of these dimensions, we provide a tractable algorithm and
corresponding sample complexity analysis, as well as various insights on reward
compatibility and how the framework can pave the way to yet more general
problem settings.
|
2501.07999
|
Unsupervised Feature Construction for Anomaly Detection in Time Series
-- An Evaluation
|
cs.LG
|
To detect anomalies with precision and without prior knowledge in time
series, is it better to build a detector from the initial temporal
representation, or to compute a new (tabular) representation using an existing
automatic variable construction library? In this article, we address this
question by conducting an in-depth experimental study for two popular detectors
(Isolation Forest and Local Outlier Factor). The obtained results, for 5
different datasets, show that the new representation, computed using the
tsfresh library, allows Isolation Forest to significantly improve its
performance.
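The tabular-representation approach the abstract evaluates can be sketched minimally as follows: each series is collapsed into a row of summary features (hand-rolled here in place of the tsfresh library, whose feature set is far richer) and scored with scikit-learn's Isolation Forest. The specific features and data are illustrative assumptions, not the study's setup.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

def to_tabular(series_batch):
    """Collapse each 1-D series into a row of simple summary features."""
    return np.array([
        [s.mean(), s.std(), s.min(), s.max(), np.abs(np.diff(s)).mean()]
        for s in series_batch
    ])

rng = np.random.default_rng(0)
# Fifty well-behaved noisy sinusoids plus one series with very different dynamics.
normal = [np.sin(np.linspace(0, 6, 100)) + 0.05 * rng.standard_normal(100)
          for _ in range(50)]
anomaly = [5.0 * rng.standard_normal(100)]
X = to_tabular(normal + anomaly)

detector = IsolationForest(random_state=0).fit(X)
labels = detector.predict(X)  # +1 = inlier, -1 = outlier
```

The same detector could instead be fit on the raw temporal representation; comparing the two is exactly the question the abstract poses.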
|
2501.08001
|
GDiffRetro: Retrosynthesis Prediction with Dual Graph Enhanced Molecular
Representation and Diffusion Generation
|
cs.AI
|
Retrosynthesis prediction focuses on identifying reactants capable of
synthesizing a target product. Typically, the retrosynthesis prediction
involves two phases: Reaction Center Identification and Reactant Generation.
However, we argue that most existing methods suffer from two limitations in the
two phases: (i) Existing models do not adequately capture the ``face''
information in molecular graphs for the reaction center identification. (ii)
Current approaches for the reactant generation predominantly use sequence
generation in a 2D space, which lacks versatility in generating reasonable
distributions for completed reactive groups and overlooks molecules' inherent
3D properties. To overcome the above limitations, we propose GDiffRetro. For
the reaction center identification, GDiffRetro uniquely integrates the original
graph with its corresponding dual graph to represent molecular structures,
which helps guide the model to focus more on the faces in the graph. For the
reactant generation, GDiffRetro employs a conditional diffusion model in 3D to
further transform the obtained synthon into a complete reactant. Our
experimental findings reveal that GDiffRetro outperforms state-of-the-art
semi-template models across various evaluative metrics.
|
2501.08002
|
Maximizing Uncertainty for Federated learning via Bayesian
Optimisation-based Model Poisoning
|
cs.LG cs.AI cs.CV
|
As we transition from Narrow Artificial Intelligence towards Artificial Super
Intelligence, users are increasingly concerned about their privacy and the
trustworthiness of machine learning (ML) technology. A common denominator for
the metrics of trustworthiness is the quantification of uncertainty inherent in
DL algorithms, and specifically in the model parameters, input data, and model
predictions. One of the common approaches to address privacy-related issues in
DL is to adopt distributed learning such as federated learning (FL), where
private raw data is not shared among users. Despite the privacy-preserving
mechanisms in FL, it still faces challenges in trustworthiness. Specifically,
malicious users can, during training, systematically create malicious model
parameters to compromise the model's predictive and generative capabilities,
resulting in high uncertainty about their reliability. To demonstrate malicious
behaviour, we propose a novel model poisoning attack method named Delphi which
aims to maximise the uncertainty of the global model output. We achieve this by
taking advantage of the relationship between the uncertainty and the model
parameters of the first hidden layer of the local model. Delphi employs two
types of optimisation, Bayesian Optimisation and Least Squares Trust Region,
to search for the optimal poisoned model parameters, named Delphi-BO and
Delphi-LSTR. We quantify the uncertainty using the KL divergence to minimise
the distance of the predictive probability distribution towards an uncertain
distribution of model output. Furthermore, we establish a mathematical proof
for the attack effectiveness demonstrated in FL. Numerical results demonstrate
that Delphi-BO induces a higher amount of uncertainty than Delphi-LSTR
highlighting vulnerability of FL systems to model poisoning attacks.
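The attack objective admits a toy illustration: driving the predictive distribution towards the uniform (maximum-entropy) distribution is equivalent to minimising their KL divergence, which maximises output uncertainty. The sketch below assumes that reading of the objective; the layer-wise mechanics and optimisers in Delphi are considerably more involved.

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) for discrete distributions given as 1-D probability arrays."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

num_classes = 10
uniform = np.full(num_classes, 1.0 / num_classes)  # maximally uncertain target

confident = np.array([0.91] + [0.01] * 9)  # low-uncertainty prediction
hedged = np.array([0.19] + [0.09] * 9)     # prediction pushed towards uniform

# A successful poisoning step shrinks the distance to the uniform target.
assert kl_divergence(hedged, uniform) < kl_divergence(confident, uniform)
```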
|
2501.08003
|
Formalising lexical and syntactic diversity for data sampling in French
|
cs.CL
|
Diversity is an important property of datasets and sampling data for
diversity is useful in dataset creation. Finding the optimally diverse sample
is expensive; we therefore present a heuristic that significantly increases
diversity relative to random sampling. We also explore whether different kinds
of diversity -- lexical and syntactic -- correlate, with the purpose of
sampling for expensive syntactic diversity through inexpensive lexical
diversity. We find that correlations fluctuate with different datasets and
versions of diversity measures. This shows that an arbitrarily chosen measure
may fall short of capturing diversity-related properties of datasets.
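One cheap heuristic in the spirit the abstract describes (the actual heuristic and diversity measures studied are not specified here, so this is purely illustrative): greedily pick the text that adds the most unseen word types, a lexical-diversity proxy that typically covers more vocabulary than a random sample of the same size.

```python
import random

def word_types(texts):
    """Set of distinct word types across a list of texts."""
    return {w for t in texts for w in t.lower().split()}

def greedy_diverse_sample(corpus, k):
    """Greedily select k texts, each maximising newly covered word types."""
    chosen, seen, pool = [], set(), list(corpus)
    for _ in range(k):
        best = max(pool, key=lambda t: len(set(t.lower().split()) - seen))
        chosen.append(best)
        pool.remove(best)
        seen |= set(best.lower().split())
    return chosen

corpus = [
    "the cat sat on the mat",
    "the cat sat on the mat again",
    "a dog barked at the moon",
    "quantum computing uses qubits",
    "the mat was red",
]
diverse = greedy_diverse_sample(corpus, 3)
random_sample = random.Random(0).sample(corpus, 3)
```

On this toy corpus the greedy pick covers at least as many word types as any random pick, though as the abstract notes, lexical coverage alone need not track other notions of diversity such as syntactic diversity.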
|
2501.08004
|
Bridging financial gaps for infrastructure climate adaptation via
integrated carbon markets
|
econ.GN cs.CE q-fin.EC
|
Climate physical risks pose an increasing threat to urban infrastructure,
necessitating urgent climate adaptation measures to protect lives and assets.
Implementing such measures, including the development of resilient
infrastructure and retrofitting existing systems, demands substantial financial
investment. Unfortunately, a significant financial gap remains in funding
infrastructure climate adaptation, primarily due to the unprofitability
stemming from the conflict between long-term returns, uncertainty, and
complexity of these adaptations and the short-term profit objectives of private
capital. This study suggests incentivizing private capital to bridge this
financial gap through integrated carbon markets. Specifically, the framework
combines carbon taxes and carbon markets to involve infrastructures and
individuals in the climate mitigation phase, using the funds collected for
climate adaptation. It integrates lifestyle reformation, environmental
mitigation, and infrastructure adaptation to establish harmonized standards and
provide continuous positive feedback to sustain the markets. We explore how
integrated carbon markets can facilitate fund collection and discuss the
challenges of incorporating them into infrastructure climate adaptation. This
study aims to foster collaboration between private and public capital to enable
a more scientific, rational, and actionable implementation of integrated carbon
markets, thus supporting financial backing for infrastructure climate
adaptation.
|
2501.08005
|
DisCoPatch: Batch Statistics Are All You Need For OOD Detection, But
Only If You Can Trust Them
|
cs.CV cs.AI eess.IV
|
Out-of-distribution (OOD) detection holds significant importance across many
applications. While semantic and domain-shift OOD problems are well-studied,
this work focuses on covariate shifts - subtle variations in the data
distribution that can degrade machine learning performance. We hypothesize that
detecting these subtle shifts can improve our understanding of in-distribution
boundaries, ultimately improving OOD detection. In adversarial discriminators
trained with Batch Normalization (BN), real and adversarial samples form
distinct domains with unique batch statistics - a property we exploit for OOD
detection. We introduce DisCoPatch, an unsupervised Adversarial Variational
Autoencoder (VAE) framework that harnesses this mechanism. During inference,
batches consist of patches from the same image, ensuring a consistent data
distribution that allows the model to rely on batch statistics. DisCoPatch uses
the VAE's suboptimal outputs (generated and reconstructed) as negative samples
to train the discriminator, thereby improving its ability to delineate the
boundary between in-distribution samples and covariate shifts. By tightening
this boundary, DisCoPatch achieves state-of-the-art results in public OOD
detection benchmarks. The proposed model not only excels in detecting covariate
shifts, achieving 95.5% AUROC on ImageNet-1K(-C) but also outperforms all prior
methods on public Near-OOD (95.0%) benchmarks. With a compact model size of
25MB, it achieves high OOD detection performance at notably lower latency than
existing methods, making it an efficient and practical solution for real-world
OOD detection applications. The code will be made publicly available.
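The inference-time trick described above, forming a batch from patches of a single image so that batch statistics reflect one consistent distribution, can be sketched in a few lines (a minimal NumPy illustration; the patch size and stride here are assumptions, not values from the paper):

```python
import numpy as np

def image_to_patch_batch(image, patch=32, stride=32):
    """Cut one image (H, W, C) into a batch of patches (N, patch, patch, C).

    Batch statistics computed over this batch describe a single image's
    distribution, which is the property DisCoPatch-style detectors rely on.
    """
    h, w, _ = image.shape
    patches = [
        image[i:i + patch, j:j + patch]
        for i in range(0, h - patch + 1, stride)
        for j in range(0, w - patch + 1, stride)
    ]
    return np.stack(patches)

def batch_stats(batch):
    """Per-channel mean and variance, as a BatchNorm layer would see them."""
    mean = batch.mean(axis=(0, 1, 2))
    var = batch.var(axis=(0, 1, 2))
    return mean, var

img = np.random.rand(128, 128, 3)
batch = image_to_patch_batch(img)          # 16 patches of 32x32x3
mu, var = batch_stats(batch)
```

A batch drawn this way never mixes images, so its statistics cannot be "averaged out" across different distributions, which is what makes them trustworthy for the detector.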
|
2501.08008
|
TriAdaptLoRA: Brain-Inspired Triangular Adaptive Low-Rank Adaptation for
Parameter-Efficient Fine-Tuning
|
cs.CL cs.AI
|
The fine-tuning of Large Language Models (LLMs) is pivotal for achieving
optimal performance across diverse downstream tasks. However, while full
fine-tuning delivers superior results, it entails significant computational and
resource costs. Parameter-Efficient Fine-Tuning (PEFT) methods, such as LoRA,
address these challenges by reducing the number of trainable parameters, but
they often struggle with rank adjustment efficiency and task-specific
adaptability. We propose Triangular Adaptive Low-Rank Adaptation
(TriAdaptLoRA), a novel PEFT framework inspired by neuroscience principles,
which dynamically optimizes the allocation of trainable parameters.
TriAdaptLoRA introduces three key innovations: 1) a triangular split of
transformation matrices into lower and upper triangular components to maximize
parameter utilization, 2) a parameter importance metric based on normalized
Frobenius norms for efficient adaptation, and 3) an adaptive rank-growth
strategy governed by dynamic thresholds, allowing flexible parameter allocation
across training steps. Experiments conducted on a variety of natural language
understanding and generation tasks demonstrate that TriAdaptLoRA consistently
outperforms existing PEFT methods. It achieves superior performance, enhanced
stability, and reduced computational overhead, particularly under linear
threshold-driven rank growth. These results highlight its efficacy as a
scalable and resource-efficient solution for fine-tuning LLMs.
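Innovations 1) and 2) above can be sketched concretely (a speculative illustration, not the authors' implementation; the square shape and the sum-normalization are assumptions):

```python
import numpy as np

def triangular_split(delta_w):
    """Idea 1): split a square update matrix into lower- and upper-triangular
    components. The two triangles together reuse every entry of the matrix,
    rather than discarding half of the parameters."""
    lower = np.tril(delta_w)          # includes the diagonal
    upper = np.triu(delta_w, k=1)     # strictly above the diagonal
    return lower, upper

def importance(block, all_blocks):
    """Idea 2): normalized Frobenius norm as a parameter-importance score,
    so scores across blocks are comparable and sum to one."""
    total = sum(np.linalg.norm(b, "fro") for b in all_blocks)
    return np.linalg.norm(block, "fro") / total

w = np.arange(9, dtype=float).reshape(3, 3)
lo, up = triangular_split(w)
blocks = [lo, up]
scores = [importance(b, blocks) for b in blocks]
assert np.allclose(lo + up, w)        # the split is lossless
```

The rank-growth strategy of idea 3) would then allocate new rank to whichever blocks score highest once their score crosses the dynamic threshold.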
|
2501.08009
|
Tutorial: VAE as an inference paradigm for neuroimaging
|
eess.IV cs.AI
|
In this tutorial, we explore Variational Autoencoders (VAEs), an essential
framework for unsupervised learning, particularly suited for high-dimensional
datasets such as neuroimaging. By integrating deep learning with Bayesian
inference, VAEs enable the generation of interpretable latent representations.
This tutorial outlines the theoretical foundations of VAEs, addresses practical
challenges such as convergence issues and over-fitting, and discusses
strategies like the reparameterization trick and hyperparameter optimization.
We also highlight key applications of VAEs in neuroimaging, demonstrating their
potential to uncover meaningful patterns, including those associated with
neurodegenerative processes, and their broader implications for analyzing
complex brain data.
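The reparameterization trick mentioned above has a standard form: sample z as a deterministic function of the encoder outputs plus parameter-free noise, so the sampling step becomes differentiable (a minimal NumPy sketch of the sampling step only, not a full VAE):

```python
import numpy as np

def reparameterize(mu, log_var, rng):
    """Sample z ~ N(mu, sigma^2) as z = mu + sigma * eps with eps ~ N(0, I).

    Because the randomness lives entirely in eps, gradients of a loss on z
    can flow back through mu and log_var to the encoder parameters."""
    sigma = np.exp(0.5 * log_var)
    eps = rng.standard_normal(np.shape(mu))
    return mu + sigma * eps

mu = np.zeros(4)
log_var = np.zeros(4)       # sigma = 1
z = reparameterize(mu, log_var, np.random.default_rng(0))
```

Parameterizing the encoder with log-variance rather than variance, as done here, is also one of the common remedies for the convergence issues the tutorial discusses, since it keeps sigma strictly positive without constraints.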
|
2501.08019
|
An AI-driven framework for rapid and localized optimizations of urban
open spaces
|
cs.LG cs.AI cs.CY
|
As urbanization accelerates, open spaces are increasingly recognized for
their role in enhancing sustainability and well-being, yet they remain
underexplored compared to built spaces. This study introduces an AI-driven
framework that integrates machine learning models (MLMs) and explainable AI
techniques to optimize Sky View Factor (SVF) and visibility, key spatial
metrics influencing thermal comfort and perceived safety in urban spaces.
Unlike global optimization methods, which are computationally intensive and
impractical for localized adjustments, this framework supports incremental
design improvements with lower computational costs and greater flexibility. The
framework employs SHapley Additive exPlanations (SHAP) to analyze feature
importance and Counterfactual Explanations (CFXs) to propose minimal design
changes. Simulations tested five MLMs, identifying XGBoost as the most
accurate, with building width, park area, and heights of surrounding buildings
as critical for SVF, and distances from southern buildings as key for
visibility. Compared to Genetic Algorithms, which required approximately 15/30
minutes across 3/4 generations to converge, the tested CFX approach achieved
optimized results in 1 minute with a 5% RMSE error, demonstrating significantly
faster performance and suitability for scalable retrofitting strategies. This
interpretable and computationally efficient framework advances urban
performance optimization, providing data-driven insights and practical
retrofitting solutions for enhancing usability and environmental quality across
diverse urban contexts.
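The counterfactual idea, finding a minimal design change that reaches a target metric value, can be illustrated with a toy search (the model, coefficients, and single-feature search below are hypothetical stand-ins; real CFX methods optimize over all features under proximity and plausibility constraints):

```python
import numpy as np

def predict_svf(x):
    """Stand-in for a trained SVF model: SVF grows as building width shrinks
    and park area grows (hypothetical coefficients, for illustration only)."""
    building_width, park_area = x
    return 0.9 - 0.03 * building_width + 0.002 * park_area

def counterfactual(x, target, feature, step=0.1, max_iter=10_000):
    """Nudge one feature in whichever direction raises the prediction until
    the target is met; return the changed design and its predicted SVF."""
    x = np.asarray(x, dtype=float).copy()
    for _ in range(max_iter):
        if predict_svf(x) >= target:
            return x, predict_svf(x)
        probe = x.copy()
        probe[feature] += step           # probe the direction of improvement
        x[feature] += step if predict_svf(probe) > predict_svf(x) else -step
    raise RuntimeError("no counterfactual found within max_iter steps")

x0 = [10.0, 50.0]                        # width 10 m, park 50 m^2 -> SVF 0.7
x_cf, svf = counterfactual(x0, target=0.75, feature=0)
```

Because each query is a cheap local perturbation of an already-fitted model, this style of search is what makes localized retrofitting far faster than re-running a global optimizer.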
|
2501.08020
|
Cooperative Patrol Routing: Optimizing Urban Crime Surveillance through
Multi-Agent Reinforcement Learning
|
cs.AI
|
The effective design of patrol strategies is a difficult and complex problem,
especially in medium and large areas. The objective is to plan, in a
coordinated manner, the optimal routes for a set of patrols in a given area, in
order to achieve maximum coverage of the area, while also trying to minimize
the number of patrols. In this paper, we propose a multi-agent reinforcement
learning (MARL) model, based on a decentralized partially observable Markov
decision process, to plan unpredictable patrol routes within an urban
environment represented as an undirected graph. The model attempts to maximize
a target function that characterizes the environment within a given time frame.
Our model has been tested to optimize police patrol routes in three
medium-sized districts of the city of Malaga. The aim was to maximize
surveillance coverage of the most crime-prone areas, based on actual crime data
in the city. To address this problem, several MARL algorithms have been
studied, and among these the Value Decomposition Proximal Policy Optimization
(VDPPO) algorithm exhibited the best performance. We also introduce a novel
metric, the coverage index, for the evaluation of the coverage performance of
the routes generated by our model. This metric is inspired by the predictive
accuracy index (PAI), which is commonly used in criminology to detect hotspots.
Using this metric, we have evaluated the model under various scenarios in which
the number of agents (or patrols), their starting positions, and the level of
information they can observe in the environment have been modified. Results
show that the coordinated routes generated by our model achieve a coverage of
more than $90\%$ of the $3\%$ of graph nodes with the highest crime incidence,
and $65\%$ for $20\%$ of these nodes; $3\%$ and $20\%$ represent the coverage
standards for police resource allocation.
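The coverage index itself is not fully specified in the abstract, but a PAI-inspired measure consistent with the figures quoted above would be the fraction of the top-p% highest-crime nodes that the coordinated routes visit (the exact form here is an assumption):

```python
import numpy as np

def coverage_of_top_nodes(crime, visited, top_frac):
    """Fraction of the top `top_frac` highest-crime graph nodes that appear
    in the set of nodes visited by the patrol routes."""
    n_top = max(1, int(round(top_frac * len(crime))))
    top_nodes = np.argsort(crime)[::-1][:n_top]   # highest-crime nodes first
    hits = sum(1 for node in top_nodes if node in visited)
    return hits / n_top

crime = np.array([5, 1, 9, 3, 7, 0, 2, 8, 4, 6])  # crime score per node
visited = {2, 7, 4, 0}                            # nodes covered by routes
score = coverage_of_top_nodes(crime, visited, 0.3)
```

With this definition, the reported results correspond to coverage above 0.90 at `top_frac=0.03` and 0.65 at `top_frac=0.20`.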
|
2501.08025
|
Analysis of Power Losses and the Efficacy of Power Minimization
Strategies in Multichannel Electrical Stimulation Systems
|
eess.SY cs.SY
|
Neuroprosthetic devices require multichannel stimulator systems with an
increasing number of channels. However, there are inherent power losses in
typical multichannel stimulation circuits caused by a mismatch between the
power supply voltage and the voltage required at each electrode to successfully
stimulate tissue. This imposes a bottleneck towards high-channel-count devices,
which is particularly severe in wirelessly-powered devices. Hence, advances in
the power efficiency of stimulation systems are critical. To support these
advances, this paper presents a methodology to identify and quantify power
losses associated with different power supply scaling strategies in
multichannel stimulation systems. The proposed methodology utilizes
distributions of stimulation amplitudes and electrode impedances to calculate
power losses in multichannel systems. Experimental data from previously
published studies spanning various stimulation applications were analyzed to
evaluate the performance of fixed, global, and stepped supply scaling methods,
focusing on their impact on power dissipation and efficiency. Variability in
output conditions results in low power efficiency in multichannel stimulation
systems across all applications. Stepped voltage scaling demonstrated
substantial efficiency improvements, achieving an increase of 67% to 146%,
particularly in high-channel-count applications with significant variability in
tissue impedance. Global scaling, by contrast, was more advantageous for
systems with fewer channels. The findings highlight the importance of tailoring
power management strategies to specific applications to optimize efficiency
while minimizing system complexity. The proposed methodology offers a framework
for evaluating efficiency-complexity trade-offs, advancing the design of
scalable neurostimulation systems.
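The mismatch loss and the benefit of stepped supply scaling can be made concrete with a small calculation (illustrative voltages and rail steps; the paper's methodology uses measured amplitude and impedance distributions):

```python
import numpy as np

# Required electrode voltages across channels (V); the spread reflects
# impedance and amplitude variability (illustrative numbers, not measured data).
v_req = np.array([2.0, 3.5, 5.0, 8.0, 11.0])

def mean_efficiency(v_req, rail_for_channel):
    """Per-channel efficiency is v_required / v_rail: the headroom between
    the rail and the electrode voltage drops across the current source at
    the full stimulation current and is dissipated as heat."""
    return np.mean(v_req / rail_for_channel)

# Fixed supply: a single rail sized for the worst-case channel.
eta_fixed = mean_efficiency(v_req, np.full_like(v_req, v_req.max()))

# Stepped supply: each channel uses the smallest rail that still covers it.
rails = np.array([4.0, 8.0, 12.0])
chosen = np.array([rails[rails >= v].min() for v in v_req])
eta_stepped = mean_efficiency(v_req, chosen)

print(f"fixed rail:    {eta_fixed:.2f}")    # ~0.54
print(f"stepped rails: {eta_stepped:.2f}")  # ~0.78
```

The gap between the two numbers widens as the spread of required voltages grows, which is why the paper finds stepped scaling most valuable in high-channel-count systems with large impedance variability.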
|
2501.08026
|
Orthogonal Delay-Doppler Division Multiplexing Modulation with
Hierarchical Mode-Based Index Modulation
|
eess.SP cs.IT math.IT
|
The orthogonal time frequency space with index modulation (OTFS-IM) offers
flexible tradeoffs between spectral efficiency (SE) and bit error rate (BER) in
doubly selective fading channels. While OTFS-IM schemes demonstrated such
potential, a persistent challenge lies in the detection complexity. To address
this problem, we propose the hierarchical mode-based index modulation (HMIM).
HMIM introduces a novel approach to modulate information bits by IM patterns,
significantly simplifying the complexity of maximum a posteriori (MAP)
estimation with Gaussian noise. Further, we incorporate HMIM with the recently
proposed orthogonal delay-Doppler division multiplexing (ODDM) modulation,
namely ODDM-HMIM, to exploit the full diversity of the delay-Doppler (DD)
channel. The BER performance of ODDM-HMIM is analyzed considering a maximum
likelihood (ML) detector. Our numerical results reveal that, with the same SE,
HMIM can outperform conventional IM in terms of both BER and computational
complexity. In addition, we propose a successive interference
cancellation-based minimum mean square error (SIC-MMSE) detector for ODDM-HMIM,
which enables low-complexity detection with large frame sizes.
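For readers unfamiliar with index modulation, the conventional-IM baseline that HMIM improves on maps blocks of bits to patterns of active transmission slots; a minimal lookup-table sketch (generic IM only; HMIM's hierarchical arrangement of these patterns is not reproduced here):

```python
import math
from itertools import combinations

def im_index_map(n, k):
    """Generic index-modulation lookup: map each value of the index bits to
    one pattern of k active slots out of n. With C(n, k) patterns available,
    floor(log2(C(n, k))) bits can be carried by the pattern choice alone."""
    patterns = list(combinations(range(n), k))
    n_bits = int(math.floor(math.log2(len(patterns))))
    return {i: patterns[i] for i in range(2 ** n_bits)}, n_bits

table, bits = im_index_map(4, 2)   # C(4, 2) = 6 patterns -> 2 index bits
active = table[0b10]               # slots activated for index bits "10"
```

Detecting which pattern was sent is what drives the MAP complexity that HMIM's hierarchical construction is designed to reduce.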
|