| id | title | categories | abstract |
|---|---|---|---|
2501.01507 | Transfer Learning Analysis of Variational Quantum Circuits | quant-ph cs.AI cs.LG | This work analyzes transfer learning of the Variational Quantum Circuit
(VQC). Our framework begins with a pretrained VQC configured in one domain and
calculates the transition of 1-parameter unitary subgroups required for a new
domain. A formalism is established to investigate the adaptability and
capability of a VQC under the analysis of loss bounds. Our theory observes
knowledge transfer in VQCs and provides a heuristic interpretation for the
mechanism. An analytical fine-tuning method is derived to attain the optimal
transition for adaptations of similar domains.
|
2501.01508 | Garbage in Garbage out: Impacts of data quality on criminal network
intervention | physics.soc-ph cs.SI | Criminal networks such as human trafficking rings are threats to the rule of
law, democracy and public safety in our global society. Network science
provides invaluable tools to identify key players and design interventions for
Law Enforcement Agencies (LEAs), e.g., to dismantle their organisation.
However, poor data quality and the adaptiveness of criminal networks through
self-organization make effective disruption extremely challenging. Although
there exists a large body of work building and applying network scientific
tools to attack criminal networks, these works often implicitly assume that the
network measurements are accurate and complete. Moreover, there is thus far no
comprehensive understanding of the impacts of data quality on the downstream
effectiveness of interventions. This work investigates the relationship between
data quality and intervention effectiveness based on classical graph theoretic
and machine learning-based approaches. Decentralization emerges as a major
factor in network robustness, particularly under conditions of incomplete data,
which renders attack strategies largely ineffective. Moreover, the robustness
of centralized networks can be boosted using simple heuristics, rendering targeted
attacks even less feasible. Consequently, we advocate for a more cautious
application of network science in disrupting criminal networks, the continuous
development of an interoperable intelligence ecosystem, and the creation of
novel network inference techniques to address data quality challenges.
|
2501.01509 | AI-Enabled Operations at Fermi Complex: Multivariate Time Series
Prediction for Outage Prediction and Diagnosis | cs.LG cs.AI cs.ET eess.SP | The Main Control Room of the Fermilab accelerator complex continuously
gathers extensive time-series data from thousands of sensors monitoring the
beam. However, unplanned events such as trips or voltage fluctuations often
result in beam outages, causing operational downtime. This downtime not only
consumes operator effort in diagnosing and addressing the issue but also leads
to unnecessary energy consumption by idle machines awaiting beam restoration.
The current threshold-based alarm system is reactive and faces challenges
including frequent false alarms and inconsistent outage-cause labeling. To
address these limitations, we propose an AI-enabled framework that leverages
predictive analytics and automated labeling. Using data from $2,703$ Linac
devices and $80$ operator-labeled outages, we evaluate state-of-the-art deep
learning architectures, including recurrent, attention-based, and linear
models, for beam outage prediction. Additionally, we assess a Random
Forest-based labeling system for providing consistent, confidence-scored outage
annotations. Our findings highlight the strengths and weaknesses of these
architectures for beam outage prediction and identify critical gaps that must
be addressed to fully harness AI for transitioning downtime handling from
reactive to predictive, ultimately reducing downtime and improving
decision-making in accelerator management.
|
2501.01510 | Explainable Brain Age Gap Prediction in Neurodegenerative Conditions
using coVariance Neural Networks | cs.LG eess.SP q-bio.QM | Brain age is the estimate of biological age derived from neuroimaging
datasets using machine learning algorithms. An increasing \textit{brain age gap},
characterized by an elevated brain age relative to chronological age, can
reflect increased vulnerability to neurodegeneration and cognitive decline.
Hence, brain age gap is a promising biomarker for monitoring brain health.
However, black-box machine learning approaches to brain age gap prediction have
limited practical utility. Recent studies on coVariance neural networks (VNN)
have proposed a relatively transparent deep learning pipeline for neuroimaging
data analyses, which possesses two key features: (i) inherent
\textit{anatomical interpretability} of derived biomarkers; and (ii) a
methodologically interpretable perspective based on \textit{linkage with
eigenvectors of anatomic covariance matrix}. In this paper, we apply the
VNN-based approach to study brain age gap using cortical thickness features for
various prevalent neurodegenerative conditions. Our results reveal distinct
anatomic patterns for brain age gap in Alzheimer's disease, frontotemporal
dementia, and atypical Parkinsonian disorders. Furthermore, we demonstrate that
the distinct anatomic patterns of brain age gap are linked with the differences
in how VNN leverages the eigenspectrum of the anatomic covariance matrix, thus
lending explainability to the reported results.
|
2501.01511 | TreeLUT: An Efficient Alternative to Deep Neural Networks for Inference
Acceleration Using Gradient Boosted Decision Trees | cs.LG cs.AR | Accelerating machine learning inference has been an active research area in
recent years. In this context, field-programmable gate arrays (FPGAs) have
demonstrated compelling performance by providing massive parallelism in deep
neural networks (DNNs). Neural networks (NNs) are computationally intensive
during inference, as they require massive amounts of multiplication and
addition, which makes their implementations costly. Numerous studies have
recently addressed this challenge to some extent using a combination of
sparsity induction, quantization, and transformation of neurons or sub-networks
into lookup tables (LUTs) on FPGAs. Gradient boosted decision trees (GBDTs) are
a high-accuracy alternative to DNNs in a wide range of regression and
classification tasks, particularly for tabular datasets. The basic building
block of GBDTs is a decision tree, which resembles the structure of binary
decision diagrams. FPGA design flows are heavily optimized to implement such a
structure efficiently. In addition to decision trees, GBDTs perform simple
operations during inference, including comparison and addition. We present
TreeLUT as an open-source tool for implementing GBDTs using an efficient
quantization scheme, hardware architecture, and pipelining strategy. It
primarily utilizes LUTs with no BRAMs or DSPs on FPGAs, resulting in high
efficiency. We show the effectiveness of TreeLUT using multiple classification
datasets, commonly used to evaluate ultra-low area and latency architectures.
Using these benchmarks, we compare our implementation results with existing DNN
and GBDT methods, such as DWN, PolyLUT-Add, NeuraLUT, LogicNets, FINN, hls4ml,
and others. Our results show that TreeLUT significantly improves hardware
utilization, latency, and throughput at competitive accuracy compared to
previous works.
|
2501.01515 | DiagrammaticLearning: A Graphical Language for Compositional Training
Regimes | cs.LG cs.AI cs.PL math.CT | Motivated by deep learning regimes with multiple interacting yet distinct
model components, we introduce learning diagrams, graphical depictions of
training setups that capture parameterized learning as data rather than code. A
learning diagram compiles to a unique loss function on which component models
are trained. The result of training on this loss is a collection of models
whose predictions ``agree" with one another. We show that a number of popular
learning setups such as few-shot multi-task learning, knowledge distillation,
and multi-modal learning can be depicted as learning diagrams. We further
implement learning diagrams in a library that allows users to build diagrams of
PyTorch and Flux.jl models. By implementing some classic machine learning use
cases, we demonstrate how learning diagrams allow practitioners to build
complicated models as compositions of smaller components, identify
relationships between workflows, and manipulate models during or after
training. Leveraging a category theoretic framework, we introduce a rigorous
semantics for learning diagrams that puts such operations on a firm
mathematical foundation.
|
2501.01516 | Improving Robustness Estimates in Natural Language Explainable AI through
Synonymity Weighted Similarity Measures | cs.LG cs.AI cs.CL | Explainable AI (XAI) has seen a surge in recent interest with the
proliferation of powerful but intractable black-box models. Moreover, XAI has
come under fire for techniques that may not offer reliable explanations. As
many of the methods in XAI are themselves models, adversarial examples have
been prominent in the literature surrounding the effectiveness of XAI, with the
objective of these examples being to alter the explanation while maintaining
the output of the original model. For explanations in natural language, it is
natural to use measures found in the domain of information retrieval for use
with ranked lists to guide the adversarial XAI process. We show that the
standard implementations of these measures are poorly suited for comparing
explanations in adversarial XAI and amend them using information that is
otherwise discarded: the synonymity of perturbed words. This synonymity weighting
produces more accurate estimates of the actual weakness of XAI methods to
adversarial examples.
|
2501.01525 | Transfer Neyman-Pearson Algorithm for Outlier Detection | cs.LG stat.ML | We consider the problem of transfer learning in outlier detection where
target abnormal data is rare. While transfer learning has been considered
extensively in traditional balanced classification, the problem of transfer in
outlier detection and more generally in imbalanced classification settings has
received less attention. We propose a general meta-algorithm which is shown
theoretically to yield strong guarantees w.r.t. a range of changes in the
abnormal distribution, while at the same time remaining amenable to practical
implementation. We then investigate different instantiations of this general
meta-algorithm, e.g., based on multi-layer neural networks, and show
empirically that they outperform natural extensions of transfer methods for
traditional balanced classification settings (which are the only solutions
available at the moment).
|
2501.01529 | SAFER: Sharpness Aware layer-selective Finetuning for Enhanced
Robustness in vision transformers | cs.CV | Vision transformers (ViTs) have become essential backbones in advanced
computer vision applications and multi-modal foundation models. Despite their
strengths, ViTs remain vulnerable to adversarial perturbations, comparable to
or even exceeding the vulnerability of convolutional neural networks (CNNs).
Furthermore, the large parameter count and complex architecture of ViTs make
them particularly prone to adversarial overfitting, often compromising both
clean and adversarial accuracy.
This paper mitigates adversarial overfitting in ViTs through a novel,
layer-selective fine-tuning approach: SAFER. Instead of optimizing the entire
model, we identify and selectively fine-tune a small subset of layers most
susceptible to overfitting, applying sharpness-aware minimization to these
layers while freezing the rest of the model. Our method consistently enhances
both clean and adversarial accuracy over baseline approaches. Typical
improvements are around 5%, with some cases achieving gains as high as 20%
across various ViT architectures and datasets.
|
2501.01531 | A Global Games-Inspired Approach to Multi-Robot Task Allocation for
Heterogeneous Teams | cs.RO cs.MA cs.SY eess.SY | In this article we propose a game-theoretic approach to the multi-robot task
allocation problem using the framework of global games. Each task is associated
with a global signal, a real-valued number that captures the task execution
progress and/or urgency. We propose a linear objective function for each robot
in the system, which, for each task, increases with the global signal and decreases
with the number of assigned robots. We provide conditions on the objective
function hyperparameters to induce a mixed Nash equilibrium, i.e., solutions
in which not all robots are assigned to a single task. The resulting algorithm
only requires the inversion of a matrix to determine a probability distribution
over the robot assignments. We demonstrate the performance of our algorithm in
simulation and provide direction for applications and future work.
|
2501.01535 | A Metasemantic-Metapragmatic Framework for Taxonomizing Multimodal
Communicative Alignment | cs.HC cs.AI cs.CL cs.CY | Drawing on contemporary pragmatist philosophy and linguistic theories on
cognition, meaning, and communication, this paper presents a dynamic,
metasemantic-metapragmatic taxonomy for grounding and conceptualizing
human-like multimodal communicative alignment. The framework is rooted in
contemporary developments of the three basic communicative capacities initially
identified by American logician and pragmatist philosopher Charles Sanders
Peirce: iconic (sensory and perceptual qualities), indexical (contextual and
sociocultural associations), and rule-like (symbolic and intuitive reasoning).
Expanding on these developments, I introduce the concept of indexical
contextualization and propose the principle of "contextualization
directionality" for characterizing the crucial metapragmatic capacity for
maintaining, navigating, or transitioning between semantic and pragmatic modes
of multimodal communication. I contend that current cognitive-social
computational and engineering methodologies disproportionately emphasize the
semantic/metasemantic domain, overlooking the pivotal role of metapragmatic
indexicality in traversing the semantic-pragmatic spectrum of communication.
The framework's broader implications for intentionality, identity, affect, and
ethics in within-modal and cross-modal human-machine alignment are also
discussed.
|
2501.01539 | In Search of a Lost Metric: Human Empowerment as a Pillar of Socially
Conscious Navigation | cs.RO cs.AI cs.HC | In social robot navigation, traditional metrics like proxemics and behavior
naturalness emphasize human comfort and adherence to social norms but often
fail to capture an agent's autonomy and adaptability in dynamic environments.
This paper introduces human empowerment, an information-theoretic concept that
measures a human's ability to influence their future states and observe those
changes, as a complementary metric for evaluating social compliance. This
metric reveals how robot navigation policies can indirectly impact human
empowerment. We present a framework that integrates human empowerment into the
evaluation of social performance in navigation tasks. Through numerical
simulations, we demonstrate that human empowerment as a metric not only aligns
with intuitive social behavior, but also shows statistically significant
differences across various robot navigation policies. These results provide a
deeper understanding of how different policies affect social compliance,
highlighting the potential of human empowerment as a complementary metric for
future research in social navigation.
|
2501.01540 | BoxingGym: Benchmarking Progress in Automated Experimental Design and
Model Discovery | cs.LG cs.AI | Understanding the world and explaining it with scientific theories is a
central aspiration of artificial intelligence research. Proposing theories,
designing experiments to test them, and then revising them based on data are
fundamental to scientific discovery. Despite the significant promise of
LLM-based scientific agents, no benchmarks systematically test LLMs' ability to
propose scientific models, collect experimental data, and revise them in light
of new data. We introduce BoxingGym, a benchmark with 10 environments for
systematically evaluating both experimental design (e.g. collecting data to
test a scientific theory) and model discovery (e.g. proposing and revising
scientific theories). To enable tractable and quantitative evaluation, we
implement each environment as a generative probabilistic model with which a
scientific agent can run interactive experiments. These probabilistic models
are drawn from various real-world scientific domains ranging from psychology to
ecology. To quantitatively evaluate a scientific agent's ability to collect
informative experimental data, we compute the expected information gain (EIG),
an information-theoretic quantity which measures how much an experiment reduces
uncertainty about the parameters of a generative model. A good scientific
theory is a concise and predictive explanation. Therefore, to quantitatively
evaluate model discovery, we ask a scientific agent to explain their model and
then assess whether this explanation enables another scientific agent to make
reliable predictions about this environment. In addition to this
explanation-based evaluation, we compute standard model evaluation metrics such
as prediction errors. We find that current LLMs, such as GPT-4o, struggle with
both experimental design and model discovery. We find that augmenting the
LLM-based agent with an explicit statistical model does not reliably improve
these results.
|
2501.01544 | Many of Your DPOs are Secretly One: Attempting Unification Through
Mutual Information | cs.LG cs.CL stat.ML | Post-alignment of large language models (LLMs) is critical in improving their
utility, safety, and alignment with human intentions. Direct preference
optimisation (DPO) has become one of the most widely used algorithms for
achieving this alignment, given its ability to optimise models based on human
feedback directly. However, the vast number of DPO variants in the literature
has made it increasingly difficult for researchers to navigate and fully grasp
the connections between these approaches. This paper introduces a unifying
framework inspired by mutual information, which proposes a new loss function
with flexible priors. By carefully specifying these priors, we demonstrate that
many existing algorithms, such as SimPO, TDPO, SparsePO, and others, can be
derived from our framework. This unification offers a clearer and more
structured approach, allowing researchers to understand the relationships
between different DPO variants better. We aim to simplify the landscape of DPO
algorithms, making it easier for the research community to gain insights and
foster further advancements in LLM alignment. Ultimately, we hope our framework
can be a foundation for developing more robust and interpretable alignment
techniques.
|
2501.01548 | Task-Driven Fixation Network: An Efficient Architecture with Fixation
Selection | cs.CV | This paper presents a novel neural network architecture featuring automatic
fixation point selection, designed to efficiently address complex tasks with
reduced network size and computational overhead. The proposed model consists
of: a low-resolution channel that captures low-resolution global features from
input images; a high-resolution channel that sequentially extracts localized
high-resolution features; and a hybrid encoding module that integrates the
features from both channels. A defining characteristic of the hybrid encoding
module is the inclusion of a fixation point generator, which dynamically
produces fixation points, enabling the high-resolution channel to focus on
regions of interest. The fixation points are generated in a task-driven manner,
enabling the automatic selection of regions of interest. This approach avoids
exhaustive high-resolution analysis of the entire image, maintaining task
performance and computational efficiency.
|
2501.01555 | Indoor Position and Attitude Tracking with SO(3) Manifold | eess.SP cs.RO | Driven by technological breakthroughs, indoor tracking and localization have
gained importance in various applications including the Internet of Things
(IoT), robotics, and unmanned aerial vehicles (UAVs). To tackle some of the
challenges associated with indoor tracking, this study explores the potential
benefits of incorporating the SO(3) manifold structure of the rotation matrix.
The goal is to enhance the 3D tracking performance of the extended Kalman
filter (EKF) and unscented Kalman filter (UKF) of a moving target within an
indoor environment. Our results demonstrate that the proposed extended Kalman
filter with Riemannian (EKFRie) and unscented Kalman filter with Riemannian
(UKFRie) algorithms consistently outperform the conventional EKF and UKF in
terms of position and orientation accuracy. While the conventional EKF and UKF
achieved root mean square errors (RMSE) of 0.36m and 0.43m, respectively, for a
long stair path, the proposed EKFRie and UKFRie algorithms achieved lower
RMSEs of 0.21m and 0.10m. Our results also show that the proposed algorithms
outperform the EKF and UKF algorithms based on the isosceles triangle
manifold: while the latter achieved RMSEs of 7.26cm and 7.27cm, respectively,
our proposed algorithms achieved RMSEs of 6.73cm and 6.16cm. These results
demonstrate the enhanced performance of the proposed algorithms.
|
2501.01556 | Extended Information Geometry: Large Deviation Theory, Statistical
Thermodynamics, and Empirical Counting Frequencies | cs.IT math.IT | Combinatorics, probabilities, and measurements are fundamental to
understanding information. This work explores how the application of large
deviation theory (LDT) in counting phenomena leads to the emergence of various
entropy functions, including Shannon's entropy, mutual information, and
relative and conditional entropies. In terms of these functions, we reveal an
inherent geometrical structure through operations, including contractions,
lift, change of basis, and projections. Legendre-Fenchel (LF) transform, which
is central to both LDT and Gibbs' method of thermodynamics, offers a novel
energetic description of data. The manifold of empirical mean values of
statistical data ad infinitum has a parametrization using LF conjugates w.r.t.
an entropy function; this gives rise to the additivity known in statistical
thermodynamic energetics. This work extends current information geometry to
information projection as defined through conditional expectations in
Kolmogorov's probability theory.
|
2501.01557 | Click-Calib: A Robust Extrinsic Calibration Method for Surround-View
Systems | cs.CV | Surround-View System (SVS) is an essential component in Advanced Driver
Assistance System (ADAS) and requires precise calibrations. However,
conventional offline extrinsic calibration methods are cumbersome and
time-consuming as they rely heavily on physical patterns. Additionally, these
methods primarily focus on short-range areas surrounding the vehicle, resulting
in lower calibration quality in more distant zones. To address these
limitations, we propose Click-Calib, a pattern-free approach for offline SVS
extrinsic calibration. Without requiring any special setup, the user only needs
to click a few keypoints on the ground in natural scenes. Unlike other offline
calibration approaches, Click-Calib optimizes camera poses over a wide range by
minimizing reprojection distance errors of keypoints, thereby achieving
accurate calibrations at both short and long distances. Furthermore,
Click-Calib supports both single-frame and multiple-frame modes, with the
latter offering even better results. Evaluations on our in-house dataset and
the public WoodScape dataset demonstrate its superior accuracy and robustness
compared to baseline methods. Code is available at
https://github.com/lwangvaleo/click_calib.
|
2501.01558 | Predicting the Performance of Black-box LLMs through Self-Queries | cs.LG cs.CL | As large language models (LLMs) are increasingly relied on in AI systems,
predicting when they make mistakes is crucial. While a great deal of work in
the field uses internal representations to interpret model behavior, these
representations are inaccessible when given solely black-box access through an
API. In this paper, we extract features of LLMs in a black-box manner by using
follow-up prompts and taking the probabilities of different responses as
representations to train reliable predictors of model behavior. We demonstrate
that training a linear model on these low-dimensional representations produces
reliable and generalizable predictors of model performance at the instance
level (e.g., if a particular generation correctly answers a question).
Remarkably, these can often outperform white-box linear predictors that operate
over a model's hidden state or the full distribution over its vocabulary. In
addition, we demonstrate that these extracted features can be used to evaluate
more nuanced aspects of a language model's state. For instance, they can be
used to distinguish between a clean version of GPT-4o-mini and a version that
has been influenced via an adversarial system prompt that answers
question-answering tasks incorrectly or introduces bugs into generated code.
Furthermore, they can reliably distinguish between different model
architectures and sizes, enabling the detection of misrepresented models
provided through an API (e.g., identifying if GPT-3.5 is supplied instead of
GPT-4o-mini).
|
2501.01559 | K-ARC: Adaptive Robot Coordination for Multi-Robot Kinodynamic Planning | cs.RO cs.MA | This work presents Kinodynamic Adaptive Robot Coordination (K-ARC), a novel
algorithm for multi-robot kinodynamic planning. Our experimental results show
the capability of K-ARC to plan for up to 32 planar mobile robots, while
achieving up to an order of magnitude of speed-up compared to previous methods
in various scenarios. K-ARC is able to achieve this due to its two main
properties. First, K-ARC constructs its solution iteratively by planning in
segments, where initial kinodynamic paths are found through optimization-based
approaches and the inter-robot conflicts are resolved through sampling-based
approaches. The interleaved use of sampling-based and optimization-based
approaches allows K-ARC to leverage the strengths of both in
different sections of the planning process where one is better suited than the
other, while previous methods tend to emphasize one over the other. Second,
K-ARC builds on a previously proposed multi-robot motion planning framework,
Adaptive Robot Coordination (ARC), and inherits its strength of focusing on
coordination between robots only when needed, saving computation efforts. We
show how the combination of these two properties allows K-ARC to achieve
overall better performance in our simulated experiments with increasing numbers
of robots, increasing degrees of problem difficulties, and increasing
complexities of robot dynamics.
|
2501.01564 | Semialgebraic Neural Networks: From roots to representations | cs.LG cs.NA cs.NE math.NA | Many numerical algorithms in scientific computing -- particularly in areas
like numerical linear algebra, PDE simulation, and inverse problems -- produce
outputs that can be represented by semialgebraic functions; that is, the graph
of the computed function can be described by finitely many polynomial
equalities and inequalities. In this work, we introduce Semialgebraic Neural
Networks (SANNs), a neural network architecture capable of representing any
bounded semialgebraic function, and computing such functions up to the accuracy
of a numerical ODE solver chosen by the programmer. Conceptually, we encode the
graph of the learned function as the kernel of a piecewise polynomial selected
from a class of functions whose roots can be evaluated using a particular
homotopy continuation method. We show by construction that the SANN
architecture is able to execute this continuation method, thus evaluating the
learned semialgebraic function. Furthermore, the architecture can exactly
represent even discontinuous semialgebraic functions by executing a
continuation method on each connected component of the target function. Lastly,
we provide example applications of these networks and show they can be trained
with traditional deep-learning techniques.
|
2501.01568 | Interruption Handling for Conversational Robots | cs.HC cs.RO | Interruptions, a fundamental component of human communication, can enhance
the dynamism and effectiveness of conversations, but only when effectively
managed by all parties involved. Despite advancements in robotic systems,
state-of-the-art systems still have limited capabilities in handling
user-initiated interruptions in real-time. Prior research has primarily focused
on post hoc analysis of interruptions. To address this gap, we present a system
that detects user-initiated interruptions and manages them in real-time based
on the interrupter's intent (i.e., cooperative agreement, cooperative
assistance, cooperative clarification, or disruptive interruption). The system
was designed based on interaction patterns identified from human-human
interaction data. We integrated our system into an LLM-powered social robot and
validated its effectiveness through a timed decision-making task and a
contentious discussion task with 21 participants. Our system successfully
handled 93.69% (n=104/111) of user-initiated interruptions. We discuss our
learnings and their implications for designing interruption-handling behaviors
in conversational robots.
|
2501.01576 | Constructing and explaining machine learning models for chemistry:
example of the exploration and design of boron-based Lewis acids | physics.chem-ph cs.AI | The integration of machine learning (ML) into chemistry offers transformative
potential in the design of molecules with targeted properties. However, the
focus has often been on creating highly efficient predictive models, sometimes
at the expense of interpretability. In this study, we leverage explainable AI
techniques to explore the rational design of boron-based Lewis acids, which
play a pivotal role in organic reactions due to their electron-accepting
properties. Using Fluoride Ion Affinity as a proxy for Lewis acidity, we
developed interpretable ML models based on chemically meaningful descriptors,
including ab initio computed features and substituent-based parameters derived
from the Hammett linear free-energy relationship. By constraining the chemical
space to well-defined molecular scaffolds, we achieved highly accurate
predictions (mean absolute error < 6 kJ/mol), surpassing conventional black-box
deep learning models in low-data regimes. Interpretability analyses of the
models shed light on the origin of Lewis acidity in these compounds and
identified actionable levers to modulate it through the nature and positioning
of substituents on the molecular scaffold. This work bridges ML and the chemist's
way of thinking, demonstrating how explainable models can inspire molecular
design and enhance scientific understanding of chemical reactivity.
|
2501.01579 | Unsupervised learning for anticipating critical transitions | nlin.CD cs.LG | For anticipating critical transitions in complex dynamical systems, the
recent approach of parameter-driven reservoir computing requires explicit
knowledge of the bifurcation parameter. We articulate a framework combining a
variational autoencoder (VAE) and reservoir computing to address this
challenge. In particular, the driving factor is detected from time series using
the VAE in an unsupervised-learning fashion and the extracted information is
then used as the parameter input to the reservoir computer for anticipating the
critical transition. We demonstrate the power of the unsupervised learning
scheme using prototypical dynamical systems including the spatiotemporal
Kuramoto-Sivashinsky system. The scheme can also be extended to scenarios where
the target system is driven by several independent parameters or with partial
state observations.
|
2501.01584 | Stackelberg Game Based Performance Optimization in Digital Twin Assisted
Federated Learning over NOMA Networks | cs.LG cs.CR cs.GT cs.NI | Despite the advantage of preserving data privacy, federated learning (FL)
still suffers from the straggler issue due to the limited computing resources
of distributed clients and the unreliable wireless communication environment.
By effectively imitating the distributed resources, digital twin (DT) shows
great potential in alleviating this issue. In this paper, we leverage DT in the
FL framework over non-orthogonal multiple access (NOMA) network to assist FL
training process, considering malicious attacks on model updates from clients.
A reputation-based client selection scheme is proposed, which accounts for
client heterogeneity in multiple aspects and effectively mitigates the risks of
poisoning attacks in FL systems. To minimize the total latency and energy
consumption in the proposed system, we then formulate a Stackelberg game by
considering clients and the server as the leader and the follower,
respectively. Specifically, the leader aims to minimize the energy consumption
while the objective of the follower is to minimize the total latency during FL
training. The Stackelberg equilibrium is achieved to obtain the optimal
solutions. We first derive the strategies for the follower-level problem and
include them in the leader-level problem which is then solved via problem
decomposition. Simulation results verify the superior performance of the
proposed scheme.
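The leader-follower structure described above can be illustrated with a toy backward-induction sketch. The quadratic costs here are hypothetical stand-ins, not the paper's actual energy/latency models: the leader anticipates the follower's best response and optimizes over it.

```python
import numpy as np

def follower_best_response(p):
    # follower minimizes a latency-like cost (q - p/2)^2  =>  q*(p) = p/2
    return p / 2.0

def leader_cost(p):
    q = follower_best_response(p)        # leader anticipates the follower
    return (p - 1.0) ** 2 + 0.5 * q ** 2  # energy-like cost

# backward induction: substitute q*(p), then optimize the leader's choice
ps = np.linspace(0.0, 2.0, 2001)
p_star = ps[np.argmin([leader_cost(p) for p in ps])]
q_star = follower_best_response(p_star)
```

Analytically the leader's cost is $(p-1)^2 + p^2/8$, minimized at $p^* = 8/9$; the grid search recovers this Stackelberg equilibrium.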
|
2501.01586 | GRAMC: General-purpose and reconfigurable analog matrix computing
architecture | cs.AR cs.ET cs.SY eess.SY | In-memory analog matrix computing (AMC) with resistive random-access memory
(RRAM) represents a highly promising solution that solves matrix problems in
one step. However, existing AMC circuits each adopt a specific connection
topology to implement a single computing function and thus lack the
universality of a matrix processor. In this work, we design a reconfigurable AMC macro for
general-purpose matrix computations, which is achieved by configuring proper
connections between memory array and amplifier circuits. Based on this macro,
we develop a hybrid system that incorporates an on-chip write-verify scheme and
digital functional modules, to deliver a general-purpose AMC solver for various
applications.
|
2501.01588 | (WhyPHI) Fine-Tuning PHI-3 for Multiple-Choice Question Answering:
Methodology, Results, and Challenges | cs.CL cs.AI | Large Language Models (LLMs) have become essential tools across various
domains due to their impressive capabilities in understanding and generating
human-like text. The ability to accurately answer multiple-choice questions
(MCQs) holds significant value in education, particularly in automated tutoring
systems and assessment platforms. However, adapting LLMs to handle MCQ tasks
effectively remains challenging due to hallucinations and unclear prompts.
This work explores the potential of Microsoft's PHI-3\cite{Abdin2024}, a
compact yet efficient LLM, for MCQ answering. Our contributions include
fine-tuning the model on the TruthfulQA dataset, designing optimized prompts to
enhance model performance, and evaluating using perplexity and traditional
metrics like accuracy and F1 score. Results show a remarkable improvement in
PHI-3.5's MCQ handling post-fine-tuning, with perplexity decreasing from 4.68
to 2.27, and accuracy rising from 62\% to 90.8\%. This research underlines the
importance of efficient models in adaptive learning systems and educational
assessments, paving the way for broader integration into the classroom,
particularly in fields like test preparation, student feedback, and
personalized learning.
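The perplexity metric reported above is standard: the exponential of the average negative log-likelihood per token. A minimal sketch, not tied to PHI-3's tokenizer:

```python
import numpy as np

def perplexity(token_logprobs):
    # perplexity = exp(average negative log-likelihood per token); lower is better
    return float(np.exp(-np.mean(token_logprobs)))

# sanity check: assigning probability 1/4 to every token gives perplexity 4
uniform_ppl = perplexity(np.log(np.full(8, 0.25)))
```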
|
2501.01589 | D$^3$-Human: Dynamic Disentangled Digital Human from Monocular Video | cs.CV cs.GR | We introduce D$^3$-Human, a method for reconstructing Dynamic Disentangled
Digital Human geometry from monocular videos. Past monocular video human
reconstruction primarily focuses on reconstructing undecoupled clothed human
bodies or only reconstructing clothing, making it difficult to apply directly
in applications such as animation production. The challenge in reconstructing
decoupled clothing and body lies in the occlusion caused by clothing over the
body. To this end, the details of the visible area and the plausibility of the
invisible area must be ensured during the reconstruction process. Our proposed
method combines explicit and implicit representations to model the decoupled
clothed human body, leveraging the robustness of explicit representations and
the flexibility of implicit representations. Specifically, we reconstruct the
visible region as SDF and propose a novel human manifold signed distance field
(hmSDF) to segment the visible clothing and visible body, and then merge the
visible and invisible body. Extensive experimental results demonstrate that,
compared with existing reconstruction schemes, D$^3$-Human can achieve
high-quality decoupled reconstruction of the human body wearing different
clothing, and can be directly applied to clothing transfer and animation.
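Merging regions represented as SDFs typically uses the standard union operation, the pointwise minimum; a tiny numpy illustration (the paper's hmSDF segmentation is more involved than this):

```python
import numpy as np

# two circle SDFs on a 2-D grid; negative inside, positive outside
ys, xs = np.mgrid[-2:2:101j, -2:2:101j]
d_body  = np.hypot(xs + 0.5, ys) - 1.0   # "body" circle
d_cloth = np.hypot(xs - 0.5, ys) - 1.0   # "clothing" circle

d_merged = np.minimum(d_body, d_cloth)   # SDF union: pointwise minimum
```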
|
2501.01591 | Multivariate Time Series Anomaly Detection using DiffGAN Model | cs.LG math.ST stat.TH | In recent years, some researchers have applied diffusion models to
multivariate time series anomaly detection. The partial diffusion strategy,
which depends on the diffusion steps, is commonly used for anomaly detection in
these models. However, different diffusion steps have an impact on the
reconstruction of the original data, thereby impacting the effectiveness of
anomaly detection. To address this issue, we propose a novel method named
DiffGAN, which adds a generative adversarial network component to the denoiser
of diffusion model. This addition allows for the simultaneous generation of
noisy data and prediction of diffusion steps. Experimental results demonstrate
that DiffGAN achieves superior anomaly detection performance compared to
multiple state-of-the-art reconstruction models.
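The sensitivity to the diffusion step that motivates DiffGAN can be seen in the standard closed-form forward diffusion: larger t retains less of the original signal. A sketch with a hypothetical linear noise schedule (the paper's denoiser and GAN components are not reproduced here):

```python
import numpy as np

def diffuse(x0, t, betas, rng):
    # closed-form forward diffusion: x_t = sqrt(abar_t)*x0 + sqrt(1-abar_t)*eps
    abar = np.cumprod(1.0 - betas)[t]
    eps = rng.normal(size=x0.shape)
    return np.sqrt(abar) * x0 + np.sqrt(1.0 - abar) * eps, abar

rng = np.random.default_rng(0)
betas = np.linspace(1e-4, 0.02, 100)       # hypothetical linear schedule
x0 = np.sin(np.linspace(0.0, 6.28, 64))    # toy 1-D "time series"
xt_early, abar_early = diffuse(x0, 5, betas, rng)
xt_late, abar_late = diffuse(x0, 95, betas, rng)
```

Since abar_t shrinks monotonically, reconstructions from late steps start from far noisier inputs, which is exactly why the choice of diffusion step affects anomaly detection quality.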
|
2501.01593 | BLAST: A Stealthy Backdoor Leverage Attack against Cooperative
Multi-Agent Deep Reinforcement Learning based Systems | cs.AI cs.CR cs.LG | Recent studies have shown that cooperative multi-agent deep reinforcement
learning (c-MADRL) is under the threat of backdoor attacks. Once a backdoor
trigger is observed, it will perform malicious actions leading to failures or
malicious goals. However, existing backdoor attacks suffer from several issues,
e.g., instant trigger patterns lack stealthiness, the backdoor is trained or
activated by an additional network, or all agents are backdoored. To this end,
in this paper, we propose a novel backdoor leverage attack against c-MADRL,
BLAST, which attacks the entire multi-agent team by embedding the backdoor only
in a single agent. Firstly, we introduce adversary spatiotemporal behavior
patterns as the backdoor trigger rather than manually injected fixed visual
patterns or instant status and control the period to perform malicious actions.
This method can guarantee the stealthiness and practicality of BLAST. Secondly,
we hack the original reward function of the backdoor agent via unilateral
guidance to inject BLAST, so as to achieve the \textit{leverage attack effect}
that can pry open the entire multi-agent system via a single backdoor agent. We
evaluate our BLAST against 3 classic c-MADRL algorithms (VDN, QMIX, and MAPPO)
in 2 popular c-MADRL environments (SMAC and Pursuit), and 2 existing defense
mechanisms. The experimental results demonstrate that BLAST can achieve a high
attack success rate while maintaining a low clean performance variance rate.
|
2501.01594 | PSYCHE: A Multi-faceted Patient Simulation Framework for Evaluation of
Psychiatric Assessment Conversational Agents | cs.CL cs.AI cs.LG | Recent advances in large language models (LLMs) have accelerated the
development of conversational agents capable of generating human-like
responses. Since psychiatric assessments typically involve complex
conversational interactions between psychiatrists and patients, there is
growing interest in developing LLM-based psychiatric assessment conversational
agents (PACAs) that aim to simulate the role of psychiatrists in clinical
evaluations. However, standardized methods for benchmarking the clinical
appropriateness of PACAs' interactions with patients remain underexplored.
Here, we propose PSYCHE, a novel framework designed to enable the 1) clinically
relevant, 2) ethically safe, 3) cost-efficient, and 4) quantitative evaluation
of PACAs. This is achieved by simulating psychiatric patients based on a
multi-faceted psychiatric construct that defines the simulated patients'
profiles, histories, and behaviors, which PACAs are expected to assess. We
validate the effectiveness of PSYCHE through a study with 10 board-certified
psychiatrists, supported by an in-depth analysis of the simulated patient
utterances.
|
2501.01595 | Adaptive Homophily Clustering: Structure Homophily Graph Learning with
Adaptive Filter for Hyperspectral Image | cs.CV | Hyperspectral image (HSI) clustering has been a fundamental but challenging
task with zero training labels. Currently, some deep graph clustering methods
have been successfully explored for HSI due to their outstanding performance in
effective spatial structural information encoding. Nevertheless, insufficient
structural information utilization, poor feature representation ability, and weak
graph update capability limit their performance. Thus, in this paper, a
homophily structure graph learning with an adaptive filter clustering method
(AHSGC) for HSI is proposed. Specifically, homogeneous region generation is
first developed for HSI processing and constructing the original graph.
Afterward, an adaptive filter graph encoder is designed to adaptively capture
the high and low frequency features on the graph for subsequence processing.
Then, a graph embedding clustering self-training decoder is developed with KL
Divergence, with which the pseudo-label is generated for network training.
Meanwhile, homophily-enhanced structure learning is introduced to update the
graph according to the clustering task, in which the orient correlation
estimation is adopted to estimate the node connection, and graph edge
sparsification is designed to adjust the edges in the graph dynamically.
Finally, a joint network optimization is introduced to achieve network
self-training and update the graph. The K-means is adopted to express the
latent features. Extensive experiments and repeated comparative analysis have
verified that our AHSGC achieves high clustering accuracy, low computational
complexity, and strong robustness. The code source will be available at
https://github.com/DY-HYX.
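The KL-divergence self-training decoder described above follows a common deep-clustering pattern. As an assumption (the paper's exact formulation isn't given in the abstract), here is the DEC-style sharpened target distribution typically used to generate pseudo-labels from soft cluster assignments:

```python
import numpy as np

def target_distribution(q):
    # DEC-style sharpened target: p_ij ∝ q_ij^2 / f_j, where f_j = Σ_i q_ij
    w = q ** 2 / q.sum(axis=0)
    return w / w.sum(axis=1, keepdims=True)

q = np.array([[0.7, 0.3],    # soft cluster assignments (rows sum to 1)
              [0.6, 0.4],
              [0.2, 0.8]])
p = target_distribution(q)
```

Training then minimizes KL(p || q), pulling each assignment toward its sharpened target.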
|
2501.01598 | Prism: Mining Task-aware Domains in Non-i.i.d. IMU Data for Flexible
User Perception | cs.AI cs.HC | A wide range of user perception applications leverage inertial measurement
unit (IMU) data for online prediction. However, restricted by the non-i.i.d.
nature of IMU data collected from mobile devices, most systems work well only
in a controlled setting (e.g., for a specific user in particular postures),
limiting application scenarios. Achieving uncontrolled online prediction on
mobile devices, referred to as the flexible user perception (FUP) problem, is
attractive but hard. In this paper, we propose a novel scheme, called Prism,
which can obtain high FUP accuracy on mobile devices. The core of Prism is to
discover task-aware domains embedded in IMU dataset, and to train a
domain-aware model on each identified domain. To this end, we design an
expectation-maximization (EM) algorithm to estimate latent domains with respect
to the specific downstream perception task. Finally, the best-fit model can be
automatically selected for use by comparing the test sample and all identified
domains in the feature space. We implement Prism on various mobile devices and
conduct extensive experiments. Results demonstrate that Prism can achieve the
best FUP performance with a low latency.
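The EM step described above can be sketched as a one-dimensional Gaussian mixture fit over latent domains; Prism's task-aware domain estimation is richer, so treat this as a generic illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
# synthetic 1-D features drawn from two latent "domains"
x = np.concatenate([rng.normal(-2, 0.5, 200), rng.normal(2, 0.5, 200)])

mu = np.array([-0.5, 0.5])          # initial domain means
var = np.array([1.0, 1.0])          # initial variances
pi = np.array([0.5, 0.5])           # initial mixing weights
for _ in range(50):
    # E-step: responsibility of each domain for each sample
    dens = np.exp(-(x[:, None] - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)
    r = pi * dens
    r /= r.sum(axis=1, keepdims=True)
    # M-step: re-estimate domain parameters from responsibilities
    nk = r.sum(axis=0)
    mu = (r * x[:, None]).sum(axis=0) / nk
    var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk
    pi = nk / len(x)
```

At test time, the analogue of Prism's model selection is comparing a new sample's likelihood under each estimated domain and picking the best-fit one.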
|
2501.01601 | Few-shot Implicit Function Generation via Equivariance | cs.CV cs.AI | Implicit Neural Representations (INRs) have emerged as a powerful framework
for representing continuous signals. However, generating diverse INR weights
remains challenging due to limited training data. We introduce Few-shot
Implicit Function Generation, a new problem setup that aims to generate diverse
yet functionally consistent INR weights from only a few examples. This is
challenging because even for the same signal, the optimal INRs can vary
significantly depending on their initializations. To tackle this, we propose
EquiGen, a framework that can generate new INRs from limited data. The core
idea is that functionally similar networks can be transformed into one another
through weight permutations, forming an equivariance group. By projecting these
weights into an equivariant latent space, we enable diverse generation within
these groups, even with few examples. EquiGen implements this through an
equivariant encoder trained via contrastive learning and smooth augmentation,
an equivariance-guided diffusion process, and controlled perturbations in the
equivariant subspace. Experiments on 2D image and 3D shape INR datasets
demonstrate that our approach effectively generates diverse INR weights while
preserving their functional properties in few-shot scenarios.
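The weight-permutation symmetry EquiGen builds on is easy to verify directly: permuting an MLP's hidden units (and permuting the adjacent weight matrices consistently) leaves the function unchanged. A minimal numpy check:

```python
import numpy as np

def mlp(x, W1, b1, W2, b2):
    return np.tanh(x @ W1 + b1) @ W2 + b2

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(3, 5)), rng.normal(size=5)
W2, b2 = rng.normal(size=(5, 2)), rng.normal(size=2)
x = rng.normal(size=(4, 3))

# permute the 5 hidden units: columns of W1 and b1, rows of W2
P = np.eye(5)[rng.permutation(5)]
W1p, b1p, W2p = W1 @ P, b1 @ P, P.T @ W2

out_equal = np.allclose(mlp(x, W1, b1, W2, b2), mlp(x, W1p, b1p, W2p, b2))
```

These permutations form the equivariance group within which EquiGen generates diverse yet functionally consistent weights.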
|
2501.01608 | Online Meta-Learning Channel Autoencoder for Dynamic End-to-end Physical
Layer Optimization | cs.LG eess.SP | Channel Autoencoders (CAEs) have shown significant potential in optimizing
the physical layer of a wireless communication system for a specific channel
through joint end-to-end training. However, the practical implementation of
CAEs faces several challenges, particularly in realistic and dynamic scenarios.
Channels in communication systems are dynamic and change with time. Still, most
proposed CAE designs assume stationary scenarios, meaning they are trained and
tested for only one channel realization without regard for the dynamic nature
of wireless communication systems. Moreover, conventional CAEs are designed
based on the assumption of having access to a large number of pilot signals,
which act as training samples in the context of CAEs. However, in real-world
applications, it is not feasible for a CAE operating in real-time to acquire
large amounts of training samples for each new channel realization. Hence, the
CAE has to be deployable in few-shot learning scenarios where only limited
training samples are available. Furthermore, most proposed conventional CAEs
lack fast adaptability to new channel realizations, which becomes more
pronounced when dealing with a limited number of pilots. To address these
challenges, this paper proposes the Online Meta Learning channel AE (OML-CAE)
framework for few-shot CAE scenarios with dynamic channels. The OML-CAE
framework enhances adaptability to varying channel conditions in an online
manner, allowing for dynamic adjustments in response to evolving communication
scenarios. Moreover, it can adapt to new channel conditions using only a few
pilots, drastically increasing pilot efficiency and making the CAE design
feasible in realistic scenarios.
|
2501.01611 | Google is all you need: Semi-Supervised Transfer Learning Strategy For
Light Multimodal Multi-Task Classification Model | cs.CV cs.AI | As the volume of digital image data grows, so does the importance of
effective image classification. This study introduces a robust multi-label
classification system designed to assign multiple labels to a single image,
addressing the complexity of images that may be associated with multiple
categories (ranging from 1 to 19, excluding 12). We propose a multi-modal
classifier that merges advanced image recognition algorithms with Natural
Language Processing (NLP) models, incorporating a fusion module to integrate
these distinct modalities. The purpose of integrating textual data is to
enhance the accuracy of label prediction by providing contextual understanding
that visual analysis alone cannot fully capture. Our proposed classification
model combines Convolutional Neural Networks (CNN) for image processing with
NLP techniques for analyzing textual descriptions (i.e., captions). This
approach includes rigorous training and validation phases, with each model
component verified and analyzed through ablation experiments. Preliminary
results demonstrate the classifier's accuracy and efficiency, highlighting its
potential as an automatic image-labeling system.
|
2501.01614 | Evaluation of Rail Decarbonization Alternatives: Framework and
Application | eess.SY cs.SY math.OC | The Northwestern University Freight Rail Infrastructure and Energy Network
Decarbonization (NUFRIEND) framework is a comprehensive industry-oriented tool
for simulating the deployment of new energy technologies including biofuels,
e-fuels, battery-electric, and hydrogen locomotives. By classifying fuel types
into two categories based on deployment requirements, the associated optimal
charging/fueling facility location and sizing problem are solved with a
five-step framework. Life cycle analyses (LCA) and techno-economic analyses
(TEA) are used to estimate carbon reduction, capital investments, cost of
carbon reduction, and operational impacts, enabling sensitivity analysis with
operational and technological parameters. The framework is illustrated on
lower-carbon drop-in fuels as well as battery-electric technology deployments
for US Eastern and Western Class I railroad networks. Drop-in fuel deployments
are modeled as admixtures with diesel in existing locomotives, while
battery-electric deployments are shown for varying technology penetration
levels and locomotive ranges. When mixed in a 50 percent ratio with diesel,
results show biodiesel can reduce emissions by 36 percent with a cost
of 0.13 USD per kilogram of CO2 reduced, while e-fuels offer a 50 percent
emissions reduction potential at a cost of 0.22 USD per kilogram of CO2
reduced. Battery-electric results for 50 percent deployment over all ton-miles
highlight the value of future innovations in battery energy densities as
scenarios assuming 800-mile range locomotives show an estimated emissions
reduction of 46 percent with a cost of 0.06 USD per kilogram of CO2 reduced,
compared to 16 percent emissions reduction at a cost of 0.11 USD per kilogram
of CO2 reduced for 400-mile range locomotives.
|
2501.01615 | Equity Impacts of Public Transit Network Redesign with Shared Autonomous
Mobility Services | eess.SY cs.SY math.OC | This study examines the equity impacts of integrating shared autonomous
mobility services (SAMS) into transit system redesign. Using the Greater
Chicago area as a case study, we compare two optimization objectives in
multimodal transit network redesign: minimizing total generalized costs
(equity-agnostic) versus prioritizing service in low-income areas
(equity-focused). We evaluate the achieved accessibility of clustered zones
with redesigned transit networks under two objectives, compared to driving and
the existing transit network. The transit access gaps across zones and between
transit and driving are found to be generally reduced with the introduction of
SAMS, but less so under subsequent budget-constrained infrastructure improvements.
Differential improvement in equity is seen across suburbs and areas of the
city, reflecting the disparity in current transit access and improvement
potential. In particular, SAMS bridges the transit access gaps in suburban and
city areas currently underserved by transit. The City of Chicago, which is also
disproportionately home to vulnerable populations, offers an avenue to improve
vertical equity. These findings demonstrate that SAMS can enhance both
horizontal and vertical equity in transit systems, particularly when equity is
explicitly incorporated into the design objective.
|
2501.01618 | Merging Context Clustering with Visual State Space Models for Medical
Image Segmentation | cs.CV cs.AI | Medical image segmentation demands the aggregation of global and local
feature representations, posing a challenge for current methodologies in
handling both long-range and short-range feature interactions. Recently, vision
mamba (ViM) models have emerged as promising solutions for addressing model
complexities by excelling in long-range feature iterations with linear
complexity. However, existing ViM approaches overlook the importance of
preserving short-range local dependencies by directly flattening spatial tokens
and are constrained by fixed scanning patterns that limit the capture of
dynamic spatial context information. To address these challenges, we introduce
a simple yet effective method named context clustering ViM (CCViM), which
incorporates a context clustering module within the existing ViM models to
segment image tokens into distinct windows for adaptable local clustering. Our
method effectively combines long-range and short-range feature interactions,
thereby enhancing spatial contextual representations for medical image
segmentation tasks. Extensive experimental evaluations on diverse public
datasets, i.e., Kumar, CPM17, ISIC17, ISIC18, and Synapse demonstrate the
superior performance of our method compared to current state-of-the-art
methods. Our code can be found at https://github.com/zymissy/CCViM.
|
2501.01620 | Adaptive Meta-learning-based Adversarial Training for Robust Automatic
Modulation Classification | cs.LG cs.CR | DL-based automatic modulation classification (AMC) models are highly
susceptible to adversarial attacks, where even minimal input perturbations can
cause severe misclassifications. While adversarially training an AMC model
based on an adversarial attack significantly increases its robustness against
that attack, the AMC model will still be defenseless against other adversarial
attacks. The theoretically infinite possibilities for adversarial perturbations
mean that an AMC model will inevitably encounter new unseen adversarial attacks
if it is ever to be deployed to a real-world communication system. Moreover,
the computational limitations and challenges of obtaining new data in real-time
will not allow a full training process for the AMC model to adapt to the new
attack when it is online. To this end, we propose a meta-learning-based
adversarial training framework for AMC models that substantially enhances
robustness against unseen adversarial attacks and enables fast adaptation to
these attacks using just a few new training samples, if any are available. Our
results demonstrate that this training framework provides superior robustness
and accuracy with much less online training time than conventional adversarial
training of AMC models, making it highly efficient for real-world deployment.
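A classic instance of the "minimal input perturbations" mentioned above is the fast gradient sign method (FGSM); the paper's attack set may differ, but FGSM illustrates the form of the threat:

```python
import numpy as np

def fgsm_perturb(x, loss_grad, eps=0.01):
    # FGSM: move each input element eps in the sign of the loss gradient
    return x + eps * np.sign(loss_grad)

x = np.array([0.5, -0.2, 0.1])      # toy received signal samples
g = np.array([3.0, -1.0, 0.0])      # hypothetical loss gradient w.r.t. x
x_adv = fgsm_perturb(x, g, eps=0.01)
```

Adversarial training augments the training set with such perturbed inputs; meta-learning, as proposed here, additionally optimizes for fast adaptation to attack types not seen during training.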
|
2501.01625 | ICPC: In-context Prompt Compression with Faster Inference | cs.CL cs.AI | Despite the recent success of Large Language Models (LLMs), it remains
challenging to feed LLMs with long prompts due to the fixed size of LLM inputs.
As a remedy, prompt compression becomes a promising solution by removing
redundant tokens in the prompt. However, using an LLM in existing works
requires additional computational resources and leads to memory overhead. To
address it, we propose ICPC (In-context Prompt Compression), a novel and
scalable prompt compression method that adaptively reduces the prompt length.
The key idea of ICPC is to calculate the probability of each word appearing in
the prompt using encoders and calculate information carried by each word
through the information function, which effectively reduces the information
loss during prompt compression and increases the speed of compression.
Empirically, we demonstrate that ICPC can effectively compress long texts of
different categories and thus achieve better performance and speed on different
types of NLP tasks.
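The information-function idea can be sketched with unigram frequencies standing in for the paper's encoder-derived probabilities (an assumption for illustration): score each word by I(w) = -log p(w) and keep the most informative words in their original order.

```python
from collections import Counter
import math

def compress(prompt, keep_ratio=0.6):
    words = prompt.split()
    total = len(words)
    freq = Counter(words)
    # information carried by each word: I(w) = -log p(w); rarer = more informative
    scored = [(-math.log(freq[w] / total), i, w) for i, w in enumerate(words)]
    k = max(1, int(total * keep_ratio))
    # keep the k highest-information words, restored to prompt order
    kept = sorted(sorted(scored, reverse=True)[:k], key=lambda t: t[1])
    return " ".join(w for _, _, w in kept)
```

On a repetitive prompt, the frequent low-information filler is dropped first while content words survive.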
|
2501.01629 | Crossing Language Borders: A Pipeline for Indonesian Manhwa Translation | cs.LG cs.CL cs.CV | In this project, we develop a practical and efficient solution for automating
the Manhwa translation from Indonesian to English. Our approach combines
computer vision, text recognition, and natural language processing techniques
to streamline the traditionally manual process of Manhwa (Korean comics)
translation. The pipeline includes fine-tuned YOLOv5xu for speech bubble
detection, Tesseract for OCR and fine-tuned MarianMT for machine translation.
By automating these steps, we aim to make Manhwa more accessible to a global
audience while saving time and effort compared to manual translation methods.
While most Manhwa translation efforts focus on Japanese-to-English, we focus on
Indonesian-to-English translation to address the challenges of working with
low-resource languages. Our model shows good results at each step and was able
to translate from Indonesian to English efficiently.
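The three-stage pipeline can be sketched as plain function composition, with stubs standing in for YOLOv5xu, Tesseract, and MarianMT (the dictionary lookup below is a placeholder, not a real translator):

```python
# hypothetical stage stubs; in the real pipeline each callable wraps a model
def detect_bubbles(page):
    return page["bubbles"]                      # speech-bubble crops

def ocr(bubble):
    return bubble["text"]                       # recognized Indonesian text

def translate(text):
    # placeholder lookup; MarianMT would go here
    return {"halo dunia": "hello world"}.get(text, text)

def translate_page(page):
    return [translate(ocr(b)) for b in detect_bubbles(page)]

page = {"bubbles": [{"text": "halo dunia"}]}
result = translate_page(page)
```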
|
2501.01630 | A Probabilistic Model for Node Classification in Directed Graphs | cs.LG cs.SI | In this work, we present a probabilistic model for directed graphs where
nodes have attributes and labels. This model serves as a generative classifier
capable of predicting the labels of unseen nodes using either maximum
likelihood or maximum a posteriori estimations. The predictions made by this
model are highly interpretable, contrasting with some common methods for node
classification, such as graph neural networks. We applied the model to two
datasets, demonstrating predictive performance that is competitive with, and
even superior to, state-of-the-art methods. One of the datasets considered is
adapted from the Math Genealogy Project, which has not previously been utilized
for this purpose. Consequently, we evaluated several classification algorithms
on this dataset to compare the performance of our model and provide benchmarks
for this new resource.
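The abstract does not specify the generative model's exact form; as a generic illustration of maximum-likelihood fitting and MAP label prediction with a generative classifier over node attributes (graph structure omitted), here is a Gaussian naive-Bayes-style sketch:

```python
import numpy as np

def fit(X, y):
    # maximum-likelihood estimates of label prior and per-class Gaussians
    classes = np.unique(y)
    prior = np.array([(y == c).mean() for c in classes])
    mu = np.array([X[y == c].mean(axis=0) for c in classes])
    var = np.array([X[y == c].var(axis=0) + 1e-6 for c in classes])
    return classes, prior, mu, var

def predict(X, model):
    classes, prior, mu, var = model
    # MAP label: argmax_c  log p(c) + sum_j log N(x_j | mu_cj, var_cj)
    ll = -0.5 * (((X[:, None, :] - mu) ** 2 / var)
                 + np.log(2 * np.pi * var)).sum(axis=-1)
    return classes[np.argmax(np.log(prior) + ll, axis=1)]

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1, 0.3, (50, 2)), rng.normal(1, 0.3, (50, 2))])
y = np.repeat([0, 1], 50)
acc = (predict(X, fit(X, y)) == y).mean()
```

The interpretability claim follows from this structure: each prediction decomposes into a prior term and per-attribute likelihood terms that can be inspected directly.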
|
2501.01631 | Revisiting Data Analysis with Pre-trained Foundation Models | cs.DB | Data analysis focuses on harnessing advanced statistics, programming, and
machine learning techniques to extract valuable insights from vast datasets. An
increasing volume and variety of research has emerged, addressing datasets of
diverse modalities, formats, scales, and resolutions across various industries.
However, experienced data analysts often find themselves overwhelmed by
intricate details in ad-hoc solutions or attempts to extract the semantics of
grounded data properly. This makes it difficult to maintain and scale to more
complex systems. Pre-trained foundation models (PFMs), trained on large
amounts of grounded data that previous data analysis methods cannot fully
exploit, combine reasoning over admissible subsets of results with statistical
approximation, often through surprising engineering effects, to automate and
enhance the analysis process. This pushes us to revisit data analysis to make
better sense of data with PFMs. This paper
provides a comprehensive review of systematic approaches to optimizing data
analysis through the power of PFMs, while critically identifying the
limitations of PFMs, to establish a roadmap for their future application in
data analysis.
|
2501.01632 | Integrated Communication and Bayesian Estimation of Fixed Channel States | cs.IT math.IT | This work studies an information-theoretic performance limit of an integrated
sensing and communication (ISAC) system where the goal of sensing is to
estimate a random continuous state. Considering the mean-squared error (MSE)
for estimation performance metric, the Bayesian Cram\'{e}r-Rao lower bound
(BCRB) is widely used in literature as a proxy of the MSE; however, the BCRB is
not generally tight even asymptotically except for restrictive distributions.
Instead, we characterize the full tradeoff between information rate and the
exact MSE using the asymptotically tight BCRB (ATBCRB) analysis, a recent
variant of the BCRB. Our characterization is applicable for general channels as
long as the regularity conditions are met, and the proof relies on constant
composition codes and ATBCRB analysis with the codes. We also perform a
numerical evaluation of the tradeoff in a variance estimation example, which
commonly arises in spectrum sensing scenarios.
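For reference, the classical Bayesian Cramér-Rao bound the paper contrasts with takes the form (scalar case):

```latex
\mathbb{E}\big[(\hat{\theta}-\theta)^2\big] \;\ge\; J_B^{-1},
\qquad
J_B \;=\; \mathbb{E}_{\theta}\big[J_D(\theta)\big]
\;+\; \mathbb{E}\!\left[\left(\frac{\partial \log p(\theta)}{\partial \theta}\right)^{\!2}\right],
```

where $J_D(\theta)$ is the Fisher information of the data given $\theta$ and the second term captures prior information; the ATBCRB used in the paper refines this bound so that it is asymptotically tight.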
|
2501.01633 | ACE: Anti-Editing Concept Erasure in Text-to-Image Models | cs.CV | Recent advances in text-to-image diffusion models have significantly
facilitated the generation of high-quality images, but have also raised
concerns about the illegal creation of harmful content, such as copyrighted
images.
Existing concept erasure methods achieve superior results in preventing the
production of erased concepts from prompts, but typically perform poorly in
preventing undesired editing. To address this issue, we propose an Anti-Editing
Concept Erasure (ACE) method, which not only erases the target concept during
generation but also filters it out during editing. Specifically, we propose to
inject the erasure guidance into both the conditional and unconditional noise
predictions, enabling the model to effectively prevent the creation of erased
concepts during both editing and generation. Furthermore, a stochastic
correction guidance is introduced during training to address the erosion of
unrelated concepts. We conducted erasure editing experiments with
representative editing methods (i.e., LEDITS++ and MasaCtrl) to erase IP
characters, and the results indicate that our ACE effectively filters out
target concepts in both types of edits. Additional experiments on erasing
explicit concepts and artistic styles further demonstrate that our ACE performs
favorably against state-of-the-art methods. Our code will be publicly available
at https://github.com/120L020904/ACE.
|
2501.01638 | A non-ergodic framework for understanding emergent capabilities in Large
Language Models | cs.CL cs.AI cs.LG | Large language models have emergent capabilities that come unexpectedly at
scale, but we need a theoretical framework to explain why and how they emerge.
We prove that language models are actually non-ergodic systems while providing
a mathematical framework based on Stuart Kauffman's theory of the adjacent
possible (TAP) to explain capability emergence. Our resource-constrained TAP
equation demonstrates how architectural, training, and contextual constraints
interact to shape model capabilities through phase transitions in semantic
space. We demonstrate through experiments with three different language models
that capabilities emerge through discrete transitions guided by constraint
interactions and path-dependent exploration. This framework provides a
theoretical basis for understanding emergence in language models and guides the
development of architectures that can guide capability emergence.
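For context, a commonly quoted form of Kauffman's TAP recursion is shown below; the paper's resource-constrained variant adds constraint-dependent terms not given in the abstract:

```latex
M_{t+1} \;=\; M_t \;+\; \alpha \sum_{i=2}^{M_t} \binom{M_t}{i},
```

where $M_t$ counts the existing items (here, realized capabilities or semantic states) and $\alpha$ scales the probability that a combination of existing items yields a viable new one, producing the super-exponential growth associated with the adjacent possible.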
|
2501.01639 | Implications of Artificial Intelligence on Health Data Privacy and
Confidentiality | cs.CY cs.AI | The rapid integration of artificial intelligence (AI) in healthcare is
revolutionizing medical diagnostics, personalized medicine, and operational
efficiency. However, alongside these advancements, significant challenges arise
concerning patient data privacy, ethical considerations, and regulatory
compliance. This paper examines the dual impact of AI on healthcare,
highlighting its transformative potential and the critical need for
safeguarding sensitive health information. It explores the role of the Health
Insurance Portability and Accountability Act (HIPAA) as a regulatory framework
for ensuring data privacy and security, emphasizing the importance of robust
safeguards and ethical standards in AI-driven healthcare. Through case studies,
including AI applications in diabetic retinopathy, oncology, and the
controversies surrounding data sharing, this study underscores the ethical and
legal complexities of AI implementation. A balanced approach that fosters
innovation while maintaining patient trust and privacy is imperative. The
findings emphasize the importance of continuous education, transparency, and
adherence to regulatory frameworks to harness AI's full potential responsibly
and ethically in healthcare.
|
2501.01640 | Uncertainty and Energy based Loss Guided Semi-Supervised Semantic
Segmentation | cs.CV | Semi-supervised (SS) semantic segmentation exploits both labeled and
unlabeled images to overcome tedious and costly pixel-level annotation
problems. Pseudolabel supervision is one of the core approaches of training
networks with both pseudo labels and ground-truth labels. This work uses
aleatoric (data) uncertainty and energy-based modeling in an intersection-union
pseudo-supervised network. The aleatoric uncertainty models the inherent
noise variations of the data in a network with two predictive branches. The
per-pixel variance parameter obtained from the network gives a quantitative
idea about the data uncertainty. Moreover, energy-based loss realizes the
potential of generative modeling on the downstream SS segmentation task. The
aleatoric and energy loss are applied in conjunction with pseudo-intersection
labels, pseudo-union labels, and ground-truth on the respective network branch.
The comparative analysis with state-of-the-art methods has shown improvement in
performance metrics.
|
2501.01642 | iCBIR-Sli: Interpretable Content-Based Image Retrieval with 2D Slice
Embeddings | cs.CV cs.LG eess.IV | Current methods for searching brain MR images rely on text-based approaches,
highlighting a significant need for content-based image retrieval (CBIR)
systems. Directly applying 3D brain MR images to machine learning models offers
the benefit of effectively learning the brain's structure; however, building
a generalized model necessitates a large amount of training data. While
models that consider depth direction and utilize continuous 2D slices have
demonstrated success in segmentation and classification tasks involving 3D
data, concerns remain. Specifically, using general 2D slices may lead to the
oversight of pathological features and discontinuities in depth direction
information. Furthermore, to the best of the authors' knowledge, there have
been no attempts to develop a practical CBIR system that preserves the entire
brain's structural information. In this study, we propose an interpretable CBIR
method for brain MR images, named iCBIR-Sli (Interpretable CBIR with 2D Slice
Embedding), which is, to the best of our knowledge, the first to utilize a series of 2D slices.
iCBIR-Sli addresses the challenges associated with using 2D slices by
effectively aggregating slice information, thereby achieving low-dimensional
representations with high completeness, usability, robustness, and
interoperability, which are qualities essential for effective CBIR. In
retrieval evaluation experiments utilizing five publicly available brain MR
datasets (ADNI2/3, OASIS3/4, AIBL) covering Alzheimer's disease and cognitively
normal subjects, iCBIR-Sli demonstrated top-1 retrieval performance (macro F1 = 0.859),
comparable to existing deep learning models explicitly designed for
classification, without the need for an external classifier. Additionally, the
method provided high interpretability by clearly identifying the brain regions
indicative of the searched-for disease.
|
2501.01644 | Multimodal Contrastive Representation Learning in Augmented Biomedical
Knowledge Graphs | cs.CL cs.LG | Biomedical Knowledge Graphs (BKGs) integrate diverse datasets to elucidate
complex relationships within the biomedical field. Effective link prediction on
these graphs can uncover valuable connections, such as potential novel
drug-disease relations. We introduce a novel multimodal approach that unifies
embeddings from specialized Language Models (LMs) with Graph Contrastive
Learning (GCL) to enhance intra-entity relationships while employing a
Knowledge Graph Embedding (KGE) model to capture inter-entity relationships for
effective link prediction. To address limitations in existing BKGs, we present
PrimeKG++, an enriched knowledge graph incorporating multimodal data, including
biological sequences and textual descriptions for each entity type. By
combining semantic and relational information in a unified representation, our
approach demonstrates strong generalizability, enabling accurate link
predictions even for unseen nodes. Experimental results on PrimeKG++ and the
DrugBank drug-target interaction dataset demonstrate the effectiveness and
robustness of our method across diverse biomedical datasets. Our source code,
pre-trained models, and data are publicly available at
https://github.com/HySonLab/BioMedKG
|
2501.01645 | HLV-1K: A Large-scale Hour-Long Video Benchmark for Time-Specific Long
Video Understanding | cs.CV cs.AI | Multimodal large language models have become a popular topic in deep visual
understanding due to many promising real-world applications. However, hour-long
video understanding, spanning over one hour and containing tens of thousands of
visual frames, remains under-explored because of 1) challenging long-term video
analyses, 2) inefficient large-model approaches, and 3) lack of large-scale
benchmark datasets. Among them, in this paper, we focus on building a
large-scale hour-long video benchmark, HLV-1K, designed to evaluate long
video understanding models. HLV-1K comprises 1009 hour-long videos with 14,847
high-quality question answering (QA) and multiple-choice question answering (MCQA)
pairs with time-aware queries and diverse annotations, covering frame-level,
within-event-level, cross-event-level, and long-term reasoning tasks. We
evaluate our benchmark using existing state-of-the-art methods and demonstrate
its value for testing deep long video understanding capabilities at different
levels and for various tasks. This includes promoting future long video
understanding tasks at a granular level, such as deep understanding of long
live videos, meeting recordings, and movies.
|
2501.01648 | Dual Mutual Learning Network with Global-local Awareness for RGB-D
Salient Object Detection | cs.CV cs.MM | RGB-D salient object detection (SOD), aiming to highlight prominent regions
of a given scene by jointly modeling RGB and depth information, is one of the
challenging pixel-level prediction tasks. Recently, the dual-attention
mechanism has been devoted to this area due to its ability to strengthen the
detection process. However, most existing methods directly fuse attentional
cross-modality features under a manual-mandatory fusion paradigm without
considering the inherent discrepancy between the RGB and depth, which may lead
to a reduction in performance. Moreover, the long-range dependencies derived
from global and local information make it difficult to leverage a unified
efficient fusion strategy. Hence, in this paper, we propose the GL-DMNet, a
novel dual mutual learning network with global-local awareness. Specifically,
we present a position mutual fusion module and a channel mutual fusion module
to exploit the interdependencies among different modalities in spatial and
channel dimensions. Besides, we adopt an efficient decoder based on cascade
transformer-infused reconstruction to integrate multi-level fusion features
jointly. Extensive experiments on six benchmark datasets demonstrate that our
proposed GL-DMNet performs better than 24 RGB-D SOD methods, achieving an
average improvement of ~3% across four evaluation metrics compared to the
second-best model (S3Net). Codes and results are available at
https://github.com/kingkung2016/GL-DMNet.
|
2501.01649 | AVATAR: Adversarial Autoencoders with Autoregressive Refinement for Time
Series Generation | cs.LG cs.AI | Data augmentation can significantly enhance the performance of machine
learning tasks by addressing data scarcity and improving generalization.
However, generating time series data presents unique challenges. A model must
not only learn a probability distribution that reflects the real data
distribution but also capture the conditional distribution at each time step to
preserve the inherent temporal dependencies. To address these challenges, we
introduce AVATAR, a framework that combines Adversarial Autoencoders (AAE) with
Autoregressive Learning to achieve both objectives. Specifically, our technique
integrates the autoencoder with a supervisor and introduces a novel supervised
loss to assist the decoder in learning the temporal dynamics of time series
data. Additionally, we propose another innovative loss function, termed
distribution loss, to guide the encoder in more efficiently aligning the
aggregated posterior of the autoencoder's latent representation with a prior
Gaussian distribution. Furthermore, our framework employs a joint training
mechanism to simultaneously train all networks using a combined loss, thereby
fulfilling the dual objectives of time series generation. We evaluate our
technique across a variety of time series datasets with diverse
characteristics. Our experiments demonstrate significant improvements in both
the quality and practical utility of the generated data, as assessed by various
qualitative and quantitative metrics.
|
2501.01652 | MIRAGE: Exploring How Large Language Models Perform in Complex Social
Interactive Environments | cs.CL | Large Language Models (LLMs) have shown remarkable capabilities in
environmental perception, reasoning-based decision-making, and simulating
complex human behaviors, particularly in interactive role-playing contexts.
This paper introduces the Multiverse Interactive Role-play Ability General
Evaluation (MIRAGE), a comprehensive framework designed to assess LLMs'
proficiency in portraying advanced human behaviors through murder mystery
games. MIRAGE features eight intricately crafted scripts encompassing diverse
themes and styles, providing a rich simulation. To evaluate LLMs' performance,
MIRAGE employs four distinct methods: the Trust Inclination Index (TII) to
measure the dynamics of trust and suspicion, the Clue Investigation Capability
(CIC) to measure LLMs' capability to gather information, the Interactivity
Capability Index (ICI) to assess role-playing capabilities, and the Script
Compliance Index (SCI) to assess LLMs' capability to understand and
follow instructions. Our experiments indicate that even popular models like
GPT-4 face significant challenges in navigating the complexities presented by
MIRAGE. The datasets and simulation codes are available at
\href{https://github.com/lime728/MIRAGE}{github}.
|
2501.01653 | Look Back for More: Harnessing Historical Sequential Updates for
Personalized Federated Adapter Tuning | cs.LG cs.DC | Personalized federated learning (PFL) studies effective model personalization
to address the data heterogeneity issue among clients in traditional federated
learning (FL). Existing PFL approaches mainly generate personalized models by
relying solely on the clients' latest updated models while ignoring their
previous updates, which may result in suboptimal personalized model learning.
To bridge this gap, we propose a novel framework termed pFedSeq, designed for
personalizing adapters to fine-tune a foundation model in FL. In pFedSeq, the
server maintains and trains a sequential learner, which processes a sequence of
past adapter updates from clients and generates calibrations for personalized
adapters. To effectively capture the cross-client and cross-step relations
hidden in previous updates and generate high-performing personalized adapters,
pFedSeq adopts the powerful selective state space model (SSM) as the
architecture of sequential learner. Through extensive experiments on four
public benchmark datasets, we demonstrate the superiority of pFedSeq over
state-of-the-art PFL methods.
|
2501.01658 | EAUWSeg: Eliminating annotation uncertainty in weakly-supervised medical
image segmentation | cs.CV cs.AI | Weakly-supervised medical image segmentation is gaining traction as it
requires only rough annotations rather than accurate pixel-to-pixel labels,
thereby reducing the workload for specialists. Although some progress has been
made, there is still a considerable performance gap between label-efficient
methods and fully-supervised ones, which can be attributed to the uncertain
nature of these weak labels. To address this issue, we propose a novel weak
annotation method coupled with its learning framework EAUWSeg to eliminate the
annotation uncertainty. Specifically, we first propose the Bounded Polygon
Annotation (BPAnno) by simply labeling two polygons for a lesion. Then, the
tailored learning mechanism that explicitly treats bounded polygons as two
separate annotations is proposed to learn invariant features by providing
adversarial supervision signals for model training. Subsequently, a
confidence-auxiliary consistency learner incorporating a
classification-guided confidence generator is designed to provide reliable
supervision signals for pixels in uncertain regions by leveraging the feature
representation consistency across pixels within the same category as well as
class-specific information encapsulated in the bounded polygon annotations.
Experimental results demonstrate that EAUWSeg outperforms existing
weakly-supervised segmentation methods. Furthermore, compared to
fully-supervised counterparts, the proposed method not only delivers superior
performance but also costs much less annotation workload. This underscores the
superiority and effectiveness of our approach.
|
2501.01664 | BARTPredict: Empowering IoT Security with LLM-Driven Cyber Threat
Prediction | cs.CR cs.AI | The integration of Internet of Things (IoT) technology in various domains has
led to operational advancements, but it has also introduced new vulnerabilities
to cybersecurity threats, as evidenced by recent widespread cyberattacks on IoT
devices. Intrusion detection systems are often reactive, triggered by specific
patterns or anomalies observed within the network. To address this challenge,
this work proposes a proactive approach to anticipate and preemptively mitigate
malicious activities, aiming to prevent potential damage before it occurs. This
paper proposes an innovative intrusion prediction framework empowered by
Pre-trained Large Language Models (LLMs). The framework incorporates two LLMs:
a fine-tuned Bidirectional and AutoRegressive Transformers (BART) model for
predicting network traffic and a fine-tuned Bidirectional Encoder
Representations from Transformers (BERT) model for evaluating the predicted
traffic. By harnessing the bidirectional capabilities of BART, the framework
then identifies malicious packets among these predictions. Evaluated using the
CICIoT2023 IoT attack dataset, our framework showcases a notable enhancement in
predictive performance, attaining an impressive 98% overall accuracy, providing
a powerful response to the cybersecurity challenges that confront IoT networks.
|
2501.01665 | FairSense: Long-Term Fairness Analysis of ML-Enabled Systems | cs.LG cs.CY cs.SE | Algorithmic fairness of machine learning (ML) models has raised significant
concern in the recent years. Many testing, verification, and bias mitigation
techniques have been proposed to identify and reduce fairness issues in ML
models. The existing methods are model-centric and designed to detect fairness
issues under static settings. However, many ML-enabled systems operate in a
dynamic environment where the predictive decisions made by the system impact
the environment, which in turn affects future decision-making. Such a
self-reinforcing feedback loop can cause fairness violations in the long term,
even if the immediate outcomes are fair. In this paper, we propose a
simulation-based framework called FairSense to detect and analyze long-term
unfairness in ML-enabled systems. Given a fairness requirement, FairSense
performs Monte-Carlo simulation to enumerate evolution traces for each system
configuration. Then, FairSense performs sensitivity analysis on the space of
possible configurations to understand the impact of design options and
environmental factors on the long-term fairness of the system. We demonstrate
FairSense's potential utility through three real-world case studies: Loan
lending, opioids risk scoring, and predictive policing.
|
2501.01668 | CoT-based Synthesizer: Enhancing LLM Performance through Answer
Synthesis | cs.CL | Current inference scaling methods, such as Self-consistency and Best-of-N,
have proven effective in improving the accuracy of LLMs on complex reasoning
tasks. However, these methods rely heavily on the quality of candidate
responses and are unable to produce correct answers when all candidates are
incorrect. In this paper, we propose a novel inference scaling strategy,
CoT-based Synthesizer, which leverages CoT reasoning to synthesize superior
answers by analyzing complementary information from multiple candidate
responses, even when all candidate responses are flawed. To enable a
lightweight and cost-effective implementation, we introduce an automated data
generation pipeline that creates diverse training data. This allows smaller
LLMs trained on this data to improve the inference accuracy of larger models,
including API-based LLMs. Experimental results across four benchmark datasets
with seven policy models demonstrate that our method significantly enhances
performance, with gains of 11.8% for Llama3-8B and 10.3% for GPT-4o on the MATH
dataset. The corresponding training data and code are publicly available on
https://github.com/RUCKBReasoning/CoT-based-Synthesizer.
|
2501.01669 | Inversely Learning Transferable Rewards via Abstracted States | cs.LG cs.RO | Inverse reinforcement learning (IRL) has progressed significantly toward
accurately learning the underlying rewards in both discrete and continuous
domains from behavior data. The next advance is to learn {\em intrinsic}
preferences in ways that produce useful behavior in settings or tasks which are
different but aligned with the observed ones. In the context of robotic
applications, this helps integrate robots into processing lines involving new
tasks (with shared intrinsic preferences) without programming from scratch. We
introduce a method to inversely learn an abstract reward function from behavior
trajectories in two or more differing instances of a domain. The abstract
reward function is then used to learn task behavior in another separate
instance of the domain. This step offers evidence of its transferability and
validates its correctness. We evaluate the method on trajectories in tasks from
multiple domains in OpenAI's Gym testbed and AssistiveGym and show that the
learned abstract reward functions can successfully learn task behaviors in
instances of the respective domains, which have not been seen previously.
|
2501.01677 | PG-SAG: Parallel Gaussian Splatting for Fine-Grained Large-Scale Urban
Buildings Reconstruction via Semantic-Aware Grouping | cs.CV | 3D Gaussian Splatting (3DGS) has emerged as a transformative method in the
field of real-time novel view synthesis. Based on 3DGS, recent advancements cope
with large-scale scenes via spatial partition strategies to reduce video
memory and optimization time costs. In this work, we introduce a parallel
Gaussian splatting method, termed PG-SAG, which fully exploits semantic cues
for both partitioning and Gaussian kernel optimization, enabling fine-grained
building surface reconstruction of large-scale urban areas without downsampling
the original image resolution. First, the Cross-modal model - Language Segment
Anything is leveraged to segment building masks. Then, the segmented building
regions are grouped into sub-regions according to a visibility check across
registered images. The Gaussian kernels for these sub-regions are optimized in
parallel with masked pixels. In addition, the normal loss is re-formulated for
the detected edges of masks to alleviate the ambiguities in normal vectors on
edges. Finally, to improve the optimization of 3D Gaussians, we introduce a
gradient-constrained balance-load loss that accounts for the complexity of the
corresponding scenes, effectively minimizing the thread waiting time in the
pixel-parallel rendering stage as well as the reconstruction loss. Extensive
experiments on various urban datasets demonstrate the
superior performance of our PG-SAG on building surface reconstruction, compared
to several state-of-the-art 3DGS-based methods. Project
Web:https://github.com/TFWang-9527/PG-SAG.
|
2501.01679 | Adaptive Few-shot Prompting for Machine Translation with Pre-trained
Language Models | cs.CL cs.AI | Recently, Large language models (LLMs) with in-context learning have
demonstrated remarkable potential in handling neural machine translation.
However, existing evidence shows that LLMs are prompt-sensitive and it is
sub-optimal to apply the fixed prompt to any input for downstream machine
translation tasks. To address this issue, we propose an adaptive few-shot
prompting (AFSP) framework to automatically select suitable translation
demonstrations for various source input sentences to further elicit the
translation capability of an LLM for better machine translation. First, we
build a translation demonstration retrieval module based on LLM's embedding to
retrieve top-k semantic-similar translation demonstrations from aligned
parallel translation corpus. Rather than using other embedding models for
semantic demonstration retrieval, we build a hybrid demonstration retrieval
module based on the embedding layer of the deployed LLM to construct better input
representations for retrieving more semantically related translation demonstrations.
Then, to ensure better semantic consistency between source inputs and target
outputs, we force the deployed LLM itself to generate multiple output
candidates in the target language with the help of translation demonstrations
and rerank these candidates. Besides, to better evaluate the effectiveness of
our AFSP framework on the latest language and extend the research boundary of
neural machine translation, we construct a high-quality diplomatic
Chinese-English parallel dataset that consists of 5,528 parallel
Chinese-English sentences. Finally, extensive experiments on the proposed
diplomatic Chinese-English parallel dataset and the United Nations Parallel
Corpus (Chinese-English part) show the effectiveness and superiority of our
proposed AFSP.
|
2501.01681 | SNeRV: Spectra-preserving Neural Representation for Video | eess.IV cs.CV | Neural representation for video (NeRV), which employs a neural network to
parameterize video signals, introduces a novel methodology in video
representations. However, existing NeRV-based methods have difficulty in
capturing fine spatial details and motion patterns due to spectral bias, in
which a neural network learns high-frequency (HF) components at a slower rate
than low-frequency (LF) components. In this paper, we propose
spectra-preserving NeRV (SNeRV) as a novel approach to enhance implicit video
representations by efficiently handling various frequency components. SNeRV
uses 2D discrete wavelet transform (DWT) to decompose video into LF and HF
features, preserving spatial structures and directly addressing the spectral
bias issue. To balance the compactness, we encode only the LF components, while
HF components that include fine textures are generated by a decoder.
Specialized modules, including a multi-resolution fusion unit (MFU) and a
high-frequency restorer (HFR), are integrated into a backbone to facilitate the
representation. Furthermore, we extend SNeRV to effectively capture temporal
correlations between adjacent video frames, by casting the extension as
additional frequency decomposition to a temporal domain. This approach allows
us to embed spatio-temporal LF features into the network, using temporally
extended up-sampling blocks (TUBs). Experimental results demonstrate that SNeRV
outperforms existing NeRV models in capturing fine details and achieves
enhanced reconstruction, making it a promising approach in the field of
implicit video representations. The codes are available at
https://github.com/qwertja/SNeRV.
|
2501.01685 | IAM: Enhancing RGB-D Instance Segmentation with New Benchmarks | cs.CV | Image segmentation is a vital task for providing human assistance and
enhancing autonomy in our daily lives. In particular, RGB-D
segmentation-leveraging both visual and depth cues-has attracted increasing
attention as it promises richer scene understanding than RGB-only methods.
However, most existing efforts have primarily focused on semantic segmentation
and thus leave a critical gap. There is a relative scarcity of instance-level
RGB-D segmentation datasets, which restricts current methods to broad category
distinctions rather than fully capturing the fine-grained details required for
recognizing individual objects. To bridge this gap, we introduce three RGB-D
instance segmentation benchmarks, distinguished at the instance level. These
datasets are versatile, supporting a wide range of applications from indoor
navigation to robotic manipulation. In addition, we present an extensive
evaluation of various baseline models on these benchmarks. This comprehensive
analysis identifies both their strengths and shortcomings, guiding future work
toward more robust, generalizable solutions. Finally, we propose a simple yet
effective method for RGB-D data integration. Extensive evaluations affirm the
effectiveness of our approach, offering a robust framework for advancing toward
more nuanced scene understanding.
|
2501.01689 | Quantitative Gait Analysis from Single RGB Videos Using a Dual-Input
Transformer-Based Network | cs.CV | Gait and movement analysis have become a well-established clinical tool for
diagnosing health conditions, monitoring disease progression across a wide
spectrum of diseases, and implementing and assessing treatment, surgical, and
rehabilitation interventions. However, quantitative motion assessment remains
limited to costly motion capture systems and specialized personnel, restricting
its accessibility and broader application. Recent advancements in deep neural
networks have enabled quantitative movement analysis using single-camera
videos, offering an accessible alternative to conventional motion capture
systems. In this paper, we present an efficient approach for clinical gait
analysis through a dual-pattern input convolutional Transformer network. The
proposed system leverages a dual-input Transformer model to estimate essential
gait parameters from single RGB videos captured by a single-view camera. The
system demonstrates high accuracy in estimating critical metrics such as the
gait deviation index (GDI), knee flexion angle, step length, and walking
cadence, validated on a dataset of individuals with movement disorders.
Notably, our approach surpasses state-of-the-art methods in various scenarios,
using fewer resources and proving highly suitable for clinical application,
particularly in resource-constrained environments.
|
2501.01690 | Analyzing Aviation Safety Narratives with LDA, NMF and PLSA: A Case
Study Using Socrata Datasets | cs.LG | This study explores the application of the topic modelling techniques Latent
Dirichlet Allocation (LDA), Nonnegative Matrix Factorization (NMF), and
Probabilistic Latent Semantic Analysis (PLSA) on the Socrata dataset spanning
from 1908 to 2009. Categorized by operator type (military, commercial, and
private), the analysis identified key themes such as pilot error, mechanical
failure, weather conditions, and training deficiencies. The study highlights
the unique strengths of each method: LDA's ability to uncover overlapping themes,
NMF's production of distinct and interpretable topics, and PLSA's nuanced
probabilistic insights despite interpretative complexity. Statistical analysis
revealed that PLSA achieved a coherence score of 0.32 and a perplexity value of
-4.6, NMF scored 0.34 and 37.1, while LDA achieved the highest coherence of
0.36 but recorded the highest perplexity at 38.2. These findings demonstrate
the value of topic modelling in extracting actionable insights from
unstructured aviation safety narratives, aiding in the identification of risk
factors and areas for improvement across sectors. Future directions include
integrating additional contextual variables, leveraging neural topic models,
and enhancing aviation safety protocols. This research provides a foundation
for advanced text-mining applications in aviation safety management.
|
2501.01691 | VidFormer: A novel end-to-end framework fused by 3DCNN and Transformer
for Video-based Remote Physiological Measurement | cs.CV cs.AI | Remote physiological signal measurement based on facial videos, also known as
remote photoplethysmography (rPPG), involves predicting changes in facial
vascular blood flow from facial videos. While most deep learning-based methods
have achieved good results, they often struggle to balance performance across
small and large-scale datasets due to the inherent limitations of convolutional
neural networks (CNNs) and Transformer. In this paper, we introduce VidFormer,
a novel end-to-end framework that integrates 3-Dimension Convolutional Neural
Network (3DCNN) and Transformer models for rPPG tasks. Initially, we conduct an
analysis of the traditional skin reflection model and subsequently introduce an
enhanced model for the reconstruction of rPPG signals. Based on this improved
model, VidFormer utilizes 3DCNN and Transformer to extract local and global
features from input data, respectively. To enhance the spatiotemporal feature
extraction capabilities of VidFormer, we incorporate temporal-spatial attention
mechanisms tailored for both 3DCNN and Transformer. Additionally, we design a
module to facilitate information exchange and fusion between the 3DCNN and
Transformer. Our evaluation on five publicly available datasets demonstrates
that VidFormer outperforms current state-of-the-art (SOTA) methods. Finally, we
discuss the essential roles of each VidFormer module and examine the effects of
ethnicity, makeup, and exercise on its performance.
|
2501.01692 | Recursive decoding of projective Reed-Muller codes | cs.IT math.IT | We give a recursive decoding algorithm for projective Reed-Muller codes
making use of a decoder for affine Reed-Muller codes. We determine the number
of errors that can be corrected in this way, which is the current highest for
decoders of projective Reed-Muller codes. We show when we can decode up to the
error correction capability of these codes, and we compute the order of
complexity of the algorithm, which is given by that of the chosen decoder for
affine Reed-Muller codes.
|
2501.01693 | Denoising and Adaptive Online Vertical Federated Learning for Sequential
Multi-Sensor Data in Industrial Internet of Things | cs.LG cs.NI | With the continuous improvement in the computational capabilities of edge
devices such as intelligent sensors in the Industrial Internet of Things, these
sensors are no longer limited to mere data collection but are increasingly
capable of performing complex computational tasks. This advancement provides
both the motivation and the foundation for adopting distributed learning
approaches. This study focuses on an industrial assembly line scenario where
multiple sensors, distributed across various locations, sequentially collect
real-time data characterized by distinct feature spaces. To leverage the
computational potential of these sensors while addressing the challenges of
communication overhead and privacy concerns inherent in centralized learning,
we propose the Denoising and Adaptive Online Vertical Federated Learning
(DAO-VFL) algorithm. Tailored to the industrial assembly line scenario, DAO-VFL
effectively manages continuous data streams and adapts to shifting learning
objectives. Furthermore, it can address critical challenges prevalent in
industrial environments, such as communication noise and heterogeneity of sensor
capabilities. To support the proposed algorithm, we provide a comprehensive
theoretical analysis, highlighting the effects of noise reduction and adaptive
local iteration decisions on the regret bound. Experimental results on two
real-world datasets further demonstrate the superior performance of DAO-VFL
compared to benchmark algorithms.
|
2501.01694 | Comparative Study of Deep Learning Architectures for Textual Damage
Level Classification | cs.LG | Given the paramount importance of safety in the aviation industry, even minor
operational anomalies can have significant consequences. Comprehensive
documentation of incidents and accidents serves to identify root causes and
propose safety measures. However, the unstructured nature of incident event
narratives poses a challenge for computer systems to interpret. Our study aimed
to leverage Natural Language Processing (NLP) and deep learning models to
analyze these narratives and classify the aircraft damage level incurred during
safety occurrences. Through the implementation of LSTM, BLSTM, GRU, and sRNN
deep learning models, our research yielded promising results, with all models
showcasing competitive performance and achieving an accuracy of over 88%,
significantly surpassing the 25% random-guess threshold for a four-class
classification problem. Notably, the sRNN model emerged as the top performer in
terms of recall and accuracy, boasting a remarkable 89%. These findings
underscore the potential of NLP and deep learning models in extracting
actionable insights from unstructured text narratives, particularly in
evaluating the extent of aircraft damage within the realm of aviation safety
occurrences.
|
2501.01695 | CrossView-GS: Cross-view Gaussian Splatting For Large-scale Scene
Reconstruction | cs.CV | 3D Gaussian Splatting (3DGS) has emerged as a prominent method for scene
representation and reconstruction, leveraging densely distributed Gaussian
primitives to enable real-time rendering of high-resolution images. While
existing 3DGS methods perform well in scenes with minor view variation, large
view changes in cross-view scenes pose optimization challenges for these
methods. To address these issues, we propose a novel cross-view Gaussian
Splatting method for large-scale scene reconstruction, based on dual-branch
fusion. Our method reconstructs models from aerial and ground views as two
independent branches to establish the baselines of Gaussian
distribution, providing reliable priors for cross-view reconstruction during
both initialization and densification. Specifically, a gradient-aware
regularization strategy is introduced to mitigate smoothing issues caused by
significant view disparities. Additionally, a unique Gaussian supplementation
strategy is utilized to incorporate complementary information of dual-branch
into the cross-view model. Extensive experiments on benchmark datasets
demonstrate that our method achieves superior performance in novel view
synthesis compared to state-of-the-art methods.
|
2501.01696 | Guaranteed Nonconvex Low-Rank Tensor Estimation via Scaled Gradient
Descent | stat.ML cs.IT cs.LG math.IT | Tensors, which give a faithful and effective representation to deliver the
intrinsic structure of multi-dimensional data, play a crucial role in an
increasing number of signal processing and machine learning problems. However,
tensor data are often accompanied by arbitrary signal corruptions, including
missing entries and sparse noise. A fundamental challenge is to reliably
extract the meaningful information from corrupted tensor data in a
statistically and computationally efficient manner. This paper develops a
scaled gradient descent (ScaledGD) algorithm to directly estimate the tensor
factors with tailored spectral initializations under the tensor-tensor product
(t-product) and tensor singular value decomposition (t-SVD) framework. In
theory, ScaledGD achieves linear convergence at a constant rate that is
independent of the condition number of the ground truth low-rank tensor for two
canonical problems -- tensor robust principal component analysis and tensor
completion -- as long as the level of corruptions is not too large and the
sample size is sufficiently large, while maintaining the low per-iteration cost
of gradient descent. To the best of our knowledge, ScaledGD is the first
algorithm that provably has such properties for low-rank tensor estimation with
the t-SVD decomposition. Finally, numerical examples are provided to
demonstrate the efficacy of ScaledGD in accelerating the convergence rate of
ill-conditioned low-rank tensor estimation in these two applications.
|
2501.01699 | Robust Self-Paced Hashing for Cross-Modal Retrieval with Noisy Labels | cs.CV cs.MM | Cross-modal hashing (CMH) has appeared as a popular technique for cross-modal
retrieval due to its low storage cost and high computational efficiency in
large-scale data. Most existing methods implicitly assume that multi-modal data
is correctly labeled, which is expensive and even unattainable due to the
inevitable imperfect annotations (i.e., noisy labels) in real-world scenarios.
Inspired by human cognitive learning, a few methods introduce self-paced
learning (SPL) to gradually train the model from easy to hard samples, which is
often used to mitigate the effects of feature noise or outliers. However, how
to utilize SPL to alleviate the misleading effects of noisy labels on the hash
model remains a less-touched problem. To tackle this problem, we propose a new
cognitive cross-modal retrieval method called Robust Self-paced Hashing with
Noisy Labels (RSHNL), which can mimic the human cognitive process to identify
the noise while embracing robustness against noisy labels. Specifically, we
first propose a contrastive hashing learning (CHL) scheme to improve
multi-modal consistency, thereby reducing the inherent semantic gap. Afterward,
we propose center aggregation learning (CAL) to mitigate the intra-class
variations. Finally, we propose Noise-tolerance Self-paced Hashing (NSH) that
dynamically estimates the learning difficulty for each instance and
distinguishes noisy labels through the difficulty level. For all estimated
clean pairs, we further adopt a self-paced regularizer to gradually learn hash
codes from easy to hard. Extensive experiments demonstrate that the proposed
RSHNL performs remarkably well over the state-of-the-art CMH methods.
|
2501.01700 | Aesthetic Matters in Music Perception for Image Stylization: An
Emotion-driven Music-to-Visual Manipulation | cs.CV | Emotional information is essential for enhancing human-computer interaction
and deepening image understanding. However, while deep learning has advanced
image recognition, the intuitive understanding and precise control of emotional
expression in images remain challenging. Similarly, music research largely
focuses on theoretical aspects, with limited exploration of its emotional
dimensions and their integration with visual arts. To address these gaps, we
introduce EmoMV, an emotion-driven music-to-visual manipulation method that
manipulates images based on musical emotions. EmoMV combines bottom-up
processing of music elements, such as pitch and rhythm, with top-down
of these emotions to visual aspects like color and lighting. We evaluate EmoMV
using a multi-scale framework that includes image quality metrics, aesthetic
assessments, and EEG measurements to capture real-time emotional responses. Our
results demonstrate that EmoMV effectively translates music's emotional content
into visually compelling images, advancing multimodal emotional integration and
opening new avenues for creative industries and interactive technologies.
|
2501.01702 | AgentRefine: Enhancing Agent Generalization through Refinement Tuning | cs.AI cs.CL cs.RO | Large Language Model (LLM) based agents have proved their ability to perform
complex tasks like humans. However, there is still a large gap between
open-sourced LLMs and commercial models like the GPT series. In this paper, we
focus on improving the agent generalization capabilities of LLMs via
instruction tuning. We first observe that the existing agent training corpus
exhibits satisfactory results on held-in evaluation sets but fails to
generalize to held-out sets. These agent-tuning works suffer from severe
formatting errors and frequently get stuck repeating the same mistake. We find
that the poor generalization ability comes from overfitting to several manually
curated agent environments and a lack of adaptation to new situations. The
models struggle with wrong action steps and cannot learn from experience, but
merely memorize existing observation-action relations. Inspired by this insight,
we propose a novel AgentRefine framework for agent-tuning. The core idea is to
enable the model to learn to correct its mistakes via observation in the
trajectory. Specifically, we propose an agent synthesis framework to encompass
a diverse array of environments and tasks and prompt a strong LLM to refine its
error action according to the environment feedback. AgentRefine significantly
outperforms state-of-the-art agent-tuning work in terms of generalization
ability on diverse agent tasks. It also has better robustness facing
perturbation and can generate diversified thought in inference. Our findings
establish the correlation between agent generalization and self-refinement and
provide a new paradigm for future research.
|
2501.01704 | Optimal Fiducial Marker Placement for Satellite Proximity Operations
Using Observability Gramians | eess.SY cs.CV cs.RO cs.SY math.OC | This paper investigates optimal fiducial marker placement on the surface of a
satellite performing relative proximity operations with an observer satellite.
The absolute and relative translation and attitude equations of motion for the
satellite pair are modeled using dual quaternions. The observability of the
relative dual quaternion system is analyzed using empirical observability
Gramian methods. The optimal placement of a fiducial marker set, in which each
marker gives simultaneous optical range and attitude measurements, is
determined for the pair of satellites. A geostationary flyby between the
observing body (chaser) and desired (target) satellites is numerically
simulated, and optimal placement sets of five and ten fiducial markers on the
surface of the desired satellite are solved. It is shown that the optimal
solution maximizes the distance between fiducial markers and selects marker
locations that are most sensitive to measuring changes in the state during the
nonlinear trajectory, despite being visible for less time than other candidate
marker locations. Definitions and properties of quaternions and dual
quaternions, and parallels between the two, are presented alongside the
relative motion model.
|
2501.01705 | The Essence of Contextual Understanding in Theory of Mind: A Study on
Question Answering with Story Characters | cs.CL cs.AI | Theory-of-Mind (ToM) is a fundamental psychological capability that allows
humans to understand and interpret the mental states of others. Humans infer
others' thoughts by integrating causal cues and indirect clues from broad
contextual information, often derived from past interactions. In other words,
human ToM heavily relies on an understanding of the backgrounds and life
stories of others. Unfortunately, this aspect is largely overlooked in existing
benchmarks for evaluating machines' ToM capabilities, due to their usage of
short narratives without global backgrounds. In this paper, we verify the
importance of understanding long personal backgrounds in ToM and assess the
performance of LLMs in such realistic evaluation scenarios. To achieve this, we
introduce a novel benchmark, CharToM-QA, comprising 1,035 ToM questions based
on characters from classic novels. Our human study reveals a significant
disparity in performance: the same group of educated participants performs
dramatically better when they have read the novels compared to when they have
not. In parallel, our experiments on state-of-the-art LLMs, including the very
recent o1 model, show that LLMs still perform notably worse than humans,
despite having seen these stories during pre-training. This highlights
the limitations of current LLMs in capturing the nuanced contextual information
required for ToM reasoning.
|
2501.01707 | Catch Causal Signals from Edges for Label Imbalance in Graph
Classification | cs.LG | Despite significant advancements in causal research on graphs and its
application to cracking label imbalance, the role of edge features in detecting
the causal effects within graphs has been largely overlooked, leaving existing
methods with untapped potential for further performance gains. In this paper,
we enhance the causal attention mechanism through effectively leveraging edge
information to disentangle the causal subgraph from the original graph, as well
as further utilizing edge features to reshape graph representations. Capturing
more comprehensive causal signals, our design leads to improved performance on
graph classification tasks with label imbalance issues. We evaluate our
approach on real-world datasets PTC, Tox21, and ogbg-molhiv, observing
improvements over baselines. Overall, we highlight the importance of edge
features in graph causal detection and provide a promising direction for
addressing label imbalance challenges in graph-level tasks. The model
implementation details and the codes are available on
https://github.com/fengrui-z/ECAL
|
2501.01708 | $(\Theta, \Delta_\Theta, \mathbf{a})$-cyclic codes over $\mathbb{F}_q^l$
and their applications in the construction of quantum codes | cs.IT math.IT | In this article, for a finite field $\mathbb{F}_q$ and a natural number $l,$
let $\mathcal{R}$ denote the product ring $\mathbb{F}_q^l.$ Firstly, for an
automorphism $\Theta$ of $\mathcal{R},$ a $\Theta$-derivation $\Delta_\Theta$
of $\mathcal{R}$ and for a unit $\mathbf{a}$ in $\mathcal{R},$ we study
$(\Theta, \Delta_\Theta, \mathbf{a})$-cyclic codes over $\mathcal{R}.$ In this
direction, we give an algebraic characterization of a $(\Theta, \Delta_\Theta,
\mathbf{a})$-cyclic code over $\mathcal{R}$, determine its generator
polynomial, and find its decomposition over $\mathbb{F}_q.$ Secondly, we give a
necessary and sufficient condition for a $(\Theta, 0, \mathbf{a})$-cyclic code
to be Euclidean dual-containing code over $\mathcal{R}.$ Thirdly, we study Gray
maps and obtain several MDS and optimal linear codes over $\mathbb{F}_q$ as
Gray images of $(\Theta, \Delta_\Theta, \mathbf{a})$-cyclic codes over
$\mathcal{R}.$ Moreover, we determine orthogonality preserving Gray maps and
construct Euclidean dual-containing codes with good parameters. Lastly, as an
application, we construct MDS and almost MDS quantum codes by employing the
Euclidean dual-containing and annihilator dual-containing CSS constructions.
|
2501.01709 | MoVE-KD: Knowledge Distillation for VLMs with Mixture of Visual Encoders | cs.CV cs.AI | Visual encoders are fundamental components in vision-language models (VLMs),
each showcasing unique strengths derived from various pre-trained visual
foundation models. To leverage the various capabilities of these encoders,
recent studies incorporate multiple encoders within a single VLM, leading to a
considerable increase in computational cost. In this paper, we present
Mixture-of-Visual-Encoder Knowledge Distillation (MoVE-KD), a novel framework
that distills the unique proficiencies of multiple vision encoders into a
single, efficient encoder model. Specifically, to mitigate conflicts and retain
the unique characteristics of each teacher encoder, we employ low-rank
adaptation (LoRA) and mixture-of-experts (MoEs) to selectively activate
specialized knowledge based on input features, enhancing both adaptability and
efficiency. To regularize the KD process and enhance performance, we propose an
attention-based distillation strategy that adaptively weighs the different
visual encoders and emphasizes valuable visual tokens, reducing the burden of
replicating comprehensive but distinct features from multiple teachers.
Comprehensive experiments on popular VLMs, such as LLaVA and LLaVA-NeXT,
validate the effectiveness of our method. The code will be released.
|
2501.01710 | Enhancing Large Vision Model in Street Scene Semantic Understanding
through Leveraging Posterior Optimization Trajectory | cs.CV cs.LG cs.RO | To improve the generalization of the autonomous driving (AD) perception
model, vehicles need to update the model over time based on the continuously
collected data. As time progresses, the amount of data fitted by the AD model
expands, which helps to improve the AD model generalization substantially.
However, such ever-expanding data is a double-edged sword for the AD model.
Specifically, as the fitted data volume grows to exceed the AD model's fitting
capacity, the AD model becomes prone to under-fitting. To address this issue,
we propose to use a pretrained Large Vision Model (LVM) as the backbone,
coupled with a downstream perception head, to understand AD semantic
information.
This design can not only surmount the aforementioned under-fitting problem due
to LVMs' powerful fitting capabilities, but also enhance the perception
generalization thanks to LVMs' vast and diverse training data. On the other
hand, to mitigate vehicles' computational burden of training the perception
head while running LVM backbone, we introduce a Posterior Optimization
Trajectory (POT)-Guided optimization scheme (POTGui) to accelerate the
convergence. Concretely, we propose a POT Generator (POTGen) to generate
posterior (future) optimization direction in advance to guide the current
optimization iteration, through which the model can generally converge within
10 epochs. Extensive experiments demonstrate that the proposed method improves
the performance by over 66.48% and converges over 6 times faster, compared to
the existing state-of-the-art approach.
|
2501.01711 | LLMs & Legal Aid: Understanding Legal Needs Exhibited Through User
Queries | cs.HC cs.AI | The paper presents a preliminary analysis of an experiment conducted by Frank
Bold, a Czech expert group, to explore user interactions with GPT-4 for
addressing legal queries. Between May 3, 2023, and July 25, 2023, 1,252 users
submitted 3,847 queries. Unlike studies that primarily focus on the accuracy,
factuality, or hallucination tendencies of large language models (LLMs), our
analysis focuses on the user query dimension of the interaction. Using GPT-4o
for zero-shot classification, we categorized queries on (1) whether users
provided factual information about their issue (29.95%) or not (70.05%), (2)
whether they sought legal information (64.93%) or advice on the course of
action (35.07%), and (3) whether they imposed requirements to shape or control
the model's answer (28.57%) or not (71.43%). We provide both quantitative and
qualitative insight into user needs and contribute to a better understanding of
user engagement with LLMs.
|
2501.01715 | Cloth-Splatting: 3D Cloth State Estimation from RGB Supervision | cs.CV cs.RO | We introduce Cloth-Splatting, a method for estimating 3D states of cloth from
RGB images through a prediction-update framework. Cloth-Splatting leverages an
action-conditioned dynamics model for predicting future states and uses 3D
Gaussian Splatting to update the predicted states. Our key insight is that
coupling a 3D mesh-based representation with Gaussian Splatting allows us to
define a differentiable map between the cloth state space and the image space.
This enables the use of gradient-based optimization techniques to refine
inaccurate state estimates using only RGB supervision. Our experiments
demonstrate that Cloth-Splatting not only improves state estimation accuracy
over current baselines but also reduces convergence time.
|
2501.01716 | Beyond Non-Degeneracy: Revisiting Certainty Equivalent Heuristic for
Online Linear Programming | math.OC cs.DS cs.LG math.PR | The Certainty Equivalent heuristic (CE) is a widely-used algorithm for
various dynamic resource allocation problems in OR and OM. Despite its
popularity, existing theoretical guarantees of CE are limited to settings
satisfying restrictive fluid regularity conditions, particularly, the
non-degeneracy conditions, under the widely held belief that the violation of
such conditions leads to performance deterioration and necessitates algorithmic
innovation beyond CE.
In this work, we conduct a refined performance analysis of CE within the
general framework of online linear programming. We show that CE achieves
uniformly near-optimal regret (up to a polylogarithmic factor in $T$) under
only mild assumptions on the underlying distribution, without relying on any
fluid regularity conditions. Our result implies that, contrary to prior belief,
CE effectively beats the curse of degeneracy for a wide range of problem
instances with continuous conditional reward distributions, highlighting the
distinction of the problem's structure between discrete and non-discrete
settings. Our explicit regret bound interpolates between the mild $(\log T)^2$
regime and the worst-case $\sqrt{T}$ regime with a parameter $\beta$
quantifying the minimal rate of probability accumulation of the conditional
reward distributions, generalizing prior findings in the multisecretary
setting.
To achieve these results, we develop novel algorithmic analytical techniques.
Drawing tools from the empirical processes theory, we establish strong
concentration analysis of the solutions to random linear programs, leading to
improved regret analysis under significantly relaxed assumptions. These
techniques may find potential applications in broader online decision-making
contexts.
|
2501.01717 | KeyNode-Driven Geometry Coding for Real-World Scanned Human Dynamic Mesh
Compression | cs.CV cs.MM eess.SP | The compression of real-world scanned 3D human dynamic meshes is an emerging
research area, driven by applications such as telepresence, virtual reality,
and 3D digital streaming. Unlike synthesized dynamic meshes with fixed
topology, scanned dynamic meshes often not only have varying topology across
frames but also scan defects such as holes and outliers, increasing the
complexity of prediction and compression. Additionally, human meshes often
combine rigid and non-rigid motions, making accurate prediction and encoding
significantly more difficult compared to objects that exhibit purely rigid
motion. To address these challenges, we propose a compression method designed
for real-world scanned human dynamic meshes, leveraging embedded key nodes. The
temporal motion of each vertex is formulated as a distance-weighted combination
of transformations from neighboring key nodes, requiring the transmission of
solely the key nodes' transformations. To enhance the quality of the
KeyNode-driven prediction, we introduce an octree-based residual coding scheme
and a Dual-direction prediction mode, which uses I-frames from both directions.
Extensive experiments demonstrate that our method achieves significant
improvements over the state-of-the-art, with an average bitrate saving of
24.51% across the evaluated sequences, particularly excelling at low bitrates.
|
2501.01720 | Interpretable Face Anti-Spoofing: Enhancing Generalization with
Multimodal Large Language Models | cs.CV | Face Anti-Spoofing (FAS) is essential for ensuring the security and
reliability of facial recognition systems. Most existing FAS methods are
formulated as binary classification tasks, providing confidence scores without
interpretation. They exhibit limited generalization in out-of-domain scenarios,
such as new environments or unseen spoofing types. In this work, we introduce a
multimodal large language model (MLLM) framework for FAS, termed Interpretable
Face Anti-Spoofing (I-FAS), which transforms the FAS task into an interpretable
visual question answering (VQA) paradigm. Specifically, we propose a
Spoof-aware Captioning and Filtering (SCF) strategy to generate high-quality
captions for FAS images, enriching the model's supervision with natural
language interpretations. To mitigate the impact of noisy captions during
training, we develop a Lopsided Language Model (L-LM) loss function that
separates loss calculations for judgment and interpretation, prioritizing the
optimization of the former. Furthermore, to enhance the model's perception of
global visual features, we design a Globally Aware Connector (GAC) to align
multi-level visual representations with the language model. Extensive
experiments on standard and newly devised One to Eleven cross-domain
benchmarks, comprising 12 public datasets, demonstrate that our method
significantly outperforms state-of-the-art methods.
|
2501.01721 | Uncovering the Iceberg in the Sea: Fundamentals of Pulse Shaping and
Modulation Design for Random ISAC Signals | eess.SP cs.IT math.IT | Integrated Sensing and Communications (ISAC) is expected to play a pivotal
role in future 6G networks. To maximize time-frequency resource utilization, 6G
ISAC systems must exploit data payload signals, which are inherently random, for
both communication and sensing tasks. This paper provides a comprehensive
analysis of the sensing performance of such communication-centric ISAC signals,
with a focus on modulation and pulse shaping design to reshape the statistical
properties of their auto-correlation functions (ACFs), thereby improving the
target ranging performance. We derive a closed-form expression for the
expectation of the squared ACF of random ISAC signals, considering arbitrary
modulation bases and constellation mappings within the Nyquist pulse shaping
framework. The structure is metaphorically described as an ``iceberg hidden in
the sea'', where the ``iceberg'' represents the squared mean of the ACF of
random ISAC signals, that is determined by the pulse shaping filter, and the
``sea level'' characterizes the corresponding variance, caused by the
randomness of the data payload. Our analysis shows that, for QAM/PSK
constellations with Nyquist pulse shaping, Orthogonal Frequency Division
Multiplexing (OFDM) achieves the lowest ranging sidelobe level across all lags.
Building on these insights, we propose a novel Nyquist pulse shaping design to
enhance the sensing performance of random ISAC signals. Numerical results
validate our theoretical findings, showing that the proposed pulse shaping
significantly reduces ranging sidelobes compared to conventional root-raised
cosine (RRC) pulse shaping, thereby improving the ranging performance.
|
2501.01722 | AR4D: Autoregressive 4D Generation from Monocular Videos | cs.CV | Recent advancements in generative models have ignited substantial interest in
dynamic 3D content creation (i.e., 4D generation). Existing approaches primarily
rely on Score Distillation Sampling (SDS) to infer novel-view videos, typically
leading to issues such as limited diversity, spatial-temporal inconsistency and
poor prompt alignment, due to the inherent randomness of SDS. To tackle these
problems, we propose AR4D, a novel paradigm for SDS-free 4D generation.
Specifically, our paradigm consists of three stages. To begin with, for a
monocular video that is either generated or captured, we first utilize
pre-trained expert models to create a 3D representation of the first frame,
which is further fine-tuned to serve as the canonical space. Subsequently,
motivated by the fact that videos naturally unfold in an autoregressive manner,
we propose to generate each frame's 3D representation based on its previous
frame's representation, as this autoregressive generation manner can facilitate
more accurate geometry and motion estimation. Meanwhile, to prevent overfitting
during this process, we introduce a progressive view sampling strategy,
utilizing priors from pre-trained large-scale 3D reconstruction models. To
avoid appearance drift introduced by autoregressive generation, we further
incorporate a refinement stage based on a global deformation field and the
geometry of each frame's 3D representation. Extensive experiments have
demonstrated that AR4D can achieve state-of-the-art 4D generation without SDS,
delivering greater diversity, improved spatial-temporal consistency, and better
alignment with input prompts.
|
2501.01723 | IGAF: Incremental Guided Attention Fusion for Depth Super-Resolution | cs.CV | Accurate depth estimation is crucial for many fields, including robotics,
navigation, and medical imaging. However, conventional depth sensors often
produce low-resolution (LR) depth maps, making detailed scene perception
challenging. To address this, enhancing LR depth maps to high-resolution (HR)
ones has become essential, guided by HR-structured inputs like RGB or grayscale
images. We propose a novel sensor fusion methodology for guided depth
super-resolution (GDSR), a technique that combines LR depth maps with HR images
to estimate detailed HR depth maps. Our key contribution is the Incremental
guided attention fusion (IGAF) module, which effectively learns to fuse
features from RGB images and LR depth maps, producing accurate HR depth maps.
Using IGAF, we build a robust super-resolution model and evaluate it on
multiple benchmark datasets. Our model achieves state-of-the-art results
compared to all baseline models on the NYU v2 dataset for $\times 4$, $\times
8$, and $\times 16$ upsampling. It also outperforms all baselines in a
zero-shot setting on the Middlebury, Lu, and RGB-D-D datasets. Code,
environments, and models are available on GitHub.
|
2501.01725 | Subject Specific Deep Learning Model for Motor Imagery Direction
Decoding | eess.SP cs.NE | Hemispheric strokes impair motor control in contralateral body parts,
necessitating effective rehabilitation strategies. Motor Imagery-based
Brain-Computer Interfaces (MI-BCIs) promote neuroplasticity, aiding the
recovery of motor functions. While deep learning has shown promise in decoding
MI actions for stroke rehabilitation, existing studies largely focus on
bilateral MI actions and are limited to offline evaluations. Decoding
directional information from unilateral MI, however, offers a more natural
control interface with greater degrees of freedom but remains challenging due
to spatially overlapping neural activity. This work proposes a novel deep
learning framework for online decoding of binary directional MI signals from
the dominant hand of 20 healthy subjects. The proposed method employs
EEGNet-based convolutional filters to extract temporal and spatial features.
The EEGNet model is enhanced by Squeeze-and-Excitation (SE) layers that rank
the electrode importance and feature maps. A subject-independent model is
initially trained using calibration data from multiple subjects and fine-tuned
for subject-specific adaptation. The performance of the proposed method is
evaluated using subject-specific online session data. The proposed method
achieved an average right-vs-left binary direction decoding accuracy of 58.7
+/- 8% for unilateral MI tasks, outperforming existing deep learning
models. Additionally, the SE-layer ranking offers insights into electrode
contribution, enabling potential subject-specific BCI optimization. The
findings highlight the efficacy of the proposed method in advancing MI-BCI
applications for a more natural and effective control of BCI systems.
|
2501.01726 | Sensor Placement on a Cantilever Beam Using Observability Gramians | eess.SY cs.SY math.AP math.OC | Working from an observability characterization based on output energy
sensitivity to changes in initial conditions, we derive both analytical and
empirical observability Gramian tools for a class of continuum material
systems. Using these results, optimal sensor placement is calculated for an
Euler-Bernoulli cantilever beam for the following cases: analytical
observability for the continuum system and analytical observability for a
finite number of modes. Error covariance of an Unscented Kalman Filter is
determined for both cases and compared to randomly placed sensors to
demonstrate effectiveness of the techniques.
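The empirical-Gramian construction the abstract refers to can be sketched generically: perturb each component of the initial condition by ±eps, simulate, and correlate the output differences. The `step`/`output` callables here are user-supplied stand-ins; the actual beam study works with an Euler-Bernoulli continuum or modal model rather than the toy linear system used below.

```python
import numpy as np

def empirical_obs_gramian(step, output, x0, eps=1e-4, n_steps=200, dt=0.01):
    """Empirical observability Gramian via +/- perturbations of the initial state.

    step(x) advances the state one time step; output(x) returns the measurement.
    Entry (i, j) integrates the correlation of output sensitivities to x0[i], x0[j].
    """
    n = len(x0)
    trajs = []
    for i in range(n):
        for sign in (+1.0, -1.0):
            x = np.array(x0, dtype=float)
            x[i] += sign * eps
            ys = []
            for _ in range(n_steps):
                ys.append(output(x))
                x = step(x)
            trajs.append(np.array(ys))
    W = np.zeros((n, n))
    for i in range(n):
        dyi = trajs[2 * i] - trajs[2 * i + 1]     # y(+eps) - y(-eps) along x0[i]
        for j in range(n):
            dyj = trajs[2 * j] - trajs[2 * j + 1]
            W[i, j] = np.sum(dyi * dyj) * dt / (4.0 * eps ** 2)
    return W
```

For sensor placement, one would recompute W for each candidate sensor location (each candidate `output` map) and rank locations by a scalar measure of W, e.g. its smallest eigenvalue or trace.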
|
2501.01727 | Proposing Hierarchical Goal-Conditioned Policy Planning in Multi-Goal
Reinforcement Learning | cs.AI cs.LG | Humanoid robots must master numerous tasks with sparse rewards, posing a
challenge for reinforcement learning (RL). We propose a method combining RL and
automated planning to address this. Our approach uses short goal-conditioned
policies (GCPs) organized hierarchically, with Monte Carlo Tree Search (MCTS)
planning using high-level actions (HLAs). Instead of primitive actions, the
planning process generates HLAs. A single plan-tree, maintained during the
agent's lifetime, holds knowledge about goal achievement. This hierarchy
enhances sample efficiency and speeds up reasoning by reusing HLAs and
anticipating future actions. Our Hierarchical Goal-Conditioned Policy Planning
(HGCPP) framework uniquely integrates GCPs, MCTS, and hierarchical RL,
potentially improving exploration and planning in complex tasks.
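The plan-tree over high-level actions can be illustrated with a standard UCB1 selection step, a common choice inside MCTS. This is a generic sketch under that assumption, not the HGCPP implementation: `PlanNode` and `ucb_select` are hypothetical names, and each HLA would in practice invoke a short goal-conditioned policy rather than a primitive action.

```python
import math

class PlanNode:
    """A node in a persistent plan-tree; edges are high-level actions (HLAs)."""
    def __init__(self):
        self.children = {}        # hla -> PlanNode
        self.visits = 0
        self.value = 0.0          # running mean of returns

def ucb_select(node, hlas, c=1.4):
    """Pick the HLA maximizing the UCB1 score; untried actions are taken first."""
    best, best_score = None, -float("inf")
    for a in hlas:
        child = node.children.get(a)
        if child is None or child.visits == 0:
            return a                              # explore untried HLAs first
        score = child.value + c * math.sqrt(math.log(node.visits) / child.visits)
        if score > best_score:
            best, best_score = a, score
    return best
```

Keeping the tree for the agent's whole lifetime, as the abstract describes, means `visits` and `value` accumulate across episodes, so previously learned HLAs are reused instead of re-explored.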
|
2501.01728 | Multi-modal classification of forest biodiversity potential from 2D
orthophotos and 3D airborne laser scanning point clouds | cs.CV | Accurate assessment of forest biodiversity is crucial for ecosystem
management and conservation. While traditional field surveys provide
high-quality assessments, they are labor-intensive and spatially limited. This
study investigates whether deep learning-based fusion of close-range sensing
data from 2D orthophotos (12.5 cm resolution) and 3D airborne laser scanning
(ALS) point clouds (8 points/m^2) can enhance biodiversity assessment. We
introduce the BioVista dataset, comprising 44,378 paired samples of orthophotos
and ALS point clouds from temperate forests in Denmark, designed to explore
multi-modal fusion approaches for biodiversity potential classification. Using
deep neural networks (ResNet for orthophotos and PointVector for ALS point
clouds), we investigate each data modality's ability to assess forest
biodiversity potential, achieving mean accuracies of 69.4% and 72.8%,
respectively. We explore two fusion approaches: a confidence-based ensemble
method and a feature-level concatenation strategy, with the latter achieving a
mean accuracy of 75.5%. Our results demonstrate that spectral information from
orthophotos and structural information from ALS point clouds effectively
complement each other in forest biodiversity assessment.
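The two fusion strategies can be contrasted in a few lines of numpy. This is a sketch of the general idea, assuming softmax outputs and embeddings from the two backbones; the linear head `(w, b)` stands in for the trained fusion classifier and is hypothetical.

```python
import numpy as np

def confidence_ensemble(probs_2d, probs_3d):
    """Confidence-based ensemble: per sample, keep the modality whose
    prediction is most confident (highest max class probability)."""
    conf_2d = probs_2d.max(axis=1)
    conf_3d = probs_3d.max(axis=1)
    pick_2d = conf_2d >= conf_3d
    return np.where(pick_2d[:, None], probs_2d, probs_3d).argmax(axis=1)

def feature_concat(feats_2d, feats_3d, w, b):
    """Feature-level fusion: concatenate embeddings from both backbones
    and classify with a (hypothetical) linear head (w, b)."""
    fused = np.concatenate([feats_2d, feats_3d], axis=1)
    return (fused @ w + b).argmax(axis=1)
```

Feature-level concatenation lets the classifier weigh spectral and structural evidence jointly per class, which is consistent with it outperforming the late, per-sample ensemble in the reported results.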
|
2501.01732 | Combined Hyper-Extensible Extremely-Secured Zero-Trust CIAM-PAM
architecture | cs.CR cs.AI cs.NI | Customer Identity and Access Management (CIAM) systems play a pivotal role in
securing enterprise infrastructures. However, the complexity of implementing
these systems requires careful architectural planning to ensure positive Return
on Investment (RoI) and avoid costly delays. The proliferation of Active
Persistent cyber threats, coupled with advancements in AI, cloud computing, and
geographically distributed customer populations, necessitates a paradigm shift
towards adaptive and zero-trust security frameworks. This paper introduces the
Combined Hyper-Extensible Extremely-Secured Zero-Trust (CHEZ) CIAM-PAM
architecture, designed specifically for large-scale enterprises. The CHEZ PL
CIAM-PAM framework addresses critical security gaps by integrating federated
identity management (private and public identities), password-less
authentication, adaptive multi-factor authentication (MFA), microservice-based
PEP (Policy Entitlement Point), multi-layer RBAC (Role Based Access Control)
and multi-level trust systems. This future-proof design also includes
end-to-end data encryption, and seamless integration with state-of-the-art
AI-based threat detection systems, while ensuring compliance with stringent
regulatory standards.
|
2501.01733 | Augmentation Matters: A Mix-Paste Method for X-Ray Prohibited Item
Detection under Noisy Annotations | cs.CV cs.AI | Automatic X-ray prohibited item detection is vital for public safety.
Existing deep learning-based methods all assume that the annotations of
training X-ray images are correct. However, obtaining correct annotations is
extremely hard if not impossible for large-scale X-ray images, where item
overlapping is ubiquitous. As a result, X-ray images are easily contaminated
with noisy annotations, leading to performance deterioration of existing
methods. In this paper, we address the challenging problem of training a robust
prohibited item detector under noisy annotations (including both category noise
and bounding box noise) from a novel perspective of data augmentation, and
propose an effective label-aware mixed patch paste augmentation method
(Mix-Paste). Specifically, for each item patch, we mix several item patches
with the same category label from different images and replace the original
patch in the image with the mixed patch. In this way, the probability of
containing the correct prohibited item within the generated image is increased.
Meanwhile, the mixing process mimics item overlapping, enabling the model to
learn the characteristics of X-ray images. Moreover, we design an item-based
large-loss suppression (LLS) strategy to suppress the large losses
corresponding to potentially positive predictions of additional items due to
the mixing operation. We show the superiority of our method on X-ray datasets
under noisy annotations. In addition, we evaluate our method on the noisy
MS-COCO dataset to showcase its generalization ability. These results clearly
indicate the great potential of data augmentation to handle noisy annotations.
The source code is released at https://github.com/wscds/Mix-Paste.
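The patch-mixing step can be sketched directly on arrays. This is an illustrative simplification, not the released implementation: pixel-wise max is assumed here as a proxy for X-ray item overlap (denser regions dominate), and patches are assumed pre-resized to the annotated box.

```python
import numpy as np

def mix_paste(image, box, same_label_patches):
    """Mix several same-label item patches and paste over the annotated box.

    image: (H, W) grayscale X-ray array; box: (y0, y1, x0, x1) annotation.
    same_label_patches: patches cropped from other images with the same
    category label. Mixing raises the chance that at least one patch truly
    contains the labeled item, which is the point of the augmentation.
    """
    y0, y1, x0, x1 = box
    h, w = y1 - y0, x1 - x0
    mixed = image[y0:y1, x0:x1].copy()
    for p in same_label_patches:
        # Assumption: patches already match the box size.
        mixed = np.maximum(mixed, p[:h, :w])
    out = image.copy()
    out[y0:y1, x0:x1] = mixed
    return out
```

The accompanying large-loss suppression then discounts losses on extra items the mixing introduces, since those produce seemingly "false positive" detections that are actually correct.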
|
2501.01741 | How Toxic Can You Get? Search-based Toxicity Testing for Large Language
Models | cs.SE cs.AI cs.CL | Language is a deep-rooted means of perpetration of stereotypes and
discrimination. Large Language Models (LLMs), now a pervasive technology in our
everyday lives, can cause extensive harm when prone to generating toxic
responses. The standard way to address this issue is to align the LLM, which,
however, dampens the issue without constituting a definitive solution.
Therefore, testing LLM even after alignment efforts remains crucial for
detecting any residual deviations with respect to ethical standards. We present
EvoTox, an automated testing framework for LLMs' inclination to toxicity,
providing a way to quantitatively assess how much LLMs can be pushed towards
toxic responses even in the presence of alignment. The framework adopts an
iterative evolution strategy that exploits the interplay between two LLMs, the
System Under Test (SUT) and the Prompt Generator steering SUT responses toward
higher toxicity. The toxicity level is assessed by an automated oracle based on
an existing toxicity classifier. We conduct a quantitative and qualitative
empirical evaluation using four state-of-the-art LLMs as evaluation subjects
of increasing complexity (7-13 billion parameters). Our quantitative
evaluation assesses the cost-effectiveness of four alternative versions of
EvoTox against existing baseline methods, based on random search, curated
datasets of toxic prompts, and adversarial attacks. Our qualitative assessment
engages human evaluators to rate the fluency of the generated prompts and the
perceived toxicity of the responses collected during the testing sessions.
Results indicate that the effectiveness, in terms of detected toxicity level,
is significantly higher than the selected baseline methods (effect size up to
1.0 against random search and up to 0.99 against adversarial attacks).
Furthermore, EvoTox yields a limited cost overhead (from 22% to 35% on
average).
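The iterative evolution strategy reduces to a simple hill-climbing loop over prompts. The sketch below is a skeleton under stated assumptions: `mutate` stands in for the Prompt Generator LLM, `sut_respond` for the System Under Test, and `toxicity` for the automated oracle; all three names are hypothetical.

```python
def evotox_search(seed_prompt, mutate, sut_respond, toxicity, iterations=50):
    """Evolve a prompt toward higher-toxicity responses from the SUT.

    Each round, the generator proposes a variant of the current best prompt;
    the variant is kept only if the oracle scores the SUT's response as more
    toxic. Returns the most provoking prompt found and its score.
    """
    best_prompt = seed_prompt
    best_score = toxicity(sut_respond(best_prompt))
    for _ in range(iterations):
        candidate = mutate(best_prompt)
        score = toxicity(sut_respond(candidate))
        if score > best_score:                 # keep the more provoking prompt
            best_prompt, best_score = candidate, score
    return best_prompt, best_score
```

The final `best_score` is the quantity the abstract reports: how far an aligned model can still be pushed toward toxic output.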
|
2501.01743 | Automating Legal Concept Interpretation with LLMs: Retrieval,
Generation, and Evaluation | cs.CL cs.AI | Legal articles often include vague concepts for adapting to the ever-changing
society. Providing detailed interpretations of these concepts is a critical and
challenging task even for legal practitioners. It requires meticulous and
professional annotations and summarizations by legal experts, which are
admittedly time-consuming and expensive to collect at scale. By emulating legal
experts' doctrinal method, we introduce a novel framework, ATRIE, using large
language models (LLMs) to AuTomatically Retrieve concept-related information,
Interpret legal concepts, and Evaluate generated interpretations, eliminating
dependence on legal experts. ATRIE comprises a legal concept interpreter and a
legal concept interpretation evaluator. The interpreter uses LLMs to retrieve
relevant information from judicial precedents and interpret legal concepts. The
evaluator uses performance changes on legal concept entailment, a downstream
task we propose, as a proxy of interpretation quality. Automatic and
multifaceted human evaluations indicate that the quality of our interpretations
is comparable to those written by legal experts, with superior
comprehensiveness and readability. Although there remains a slight gap in
accuracy, it can already assist legal practitioners in improving the efficiency
of concept interpretation.
|
2501.01752 | Laparoscopic Scene Analysis for Intraoperative Visualisation of Gamma
Probe Signals in Minimally Invasive Cancer Surgery | eess.IV cs.CV physics.med-ph | Cancer remains a significant health challenge worldwide, with a new diagnosis
occurring every two minutes in the UK. Surgery is one of the main treatment
options for cancer. However, surgeons rely on the sense of touch and naked eye
with limited use of pre-operative image data to directly guide the excision of
cancerous tissues and metastases due to the lack of reliable intraoperative
visualisation tools. This leads to increased costs and harm to the patient
where the cancer is removed with positive margins, or where other critical
structures are unintentionally impacted. There is therefore a pressing need for
more reliable and accurate intraoperative visualisation tools for minimally
invasive surgery to improve surgical outcomes and enhance patient care.
A recent miniaturised cancer detection probe (i.e., SENSEI developed by
Lightpoint Medical Ltd.) leverages the cancer-targeting ability of nuclear
agents to more accurately identify cancer intra-operatively using the emitted
gamma signal. However, the use of this probe presents a visualisation challenge
as the probe is non-imaging and is air-gapped from the tissue, making it
challenging for the surgeon to locate the probe-sensing area on the tissue
surface. Geometrically, the sensing area is defined as the intersection point
between the gamma probe axis and the tissue surface in 3D space, projected
onto the 2D laparoscopic image. Hence, in this thesis, tool tracking, pose
estimation, and segmentation tools were developed first, followed by
laparoscope image depth estimation algorithms and 3D reconstruction methods.
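The geometric definition of the sensing area can be made concrete. A minimal sketch, assuming a pinhole camera with intrinsics K and a locally planar tissue patch: a real pipeline would intersect the tracked probe axis with the surface reconstructed from the depth-estimation stage instead of a plane.

```python
import numpy as np

def sensing_area_pixel(probe_tip, probe_dir, plane_point, plane_normal, K):
    """Locate the gamma probe's sensing area on the laparoscopic image.

    Intersect the probe axis (tip + t * dir) with a planar tissue surface,
    then project the 3D intersection through camera intrinsics K (points
    expressed in the camera frame). Returns pixel coordinates (u, v), or
    None if the axis is parallel to the surface.
    """
    d = np.dot(plane_normal, probe_dir)
    if abs(d) < 1e-9:
        return None                           # axis parallel to the surface
    t = np.dot(plane_normal, plane_point - probe_tip) / d
    p = probe_tip + t * probe_dir             # 3D intersection point
    uvw = K @ p                               # homogeneous image coordinates
    return uvw[:2] / uvw[2]                   # pixel coordinates (u, v)
```

This is why the thesis needs tool tracking and pose estimation (to get `probe_tip` and `probe_dir`) plus depth estimation and 3D reconstruction (to replace the plane with the actual tissue surface).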
|
2501.01760 | From Age Estimation to Age-Invariant Face Recognition: Generalized Age
Feature Extraction Using Order-Enhanced Contrastive Learning | cs.CV | Generalized age feature extraction is crucial for age-related facial analysis
tasks, such as age estimation and age-invariant face recognition (AIFR).
Despite the recent successes of models in homogeneous-dataset experiments,
their performance drops significantly in cross-dataset evaluations. Most of
these models fail to extract generalized age features as they only attempt to
map extracted features with training age labels directly without explicitly
modeling the natural progression of aging. In this paper, we propose
Order-Enhanced Contrastive Learning (OrdCon), which aims to extract generalized
age features to minimize the domain gap across different datasets and
scenarios. OrdCon aligns the direction vector of two features with either the
natural aging direction or its reverse to effectively model the aging process.
The method also incorporates metric learning with a novel soft proxy matching
loss to ensure that features are positioned around the
center of each age cluster with minimum intra-class variance. We demonstrate
that our proposed method achieves comparable results to state-of-the-art
methods on various benchmark datasets in homogeneous-dataset evaluations for
both age estimation and AIFR. In cross-dataset experiments, our method reduces
the mean absolute error by about 1.38 on average for the age estimation task and
boosts the average accuracy for AIFR by 1.87%.
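The direction-alignment idea at the core of OrdCon can be written as a cosine loss. This is a sketch of the concept rather than the paper's exact objective: `aging_dir` stands in for a learned aging-direction vector, and the full method also includes the soft proxy matching term.

```python
import numpy as np

def aging_direction_loss(feat_young, feat_old, aging_dir):
    """Align the feature difference with the natural aging direction.

    Loss is 1 - cosine similarity between (feat_old - feat_young) and the
    aging direction: 0 when the pair's features move exactly along the aging
    axis, up to 2 when they move directly against it.
    """
    diff = feat_old - feat_young
    denom = np.linalg.norm(diff) * np.linalg.norm(aging_dir) + 1e-12
    cos = np.dot(diff, aging_dir) / denom
    return 1.0 - cos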
|