| id | title | categories | abstract |
|---|---|---|---|
2501.06003
|
Learning to generate feasible graphs using graph grammars
|
cs.LG
|
Generative methods for graphs need to be sufficiently flexible to model
complex dependencies between sets of nodes. At the same time, the generated
graphs need to satisfy domain-dependent feasibility conditions, that is, they
should not violate certain constraints that would make their interpretation
impossible within the given application domain (e.g. a molecular graph where an
atom has a very large number of chemical bonds). Crucially, constraints can
involve not only local but also long-range dependencies: for example, the
maximal length of a cycle can be bounded.
Currently, a large class of generative approaches for graphs, such as methods
based on artificial neural networks, is based on message passing schemes. These
approaches suffer from information 'dilution' issues that severely limit the
maximal range of the dependencies that can be modeled. To address this problem,
we propose a generative approach based on the notion of graph grammars. The key
novel idea is to introduce a domain-dependent coarsening procedure to provide
short-cuts for long-range dependencies.
We show the effectiveness of our proposal in two domains: 1) small drugs and
2) RNA secondary structures. In the first case, we compare the quality of the
generated molecular graphs via the Molecular Sets (MOSES) benchmark suite,
which evaluates the distance between generated and real molecules, their
lipophilicity, synthesizability, and drug-likeness. In the second case, we show
that the approach can generate very large graphs (with hundreds of nodes) that
are accepted as valid examples for a desired RNA family by the "Infernal"
covariance model, a state-of-the-art RNA classifier.
Our implementation is available on GitHub:
github.com/fabriziocosta/GraphLearn
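The coarsening idea can be illustrated with a toy sketch: contracting a group of nodes into a single super-node turns a long-range dependency (e.g. along a cycle) into a local one. This is only an illustration of graph contraction under assumed names and data structures, not the paper's grammar machinery:

```python
def coarsen(edges, group, super_node):
    """Contract a group of nodes into one super-node: edges inside the group
    disappear, edges crossing its boundary are redirected to the super-node."""
    out = set()
    for u, v in edges:
        u2 = super_node if u in group else u
        v2 = super_node if v in group else v
        if u2 != v2:  # drop self-loops created by the contraction
            out.add((min(u2, v2), max(u2, v2)))
    return out

# Contract two nodes of a 4-cycle: the far side of the cycle is now one hop away.
coarse = coarsen([(0, 1), (1, 2), (2, 3), (3, 0)], group={1, 2}, super_node=9)
```

In the coarsened graph, constraints spanning the contracted region become short-range and thus visible to local generation rules.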
|
2501.06004
|
SeMi: When Imbalanced Semi-Supervised Learning Meets Mining Hard
Examples
|
cs.CV
|
Semi-Supervised Learning (SSL) can leverage abundant unlabeled data to boost
model performance. However, the class-imbalanced data distribution in
real-world scenarios poses great challenges to SSL, resulting in performance
degradation. Existing class-imbalanced semi-supervised learning (CISSL) methods
mainly focus on rebalancing datasets but ignore the potential of using hard
examples to enhance performance, making it difficult to fully harness the power
of unlabeled data even with sophisticated algorithms. To address this issue, we
propose a method that enhances the performance of Imbalanced Semi-Supervised
Learning by Mining Hard Examples (SeMi). This method distinguishes the entropy
differences among logits of hard and easy examples, thereby identifying hard
examples and increasing the utility of unlabeled data, better addressing the
imbalance problem in CISSL. In addition, we maintain a class-balanced memory
bank with confidence decay for storing high-confidence embeddings to enhance
the pseudo-labels' reliability. Although our method is simple, it is effective
and seamlessly integrates with existing approaches. We perform comprehensive
experiments on standard CISSL benchmarks and experimentally demonstrate that
our proposed SeMi outperforms existing state-of-the-art methods on multiple
benchmarks, especially in reversed scenarios, where our best result shows
approximately a 54.8% improvement over the baseline methods.
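The entropy-based mining described above can be sketched in a few lines: examples whose predictive entropy is high are treated as hard. This is a minimal illustration of the general idea, not the SeMi algorithm; the threshold and function names are invented for the example:

```python
import math

def softmax(logits):
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def entropy(probs):
    """Shannon entropy of a discrete distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def mine_hard_examples(batch_logits, threshold=0.5):
    """Flag examples whose predictive entropy exceeds a threshold: uncertain
    (high-entropy) predictions are treated as hard examples worth mining."""
    return [entropy(softmax(l)) > threshold for l in batch_logits]

# One confident (easy) and one ambiguous (hard) prediction:
flags = mine_hard_examples([[8.0, 0.1, 0.1], [1.0, 0.9, 1.1]])
```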
|
2501.06006
|
CamCtrl3D: Single-Image Scene Exploration with Precise 3D Camera Control
|
cs.CV
|
We propose a method for generating fly-through videos of a scene, from a
single image and a given camera trajectory. We build upon an image-to-video
latent diffusion model. We condition its UNet denoiser on the camera
trajectory, using four techniques. (1) We condition the UNet's temporal blocks
on raw camera extrinsics, similar to MotionCtrl. (2) We use images containing
camera rays and directions, similar to CameraCtrl. (3) We reproject the initial
image to subsequent frames and use the resulting video as a condition. (4) We
use 2D↔3D transformers to introduce a global 3D representation, which
implicitly conditions on the camera poses. We combine all conditions in a
ControlNet-style architecture. We then propose a metric that evaluates overall
video quality and the ability to preserve details with view changes, which we
use to analyze the trade-offs of individual and combined conditions. Finally,
we identify an optimal combination of conditions. We calibrate camera positions
in our datasets for scale consistency across scenes, and we train our scene
exploration model, CamCtrl3D, demonstrating state-of-the-art results.
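The camera-ray conditioning in technique (2) can be sketched by building a per-pixel ray image from intrinsics and extrinsics. This is a simplified stand-in for a CameraCtrl-style ray embedding; the pixel-center convention and the packing into six channels are assumptions:

```python
import numpy as np

def ray_condition_image(K, R, t, h, w):
    """Build a 6-channel per-pixel (origin, direction) ray image for a pinhole
    camera with intrinsics K and world-to-camera extrinsics (R, t)."""
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    pix = np.stack([xs + 0.5, ys + 0.5, np.ones_like(xs)], axis=-1)  # (h, w, 3)
    dirs = pix @ np.linalg.inv(K).T @ R  # back-project, rotate into world frame
    dirs /= np.linalg.norm(dirs, axis=-1, keepdims=True)
    origin = -R.T @ t                    # camera center in world coordinates
    origins = np.broadcast_to(origin, dirs.shape)
    return np.concatenate([origins, dirs], axis=-1)  # (h, w, 6)

# Identity camera at the origin: all rays start at (0, 0, 0), unit directions.
rays = ray_condition_image(np.eye(3), np.eye(3), np.zeros(3), h=4, w=4)
```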
|
2501.06007
|
CoNOAir: A Neural Operator for Forecasting Carbon Monoxide Evolution in
Cities
|
cs.LG
|
Carbon Monoxide (CO) is a dominant pollutant in urban areas due to energy
generation from fossil fuels for industrial, automotive, and domestic needs.
Forecasting the evolution of CO in real time can enable the
deployment of effective early warning systems and intervention strategies.
However, the computational cost associated with the physics and chemistry-based
simulation makes it prohibitive to implement such a model at the city and
country scale. To address this challenge, we present a machine learning
model based on a neural operator, namely the Complex Neural Operator for Air Quality
(CoNOAir), that can effectively forecast CO concentrations. We demonstrate this
by developing a country-level model for short-term (hourly) and long-term
(72-hour) forecasts of CO concentrations. Our model outperforms
state-of-the-art models such as Fourier neural operators (FNO) and provides
reliable predictions for both short- and long-term horizons. We further analyse
the capability of the model to capture extreme events and generate forecasts
for cities in India. Interestingly, the model predicts the next-hour
CO concentrations with R2 values greater than 0.95 for all the cities
considered. The deployment of such a model can greatly assist the governing
bodies to provide early warning, plan intervention strategies, and develop
effective strategies by considering several what-if scenarios. Altogether, the
present approach could provide a fillip to real-time predictions of CO
pollution in urban cities.
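Since the paper positions CoNOAir against Fourier neural operators (FNO), a minimal 1D FNO spectral layer may help fix ideas. This sketch is not CoNOAir itself; it follows the standard FNO recipe of truncating to the lowest `modes` frequencies with learned complex weights:

```python
import numpy as np

def fourier_layer(u, weights, modes):
    """One spectral-convolution layer of a Fourier neural operator (1D sketch):
    FFT, scale the lowest `modes` frequencies by learned complex weights,
    zero the rest, inverse FFT."""
    u_hat = np.fft.rfft(u)
    out_hat = np.zeros_like(u_hat)
    out_hat[:modes] = weights[:modes] * u_hat[:modes]
    return np.fft.irfft(out_hat, n=len(u))

# With all-ones weights and every mode kept, the layer is the identity map.
u = np.sin(np.linspace(0, 2 * np.pi, 32, endpoint=False))
v = fourier_layer(u, np.ones(17, dtype=complex), modes=17)
```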
|
2501.06014
|
Pose-independent 3D Anthropometry from Sparse Data
|
cs.CV
|
3D digital anthropometry is the study of estimating human body measurements
from 3D scans. Precise body measurements are important health indicators in the
medical industry, and guiding factors in the fashion, ergonomic and
entertainment industries. The standard measuring protocol scans the whole
subject in a static A-pose, held without breathing or movement during the
scanning process. However, the A-pose is hard to maintain for the full
duration of the scan, which can last up to a couple of minutes. This
constraint affects the final quality of the scan, which in turn affects the
accuracy of the estimated body measurements obtained from methods that rely on
dense geometric data. Additionally, this constraint makes it impossible to
develop a digital anthropometry method for subjects unable to assume the
A-pose, such as those with injuries or disabilities. We propose a method that
can obtain body measurements from sparse landmarks acquired in any pose. We
make use of the sparse landmarks of the posed subject to create
pose-independent features, and train a network to predict the body measurements
as taken from the standard A-pose. We show that our method achieves comparable
results to competing methods that use dense geometry in the standard A-pose,
but has the capability of estimating the body measurements from any pose using
sparse landmarks only. Finally, we address the lack of open-source 3D
anthropometry methods by making our method available to the research community
at https://github.com/DavidBoja/pose-independent-anthropometry.
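One simple family of pose-independent features from sparse landmarks is the set of pairwise distances, which are invariant to the subject's global rotation and translation. This is a hedged stand-in for the paper's features, which the abstract does not specify:

```python
import math
from itertools import combinations

def pairwise_distance_features(landmarks):
    """Pose-independent features from sparse 3D landmarks: pairwise Euclidean
    distances do not change under global rotation or translation of the body."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return [dist(a, b) for a, b in combinations(landmarks, 2)]

pts = [(0, 0, 0), (1, 0, 0), (0, 2, 0)]
shifted = [(x + 5, y - 3, z + 1) for x, y, z in pts]  # same subject, moved
f1 = pairwise_distance_features(pts)
f2 = pairwise_distance_features(shifted)  # identical despite the pose change
```

Distances alone are not articulation-invariant, so a real system would restrict them to rigid body segments or learn invariances, as the paper's network presumably does.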
|
2501.06016
|
Investigating the Impact of Observation Space Design Choices On Training
Reinforcement Learning Solutions for Spacecraft Problems
|
cs.LG cs.SY eess.SY
|
Recent research using Reinforcement Learning (RL) to learn autonomous control
for spacecraft operations has shown great success. However, a recent study
showed that performance could be improved by changing the action space, i.e.
the control outputs used in the learning environment. This has opened the door for
finding more improvements through further changes to the environment. The work
in this paper focuses on how changes to the environment's observation space can
impact the training and performance of RL agents learning the spacecraft
inspection task. The studies are split into two groups. The first looks at the
impact of sensors that were designed to help agents learn the task. The second
looks at the impact of reference frames, reorienting the agent to see the world
from a different perspective. The results show that the sensors are not strictly
necessary, though most of them help agents learn more optimal behavior, and that
the choice of reference frame has little impact but is best kept consistent.
|
2501.06019
|
BRIGHT: A globally distributed multimodal building damage assessment
dataset with very-high-resolution for all-weather disaster response
|
cs.CV cs.AI eess.IV eess.SP
|
Disaster events occur around the world and cause significant damage to human
life and property. Earth observation (EO) data enables rapid and comprehensive
building damage assessment (BDA), an essential capability in the aftermath of a
disaster to reduce human casualties and to inform disaster relief efforts.
Recent research focuses on the development of AI models to achieve accurate
mapping of unseen disaster events, mostly using optical EO data. However,
solutions based on optical data are limited to clear skies and daylight hours,
preventing a prompt response to disasters. Integrating multimodal (MM) EO data,
particularly the combination of optical and SAR imagery, makes it possible to
provide all-weather, day-and-night disaster responses. Despite this potential,
the development of robust multimodal AI models has been constrained by the lack
of suitable benchmark datasets. In this paper, we present a BDA dataset using
veRy-hIGH-resoluTion optical and SAR imagery (BRIGHT) to support AI-based
all-weather disaster response. To the best of our knowledge, BRIGHT is the
first open-access, globally distributed, event-diverse MM dataset specifically
curated to support AI-based disaster response. It covers five types of natural
disasters and two types of man-made disasters across 12 regions worldwide, with
a particular focus on developing countries where external assistance is most
needed. The optical and SAR imagery in BRIGHT, with a spatial resolution
between 0.3-1 meters, provides detailed representations of individual
buildings, making it ideal for precise BDA. In our experiments, we tested
seven advanced AI models trained on BRIGHT to validate its transferability
and robustness. The dataset and code are available at
https://github.com/ChenHongruixuan/BRIGHT. BRIGHT also serves as the official
dataset for the 2025 IEEE GRSS Data Fusion Contest.
|
2501.06025
|
How to Tune a Multilingual Encoder Model for Germanic Languages: A Study
of PEFT, Full Fine-Tuning, and Language Adapters
|
cs.CL cs.AI
|
This paper investigates the optimal use of the multilingual encoder model
mDeBERTa for tasks in three Germanic languages -- German, Swedish, and
Icelandic -- representing varying levels of presence and likely data quality in
mDeBERTa's pre-training data. We compare full fine-tuning with the
parameter-efficient fine-tuning (PEFT) methods LoRA and Pfeiffer bottleneck
adapters, finding that PEFT is more effective for the higher-resource language,
German. However, results for Swedish and Icelandic are less consistent. We also
observe differences between tasks: While PEFT tends to work better for question
answering, full fine-tuning is preferable for named entity recognition.
Inspired by previous research on modular approaches that combine task and
language adapters, we evaluate the impact of adding PEFT modules trained on
unstructured text, finding that this approach is not beneficial.
|
2501.06027
|
Geometric-Based Nail Segmentation for Clinical Measurements
|
cs.CV
|
A robust segmentation method that can be used to perform measurements on
toenails is presented. The proposed method is used as the first step in a
clinical trial to objectively quantify the incidence of a particular pathology.
For such an assessment, it is necessary to distinguish the nail, which locally
appears similar to the surrounding skin. We combined several algorithms, each
leveraging a different aspect of toenail appearance. We used the Hough
transform to locate the tip of the toe and estimate the nail location and size.
Subsequently, we classified the super-pixels of the image based on their
geometric and photometric information. Thereafter, the watershed transform
delineated the border of the nail. The method was validated using a 348-image
medical dataset, achieving an accuracy of 0.993 and an F-measure of 0.925. The
proposed method is considerably robust across samples, with respect to factors
such as nail shape, skin pigmentation, illumination conditions, and appearance
of large regions affected by a medical condition.
|
2501.06030
|
Resiliency metrics quantifying emergency response in a distribution
system
|
eess.SY cs.SY
|
The electric distribution system is a cornerstone of modern life, playing a
critical role in the daily activities and well-being of individuals. As the
world transitions toward a decarbonized future, where even mobility relies on
electricity, ensuring the resilience of the grid becomes paramount. This paper
introduces novel resilience metrics designed to equip utilities and
stakeholders with actionable tools to assess performance during storm events.
The metrics focus on emergency storm response and the resources required to
improve customer service. The practical calculation of the metrics from
historical utility data is demonstrated for multiple storm events.
Additionally, the metrics' improvement with added crews is estimated by
"rerunning history" with faster restoration. By applying this resilience
framework, utilities can enhance their restoration strategies and unlock
potential cost savings, benefiting both providers and customers in an era of
heightened energy dependency.
|
2501.06031
|
Generate, Transduct, Adapt: Iterative Transduction with VLMs
|
cs.CV
|
Transductive zero-shot learning with vision-language models leverages
image-image similarities within the dataset to achieve better classification
accuracy compared to the inductive setting. However, there is little work that
explores the structure of the language space in this context. We propose
GTA-CLIP, a novel technique that incorporates supervision from language models
for joint transduction in language and vision spaces. Our approach is iterative
and consists of three steps: (i) incrementally exploring the attribute space by
querying language models, (ii) an attribute-augmented transductive inference
procedure, and (iii) fine-tuning the language and vision encoders based on
inferred labels within the dataset. Through experiments with CLIP encoders, we
demonstrate that GTA-CLIP yields average performance improvements of 8.6%
and 3.7% across 12 datasets and 3 encoders over CLIP and transductive CLIP,
respectively, in the zero-shot setting. We also observe similar improvements in
a few-shot setting. We present ablation studies that demonstrate the value of
each step and visualize how the vision and language spaces evolve over the
iterations driven by transductive learning.
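The transductive part, using image-image similarities to refine per-image label scores, can be sketched as a simple propagation loop. This omits the attribute querying and encoder fine-tuning that GTA-CLIP adds; `alpha`, the iteration count, and the row normalization are assumptions:

```python
import numpy as np

def transductive_labels(img_feats, text_feats, iters=10, alpha=0.5):
    """Blend each image's text-similarity scores with the neighbor-averaged
    scores from image-image similarity, then take the argmax label."""
    sims = img_feats @ img_feats.T
    np.fill_diagonal(sims, 0.0)
    sims /= sims.sum(axis=1, keepdims=True)     # row-normalized affinities
    base = img_feats @ text_feats.T             # inductive (CLIP-style) scores
    scores = base.copy()
    for _ in range(iters):
        scores = (1 - alpha) * base + alpha * sims @ scores
    return scores.argmax(axis=1)

# Image 2 is ambiguous on its own but is pulled toward class 1 by its neighbors.
img = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0]])
txt = np.array([[1.0, 0.0], [0.0, 1.0]])
labels = transductive_labels(img, txt)
```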
|
2501.06035
|
Nonisotropic Gaussian Diffusion for Realistic 3D Human Motion Prediction
|
cs.CV
|
Probabilistic human motion prediction aims to forecast multiple possible
future movements from past observations. While current approaches report high
diversity and realism, they often generate motions with undetected limb
stretching and jitter. To address this, we introduce SkeletonDiffusion, a
latent diffusion model that embeds an explicit inductive bias on the human body
within its architecture and training. Our model is trained with a novel
nonisotropic Gaussian diffusion formulation that aligns with the natural
kinematic structure of the human skeleton. Results show that our approach
outperforms conventional isotropic alternatives, consistently generating
realistic predictions while avoiding artifacts such as limb distortion.
Additionally, we identify a limitation in commonly used diversity metrics,
which may inadvertently favor models that produce inconsistent limb lengths
within the same sequence. SkeletonDiffusion sets a new benchmark on three
real-world datasets, outperforming various baselines across multiple evaluation
metrics. Visit our project page:
https://ceveloper.github.io/publications/skeletondiffusion/
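The core nonisotropic idea, Gaussian noise whose covariance reflects the skeleton's connectivity rather than the identity, can be sketched as follows. The covariance construction here (diagonally dominant, hence positive definite) is a toy assumption, not the paper's formulation:

```python
import numpy as np

def nonisotropic_noise(adjacency, n_samples, jitter=0.1, seed=0):
    """Sample Gaussian noise whose covariance follows a skeleton graph:
    joints connected in the skeleton receive correlated noise."""
    A = np.asarray(adjacency, dtype=float)
    S = A + A.T
    # Diagonally dominant covariance is positive definite, so Cholesky works.
    sigma = S + (np.abs(S).sum(axis=1).max() + jitter) * np.eye(len(A))
    L = np.linalg.cholesky(sigma)
    rng = np.random.default_rng(seed)
    return rng.standard_normal((n_samples, len(A))) @ L.T

# Chain skeleton 0-1-2: joints 0 and 1 correlate, joints 0 and 2 do not.
z = nonisotropic_noise([[0, 1, 0], [0, 0, 1], [0, 0, 0]], n_samples=20000)
corr = np.corrcoef(z.T)
```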
|
2501.06038
|
A Holistically Point-guided Text Framework for Weakly-Supervised
Camouflaged Object Detection
|
cs.CV
|
Weakly-Supervised Camouflaged Object Detection (WSCOD) has gained popularity
for its promise to train models with weak labels to segment objects that
visually blend into their surroundings. Recently, some methods using sparse
scribble annotations have shown promising results in WSCOD, while point-text
supervision remains underexplored. Hence, this paper
introduces a novel holistically point-guided text framework for WSCOD by
decomposing into three phases: segment, choose, train. Specifically, we propose
Point-guided Candidate Generation (PCG), where the point's foreground serves as
a correction signal for the text path, explicitly correcting and recovering lost
detection objects during the mask generation process (SEGMENT). We also
introduce a Qualified Candidate Discriminator (QCD) to choose the optimal mask
from a given text prompt using CLIP (CHOOSE), and employ the chosen pseudo mask
for training with a self-supervised Vision Transformer (TRAIN). Additionally,
we developed a new point-supervised dataset (P2C-COD) and a text-supervised
dataset (T-COD). Comprehensive experiments on four benchmark datasets
demonstrate our method outperforms state-of-the-art methods by a large margin,
and also outperforms some existing fully-supervised camouflaged object
detection methods.
|
2501.06039
|
AI-powered virtual tissues from spatial proteomics for clinical
diagnostics and biomedical discovery
|
q-bio.QM cs.AI cs.CV cs.LG
|
Spatial proteomics technologies have transformed our understanding of complex
tissue architectures by enabling simultaneous analysis of multiple molecular
markers and their spatial organization. The high dimensionality of these data,
varying marker combinations across experiments and heterogeneous study designs
pose unique challenges for computational analysis. Here, we present Virtual
Tissues (VirTues), a foundation model framework for biological tissues that
operates across the molecular, cellular and tissue scale. VirTues introduces
innovations in transformer architecture design, including a novel tokenization
scheme that captures both spatial and marker dimensions, and attention
mechanisms that scale to high-dimensional multiplex data while maintaining
interpretability. Trained on diverse cancer and non-cancer tissue datasets,
VirTues demonstrates strong generalization capabilities without task-specific
fine-tuning, enabling cross-study analysis and novel marker integration. As a
generalist model, VirTues outperforms existing approaches across clinical
diagnostics, biological discovery and patient case retrieval tasks, while
providing insights into tissue function and disease mechanisms.
|
2501.06040
|
MSCViT: A Small-size ViT architecture with Multi-Scale Self-Attention
Mechanism for Tiny Datasets
|
cs.CV
|
Vision Transformer (ViT) has demonstrated significant potential in various
vision tasks due to its strong ability in modelling long-range dependencies.
However, such success is largely fueled by training on massive amounts of data.
In real applications, large-scale datasets are not always available, and ViT
performs worse than Convolutional Neural Networks (CNNs) when trained only on a
small-scale dataset (a so-called tiny dataset), since it requires a large amount
of training data to ensure its representational capacity. In this paper, a
small-size ViT architecture with multi-scale self-attention mechanism and
convolution blocks is presented (dubbed MSCViT) to model different scales of
attention at each layer. First, we introduce a wavelet convolution, which
selectively combines the high-frequency components obtained by frequency
division with our convolution channel to extract local features. Then, a
lightweight multi-head attention module is developed to reduce the number of
tokens and computational costs. Finally, the positional encoding (PE) in the
backbone is replaced by a local feature extraction module. Compared with the
original ViT, it is parameter-efficient and is particularly suitable for tiny
datasets. Extensive experiments have been conducted on tiny datasets, in which
our model achieves an accuracy of 84.68% on CIFAR-100 with 14.0M parameters and
2.5 GFLOPs, without pre-training on large datasets.
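The frequency division behind the wavelet convolution can be illustrated with one level of a 1D Haar transform, which splits a signal into low- and high-frequency halves. The paper's actual wavelet family and mixing scheme may differ:

```python
import numpy as np

def haar_split(x):
    """One level of a 1D Haar wavelet transform: scaled sums of adjacent pairs
    give the low-frequency half, scaled differences the high-frequency half."""
    even, odd = x[0::2], x[1::2]
    low = (even + odd) / np.sqrt(2)
    high = (even - odd) / np.sqrt(2)
    return low, high

# A constant signal has no high-frequency content.
low, high = haar_split(np.ones(8))
```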
|
2501.06042
|
The improvement in transmission resilience metrics from reduced outages
or faster restoration can be calculated by rerunning historical outage data
|
physics.soc-ph cs.SY eess.SY
|
Transmission utilities routinely collect detailed outage data, including
resilience events in which outages bunch up due to weather. The resilience
events and their resilience metrics can readily be extracted from this
historical outage data. Improvements such as grid hardening or investments in
restoration lead to reduced outages or faster restoration. We show how to rerun
this history with the effects of the reduced outages or faster restoration
included to find the resulting improvement in resilience metrics, thus
quantifying the benefits of these investments. This is demonstrated with case
studies for specific events (a derecho and a hurricane), and all large events
or large thunderstorms in the Midwest USA. Instead of predicting future extreme
events with models, which is very challenging, the historical rerun readily
quantifies the benefits that a resilience investment would have had if it had
been made in the past. The historical rerun is particularly vivid in making the
case for resilience investments to stakeholders because it quantifies the
benefits for events actually experienced by those stakeholders, rather than for
future events predicted with uncertainty.
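The historical rerun reduces to recomputing a metric after shrinking each outage's restoration time. A minimal sketch with an invented outage format `(start, end, customers)` and customer-hours lost as the metric; real resilience metrics are richer:

```python
def customer_hours(events):
    """Customer-hours lost: sum over outages of customers times duration."""
    return sum(cust * (end - start) for start, end, cust in events)

def rerun_history(outages, speedup):
    """Rerun historical outages with restoration sped up by `speedup`x and
    return the metric before and after the hypothetical investment."""
    sped_up = [(s, s + (e - s) / speedup, c) for s, e, c in outages]
    return customer_hours(outages), customer_hours(sped_up)

# Two historical outages, format (start_hour, end_hour, customers_affected):
base, improved = rerun_history([(0, 4, 100), (2, 8, 50)], speedup=2)
```

The difference `base - improved` quantifies the benefit the investment would have had for events the stakeholders actually experienced.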
|
2501.06047
|
Learning Affordances from Interactive Exploration using an Object-level
Map
|
cs.RO
|
Many robotic tasks in real-world environments require physical interactions
with an object, such as picking it up or pushing it. For successful interactions, the robot
needs to know the object's affordances, which are defined as the potential
actions the robot can perform with the object. In order to learn a
robot-specific affordance predictor, we propose an interactive exploration
pipeline which allows the robot to collect interaction experiences while
exploring an unknown environment. We integrate an object-level map in the
exploration pipeline such that the robot can identify different object
instances and track objects across diverse viewpoints. This results in denser
and more accurate affordance annotations compared to state-of-the-art methods,
which do not incorporate a map. We show that our affordance exploration
approach makes exploration more efficient and results in more accurate
affordance prediction models compared to baseline methods.
|
2501.06051
|
Benchmarking Rotary Position Embeddings for Automatic Speech Recognition
|
cs.CL cs.AI eess.AS
|
Rotary Position Embedding (RoPE) encodes relative and absolute positional
information in Transformer-based models through rotation matrices applied to
input vectors within sequences. While RoPE has demonstrated superior
performance compared to other positional embedding technologies in natural
language processing tasks, its effectiveness in speech processing applications
remains understudied. In this work, we conduct a comprehensive evaluation of
RoPE across diverse automatic speech recognition (ASR) tasks. Our experimental
results demonstrate that for ASR tasks, RoPE consistently achieves lower error
rates compared to the currently widely used relative positional embedding. To
facilitate further research, we release the implementation and all experimental
recipes through the SpeechBrain toolkit.
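RoPE's mechanism is concrete enough to sketch: each consecutive pair of features is rotated by an angle proportional to the token position, so dot products between rotated queries and keys depend only on relative position. A minimal reference implementation, not SpeechBrain's:

```python
import math

def rope(vec, pos, base=10000.0):
    """Rotary Position Embedding: rotate each consecutive feature pair by an
    angle pos * base**(-i/d), i.e. a per-pair frequency times the position."""
    d = len(vec)
    out = []
    for i in range(0, d, 2):
        theta = pos * base ** (-i / d)
        c, s = math.cos(theta), math.sin(theta)
        x, y = vec[i], vec[i + 1]
        out += [x * c - y * s, x * s + y * c]  # 2D rotation of the pair
    return out

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# Scores depend only on the relative offset: (3 - 1) == (12 - 10).
q, k = [1.0, 0.0, 0.5, 0.5], [0.3, 0.7, 0.2, 0.1]
s1 = dot(rope(q, 3), rope(k, 1))
s2 = dot(rope(q, 12), rope(k, 10))
```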
|
2501.06053
|
Enhancing, Refining, and Fusing: Towards Robust Multi-Scale and Dense
Ship Detection
|
cs.CV
|
Synthetic aperture radar (SAR) imaging, celebrated for its high resolution,
all-weather capability, and day-night operability, is indispensable for
maritime applications. However, ship detection in SAR imagery faces significant
challenges, including complex backgrounds, densely arranged targets, and large
scale variations. To address these issues, we propose a novel framework,
Center-Aware SAR Ship Detector (CASS-Det), designed for robust multi-scale and
densely packed ship detection. CASS-Det integrates three key innovations: (1) a
center enhancement module (CEM) that employs rotational convolution to
emphasize ship centers, improving localization while suppressing background
interference; (2) a neighbor attention module (NAM) that leverages cross-layer
dependencies to refine ship boundaries in densely populated scenes; and (3) a
cross-connected feature pyramid network (CC-FPN) that enhances multi-scale
feature fusion by integrating shallow and deep features. Extensive experiments
on the SSDD, HRSID, and LS-SSDD-v1.0 datasets demonstrate the state-of-the-art
performance of CASS-Det, excelling at detecting multi-scale and densely
arranged ships.
|
2501.06058
|
Learning Flexible Heterogeneous Coordination with Capability-Aware
Shared Hypernetworks
|
cs.MA cs.LG
|
Cooperative heterogeneous multi-agent tasks require agents to effectively
coordinate their behaviors while accounting for their relative capabilities.
Learning-based solutions to this challenge span between two extremes: i)
shared-parameter methods, which encode diverse behaviors within a single
architecture by assigning an ID to each agent, and are sample-efficient but
result in limited behavioral diversity; ii) independent methods, which learn a
separate policy for each agent, and show greater behavioral diversity but lack
sample-efficiency. Prior work has also explored selective parameter-sharing,
allowing for a compromise between diversity and efficiency. None of these
approaches, however, effectively generalize to unseen agents or teams. We
present Capability-Aware Shared Hypernetworks (CASH), a novel architecture for
heterogeneous multi-agent coordination that generates sufficient diversity
while maintaining sample-efficiency via soft parameter-sharing hypernetworks.
Intuitively, CASH allows the team to learn common strategies using a shared
encoder, which are then adapted according to the team's individual and
collective capabilities with a hypernetwork, allowing for zero-shot
generalization to unseen teams and agents. We present experiments across two
heterogeneous coordination tasks and three standard learning paradigms
(imitation learning, on- and off-policy reinforcement learning). CASH is able
to outperform baseline architectures in success rate and sample efficiency when
evaluated on unseen teams and agents despite using less than half of the
learnable parameters.
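The hypernetwork idea can be sketched in a few lines: a shared hypernetwork maps an agent's capability vector to the weights of its policy, so unseen capabilities yield new behaviors zero-shot. The linear policy and all dimensions are illustrative assumptions, not CASH's architecture:

```python
import numpy as np

def hypernet_policy(capability, obs, hyper_W, hyper_b):
    """Capability-conditioned hypernetwork sketch: generate the parameters of
    a small linear policy from the capability vector, then apply it."""
    flat = hyper_W @ capability + hyper_b   # generated policy parameters
    W = flat.reshape(-1, len(obs))          # (act_dim, obs_dim)
    return W @ obs

rng = np.random.default_rng(0)
cap_dim, obs_dim, act_dim = 3, 4, 2
hW = rng.standard_normal((act_dim * obs_dim, cap_dim))  # shared hypernetwork
hb = rng.standard_normal(act_dim * obs_dim)
obs = rng.standard_normal(obs_dim)
# Same observation, two different capability vectors -> different actions.
a1 = hypernet_policy(rng.standard_normal(cap_dim), obs, hW, hb)
a2 = hypernet_policy(rng.standard_normal(cap_dim), obs, hW, hb)
```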
|
2501.06059
|
COMIX: Compositional Explanations using Prototypes
|
cs.LG
|
Aligning machine representations with human understanding is key to improving
interpretability of machine learning (ML) models. When classifying a new image,
humans often explain their decisions by decomposing the image into concepts and
pointing to corresponding regions in familiar images. Current ML explanation
techniques typically either trace decision-making processes to reference
prototypes, generate attribution maps highlighting feature importance, or
incorporate intermediate bottlenecks designed to align with human-interpretable
concepts. The proposed method, named COMIX, classifies an image by decomposing
it into regions based on learned concepts and tracing each region to
corresponding ones in images from the training dataset, assuring that
explanations fully represent the actual decision-making process. We dissect the
test image into selected internal representations of a neural network to derive
prototypical parts (primitives) and match them with the corresponding
primitives derived from the training data. In a series of qualitative and
quantitative experiments, we theoretically prove and empirically demonstrate
that our method, in contrast to post hoc analysis, provides faithful
explanations and that its efficiency is competitive with other inherently
interpretable architectures. Notably, it shows substantial improvements in
fidelity and sparsity metrics, including a 48.82% improvement in the
C-insertion score on the
ImageNet dataset over the best state-of-the-art baseline.
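The tracing step, matching each test-image primitive to its nearest training primitive, can be sketched with cosine similarity. How COMIX actually derives and compares primitives is more involved; this only illustrates the matching:

```python
import numpy as np

def match_primitives(test_parts, train_parts):
    """Match each test-image primitive to its most similar training primitive
    by cosine similarity, returning the index of the matched training part."""
    t = test_parts / np.linalg.norm(test_parts, axis=1, keepdims=True)
    r = train_parts / np.linalg.norm(train_parts, axis=1, keepdims=True)
    return (t @ r.T).argmax(axis=1)

# Two test primitives, three training primitives (toy 2D embeddings):
test_parts = np.array([[1.0, 0.1], [0.0, 2.0]])
train_parts = np.array([[0.1, 1.0], [2.0, 0.0], [1.0, 1.0]])
matches = match_primitives(test_parts, train_parts)
```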
|
2501.06062
|
Personalized Language Model Learning on Text Data Without User
Identifiers
|
cs.LG
|
In many practical natural language applications, user data are highly
sensitive, requiring anonymous uploads of text data from mobile devices to the
cloud without user identifiers. However, the absence of user identifiers
restricts the ability of cloud-based language models to provide personalized
services, which are essential for catering to diverse user needs. The trivial
method of replacing an explicit user identifier with a static user embedding as
model input still compromises data anonymization. In this work, we propose to
let each mobile device maintain a user-specific distribution to dynamically
generate user embeddings, thereby breaking the one-to-one mapping between an
embedding and a specific user. We further theoretically demonstrate that to
prevent the cloud from tracking users via uploaded embeddings, the local
distributions of different users should either be derived from a linearly
dependent space to avoid identifiability or be close to each other to prevent
accurate attribution. Evaluation on both public and industrial datasets using
different language models reveals a remarkable improvement in accuracy from
incorporating anonymous user embeddings, while meeting real-time inference
requirements.
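The device-side mechanism can be sketched directly: each device holds a private distribution and uploads a fresh sample per request, breaking the one-to-one embedding-to-user mapping. The Gaussian form and its parameters are assumptions for illustration only:

```python
import numpy as np

def sample_user_embedding(mean, scale, rng):
    """Draw a fresh embedding from the device-local user distribution, so the
    cloud never sees the same vector twice for a given user."""
    return rng.normal(loc=mean, scale=scale)

rng = np.random.default_rng(0)
mean = np.array([0.2, -0.1, 0.4])  # private per-user mean, stays on device
e1 = sample_user_embedding(mean, 0.05, rng)
e2 = sample_user_embedding(mean, 0.05, rng)  # differs from e1 every request
```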
|
2501.06066
|
Distilling Calibration via Conformalized Credal Inference
|
cs.LG cs.AI eess.SP
|
Deploying artificial intelligence (AI) models on edge devices involves a
delicate balance between meeting stringent complexity constraints, such as
limited memory and energy resources, and ensuring reliable performance in
sensitive decision-making tasks. One way to enhance reliability is through
uncertainty quantification via Bayesian inference. This approach, however,
typically necessitates maintaining and running multiple models in an ensemble,
which may exceed the computational limits of edge devices. This paper
introduces a low-complexity methodology to address this challenge by distilling
calibration information from a more complex model. In an offline phase,
predictive probabilities generated by a high-complexity cloud-based model are
leveraged to determine a threshold based on the typical divergence between the
cloud and edge models. At run time, this threshold is used to construct credal
sets -- ranges of predictive probabilities that are guaranteed, with a
user-selected confidence level, to include the predictions of the cloud model.
The credal sets are obtained through thresholding of a divergence measure in
the simplex of predictive probabilities. Experiments on visual and language
tasks demonstrate that the proposed approach, termed Conformalized Distillation
for Credal Inference (CD-CI), significantly improves calibration performance
compared to low-complexity Bayesian methods, such as Laplace approximation,
making it a practical and efficient solution for edge AI deployments.
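The offline/run-time split described above can be sketched in a few lines. This is a minimal illustration assuming KL divergence as the divergence measure and hypothetical function names (`conformal_threshold`, `credal_set`); it is not the authors' implementation.

```python
import numpy as np

def kl(p, q, eps=1e-12):
    """KL divergence between two discrete predictive distributions."""
    p, q = np.clip(p, eps, 1.0), np.clip(q, eps, 1.0)
    return float(np.sum(p * np.log(p / q)))

def conformal_threshold(cloud_probs, edge_probs, alpha=0.1):
    """Offline phase: (1 - alpha)-quantile of cloud/edge divergences on a
    calibration set, with the standard conformal finite-sample correction."""
    scores = np.array([kl(c, e) for c, e in zip(cloud_probs, edge_probs)])
    n = len(scores)
    k = int(np.ceil((n + 1) * (1 - alpha)))  # rank of the corrected quantile
    return np.sort(scores)[min(k, n) - 1]

def credal_set(edge_prob, threshold, candidates):
    """Run time: all candidate distributions within `threshold` of the edge
    prediction form the credal set (guaranteed to cover the cloud model's
    prediction with the selected confidence)."""
    return [q for q in candidates if kl(q, edge_prob) <= threshold]
```

The threshold is computed once offline; at run time only the cheap divergence check against the edge model's output is needed.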
|
2501.06067
|
Decentralized Multi-Antenna Architectures with Unitary Constraints
|
eess.SP cs.IT math.IT
|
The increase in the number of base station (BS) antennas calls for efficient
solutions to deal with the increased interconnection bandwidth and processing
complexity of traditional centralized approaches. Decentralized approaches are
thus gaining momentum, since they achieve important reductions in
data/processing volume by preprocessing the received signals before forwarding
them to a central node. The WAX framework offers a general description of
decentralized architectures with arbitrary interplay between interconnection
bandwidth and decentralized processing complexity, but the applicability of
this framework has only been studied assuming unrestricted baseband processing.
We consider an adaptation of the WAX framework where the decentralized
processing has unitary restriction, which allows for energy-efficient
implementations based on reconfigurable impedance networks at the cost of some
performance loss. Moreover, we propose an effective method to minimize the
performance gap with respect to centralized processing. The previous method
gives a first step towards characterizing the information-lossless trade-off
between interconnection bandwidth and processing complexity in decentralized
architectures with unitary constraints.
|
2501.06074
|
Geometry and Optimization of Shallow Polynomial Networks
|
cs.LG math.AG
|
We study shallow neural networks with polynomial activations. The function
space for these models can be identified with a set of symmetric tensors with
bounded rank. We describe general features of these networks, focusing on the
relationship between width and optimization. We then consider teacher-student
problems, which can be viewed as low-rank tensor approximation with
respect to a non-standard inner product that is induced by the data
distribution. In this setting, we introduce a teacher-metric discriminant which
encodes the qualitative behavior of the optimization as a function of the
training data distribution. Finally, we focus on networks with quadratic
activations, presenting an in-depth analysis of the optimization landscape. In
particular, we present a variation of the Eckart-Young Theorem characterizing
all critical points and their Hessian signatures for teacher-student problems
with quadratic networks and Gaussian training data.
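For reference, the classical Eckart-Young theorem that the paper's result varies can be checked numerically with a truncated SVD. The sketch below covers only the standard Frobenius-norm case, not the non-standard inner product induced by the data distribution.

```python
import numpy as np

def best_rank_k(A, k):
    """Eckart-Young: the best rank-k approximation of A in Frobenius
    (and spectral) norm is the truncated SVD."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return (U[:, :k] * s[:k]) @ Vt[:k]
```

The approximation error equals the norm of the discarded singular values, which is what the assertion below checks on a random symmetric matrix (matching the symmetric-tensor setting of quadratic activations).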
|
2501.06076
|
A monthly sub-national Harmonized Food Insecurity Dataset for
comprehensive analysis and predictive modeling
|
cs.LG
|
Food security is a complex, multidimensional concept challenging to measure
comprehensively. Effective anticipation, monitoring, and mitigation of food
crises require timely and comprehensive global data. This paper introduces the
Harmonized Food Insecurity Dataset (HFID), an open-source resource
consolidating four key data sources: the Integrated Food Security Phase
Classification (IPC)/Cadre Harmonis\'e (CH) phases, the Famine Early Warning
Systems Network (FEWS NET) IPC-compatible phases, and the World Food Program's
(WFP) Food Consumption Score (FCS) and reduced Coping Strategy Index (rCSI).
Updated monthly and using a common reference system for administrative units,
the HFID offers extensive spatial and temporal coverage. It serves as a vital
tool for food security experts and humanitarian agencies, providing a unified
resource for analyzing food security conditions and highlighting global data
disparities. The scientific community can also leverage the HFID to develop
data-driven predictive models, enhancing the capacity to forecast and prevent
future food crises.
|
2501.06077
|
Explainable Federated Bayesian Causal Inference and Its Application in
Advanced Manufacturing
|
cs.LG stat.AP
|
Causal inference has recently gained notable attention across various fields
like biology, healthcare, and environmental science, especially within
explainable artificial intelligence (xAI) systems, for uncovering the causal
relationships among multiple variables and outcomes. Yet, it has not been fully
recognized and deployed in manufacturing systems. In this paper, we
introduce an explainable, scalable, and flexible federated Bayesian learning
framework, \texttt{xFBCI}, designed to explore causality through treatment
effect estimation in distributed manufacturing systems. By leveraging federated
Bayesian learning, we efficiently estimate the posteriors of local parameters to
derive the propensity score for each client without accessing local private
data. These scores are then used to estimate the treatment effect using
propensity score matching (PSM). Through simulations on various datasets and
real-world Electrohydrodynamic (EHD) printing data, we demonstrate that our
approach outperforms standard Bayesian causal inference methods and several
state-of-the-art federated learning benchmarks.
|
2501.06078
|
Explaining k-Nearest Neighbors: Abductive and Counterfactual
Explanations
|
cs.LG cs.AI
|
Despite the wide use of $k$-Nearest Neighbors as classification models, their
explainability properties remain poorly understood from a theoretical
perspective. While nearest neighbors classifiers offer interpretability from a
"data perspective", in which the classification of an input vector $\bar{x}$ is
explained by identifying the vectors $\bar{v}_1, \ldots, \bar{v}_k$ in the
training set that determine the classification of $\bar{x}$, we argue that such
explanations can be impractical in high-dimensional applications, where each
vector has hundreds or thousands of features and it is not clear what their
relative importance is. Hence, we focus on understanding nearest neighbor
classifications through a "feature perspective", in which the goal is to
identify how the values of the features in $\bar{x}$ affect its classification.
Concretely, we study abductive explanations such as "minimum sufficient
reasons", which correspond to sets of features in $\bar{x}$ that are enough to
guarantee its classification, and "counterfactual explanations" based on the
minimum distance feature changes one would have to perform in $\bar{x}$ to
change its classification. We present a detailed landscape of positive and
negative complexity results for counterfactual and abductive explanations,
distinguishing between discrete and continuous feature spaces, and considering
the impact of the choice of distance function involved. Finally, we show that
despite some negative complexity results, Integer Quadratic Programming and SAT
solving allow for computing explanations in practice.
|
2501.06080
|
Scale-up Unlearnable Examples Learning with High-Performance Computing
|
cs.LG cs.AI cs.DC
|
Recent advancements in AI models are structured to retain user interactions,
which could inadvertently include sensitive healthcare data. In the healthcare
field, particularly when radiologists use AI-driven diagnostic tools hosted on
online platforms, there is a risk that medical imaging data may be repurposed
for future AI training without explicit consent, spotlighting critical privacy
and intellectual property concerns around healthcare data usage. Addressing
these privacy challenges, a novel approach known as Unlearnable Examples (UEs)
has been introduced, aiming to make data unlearnable to deep learning models. A
prominent method within this area, called Unlearnable Clustering (UC), has
shown improved UE performance with larger batch sizes but was previously
limited by computational resources. To push the boundaries of UE performance
with theoretically unlimited resources, we scaled up UC learning across various
datasets using Distributed Data Parallel (DDP) training on the Summit
supercomputer. Our goal was to examine UE efficacy at high-performance
computing (HPC) levels to prevent unauthorized learning and enhance data
security, particularly exploring the impact of batch size on UE's
unlearnability. Utilizing the robust computational capabilities of Summit,
extensive experiments were conducted on diverse datasets such as Pets,
MedMNist, Flowers, and Flowers102. Our findings reveal that both overly large
and overly small batch sizes can lead to performance instability and affect
accuracy. However, the relationship between batch size and unlearnability
varied across datasets, highlighting the necessity for tailored batch size
strategies to achieve optimal data protection. Our results underscore the
critical role of selecting appropriate batch sizes based on the specific
characteristics of each dataset to prevent learning and ensure data security in
deep learning applications.
|
2501.06081
|
Averaged Adam accelerates stochastic optimization in the training of
deep neural network approximations for partial differential equation and
optimal control problems
|
math.OC cs.LG cs.NA math.NA
|
Deep learning methods - usually consisting of a class of deep neural networks
(DNNs) trained by a stochastic gradient descent (SGD) optimization method - are
nowadays omnipresent in data-driven learning problems as well as in scientific
computing tasks such as optimal control (OC) and partial differential equation
(PDE) problems. In practically relevant learning tasks, often not the
plain-vanilla standard SGD optimization method is employed to train the
considered class of DNNs but instead more sophisticated adaptive and
accelerated variants of the standard SGD method such as the popular Adam
optimizer are used. Inspired by the classical Polyak-Ruppert averaging
approach, in this work we apply averaged variants of the Adam optimizer to
train DNNs to approximately solve exemplary scientific computing problems in
the form of PDEs and OC problems. We test the averaged variants of Adam in a
series of learning problems including physics-informed neural network (PINN),
deep backward stochastic differential equation (deep BSDE), and deep Kolmogorov
approximations for PDEs (such as heat, Black-Scholes, Burgers, and Allen-Cahn
PDEs), including DNN approximations for OC problems, and including DNN
approximations for image classification problems (ResNet for CIFAR-10). In each
of the numerical examples the employed averaged variants of Adam outperform the
standard Adam and the standard SGD optimizers, particularly, in the situation
of the scientific machine learning problems. The Python source codes for the
numerical experiments associated to this work can be found on GitHub at
https://github.com/deeplearningmethods/averaged-adam.
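The Polyak-Ruppert-style averaging applied to Adam can be sketched in a few lines of plain NumPy. This is one simple tail-averaging variant on a toy quadratic, not the specific averaged variants tested in the paper.

```python
import numpy as np

def adam_with_averaging(grad, x0, steps=2000, lr=0.05,
                        b1=0.9, b2=0.999, eps=1e-8, tail=0.5):
    """Standard Adam iterates plus a Polyak-Ruppert-style running average
    over the last `tail` fraction of the trajectory."""
    x = np.asarray(x0, float).copy()
    m, v = np.zeros_like(x), np.zeros_like(x)
    avg, n_avg = np.zeros_like(x), 0
    for t in range(1, steps + 1):
        g = grad(x)
        m = b1 * m + (1 - b1) * g          # first-moment estimate
        v = b2 * v + (1 - b2) * g * g      # second-moment estimate
        mhat = m / (1 - b1 ** t)           # bias corrections
        vhat = v / (1 - b2 ** t)
        x -= lr * mhat / (np.sqrt(vhat) + eps)
        if t > (1 - tail) * steps:         # start averaging late in training
            n_avg += 1
            avg += (x - avg) / n_avg       # incremental mean of the iterates
    return x, avg
```

On a noisy or oscillating trajectory the averaged iterate tends to sit closer to the minimizer than the last iterate, which is the effect the paper exploits.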
|
2501.06086
|
All AI Models are Wrong, but Some are Optimal
|
cs.AI cs.LG
|
AI models that predict the future behavior of a system (a.k.a. predictive AI
models) are central to intelligent decision-making. However, decision-making
using predictive AI models often results in suboptimal performance. This is
primarily because AI models are typically constructed to best fit the data, and
hence to predict the most likely future rather than to enable high-performance
decision-making. The hope that such prediction enables high-performance
decisions is neither guaranteed in theory nor established in practice. In fact,
there is increasing empirical evidence that predictive models must be tailored
to decision-making objectives for performance. In this paper, we establish
formal (necessary and sufficient) conditions that a predictive model (AI-based
or not) must satisfy for a decision-making policy established using that model
to be optimal. We then discuss their implications for building predictive AI
models for sequential decision-making.
|
2501.06088
|
Non-planar 3D Printing of Double Shells
|
cs.RO cs.CG
|
We present a method to fabricate double shell structures printed in
transversal directions using multi-axis fused-deposition-modeling (FDM)
robotic 3D printing. Shell structures, characterized by lightweight, thin
walls, fast buildup, and minimal material usage, find diverse applications in
prototyping and architecture for uses such as fa\c{c}ade panels, molds for
concrete casting, or full-scale pavilions. We leverage an underlying
representation of transversal strip networks generated using existing methods
and propose a methodology for converting them into printable partitions. Each
partition is printed separately and assembled into a double-shell structure. We
outline the specifications and workflow that make the printing of each piece
and the subsequent assembly process feasible. The versatility and robustness
of our method are demonstrated with both digital and fabricated results on
surfaces of different scales and geometric complexity.
|
2501.06089
|
Towards Developing Socially Compliant Automated Vehicles: State of the
Art, Experts Expectations, and A Conceptual Framework
|
cs.RO cs.AI cs.LG cs.MA cs.SY eess.SY
|
Automated Vehicles (AVs) hold promise for revolutionizing transportation by
improving road safety, traffic efficiency, and overall mobility. Despite the
steady advancement in high-level AVs in recent years, the transition to full
automation entails a period of mixed traffic, where AVs of varying automation
levels coexist with human-driven vehicles (HDVs). Making AVs socially compliant
and understood by human drivers is expected to improve the safety and
efficiency of mixed traffic. Thus, ensuring AVs' compatibility with HDVs and
social acceptance is crucial for their successful and seamless integration into
mixed traffic. However, research in this critical area of developing Socially
Compliant AVs (SCAVs) remains sparse. This study carries out the first
comprehensive scoping review to assess the current state of the art in
developing SCAVs, identifying key concepts, methodological approaches, and
research gaps. An expert interview was also conducted to identify critical
research gaps and expectations towards SCAVs. Based on the scoping review and
expert interview input, a conceptual framework is proposed for the development
of SCAVs. The conceptual framework is evaluated using an online survey
targeting researchers, technicians, policymakers, and other relevant
professionals worldwide. The survey results provide valuable validation and
insights, affirming the significance of the proposed conceptual framework in
tackling the challenges of integrating AVs into mixed-traffic environments.
Additionally, future research perspectives and suggestions are discussed,
contributing to the research and development agenda of SCAVs.
|
2501.06092
|
Molecular Communication-Inspired Particle Collector-Transmitter (PaCoT)
for Heavy Metal Removal from Human Circulatory System
|
eess.SY cs.SY
|
This study proposes a novel molecular communication (MC)-inspired
nanomachine, PArticle COllector-Transmitter (PaCoT), to remove toxic heavy
metals from the human circulatory system. PaCoT collects these toxic metals and
transmits them to release nodes, such as lymph capillaries, before they reach
critical organs. The design incorporates key physical parameters and operates
through particle reception and release mechanisms. In the reception process,
described as ligand-receptor binding reactions modeled as a continuous-time
Markov process (CTMP), PaCoT uses metallothionein proteins as receptors and
heavy metals (e.g., Zn, Pb, Cd) as ligands. We assume that the toxicity
condition (toxic (bit-1), non-toxic (bit-0)) is encoded into the concentration
of heavy metal molecules. Thus, we consider that heavy metal concentration
within the MC channel (e.g., human circulatory system) employs binary
concentration shift keying (binary CSK). The concentration ratio of specific
heavy metals is estimated to infer toxicity, i.e., a high ratio indicates
toxicity and a low ratio suggests non-toxicity. Toxicity detection is achieved
by monitoring the receptor bound duration in the presence of interferers and
various types of heavy metals. After detecting and collecting toxic heavy
metals, PaCoT securely retains them in a liquid medium (e.g., water) until
release, employing two mechanisms: (1) a single-disc viscous micropump to
regulate flow rate, and (2) Brownian motion to facilitate diffusion. PaCoT's
performance is evaluated through MATLAB simulations, focusing on bit error
probability (BEP) of the toxicity detection method, release time of molecules
from PaCoT and energy consumption.
|
2501.06099
|
Explaining Deep Learning-based Anomaly Detection in Energy Consumption
Data by Focusing on Contextually Relevant Data
|
cs.LG cs.AI
|
Detecting anomalies in energy consumption data is crucial for identifying
energy waste, equipment malfunction, and overall, for ensuring efficient energy
management. Machine learning, and specifically deep learning approaches, have
been greatly successful in anomaly detection; however, they are black-box
approaches that do not provide transparency or explanations. SHAP and its
variants have been proposed to explain these models, but they suffer from high
computational complexity (SHAP) or instability and inconsistency (e.g., Kernel
SHAP). To address these challenges, this paper proposes an explainability
approach for anomalies in energy consumption data that focuses on
context-relevant information. The proposed approach leverages existing
explainability techniques, focusing on SHAP variants, together with global
feature importance and weighted cosine similarity to select background dataset
based on the context of each anomaly point. By focusing on the context and most
relevant features, this approach mitigates the instability of explainability
algorithms. Experimental results across 10 different machine learning models,
five datasets, and five XAI techniques, demonstrate that our method reduces the
variability of explanations, yielding consistent results. Statistical
analyses confirm the robustness of our approach, showing an average reduction
in variability of approximately 38% across multiple datasets.
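The background-selection idea, choosing reference points close to the anomaly's context under an importance-weighted cosine similarity, can be sketched as follows. The weighting scheme and function name are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def select_background(X, anomaly, importance, k=20):
    """Pick the k candidate points most similar to the anomaly under a
    cosine similarity weighted by global feature importance.
    sqrt(importance) is applied per feature so the resulting inner
    product weights each feature's contribution by its importance."""
    w = np.sqrt(np.asarray(importance, float))
    Xw, aw = X * w, anomaly * w
    sims = (Xw @ aw) / (np.linalg.norm(Xw, axis=1) * np.linalg.norm(aw) + 1e-12)
    return X[np.argsort(sims)[::-1][:k]]
```

In practice the anomaly point itself would usually be excluded from the candidate pool before selection; the sketch omits that step for brevity.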
|
2501.06100
|
Practical Quantum Circuit Implementation for Simulating Coupled
Classical Oscillators
|
quant-ph cs.CE
|
Simulating large-scale coupled-oscillator systems presents substantial
computational challenges for classical algorithms, particularly when pursuing
first-principles analyses in the thermodynamic limit. Motivated by the quantum
algorithm framework proposed by Babbush et al., we present and implement a
detailed quantum circuit construction for simulating one-dimensional
spring-mass systems. Our approach incorporates key quantum subroutines,
including block encoding, quantum singular value transformation (QSVT), and
amplitude amplification, to realize the unitary time-evolution operator
associated with simulating classical oscillator dynamics. In the uniform
spring-mass setting, our circuit construction requires a gate complexity of
$\mathcal{O}\bigl(\log_2^2 N\,\log_2(1/\varepsilon)\bigr)$, where $N$ is the
number of oscillators and $\varepsilon$ is the target accuracy of the
approximation. For more general, heterogeneous spring-mass systems, the total
gate complexity is $\mathcal{O}\bigl(N\log_2 N\,\log_2(1/\varepsilon)\bigr)$.
Both settings require $\mathcal{O}(\log_2 N)$ qubits. Numerical simulations
agree with classical solvers across all tested configurations, indicating that
this circuit-based Hamiltonian simulation approach can substantially reduce
computational costs and potentially enable larger-scale many-body studies on
future quantum hardware.
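As context for what the circuit must reproduce: the classical dynamics of a uniform 1D spring-mass chain with fixed ends and zero initial velocities has a closed form via normal modes. A classical reference solver along the following lines (an illustrative sketch, not part of the paper's quantum circuit) is the natural baseline against which quantum simulation output can be checked.

```python
import numpy as np

def spring_mass_solution(N, kappa, m, x0, t):
    """Exact solution of m x'' = -K x for a uniform 1D chain with fixed
    ends and zero initial velocities, via eigendecomposition of the
    tridiagonal stiffness matrix K."""
    K = kappa * (2 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1))
    w2, V = np.linalg.eigh(K / m)       # squared normal-mode frequencies
    w2 = np.maximum(w2, 0.0)            # guard against tiny negative roundoff
    c = V.T @ x0                        # initial condition in the mode basis
    return V @ (c * np.cos(np.sqrt(w2) * t))
```

Each normal mode evolves independently as a cosine, so the cost is one symmetric eigendecomposition regardless of the evaluation time t.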
|
2501.06101
|
From Conversation to Automation: Leveraging LLMs for Problem-Solving
Therapy Analysis
|
cs.CL
|
Problem-solving therapy (PST) is a structured psychological approach that
helps individuals manage stress and resolve personal issues by guiding them
through problem identification, solution brainstorming, decision-making, and
outcome evaluation. As mental health care increasingly adopts technologies like
chatbots and large language models (LLMs), it is important to thoroughly
understand how each session of PST is conducted before attempting to automate
it. We developed a comprehensive framework for PST annotation using established
PST Core Strategies and a set of novel Facilitative Strategies to analyze a
corpus of real-world therapy transcripts to determine which strategies are most
prevalent. Using various LLMs and transformer-based models, we found that
GPT-4o outperformed all models, achieving the highest accuracy (0.76) in
identifying all strategies. To gain deeper insights, we examined how strategies
are applied by analyzing Therapeutic Dynamics (autonomy, self-disclosure, and
metaphor), and linguistic patterns within our labeled data. Our research
highlights LLMs' potential to automate therapy dialogue analysis, offering a
scalable tool for mental health interventions. Our framework enhances PST by
improving accessibility, effectiveness, and personalized support for
therapists.
|
2501.06103
|
Finite-Horizon Single-Pull Restless Bandits: An Efficient Index Policy
For Scarce Resource Allocation
|
cs.MA cs.LG
|
Restless multi-armed bandits (RMABs) have been highly successful in
optimizing sequential resource allocation across many domains. However, in many
practical settings with highly scarce resources, where each agent can only
receive at most one resource, such as healthcare intervention programs, the
standard RMAB framework falls short. To tackle such scenarios, we introduce
Finite-Horizon Single-Pull RMABs (SPRMABs), a novel variant in which each arm
can only be pulled once. This single-pull constraint introduces additional
complexity, rendering many existing RMAB solutions suboptimal or ineffective.
To address this shortcoming, we propose using \textit{dummy
states} that expand the system and enforce the one-pull constraint. We then
design a lightweight index policy for this expanded system. For the first time,
we demonstrate that our index policy achieves a sub-linearly decaying average
optimality gap of $\tilde{\mathcal{O}}\left(\frac{1}{\rho^{1/2}}\right)$ for a
finite number of arms, where $\rho$ is the scaling factor for each arm cluster.
Extensive simulations validate the proposed method, showing robust performance
across various domains compared to existing benchmarks.
|
2501.06104
|
Weather-Driven Priority Charging for Battery Storage Systems in Hybrid
Renewable Energy Grids
|
eess.SY cs.SY
|
The integration of renewable energy into the power grid is often hindered by
its fragmented infrastructure, leading to inefficient utilization due to the
variability of energy production and its reliance on weather conditions.
Battery storage systems, while essential for stabilizing energy supply, face
challenges like sub-optimal energy distribution, accelerating battery
degradation, and reducing operational efficiency. This paper presents a novel
solution to these challenges by developing a large-scale, interconnected
renewable energy network that optimizes energy storage and distribution. The
proposed system includes strategically placed battery storage facilities that
stabilize energy production by compensating for fluctuations in renewable
output. A priority charging algorithm, informed by real-time weather
forecasting and load monitoring, ensures that the most suitable battery systems
are charged under varying conditions. Within each storage facility, a secondary
priority charging algorithm minimizes battery degradation by ranking batteries
based on critical parameters such as state of health (SoH) and state of charge
(SoC) and deciding which to charge. This comprehensive approach enhances the
efficiency and longevity of battery storage systems, offering a more reliable
and resilient renewable energy infrastructure.
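The secondary, within-facility ranking step can be sketched as a simple scoring rule over SoH and SoC. The linear form and weights below are illustrative assumptions, not the paper's algorithm.

```python
def charge_priority(batteries, w_soh=0.6, w_soc=0.4):
    """Rank batteries for charging: prefer high state of health (SoH) and
    low state of charge (SoC), to limit degradation while filling the
    emptiest healthy units first. Weights are illustrative."""
    def score(b):
        return w_soh * b["soh"] - w_soc * b["soc"]
    return sorted(batteries, key=score, reverse=True)
```

A deployed system would fold in further parameters (temperature, cycle count, rated power) into the same scoring-and-sorting pattern.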
|
2501.06108
|
Inferring High-Order Couplings with Neural Networks
|
cond-mat.dis-nn cond-mat.stat-mech cs.LG
|
Maximum entropy methods, based on the inverse Ising/Potts problem from
statistical mechanics, are essential for modeling interactions between pairs of
variables in data-driven problems across disciplines such as bioinformatics,
ecology, and neuroscience. Despite their considerable success, these methods
typically fail to capture higher-order interactions that are often essential
for understanding complex systems. Conversely, modern machine learning methods
capture these complex interactions, but the computational cost of interpretable
frameworks makes them impractical for real-world applications. Restricted
Boltzmann Machines (RBMs) provide a computationally efficient way to capture
statistical correlations using hidden nodes in a bipartite neural network. In
this study, we introduce a new method that maps RBMs to generalized Potts
models, allowing for the extraction of interactions up to any specified order.
This method utilizes large-$N$ approximations, enabled by the RBM's simple
structure, to extract effective many-body couplings with minimal computational
effort. Furthermore, we propose a robust framework for extracting higher-order
interactions in more complex probabilistic models and a simple gauge-fixing
method within the effective many-body Potts model. Our validation on synthetic
datasets confirms the method's ability to recover two- and three-body
interactions accurately. When applied to protein sequence data, the framework
competently reconstructs protein contact maps and provides performance
comparable to the best inverse Potts models. These findings confirm that RBMs
are an effective and streamlined tool for exploring higher-order interactions
within complex systems.
|
2501.06112
|
Optimizing Experiments for Accurate Battery Circuit Parameters
Estimation: Reduction and Adjustment of Frequency Set Used in Electrochemical
Impedance Spectroscopy
|
eess.SY cs.SY
|
In this paper, we study a suitable experimental design of electrochemical
impedance spectroscopy (EIS) to reduce the number of frequency points while not
significantly affecting the uncertainties of the estimated cell's equivalent
circuit model (ECM) parameters. It is based on an E-optimal experimental design
that aims to maximize the information about the ECM parameters collected by EIS
measurements and, at the same time, minimize the overall uncertainty. In a
numerical experiment, we first analyze to which extent reducing the number of
measurement points at low frequencies affects the uncertainty of the estimated
parameters. Secondly, we show that applying the frequency adjustments can lead
to the same or even improved global uncertainty of ECM parameter estimates as
with a higher number of measurements. This is numerically verified through a
case study using the ECM parameters of a commercial battery cell.
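The E-optimal idea, choosing frequencies that maximize the smallest eigenvalue of the Fisher information of the ECM parameters, can be sketched with a greedy selection on a simple R0 + (R1 parallel C1) circuit. The circuit, the numerical Jacobian, and the greedy heuristic are all illustrative, not the paper's exact design procedure.

```python
import numpy as np

def ecm_impedance(theta, w):
    """Impedance of a simple R0 + (R1 || C1) equivalent circuit model."""
    r0, r1, c1 = theta
    return r0 + r1 / (1 + 1j * w * r1 * c1)

def jacobian(theta, w):
    """Central-difference sensitivity of Z with respect to each parameter."""
    theta = np.asarray(theta, float)
    J = np.zeros((len(w), len(theta)), complex)
    for i in range(len(theta)):
        h = 1e-6 * max(1.0, abs(theta[i]))   # relative step per parameter
        tp, tm = theta.copy(), theta.copy()
        tp[i] += h
        tm[i] -= h
        J[:, i] = (ecm_impedance(tp, w) - ecm_impedance(tm, w)) / (2 * h)
    return J

def e_optimal_subset(theta, freqs, k):
    """Greedy E-optimal design: repeatedly add the frequency that most
    increases the smallest eigenvalue of the accumulated Fisher
    information. (While the matrix is rank-deficient, early picks are
    tie-broken arbitrarily.)"""
    w = 2 * np.pi * np.asarray(freqs, float)
    J = jacobian(theta, w)
    chosen, F = [], np.zeros((len(theta), len(theta)))
    for _ in range(k):
        best, best_val = None, -np.inf
        for i in range(len(freqs)):
            if i in chosen:
                continue
            Fi = F + np.real(np.outer(J[i].conj(), J[i]))
            val = np.linalg.eigvalsh(Fi)[0]
            if val > best_val:
                best, best_val = i, val
        chosen.append(best)
        F += np.real(np.outer(J[best].conj(), J[best]))
    return [freqs[i] for i in chosen]
```

A more faithful treatment would also rescale the parameters, since R0, R1, and C1 differ by several orders of magnitude and otherwise dominate the eigenvalue criterion unevenly.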
|
2501.06113
|
Vehicle-in-Virtual-Environment (VVE) Based Autonomous Driving Function
Development and Evaluation Methodology for Vulnerable Road User Safety
|
cs.RO cs.SY eess.SY
|
Traditional methods for developing and evaluating autonomous driving
functions, such as model-in-the-loop (MIL) and hardware-in-the-loop (HIL)
simulations, heavily depend on the accuracy of simulated vehicle models and
human factors, especially for vulnerable road user safety systems. Continuation
of development during public road deployment forces other road users including
vulnerable ones to involuntarily participate in the development process,
leading to safety risks, inefficiencies, and a decline in public trust. To
address these deficiencies, the Vehicle-in-Virtual-Environment (VVE) method was
proposed as a safer, more efficient, and cost-effective solution for developing
and testing connected and autonomous driving technologies by operating the real
vehicle and multiple other actors like vulnerable road users in different test
areas while being immersed within the same highly realistic virtual
environment. This VVE approach synchronizes real-world vehicle and vulnerable
road user motion within the same virtual scenario, enabling realistic testing
of various traffic situations in a safe and repeatable
manner. In this paper, we propose a new testing pipeline that sequentially
integrates MIL, HIL, and VVE methods to comprehensively develop and evaluate
autonomous driving functions. The effectiveness of this testing pipeline will
be demonstrated using an autonomous driving path-tracking algorithm with local
deep reinforcement learning modification for vulnerable road user collision
avoidance.
|
2501.06115
|
Development of an Advisory System for Parking of a Car and Trailer
|
cs.RO cs.SY eess.SY
|
Trailer parking is a challenging task due to the unstable nature of the
vehicle-trailer system in reverse motion and the unintuitive steering actions
required at the vehicle to accomplish the parking maneuver. This paper presents
a strategy to tackle this kind of maneuver with an advisory graphic aid to help
the human driver with the task of manually backing up the vehicle-trailer
system. A kinematic vehicle-trailer model is derived to describe the low-speed
motion of the vehicle-trailer system, and its inverse kinematics is established
by generating an equivalent virtual trailer axle steering command. The advisory
system graphics is generated based on the inverse kinematics and displays the
expected trailer orientation given the current vehicle steer angle and
configuration (hitch angle). A simulation study and animation are set up to
test the efficacy of the approach. The user can freely select both vehicle
speed and steering angle, stop the vehicle-trailer system, and experiment with
different steering inputs to see their effect on the predicted trailer motion
before proceeding with the best one according to the advisory graphics. This
creates a series of piecewise continuous control actions, similar to how
manual trailer reverse parking is usually carried out. The advisory graphics
prove to give the driver an intuitive understanding of the trailer motion at
any given configuration (hitch angle).
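The kinematics in question build on a standard low-speed vehicle-trailer model. The following is one common form of the hitch-angle dynamics found in the literature; sign conventions and the hitch-offset term vary between references, and this is not the paper's exact derivation.

```python
import numpy as np

def simulate_hitch_angle(v, delta, psi0, L=3.0, a=0.5, d=4.0,
                         dt=0.01, steps=1000):
    """Euler integration of a common kinematic hitch-angle model:
        omega = v * tan(delta) / L                      (vehicle yaw rate)
        psi'  = -omega * (1 + (a/d) * cos(psi)) - (v/d) * sin(psi)
    L: wheelbase, a: hitch offset behind the rear axle, d: hitch-to-trailer-
    axle length. v < 0 means reversing (the unstable direction)."""
    psi = psi0
    for _ in range(steps):
        omega = v * np.tan(delta) / L
        psi += dt * (-omega * (1 + (a / d) * np.cos(psi))
                     - (v / d) * np.sin(psi))
    return psi
```

The model makes the core difficulty visible: with delta = 0, the hitch angle decays in forward motion but grows in reverse, which is why an advisory aid predicting the trailer's response is useful.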
|
2501.06117
|
Fleurs-SLU: A Massively Multilingual Benchmark for Spoken Language
Understanding
|
cs.CL cs.AI
|
Spoken language understanding (SLU) is indispensable for half of all living
languages that lack a formal writing system, since these languages cannot pair
automatic speech recognition (ASR) with language models to benefit from
language technology. Even if low-resource languages possess a writing system,
ASR for these languages remains unreliable due to limited bimodal speech and
text training data. Better SLU can strengthen the robustness of massively
multilingual ASR by leveraging language semantics to disambiguate utterances via
context or exploiting semantic similarities across languages. However, the
evaluation of multilingual SLU remains limited to shallow tasks such as intent
classification or language identification. To address this, we present
Fleurs-SLU, a multilingual SLU benchmark that encompasses (i) 692 hours of
speech for topical utterance classification in 102 languages and (ii)
multiple-choice question answering through listening comprehension spanning 944
hours of speech across 92 languages. We extensively evaluate both end-to-end
speech classification models and cascaded systems that combine speech-to-text
transcription with subsequent classification by large language models on
Fleurs-SLU. Our results show that cascaded systems exhibit greater robustness
in multilingual SLU tasks, though speech encoders can achieve competitive
performance in topical speech classification when appropriately pre-trained. We
further find a strong correlation between robust multilingual ASR, effective
speech-to-text translation, and strong multilingual SLU, highlighting the
mutual benefits between acoustic and semantic speech representations.
|
2501.06118
|
Nonlinear port-Hamiltonian system identification from input-state-output
data
|
eess.SY cs.SY math.DS math.OC nlin.CD
|
A framework for identifying nonlinear port-Hamiltonian systems using
input-state-output data is introduced. The framework utilizes neural networks'
universal approximation capacity to effectively represent complex dynamics in a
structured way. We show that using the structure helps to make long-term
predictions compared to baselines that do not incorporate physics. We also
explore different architectures based on MLPs, KANs, and using prior
information. The technique is validated through examples featuring
nonlinearities in either the skew-symmetric terms, the dissipative terms, or
the Hamiltonian.
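The structure being identified can be made concrete with a minimal sketch (plain NumPy with invented dynamics, not the paper's neural parameterization): a port-Hamiltonian system dx/dt = (J - R) grad H(x) + G u, where the skew-symmetric J conserves energy and the positive semi-definite R dissipates it. Enforcing this structure is what enables the physically consistent long-term predictions described above.

```python
import numpy as np

# Minimal port-Hamiltonian sketch (invented toy system, not the paper's model):
#   dx/dt = (J - R) @ grad_H(x) + G @ u
# with J skew-symmetric, R symmetric positive semi-definite, and the quadratic
# Hamiltonian H(x) = 0.5 * ||x||^2, so grad_H(x) = x.

J = np.array([[0.0, 1.0], [-1.0, 0.0]])   # skew-symmetric interconnection
R = np.array([[0.1, 0.0], [0.0, 0.1]])    # positive semi-definite dissipation
G = np.array([[0.0], [1.0]])              # input matrix

def grad_H(x):
    return x  # gradient of H(x) = 0.5 * ||x||^2

def f(x, u):
    return (J - R) @ grad_H(x) + G @ u

# Forward-Euler rollout with zero input: the structure guarantees
# dH/dt = -grad_H(x).T @ R @ grad_H(x) <= 0, i.e. energy never increases.
x = np.array([1.0, 0.0])
energies = [0.5 * x @ x]
for _ in range(1000):
    x = x + 0.001 * f(x, np.zeros(1))
    energies.append(0.5 * x @ x)

assert all(e2 <= e1 for e1, e2 in zip(energies, energies[1:]))
print("energy decayed from %.3f to %.3f" % (energies[0], energies[-1]))
```

A black-box model has no such guarantee, which is one intuition for why the structured models extrapolate better over long horizons.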
|
2501.06121
|
kANNolo: Sweet and Smooth Approximate k-Nearest Neighbors Search
|
cs.IR
|
Approximate Nearest Neighbors (ANN) search is a crucial task in several
applications like recommender systems and information retrieval. Current
state-of-the-art ANN libraries, although performance-oriented, often lack
modularity and ease of use. As a result, they are not fully suitable for easy
prototyping and testing of research ideas, an important feature to support. We
address these limitations by introducing kANNolo, a novel
research-oriented ANN library written in Rust and explicitly designed to
combine usability with performance effectively. kANNolo is the first ANN
library that supports dense and sparse vector representations made available on
top of different similarity measures, e.g., Euclidean distance and inner
product. Moreover, it also supports vector quantization techniques, e.g.,
Product Quantization, on top of the indexing strategies implemented. These
functionalities are managed through Rust traits, allowing shared behaviors to
be handled abstractly. This abstraction ensures flexibility and facilitates an
easy integration of new components. In this work, we detail the architecture of
kANNolo and demonstrate that its flexibility does not compromise performance.
The experimental analysis shows that kANNolo achieves state-of-the-art
performance in terms of speed-accuracy trade-off while allowing fast and easy
prototyping, thus making kANNolo a valuable tool for advancing ANN research.
Source code available on GitHub: https://github.com/TusKANNy/kannolo.
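The trait-based design can be illustrated outside Rust. Below is a rough Python analogy (invented for illustration, not kANNolo's actual API): similarity measures implement a shared abstract interface, and an index is generic over whichever measure it receives, mirroring how Rust traits let shared behaviors be handled abstractly.

```python
from abc import ABC, abstractmethod
import numpy as np

# Rough analogy of trait-based dispatch: distance measures are pluggable
# behind one interface, and the index works with any of them.

class Distance(ABC):
    @abstractmethod
    def score(self, q, x):
        """Higher score = better match."""

class NegEuclidean(Distance):
    def score(self, q, x):
        return -float(np.linalg.norm(q - x))

class InnerProduct(Distance):
    def score(self, q, x):
        return float(q @ x)

class BruteForceIndex:
    def __init__(self, vectors, distance: Distance):
        self.vectors = vectors
        self.distance = distance

    def search(self, q, k=1):
        scores = [self.distance.score(q, x) for x in self.vectors]
        return sorted(range(len(scores)), key=lambda i: -scores[i])[:k]

data = [np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([0.7, 0.7])]
q = np.array([0.6, 0.8])
print(BruteForceIndex(data, NegEuclidean()).search(q, k=2))   # → [2, 1]
print(BruteForceIndex(data, InnerProduct()).search(q, k=2))   # → [2, 1]
```

In the real library, quantizers and indexing strategies compose behind the same kind of abstraction, which is what makes swapping components cheap for prototyping.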
|
2501.06122
|
NDOB-Based Control of a UAV with Delta-Arm Considering Manipulator
Dynamics
|
cs.RO
|
Aerial Manipulators (AMs) provide a versatile platform for various
applications, including 3D printing, architecture, and aerial grasping
missions. However, their operational speed is often sacrificed to uphold
precision. Existing control strategies for AMs often regard the manipulator as
a disturbance and employ robust control methods to mitigate its influence. This
research focuses on elevating the precision of the end-effector and enhancing
the agility of aerial manipulator movements. We present a composite control
scheme to address these challenges. Initially, a Nonlinear Disturbance Observer
(NDOB) is utilized to compensate for internal coupling effects and external
disturbances. Subsequently, manipulator dynamics are processed through a
high-pass filter to facilitate agile movements. By integrating the proposed control
method into a fully autonomous delta-arm-based AM system, we substantiate the
controller's efficacy through extensive real-world experiments. The outcomes
illustrate that the end-effector can achieve accuracy at the millimeter level.
|
2501.06126
|
Merging Feed-Forward Sublayers for Compressed Transformers
|
cs.CL cs.LG
|
With the rise and ubiquity of larger deep learning models, the need for
high-quality compression techniques is growing in order to deploy these models
widely. The sheer parameter count of these models makes it difficult to fit
them into the memory constraints of different hardware. In this work, we
present a novel approach to model compression by merging similar parameter
groups within a model, rather than pruning away less important parameters.
Specifically, we select, align, and merge separate feed-forward sublayers in
Transformer models, and test our method on language modeling, image
classification, and machine translation. With our method, we demonstrate
performance comparable to the original models while combining more than a third
of model feed-forward sublayers, and demonstrate improved performance over a
strong layer-pruning baseline. For instance, we can remove over 21% of total
parameters from a Vision Transformer, while maintaining 99% of its original
performance. Additionally, we observe that some groups of feed-forward
sublayers exhibit high activation similarity, which may help explain their
surprising mergeability.
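The select-align-merge idea can be sketched on toy weight matrices (invented example, not the paper's exact procedure): the hidden units of two feed-forward layers are matched by the similarity of their incoming weights via a Hungarian assignment, and the aligned weights are then averaged into one shared copy.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Toy sketch of align-then-merge for two feed-forward weight matrices.
rng = np.random.default_rng(0)
W_a = rng.normal(size=(8, 4))      # layer A: 8 hidden units, 4 inputs each
perm = rng.permutation(8)
W_b = W_a[perm]                    # layer B: the same units in shuffled order

# Hungarian matching on negative pairwise similarity between hidden units.
cost = -(W_a @ W_b.T)
_, col = linear_sum_assignment(cost)

W_b_aligned = W_b[col]             # reorder B's units to line up with A's
W_merged = 0.5 * (W_a + W_b_aligned)

# Because B is an exact shuffle of A, alignment recovers A and the merge is
# lossless; real sublayers are only approximately similar, so merging trades
# a small accuracy loss for a large parameter reduction.
print(np.allclose(W_merged, W_a))  # → True
```

The high activation similarity observed between sublayers is what makes this kind of averaging viable in practice.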
|
2501.06129
|
Contextual ASR Error Handling with LLMs Augmentation for Goal-Oriented
Conversational AI
|
cs.CL cs.AI
|
General-purpose automatic speech recognition (ASR) systems do not always
perform well in goal-oriented dialogue. Existing ASR correction methods rely on
prior user data or named entities. We extend correction to tasks that have no
prior user data and exhibit linguistic flexibility such as lexical and
syntactic variations. We propose a novel context augmentation with a large
language model and a ranking strategy that incorporates contextual information
from the dialogue states of a goal-oriented conversational AI and its tasks.
Our method ranks (1) n-best ASR hypotheses by their lexical and semantic
similarity with context and (2) context by phonetic correspondence with ASR
hypotheses. Evaluated in home improvement and cooking domains with real-world
users, our method improves recall and F1 of correction by 34% and 16%,
respectively, while maintaining precision and false positive rate. Users rated
0.8-1 point (out of 5) higher when our correction method worked properly, with
no decrease due to false positives.
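The first ranking step can be sketched with a trivial lexical similarity (the actual system also uses semantic and phonetic signals; the hypotheses and context terms below are invented): n-best ASR hypotheses are reranked by their overlap with context terms drawn from the dialogue state.

```python
# Toy rerank of n-best ASR hypotheses by Jaccard overlap with dialogue context.

def jaccard(a, b):
    a, b = set(a.lower().split()), set(b.lower().split())
    return len(a & b) / len(a | b) if a | b else 0.0

def rank_hypotheses(nbest, context_terms):
    context = " ".join(context_terms)
    return sorted(nbest, key=lambda h: jaccard(h, context), reverse=True)

nbest = [
    "turn on the oven lite",
    "turn on the oven light",
    "turn of the of in light",
]
context = ["oven", "light", "timer"]  # e.g. from a cooking-domain dialogue state
ranked = rank_hypotheses(nbest, context)
print(ranked[0])   # → "turn on the oven light"
```

The reverse direction described in the abstract, ranking context entries by phonetic correspondence with the hypotheses, follows the same pattern with a phonetic rather than lexical similarity.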
|
2501.06130
|
A Mixed-Integer Conic Program for the Multi-Agent Moving-Target
Traveling Salesman Problem
|
cs.RO
|
The Moving-Target Traveling Salesman Problem (MT-TSP) aims to find a shortest
path for an agent that starts at a stationary depot, visits a set of moving
targets exactly once, each within one of their respective time windows, and
then returns to the depot. In this paper, we introduce a new Mixed-Integer
Conic Program (MICP) formulation that finds the optimum for the Multi-Agent
Moving-Target Traveling Salesman Problem (MA-MT-TSP), a generalization of the
MT-TSP involving multiple agents. We obtain our formulation by first restating
the current state-of-the-art MICP formulation for MA-MT-TSP as a Mixed-Integer
Nonlinear Nonconvex Program, and then reformulating it as a new MICP. We
present computational results to demonstrate the performance of our approach.
The results show that our formulation significantly outperforms the
state-of-the-art, with up to a two-order-of-magnitude reduction in runtime, and
up to over 90% tighter optimality gap.
|
2501.06132
|
CoDriveVLM: VLM-Enhanced Urban Cooperative Dispatching and Motion
Planning for Future Autonomous Mobility on Demand Systems
|
cs.RO cs.AI cs.MA
|
The increasing demand for flexible and efficient urban transportation
solutions has spotlighted the limitations of traditional Demand Responsive
Transport (DRT) systems, particularly in accommodating diverse passenger needs
and dynamic urban environments. Autonomous Mobility-on-Demand (AMoD) systems
have emerged as a promising alternative, leveraging connected and autonomous
vehicles (CAVs) to provide responsive and adaptable services. However, existing
methods primarily focus on either vehicle scheduling or path planning, which
often simplify complex urban layouts and neglect the necessity for simultaneous
coordination and mutual avoidance among CAVs. This oversimplification poses
significant challenges to the deployment of AMoD systems in real-world
scenarios. To address these gaps, we propose CoDriveVLM, a novel framework that
integrates high-fidelity simultaneous dispatching and cooperative motion
planning for future AMoD systems. Our method harnesses Vision-Language Models
(VLMs) to enhance multimodal information processing, enabling
comprehensive dispatching and collision risk evaluation. The VLM-enhanced CAV
dispatching coordinator is introduced to effectively manage complex and
unforeseen AMoD conditions, thus supporting efficient scheduling
decision-making. Furthermore, we propose a scalable decentralized cooperative
motion planning method via consensus alternating direction method of
multipliers (ADMM) focusing on collision risk evaluation and decentralized
trajectory optimization. Simulation results demonstrate the feasibility and
robustness of CoDriveVLM in various traffic conditions, showcasing its
potential to significantly improve the fidelity and effectiveness of AMoD
systems in future urban transportation networks. The code is available at
https://github.com/henryhcliu/CoDriveVLM.git.
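The consensus-ADMM machinery can be illustrated on a toy scalar problem (far simpler than the paper's decentralized trajectory optimization): each agent holds a local quadratic cost, and the alternating local-minimization, consensus, and dual updates drive all agents to agreement at the global minimizer.

```python
import numpy as np

# Toy consensus ADMM: N agents each minimize 0.5*(x_i - a_i)^2 subject to
# x_i = z for all i. The consensus value z converges to the mean of a.

a = np.array([1.0, 4.0, 7.0])   # each agent's locally preferred value
rho = 1.0                       # ADMM penalty parameter
x = np.zeros_like(a)
u = np.zeros_like(a)            # scaled dual variables
z = 0.0

for _ in range(100):
    x = (a + rho * (z - u)) / (1.0 + rho)   # local x-update (closed form)
    z = float(np.mean(x + u))               # consensus z-update
    u = u + x - z                           # dual update

print(round(z, 6))   # → 4.0, the mean of a
```

In the full system, each x-update is an agent's local trajectory optimization and the consensus step enforces collision-avoidance agreements, but the decomposition pattern is the same.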
|
2501.06137
|
Supervision policies can shape long-term risk management in
general-purpose AI models
|
cs.AI cs.CY cs.SI
|
The rapid proliferation and deployment of General-Purpose AI (GPAI) models,
including large language models (LLMs), present unprecedented challenges for AI
supervisory entities. We hypothesize that these entities will need to navigate
an emergent ecosystem of risk and incident reporting, likely to exceed their
supervision capacity. To investigate this, we develop a simulation framework
parameterized by features extracted from the diverse landscape of risk,
incident, or hazard reporting ecosystems, including community-driven platforms,
crowdsourcing initiatives, and expert assessments. We evaluate four supervision
policies: non-prioritized (first-come, first-served), random selection,
priority-based (addressing the highest-priority risks first), and
diversity-prioritized (balancing high-priority risks with comprehensive
coverage across risk types). Our results indicate that while priority-based and
diversity-prioritized policies are more effective at mitigating high-impact
risks, particularly those identified by experts, they may inadvertently neglect
systemic issues reported by the broader community. This oversight can create
feedback loops that amplify certain types of reporting while discouraging
others, leading to a skewed perception of the overall risk landscape. We
validate our simulation results with several real-world datasets, including one
with over a million ChatGPT interactions, of which more than 150,000
conversations were identified as risky. This validation underscores the complex
trade-offs inherent in AI risk supervision and highlights how the choice of
risk management policies can shape the future landscape of AI risks across
diverse GPAI models used in society.
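A stripped-down version of such a simulation (all parameters invented for illustration) already exhibits the basic trade-off between the non-prioritized and priority-based policies: under limited capacity, triage by priority reviews far more high-priority reports than first-come, first-served.

```python
import random

# Toy supervision simulation: 1000 reports with priority in {1, 2, 3}
# (3 = high), and a supervisor who can review only `capacity` of them.

random.seed(0)
reports = [random.choice([1, 2, 3]) for _ in range(1000)]
capacity = 100

fifo = reports[:capacity]                           # first-come, first-served
prioritized = sorted(reports, reverse=True)[:capacity]  # priority-based

high_fifo = sum(1 for r in fifo if r == 3)
high_prio = sum(1 for r in prioritized if r == 3)
print(high_fifo, high_prio)   # priority policy reviews more high-priority reports
```

The diversity-prioritized policy in the paper additionally balances coverage across risk types, precisely to avoid the blind spot this toy version has: the priority queue never looks at low-priority reports at all.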
|
2501.06138
|
MS-Temba : Multi-Scale Temporal Mamba for Efficient Temporal Action
Detection
|
cs.CV
|
Action detection in real-world scenarios is particularly challenging due to
densely distributed actions in hour-long untrimmed videos. It requires modeling
both short- and long-term temporal relationships while handling significant
intra-class temporal variations. Previous state-of-the-art (SOTA)
Transformer-based architectures, though effective, are impractical for
real-world deployment due to their high parameter count, GPU memory usage, and
limited throughput, making them unsuitable for very long videos. In this work,
we innovatively adapt the Mamba architecture for action detection and propose
Multi-scale Temporal Mamba (MS-Temba), comprising two key components: Temporal
Mamba (Temba) Blocks and the Temporal Mamba Fuser. Temba Blocks include the
Temporal Local Module (TLM) for short-range temporal modeling and the Dilated
Temporal SSM (DTS) for long-range dependencies. By introducing dilations, a
novel concept for Mamba, TLM and DTS capture local and global features at
multiple scales. The Temba Fuser aggregates these scale-specific features using
Mamba to learn comprehensive multi-scale representations of untrimmed videos.
MS-Temba is validated on three public datasets, outperforming SOTA methods on
long videos and matching prior methods on short videos while using only
one-eighth of the parameters.
|
2501.06141
|
Emergent Symbol-like Number Variables in Artificial Neural Networks
|
cs.LG cs.AI cs.SC
|
What types of numeric representations emerge in Neural Networks (NNs)? To
what degree do NNs induce abstract, mutable, slot-like numeric variables, and
in what situations do these representations emerge? How do these
representations change over learning, and how can we understand the neural
implementations in ways that are unified across different NNs? In this work, we
approach these questions by first training sequence-based neural systems using
Next Token Prediction (NTP) objectives on numeric tasks. We then seek to
understand the neural solutions through the lens of causal abstractions or
symbolic algorithms. We use a combination of causal interventions and
visualization methods to find that artificial neural models do indeed develop
analogs of interchangeable, mutable, latent number variables purely from the
NTP objective. We then ask how variations on the tasks and model architectures
affect the models' learned solutions to find that these symbol-like numeric
representations do not form for every variant of the task, and transformers
solve the problem in a notably different way than their recurrent counterparts.
We then show how the symbol-like variables change over the course of training
to find a strong correlation between the models' task performance and the
alignment of their symbol-like representations. Lastly, we show that in all
cases, some degree of gradience exists in these neural symbols, highlighting
the difficulty of finding simple, interpretable symbolic stories of how neural
networks perform numeric tasks. Taken together, our results are consistent with
the view that neural networks can approximate interpretable symbolic programs
of number cognition, but the particular program they approximate and the extent
to which they approximate it can vary widely, depending on the network
architecture, training data, extent of training, and network size.
|
2501.06143
|
Multilingual Performance of a Multimodal Artificial Intelligence System
on Multisubject Physics Concept Inventories
|
physics.ed-ph cs.AI
|
We investigate the multilingual and multimodal performance of a large
language model-based artificial intelligence (AI) system, GPT-4o, on a diverse
set of physics concept inventories spanning multiple languages and subject
areas. The inventories taken from the PhysPort website cover the classical
physics topics of mechanics, electromagnetism, optics, and thermodynamics as
well as relativity, quantum mechanics, astronomy, mathematics, and laboratory
skills. Unlike previous text-only studies, we uploaded the inventories as
images mirroring what a student would see on paper, assessing the system's
multimodal functionality. The AI is prompted in English and autonomously
chooses the language of its response - either remaining in the nominal language
of the test, switching entirely to English, or mixing languages - revealing
adaptive behavior dependent on linguistic complexity and data availability. Our
results indicate some variation in performance across subject areas, with
laboratory skills standing out as the area of poorest performance. Furthermore,
the AI's performance on questions that require visual interpretation of images
is worse than on purely text-based questions. Questions that are difficult for
the AI tend to be difficult regardless of the inventory language. We also find
large variations in performance across languages, with some appearing to
benefit substantially from language switching, a phenomenon similar to
code-switching of human speakers. Overall, comparing the obtained AI results to
the existing literature, we find that the AI system outperforms average
undergraduate students post-instruction in all subject areas but laboratory
skills.
|
2501.06146
|
xLSTM-SENet: xLSTM for Single-Channel Speech Enhancement
|
cs.SD cs.AI eess.AS
|
While attention-based architectures, such as Conformers, excel in speech
enhancement, they face challenges such as scalability with respect to input
sequence length. In contrast, the recently proposed Extended Long Short-Term
Memory (xLSTM) architecture offers linear scalability. However, xLSTM-based
models remain unexplored for speech enhancement. This paper introduces
xLSTM-SENet, the first xLSTM-based single-channel speech enhancement system. A
comparative analysis reveals that xLSTM, and notably even LSTM, can match or
outperform state-of-the-art Mamba- and Conformer-based systems across various
model sizes in speech enhancement on the VoiceBank+DEMAND dataset. Through
ablation studies, we identify key architectural design choices, such as
exponential gating and bidirectionality, that contribute to its effectiveness. Our
best xLSTM-based model, xLSTM-SENet2, outperforms state-of-the-art Mamba- and
Conformer-based systems on the VoiceBank+DEMAND dataset.
|
2501.06148
|
From discrete-time policies to continuous-time diffusion samplers:
Asymptotic equivalences and faster training
|
cs.LG stat.ML
|
We study the problem of training neural stochastic differential equations, or
diffusion models, to sample from a Boltzmann distribution without access to
target samples. Existing methods for training such models enforce time-reversal
of the generative and noising processes, using either differentiable simulation
or off-policy reinforcement learning (RL). We prove equivalences between
families of objectives in the limit of infinitesimal discretization steps,
linking entropic RL methods (GFlowNets) with continuous-time objects (partial
differential equations and path space measures). We further show that an
appropriate choice of coarse time discretization during training allows greatly
improved sample efficiency and the use of time-local objectives, achieving
competitive performance on standard sampling benchmarks with reduced
computational cost.
|
2501.06151
|
PySpatial: A High-Speed Whole Slide Image Pathomics Toolkit
|
eess.IV cs.CV
|
Whole Slide Image (WSI) analysis plays a crucial role in modern digital
pathology, enabling large-scale feature extraction from tissue samples.
However, traditional feature extraction pipelines based on tools like
CellProfiler often involve lengthy workflows, requiring WSI segmentation into
patches, feature extraction at the patch level, and subsequent mapping back to
the original WSI. To address these challenges, we present PySpatial, a
high-speed pathomics toolkit specifically designed for WSI-level analysis.
PySpatial streamlines the conventional pipeline by directly operating on
computational regions of interest, reducing redundant processing steps.
Utilizing rtree-based spatial indexing and matrix-based computation, PySpatial
efficiently maps and processes computational regions, significantly
accelerating feature extraction while maintaining high accuracy. Our
experiments on two datasets-Perivascular Epithelioid Cell (PEC) and data from
the Kidney Precision Medicine Project (KPMP)-demonstrate substantial
performance improvements. For smaller and sparse objects in PEC datasets,
PySpatial achieves nearly a 10-fold speedup compared to standard CellProfiler
pipelines. For larger objects, such as glomeruli and arteries in KPMP datasets,
PySpatial achieves a 2-fold speedup. These results highlight PySpatial's
potential to handle large-scale WSI analysis with enhanced efficiency and
accuracy, paving the way for broader applications in digital pathology.
|
2501.06158
|
GenMol: A Drug Discovery Generalist with Discrete Diffusion
|
cs.LG
|
Drug discovery is a complex process that involves multiple scenarios and
stages, such as fragment-constrained molecule generation, hit generation and
lead optimization. However, existing molecular generative models can only
tackle one or two of these scenarios and lack the flexibility to address
various aspects of the drug discovery pipeline. In this paper, we present
Generalist Molecular generative model (GenMol), a versatile framework that
addresses these limitations by applying discrete diffusion to the Sequential
Attachment-based Fragment Embedding (SAFE) molecular representation. GenMol
generates SAFE sequences through non-autoregressive bidirectional parallel
decoding, thereby exploiting molecular context without relying on a specific
token ordering and improving computational efficiency.
Moreover, under the discrete diffusion framework, we introduce fragment
remasking, a strategy that optimizes molecules by replacing fragments with
masked tokens and regenerating them, enabling effective exploration of chemical
space. GenMol significantly outperforms the previous GPT-based model trained on
SAFE representations in de novo generation and fragment-constrained generation,
and achieves state-of-the-art performance in goal-directed hit generation and
lead optimization. These experimental results demonstrate that GenMol can
tackle a wide range of drug discovery tasks, providing a unified and versatile
approach for molecular design.
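The fragment-remasking loop can be caricatured with a toy objective (everything below is invented; GenMol resamples fragments with a learned discrete diffusion model, not a random vocabulary draw): mask one fragment of a molecule, regenerate it, and keep the result when a scoring function does not get worse.

```python
import random

# Toy fragment remasking: a "molecule" is a list of fragment tokens; we
# iteratively replace one fragment and accept non-worsening candidates.

random.seed(0)
VOCAB = ["C", "N", "O", "ring", "amide"]

def score(frags):
    # Invented stand-in for a property oracle (e.g. docking or QED).
    return frags.count("ring") + frags.count("amide")

mol = ["C", "C", "O", "N"]
for _ in range(50):
    i = random.randrange(len(mol))                               # mask a fragment
    candidate = mol[:i] + [random.choice(VOCAB)] + mol[i + 1:]   # regenerate it
    if score(candidate) >= score(mol):
        mol = candidate

print(score(mol))   # toy objective improves above the starting score of 0
```

In the real method, the bidirectional diffusion model makes each regenerated fragment coherent with the rest of the molecule, which is what turns this loop into effective chemical-space exploration.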
|
2501.06159
|
Efficient Transition State Searches by Freezing String Method with Graph
Neural Network Potentials
|
physics.chem-ph cs.LG
|
Transition states are a critical bottleneck in chemical transformations.
Significant efforts have been made to develop algorithms that efficiently
locate transition states on potential energy surfaces. However, the
computational cost of ab-initio potential energy surface evaluation limits the
size of chemical systems that can be routinely studied. In this work, we develop
and fine-tune a graph neural network potential energy function suitable for
describing organic chemical reactions and use it to rapidly identify transition
state guess structures. We successfully refine guess structures and locate a
transition state in each test system considered and reduce the average number
of ab-initio calculations by 47% through use of the graph neural network
potential energy function. Our results show that modern machine learning models
have reached levels of reliability whereby they can be used to accelerate
routine computational chemistry tasks.
|
2501.06164
|
Model Alignment Search
|
cs.LG cs.AI
|
When can we say that two neural systems are the same? The answer to this
question is goal-dependent, and it is often addressed through correlative
methods such as Representational Similarity Analysis (RSA) and Centered Kernel
Alignment (CKA). How do we target functionally relevant similarity, and how do
we isolate specific causal aspects of the representations? In this work, we
introduce Model Alignment Search (MAS), a method for causally exploring
distributed representational similarity. The method learns invertible linear
transformations that align a subspace between two distributed networks'
representations where causal information can be freely interchanged. We first
show that the method can be used to transfer values of specific causal
variables -- such as the number of items in a counting task -- between networks
with different training seeds. We then explore open questions in number
cognition by comparing different types of numeric representations in models
trained on structurally different tasks. We further contrast MAS with
preexisting causal similarity methods, and lastly, we introduce a
counterfactual latent auxiliary loss function that helps shape causally
relevant alignments even in cases where we do not have causal access to one of
the two models for training.
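The flavor of aligning two networks' representation spaces can be sketched with orthogonal Procrustes, a closed-form special case (not MAS itself, which learns invertible maps with causal interchange objectives; the "two networks" below are simulated as a rotation of one another).

```python
import numpy as np

# Sketch: recover the linear map aligning network B's representations with
# network A's, where B is (by construction) a rotated copy of A.

rng = np.random.default_rng(1)
A = rng.normal(size=(100, 4))                 # representations from network A
Q_true, _ = np.linalg.qr(rng.normal(size=(4, 4)))
B = A @ Q_true                                # network B: rotated copy of A

# Orthogonal Procrustes: argmin_Q ||A Q - B||_F over orthogonal Q is
# Q = U @ Vt, where U, S, Vt = svd(A.T @ B).
U, _, Vt = np.linalg.svd(A.T @ B)
Q = U @ Vt

print(np.allclose(A @ Q, B))   # → True: the alignment is recovered exactly
```

MAS goes further: its transformations are trained so that causal variables (such as an item count) can be swapped between the aligned subspaces and still drive correct behavior, rather than merely minimizing a reconstruction error.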
|
2501.06167
|
Meta-Learning for Physically-Constrained Neural System Identification
|
cs.LG cs.SY eess.SY math.OC
|
We present a gradient-based meta-learning framework for rapid adaptation of
neural state-space models (NSSMs) for black-box system identification. When
applicable, we also incorporate domain-specific physical constraints to improve
the accuracy of the NSSM. The major benefit of our approach is that instead of
relying solely on data from a single target system, our framework utilizes data
from a diverse set of source systems, enabling learning from limited target
data, as well as with few online training iterations. Through benchmark
examples, we demonstrate the potential of our approach, study the effect of
fine-tuning subnetworks rather than full fine-tuning, and report real-world
case studies to illustrate the practical application and generalizability of
the approach to practical problems with physical constraints. Specifically, we
show that the meta-learned models result in improved downstream performance in
model-based state estimation in indoor localization and energy systems.
|
2501.06171
|
Machine Learning Force-Field Approach for Itinerant Electron Magnets
|
cond-mat.str-el cs.LG physics.comp-ph
|
We review the recent development of machine-learning (ML) force-field
frameworks for Landau-Lifshitz-Gilbert (LLG) dynamics simulations of itinerant
electron magnets, focusing on the general theory and implementations of
symmetry-invariant representations of spin configurations. The crucial
properties that such magnetic descriptors must satisfy are differentiability
with respect to spin rotations and invariance to both lattice point-group
symmetry and internal spin rotation symmetry. We propose an efficient
implementation based on the concept of reference irreducible representations,
modified from the group-theoretical power-spectrum and bispectrum methods. The
ML framework is demonstrated using the s-d models, which are widely applied in
spintronics research. We show that LLG simulations based on local fields
predicted by the trained ML models successfully reproduce representative
non-collinear spin structures, including 120$^\circ$, tetrahedral, and skyrmion
crystal orders of the triangular-lattice s-d models. Large-scale thermal quench
simulations enabled by ML models further reveal intriguing freezing dynamics
and glassy stripe states consisting of skyrmions and bi-merons. Our work
highlights the utility of ML force-field approach to dynamical modeling of
complex spin orders in itinerant electron magnets.
|
2501.06173
|
VideoAuteur: Towards Long Narrative Video Generation
|
cs.CV
|
Recent video generation models have shown promising results in producing
high-quality video clips lasting several seconds. However, these models face
challenges in generating long sequences that convey clear and informative
events, limiting their ability to support coherent narrations. In this paper,
we present a large-scale cooking video dataset designed to advance long-form
narrative generation in the cooking domain. We validate the quality of our
proposed dataset in terms of visual fidelity and textual caption accuracy using
state-of-the-art Vision-Language Models (VLMs) and video generation models,
respectively. We further introduce a Long Narrative Video Director to enhance
both visual and semantic coherence in generated videos and emphasize the role
of aligning visual embeddings to achieve improved overall video quality. Our
method demonstrates substantial improvements in generating visually detailed
and semantically aligned keyframes, supported by finetuning techniques that
integrate text and image embeddings within the video generation process.
Project page: https://videoauteur.github.io/
|
2501.06181
|
Best Response Convergence for Zero-sum Stochastic Dynamic Games with
Partial and Asymmetric Information
|
eess.SY cs.SY math.OC
|
We analyze best response dynamics for finding a Nash equilibrium of an
infinite horizon zero-sum stochastic linear quadratic dynamic game (LQDG) with
partial and asymmetric information. We derive explicit expressions for each
player's best response within the class of pure linear dynamic output feedback
control strategies where the internal state dimension of each control strategy
is an integer multiple of the system state dimension. With each best response,
the players form increasingly higher-order belief states, leading to
infinite-dimensional internal states. However, we observe in extensive
numerical experiments that the game's value converges after just a few
iterations, suggesting that strategies associated with increasingly
higher-order belief states eventually provide no benefit. To help explain this
convergence, our numerical analysis reveals rapid decay of the controllability
and observability Gramian eigenvalues and Hankel singular values in
higher-order belief dynamics, indicating that the higher-order belief dynamics
become increasingly difficult for both players to control and observe.
Consequently, the higher-order belief dynamics can be closely approximated by
low-order belief dynamics with bounded error, and thus feedback strategies with
limited internal state dimension can closely approximate a Nash equilibrium.
|
2501.06184
|
PEACE: Empowering Geologic Map Holistic Understanding with MLLMs
|
cs.CV cs.MA
|
Geologic map, as a fundamental diagram in geology science, provides critical
insights into the structure and composition of Earth's subsurface and surface.
These maps are indispensable in various fields, including disaster detection,
resource exploration, and civil engineering. Despite their significance,
current Multimodal Large Language Models (MLLMs) often fall short in geologic
map understanding. This gap is primarily due to the challenging nature of
cartographic generalization, which involves handling high-resolution maps,
managing multiple associated components, and requiring domain-specific
knowledge. To quantify this gap, we construct GeoMap-Bench, the first-ever
benchmark for evaluating MLLMs in geologic map understanding, which assesses
the full-scale abilities in extracting, referring, grounding, reasoning, and
analyzing. To bridge this gap, we introduce GeoMap-Agent, the inaugural agent
designed for geologic map understanding, which features three modules:
Hierarchical Information Extraction (HIE), Domain Knowledge Injection (DKI),
and Prompt-enhanced Question Answering (PEQA). Inspired by the
interdisciplinary collaboration among human scientists, an AI expert group acts
as consultants, utilizing a diverse tool pool to comprehensively analyze
questions. Through comprehensive experiments, GeoMap-Agent achieves an overall
score of 0.811 on GeoMap-Bench, significantly outperforming 0.369 of GPT-4o.
Our work, emPowering gEologic mAp holistiC undErstanding (PEACE) with MLLMs,
paves the way for advanced AI applications in geology, enhancing the efficiency
and accuracy of geological investigations.
|
2501.06186
|
LlamaV-o1: Rethinking Step-by-step Visual Reasoning in LLMs
|
cs.CV
|
Reasoning is a fundamental capability for solving complex multi-step
problems, particularly in visual contexts where sequential step-wise
understanding is essential. Existing approaches lack a comprehensive framework
for evaluating visual reasoning and do not emphasize step-wise problem-solving.
To this end, we propose a comprehensive framework for advancing step-by-step
visual reasoning in large multimodal models (LMMs) through three key
contributions. First, we introduce a visual reasoning benchmark specifically
designed to evaluate multi-step reasoning tasks. The benchmark presents a
diverse set of challenges with eight different categories ranging from complex
visual perception to scientific reasoning with over 4k reasoning steps in
total, enabling robust evaluation of LMMs' abilities to perform accurate and
interpretable visual reasoning across multiple steps. Second, we propose a
novel metric that assesses visual reasoning quality at the granularity of
individual steps, emphasizing both correctness and logical coherence. The
proposed metric offers deeper insights into reasoning performance compared to
traditional end-task accuracy metrics. Third, we present a new multimodal
visual reasoning model, named LlamaV-o1, trained using a multi-step curriculum
learning approach, where tasks are progressively organized to facilitate
incremental skill acquisition and problem-solving. The proposed LlamaV-o1 is
designed for multi-step reasoning and learns step-by-step through a structured
training paradigm. Extensive experiments show that our LlamaV-o1 outperforms
existing open-source models and performs favorably against closed-source
proprietary models. Compared to the recent Llava-CoT, our LlamaV-o1 achieves an
average score of 67.3 with an absolute gain of 3.8\% across six benchmarks
while being 5 times faster during inference scaling. Our benchmark, model, and
code are publicly available.
|
2501.06187
|
Multi-subject Open-set Personalization in Video Generation
|
cs.CV
|
Video personalization methods allow us to synthesize videos with specific
concepts such as people, pets, and places. However, existing methods often
focus on limited domains, require time-consuming optimization per subject, or
support only a single subject. We present Video Alchemist $-$ a video model
with built-in multi-subject, open-set personalization capabilities for both
foreground objects and background, eliminating the need for time-consuming
test-time optimization. Our model is built on a new Diffusion Transformer
module that fuses each conditional reference image and its corresponding
subject-level text prompt with cross-attention layers. Developing such a large
model presents two main challenges: dataset and evaluation. First, as paired
datasets of reference images and videos are extremely hard to collect, we
sample selected video frames as reference images and synthesize a clip of the
target video. However, while models can easily denoise training videos given
reference frames, they fail to generalize to new contexts. To mitigate this
issue, we design a new automatic data construction pipeline with extensive
image augmentations. Second, evaluating open-set video personalization is a
challenge in itself. To address this, we introduce a personalization benchmark
that focuses on accurate subject fidelity and supports diverse personalization
scenarios. Finally, our extensive experiments show that our method
significantly outperforms existing personalization methods in both quantitative
and qualitative evaluations.
|
2501.06189
|
A Multimodal Social Agent
|
cs.AI cs.CL
|
In recent years, large language models (LLMs) have demonstrated remarkable
progress in common-sense reasoning tasks. This ability is fundamental to
understanding social dynamics, interactions, and communication. However, the
potential of integrating computers with these social capabilities is still
relatively unexplored. This paper introduces
MuSA, a multimodal LLM-based agent that analyzes text-rich social content
tailored to address selected human-centric content analysis tasks, such as
question answering, visual question answering, title generation, and
categorization. It uses planning, reasoning, acting, optimizing, criticizing,
and refining strategies to complete a task. Our approach demonstrates that MuSA
can automate and improve social content analysis, helping decision-making
processes across various applications. We have evaluated our agent's
capabilities in question answering, title generation, and content
categorization tasks. MuSA performs substantially better than our baselines.
|
2501.06192
|
A Computational Model of Learning and Memory Using Structurally Dynamic
Cellular Automata
|
cs.AI cs.NE math.DS q-bio.NC
|
In the fields of computation and neuroscience, much is still unknown about
the underlying computations that enable key cognitive functions including
learning, memory, abstraction and behavior. This paper proposes a mathematical
and computational model of learning and memory based on a small set of
bio-plausible functions that include coincidence detection, signal modulation,
and reward/penalty mechanisms. Our theoretical approach proposes that these
basic functions are sufficient to establish and modulate an information space
over which computation can be carried out, generating signal gradients usable
for inference and behavior. The computational method used to test this is a
structurally dynamic cellular automaton with continuous-valued cell states and
a series of recursive steps propagating over an undirected graph with the
memory function embedded entirely in the creation and modulation of graph
edges. The experimental results show: that the toy model can make near-optimal
choices to re-discover a reward state after a single training run; that it can
avoid complex penalty configurations; that signal modulation and network
plasticity can generate exploratory behaviors in sparse reward environments;
that the model generates context-dependent memory representations; and that it
exhibits high computational efficiency because of its minimal, single-pass
training requirements combined with flexible and contextual memory
representation.
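The graph-based memory mechanism described above can be illustrated with a toy sketch. All class names, update rules, and constants here are illustrative assumptions, not the paper's implementation: continuous cell states propagate over an undirected graph, and the memory function lives entirely in edge weights, which a coincidence-detection rule creates and modulates under reward or penalty.

```python
class DynamicGraphCA:
    def __init__(self, n_cells):
        self.state = [0.0] * n_cells
        self.edges = {}  # (i, j) with i < j -> weight: the only "memory"

    def _key(self, i, j):
        return (min(i, j), max(i, j))

    def modulate(self, reward, lr=0.1, threshold=0.5):
        """Coincidence detection: every pair of co-active cells gains
        (reward > 0) or loses (reward < 0) edge weight."""
        active = [i for i, s in enumerate(self.state) if s > threshold]
        for a in range(len(active)):
            for b in range(a + 1, len(active)):
                k = self._key(active[a], active[b])
                self.edges[k] = self.edges.get(k, 0.0) + lr * reward

    def step(self, decay=0.9):
        """One recursive propagation step: states decay, then flow to
        neighbors along weighted edges, clamped to [0, 1]."""
        new = [s * decay for s in self.state]
        for (i, j), w in self.edges.items():
            new[i] += w * self.state[j]
            new[j] += w * self.state[i]
        self.state = [max(0.0, min(1.0, s)) for s in new]

ca = DynamicGraphCA(5)
ca.state[0] = ca.state[1] = 1.0
ca.modulate(reward=1.0)  # co-activity writes a 0-1 edge into memory
ca.step()
```

After a single co-activation, the 0-1 edge sustains activity between those cells on the next step, which is the single-pass "memory" the abstract refers to.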
|
2501.06193
|
A Novel Task-Driven Method with Evolvable Interactive Agents Using Event
Trees for Enhanced Emergency Decision Support
|
cs.AI cs.CL
|
As climate change and other global challenges increase the likelihood of
unforeseen emergencies, the limitations of human-driven strategies in critical
situations become more pronounced. Inadequate pre-established emergency plans
can lead operators to become overwhelmed during complex systems malfunctions.
This study addresses the urgent need for agile decision-making in response to
various unforeseen incidents through a novel approach, EvoTaskTree (a
task-driven method with evolvable interactive agents using event trees for
emergency decision support). This advanced approach integrates two types of
agents powered by large language models (LLMs): task executors, responsible for
executing critical procedures, and task validators, ensuring the efficacy of
those actions. By leveraging insights from event tree analysis, our framework
encompasses three crucial tasks: initiating event subevent analysis, event tree
header event analysis, and decision recommendations. The agents learn from both
successful and unsuccessful responses from these tasks. Finally, we use nuclear
power plants as a demonstration of a safety-critical system. Our findings
indicate that the designed agents are not only effective but also outperform
existing approaches, achieving an impressive accuracy rate of up to 100% in
processing previously unencountered incident scenarios. This paper
demonstrates that EvoTaskTree significantly enhances the rapid formulation of
emergency decision-making.
|
2501.06196
|
How Do Artificial Intelligences Think? The Three Mathematico-Cognitive
Factors of Categorical Segmentation Operated by Synthetic Neurons
|
q-bio.NC cs.AI cs.NE
|
How do the synthetic neurons in language models create "thought categories"
to segment and analyze their informational environment? What are the cognitive
characteristics, at the very level of formal neurons, of this artificial
categorical thought? Based on the mathematical nature of algebraic operations
inherent to neuronal aggregation functions, we attempt to identify
mathematico-cognitive factors that genetically shape the categorical
reconstruction of the informational world faced by artificial cognition. This
study explores these concepts through the notions of priming, attention, and
categorical phasing.
|
2501.06201
|
A Novel Method for Pignistic Information Fusion in the View of Z-number
|
cs.AI
|
How to properly fuse information from complex sources is still an open
problem, and many methods have been proposed to fuse intricate information
effectively. Among them, Dempster-Shafer evidence theory (DSET) is a
representative approach that is widely used to handle uncertain information.
Based on DSET, this paper proposes a new method for fusing information from
different sources that combines the pignistic transformation with Z-numbers;
it handles distinct informational situations and maintains high accuracy in
producing rational and correct judgments about actual situations. To
illustrate the superiority of the proposed method, numerical examples and an
application are provided to verify its validity and robustness.
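For context, the classical pignistic transformation from Dempster-Shafer theory can be sketched as follows: a mass function over subsets of the frame of discernment is converted to a probability by splitting each focal element's mass equally among its members. The paper's Z-number extension is not reproduced here.

```python
def pignistic(mass):
    """mass: dict mapping frozenset focal elements -> belief mass.
    Returns the pignistic probability BetP over singletons."""
    betp = {}
    for focal, m in mass.items():
        if not focal:
            continue  # mass on the empty set is ignored in this sketch
        share = m / len(focal)  # split mass equally among members
        for x in focal:
            betp[x] = betp.get(x, 0.0) + share
    return betp

m = {frozenset({"a"}): 0.5, frozenset({"a", "b"}): 0.5}
p = pignistic(m)  # {"a": 0.75, "b": 0.25}
```

The ambiguous mass on {a, b} is shared equally, so "a" receives 0.5 + 0.25 = 0.75.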
|
2501.06205
|
Leveraging Edge Intelligence and LLMs to Advance 6G-Enabled Internet of
Automated Defense Vehicles
|
cs.NI cs.AI cs.CL
|
The evolution of Artificial Intelligence (AI) and its subset Deep Learning
(DL), has profoundly impacted numerous domains, including autonomous driving.
The integration of autonomous driving in military settings reduces human
casualties and enables precise and safe execution of missions in hazardous
environments while allowing for reliable logistics support without the risks
associated with fatigue-related errors. However, relying on autonomous driving
solely requires an advanced decision-making model that is adaptable and optimum
in any situation. Considering the presence of numerous interconnected
autonomous vehicles in mission-critical scenarios, Ultra-Reliable Low Latency
Communication (URLLC) is vital for ensuring seamless coordination, real-time
data exchange, and instantaneous response to dynamic driving environments. The
advent of 6G strengthens the Internet of Automated Defense Vehicles (IoADV)
concept within the realm of Internet of Military Defense Things (IoMDT) by
enabling robust connectivity, crucial for real-time data exchange, advanced
navigation, and enhanced safety features through IoADV interactions. On the
other hand, a critical advancement in this space is using pre-trained
Generative Large Language Models (LLMs) for decision-making and communication
optimization for autonomous driving. Hence, this work presents opportunities
and challenges with a vision of realizing the full potential of these
technologies in critical defense applications, especially through the
advancement of IoADV and its role in enhancing autonomous military operations.
|
2501.06208
|
Enhancing AI Safety Through the Fusion of Low Rank Adapters
|
cs.CL
|
Instruction fine-tuning of large language models (LLMs) is a powerful method
for improving task-specific performance, but it can inadvertently lead to a
phenomenon where models generate harmful responses when faced with malicious
prompts. In this paper, we explore Low-Rank Adapter Fusion (LoRA) as a means to
mitigate these risks while preserving the model's ability to handle diverse
instructions effectively. Through an extensive comparative analysis against
established baselines using recognized benchmark datasets, we demonstrate a
42\% reduction in the harmfulness rate by leveraging LoRA fusion between a task
adapter and a safety adapter, the latter of which is specifically trained on
our safety dataset. However, we also observe exaggerated safety behaviour,
where the model rejects safe prompts that closely resemble unsafe ones.
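The fusion idea can be sketched in a few lines. Names, shapes, and mixing weights below are illustrative assumptions, not the paper's exact method: two LoRA adapters, a task adapter and a safety adapter, are fused by adding their low-rank updates B @ A to the same frozen base weight, each scaled by a mixing coefficient.

```python
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in, rank = 8, 8, 2

# Frozen pretrained weight plus two independently trained LoRA pairs.
W_base = rng.standard_normal((d_out, d_in))
A_task = rng.standard_normal((rank, d_in))
B_task = rng.standard_normal((d_out, rank))
A_safe = rng.standard_normal((rank, d_in))
B_safe = rng.standard_normal((d_out, rank))

def fuse(w_task=1.0, w_safe=1.0):
    """Merge both low-rank updates into a single effective weight."""
    return W_base + w_task * (B_task @ A_task) + w_safe * (B_safe @ A_safe)

W_merged = fuse(w_task=1.0, w_safe=1.0)
```

Because each update is rank-2, the merged delta has rank at most 4, so fusion adds no parameters at inference time; only the mixing coefficients trade off task skill against safety behaviour.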
|
2501.06210
|
Applications of natural language processing in aviation safety: A review
and qualitative analysis
|
cs.CL cs.LG
|
This study explores using Natural Language Processing in aviation safety,
focusing on machine learning algorithms to enhance safety measures. As of May
2024, a Scopus keyword search for "natural language processing" and "aviation
safety" returns 34 results. Analyzing these studies allows us to uncover
trends in the methodologies, findings and implications of NLP in aviation. Both
qualitative and quantitative tools have been used to investigate the current
state of literature on NLP for aviation safety. The qualitative analysis
summarises the research motivations, objectives, and outcomes, showing how NLP
can be utilized to help identify critical safety issues and improve aviation
safety. This study also identifies research gaps and suggests areas for future
exploration, providing practical recommendations for the aviation industry. We
discuss challenges in implementing NLP in aviation safety, such as the need for
large, annotated datasets, and the difficulty in interpreting complex models.
We propose solutions like active learning for data annotation and explainable
AI for model interpretation. Case studies demonstrate the successful
application of NLP in improving aviation safety, highlighting its potential to
make aviation safer and more efficient.
|
2501.06211
|
FLAME: Financial Large-Language Model Assessment and Metrics Evaluation
|
cs.CL cs.AI cs.CE
|
LLMs have revolutionized NLP and demonstrated potential across diverse
domains. More and more financial LLMs have been introduced for finance-specific
tasks, yet comprehensively assessing their value is still challenging. In this
paper, we introduce FLAME, a comprehensive evaluation system for financial
LLMs in Chinese, which includes two core evaluation benchmarks: FLAME-Cer and
FLAME-Sce. FLAME-Cer covers 14 types of authoritative financial certifications,
including CPA, CFA, and FRM, with a total of approximately 16,000 carefully
selected questions. All questions have been manually reviewed to ensure
accuracy and representativeness. FLAME-Sce consists of 10 primary core
financial business scenarios, 21 secondary financial business scenarios, and a
comprehensive evaluation set of nearly 100 tertiary financial application
tasks. We evaluate 6 representative LLMs, including GPT-4o, GLM-4, ERNIE-4.0,
Qwen2.5, XuanYuan3, and the latest Baichuan4-Finance, revealing that
Baichuan4-Finance outperforms the other LLMs in most tasks. By establishing a
comprehensive and professional evaluation system, FLAME facilitates the
advancement of financial LLMs in Chinese contexts. Instructions for
participating in the evaluation are available on GitHub:
https://github.com/FLAME-ruc/FLAME.
|
2501.06214
|
Path Space Partitioning and Guided Image Sampling for MCMC
|
cs.CV cs.GR
|
Rendering algorithms typically integrate light paths over path space.
However, integrating over this one unified space is not necessarily the most
efficient approach, and we show that partitioning path space and integrating
each of these partitioned spaces with a separate estimator can have advantages.
We propose an approach for partitioning path space based on analyzing paths
from a standard Monte Carlo estimator and integrating these partitioned path
spaces using a Markov Chain Monte Carlo (MCMC) estimator. This also means that
integration happens within a sparser subset of path space, so we propose the
use of guided proposal distributions in image space to improve efficiency. We
show that our method improves image quality over other MCMC integration
approaches at the same number of samples.
|
2501.06215
|
Fitting Different Interactive Information: Joint Classification of
Emotion and Intention
|
cs.CV cs.CL cs.LG cs.MM eess.AS
|
This paper is the first-place solution for ICASSP MEIJU@2025 Track I, which
focuses on low-resource multimodal emotion and intention recognition. Two
points are key to this competition: effectively utilizing a large amount of
unlabeled data, and ensuring that tasks of different difficulty levels promote
each other during the interaction stage. In this paper, a model trained on
labeled data generates pseudo-labels, and samples with high-confidence
predictions are selected together with their labels to alleviate the
low-resource problem. At the same time, we exploit an experimental finding
that intention recognition is comparatively easy to represent, letting it and
emotion recognition promote each other under different attention heads, and
achieve higher intention-recognition performance through fusion. Finally, with
the refined data we achieve a score of 0.5532 on the test set and win the
championship of the track.
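The confidence-based pseudo-labeling step described above can be sketched as follows (a minimal illustration; the threshold, shapes, and function name are assumptions): a model trained on labeled data scores unlabeled samples, and only predictions above a confidence threshold are kept as pseudo-labels for the next training round.

```python
import numpy as np

def select_pseudo_labels(probs, threshold=0.9):
    """probs: (n_samples, n_classes) softmax outputs on unlabeled data.
    Returns (indices, labels) of samples whose top-class confidence
    meets the threshold."""
    confidence = probs.max(axis=1)
    labels = probs.argmax(axis=1)
    keep = confidence >= threshold
    return np.flatnonzero(keep), labels[keep]

probs = np.array([[0.95, 0.05],   # confident -> kept
                  [0.60, 0.40],   # uncertain -> dropped
                  [0.05, 0.95]])  # confident -> kept
idx, lbl = select_pseudo_labels(probs)
```

Only the two confident samples survive, which is how the method grows its effective training set without admitting noisy labels.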
|
2501.06216
|
Understanding colors of Dufaycolor: Can we recover them using historical
colorimetric and spectral data?
|
cs.CV cs.GR
|
Dufaycolor, an additive color photography process produced from 1935 to the
late 1950s, represents one of the most advanced iterations of this technique.
This paper presents ongoing research and development of an open-source
Color-Screen tool designed to reconstruct the original colors of additive color
photographs. We discuss the incorporation of historical measurements of dyes
used in the production of the color-screen filter (r\'eseau) to achieve
accurate color recovery.
|
2501.06218
|
Dissecting Bit-Level Scaling Laws in Quantizing Vision Generative Models
|
cs.CV
|
Vision generative models have recently made significant advancements along
two primary paradigms: diffusion-style and language-style, both of which have
demonstrated excellent scaling laws. Quantization is crucial for efficiently
deploying these models, as it reduces memory and computation costs. In this
work, we systematically investigate the impact of quantization on these two
paradigms. Surprisingly, despite achieving comparable performance in full
precision, language-style models consistently outperform diffusion-style models
across various quantization settings. This observation suggests that
language-style models have superior bit-level scaling laws, offering a better
tradeoff between model quality and total bits. To dissect this phenomenon, we
conduct extensive experiments and find that the primary reason is the discrete
representation space of language-style models, which is more tolerant of
information loss during quantization. Furthermore, our analysis indicates that
improving the bit-level scaling law of quantized vision generative models is
challenging, with model distillation identified as a highly effective approach.
Specifically, we propose TopKLD to optimize the transfer of distilled knowledge
by balancing ``implicit knowledge'' and ``explicit knowledge'' during the
distillation process. This approach elevates the bit-level scaling laws by one
level across both integer and floating-point quantization settings.
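A hedged sketch of the flavor of a top-k distillation loss (the exact TopKLD formulation is not reproduced here, and this decomposition is an assumption): a KL divergence between teacher and student token distributions computed over the teacher's top-k tokens, with the remaining probability mass lumped into a single tail bucket, roughly separating "explicit" head knowledge from "implicit" tail knowledge.

```python
import numpy as np

def topk_kl(teacher, student, k=2, eps=1e-12):
    """KL(teacher || student) over the teacher's top-k tokens plus one
    aggregated tail term covering all remaining tokens."""
    order = np.argsort(teacher)[::-1]
    head, tail = order[:k], order[k:]
    p = np.append(teacher[head], teacher[tail].sum())
    q = np.append(student[head], student[tail].sum())
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

t = np.array([0.70, 0.20, 0.05, 0.05])  # teacher next-token distribution
s = np.array([0.60, 0.25, 0.10, 0.05])  # student next-token distribution
loss = topk_kl(t, s, k=2)
```

Since both the head-plus-tail vectors remain valid distributions, the loss keeps KL's non-negativity and vanishes when student matches teacher.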
|
2501.06219
|
WhACC: Whisker Automatic Contact Classifier with Expert Human-Level
Performance
|
cs.CV cs.LG
|
The rodent vibrissal system is pivotal in advancing neuroscience research,
particularly for studies of cortical plasticity, learning, decision-making,
sensory encoding, and sensorimotor integration. Despite the advantages,
curating touch events is labor intensive and often requires >3 hours per
million video frames, even after leveraging automated tools like the Janelia
Whisker Tracker. We address this limitation by introducing Whisker Automatic
Contact Classifier (WhACC), a python package designed to identify touch periods
from high-speed videos of head-fixed behaving rodents with human-level
performance. WhACC leverages ResNet50V2 for feature extraction, combined with
LightGBM for classification. Performance is assessed against three expert human
curators on over one million frames; pairwise touch classification agrees on
99.5% of video frames, equal to between-human agreement. Finally, we offer a
custom retraining interface to allow model customization on a small subset of
data, which was validated on four million frames across 16 single-unit
electrophysiology recordings. Including this retraining step, we reduce human
hours required to curate a 100 million frame dataset from ~333 hours to ~6
hours.
|
2501.06220
|
Powerful Design of Small Vision Transformer on CIFAR10
|
cs.LG cs.CV
|
Vision Transformers (ViTs) have demonstrated remarkable success on
large-scale datasets, but their performance on smaller datasets often falls
short of convolutional neural networks (CNNs). This paper explores the design
and optimization of Tiny ViTs for small datasets, using CIFAR-10 as a
benchmark. We systematically evaluate the impact of data augmentation, patch
token initialization, low-rank compression, and multi-class token strategies on
model performance. Our experiments reveal that low-rank compression of queries
in Multi-Head Latent Attention (MLA) incurs minimal performance loss,
indicating redundancy in ViTs. Additionally, introducing multiple CLS tokens
improves global representation capacity, boosting accuracy. These findings
provide a comprehensive framework for optimizing Tiny ViTs, offering practical
insights for efficient and effective designs. Code is available at
https://github.com/erow/PoorViTs.
|
2501.06221
|
Optimizing Supply Chain Networks with the Power of Graph Neural Networks
|
cs.LG econ.GN q-fin.EC
|
Graph Neural Networks (GNNs) have emerged as transformative tools for
modeling complex relational data, offering unprecedented capabilities in tasks
like forecasting and optimization. This study investigates the application of
GNNs to demand forecasting within supply chain networks using the SupplyGraph
dataset, a benchmark for graph-based supply chain analysis. By leveraging
advanced GNN methodologies, we enhance the accuracy of forecasting models,
uncover latent dependencies, and address temporal complexities inherent in
supply chain operations. Comparative analyses demonstrate that GNN-based models
significantly outperform traditional approaches, including Multilayer
Perceptrons (MLPs) and Graph Convolutional Networks (GCNs), particularly in
single-node demand forecasting tasks. The integration of graph representation
learning with temporal data highlights GNNs' potential to revolutionize
predictive capabilities for inventory management, production scheduling, and
logistics optimization. This work underscores the pivotal role of forecasting
in supply chain management and provides a robust framework for advancing
research and applications in this domain.
|
2501.06222
|
Can Explainable AI Assess Personalized Health Risks from Indoor Air
Pollution?
|
cs.LG
|
While the literature acknowledges the effects of outdoor air pollution, it
inadequately addresses the impacts of indoor air pollution. Despite the daily
health risks, existing research has primarily focused on monitoring and lacks
accuracy in pinpointing indoor pollution sources. In our work, we thoroughly
investigated the influence of indoor activities on pollution levels. A survey
of 143 participants revealed limited awareness of indoor air pollution.
Leveraging 65 days of diverse data encompassing activities like incense stick
usage, indoor smoking, inadequately ventilated cooking, excessive AC usage, and
accidental paper burning, we developed a comprehensive monitoring system. We
identify pollutant sources and effects with high precision through clustering
analysis and interpretability models (LIME and SHAP). Our method integrates
Decision Trees, Random Forest, Naive Bayes, and SVM models, excelling at 99.8%
accuracy with Decision Trees. Continuous 24-hour data allows personalized
assessments for targeted pollution reduction strategies, achieving 91% accuracy
in predicting activities and pollution exposure.
|
2501.06223
|
Interpretable Auto Window Setting for Deep-Learning-Based CT Analysis
|
eess.IV cs.CV cs.LG
|
Whether during the early days of popularization or in the present, the window
setting in Computed Tomography (CT) has always been an indispensable part of
the CT analysis process. Although research has investigated the capabilities of
CT multi-window fusion in enhancing neural networks, there remains a paucity of
domain-invariant, intuitively interpretable methodologies for Auto Window
Setting. In this work, we propose a plug-and-play module derived from the
Tanh activation function, which is compatible with mainstream deep learning
architectures. Starting from the physical principles of CT, we adhere to the
principle of interpretability to ensure the module's reliability for medical
implementations. The domain-invariant design facilitates observation of the
preference decisions rendered by the adaptive mechanism from a clinically
intuitive perspective. This enables the proposed method to be understood not
only by experts in neural networks but also garners higher trust from
clinicians. We confirm the effectiveness of the proposed method on multiple
open-source datasets, yielding 10%~200% Dice improvements on hard segmentation
targets.
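The core idea can be sketched in a few lines. The parameter names and exact parameterization below are assumptions, not the paper's code: raw CT intensities in Hounsfield units are squashed into [-1, 1] by a tanh with a window center and width, a differentiable analog of the clinician's manual window/level adjustment that a network could learn end-to-end.

```python
import numpy as np

def tanh_window(hu, center=40.0, width=400.0):
    """Soft CT windowing: values near `center` pass through roughly
    linearly, while values far outside the window saturate smoothly.
    Defaults approximate a common soft-tissue window."""
    return np.tanh((hu - center) / (width / 2.0))

hu = np.array([-1000.0, 40.0, 1000.0])  # air, soft tissue, dense bone
out = tanh_window(hu)
```

Because tanh is smooth, gradients with respect to `center` and `width` are well defined everywhere, unlike the hard clipping of a conventional window setting.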
|
2501.06224
|
Detection, Retrieval, and Explanation Unified: A Violence Detection
System Based on Knowledge Graphs and GAT
|
cs.CV cs.AI
|
Recently, violence detection systems developed using unified multimodal
models have achieved significant success and attracted widespread attention.
However, most of these systems face two critical challenges: the lack of
interpretability as black-box models and limited functionality, offering only
classification or retrieval capabilities. To address these challenges, this
paper proposes a novel interpretable violence detection system, termed the
Three-in-One (TIO) System. The TIO system integrates knowledge graphs (KG) and
graph attention networks (GAT) to provide three core functionalities:
detection, retrieval, and explanation. Specifically, the system processes each
video frame along with text descriptions generated by a large language model
(LLM) for videos containing potential violent behavior. It employs ImageBind to
generate high-dimensional embeddings for constructing a knowledge graph, uses
GAT for reasoning, and applies lightweight time series modules to extract video
embedding features. The final step connects a classifier and retriever for
multi-functional outputs. The interpretability of KG enables the system to
verify the reasoning process behind each output. Additionally, the paper
introduces several lightweight methods to reduce the resource consumption of
the TIO system and enhance its efficiency. Extensive experiments conducted on
the XD-Violence and UCF-Crime datasets validate the effectiveness of the
proposed system. A case study further reveals an intriguing phenomenon: as the
number of bystanders increases, the occurrence of violent behavior tends to
decrease.
|
2501.06225
|
A Distributed Hybrid Quantum Convolutional Neural Network for Medical
Image Classification
|
cs.CV cs.LG
|
Medical images are characterized by intricate and complex features, requiring
interpretation by physicians with medical knowledge and experience. Classical
neural networks can reduce the workload of physicians, but can only handle
these complex features to a limited extent. Theoretically, quantum computing
can explore a broader parameter space with fewer parameters, but it is
currently limited by the constraints of quantum hardware. Considering these
factors, we propose a distributed hybrid quantum convolutional neural network
based on quantum circuit splitting. This model leverages the advantages of
quantum computing to effectively capture the complex features of medical
images, enabling efficient classification even in resource-constrained
environments. Our model employs a quantum convolutional neural network (QCNN)
to extract high-dimensional features from medical images, thereby enhancing the
model's expressive capability. By integrating distributed techniques based on
quantum circuit splitting, the 8-qubit QCNN can be reconstructed using only 5
qubits. Experimental results demonstrate that our model achieves strong
performance across 3 datasets for both binary and multiclass classification
tasks. Furthermore, compared to recent technologies, our model achieves
superior performance with fewer parameters, and experimental results validate
the effectiveness of our model.
|
2501.06226
|
asanAI: In-Browser, No-Code, Offline-First Machine Learning Toolkit
|
cs.LG cs.AI cs.SE
|
Machine learning (ML) has become crucial in modern life, with growing
interest from researchers and the public. Despite its potential, a significant
entry barrier prevents widespread adoption, making it challenging for
non-experts to understand and implement ML techniques. The increasing desire to
leverage ML is counterbalanced by its technical complexity, creating a gap
between potential and practical application. This work introduces asanAI, an
offline-first, open-source, no-code machine learning toolkit designed for users
of all skill levels. It allows individuals to design, debug, train, and test ML
models directly in a web browser, eliminating the need for software
installations and coding. The toolkit runs on any device with a modern web
browser, including smartphones, and ensures user privacy through local
computations while utilizing WebGL for enhanced GPU performance. Users can
quickly experiment with neural networks and train custom models using various
data sources, supported by intuitive visualizations of network structures and
data flows. asanAI simplifies the teaching of ML concepts in educational
settings and is released under an open-source MIT license, encouraging
modifications. It also supports exporting models in industry-ready formats,
empowering a diverse range of users to effectively learn and apply machine
learning in their projects. The proposed toolkit is successfully utilized by
researchers of ScaDS.AI to swiftly draft and test machine learning ideas, by
trainers to effectively educate enthusiasts, and by teachers to introduce
contemporary ML topics in classrooms with minimal effort and high clarity.
|
2501.06227
|
Generating and Detecting Various Types of Fake Image and Audio Content:
A Review of Modern Deep Learning Technologies and Tools
|
cs.CR cs.LG
|
This paper reviews the state-of-the-art in deepfake generation and detection,
focusing on modern deep learning technologies and tools based on the latest
scientific advancements. The rise of deepfakes, leveraging techniques like
Variational Autoencoders (VAEs), Generative Adversarial Networks (GANs),
Diffusion models and other generative models, presents significant threats to
privacy, security, and democracy. This fake media can deceive individuals,
discredit real people and organizations, facilitate blackmail, and even
threaten the integrity of legal, political, and social systems. Therefore,
finding appropriate solutions to counter the potential threats posed by this
technology is essential. We explore various deepfake methods, including face
swapping, voice conversion, reenactment and lip synchronization, highlighting
their applications in both benign and malicious contexts. The review critically
examines the ongoing "arms race" between deepfake generation and detection,
analyzing the challenges in identifying manipulated contents. By examining
current methods and highlighting future research directions, this paper
contributes to a crucial understanding of this rapidly evolving field and the
urgent need for robust detection strategies to counter the misuse of this
powerful technology. While focusing primarily on audio, image, and video
domains, this study allows the reader to easily grasp the latest advancements
in deepfake generation and detection.
|
2501.06229
|
Open-Source Manually Annotated Vocal Tract Database for Automatic
Segmentation from 3D MRI Using Deep Learning: Benchmarking 2D and 3D
Convolutional and Transformer Networks
|
cs.CV cs.SD eess.AS
|
Accurate segmentation of the vocal tract from magnetic resonance imaging
(MRI) data is essential for various voice and speech applications. Manual
segmentation is time intensive and susceptible to errors. This study aimed to
evaluate the efficacy of deep learning algorithms for automatic vocal tract
segmentation from 3D MRI.
|
2501.06230
|
BEN: Using Confidence-Guided Matting for Dichotomous Image Segmentation
|
cs.CV eess.IV
|
Current approaches to dichotomous image segmentation (DIS) treat image
matting and object segmentation as fundamentally different tasks. As
improvements in image segmentation become increasingly challenging to achieve,
combining image matting and grayscale segmentation techniques offers promising
new directions for architectural innovation. Inspired by the possibility of
aligning these two model tasks, we propose a new architectural approach for DIS
called Confidence-Guided Matting (CGM). We created the first CGM model called
Background Erase Network (BEN). BEN comprises two components: BEN Base
for initial segmentation and BEN Refiner for confidence refinement. Our
approach achieves substantial improvements over current state-of-the-art
methods on the DIS5K validation dataset, demonstrating that matting-based
refinement can significantly enhance segmentation quality. This work opens new
possibilities for cross-pollination between matting and segmentation techniques
in computer vision.
|
2501.06231
|
Sustainable and Intelligent Public Facility Failure Management System
Based on Large Language Models
|
cs.AI
|
This paper presents a new Large Language Model (LLM)-based Smart Device
Management framework, a pioneering approach designed to address the intricate
challenges of managing intelligent devices within public facilities, with a
particular emphasis on applications to libraries. Our framework leverages
state-of-the-art LLMs to analyze and predict device failures, thereby enhancing
operational efficiency and reliability. Through prototype validation in
real-world library settings, we demonstrate the framework's practical
applicability and its capacity to significantly reduce budgetary constraints on
public facilities. The advanced and innovative nature of our model is evident
from its successful implementation in prototype testing. We plan to extend the
framework's scope to include a wider array of public facilities and to
integrate it with cutting-edge cybersecurity technologies, such as Internet of
Things (IoT) security and machine learning algorithms for threat detection and
response. This will result in a comprehensive and proactive maintenance system
that not only bolsters the security of intelligent devices but also utilizes
machine learning for automated analysis and real-time threat mitigation. By
incorporating these advanced cybersecurity elements, our framework will be
well-positioned to tackle the dynamic challenges of modern public
infrastructure, ensuring robust protection against potential threats and
enabling facilities to anticipate and prevent failures, leading to substantial
cost savings and enhanced service quality.
|
2501.06232
|
An Interpretable ML-based Model for Predicting p-y Curves of Monopile
Foundations in Sand
|
cs.LG cond-mat.soft
|
Predicting the lateral pile response is challenging due to the complexity of
pile-soil interactions. Machine learning (ML) techniques have gained
considerable attention for their effectiveness in non-linear analysis and
prediction. This study develops an interpretable ML-based model for predicting
p-y curves of monopile foundations. An XGBoost model was trained using a
database compiled from existing research. The results demonstrate that the
model achieves superior predictive accuracy. Shapley Additive Explanations
(SHAP) was employed to enhance interpretability. The SHAP value distributions
for each variable demonstrate strong alignment with established theoretical
knowledge on factors affecting the lateral response of pile foundations.
|
2501.06233
|
Mechanics and Design of Metastructured Auxetic Patches with Bio-inspired
Materials
|
cs.LG cond-mat.mtrl-sci
|
Metastructured auxetic patches, characterized by negative Poisson's ratios,
offer unique mechanical properties that closely resemble the behavior of human
tissues and organs. As a result, these patches have gained significant
attention for their potential applications in organ repair and tissue
regeneration. This study focuses on neural networks-based computational
modeling of auxetic patches with a sinusoidal metastructure fabricated from
silk fibroin, a bio-inspired material known for its biocompatibility and
strength. The primary objective of this research is to introduce a novel,
data-driven framework for patch design. To achieve this, we conducted
experimental fabrication and mechanical testing to determine material
properties and validate the corresponding finite element models. Finite element
simulations were then employed to generate the necessary data, while greedy
sampling, an active learning technique, was utilized to reduce the
computational cost associated with data labeling. Two neural networks were
trained to accurately predict Poisson's ratios and stresses, respectively, for
strains up to 15%. Both models achieved $R^2$ scores exceeding 0.995, which
indicates highly reliable predictions. Building on this, we developed a neural
network-based design model capable of tailoring patch designs to achieve
specific mechanical properties. This model demonstrated superior performance
when compared to traditional optimization methods, such as genetic algorithms,
by providing more efficient and precise design solutions. The proposed
framework represents a significant advancement in the design of bio-inspired
metastructures for medical applications, paving the way for future innovations
in tissue engineering and regenerative medicine.
|
2501.06235
|
NextStop: An Improved Tracker For Panoptic LIDAR Segmentation Data
|
cs.CV cs.AI cs.RO
|
4D panoptic LiDAR segmentation is essential for scene understanding in
autonomous driving and robotics, combining semantic and instance segmentation
with temporal consistency. Current methods, like 4D-PLS and 4D-STOP, use a
tracking-by-detection methodology, employing deep learning networks to perform
semantic and instance segmentation on each frame. To maintain temporal
consistency, large-size instances detected in the current frame are compared
and associated with instances within a temporal window that includes the
current and preceding frames. However, their reliance on short-term instance
detection, lack of motion estimation, and exclusion of small-sized instances
lead to frequent identity switches and reduced tracking performance. We address
these issues with the NextStop tracker, which integrates Kalman filter-based
motion estimation, data association, and lifespan management, along with a
tracklet state concept to improve prioritization. Evaluated using the LiDAR
Segmentation and Tracking Quality (LSTQ) metric on the SemanticKITTI validation
set, NextStop demonstrated enhanced tracking performance, particularly for
small-sized objects like people and bicyclists, with fewer ID switches, earlier
tracking initiation, and improved reliability in complex environments. The
source code is available at https://github.com/AIROTAU/NextStopTracker
|
2501.06236
|
Data-Driven Radio Propagation Modeling using Graph Neural Networks
|
cs.LG cs.AI cs.NI
|
Modeling radio propagation is essential for wireless network design and
performance optimization. Traditional methods rely on physics models of radio
propagation, which can be inaccurate or inflexible. In this work, we propose
using graph neural networks to learn radio propagation behaviors directly from
real-world network data. Our approach converts the radio propagation
environment into a graph representation, with nodes corresponding to locations
and edges representing spatial and ray-tracing relationships between locations.
The graph is generated by converting images of the environment into a graph
structure, with specific relationships between nodes. The model is trained on
this graph representation, using sensor measurements as target data.
We demonstrate that the graph neural network, which learns to predict radio
propagation directly from data, achieves competitive performance compared to
traditional heuristic models. This data-driven approach outperforms classic
numerical solvers in terms of both speed and accuracy. To the best of our
knowledge, we are the first to apply graph neural networks to real-world radio
propagation data to generate coverage maps, enabling generative models of
signal propagation with point measurements only.
|
2501.06237
|
Forecasting Anonymized Electricity Load Profiles
|
cs.CR cs.AI cs.LG
|
In the evolving landscape of data privacy, the anonymization of electric load
profiles has become a critical issue, especially with the enforcement of the
General Data Protection Regulation (GDPR) in Europe. These electric load
profiles, which are essential datasets in the energy industry, are classified
as personal behavioral data, necessitating stringent protective measures. This
article explores the implications of this classification, the importance of
data anonymization, and the potential of forecasting using microaggregated
data. The findings underscore that effective anonymization techniques, such as
microaggregation, do not compromise the performance of forecasting models under
certain conditions (i.e., when forecasting at an aggregated level). At such an
aggregated level, microaggregated data maintain high utility, with minimal
impact on forecasting accuracy. The implications for the energy sector are profound,
suggesting that privacy-preserving data practices can be integrated into smart
metering technology applications without hindering their effectiveness.
|
2501.06238
|
Multi-field Visualization: Trait design and trait-induced merge trees
|
cs.LG cs.GR
|
Feature level sets (FLS) have shown significant potential in the analysis of
multi-field data by using traits defined in attribute space to specify features
in the domain. In this work, we address key challenges in the practical use of
FLS: trait design and feature selection for rendering. To simplify trait
design, we propose a Cartesian decomposition of traits into simpler components,
making the process more intuitive and computationally efficient. Additionally,
we utilize dictionary learning results to automatically suggest point traits.
To enhance feature selection, we introduce trait-induced merge trees (TIMTs), a
generalization of merge trees for feature level sets, aimed at topologically
analyzing tensor fields or general multi-variate data. The leaves in the TIMT
represent areas in the input data that are closest to the defined trait,
thereby most closely resembling the defined feature. This merge tree provides a
hierarchy of features, enabling the querying of the most relevant and
persistent features. Our method includes various query techniques for the tree,
allowing the highlighting of different aspects. We demonstrate the
cross-application capabilities of this approach through five case studies from
different domains.
|
2501.06239
|
Towards a scalable AI-driven framework for data-independent Cyber Threat
Intelligence Information Extraction
|
cs.CR cs.AI cs.CL
|
Cyber Threat Intelligence (CTI) is critical for mitigating threats to
organizations, governments, and institutions, yet the necessary data are often
dispersed across diverse formats. AI-driven solutions for CTI Information
Extraction (IE) typically depend on high-quality, annotated data, which are not
always available. This paper introduces 0-CTI, a scalable AI-based framework
designed for efficient CTI Information Extraction. Leveraging advanced Natural
Language Processing (NLP) techniques, particularly Transformer-based
architectures, the proposed system processes complete text sequences of CTI
reports to extract a cyber ontology of named entities and their relationships.
Our contribution is the development of 0-CTI, the first modular framework for
CTI Information Extraction that supports both supervised and zero-shot
learning. Unlike existing state-of-the-art models that rely heavily on
annotated datasets, our system enables fully dataless operation through
zero-shot methods for both Entity and Relation Extraction, making it adaptable
to various data availability scenarios. Additionally, our supervised Entity
Extractor surpasses current state-of-the-art performance in cyber Entity
Extraction, highlighting the dual strength of the framework in both
low-resource and data-rich environments.
By aligning the system's outputs with the Structured Threat Information
Expression (STIX) format, a standard for information exchange in the
cybersecurity domain, 0-CTI standardizes extracted knowledge, enhancing
communication and collaboration in cybersecurity operations.
|