| id | title | categories | abstract |
|---|---|---|---|
2501.05729
|
ExPO: Explainable Phonetic Trait-Oriented Network for Speaker
Verification
|
cs.SD cs.AI eess.AS
|
In speaker verification, we use computational methods to verify whether an
utterance matches the identity of an enrolled speaker. This task is similar to
the manual task of forensic voice comparison, where linguistic analysis is
combined with auditory measurements to compare and evaluate voice samples.
Despite much success, we have yet to develop a speaker verification system that
offers explainable results comparable to those of manual forensic voice
comparison. In this paper, we propose the Explainable Phonetic Trait-Oriented
(ExPO) network, a novel approach that introduces the speaker's phonetic traits,
which describe the speaker's characteristics at the phonetic level, resembling
what forensic comparison does. ExPO not only generates utterance-level speaker
embeddings but also allows for fine-grained analysis and visualization of
phonetic traits, offering an explainable speaker verification process.
Furthermore, we investigate phonetic traits from within-speaker and
between-speaker variation perspectives to determine which trait is most
effective for speaker verification, marking an important step towards
explainable speaker verification. Our code is available at
https://github.com/mmmmayi/ExPO.
|
2501.05730
|
Element-wise Attention Is All You Need
|
cs.LG cs.AI
|
The self-attention (SA) mechanism has demonstrated superior performance
across various domains, yet it suffers from substantial complexity during both
training and inference. Next-generation architectures, which aim to retain the
competitive performance of SA while achieving low-cost inference and efficient
long-sequence training, primarily follow three approaches: linear attention,
linear RNNs, and state space models. Although these approaches achieve lower
complexity than SA, they all have built-in performance degradation factors,
such as diminished "spikiness" and compression of historical information. In
contrast, we propose a novel
element-wise attention mechanism, which uses the element-wise squared Euclidean
distance, instead of the dot product operation, to compute similarity and
approximates the quadratic complexity term $\exp(q_{ic}k_{jc})$ with a Taylor
polynomial. This design achieves remarkable efficiency: during training, the
element-wise attention has a complexity of $\mathcal{O}(tLD)$, making
long-sequence training both computationally and memory efficient, where $L$ is
the sequence length, $D$ is the feature dimension, and $t$ is the highest order
of the polynomial; during inference, it can be reformulated as recurrent neural
networks, achieving an inference complexity of $\mathcal{O}(tD)$. Furthermore,
the element-wise attention circumvents the performance degradation factors
present in these approaches and achieves performance comparable to SA in both
causal and non-causal forms.
|
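The similarity rule sketched in the abstract above, an element-wise squared Euclidean distance with the exponential replaced by a Taylor polynomial, can be illustrated as follows. This is a naive $\mathcal{O}(L^2D)$ sketch of the scoring rule only: the paper's $\mathcal{O}(tLD)$ training cost comes from a factorization of the polynomial that is not reproduced here, the Taylor expansion is applied to the whole similarity for simplicity, and all names are illustrative.

```python
import math
import numpy as np

def elementwise_attention(q, k, v, t=2):
    """Channel-wise attention scores from the negative squared difference
    -(q_ic - k_jc)^2, with exp approximated by a degree-t Taylor polynomial.
    Naive quadratic-time form; the linear-time factorization is omitted."""
    L, D = q.shape
    out = np.empty_like(v)
    for c in range(D):
        s = -(np.subtract.outer(q[:, c], k[:, c]) ** 2)   # (L, L) similarity
        # Taylor approximation of exp(s): sum_{n=0}^{t} s^n / n!
        w = sum(s ** n / math.factorial(n) for n in range(t + 1))
        w = w / w.sum(axis=1, keepdims=True)              # normalize over keys
        out[:, c] = w @ v[:, c]
    return out
```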
2501.05731
|
Diving Deep: Forecasting Sea Surface Temperatures and Anomalies
|
cs.LG physics.ao-ph stat.AP
|
This overview paper details the findings from the Diving Deep: Forecasting
Sea Surface Temperatures and Anomalies Challenge at the European Conference on
Machine Learning and Principles and Practice of Knowledge Discovery in
Databases (ECML PKDD) 2024. The challenge focused on the data-driven
predictability of global sea surface temperatures (SSTs), a key factor in
climate forecasting, ecosystem management, fisheries management, and climate
change monitoring. The challenge involved forecasting SST anomalies (SSTAs)
three months in advance using historical data and included a special task of
predicting SSTAs nine months ahead for the Baltic Sea. Participants utilized
various machine learning approaches to tackle the task, leveraging data from
ERA5. This paper discusses the methodologies employed, the results obtained,
and the lessons learned, offering insights into the future of climate-related
predictive modeling.
|
2501.05733
|
TB-Bench: Training and Testing Multi-Modal AI for Understanding
Spatio-Temporal Traffic Behaviors from Dashcam Images/Videos
|
cs.CV
|
The application of Multi-modal Large Language Models (MLLMs) in Autonomous
Driving (AD) faces significant challenges due to their limited training on
traffic-specific data and the absence of dedicated benchmarks for
spatiotemporal understanding. This study addresses these issues by proposing
TB-Bench, a comprehensive benchmark designed to evaluate MLLMs on understanding
traffic behaviors across eight perception tasks from ego-centric views. We also
introduce vision-language instruction tuning datasets, TB-100k and TB-250k,
along with simple yet effective baselines for the tasks. Through extensive
experiments, we show that existing MLLMs underperform in these tasks, with even
a powerful model like GPT-4o achieving less than 35% accuracy on average. In
contrast, when fine-tuned with TB-100k or TB-250k, our baseline models achieve
average accuracy up to 85%, significantly enhancing performance on the tasks.
Additionally, we demonstrate performance transfer by co-training TB-100k with
another traffic dataset, leading to improved performance on the latter.
Overall, this study represents a step forward by introducing a comprehensive
benchmark, high-quality datasets, and baselines, thus supporting the gradual
integration of MLLMs into the perception, prediction, and planning stages of
AD.
|
2501.05735
|
ELENA: Epigenetic Learning through Evolved Neural Adaptation
|
cs.NE cs.LG
|
Despite the success of metaheuristic algorithms in solving complex network
optimization problems, they often struggle with adaptation, especially in
dynamic or high-dimensional search spaces. Traditional approaches can become
stuck in local optima, leading to inefficient exploration and suboptimal
solutions. Most widely accepted advanced algorithms do well on either highly
complex or smaller search spaces, but not both, due to their lack of
adaptation. To address these limitations, we present ELENA (Epigenetic
Learning through Evolved Neural Adaptation), a new evolutionary framework that
incorporates epigenetic mechanisms to enhance the adaptability of the core
evolutionary approach. ELENA leverages a compressed representation of learning
parameters that is improved dynamically through epigenetic tags serving as
adaptive memory.
Three epigenetic tags (mutation resistance, crossover affinity, and stability
score) assist with guiding solution space search, facilitating a more
intelligent hypothesis landscape exploration. To assess the framework
performance, we conduct experiments on three critical network optimization
problems: the Traveling Salesman Problem (TSP), the Vehicle Routing Problem
(VRP), and the Maximum Clique Problem (MCP). Experiments indicate that ELENA
achieves competitive results, often surpassing state-of-the-art methods on
network optimization tasks.
|
2501.05744
|
LLVD: LSTM-based Explicit Motion Modeling in Latent Space for Blind
Video Denoising
|
cs.CV cs.LG
|
Video restoration plays a pivotal role in revitalizing degraded video content
by rectifying imperfections caused by various degradations introduced during
capturing (sensor noise, motion blur, etc.), saving/sharing (compression,
resizing, etc.) and editing. This paper introduces a novel algorithm designed
for scenarios where noise is introduced during video capture, aiming to enhance
the visual quality of videos by reducing unwanted noise artifacts. We propose
the Latent space LSTM Video Denoiser (LLVD), an end-to-end blind denoising
model. LLVD uniquely combines spatial and temporal feature extraction,
employing Long Short-Term Memory (LSTM) within the encoded feature domain. This
integration of LSTM layers is crucial for maintaining continuity and minimizing
flicker in the restored video. Moreover, processing frames in the encoded
feature domain significantly reduces computations, resulting in a very
lightweight architecture. LLVD's blind nature makes it versatile for real,
in-the-wild denoising scenarios where prior information about noise
characteristics is not available. Experiments reveal that LLVD demonstrates
excellent performance for both synthetic and captured noise. Specifically, LLVD
surpasses the current State-Of-The-Art (SOTA) in RAW denoising by 0.3dB, while
also achieving a 59\% reduction in computational complexity.
|
2501.05745
|
Covariate Dependent Mixture of Bayesian Networks
|
stat.ML cs.LG
|
Learning the structure of Bayesian networks from data provides insights into
underlying processes and the causal relationships that generate the data, but
its usefulness depends on the homogeneity of the data population, a condition
often violated in real-world applications. In such cases, using a single
network structure for inference can be misleading, as it may not capture
sub-population differences. To address this, we propose a novel approach of
modelling a mixture of Bayesian networks where component probabilities depend
on individual characteristics. Our method identifies both network structures
and demographic predictors of sub-population membership, aiding personalised
interventions. We evaluate our method through simulations and a youth mental
health case study, demonstrating its potential to improve tailored
interventions in health, education, and social policy.
|
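As an illustrative reading of "component probabilities depend on individual characteristics", one simple parameterization is a softmax over covariates. This is a hypothetical sketch, not the paper's actual model; `W` and `b` are assumed parameters.

```python
import numpy as np

def mixture_weights(x, W, b):
    """Covariate-dependent mixture weights: softmax(W x + b) gives one
    individual's probability of membership in each network component.
    x: (p,) covariates; W: (K, p) and b: (K,) for K components."""
    logits = W @ x + b
    z = np.exp(logits - logits.max())   # subtract max for numerical stability
    return z / z.sum()
```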
2501.05748
|
From Bit to Block: Decoding on Erasure Channels
|
cs.IT math.IT
|
We provide a general framework for bounding the block error threshold of a
linear code $C\subseteq \mathbb{F}_2^N$ over the erasure channel in terms of
its bit error threshold. Our approach relies on understanding the minimum
support weight of any $r$-dimensional subcode of $C$, for all small values of
$r$. As a proof of concept, we use our machinery to obtain a new proof of the
celebrated result that Reed-Muller codes achieve capacity on the erasure
channel with respect to block error probability.
|
2501.05749
|
Bridging Dialects: Translating Standard Bangla to Regional Variants
Using Neural Models
|
cs.CL
|
The Bangla language includes many regional dialects, adding to its cultural
richness. Translating standard Bangla into these regional dialects presents a
challenge due to significant variations in vocabulary, pronunciation, and
sentence structure across regions like Chittagong, Sylhet, Barishal, Noakhali,
and Mymensingh. These dialects, though vital to local identities, lack
representation in technological applications. This study addresses this gap by
translating standard Bangla into these dialects using neural machine
translation (NMT) models, including BanglaT5, mT5, and mBART50. The work is
motivated by the need to preserve linguistic diversity and improve
communication among dialect speakers. The models were fine-tuned using the
"Vashantor" dataset, containing 32,500 sentences across various dialects, and
evaluated through Character Error Rate (CER) and Word Error Rate (WER) metrics.
BanglaT5 demonstrated superior performance with a CER of 12.3% and WER of
15.7%, highlighting its effectiveness in capturing dialectal nuances. The
outcomes of this research contribute to the development of inclusive language
technologies that support regional dialects and promote linguistic diversity.
|
2501.05750
|
Semantic Mapping in Indoor Embodied AI -- A Comprehensive Survey and
Future Directions
|
cs.RO cs.CV
|
Intelligent embodied agents (e.g. robots) need to perform complex semantic
tasks in unfamiliar environments. Among many skills that the agents need to
possess, building and maintaining a semantic map of the environment is most
crucial in long-horizon tasks. A semantic map captures information about the
environment in a structured way, allowing the agent to reference it for
advanced reasoning throughout the task. While existing surveys in embodied AI
focus on general advancements or specific tasks like navigation and
manipulation, this paper provides a comprehensive review of semantic
map-building approaches in embodied AI, specifically for indoor navigation. We
categorize these approaches based on their structural representation (spatial
grids, topological graphs, dense point-clouds or hybrid maps) and the type of
information they encode (implicit features or explicit environmental data). We
also explore the strengths and limitations of the map building techniques,
highlight current challenges, and propose future research directions. We
identify that the field is moving towards developing open-vocabulary,
queryable, task-agnostic map representations, while high memory demands and
computational inefficiency remain open challenges. This survey
aims to guide current and future researchers in advancing semantic mapping
techniques for embodied AI systems.
|
2501.05752
|
Semantic Exploration with Adaptive Gating for Efficient Problem Solving
with Language Models
|
cs.AI cs.CL
|
Recent advancements in large language models (LLMs) have shown remarkable
potential in various complex tasks requiring multi-step reasoning methods like
tree search to explore diverse reasoning paths. However, existing methods often
suffer from computational inefficiency and redundancy. First, they overlook the
diversity of task difficulties, leading to unnecessarily extensive searches
even for easy tasks. Second, they neglect the semantics of reasoning paths,
resulting in redundant exploration of semantically identical paths. To address
these limitations, we propose Semantic Exploration with Adaptive Gating (SEAG),
a computationally efficient method. SEAG employs an adaptive gating mechanism
that dynamically decides whether to conduct a tree search, based on the
confidence level of answers from a preceding simple reasoning method.
Furthermore, its tree-based exploration consolidates semantically identical
reasoning steps, reducing redundant explorations while maintaining or even
improving accuracy. Our extensive experiments demonstrate that SEAG
significantly improves accuracy by 4.3% on average while requiring only 31% of
computational costs compared to existing tree search-based methods on complex
reasoning benchmarks including GSM8K and ARC with diverse language models such
as Llama2, Llama3, and Mistral.
|
2501.05755
|
CognoSpeak: an automatic, remote assessment of early cognitive decline
in real-world conversational speech
|
cs.SD cs.LG eess.AS
|
The early signs of cognitive decline are often noticeable in conversational
speech, and identifying those signs is crucial in dealing with later and more
serious stages of neurodegenerative diseases. Clinical detection is costly and
time-consuming and although there has been recent progress in the automatic
detection of speech-based cues, those systems are trained on relatively small
databases, lacking detailed metadata and demographic information. This paper
presents CognoSpeak and its associated data collection efforts. CognoSpeak asks
long- and short-term memory-probing questions and administers standard cognitive
tasks such as verbal and semantic fluency and picture description using a
virtual agent on a mobile or web platform. In addition, it collects multimodal
data such as audio and video along with a rich set of metadata from primary and
secondary care, memory clinics and remote settings like people's homes. Here,
we present results from 126 subjects whose audio was manually transcribed.
Several classic classifiers, as well as large language model-based classifiers,
have been investigated and evaluated across the different types of prompts. We
demonstrate a high level of performance; in particular, we achieved an F1-score
of 0.873 using a DistilBERT model to discriminate people with cognitive
impairment (dementia and mild cognitive impairment (MCI)) from healthy
volunteers using the memory responses, fluency tasks and cookie theft
picture description. CognoSpeak is an automatic, remote, low-cost, repeatable,
non-invasive and less stressful alternative to existing clinical cognitive
assessments.
|
2501.05757
|
Locality-aware Gaussian Compression for Fast and High-quality Rendering
|
cs.CV
|
We present LocoGS, a locality-aware 3D Gaussian Splatting (3DGS) framework
that exploits the spatial coherence of 3D Gaussians for compact modeling of
volumetric scenes. To this end, we first analyze the local coherence of 3D
Gaussian attributes, and propose a novel locality-aware 3D Gaussian
representation that effectively encodes locally-coherent Gaussian attributes
using a neural field representation with a minimal storage requirement. On top
of the novel representation, LocoGS is carefully designed with additional
components such as dense initialization, an adaptive spherical harmonics
bandwidth scheme and different encoding schemes for different Gaussian
attributes to maximize compression performance. Experimental results
demonstrate that our approach outperforms the rendering quality of existing
compact Gaussian representations for representative real-world 3D datasets
while achieving 54.6$\times$ to 96.6$\times$ smaller storage size and
2.1$\times$ to 2.4$\times$ faster rendering speed than 3DGS. Our approach also
demonstrates an average 2.4$\times$ higher rendering speed than the
state-of-the-art compression method with comparable compression performance.
|
2501.05762
|
Development and Comparison of Model-Based and Data-Driven Approaches for
the Prediction of the Mechanical Properties of Lattice Structures
|
cond-mat.soft cs.CE cs.LG physics.comp-ph
|
Lattice structures have great potential in several application fields, ranging
from medical and tissue engineering to aeronautics. Their development is
further sped up by continuing advances in additive manufacturing technologies,
which make it possible to overcome issues typical of standard processes and to
propose tailored designs. However, the design of lattice
structures is still challenging since their properties are considerably
affected by numerous factors. The present paper aims to propose, discuss, and
compare various modeling approaches to describe, understand, and predict the
correlations between the mechanical properties and the void volume fraction of
different types of lattice structures fabricated by fused deposition modeling
3D printing. Particularly, four approaches are proposed: (i) a simplified
analytical model; (ii) a semi-empirical model combining analytical equations
with experimental correction factors; (iii) an artificial neural network
trained on experimental data; (iv) numerical simulations by finite element
analyses. Comparing the various approaches with one another and with
experimental data makes it possible to identify the performance, advantages,
and disadvantages of each approach, thus giving important guidelines for
choosing the right design methodology based on the needs and available data.
|
2501.05763
|
StarGen: A Spatiotemporal Autoregression Framework with Video Diffusion
Model for Scalable and Controllable Scene Generation
|
cs.CV
|
Recent advances in large reconstruction and generative models have
significantly improved scene reconstruction and novel view generation. However,
due to compute limitations, each inference with these large models is confined
to a small area, making long-range consistent scene generation challenging. To
address this, we propose StarGen, a novel framework that employs a pre-trained
video diffusion model in an autoregressive manner for long-range scene
generation. The generation of each video clip is conditioned on the 3D warping
of spatially adjacent images and the temporally overlapping image from
previously generated clips, improving spatiotemporal consistency in long-range
scene generation with precise pose control. The spatiotemporal condition is
compatible with various input conditions, facilitating diverse tasks, including
sparse view interpolation, perpetual view generation, and layout-conditioned
city generation. Quantitative and qualitative evaluations demonstrate StarGen's
superior scalability, fidelity, and pose accuracy compared to state-of-the-art
methods.
|
2501.05764
|
Controlling Large Language Models Through Concept Activation Vectors
|
cs.CL
|
As large language models (LLMs) are widely deployed across various domains,
the ability to control their generated outputs has become more critical. This
control involves aligning LLM outputs with human values and ethical principles
or customizing LLMs on specific topics or styles for individual users. Existing
controlled generation methods either require significant computational
resources and extensive trial-and-error or provide coarse-grained control. In
this paper, we propose Generation with Concept Activation Vector (GCAV), a
lightweight model control framework that ensures accurate control without
requiring resource-intensive fine-tuning. Specifically, GCAV first trains a
concept activation vector for specified concepts to be controlled, such as
toxicity. During inference, GCAV steers the concept vector in LLMs, for
example, by removing the toxicity concept vector from the activation layers.
Control experiments from different perspectives, including toxicity reduction,
sentiment control, linguistic style, and topic control, demonstrate that our
framework achieves state-of-the-art performance with granular control, allowing
for fine-grained adjustments of both the steering layers and the steering
magnitudes for individual samples.
|
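The steering step described above, e.g. "removing the toxicity concept vector from the activation layers", can be sketched as a projection removal on one layer's hidden states. This is a minimal sketch under a linear-steering assumption; the paper's exact rule, layer choice, and concept-vector training are not reproduced, and all names are illustrative.

```python
import numpy as np

def steer_activations(hidden, concept_vec, alpha=1.0):
    """Subtract alpha times each hidden state's component along a learned
    concept direction (e.g. a toxicity vector).
    hidden: (n_tokens, d) layer activations; concept_vec: (d,)."""
    u = concept_vec / np.linalg.norm(concept_vec)  # unit concept direction
    proj = hidden @ u                              # per-token component
    return hidden - alpha * np.outer(proj, u)
```

With `alpha=1.0` the steered activations have zero component along the concept direction; varying `alpha` per sample corresponds to the fine-grained control of steering magnitudes the abstract mentions.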
2501.05765
|
Deontic Temporal Logic for Formal Verification of AI Ethics
|
cs.AI cs.LO
|
Ensuring ethical behavior in Artificial Intelligence (AI) systems amidst
their increasing ubiquity and influence is a major concern the world over. The
use of formal methods in AI ethics is a promising approach for specifying and
verifying the ethical behavior of AI systems. Contributing to this important
goal, this paper proposes a formalization based on deontic logic to define and
evaluate the ethical behavior of AI systems, focusing on system-level
specifications. It introduces axioms and theorems to
capture ethical requirements related to fairness and explainability. The
formalization incorporates temporal operators to reason about the ethical
behavior of AI systems over time. The authors evaluate the effectiveness of
this formalization by assessing the ethics of the real-world COMPAS and loan
prediction AI systems. Various ethical properties of the COMPAS and loan
prediction systems are encoded using deontic logical formulas, allowing the use
of an automated theorem prover to verify whether these systems satisfy the
defined properties. The formal verification reveals that both systems fail to
fulfill certain key ethical properties related to fairness and
non-discrimination, demonstrating the effectiveness of the proposed
formalization in identifying potential ethical issues in real-world AI
applications.
|
2501.05767
|
Migician: Revealing the Magic of Free-Form Multi-Image Grounding in
Multimodal Large Language Models
|
cs.CL cs.AI cs.CV
|
The recent advancement of Multimodal Large Language Models (MLLMs) has
significantly improved their fine-grained perception of single images and
general comprehension across multiple images. However, existing MLLMs still
face challenges in achieving precise grounding in complex multi-image
scenarios. To address this, we first explore a Chain-of-Thought (CoT) framework
that integrates single-image grounding with multi-image comprehension. While
partially effective, it remains unstable and struggles to capture abstract
visual information due to its non-end-to-end nature. Therefore, we introduce
Migician, the first multi-image grounding model capable of performing free-form
and accurate grounding across multiple images. To support this, we present the
MGrounding-630k dataset, which comprises data for several multi-image grounding
tasks derived from existing datasets, along with newly generated free-form
grounding instruction-following data. Furthermore, we propose MIG-Bench, a
comprehensive benchmark specifically designed for evaluating multi-image
grounding capabilities. Experimental results demonstrate that our model
achieves significantly superior multi-image grounding capabilities,
outperforming the best existing MLLMs by 24.94% and even surpassing much larger
70B models. Our code, model, dataset, and benchmark are fully open-sourced at
https://migician-vg.github.io/.
|
2501.05768
|
Halal or Not: Knowledge Graph Completion for Predicting Cultural
Appropriateness of Daily Products
|
cs.LG cs.AI
|
The growing demand for halal cosmetic products has exposed significant
challenges, especially in Muslim-majority countries. Recently, various machine
learning-based strategies, e.g., image-based methods, have shown remarkable
success in predicting the halal status of cosmetics. However, these methods
mainly focus on analyzing the discrete and specific ingredients within separate
cosmetics, ignoring the high-order and complex relations between cosmetics
and ingredients. To address this problem, we propose a halal cosmetic
recommendation framework, namely HaCKG, that leverages a knowledge graph of
cosmetics and their ingredients to explicitly model and capture the
relationships between cosmetics and their components. By representing cosmetics
and ingredients as entities within the knowledge graph, HaCKG effectively
learns the high-order and complex relations between entities, offering a robust
method for predicting halal status. Specifically, we first construct a cosmetic
knowledge graph representing the relations between various cosmetics,
ingredients, and their properties. We then propose a pre-trained relational
graph attention network model with residual connections to learn the structural
relation between entities in the knowledge graph. The pre-trained model is then
fine-tuned on downstream cosmetic data to predict halal status. Extensive
experiments on the cosmetic dataset over halal prediction tasks demonstrate the
superiority of our model over state-of-the-art baselines.
|
2501.05769
|
Conditional Diffusion Model for Electrical Impedance Tomography
|
cs.CV
|
Electrical impedance tomography (EIT) is a non-invasive imaging technique,
which has been widely used in the fields of industrial inspection, medical
monitoring and tactile sensing. However, due to the inherent non-linearity and
ill-conditioned nature of the EIT inverse problem, the reconstructed image is
highly sensitive to the measured data, and random noise artifacts often appear
in the reconstructed image, which greatly limits the application of EIT. To
address this issue, a conditional diffusion model with voltage consistency
(CDMVC) is proposed in this study. The method consists of a pre-imaging module,
a conditional diffusion model for reconstruction, a forward voltage constraint
network, and a voltage consistency constraint scheme applied during the sampling process.
The pre-imaging module is employed to generate the initial reconstruction. This
serves as a condition for training the conditional diffusion model. Finally,
based on the forward voltage constraint network, a voltage consistency
constraint is implemented in the sampling phase to incorporate forward
information of EIT, thereby enhancing imaging quality. A more complete dataset,
including both common and complex concave shapes, is generated. The proposed
method is validated using both simulation and physical experiments.
Experimental results demonstrate that our method significantly improves the
quality of reconstructed images. In addition, our method exhibits good
robustness and generalization performance.
|
2501.05770
|
Path Planning for Multi-Copter UAV Formation Employing a Generalized
Particle Swarm Optimization
|
cs.RO cs.SY eess.SY
|
The paper investigates path planning techniques for multi-copter uncrewed
aerial vehicles (UAVs) cooperating in a formation shape to examine surrounding
surfaces. We first describe the problem as a joint objective cost for planning
a path of the formation centroid working in a complicated space. The path
planning algorithm, named the generalized particle swarm optimization (GEPSO)
algorithm, is then presented to construct an optimal,
flyable path while avoiding obstacles and ensuring the flying mission
requirements. A path-development scheme is then incorporated to generate a
relevant path for each drone to maintain its position in the formation
configuration. Simulation, comparison, and experiments have been conducted to
verify the proposed approach. Results show the feasibility of the proposed
path-planning algorithm with GEPSO.
|
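The abstract does not detail the GEPSO update itself; as background, the standard particle swarm update that such planners generalize can be sketched as below (minimizing a generic cost over candidate paths; parameter values are illustrative).

```python
import numpy as np

def pso_step(pos, vel, pbest, gbest, w=0.7, c1=1.5, c2=1.5, rng=None):
    """One classic PSO update: inertia w, plus random attraction toward each
    particle's personal best (c1) and the swarm's global best (c2)."""
    if rng is None:
        rng = np.random.default_rng(0)
    r1 = rng.random(pos.shape)
    r2 = rng.random(pos.shape)
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    return pos + vel, vel
```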
2501.05772
|
rmlnomogram: An R package to construct an explainable nomogram for any
machine learning algorithms
|
cs.LG stat.ML
|
Background: Currently, nomograms can only be created for regression
algorithms. Providing nomograms for any machine learning (ML) algorithm may
accelerate model deployment in clinical settings or improve model
availability. We developed an R package and web application to construct
nomograms with model explainability for any ML algorithm. Methods: We
formulated a function to
transform an ML prediction model into a nomogram, requiring datasets with: (1)
all possible combinations of predictor values; (2) the corresponding outputs of
the model; and (3) the corresponding explainability values for each predictor
(optional). A web application was also created. Results: Our R package can
create 5 types of nomograms: (1) categorical predictors and a binary outcome
without probability; (2) categorical predictors and a binary outcome with
probability; (3) categorical predictors and a continuous outcome; (4)
categorical plus a single numerical predictor and a binary outcome with
probability; and (5) categorical plus a single numerical predictor and a
continuous outcome. The first type optimally allowed a maximum of 15
predictors and the remaining types a maximum of 5, with a maximum of 3,200
combinations; the web application enforces the same limits. Explainability
values were possible for types 2 to 5. Conclusions: Our R package and web
application can construct nomograms with model explainability for any ML
algorithm using a fair number of predictors.
|
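The first input the function requires, a dataset holding all possible combinations of predictor values, can be illustrated as follows (the package itself is in R; this Python sketch and its names are illustrative):

```python
from itertools import product

def all_combinations(predictor_levels):
    """Enumerate every combination of categorical predictor values: the
    exhaustive grid that nomogram construction takes as input."""
    names = list(predictor_levels)
    return [dict(zip(names, combo))
            for combo in product(*(predictor_levels[n] for n in names))]
```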
2501.05775
|
STHFL: Spatio-Temporal Heterogeneous Federated Learning
|
cs.LG cs.DC
|
Federated learning is a new framework that protects data privacy and allows
multiple devices to cooperate in training machine learning models. Previous
studies have proposed multiple approaches to eliminate the challenges posed by
non-iid data and inter-domain heterogeneity issues. However, they ignore the
\textbf{spatio-temporal} heterogeneity formed by the different data
distributions of increasing task data within a domain. Moreover, in practical
applications the global data generally follows a long-tailed distribution
rather than being balanced. To tackle the \textbf{spatio-temporal} dilemma, we
propose a novel setting named \textbf{Spatio-Temporal Heterogeneity} Federated
Learning (STHFL). Specifically, the Global-Local Dynamic Prototype (GLDP)
framework is designed for STHFL. In GLDP, the model in each client contains
personalized layers that can dynamically adapt to different data
distributions. For long-tailed data distributions, global prototypes serve as
complementary knowledge for training on classes with few samples in clients,
without leaking privacy. As tasks increase in clients, the knowledge of local
prototypes generated in previous tasks guides training in the current task to
mitigate catastrophic forgetting. Meanwhile, the global-local
prototypes are updated through the moving average method after training local
prototypes in clients. Finally, we evaluate the effectiveness of GLDP, which
achieves remarkable results compared to state-of-the-art methods in STHFL
scenarios.
|
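The moving-average update of the global-local prototypes mentioned above can be sketched as a simple exponential moving average; the momentum value and names are illustrative, and the paper's exact rule may differ.

```python
import numpy as np

def update_global_prototype(global_proto, local_proto, momentum=0.9):
    """Moving-average update: retain most of the old global class prototype
    and blend in the client's freshly trained local prototype."""
    return (momentum * np.asarray(global_proto)
            + (1.0 - momentum) * np.asarray(local_proto))
```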
2501.05777
|
StructSR: Refuse Spurious Details in Real-World Image Super-Resolution
|
cs.CV
|
Diffusion-based models have shown great promise in real-world image
super-resolution (Real-ISR), but often generate content with structural errors
and spurious texture details due to the empirical priors and illusions of these
models. To address this issue, we introduce StructSR, a simple, effective, and
plug-and-play method that enhances structural fidelity and suppresses spurious
details for diffusion-based Real-ISR. StructSR operates without the need for
additional fine-tuning, external model priors, or high-level semantic
knowledge. At its core is the Structure-Aware Screening (SAS) mechanism, which
identifies the image with the highest structural similarity to the
low-resolution (LR) input in the early inference stage, allowing us to leverage
it as historical structure knowledge to suppress the generation of spurious
details. By intervening in the diffusion inference process, StructSR seamlessly
integrates with existing diffusion-based Real-ISR models. Our experimental
results demonstrate that StructSR significantly improves the fidelity of
structure and texture, improving the PSNR and SSIM metrics by an average of
5.27% and 9.36% on a synthetic dataset (DIV2K-Val) and 4.13% and 8.64% on two
real-world datasets (RealSR and DRealSR) when integrated with four
state-of-the-art diffusion-based Real-ISR methods.
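The Structure-Aware Screening step can be illustrated with a toy sketch; the similarity measure below is a simple normalized cross-correlation stand-in, not the SSIM-style metric the method presumably uses, and all names are hypothetical.

```python
import numpy as np

def similarity(a, b):
    # Toy structural-similarity proxy: normalized cross-correlation.
    a = (a - a.mean()) / (a.std() + 1e-8)
    b = (b - b.mean()) / (b.std() + 1e-8)
    return float(np.mean(a * b))

def screen_candidates(candidates, lr_ref):
    # Screening sketch: keep the early-stage candidate that is most
    # structurally similar to the low-resolution input.
    scores = [similarity(c, lr_ref) for c in candidates]
    return int(np.argmax(scores))

lr_ref = np.arange(16.0).reshape(4, 4)
# Candidate 1 shares the LR input's structure; candidate 0 is flat.
best = screen_candidates([np.ones((4, 4)), 2.0 * lr_ref], lr_ref)
```

The selected candidate then acts as the historical structure reference that constrains later diffusion steps.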
|
2501.05778
|
Formally Verified Neural Lyapunov Function for Incremental
Input-to-State Stability of Unknown Systems
|
eess.SY cs.SY
|
This work presents an approach to synthesize a Lyapunov-like function that
ensures the incremental input-to-state stability ($\delta$-ISS) property for an
unknown discrete-time system. To deal with challenges posed by unknown system
dynamics, we parameterize the Lyapunov-like function as a neural network, which
we train using the data samples collected from the unknown system along with
appropriately designed loss functions. We propose a validity condition to test
the obtained function and incorporate it into the training framework to ensure
provable correctness at the end of training. Finally, the usefulness of the
proposed technique is demonstrated through two case studies: a scalar nonlinear
dynamical system and a permanent magnet DC motor.
|
2501.05780
|
Multi-layer RIS on Edge: Communication, Computation and Wireless Power
Transfer
|
cs.IT math.IT
|
The rapid expansion of Internet of Things (IoT) and its integration into
various applications highlight the need for advanced communication,
computation, and energy transfer techniques. However, the traditional
hardware-based evolution of communication systems faces challenges due to
excessive power consumption and prohibitive hardware cost. With the rapid
advancement of reconfigurable intelligent surfaces (RIS), a new approach that
stacks a series of RISs in parallel, i.e., multi-layer RIS, has been proposed.
Benefiting from the characteristics of scalability, passivity, low cost, and
enhanced computation capability, multi-layer RIS is a promising technology for
future massive IoT scenarios. Thus, this article proposes a multi-layer
RIS-based universal paradigm at the network edge, enabling three functions,
i.e., multiple-input multiple-output (MIMO) communication, computation, and
wireless power transfer (WPT). Starting by picturing the possible applications
of multi-layer RIS, we explore the potential signal transmission links, energy
transmission links, and computation processes in IoT scenarios, showing its
ability to handle on-edge IoT tasks and associated green challenges. Then,
these three key functions are analyzed respectively in detail, showing the
advantages of the proposed scheme, compared with the traditional hardware-based
scheme. To facilitate bringing this new paradigm into reality, we conclude by
listing dominant future research directions, such as inter-layer
channel modeling, resource allocation and scheduling, channel estimation, and
edge training. It is anticipated that multi-layer RIS will contribute to more
energy-efficient wireless networks in the future by introducing a revolutionary
paradigm shift to an all-wave-based approach.
|
2501.05783
|
UV-Attack: Physical-World Adversarial Attacks for Person Detection via
Dynamic-NeRF-based UV Mapping
|
cs.CV cs.AI
|
In recent research, adversarial attacks on person detectors using patches or
static 3D model-based texture modifications have struggled with low success
rates due to the flexible nature of human movement. Modeling the 3D
deformations caused by various actions has been a major challenge. Fortunately,
advancements in Neural Radiance Fields (NeRF) for dynamic human modeling offer
new possibilities. In this paper, we introduce UV-Attack, a groundbreaking
approach that achieves high success rates even with extensive and unseen human
actions. We address the challenge above by leveraging dynamic-NeRF-based UV
mapping. UV-Attack can generate human images across diverse actions and
viewpoints, and even create novel actions by sampling from the SMPL parameter
space. While dynamic NeRF models are capable of modeling human bodies,
modifying clothing textures is challenging because they are embedded in neural
network parameters. To tackle this, UV-Attack generates UV maps instead of RGB
images and modifies the texture stacks. This approach enables real-time texture
edits and makes the attack more practical. We also propose a novel Expectation
over Pose Transformation loss (EoPT) to improve the evasion success rate on
unseen poses and views. Our experiments show that UV-Attack achieves a 92.75%
attack success rate against the FastRCNN model across varied poses in dynamic
video settings, significantly outperforming the state-of-the-art AdvCamou
attack, which only had a 28.50% ASR. Moreover, we achieve 49.5% ASR on the
latest YOLOv8 detector in black-box settings. This work highlights the
potential of dynamic NeRF-based UV mapping for creating more effective
adversarial attacks on person detectors, addressing key challenges in modeling
human movement and texture modification.
|
2501.05786
|
Cryptanalysis of Cancelable Biometrics Vault
|
cs.CR cs.CV
|
Cancelable Biometrics (CB) stands for a range of biometric transformation
schemes combining biometrics with user specific tokens to generate secure
templates. Required properties are the irreversibility, unlinkability and
recognition accuracy of templates, while making their revocation possible. In
biometrics, a key-binding scheme is used to protect a cryptographic key with
biometric data. The key can be recomputed only if correct biometric data is
acquired during authentication. A typical application of key-binding schemes is
disk encryption, where the cryptographic key is used to encrypt and
decrypt the disk. In this paper, we cryptanalyze a recent key-binding scheme,
called Cancelable Biometrics Vault (CBV) based on cancelable biometrics. More
precisely, the introduced cancelable transformation, called BioEncoding scheme,
for instantiating the CBV framework is attacked in terms of reversibility and
linkability of templates. Our linkability attack then enables recovery of the
key in the vault without additional assumptions. Our cryptanalysis
introduces a new perspective by uncovering the CBV scheme's revocability and
linkability vulnerabilities, which were not previously identified in comparable
biometric-based key-binding schemes.
|
2501.05787
|
MARS6: A Small and Robust Hierarchical-Codec Text-to-Speech Model
|
eess.AS cs.CL
|
Codec-based text-to-speech (TTS) models have shown impressive quality with
zero-shot voice cloning abilities. However, they often struggle with more
expressive references or complex text inputs. We present MARS6, a robust
encoder-decoder transformer for rapid, expressive TTS. MARS6 is built on recent
improvements in spoken language modelling. Utilizing a hierarchical setup for
its decoder, new speech tokens are processed at a rate of only 12 Hz, enabling
efficient modelling of long-form text while retaining reconstruction quality.
We combine several recent training and inference techniques to reduce
repetitive generation and improve output stability and quality. This enables
the 70M-parameter MARS6 to achieve similar performance to models many times
larger. We show this in objective and subjective evaluations, comparing TTS
output quality and reference speaker cloning ability. Project page:
https://camb-ai.github.io/mars6-turbo/
|
2501.05790
|
Understanding Impact of Human Feedback via Influence Functions
|
cs.AI cs.HC cs.LG
|
In Reinforcement Learning from Human Feedback (RLHF), it is crucial to learn
suitable reward models from human feedback to align large language models
(LLMs) with human intentions. However, human feedback can often be noisy,
inconsistent, or biased, especially when evaluating complex responses. Such
feedback can lead to misaligned reward signals, potentially causing unintended
side effects during the RLHF process. To address these challenges, we explore
the use of influence functions to measure the impact of human feedback on the
performance of reward models. We propose a compute-efficient approximation
method that enables the application of influence functions to LLM-based reward
models and large-scale preference datasets. In our experiments, we demonstrate
two key applications of influence functions: (1) detecting common forms of
labeler bias in human feedback datasets and (2) guiding labelers to refine
their strategies to align more closely with expert feedback. By quantifying the
impact of human feedback on reward models, we believe that influence functions
can enhance feedback interpretability and contribute to scalable oversight in
RLHF, helping labelers provide more accurate and consistent feedback. Source
code is available at https://github.com/mintaywon/IF_RLHF
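As a hedged illustration of the general idea (not the paper's actual compute-efficient approximation), the first-order influence of a training example on a validation loss can be estimated from gradient alignment; the toy linear reward model and the identity-Hessian simplification below are assumptions for exposition only.

```python
import numpy as np

# Toy linear reward model r(x) = w @ x with squared loss against a label.
def grad_loss(w, x, y):
    return 2.0 * (w @ x - y) * x

w = np.array([0.5, -0.2])
train = [(np.array([1.0, 0.0]), 1.0),    # plausible preference label
         (np.array([1.0, 0.0]), -1.0)]   # contradictory (noisy) label
x_val, y_val = np.array([1.0, 0.0]), 1.0

g_val = grad_loss(w, x_val, y_val)
# influence(i) ~ -grad_val . grad_train_i (identity-Hessian simplification);
# positive values flag training points that hurt validation performance.
influences = [-(g_val @ grad_loss(w, x, y)) for x, y in train]
```

In this toy setup the contradictory label receives a positive (harmful) influence score, which is the kind of signal used to detect labeler bias.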
|
2501.05795
|
Robust Counterfactual Explanations under Model Multiplicity Using
Multi-Objective Optimization
|
cs.LG cs.AI
|
In recent years, explainability in machine learning has gained importance. In
this context, counterfactual explanation (CE), which is an explanation method
that uses examples, has attracted attention. However, it has been pointed out
that CE is not robust when there are multiple machine-learning models with
similar accuracy. These problems are important when using machine learning to
make safe decisions. In this paper, we propose robust CEs that introduce a new
viewpoint -- Pareto improvement -- and a method that uses multi-objective
optimization to generate them. To evaluate the proposed method, we conducted
experiments using both simulated and real data. The results demonstrate that
the proposed method is both robust and practical. This study highlights the
potential of ensuring robustness in decision-making by applying the concept of
social welfare. We believe that this research can serve as a valuable
foundation for various fields, including explainability in machine learning,
decision-making, and action planning based on machine learning.
|
2501.05803
|
Test-time Alignment of Diffusion Models without Reward Over-optimization
|
cs.LG cs.AI cs.CV math.ST stat.TH
|
Diffusion models excel in generative tasks, but aligning them with specific
objectives while maintaining their versatility remains challenging. Existing
fine-tuning methods often suffer from reward over-optimization, while
approximate guidance approaches fail to optimize target rewards effectively.
Addressing these limitations, we propose a training-free, test-time method
based on Sequential Monte Carlo (SMC) to sample from the reward-aligned target
distribution. Our approach, tailored for diffusion sampling and incorporating
tempering techniques, achieves comparable or superior target rewards to
fine-tuning methods while preserving diversity and cross-reward generalization.
We demonstrate its effectiveness in single-reward optimization, multi-objective
scenarios, and online black-box optimization. This work offers a robust
solution for aligning diffusion models with diverse downstream objectives
without compromising their general capabilities. Code is available at
https://github.com/krafton-ai/DAS.
|
2501.05808
|
Real-Time Integrated Dispatching and Idle Fleet Steering with Deep
Reinforcement Learning for A Meal Delivery Platform
|
eess.SY cs.AI cs.SY
|
To achieve high service quality and profitability, meal delivery platforms
like Uber Eats and Grubhub must strategically operate their fleets to ensure
timely deliveries for current orders while mitigating the consequential impacts
of suboptimal decisions that lead to courier understaffing in the future. This
study set out to solve the real-time order dispatching and idle courier
steering problems for a meal delivery platform by proposing a reinforcement
learning (RL)-based strategic dual-control framework. To address the inherent
sequential nature of these problems, we model both order dispatching and
courier steering as Markov Decision Processes. Trained via a deep reinforcement
learning (DRL) framework, we obtain strategic policies by leveraging the
explicitly predicted demands as part of the inputs. In our dual-control
framework, the dispatching and steering policies are iteratively trained in an
integrated manner. These forward-looking policies can be executed in real-time
and provide decisions while jointly considering the impacts on local and
network levels. To enhance dispatching fairness, we propose convolutional deep
Q networks to construct fair courier embeddings. To simultaneously rebalance
the supply and demand within the service network, we propose to utilize
mean-field approximated supply-demand knowledge to reallocate idle couriers at
the local level. Utilizing the policies generated by the RL-based strategic
dual-control framework, we find the delivery efficiency and fairness of
workload distribution among couriers have been improved, and under-supplied
conditions have been alleviated within the service network. Our study sheds
light on designing an RL-based framework to enable forward-looking real-time
operations for meal delivery platforms and other on-demand services.
|
2501.05809
|
AdaPRL: Adaptive Pairwise Regression Learning with Uncertainty
Estimation for Universal Regression Tasks
|
cs.LG
|
Current deep regression models usually learn in a point-wise way that treats
each sample as an independent input, neglecting the relative ordering among
data points. Consequently, the regression model may overlook the data's
interrelationships, potentially resulting in suboptimal performance. Moreover,
the existence of aleatoric uncertainty in the training data may drive the model
to capture non-generalizable patterns, contributing to increased overfitting.
To address these issues, we propose a novel adaptive pairwise learning
framework for regression tasks (AdaPRL) which leverages the relative
differences between data points and integrates with deep probabilistic models
to quantify the uncertainty associated with the predictions. Additionally, we
adapt AdaPRL for applications in multi-task learning and multivariate time
series forecasting. Extensive experiments with several real-world regression
datasets including recommendation systems, age prediction, time series
forecasting, natural language understanding, finance, and industry datasets
show that AdaPRL is compatible with different backbone networks in various
tasks and achieves state-of-the-art performance on the vast majority of tasks
without extra inference cost, highlighting its notable potential including
enhancing prediction accuracy and ranking ability, increasing generalization
capability, improving robustness to noisy data, improving resilience to reduced
data, and enhancing interpretability. Experiments also show that AdaPRL can be
seamlessly incorporated into recently proposed regression frameworks to gain
performance improvement.
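The pairwise idea can be sketched as a loss over predicted versus true differences between data points; this is only an illustration of the pairwise principle, as the actual AdaPRL objective and its uncertainty weighting are not specified here.

```python
import numpy as np

def pairwise_diff_loss(preds, targets):
    # Penalize mismatch between predicted and true pairwise differences,
    # so the model learns relative ordering, not just point-wise values.
    pd = preds[:, None] - preds[None, :]
    td = targets[:, None] - targets[None, :]
    return float(np.mean((pd - td) ** 2))

preds = np.array([1.0, 2.0, 3.0])
targets = np.array([1.0, 2.0, 3.0])
loss = pairwise_diff_loss(preds, targets)  # zero when all differences match
```

A uniform additive bias in the predictions leaves this loss unchanged, which is why such terms are typically combined with a point-wise loss in practice.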
|
2501.05813
|
Social web and Wikipedia: an opportunity to rethink the links between
sources' credibility, trust and authority
|
cs.IR cs.CY cs.SI
|
The Web and its main tools (Google, Wikipedia, Facebook, Twitter) deeply
raise and renew fundamental questions, that everyone asks almost every day: Is
this information or content true? Can I trust this author or source? These
questions are not new; they have arisen with books, newspapers,
broadcasting and television, and, more fundamentally, in every human
interpersonal communication. This paper focuses on two scientific problems
related to this issue. The first is theoretical: to address this issue, many
concepts have been used in library and information sciences, communication and
psychology. The links between these concepts are not clear: sometimes two
concepts are considered as synonymous, sometimes as very different. The second
one is historical: sources like Wikipedia deeply challenge the epistemic
evaluation of information sources, compared to previous modes of information
production. This paper proposes an integrated and simple model considering the
relation between a user, a document and an author as human communication. It
reduces the problem to three concepts: credibility as a characteristic granted
to information depending on its truth-value; trust as the ability to produce
credible information; authority when the power to influence of an author is
accepted, i.e., when readers accept that the source can modify their opinion,
knowledge and decisions. The model also describes two kinds of relationships
between the three concepts: an upward link and a downward link. The model is
confronted with findings of empirical research on Wikipedia in particular.
|
2501.05815
|
Enhanced sampled-data model predictive control via nonlinear lifting
|
eess.SY cs.SY
|
This paper introduces a novel nonlinear model predictive control (NMPC)
framework that incorporates a lifting technique to enhance control performance
for nonlinear systems. While the lifting technique has been widely employed in
linear systems to capture intersample behaviour, its application to nonlinear
systems remains unexplored. We address this gap by formulating an NMPC scheme
that combines fast-sample fast-hold (FSFH) approximations and numerical methods
to approximate system dynamics and cost functions. The proposed approach is
validated through two case studies: the Van der Pol oscillator and the inverted
pendulum on a cart. Simulation results demonstrate that the lifted NMPC
outperforms conventional NMPC in terms of reduced settling time and improved
control accuracy. These findings underscore the potential of the lifting-based
NMPC for efficient control of nonlinear systems, offering a practical solution
for real-time applications.
|
2501.05816
|
IndoNLP 2025: Shared Task on Real-Time Reverse Transliteration for
Romanized Indo-Aryan languages
|
cs.CL
|
The paper overviews the shared task on Real-Time Reverse Transliteration for
Romanized Indo-Aryan languages. It focuses on the reverse transliteration of
low-resourced languages in the Indo-Aryan family to their native scripts.
Typing Romanized Indo-Aryan languages with ad-hoc transliterations and
converting them into accurate native scripts is a complex and often error-prone
process with current keyboard systems. This task aims to introduce and evaluate a real-time
reverse transliterator that converts Romanized Indo-Aryan languages to their
native scripts, improving the typing experience for users. Out of 11 registered
teams, four teams participated in the final evaluation phase with
transliteration models for Sinhala, Hindi and Malayalam. These proposed
solutions not only solve the issue of ad-hoc transliteration but also empower
low-resource language usability in the digital arena.
|
2501.05819
|
Diffusion Models for Smarter UAVs: Decision-Making and Modeling
|
cs.LG cs.AI
|
Unmanned Aerial Vehicles (UAVs) are increasingly adopted in modern
communication networks. However, challenges in decision-making and digital
modeling continue to impede their rapid advancement. Reinforcement Learning
(RL) algorithms face limitations such as low sample efficiency and limited data
versatility, further magnified in UAV communication scenarios. Moreover,
Digital Twin (DT) modeling introduces substantial decision-making and data
management complexities. RL models, often integrated into DT frameworks,
require extensive training data to achieve accurate predictions. In contrast to
traditional approaches that focus on class boundaries, Diffusion Models (DMs),
a new class of generative AI, learn the underlying probability distribution
from the training data and can generate trustworthy new patterns based on this
learned distribution. This paper explores the integration of DMs with RL and DT
to effectively address these challenges. By combining the data generation
capabilities of DMs with the decision-making framework of RL and the modeling
accuracy of DT, the integration improves the adaptability and real-time
performance of UAV communication. Moreover, the study shows how DMs can
alleviate data scarcity, improve policy networks, and optimize dynamic
modeling, providing a robust solution for complex UAV communication scenarios.
|
2501.05823
|
PersonaHOI: Effortlessly Improving Personalized Face with Human-Object
Interaction Generation
|
cs.CV
|
We introduce PersonaHOI, a training- and tuning-free framework that fuses a
general StableDiffusion model with a personalized face diffusion (PFD) model to
generate identity-consistent human-object interaction (HOI) images. While
existing PFD models have advanced significantly, they often overemphasize
facial features at the expense of full-body coherence. PersonaHOI introduces an
additional StableDiffusion (SD) branch guided by HOI-oriented text inputs. By
incorporating cross-attention constraints in the PFD branch and spatial merging
at both latent and residual levels, PersonaHOI preserves personalized facial
details while ensuring realistic interaction in non-facial regions. Experiments, validated
by a novel interaction alignment metric, demonstrate the superior realism and
scalability of PersonaHOI, establishing a new standard for practical
personalized face with HOI generation. Our code will be available at
https://github.com/JoyHuYY1412/PersonaHOI
|
2501.05826
|
AI-Driven Diabetic Retinopathy Screening: Multicentric Validation of
AIDRSS in India
|
eess.IV cs.AI cs.CV
|
Purpose: Diabetic retinopathy (DR) is a major cause of vision loss,
particularly in India, where access to retina specialists is limited in rural
areas. This study aims to evaluate the Artificial Intelligence-based Diabetic
Retinopathy Screening System (AIDRSS) for DR detection and prevalence
assessment, addressing the growing need for scalable, automated screening
solutions in resource-limited settings.
Approach: A multicentric, cross-sectional study was conducted in Kolkata,
India, involving 5,029 participants and 10,058 macula-centric retinal fundus
images. The AIDRSS employed a deep learning algorithm with 50 million trainable
parameters, integrated with Contrast Limited Adaptive Histogram Equalization
(CLAHE) preprocessing for enhanced image quality. DR was graded using the
International Clinical Diabetic Retinopathy (ICDR) Scale, categorizing disease
into five stages (DR0 to DR4). Statistical metrics including sensitivity,
specificity, and prevalence rates were evaluated against expert retina
specialist assessments.
Results: The prevalence of DR in the general population was 13.7%, rising to
38.2% among individuals with elevated random blood glucose levels. The AIDRSS
achieved an overall sensitivity of 92%, specificity of 88%, and 100%
sensitivity for detecting referable DR (DR3 and DR4). These results demonstrate
the system's robust performance in accurately identifying and grading DR in a
diverse population.
Conclusions: AIDRSS provides a reliable, scalable solution for early DR
detection in resource-constrained environments. Its integration of advanced AI
techniques ensures high diagnostic accuracy, with potential to significantly
reduce the burden of diabetes-related vision loss in underserved regions.
|
2501.05828
|
UltraRay: Full-Path Ray Tracing for Enhancing Realism in Ultrasound
Simulation
|
cs.CV cs.GR
|
Traditional ultrasound simulators solve the wave equation to model pressure
distribution fields, achieving high accuracy but requiring significant
computational time and resources. To address this, ray tracing approaches have
been introduced, modeling wave propagation as rays interacting with boundaries
and scatterers. However, existing models simplify ray propagation, generating
echoes at interaction points without considering return paths to the sensor.
This can result in unrealistic artifacts and necessitates careful scene tuning
for plausible results. We propose a novel ultrasound simulation pipeline that
utilizes a ray tracing algorithm to generate echo data, tracing each ray from
the transducer through the scene and back to the sensor. To replicate advanced
ultrasound imaging, we introduce a ray emission scheme optimized for plane wave
imaging, incorporating delay and steering capabilities. Furthermore, we
integrate a standard signal processing pipeline to simulate end-to-end
ultrasound image formation. We showcase the efficacy of the proposed pipeline
by modeling synthetic scenes featuring highly reflective objects, such as
bones. In doing so, our proposed approach, UltraRay, not only enhances the
overall visual quality but also improves the realism of the simulated images by
accurately capturing secondary reflections and reducing unnatural artifacts. By
building on top of a differentiable framework, the proposed pipeline lays the
groundwork for a fast and differentiable ultrasound simulation tool necessary
for gradient-based optimization, enabling advanced ultrasound beamforming
strategies, neural network integration, and accurate inverse scene
reconstruction.
|
2501.05835
|
Fine-tuning is Not Fine: Mitigating Backdoor Attacks in GNNs with
Limited Clean Data
|
cs.LG cs.CR
|
Graph Neural Networks (GNNs) have achieved remarkable performance through
their message-passing mechanism. However, recent studies have highlighted the
vulnerability of GNNs to backdoor attacks, which can lead the model to
misclassify graphs with attached triggers as the target class. The
effectiveness of recent promising defense techniques, such as fine-tuning or
distillation, is heavily contingent on access to a sufficiently large clean
training dataset. Empirical studies have shown that fine-tuning
methods require a clean dataset comprising 20% of the training data to reduce
attack accuracy to below 25%, while distillation methods require 15%. However, obtaining
such a large amount of clean data is commonly impractical.
In this paper, we propose a practical backdoor mitigation framework, denoted
as GRAPHNAD, which can capture high-quality intermediate-layer representations
in GNNs to enhance the distillation process with limited clean data. To achieve
this, we address the following key questions: How to identify the appropriate
attention representations in graphs for distillation? How to enhance
distillation with limited data? By adopting the graph attention transfer
method, GRAPHNAD can effectively align the intermediate-layer attention
representations of the backdoored model with that of the teacher model, forcing
the backdoor neurons to transform into benign ones. Besides, we extract the
relation maps from intermediate-layer transformation and enforce the relation
maps of the backdoored model to be consistent with that of the teacher model,
thereby ensuring model accuracy while further reducing the influence of
backdoors. Extensive experimental results show that by fine-tuning a teacher
model with only 3% of the clean data, GRAPHNAD can reduce the attack success
rate to below 5%.
|
2501.05839
|
Poetry in Pixels: Prompt Tuning for Poem Image Generation via Diffusion
Models
|
cs.CV
|
The task of text-to-image generation has encountered significant challenges
when applied to literary works, especially poetry. Poems are a distinct form of
literature, with meanings that frequently transcend the literal words.
To address this shortcoming, we propose a PoemToPixel framework designed to
generate images that visually represent the inherent meanings of poems. Our
approach incorporates the concept of prompt tuning in our image generation
framework to ensure that the resulting images closely align with the poetic
content. In addition, we propose the PoeKey algorithm, which extracts three key
elements in the form of emotions, visual elements, and themes from poems to
form instructions which are subsequently provided to a diffusion model for
generating corresponding images. Furthermore, to expand the diversity of the
poetry dataset across different genres and ages, we introduce MiniPo, a novel
multimodal dataset comprising 1001 children's poems and images. Leveraging this
dataset alongside PoemSum, we conducted both quantitative and qualitative
evaluations of image generation using our PoemToPixel framework. This paper
demonstrates the effectiveness of our approach and offers a fresh perspective
on generating images from literary sources.
|
2501.05842
|
Orthogonal projection-based regularization for efficient model
augmentation
|
cs.LG cs.SY eess.SY
|
Deep-learning-based nonlinear system identification has shown the ability to
produce reliable and highly accurate models in practice. However, these
black-box models lack physical interpretability, and often a considerable part
of the learning effort is spent on capturing already expected/known behavior
due to first-principles-based understanding of some aspects of the system. A
potential solution is to integrate prior physical knowledge directly into the
model structure, combining the strengths of physics-based modeling and
deep-learning-based identification. The most common approach is to use an
additive model augmentation structure, where the physics-based and the
machine-learning (ML) components are connected in parallel. However, such
models are overparametrized and challenging to train, potentially causing
the physics-based part to lose interpretability. To overcome this challenge,
this paper proposes an orthogonal projection-based regularization technique to
enhance parameter learning, convergence, and even model accuracy in
learning-based augmentation of nonlinear baseline models.
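A minimal sketch of the orthogonal-projection idea, assuming the regularizer penalizes the component of the ML block's output that lies along the physics-based baseline's output (the paper's exact formulation may differ):

```python
import numpy as np

def orth_penalty(y_phys, y_ml):
    # Project the ML component's output onto the physics-based output and
    # penalize the squared norm of that projection, pushing the ML part to
    # capture only behavior the baseline model cannot explain.
    proj = (y_ml @ y_phys) / (y_phys @ y_phys) * y_phys
    return float(np.sum(proj ** 2))

y_phys = np.array([1.0, 0.0])
p_orth = orth_penalty(y_phys, np.array([0.0, 2.0]))  # orthogonal: no penalty
p_par = orth_penalty(y_phys, np.array([3.0, 0.0]))   # parallel: penalized
```

Added to the training loss, such a term discourages the ML component from duplicating the baseline, which is one way to keep the physics-based part interpretable in an additive augmentation structure.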
|
2501.05844
|
"Cause" is Mechanistic Narrative within Scientific Domains: An Ordinary
Language Philosophical Critique of "Causal Machine Learning"
|
cs.LG
|
Causal Learning has emerged as a major theme of research in statistics and
machine learning in recent years, promising specific computational techniques
to apply to datasets that reveal the true nature of cause and effect in a
number of important domains. In this paper we consider the epistemology of
recognizing true cause and effect phenomena. We apply the Ordinary Language
method of engaging on the customary use of the word 'cause' to investigate
valid semantics of reasoning about cause and effect. We recognize that the
grammars of cause and effect are fundamentally distinct in form across
scientific domains, yet they maintain a consistent and central function. This
function can best be described as the mechanism underlying fundamental forces
of influence as considered prominent in the respective scientific domain. We
demarcate 1) physics and engineering as domains wherein mathematical models are
sufficient to comprehensively describe causality, 2) biology as introducing
challenges of emergence while providing opportunities for showing consistent
mechanisms across scale, and 3) the social sciences as introducing grander
difficulties for establishing models of low prediction error but providing,
through Hermeneutics, the potential for findings that are still instrumentally
useful to individuals. We posit that definitive causal claims regarding a given
phenomenon (writ large) can only come through an agglomeration of consistent
evidence across multiple domains. This raises important methodological
questions about harmonizing between language games and emergence across
scales. Given the role of epistemic hubris in the contemporary crisis of
credibility in the sciences, we argue for greater caution in communicating
the real degree of certainty that given evidence provides, and we highlight a
rich collection of open problems in optimizing the integration of different
findings.
|
2501.05845
|
Annealing Machine-assisted Learning of Graph Neural Network for
Combinatorial Optimization
|
cs.AI cs.LG
|
While Annealing Machines (AM) have shown increasing capabilities in solving
complex combinatorial problems, positioning themselves as a more immediate
alternative to the expected advances of future fully quantum solutions, there
are still scaling limitations. In parallel, Graph Neural Networks (GNN) have
been recently adapted to solve combinatorial problems, showing competitive
results and potentially high scalability due to their distributed nature. We
propose a merging approach that aims at retaining both the accuracy exhibited
by AMs and the representational flexibility and scalability of GNNs. Our model
considers a compression step, followed by a supervised interaction where
partial solutions obtained from the AM are used to guide local GNNs from where
node feature representations are obtained and combined to initialize an
additional GNN-based solver that handles the original graph's target problem.
Intuitively, the AM can solve the combinatorial problem indirectly by infusing
its knowledge into the GNN. Experiments on canonical optimization problems show
that the approach is feasible, effectively allowing the AM to solve problems of
sizes beyond its original limits.
|
2501.05848
|
Isogeometric Analysis for 2D Magnetostatic Computations with Multi-level
B\'{e}zier Extraction for Local Refinement
|
cs.CE
|
Local refinement is vital for efficient numerical simulations. In the context
of Isogeometric Analysis (IGA), hierarchical B-splines have gained prominence.
This work applies truncated hierarchical B-splines (THB-splines), as they
preserve additional desirable properties. The framework is further
enriched with B\'{e}zier extraction, resulting in the multi-level B\'{e}zier
extraction method. We apply this discretization method to 2D magnetostatic
problems. The implementation is based on an open-source Octave/MATLAB IGA code
called GeoPDEs, which allows us to compare our routines with globally refined
spline models as well as locally refined ones where the solver does not rely on
B\'{e}zier extraction.
|
2501.05851
|
Identity-aware Feature Decoupling Learning for Clothing-change Person
Re-identification
|
cs.CV
|
Clothing-change person re-identification (CC Re-ID) has attracted increasing
attention in recent years due to its application prospect. Most existing works
struggle to adequately extract the ID-related information from the original RGB
images. In this paper, we propose an Identity-aware Feature Decoupling (IFD)
learning framework to mine identity-related features. Particularly, IFD
exploits a dual stream architecture that consists of a main stream and an
attention stream. The attention stream takes the clothing-masked images as
inputs and derives the identity attention weights for effectively transferring
the spatial knowledge to the main stream and highlighting the regions with
abundant identity-related information. To eliminate the semantic gap between
the inputs of two streams, we propose a clothing bias diminishing module
specific to the main stream to regularize the features of clothing-relevant
regions. Extensive experimental results demonstrate that our framework
outperforms other baseline models on several widely-used CC Re-ID datasets.
|
2501.05852
|
MRI Patterns of the Hippocampus and Amygdala for Predicting Stages of
Alzheimer's Progression: A Minimal Feature Machine Learning Framework
|
cs.CV cs.LG
|
Alzheimer's disease (AD) progresses through distinct stages, from early mild
cognitive impairment (EMCI) to late mild cognitive impairment (LMCI) and
eventually to AD. Accurate identification of these stages, especially
distinguishing LMCI from EMCI, is crucial for developing pre-dementia
treatments but remains challenging due to subtle and overlapping imaging
features. This study proposes a minimal-feature machine learning framework that
leverages structural MRI data, focusing on the hippocampus and amygdala as
regions of interest. The framework addresses the curse of dimensionality
through feature selection, utilizes region-specific voxel information, and
implements innovative data organization to enhance classification performance
by reducing noise. The methodology integrates dimensionality reduction
techniques such as PCA and t-SNE with state-of-the-art classifiers, achieving
the highest accuracy of 88.46%. This framework demonstrates the potential for
efficient and accurate staging of AD progression while providing valuable
insights for clinical applications.
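A minimal sketch of the abstract's dimensionality-reduction-plus-classifier pipeline, using a numpy PCA and a nearest-centroid classifier on synthetic "voxel" features (the actual study uses hippocampus/amygdala MRI data, t-SNE, and state-of-the-art classifiers; everything below is illustrative):

```python
import numpy as np

def pca_fit_transform(X, n_components):
    """Reduce X (n_samples, n_features) via PCA using SVD."""
    mu = X.mean(axis=0)
    Xc = X - mu
    # Rows of Vt are the principal directions.
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    W = Vt[:n_components].T
    return Xc @ W, mu, W

def nearest_centroid_fit(Z, y):
    """One centroid per class in the reduced space."""
    return {c: Z[y == c].mean(axis=0) for c in np.unique(y)}

def nearest_centroid_predict(Z, centroids):
    classes = sorted(centroids)
    D = np.stack([np.linalg.norm(Z - centroids[c], axis=1) for c in classes], axis=1)
    return np.array(classes)[D.argmin(axis=1)]

rng = np.random.default_rng(0)
# Synthetic features for three stages: EMCI (0), LMCI (1), AD (2).
X = np.vstack([rng.normal(loc=c, scale=0.5, size=(50, 100)) for c in range(3)])
y = np.repeat([0, 1, 2], 50)

Z, mu, W = pca_fit_transform(X, n_components=5)
centroids = nearest_centroid_fit(Z, y)
acc = (nearest_centroid_predict(Z, centroids) == y).mean()
print(f"training accuracy: {acc:.2f}")
```

The point of the reduction step, as in the abstract, is that classification happens in a low-dimensional space where the curse of dimensionality is mitigated.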
|
2501.05855
|
ConSim: Measuring Concept-Based Explanations' Effectiveness with
Automated Simulatability
|
cs.CL
|
Concept-based explanations work by mapping complex model computations to
human-understandable concepts. Evaluating such explanations is very difficult,
as it includes not only the quality of the induced space of possible concepts
but also how effectively the chosen concepts are communicated to users.
Existing evaluation metrics often focus solely on the former, neglecting the
latter. We introduce an evaluation framework for measuring concept explanations
via automated simulatability: a simulator's ability to predict the explained
model's outputs based on the provided explanations. This approach accounts for
both the concept space and its interpretation in an end-to-end evaluation.
Human studies for simulatability are notoriously difficult to enact,
particularly at the scale of a wide, comprehensive empirical evaluation (which
is the subject of this work). We propose using large language models (LLMs) as
simulators to approximate the evaluation and report various analyses to make
such approximations reliable. Our method allows for scalable and consistent
evaluation across various models and datasets. We report a comprehensive
empirical evaluation using this framework and show that LLMs provide consistent
rankings of explanation methods. Code available at
https://github.com/AnonymousConSim/ConSim.
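The core metric can be sketched simply: simulatability is the fraction of instances on which the simulator, given only the explanations, matches the explained model's output (the toy labels below are illustrative, not taken from the paper):

```python
def simulatability(model_outputs, simulator_predictions):
    """Automated simulatability: fraction of instances where the simulator,
    seeing only the explanations, predicts the explained model's output."""
    assert len(model_outputs) == len(simulator_predictions)
    hits = sum(m == s for m, s in zip(model_outputs, simulator_predictions))
    return hits / len(model_outputs)

# Toy run: an LLM simulator agrees with the explained model on 3 of 4 inputs.
model_out = ["pos", "neg", "pos", "pos"]
sim_pred = ["pos", "neg", "neg", "pos"]
print(simulatability(model_out, sim_pred))  # 0.75
```

Because the score depends on what the simulator could infer from the explanations alone, it evaluates both the concept space and how well it communicates, which is the end-to-end property the framework targets.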
|
2501.05862
|
Language-Inspired Relation Transfer for Few-shot Class-Incremental
Learning
|
cs.CV
|
Depicting novel classes with language descriptions by observing few-shot
samples is inherent in human-learning systems. This lifelong learning
capability helps to distinguish new knowledge from old ones through the
increase of open-world learning, namely Few-Shot Class-Incremental Learning
(FSCIL). Existing works on this problem mainly rely on careful tuning of
visual encoders, which shows an evident trade-off between base and
incremental knowledge. Motivated by human learning systems, we propose a new
Language-inspired Relation Transfer (LRT) paradigm to understand objects by
joint visual clues and text depictions, composed of two major steps. We first
transfer the pretrained text knowledge to the visual domains by proposing a
graph relation transformation module and then fuse the visual and language
embedding by a text-vision prototypical fusion module. Second, to mitigate the
domain gap caused by visual finetuning, we propose context prompt learning for
fast domain alignment and imagined contrastive learning to alleviate the
insufficient text data during alignment. With collaborative learning of domain
alignments and text-image transfer, our proposed LRT outperforms the
state-of-the-art models by over $13\%$ and $7\%$ on the final session of
mini-ImageNet and CIFAR-100 FSCIL benchmarks.
|
2501.05867
|
Neural Network Verification is a Programming Language Challenge
|
cs.PL cs.LG cs.LO
|
Neural network verification is a new and rapidly developing field of
research. So far, the main priority has been establishing efficient
verification algorithms and tools, while proper support from the programming
language perspective has been considered secondary or unimportant. Yet, there
is mounting evidence that insights from the programming language community may
make a difference in the future development of this domain. In this paper, we
formulate neural network verification challenges as programming language
challenges and suggest possible future solutions.
|
2501.05870
|
A Neighbor-based Approach to Pitch Ownership Models in Soccer
|
cs.LG
|
Pitch ownership models allow many types of analysis in soccer and provide
valuable assistance to tactical analysts in understanding the game's dynamics.
The novelty they provide over event-based analysis is that tracking data
incorporates context that event-based data does not possess, like player
positioning. This paper proposes a novel approach to building pitch ownership
models in soccer games using the K-Nearest Neighbors (KNN) algorithm. Our
approach provides a fast inference mechanism that can model different
approaches to pitch control using the same algorithm. Despite its flexibility,
it uses only three hyperparameters to tune the model, facilitating the tuning
process for different player skill levels. The flexibility of the approach
allows for the emulation of different methods available in the literature by
adjusting a small number of parameters, including adjusting for different
levels of uncertainty. In summary, the proposed model provides a new and more
flexible strategy for building pitch ownership models, extending beyond just
replicating existing algorithms, and can provide valuable insights for tactical
analysts and open up new avenues for future research. We thoroughly visualize
several examples demonstrating the presented models' strengths and weaknesses.
The code is available at github.com/nvsclub/KNNPitchControl.
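A hedged sketch of a KNN-style pitch-ownership surface (the paper's exact weighting scheme and its three hyperparameters are not specified in the abstract; the exponential distance weighting and toy player positions below are assumptions):

```python
import numpy as np

def knn_pitch_ownership(players_xy, teams, grid_x, grid_y, k=3, temperature=1.0):
    """Soft pitch ownership for team 0 at each grid cell: a distance-weighted
    vote among the k nearest players (teams is a 0/1 label per player)."""
    gx, gy = np.meshgrid(grid_x, grid_y)
    cells = np.stack([gx.ravel(), gy.ravel()], axis=1)              # (n_cells, 2)
    d = np.linalg.norm(cells[:, None, :] - players_xy[None], axis=2)  # (n_cells, n_players)
    idx = np.argsort(d, axis=1)[:, :k]                              # k nearest players
    dk = np.take_along_axis(d, idx, axis=1)
    w = np.exp(-dk / temperature)                                   # closer players weigh more
    is_team0 = teams[idx] == 0
    own = (w * is_team0).sum(axis=1) / w.sum(axis=1)
    return own.reshape(gy.shape)

# Two players per team on a toy 10x10 pitch.
players = np.array([[2.0, 5.0], [3.0, 2.0], [8.0, 5.0], [7.0, 8.0]])
teams = np.array([0, 0, 1, 1])
own = knn_pitch_ownership(players, teams,
                          np.linspace(0, 10, 11), np.linspace(0, 10, 11), k=3)
print(own[5, 1], own[5, 9])  # high near team 0's players, low near team 1's
```

Varying `k` and `temperature` emulates different levels of uncertainty, which mirrors how the abstract describes tuning the model with a small number of hyperparameters.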
|
2501.05871
|
Collaborative Content Moderation in the Fediverse
|
cs.SI cs.LG cs.NI
|
The Fediverse, a group of interconnected servers providing a variety of
interoperable services (e.g. micro-blogging in Mastodon), has gained rapid
popularity. This sudden growth, partly driven by Elon Musk's acquisition of
Twitter, has, however, created challenges for administrators. This paper focuses
on one particular challenge: content moderation, e.g. the need to remove spam
or hate speech. While centralized platforms like Facebook and Twitter rely on
automated tools for moderation, their dependence on massive labeled datasets
and specialized infrastructure renders them impractical for decentralized,
low-resource settings like the Fediverse. In this work, we design and evaluate
FedMod, a collaborative content moderation system based on federated learning.
Our system enables servers to exchange parameters of partially trained local
content moderation models with similar servers, creating a federated model
shared among collaborating servers. FedMod demonstrates robust performance on
three different content moderation tasks: harmful content detection, bot
content detection, and content warning assignment, achieving average per-server
macro-F1 scores of 0.71, 0.73, and 0.58, respectively.
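The parameter exchange can be sketched as standard federated averaging (FedMod's actual aggregation and server-similarity logic may differ; this is an illustrative FedAvg-style sketch with toy parameters):

```python
import numpy as np

def federated_average(server_params, weights=None):
    """FedAvg-style aggregation: weighted mean of each parameter tensor
    across collaborating servers."""
    n = len(server_params)
    weights = np.ones(n) / n if weights is None else np.asarray(weights, float) / np.sum(weights)
    keys = server_params[0].keys()
    return {k: sum(w * p[k] for w, p in zip(weights, server_params)) for k in keys}

# Three servers with toy local moderation-model parameters.
locals_ = [
    {"w": np.array([1.0, 2.0]), "b": np.array([0.0])},
    {"w": np.array([3.0, 4.0]), "b": np.array([1.0])},
    {"w": np.array([5.0, 6.0]), "b": np.array([2.0])},
]
global_model = federated_average(locals_)
print(global_model["w"], global_model["b"])  # [3. 4.] [1.]
```

Only model parameters cross server boundaries, never the underlying posts, which is what makes such a scheme attractive for low-resource, privacy-conscious Fediverse instances.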
|
2501.05874
|
VideoRAG: Retrieval-Augmented Generation over Video Corpus
|
cs.CV cs.AI cs.CL cs.IR cs.LG
|
Retrieval-Augmented Generation (RAG) is a powerful strategy to address the
issue of generating factually incorrect outputs in foundation models by
retrieving external knowledge relevant to queries and incorporating it into
their generation process. However, existing RAG approaches have primarily
focused on textual information, with some recent advancements beginning to
consider images, and they largely overlook videos, a rich source of multimodal
knowledge capable of representing events, processes, and contextual details
more effectively than any other modality. While a few recent studies explore
the integration of videos in the response generation process, they either
predefine query-associated videos without retrieving them according to queries,
or convert videos into the textual descriptions without harnessing their
multimodal richness. To tackle these, we introduce VideoRAG, a novel framework
that not only dynamically retrieves relevant videos based on their relevance
with queries but also utilizes both visual and textual information of videos in
the output generation. Further, to operationalize this, our method revolves
around the recent advance of Large Video Language Models (LVLMs), which enable
the direct processing of video content to represent it for retrieval and
seamless integration of the retrieved videos jointly with queries. We
experimentally validate the effectiveness of VideoRAG, showcasing that it is
superior to relevant baselines.
|
2501.05880
|
TakuNet: an Energy-Efficient CNN for Real-Time Inference on Embedded UAV
systems in Emergency Response Scenarios
|
cs.CV cs.PF
|
Designing efficient neural networks for embedded devices is a critical
challenge, particularly in applications requiring real-time performance, such
as aerial imaging with drones and UAVs for emergency responses. In this work,
we introduce TakuNet, a novel light-weight architecture which employs
techniques such as depth-wise convolutions and an early downsampling stem to
reduce computational complexity while maintaining high accuracy. It leverages
dense connections for fast convergence during training and uses 16-bit
floating-point precision for optimization on embedded hardware accelerators.
Experimental evaluation on two public datasets shows that TakuNet achieves
near-state-of-the-art accuracy in classifying aerial images of emergency
situations, despite its minimal parameter count. Real-world tests on embedded
devices, namely Jetson Orin Nano and Raspberry Pi, confirm TakuNet's
efficiency, achieving more than 650 fps on the 15W Jetson board, making it
suitable for real-time AI processing on resource-constrained platforms and
advancing the applicability of drones in emergency scenarios. The code and
implementation details are publicly released.
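A quick calculation illustrates why depthwise(-separable) convolutions, as used in TakuNet, cut parameter counts (the channel sizes below are arbitrary examples, not TakuNet's actual configuration):

```python
def conv_params(c_in, c_out, k):
    """Parameter count of a standard k x k convolution (no bias)."""
    return c_in * c_out * k * k

def depthwise_separable_params(c_in, c_out, k):
    """Depthwise k x k conv (one filter per input channel)
    followed by a 1x1 pointwise conv."""
    return c_in * k * k + c_in * c_out

c_in, c_out, k = 64, 128, 3
std = conv_params(c_in, c_out, k)                 # 73728
dws = depthwise_separable_params(c_in, c_out, k)  # 8768
print(std, dws, round(std / dws, 1))              # roughly 8x fewer parameters
```

The saving grows with the kernel size and output channel count, which is why the technique is a staple of embedded-friendly architectures.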
|
2501.05882
|
Solving nonograms using Neural Networks
|
cs.AI cs.NE
|
Nonograms are logic puzzles in which cells in a grid must be colored or left
blank according to the numbers located in their row and column headers. In this
study, we analyze different techniques to solve this type of logical problem
using a Heuristic Algorithm, a Genetic Algorithm, and a Heuristic Algorithm
with a Neural Network. Furthermore, we generate a public dataset to train the
neural networks, and we release both this dataset and the code of the
algorithms. The combination of the heuristic algorithm with a neural network
obtained the best results. To the best of our knowledge, no previous work has
used neural networks to solve nonograms, nor combined a network with other
algorithms to accelerate the solving process.
|
2501.05884
|
Text-to-Edit: Controllable End-to-End Video Ad Creation via Multimodal
LLMs
|
cs.CV
|
The exponential growth of short-video content has ignited a surge in the
necessity for efficient, automated solutions to video editing, with challenges
arising from the need to understand videos and tailor the editing according to
user requirements. Addressing this need, we propose an innovative end-to-end
foundational framework, ultimately actualizing precise control over the final
video content editing. Leveraging the flexibility and generalizability of
Multimodal Large Language Models (MLLMs), we defined clear input-output
mappings for efficient video creation. To bolster the model's capability in
processing and comprehending video content, we introduce a strategic
combination of a denser frame rate and a slow-fast processing technique,
significantly enhancing the extraction and understanding of both temporal and
spatial video information. Furthermore, we introduce a text-to-edit mechanism
that allows users to achieve desired video outcomes through textual input,
thereby enhancing the quality and controllability of the edited videos. Through
comprehensive experimentation, our method has not only showcased significant
effectiveness within advertising datasets, but also yields universally
applicable conclusions on public datasets.
|
2501.05885
|
EDNet: Edge-Optimized Small Target Detection in UAV Imagery -- Faster
Context Attention, Better Feature Fusion, and Hardware Acceleration
|
cs.CV cs.AI cs.LG
|
Detecting small targets in drone imagery is challenging due to low
resolution, complex backgrounds, and dynamic scenes. We propose EDNet, a novel
edge-target detection framework built on an enhanced YOLOv10 architecture,
optimized for real-time applications without post-processing. EDNet
incorporates an XSmall detection head and a Cross Concat strategy to improve
feature fusion and multi-scale context awareness for detecting tiny targets in
diverse environments. Our unique C2f-FCA block employs Faster Context Attention
to enhance feature extraction while reducing computational complexity. The WIoU
loss function is employed for improved bounding box regression. With seven
model sizes ranging from Tiny to XL, EDNet accommodates various deployment
environments, enabling local real-time inference and ensuring data privacy.
Notably, EDNet achieves up to a 5.6% gain in mAP@50 with significantly fewer
parameters. On an iPhone 12, EDNet variants operate at speeds ranging from 16
to 55 FPS, providing a scalable and efficient solution for edge-based object
detection in challenging drone imagery. The source code and pre-trained models
are available at: https://github.com/zsniko/EDNet.
|
2501.05891
|
Affordably Fine-tuned LLMs Provide Better Answers to Course-specific
MCQs
|
cs.CL cs.AI
|
In education, the capability of generating human-like text of Large Language
Models (LLMs) inspired work on how they can increase the efficiency of learning
and teaching. We study the affordability of these models for educators and
students by investigating how LLMs answer multiple-choice questions (MCQs) with
respect to hardware constraints and refinement techniques. We explore this
space by using generic pre-trained LLMs (the 7B, 13B, and 70B variants of
LLaMA-2) to answer 162 undergraduate-level MCQs from a course on Programming
Languages (PL) -- the MCQ dataset is a contribution of this work, which we make
publicly available. Specifically, we dissect how different factors, such as
using readily-available material -- (parts of) the course's textbook -- for
fine-tuning and quantisation (to decrease resource usage) can change the
accuracy of the responses. The main takeaway is that smaller textbook-based
fine-tuned models outperform generic larger ones (whose pre-training requires
conspicuous resources), making the use of LLMs for answering MCQs affordable
in terms of both resources and materials.
|
2501.05892
|
Beyond Flat Text: Dual Self-inherited Guidance for Visual Text
Generation
|
cs.CV
|
In real-world images, slanted or curved texts, especially those on cans,
banners, or badges, appear as frequently as, if not more frequently than, flat
texts, due to artistic design or layout constraints. While high-quality visual
text generation has become available with the advanced generative capabilities
of diffusion models, these models often produce distorted text and
inharmonious text backgrounds when given slanted or curved text layouts, owing
to training data limitations. In this paper, we introduce a new training-free
framework, STGen,
which accurately generates visual texts in challenging scenarios (\eg, slanted
or curved text layouts) while harmonizing them with the text background. Our
framework decomposes the visual text generation process into two branches: (i)
\textbf{Semantic Rectification Branch}, which leverages the model's ability to
generate flat but accurate visual text to guide generation in challenging
scenarios. The latent of the generated flat text is abundant in
accurate semantic information related both to the text itself and its
background. By incorporating this, we rectify the semantic information of the
texts and harmonize the integration of the text with its background in complex
layouts. (ii) \textbf{Structure Injection Branch}, which reinforces the visual
text structure during inference. We incorporate the latent information of the
glyph image, rich in glyph structure, as a new condition to further strengthen
the text structure. To enhance image harmony, we also apply an effective
combination method to merge the priors, providing a solid foundation for
generation. Extensive experiments across a variety of visual text layouts
demonstrate that our framework achieves superior accuracy and outstanding
quality.
|
2501.05894
|
Text2Playlist: Generating Personalized Playlists from Text on Deezer
|
cs.IR cs.LG
|
The streaming service Deezer relies heavily on its search functionality to
help users navigate its extensive music catalog. Nonetheless, search is
primarily designed to find specific items and does not lead directly to a smooth
listening experience. We present Text2Playlist, a stand-alone tool that
addresses these limitations. Text2Playlist leverages generative AI, music
information retrieval and recommendation systems to generate query-specific and
personalized playlists, successfully deployed at scale.
|
2501.05901
|
Valley2: Exploring Multimodal Models with Scalable Vision-Language
Design
|
cs.CV
|
Recently, vision-language models have made remarkable progress, demonstrating
outstanding capabilities in various tasks such as image captioning and video
understanding. We introduce Valley2, a novel multimodal large language model
designed to enhance performance across all domains and extend the boundaries of
practical applications in e-commerce and short video scenarios. Notably,
Valley2 achieves state-of-the-art (SOTA) performance on e-commerce benchmarks,
surpassing open-source models of similar size by a large margin (79.66 vs.
72.76). Additionally, Valley2 ranks second on the OpenCompass leaderboard among
models with fewer than 10B parameters, with an impressive average score of
67.4. The code and model weights are open-sourced at
https://github.com/bytedance/Valley.
|
2501.05903
|
Discovery of sustainable energy materials via the machine-learned
material space
|
cond-mat.mtrl-sci cs.LG physics.comp-ph
|
Does a machine learning model actually gain an understanding of the material
space? We answer this question in the affirmative using the example of the
OptiMate model, a graph attention network trained to predict the optical
properties of semiconductors and insulators. By applying the UMAP
dimensionality reduction technique to its latent embeddings, we demonstrate
that the model captures a nuanced and interpretable representation of the
materials space, reflecting chemical and physical principles, without any
user-induced bias. This enables clustering of almost 10,000 materials based on
optical properties and chemical similarities. Beyond this understanding, we
demonstrate how the learned material space can be used to identify more
sustainable alternatives to critical materials in energy-related technologies,
such as photovoltaics. These findings demonstrate the dual utility of machine
learning models in materials science: Accurately predicting material properties
while providing insights into the underlying materials space. The approach
demonstrates the broader potential of leveraging learned materials spaces for
the discovery and design of materials for diverse applications, and is easily
applicable to any state-of-the-art machine learning model.
|
2501.05904
|
Binary Event-Driven Spiking Transformer
|
cs.CV
|
Transformer-based Spiking Neural Networks (SNNs) introduce a novel
event-driven self-attention paradigm that combines the high performance of
Transformers with the energy efficiency of SNNs. However, the larger model size
and increased computational demands of the Transformer structure limit their
practicality in resource-constrained scenarios. In this paper, we integrate
binarization techniques into Transformer-based SNNs and propose the Binary
Event-Driven Spiking Transformer, i.e. BESTformer. The proposed BESTformer can
significantly reduce storage and computational demands by representing weights
and attention maps with a mere 1 bit. However, BESTformer suffers from a severe
performance drop relative to its full-precision counterpart due to the limited
representation capability of binarization. To address this issue, we propose a
Coupled Information Enhancement (CIE) method, which consists of a reversible
framework and information enhancement distillation. By maximizing the mutual
information between the binary model and its full-precision counterpart, the
CIE method effectively mitigates the performance degradation of the BESTformer.
Extensive experiments on static and neuromorphic datasets demonstrate that our
method achieves superior performance to other binary SNNs, showcasing its
potential as a compact yet high-performance model for resource-limited edge
devices.
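The 1-bit representation can be sketched with the standard sign-plus-scale binarization (BESTformer's exact scheme is not detailed in the abstract; the per-tensor scale alpha = mean(|w|) is the classic choice from the binary-network literature):

```python
import numpy as np

def binarize(w):
    """1-bit weight quantization: sign(w) with a per-tensor scaling factor
    alpha = mean(|w|), the minimizer of ||w - alpha * sign(w)||^2.
    Note: np.sign(0) is 0; real implementations map zeros to +1 or -1."""
    alpha = np.abs(w).mean()
    return alpha * np.sign(w), alpha

w = np.array([[0.4, -0.2], [-0.9, 0.1]])
wb, alpha = binarize(w)
print(alpha)  # mean of |w| = 0.4
print(wb)     # each entry is +alpha or -alpha, i.e. 1 bit plus one shared scale
```

Storing only the sign bits plus a single scale per tensor is what yields the storage and compute savings the abstract describes, at the cost of representation capacity that the proposed CIE method then compensates for.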
|
2501.05906
|
Q-MAML: Quantum Model-Agnostic Meta-Learning for Variational Quantum
Algorithms
|
quant-ph cs.LG
|
In the Noisy Intermediate-Scale Quantum (NISQ) era, using variational quantum
algorithms (VQAs) to solve optimization problems has become a key application.
However, these algorithms face significant challenges, such as choosing an
effective initial set of parameters and the limited quantum processing time
that restricts the number of optimization iterations. In this study, we
introduce a new framework for optimizing parameterized quantum circuits (PQCs)
that employs a classical optimizer, inspired by the Model-Agnostic
Meta-Learning (MAML) technique. This approach aims to achieve a better
parameter initialization that ensures fast convergence. Our framework features
a classical neural network, called the Learner, which interacts with a PQC,
using the output of the Learner as the initial parameters. During the
pre-training phase, the Learner is
trained with a meta-objective based on the quantum circuit cost function. In
the adaptation phase, the framework requires only a few PQC updates to converge
to a more accurate value, while the learner remains unchanged. This method is
highly adaptable and is effectively extended to various Hamiltonian
optimization problems. We validate our approach through experiments, including
distribution function mapping and optimization of the Heisenberg XYZ
Hamiltonian. The result implies that the Learner successfully estimates initial
parameters that generalize across the problem space, enabling fast adaptation.
|
2501.05921
|
The New Anticipatory Governance Culture for Innovation: Regulatory
Foresight, Regulatory Experimentation and Regulatory Learning
|
cs.CY cs.AI
|
With the rapid pace of technological innovation, traditional methods of
policy formation and legislating are becoming conspicuously anachronistic. The
need for regulatory choices to be made to counter the deadening effect of
regulatory lag is more important to developing markets and fostering growth
than achieving one off regulatory perfection. This article advances scholarship
on innovation policy and the regulation of technological innovation in the
European Union. It does so by considering what building an agile yet robust
anticipatory governance regulatory culture involves. It systematically
excavates a variety of tools and elements that are being put into use in
inventive ways and argues that these need to be more cohesively and
systemically integrated into the regulatory toolbox. Approaches covered include
strategic foresight, the critical embrace of iterative policy development and
regulatory learning in the face of uncertainty and the embrace of bottom up
approaches to cocreation of policy such as Policy Labs and the testing and
regulatory learning through pilot regulation and experimentation. The growing
use of regulatory sandboxes as an EU policy tool to boost innovation and
navigate regulatory complexity, as seen in the EU AI Act, is also probed.
|
2501.05925
|
Navigating Tomorrow: Reliably Assessing Large Language Models
Performance on Future Event Prediction
|
cs.CL cs.IR
|
Predicting future events is an important activity with applications across
multiple fields and domains. For example, the capacity to foresee stock market
trends, natural disasters, business developments, or political events can
facilitate early preventive measures and uncover new opportunities. Multiple
diverse computational methods for attempting future predictions, including
predictive analysis, time series forecasting, and simulations have been
proposed. This study evaluates the performance of several large language models
(LLMs) in supporting future prediction tasks, an under-explored domain. We
assess the models across three scenarios: Affirmative vs. Likelihood
questioning, Reasoning, and Counterfactual analysis. For this, we create a
dataset by finding and categorizing news articles based on entity type and
popularity. We gather news articles from before and after the LLMs' training
cutoff date in order to thoroughly test and compare model performance. Our research
highlights LLMs potential and limitations in predictive modeling, providing a
foundation for future improvements.
|
2501.05926
|
LLMs Reproduce Stereotypes of Sexual and Gender Minorities
|
cs.CL
|
A large body of research has found substantial gender bias in NLP systems.
Most of this research takes a binary, essentialist view of gender: limiting its
variation to the categories _men_ and _women_, conflating gender with sex, and
ignoring different sexual identities. But gender and sexuality exist on a
spectrum, so in this paper we study the biases of large language models (LLMs)
towards sexual and gender minorities beyond binary categories. Grounding our
study in a widely used psychological framework -- the Stereotype Content Model
-- we demonstrate that English-language survey questions about social
perceptions elicit more negative stereotypes of sexual and gender minorities
from LLMs, just as they do from humans. We then extend this framework to a more
realistic use case: text generation. Our analysis shows that LLMs generate
stereotyped representations of sexual and gender minorities in this setting,
raising concerns about their capacity to amplify representational harms in
creative writing, a widely promoted use case.
|
2501.05927
|
Expressing One's Identity Online: Left-Right and cross EU-country
variation in self-representation in social media
|
cs.SI
|
We examine how social media users from eight European Union (EU) member
states express their socio-political identities, focusing on users' online
self-presentation and group identity cues conveyed through bios. Our goal is to
explore commonalities and differences in topics discussed in social media
profiles, across Left-and Right-wing user groups, within and across EU
countries. Through a novel approach we map how identity-related discourse
varies by country and political orientation, revealing how group identity is
expressed within the EU. We find that topics related to democracy, national way
of life, and decentralization emerge as particularly divisive, showing
considerable variation both within and between EU countries. A subset of
topics, which includes education, environmentalism, sustainability, equality,
freedom & human rights, and traditional morality, among others, clearly
differentiate Left-from Right-leaning user groups. These partisan topics are
relevant as they could be leveraged for mobilizing ideological groups and
highlight Left-Right identitarian differences at the EU level. Finally, we show
that our Left-Right identity similarity metrics reflect aspects of real-world
political fragmentation, which are closely aligned to the perceptions of
political conflict intensity by country, as measured by the 2022 PEW survey.
|
2501.05928
|
Towards Backdoor Stealthiness in Model Parameter Space
|
cs.CR cs.AI
|
Recent research on backdoor stealthiness focuses mainly on indistinguishable
triggers in input space and inseparable backdoor representations in feature
space, aiming to circumvent backdoor defenses that examine these respective
spaces. However, existing backdoor attacks are typically designed to resist a
specific type of backdoor defense without considering the diverse range of
defense mechanisms. Based on this observation, we pose a natural question: Are
current backdoor attacks truly a real-world threat when facing diverse
practical defenses?
To answer this question, we examine 12 common backdoor attacks that focus on
input-space or feature-space stealthiness and 17 diverse representative
defenses. Surprisingly, we reveal a critical blind spot: Backdoor attacks
designed to be stealthy in input and feature spaces can be mitigated by
examining backdoored models in parameter space. To investigate the underlying
causes behind this common vulnerability, we study the characteristics of
backdoor attacks in the parameter space. Notably, we find that input- and
feature-space attacks introduce prominent backdoor-related neurons in parameter
space, which are not thoroughly considered by current backdoor attacks. Taking
comprehensive stealthiness into account, we propose a novel supply-chain attack
called Grond. Grond limits the parameter changes by a simple yet effective
module, Adversarial Backdoor Injection (ABI), which adaptively increases the
parameter-space stealthiness during the backdoor injection. Extensive
experiments demonstrate that Grond outperforms all 12 backdoor attacks against
state-of-the-art (including adaptive) defenses on CIFAR-10, GTSRB, and a subset
of ImageNet. In addition, we show that ABI consistently improves the
effectiveness of common backdoor attacks.
|
2501.05931
|
Environment Modeling for Service Robots From a Task Execution
Perspective
|
cs.RO
|
Service robots are increasingly entering the home to provide domestic tasks
for residents. However, when working in an open, dynamic, and unstructured home
environment, service robots still face challenges such as low intelligence for
task execution and poor long-term autonomy (LTA), which has limited their
deployment. As the basis of robotic task execution, environment modeling has
attracted significant attention; it integrates core technologies such as
environment perception, understanding, and representation to accurately
recognize environmental information. This paper presents a comprehensive survey
of environment modeling from a new task-execution-oriented perspective. In
particular, guided by the requirements of robots performing domestic service
tasks in the home environment, we systematically review the progress made in
task-execution-oriented environment modeling in four respects:
1) localization, 2) navigation, 3) manipulation, and 4) LTA. Current challenges
are discussed, and potential research opportunities are also highlighted.
|
2501.05932
|
DiffuSETS: 12-lead ECG Generation Conditioned on Clinical Text Reports
and Patient-Specific Information
|
cs.LG cs.AI
|
Heart disease remains a significant threat to human health. As a non-invasive
diagnostic tool, the electrocardiogram (ECG) is one of the most widely used
methods for cardiac screening. However, the scarcity of high-quality ECG data,
driven by privacy concerns and limited medical resources, creates a pressing
need for effective ECG signal generation. Existing approaches for generating
ECG signals typically rely on small training datasets, lack comprehensive
evaluation frameworks, and overlook potential applications beyond data
augmentation. To address these challenges, we propose DiffuSETS, a novel
framework capable of generating ECG signals with high semantic alignment and
fidelity. DiffuSETS accepts various modalities of clinical text reports and
patient-specific information as inputs, enabling the creation of clinically
meaningful ECG signals. Additionally, to address the lack of standardized
evaluation in ECG generation, we introduce a comprehensive benchmarking
methodology to assess the effectiveness of generative models in this domain.
Our model achieves excellent results in tests, demonstrating its superiority in the
task of ECG generation. Furthermore, we showcase its potential to mitigate data
scarcity while exploring novel applications in cardiology education and medical
knowledge discovery, highlighting the broader impact of our work.
|
2501.05933
|
Weakly Supervised Segmentation of Hyper-Reflective Foci with Compact
Convolutional Transformers and SAM2
|
cs.CV
|
Weakly supervised segmentation has the potential to greatly reduce the
annotation effort for training segmentation models for small structures such as
hyper-reflective foci (HRF) in optical coherence tomography (OCT). However,
most weakly supervised methods either involve a strong downsampling of input
images, or only achieve localization at a coarse resolution, both of which are
unsatisfactory for small structures. We propose a novel framework that
increases the spatial resolution of a traditional attention-based Multiple
Instance Learning (MIL) approach by using Layer-wise Relevance Propagation
(LRP) to prompt the Segment Anything Model (SAM 2), and increases recall with
iterative inference. Moreover, we demonstrate that replacing MIL with a Compact
Convolutional Transformer (CCT), which adds a positional encoding, and permits
an exchange of information between different regions of the OCT image, leads to
a further and substantial increase in segmentation accuracy.
|
2501.05934
|
Encoded Spatial Attribute in Multi-Tier Federated Learning
|
cs.LG cs.DC
|
This research presents an Encoded Spatial Multi-Tier Federated Learning
approach for a comprehensive evaluation of aggregated models for geospatial
data. In the client tier, encoding spatial information is introduced to better
predict the target outcome. The research aims to assess the performance of
these models across diverse datasets and spatial attributes, highlighting
variations in predictive accuracy. Using evaluation metrics such as accuracy,
our research reveals insights into the complexities of spatial granularity and
the challenges of capturing underlying patterns in the data. We extended the
scope of federated learning (FL) with a multi-tier architecture alongside the
functionality of encoding spatial attributes. Our N-tier FL approach
aggregated encoded spatial data at different tiers, yielding multiple models
that predict spatial data at different granularities. Our findings
underscore the need for further research to improve predictive accuracy and
model generalization, with potential avenues including incorporating additional
features, refining model architectures, and exploring alternative modeling
approaches. Our experiments span several tiers representing different levels
of spatial aspects. We obtained accuracies of 75.62% and 89.52% for the global
model without having to train it on the data constituted within the designated
tier. The research also highlights the importance of the proposed approach in
real-time applications.
|
2501.05936
|
A Multimodal Dataset for Enhancing Industrial Task Monitoring and
Engagement Prediction
|
cs.CV
|
Detecting and interpreting operator actions, engagement, and object
interactions in dynamic industrial workflows remains a significant challenge in
human-robot collaboration research, especially within complex, real-world
environments. Traditional unimodal methods often fall short of capturing the
intricacies of these unstructured industrial settings. To address this gap, we
present a novel Multimodal Industrial Activity Monitoring (MIAM) dataset that
captures realistic assembly and disassembly tasks, facilitating the evaluation
of key meta-tasks such as action localization, object interaction, and
engagement prediction. The dataset comprises multi-view RGB, depth, and
Inertial Measurement Unit (IMU) data collected from 22 sessions, amounting to
290 minutes of untrimmed video, annotated in detail for task performance and
operator behavior. Its distinctiveness lies in the integration of multiple data
modalities and its emphasis on real-world, untrimmed industrial workflows,
which are key for advancing research in human-robot collaboration and operator
monitoring.
Additionally, we propose a multimodal network that fuses RGB frames, IMU data,
and skeleton sequences to predict engagement levels during industrial tasks.
Our approach improves the accuracy of recognizing engagement states, providing
a robust solution for monitoring operator performance in dynamic industrial
environments. The dataset and code can be accessed from
https://github.com/navalkishoremehta95/MIAM/.
|
2501.05942
|
Soft regression trees: a model variant and a decomposition training
algorithm
|
cs.LG math.OC
|
Decision trees are widely used for classification and regression tasks in a
variety of application fields due to their interpretability and good accuracy.
During the past decade, growing attention has been devoted to globally
optimized decision trees with deterministic or soft splitting rules at branch
nodes, which are trained by optimizing the error function over all the tree
parameters. In this work, we propose a new variant of soft multivariate
regression trees (SRTs) where, for every input vector, the prediction is
defined as the linear regression associated with a single leaf node, namely, the
leaf node obtained by routing the input vector from the root along the branches
with higher probability. SRTs exhibit the conditional computational property,
i.e., each prediction depends on a small number of nodes (parameters), and our
nonlinear optimization formulation for training them is amenable to
decomposition. After showing a universal approximation result for SRTs, we
present a decomposition training algorithm including a clustering-based
initialization procedure and a heuristic for reassigning the input vectors
along the tree. Under mild assumptions, we establish asymptotic convergence
guarantees. Experiments on 15 well-known datasets indicate that our SRTs and
decomposition algorithm yield higher accuracy and robustness compared with
traditional soft regression trees trained using the nonlinear optimization
formulation of Blanquero et al., and a significant reduction in training times
as well as a slightly better average accuracy compared with the mixed-integer
optimization approach of Bertsimas and Dunn. We also report a comparison with
the Random Forest ensemble method.
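The prediction rule described above, routing each input along the higher-probability branches and applying the linear regression of the single leaf reached, can be sketched as follows (the dictionary-based tree layout, sigmoid routing probabilities, and all toy values are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def srt_predict(x, tree):
    """Hard-routing prediction in a soft regression tree (illustrative sketch).

    Each branch node holds a hyperplane (w, b); the input is routed to the
    child with the higher routing probability sigmoid(w.x + b). The prediction
    is the linear regression attached to the single leaf node reached.
    """
    node = tree
    while "leaf" not in node:
        p_right = 1.0 / (1.0 + np.exp(-(node["w"] @ x + node["b"])))
        node = node["right"] if p_right >= 0.5 else node["left"]
    beta, beta0 = node["leaf"]  # leaf-level linear regression coefficients
    return beta @ x + beta0

# Toy depth-1 tree: one branch node, two leaves (made-up parameters)
tree = {
    "w": np.array([1.0, -1.0]), "b": 0.0,
    "left":  {"leaf": (np.array([2.0, 0.0]), 1.0)},
    "right": {"leaf": (np.array([0.0, 3.0]), -1.0)},
}
print(srt_predict(np.array([2.0, 1.0]), tree))  # routed right: 0*2 + 3*1 - 1 = 2.0
```

Only the nodes on one root-to-leaf path contribute to each prediction, which is the conditional-computation property the abstract mentions.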
|
2501.05943
|
Koopman-Based Model Predictive Control of Functional Electrical
Stimulation for Ankle Dorsiflexion and Plantarflexion Assistance
|
eess.SY cs.SY
|
Functional Electrical Stimulation (FES) can be an effective tool to augment
paretic muscle function and restore normal ankle function. Our approach
incorporates a real-time, data-driven Model Predictive Control (MPC) scheme,
built upon a Koopman operator theory (KOT) framework. This framework adeptly
captures the complex nonlinear dynamics of ankle motion in a linearized form,
enabling application of linear control approaches for highly nonlinear
FES-actuated dynamics. Utilizing inertial measurement units (IMUs), our method
accurately predicts the FES-induced ankle movements, while accounting for
nonlinear muscle actuation dynamics, including the muscle activation for both
the plantarflexors and the dorsiflexors (Tibialis Anterior, TA). The linear
prediction model derived through KOT allowed us to formulate the MPC problem
with linear state space dynamics, enhancing the real-time feasibility,
precision and adaptability of the FES driven control. The effectiveness and
applicability of our approach have been demonstrated through comprehensive
simulations and experimental trials, including three participants with no
disability and a participant with Multiple Sclerosis. Our findings highlight
the potential of a KOT-based MPC approach for FES based gait assistance that
offers effective and personalized assistance for individuals with gait
impairment conditions.
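As a hedged illustration of the KOT idea (not the authors' model), extended dynamic mode decomposition (EDMD) lifts the nonlinear state through a dictionary of observables and fits a linear operator by least squares; the dictionary, the toy pendulum-like step map, and all constants below are assumptions for illustration:

```python
import numpy as np

def lift(x):
    """Hand-picked observables lifting a 2-D state (angle, velocity) into a
    space where the dynamics are approximately linear."""
    return np.array([x[0], x[1], np.sin(x[0]), np.cos(x[0]), x[0] * x[1]])

def fit_koopman(X, Y):
    """EDMD: least-squares fit of a linear operator K with lift(y) ~= K lift(x)."""
    PX = np.stack([lift(x) for x in X], axis=1)  # lifted inputs, shape (d, N)
    PY = np.stack([lift(y) for y in Y], axis=1)  # lifted successors
    return PY @ np.linalg.pinv(PX)

# Toy data from a damped-pendulum-like step map (illustrative only)
rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(200, 2))
step = lambda x: np.array([x[0] + 0.1 * x[1], x[1] - 0.1 * np.sin(x[0])])
Y = np.array([step(x) for x in X])

K = fit_koopman(X, Y)
x0 = np.array([0.5, 0.0])
pred = (K @ lift(x0))[:2]  # first two observables recover the state itself
print(pred, step(x0))
```

Once `K` is linear, a standard linear MPC formulation can be posed over the lifted state, which is the real-time advantage the abstract describes.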
|
2501.05945
|
Reusable specimen-level inference in computational pathology
|
eess.IV cs.CV q-bio.TO
|
Foundation models for computational pathology have shown great promise for
specimen-level tasks and are increasingly accessible to researchers. However,
specimen-level models built on these foundation models remain largely
unavailable, hindering their broader utility and impact. To address this gap,
we developed SpinPath, a toolkit designed to democratize specimen-level deep
learning by providing a zoo of pretrained specimen-level models, a Python-based
inference engine, and a JavaScript-based inference platform. We demonstrate the
utility of SpinPath in metastasis detection tasks across nine foundation
models. SpinPath may foster reproducibility, simplify experimentation, and
accelerate the adoption of specimen-level deep learning in computational
pathology research.
|
2501.05946
|
Coverage and Spectral Efficiency of NOMA-Enabled LEO Satellite Networks
with Ordering Schemes
|
eess.SP cs.IT cs.SY eess.SY math.IT
|
This paper investigates an analytical model for low-earth orbit (LEO)
multi-satellite downlink non-orthogonal multiple access (NOMA) networks. The
satellites transmit data to multiple NOMA user terminals (UTs), each employing
successive interference cancellation (SIC) for decoding. Two ordering schemes
are adopted for NOMA-enabled LEO satellite networks, i.e., mean signal power
(MSP)-based ordering and
instantaneous-signal-to-inter-satellite-interference-plus-noise ratio
(ISINR)-based ordering. For each ordering scheme, we derive the coverage
probabilities of UTs under different channel conditions. Moreover, we discuss
how coverage is influenced by SIC, main-lobe gain, and tradeoffs between the
number of satellites and their altitudes. Additionally, two user fairness-based
power allocation (PA) schemes are considered, and PA coefficients with the
optimal number of UTs that maximize their sum spectral efficiency (SE) are
studied. Simulation results show that there exists a maximum
signal-to-inter-satellite-interference-plus-noise ratio (SINR) threshold for
each PA scheme that ensures the operation of NOMA in LEO satellite networks,
and the benefit of NOMA only exists when the target SINR is below a certain
threshold. Compared with orthogonal multiple access (OMA), NOMA increases UTs'
sum SE by as much as 35%. Furthermore, for most SINR thresholds, the sum SE
increases with the number of UTs to the highest value, whilst the maximum sum
SE is obtained when there are two UTs.
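The NOMA-vs-OMA sum-SE comparison can be illustrated with the textbook two-user downlink rates under perfect SIC (the power split, channel gains, and noise level below are made-up values, not the paper's system model):

```python
import math

def noma_sum_se(p1, p2, g1, g2, noise=1.0):
    """Sum spectral efficiency of a 2-user downlink NOMA pair (sketch).

    User 1 (weaker channel) decodes its own signal treating user 2's as
    interference; user 2 (stronger channel) removes user 1's signal via SIC
    before decoding its own. Power coefficients satisfy p1 + p2 = 1.
    """
    sinr1 = p1 * g1 / (p2 * g1 + noise)  # user 1: interference-limited
    sinr2 = p2 * g2 / noise              # user 2: after perfect SIC
    return math.log2(1 + sinr1) + math.log2(1 + sinr2)

def oma_sum_se(g1, g2, noise=1.0):
    """Orthogonal baseline: each user gets half the time/bandwidth."""
    return 0.5 * math.log2(1 + g1 / noise) + 0.5 * math.log2(1 + g2 / noise)

print(noma_sum_se(0.8, 0.2, g1=1.0, g2=10.0), oma_sum_se(1.0, 10.0))
```

With these made-up numbers the NOMA pair edges out the OMA baseline, consistent with the abstract's observation that the NOMA benefit holds only under suitable SINR conditions.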
|
2501.05948
|
Universal-2-TF: Robust All-Neural Text Formatting for ASR
|
cs.CL
|
This paper introduces an all-neural text formatting (TF) model designed for
commercial automatic speech recognition (ASR) systems, encompassing punctuation
restoration (PR), truecasing, and inverse text normalization (ITN). Unlike
traditional rule-based or hybrid approaches, this method leverages a two-stage
neural architecture comprising a multi-objective token classifier and a
sequence-to-sequence (seq2seq) model. This design minimizes computational costs
and reduces hallucinations while ensuring flexibility and robustness across
diverse linguistic entities and text domains. Developed as part of the
Universal-2 ASR system, the proposed method demonstrates superior performance
in TF accuracy, computational efficiency, and perceptual quality, as validated
through comprehensive evaluations using both objective and subjective methods.
This work underscores the importance of holistic TF models in enhancing ASR
usability in practical settings.
|
2501.05952
|
Scalable Vision Language Model Training via High Quality Data Curation
|
cs.CV cs.CL
|
In this paper, we introduce SAIL-VL (ScAlable Vision Language Model TraIning
via High QuaLity Data Curation), an open-source vision language model (VLM)
series achieving state-of-the-art (SOTA) performance in 2B and 8B parameters.
The following three key improvements contribute to SAIL-VL's leading
performance: (1) Scalable high-quality visual understanding data construction:
We implement a data construction pipeline to enable hundred-million-scale
high-quality recaption data annotation, and the resulting dataset SAIL-Caption
is validated to be of the highest data quality compared with open-source
alternatives. (2) Scalable Pretraining with High-Quality Visual Understanding
Data: We scale SAIL-VL's pretraining budget up to 655B tokens and show that
even a 2B VLM benefits from scaled-up training data sizes, exhibiting expected
data size scaling laws in visual understanding and instruction following
performance. (3) Scalable SFT via data quantity and complexity scaling: We
curate a high-quality SFT dataset collection which outperforms open-source
alternatives in data quantity scaling effectiveness. We also demonstrate that
training with progressively higher-complexity data surpasses baseline one-stage
training by a large margin. SAIL-VL series models achieve the highest average
score in 18 widely used VLM benchmarks in our evaluation, with the 2B model
taking the top position among VLMs of comparable size on OpenCompass 2024
(https://rank.opencompass.org.cn/leaderboard-multimodal), demonstrating robust
visual comprehension abilities. SAIL-VL series models are released at
HuggingFace (https://huggingface.co/BytedanceDouyinContent).
|
2501.05961
|
Swin-X2S: Reconstructing 3D Shape from 2D Biplanar X-ray with Swin
Transformers
|
cs.CV eess.IV
|
The conversion from 2D X-ray to 3D shape holds significant potential for
improving diagnostic efficiency and safety. However, existing reconstruction
methods often rely on hand-crafted features, manual intervention, and prior
knowledge, resulting in unstable shape errors and additional processing costs.
In this paper, we introduce Swin-X2S, an end-to-end deep learning method for
directly reconstructing 3D segmentation and labeling from 2D biplanar
orthogonal X-ray images. Swin-X2S employs an encoder-decoder architecture: the
encoder leverages 2D Swin Transformer for X-ray information extraction, while
the decoder employs 3D convolution with cross-attention to integrate structural
features from orthogonal views. A dimension-expanding module is introduced to
bridge the encoder and decoder, ensuring a smooth conversion from 2D pixels to
3D voxels. We evaluate the proposed method through extensive qualitative and
quantitative experiments across nine publicly available datasets covering four
anatomies (femur, hip, spine, and rib), with a total of 54 categories.
Significant improvements over previous methods have been observed not only in
the segmentation and labeling metrics but also in the clinically relevant
parameters that are of primary concern in practical applications, which
demonstrates the promise of Swin-X2S to provide an effective option for
anatomical shape reconstruction in clinical scenarios. Code implementation is
available at: https://github.com/liukuan5625/Swin-X2S.
|
2501.05962
|
Effective faking of verbal deception detection with target-aligned
adversarial attacks
|
cs.CL cs.AI
|
Background: Deception detection through analysing language is a promising
avenue using both human judgments and automated machine learning judgments. For
both forms of credibility assessment, automated adversarial attacks that
rewrite deceptive statements to appear truthful pose a serious threat. Methods:
We used a dataset of 243 truthful and 262 fabricated autobiographical stories
in a deception detection task for humans and machine learning models. A large
language model was tasked to rewrite deceptive statements so that they appear
truthful. In Study 1, humans who made a deception judgment or used the
detailedness heuristic and two machine learning models (a fine-tuned language
model and a simple n-gram model) judged original or adversarial modifications
of deceptive statements. In Study 2, we manipulated the target alignment of the
modifications, i.e. tailoring the attack to whether the statements would be
assessed by humans or computer models. Results: When adversarial modifications
were aligned with their target, human (d=-0.07 and d=-0.04) and machine
judgments (51% accuracy) dropped to the chance level. When the attack was not
aligned with the target, both human heuristic judgments (d=0.30 and d=0.36)
and machine learning predictions (63-78%) were significantly better than
chance. Conclusions: Easily accessible language models can effectively help
anyone fake deception detection efforts both by humans and machine learning
models. Robustness against adversarial modifications, for both humans and
machines, depends on this target alignment. We close with suggestions on advancing
deception research with adversarial attack designs.
|
2501.05963
|
Finnish SQuAD: A Simple Approach to Machine Translation of Span
Annotations
|
cs.CL
|
We apply a simple method to machine translate datasets with span-level
annotation using the DeepL MT service and its ability to translate formatted
documents. Using this method, we produce a Finnish version of the SQuAD2.0
question answering dataset and train QA retriever models on this new dataset.
We evaluate the quality of the dataset and more generally the MT method through
direct evaluation, indirect comparison to other similar datasets, a
backtranslation experiment, as well as through the performance of downstream
trained QA models. In all these evaluations, we find that the method of
transfer is not only simple to use but also produces consistently better translated
data. Given its good performance on the SQuAD dataset, it is likely the method
can be used to translate other similar span-annotated datasets for other tasks
and languages as well. All code and data are available under an open license:
data at HuggingFace TurkuNLP/squad_v2_fi, code on GitHub TurkuNLP/squad2-fi,
and model at HuggingFace TurkuNLP/bert-base-finnish-cased-squad2.
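The core trick, carrying span offsets through MT by marking them as inline formatting, can be sketched as follows (the tag-based wrap/recover helpers are illustrative assumptions; the actual pipeline relies on DeepL's formatted-document translation, which is not called here):

```python
import re

def wrap_span(context, start, end, tag="b"):
    """Mark an answer span with an inline tag so a format-preserving MT
    system (e.g. one that translates HTML) carries the span through."""
    return f"{context[:start]}<{tag}>{context[start:end]}</{tag}>{context[end:]}"

def recover_span(translated, tag="b"):
    """Locate the tagged span in the translated text and strip the markup,
    returning the new (start, end) offsets and the plain text."""
    m = re.search(f"<{tag}>(.*?)</{tag}>", translated, flags=re.S)
    if m is None:
        return None, translated  # span lost in translation; discard example
    plain = translated[:m.start()] + m.group(1) + translated[m.end():]
    return (m.start(), m.start() + len(m.group(1))), plain

marked = wrap_span("The capital of Finland is Helsinki.", 26, 34)
# (a real pipeline would translate `marked` here; for illustration we
#  recover directly from the marked English text)
span, plain = recover_span(marked)
print(span, plain[span[0]:span[1]])
```

Because the MT system preserves the formatting tags, the answer span can be re-localized in the translated context without any alignment model.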
|
2501.05964
|
Recommender Systems for Social Good: The Role of Accountability and
Sustainability
|
cs.IR
|
This work examines the role of recommender systems in promoting
sustainability, social responsibility, and accountability, with a focus on
alignment with the United Nations Sustainable Development Goals (SDGs). As
recommender systems become increasingly integrated into daily interactions,
they must go beyond personalization to support responsible consumption, reduce
environmental impact, and foster social good. We explore strategies to mitigate
the carbon footprint of recommendation models, ensure fairness, and implement
accountability mechanisms. By adopting these approaches, recommender systems
can contribute to sustainable and socially beneficial outcomes, aligning
technological advancements with the SDGs focused on environmental
sustainability and social well-being.
|
2501.05965
|
Model Inversion in Split Learning for Personalized LLMs: New Insights
from Information Bottleneck Theory
|
cs.LG
|
Personalized Large Language Models (LLMs) have become increasingly prevalent,
showcasing the impressive capabilities of models like GPT-4. This trend has
also catalyzed extensive research on deploying LLMs on mobile devices. Feasible
approaches for such edge-cloud deployment include using split learning.
However, previous research has largely overlooked the privacy leakage
associated with intermediate representations transmitted from devices to
servers. This work is the first to identify model inversion attacks in the
split learning framework for LLMs, emphasizing the necessity of secure defense.
For the first time, we introduce mutual information entropy to understand the
information propagation of Transformer-based LLMs and assess privacy attack
performance for LLM blocks. To address the issue of representations being
sparser and containing less information than embeddings, we propose a two-stage
attack system in which the first part projects representations into the
embedding space, and the second part uses a generative model to recover text
from these embeddings. This design breaks down the complexity and achieves
attack scores of 38%-75% in various scenarios, with an over 60% improvement
over the SOTA. This work comprehensively highlights the potential privacy risks
during the deployment of personalized LLMs on the edge side.
|
2501.05966
|
Towards Early Prediction of Self-Supervised Speech Model Performance
|
cs.SD cs.CL cs.LG eess.AS
|
In Self-Supervised Learning (SSL), pre-training and evaluation are resource
intensive. In the speech domain, current indicators of the quality of SSL
models during pre-training, such as the loss, do not correlate well with
downstream performance. Consequently, it is often difficult to gauge the final
downstream performance in a cost efficient manner during pre-training. In this
work, we propose unsupervised efficient methods that give insights into the
quality of the pre-training of SSL speech models, namely, measuring the cluster
quality and rank of the embeddings of the SSL model. Results show that measures
of cluster quality and rank correlate better with downstream performance than
the pre-training loss with only one hour of unlabeled audio, reducing the need
for GPU hours and labeled data in SSL model evaluation.
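The rank measure mentioned above can be proxied by an entropy-based effective rank of the embedding matrix (a RankMe-style sketch; this specific formula and the random toy matrices are assumptions, not necessarily the paper's exact measure):

```python
import numpy as np

def effective_rank(embeddings):
    """Entropy-based effective rank of an (N, D) embedding matrix.

    The normalized singular values are treated as a probability
    distribution; the exponential of their Shannon entropy gives a
    soft, differentiable-in-spirit rank estimate.
    """
    s = np.linalg.svd(embeddings, compute_uv=False)
    p = s / s.sum()
    p = p[p > 0]
    return float(np.exp(-(p * np.log(p)).sum()))

rng = np.random.default_rng(0)
# Collapsed embeddings (rank 4) vs. well-spread embeddings (full rank)
low_rank = rng.normal(size=(1000, 4)) @ rng.normal(size=(4, 256))
full_rank = rng.normal(size=(1000, 256))
print(effective_rank(low_rank), effective_rank(full_rank))
```

A higher effective rank indicates embeddings that use more of the available dimensions, which is the kind of signal the abstract reports as correlating with downstream performance.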
|
2501.05970
|
A Brain Age Residual Biomarker (BARB): Leveraging MRI-Based Models to
Detect Latent Health Conditions in U.S. Veterans
|
cs.LG
|
Age prediction using brain imaging, such as MRIs, has achieved promising
results, with several studies identifying the model's residual as a potential
biomarker for chronic disease states. In this study, we developed a brain age
predictive model using a dataset of 1,220 U.S. veterans (18--80 years) and
convolutional neural networks (CNNs) trained on two-dimensional slices of axial
T2-weighted fast spin-echo and T2-weighted fluid attenuated inversion recovery
MRI images. The model, incorporating a degree-3 polynomial ensemble, achieved
an $R^{2}$ of 0.816 on the testing set. Images were acquired at the level of
the anterior commissure and the frontal horns of the lateral ventricles.
Residual analysis was performed to assess its potential as a biomarker for five
ICD-coded conditions: hypertension (HTN), diabetes mellitus (DM), mild
traumatic brain injury (mTBI), illicit substance abuse/dependence (SAD), and
alcohol abuse/dependence (AAD). Residuals grouped by the number of ICD-coded
conditions demonstrated different trends that were statistically significant
($p = 0.002$), suggesting a relationship between disease states and predicted
brain age. This association was particularly pronounced in patients over 49
years, where negative residuals (indicating advanced brain aging) correlated
with the presence of multiple ICD codes. These findings support the potential
of residuals as biomarkers for detecting latent health conditions.
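The residual biomarker itself is just the gap between chronological and model-predicted age, with the sign convention implied above (negative means the brain looks older than the subject); a toy sketch with entirely made-up numbers:

```python
import numpy as np

def brain_age_residuals(chronological, predicted):
    """Residual = chronological age - model-predicted brain age.

    Negative residuals mean the model sees an older-looking brain than
    the subject's true age, i.e. advanced brain aging.
    """
    return np.asarray(chronological) - np.asarray(predicted)

# Hypothetical subjects grouped by number of ICD-coded conditions
ages      = np.array([55, 60, 65, 70, 58, 72])
predicted = np.array([54, 61, 68, 76, 57, 80])  # made-up model output
n_codes   = np.array([0, 1, 2, 3, 0, 3])

res = brain_age_residuals(ages, predicted)
for k in np.unique(n_codes):
    print(k, res[n_codes == k].mean())  # mean residual per condition count
```

Grouping residuals by condition count, as in the study, lets one test whether more comorbid subjects show systematically more negative residuals.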
|
2501.05981
|
Hermit Kingdom Through the Lens of Multiple Perspectives: A Case Study
of LLM Hallucination on North Korea
|
cs.CL
|
Hallucination in large language models (LLMs) remains a significant challenge
for their safe deployment, particularly due to its potential to spread
misinformation. Most existing solutions address this challenge by focusing on
aligning the models with credible sources or by improving how models
communicate their confidence (or lack thereof) in their outputs. While these
measures may be effective in most contexts, they may fall short in scenarios
requiring more nuanced approaches, especially in situations where access to
accurate data is limited or determining credible sources is challenging. In
this study, we take North Korea - a country characterised by an extreme lack of
reliable sources and the prevalence of sensationalist falsehoods - as a case
study. We explore and evaluate how some of the best-performing multilingual
LLMs and specific language-based models generate information about North Korea
in three languages spoken in countries with significant geo-political
interests: English (United States, United Kingdom), Korean (South Korea), and
Mandarin Chinese (China). Our findings reveal significant differences,
suggesting that the choice of model and language can lead to vastly different
understandings of North Korea, which has important implications given the
global security challenges the country poses.
|
2501.05982
|
Deep Variational Sequential Monte Carlo for High-Dimensional
Observations
|
cs.LG eess.SP
|
Sequential Monte Carlo (SMC), or particle filtering, is widely used in
nonlinear state-space systems, but its performance often suffers from poorly
approximated proposal and state-transition distributions. This work introduces
a differentiable particle filter that leverages the unsupervised variational
SMC objective to parameterize the proposal and transition distributions with a
neural network, designed to learn from high-dimensional observations.
Experimental results demonstrate that our approach outperforms established
baselines in tracking the challenging Lorenz attractor from high-dimensional
and partial observations. Furthermore, an evidence lower bound based evaluation
indicates that our method offers a more accurate representation of the
posterior distribution.
|
2501.05984
|
The Safe Trusted Autonomy for Responsible Space Program
|
eess.SY cs.SY
|
The Safe Trusted Autonomy for Responsible Space (STARS) program aims to
advance autonomy technologies for space by leveraging machine learning
technologies while mitigating barriers to trust, such as uncertainty,
opaqueness, brittleness, and inflexibility. This paper presents the
achievements and lessons learned from the STARS program in integrating
reinforcement learning-based multi-satellite control, run time assurance
approaches, and flexible human-autonomy teaming interfaces, into a new
integrated testing environment for collaborative autonomous satellite systems.
The primary results describe analysis of the reinforcement learning
multi-satellite control and run time assurance algorithms. These algorithms are
integrated into a prototype human-autonomy interface using best practices from
the human-autonomy trust literature; however, a detailed analysis of their
effectiveness is left to future work. References are provided with additional detailed
results of individual experiments.
|
2501.05987
|
Comparing Self-Supervised Learning Models Pre-Trained on Human Speech
and Animal Vocalizations for Bioacoustics Processing
|
cs.LG eess.AS
|
Self-supervised learning (SSL) foundation models have emerged as powerful,
domain-agnostic, general-purpose feature extractors applicable to a wide range
of tasks. Such models pre-trained on human speech have demonstrated high
transferability for bioacoustic processing. This paper investigates (i) whether
SSL models pre-trained directly on animal vocalizations offer a significant
advantage over those pre-trained on speech, and (ii) whether fine-tuning
speech-pretrained models on automatic speech recognition (ASR) tasks can
enhance bioacoustic classification. We conduct a comparative analysis using
three diverse bioacoustic datasets and two different bioacoustic tasks. Results
indicate that pre-training on bioacoustic data provides only marginal
improvements over speech-pretrained models, with comparable performance in most
scenarios. Fine-tuning on ASR tasks yields mixed outcomes, suggesting that the
general-purpose representations learned during SSL pre-training are already
well-suited for bioacoustic tasks. These findings highlight the robustness of
speech-pretrained SSL models for bioacoustics and imply that extensive
fine-tuning may not be necessary for optimal performance.
|
2501.05989
|
Addressing speaker gender bias in large scale speech translation systems
|
cs.CL cs.AI
|
This study addresses the issue of speaker gender bias in Speech Translation
(ST) systems, which can lead to offensive and inaccurate translations. The
masculine bias often found in large-scale ST systems is typically perpetuated
through training data derived from Machine Translation (MT) systems. Our
approach involves two key steps. First, we employ Large Language Models (LLMs)
to rectify translations based on the speaker's gender in a cost-effective
manner. Second, we fine-tune the ST model with the corrected data, enabling the
model to generate gender-specific translations directly from audio cues,
without the need for explicit gender input. Additionally, we propose a
three-mode fine-tuned model for scenarios where the speaker's gender is either
predefined or should not be inferred from speech cues. We demonstrate a 70%
improvement in translations for female speakers compared to our baseline and
other large-scale ST systems, such as Seamless M4T and Canary, on the MuST-SHE
test set.
|
2501.05990
|
Constraining constructions with WordNet: pros and cons for the semantic
annotation of fillers in the Italian Constructicon
|
cs.CL
|
The paper discusses the role of WordNet-based semantic classification in the
formalization of constructions, and more specifically in the semantic
annotation of schematic fillers, in the Italian Constructicon. We outline how
the Italian Constructicon project uses Open Multilingual WordNet topics to
represent semantic features and constraints of constructions.
|
2501.05991
|
An Attention-Guided Deep Learning Approach for Classifying 39 Skin
Lesion Types
|
eess.IV cs.CV cs.LG
|
The skin, as the largest organ of the human body, is vulnerable to a diverse
array of conditions collectively known as skin lesions, which encompass various
dermatoses. Diagnosing these lesions presents significant challenges for
medical practitioners due to the subtle visual differences that are often
imperceptible to the naked eye. While not all skin lesions are
life-threatening, certain types can act as early indicators of severe diseases,
including skin cancers, underscoring the critical need for timely and accurate
diagnostic methods. Deep learning algorithms have demonstrated remarkable
potential in facilitating the early detection and prognosis of skin lesions.
This study advances the field by curating a comprehensive and diverse dataset
comprising 39 categories of skin lesions, synthesized from five publicly
available datasets. Using this dataset, the performance of five
state-of-the-art deep learning models -- MobileNetV2, Xception, InceptionV3,
EfficientNetB1, and Vision Transformer -- is rigorously evaluated. To enhance
the accuracy and robustness of these models, attention mechanisms such as the
Efficient Channel Attention (ECA) and the Convolutional Block Attention Module
(CBAM) are incorporated into their architectures. Comprehensive evaluation
across multiple performance metrics reveals that the Vision Transformer model
integrated with CBAM outperforms others, achieving an accuracy of 93.46%,
precision of 94%, recall of 93%, F1-score of 93%, and specificity of 93.67%.
These results underscore the significant potential of the proposed system in
supporting medical professionals with accurate and efficient prognostic tools
for diagnosing a broad spectrum of skin lesions. The dataset and code used in
this study can be found at
https://github.com/akabircs/Skin-Lesions-Classification.
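As an illustration of the channel-attention idea behind modules like ECA and the channel branch of CBAM, the following is a minimal squeeze-and-excitation-style sketch in numpy. The weights are random here (they would be learned in a real network), and all names and parameter values are hypothetical, not taken from the paper's architecture:

```python
import numpy as np

def channel_attention(feature_map, reduction=4):
    """Squeeze-and-excitation-style channel gate (illustrative sketch).

    feature_map: array of shape (C, H, W).
    Returns the reweighted feature map of the same shape.
    """
    c, h, w = feature_map.shape
    # Squeeze: global average pooling over the spatial dims -> (C,)
    squeezed = feature_map.mean(axis=(1, 2))
    # Excite: a small bottleneck MLP (random weights here, learned in
    # practice), then a per-channel sigmoid gate in (0, 1).
    rng = np.random.default_rng(0)
    w1 = rng.standard_normal((c // reduction, c)) * 0.1
    w2 = rng.standard_normal((c, c // reduction)) * 0.1
    hidden = np.maximum(w1 @ squeezed, 0.0)          # ReLU
    gate = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))      # sigmoid
    return feature_map * gate[:, None, None]

x = np.random.default_rng(1).standard_normal((8, 4, 4))
y = channel_attention(x)
print(y.shape)  # (8, 4, 4)
```

Because the gate lies in (0, 1), the module can only attenuate channels; the network learns to keep informative channels near 1 and suppress the rest.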
|
2501.05994
|
On the Interaction in Transient Stability of Two-Inverter Power Systems
containing GFL inverter Using Manifold Method
|
eess.SY cs.SY
|
Many renewable energy resources are integrated into power systems via
grid-following (GFL) inverters which rely on a phase-locked loop (PLL) for grid
synchronization. During severe grid faults, GFL inverters are vulnerable to
transient instability, often leading to disconnection from the grid. This paper
aims to elucidate the interaction mechanisms and define the stability
boundaries of systems of two inverters, including GFL, grid-forming (GFM), or
grid-supporting (GSP) inverters. First, the generalized large-signal expression
for the two-inverter system under various inverter combinations is derived,
revealing that no energy function exists for systems containing GFL inverters.
This implies that the traditional direct method cannot be applied to such
systems. To overcome these challenges, a manifold method is employed to
precisely determine the domain of attraction (DOA) of the system, and the
transient stability margin is assessed by a new metric termed the critical
clearing radius (CCR). A case study of the two-inverter system under various
inverter combinations is conducted to explore large-signal interactions across
different scenarios. Manifold analysis and simulation results reveal that GSP
inverters using PLL for grid synchronization exhibit behavior similar to GFM
inverters when the droop coefficients in the terminal voltage control loop
(TVC) are sufficiently large. Compared to GFL inverters, GSP inverters
incorporating a TVC significantly enhance the transient stability of other
inverters. In the STATCOM case, the optimal placement of the STATCOM, realized
by GSP or GFM inverters, is identified to be at the midpoint of a transmission
line. All findings in this paper are validated through electromagnetic
transient (EMT) simulations.
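The transient-stability question studied here can be caricatured by a damped pendulum-like second-order model of PLL phase dynamics, in which a grid fault temporarily weakens the synchronizing term. The sketch below is a generic quasi-swing toy with entirely illustrative parameters, not the paper's two-inverter model or its manifold-based DOA analysis:

```python
import math

def simulate(fault_duration, dt=1e-3, t_end=20.0):
    """Forward-Euler integration of a pendulum-like PLL phase model.

    theta'' = P - K * V(t) * sin(theta) - D * theta'
    The grid voltage V drops during a fault window starting at t = 1.0 s.
    All parameter values are illustrative, not taken from the paper.
    """
    P, K, D = 0.5, 1.0, 0.8
    theta = math.asin(P / K)   # start at the pre-fault equilibrium
    omega, t = 0.0, 0.0
    while t < t_end:
        v = 0.1 if 1.0 <= t < 1.0 + fault_duration else 1.0
        domega = P - K * v * math.sin(theta) - D * omega
        theta += dt * omega
        omega += dt * domega
        t += dt
    return theta, omega

# A short fault is cleared and the phase re-converges to sin(theta) = P/K.
theta, omega = simulate(fault_duration=0.2)
print(round(math.sin(theta), 3), round(abs(omega), 3))
```

Sweeping `fault_duration` in such a model gives the flavor of a critical clearing time; the paper's CCR metric generalizes this idea to a radius in the full state space via the manifold-computed domain of attraction.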
|
2501.05997
|
Minimizing Occlusion Effect on Multi-View Camera Perception in BEV with
Multi-Sensor Fusion
|
cs.CV
|
Autonomous driving technology is rapidly evolving, offering the potential for
safer and more efficient transportation. However, the performance of these
systems can be significantly compromised by the occlusion on sensors due to
environmental factors like dirt, dust, rain, and fog. These occlusions severely
affect vision-based tasks such as object detection, vehicle segmentation, and
lane recognition. In this paper, we investigate the impact of various kinds of
occlusions on camera sensors by projecting their effects from multi-view camera
images of the nuScenes dataset into the Bird's-Eye View (BEV) domain. This
approach allows us to analyze how occlusions spatially distribute and influence
vehicle segmentation accuracy within the BEV domain. Despite significant
advances in sensor technology and multi-sensor fusion, a gap remains in the
existing literature regarding the specific effects of camera occlusions on
BEV-based perception systems. To address this gap, we use a multi-sensor fusion
technique that integrates LiDAR and radar sensor data to mitigate the
performance degradation caused by occluded cameras. Our findings demonstrate
that this approach significantly enhances the accuracy and robustness of
vehicle segmentation tasks, leading to more reliable autonomous driving
systems.
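A rule-based toy of the fusion idea: wherever the camera's BEV cells are flagged as occluded, fall back to the lidar-derived grid; elsewhere, combine both. This is a hand-written late-fusion sketch on synthetic occupancy grids, not the learned multi-sensor fusion evaluated in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
cam_bev = (rng.random((8, 8)) > 0.7).astype(float)    # camera-derived occupancy
lidar_bev = (rng.random((8, 8)) > 0.7).astype(float)  # lidar-derived occupancy
occluded = np.zeros((8, 8), dtype=bool)
occluded[:, :3] = True  # occlusion region projected into the BEV grid

# Late fusion: trust lidar wherever the camera is occluded, otherwise
# take the elementwise max (union of detections from both sensors).
fused = np.where(occluded, lidar_bev, np.maximum(cam_bev, lidar_bev))
print(fused.shape)  # (8, 8)
```

In practice the fusion is done on learned feature maps rather than binary grids, but the principle is the same: sensor redundancy lets one modality compensate where another is degraded.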
|
2501.06000
|
Self-Supervised Partial Cycle-Consistency for Multi-View Matching
|
cs.CV
|
Matching objects across partially overlapping camera views is crucial in
multi-camera systems and requires a view-invariant feature extraction network.
Training such a network with cycle-consistency circumvents the need for
labor-intensive labeling. In this paper, we extend the mathematical formulation
of cycle-consistency to handle partial overlap. We then introduce a pseudo-mask
which directs the training loss to take partial overlap into account. We
additionally present several new cycle variants that complement each other and
present a time-divergent scene sampling scheme that improves the data input for
this self-supervised setting. Cross-camera matching experiments on the
challenging DIVOTrack dataset show the merits of our approach. Compared to the
self-supervised state-of-the-art, we achieve a 4.3 percentage point higher F1
score with our combined contributions. Our improvements are robust to reduced
overlap in the training data, with substantial improvements in challenging
scenes that need to make few matches between many people. Self-supervised
feature networks trained with our method are effective at matching objects in a
range of multi-camera settings, providing opportunities for complex tasks like
large-scale multi-camera scene understanding.
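The core cycle-consistency signal can be sketched in a few lines of numpy: map soft assignments from view A to view B and back, and objects visible in both views score near 1 on the diagonal of the cycle matrix, while a partially-overlapping (A-only) object does not. The synthetic features and temperature below are illustrative, not the paper's network or loss:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(0)
shared = rng.standard_normal((3, 16))                 # objects seen by both cameras
feats_a = np.vstack([shared, rng.standard_normal((1, 16))])   # + one A-only object
feats_b = shared + 0.05 * rng.standard_normal((3, 16))        # noisy view-B features

tau = 0.1                                  # softmax temperature
sim_ab = feats_a @ feats_b.T               # pairwise similarities
p_ab = softmax(sim_ab / tau, axis=1)       # soft assignment A -> B
p_ba = softmax(sim_ab.T / tau, axis=1)     # soft assignment B -> A
cycle = p_ab @ p_ba                        # round trip A -> B -> A

scores = np.diag(cycle)
print(np.round(scores, 2))  # shared objects score near 1, the A-only one near 0
```

A pseudo-mask in the paper's sense would down-weight the loss on low-scoring rows like the last one, so that objects outside the overlap region are not forced into spurious matches.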
|
2501.06002
|
DeltaGNN: Graph Neural Network with Information Flow Control
|
cs.LG
|
Graph Neural Networks (GNNs) are popular deep learning models designed to
process graph-structured data through recursive neighborhood aggregations in
the message passing process. When applied to semi-supervised node
classification, the message-passing enables GNNs to understand short-range
spatial interactions, but also causes them to suffer from over-smoothing and
over-squashing. These challenges hinder model expressiveness and prevent the
use of deeper models to capture long-range node interactions (LRIs) within the
graph. Popular solutions for LRI detection are either too expensive to process
large graphs due to high time complexity or fail to generalize across diverse
graph structures. To address these limitations, we propose a mechanism called
\emph{information flow control}, which leverages a novel connectivity measure,
called \emph{information flow score}, to address over-smoothing and
over-squashing with linear computational overhead, supported by theoretical
evidence. Finally, to prove the efficacy of our methodology we design DeltaGNN,
the first scalable and generalizable approach for detecting long-range and
short-range interactions. We benchmark our model across 10 real-world datasets,
including graphs with varying sizes, topologies, densities, and homophilic
ratios, showing superior performance with limited computational complexity. The
implementations of the proposed methods are publicly available at
https://github.com/basiralab/DeltaGNN.
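The over-smoothing problem motivating this work is easy to demonstrate: repeatedly applying a mean-aggregation message-passing operator drives all node representations toward the same vector. The toy below shows this on a 6-node cycle graph; it illustrates the failure mode, not DeltaGNN's information flow score, whose definition is the paper's contribution:

```python
import numpy as np

# Toy graph: 6-node cycle with self-loops; row-normalized mean aggregation.
n = 6
adj = np.zeros((n, n))
for i in range(n):
    adj[i, (i - 1) % n] = adj[i, (i + 1) % n] = adj[i, i] = 1.0
agg = adj / adj.sum(axis=1, keepdims=True)

x = np.random.default_rng(0).standard_normal((n, 4))
spreads = []
for _ in range(30):
    x = agg @ x                                   # one message-passing round
    spreads.append(float(x.std(axis=0).mean()))   # cross-node feature spread

print(round(spreads[0], 3), spreads[-1] < 1e-3)
```

The spread across nodes decays geometrically with depth (governed by the operator's second-largest eigenvalue magnitude), which is why naive deep GNNs lose node-discriminative information long before they can capture long-range interactions.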
|