| id | title | categories | abstract |
|---|---|---|---|
2501.03225 | Automated Generation of Challenging Multiple-Choice Questions for Vision
Language Model Evaluation | cs.CV cs.AI cs.CL cs.CY cs.LG | The rapid development of vision language models (VLMs) demands rigorous and
reliable evaluation. However, current visual question answering (VQA)
benchmarks often depend on open-ended questions, making accurate evaluation
difficult due to the variability in natural language responses. To address
this, we introduce AutoConverter, an agentic framework that automatically
converts these open-ended questions into multiple-choice format, enabling
objective evaluation while reducing the costly question creation process. Our
experiments demonstrate that AutoConverter can generate correct and challenging
multiple-choice questions, with VLMs demonstrating consistently similar or
lower accuracy on these questions compared to human-created ones. Using
AutoConverter, we construct VMCBench, a benchmark created by transforming 20
existing VQA datasets into a unified multiple-choice format, totaling 9,018
questions. We comprehensively evaluate 33 state-of-the-art VLMs on VMCBench,
setting a new standard for scalable, consistent, and reproducible VLM
evaluation.
|
2501.03226 | BoostStep: Boosting mathematical capability of Large Language Models via
improved single-step reasoning | cs.CL cs.AI cs.LG | Large language models (LLMs) have demonstrated impressive ability in solving
complex mathematical problems with multi-step reasoning and can be further
enhanced with well-designed in-context learning (ICL) examples. However, this
potential is often constrained by two major challenges in ICL: granularity
mismatch and irrelevant information. We observe that while LLMs excel at
decomposing mathematical problems, they often struggle with reasoning errors in
fine-grained steps. Moreover, ICL examples retrieved at the question level may
omit critical steps or even mislead the model with irrelevant details. To
address this issue, we propose BoostStep, a method that enhances reasoning
accuracy through step-aligned ICL, a novel mechanism that carefully aligns
retrieved reference steps with the corresponding reasoning steps. Additionally,
BoostStep incorporates an effective "first-try" strategy to deliver exemplars
highly relevant to the current state of reasoning. BoostStep is a flexible and
powerful method that integrates seamlessly with chain-of-thought (CoT) and tree
search algorithms, refining both candidate selection and decision-making.
Empirical results show that BoostStep improves GPT-4o's CoT performance by 4.6%
across mathematical benchmarks, significantly surpassing traditional few-shot
learning's 1.2%. Moreover, it can achieve an additional 7.5% gain when combined
with tree search. Surprisingly, it enables state-of-the-art LLMs to solve
challenging math problems using simpler examples. It improves
DeepSeek-R1-671B's performance on AIME by 2.2%, leveraging simple examples only
from the MATH dataset.
|
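The step-aligned retrieval at the heart of BoostStep can be pictured as a similarity search over a bank of reference *steps* rather than whole questions. The sketch below is a toy illustration: the Jaccard token-overlap scorer and the example step bank are stand-ins for whatever learned embedding similarity the paper actually uses.

```python
def step_tokens(text):
    return set(text.lower().split())

def retrieve_step(current_step, example_steps):
    """Step-aligned retrieval: score each candidate reference step against the
    step currently being generated (Jaccard overlap here as a stand-in for a
    learned embedding similarity) and return the best match."""
    q = step_tokens(current_step)
    def score(s):
        t = step_tokens(s)
        return len(q & t) / len(q | t)
    return max(example_steps, key=score)

# Hypothetical step bank; a real one would hold steps mined from worked solutions.
bank = [
    "Factor the quadratic x^2 - 5x + 6 into (x-2)(x-3).",
    "Apply the power rule to differentiate x^3.",
    "Use the quadratic formula when factoring fails.",
]
print(retrieve_step("Factor x^2 - 7x + 12.", bank))
```

Retrieving per step, at the moment a step is generated, is what avoids the granularity mismatch of question-level ICL retrieval.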
2501.03227 | When Should Selfish Miners Double-Spend? | cs.CR cs.DC cs.DM cs.IT math.IT math.PR | Although both double-spending and selfish-mining attacks have been
extensively studied since the ``Bitcoin'' whitepaper of Nakamoto and the
``majority is not enough'' paper of Eyal and Sirer, there has been no rigorous
stochastic analysis of an attack that combines the two, except for the
complicated MDP models. In this paper, we first combine stubborn and selfish
mining attacks, i.e., construct a strategy where the attacker acts stubborn
until its private branch reaches a certain length and then switches to act
selfish. We provide the optimal stubbornness for each parameter regime. Next,
we provide the maximum stubbornness that is still more profitable than honest
mining and argue a connection between the level of stubbornness and the
$k$-confirmation rule. We show that, at each attack cycle, if the level of
stubbornness is higher than $k$, there is a risk of double-spending that comes
at no cost to the adversary. The result can be seen as a guide for picking $k$
in the $k$-confirmation rule in a blockchain design. At each cycle, for a given
stubbornness level, we rigorously quantify the risk of double-spending. We
provide the minimum double-spend value needed for an
attack to be profitable in the regimes where the scheme is less profitable than
honest mining. We further modify the attack in the stubborn regime in order to
conceal the attack and increase the double-spending probability. Finally, we
evaluate the results and provide the optimal and the maximum stubbornness
levels for each parameter regime as well as the revenue. As a case study, with
Bitcoin's $k=6$ block confirmation rule, we evaluate the revenue and
double-spending risk of the attacks for each pool parameter.
|
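The connection between stubbornness level and the $k$-confirmation rule can be illustrated with a toy gambler's-ruin model: the attacker's lead over the honest chain rises by one with probability equal to its hash share $\alpha$ and falls by one otherwise. This absorbing-barrier setup is a simplifying assumption, not the paper's full stochastic analysis, but it shows why larger $k$ suppresses the double-spend risk.

```python
import random

def p_reach_lead(alpha, s, start=1, trials=20000, seed=0):
    """Monte Carlo estimate of the chance a private branch with current lead
    `start` grows to lead `s` (stubbornness level) before the honest chain
    catches up -- a gambler's-ruin walk: +1 with prob alpha, -1 otherwise."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        lead = start
        while 0 < lead < s:
            lead += 1 if rng.random() < alpha else -1
        wins += lead == s
    return wins / trials

def p_reach_lead_exact(alpha, s, start=1):
    """Closed-form gambler's-ruin probability, for comparison."""
    r = (1 - alpha) / alpha
    return (1 - r**start) / (1 - r**s)

# With 30% hash power, the chance of building a 6-block lead (Bitcoin's k=6)
# before being caught is small but nonzero.
print(round(p_reach_lead_exact(0.3, 6), 4))  # → 0.0083
```

Sweeping `s` over a range of stubbornness levels reproduces the qualitative trade-off the abstract describes: deeper stubbornness is reached less often but enables a confirmation-depth double-spend when it is.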
2501.03228 | LightGNN: Simple Graph Neural Network for Recommendation | cs.IR cs.AI cs.LG | Graph neural networks (GNNs) have demonstrated superior performance in
collaborative recommendation through their ability to conduct high-order
representation smoothing, effectively capturing structural information within
users' interaction patterns. However, existing GNN paradigms face significant
challenges in scalability and robustness when handling large-scale, noisy, and
real-world datasets. To address these challenges, we present LightGNN, a
lightweight and distillation-based GNN pruning framework designed to
substantially reduce model complexity while preserving essential collaboration
modeling capabilities. Our LightGNN framework introduces a computationally
efficient pruning module that adaptively identifies and removes redundant edges
and embedding entries for model compression. The framework is guided by a
resource-friendly hierarchical knowledge distillation objective, whose
intermediate layer augments the observed graph to maintain performance,
particularly in high-rate compression scenarios. Extensive experiments on
public datasets demonstrate LightGNN's effectiveness, significantly improving
both computational efficiency and recommendation accuracy. Notably, LightGNN
achieves an 80% reduction in edge count and 90% reduction in embedding entries
while maintaining performance comparable to more complex state-of-the-art
baselines. The implementation of our LightGNN framework is available at the
GitHub repository: https://github.com/HKUDS/LightGNN.
|
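The compression step can be sketched as score-based edge pruning. In LightGNN the importance scores are identified adaptively by a learned pruning module; in this toy sketch they are simply given, and the 80% edge reduction from the abstract corresponds to a keep ratio of 0.2.

```python
import numpy as np

def prune_edges(edges, scores, keep_ratio=0.2):
    """Keep only the top `keep_ratio` fraction of edges by importance score,
    discarding the redundant rest (a stand-in for LightGNN's learned,
    adaptive pruning module)."""
    k = max(1, int(len(edges) * keep_ratio))
    order = np.argsort(scores)[::-1][:k]   # indices of the k highest scores
    return [edges[i] for i in sorted(order)]

# Toy user-item interaction edges with hypothetical importance scores.
edges = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3)]
scores = np.array([0.9, 0.1, 0.05, 0.8, 0.3])
print(prune_edges(edges, scores, keep_ratio=0.4))  # → [(0, 1), (1, 3)]
```

The same thresholding idea applies to embedding entries, where the abstract reports a 90% reduction.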
2501.03229 | Gaussian Masked Autoencoders | cs.CV cs.AI | This paper explores Masked Autoencoders (MAE) with Gaussian Splatting. While
reconstructive self-supervised learning frameworks such as MAE learn good
semantic abstractions, they are not trained for explicit spatial awareness. Our
approach, named Gaussian Masked Autoencoder, or GMAE, aims to learn semantic
abstractions and spatial understanding jointly. Like MAE, it reconstructs the
image end-to-end in the pixel space, but beyond MAE, it also introduces an
intermediate, 3D Gaussian-based representation and renders images via
splatting. We show that GMAE can enable various zero-shot learning capabilities
of spatial understanding (e.g., figure-ground segmentation, image layering,
edge detection, etc.) while preserving the high-level semantics of
self-supervised representation quality from MAE. To our knowledge, we are the
first to employ Gaussian primitives in an image representation learning
framework beyond optimization-based single-scene reconstructions. We believe
GMAE will inspire further research in this direction and contribute to
developing next-generation techniques for modeling high-fidelity visual data.
More details at https://brjathu.github.io/gmae
|
2501.03230 | Video-of-Thought: Step-by-Step Video Reasoning from Perception to
Cognition | cs.AI cs.CV | Existing research on video understanding still struggles to achieve in-depth
comprehension and reasoning in complex videos, primarily due to the
under-exploration of two key bottlenecks: fine-grained spatial-temporal
perceptive understanding and cognitive-level video scene comprehension. This
paper bridges the gap by presenting a novel solution. We first introduce a
novel video Multimodal Large Language Model (MLLM), MotionEpic, which achieves
fine-grained pixel-level spatial-temporal video grounding by integrating video
spatial-temporal scene graph (STSG) representation. Building upon MotionEpic,
we then develop a Video-of-Thought (VoT) reasoning framework. VoT inherits the
Chain-of-Thought (CoT) core, breaking down a complex task into simpler and
manageable sub-problems, and addressing them step-by-step from a low-level
pixel perception to high-level cognitive interpretation. Extensive experiments
across various complex video QA benchmarks demonstrate that our overall
framework strikingly boosts the existing state of the art. To our knowledge, this
is the first attempt at successfully implementing the CoT technique for
achieving human-level video reasoning, where we show great potential in
extending it to a wider range of video understanding scenarios. Project is open
at https://haofei.vip/VoT
|
2501.03235 | Neural networks consisting of DNA | physics.bio-ph cond-mat.soft cs.AI cs.NE q-bio.BM q-bio.MN | Neural networks based on soft and biological matter constitute an interesting
potential alternative to traditional implementations based on electric
circuits. DNA is a particularly promising system in this context due its
natural ability to store information. In recent years, researchers have started
to construct neural networks that are based on DNA. In this chapter, I provide
a very basic introduction to the concept of DNA neural networks, aiming at an
audience that is not familiar with biochemistry.
|
2501.03246 | Bridging Auditory Perception and Language Comprehension through
MEG-Driven Encoding Models | q-bio.NC cs.CL cs.LG cs.SD eess.AS eess.SP | Understanding the neural mechanisms behind auditory and linguistic processing
is key to advancing cognitive neuroscience. In this study, we use
Magnetoencephalography (MEG) data to analyze brain responses to spoken language
stimuli. We develop two distinct encoding models: an audio-to-MEG encoder,
which uses time-frequency decompositions (TFD) and wav2vec2 latent space
representations, and a text-to-MEG encoder, which leverages CLIP and GPT-2
embeddings. Both models successfully predict neural activity, demonstrating
significant correlations between estimated and observed MEG signals. However,
the text-to-MEG model outperforms the audio-based model, achieving higher
Pearson Correlation (PC) scores. Spatially, we identify that auditory-based
embeddings (TFD and wav2vec2) predominantly activate lateral temporal regions,
which are responsible for primary auditory processing and the integration of
auditory signals. In contrast, textual embeddings (CLIP and GPT-2) primarily
engage the frontal cortex, particularly Broca's area, which is associated with
higher-order language processing, including semantic integration and language
production, especially in the 8-30 Hz frequency range. The strong involvement
of these regions suggests that auditory stimuli are processed through more
direct sensory pathways, while linguistic information is encoded via networks
that integrate meaning and cognitive control. Our results reveal distinct
neural pathways for auditory and linguistic information processing, with higher
encoding accuracy for text representations in the frontal regions. These
insights refine our understanding of the brain's functional architecture in
processing auditory and textual information, offering quantitative advancements
in the modelling of neural responses to complex language stimuli.
|
2501.03250 | Machine Learning and Deep Learning Techniques used in Cybersecurity and
Digital Forensics: a Review | cs.CR cs.AI | In the fast-paced realms of cybersecurity and digital forensics, machine
learning (ML) and deep learning (DL) have emerged as game-changing technologies
that introduce methods to identify, stop, and analyze cyber risks. This review
presents an overview of the ML and DL approaches used in these fields,
showcasing their advantages, drawbacks, and possibilities. It covers a range of
AI techniques used in spotting intrusions in systems and classifying malware to
prevent cybersecurity attacks, detect anomalies and enhance resilience. This
study concludes by highlighting areas where further research is needed and
suggesting ways to create transparent and scalable ML and DL solutions that are
suited to the evolving landscape of cybersecurity and digital forensics.
|
2501.03254 | Advanced Displacement Magnitude Prediction in Multi-Material Architected
Lattice Structure Beams Using Physics Informed Neural Network Architecture | cs.AI cond-mat.mtrl-sci cs.CE cs.LG cs.NE | This paper proposes an innovative method for predicting deformation in
architected lattice structures that combines Physics-Informed Neural Networks
(PINNs) with finite element analysis. A thorough study was carried out on
FCC-based lattice beams utilizing five different materials (Structural Steel,
AA6061, AA7075, Ti6Al4V, and Inconel 718) under varied edge loads (1000-10000
N). The PINN model blends data-driven learning with physics-based limitations
via a proprietary loss function, resulting in much higher prediction accuracy
than linear regression, achieving a higher R-squared (0.7923 vs. 0.5686) and
lower error metrics (MSE: 0.00017417 vs.
0.00036187). Among the materials examined, AA6061 had the highest displacement
sensitivity (0.1014 mm at maximum load), while Inconel 718 had better structural
stability.
|
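The abstract's proprietary loss function is not specified, but the generic PINN idea it blends, a data misfit plus a physics-based penalty, can be sketched as below. The Euler-Bernoulli beam residual and the finite-difference derivative (in place of automatic differentiation) are illustrative assumptions.

```python
import numpy as np

def pinn_loss(u_pred, u_data, x, EI, q, lam=1.0):
    """Composite PINN-style loss: data misfit plus a physics residual.
    The residual enforces the Euler-Bernoulli beam equation EI*u'''' = q
    on a uniform grid via finite differences (a stand-in for autodiff)."""
    data_loss = np.mean((u_pred - u_data) ** 2)
    h = x[1] - x[0]
    # 4th derivative by the central 5-point stencil on interior points
    u4 = (u_pred[:-4] - 4*u_pred[1:-3] + 6*u_pred[2:-2]
          - 4*u_pred[3:-1] + u_pred[4:]) / h**4
    physics_loss = np.mean((EI * u4 - q) ** 2)
    return data_loss + lam * physics_loss

# A displacement field that exactly satisfies EI*u'''' = q has near-zero loss:
x = np.linspace(0.0, 1.0, 101)
EI = 2.0
q = 24.0 * EI          # chosen so that u = x**4 solves EI*u'''' = q
u_exact = x ** 4       # u'''' = 24
print(round(pinn_loss(u_exact, u_exact, x, EI, q), 6))  # → 0.0
```

Minimizing such a composite loss is what lets the network stay accurate between the sparsely sampled finite-element data points.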
2501.03256 | AI-ANNE: (A) (N)eural (N)et for (E)xploration: Transferring Deep
Learning Models onto Microcontrollers and Embedded Systems | cs.LG cs.AI | This working paper explores the integration of neural networks onto
resource-constrained embedded systems like a Raspberry Pi Pico / Raspberry Pi
Pico 2. A TinyML approach transfers neural networks directly onto these
microcontrollers, enabling real-time, low-latency, and energy-efficient
inference while maintaining data privacy. To this end, AI-ANNE: (A) (N)eural
(N)et for (E)xploration is presented, which facilitates the transfer of
pre-trained models from high-performance platforms like TensorFlow and Keras
onto microcontrollers, using a lightweight programming language like
MicroPython. This approach demonstrates how neural network architectures, such
as neurons, layers, density and activation functions can be implemented in
MicroPython in order to deal with the computational limitations of embedded
systems. Based on the Raspberry Pi Pico / Raspberry Pi Pico 2, two different
neural networks on microcontrollers are presented for an example of data
classification. As a further application example, such a microcontroller can
be used for condition monitoring, where immediate corrective measures are
triggered on the basis of sensor data. Overall, this working paper presents a
very easy-to-implement way of using neural networks on energy-efficient devices
such as microcontrollers. This makes AI-ANNE: (A) (N)eural (N)et for
(E)xploration not only suited for practical use, but also as an educational
tool with clear insights into how neural networks operate.
|
2501.03257 | Breaking Through the Spike: Spike Window Decoding for Accelerated and
Precise Automatic Speech Recognition | eess.AS cs.AI cs.CL cs.SD | Recently, end-to-end automatic speech recognition has become the mainstream
approach in both industry and academia. To optimize system performance in
specific scenarios, the Weighted Finite-State Transducer (WFST) is extensively
used to integrate acoustic and language models, leveraging its capacity to
implicitly fuse language models within static graphs, thereby ensuring robust
recognition while also facilitating rapid error correction. However, WFST
necessitates a frame-by-frame search of CTC posterior probabilities through
autoregression, which significantly hampers inference speed. In this work, we
thoroughly investigate the spike property of CTC outputs and further propose
the conjecture that adjacent frames to non-blank spikes carry semantic
information beneficial to the model. Building on this, we propose the Spike
Window Decoding algorithm, which greatly improves the inference speed by making
the number of frames decoded in WFST linearly related to the number of spiking
frames in the CTC output, while guaranteeing the recognition performance. Our
method achieves SOTA recognition accuracy while significantly accelerating
decoding, as demonstrated on both the AISHELL-1 and large-scale in-house datasets,
establishing a pioneering approach for integrating CTC output with WFST.
|
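The frame-selection idea behind Spike Window Decoding can be sketched as follows: locate frames where the CTC argmax is a non-blank spike, then keep a small symmetric window around each one so the WFST search visits a number of frames linear in the spike count. The blank id of 0 and the window size are assumptions for illustration; the real algorithm operates inside the WFST decoder itself.

```python
import numpy as np

def spike_window_frames(posteriors, blank=0, window=1):
    """Select the frames the search needs to visit: non-blank spike frames
    plus `window` neighbours on each side (reflecting the conjecture that
    frames adjacent to spikes still carry useful semantic information)."""
    spikes = np.flatnonzero(np.argmax(posteriors, axis=1) != blank)
    keep = set()
    for t in spikes:
        for dt in range(-window, window + 1):
            if 0 <= t + dt < len(posteriors):
                keep.add(t + dt)
    return sorted(keep)

# 8 frames, 3 symbols (0 = blank); non-blank spikes at frames 2 and 6
post = np.tile([0.9, 0.05, 0.05], (8, 1))
post[2] = [0.1, 0.8, 0.1]
post[6] = [0.1, 0.1, 0.8]
print(spike_window_frames(post, window=1))  # → [1, 2, 3, 5, 6, 7]
```

Here 6 of 8 frames survive; on real speech, where blanks dominate the CTC output, the reduction is far larger, which is where the speedup comes from.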
2501.03259 | Toward Inclusive Educational AI: Auditing Frontier LLMs through a
Multiplexity Lens | cs.CL cs.AI cs.CY cs.LG cs.MA | As large language models (LLMs) like GPT-4 and Llama 3 become integral to
educational contexts, concerns are mounting over the cultural biases, power
imbalances, and ethical limitations embedded within these technologies. Though
generative AI tools aim to enhance learning experiences, they often reflect
values rooted in Western, Educated, Industrialized, Rich, and Democratic
(WEIRD) cultural paradigms, potentially sidelining diverse global perspectives.
This paper proposes a framework to assess and mitigate cultural bias within
LLMs through the lens of applied multiplexity. Multiplexity, inspired by
Senturk et al. and rooted in Islamic and other wisdom traditions, emphasizes
the coexistence of diverse cultural viewpoints, supporting a multi-layered
epistemology that integrates both empirical sciences and normative values. Our
analysis reveals that LLMs frequently exhibit cultural polarization, with
biases appearing in both overt responses and subtle contextual cues. To address
inherent biases and incorporate multiplexity in LLMs, we propose two
strategies: \textit{Contextually-Implemented Multiplex LLMs}, which embed
multiplex principles directly into the system prompt, influencing LLM outputs
at a foundational level and independent of individual prompts, and
\textit{Multi-Agent System (MAS)-Implemented Multiplex LLMs}, where multiple
LLM agents, each representing distinct cultural viewpoints, collaboratively
generate a balanced, synthesized response. Our findings demonstrate that as
mitigation strategies evolve from contextual prompting to MAS-implementation,
cultural inclusivity markedly improves, evidenced by a significant rise in the
Perspectives Distribution Score (PDS) and a PDS Entropy increase from 3.25% at
baseline to 98% with the MAS-Implemented Multiplex LLMs. Sentiment analysis
further shows a shift towards positive sentiment across cultures,...
|
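The abstract does not define PDS Entropy; a natural reading, assumed here, is a Shannon entropy over the distribution of responses across cultural perspectives, normalized so that a uniform (maximally inclusive) distribution scores 100%. The counts below are invented for illustration.

```python
import math

def pds_entropy(perspective_counts):
    """Normalized Shannon entropy of how responses distribute over cultural
    perspectives, as a percentage of the maximum (uniform) entropy. A single
    dominant perspective scores near 0%; perfect balance scores 100%."""
    total = sum(perspective_counts)
    probs = [c / total for c in perspective_counts if c > 0]
    h = -sum(p * math.log2(p) for p in probs)
    h_max = math.log2(len(perspective_counts))
    return 100.0 * h / h_max

print(round(pds_entropy([97, 1, 1, 1]), 1))    # heavily polarized, near 0
print(round(pds_entropy([25, 25, 25, 25]), 1))  # → 100.0
```

Under this reading, the reported jump from 3.25% to 98% corresponds to moving from a near-monocultural response distribution to a nearly uniform one.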
2501.03261 | Navigation Variable-based Multi-objective Particle Swarm Optimization
for UAV Path Planning with Kinematic Constraints | cs.RO cs.AI cs.NE | Path planning is essential for unmanned aerial vehicles (UAVs) as it
determines the path that the UAV needs to follow to complete a task. This work
addresses this problem by introducing a new algorithm called navigation
variable-based multi-objective particle swarm optimization (NMOPSO). It first
models path planning as an optimization problem via the definition of a set of
objective functions that include optimality and safety requirements for UAV
operation. The NMOPSO is then used to minimize those functions through Pareto
optimal solutions. The algorithm features a new path representation based on
navigation variables to include kinematic constraints and exploit the
maneuverable characteristics of the UAV. It also includes an adaptive mutation
mechanism to enhance the diversity of the swarm for better solutions.
Comparisons with various algorithms have been carried out to benchmark the
proposed approach. The results indicate that the NMOPSO performs better than
not only other particle swarm optimization variants but also other
state-of-the-art multi-objective and metaheuristic optimization algorithms.
Experiments have also been conducted with real UAVs to confirm the validity of
the approach for practical flights. The source code of the algorithm is
available at https://github.com/ngandng/NMOPSO.
|
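The Pareto-optimality machinery at the core of any multi-objective PSO such as NMOPSO reduces to a dominance check over objective vectors. A minimal sketch, assuming minimization and two hypothetical UAV objectives (path length, collision risk):

```python
def dominates(a, b):
    """Pareto dominance for minimization: a dominates b if it is no worse in
    every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(solutions):
    """Non-dominated subset of a swarm's objective vectors, e.g.
    (path_length, collision_risk) tuples for candidate UAV paths."""
    return [s for s in solutions
            if not any(dominates(o, s) for o in solutions if o is not s)]

swarm = [(10.0, 0.2), (12.0, 0.1), (11.0, 0.3), (9.0, 0.5)]
print(pareto_front(swarm))  # → [(10.0, 0.2), (12.0, 0.1), (9.0, 0.5)]
```

NMOPSO's contribution sits on top of this: the navigation-variable path representation feeds kinematically feasible candidates into the dominance comparison, and the adaptive mutation keeps the front diverse.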
2501.03262 | REINFORCE++: A Simple and Efficient Approach for Aligning Large Language
Models | cs.CL cs.LG | Reinforcement Learning from Human Feedback (RLHF) has emerged as a critical
approach for aligning large language models with human preferences, witnessing
rapid algorithmic evolution through methods such as Proximal Policy
Optimization (PPO), Direct Preference Optimization (DPO), REINFORCE Leave
One-Out (RLOO), ReMax, and Group Relative Policy Optimization (GRPO). We
present REINFORCE++, an enhanced variant of the classical REINFORCE algorithm
that incorporates key optimization techniques from PPO while eliminating the
need for a critic network. REINFORCE++ achieves three primary objectives: (1)
simplicity, (2) enhanced training stability, and (3) reduced computational
overhead. Through extensive empirical evaluation, we demonstrate that
REINFORCE++ exhibits superior stability compared to GRPO and achieves greater
computational efficiency than PPO while maintaining comparable performance. The
implementation is available at \url{https://github.com/OpenRLHF/OpenRLHF}.
|
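Two ingredients commonly associated with critic-free REINFORCE variants of this kind can be sketched as follows: fold a per-token KL penalty into the sequence reward, then normalize advantages across the whole batch instead of bootstrapping from a value network. Shapes, the `kl_coef` value, and the exact shaping scheme are illustrative assumptions, not the paper's specification.

```python
import numpy as np

def critic_free_advantages(rewards, kl, kl_coef=0.1):
    """Critic-free advantage estimate: KL-shaped return per sample, broadcast
    to every token, then normalized over the global batch.
    `rewards`: (B,) scalar reward per response; `kl`: (B, T) per-token KL."""
    shaped = rewards - kl_coef * kl.sum(axis=1)            # KL-shaped returns
    adv = np.repeat(shaped[:, None], kl.shape[1], axis=1)  # (B, T) per token
    return (adv - adv.mean()) / (adv.std() + 1e-8)         # batch normalization

rewards = np.array([1.0, 0.2, 0.8, 0.1])   # toy reward-model scores
kl = np.full((4, 5), 0.01)                 # toy per-token KL to the reference
adv = critic_free_advantages(rewards, kl)
print(adv.shape)  # → (4, 5)
```

Dropping the critic removes roughly half the trainable parameters of a PPO setup, which is where the claimed computational savings come from.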
2501.03264 | Bridge the Inference Gaps of Neural Processes via Expectation
Maximization | cs.LG cs.AI cs.NE | The neural process (NP) is a family of computationally efficient models for
learning distributions over functions. However, it suffers from under-fitting
and shows suboptimal performance in practice. Researchers have primarily
focused on incorporating diverse structural inductive biases, \textit{e.g.}
attention or convolution, in modeling. The topic of inference suboptimality and
an analysis of the NP from the optimization objective perspective has hardly
been studied in earlier work. To fix this issue, we propose a surrogate
objective of the target log-likelihood of the meta dataset within the
expectation maximization framework. The resulting model, referred to as the
Self-normalized Importance weighted Neural Process (SI-NP), can learn a more
accurate functional prior and has an improvement guarantee concerning the
target log-likelihood. Experimental results show the competitive performance of
SI-NP over other NP objectives and illustrate that structural inductive
biases, such as attention modules, can also augment our method to achieve SOTA
performance. Our code is available at
\url{https://github.com/hhq123gogogo/SI_NPs}.
|
2501.03265 | Optimizing Edge AI: A Comprehensive Survey on Data, Model, and System
Strategies | cs.LG cs.AI | The emergence of 5G and edge computing hardware has brought about a
significant shift in artificial intelligence, with edge AI becoming a crucial
technology for enabling intelligent applications. With the growing amount of
data generated and stored on edge devices, deploying AI models for local
processing and inference has become increasingly necessary. However, deploying
state-of-the-art AI models on resource-constrained edge devices faces
significant challenges that must be addressed. This paper presents an
optimization triad for efficient and reliable edge AI deployment, including
data, model, and system optimization. First, we discuss optimizing data through
data cleaning, compression, and augmentation to make it more suitable for edge
deployment. Second, we explore model design and compression methods at the
model level, such as pruning, quantization, and knowledge distillation.
Finally, we introduce system optimization techniques like framework support and
hardware acceleration to accelerate edge AI workflows. Based on an in-depth
analysis of various application scenarios and deployment challenges of edge AI,
this paper proposes an optimization paradigm based on the data-model-system
triad to enable a whole set of solutions to effectively transfer ML models,
which are initially trained in the cloud, to various edge devices for
supporting multiple scenarios.
|
2501.03266 | LLM Content Moderation and User Satisfaction: Evidence from Response
Refusals in Chatbot Arena | cs.CL cs.AI cs.CY cs.HC cs.SI | LLM safety and ethical alignment are widely discussed, but the impact of
content moderation on user satisfaction remains underexplored. To address this,
we analyze nearly 50,000 Chatbot Arena response-pairs using a novel fine-tuned
RoBERTa model that we trained on hand-labeled data to disentangle refusals due
to ethical concerns from refusals due to technical limitations or lack
of information. Our findings reveal a significant refusal penalty on content
moderation, with users choosing ethical-based refusals roughly one-fourth as
often as their preferred LLM response compared to standard responses. However,
the context and phrasing play critical roles: refusals on highly sensitive
prompts, such as illegal content, achieve higher win rates than less sensitive
ethical concerns, and longer responses closely aligned with the prompt perform
better. These results emphasize the need for nuanced moderation strategies that
balance ethical safeguards with user satisfaction. Moreover, we find that the
refusal penalty is notably lower in evaluations using the LLM-as-a-Judge
method, highlighting discrepancies between user and automated assessments.
|
2501.03268 | Heterogeneous Graph Pre-training Based Model for Secure and Efficient
Prediction of Default Risk Propagation among Bond Issuers | cs.LG cs.AI | Efficient prediction of default risk for bond-issuing enterprises is pivotal
for maintaining stability and fostering growth in the bond market. Conventional
methods usually rely solely on an enterprise's internal data for risk
assessment. In contrast, graph-based techniques leverage interconnected
corporate information to enhance default risk identification for targeted bond
issuers. Traditional graph techniques such as the label propagation algorithm
or DeepWalk fail to effectively integrate an enterprise's inherent attribute
information with its topological network data. Additionally, due to data
scarcity and security and privacy concerns between enterprises, end-to-end graph
neural network (GNN) algorithms may struggle in delivering satisfactory
performance for target tasks. To address these challenges, we present a novel
two-stage model. In the first stage, we employ an innovative Masked
Autoencoders for Heterogeneous Graph (HGMAE) to pre-train on a vast enterprise
knowledge graph. Subsequently, in the second stage, a specialized classifier
model is trained to predict default risk propagation probabilities. The
classifier leverages concatenated feature vectors derived from the pre-trained
encoder with the enterprise's task-specific feature vectors. Through the
two-stage training approach, our model not only boosts the importance of unique
bond characteristics for specific default prediction tasks, but also securely
and efficiently leverages the global information pre-trained from other
enterprises. Experimental results demonstrate that our proposed model
outperforms existing approaches in predicting default risk for bond issuers.
|
2501.03271 | DPO Kernels: A Semantically-Aware, Kernel-Enhanced, and Divergence-Rich
Paradigm for Direct Preference Optimization | cs.LG cs.AI cs.CL | The rapid rise of large language models (LLMs) has unlocked many applications
but also underscores the challenge of aligning them with diverse values and
preferences. Direct Preference Optimization (DPO) is central to alignment but
constrained by fixed divergences and limited feature transformations. We
propose DPO-Kernels, which integrates kernel methods to address these issues
through four key contributions: (i) Kernelized Representations with polynomial,
RBF, Mahalanobis, and spectral kernels for richer transformations, plus a
hybrid loss combining embedding-based and probability-based objectives; (ii)
Divergence Alternatives (Jensen-Shannon, Hellinger, Renyi, Bhattacharyya,
Wasserstein, and f-divergences) for greater stability; (iii) Data-Driven
Selection metrics that automatically choose the best kernel-divergence pair;
and (iv) a Hierarchical Mixture of Kernels for both local precision and global
modeling. Evaluations on 12 datasets demonstrate state-of-the-art performance
in factuality, safety, reasoning, and instruction following. Grounded in
Heavy-Tailed Self-Regularization, DPO-Kernels maintains robust generalization
for LLMs, offering a comprehensive resource for further alignment research.
|
2501.03272 | Backdoor Token Unlearning: Exposing and Defending Backdoors in
Pretrained Language Models | cs.CR cs.AI cs.CL | Supervised fine-tuning has become the predominant method for adapting large
pretrained models to downstream tasks. However, recent studies have revealed
that these models are vulnerable to backdoor attacks, where even a small number
of malicious samples can successfully embed backdoor triggers into the model.
While most existing defense methods focus on post-training backdoor defense,
efficiently defending against backdoor attacks during the training phase remains
largely unexplored. To address this gap, we propose a novel defense method
called Backdoor Token Unlearning (BTU), which proactively detects and
neutralizes trigger tokens during the training stage. Our work is based on two
key findings: 1) backdoor learning causes distinctive differences between
backdoor token parameters and clean token parameters in word embedding layers,
and 2) the success of backdoor attacks heavily depends on backdoor token
parameters. The BTU defense leverages these properties to identify aberrant
embedding parameters and subsequently removes backdoor behaviors using a
fine-grained unlearning technique. Extensive evaluations across three datasets
and four types of backdoor attacks demonstrate that BTU effectively defends
against these threats while preserving the model's performance on primary
tasks. Our code is available at https://github.com/XDJPH/BTU.
|
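Finding 1, that backdoor-token embedding parameters drift far more than clean ones, suggests a simple detection sketch: measure how far each word-embedding row moved during fine-tuning and flag statistical outliers. The z-score threshold and synthetic embeddings below are illustrative assumptions, not BTU's actual detection procedure.

```python
import numpy as np

def flag_backdoor_tokens(emb_before, emb_after, z_thresh=3.0):
    """Flag tokens whose word-embedding rows moved anomalously far during
    fine-tuning, reflecting the finding that backdoor-token parameters drift
    far more than clean-token parameters. Returns flagged token ids."""
    drift = np.linalg.norm(emb_after - emb_before, axis=1)
    z = (drift - drift.mean()) / (drift.std() + 1e-8)
    return np.flatnonzero(z > z_thresh).tolist()

rng = np.random.default_rng(0)
before = rng.normal(size=(1000, 16))                      # pre-training embeddings
after = before + rng.normal(scale=0.01, size=(1000, 16))  # small clean drift
after[42] += 5.0  # simulate one trigger token drifting far away
print(flag_backdoor_tokens(before, after))  # → [42]
```

In BTU the flagged parameters would then be neutralized by the fine-grained unlearning step rather than simply reported.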
2501.03273 | Strategic Fusion Optimizes Transformer Compression | cs.LG cs.AI cs.CL | This study investigates transformer model compression by systematically
pruning its layers. We evaluated 14 pruning strategies across nine diverse
datasets, including 12 strategies based on different signals obtained from
layer activations, mutual information, gradients, weights, and attention. To
address the limitations of single-signal strategies, we introduced two fusion
strategies, linear regression and random forest, which combine individual
strategies (i.e., strategic fusion), for more informed pruning decisions.
Additionally, we applied knowledge distillation to mitigate any accuracy loss
during layer pruning. Our results reveal that random forest strategic fusion
outperforms individual strategies in seven out of nine datasets and achieves
near-optimal performance in the other two. The distilled random forest
surpasses the original accuracy in six datasets and mitigates accuracy drops in
the remaining three. Knowledge distillation also improves the accuracy-to-size
ratio by an average factor of 18.84 across all datasets. Supported by
mathematical foundations and biological analogies, our findings suggest that
strategically combining multiple signals can lead to efficient, high-performing
transformer models for resource-constrained applications.
|
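The linear-regression flavor of strategic fusion can be sketched as fitting weights that map several per-layer pruning signals onto a measured importance target, then pruning the layers the fused score ranks lowest. The signal matrix and target below are synthetic placeholders for the paper's activation, gradient, weight, and attention signals.

```python
import numpy as np

def fuse_signals(signals, importance, layers_to_prune=2):
    """Linear-regression strategic fusion sketch: fit weights mapping several
    per-layer signals (columns) to a measured importance target, then prune
    the layers the fused score ranks lowest. `signals`: (L, S) matrix."""
    w, *_ = np.linalg.lstsq(signals, importance, rcond=None)
    fused = signals @ w
    return sorted(np.argsort(fused)[:layers_to_prune].tolist())

# 6 layers, 3 hypothetical signals (activation norm, gradient norm, attention entropy)
signals = np.array([
    [0.9, 0.8, 0.7],
    [0.1, 0.2, 0.1],   # weak on every signal
    [0.8, 0.9, 0.8],
    [0.2, 0.1, 0.2],   # weak on every signal
    [0.7, 0.7, 0.9],
    [0.6, 0.5, 0.6],
])
importance = signals.mean(axis=1)  # toy target: average of the signals
print(fuse_signals(signals, importance))  # → [1, 3]
```

The random-forest variant replaces the least-squares fit with a nonlinear ensemble, which is what the abstract reports winning on seven of nine datasets.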
2501.03276 | ComMer: a Framework for Compressing and Merging User Data for
Personalization | cs.CL cs.AI cs.IR cs.LG | Large Language Models (LLMs) excel at a wide range of tasks, but adapting
them to new data, particularly for personalized applications, poses significant
challenges due to resource and computational constraints. Existing methods
either rely on exposing fresh data to the model through the prompt, which is
limited by context size and computationally expensive at inference time, or
fine-tuning, which incurs substantial training and update costs. In this paper,
we introduce ComMer - Compress and Merge - a novel framework that efficiently
personalizes LLMs by compressing users' documents into compact representations,
which are then merged and fed into a frozen LLM. We evaluate ComMer on two
types of personalization tasks - personalized skill learning, using the tweet
paraphrasing dataset and the personalized news headline generation dataset from
the LaMP benchmark, and knowledge-intensive question answering, using the PerLTQA dataset. Our
experiments demonstrate that in constrained inference budget scenarios ComMer
achieves superior quality in skill learning tasks, while highlighting
limitations in knowledge-intensive settings due to the loss of detailed
information. These results offer insights into trade-offs and potential
optimizations in multi-document compression for personalization.
|
2501.03277 | HonkaiChat: Companions from Anime that feel alive! | cs.CL | Modern conversational agents, including anime-themed chatbots, are frequently
reactive and personality-driven but fail to capture the dynamic nature of human
interactions. We propose an event-driven dialogue framework to address these
limitations by embedding dynamic events in conversation prompts and fine-tuning
models on character-specific data. Evaluations on GPT-4 and comparisons with
industry-leading baselines demonstrate that event-driven prompts significantly
improve conversational engagement and naturalness while reducing
hallucinations. This paper explores the application of this approach in
creating lifelike chatbot interactions within the context of Honkai: Star Rail,
showcasing the potential for dynamic event-based systems to transform
role-playing and interactive dialogue.
|
2501.03278 | DenseGNN: universal and scalable deeper graph neural networks for
high-performance property prediction in crystals and molecules | cond-mat.mtrl-sci cs.LG | Generative models generate vast numbers of hypothetical materials,
necessitating fast, accurate models for property prediction. Graph Neural
Networks (GNNs) excel in this domain but face challenges like high training
costs, domain adaptation issues, and over-smoothing. We introduce DenseGNN,
which employs Dense Connectivity Network (DCN), Hierarchical Node-Edge-Graph
Residual Networks (HRN), and Local Structure Order Parameters Embedding (LOPE)
to address these challenges. DenseGNN achieves state-of-the-art performance on
datasets such as JARVIS-DFT, Materials Project, and QM9, improving the
performance of models like GIN, SchNet, and HamNet on materials datasets. By
optimizing atomic embeddings and reducing computational costs, DenseGNN enables
deeper architectures and surpasses other GNNs in crystal structure distinction,
approaching X-ray diffraction method accuracy. This advances materials
discovery and design.
|
2501.03279 | Revolutionizing Encrypted Traffic Classification with MH-Net: A
Multi-View Heterogeneous Graph Model | cs.CR cs.AI cs.LG | With the growing significance of network security, the classification of
encrypted traffic has emerged as an urgent challenge. Traditional byte-based
traffic analysis methods are constrained by the rigid granularity of
information and fail to fully exploit the diverse correlations between bytes.
To address these limitations, this paper introduces MH-Net, a novel approach
for classifying network traffic that leverages multi-view heterogeneous traffic
graphs to model the intricate relationships between traffic bytes. The essence
of MH-Net lies in aggregating varying numbers of traffic bits into multiple
types of traffic units, thereby constructing multi-view traffic graphs with
diverse information granularities. By accounting for different types of byte
correlations, such as header-payload relationships, MH-Net further endows the
traffic graph with heterogeneity, significantly enhancing model performance.
Notably, we employ contrastive learning in a multi-task manner to strengthen
the robustness of the learned traffic unit representations. Experiments
conducted on the ISCX and CIC-IoT datasets for both the packet-level and
flow-level traffic classification tasks demonstrate that MH-Net achieves the
best overall performance compared to dozens of SOTA methods.
|
2501.03282 | From Aleatoric to Epistemic: Exploring Uncertainty Quantification
Techniques in Artificial Intelligence | cs.AI cs.LG | Uncertainty quantification (UQ) is a critical aspect of artificial
intelligence (AI) systems, particularly in high-risk domains such as
healthcare, autonomous systems, and financial technology, where decision-making
processes must account for uncertainty. This review explores the evolution of
uncertainty quantification techniques in AI, distinguishing between aleatoric
and epistemic uncertainties, and discusses the mathematical foundations and
methods used to quantify these uncertainties. We provide an overview of
advanced techniques, including probabilistic methods, ensemble learning,
sampling-based approaches, and generative models, while also highlighting
hybrid approaches that integrate domain-specific knowledge. Furthermore, we
examine the diverse applications of UQ across various fields, emphasizing its
impact on decision-making, predictive accuracy, and system robustness. The
review also addresses key challenges such as scalability, efficiency, and
integration with explainable AI, and outlines future directions for research in
this rapidly developing area. Through this comprehensive survey, we aim to
provide a deeper understanding of UQ's role in enhancing the reliability,
safety, and trustworthiness of AI systems.
|
2501.03284 | Sensorformer: Cross-patch attention with global-patch compression is
effective for high-dimensional multivariate time series forecasting | cs.LG | Among the existing Transformer-based multivariate time series forecasting
methods, iTransformer, which treats each variable sequence as a token and only
explicitly extracts cross-variable dependencies, and PatchTST, which adopts a
channel-independent strategy and only explicitly extracts cross-time
dependencies, both significantly outperform most Channel-Dependent Transformers
that simultaneously extract cross-time and cross-variable dependencies. This
indicates that existing Transformer-based multivariate time series forecasting
methods still struggle to effectively fuse these two types of information. We
attribute this issue to the dynamic time lags in the causal relationships
between different variables. Therefore, we propose a new multivariate time
series forecasting Transformer, Sensorformer, which first compresses the global
patch information and then simultaneously extracts cross-variable and
cross-time dependencies from the compressed representations. Sensorformer can
effectively capture the correct inter-variable correlations and causal
relationships, even in the presence of dynamic causal lags between variables,
while also reducing the computational complexity of pure cross-patch
self-attention from $O(D^2 \cdot Patch\_num^2 \cdot d\_model)$ to $O(D^2 \cdot
Patch\_num \cdot d\_model)$. Extensive comparative and ablation experiments on
9 mainstream real-world multivariate time series forecasting datasets
demonstrate the superiority of Sensorformer. The implementation of
Sensorformer, following the style of the Time-series-library and scripts for
reproducing the main results, is publicly available at
https://github.com/BigYellowTiger/Sensorformer
|
2501.03286 | Inverse Design of Optimal Stern Shape with Convolutional Neural
Network-based Pressure Distribution | cs.LG physics.flu-dyn | Hull form design is an iterative process wherein the performance of the
hull form needs to be checked via computational fluid dynamics calculations or
model experiments. The stern shape has to undergo a process wherein hull form
variations based on pressure distribution analysis results are repeated until
the resistance and propulsion efficiency meet the design requirements. In this
study, the designer designs a pressure distribution that meets the design
requirements, and this paper proposes an inverse design algorithm that estimates
the stern shape using deep learning. A convolutional neural network was used to
extract the features of the pressure distribution expressed as a contour,
whereas a multi-task learning model was used to estimate various sections of
the stern shape. We estimated the stern shape indirectly by estimating the
control point of the B-spline and comparing the actual and converted offsets
for each section; the performance was verified, and an inverse design method is
proposed herein.
|
2501.03287 | OpenLKA: an open dataset of lane keeping assist from market autonomous
vehicles | cs.RO cs.CV cs.LG | The Lane Keeping Assist (LKA) system has become a standard feature in recent
car models. While marketed as providing auto-steering capabilities, the
system's operational characteristics and safety performance remain
underexplored, primarily due to a lack of real-world testing and comprehensive
data. To fill this gap, we extensively tested mainstream LKA systems from
leading U.S. automakers in Tampa, Florida. Using an innovative method, we
collected a comprehensive dataset that includes full Controller Area Network
(CAN) messages with LKA attributes, as well as video, perception, and lateral
trajectory data from a high-quality front-facing camera equipped with advanced
vision detection and trajectory planning algorithms. Our tests spanned diverse,
challenging conditions, including complex road geometry, adverse weather,
degraded lane markings, and their combinations. A vision language model (VLM)
further annotated the videos to capture weather, lighting, and traffic
features. Based on this dataset, we present an empirical overview of LKA's
operational features and safety performance. Key findings indicate: (i) LKA is
vulnerable to faint markings and low pavement contrast; (ii) it struggles in
lane transitions (merges, diverges, intersections), often causing unintended
departures or disengagements; (iii) steering torque limitations lead to
frequent deviations on sharp turns, posing safety risks; and (iv) LKA systems
consistently maintain rigid lane-centering, lacking adaptability on tight
curves or near large vehicles such as trucks. We conclude by demonstrating how
this dataset can guide both infrastructure planning and self-driving
technology. In view of LKA's limitations, we recommend improvements in road
geometry and pavement maintenance. Additionally, we illustrate how the dataset
supports the development of human-like LKA systems via VLM fine-tuning and
Chain of Thought reasoning.
|
2501.03288 | CodeVision: Detecting LLM-Generated Code Using 2D Token Probability Maps
and Vision Models | cs.SE cs.AI | The rise of large language models (LLMs) like ChatGPT has significantly
improved automated code generation, enhancing software development efficiency.
However, this introduces challenges in academia, particularly in distinguishing
between human-written and LLM-generated code, which complicates issues of
academic integrity. Existing detection methods, such as pre-trained models and
watermarking, face limitations in adaptability and computational efficiency. In
this paper, we propose a novel detection method using 2D token probability maps
combined with vision models, preserving spatial code structures such as
indentation and brackets. By transforming code into log probability matrices
and applying vision models like Vision Transformers (ViT) and ResNet, we
capture both content and structure for more accurate detection. Our method
shows robustness across multiple programming languages and improves upon
traditional detectors, offering a scalable and computationally efficient
solution for identifying LLM-generated code.
|
2501.03289 | Adaptive Pruning of Pretrained Transformer via Differential Inclusions | cs.LG | Large transformers have demonstrated remarkable success, making it necessary
to compress these models to reduce inference costs while preserving their
performance. Current compression algorithms prune transformers at fixed
compression ratios, requiring a unique pruning process for each ratio, which
results in high computational costs. In contrast, we propose pruning of
pretrained transformers at any desired ratio within a single pruning stage,
based on a differential inclusion for a mask parameter. This dynamic can
generate the whole regularization solution path of the mask parameter, whose
support set identifies the network structure. Therefore, the solution path
identifies a Transformer weight family with various sparsity levels, offering
greater flexibility and customization. In this paper, we introduce such an
effective pruning method, termed SPP (Solution Path Pruning). To achieve
effective pruning, we segment the transformers into paired modules, including
query-key pairs, value-projection pairs, and sequential linear layers, and
apply low-rank compression to these pairs, maintaining the output structure
while enabling structural compression within the inner states. Extensive
experiments conducted on various well-known transformer backbones have
demonstrated the efficacy of SPP.
|
2501.03290 | A Decision-Based Heterogeneous Graph Attention Network for Multi-Class
Fake News Detection | cs.LG cs.AI cs.SI | A promising tool for addressing fake news detection is Graph Neural Networks
(GNNs). However, most existing GNN-based methods rely on binary classification,
categorizing news as either real or fake. Additionally, traditional GNN models
use a static neighborhood for each node, making them susceptible to issues like
over-squashing. In this paper, we introduce a novel model named Decision-based
Heterogeneous Graph Attention Network (DHGAT) for fake news detection in a
semi-supervised setting. DHGAT effectively addresses the limitations of
traditional GNNs by dynamically optimizing and selecting the neighborhood type
for each node in every layer. It represents news data as a heterogeneous graph
where nodes (news items) are connected by various types of edges. The
architecture of DHGAT consists of a decision network that determines the
optimal neighborhood type and a representation network that updates node
embeddings based on this selection. As a result, each node learns an optimal
and task-specific computational graph, enhancing both the accuracy and
efficiency of the fake news detection process. We evaluate DHGAT on the LIAR
dataset, a large and challenging dataset for multi-class fake news detection,
which includes news items categorized into six classes. Our results demonstrate
that DHGAT outperforms existing methods, improving accuracy by approximately 4%
and showing robustness with limited labeled data.
|
2501.03291 | ADePT: Adaptive Decomposed Prompt Tuning for Parameter-Efficient
Fine-tuning | cs.CL | Prompt Tuning (PT) enables the adaptation of Pre-trained Large Language
Models (PLMs) to downstream tasks by optimizing a small amount of soft virtual
tokens, which are prepended to the input token embeddings. Recently, Decomposed
Prompt Tuning (DePT) has demonstrated superior adaptation capabilities by
decomposing the soft prompt into a shorter soft prompt and a pair of low-rank
matrices. The product of the pair of low-rank matrices is added to the input
token embeddings to offset them. Additionally, DePT achieves faster inference
compared to PT due to the shorter soft prompt. However, in this paper, we find
that the position-based token embedding offsets of DePT restrict its ability
to generalize across diverse model inputs, and that the shared embedding
offsets across many token embeddings result in sub-optimization. To tackle
these issues, we introduce \textbf{A}daptive \textbf{De}composed
\textbf{P}rompt \textbf{T}uning (ADePT), which is composed of a short soft
prompt and a shallow token-shared feed-forward neural network. ADePT utilizes
the token-shared feed-forward neural network to learn the embedding offsets for
each token, enabling adaptive embedding offsets that vary according to the
model input and better optimization of token embedding offsets. This enables
ADePT to achieve superior adaptation performance without requiring more
inference time or additional trainable parameters compared to vanilla PT and
its variants. In comprehensive experiments across 23 natural language
processing (NLP) tasks and 4 typical PLMs of different scales, we show that
ADePT consistently surpasses the leading parameter-efficient fine-tuning (PEFT)
methods, and even outperforms the full fine-tuning baseline in certain
scenarios. Code is available at \url{https://github.com/HungerPWAY/ADePT}.
|
2501.03292 | Multi-Modal One-Shot Federated Ensemble Learning for Medical Data with
Vision Large Language Model | cs.LG cs.AI | Federated learning (FL) has attracted considerable interest in the medical
domain due to its capacity to facilitate collaborative model training while
maintaining data privacy. However, conventional FL methods typically
necessitate multiple communication rounds, leading to significant communication
overhead and delays, especially in environments with limited bandwidth.
One-shot federated learning addresses these issues by conducting model training
and aggregation in a single communication round, thereby reducing communication
costs while preserving privacy. Among these, one-shot federated ensemble
learning combines independently trained client models using ensemble techniques
such as voting, further boosting performance in non-IID data scenarios. On the
other hand, existing machine learning methods in healthcare predominantly use
unimodal data (e.g., medical images or textual reports), which restricts their
diagnostic accuracy and comprehensiveness. Therefore, the integration of
multi-modal data is proposed to address these shortcomings. In this paper, we
introduce FedMME, an innovative one-shot multi-modal federated ensemble
learning framework that utilizes multi-modal data for medical image analysis.
Specifically, FedMME capitalizes on vision large language models to produce
textual reports from medical images, employs a BERT model to extract textual
features from these reports, and amalgamates these features with visual
features to improve diagnostic accuracy. Experimental results show that our
method demonstrates superior performance compared to existing one-shot
federated learning methods in healthcare scenarios across four datasets with
various data distributions. For instance, it surpasses existing one-shot
federated learning approaches by more than 17.5% in accuracy on the RSNA
dataset when applying a Dirichlet distribution with $\alpha = 0.3$.
|
2501.03295 | A Soft Sensor Method with Uncertainty-Awareness and Self-Explanation
Based on Large Language Models Enhanced by Domain Knowledge Retrieval | cs.LG cs.AI eess.SP | Data-driven soft sensors are crucial in predicting key performance indicators
in industrial systems. However, current methods predominantly rely on the
supervised learning paradigms of parameter updating, which inherently faces
challenges such as high development costs, poor robustness, training
instability, and lack of interpretability. Recently, large language models
(LLMs) have demonstrated significant potential across various domains, notably
through In-Context Learning (ICL), which enables high-performance task
execution with minimal input-label demonstrations and no prior training. This
paper aims to replace supervised learning with the emerging ICL paradigm for
soft sensor modeling to address existing challenges and explore new avenues for
advancement. To achieve this, we propose a novel framework called the Few-shot
Uncertainty-aware and self-Explaining Soft Sensor (LLM-FUESS), which includes
the Zero-shot Auxiliary Variable Selector (LLM-ZAVS) and the Uncertainty-aware
Few-shot Soft Sensor (LLM-UFSS). The LLM-ZAVS retrieves from the Industrial
Knowledge Vector Storage to enhance LLMs' domain-specific knowledge, enabling
zero-shot auxiliary variable selection. In the LLM-UFSS, we utilize text-based
context demonstrations of structured data to prompt LLMs to execute ICL for
predicting and propose a context sample retrieval augmentation strategy to
improve performance. Additionally, we explored LLMs' AIGC and probabilistic
characteristics to propose self-explanation and uncertainty quantification
methods for constructing a trustworthy soft sensor. Extensive experiments
demonstrate that our method achieves state-of-the-art predictive performance,
strong robustness, and flexibility, and effectively mitigates the training
instability found in traditional methods. To the best of our knowledge, this is
the first work to establish a soft sensor utilizing LLMs.
|
2501.03300 | Method of data forward generation with partial differential equations
for machine learning modeling in fluid mechanics | cs.LG physics.flu-dyn | Artificial intelligence (AI) for fluid mechanics has become an attractive
topic. High-fidelity data is one of the most critical issues for the successful
application of AI in fluid mechanics; however, it is expensive to obtain or
even inaccessible. This study proposes a highly efficient forward data
generation method based on partial differential equations (PDEs). Specifically,
the solutions of the PDEs are first generated, either following a random field
(e.g., a Gaussian random field, GRF, with computational complexity O(NlogN),
where N is the number of spatial points) or physical laws (e.g., a kind of
spectra, with computational complexity O(NM), where M is the number of modes);
then the source terms, boundary conditions, and initial conditions are computed
to satisfy the PDEs. Thus, data pairs of source terms, boundary conditions, and
initial conditions with the corresponding solutions of the PDEs can be
constructed. A Poisson neural network (Poisson-NN) embedded in the projection
method and a wavelet transform convolutional neural network (WTCNN) embedded in
multigrid numerical simulation for solving the incompressible Navier-Stokes
equations are proposed, respectively. The feasibility of the generated data for
training Poisson-NN and WTCNN is validated. The results indicate that, even
without any DNS data, the generated data can train these two models with
excellent generalization and accuracy. Data following physical laws can
significantly improve the convergence rate, generalization, and accuracy
compared with data generated following a GRF.
|
2501.03301 | Rethinking Byzantine Robustness in Federated Recommendation from Sparse
Aggregation Perspective | cs.CR cs.AI cs.DC cs.LG | To preserve user privacy in recommender systems, federated recommendation
(FR) based on federated learning (FL) emerges, keeping the personal data on the
local client and updating a model collaboratively. Unlike FL, FR has a unique
sparse aggregation mechanism, where the embedding of each item is updated by
only partial clients, instead of full clients in a dense aggregation of general
FL. Recently, as an essential principle of FL, model security has received
increasing attention, especially for Byzantine attacks, where malicious clients
can send arbitrary updates. The problem of exploring the Byzantine robustness
of FR is particularly critical since in the domains applying FR, e.g.,
e-commerce, malicious clients can be injected easily by registering new
accounts. However, existing Byzantine works neglect the unique sparse
aggregation of FR, making them unsuitable for our problem. Thus, we make the
first effort to investigate Byzantine attacks on FR from the perspective of
sparse aggregation, which is non-trivial: it is not clear how to define
Byzantine robustness under sparse aggregations and design Byzantine attacks
under limited knowledge/capability. In this paper, we reformulate the Byzantine
robustness under sparse aggregation by defining the aggregation for a single
item as the smallest execution unit. Then we propose a family of effective
attack strategies, named Spattack, which exploit the vulnerability in sparse
aggregation and are categorized along the adversary's knowledge and capability.
Extensive experimental results demonstrate that Spattack can effectively
prevent convergence and even break down defenses under a few malicious clients,
raising alarms for securing FR systems.
|
2501.03304 | LiLMaps: Learnable Implicit Language Maps | cs.RO cs.LG | One of the current trends in robotics is to employ large language models
(LLMs) to provide non-predefined command execution and natural human-robot
interaction. It is useful to have an environment map together with its language
representation, which can be further utilized by LLMs. Such a comprehensive
scene representation enables numerous ways of interaction with the map for
autonomously operating robots. In this work, we present an approach that
enhances incremental implicit mapping through the integration of
vision-language features. Specifically, we (i) propose a decoder optimization
technique for implicit language maps which can be used when new objects appear
on the scene, and (ii) address the problem of inconsistent vision-language
predictions between different viewing positions. Our experiments demonstrate
the effectiveness of LiLMaps and solid improvements in performance.
|
2501.03305 | Plant Leaf Disease Detection and Classification Using Deep Learning: A
Review and A Proposed System on Bangladesh's Perspective | cs.CV cs.LG | A very crucial part of Bangladeshi people's employment, GDP contribution, and
mainly livelihood is agriculture. It plays a vital role in decreasing poverty
and ensuring food security. Plant diseases are a serious stumbling block in
agricultural production in Bangladesh. At times, humans cannot detect the
disease from an infected leaf with the naked eye, and applying inorganic
chemicals or pesticides too late is often in vain, wasting all the previous
labor. The deep-learning technique of leaf-based image classification, which
has shown impressive results, can make the work of recognizing and classifying
diseases effortless and more precise. In this paper, we propose an improved
model for the detection of leaf diseases.
Our proposed paper includes the collection of data on three different kinds of
crops: bell peppers, tomatoes, and potatoes. For training and testing the
proposed CNN model, the plant leaf disease dataset collected from Kaggle is
used, which has 17,430 images. The images are labeled with 14 separate classes
of damage. The developed CNN model performs efficiently and could successfully
detect and classify the tested diseases. The proposed CNN model may have great
potential in crop disease management.
|
2501.03306 | The Robustness of Spiking Neural Networks in Federated Learning with
Compression Against Non-omniscient Byzantine Attacks | cs.CR cs.DC cs.LG | The combination of Spiking Neural Networks (SNNs), which offer exceptional
energy efficiency for inference, and Federated Learning (FL), which offers
privacy-preserving distributed training, is a rising area of interest that is
highly beneficial to Internet of Things (IoT) devices. Despite this, research
tackling Byzantine attacks and bandwidth limitations in FL-SNNs, both of which
pose significant threats to model convergence and training times, remains
largely unexplored. Going beyond proposing a solution for both of these
problems, in this work we highlight the dual benefits of FL-SNNs over FL-ANNs:
robustness against non-omniscient Byzantine adversaries (ones without access to
local clients' datasets) and greater communication efficiency. Specifically, we
discovered that a simple integration of Top-$\kappa$ sparsification into the FL
apparatus can help leverage the advantages of SNN models in both greatly
reducing bandwidth usage and significantly boosting the robustness of FL
training against non-omniscient Byzantine adversaries. Most notably, we saw a
massive improvement of roughly 40% accuracy gain in FL-SNN training under the
lethal MinMax attack.
|
2501.03324 | Analyzing Bias in Swiss Federal Supreme Court Judgments Using Facebook's
Holistic Bias Dataset: Implications for Language Model Training | cs.CL cs.AI | Natural Language Processing (NLP) is vital for computers to process and
respond accurately to human language. However, biases in training data can
introduce unfairness, especially in predicting legal judgment. This study
focuses on analyzing biases within the Swiss Judgment Prediction Dataset
(SJP-Dataset). Our aim is to ensure unbiased factual descriptions essential for
fair decision making by NLP models in legal contexts. We analyze the dataset
using social bias descriptors from the Holistic Bias dataset and employ
advanced NLP techniques, including attention visualization, to explore the
impact of dispreferred descriptors on model predictions. The study identifies
biases and examines their influence on model behavior. Challenges include
dataset imbalance and token limits affecting model performance.
|
2501.03331 | Global network control from local information | eess.SY cond-mat.dis-nn cs.SY | In the classical control of network systems, the control actions on a node
are determined as a function of the states of all nodes in the network.
Motivated by applications where the global state cannot be reconstructed in
real time due to limitations in the collection, communication, and processing
of data, here we introduce a control approach in which the control actions can
be computed as a function of the states of the nodes within a limited state
information neighborhood. The trade-off between the control performance and the
size of this neighborhood is primarily determined by the condition number of
the controllability Gramian. Our theoretical results are supported by
simulations on regular and random networks and are further illustrated by an
application to the control of power-grid synchronization. We demonstrate that
for well-conditioned Gramians, there is no significant loss of control
performance as the size of the state information neighborhood is reduced,
allowing efficient control of large networks using only local information.
|
2501.03332 | CM3T: Framework for Efficient Multimodal Learning for Inhomogeneous
Interaction Datasets | cs.CV | Challenges in cross-learning involve inhomogeneous or even inadequate amounts
of training data and a lack of resources for retraining large pretrained models.
Inspired by transfer learning techniques in NLP, adapters and prefix tuning,
this paper presents a new model-agnostic plugin architecture for
cross-learning, called CM3T, that adapts transformer-based models to new or
missing information. We introduce two adapter blocks: multi-head vision
adapters for transfer learning and cross-attention adapters for multimodal
learning. Training becomes substantially efficient as the backbone and other
plugins do not need to be finetuned along with these additions. Comparative and
ablation studies on three datasets, Epic-Kitchens-100, MPIIGroupInteraction, and
UDIVA v0.5, show the efficacy of this framework on different recording settings and
tasks. With only 12.8% trainable parameters compared to the backbone to process
video input and only 22.3% trainable parameters for two additional modalities,
we achieve comparable and even better results than the state-of-the-art. CM3T
has no specific requirements for training or pretraining and is a step towards
bridging the gap between a general model and specific practical applications of
video classification.
|
2501.03336 | Mobile Augmented Reality Framework with Fusional Localization and Pose
Estimation | cs.CV | As a novel way of presenting information, augmented reality (AR) enables
people to interact with the physical world in a direct and intuitive way. While
there are some mobile AR products implemented with specific hardware at a high
cost, the software approaches of AR implementation on mobile platforms(such as
smartphones, tablet PC, etc.) are still far from practical use. GPS-based
mobile AR systems usually perform poorly due to the inaccurate positioning in
the indoor environment. Previous vision-based pose estimation methods need to
continuously track predefined markers within a short distance, which greatly
degrades the user experience. This paper first conducts a comprehensive study of the
state-of-the-art AR and localization systems on mobile platforms. Then, we
propose an effective indoor mobile AR framework. In the framework, a fusional
localization method and a new pose estimation implementation are developed to
increase the overall matching rate and thus improve AR display accuracy.
Experiments show that our framework has higher performance than approaches
purely based on images or Wi-Fi signals. We achieve low average error distances
(0.61-0.81m) and accurate matching rates (77%-82%) when the average sampling
grid length is set to 0.5m.
|
2501.03349 | FTA-FTL: A Fine-Tuned Aggregation Federated Transfer Learning Scheme for
Lithology Microscopic Image Classification | cs.LG cs.AI cs.CV | Lithology discrimination is a crucial activity in characterizing oil
reservoirs, and processing lithology microscopic images is an essential
technique for investigating fossils and minerals and geological assessment of
shale oil exploration. In this way, Deep Learning (DL) technique is a powerful
approach for building robust classifier models. However, there is still a
considerable challenge to collect and produce a large dataset.
Transfer-learning and data augmentation techniques have emerged as popular
approaches to tackle this problem. Furthermore, due to different reasons,
especially data privacy, individuals, organizations, and industry companies
often are not willing to share their sensitive data and information. Federated
Learning (FL) has emerged to train a highly accurate central model across
multiple decentralized edge servers without transferring sensitive data,
preserving sensitive data, and enhancing security. This study involves two
phases: the first phase conducts lithology microscopic image
classification on a small dataset using transfer learning. In doing so, various
pre-trained DL model architectures are comprehensively compared for the
classification task. In the second phase, we formulated the classification task
as a Federated Transfer Learning (FTL) scheme and proposed a Fine-Tuned
Aggregation strategy for Federated Learning (FTA-FTL). In order to perform a
comprehensive experimental study, several metrics such as accuracy, f1 score,
precision, specificity, sensitivity (recall), and confusion matrix are taken
into account. The results are in excellent agreement and confirm the efficiency
of the proposed scheme, and show that the proposed FTA-FTL algorithm is capable
of achieving approximately the same results as those obtained by the centralized
implementation for the lithology microscopic image classification task.
|
2501.03358 | Data integrity vs. inference accuracy in large AIS datasets | cs.CR cs.LG | Automatic Ship Identification Systems (AIS) play a key role in monitoring
maritime traffic, providing the data necessary for analysis and
decision-making. The integrity of this data is fundamental to the correctness
of inference and decision-making in the context of maritime safety, traffic
management, and environmental protection. This paper analyzes the impact of
data integrity in large AIS datasets on classification accuracy. It also
presents error detection and correction methods and data verification
techniques that can improve the reliability of AIS systems. The results show
that improving the integrity of AIS data significantly improves the quality of
inference, which has a direct impact on operational efficiency and safety at
sea.
|
2501.03360 | Quantum Feature-Empowered Deep Classification for Fast Mangrove Mapping | quant-ph cs.CV eess.IV | A mangrove mapping (MM) algorithm is an essential classification tool for
environmental monitoring. The recent literature shows that compared with other
index-based MM methods that treat pixels as spatially independent,
convolutional neural networks (CNNs) are crucial for leveraging spatial
continuity information, leading to improved classification performance. In this
work, we go a step further to show that quantum features provide radically new
information for CNN to further upgrade the classification results. Simply
speaking, CNN computes affine-mapping features, while quantum neural network
(QNN) offers unitary-computing features, thereby offering a fresh perspective
in the final decision-making (classification). To address the challenging MM
problem, we design an entangled spatial-spectral quantum feature extraction
module. Notably, to ensure that the quantum features contribute genuinely novel
information (unaffected by traditional CNN features), we design a separate
network track consisting solely of quantum neurons with built-in
interpretability. The extracted pure quantum information is then fused with
traditional feature information to jointly make the final decision. The
proposed quantum-empowered deep network (QEDNet) is very lightweight, so the
improvement does come from the cooperation between CNN and QNN (rather than
parameter augmentation). Extensive experiments will be conducted to demonstrate
the superiority of QEDNet.
|
2501.03368 | Detecting Defective Wafers Via Modular Networks | cs.LG | The growing availability of sensors within semiconductor manufacturing
processes makes it feasible to detect defective wafers with data-driven models.
Without directly measuring the quality of semiconductor devices, they capture
the modalities between diverse sensor readings and can be used to predict key
quality indicators (KQI, e.g., roughness, resistance) to detect faulty
products, significantly reducing the capital and human cost in maintaining
physical metrology steps. Nevertheless, existing models pay little attention to
the correlations among different processes for diverse wafer products and
commonly struggle with generalizability issues. To enable generic fault
detection, in this work, we propose a modular network (MN) trained using time
series stage-wise datasets that embodies the structure of the manufacturing
process. It decomposes KQI prediction as a combination of stage modules to
simulate compositional semiconductor manufacturing, universally enhancing
faulty wafer detection among different wafer types and manufacturing processes.
Extensive experiments demonstrate the usefulness of our approach, and shed
light on how the compositional design provides an interpretable interface for
more practical applications.
|
2501.03370 | Advanced Machine Learning Techniques for Social Support Detection on
Social Media | cs.CL cs.AI cs.HC cs.LG | The widespread use of social media highlights the need to understand its
impact, particularly the role of online social support. This study uses a
dataset focused on online social support, which includes binary and multiclass
classifications of social support content on social media. The classification
of social support is divided into three tasks. The first task focuses on
distinguishing between supportive and non-supportive. The second task aims to
identify whether the support is directed toward an individual or a group. The
third task categorizes the specific type of social support, grouping it into
categories such as Nation, LGBTQ, Black people, Women, Religion, and Other (if
it does not fit into the previously mentioned categories). To address data
imbalances in these tasks, we employed K-means clustering for balancing the
dataset and compared the results with the original unbalanced data. Using
advanced machine learning techniques, including transformers and zero-shot
learning approaches with GPT-3, GPT-4, and GPT-4o, we predict social support
levels in various contexts. The effectiveness of the dataset is evaluated using
baseline models across different learning approaches, with transformer-based
methods demonstrating superior performance. Additionally, we achieved a 0.4%
increase in the macro F1 score for the second task and a 0.7% increase for the
third task, compared to previous work utilizing traditional machine learning
with psycholinguistic and unigram-based TF-IDF values.
|
2501.03374 | License Plate Images Generation with Diffusion Models | cs.CV cs.AI cs.LG | Despite the evident practical importance of license plate recognition (LPR),
corresponding research is limited by the volume of publicly available datasets
due to privacy regulations such as the General Data Protection Regulation
(GDPR). To address this challenge, synthetic data generation has emerged as a
promising approach. In this paper, we propose to synthesize realistic license
plates (LPs) using diffusion models, inspired by recent advances in image and
video generation. In our experiments a diffusion model was successfully trained
on a Ukrainian LP dataset, and 1000 synthetic images were generated for
detailed analysis. Through manual classification and annotation of the
generated images, we performed a thorough study of the model output, such as
success rate, character distributions, and type of failures. Our contributions
include experimental validation of the efficacy of diffusion models for LP
synthesis, along with insights into the characteristics of the generated data.
Furthermore, we have prepared a synthetic dataset consisting of 10,000 LP
images, publicly available at https://zenodo.org/doi/10.5281/zenodo.13342102.
Conducted experiments empirically confirm the usefulness of synthetic data for
the LPR task. Despite the initial performance gap between the model trained
with real and synthetic data, the expansion of the training data set with
pseudolabeled synthetic data leads to an improvement in LPR accuracy by 3%
compared to baseline.
|
2501.03376 | Existential Crisis: A Social Robot's Reason for Being | cs.RO cs.AI cs.HC | As robots become ever more important in our daily lives, there is a growing
need to understand how they are perceived by people. This study aims to
investigate how the user perception of robots is influenced by displays of
personality. Using LLMs and speech to text technology, we designed a
within-subject study to compare two conditions: a personality-driven robot and
a purely task-oriented, personality-neutral robot. Twelve participants,
recruited from the Socially Intelligent Robotics course at Vrije Universiteit
Amsterdam, interacted with a Nao robot tasked with asking them a set of medical
questions under both conditions. After completing both interactions, the
participants completed a user experience questionnaire measuring their
emotional states and robot perception using standardized questionnaires from
the SRI and Psychology literature.
|
2501.03383 | The Artificial Scientist -- in-transit Machine Learning of Plasma
Simulations | physics.comp-ph cs.DC cs.LG | Increasing HPC cluster sizes and large-scale simulations that produce
petabytes of data per run, create massive IO and storage challenges for
analysis. Deep learning-based techniques, in particular, make use of these
amounts of domain data to extract patterns that help build scientific
understanding. Here, we demonstrate a streaming workflow in which simulation
data is streamed directly to a machine-learning (ML) framework, circumventing
the file system bottleneck. Data is transformed in transit, asynchronously to
the simulation and the training of the model. With the presented workflow, data
operations can be performed in common and easy-to-use programming languages,
freeing the application user from adapting the application output routines. As
a proof-of-concept we consider a GPU accelerated particle-in-cell (PIConGPU)
simulation of the Kelvin-Helmholtz instability (KHI). We employ experience
replay to avoid catastrophic forgetting in learning from this non-steady
process in a continual manner. We detail challenges addressed while porting and
scaling to the Frontier exascale system.
|
2501.03392 | Over-the-Air Fair Federated Learning via Multi-Objective Optimization | cs.LG cs.AI | In federated learning (FL), heterogeneity among the local dataset
distributions of clients can result in unsatisfactory performance for some,
leading to an unfair model. To address this challenge, we propose an
over-the-air fair federated learning algorithm (OTA-FFL), which leverages
over-the-air computation to train fair FL models. By formulating FL as a
multi-objective minimization problem, we introduce a modified Chebyshev
approach to compute adaptive weighting coefficients for gradient aggregation in
each communication round. To enable efficient aggregation over the multiple
access channel, we derive analytical solutions for the optimal transmit scalars
at the clients and the de-noising scalar at the parameter server. Extensive
experiments demonstrate the superiority of OTA-FFL in achieving fairness and
robust performance compared to existing methods.
|
2501.03394 | Enhanced Importance Sampling through Latent Space Exploration in
Normalizing Flows | cs.RO cs.AI cs.LG | Importance sampling is a rare event simulation technique used in Monte Carlo
simulations to bias the sampling distribution towards the rare event of
interest. By assigning appropriate weights to sampled points, importance
sampling allows for more efficient estimation of rare events or tails of
distributions. However, importance sampling can fail when the proposal
distribution does not effectively cover the target distribution. In this work,
we propose a method for more efficient sampling by updating the proposal
distribution in the latent space of a normalizing flow. Normalizing flows learn
an invertible mapping from a target distribution to a simpler latent
distribution. The latent space can be more easily explored during the search
for a proposal distribution, and samples from the proposal distribution are
recovered in the space of the target distribution via the invertible mapping.
We empirically validate our methodology on simulated robotics applications such
as autonomous racing and aircraft ground collision avoidance.
|
2501.03397 | DoubleDiffusion: Combining Heat Diffusion with Denoising Diffusion for
Generative Learning on 3D Meshes | cs.CV | This paper proposes DoubleDiffusion, a novel framework that combines heat
dissipation diffusion and denoising diffusion for direct generative learning on
3D mesh surfaces. Our approach addresses the challenges of generating
continuous signal distributions residing on a curved manifold surface. Unlike
previous methods that rely on unrolling 3D meshes into 2D or adopting field
representations, DoubleDiffusion leverages the Laplace-Beltrami operator to
process features respecting the mesh structure. This combination enables
effective geometry-aware signal diffusion across the underlying geometry. As
shown in Fig. 1, we demonstrate that DoubleDiffusion has the ability to generate
RGB signal distributions on complex 3D mesh surfaces and achieves per-category
shape-conditioned texture generation across different shape geometry. Our work
contributes a new direction in diffusion-based generative modeling on 3D
surfaces, with potential applications in the field of 3D asset generation.
|
2501.03399 | Compression of 3D Gaussian Splatting with Optimized Feature Planes and
Standard Video Codecs | cs.CV cs.MM | 3D Gaussian Splatting is a recognized method for 3D scene representation,
known for its high rendering quality and speed. However, its substantial data
requirements present challenges for practical applications. In this paper, we
introduce an efficient compression technique that significantly reduces storage
overhead by using compact representation. We propose a unified architecture
that combines point cloud data and feature planes through a progressive
tri-plane structure. Our method utilizes 2D feature planes, enabling continuous
spatial representation. To further optimize these representations, we
incorporate entropy modeling in the frequency domain, specifically designed for
standard video codecs. We also propose channel-wise bit allocation to achieve a
better trade-off between bitrate consumption and feature plane representation.
Consequently, our model effectively leverages spatial correlations within the
feature planes to enhance rate-distortion performance using standard,
non-differentiable video codecs. Experimental results demonstrate that our
method outperforms existing methods in data compactness while maintaining high
rendering quality. Our project page is available at
https://fraunhoferhhi.github.io/CodecGS
|
2501.03400 | Power System Steady-State Estimation Revisited | math.OC cs.SY eess.SY | In power system steady-state estimation (PSSE), one needs to consider (1) the
need for robust statistics, (2) the nonconvex transmission constraints, (3) the
fast-varying nature of the inputs, and the corresponding need to track optimal
trajectories as closely as possible. These challenges have not yet been
considered in combination. In this paper, we address all three challenges. The need
for robustness (1) is addressed by using an approach based on the so-called
Huber model. The non-convexity (2) of the problem, which results in first order
methods failing to find global minima, is dealt with by applying global
methods. One of these methods is based on a mixed integer quadratic
formulation, which provides results of several orders of magnitude better than
conventional gradient descent. Lastly, the trajectory tracking (3) is discussed
by showing under which conditions the trajectory tracking of the SDP
relaxations has meaning.
|
2501.03402 | On the Adversarial Robustness of Benjamini Hochberg | math.ST cs.LG stat.TH | The Benjamini-Hochberg (BH) procedure is widely used to control the false
discovery rate (FDR) in multiple testing. Applications of this control abound
in drug discovery, forensics, anomaly detection, and, in particular, machine
learning, ranging from nonparametric outlier detection to out-of-distribution
detection and one-class classification methods. Considering this control could
be relied upon in critical safety/security contexts, we investigate its
adversarial robustness. More precisely, we study under what conditions BH does
and does not exhibit adversarial robustness, we present a class of simple and
easily implementable adversarial test-perturbation algorithms, and we perform
computational experiments. With our algorithms, we demonstrate that there are
conditions under which BH's control can be significantly broken with relatively
few (even just one) test score perturbation(s), and provide non-asymptotic
guarantees on the expected adversarial-adjustment to FDR. Our technical
analysis involves a combinatorial reframing of the BH procedure as a "balls
into bins" process and draws a connection to generalized ballot problems to
facilitate an information-theoretic approach for deriving non-asymptotic lower
bounds.
|
2501.03403 | BoundingDocs: a Unified Dataset for Document Question Answering with
Spatial Annotations | cs.CL cs.AI | We present a unified dataset for document Question-Answering (QA), which is
obtained by combining several public datasets related to Document AI and visually
rich document understanding (VRDU). Our main contribution is twofold: on the
one hand we reformulate existing Document AI tasks, such as Information
Extraction (IE), into a Question-Answering task, making it a suitable resource
for training and evaluating Large Language Models; on the other hand, we
release the OCR of all the documents and include the exact position of the
answer to be found in the document image as a bounding box. Using this dataset,
we explore the impact of different prompting techniques (that might include
bounding box information) on the performance of open-weight models, identifying
the most effective approaches for document comprehension.
|
2501.03405 | A Study of the Efficacy of Generative Flow Networks for Robotics and
Machine Fault-Adaptation | cs.RO | Advancements in robotics have opened possibilities to automate tasks in
various fields such as manufacturing, emergency response and healthcare.
However, a significant challenge that prevents robots from operating in
real-world environments effectively is out-of-distribution (OOD) situations,
wherein robots encounter unforeseen situations. One major OOD situation occurs when
robots encounter faults, making fault adaptation essential for real-world
operation. Current state-of-the-art reinforcement learning
algorithms show promising results but suffer from sample inefficiency, leading
to low adaptation speed due to their limited ability to generalize to OOD
situations. Our research is a step towards adding hardware fault tolerance and
fast fault adaptability to machines. In this research, our primary focus is to
investigate the efficacy of generative flow networks in robotic environments,
particularly in the domain of machine fault adaptation. We simulated a robotic
environment called Reacher in our experiments. We modified this environment to
introduce four distinct fault environments that replicate real-world
machine/robot malfunctions. The empirical evaluation of this research
indicates that continuous generative flow networks (CFlowNets) indeed have the
capability to add adaptive behaviors in machines under adversarial conditions.
Furthermore, the comparative analysis of CFlowNets with reinforcement learning
algorithms also provides some key insights into the performance in terms of
adaptation speed and sample efficiency. Additionally, a separate study
investigates the implications of transferring knowledge from pre-fault task to
post-fault environments. Our experiments confirm that CFlowNets have the
potential to be deployed on real-world machines and can demonstrate
adaptability in the case of malfunctions to maintain functionality.
|
2501.03406 | Low-Order Flow Reconstruction and Uncertainty Quantification in
Disturbed Aerodynamics Using Sparse Pressure Measurements | cs.LG physics.flu-dyn | This paper presents a novel machine-learning framework for reconstructing
low-order gust-encounter flow field and lift coefficients from sparse, noisy
surface pressure measurements. Our study thoroughly investigates the
time-varying response of sensors to gust-airfoil interactions, uncovering
valuable insights into optimal sensor placement. To address uncertainties in
deep learning predictions, we implement probabilistic regression strategies to
model both epistemic and aleatoric uncertainties. Epistemic uncertainty,
reflecting the model's confidence in its predictions, is modeled using Monte
Carlo dropout, as an approximation to variational inference in the Bayesian
framework, treating the neural network as a stochastic entity. On the other
hand, aleatoric uncertainty, arising from noisy input measurements, is captured
via learned statistical parameters, which propagate measurement noise through
the network into the final predictions. Our results showcase the efficacy of
this dual uncertainty quantification strategy in accurately predicting
aerodynamic behavior under extreme conditions while maintaining computational
efficiency, underscoring its potential to improve online sensor-based flow
estimation in real-world applications.
|
2501.03410 | ScaleMAI: Accelerating the Development of Trusted Datasets and AI Models | cs.CV | Building trusted datasets is critical for transparent and responsible Medical
AI (MAI) research, but creating even small, high-quality datasets can take
years of effort from multidisciplinary teams. This process often delays AI
benefits, as human-centric data creation and AI-centric model development are
treated as separate, sequential steps. To overcome this, we propose ScaleMAI,
an agent of AI-integrated data curation and annotation, allowing data quality
and AI performance to improve in a self-reinforcing cycle and reducing
development time from years to months. We adopt pancreatic tumor detection as
an example. First, ScaleMAI progressively creates a dataset of 25,362 CT scans,
including per-voxel annotations for benign/malignant tumors and 24 anatomical
structures. Second, through progressive human-in-the-loop iterations, ScaleMAI
provides a Flagship AI Model that can approach the proficiency of expert
annotators (30 years of experience) in detecting pancreatic tumors. The Flagship Model
significantly outperforms models developed from smaller, fixed-quality
datasets, with substantial gains in tumor detection (+14%), segmentation (+5%),
and classification (72%) on three prestigious benchmarks. In summary, ScaleMAI
transforms the speed, scale, and reliability of medical dataset creation,
paving the way for a variety of impactful, data-driven applications.
|
2501.03413 | SALT: Sales Autocompletion Linked Business Tables Dataset | cs.LG cs.AI cs.DB | Foundation models, particularly those that incorporate Transformer
architectures, have demonstrated exceptional performance in domains such as
natural language processing and image processing. Adapting these models to
structured data, like tables, however, introduces significant challenges. These
difficulties are even more pronounced when addressing multi-table data linked
via foreign key, which is prevalent in the enterprise realm and crucial for
empowering business use cases. Despite its substantial impact, research
focusing on such linked business tables within enterprise settings remains an
important yet underexplored domain. To address this, we introduce
a curated dataset sourced from an Enterprise Resource Planning (ERP) system,
featuring extensive linked tables. This dataset is specifically designed to
support research endeavors in table representation learning. By providing
access to authentic enterprise data, our goal is to potentially enhance the
effectiveness and applicability of models for real-world business contexts.
|
2501.03416 | TinySense: A Lighter Weight and More Power-efficient Avionics System for
Flying Insect-scale Robots | cs.RO cs.SY eess.SY | In this paper, we investigate the prospects and challenges of sensor suites
in achieving autonomous control for flying insect robots (FIRs) weighing less
than a gram. FIRs, owing to their minuscule weight and size, offer unparalleled
advantages in terms of material cost and scalability. However, their size
introduces considerable control challenges, notably high-speed dynamics,
restricted power, and limited payload capacity. While there have been notable
advancements in developing lightweight sensors, often drawing inspiration from
biological systems, no sub-gram aircraft has been able to attain sustained
hover without relying on feedback from external sensing such as a motion
capture system. The lightest vehicle capable of sustained hover -- the first
level of "sensor autonomy" -- is the much larger 28 g Crazyflie. Previous work
reported a reduction in size of that vehicle's avionics suite to 187 mg and 21
mW. Here, we report a further reduction in mass and power to only 78.4 mg and
15 mW. We replaced the laser rangefinder with a lighter and more efficient
pressure sensor, and built a smaller optic flow sensor around a global-shutter
imaging chip. A Kalman Filter (KF) fuses these measurements to estimate the
state variables that are needed to control hover: pitch angle, translational
velocity, and altitude. Our system achieved performance comparable to that of
the Crazyflie's estimator while in flight, with root mean squared errors of
1.573 degrees, 0.186 m/s, and 0.139 m, respectively, relative to motion
capture.
|
2501.03420 | Designing Telepresence Robots to Support Place Attachment | cs.HC cs.RO | People feel attached to places that are meaningful to them, which
psychological research calls "place attachment." Place attachment is associated
with self-identity, self-continuity, and psychological well-being. Even small
cues, including videos, images, sounds, and scents, can facilitate feelings of
connection and belonging to a place. Telepresence robots that allow people to
see, hear, and interact with a remote place have the potential to establish and
maintain a connection with places and support place attachment. In this paper,
we explore the design space of robotic telepresence to promote place
attachment, including how users might be guided in a remote place and whether
they experience the environment individually or with others. We prototyped a
telepresence robot that allows one or more remote users to visit a place and be
guided by a local human guide or a conversational agent. Participants were 38
university alumni who visited their alma mater via the telepresence robot. Our
findings uncovered four distinct user personas in the remote experience and
highlighted the need for social participation to enhance place attachment. We
generated design implications for future telepresence robot design to support
people's connections with places of personal significance.
|
2501.03430 | A Self-supervised Diffusion Bridge for MRI Reconstruction | eess.IV cs.CV | Diffusion bridges (DBs) are a class of diffusion models that enable faster
sampling by interpolating between two paired image distributions. Training
traditional DBs for image reconstruction requires high-quality reference
images, which limits their applicability to settings where such references are
unavailable. We propose SelfDB as a novel self-supervised method for training
DBs directly on available noisy measurements without any high-quality reference
images. SelfDB formulates the diffusion process by further sub-sampling the
available measurements two additional times and training a neural network to
reverse the corresponding degradation process by using the available
measurements as the training targets. We validate SelfDB on compressed sensing
MRI, showing its superior performance compared to the denoising diffusion
models.
|
2501.03432 | Mixture-of-Experts Graph Transformers for Interpretable Particle
Collision Detection | cs.LG hep-ph | The Large Hadron Collider at CERN produces immense volumes of complex data
from high-energy particle collisions, demanding sophisticated analytical
techniques for effective interpretation. Neural Networks, including Graph
Neural Networks, have shown promise in tasks such as event classification and
object identification by representing collisions as graphs. However, while
Graph Neural Networks excel in predictive accuracy, their "black box" nature
often limits their interpretability, making it difficult to trust their
decision-making processes. In this paper, we propose a novel approach that
combines a Graph Transformer model with Mixture-of-Expert layers to achieve
high predictive performance while embedding interpretability into the
architecture. By leveraging attention maps and expert specialization, the model
offers insights into its internal decision-making, linking predictions to
physics-informed features. We evaluate the model on simulated events from the
ATLAS experiment, focusing on distinguishing rare Supersymmetric signal events
from Standard Model background. Our results highlight that the model achieves
competitive classification accuracy while providing interpretable outputs that
align with known physics, demonstrating its potential as a robust and
transparent tool for high-energy physics data analysis. This approach
underscores the importance of explainability in machine learning methods
applied to high energy physics, offering a path toward greater trust in
AI-driven discoveries.
|
2501.03437 | DAMAGE: Detecting Adversarially Modified AI Generated Text | cs.CL | AI humanizers are a new class of online software tools meant to paraphrase
and rewrite AI-generated text in a way that allows them to evade AI detection
software. We study 19 AI humanizer and paraphrasing tools and qualitatively
assess their effects and faithfulness in preserving the meaning of the original
text. We show that many existing AI detectors fail to detect humanized text.
Finally, we demonstrate a robust model that can detect humanized AI text while
maintaining a low false positive rate using a data-centric augmentation
approach. We attack our own detector by training a fine-tuned model optimized
against its predictions, and show that our detector's
cross-humanizer generalization is sufficient to remain robust to this attack.
|
2501.03441 | Finding A Voice: Evaluating African American Dialect Generation for
Chatbot Technology | cs.CL | As chatbots become increasingly integrated into everyday tasks, designing
systems that accommodate diverse user populations is crucial for fostering
trust, engagement, and inclusivity. This study investigates the ability of
contemporary Large Language Models (LLMs) to generate African American
Vernacular English (AAVE) and evaluates the impact of AAVE usage on user
experiences in chatbot applications. We analyze the performance of three LLM
families (Llama, GPT, and Claude) in producing AAVE-like utterances at varying
dialect intensities and assess user preferences across multiple domains,
including healthcare and education. Despite LLMs' proficiency in generating
AAVE-like language, findings indicate that AAVE-speaking users prefer Standard
American English (SAE) chatbots, with higher levels of AAVE correlating with
lower ratings for a variety of characteristics, including chatbot
trustworthiness and role appropriateness. These results highlight the
complexities of creating inclusive AI systems and underscore the need for
further exploration of diversity to enhance human-computer interactions.
|
2501.03443 | Optimization Learning | math.OC cs.AI | This article introduces the concept of optimization learning, a methodology
to design optimization proxies that learn the input/output mapping of
parametric optimization problems. These optimization proxies are trustworthy by
design: they compute feasible solutions to the underlying optimization
problems, provide quality guarantees on the returned solutions, and scale to
large instances. Optimization proxies are differentiable programs that combine
traditional deep learning technology with repair or completion layers to
produce feasible solutions. The article shows that optimization proxies can be
trained end-to-end in a self-supervised way. It presents methodologies to
provide performance guarantees and to scale optimization proxies to large-scale
optimization problems. The potential of optimization proxies is highlighted
through applications in power systems and, in particular, real-time risk
assessment and security-constrained optimal power flow.
|
2501.03445 | Physics-Constrained Generative Artificial Intelligence for Rapid Takeoff
Trajectory Design | cs.LG | To aid urban air mobility (UAM), electric vertical takeoff and landing
(eVTOL) aircraft are being developed. Conventional multidisciplinary analysis
and optimization (MDAO) can be expensive, while surrogate-based optimization
can struggle with challenging physical constraints. This work proposes
physics-constrained generative adversarial networks (physicsGAN), to
intelligently parameterize the takeoff control profiles of an eVTOL aircraft
and to transform the original design space to a feasible space. Specifically,
the transformed feasible space refers to a space where all designs directly
satisfy all design constraints. The physicsGAN-enabled surrogate-based takeoff
trajectory design framework was demonstrated on the Airbus A3 Vahana. The
physicsGAN generated only feasible control profiles of power and wing angle in
the feasible space with around 98.9% of designs satisfying all constraints. The
proposed design framework obtained 99.6% accuracy compared with
simulation-based optimal design and took only 2.2 seconds, which reduced the
computational time by around 200 times. Meanwhile, data-driven GAN-enabled
surrogate-based optimization took 21.9 seconds using a derivative-free
optimizer, which was around an order of magnitude slower than the proposed
framework. Moreover, the data-driven GAN-based optimization using
gradient-based optimizers could not consistently find the optimal design during
random trials and got stuck in an infeasible region, which is problematic in
real practice. Therefore, the proposed physicsGAN-based design framework
outperformed data-driven GAN-based design in terms of efficiency (2.2
seconds), optimality (99.6% accuracy), and feasibility (100% feasible).
According to the literature review, this is the first physics-constrained
generative artificial intelligence enabled by surrogate models.
|
2501.03448 | Optimizing Value of Learning in Task-Oriented Federated Meta-Learning
Systems | cs.LG | Federated Learning (FL) has gained significant attention in recent years due
to its distributed nature and privacy-preserving benefits. However, a key
limitation of conventional FL is that it learns and distributes a common global
model to all participants, which fails to provide customized solutions for
diverse task requirements. Federated meta-learning (FML) offers a promising
solution to this issue by enabling devices to fine-tune local models after
receiving a shared meta-model from the server. In this paper, we propose a
task-oriented FML framework over non-orthogonal multiple access (NOMA)
networks. A novel metric, termed value of learning (VoL), is introduced to
assess the individual training needs across devices. Moreover, a task-level
weight (TLW) metric is defined based on task requirements and fairness
considerations, guiding the prioritization of edge devices during FML training.
The formulated problem, to maximize the sum of TLW-based VoL across devices,
forms a non-convex mixed-integer non-linear programming (MINLP) challenge,
addressed here using a parameterized deep Q-network (PDQN) algorithm to handle
both discrete and continuous variables. Simulation results demonstrate that our
approach significantly outperforms baseline schemes, underscoring the
advantages of the proposed framework.
|
2501.03449 | Feasibility of short blocklength Reed-Muller codes for physical layer
security in real environment | cs.IT cs.CR eess.SP math.IT | In this paper, we investigate the application of Reed-Muller (RM) codes for
Physical-layer security in a real world wiretap channel scenario. Utilizing
software-defined radios (SDRs) in a real indoor environment, we implement a
coset coding scheme that leverages the hierarchical structure of RM codes to
secure data transmission. The generator matrix of the RM code is used to
partition codewords into cosets in the usual way, where each message
corresponds to a unique coset, and auxiliary bits select specific codewords
within each coset. This approach enables the legitimate receiver (Bob) to
decode the transmitted message with minimal information leakage to the
eavesdropper (Eve), thus protecting the confidentiality of the communication
through the coset structure. Mutual information neural estimation (MINE) is used to
quantify information leakage and validate the effectiveness of the scheme.
Experimental results indicate that RM codes can achieve robust security even in
practical environments affected by real-world channel impairments. These
findings demonstrate the potential of RM codes as an efficient solution for
physical-layer security, particularly for applications that require low latency
and short blocklengths.
|
2501.03451 | Structure-Preference Enabled Graph Embedding Generation under
Differential Privacy | stat.ML cs.LG cs.SI | Graph embedding generation techniques aim to learn low-dimensional vectors
for each node in a graph and have recently gained increasing research
attention. Publishing low-dimensional node vectors enables various graph
analysis tasks, such as structural equivalence and link prediction. Yet,
improper publication opens a backdoor to malicious attackers, who can infer
sensitive information of individuals from the low-dimensional node vectors.
Existing methods tackle this issue by developing deep graph learning models
with differential privacy (DP). However, they often suffer from large noise
injections and cannot provide structural preferences consistent with mining
objectives. Recently, skip-gram based graph embedding generation techniques
have become widely used due to their ability to extract customizable
structures. Based on skip-gram, we present SE-PrivGEmb, a structure-preference
enabled graph embedding generation method under DP. For arbitrary structure preferences, we design a
unified noise tolerance mechanism via perturbing non-zero vectors. This
mechanism mitigates utility degradation caused by high sensitivity. By
carefully designing negative sampling probabilities in skip-gram, we
theoretically demonstrate that skip-gram can preserve arbitrary proximities,
which quantify structural features in graphs. Extensive experiments show that
our method outperforms existing state-of-the-art methods under structural
equivalence and link prediction tasks.
|
2501.03456 | Text to Band Gap: Pre-trained Language Models as Encoders for
Semiconductor Band Gap Prediction | cs.CL cond-mat.mtrl-sci | In this study, we explore the use of a transformer-based language model as an
encoder to predict the band gaps of semiconductor materials directly from their
text descriptions. Quantum chemistry simulations, including Density Functional
Theory (DFT), are computationally intensive and time-consuming, which limits
their practicality for high-throughput material screening, particularly for
complex systems. Shallow machine learning (ML) models, while effective, often
require extensive data preprocessing to convert non-numerical material
properties into numerical inputs. In contrast, our approach leverages textual
data directly, bypassing the need for complex feature engineering. We generate
material descriptions in two formats: formatted strings combining features and
natural language text generated using the ChatGPT API. We demonstrate that the
RoBERTa model, pre-trained on natural language processing tasks, performs
effectively as an encoder for prediction tasks. With minimal fine-tuning, it
achieves a mean absolute error (MAE) of approximately 0.33 eV, performing
better than shallow machine learning models such as Support Vector Regression,
Random Forest, and XGBoost. Even when only the linear regression head is
trained while keeping the RoBERTa encoder layers frozen, the accuracy remains
nearly identical to that of the fully trained model. This demonstrates that the
pre-trained RoBERTa encoder is highly adaptable for processing domain-specific
text related to material properties, such as the band gap, significantly
reducing the need for extensive retraining. This study highlights the potential
of transformer-based language models to serve as efficient and versatile
encoders for semiconductor materials property prediction tasks.
|
2501.03458 | Activating Associative Disease-Aware Vision Token Memory for LLM-Based
X-ray Report Generation | eess.IV cs.AI cs.CV | X-ray image based medical report generation has achieved significant progress
in recent years with the help of large language models; however, these models
have not fully exploited the effective information in visual image regions,
resulting in reports that are linguistically sound but insufficient in
describing key diseases. In this paper, we propose a novel associative
memory-enhanced X-ray report generation model that effectively mimics the
process of professional doctors writing medical reports. It considers both the
mining of global and local visual information and associates historical report
information to better complete the writing of the current report. Specifically,
given an X-ray image, we first utilize a classification model along with its
activation maps to accomplish the mining of visual regions highly associated
with diseases and the learning of disease query tokens. Then, we employ a
visual Hopfield network to establish memory associations for disease-related
tokens, and a report Hopfield network to retrieve report memory information.
This process facilitates the generation of high-quality reports based on a
large language model and achieves state-of-the-art performance on multiple
benchmark datasets, including the IU X-ray, MIMIC-CXR, and Chexpert Plus. The
source code of this work is released on
\url{https://github.com/Event-AHU/Medical_Image_Analysis}.
|
2501.03461 | Radar Signal Recognition through Self-Supervised Learning and Domain
Adaptation | cs.LG cs.AI eess.SP | Automatic radar signal recognition (RSR) plays a pivotal role in electronic
warfare (EW), as accurately classifying radar signals is critical for informing
decision-making processes. Recent advances in deep learning have shown
significant potential in improving RSR performance in domains with ample
annotated data. However, these methods fall short in EW scenarios where
annotated RF data are scarce or impractical to obtain. To address these
challenges, we introduce a self-supervised learning (SSL) method which utilises
masked signal modelling and RF domain adaptation to enhance RSR performance in
environments with limited RF samples and labels. Specifically, we investigate
pre-training masked autoencoders (MAE) on baseband in-phase and quadrature
(I/Q) signals from various RF domains and subsequently transfer the learned
representation to the radar domain, where annotated data are limited. Empirical
results show that our lightweight self-supervised ResNet model with domain
adaptation achieves up to a 17.5% improvement in 1-shot classification accuracy
when pre-trained on in-domain signals (i.e., radar signals) and up to a 16.31%
improvement when pre-trained on out-of-domain signals (i.e., communication signals),
compared to its baseline without SSL. We also provide reference results for
several MAE designs and pre-training strategies, establishing a new benchmark
for few-shot radar signal classification.
|
2501.03462 | ISSR: Iterative Selection with Self-Review for Vocabulary Test
Distractor Generation | cs.CL | Vocabulary acquisition is essential to second language learning, as it
underpins all core language skills. Accurate vocabulary assessment is
particularly important in standardized exams, where test items evaluate
learners' comprehension and contextual use of words. Previous research has
explored methods for generating distractors to aid in the design of English
vocabulary tests. However, current approaches often rely on lexical databases
or predefined rules, and frequently produce distractors that risk invalidating
the question by introducing multiple correct options. In this study, we focus
on English vocabulary questions from Taiwan's university entrance exams. We
analyze student response distributions to gain insights into the
characteristics of these test items and provide a reference for future
research. Additionally, we identify key limitations in how large language
models (LLMs) support teachers in generating distractors for vocabulary test
design. To address these challenges, we propose the iterative selection with
self-review (ISSR) framework, which makes use of a novel LLM-based self-review
mechanism to ensure that the distractors remain valid while offering diverse
options. Experimental results show that ISSR achieves promising performance in
generating plausible distractors, and the self-review mechanism effectively
filters out distractors that could invalidate the question.
|
2501.03464 | LHGNN: Local-Higher Order Graph Neural Networks For Audio Classification
and Tagging | cs.SD cs.AI eess.AS | Transformers have set new benchmarks in audio processing tasks, leveraging
self-attention mechanisms to capture complex patterns and dependencies within
audio data. However, their focus on pairwise interactions limits their ability
to process the higher-order relations essential for identifying distinct audio
objects. To address this limitation, this work introduces the Local-Higher
Order Graph Neural Network (LHGNN), a graph-based model that enhances feature
understanding by integrating local neighbourhood information with higher-order
data from Fuzzy C-Means clusters, thereby capturing a broader spectrum of audio
relationships. Evaluation of the model on three publicly available audio
datasets shows that it outperforms Transformer-based models across all
benchmarks while operating with substantially fewer parameters. Moreover, LHGNN
demonstrates a distinct advantage in scenarios lacking ImageNet pretraining,
establishing its effectiveness and efficiency in environments where extensive
pretraining data is unavailable.
|
2501.03465 | Extending Internet Access Over LoRa for Internet of Things and Critical
Applications | cs.NI cs.CY cs.SY eess.SY | LoRa bridges the gap between remote locations and mainstream networks,
enabling large-scale Internet of Things (IoT) deployments. Despite the recent
advancements around LoRa, Internet access over this technology is still largely
unexplored. Most existing solutions only handle packets within the local LoRa
network and do not interact with web applications. This limits the scalability
and the ability to deliver essential web services in disconnected regions. This
work proposes and implements ILoRa to extend the public Internet to
disconnected areas for essential service delivery. ILoRa enables accessing
Application Programming Interfaces (APIs) and web pages on the Internet over a
LoRa backbone network. It comprises an ILoRa coordinator node (ICN) and access
point nodes (APNs). The ICN interfaces the LoRa network with the public
Internet and interprets content. The APN tethers a WiFi hotspot to which
devices connect and access the web content. This work further proposes data
handling methods for ICNs and APNs. An actual hardware-based implementation
validates the proposed system. The implementation achieves a throughput of 1.06
kbps tested for an Internet-based API returning JSON data of 930 B.
Furthermore, the APN consumed approximately $0.162$ A of current, and the resource
utilization on the ICN was minimal.
|
2501.03466 | DGSSA: Domain generalization with structural and stylistic augmentation
for retinal vessel segmentation | eess.IV cs.CV | Retinal vascular morphology is crucial for diagnosing diseases such as
diabetes, glaucoma, and hypertension, making accurate segmentation of retinal
vessels essential for early intervention. Traditional segmentation methods
assume that training and testing data share similar distributions, which can
lead to poor performance on unseen domains due to domain shifts caused by
variations in imaging devices and patient demographics. This paper presents a
novel approach, DGSSA, for retinal vessel image segmentation that enhances
model generalization by combining structural and style augmentation strategies.
We utilize a space colonization algorithm to generate diverse vascular-like
structures that closely mimic actual retinal vessels, which are then used to
generate pseudo-retinal images with an improved Pix2Pix model, allowing the
segmentation model to learn a broader range of structure distributions.
Additionally, we utilize PixMix to implement random photometric augmentations
and introduce uncertainty perturbations, thereby enriching stylistic diversity
and significantly enhancing the model's adaptability to varying imaging
conditions. Our framework has been rigorously evaluated on four challenging
datasets (DRIVE, CHASEDB, HRF, and STARE), demonstrating state-of-the-art
performance that surpasses existing methods. This validates the effectiveness
of our proposed approach, highlighting its potential for clinical application
in automated retinal vessel analysis.
|
2501.03467 | FRESHR-GSI: A Generalized Safety Model and Evaluation Framework for
Mobile Robots in Multi-Human Environments | cs.RO cs.HC | Human safety is critical in applications involving close human-robot
interactions (HRI) and is a key aspect of physical compatibility between humans
and robots. While measures of human safety in HRI exist, these mainly target
industrial settings involving robotic manipulators. Less attention has been
paid to settings where mobile robots and humans share the space. This paper
introduces a new robot-centered directional framework of human safety. It is
particularly useful for evaluating mobile robots as they operate in
environments populated by multiple humans. The framework integrates several key
metrics, such as each human's relative distance, speed, and orientation. The
core novelty lies in the framework's flexibility to accommodate different
application requirements while allowing for both the robot-centered and
external observer points of view. We instantiate the framework by using RGB-D
based vision integrated with a deep learning-based human detection pipeline to
yield a generalized safety index (GSI) that instantaneously assesses human
safety. We evaluate GSI's capability of producing appropriate, robust, and
fine-grained safety measures in real-world experimental scenarios and compare
its performance with extant safety models.
|
2501.03468 | MTRAG: A Multi-Turn Conversational Benchmark for Evaluating
Retrieval-Augmented Generation Systems | cs.CL cs.AI | Retrieval-augmented generation (RAG) has recently become a very popular task
for Large Language Models (LLMs). Evaluating them on multi-turn RAG
conversations, where the system is asked to generate a response to a question
in the context of a preceding conversation, is an important and often overlooked
task with several additional challenges. We present MTRAG: an end-to-end
human-generated multi-turn RAG benchmark that reflects several real-world
properties across diverse dimensions for evaluating the full RAG pipeline.
MTRAG contains 110 conversations averaging 7.7 turns each across four domains
for a total of 842 tasks. We also explore automation paths via synthetic data
and LLM-as-a-Judge evaluation. Our human and automatic evaluations show that
even state-of-the-art LLM RAG systems struggle on MTRAG. We demonstrate the
need for strong retrieval and generation systems that can handle later turns,
unanswerable questions, non-standalone questions, and multiple domains. MTRAG
is available at https://github.com/ibm/mt-rag-benchmark.
|
2501.03469 | Information-Maximized Soft Variable Discretization for Self-Supervised
Image Representation Learning | cs.CV | Self-supervised learning (SSL) has emerged as a crucial technique in image
processing, encoding, and understanding, especially for developing today's
vision foundation models that utilize large-scale datasets without annotations
to enhance various downstream tasks. This study introduces a novel SSL
approach, Information-Maximized Soft Variable Discretization (IMSVD), for image
representation learning. Specifically, IMSVD softly discretizes each variable
in the latent space, enabling the estimation of their probability distributions
over training batches and allowing the learning process to be directly guided
by information measures. Motivated by the MultiView assumption, we propose an
information-theoretic objective function to learn transform-invariant,
non-trivial, and redundancy-minimized representation features. We then derive a
joint-cross entropy loss function for self-supervised image representation
learning, which theoretically enjoys superiority over the existing methods in
reducing feature redundancy. Notably, our non-contrastive IMSVD method
statistically performs contrastive learning. Extensive experimental results
demonstrate the effectiveness of IMSVD on various downstream tasks in terms of
both accuracy and efficiency. Thanks to our variable discretization, the
embedding features optimized by IMSVD offer unique explainability at the
variable level. IMSVD has the potential to be adapted to other learning
paradigms. Our code is publicly available at
https://github.com/niuchuangnn/IMSVD.
|
2501.03471 | Hyperbolic Binary Neural Network | cs.LG cs.CV | Binary Neural Network (BNN) converts full-precision weights and activations
into their extreme 1-bit counterparts, making it particularly suitable for
deployment on lightweight mobile devices. While binary neural networks are
typically formulated as a constrained optimization problem and optimized in the
binarized space, general neural networks are formulated as an unconstrained
optimization problem and optimized in the continuous space. This paper
introduces the Hyperbolic Binary Neural Network (HBNN) by leveraging the
framework of hyperbolic geometry to optimize the constrained problem.
Specifically, we transform the constrained problem in hyperbolic space into an
unconstrained one in Euclidean space using the Riemannian exponential map. On
the other hand, we also propose the Exponential Parametrization Cluster (EPC)
method, which, compared to the Riemannian exponential map, shrinks the segment
domain based on a diffeomorphism. This approach increases the probability of
weight flips, thereby maximizing the information gain in BNNs. Experimental
results on CIFAR10, CIFAR100, and ImageNet classification datasets with
VGGsmall, ResNet18, and ResNet34 models illustrate the superior performance of
our HBNN over state-of-the-art methods.
|
2501.03475 | Reading with Intent -- Neutralizing Intent | cs.CL cs.AI cs.LG | Queries to large language models (LLMs) can be divided into two parts: the
instruction/question and the accompanying context. The context for
retrieval-augmented generation (RAG) systems in most benchmarks comes from
Wikipedia or Wikipedia-like texts which are written in a neutral and factual
tone. However, when RAG systems retrieve internet-based content, they encounter
text with diverse tones and linguistic styles, introducing challenges for
downstream tasks. The Reading with Intent task addresses this issue by
evaluating how varying tones in context passages affect model performance.
Building on prior work that focused on sarcasm, we extend this paradigm by
constructing a dataset where context passages are transformed to $11$ distinct
emotions using a better synthetic data generation approach. Using this dataset,
we train an emotion translation model to systematically adapt passages to
specified emotional tones. The human evaluation shows that the LLM fine-tuned
to become the emotion-translator benefited from the synthetically generated
data. Finally, the emotion-translator is used in the Reading with Intent task
to transform the passages to a neutral tone. By neutralizing the passages, it
mitigates the challenges posed by sarcastic passages and improves overall
results on this task by about $3\%$.
|
2501.03477 | A study on performance limitations in Federated Learning | cs.LG | Increasing privacy concerns and unrestricted access to data lead to the
development of a novel machine learning paradigm called Federated Learning
(FL). FL borrows many of the ideas from distributed machine learning; however,
the challenges associated with federated learning make it an interesting
engineering problem, since the models are trained on edge devices. It was
introduced in 2016 by Google, and since then active research is being carried
out in different areas within FL such as federated optimization algorithms,
model and update compression, differential privacy, robustness, and attacks,
federated GANs and privacy preserved personalization. There are many open
challenges in the development of such federated machine learning systems and
this project will focus on the communication bottleneck and data
non-IIDness, and their effect on the performance of the models. These issues are
characterized on a baseline model, model performance is evaluated, and
discussions are made to overcome these issues.
|
2501.03479 | Women, Infamous, and Exotic Beings: What Honorific Usages in Wikipedia
Reveal about the Socio-Cultural Norms | cs.CL | Honorifics serve as powerful linguistic markers that reflect social
hierarchies and cultural values. This paper presents a large-scale,
cross-linguistic exploration of the usage of honorific pronouns in Bengali and
Hindi Wikipedia articles, shedding light on how socio-cultural factors shape
language. Using an LLM (GPT-4o), we annotated 10,000 articles of real and
fictional beings in each language for several sociodemographic features such as
gender, age, fame, and exoticness, and the use of honorifics. We find that
across all feature combinations, use of honorifics is consistently more common
in Bengali than in Hindi. For both languages, the use of non-honorific pronouns is
more commonly observed for infamous, juvenile, and exotic beings. Notably, we
observe a gender bias in use of honorifics in Hindi, with men being more
commonly referred to with honorifics than women.
|
2501.03482 | VOILA: Complexity-Aware Universal Segmentation of CT images by Voxel
Interacting with Language | cs.CV | Satisfactory progress has been achieved recently in universal segmentation of
CT images. Following the success of vision-language methods, there is a growing
trend towards utilizing text prompts and contrastive learning to develop
universal segmentation models. However, there exists a significant imbalance in
information density between 3D images and text prompts. Moreover, the standard
fully connected layer segmentation approach faces significant challenges in
handling multiple classes and exhibits poor generalizability. To address these
challenges, we propose the VOxel Interacting with LAnguage method (VOILA) for
universal CT image segmentation. Initially, we align voxels and language into a
shared representation space and classify voxels on the basis of cosine
similarity. Subsequently, we develop the Voxel-Language Interaction framework
to mitigate the impact of class imbalance caused by foreground-background
discrepancies and variations in target volumes. Furthermore, a Complexity-Aware
Sampling method is proposed to focus on regions that are hard to segment, achieved by
generating pseudo-heatmaps from a trainable Gaussian mixture distribution. Our
results indicate that the proposed VOILA is capable of achieving improved performance
with reduced parameters and computational cost during training. Furthermore, it
demonstrates significant generalizability across diverse datasets without
additional fine-tuning.
|
2501.03486 | Align-Pro: A Principled Approach to Prompt Optimization for LLM
Alignment | cs.LG cs.AI | The alignment of large language models (LLMs) with human values is critical
as these models become increasingly integrated into various societal and
decision-making processes. Traditional methods, such as reinforcement learning
from human feedback (RLHF), achieve alignment by fine-tuning model parameters,
but these approaches are often computationally expensive and impractical when
models are frozen or inaccessible for parameter modification. In contrast,
prompt optimization is a viable alternative to RLHF for LLM alignment. While
the existing literature has shown empirical promise of prompt optimization, its
theoretical underpinning remains under-explored. We address this gap by
formulating prompt optimization as an optimization problem and trying to provide
theoretical insights into the optimality of such a framework. To analyze the
performance of the prompt optimization, we study theoretical suboptimality
bounds and provide insights into how prompt optimization depends on
the given prompter and target model. We also provide empirical validation
through experiments on various datasets, demonstrating that prompt optimization
can effectively align LLMs, even when parameter fine-tuning is not feasible.
|
2501.03489 | Entropy-Guided Attention for Private LLMs | cs.LG cs.CR | The pervasiveness of proprietary language models has raised critical privacy
concerns, necessitating advancements in private inference (PI), where
computations are performed directly on encrypted data without revealing users'
sensitive information. While PI offers a promising solution, its practical
deployment is hindered by substantial communication and latency overheads,
primarily stemming from nonlinear operations. To address this, we introduce an
information-theoretic framework to characterize the role of nonlinearities in
decoder-only language models, laying a principled foundation for optimizing
transformer architectures tailored to the demands of PI.
By leveraging Shannon's entropy as a quantitative measure, we uncover the
previously unexplored dual significance of nonlinearities: beyond ensuring
training stability, they are crucial for maintaining attention head diversity.
Specifically, we find that their removal triggers two critical failure modes:
{\em entropy collapse} in deeper layers that destabilizes training, and {\em
entropic overload} in earlier layers that leads to under-utilization of
Multi-Head Attention's (MHA) representational capacity.
We propose an entropy-guided attention mechanism paired with a novel entropy
regularization technique to mitigate entropic overload. Additionally, we
explore PI-friendly alternatives to layer normalization for preventing entropy
collapse and stabilizing the training of LLMs with reduced nonlinearities. Our
study bridges the gap between information theory and architectural design,
establishing entropy dynamics as a principled guide for developing efficient PI
architectures. The code and implementation are available at
https://github.com/Nandan91/entropy-guided-attention-llm
|
2501.03490 | SceneBooth: Diffusion-based Framework for Subject-preserved
Text-to-Image Generation | cs.CV | Due to the demand for personalizing image generation, subject-driven
text-to-image generation method, which creates novel renditions of an input
subject based on text prompts, has received growing research interest. Existing
methods often learn subject representation and incorporate it into the prompt
embedding to guide image generation, but they struggle with preserving subject
fidelity. To solve this issue, this paper proposes a novel framework named
SceneBooth for subject-preserved text-to-image generation, which takes a
subject image, object phrases, and text prompts as inputs. Instead of learning
the subject representation and generating a subject, our SceneBooth fixes the
given subject image and generates its background image guided by the text
prompts. To this end, our SceneBooth introduces two key components, i.e., a
multimodal layout generation module and a background painting module. The
former determines the position and scale of the subject by generating
appropriate scene layouts that align with text captions, object phrases, and
subject visual information. The latter integrates two adapters (ControlNet and
Gated Self-Attention) into the latent diffusion model to generate a background
that harmonizes with the subject guided by scene layouts and text descriptions.
In this manner, our SceneBooth ensures accurate preservation of the subject's
appearance in the output. Quantitative and qualitative experimental results
demonstrate that SceneBooth significantly outperforms baseline methods in terms
of subject preservation, image harmonization and overall quality.
|
2501.03491 | Can LLMs Design Good Questions Based on Context? | cs.CL cs.AI | This paper evaluates questions generated by LLMs from context, comparing them
to human-generated questions across six dimensions. We introduce an automated
LLM-based evaluation method, focusing on aspects like question length, type,
context coverage, and answerability. Our findings highlight unique
characteristics of LLM-generated questions, contributing insights that can
support further research in question quality and downstream applications.
|
2501.03492 | Multi-Source Urban Traffic Flow Forecasting with Drone and Loop Detector
Data | cs.LG | Traffic forecasting is a fundamental task in transportation
research; however, current research has mainly focused on a single data
modality of loop detectors. Recently, the advances in Artificial Intelligence and drone
technologies have made possible novel solutions for efficient, accurate and
flexible aerial observations of urban traffic. As a promising traffic
monitoring approach, drone-captured data can create an accurate multi-sensor
mobility observatory for large-scale urban networks, when combined with
existing infrastructure. Therefore, this paper investigates the problem of
multi-source traffic speed prediction, simultaneously using drone and loop
detector data. A simple yet effective graph-based model HiMSNet is proposed to
integrate multiple data modalities and learn spatio-temporal correlations.
Detailed analysis shows that predicting accurate segment-level speed is more
challenging than the regional speed, especially under high-demand scenarios
with heavier congestion and varying traffic dynamics. Utilizing both drone and
loop detector data, the prediction accuracy can be improved compared to
single-modality cases when the sensors have lower coverage and are subject to
noise. Our simulation study based on vehicle trajectories in a real urban road
network has highlighted the added value of integrating drones in traffic
forecasting and monitoring.
|
2501.03495 | Textualize Visual Prompt for Image Editing via Diffusion Bridge | cs.CV cs.LG | Visual prompt, a pair of before-and-after edited images, can convey
indescribable imagery transformations and prosper in image editing. However,
current visual prompt methods rely on a pretrained text-guided image-to-image
generative model that requires a triplet of text, before, and after images for
retraining over a text-to-image model. Such triplet crafting and retraining
processes limit the scalability and generalization of editing. In this paper,
we present a framework based on any single text-to-image model without reliance
on an explicit image-to-image model, thus enhancing generalizability and
scalability. Specifically, by leveraging the probability-flow ordinary
differential equation, we construct a diffusion bridge to transfer the distribution between
before-and-after images under the text guidance. By optimizing the text via the
bridge, the framework adaptively textualizes the editing transformation
conveyed by visual prompts into text embeddings without other models.
Meanwhile, we introduce differential attention control during text
optimization, which disentangles the text embedding from the invariance of the
before-and-after images and makes it solely capture the delicate transformation
and generalize to edit various images. Experiments on real images validate
competitive results on the generalization, contextual coherence, and high
fidelity for delicate editing with just one image pair as the visual prompt.
|
2501.03496 | A Unified Attack Detection Strategy for Multi-Agent Systems over
Transient and Steady Stages | eess.SY cs.SY | This paper proposes a unified detection strategy against three kinds of
attacks for multi-agent systems (MASs) which is applicable to both transient
and steady stages. For attacks on the communication layer, a watermarking-based
detection scheme with Kullback-Leibler (KL) divergence is designed. Different
from traditional communication schemes, each agent transmits a message set
containing two state values with different types of watermarking. It is found
that the detection performance is determined by the relevant parameters of the
watermarking signal. Unlike existing detection manoeuvres, this scheme is
applicable to both transient and steady stages. For attacks on the agent layer, a
convergence rate related detection approach is put forward. It is shown that
the resilience of the considered system is characterized by the coefficient and
offset of the envelope. For hybrid attacks, based on the above detection
mechanisms, a general framework resorting to trusted agents is presented, which
requires weaker graph conditions and less information transmission. Finally, an
example associated with the platooning of connected vehicles is given to
support the theoretical results.
|
2501.03499 | Can Deep Learning Trigger Alerts from Mobile-Captured Images? | cs.CV cs.AI | Our research presents a comprehensive approach to leveraging mobile camera
image data for real-time air quality assessment and recommendation. We develop
a regression-based Convolutional Neural Network model and tailor it explicitly
for air quality prediction by exploiting the inherent relationship between
output parameters. As a result, the Mean Squared Error of 0.0077 and 0.0112
obtained for 2 and 5 pollutants respectively outperforms existing models.
Furthermore, we examine the common practice of augmenting the original
dataset to introduce more variation during training. Notably, our
experimental results demonstrate minimal accuracy differences between the
original and augmented datasets. Finally, a real-time, user-friendly dashboard is implemented which
dynamically displays the Air Quality Index and pollutant values derived from
captured mobile camera images. Users' health conditions are considered to
recommend whether a location is suitable based on current air quality metrics.
Overall, this research contributes to verification of data augmentation
techniques, CNN-based regression modelling for air quality prediction, and
user-centric air quality monitoring through mobile technology. The proposed
system offers practical solutions for individuals to make informed
environmental health and well-being decisions.
|
2501.03503 | Resilient Distributed Control for Uncertain Nonlinear Interconnected
Systems under Network Anomaly | eess.SY cs.SY | We present a distributed adaptive control methodology for nonlinear
interconnected systems possibly affected by network anomalies. In the framework
of adaptive approximation, the distributed controller and parameter estimator
are designed by exploiting a backstepping approach. The stability of the
distributed control system under anomalies is analyzed, where both local and
neighboring anomaly effects are considered. To quantify the resilience of the
interconnected system under the action of network anomalies, we derive bounds
on the duration of each anomaly and the resting time between two consecutive
anomalies. Specifically, when each anomaly duration is smaller than our
designed upper bound, the interconnected system controlled by the distributed
approximation-based controller remains asymptotically stable. Moreover, if the
resting time between two consecutive anomalies is larger than the proposed
bound, then all signals of the control system are guaranteed to be bounded. In
the paper, we show that under the action of the proposed distributed adaptive
controller, the interconnected system remains stable in the presence of network
anomalies, under both qualitative and quantitative resilience conditions.
Extensive simulation results show the effectiveness of our theoretical results.
|
2501.03507 | An Empirical Study of Accuracy-Robustness Tradeoff and Training
Efficiency in Self-Supervised Learning | cs.CV cs.LG | Self-supervised learning (SSL) has significantly advanced image
representation learning, yet efficiency challenges persist, particularly with
adversarial training. Many SSL methods require extensive epochs to achieve
convergence, a demand further amplified in adversarial settings. To address
this inefficiency, we revisit the robust EMP-SSL framework, emphasizing the
importance of increasing the number of crops per image to accelerate learning.
Unlike traditional contrastive learning, robust EMP-SSL leverages multi-crop
sampling, integrates an invariance term and regularization, and reduces
training epochs, enhancing time efficiency. Evaluated with both standard linear
classifiers and multi-patch embedding aggregation, robust EMP-SSL provides new
insights into SSL evaluation strategies.
Our results show that robust crop-based EMP-SSL not only accelerates
convergence but also achieves a superior balance between clean accuracy and
adversarial robustness, outperforming multi-crop embedding aggregation.
Additionally, we extend this approach with free adversarial training in
Multi-Crop SSL, introducing the Cost-Free Adversarial Multi-Crop
Self-Supervised Learning (CF-AMC-SSL) method. CF-AMC-SSL demonstrates the
effectiveness of free adversarial training in reducing training time while
simultaneously improving clean accuracy and adversarial robustness. These
findings underscore the potential of CF-AMC-SSL for practical SSL applications.
Our code is publicly available at https://github.com/softsys4ai/CF-AMC-SSL.
|