| id | title | categories | abstract |
|---|---|---|---|
2502.05638
|
ELMTEX: Fine-Tuning Large Language Models for Structured Clinical
Information Extraction. A Case Study on Clinical Reports
|
cs.CL cs.AI
|
Europe's healthcare systems require enhanced interoperability and
digitalization, driving a demand for innovative solutions to process legacy
clinical data. This paper presents the results of our project, which aims to
leverage Large Language Models (LLMs) to extract structured information from
unstructured clinical reports, focusing on patient history, diagnoses,
treatments, and other predefined categories. We developed a workflow with a
user interface and evaluated LLMs of varying sizes through prompting strategies
and fine-tuning. Our results show that fine-tuned smaller models match or
surpass larger counterparts in performance, offering efficiency for
resource-limited settings. A new dataset of 60,000 annotated English clinical
summaries and 24,000 German translations was validated with automated and
manual checks. The evaluations used ROUGE, BERTScore, and entity-level metrics.
The work highlights the approach's viability and outlines future improvements.
|
2502.05640
|
ETHEREAL: Energy-efficient and High-throughput Inference using
Compressed Tsetlin Machine
|
cs.LG
|
The Tsetlin Machine (TM) is a novel alternative to deep neural networks
(DNNs). Unlike DNNs, which rely on multi-path arithmetic operations, a TM
learns propositional logic patterns from data literals using Tsetlin automata.
This fundamental shift from an arithmetic to a logic underpinning makes the TM
well suited to empowering new applications with low-cost implementations.
In TM, literals are often included by both positive and negative clauses
within the same class, canceling out their impact on individual class
definitions. This property can be exploited to develop compressed TM models,
enabling energy-efficient and high-throughput inferences for machine learning
(ML) applications.
We introduce a training approach that incorporates excluded automata states
to sparsify TM logic patterns in both positive and negative clauses. This
exclusion is iterative, ensuring that highly class-correlated (and therefore
significant) literals are retained in the compressed inference model, ETHEREAL,
to maintain strong classification accuracy. Compared to standard TMs, ETHEREAL
TM models can reduce model size by up to 87.54%, with only a minor accuracy
compromise. We validate the impact of this compression on eight real-world tiny
machine learning (TinyML) datasets against standard TM, equivalent Random
Forest (RF) and Binarized Neural Network (BNN) on the STM32F746G-DISCO
platform. Our results show that ETHEREAL TM models achieve over an order of
magnitude reduction in inference time (resulting in higher throughput) and
energy consumption compared to BNNs, while maintaining a significantly smaller
memory footprint compared to RFs.
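The literal-cancellation idea behind the compression can be illustrated with a toy sketch. This is not the ETHEREAL training procedure; the clause representation, function names, and example literals are hypothetical, intended only to show how literals included by both polarities within a class contribute nothing to that class's vote and can be pruned:

```python
# Toy sketch of the literal-cancellation idea behind compressed Tsetlin
# Machines (illustrative only; not the ETHEREAL training procedure).
# A clause is the set of literal indices it includes; a class votes with
# positive clauses (+1) and negative clauses (-1).

def shared_literals(pos_clauses, neg_clauses):
    """Literals included by both polarities within the same class."""
    pos = set().union(*pos_clauses) if pos_clauses else set()
    neg = set().union(*neg_clauses) if neg_clauses else set()
    return pos & neg

def compress(clauses, shared):
    """Drop shared literals from every clause, sparsifying the model."""
    return [c - shared for c in clauses]

pos = [{0, 2, 5}, {1, 2}]
neg = [{2, 3}, {5, 4}]
cancel = shared_literals(pos, neg)   # literals 2 and 5 appear in both
print(compress(pos, cancel))         # [{0}, {1}]
```

The sparser clauses touch fewer literals per evaluation, which is where the throughput and energy gains on microcontroller-class hardware would come from.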
|
2502.05641
|
Generating Physically Realistic and Directable Human Motions from
Multi-Modal Inputs
|
cs.RO cs.AI
|
This work focuses on generating realistic, physically-based human behaviors
from multi-modal inputs, which may only partially specify the desired motion.
For example, the input may come from a VR controller providing arm motion and
body velocity, partial key-point animation, computer vision applied to videos,
or even higher-level motion goals. This requires a versatile low-level humanoid
controller that can handle such sparse, under-specified guidance, seamlessly
switch between skills, and recover from failures. Current approaches for
learning humanoid controllers from demonstration data capture some of these
characteristics, but none achieve them all. To this end, we introduce the
Masked Humanoid Controller (MHC), a novel approach that applies multi-objective
imitation learning on augmented and selectively masked motion demonstrations.
The training methodology results in an MHC that exhibits three key
capabilities: catching up to out-of-sync input commands, combining elements
from multiple motion sequences, and completing unspecified parts of motions
from sparse multimodal input. We demonstrate these key capabilities for an MHC learned over
a dataset of 87 diverse skills and showcase different multi-modal use cases,
including integration with planning frameworks to highlight MHC's ability to
solve new user-defined tasks without any finetuning.
|
2502.05643
|
Design of optimal repetitive control based on EID estimator with
adaptive periodic event-triggered mechanism for linear systems subjected to
exogenous disturbances
|
eess.SY cs.SY
|
Tracking periodic signals and rejecting unknown disturbances under limited
communication resources are central problems in many physical systems and
practical applications. Controlling such systems is complicated by
time-varying delays, unknown external disturbances, structural uncertainty,
and the heavy communication burden placed on sensors and the controller; these
challenges degrade performance and can destabilize the system. This article
therefore proposes an improved scheme that overcomes these challenges,
achieves good control performance through an optimization technique, and
guarantees closed-loop stability. The proposed scheme is a modified repetitive
control (MRC) with an equivalent-input-disturbance (EID) estimator based on an
adaptive periodic event-triggered mechanism (APETM). It is intended for linear
systems that experience unknown external disturbances and must operate under
communication-resource constraints. The EID-based MRC is designed to track
periodic references and to effectively reject both periodic and aperiodic
unknown disturbances, while the APETM reduces data transmission and
computational burden and conserves communication resources. In addition, an
optimization method fine-tunes the controller parameters, enabling adjustment
of the control and learning actions. The overall architecture, combining the
APETM-based MRC with the EID estimator and optimal tuning, can be described as
a time-varying delay system. Simulated applications demonstrate that the
proposed scheme is effective, feasible, and robust.
|
2502.05649
|
Gender Bias in Instruction-Guided Speech Synthesis Models
|
cs.CL cs.LG eess.AS
|
Recent advancements in controllable expressive speech synthesis, especially
in text-to-speech (TTS) models, have allowed for the generation of speech with
specific styles guided by textual descriptions, known as style prompts. While
this development enhances the flexibility and naturalness of synthesized
speech, there remains a significant gap in understanding how these models
handle vague or abstract style prompts. This study investigates the potential
gender bias in how models interpret occupation-related prompts, specifically
examining their responses to instructions like "Act like a nurse". We explore
whether these models exhibit tendencies to amplify gender stereotypes when
interpreting such prompts. Our experimental results reveal the model's tendency
to exhibit gender bias for certain occupations. Moreover, models of different
sizes show varying degrees of this bias across these occupations.
|
2502.05650
|
Incongruence Identification in Eyewitness Testimony
|
cs.CL
|
Incongruence detection in eyewitness narratives is critical for assessing
the reliability of testimonies, yet traditional approaches often fail to
address the nuanced inconsistencies inherent in such accounts. In this paper,
we introduce a novel task of incongruence detection in eyewitness testimonies.
Given a pair of testimonies, each containing multiple question-answer pairs
from two subjects, we identify contextually related incongruences between the
two subjects and mark the span of each incongruence in the utterances. To
achieve this, we develop MIND (MultI-EyewitNess Deception), a comprehensive
dataset of 2,927 pairs of contextually related answers designed to capture
both explicit and implicit contradictions. We also propose INTEND
(INstruction-TunEd iNcongruity Detection), a framework based on 6W and
multi-hop reasoning. Drawing from investigative techniques, INTEND addresses
the task as a cloze-style problem, contrasting the who, what, when, where, and
why aspects of the content. Our findings show that prompt tuning, especially
when utilizing our framework, improves incongruence detection by a margin of
+5.63 percent. We compare our approach with multiple fine-tuning and prompt
tuning techniques on MLMs and LLMs. Empirical results demonstrate convincing
performance improvement in F1-score over fine-tuned and regular prompt-tuning
techniques, highlighting the effectiveness of our approach.
|
2502.05651
|
KMI: A Dataset of Korean Motivational Interviewing Dialogues for
Psychotherapy
|
cs.CL cs.AI
|
The increasing demand for mental health services has led to the rise of
AI-driven mental health chatbots, though challenges related to privacy, data
collection, and expertise persist. Motivational Interviewing (MI) is gaining
attention as a theoretical basis for boosting expertise in the development of
these chatbots. However, existing datasets exhibit limitations for training
chatbots, leading to a substantial demand for publicly available resources in
the field of MI and psychotherapy. These challenges are even more pronounced in
non-English languages, where they receive less attention. In this paper, we
propose a novel framework that simulates MI sessions enriched with the
expertise of professional therapists. We train an MI forecaster model that
mimics the behavioral choices of professional therapists and employ Large
Language Models (LLMs) to generate utterances through prompt engineering. Then,
we present KMI, the first synthetic dataset theoretically grounded in MI,
containing 1,000 high-quality Korean Motivational Interviewing dialogues.
Through an extensive expert evaluation of the generated dataset and the
dialogue model trained on it, we demonstrate the quality, expertise, and
practicality of KMI. We also introduce novel metrics derived from MI theory in
order to evaluate dialogues from the perspective of MI.
|
2502.05652
|
An inpainting approach to manipulate asymmetry in pre-operative breast
images
|
cs.CV
|
One of the most frequent modalities of breast cancer treatment is surgery.
Breast surgery can cause visual alterations to the breasts, due to scars and
asymmetries. To enable an informed choice of treatment, the patient must be
adequately informed of the aesthetic outcomes of each treatment plan. In this
work, we propose an inpainting approach to manipulate breast shape and nipple
position in breast images, for the purpose of predicting the aesthetic outcomes
of breast cancer treatment. We perform experiments with various model
architectures for the inpainting task, including invertible networks capable of
manipulating breasts in the absence of ground-truth breast contour and nipple
annotations. Experiments on two breast datasets show the proposed models'
ability to realistically alter a patient's breasts, enabling a faithful
reproduction of breast asymmetries of post-operative patients in pre-operative
images.
|
2502.05654
|
Evaluating the Techno-Economic Viability of a Solar PV-Wind Turbine
Hybrid System with Battery Storage for an Electric Vehicle Charging Station
in Khobar, Saudi Arabia
|
eess.SY cs.SY
|
The main aim of this investigation is to replicate and enhance a sustainable
hybrid energy system that combines solar photovoltaics, wind turbines, and
battery storage. The study employs the HOMER simulation model to evaluate the
sizing, cost, and control strategy of this hybrid power system. This work
primarily focuses on determining the most efficient renewable energy
generation system architecture for a large electric vehicle charging station.
The hybrid power system is designed to meet an AC base load of 2424.25 kWh/day
with a peak consumption of 444 kW. The simulation results indicate that the
optimized component sizing yields an optimal cost of energy and an optimal
design in terms of renewable energy penetration.
|
2502.05656
|
Flowing Through Layers: A Continuous Dynamical Systems Perspective on
Transformers
|
cs.LG math.DS
|
We show that the standard discrete update rule of transformer layers can be
naturally interpreted as a forward Euler discretization of a continuous
dynamical system. Our Transformer Flow Approximation Theorem demonstrates that,
under standard Lipschitz continuity assumptions, token representations converge
uniformly to the unique solution of an ODE as the number of layers grows.
Moreover, if the underlying mapping satisfies a one-sided Lipschitz condition
with a negative constant, the resulting dynamics are contractive, causing
perturbations to decay exponentially across layers. Beyond clarifying the
empirical stability and expressivity of transformer models, these insights link
transformer updates to a broader iterative reasoning framework, suggesting new
avenues for accelerated convergence and architectural innovations inspired by
dynamical systems theory.
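The layer-as-Euler-step correspondence can be written down directly: a residual update x_{l+1} = x_l + h f(x_l) is one forward Euler step of the ODE dx/dt = f(x). A minimal numerical sketch, with a stand-in vector field in place of an actual attention-plus-MLP block:

```python
import numpy as np

# A residual layer update x_{l+1} = x_l + h * f(x_l) is exactly one
# forward Euler step of the ODE dx/dt = f(x) with step size h.
# f here is a Lipschitz stand-in for a transformer block, not the real thing.

def f(x):
    return np.tanh(x) - 0.5 * x   # placeholder vector field

def run_layers(x0, n_layers, h):
    x = x0
    for _ in range(n_layers):
        x = x + h * f(x)          # one "layer" == one Euler step
    return x

x0 = np.array([1.0, -2.0])
# Doubling depth while shrinking the step keeps total "time" n*h fixed,
# so both stacks approximate the same ODE solution at t = 1.
coarse = run_layers(x0, 16, 1 / 16)
fine = run_layers(x0, 256, 1 / 256)
print(np.abs(coarse - fine).max())  # small: uniform convergence in depth
```

The shrinking gap between the coarse and fine stacks is the discrete analogue of the theorem's uniform convergence claim; the contractivity result corresponds to choosing f with a negative one-sided Lipschitz constant.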
|
2502.05660
|
Evaluating Vision-Language Models for Emotion Recognition
|
cs.CV cs.CL
|
Large Vision-Language Models (VLMs) have achieved unprecedented success in
several objective multimodal reasoning tasks. However, to further enhance their
capabilities of empathetic and effective communication with humans, improving
how VLMs process and understand emotions is crucial. Despite significant
research attention on improving affective understanding, there is a lack of
detailed evaluations of VLMs for emotion-related tasks, which can potentially
help inform downstream fine-tuning efforts. In this work, we present the first
comprehensive evaluation of VLMs for recognizing evoked emotions from images.
We create a benchmark for the task of evoked emotion recognition and study the
performance of VLMs for this task, from perspectives of correctness and
robustness. Through several experiments, we demonstrate important factors that
emotion recognition performance depends on, and also characterize the various
errors made by VLMs in the process. Finally, we pinpoint potential causes for
errors through a human evaluation study. We use our experimental results to
inform recommendations for the future of emotion research in the context of
VLMs.
|
2502.05664
|
CODESIM: Multi-Agent Code Generation and Problem Solving through
Simulation-Driven Planning and Debugging
|
cs.CL cs.AI
|
Large Language Models (LLMs) have made significant strides in code generation
and problem solving. Current approaches employ external tool-based iterative
debuggers that use compiler or other tool-based runtime feedback to refine
coarse programs generated by various methods. However, the effectiveness of
these approaches heavily relies on the quality of the initial code generation,
which remains an open challenge. In this paper, we introduce CodeSim, a novel
multi-agent code generation framework that comprehensively addresses the stages
of program synthesis (planning, coding, and debugging) through a human-like
perception approach. Just as humans verify their understanding of an algorithm
through visual simulation, CodeSim uniquely features a method of plan
verification and internal debugging through the step-by-step simulation of
input/output. Extensive experiments across seven challenging competitive
problem-solving and program synthesis benchmarks demonstrate CodeSim's
remarkable code generation capabilities. Our framework achieves new
state-of-the-art pass@1 results (HumanEval 95.1%, MBPP 90.7%, APPS 22%, and
CodeContests 29.1%). Furthermore, our method shows potential for even greater
enhancement when cascaded with external debuggers. To facilitate further
research and development in this area, we have open-sourced our framework in
this link (https://kagnlp.github.io/codesim.github.io/).
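The core check, stepping candidate code through sample input/output and flagging the first divergence, can be sketched minimally. This is a hypothetical harness for illustration, not the CodeSim agents themselves (which perform this simulation via LLM reasoning rather than direct execution):

```python
# Minimal sketch of verifying a candidate program against sample I/O,
# the kind of internal check a simulation-driven debugger performs.
# (Hypothetical harness; the actual framework uses LLM agents.)

def simulate(candidate, examples):
    """Run candidate on each sample input; report the first mismatch."""
    for inp, expected in examples:
        got = candidate(inp)
        if got != expected:
            return False, f"input={inp!r}: expected {expected!r}, got {got!r}"
    return True, "all samples pass"

# A buggy draft: should sum the squares of a list.
def draft(xs):
    return sum(xs) ** 2            # bug: squares the sum instead

samples = [([1, 2], 5), ([3], 9)]
ok, report = simulate(draft, samples)
print(ok, report)   # False, flags input [1, 2]
```

The mismatch report is the signal a debugging agent would use to localize and repair the fault before resubmitting the program.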
|
2502.05667
|
Online Controller Synthesis for Robot Collision Avoidance: A Case Study
|
cs.RO cs.SY eess.SY
|
The inherent uncertainty of dynamic environments poses significant challenges
for modeling robot behavior, particularly in tasks such as collision avoidance.
This paper presents an online controller synthesis framework tailored for
robots equipped with deep learning-based perception components, with a focus on
addressing distribution shifts. Our approach integrates periodic monitoring and
repair mechanisms for the deep neural network perception component, followed by
uncertainty reassessment. These uncertainty evaluations are injected into a
parametric discrete-time Markov chain, enabling the synthesis of robust
controllers via probabilistic model checking. To ensure high system
availability during the repair process, we propose a dual-component
configuration that seamlessly transitions between operational states. Through a
case study on robot collision avoidance, we demonstrate the efficacy of our
method, showcasing substantial performance improvements over baseline
approaches. This work provides a comprehensive and scalable solution for
enhancing the safety and reliability of autonomous systems operating in
uncertain environments.
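The model-checking step rests on standard reachability analysis over a parametric discrete-time Markov chain. A toy instance (the states, parameters, and numbers are illustrative, not the case study's model): if the robot collides with probability p per step and reaches a safe stop with probability q, the probability of eventually colliding satisfies x = p + (1 - p - q) x:

```python
# Toy parametric DTMC for a collision-avoidance setting (illustrative
# numbers, not the paper's model). State 0: driving; state 1: collision
# (absorbing); state 2: safe stop (absorbing). Per step, collide with
# probability p and stop safely with probability q.

def collision_probability(p, q):
    """P(reach collision from state 0): x = p + (1-p-q)*x  =>  x = p/(p+q)."""
    return p / (p + q)

# Monitoring and repair revise the perception uncertainty, which
# re-parameterizes the chain; a controller is acceptable only while the
# reachability probability stays below a safety threshold.
before_repair = collision_probability(0.02, 0.05)
after_repair = collision_probability(0.005, 0.05)
print(before_repair, after_repair)  # ~0.2857 vs ~0.0909
```

In the full framework a probabilistic model checker (e.g. over a PRISM-style model) solves this linear system automatically for each uncertainty reassessment, rather than relying on a closed form.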
|
2502.05668
|
The late-stage training dynamics of (stochastic) subgradient descent on
homogeneous neural networks
|
cs.LG cs.NE math.OC stat.ML
|
We analyze the implicit bias of constant step stochastic subgradient descent
(SGD). We consider the setting of binary classification with homogeneous neural
networks - a large class of deep neural networks with ReLU-type activation
functions such as MLPs and CNNs without biases. We interpret the dynamics of
normalized SGD iterates as an Euler-like discretization of a conservative field
flow that is naturally associated to the normalized classification margin.
Owing to this interpretation, we show that normalized SGD iterates converge to
the set of critical points of the normalized margin at late-stage training
(i.e., assuming that the data is correctly classified with positive normalized
margin). To our knowledge, this is the first extension of the analysis of
Lyu and Li (2020) on the discrete dynamics of gradient descent to the nonsmooth
and stochastic setting. Our main result applies to binary classification with
exponential or logistic losses. We additionally discuss extensions to more
general settings.
|
2502.05669
|
Rigid Body Adversarial Attacks
|
cs.CV cs.GR
|
Due to their performance and simplicity, rigid body simulators are often used
in applications where the objects of interest can be considered very stiff.
However, no material has infinite stiffness, which means there are potentially
cases where the non-zero compliance of the seemingly rigid object can cause a
significant difference between its trajectories when simulated in a rigid body
or deformable simulator.
Similarly to how adversarial attacks are developed against image classifiers,
we propose an adversarial attack against rigid body simulators. In this
adversarial attack, we solve an optimization problem to construct perceptually
rigid adversarial objects that have the same collision geometry and moments of
mass as a reference object, so that they behave identically in rigid body
simulations but maximally differently in more accurate deformable simulations. We
demonstrate the validity of our method by comparing simulations of several
examples in commercially available simulators.
|
2502.05670
|
Language Models Largely Exhibit Human-like Constituent Ordering
Preferences
|
cs.CL cs.AI
|
Though English sentences are typically inflexible vis-à-vis word order,
constituents often show far more variability in ordering. One prominent theory
presents the notion that constituent ordering is directly correlated with
constituent weight: a measure of the constituent's length or complexity. Such
theories are interesting in the context of natural language processing (NLP),
because while recent advances in NLP have led to significant gains in the
performance of large language models (LLMs), much remains unclear about how
these models process language, and how this compares to human language
processing. In particular, the question remains whether LLMs display the same
patterns with constituent movement, and may provide insights into existing
theories on when and how the shift occurs in human language. We compare a
variety of LLMs with diverse properties to evaluate broad LLM performance on
four types of constituent movement: heavy NP shift, particle movement, dative
alternation, and multiple PPs. Despite performing unexpectedly around particle
movement, LLMs generally align with human preferences around constituent
ordering.
|
2502.05672
|
On the Convergence and Stability of Upside-Down Reinforcement Learning,
Goal-Conditioned Supervised Learning, and Online Decision Transformers
|
stat.ML cs.AI cs.LG cs.NE cs.SY eess.SY
|
This article provides a rigorous analysis of convergence and stability of
Episodic Upside-Down Reinforcement Learning, Goal-Conditioned Supervised
Learning and Online Decision Transformers. These algorithms have performed
competitively across various benchmarks, from games to robotic tasks, but their
theoretical understanding is limited to specific environmental conditions. This
work initiates a theoretical foundation for algorithms that build on the broad
paradigm of approaching reinforcement learning through supervised learning or
sequence modeling. At the core of this investigation lies the analysis of
conditions on the underlying environment, under which the algorithms can
identify optimal solutions. We also assess whether emerging solutions remain
stable in situations where the environment is subject to tiny levels of noise.
Specifically, we study the continuity and asymptotic convergence of
command-conditioned policies, values and the goal-reaching objective depending
on the transition kernel of the underlying Markov Decision Process. We
demonstrate that near-optimal behavior is achieved if the transition kernel is
located in a sufficiently small neighborhood of a deterministic kernel. The
mentioned quantities are continuous (with respect to a specific topology) at
deterministic kernels, both asymptotically and after a finite number of
learning cycles. The developed methods allow us to present the first explicit
estimates on the convergence and stability of policies and values in terms of
the underlying transition kernels. On the theoretical side we introduce a
number of new concepts to reinforcement learning, like working in segment
spaces, studying continuity in quotient topologies and the application of the
fixed-point theory of dynamical systems. The theoretical study is accompanied
by a detailed investigation of example environments and numerical experiments.
|
2502.05673
|
The Evolution of Dataset Distillation: Toward Scalable and Generalizable
Solutions
|
cs.CV
|
Dataset distillation, which condenses large-scale datasets into compact
synthetic representations, has emerged as a critical solution for training
modern deep learning models efficiently. While prior surveys focus on
developments before 2023, this work comprehensively reviews recent advances,
emphasizing scalability to large-scale datasets such as ImageNet-1K and
ImageNet-21K. We categorize progress into five key methodologies: trajectory
matching, gradient matching, distribution matching, scalable generative
approaches, and decoupling optimization mechanisms. As a comprehensive
examination of recent dataset distillation advances, this survey highlights
breakthrough innovations: the SRe2L framework for efficient and effective
condensation, soft label strategies that significantly enhance model accuracy,
and lossless distillation techniques that maximize compression while
maintaining performance. Beyond these methodological advancements, we address
critical challenges, including robustness against adversarial and backdoor
attacks and the effective handling of non-IID data distributions. Additionally, we
explore emerging applications in video and audio processing, multi-modal
learning, medical imaging, and scientific computing, highlighting its domain
versatility. By offering extensive performance comparisons and actionable
research directions, this survey equips researchers and practitioners with
practical insights to advance efficient and generalizable dataset distillation,
paving the way for future innovations.
|
2502.05675
|
Investigating the Shortcomings of LLMs in Step-by-Step Legal Reasoning
|
cs.CL
|
Reasoning abilities of LLMs have been a key focus in recent years. One
challenging reasoning domain with interesting nuances is legal reasoning, which
requires careful application of rules, and precedents while balancing deductive
and analogical reasoning, and conflicts between rules. Although there have been
a few works on using LLMs for legal reasoning, their focus has been on overall
accuracy. In this paper, we dig deeper to do a step-by-step analysis and figure
out where they commit errors. We use the college-level Multiple Choice
Question-Answering (MCQA) task from the Civil Procedure dataset and
propose a new error taxonomy derived from initial manual analysis of reasoning
chains with respect to several LLMs, including two objective measures:
soundness and correctness scores. We then develop an LLM-based automated
evaluation framework to identify reasoning errors and evaluate the performance
of LLMs. The computation of soundness and correctness on the dataset using the
auto-evaluator framework reveals several interesting insights. Furthermore, we
show that incorporating the error taxonomy as feedback in popular prompting
techniques marginally increases LLM performance. Our work will also serve as an
evaluation framework that can be used in detailed error analysis of reasoning
chains for logic-intensive complex tasks.
|
2502.05676
|
Generalized Venn and Venn-Abers Calibration with Applications in
Conformal Prediction
|
stat.ML cs.LG stat.ME
|
Ensuring model calibration is critical for reliable predictions, yet popular
distribution-free methods, such as histogram binning and isotonic regression,
provide only asymptotic guarantees. We introduce a unified framework for Venn
and Venn-Abers calibration, generalizing Vovk's binary classification approach
to arbitrary prediction tasks and loss functions. Venn calibration leverages
binning calibrators to construct prediction sets that contain at least one
marginally perfectly calibrated point prediction in finite samples, capturing
epistemic uncertainty in the calibration process. The width of these sets
shrinks asymptotically to zero, converging to a conditionally calibrated point
prediction. Furthermore, we propose Venn multicalibration, a novel methodology
for finite-sample calibration across subpopulations. For quantile loss,
group-conditional and multicalibrated conformal prediction arise as special
cases of Venn multicalibration, and Venn calibration produces novel conformal
prediction intervals that achieve quantile-conditional coverage. As a separate
contribution, we extend distribution-free conditional calibration guarantees of
histogram binning and isotonic calibration to general losses.
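The binary construction the paper generalizes (Vovk's Venn-Abers predictor) is short enough to sketch: append the test point to the calibration set once with label 0 and once with label 1, run isotonic regression each time, and the two fitted values at the test point bound the calibrated probability. The pool-adjacent-violators implementation and the example scores below are illustrative:

```python
# Sketch of binary Venn-Abers calibration (the base case the paper
# generalizes). Isotonic regression via pool-adjacent-violators (PAVA);
# scores and labels below are made up for illustration.

def isotonic_fit(ys):
    """PAVA: nondecreasing least-squares fit to ys, one value per point."""
    blocks = []  # (mean, count)
    for y in ys:
        blocks.append((float(y), 1))
        while len(blocks) > 1 and blocks[-2][0] > blocks[-1][0]:
            (m1, n1), (m2, n2) = blocks[-2], blocks[-1]
            blocks[-2:] = [((m1 * n1 + m2 * n2) / (n1 + n2), n1 + n2)]
    out = []
    for m, n in blocks:
        out.extend([m] * n)
    return out

def venn_abers(cal_scores, cal_labels, test_score):
    """(p0, p1): calibrate with the test point labeled 0, then labeled 1."""
    ps = []
    for label in (0, 1):
        pts = sorted(zip(list(cal_scores) + [test_score],
                         list(cal_labels) + [label]))
        fit = isotonic_fit([y for _, y in pts])
        idx = [s for s, _ in pts].index(test_score)
        ps.append(fit[idx])
    return tuple(ps)

p0, p1 = venn_abers([0.1, 0.3, 0.4, 0.8, 0.9], [0, 0, 1, 1, 1], 0.5)
print(p0, p1)  # 0.5 1.0
```

The resulting interval [p0, p1] is the finite-sample prediction set described in the abstract: it is guaranteed to contain at least one perfectly calibrated point prediction, and its width reflects epistemic uncertainty in the calibration.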
|
2502.05677
|
Surprise Potential as a Measure of Interactivity in Driving Scenarios
|
cs.RO cs.LG
|
Validating the safety and performance of an autonomous vehicle (AV) requires
benchmarking on real-world driving logs. However, typical driving logs contain
mostly uneventful scenarios with minimal interactions between road users.
Identifying interactive scenarios in real-world driving logs enables the
curation of datasets that amplify critical signals and provide a more accurate
assessment of an AV's performance. In this paper, we present a novel metric
that identifies interactive scenarios by measuring an AV's surprise potential
on others. First, we identify three dimensions of the design space to describe
a family of surprise potential measures. Second, we exhaustively evaluate and
compare different instantiations of the surprise potential measure within this
design space on the nuScenes dataset. To determine how well a surprise
potential measure correctly identifies an interactive scenario, we use a reward
model learned from human preferences to assess alignment with human intuition.
Our proposed surprise potential, arising from this exhaustive comparative
study, achieves a correlation of more than 0.82 with the human-aligned reward
function, outperforming existing approaches. Lastly, we validate motion
planners on curated interactive scenarios to demonstrate downstream
applications.
|
2502.05679
|
Federated Learning with Reservoir State Analysis for Time Series Anomaly
Detection
|
cs.LG
|
With a growing data privacy concern, federated learning has emerged as a
promising framework to train machine learning models without sharing locally
distributed data. In federated learning, local model training by multiple
clients and model integration by a server are repeated only through model
parameter sharing. Most existing federated learning methods assume training
deep learning models, which are often computationally demanding. To deal with
this issue, we propose federated learning methods with reservoir state analysis
to seek computational efficiency and data privacy protection simultaneously.
Specifically, our method relies on Mahalanobis Distance of Reservoir States
(MD-RS) method targeting time series anomaly detection, which learns a
distribution of reservoir states for normal inputs and detects anomalies based
on a deviation from the learned distribution. Iterative updating of statistical
parameters in the MD-RS enables incremental federated learning (IncFed MD-RS).
We evaluate the performance of IncFed MD-RS using benchmark datasets for time
series anomaly detection. The results show that IncFed MD-RS outperforms other
federated learning methods with deep learning and reservoir computing models
particularly when clients' data are relatively short and heterogeneous. We
demonstrate that IncFed MD-RS is robust against reduced sample data compared to
other methods. We also show that the computational cost of IncFed MD-RS can be
reduced by subsampling from the reservoir states without performance
degradation. The proposed method is beneficial especially in anomaly detection
applications where computational efficiency, algorithm simplicity, and low
communication cost are required.
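The incremental aggregation underpinning IncFed MD-RS can be sketched via sufficient statistics: each client shares only the count, sum, and sum of outer products of its reservoir states, the server pools these into a global mean and covariance, and anomalies are scored by Mahalanobis distance. Shapes and data below are hypothetical (random vectors stand in for actual echo-state-network reservoir states):

```python
import numpy as np

# Sketch of federated Mahalanobis-distance anomaly scoring over
# "reservoir states" (random vectors stand in for echo-state-network
# states; shapes and data are illustrative, not the MD-RS implementation).

rng = np.random.default_rng(0)
d = 4

def client_stats(states):
    """What a client shares: count, sum, and sum of outer products."""
    return len(states), states.sum(axis=0), states.T @ states

def aggregate(stats):
    """Server pools client statistics into a global mean and precision."""
    n = sum(s[0] for s in stats)
    mean = sum(s[1] for s in stats) / n
    second_moment = sum(s[2] for s in stats) / n
    cov = second_moment - np.outer(mean, mean)
    return mean, np.linalg.inv(cov + 1e-6 * np.eye(d))  # regularized inverse

def mahalanobis(x, mean, prec):
    diff = x - mean
    return float(diff @ prec @ diff)

clients = [rng.normal(size=(200, d)) for _ in range(3)]
mean, prec = aggregate([client_stats(s) for s in clients])

normal_score = mahalanobis(rng.normal(size=d), mean, prec)
anomaly_score = mahalanobis(rng.normal(size=d) + 5.0, mean, prec)
print(normal_score < anomaly_score)  # shifted input scores far higher
```

Because the pooled quantities are plain sums, new batches or new clients can be folded in incrementally without revisiting raw data, which is what keeps the scheme's communication cost low and its data private.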
|
2502.05684
|
Machine Unlearning via Information Theoretic Regularization
|
cs.LG cs.AI cs.IT math.IT stat.ML
|
How can we effectively remove or "unlearn" undesirable information, such as
specific features or individual data points, from a learning outcome while
minimizing utility loss and ensuring rigorous guarantees? We introduce a
mathematical framework based on information-theoretic regularization to address
both feature and data point unlearning. For feature unlearning, we derive a
unified solution that simultaneously optimizes diverse learning objectives,
including entropy, conditional entropy, KL-divergence, and the energy of
conditional probability. For data point unlearning, we first propose a novel
definition that serves as a practical condition for unlearning via retraining,
is easy to verify, and aligns with the principles of differential privacy from
an inference perspective. Then, we provide provable guarantees for our
framework on data point unlearning. By combining flexibility in learning
objectives with simplicity in regularization design, our approach is highly
adaptable and practical for a wide range of machine learning and AI
applications.
|
2502.05685
|
Mobile Application Threats and Security
|
cs.CR cs.AI
|
The movement to mobile computing solutions provides flexibility to different
users whether it is a business user, a student, or even providing entertainment
to children and adults of all ages. With these emerging technologies, mobile
users are often unable to safeguard private information effectively, and
cybercrimes are increasing day by day. This manuscript focuses on security
vulnerabilities in the mobile computing industry, with particular attention to
tablets and smartphones. This study examines current security threats in the
Android and Apple iOS markets, exposing security risks and threats that
the novice or average user may not be aware of. The purpose of this study is to
analyze current security risks and threats, and provide solutions that may be
deployed to protect against such threats.
|
2502.05690
|
Managing Geological Uncertainty in Critical Mineral Supply Chains: A
POMDP Approach with Application to U.S. Lithium Resources
|
cs.AI econ.GN q-fin.EC
|
The world is entering an unprecedented period of critical mineral demand,
driven by the global transition to renewable energy technologies and electric
vehicles. This transition presents unique challenges in mineral resource
development, particularly due to geological uncertainty, a key characteristic
that traditional supply chain optimization approaches do not adequately
address. To tackle this challenge, we propose a novel application of Partially
Observable Markov Decision Processes (POMDPs) that optimizes critical mineral
sourcing decisions while explicitly accounting for the dynamic nature of
geological uncertainty. Through a case study of the U.S. lithium supply chain,
we demonstrate that POMDP-based policies achieve superior outcomes compared to
traditional approaches, especially when initial reserve estimates are
imperfect. Our framework provides quantitative insights for balancing domestic
resource development with international supply diversification, offering
policymakers a systematic approach to strategic decision-making in critical
mineral supply chains.
|
2502.05691
|
Consistent sampling of Paley-Wiener functions on graphons
|
eess.SP cs.IT math.CO math.IT
|
We study sampling methods for Paley-Wiener functions on graphons, thereby
adapting and generalizing methods initially developed for graphs to the graphon
setting. We then derive conditions under which such a sampling estimate is
consistent with graphon convergence.
|
2502.05692
|
Variational integrators for optimal control of foldable drones
|
math.OC cs.NA cs.SY eess.SY math.NA
|
Numerical methods that preserve geometric invariants of a system, such as its
energy, momentum, or symplectic form, are called geometric integrators;
variational integrators form an important subclass. The general idea behind
variational integrators is to discretize Hamilton's principle rather than the
equations of motion; as a consequence, these methods preserve some of the
invariants of the original system (symplecticity, symmetry, good energy
behavior, ...). In this paper, we construct variational integrators for
control-dependent Lagrangian systems on Lie groups. These integrators are
derived via a discrete-time variational principle for discrete-time
control-dependent reduced Lagrangians. We apply the variational integrators to
optimal control problems for path planning of foldable unmanned aerial
vehicles (UAVs). Simulations are shown to validate the performance of the
geometric integrator.
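The discretization idea can be made concrete with the standard variational-integrator construction (textbook material, not this paper's specific reduced, control-dependent Lie-group setting):

```latex
% Discrete Lagrangian approximating the action over one time step h:
%   L_d(q_k, q_{k+1}) \approx \int_{t_k}^{t_k + h} L(q(t), \dot{q}(t))\, dt.
% Stationarity of the discrete action sum
\delta \sum_{k=0}^{N-1} L_d(q_k, q_{k+1}) = 0
% yields the discrete Euler--Lagrange equations
D_2 L_d(q_{k-1}, q_k) + D_1 L_d(q_k, q_{k+1}) = 0, \qquad k = 1, \dots, N-1,
% whose solutions define a symplectic, momentum-preserving update map
% (q_{k-1}, q_k) \mapsto (q_k, q_{k+1}).
```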
|
2502.05693
|
Vertical Vibratory Transport of Grasped Parts Using Impacts
|
cs.RO
|
In this paper, we use impact-induced acceleration in conjunction with
periodic stick-slip to successfully and quickly transport parts vertically
against gravity. We show analytically that vertical vibratory transport is more
difficult than its horizontal counterpart, and provide guidelines for achieving
optimal vertical vibratory transport of a part. Namely, such a system must be
capable of quickly realizing high accelerations, as well as supplying normal
forces at least several times those required for static equilibrium. We also
show that for a given maximum acceleration, there is an optimal normal force
for transport. To test our analytical guidelines, we built a vibrating surface
using flexures and a voice coil actuator that can accelerate a magnetic ram
into various materials to generate impacts. The surface was used to transport a
part against gravity. Experimentally obtained motion tracking data confirmed
the theoretical model. A series of grasping tests with a vibrating-surface
equipped parallel jaw gripper confirmed the design guidelines.
|
2502.05694
|
Zero-Shot End-to-End Relation Extraction in Chinese: A Comparative Study
of Gemini, LLaMA and ChatGPT
|
cs.CL cs.AI cs.LG
|
This study investigates the performance of various large language models
(LLMs) on zero-shot end-to-end relation extraction (RE) in Chinese, a task that
integrates entity recognition and relation extraction without requiring
annotated data. While LLMs show promise for RE, most prior work focuses on
English or assumes pre-annotated entities, leaving their effectiveness in
Chinese RE largely unexplored. To bridge this gap, we evaluate ChatGPT, Gemini,
and LLaMA based on accuracy, efficiency, and adaptability. ChatGPT demonstrates
the highest overall performance, balancing precision and recall, while Gemini
achieves the fastest inference speed, making it suitable for real-time
applications. LLaMA underperforms in both accuracy and latency, highlighting
the need for further adaptation. Our findings provide insights into the
strengths and limitations of LLMs for zero-shot Chinese RE, shedding light on
trade-offs between accuracy and efficiency. This study serves as a foundation
for future research aimed at improving LLM adaptability to complex linguistic
tasks in Chinese NLP.
|
2502.05695
|
Semantic-Aware Adaptive Video Streaming Using Latent Diffusion Models
for Wireless Networks
|
cs.MM cs.AI cs.CV cs.LG eess.IV
|
This paper proposes a novel framework for real-time adaptive-bitrate video
streaming that integrates latent diffusion models (LDMs) into an FFmpeg-based
pipeline. This solution addresses the challenges of high bandwidth usage,
storage inefficiencies, and quality of experience (QoE) degradation associated
with traditional constant bitrate streaming (CBS) and adaptive bitrate
streaming (ABS). The proposed approach leverages LDMs to compress I-frames into
a latent space, offering significant storage and semantic transmission savings
without sacrificing high visual quality. It keeps B-frames and P-frames as
adjustment metadata to ensure efficient video reconstruction at the user side,
and the framework is complemented with state-of-the-art
denoising and video frame interpolation (VFI) techniques. These techniques
mitigate semantic ambiguity and restore temporal coherence between frames, even
in noisy wireless communication environments. Experimental results demonstrate
that the proposed method achieves high-quality video streaming with optimized
bandwidth usage, outperforming state-of-the-art solutions in terms of QoE and
resource efficiency. This work opens new possibilities for scalable real-time
video streaming in 5G and future post-5G networks.
|
2502.05696
|
Implicit Physics-aware Policy for Dynamic Manipulation of Rigid Objects
via Soft Body Tools
|
cs.RO
|
Recent advancements in robot tool use have unlocked their usage for novel
tasks, yet the predominant focus is on rigid-body tools, while the
investigation of soft-body tools and their dynamic interaction with rigid
bodies remains unexplored. This paper takes a pioneering step towards dynamic
one-shot soft tool use for manipulating rigid objects, a challenging problem
posed by complex interactions and unobservable physical properties. To address
these problems, we propose the Implicit Physics-aware (IPA) policy, designed to
facilitate effective soft tool use across various environmental configurations.
The IPA policy conducts system identification to implicitly identify physics
information and predict goal-conditioned, one-shot actions accordingly. We
validate our approach through a challenging task, i.e., transporting rigid
objects using soft tools such as ropes to distant target positions in a single
attempt under unknown environment physics parameters. Our experimental results
indicate the effectiveness of our method in efficiently identifying physical
properties, accurately predicting actions, and smoothly generalizing to
real-world environments. The related video is available at:
https://youtu.be/4hPrUDTc4Rg?si=WUZrT2vjLMt8qRWA
|
2502.05699
|
Context information can be more important than reasoning for time series
forecasting with a large language model
|
cs.LG cs.AI
|
With the evolution of large language models (LLMs), there is growing interest
in leveraging LLMs for time series tasks. In this paper, we explore the
characteristics of LLMs for time series forecasting by considering various
existing and proposed prompting techniques. Forecasting for both short and long
time series was evaluated. Our findings indicate that no single prompting
method is universally applicable. It was also observed that simply providing
proper context information related to the time series, without additional
reasoning prompts, can achieve performance comparable to the best-performing
prompt for each case. This observation suggests that providing proper context
information can be more important than specific reasoning prompts in time
series forecasting. Several weaknesses in prompting for time
series forecasting were also identified. First, LLMs often fail to follow the
procedures described by the prompt. Second, when reasoning steps involve simple
algebraic calculations with several operands, LLMs often fail to calculate
accurately. Third, LLMs sometimes misunderstand the semantics of prompts,
resulting in incomplete responses.
|
2502.05701
|
TOKON: TOKenization-Optimized Normalization for time series analysis
with a large language model
|
cs.LG
|
While large language models have rapidly evolved towards general artificial
intelligence, their versatility in analyzing time series data remains limited.
To address this limitation, we propose a novel normalization technique that
considers the inherent nature of tokenization. The proposed
Tokenization-Optimized Normalization (TOKON) simplifies time series data by
representing each element with a single token, effectively reducing the number
of tokens by 2 to 3 times. Additionally, we introduce a novel prompt for time
series forecasting, termed Time Series Forecasting with Care (TFSC), to further
enhance forecasting performance. Experimental results demonstrate that TOKON
improves root mean square error (RMSE) for multi-step forecasting by
approximately 7% to 18%, depending on the dataset and prompting method.
Furthermore, TFSC, when used in conjunction with TOKON, shows additional
improvements in forecasting accuracy for certain datasets.
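The abstract does not spell out the normalization scheme, so the following is only a guess at the flavor of the idea: rescale each series element to a small integer so that typical LLM tokenizers emit roughly one token per element, with an inverse map to recover values afterwards. Function names, the integer range, and the example series are all invented.

```python
import numpy as np

def tokon_normalize(x, lo=0, hi=99):
    """Map each element to a small integer (hypothetical sketch of a
    tokenization-friendly normalization; the paper's scheme may differ)."""
    x = np.asarray(x, dtype=float)
    xmin, xmax = x.min(), x.max()
    scaled = (x - xmin) / (xmax - xmin + 1e-12)
    ints = np.round(lo + scaled * (hi - lo)).astype(int)
    return ints, (xmin, xmax, lo, hi)

def tokon_denormalize(ints, params):
    """Invert the integer mapping back to the original value range."""
    xmin, xmax, lo, hi = params
    scaled = (np.asarray(ints) - lo) / (hi - lo)
    return xmin + scaled * (xmax - xmin)

series = [12.3, 15.1, 14.2, 19.8]
tokens, p = tokon_normalize(series)
prompt = " ".join(map(str, tokens))         # short, one token per element
recon = tokon_denormalize(tokens, p)
print(prompt, np.max(np.abs(recon - series)))
```

The round trip loses at most half an integer step, which is the quantization cost paid for the shorter prompt.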
|
2502.05702
|
Graph Neural Networks for Efficient AC Power Flow Prediction in Power
Grids
|
eess.SY cs.SY
|
This paper proposes a novel approach using Graph Neural Networks (GNNs) to
solve the AC Power Flow problem in power grids. AC Optimal Power Flow (AC OPF) is essential for
minimizing generation costs while meeting the operational constraints of the
grid. Traditional solvers struggle with scalability, especially in large
systems with renewable energy sources. Our approach models the power grid as a
graph, where buses are nodes and transmission lines are edges. We explore
different GNN architectures, including GCN, GAT, SAGEConv, and GraphConv to
predict AC power flow solutions efficiently. Our experiments on IEEE test
systems show that GNNs can accurately predict power flow solutions and scale to
larger systems, outperforming traditional solvers in terms of computation time.
This work highlights the potential of GNNs for real-time power grid management,
with future plans to apply the model to even larger grid systems.
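The grid-as-graph modeling described above can be illustrated with a single graph-convolution layer in plain NumPy. The bus features, weight shapes, and two-layer stack below are made-up illustrations, not the paper's architecture.

```python
import numpy as np

def gcn_layer(A, H, W):
    """One GCN step: H' = ReLU(D^-1/2 (A + I) D^-1/2 H W)."""
    n = A.shape[0]
    A_hat = A + np.eye(n)                    # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))   # symmetric normalization
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W, 0.0)

# Toy 4-bus grid: nodes = buses, edges = transmission lines.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
# Per-bus features, e.g. active/reactive power injections (made-up values).
H = np.array([[1.0, 0.2], [0.5, 0.1], [0.8, 0.3], [0.2, 0.0]])
rng = np.random.default_rng(0)
W1 = rng.standard_normal((2, 8))
W2 = rng.standard_normal((8, 2))   # e.g. predict (|V|, angle) per bus
out = gcn_layer(A, gcn_layer(A, H, W1), W2)
print(out.shape)  # (4, 2)
```

Stacking such layers lets each bus aggregate information from electrically neighboring buses, which is the inductive bias the paper exploits.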
|
2502.05704
|
Rethinking Word Similarity: Semantic Similarity through Classification
Confusion
|
cs.CL cs.AI
|
Word similarity has many applications to social science and cultural
analytics tasks like measuring meaning change over time and making sense of
contested terms. Yet traditional similarity methods based on cosine similarity
between word embeddings cannot capture the context-dependent, asymmetrical,
polysemous nature of semantic similarity. We propose a new measure of
similarity, Word Confusion, that reframes semantic similarity in terms of
feature-based classification confusion. Word Confusion is inspired by Tversky's
suggestion that similarity features be chosen dynamically. Here we train a
classifier to map contextual embeddings to word identities and use the
classifier confusion (the probability of choosing a confounding word c instead
of the correct target word t) as a measure of the similarity of c and t. The
set of potential confounding words acts as the chosen features. Our method is
comparable to cosine similarity in matching human similarity judgments across
several datasets (MEN, WordSim353, and SimLex), and can measure similarity
using predetermined features of interest. We demonstrate our model's ability to
make use of dynamic features by applying it to test a hypothesis about changes
in the 18th-century meaning of the French word "revolution" from popular to state
action during the French Revolution. We hope this reimagining of semantic
similarity will inspire the development of new tools that better capture the
multi-faceted and dynamic nature of language, advancing the fields of
computational social science and cultural analytics and beyond.
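The classifier-confusion idea can be sketched end-to-end on synthetic data. Everything here is invented for illustration: the words, the 2-D "contextual embeddings", and the classifier itself (a simple Gaussian softmax model standing in for a trained embedding-to-word classifier).

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "contextual embeddings": contexts of each word cluster around a
# prototype; "cat"/"dog" prototypes are close, "car" is far away.
protos = {"cat": np.array([1.0, 0.0]),
          "dog": np.array([0.9, 0.2]),
          "car": np.array([-1.0, 1.0])}
words = list(protos)
sigma = 0.3

def predict_proba(x):
    """Toy stand-in for a trained embedding->word classifier:
    softmax over negative squared distances to class prototypes."""
    d2 = np.array([np.sum((x - protos[w]) ** 2) for w in words])
    logits = -d2 / (2 * sigma ** 2)
    e = np.exp(logits - logits.max())
    return e / e.sum()

def word_confusion(target, confound, n=500):
    """Word Confusion: mean probability that contexts of `target`
    are classified as `confound`."""
    ctx = protos[target] + sigma * rng.standard_normal((n, 2))
    j = words.index(confound)
    return float(np.mean([predict_proba(x)[j] for x in ctx]))

print(word_confusion("cat", "dog"), word_confusion("cat", "car"))
```

Note the measure is naturally asymmetric and context-dependent: it is driven by the classifier's decision boundaries rather than by a single cosine between static vectors.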
|
2502.05706
|
TD(0) Learning converges for Polynomial mixing and non-linear functions
|
stat.ML cs.LG
|
Theoretical work on Temporal Difference (TD) learning has provided
finite-sample and high-probability guarantees for data generated from Markov
chains. However, these bounds typically require linear function approximation,
instance-dependent step sizes, algorithmic modifications, and restrictive
mixing rates. We present theoretical findings for TD learning under more
applicable assumptions, including instance-independent step sizes, full data
utilization, and polynomial ergodicity, applicable to both linear and
non-linear functions. \textbf{To our knowledge, this is the first proof of
TD(0) convergence on Markov data under universal and instance-independent step
sizes.} While each contribution is significant on its own, their combination
allows these bounds to be effectively utilized in practical application
settings. Our results include bounds for linear models and for non-linear
models under generalized gradients and H\"older continuity.
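The setting studied above can be illustrated with a minimal semi-gradient TD(0) loop: Markov (not i.i.d.) data, a constant instance-independent step size, and a non-linear (tanh) value function. The chain, rewards, and step size below are arbitrary toy choices, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# A small ergodic Markov chain with rewards; evaluate the fixed policy.
P = np.array([[0.7, 0.3, 0.0],
              [0.1, 0.6, 0.3],
              [0.2, 0.2, 0.6]])
R = np.array([0.3, 0.0, -0.3])
gamma = 0.9

phi = np.eye(3)                       # per-state features
w = np.zeros(3)
V = lambda s: np.tanh(w @ phi[s])     # non-linear value function

alpha = 0.05                          # instance-independent constant step size
s = 0
for _ in range(20000):
    s_next = rng.choice(3, p=P[s])                # Markov data, fully used
    delta = R[s] + gamma * V(s_next) - V(s)       # TD error
    grad = (1 - np.tanh(w @ phi[s]) ** 2) * phi[s]
    w += alpha * delta * grad                     # semi-gradient TD(0) update
    s = s_next

V_true = np.linalg.solve(np.eye(3) - gamma * P, R)   # exact values
print([V(s) for s in range(3)], V_true)
```

With a constant step size the iterates settle into a small noise ball around the true values rather than converging exactly, which is the regime the paper's finite-sample bounds describe.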
|
2502.05708
|
GWRF: A Generalizable Wireless Radiance Field for Wireless Signal
Propagation Modeling
|
cs.NI cs.LG
|
We present Generalizable Wireless Radiance Fields (GWRF), a framework for
modeling wireless signal propagation at arbitrary 3D transmitter and receiver
positions. Unlike previous methods that adapt vanilla Neural Radiance Fields
(NeRF) from the optical to the wireless signal domain, requiring extensive
per-scene training, GWRF generalizes effectively across scenes. First, a
geometry-aware Transformer encoder-based wireless scene representation module
incorporates information from geographically proximate transmitters to learn a
generalizable wireless radiance field. Second, a neural-driven ray tracing
algorithm operates on this field to automatically compute signal reception at
the receiver. Experimental results demonstrate that GWRF outperforms existing
methods on single scenes and achieves state-of-the-art performance on unseen
scenes.
|
2502.05709
|
Flow-based Conformal Prediction for Multi-dimensional Time Series
|
cs.LG stat.ML
|
Conformal prediction for time series presents two key challenges: (1)
leveraging sequential correlations in features and non-conformity scores and
(2) handling multi-dimensional outcomes. We propose a novel conformal
prediction method to address these two key challenges by integrating
Transformer and Normalizing Flow. Specifically, the Transformer encodes the
historical context of time series, and normalizing flow learns the
transformation from the base distribution to the distribution of non-conformity
scores conditioned on the encoded historical context. This enables the
construction of prediction regions by transforming samples from the base
distribution using the learned conditional flow. We ensure the marginal
coverage by defining the prediction regions as sets in the transformed space
that correspond to a predefined probability mass in the base distribution. The
model is trained end-to-end by Flow Matching, avoiding the need for
computationally intensive numerical solutions of ordinary differential
equations. We demonstrate that our proposed method achieves smaller prediction
regions compared to the baselines while satisfying the desired coverage through
comprehensive experiments using simulated and real-world time series datasets.
|
2502.05710
|
SSDD-GAN: Single-Step Denoising Diffusion GAN for Cochlear Implant
Surgical Scene Completion
|
cs.CV
|
Recent deep learning-based image completion methods, including both
inpainting and outpainting, have demonstrated promising results in restoring
corrupted images by effectively filling various missing regions. Among these,
Generative Adversarial Networks (GANs) and Denoising Diffusion Probabilistic
Models (DDPMs) have been employed as key generative image completion
approaches, excelling at generating high-quality restorations with
reduced artifacts and improved fine details. In previous work, we developed a
method aimed at synthesizing views from novel microscope positions for
mastoidectomy surgeries; however, that approach did not have the ability to
restore the surrounding surgical scene environment. In this paper, we propose
an efficient method to complete the surgical scene of the synthetic
postmastoidectomy dataset. Our approach leverages self-supervised learning on
real surgical datasets to train a Single-Step Denoising Diffusion-GAN
(SSDD-GAN), combining the advantages of diffusion models with the adversarial
optimization of GANs, yielding a 6% improvement in Structural Similarity. The
trained model is then directly applied to the synthetic postmastoidectomy
dataset using a zero-shot approach, enabling the generation of realistic and
complete surgical scenes without the need for explicit ground-truth labels from
the synthetic postmastoidectomy dataset. This method addresses key limitations
in previous work, offering a novel pathway for full surgical microscopy scene
completion and enhancing the usability of the synthetic postmastoidectomy
dataset in surgical preoperative planning and intraoperative navigation.
|
2502.05713
|
4D VQ-GAN: Synthesising Medical Scans at Any Time Point for Personalised
Disease Progression Modelling of Idiopathic Pulmonary Fibrosis
|
eess.IV cs.AI cs.CV cs.LG
|
Understanding the progression trajectories of diseases is crucial for early
diagnosis and effective treatment planning. This is especially vital for
life-threatening conditions such as Idiopathic Pulmonary Fibrosis (IPF), a
chronic, progressive lung disease with a prognosis comparable to many cancers.
Computed tomography (CT) imaging has been established as a reliable diagnostic
tool for IPF. Accurately predicting future CT scans of early-stage IPF patients
can aid in developing better treatment strategies, thereby improving survival
outcomes. In this paper, we propose 4D Vector Quantised Generative Adversarial
Networks (4D-VQ-GAN), a model capable of generating realistic CT volumes of IPF
patients at any time point. The model is trained using a two-stage approach. In
the first stage, a 3D-VQ-GAN is trained to reconstruct CT volumes. In the
second stage, a Neural Ordinary Differential Equation (ODE) based temporal
model is trained to capture the temporal dynamics of the quantised embeddings
generated by the encoder in the first stage. We evaluate different
configurations of our model for generating longitudinal CT scans and compare
the results against ground truth data, both quantitatively and qualitatively.
For validation, we conduct survival analysis using imaging biomarkers derived
from generated CT scans and achieve a C-index comparable to that of biomarkers
derived from the real CT scans. The survival analysis results demonstrate the
potential clinical utility inherent to generated longitudinal CT scans, showing
that they can reliably predict survival outcomes.
|
2502.05714
|
Proving the Coding Interview: A Benchmark for Formally Verified Code
Generation
|
cs.SE cs.AI cs.LG cs.LO
|
We introduce the Formally Verified Automated Programming Progress Standards,
or FVAPPS, a benchmark of 4715 samples for writing programs and proving their
correctness, the largest formal verification benchmark, including 1083 curated
and quality-controlled samples. Previously, APPS provided a benchmark and
dataset for programming puzzles to be completed in Python and checked against
unit tests, of the kind seen in technical assessments in the software
engineering industry. Building upon recent approaches for benchmarks in
interactive theorem proving, we generalize the unit tests to Lean 4 theorems
given without proof (i.e., using Lean's "sorry" keyword). On the 406 theorems
of 100 randomly selected samples, Sonnet correctly proves 30% and Gemini
correctly proves 18%. We challenge the machine learning and program synthesis
communities to solve both each general purpose programming problem and its
associated correctness specifications. The benchmark is available at
https://huggingface.co/datasets/quinn-dougherty/fvapps.
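A hypothetical sample (invented here, not drawn from FVAPPS) illustrating the described format: a program together with a Lean 4 specification theorem left with `sorry` for the solver to prove.

```lean
-- Hypothetical FVAPPS-style sample: implement the function,
-- then prove the accompanying specification.
def maxOfList : List Nat → Nat
  | []      => 0
  | x :: xs => max x (maxOfList xs)

-- Specifications ship as theorems stated without proof;
-- the task is to replace `sorry` with a valid proof.
theorem maxOfList_ge (xs : List Nat) (x : Nat) (h : x ∈ xs) :
    x ≤ maxOfList xs := by
  sorry
```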
|
2502.05718
|
Using agent-based models and EXplainable Artificial Intelligence (XAI)
to simulate social behaviors and policy intervention scenarios: A case study
of private well users in Ireland
|
cs.CY cs.LG
|
Around 50 percent of Ireland's rural population relies on unregulated private
wells vulnerable to agricultural runoff and untreated wastewater. High national
rates of Shiga toxin-producing Escherichia coli (STEC) and other waterborne
illnesses have been linked to well water exposure. Periodic well testing is
essential for public health, yet the lack of government incentives places the
financial burden on households. Understanding environmental, cognitive, and
material factors influencing well-testing behavior is critical.
This study employs Agent-Based Modeling (ABM) to simulate policy
interventions based on national survey data. The ABM framework, designed for
private well-testing behavior, integrates a Deep Q-network reinforcement
learning model and Explainable AI (XAI) for decision-making insights. Key
features were selected using Recursive Feature Elimination (RFE) with 10-fold
cross-validation, while SHAP (Shapley Additive Explanations) provided further
interpretability for policy recommendations.
Fourteen policy scenarios were tested. The most effective, Free Well Testing
plus Communication Campaign, increased participation to 435 out of 561 agents,
from a baseline of approximately 5 percent, with rapid behavioral adaptation.
Free Well Testing plus Regulation also performed well, with 433 out of 561
agents initiating well testing. Free testing alone raised participation to over
75 percent, with some agents testing multiple times annually. Scenarios with
free well testing achieved faster learning efficiency, converging in 1000
episodes, while others took 2000 episodes, indicating slower adaptation.
This research demonstrates the value of ABM and XAI in public health policy,
providing a framework for evaluating behavioral interventions in environmental
health.
|
2502.05719
|
Extended Histogram-based Outlier Score (EHBOS)
|
cs.LG cs.AI stat.ML
|
Histogram-Based Outlier Score (HBOS) is a widely used outlier or anomaly
detection method known for its computational efficiency and simplicity.
However, its assumption of feature independence limits its ability to detect
anomalies in datasets where interactions between features are critical. In this
paper, we propose the Extended Histogram-Based Outlier Score (EHBOS), which
enhances HBOS by incorporating two-dimensional histograms to capture
dependencies between feature pairs. This extension allows EHBOS to identify
contextual and dependency-driven anomalies that HBOS fails to detect. We
evaluate EHBOS on 17 benchmark datasets, demonstrating its effectiveness and
robustness across diverse anomaly detection scenarios. EHBOS outperforms HBOS
on several datasets, particularly those where feature interactions are critical
in defining the anomaly structure, achieving notable improvements in ROC AUC.
These results highlight that EHBOS can be a valuable extension to HBOS, with
the ability to model complex feature dependencies. EHBOS offers a powerful new
tool for anomaly detection, particularly in datasets where contextual or
relational anomalies play a significant role.
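A minimal sketch of the extension described above: classic HBOS sums per-feature negative log histogram densities, and an EHBOS-style score adds the same quantity for 2-D histograms over every feature pair. Bin counts and the toy data are arbitrary choices, not the paper's setup.

```python
import numpy as np

def hbos_scores(X, bins=10):
    """Classic HBOS: sum over features of -log(histogram density)."""
    n, d = X.shape
    scores = np.zeros(n)
    for j in range(d):
        hist, edges = np.histogram(X[:, j], bins=bins, density=True)
        idx = np.clip(np.digitize(X[:, j], edges[1:-1]), 0, bins - 1)
        scores += -np.log(hist[idx] + 1e-12)
    return scores

def ehbos_scores(X, bins=10):
    """EHBOS-style extension: add -log density of 2-D histograms over
    feature pairs, capturing pairwise dependencies."""
    n, d = X.shape
    scores = hbos_scores(X, bins)
    for j in range(d):
        for k in range(j + 1, d):
            H, xe, ye = np.histogram2d(X[:, j], X[:, k], bins=bins,
                                       density=True)
            xi = np.clip(np.digitize(X[:, j], xe[1:-1]), 0, bins - 1)
            yi = np.clip(np.digitize(X[:, k], ye[1:-1]), 0, bins - 1)
            scores += -np.log(H[xi, yi] + 1e-12)
    return scores

# Dependency-driven anomaly: each marginal looks normal, the pair does not.
rng = np.random.default_rng(0)
x = rng.standard_normal(500)
X = np.column_stack([x, x + 0.1 * rng.standard_normal(500)])  # y tracks x
X = np.vstack([X, [[2.0, -2.0]]])   # fine marginally, odd jointly (index 500)

s_h, s_e = hbos_scores(X), ehbos_scores(X)
rank = lambda s: int(np.argsort(-s).tolist().index(500))
print(rank(s_h), rank(s_e))  # EHBOS should rank the joint anomaly higher
```

The injected point sits in an empty region of the 2-D histogram even though both of its marginals are unremarkable, which is exactly the case HBOS's independence assumption misses.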
|
2502.05720
|
Pareto-Optimality, Smoothness, and Stochasticity in Learning-Augmented
One-Max-Search
|
cs.DS cs.AI
|
One-max search is a classic problem in online decision-making, in which a
trader acts on a sequence of revealed prices and accepts one of them
irrevocably to maximise its profit. The problem has been studied both in
probabilistic and in worst-case settings, notably through competitive analysis,
and more recently in learning-augmented settings in which the trader has access
to a prediction on the sequence. However, existing approaches either lack
smoothness, or do not achieve optimal worst-case guarantees: they do not attain
the best possible trade-off between the consistency and the robustness of the
algorithm. We close this gap by presenting the first algorithm that
simultaneously achieves both of these important objectives. Furthermore, we
show how to leverage the obtained smoothness to provide an analysis of one-max
search in stochastic learning-augmented settings which capture randomness in
both the observed prices and the prediction.
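For context, the textbook baseline that learning-augmented variants build on is the classic reservation-price rule: for prices known to lie in [m, M], accept the first price at least sqrt(mM), which is sqrt(M/m)-competitive. The paper's smooth, Pareto-optimal, prediction-aware algorithm is not reproduced here; this is only the baseline.

```python
import math

def one_max_search(prices, m, M):
    """Classic reservation-price rule for one-max search: accept the
    first price >= sqrt(m*M); sqrt(M/m)-competitive in the worst case."""
    threshold = math.sqrt(m * M)
    for p in prices:
        if p >= threshold:
            return p        # irrevocable acceptance
    return prices[-1]       # forced to accept the final price

prices = [12, 30, 45, 28, 60, 20]
print(one_max_search(prices, m=10, M=100))  # threshold sqrt(1000) ~ 31.6 -> 45
```

A learning-augmented variant replaces the fixed threshold with one that depends on the predicted maximum, trading consistency (performance when the prediction is right) against robustness (worst-case guarantee).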
|
2502.05722
|
Explainable and Class-Revealing Signal Feature Extraction via Scattering
Transform and Constrained Zeroth-Order Optimization
|
cs.LG eess.SP math.OC stat.ML
|
We propose a new method to extract discriminant and explainable features from
a particular machine learning model, i.e., a combination of the scattering
transform and the multiclass logistic regression. Although this model is
well-known for its ability to learn various signal classes with high
classification rate, it remains elusive to understand why it can generate such
successful classification, mainly due to the nonlinearity of the scattering
transform. In order to uncover the meaning of the scattering transform
coefficients selected by the multiclass logistic regression (with the Lasso
penalty), we adopt zeroth-order optimization algorithms to search an input
pattern that maximizes the class probability of a class of interest given the
learned model. In order to do so, it turns out that imposing sparsity and
smoothness of input patterns is important. We demonstrate the effectiveness of
our proposed method using a couple of synthetic time-series classification
problems.
|
2502.05724
|
Rethinking Link Prediction for Directed Graphs
|
cs.LG cs.AI
|
Link prediction for directed graphs is a crucial task with diverse real-world
applications. Recent advances in embedding methods and Graph Neural Networks
(GNNs) have shown promising improvements. However, these methods often lack a
thorough analysis of embedding expressiveness and suffer from ineffective
benchmarks for a fair evaluation. In this paper, we propose a unified framework
to assess the expressiveness of existing methods, highlighting the impact of
dual embeddings and decoder design on performance. To address limitations in
current experimental setups, we introduce DirLinkBench, a robust new benchmark
with comprehensive coverage and standardized evaluation. The results show that
current methods struggle to achieve strong performance on the new benchmark,
while DiGAE outperforms others overall. We further revisit DiGAE theoretically,
showing its graph convolution aligns with GCN on an undirected bipartite graph.
Inspired by these insights, we propose a novel spectral directed graph
auto-encoder SDGAE that achieves SOTA results on DirLinkBench. Finally, we
analyze key factors influencing directed link prediction and highlight open
challenges.
|
2502.05725
|
Predictive Coresets
|
stat.CO cs.LG
|
Modern data analysis often involves massive datasets with hundreds of
thousands of observations, making traditional inference algorithms
computationally prohibitive. Coresets are selection methods designed to choose
a smaller weighted subset of observations while maintaining similar learning
performance. Conventional coreset approaches determine these weights by
minimizing the Kullback-Leibler (KL) divergence between the likelihood
functions of the full and weighted datasets; as a result, this makes them
ill-posed for nonparametric models, where the likelihood is often intractable.
We propose an alternative variational method which employs randomized
posteriors and finds weights to match the unknown posterior predictive
distributions conditioned on the full and reduced datasets. Our approach
provides a general algorithm based on predictive recursions suitable for
nonparametric priors. We evaluate the performance of the proposed coreset
construction on diverse problems, including random partitions and density
estimation.
|
2502.05726
|
Improving Environment Novelty Quantification for Effective Unsupervised
Environment Design
|
cs.LG stat.ML
|
Unsupervised Environment Design (UED) formalizes the problem of autocurricula
through interactive training between a teacher agent and a student agent. The
teacher generates new training environments with high learning potential,
curating an adaptive curriculum that strengthens the student's ability to
handle unseen scenarios. Existing UED methods mainly rely on regret, a metric
that measures the difference between the agent's optimal and actual
performance, to guide curriculum design. Regret-driven methods generate
curricula that progressively increase environment complexity for the student
but overlook environment novelty -- a critical element for enhancing an agent's
generalizability. Measuring environment novelty is especially challenging due
to the underspecified nature of environment parameters in UED, and existing
approaches face significant limitations. To address this, this paper introduces
the Coverage-based Evaluation of Novelty In Environment (CENIE) framework.
CENIE proposes a scalable, domain-agnostic, and curriculum-aware approach to
quantifying environment novelty by leveraging the student's state-action space
coverage from previous curriculum experiences. We then propose an
implementation of CENIE that models this coverage and measures environment
novelty using Gaussian Mixture Models. By integrating both regret and novelty
as complementary objectives for curriculum design, CENIE facilitates effective
exploration across the state-action space while progressively increasing
curriculum complexity. Empirical evaluations demonstrate that augmenting
existing regret-based UED algorithms with CENIE achieves state-of-the-art
performance across multiple benchmarks, underscoring the effectiveness of
novelty-driven autocurricula for robust generalization.
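A rough sketch of coverage-based novelty with a Gaussian mixture, as described above: fit a mixture to previously visited state-action pairs and score a candidate environment by the average negative log-likelihood of the visits it induces. The tiny diagonal-covariance EM routine stands in for an off-the-shelf GMM; dimensions and data are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_diag_gmm(X, k=3, iters=50):
    """Minimal diagonal-covariance GMM via EM (stand-in for a library GMM)."""
    n, d = X.shape
    mu = X[rng.choice(n, k, replace=False)]
    var = np.ones((k, d))
    pi = np.full(k, 1.0 / k)
    for _ in range(iters):
        # E-step: responsibilities under diagonal Gaussians
        logp = -0.5 * (((X[:, None, :] - mu) ** 2) / var
                       + np.log(2 * np.pi * var)).sum(-1) + np.log(pi)
        logp -= logp.max(axis=1, keepdims=True)
        r = np.exp(logp)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: update weights, means, variances
        nk = r.sum(axis=0) + 1e-9
        pi = nk / n
        mu = (r.T @ X) / nk[:, None]
        var = (r.T @ (X ** 2)) / nk[:, None] - mu ** 2 + 1e-6
    return pi, mu, var

def avg_nll(X, params):
    """Average negative log-likelihood of X under the fitted mixture."""
    pi, mu, var = params
    logp = -0.5 * (((X[:, None, :] - mu) ** 2) / var
                   + np.log(2 * np.pi * var)).sum(-1) + np.log(pi)
    m = logp.max(axis=1, keepdims=True)
    return float(-np.mean(m.squeeze() + np.log(np.exp(logp - m).sum(axis=1))))

# Coverage model: state-action pairs visited under the curriculum so far.
past = rng.standard_normal((500, 4))
params = fit_diag_gmm(past)

# Novelty of a candidate environment = how poorly its induced visits are
# explained by the coverage model.
seen_env  = rng.standard_normal((100, 4))          # similar to past coverage
novel_env = rng.standard_normal((100, 4)) + 5.0    # far from past coverage
print(avg_nll(seen_env, params), avg_nll(novel_env, params))
```

A teacher could then mix this novelty score with regret when choosing the next environment, which is the complementary-objective design the framework advocates.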
|
2502.05727
|
Impact of Data Poisoning Attacks on Feasibility and Optimality of Neural
Power System Optimizers
|
cs.LG
|
The increased integration of clean yet stochastic energy resources and the
growing number of extreme weather events are narrowing the decision-making
window of power grid operators. This time constraint is fueling a plethora of
research on Machine Learning (ML)-based optimization proxies. While
finding a fast solution is appealing, the inherent vulnerabilities of the
learning-based methods are hindering their adoption. One of these
vulnerabilities is data poisoning attacks, which add perturbations to ML
training data, leading to incorrect decisions. The impact of poisoning attacks
on learning-based power system optimizers has not been thoroughly studied,
which creates a critical vulnerability. In this paper, we examine the impact of
data poisoning attacks on ML-based optimization proxies that are used to solve
the DC Optimal Power Flow problem. Specifically, we compare the resilience of
three different methods (a penalty-based method, a post-repair approach, and a
direct mapping approach) against the adverse effects of poisoning attacks. We
use the optimality and feasibility of these proxies as performance
metrics. The insights of this work will establish a foundation for enhancing
the resilience of neural power system optimizers.
|
2502.05728
|
Hierarchical Equivariant Policy via Frame Transfer
|
cs.RO
|
Recent advances in hierarchical policy learning highlight the advantages of
decomposing systems into high-level and low-level agents, enabling efficient
long-horizon reasoning and precise fine-grained control. However, the interface
between these hierarchy levels remains underexplored, and existing hierarchical
methods often ignore domain symmetry, resulting in the need for extensive
demonstrations to achieve robust performance. To address these issues, we
propose Hierarchical Equivariant Policy (HEP), a novel hierarchical policy
framework. We propose a frame transfer interface for hierarchical policy
learning, which uses the high-level agent's output as a coordinate frame for
the low-level agent, providing a strong inductive bias while retaining
flexibility. Additionally, we integrate domain symmetries into both levels and
theoretically demonstrate the system's overall equivariance. HEP achieves
state-of-the-art performance in complex robotic manipulation tasks,
demonstrating significant improvements in both simulation and real-world
settings.
|
2502.05729
|
BnTTS: Few-Shot Speaker Adaptation in Low-Resource Setting
|
cs.CL
|
This paper introduces BnTTS (Bangla Text-To-Speech), the first framework for
Bangla speaker adaptation-based TTS, designed to bridge the gap in Bangla
speech synthesis using minimal training data. Building upon the XTTS
architecture, our approach integrates Bangla into a multilingual TTS pipeline,
with modifications to account for the phonetic and linguistic characteristics
of the language. We pre-train BnTTS on a 3.85k-hour Bangla speech dataset
with corresponding text labels and evaluate performance in both zero-shot and
few-shot settings on our proposed test dataset. Empirical evaluations in
few-shot settings show that BnTTS significantly improves the naturalness,
intelligibility, and speaker fidelity of synthesized Bangla speech. Compared to
state-of-the-art Bangla TTS systems, BnTTS exhibits superior performance in
Subjective Mean Opinion Score (SMOS), Naturalness, and Clarity metrics.
|
2502.05735
|
Towards Autonomous Experimentation: Bayesian Optimization over Problem
Formulation Space for Accelerated Alloy Development
|
eess.SY cs.CE cs.LG cs.SY math.OC stat.ML
|
Accelerated discovery in materials science demands autonomous systems capable
of dynamically formulating and solving design problems. In this work, we
introduce a novel framework that leverages Bayesian optimization over a problem
formulation space to identify optimal design formulations in line with
decision-maker preferences. By mapping various design scenarios to a
multi-attribute utility function, our approach enables the system to balance
conflicting objectives such as ductility, yield strength, density, and
solidification range without requiring an exact problem definition at the
outset. We demonstrate the efficacy of our method through an in silico case
study on a Mo-Nb-Ti-V-W alloy system targeted for gas turbine engine blade
applications. The framework converges on a sweet spot that satisfies critical
performance thresholds, illustrating that integrating problem formulation
discovery into the autonomous design loop can significantly streamline the
experimental process. Future work will incorporate human feedback to further
enhance the adaptability of the system in real-world experimental settings.
|
2502.05738
|
Performance Analysis of Traditional VQA Models Under Limited
Computational Resources
|
cs.CV
|
In real-world applications where computational resources are limited,
effectively integrating visual and textual information for Visual Question
Answering (VQA) presents significant challenges. This paper investigates the
performance of traditional models under computational constraints, focusing on
enhancing VQA performance, particularly for numerical and counting questions.
We evaluate models based on Bidirectional GRU (BidGRU), GRU, Bidirectional LSTM
(BidLSTM), and Convolutional Neural Networks (CNN), analyzing the impact of
different vocabulary sizes, fine-tuning strategies, and embedding dimensions.
Experimental results show that the BidGRU model with an embedding dimension of
300 and a vocabulary size of 3000 achieves the best overall performance without
the computational overhead of larger models. Ablation studies emphasize the
importance of attention mechanisms and counting information in handling complex
reasoning tasks under resource limitations. Our research provides valuable
insights for developing more efficient VQA models suitable for deployment in
environments with limited computational capacity.
|
2502.05739
|
Mitigating Sensitive Information Leakage in LLMs4Code through Machine
Unlearning
|
cs.CR cs.AI cs.SE
|
Large Language Models for Code (LLMs4Code) excel at code generation tasks,
promising to relieve developers of substantial software development burdens.
Nonetheless, these models have been shown to suffer from significant
privacy risks due to the potential leakage of sensitive information embedded
during training, known as the memorization problem. Addressing this issue is
crucial for ensuring privacy compliance and upholding user trust, but to date
there is a dearth of dedicated studies in the literature that focus on this
specific direction. Recently, machine unlearning has emerged as a promising
solution by enabling models to "forget" sensitive information without full
retraining, offering an efficient and scalable approach compared to traditional
data cleaning methods. In this paper, we empirically evaluate the effectiveness
of unlearning techniques for addressing privacy concerns in
LLMs4Code. Specifically, we investigate three state-of-the-art unlearning
algorithms and three well-known open-source LLMs4Code on a benchmark that
considers both the privacy data to be forgotten and the
code generation capabilities of these models. Results show that it is feasible
to mitigate the privacy concerns of LLMs4Code through machine unlearning while
maintaining their code generation capabilities at the same time. We also dissect
the forms of privacy protection/leakage after unlearning and observe that there
is a shift from direct leakage to indirect leakage, which underscores the need
for future studies addressing this risk.
|
2502.05740
|
RECOVER: Designing a Large Language Model-based Remote Patient
Monitoring System for Postoperative Gastrointestinal Cancer Care
|
cs.HC cs.AI
|
Cancer surgery is a key treatment for gastrointestinal (GI) cancers, a group
of cancers that account for more than 35% of cancer-related deaths worldwide,
but postoperative complications are unpredictable and can be life-threatening.
In this paper, we investigate how recent advancements in large language models
(LLMs) can benefit remote patient monitoring (RPM) systems through clinical
integration by designing RECOVER, an LLM-powered RPM system for postoperative
GI cancer care. To closely engage stakeholders in the design process, we first
conducted seven participatory design sessions with five clinical staff and
interviewed five cancer patients to derive six major design strategies for
integrating clinical guidelines and information needs into LLM-based RPM
systems. We then designed and implemented RECOVER, which features an
LLM-powered conversational agent for cancer patients and an interactive
dashboard for clinical staff to enable efficient postoperative RPM. Finally, we
used RECOVER as a pilot system to assess the implementation of our design
strategies with four clinical staff and five patients, providing design
implications by identifying crucial design elements, offering insights on
responsible AI, and outlining opportunities for future LLM-powered RPM systems.
|
2502.05741
|
Linear Attention Modeling for Learned Image Compression
|
cs.CV
|
In recent years, learned image compression has made tremendous progress,
achieving impressive coding efficiency. Its coding gain mainly comes from
non-linear neural network-based transforms and learnable entropy modeling.
However, most recent efforts have focused solely on strong backbones, and few
studies consider low-complexity design. In this paper, we propose LALIC, a
linear attention model for learned image compression. Specifically, we propose
to use Bi-RWKV blocks, utilizing the Spatial Mix and Channel Mix modules to
achieve more compact feature extraction, and apply a Conv-based Omni-Shift
module to adapt to the two-dimensional latent representation. Furthermore, we
propose an RWKV-based Spatial-Channel ConTeXt model (RWKV-SCCTX) that leverages
Bi-RWKV to model the correlation between neighboring features
effectively, further improving RD performance. To our knowledge, ours
is the first work to utilize efficient Bi-RWKV models with linear attention for
learned image compression. Experimental results demonstrate that our method
achieves competitive RD performance, outperforming VTM-9.1 by -14.84%,
-15.20%, -17.32% in BD-rate on Kodak, Tecnick and CLIC Professional validation
datasets.
|
2502.05742
|
An Evolutionary Game With the Game Transitions Based on the Markov
Process
|
cs.SI physics.soc-ph
|
The psychology of the individual is continuously changing in nature, which
has a significant influence on the evolutionary dynamics of populations. To
study the influence of the continuously changing psychology of individuals on
the behavior of populations, in this paper, we consider the game transitions of
individuals in evolutionary processes to capture the changing psychology of
individuals in reality, where the game that individuals will play shifts as
time progresses and is related to the transition rates between different games.
Besides, the individual's reputation is taken into account and utilized to
choose a suitable neighbor for the strategy updating of the individual. Within
this model, we investigate the statistical number of individuals in
different game states, and the expected number fits well with our theoretical
results. Furthermore, we explore the impact of transition rates between
different game states, payoff parameters, the reputation mechanism, and
different time scales of strategy updates on cooperative behavior, and our
findings demonstrate that both the transition rates and reputation mechanism
have a remarkable influence on the evolution of cooperation. Additionally, we
examine the relationship between network size and cooperation frequency,
providing valuable insights into the robustness of the model.
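The expected occupancy of each game state follows from the stationary distribution of the transition chain. A minimal two-game sketch with assumed switching rates p and q (the paper's model is on networked populations with reputations; this illustrates only the occupancy calculation):

```python
import numpy as np

# Hypothetical two-game Markov chain: individuals switch from game 1 to
# game 2 with probability p per step and back with probability q.
p, q = 0.3, 0.1
P = np.array([[1.0 - p, p],
              [q, 1.0 - q]])

pi = np.array([0.5, 0.5])   # initial occupancy of the two games
for _ in range(1000):       # iterate pi <- pi P to the fixed point
    pi = pi @ P

# the stationary occupancy matches the closed form (q, p) / (p + q)
assert np.allclose(pi, np.array([q, p]) / (p + q))
```

With these rates the population settles at 25% in game 1 and 75% in game 2, independent of the starting split.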
|
2502.05743
|
Understanding Representation Dynamics of Diffusion Models via
Low-Dimensional Modeling
|
cs.LG cs.CV
|
This work addresses the critical question of why and when diffusion models,
despite being designed for generative tasks, can excel at learning high-quality
representations in a self-supervised manner. To address this, we develop a
mathematical framework based on a low-dimensional data model and posterior
estimation, revealing a fundamental trade-off between generation and
representation quality near the final stage of image generation. Our analysis
explains the unimodal representation dynamics across noise scales, mainly
driven by the interplay between data denoising and class specification.
Building on these insights, we propose an ensemble method that aggregates
features across noise levels, significantly improving both clean performance
and robustness under label noise. Extensive experiments on both synthetic and
real-world datasets validate our findings.
|
2502.05744
|
DISCD: Distributed Lossy Semantic Communication for Logical Deduction of
Hypothesis
|
cs.IT math.IT
|
In this paper, we address hypothesis testing in a distributed network of
nodes, where each node has only partial information about the State of the
World (SotW) and is tasked with determining which hypothesis, among a given
set, is most supported by the data available within the node. However, due to
each node's limited perspective of the SotW, individual nodes cannot reliably
determine the most supported hypothesis independently. To overcome this
limitation, nodes must exchange information via an intermediate server. Our
objective is to introduce a novel distributed lossy semantic communication
framework designed to minimize each node's uncertainty about the SotW while
operating under a limited communication budget. In each communication round,
nodes determine the most content-informative message to send to the server. The
server aggregates incoming messages from all nodes, updates its view of the
SotW, and transmits back the most semantically informative message. We
demonstrate that transmitting semantically most informative messages enables
convergence toward the true distribution over the state space, improving
deductive reasoning performance under communication constraints. For
experimental evaluation, we construct a dataset designed for logical deduction
of hypotheses and compare our approach against random message selection.
Results validate the effectiveness of our semantic communication framework,
showing significant improvements in nodes' understanding of the SotW for
hypothesis testing, with reduced communication overhead.
|
2502.05749
|
UniDB: A Unified Diffusion Bridge Framework via Stochastic Optimal
Control
|
cs.CV cs.AI cs.SY eess.SY
|
Recent advances in diffusion bridge models leverage Doob's $h$-transform to
establish fixed endpoints between distributions, demonstrating promising
results in image translation and restoration tasks. However, these approaches
frequently produce blurred or excessively smoothed image details and lack a
comprehensive theoretical foundation to explain these shortcomings. To address
these limitations, we propose UniDB, a unified framework for diffusion bridges
based on Stochastic Optimal Control (SOC). UniDB formulates the problem through
an SOC-based optimization and derives a closed-form solution for the optimal
controller, thereby unifying and generalizing existing diffusion bridge models.
We demonstrate that existing diffusion bridges employing Doob's $h$-transform
constitute a special case of our framework, emerging when the terminal penalty
coefficient in the SOC cost function tends to infinity. By incorporating a
tunable terminal penalty coefficient, UniDB achieves an optimal balance between
control costs and terminal penalties, substantially improving detail
preservation and output quality. Notably, UniDB seamlessly integrates with
existing diffusion bridge models, requiring only minimal code modifications.
Extensive experiments across diverse image restoration tasks validate the
superiority and adaptability of the proposed framework. Our code is available
at https://github.com/UniDB-SOC/UniDB/.
|
2502.05752
|
PINGS: Gaussian Splatting Meets Distance Fields within a Point-Based
Implicit Neural Map
|
cs.RO cs.CV cs.GR
|
Robots require high-fidelity reconstructions of their environment for
effective operation. Such scene representations should be both geometrically
accurate and photorealistic to support downstream tasks. While this can be
achieved by building distance fields from range sensors and radiance fields
from cameras, the scalable incremental mapping of both fields consistently and
at the same time with high quality remains challenging. In this paper, we
propose a novel map representation that unifies a continuous signed distance
field and a Gaussian splatting radiance field within an elastic and compact
point-based implicit neural map. By enforcing geometric consistency between
these fields, we achieve mutual improvements by exploiting both modalities. We
devise a LiDAR-visual SLAM system called PINGS using the proposed map
representation and evaluate it on several challenging large-scale datasets.
Experimental results demonstrate that PINGS can incrementally build globally
consistent distance and radiance fields encoded with a compact set of neural
points. Compared to the state-of-the-art methods, PINGS achieves superior
photometric and geometric rendering at novel views by leveraging the
constraints from the distance field. Furthermore, by utilizing dense
photometric cues and multi-view consistency from the radiance field, PINGS
produces more accurate distance fields, leading to improved odometry estimation
and mesh reconstruction.
|
2502.05755
|
Filter, Obstruct and Dilute: Defending Against Backdoor Attacks on
Semi-Supervised Learning
|
cs.LG
|
Recent studies have verified that semi-supervised learning (SSL) is
vulnerable to data poisoning backdoor attacks. Even a tiny fraction of
contaminated training data is sufficient for adversaries to manipulate up to
90\% of the test outputs in existing SSL methods. Given the emerging threat of
backdoor attacks designed for SSL, this work aims to protect SSL against such
risks, marking it as one of the few known efforts in this area. Specifically,
we begin by identifying that the spurious correlations between the backdoor
triggers and the target class implanted by adversaries are the primary cause of
manipulated model predictions during the test phase. To disrupt these
correlations, we utilize three key techniques: Gaussian Filter, complementary
learning and trigger mix-up, which collectively filter, obstruct and dilute the
influence of backdoor attacks in both data pre-processing and feature learning.
Experimental results demonstrate that our proposed method, Backdoor Invalidator
(BI), significantly reduces the average attack success rate from 84.7\% to
1.8\% across different state-of-the-art backdoor attacks. It is also worth
mentioning that BI does not sacrifice accuracy on clean data and is supported
by a theoretical guarantee of its generalization capability.
|
2502.05756
|
Exploring Visual Embedding Spaces Induced by Vision Transformers for
Online Auto Parts Marketplaces
|
cs.CV cs.LG
|
This study examines the capabilities of the Vision Transformer (ViT) model in
generating visual embeddings for images of auto parts sourced from online
marketplaces, such as Craigslist and OfferUp. By focusing exclusively on
single-modality data, the analysis evaluates ViT's potential for detecting
patterns indicative of illicit activities. The workflow involves extracting
high-dimensional embeddings from images, applying dimensionality reduction
techniques like Uniform Manifold Approximation and Projection (UMAP) to
visualize the embedding space, and using K-Means clustering to categorize
similar items. Representative posts nearest to each cluster centroid provide
insights into the composition and characteristics of the clusters. While the
results highlight the strengths of ViT in isolating visual patterns, challenges
such as overlapping clusters and outliers underscore the limitations of
single-modal approaches in this domain. This work contributes to understanding
the role of Vision Transformers in analyzing online marketplaces and offers a
foundation for future advancements in detecting fraudulent or illegal
activities.
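The embedding-space workflow (embed, reduce, cluster, then pick the post nearest each centroid as a representative) can be sketched without the ViT/UMAP stack. Below, random vectors stand in for image embeddings and a plain NumPy k-means replaces the actual UMAP + K-Means pipeline; all names and shapes are illustrative.

```python
import numpy as np

# Stand-in for ViT image embeddings (assumption: 200 items, 8-D features).
rng = np.random.default_rng(0)
emb = rng.normal(size=(200, 8))

def kmeans(x, k=4, iters=50, seed=0):
    """Plain Lloyd's k-means: alternate assignment and centroid updates."""
    r = np.random.default_rng(seed)
    centers = x[r.choice(len(x), size=k, replace=False)]
    for _ in range(iters):
        # assign each embedding to its nearest centroid
        d = ((x[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = x[labels == j].mean(axis=0)
    return labels, centers

labels, centers = kmeans(emb)
# representative item per cluster: the embedding nearest each centroid
reps = [int(((emb - c) ** 2).sum(axis=1).argmin()) for c in centers]
```

In the study, the posts indexed like `reps` are the ones inspected to characterize each cluster's composition.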
|
2502.05759
|
Reinforced Lifelong Editing for Language Models
|
cs.CL
|
Large language models (LLMs) acquire information from pre-training corpora,
but their stored knowledge can become inaccurate or outdated over time. Model
editing addresses this challenge by modifying model parameters without
retraining, and prevalent approaches leverage hypernetworks to generate these
parameter updates. However, they face significant challenges in lifelong
editing due to their incompatibility with LLM parameters that dynamically
change during the editing process. To address this, we observed that
hypernetwork-based lifelong editing aligns with reinforcement learning modeling
and proposed RLEdit, an RL-based editing method. By treating editing losses as
rewards and optimizing hypernetwork parameters at the full knowledge sequence
level, we enable it to precisely capture LLM changes and generate appropriate
parameter updates. Our extensive empirical evaluation across several LLMs
demonstrates that RLEdit outperforms existing methods in lifelong editing with
superior effectiveness and efficiency, achieving a 59.24% improvement while
requiring only 2.11% of the time compared to most approaches. Our code is
available at: https://github.com/zhrli324/RLEdit.
|
2502.05761
|
3CAD: A Large-Scale Real-World 3C Product Dataset for Unsupervised
Anomaly
|
cs.CV
|
Industrial anomaly detection achieves progress thanks to datasets such as
MVTec-AD and VisA. However, they suffer from limitations in terms of the number
of defect samples, types of defects, and availability of real-world scenes.
These constraints inhibit researchers from further exploring the performance of
industrial detection with higher accuracy. To this end, we propose a new
large-scale anomaly detection dataset called 3CAD, which is derived from real
3C production lines. Specifically, the proposed 3CAD includes eight different
types of manufactured parts, totaling 27,039 high-resolution images labeled
with pixel-level anomalies. The key features of 3CAD are that it covers
anomalous regions of different sizes, multiple anomaly types, and the
possibility of multiple anomalous regions and multiple anomaly types per
anomaly image. This is the first and largest anomaly detection dataset
dedicated to 3C product quality control for community exploration and
development. Meanwhile, we introduce a simple yet effective framework for
unsupervised anomaly detection: a Coarse-to-Fine detection paradigm with
Recovery Guidance (CFRG). To detect small defect anomalies, the proposed CFRG
utilizes a coarse-to-fine detection paradigm. Specifically, we utilize a
heterogeneous distillation model for coarse localization and then fine
localization through a segmentation model. In addition, to better capture
normal patterns, we introduce recovery features as guidance. Finally, we report
the results of our CFRG framework and popular anomaly detection methods on the
3CAD dataset, demonstrating strong competitiveness and providing a highly
challenging benchmark to promote the development of the anomaly detection
field. Data and code are available: https://github.com/EnquanYang2022/3CAD.
|
2502.05765
|
Privacy-Preserving Dataset Combination
|
cs.LG cs.CR cs.CY
|
Access to diverse, high-quality datasets is crucial for machine learning
model performance, yet data sharing remains limited by privacy concerns and
competitive interests, particularly in regulated domains like healthcare. This
dynamic especially disadvantages smaller organizations that lack resources to
purchase data or negotiate favorable sharing agreements. We present SecureKL, a
privacy-preserving framework that enables organizations to identify beneficial
data partnerships without exposing sensitive information. Building on recent
advances in dataset combination methods, we develop a secure multiparty
computation protocol that maintains strong privacy guarantees while achieving
>90\% correlation with plaintext evaluations. In experiments with real-world
hospital data, SecureKL successfully identifies beneficial data partnerships
that improve model performance for intensive care unit mortality prediction
while preserving data privacy. Our framework provides a practical solution for
organizations seeking to leverage collective data resources while maintaining
privacy and competitive advantages. These results demonstrate the potential for
privacy-preserving data collaboration to advance machine learning applications
in high-stakes domains while promoting more equitable access to data resources.
|
2502.05768
|
Cooperative Optimization of Grid-Edge Cyber and Physical Resources for
Resilient Power System Operation
|
eess.SY cs.SY
|
The cooperative operation of grid-edge power and energy resources is crucial
to improving the resilience of power systems during contingencies. However,
given the complex cyber-physical nature of power grids, it is hard to respond
in a timely manner with limited costs for deploying additional cyber and/or physical
resources, such as during a high-impact low-frequency cyber-physical event.
Therefore, the paper examines the design of cooperative cyber-physical resource
optimization solutions to control grid-tied cyber and physical resources.
First, the operation of a cyber-physical power system is formulated into a
constrained optimization problem, including the cyber and physical objectives
and constraints. Then, a bi-level solution is provided to obtain optimal cyber
and physical actions, including the reconfiguration of cyber topology (e.g.,
activation of communication links) in the cyber layer and the control of
physical resources (e.g., energy storage systems) in the physical layer. The
developed method improves grid resilience during cyberattacks and can provide
guidance on the control of coupled physical side resources. Numerical
simulation on a modified IEEE 14-bus system demonstrates the effectiveness of
the proposed approach.
|
2502.05769
|
Digital Twin Buildings: 3D Modeling, GIS Integration, and Visual
Descriptions Using Gaussian Splatting, ChatGPT/Deepseek, and Google Maps
Platform
|
cs.CV
|
Urban digital twins are virtual replicas of cities that use multi-source data
and data analytics to optimize urban planning, infrastructure management, and
decision-making. Towards this, we propose a framework focused on the
single-building scale. By connecting to cloud mapping platforms such as the Google
Maps Platform APIs, by leveraging state-of-the-art multi-agent Large Language
Model data analysis using ChatGPT(4o) and Deepseek-V3/R1, and by using our
Gaussian Splatting-based mesh extraction pipeline, our Digital Twin Buildings
framework can retrieve a building's 3D model, visual descriptions, and achieve
cloud-based mapping integration with large language model-based data analytics
using a building's address, postal code, or geographic coordinates.
|
2502.05772
|
Effective Black-Box Multi-Faceted Attacks Breach Vision Large Language
Model Guardrails
|
cs.CV cs.AI
|
Vision Large Language Models (VLLMs) integrate visual data processing,
expanding their real-world applications, but also increasing the risk of
generating unsafe responses. In response, leading companies have implemented
Multi-Layered safety defenses, including alignment training, safety system
prompts, and content moderation. However, their effectiveness against
sophisticated adversarial attacks remains largely unexplored. In this paper, we
propose MultiFaceted Attack, a novel attack framework designed to
systematically bypass Multi-Layered Defenses in VLLMs. It comprises three
complementary attack facets: Visual Attack that exploits the multimodal nature
of VLLMs to inject toxic system prompts through images; Alignment Breaking
Attack that manipulates the model's alignment mechanism to prioritize the
generation of contrasting responses; and Adversarial Signature that deceives
content moderators by strategically placing misleading information at the end
of the response. Extensive evaluations on eight commercial VLLMs in a black-box
setting demonstrate that MultiFaceted Attack achieves a 61.56% attack success
rate, surpassing state-of-the-art methods by at least 42.18%.
|
2502.05773
|
PIPA: Preference Alignment as Prior-Informed Statistical Estimation
|
cs.LG cs.AI stat.ML
|
Offline preference alignment for language models such as Direct Preference
Optimization (DPO) is favored for its effectiveness and simplicity, eliminating
the need for costly reinforcement learning. Various offline algorithms have
been developed for different data settings, yet they lack a unified
understanding.
In this study, we introduce Prior-Informed Preference Alignment (PIPA), a
unified, RL-free probabilistic framework that formulates language model
preference alignment as a Maximum Likelihood Estimation (MLE) problem with
prior constraints. This method effectively accommodates both paired and
unpaired data, as well as answer and step-level annotations. We illustrate that
DPO and KTO are special cases with different prior constraints within our
framework. By integrating different types of prior information, we developed
two variations of PIPA: PIPA-M and PIPA-N. Both algorithms demonstrate a
$3\sim10\%$ performance enhancement on the GSM8K and MATH benchmarks across all
configurations, achieving these gains without additional training or
computational costs compared to existing algorithms.
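For context, the DPO objective that PIPA recovers as a special case scores paired data by the policy-vs-reference log-probability margin of the chosen answer over the rejected one. A generic NumPy sketch of that standard loss (illustrative log-probability values; this is not the PIPA formulation itself):

```python
import numpy as np

def dpo_loss(logp_w, logp_l, ref_w, ref_l, beta=0.1):
    """Direct Preference Optimization loss on paired data: penalize the
    policy when the chosen answer's margin over the rejected answer
    shrinks relative to the frozen reference model."""
    margin = beta * ((np.asarray(logp_w) - np.asarray(ref_w))
                     - (np.asarray(logp_l) - np.asarray(ref_l)))
    return float(np.mean(-np.log(1.0 / (1.0 + np.exp(-margin)))))

# A larger chosen-answer margin yields a smaller loss.
good = dpo_loss(logp_w=-1.0, logp_l=-2.0, ref_w=-1.5, ref_l=-1.5)
bad = dpo_loss(logp_w=-2.0, logp_l=-1.0, ref_w=-1.5, ref_l=-1.5)
assert good < bad
```

PIPA's MLE-with-prior view generalizes this form to unpaired data and step-level annotations by changing the prior constraint rather than the likelihood machinery.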
|
2502.05775
|
Implicit Communication of Contextual Information in Human-Robot
Collaboration
|
cs.RO cs.HC
|
Implicit communication is crucial in human-robot collaboration (HRC), where
contextual information, such as intentions, is conveyed as implicatures,
forming a natural part of human interaction. However, enabling robots to
appropriately use implicit communication in cooperative tasks remains
challenging. My research addresses this through three phases: first, exploring
the impact of linguistic implicatures on collaborative tasks; second, examining
how robots' implicit cues for backchanneling and proactive communication affect
team performance and perception, and how they should adapt to human teammates;
and finally, designing and evaluating a multi-LLM robotics system that learns
from human implicit communication. This research aims to enhance the natural
communication abilities of robots and facilitate their integration into daily
collaborative activities.
|
2502.05776
|
Dynamic Pricing in the Linear Valuation Model using Shape Constraints
|
stat.ML cs.LG
|
We propose a shape-constrained approach to dynamic pricing for censored data
in the linear valuation model that eliminates the need for tuning parameters
commonly required in existing methods. Previous works have addressed the
challenge of an unknown market noise distribution $F_0$ using strategies ranging from
kernel methods to reinforcement learning algorithms, such as bandit techniques
and upper confidence bounds (UCB), under the Lipschitz (and stronger)
assumption(s) on $F_0$. In contrast, our method relies on isotonic regression
under the weaker assumption that $F_0$ is $\alpha$-Hölder continuous for some
$\alpha \in (0,1]$. We obtain an upper bound on the asymptotic expected regret
that matches existing bounds in the literature for $\alpha = 1$ (the Lipschitz
case). Simulations and experiments with real-world data obtained by Welltower
Inc (a major healthcare Real Estate Investment Trust) consistently demonstrate
that our method attains better empirical regret in comparison to several
existing methods in the literature while offering the advantage of being
completely tuning-parameter free.
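The workhorse behind such tuning-free isotonic estimators is the Pool Adjacent Violators Algorithm (PAVA), which projects a sequence onto the set of non-decreasing fits, as needed when estimating a monotone $F_0$. A generic implementation on a small example (our own sketch, not the authors' code):

```python
import numpy as np

def pava(y):
    """Pool Adjacent Violators: least-squares non-decreasing fit to y."""
    blocks = [[v, 1] for v in np.asarray(y, dtype=float)]  # [sum, count]
    out = []
    for b in blocks:
        out.append(b)
        # merge while the last two block means violate monotonicity
        while len(out) > 1 and out[-2][0] / out[-2][1] > out[-1][0] / out[-1][1]:
            s, n = out.pop()
            out[-1][0] += s
            out[-1][1] += n
    # expand each pooled block back to its constituent positions
    return np.concatenate([[s / n] * n for s, n in out])

fit = pava([0.1, 0.5, 0.3, 0.9])
assert np.all(np.diff(fit) >= 0)              # monotone non-decreasing
assert np.allclose(fit, [0.1, 0.4, 0.4, 0.9]) # 0.5 and 0.3 pooled to 0.4
```

The violating pair (0.5, 0.3) is pooled to its mean, which is what makes the fit free of bandwidths or step-size parameters.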
|
2502.05777
|
Predictive Crash Analytics for Traffic Safety using Deep Learning
|
cs.LG cs.AI
|
Traditional automated crash analysis systems heavily rely on static
statistical models and historical data, requiring significant manual
interpretation and lacking real-time predictive capabilities. This research
presents an innovative approach to traffic safety analysis through the
integration of ensemble learning methods and multi-modal data fusion for
real-time crash risk assessment and prediction. Our primary contribution lies
in developing a hierarchical severity classification system that combines
spatial-temporal crash patterns with environmental conditions, achieving
significant improvements over traditional statistical approaches. The system
demonstrates a Mean Average Precision (mAP) of 0.893, representing a 15%
improvement over current state-of-the-art methods (baseline mAP: 0.776). We
introduce a novel feature engineering technique that integrates crash location
data with incident reports and weather conditions, achieving 92.4% accuracy in
risk prediction and 89.7% precision in hotspot identification. Through
extensive validation using 500,000 initial crash records filtered to 59,496
high-quality samples, our solution shows marked improvements in both prediction
accuracy and computational efficiency. Key innovations include a robust data
cleaning pipeline, adaptive feature generation, and a scalable real-time
prediction system capable of handling peak loads of 1,000 concurrent requests
while maintaining sub-100ms response times.
|
2502.05779
|
A 3D Multimodal Feature for Infrastructure Anomaly Detection
|
cs.CV
|
Ageing structures require periodic inspections to identify structural
defects. Previous work has used geometric distortions to locate cracks in
synthetic masonry bridge point clouds but has struggled to detect small cracks.
To address this limitation, this study proposes a novel 3D multimodal feature,
3DMulti-FPFHI, that combines a customized Fast Point Feature Histogram (FPFH)
with an intensity feature. This feature is integrated into the PatchCore
anomaly detection algorithm and evaluated through statistical and parametric
analyses. The method is further evaluated using point clouds of a real masonry
arch bridge and a full-scale experimental model of a concrete tunnel. Results
show that the 3D intensity feature enhances inspection quality by improving
crack detection; it also enables the identification of water ingress which
introduces intensity anomalies. The 3DMulti-FPFHI outperforms FPFH and a
state-of-the-art multimodal anomaly detection method. The potential of the
method to address diverse infrastructure anomaly detection scenarios is
highlighted by the minimal requirements for data compared to learning-based
methods. The code and related point cloud dataset are available at
https://github.com/Jingyixiong/3D-Multi-FPFHI.
|
2502.05780
|
GOLD: Graph Out-of-Distribution Detection via Implicit Adversarial
Latent Generation
|
cs.LG
|
Despite graph neural networks' (GNNs) great success in modelling
graph-structured data, out-of-distribution (OOD) test instances still pose a
great challenge for current GNNs. One of the most effective techniques to
detect OOD nodes is to expose the detector model with an additional OOD
node-set, yet the extra OOD instances are often difficult to obtain in
practice. Recent methods for image data address this problem using OOD data
synthesis, typically relying on pre-trained generative models like Stable
Diffusion. However, these approaches require vast amounts of additional data,
as well as one-for-all pre-trained generative models, which are not available
for graph data. Therefore, we propose the GOLD framework for graph OOD
detection, an implicit adversarial learning pipeline with synthetic OOD
exposure without pre-trained models. The implicit adversarial training process
employs a novel alternating optimisation framework by training: (1) a latent
generative model to regularly imitate the in-distribution (ID) embeddings from
an evolving GNN, and (2) a GNN encoder and an OOD detector to accurately
classify ID data while increasing the energy divergence between the ID
embeddings and the generative model's synthetic embeddings. This novel approach
implicitly transforms the synthetic embeddings into pseudo-OOD instances
relative to the ID data, effectively simulating exposure to OOD scenarios
without auxiliary data. Extensive OOD detection experiments are conducted on
five benchmark graph datasets, verifying the superior performance of GOLD
without using real OOD data compared with the state-of-the-art OOD exposure and
non-exposure baselines.
|
2502.05783
|
WatchGuardian: Enabling User-Defined Personalized Just-in-Time
Intervention on Smartwatch
|
cs.HC cs.AI cs.LG
|
While just-in-time interventions (JITIs) have effectively targeted common
health behaviors, individuals often have unique needs to intervene in personal
undesirable actions that can negatively affect physical, mental, and social
well-being. We present WatchGuardian, a smartwatch-based JITI system that
empowers users to define custom interventions for these personal actions with a
small number of samples. For the model to detect new actions based on limited
new data samples, we developed a few-shot learning pipeline that finetuned a
pre-trained inertial measurement unit (IMU) model on public hand-gesture
datasets. We then designed a data augmentation and synthesis process to train
additional classification layers for customization. Our offline evaluation with
26 participants showed that with three, five, and ten examples, our approach
achieved an average accuracy of 76.8%, 84.7%, and 87.7%, and an F1 score of
74.8%, 84.2%, and 87.2%. We then conducted a four-hour intervention study to
compare WatchGuardian against a rule-based intervention. Our results
demonstrated that our system led to a significant 64.0 +- 22.6% reduction in
undesirable actions, substantially outperforming the baseline by 29.0%. Our
findings underscore the effectiveness of a customizable, AI-driven JITI system
for individuals in need of behavioral intervention in personal undesirable
actions. We envision that our work can inspire broader applications of
user-defined personalized intervention with advanced AI solutions.
|
2502.05784
|
Propagation of Chaos for Mean-Field Langevin Dynamics and its
Application to Model Ensemble
|
stat.ML cs.LG
|
Mean-field Langevin dynamics (MFLD) is an optimization method derived by
taking the mean-field limit of noisy gradient descent for two-layer neural
networks in the mean-field regime. Recently, the propagation of chaos (PoC) for
MFLD has gained attention as it provides a quantitative characterization of the
optimization complexity in terms of the number of particles and iterations.
Remarkable progress by Chen et al. (2022) showed that the approximation error
due to finite particles remains uniform in time and diminishes as the number of
particles increases. In this paper, by refining the defective log-Sobolev
inequality -- a key result from that earlier work -- under the neural network
training setting, we establish an improved PoC result for MFLD, which removes
the exponential dependence on the regularization coefficient from the particle
approximation term of the optimization complexity. As an application, we
propose a PoC-based model ensemble strategy with theoretical guarantees.
|
2502.05788
|
EPBC-YOLOv8: An efficient and accurate improved YOLOv8 underwater
detector based on an attention mechanism
|
cs.CV cs.AI
|
In this study, we enhance underwater target detection by integrating channel
and spatial attention into YOLOv8's backbone, applying Pointwise Convolution in
FasterNeXt for the FasterPW model, and leveraging Weighted Concat in a
BiFPN-inspired WFPN structure for improved cross-scale connections and
robustness. Utilizing CARAFE for refined feature reassembly, our framework
addresses underwater image degradation, achieving mAP@0.5 scores of 76.7
percent and 79.0 percent on URPC2019 and URPC2020 datasets, respectively. These
scores are 2.3 percent and 0.7 percent higher than the original YOLOv8,
showcasing enhanced precision in detecting marine organisms.
|
2502.05790
|
I3S: Importance Sampling Subspace Selection for Low-Rank Optimization in
LLM Pretraining
|
cs.LG
|
Low-rank optimization has emerged as a promising approach to enabling
memory-efficient training of large language models (LLMs). Existing low-rank
optimization methods typically project gradients onto a low-rank subspace,
reducing the memory cost of storing optimizer states. A key challenge in these
methods is identifying suitable subspaces to ensure an effective optimization
trajectory. Most existing approaches select the dominant subspace to preserve
gradient information, as this intuitively provides the best approximation.
However, we find that in practice, the dominant subspace stops changing during
pretraining, thereby constraining weight updates to similar subspaces.
In this paper, we propose importance sampling subspace selection (I3S) for
low-rank optimization, which theoretically offers a comparable convergence rate
to the dominant subspace approach. Empirically, we demonstrate that I3S
significantly outperforms previous methods in LLM pretraining tasks.
|
2502.05792
|
AToM: Adaptive Theory-of-Mind-Based Human Motion Prediction in Long-Term
Human-Robot Interactions
|
cs.RO
|
Humans learn from observations and experiences to adjust their behaviours
towards better performance. Interacting with such dynamic humans is
challenging, as the robot needs to predict human behaviour accurately for safe and
efficient operations. Long-term interactions with dynamic humans have not been
extensively studied by prior works. We propose an adaptive human prediction
model based on the Theory-of-Mind (ToM), a fundamental social-cognitive ability
that enables humans to infer others' behaviours and intentions. We formulate
the human internal belief about others using a game-theoretic model, which
predicts the future motions of all agents in a navigation scenario. To estimate
an evolving belief, we use an Unscented Kalman Filter to update the behavioural
parameters in the human internal model. Our formulation provides unique
interpretability to dynamic human behaviours by inferring how the human
predicts the robot. We demonstrate through long-term experiments in both
simulations and real-world settings that our prediction effectively promotes
safety and efficiency in downstream robot planning. Code will be available at
https://github.com/centiLinda/AToM-human-prediction.git.
|
2502.05793
|
On Reference (In-)Determinacy in Natural Language Inference
|
cs.CL
|
We revisit the reference determinacy (RD) assumption in the task of natural
language inference (NLI), i.e., the premise and hypothesis are assumed to refer
to the same context when human raters annotate a label. While RD is a practical
assumption for constructing a new NLI dataset, we observe that current NLI
models, which are typically trained solely on hypothesis-premise pairs created
with the RD assumption, fail in downstream applications such as fact
verification, where the input premise and hypothesis may refer to different
contexts. To highlight the impact of this phenomenon in real-world use cases,
we introduce RefNLI, a diagnostic benchmark for identifying reference ambiguity
in NLI examples. In RefNLI, the premise is retrieved from a knowledge source
(i.e., Wikipedia) and does not necessarily refer to the same context as the
hypothesis. With RefNLI, we demonstrate that finetuned NLI models and few-shot
prompted LLMs both fail to recognize context mismatch, leading to over 80%
false contradiction and over 50% entailment predictions. We discover that the
existence of reference ambiguity in NLI examples can in part explain the
inherent human disagreements in NLI and provide insight into how the RD
assumption impacts the NLI dataset creation process.
|
2502.05794
|
Structural Perturbation in Large Language Model Representations through
Recursive Symbolic Regeneration
|
cs.CL
|
Symbolic perturbations offer a novel approach for influencing neural
representations without requiring direct modification of model parameters. The
recursive regeneration of symbolic structures introduces structured variations
in latent embeddings, leading to controlled shifts in attention dynamics and
lexical diversity across sequential generations. A comparative analysis with
conventional fine-tuning techniques reveals that structural modifications at
the symbolic level induce distinct variations in contextual sensitivity while
maintaining overall model fluency and coherence. Shifts in attention weight
distributions highlight the role of symbolic modifications in adjusting token
dependencies, influencing response variability, and refining long-form text
generation. Experimental findings suggest that symbolic perturbations can
enhance adaptability in domain-specific applications, allowing modifications in
model behavior without retraining. Evaluations of semantic drift indicate that
recursive regeneration alters long-range token dependencies, affecting topic
coherence across extended text sequences. Results from lexical variability
assessments further support the conclusion that symbolic-level modifications
introduce interpretable variations in generated responses, potentially enabling
more controlled stylistic adjustments in automated text generation.
|
2502.05795
|
The Curse of Depth in Large Language Models
|
cs.LG cs.AI
|
In this paper, we introduce the Curse of Depth, a concept that highlights,
explains, and addresses the recent observation in modern Large Language
Models (LLMs) where nearly half of the layers are less effective than expected.
We first confirm the wide existence of this phenomenon across the most popular
families of LLMs such as Llama, Mistral, DeepSeek, and Qwen. Our analysis,
theoretically and empirically, identifies that the underlying reason for the
ineffectiveness of deep layers in LLMs is the widespread usage of Pre-Layer
Normalization (Pre-LN). While Pre-LN stabilizes the training of Transformer
LLMs, its output variance grows exponentially with the model depth, which
undesirably causes the derivative of the deep Transformer blocks to be close to
an identity matrix, so that these blocks barely contribute to the training. To resolve
this training pitfall, we propose LayerNorm Scaling, which scales the variance
of the output of the layer normalization inversely by the square root of its depth.
This simple modification mitigates the output variance explosion of deeper
Transformer layers, improving their contribution. Our experimental results,
spanning model sizes from 130M to 1B, demonstrate that LayerNorm Scaling
significantly enhances LLM pre-training performance compared to Pre-LN.
Moreover, this improvement seamlessly carries over to supervised fine-tuning.
All these gains can be attributed to the fact that LayerNorm Scaling enables
deeper layers to contribute more effectively during training.
|
2502.05800
|
MicroViT: A Vision Transformer with Low Complexity Self Attention for
Edge Device
|
cs.CV
|
The Vision Transformer (ViT) has demonstrated state-of-the-art performance in
various computer vision tasks, but its high computational demands make it
impractical for edge devices with limited resources. This paper presents
MicroViT, a lightweight Vision Transformer architecture optimized for edge
devices by significantly reducing computational complexity while maintaining
high accuracy. The core of MicroViT is the Efficient Single Head Attention
(ESHA) mechanism, which utilizes group convolution to reduce feature redundancy
and processes only a fraction of the channels, thus lowering the burden of the
self-attention mechanism. MicroViT is designed using a multi-stage MetaFormer
architecture, stacking multiple MicroViT encoders to enhance efficiency and
performance. Comprehensive experiments on the ImageNet-1K and COCO datasets
demonstrate that MicroViT achieves competitive accuracy while delivering 3.6x
faster inference speed and reducing energy consumption with 40% higher
efficiency than the MobileViT series, making it suitable for deployment
in resource-constrained environments such as mobile and edge devices.
|
2502.05802
|
Kalman Filter-Based Distributed Gaussian Process for Unknown Scalar
Field Estimation in Wireless Sensor Networks
|
cs.MA cs.RO
|
In this letter, we propose an online scalar field estimation algorithm of
unknown environments using a distributed Gaussian process (DGP) framework in
wireless sensor networks (WSNs). While the kernel-based Gaussian process (GP)
has been widely employed for estimating unknown scalar fields, its centralized
nature is not well-suited for handling a large amount of data from WSNs. To
overcome the limitations of the kernel-based GP, recent advancements in GP
research focus on approximating kernel functions as products of E-dimensional
nonlinear basis functions, which can handle large WSNs more efficiently in a
distributed manner. However, this approach requires a large number of basis
functions for accurate approximation, leading to increased computational and
communication complexities. To address these complexity issues, the paper
proposes a distributed GP framework by incorporating a Kalman filter scheme
(termed as K-DGP), which scales linearly with the number of nonlinear basis
functions. Moreover, we propose a new consensus protocol designed to handle the
unique data transmission requirement residing in the proposed K-DGP framework.
This protocol preserves the inherent elements in the form of a certain column
in the nonlinear function matrix of the communicated message; it enables
wireless sensors to cooperatively estimate the environment and reach the global
consensus through distributed learning with faster convergence than the
widely-used average consensus protocol. Simulation results demonstrate rapid
consensus convergence and outstanding estimation accuracy achieved by the
proposed K-DGP algorithm. The scalability and efficiency of the proposed
approach are further demonstrated by online dynamic environment estimation
using WSNs.
|
2502.05803
|
FlashCheck: Exploration of Efficient Evidence Retrieval for Fast
Fact-Checking
|
cs.IR
|
The advances in digital tools have led to the rampant spread of
misinformation. While fact-checking aims to combat this, manual fact-checking
is cumbersome and not scalable. It is essential for automated fact-checking to
be efficient for aiding in combating misinformation in real-time and at the
source. Fact-checking pipelines primarily comprise a knowledge retrieval
component which extracts relevant knowledge to fact-check a claim from large
knowledge sources like Wikipedia and a verification component. The existing
works primarily focus on the fact-verification part rather than evidence
retrieval from large data collections, which often face scalability issues for
practical applications such as live fact-checking. In this study, we address
this gap by exploring various methods for indexing a succinct set of factual
statements from large collections like Wikipedia to enhance the retrieval phase
of the fact-checking pipeline. We also explore the impact of vector
quantization to further improve the efficiency of pipelines that employ dense
retrieval approaches for first-stage retrieval. We study the efficiency and
effectiveness of the approaches on fact-checking datasets such as HoVer and
WiCE, leveraging Wikipedia as the knowledge source. We also evaluate the
real-world utility of the efficient retrieval approaches by fact-checking the
2024 presidential debate, and we open-source the collection of claims with
corresponding labels identified in the debate. Through a combination of indexed
facts together with dense retrieval and index compression, we achieve up to a
10.0x speedup on CPUs and more than a 20.0x speedup on GPUs compared to the
classical fact-checking pipelines over large collections.
|
2502.05806
|
Divide-and-Conquer: Tree-structured Strategy with Answer Distribution
Estimator for Goal-Oriented Visual Dialogue
|
cs.CV
|
Goal-oriented visual dialogue involves multi-round interaction between
artificial agents, which has attracted remarkable attention due to its wide
applications. Given a visual scene, this task occurs when a Questioner asks an
action-oriented question and an Answerer responds with the intent of letting
the Questioner know the correct action to take. The quality of questions
affects the accuracy and efficiency of the target search progress. However,
existing methods lack a clear strategy to guide the generation of questions,
resulting in randomness in the search process and non-convergent results. We
propose a Tree-Structured Strategy with Answer Distribution Estimator (TSADE)
which guides the question generation by excluding half of the current candidate
objects in each round. The above process is implemented by maximizing a binary
reward inspired by the ``divide-and-conquer'' paradigm. We further design a
candidate-minimization reward which encourages the model to narrow down the
scope of candidate objects toward the end of the dialogue. We experimentally
demonstrate that our method can enable the agents to achieve high task-oriented
accuracy with fewer repeating questions and rounds compared to traditional
ergodic question generation approaches. Qualitative results further show that
TSADE facilitates agents to generate higher-quality questions.
|
2502.05807
|
Devil is in the Details: Density Guidance for Detail-Aware Generation
with Flow Models
|
cs.LG
|
Diffusion models have emerged as a powerful class of generative models,
capable of producing high-quality images by mapping noise to a data
distribution. However, recent findings suggest that image likelihood does not
align with perceptual quality: high-likelihood samples tend to be smooth, while
lower-likelihood ones are more detailed. Controlling sample density is thus
crucial for balancing realism and detail. In this paper, we analyze an existing
technique, Prior Guidance, which scales the latent code to influence image
detail. We introduce score alignment, a condition that explains why this method
works and show that it can be tractably checked for any continuous normalizing
flow model. We then propose Density Guidance, a principled modification of the
generative ODE that enables exact log-density control during sampling. Finally,
we extend Density Guidance to stochastic sampling, ensuring precise log-density
control while allowing controlled variation in structure or fine details. Our
experiments demonstrate that these techniques provide fine-grained control over
image detail without compromising sample quality.
|
2502.05812
|
Multi-Agent Reinforcement Learning in Wireless Distributed Networks for
6G
|
cs.IT cs.SY eess.SY math.IT
|
The introduction of intelligent interconnectivity between the physical and
human worlds has attracted great attention for future sixth-generation (6G)
networks, emphasizing massive capacity, ultra-low latency, and unparalleled
reliability. Wireless distributed networks and multi-agent reinforcement
learning (MARL), both of which have evolved from centralized paradigms, are two
promising solutions for meeting these demands. Given their distinct capabilities,
such as decentralization and collaborative mechanisms, integrating these two
paradigms holds great promise for unleashing the full power of 6G, attracting
significant research and development attention. This paper provides a
comprehensive study on MARL-assisted wireless distributed networks for 6G. In
particular, we introduce the basic mathematical background and evolution of
wireless distributed networks and MARL, as well as demonstrate their
interrelationships. Subsequently, we analyze different structures of wireless
distributed networks from homogeneous and heterogeneous perspectives.
Furthermore, we introduce the basic concepts of MARL and discuss two typical
categories, including model-based and model-free. We then present critical
challenges faced by MARL-assisted wireless distributed networks, providing
important guidance and insights for actual implementation. We also explore an
interplay between MARL-assisted wireless distributed networks and emerging
techniques, such as information bottleneck and mirror learning, delivering
in-depth analyses and application scenarios. Finally, we outline several
compelling research directions for future MARL-assisted wireless distributed
networks.
|
2502.05815
|
Image-Based Alzheimer's Disease Detection Using Pretrained Convolutional
Neural Network Models
|
eess.IV cs.CV cs.LG
|
Alzheimer's disease is an untreatable, progressive brain disorder that slowly
robs people of their memory, thinking abilities, and ultimately their capacity
to complete even the most basic tasks. Among older adults, it is the most
frequent cause of dementia. Although there is presently no treatment for
Alzheimer's disease, scientific trials are ongoing to discover drugs to combat
the condition. Treatments to slow the signs of dementia are also available.
Many researchers throughout the world became interested in developing
computer-aided diagnosis systems to aid in the early identification of this
deadly disease and assure an accurate diagnosis. In particular, image-based
approaches have been coupled with machine learning techniques to address the
challenges of Alzheimer's disease detection. This study proposes a
computer-aided diagnosis system to detect Alzheimer's disease from biomarkers captured
using neuroimaging techniques. The proposed approach relies on deep learning
techniques to extract the relevant visual features from the image collection to
accurately predict the Alzheimer's class value. In the experiments, standard
datasets and pre-trained deep learning models were investigated. Moreover,
standard performance measures were used to assess the models' performances. The
obtained results showed that VGG16-based models outperform state-of-the-art
performance.
|
2502.05817
|
DreamFLEX: Learning Fault-Aware Quadrupedal Locomotion Controller for
Anomaly Situation in Rough Terrains
|
cs.RO cs.SY eess.SY
|
Recent advances in quadrupedal robots have demonstrated impressive agility
and the ability to traverse diverse terrains. However, hardware issues, such as
motor overheating or joint locking, may occur during long-distance walking or
traversal of rough terrains, leading to locomotion failures. Although
several studies have proposed fault-tolerant control methods for quadrupedal
robots, there are still challenges in traversing unstructured terrains. In this
paper, we propose DreamFLEX, a robust fault-tolerant locomotion controller that
enables a quadrupedal robot to traverse complex environments even under joint
failure conditions. DreamFLEX integrates an explicit failure estimation and
modulation network that jointly estimates the robot's joint fault vector and
utilizes this information to adapt the locomotion pattern to faulty conditions
in real-time, enabling quadrupedal robots to maintain stability and performance
in rough terrains. Experimental results demonstrate that DreamFLEX outperforms
existing methods in both simulation and real-world scenarios, effectively
managing hardware failures while maintaining robust locomotion performance.
|
2502.05819
|
Stacked Intelligent Metasurface Enabled Near-Field Multiuser
Beamfocusing in the Wave Domain
|
cs.IT eess.SP math.IT
|
Intelligent surfaces represent a breakthrough technology capable of
customizing the wireless channel cost-effectively. However, the existing works
generally focus on planar wavefront, neglecting near-field spherical wavefront
characteristics caused by large array aperture and high operation frequencies
in the terahertz (THz). Additionally, the single-layer reconfigurable
intelligent surface (RIS) lacks the signal processing ability to mitigate the
computational complexity at the base station (BS). To address this issue, we
introduce a novel stacked intelligent metasurfaces (SIM) comprised of an array
of programmable metasurface layers. The SIM aims to substitute conventional
digital baseband architecture to execute computing tasks with ultra-low
processing delay, albeit with a reduced number of radio-frequency (RF) chains
and low-resolution digital-to-analog converters. In this paper, we present a
SIM-aided multiuser multiple-input single-output (MU-MISO) near-field system,
where the SIM is integrated into the BS to perform beamfocusing in the wave
domain and customize an end-to-end channel with minimized inter-user
interference. Finally, the numerical results demonstrate that near-field
communication achieves superior spatial gain over the far-field, and the SIM
effectively suppresses inter-user interference as the wireless signals
propagate through it.
|
2502.05822
|
HCMRM: A High-Consistency Multimodal Relevance Model for Search Ads
|
cs.IR
|
Search advertising is essential for merchants to reach the target users on
short video platforms. Short video ads aligned with user search intents are
displayed through relevance matching and bid ranking mechanisms. This paper
focuses on improving query-to-video relevance matching to enhance the
effectiveness of ranking in ad systems. Recent vision-language pre-training
models have demonstrated promise in various multimodal tasks. However, their
contribution to downstream query-video relevance tasks is limited, as the
alignment between the pair of visual signals and text differs from the modeling
of the triplet of the query, visual signals, and video text. In addition, our
previous relevance model provides limited ranking capabilities, largely due to
the discrepancy between the binary cross-entropy fine-tuning objective and the
ranking objective. To address these limitations, we design a high-consistency
multimodal relevance model (HCMRM). It utilizes a simple yet effective method
to enhance the consistency between pre-training and relevance tasks.
Specifically, during the pre-training phase, along with aligning visual signals
and video text, several keywords are extracted from the video text as
pseudo-queries to perform the triplet relevance modeling. For the fine-tuning
phase, we introduce a hierarchical softmax loss, which enables the model to
learn the order within labels while maximizing the distinction between positive
and negative samples. This promotes the fusion ranking of relevance and bidding
in the subsequent ranking stage. The proposed method has been deployed in the
Kuaishou search advertising system for over a year, contributing to a 6.1%
reduction in the proportion of irrelevant ads and a 1.4% increase in ad
revenue.
|
2502.05824
|
Aerial Reliable Collaborative Communications for Terrestrial Mobile
Users via Evolutionary Multi-Objective Deep Reinforcement Learning
|
cs.NE
|
Unmanned aerial vehicles (UAVs) have emerged as the potential aerial base
stations (BSs) to improve terrestrial communications. However, the limited
onboard energy and antenna power of a UAV restrict its communication range and
transmission capability. To address these limitations, this work employs
collaborative beamforming through a UAV-enabled virtual antenna array to
improve transmission performance from the UAV to terrestrial mobile users,
under interference from non-associated BSs and dynamic channel conditions.
Specifically, we introduce a memory-based random walk model to more accurately
depict the mobility patterns of terrestrial mobile users. Following this, we
formulate a multi-objective optimization problem (MOP) focused on maximizing
the transmission rate while minimizing the flight energy consumption of the UAV
swarm. Given the NP-hard nature of the formulated MOP and the highly dynamic
environment, we transform this problem into a multi-objective Markov decision
process and propose an improved evolutionary multi-objective reinforcement
learning algorithm. Specifically, this algorithm introduces an evolutionary
learning approach to obtain the approximate Pareto set for the formulated MOP.
Moreover, the algorithm incorporates a long short-term memory network and
hyper-sphere-based task selection method to discern the movement patterns of
terrestrial mobile users and improve the diversity of the obtained Pareto set.
Simulation results demonstrate that the proposed method effectively generates a
diverse range of non-dominated policies and outperforms existing methods.
Additional simulations demonstrate the scalability and robustness of the
proposed CB-based method under different system parameters and various
unexpected circumstances.
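The core of any multi-objective method like the one above is the Pareto dominance relation used to filter candidate policies. The following is a minimal, generic sketch (not the paper's algorithm): objectives are expressed so that larger is better, so the UAV problem's energy consumption would be negated before comparison.

```python
def dominates(a, b):
    """a dominates b (maximisation) iff a is at least as good in every
    objective and strictly better in at least one."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_front(points):
    """Return the non-dominated subset of objective vectors."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]
```

For example, with objectives (transmission rate, negated energy), a policy achieving (3.0, -1.0) dominates one achieving (2.0, -2.0), while (1.0, -0.5) survives because neither of the others beats it on both axes.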
|
2502.05825
|
Delta -- Contrastive Decoding Mitigates Text Hallucinations in Large
Language Models
|
cs.CL cs.AI
|
Large language models (LLMs) demonstrate strong capabilities in natural
language processing but remain prone to hallucinations, generating factually
incorrect or fabricated content. This issue undermines their reliability,
particularly in high-stakes domains such as healthcare and legal advisory. To
address this challenge, we propose Delta, an inference-time method that reduces
hallucinations without requiring model retraining or additional data. Delta
works by randomly masking parts of the input prompt and contrasting the output
distributions for the original and masked inputs, effectively suppressing
hallucinations through inference-only computations. We evaluate Delta on
context-rich question-answering benchmarks, achieving absolute improvements of
approximately 3 and 6 percentage points on SQuAD v1.1 and v2, respectively, and
7 and 2 percentage points on TriviaQA and Natural Questions, respectively,
under sampling-based decoding. Delta also improves the no-answer exact match score on SQuAD v2 by
over ten percentage points, demonstrating its effectiveness in mitigating
hallucinations arising from contextual ambiguity. These results highlight Delta
as a computationally efficient and scalable approach for improving the
reliability of LLMs in real-world applications.
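The contrast step described above can be illustrated with a minimal sketch. This is not the paper's implementation: the scoring rule, the `alpha` weight, and the toy logits are assumptions, and a real system would obtain the two logit vectors from an LLM run on the original and randomly masked prompts.

```python
import numpy as np

def softmax(logits):
    z = logits - logits.max()
    e = np.exp(z)
    return e / e.sum()

def delta_contrastive_decode(logits_original, logits_masked, alpha=1.0):
    """Contrast next-token distributions from the full prompt and a masked
    copy; tokens that remain likely without the context (hallucination-prone
    guesses) are penalised, while context-supported tokens are boosted."""
    p = softmax(logits_original)
    q = softmax(logits_masked)
    scores = np.log(p + 1e-12) - alpha * np.log(q + 1e-12)
    return int(np.argmax(scores))
```

With `alpha=1.0` the rule reduces to picking the token whose logit gains the most from seeing the unmasked context, which is the intuition behind suppressing context-free fabrications.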
|
2502.05826
|
MindCraft: Revolutionizing Education through AI-Powered Personalized
Learning and Mentorship for Rural India
|
cs.CY cs.AI cs.ET
|
MindCraft is a modern platform designed to revolutionize education in rural
India by leveraging Artificial Intelligence (AI) to create personalized
learning experiences, provide mentorship, and foster resource-sharing. In a
country where access to quality education is deeply influenced by geography and
socio-economic status, rural students often face significant barriers in their
educational journeys. MindCraft aims to bridge this gap by utilizing AI to
create tailored learning paths, connect students with mentors, and enable a
collaborative network of educational resources that transcends both physical
and digital divides. This paper explores the challenges faced by rural
students, the transformative potential of AI, and how MindCraft offers a
scalable, sustainable solution for an equitable education system. By focusing on
inclusivity, personalized learning, and mentorship, MindCraft seeks to empower
rural students, equipping them with the skills, knowledge, and opportunities
needed to thrive in an increasingly digital world. Ultimately, MindCraft
envisions a future in which technology not only bridges educational gaps but
also becomes the driving force for a more inclusive and empowered society.
|
2502.05827
|
HyGEN: Regularizing Negative Hyperedge Generation for Accurate Hyperedge
Prediction
|
cs.SI cs.AI
|
Hyperedge prediction is a fundamental task to predict future high-order
relations based on the observed network structure. Existing hyperedge
prediction methods, however, suffer from the data sparsity problem. To
alleviate this problem, negative sampling methods can be used, which leverage
non-existing hyperedges as contrastive information for model training. However,
the following important challenges have been rarely studied: (C1) lack of
guidance for generating negatives and (C2) possibility of producing false
negatives. To address them, we propose a novel hyperedge prediction method,
HyGEN, which uses (1) a negative hyperedge generator that employs positive
hyperedges as guidance to generate more realistic ones and (2) a
regularization term that prevents the generated hyperedges from being false
negatives. Extensive experiments on six real-world hypergraphs reveal that
HyGEN consistently outperforms four state-of-the-art hyperedge prediction
methods.
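The two challenges (C1) and (C2) can be made concrete with a simplified rejection-sampling analogue. HyGEN itself uses a learned generator and a regularization term; here the guidance is a hypothetical node-replacement heuristic, and false negatives are filtered by checking candidates against the observed hyperedge set.

```python
import random

def generate_negative(positive, all_nodes, existing, replace_frac=0.5, rng=None):
    """Generate one negative hyperedge guided by a positive one (C1),
    rejecting candidates that coincide with observed hyperedges, i.e.
    false negatives (C2)."""
    rng = rng or random.Random(0)
    pos = list(positive)
    k = max(1, int(len(pos) * replace_frac))
    while True:
        candidate = set(pos)
        for node in rng.sample(pos, k):
            candidate.discard(node)
            candidate.add(rng.choice(all_nodes))
        if frozenset(candidate) not in existing:
            return candidate
```

Because each negative keeps most nodes of its positive template, it is "hard" (realistic) rather than a random vertex set, which is the motivation behind guided negative generation.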
|
2502.05832
|
Compressing Model with Few Class-Imbalance Samples: An
Out-of-Distribution Expedition
|
cs.LG cs.AI cs.CV
|
In recent years, as a compromise between privacy and performance, few-sample
model compression has been widely adopted to deal with limited data resulting
from privacy and security concerns. However, when the number of available
samples is extremely limited, class imbalance becomes a common and tricky
problem. Achieving an equal number of samples across all classes is often
costly and impractical in real-world applications, and previous studies on
few-sample model compression have mostly ignored this significant issue. Our
experiments comprehensively demonstrate that class imbalance negatively affects
the overall performance of few-sample model compression methods. To address
this problem, we propose a novel and adaptive framework named OOD-Enhanced
Few-Sample Model Compression (OE-FSMC). This framework integrates easily
accessible out-of-distribution (OOD) data into both the compression and
fine-tuning processes, effectively rebalancing the training distribution. We
also incorporate a joint distillation loss and a regularization term to reduce
the risk of the model overfitting to the OOD data. Extensive experiments on
multiple benchmark datasets show that our framework can be seamlessly
incorporated into existing few-sample model compression methods, effectively
mitigating the accuracy degradation caused by class imbalance.
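The joint objective described above can be sketched as a weighted sum of two distillation terms. This is an illustrative simplification, not the OE-FSMC loss: the temperature `T`, the OOD weight `lam`, and the plain KL form are assumptions standing in for the paper's joint distillation loss and regularization term.

```python
import numpy as np

def softmax(z, T=1.0):
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def joint_distillation_loss(student_id, teacher_id, student_ood, teacher_ood,
                            lam=0.3, T=2.0):
    """KL(teacher || student) on the scarce in-distribution batch plus a
    down-weighted distillation term on easily collected OOD samples,
    rebalancing training while limiting overfitting to the OOD data."""
    def kl(p_logits, q_logits):
        p, q = softmax(p_logits, T), softmax(q_logits, T)
        return float(np.mean(np.sum(
            p * (np.log(p + 1e-12) - np.log(q + 1e-12)), axis=-1)))
    return kl(teacher_id, student_id) + lam * kl(teacher_ood, student_ood)
```

Setting `lam` below 1 reflects the framework's concern that the OOD term should rebalance the class distribution without dominating the in-distribution signal.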
|
2502.05833
|
Machine learning-based hybrid dynamic modeling and economic predictive
control of carbon capture process for ship decarbonization
|
eess.SY cs.SY
|
Implementing carbon capture technology on-board ships holds promise as a
solution to facilitate the reduction of carbon intensity in international
shipping, as mandated by the International Maritime Organization. In this work,
we address the energy-efficient operation of shipboard carbon capture processes
by proposing a hybrid modeling-based economic predictive control scheme.
Specifically, we consider a comprehensive shipboard carbon capture process that
encompasses the ship engine system and the shipboard post-combustion carbon
capture plant. To accurately and robustly characterize the dynamic behaviors of
this shipboard plant, we develop a hybrid dynamic process model that integrates
available imperfect physical knowledge with neural networks trained using
process operation data. An economic model predictive control approach is
proposed based on the hybrid model to ensure carbon capture efficiency while
minimizing energy consumption required for the carbon capture process
operation. The cross-entropy method is employed to efficiently
solve the complex non-convex optimization problem associated with the proposed
hybrid model-based economic model predictive control method. Extensive
simulations, analyses, and comparisons are conducted to verify the
effectiveness and illustrate the superiority of the proposed framework.
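The cross-entropy method mentioned above is a general stochastic optimizer; a minimal generic sketch follows. The Gaussian parameterisation, sample counts, and initial spread are illustrative defaults, not the paper's settings, and the real controller would optimize control inputs of the hybrid model rather than a toy quadratic.

```python
import numpy as np

def cross_entropy_method(objective, dim, n_samples=200, n_elite=20,
                         n_iters=50, seed=0):
    """Minimise a (possibly non-convex) objective by repeatedly sampling
    candidates from a Gaussian and refitting it to the elite fraction."""
    rng = np.random.default_rng(seed)
    mean, std = np.zeros(dim), np.ones(dim) * 2.0
    for _ in range(n_iters):
        samples = rng.normal(mean, std, size=(n_samples, dim))
        costs = np.array([objective(x) for x in samples])
        elite = samples[np.argsort(costs)[:n_elite]]
        mean, std = elite.mean(axis=0), elite.std(axis=0) + 1e-6
    return mean
```

Because each iteration needs only objective evaluations, the method handles the non-convex, simulation-based costs that arise in economic model predictive control without requiring gradients.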
|
2502.05835
|
Contrastive Representation Distillation via Multi-Scale Feature
Decoupling
|
cs.CV cs.AI
|
Knowledge distillation is a technique aimed at enhancing the performance of a
smaller student network without increasing its parameter size by transferring
knowledge from a larger, pre-trained teacher network. Previous approaches have
predominantly focused on distilling global feature information while
overlooking the importance of disentangling the diverse types of information
embedded within different regions of the feature. In this work, we introduce
multi-scale decoupling in the feature transfer process for the first time,
where the decoupled local features are individually processed and integrated
with contrastive learning. Moreover, compared to previous contrastive
learning-based distillation methods, our approach not only reduces
computational costs but also enhances efficiency, enabling performance
improvements for the student network using only single-batch samples. Extensive
evaluations on CIFAR-100 and ImageNet demonstrate our method's superiority,
with some student networks distilled using our method even surpassing the
performance of their pre-trained teacher networks. These results underscore the
effectiveness of our approach in enabling student networks to thoroughly absorb
knowledge from teacher networks.
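The decoupling idea can be sketched as splitting a feature map into non-overlapping regions at several scales and pooling each region into its own descriptor, which would then enter a contrastive loss. The scale set and average pooling are assumptions for illustration, not the paper's exact design.

```python
import numpy as np

def decouple_multiscale(feat, scales=(1, 2)):
    """Split a CxHxW feature map into non-overlapping regions at each scale
    and average-pool every region into a separate C-dim descriptor, so local
    information is matched region-by-region instead of only globally."""
    c, h, w = feat.shape
    descriptors = []
    for s in scales:
        hs, ws = h // s, w // s
        for i in range(s):
            for j in range(s):
                region = feat[:, i * hs:(i + 1) * hs, j * ws:(j + 1) * ws]
                descriptors.append(region.mean(axis=(1, 2)))
    return np.stack(descriptors)  # (num_regions, C)
```

At scales (1, 2) a 4x4 map yields one global descriptor plus four quadrant descriptors, each of which could be contrasted between student and teacher.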
|