| id (string, 9-16 chars) | title (string, 4-278 chars) | categories (string, 5-104 chars) | abstract (string, 6-4.09k chars) |
|---|---|---|---|
2502.01107
|
GTG: Generalizable Trajectory Generation Model for Urban Mobility
|
cs.LG
|
Trajectory data mining is crucial for smart city management. However,
collecting large-scale trajectory datasets is challenging due to factors such
as commercial conflicts and privacy regulations. Therefore, we urgently need
trajectory generation techniques to address this issue. Existing trajectory
generation methods rely on the global road network structure of cities. When
the road network structure changes, these methods are often not transferable to
other cities. In fact, there exist invariant mobility patterns between
different cities: 1) People prefer paths with the minimal travel cost; 2) The
travel cost of roads has an invariant relationship with the topological
features of the road network. Based on the above insight, this paper proposes a
Generalizable Trajectory Generation model (GTG). The model consists of three
parts: 1) Extracting city-invariant road representation based on Space Syntax
method; 2) Cross-city travel cost prediction through disentangled adversarial
training; 3) Travel preference learning by shortest path search and preference
update. By learning invariant movement patterns, the model is capable of
generating trajectories in new cities. Experiments on three datasets
demonstrate that our model significantly outperforms existing models in terms
of generalization ability.
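The three-part pipeline ends in shortest-path search over learned travel costs. Below is a minimal sketch of that generation step; the toy road graph and hand-set edge costs are illustrative stand-ins for the model's city-invariant cost predictions, not the paper's implementation.

```python
import heapq

# Toy road graph: node -> {neighbor: travel_cost}. The costs stand in for
# the learned, city-invariant travel costs predicted by the model.
graph = {
    "A": {"B": 1.0, "C": 4.0},
    "B": {"C": 1.0, "D": 5.0},
    "C": {"D": 1.0},
    "D": {},
}

def shortest_path(graph, src, dst):
    """Dijkstra search: travellers prefer the minimal-cost path."""
    dist = {src: 0.0}
    prev = {}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue
        for v, c in graph[u].items():
            nd = d + c
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    # Reconstruct the generated trajectory.
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return path[::-1], dist[dst]

path, cost = shortest_path(graph, "A", "D")
```

In the full model this search would alternate with a preference-update step that adjusts the edge costs.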
|
2502.01108
|
Pulse-PPG: An Open-Source Field-Trained PPG Foundation Model for
Wearable Applications Across Lab and Field Settings
|
cs.LG cs.AI eess.SP
|
Photoplethysmography (PPG)-based foundation models are gaining traction due
to the widespread use of PPG in biosignal monitoring and their potential to
generalize across diverse health applications. In this paper, we introduce
Pulse-PPG, the first open-source PPG foundation model trained exclusively on
raw PPG data collected over a 100-day field study with 120 participants.
Existing PPG foundation models are either open-source but trained on clinical
data or closed-source, limiting their applicability in real-world settings. We
evaluate Pulse-PPG across multiple datasets and downstream tasks, comparing its
performance against a state-of-the-art foundation model trained on clinical
data. Our results demonstrate that Pulse-PPG, trained on uncurated field data,
exhibits superior generalization across clinical and mobile health applications
in both lab and field settings. This suggests that exposure to real-world
variability enables the model to learn fine-grained representations, making it
more adaptable across tasks. Furthermore, pre-training on field data
surprisingly outperforms pre-training on clinical data in many tasks,
reinforcing the importance of training on real-world, diverse datasets. To
encourage further advancements in robust foundation models leveraging field
data, we plan to release Pulse-PPG, providing researchers with a powerful
resource for developing more generalizable PPG-based models.
|
2502.01111
|
A generative foundation model for an all-in-one seismic processing
framework
|
physics.geo-ph cs.AI
|
Seismic data often face challenges in their utilization due to noise
contamination, incomplete acquisition, and limited low-frequency information,
which hinder accurate subsurface imaging and interpretation. Traditional
processing methods rely heavily on task-specific designs to address these
challenges and fail to account for the variability of data. To address these
limitations, we present a generative seismic foundation model (GSFM), a unified
framework based on generative diffusion models (GDMs), designed to tackle
multi-task seismic processing challenges, including denoising, backscattered
noise attenuation, interpolation, and low-frequency extrapolation. GSFM
leverages a pre-training stage on synthetic data to capture the features of
clean, complete, and broadband seismic data distributions and applies an
iterative fine-tuning strategy to adapt the model to field data. By adopting a
target-oriented diffusion process prediction, GSFM improves computational
efficiency without compromising accuracy. Synthetic data tests demonstrate GSFM
surpasses benchmarks with equivalent architectures in all tasks and achieves
performance comparable to traditional pre-training strategies, even after their
fine-tuning. Also, field data tests suggest that our iterative fine-tuning
approach addresses the generalization limitations of conventional pre-training
and fine-tuning paradigms, delivering significantly enhanced performance across
diverse tasks. Furthermore, GSFM's inherent probabilistic nature enables
effective uncertainty quantification, offering valuable insights into the
reliability of processing results.
|
2502.01113
|
GFM-RAG: Graph Foundation Model for Retrieval Augmented Generation
|
cs.IR cs.AI cs.CL
|
Retrieval-augmented generation (RAG) has proven effective in integrating
knowledge into large language models (LLMs). However, conventional RAGs
struggle to capture complex relationships between pieces of knowledge, limiting
their performance in intricate reasoning that requires integrating knowledge
from multiple sources. Recently, graph-enhanced retrieval-augmented generation
(GraphRAG) has emerged, building graph structures to explicitly model these
relationships and enabling more effective and efficient retrievers.
Nevertheless, its performance
is still hindered by the noise and incompleteness within the graph structure.
To address this, we introduce GFM-RAG, a novel graph foundation model (GFM) for
retrieval augmented generation. GFM-RAG is powered by an innovative graph
neural network that reasons over graph structure to capture complex
query-knowledge relationships. The GFM with 8M parameters undergoes a two-stage
training process on large-scale datasets, comprising 60 knowledge graphs with
over 14M triples and 700k documents. This results in impressive performance and
generalizability for GFM-RAG, making it the first graph foundation model
applicable to unseen datasets for retrieval without any fine-tuning required.
Extensive experiments on three multi-hop QA datasets and seven domain-specific
RAG datasets demonstrate that GFM-RAG achieves state-of-the-art performance
while maintaining efficiency and alignment with neural scaling laws,
highlighting its potential for further improvement.
|
2502.01116
|
Picky LLMs and Unreliable RMs: An Empirical Study on Safety Alignment
after Instruction Tuning
|
cs.AI cs.CL cs.LG
|
Large language models (LLMs) have emerged as powerful tools for addressing a
wide range of general inquiries and tasks. Despite this, fine-tuning aligned
LLMs on smaller, domain-specific datasets, critical to adapting them to
specialized tasks, can inadvertently degrade their safety alignment, even when
the datasets are benign. This phenomenon makes models more susceptible to
providing inappropriate responses. In this study, we systematically examine the
factors contributing to safety alignment degradation in benign fine-tuning
scenarios. Our analysis identifies three critical factors affecting aligned
LLMs: answer structure, identity calibration, and role-play. Additionally, we
evaluate the reliability of state-of-the-art reward models (RMs), which are
often used to guide alignment processes. Our findings reveal that these RMs
frequently fail to accurately reflect human preferences regarding safety,
underscoring their limitations in practical applications. By uncovering these
challenges, our work highlights the complexities of maintaining safety
alignment during fine-tuning and offers guidance to help developers balance
utility and safety in LLMs. Datasets and fine-tuning code used in our
experiments can be found at
https://github.com/GuanlinLee/llm_instruction_tuning.
|
2502.01117
|
Learning to Learn Weight Generation via Trajectory Diffusion
|
cs.LG cs.AI cs.CV
|
Diffusion-based algorithms have emerged as promising techniques for weight
generation, particularly in scenarios like multi-task learning that require
frequent weight updates. However, existing solutions suffer from limited
cross-task transferability. In addition, they only utilize optimal weights as
training samples, ignoring the value of other weights in the optimization
process. To address these issues, we propose Lt-Di, which integrates the
diffusion algorithm with meta-learning to generate weights for unseen tasks.
Furthermore, we extend the vanilla diffusion algorithm into a trajectory
diffusion algorithm to utilize other weights along the optimization trajectory.
Trajectory diffusion decomposes the entire diffusion chain into multiple
shorter ones, improving training and inference efficiency. We analyze the
convergence properties of the weight generation paradigm and improve
convergence efficiency without additional time overhead. Our experiments
demonstrate Lt-Di's higher accuracy while reducing computational overhead
across various tasks, including zero-shot and few-shot learning, multi-domain
generalization, and large-scale language model fine-tuning. Our code is
released at https://github.com/tuantuange/Lt-Di.
|
2502.01118
|
Large Language Model-Enhanced Multi-Armed Bandits
|
cs.LG cs.AI
|
Large language models (LLMs) have been adopted to solve sequential
decision-making tasks such as multi-armed bandits (MAB), in which an LLM is
directly instructed to select the arms to pull in every iteration. However,
this paradigm of direct arm selection using LLMs has been shown to be
suboptimal in many MAB tasks. Therefore, we propose an alternative approach
which combines the strengths of classical MAB and LLMs. Specifically, we adopt
a classical MAB algorithm as the high-level framework and leverage the strong
in-context learning capability of LLMs to perform the sub-task of reward
prediction. Firstly, we incorporate the LLM-based reward predictor into the
classical Thompson sampling (TS) algorithm and adopt a decaying schedule for
the LLM temperature to ensure a transition from exploration to exploitation.
Next, we incorporate the LLM-based reward predictor (with a temperature of 0)
into a regression oracle-based MAB algorithm equipped with an explicit
exploration mechanism. We also extend our TS-based algorithm to dueling bandits
where only the preference feedback between pairs of arms is available, which
requires non-trivial algorithmic modifications. We conduct empirical
evaluations using both synthetic MAB tasks and experiments designed using
real-world text datasets, in which the results show that our algorithms
consistently outperform previous baseline methods based on direct arm
selection. Interestingly, we also demonstrate that in challenging tasks where
the arms lack semantic meanings that can be exploited by the LLM, our approach
achieves considerably better performance than LLM-based direct arm selection.
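The first algorithm can be sketched as a Thompson-sampling loop in which the LLM reward predictor is replaced by a stub (running empirical mean plus temperature-scaled noise); the stub and the 1/sqrt(t) decay schedule are our assumptions for illustration, not the paper's exact design.

```python
import random

random.seed(0)

true_means = [0.2, 0.5, 0.8]          # hidden arm reward probabilities
history = []                           # (arm, reward) pairs: the "context"

def llm_reward_predictor(arm, temperature):
    """Stand-in for the LLM's in-context reward prediction: the running
    empirical mean of the arm, perturbed by temperature-scaled noise."""
    rewards = [r for a, r in history if a == arm]
    mean = sum(rewards) / len(rewards) if rewards else 0.5
    return mean + random.gauss(0.0, temperature)

chosen = []
for t in range(1, 501):
    temperature = 1.0 / t ** 0.5       # decaying schedule: explore -> exploit
    preds = [llm_reward_predictor(a, temperature) for a in range(3)]
    arm = preds.index(max(preds))      # Thompson-style: argmax of samples
    reward = 1.0 if random.random() < true_means[arm] else 0.0
    history.append((arm, reward))
    chosen.append(arm)

late_best_rate = chosen[-100:].count(2) / 100
```

Setting the temperature to 0 recovers the deterministic predictor used inside the regression-oracle variant.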
|
2502.01122
|
Learning Efficient Positional Encodings with Graph Neural Networks
|
cs.LG
|
Positional encodings (PEs) are essential for effective graph representation
learning because they provide position awareness in inherently
position-agnostic transformer architectures and increase the expressive
capacity of Graph Neural Networks (GNNs). However, designing powerful and
efficient PEs for graphs poses significant challenges due to the absence of
canonical node ordering and the scale of the graph. In this work, we identify
four key properties that graph PEs should satisfy: stability, expressive
power, scalability, and genericness. We find that existing eigenvector-based PE
methods often fall short of jointly satisfying these criteria. To address this
gap, we introduce PEARL, a novel framework of learnable PEs for graphs. Our
primary insight is that message-passing GNNs function as nonlinear mappings of
eigenvectors, enabling the design of GNN architectures for generating powerful
and efficient PEs. A crucial challenge lies in initializing node attributes in
a manner that is both expressive and permutation equivariant. We tackle this by
initializing GNNs with random node inputs or standard basis vectors, thereby
unlocking the expressive power of message-passing operations, while employing
statistical pooling functions to maintain permutation equivariance. Our
analysis demonstrates that PEARL approximates equivariant functions of
eigenvectors with linear complexity, while rigorously establishing its
stability and high expressive power. Experimental evaluations show that PEARL
outperforms lightweight versions of eigenvector-based PEs and achieves
comparable performance to full eigenvector-based PEs, but with one or two
orders of magnitude lower complexity. Our code is available at
https://github.com/ehejin/Pearl-PE.
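The basis-vector initialization plus statistical pooling can be sketched directly. The tanh propagation and hop count below are illustrative choices, not PEARL's architecture, but the key property, exact permutation equivariance after pooling over the sample axis, carries through.

```python
import numpy as np

def pearl_style_pe(A, hops=2):
    """Run message passing with each standard basis vector as an input
    sample, then mean-pool over those samples. Pooling over the sample
    axis removes the dependence on node ordering, leaving a
    permutation-equivariant positional encoding."""
    n = A.shape[0]
    H = np.eye(n)                      # n input samples: basis vectors
    feats = []
    for _ in range(hops):
        H = np.tanh(A @ H)             # one message-passing step per hop
        feats.append(H.mean(axis=1))   # statistical pooling over samples
    return np.stack(feats, axis=1)     # (n, hops) positional encoding

rng = np.random.default_rng(0)
A = rng.integers(0, 2, size=(6, 6)).astype(float)
A = np.triu(A, 1); A = A + A.T         # symmetric adjacency, no self-loops

P = np.eye(6)[rng.permutation(6)]      # random permutation matrix
pe = pearl_style_pe(A)
pe_perm = pearl_style_pe(P @ A @ P.T)  # should equal P @ pe exactly
```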
|
2502.01126
|
Language Models Prefer What They Know: Relative Confidence Estimation
via Confidence Preferences
|
cs.CL
|
Language models (LMs) should provide reliable confidence estimates to help
users detect mistakes in their outputs and defer to human experts when
necessary. Asking a language model to assess its confidence ("Score your
confidence from 0-1.") is a natural way of evaluating its uncertainty. However,
models struggle to provide absolute assessments of confidence (i.e. judging
confidence in answering a question independent of other questions) and the
coarse-grained scores they produce are not useful for evaluating the
correctness of their answers. We propose relative confidence estimation, where
we match up questions against each other and ask the model to make relative
judgments of confidence ("Which question are you more confident in answering
correctly?"). Treating each question as a "player" in a series of matchups
against other questions and the model's preferences as match outcomes, we can
use rank aggregation methods like Elo rating and Bradley-Terry to translate the
model's confidence preferences into confidence scores. We evaluate relative
confidence estimation against absolute confidence estimation and
self-consistency confidence methods on five state-of-the-art LMs -- GPT-4,
GPT-4o, Gemini 1.5 Pro, Claude 3.5 Sonnet, and Llama 3.1 405B -- across 14
challenging STEM, social science, and commonsense reasoning question answering
tasks. Our results demonstrate that relative confidence estimation consistently
provides more reliable confidence scores than absolute confidence estimation,
with average gains of 3.5% in selective classification AUC over direct absolute
confidence estimation methods and 1.7% over self-consistency approaches across
all models and datasets.
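The rank-aggregation step can be sketched with Bradley-Terry fit by the classic minorize-maximize update; the toy win counts are illustrative, and the paper also considers Elo as an alternative aggregator.

```python
# wins[i][j] = matchups where the model said it was more confident on
# question i than on question j (toy counts, purely illustrative).
wins = [
    [0, 8, 9],
    [2, 0, 7],
    [1, 3, 0],
]
n = len(wins)

# Bradley-Terry strengths p_i via the minorize-maximize update:
# p_i <- W_i / sum_{j != i} n_ij / (p_i + p_j), then renormalize,
# where W_i is i's total wins and n_ij the comparisons between i and j.
p = [1.0] * n
for _ in range(200):
    new_p = []
    for i in range(n):
        W_i = sum(wins[i])
        denom = sum((wins[i][j] + wins[j][i]) / (p[i] + p[j])
                    for j in range(n) if j != i)
        new_p.append(W_i / denom)
    s = sum(new_p)
    p = [x / s for x in new_p]

scores = p   # higher score = relatively more confident on that question
```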
|
2502.01127
|
The Battling Influencers Game: Nash Equilibria Structure of a Potential
Game and Implications to Value Alignment
|
cs.GT cs.AI
|
When multiple influencers attempt to compete for a receiver's attention,
their influencing strategies must account for the presence of one another. We
introduce the Battling Influencers Game (BIG), a multi-player simultaneous-move
general-sum game, to provide a game-theoretic characterization of this social
phenomenon. We prove that BIG is a potential game, that it has either one or an
infinite number of pure Nash equilibria (NEs), and these pure NEs can be found
by convex optimization. Interestingly, we also prove that at any pure NE, all
(except at most one) influencers must exaggerate their actions to the maximum
extent. In other words, it is rational for the influencers to be non-truthful
and extreme because they anticipate that other influencers will cancel out
part of their influence. We discuss the implications of BIG for value
alignment.
|
2502.01128
|
C codegen considered unnecessary: go directly to binary, do not pass C.
Compilation of Julia code for deployment in model-based engineering
|
eess.SY cs.SY
|
Since time immemorial an old adage has always seemed to ring true: you cannot
use a high-level productive programming language like Python or R for real-time
control and embedded-systems programming, you must rewrite your program in C.
We present a counterexample to this mantra by demonstrating how recent compiler
developments in the Julia programming language allow users of Julia and the
equation-based modeling language ModelingToolkit to compile and deploy binaries
for real-time model-based estimation and control. Contrary to the approach
taken by a majority of modeling and simulation tools, we do not generate C
code, and instead demonstrate how we may use the native Julia code-generation
pipeline through LLVM to compile architecture-specific binaries from high-level
code. This approach avoids many of the restrictions typically placed on
high-level languages to enable C-code generation. As case studies, we include a
nonlinear state estimator derived from an equation-based model which is
compiled into a program that performs state estimation for deployment onto a
Raspberry Pi, as well as a PID controller library implemented in Julia and
compiled into a shared library callable from a C program.
|
2502.01129
|
Deep Reinforcement Learning for Dynamic Resource Allocation in Wireless
Networks
|
cs.DC cs.AI cs.ET cs.LG
|
This report investigates the application of deep reinforcement learning (DRL)
algorithms for dynamic resource allocation in wireless communication systems.
An environment that includes a base station, multiple antennas, and user
equipment is created. Using the RLlib library, various DRL algorithms such as
Deep Q-Network (DQN) and Proximal Policy Optimization (PPO) are then applied.
These algorithms are compared based on their ability to optimize resource
allocation, focusing on the impact of different learning rates and scheduling
policies. The findings demonstrate that the choice of algorithm and learning
rate significantly influences system performance, with DRL providing more
efficient resource allocation compared to traditional methods.
|
2502.01131
|
Simple Linear Neuron Boosting
|
cs.LG stat.ML
|
Given a differentiable network architecture and loss function, we revisit
optimizing the network's neurons in function space using Boosted
Backpropagation (Grubb & Bagnell, 2010), in contrast to optimizing in parameter
space. From this perspective, we reduce descent in the space of linear
functions that optimizes the network's backpropagated errors to a
preconditioned gradient descent algorithm. We show that this preconditioned
update rule is equivalent to reparameterizing the network to whiten each
neuron's features, with the benefit that the normalization occurs outside of
inference. In practice, we use this equivalence to construct an online
estimator for approximating the preconditioner and we propose an online,
matrix-free learning algorithm with adaptive step sizes. The algorithm is
applicable whenever autodifferentiation is available, including convolutional
networks and transformers, and it is simple to implement for both the local and
distributed training settings. We demonstrate fast convergence both in terms of
epochs and wall clock time on a variety of tasks and networks.
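The stated equivalence, preconditioned descent equals plain descent on whitened features, can be checked directly on a single linear neuron; the least-squares setup and Cholesky square root below are illustrative choices, not the paper's estimator.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3)) @ np.diag([1.0, 5.0, 0.2])  # ill-scaled features
y = X @ np.array([1.0, -2.0, 3.0]) + 0.1 * rng.normal(size=200)
n = len(X)

def grad(w):
    """Gradient of the mean squared error of the linear neuron."""
    return X.T @ (X @ w - y) / n

C = X.T @ X / n                          # feature second-moment matrix
L = np.linalg.cholesky(C)                # C = L L^T, so L^{-1} whitens
eta, w = 0.1, np.zeros(3)

# (1) Preconditioned gradient step in the original parameterization.
w_pre = w - eta * np.linalg.solve(C, grad(w))

# (2) Plain gradient step after reparameterizing to whitened features.
Xw = X @ np.linalg.inv(L).T              # whitened: Xw^T Xw / n == I
v = L.T @ w                              # same predictor, whitened coords
v_new = v - eta * Xw.T @ (Xw @ v - y) / n
w_white = np.linalg.solve(L.T, v_new)    # map the whitened step back
```

The two updates coincide exactly, which is the sense in which the normalization "occurs outside of inference": no whitening layer is ever evaluated at prediction time.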
|
2502.01137
|
Self-Organizing Interaction Spaces: A Framework for Engineering
Pervasive Applications in Mobile and Distributed Environments
|
cs.DC cs.AI cs.LG cs.SE
|
The rapid adoption of pervasive and mobile computing has led to an
unprecedented rate of data production and consumption by mobile applications at
the network edge. These applications often require interactions such as data
exchange, behavior coordination, and collaboration, which are typically
mediated by cloud servers. While cloud computing has been effective for
distributed systems, challenges like latency, cost, and intermittent
connectivity persist. With the advent of 5G technology, features like
location-awareness and device-to-device (D2D) communication enable a more
distributed and adaptive architecture. This paper introduces Self-Organizing
Interaction Spaces (SOIS), a novel framework for engineering pervasive
applications. SOIS leverages the dynamic and heterogeneous nature of mobile
nodes, allowing them to form adaptive organizational structures based on their
individual and social contexts. The framework provides two key abstractions for
modeling and programming pervasive applications using an organizational mindset
and mechanisms for adapting dynamic organizational structures. Case examples
and performance evaluations of a simulated mobile crowd-sensing application
demonstrate the feasibility and benefits of SOIS. Results highlight its
potential to enhance efficiency and reduce reliance on traditional cloud
models, paving the way for innovative solutions in mobile and distributed
environments.
|
2502.01141
|
Beyond Yes or No: Predictive Compliance Monitoring Approaches for
Quantifying the Magnitude of Compliance Violations
|
cs.LG cs.AI
|
Most existing process compliance monitoring approaches detect compliance
violations in an ex post manner. Only predicate prediction focuses on
predicting them. However, predicate prediction provides a binary yes/no notion
of compliance, lacking the ability to measure to which extent an ongoing
process instance deviates from the desired state as specified in constraints.
Here, being able to quantify the magnitude of violation would provide
organizations with deeper insights into their operational performance, enabling
informed decision making to reduce or mitigate the risk of non-compliance.
Thus, we propose two predictive compliance monitoring approaches to close this
research gap. The first approach reformulates the binary classification problem
as a hybrid task that considers both classification and regression, while the
second employs a multi-task learning method to explicitly predict the
compliance status and the magnitude of violation for deviant cases
simultaneously. In this work, we focus on temporal constraints as they are
significant in almost any application domain, e.g., health care. The evaluation
on synthetic and real-world event logs demonstrates that our approaches are
capable of quantifying the magnitude of violations while maintaining comparable
performance for compliance predictions achieved by state-of-the-art approaches.
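The second, multi-task approach can be sketched as two heads trained jointly: a compliance classifier and a magnitude regressor evaluated only on deviant cases. The synthetic temporal-slack data, linear heads, and loss weight below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(400, 2))
X = np.hstack([X, np.ones((400, 1))])        # add a bias column
slack = X[:, 0] + X[:, 1] - 1.0              # toy temporal-constraint slack
y_cls = (slack > 0).astype(float)            # 1 = violation, 0 = compliant
y_mag = np.maximum(slack, 0.0)               # magnitude of violation

w_c = np.zeros(3)                            # classification head
w_r = np.zeros(3)                            # regression head
lam, lr = 1.0, 0.5

for _ in range(3000):
    p = 1.0 / (1.0 + np.exp(-X @ w_c))       # predicted violation prob.
    g_c = X.T @ (p - y_cls) / len(X)         # cross-entropy gradient
    mask = y_cls == 1                        # magnitude loss: deviants only
    res = X[mask] @ w_r - y_mag[mask]
    g_r = lam * X[mask].T @ res / mask.sum() # masked squared-error gradient
    w_c -= lr * g_c
    w_r -= lr * g_r

acc = ((p > 0.5) == (y_cls == 1)).mean()
mag_err = np.abs(X[mask] @ w_r - y_mag[mask]).mean()
```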
|
2502.01142
|
DeepRAG: Thinking to Retrieval Step by Step for Large Language Models
|
cs.AI cs.CL cs.IR
|
Large Language Models (LLMs) have shown remarkable potential in reasoning,
yet they still suffer from severe factual hallucinations due to the limited
timeliness, accuracy, and coverage of their parametric knowledge. Meanwhile,
integrating
reasoning with retrieval-augmented generation (RAG) remains challenging due to
ineffective task decomposition and redundant retrieval, which can introduce
noise and degrade response quality. In this paper, we propose DeepRAG, a
framework that models retrieval-augmented reasoning as a Markov Decision
Process (MDP), enabling strategic and adaptive retrieval. By iteratively
decomposing queries, DeepRAG dynamically determines whether to retrieve
external knowledge or rely on parametric reasoning at each step. Experiments
show that DeepRAG improves retrieval efficiency while raising answer accuracy
by 21.99%, demonstrating its effectiveness in optimizing retrieval-augmented
reasoning.
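The per-step retrieve-or-reason decision can be sketched as a two-branch policy over toy knowledge stores; the dictionaries and lookup rule below are illustrative stand-ins for the learned MDP policy.

```python
# Toy knowledge: what the "parametric" model knows vs. the external corpus.
parametric = {"capital of France": "Paris"}
corpus = {"capital of Burkina Faso": "Ouagadougou"}

def decide_and_answer(subquery):
    """One MDP step: rely on parametric knowledge when available,
    otherwise take the 'retrieve' action against the external corpus."""
    if subquery in parametric:
        return parametric[subquery], "parametric"
    return corpus.get(subquery, "unknown"), "retrieve"

# Iteratively decomposed subqueries, each handled by the step policy.
trace = [decide_and_answer(q)
         for q in ("capital of France", "capital of Burkina Faso")]
```

The point of the MDP framing is that the retrieve action is taken only when the parametric branch is expected to fail, which is what cuts redundant retrieval.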
|
2502.01143
|
ASAP: Aligning Simulation and Real-World Physics for Learning Agile
Humanoid Whole-Body Skills
|
cs.RO cs.AI cs.LG cs.SY eess.SY
|
Humanoid robots hold the potential for unparalleled versatility in performing
human-like, whole-body skills. However, achieving agile and coordinated
whole-body motions remains a significant challenge due to the dynamics mismatch
between simulation and the real world. Existing approaches, such as system
identification (SysID) and domain randomization (DR) methods, often rely on
labor-intensive parameter tuning or result in overly conservative policies that
sacrifice agility. In this paper, we present ASAP (Aligning Simulation and
Real-World Physics), a two-stage framework designed to tackle the dynamics
mismatch and enable agile humanoid whole-body skills. In the first stage, we
pre-train motion tracking policies in simulation using retargeted human motion
data. In the second stage, we deploy the policies in the real world and collect
real-world data to train a delta (residual) action model that compensates for
the dynamics mismatch. Then, ASAP fine-tunes pre-trained policies with the
delta action model integrated into the simulator to align effectively with
real-world dynamics. We evaluate ASAP across three transfer scenarios: IsaacGym
to IsaacSim, IsaacGym to Genesis, and IsaacGym to the real-world Unitree G1
humanoid robot. Our approach significantly improves agility and whole-body
coordination across various dynamic motions, reducing tracking error compared
to SysID, DR, and delta dynamics learning baselines. ASAP enables highly agile
motions that were previously difficult to achieve, demonstrating the potential
of delta action learning in bridging simulation and real-world dynamics. These
results suggest a promising sim-to-real direction for developing more
expressive and agile humanoids.
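The delta action model admits a one-dimensional sketch: collect real-world rollouts, fit a residual action that makes the simulator reproduce them, then run the simulator with the residual plugged in. The linear dynamics and least-squares fit are toy assumptions, not ASAP's learned model.

```python
import numpy as np

rng = np.random.default_rng(0)

def sim_step(s, a):      # simulator dynamics (mismatched)
    return s + a

def real_step(s, a):     # "real world": unmodeled gain and bias
    return s + 0.8 * a + 0.1

# Record (state, action, next-state) data from real-world rollouts.
S = rng.uniform(-1, 1, size=200)
A = rng.uniform(-1, 1, size=200)
target_delta = real_step(S, A) - sim_step(S, A)   # action-space residual

# Fit a linear delta action model: delta(s, a) = theta . [s, a, 1].
F = np.stack([S, A, np.ones_like(S)], axis=1)
theta, *_ = np.linalg.lstsq(F, target_delta, rcond=None)

def aligned_sim_step(s, a):
    """Fine-tuning runs the policy in sim with the delta model plugged in."""
    return sim_step(s, a + theta @ np.array([s, a, 1.0]))
```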
|
2502.01145
|
Tackling Feature and Sample Heterogeneity in Decentralized Multi-Task
Learning: A Sheaf-Theoretic Approach
|
cs.LG
|
Federated multi-task learning (FMTL) aims to simultaneously learn multiple
related tasks across clients without sharing sensitive raw data. However, in
the decentralized setting, existing FMTL frameworks are limited in their
ability to capture complex task relationships and handle feature and sample
heterogeneity across clients. To address these challenges, we introduce a novel
sheaf-theoretic approach for FMTL. By representing client relationships
using cellular sheaves, our framework can flexibly model interactions between
heterogeneous client models. We formulate the sheaf-based FMTL optimization
problem using sheaf Laplacian regularization and propose the Sheaf-FMTL
algorithm to solve it. We show that the proposed framework provides a unified
view encompassing many existing federated learning (FL) and FMTL approaches.
Furthermore, we prove that our proposed algorithm, Sheaf-FMTL, achieves a
sublinear convergence rate in line with state-of-the-art decentralized FMTL
algorithms. Extensive experiments demonstrate that Sheaf-FMTL exhibits
communication savings by sending significantly fewer bits compared to
decentralized FMTL baselines.
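The sheaf Laplacian regularizer at the core of Sheaf-FMTL can be built explicitly for a toy client graph; the stalk dimension, topology, and random restriction maps below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 3                                   # stalk dimension per client
n = 3
edges = [(0, 1), (1, 2), (0, 2)]        # decentralized client topology

# Restriction maps F_{u->e} mapping each client's model into the edge stalk.
F = {(u, e): rng.normal(size=(d, d)) for e in range(len(edges))
     for u in edges[e]}

# Sheaf Laplacian blocks: L[u,u] += F_u^T F_u ; L[u,v] -= F_u^T F_v.
L = np.zeros((n * d, n * d))
for e, (u, v) in enumerate(edges):
    Fu, Fv = F[(u, e)], F[(v, e)]
    L[u*d:(u+1)*d, u*d:(u+1)*d] += Fu.T @ Fu
    L[v*d:(v+1)*d, v*d:(v+1)*d] += Fv.T @ Fv
    L[u*d:(u+1)*d, v*d:(v+1)*d] -= Fu.T @ Fv
    L[v*d:(v+1)*d, u*d:(u+1)*d] -= Fv.T @ Fu

# The regularizer x^T L x measures cross-client disagreement after the
# restriction maps are applied on each edge.
x = rng.normal(size=n * d)
quad = x @ L @ x
disagreement = sum(
    np.sum((F[(u, e)] @ x[u*d:(u+1)*d] - F[(v, e)] @ x[v*d:(v+1)*d]) ** 2)
    for e, (u, v) in enumerate(edges))
```

Setting every restriction map to the identity recovers the ordinary graph Laplacian penalty, which is one sense in which the framework subsumes standard FL formulations.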
|
2502.01146
|
Quantum Machine Learning: A Hands-on Tutorial for Machine Learning
Practitioners and Researchers
|
quant-ph cs.AI cs.LG
|
This tutorial intends to introduce readers with a background in AI to quantum
machine learning (QML) -- a rapidly evolving field that seeks to leverage the
power of quantum computers to reshape the landscape of machine learning. For
self-consistency, this tutorial covers foundational principles, representative
QML algorithms, their potential applications, and critical aspects such as
trainability, generalization, and computational complexity. In addition,
practical code demonstrations are provided at https://qml-tutorial.github.io/
to illustrate real-world implementations and facilitate hands-on learning.
Together, these elements offer readers a comprehensive overview of the latest
advancements in QML. By bridging the gap between classical machine learning and
quantum computing, this tutorial serves as a valuable resource for those
looking to engage with QML and explore the forefront of AI in the quantum era.
|
2502.01152
|
Gradient Norm-based Fine-Tuning for Backdoor Defense in Automatic Speech
Recognition
|
cs.SD cs.LG eess.AS
|
Backdoor attacks have posed a significant threat to the security of deep
neural networks (DNNs). Despite considerable strides in developing defenses
against backdoor attacks in the visual domain, specialized defenses for the
audio domain remain lacking. Furthermore, defenses adapted from the visual to
the audio domain demonstrate limited effectiveness. To fill this gap, we
propose Gradient Norm-based FineTuning (GN-FT), a novel defense strategy
against backdoor attacks in the audio domain, based on observations of the
corresponding backdoored models. Specifically, we first empirically find that
backdoored neurons exhibit larger gradient values than other neurons, while
clean neurons have the smallest.
incorporating the gradient norm regularization, aiming to weaken and reduce the
backdoored neurons. We further approximate the loss computation for lower
implementation costs. Extensive experiments on two speech recognition datasets
across five models demonstrate the superior performance of our proposed method.
To the best of our knowledge, this work is the first specialized and effective
defense against backdoor attacks in the audio domain.
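A sketch of gradient-norm-regularized fine-tuning on a logistic toy model; the finite-difference approximation of the penalty gradient is our own stand-in for the paper's approximate loss computation, whose exact form is not given in the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 5))
y = (X[:, 0] + 0.5 * rng.normal(size=300) > 0).astype(float)

def loss_grad(w):
    """Cross-entropy gradient of a logistic model (stand-in for the net)."""
    p = 1.0 / (1.0 + np.exp(-X @ w))
    return X.T @ (p - y) / len(X)

w = rng.normal(size=5)
lam, eps, lr = 0.5, 1e-3, 0.3
norm0 = np.linalg.norm(loss_grad(w))

for _ in range(500):
    g = loss_grad(w)
    gn = np.linalg.norm(g) + 1e-12
    # Finite-difference gradient of the penalty ||g(w)||: probe along g-hat.
    g_pen = (loss_grad(w + eps * g / gn) - g) / eps
    w -= lr * (g + lam * g_pen)          # task loss + gradient-norm penalty

norm1 = np.linalg.norm(loss_grad(w))     # should shrink during fine-tuning
```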
|
2502.01154
|
Jailbreaking with Universal Multi-Prompts
|
cs.CL cs.AI cs.CR cs.LG
|
Large language models (LLMs) have seen rapid development in recent years,
revolutionizing various applications and significantly enhancing convenience
and productivity. However, alongside their impressive capabilities, ethical
concerns and new types of attacks, such as jailbreaking, have emerged. Most
prompting techniques focus on optimizing adversarial inputs for individual
cases, resulting in higher computational costs when dealing with large
datasets; less research has addressed the more general setting of training a
universal attacker that can transfer to unseen tasks. In this paper, we
introduce JUMP, a prompt-based method designed to jailbreak LLMs using
universal multi-prompts. We also adapt our approach for defense, which we term
DUMP. Experimental results demonstrate that our method for optimizing universal
multi-prompts outperforms existing techniques.
|
2502.01156
|
On the impact of the parametrization of deep convolutional neural
networks on post-training quantization
|
cs.IT math.IT
|
This paper introduces novel theoretical approximation bounds for the output
of quantized neural networks, with a focus on convolutional neural networks
(CNN). By considering layerwise parametrization and focusing on the
quantization of weights, we provide bounds that gain several orders of
magnitude compared to state-of-the-art results on classical deep convolutional
neural networks such as MobileNetV2 or ResNets. These gains are achieved by
improving the behaviour of the approximation bounds with respect to the depth
parameter, which has the most impact on the approximation error induced by
quantization. To complement our theoretical result, we provide a numerical
exploration of our bounds on MobileNetV2 and ResNets.
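The object of study, layerwise weight quantization and the output error it induces, can be demonstrated numerically; the two-layer random network and uniform symmetric quantizer below are illustrative, not the paper's bound construction.

```python
import numpy as np

rng = np.random.default_rng(0)

def quantize(w, bits):
    """Uniform symmetric quantization of a weight tensor."""
    scale = np.abs(w).max() / (2 ** (bits - 1) - 1)
    return np.round(w / scale) * scale

# A small random "network": two layers with a ReLU in between.
W1 = rng.normal(size=(16, 8))
W2 = rng.normal(size=(4, 16))
x = rng.normal(size=(8,))

def forward(W1, W2, x):
    return W2 @ np.maximum(W1 @ x, 0.0)

# Output error induced by quantizing all weights at various bit widths.
ref = forward(W1, W2, x)
errors = {b: np.linalg.norm(forward(quantize(W1, b), quantize(W2, b), x) - ref)
          for b in (2, 4, 8)}
```

The error shrinks with the quantization step; the paper's contribution is bounding how this error grows with depth rather than measuring it empirically.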
|
2502.01157
|
Radiant Foam: Real-Time Differentiable Ray Tracing
|
cs.CV
|
Research on differentiable scene representations is consistently moving
towards more efficient, real-time models. Recently, this has led to the
popularization of splatting methods, which eschew the traditional ray-based
rendering of radiance fields in favor of rasterization. This has yielded a
significant improvement in rendering speeds due to the efficiency of
rasterization algorithms and hardware, but has come at a cost: the
approximations that make rasterization efficient also make implementation of
light transport phenomena like reflection and refraction much more difficult.
We propose a novel scene representation which avoids these approximations, but
keeps the efficiency and reconstruction quality of splatting by leveraging a
decades-old efficient volumetric mesh ray tracing algorithm which has been
largely overlooked in recent computer vision research. The resulting model,
which we name Radiant Foam, achieves rendering speed and quality comparable to
Gaussian Splatting, without the constraints of rasterization. Unlike ray traced
Gaussian models that use hardware ray tracing acceleration, our method requires
no special hardware or APIs beyond the standard features of a programmable GPU.
|
2502.01158
|
MIND: Modality-Informed Knowledge Distillation Framework for Multimodal
Clinical Prediction Tasks
|
cs.LG cs.AI cs.CV
|
Multimodal fusion leverages information across modalities to learn better
feature representations with the goal of improving performance in fusion-based
tasks. However, multimodal datasets, especially in medical settings, are
typically smaller than their unimodal counterparts, which can impede the
performance of multimodal models. Additionally, the increase in the number of
modalities is often associated with an overall increase in the size of the
multimodal network, which may be undesirable in medical use cases. Utilizing
smaller unimodal encoders may lead to sub-optimal performance, particularly
when dealing with high-dimensional clinical data. In this paper, we propose the
Modality-INformed knowledge Distillation (MIND) framework, a multimodal model
compression approach based on knowledge distillation that transfers knowledge
from ensembles of pre-trained deep neural networks of varying sizes into a
smaller multimodal student. The teacher models consist of unimodal networks,
allowing the student to learn from diverse representations. MIND employs
multi-head joint fusion models, as opposed to single-head models, enabling the
use of unimodal encoders in the case of unimodal samples without requiring
imputation or masking of absent modalities. As a result, MIND generates an
optimized multimodal model, enhancing both multimodal and unimodal
representations. It can also be leveraged to balance multimodal learning during
training. We evaluate MIND on binary and multilabel clinical prediction tasks
using time series data and chest X-ray images. Additionally, we assess the
generalizability of the MIND framework on three non-medical multimodal
multiclass datasets. Experimental results demonstrate that MIND enhances the
performance of the smaller multimodal network across all five tasks, as well as
various fusion methods and multimodal architectures, compared to
state-of-the-art baselines.
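As a rough sketch of the distillation objective such frameworks build on, the following shows a generic temperature-scaled KL term between teacher and student logits. This is the textbook knowledge-distillation formulation, not MIND's actual multimodal loss, and all names are illustrative:

```python
import numpy as np

def softmax(z, t=1.0):
    """Numerically stable softmax with temperature t."""
    e = np.exp(z / t - np.max(z / t))
    return e / e.sum()

def distill_loss(student_logits, teacher_logits, t=2.0):
    """Temperature-scaled KL divergence KL(teacher || student), the
    standard distillation term; MIND's modality-informed weighting
    and multi-head fusion are not modeled here."""
    p = softmax(teacher_logits, t)
    q = softmax(student_logits, t)
    return float(t * t * np.sum(p * (np.log(p) - np.log(q))))

teacher = np.array([3.0, 1.0, 0.2])
print(distill_loss(teacher, teacher))  # 0.0 for identical logits
```

In MIND's setting, each unimodal teacher would contribute such a term against the smaller multimodal student's head outputs.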
|
2502.01159
|
AtmosSci-Bench: Evaluating the Recent Advance of Large Language Model
for Atmospheric Science
|
cs.LG cs.AI
|
The rapid advancements in large language models (LLMs), particularly in their
reasoning capabilities, hold transformative potential for addressing complex
challenges in atmospheric science. However, leveraging LLMs effectively in this
domain requires a robust and comprehensive evaluation benchmark. To address
this need, we present AtmosSci-Bench, a novel benchmark designed to
systematically assess LLM performance across five core categories of
atmospheric science problems: hydrology, atmospheric dynamics, atmospheric
physics, geophysics, and physical oceanography. We employ a template-based
question generation framework, enabling scalable and diverse multiple-choice
questions curated from graduate-level atmospheric science problems. We conduct
a comprehensive evaluation of representative LLMs, categorized into four
groups: instruction-tuned models, advanced reasoning models, math-augmented
models, and domain-specific climate models. Our analysis provides some
interesting insights into the reasoning and problem-solving capabilities of
LLMs in atmospheric science. We believe AtmosSci-Bench can serve as a critical
step toward advancing LLM applications in climate service by offering a
standard and rigorous evaluation framework. Our source codes are currently
available at https://github.com/Relaxed-System-Lab/AtmosSci-Bench.
|
2502.01160
|
Scalable Precise Computation of Shannon Entropy
|
cs.AI cs.IT math.IT
|
Quantitative information flow analyses (QIF) are a class of techniques for
measuring the amount of confidential information leaked by a program to its
public outputs.
Shannon entropy is an important method to quantify the amount of leakage in
QIF.
This paper focuses on the programs modeled in Boolean constraints and
optimizes the two stages of the Shannon entropy computation to implement a
scalable precise tool PSE.
In the first stage, we design a knowledge compilation language called \ADDAND
that combines Algebraic Decision Diagrams and conjunctive decomposition.
\ADDAND avoids enumerating possible outputs of a program and supports
tractable entropy computation.
In the second stage, we optimize the model counting queries that are used to
compute the probabilities of outputs.
We compare PSE with the state-of-the-art probably approximately correct tool
EntropyEstimation, which was shown to significantly outperform the existing
precise tools.
The experimental results demonstrate that PSE solved 55 more benchmarks
than EntropyEstimation out of a total of 441. For 98% of the benchmarks that
both PSE and EntropyEstimation solved, PSE is at least $10\times$ as efficient
as EntropyEstimation.
|
2502.01167
|
ConditionNET: Learning Preconditions and Effects for Execution
Monitoring
|
cs.RO cs.LG
|
The introduction of robots into everyday scenarios necessitates algorithms
capable of monitoring the execution of tasks. In this paper, we propose
ConditionNET, an approach for learning the preconditions and effects of actions
in a fully data-driven manner. We develop an efficient vision-language model
and introduce additional optimization objectives during training to optimize
for consistent feature representations. ConditionNET explicitly models the
dependencies between actions, preconditions, and effects, leading to improved
performance. We evaluate our model on two robotic datasets, one of which we
collected for this paper, containing 406 successful and 138 failed teleoperated
demonstrations of a Franka Emika Panda robot performing tasks like pouring and
cleaning the counter. We show in our experiments that ConditionNET outperforms
all baselines on both anomaly detection and phase prediction tasks.
Furthermore, we implement an action monitoring system on a real robot to
demonstrate the practical applicability of the learned preconditions and
effects. Our results highlight the potential of ConditionNET for enhancing the
reliability and adaptability of robots in real-world environments. The data is
available on the project website:
https://dsliwowski1.github.io/ConditionNET_page.
|
2502.01170
|
Label Distribution Learning with Biased Annotations by Learning
Multi-Label Representation
|
cs.LG
|
Multi-label learning (MLL) has gained attention for its ability to represent
real-world data. Label Distribution Learning (LDL), an extension of MLL to
learning from label distributions, faces challenges in collecting accurate
label distributions. To address the issue of biased annotations, based on the
low-rank assumption, existing works recover true distributions from biased
observations by exploring the label correlations. However, recent evidence
shows that the label distribution tends to be full-rank, and naively applying
low-rank approximation to biased observations leads to inaccurate recovery and
performance degradation. In this paper, we address the LDL with biased
annotations problem from a novel perspective, where we first degenerate the
soft label distribution into a hard multi-hot label and then recover the true
label information for each instance. This idea stems from an insight that
assigning hard multi-hot labels is often easier than assigning a soft label
distribution, and it shows stronger immunity to noise disturbances, leading to
smaller label bias. Moreover, assuming that the multi-label space for
predicting label distributions is low-rank offers a more reasonable approach to
capturing label correlations. Theoretical analysis and experiments confirm the
effectiveness and robustness of our method on real-world datasets.
|
2502.01171
|
Efficient and Scalable Density Functional Theory Hamiltonian Prediction
through Adaptive Sparsity
|
cs.LG physics.comp-ph
|
Hamiltonian matrix prediction is pivotal in computational chemistry, serving
as the foundation for determining a wide range of molecular properties. While
SE(3) equivariant graph neural networks have achieved remarkable success in
this domain, their substantial computational cost-driven by high-order tensor
product (TP) operations-restricts their scalability to large molecular systems
with extensive basis sets. To address this challenge, we introduce SPHNet, an
efficient and scalable equivariant network that incorporates adaptive sparsity
into Hamiltonian prediction. SPHNet employs two innovative sparse gates to
selectively constrain non-critical interaction combinations, significantly
reducing tensor product computations while maintaining accuracy. To optimize
the sparse representation, we develop a Three-phase Sparsity Scheduler,
ensuring stable convergence and achieving high performance at sparsity rates of
up to 70 percent. Extensive evaluations on QH9 and PubchemQH datasets
demonstrate that SPHNet achieves state-of-the-art accuracy while providing up
to a 7x speedup over existing models. Beyond Hamiltonian prediction, the
proposed sparsification techniques also hold significant potential for
improving the efficiency and scalability of other SE(3) equivariant networks,
further broadening their applicability and impact.
|
2502.01172
|
Towards Agile Swarming in Real World: Onboard Relative Localization with
Fast Tracking of Active Blinking Markers
|
cs.RO cs.CV
|
A novel onboard tracking approach enabling vision-based relative localization
and communication using Active blinking Marker Tracking (AMT) is introduced in
this article. Active blinking markers on multi-robot team members improve the
robustness of relative localization for aerial vehicles in tightly coupled
swarms during real-world deployments, while also serving as a resilient
communication channel. Traditional tracking algorithms struggle to track fast
moving blinking markers due to their intermittent appearance in the camera
frames. AMT addresses this by using weighted polynomial regression to predict
the future appearance of active blinking markers while accounting for
uncertainty in the prediction. In outdoor experiments, the AMT approach
outperformed state-of-the-art methods in tracking density, accuracy, and
complexity. The experimental validation of this novel tracking approach for
relative localization involved testing motion patterns motivated by our
research on agile multi-robot deployment.
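The prediction step described above can be approximated in a few lines. This is a hedged sketch only: `predict_marker` and its linear recency weighting are assumptions for illustration, not AMT's exact formulation:

```python
import numpy as np

def predict_marker(times, positions, t_next, degree=2):
    """Predict the next 2D position of a blinking marker from past
    detections via weighted polynomial regression; recent detections
    receive higher weight (the weighting scheme is an assumption)."""
    w = np.linspace(0.5, 1.0, len(times))  # favor recent frames
    coeffs = [np.polyfit(times, positions[:, d], degree, w=w) for d in range(2)]
    return np.array([np.polyval(c, t_next) for c in coeffs])

ts = np.array([0.0, 1.0, 2.0, 3.0])
xy = np.column_stack([2 * ts, ts ** 2])  # marker on a parabolic path
print(predict_marker(ts, xy, 4.0))       # ~[8. 16.]
```

Bridging the frames where a blinking marker is off then reduces to evaluating the fitted polynomial at the expected reappearance time.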
|
2502.01177
|
Insights from Network Science can advance Deep Graph Learning
|
cs.LG
|
Deep graph learning and network science both analyze graphs but approach
similar problems from different perspectives. Whereas network science focuses
on models and measures that reveal the organizational principles of complex
systems with explicit assumptions, deep graph learning focuses on flexible and
generalizable models that learn patterns in graph data in an automated fashion.
Despite these differences, both fields share the same goal: to better model and
understand patterns in graph-structured data. Early efforts to integrate
methods, models, and measures from network science and deep graph learning
indicate significant untapped potential. In this position paper, we explore
opportunities at their intersection. We discuss open challenges in deep graph
learning, including data augmentation, improved evaluation practices,
higher-order models, and pooling methods. Likewise, we highlight challenges in
network science, including scaling to massive graphs, integrating continuous
gradient-based optimization, and developing standardized benchmarks.
|
2502.01179
|
Joint Localization and Activation Editing for Low-Resource Fine-Tuning
|
cs.CL cs.AI cs.LG
|
Parameter-efficient fine-tuning (PEFT) methods, such as LoRA, are commonly
used to adapt LLMs. However, the effectiveness of standard PEFT methods is
limited in low-resource scenarios with only a few hundred examples. Recent
advances in interpretability research have inspired the emergence of activation
editing techniques, which modify the activations of specific model components.
These methods, due to their extremely small parameter counts, show promise for
small datasets. However, their performance is highly dependent on identifying
the correct modules to edit and often lacks stability across different
datasets. In this paper, we propose Joint Localization and Activation Editing
(JoLA), a method that jointly learns (1) which heads in the Transformer to edit,
(2) whether the intervention should be additive, multiplicative, or both, and
(3) the intervention parameters themselves: the vectors applied as additive
offsets or multiplicative scalings to the head output. Through evaluations on
three benchmarks spanning commonsense reasoning, natural language
understanding, and natural language generation, we demonstrate that JoLA
consistently outperforms existing methods.
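The intervention in (1)-(3) can be pictured as a gated per-head transform. The formulation below is a simplified sketch, using hard 0/1 gates instead of the learned soft localization, with all names illustrative:

```python
import numpy as np

def edit_heads(head_outputs, gates, scales, offsets):
    """Apply a JoLA-style intervention per attention head:
    edited = scale * output + offset, blended by a gate that
    selects which heads are edited (learned soft gates in the
    paper; fixed 0/1 gates in this sketch)."""
    edited = scales[:, None] * head_outputs + offsets[:, None]
    g = gates[:, None]
    return g * edited + (1 - g) * head_outputs

H, D = 4, 8                             # heads, head dimension
outs = np.ones((H, D))
gates = np.array([1.0, 0.0, 1.0, 0.0])  # localize the edit to heads 0 and 2
scales = np.full(H, 2.0)
offsets = np.full(H, 0.5)
result = edit_heads(outs, gates, scales, offsets)
print(result[:, 0])  # heads 0 and 2 become 2.5, heads 1 and 3 stay 1.0
```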
|
2502.01180
|
A Minimax Optimal Controller for Positive Systems
|
math.OC cs.SY eess.SY
|
We present an explicit solution to the discrete-time Bellman equation for
minimax optimal control of positive systems under unconstrained disturbances.
The primary contribution of our result relies on deducing a bound for the
disturbance penalty, which characterizes the existence of a finite solution to
the problem class. Moreover, this constraint on the disturbance penalty reveals
that, in scenarios where a solution is feasible, the problem converges to its
equivalent minimization problem in the absence of disturbances.
|
2502.01181
|
BVINet: Unlocking Blind Video Inpainting with Zero Annotations
|
cs.CV
|
Video inpainting aims to fill in corrupted regions of the video with
plausible contents. Existing methods generally assume that the locations of
corrupted regions are known, focusing primarily on the "how to inpaint". This
reliance necessitates manual annotation of the corrupted regions using binary
masks to indicate "where to inpaint". However, the annotation of these masks is
labor-intensive and expensive, limiting the practicality of current methods. In
this paper, we expect to relax this assumption by defining a new blind video
inpainting setting, enabling the networks to learn the mapping from corrupted
video to inpainted result directly, eliminating the need of corrupted region
annotations. Specifically, we propose an end-to-end blind video inpainting
network (BVINet) to address both "where to inpaint" and "how to inpaint"
simultaneously. On the one hand, BVINet can predict the masks of corrupted
regions by detecting semantic-discontinuous regions of the frame and utilizing
temporal consistency prior of the video. On the other hand, the predicted masks
are incorporated into the BVINet, allowing it to capture valid context
information from uncorrupted regions to fill in corrupted ones. In addition, we
introduce a consistency loss to regularize the training parameters of BVINet.
In this way, mask prediction and video completion mutually constrain each
other, thereby maximizing the overall performance of the trained model.
Furthermore, we customize a dataset consisting of synthetic corrupted videos,
real-world corrupted videos, and their corresponding completed videos. This
dataset serves as a valuable resource for advancing blind video inpainting
research. Extensive experimental results demonstrate the effectiveness and
superiority of our method.
|
2502.01182
|
A Single Model Ensemble Framework for Neural Machine Translation using
Pivot Translation
|
cs.CL cs.AI
|
Despite the significant advances in neural machine translation, performance
remains subpar for low-resource language pairs. Ensembling multiple systems is
a widely adopted technique to enhance performance, often accomplished by
combining probability distributions. However, the previous approaches face the
challenge of high computational costs for training multiple models.
Furthermore, for black-box models, averaging token-level probabilities at each
decoding step is not feasible. To address the problems of multi-model ensemble
methods, we present a pivot-based single model ensemble. The proposed strategy
consists of two steps: pivot-based candidate generation and post-hoc
aggregation. In the first step, we generate candidates through pivot
translation. This can be achieved with only a single model and facilitates
knowledge transfer from high-resource pivot languages, resulting in candidates
that are not only diverse but also more accurate. Next, in the aggregation
step, we select k high-quality candidates from the generated candidates and
merge them to generate a final translation that outperforms the existing
candidates. Our experimental results show that our method produces translations
of superior quality by leveraging candidates from pivot translation to capture
the subtle nuances of the source sentence.
|
2502.01183
|
Enhancing Environmental Robustness in Few-shot Learning via Conditional
Representation Learning
|
cs.CV
|
Few-shot learning (FSL) has recently been extensively utilized to overcome
the scarcity of training data in domain-specific visual recognition. In
real-world scenarios, environmental factors such as complex backgrounds,
varying lighting conditions, long-distance shooting, and moving targets often
cause test images to exhibit numerous incomplete targets or noise disruptions.
However, current research on evaluation datasets and methodologies has largely
ignored the concept of "environmental robustness", which refers to maintaining
consistent performance in complex and diverse physical environments. This
neglect has led to a notable decline in the performance of FSL models during
practical testing compared to their training performance. To bridge this gap,
we introduce a new real-world multi-domain few-shot learning (RD-FSL)
benchmark, which includes four domains and six evaluation datasets. The test
images in this benchmark feature various challenging elements, such as
camouflaged objects, small targets, and blurriness. Our evaluation experiments
reveal that existing methods struggle to utilize training images effectively to
generate accurate feature representations for challenging test images. To
address this problem, we propose a novel conditional representation learning
network (CRLNet) that integrates the interactions between training and testing
images as conditional information in their respective representation processes.
The main goal is to reduce intra-class variance or enhance inter-class variance
at the feature representation level. Finally, comparative experiments reveal
that CRLNet surpasses the current state-of-the-art methods, achieving
performance improvements ranging from 6.83% to 16.98% across diverse settings
and backbones. The source code and dataset are available at
https://github.com/guoqianyu-alberta/Conditional-Representation-Learning.
|
2502.01184
|
FragmentNet: Adaptive Graph Fragmentation for Graph-to-Sequence
Molecular Representation Learning
|
cs.LG cs.AI physics.chem-ph q-bio.QM
|
Molecular property prediction uses molecular structure to infer chemical
properties. Chemically interpretable representations that capture meaningful
intramolecular interactions enhance the usability and effectiveness of these
predictions. However, existing methods often rely on atom-based or rule-based
fragment tokenization, which can be chemically suboptimal and lack scalability.
We introduce FragmentNet, a graph-to-sequence foundation model with an
adaptive, learned tokenizer that decomposes molecular graphs into chemically
valid fragments while preserving structural connectivity. FragmentNet
integrates VQVAE-GCN for hierarchical fragment embeddings, spatial positional
encodings for graph serialization, global molecular descriptors, and a
transformer. Pre-trained with Masked Fragment Modeling and fine-tuned on
MoleculeNet tasks, FragmentNet outperforms models with similarly scaled
architectures and datasets while rivaling larger state-of-the-art models
requiring significantly more resources. This novel framework enables adaptive
decomposition, serialization, and reconstruction of molecular graphs,
facilitating fragment-based editing and visualization of property trends in
learned embeddings - a powerful tool for molecular design and optimization.
|
2502.01185
|
Deep Active Speech Cancellation with Multi-Band Mamba Network
|
cs.SD cs.AI cs.LG eess.AS eess.SP
|
We present a novel deep learning network for Active Speech Cancellation
(ASC), advancing beyond Active Noise Cancellation (ANC) methods by effectively
canceling both noise and speech signals. The proposed Multi-Band Mamba
architecture segments input audio into distinct frequency bands, enabling
precise anti-signal generation and improved phase alignment across frequencies.
Additionally, we introduce an optimization-driven loss function that provides
near-optimal supervisory signals for anti-signal generation. Experimental
results demonstrate substantial performance gains, achieving up to 7.2 dB
improvement in ANC scenarios and 6.2 dB in ASC, significantly outperforming
existing methods. Audio samples are available at
https://mishalydev.github.io/DeepASC-Demo
|
2502.01186
|
A High-Accuracy SSIM-based Scoring System for Coin Die Link
Identification
|
cs.CV
|
The analyses of ancient coins, and especially the identification of those
struck with the same die, provides invaluable information for archaeologists
and historians. Nowadays, these die links are identified manually, which makes
the process laborious, if not impossible when large hoards are discovered, as
the number of comparisons becomes too large. This study introduces advances that
promise to streamline and enhance archaeological coin analysis. Our
contributions include: 1) First publicly accessible labeled dataset of coin
pictures (329 images) for die link detection, facilitating method benchmarking;
2) Novel SSIM-based scoring method for rapid and accurate discrimination of
coin pairs, outperforming current techniques used in this research field; 3)
Evaluation of clustering techniques using our score, demonstrating near-perfect
die link identification. We provide datasets, to foster future research and the
development of even more powerful tools for archaeology, and more particularly
for numismatics.
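As a hedged illustration of SSIM-based scoring, the snippet below computes a single-window (whole-image) SSIM. The paper's scoring method likely uses local windows and coin-specific preprocessing, so this shows only the underlying formula:

```python
import numpy as np

def global_ssim(x, y, data_range=255.0):
    """Whole-image SSIM between two grayscale images (no sliding
    window), using the standard constants C1 and C2."""
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / (
        (mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

rng = np.random.default_rng(0)
coin_a = rng.uniform(0, 255, (64, 64))        # synthetic stand-in for a coin image
coin_b = coin_a + rng.normal(0, 5, (64, 64))  # same die, slight wear/noise
print(global_ssim(coin_a, coin_a))            # 1.0 for identical images
print(global_ssim(coin_a, coin_b) > 0.9)      # True: same-die pair scores high
```

Ranking all coin pairs by such a score and thresholding (or clustering) is the shape of the die-link pipeline the abstract describes.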
|
2502.01187
|
Skewed Memorization in Large Language Models: Quantification and
Decomposition
|
cs.AI cs.CL cs.LG
|
Memorization in Large Language Models (LLMs) poses privacy and security
risks, as models may unintentionally reproduce sensitive or copyrighted data.
Existing analyses focus on average-case scenarios, often neglecting the highly
skewed distribution of memorization. This paper examines memorization in LLM
supervised fine-tuning (SFT), exploring its relationships with training
duration, dataset size, and inter-sample similarity. By analyzing memorization
probabilities over sequence lengths, we link this skewness to the token
generation process, offering insights for estimating memorization and comparing
it to established metrics. Through theoretical analysis and empirical
evaluation, we provide a comprehensive understanding of memorization behaviors
and propose strategies to detect and mitigate risks, contributing to more
privacy-preserving LLMs.
|
2502.01188
|
FairUDT: Fairness-aware Uplift Decision Trees
|
cs.LG stat.ML
|
Training data used for developing machine learning classifiers can exhibit
biases against specific protected attributes. Such biases typically originate
from historical discrimination or certain underlying patterns that
disproportionately under-represent minority groups, such as those identified by
their gender, religion, or race. In this paper, we propose a novel approach,
FairUDT, a fairness-aware Uplift-based Decision Tree for discrimination
identification. FairUDT demonstrates how the integration of uplift modeling
with decision trees can be adapted to include fair splitting criteria.
Additionally, we introduce a modified leaf relabeling approach for removing
discrimination. We divide our dataset into favored and deprived groups based on
a binary sensitive attribute, with the favored dataset serving as the treatment
group and the deprived dataset as the control group. By applying FairUDT and
our leaf relabeling approach to preprocess three benchmark datasets, we achieve
an acceptable accuracy-discrimination tradeoff. We also show that FairUDT is
inherently interpretable and can be utilized in discrimination detection tasks.
The code for this project is available at https://github.com/ara-25/FairUDT
|
2502.01189
|
Compressed Image Generation with Denoising Diffusion Codebook Models
|
eess.IV cs.AI cs.CV cs.IT eess.SP math.IT
|
We present a novel generative approach based on Denoising Diffusion Models
(DDMs), which produces high-quality image samples along with their losslessly
compressed bit-stream representations. This is obtained by replacing the
standard Gaussian noise sampling in the reverse diffusion with a selection of
noise samples from pre-defined codebooks of fixed iid Gaussian vectors.
Surprisingly, we find that our method, termed Denoising Diffusion Codebook
Model (DDCM), retains sample quality and diversity of standard DDMs, even for
extremely small codebooks. We leverage DDCM and pick the noises from the
codebooks that best match a given image, converting our generative model into a
highly effective lossy image codec achieving state-of-the-art perceptual image
compression results. More generally, by setting other noise selection rules,
we extend our compression method to any conditional image generation task
(e.g., image restoration), where the generated images are produced jointly with
their condensed bit-stream representations. Our work is accompanied by a
mathematical interpretation of the proposed compressed conditional generation
schemes, establishing a connection with score-based approximations of posterior
samplers for the tasks considered.
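The codebook selection idea can be caricatured in a few lines. The matching rule below (maximum inner product with a target direction) is an assumption for illustration, not the paper's actual criterion; the key point is that the chosen indices form the compressed bit-stream:

```python
import numpy as np

def pick_codebook_noise(codebook, target_direction):
    """Pick the codebook noise vector best aligned with a target
    direction (a toy stand-in for DDCM's matching rule). The returned
    index is what gets stored in the bit-stream."""
    return int(np.argmax(codebook @ target_direction))

rng = np.random.default_rng(1)
codebook = rng.standard_normal((16, 64))  # 16 fixed entries -> 4 bits per step
direction = codebook[5] + 0.1 * rng.standard_normal(64)
idx = pick_codebook_noise(codebook, direction)
print(idx)  # almost surely 5, since entry 5 dominates the alignment
```

Repeating such a selection at every reverse-diffusion step yields a sequence of small indices in place of continuous Gaussian noise, which is what makes the representation compressible.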
|
2502.01190
|
Dance recalibration for dance coherency with recurrent convolution block
|
cs.LG cs.AI
|
With the recent advancements in generative AI such as GANs, diffusion models,
and VAEs, the use of generative AI for dance generation has seen significant
progress and received considerable interest. In this study, we propose R-Lodge,
an enhanced version of Lodge. R-Lodge incorporates a recurrent sequential
representation learning method, named Dance Recalibration, into the original
coarse-to-fine long dance generation model. R-Lodge applies Dance Recalibration
through $N$ Dance Recalibration Blocks to address the lack of consistency in
the coarse dance representation of the Lodge model. With this method, each
generated dance motion incorporates information from the previous dance
motions. We evaluate R-Lodge on the FineDance dataset, and the results show
that it enhances the consistency of the generated dance motions.
|
2502.01191
|
Towards Robust and Reliable Concept Representations:
Reliability-Enhanced Concept Embedding Model
|
cs.CV
|
Concept Bottleneck Models (CBMs) aim to enhance interpretability by
predicting human-understandable concepts as intermediates for decision-making.
However, these models often face challenges in ensuring reliable concept
representations, which can propagate to downstream tasks and undermine
robustness, especially under distribution shifts. Two inherent issues
contribute to concept unreliability: sensitivity to concept-irrelevant features
(e.g., background variations) and lack of semantic consistency for the same
concept across different samples. To address these limitations, we propose the
Reliability-Enhanced Concept Embedding Model (RECEM), which introduces a
two-fold strategy: Concept-Level Disentanglement to separate irrelevant
features from concept-relevant information and a Concept Mixup mechanism to
ensure semantic alignment across samples. These mechanisms work together to
improve concept reliability, enabling the model to focus on meaningful object
attributes and generate faithful concept representations. Experimental results
demonstrate that RECEM consistently outperforms existing baselines across
multiple datasets, showing superior performance under background and domain
shifts. These findings highlight the effectiveness of disentanglement and
alignment strategies in enhancing both reliability and robustness in CBMs.
|
2502.01194
|
COVE: COntext and VEracity prediction for out-of-context images
|
cs.CL
|
Images taken out of their context are the most prevalent form of multimodal
misinformation. Debunking them requires (1) providing the true context of the
image and (2) checking the veracity of the image's caption. However, existing
automated fact-checking methods fail to tackle both objectives explicitly. In
this work, we introduce COVE, a new method that predicts first the true COntext
of the image and then uses it to predict the VEracity of the caption. COVE
beats the SOTA context prediction model on all context items, often by more
than five percentage points. It is competitive with the best veracity
prediction models on synthetic data and outperforms them on real-world data,
showing that it is beneficial to combine the two tasks sequentially. Finally,
we conduct a human study that reveals that the predicted context is a reusable
and interpretable artifact to verify new out-of-context captions for the same
image. Our code and data are made available.
|
2502.01197
|
Multi-objective Evolution of Drone Morphology
|
cs.RO
|
The design of multicopter drones has remained almost the same since its
inception. While conventional designs, such as the quadcopter, work well in
many cases, they may not be optimal in specific environments or missions. This
paper revisits rotary drone design by exploring which body morphologies are
optimal for different objectives and constraints. Specifically, an evolutionary
algorithm is used to produce optimal drone morphologies for three objectives:
(1) high thrust-to-weight ratio, (2) high maneuverability, and (3) small size.
To generate a range of optimal drones with performance trade-offs between them,
the non-dominated sorting genetic algorithm II (NSGA-II) is used. A randomly
sampled population of 600 is evolved over 2000 generations. The NSGA-II
algorithm evolved drone bodies that outperform a standard 5-inch 220 mm
wheelbase quadcopter in at least one of the three objectives. The three extrema
in the Pareto front show improvement of 487.8%, 23.5% and 4.8% in
maneuverability, thrust-to-weight ratio and size, respectively. The improvement
in maneuverability can be attributed to the tilt angles of the propellers,
while the increase in thrust-to-weight ratio is primarily due to the higher
number of propellers. The quadcopter is located on the Pareto front for the
three objectives. However, our results also show that other designs can be
better depending on the objectives.
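The selection step at the heart of NSGA-II, extracting the non-dominated set, can be sketched directly. All objectives are assumed to be maximized, and the drone numbers below are made up for illustration:

```python
def pareto_front(points):
    """Indices of non-dominated points, all objectives maximized.
    A point is dominated if some other point is >= in every
    objective and differs in at least one."""
    front = []
    for i, p in enumerate(points):
        dominated = any(
            all(q[k] >= p[k] for k in range(len(p))) and q != p
            for j, q in enumerate(points)
            if j != i
        )
        if not dominated:
            front.append(i)
    return front

# (thrust-to-weight, maneuverability, 1/size) for four hypothetical designs
designs = [(2.0, 1.0, 0.5), (1.5, 3.0, 0.4), (1.0, 0.5, 0.3), (2.0, 1.0, 0.6)]
print(pareto_front(designs))  # [1, 3]: design 0 is dominated by design 3
```

NSGA-II additionally ranks dominated points into successive fronts and uses crowding distance to keep the population spread along the trade-off surface.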
|
2502.01199
|
Nearly Lossless Adaptive Bit Switching
|
cs.CV cs.AI
|
Model quantization is widely applied for compressing and accelerating deep
neural networks (DNNs). However, conventional Quantization-Aware Training (QAT)
focuses on training DNNs with uniform bit-width. The bit-width settings vary
across different hardware and transmission demands, which induces considerable
training and storage costs. Hence, the scheme of one-shot joint training
multiple precisions is proposed to address this issue. Previous works either
store a larger FP32 model to switch between different precision models for
higher accuracy or store a smaller INT8 model but compromise accuracy due to
using shared quantization parameters. In this paper, we introduce the Double
Rounding quantization method, which fully utilizes the quantized representation
range to accomplish nearly lossless bit-switching while reducing storage by
using the highest integer precision instead of full precision. Furthermore, we
observe a competitive interference among different precisions during one-shot
joint training, primarily due to inconsistent gradients of quantization scales
during backward propagation. To tackle this problem, we propose an Adaptive
Learning Rate Scaling (ALRS) technique that dynamically adapts learning rates
for various precisions to optimize the training process. Additionally, we
extend our Double Rounding to one-shot mixed precision training and develop a
Hessian-Aware Stochastic Bit-switching (HASB) strategy. Experimental results on
the ImageNet-1K classification task demonstrate that our methods offer clear
advantages over state-of-the-art one-shot joint QAT in both multi-precision and
mixed-precision settings. We also validate the feasibility of our method on
detection and segmentation tasks, as well as on LLM tasks. Our codes are available at
https://github.com/haiduo/Double-Rounding.
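The bit-switching idea can be sketched numerically: store the weights once at the highest integer precision and derive lower bit-widths by rounding (and clipping) the stored integers again. The functions below are an illustrative guess at the mechanics, not the paper's implementation.

```python
import numpy as np

def quantize(x, bits, scale):
    # Symmetric uniform quantization to signed integers.
    qmax = 2 ** (bits - 1) - 1
    return np.clip(np.round(x / scale), -qmax - 1, qmax)

def switch_bits(q_hi, hi_bits, lo_bits):
    # "Double rounding" sketch: round the stored high-precision integers
    # a second time to reach a lower bit-width, instead of re-quantizing
    # from an FP32 master copy.
    shift = 2 ** (hi_bits - lo_bits)
    lo_max = 2 ** (lo_bits - 1) - 1
    return np.clip(np.round(q_hi / shift), -lo_max - 1, lo_max)
```

Storing only the highest-precision integer tensor and deriving lower bit-widths on demand is what removes the FP32 master copy from deployment.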
|
2502.01201
|
One-to-Normal: Anomaly Personalization for Few-shot Anomaly Detection
|
cs.CV
|
Traditional Anomaly Detection (AD) methods have predominantly relied on
unsupervised learning from extensive normal data. Recent AD methods have
evolved with the advent of large pre-trained vision-language models, enhancing
few-shot anomaly detection capabilities. However, these latest AD methods still
exhibit limitations in accuracy improvement. One contributing factor is their
direct comparison of a query image's features with those of few-shot normal
images. This direct comparison often leads to a loss of precision and
complicates the extension of these techniques to more complex domains--an area
that remains underexplored in a more refined and comprehensive manner. To
address these limitations, we introduce the anomaly personalization method,
which performs a personalized one-to-normal transformation of query images
using an anomaly-free customized generation model, ensuring close alignment
with the normal manifold. Moreover, to further enhance the stability and
robustness of prediction results, we propose a triplet contrastive anomaly
inference strategy, which incorporates a comprehensive comparison between the
query and generated anomaly-free data pool and prompt information. Extensive
evaluations across eleven datasets in three domains demonstrate our model's
effectiveness compared to the latest AD methods. Additionally, our method has
been proven to transfer flexibly to other AD methods, with the generated image
data effectively improving the performance of other AD methods.
|
2502.01203
|
Theoretical Analysis of KL-regularized RLHF with Multiple Reference
Models
|
cs.LG stat.ML
|
Recent methods for aligning large language models (LLMs) with human feedback
predominantly rely on a single reference model, which limits diversity, invites
model overfitting, and underutilizes the wide range of available pre-trained models.
Incorporating multiple reference models has the potential to address these
limitations by broadening perspectives, reducing bias, and leveraging the
strengths of diverse open-source LLMs. However, integrating multiple reference
models into reinforcement learning with human feedback (RLHF) frameworks poses
significant theoretical challenges, particularly in reverse KL-regularization,
where achieving exact solutions has remained an open problem. This paper
presents the first \emph{exact solution} to the multiple reference model
problem in reverse KL-regularized RLHF. We introduce a comprehensive
theoretical framework that includes rigorous statistical analysis and provides
sample complexity guarantees. Additionally, we extend our analysis to forward
KL-regularized RLHF, offering new insights into sample complexity requirements
in multiple reference scenarios. Our contributions lay the foundation for more
advanced and adaptable LLM alignment techniques, enabling the effective use of
multiple reference models. This work paves the way for developing alignment
frameworks that are both theoretically sound and better suited to the
challenges of modern AI ecosystems.
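For context, with a single reference model the reverse-KL-regularized objective has a well-known closed-form maximizer; assuming the multi-reference penalty is a weighted sum of reverse KL terms (one natural formulation, which may differ from the paper's exact setup), the same derivation yields a geometric mixture of the references:

```latex
% Single reference:
\pi^{*}(y \mid x) \propto \pi_{\mathrm{ref}}(y \mid x)\,
  \exp\!\left(\tfrac{1}{\beta}\, r(x, y)\right)

% Weighted sum of reverse KLs to references \pi_1, \dots, \pi_K,
% with weights \lambda_i \ge 0, \ \sum_i \lambda_i = 1:
\max_{\pi}\; \mathbb{E}_{\pi}[r(x, y)]
  - \beta \sum_{i=1}^{K} \lambda_i\, \mathrm{KL}(\pi \,\|\, \pi_i)
\;\Longrightarrow\;
\pi^{*}(y \mid x) \propto
  \Big(\prod_{i=1}^{K} \pi_i(y \mid x)^{\lambda_i}\Big)
  \exp\!\left(\tfrac{1}{\beta}\, r(x, y)\right)
```

The second line follows because the weighted sum of reverse KLs equals, up to a constant, a single reverse KL to the normalized geometric mixture of the references.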
|
2502.01204
|
Land Surface Temperature Super-Resolution with a Scale-Invariance-Free
Neural Approach: Application to MODIS
|
cs.LG cs.CV
|
Due to the trade-off between the temporal and spatial resolution of thermal
spaceborne sensors, super-resolution methods have been developed to provide
fine-scale Land Surface Temperature (LST) maps. Most of them are trained at low
resolution but applied at fine resolution, and so they require a
scale-invariance hypothesis that does not always hold. The main contribution of
this work is the introduction of a Scale-Invariance-Free approach for training
Neural Network (NN) models, and the implementation of two NN models, called
Scale-Invariance-Free Convolutional Neural Network for Super-Resolution
(SIF-CNN-SR), for the super-resolution of MODIS LST products. The
Scale-Invariance-Free approach consists of training the models to provide LST
maps at high spatial resolution that recover the initial LST when degraded to
low resolution and that contain fine-scale textures informed by the
high-resolution NDVI. The second contribution of this work is the release of a
test database with ASTER LST images concomitant with MODIS ones that can be
used for the evaluation of super-resolution algorithms. We compare the two
proposed models, SIF-CNN-SR1 and SIF-CNN-SR2, with four state-of-the-art
methods, Bicubic, DMS, ATPRK, Tsharp, and a CNN sharing the same architecture
as SIF-CNN-SR but trained under the scale-invariance
hypothesis. We show that SIF-CNN-SR1 outperforms the state-of-the-art methods
and the other two CNN models as evaluated with LPIPS and Fourier space metrics
focusing on the analysis of textures. These results and the available
ASTER-MODIS database for evaluation are promising for future studies on
super-resolution of LST.
|
2502.01205
|
OCR Error Post-Correction with LLMs in Historical Documents: No Free
Lunches
|
cs.CL
|
Optical Character Recognition (OCR) systems often introduce errors when
transcribing historical documents, leaving room for post-correction to improve
text quality. This study evaluates the use of open-weight LLMs for OCR error
correction in historical English and Finnish datasets. We explore various
strategies, including parameter optimization, quantization, segment length
effects, and text continuation methods. Our results demonstrate that while
modern LLMs show promise in reducing character error rates (CER) in English, a
practically useful performance for Finnish was not reached. Our findings
highlight the potential and limitations of LLMs in scaling OCR post-correction
for large historical corpora.
|
2502.01207
|
Solgenia -- A Test Vessel Toward Energy-Efficient Autonomous Water Taxi
Applications
|
cs.RO cs.SY eess.SY
|
Autonomous surface vessels are a promising building block of the future's
transport sector and are investigated by research groups worldwide. This paper
presents a comprehensive and systematic overview of the autonomous research
vessel Solgenia including the latest investigations and recently presented
methods that contributed to the fields of autonomous systems, applied numerical
optimization, nonlinear model predictive control,
multi-extended-object-tracking, computer vision, and collision avoidance. These
are considered to be the main components of autonomous water taxi applications.
Autonomous water taxis have the potential to transform the traffic in cities
close to the water into a more efficient, sustainable, and flexible future
state. Regarding this transformation, the test platform Solgenia offers an
opportunity to gain new insights by investigating novel methods in real-world
experiments. An established test platform will strongly reduce the effort
required for real-world experiments in the future.
|
2502.01208
|
Almost Surely Safe Alignment of Large Language Models at Inference-Time
|
cs.LG cs.CL
|
Even highly capable large language models (LLMs) can produce biased or unsafe
responses, and alignment techniques, such as RLHF, aimed at mitigating this
issue, are expensive and prone to overfitting as they retrain the LLM. This
paper introduces a novel inference-time alignment approach that ensures LLMs
generate safe responses almost surely, i.e., with a probability approaching
one. We achieve this by framing the safe generation of inference-time responses
as a constrained Markov decision process within the LLM's latent space.
Crucially, we augment the state with a safety signal that tracks the evolution of
safety constraints, enabling us to demonstrate formal safety guarantees upon solving
the MDP in the latent space. Building on this foundation, we propose
InferenceGuard, a practical implementation that safely aligns LLMs without
modifying the model weights. Empirically, we demonstrate InferenceGuard
effectively balances safety and task performance, outperforming existing
inference-time alignment methods in generating safe and aligned responses.
|
2502.01210
|
Modelling change in neural dynamics during phonetic accommodation
|
cs.CL
|
Short-term phonetic accommodation is a fundamental driver behind accent
change, but how does real-time input from another speaker's voice shape the
speech planning representations of an interlocutor? We advance a computational
model of change in phonetic representations during phonetic accommodation,
grounded in dynamic neural field equations for movement planning and memory
dynamics. We test the model's ability to capture empirical patterns from an
experimental study where speakers shadowed a model talker with a different
accent from their own. The experimental data shows vowel-specific degrees of
convergence during shadowing, followed by return to baseline (or minor
divergence) post-shadowing. The model can reproduce these phenomena by
modulating the magnitude of inhibitory memory dynamics, which may reflect
resistance to accommodation due to phonological and/or sociolinguistic
pressures. We discuss the implications of these results for the relation
between short-term phonetic accommodation and longer-term patterns of sound
change.
|
2502.01211
|
Privilege Scores
|
cs.LG stat.ML
|
Bias-transforming methods of fairness-aware machine learning aim to correct a
non-neutral status quo with respect to a protected attribute (PA). Current
methods, however, lack an explicit formulation of what drives non-neutrality.
We introduce privilege scores (PS) to measure PA-related privilege by comparing
the model predictions in the real world with those in a fair world in which the
influence of the PA is removed. At the individual level, PS can identify
individuals who qualify for affirmative action; at the global level, PS can
inform bias-transforming policies. After presenting estimation methods for PS,
we propose privilege score contributions (PSCs), an interpretation method that
attributes the origin of privilege to mediating features and direct effects. We
provide confidence intervals for both PS and PSCs. Experiments on simulated and
real-world data demonstrate the broad applicability of our methods and provide
novel insights into gender and racial privilege in mortgage and college
admissions applications.
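The core definition lends itself to a direct sketch: a privilege score is the gap between a model's real-world prediction and its prediction in a fair world with the PA's influence removed (how the fair-world model is obtained is the substance of the paper; here it is simply assumed given).

```python
import numpy as np

def privilege_scores(pred_real, pred_fair):
    # PS_i > 0: individual i is advantaged by the status quo;
    # PS_i < 0: individual i would gain in the fair world.
    return np.asarray(pred_real, float) - np.asarray(pred_fair, float)

def group_privilege(ps, pa):
    # Global-level view: mean privilege score per protected-attribute group.
    pa = np.asarray(pa)
    return {g: float(np.mean(ps[pa == g])) for g in np.unique(pa)}
```

The individual scores feed the affirmative-action use case; the per-group means are the global, bias-transforming view.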
|
2502.01216
|
Exploring Few-Shot Defect Segmentation in General Industrial Scenarios
with Metric Learning and Vision Foundation Models
|
cs.CV
|
Industrial defect segmentation is critical for manufacturing quality control.
Due to the scarcity of training defect samples, few-shot semantic segmentation
(FSS) holds significant value in this field. However, existing studies mostly
apply FSS to tackle defects on simple textures, without considering more
diverse scenarios. This paper aims to address this gap by exploring FSS in
broader industrial products with various defect types. To this end, we
contribute a new real-world dataset and reorganize some existing datasets to
build a more comprehensive few-shot defect segmentation (FDS) benchmark. On
this benchmark, we thoroughly investigate metric learning-based FSS methods,
including those based on meta-learning and those based on Vision Foundation
Models (VFMs). We observe that existing meta-learning-based methods are
generally not well-suited for this task, while VFMs hold great potential. We
further systematically study the applicability of various VFMs in this task,
involving two paradigms: feature matching and the use of Segment Anything (SAM)
models. We propose a novel efficient FDS method based on feature matching.
Meanwhile, we find that SAM2 is particularly effective for addressing FDS
through its video track mode. The contributed dataset and code will be
available at: https://github.com/liutongkun/GFDS.
|
2502.01218
|
Provable Ordering and Continuity in Vision-Language Pretraining for
Generalizable Embodied Agents
|
cs.RO cs.AI cs.CV cs.LG
|
Pre-training vision-language representations on human action videos has
emerged as a promising approach to reduce reliance on large-scale expert
demonstrations for training embodied agents. However, prior methods often
employ time contrastive learning based on goal-reaching heuristics,
progressively aligning language instructions from the initial to the final
frame. This overemphasis on future frames can result in erroneous
vision-language associations, as actions may terminate early or include
irrelevant moments at the end. To address this issue, we propose Action
Temporal Coherence Learning (AcTOL) to learn ordered and continuous
vision-language representations without rigid goal-based constraints. AcTOL
treats a video as a continuous trajectory where it (1) contrasts semantic
differences between frames to reflect their natural ordering, and (2) imposes a
local Brownian bridge constraint to ensure smooth transitions across
intermediate frames. Extensive imitation learning experiments across varying
numbers of demonstrations show that the pretrained features significantly
enhance downstream manipulation tasks by up to 49% with high robustness to
different linguistic styles of instructions, offering a viable pathway toward
generalized embodied agents. The source code is included in the supplementary
material for reference.
|
2502.01220
|
Language Models Struggle to Achieve a Consistent Temporal Representation
of Facts
|
cs.CL cs.LG
|
Language Models (LMs) have shown substantial improvements in handling factual
knowledge, yet their capability to consistently represent temporal facts, which
are valid only within specific timeframes, remains underexplored. To
investigate this, we introduce TimeStress, a novel dataset comprising 521K
statements on 2003 of the most popular temporal facts in Wikidata. Each
statement contextualizes a fact with correct and incorrect dates across three
precisions (Day, Month, Year). This setup allows us to evaluate LMs' ability to
discern between correct and incorrect temporal statements based on their
probability of being generated. We assess 18 LMs across various architectures
using two metrics: the win rate, indicating how often correct dates outperform
incorrect ones, and robustness, reflecting consistent performance across all
dates. Our findings reveal that while some LMs achieve a win rate exceeding
80\%, robustness remains low, with the best model achieving only 6\%.
Furthermore, robust knowledge at one date precision does not reliably transfer
to others, highlighting a significant generalization gap. These results
underscore the struggle of LMs to maintain a consistent temporal
representation, supporting their limitations as reliable sources of temporal
knowledge. We provide all data and code for further research.
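The two metrics as described can be sketched from per-statement log-probabilities (the exact definitions in the paper may differ in detail):

```python
import numpy as np

def win_rate(correct_lp, incorrect_lps):
    # Fraction of incorrect-date statements that the LM scores
    # strictly below the correctly dated statement.
    return float(np.mean(np.asarray(incorrect_lps) < correct_lp))

def robustness(facts):
    # A fact counts as robust only if the correct date wins against
    # every incorrect date; robustness is the fraction of such facts.
    return float(np.mean([win_rate(c, inc) == 1.0 for c, inc in facts]))
```

A model can score a high win rate yet near-zero robustness, which is exactly the gap the abstract reports (above 80% versus 6%).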
|
2502.01225
|
The dark deep side of DeepSeek: Fine-tuning attacks against the safety
alignment of CoT-enabled models
|
cs.CR cs.AI
|
Large language models are typically trained on vast amounts of data during
the pre-training phase, which may include some potentially harmful information.
Fine-tuning attacks can exploit this by prompting the model to reveal such
behaviours, leading to the generation of harmful content. In this paper, we
focus on investigating the performance of the Chain of Thought based reasoning
model, DeepSeek, when subjected to fine-tuning attacks. Specifically, we
explore how fine-tuning manipulates the model's output, exacerbating the
harmfulness of its responses while examining the interaction between the Chain
of Thought reasoning and adversarial inputs. Through this study, we aim to shed
light on the vulnerability of Chain of Thought enabled models to fine-tuning
attacks and the implications for their safety and ethical deployment.
|
2502.01226
|
Efficient Prior Selection in Gaussian Process Bandits with Thompson
Sampling
|
cs.LG stat.ML
|
Gaussian process (GP) bandits provide a powerful framework for solving
blackbox optimization of unknown functions. The characteristics of the unknown
function depend heavily on the assumed GP prior. Most work in the literature
assumes that this prior is known, but in practice this seldom holds. Instead,
practitioners often rely on maximum likelihood estimation to select the
hyperparameters of the prior, which lacks theoretical guarantees. In this
work, we propose two algorithms for joint prior selection and regret
minimization in GP bandits based on GP Thompson sampling (GP-TS):
Prior-Elimination GP-TS (PE-GP-TS) and HyperPrior GP-TS (HP-GP-TS). We
theoretically analyze the algorithms and establish upper bounds for their
respective regret. In addition, we demonstrate the effectiveness of our
algorithms compared to the alternatives through experiments with synthetic and
real-world data.
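A single GP Thompson-sampling round can be sketched with a plain NumPy GP posterior: draw one posterior sample over a candidate grid and play its argmax. The lengthscale argument is where prior-selection machinery such as the paper's would plug in; everything below is an illustrative assumption, not the proposed algorithms.

```python
import numpy as np

def rbf(a, b, ls):
    # Squared-exponential kernel on 1-D inputs.
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ls) ** 2)

def gp_ts_step(X, y, cand, ls, noise=1e-6, rng=None):
    # Posterior of a zero-mean GP at the candidate points, then one
    # Thompson sample; the chosen arm is the sample's maximizer.
    rng = rng or np.random.default_rng(0)
    K = rbf(X, X, ls) + noise * np.eye(len(X))
    Ks = rbf(cand, X, ls)
    mu = Ks @ np.linalg.solve(K, y)
    cov = rbf(cand, cand, ls) - Ks @ np.linalg.solve(K, Ks.T)
    sample = rng.multivariate_normal(mu, cov + 1e-9 * np.eye(len(cand)))
    return cand[np.argmax(sample)]
```

Randomness in the sample is what trades off exploration and exploitation; misspecifying `ls` is exactly the prior-selection problem the abstract targets.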
|
2502.01228
|
Soft Robot Localization Using Distributed Miniaturized Time-of-Flight
Sensors
|
cs.RO
|
Thanks to their compliance and adaptability, soft robots can be deployed to
perform tasks in constrained or complex environments. In these scenarios,
spatial awareness of the surroundings and the ability to localize the robot
within the environment represent key aspects. While state-of-the-art
localization techniques are well-explored in autonomous vehicles and walking
robots, they rely on data retrieved with lidar or depth sensors which are bulky
and thus difficult to integrate into small soft robots. Recent developments in
miniaturized Time of Flight (ToF) sensors show promise as a small and
lightweight alternative to bulky sensors. These sensors can be potentially
distributed on the soft robot body, providing multi-point depth data of the
surroundings. However, the small spatial resolution and the noisy measurements
pose a challenge to the success of state-of-the-art localization algorithms,
which are generally applied to much denser and more reliable measurements. In
this paper, we employ distributed VL53L5CX ToF sensors, mounting them on the tip
of a soft robot, and investigate their usage for self-localization tasks.
Experimental results show that the soft robot can effectively be localized with
respect to a known map, with an error comparable to the uncertainty of the
measurements provided by the miniaturized ToF sensors.
|
2502.01229
|
How Good are Learned Cost Models, Really? Insights from Query
Optimization Tasks
|
cs.DB
|
Traditionally, query optimizers rely on cost models to choose the best
execution plan from several candidates, making precise cost estimates critical
for efficient query execution. In recent years, cost models based on machine
learning have been proposed to overcome the weaknesses of traditional cost
models. While these models have been shown to provide better prediction
accuracy, only limited efforts have been made to investigate how well Learned
Cost Models (LCMs) actually perform in query optimization and how they affect
overall query performance. In this paper, we address this through a systematic study
evaluating LCMs on three of the core query optimization tasks: join ordering,
access path selection, and physical operator selection. In our study, we
compare seven state-of-the-art LCMs to a traditional cost model and,
surprisingly, find that the traditional model often still outperforms LCMs in
these tasks. We conclude by highlighting major takeaways and recommendations to
guide future research toward making LCMs more effective for query optimization.
|
2502.01231
|
Societal Attitudes Toward Service Robots: Adore, Abhor, Ignore, or
Unsure?
|
cs.RO
|
Societal or population-level attitudes are aggregated patterns of different
individual attitudes, representing collective general predispositions. As
service robots become ubiquitous, understanding attitudes towards them at the
population (vs. individual) level enables firms to expand robot services to a
broad (vs. niche) market. Targeting population-level attitudes would benefit
service firms because: (1) they are more persistent, thus, stronger predictors
of behavioral patterns and (2) this approach is less reliant on personal data,
whereas individualized services are vulnerable to AI-related privacy risks. As
for service theory, ignoring broad unobserved differences in attitudes produces
biased conclusions, and our systematic review of previous research highlights a
poor understanding of potential heterogeneity in attitudes toward service
robots. We present five diverse studies (S1-S5), utilizing multinational and
"real world" data (Ntotal = 89,541; years: 2012-2024). Results reveal a stable
structure comprising four distinct attitude profiles (S1-S5): positive
("adore"), negative ("abhor"), indifferent ("ignore"), and ambivalent
("unsure"). The psychological need for interacting with service staff, and for
autonomy and relatedness in technology use, function as attitude profile
antecedents (S2). Importantly, the attitude profiles predict differences in
post-interaction discomfort and anxiety (S3), satisfaction ratings and service
evaluations (S4), and perceived sociability and uncanniness based on a robot's
humanlikeness (S5).
|
2502.01232
|
Efficient rule induction by ignoring pointless rules
|
cs.AI
|
The goal of inductive logic programming (ILP) is to find a set of logical
rules that generalises training examples and background knowledge. We introduce
an ILP approach that identifies pointless rules. A rule is pointless if it
contains a redundant literal or cannot discriminate against negative examples.
We show that ignoring pointless rules allows an ILP system to soundly prune the
hypothesis space. Our experiments on multiple domains, including visual
reasoning and game playing, show that our approach can reduce learning times by
99% whilst maintaining predictive accuracies.
|
2502.01235
|
One-step full gradient suffices for low-rank fine-tuning, provably and
efficiently
|
stat.ML cs.AI cs.LG
|
This paper studies how to improve the performance of Low-Rank Adaption (LoRA)
as guided by our theoretical analysis. Our first set of theoretical results
show that for random initialization and linear models, \textit{i)} LoRA will
align with a certain singular subspace of the one-step gradient of full
fine-tuning; \textit{ii)} preconditioners improve convergence in the high-rank
case. These insights motivate us to focus on preconditioned LoRA using a
specific spectral initialization strategy for aligning with certain subspaces.
For both linear and nonlinear models, we prove that alignment and
generalization guarantees can be directly achieved at initialization, and that the
subsequent linear convergence can also be established. Our analysis leads to the
\emph{LoRA-One} algorithm (using \emph{One}-step gradient and preconditioning),
a theoretically grounded algorithm that achieves significant empirical
improvement over vanilla LoRA and its variants on several benchmarks. Our
theoretical analysis, based on decoupling the learning dynamics and
characterizing how spectral initialization contributes to feature learning, may
be of independent interest for understanding matrix sensing and deep learning
theory. The source code can be found at
https://github.com/YuanheZ/LoRA-One.
|
2502.01236
|
Eliciting Language Model Behaviors with Investigator Agents
|
cs.LG cs.AI cs.CL
|
Language models exhibit complex, diverse behaviors when prompted with
free-form text, making it difficult to characterize the space of possible
outputs. We study the problem of behavior elicitation, where the goal is to
search for prompts that induce specific target behaviors (e.g., hallucinations
or harmful responses) from a target language model. To navigate the
exponentially large space of possible prompts, we train investigator models to
map randomly-chosen target behaviors to a diverse distribution of outputs that
elicit them, similar to amortized Bayesian inference. We do this through
supervised fine-tuning, reinforcement learning via DPO, and a novel Frank-Wolfe
training objective to iteratively discover diverse prompting strategies. Our
investigator models surface a variety of effective and human-interpretable
prompts leading to jailbreaks, hallucinations, and open-ended aberrant
behaviors, obtaining a 100% attack success rate on a subset of AdvBench
(Harmful Behaviors) and an 85% hallucination rate.
|
2502.01237
|
The Differences Between Direct Alignment Algorithms are a Blur
|
cs.LG
|
Direct Alignment Algorithms (DAAs) simplify language model alignment by
replacing reinforcement learning (RL) and reward modeling (RM) in Reinforcement
Learning from Human Feedback (RLHF) with direct policy optimization. DAAs can
be classified by their ranking losses (pairwise vs. pointwise), by the rewards
used in those losses (e.g., likelihood ratios of policy and reference policy,
or odds ratios), or by whether a Supervised Fine-Tuning (SFT) phase is required
(two-stage vs. one-stage). We first show that one-stage methods underperform
two-stage methods. To address this, we incorporate an explicit SFT phase and
introduce the $\beta$ parameter, controlling the strength of preference
optimization, into single-stage ORPO and ASFT. These modifications improve
their performance in Alpaca Eval 2 by +$3.46$ (ORPO) and +$8.27$ (ASFT),
matching two-stage methods like DPO. Further analysis reveals that the key
factor is whether the approach uses pairwise or pointwise objectives, rather
than the specific implicit reward or loss function. These results highlight the
importance of careful evaluation to avoid premature claims of performance gains
or overall superiority in alignment algorithms.
|
2502.01242
|
Neural Cellular Automata for Decentralized Sensing using a Soft
Inductive Sensor Array for Distributed Manipulator Systems
|
cs.RO cs.LG
|
In Distributed Manipulator Systems (DMS), decentralization is a highly
desirable property as it promotes robustness and facilitates scalability by
distributing computational burden and eliminating singular points of failure.
However, current DMS typically utilize a centralized approach to sensing, such
as single-camera computer vision systems. This centralization poses a risk to
system reliability and offers a significant limiting factor to system size. In
this work, we introduce a decentralized approach to sensing in
Distributed Manipulator Systems using Neural Cellular Automata (NCA).
Demonstrating decentralized sensing in a hardware implementation, we present
a novel inductive sensor board designed for distributed sensing and evaluate
its ability to estimate global object properties, such as the geometric center,
through local interactions and computations. Experiments demonstrate that
NCA-based sensing networks estimate object position with an accuracy of 0.24
times the inter-sensor distance. They maintain resilience under sensor faults and
noise, and scale seamlessly across varying network sizes. These findings
underscore the potential of local, decentralized computations to enable
scalable, fault-tolerant, and noise-resilient object property estimation in DMS.
|
2502.01243
|
OphthBench: A Comprehensive Benchmark for Evaluating Large Language
Models in Chinese Ophthalmology
|
cs.CL cs.AI
|
Large language models (LLMs) have shown significant promise across various
medical applications, with ophthalmology being a notable area of focus. Many
ophthalmic tasks have shown substantial improvement through the integration of
LLMs. However, before these models can be widely adopted in clinical practice,
evaluating their capabilities and identifying their limitations is crucial. To
address this research gap and support the real-world application of LLMs, we
introduce OphthBench, a specialized benchmark designed to assess LLM
performance within the context of Chinese ophthalmic practices. This benchmark
systematically divides a typical ophthalmic clinical workflow into five key
scenarios: Education, Triage, Diagnosis, Treatment, and Prognosis. For each
scenario, we developed multiple tasks featuring diverse question types,
resulting in a comprehensive benchmark comprising 9 tasks and 591 questions.
This comprehensive framework allows for a thorough assessment of LLMs'
capabilities and provides insights into their practical application in Chinese
ophthalmology. Using this benchmark, we conducted extensive experiments and
analyzed the results from 39 popular LLMs. Our evaluation highlights the
current gap between LLM development and its practical utility in clinical
settings, providing a clear direction for future advancements. By bridging this
gap, we aim to unlock the potential of LLMs and advance their development in
ophthalmology.
|
2502.01247
|
Learnable polynomial, trigonometric, and tropical activations
|
cs.LG cs.AI cs.CL cs.CV math.AG
|
This paper investigates scalable neural networks with learnable activation
functions based on orthogonal function bases and tropical polynomials,
targeting ImageNet-1K classification and next token prediction on OpenWebText.
Traditional activations, such as ReLU, are static. In contrast, learnable
activations enable the network to adapt dynamically during training. However,
stability issues, such as vanishing or exploding gradients, arise with improper
variance management in deeper networks. To remedy this, we propose an
initialization scheme that single-handedly preserves unit variance in
transformers and convolutional networks, ensuring stable gradient flow even in
deep architectures. Extensive experiments demonstrate that networks with
Hermite, Fourier, and Tropical-based learnable activations significantly
improve over GPT-2 and ConvNeXt baselines in terms of accuracy and perplexity on
both training and test sets, highlighting the viability of learnable activations in
large-scale tasks. The activation functions developed here are the subject of a
library coded entirely in pure PyTorch: torchortho, available at
https://github.com/K-H-Ismail/torchortho.
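The Hermite variant can be sketched without any deep-learning framework: the activation is a linear combination of Hermite polynomials with trainable coefficients. The class below is a minimal NumPy illustration initialized to the identity (He_1(x) = x); it does not reproduce the torchortho code or its variance-preserving initialization.

```python
import numpy as np

class HermiteActivation:
    # Learnable activation f(x) = sum_n c_n He_n(x), with trainable
    # coefficients c_n (sketch; not the torchortho implementation).
    def __init__(self, degree=4):
        self.coef = np.zeros(degree + 1)
        self.coef[1] = 1.0  # start at the identity, He_1(x) = x

    def basis(self, x):
        # Probabilists' Hermite recurrence: He_{n+1} = x He_n - n He_{n-1}.
        H = [np.ones_like(x), x]
        for n in range(1, len(self.coef) - 1):
            H.append(x * H[n] - n * H[n - 1])
        return np.stack(H, axis=-1)

    def __call__(self, x):
        return self.basis(x) @ self.coef
```

In a network, `coef` would be a trainable parameter per layer; initializing at the identity keeps the pre-activation variance untouched at step zero, in the spirit of the stability argument above.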
|
2502.01248
|
Computational modelling of cancer nanomedicine: Integrating hyperthermia
treatment into a multiphase porous-media tumour model
|
cs.CE
|
Heat-based cancer treatment, so-called hyperthermia, can be used to destroy
tumour cells directly or to make them more susceptible to chemotherapy or
radiation therapy. To apply heat locally, iron oxide nanoparticles are injected
into the bloodstream and accumulate at the tumour site, where they generate
heat when exposed to an alternating magnetic field. However, the temperature
must be precisely controlled to achieve therapeutic benefits while avoiding
damage to healthy tissue. We therefore present a computational model for
nanoparticle-mediated hyperthermia treatment fully integrated into a multiphase
porous-media model of the tumour and its microenvironment. We study how the
temperature depends on the amount of nanoparticles accumulated in the tumour
area and the specific absorption rate of the nanoparticles. Our results show
that host tissue surrounding the tumour is also exposed to considerable doses
of heat due to the high thermal conductivity of the tissue, which may cause
pain or even unnecessary irreversible damage. Further, we include a lumped and
a discrete model for the cooling effect of blood perfusion. Using a discrete
model of a realistic microvasculature reveals that the small capillaries do not
have a significant cooling effect during hyperthermia treatment and that the
commonly used lumped model based on Pennes' bioheat equation overestimates the
effect: within the specific conditions analysed, the difference between lumped
and discrete approaches is approximately 0.75{\deg}C, which could influence
the therapeutic intervention outcome. Such a comprehensive computational model,
as presented here, can provide insights into the optimal treatment parameters
for nanoparticle-mediated hyperthermia and can be used to design more efficient
treatment strategies.
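For reference, the lumped perfusion model mentioned above, Pennes' bioheat equation, takes the following standard form (conventional symbols; the specific parameter values used in the paper are not reproduced here):

```latex
\rho c \frac{\partial T}{\partial t}
  = \nabla \cdot \left( k \nabla T \right)
  + \rho_b c_b \omega_b \left( T_a - T \right)
  + Q_{\mathrm{met}} + Q_{\mathrm{ext}}
```

where $T$ is the tissue temperature, $\rho c$ and $k$ are the tissue's volumetric heat capacity and thermal conductivity, $\omega_b$ is the blood perfusion rate, $T_a$ the arterial blood temperature, and $Q_{\mathrm{met}}$, $Q_{\mathrm{ext}}$ the metabolic and external (here, nanoparticle) heat sources. The perfusion term is the lumped cooling effect that the paper compares against a discrete microvasculature model.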
|
2502.01250
|
Beyond Win Rates: A Clustering-Based Approach to Character Balance
Analysis in Team-Based Games
|
cs.LG
|
Character diversity in competitive games, while enriching gameplay, often
introduces balance challenges that can negatively impact player experience and
strategic depth. Traditional balance assessments rely on aggregate metrics like
win rates and pick rates, which offer limited insight into the intricate
dynamics of team-based games and nuanced character roles. This paper proposes a
novel clustering-based methodology to analyze character balance, leveraging
in-game data from Valorant to account for team composition influences and
reveal latent character roles. By applying hierarchical agglomerative
clustering with Jensen-Shannon Divergence to professional match data from the
Valorant Champions Tour 2022, our approach identifies distinct clusters of
agents exhibiting similar co-occurrence patterns within team compositions. This
method not only complements existing quantitative metrics but also provides a
more holistic and interpretable perspective on character synergies and
potential imbalances, offering game developers a valuable tool for informed and
context-aware balance adjustments.
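The clustering step described above can be sketched as follows, assuming per-agent co-occurrence distributions are already available (the toy numbers below are illustrative, not Valorant data):

```python
import numpy as np
from scipy.spatial.distance import jensenshannon
from scipy.cluster.hierarchy import linkage, fcluster

# Hypothetical co-occurrence distributions: each row is one agent's
# probability distribution over team-composition features (rows sum to 1).
p = np.array([
    [0.70, 0.20, 0.05, 0.05],
    [0.65, 0.25, 0.05, 0.05],
    [0.05, 0.05, 0.20, 0.70],
    [0.05, 0.05, 0.25, 0.65],
])

n = len(p)
# Condensed vector of pairwise Jensen-Shannon distances.
d = [jensenshannon(p[i], p[j]) for i in range(n) for j in range(i + 1, n)]

Z = linkage(d, method="average")             # agglomerative clustering
labels = fcluster(Z, t=2, criterion="maxclust")
print(labels)  # agents with similar co-occurrence patterns share a label
```

Agents landing in the same cluster play similar compositional roles, which is the kind of latent-role signal inspected alongside win and pick rates.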
|
2502.01253
|
Explainability-Driven Quality Assessment for Rule-Based Systems
|
cs.AI cs.LO
|
This paper introduces an explanation framework designed to enhance the
quality of rules in knowledge-based reasoning systems based on dataset-driven
insights. The traditional method for rule induction from data typically
requires labor-intensive labeling and data-driven learning. This framework
provides an alternative and instead allows for the data-driven refinement of
existing rules: it generates explanations of rule inferences and leverages
human interpretation to refine rules. It employs four complementary
explanation types: trace-based, contextual, contrastive, and counterfactual,
providing diverse perspectives for debugging, validating, and ultimately
refining rules. By embedding explainability into the reasoning architecture,
the framework enables knowledge engineers to address inconsistencies, optimize
thresholds, and ensure fairness, transparency, and interpretability in
decision-making processes. Its practicality is demonstrated through a use case
in finance.
|
2502.01256
|
Soft is Safe: Human-Robot Interaction for Soft Robots
|
cs.RO
|
With robots becoming increasingly present in society, the need to interact
with them is growing. The field of Human-Robot Interaction (HRI) has gained
importance as more repetitive and tiresome jobs are taken over by robots. In
recent years, the field of soft robotics has seen a boom in both research and
commercialization. Industry 5.0 focuses on human-robot collaboration, which
further spurs the field of soft robotics. However, HRI for soft robots is
still in a nascent stage. In this work we review and discuss how HRI is done
for soft robots. We first discuss the control, design, materials, and
manufacturing of soft robots, providing an understanding of what is being
interacted with. We then discuss the various input and output modalities used
in HRI. The applications of HRI for soft robots found in the literature are
discussed in detail, followed by the limitations of HRI for soft robots and
the research opportunities that exist in this field. We conclude that there is
large scope for development in HRI for soft robots.
|
2502.01262
|
FSPGD: Rethinking Black-box Attacks on Semantic Segmentation
|
cs.CV
|
Transferability, the ability of adversarial examples crafted for one model to
deceive other models, is crucial for black-box attacks. Despite advancements in
attack methods for semantic segmentation, transferability remains limited,
reducing their effectiveness in real-world applications. To address this, we
introduce the Feature Similarity Projected Gradient Descent (FSPGD) attack, a
novel black-box approach that enhances both attack performance and
transferability. Unlike conventional segmentation attacks that rely on output
predictions for gradient calculation, FSPGD computes gradients from
intermediate layer features. Specifically, our method introduces a loss
function that targets local information by comparing features between clean
images and adversarial examples, while also disrupting contextual information
by accounting for spatial relationships between objects. Experiments on Pascal
VOC 2012 and Cityscapes datasets demonstrate that FSPGD achieves superior
transferability and attack performance, establishing a new state-of-the-art
benchmark. Code is available at https://github.com/KU-AIVS/FSPGD.
|
2502.01264
|
Generalized Lanczos method for systematic optimization of neural-network
quantum states
|
cond-mat.str-el cs.LG physics.comp-ph
|
Recently, artificial intelligence for science has made significant inroads
into various fields of natural science research. In the field of quantum
many-body computation, researchers have developed numerous ground state solvers
based on neural-network quantum states (NQSs), achieving ground state energies
with accuracy comparable to or surpassing traditional methods such as
variational Monte Carlo methods, density matrix renormalization group, and
quantum Monte Carlo methods. Here, we combine supervised learning,
reinforcement learning, and the Lanczos method to develop a systematic approach
to improving the NQSs of many-body systems, which we refer to as the NQS
Lanczos method. The algorithm mainly consists of two parts: the supervised
learning part and the reinforcement learning part. Through supervised learning,
the Lanczos states are represented by the NQSs. Through reinforcement learning,
the NQSs are further optimized. We analyze the reasons for the underfitting
problem and demonstrate how the NQS Lanczos method systematically improves the
energy in the highly frustrated regime of the two-dimensional Heisenberg
$J_1$-$J_2$ model. Compared to the existing method that combines the Lanczos
method with the restricted Boltzmann machine, the primary advantage of the NQS
Lanczos method is its linearly increasing computational cost.
|
2502.01265
|
On Exact Learning of $d$-Monotone Functions
|
cs.LG cs.DS
|
In this paper, we study the learnability of the Boolean class of $d$-monotone
functions $f:{\cal X}\to\{0,1\}$ from membership and equivalence queries, where
$({\cal X},\le)$ is a finite lattice. We show that the class of $d$-monotone
functions that are represented in the form $f=F(g_1,g_2,\ldots,g_d)$, where $F$
is any Boolean function $F:\{0,1\}^d\to\{0,1\}$ and $g_1,\ldots,g_d:{\cal X}\to
\{0,1\}$ are any monotone functions, is learnable in time $\sigma({\cal
X})\cdot (size(f)/d+1)^{d}$ where $\sigma({\cal X})$ is the maximum sum of the
number of immediate predecessors in a chain from the largest element to the
smallest element in the lattice ${\cal X}$ and
$size(f)=size(g_1)+\cdots+size(g_d)$, where $size(g_i)$ is the number of
minimal elements in $g_i^{-1}(1)$.
For the Boolean function $f:\{0,1\}^n\to\{0,1\}$, the class of $d$-monotone
functions that are represented in the form $f=F(g_1,g_2,\ldots,g_d)$, where $F$
is any Boolean function and $g_1,\ldots,g_d$ are any monotone DNF, is learnable
in time $O(n^2)\cdot (size(f)/d+1)^{d}$ where
$size(f)=size(g_1)+\cdots+size(g_d)$.
In particular, this class is learnable in polynomial time when $d$ is
constant. Additionally, this class is learnable in polynomial time when
$size(g_i)$ is constant for all $i$ and $d=O(\log n)$.
|
2502.01267
|
Counterfactual Situation Testing: From Single to Multidimensional
Discrimination
|
cs.LG
|
We present counterfactual situation testing (CST), a causal data mining
framework for detecting individual discrimination in a dataset of classifier
decisions. CST answers the question "what would have been the model outcome had
the individual, or complainant, been of a different protected status?" It
extends the legally-grounded situation testing (ST) of Thanh et al. (2011) by
operationalizing the notion of fairness given the difference via counterfactual
reasoning. ST finds for each complainant similar protected and non-protected
instances in the dataset; constructs, respectively, a control and test group;
and compares the groups such that a difference in outcomes implies a potential
case of individual discrimination. CST, instead, avoids this idealized
comparison by establishing the test group on the complainant's generated
counterfactual, which reflects how the protected attribute when changed
influences other seemingly neutral attributes of the complainant. Under CST we
test for discrimination for each complainant by comparing similar individuals
within each group but dissimilar individuals across groups. We consider single
(e.g., gender) and multidimensional (e.g., gender and race) discrimination
testing. For multidimensional discrimination we study multiple and
intersectional discrimination and, as feared by legal scholars, find evidence
that the former fails to account for the latter kind. Using a k-nearest
neighbor implementation, we showcase CST on synthetic and real data.
Experimental results show that CST uncovers a higher number of cases than ST,
even when the model is counterfactually fair. In fact, CST extends
counterfactual fairness (CF) of Kusner et al. (2017) by equipping CF with
confidence intervals.
|
2502.01268
|
Resilient UAV Trajectory Planning via Few-Shot Meta-Offline
Reinforcement Learning
|
cs.RO cs.AI
|
Reinforcement learning (RL) is a promising technique for future 5G-beyond
and 6G systems. Its main advantage lies in its robust model-free
decision-making in complex and large-dimension wireless environments. However,
most existing RL frameworks rely on online interaction with the environment,
which might not be feasible due to safety and cost concerns. Another problem
with online RL is the lack of scalability of the designed algorithm with
dynamic or new environments. This work proposes a novel, resilient, few-shot
meta-offline RL algorithm combining offline RL using conservative Q-learning
(CQL) and meta-learning using model-agnostic meta-learning (MAML). The proposed
algorithm can train RL models using static offline datasets without any online
interaction with the environments. In addition, with the aid of MAML, the
proposed model can be scaled up to new unseen environments. We showcase the
proposed algorithm for optimizing an unmanned aerial vehicle (UAV)
trajectory and scheduling policy to minimize the age-of-information (AoI) and
transmission power of limited-power devices. Numerical results show that the
proposed few-shot meta-offline RL algorithm converges faster than baseline
schemes, such as deep Q-networks and CQL. In addition, it is the only algorithm
that can achieve optimal joint AoI and transmission power using an offline
dataset with few shots of data points and is resilient to network failures due
to unprecedented environmental changes.
|
2502.01269
|
Exploratory Utility Maximization Problem with Tsallis Entropy
|
cs.LG q-fin.MF
|
We study expected utility maximization problem with constant relative risk
aversion utility function in a complete market under the reinforcement learning
framework. To induce exploration, we introduce the Tsallis entropy regularizer,
which generalizes the commonly used Shannon entropy. Unlike the classical
Merton's problem, which is always well-posed and admits closed-form solutions,
we find that the utility maximization exploratory problem is ill-posed in
certain cases, due to over-exploration. With a carefully selected primary
temperature function, we investigate two specific examples, for which we fully
characterize their well-posedness and provide semi-closed-form solutions. It is
interesting to find that one example has the well-known Gaussian distribution
as the optimal strategy, while the other features the rare Wigner semicircle
distribution, which is equivalent to a scaled Beta distribution. The means of
the two optimal exploratory policies coincide with that of the classical
counterpart. In addition, we examine the convergence of the value function and
optimal exploratory strategy as the exploration vanishes. Finally, we design a
reinforcement learning algorithm and conduct numerical experiments to
demonstrate the advantages of reinforcement learning.
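For context, the Tsallis entropy regularizer referred to above is, in its standard form for a policy density $\pi$ with entropic index $q$ (generic notation, not taken from the paper), the following, with the Shannon entropy recovered as $q \to 1$:

```latex
S_q(\pi) = \frac{1}{q-1} \left( 1 - \int \pi(a)^{q} \, da \right),
\qquad
\lim_{q \to 1} S_q(\pi) = - \int \pi(a) \ln \pi(a) \, da .
```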
|
2502.01270
|
Main Predicate and Their Arguments as Explanation Signals For Intent
Classification
|
cs.CL
|
Intent classification is crucial for conversational agents (chatbots), and
deep learning models perform well in this area. However, little research has
been done on the explainability of intent classification due to the absence of
suitable benchmark data. Human annotation of explanation signals in text
samples is time-consuming and costly. However, from inspection of data on
intent classification, we see that, more often than not, the main verb denotes
the action, and the direct object indicates the domain of conversation, serving
as explanation signals for intent. This observation enables us to hypothesize
that the main predicate in the text utterances, along with the arguments of the
main predicate, can serve as explanation signals. Leveraging this, we introduce
a new technique to automatically augment text samples from intent
classification datasets with word-level explanations. We mark main predicates
(primarily verbs) and their arguments (dependency relations) as explanation
signals in benchmark intent classification datasets ATIS and SNIPS, creating a
unique 21k-instance dataset for explainability. Further, we experiment with
deep learning and language models. We observe that models that work well for
classification do not perform well in explainability metrics like plausibility
and faithfulness. We also observe that guiding models to focus on explanation
signals from our dataset during training improves the plausibility Token F1
score by 3-4%, improving the model's reasoning.
|
2502.01272
|
Boosting Graph Robustness Against Backdoor Attacks: An Over-Similarity
Perspective
|
cs.LG
|
Graph Neural Networks (GNNs) have achieved notable success in tasks such as
social and transportation networks. However, recent studies have highlighted
the vulnerability of GNNs to backdoor attacks, raising significant concerns
about their reliability in real-world applications. Despite initial efforts to
defend against specific graph backdoor attacks, existing defense methods face
two main challenges: either the inability to establish a clear distinction
between triggers and clean nodes, resulting in the removal of many clean nodes,
or the failure to eliminate the impact of triggers, making it challenging to
restore the target nodes to their pre-attack state. Through empirical analysis
of various existing graph backdoor attacks, we observe that the triggers
generated by these methods exhibit over-similarity in both features and
structure. Based on this observation, we propose a novel graph backdoor defense
method, SimGuard. It first utilizes a similarity-based metric to detect
triggers and then employs contrastive learning to train a backdoor detector
that generates embeddings capable of separating triggers from clean nodes, thereby
improving detection efficiency. Extensive experiments conducted on real-world
datasets demonstrate that our proposed method effectively defends against
various graph backdoor attacks while preserving performance on clean nodes. The
code will be released upon acceptance.
|
2502.01273
|
Analysis of Student-LLM Interaction in a Software Engineering Project
|
cs.SE cs.AI
|
As Large Language Models (LLMs) become increasingly competent across
various domains, educators are showing a growing interest in integrating these
LLMs into the learning process. Especially in software engineering, LLMs have
demonstrated qualitatively better capabilities in code summarization, code
generation, and debugging. Despite various research on LLMs for software
engineering tasks in practice, limited research captures the benefits of LLMs
for pedagogical advancements and their impact on the student learning process.
To this end, we analyze 126 undergraduate students' interaction with an AI
assistant during a 13-week semester to understand the benefits of AI for
software engineering learning. We analyze the conversations, code generated,
code utilized, and the human intervention levels to integrate the code into the
code base.
Our findings suggest that students prefer ChatGPT over Copilot. Our analysis
also finds that ChatGPT generates responses with lower computational complexity
compared to Copilot. Furthermore, conversation-based interaction helps
improve the quality of the code generated compared to auto-generated code.
Early adoption of LLMs in software engineering is crucial to remain competitive
in the rapidly developing landscape. Hence, the next generation of software
engineers must acquire the necessary skills to interact with AI to improve
productivity.
|
2502.01276
|
HyperSHAP: Shapley Values and Interactions for Hyperparameter Importance
|
cs.LG cs.AI stat.ML
|
Hyperparameter optimization (HPO) is a crucial step in achieving strong
predictive performance. However, the impact of individual hyperparameters on
model generalization is highly context-dependent, prohibiting a
one-size-fits-all solution and requiring opaque automated machine learning
(AutoML) systems to find optimal configurations. The black-box nature of most
AutoML systems undermines user trust and discourages adoption. To address this,
we propose a game-theoretic explainability framework for HPO that is based on
Shapley values and interactions. Our approach provides an additive
decomposition of a performance measure across hyperparameters, enabling local
and global explanations of hyperparameter importance and interactions. The
framework, named HyperSHAP, offers insights into ablations, the tunability of
learning algorithms, and optimizer behavior across different hyperparameter
spaces. We evaluate HyperSHAP on various HPO benchmarks by analyzing the
interaction structure of the HPO problem. Our results show that while
higher-order interactions exist, most performance improvements can be explained
by focusing on lower-order representations.
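The additive decomposition described above can be illustrated with a plain exact Shapley computation over hyperparameters, where `value(S)` stands for the benchmarked performance when only the hyperparameters in `S` are tuned (the toy value function below is a stand-in, not HyperSHAP's API):

```python
from itertools import combinations
from math import factorial

def shapley_values(players, value):
    """Exact Shapley values over all subsets (exponential in the number
    of players; fine for small hyperparameter spaces)."""
    n = len(players)
    phi = {}
    for p in players:
        others = [q for q in players if q != p]
        total = 0.0
        for k in range(n):
            for S in combinations(others, k):
                S = frozenset(S)
                # Shapley weight for a coalition of size k.
                w = factorial(k) * factorial(n - k - 1) / factorial(n)
                total += w * (value(S | {p}) - value(S))
        phi[p] = total
    return phi

# Stand-in value function: accuracy gain from tuning each subset.
gain = {"lr": 0.05, "depth": 0.02, "reg": 0.00}
value = lambda S: sum(gain[h] for h in S)

phi = shapley_values(list(gain), value)
print(phi)  # for an additive game, Shapley values equal the per-player gains
```

Interactions show up when `value` is non-additive; Shapley interaction indices, which HyperSHAP also uses, then quantify those higher-order terms.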
|
2502.01278
|
DRL-based Dolph-Tschebyscheff Beamforming in Downlink Transmission for
Mobile Users
|
eess.SP cs.LG
|
With the emergence of AI technologies in next-generation communication
systems, machine learning plays a pivotal role due to its ability to address
high-dimensional, non-stationary optimization problems within dynamic
environments while maintaining computational efficiency. One such application
is directional beamforming, achieved through learning-based blind beamforming
techniques that utilize already existing radio frequency (RF) fingerprints of
the user equipment obtained from the base stations and eliminate the need for
additional hardware or channel and angle estimations. However, as the number of
users and antenna dimensions increase, thereby expanding the problem's
complexity, the learning process becomes increasingly challenging, and the
performance of the learning-based method cannot match that of the optimal
solution. In such a scenario, we propose a deep reinforcement learning-based
blind beamforming technique using a learnable Dolph-Tschebyscheff antenna array
that can change its beam pattern to accommodate mobile users. Our simulation
results show that the proposed method can support data rates very close to the
best possible values.
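As a point of reference, the classical (non-learnable) Dolph-Tschebyscheff weights that the proposed method builds on can be generated directly; the array size and sidelobe level below are illustrative choices, not the paper's configuration:

```python
import numpy as np
from scipy.signal.windows import chebwin

M, sll_db = 8, 30                        # 8-element array, 30 dB sidelobes
w = chebwin(M, at=sll_db)                # Dolph-Chebyshev taper weights

# Array factor of a uniform linear array with half-wavelength spacing.
theta = np.linspace(-np.pi / 2, np.pi / 2, 721)
af = np.abs(np.exp(1j * np.pi * np.outer(np.sin(theta), np.arange(M))) @ w)
af_db = 20 * np.log10(af / af.max())     # normalized pattern in dB
```

The Chebyshev taper yields equal-height sidelobes at the chosen attenuation; making such a pattern learnable, so it can track mobile users, is the contribution described above.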
|
2502.01280
|
Trajectory Map-Matching in Urban Road Networks Based on RSS Measurements
|
eess.SY cs.SY
|
This paper proposes an RSS-based approach to reconstruct vehicle trajectories
within a road network, enforcing signal propagation rules and vehicle mobility
constraints to mitigate the impact of RSS noise and sparsity. The key challenge
lies in leveraging latent spatiotemporal correlations within RSS data while
navigating complex road networks. To address this, we develop a Hidden Markov
Model (HMM)-based RSS embedding (HRE) technique that employs alternating
optimization to infer vehicle trajectories from RSS measurements. This model
captures spatiotemporal dependencies while a road graph ensures network
compliance. Additionally, we introduce a maximum speed-constrained rough
trajectory estimation (MSR) method to guide the optimization process, enabling
rapid convergence to a favorable local solution.
|
2502.01281
|
Label Correction for Road Segmentation Using Road-side Cameras
|
cs.CV
|
Reliable road segmentation in all weather conditions is critical for
intelligent transportation applications, autonomous vehicles and advanced
driver's assistance systems. For robust performance, all weather conditions
should be included in the training data of deep learning-based perception
models. However, collecting and annotating such a dataset requires extensive
resources. In this paper, existing roadside camera infrastructure is utilized
for collecting road data in varying weather conditions automatically.
Additionally, a novel semi-automatic annotation method for roadside cameras is
proposed. For each camera, only one frame is labeled manually and then the
label is transferred to other frames of that camera feed. The small camera
movements between frames are compensated using frequency domain image
registration. The proposed method is validated with roadside camera data
collected from 927 cameras across Finland over a 4-month period during
winter. Training on the semi-automatically labeled data boosted the
segmentation performance of several deep learning segmentation models. Testing
was carried out on two different datasets to evaluate the robustness of the
resulting models. These datasets were an in-domain roadside camera dataset and
out-of-domain dataset captured with a vehicle on-board camera.
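The frequency-domain registration step mentioned above is commonly done with phase correlation; a minimal sketch, assuming a pure translation between frames (which matches the small camera movements described):

```python
import numpy as np

def phase_correlation(a, b):
    """Estimate the integer (row, col) shift that maps image a onto b
    using the normalized cross-power spectrum."""
    cross = np.conj(np.fft.fft2(a)) * np.fft.fft2(b)
    cross /= np.abs(cross) + 1e-12            # keep phase information only
    corr = np.real(np.fft.ifft2(cross))
    peak = np.array(np.unravel_index(np.argmax(corr), corr.shape))
    size = np.array(a.shape)
    peak[peak > size // 2] -= size[peak > size // 2]   # signed shifts
    return tuple(int(s) for s in peak)

# A synthetic frame and a circularly shifted copy of it.
rng = np.random.default_rng(0)
frame = rng.random((64, 64))
moved = np.roll(frame, (3, -5), axis=(0, 1))
print(phase_correlation(frame, moved))  # -> (3, -5)
```

In the setting above, `a` would be the manually labeled reference frame and `b` a later frame from the same camera; the estimated shift is then applied to the label mask before transfer.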
|
2502.01282
|
Rational Gaussian wavelets and corresponding model driven neural
networks
|
stat.ML cs.AI cs.LG
|
In this paper we consider the continuous wavelet transform using Gaussian
wavelets multiplied by an appropriate rational term. The zeros and poles of
this rational modifier act as free parameters and their choice highly
influences the shape of the mother wavelet. This allows the proposed
construction to approximate signals with complex morphology using only a few
wavelet coefficients. We show that the proposed rational Gaussian wavelets are
admissible and provide numerical approximations of the wavelet coefficients
using variable projection operators. In addition, we show how the proposed
variable projection based rational Gaussian wavelet transform can be used in
neural networks to obtain a highly interpretable feature learning layer. We
demonstrate the effectiveness of the proposed scheme through a biomedical
application, namely, the detection of ventricular ectopic beats (VEBs) in real
ECG measurements.
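A minimal sketch of such a mother wavelet, with illustrative zeros and poles chosen here (poles kept off the real axis so the wavelet stays finite), is:

```python
import numpy as np

def rational_gaussian(t, zeros, poles):
    """Gaussian multiplied by a rational term; the zeros and poles act
    as free parameters shaping the waveform (hypothetical choices below)."""
    num = np.prod([t - z for z in zeros], axis=0)
    den = np.prod([t - p for p in poles], axis=0)
    psi = np.real(num / den) * np.exp(-t ** 2 / 2)
    return psi - psi.mean()          # crude zero-mean correction

t = np.linspace(-6.0, 6.0, 256)
# With a zero at 0 and conjugate poles at +/- i, the rational term is
# t / (t**2 + 1), giving an odd, zero-mean waveform.
psi = rational_gaussian(t, zeros=[0.0], poles=[1j, -1j])
```

The actual admissibility conditions and variable-projection fitting are as derived in the paper; this only illustrates how the zero/pole choice shapes the wavelet.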
|
2502.01286
|
Template Matching in Images using Segmented Normalized Cross-Correlation
|
cs.CV
|
In this paper, a new variant of an algorithm for normalized cross-correlation
(NCC) is proposed in the context of template matching in images. The proposed
algorithm is based on the precomputation of a template image approximation,
enabling more efficient calculation of approximate NCC with the source image
than using the original template for exact NCC calculation. The approximate
template is precomputed from the template image by a split-and-merge approach,
resulting in a decomposition to axis-aligned rectangular segments, whose sizes
depend on per-segment pixel intensity variance. In the approximate template,
each segment is assigned the mean grayscale value of the corresponding pixels
from the original template. The proposed algorithm achieves superior
computational performance with negligible NCC approximation errors compared to
the well-known Fast Fourier Transform (FFT)-based NCC algorithm, when applied
on less visually complex and/or smaller template images. In other cases, the
proposed algorithm can maintain either computational performance or NCC
approximation error within the range of the FFT-based algorithm, but not both.
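For reference, the exact NCC that the approximation targets can be written directly; this brute-force version is what both the FFT-based and the proposed segmented algorithms accelerate (illustrative code, not the paper's implementation):

```python
import numpy as np

def ncc(window, template):
    """Zero-normalized cross-correlation of two same-shaped patches."""
    w = window - window.mean()
    t = template - template.mean()
    denom = np.sqrt((w ** 2).sum() * (t ** 2).sum())
    return float((w * t).sum() / denom) if denom else 0.0

def match(image, template):
    """Exhaustive search: best-scoring top-left corner and its score."""
    th, tw = template.shape
    best_score, best_pos = -2.0, (0, 0)
    for i in range(image.shape[0] - th + 1):
        for j in range(image.shape[1] - tw + 1):
            s = ncc(image[i:i + th, j:j + tw], template)
            if s > best_score:
                best_score, best_pos = s, (i, j)
    return best_pos, best_score

rng = np.random.default_rng(1)
img = rng.random((20, 20))
pos, score = match(img, img[5:9, 7:11])   # template cut from the image
```

For a template cut directly from the image, the exact match scores NCC = 1 at its original position; the segmented variant replaces `template` with its piecewise-constant approximation to cut the per-window cost.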
|
2502.01289
|
A Framework for Double-Blind Federated Adaptation of Foundation Models
|
cs.LG cs.CR cs.CV cs.DC
|
The availability of foundational models (FMs) pre-trained on large-scale data
has advanced the state-of-the-art in many computer vision tasks. While FMs have
demonstrated good zero-shot performance on many image classification tasks,
there is often scope for performance improvement by adapting the FM to the
downstream task. However, the data that is required for this adaptation
typically exists in silos across multiple entities (data owners) and cannot be
collated at a central location due to regulations and privacy concerns. At the
same time, a learning service provider (LSP) who owns the FM cannot share the
model with the data owners due to proprietary reasons. In some cases, the data
owners may not even have the resources to store such large FMs. Hence, there is
a need for algorithms to adapt the FM in a double-blind federated manner, i.e.,
the data owners do not know the FM or each other's data, and the LSP does not
see the data for the downstream tasks. In this work, we propose a framework for
double-blind federated adaptation of FMs using fully homomorphic encryption
(FHE). The proposed framework first decomposes the FM into a sequence of
FHE-friendly blocks through knowledge distillation. The resulting FHE-friendly
model is adapted for the downstream task via low-rank parallel adapters that
can be learned without backpropagation through the FM. Since the proposed
framework requires the LSP to share intermediate representations with the data
owners, we design a privacy-preserving permutation scheme to prevent the data
owners from learning the FM through model extraction attacks. Finally, a secure
aggregation protocol is employed for federated learning of the low-rank
parallel adapters. Empirical results on four datasets demonstrate the practical
feasibility of the proposed framework.
|
2502.01295
|
Common Foundations for SHACL, ShEx, and PG-Schema
|
cs.DB cs.AI
|
Graphs have emerged as an important foundation for a variety of applications,
including capturing and reasoning over factual knowledge, semantic data
integration, social networks, and providing factual knowledge for machine
learning algorithms. To formalise certain properties of the data and to ensure
data quality, there is a need to describe the schema of such graphs. Because of
the breadth of applications and availability of different data models, such as
RDF and property graphs, both the Semantic Web and the database community have
independently developed graph schema languages: SHACL, ShEx, and PG-Schema.
Each language has its unique approach to defining constraints and validating
graph data, leaving potential users in the dark about their commonalities and
differences. In this paper, we provide formal, concise definitions of the core
components of each of these schema languages. We employ a uniform framework to
facilitate a comprehensive comparison between the languages and identify a
common set of functionalities, shedding light on both overlapping and
distinctive features of the three languages.
|
2502.01296
|
Molecular Odor Prediction with Harmonic Modulated Feature Mapping and
Chemically-Informed Loss
|
cs.LG q-bio.QM
|
Molecular odor prediction has great potential across diverse fields such as
chemistry, pharmaceuticals, and environmental science, enabling the rapid
design of new materials and enhancing environmental monitoring. However,
current methods face two main challenges: First, existing models struggle with
non-smooth objective functions and the complexity of mixed feature dimensions;
Second, datasets suffer from severe label imbalance, which hampers model
training, particularly in learning minority class labels. To address these
issues, we introduce a novel feature mapping method and a molecular ensemble
optimization loss function. By incorporating feature importance learning and
frequency modulation, our model adaptively adjusts the contribution of each
feature, efficiently capturing the intricate relationship between molecular
structures and odor descriptors. Our feature mapping preserves feature
independence while enhancing the model's efficiency in utilizing molecular
features through frequency modulation. Furthermore, the proposed loss function
dynamically adjusts label weights, improves structural consistency, and
strengthens label correlations, effectively addressing data imbalance and label
co-occurrence challenges. Experimental results show that our method
can significantly improve the accuracy of molecular odor prediction across
various deep learning models, demonstrating its promising potential in
molecular structure representation and chemoinformatics.
|
2502.01297
|
XR-VIO: High-precision Visual Inertial Odometry with Fast Initialization
for XR Applications
|
cs.CV
|
This paper presents a novel approach to Visual Inertial Odometry (VIO),
focusing on the initialization and feature matching modules. Existing methods
for initialization often suffer from either poor stability in visual Structure
from Motion (SfM) or fragility in solving a huge number of parameters
simultaneously. To address these challenges, we propose a new pipeline for
visual inertial initialization that robustly handles various complex scenarios.
By tightly coupling gyroscope measurements, we enhance the robustness and
accuracy of visual SfM. Our method demonstrates stable performance even with
only four image frames, yielding competitive results. In terms of feature
matching, we introduce a hybrid method that combines optical flow and
descriptor-based matching. By leveraging the robustness of continuous optical
flow tracking and the accuracy of descriptor matching, our approach achieves
efficient, accurate, and robust tracking results. Through evaluation on
multiple benchmarks, our method demonstrates state-of-the-art performance in
terms of accuracy and success rate. Additionally, a video demonstration on
mobile devices showcases the practical applicability of our approach in the
field of Augmented Reality/Virtual Reality (AR/VR).
|
2502.01298
|
Augmented Knowledge Graph Querying leveraging LLMs
|
cs.IR
|
Adopting Knowledge Graphs (KGs) as a structured, semantic-oriented, data
representation model has significantly improved data integration, reasoning,
and querying capabilities across different domains. This is especially true in
modern scenarios such as Industry 5.0, in which the integration of data
produced by humans, smart devices, and production processes plays a crucial
role. However, the management, retrieval, and visualization of data from a KG
using formal query languages can be difficult for non-expert users due to their
technical complexity, thus limiting their usage inside industrial environments.
For this reason, we introduce SparqLLM, a framework that utilizes a
Retrieval-Augmented Generation (RAG) solution to enhance the querying of
KGs. SparqLLM executes the Extract, Transform, and Load
(ETL) pipeline to construct KGs from raw data. It also features a natural
language interface powered by Large Language Models (LLMs) to enable automatic
SPARQL query generation. By integrating template-based methods as
retrieved-context for the LLM, SparqLLM enhances query reliability and reduces
semantic errors, ensuring more accurate and efficient KG interactions.
Moreover, to improve usability, the system incorporates a dynamic visualization
dashboard that adapts to the structure of the retrieved data, presenting the
query results in an intuitive format. Rigorous experimental evaluations
demonstrate that SparqLLM achieves high query accuracy, improved robustness,
and user-friendly interaction with KGs, establishing it as a scalable solution
to access semantic data.
|
2502.01299
|
Probabilistic adaptation of language comprehension for individual
speakers: Evidence from neural oscillations
|
q-bio.NC cs.CL
|
Listeners adapt language comprehension based on their mental representations
of speakers, but how these representations are dynamically updated remains
unclear. We investigated whether listeners probabilistically adapt their
comprehension based on the likelihood of speakers producing
stereotype-incongruent utterances. Our findings reveal two potential
mechanisms: a speaker-general mechanism that adjusts overall expectations about
speaker-content relationships, and a speaker-specific mechanism that updates
individual speaker models. In two EEG experiments, participants heard speakers
make stereotype-congruent or incongruent utterances, with incongruency base
rate manipulated between blocks. In Experiment 1, speaker incongruency
modulated both high-beta (21-30 Hz) and theta (4-6 Hz) oscillations:
incongruent utterances decreased oscillatory power in the low base rate
condition but increased it in the high base rate condition. The theta effect varied with
listeners' openness trait: less open participants showed theta increases to
speaker-incongruencies, suggesting maintenance of speaker-specific information,
while more open participants showed theta decreases, indicating flexible model
updating. In Experiment 2, we dissociated base rate from the target speaker by
manipulating the overall base rate using an alternative non-target speaker.
Only the high-beta effect persisted, showing a power decrease for
speaker-incongruencies in the low base rate condition but no effect in the high
base rate condition. The high-beta oscillations might reflect the speaker-general
adjustment, while theta oscillations may index the speaker-specific model
updating. These findings provide evidence for how language processing is shaped
by social cognition in real time.
|
2502.01303
|
Partial Channel Network: Compute Fewer, Perform Better
|
cs.CV cs.AI
|
Designing a module or mechanism that enables a network to maintain low
parameters and FLOPs without sacrificing accuracy and throughput remains a
challenge. To address this challenge and exploit the redundancy within feature
map channels, we propose a new solution: partial channel mechanism (PCM).
Specifically, through the split operation, the feature map channels are divided
into different parts, with each part corresponding to different operations,
such as convolution, attention, pooling, and identity mapping. Based on this
mechanism, we introduce a novel partial attention convolution (PATConv) that
can efficiently combine convolution with visual attention. Our exploration
indicates that PATConv can fully replace both regular convolution and
regular visual attention while reducing model parameters and FLOPs.
Moreover, PATConv can derive three new types of blocks: Partial
Channel-Attention block (PAT_ch), Partial Spatial-Attention block (PAT_sp), and
Partial Self-Attention block (PAT_sf). In addition, we propose a novel dynamic
partial convolution (DPConv) that can adaptively learn the proportion of split
channels in different layers to achieve better trade-offs. Building on PATConv
and DPConv, we propose a new hybrid network family, named PartialNet, which
achieves superior top-1 accuracy and inference speed compared to some SOTA
models on ImageNet-1K classification and excels in both detection and
segmentation on the COCO dataset. Our code is available at
https://github.com/haiduo/PartialNet.
|
2502.01304
|
Towards Autonomous Wood-Log Grasping with a Forestry Crane: Simulator
and Benchmarking
|
cs.RO
|
Forestry machines operating in forest production environments face challenges
when performing manipulation tasks, especially regarding the complicated
dynamics of underactuated crane systems and the heavy weight of logs to be
grasped. This study investigates the feasibility of using reinforcement
learning for forestry crane manipulators in grasping and lifting heavy wood
logs autonomously. We first build a simulator using Mujoco physics engine to
create realistic scenarios, including modeling a forestry crane with 8 degrees
of freedom from CAD data and wood logs of different sizes. We further implement
a velocity controller for autonomous log grasping with deep reinforcement
learning using a curriculum strategy. Utilizing our new simulator, the proposed
control strategy exhibits a success rate of 96% when grasping logs of different
diameters and under random initial configurations of the forestry crane. In
addition, reward functions and reinforcement learning baselines are implemented
to provide an open-source benchmark for the community in large-scale
manipulation tasks. A video with several demonstrations can be seen at
https://www.acin.tuwien.ac.at/en/d18a/
|
2502.01307
|
Improving the Effectiveness of Potential-Based Reward Shaping in
Reinforcement Learning
|
cs.LG
|
Potential-based reward shaping is commonly used to incorporate prior
knowledge of how to solve the task into reinforcement learning because it can
formally guarantee policy invariance. As such, the optimal policy and the
ordering of policies by their returns are not altered by potential-based reward
shaping. In this work, we highlight the dependence of effective potential-based
reward shaping on the initial Q-values and external rewards, which determine
the agent's ability to exploit the shaping rewards to guide its exploration and
achieve increased sample efficiency. We formally derive how a simple linear
shift of the potential function can be used to improve the effectiveness of
reward shaping without changing the encoded preferences in the potential
function, and without having to adjust the initial Q-values, which can be
challenging and undesirable in deep reinforcement learning. We show the
theoretical limitations of continuous potential functions for correctly
assigning positive and negative reward shaping values. We verify our
theoretical findings empirically on Gridworld domains with sparse and
uninformative reward functions, as well as on the Cart Pole and Mountain Car
environments, where we demonstrate the application of our results in deep
reinforcement learning.
|
2502.01309
|
Heterogeneous Image GNN: Graph-Conditioned Diffusion for Image Synthesis
|
cs.CV
|
We introduce a novel method for conditioning diffusion-based image synthesis
models with heterogeneous graph data. Existing approaches typically incorporate
conditioning variables directly into model architectures, either through
cross-attention layers that attend to text latents or image concatenation that
spatially restricts generation. However, these methods struggle to handle
complex scenarios involving diverse, relational conditioning variables, which
are more naturally represented as unstructured graphs. This paper presents
Heterogeneous Image Graphs (HIG), a novel representation that models
conditioning variables and target images as two interconnected graphs, enabling
efficient handling of variable-length conditioning inputs and their
relationships. We also propose a magnitude-preserving GNN that integrates the
HIG into the existing EDM2 diffusion model using a ControlNet approach. Our
approach improves upon the SOTA on a variety of conditioning inputs for the
COCO-stuff and Visual Genome datasets, and showcases the ability to condition
on graph attributes and relationships represented by edges in the HIG.
|
2502.01310
|
A Statistical Learning Perspective on Semi-dual Adversarial Neural
Optimal Transport Solvers
|
cs.LG cs.AI
|
Neural network based Optimal Transport (OT) is a recent and fruitful
direction in the generative modeling community. It finds its applications in
various fields such as domain translation, image super-resolution,
computational biology and others. Among the existing approaches to OT, of
considerable interest are adversarial minimax solvers based on semi-dual
formulations of OT problems. While promising, these methods lack theoretical
investigation from a statistical learning perspective. Our work fills this gap
by establishing upper bounds on the generalization error of an approximate OT
map recovered by the minimax quadratic OT solver. Importantly, the bounds we
derive depend solely on some standard statistical and mathematical properties
of the considered functional classes (neural networks). While our analysis
focuses on the quadratic OT, we believe that similar bounds could be derived
for more general OT formulations, paving a promising direction for future
research.
|