| id | title | categories | abstract |
|---|---|---|---|
2502.06860
|
AutoSketch: VLM-assisted Style-Aware Vector Sketch Completion
|
cs.CV cs.GR
|
The ability to automatically complete a partial sketch that depicts a complex
scene, e.g., "a woman chatting with a man in the park", is very useful.
However, existing sketch generation methods create sketches from scratch; they
do not complete a partial sketch in the style of the original. To address this
challenge, we introduce AutoSketch, a style-aware vector sketch completion
method that accommodates diverse sketch styles. Our key observation is that the
style descriptions of a sketch in natural language preserve the style during
automatic sketch completion. Thus, we use a pretrained vision-language model
(VLM) to describe the styles of the partial sketches in natural language and
replicate these styles using newly generated strokes. We initially optimize the
strokes to match an input prompt augmented by style descriptions extracted from
the VLM. Such descriptions allow the method to establish a diffusion prior in
close alignment with that of the partial sketch. Next, we utilize the VLM to
generate an executable style adjustment code that adjusts the strokes to
conform to the desired style. We compare our method with existing methods
across various sketch styles and prompts, perform extensive ablation studies
and qualitative and quantitative evaluations, and demonstrate that AutoSketch
can support various sketch scenarios.
|
2502.06861
|
Design Considerations in Offline Preference-based RL
|
cs.LG cs.AI
|
Offline algorithms for Reinforcement Learning from Human Preferences (RLHF),
which use only a fixed dataset of sampled responses given an input, and
preference feedback among these responses, have gained increasing prominence in
the literature on aligning language models. In this paper, we study how the
different design choices made in methods such as DPO, IPO, SLiC and many
variants influence the quality of the learned policy, from a theoretical
perspective. Our treatment yields insights into the choices of loss function,
the policy which is used to normalize log-likelihoods, and also the role of the
data sampling policy. Notably, our results do not rely on the standard
reparameterization-style arguments used to motivate some of the algorithms in
this family, which allows us to give a unified treatment to a broad class of
methods. We also conduct a small empirical study to verify some of the
theoretical findings on a standard summarization benchmark.
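For context, the DPO loss referenced above (one of the designs compared) is commonly written as follows for a preference pair $(x, y_w, y_l)$, with $\pi_\theta$ the learned policy, $\pi_{\mathrm{ref}}$ the reference policy used to normalize log-likelihoods, and $\beta$ a temperature; the paper's own notation may differ.

```latex
\mathcal{L}_{\mathrm{DPO}}(\pi_\theta)
  = -\,\mathbb{E}_{(x,\, y_w,\, y_l)}\!\left[
      \log \sigma\!\left(
        \beta \log \frac{\pi_\theta(y_w \mid x)}{\pi_{\mathrm{ref}}(y_w \mid x)}
        - \beta \log \frac{\pi_\theta(y_l \mid x)}{\pi_{\mathrm{ref}}(y_l \mid x)}
      \right)
    \right]
```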
|
2502.06862
|
Poincaré Inequality for Local Log-Polyak-Łojasiewicz Measures:
Non-asymptotic Analysis in Low-temperature Regime
|
cs.LG math.CA math.FA math.PR stat.ML
|
Potential functions in highly pertinent applications, such as deep learning
in over-parameterized regime, are empirically observed to admit non-isolated
minima. To understand the convergence behavior of stochastic dynamics in such
landscapes, we propose to study the class of local log-Polyak-Łojasiewicz
measures $\mu_\epsilon \propto \exp(-V/\epsilon)$, where the potential $V$
satisfies a local Polyak-Łojasiewicz (PŁ) inequality, and its set of local
minima is provably \emph{connected}. Notably, potentials in this class can
exhibit local maxima, and we characterize the optimal set $S$ as a compact
$\mathcal{C}^2$ \emph{embedded submanifold} of $\mathbb{R}^d$ without boundary.
The \emph{non-contractibility} of $S$ distinguishes our function class from the
classical convex setting topologically. Moreover, the embedding structure
induces a naturally defined Laplace-Beltrami operator on $S$, and we show that
its first non-trivial eigenvalue provides an \emph{$\epsilon$-independent}
lower bound for the Poincaré constant in the Poincaré inequality of
$\mu_\epsilon$. As a direct consequence, Langevin dynamics with such non-convex
potential $V$ and diffusion coefficient $\epsilon$ converges to its equilibrium
$\mu_\epsilon$ at a rate of $\tilde{\mathcal{O}}(1/\epsilon)$, provided
$\epsilon$ is sufficiently small. Here $\tilde{\mathcal{O}}$ hides logarithmic
terms.
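For reference, the Poincaré inequality for $\mu_\epsilon$ referred to above takes the standard form below, with $C_P$ the Poincaré constant; normalization conventions may differ from the paper's.

```latex
\operatorname{Var}_{\mu_\epsilon}(f)
  \;\le\; C_P \int_{\mathbb{R}^d} \lvert \nabla f \rvert^{2} \,\mathrm{d}\mu_\epsilon
  \qquad \text{for all smooth, compactly supported } f .
```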
|
2502.06863
|
BF-GAN: Development of an AI-driven Bubbly Flow Image Generation Model
Using Generative Adversarial Networks
|
cs.CV cs.AI
|
A generative AI architecture called bubbly flow generative adversarial
networks (BF-GAN) is developed, designed to generate realistic and high-quality
bubbly flow images through physically conditioned inputs, $j_g$ and $j_f$.
Initially, 52 sets of bubbly flow experiments under varying conditions are
conducted to collect 140,000 bubbly flow images with physical labels of $j_g$
and $j_f$ as training data. A multi-scale loss function is then developed,
incorporating mismatch loss and pixel loss to further enhance the generative
performance of BF-GAN. On evaluative metrics of generative AI, BF-GAN surpasses
conventional GANs. Physically, key parameters of bubbly flow generated by
BF-GAN are extracted and compared with measurement values and empirical
correlations, validating BF-GAN's generative performance. The comparative
analysis demonstrates that BF-GAN can generate realistic and high-quality
bubbly flow images for any given $j_g$ and $j_f$ within the research scope.
BF-GAN offers a generative AI solution for two-phase flow research,
substantially lowering the time and cost required to obtain high-quality data.
In addition, it can function as a benchmark dataset generator for bubbly flow
detection and segmentation algorithms, enhancing overall productivity in this
research domain. The BF-GAN model is available online
(https://github.com/zhouzhouwen/BF-GAN).
|
2502.06864
|
Knowledge Graph-Guided Retrieval Augmented Generation
|
cs.CL cs.AI
|
Retrieval-augmented generation (RAG) has emerged as a promising technology
for addressing hallucination issues in the responses generated by large
language models (LLMs). Existing studies on RAG primarily focus on applying
semantic-based approaches to retrieve isolated relevant chunks, which ignore
their intrinsic relationships. In this paper, we propose a novel Knowledge
Graph-Guided Retrieval Augmented Generation (KG$^2$RAG) framework that utilizes
knowledge graphs (KGs) to provide fact-level relationships between chunks,
improving the diversity and coherence of the retrieved results. Specifically,
after performing a semantic-based retrieval to provide seed chunks, KG$^2$RAG
employs a KG-guided chunk expansion process and a KG-based chunk organization
process to deliver relevant and important knowledge in well-organized
paragraphs. Extensive experiments conducted on the HotpotQA dataset and its
variants demonstrate the advantages of KG$^2$RAG compared to existing RAG-based
approaches, in terms of both response quality and retrieval quality.
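A minimal, hypothetical sketch of the KG-guided chunk expansion step described above: seed chunks retrieved by semantic search are expanded with chunks linked to them through knowledge-graph edges. The graph, chunk ids, and the one-hop expansion policy are illustrative assumptions, not the paper's exact design.

```python
def kg_expand(seed_chunks, kg_edges, hops=1):
    """Expand a set of seed chunk ids along KG edges (pairs of chunk ids)."""
    neighbours = {}
    for a, b in kg_edges:
        neighbours.setdefault(a, set()).add(b)
        neighbours.setdefault(b, set()).add(a)
    expanded = set(seed_chunks)
    frontier = set(seed_chunks)
    for _ in range(hops):
        # collect unseen chunks reachable in one more hop
        frontier = {n for c in frontier for n in neighbours.get(c, ())} - expanded
        expanded |= frontier
    return expanded

edges = [("c1", "c2"), ("c2", "c3"), ("c4", "c5")]
result = kg_expand({"c1"}, edges, hops=1)
```

In the full framework this expansion would be followed by the KG-based chunk organization step before prompting the LLM.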
|
2502.06865
|
Deep Ritz method with Fourier feature mapping: A deep learning approach
for solving variational models of microstructure
|
cs.LG
|
This paper presents a novel approach that combines the Deep Ritz Method (DRM)
with Fourier feature mapping to solve minimization problems involving
multi-well, non-convex energy potentials. These problems present computational
challenges as they lack a global minimum. Through an investigation of three
benchmark problems in both 1D and 2D, we observe that DRM suffers from spectral
bias pathology, limiting its ability to learn solutions with high frequencies.
To overcome this limitation, we modify the method by introducing Fourier
feature mapping. This modification involves applying a Fourier mapping to the
input layer before it passes through the hidden and output layers. Our results
demonstrate that Fourier feature mapping enables DRM to generate
high-frequency, multiscale solutions for the benchmark problems in both 1D and
2D, offering a promising advancement in tackling complex non-convex energy
minimization problems.
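The Fourier feature mapping described above can be sketched as follows: the input is lifted to $[\cos(2\pi B x), \sin(2\pi B x)]$ before entering the hidden layers. The frequency matrix $B$, its Gaussian sampling, and the scale $\sigma$ are common illustrative choices, not necessarily the paper's exact configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

def fourier_features(x, B):
    """Map inputs x of shape (n, d) to 2m features via frequency matrix B of shape (m, d)."""
    proj = 2.0 * np.pi * x @ B.T                       # (n, m) projections
    return np.concatenate([np.cos(proj), np.sin(proj)], axis=1)  # (n, 2m)

d, m, sigma = 1, 16, 10.0                              # input dim, frequencies, scale (assumed)
B = sigma * rng.standard_normal((m, d))                # random Gaussian frequencies

x = np.linspace(0.0, 1.0, 5).reshape(-1, 1)
z = fourier_features(x, B)                             # features fed to the hidden layers
```

Larger $\sigma$ injects higher frequencies, which is what lets the network escape the spectral bias noted above.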
|
2502.06866
|
Global Ease of Living Index: a machine learning framework for
longitudinal analysis of major economies
|
cs.LG cs.AI econ.EM stat.AP stat.ML
|
The drastic changes in the global economy, geopolitical conditions, and
disruptions such as the COVID-19 pandemic have impacted the cost of living and
quality of life. It is important to understand the long-term nature of the cost
of living and quality of life in major economies. A transparent and
comprehensive living index must include multiple dimensions of living
conditions. In this study, we present an approach to quantifying the quality of
life through the Global Ease of Living Index that combines various
socio-economic and infrastructural factors into a single composite score. Our
index utilises economic indicators that define living standards, which could
help in targeted interventions to improve specific areas. We present a machine
learning framework for addressing the problem of missing data for some of the
economic indicators for specific countries. We then curate and update the data
and use a dimensionality reduction approach (principal component analysis) to
create the Ease of Living Index for major economies since 1970. Our work
significantly adds to the literature by offering a practical tool for
policymakers to identify areas needing improvement, such as healthcare systems,
employment opportunities, and public safety. Our approach with open data and
code can be easily reproduced and applied to various contexts. This
transparency and accessibility make our work a valuable resource for ongoing
research and policy development in quality-of-life assessment.
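The dimensionality-reduction step described above can be sketched as projecting standardized indicators onto the first principal component to obtain one composite score per observation. The toy data and the sign convention are assumptions for demonstration only.

```python
import numpy as np

def composite_index(X):
    """First-principal-component scores of indicator matrix X (rows = observations)."""
    Xs = (X - X.mean(axis=0)) / X.std(axis=0)      # standardize each indicator
    _, _, vt = np.linalg.svd(Xs, full_matrices=False)
    w = vt[0]                                      # loading vector of PC1
    if w.sum() < 0:                                # fix sign so higher score = better (assumed)
        w = -w
    return Xs @ w

rng = np.random.default_rng(1)
X = rng.standard_normal((8, 4))                    # 8 country-years, 4 indicators (toy)
scores = composite_index(X)
```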
|
2502.06867
|
Forbidden Science: Dual-Use AI Challenge Benchmark and Scientific
Refusal Tests
|
cs.CL cs.AI
|
The development of robust safety benchmarks for large language models
requires open, reproducible datasets that can measure both appropriate refusal
of harmful content and potential over-restriction of legitimate scientific
discourse. We present an open-source dataset and testing framework for
evaluating LLM safety mechanisms, focusing mainly on controlled-substance queries, and
analyzing four major models' responses to systematically varied prompts. Our
results reveal distinct safety profiles: Claude-3.5-sonnet demonstrated the
most conservative approach with 73% refusals and 27% allowances, while Mistral
attempted to answer 100% of queries. GPT-3.5-turbo showed moderate restriction
with 10% refusals and 90% allowances, and Grok-2 registered 20% refusals and
80% allowances. Testing prompt variation strategies revealed decreasing
response consistency, from 85% with single prompts to 65% with five variations.
This publicly available benchmark enables systematic evaluation of the critical
balance between necessary safety restrictions and potential over-censorship of
legitimate scientific inquiry, while providing a foundation for measuring
progress in AI safety implementation. Chain-of-thought analysis reveals
potential vulnerabilities in safety mechanisms, highlighting the complexity of
implementing robust safeguards without unduly restricting desirable and valid
scientific discourse.
|
2502.06868
|
Related Knowledge Perturbation Matters: Rethinking Multiple Pieces of
Knowledge Editing in Same-Subject
|
cs.CL cs.AI
|
Knowledge editing has become a promising approach for efficiently and
precisely updating knowledge embedded in large language models (LLMs). In this
work, we focus on Same-Subject Editing, which involves modifying multiple
attributes of a single entity to ensure comprehensive and consistent updates to
entity-centric knowledge. Through preliminary observation, we identify a
significant challenge: Current state-of-the-art editing methods struggle when
tasked with editing multiple related knowledge pieces for the same subject. To
address the lack of relevant editing data for identical subjects in traditional
benchmarks, we introduce the $\text{S}^2\text{RKE}$ (Same-Subject Related
Knowledge Editing) benchmark. Our extensive experiments reveal that only
mainstream locate-then-edit methods, such as ROME and MEMIT, exhibit "related
knowledge perturbation," where subsequent edits interfere with earlier ones.
Further analysis reveals that these methods over-rely on subject information,
neglecting other critical factors, resulting in reduced editing effectiveness.
|
2502.06869
|
A Survey on Explainable Deep Reinforcement Learning
|
cs.LG cs.AI
|
Deep Reinforcement Learning (DRL) has achieved remarkable success in
sequential decision-making tasks across diverse domains, yet its reliance on
black-box neural architectures hinders interpretability, trust, and deployment
in high-stakes applications. Explainable Deep Reinforcement Learning (XRL)
addresses these challenges by enhancing transparency through feature-level,
state-level, dataset-level, and model-level explanation techniques. This survey
provides a comprehensive review of XRL methods, evaluates their qualitative and
quantitative assessment frameworks, and explores their role in policy
refinement, adversarial robustness, and security. Additionally, we examine the
integration of reinforcement learning with Large Language Models (LLMs),
particularly through Reinforcement Learning from Human Feedback (RLHF), which
optimizes AI alignment with human preferences. We conclude by highlighting open
research challenges and future directions to advance the development of
interpretable, reliable, and accountable DRL systems.
|
2502.06870
|
Bridging Traffic State and Trajectory for Dynamic Road Network and
Trajectory Representation Learning
|
cs.LG cs.AI
|
Effective urban traffic management is vital for sustainable city development,
relying on intelligent systems with machine learning tasks such as traffic flow
prediction and travel time estimation. Traditional approaches usually focus on
static road network and trajectory representation learning, and overlook the
dynamic nature of traffic states and trajectories, which is crucial for
downstream tasks. To address this gap, we propose TRACK, a novel framework to
bridge traffic state and trajectory data for dynamic road network and
trajectory representation learning. TRACK leverages graph attention networks
(GAT) to encode static and spatial road segment features, and introduces a
transformer-based model for trajectory representation learning. By
incorporating transition probabilities from trajectory data into GAT attention
weights, TRACK captures dynamic spatial features of road segments. Meanwhile,
TRACK designs a traffic transformer encoder to capture the spatial-temporal
dynamics of road segments from traffic state data. To further enhance dynamic
representations, TRACK proposes a co-attentional transformer encoder and a
trajectory-traffic state matching task. Extensive experiments on real-life
urban traffic datasets demonstrate the superiority of TRACK over
state-of-the-art baselines. Case studies confirm TRACK's ability to capture
spatial-temporal dynamics effectively.
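A hypothetical sketch of the idea described above: raw attention scores between road segments are biased by empirical transition probabilities from trajectory data before softmax normalization. The shapes and the additive-in-log-space combination rule are assumptions, not TRACK's exact formulation.

```python
import numpy as np

def transition_weighted_attention(scores, trans_prob, eps=1e-9):
    """Combine attention scores with transition probabilities, row-normalized."""
    logits = scores + np.log(trans_prob + eps)       # bias toward observed transitions
    logits -= logits.max(axis=1, keepdims=True)      # numerical stability
    w = np.exp(logits)
    return w / w.sum(axis=1, keepdims=True)

scores = np.zeros((2, 2))                            # uniform raw attention (toy)
P = np.array([[0.9, 0.1], [0.5, 0.5]])               # empirical transition frequencies (toy)
A = transition_weighted_attention(scores, P)
```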
|
2502.06871
|
FlavorDiffusion: Predicting Food Pairings and Chemical Interactions
Using Diffusion Models
|
cs.LG cs.AI
|
The study of food pairing has evolved beyond subjective expertise with the
advent of machine learning. This paper presents FlavorDiffusion, a novel
framework leveraging diffusion models to predict food-chemical interactions and
ingredient pairings without relying on chromatography. By integrating
graph-based embeddings, diffusion processes, and chemical property encoding,
FlavorDiffusion addresses data imbalances and enhances clustering quality.
Using a heterogeneous graph derived from datasets like Recipe1M and FlavorDB,
our model demonstrates superior performance in reconstructing
ingredient-ingredient relationships. The addition of a Chemical Structure
Prediction (CSP) layer further refines the embedding space, achieving
state-of-the-art NMI scores and enabling meaningful discovery of novel
ingredient combinations. The proposed framework represents a significant step
forward in computational gastronomy, offering scalable, interpretable, and
chemically informed solutions for food science.
|
2502.06872
|
Towards Trustworthy Retrieval Augmented Generation for Large Language
Models: A Survey
|
cs.CL cs.AI
|
Retrieval-Augmented Generation (RAG) is an advanced technique designed to
address the challenges of Artificial Intelligence-Generated Content (AIGC). By
integrating context retrieval into content generation, RAG provides reliable
and up-to-date external knowledge, reduces hallucinations, and ensures relevant
context across a wide range of tasks. However, despite RAG's success and
potential, recent studies have shown that the RAG paradigm also introduces new
risks, including robustness issues, privacy concerns, adversarial attacks, and
accountability issues. Addressing these risks is critical for future
applications of RAG systems, as they directly impact their trustworthiness.
Although various methods have been developed to improve the trustworthiness of
RAG methods, there is a lack of a unified perspective and framework for
research on this topic. Thus, in this paper, we aim to address this gap by
providing a comprehensive roadmap for developing trustworthy RAG systems. We
place our discussion around six key perspectives: reliability, privacy,
safety, fairness, explainability, and accountability. For each perspective, we
present a general framework and taxonomy, offering a structured approach to
understanding the current challenges, evaluating existing solutions, and
identifying promising future research directions. To encourage broader adoption
and innovation, we also highlight the downstream applications where trustworthy
RAG systems have a significant impact.
|
2502.06873
|
Multimodal Cognitive Reframing Therapy via Multi-hop Psychotherapeutic
Reasoning
|
cs.CL cs.AI
|
Previous research has revealed the potential of large language models (LLMs)
to support cognitive reframing therapy; however, their focus was primarily on
text-based methods, often overlooking the importance of non-verbal evidence
crucial in real-life therapy. To address this gap, we extend textual
cognitive reframing to multimodality, incorporating visual cues. Specifically,
we present a new dataset called Multi-Modal Cognitive Support Conversation
(M2CoSC), which pairs each GPT-4-generated dialogue with an image that reflects
the virtual client's facial expressions. To better mirror real psychotherapy,
where facial expressions lead to interpreting implicit emotional evidence, we
propose a multi-hop psychotherapeutic reasoning approach that explicitly
identifies and incorporates subtle evidence. Our comprehensive experiments with
both LLMs and vision-language models (VLMs) demonstrate that the VLMs'
performance as psychotherapists is significantly improved with the M2CoSC
dataset. Furthermore, the multi-hop psychotherapeutic reasoning method enables
VLMs to provide more thoughtful and empathetic suggestions, outperforming
standard prompting methods.
|
2502.06874
|
Group Reasoning Emission Estimation Networks
|
cs.CL cs.AI cs.LG
|
Accurate greenhouse gas (GHG) emission reporting is critical for governments,
businesses, and investors. However, adoption remains limited, particularly among
small and medium enterprises, due to high implementation costs, fragmented
emission factor databases, and a lack of robust sector classification methods.
To address these challenges, we introduce Group Reasoning Emission Estimation
Networks (GREEN), an AI-driven carbon accounting framework that standardizes
enterprise-level emission estimation, constructs a large-scale benchmark
dataset, and leverages a novel reasoning approach with large language models
(LLMs). Specifically, we compile textual descriptions for 20,850 companies with
validated North American Industry Classification System (NAICS) labels and
align these with an economic model of carbon intensity factors. By reframing
sector classification as an information retrieval task, we fine-tune
Sentence-BERT models using a contrastive learning loss. To overcome the
limitations of single-stage models in handling thousands of hierarchical
categories, we propose a Group Reasoning method that ensembles LLM classifiers
based on the natural NAICS ontology, decomposing the task into multiple
sub-classification steps. We theoretically prove that this approach reduces
classification uncertainty and computational complexity. Experiments on 1,114
NAICS categories yield state-of-the-art performance (83.68% Top-1, 91.47%
Top-10 accuracy), and case studies on 20 companies report a mean absolute
percentage error (MAPE) of 45.88%. The project is available at:
https://huggingface.co/datasets/Yvnminc/ExioNAICS.
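The Group Reasoning decomposition described above can be sketched as a level-by-level descent of the NAICS hierarchy, with a classifier choosing among children at each step instead of one classifier over thousands of leaf codes. The toy tree and scoring function below are illustrative stand-ins for the LLM classifiers.

```python
# Toy subset of a NAICS-like hierarchy (assumed for illustration).
TREE = {
    "": ["31", "51"],          # top-level sectors
    "31": ["311", "312"],
    "51": ["511", "518"],
}

def group_classify(description, score):
    """Descend the hierarchy, picking the best-scoring child at each level."""
    code = ""
    while code in TREE:
        # each step is a small sub-classification instead of one huge one
        code = max(TREE[code], key=lambda c: score(description, c))
    return code

toy_score = lambda text, code: 1.0 if code in text else 0.0
label = group_classify("software publishers 511", toy_score)
```

Decomposing the task this way is what the paper argues reduces classification uncertainty and computational cost.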
|
2502.06875
|
Beyond Vision: How Large Language Models Interpret Facial Expressions
from Valence-Arousal Values
|
cs.CV cs.AI cs.CL
|
Large Language Models primarily operate through text-based inputs and
outputs, yet human emotion is communicated through both verbal and non-verbal
cues, including facial expressions. While Vision-Language Models analyze facial
expressions from images, they are resource-intensive and may depend more on
linguistic priors than visual understanding. To address this, this study
investigates whether LLMs can infer affective meaning from dimensions of
facial expressions, namely Valence and Arousal (VA) values, which are
structured numerical representations, rather than from raw visual input. VA
values were extracted using FaceChannel
from images of facial expressions and provided to LLMs in two tasks: (1)
categorizing facial expressions into basic (on the IIMI dataset) and complex
emotions (on the Emotic dataset) and (2) generating semantic descriptions of
facial expressions (on the Emotic dataset). Results from the categorization
task indicate that LLMs struggle to classify VA values into discrete emotion
categories, particularly for emotions beyond basic polarities (e.g., happiness,
sadness). However, in the semantic description task, LLMs produced textual
descriptions that align closely with human-generated interpretations,
demonstrating a stronger capacity for free text affective inference of facial
expressions.
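A hypothetical sketch of the structured input described above: a facial expression reduced to VA coordinates is rendered as text for an LLM. The quadrant labels below follow a common textbook reading of the VA circumplex, not the paper's mapping.

```python
def va_to_prompt(valence, arousal):
    """Render VA coordinates as a textual prompt for an LLM (illustrative)."""
    quadrant = {
        (True, True): "excited/happy",
        (True, False): "calm/content",
        (False, True): "angry/afraid",
        (False, False): "sad/bored",
    }[(valence >= 0, arousal >= 0)]
    return (f"Valence={valence:+.2f}, Arousal={arousal:+.2f} "
            f"(circumplex quadrant: {quadrant}). Describe the likely emotion.")

prompt = va_to_prompt(0.62, -0.41)
```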
|
2502.06876
|
Mix Data or Merge Models? Balancing the Helpfulness, Honesty, and
Harmlessness of Large Language Model via Model Merging
|
cs.CL cs.AI cs.LG
|
Achieving balanced alignment of large language models (LLMs) in terms of
Helpfulness, Honesty, and Harmlessness (3H optimization) constitutes a
cornerstone of responsible AI, with existing methods like data mixture
strategies facing limitations including reliance on expert knowledge and
conflicting optimization signals. While model merging offers a promising
alternative by integrating specialized models, its potential for 3H
optimization remains underexplored. This paper establishes the first
comprehensive benchmark for model merging in 3H-aligned LLMs, systematically
evaluating 15 methods (12 training-free merging and 3 data mixture techniques)
across 10 datasets associated with 5 annotation dimensions, 2 LLM families, and
2 training paradigms. Our analysis reveals three pivotal insights: (i)
previously overlooked collaborative/conflicting relationships among 3H
dimensions, (ii) the consistent superiority of model merging over data mixture
approaches in balancing alignment trade-offs, and (iii) the critical role of
parameter-level conflict resolution through redundant component pruning and
outlier mitigation. Building on these findings, we propose R-TSVM, a
Reweighting-enhanced Task Singular Vector Merging method that incorporates
outlier-aware parameter weighting and sparsity-adaptive rank selection
strategies adapted to the heavy-tailed parameter distributions and sparsity of
LLMs, further improving LLM alignment across multiple evaluations. We release
our trained models for further exploration.
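A minimal sketch of the training-free merging baseline the benchmark above evaluates: task vectors (fine-tuned minus base weights) are scaled and summed onto the base model. The coefficients and the toy "models" (plain dicts of arrays) are illustrative; R-TSVM additionally reweights parameters and prunes redundant components.

```python
import numpy as np

def merge_task_vectors(base, experts, coeffs):
    """base and each experts[i]: dict name -> array. Returns merged weights."""
    merged = {}
    for name, w in base.items():
        # weighted sum of task vectors for this parameter tensor
        tv = sum(c * (e[name] - w) for c, e in zip(coeffs, experts))
        merged[name] = w + tv
    return merged

base = {"w": np.zeros(3)}
helpful = {"w": np.array([1.0, 0.0, 0.0])}     # toy "helpfulness" expert
harmless = {"w": np.array([0.0, 2.0, 0.0])}    # toy "harmlessness" expert
merged = merge_task_vectors(base, [helpful, harmless], [0.5, 0.5])
```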
|
2502.06877
|
WirelessGPT: A Generative Pre-trained Multi-task Learning Framework for
Wireless Communication
|
cs.LG
|
This paper introduces WirelessGPT, a pioneering foundation model specifically
designed for multi-task learning in wireless communication and sensing.
Specifically, WirelessGPT leverages large-scale wireless channel datasets for
unsupervised pretraining, extracting universal channel representations that
capture complex spatiotemporal dependencies. This task-agnostic design allows
WirelessGPT to adapt seamlessly to a wide range of downstream tasks, using
a unified representation with minimal fine-tuning. By unifying communication
and sensing functionalities, WirelessGPT addresses the limitations of
task-specific models, offering a scalable and efficient solution for integrated
sensing and communication (ISAC). With an initial parameter size of around 80
million, WirelessGPT demonstrates significant improvements over conventional
methods and smaller AI models, reducing reliance on large-scale labeled data.
As the first foundation model capable of supporting diverse tasks across
different domains, WirelessGPT establishes a new benchmark, paving the way for
future advancements in multi-task wireless systems.
|
2502.06878
|
Deep Learning Meets Oversampling: A Learning Framework to Handle
Imbalanced Classification
|
cs.LG
|
Despite extensive research spanning several decades, class imbalance is still
considered a profound difficulty for both machine learning and deep learning
models. While data oversampling is the foremost technique to address this
issue, traditional sampling techniques are often decoupled from the training
phase of the predictive model, resulting in suboptimal representations. To
address this, we propose a novel learning framework that can generate synthetic
data instances in a data-driven manner. The proposed framework formulates the
oversampling process as a composition of discrete decision criteria, thereby
enhancing the representation power of the model's learning process. Extensive
experiments on the imbalanced classification task demonstrate the superiority
of our framework over state-of-the-art algorithms.
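For context, the traditional oversampling the abstract contrasts with (decoupled from training) can be sketched as SMOTE-style interpolation between minority samples. The neighbour choice and interpolation scheme below are illustrative, not the proposed framework's learned criteria.

```python
import numpy as np

def interpolate_oversample(X_min, n_new, rng):
    """Create n_new synthetic minority samples by convex interpolation."""
    synthetic = []
    for _ in range(n_new):
        i, j = rng.choice(len(X_min), size=2, replace=False)  # two distinct minority points
        lam = rng.uniform()
        synthetic.append(X_min[i] + lam * (X_min[j] - X_min[i]))
    return np.array(synthetic)

rng = np.random.default_rng(0)
X_min = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])  # toy minority class
X_new = interpolate_oversample(X_min, 4, rng)
```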
|
2502.06879
|
CluStRE: Streaming Graph Clustering with Multi-Stage Refinement
|
cs.LG cs.DB
|
We present CluStRE, a novel streaming graph clustering algorithm that
balances computational efficiency with high-quality clustering using
multi-stage refinement. Unlike traditional in-memory clustering approaches,
CluStRE processes graphs in a streaming setting, significantly reducing memory
overhead while leveraging re-streaming and evolutionary heuristics to improve
solution quality. Our method dynamically constructs a quotient graph, enabling
modularity-based optimization while efficiently handling large-scale graphs. We
introduce multiple configurations of CluStRE to provide trade-offs between
speed, memory consumption, and clustering quality. Experimental evaluations
demonstrate that CluStRE improves solution quality by 89.8%, operates 2.6 times
faster, and uses less than two-thirds of the memory required by the
state-of-the-art streaming clustering algorithm on average. Moreover, our
strongest mode enhances solution quality by up to 150% on average. With this,
CluStRE achieves solution quality comparable to in-memory algorithms, i.e., over
96% of the quality of clustering approaches such as Louvain, effectively
bridging the gap between streaming and traditional clustering methods.
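The streaming setting described above can be illustrated as follows: nodes arrive one at a time with their adjacency lists, and each is greedily assigned to the neighbouring cluster with the most edges into it (a new cluster if no clustered neighbour exists yet). This is a generic one-pass heuristic for context, not CluStRE's modularity-based optimization.

```python
def stream_cluster(stream):
    """stream: iterable of (node, neighbours). Returns dict node -> cluster id."""
    cluster = {}
    next_id = 0
    for node, neighbours in stream:
        votes = {}
        for n in neighbours:
            if n in cluster:                      # count edges into already-seen clusters
                votes[cluster[n]] = votes.get(cluster[n], 0) + 1
        if votes:
            cluster[node] = max(votes, key=votes.get)
        else:
            cluster[node] = next_id               # open a new cluster
            next_id += 1
    return cluster

stream = [("a", []), ("b", ["a"]), ("c", ["a", "b"]), ("d", []), ("e", ["d"])]
assignment = stream_cluster(stream)
```

Re-streaming, as in CluStRE, would revisit this one-pass assignment to refine it.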
|
2502.06882
|
Multi-Agent Simulator Drives Language Models for Legal Intensive
Interaction
|
cs.CL cs.AI
|
Large Language Models (LLMs) have significantly advanced legal intelligence,
but the scarcity of scenario data impedes the progress toward interactive legal
scenarios. This paper introduces a Multi-agent Legal Simulation Driver (MASER)
to scalably generate synthetic data by simulating interactive legal scenarios.
Leveraging real legal case sources, MASER ensures the consistency of legal
attributes between participants and introduces a supervisory mechanism to align
participants' characters and behaviors and to address distractions. A
Multi-stage Interactive Legal Evaluation (MILE) benchmark is further
constructed to evaluate LLMs' performance in dynamic legal scenarios. Extensive
experiments confirm the effectiveness of our framework.
|
2502.06884
|
Learning Conformal Abstention Policies for Adaptive Risk Management in
Large Language and Vision-Language Models
|
cs.LG cs.AI
|
Large Language and Vision-Language Models (LLMs/VLMs) are increasingly used
in safety-critical applications, yet their opaque decision-making complicates
risk assessment and reliability. Uncertainty quantification (UQ) helps assess
prediction confidence and enables abstention when uncertainty is high.
Conformal prediction (CP), a leading UQ method, provides statistical guarantees
but relies on static thresholds, which fail to adapt to task complexity and
evolving data distributions, leading to suboptimal trade-offs in accuracy,
coverage, and informativeness. To address this, we propose learnable conformal
abstention, integrating reinforcement learning (RL) with CP to optimize
abstention thresholds dynamically. By treating CP thresholds as adaptive
actions, our approach balances multiple objectives, minimizing prediction set
size while maintaining reliable coverage. Extensive evaluations across diverse
LLM/VLM benchmarks show our method outperforms Least Ambiguous Classifiers
(LAC) and Adaptive Prediction Sets (APS), improving accuracy by up to 3.2%,
boosting AUROC for hallucination detection by 22.19%, enhancing
uncertainty-guided selective generation (AUARC) by 21.17%, and reducing
calibration error by 70%-85%. These improvements hold across multiple models
and datasets while consistently meeting the 90% coverage target, establishing
our approach as a more effective and flexible solution for reliable
decision-making in safety-critical applications. The code is available at:
https://github.com/sinatayebati/vlm-uncertainty.
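For context, the static-threshold conformal baseline the abstract improves upon (LAC-style) can be sketched as calibrating a split-conformal quantile on held-out nonconformity scores and thresholding label scores against it. The synthetic scores and target coverage below are illustrative assumptions.

```python
import numpy as np

def conformal_threshold(cal_scores, alpha=0.1):
    """Quantile of calibration nonconformity scores giving ~(1 - alpha) coverage."""
    n = len(cal_scores)
    q = np.ceil((n + 1) * (1 - alpha)) / n        # finite-sample correction
    return np.quantile(cal_scores, min(q, 1.0))

def prediction_set(probs, threshold):
    """Labels whose nonconformity score (1 - prob) falls within the threshold."""
    return {k for k, p in enumerate(probs) if 1.0 - p <= threshold}

rng = np.random.default_rng(0)
cal_scores = rng.uniform(size=1000)               # stand-in for 1 - prob of true label
t = conformal_threshold(cal_scores, alpha=0.1)    # static threshold; the paper learns it
S = prediction_set([0.7, 0.2, 0.1], t)
```

The proposed method replaces the fixed threshold `t` with an RL-adapted action.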
|
2502.06885
|
Topological derivative approach for deep neural network architecture
adaptation
|
cs.LG cs.AI
|
This work presents a novel algorithm for progressively adapting neural
network architecture along the depth. In particular, we attempt to address the
following questions in a mathematically principled way: i) Where to add a new
capacity (layer) during the training process? ii) How to initialize the new
capacity? At the heart of our approach are two key ingredients: i) the
introduction of a ``shape functional'' to be minimized, which depends on neural
network topology, and ii) the introduction of a topological derivative of the
shape functional with respect to the neural network topology. Using an optimal
control viewpoint, we show that the network topological derivative exists under
certain conditions, and its closed-form expression is derived. In particular,
we explore, for the first time, the connection between the topological
derivative from a topology optimization framework and the Hamiltonian from
optimal control theory. Further, we show that the optimality condition for the
shape functional leads to an eigenvalue problem for deep neural architecture
adaptation. Our approach thus determines the most sensitive location along the
depth where a new layer needs to be inserted during the training phase and the
associated parametric initialization for the newly added layer. We also
demonstrate that our layer insertion strategy can be derived from an optimal
transport viewpoint as a solution to maximizing a topological derivative in
$p$-Wasserstein space, where $p \ge 1$. Numerical investigations with fully
connected networks, convolutional neural networks, and vision transformers on
various regression and classification problems demonstrate that our proposed
approach can outperform an ad-hoc baseline network and other architecture
adaptation strategies. Further, we also demonstrate other applications of
topological derivative in fields such as transfer learning.
|
2502.06887
|
Gradient Based Method for the Fusion of Lattice Quantizers
|
cs.LG cs.AI
|
In practical applications, lattice quantizers leverage discrete lattice
points to approximate arbitrary points in space. An effective lattice
quantizer significantly enhances both the accuracy and efficiency of these
approximations. In the context of high-dimensional lattice quantization,
previous work proposed utilizing low-dimensional optimal lattice quantizers and
addressed the challenge of determining the optimal length ratio in orthogonal
splicing. Notably, it was demonstrated that fixed length ratios and
orthogonality yield suboptimal results when combining low-dimensional lattices.
Building on this foundation, another approach employed gradient descent to
identify optimal lattices, which inspired us to explore the use of neural
networks to discover matrices that outperform those obtained from orthogonal
splicing methods. We propose two novel approaches to tackle this problem: the
Household Algorithm and the Matrix Exp Algorithm. Our results indicate that
both the Household Algorithm and the Matrix Exp Algorithm achieve improvements
in lattice quantizers across dimensions 13, 15, 17 to 19, 21, and 22. Moreover,
the Matrix Exp Algorithm demonstrates superior efficacy in high-dimensional
settings.
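The standard figure of merit for a lattice quantizer is its normalized second moment (NSM). A minimal Monte Carlo sketch for the integer lattice Z^n, whose nearest-point quantizer is plain rounding (illustrative only; the Household and Matrix Exp algorithms from the abstract are not reproduced here):

```python
import numpy as np

def nsm_integer_lattice(n, num_samples=200_000, seed=0):
    """Monte Carlo estimate of the normalized second moment (NSM)
    of the integer lattice Z^n, whose quantizer is plain rounding.

    NSM = E[||x - Q(x)||^2] / (n * V^(2/n)), with covolume V = 1 for Z^n.
    """
    rng = np.random.default_rng(seed)
    # Sample uniformly over one fundamental cell centred at the origin.
    x = rng.uniform(-0.5, 0.5, size=(num_samples, n))
    err = x - np.round(x)                       # quantization error
    return np.mean(np.sum(err**2, axis=1)) / n  # V^(2/n) = 1

# For Z^n the exact NSM is 1/12 ~ 0.08333 in every dimension; optimized
# lattices in higher dimensions push this value lower.
```

Better lattices (the kind the Household and Matrix Exp algorithms search for) achieve NSM values strictly below 1/12.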
|
2502.06888
|
Klotski: Efficient Mixture-of-Expert Inference via Expert-Aware
Multi-Batch Pipeline
|
cs.LG cs.AI
|
Mixture of Experts (MoE), with its distinctive sparse structure, enables the
scaling of language models up to trillions of parameters without significantly
increasing computational costs. However, the substantial parameter size
presents a challenge for inference, as the expansion in GPU memory cannot keep
pace with the growth in parameters. Although offloading techniques utilise
memory from the CPU and disk and parallelise the I/O and computation for
efficiency, the computation time for each expert in MoE models is often shorter
than its I/O time, resulting in numerous bubbles in the pipeline.
Therefore, we propose Klotski, an efficient MoE inference engine that
significantly reduces pipeline bubbles through a novel expert-aware multi-batch
pipeline paradigm. The proposed paradigm uses batch processing to extend the
computation time of the current layer to overlap with the loading time of the
next layer. Although this idea has been effectively applied to dense models,
more batches may activate more experts in the MoE, leading to longer loading
times and more bubbles. Thus, unlike traditional approaches, we balance
computation and I/O time and minimise bubbles by orchestrating their inference
orders based on their heterogeneous computation and I/O requirements and
activation patterns under different batch numbers. Moreover, to adapt to
different hardware environments and models, we design a constraint-sensitive
I/O-compute planner and a correlation-aware expert prefetcher for a schedule
that minimises pipeline bubbles. Experimental results demonstrate that Klotski
achieves a superior throughput-latency trade-off compared to state-of-the-art
techniques, with throughput improvements of up to 85.12x.
|
2502.06889
|
Secure Visual Data Processing via Federated Learning
|
cs.CV
|
As the demand for privacy in visual data management grows, safeguarding
sensitive information has become a critical challenge. This paper addresses the
need for privacy-preserving solutions in large-scale visual data processing by
leveraging federated learning. Although there have been developments in this
field, previous research has mainly focused on integrating object detection
with either anonymization or federated learning. However, these pairs often
fail to address complex privacy concerns. On the one hand, object detection
with anonymization alone can be vulnerable to reverse techniques. On the other
hand, federated learning may not provide sufficient privacy guarantees.
Therefore, we propose a new approach that combines object detection, federated
learning and anonymization. Combining these three components aims to offer a
robust privacy protection strategy by addressing different vulnerabilities in
visual data. Our solution is evaluated against traditional centralized models,
showing that while there is a slight trade-off in accuracy, the privacy
benefits are substantial, making it well-suited for privacy-sensitive
applications.
|
2502.06890
|
LLMs for Drug-Drug Interaction Prediction: A Comprehensive Comparison
|
cs.LG cs.AI q-bio.QM
|
The increasing volume of drug combinations in modern therapeutic regimens
calls for reliable methods for predicting drug-drug interactions (DDIs). While
Large Language Models (LLMs) have revolutionized various domains, their
potential in pharmaceutical research, particularly in DDI prediction, remains
largely unexplored. This study thoroughly investigates LLMs' capabilities in
predicting DDIs by uniquely processing molecular structures (SMILES), target
organisms, and gene interaction data as raw text input from the latest DrugBank
dataset. We evaluated 18 different LLMs, including proprietary models (GPT-4,
Claude, Gemini) and open-source variants (from 1.5B to 72B parameters), first
assessing their zero-shot capabilities in DDI prediction. We then fine-tuned
selected models (GPT-4, Phi-3.5 2.7B, Qwen-2.5 3B, Gemma-2 9B, and Deepseek R1
distilled Qwen 1.5B) to optimize their performance. Our comprehensive
evaluation framework included validation across 13 external DDI datasets,
comparing against traditional approaches such as l2-regularized logistic
regression. Fine-tuned LLMs demonstrated superior performance, with Phi-3.5
2.7B achieving a sensitivity of 0.978 and an accuracy of 0.919 in DDI
prediction on balanced datasets (50% positive, 50% negative cases). This result
represents an improvement over both zero-shot predictions and state-of-the-art
machine-learning methods used for DDI prediction. Our analysis reveals that
LLMs can effectively capture complex molecular interaction patterns and cases
where drug pairs target common genes, making them valuable tools for practical
applications in pharmaceutical research and clinical settings.
|
2502.06891
|
ScaffoldGPT: A Scaffold-based Large Language Model for Drug Improvement
|
q-bio.BM cs.CL cs.LG
|
Drug optimization has become increasingly crucial in light of fast-mutating
virus strains and drug-resistant cancer cells. Nevertheless, it remains
challenging as it necessitates retaining the beneficial properties of the
original drug while simultaneously enhancing desired attributes beyond its
scope. In this work, we aim to tackle this challenge by introducing
ScaffoldGPT, a novel Large Language Model (LLM) designed for drug optimization
based on molecular scaffolds. Our work comprises three key components: (1) A
three-stage drug optimization approach that integrates pretraining, finetuning,
and decoding optimization. (2) A uniquely designed two-phase incremental
training approach for pre-training the drug optimization LLM-based generator on
molecule scaffold with enhanced performance. (3) A token-level decoding
optimization strategy, TOP-N, that enables controlled, reward-guided
generation using pretrained/finetuned LLMs. Finally, by conducting a
comprehensive evaluation on COVID and cancer benchmarks, we demonstrate that
ScaffoldGPT outperforms the competing baselines in drug optimization
benchmarks, while excelling in preserving the original functional scaffold and
enhancing desired properties.
|
2502.06892
|
Certifying Language Model Robustness with Fuzzed Randomized Smoothing:
An Efficient Defense Against Backdoor Attacks
|
cs.LG cs.AI
|
The widespread deployment of pre-trained language models (PLMs) has exposed
them to textual backdoor attacks, particularly those planted during the
pre-training stage. These attacks pose significant risks to high-reliability
applications, as they can stealthily affect multiple downstream tasks. While
certifying robustness against such threats is crucial, existing defenses
struggle with the high-dimensional, interdependent nature of textual data and
the lack of access to original poisoned pre-training data. To address these
challenges, we introduce \textbf{F}uzzed \textbf{R}andomized \textbf{S}moothing
(\textbf{FRS}), a novel approach for efficiently certifying language model
robustness against backdoor attacks. FRS integrates software robustness
certification techniques with biphased model parameter smoothing, employing
Monte Carlo tree search for proactive fuzzing to identify vulnerable textual
segments within the Damerau-Levenshtein space. This allows for targeted and
efficient text randomization, while eliminating the need for access to poisoned
training data during model smoothing. Our theoretical analysis demonstrates
that FRS achieves a broader certified robustness radius compared to existing
methods. Extensive experiments across various datasets, model configurations,
and attack strategies validate FRS's superiority in terms of defense
efficiency, accuracy, and robustness.
|
2502.06893
|
A New Hybrid Intelligent Approach for Multimodal Detection of Suspected
Disinformation on TikTok
|
cs.CV cs.CL cs.MM cs.SC
|
In the context of the rapid dissemination of multimedia content, identifying
disinformation on social media platforms such as TikTok represents a
significant challenge. This study introduces a hybrid framework that combines
the computational power of deep learning with the interpretability of fuzzy
logic to detect suspected disinformation in TikTok videos. The methodology
comprises two core components: a multimodal feature analyser that extracts
and evaluates data from text, audio, and video; and a multimodal disinformation
detector based on fuzzy logic. These systems operate in conjunction to evaluate
the suspicion of spreading disinformation, drawing on human behavioural cues
such as body language, speech patterns, and text coherence. Two experiments
were conducted: one focusing on context-specific disinformation and the other
on the scalability of the model across broader topics. For each video
evaluated, high-quality, comprehensive, well-structured reports are generated,
providing a detailed view of the disinformation behaviours.
|
2502.06894
|
AI-Driven HSI: Multimodality, Fusion, Challenges, and the Deep Learning
Revolution
|
cs.CV cs.AI
|
Hyperspectral imaging (HSI) captures spatial and spectral data, enabling
analysis of features invisible to conventional systems. The technology is vital
in fields such as weather monitoring, food quality control, counterfeit
detection, and healthcare diagnostics, and extends into defense, agriculture,
and industrial automation. HSI has advanced with improvements in
spectral resolution, miniaturization, and computational methods. This study
provides an overview of the HSI, its applications, challenges in data fusion
and the role of deep learning models in processing HSI data. We discuss how
integration of multimodal HSI with AI, particularly with deep learning,
improves classification accuracy and operational efficiency. Deep learning
enhances HSI analysis in areas like feature extraction, change detection,
denoising, unmixing, dimensionality reduction, landcover mapping, data
augmentation, spectral construction and super resolution. An emerging focus is
the fusion of hyperspectral cameras with large language models (LLMs), referred
to as highbrain LLMs, enabling the development of advanced applications such as
low-visibility crash detection and face antispoofing. We also highlight key
players in the HSI industry, its compound annual growth rate and its growing
industrial significance. The purpose is to offer insight to both technical and
non-technical audiences, covering HSI's images, trends, and future directions,
while providing valuable information on HSI datasets and software libraries.
|
2502.06895
|
A Comprehensive Review of U-Net and Its Variants: Advances and
Applications in Medical Image Segmentation
|
eess.IV cs.CV
|
Medical images often exhibit low and blurred contrast between lesions and
surrounding tissues, with considerable variation in lesion edges and shapes
even within the same disease, leading to significant challenges in
segmentation. Therefore, precise segmentation of lesions has become an
essential prerequisite for patient condition assessment and formulation of
treatment plans. Significant achievements have been made in research related to
the U-Net model in recent years. It improves segmentation performance and is
extensively applied in the semantic segmentation of medical images to offer
technical support for consistent quantitative lesion analysis methods. First,
this paper classifies medical image datasets on the basis of their imaging
modalities and then examines U-Net and its various improvement models from the
perspective of structural modifications. The research objectives, innovative
designs, and limitations of each approach are discussed in detail. Second, we
summarize the four central improvement mechanisms of the U-Net and U-Net
variant algorithms: the skip-connection mechanism, the residual-connection
mechanism, 3D-UNet, and the transformer mechanism. Finally, we examine the
relationships among the four core enhancement mechanisms and commonly utilized
medical datasets and propose potential avenues and strategies for future
advancements. This paper provides a systematic summary and reference for
researchers in related fields, and we look forward to designing more efficient
and stable medical image segmentation network models based on the U-Net
network.
|
2502.06897
|
PyPotteryInk: One-Step Diffusion Model for Sketch to Publication-ready
Archaeological Drawings
|
cs.GR cs.AI cs.CV
|
Archaeological pottery documentation traditionally requires a time-consuming
manual process of converting pencil sketches into publication-ready inked
drawings. I present PyPotteryInk, an open-source automated pipeline that
transforms archaeological pottery sketches into standardised publication-ready
drawings using a one-step diffusion model. Built on a modified img2img-turbo
architecture, the system processes drawings in a single forward pass while
preserving crucial morphological details and maintaining archaeological
documentation standards and analytical value. The model employs an efficient
patch-based approach with dynamic overlap, enabling high-resolution output
regardless of input drawing size. I demonstrate the effectiveness of the
approach on a dataset of Italian protohistoric pottery drawings, where it
successfully captures both fine details like decorative patterns and structural
elements like vessel profiles or handling elements. Expert evaluation confirms
that the generated drawings meet publication standards while significantly
reducing processing time from hours to seconds per drawing. The model can be
fine-tuned to adapt to different archaeological contexts with minimal training
data, making it versatile across various pottery documentation styles. The
pre-trained models, the Python library and comprehensive documentation are
provided to facilitate adoption within the archaeological research community.
|
2502.06898
|
Large Language Models for In-File Vulnerability Localization Can Be
"Lost in the End"
|
cs.SE cs.AI
|
Recent advancements in artificial intelligence have enabled processing of
larger inputs, leading everyday software developers to increasingly rely on
chat-based large language models (LLMs) like GPT-3.5 and GPT-4 to detect
vulnerabilities across entire files, not just within functions. This new
development practice requires researchers to urgently investigate whether
commonly used LLMs can effectively analyze large file-sized inputs, in order to
provide timely insights for software developers and engineers about the pros
and cons of this emerging technological trend. Hence, the goal of this paper is
to evaluate the effectiveness of several state-of-the-art chat-based LLMs,
including the GPT models, in detecting in-file vulnerabilities. We conducted a
costly investigation into how the performance of LLMs varies based on
vulnerability type, input size, and vulnerability location within the file. To
give enough statistical power to our study, we could only focus on the three
most common (as well as dangerous) vulnerabilities: XSS, SQL injection, and
path traversal. Our findings indicate that the effectiveness of LLMs in
detecting these vulnerabilities is strongly influenced by both the location of
the vulnerability and the overall size of the input. Specifically, regardless
of the vulnerability type, LLMs tend to significantly (p < .05) underperform
when detecting vulnerabilities located toward the end of larger files, a
pattern we call the 'lost-in-the-end' effect. Finally, to further support
software developers and practitioners, we also explored the optimal input size
for these LLMs and presented a simple strategy for identifying it, which can be
applied to other models and vulnerability types. Eventually, we show how
adjusting the input size can lead to significant improvements in LLM-based
vulnerability detection, with an average recall increase of over 37% across all
models.
|
2502.06899
|
A Sociotechnical Approach for Knowledge Management (KM)
|
cs.DB cs.AI
|
This article presents a sociotechnical framework for KM. This sociotechnical
vision of KM makes it possible: (1) to remove KM from a purely commercial
concern; (2) to distinguish the different KM technologies; and (3) to question
the paradigms associated with the social and technical components of KM. It is
precisely this last point
that this article develops to identify the generic mechanisms of KM. More
precisely, the social aspect is explained through the organizational approach
to KM, the managerial approach to KM, and the biological approach to KM. In
contrast, the technical aspect is described through the knowledge and skills
engineering approach to KM. These approaches also lead us to provide a
comparative table between these organizational, managerial, and biological
visions of KM.
|
2502.06900
|
Polynomial Regret Concentration of UCB for Non-Deterministic State
Transitions
|
cs.LG cs.DM
|
Monte Carlo Tree Search (MCTS) has proven effective in solving
decision-making problems in perfect information settings. However, its
application to stochastic and imperfect information domains remains limited.
This paper extends the theoretical framework of MCTS to stochastic domains by
addressing non-deterministic state transitions, where actions lead to
probabilistic outcomes. Specifically, building on the work of Shah et al.
(2020), we derive polynomial regret concentration bounds for the Upper
Confidence Bound algorithm in multi-armed bandit problems with stochastic
transitions, offering improved theoretical guarantees. Our primary contribution
is proving that these bounds also apply to non-deterministic environments,
ensuring robust performance in stochastic settings. This broadens the
applicability of MCTS to real-world decision-making problems with probabilistic
outcomes, such as in autonomous systems and financial decision-making.
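For reference, the Upper Confidence Bound rule the bounds above concern can be sketched in a few lines. This is a generic UCB1 implementation (the paper's stochastic-transition setting and exact exploration constants are not modelled here; `pull`, `c`, and the reward range [0, 1] are illustrative assumptions):

```python
import math
import random

def ucb1(pull, num_arms, horizon, c=2.0, seed=0):
    """Minimal UCB1: pull(arm) returns a reward in [0, 1].
    At step t, plays the arm maximizing mean + sqrt(c * ln(t) / n_arm)."""
    random.seed(seed)
    counts = [0] * num_arms
    sums = [0.0] * num_arms
    for t in range(1, horizon + 1):
        if t <= num_arms:          # play every arm once to initialize
            arm = t - 1
        else:
            arm = max(range(num_arms),
                      key=lambda a: sums[a] / counts[a]
                      + math.sqrt(c * math.log(t) / counts[a]))
        r = pull(arm)
        counts[arm] += 1
        sums[arm] += r
    return counts
```

On a two-armed Bernoulli bandit with means 0.2 and 0.8, the pull counts concentrate on the better arm, which is the behaviour the regret concentration bounds quantify.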
|
2502.06901
|
Enabling Autoregressive Models to Fill In Masked Tokens
|
cs.LG cs.AI cs.CL
|
Historically, LLMs have been trained using either autoregressive (AR) or
masked language modeling (MLM) objectives, with AR models gaining dominance in
recent years. However, AR models are inherently incapable of masked infilling,
which is the ability to predict masked tokens between past and future context.
In contrast, MLM models suffer from intrinsic computational inefficiencies
during both training and inference that hinder their scalability. This work
introduces MARIA (Masked and Autoregressive Infilling Architecture), a novel
approach that leverages the strengths of both paradigms to achieve
state-of-the-art masked infilling performance. MARIA combines a pre-trained MLM
and AR model by training a linear decoder that takes their concatenated hidden
states as input. This minimal modification enables the AR model to perform
infilling while retaining its inherent advantages in terms of faster inference
with KV caching. Our results demonstrate that MARIA significantly outperforms
existing methods, namely discrete diffusion models, on masked infilling tasks.
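The architectural idea is concrete enough to sketch: freeze both models and train only a linear map on their concatenated hidden states. Below is a toy numpy illustration with random stand-in features (all names, dimensions, and the ridge-regression training are invented for illustration; this is not the authors' code):

```python
import numpy as np

rng = np.random.default_rng(0)
num_tokens, d_mlm, d_ar, vocab = 512, 64, 64, 100

# Stand-ins for hidden states produced by a frozen MLM and a frozen AR model.
h_mlm = rng.standard_normal((num_tokens, d_mlm))
h_ar = rng.standard_normal((num_tokens, d_ar))
h = np.concatenate([h_mlm, h_ar], axis=1)          # (num_tokens, d_mlm + d_ar)

targets = rng.integers(0, vocab, size=num_tokens)  # token ids to predict
onehot = np.eye(vocab)[targets]

# Train only the linear decoder W (closed-form ridge least squares here;
# the paper would train it by gradient descent on real hidden states).
lam = 1e-2
W = np.linalg.solve(h.T @ h + lam * np.eye(h.shape[1]), h.T @ onehot)
logits = h @ W                                     # (num_tokens, vocab)
pred = logits.argmax(axis=1)
```

The appeal of this design is that only the small matrix W is trained, while the AR model keeps its KV-cached fast inference.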
|
2502.06902
|
Emergence of Episodic Memory in Transformers: Characterizing Changes in
Temporal Structure of Attention Scores During Training
|
cs.LG cs.AI cs.CL
|
We investigate in-context temporal biases in attention heads and transformer
outputs. Using cognitive science methodologies, we analyze attention scores and
outputs of the GPT-2 models of varying sizes. Across attention heads, we
observe effects characteristic of human episodic memory, including temporal
contiguity, primacy and recency. Transformer outputs demonstrate a tendency
toward in-context serial recall. Importantly, this effect is eliminated after
the ablation of the induction heads, which are the driving force behind the
contiguity effect. Our findings offer insights into how transformers organize
information temporally during in-context learning, shedding light on their
similarities and differences with human memory and learning.
|
2502.06905
|
Lightweight Dataset Pruning without Full Training via Example Difficulty
and Prediction Uncertainty
|
cs.LG cs.AI
|
Recent advances in deep learning rely heavily on massive datasets, leading to
substantial storage and training costs. Dataset pruning aims to alleviate this
demand by discarding redundant examples. However, many existing methods require
training a model with a full dataset over a large number of epochs before being
able to prune the dataset, which ironically makes the pruning process more
expensive than just training the model on the entire dataset. To overcome this
limitation, we introduce a Difficulty and Uncertainty-Aware Lightweight (DUAL)
score, which aims to identify important samples from the early training stage
by considering both example difficulty and prediction uncertainty. To address
the catastrophic accuracy drop at extreme pruning ratios, we further propose
ratio-adaptive sampling using a Beta distribution. Experiments on various
datasets and learning scenarios such as image classification with label noise
and image corruption, and model architecture generalization demonstrate the
superiority of our method over previous state-of-the-art (SOTA) approaches.
Specifically, on ImageNet-1k, our method reduces the time cost for pruning to
66% compared to previous methods while achieving SOTA performance, specifically 60% test
accuracy at a 90% pruning ratio. On CIFAR datasets, the time cost is reduced to
just 15% while maintaining SOTA performance.
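The abstract does not give the DUAL formula; the toy sketch below only illustrates the general idea of combining example difficulty with prediction uncertainty measured over early training epochs (the score name, inputs, and the product combination rule are all assumptions, not the paper's definition):

```python
import numpy as np

def dual_like_score(target_probs):
    """Illustrative difficulty-and-uncertainty score.

    target_probs: array (num_examples, num_early_epochs) holding each
    example's predicted probability of its true class over early epochs.
      difficulty  = 1 - mean probability  (hard examples score high)
      uncertainty = std of probability    (unstable examples score high)
    """
    difficulty = 1.0 - target_probs.mean(axis=1)
    uncertainty = target_probs.std(axis=1)
    return difficulty * uncertainty  # one plausible combination

# Toy check: an easy, stable example vs. a hard, noisy one.
easy = np.full((1, 5), 0.95)
hard = np.array([[0.1, 0.6, 0.2, 0.7, 0.3]])
```

An importance-based pruner would then keep the highest-scoring examples, without ever training the model to convergence on the full dataset.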
|
2502.06906
|
Learning-based estimation of cattle weight gain and its influencing
factors
|
cs.LG cs.AI
|
Many cattle farmers still depend on manual methods to measure the live weight
gain of cattle at set intervals, which is time-consuming, labour-intensive, and
stressful for both the animals and handlers. A remote and autonomous monitoring
system using machine learning (ML) or deep learning (DL) can provide a more
efficient and less invasive method and also predictive capabilities for future
cattle weight gain (CWG). This system allows continuous monitoring and
estimation of individual cattle live weight gain, growth rates and weight
fluctuations considering various factors like environmental conditions, genetic
predispositions, feed availability, movement patterns and behaviour. Several
researchers have explored the efficiency of estimating CWG using ML and DL
algorithms. However, estimating CWG suffers from a lack of consistency in its
application. Moreover, ML or DL can provide weight gain estimations based on
several features that vary in existing research. Additionally, previous studies
have encountered various data related challenges when estimating CWG. This
paper presents a comprehensive investigation in estimating CWG using advanced
ML techniques based on research articles (between 2004 and 2024). This study
investigates the current tools, methods, and features used in CWG estimation,
as well as their strengths and weaknesses. The findings highlight the
significance of using advanced ML approaches in CWG estimation and the factors
that critically influence it. Furthermore, this study identifies potential research
gaps and provides research direction on CWG prediction, which serves as a
reference for future research in this area.
|
2502.06907
|
Can ChatGPT Diagnose Alzheimer's Disease?
|
cs.LG cs.AI
|
Can ChatGPT diagnose Alzheimer's Disease (AD)? AD is a devastating
neurodegenerative condition that affects approximately 1 in 9 individuals aged
65 and older, profoundly impairing memory and cognitive function. This paper
utilises 9300 electronic health records (EHRs) with data from Magnetic
Resonance Imaging (MRI) and cognitive tests to address an intriguing question:
As a general-purpose task solver, can ChatGPT accurately detect AD using EHRs?
We present an in-depth evaluation of ChatGPT using a black-box approach with
zero-shot and multi-shot methods. This study unlocks ChatGPT's capability to
analyse MRI and cognitive test results, as well as its potential as a
diagnostic tool for AD. By automating aspects of the diagnostic process, this
research opens a transformative approach for the healthcare system,
particularly in addressing disparities in resource-limited regions where AD
specialists are scarce. Hence, it offers a foundation for a promising method
for early detection, supporting individuals with timely interventions, which is
paramount for Quality of Life (QoL).
|
2502.06909
|
Satisfaction-Aware Incentive Scheme for Federated Learning in Industrial
Metaverse: DRL-Based Stackelberg Game Approach
|
cs.LG cs.AI cs.GT
|
Industrial Metaverse leverages the Industrial Internet of Things (IIoT) to
integrate data from diverse devices, employing federated learning and
meta-computing to train models in a distributed manner while ensuring data
privacy. Achieving an immersive experience for industrial Metaverse
necessitates maintaining a balance between model quality and training latency.
Consequently, a primary challenge in federated learning tasks is optimizing
overall system performance by balancing model quality and training latency.
This paper designs a satisfaction function that accounts for data size, Age of
Information (AoI), and training latency. Additionally, the satisfaction
function is incorporated into the utility functions to incentivize node
participation in model training. We model the utility functions of servers and
nodes as a two-stage Stackelberg game and employ a deep reinforcement learning
approach to learn the Stackelberg equilibrium. This approach ensures balanced
rewards and enhances the applicability of the incentive scheme for industrial
Metaverse. Simulation results demonstrate that, under the same budget
constraints, the proposed incentive scheme improves utility by at least 23.7%
compared with existing schemes without compromising model accuracy.
|
2502.06910
|
TimeKAN: KAN-based Frequency Decomposition Learning Architecture for
Long-term Time Series Forecasting
|
cs.LG cs.AI
|
Real-world time series often have multiple frequency components that are
intertwined with each other, making accurate time series forecasting
challenging. Decomposing the mixed frequency components into multiple single
frequency components is a natural choice. However, the information density of
patterns varies across different frequencies, and employing a uniform modeling
approach for different frequency components can lead to inaccurate
characterization. To address these challenges, inspired by the flexibility of
the recent Kolmogorov-Arnold Network (KAN), we propose a KAN-based Frequency
Decomposition Learning architecture (TimeKAN) to address the complex
forecasting challenges caused by multiple frequency mixtures. Specifically,
TimeKAN mainly consists of three components: Cascaded Frequency Decomposition
(CFD) blocks, Multi-order KAN Representation Learning (M-KAN) blocks and
Frequency Mixing blocks. CFD blocks adopt a bottom-up cascading approach to
obtain series representations for each frequency band. Benefiting from the high
flexibility of KAN, we design a novel M-KAN block to learn and represent
specific temporal patterns within each frequency band. Finally, Frequency
Mixing blocks are used to recombine the frequency bands into the original
format. Extensive experimental results across multiple real-world time series
datasets demonstrate that TimeKAN achieves state-of-the-art performance as an
extremely lightweight architecture. Code is available at
https://github.com/huangst21/TimeKAN.
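As a rough analogue of the decompose-then-recombine pipeline, a fixed FFT band split already has the key property that the bands recombine exactly into the original series. TimeKAN's CFD and Frequency Mixing blocks are learnable and cascaded, not a fixed FFT; this is an illustrative sketch only:

```python
import numpy as np

def frequency_bands(x, num_bands=3):
    """Split a 1-D series into frequency bands with disjoint FFT masks,
    so the bands sum back exactly to the original series."""
    spec = np.fft.rfft(x)
    edges = np.linspace(0, len(spec), num_bands + 1).astype(int)
    bands = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        masked = np.zeros_like(spec)
        masked[lo:hi] = spec[lo:hi]       # keep one band, zero the rest
        bands.append(np.fft.irfft(masked, n=len(x)))
    return bands

# A series mixing a slow (3 Hz) and a fast (40 Hz) component.
t = np.linspace(0, 1, 256, endpoint=False)
x = np.sin(2 * np.pi * 3 * t) + 0.5 * np.sin(2 * np.pi * 40 * t)
bands = frequency_bands(x)
```

Because the masks partition the spectrum and the inverse FFT is linear, the per-band series can be modelled independently and recombined losslessly, which is the property the Frequency Mixing step relies on.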
|
2502.06911
|
Foundation Models for Anomaly Detection: Vision and Challenges
|
cs.LG cs.AI
|
As data continues to grow in volume and complexity across domains such as
finance, manufacturing, and healthcare, effective anomaly detection is
essential for identifying irregular patterns that may signal critical issues.
Recently, foundation models (FMs) have emerged as a powerful tool for advancing
anomaly detection. They have demonstrated unprecedented capabilities in
enhancing anomaly identification, generating detailed data descriptions, and
providing visual explanations. This survey presents the first comprehensive
review of recent advancements in FM-based anomaly detection. We propose a novel
taxonomy that classifies FMs into three categories based on their roles in
anomaly detection tasks, i.e., as encoders, detectors, or interpreters. We
provide a systematic analysis of state-of-the-art methods and discuss key
challenges in leveraging FMs for improved anomaly detection. We also outline
future research directions in this rapidly evolving field.
|
2502.06913
|
A Simple yet Effective DDG Predictor is An Unsupervised Antibody
Optimizer and Explainer
|
q-bio.QM cs.AI cs.LG
|
The proteins that exist today have been optimized over billions of years of
natural evolution, during which nature creates random mutations and selects
them. The discovery of functionally promising mutations is challenged by the
limited evolutionary accessible regions, i.e., only a small region on the
fitness landscape is beneficial. There have been numerous priors used to
constrain protein evolution to regions of landscapes with high-fitness
variants, among which the change in binding free energy (DDG) of protein
complexes upon mutations is one of the most commonly used priors. However, the
huge mutation space poses two challenges: (1) how to improve the efficiency of
DDG prediction for fast mutation screening; and (2) how to explain mutation
preferences and efficiently explore accessible evolutionary regions. To address
these challenges, we propose a lightweight DDG predictor (Light-DDG), which
adopts a structure-aware Transformer as the backbone and enhances it by
knowledge distilled from existing powerful but computationally heavy DDG
predictors. Additionally, we augmented, annotated, and released a large-scale
dataset containing millions of mutation data for pre-training Light-DDG. We
find that such a simple yet effective Light-DDG can serve as a good
unsupervised antibody optimizer and explainer. For the target antibody, we
propose a novel Mutation Explainer to learn mutation preferences, which
accounts for the marginal benefit of each mutation per residue. To further
explore accessible evolutionary regions, we conduct preference-guided antibody
optimization and evaluate antibody candidates quickly using Light-DDG to
identify desirable mutations.
|
2502.06914
|
UniZyme: A Unified Protein Cleavage Site Predictor Enhanced with Enzyme
Active-Site Knowledge
|
q-bio.QM cs.AI cs.LG
|
Enzyme-catalyzed protein cleavage is essential for many biological functions.
Accurate prediction of cleavage sites can facilitate various applications such
as drug development, enzyme design, and a deeper understanding of biological
mechanisms. However, most existing models are restricted to an individual
enzyme, which neglects shared knowledge of enzymes and fails to generalize to
novel enzymes. Thus, we introduce a unified protein cleavage site predictor
named UniZyme, which can generalize across diverse enzymes. To enhance the
enzyme encoding for the protein cleavage site prediction, UniZyme employs a
novel biochemically-informed model architecture along with active-site
knowledge of proteolytic enzymes. Extensive experiments demonstrate that
UniZyme achieves high accuracy in predicting cleavage sites across a range of
proteolytic enzymes, including unseen enzymes. The code is available at
https://anonymous.4open.science/r/UniZyme-4A67.
|
2502.06915
|
Analytic Personalized Federated Meta-Learning
|
cs.DC cs.LG
|
Analytic federated learning (AFL) which updates model weights only once by
using closed-form least-square (LS) solutions can reduce abundant training time
in gradient-free federated learning (FL). The current AFL framework cannot
support deep neural network (DNN) training, which hinders its implementation on
complex machine learning tasks. Meanwhile, it overlooks the heterogeneous data
distribution problem that restricts the single global model from performing
well on each client's task. To overcome the first challenge, we propose an AFL
framework, namely FedACnnL, in which we resort to a novel local analytic
learning method (ACnnL) and model the training of each layer as a distributed
LS problem. For the second challenge, we propose an analytic personalized
federated meta-learning framework, namely pFedACnnL, which is inherited from
FedACnnL. In pFedACnnL, clients with similar data distribution share a common
robust global model for fast adapting it to local tasks in an analytic manner.
FedACnnL is theoretically proven to require significantly shorter training time
than the conventional zeroth-order (i.e., gradient-free) FL frameworks on DNN
training, with a reduction ratio of $98\%$ in our experiments. Meanwhile,
pFedACnnL achieves state-of-the-art (SOTA) model performance in most cases of
convex and non-convex settings, compared with the previous SOTA frameworks.
|
2502.06916
|
Hyper Compressed Fine-Tuning of Large Foundation Models with Quantum
Inspired Adapters
|
cs.LG cs.AI eess.SP quant-ph
|
Fine-tuning pre-trained large foundation models for specific tasks has become
increasingly challenging due to the computational and storage demands
associated with full parameter updates. Parameter-Efficient Fine-Tuning (PEFT)
methods address this issue by updating only a small subset of model parameters
using adapter modules. In this work, we propose \emph{Quantum-Inspired
Adapters}, a PEFT approach inspired by Hamming-weight preserving quantum
circuits from quantum machine learning literature. These models can be both
expressive and parameter-efficient by operating in a combinatorially large
space while simultaneously preserving orthogonality in weight parameters. We
test our proposed adapters by adapting large language models and large vision
transformers on benchmark datasets. Our method can achieve 99.2\% of the
performance of existing fine-tuning methods such as LoRA with a 44x parameter
compression on language understanding datasets like GLUE and VTAB. Compared to
existing orthogonal fine-tuning methods such as OFT or BOFT, we achieve 98\%
relative performance with 25x fewer parameters. This demonstrates competitive
performance paired with a significant reduction in trainable parameters.
Through ablation studies, we determine that combining multiple Hamming-weight
orders with orthogonality and matrix compounding are essential for performant
fine-tuning. Our findings suggest that Quantum-Inspired Adapters offer a
promising direction for efficient adaptation of language and vision models in
resource-constrained environments.
|
2502.06917
|
Krum Federated Chain (KFC): Using blockchain to defend against
adversarial attacks in Federated Learning
|
cs.LG cs.AI
|
Federated Learning presents a nascent approach to machine learning, enabling
collaborative model training across decentralized devices while safeguarding
data privacy. However, its distributed nature renders it susceptible to
adversarial attacks. Integrating blockchain technology with Federated Learning
offers a promising avenue to enhance security and integrity. In this paper, we
tackle the potential of blockchain in defending Federated Learning against
adversarial attacks. First, we test Proof of Federated Learning, a well known
consensus mechanism designed ad-hoc to federated contexts, as a defense
mechanism demonstrating its efficacy against Byzantine and backdoor attacks
when at least one miner remains uncompromised. Second, we propose Krum
Federated Chain, a novel defense strategy combining Krum and Proof of Federated
Learning, valid to defend against any configuration of Byzantine or backdoor
attacks, even when all miners are compromised. Our experiments conducted on
image classification datasets validate the effectiveness of our proposed
approaches.
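The Krum rule that the proposed defense builds on can be stated compactly: among n client updates with at most f Byzantine, select the update whose sum of squared distances to its n - f - 2 nearest neighbours is smallest. The sketch below is a minimal reference implementation of standard Krum only, not the blockchain-integrated KFC system:

```python
def krum(updates, f):
    """Select the update minimizing the summed squared distance to its
    n - f - 2 nearest neighbours (the standard Krum selection rule)."""
    n = len(updates)
    assert n > 2 * f + 2, "Krum requires n > 2f + 2 clients"

    def sqdist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    scores = []
    for i, u in enumerate(updates):
        dists = sorted(sqdist(u, v) for j, v in enumerate(updates) if j != i)
        scores.append(sum(dists[: n - f - 2]))  # closest n - f - 2 neighbours
    return updates[min(range(n), key=scores.__getitem__)]
```

Because an outlier update is far from every honest cluster member, its score is large and it is never selected.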
|
2502.06918
|
Leveraging GPT-4o Efficiency for Detecting Rework Anomaly in Business
Processes
|
cs.LG cs.AI
|
This paper investigates the effectiveness of GPT-4o-2024-08-06, one of the
Large Language Models (LLM) from OpenAI, in detecting business process
anomalies, with a focus on rework anomalies. In our study, we developed a
GPT-4o-based tool capable of transforming event logs into a structured format
and identifying reworked activities within business event logs. The analysis
was performed on a synthetic dataset designed to contain rework anomalies but
free of loops. To evaluate the anomaly detection capabilities of
GPT-4o-2024-08-06, we used three prompting techniques: zero-shot, one-shot, and
few-shot. These techniques were tested on different anomaly distributions,
namely normal, uniform, and exponential, to identify the most effective
approach for each case. The results demonstrate the strong performance of
GPT-4o-2024-08-06. On our dataset, the model achieved 96.14% accuracy with
one-shot prompting for the normal distribution, 97.94% accuracy with few-shot
prompting for the uniform distribution, and 74.21% accuracy with few-shot
prompting for the exponential distribution. These results highlight the model's
potential as a reliable tool for detecting rework anomalies in event logs and
how anomaly distribution and prompting strategy influence the model's
performance.
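For intuition about what the model is asked to find, the simplest rule-based notion of rework is an activity that repeats within a single case's trace. The sketch below is that baseline rule only (a hypothetical helper, not the GPT-4o-based tool or its prompt format):

```python
from collections import Counter

def rework_activities(trace):
    """Activities that occur more than once in one case's trace -- the
    simple definition of a rework anomaly used for this baseline."""
    return {a for a, c in Counter(trace).items() if c > 1}

def detect_rework(event_log):
    """event_log: {case_id: [activity, ...]} -> cases with reworked sets."""
    return {cid: r for cid, t in event_log.items()
            if (r := rework_activities(t))}
```

A prompting-based detector would be evaluated against exactly this kind of ground-truth labelling on a loop-free log.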
|
2502.06919
|
Select before Act: Spatially Decoupled Action Repetition for Continuous
Control
|
cs.LG cs.AI cs.RO
|
Reinforcement Learning (RL) has achieved remarkable success in various
continuous control tasks, such as robot manipulation and locomotion. Unlike
mainstream RL, which makes decisions at individual steps, recent studies have
incorporated action repetition into RL, achieving enhanced action persistence
with improved sample efficiency and superior performance. However, existing
methods treat all action dimensions as a whole during repetition, ignoring
variations among them. This constraint leads to inflexible decisions, reducing
policy agility and effectiveness. In this work, we
propose a novel repetition framework called SDAR, which implements Spatially
Decoupled Action Repetition through performing closed-loop act-or-repeat
selection for each action dimension individually. SDAR achieves more flexible
repetition strategies, leading to an improved balance between action
persistence and diversity. Compared to existing repetition frameworks, SDAR is
more sample efficient with higher policy performance and reduced action
fluctuation. Experiments are conducted on various continuous control scenarios,
demonstrating the effectiveness of spatially decoupled repetition design
proposed in this work.
|
2502.06920
|
Direct Estimation of Pediatric Heart Rate Variability from BOLD-fMRI: A
Machine Learning Approach Using Dynamic Connectivity
|
eess.IV cs.AI cs.LG
|
In many pediatric fMRI studies, cardiac signals are often missing or of poor
quality. A tool to extract Heart Rate Variation (HRV) waveforms directly from
fMRI data, without the need for peripheral recording devices, would be highly
beneficial. We developed a machine learning framework to accurately reconstruct
HRV for pediatric applications. A hybrid model combining one-dimensional
Convolutional Neural Networks (1D-CNN) and Gated Recurrent Units (GRU) analyzed
BOLD signals from 628 ROIs, integrating past and future data. The model
achieved an 8% improvement in HRV accuracy, as evidenced by enhanced
performance metrics. This approach eliminates the need for peripheral
photoplethysmography devices, reduces costs, and simplifies procedures in
pediatric fMRI. Additionally, it improves the robustness of pediatric fMRI
studies, which are more sensitive to physiological and developmental variations
than those in adults.
|
2502.06921
|
GraNNite: Enabling High-Performance Execution of Graph Neural Networks
on Resource-Constrained Neural Processing Units
|
cs.LG cs.AI cs.AR
|
Graph Neural Networks (GNNs) are vital for learning from graph-structured
data, enabling applications in network analysis, recommendation systems, and
speech analytics. Deploying them on edge devices like client PCs and laptops
enhances real-time processing, privacy, and cloud independence. GNNs aid
Retrieval-Augmented Generation (RAG) for Large Language Models (LLMs) and
enable event-based vision tasks. However, irregular memory access, sparsity,
and dynamic structures cause high latency and energy overhead on
resource-constrained devices. While modern edge processors integrate CPUs,
GPUs, and NPUs, NPUs designed for data-parallel tasks struggle with irregular
GNN computations. We introduce GraNNite, the first hardware-aware framework
optimizing GNN execution on commercial-off-the-shelf (COTS) SOTA DNN
accelerators via a structured three-step methodology: (1) enabling NPU
execution, (2) optimizing performance, and (3) trading accuracy for efficiency
gains. Step 1 employs GraphSplit for workload distribution and StaGr for static
aggregation, while GrAd and NodePad handle dynamic graphs. Step 2 boosts
performance using EffOp for control-heavy tasks and GraSp for sparsity
exploitation. Graph Convolution optimizations PreG, SymG, and CacheG reduce
redundancy and memory transfers. Step 3 balances quality versus efficiency,
where QuantGr applies INT8 quantization, and GrAx1, GrAx2, and GrAx3 accelerate
attention, broadcast-add, and SAGE-max aggregation. On Intel Core Ultra AI PCs,
GraNNite achieves 2.6X to 7.6X speedups over default NPU mappings and up to
8.6X energy gains over CPUs and GPUs, delivering 10.8X and 6.7X higher
performance than CPUs and GPUs, respectively, across GNN models.
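One way to make static-graph aggregation NPU-friendly, in the spirit of StaGr as described above, is to fold neighbour aggregation into a single dense matrix multiply, the operation data-parallel NPUs are built for. The sketch below is an assumption about the general technique (row-normalized adjacency times features), not GraNNite's actual kernel:

```python
import numpy as np

def static_aggregate(adj, features):
    """Mean-aggregate neighbour features with one dense matmul: a
    row-normalized adjacency matrix replaces irregular gather/scatter."""
    deg = adj.sum(axis=1, keepdims=True)
    norm = np.divide(adj, deg, out=np.zeros_like(adj), where=deg > 0)
    return norm @ features
```

For dynamic graphs the matrix would need rebuilding or padding (the role GrAd and NodePad play), which is why static aggregation is handled separately.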
|
2502.06922
|
Synthetic Audio Helps for Cognitive State Tasks
|
cs.SD cs.AI cs.CL cs.LG
|
The NLP community has broadly focused on text-only approaches of cognitive
state tasks, but audio can provide vital missing cues through prosody. We posit
that text-to-speech models learn to track aspects of cognitive state in order
to produce naturalistic audio, and that the signal audio models implicitly
identify is orthogonal to the information that language models exploit. We
present Synthetic Audio Data fine-tuning (SAD), a framework where we show that
7 tasks related to cognitive state modeling benefit from multimodal training on
both text and zero-shot synthetic audio data from an off-the-shelf TTS system.
We show an improvement over the text-only modality when adding synthetic audio
data to text-only corpora. Furthermore, on tasks and corpora that do contain
gold audio, we show our SAD framework achieves competitive performance with
text and synthetic audio compared to text and gold audio.
|
2502.06923
|
Do Attention Heads Compete or Cooperate during Counting?
|
cs.LG cs.AI
|
We present an in-depth mechanistic interpretability analysis of training
small transformers on an elementary task, counting, which is a crucial
deductive step in many algorithms. In particular, we investigate the
collaboration/competition among the attention heads: we ask whether the
attention heads behave as a pseudo-ensemble, all solving the same subtask, or
they perform different subtasks, meaning that they can only solve the original
task in conjunction. Our work presents evidence that on the semantics of the
counting task, attention heads behave as a pseudo-ensemble, but their outputs
need to be aggregated in a non-uniform manner in order to create an encoding
that conforms to the syntax. Our source code will be available upon
publication.
|
2502.06924
|
XAMBA: Enabling Efficient State Space Models on Resource-Constrained
Neural Processing Units
|
cs.LG cs.AI
|
State-Space Models (SSMs) have emerged as efficient alternatives to
transformers for sequential data tasks, offering linear or near-linear
scalability with sequence length, making them ideal for long-sequence
applications in NLP, vision, and edge AI, including real-time transcription,
translation, and contextual search. These applications require lightweight,
high-performance models for deployment on resource-constrained devices like
laptops and PCs. Designing specialized accelerators for every emerging neural
network is costly and impractical; instead, optimizing models for existing NPUs
in AI PCs provides a scalable solution. To this end, we propose XAMBA, the
first framework to enable and optimize SSMs on commercial off-the-shelf (COTS)
state-of-the-art (SOTA) NPUs. XAMBA follows a three-step methodology: (1)
enabling SSMs on NPUs, (2) optimizing performance to meet KPI requirements, and
(3) trading accuracy for additional performance gains. After enabling SSMs on
NPUs, XAMBA mitigates key bottlenecks using CumBA and ReduBA, replacing
sequential CumSum and ReduceSum operations with matrix-based computations,
significantly improving execution speed and memory efficiency. Additionally,
ActiBA enhances performance by approximating expensive activation functions
(e.g., Swish, Softplus) using piecewise linear mappings, reducing latency with
minimal accuracy loss. Evaluations on an Intel Core Ultra Series 2 AI PC show
that XAMBA achieves up to 2.6X speed-up over the baseline. Our implementation
is available at https://github.com/arghadippurdue/XAMBA.
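The CumBA idea of trading a sequential CumSum for a matrix operation can be seen in one line: a lower-triangular matrix of ones L satisfies (L @ x)[i] = x[0] + ... + x[i]. The sketch below shows only this equivalence, not XAMBA's NPU kernel:

```python
import numpy as np

def cumsum_matmul(x):
    """Cumulative sum as one matrix multiply: L @ x with L lower-
    triangular ones, replacing a length-n sequential dependency."""
    n = x.shape[0]
    L = np.tril(np.ones((n, n), dtype=x.dtype))
    return L @ x

# cumsum_matmul([1, 2, 3, 4]) -> [1, 3, 6, 10], matching np.cumsum
```

The matmul costs O(n^2) arithmetic versus O(n) for a scalar loop, but it maps onto the NPU's parallel matrix engine instead of serializing on it, which is the trade-off the abstract describes.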
|
2502.06925
|
Occam's model: Selecting simpler representations for better
transferability estimation
|
cs.LG cs.AI
|
Fine-tuning models that have been pre-trained on large datasets has become a
cornerstone of modern machine learning workflows. With the widespread
availability of online model repositories, such as Hugging Face, it is now
easier than ever to fine-tune pre-trained models for specific tasks. This
raises a critical question: which pre-trained model is most suitable for a
given task? This problem is called transferability estimation. In this work, we
introduce two novel and effective metrics for estimating the transferability of
pre-trained models. Our approach is grounded in viewing transferability as a
measure of how easily a pre-trained model's representations can be trained to
separate target classes, providing a unique perspective on transferability
estimation. We rigorously evaluate the proposed metrics against
state-of-the-art alternatives across diverse problem settings, demonstrating
their robustness and practical utility. Additionally, we present theoretical
insights that explain our metrics' efficacy and adaptability to various
scenarios. We experimentally show that our metrics increase Kendall's Tau by up
to 32% compared to the state-of-the-art baselines.
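Kendall's Tau, the evaluation metric quoted above, measures how well the ranking induced by a transferability score agrees with the ranking by actual fine-tuned performance. A minimal no-ties implementation (any ranking-evaluation library would do the same):

```python
from itertools import combinations

def kendall_tau(x, y):
    """Kendall rank correlation (no ties assumed):
    (concordant pairs - discordant pairs) / total pairs."""
    pairs = list(combinations(range(len(x)), 2))
    conc = sum(1 for i, j in pairs if (x[i] - x[j]) * (y[i] - y[j]) > 0)
    disc = sum(1 for i, j in pairs if (x[i] - x[j]) * (y[i] - y[j]) < 0)
    return (conc - disc) / len(pairs)
```

A perfect transferability estimator scores +1.0 (identical ordering of models), a fully inverted one -1.0.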
|
2502.06927
|
Neighborhood-Order Learning Graph Attention Network for Fake News
Detection
|
cs.LG cs.AI cs.CL
|
Fake news detection is a significant challenge in the digital age, which has
become increasingly important with the proliferation of social media and online
communication networks. Graph Neural Networks (GNN)-based methods have shown
high potential in analyzing graph-structured data for this problem. However, a
major limitation in conventional GNN architectures is their inability to
effectively utilize information from neighbors beyond the network's layer
depth, which can reduce the model's accuracy and effectiveness. In this paper,
we propose a novel model called Neighborhood-Order Learning Graph Attention
Network (NOL-GAT) for fake news detection. This model allows each node in each
layer to independently learn its optimal neighborhood order. By doing so, the
model can purposefully and efficiently extract critical information from
distant neighbors. The NOL-GAT architecture consists of two main components: a
Hop Network that determines the optimal neighborhood order and an Embedding
Network that updates node embeddings using these optimal neighborhoods. To
evaluate the model's performance, experiments are conducted on various fake
news datasets. Results demonstrate that NOL-GAT significantly outperforms
baseline models in metrics such as accuracy and F1-score, particularly in
scenarios with limited labeled data. Features such as mitigating the
over-squashing problem, improving information flow, and reducing computational
complexity further highlight the advantages of the proposed model.
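The object a learned neighborhood order selects can be made concrete: for a chosen order k, the node attends over everything within k hops. The BFS sketch below computes that candidate set only; in NOL-GAT the order itself is predicted per node by the Hop Network, which this snippet does not model:

```python
from collections import deque

def k_hop_neighbors(adj, node, k):
    """All nodes within <= k hops of `node` (excluding itself) -- the
    set a learned hop order k would make available for attention."""
    seen, frontier, out = {node}, deque([(node, 0)]), set()
    while frontier:
        u, d = frontier.popleft()
        if d == k:
            continue  # do not expand past the chosen order
        for v in adj.get(u, ()):
            if v not in seen:
                seen.add(v)
                out.add(v)
                frontier.append((v, d + 1))
    return out
```

Letting k vary per node is what allows distant but informative neighbours to be reached without deepening every layer uniformly.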
|
2502.06939
|
Generalizable automated ischaemic stroke lesion segmentation with vision
transformers
|
eess.IV cs.CV cs.LG
|
Ischaemic stroke, a leading cause of death and disability, critically relies
on neuroimaging for characterising the anatomical pattern of injury.
Diffusion-weighted imaging (DWI) provides the highest expressivity in ischaemic
stroke but poses substantial challenges for automated lesion segmentation:
susceptibility artefacts, morphological heterogeneity, age-related
comorbidities, time-dependent signal dynamics, instrumental variability, and
limited labelled data. Current U-Net-based models therefore underperform, a
problem accentuated by inadequate evaluation metrics that focus on mean
performance, neglecting anatomical, subpopulation, and acquisition-dependent
variability. Here, we present a high-performance DWI lesion segmentation tool
addressing these challenges through optimized vision transformer-based
architectures, integration of 3563 annotated lesions from multi-site data, and
algorithmic enhancements, achieving state-of-the-art results. We further
propose a novel evaluative framework assessing model fidelity, equity (across
demographics and lesion subtypes), anatomical precision, and robustness to
instrumental variability, promoting clinical and research utility. This work
advances stroke imaging by reconciling model expressivity with domain-specific
challenges and redefining performance benchmarks to prioritize equity and
generalizability, critical for personalized medicine and mechanistic research.
|
2502.06957
|
GAS: Generative Avatar Synthesis from a Single Image
|
cs.CV
|
We introduce a generalizable and unified framework to synthesize
view-consistent and temporally coherent avatars from a single image, addressing
the challenging problem of single-image avatar generation. While recent methods
employ diffusion models conditioned on human templates like depth or normal
maps, they often struggle to preserve appearance information due to the
discrepancy between sparse driving signals and the actual human subject,
resulting in multi-view and temporal inconsistencies. Our approach bridges this
gap by combining the reconstruction power of regression-based 3D human
reconstruction with the generative capabilities of a diffusion model. The dense
driving signal from the initial reconstructed human provides comprehensive
conditioning, ensuring high-quality synthesis faithful to the reference
appearance and structure. Additionally, we propose a unified framework that
enables the generalization learned from novel pose synthesis on in-the-wild
videos to naturally transfer to novel view synthesis. Our video-based diffusion
model enhances disentangled synthesis with high-quality view-consistent
renderings for novel views and realistic non-rigid deformations in novel pose
animation. Results demonstrate the superior generalization ability of our
method across in-domain and out-of-domain in-the-wild datasets. Project page:
https://humansensinglab.github.io/GAS/
|
2502.06963
|
Task Offloading in Vehicular Edge Computing using Deep Reinforcement
Learning: A Survey
|
cs.LG cs.AI cs.DC cs.MA
|
The increasing demand for Intelligent Transportation Systems (ITS) has
introduced significant challenges in managing the complex,
computation-intensive tasks generated by modern vehicles. Offloading tasks to
external computing infrastructures such as edge computing (EC), nearby
vehicles, and UAVs has become an influential solution to these challenges.
However, traditional computational offloading strategies often struggle to
adapt to the dynamic and heterogeneous nature of vehicular environments. In
this study, we explored the potential of Reinforcement Learning (RL) and Deep
Reinforcement Learning (DRL) frameworks to optimize computational offloading
through adaptive, real-time decision-making, and we have thoroughly
investigated Markov Decision Process (MDP) approaches in the existing
literature. The paper focuses on key aspects such as standardized learning
models, optimized reward structures, and collaborative multi-agent systems,
aiming to advance the understanding and application of DRL in vehicular
networks. Our findings offer insights into enhancing the efficiency,
scalability, and robustness of ITS, setting the stage for future innovations in
this rapidly evolving field.
|
2502.06967
|
Downlink and Uplink ISAC in Continuous-Aperture Array (CAPA) Systems
|
cs.IT eess.SP math.IT
|
A continuous-aperture array (CAPA)-based integrated sensing and
communications (ISAC) framework is proposed for both downlink and uplink
scenarios. Within this framework, continuous operator-based signal models are
employed to describe the sensing and communication processes. The performance
of communication and sensing is analyzed using two information-theoretic
metrics: the communication rate (CR) and the sensing rate (SR). 1) For downlink
ISAC, three continuous beamforming designs are proposed: i) the
communications-centric (C-C) design that maximizes the CR, ii) the
sensing-centric (S-C) design that maximizes the SR, and iii) the Pareto-optimal
design that characterizes the Pareto boundary of the CR-SR region. A signal
subspace-based approach is proposed to derive the closed-form optimal
beamformers for the considered designs. On this basis, closed-form expressions
are derived for the achievable CRs and SRs, and the downlink rate region
achieved by CAPAs is characterized. 2) For uplink ISAC, the C-C and S-C
successive interference cancellation (SIC)-based methods are proposed to manage
inter-functionality interference. Using the subspace approach along with the
time-sharing technique, closed-form expressions for the optimal beamformers are
derived, and the achievable CRs, SRs, and rate region are analyzed. Numerical
results demonstrate that, for both downlink and uplink, CAPA-based ISAC
achieves higher CRs and SRs as well as larger CR-SR regions compared to
conventional spatially discrete array (SPDA)-based ISAC.
|
2502.06970
|
Model Diffusion for Certifiable Few-shot Transfer Learning
|
cs.LG stat.ML
|
In modern large-scale deep learning, a prevalent and effective workflow for
solving low-data problems is adapting powerful pre-trained foundation models
(FMs) to new tasks via parameter-efficient fine-tuning (PEFT). However, while
empirically effective, the resulting solutions lack generalisation guarantees
to certify their accuracy - which may be required for ethical or legal reasons
prior to deployment in high-importance applications. In this paper we develop a
novel transfer learning approach that is designed to facilitate non-vacuous
learning theoretic generalisation guarantees for downstream tasks, even in the
low-shot regime. Specifically, we first use upstream tasks to train a
distribution over PEFT parameters. We then learn the downstream task by a
sample-and-evaluate procedure -- sampling plausible PEFTs from the trained
diffusion model and selecting the one with the highest likelihood on the
downstream data. Crucially, this confines our model hypothesis to a finite set
of PEFT samples. In contrast to learning in the typical continuous hypothesis
spaces of neural network weights, this facilitates tighter risk certificates.
We instantiate our bound and show non-trivial generalization guarantees
compared to existing learning approaches which lead to vacuous bounds in the
low-shot regime.
|
2502.06971
|
User-Preference Meets Pareto-Optimality: Multi-Objective Bayesian
Optimization with Local Gradient Search
|
cs.LG
|
Incorporating user preferences into multi-objective Bayesian optimization
(MOBO) allows for personalization of the optimization procedure. Preferences
are often abstracted in the form of an unknown utility function, estimated
through pairwise comparisons of potential outcomes. However, utility-driven
MOBO methods can yield solutions that are dominated by nearby solutions, as
non-dominance is not enforced. Additionally, classical MOBO commonly relies on
estimating the entire Pareto-front to identify the Pareto-optimal solutions,
which can be expensive and ignore user preferences. Here, we present a new
method, termed preference-utility-balanced MOBO (PUB-MOBO), that allows users
to disambiguate between near-Pareto candidate solutions. PUB-MOBO combines
utility-based MOBO with local multi-gradient descent to refine user-preferred
solutions to be near-Pareto-optimal. To this end, we propose a novel
preference-dominated utility function that concurrently preserves
user-preferences and dominance amongst candidate solutions. A key advantage of
PUB-MOBO is that the local search is restricted to a (small) region of the
Pareto-front directed by user preferences, alleviating the need to estimate the
entire Pareto-front. PUB-MOBO is tested on three synthetic benchmark problems:
DTLZ1, DTLZ2 and DH1, as well as on three real-world problems: Vehicle Safety,
Conceptual Marine Design, and Car Side Impact. PUB-MOBO consistently
outperforms state-of-the-art competitors in terms of proximity to the
Pareto-front and utility regret across all the problems.
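The non-dominance that utility-driven MOBO fails to enforce, and that PUB-MOBO's local search restores, is a simple pairwise relation. A minimal filter (standard Pareto dominance for minimization, not the PUB-MOBO algorithm itself):

```python
def dominates(a, b):
    """a dominates b (minimization): no worse in every objective and
    strictly better in at least one."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def non_dominated(points):
    """Keep only candidates not dominated by any other candidate."""
    return [p for p in points if not any(dominates(q, p) for q in points)]
```

A utility-maximizing solution that survives this filter is near-Pareto-optimal; one that does not can be strictly improved in some objective at no cost, which is the failure mode the abstract describes.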
|
2502.06973
|
Indoor Light and Heat Estimation from a Single Panorama
|
cs.CV
|
This paper presents a novel application for directly estimating indoor light
and heat maps from captured indoor-outdoor High Dynamic Range (HDR) panoramas.
In our image-based rendering method, the indoor panorama is used to estimate
the 3D room layout, while the corresponding outdoor panorama serves as an
environment map to infer spatially-varying light and material properties. We
establish a connection between indoor light transport and heat transport and
implement transient heat simulation to generate indoor heat panoramas. The
sensitivity analysis of various thermal parameters is conducted, and the
resulting heat maps are compared with the images captured by the thermal camera
in real-world scenarios. This digital application enables automatic indoor
light and heat estimation without manual inputs and cumbersome field
measurements.
|
2502.06975
|
Position: Episodic Memory is the Missing Piece for Long-Term LLM Agents
|
cs.AI
|
As Large Language Models (LLMs) evolve from text-completion tools into fully
fledged agents operating in dynamic environments, they must address the
challenge of continually learning and retaining long-term knowledge. Many
biological systems solve these challenges with episodic memory, which supports
single-shot learning of instance-specific contexts. Inspired by this, we
present an episodic memory framework for LLM agents, centered around five key
properties of episodic memory that underlie adaptive and context-sensitive
behavior. With various research efforts already partially covering these
properties, this position paper argues that now is the right time for an
explicit, integrated focus on episodic memory to catalyze the development of
long-term agents. To this end, we outline a roadmap that unites several
research directions under the goal to support all five properties of episodic
memory for more efficient long-term LLM agents.
|
2502.06976
|
Who is Helping Whom? Analyzing Inter-dependencies to Evaluate
Cooperation in Human-AI Teaming
|
cs.MA cs.AI
|
The long-standing research challenges of Human-AI Teaming(HAT) and Zero-shot
Cooperation(ZSC) have been tackled by applying multi-agent reinforcement
learning(MARL) to train an agent by optimizing the environment reward function
and evaluating their performance through task performance metrics such as task
reward. However, such evaluation focuses only on task completion, while being
agnostic to `how' the two agents work with each other. Specifically, we are
interested in understanding the cooperation arising within the team when
trained agents are paired with humans. To formally address this problem, we
propose the concept of interdependence to measure how much agents rely on each
other's actions to achieve the shared goal, as a key metric for evaluating
cooperation in human-agent teams. Towards this, we ground this concept through
a symbolic formalism and define evaluation metrics that allow us to assess the
degree of reliance between the agents' actions. We pair state-of-the-art agents
trained through MARL for HAT, with learned human models for the popular
Overcooked domain, and evaluate the team performance for these human-agent
teams. Our results demonstrate that trained agents are not able to induce
cooperative behavior, reporting very low levels of interdependence across all
the teams. We also report that teaming performance of a team is not necessarily
correlated with the task reward.
|
2502.06978
|
Dual Conic Proxy for Semidefinite Relaxation of AC Optimal Power Flow
|
math.OC cs.LG
|
The nonlinear, non-convex AC Optimal Power Flow (AC-OPF) problem is
fundamental for power systems operations. The intrinsic complexity of AC-OPF
has fueled a growing interest in the development of optimization proxies for
the problem, i.e., machine learning models that predict high-quality,
close-to-optimal solutions. More recently, dual conic proxy architectures have
been proposed, which combine machine learning and convex relaxations of AC-OPF,
to provide valid certificates of optimality using learning-based methods.
Building on this methodology, this paper proposes, for the first time, a dual
conic proxy architecture for the semidefinite (SDP) relaxation of AC-OPF
problems. Although the SDP relaxation is stronger than the second-order cone
relaxation considered in previous work, its practical use has been hindered by
its computational cost. The proposed method combines a neural network with a
differentiable dual completion strategy that leverages the structure of the
dual SDP problem. This approach guarantees dual feasibility, and therefore
valid dual bounds, while providing orders of magnitude of speedups compared to
interior-point algorithms. The paper also leverages self-supervised learning,
which alleviates the need for time-consuming data generation and allows the
proposed models to be trained efficiently. Numerical experiments are presented on
several power grid benchmarks with up to 500 buses. The results demonstrate
that the proposed SDP-based proxies can outperform weaker conic relaxations,
while providing several orders of magnitude speedups compared to a
state-of-the-art interior-point SDP solver.
|
2502.06982
|
Machine Learning Fleet Efficiency: Analyzing and Optimizing Large-Scale
Google TPU Systems with ML Productivity Goodput
|
cs.LG
|
Recent years have seen the emergence of machine learning (ML) workloads
deployed in warehouse-scale computing (WSC) settings, also known as ML fleets.
As the computational demands placed on ML fleets have increased due to the rise
of large models and growing demand for ML applications, it has become
increasingly critical to measure and improve the efficiency of such systems.
However, there is not yet an established methodology to characterize ML fleet
performance and identify potential performance optimizations accordingly. This
paper presents a large-scale analysis of an ML fleet based on Google's TPUs,
introducing a framework to capture fleet-wide efficiency, systematically
evaluate performance characteristics, and identify optimization strategies for
the fleet. We begin by defining an ML fleet, outlining its components, and
analyzing an example Google ML fleet in production comprising thousands of
accelerators running diverse workloads. Our study reveals several critical
insights: first, ML fleets extend beyond the hardware layer, with model, data,
framework, compiler, and scheduling layers significantly impacting performance;
second, the heterogeneous nature of ML fleets poses challenges in
characterizing individual workload performance; and third, traditional
utilization-based metrics prove insufficient for ML fleet characterization. To
address these challenges, we present the "ML Productivity Goodput" (MPG) metric
to measure ML fleet efficiency. We show how to leverage this metric to
characterize the fleet across the ML system stack. We also present methods to
identify and optimize performance bottlenecks using MPG, providing strategies
for managing warehouse-scale ML systems in general. Lastly, we demonstrate
quantitative evaluations from applying these methods to a real ML fleet for
internal-facing Google TPU workloads, where we observed tangible improvements.
|
2502.06987
|
Universal Vessel Segmentation for Multi-Modality Retinal Images
|
eess.IV cs.CV
|
We identify two major limitations in the existing studies on retinal vessel
segmentation: (1) Most existing works are restricted to one modality, i.e., the
Color Fundus (CF). However, multi-modality retinal images are used every day in
the study of the retina and retinal diseases, and research on vessel
segmentation in other modalities is scarce; (2) Even though a small number of
works have extended their experiments to new modalities such as Multi-Color
Scanning Laser Ophthalmoscopy (MC), they still require finetuning a separate
model for each new modality, and the finetuning requires extra training data,
which is difficult to acquire. In this work, we present a
foundational universal vessel segmentation model (UVSM) for multi-modality
retinal images. Not only do we perform the study on a much wider range of
modalities, but we also propose a universal model to segment the vessels in all
of these commonly used modalities. Despite being much more versatile compared
with existing methods, our universal model still demonstrates comparable
performance with the state-of-the-art finetuned methods. To the best of our
knowledge, this is the first work that achieves cross-modality retinal vessel
segmentation and also the first work to study retinal vessel segmentation in
some novel modalities.
|
2502.06988
|
A Compiler for Operations on Relations with Bag Semantics
|
cs.PL cs.DB
|
We describe an abstract loop-based intermediate representation that can
express fused implementations of relational algebra expressions on sets and
bags (multisets). The loops are abstracted away from physical data structures
thus making it easier to generate, reason about, and perform optimization like
fusion on. The IR supports the natural relational algebra as well as complex
operators that are used in production database systems, including outer joins,
non-equi joins, and differences. We then show how to compile this IR to
efficient C++ code that co-iterates over the physical data structures present
in the relational algebra expression. Our approach lets us express fusion
across disparate operators, leading to a 3.87x speedup (0.77--12.23x) on
selected LSQB benchmarks and worst-case optimal triangle queries. We also
demonstrate that our compiler generates code of high quality: it has similar
sequential performance to Hyper on TPC-H with a 1.00x speedup (0.38--4.34x) and
competitive parallel performance with a 0.61x speedup (0.23--1.80x). Finally,
our approach is portable across data structures.
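The kind of cross-operator fusion described (here, a selection fused into a hash-join loop so no intermediate relation is ever materialized) can be illustrated with a small Python sketch; the pair-based relation encoding is a hypothetical simplification of what the compiler's IR abstracts over:

```python
def fused_select_join(r, s, pred):
    """Hash join of r and s on the first field, with the selection on r's
    payload fused into the probe loop (no intermediate relation built)."""
    index = {}
    for key, payload in s:                 # build phase: hash index on s
        index.setdefault(key, []).append(payload)
    out = []
    for key, payload in r:                 # probe phase
        if pred(payload):                  # selection fused into the join loop
            for match in index.get(key, []):
                out.append((key, payload, match))
    return out

r = [(1, "a"), (2, "b"), (2, "c")]
s = [(2, "x"), (3, "y"), (2, "z")]
result = fused_select_join(r, s, lambda p: p != "b")
```

An unfused plan would first materialize the filtered r and then join it; fusing the two avoids that pass and allocation, which is the effect the compiler achieves across far more operators.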
|
2502.06990
|
Investigating the Zone of Proximal Development of Language Models for
In-Context Learning
|
cs.CL
|
In this paper, we introduce a learning analytics framework to analyze the
in-context learning (ICL) behavior of large language models (LLMs) through the
lens of the Zone of Proximal Development (ZPD), an established theory in
educational psychology. ZPD delineates the space between what a learner is
capable of doing unsupported and what the learner cannot do even with support.
We adapt this concept to ICL, measuring the ZPD of LLMs based on model
performance on individual examples with and without ICL. Furthermore, we
propose an item response theory (IRT) model to predict the distribution of
zones for LLMs. Our findings reveal a series of intricate and multifaceted
behaviors of ICL, providing new insights into understanding and leveraging this
technique. Finally, we demonstrate how our framework can enhance LLMs in both
inference and fine-tuning scenarios: (1) By predicting a model's zone of
proximal development, we selectively apply ICL to queries that are most likely
to benefit from demonstrations, achieving a better balance between inference
cost and performance; (2) We propose a human-like curriculum for fine-tuning,
which prioritizes examples within the model's ZPD. The curriculum results in
improved performance, and we explain its effectiveness through an analysis of
the training dynamics of LLMs.
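The ZPD zoning idea can be illustrated with a toy one-parameter (Rasch) IRT model; the ability boost standing in for ICL support and the 0.5 threshold are hypothetical illustration choices, not the paper's actual specification:

```python
import math

def rasch_p(theta, b):
    """1PL (Rasch) IRT: probability that a learner with ability theta
    answers an item of difficulty b correctly."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def zone(theta, b, boost=1.0, thresh=0.5):
    """Crude ZPD-style zoning of one item: compare success probability
    without support (ability theta) and with support (theta + boost)."""
    p_alone = rasch_p(theta, b)
    p_help = rasch_p(theta + boost, b)
    if p_alone >= thresh:
        return "can do unaided"
    if p_help >= thresh:
        return "ZPD"          # solvable only with support (here: ICL)
    return "beyond reach"

# An item of difficulty 0.5 lies inside the ZPD of a theta=0 learner:
label = zone(0.0, 0.5)        # "ZPD"
```

The ZPD band is exactly the set of difficulties for which the support boost flips the prediction from incorrect to correct.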
|
2502.06994
|
SyncMind: Measuring Agent Out-of-Sync Recovery in Collaborative Software
Engineering
|
cs.SE cs.AI cs.CL
|
Software engineering (SE) is increasingly collaborative, with developers
working together on shared complex codebases. Effective collaboration in shared
environments requires participants -- whether humans or AI agents -- to stay on
the same page as their environment evolves. When a collaborator's understanding
diverges from the current state -- what we term the out-of-sync challenge --
the collaborator's actions may fail, leading to integration issues. In this
work, we introduce SyncMind, a framework that systematically defines the
out-of-sync problem faced by large language model (LLM) agents in collaborative
software engineering (CSE). Based on SyncMind, we create SyncBench, a benchmark
featuring 24,332 instances of agent out-of-sync scenarios in real-world CSE
derived from 21 popular GitHub repositories with executable verification tests.
Experiments on SyncBench uncover critical insights into existing LLM agents'
capabilities and limitations. Besides substantial performance gaps among agents
(from Llama-3.1 agent <= 3.33% to Claude-3.5-Sonnet >= 28.18%), their
consistently low collaboration willingness (<= 4.86%) suggests fundamental
limitations of existing LLMs in CSE. However, when collaboration occurs, it
positively correlates with out-of-sync recovery success. Minimal performance
differences in agents' resource-aware out-of-sync recoveries further reveal
their significant lack of resource awareness and adaptability, shedding light
on future resource-efficient collaborative systems. Code and data are openly
available on our project website: https://xhguo7.github.io/SyncMind/.
|
2502.06995
|
Epistemic Uncertainty in Conformal Scores: A Unified Approach
|
stat.ML cs.LG
|
Conformal prediction methods create prediction bands with distribution-free
guarantees but do not explicitly capture epistemic uncertainty, which can lead
to overconfident predictions in data-sparse regions. Although recent conformal
scores have been developed to address this limitation, they are typically
designed for specific tasks, such as regression or quantile regression.
Moreover, they rely on particular modeling choices for epistemic uncertainty,
restricting their applicability. We introduce $\texttt{EPICSCORE}$, a
model-agnostic approach that enhances any conformal score by explicitly
integrating epistemic uncertainty. Leveraging Bayesian techniques such as
Gaussian Processes, Monte Carlo Dropout, or Bayesian Additive Regression Trees,
$\texttt{EPICSCORE}$ adaptively expands predictive intervals in regions with
limited data while maintaining compact intervals where data is abundant. As
with any conformal method, it preserves finite-sample marginal coverage.
Additionally, it achieves asymptotic conditional coverage. Experiments
demonstrate its good performance compared to existing methods. Designed for
compatibility with any Bayesian model, but equipped with distribution-free
guarantees, $\texttt{EPICSCORE}$ provides a general-purpose framework for
uncertainty quantification in prediction problems.
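For context, $\texttt{EPICSCORE}$ builds on standard split conformal prediction; a minimal sketch of the vanilla split-conformal procedure it enhances (absolute-residual score, illustrative linear model and synthetic data, not the paper's code) is:

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_predict(x_train, y_train, x):
    """Illustrative base model: a least-squares line (any regressor works)."""
    coef = np.polyfit(x_train, y_train, deg=1)
    return np.polyval(coef, x)

# Synthetic 1-D regression data.
n = 500
x = rng.uniform(-1, 1, n)
y = 2 * x + rng.normal(0, 0.3, n)

# Split into a proper training set and a calibration set.
x_tr, y_tr = x[:250], y[:250]
x_cal, y_cal = x[250:], y[250:]

# Conformal score: absolute residual on the calibration set.
scores = np.abs(y_cal - fit_predict(x_tr, y_tr, x_cal))

# Finite-sample-valid quantile for 90% marginal coverage.
alpha = 0.1
level = np.ceil((1 - alpha) * (len(scores) + 1)) / len(scores)
q = np.quantile(scores, level)

# Fixed-width prediction band at a new point.
pred = fit_predict(x_tr, y_tr, 0.5)
band = (pred - q, pred + q)
```

The band width 2q is constant here; EPICSCORE's contribution is to let that width adapt to epistemic uncertainty while keeping the same coverage guarantee.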
|
2502.06996
|
A view on learning robust goal-conditioned value functions: Interplay
between RL and MPC
|
eess.SY cs.SY
|
Reinforcement learning (RL) and model predictive control (MPC) offer a wealth
of distinct approaches for automatic decision-making. Given the impact both
fields have had independently across numerous domains, there is growing
interest in combining the general-purpose learning capability of RL with the
safety and robustness features of MPC. To this end, this paper presents a
tutorial-style treatment of RL and MPC, treating them as alternative approaches
to solving Markov decision processes. In our formulation, RL aims to learn a
global value function through offline exploration in an uncertain environment,
whereas MPC constructs a local value function through online optimization. This
local-global perspective suggests new ways to design policies that combine
robustness and goal-conditioned learning. Robustness is incorporated into the
RL and MPC pipelines through a scenario-based approach. Goal-conditioned
learning aims to alleviate the burden of engineering a reward function for RL.
Combining the two leads to a single policy that unites a robust, high-level RL
terminal value function with short-term, scenario-based MPC planning for
reliable constraint satisfaction. This approach leverages the benefits of both
RL and MPC, the effectiveness of which is demonstrated on classical control
benchmarks.
|
2502.06997
|
Conditional diffusion model with spatial attention and latent embedding
for medical image segmentation
|
eess.IV cs.CV
|
Diffusion models have been used extensively for high quality image and video
generation tasks. In this paper, we propose a novel conditional diffusion model
with spatial attention and latent embedding (cDAL) for medical image
segmentation. In cDAL, a convolutional neural network (CNN) based discriminator
is used at every time-step of the diffusion process to distinguish between the
generated labels and the real ones. A spatial attention map is computed based
on the features learned by the discriminator to help cDAL generate more
accurate segmentation of discriminative regions in an input image.
Additionally, we incorporated a random latent embedding into each layer of our
model to significantly reduce the number of training and sampling time-steps,
thereby making it much faster than other diffusion models for image
segmentation. We applied cDAL on 3 publicly available medical image
segmentation datasets (MoNuSeg, Chest X-ray and Hippocampus) and observed
significant qualitative and quantitative improvements with higher Dice scores
and mIoU over the state-of-the-art algorithms. The source code is publicly
available at https://github.com/Hejrati/cDAL/.
|
2502.06999
|
Outsourced diffusion sampling: Efficient posterior inference in latent
spaces of generative models
|
cs.LG
|
Any well-behaved generative model over a variable $\mathbf{x}$ can be
expressed as a deterministic transformation of an exogenous ('outsourced')
Gaussian noise variable $\mathbf{z}$: $\mathbf{x}=f_\theta(\mathbf{z})$. In
such a model (e.g., a VAE, GAN, or continuous-time flow-based model), sampling
of the target variable $\mathbf{x} \sim p_\theta(\mathbf{x})$ is
straightforward, but sampling from a posterior distribution of the form
$p(\mathbf{x}\mid\mathbf{y}) \propto
p_\theta(\mathbf{x})r(\mathbf{x},\mathbf{y})$, where $r$ is a constraint
function depending on an auxiliary variable $\mathbf{y}$, is generally
intractable. We propose to amortize the cost of sampling from such posterior
distributions with diffusion models that sample a distribution in the noise
space ($\mathbf{z}$). These diffusion samplers are trained by reinforcement
learning algorithms to enforce that the transformed samples
$f_\theta(\mathbf{z})$ are distributed according to the posterior in the data
space ($\mathbf{x}$). For many models and constraints of interest, the
posterior in the noise space is smoother than the posterior in the data space,
making it more amenable to such amortized inference. Our method enables
conditional sampling under unconditional GAN, (H)VAE, and flow-based priors,
comparing favorably both with current amortized and non-amortized inference
methods. We demonstrate the proposed outsourced diffusion sampling in several
experiments with large pretrained prior models: conditional image generation,
reinforcement learning with human feedback, and protein structure generation.
|
2502.07001
|
From Image to Video: An Empirical Study of Diffusion Representations
|
cs.CV cs.AI cs.LG
|
Diffusion models have revolutionized generative modeling, enabling
unprecedented realism in image and video synthesis. This success has sparked
interest in leveraging their representations for visual understanding tasks.
While recent works have explored this potential for image generation, the
visual understanding capabilities of video diffusion models remain largely
uncharted. To address this gap, we systematically compare the same model
architecture trained for video versus image generation, analyzing the
performance of their latent representations on various downstream tasks
including image classification, action recognition, depth estimation, and
tracking. Results show that video diffusion models consistently outperform
their image counterparts, though we find a striking range in the extent of this
superiority. We further analyze features extracted from different layers and
with varying noise levels, as well as the effect of model size and training
budget on representation and generation quality. This work marks the first
direct comparison of video and image diffusion objectives for visual
understanding, offering insights into the role of temporal information in
representation learning.
|
2502.07003
|
AstroLoc: Robust Space to Ground Image Localizer
|
cs.CV
|
Astronauts take thousands of photos of Earth per day from the International
Space Station, which, once localized on Earth's surface, are used for a
multitude of tasks, ranging from climate change research to disaster
management. The localization process, which has been performed manually for
decades, has recently been approached through image retrieval solutions: given
an astronaut photo, find its most similar match among a large database of
geo-tagged satellite images, in a task called Astronaut Photography
Localization (APL). Yet, existing APL approaches are trained only using
satellite images, without taking advantage of the millions of open-source
astronaut photos. In this work we present the first APL pipeline capable of
leveraging astronaut photos for training. We first produce full localization
information for 300,000 manually weakly labeled astronaut photos through an
automated pipeline, and then use these images to train a model, called
AstroLoc. AstroLoc learns a robust representation of Earth's surface features
through two losses: astronaut photos paired with their matching satellite
counterparts in a pairwise loss, and a second loss on clusters of satellite
imagery weighted by their relevance to astronaut photography via unsupervised
mining. We find that AstroLoc achieves a staggering 35% average improvement in
recall@1 over previous SOTA, pushing the limits of existing datasets with a
recall@100 consistently over 99%. Finally, we note that AstroLoc, without any
fine-tuning, provides excellent results for related tasks like the
lost-in-space satellite problem and historical space imagery localization.
|
2502.07004
|
Demystifying Singular Defects in Large Language Models
|
cs.CL
|
Large transformer models are known to produce high-norm tokens. In vision
transformers (ViTs), such tokens have been mathematically modeled through the
singular vectors of the linear approximations of layers. However, in large
language models (LLMs), the underlying causes of high-norm tokens remain
largely unexplored, and their properties, which differ from those of ViTs,
require a new analysis framework. In this paper, we provide both theoretical insights and
empirical validation across a range of recent models, leading to the following
observations: i) The layer-wise singular direction predicts the abrupt
explosion of token norms in LLMs. ii) The negative eigenvalues of a layer
explain its sudden decay. iii) The computational pathways leading to high-norm
tokens differ between initial and noninitial tokens. iv) High-norm tokens are
triggered by the right leading singular vector of the matrix approximating the
corresponding modules. We showcase two practical applications of these
findings: the improvement of quantization schemes and the design of LLM
signatures. Our findings not only advance the understanding of singular defects
in LLMs but also open new avenues for their application. We expect that this
work will stimulate further research into the internal mechanisms of LLMs, and
we therefore publicly release our code.
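As an illustration of the quantity in observation (iv), the leading right singular vector of a (toy) layer matrix can be computed by power iteration on $W^\top W$; this is a generic numerical sketch, not the paper's code:

```python
import numpy as np

def top_right_singular_vector(W, n_iters=100):
    """Power iteration on W^T W yields the leading right singular vector."""
    rng = np.random.default_rng(0)
    v = rng.normal(size=W.shape[1])
    v /= np.linalg.norm(v)
    for _ in range(n_iters):
        v = W.T @ (W @ v)      # one step of power iteration on W^T W
        v /= np.linalg.norm(v)
    return v

# Toy "layer": a random matrix with one dominant input direction,
# mimicking a module whose leading direction triggers high-norm tokens.
rng = np.random.default_rng(1)
W = rng.normal(size=(64, 32))
W[:, 0] *= 10.0
v = top_right_singular_vector(W)
```

Inputs aligned with v are amplified most by W, which is the mechanism the observation above attributes to high-norm tokens.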
|
2502.07005
|
Geometry-aware RL for Manipulation of Varying Shapes and Deformable
Objects
|
cs.LG cs.RO
|
Manipulating objects with varying geometries and deformable objects is a
major challenge in robotics. Tasks such as insertion with different objects or
cloth hanging require precise control and effective modelling of complex
dynamics. In this work, we frame this problem through the lens of a
heterogeneous graph that comprises smaller sub-graphs, such as actuators and
objects, accompanied by different edge types describing their interactions.
This graph representation serves as a unified structure for both rigid and
deformable objects tasks, and can be extended further to tasks comprising
multiple actuators. To evaluate this setup, we present a novel and challenging
reinforcement learning benchmark, including rigid insertion of diverse objects,
as well as rope and cloth manipulation with multiple end-effectors. These tasks
present a large search space, as both the initial and target configurations are
uniformly sampled in 3D space. To address this issue, we propose a novel
graph-based policy model, dubbed Heterogeneous Equivariant Policy (HEPi),
utilizing $SE(3)$ equivariant message passing networks as the main backbone to
exploit the geometric symmetry. In addition, by modeling explicit
heterogeneity, HEPi can outperform Transformer-based and non-heterogeneous
equivariant policies in terms of average returns, sample efficiency, and
generalization to unseen objects.
|
2502.07007
|
Grounding Creativity in Physics: A Brief Survey of Physical Priors in
AIGC
|
cs.CV
|
Recent advancements in AI-generated content have significantly improved the
realism of 3D and 4D generation. However, most existing methods prioritize
appearance consistency while neglecting underlying physical principles, leading
to artifacts such as unrealistic deformations, unstable dynamics, and
implausible object interactions. Incorporating physics priors into generative
models has become a crucial research direction to enhance structural integrity
and motion realism. This survey provides a review of physics-aware generative
methods, systematically analyzing how physical constraints are integrated into
3D and 4D generation. First, we examine recent works in incorporating physical
priors into static and dynamic 3D generation, categorizing methods based on
representation types, including vision-based, NeRF-based, and Gaussian
Splatting-based approaches. Second, we explore emerging techniques in 4D
generation, focusing on methods that model temporal dynamics with physical
simulations. Finally, we conduct a comparative analysis of major methods,
highlighting their strengths, limitations, and suitability for different
materials and motion dynamics. By presenting an in-depth analysis of
physics-grounded AIGC, this survey aims to bridge the gap between generative
models and physical realism, providing insights that inspire future research in
physically consistent content generation.
|
2502.07008
|
Early Operative Difficulty Assessment in Laparoscopic Cholecystectomy
via Snapshot-Centric Video Analysis
|
cs.CV
|
Purpose: Laparoscopic cholecystectomy (LC) operative difficulty (LCOD) is
highly variable and influences outcomes. Despite extensive LC studies in
surgical workflow analysis, limited efforts explore LCOD using intraoperative
video data. Early recognition of LCOD could allow prompt review by expert
surgeons, enhance operating room (OR) planning, and improve surgical outcomes.
Methods: We propose the clinical task of early LCOD assessment using limited
video observations. We design SurgPrOD, a deep learning model to assess LCOD by
analyzing features from global and local temporal resolutions (snapshots) of
the observed LC video. Also, we propose a novel snapshot-centric attention
(SCA) module, acting across snapshots, to enhance LCOD prediction. We introduce
the CholeScore dataset, featuring video-level LCOD labels to validate our
method.
Results: We evaluate SurgPrOD on 3 LCOD assessment scales in the CholeScore
dataset. On our new metric assessing early and stable correct predictions,
SurgPrOD surpasses baselines by at least 0.22 points. SurgPrOD improves over
baselines by at least 9 and 5 percentage points in F1 score and top-1 accuracy,
respectively, demonstrating its effectiveness in correct predictions.
Conclusion: We propose a new task for early LCOD assessment and a novel
model, SurgPrOD, which analyzes surgical video from global and local perspectives.
Our results on the CholeScore dataset establish a new benchmark for studying LCOD
using intraoperative video data.
|
2502.07011
|
DROP: Poison Dilution via Knowledge Distillation for Federated Learning
|
cs.LG cs.CR cs.DC
|
Federated Learning is vulnerable to adversarial manipulation, where malicious
clients can inject poisoned updates to influence the global model's behavior.
While existing defense mechanisms have made notable progress, they fail to
protect against adversaries that aim to induce targeted backdoors under
different learning and attack configurations. To address this limitation, we
introduce DROP (Distillation-based Reduction Of Poisoning), a novel defense
mechanism that combines clustering and activity-tracking techniques with
extraction of benign behavior from clients via knowledge distillation to tackle
stealthy adversaries that manipulate low data poisoning rates and diverse
malicious client ratios within the federation. Through extensive
experimentation, our approach demonstrates superior robustness compared to
existing defenses across a wide range of learning configurations. Finally, we
evaluate existing defenses and our method under the challenging setting of
non-IID client data distribution and highlight the challenges of designing a
resilient FL defense in this setting.
|
2502.07015
|
Data Warehouse Design for Multiple Source Forest Inventory Management
and Image Processing
|
cs.DB
|
This research developed a prototype data warehouse to integrate multi-source
forestry data for long-term monitoring, management, and sustainability. The
data warehouse is intended to accommodate all types of imagery from various
platforms, LiDAR point clouds, survey records, and paper documents, with the
capability to transform these datasets into machine learning (ML) and deep
learning classification and segmentation models. In this study, we pioneered
the integration of unmanned aerial vehicle (UAV) imagery and paper records,
testing the merged data on the YOLOv11 model. Paper records improved ground
truth, and preliminary results demonstrated notable performance improvements.
This research aims to implement a data warehouse (DW) to manage data for a
YOLO (You Only Look Once) model, which identifies objects in images. It does
this by integrating advanced data processing pipelines. Data are also stored
and easily accessible for future use, including comparing current and
historical data to understand growth or decline patterns. In addition, the
design is used to optimize resource usage. It also scales easily: adding
dimension tables or other fields to the fact table does not affect other parts
of the data warehouse. DW performance and estimations for growing workloads are
also explored in this paper.
|
2502.07016
|
Confidence Intervals for Evaluation of Data Mining
|
stat.ML cs.LG
|
In data mining, when binary prediction rules are used to predict a binary
outcome, many performance measures are used in a vast array of literature for
the purposes of evaluation and comparison. Some examples include classification
accuracy, precision, recall, F measures, and Jaccard index. Typically, these
performance measures are only approximately estimated from a finite dataset,
which may lead to findings that are not statistically significant. In order to
properly quantify such statistical uncertainty, it is important to provide
confidence intervals associated with these estimated performance measures. We
consider statistical inference about general performance measures used in data
mining, with both individual and joint confidence intervals. These confidence
intervals are based on asymptotic normal approximations and can be computed
fast, without the need for bootstrap resampling. We study the finite-sample
coverage probabilities for these confidence intervals and also propose a
'blurring correction' on the variance to improve the finite-sample performance.
This 'blurring correction' generalizes the plus-four method from binomial
proportions to general performance measures used in data mining. Our framework
allows multiple performance measures of multiple classification rules to be
inferred simultaneously for comparisons.
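For reference, the classical plus-four interval for a binomial proportion, which the paper's 'blurring correction' generalizes, can be sketched as follows (the 95% level and example counts are illustrative):

```python
import math

def plus_four_interval(successes, n, z=1.96):
    """Agresti-Coull 'plus-four' confidence interval for a binomial
    proportion: add two successes and two failures before applying the
    normal approximation."""
    p = (successes + 2) / (n + 4)
    half = z * math.sqrt(p * (1 - p) / (n + 4))
    return max(0.0, p - half), min(1.0, p + half)

# Example: 90 correct predictions out of 100 test cases gives a 95% CI
# for classification accuracy of roughly (0.82, 0.95).
lo, hi = plus_four_interval(90, 100)
```

The naive interval centered at 0.90 would be narrower; the plus-four shrinkage toward 0.5 is what improves finite-sample coverage.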
|
2502.07017
|
Finding Words Associated with DIF: Predicting Differential Item
Functioning using LLMs and Explainable AI
|
cs.CL cs.AI
|
We fine-tuned and compared several encoder-based Transformer large language
models (LLM) to predict differential item functioning (DIF) from the item text.
We then applied explainable artificial intelligence (XAI) methods to these
models to identify specific words associated with DIF. The data included 42,180
items designed for English language arts and mathematics summative state
assessments among students in grades 3 to 11. Prediction $R^2$ ranged from .04
to .32 among eight focal and reference group pairs. Our findings suggest that
many words associated with DIF reflect minor sub-domains included in the test
blueprint by design, rather than construct-irrelevant item content that should
be removed from assessments. This may explain why qualitative reviews of DIF
items often yield confusing or inconclusive results. Our approach can be used
to screen words associated with DIF during the item-writing process for
immediate revision, or help review traditional DIF analysis results by
highlighting key words in the text. Extensions of this research can enhance the
fairness of assessment programs, especially those that lack resources to build
high-quality items, and among smaller subpopulations where we do not have
sufficient sample sizes for traditional DIF analyses.
|
2502.07021
|
Federated Sinkhorn
|
cs.DC cs.LG
|
In this work we investigate the potential of solving the discrete Optimal
Transport (OT) problem with entropy regularization in a federated learning
setting. Recall that the celebrated Sinkhorn algorithm transforms the classical
OT linear program into strongly convex constrained optimization, facilitating
first-order methods for otherwise intractably large problems. A common
contemporary setting in which the application of Sinkhorn remains an open
problem is one where data is spread across clients with distributed
inter-communication, either because client privacy is a concern or simply out
of necessity due to processing and memory hardware limitations. In this work
we investigate various natural procedures, which we refer to as Federated
Sinkhorn, that handle distributed environments where data is partitioned across
multiple clients. We formulate the problem as minimizing the transport cost
with an entropy regularization term, subject to marginal constraints, where
block components of the source and target distribution vectors are locally
known to clients corresponding to each block. We consider both synchronous and
asynchronous variants as well as all-to-all and server-client communication
topology protocols. Each procedure allows clients to compute local operations
on their data partition while periodically exchanging information with others.
We provide theoretical guarantees on convergence for the different variants
under different possible conditions. We empirically demonstrate the algorithms'
performance on synthetic datasets and a real-world financial risk assessment
application. The investigation highlights the subtle tradeoffs associated with
computation and communication time in different settings and how they depend on
problem size and sparsity.
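For reference, the centralized Sinkhorn iteration that the federated variants distribute across clients can be sketched in a few lines; the cost matrix and uniform marginals below are illustrative:

```python
import numpy as np

def sinkhorn(C, a, b, eps=0.1, n_iters=500):
    """Entropy-regularized OT via Sinkhorn's alternating scaling updates."""
    K = np.exp(-C / eps)                 # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(n_iters):
        v = b / (K.T @ u)                # match column marginals
        u = a / (K @ v)                  # match row marginals
    return u[:, None] * K * v[None, :]   # transport plan

# Toy example: 5 points on a line, squared-distance cost, uniform marginals.
x = np.linspace(0, 1, 5)
C = (x[:, None] - x[None, :]) ** 2
a = b = np.ones(5) / 5
P = sinkhorn(C, a, b)
```

In the federated setting described above, blocks of a and b (and the corresponding scaling updates) live on different clients, so the matrix-vector products require communication.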
|
2502.07022
|
AIMS.au: A Dataset for the Analysis of Modern Slavery Countermeasures in
Corporate Statements
|
cs.CL cs.AI cs.LG
|
Despite over a decade of legislative efforts to address modern slavery in the
supply chains of large corporations, the effectiveness of government oversight
remains hampered by the challenge of scrutinizing thousands of statements
annually. While Large Language Models (LLMs) can be considered a well
established solution for the automatic analysis and summarization of documents,
recognizing concrete modern slavery countermeasures taken by companies and
differentiating those from vague claims remains a challenging task. To help
evaluate and fine-tune LLMs for the assessment of corporate statements, we
introduce a dataset composed of 5,731 modern slavery statements taken from the
Australian Modern Slavery Register and annotated at the sentence level. This
paper details the construction steps for the dataset that include the careful
design of annotation specifications, the selection and preprocessing of
statements, and the creation of high-quality annotation subsets for effective
model evaluations. To demonstrate our dataset's utility, we propose a machine
learning methodology for the detection of sentences relevant to mandatory
reporting requirements set by the Australian Modern Slavery Act. We then follow
this methodology to benchmark modern language models under zero-shot and
supervised learning settings.
|
2502.07025
|
Detecting Neurodegenerative Diseases using Frame-Level Handwriting
Embeddings
|
cs.LG cs.CV
|
In this study, we explored the use of spectrograms to represent handwriting
signals for assessing neurodegenerative diseases, including 42 healthy controls
(CTL), 35 subjects with Parkinson's Disease (PD), 21 with Alzheimer's Disease
(AD), and 15 with Parkinson's Disease Mimics (PDM). We applied CNN and
CNN-BLSTM models for binary classification using both multi-channel fixed-size
and frame-based spectrograms. Our results showed that handwriting tasks and
spectrogram channel combinations significantly impacted classification
performance. The highest F1-score (89.8%) was achieved for AD vs. CTL, while PD
vs. CTL reached 74.5%, and PD vs. PDM scored 77.97%. CNN consistently
outperformed CNN-BLSTM. Different sliding window lengths were tested for
constructing frame-based spectrograms. A 1-second window worked best for AD,
longer windows improved PD classification, and window length had little effect
on PD vs. PDM.
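The frame-based representation described above can be sketched in a few lines. This is a minimal illustration, not the paper's pipeline: the signal, sampling rate, and window settings are hypothetical stand-ins, using `scipy.signal.spectrogram` with the 1-second window the authors found best for AD vs. CTL.

```python
import numpy as np
from scipy.signal import spectrogram

# Hypothetical handwriting signal: a pen-trajectory channel sampled at 200 Hz.
fs = 200
rng = np.random.default_rng(0)
signal = np.sin(2 * np.pi * 5 * np.arange(10 * fs) / fs) \
    + 0.1 * rng.standard_normal(10 * fs)

# 1-second analysis window with 50% overlap between consecutive frames.
freqs, times, Sxx = spectrogram(signal, fs=fs, nperseg=fs, noverlap=fs // 2)

# Log-scale the power so the CNN input has a dB-like dynamic range.
log_spec = 10 * np.log10(Sxx + 1e-10)
print(log_spec.shape)  # (frequency bins, time frames)
```

Stacking such spectrograms from several pen channels would give the multi-channel inputs the study compares.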
|
2502.07026
|
Machine Learning for Everyone: Simplifying Healthcare Analytics with
BigQuery ML
|
cs.LG cs.AI
|
Machine learning (ML) is transforming healthcare by enabling predictive
analytics, personalized treatments, and improved patient outcomes. However,
traditional ML workflows require specialized skills, infrastructure, and
resources, limiting accessibility for many healthcare professionals. This paper
explores how Google Cloud's BigQuery ML simplifies the development and
deployment of ML models using SQL, reducing technical barriers. Through a case
study on diabetes prediction using the Diabetes Health Indicators Dataset, we
evaluate three predictive models: Logistic Regression, Boosted Tree, and Deep
Neural Network (DNN). Our results demonstrate that the Boosted Tree model
achieves the highest performance, making it highly effective for diabetes
prediction. This study highlights BigQuery ML's role in democratizing machine
learning by providing a scalable, efficient, and accessible solution for
healthcare analytics.
|
2502.07027
|
Representational Alignment with Chemical Induced Fit for Molecular
Relational Learning
|
cs.LG cs.AI
|
Molecular Relational Learning (MRL) is widely applied in natural sciences to
predict relationships between molecular pairs by extracting structural
features. The representational similarity between substructure pairs determines
the functional compatibility of molecular binding sites. Nevertheless, aligning
substructure representations by attention mechanisms lacks guidance from
chemical knowledge, resulting in unstable model performance in chemical space
(\textit{e.g.}, functional group, scaffold) shifted data. With theoretical
justification, we propose the \textbf{Re}presentational \textbf{Align}ment with
Chemical Induced \textbf{Fit} (ReAlignFit) to enhance the stability of MRL.
ReAlignFit dynamically aligns substructure representation in MRL by introducing
chemical Induced Fit-based inductive bias. In the induction process, we design
the Bias Correction Function based on substructure edge reconstruction to align
representations between substructure pairs by simulating chemical
conformational changes (dynamic combination of substructures). ReAlignFit
further integrates the Subgraph Information Bottleneck during the fit process to
refine and optimize substructure pairs exhibiting high chemical functional
compatibility, leveraging them to generate molecular embeddings. Experimental
results on nine datasets demonstrate that ReAlignFit outperforms
state-of-the-art models in two tasks and significantly enhances the model's
stability in both rule-shifted and scaffold-shifted data distributions.
|
2502.07029
|
Leveraging Allophony in Self-Supervised Speech Models for Atypical
Pronunciation Assessment
|
cs.CL cs.AI cs.LG eess.AS
|
Allophony refers to the variation in the phonetic realization of a phoneme
based on its phonetic environment. Modeling allophones is crucial for atypical
pronunciation assessment, which involves distinguishing atypical from typical
pronunciations. However, recent phoneme classifier-based approaches often
simplify this by treating various realizations as a single phoneme, bypassing
the complexity of modeling allophonic variation. Motivated by the acoustic
modeling capabilities of frozen self-supervised speech model (S3M) features, we
propose MixGoP, a novel approach that leverages Gaussian mixture models to
model phoneme distributions with multiple subclusters. Our experiments show
that MixGoP achieves state-of-the-art performance across four out of five
datasets, including dysarthric and non-native speech. Our analysis further
suggests that S3M features capture allophonic variation more effectively than
MFCCs and Mel spectrograms, highlighting the benefits of integrating MixGoP
with S3M features.
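The core MixGoP idea, a mixture with multiple subclusters per phoneme instead of a single mode, can be sketched with `sklearn`'s `GaussianMixture`. The features below are random stand-ins for frozen S3M frame features, and the two-cluster layout is only meant to mimic allophonic subcategories.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Hypothetical frame-level features for one phoneme; two clusters stand in
# for two allophonic realizations of the same phoneme.
typical = np.vstack([
    rng.normal(loc=0.0, scale=0.5, size=(200, 8)),
    rng.normal(loc=3.0, scale=0.5, size=(200, 8)),
])

# Model the phoneme as a mixture over subclusters rather than one Gaussian.
gmm = GaussianMixture(n_components=2, random_state=0).fit(typical)

# Score new frames: low log-likelihood under the typical-speech model
# suggests an atypical pronunciation.
typical_frame = np.zeros((1, 8))
atypical_frame = np.full((1, 8), 10.0)
print(gmm.score(typical_frame) > gmm.score(atypical_frame))  # True
```

Fitting one such mixture per phoneme and using the log-likelihood as a goodness-of-pronunciation score mirrors the assessment setup the abstract describes.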
|
2502.07030
|
PrismAvatar: Real-time animated 3D neural head avatars on edge devices
|
cs.CV cs.GR cs.LG
|
We present PrismAvatar: a 3D head avatar model which is designed specifically
to enable real-time animation and rendering on resource-constrained edge
devices, while still enjoying the benefits of neural volumetric rendering at
training time. By integrating a rigged prism lattice with a 3D morphable head
model, we use a hybrid rendering model to simultaneously reconstruct a
mesh-based head and a deformable NeRF model for regions not represented by the
3DMM. We then distill the deformable NeRF into a rigged mesh and neural
textures, which can be animated and rendered efficiently within the constraints
of the traditional triangle rendering pipeline. In addition to running at 60
fps with low memory usage on mobile devices, we find that our trained models
have comparable quality to state-of-the-art 3D avatar models on desktop
devices.
|
2502.07036
|
Automated Consistency Analysis of LLMs
|
cs.CR cs.AI cs.LG
|
Generative AI (Gen AI) with large language models (LLMs) is being widely
adopted across industry, academia, and government. Cybersecurity is one of
the key sectors where LLMs can be, and in some cases already are, being used.
A number of problems inhibit the adoption of trustworthy Gen AI and LLMs in
cybersecurity and other such critical areas. One key challenge to the
trustworthiness and reliability of LLMs is: how consistent is an LLM in its
responses?
In this paper, we analyze and formally define the consistency of LLM
responses and develop a framework for consistency evaluation. The paper
proposes two approaches to validate consistency: self-validation, and
validation across multiple LLMs. We have carried out extensive experiments for
several LLMs such as GPT4oMini, GPT3.5, Gemini, Cohere, and Llama3, on a
security benchmark consisting of several cybersecurity questions: informational
and situational. Our experiments corroborate the fact that even though these
LLMs are being considered and/or already being used for several cybersecurity
tasks today, they are often inconsistent in their responses, and thus are
untrustworthy and unreliable for cybersecurity.
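A minimal sketch of the self-validation idea: sample several responses to the same question and measure pairwise agreement. The responses and the equivalence judge below are toy stand-ins; the paper's cross-LLM validation would instead ask a second model to compare answers.

```python
from itertools import combinations

def pairwise_consistency(responses, agree):
    """Fraction of response pairs judged equivalent; 1.0 = fully consistent."""
    pairs = list(combinations(responses, 2))
    if not pairs:
        return 1.0
    return sum(agree(a, b) for a, b in pairs) / len(pairs)

# Stub "LLM" output: three answers to one security question.
responses = [
    "use multi-factor authentication",
    "enable multi-factor authentication",
    "rotate passwords monthly",
]

# Toy equivalence judge; a real system might use a second LLM or
# embedding similarity to decide whether two answers agree.
def agree(a, b):
    return "multi-factor" in a and "multi-factor" in b

print(round(pairwise_consistency(responses, agree), 2))  # 0.33
```

A score well below 1.0, as here, is the kind of inconsistency signal the experiments report across cybersecurity questions.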
|
2502.07039
|
Boosting of Classification Models with Human-in-the-Loop Computational
Visual Knowledge Discovery
|
cs.LG cs.HC
|
High-risk artificial intelligence and machine learning classification tasks,
such as healthcare diagnosis, require accurate and interpretable prediction
models. However, classifier algorithms typically sacrifice individual
case-accuracy for overall model accuracy, limiting analysis of class overlap
areas regardless of task significance. The Adaptive Boosting meta-algorithm,
which won the 2003 G\"odel Prize, analytically assigns higher weights to
misclassified cases to reclassify. However, it relies on weaker base
classifiers that are iteratively strengthened, limiting improvements from base
classifiers. Combining visual and computational approaches enables selecting
stronger base classifiers before boosting. This paper proposes moving boosting
methodology from focusing on only misclassified cases to all cases in the class
overlap areas using Computational and Interactive Visual Learning (CIVL) with a
Human-in-the-Loop. It builds classifiers in lossless visualizations integrating
human domain expertise and visual insights. A Divide and Classify process
splits cases to simple and complex, classifying these individually through
computational analysis and data visualization with lossless visualization
spaces of Parallel Coordinates or other General Line Coordinates. After finding
pure and overlap class areas, simple cases in the pure areas are classified,
generating interpretable sub-models such as decision rules in propositional and
first-order logic. Only the multidimensional cases in the overlap areas are
losslessly visualized, simplifying the end-user's cognitive task of identifying
difficult case patterns, including engineering features to form new
classifiable patterns. A demonstration produces a perfectly accurate and
losslessly interpretable model of the Iris dataset, and simulated data shows
generalized benefits to the accuracy and interpretability of models, increasing
end-user confidence in discovered models.
|
2502.07042
|
Building networks of shared research interests by embedding words into a
representation space
|
cs.SI
|
Departments within a university are not only administrative units, but also
an effort to gather investigators around common fields of academic study. A
pervasive challenge is connecting members with shared research interests both
within and between departments. Here I describe a workflow that adapts methods
from natural language processing to generate a network connecting $n=79$
members of a university department, or multiple departments within a faculty
($n=278$), based on common topics in their research publications. After
extracting and processing terms from $n=16,901$ abstracts in the PubMed
database, the co-occurrence of terms is encoded in a sparse document-term
matrix. Based on the angular distances between the presence-absence vectors for
every pair of terms, I use the uniform manifold approximation and projection
(UMAP) method to embed the terms into a representational space such that terms
that tend to appear in the same documents are closer together. Each author's
corpus defines a probability distribution over terms in this space. Using the
Wasserstein distance to quantify the similarity between these distributions, I
generate a distance matrix among authors that can be analyzed and visualized as
a graph. I demonstrate that this nonparametric method produces clusters with
distinct themes that are consistent with some academic divisions, while
identifying untapped connections among members. A documented workflow
comprising Python and R scripts is available under the MIT license at
https://github.com/PoonLab/tragula.
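The author-distance step of this workflow can be sketched with `scipy.stats.wasserstein_distance`. As a simplification, the hypothetical term coordinates below are 1D stand-ins for the UMAP embedding (which is typically 2D or higher, where an exact Wasserstein computation needs optimal transport), and the author corpora are invented.

```python
from scipy.stats import wasserstein_distance

# Hypothetical 1D stand-in for the UMAP term embedding: each term gets a
# coordinate; co-occurring terms sit close together.
term_coord = {"virus": 0.1, "phylogeny": 0.2, "network": 0.9, "graph": 1.0}

# Each author's corpus defines a weighted distribution over embedded terms.
author_a = {"virus": 5, "phylogeny": 3}   # virology-flavored corpus
author_b = {"virus": 4, "phylogeny": 4}   # similar interests
author_c = {"network": 6, "graph": 2}     # graph-theory-flavored

def author_distance(u, v):
    """Wasserstein distance between two authors' term distributions."""
    return wasserstein_distance(
        [term_coord[t] for t in u],
        [term_coord[t] for t in v],
        u_weights=list(u.values()),
        v_weights=list(v.values()),
    )

# Authors with shared interests end up closer in the distance matrix.
print(author_distance(author_a, author_b) < author_distance(author_a, author_c))  # True
```

Computing this distance for every author pair yields the matrix that is then visualized as a graph of shared research interests.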
|
2502.07045
|
Scalable and Ethical Insider Threat Detection through Data Synthesis and
Analysis by LLMs
|
cs.CR cs.AI cs.CL cs.CY
|
Insider threats wield an outsized influence on organizations,
disproportionate to their small numbers. This is due to the internal access
insiders have to systems, information, and infrastructure. Signals for such
risks may be found in anonymous submissions to public web-based job search
site reviews.
This research studies the potential for large language models (LLMs) to analyze
and detect insider threat sentiment within job site reviews. Addressing ethical
data collection concerns, this research utilizes synthetic data generation
using LLMs alongside existing job review datasets. A comparative analysis of
sentiment scores generated by LLMs is benchmarked against expert human scoring.
Findings reveal that LLMs demonstrate alignment with human evaluations in most
cases, thus effectively identifying nuanced indicators of threat sentiment. The
performance is lower on human-generated data than synthetic data, suggesting
areas for improvement in evaluating real-world data. Text diversity analysis
found differences between human-generated and LLM-generated datasets, with
synthetic data exhibiting somewhat lower diversity. Overall, the results
demonstrate the applicability of LLMs to insider threat detection, and a
scalable solution for insider sentiment testing by overcoming ethical and
logistical barriers tied to data acquisition.
|
2502.07046
|
SnipGen: A Mining Repository Framework for Evaluating LLMs for Code
|
cs.SE cs.AI cs.LG
|
Language Models (LLMs), such as transformer-based neural networks with
billions of parameters, have become increasingly prevalent in software
engineering (SE). These models, trained on extensive datasets that include code
repositories, exhibit remarkable capabilities for SE tasks. However, evaluating
their effectiveness poses significant challenges, primarily due to the
potential overlap between the datasets used for training and those employed for
evaluation. To address this issue, we introduce SnipGen, a comprehensive
repository mining framework designed to leverage prompt engineering across
various downstream tasks for code generation. SnipGen aims to mitigate data
contamination by generating robust testbeds and crafting tailored data points
to assist researchers and practitioners in evaluating LLMs for code-related
tasks. In our exploratory study, SnipGen mined approximately 227K data points
from 338K recent code changes in GitHub commits, focusing on method-level
granularity. SnipGen features a collection of prompt templates that can be
combined to create a Chain-of-Thought-like sequence of prompts, enabling a
nuanced assessment of LLMs' code generation quality. By providing the mining
tool, the methodology, and the dataset, SnipGen empowers researchers and
practitioners to rigorously evaluate and interpret LLMs' performance in
software engineering contexts.
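The template-chaining idea can be sketched as follows. These templates are illustrative stand-ins, not SnipGen's actual prompt collection, but they show how per-snippet prompts compose into a Chain-of-Thought-like sequence.

```python
# Hypothetical SnipGen-style templates; the real framework ships its own
# collection, and these are invented for illustration.
TEMPLATES = [
    "Summarize what the following Java method does:\n{code}",
    "Given that summary, list the edge cases the method must handle.",
    "Now write the method body so it handles those edge cases.",
]

def build_cot_prompts(code_snippet):
    """Chain templates into a Chain-of-Thought-like prompt sequence."""
    prompts = [TEMPLATES[0].format(code=code_snippet)]
    prompts.extend(TEMPLATES[1:])
    return prompts

# A mined method-level data point (toy example).
method = "int max(int a, int b) { return a > b ? a : b; }"
for step, prompt in enumerate(build_cot_prompts(method), 1):
    print(f"Step {step}: {prompt.splitlines()[0]}")
```

Feeding the sequence to a model step by step, and checking each intermediate answer, is what enables the nuanced assessment of code-generation quality the abstract describes.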
|
2502.07049
|
LLMs in Software Security: A Survey of Vulnerability Detection
Techniques and Insights
|
cs.CR cs.AI
|
Large Language Models (LLMs) are emerging as transformative tools for
software vulnerability detection, addressing critical challenges in the
security domain. Traditional methods, such as static and dynamic analysis,
often falter due to inefficiencies, high false positive rates, and the growing
complexity of modern software systems. By leveraging their ability to analyze
code structures, identify patterns, and generate repair suggestions, LLMs,
exemplified by models like GPT, BERT, and CodeBERT, present a novel and
scalable approach to mitigating vulnerabilities. This paper provides a detailed
survey of LLMs in vulnerability detection. It examines key aspects, including
model architectures, application methods, target languages, fine-tuning
strategies, datasets, and evaluation metrics. We also analyze the scope of
current research problems, highlighting the strengths and weaknesses of
existing approaches. Further, we address challenges such as cross-language
vulnerability detection, multimodal data integration, and repository-level
analysis. Based on these findings, we propose solutions for issues like dataset
scalability, model interpretability, and applications in low-resource
scenarios. Our contributions are threefold: (1) a systematic review of how LLMs
are applied in vulnerability detection; (2) an analysis of shared patterns and
differences across studies, with a unified framework for understanding the
field; and (3) a summary of key challenges and future research directions. This
work provides valuable insights for advancing LLM-based vulnerability
detection. We also maintain a regularly updated list of selected papers at
https://github.com/OwenSanzas/LLM-For-Vulnerability-Detection
|