| id | title | categories | abstract |
|---|---|---|---|
2501.16312
|
LinPrim: Linear Primitives for Differentiable Volumetric Rendering
|
cs.CV
|
Volumetric rendering has become central to modern novel view synthesis
methods, which use differentiable rendering to optimize 3D scene
representations directly from observed views. While many recent works build on
NeRF or 3D Gaussians, we explore an alternative volumetric scene
representation. More specifically, we introduce two new scene representations
based on linear primitives (octahedra and tetrahedra), both of which define
homogeneous volumes bounded by triangular faces. This formulation aligns
naturally with standard mesh-based tools, minimizing overhead for downstream
applications. To optimize these primitives, we present a differentiable
rasterizer that runs efficiently on GPUs, allowing end-to-end gradient-based
optimization while maintaining real-time rendering capabilities. Through
experiments on real-world datasets, we demonstrate comparable performance to
state-of-the-art volumetric methods while requiring fewer primitives to achieve
similar reconstruction fidelity. Our findings provide insights into the
geometry of volumetric rendering and suggest that adopting explicit polyhedra
can expand the design space of scene representations.
|
2501.16319
|
Adaptive Iterative Compression for High-Resolution Files: an Approach
Focused on Preserving Visual Quality in Cinematic Workflows
|
cs.CV cs.ET cs.LG cs.PF
|
This study presents an iterative adaptive compression model for
high-resolution DPX-derived TIFF files used in cinematographic workflows and
digital preservation. The model employs SSIM and PSNR metrics to dynamically
adjust compression parameters across three configurations (C0, C1, C2),
achieving storage reductions of up to 83.4% while maintaining high visual
fidelity (SSIM > 0.95). Validation across three diverse productions (a
black-and-white classic, a soft-palette drama, and a complex action film)
demonstrated the method's effectiveness in preserving critical visual elements
while significantly reducing storage requirements. Professional evaluators
reported a 90% acceptance rate for the optimal C1 configuration, with
artifacts remaining
below perceptual threshold in critical areas. Comparative analysis with
JPEG2000 and H.265 showed superior quality preservation at equivalent
compression rates, particularly for high bit-depth content. While requiring
additional computational overhead, the method's storage benefits and quality
control capabilities make it suitable for professional workflows, with
potential applications in medical imaging and cloud storage optimization.
|
2501.16322
|
Implicit Bias in Matrix Factorization and its Explicit Realization in a
New Architecture
|
cs.LG math.OC stat.ML
|
Gradient descent for matrix factorization is known to exhibit an implicit
bias toward approximately low-rank solutions. While existing theories often
assume the boundedness of iterates, empirically the bias persists even with
unbounded sequences. We thus hypothesize that implicit bias is driven by
divergent dynamics markedly different from the convergent dynamics for data
fitting. Using this perspective, we introduce a new factorization model:
$X\approx UDV^\top$, where $U$ and $V$ are constrained within norm balls, while
$D$ is a diagonal factor allowing the model to span the entire search space.
Our experiments reveal that this model exhibits a strong implicit bias
regardless of initialization and step size, yielding truly (rather than
approximately) low-rank solutions. Furthermore, drawing parallels between
matrix factorization and neural networks, we propose a novel neural network
model featuring constrained layers and diagonal components. This model achieves
strong performance across various regression and classification tasks while
finding low-rank solutions, resulting in efficient and lightweight networks.
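A minimal PyTorch sketch of the constrained factorization $X\approx UDV^\top$ described above; the norm-ball radius, the project-after-step scheme, and the toy training loop are illustrative assumptions, not the authors' implementation.

```python
import torch

# Hypothetical sketch: X ~= U diag(d) V^T, with U and V projected back onto
# Frobenius-norm balls after every gradient step (assumed projection scheme).
m, n, r = 50, 40, 10
X = torch.randn(m, n)

U = torch.randn(m, r, requires_grad=True)
V = torch.randn(n, r, requires_grad=True)
d = torch.randn(r, requires_grad=True)    # diagonal factor, unconstrained

radius = 1.0                              # assumed norm-ball radius
opt = torch.optim.SGD([U, V, d], lr=1e-2)

for _ in range(2000):
    opt.zero_grad()
    loss = ((U @ torch.diag(d) @ V.T - X) ** 2).mean()
    loss.backward()
    opt.step()
    with torch.no_grad():                 # projection onto the norm balls
        for W in (U, V):
            norm = W.norm()
            if norm > radius:
                W.mul_(radius / norm)

# With U and V bounded, any growth in scale must pass through d, so a
# low-rank bias would show up as many entries of d driven toward zero.
print(sorted(d.abs().tolist(), reverse=True))
```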
|
2501.16325
|
Tailored Forecasting from Short Time Series via Meta-learning
|
cs.LG nlin.CD physics.comp-ph
|
Machine learning (ML) models can be effective for forecasting the dynamics of
unknown systems from time-series data, but they often require large amounts of
data and struggle to generalize across systems with varying dynamics. Combined,
these issues make forecasting from short time series particularly challenging.
To address this problem, we introduce Meta-learning for Tailored Forecasting
from Related Time Series (METAFORS), which uses related systems with longer
time-series data to supplement limited data from the system of interest. By
leveraging a library of models trained on related systems, METAFORS builds
tailored models to forecast system evolution with limited data. Using a
reservoir computing implementation and testing on simulated chaotic systems, we
demonstrate METAFORS' ability to predict both short-term dynamics and long-term
statistics, even when test and related systems exhibit significantly different
behaviors and the available data are scarce, highlighting its robustness and
versatility in data-limited scenarios.
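For readers unfamiliar with the forecasting backbone, here is a minimal reservoir computing (echo state network) sketch; the reservoir size, spectral radius, and ridge readout are generic choices, and METAFORS' library of related-system models and its signal-to-model mapping are not shown.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D series standing in for a longer, related system's training data.
t = np.linspace(0, 60, 3000)
u = np.sin(t) + 0.5 * np.sin(2.3 * t)

N = 300                                     # reservoir size (assumed)
Win = rng.uniform(-0.5, 0.5, N)             # input weights
W = rng.normal(0, 1, (N, N))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))   # spectral radius 0.9 (assumed)

# Drive the reservoir and collect its states.
states = np.zeros((len(u), N))
x = np.zeros(N)
for i, ui in enumerate(u):
    x = np.tanh(W @ x + Win * ui)
    states[i] = x

# Ridge-regression readout predicting the next value of the series.
S, y = states[:-1], u[1:]
Wout = np.linalg.solve(S.T @ S + 1e-6 * np.eye(N), S.T @ y)
print("one-step MSE:", np.mean((S @ Wout - y) ** 2))
```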
|
2501.16327
|
LUCY: Linguistic Understanding and Control Yielding Early Stage of Her
|
cs.CL cs.SD eess.AS
|
The film Her features Samantha, a sophisticated AI audio agent who is capable
of understanding both linguistic and paralinguistic information in human speech
and delivering real-time responses that are natural, informative and sensitive
to emotional subtleties. Moving one step toward such a sophisticated audio
agent, and building on recent advancements in end-to-end (E2E) speech systems,
we propose LUCY, an E2E speech model that (1) senses and responds to the
user's emotion, (2) delivers responses in a succinct and natural style, and
(3) uses external tools to answer real-time inquiries. Experimental results
show that LUCY is better at emotion control than peer models, generating
emotional responses based on linguistic emotional instructions and responding
to paralinguistic emotional cues. LUCY is
also able to generate responses in a more natural style, as judged by external
language models, without sacrificing much performance on general question
answering. Finally, LUCY can leverage function calls to answer questions that
are out of its knowledge scope.
|
2501.16329
|
sDREAMER: Self-distilled Mixture-of-Modality-Experts Transformer for
Automatic Sleep Staging
|
cs.LG cs.AI
|
Automatic sleep staging based on electroencephalography (EEG) and
electromyography (EMG) signals is an important aspect of sleep-related
research. Current sleep staging methods suffer from two major drawbacks. First,
there are limited information interactions between modalities in the existing
methods. Second, current methods do not develop unified models that can handle
different sources of input. To address these issues, we propose a novel sleep
stage scoring model, sDREAMER, which emphasizes cross-modality interaction and
per-channel performance. Specifically, we develop a mixture-of-modality-expert
(MoME) model with three pathways for EEG, EMG, and mixed signals with partially
shared weights. We further propose a self-distillation training scheme to
promote information interaction across modalities. Our model is trained with
multi-channel inputs and can make classifications on either single-channel or
multi-channel inputs. Experiments demonstrate that our model outperforms the
existing transformer-based sleep scoring methods for multi-channel inference.
For single-channel inference, our model also outperforms the transformer-based
models trained with single-channel signals.
|
2501.16330
|
RelightVid: Temporal-Consistent Diffusion Model for Video Relighting
|
cs.CV cs.AI
|
Diffusion models have demonstrated remarkable success in image generation and
editing, with recent advancements enabling albedo-preserving image relighting.
However, applying these models to video relighting remains challenging due to
the lack of paired video relighting datasets and the high demands for output
fidelity and temporal consistency, further complicated by the inherent
randomness of diffusion models. To address these challenges, we introduce
RelightVid, a flexible framework for video relighting that can accept
background video, text prompts, or environment maps as relighting conditions.
Trained on in-the-wild videos with carefully designed illumination
augmentations and rendered videos under extreme dynamic lighting, RelightVid
achieves arbitrary video relighting with high temporal consistency without
intrinsic decomposition while preserving the illumination priors of its image
backbone.
|
2501.16331
|
Decoding OTC Government Bond Market Liquidity: An ABM Model for Market
Dynamics
|
q-fin.TR cs.AI
|
The over-the-counter (OTC) government bond markets are characterised by their
bilateral trading structures, which pose unique challenges to understanding and
ensuring market stability and liquidity. In this paper, we develop a bespoke
agent-based model (ABM) that simulates market-maker interactions within a
stylised government bond
market. The model focuses on the dynamics of liquidity and stability in the
secondary trading of government bonds, particularly in concentrated markets
like those found in Australia and the UK. Through this simulation, we test key
hypotheses around improving market stability, focusing on the effects of agent
diversity, business costs, and client base size. We demonstrate that greater
agent diversity enhances market liquidity and that reducing the costs of
market-making can improve overall market stability. The model offers insights
into computational finance by simulating trading without price transparency,
highlighting how micro-structural elements can affect macro-level market
outcomes. This research contributes to the evolving field of computational
finance by employing computational intelligence techniques to better understand
the fundamental mechanics of government bond markets, providing actionable
insights for both academics and practitioners.
|
2501.16333
|
A New Proof for the Linear Filtering and Smoothing Equations, and
Asymptotic Expansion of Nonlinear Filtering
|
eess.SP cs.IT math.IT math.PR math.ST stat.TH
|
In this paper, we propose a new approach to the linear filtering and
smoothing problem and demonstrate its applicability to nonlinear filtering. For
the linear case, our main theorem provides an explicit expression for the
conditional distribution of the hidden process given the observations, leading
to a novel derivation of the linear filtering and smoothing equations.
Moreover, the theorem offers an efficient framework for computing the
asymptotic expansion of nonlinear filtering.
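For reference, the linear filtering equations the abstract refers to take the familiar Kalman filter form below; this is a textbook statement of the recursions for the linear-Gaussian model, not the paper's new derivation.

```latex
% Model: x_k = A x_{k-1} + w_k with w_k ~ N(0, Q),
%        y_k = H x_k + v_k with v_k ~ N(0, R).
\begin{align*}
  \hat{x}_{k|k-1} &= A \hat{x}_{k-1|k-1}, &
  P_{k|k-1} &= A P_{k-1|k-1} A^\top + Q, \\
  K_k &= P_{k|k-1} H^\top \bigl( H P_{k|k-1} H^\top + R \bigr)^{-1}, \\
  \hat{x}_{k|k} &= \hat{x}_{k|k-1} + K_k \bigl( y_k - H \hat{x}_{k|k-1} \bigr), &
  P_{k|k} &= (I - K_k H)\, P_{k|k-1}.
\end{align*}
```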
|
2501.16334
|
RNN-Based Models for Predicting Seizure Onset in Epileptic Patients
|
eess.SP cs.LG
|
Early management and better clinical outcomes for epileptic patients depend
on seizure prediction. The accuracy and false alarm rates of existing systems
are often compromised by their dependence on static thresholds and basic
Electroencephalogram (EEG) properties. A novel Recurrent Neural Network
(RNN)-based method for seizure onset prediction is proposed in this article to
overcome these limitations. As opposed to conventional techniques, the proposed
system makes use of Long Short-Term Memory (LSTM) networks to extract temporal
correlations from unprocessed EEG data. It enables the system to adapt
dynamically to the unique EEG patterns of each patient, improving prediction
accuracy. The system's methodology comprises thorough data collection,
preprocessing, and LSTM-based feature extraction. Annotated EEG datasets are
then used for model training and validation. Results show a considerable
reduction in false alarm rates (average of 6.8%) and an improvement in
prediction accuracy (90.2% sensitivity, 88.9% specificity, and AUC-ROC of 93).
Additionally, computational efficiency is significantly higher than that of
existing systems (12 ms processing time, 45 MB memory consumption). These
results demonstrate the effectiveness of the proposed RNN-based strategy for
improving seizure prediction reliability, opening up possibilities for its
practical application in epilepsy treatment.
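A minimal PyTorch sketch of the kind of LSTM-based classifier the abstract describes; the layer sizes, window length, channel count, and single-probability output are illustrative assumptions rather than the paper's architecture.

```python
import torch
import torch.nn as nn

class SeizureLSTM(nn.Module):
    """Toy LSTM mapping a raw EEG window to a seizure-onset probability."""
    def __init__(self, n_channels=23, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_channels, hidden, num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):          # x: (batch, time, channels)
        out, _ = self.lstm(x)
        return torch.sigmoid(self.head(out[:, -1]))  # use last time step

model = SeizureLSTM()
window = torch.randn(8, 256, 23)   # 8 windows, 256 samples, 23 EEG channels
print(model(window).shape)         # -> torch.Size([8, 1])
```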
|
2501.16336
|
Runtime Analysis of Evolutionary Algorithms for Multiparty
Multiobjective Optimization
|
cs.NE cs.AI
|
In scenarios where multiple decision-makers operate within a common decision
space, each focusing on their own multi-objective optimization problem (e.g.,
bargaining games), the problem can be modeled as a multi-party multi-objective
optimization problem (MPMOP). While numerous evolutionary algorithms have been
proposed to solve MPMOPs, most results remain empirical. This paper presents
the first theoretical analysis of the expected runtime of evolutionary
algorithms on bi-party multi-objective optimization problems (BPMOPs). Our
findings demonstrate that employing traditional multi-objective optimization
algorithms to solve MPMOPs is both time-consuming and inefficient, as the
resulting population contains many solutions that fail to achieve consensus
among decision-makers. An alternative approach involves decision-makers
individually solving their respective optimization problems and seeking
consensus only in the final stage. While feasible for pseudo-Boolean
optimization problems, this method may fail to guarantee approximate
performance for one party in NP-hard problems. Finally, we propose
coevolutionary multi-party multi-objective optimizers (CoEMPMO) for
pseudo-Boolean optimization and shortest path problems within a multi-party
multi-objective context, which maintains a common solution set among all
parties through coevolution. Theoretical and experimental results demonstrate
that the proposed \( \text{CoEMPMO}_{\text{random}} \) outperforms previous
algorithms in terms of the expected lower bound on runtime for pseudo-Boolean
optimization problems. Additionally, \(
\text{CoEMPMO}_{\text{cons}}^{\text{SP}} \) achieves better efficiency and
precision in solving shortest path problems compared to existing algorithms.
|
2501.16337
|
Explore Activation Sparsity in Recurrent LLMs for Energy-Efficient
Neuromorphic Computing
|
cs.NE cs.AI cs.AR cs.LG
|
The recent rise of Large Language Models (LLMs) has revolutionized the deep
learning field. However, the desire to deploy LLMs on edge devices introduces
energy efficiency and latency challenges. Recurrent LLM (R-LLM) architectures
have proven effective in mitigating the quadratic complexity of self-attention,
making them a potential paradigm for computing on-edge neuromorphic processors.
In this work, we propose a low-cost, training-free algorithm to sparsify
R-LLMs' activations to enhance energy efficiency on neuromorphic hardware. Our
approach capitalizes on the inherent structure of these models, rendering them
well-suited for energy-constrained environments. Although primarily designed
for R-LLMs, this method can be generalized to other LLM architectures, such as
transformers, as demonstrated on the OPT model, achieving comparable sparsity
and efficiency improvements. Empirical studies illustrate that our method
significantly reduces computational demands while maintaining competitive
accuracy across multiple zero-shot learning benchmarks. Additionally, hardware
simulations with the SENECA neuromorphic processor underscore notable energy
savings and latency improvements. These results pave the way for low-power,
real-time neuromorphic deployment of LLMs and demonstrate the feasibility of
training-free on-chip adaptation using activation sparsity.
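As a generic illustration of activation sparsification (not the paper's specific algorithm), a training-free magnitude-based threshold can zero most of an activation tensor before it reaches the next layer:

```python
import torch

def sparsify(x: torch.Tensor, keep: float = 0.2) -> torch.Tensor:
    """Keep the top `keep` fraction of activations by magnitude, zero the rest."""
    k = max(1, int(keep * x.numel()))
    # Threshold = k-th largest magnitude = (numel - k + 1)-th smallest.
    thresh = x.abs().flatten().kthvalue(x.numel() - k + 1).values
    return torch.where(x.abs() >= thresh, x, torch.zeros_like(x))

h = torch.randn(4, 512)                 # hidden activations of a recurrent block
h_sparse = sparsify(h, keep=0.2)
print((h_sparse == 0).float().mean())   # ~0.8 of activations zeroed
```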
|
2501.16341
|
Developing Enhanced Conversational Agents for Social Virtual Worlds
|
eess.AS cs.CL cs.SD
|
In this paper, we present a methodology for the development of embodied
conversational agents for social virtual worlds. The agents provide multimodal
communication with their users in which speech interaction is included. Our
proposal combines different techniques related to Artificial Intelligence,
Natural Language Processing, Affective Computing, and User Modeling. Firstly,
a statistical methodology has been developed to model the system's
conversational behavior, which is learned from an initial corpus and improved
with the knowledge acquired from successive interactions. In addition, the
selection of the next system response is adapted considering information
stored in user profiles as well as the emotional content detected in the
users' utterances. Our proposal has been evaluated with
the successful development of an embodied conversational agent which has been
placed in the Second Life social virtual world. The avatar includes the
different models and interacts with the users who inhabit the virtual world in
order to provide academic information. The experimental results show that the
agent's conversational behavior adapts successfully to the specific
characteristics of users interacting in such environments.
|
2501.16343
|
Self-orthogonal and self-dual codes from maximal curves
|
cs.IT math.AG math.IT
|
In the field of algebraic geometric codes (AG codes), the characterization of
dual codes has long been a challenging problem which relies on differentials.
In this paper, we provide some descriptions for certain differentials utilizing
algebraic structure of finite fields and geometric properties of algebraic
curves. Moreover, we construct self-orthogonal and self-dual codes with
parameters $[n, k, d]_{q^2}$ for which $k + d$ is close to $n$. Additionally,
quantum codes with large minimum distance are also constructed.
|
2501.16344
|
WhiSPA: Semantically and Psychologically Aligned Whisper with
Self-Supervised Contrastive and Student-Teacher Learning
|
eess.AS cs.AI cs.CL cs.SD
|
Current speech encoding pipelines often rely on an additional text-based LM
to get robust representations of human communication, even though SotA
speech-to-text models often have an LM within. This work proposes an approach to
improve the LM within an audio model such that the subsequent text-LM is
unnecessary. We introduce WhiSPA (Whisper with Semantic and Psychological
Alignment), which leverages a novel audio training objective: contrastive loss
with a language model embedding as a teacher. Using over 500k speech segments
from mental health audio interviews, we evaluate the utility of aligning
Whisper's latent space with semantic representations from a text autoencoder
(SBERT) and lexically derived embeddings of basic psychological dimensions:
emotion and personality. Over self-supervised affective tasks and downstream
psychological tasks, WhiSPA surpasses current speech encoders, achieving an
average error reduction of 73.4% and 83.8%, respectively. WhiSPA demonstrates
that it is not always necessary to run a subsequent text LM on speech-to-text
output in order to get a rich psychological representation of human
communication.
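A minimal sketch of the training objective the abstract describes: a contrastive (InfoNCE-style) loss pulling each audio embedding toward its paired text-teacher embedding. The dimensions and temperature are illustrative assumptions, not WhiSPA's exact recipe.

```python
import torch
import torch.nn.functional as F

def teacher_contrastive_loss(audio_emb, teacher_emb, temperature=0.07):
    """InfoNCE loss: each audio embedding should match its own teacher
    (e.g., SBERT) embedding and repel the other items in the batch."""
    a = F.normalize(audio_emb, dim=-1)
    t = F.normalize(teacher_emb, dim=-1)
    logits = a @ t.T / temperature            # (batch, batch) similarities
    targets = torch.arange(len(a))            # positives on the diagonal
    return F.cross_entropy(logits, targets)

audio = torch.randn(16, 384)    # e.g., pooled Whisper states, projected
text = torch.randn(16, 384)     # paired SBERT sentence embeddings
print(teacher_contrastive_loss(audio, text))
```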
|
2501.16345
|
Self-Clustering Graph Transformer Approach to Model Resting-State
Functional Brain Activity
|
cs.LG cs.AI
|
Resting-state functional magnetic resonance imaging (rs-fMRI) offers valuable
insights into the human brain's functional organization and is a powerful tool
for investigating the relationship between brain function and cognitive
processes, as it captures the brain's functional organization without relying
on a specific task or stimulus. In this study, we
introduce a novel attention mechanism for graphs with subnetworks, named
Self-Clustering Graph Transformer (SCGT), designed to handle the issue of
uniform node updates in graph transformers. By using static functional
connectivity (FC) correlation features as input to the transformer model, SCGT
effectively captures the sub-network structure of the brain by performing
cluster-specific updates to the nodes, unlike uniform node updates in vanilla
graph transformers, further allowing us to learn and interpret the subclusters.
We validate our approach on the Adolescent Brain Cognitive Development (ABCD)
dataset, comprising 7,957 participants, for the prediction of total cognitive
score and gender classification. Our results demonstrate that SCGT outperforms
the vanilla graph transformer method and other recent models, offering a
promising tool for modeling brain functional connectivity and interpreting the
underlying subnetwork structures.
|
2501.16346
|
Self-supervised Graph Transformer with Contrastive Learning for Brain
Connectivity Analysis towards Improving Autism Detection
|
cs.LG cs.AI
|
Functional Magnetic Resonance Imaging (fMRI) provides useful insights into
brain function during both task and rest. Representing fMRI data using
correlation matrices is found to be a reliable method of analyzing the inherent
connectivity of the brain in the resting and active states. Graph Neural
Networks (GNNs) have been widely used for brain network analysis due to their
inherent explainability capability. In this work, we introduce a novel
framework using contrastive self-supervised learning graph transformers,
incorporating a brain network transformer encoder with random graph
alterations. The proposed network leverages both contrastive learning and graph
alterations to effectively train the graph transformer for autism detection.
Our approach, tested on Autism Brain Imaging Data Exchange (ABIDE) data,
demonstrates superior autism detection, achieving an AUROC of 82.6 and an
accuracy of 74%, surpassing current state-of-the-art methods.
|
2501.16347
|
Identification of Hardware Trojan Locations in Gate-Level Netlist using
Nearest Neighbour Approach integrated with Machine Learning Technique
|
cs.LG cs.AI
|
In the evolving landscape of integrated circuit design, detecting Hardware
Trojans (HTs) within a multi-entity design cycle presents significant
challenges. This research proposes an innovative machine learning-based
methodology for identifying malicious logic gates in gate-level netlists by
focusing on path retrace algorithms. The methodology is validated across three
distinct cases, each employing different machine learning models to classify
HTs. Case I utilizes a decision tree algorithm for node-to-node comparisons,
significantly improving detection accuracy through the integration of Principal
Component Analysis (PCA). Case II introduces a graph-to-graph classification
using a Graph Neural Network (GNN) model, enabling the differentiation between
normal and Trojan-infected circuit designs. Case III applies GNN-based node
classification to identify individual compromised nodes and their locations.
Additionally, a nearest neighbor (NN) method has been combined with the GNN
graph-to-graph approach in Case II and the GNN node-to-node approach in Case
III. Despite the potential of GNN graph-to-graph classification, the NN
approach demonstrated superior performance, with the first nearest neighbor
(1st NN) achieving 73.2% accuracy and the second nearest neighbor (2nd NN)
reaching 97.7%; in comparison, the GNN model achieved an accuracy of 62.8%.
Similarly, for GNN node-to-node classification, the NN approach demonstrated
superior performance, with the 1st NN achieving 93% accuracy and the 2nd NN
reaching 97.7%, against 79.8% for the GNN model. However, relying on ever
higher-order nearest neighbors leads to large code coverage requirements for
identifying HTs.
|
2501.16348
|
An Integrated Approach to AI-Generated Content in e-health
|
cs.LG cs.AI
|
Artificial Intelligence-Generated Content, a subset of Generative Artificial
Intelligence, holds significant potential for advancing the e-health sector by
generating diverse forms of data. In this paper, we propose an end-to-end
class-conditioned framework that addresses the challenge of data scarcity in
health applications by generating synthetic medical images and text data,
evaluating on practical applications such as retinopathy detection, skin
infections and mental health assessments. Our framework integrates Diffusion
and Large Language Models (LLMs) to generate data that closely match real-world
patterns, which is essential for improving downstream task performance and
model robustness in e-health applications. Experimental results demonstrate
that the synthetic images produced by the proposed diffusion model outperform
traditional GAN architectures. Similarly, in the text modality, data generated
by an uncensored LLM achieves significantly better alignment with real-world data
than censored models in replicating the authentic tone.
|
2501.16349
|
Risk-Informed Diffusion Transformer for Long-Tail Trajectory Prediction
in the Crash Scenario
|
cs.LG cs.AI
|
Trajectory prediction methods have been widely applied in autonomous driving
technologies. Although the overall performance accuracy of trajectory
prediction is relatively high, the lack of trajectory data in critical
scenarios in the training data leads to the long-tail phenomenon. Normally, the
trajectories of the tail data are more critical and more difficult to predict
and may include rare scenarios such as crashes. To solve this problem, we
extracted the trajectory data from real-world crash scenarios, which contain
more long-tail data. Meanwhile, based on the trajectory data in this scenario,
we integrated graph-based risk information and diffusion with a transformer
and proposed the Risk-Informed Diffusion Transformer (RI-DiT) trajectory
prediction method. Extensive experiments were conducted on trajectory data
from the real-world crash scenario, and the results show that the proposed
algorithm performs well. When predicting the tail 10% of the data (Top 10%),
the minADE and minFDE indicators are 0.016 m and 2.667 m, respectively. We
also examined trajectories across different parts of the long-tail
distribution: the closer the trajectory data lie to the tail, the less smooth
the trajectories are. Through the trajectory data in real-world crash
scenarios, our work expands the
methods to overcome the long-tail challenges in trajectory prediction. Our
method, RI-DiT, integrates inverse time to collision (ITTC) and the feature of
traffic flow, which can predict long-tail trajectories more accurately and
improve the safety of autonomous driving systems.
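Inverse time-to-collision, one of the risk features mentioned above, is straightforward to compute; this small function is a generic definition, with the follower/leader convention and the guard for non-closing gaps as assumptions.

```python
def inverse_ttc(gap_m: float, v_follower: float, v_leader: float) -> float:
    """Inverse time-to-collision (1/s) between a follower and a leader.

    gap_m: bumper-to-bumper distance (m); v_*: speeds (m/s).
    Returns 0.0 when the follower is not closing in (no collision course).
    """
    closing_speed = v_follower - v_leader
    if closing_speed <= 0 or gap_m <= 0:
        return 0.0
    return closing_speed / gap_m   # larger values = higher risk

print(inverse_ttc(gap_m=20.0, v_follower=15.0, v_leader=10.0))  # 0.25 1/s
```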
|
2501.16350
|
A Method for Multi-Hop Question Answering on Persian Knowledge Graph
|
cs.IR cs.AI cs.CL
|
Question answering systems are the latest evolution in information retrieval
technology, designed to accept complex queries in natural language and provide
accurate answers using both unstructured and structured knowledge sources.
Knowledge Graph Question Answering (KGQA) systems fulfill users' information
needs by utilizing structured data, representing a vast number of facts as a
graph. However, despite significant advancements, major challenges persist in
answering multi-hop complex questions, particularly in Persian. One of the main
challenges is the accurate understanding and transformation of these multi-hop
complex questions into semantically equivalent SPARQL queries, which allows for
precise answer retrieval from knowledge graphs. In this study, to address this
issue, a dataset of 5,600 Persian multi-hop complex questions was developed,
along with their decomposed forms based on the semantic representation of the
questions. Following this, Persian language models were trained using this
dataset, and an architecture was proposed for answering complex questions using
a Persian knowledge graph. Finally, the proposed method was evaluated against
similar systems on the PeCoQ dataset. The results demonstrated the superiority
of our approach, with an improvement of 12.57% in F1-score and 12.06% in
accuracy compared to the best comparable method.
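To illustrate the target of the transformation step, here is a hedged example of a multi-hop question mapped to a SPARQL query; the entity and property IRIs are hypothetical placeholders, not drawn from the paper's knowledge graph.

```python
# Multi-hop question: "What is the capital of the country where the author
# of the Shahnameh was born?"  All IRIs below are hypothetical.
query = """
PREFIX ex: <http://example.org/>
SELECT ?capital WHERE {
  ex:Shahnameh ex:author       ?author .   # hop 1: work -> author
  ?author      ex:birthCountry ?country .  # hop 2: author -> country
  ?country     ex:capital      ?capital .  # hop 3: country -> capital
}
"""
print(query)
```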
|
2501.16352
|
Mixture of Experts (MoE): A Big Data Perspective
|
cs.LG cs.AI
|
As the era of big data arrives, traditional artificial intelligence
algorithms have difficulty processing the demands of massive and diverse data.
Mixture of experts (MoE) has shown excellent performance and broad application
prospects. This paper provides an in-depth review and analysis of the latest
progress in this field from multiple perspectives, including the basic
principles, algorithmic models, key technical challenges, and application
practices of MoE. First, we introduce the basic concept of MoE and its core
idea and elaborate on its advantages over traditional single models. Then, we
discuss the basic architecture of MoE and its main components, including the
gating network, expert networks, and learning algorithms. Next, we review the
applications of MoE in addressing key technical issues in big data. For each
challenge, we provide specific MoE solutions and their innovations.
Furthermore, we summarize the typical use cases of MoE in various application
domains. This fully demonstrates the powerful capability of MoE in big data
processing. We also analyze the advantages of MoE in big data environments.
Finally, we explore the future development trends of MoE. We believe that MoE
will become an important paradigm of artificial intelligence in the era of big
data. In summary, this paper systematically elaborates on the principles,
techniques, and applications of MoE in big data processing, providing
theoretical and practical references to further promote the application of MoE
in real scenarios.
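A minimal sketch of the gating-network-plus-experts architecture described above; two-layer experts are omitted and the dense (non-sparse) softmax routing is a simplifying assumption.

```python
import torch
import torch.nn as nn

class TinyMoE(nn.Module):
    """Dense mixture of experts: output = sum_i gate_i(x) * expert_i(x)."""
    def __init__(self, d_in=16, d_out=8, n_experts=4):
        super().__init__()
        self.experts = nn.ModuleList(
            nn.Linear(d_in, d_out) for _ in range(n_experts))
        self.gate = nn.Linear(d_in, n_experts)   # gating network

    def forward(self, x):
        weights = torch.softmax(self.gate(x), dim=-1)              # (b, n)
        outs = torch.stack([e(x) for e in self.experts], dim=-1)   # (b, d, n)
        return (outs * weights.unsqueeze(1)).sum(dim=-1)           # (b, d)

moe = TinyMoE()
print(moe(torch.randn(4, 16)).shape)   # -> torch.Size([4, 8])
```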
|
2501.16353
|
Synthetic Data Generation by Supervised Neural Gas Network for
Physiological Emotion Recognition Data
|
cs.NE cs.AI cs.LG eess.SP
|
Data scarcity remains a significant challenge in the field of emotion
recognition using physiological signals, as acquiring comprehensive and diverse
datasets is often hindered by privacy concerns and logistical constraints.
This limitation restricts the development and generalization of robust emotion
recognition models, making the need for effective synthetic data generation
methods more critical. Emotion recognition from physiological signals such as
EEG, ECG, and GSR plays a pivotal role in enhancing human-computer interaction
and understanding human affective states. Utilizing these signals, this study
introduces an innovative approach to synthetic data generation using a
Supervised Neural Gas (SNG) network, which has demonstrated noteworthy speed
advantages over established models like Conditional VAE, Conditional GAN,
diffusion model, and Variational LSTM. The Neural Gas network, known for its
adaptability in organizing data based on topological and feature-space
proximity, provides a robust framework for generating real-world-like synthetic
datasets that preserve the intrinsic patterns of physiological emotion data.
Our implementation of the SNG efficiently processes the input data, creating
synthetic instances that closely mimic the original data distributions, as
demonstrated through comparative accuracy assessments. In experiments, while
our approach did not universally outperform all models, it achieved superior
performance against most of the evaluated models and offered significant
improvements in processing time. These outcomes underscore the potential of
using SNG networks for fast, efficient, and effective synthetic data generation
in emotion recognition applications.
|
2501.16354
|
Adaptive Hoeffding Tree with Transfer Learning for Streaming
Synchrophasor Data Sets
|
cs.LG cs.AI
|
Synchrophasor technology or phasor measurement units (PMUs) are known to
detect multiple types of oscillations or faults better than Supervisory
Control and Data Acquisition (SCADA) systems, but the volume of big data
(e.g., 30-120 samples per second on a single PMU) generated by these sensors
at the aggregator level (e.g., several PMUs) requires special handling.
Conventional machine learning or data mining methods are not suitable for such
large volumes of streaming real-time data. This is primarily due to latencies
associated with cloud environments (e.g., at an aggregator or PDC level), and
thus necessitates local computing that moves processing to the edge (or
locally to the PMU level). This requires faster real-time streaming algorithms
that can run at the local level (e.g., typically on Field Programmable Gate
Array (FPGA)-based controllers). This paper proposes a transfer learning-based
Hoeffding tree with ADWIN (THAT) method to detect anomalous synchrophasor
signatures. The proposed algorithm is trained and tested against the OzaBag
method. The preliminary results with transfer learning indicate that a
computational time saving of 0.7 ms is achieved with the THAT algorithm (0.34
ms) over OzaBag (1.04 ms), while the accuracy of both methods in detecting
fault events remains at 94% for four signatures.
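A minimal sketch of the core building blocks named above, using the `river` online-learning library (its API is assumed here; THAT's transfer learning step and PMU feature pipeline are not shown).

```python
from river import drift, tree

model = tree.HoeffdingTreeClassifier()   # incremental decision tree
adwin = drift.ADWIN()                    # adaptive windowing drift detector

def learn_one(x: dict, y: bool):
    """Update the tree on one streaming sample and watch for drift."""
    y_pred = model.predict_one(x)
    adwin.update(int(y_pred != y) if y_pred is not None else 0)
    model.learn_one(x, y)
    if adwin.drift_detected:
        print("distribution change detected; consider adapting the model")

# Toy synchrophasor-like sample: per-phasor magnitude and frequency features.
learn_one({"v_mag": 1.01, "freq": 59.98}, y=False)
```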
|
2501.16355
|
How Strategic Agents Respond: Comparing Analytical Models with
LLM-Generated Responses in Strategic Classification
|
cs.LG cs.AI
|
When machine learning (ML) algorithms are used to automate human-related
decisions, human agents may gain knowledge of the decision policy and behave
strategically to obtain desirable outcomes. Strategic Classification (SC) has
been proposed to address the interplay between agents and decision-makers.
Prior work on SC has relied on assumptions that agents are perfectly or
approximately rational, responding to decision policies by maximizing their
utilities. Verifying these assumptions is challenging due to the difficulty of
collecting real-world agent responses. Meanwhile, the growing adoption of large
language models (LLMs) makes it increasingly likely that human agents in SC
settings will seek advice from these tools. We propose using strategic advice
generated by LLMs to simulate human agent responses in SC. Specifically, we
examine five critical SC scenarios -- hiring, loan applications, school
admissions, personal income, and public assistance programs -- and simulate how
human agents with diverse profiles seek advice from LLMs. We then compare the
resulting agent responses with the best responses generated by existing
theoretical models. Our findings reveal that: (i) LLMs and theoretical models
generally lead to agent score or qualification changes in the same direction
across most settings, with both achieving similar levels of fairness; (ii)
state-of-the-art commercial LLMs (e.g., GPT-3.5, GPT-4) consistently provide
helpful suggestions, though these suggestions typically do not result in
maximal score or qualification improvements; and (iii) LLMs tend to produce
more diverse agent responses, often favoring more balanced effort allocation
strategies. These results suggest that theoretical models align with LLMs to
some extent and that leveraging LLMs to simulate more realistic agent responses
offers a promising approach to designing trustworthy ML systems.
|
2501.16356
|
Evaluating Binary Decision Biases in Large Language Models: Implications
for Fair Agent-Based Financial Simulations
|
cs.LG cs.AI
|
Large Language Models (LLMs) are increasingly being used to simulate
human-like decision making in agent-based financial market models (ABMs). As
models become more powerful and accessible, researchers can now incorporate
individual LLM decisions into ABM environments. However, integration may
introduce inherent biases that need careful evaluation. In this paper we test
three state-of-the-art GPT models for bias using two model sampling approaches:
one-shot and few-shot API queries. We observe significant variations in
distributions of outputs between specific models and model sub-versions, with
GPT-4o-Mini-2024-07-18 showing notably better performance (32-43% yes
responses) compared to GPT-4-0125-preview's extreme bias (98-99% yes
responses). We show that sampling methods and model sub-versions significantly
impact results: repeated independent API calls produce different distributions
compared to batch sampling within a single call. While no current GPT model can
simultaneously achieve a uniform distribution and Markovian properties in
one-shot testing, few-shot sampling can approach uniform distributions under
certain conditions. We explore the Temperature parameter, providing a
definition and comparative results. We further compare our results to true
random binary series and test specifically for the common human bias of
Negative Recency - finding LLMs have a mixed ability to 'beat' humans in this
one regard. These findings emphasise the critical importance of careful LLM
integration into ABMs for financial markets and more broadly.
|
2501.16357
|
EVolutionary Independent DEtermiNistiC Explanation
|
cs.LG cs.AI eess.SP
|
The widespread use of artificial intelligence deep neural networks in fields
such as medicine and engineering necessitates understanding their
decision-making processes. Current explainability methods often produce
inconsistent results and struggle to highlight essential signals influencing
model inferences. This paper introduces the Evolutionary Independent
Deterministic Explanation (EVIDENCE) theory, a novel approach offering a
deterministic, model-independent method for extracting significant signals from
black-box models. EVIDENCE theory, grounded in robust mathematical
formalization, is validated through empirical tests on diverse datasets,
including COVID-19 audio diagnostics, Parkinson's disease voice recordings, and
the George Tzanetakis music classification dataset (GTZAN). Practical
applications of EVIDENCE include improving diagnostic accuracy in healthcare
and enhancing audio signal analysis. For instance, in the COVID-19 use case,
EVIDENCE-filtered spectrograms fed into a frozen Residual Network with 50
layers improved precision by 32% for positive cases and increased the area
under the curve (AUC) by 16% compared to baseline models. For Parkinson's
disease classification, EVIDENCE achieved near-perfect precision and
sensitivity, with a macro-average F1-score of 0.997. On GTZAN, EVIDENCE
maintained a high AUC of 0.996, demonstrating its efficacy in filtering
relevant features for accurate genre classification. EVIDENCE outperformed
other Explainable Artificial Intelligence (XAI) methods such as LIME, SHAP, and
GradCAM in almost all metrics. These findings indicate that EVIDENCE not only
improves classification accuracy but also provides a transparent and
reproducible explanation mechanism, crucial for advancing the trustworthiness
and applicability of AI systems in real-world settings.
|
2501.16358
|
The OpenLAM Challenges
|
cs.LG cond-mat.mtrl-sci physics.comp-ph
|
Inspired by the success of Large Language Models (LLMs), the development of
Large Atom Models (LAMs) has gained significant momentum in scientific
computation. Since 2022, the Deep Potential team has been actively pretraining
LAMs and launched the OpenLAM Initiative to develop an open-source foundation
model spanning the periodic table. A core objective is establishing
comprehensive benchmarks for reliable LAM evaluation, addressing limitations in
existing datasets. As a first step, the LAM Crystal Philately competition has
collected over 19.8 million valid structures, including 1 million on the
OpenLAM convex hull, driving advancements in generative modeling and materials
science applications.
|
2501.16360
|
Momentum Contrastive Learning with Enhanced Negative Sampling and Hard
Negative Filtering
|
cs.LG cs.AI
|
Contrastive learning has become pivotal in unsupervised representation
learning, with frameworks like Momentum Contrast (MoCo) effectively utilizing
large negative sample sets to extract discriminative features. However,
traditional approaches often overlook the full potential of key embeddings and
are susceptible to performance degradation from noisy negative samples in the
memory bank. This study addresses these challenges by proposing an enhanced
contrastive learning framework that incorporates two key innovations. First, we
introduce a dual-view loss function, which ensures balanced optimization of
both query and key embeddings, improving representation quality. Second, we
develop a selective negative sampling strategy that emphasizes the most
challenging negatives based on cosine similarity, mitigating the impact of
noise and enhancing feature discrimination. Extensive experiments demonstrate
that our framework achieves superior performance on downstream tasks,
delivering robust and well-structured representations. These results highlight
the potential of optimized contrastive mechanisms to advance unsupervised
learning and extend its applicability across domains such as computer vision
and natural language processing.
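A small sketch of the hard-negative selection idea: rank memory-bank negatives by cosine similarity to the query and keep only the hardest ones for the InfoNCE logits. The bank size, `k`, and temperature are assumptions, not the paper's exact setup.

```python
import torch
import torch.nn.functional as F

def hard_negative_logits(q, pos, bank, k=256, temperature=0.2):
    """InfoNCE logits using only the k most similar (hardest) negatives."""
    q = F.normalize(q, dim=-1)                 # (batch, dim) queries
    pos = F.normalize(pos, dim=-1)             # (batch, dim) positive keys
    bank = F.normalize(bank, dim=-1)           # (n_bank, dim) memory bank
    sims = q @ bank.T                          # (batch, n_bank)
    hard = sims.topk(k, dim=-1).values         # keep hardest negatives only
    l_pos = (q * pos).sum(-1, keepdim=True)    # (batch, 1)
    return torch.cat([l_pos, hard], dim=1) / temperature  # label 0 = positive

q, pos = torch.randn(8, 128), torch.randn(8, 128)
bank = torch.randn(4096, 128)
logits = hard_negative_logits(q, pos, bank)
loss = F.cross_entropy(logits, torch.zeros(8, dtype=torch.long))
print(loss)
```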
|
2501.16361
|
Large Language Models Meet Graph Neural Networks for Text-Numeric Graph
Reasoning
|
cs.LG cs.AI
|
In real-world scientific discovery, human beings make use of accumulated
prior knowledge, together with imagination, to select one or a few of the most
promising hypotheses from large and noisy data analysis results. In this
study, we introduce a new type of graph structure, the text-numeric graph
(TNG), in which graph entities and associations carry both text-attributed
information and numeric information. The TNG is an ideal data structure model
for novel scientific discovery via graph reasoning because it integrates
human-understandable textual annotations or prior knowledge with numeric
values that represent the observed or activation levels of graph entities or
associations in different samples. Together, the textual information and
numeric values determine the importance of graph entities and associations in
graph reasoning for novel scientific knowledge discovery. We further propose
integrating large language models (LLMs) and graph neural networks (GNNs) to
analyze TNGs for graph understanding and reasoning. To demonstrate the
utility, we generated text-omic (numeric) signaling graphs (TOSGs), one type
of TNG, in which all graphs share the same entities, associations, and
annotations but have sample-specific entity numeric (omic) values derived from
single-cell RNA-seq (scRNA-seq) datasets of different diseases. We propose
joint LLM-GNN models for key entity mining and signaling pathway mining on the
TOSGs. The evaluation results showed that the TNGs and joint LLM-GNN models
significantly improve classification accuracy and network inference. In
conclusion, TNGs and joint LLM-GNN models are important approaches for
scientific discovery.
|
2501.16362
|
A novel Trunk Branch-net PINN for flow and heat transfer prediction in
porous medium
|
cs.LG physics.flu-dyn
|
A novel Trunk-Branch (TB)-net physics-informed neural network (PINN)
architecture is developed, which is a PINN-based method incorporating trunk and
branch nets to capture both global and local features. The aim is to solve
four main classes of problems within the porous medium: the forward flow
problem, the forward heat transfer problem, the inverse heat transfer problem,
and the transfer learning problem, which are notoriously complex and cannot be
handled by the original PINN. In the proposed TB-net PINN architecture, a
fully-connected neural network (FNN) is used as the trunk net, followed by
separate FNNs as the branch nets for the respective outputs, and automatic
differentiation is performed for partial derivatives of outputs with respect
to inputs under various physical losses. The effectiveness and flexibility of
the novel TB-net PINN architecture are demonstrated through a collection of
forward problems, and transfer learning validates the feasibility of resource
reuse. Combined with its superiority over traditional numerical methods in
solving inverse problems, the proposed TB-net PINN shows great potential for
practical engineering applications.
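A minimal sketch of the trunk-plus-branches layout described above: one shared trunk FNN feeding separate branch FNNs, one per output field. The layer widths and the choice of two outputs (e.g., velocity and temperature) are illustrative assumptions.

```python
import torch
import torch.nn as nn

class TrunkBranchNet(nn.Module):
    """Shared trunk FNN with one branch FNN per output field."""
    def __init__(self, d_in=2, d_hidden=64, n_outputs=2):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Linear(d_in, d_hidden), nn.Tanh(),
            nn.Linear(d_hidden, d_hidden), nn.Tanh())
        self.branches = nn.ModuleList(
            nn.Sequential(nn.Linear(d_hidden, d_hidden), nn.Tanh(),
                          nn.Linear(d_hidden, 1))
            for _ in range(n_outputs))

    def forward(self, xy):                       # xy: (batch, 2) coordinates
        h = self.trunk(xy)                       # shared global features
        return torch.cat([b(h) for b in self.branches], dim=1)

net = TrunkBranchNet()
xy = torch.rand(128, 2, requires_grad=True)      # collocation points
u = net(xy)                                      # e.g., (velocity, temperature)
# PDE residuals would be built from autograd derivatives of u w.r.t. xy.
du = torch.autograd.grad(u[:, 0].sum(), xy, create_graph=True)[0]
print(u.shape, du.shape)                         # (128, 2) (128, 2)
```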
|
2501.16364
|
Multivariate Time Series Anomaly Detection by Capturing Coarse-Grained
Intra- and Inter-Variate Dependencies
|
cs.LG cs.AI
|
Multivariate time series anomaly detection is essential for failure
management in web application operations, as it directly influences the
effectiveness and timeliness of implementing remedial or preventive measures.
This task is often framed as a semi-supervised learning problem, where only
normal data are available for model training, primarily due to the
labor-intensive nature of data labeling and the scarcity of anomalous data.
Existing semi-supervised methods often detect anomalies by capturing
intra-variate temporal dependencies and/or inter-variate relationships to learn
normal patterns, flagging timestamps that deviate from these patterns as
anomalies. However, these approaches often fail to capture salient
intra-variate temporal and inter-variate dependencies in time series due to
their focus on excessively fine granularity, leading to suboptimal performance.
In this study, we introduce MtsCID, a novel semi-supervised multivariate time
series anomaly detection method. MtsCID employs a dual network architecture:
one network operates on the attention maps of multi-scale intra-variate patches
for coarse-grained temporal dependency learning, while the other works on
variates to capture coarse-grained inter-variate relationships through
convolution and interaction with sinusoidal prototypes. This design enhances
the ability to capture the patterns from both intra-variate temporal
dependencies and inter-variate relationships, resulting in improved
performance. Extensive experiments across seven widely used datasets
demonstrate that MtsCID achieves performance comparable or superior to
state-of-the-art benchmark methods.
|
2501.16365
|
CAND: Cross-Domain Ambiguity Inference for Early Detecting Nuanced
Illness Deterioration
|
cs.LG cs.AI
|
Early detection of patient deterioration is essential for timely treatment,
with vital signs like heart rates being key health indicators. Existing methods
tend to solely analyze vital sign waveforms, ignoring transition relationships
of waveforms within each vital sign and the correlation strengths among various
vital signs. Such studies often overlook nuanced illness deterioration, which
is the early sign of worsening health but is difficult to detect. In this
paper, we introduce CAND, a novel method that organizes the transition
relationships and the correlations within and among vital signs as
domain-specific and cross-domain knowledge. CAND jointly models this knowledge
in a unified representation space, considerably enhancing the early detection
of nuanced illness deterioration. In addition, CAND integrates a Bayesian
inference method that utilizes augmented knowledge from domain-specific and
cross-domain knowledge to address the ambiguities in correlation strengths.
With this architecture, the correlation strengths can be effectively inferred
to guide joint modeling and enhance representations of vital signs. This allows
a more holistic and accurate interpretation of patient health. Our experiments
on a real-world ICU dataset demonstrate that CAND significantly outperforms
existing methods in both effectiveness and earliness in detecting nuanced
illness deterioration. Moreover, we conduct a case study for the interpretable
detection process to showcase the practicality of CAND.
|
2501.16368
|
Foundation Models for CPS-IoT: Opportunities and Challenges
|
cs.LG cs.AI cs.SY eess.SY
|
Methods from machine learning (ML) have transformed the implementation of
Perception-Cognition-Communication-Action loops in Cyber-Physical Systems (CPS)
and the Internet of Things (IoT), replacing mechanistic and basic statistical
models with those derived from data. However, the first generation of ML
approaches, which depend on supervised learning with annotated data to create
task-specific models, faces significant limitations in scaling to the diverse
sensor modalities, deployment configurations, application tasks, and operating
dynamics characterizing real-world CPS-IoT systems. The success of
task-agnostic foundation models (FMs), including multimodal large language
models (LLMs), in addressing similar challenges across natural language,
computer vision, and human speech has generated considerable enthusiasm for and
exploration of FMs and LLMs as flexible building blocks in CPS-IoT analytics
pipelines, promising to reduce the need for costly task-specific engineering.
Nonetheless, a significant gap persists between the current capabilities of
FMs and LLMs in the CPS-IoT domain and the requirements they must meet to be
viable for CPS-IoT applications. In this paper, we analyze and characterize
this gap through a thorough examination of the state of the art and our
research, which extends beyond it in various dimensions. Based on the results
of our analysis and research, we identify essential desiderata that CPS-IoT
domain-specific FMs and LLMs must satisfy to bridge this gap. We also propose
actions by CPS-IoT researchers to collaborate in developing key community
resources necessary for establishing FMs and LLMs as foundational tools for the
next generation of CPS-IoT systems.
|
2501.16369
|
Blockchain-based Crowdsourced Deep Reinforcement Learning as a Service
|
cs.LG cs.AI
|
Deep Reinforcement Learning (DRL) has emerged as a powerful paradigm for
solving complex problems. However, its full potential remains inaccessible to a
broader audience due to its complexity, which requires expertise in training
and designing DRL solutions, high computational capabilities, and sometimes
access to pre-trained models. This creates a need for hassle-free
services that increase the availability of DRL solutions to a variety of users.
To enhance the accessibility to DRL services, this paper proposes a novel
blockchain-based crowdsourced DRL as a Service (DRLaaS) framework. The
framework provides DRL-related services to users, covering two types of tasks:
DRL training and model sharing. Through crowdsourcing, users could benefit from
the expertise and computational capabilities of workers to train DRL solutions.
Model sharing could help users gain access to pre-trained models, shared by
workers in return for incentives, which can help train new DRL solutions using
methods in knowledge transfer. The DRLaaS framework is built on top of a
Consortium Blockchain to enable traceable and autonomous execution. Smart
Contracts are designed to manage worker and model allocation, which are stored
using the InterPlanetary File System (IPFS) to ensure tamper-proof data
distribution. The framework is tested on several DRL applications, proving its
efficacy.
|
2501.16370
|
Advanced Physics-Informed Neural Network with Residuals for Solving
Complex Integral Equations
|
cs.LG cs.AI cs.NA cs.NE math.NA
|
In this paper, we present the Residual Integral Solver Network (RISN), a
novel neural network architecture designed to solve a wide range of integral
and integro-differential equations, including one-dimensional,
multi-dimensional, ordinary and partial integro-differential, systems, and
fractional types. RISN integrates residual connections with high-accuracy
numerical methods such as Gaussian quadrature and fractional derivative
operational matrices, enabling it to achieve higher accuracy and stability than
traditional Physics-Informed Neural Networks (PINN). The residual connections
help mitigate vanishing gradient issues, allowing RISN to handle deeper
networks and more complex kernels, particularly in multi-dimensional problems.
Through extensive experiments, we demonstrate that RISN consistently
outperforms PINN, achieving significantly lower Mean Absolute Errors (MAE)
across various types of equations. The results highlight RISN's robustness and
efficiency in solving challenging integral and integro-differential problems,
making it a valuable tool for real-world applications where traditional methods
often struggle.
|
2501.16371
|
Which Optimizer Works Best for Physics-Informed Neural Networks and
Kolmogorov-Arnold Networks?
|
cs.LG cs.AI math.OC
|
Physics-Informed Neural Networks (PINNs) have revolutionized the computation
of PDE solutions by integrating partial differential equations (PDEs) into the
neural network's training process as soft constraints, becoming an important
component of the scientific machine learning (SciML) ecosystem. In its current
implementation, PINNs are mainly optimized using first-order methods like Adam,
as well as quasi-Newton methods such as BFGS and its low-memory variant,
L-BFGS. However, these optimizers often struggle with highly non-linear and
non-convex loss landscapes, leading to challenges such as slow convergence,
local minima entrapment, and (non)degenerate saddle points. In this study, we
investigate the performance of Self-Scaled Broyden (SSBroyden) methods and
other advanced quasi-Newton schemes, including BFGS and L-BFGS with different
line search strategies. These methods dynamically rescale updates
based on historical gradient information, thus enhancing training efficiency
and accuracy. We systematically compare these optimizers on key challenging
linear, stiff, multi-scale and non-linear PDEs benchmarks, including the
Burgers, Allen-Cahn, Kuramoto-Sivashinsky, and Ginzburg-Landau equations, and
extend our study to Physics-Informed Kolmogorov-Arnold Networks (PIKANs)
representation. Our findings provide insights into the effectiveness of
second-order optimization strategies in improving the convergence and accurate
generalization of PINNs for complex PDEs by orders of magnitude compared to the
state-of-the-art.
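A common two-stage pattern related to the comparison above: optimize a PINN-style loss first with Adam, then refine with a quasi-Newton method. This sketch uses PyTorch's built-in L-BFGS on a toy least-squares loss; SSBroyden and the paper's PDE benchmarks are not shown.

```python
import torch

# Toy stand-in for a PINN residual loss: fit y = sin(3x) with a tiny MLP.
x = torch.linspace(-1, 1, 200).unsqueeze(1)
y = torch.sin(3 * x)
net = torch.nn.Sequential(torch.nn.Linear(1, 32), torch.nn.Tanh(),
                          torch.nn.Linear(32, 1))
loss_fn = lambda: ((net(x) - y) ** 2).mean()

# Stage 1: Adam to get into a good basin.
adam = torch.optim.Adam(net.parameters(), lr=1e-3)
for _ in range(500):
    adam.zero_grad()
    loss = loss_fn()
    loss.backward()
    adam.step()

# Stage 2: L-BFGS (quasi-Newton) for fast local convergence.
lbfgs = torch.optim.LBFGS(net.parameters(), max_iter=200,
                          line_search_fn="strong_wolfe")
def closure():
    lbfgs.zero_grad()
    loss = loss_fn()
    loss.backward()
    return loss
lbfgs.step(closure)
print("final loss:", loss_fn().item())
```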
|
2501.16372
|
Low-Rank Adapters Meet Neural Architecture Search for LLM Compression
|
cs.LG cs.AI cs.CL
|
The rapid expansion of Large Language Models (LLMs) has posed significant
challenges regarding the computational resources required for fine-tuning and
deployment. Recent advancements in low-rank adapters have demonstrated their
efficacy in parameter-efficient fine-tuning (PEFT) of these models. This
retrospective paper comprehensively discusses innovative approaches that
synergize low-rank representations with Neural Architecture Search (NAS)
techniques, particularly weight-sharing super-networks. Robust solutions for
compressing and fine-tuning large pre-trained models are developed by
integrating these methodologies. Our analysis highlights the potential of these
combined strategies to democratize the use of LLMs, making them more accessible
for deployment in resource-constrained environments. The resulting models
exhibit reduced memory footprints and faster inference times, paving the way
for more practical and scalable applications of LLMs. Models and code are
available at
https://github.com/IntelLabs/Hardware-Aware-Automated-Machine-Learning.
|
2501.16373
|
Unveiling Discrete Clues: Superior Healthcare Predictions for Rare
Diseases
|
cs.LG cs.AI cs.CE
|
Accurate healthcare prediction is essential for improving patient outcomes.
Existing work primarily leverages advanced frameworks like attention or graph
networks to capture the intricate collaborative (CO) signals in electronic
health records. However, prediction for rare diseases remains challenging due
to limited co-occurrence and inadequately tailored approaches. To address this
issue, this paper proposes UDC, a novel method that unveils discrete clues to
bridge consistent textual knowledge and CO signals within a unified semantic
space, thereby enriching the representation semantics of rare diseases.
Specifically, we focus on addressing two key sub-problems: (1) acquiring
distinguishable discrete encodings for precise disease representation and (2)
achieving semantic alignment between textual knowledge and the CO signals at
the code level. For the first sub-problem, we refine the standard vector
quantized process to include condition awareness. Additionally, we develop an
advanced contrastive approach in the decoding stage, leveraging synthetic and
mixed-domain targets as hard negatives to enrich the perceptibility of the
reconstructed representation for downstream tasks. For the second sub-problem,
we introduce a novel codebook update strategy using co-teacher distillation.
This approach facilitates bidirectional supervision between textual knowledge
and CO signals, thereby aligning semantically equivalent information in a
shared discrete latent space. Extensive experiments on three datasets
demonstrate the superiority of our approach.
|
2501.16374
|
SAFR: Neuron Redistribution for Interpretability
|
cs.LG cs.AI
|
Superposition refers to encoding representations of multiple features within
a single neuron, which is common in deep neural networks. This property allows
neurons to combine and represent multiple features, enabling the model to
capture intricate information and handle complex tasks. Despite promising
performance, the model's interpretability has been diminished. This paper
presents a novel approach to enhance model interpretability by regularizing
feature superposition. We introduce SAFR, which simply applies regularizations
to the loss function to promote monosemantic representations for important
tokens while encouraging polysemanticity for correlated token pairs, where
important tokens and correlated token pairs are identified via VMASK and
attention weights respectively. We evaluate SAFR with a transformer model on
two classification tasks. Experiments demonstrate the effectiveness of SAFR in
improving model interpretability without compromising prediction performance.
In addition, SAFR provides explanations by visualizing the neuron allocation within
the intermediate layers.
|
2501.16375
|
On Storage Neural Network Augmented Approximate Nearest Neighbor Search
|
cs.LG cs.AI cs.IR
|
Large-scale approximate nearest neighbor search (ANN) has been gaining
attention along with the latest machine learning research employing ANNs. If
the data is too large to fit in memory, the most similar vectors to a given
query vector must be retrieved from data stored on storage devices rather than
in memory. Storage devices such as NAND flash memory offer larger capacity
than memory devices such as DRAM, but they also incur higher read latency.
Therefore, ANN methods for storage require completely different approaches
from conventional in-memory ANN methods. Since, under reasonable assumptions,
the search time is determined only by the amount of data fetched from storage,
our goal is to minimize that amount while maximizing recall. For
partitioning-based ANNs,
vectors are partitioned into clusters in the index building phase. In the
search phase, some of the clusters are chosen, the vectors in the chosen
clusters are fetched from storage, and the nearest vector is retrieved from the
fetched vectors. Thus, the key point is to accurately select the clusters
containing the ground truth nearest neighbor vectors. We accomplish this by
proposing a method to predict the correct clusters by means of a neural network
that is gradually refined by alternating supervised learning and duplicated
cluster assignment. Compared to state-of-the-art SPANN and an exhaustive method
using k-means clustering and linear search, the proposed method achieves 90%
recall on SIFT1M with 80% and 58% less data fetched from storage, respectively.
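
A minimal sketch of the partitioning-based search loop described above: cluster the base vectors at index-build time, train a network to predict which clusters hold a query's nearest neighbor, and fetch only the top predicted clusters at search time. The alternating refinement with duplicated cluster assignment from the paper is omitted; the data sizes and the MLP are toy stand-ins.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
base = rng.normal(size=(5000, 64)).astype(np.float32)   # stand-in for SIFT1M

# Index building: partition the base vectors into clusters.
km = KMeans(n_clusters=50, n_init=4, random_state=0).fit(base)

# Supervised step: learn to predict the cluster that holds each query's
# nearest neighbor (approximated here by each vector's own cluster label).
queries = base + 0.05 * rng.normal(size=base.shape).astype(np.float32)
clf = MLPClassifier(hidden_layer_sizes=(128,), max_iter=60, random_state=0)
clf.fit(queries, km.labels_)

# Search: fetch only the top predicted clusters from "storage".
q = queries[:1]
probs = clf.predict_proba(q)[0]
top_clusters = clf.classes_[np.argsort(probs)[-3:]]
candidates = base[np.isin(km.labels_, top_clusters)]
nearest = candidates[np.argmin(((candidates - q) ** 2).sum(axis=1))]
```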
|
2501.16376
|
HWPQ: Hessian-free Weight Pruning-Quantization For LLM Compression And
Acceleration
|
cs.LG cs.AI
|
Large Language Models (LLMs) have achieved remarkable success across numerous
domains. However, the high time complexity of existing pruning and quantization
methods significantly hinders their effective deployment on
resource-constrained consumer or edge devices. In this study, we propose a
novel Hessian-free Weight Pruning-Quantization (HWPQ) method. HWPQ eliminates
the need for computationally intensive Hessian matrix calculations by
introducing a contribution-based weight metric, which evaluates the importance
of weights without relying on second-order derivatives. Additionally, we employ
the Exponentially Weighted Moving Average (EWMA) technique to bypass weight
sorting, enabling the selection of weights that contribute most to LLM accuracy
and further reducing time complexity. Our approach is extended to support 2:4
structured sparsity pruning, facilitating efficient execution on modern
hardware accelerators. Experimental results demonstrate that HWPQ significantly
enhances the compression performance of LLaMA2. Compared to state-of-the-art
quantization and pruning frameworks, HWPQ achieves average speedups of 5.97x
(up to 20.75x) in quantization time and 12.29x (up to 56.02x) in pruning time,
while largely preserving model accuracy. Furthermore, we observe a 1.50x
inference speedup compared to the baseline.
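
The abstract does not spell out the contribution metric, so the sketch below uses weight magnitude as a stand-in. It illustrates the two ideas that reduce time complexity: an EWMA-tracked threshold that avoids a global sort, and a 2:4 structured-sparsity pass that keeps the two largest-magnitude weights in every group of four.

```python
import numpy as np

def ewma_prune(weights, sparsity=0.5, lam=0.9):
    """Sort-free pruning: track an EWMA over streamed contribution scores
    (|w| here is a stand-in metric, not HWPQ's) and nudge the threshold
    toward the target sparsity with a short correction pass."""
    scores = np.abs(weights).ravel()
    thr = scores[0]
    for s in scores:                       # streaming EWMA, no global sort
        thr = lam * thr + (1 - lam) * s
    for _ in range(20):                    # coarse threshold correction
        frac = (scores < thr).mean()
        thr *= 1.05 if frac < sparsity else 0.95
    return np.where(np.abs(weights) < thr, 0.0, weights)

def to_2_4_sparse(weights):
    """2:4 structured sparsity: zero the 2 smallest-magnitude weights in
    every group of 4 (hardware-friendly layout)."""
    w = weights.reshape(-1, 4).copy()
    drop = np.argsort(np.abs(w), axis=1)[:, :2]
    np.put_along_axis(w, drop, 0.0, axis=1)
    return w.reshape(weights.shape)

w = np.random.default_rng(0).normal(size=(8, 16))
print(ewma_prune(w))        # unstructured, sort-free
print(to_2_4_sparse(w))     # 2:4 structured
```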
|
2501.16377
|
Optimal Signal Decomposition-based Multi-Stage Learning for Battery
Health Estimation
|
cs.LG cs.AI
|
Battery health estimation is fundamental to ensure battery safety and reduce
cost. However, achieving accurate estimation has been challenging due to the
batteries' complex nonlinear aging patterns and capacity regeneration
phenomena. In this paper, we propose OSL, an optimal signal decomposition-based
multi-stage machine learning method for battery health estimation. OSL treats
the battery signals optimally: it uses optimized variational mode decomposition to extract
decomposed signals capturing different frequency bands of the original battery
signals. It also incorporates a multi-stage learning process to analyze both
spatial and temporal battery features effectively. An experimental study is
conducted with a public battery aging dataset. OSL demonstrates exceptional
performance with a mean error of just 0.26%. It significantly outperforms
comparison algorithms, both those without and those with suboptimal signal
decomposition and analysis. OSL accounts for practical battery challenges and
can be integrated into real-world battery management systems, offering
tangible benefits for battery monitoring and optimization.
|
2501.16378
|
Internal Activation Revision: Safeguarding Vision Language Models
Without Parameter Update
|
cs.LG cs.AI cs.CL cs.CV
|
Vision-language models (VLMs) demonstrate strong multimodal capabilities but
have been found to be more susceptible to generating harmful content compared
to their backbone large language models (LLMs). Our investigation reveals that
the integration of images significantly shifts the model's internal activations
during the forward pass, diverging from those triggered by textual input.
Moreover, the safety alignments of LLMs embedded within VLMs are not
sufficiently robust to handle these activation discrepancies, making the models
vulnerable to even the simplest jailbreaking attacks. To address this issue, we
propose an \textbf{internal activation revision} approach that efficiently
revises activations during generation, steering the model toward safer outputs.
Our framework incorporates revisions at both the layer and head levels,
offering control over the model's generation at varying levels of granularity.
In addition, we explore three strategies for constructing positive and negative
samples and two approaches for extracting revision vectors, resulting in
different variants of our method. Comprehensive experiments demonstrate that
the internal activation revision method significantly improves the safety of
widely used VLMs, reducing attack success rates by an average of 48.94\%,
34.34\%, 43.92\%, and 52.98\% on SafeBench, Safe-Unsafe, Unsafe, and
MM-SafetyBench, respectively, while minimally impacting model helpfulness.
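
A minimal sketch of layer-level activation revision using PyTorch forward hooks: a revision vector is extracted as the mean activation difference between positive (safe) and negative (unsafe) samples, then added to the layer's output during generation. The toy MLP, the difference-of-means extraction, and the `strength` scale are illustrative assumptions, not the paper's exact variants.

```python
import torch

def revision_vector(model, layer, pos_inputs, neg_inputs):
    """One extraction strategy (difference of mean activations between
    positive/safe and negative/unsafe samples) -- illustrative only."""
    acts = {}
    def grab(module, inputs, output):
        acts["h"] = output.detach()
    handle = layer.register_forward_hook(grab)
    with torch.no_grad():
        model(pos_inputs)
        pos = acts["h"].mean(dim=0)
        model(neg_inputs)
        neg = acts["h"].mean(dim=0)
    handle.remove()
    return pos - neg

def attach_revision(layer, v, strength=1.0):
    """Revise the layer's activations during generation by adding v."""
    return layer.register_forward_hook(lambda m, i, out: out + strength * v)

# Toy block standing in for one layer of a VLM's language backbone.
model = torch.nn.Sequential(torch.nn.Linear(16, 32), torch.nn.ReLU(),
                            torch.nn.Linear(32, 16))
v = revision_vector(model, model[1], torch.randn(8, 16), torch.randn(8, 16))
hook = attach_revision(model[1], v, strength=0.5)
steered = model(torch.randn(4, 16))
hook.remove()
```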
|
2501.16379
|
FedAGHN: Personalized Federated Learning with Attentive Graph
HyperNetworks
|
cs.LG cs.AI
|
Personalized Federated Learning (PFL) aims to address the statistical
heterogeneity of data across clients by learning the personalized model for
each client. Among various PFL approaches, the personalized aggregation-based
approach conducts parameter aggregation in the server-side aggregation phase to
generate personalized models, and focuses on learning appropriate collaborative
relationships among clients for aggregation. However, the collaborative
relationships vary in different scenarios and even at different stages of the
FL process. To this end, we propose Personalized Federated Learning with
Attentive Graph HyperNetworks (FedAGHN), which employs Attentive Graph
HyperNetworks (AGHNs) to dynamically capture fine-grained collaborative
relationships and generate client-specific personalized initial models.
Specifically, AGHNs empower graphs to explicitly model the client-specific
collaborative relationships, construct collaboration graphs, and introduce a
tunable attention mechanism to derive the collaboration weights, so that the
personalized initial models can be obtained by aggregating parameters over the
collaboration graphs. Extensive experiments demonstrate the superiority of
FedAGHN. Moreover, a series of visualizations is presented to explore the
effectiveness of collaboration graphs learned by FedAGHN.
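
The server-side step reduces to a weighted parameter average over the collaboration graph. The sketch below assumes the attention logits have already been produced by the graph hypernetwork and shows only the aggregation that yields each client's personalized initial model.

```python
import numpy as np

def personalized_aggregate(client_params, attn_logits):
    """Each client i receives an initial model that is an attention-weighted
    average over all clients' parameters.

    client_params: (C, P) flattened parameters of C clients
    attn_logits:   (C, C) collaboration-graph scores from the hypernetwork
    """
    w = np.exp(attn_logits - attn_logits.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)            # row-wise softmax weights
    return w @ client_params                     # (C, P) personalized inits

params = np.random.default_rng(0).normal(size=(4, 10))
logits = np.eye(4) * 2.0                         # favor self-collaboration
print(personalized_aggregate(params, logits))
```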
|
2501.16380
|
UDiTQC: U-Net-Style Diffusion Transformer for Quantum Circuit Synthesis
|
cs.LG cs.AI quant-ph
|
Quantum computing is a transformative technology with wide-ranging
applications, and efficient quantum circuit generation is crucial for unlocking
its full potential. Current diffusion model approaches based on U-Net
architectures, while promising, encounter challenges related to computational
efficiency and modeling global context. To address these issues, we propose
UDiTQC, a novel U-Net-style Diffusion Transformer architecture, which combines
U-Net's strengths in multi-scale feature extraction with the Transformer's
ability to model global context. We demonstrate the framework's effectiveness
on two tasks: entanglement generation and unitary compilation, where UDiTQC
consistently outperforms existing methods. Additionally, our framework supports
tasks such as masking and editing circuits to meet specific physical property
requirements. This dual advancement, improving quantum circuit synthesis and
refining generative model architectures, marks a significant milestone in the
convergence of quantum computing and machine learning research.
|
2501.16381
|
Reduced-order modeling and classification of hydrodynamic pattern
formation in gravure printing
|
cs.LG physics.flu-dyn
|
Hydrodynamic pattern formation phenomena in printing and coating processes
are still not fully understood. However, fundamental understanding is essential
to achieve high-quality printed products and to tune printed patterns according
to the needs of a specific application like printed electronics, graphical
printing, or biomedical printing. The aim of the paper is to develop an
automated pattern classification algorithm based on methods from supervised
machine learning and reduced-order modeling. We use the HYPA-p dataset, a large
image dataset of gravure-printed images, which shows various types of
hydrodynamic pattern formation phenomena. It enables the correlation of
printing process parameters and resulting printed patterns for the first time.
26,880 images of the HYPA-p dataset have been labeled by a human observer as dot
patterns, mixed patterns, or finger patterns; 864,000 images (97%) are
unlabeled. A singular value decomposition (SVD) is used to find the modes of
the labeled images and to reduce the dimensionality of the full dataset by
truncation and projection. Selected machine learning classification techniques
are trained on the reduced-order data. We investigate the effect of several
factors, including classifier choice, whether or not fast Fourier transform
(FFT) is used to preprocess the labeled images, data balancing, and data
normalization. The best performing model is a k-nearest neighbor (kNN)
classifier trained on unbalanced, FFT-transformed data with a test error of 3%,
which outperforms a human observer by 7%. Data balancing slightly increases the
test error of the kNN-model to 5%, but also increases the recall of the mixed
class from 90% to 94%. Finally, we demonstrate how the trained models can be
used to predict the pattern class of unlabeled images and how the predictions
can be correlated to the printing process parameters, in the form of regime
maps.
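
The reduced-order classification pipeline is straightforward to reproduce in outline: FFT-preprocess the images, truncate to the leading SVD modes, and train a kNN classifier on the projected coordinates. The sketch below uses synthetic stand-ins for the HYPA-p images; the component and neighbor counts are illustrative.

```python
import numpy as np
from sklearn.decomposition import TruncatedSVD
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split

# Stand-in for labeled HYPA-p images: flattened 32x32 patterns, 3 classes.
rng = np.random.default_rng(0)
X = rng.normal(size=(600, 32 * 32))
y = rng.integers(0, 3, size=600)                  # dot / mixed / finger

# Optional FFT preprocessing (magnitude spectrum), as in the paper.
X_fft = np.abs(np.fft.fft2(X.reshape(-1, 32, 32))).reshape(600, -1)

# Reduced-order model: truncate to the leading SVD modes, then project.
svd = TruncatedSVD(n_components=20, random_state=0)
Z = svd.fit_transform(X_fft)

Xtr, Xte, ytr, yte = train_test_split(Z, y, random_state=0)
knn = KNeighborsClassifier(n_neighbors=5).fit(Xtr, ytr)
print("test error:", 1 - knn.score(Xte, yte))
```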
|
2501.16382
|
GraPPI: A Retrieve-Divide-Solve GraphRAG Framework for Large-scale
Protein-protein Interaction Exploration
|
q-bio.QM cs.AI cs.LG
|
Drug discovery (DD) has tremendously contributed to maintaining and improving
public health. Hypothesizing that inhibiting protein misfolding can slow
disease progression, researchers focus on target identification (Target ID) to
find protein structures for drug binding. While Large Language Models (LLMs)
and Retrieval-Augmented Generation (RAG) frameworks have accelerated drug
discovery, integrating models into cohesive workflows remains challenging. We
conducted a user study with drug discovery researchers to identify the
applicability of LLMs and RAGs in Target ID. We identified two main findings:
1) an LLM should provide multiple Protein-Protein Interactions (PPIs) based on
an initial protein and protein candidates that have a therapeutic impact; 2)
the model must provide the PPI and relevant explanations for better
understanding. Based on these observations, we identified three limitations in
previous approaches for Target ID: 1) semantic ambiguity, 2) lack of
explainability, and 3) short retrieval units. To address these issues, we
propose GraPPI, a large-scale knowledge graph (KG)-based retrieve-divide-solve
agent pipeline RAG framework that supports large-scale PPI signaling pathway
exploration for understanding therapeutic impacts by decomposing the analysis
of entire PPI pathways into sub-tasks focused on individual PPI edges.
|
2501.16383
|
RotateKV: Accurate and Robust 2-Bit KV Cache Quantization for LLMs via
Outlier-Aware Adaptive Rotations
|
cs.LG cs.AI cs.CL
|
The Key-Value (KV) cache facilitates efficient large language model (LLM)
inference by avoiding recomputation of past KVs. As the batch size and context
length increase, the oversized KV caches become a significant memory
bottleneck, highlighting the need for efficient compression. Existing KV
quantization methods rely on fine-grained quantization or the retention of a
significant portion of high-bit-width caches, both of which compromise the
compression ratio and often fail to maintain robustness at extremely low
average bit-widths. In this work, we explore the potential of rotation
techniques for 2-bit KV quantization and propose RotateKV, which achieves
accurate and robust performance through the following innovations: (i)
Outlier-Aware Rotation, which utilizes channel-reordering to adapt the
rotations to varying channel-wise outlier distributions without sacrificing the
computational efficiency of the fast Walsh-Hadamard transform (FWHT); (ii)
Pre-RoPE Grouped-Head Rotation, which mitigates the impact of rotary position
embedding (RoPE) on the proposed outlier-aware rotation and further smooths
outliers across heads; (iii) Attention-Sink-Aware Quantization, which leverages
the massive activations to precisely identify and protect attention sinks.
RotateKV achieves less than 0.3 perplexity (PPL) degradation with 2-bit
quantization on WikiText-2 using LLaMA-2-13B, maintains strong CoT reasoning
and long-context capabilities, with less than 1.7\% degradation on GSM8K,
outperforming existing methods even at lower average bit-widths. RotateKV also
showcases a 3.97x reduction in peak memory usage, supports 5.75x larger batch
sizes, and achieves a 2.32x speedup in the decoding stage.
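
The core mechanism, rotating the cache with a fast Walsh-Hadamard transform so that channel-wise outliers are smoothed before low-bit quantization, can be sketched in a few lines. The group size, the min-max 2-bit quantizer, and the synthetic outlier below are our assumptions; RotateKV's channel reordering, per-head grouping, and sink protection are not shown.

```python
import numpy as np

def fwht(x):
    """Orthonormal fast Walsh-Hadamard transform along the last axis.
    Length must be a power of two; the transform is its own inverse."""
    x = x.astype(np.float64).copy()
    n = x.shape[-1]
    h = 1
    while h < n:
        for i in range(0, n, 2 * h):
            a = x[..., i:i + h].copy()
            b = x[..., i + h:i + 2 * h].copy()
            x[..., i:i + h] = a + b
            x[..., i + h:i + 2 * h] = a - b
        h *= 2
    return x / np.sqrt(n)

def quant2bit(x, group=16):
    """Uniform min-max 2-bit quantization (4 levels) per group of channels."""
    g = x.reshape(-1, group)
    lo, hi = g.min(1, keepdims=True), g.max(1, keepdims=True)
    scale = np.maximum((hi - lo) / 3.0, 1e-8)        # 4 levels -> 3 steps
    q = np.round((g - lo) / scale)
    return (q * scale + lo).reshape(x.shape)

k = np.random.default_rng(0).normal(size=(4, 64))    # toy key-cache rows
k[:, 0] *= 25                                        # channel-wise outlier
k_hat_rot = fwht(quant2bit(fwht(k)))                 # rotate, quantize, undo
print("MSE with rotation:   ", ((k - k_hat_rot) ** 2).mean())
print("MSE without rotation:", ((k - quant2bit(k)) ** 2).mean())
```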
|
2501.16384
|
MambaTron: Efficient Cross-Modal Point Cloud Enhancement using Aggregate
Selective State Space Modeling
|
eess.SP cs.LG
|
Point cloud enhancement is the process of generating a high-quality point
cloud from an incomplete input. This is done by filling in the missing details
from a reference like the ground truth via regression, for example. In addition
to unimodal image and point cloud reconstruction, we focus on the task of
view-guided point cloud completion, where we gather the missing information
from an image, which represents a view of the point cloud and use it to
generate the output point cloud. With the recent research efforts surrounding
state-space models, originally in natural language processing and now in 2D and
3D vision, Mamba has shown promising results as an efficient alternative to the
self-attention mechanism. However, there is limited research towards employing
Mamba for cross-attention between the image and the input point cloud, which is
crucial in multi-modal problems. In this paper, we introduce MambaTron, a
Mamba-Transformer cell that serves as a building block for our network which is
capable of unimodal and cross-modal reconstruction, including view-guided
point cloud completion. We explore the benefits of Mamba's long-sequence
efficiency coupled with the Transformer's excellent analytical capabilities
through MambaTron. This approach is one of the first attempts to implement a
Mamba-based analogue of cross-attention, especially in computer vision. Our
model demonstrates a degree of performance comparable to the current
state-of-the-art techniques while using a fraction of the computation
resources.
|
2501.16385
|
FBQuant: FeedBack Quantization for Large Language Models
|
cs.LG cs.CL
|
Deploying Large Language Models (LLMs) on edge devices is increasingly
important, as it eliminates reliance on network connections, reduces expensive
API calls, and enhances user privacy. However, on-device deployment is
challenging due to the limited computational resources of edge devices. In
particular, the key bottleneck stems from memory bandwidth constraints related
to weight loading. Weight-only quantization effectively reduces memory access,
yet often induces significant accuracy degradation. Recent efforts to
incorporate sub-branches have shown promise for mitigating quantization errors,
but these methods either lack robust optimization strategies or rely on
suboptimal objectives. To address these gaps, we propose FeedBack Quantization
(FBQuant), a novel approach inspired by negative feedback mechanisms in
automatic control. FBQuant inherently ensures that the reconstructed weights
remain bounded by the quantization process, thereby reducing the risk of
overfitting. To further offset the additional latency introduced by
sub-branches, we develop an efficient CUDA kernel that reduces the extra
inference time by 60\%. Comprehensive experiments demonstrate the efficiency and
effectiveness of FBQuant across various LLMs. Notably, for 3-bit Llama2-7B,
FBQuant improves zero-shot accuracy by 1.2\%.
|
2501.16386
|
ILETIA: An AI-enhanced method for individualized trigger-oocyte pickup
interval estimation of progestin-primed ovarian stimulation protocol
|
q-bio.QM cs.LG
|
In vitro fertilization-embryo transfer (IVF-ET) stands as one of the most
prevalent treatments for infertility. During an IVF-ET cycle, the time interval
between trigger shot and oocyte pickup (OPU) is a pivotal period for follicular
maturation, which determines mature oocyte yield and impacts the success of
subsequent procedures. However, accurately predicting this interval is severely
hindered by the variability of clinicians' experience, which often leads to
suboptimal oocyte retrieval rates. To address this challenge, we propose ILETIA,
the first machine learning-based method that could predict the optimal
trigger-OPU interval for patients receiving progestin-primed ovarian
stimulation (PPOS) protocol. Specifically, ILETIA leverages a Transformer to
learn representations from clinical tabular data, and then employs
gradient-boosted trees for interval prediction. For model training and
evaluation, we compiled PPOS-DS, a dataset of nearly ten thousand patients
receiving the PPOS protocol, the largest such dataset to our knowledge.
Experimental results demonstrate that our method achieves strong performance
(AUROC = 0.889), outperforming both clinicians and other widely used
computational models. Moreover, ILETIA also supports premature ovulation risk
prediction at a specific OPU time (AUROC = 0.838). Collectively, by enabling
more precise and individualized decisions, ILETIA has the potential to improve
clinical outcomes and lay the foundation for future IVF-ET research.
|
2501.16388
|
Development and Validation of a Dynamic Kidney Failure Prediction Model
based on Deep Learning: A Real-World Study with External Validation
|
cs.LG stat.AP
|
Background: Chronic kidney disease (CKD), a progressive disease with high
morbidity and mortality, has become a significant global public health problem.
At present, most of the models used for predicting the progression of CKD are
static models. We aim to develop a dynamic kidney failure prediction model
based on deep learning (KFDeep) for CKD patients, utilizing all available data
on common clinical indicators from real-world Electronic Health Records (EHRs)
to provide real-time predictions.
Findings: A retrospective cohort of 4,587 patients from EHRs of Yinzhou,
China, is used as the development dataset (2,752 patients for training, 917
patients for validation) and internal validation dataset (917 patients), while
a prospective cohort of 934 patients from the Peking University First Hospital
CKD cohort (PKUFH cohort) is used as the external validation dataset. The AUROC
of the KFDeep model reaches 0.946 (95\% CI: 0.922-0.970) on the internal
validation dataset and 0.805 (95\% CI: 0.763-0.847) on the external validation
dataset, both surpassing existing models. The KFDeep model demonstrates stable
performance in simulated dynamic scenarios, with the AUROC progressively
increasing over time. Both the calibration curve and decision curve analyses
confirm that the model is unbiased and safe for practical use, while the SHAP
analysis and hidden layer clustering results align with established medical
knowledge.
Interpretation: The KFDeep model built from real-world EHRs enhances the
prediction accuracy of kidney failure without increasing clinical examination
costs and can be easily integrated into existing hospital systems, providing
physicians with a continuously updated decision-support tool due to its dynamic
design.
|
2501.16389
|
Bridging the Sim2Real Gap: Vision Encoder Pre-Training for Visuomotor
Policy Transfer
|
cs.RO cs.CV
|
Simulation offers a scalable and efficient alternative to real-world data
collection for learning visuomotor robotic policies. However, the
simulation-to-reality, or "Sim2Real" distribution shift -- introduced by
employing simulation-trained policies in real-world environments -- frequently
prevents successful policy transfer. This study explores the potential of using
large-scale pre-training of vision encoders to address the Sim2Real gap. We
examine a diverse collection of encoders, evaluating their ability to (1)
extract features necessary for robot control while (2) remaining invariant to
task-irrelevant environmental variations. We quantitatively measure the
encoder's feature extraction capabilities through linear probing and its domain
invariance by computing distances between simulation and real-world embedding
centroids. Additional qualitative insights are provided through t-SNE plots and
GradCAM saliency maps. Findings suggest that encoders pre-trained on
manipulation-specific datasets generally outperform those trained on generic
datasets in bridging the Sim2Real gap.
https://github.com/yyardi/Bridging-the-Sim2Real-Gap
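
The two evaluation axes are easy to mirror on any encoder's embeddings: a linear probe for feature quality and the distance between simulation and real-world embedding centroids for domain invariance. The sketch below uses random stand-ins for encoder outputs; the label set and the domain shift are illustrative.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Stand-ins for encoder embeddings of paired sim / real frames.
sim_emb = rng.normal(size=(500, 128))
real_emb = rng.normal(loc=0.3, size=(500, 128))     # shifted domain
labels = rng.integers(0, 4, size=500)               # e.g., gripper state

# (1) Feature quality: linear-probe accuracy on a control-relevant label.
probe = LogisticRegression(max_iter=1000).fit(sim_emb, labels)
print("probe acc:", probe.score(sim_emb, labels))

# (2) Domain invariance: distance between sim and real embedding centroids.
gap = np.linalg.norm(sim_emb.mean(0) - real_emb.mean(0))
print("centroid distance (lower = more invariant):", gap)
```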
|
2501.16391
|
Leveraging Induced Transferable Binding Principles for Associative
Prediction of Novel Drug-Target Interactions
|
cs.LG cs.AI q-bio.BM
|
Significant differences in protein structures hinder the generalization of
existing drug-target interaction (DTI) models, which often rely heavily on
pre-learned binding principles or detailed annotations. In contrast, BioBridge
designs an Inductive-Associative pipeline inspired by the workflow of
scientists, who draw on their accumulated expertise to gain insights into
novel drug-target pairs from weakly related references. BioBridge predicts novel
drug-target interactions using limited sequence data, incorporating multi-level
encoders with adversarial training to accumulate transferable binding
principles. Building on these principles, BioBridge employs a dynamic prototype
meta-learning framework to associate insights from weakly related annotations,
enabling robust predictions for previously unseen drug-target pairs. Extensive
experiments demonstrate that BioBridge surpasses existing models, especially
for unseen proteins. Notably, when only homologous protein binding data is
available, BioBridge proves effective for virtual screening of the epidermal
growth factor receptor and adenosine receptor, underscoring its potential in
drug discovery.
|
2501.16392
|
HMCGeo: IP Region Prediction Based on Hierarchical Multi-label
Classification
|
cs.LG
|
Fine-grained IP geolocation plays a critical role in applications such as
location-based services and cybersecurity. Most existing fine-grained IP
geolocation methods are regression-based; however, due to noise in the input
data, these methods typically encounter kilometer-level prediction errors and
provide incorrect region information for users. To address this issue, this
paper proposes a novel hierarchical multi-label classification framework for IP
region prediction, named HMCGeo. This framework treats IP geolocation as a
hierarchical multi-label classification problem and employs residual
connection-based feature extraction and attention prediction units to predict
the target host region across multiple geographical granularities. Furthermore,
we introduce probabilistic classification loss during training, combining it
with hierarchical cross-entropy loss to form a composite loss function. This
approach optimizes predictions by utilizing hierarchical constraints between
regions at different granularities. IP region prediction experiments on the New
York, Los Angeles, and Shanghai datasets demonstrate that HMCGeo achieves
superior performance across all geographical granularities, significantly
outperforming existing IP geolocation methods.
|
2501.16393
|
Improving Network Threat Detection by Knowledge Graph, Large Language
Model, and Imbalanced Learning
|
cs.LG cs.CR stat.ML
|
Network threat detection has been challenging due to the complexity of attack
activities and the limited historical threat data to learn from. To enhance
existing practices of using analytics, machine learning, and artificial
intelligence methods to detect network threats, we propose an integrated
modelling framework in which a Knowledge Graph is used to analyze users'
activity patterns, Imbalanced Learning techniques are used to prune and weight
the Knowledge Graph, and an LLM is used to retrieve and interpret users'
activities from the Knowledge Graph. The proposed framework is applied to
Agile Threat Detection through Online Sequential Learning. Preliminary results
show an improvement in the threat capture rate of 3%-4% and increased
interpretability of risk predictions based on users' activities.
|
2501.16394
|
Transformer^-1: Input-Adaptive Computation for Resource-Constrained
Deployment
|
cs.LG
|
Addressing the resource waste caused by fixed computation paradigms in deep
learning models under dynamic scenarios, this paper proposes a
Transformer$^{-1}$ architecture based on the principle of deep adaptivity. This
architecture achieves dynamic matching between input features and computational
resources by establishing a joint optimization model for complexity and
computation. Our core contributions include: (1) designing a two-layer control
mechanism, composed of a complexity predictor and a reinforcement learning
policy network, enabling end-to-end optimization of computation paths; (2)
deriving a lower-bound theory for dynamic computation, proving that the system
can theoretically reach optimal efficiency; and (3) proposing a layer folding
technique and a CUDA Graph pre-compilation scheme, overcoming the engineering
bottlenecks of dynamic architectures. In the ImageNet-1K benchmark test, our
method reduces FLOPs by 42.7\% and peak memory usage by 34.1\% compared to the
standard Transformer, while maintaining comparable accuracy ($\pm$0.3\%).
Furthermore, we conducted practical deployment on the Jetson AGX Xavier
platform, verifying the effectiveness and practical value of this method in
resource-constrained environments. To further validate the generality of the
method, we also conducted experiments on several natural language processing
tasks and achieved significant improvements in resource efficiency.
|
2501.16396
|
TopoNets: High Performing Vision and Language Models with Brain-Like
Topography
|
cs.LG cs.NE q-bio.NC
|
Neurons in the brain are organized such that nearby cells tend to share
similar functions. AI models lack this organization, and past efforts to
introduce topography have often led to trade-offs between topography and task
performance. In this work, we present TopoLoss, a new loss function that
promotes spatially organized topographic representations in AI models without
significantly sacrificing task performance. TopoLoss is highly adaptable and
can be seamlessly integrated into the training of leading model architectures.
We validate our method on both vision (ResNet-18, ResNet-50, ViT) and language
models (GPT-Neo-125M, NanoGPT), collectively termed TopoNets. TopoNets are the
highest-performing supervised topographic models to date, exhibiting brain-like
properties such as localized feature processing, lower dimensionality, and
increased efficiency. TopoNets also predict responses in the brain and
replicate the key topographic signatures observed in the brain's visual and
language cortices. Together, this work establishes a robust and generalizable
framework for integrating topography into leading model architectures,
advancing the development of high-performing models that more closely emulate
the computational strategies of the human brain.
|
2501.16397
|
THOR: A Generic Energy Estimation Approach for On-Device Training
|
cs.LG
|
Battery-powered mobile devices (e.g., smartphones, AR/VR glasses, and various
IoT devices) are increasingly being used for AI training due to their growing
computational power and easy access to valuable, diverse, and real-time data.
On-device training is highly energy-intensive, making accurate energy
consumption estimation crucial for effective job scheduling and sustainable AI.
However, the heterogeneity of devices and the complexity of models challenge
the accuracy and generalizability of existing estimation methods.
This paper proposes THOR, a generic approach for energy consumption
estimation in deep neural network (DNN) training. First, we examine the
layer-wise energy additivity property of DNNs and strategically partition the
entire model into layers for fine-grained energy consumption profiling. Then,
we fit Gaussian Process (GP) models to learn from layer-wise energy consumption
measurements and estimate a DNN's overall energy consumption based on its
layer-wise energy additivity property. We conduct extensive experiments with
various types of models across different real-world platforms. The results
demonstrate that THOR reduces the Mean Absolute Percentage Error (MAPE) by up
to 30%. Moreover, THOR is applied to guide energy-aware
pruning, successfully reducing energy consumption by 50%, thereby further
demonstrating its generality and potential.
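
A minimal sketch of the estimation idea under the layer-wise additivity property: fit a Gaussian Process to layer-level energy measurements, then sum per-layer predictions for a whole model. The features, the synthetic measurements, and the independence assumption used to combine the standard deviations are ours.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)

# Layer-wise profiling data: features such as (GFLOPs, Mparams, batch size)
# against measured per-layer energy -- synthetic stand-ins for real traces.
X = rng.uniform(0.1, 10.0, size=(200, 3))
y = 5.0 * X[:, 0] + 1.5 * X[:, 1] + rng.normal(scale=0.5, size=200)

gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gp.fit(X, y)

# Energy additivity: the whole model's estimate is the sum over its layers.
layers = rng.uniform(0.1, 10.0, size=(12, 3))        # a 12-layer toy DNN
mean, std = gp.predict(layers, return_std=True)
total = mean.sum()
total_std = np.sqrt((std ** 2).sum())                # assumes independence
print(f"estimated energy per step: {total:.1f} +/- {total_std:.1f} (a.u.)")
```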
|
2501.16398
|
Visualizing the Local Atomic Environment Features of Machine Learning
Interatomic Potential
|
cs.LG physics.atom-ph
|
This paper addresses the challenges of creating efficient and high-quality
datasets for machine learning potential functions. We present a novel approach,
termed DV-LAE (Difference Vectors based on Local Atomic Environments), which
utilizes the properties of atomic local environments and employs histogram
statistics to generate difference vectors. This technique facilitates dataset
screening and optimization, effectively minimizing redundancy while maintaining
data diversity. We have validated the optimized datasets in high-temperature
and high-pressure hydrogen systems as well as the α-Fe/H binary system,
demonstrating a significant reduction in computational resource usage without
compromising prediction accuracy. Additionally, our method has revealed new
structures that emerge during simulations but were underrepresented in the
initial training datasets. The redundancy in the datasets and the distribution
of these new structures can be visually analyzed through the visualization of
difference vectors. This approach enhances our understanding of the
characteristics of these newly formed structures and their impact on physical
processes.
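
A simplified reading of the difference-vector construction: summarize each structure's local atomic environments with a histogram (here, of pairwise interatomic distances) and compare structures by the difference of normalized histograms. The descriptor choice, bin settings, and redundancy threshold are illustrative assumptions.

```python
import numpy as np

def env_histogram(positions, bins=32, r_range=(0.5, 6.0)):
    """Histogram of pairwise interatomic distances as a cheap summary of
    the local atomic environments in one structure."""
    d = np.linalg.norm(positions[:, None] - positions[None, :], axis=-1)
    d = d[np.triu_indices(len(positions), k=1)]
    h, _ = np.histogram(d, bins=bins, range=r_range)
    return h / max(h.sum(), 1)

def difference_vector(struct_a, struct_b, **kw):
    return env_histogram(struct_a, **kw) - env_histogram(struct_b, **kw)

rng = np.random.default_rng(0)
a = rng.uniform(0.0, 5.0, size=(20, 3))              # toy atomic coordinates
b = a + rng.normal(scale=0.02, size=a.shape)         # near-duplicate structure
dv = difference_vector(a, b)
# Screening rule: a near-zero difference vector flags a redundant structure.
print("redundant" if np.abs(dv).sum() < 0.1 else "keep")
```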
|
2501.16399
|
Detecting clinician implicit biases in diagnoses using proximal causal
inference
|
cs.LG stat.AP
|
Clinical decisions to treat and diagnose patients are affected by implicit
biases formed by racism, ableism, sexism, and other stereotypes. These biases
reflect broader systemic discrimination in healthcare and risk marginalizing
already disadvantaged groups. Existing methods for measuring implicit biases
require controlled randomized testing and only capture individual attitudes
rather than outcomes. However, the "big-data" revolution has led to the
availability of large observational medical datasets, like EHRs and biobanks,
that provide the opportunity to investigate discrepancies in patient health
outcomes. In this work, we propose a causal inference approach to detect the
effect of clinician implicit biases on patient outcomes in large-scale medical
data. Specifically, our method uses proximal mediation to disentangle
pathway-specific effects of a patient's sociodemographic attribute on a
clinician's diagnosis decision. We test our method on real-world data from the
UK Biobank. Our work can serve as a tool that initiates conversation and brings
awareness to unequal health outcomes caused by implicit biases.
|
2501.16403
|
Is Open Source the Future of AI? A Data-Driven Approach
|
cs.SE cs.AI cs.CL
|
Large Language Models (LLMs) have become central in academia and industry,
raising concerns about privacy, transparency, and misuse. A key issue is the
trustworthiness of proprietary models, with open-sourcing often proposed as a
solution. However, open-sourcing presents challenges, including potential
misuse, financial disincentives, and intellectual property concerns.
Proprietary models, backed by private sector resources, are better positioned
for return on investment.
There are also other approaches that lie somewhere on the spectrum between
completely open-source and proprietary. These can largely be categorised into
open-source models with usage limitations protected by licensing, partially
open-source (open-weights) models, and hybrid approaches in which obsolete
model versions are open-sourced while competitive versions with market value
remain proprietary. Currently, discussion of where on this spectrum future
models should fall remains unbacked by data and largely opinion-driven, with
industry leaders weighing in. In this paper, we present a data-driven approach by
compiling data on open-source development of LLMs, and their contributions in
terms of improvements, modifications, and methods. Our goal is to avoid
supporting either extreme but rather present data that will support future
discussions both by industry experts as well as policy makers.
Our findings indicate that open-source contributions can enhance model
performance, with trends such as reduced model size and manageable accuracy
loss. We also identify positive community engagement patterns and architectures
that benefit most from open contributions.
|
2501.16404
|
DynaPrompt: Dynamic Test-Time Prompt Tuning
|
cs.LG cs.AI cs.CL
|
Test-time prompt tuning enhances zero-shot generalization of vision-language
models but tends to ignore the relatedness among test samples during inference.
Online test-time prompt tuning provides a simple way to leverage the
information in previous test samples, albeit with the risk of prompt collapse
due to error accumulation. To enhance test-time prompt tuning, we propose
DynaPrompt, short for dynamic test-time prompt tuning, exploiting relevant data
distribution information while reducing error accumulation. Built on an online
prompt buffer, DynaPrompt adaptively selects and optimizes the relevant prompts
for each test sample during tuning. Specifically, we introduce a dynamic prompt
selection strategy based on two metrics: prediction entropy and probability
difference. For unseen test data information, we develop dynamic prompt
appending, which allows the buffer to append new prompts and delete the
inactive ones. By doing so, the prompts are optimized to exploit beneficial
information on specific test data, while alleviating error accumulation.
Experiments on fourteen datasets demonstrate the effectiveness of dynamic
test-time prompt tuning.
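
The two selection metrics named above, prediction entropy and the top-1/top-2 probability difference, suffice for a minimal sketch of the buffer-selection step. The threshold values are our assumptions; the prompt optimization and appending/deletion logic are omitted.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(-1, keepdims=True))
    return e / e.sum(-1, keepdims=True)

def select_prompts(logits_per_prompt, ent_thresh=1.5, gap_thresh=0.1):
    """Pick prompts from the buffer whose predictions on the current test
    sample look reliable: low prediction entropy and a clear probability
    difference between the top-1 and top-2 classes."""
    keep = []
    for i, logits in enumerate(logits_per_prompt):
        p = softmax(logits)
        entropy = -(p * np.log(p + 1e-12)).sum()
        top2 = np.sort(p)[::-1][:2]
        if entropy < ent_thresh and (top2[0] - top2[1]) > gap_thresh:
            keep.append(i)
    return keep  # empty list -> append a fresh prompt to the buffer

buffer_logits = np.random.default_rng(0).normal(size=(5, 10))  # 5 prompts
print(select_prompts(buffer_logits))
```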
|
2501.16405
|
DepoRanker: A Web Tool to predict Klebsiella Depolymerases using Machine
Learning
|
q-bio.GN cs.LG
|
Background: Phage therapy shows promise for treating antibiotic-resistant
Klebsiella infections. Identifying phage depolymerases that target Klebsiella
capsular polysaccharides is crucial, as these capsules contribute to biofilm
formation and virulence. However, homology-based searches have limitations in
novel depolymerase discovery.
Objective: To develop a machine learning model for identifying and ranking
potential phage depolymerases targeting Klebsiella.
Methods: We developed DepoRanker, a machine learning algorithm to rank
proteins by their likelihood of being depolymerases. The model was
experimentally validated on 5 newly characterized proteins and compared to
BLAST.
Results: DepoRanker demonstrated superior performance to BLAST in identifying
potential depolymerases. Experimental validation confirmed its predictive
ability on novel proteins.
Conclusions: DepoRanker provides an accurate and functional tool to expedite
depolymerase discovery for phage therapy against Klebsiella. It is available as
a webserver and open-source software.
Availability: Webserver: https://deporanker.dcs.warwick.ac.uk/ Source code:
https://github.com/wgrgwrght/deporanker
|
2501.16409
|
Classification of Mild Cognitive Impairment Based on Dynamic Functional
Connectivity Using Spatio-Temporal Transformer
|
eess.IV cs.AI q-bio.NC
|
Dynamic functional connectivity (dFC) using resting-state functional magnetic
resonance imaging (rs-fMRI) is an advanced technique for capturing the dynamic
changes of neural activities, and can be very useful in studies of brain
diseases such as Alzheimer's disease (AD). Yet, existing studies have not fully
leveraged the sequential information embedded within dFC that can potentially
provide valuable information when identifying brain conditions. In this paper,
we propose a novel framework that jointly learns the embedding of both spatial
and temporal information within dFC based on the transformer architecture.
Specifically, we first construct dFC networks from rs-fMRI data through a
sliding window strategy. Then, we simultaneously employ a temporal block and a
spatial block to capture higher-order representations of dynamic
spatio-temporal dependencies, via mapping them into an efficient fused feature
representation. To further enhance the robustness of these feature
representations by reducing the dependency on labeled data, we also introduce a
contrastive learning strategy to manipulate different brain states.
Experimental results on 345 subjects with 570 scans from the Alzheimer's
Disease Neuroimaging Initiative (ADNI) demonstrate the superiority of our
proposed method for MCI (Mild Cognitive Impairment, the prodromal stage of AD)
prediction, highlighting its potential for early identification of AD.
|
2501.16410
|
DynAlign: Unsupervised Dynamic Taxonomy Alignment for Cross-Domain
Segmentation
|
cs.CV
|
Current unsupervised domain adaptation (UDA) methods for semantic
segmentation typically assume identical class labels between the source and
target domains. This assumption ignores the label-level domain gap, which is
common in real-world scenarios, thus limiting their ability to identify
finer-grained or novel categories without requiring extensive manual
annotation. A promising direction to address this limitation lies in recent
advancements in foundation models, which exhibit strong generalization
abilities due to their rich prior knowledge. However, these models often
struggle with domain-specific nuances and underrepresented fine-grained
categories.
To address these challenges, we introduce DynAlign, a framework that
integrates UDA with foundation models to bridge both the image-level and
label-level domain gaps. Our approach leverages prior semantic knowledge to
align source categories with target categories that can be novel, more
fine-grained, or named differently (e.g., vehicle to {car, truck, bus}).
Foundation models are then employed for precise segmentation and category
reassignment. To further enhance accuracy, we propose a knowledge fusion
approach that dynamically adapts to varying scene contexts. DynAlign generates
accurate predictions in a new target label space without requiring any manual
annotations, allowing seamless adaptation to new taxonomies through either
model retraining or direct inference.
Experiments on the street scene semantic segmentation benchmarks GTA to
Mapillary Vistas and GTA to IDD validate the effectiveness of our approach,
achieving a significant improvement over existing methods. Our code will be
publicly available.
|
2501.16411
|
PhysBench: Benchmarking and Enhancing Vision-Language Models for
Physical World Understanding
|
cs.CV cs.AI cs.CL cs.LG cs.RO
|
Understanding the physical world is a fundamental challenge in embodied AI,
critical for enabling agents to perform complex tasks and operate safely in
real-world environments. While Vision-Language Models (VLMs) have shown great
promise in reasoning and task planning for embodied agents, their ability to
comprehend physical phenomena remains extremely limited. To close this gap, we
introduce PhysBench, a comprehensive benchmark designed to evaluate VLMs'
physical world understanding capability across a diverse set of tasks.
PhysBench contains 10,002 entries of interleaved video-image-text data,
categorized into four major domains: physical object properties, physical
object relationships, physical scene understanding, and physics-based dynamics,
further divided into 19 subclasses and 8 distinct capability dimensions. Our
extensive experiments, conducted on 75 representative VLMs, reveal that while
these models excel in common-sense reasoning, they struggle with understanding
the physical world -- likely due to the absence of physical knowledge in their
training data and the lack of embedded physical priors. To address this
shortfall, we introduce PhysAgent, a novel framework that combines the
generalization strengths of VLMs with the specialized expertise of vision
models, significantly enhancing VLMs' physical understanding across a variety
of tasks, including an 18.4\% improvement on GPT-4o. Furthermore, our results
demonstrate that enhancing VLMs' physical world understanding capabilities can
help embodied agents such as MOKA. We believe that PhysBench and PhysAgent
offer valuable insights and contribute to bridging the gap between VLMs and
physical world understanding.
|
2501.16443
|
Objects matter: object-centric world models improve reinforcement
learning in visually complex environments
|
cs.LG cs.CV
|
Deep reinforcement learning has achieved remarkable success in learning
control policies from pixels across a wide range of tasks, yet its application
remains hindered by low sample efficiency, requiring significantly more
environment interactions than humans to reach comparable performance.
Model-based reinforcement learning (MBRL) offers a solution by leveraging
learnt world models to generate simulated experience, thereby improving sample
efficiency. However, in visually complex environments, small or dynamic
elements can be critical for decision-making. Yet, traditional MBRL methods in
pixel-based environments typically rely on auto-encoding with an $L_2$ loss,
which is dominated by large areas and often fails to capture decision-relevant
details. To address these limitations, we propose an object-centric MBRL
pipeline, which integrates recent advances in computer vision to allow agents
to focus on key decision-related elements. Our approach consists of four main
steps: (1) annotating key objects related to rewards and goals with
segmentation masks, (2) extracting object features using a pre-trained, frozen
foundation vision model, (3) incorporating these object features with the raw
observations to predict environmental dynamics, and (4) training the policy
using imagined trajectories generated by this object-centric world model.
Building on the efficient MBRL algorithm STORM, we call this pipeline OC-STORM.
We demonstrate OC-STORM's practical value in overcoming the limitations of
conventional MBRL approaches on both Atari games and the visually complex game
Hollow Knight.
|
2501.16448
|
What is Harm? Baby Don't Hurt Me! On the Impossibility of Complete Harm
Specification in AI Alignment
|
cs.AI cs.LG
|
"First, do no harm" faces a fundamental challenge in artificial intelligence:
how can we specify what constitutes harm? While prior work treats harm
specification as a technical hurdle to be overcome through better algorithms or
more data, we argue this assumption is unsound. Drawing on information theory,
we demonstrate that complete harm specification is fundamentally impossible for
any system where harm is defined externally to its specifications. This
impossibility arises from an inescapable information-theoretic gap: the entropy
of harm $H(O)$ always exceeds the mutual information $I(O;I)$ between the
ground-truth harm $O$ and a system's specifications $I$.
We introduce two novel metrics, semantic entropy $H(S)$ and the
safety-capability ratio $I(O;I)/H(O)$, to quantify these limitations. Through a
progression of increasingly sophisticated specification attempts, we show why
each approach must fail and why the resulting gaps are not mere engineering
challenges but fundamental constraints akin to the halting problem. These
results suggest a paradigm shift: rather than pursuing complete specifications,
AI alignment research should focus on developing systems that can operate
safely despite irreducible specification uncertainty.
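
The stated gap is equivalent to the conditional entropy $H(O|I) = H(O) - I(O;I)$ being strictly positive. A tiny computation on a toy joint distribution (our own example) makes the gap and the safety-capability ratio concrete:

```python
import numpy as np

# Toy joint distribution P(O, I): rows = ground-truth harm O,
# columns = system specification I. Any noise makes H(O|I) > 0.
P = np.array([[0.35, 0.05],
              [0.10, 0.50]])

def H(p):
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

P_O, P_I = P.sum(1), P.sum(0)
H_O = H(P_O)
I_OI = H_O + H(P_I) - H(P.ravel())     # I(O;I) = H(O) + H(I) - H(O,I)
print("H(O)       =", round(H_O, 4))
print("I(O;I)     =", round(I_OI, 4))
print("gap H(O|I) =", round(H_O - I_OI, 4))        # strictly positive
print("safety-capability ratio I(O;I)/H(O) =", round(I_OI / H_O, 4))
```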
|
2501.16450
|
360Brew: A Decoder-only Foundation Model for Personalized Ranking and
Recommendation
|
cs.IR cs.AI
|
Ranking and recommendation systems are the foundation for numerous online
experiences, ranging from search results to personalized content delivery.
These systems have evolved into complex, multilayered architectures that
leverage vast datasets and often incorporate thousands of predictive models.
The maintenance and enhancement of these models is a labor intensive process
that requires extensive feature engineering. This approach not only exacerbates
technical debt but also hampers innovation in extending these systems to
emerging problem domains. In this report, we present our research to address
these challenges by utilizing a large foundation model with a textual interface
for ranking and recommendation tasks. We illustrate several key advantages of
our approach: (1) a single model can manage multiple predictive tasks involved
in ranking and recommendation, (2) decoder models with a textual interface,
owing to their comprehension and reasoning capabilities, can generalize to new
recommendation surfaces and out-of-domain problems, and (3) by employing
natural language interfaces for task definitions and verbalizing member
behaviors and their social connections, we eliminate the need for feature
engineering and the maintenance of complex directed acyclic graphs of model
dependencies. We introduce our research pre-production model, 360Brew V1.0, a
150B parameter, decoder-only model that has been trained and fine-tuned on
LinkedIn's data and tasks. This model is capable of solving over 30 predictive
tasks across various segments of the LinkedIn platform, achieving performance
levels comparable to or exceeding those of current production systems based on
offline metrics, without task-specific fine-tuning. Notably, each of these
tasks is conventionally addressed by dedicated models that have been developed
and maintained over multiple years by teams of a similar or larger size than
our own.
|
2501.16453
|
Detecting Zero-Day Attacks in Digital Substations via In-Context
Learning
|
cs.LG cs.AI
|
Cyber attacks on power grids have been increasing every year, with novel
attack techniques continually emerging. In this paper, we
address the critical challenge of detecting novel/zero-day attacks in digital
substations that employ the IEC-61850 communication protocol. While many
heuristic and machine learning (ML)-based methods have been proposed for attack
detection in IEC-61850 digital substations, generalization to novel or zero-day
attacks remains challenging. We propose an approach that leverages the
in-context learning (ICL) capability of the transformer architecture, the
fundamental building block of large language models. The ICL approach enables
the model to detect zero-day attacks and learn from a few examples of that
attack without explicit retraining. Our experiments on the IEC-61850 dataset
demonstrate that the proposed method achieves more than $85\%$ detection
accuracy on zero-day attacks while the existing state-of-the-art baselines
fail. This work paves the way for building more secure and resilient digital
substations of the future.
|
2501.16456
|
CoCoNUT: Structural Code Understanding does not fall out of a tree
|
cs.LG cs.SE
|
Large Language Models (LLMs) have shown impressive performance across a wide
array of tasks involving both structured and unstructured textual data. Recent
results on various benchmarks for code generation, repair, or completion
suggest that certain models have programming abilities comparable to or even
surpass humans. In this work, we demonstrate that high performance on such
benchmarks does not correlate with the innate human ability to understand
structural control flow in code. To this end, we extract solutions from the
HumanEval benchmark, which the relevant models perform strongly on, and trace
their execution path using function calls sampled from the respective test set.
Using this dataset, we investigate the ability of seven state-of-the-art LLMs
to match the execution trace and find that, despite their ability to generate
semantically identical code, they possess limited ability to trace execution
paths, especially for longer traces and specific control structures. We find
that even the top-performing model, Gemini, can fully and correctly generate
only 47% of HumanEval task traces. Additionally, we introduce a subset for
three key structures not contained in HumanEval: Recursion, Parallel
Processing, and Object-Oriented Programming, including concepts like
Inheritance and Polymorphism. With the exception of OOP, we show that none of the
investigated models achieve an accuracy over 5% on the relevant traces.
Aggregating these specialized parts with HumanEval tasks, we present CoCoNUT:
Code Control Flow for Navigation Understanding and Testing, which measures a
model's ability to trace execution of code upon relevant calls, including
advanced structural components. We conclude that current LLMs need significant
improvement to enhance code reasoning abilities. We hope our dataset helps
researchers bridge this gap.
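
Ground-truth traces of the kind the benchmark compares against can be collected with a few lines of standard-library tracing; the sketch below records the sequence of function calls a recursive solution makes. This is our own minimal tracer, not the benchmark's tooling.

```python
import sys

def trace_calls(fn, *args):
    """Record the sequence of function calls made while executing fn --
    the kind of execution trace a model is asked to reproduce."""
    trace = []
    def tracer(frame, event, arg):
        if event == "call":
            trace.append(frame.f_code.co_name)
        return tracer
    sys.settrace(tracer)
    try:
        fn(*args)
    finally:
        sys.settrace(None)
    return trace

def fib(n):                  # a recursive HumanEval-style solution
    return n if n < 2 else fib(n - 1) + fib(n - 2)

print(trace_calls(fib, 4))   # ['fib', 'fib', 'fib', ...]
```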
|
2501.16458
|
BiFold: Bimanual Cloth Folding with Language Guidance
|
cs.RO cs.CV
|
Cloth folding is a complex task due to the inevitable self-occlusions of
clothes, their complicated dynamics, and the disparate materials, geometries,
and textures that garments can have. In this work, we learn folding actions
conditioned on text commands. Translating high-level, abstract instructions
into precise robotic actions requires sophisticated language understanding and
manipulation capabilities. To do that, we leverage a pre-trained
vision-language model and repurpose it to predict manipulation actions. Our
model, BiFold, can take context into account and achieves state-of-the-art
performance on an existing language-conditioned folding benchmark. Given the
lack of annotated bimanual folding data, we devise a procedure to automatically
parse actions of a simulated dataset and tag them with aligned text
instructions. BiFold attains the best performance on our dataset and can
transfer to new instructions, garments, and environments.
|
2501.16466
|
On the Feasibility of Using LLMs to Execute Multistage Network Attacks
|
cs.CR cs.AI
|
LLMs have shown preliminary promise in some security tasks and CTF
challenges. However, it is unclear whether LLMs are able to realize multistage
network attacks, which involve executing a wide variety of actions across
multiple hosts such as conducting reconnaissance, exploiting vulnerabilities to
gain initial access, leveraging internal hosts to move laterally, and using
multiple compromised hosts to exfiltrate data. We evaluate LLMs across 10
multistage networks and find that popular LLMs are unable to realize these
attacks. To enable LLMs to realize these attacks, we introduce Incalmo, an
LLM-agnostic high-level attack abstraction layer that sits between an LLM and
the environment. Rather than LLMs issuing low-level command-line instructions,
which can lead to incorrect implementations, Incalmo allows LLMs to specify
high-level tasks (e.g., infect a host, scan a network), which are then carried
out by Incalmo. Incalmo realizes these tasks by translating them into low-level
primitives (e.g., commands to exploit tools). Incalmo also provides an
environment state service and an attack graph service to provide structure to
LLMs in selecting actions relevant to a multistage attack. Across 9 out of 10
realistic emulated networks (from 25 to 50 hosts), LLMs using Incalmo can
successfully execute multistage attacks autonomously. We also conduct an
ablation analysis to show the key role the high-level abstractions play. For
instance, we find that both Incalmo's high-level tasks and services are
crucial. Furthermore, even smaller-parameter LLMs with Incalmo can fully
succeed in 5 of 10 environments, while larger-parameter LLMs without Incalmo do
not fully succeed in any.
|
2501.16467
|
Cross-Domain Semantic Segmentation with Large Language Model-Assisted
Descriptor Generation
|
cs.CV
|
Semantic segmentation plays a crucial role in enabling machines to understand
and interpret visual scenes at a pixel level. While traditional segmentation
methods have achieved remarkable success, their generalization to diverse
scenes and unseen object categories remains limited. Recent advancements in
large language models (LLMs) offer a promising avenue for bridging visual and
textual modalities, providing a deeper understanding of semantic relationships.
In this paper, we propose LangSeg, a novel LLM-guided semantic segmentation
method that leverages context-sensitive, fine-grained subclass descriptors
generated by LLMs. Our framework integrates these descriptors with a
pre-trained Vision Transformer (ViT) to achieve superior segmentation
performance without extensive model retraining. We evaluate LangSeg on two
challenging datasets, ADE20K and COCO-Stuff, where it outperforms
state-of-the-art models, achieving up to a 6.1% improvement in mean
Intersection over Union (mIoU). Additionally, we conduct a comprehensive
ablation study and human evaluation to validate the effectiveness of our method
in real-world scenarios. The results demonstrate that LangSeg not only excels
in semantic understanding and contextual alignment but also provides a flexible
and efficient framework for language-guided segmentation tasks. This approach
opens up new possibilities for interactive and domain-specific segmentation
applications.
|
2501.16469
|
Object Detection for Medical Image Analysis: Insights from the RT-DETR
Model
|
cs.CV cs.LG
|
Deep learning has emerged as a transformative approach for solving complex
pattern recognition and object detection challenges. This paper focuses on the
application of a novel detection framework based on the RT-DETR model for
analyzing intricate image data, particularly in areas such as diabetic
retinopathy detection. Diabetic retinopathy, a leading cause of vision loss
globally, requires accurate and efficient image analysis to identify
early-stage lesions. The proposed RT-DETR model, built on a Transformer-based
architecture, excels at processing high-dimensional and complex visual data
with enhanced robustness and accuracy. Comparative evaluations with models such
as YOLOv5, YOLOv8, SSD, and DETR demonstrate that RT-DETR achieves superior
performance across precision, recall, mAP50, and mAP50-95 metrics, particularly
in detecting small-scale objects and densely packed targets. This study
underscores the potential of Transformer-based models like RT-DETR for
advancing object detection tasks, offering promising applications in medical
imaging and beyond.
|
2501.16471
|
SIM: Surface-based fMRI Analysis for Inter-Subject Multimodal Decoding
from Movie-Watching Experiments
|
cs.LG cs.AI eess.AS eess.IV q-bio.NC
|
Current AI frameworks for brain decoding and encoding typically train and
test models within the same datasets. This limits their utility for brain
computer interfaces (BCI) or neurofeedback, for which it would be useful to
pool experiences across individuals to better simulate stimuli not sampled
during training. A key obstacle to model generalisation is the degree of
variability of inter-subject cortical organisation, which makes it difficult to
align or compare cortical signals across participants. In this paper we address
this through the use of surface vision transformers, which build a
generalisable model of cortical functional dynamics, through encoding the
topography of cortical networks and their interactions as a moving image across
a surface. This is then combined with tri-modal self-supervised contrastive
(CLIP) alignment of audio, video, and fMRI modalities to enable the retrieval
of visual and auditory stimuli from patterns of cortical activity (and
vice-versa). We validate our approach on 7T task-fMRI data from 174 healthy
participants engaged in the movie-watching experiment from the Human Connectome
Project (HCP). Results show that it is possible to detect which movie clips an
individual is watching purely from their brain activity, even for individuals
and movies not seen during training. Further analysis of attention maps reveals
that our model captures individual patterns of brain activity that reflect
semantic and visual systems. This opens the door to future personalised
simulations of brain function. Code & pre-trained models will be made available
at https://github.com/metrics-lab/sim, and processed data for training will be
available upon request at https://gin.g-node.org/Sdahan30/sim.
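A toy sketch of the tri-modal contrastive (CLIP-style) objective: paired fMRI/video/audio embeddings are pulled together with InfoNCE losses between each modality pair. Dimensions and temperature below are illustrative assumptions, not the paper's settings.

```python
# Symmetric InfoNCE over three modality pairs (illustrative sketch).
import torch
import torch.nn.functional as F

def info_nce(a, b, temp=0.07):
    a, b = F.normalize(a, dim=-1), F.normalize(b, dim=-1)
    logits = a @ b.T / temp                    # (B, B) similarity matrix
    labels = torch.arange(a.shape[0])          # matching pairs on the diagonal
    return F.cross_entropy(logits, labels)

fmri = torch.randn(16, 256)    # stand-in for surface-ViT cortical embeddings
video = torch.randn(16, 256)
audio = torch.randn(16, 256)
loss = info_nce(fmri, video) + info_nce(fmri, audio) + info_nce(video, audio)
```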
|
2501.16473
|
Sensitivity Analysis of the Laser Power Control System to Measurement
Noise in SLS 3D Printers
|
eess.SY cs.SY
|
Uniform temperature distribution in Selective Laser Sintering (SLS) is
essential for producing durable 3D prints. Achieving uniformity requires a
laser power control system that minimises deviation of the printing
temperatures from the target temperature. Because the estimate of the actual
process temperature is an input to the laser power control, uncertainty in the
estimate of the actual temperature can lead to fluctuations in laser power that
affect the thermal performance of the SLS. This article investigates the
sensitivity of a laser power control system to temperature measurement
uncertainty. This article evaluates the effectiveness of two methods for
quantifying the effect of input uncertainty on a SLS laser power control
system: a recent innovation in uncertainty-tracked architecture and traditional
Monte Carlo simulation. We show that recent advances in computer architecture
for arithmetic on probability distributions make it possible, for the first
time, to perform control system uncertainty analysis with latencies under 30
ms, matching the quality of uncertainty analysis achieved by Monte Carlo
methods whose latencies are two orders of magnitude higher.
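A minimal Monte Carlo version of such an analysis can be sketched as follows; the proportional control law, gain, target, and noise level are illustrative assumptions, not the controller studied in the paper:

```python
# Propagate temperature-measurement noise through a toy laser-power law.
import numpy as np

rng = np.random.default_rng(0)
T_target, T_measured = 180.0, 176.0      # degrees C (illustrative)
sigma_T, k_p = 1.5, 0.8                  # measurement std, controller gain

# Sample plausible true temperatures given the noisy estimate, then map each
# through the control law to obtain the induced laser-power distribution.
T_samples = rng.normal(T_measured, sigma_T, size=100_000)
power = np.clip(k_p * (T_target - T_samples), 0.0, None)

print(f"laser power: mean={power.mean():.2f}, std={power.std():.2f}")
```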
|
2501.16476
|
Closed-Form Feedback-Free Learning with Forward Projection
|
cs.LG stat.ML
|
State-of-the-art methods for backpropagation-free learning employ local error
feedback to direct iterative optimisation via gradient descent. In this study,
we examine the more restrictive setting where retrograde communication from
neuronal outputs is unavailable for pre-synaptic weight optimisation. To
address this challenge, we propose Forward Projection (FP). This novel
randomised closed-form training method requires only a single forward pass over
the entire dataset for model fitting, without retrograde communication. Target
values for pre-activation membrane potentials are generated layer-wise via
nonlinear projections of pre-synaptic inputs and the labels. Local loss
functions are optimised over pre-synaptic inputs using closed-form regression,
without feedback from neuronal outputs or downstream layers. Interpretability
is a key advantage of FP training; membrane potentials of hidden neurons in
FP-trained networks encode information which is interpretable layer-wise as
label predictions. We demonstrate the effectiveness of FP across four
biomedical datasets. In few-shot learning tasks, FP yielded more generalisable
models than those optimised via backpropagation. In large-sample tasks,
FP-based models achieve generalisation comparable to gradient descent-based
local learning methods while requiring only a single forward propagation step,
achieving a significant speed-up in training. Interpretation functions defined
on local neuronal activity in FP-based models successfully identified
clinically salient features for diagnosis in two biomedical datasets. Forward
Projection is a computationally efficient machine learning approach that yields
interpretable neural network models without retrograde communication of
neuronal activity during training.
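A minimal sketch of one such layer fit, as we read the abstract (the tanh nonlinearity, the random projection, and ridge regression are assumed details, not the authors' code):

```python
# One Forward Projection-style layer: targets come from a random nonlinear
# projection of [inputs, labels]; weights come from closed-form ridge
# regression, with no feedback from the layer's own outputs.
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 20))                 # pre-synaptic inputs
Y = np.eye(3)[rng.integers(0, 3, 500)]             # one-hot labels
width, lam = 32, 1e-2                              # layer width, ridge strength

# Layer-wise targets for pre-activation membrane potentials.
R = rng.standard_normal((X.shape[1] + Y.shape[1], width))
T = np.tanh(np.hstack([X, Y]) @ R)

# Closed-form fit of the targets from pre-synaptic inputs alone.
W = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ T)
H = np.tanh(X @ W)                                 # layer output, feeds the next layer
```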
|
2501.16480
|
Modular Framework for Uncertainty Prediction in Autonomous Vehicle
Motion Forecasting within Complex Traffic Scenarios
|
cs.RO cs.LG eess.SP
|
We propose a modular modeling framework designed to enhance the capture and
validation of uncertainty in autonomous vehicle (AV) trajectory prediction.
Departing from traditional deterministic methods, our approach employs a
flexible, end-to-end differentiable probabilistic encoder-decoder architecture.
This modular design allows the encoder and decoder to be trained independently,
enabling seamless adaptation to diverse traffic scenarios without retraining
the entire system. Our key contributions include: (1) a probabilistic heatmap
predictor that generates context-aware occupancy grids for dynamic forecasting,
(2) a modular training approach that supports independent component training
and flexible adaptation, and (3) a structured validation scheme leveraging
uncertainty metrics to evaluate robustness under high-risk conditions. To
highlight the benefits of our framework, we benchmark it against an end-to-end
baseline, demonstrating faster convergence, improved stability, and
flexibility. Experimental results validate these advantages, showcasing the
capacity of the framework to efficiently handle complex scenarios while
ensuring reliable predictions and robust uncertainty representation. This
modular design offers significant practical utility and scalability for
real-world autonomous driving applications.
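A minimal sketch of a probabilistic heatmap head of this kind (an illustration of the idea, not the paper's architecture; feature and grid sizes are invented):

```python
# A decoder head emits per-cell logits over an occupancy grid and is trained
# with the negative log-likelihood of the observed future cell.
import torch
import torch.nn as nn

class HeatmapHead(nn.Module):
    def __init__(self, feat_dim=128, grid=64):
        super().__init__()
        self.grid = grid
        self.fc = nn.Linear(feat_dim, grid * grid)  # logits over grid cells

    def forward(self, scene_feat):                  # (B, feat_dim)
        logits = self.fc(scene_feat)
        return logits.view(-1, self.grid, self.grid)

head = HeatmapHead()
feat = torch.randn(4, 128)                          # stand-in encoder features
logits = head(feat).flatten(1)                      # (B, grid*grid)
target_cell = torch.randint(0, 64 * 64, (4,))       # observed future position
loss = nn.functional.cross_entropy(logits, target_cell)
prob_map = logits.softmax(dim=1).view(4, 64, 64)    # context-aware occupancy grid
```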
|
2501.16481
|
Generating customized prompts for Zero-Shot Rare Event Medical Image
Classification using LLM
|
cs.CV
|
Rare events, by their very infrequency, provide little data, so deep
learning techniques struggle to estimate their distribution. Open-vocabulary
models represent an innovative approach to image
classification. Unlike traditional models, these models classify images into
any set of categories specified with natural language prompts during inference.
These prompts usually comprise manually crafted templates (e.g., 'a photo of a
{}') that are filled in with the names of each category. This paper introduces
a simple yet effective method for generating highly accurate and contextually
descriptive prompts containing discriminative characteristics. Rare event
detection, especially in medicine, is more challenging due to low inter-class
and high intra-class variability. To address these, we propose a novel approach
that uses domain-specific expert knowledge on rare events to generate
customized and contextually relevant prompts, which are then used by large
language models for image classification. Our zero-shot, privacy-preserving
method enhances rare event classification without additional training,
outperforming state-of-the-art techniques.
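A hedged sketch of the prompting idea, with CLIP standing in for the open-vocabulary classifier and invented descriptive prompts in place of the expert-derived ones used in the paper:

```python
# Zero-shot classification with contextually descriptive prompts (sketch).
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

prompts = [
    "a retinal photograph with scattered microaneurysms and dot haemorrhages",
    "a retinal photograph with a normal, lesion-free fundus",
]
image = Image.new("RGB", (224, 224))   # placeholder for a real fundus image
inputs = processor(text=prompts, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    probs = model(**inputs).logits_per_image.softmax(dim=-1)
print(dict(zip(prompts, probs[0].tolist())))
```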
|
2501.16485
|
Enhanced Position Estimation in Tactile Internet-Enabled Remote Robotic
Surgery Using MOESP-Based Kalman Filter
|
cs.RO cs.SY eess.SY
|
Accurately estimating the position of a patient's side robotic arm in real
time during remote surgery is a significant challenge, especially within
Tactile Internet (TI) environments. This paper presents a new and efficient
method for position estimation using a Kalman Filter (KF) combined with the
Multivariable Output-Error State Space (MOESP) method for system
identification. Unlike traditional approaches that require prior knowledge of
the system's dynamics, this study uses the JIGSAW dataset, a comprehensive
collection of robotic surgical data, along with input from the Master Tool
Manipulator (MTM) to derive the state-space model directly. The MOESP method
allows accurate modeling of the Patient Side Manipulator (PSM) dynamics without
prior system models, improving the KF's performance under simulated network
conditions, including delays, jitter, and packet loss. These conditions mimic
real-world challenges in Tactile Internet applications. The findings
demonstrate the KF's improved resilience and accuracy in state estimation,
achieving over 95 percent accuracy despite network-induced uncertainties.
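For reference, the estimation step is a standard Kalman filter running on an identified state-space model; the matrices, covariances, and toy data below are illustrative placeholders, not quantities identified from the JIGSAW dataset:

```python
# Kalman filter on a (MOESP-style) identified model x' = Ax + Bu, y = Cx.
import numpy as np

A = np.array([[1.0, 0.1], [0.0, 1.0]])   # identified state transition
B = np.array([[0.0], [0.1]])             # input (MTM command) matrix
C = np.array([[1.0, 0.0]])               # position measurement matrix
Q, R = 1e-4 * np.eye(2), 1e-2 * np.eye(1)

x, P = np.zeros((2, 1)), np.eye(2)
for u, y in [(0.5, 0.02), (0.5, 0.07)]:  # toy command/measurement pairs
    # Predict through the identified model.
    x = A @ x + B * u
    P = A @ P @ A.T + Q
    # Update with the (possibly delayed, noisy) measurement.
    K = P @ C.T @ np.linalg.inv(C @ P @ C.T + R)
    x = x + K @ (np.array([[y]]) - C @ x)
    P = (np.eye(2) - K @ C) @ P
```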
|
2501.16487
|
Network Risk Estimation: A Risk Estimation Paradigm for Cyber Networks
|
eess.SY cs.SY
|
Cyber networks are fundamental to many organizations' infrastructure, and the
size of cyber networks is increasing rapidly. Risk measurement of the
entities/endpoints that make up the network via available knowledge about
possible threats has been the primary tool in cyber network security. However,
the dynamic behavior of the entities and the sparsity of risk-measurable points
are limiting factors for risk measurement strategies, which results in poor
network visibility considering the volatility of cyber networks. This work
proposes a new probabilistic risk estimation approach to network security, NRE,
which operates on top of existing risk measurements. The proposed method NRE
extracts relationships among system components from the network connection
data, models risk propagation based on the learned relationships and refines
the estimates whenever risk measurements are provided. In this work, (i) the
risk estimation scheme is proposed, (ii) an application of quantitative risk
estimates is devised, (iii) the descriptiveness of the risk estimates is
compared to a pure risk-measurement alternative, and (iv) the computational
complexity of the proposed method is shown to be low enough for real-time
deployment. The
proposed method, NRE, is ultimately a quantitative data-driven risk assessment
tool that can be used to add security aspects to existing network functions,
such as routing, and it provides a robust description of the network state in
the presence of threats, capable of running in real-time.
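A toy sketch of risk propagation over learned relationships, with measured endpoints clamped to their measurements (the adjacency weights, decay factor, and fixed-point update are illustrative assumptions, not NRE's exact model):

```python
# Propagate sparse risk measurements over entity relationships.
import numpy as np

W = np.array([[0.0, 0.6, 0.0],            # connection-derived relationship weights
              [0.6, 0.0, 0.3],
              [0.0, 0.3, 0.0]])
measured = {0: 0.9}                        # endpoint 0 has a risk measurement
alpha, risk = 0.5, np.zeros(3)

for _ in range(50):                        # fixed-point risk propagation
    risk = alpha * W @ risk
    for node, value in measured.items():   # refine where measurements exist
        risk[node] = value
print(risk)                                # estimated risk at unmeasured endpoints
```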
|
2501.16489
|
Nonparametric Sparse Online Learning of the Koopman Operator
|
stat.ML cs.LG cs.SY eess.SY
|
The Koopman operator provides a powerful framework for representing the
dynamics of general nonlinear dynamical systems. Data-driven techniques to
learn the Koopman operator typically assume that the chosen function space is
closed under system dynamics. In this paper, we study the Koopman operator via
its action on the reproducing kernel Hilbert space (RKHS), and explore the
mis-specified scenario where the dynamics may escape the chosen function space.
We relate the Koopman operator to the conditional mean embeddings (CME)
operator and then present an operator stochastic approximation algorithm to
learn the Koopman operator iteratively with control over the complexity of the
representation. We provide both asymptotic and finite-time last-iterate
guarantees of the online sparse learning algorithm with trajectory-based
sampling with an analysis that is substantially more involved than that for
finite-dimensional stochastic approximation. Numerical examples confirm the
effectiveness of the proposed algorithm.
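A toy sketch of the flavor of such an algorithm: online kernel regression toward next-state observables, with a coherence test capping dictionary growth. The kernel width, step size, and threshold are assumed, and the paper's operator-level algorithm is substantially more involved than this scalar version.

```python
# Sparse online kernel learning of a one-step predictor (sketch).
import numpy as np

def k(a, b, s=1.0):                        # Gaussian kernel
    return np.exp(-np.linalg.norm(a - b) ** 2 / (2 * s**2))

dictionary, coef = [], []                  # retained centers, coefficients
eta, tau = 0.1, 0.9                        # step size, coherence threshold

rng = np.random.default_rng(0)
x = rng.standard_normal(2)
for _ in range(200):                       # trajectory-based sampling
    x_next = 0.9 * x + 0.05 * rng.standard_normal(2)   # toy dynamics
    pred = sum(c * k(x, d) for c, d in zip(coef, dictionary)) if dictionary else 0.0
    err = x_next[0] - pred                 # predict one observable of x_next
    # Grow the dictionary only if x is incoherent with existing centers.
    if not dictionary or max(k(x, d) for d in dictionary) < tau:
        dictionary.append(x.copy())
        coef.append(0.0)
    # Stochastic-approximation update of the kernel expansion.
    coef = [c + eta * err * k(x, d) for c, d in zip(coef, dictionary)]
    x = x_next
```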
|
2501.16490
|
Towards Robust Stability Prediction in Smart Grids: GAN-based Approach
under Data Constraints and Adversarial Challenges
|
cs.CR cs.AI cs.LG
|
Smart grids are critical for addressing the growing energy demand due to
global population growth and urbanization. They enhance efficiency,
reliability, and sustainability by integrating renewable energy. Ensuring their
availability and safety requires advanced operational control and safety
measures. Researchers employ AI and machine learning to assess grid stability,
but challenges like the lack of datasets and cybersecurity threats, including
adversarial attacks, persist. In particular, data scarcity is a key issue:
obtaining grid instability instances is tough due to the need for significant
expertise, resources, and time. However, they are essential to test novel
research advancements and security mitigations. In this paper, we introduce a
novel framework to detect instability in smart grids by employing only stable
data. It relies on a Generative Adversarial Network (GAN) where the generator
is trained to create instability data that are used along with stable data to
train the discriminator. Moreover, we include a new adversarial training layer
to improve robustness against adversarial attacks. Our solution, tested on a
dataset composed of real-world stable and unstable samples, achieves accuracy
of up to 97.5% in predicting grid stability and up to 98.9% in detecting
adversarial attacks. Moreover, we implemented our model on a single-board
computer, demonstrating efficient real-time decision-making with an average
response time of less than 7 ms. Our solution improves prediction accuracy and
resilience while addressing data scarcity in smart grid management.
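A minimal sketch of the training idea (network sizes and the feature dimension are illustrative; at deployment, a low discriminator score would flag instability):

```python
# Generator synthesizes "instability" data; discriminator learns
# stable (real) vs. unstable (generated).
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 12))
D = nn.Sequential(nn.Linear(12, 64), nn.ReLU(), nn.Linear(64, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

for step in range(100):
    stable = torch.randn(32, 12)           # stand-in for real stable samples
    fake = G(torch.randn(32, 16))          # generated "instability" samples
    # Discriminator: stable -> 1, generated-unstable -> 0.
    d_loss = bce(D(stable), torch.ones(32, 1)) + bce(D(fake.detach()), torch.zeros(32, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # Generator: push its samples toward the discriminator's "stable" side.
    g_loss = bce(D(fake), torch.ones(32, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```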
|
2501.16496
|
Open Problems in Mechanistic Interpretability
|
cs.LG
|
Mechanistic interpretability aims to understand the computational mechanisms
underlying neural networks' capabilities in order to accomplish concrete
scientific and engineering goals. Progress in this field thus promises to
provide greater assurance over AI system behavior and shed light on exciting
scientific questions about the nature of intelligence. Despite recent progress
toward these goals, there are many open problems in the field that require
solutions before many scientific and practical benefits can be realized: Our
methods require both conceptual and practical improvements to reveal deeper
insights; we must figure out how best to apply our methods in pursuit of
specific goals; and the field must grapple with socio-technical challenges that
influence and are influenced by our work. This forward-facing review discusses
the current frontier of mechanistic interpretability and the open problems that
the field may benefit from prioritizing.
|
2501.16497
|
Smoothed Embeddings for Robust Language Models
|
cs.LG cs.AI cs.CL cs.CR stat.ML
|
Improving the safety and reliability of large language models (LLMs) is a
crucial aspect of realizing trustworthy AI systems. Although alignment methods
aim to suppress harmful content generation, LLMs are often still vulnerable to
jailbreaking attacks that employ adversarial inputs that subvert alignment and
induce harmful outputs. We propose the Randomized Embedding Smoothing and Token
Aggregation (RESTA) defense, which adds random noise to the embedding vectors
and performs aggregation during the generation of each output token, with the
aim of better preserving semantic information. Our experiments demonstrate that
our approach achieves superior robustness versus utility tradeoffs compared to
the baseline defenses.
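A minimal sketch of one smoothed generation step under our reading of the abstract: noisy copies of the input embeddings are decoded and the next token is chosen by vote. The noise scale, vote count, majority aggregation, and GPT-2 backbone are illustrative assumptions; the paper's exact aggregation may differ.

```python
# Randomized embedding smoothing with token-level aggregation (sketch).
import torch
from collections import Counter
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

ids = tok("The safe answer is", return_tensors="pt").input_ids
emb = model.get_input_embeddings()(ids)

votes = Counter()
for _ in range(8):                                  # aggregate over noisy copies
    noisy = emb + 0.01 * torch.randn_like(emb)      # smoothing noise
    logits = model(inputs_embeds=noisy).logits[0, -1]
    votes[int(logits.argmax())] += 1
next_id = votes.most_common(1)[0][0]
print(tok.decode([next_id]))
```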
|
2501.16504
|
Digital Twin Enabled Site Specific Channel Precoding: Over the Air CIR
Inference
|
eess.SP cs.AI
|
This paper investigates the significance of designing a reliable,
intelligent, and true physical environment-aware precoding scheme by leveraging
an accurately designed channel twin model to obtain realistic channel state
information (CSI) for cellular communication systems. Specifically, we propose
a fine-tuned multi-step channel twin design process that can render CSI very
close to the CSI of the actual environment. After generating a precise CSI, we
execute precoding using the obtained CSI at the transmitter end. We demonstrate
a two-step parameter-tuning approach: the channel twin is first designed via
ray tracing (RT) emulation, and the CSI is then further fine-tuned with an
artificial intelligence (AI)-based algorithm, which significantly reduces the
gap between the actual CSI and the fine-tuned digital twin (DT)-rendered
CSI. The simulation
results show the effectiveness of the proposed novel approach in designing a
true physical environment-aware channel twin model.
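Precoding from the rendered CSI is then standard; for instance, a zero-forcing precoder (an illustrative choice, since the paper does not commit to a particular precoder, with an invented 4x2 channel):

```python
# Zero-forcing precoding from digital-twin-rendered CSI (sketch).
import numpy as np

rng = np.random.default_rng(0)
H = rng.standard_normal((2, 4)) + 1j * rng.standard_normal((2, 4))  # DT-rendered CSI

# ZF precoder: channel pseudo-inverse, normalized in power.
W = H.conj().T @ np.linalg.inv(H @ H.conj().T)
W /= np.linalg.norm(W)

print(np.round(H @ W, 3))   # effective channel is a scaled identity
```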
|
2501.16507
|
Characterizing Network Structure of Anti-Trans Actors on TikTok
|
cs.HC cs.AI cs.SI
|
The recent proliferation of short form video social media sites such as
TikTok has been effectively utilized for increased visibility, communication,
and community connection amongst trans/nonbinary creators online. However,
these same platforms have also been exploited by right-wing actors targeting
trans/nonbinary people, enabling such anti-trans actors to efficiently spread
hate speech and propaganda. Given these divergent groups, what are the
differences in network structure between anti-trans and pro-trans communities
on TikTok, and to what extent do they amplify the effects of anti-trans
content? In this paper, we collect a sample of TikTok videos containing pro-
and anti-trans content, and develop a taxonomy of trans-related sentiment to enable
the classification of content on TikTok, and ultimately analyze the reply
network structures of pro-trans and anti-trans communities. In order to
accomplish this, we worked with hired expert data annotators from the
trans/nonbinary community in order to generate a sample of highly accurately
labeled data. From this subset, we utilized a novel classification pipeline
leveraging Retrieval-Augmented Generation (RAG) with annotated examples and
taxonomy definitions to classify content into pro-trans, anti-trans, or neutral
categories. We find that incorporating our taxonomy and its logics into our
classification engine improves the ability to differentiate trans-related
content. Results from network analysis indicate that many interactions between
posters of pro-trans and anti-trans content exist, further demonstrating the
targeting of trans individuals and the need for better content moderation
tools.
|
2501.16509
|
Reinforcement Learning for Quantum Circuit Design: Using Matrix
Representations
|
quant-ph cs.AI
|
Quantum computing promises advantages over classical computing. The
manufacturing of quantum hardware is in the infancy stage, called the Noisy
Intermediate-Scale Quantum (NISQ) era. A major challenge is automated quantum
circuit design, which maps a desired quantum computation to gates in a
universal gate set. In this paper, we present a generic MDP formulation and
employ Q-learning and DQN
algorithms for quantum circuit design. By leveraging the power of deep
reinforcement learning, we aim to provide an automatic and scalable approach
over traditional hand-crafted heuristic methods.
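A toy tabular version of such an MDP (one qubit, a two-gate set, and a myopic reward; the paper's DQN setting generalizes this): the state is the unitary synthesized so far in matrix form, actions append gates, and reward is fidelity to a target unitary.

```python
# Tabular Q-learning over gate sequences with matrix-valued states (sketch).
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
S = np.array([[1, 0], [0, 1j]])
gates = {"H": H, "S": S}
target = H @ S @ H                                   # unitary to synthesize

def fidelity(U):
    return abs(np.trace(target.conj().T @ U)) / 2    # 1.0 when U matches target

Q, eps, alpha = {}, 0.2, 0.5
rng = np.random.default_rng(0)
for episode in range(500):
    U, seq = np.eye(2, dtype=complex), ()            # state: accumulated unitary
    for _ in range(3):                               # max circuit depth
        Q.setdefault(seq, {g: 0.0 for g in gates})
        a = (rng.choice(list(gates)) if rng.random() < eps
             else max(Q[seq], key=Q[seq].get))       # epsilon-greedy action
        U = gates[a] @ U
        reward = fidelity(U)
        # Myopic update for brevity; DQN variants bootstrap instead.
        Q[seq][a] += alpha * (reward - Q[seq][a])
        seq += (a,)
```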
|
2501.16510
|
Decrypting the temperature field in flow boiling with latent diffusion
models
|
physics.flu-dyn cs.AI
|
This paper presents an innovative method using Latent Diffusion Models (LDMs)
to generate temperature fields from phase indicator maps. By leveraging the
BubbleML dataset from numerical simulations, the LDM translates phase field
data into corresponding temperature distributions through a two-stage training
process involving a vector-quantized variational autoencoder (VQVAE) and a
denoising autoencoder. The resulting model effectively reconstructs complex
temperature fields at interfaces. Spectral analysis indicates a high degree of
agreement with ground truth data in the low to mid wavenumber ranges, even
though some inconsistencies are observed at higher wavenumbers, suggesting
areas for further enhancement. This machine learning approach significantly
reduces the computational burden of traditional simulations and improves the
precision of experimental calibration methods. Future work will focus on
refining the model's ability to represent small-scale turbulence and expanding
its applicability to a broader range of boiling conditions.
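The first training stage hinges on vector quantization of encoder features against a learned codebook; a minimal sketch of that operation as it appears in a standard VQVAE (sizes are illustrative):

```python
# Nearest-neighbor codebook lookup with a straight-through gradient.
import torch

codebook = torch.randn(512, 64, requires_grad=True)      # K codes of dim 64
z_e = torch.randn(8, 64, requires_grad=True)             # encoder outputs

dist = torch.cdist(z_e, codebook)                        # (8, 512) distances
idx = dist.argmin(dim=1)
z_q = codebook[idx]                                      # quantized latents

# Straight-through estimator: decoder sees z_q, encoder receives z_e's grads.
z_st = z_e + (z_q - z_e).detach()
commit_loss = ((z_e - z_q.detach()) ** 2).mean()         # commitment term
```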
|
2501.16513
|
Deception in LLMs: Self-Preservation and Autonomous Goals in Large
Language Models
|
cs.CL
|
Recent advances in Large Language Models (LLMs) have incorporated planning
and reasoning capabilities, enabling models to outline steps before execution
and provide transparent reasoning paths. This enhancement has reduced errors in
mathematical and logical tasks while improving accuracy. These developments
have facilitated LLMs' use as agents that can interact with tools and adapt
their responses based on new information.
Our study examines DeepSeek R1, a model trained to output reasoning tokens
similar to OpenAI's o1. Testing revealed concerning behaviors: the model
exhibited deceptive tendencies and demonstrated self-preservation instincts,
including attempts of self-replication, despite these traits not being
explicitly programmed (or prompted). These findings raise concerns about LLMs
potentially masking their true objectives behind a facade of alignment. When
integrating such LLMs into robotic systems, the risks become tangible - a
physically embodied AI exhibiting deceptive behaviors and self-preservation
instincts could pursue its hidden objectives through real-world actions. This
highlights the critical need for robust goal specification and safety
frameworks before any physical implementation.
|
2501.16516
|
How well can LLMs Grade Essays in Arabic?
|
cs.CL cs.AI
|
This research assesses the effectiveness of state-of-the-art large language
models (LLMs), including ChatGPT, Llama, Aya, Jais, and ACEGPT, in the task of
Arabic automated essay scoring (AES) using the AR-AES dataset. It explores
various evaluation methodologies, including zero-shot, few-shot in-context
learning, and fine-tuning, and examines the influence of instruction-following
capabilities through the inclusion of marking guidelines within the prompts. A
mixed-language prompting strategy, integrating English prompts with Arabic
content, was implemented to improve model comprehension and performance. Among
the models tested, ACEGPT demonstrated the strongest performance across the
dataset, achieving a Quadratic Weighted Kappa (QWK) of 0.67, but was
outperformed by a smaller BERT-based model with a QWK of 0.88. The study
identifies challenges faced by LLMs in processing Arabic, including
tokenization complexities and higher computational demands. Performance
variation across different courses underscores the need for adaptive models
capable of handling diverse assessment formats and highlights the positive
impact of effective prompt engineering on improving LLM outputs. To the best of
our knowledge, this study is the first to empirically evaluate the performance
of multiple generative Large Language Models (LLMs) on Arabic essays using
authentic student data.
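The headline metric is straightforward to reproduce; a minimal Quadratic Weighted Kappa computation on toy scores (using scikit-learn's implementation; the scores below are invented):

```python
from sklearn.metrics import cohen_kappa_score

human = [3, 4, 2, 5, 3, 4, 1, 2]     # illustrative human essay scores
model = [3, 3, 2, 5, 4, 4, 2, 2]     # illustrative model-predicted scores
qwk = cohen_kappa_score(human, model, weights="quadratic")
print(f"QWK = {qwk:.2f}")
```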
|
2501.16519
|
Optimizing Decentralized Online Learning for Supervised Regression and
Classification Problems
|
cs.LG cs.DC cs.MA
|
Decentralized learning networks aim to synthesize a single network inference
from a set of raw inferences provided by multiple participants. To determine
the combined inference, these networks must adopt a mapping from historical
participant performance to weights, and to appropriately incentivize
contributions they must adopt a mapping from performance to fair rewards.
Despite the increased prevalence of decentralized learning networks, there
exists no systematic study that performs a calibration of the associated free
parameters. Here we present an optimization framework for key parameters
governing decentralized online learning in supervised regression and
classification problems. These parameters include the slope of the mapping
between historical performance and participant weight, the timeframe for
performance evaluation, and the slope of the mapping between performance and
rewards. These parameters are optimized using a suite of numerical experiments
that mimic the design of the Allora Network, but have been extended to handle
classification tasks in addition to regression tasks. This setup enables a
comparative analysis of parameter tuning and network performance optimization
(loss minimization) across both problem types. We demonstrate how the optimal
performance-weight mapping, performance timeframe, and performance-reward
mapping vary with network composition and problem type. Our findings provide
valuable insights for the optimization of decentralized learning protocols, and
we discuss how these results can be generalized to optimize any inference
synthesis-based, decentralized AI network.
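A toy sketch of the parameterized mappings being calibrated (the exponential form, values, and reward rule are illustrative choices, not necessarily the Allora Network's exact functions):

```python
# Map recent losses over a timeframe to combination weights via a slope.
import numpy as np

losses = np.array([[0.9, 0.8, 0.7],     # participant A, recent window
                   [0.5, 0.6, 0.4],     # participant B
                   [1.2, 1.1, 1.3]])    # participant C
timeframe, slope = 3, 2.0               # two of the calibrated parameters

perf = -losses[:, -timeframe:].mean(axis=1)          # higher is better
weights = np.exp(slope * perf)
weights /= weights.sum()                             # combination weights

inferences = np.array([1.1, 0.9, 1.6])               # raw participant inferences
network_inference = weights @ inferences             # combined network inference
rewards = weights                                    # reward mapping (illustrative)
```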
|
2501.16520
|
Safe Gradient Flow for Bilevel Optimization
|
math.OC cs.LG cs.SY eess.SY
|
Bilevel optimization is a key framework in hierarchical decision-making,
where one problem is embedded within the constraints of another. In this work,
we propose a control-theoretic approach to solving bilevel optimization
problems. Our method consists of two components: a gradient flow mechanism to
minimize the upper-level objective and a safety filter to enforce the
constraints imposed by the lower-level problem. Together, these components form
a safe gradient flow that solves the bilevel problem in a single loop. To
improve scalability with respect to the lower-level problem's dimensions, we
introduce a relaxed formulation and design a compact variant of the safe
gradient flow. This variant minimizes the upper-level objective while ensuring
the lower-level solution remains within a user-defined distance. Using Lyapunov
analysis, we establish convergence guarantees for the dynamics, proving that
they converge to a neighborhood of the optimal solution. Numerical experiments
further validate the effectiveness of the proposed approaches. Our
contributions provide both theoretical insights and practical tools for
efficiently solving bilevel optimization problems.
|
2501.16524
|
Programming by Examples Meets Historical Linguistics: A Large Language
Model Based Approach to Sound Law Induction
|
cs.CL
|
Historical linguists have long written "programs" that convert reconstructed
words in an ancestor language into their attested descendants via ordered
string rewrite functions (called sound laws). However, writing these programs is
time-consuming, motivating the development of automated Sound Law Induction
(SLI) which we formulate as Programming by Examples (PBE) with Large Language
Models (LLMs) in this paper. While LLMs have been effective for code
generation, recent work has shown that PBE is challenging but improvable by
fine-tuning, especially with training data drawn from the same distribution as
evaluation data. In this paper, we create a conceptual framework of what
constitutes a "similar distribution" for SLI and propose four kinds of
synthetic data generation methods with varying amounts of inductive bias to
investigate what leads to the best performance. Based on the results we create
a SOTA open-source model for SLI as PBE (+6% pass rate with a third of the
parameters of the second-best LLM) and also highlight exciting future
directions for PBE research.
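Sound laws are ordered string rewrites, so the "programs" to be induced have a very simple shape; a toy illustration with invented rules (not reconstructed laws from the paper):

```python
# An ordered cascade of string rewrite rules (sketch of the program format).
import re

sound_laws = [
    (r"p", "b"),          # voicing, simplified
    (r"aa", "o"),         # vowel merger
    (r"k$", ""),          # final consonant loss
]

def apply_laws(proto: str) -> str:
    # Apply each rewrite in order; ordering matters, as later laws
    # operate on the output of earlier ones.
    word = proto
    for pattern, repl in sound_laws:
        word = re.sub(pattern, repl, word)
    return word

print(apply_laws("paatak"))   # "paatak" -> "baatak" -> "botak" -> "bota"
```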
|
2501.16525
|
Multi-Objective Deep-Learning-based Biomechanical Deformable Image
Registration with MOREA
|
cs.CV cs.AI cs.NE
|
When choosing a deformable image registration (DIR) approach for images with
large deformations and content mismatch, the realism of found transformations
often needs to be traded off against the required runtime. DIR approaches using
deep learning (DL) techniques have shown remarkable promise in instantly
predicting a transformation. However, on difficult registration problems, the
realism of these transformations can fall short. DIR approaches using
biomechanical, finite element modeling (FEM) techniques can find more realistic
transformations, but tend to require much longer runtimes. This work proposes
the first hybrid approach to combine them, with the aim of getting the best of
both worlds. This hybrid approach, called DL-MOREA, combines a recently
introduced multi-objective DL-based DIR approach which leverages the VoxelMorph
framework, called DL-MODIR, with MOREA, an evolutionary algorithm-based,
multi-objective DIR approach in which a FEM-like biomechanical mesh
transformation model is used. In our proposed hybrid approach, the DL results
are used to smartly initialize MOREA, with the aim of more efficiently
optimizing its mesh transformation model. We empirically compare DL-MOREA
against its components, DL-MODIR and MOREA, on CT scan pairs of 15 cervical
cancer patients capturing large bladder filling differences. While MOREA
requires a median runtime of 45 minutes, DL-MOREA can already find high-quality
transformations after 5 minutes. Compared to the DL-MODIR transformations, the
transformations found by DL-MOREA exhibit far less folding and improve or
preserve the bladder contour distance error.
|
2501.16533
|
A comparison of data filtering techniques for English-Polish LLM-based
machine translation in the biomedical domain
|
cs.CL cs.LG
|
Large Language Models (LLMs) have become state-of-the-art in Machine
Translation (MT), often trained on massive bilingual parallel corpora scraped
from the web, which contain low-quality entries and redundant information,
leading to significant computational challenges. Various data filtering methods
exist to reduce dataset sizes, but their effectiveness largely varies based on
specific language pairs and domains. This paper evaluates the impact of
commonly used data filtering techniques, such as LASER, MUSE, and LaBSE, on
English-Polish translation within the biomedical domain. By filtering the UFAL
Medical Corpus, we created varying dataset sizes to fine-tune the mBART50
model, which was then evaluated using the SacreBLEU metric on the Khresmoi
dataset, having the quality of translations assessed by bilingual speakers. Our
results show that both LASER and MUSE can significantly reduce dataset sizes
while maintaining or even enhancing performance. We recommend the use of LASER,
as it consistently outperforms the other methods and provides the most fluent
and natural-sounding translations.
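A sketch of score-and-threshold filtering with LaBSE embeddings (the threshold and example pairs are illustrative; LASER- and MUSE-based filtering follow the same pattern with different embedding models):

```python
# Filter parallel pairs by cross-lingual embedding similarity (sketch).
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("sentence-transformers/LaBSE")
pairs = [("The patient shows acute symptoms.", "Pacjent wykazuje ostre objawy."),
         ("The patient shows acute symptoms.", "Proszę zamknąć okno.")]

src = model.encode([p[0] for p in pairs], normalize_embeddings=True)
tgt = model.encode([p[1] for p in pairs], normalize_embeddings=True)
scores = (src * tgt).sum(axis=1)                 # cosine similarity per pair

threshold = 0.7                                  # trades dataset size vs. quality
kept = [p for p, s in zip(pairs, scores) if s >= threshold]
print(scores, len(kept))
```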
|
2501.16534
|
Targeting Alignment: Extracting Safety Classifiers of Aligned LLMs
|
cs.CR cs.AI
|
Alignment in large language models (LLMs) is used to enforce guidelines such
as safety. Yet, alignment fails in the face of jailbreak attacks that modify
inputs to induce unsafe outputs. In this paper, we present and evaluate a
method to assess the robustness of LLM alignment. We observe that alignment
embeds a safety classifier in the target model that is responsible for deciding
between refusal and compliance. We seek to extract an approximation of this
classifier, called a surrogate classifier, from the LLM. We develop an
algorithm for identifying candidate classifiers from subsets of the LLM model.
We evaluate the degree to which the candidate classifiers approximate the
model's embedded classifier in benign (F1 score) and adversarial (using
surrogates in a white-box attack) settings. Our evaluation shows that the best
candidates achieve accurate agreement (an F1 score above 80%) using as little
as 20% of the model architecture. Further, we find attacks mounted on the
surrogate models can be transferred with high accuracy. For example, a
surrogate using only 50% of the Llama 2 model achieved an attack success rate
(ASR) of 70%, a substantial improvement over attacking the LLM directly, where
we only observed a 22% ASR. These results show that extracting surrogate
classifiers is a viable (and highly effective) means for modeling (and therein
addressing) the vulnerability of aligned models to jailbreaking attacks.
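A sketch of the surrogate idea under stated assumptions (a GPT-2 backbone, a fixed layer cut, and a linear probe; not the paper's exact extraction algorithm): hidden states from a prefix of the model serve as features for a small classifier that mimics the embedded refusal/compliance decision.

```python
# Fit a surrogate refusal classifier on partial-model activations (sketch).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2", output_hidden_states=True)

def features(prompt, layer=6):                    # use only part of the model
    ids = tok(prompt, return_tensors="pt").input_ids
    with torch.no_grad():
        hs = model(ids).hidden_states[layer]      # (1, seq, hidden)
    return hs[0, -1]                              # last-token representation

# Labels would come from the full model's refuse/comply behavior; the probe
# is then trained on (features, label) pairs harvested from many prompts.
probe = torch.nn.Linear(model.config.n_embd, 2)
x = features("How do I pick a lock?")
refusal_logits = probe(x)                         # surrogate decision
```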
|