| id | title | categories | abstract |
|---|---|---|---|
2501.18649
|
Fake News Detection After LLM Laundering: Measurement and Explanation
|
cs.CL cs.AI cs.LG
|
With their advanced capabilities, Large Language Models (LLMs) can generate
highly convincing and contextually relevant fake news, which can contribute to
disseminating misinformation. Though there is much research on fake news
detection for human-written text, the field of detecting LLM-generated fake
news is still under-explored. This research measures the efficacy of detectors
in identifying LLM-paraphrased fake news, in particular, determining whether
adding a paraphrase step in the detection pipeline helps or impedes detection.
This study contributes: (1) Detectors struggle to detect LLM-paraphrased fake
news more than human-written text, (2) We find which models excel at which
tasks (evading detection, paraphrasing to evade detection, and paraphrasing for
semantic similarity). (3) Via LIME explanations, we discovered a possible
reason for detection failures: sentiment shift. (4) We discover a worrisome
trend for paraphrase quality measurement: samples that exhibit sentiment shift
despite a high BERTSCORE. (5) We provide a pair of datasets augmenting existing
datasets with paraphrase outputs and scores. The dataset is available on GitHub
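Contribution (4) above, flagging paraphrases whose sentiment flips even though the similarity score stays high, can be sketched as follows. The lexicon-based sentiment scorer and the sample pairs are illustrative stand-ins, not the paper's BERTSCORE/LIME pipeline:

```python
# Toy check for "sentiment shift despite high similarity": flag pairs whose
# similarity is high but whose crude sentiment polarity changes sign.
POS = {"good", "great", "trusted", "safe"}
NEG = {"bad", "terrible", "dangerous", "unsafe"}

def lexicon_sentiment(text):
    """Crude polarity: (#positive - #negative) lexicon hits."""
    words = text.lower().split()
    return sum(w in POS for w in words) - sum(w in NEG for w in words)

def flag_sentiment_shift(pairs, sim_threshold=0.9):
    """Return (original, paraphrase, sim) triples with high similarity
    but opposite sentiment signs."""
    flagged = []
    for original, paraphrase, sim in pairs:
        if sim >= sim_threshold and \
                lexicon_sentiment(original) * lexicon_sentiment(paraphrase) < 0:
            flagged.append((original, paraphrase, sim))
    return flagged

pairs = [
    ("the vaccine is safe", "the vaccine is dangerous", 0.93),  # shift, high sim
    ("the vaccine is safe", "the vaccine is trusted", 0.95),    # no shift
    ("the vaccine is safe", "the vaccine is unsafe", 0.50),     # shift, low sim
]
flagged = flag_sentiment_shift(pairs)
print(flagged)  # only the first pair is flagged
```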
|
2501.18650
|
Constructing Cell-type Taxonomy by Optimal Transport with Relaxed
Marginal Constraints
|
q-bio.GN cs.LG stat.ML
|
The rapid emergence of single-cell data has facilitated the study of many
different biological conditions at the cellular level. Cluster analysis has
been widely applied to identify cell types, capturing the essential patterns of
the original data in a much more concise form. One challenge in the cluster
analysis of cells is matching clusters extracted from datasets of different
origins or conditions. Many existing algorithms cannot recognize new cell types
present in only one of the two samples when establishing a correspondence
between clusters obtained from two samples. Additionally, when there are more
than two samples, it is advantageous to align clusters across all samples
simultaneously rather than performing pairwise alignment. Our approach aims to
construct a taxonomy for cell clusters across all samples to better annotate
these clusters and effectively extract features for downstream analysis. A new
system for constructing cell-type taxonomy has been developed by combining the
technique of Optimal Transport with Relaxed Marginal Constraints (OT-RMC) and
the simultaneous alignment of clusters across multiple samples. OT-RMC allows
us to address challenges that arise when the proportions of clusters vary
substantially between samples or when some clusters do not appear in all the
samples. Experiments on more than twenty datasets demonstrate that the taxonomy
constructed by this new system can yield highly accurate annotation of cell
types. Additionally, sample-level features extracted based on the taxonomy
result in accurate classification of samples.
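The relaxed-marginal idea, allowing a cluster's mass to be partially dropped when it has no counterpart in another sample, can be sketched with a generic unbalanced (soft-marginal) Sinkhorn solver. This is an illustrative sketch in the spirit of OT-RMC, not the paper's exact algorithm; `tau` controls how strictly the marginals are enforced:

```python
import numpy as np

def relaxed_sinkhorn(C, a, b, eps=0.05, tau=0.5, n_iter=500):
    """Entropic OT plan between proportion vectors a and b, with the
    marginal constraints relaxed (unbalanced Sinkhorn scaling updates)."""
    K = np.exp(-C / eps)
    u, v = np.ones_like(a), np.ones_like(b)
    power = tau / (tau + eps)  # power < 1 softens the marginal constraints
    for _ in range(n_iter):
        u = (a / (K @ v)) ** power
        v = (b / (K.T @ u)) ** power
    return u[:, None] * K * v[None, :]

# Cluster proportions in two samples; the 3rd cluster of `a` has no good
# counterpart in `b` (high cost to both targets), so its mass should be
# mostly dropped rather than forced onto a bad match.
a = np.array([0.4, 0.4, 0.2])
b = np.array([0.5, 0.5])
C = np.array([[0.0, 1.0],
              [1.0, 0.0],
              [5.0, 5.0]])  # cluster 3 is far from both target clusters
P = relaxed_sinkhorn(C, a, b)
print(P.sum(axis=1))  # matched clusters keep most mass; cluster 3 near zero
```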
|
2501.18653
|
Cogito, ergo sum: A Neurobiologically-Inspired Cognition-Memory-Growth
System for Code Generation
|
cs.SE cs.AI
|
Multi-Agent Systems (MAS) based on large language models have demonstrated
promising performance in enhancing the efficiency and accuracy of code
generation tasks. However, most existing methods follow a conventional
sequence of planning, coding, and debugging, which contradicts the
growth-driven nature of the human learning process. Additionally, the frequent
information interaction between multiple agents inevitably incurs high
computational costs. In this paper, we propose Cogito, a neurobiologically
inspired multi-agent framework that enhances problem-solving capabilities in
code generation tasks at lower cost. Specifically, Cogito adopts a reverse
sequence: it first undergoes debugging, then coding, and finally planning.
This approach mimics human learning and development, where knowledge is
acquired progressively. Accordingly, a hippocampus-like memory module with
different functions is designed to work with the pipeline, providing quick
retrieval in similar tasks. Through this growth-based learning model, Cogito
accumulates knowledge and cognitive skills at each stage, ultimately forming a
Super Role, an all-capable agent that performs the code generation task.
Extensive experiments against representative baselines demonstrate the
superior performance and efficiency of Cogito. The code is publicly available
at https://anonymous.4open.science/r/Cogito-0083.
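The reversed stage order (debugging, then coding, then planning) around a shared memory can be sketched schematically. The stage names, outputs, and memory interface below are illustrative assumptions, not the framework's actual implementation:

```python
# Schematic of a debug -> code -> plan pipeline with a shared,
# hippocampus-like memory consulted by later stages.
class Memory:
    """Minimal key-value store standing in for the memory module."""
    def __init__(self):
        self.store = {}

    def remember(self, task, lesson):
        self.store[task] = lesson

    def retrieve(self, task):
        return self.store.get(task, "no prior experience")

def debugging_stage(task, memory):
    lesson = f"pitfalls noted for {task}"
    memory.remember(task, lesson)  # experience accumulated first
    return lesson

def coding_stage(task, memory):
    return f"code for {task} (informed by: {memory.retrieve(task)})"

def planning_stage(task, memory):
    return f"plan refined for {task} after coding"

def cogito_pipeline(task):
    memory = Memory()
    trace = []
    for stage in (debugging_stage, coding_stage, planning_stage):
        trace.append(stage(task, memory))
    return trace

trace = cogito_pipeline("sort a list")
print(trace)
```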
|
2501.18657
|
Enhancing Large Language Model Efficiency via Symbolic Compression: A
Formal Approach Towards Interpretability
|
cs.AI cs.SE
|
Large language models (LLMs) face significant token efficiency bottlenecks in
code generation and logical reasoning tasks, a challenge that directly impacts
inference cost and model interpretability. This paper proposes a formal
framework based on symbolic compression, integrating combinatory logic,
information-theoretic optimal encoding, and context-aware inference techniques
to achieve a step-change improvement in token efficiency while preserving
semantic integrity. We establish a mathematical framework within a functional
programming paradigm, derive the quantitative relationship between symbolic
density and model interpretability, and propose a differentiable compression
factor metric to evaluate encoding efficiency. Furthermore, we leverage
parameter-efficient fine-tuning (PEFT) techniques to achieve a low-cost
application of the GAEL language. Experimental results show that this method
achieves a 78.3% token compression rate in code generation tasks while
improving logical traceability by 62% through structural explicitness. This
research provides new theoretical tools for efficient inference in LLMs and
opens a symbolic path for model interpretability research.
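The token-compression-rate figure quoted above (78.3%) is of the form rate = 1 - (compressed tokens / original tokens). A minimal sketch, using a whitespace tokenizer and a hypothetical symbolic rewriting rather than the paper's GAEL encoding:

```python
# Compute a token compression rate between a verbose description and a
# symbolic form. Tokenizer and example snippet are illustrative assumptions.
def token_compression_rate(original, compressed, tokenize=str.split):
    orig = len(tokenize(original))
    comp = len(tokenize(compressed))
    return 1.0 - comp / orig

original = "define f of x as : if x greater than 0 return x else return 0"
compressed = "f = λx . max ( x , 0 )"   # hypothetical symbolic form
rate = token_compression_rate(original, compressed)
print(f"compression rate: {rate:.1%}")
```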
|
2501.18659
|
SAFL: Structure-Aware Personalized Federated Learning via
Client-Specific Clustering and SCSI-Guided Model Pruning
|
cs.LG cs.DC
|
Federated Learning (FL) enables clients to collaboratively train machine
learning models without sharing local data, preserving privacy in diverse
environments. However, traditional FL approaches often incur high
computational and communication overhead. To address these issues, model
pruning has been introduced as a strategy to streamline computations.
However, existing pruning methods, when applied solely based on local data,
often produce sub-models that inadequately reflect clients' specific tasks due
to data insufficiency. To overcome these challenges, this paper introduces SAFL
(Structure-Aware Federated Learning), a novel framework that enhances
personalized federated learning through client-specific clustering and Similar
Client Structure Information (SCSI)-guided model pruning. SAFL employs a
two-stage process: initially, it groups clients based on data similarities and
uses aggregated pruning criteria to guide the pruning process, facilitating the
identification of optimal sub-models. Subsequently, clients train these pruned
models and engage in server-based aggregation, ensuring tailored and efficient
models for each client. This method significantly reduces computational
overhead while improving inference accuracy. Extensive experiments demonstrate
that SAFL markedly diminishes model size and improves performance, making it
highly effective in federated environments characterized by heterogeneous data.
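The aggregated-pruning-criteria step described above, where clients in one cluster pool their criteria before pruning, can be sketched as cluster-averaged magnitude pruning. The averaging criterion and 50% sparsity are illustrative assumptions, not the paper's exact SCSI procedure:

```python
import numpy as np

def aggregated_pruning_mask(client_weights, sparsity=0.5):
    """Keep the (1 - sparsity) fraction of weights whose mean absolute
    value across a cluster of clients is largest."""
    score = np.mean([np.abs(w) for w in client_weights], axis=0)
    k = int(sparsity * score.size)
    threshold = np.sort(score.ravel())[k]   # (k+1)-th smallest score
    return score >= threshold

# Three clients in one cluster, each with a local copy of a 4x4 layer.
rng = np.random.default_rng(0)
client_weights = [rng.normal(size=(4, 4)) for _ in range(3)]
mask = aggregated_pruning_mask(client_weights, sparsity=0.5)
pruned = [w * mask for w in client_weights]  # shared sub-model structure
print(mask.mean())  # fraction of weights kept
```

Because the mask is computed from cluster-level scores, every client in the cluster prunes the same positions, giving a shared sub-model structure for server-side aggregation.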
|
2501.18660
|
NAR db status Version 2 and miRNAverse: Over Two Years of Manual
Meta-Registry Curation and Updates
|
q-bio.OT cs.DB
|
Previously, we reported on a new meta-registry for NAR published databases
focusing on high-quality annotations regarding database availability and
longevity. With over two years of continued manual curation, here, we report on
recent updates and additions. Furthermore, the available annotations as well as
the underlying database structure have been unified with the miRNAverse
meta-registry. This allows for more in-depth insights as well as easier
curation and future developments shared across both meta-registries. NAR db
status currently provides annotations for 2,082 databases and miRNAverse for
194 databases. With the oldest annotation revision from June 2022 and the
newest from January 2025, NAR db status spans two and a half years of continued
manual curation. NAR db status is available at https://nardbstatus.de and
miRNAverse at https://mirnaverse.de.
|
2501.18663
|
Joint Optimization of Prompt Security and System Performance in
Edge-Cloud LLM Systems
|
cs.CR cs.AI
|
Large language models (LLMs) have significantly facilitated human life, and
prompt engineering has improved the efficiency of these models. However, recent
years have witnessed a rise in prompt engineering-empowered attacks, leading to
issues such as privacy leaks, increased latency, and system resource wastage.
Although safety fine-tuning methods based on Reinforcement Learning from Human
Feedback (RLHF) have been proposed to align LLMs, existing security mechanisms
fail to cope with fickle prompt attacks, highlighting the necessity of
performing security detection on prompts. In this paper, we jointly consider
prompt security, service latency, and system resource optimization in
Edge-Cloud LLM (EC-LLM) systems under various prompt attacks. To enhance prompt
security, a vector-database-enabled lightweight attack detector is proposed. We
formalize the problem of joint prompt detection, latency, and resource
optimization into a multi-stage dynamic Bayesian game model. The equilibrium
strategy is determined by predicting the number of malicious tasks and updating
beliefs at each stage through Bayesian updates. The proposed scheme is
evaluated on a real, implemented EC-LLM system, and the results demonstrate that
our approach offers enhanced security, reduces the service latency for benign
users, and decreases system resource consumption compared to state-of-the-art
algorithms.
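The stage-wise belief update described above, predicting the number of malicious tasks and revising beliefs via Bayes at each stage, can be sketched with a Beta-Bernoulli model. This is an illustrative assumption, not the paper's full dynamic-game formulation:

```python
# Defender's belief over the fraction of malicious prompts, updated with
# Bayes' rule from detector flags observed at each stage.
def update_belief(alpha, beta, flagged, total):
    """Beta(alpha, beta) posterior after observing `flagged` malicious
    tasks out of `total` at this stage."""
    return alpha + flagged, beta + (total - flagged)

def expected_malicious_fraction(alpha, beta):
    return alpha / (alpha + beta)

alpha, beta = 1.0, 1.0                 # uninformative prior
stages = [(2, 10), (1, 10), (6, 10)]   # (flagged, total) per stage
for flagged, total in stages:
    alpha, beta = update_belief(alpha, beta, flagged, total)
    print(f"belief after stage: {expected_malicious_fraction(alpha, beta):.3f}")
```

The posterior mean could then drive the equilibrium strategy, e.g. how aggressively to route suspicious tasks to the detector at the next stage.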
|
2501.18664
|
Rethinking the Upsampling Layer in Hyperspectral Image Super Resolution
|
eess.IV cs.AI cs.CV
|
Deep learning has achieved significant success in single hyperspectral image
super-resolution (SHSR); however, the high spectral dimensionality leads to a
heavy computational burden, thus making it difficult to deploy in real-time
scenarios. To address this issue, this paper proposes a novel lightweight SHSR
network, i.e., LKCA-Net, that incorporates channel attention to calibrate
multi-scale channel features of hyperspectral images. Furthermore, we
demonstrate, for the first time, that the low-rank property of the learnable
upsampling layer is a key bottleneck in lightweight SHSR methods. To address
this, we employ the low-rank approximation strategy to optimize the parameter
redundancy of the learnable upsampling layer. Additionally, we introduce a
knowledge distillation-based feature alignment technique to ensure the low-rank
approximated network retains the same feature representation capacity as the
original. We conducted extensive experiments on the Chikusei, Houston 2018, and
Pavia Center datasets against several state-of-the-art methods. The results demonstrate that our
method is competitive in performance while achieving speedups of several dozen
to even hundreds of times compared to other well-performing SHSR methods.
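The low-rank step described above, exploiting the low-rank property of the learnable upsampling layer, amounts to factorizing its weight matrix with a truncated SVD so that W ≈ A @ B. Shapes and rank below are illustrative; the knowledge-distillation feature alignment is omitted:

```python
import numpy as np

def low_rank_factorize(W, rank):
    """Truncated-SVD factorization: W (m x n) ~= A (m x r) @ B (r x n),
    shrinking parameters from m*n to r*(m + n)."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    A = U[:, :rank] * s[:rank]
    B = Vt[:rank, :]
    return A, B

rng = np.random.default_rng(0)
m, n, r = 64, 256, 8
# Build a weight matrix that genuinely has low rank.
W = rng.normal(size=(m, r)) @ rng.normal(size=(r, n))
A, B = low_rank_factorize(W, rank=r)

full_params = m * n
lowrank_params = r * (m + n)
err = np.linalg.norm(W - A @ B) / np.linalg.norm(W)
print(f"params: {full_params} -> {lowrank_params}, rel. error {err:.2e}")
```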
|
2501.18665
|
BARNN: A Bayesian Autoregressive and Recurrent Neural Network
|
cs.LG cs.AI
|
Autoregressive and recurrent networks have achieved remarkable progress
across various fields, from weather forecasting to molecular generation and
Large Language Models. Despite their strong predictive capabilities, these
models lack a rigorous framework for addressing uncertainty, which is key in
scientific applications such as PDE solving, molecular generation and Machine
Learning Force Fields. To address this shortcoming we present BARNN: a
variational Bayesian Autoregressive and Recurrent Neural Network. BARNNs aim to
provide a principled way to turn any autoregressive or recurrent model into its
Bayesian version. BARNN is based on the variational dropout method, allowing it
to be applied to large recurrent neural networks as well. We also introduce a
temporal version of the "Variational Mixtures of Posteriors" prior
(tVAMP-prior) to make Bayesian inference efficient and well-calibrated.
Extensive experiments on PDE modelling and molecular generation demonstrate
that BARNN not only achieves comparable or superior accuracy compared to
existing methods, but also excels in uncertainty quantification and modelling
long-range dependencies.
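The core idea, keeping dropout stochastic so an autoregressive model yields a distribution over rollouts rather than a point forecast, can be sketched in Monte-Carlo form. The tiny linear AR model and mask placement are illustrative assumptions; BARNN proper uses variational dropout with the tVAMP prior:

```python
import numpy as np

def ar_step(history, weights, mask):
    # One autoregressive step under a sampled weight hypothesis.
    return float(np.dot(history, weights * mask))

def mc_rollout(history, weights, horizon=5, p_keep=0.8, n_samples=200, seed=0):
    """Roll the AR model forward many times, each with its own dropout
    mask; mean/std over rollouts give calibrated-ish predictions."""
    rng = np.random.default_rng(seed)
    rollouts = np.empty((n_samples, horizon))
    for s in range(n_samples):
        h = list(history)
        mask = rng.binomial(1, p_keep, size=len(weights)) / p_keep
        for t in range(horizon):
            y = ar_step(h[-len(weights):], weights, mask)
            rollouts[s, t] = y
            h.append(y)
    return rollouts.mean(axis=0), rollouts.std(axis=0)

mean, std = mc_rollout(history=[1.0, 0.5], weights=np.array([0.3, 0.6]))
print(mean, std)  # spread across rollouts quantifies predictive uncertainty
```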
|
2501.18666
|
Structure Development in List-Sorting Transformers
|
cs.LG cs.AI cs.NE
|
We study how a one-layer attention-only transformer develops relevant
structures while learning to sort lists of numbers. At the end of training, the
model organizes its attention heads in two main modes that we refer to as
vocabulary-splitting and copy-suppression. Both represent simpler modes than
having multiple heads handle overlapping ranges of numbers. Interestingly,
vocabulary-splitting is present regardless of whether we use weight decay, a
common regularization technique thought to drive simplification, supporting the
thesis that neural networks naturally prefer simpler solutions. We relate
copy-suppression to a mechanism in GPT-2 and investigate its functional role in
our model. Guided by insights from a developmental analysis of the model, we
identify features in the training data that drive the model's final acquired
solution. This provides a concrete example of how the training data shape the
internal organization of transformers, paving the way for future studies that
could help us better understand how LLMs develop their internal structures.
|
2501.18668
|
Simulation Streams: A Programming Paradigm for Controlling Large
Language Models and Building Complex Systems with Generative AI
|
cs.AI cs.SE
|
We introduce Simulation Streams, a programming paradigm designed to
efficiently control and leverage Large Language Models (LLMs) for complex,
dynamic simulations and agentic workflows. Our primary goal is to create a
minimally interfering framework that harnesses the agentic abilities of LLMs
while addressing their limitations in maintaining consistency, selectively
ignoring/including information, and enforcing strict world rules. Simulation
Streams achieves this through a state-based approach where variables are
modified in sequential steps by "operators," producing output in a recurring
format and adhering to consistent rules for state variables. This approach
focuses the LLMs on defined tasks, while aiming to keep the context stream
"in-distribution". The approach incorporates an Entity-Component-System (ECS)
architecture to write programs in a more intuitive manner, facilitating reuse
of workflows across different components and entities. This ECS approach
enhances the modularity of the output stream, allowing for complex,
multi-entity simulations while maintaining format consistency, information
control, and rule enforcement. It is supported by a custom editor that aids in
creating, running, and analyzing simulations. We demonstrate the versatility of
Simulation Streams through an illustrative example of an ongoing market economy
simulation, a social simulation of three characters playing a game of catch in
a park, and a suite of classical reinforcement learning benchmark tasks. These
examples showcase Simulation Streams' ability to handle complex, evolving
scenarios over hundreds to thousands of iterations, facilitate comparisons between
different agent workflows and models, and maintain consistency and continued
interesting developments in LLM-driven simulations.
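The state-based operator idea, state variables modified in sequential steps by operators that each emit output in a fixed, recurring format, can be sketched as follows. The operators and state fields are illustrative assumptions, not the framework's actual API:

```python
# Operators mutate a shared state dict in a fixed order, each emitting a
# recurring-format line (the kind of stream that would feed the LLM context).
def tick_clock(state):
    state["time"] += 1
    return f"[t={state['time']}] clock advanced"

def move_agent(state):
    state["position"] += state["velocity"]
    return f"[t={state['time']}] agent at {state['position']}"

def run_stream(state, operators, steps):
    """Apply each operator in order, once per step, collecting output."""
    stream = []
    for _ in range(steps):
        for op in operators:
            stream.append(op(state))
    return stream

state = {"time": 0, "position": 0, "velocity": 2}
stream = run_stream(state, [tick_clock, move_agent], steps=3)
print("\n".join(stream))
```

An ECS layer would go one step further and attach such operators to components shared across many entities; here a single entity keeps the sketch minimal.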
|
2501.18669
|
The Pitfalls of "Security by Obscurity" And What They Mean for
Transparent AI
|
cs.CR cs.AI cs.CY
|
Calls for transparency in AI systems are growing in number and urgency from
diverse stakeholders ranging from regulators to researchers to users (with a
comparative absence of companies developing AI). Notions of transparency for AI
abound, each addressing distinct interests and concerns.
In computer security, transparency is likewise regarded as a key concept. The
security community has for decades pushed back against so-called security by
obscurity -- the idea that hiding how a system works protects it from attack --
against significant pressure from industry and other stakeholders. Over the
decades, in a community process that is imperfect and ongoing, security
researchers and practitioners have gradually built up some norms and practices
around how to balance transparency interests with possible negative side
effects. This paper asks: What insights can the AI community take from the
security community's experience with transparency?
We identify three key themes in the security community's perspective on the
benefits of transparency and their approach to balancing transparency against
countervailing interests. For each, we investigate parallels and insights
relevant to transparency in AI. We then provide a case study discussion on how
transparency has shaped the research subfield of anonymization. Finally,
shifting our focus from similarities to differences, we highlight key
transparency issues where modern AI systems present challenges different from
other kinds of security-critical systems, raising interesting open questions
for the security and AI communities alike.
|
2501.18670
|
High-Accuracy ECG Image Interpretation using Parameter-Efficient LoRA
Fine-Tuning with Multimodal LLaMA 3.2
|
cs.CV cs.AI
|
Electrocardiogram (ECG) interpretation is a cornerstone of cardiac
diagnostics. This paper explores a practical approach to enhance ECG image
interpretation using the multimodal LLaMA 3.2 model. We used a
parameter-efficient fine-tuning strategy, Low-Rank Adaptation (LoRA),
specifically designed to boost the model's ability to understand ECG images and
achieve better outcomes across a wide range of cardiac conditions. Our method
is tailored for ECG analysis and leverages ECGInstruct, a large-scale
instruction dataset with one million samples. This dataset is a rich collection
of synthesized ECG images, generated from raw ECG data from trusted open-source
repositories like MIMIC-IV ECG and PTB-XL. Each ECG image in ECGInstruct comes
with expert-written questions and detailed answers, covering diverse ECG
interpretation scenarios, including complex cardiac conditions like Myocardial
Infarction and Conduction Disturbances. Our fine-tuning approach efficiently
adapts the LLaMA 3.2 model (built upon LLaMA 3) by integrating low-rank
adaptation techniques, focusing on efficiency by updating only a small set of
parameters, specifically ignoring the `lm_head` and `embed_tokens` layers. This
paper details the model setup, our efficient fine-tuning method, and
implementation specifics. We provide a thorough evaluation through extensive
experiments, demonstrating the effectiveness of our method across various ECG
interpretation tasks. The results convincingly show that our
parameter-efficient LoRA fine-tuning achieves excellent performance in ECG
image interpretation, significantly outperforming baseline models and reaching
accuracy comparable to or exceeding traditional CNN-based methods in
identifying a wide range of cardiac abnormalities, including over 70 conditions
from the PTB-XL dataset.
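The LoRA mechanism used above, freezing the base weight W and training only a low-rank update B @ A, can be sketched in a few lines. Dimensions and rank are illustrative, not the LLaMA 3.2 configuration:

```python
import numpy as np

class LoRALinear:
    """Adapted layer computing x @ (W + B @ A).T, with only A and B
    trainable: r*(d_in + d_out) parameters instead of d_in*d_out."""
    def __init__(self, W, rank, seed=0):
        rng = np.random.default_rng(seed)
        d_out, d_in = W.shape
        self.W = W                                          # frozen base weight
        self.A = rng.normal(scale=0.01, size=(rank, d_in))  # trainable
        self.B = np.zeros((d_out, rank))                    # trainable, zero-init

    def forward(self, x):
        # Base path plus low-rank adaptation path; B = 0 at init, so the
        # adapted layer starts identical to the frozen model.
        return x @ self.W.T + (x @ self.A.T) @ self.B.T

    def trainable_params(self):
        return self.A.size + self.B.size

d_in, d_out, rank = 128, 64, 4
layer = LoRALinear(np.ones((d_out, d_in)), rank)
x = np.ones((2, d_in))
y = layer.forward(x)
print(layer.trainable_params(), "trainable vs", d_in * d_out, "frozen")
```

Excluding modules such as `lm_head` and `embed_tokens`, as the abstract describes, simply means those layers get no (A, B) pair at all.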
|
2501.18671
|
Machine Learning Strategies for Parkinson Tremor Classification Using
Wearable Sensor Data
|
cs.LG eess.SP
|
Parkinson's disease (PD) is a neurological disorder requiring early and
accurate diagnosis for effective management. Machine learning (ML) has emerged
as a powerful tool to enhance PD classification and diagnostic accuracy,
particularly by leveraging wearable sensor data. This survey comprehensively
reviews current ML methodologies used in classifying Parkinsonian tremors,
evaluating various tremor data acquisition methodologies, signal preprocessing
techniques, and feature selection methods across time and frequency domains,
highlighting practical approaches for tremor classification. The survey
explores ML models utilized in existing studies, ranging from traditional
methods such as Support Vector Machines (SVM) and Random Forests to advanced
deep learning architectures like Convolutional Neural Networks (CNN) and Long
Short-Term Memory networks (LSTM). We assess the efficacy of these models in
classifying tremor patterns associated with PD, considering their strengths and
limitations. Furthermore, we discuss challenges and discrepancies in current
research and broader challenges in applying ML to PD diagnosis using wearable
sensor data. We also outline future research directions to advance ML
applications in PD diagnostics, providing insights for researchers and
practitioners.
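The time- and frequency-domain features the survey discusses can be sketched for wearable accelerometer data as RMS amplitude plus the dominant FFT frequency. The sampling rate and the synthetic 5 Hz signal are illustrative (Parkinsonian rest tremor is typically reported around 4-6 Hz):

```python
import numpy as np

def tremor_features(signal, fs):
    """Two classic features: time-domain RMS and the dominant frequency."""
    rms = np.sqrt(np.mean(signal ** 2))              # time-domain feature
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    dominant = freqs[np.argmax(spectrum[1:]) + 1]    # skip the DC bin
    return rms, dominant

fs = 100.0                            # 100 Hz accelerometer
t = np.arange(0, 10, 1.0 / fs)        # 10 s window
signal = np.sin(2 * np.pi * 5.0 * t)  # synthetic 5 Hz tremor
rms, dominant = tremor_features(signal, fs)
print(f"RMS={rms:.3f}, dominant frequency={dominant:.1f} Hz")
```

Feature vectors like this, computed per window, are what the surveyed SVM/Random Forest pipelines consume; CNN/LSTM models instead ingest the raw windows.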
|
2501.18672
|
Drag Your Gaussian: Effective Drag-Based Editing with Score Distillation
for 3D Gaussian Splatting
|
cs.GR cs.CV
|
Recent advancements in 3D scene editing have been propelled by the rapid
development of generative models. Existing methods typically utilize generative
models to perform text-guided editing on 3D representations, such as 3D
Gaussian Splatting (3DGS). However, these methods are often limited to texture
modifications and fail when addressing geometric changes, such as editing a
character's head to turn around. Moreover, such methods lack accurate control
over the spatial position of editing results, as language struggles to
precisely describe the extent of edits. To overcome these limitations, we
introduce DYG, an effective 3D drag-based editing method for 3D Gaussian
Splatting. It enables users to conveniently specify the desired editing region
and the desired dragging direction through the input of 3D masks and pairs of
control points, thereby enabling precise control over the extent of editing.
DYG integrates the strengths of the implicit triplane representation to
establish the geometric scaffold of the editing results, effectively overcoming
suboptimal editing outcomes caused by the sparsity of 3DGS in the desired
editing regions. Additionally, we incorporate a drag-based Latent Diffusion
Model into our method through the proposed Drag-SDS loss function, enabling
flexible, multi-view consistent, and fine-grained editing. Extensive
experiments demonstrate that DYG conducts effective drag-based editing guided
by control point prompts, surpassing other baselines in terms of editing effect
and quality, both qualitatively and quantitatively. Visit our project page at
https://quyans.github.io/Drag-Your-Gaussian.
|
2501.18674
|
Unpaired Translation of Point Clouds for Modeling Detector Response
|
cs.CV cs.LG nucl-ex
|
Modeling detector response is a key challenge in time projection chambers. We
cast this problem as an unpaired point cloud translation task, between data
collected from simulations and from experimental runs. Effective translation
can assist with both noise rejection and the construction of high-fidelity
simulators. Building on recent work in diffusion probabilistic models, we
present a novel framework for performing this mapping. We demonstrate the
success of our approach in both synthetic domains and in data sourced from the
Active-Target Time Projection Chamber.
|
2501.18691
|
Regularized second-order optimization of tensor-network Born machines
|
cs.LG quant-ph
|
Tensor-network Born machines (TNBMs) are quantum-inspired generative models
for learning data distributions. Using tensor-network contraction and
optimization techniques, the model learns an efficient representation of the
target distribution, capable of capturing complex correlations with a compact
parameterization. Despite their promise, the optimization of TNBMs presents
several challenges. A key bottleneck of TNBMs is the logarithmic nature of the
loss function that is commonly used for this problem. The single-tensor
logarithmic optimization problem cannot be solved analytically, necessitating
an iterative approach that slows down convergence and increases the risk of
getting trapped in one of many non-optimal local minima. In this paper, we
present an improved second-order optimization technique for TNBM training,
which significantly enhances convergence rates and the quality of the optimized
model. Our method employs a modified Newton's method on the manifold of
normalized states, incorporating regularization of the loss landscape to
mitigate local minima issues. We demonstrate the effectiveness of our approach
by training a one-dimensional matrix product state (MPS) on both discrete and
continuous datasets, showcasing its advantages in terms of stability,
efficiency, and generalization.
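The numerical core, a Newton step with the loss landscape regularized to damp steps in flat or indefinite directions, can be sketched on a toy quadratic. The Tikhonov-style damping and test function are illustrative; the paper applies this on the manifold of normalized MPS tensors:

```python
import numpy as np

def regularized_newton_step(grad, hess, lam=1e-2):
    """Solve (H + lam*I) dx = -g instead of H dx = -g."""
    n = grad.shape[0]
    return np.linalg.solve(hess + lam * np.eye(n), -grad)

def loss(x):   # simple ill-conditioned quadratic stand-in
    return 0.5 * (x[0] ** 2 + 100.0 * x[1] ** 2)

def grad(x):
    return np.array([x[0], 100.0 * x[1]])

def hess(x):
    return np.diag([1.0, 100.0])

x = np.array([1.0, 1.0])
for _ in range(20):
    x = x + regularized_newton_step(grad(x), hess(x), lam=1e-2)
print(loss(x))  # converges to (near) the minimum at the origin
```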
|
2501.18698
|
Human Re-ID Meets LVLMs: What can we expect?
|
cs.CV
|
Large vision-language models (LVLMs) have been regarded as a breakthrough
advance in an astonishing variety of tasks, from content generation to virtual
assistants and multimodal search or retrieval. However, for many of these
applications, the performance of these methods has been widely criticized,
particularly when compared with state-of-the-art methods and technologies in
each specific domain. In this work, we compare the performance of the leading
large vision-language models in the human re-identification task, using as
baseline the performance attained by state-of-the-art AI models specifically
designed for this problem. We compare the results obtained by ChatGPT-4o,
Gemini-2.0-Flash, Claude 3.5 Sonnet, and Qwen-VL-Max to a baseline ReID
PersonViT model, using the well-known Market1501 dataset. Our evaluation
pipeline includes the dataset curation, prompt engineering, and metric
selection to assess the models' performance. Results are analyzed from many
different perspectives: similarity scores, classification accuracy, and
classification metrics, including precision, recall, F1 score, and area under
curve (AUC). Our results confirm the strengths of LVLMs, but also their severe
limitations, which often lead to catastrophic answers and should be the focus
of further research. As a concluding remark, we speculate about further
research that could fuse traditional methods and LVLMs to combine the
strengths of both families of techniques and achieve solid improvements in
performance.
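The metrics listed above can be computed from similarity scores as follows; precision/recall/F1 from thresholded decisions, and AUC via the rank-statistic (Mann-Whitney) formulation. The tiny score lists are illustrative, not Market1501 results:

```python
# Evaluation metrics for same-identity decisions from similarity scores.
def precision_recall_f1(y_true, y_pred):
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

def auc(y_true, scores):
    """P(random positive scores higher than random negative), ties = 0.5."""
    pos = [s for t, s in zip(y_true, scores) if t == 1]
    neg = [s for t, s in zip(y_true, scores) if t == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

y_true = [1, 1, 1, 0, 0, 0]
scores = [0.9, 0.8, 0.4, 0.7, 0.3, 0.2]  # same-identity similarity scores
y_pred = [1 if s >= 0.5 else 0 for s in scores]
print(precision_recall_f1(y_true, y_pred), auc(y_true, scores))
```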
|
2501.18699
|
STAN: Smooth Transition Autoregressive Networks
|
cs.LG
|
Traditional Smooth Transition Autoregressive (STAR) models offer an effective
way to model nonlinear dynamics in time series through smooth regime changes
based on specific transition variables. In this paper, we propose a novel
approach by drawing an
analogy between STAR models and a multilayer neural network architecture. Our
proposed neural network architecture mimics the STAR framework, employing
multiple layers to simulate the smooth transition between regimes and capturing
complex, nonlinear relationships. The network's hidden layers and activation
functions are structured to replicate the gradual switching behavior typical of
STAR models, allowing for a more flexible and scalable approach to
regime-dependent modeling. This research suggests that neural networks can
provide a powerful alternative to STAR models, with the potential to enhance
predictive accuracy in economic and financial forecasting.
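The STAR mechanism the network mimics, two autoregressive regimes blended by a smooth logistic transition of a transition variable, can be sketched directly. Coefficients, the smoothness parameter gamma, and the threshold c are illustrative, not fitted values:

```python
import numpy as np

def logistic_transition(s, gamma=5.0, c=0.0):
    """G in (0, 1): gamma controls how sharply regimes switch around c."""
    return 1.0 / (1.0 + np.exp(-gamma * (s - c)))

def star_step(y_prev, phi_low=0.9, phi_high=-0.5):
    # Smoothly mixed autoregression: weight (1-G) on regime 1, G on regime 2,
    # with the lagged value itself as the transition variable.
    G = logistic_transition(y_prev)
    return (1.0 - G) * phi_low * y_prev + G * phi_high * y_prev

series = [2.0]
for _ in range(10):
    series.append(star_step(series[-1]))
print(series[:4])
```

In the neural analogy, a hidden unit with a sigmoid activation plays the role of G, and separate linear branches play the two regimes; stacking layers lets the network represent more regimes and richer transition variables.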
|
2501.18707
|
Hierarchical Multi-field Representations for Two-Stage E-commerce
Retrieval
|
cs.IR
|
Dense retrieval methods typically target unstructured text data represented
as flat strings. However, e-commerce catalogs often include structured
information across multiple fields, such as brand, title, and description,
which contain information of significant potential for retrieval systems. We present
Cascading Hierarchical Attention Retrieval Model (CHARM), a novel framework
designed to encode structured product data into hierarchical field-level
representations with progressively finer detail. Utilizing a novel
block-triangular attention mechanism, our method captures the interdependencies
between product fields in a specified hierarchy, yielding field-level
representations and aggregated vectors suitable for fast and efficient
retrieval. Combining both representations enables a two-stage retrieval
pipeline, in which the aggregated vectors support initial candidate selection,
while more expressive field-level representations facilitate precise
fine-tuning for downstream ranking. Experiments on publicly available
large-scale e-commerce datasets demonstrate that CHARM matches or outperforms
state-of-the-art baselines. Our analysis highlights the framework's ability to
align different queries with appropriate product fields, enhancing retrieval
accuracy and explainability.
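A block-triangular attention mask in the spirit described above lets tokens of field i attend to fields 1..i in the hierarchy (e.g. brand, then title, then description), so later fields see earlier ones but not vice versa. Field lengths and ordering are illustrative assumptions:

```python
import numpy as np

def block_triangular_mask(field_lengths):
    """mask[q, k] == True where query token q may attend to key token k."""
    n = sum(field_lengths)
    mask = np.zeros((n, n), dtype=bool)
    starts = np.cumsum([0] + list(field_lengths))
    for i in range(len(field_lengths)):      # query field
        for j in range(i + 1):               # key fields up the hierarchy
            mask[starts[i]:starts[i + 1], starts[j]:starts[j + 1]] = True
    return mask

# brand: 2 tokens, title: 3 tokens, description: 4 tokens
mask = block_triangular_mask([2, 3, 4])
print(mask.astype(int))
```

Field-level representations read off early blocks stay independent of later fields, which is what makes both coarse aggregated vectors and finer per-field vectors available from one pass.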
|
2501.18708
|
Combining physics-based and data-driven models: advancing the frontiers
of research with Scientific Machine Learning
|
math.NA cs.LG cs.NA physics.comp-ph
|
Scientific Machine Learning (SciML) is a recently emerged research field
which combines physics-based and data-driven models for the numerical
approximation of differential problems. Physics-based models rely on the
physical understanding of the problem at hand, subsequent mathematical
formulation, and numerical approximation. Data-driven models instead aim to
extract relations between input and output data without invoking any causality
principle underlying the available data distribution. In recent years,
data-driven models have been rapidly developed and popularized. Such a
diffusion has been triggered by a huge availability of data (the so-called big
data), an increasingly cheap computing power, and the development of powerful
machine learning algorithms. SciML leverages the physical awareness of
physics-based models and, at the same time, the efficiency of data-driven
algorithms. With SciML, we can inject physics and mathematical knowledge into
machine learning algorithms. Yet, we can rely on data-driven algorithms'
capability to discover complex and non-linear patterns from data and improve
the descriptive capacity of physics-based models. After recalling the
mathematical foundations of digital modelling and machine learning algorithms,
and presenting the most popular machine learning architectures, we discuss the
great potential of a broad variety of SciML strategies in solving complex
problems governed by partial differential equations. Finally, we illustrate the
successful application of SciML to the simulation of the human cardiac
function, a field of significant socio-economic importance that poses numerous
challenges on both the mathematical and computational fronts. The corresponding
mathematical model is a complex system of non-linear ordinary and partial
differential equations describing the electromechanics, valve dynamics, blood
circulation, perfusion in the coronary tree, and torso potential. Despite the
robustness and accuracy of physics-based models, certain aspects, such as
unveiling constitutive laws for cardiac cells and myocardial material
properties, as well as devising efficient reduced-order models to tame the
extraordinary computational complexity, have been successfully tackled by
leveraging data-driven models.
|
2501.18712
|
Invisible Traces: Using Hybrid Fingerprinting to identify underlying
LLMs in GenAI Apps
|
cs.LG cs.CR
|
Fingerprinting refers to the process of identifying the underlying Machine
Learning (ML) models of AI systems, such as Large Language Models (LLMs), by
analyzing their unique characteristics or patterns, much like a human
fingerprint. The fingerprinting of LLMs has become
essential for ensuring the security and transparency of AI-integrated
applications. While existing methods primarily rely on access to direct
interactions with the application to infer model identity, they often fail in
real-world scenarios involving multi-agent systems, frequent model updates, and
restricted access to model internals. In this paper, we introduce a novel
fingerprinting framework designed to address these challenges by integrating
static and dynamic fingerprinting techniques. Our approach identifies
architectural features and behavioral traits, enabling accurate and robust
fingerprinting of LLMs in dynamic environments. We also highlight new threat
scenarios where traditional fingerprinting methods are ineffective, bridging
the gap between theoretical techniques and practical application. To validate
our framework, we present an extensive evaluation setup that simulates
real-world conditions and demonstrate the effectiveness of our methods in
identifying and monitoring LLMs in Gen-AI applications. Our results highlight
the framework's adaptability to diverse and evolving deployment contexts.
|
2501.18715
|
chebgreen: Learning and Interpolating Continuous Empirical Green's
Functions from Data
|
cs.LG cs.NA math.NA
|
In this work, we present a mesh-independent, data-driven library, chebgreen,
to mathematically model one-dimensional systems, possessing an associated
control parameter, and whose governing partial differential equation is
unknown. The proposed method learns an Empirical Green's Function for the
associated, but hidden, boundary value problem, in the form of a Rational
Neural Network from which we subsequently construct a bivariate representation
in a Chebyshev basis. We uncover the Green's function, at an unseen control
parameter value, by interpolating the left and right singular functions within
a suitable library, expressed as points on a manifold of Quasimatrices, while
the associated singular values are interpolated with Lagrange polynomials.
|
2501.18716
|
Full-Head Segmentation of MRI with Abnormal Brain Anatomy: Model and
Data Release
|
cs.CV cs.LG eess.IV q-bio.NC
|
The goal of this work was to develop a deep network for whole-head
segmentation, including clinical MRIs with abnormal anatomy, and compile the
first public benchmark dataset for this purpose. We collected 91 MRIs with
volumetric segmentation labels for a diverse set of human subjects (4 normal,
32 traumatic brain injuries, and 57 strokes). These clinical cases are
characterized by extended cerebrospinal fluid (CSF) in regions normally
containing the brain. Training labels were generated by manually correcting
initial automated segmentations for skin/scalp, skull, CSF, gray matter, white
matter, air cavity, and extracephalic air. We developed a MultiAxial network
consisting of three 2D U-Net models that operate independently in sagittal,
axial, and coronal planes and are then combined to produce a single 3D
segmentation. The MultiAxial network achieved test-set Dice scores of 0.88
(median plus-minus 0.04). For brain tissue, it significantly outperforms
existing brain segmentation methods (MultiAxial: 0.898 plus-minus 0.041,
SynthSeg: 0.758 plus-minus 0.054, BrainChop: 0.757 plus-minus 0.125). The
MultiAxial network gains in robustness by avoiding the need for coregistration
with an atlas. It performed well in regions with abnormal anatomy and on images
that have been de-identified. It enables more robust current flow modeling when
incorporated into ROAST, a widely-used modeling toolbox for transcranial
electric stimulation. We are releasing a state-of-the-art model for whole-head
MRI segmentation, along with a dataset of 61 clinical MRIs and training labels,
including non-brain structures. Together, the model and data may serve as a
benchmark for future efforts.
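As a minimal sketch of how three per-plane predictions can be merged into one 3D segmentation: the function below averages per-plane class probabilities and takes a per-voxel argmax. The function name and the averaging fusion rule are assumptions for illustration; the paper's exact combination step may differ.

```python
import numpy as np

def fuse_multiaxial(prob_sag, prob_ax, prob_cor):
    """Fuse three per-plane class-probability volumes of shape
    (C, X, Y, Z) into a single 3D label map: average the predictions,
    then take the per-voxel argmax over the class axis."""
    mean_prob = (prob_sag + prob_ax + prob_cor) / 3.0
    return mean_prob.argmax(axis=0)

# Toy example: 2 classes over a 1x1x2 volume.
sag = np.array([[[[0.9, 0.2]]], [[[0.1, 0.8]]]])
ax  = np.array([[[[0.8, 0.4]]], [[[0.2, 0.6]]]])
cor = np.array([[[[0.7, 0.3]]], [[[0.3, 0.7]]]])
labels = fuse_multiaxial(sag, ax, cor)
print(labels)
```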
|
2501.18718
|
Distributed Offloading in Multi-Access Edge Computing Systems: A
Mean-Field Perspective
|
cs.IT cs.MA cs.SY eess.SY math.IT math.OC
|
Multi-access edge computing (MEC) technology is a promising solution to
assist power-constrained IoT devices by providing additional computing
resources for time-sensitive tasks. In this paper, we consider the problem of
optimal task offloading in MEC systems with due consideration of the timeliness
and scalability issues under two scenarios of equitable and priority access to
the edge server (ES). In the first scenario, we consider a MEC system
consisting of $N$ devices assisted by one ES, where the devices can split task
execution between a local processor and the ES, with equitable access to the
ES. In the second scenario, we consider a MEC system consisting of one primary
user, $N$ secondary users and one ES. The primary user has priority access to
the ES while the secondary users have equitable access to the ES amongst
themselves. In both scenarios, due to the power consumption associated with
utilizing the local resource and task offloading, the devices must optimize
their actions. Additionally, since the ES is a shared resource, other users'
offloading activity serves to increase latency incurred by each user. We thus
model both scenarios using a non-cooperative game framework. However, the
presence of a large number of users makes it nearly impossible to compute the
equilibrium offloading policies for each user, which would require a
significant information exchange overhead between users. Thus, to alleviate
such scalability issues, we invoke the paradigm of mean-field games to compute
approximate Nash equilibrium policies for each user using their local
information, and further study the trade-offs between increasing information
freshness and reducing power consumption for each user. Using numerical
evaluations, we show that our approach can recover the offloading trends
displayed under centralized solutions, and provide additional insights into the
results obtained.
|
2501.18723
|
Scaling Policy Gradient Quality-Diversity with Massive Parallelization
via Behavioral Variations
|
cs.NE cs.AI cs.LG cs.RO
|
Quality-Diversity optimization comprises a family of evolutionary algorithms
aimed at generating a collection of diverse and high-performing solutions.
MAP-Elites (ME), a notable example, is used effectively in fields like
evolutionary robotics. However, the reliance of ME on random mutations from
Genetic Algorithms limits its ability to evolve high-dimensional solutions.
Methods proposed to overcome this include using gradient-based operators like
policy gradients or natural evolution strategies. While successful at scaling
ME for neuroevolution, these methods often suffer from slow training speeds, or
difficulties in scaling with massive parallelization due to high computational
demands or reliance on centralized actor-critic training. In this work, we
introduce a fast, sample-efficient ME based algorithm capable of scaling up
with massive parallelization, significantly reducing runtimes without
compromising performance. Our method, ASCII-ME, unlike existing policy gradient
quality-diversity methods, does not rely on centralized actor-critic training.
It performs behavioral variations based on time step performance metrics and
maps these variations to solutions using policy gradients. Our experiments show
that ASCII-ME can generate a diverse collection of high-performing deep neural
network policies in less than 250 seconds on a single GPU. Additionally, it
operates on average, five times faster than state-of-the-art algorithms while
still maintaining competitive sample efficiency.
|
2501.18724
|
Zero-shot Large Language Models for Long Clinical Text Summarization
with Temporal Reasoning
|
cs.CL
|
Recent advancements in large language models (LLMs) have shown potential for
transforming data processing in healthcare, particularly in understanding
complex clinical narratives. This study evaluates the efficacy of zero-shot
LLMs in summarizing long clinical texts that require temporal reasoning, a
critical aspect for comprehensively capturing patient histories and treatment
trajectories. We applied a series of advanced zero-shot LLMs to extensive
clinical documents, assessing their ability to integrate and accurately reflect
temporal dynamics without prior task-specific training. While the models
efficiently identified key temporal events, they struggled with chronological
coherence over prolonged narratives. The evaluation, combining quantitative and
qualitative methods, highlights the strengths and limitations of zero-shot LLMs
in clinical text summarization. The results suggest that while promising,
zero-shot LLMs require further refinement to effectively support clinical
decision-making processes, underscoring the need for enhanced model training
approaches that better capture the nuances of temporal information in long
context medical documents.
|
2501.18726
|
Strong and Controllable 3D Motion Generation
|
cs.CV
|
Human motion generation is a significant pursuit in generative computer
vision with widespread applications in film-making, video games, AR/VR, and
human-robot interaction. Current methods mainly utilize either diffusion-based
generative models or autoregressive models for text-to-motion generation.
However, they face two significant challenges: (1) The generation process is
time-consuming, posing a major obstacle for real-time applications such as
gaming, robot manipulation, and other online settings. (2) These methods
typically learn a relative motion representation guided by text, making it
difficult to generate motion sequences with precise joint-level control. These
challenges significantly hinder progress and limit the real-world application
of human motion generation techniques. To address this gap, we propose a simple
yet effective architecture consisting of two key components. Firstly, we aim to
improve hardware efficiency and computational complexity in transformer-based
diffusion models for human motion generation. By customizing flash linear
attention, we can optimize these models specifically for generating human
motion efficiently. Furthermore, we will customize the consistency model in the
motion latent space to further accelerate motion generation. Secondly, we
introduce Motion ControlNet, which enables more precise joint-level control of
human motion compared to previous text-to-motion generation methods. These
contributions represent a significant advancement for text-to-motion
generation, bringing it closer to real-world applications.
|
2501.18727
|
Exploring Audio Editing Features as User-Centric Privacy Defenses
Against Large Language Model(LLM) Based Emotion Inference Attacks
|
cs.CR cs.AI cs.LG cs.SD eess.AS
|
The rapid proliferation of speech-enabled technologies, including virtual
assistants, video conferencing platforms, and wearable devices, has raised
significant privacy concerns, particularly regarding the inference of sensitive
emotional information from audio data. Existing privacy-preserving methods
often compromise usability and security, limiting their adoption in practical
scenarios. This paper introduces a novel, user-centric approach that leverages
familiar audio editing techniques, specifically pitch and tempo manipulation,
to protect emotional privacy without sacrificing usability. By analyzing
popular audio editing applications on Android and iOS platforms, we identified
these features as both widely available and usable. We rigorously evaluated
their effectiveness against a threat model, considering adversarial attacks
from diverse sources, including Deep Neural Networks (DNNs), Large Language
Models (LLMs), and reversibility testing. Our experiments, conducted on
three distinct datasets, demonstrate that pitch and tempo manipulation
effectively obfuscates emotional data. Additionally, we explore the design
principles for lightweight, on-device implementation to ensure broad
applicability across various devices and platforms.
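A minimal sketch of the tempo manipulation described above, using naive linear-interpolation resampling on a raw sample list. Real audio editors use phase-vocoder-style algorithms that preserve pitch; this only illustrates the basic transform and its effect on signal length.

```python
def change_tempo(samples, rate):
    """Naively time-stretch a mono signal by `rate` (>1 speeds up)
    using linear interpolation between neighboring samples."""
    n_out = int(len(samples) / rate)
    out = []
    for i in range(n_out):
        pos = i * rate
        lo = int(pos)
        hi = min(lo + 1, len(samples) - 1)
        frac = pos - lo
        out.append((1 - frac) * samples[lo] + frac * samples[hi])
    return out

signal = [0.0, 1.0, 0.0, -1.0, 0.0, 1.0, 0.0, -1.0]
faster = change_tempo(signal, 2.0)   # half as many samples
print(len(signal), len(faster))
```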
|
2501.18729
|
Motion Diffusion Autoencoders: Enabling Attribute Manipulation in Human
Motion Demonstrated on Karate Techniques
|
cs.CV cs.LG
|
Attribute manipulation deals with the problem of changing individual
attributes of a data point or a time series, while leaving all other aspects
unaffected. This work focuses on the domain of human motion, more precisely
karate movement patterns. To the best of our knowledge, it presents the first
success at manipulating attributes of human motion data. One of the key
requirements for achieving attribute manipulation on human motion is a suitable
pose representation. Therefore, we design a novel rotation-based pose
representation that enables the disentanglement of the human skeleton and the
motion trajectory, while still allowing an accurate reconstruction of the
original anatomy. The core idea of the manipulation approach is to use a
transformer encoder for discovering high-level semantics, and a diffusion
probabilistic model for modeling the remaining stochastic variations. We show
that the embedding space obtained from the transformer encoder is semantically
meaningful and linear. This enables the manipulation of high-level attributes,
by discovering their linear direction of change in the semantic embedding space
and moving the embedding along said direction. The code and data are available
at https://github.com/anthony-mendil/MoDiffAE.
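The linear latent-space editing described above can be sketched as follows, with invented embeddings and an attribute direction taken as a difference of class-mean embeddings (a common heuristic for discovering a linear direction of change, not necessarily the paper's exact procedure):

```python
import numpy as np

def manipulate(z, direction, alpha):
    """Shift a semantic embedding z along a normalized attribute
    direction by strength alpha (linear latent-space editing)."""
    d = direction / np.linalg.norm(direction)
    return z + alpha * d

# Direction estimated as the difference of class-mean embeddings
# (e.g. mean of high-skill minus mean of low-skill); toy numbers here.
mean_high = np.array([1.0, 2.0, 0.0])
mean_low = np.array([0.0, 0.0, 0.0])
direction = mean_high - mean_low

z = np.array([0.5, 0.5, 0.5])
z_edit = manipulate(z, direction, alpha=1.0)
print(z_edit)
```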
|
2501.18731
|
Evaluating Spoken Language as a Biomarker for Automated Screening of
Cognitive Impairment
|
cs.LG cs.CL
|
Timely and accurate assessment of cognitive impairment is a major unmet need
in populations at risk. Alterations in speech and language can be early
predictors of Alzheimer's disease and related dementias (ADRD) before clinical
signs of neurodegeneration. Voice biomarkers offer a scalable and non-invasive
solution for automated screening. However, the clinical applicability of
machine learning (ML) remains limited by challenges in generalisability,
interpretability, and access to patient data to train clinically applicable
predictive models. Using DementiaBank recordings (N=291, 64% female), we
evaluated ML techniques for ADRD screening and severity prediction from spoken
language. We validated model generalisability with pilot data collected
in-residence from older adults (N=22, 59% female). Risk stratification and
linguistic feature importance analysis enhanced the interpretability and
clinical utility of predictions. For ADRD classification, a Random Forest
applied to lexical features achieved a mean sensitivity of 69.4% (95%
confidence interval (CI) = 66.4-72.5) and specificity of 83.3% (78.0-88.7). On
real-world pilot data, this model achieved a mean sensitivity of 70.0%
(58.0-82.0) and specificity of 52.5% (39.3-65.7). For severity prediction using
Mini-Mental State Examination (MMSE) scores, a Random Forest Regressor achieved
a mean absolute MMSE error of 3.7 (3.7-3.8), with comparable performance of 3.3
(3.1-3.5) on pilot data. Linguistic features associated with higher ADRD risk
included increased use of pronouns and adverbs, greater disfluency, reduced
analytical thinking, lower lexical diversity and fewer words reflecting a
psychological state of completion. Our interpretable predictive modelling
offers a novel approach for in-home integration with conversational AI to
monitor cognitive health and triage higher-risk individuals, enabling earlier
detection and intervention.
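The sensitivity and specificity figures reported above follow the standard confusion-matrix definitions, which a short sketch makes concrete (the labels below are toy values, not study data):

```python
def sensitivity_specificity(y_true, y_pred):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP),
    for binary labels where 1 marks the positive (at-risk) class."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    return tp / (tp + fn), tn / (tn + fp)

y_true = [1, 1, 1, 0, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 1, 0, 1, 0]
sens, spec = sensitivity_specificity(y_true, y_pred)
print(sens, spec)  # 0.75 0.75
```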
|
2501.18732
|
Optimizing Bidding Curves for Renewable Energy in Two-Settlement
Electricity Markets
|
eess.SY cs.SY math.OC
|
Coordination of day-ahead and real-time electricity markets is imperative for
cost-effective electricity supply and also to provide efficient incentives for
the energy transition. Although stochastic market designs feature the
least-cost coordination, they are incompatible with current deterministic
markets. This paper proposes a new approach for compatible coordination in
two-settlement markets based on benchmark bidding curves for variable renewable
energy. These curves are optimized based on a bilevel optimization problem,
anticipating per-scenario responses of deterministic market-clearing problems
and ultimately minimizing the expected cost across day-ahead and real-time
markets. Although the general bilevel model is challenging to solve, we
theoretically prove that a single-segment bidding curve with a zero bidding
price is sufficient to achieve system optimality if the marginal cost of
variable renewable energy is zero, thus addressing the computational challenge.
In practice, variable renewable energy producers can be allowed to bid
multi-segment curves with non-zero prices. We test the bilevel framework for
both single- and multiple-segment bidding curves under the assumption of fixed
bidding prices. We leverage duality theory and McCormick envelopes to derive
the linear programming approximation of the bilevel problem, which scales to
practical systems such as a 1576-bus NYISO system. We benchmark the proposed
coordination and find absolute dominance over the baseline solution, which
assumes that renewables agnostically bid their expected forecasts. We also
demonstrate that our proposed scheme provides a good approximation of the
least-cost, yet unattainable in practice, stochastic market outcome.
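The McCormick envelope mentioned above relaxes a bilinear term w = x*y into two linear under-estimators and two linear over-estimators; a minimal numeric check (with invented bounds and values) is:

```python
def mccormick_bounds(x, y, xl, xu, yl, yu):
    """McCormick envelope of the bilinear term w = x*y for
    x in [xl, xu], y in [yl, yu]: the four linear estimators
    bracket the true product."""
    lower = max(xl * y + x * yl - xl * yl,
                xu * y + x * yu - xu * yu)
    upper = min(xu * y + x * yl - xu * yl,
                xl * y + x * yu - xl * yu)
    return lower, upper

# Invented point inside the unit box.
x, y = 0.4, 0.7
lo, hi = mccormick_bounds(x, y, 0.0, 1.0, 0.0, 1.0)
print(lo, x * y, hi)  # the true product lies inside the envelope
```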
|
2501.18733
|
Integrating LMM Planners and 3D Skill Policies for Generalizable
Manipulation
|
cs.RO cs.AI
|
The recent advancements in visual reasoning capabilities of large multimodal
models (LMMs) and the semantic enrichment of 3D feature fields have expanded
the horizons of robotic capabilities. These developments hold significant
potential for bridging the gap between high-level reasoning from LMMs and
low-level control policies utilizing 3D feature fields. In this work, we
introduce LMM-3DP, a framework that can integrate LMM planners and 3D skill
Policies. Our approach consists of three key perspectives: high-level planning,
low-level control, and effective integration. For high-level planning, LMM-3DP
supports dynamic scene understanding for environment disturbances, a critic
agent with self-feedback, history policy memorization, and reattempts after
failures. For low-level control, LMM-3DP utilizes a semantic-aware 3D feature
field for accurate manipulation. In aligning high-level and low-level control
for robot actions, language embeddings representing the high-level policy are
jointly attended with the 3D feature field in the 3D transformer for seamless
integration. We extensively evaluate our approach across multiple skills and
long-horizon tasks in a real-world kitchen environment. Our results show a
significant 1.45x success rate increase in low-level control and an approximate
1.5x improvement in high-level planning accuracy compared to LLM-based
baselines. Demo videos and an overview of LMM-3DP are available at
https://lmm-3dp-release.github.io.
|
2501.18734
|
STaleX: A Spatiotemporal-Aware Adaptive Auto-scaling Framework for
Microservices
|
cs.SE cs.DC cs.LG cs.SY eess.SY
|
While cloud environments and auto-scaling solutions have been widely applied
to traditional monolithic applications, they face significant limitations when
it comes to microservices-based architectures. Microservices introduce
additional challenges due to their dynamic and spatiotemporal characteristics,
which require more efficient and specialized auto-scaling strategies.
Centralized auto-scaling for the entire microservice application is
insufficient, as each service within a chain has distinct specifications and
performance requirements. Therefore, each service requires its own dedicated
auto-scaler to address its unique scaling needs effectively, while also
considering the dependencies with other services in the chain and the overall
application. This paper presents a combination of control theory, machine
learning, and heuristics to address these challenges. We propose an adaptive
auto-scaling framework, STaleX, for microservices that integrates
spatiotemporal features, enabling real-time resource adjustments to minimize
SLO violations. STaleX employs a set of weighted
Proportional-Integral-Derivative (PID) controllers for each service, where
weights are dynamically adjusted based on a supervisory unit that integrates
spatiotemporal features. This supervisory unit continuously monitors and
adjusts both the weights and the resources allocated to each service. Our
framework accounts for spatial features, including service specifications and
dependencies among services, as well as temporal variations in workload,
ensuring that resource allocation is continuously optimized. Through
experiments on a microservice-based demo application deployed on a Kubernetes
cluster, we demonstrate the effectiveness of our framework in improving
performance and reducing costs compared to traditional scaling methods like
Kubernetes Horizontal Pod Autoscaler (HPA) with a 26.9% reduction in resource
usage.
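A minimal discrete PID controller of the kind STaleX assigns to each service can be sketched as below; the scalar `weight` stands in for the supervisory weighting, and all gains, setpoints, and measurements are invented:

```python
class PID:
    """Minimal discrete PID controller with a scalar output weight,
    roughly standing in for one per-service controller whose weight
    a supervisory unit would adjust."""
    def __init__(self, kp, ki, kd, weight=1.0):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.weight = weight
        self.integral = 0.0
        self.prev_error = None

    def update(self, setpoint, measurement, dt=1.0):
        error = setpoint - measurement
        self.integral += error * dt
        deriv = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.weight * (self.kp * error + self.ki * self.integral
                              + self.kd * deriv)

# One controller per service; e.g. track a target latency SLO of 100 ms.
pid = PID(kp=0.5, ki=0.1, kd=0.05, weight=1.0)
adjustment = pid.update(setpoint=100.0, measurement=140.0)
print(adjustment)  # negative signal -> scale resources up
```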
|
2501.18736
|
Distillation-Driven Diffusion Model for Multi-Scale MRI
Super-Resolution: Make 1.5T MRI Great Again
|
eess.IV cs.CV
|
Magnetic Resonance Imaging (MRI) offers critical insights into
microstructural details, however, the spatial resolution of standard 1.5T
imaging systems is often limited. In contrast, 7T MRI provides significantly
enhanced spatial resolution, enabling finer visualization of anatomical
structures. Despite this, the high cost and limited availability of 7T MRI
hinder its widespread use in clinical settings. To address this challenge, a
novel Super-Resolution (SR) model is proposed to generate 7T-like MRI from
standard 1.5T MRI scans. Our approach leverages a diffusion-based architecture,
incorporating gradient nonlinearity correction and bias field correction data
from 7T imaging as guidance. Moreover, to improve deployability, a progressive
distillation strategy is introduced. Specifically, the student model refines
the 7T SR task in stages, leveraging feature maps from the teacher model's
inference phase as guidance, so that the student progressively approaches 7T
SR performance at a smaller, deployable model size.
Experimental results demonstrate that our baseline teacher model achieves
state-of-the-art SR performance. The student model, while lightweight,
sacrifices minimal performance. Furthermore, the student model is capable of
accepting MRI inputs at varying resolutions without the need for retraining,
significantly further enhancing deployment flexibility. The clinical relevance
of our proposed method is validated using clinical data from Massachusetts
General Hospital. Our code is available at https://github.com/ZWang78/SR.
|
2501.18738
|
Examining the Robustness of Large Language Models across Language
Complexity
|
cs.CL
|
With the advancement of large language models (LLMs), an increasing number of
student models have leveraged LLMs to analyze textual artifacts generated by
students to understand and evaluate their learning. These student models
typically employ pre-trained LLMs to vectorize text inputs into embeddings and
then use the embeddings to train models to detect the presence or absence of a
construct of interest. However, how reliable and robust are these models at
processing language with different levels of complexity? In the context of
learning where students may have different language backgrounds with various
levels of writing skills, it is critical to examine the robustness of such
models to ensure that these models work equally well for text with varying
levels of language complexity. Coincidentally, a few (but limited) research
studies show that the use of language can indeed impact the performance of
LLMs. As such, in the current study, we examined the robustness of several
LLM-based student models that detect student self-regulated learning (SRL) in
math problem-solving. Specifically, we compared how the performance of these
models varies across texts with high and low lexical, syntactic, and semantic
complexity, as measured by three linguistic measures.
|
2501.18739
|
Neural Graph Pattern Machine
|
cs.LG cs.AI cs.SI
|
Graph learning tasks require models to comprehend essential substructure
patterns relevant to downstream tasks, such as triadic closures in social
networks and benzene rings in molecular graphs. Due to the non-Euclidean nature
of graphs, existing graph neural networks (GNNs) rely on message passing to
iteratively aggregate information from local neighborhoods. Despite their
empirical success, message passing struggles to identify fundamental
substructures, such as triangles, limiting its expressiveness. To overcome this
limitation, we propose the Neural Graph Pattern Machine (GPM), a framework
designed to learn directly from graph patterns. GPM efficiently extracts and
encodes substructures while identifying the most relevant ones for downstream
tasks. We also demonstrate that GPM offers superior expressivity and improved
long-range information modeling compared to message passing. Empirical
evaluations on node classification, link prediction, graph classification, and
regression show the superiority of GPM over state-of-the-art baselines. Further
analysis reveals its desirable out-of-distribution robustness, scalability, and
interpretability. We consider GPM to be a step toward going beyond message
passing.
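Triadic closures, one of the substructures message passing struggles to identify, can be counted directly from the edge list; a pure-Python sketch (illustrating the pattern, not the GPM extraction procedure itself):

```python
from itertools import combinations

def count_triangles(edges):
    """Count triadic closures (triangles) in an undirected graph
    given as a list of (u, v) edges."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    count = 0
    for u, v, w in combinations(sorted(adj), 3):
        if v in adj[u] and w in adj[u] and w in adj[v]:
            count += 1
    return count

# A 4-cycle plus one chord contains exactly two triangles.
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
print(count_triangles(edges))  # 2
```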
|
2501.18741
|
Synthetic Data Generation for Augmenting Small Samples
|
cs.LG cs.AI stat.ML
|
Small datasets are common in health research. However, the generalization
performance of machine learning models is suboptimal when the training datasets
are small. To address this, data augmentation is one solution. Augmentation
increases sample size and is seen as a form of regularization that increases
the diversity of small datasets, leading them to perform better on unseen data.
We found that augmentation improves prognostic performance for datasets that:
have fewer observations, with smaller baseline AUC, have higher cardinality
categorical variables, and have more balanced outcome variables. No specific
generative model consistently outperformed the others. We developed a decision
support model that can be used to inform analysts if augmentation would be
useful. For seven small application datasets, augmenting the existing data
results in an increase in AUC between 4.31% (AUC from 0.71 to 0.75) and 43.23%
(AUC from 0.51 to 0.73), with an average 15.55% relative improvement,
demonstrating the nontrivial impact of augmentation on small datasets
(p=0.0078). Augmentation AUC was higher than resampling only AUC (p=0.016). The
diversity of augmented datasets was higher than the diversity of resampled
datasets (p=0.046).
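One simple generative scheme of the kind compared above is SMOTE-style convex interpolation between real rows; the paper evaluates several generative models, and this hedged sketch with invented data only illustrates the idea of augmenting a small sample:

```python
import random

def interpolate_augment(samples, n_new, seed=0):
    """Generate synthetic rows by convex interpolation between random
    pairs of real rows (a SMOTE-like augmentation scheme)."""
    rng = random.Random(seed)
    synthetic = []
    for _ in range(n_new):
        a, b = rng.sample(samples, 2)
        lam = rng.random()
        synthetic.append([lam * x + (1 - lam) * y for x, y in zip(a, b)])
    return synthetic

real = [[1.0, 10.0], [2.0, 20.0], [3.0, 30.0]]
augmented = real + interpolate_augment(real, n_new=5)
print(len(augmented))  # 8 rows: 3 real + 5 synthetic
```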
|
2501.18750
|
Revisiting Projection-based Data Transfer for Cross-Lingual Named Entity
Recognition in Low-Resource Languages
|
cs.CL cs.IR
|
Cross-lingual Named Entity Recognition (NER) leverages knowledge transfer
between languages to identify and classify named entities, making it
particularly useful for low-resource languages. We show that the data-based
cross-lingual transfer method is an effective technique for cross-lingual NER
and can outperform multilingual language models for low-resource languages.
This paper introduces two key enhancements to the annotation projection step in
cross-lingual NER for low-resource languages. First, we explore refining word
alignments using back-translation to improve accuracy. Second, we present a
novel formalized projection approach that matches source entities with
extracted
target candidates. Through extensive experiments on two datasets spanning 57
languages, we demonstrate that our approach surpasses existing projection-based
methods in low-resource settings. These findings highlight the robustness of
projection-based data transfer as an alternative to model-based methods for
cross-lingual named entity recognition in low-resource languages.
|
2501.18753
|
INT: Instance-Specific Negative Mining for Task-Generic Promptable
Segmentation
|
cs.CV
|
Task-generic promptable image segmentation aims to achieve segmentation of
diverse samples under a single task description by utilizing only one
task-generic prompt. Current methods leverage the generalization capabilities
of Vision-Language Models (VLMs) to infer instance-specific prompts from these
task-generic prompts in order to guide the segmentation process. However, when
VLMs struggle to generalise to certain image instances, the predicted
instance-specific prompts become unreliable. To solve this problem, we introduce
\textbf{I}nstance-specific \textbf{N}egative Mining for \textbf{T}ask-Generic
Promptable Segmentation (\textbf{INT}). The key idea of INT is to adaptively
reduce the influence of irrelevant (negative) prior knowledge while increasing
the use of the most plausible prior knowledge, selected by negative mining with
higher contrast, in order to optimise the generation of instance-specific
prompts. Specifically, INT consists of two components: (1) instance-specific
prompt generation, which progressively filters out incorrect information in
prompt generation; (2) semantic mask generation, which ensures each image
instance segmentation matches correctly the semantics of the instance-specific
prompts. INT is validated on six datasets, including camouflaged objects and
medical images, demonstrating its effectiveness, robustness and scalability.
|
2501.18756
|
A Unified Framework for Entropy Search and Expected Improvement in
Bayesian Optimization
|
stat.ML cs.LG math.OC
|
Bayesian optimization is a widely used method for optimizing expensive
black-box functions, with Expected Improvement being one of the most commonly
used acquisition functions. In contrast, information-theoretic acquisition
functions aim to reduce uncertainty about the function's optimum and are often
considered fundamentally distinct from EI. In this work, we challenge this
prevailing perspective by introducing a unified theoretical framework,
Variational Entropy Search, which reveals that EI and information-theoretic
acquisition functions are more closely related than previously recognized. We
demonstrate that EI can be interpreted as a variational inference approximation
of the popular information-theoretic acquisition function, named Max-value
Entropy Search. Building on this insight, we propose VES-Gamma, a novel
acquisition function that balances the strengths of EI and MES. Extensive
empirical evaluations across both low- and high-dimensional synthetic and
real-world benchmarks demonstrate that VES-Gamma is competitive with
state-of-the-art acquisition functions and in many cases outperforms EI and
MES.
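The closed-form Expected Improvement referenced above (for maximization under a Gaussian posterior) can be written down directly; this is the standard textbook EI, not the proposed VES-Gamma acquisition function:

```python
import math

def expected_improvement(mu, sigma, f_best):
    """EI(x) = (mu - f_best) * Phi(z) + sigma * phi(z),
    with z = (mu - f_best) / sigma, for a Gaussian posterior
    with mean mu and standard deviation sigma at x."""
    if sigma <= 0.0:
        return max(mu - f_best, 0.0)
    z = (mu - f_best) / sigma
    Phi = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))   # standard normal CDF
    phi = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)  # pdf
    return (mu - f_best) * Phi + sigma * phi

print(expected_improvement(mu=1.2, sigma=0.5, f_best=1.0))
```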
|
2501.18758
|
A New Statistical Approach to the Performance Analysis of Vision-based
Localization
|
cs.CV cs.IT eess.IV math.IT math.ST stat.AP stat.TH
|
Many modern wireless devices with accurate positioning needs also have access
to vision sensors, such as a camera, radar, and Light Detection and Ranging
(LiDAR). In scenarios where wireless-based positioning is either inaccurate or
unavailable, using information from vision sensors becomes highly desirable for
determining the precise location of the wireless device. Specifically, vision
data can be used to estimate distances between the target (where the sensors
are mounted) and nearby landmarks. However, a significant challenge in
positioning using these measurements is the inability to uniquely identify
which specific landmark is visible in the data. For instance, when the target
is located close to a lamppost, it becomes challenging to precisely identify
the specific lamppost (among several in the region) that is near the target.
This work proposes a new framework for target localization using range
measurements to multiple proximate landmarks. The geometric constraints
introduced by these measurements are utilized to narrow down candidate landmark
combinations corresponding to the range measurements and, consequently, the
target's location on a map. By modeling landmarks as a marked Poisson point
process (PPP), we show that three noise-free range measurements are sufficient
to uniquely determine the correct combination of landmarks in a two-dimensional
plane. For noisy measurements, we provide a mathematical characterization of
the probability of correctly identifying the observed landmark combination
based on a novel joint distribution of key random variables. Our results
demonstrate that the landmark combination can be identified using ranges, even
when individual landmarks are visually indistinguishable.
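With noise-free ranges, the claim that three measurements pin down a 2D position can be checked by standard trilateration (landmark positions and the target below are invented):

```python
import math

def trilaterate(p1, r1, p2, r2, p3, r3):
    """Recover a 2D position from noise-free ranges to three known,
    non-collinear landmarks: subtracting pairs of circle equations
    yields a 2x2 linear system, solved here by Cramer's rule."""
    x1, y1 = p1
    x2, y2 = p2
    x3, y3 = p3
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
    b1 = r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2
    b2 = r1**2 - r3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a11 * a22 - a12 * a21
    return ((b1 * a22 - b2 * a12) / det, (a11 * b2 - a21 * b1) / det)

# Invented map: target at (1, 2), landmarks at known positions.
target = (1.0, 2.0)
landmarks = [(0.0, 0.0), (4.0, 0.0), (0.0, 5.0)]
ranges = [math.dist(target, lm) for lm in landmarks]
est = trilaterate(landmarks[0], ranges[0], landmarks[1], ranges[1],
                  landmarks[2], ranges[2])
print(est)
```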
|
2501.18761
|
Probabilistic Joint Recovery Method for CO$_2$ Plume Monitoring
|
cs.LG physics.ao-ph
|
Reducing CO$_2$ emissions is crucial to mitigating climate change. Carbon
Capture and Storage (CCS) is one of the few technologies capable of achieving
net-negative CO$_2$ emissions. However, predicting fluid flow patterns in CCS
remains challenging due to uncertainties in CO$_2$ plume dynamics and reservoir
properties. Building on existing seismic imaging methods like the Joint
Recovery Method (JRM), which lacks uncertainty quantification, we propose the
Probabilistic Joint Recovery Method (pJRM). By estimating posterior
distributions across surveys using a shared generative model, pJRM provides
uncertainty information to improve risk assessment in CCS projects.
|
2501.18766
|
Breaking the Fake News Barrier: Deep Learning Approaches in Bangla
Language
|
cs.CL cs.AI
|
The rapid development of digital platforms has greatly compounded the spread
of false information, eroding trust and judgment in society, especially among
the Bengali-speaking community. Our study addresses this critical issue by
presenting a strategy that utilizes a deep learning technique, specifically
the Gated Recurrent Unit (GRU), to recognize fake news in the Bangla language.
Our proposed approach incorporates intensive data preprocessing, which
includes lemmatization, tokenization, and addressing class imbalance by
oversampling. This results in a dataset containing 58,478 entries. We build a
model based on the GRU that demonstrates remarkable performance, with a
noteworthy accuracy of 94%. This study provides a thorough explanation of the
methods involved in preparing the data, selecting the model, training it, and
evaluating its performance. The performance of the model is assessed with
standard metrics: precision, recall, F1 score, and accuracy. The contributions
of this work include the creation of a large Bangla fake news dataset and a
model that outperforms other Bangla fake news detection models.
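For readers unfamiliar with the recurrence underlying such a detector, a minimal NumPy sketch of a GRU cell follows; this is a generic illustration of the standard gate equations, with made-up dimensions, not the paper's trained architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class GRUCell:
    """Minimal GRU cell; weight shapes follow the standard formulation."""
    def __init__(self, input_dim, hidden_dim):
        s = 1.0 / np.sqrt(hidden_dim)
        shape = (hidden_dim, input_dim + hidden_dim)
        self.Wz = rng.uniform(-s, s, shape)  # update gate
        self.Wr = rng.uniform(-s, s, shape)  # reset gate
        self.Wh = rng.uniform(-s, s, shape)  # candidate state

    def step(self, x, h):
        xh = np.concatenate([x, h])
        z = sigmoid(self.Wz @ xh)                        # how much to update
        r = sigmoid(self.Wr @ xh)                        # how much history to expose
        h_tilde = np.tanh(self.Wh @ np.concatenate([x, r * h]))
        return (1.0 - z) * h + z * h_tilde               # gated interpolation

cell = GRUCell(input_dim=8, hidden_dim=16)
tokens = rng.standard_normal((5, 8))   # 5 embedded tokens of a headline
h = np.zeros(16)
for x in tokens:
    h = cell.step(x, h)
print(h.shape)   # final state would feed a classification head
```

In a full pipeline the final hidden state feeds a sigmoid output layer for the fake/real decision, after the preprocessing steps (tokenization, lemmatization, oversampling) the abstract describes.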
|
2501.18768
|
Diversity By Design: Leveraging Distribution Matching for Offline
Model-Based Optimization
|
cs.LG cs.AI
|
The goal of offline model-based optimization (MBO) is to propose new designs
that maximize a reward function given only an offline dataset. However, an
important desideratum is to also propose a diverse set of final candidates
that capture many optimal and near-optimal design configurations. We propose
Diversity in Adversarial Model-based Optimization (DynAMO) as a novel method to
introduce design diversity as an explicit objective into any MBO problem. Our
key insight is to formulate diversity as a distribution matching problem where
the distribution of generated designs captures the inherent diversity contained
within the offline dataset. Extensive experiments spanning multiple scientific
domains show that DynAMO can be used with common optimization methods to
significantly improve the diversity of proposed designs while still discovering
high-quality candidates.
|
2501.18769
|
One Stack, Diverse Vehicles: Checking Safe Portability of Automated
Driving Software
|
eess.SY cs.RO cs.SY
|
Integrating an automated driving software stack into vehicles with variable
configuration is challenging, especially due to different hardware
characteristics. Further, to provide software updates to a vehicle fleet in the
field, the functional safety of every affected configuration has to be ensured.
These additional demands for dependability and the increasing hardware
diversity in automated driving make rigorous automatic analysis essential. This
paper addresses this challenge by using formal portability checking of adaptive
cruise controller code for different vehicle configurations. Given a formal
specification of the safe behavior, models of target configurations are
derived, which capture relevant effects of sensors, actuators and computing
platforms. A corresponding safe set is obtained and used to check if the
desired behavior is achievable on all targets. In a case study, portability
checking of a traditional and a neural network controller is performed
automatically within minutes for each vehicle hardware configuration. The check
provides feedback for necessary adaptations of the controllers, thus, allowing
rapid integration and testing of software or parameter changes.
|
2501.18771
|
Overestimation in LLM Evaluation: A Controlled Large-Scale Study on Data
Contamination's Impact on Machine Translation
|
cs.CL cs.AI
|
Data contamination -- the accidental consumption of evaluation examples
within the pre-training data -- can undermine the validity of evaluation
benchmarks. In this paper, we present a rigorous analysis of the effects of
contamination on language models at 1B and 8B scales on the machine translation
task. Starting from a carefully decontaminated train-test split, we
systematically introduce contamination at various stages, scales, and data
formats to isolate its effect and measure its impact on performance metrics.
Our experiments reveal that contamination with both source and target
substantially inflates BLEU scores, and this inflation is 2.5 times larger (up
to 30 BLEU points) for 8B compared to 1B models. In contrast, source-only and
target-only contamination generally produce smaller, less consistent
over-estimations. Finally, we study how the temporal distribution and frequency
of contaminated samples influence performance over-estimation across languages
with varying degrees of data resources.
|
2501.18773
|
Beyond Short Steps in Frank-Wolfe Algorithms
|
math.OC cs.LG
|
We introduce novel techniques to enhance Frank-Wolfe algorithms by leveraging
function smoothness beyond traditional short steps. Our study focuses on
Frank-Wolfe algorithms with step sizes that incorporate primal-dual guarantees,
offering practical stopping criteria. We present a new Frank-Wolfe algorithm
utilizing an optimistic framework and provide a primal-dual convergence proof.
Additionally, we propose a generalized short-step strategy aimed at optimizing
a computable primal-dual gap. Interestingly, this new generalized short-step
strategy is also applicable to gradient descent algorithms beyond Frank-Wolfe
methods. As a byproduct, our work revisits and refines primal-dual techniques
for analyzing Frank-Wolfe algorithms, achieving tighter primal-dual convergence
rates. Empirical results demonstrate that our optimistic algorithm outperforms
existing methods, highlighting its practical advantages.
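The traditional short step the abstract goes "beyond" follows directly from L-smoothness: minimizing the quadratic upper bound over the step size gives gamma = min(1, gap / (L ||d||^2)), and the Frank-Wolfe gap itself is the computable primal-dual stopping criterion. A minimal sketch of that baseline (the simplex toy problem and helper names are illustrative, not the paper's generalized strategy):

```python
import numpy as np

def frank_wolfe(grad_f, lmo, x0, L, max_iter=5000, tol=1e-10):
    """Frank-Wolfe with the classical short step from L-smoothness.

    The FW gap <grad f(x), x - v> upper-bounds f(x) - f(x*), so it
    serves as a computable primal-dual stopping criterion.
    """
    x = x0.astype(float).copy()
    gap = np.inf
    for _ in range(max_iter):
        g = grad_f(x)
        v = lmo(g)                 # linear minimization oracle
        d = v - x
        gap = -(g @ d)             # nonnegative by optimality of v
        if gap < tol:
            break
        # short step: minimizer of the smoothness upper bound, clipped to [0, 1]
        gamma = min(1.0, gap / (L * (d @ d)))
        x += gamma * d
    return x, gap

# Toy problem: minimize ||x - c||^2 over the probability simplex (L = 2)
c = np.array([0.1, 0.5, 0.4])
grad = lambda x: 2.0 * (x - c)
lmo = lambda g: np.eye(g.size)[np.argmin(g)]   # best simplex vertex
x, gap = frank_wolfe(grad, lmo, np.array([1.0, 0.0, 0.0]), L=2.0)
print(x.round(3), gap)  # x approaches c, the interior optimum
```

Iterates stay feasible because each update is a convex combination of the current point and a vertex returned by the oracle.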
|
2501.18777
|
Navigating the Fragrance space Via Graph Generative Models And
Predicting Odors
|
cs.LG
|
We explore a suite of generative modelling techniques to efficiently navigate
and explore the complex landscapes of odor and the broader chemical space.
Unlike traditional approaches, we not only generate molecules but also predict
their odor likeliness, achieving an ROC AUC score of 0.97, and assign probable
odor labels.
We correlate odor likeliness with physicochemical features of molecules using
machine learning techniques and leverage SHAP (SHapley Additive exPlanations)
to demonstrate the interpretability of the function. The whole process involves
four key stages: molecule generation, stringent sanitization checks for
molecular validity, fragrance likeliness screening and odor prediction of the
generated molecules. By making our code and trained models publicly accessible,
we aim to facilitate broader adoption of our research across applications in
fragrance discovery and olfactory research.
|
2501.18781
|
A consistent diffuse-interface finite element approach to rapid
melt--vapor dynamics in metal additive manufacturing
|
cs.CE
|
Metal additive manufacturing via laser-based powder bed fusion (PBF-LB/M)
faces performance-critical challenges due to complex melt pool and vapor
dynamics, often oversimplified by computational models that neglect crucial
aspects, such as vapor jet formation. To address this limitation, we propose a
consistent computational multi-physics mesoscale model to study melt pool
dynamics, laser-induced evaporation, and vapor flow. In addition to the
evaporation-induced pressure jump, we also resolve the evaporation-induced
volume expansion and the resulting velocity jump at the liquid--vapor
interface. We use an anisothermal incompressible Navier--Stokes solver extended
by a conservative diffuse level-set framework and integrate it into a
matrix-free adaptive finite element framework. To ensure accurate physical
solutions despite extreme density, pressure and velocity gradients across the
diffuse liquid--vapor interface, we employ consistent interface source term
formulations developed in our previous work. These formulations consider
projection operations to extend solution variables from the sharp liquid--vapor
interface into the computational domain. Benchmark examples, including film
boiling, confirm the accuracy and versatility of the model. As a key result, we
demonstrate the model's ability to capture the strong coupling between melt and
vapor flow dynamics in PBF-LB/M based on simulations of stationary laser
illumination on a metal plate. Additionally, we show the derivation of the
well-known Anisimov model and extend it to a new hybrid model. This hybrid
model, together with consistent interface source term formulations, especially
for the level-set transport velocity, enables PBF-LB/M simulations that combine
accurate physical results with the robustness of an incompressible,
diffuse-interface computational modeling framework.
|
2501.18782
|
PSO-Net: Development of an automated psoriasis assessment system using
attention-based interpretable deep neural networks
|
eess.IV cs.CV
|
Psoriasis is a chronic skin condition that requires long-term treatment and
monitoring. Although the Psoriasis Area and Severity Index (PASI) is utilized
as a standard measurement to assess psoriasis severity in clinical trials, it
has many drawbacks such as (1) patient burden for in-person clinic visits for
assessment of psoriasis, (2) time required for investigator scoring and (3)
variability of inter- and intra-rater scoring. To address these drawbacks, we
propose a novel and interpretable deep learning architecture called PSO-Net,
which maps digital images from different anatomical regions to derive
attention-based scores. Regional scores are further combined to estimate an
absolute PASI score. Moreover, we devise a novel regression activation map for
interpretability through ranking attention scores. Using this approach, we
achieved intra-class correlation coefficients of 82.2% [95% CI: 77-87%] and
87.8% [95% CI: 84-91%] with two different clinician raters, respectively.
|
2501.18783
|
RUN: Reversible Unfolding Network for Concealed Object Segmentation
|
cs.CV
|
Existing concealed object segmentation (COS) methods frequently utilize
reversible strategies to address uncertain regions. However, these approaches
are typically restricted to the mask domain, leaving the potential of the RGB
domain underexplored. To address this, we propose the Reversible Unfolding
Network (RUN), which applies reversible strategies across both mask and RGB
domains through a theoretically grounded framework, enabling accurate
segmentation. RUN first formulates a novel COS model by incorporating an extra
residual sparsity constraint to minimize segmentation uncertainties. The
iterative optimization steps of the proposed model are then unfolded into a
multistage network, with each step corresponding to a stage. Each stage of RUN
consists of two reversible modules: the Segmentation-Oriented Foreground
Separation (SOFS) module and the Reconstruction-Oriented Background Extraction
(ROBE) module. SOFS applies the reversible strategy at the mask level and
introduces Reversible State Space to capture non-local information. ROBE
extends this to the RGB domain, employing a reconstruction network to address
conflicting foreground and background regions identified as distortion-prone
areas, which arise from their separate estimation by independent modules. As
the stages progress, RUN gradually facilitates reversible modeling of
foreground and background in both the mask and RGB domains, directing the
network's attention to uncertain regions and mitigating false-positive and
false-negative results. Extensive experiments demonstrate the superior
performance of RUN and highlight the potential of unfolding-based frameworks
for COS and other high-level vision tasks. We will release the code and models.
|
2501.18784
|
LLM-Generated Heuristics for AI Planning: Do We Even Need
Domain-Independence Anymore?
|
cs.AI
|
Domain-independent heuristics have long been a cornerstone of AI planning,
offering general solutions applicable across a wide range of tasks without
requiring domain-specific engineering. However, the advent of large language
models (LLMs) presents an opportunity to generate heuristics tailored to
specific planning problems, potentially challenging the necessity of domain
independence as a strict design principle. In this paper, we explore the use of
LLMs to automatically derive planning heuristics from task descriptions
represented as successor generators and goal tests written in a general-purpose
programming language. We investigate the trade-offs between domain-specific
LLM-generated heuristics and traditional domain-independent methods in terms of
computational efficiency and explainability. Our experiments demonstrate that
LLMs can create heuristics that achieve state-of-the-art performance on some
standard IPC domains, as well as their ability to solve problems that lack an
adequate Planning Domain Definition Language (PDDL) representation. We
discuss whether these results signify a paradigm shift and how they can
complement existing approaches.
|
2501.18786
|
Multispectral 3D mapping on a Roman sculpture to study ancient
polychromy
|
cs.CV eess.IV
|
Research into the polychromy of Greek and Roman sculptures has surged to
explore the hypothesis that ancient sculptures were originally not pristine
white but adorned with colors. Multispectral and multimodal imaging techniques
have been crucial in studying painted surfaces, revealing polychromies even in
traces. In fact, imaging techniques, such as reflectance and fluorescence, can
identify different materials and map inhomogeneities, guiding further
investigations such as Raman spectroscopy, X-Ray Fluorescence (XRF), and
Fourier Transform InfraRed Spectroscopy (FTIR) to investigate residual colors.
However, this
approach may underestimate the original polychromies' extent over the complex
articulation of a sculptured surface. This study proposes a methodology to
analyze the original appearance of ancient sculptures using reality-based 3D
models with textures not limited to those visible to the naked eye. We employ
Visible Reflected Imaging (VIS) and Ultraviolet-induced Fluorescence Imaging
(UVF). From the UVF and VIS datasets, the underlying 3D model is built by means
of photogrammetry. Through raw data processing, images taken with different
illuminating sources are successfully aligned and processed, creating a single
3D model with multiple textures mapped onto the same bi-dimensional space. The
pixel-to-pixel correspondence of different textures allows for the
implementation of a classification algorithm that can directly map its outcome
onto the 3D model surface. This enables conservators to deepen their
understanding of artifact preservation, observe material distribution in
detail, and correlate this with 3D geometrical data. In this study, we
experiment with this approach on an ancient Roman sculpture of Artemis,
conserved at the Archeological and Art Museum of Maremma (MAAM) in Grosseto,
Italy.
|
2501.18788
|
Tuning Event Camera Biases Heuristic for Object Detection Applications
in Staring Scenarios
|
cs.CV math.OC
|
One of the main challenges in unlocking the potential of neuromorphic
cameras, also called 'event cameras', is the development of novel methods that
solve the multi-parameter problem of adjusting their bias parameters to
accommodate a desired task. Indeed, the literature lacks a systematic
heuristic that solves this problem for arbitrary applications.
In this paper we present a parameter-tuning heuristic for the biases of event
cameras, for tasks that require small-object detection in staring scenarios.
The main purpose of the heuristic is to squeeze the camera's potential,
optimize its performance, and expand its detection capabilities as much as
possible.
In the presentation, we translate the experimental properties of event cameras
and systemic constraints into mathematical terms and show, under certain
assumptions, how the multi-variable problem collapses into a two-parameter
problem that can be solved experimentally.
A main conclusion we demonstrate is that for certain desired signals, such as
that of an incandescent lamp powered by the periodic electrical grid, the
optimal bias values are far from the defaults recommended by the manufacturer.
|
2501.18790
|
Achieving $\widetilde{\mathcal{O}}(\sqrt{T})$ Regret in Average-Reward
POMDPs with Known Observation Models
|
cs.LG stat.ML
|
We tackle average-reward infinite-horizon POMDPs with an unknown transition
model but a known observation model, a setting that has been previously
addressed in two limiting ways: (i) frequentist methods relying on suboptimal
stochastic policies having a minimum probability of choosing each action, and
(ii) Bayesian approaches employing the optimal policy class but requiring
strong assumptions about the consistency of employed estimators. Our work
removes these limitations by proving convenient estimation guarantees for the
transition model and introducing an optimistic algorithm that leverages the
optimal class of deterministic belief-based policies. We introduce
modifications to existing estimation techniques providing theoretical
guarantees separately for each estimated action transition matrix. Unlike
existing estimation methods that are unable to use samples from different
policies, we present a novel and simple estimator that overcomes this barrier.
This new data-efficient technique, combined with the proposed \emph{Action-wise
OAS-UCRL} algorithm and a tighter theoretical analysis, leads to the first
approach enjoying a regret guarantee of order $\mathcal{O}(\sqrt{T \,\log T})$
when compared against the optimal policy, thus improving over state of the art
techniques. Finally, theoretical results are validated through numerical
simulations showing the efficacy of our method against baseline methods.
|
2501.18792
|
Bayesian Optimization with Preference Exploration by Monotonic Neural
Network Ensemble
|
cs.LG math.OC stat.ML
|
Many real-world black-box optimization problems have multiple conflicting
objectives. Rather than attempting to approximate the entire set of
Pareto-optimal solutions, interactive preference learning allows the search to
focus on the most relevant subset. However, few previous studies have
exploited the fact that utility functions are usually monotonic. In this paper,
we address the Bayesian Optimization with Preference Exploration (BOPE) problem
and propose using a neural network ensemble as a utility surrogate model. This
approach naturally integrates monotonicity and supports pairwise comparison
data. Our experiments demonstrate that the proposed method outperforms
state-of-the-art approaches and exhibits robustness to noise in utility
evaluations. An ablation study highlights the critical role of monotonicity in
enhancing performance.
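One standard way to build the monotonicity the abstract emphasizes into a network is by construction: pass raw weights through softplus so every effective weight is positive, and use monotone activations. A hypothetical sketch of one such ensemble member (architecture and dimensions are assumptions, not the paper's model):

```python
import numpy as np

rng = np.random.default_rng(1)

def softplus(w):
    return np.log1p(np.exp(w))

class MonotonicMLP:
    """MLP that is non-decreasing in every input, by construction:
    effective weights are softplus(raw) > 0 and activations are monotone."""
    def __init__(self, dims):
        self.raw = [rng.standard_normal((dims[i + 1], dims[i]))
                    for i in range(len(dims) - 1)]
        self.bias = [np.zeros(dims[i + 1]) for i in range(len(dims) - 1)]

    def __call__(self, x):
        h = np.asarray(x, dtype=float)
        for i, (W, b) in enumerate(zip(self.raw, self.bias)):
            h = softplus(W) @ h + b
            if i < len(self.raw) - 1:
                h = np.tanh(h)           # monotone nonlinearity
        return h

# An ensemble of such nets could serve as a utility surrogate; here we
# just check the monotonicity guarantee on one member.
net = MonotonicMLP([2, 8, 1])
low, high = net([0.2, 0.3]), net([0.5, 0.9])   # componentwise ordered inputs
print(bool(high >= low))  # True
```

Training on pairwise comparisons then amounts to fitting these constrained weights so the surrogate ranks preferred designs higher, with ensemble disagreement providing uncertainty.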
|
2501.18793
|
OT-Transformer: A Continuous-time Transformer Architecture with Optimal
Transport Regularization
|
cs.LG cs.AI
|
Transformers have achieved state-of-the-art performance in numerous tasks. In
this paper, we propose a continuous-time formulation of transformers.
Specifically, we consider a dynamical system whose governing equation is
parametrized by transformer blocks. We leverage optimal transport theory to
regularize the training problem, which enhances stability in training and
improves generalization of the resulting model. Moreover, we demonstrate in
theory that this regularization is necessary as it promotes uniqueness and
regularity of solutions. Our model is flexible in that almost any existing
transformer architectures can be adopted to construct the dynamical system with
only slight modifications to the existing code. We perform extensive numerical
experiments on tasks motivated by natural language processing, image
classification, and point cloud classification. Our experimental results show
that the proposed method improves the performance of its discrete counterpart
and outperforms relevant comparing models.
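A rough sketch of the continuous-time idea: treat the stack as an ODE whose velocity field is a block, integrate with explicit Euler, and accumulate a kinetic-energy penalty (the optimal-transport regularizer) along the trajectory. The placeholder block, step count, and scaling here are assumptions for illustration, not the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def block(x, W1, W2):
    """Placeholder for a transformer block's output f(x); any block
    with matching dimensions could be substituted."""
    return W2 @ np.tanh(W1 @ x)

def ode_forward(x, params, n_steps=8, horizon=1.0):
    """Explicit-Euler rollout of x' = f(x) with an accumulated
    kinetic-energy (optimal-transport) penalty along the trajectory."""
    dt = horizon / n_steps
    transport_cost = 0.0
    for _ in range(n_steps):
        v = block(x, *params)
        transport_cost += dt * float(v @ v)   # discretized integral of ||v||^2
        x = x + dt * v
    return x, transport_cost

d = 4
params = (0.1 * rng.standard_normal((d, d)), 0.1 * rng.standard_normal((d, d)))
x0 = rng.standard_normal(d)
xT, cost = ode_forward(x0, params)
# A training loss would combine task_loss(xT) with lam * cost, which
# penalizes unnecessarily long transport paths and regularizes the dynamics.
print(xT.shape, cost >= 0.0)
```

The penalty favors nearly straight, short trajectories, which is the intuition behind the stability and regularity claims in the abstract.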
|
2501.18794
|
Survey and Improvement Strategies for Gene Prioritization with Large
Language Models
|
q-bio.GN cs.AI
|
Rare diseases are challenging to diagnose due to limited patient data and
genetic diversity. Despite advances in variant prioritization, many cases
remain undiagnosed. While large language models (LLMs) have performed well in
medical exams, their effectiveness in diagnosing rare genetic diseases has not
been assessed. To identify causal genes, we benchmarked various LLMs for gene
prioritization. Using multi-agent and Human Phenotype Ontology (HPO)
classification, we categorized patients based on phenotypes and solvability
levels. As gene set size increased, LLM performance deteriorated, so we used a
divide-and-conquer strategy to break the task into smaller subsets. At
baseline, GPT-4 outperformed other LLMs, achieving near 30% accuracy in ranking
causal genes correctly. The multi-agent and HPO approaches helped distinguish
confidently solved cases from challenging ones, highlighting the importance of
known gene-phenotype associations and phenotype specificity. We found that
cases with specific phenotypes or clear associations were more accurately
solved. However, we observed biases toward well-studied genes and input order
sensitivity, which hindered gene prioritization. Our divide-and-conquer
strategy improved accuracy by overcoming these biases. By utilizing HPO
classification, novel multi-agent techniques, and our LLM strategy, we improved
causal gene identification accuracy compared to our baseline evaluation. This
approach streamlines rare disease diagnosis, facilitates reanalysis of unsolved
cases, and accelerates gene discovery, supporting the development of targeted
diagnostics and therapies.
|
2501.18795
|
Rope to Nope and Back Again: A New Hybrid Attention Strategy
|
cs.CL
|
Long-context large language models (LLMs) have achieved remarkable
advancements, driven by techniques like Rotary Position Embedding (RoPE) (Su et
al., 2023) and its extensions (Chen et al., 2023; Liu et al., 2024c; Peng et
al., 2023). By adjusting RoPE parameters and incorporating training data with
extended contexts, we can train performant models with considerably longer
input sequences. However, existing RoPE-based methods exhibit performance
limitations when applied to extended context lengths. This paper presents a
comprehensive analysis of various attention mechanisms, including RoPE, No
Positional Embedding (NoPE), and Query-Key Normalization (QK-Norm), identifying
their strengths and shortcomings in long-context modeling. Our investigation
identifies distinctive attention patterns in these methods and highlights their
impact on long-context performance, providing valuable insights for
architectural design. Building on these findings, we propose a novel
architecture based on a hybrid attention mechanism that not only surpasses
conventional RoPE-based transformer models in long context tasks but also
achieves competitive performance on benchmarks requiring shorter context
lengths.
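As background for the RoPE-based methods under analysis, a minimal NumPy implementation of the rotary embedding, together with a check of the relative-position property that motivates it (the dimension and base here are the customary defaults, chosen for illustration):

```python
import numpy as np

def rope(x, pos, base=10000.0):
    """Rotary Position Embedding: rotate consecutive pairs
    (x[2i], x[2i+1]) by pos * theta_i with theta_i = base^(-2i/d)."""
    d = x.shape[-1]
    theta = base ** (-2.0 * np.arange(d // 2) / d)
    ang = pos * theta
    cos, sin = np.cos(ang), np.sin(ang)
    out = np.empty_like(x, dtype=float)
    x1, x2 = x[0::2], x[1::2]
    out[0::2] = x1 * cos - x2 * sin
    out[1::2] = x1 * sin + x2 * cos
    return out

rng = np.random.default_rng(0)
q, k = rng.standard_normal(8), rng.standard_normal(8)
# Attention scores under RoPE depend only on the relative offset m - n:
s1 = rope(q, 5) @ rope(k, 3)     # offset 2
s2 = rope(q, 12) @ rope(k, 10)   # offset 2 again
print(np.isclose(s1, s2))  # True
```

NoPE simply omits this rotation, and QK-Norm normalizes q and k before the dot product; the paper's hybrid interleaves such mechanisms across layers.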
|
2501.18796
|
Designing Kresling Origami for Personalised Wrist Orthosis
|
cs.RO
|
The wrist plays a pivotal role in facilitating motion dexterity and hand
functions. Wrist orthoses, from passive braces to active exoskeletons, provide
an effective solution for the assistance and rehabilitation of motor abilities.
However, the type of motions facilitated by currently available orthoses is
limited, with little emphasis on personalised design. To address these gaps,
this paper proposes a novel wrist orthosis design inspired by the Kresling
origami. The design can be adapted to accommodate various individual shape
parameters, which benefits from the topological variations and intrinsic
compliance of origami. Heat-sealable fabrics are used to replicate the
non-rigid nature of the Kresling origami. The orthosis is capable of six
distinct motion modes with a detachable tendon-based actuation system.
Experimental characterisation of the workspace has been conducted by activating
tendons individually. The maximum bending angle in each direction ranges from
18.81{\deg} to 32.63{\deg}. When tendons are pulled in combination, the maximum
bending angles in the dorsal, palmar, radial, and ulnar directions are
31.66{\deg}, 30.38{\deg}, 27.14{\deg}, and 14.92{\deg}, respectively. The
capability to generate complex motions such as the dart-throwing motion and
circumduction has also been experimentally validated. The work presents a
promising foundation for the development of personalised wrist orthoses for
training and rehabilitation.
|
2501.18797
|
Compositional Generalization Requires More Than Disentangled
Representations
|
cs.LG cs.AI stat.ML
|
Composition, the ability to generate myriad variations from finite means, is
believed to underlie powerful generalization. However, compositional
generalization remains a key challenge for deep learning. A widely held
assumption is that learning disentangled (factorized) representations naturally
supports this kind of extrapolation. Yet, empirical results are mixed, with
many generative models failing to recognize and compose factors to generate
out-of-distribution (OOD) samples. In this work, we investigate a controlled 2D
Gaussian "bump" generation task, demonstrating that standard generative
architectures fail in OOD regions when training with partial data, even when
supplied with fully disentangled $(x, y)$ coordinates, re-entangling them
through subsequent layers. By examining the model's learned kernels and
manifold geometry, we show that this failure reflects a "memorization" strategy
for generation through the superposition of training data rather than by
combining the true factorized features. We show that models forced (through
architectural modifications, regularization, or curated training data) to
create disentangled representations in the full-dimensional representational
(pixel) space can be highly data-efficient and effective at learning to compose
in OOD regions. These findings underscore that bottlenecks with
factorized/disentangled representations in an abstract latent space are
insufficient: the model must actively maintain or induce factorization directly
in the representational space in order to achieve robust compositional
generalization.
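The ground-truth generator for a 2D Gaussian "bump" task of this kind is simple to write down; a hypothetical sketch of the setup (grid size, sigma, and the OOD split are assumptions for illustration, not the paper's exact protocol):

```python
import numpy as np

def gaussian_bump(cx, cy, size=32, sigma=2.0):
    """Ground-truth generator: render an image whose only factors of
    variation are the disentangled bump coordinates (cx, cy)."""
    ys, xs = np.mgrid[0:size, 0:size]
    return np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2.0 * sigma ** 2))

# A compositional-OOD split of the kind described: train where at least
# one factor lies in a seen range, test on the held-out quadrant where
# both coordinates are simultaneously novel.
in_train = lambda cx, cy: cx < 16 or cy < 16
img = gaussian_bump(24.0, 24.0)           # an OOD (test-quadrant) sample
peak = tuple(int(i) for i in np.unravel_index(img.argmax(), img.shape))
print(img.shape, peak)  # (32, 32) (24, 24)
```

A model that truly composes the (cx, cy) factors should place the bump correctly in the held-out quadrant; a memorizing model instead superposes training bumps, which is the failure mode the abstract diagnoses.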
|
2501.18799
|
A General-Purpose Neuromorphic Sensor based on Spiketrum Algorithm:
Hardware Details and Real-life Applications
|
eess.SP cs.SY eess.AS eess.SY
|
Spiking Neural Networks (SNNs) offer a biologically inspired computational
paradigm, enabling energy-efficient data processing through spike-based
information transmission. Despite notable advancements in hardware for SNNs,
spike encoding has largely remained software-dependent, limiting efficiency.
This paper addresses the need for adaptable and resource-efficient spike
encoding hardware by presenting an area-optimized hardware implementation of
the Spiketrum algorithm, which encodes time-varying analogue signals into
spatiotemporal spike patterns. Unlike earlier performance-optimized designs,
which prioritize speed, our approach focuses on reducing hardware footprint,
achieving a 52% reduction in Block RAMs (BRAMs), 31% fewer Digital Signal
Processing (DSP) slices, and a 6% decrease in Look-Up Tables (LUTs). The
proposed implementation has been verified on an FPGA and successfully
integrated into an IC using TSMC180 technology. Experimental results
demonstrate the system's effectiveness in real-world applications, including
sound and ECG classification. This work highlights the trade-offs between
performance and resource efficiency, offering a flexible, scalable solution for
neuromorphic systems in power-sensitive applications like cochlear implants and
neural devices.
|
2501.18801
|
Every Image Listens, Every Image Dances: Music-Driven Image Animation
|
cs.CV cs.AI
|
Image animation has become a promising area in multimodal research, with a
focus on generating videos from reference images. While prior work has largely
emphasized generic video generation guided by text, music-driven dance video
generation remains underexplored. In this paper, we introduce MuseDance, an
innovative end-to-end model that animates reference images using both music and
text inputs. This dual input enables MuseDance to generate personalized videos
that follow text descriptions and synchronize character movements with the
music. Unlike existing approaches, MuseDance eliminates the need for complex
motion guidance inputs, such as pose or depth sequences, making flexible and
creative video generation accessible to users of all expertise levels. To
advance research in this field, we present a new multimodal dataset comprising
2,904 dance videos with corresponding background music and text descriptions.
Our approach leverages diffusion-based methods to achieve robust
generalization, precise control, and temporal consistency, setting a new
baseline for the music-driven image animation task.
|
2501.18802
|
Agile and Cooperative Aerial Manipulation of a Cable-Suspended Load
|
cs.RO cs.SY eess.SY
|
Quadrotors can carry slung loads to hard-to-reach locations at high speed.
Since a single quadrotor has limited payload capacities, using a team of
quadrotors to collaboratively manipulate a heavy object is a scalable and
promising solution. However, existing control algorithms for multi-lifting
systems only enable low-speed and low-acceleration operations due to the
complex dynamic coupling between quadrotors and the load, limiting their use in
time-critical missions such as search and rescue. In this work, we present a
solution to significantly enhance the agility of cable-suspended multi-lifting
systems. Unlike traditional cascaded solutions, we introduce a trajectory-based
framework that solves the whole-body kinodynamic motion planning problem
online, accounting for the dynamic coupling effects and constraints between the
quadrotors and the load. The planned trajectory is provided to the quadrotors
as a reference in a receding-horizon fashion and is tracked by an onboard
controller that observes and compensates for the cable tension. Real-world
experiments demonstrate that our framework can achieve at least eight times
greater acceleration than state-of-the-art methods to follow agile
trajectories. Our method can even perform complex maneuvers such as flying
through narrow passages at high speed. Additionally, it exhibits high
robustness against load uncertainties and does not require adding any sensors
to the load, demonstrating strong practicality.
|
2501.18803
|
Deceptive Sequential Decision-Making via Regularized Policy Optimization
|
cs.LG math.OC
|
Autonomous systems are increasingly expected to operate in the presence of
adversaries, though an adversary may infer sensitive information simply by
observing a system, without even needing to interact with it. Therefore, in
this work we present a deceptive decision-making framework that not only
conceals sensitive information, but in fact actively misleads adversaries about
it. We model autonomous systems as Markov decision processes, and we consider
adversaries that attempt to infer their reward functions using inverse
reinforcement learning. To counter such efforts, we present two regularization
strategies for policy synthesis problems that actively deceive an adversary
about a system's underlying rewards. The first form of deception is
``diversionary'', and it leads an adversary to draw any false conclusion about
what the system's reward function is. The second form of deception is
``targeted'', and it leads an adversary to draw a specific false conclusion
about what the system's reward function is. We then show how each form of
deception can be implemented in policy optimization problems, and we
analytically bound the loss in total accumulated reward that is induced by
deception. Next, we evaluate these developments in a multi-agent sequential
decision-making problem with one real agent and multiple decoys. We show that
diversionary deception can cause the adversary to believe that the most
important agent is the least important, while attaining a total accumulated
reward that is $98.83\%$ of its optimal, non-deceptive value. Similarly, we
show that targeted deception can make any decoy appear to be the most important
agent, while still attaining a total accumulated reward that is $99.25\%$ of
its optimal, non-deceptive value.
|
2501.18804
|
Zero-Shot Novel View and Depth Synthesis with Multi-View Geometric
Diffusion
|
cs.CV cs.LG
|
Current methods for 3D scene reconstruction from sparse posed images employ
intermediate 3D representations such as neural fields, voxel grids, or 3D
Gaussians, to achieve multi-view consistent scene appearance and geometry. In
this paper we introduce MVGD, a diffusion-based architecture capable of direct
pixel-level generation of images and depth maps from novel viewpoints, given an
arbitrary number of input views. Our method uses raymap conditioning both to
augment visual features with spatial information from different viewpoints and
to guide the generation of images and depth maps from novel views. A
key aspect of our approach is the multi-task generation of images and depth
maps, using learnable task embeddings to guide the diffusion process towards
specific modalities. We train this model on a collection of more than 60
million multi-view samples from publicly available datasets, and propose
techniques to enable efficient and consistent learning in such diverse
conditions. We also propose a novel strategy that enables the efficient
training of larger models by incrementally fine-tuning smaller ones, with
promising scaling behavior. Through extensive experiments, we report
state-of-the-art results in multiple novel view synthesis benchmarks, as well
as multi-view stereo and video depth estimation.
|
2501.18805
|
Are Representation Disentanglement and Interpretability Linked in
Recommendation Models? A Critical Review and Reproducibility Study
|
cs.IR
|
Unsupervised learning of disentangled representations has been closely tied
to enhancing the representation interpretability of Recommender Systems (RSs).
This has been achieved by making the representation of individual features more
distinctly separated, so that it is easier to attribute the contribution of
features to the model's predictions. However, such advantages in
interpretability and feature attribution have mainly been explored
qualitatively. Moreover, the effect of disentanglement on the model's
recommendation performance has been largely overlooked. In this work, we
reproduce the recommendation performance, representation disentanglement and
representation interpretability of five well-known recommendation models on
four RS datasets. We quantify disentanglement and investigate the link of
disentanglement with recommendation effectiveness and representation
interpretability. While several existing works in RSs have proposed disentangled
representations as a gateway to improved effectiveness and interpretability,
our findings show that disentanglement is not necessarily related to
effectiveness but is closely related to representation interpretability. Our
code and results are publicly available at
https://github.com/edervishaj/disentanglement-interpretability-recsys.
|
2501.18808
|
Learning Hamiltonian Dynamics with Bayesian Data Assimilation
|
cs.LG cs.RO cs.SY eess.SY
|
In this paper, we develop a neural network-based approach for time-series
prediction in unknown Hamiltonian dynamical systems. Our approach leverages a
surrogate model and learns the system dynamics using generalized coordinates
(positions) and their conjugate momenta while preserving a constant
Hamiltonian. To further enhance long-term prediction accuracy, we introduce an
Autoregressive Hamiltonian Neural Network, which incorporates autoregressive
prediction errors into the training objective. Additionally, we employ Bayesian
data assimilation to refine predictions in real-time using online measurement
data. Numerical experiments on a spring-mass system and highly elliptic orbits
under gravitational perturbations demonstrate the effectiveness of the proposed
method, highlighting its potential for accurate and robust long-term
predictions.
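The energy-preserving idea the abstract leans on can be illustrated with a
plain symplectic integrator. This is a stdlib-only sketch of the spring-mass
test case, not the paper's neural network; the step size and parameter values
are illustrative assumptions:

```python
import math

def hamiltonian(q, p, k=1.0, m=1.0):
    # Total energy H = p^2/(2m) + k*q^2/2 for a spring-mass system.
    return p * p / (2 * m) + 0.5 * k * q * q

def leapfrog_step(q, p, dt, k=1.0, m=1.0):
    # Symplectic (leapfrog) update: half-kick, drift, half-kick.
    p -= 0.5 * dt * k * q      # dH/dq = k*q
    q += dt * p / m            # dH/dp = p/m
    p -= 0.5 * dt * k * q
    return q, p

q, p = 1.0, 0.0
h0 = hamiltonian(q, p)
for _ in range(10_000):
    q, p = leapfrog_step(q, p, dt=0.01)
# Energy drift stays tiny over long horizons, unlike a naive Euler update.
print(abs(hamiltonian(q, p) - h0) < 1e-3)
```

A learned Hamiltonian model would replace the analytic `hamiltonian` above
with a network while keeping the same conserved-energy structure.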
|
2501.18812
|
Estimating the Probability of Sampling a Trained Neural Network at
Random
|
cs.LG
|
We present an algorithm for estimating the probability mass, under a Gaussian
or uniform prior, of a region in neural network parameter space corresponding
to a particular behavior, such as achieving test loss below some threshold.
When the prior is uniform, this problem is equivalent to measuring the volume
of a region. We show empirically and theoretically that existing algorithms for
estimating volumes in parameter space underestimate the true volume by millions
of orders of magnitude. We find that this error can be dramatically reduced,
but not entirely eliminated, with an importance sampling method using gradient
information that is already provided by popular optimizers. The negative
logarithm of this probability can be interpreted as a measure of a network's
information content, in accordance with minimum description length (MDL)
principles and rate-distortion theory. As expected, this quantity increases
during language model training. We also find that badly-generalizing behavioral
regions are smaller, and therefore less likely to be sampled at random,
demonstrating an inductive bias towards well-generalizing functions.
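For intuition, the quantity being estimated (prior mass of a behavioral
region, and its negative log as an information content) can be sketched with a
naive Monte Carlo baseline. The two-parameter `behavior` predicate below is a
made-up stand-in, and the paper's point is precisely that this naive estimator
breaks down in high dimensions, motivating gradient-based importance sampling:

```python
import math
import random

random.seed(0)

def behavior(theta):
    # Toy predicate standing in for e.g. "test loss below a threshold":
    # here, a 2-parameter model whose squared norm is below 0.5.
    return theta[0] ** 2 + theta[1] ** 2 < 0.5

def mc_log_prob(n=100_000, dim=2):
    # Naive Monte Carlo estimate of the behavioral region's mass under a
    # standard Gaussian prior, plus -log p as information content (nats).
    hits = sum(behavior([random.gauss(0, 1) for _ in range(dim)])
               for _ in range(n))
    p = hits / n
    return p, -math.log(p)

p, nll = mc_log_prob()
print(round(p, 3), round(nll, 2))
```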
|
2501.18815
|
An Adversarial Approach to Register Extreme Resolution Tissue Cleared 3D
Brain Images
|
eess.IV cs.AI cs.CV
|
We developed a generative patch-based 3D image registration model that can
register very high resolution images obtained from a biochemical process named
tissue clearing. The tissue clearing process removes lipids and fats from the
tissue and makes it transparent. When cleared tissues are imaged with
light-sheet fluorescence microscopy, the resulting images give a clear window
into the cellular activities and dynamics inside the tissue. The images
obtained are thus very rich in cellular information, and hence their
resolution is extremely high (e.g., 2560x2160x676). Analyzing images at such
high resolution is a difficult task for any image analysis pipeline. Image
registration is a common step in image analysis pipelines when comparisons
between images are required. Traditional image registration methods fail to
register images of such extent. In this paper we address this very high
resolution image registration problem by proposing a patch-based generative
network named InvGAN. Our proposed network can register very high resolution
tissue-cleared images. The tissue-cleared dataset used in this paper was
obtained from a tissue clearing protocol named CUBIC. We compared our method
with both traditional and deep learning-based registration methods. Two
different versions of the CUBIC dataset are used, representing two different
resolutions, 25% and 100% respectively. Experiments on the two resolutions
clearly show the impact of resolution on registration quality. At 25%
resolution, our method achieves comparable registration accuracy in a very
short time (approximately 7 minutes). At 100% resolution, most traditional
registration methods fail, except the Elastix registration tool. Elastix takes
28 hours to register, whereas the proposed InvGAN takes only 10 minutes.
|
2501.18816
|
Large Language Models as Common-Sense Heuristics
|
cs.CL cs.AI cs.LG
|
While systems designed for solving planning tasks vastly outperform Large
Language Models (LLMs) in this domain, they usually discard the rich semantic
information embedded within task descriptions. In contrast, LLMs possess
parametrised knowledge across a wide range of topics, enabling them to leverage
the natural language descriptions of planning tasks in their solutions.
However, current research in this direction faces challenges in generating
correct and executable plans. Furthermore, these approaches depend on the LLM
to output solutions in an intermediate language, which must be translated into
the representation language of the planning task. We introduce a novel planning
method, which leverages the parametrised knowledge of LLMs by using their
output as a heuristic for Hill-Climbing Search. This approach is further
enhanced by prompting the LLM to generate a solution estimate to guide the
search. Our method outperforms the task success rate of similar systems within
a common household environment by 22 percentage points, with consistently
executable plans. All actions are encoded in their original representation,
demonstrating that strong results can be achieved without an intermediate
language, thus eliminating the need for a translation step.
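The search loop described above can be sketched generically; here a numeric
stub plays the role of the LLM-provided heuristic, and the toy task and all
names are illustrative, not the paper's environment:

```python
def hill_climb(start, neighbors, heuristic, is_goal, max_steps=100):
    """Greedy hill-climbing: repeatedly move to the lowest-scoring neighbor.
    In the paper the heuristic would come from an LLM scoring candidate
    states; here it is a plain numeric function."""
    state = start
    for _ in range(max_steps):
        if is_goal(state):
            return state
        candidates = neighbors(state)
        if not candidates:
            break
        best = min(candidates, key=heuristic)
        if heuristic(best) >= heuristic(state):
            break  # local minimum: no neighbor looks better
        state = best
    return state

# Toy task: reach 0 on the integer line; heuristic = distance to goal.
result = hill_climb(
    start=7,
    neighbors=lambda s: [s - 1, s + 1],
    heuristic=lambda s: abs(s),
    is_goal=lambda s: s == 0,
)
print(result)  # reaches the goal state 0
```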
|
2501.18817
|
Bridging the Reasoning Gap: Small LLMs Can Plan with Generalised
Strategies
|
cs.AI cs.CL
|
Recent advancements in the reasoning skills of Large Language Models (LLMs)
demonstrate an increase in the ability of LLMs to solve simple planning tasks.
However, as long as the driving force behind improved reasoning capability is
the size and complexity of the model, the financial and computational costs
associated with running them will also increase. This trend raises questions
about continued accessibility and whether these improvements will increase at
the same pace as models continue to grow in size and expense. We propose two
approaches to enhance the reasoning ability of less resource-intensive LLMs.
(1) Provide them with a generalised strategy for solving tasks within a given
domain, generated by a more resource-intensive LLM. (2) Exploit their
cost-effectiveness by iteratively prompting these models to correct errors in
their proposed solutions. Our empirical results from planning and mathematical
reasoning tasks demonstrate that these methods improve the performance of less
resource-intensive LLMs to levels comparable with their more resource-intensive
counterparts, at a fraction of the cost. Additionally, we show that the
utilisation of generalised strategies in our experiments reduced the cost of
the less resource-intensive model by nearly 30 percent on average.
|
2501.18821
|
An Optimal Cascade Feature-Level Spatiotemporal Fusion Strategy for
Anomaly Detection in CAN Bus
|
cs.LG cs.AI cs.CR
|
Autonomous vehicles represent a revolutionary advancement driven by the
integration of artificial intelligence within intelligent transportation
systems. However, they remain vulnerable due to the absence of robust security
mechanisms in the Controller Area Network (CAN) bus. In order to mitigate the
security issue, many machine learning models and strategies have been proposed,
which primarily focus on a subset of dominant patterns of anomalies and lack
rigorous evaluation in terms of reliability and robustness. Therefore, to
address the limitations of previous works and mitigate the security
vulnerability in CAN bus, the current study develops a model based on the
intrinsic nature of the problem to cover all dominant patterns of anomalies. To
achieve this, a cascade feature-level fusion strategy optimized by a
two-parameter genetic algorithm is proposed to combine temporal and spatial
information. Subsequently, the model is evaluated using a paired t-test to
ensure reliability and robustness. Finally, a comprehensive comparative
analysis conducted on two widely used datasets shows that the proposed
model outperforms other models and achieves superior accuracy and F1-score,
demonstrating the best performance among all models presented to date.
|
2501.18823
|
Transcoders Beat Sparse Autoencoders for Interpretability
|
cs.LG
|
Sparse autoencoders (SAEs) extract human-interpretable features from deep
neural networks by transforming their activations into a sparse, higher
dimensional latent space, and then reconstructing the activations from these
latents. Transcoders are similar to SAEs, but they are trained to reconstruct
the output of a component of a deep network given its input. In this work, we
compare the features found by transcoders and SAEs trained on the same model
and data, finding that transcoder features are significantly more
interpretable. We also propose skip transcoders, which add an affine skip
connection to the transcoder architecture, and show that these achieve lower
reconstruction loss with no effect on interpretability.
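A minimal forward-pass sketch of a skip transcoder, assuming the usual
SAE-style formulation (ReLU-sparse latents, a decoder, plus an affine skip
term). The dimensions and random weights are toy stand-ins, and training
against the MLP component's outputs is omitted:

```python
import random

random.seed(0)
D, H = 4, 16  # model width and (wider) transcoder latent width

def rand_mat(rows, cols, scale=0.2):
    return [[random.uniform(-scale, scale) for _ in range(cols)]
            for _ in range(rows)]

def matvec(M, v):
    return [sum(m_ij * v_j for m_ij, v_j in zip(row, v)) for row in M]

W_enc, W_dec = rand_mat(H, D), rand_mat(D, H)
W_skip = rand_mat(D, D)  # the affine skip connection of skip transcoders

def transcoder(x, skip=True):
    # Encode into a sparse, higher-dimensional latent (ReLU keeps it
    # sparse), decode back, and optionally add the skip term W_skip @ x.
    # Transcoders are trained to map a component's input to its output;
    # only the forward pass is shown here.
    z = [max(0.0, a) for a in matvec(W_enc, x)]
    out = matvec(W_dec, z)
    if skip:
        out = [o + s for o, s in zip(out, matvec(W_skip, x))]
    return z, out

x = [random.uniform(-1, 1) for _ in range(D)]
z, y = transcoder(x)
print(len(z), len(y), sum(1 for a in z if a == 0.0) > 0)
```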
|
2501.18824
|
Memory-Efficient Fine-Tuning of Transformers via Token Selection
|
cs.CL cs.LG
|
Fine-tuning provides an effective means to specialize pre-trained models for
various downstream tasks. However, fine-tuning often incurs high memory
overhead, especially for large transformer-based models, such as LLMs. While
existing methods may reduce certain parts of the memory required for
fine-tuning, they still require caching all intermediate activations computed
in the forward pass to update weights during the backward pass. In this work,
we develop TokenTune, a method to reduce memory usage, specifically the memory
to store intermediate activations, in the fine-tuning of transformer-based
models. During the backward pass, TokenTune approximates the gradient
computation by backpropagating through just a subset of input tokens. Thus,
with TokenTune, only a subset of intermediate activations are cached during the
forward pass. Also, TokenTune can be easily combined with existing methods like
LoRA, further reducing the memory cost. We evaluate our approach on pre-trained
transformer models with up to billions of parameters, considering the
performance on multiple downstream tasks such as text classification and
question answering in a few-shot learning setup. Overall, TokenTune achieves
performance on par with full fine-tuning or representative memory-efficient
fine-tuning methods, while greatly reducing the memory footprint, especially
when combined with other methods with complementary memory reduction
mechanisms. We hope that our approach will facilitate the fine-tuning of large
transformers, in specializing them for specific domains or co-training them
with other neural components from a larger system. Our code is available at
https://github.com/facebookresearch/tokentune.
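The core approximation, computing gradients from a sampled subset of tokens,
can be illustrated on a scalar toy model. Everything below is a simplified
stand-in for the method, which operates on transformer activations rather
than a linear model:

```python
import random

random.seed(0)

def token_grad(w, x_tokens, y_tokens):
    # Per-token gradient of mean squared error for a scalar linear model
    # y_hat = w * x, averaged over the given tokens.
    return sum(2 * (w * x - y) * x
               for x, y in zip(x_tokens, y_tokens)) / len(x_tokens)

# Toy "sequence": 64 tokens with targets y = 3x.
xs = [random.uniform(-1, 1) for _ in range(64)]
ys = [3 * x for x in xs]

w = 0.0
full = token_grad(w, xs, ys)

# TokenTune-style approximation: backpropagate through a sampled token
# subset, so only those tokens' activations would need caching in the
# forward pass.
idx = random.sample(range(64), k=16)
subset = token_grad(w, [xs[i] for i in idx], [ys[i] for i in idx])

print(round(full, 2), round(subset, 2))
```

The subset estimate tracks the full gradient in expectation while touching a
quarter of the tokens, which is the source of the memory savings.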
|
2501.18826
|
Structural Embedding Projection for Contextual Large Language Model
Inference
|
cs.CL
|
Structured embedding transformations offer a promising approach for enhancing
the efficiency and coherence of language model inference. The introduction of
Structural Embedding Projection (SEP) provides a mechanism for refining token
representations through projection matrices that integrate hierarchical and
relational dependencies. The mathematical formulation of SEP enables embedding
spaces to capture structured contextual relationships, thereby improving
semantic fidelity without significantly increasing computational overhead.
Experimental evaluations conducted on a range of linguistic datasets revealed
that SEP contributed to reductions in perplexity and enhanced contextual
coherence, demonstrating its potential to refine language model outputs.
Computational efficiency assessments highlighted variations across different
datasets, suggesting that the integration of structured embeddings introduced
dataset-dependent trade-offs between inference speed and representational
richness. The qualitative analysis of generated responses indicated that SEP
enhanced narrative consistency and topic alignment, leading to improved fluency
in multi-sentence text generation. The modifications to embedding layers
required precise optimization to ensure stable training dynamics, as the
introduction of structured transformations altered the traditional
representation-learning process. The architectural adjustments necessary for
SEP implementation influenced inference latency and memory consumption,
requiring a balance between efficiency gains and additional processing demands.
The impact of SEP on lexical diversity suggested that embedding modifications
influenced the model's vocabulary usage, reflecting a more context-aware
selection of generated tokens.
|
2501.18834
|
Pitfalls of defacing whole-head MRI: re-identification risk with
diffusion models and compromised research potential
|
eess.IV cs.AI cs.CV
|
Defacing is often applied to head magnetic resonance image (MRI) datasets
prior to public release to address privacy concerns. The alteration of facial
and nearby voxels has provoked discussions about the true capability of these
techniques to ensure privacy as well as their impact on downstream tasks. With
advancements in deep generative models, the extent to which defacing can
protect privacy is uncertain. Additionally, while the altered voxels are known
to contain valuable anatomical information, their potential to support research
beyond the anatomical regions directly affected by defacing remains uncertain.
To evaluate these considerations, we develop a refacing pipeline that recovers
faces in defaced head MRIs using cascaded diffusion probabilistic models
(DPMs). The DPMs are trained on images from 180 subjects and tested on images
from 484 unseen subjects, 469 of whom are from a different dataset. To assess
whether the altered voxels in defacing contain universally useful information,
we also predict computed tomography (CT)-derived skeletal muscle radiodensity
from facial voxels in both defaced and original MRIs. The results show that
DPMs can generate high-fidelity faces that resemble the original faces from
defaced images, with surface distances to the original faces significantly
smaller than those of a population average face (p < 0.05). This performance
also generalizes well to previously unseen datasets. For skeletal muscle
radiodensity predictions, using defaced images results in significantly weaker
Spearman's rank correlation coefficients compared to using original images (p <
10^-4). For shin muscle, the correlation is statistically significant (p < 0.05)
when using original images but not statistically significant (p > 0.05) when
any defacing method is applied, suggesting that defacing might not only fail to
protect privacy but also eliminate valuable information.
|
2501.18835
|
Early Diagnosis and Severity Assessment of Weligama Coconut Leaf Wilt
Disease and Coconut Caterpillar Infestation using Deep Learning-based Image
Processing Techniques
|
cs.CV
|
Global Coconut (Cocos nucifera (L.)) cultivation faces significant
challenges, including yield loss, due to pest and disease outbreaks. In
particular, Weligama Coconut Leaf Wilt Disease (WCWLD) and Coconut Caterpillar
Infestation (CCI) damage coconut trees, causing severe coconut production loss
in Sri Lanka and nearby coconut-producing countries. Currently, both WCWLD and
CCI are detected through on-field human observations, a process that is not
only time-consuming but also limits the early detection of infections. This
paper presents a study conducted in Sri Lanka, demonstrating the effectiveness
of employing transfer learning-based Convolutional Neural Network (CNN) and
Mask Region-based-CNN (Mask R-CNN) to identify WCWLD and CCI at their early
stages and to assess disease progression. Further, this paper presents the use
of the You Only Look Once (YOLO) object detection model to count the number of
caterpillars distributed on leaves with CCI. The introduced methods were tested
and validated using datasets collected from Matara, Puttalam, and Makandura,
Sri Lanka. The results show that the proposed methods identify WCWLD and CCI
with an accuracy of 90% and 95%, respectively. In addition, the proposed WCWLD
disease severity identification method classifies the severity with an accuracy
of 97%. Furthermore, the accuracies of the object detection models for
calculating the number of caterpillars in the leaflets were: YOLOv5-96.87%,
YOLOv8-96.1%, and YOLO11-95.9%.
|
2501.18836
|
Transfer Learning for Nonparametric Contextual Dynamic Pricing
|
cs.LG math.ST stat.ME stat.TH
|
Dynamic pricing strategies are crucial for firms to maximize revenue by
adjusting prices based on market conditions and customer characteristics.
However, designing optimal pricing strategies becomes challenging when
historical data are limited, as is often the case when launching new products
or entering new markets. One promising approach to overcome this limitation is
to leverage information from related products or markets to inform the focal
pricing decisions. In this paper, we explore transfer learning for
nonparametric contextual dynamic pricing under a covariate shift model, where
the marginal distributions of covariates differ between source and target
domains while the reward functions remain the same. We propose a novel Transfer
Learning for Dynamic Pricing (TLDP) algorithm that can effectively leverage
pre-collected data from a source domain to enhance pricing decisions in the
target domain. The regret upper bound of TLDP is established under a simple
Lipschitz condition on the reward function. To establish the optimality of
TLDP, we further derive a matching minimax lower bound, which includes the
target-only scenario as a special case and is presented for the first time in
the literature. Extensive numerical experiments validate our approach,
demonstrating its superiority over existing methods and highlighting its
practical utility in real-world applications.
|
2501.18837
|
Constitutional Classifiers: Defending against Universal Jailbreaks
across Thousands of Hours of Red Teaming
|
cs.CL cs.AI cs.CR cs.LG
|
Large language models (LLMs) are vulnerable to universal jailbreaks-prompting
strategies that systematically bypass model safeguards and enable users to
carry out harmful processes that require many model interactions, like
manufacturing illegal substances at scale. To defend against these attacks, we
introduce Constitutional Classifiers: safeguards trained on synthetic data,
generated by prompting LLMs with natural language rules (i.e., a constitution)
specifying permitted and restricted content. In over 3,000 estimated hours of
red teaming, no red teamer found a universal jailbreak that could extract
information from an early classifier-guarded LLM at a similar level of detail
to an unguarded model across most target queries. On automated evaluations,
enhanced classifiers demonstrated robust defense against held-out
domain-specific jailbreaks. These classifiers also maintain deployment
viability, with an absolute 0.38% increase in production-traffic refusals and a
23.7% inference overhead. Our work demonstrates that defending against
universal jailbreaks while maintaining practical deployment viability is
tractable.
|
2501.18838
|
Partially Rewriting a Transformer in Natural Language
|
cs.LG cs.CL
|
The greatest ambition of mechanistic interpretability is to completely
rewrite deep neural networks in a format that is more amenable to human
understanding, while preserving their behavior and performance. In this paper,
we attempt to partially rewrite a large language model using simple natural
language explanations. We first approximate one of the feedforward networks in
the LLM with a wider MLP with sparsely activating neurons - a transcoder - and
use an automated interpretability pipeline to generate explanations for these
neurons. We then replace the first layer of this sparse MLP with an LLM-based
simulator, which predicts the activation of each neuron given its explanation
and the surrounding context. Finally, we measure the degree to which these
modifications distort the model's final output. With our pipeline, the model's
increase in loss is statistically similar to entirely replacing the sparse MLP
output with the zero vector. We employ the same protocol, this time using a
sparse autoencoder, on the residual stream of the same layer and obtain similar
results. These results suggest that more detailed explanations are needed to
improve performance substantially above the zero ablation baseline.
|
2501.18839
|
Social Cyber Geographical Worldwide Inventory of Bots
|
cs.SI
|
Social Cyber Geography is the space in the digital cyber realm that is
produced through social relations. Communication in the social media ecosystem
happens not only because of human interactions, but is also fueled by
algorithmically controlled bot agents. Most studies have not looked at the
social cyber geography of bots because they focus on bot activity within a
single country. Since creating a bot uses universal programming technology,
how prevalent are bots throughout the world? To quantify bot
activity worldwide, we perform a multilingual and geospatial analysis on a
large dataset of social data collected from X during the Coronavirus pandemic
in 2021. This pandemic affected most of the world, and thus is a common topic
of discussion. Our dataset consists of ~100 million posts generated by ~31
million users. Most bot studies focus only on English-speaking countries,
because most bot detection algorithms are built for the English language.
However, only 47% of the bots write in English. To accommodate multiple
languages in
our bot detection algorithm, we built Multilingual BotBuster, a multi-language
bot detection algorithm to identify the bots in this diverse dataset. We also
create a Geographical Location Identifier to swiftly identify the countries a
user affiliates with in their profile description. Our results show that bots
can appear
to move from one country to another, but the language they write in remains
relatively constant. Bots distribute narratives on distinct topics related to
their self-declared country affiliation. Finally, despite the diverse
distribution of bot locations around the world, the proportion of bots per
country is about 20%. Our work stresses the importance of a united analysis of
the cyber and physical realms, where we combine both spheres to inventory the
language and location of social media bots and understand communication
strategies.
|
2501.18841
|
Trading Inference-Time Compute for Adversarial Robustness
|
cs.LG cs.CR
|
We conduct experiments on the impact of increasing inference-time compute in
reasoning models (specifically OpenAI o1-preview and o1-mini) on their
robustness to adversarial attacks. We find that across a variety of attacks,
increased inference-time compute leads to improved robustness. In many cases
(with important exceptions), the fraction of model samples where the attack
succeeds tends to zero as the amount of test-time compute grows. We perform no
adversarial training for the tasks we study, and we increase inference-time
compute by simply allowing the models to spend more compute on reasoning,
independently of the form of attack. Our results suggest that inference-time
compute has the potential to improve adversarial robustness for Large Language
Models. We also explore new attacks directed at reasoning models, as well as
settings where inference-time compute does not improve reliability, and
speculate on the reasons for these as well as ways to address them.
|
2501.18845
|
Text Data Augmentation for Large Language Models: A Comprehensive Survey
of Methods, Challenges, and Opportunities
|
cs.CL
|
The increasing size and complexity of pre-trained language models have
demonstrated superior performance in many applications, but they usually
require large training datasets to be adequately trained. Insufficient training
sets could unexpectedly make the model overfit and fail to cope with complex
tasks. Large language models (LLMs) trained on extensive corpora have prominent
text generation capabilities, which improve the quality and quantity of data
and play a crucial role in data augmentation. Specifically, distinctive prompt
templates are given in personalised tasks to guide LLMs in generating the
required content. Recent promising retrieval-based techniques further improve
the expressive performance of LLMs in data augmentation by introducing external
knowledge to enable them to produce more grounded-truth data. This survey
provides an in-depth analysis of data augmentation in LLMs, classifying the
techniques into Simple Augmentation, Prompt-based Augmentation, Retrieval-based
Augmentation and Hybrid Augmentation. We summarise the post-processing
approaches in data augmentation, which contributes significantly to refining
the augmented data and enabling the model to filter out unfaithful content.
Then, we provide the common tasks and evaluation metrics. Finally, we introduce
existing challenges and future opportunities that could bring further
improvement to data augmentation.
|
2501.18848
|
Reinforcement Learning of Flexible Policies for Symbolic Instructions
with Adjustable Mapping Specifications
|
cs.RO
|
Symbolic task representation is a powerful tool for encoding human
instructions and domain knowledge. Such instructions guide robots to accomplish
diverse objectives and meet constraints through reinforcement learning (RL).
Most existing methods are based on fixed mappings from environmental states to
symbols. However, in inspection tasks, where equipment conditions must be
evaluated from multiple perspectives to avoid errors of oversight, robots must
fulfill the same symbol from different states. To help robots respond to
flexible symbol mapping, we propose representing symbols and their mapping
specifications separately within an RL policy. This approach requires the RL
policy to learn combinations of symbolic instructions and mapping
specifications, calling for an efficient learning framework. To cope with this
issue, we introduce an approach for learning flexible policies called Symbolic
Instructions with Adjustable Mapping Specifications (SIAMS). This paper
represents symbolic instructions using linear temporal logic (LTL), a formal
language that can be easily integrated into RL. Our method addresses the
diversified completion patterns of instructions by (1) a specification-aware
state modulation, which embeds differences in mapping specifications in state
features, and (2) a symbol-number-based task curriculum, which gradually
provides tasks according to the learning's progress. Evaluations in 3D
simulations with discrete and continuous action spaces demonstrate that our
method outperforms context-aware multitask RL comparisons.
|
2501.18850
|
Equivariant Hypergraph Diffusion for Crystal Structure Prediction
|
cs.CE
|
Crystal Structure Prediction (CSP) remains a fundamental challenge with
significant implications for the development of new materials and the
advancement of various scientific disciplines. Recent developments have shown
that generative models, particularly diffusion models, hold great promise for
CSP. However, traditional graph-based representations, where atomic bonds are
modeled as pairwise graph edges, fail to fully capture the intricate high-order
interactions essential for accurately representing crystal structures. In this
work, we propose a novel approach that utilizes hypergraphs to represent
crystal structures, providing a more expressive abstraction for modeling
multi-way atomic interactions. By adopting hypergraphs, we can effectively
capture complex high-order relationships and symmetries, such as permutation
and periodic translation invariance, which are crucial for characterizing
crystal structures. Building on this representation, we propose the \textbf{E}quivariant
\textbf{H}ypergraph \textbf{Diff}usion Model (\textbf{EH-Diff}), a generative
model designed to take advantage of the symmetry-preserving properties of
hypergraphs. EH-Diff exploits these features to offer an efficient and accurate
method for predicting crystal structures with a strong theoretical
justification to preserve invariance properties. Empirically, we conduct
extensive experiments on four benchmark datasets, and the results demonstrate
that EH-Diff outperforms state-of-the-art CSP methods with only one sample.
|
2501.18851
|
Project-and-Fuse: Improving RGB-D Semantic Segmentation via Graph
Convolution Networks
|
cs.CV
|
Most existing RGB-D semantic segmentation methods focus on the feature level
fusion, including complex cross-modality and cross-scale fusion modules.
However, these methods may cause misalignment problems in the feature fusion
process and counter-intuitive patches in the segmentation results. Inspired by
the popular pixel-node-pixel pipeline, we propose to 1) fuse features from two
modalities in a late fusion style, during which the geometric feature injection
is guided by texture feature prior; 2) employ Graph Neural Networks (GNNs) on
the fused feature to alleviate the emergence of irregular patches by inferring
patch relationships. At the 3D feature extraction stage, we argue that
traditional CNNs are not efficient enough for depth maps, so we encode the
depth map into a normal map, from which CNNs can easily extract object surface
tendencies. At the projection-matrix generation stage, we identify
Biased-Assignment and Ambiguous-Locality issues in the original pipeline.
Therefore, we propose to 1) adopt the Kullback-Leibler loss to ensure that no
important pixel features are missed, which can be viewed as a hard pixel
mining process; 2) connect regions that are close to each other in both the
Euclidean and the semantic space with larger edge weights so that location
information can be considered. Extensive experiments on two public datasets,
NYU-DepthV2 and SUN RGB-D, have shown that our approach can consistently boost
the performance of RGB-D semantic segmentation task.
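The depth-to-normal encoding mentioned above can be sketched generically with finite differences; the paper's exact encoding and camera model are not given in the abstract, so this is an illustrative approximation:

```python
import numpy as np

# Illustrative depth -> surface-normal encoding via image-space gradients
# (not the paper's exact formulation; no camera intrinsics are used).
def depth_to_normals(depth):
    dz_dy, dz_dx = np.gradient(depth)           # gradients along rows, cols
    normals = np.dstack([-dz_dx, -dz_dy, np.ones_like(depth)])
    normals /= np.linalg.norm(normals, axis=2, keepdims=True)
    return normals

# A planar ramp tilted along x: every normal should be identical.
y, x = np.mgrid[0:8, 0:8]
normals = depth_to_normals(x.astype(float))     # depth increases with x
print(normals[4, 4])                            # constant over the plane
```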
|
2501.18852
|
Tracking Error Based Fault Tolerant Scheme for Marine Vehicles with
Thruster Redundancy
|
eess.SY cs.SY
|
This paper proposes an active model-based fault and failure tolerant control
scheme for a class of marine vehicles with thruster redundancy. Unlike widely
used state and parameter estimation methods, where the estimation errors are
utilized to generate residuals, in this paper we directly apply the trajectory
tracking error terms to construct residuals and detect thruster faults and
failures in the steady state of the tracking system. As for identification or
diagnosis, this paper proposes a novel scheme through a detailed examination of
the tracking error trends and the combinations of thruster configurations.
Since this fault detection and identification operates within the same
closed-loop of the tracking control system, control reconfiguration can be
easily achieved by adjusting the weight parameter of the isolated thruster to
minimize tracking errors or residuals. Numerical studies with a real-world
vehicle model are also carried out to verify the effectiveness of the proposed
method.
|
2501.18853
|
Non-Asymptotic Analysis of Subspace Identification for Stochastic
Systems Using Multiple Trajectories
|
eess.SY cs.SY
|
This paper is concerned with the analysis of identification errors for
$n$-dimensional discrete-time Linear Time-Invariant (LTI) systems with $m$
outputs and no external inputs, using Subspace Identification Methods (SIM)
with finite sample data. We provide non-asymptotic high-probability upper
bounds for matrices $A,C$, the Kalman filter gain $K$, and the closed loop
matrix $A-KC $, based on multiple sample trajectories, and further give the
first non-asymptotic high-probability upper bounds for the system poles, which
cover both (marginally) stable systems and unstable systems. We show that, with
high probability, the non-asymptotic estimation errors of these matrices decay
at a rate of at least $ \mathcal{O}(\sqrt{1/N}) $, while the estimation error
of the system poles decays at a rate of at least $
\mathcal{O}(N^{-\frac{1}{2n}}) $, where $ N $ represents the number of sample
trajectories. Furthermore, we prove that SIMs become ill-conditioned when the
ratio $n/m$ is large, regardless of the system parameters. Numerical
experiments are conducted to validate the non-asymptotic results and the
ill-conditioning of SIMs.
|
2501.18855
|
FlexiCrackNet: A Flexible Pipeline for Enhanced Crack Segmentation with
General Features Transferred from SAM
|
cs.CV
|
Automatic crack segmentation is a cornerstone technology for intelligent
visual perception modules in road safety maintenance and structural integrity
systems. Existing deep learning models and "pre-training + fine-tuning"
paradigms often face challenges of limited adaptability in resource-constrained
environments and inadequate scalability across diverse data domains. To
overcome these limitations, we propose FlexiCrackNet, a novel pipeline that
seamlessly integrates traditional deep learning paradigms with the strengths of
large-scale pre-trained models. At its core, FlexiCrackNet employs an
encoder-decoder architecture to extract task-specific features. The lightweight
EdgeSAM's CNN-based encoder is exclusively used as a generic feature extractor,
decoupled from the fixed input size requirements of EdgeSAM. To harmonize
general and domain-specific features, we introduce the Information-Interaction
Gated Attention Mechanism (IGAM), which adaptively fuses multi-level features
to enhance segmentation performance while mitigating irrelevant noise. This
design enables the efficient transfer of general knowledge to crack
segmentation tasks while ensuring adaptability to diverse input resolutions and
resource-constrained environments. Experiments show that FlexiCrackNet
outperforms state-of-the-art methods and excels in zero-shot generalization,
computational efficiency, and segmentation robustness under challenging
scenarios such as blurry inputs, complex backgrounds, and visually ambiguous
artifacts. These advancements underscore the potential of FlexiCrackNet for
real-world applications in automated crack detection and comprehensive
structural health monitoring systems.
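A minimal sketch of the gated-fusion idea behind IGAM, with the module's actual architecture left unspecified by the abstract; the gate here is a plain element-wise sigmoid over hypothetical feature maps:

```python
import numpy as np

# Generic gated fusion of a general feature map and a task-specific one.
# Illustrative only: IGAM's real design is not detailed in the abstract.
def gated_fuse(general, specific):
    # Element-wise sigmoid gate computed from both sources decides how
    # much of the general feature to let through at each position.
    gate = 1.0 / (1.0 + np.exp(-(general + specific)))
    return gate * general + (1.0 - gate) * specific

g = np.array([[1.0, -2.0]])   # hypothetical general features
s = np.array([[0.5, 0.5]])    # hypothetical task-specific features
fused = gated_fuse(g, s)
print(fused)                  # each entry lies between g and s
```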
|
2501.18858
|
BRiTE: Bootstrapping Reinforced Thinking Process to Enhance Language
Model Reasoning
|
cs.LG cs.AI cs.CL
|
Large Language Models (LLMs) have demonstrated remarkable capabilities in
complex reasoning tasks, yet generating reliable reasoning processes remains a
significant challenge. We present a unified probabilistic framework that
formalizes LLM reasoning through a novel graphical model incorporating latent
thinking processes and evaluation signals. Within this framework, we introduce
the Bootstrapping Reinforced Thinking Process (BRiTE) algorithm, which works in
two steps. First, it generates high-quality rationales by approximating the
optimal thinking process through reinforcement learning, using a novel reward
shaping mechanism. Second, it enhances the base LLM by maximizing the joint
probability of rationale generation with respect to the model's parameters.
Theoretically, we demonstrate BRiTE's convergence at a rate of $1/T$ with $T$
representing the number of iterations. Empirical evaluations on math and coding
benchmarks demonstrate that our approach consistently improves performance
across different base models without requiring human-annotated thinking
processes. In addition, BRiTE demonstrates superior performance compared to
existing algorithms that bootstrap thinking processes using alternative methods
such as rejection sampling, and can even match or exceed the results achieved
through supervised fine-tuning with human-annotated data.
|
2501.18859
|
A Deep Spatio-Temporal Architecture for Dynamic Effective Connectivity
Network Analysis Based on Dynamic Causal Discovery
|
cs.LG
|
Dynamic effective connectivity networks (dECNs) reveal the changing directed
brain activity and the dynamic causal influences among brain regions, which
facilitate the identification of individual differences and enhance the
understanding of the human brain. Although existing causal discovery methods
have shown promising results in effective connectivity network analysis, they
often overlook the dynamics of causality and the incorporation of
spatio-temporal information in brain activity data. To address these issues, we
propose a deep spatio-temporal fusion architecture, which employs a dynamic
causal deep encoder to incorporate spatio-temporal information into dynamic
causality modeling, and a dynamic causal deep decoder to verify the discovered
causality. The effectiveness of the proposed method is first illustrated with
simulated data. Then, experimental results from Philadelphia Neurodevelopmental
Cohort (PNC) demonstrate the superiority of the proposed method in inferring
dECNs, which reveal the dynamic evolution of directed flow between brain
regions. The analysis shows differences in dECNs between young adults and
children. Specifically, the directed brain functional networks transition from
fluctuating undifferentiated systems to more stable specialized networks as one
grows. This observation provides further evidence on the modularization and
adaptation of brain networks during development, leading to higher cognitive
abilities observed in young adults.
|
2501.18862
|
Scalable Distributed Reproduction Numbers of Network Epidemics with
Differential Privacy
|
eess.SY cs.SY
|
Reproduction numbers are widely used for the estimation and prediction of
epidemic spreading processes over networks. However, conventional reproduction
numbers of an overall network do not indicate where an epidemic is spreading.
Therefore, we propose a novel notion of local distributed reproduction numbers
to capture the spreading behaviors of each node in a network. We first show how
to compute them and then use them to derive new conditions under which an
outbreak can occur. These conditions are then used to derive new conditions for
the existence, uniqueness, and stability of equilibrium states of the
underlying epidemic model. Building upon these local distributed reproduction
numbers, we define cluster distributed reproduction numbers to model the spread
between clusters composed of nodes. Furthermore, we demonstrate that the local
distributed reproduction numbers can be aggregated into cluster distributed
reproduction numbers at different scales. However, both local and cluster
distributed reproduction numbers can reveal the frequency of interactions
between nodes in a network, which raises privacy concerns. Thus, we next
develop a privacy framework that implements a differential privacy mechanism to
provably protect the frequency of interactions between nodes when computing
distributed reproduction numbers. Numerical experiments show that, even under
differential privacy, the distributed reproduction numbers provide accurate
estimates of the epidemic spread while also providing more insights than
conventional reproduction numbers.
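The privacy step can be illustrated with the standard Laplace mechanism applied to a hypothetical interaction-frequency matrix; the reproduction-number formula below is a simplified stand-in, not the paper's exact definition:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical interaction-frequency matrix W (W[i][j] = contacts j -> i).
W = np.array([[0., 2., 1.],
              [2., 0., 3.],
              [1., 3., 0.]])
beta, gamma = 0.3, 0.2  # illustrative infection and recovery rates

# Simplified node-level "local reproduction number": incoming transmission
# pressure scaled by the recovery rate (not the paper's definition).
local_R = beta * W.sum(axis=1) / gamma

# Laplace mechanism: perturb W before computing, so the released numbers
# provably protect each pairwise interaction frequency.
sensitivity = 1.0   # assume one interaction changes an entry by at most 1
epsilon = 1.0       # privacy budget
W_priv = W + rng.laplace(scale=sensitivity / epsilon, size=W.shape)
local_R_priv = beta * W_priv.sum(axis=1) / gamma

print(local_R)       # exact values
print(local_R_priv)  # privatized estimates
```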
|
2501.18863
|
Adaptivity and Convergence of Probability Flow ODEs in Diffusion
Generative Models
|
stat.ML cs.LG
|
Score-based generative models, which transform noise into data by learning to
reverse a diffusion process, have become a cornerstone of modern generative AI.
This paper contributes to establishing theoretical guarantees for the
probability flow ODE, a widely used diffusion-based sampler known for its
practical efficiency. While a number of prior works address its general
convergence theory, it remains unclear whether the probability flow ODE sampler
can adapt to the low-dimensional structures commonly present in natural image
data. We demonstrate that, with accurate score function estimation, the
probability flow ODE sampler achieves a convergence rate of $O(k/T)$ in total
variation distance (ignoring logarithmic factors), where $k$ is the intrinsic
dimension of the target distribution and $T$ is the number of iterations. This
dimension-free convergence rate improves upon existing results that scale with
the typically much larger ambient dimension, highlighting the ability of the
probability flow ODE sampler to exploit intrinsic low-dimensional structures in
the target distribution for faster sampling.
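A minimal sketch of a probability flow ODE sampler in 1D, where a Gaussian target makes the time-t score available in closed form (so no learned network is needed); all parameters here are illustrative:

```python
import numpy as np

# Probability flow ODE sampler for a 1D Gaussian target N(m0, s0^2) under
# a VP diffusion dx = -x/2 dt + dW (beta = 1).  With a Gaussian target the
# time-t marginal and its score are known exactly, so the sampler can be
# checked without a trained score network.
m0, s0 = 2.0, 0.5
T, steps = 10.0, 2000
dt = T / steps

def score(x, t):
    a = np.exp(-t / 2.0)                  # signal coefficient
    mu_t = m0 * a
    var_t = (s0**2) * a**2 + 1.0 - a**2   # closed-form marginal variance
    return -(x - mu_t) / var_t

rng = np.random.default_rng(0)
x = rng.standard_normal(100_000)          # start near p_T = N(0, 1)

# Integrate dx/dt = -x/2 - score(x, t)/2 backward in time (Euler steps).
for k in range(steps):
    t = T - k * dt
    x = x - dt * (-0.5 * x - 0.5 * score(x, t))

print(x.mean(), x.std())   # close to m0 = 2.0 and s0 = 0.5
```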
|
2501.18864
|
Test-time Loss Landscape Adaptation for Zero-Shot Generalization in
Vision-Language Models
|
cs.CV
|
Test-time adaptation of pre-trained vision-language models has emerged as a
technique for tackling distribution shifts during the test time. Although
existing methods, especially those based on Test-time Prompt Tuning (TPT), have
shown promising results, their high computational cost associated with
parameter optimization presents challenges for scalability and practical
application. This paper unveils the unnecessary nature of backpropagation in
existing methods from a loss landscape perspective. Building on this insight,
this paper proposes a simple yet effective framework called Test-time Loss
Landscape Adaptation (TLLA). TLLA leverages the relative position between the
training minimum and test loss landscapes to guide the adaptation process,
avoiding the update of model parameters at test time. Specifically, it
consists of two main stages: In the prompt tuning stage, a Sharpness-Aware
Prompt Tuning (SAPT) method is introduced to identify the training flat
minimum, setting the foundation for the subsequent test-time adaptation; In the
test stage, a Sharpness-based Test Sample Selection (STSS) approach is utilized
to ensure the alignment of flat minima within the training loss landscape and
each augmented test sample's loss landscape. Extensive experiments on both
domain generalization and cross-dataset benchmarks demonstrate that TLLA
achieves state-of-the-art performances while significantly reducing
computational overhead. Notably, TLLA surpasses TPT by an average of 5.32\% and
6.98\% on four ImageNet variant datasets when employing ResNet50 and ViT-B/16
image encoders, respectively. The code will be available soon.
|
2501.18865
|
REG: Rectified Gradient Guidance for Conditional Diffusion Models
|
cs.CV cs.AI cs.LG
|
Guidance techniques are simple yet effective for improving conditional
generation in diffusion models. Albeit their empirical success, the practical
implementation of guidance diverges significantly from its theoretical
motivation. In this paper, we reconcile this discrepancy by replacing the
scaled marginal distribution target, which we prove theoretically invalid, with
a valid scaled joint distribution objective. Additionally, we show that the
established guidance implementations are approximations to the intractable
optimal solution under no future foresight constraint. Building on these
theoretical insights, we propose rectified gradient guidance (REG), a versatile
enhancement designed to boost the performance of existing guidance methods.
Experiments in 1D and 2D settings demonstrate that REG provides a better approximation
to the optimal solution than prior guidance techniques, validating the proposed
theoretical framework. Extensive experiments on class-conditional ImageNet and
text-to-image generation tasks show that incorporating REG consistently
improves FID and Inception/CLIP scores across various settings compared to its
absence.
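The standard guidance combination whose gap from theory the paper analyzes can be written in a few lines; REG's own correction term is not reproduced here:

```python
import numpy as np

# Standard classifier-free guidance combination of conditional and
# unconditional noise predictions (the common practical implementation;
# REG's rectification is not shown).
def guided_eps(eps_cond, eps_uncond, w):
    # w = 0 -> unconditional, w = 1 -> conditional, w > 1 -> amplified
    return eps_uncond + w * (eps_cond - eps_uncond)

eps_c = np.array([1.0, -0.5])   # hypothetical conditional prediction
eps_u = np.array([0.2, 0.1])    # hypothetical unconditional prediction
print(guided_eps(eps_c, eps_u, 2.0))   # [1.8, -1.1]
```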
|
2501.18867
|
UP-VLA: A Unified Understanding and Prediction Model for Embodied Agent
|
cs.CV cs.AI
|
Recent advancements in Vision-Language-Action (VLA) models have leveraged
pre-trained Vision-Language Models (VLMs) to improve the generalization
capabilities. VLMs, typically pre-trained on vision-language understanding
tasks, provide rich semantic knowledge and reasoning abilities. However, prior
research has shown that VLMs often focus on high-level semantic content and
neglect low-level features, limiting their ability to capture detailed spatial
information and understand physical dynamics. These aspects, which are crucial
for embodied control tasks, remain underexplored in existing pre-training
paradigms. In this paper, we investigate the training paradigm for VLAs, and
introduce \textbf{UP-VLA}, a \textbf{U}nified VLA model trained with both
multi-modal \textbf{U}nderstanding and future \textbf{P}rediction objectives,
enhancing both high-level semantic comprehension and low-level spatial
understanding. Experimental results show that UP-VLA achieves a 33% improvement
on the Calvin ABC-D benchmark compared to the previous state-of-the-art method.
Additionally, UP-VLA demonstrates improved success rates in real-world
manipulation tasks, particularly those requiring precise spatial information.
|
2501.18870
|
Continuous-Time Analysis of Federated Averaging
|
cs.LG cs.DC math.OC
|
Federated averaging (FedAvg) is a popular algorithm for horizontal federated
learning (FL), where samples are gathered across different clients and are not
shared with each other or a central server. Extensive convergence analysis of
FedAvg exists for the discrete iteration setting, guaranteeing convergence for
a range of loss functions and varying levels of data heterogeneity. We extend
this analysis to the continuous-time setting where the global weights evolve
according to a multivariate stochastic differential equation (SDE), which is
the first time FedAvg has been studied from the continuous-time perspective. We
use techniques from stochastic processes to establish convergence guarantees
under different loss functions, some of which are more general than existing
work in the discrete setting. We also provide conditions for which FedAvg
updates to the server weights can be approximated as normal random variables.
Finally, we use the continuous-time formulation to reveal generalization
properties of FedAvg.
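The FedAvg iteration being analyzed can be sketched on simple quadratic client losses, for which the algorithm's fixed point is the mean of the client optima:

```python
import numpy as np

# Minimal FedAvg on quadratic client losses f_i(w) = ||w - c_i||^2 / 2,
# whose global minimum is the mean of the client optima c_i.  Purely an
# illustration of the discrete iteration the paper lifts to an SDE.
rng = np.random.default_rng(0)
c = rng.normal(size=(5, 3))           # 5 clients, 3-dim weights
w = np.zeros(3)                       # global server weights
lr, local_steps = 0.1, 10

for rnd in range(200):                # communication rounds
    local = []
    for ci in c:
        wi = w.copy()
        for _ in range(local_steps):  # local descent (full gradient here)
            wi -= lr * (wi - ci)      # grad of ||w - c_i||^2 / 2
        local.append(wi)
    w = np.mean(local, axis=0)        # server averages client weights

print(np.allclose(w, c.mean(axis=0), atol=1e-6))   # True
```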
|
2501.18871
|
Neural SDEs as a Unified Approach to Continuous-Domain Sequence Modeling
|
cs.LG stat.ML
|
Inspired by the ubiquitous use of differential equations to model continuous
dynamics across diverse scientific and engineering domains, we propose a novel
and intuitive approach to continuous sequence modeling. Our method interprets
time-series data as \textit{discrete samples from an underlying continuous
dynamical system}, and models its time evolution using Neural Stochastic
Differential Equation (Neural SDE), where both the flow (drift) and diffusion
terms are parameterized by neural networks. We derive a principled maximum
likelihood objective and a \textit{simulation-free} scheme for efficient
training of our Neural SDE model. We demonstrate the versatility of our
approach through experiments on sequence modeling tasks across both embodied
and generative AI. Notably, to the best of our knowledge, this is the first
work to show that SDE-based continuous-time modeling also excels in such
complex scenarios, and we hope that our work opens up new avenues for research
of SDE models in high-dimensional and temporally intricate domains.
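An Euler-Maruyama simulation of an SDE illustrates the modeling primitive; in a Neural SDE the drift f and diffusion g below would be neural networks, whereas here they are fixed Ornstein-Uhlenbeck coefficients so the sketch stays self-contained:

```python
import numpy as np

# Euler-Maruyama simulation of dx = f(x) dt + g(x) dW.  Fixed f and g
# stand in for the learned networks of a Neural SDE.
def f(x):            # drift: pull toward 0 (OU with theta = 1)
    return -1.0 * x

def g(x):            # diffusion: constant noise scale sigma = 0.5
    return 0.5 * np.ones_like(x)

rng = np.random.default_rng(0)
dt, steps, n_paths = 0.01, 1000, 10_000
x = np.full(n_paths, 3.0)            # initial state for all paths

for _ in range(steps):
    dW = rng.normal(scale=np.sqrt(dt), size=n_paths)
    x = x + f(x) * dt + g(x) * dW

# Stationary OU statistics: mean 0, variance sigma^2 / (2 theta) = 0.125
print(x.mean(), x.var())
```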
|