| id | title | categories | abstract |
|---|---|---|---|
2502.01311
|
TFBS-Finder: Deep Learning-based Model with DNABERT and Convolutional
Networks to Predict Transcription Factor Binding Sites
|
cs.LG cs.AI q-bio.GN
|
Transcription factors are proteins that regulate the expression of genes by
binding to specific genomic regions known as Transcription Factor Binding Sites
(TFBSs), typically located in the promoter regions of those genes. Accurate
prediction of these binding sites is essential for understanding the complex
gene regulatory networks underlying various cellular functions. In this regard,
many deep learning models have been developed for such prediction, but there is
still scope for improvement. In this work, we have developed a deep learning
model which uses pre-trained DNABERT, a Convolutional Neural Network (CNN)
module, a Modified Convolutional Block Attention Module (MCBAM), a Multi-Scale
Convolutions with Attention (MSCA) module and an output module. The pre-trained
DNABERT is used for sequence embedding, thereby capturing the long-term
dependencies in the DNA sequences while the CNN, MCBAM and MSCA modules are
useful in extracting higher-order local features. TFBS-Finder is trained and
tested on 165 ENCODE ChIP-seq datasets. We have also performed ablation studies
as well as cross-cell line validations and comparisons with other models. The
experimental results show the superiority of the proposed method in predicting
TFBSs compared to the existing methodologies. The codes and the relevant
datasets are publicly available at
https://github.com/NimishaGhosh/TFBS-Finder/.
|
2502.01312
|
CleanPose: Category-Level Object Pose Estimation via Causal Learning and
Knowledge Distillation
|
cs.CV
|
Category-level object pose estimation aims to recover the rotation,
translation and size of unseen instances within predefined categories. In this
task, deep neural network-based methods have demonstrated remarkable
performance. However, previous studies show they suffer from spurious
correlations caused by "unclean" confounders in models, hindering their
performance on novel instances with significant variations. To address this
issue, we propose CleanPose, a novel approach integrating causal learning and
knowledge distillation to enhance category-level pose estimation. To mitigate
the negative effect of unobserved confounders, we develop a causal inference
module based on front-door adjustment, which promotes unbiased estimation by
reducing potential spurious correlations. Additionally, to further improve
generalization ability, we devise a residual-based knowledge distillation
method that has proven effective in providing comprehensive category
information guidance. Extensive experiments across multiple benchmarks
(REAL275, CAMERA25 and HouseCat6D) highlight the superiority of the proposed
CleanPose over state-of-the-art methods. Code will be released.
|
2502.01313
|
Strategic Classification with Randomised Classifiers
|
cs.LG stat.ML
|
We consider the problem of strategic classification, where a learner must
build a model to classify agents based on features that have been strategically
modified. Previous work in this area has concentrated on the case when the
learner is restricted to deterministic classifiers. In contrast, we perform a
theoretical analysis of an extension to this setting that allows the learner to
produce a randomised classifier. We show that, under certain conditions, the
optimal randomised classifier can achieve better accuracy than the optimal
deterministic classifier, but under no conditions can it be worse. When a
finite set of training data is available, we show that the excess risk of
Strategic Empirical Risk Minimisation over the class of randomised classifiers
is bounded in a similar manner as the deterministic case. In both the
deterministic and randomised cases, the risk of the classifier produced by the
learner converges to that of the corresponding optimal classifier as the volume
of available training data grows. Moreover, this convergence happens at the
same rate as in the i.i.d. case. Our findings are compared with previous
theoretical work analysing the problem of strategic classification. We conclude
that randomisation has the potential to alleviate some issues that could be
faced in practice without introducing any substantial downsides.
|
2502.01315
|
Optimization-based Coordination of Traffic Lights and Automated Vehicles
at Intersections
|
math.OC cs.SY eess.SY
|
This paper tackles the challenge of coordinating traffic lights and automated
vehicles at signalized intersections, formulated as a constrained
finite-horizon optimal control problem. The problem falls into the category of
mixed-integer nonlinear programming, posing challenges for solving large
instances. To address this, we introduce a decomposition approach consisting of
an upper-level problem for traffic light timing allocation and a set of
lower-level problems that generate appropriate commands for automated vehicles
in each intersection movement. By leveraging solutions from the lower-level
problems and employing parametric optimization techniques, we solve the
upper-level problem using a standard sequential quadratic programming approach.
The paper concludes by presenting an illustrative numerical example that
highlights the effectiveness of our algorithm compared to scenarios where no
coordination between traffic lights and vehicles exists.
|
2502.01316
|
Learning Fused State Representations for Control from Multi-View
Observations
|
cs.LG cs.AI
|
Multi-View Reinforcement Learning (MVRL) seeks to provide agents with
multi-view observations, enabling them to perceive the environment with greater
effectiveness and precision. Recent advancements in MVRL focus on extracting
latent representations from multi-view observations and leveraging them in
control tasks. However, it is not straightforward to learn compact and
task-relevant representations, particularly in the presence of redundancy,
distracting information, or missing views. In this paper, we propose Multi-view
Fusion State for Control (MFSC), the first method to incorporate bisimulation
metric learning into MVRL to learn task-relevant representations. Furthermore,
we propose a multi-view mask and latent reconstruction auxiliary task that
exploits shared information across views and, by introducing a mask token,
improves MFSC's robustness to missing views. Extensive experimental results
demonstrate that our method outperforms existing approaches in MVRL tasks. Even
in more realistic scenarios with interference or missing views, MFSC
consistently maintains high performance.
|
2502.01329
|
Benchmarking Different QP Formulations and Solvers for Dynamic
Quadrupedal Walking
|
cs.RO
|
Quadratic Programs (QPs) are widely used in the control of walking robots,
especially in Model Predictive Control (MPC) and Whole-Body Control (WBC). In
both cases, the controller design requires the formulation of a QP and the
selection of a suitable QP solver, both requiring considerable time and
expertise. While computational performance benchmarks exist for QP solvers,
studies comparing optimal combinations of computational hardware (HW), QP
formulation, and solver performance are lacking. In this work, we compare dense
and sparse QP formulations, and multiple solving methods on different HW
architectures, focusing on their computational efficiency in dynamic walking of
four-legged robots using MPC. We introduce the Solve Frequency per Watt (SFPW)
as a performance measure to enable a cross-hardware comparison of the
efficiency of QP solvers. We also benchmark different QP solvers for the WBC
that we use for trajectory stabilization in quadrupedal walking. As a result,
this paper provides recommendations for the selection of QP formulations and
solvers for different HW architectures in walking robots and indicates which
problems in this domain deserve the greatest technical effort in the future.
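Read literally, the metric above normalizes solver throughput by power draw; a plausible definition (the exact formula is not given in the abstract) is

$$\mathrm{SFPW} = \frac{f_{\text{solve}}}{P_{\text{avg}}},$$

where $f_{\text{solve}}$ is the number of QP solves completed per second and $P_{\text{avg}}$ is the average power consumption in watts, so a higher SFPW means more solves per joule of energy.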
|
2502.01330
|
Accelerating Linear Recurrent Neural Networks for the Edge with
Unstructured Sparsity
|
cs.LG cs.NE
|
Linear recurrent neural networks enable powerful long-range sequence modeling
with constant memory usage and time-per-token during inference. These
architectures hold promise for streaming applications at the edge, but
deployment in resource-constrained environments requires hardware-aware
optimizations to minimize latency and energy consumption. Unstructured sparsity
offers a compelling solution, enabling substantial reductions in compute and
memory requirements--when accelerated by compatible hardware platforms. In this
paper, we conduct a scaling study to investigate the Pareto front of
performance and efficiency across inference compute budgets. We find that
highly sparse linear RNNs consistently achieve better efficiency-performance
trade-offs than dense baselines, with 2x less compute and 36% less memory at
iso-accuracy. Our models achieve state-of-the-art results on a real-time
streaming task for audio denoising. By quantizing our sparse models to
fixed-point arithmetic and deploying them on the Intel Loihi 2 neuromorphic
chip for real-time processing, we translate model compression into tangible
gains of 42x lower latency and 149x lower energy consumption compared to a
dense model on an edge GPU. Our findings showcase the transformative potential
of unstructured sparsity, paving the way for highly efficient recurrent neural
networks in real-world, resource-constrained environments.
|
2502.01332
|
A two-disk approach to the synthesis of coherent passive equalizers for
linear quantum systems
|
quant-ph cs.SY eess.SY math.OC
|
The coherent equalization problem consists in designing a quantum system
acting as a mean-square near optimal filter for a given quantum communication
channel. The paper develops an improved method for the synthesis of transfer
functions for such equalizing filters, based on a linear quantum system model
of the channel. The method draws on a connection with the two-disk problem of
${H}_{\infty}$ control for classical (i.e., nonquantum) linear uncertain
systems. Compared with previous methods, the proposed approach applies to a
broader class of linear quantum communication channels.
|
2502.01334
|
Deep generative computed perfusion-deficit mapping of ischaemic stroke
|
q-bio.QM cs.CV q-bio.NC
|
Focal deficits in ischaemic stroke result from impaired perfusion downstream
of a critical vascular occlusion. While parenchymal lesions are traditionally
used to predict clinical deficits, the underlying pattern of disrupted
perfusion provides information upstream of the lesion, potentially yielding
earlier predictive and localizing signals. Such perfusion maps can be derived
from routine CT angiography (CTA) widely deployed in clinical practice.
Analysing computed perfusion maps from 1,393 CTA-imaged patients with acute
ischaemic stroke, we use deep generative inference to localise neural
substrates of NIHSS sub-scores. We show that our approach replicates known
lesion-deficit relations without knowledge of the lesion itself and reveals
novel neural dependents. The high anatomical fidelity achieved suggests that
acute CTA-derived computed perfusion maps may be of substantial clinical and
scientific value in rich phenotyping of acute stroke. Using only
hyperacute imaging, deep generative inference could power highly expressive
models of functional anatomical relations in ischaemic stroke within the
pre-interventional window.
|
2502.01335
|
ConceptVAE: Self-Supervised Fine-Grained Concept Disentanglement from 2D
Echocardiographies
|
cs.CV
|
While traditional self-supervised learning methods improve performance and
robustness across various medical tasks, they rely on single-vector embeddings
that may not capture fine-grained concepts such as anatomical structures or
organs. The ability to identify such concepts and their characteristics without
supervision has the potential to improve pre-training methods, and enable novel
applications such as fine-grained image retrieval and concept-based outlier
detection. In this paper, we introduce ConceptVAE, a novel pre-training
framework that detects and disentangles fine-grained concepts from their style
characteristics in a self-supervised manner. We present a suite of loss terms
and model architecture primitives designed to discretise input data into a
preset number of concepts along with their local style. We validate ConceptVAE
both qualitatively and quantitatively, demonstrating its ability to detect
fine-grained anatomical structures such as blood pools and septum walls from 2D
cardiac echocardiographies. Quantitatively, ConceptVAE outperforms traditional
self-supervised methods in tasks such as region-based instance retrieval,
semantic segmentation, out-of-distribution detection, and object detection.
Additionally, we explore the generation of in-distribution synthetic data that
maintains the same concepts as the training data but with distinct styles,
highlighting its potential for more calibrated data generation. Overall, our
study introduces and validates a promising new pre-training technique based on
concept-style disentanglement, opening multiple avenues for developing models
for medical image analysis that are more interpretable and explainable than
black-box approaches.
|
2502.01337
|
Neural Preconditioning Operator for Efficient PDE Solves
|
cs.CE
|
We introduce the Neural Preconditioning Operator (NPO), a novel approach
designed to accelerate Krylov solvers in solving large, sparse linear systems
derived from partial differential equations (PDEs). Unlike classical
preconditioners that often require extensive tuning and struggle to generalize
across different meshes or parameters, NPO employs neural operators trained via
condition and residual losses. This framework seamlessly integrates with
existing neural network models, serving effectively as a preconditioner to
enhance the performance of Krylov subspace methods. Further, by melding
algebraic multigrid principles with a transformer-based architecture, NPO
significantly reduces iteration counts and runtime for solving Poisson,
Diffusion, and Linear Elasticity problems on both uniform and irregular meshes.
Our extensive numerical experiments demonstrate that NPO outperforms
traditional methods and contemporary neural approaches across various
resolutions, ensuring robust convergence even on grids as large as 4096, far
exceeding its initial training limits. These findings underscore the potential
of data-driven preconditioning to transform the computational efficiency of
high-dimensional PDE applications.
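Operationally, a learned preconditioner slots into a Krylov method exactly where a classical one would. The sketch below runs preconditioned conjugate gradients on a 1D Poisson system; the `apply_M` callable is where a trained neural operator such as NPO would go, and the Jacobi preconditioner used here is only a stand-in, not the paper's model:

```python
import numpy as np

def pcg(A, b, apply_M, tol=1e-8, maxiter=200):
    """Preconditioned conjugate gradients; apply_M approximates A^{-1}."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = apply_M(r)
    p = z.copy()
    rz = r @ z
    for _ in range(maxiter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = apply_M(r)          # the learned operator would be called here
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# 1D Poisson (SPD tridiagonal) system; Jacobi preconditioner as stand-in.
n = 64
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x = pcg(A, b, lambda r: r / np.diag(A))
```

A better preconditioner shows up directly as a lower iteration count, which is the quantity the paper reports.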
|
2502.01338
|
PtyGenography: using generative models for regularization of the phase
retrieval problem
|
stat.ML cs.IT cs.LG math.FA math.IT math.OC
|
In phase retrieval and similar inverse problems, the stability of solutions
across different noise levels is crucial for applications. One approach to
promoting stability is to use signal priors in the form of a generative model
as a regularization, at the expense of introducing a bias in the
reconstruction. In
this paper, we explore and compare the reconstruction properties of classical
and generative inverse problem formulations. We propose a new unified
reconstruction approach that mitigates overfitting to the generative model for
varying noise levels.
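In generic notation (not necessarily the paper's), the two formulations being compared can be written as a classical least-squares problem over the signal space versus a search over the latent space of a generative model $G$:

$$\min_{x} \big\| |Ax| - b \big\|_2^2 \qquad \text{vs.} \qquad \min_{z} \big\| |AG(z)| - b \big\|_2^2,$$

where $A$ is the measurement operator and $b$ the observed magnitudes; the second form regularizes the problem but biases reconstructions toward the range of $G$.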
|
2502.01339
|
Reducing Ciphertext and Key Sizes for MLWE-Based Cryptosystems
|
cs.CR cs.IT math.IT
|
The concatenation of encryption and decryption can be interpreted as data
transmission over a noisy communication channel. In this work, we use finite
blocklength methods (normal approximation and random coding union bound) as
well as asymptotics to show that ciphertext and key sizes of the
state-of-the-art post-quantum secure key encapsulation mechanism (KEM) Kyber
can be reduced without compromising the security of the scheme. We show that in
the asymptotic regime, it is possible to reduce the sizes of ciphertexts and
secret keys by 25% for the parameter set Kyber1024 while keeping the bitrate at
1 as proposed in the original scheme. For a single Kyber encryption block used
to share a 256-bit AES key, we furthermore show that reductions in ciphertext
size of 39% and 33% are possible for Kyber1024 and Kyber512, respectively.
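The normal approximation invoked above is the standard finite-blocklength expansion for the maximal code size $M^*(n,\epsilon)$ at blocklength $n$ and error probability $\epsilon$ (generic form, not the paper's channel-specific instantiation):

$$\log M^*(n,\epsilon) \approx nC - \sqrt{nV}\,Q^{-1}(\epsilon) + \frac{1}{2}\log n,$$

where $C$ is the channel capacity, $V$ the channel dispersion, and $Q^{-1}$ the inverse Gaussian tail function; shorter ciphertexts correspond to operating closer to this bound.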
|
2502.01340
|
Human-Agent Interaction in Synthetic Social Networks: A Framework for
Studying Online Polarization
|
physics.soc-ph cs.SI
|
Online social networks have dramatically altered the landscape of public
discourse, creating both opportunities for enhanced civic participation and
risks of deepening social divisions. Prevalent approaches to studying online
polarization have been limited by a methodological disconnect: mathematical
models excel at formal analysis but lack linguistic realism, while language
model-based simulations capture natural discourse but often sacrifice
analytical precision. This paper introduces an innovative computational
framework that synthesizes these approaches by embedding formal opinion
dynamics principles within LLM-based artificial agents, enabling both rigorous
mathematical analysis and naturalistic social interactions. We validate our
framework through comprehensive offline testing and experimental evaluation
with 122 human participants engaging in a controlled social network
environment. The results demonstrate our ability to systematically investigate
polarization mechanisms while preserving ecological validity. Our findings
reveal how polarized environments shape user perceptions and behavior:
participants exposed to polarized discussions showed markedly increased
sensitivity to emotional content and group affiliations, while perceiving
reduced uncertainty in the agents' positions. By combining mathematical
precision with natural language capabilities, our framework opens new avenues
for investigating social media phenomena through controlled experimentation.
This methodological advancement allows researchers to bridge the gap between
theoretical models and empirical observations, offering unprecedented
opportunities to study the causal mechanisms underlying online opinion
dynamics.
|
2502.01341
|
AlignVLM: Bridging Vision and Language Latent Spaces for Multimodal
Understanding
|
cs.CL
|
Aligning visual features with language embeddings is a key challenge in
vision-language models (VLMs). The performance of such models hinges on having
a good connector that maps visual features generated by a vision encoder to a
shared embedding space with the LLM while preserving semantic similarity.
Existing connectors, such as multilayer perceptrons (MLPs), often produce
out-of-distribution or noisy inputs, leading to misalignment between the
modalities. In this work, we propose a novel vision-text alignment method,
AlignVLM, that maps visual features to a weighted average of LLM text
embeddings. Our approach leverages the linguistic priors encoded by the LLM to
ensure that visual features are mapped to regions of the space that the LLM can
effectively interpret. AlignVLM is particularly effective for document
understanding tasks, where scanned document images must be accurately mapped to
their textual content. Our extensive experiments show that AlignVLM achieves
state-of-the-art performance compared to prior alignment methods. We provide
further analysis demonstrating improved vision-text feature alignment and
robustness to noise.
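The "weighted average of LLM text embeddings" idea can be sketched as a softmax-weighted convex combination of the rows of the embedding matrix; the projection `W` and all shapes below are illustrative assumptions, not the paper's architecture:

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def align_connector(visual_feats, W, E):
    # Project visual features to vocabulary logits (W is an assumed
    # learned projection), then take a convex combination of the LLM
    # embedding rows E under the softmax weights.
    logits = visual_feats @ W            # (n_patches, vocab_size)
    weights = softmax(logits, axis=-1)   # each row sums to 1
    return weights @ E                   # (n_patches, d_model)

rng = np.random.default_rng(0)
V = rng.normal(size=(4, 16))    # visual features from the encoder
W = rng.normal(size=(16, 32))   # hypothetical connector weights
E = rng.normal(size=(32, 8))    # LLM token-embedding matrix (vocab=32, d=8)
out = align_connector(V, W, E)
```

Because the weights are non-negative and sum to one, every output lies inside the hull of the LLM's own embeddings, which is one way to read the claim that visual features land in regions the LLM can interpret.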
|
2502.01342
|
Activation by Interval-wise Dropout: A Simple Way to Prevent Neural
Networks from Plasticity Loss
|
cs.LG cs.AI
|
Plasticity loss, a critical challenge in neural network training, limits a
model's ability to adapt to new tasks or shifts in data distribution. This
paper introduces AID (Activation by Interval-wise Dropout), a novel method
inspired by Dropout, designed to address plasticity loss. Unlike Dropout, AID
generates subnetworks by applying Dropout with different probabilities on each
preactivation interval. Theoretical analysis reveals that AID regularizes the
network, promoting behavior analogous to that of deep linear networks, which do
not suffer from plasticity loss. We validate the effectiveness of AID in
maintaining plasticity across various benchmarks, including continual learning
tasks on standard image classification datasets such as CIFAR10, CIFAR100, and
TinyImageNet. Furthermore, we show that AID enhances reinforcement learning
performance in the Arcade Learning Environment benchmark.
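The core mechanism, dropout applied with a different probability on each preactivation interval, can be sketched as follows; the single interval boundary and the two probabilities are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def interval_wise_dropout(preact, boundaries, probs, rng):
    # Partition preactivation values into intervals via `boundaries`
    # and zero each unit with its interval's drop probability.
    out = preact.copy()
    idx = np.searchsorted(boundaries, preact)  # interval index per unit
    for i, p in enumerate(probs):
        drop = (idx == i) & (rng.random(preact.shape) < p)
        out[drop] = 0.0
    return out

rng = np.random.default_rng(0)
x = rng.normal(size=1000)
# Drop negative preactivations with prob 0.9 and positive ones with
# prob 0.1, yielding a stochastic, ReLU-like subnetwork per forward pass.
y = interval_wise_dropout(x, boundaries=np.array([0.0]),
                          probs=[0.9, 0.1], rng=rng)
```

Varying the per-interval probabilities is what distinguishes this from plain Dropout, which uses a single probability for all units.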
|
2502.01344
|
PSSD: Making Large Language Models Self-denial via Human Psyche
Structure
|
cs.AI cs.CL cs.IR
|
Improving the accuracy of LLM reasoning has attracted the community's
interest, and pioneering studies have investigated post-hoc strategies to
rectify potential mistakes. Despite extensive efforts, these strategies remain
locked in resource competition, demanding significant time and computing
expense. The root cause is a failure to identify the fundamental feature of
solutions in this line of work, which we coin the self-denial of LLMs: LLMs
should confidently determine whether mistakes may exist and carefully execute
the targeted correction. Since the whole procedure is conducted within the
LLM, supporting and persuasive references are hard to acquire, and specific
steps for refining hidden mistakes are absent even when errors are
acknowledged. In response to the
challenges, we present PSSD, which refers to and implements the human psyche
structure such that three distinct and interconnected roles contribute to human
reasoning. Specifically, PSSD leverages the recent multi-agent paradigm, and is
further enhanced with three innovatively conceived roles: (1) the
intuition-based id role that provides initial attempts based on benign LLMs;
(2) the rule-driven superego role that summarizes rules to regulate the above
attempts, and returns specific key points as guidance; and (3) the
script-centric ego role that absorbs all procedural information to generate
executable script for the final answer prediction. Extensive experiments
demonstrate that the proposed design not only enhances reasoning capabilities,
but also integrates seamlessly with current models, leading to superior
performance.
|
2502.01347
|
Spurious Correlations in High Dimensional Regression: The Roles of
Regularization, Simplicity Bias and Over-Parameterization
|
stat.ML cs.LG
|
Learning models have been shown to rely on spurious correlations between
non-predictive features and the associated labels in the training data, with
negative implications on robustness, bias and fairness. In this work, we
provide a statistical characterization of this phenomenon for high-dimensional
regression, when the data contains a predictive core feature $x$ and a spurious
feature $y$. Specifically, we quantify the amount of spurious correlations $C$
learned via linear regression, in terms of the data covariance and the strength
$\lambda$ of the ridge regularization. As a consequence, we first capture the
simplicity of $y$ through the spectrum of its covariance, and its correlation
with $x$ through the Schur complement of the full data covariance. Next, we
prove a trade-off between $C$ and the in-distribution test loss $L$, by showing
that the value of $\lambda$ that minimizes $L$ lies in an interval where $C$ is
increasing. Finally, we investigate the effects of over-parameterization via
the random features model, by showing its equivalence to regularized linear
regression. Our theoretical results are supported by numerical experiments on
Gaussian, Color-MNIST, and CIFAR-10 datasets.
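In generic ridge-regression notation (the paper's precise definitions of $C$ and $L$ are more specific), the estimator whose spurious component is being quantified is

$$\hat{\beta}_\lambda = \Big(\tfrac{1}{n}Z^\top Z + \lambda I\Big)^{-1}\tfrac{1}{n}Z^\top \ell, \qquad Z = [X \;\; Y],$$

where $X$ stacks the core features, $Y$ the spurious ones, and $\ell$ the labels; the block of $\hat{\beta}_\lambda$ acting on $Y$ measures the learned spurious correlation, and $\lambda$ is the ridge strength from the abstract.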
|
2502.01349
|
Bias Beware: The Impact of Cognitive Biases on LLM-Driven Product
Recommendations
|
cs.CL
|
The advent of Large Language Models (LLMs) has revolutionized product
recommendation systems, yet their susceptibility to adversarial manipulation
poses critical challenges, particularly in real-world commercial applications.
Our approach is the first to tap into human psychological principles,
seamlessly modifying product descriptions so that the resulting adversarial
manipulations are hard to detect. In this work, we investigate cognitive biases as
black-box adversarial strategies, drawing parallels between their effects on
LLMs and human purchasing behavior. Through extensive experiments on LLMs of
varying scales, we reveal significant vulnerabilities in their use as
recommenders, providing critical insights into safeguarding these systems.
|
2502.01352
|
Metric Privacy in Federated Learning for Medical Imaging: Improving
Convergence and Preventing Client Inference Attacks
|
cs.LG cs.CR
|
Federated learning is a distributed learning technique that allows training a
global model with the participation of different data owners without the need
to share raw data. This architecture is orchestrated by a central server that
aggregates the local models from the clients. While the server may be trusted,
not all nodes in the network are; differential privacy (DP) can therefore be
used to privatize the global model by adding noise. However, this may affect
convergence across the rounds of the federated architecture, depending also on
the aggregation strategy employed. In this work, we aim to introduce the notion
of metric-privacy to mitigate the impact of classical server side global-DP on
the convergence of the aggregated model. Metric-privacy is a relaxation of DP,
suitable for domains provided with a notion of distance. We apply it from the
server side by computing a distance between the local models. We
compare our approach with standard DP by analyzing the impact on six
classical aggregation strategies. The proposed methodology is applied to an
example of medical imaging and different scenarios are simulated across
homogeneous and non-i.i.d. clients. Finally, we introduce a novel client
inference attack, where a semi-honest client tries to find whether another
client participated in the training and study how it can be mitigated using DP
and metric-privacy. Our evaluation shows that metric-privacy can increase the
performance of the model compared to standard DP, while offering similar
protection against client inference attacks.
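A minimal sketch of the server-side idea, under the assumption that noise is calibrated to an observed distance between the local models rather than to a worst-case global sensitivity (the paper's exact mechanism may differ):

```python
import numpy as np

def metric_private_aggregate(client_models, rng, eps=1.0):
    # FedAvg-style mean, then Laplace noise whose scale is tied to the
    # spread (a distance) of the local models around the aggregate.
    # The calibration rule here is an illustrative assumption.
    stacked = np.stack(client_models)      # (n_clients, n_params)
    mean = stacked.mean(axis=0)
    spread = np.max(np.linalg.norm(stacked - mean, axis=1))
    noise = rng.laplace(scale=spread / eps, size=mean.shape)
    return mean + noise

rng = np.random.default_rng(0)
clients = [np.array([1.0, 2.0]), np.array([1.2, 1.8]), np.array([0.8, 2.2])]
agg = metric_private_aggregate(clients, rng, eps=1e12)
```

When the local models agree closely, the metric-calibrated noise shrinks accordingly, which is the intuition behind the improved convergence reported above.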
|
2502.01356
|
Quasi-Conformal Convolution : A Learnable Convolution for Deep Learning
on Riemann Surfaces
|
cs.CV
|
Deep learning on non-Euclidean domains is important for analyzing complex
geometric data that lacks common coordinate systems and familiar Euclidean
properties. A central challenge in this field is to define convolution on such
domains, which inherently possess irregular and non-Euclidean structures. In
this work, we introduce Quasi-conformal Convolution (QCC), a novel framework
for defining convolution on Riemann surfaces using quasi-conformal theories.
Each QCC operator is linked to a specific quasi-conformal mapping, enabling the
adjustment of the convolution operation through manipulation of this mapping.
By utilizing trainable estimator modules that produce quasi-conformal mappings,
QCC facilitates adaptive and learnable convolution operators that can be
dynamically adjusted according to the underlying data structured on Riemann
surfaces. QCC unifies a broad range of spatially defined convolutions,
facilitating the learning of tailored convolution operators on each underlying
surface optimized for specific tasks. Building on this foundation, we develop
the Quasi-Conformal Convolutional Neural Network (QCCNN) to address a variety
of tasks related to geometric data. We validate the efficacy of QCCNN through
the classification of images defined on curvilinear Riemann surfaces,
demonstrating superior performance in this context. Additionally, we explore
its potential in medical applications, including craniofacial analysis using 3D
facial data and lesion segmentation on 3D human faces, achieving enhanced
accuracy and reliability.
|
2502.01357
|
Bayesian Approximation-Based Trajectory Prediction and Tracking with 4D
Radar
|
cs.CV
|
Accurate 3D multi-object tracking (MOT) is vital for autonomous vehicles, yet
LiDAR and camera-based methods degrade in adverse weather. Meanwhile,
radar-based solutions remain robust but often suffer from limited vertical
resolution and simplistic motion models. Existing Kalman filter-based
approaches also rely on fixed noise covariance, hampering adaptability when
objects make sudden maneuvers. We propose Bayes-4DRTrack, a 4D Radar-based MOT
framework that adopts a transformer-based motion prediction network to capture
nonlinear motion dynamics and employs Bayesian approximation in both detection
and prediction steps. Moreover, our two-stage data association leverages
Doppler measurements to better distinguish closely spaced targets. Evaluated on
the K-Radar dataset (including adverse weather scenarios), Bayes-4DRTrack
demonstrates a 5.7% gain in Average Multi-Object Tracking Accuracy (AMOTA) over
methods with traditional motion models and fixed noise covariance. These
results showcase enhanced robustness and accuracy in demanding, real-world
conditions.
|
2502.01358
|
Diffusion at Absolute Zero: Langevin Sampling Using Successive Moreau
Envelopes
|
math.OC cs.CV cs.NA math.NA
|
In this article we propose a novel method for sampling from Gibbs
distributions of the form $\pi(x)\propto\exp(-U(x))$ with a potential $U(x)$.
In particular, inspired by diffusion models we propose to consider a sequence
$(\pi^{t_k})_k$ of approximations of the target density, for which
$\pi^{t_k}\approx \pi$ for $k$ small and, on the other hand, $\pi^{t_k}$
exhibits favorable properties for sampling for $k$ large. This sequence is
obtained by replacing parts of the potential $U$ by its Moreau envelopes.
Sampling is performed in an Annealed Langevin type procedure, that is,
sequentially sampling from $\pi^{t_k}$ for decreasing $k$, effectively guiding
the samples from a simple starting density to the more complex target. In
addition to a theoretical analysis we show experimental results supporting the
efficacy of the method in terms of increased convergence speed and
applicability to multi-modal densities $\pi$.
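For intuition, the Moreau envelope of $U$ with parameter $t$ and an unadjusted Langevin step on the smoothed potential take the standard forms (which parts of $U$ are smoothed and the annealing schedule are the paper's design choices):

$$U^{t}(x) = \min_{y} \Big\{ U(y) + \frac{1}{2t}\|x-y\|^2 \Big\}, \qquad x_{j+1} = x_j - \gamma\,\nabla U^{t_k}(x_j) + \sqrt{2\gamma}\,\xi_j, \quad \xi_j \sim \mathcal{N}(0, I),$$

with the smoothing decreased from stage to stage so that the final samples target $\pi$.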
|
2502.01360
|
A Relative Homology Theory of Representation in Neural Networks
|
cs.LG math.AT q-bio.NC
|
Previous research has proven that the set of maps implemented by neural
networks with a ReLU activation function is identical to the set of piecewise
linear continuous maps. Furthermore, such networks induce a hyperplane
arrangement splitting the input domain into convex polyhedra $G_J$ over which
the network $\Phi$ operates in an affine manner.
In this work, we leverage these properties to define the equivalence class of
inputs $\sim_\Phi$, which can be split into two sets related to the local rank
of $\Phi_J$ and the intersections $\cap \text{Im}\Phi_{J_i}$. We refer to the
latter as the overlap decomposition $O_\Phi$ and prove that if the
intersections between each polyhedron and the input manifold are convex, the
homology groups of neural representations are isomorphic to relative homology
groups $H_k(\Phi(M)) \simeq H_k(M,O_\Phi)$. This lets us compute Betti numbers
without the choice of an external metric. We develop methods to numerically
compute the overlap decomposition through linear programming and a union-find
algorithm.
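The union-find step mentioned above, grouping polyhedra whose images intersect, can be sketched as follows; detecting the intersecting pairs themselves would be done by the linear programs, which are assumed given here:

```python
def find(parent, i):
    # Locate the root of i's component, compressing the path as we go.
    while parent[i] != i:
        parent[i] = parent[parent[i]]
        i = parent[i]
    return i

def overlap_components(n, overlapping_pairs):
    """Group n polyhedra into components linked by image overlaps."""
    parent = list(range(n))
    for a, b in overlapping_pairs:
        ra, rb = find(parent, a), find(parent, b)
        parent[ra] = rb                  # union the two components
    return [find(parent, i) for i in range(n)]

# Five polyhedra; images of 0 and 1 overlap, as do those of 3 and 4.
roots = overlap_components(5, [(0, 1), (3, 4)])
```

Each resulting component contributes one piece of the overlap decomposition $O_\Phi$.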
Using this framework, we perform several experiments on toy datasets showing
that, compared to standard persistent homology, our relative homology-based
computation of Betti numbers tracks purely topological rather than geometric
features. Finally, we study the evolution of the overlap decomposition during
training on various classification problems while varying network width and
depth and discuss some shortcomings of our method.
|
2502.01362
|
Inverse Bridge Matching Distillation
|
cs.LG cs.CV
|
Learning diffusion bridge models is easy; making them fast and practical is
an art. Diffusion bridge models (DBMs) are a promising extension of diffusion
models for applications in image-to-image translation. However, like many
modern diffusion and flow models, DBMs suffer from the problem of slow
inference. To address it, we propose a novel distillation technique based on
the inverse bridge matching formulation and derive the tractable objective to
solve it in practice. Unlike previously developed DBM distillation techniques,
the proposed method can distill both conditional and unconditional types of
DBMs, distill models into a one-step generator, and use only the corrupted images
for training. We evaluate our approach for both conditional and unconditional
types of bridge matching on a wide set of setups, including super-resolution,
JPEG restoration, sketch-to-image, and other tasks, and show that our
distillation technique accelerates DBM inference by 4x to 100x and, depending on
the particular setup, even yields better generation quality than the teacher
model.
|
2502.01364
|
Meursault as a Data Point
|
cs.CY cs.AI cs.CL cs.DL cs.LG
|
In an era dominated by datafication, the reduction of human experiences to
quantifiable metrics raises profound philosophical and ethical questions. This
paper explores these issues through the lens of Meursault, the protagonist of
Albert Camus' The Stranger, whose emotionally detached existence epitomizes the
existential concept of absurdity. Using natural language processing (NLP)
techniques, including emotion detection (BERT), sentiment analysis (VADER), and
named entity recognition (spaCy), this study quantifies key events and behaviors
in Meursault's life. Our analysis reveals the inherent limitations of applying
algorithmic models to complex human experiences, particularly those rooted in
existential alienation and moral ambiguity. By examining how modern AI tools
misinterpret Meursault's actions and emotions, this research underscores the
broader ethical dilemmas of reducing nuanced human narratives to data points,
challenging the foundational assumptions of our data-driven society. The
findings presented in this paper serve as a critique of the increasing reliance
on data-driven narratives and advocate for incorporating humanistic values in
artificial intelligence.
|
2502.01366
|
Trajectory World Models for Heterogeneous Environments
|
cs.LG
|
Heterogeneity in sensors and actuators across environments poses a
significant challenge to building large-scale pre-trained world models on top
of such low-dimensional sensor information. In this work, we explore
pre-training world models for heterogeneous environments by addressing key
transfer barriers in both data diversity and model flexibility. We introduce
UniTraj, a unified dataset comprising over one million trajectories from 80
environments, designed to scale data while preserving critical diversity.
Additionally, we propose TrajWorld, a novel architecture capable of flexibly
handling varying sensor and actuator information and capturing environment
dynamics in-context. Pre-training TrajWorld on UniTraj demonstrates significant
improvements in transition prediction and achieves a new state-of-the-art for
off-policy evaluation. To the best of our knowledge, this work, for the first
time, demonstrates the transfer benefits of world models across heterogeneous
and complex control environments.
|
2502.01375
|
Compact Rule-Based Classifier Learning via Gradient Descent
|
cs.LG cs.AI cs.LO
|
Rule-based models play a crucial role in scenarios that require transparency
and accountable decision-making. However, they primarily consist of discrete
parameters and structures, which presents challenges for scalability and
optimization. In this work, we introduce a new rule-based classifier trained
using gradient descent, in which the user can control the maximum number and
length of the rules. For numerical features, the user can also control the
partitions, which are modeled with fuzzy sets; this helps keep the number of
partitions small. We perform a series of exhaustive experiments on $40$ datasets to show
how this classifier performs in terms of accuracy and rule base size. Then, we
compare our results with a genetic search that fits an equivalent classifier
and with other explainable and non-explainable state-of-the-art classifiers.
Our results show how our method can obtain compact rule bases that use
significantly fewer patterns than other rule-based methods and perform better
than other explainable classifiers.
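The fuzzy-set partitions of a numerical feature mentioned above can be sketched with evenly spaced triangular membership functions; the layout and set count here are illustrative assumptions, since the paper leaves them user-controlled.

```python
import numpy as np

def triangular_partition(low, high, n_sets):
    """Evenly spaced triangular fuzzy sets covering [low, high].

    Returns a function mapping a value x to an array of n_sets membership
    degrees. Adjacent triangles overlap so degrees sum to 1 inside the range.
    """
    centers = np.linspace(low, high, n_sets)
    width = (high - low) / (n_sets - 1)

    def memberships(x):
        # 1 at each center, decaying linearly to 0 one width away
        return np.clip(1.0 - np.abs(x - centers) / width, 0.0, 1.0)

    return memberships

# three fuzzy sets, e.g. "low", "medium", "high" over a 0-10 feature
mu = triangular_partition(0.0, 10.0, 3)
```

A value of 2.5 then belongs half to "low" and half to "medium", which is how a rule antecedent like "feature is low" gets a continuous, gradient-friendly truth degree.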
|
2502.01376
|
Compliance while resisting: a shear-thickening fluid controller for
physical human-robot interaction
|
cs.RO
|
Physical human-robot interaction (pHRI) is widely needed in many fields, such
as industrial manipulation, home services, and medical rehabilitation, and puts
higher demands on the safety of robots. Due to the uncertainty of the working
environment, the pHRI may receive unexpected impact interference, which affects
the safety and smoothness of the task execution. The commonly used linear
admittance control (L-AC) copes well with high-frequency, small-amplitude
noise, but performs poorly under medium-frequency, high-intensity impacts.
Inspired by the solid-liquid phase-change nature of shear-thickening
fluid, we propose a Shear-thickening Fluid Control (SFC) that can achieve both
an easy human-robot collaboration and resistance to impact interference. The
SFC's stability, passivity, and phase trajectory are analyzed in detail, the
frequency and time domain properties are quantified, and parameter constraints
in discrete control and coupled stability conditions are provided. We conducted
simulations to compare the frequency and time domain characteristics of L-AC,
nonlinear admittance controller (N-AC), and SFC, and validated their dynamic
properties. In real-world experiments, we compared the performance of L-AC,
N-AC, and SFC in both fixed and mobile manipulators. L-AC exhibits weak
resistance to impact. N-AC can resist moderate impacts but not high-intensity
ones, and may exhibit self-excited oscillations. In contrast, SFC demonstrated
superior impact resistance and maintained stable collaboration, enhancing
comfort in cooperative water delivery tasks. Additionally, a case study was
conducted in a factory setting, further affirming the SFC's capability in
facilitating human-robot collaborative manipulation and underscoring its
potential in industrial applications.
|
2502.01377
|
Data-Efficient Model for Psychological Resilience Prediction based on
Neurological Data
|
cs.CE cs.AI
|
Psychological resilience, defined as the ability to rebound from adversity,
is crucial for mental health. Compared with traditional resilience assessments
through self-reported questionnaires, resilience assessments based on
neurological data offer more objective results with biological markers, hence
significantly enhancing credibility. This paper proposes a novel data-efficient
model to address the scarcity of neurological data. We employ Neuro
Kolmogorov-Arnold Networks as the structure of the prediction model. In the
training stage, a new trait-informed multimodal representation algorithm with a
smart chunk technique is proposed to learn the shared latent space with limited
data. In the test stage, a new noise-informed inference algorithm is proposed
to address the low signal-to-noise ratio of the neurological data. The proposed
model not only shows impressive performance on both public datasets and
self-constructed datasets but also provides some valuable psychological
hypotheses for future research.
|
2502.01378
|
CE-LoRA: Computation-Efficient LoRA Fine-Tuning for Language Models
|
cs.LG
|
Large Language Models (LLMs) demonstrate exceptional performance across
various tasks but demand substantial computational resources even for
fine-tuning. Although Low-Rank Adaptation (LoRA) significantly
alleviates memory consumption during fine-tuning, its impact on computational
cost reduction is limited. This paper identifies the computation of activation
gradients as the primary bottleneck in LoRA's backward propagation and
introduces the Computation-Efficient LoRA (CE-LoRA) algorithm, which enhances
computational efficiency while preserving memory efficiency. CE-LoRA leverages
two key techniques: Approximated Matrix Multiplication, which replaces dense
multiplications of large and complete matrices with sparse multiplications
involving only critical rows and columns, and the Double-LoRA technique, which
reduces error propagation in activation gradients. Theoretically, CE-LoRA
converges at the same rate as LoRA, $ \mathcal{O}(1/\sqrt{T}) $, where $T$ is
the number of iterations. Empirical evaluations confirm that CE-LoRA
significantly reduces computational costs compared to LoRA without notable
performance degradation.
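The approximated matrix multiplication idea above, keeping only critical rows and columns, can be sketched as follows; scoring inner dimensions by norm products is an illustrative assumption, not necessarily CE-LoRA's exact selection criterion.

```python
import numpy as np

def approx_matmul(A, B, k):
    """Approximate A @ B using only the k most 'critical' inner dimensions.

    Column i of A pairs with row i of B in the sum A @ B; we keep the k
    pairs with the largest norm product and drop the rest, turning a dense
    product into a cheaper sparse one.
    """
    scores = np.linalg.norm(A, axis=0) * np.linalg.norm(B, axis=1)
    keep = np.argsort(scores)[-k:]          # indices of the k largest scores
    return A[:, keep] @ B[keep, :]
```

With `k` equal to the full inner dimension the product is exact; smaller `k` trades accuracy for fewer FLOPs, which is the computation/quality dial the abstract describes.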
|
2502.01382
|
HingePlace: Harnessing the neural thresholding behavior to optimize
Transcranial Electrical Stimulation
|
eess.SY cs.SY
|
Transcranial Electrical Stimulation (tES) is a neuromodulation technique that
utilizes electrodes on the scalp to stimulate target brain regions. tES has
shown promise in treating many neurological conditions, such as stroke
rehabilitation and chronic pain. Several electrode placement algorithms have
been proposed to optimize tES-based therapies by designing multi-electrode
montages that create focal neural responses. We first extend a well-known
unification result by Fernandez-Corazza et al. to unify all major traditional
electrode placement algorithms. We utilize this unification result to identify
a common restriction among traditional electrode placement algorithms: they do
not harness the thresholding behavior of neural response. Consequently, these
algorithms only partially harness the properties of neural response to optimize
tES, particularly increasing the focality of neural response. We propose a new
electrode placement algorithm, HingePlace, that utilizes a symmetrized hinge
loss to harness the thresholding behavior of neural response. We extensively
compare the HingePlace algorithm with traditional electrode placement
algorithms in two simulation platforms. Across both platforms, we find that
HingePlace-designed montages consistently generate more focal neural responses
-- by as much as 60% -- than the electrode montages designed by traditional
electrode placement algorithms.
|
2502.01383
|
InfoBridge: Mutual Information estimation via Bridge Matching
|
cs.LG stat.ML
|
Diffusion bridge models have recently become a powerful tool in the field of
generative modeling. In this work, we leverage their power to address another
important problem in machine learning and information theory - the estimation
of the mutual information (MI) between two random variables. We show that by
using the theory of diffusion bridges, one can construct an unbiased estimator
for data posing difficulties for conventional MI estimators. We showcase the
performance of our estimator on a series of standard MI estimation benchmarks.
|
2502.01384
|
Fine-Tuning Discrete Diffusion Models with Policy Gradient Methods
|
stat.ML cs.AI cs.CL cs.LG
|
Discrete diffusion models have recently gained significant attention due to
their ability to process complex discrete structures for language modeling.
However, fine-tuning these models with policy gradient methods, as is commonly
done in Reinforcement Learning from Human Feedback (RLHF), remains a
challenging task. We propose an efficient, broadly applicable, and
theoretically justified policy gradient algorithm, called Score Entropy Policy
Optimization (SEPO), for fine-tuning discrete diffusion models over
non-differentiable rewards. Our numerical experiments across several discrete
generative tasks demonstrate the scalability and efficiency of our method. Our
code is available at https://github.com/ozekri/SEPO
|
2502.01385
|
Detecting Backdoor Samples in Contrastive Language Image Pretraining
|
cs.LG cs.CV
|
Contrastive language-image pretraining (CLIP) has been found to be vulnerable
to poisoning backdoor attacks where the adversary can achieve an almost perfect
attack success rate on CLIP models by poisoning only 0.01\% of the training
dataset. This raises security concerns on the current practice of pretraining
large-scale models on unscrutinized web data using CLIP. In this work, we
analyze the representations of backdoor-poisoned samples learned by CLIP models
and find that they exhibit unique characteristics in their local subspace,
i.e., their local neighborhoods are far sparser than those of clean samples.
Based on this finding, we conduct a systematic study on detecting CLIP backdoor
attacks and show that these attacks can be easily and efficiently detected by
traditional density ratio-based local outlier detectors, whereas existing
backdoor sample detection methods fail. Our experiments also reveal that an
unintentional backdoor already exists in the original CC3M dataset and has been
trained into a popular open-source model released by OpenCLIP. Based on our
detector, one can clean up a million-scale web dataset (e.g., CC3M) efficiently
within 15 minutes using 4 Nvidia A100 GPUs. The code is publicly available in
our \href{https://github.com/HanxunH/Detect-CLIP-Backdoor-Samples}{GitHub
repository}.
|
2502.01386
|
Topic-FlipRAG: Topic-Orientated Adversarial Opinion Manipulation Attacks
to Retrieval-Augmented Generation Models
|
cs.CL cs.CR cs.IR
|
Retrieval-Augmented Generation (RAG) systems based on Large Language Models
(LLMs) have become essential for tasks such as question answering and content
generation. However, their increasing impact on public opinion and information
dissemination has made them a critical focus for security research due to
inherent vulnerabilities. Previous studies have predominantly addressed attacks
targeting factual or single-query manipulations. In this paper, we address a
more practical scenario: topic-oriented adversarial opinion manipulation
attacks on RAG models, where LLMs are required to reason and synthesize
multiple perspectives, rendering them particularly susceptible to systematic
knowledge poisoning. Specifically, we propose Topic-FlipRAG, a two-stage
manipulation attack pipeline that strategically crafts adversarial
perturbations to influence opinions across related queries. This approach
combines traditional adversarial ranking attack techniques and leverages the
extensive internal relevant knowledge and reasoning capabilities of LLMs to
execute semantic-level perturbations. Experiments show that the proposed
attacks effectively shift the opinion of the model's outputs on specific
topics, significantly impacting user information perception. Current mitigation
methods cannot effectively defend against such attacks, highlighting the
necessity for enhanced safeguards for RAG systems, and offering crucial
insights for LLM security research.
|
2502.01387
|
TeLL-Drive: Enhancing Autonomous Driving with Teacher LLM-Guided Deep
Reinforcement Learning
|
cs.AI cs.RO
|
Although Deep Reinforcement Learning (DRL) and Large Language Models (LLMs)
each show promise in addressing decision-making challenges in autonomous
driving, DRL often suffers from high sample complexity, while LLMs have
difficulty ensuring real-time decision making. To address these limitations, we
propose TeLL-Drive, a hybrid framework that integrates a Teacher LLM to guide
an attention-based Student DRL policy. By incorporating risk metrics,
historical scenario retrieval, and domain heuristics into context-rich prompts,
the LLM produces high-level driving strategies through chain-of-thought
reasoning. A self-attention mechanism then fuses these strategies with the DRL
agent's exploration, accelerating policy convergence and boosting robustness
across diverse driving conditions. The experimental results, evaluated across
multiple traffic scenarios, show that TeLL-Drive outperforms existing baseline
methods, including other LLM-based approaches, in terms of success rates,
average returns, and real-time feasibility. Ablation studies underscore the
importance of each model component, especially the synergy between the
attention mechanism and LLM-driven guidance. Finally, we build a virtual-real
fusion experimental platform to verify the real-time performance, robustness,
and reliability of the algorithm running on real vehicles through
vehicle-in-loop experiments.
|
2502.01390
|
Plan-Then-Execute: An Empirical Study of User Trust and Team Performance
When Using LLM Agents As A Daily Assistant
|
cs.HC cs.CL
|
Since the explosion in popularity of ChatGPT, large language models (LLMs)
have continued to impact our everyday lives. Equipped with external tools that
are designed for a specific purpose (e.g., for flight booking or an alarm
clock), LLM agents exercise an increasing capability to assist humans in their
daily work. Although LLM agents have shown a promising blueprint as daily
assistants, there is a limited understanding of how they can provide daily
assistance based on planning and sequential decision making capabilities. We
draw inspiration from recent work that has highlighted the value of
'LLM-modulo' setups in conjunction with humans-in-the-loop for planning tasks.
We conducted an empirical study (N = 248) of LLM agents as daily assistants in
six commonly occurring tasks with different levels of risk typically associated
with them (e.g., flight ticket booking and credit card payments). To ensure
user agency and control over the LLM agent, we adopted LLM agents in a
plan-then-execute manner, wherein the agents conducted step-wise planning and
step-by-step execution in a simulation environment. We analyzed how user
involvement at each stage affects their trust and collaborative team
performance. Our findings demonstrate that LLM agents can be a double-edged
sword -- (1) they can work well when a high-quality plan and necessary user
involvement in execution are available, and (2) users can easily misplace trust
in LLM agents whose plans merely seem plausible. We synthesized key insights for
using LLM agents as daily assistants to calibrate user trust and achieve better
overall task outcomes. Our work has important implications for the future
design of daily assistants and human-AI collaboration with LLM agents.
|
2502.01391
|
Learning Traffic Anomalies from Generative Models on Real-Time
Observations
|
cs.LG cs.AI cs.CV
|
Accurate detection of traffic anomalies is crucial for effective urban
traffic management and congestion mitigation. We use the Spatiotemporal
Generative Adversarial Network (STGAN) framework combining Graph Neural
Networks and Long Short-Term Memory networks to capture complex spatial and
temporal dependencies in traffic data. We apply STGAN to real-time,
minute-by-minute observations from 42 traffic cameras across Gothenburg,
Sweden, collected over several months in 2020. The images are processed to
compute a flow metric representing vehicle density, which serves as input for
the model. Training is conducted on data from April to November 2020, and
validation is performed on a separate dataset from November 14 to 23, 2020. Our
results demonstrate that the model effectively detects traffic anomalies with
high precision and low false positive rates. The detected anomalies include
camera signal interruptions, visual artifacts, and extreme weather conditions
affecting traffic flow.
|
2502.01397
|
Can message-passing GNN approximate triangular factorizations of sparse
matrices?
|
cs.LG cs.AI cs.NA math.NA
|
We study fundamental limitations of Graph Neural Networks (GNNs) for learning
sparse matrix preconditioners. While recent works have shown promising results
using GNNs to predict incomplete factorizations, we demonstrate that the local
nature of message passing creates inherent barriers for capturing non-local
dependencies required for optimal preconditioning. We introduce a new benchmark
dataset of matrices where good sparse preconditioners exist but require
non-local computations, constructed using both synthetic examples and
real-world matrices. Our experimental results show that current GNN
architectures struggle to approximate these preconditioners, suggesting the
need for new architectural approaches beyond traditional message passing
networks. We provide theoretical analysis and empirical evidence to explain
these limitations, with implications for the broader use of GNNs in numerical
linear algebra.
|
2502.01401
|
Evolving Symbolic 3D Visual Grounder with Weakly Supervised Reflection
|
cs.CV
|
3D visual grounding (3DVG) is challenging because it requires understanding
visual information, language, and spatial relationships jointly. While
supervised approaches have achieved superior performance, they are constrained
by the scarcity and high cost of 3D vision-language datasets. On the other
hand, LLM/VLM based agents are proposed for 3DVG, eliminating the need for
training data. However, these methods incur prohibitive time and token costs
during inference. To address these challenges, we introduce a novel
training-free symbolic framework for 3D visual grounding, namely the Evolvable
Symbolic Visual Grounder (EaSe), which offers significantly reduced inference
costs compared to previous agent-based methods while maintaining comparable
performance. EaSe uses LLM-generated code to compute spatial relationships. It
also implements an automatic pipeline to evaluate and optimize the quality of
this code and integrates VLMs to assist in the grounding process. Experimental
results demonstrate that EaSe achieves 52.9% accuracy on Nr3D dataset and 49.2%
Acc@0.25 on ScanRefer, which is top-tier among training-free methods. Moreover,
it substantially reduces the inference time and cost, offering a balanced
trade-off between performance and efficiency. Codes are available at
https://github.com/OpenRobotLab/EaSe.
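An example of the kind of spatial-relation code an LLM might generate for such a symbolic grounder; the abstract does not show the generated functions, so this scorer and its conventions (z-up coordinates, sigmoid soft score) are hypothetical.

```python
import numpy as np

def left_of(anchor, target, viewpoint):
    """Soft score in (0, 1) for 'target is to the left of anchor' as seen
    from `viewpoint`; all arguments are 3D object centers with z up.

    Hypothetical instance of an LLM-generated relation function.
    """
    forward = anchor - viewpoint
    left = np.cross([0.0, 0.0, 1.0], forward)   # left direction in the ground plane
    left = left / (np.linalg.norm(left) + 1e-9)
    offset = float(np.dot(target - anchor, left))
    return 1.0 / (1.0 + np.exp(-offset))        # squash offset to a soft score
```

A grounder can multiply such soft scores across the relations in a query (e.g. "the chair left of the table near the window") and pick the object with the highest product.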
|
2502.01402
|
Annotation Tool and Dataset for Fact-Checking Podcasts
|
cs.CL
|
Podcasts are a popular medium on the web, featuring diverse and multilingual
content that often includes unverified claims. Fact-checking podcasts is a
challenging task, requiring transcription, annotation, and claim verification,
all while preserving the contextual details of spoken content. Our tool offers
a novel approach to tackle these challenges by enabling real-time annotation of
podcasts during playback. This unique capability allows users to listen to the
podcast and annotate key elements, such as check-worthy claims, claim spans,
and contextual errors, simultaneously. By integrating advanced transcription
models like OpenAI's Whisper and leveraging crowdsourced annotations, we create
high-quality datasets to fine-tune multilingual transformer models such as
XLM-RoBERTa for tasks like claim detection and stance classification.
Furthermore, we release the annotated podcast transcripts and sample
annotations with preliminary experiments.
|
2502.01403
|
AdaSVD: Adaptive Singular Value Decomposition for Large Language Models
|
cs.CV cs.AI cs.CL
|
Large language models (LLMs) have achieved remarkable success in natural
language processing (NLP) tasks, yet their substantial memory requirements
present significant challenges for deployment on resource-constrained devices.
Singular Value Decomposition (SVD) has emerged as a promising compression
technique for LLMs, offering considerable reductions in memory overhead.
However, existing SVD-based methods often struggle to effectively mitigate the
errors introduced by SVD truncation, leading to a noticeable performance gap
when compared to the original models. Furthermore, applying a uniform
compression ratio across all transformer layers fails to account for the
varying importance of different layers. To address these challenges, we propose
AdaSVD, an adaptive SVD-based LLM compression approach. Specifically, AdaSVD
introduces adaComp, which adaptively compensates for SVD truncation errors by
alternately updating the singular matrices U and V^T. Additionally, AdaSVD
introduces adaCR, which adaptively assigns layer-specific compression ratios
based on the relative importance of each layer. Extensive experiments across
multiple LLM families and evaluation metrics demonstrate that AdaSVD
consistently outperforms state-of-the-art (SOTA) SVD-based methods, achieving
superior performance with significantly reduced memory requirements. The code
and models will be available at https://github.com/ZHITENGLI/AdaSVD.
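The SVD truncation and alternating factor update described above can be sketched as follows; this simplification minimizes plain reconstruction error with no calibration data, whereas adaComp compensates truncation error against model outputs, so treat it as an illustration of the mechanics only.

```python
import numpy as np

def adaptive_svd_compress(W, rank, iters=3):
    """Rank-`rank` factorization of a weight matrix W, refined by
    alternating least squares.

    Initialization is the truncated SVD; alternately re-solving for the
    two factors loosely mirrors adaComp's alternating update of U and V^T.
    """
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    A = U[:, :rank] * s[:rank]          # left factor, (m x rank)
    B = Vt[:rank]                       # right factor, (rank x n)
    for _ in range(iters):
        A = W @ np.linalg.pinv(B)       # fix B, solve for A in least squares
        B = np.linalg.pinv(A) @ W       # fix A, solve for B in least squares
    return A, B
```

Storing `A` and `B` instead of `W` reduces memory from `m*n` to `(m+n)*rank`, which is the compression the abstract targets.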
|
2502.01405
|
FourieRF: Few-Shot NeRFs via Progressive Fourier Frequency Control
|
cs.CV
|
In this work, we introduce FourieRF, a novel approach for achieving fast and
high-quality reconstruction in the few-shot setting. Our method effectively
parameterizes features through an explicit curriculum training procedure,
incrementally increasing scene complexity during optimization. Experimental
results show that the prior induced by our approach is both robust and
adaptable across a wide variety of scenes, establishing FourieRF as a strong
and versatile baseline for the few-shot rendering problem. While our approach
significantly reduces artifacts, it may still lead to reconstruction errors in
severely under-constrained scenarios, particularly where view occlusion leaves
parts of the shape uncovered. In the future, our method could be enhanced by
integrating foundation models to complete missing parts using large data-driven
priors.
|
2502.01406
|
GRADIEND: Monosemantic Feature Learning within Neural Networks Applied
to Gender Debiasing of Transformer Models
|
cs.LG cs.AI cs.CL
|
AI systems frequently exhibit and amplify social biases, including gender
bias, leading to harmful consequences in critical areas. This study introduces
a novel encoder-decoder approach that leverages model gradients to learn a
single monosemantic feature neuron encoding gender information. We show that
our method can be used to debias transformer-based language models, while
maintaining other capabilities. We demonstrate the effectiveness of our
approach across multiple encoder-only based models and highlight its potential
for broader applications.
|
2502.01411
|
Human Body Restoration with One-Step Diffusion Model and A New Benchmark
|
cs.CV
|
Human body restoration, as a specific application of image restoration, is
widely applied in practice and plays a vital role across diverse fields.
However, thorough research remains difficult, particularly due to the lack of
benchmark datasets. In this study, we propose a high-quality dataset automated
cropping and filtering (HQ-ACF) pipeline. This pipeline leverages existing
object detection datasets and other unlabeled images to automatically crop and
filter high-quality human images. Using this pipeline, we constructed a
person-based restoration with sophisticated objects and natural activities
(\emph{PERSONA}) dataset, which includes training, validation, and test sets.
The dataset significantly surpasses other human-related datasets in both
quality and content richness. Finally, we propose \emph{OSDHuman}, a novel
one-step diffusion model for human body restoration. Specifically, we propose a
high-fidelity image embedder (HFIE) as the prompt generator to better guide the
model with low-quality human image information, effectively avoiding misleading
prompts. Experimental results show that OSDHuman outperforms existing methods
in both visual quality and quantitative metrics. The dataset and code will be
available at https://github.com/gobunu/OSDHuman.
|
2502.01416
|
Categorical Schr\"odinger Bridge Matching
|
cs.LG
|
The Schr\"odinger Bridge (SB) is a powerful framework for solving generative
modeling tasks such as unpaired domain translation. Most SB-related research
focuses on continuous data space $\mathbb{R}^{D}$ and leaves open theoretical
and algorithmic questions about applying SB methods to discrete data, e.g., on
finite spaces $\mathbb{S}^{D}$. Notable examples of such sets $\mathbb{S}$ are
codebooks of vector-quantized (VQ) representations of modern autoencoders,
tokens in texts, categories of atoms in molecules, etc. In this paper, we
provide a theoretical and algorithmic foundation for solving SB in discrete
spaces using the recently introduced Iterative Markovian Fitting (IMF)
procedure. Specifically, we theoretically justify the convergence of
discrete-time IMF (D-IMF) to SB in discrete spaces. This enables us to develop
a practical computational algorithm for SB which we call Categorical
Schr\"odinger Bridge Matching (CSBM). We show the performance of CSBM via a
series of experiments with synthetic data and VQ representations of images.
|
2502.01417
|
Originality in scientific titles and abstracts can predict citation
count
|
cs.DL cs.CL
|
In this research-in-progress paper, we apply Divergent Semantic Integration
(DSI), a computational measure from creativity science that correlates with
originality, to a selection of 99,557 scientific abstracts and titles from the
Web of Science. We observe statistically significant
differences in DSI between subject and field of research, and a slight rise in
DSI over time. We model the base 10 logarithm of the citation count after 5
years with DSI and find a statistically significant positive correlation in all
fields of research with an adjusted $R^2$ of 0.13.
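DSI is commonly computed as the mean pairwise semantic distance between segments of a text; a minimal sketch over caller-supplied embeddings follows. In practice the embeddings come from a contextual model such as BERT; here the embedding step is left to the caller as an assumption.

```python
import numpy as np

def dsi(embeddings):
    """Divergent Semantic Integration as mean pairwise cosine distance.

    `embeddings` is an (n_segments, d) array of text-segment embeddings.
    Higher values mean the text integrates more semantically distant ideas.
    """
    E = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sims = E @ E.T                          # pairwise cosine similarities
    iu = np.triu_indices(len(E), k=1)       # each unordered pair once
    return float(np.mean(1.0 - sims[iu]))
```

Regressing the base-10 log of 5-year citation counts on this score is then an ordinary linear model, which is where the reported adjusted $R^2$ of 0.13 comes from.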
|
2502.01418
|
Assessing the use of Diffusion models for motion artifact correction in
brain MRI
|
eess.IV cs.CV cs.LG cs.NA math.NA
|
Magnetic Resonance Imaging generally requires long exposure times, while
being sensitive to patient motion, resulting in artifacts in the acquired
images, which may hinder their diagnostic relevance. Despite research efforts
to decrease the acquisition time, and designing efficient acquisition
sequences, motion artifacts are still a persistent problem, pushing toward the
need for the development of automatic motion artifact correction techniques.
Recently, diffusion models have been proposed as a solution for the task at
hand. While diffusion models can produce high-quality reconstructions, they are
also susceptible to hallucination, which poses risks in diagnostic
applications. In this study, we critically evaluate the use of diffusion models
for correcting motion artifacts in 2D brain MRI scans. Using a popular
benchmark dataset, we compare a diffusion model-based approach with
state-of-the-art methods consisting of U-Nets trained in a supervised fashion on
motion-affected images to reconstruct ground truth motion-free images. Our
findings reveal mixed results: diffusion models can produce accurate
predictions or generate harmful hallucinations in this context, depending on
data heterogeneity and the acquisition planes considered as input.
|
2502.01419
|
Visual Attention Never Fades: Selective Progressive Attention
ReCalibration for Detailed Image Captioning in Multimodal Large Language
Models
|
cs.CV cs.AI
|
Detailed image captioning is essential for tasks like data generation and
aiding visually impaired individuals. High-quality captions require a balance
between precision and recall, which remains challenging for current multimodal
large language models (MLLMs). In this work, we hypothesize that this
limitation stems from weakening and increasingly noisy visual attention as
responses lengthen. To address this issue, we propose SPARC (Selective
Progressive Attention ReCalibration), a training-free method that enhances the
contribution of visual tokens during decoding. SPARC is founded on three key
observations: (1) increasing the influence of all visual tokens reduces recall;
thus, SPARC selectively amplifies visual tokens; (2) as captions lengthen,
visual attention becomes noisier, so SPARC identifies critical visual tokens by
leveraging attention differences across time steps; (3) as visual attention
gradually weakens, SPARC reinforces it to preserve its influence. Our
experiments, incorporating both automated and human evaluations, demonstrate
that existing methods improve the precision of MLLMs at the cost of recall. In
contrast, our proposed method enhances both precision and recall with minimal
computational overhead.
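The three observations above can be sketched as a recalibration of visual-token attention weights at each decoding step; the selection rule (tokens whose attention grew since the previous step) and the boost factor are illustrative assumptions, not SPARC's exact formulation.

```python
import numpy as np

def sparc_recalibrate(attn_prev, attn_curr, boost=1.5):
    """Selectively amplify visual-token attention, in the spirit of SPARC.

    Tokens whose attention increased since the previous decoding step are
    treated as critical and scaled by `boost`; the result is renormalized
    so it remains a valid attention distribution.
    """
    critical = attn_curr > attn_prev                        # (2) step-wise differences pick critical tokens
    out = np.where(critical, attn_curr * boost, attn_curr)  # (1) amplify selectively, not all tokens
    return out / out.sum()                                  # (3) renormalize to preserve influence
```

Applied at every step, this counteracts the gradual weakening and noising of visual attention that the abstract identifies as the cause of degraded captions.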
|
2502.01425
|
The Batch Complexity of Bandit Pure Exploration
|
cs.LG stat.ML
|
In a fixed-confidence pure exploration problem in stochastic multi-armed
bandits, an algorithm iteratively samples arms and should stop as early as
possible and return the correct answer to a query about the arms distributions.
We are interested in batched methods, which change their sampling behaviour
only a few times, between batches of observations. We give an
instance-dependent lower bound on the number of batches used by any sample
efficient algorithm for any pure exploration task. We then give a general
batched algorithm and prove upper bounds on its expected sample complexity and
batch complexity. We illustrate both lower and upper bounds on best-arm
identification and thresholding bandits.
|
2502.01427
|
Structural features of the fly olfactory circuit mitigate the
stability-plasticity dilemma in continual learning
|
cs.LG cs.AI cs.CV q-bio.NC
|
Artificial neural networks face the stability-plasticity dilemma in continual
learning, while the brain can maintain memories and remain adaptable. However,
the biological strategies for continual learning and their potential to inspire
learning algorithms in neural networks are poorly understood. This study
presents a minimal model of the fly olfactory circuit to investigate the
biological strategies that support continual odor learning. We introduce the
fly olfactory circuit as a plug-and-play component, termed the Fly Model, which
can integrate with modern machine learning methods to address this dilemma. Our
findings demonstrate that the Fly Model enhances both memory stability and
learning plasticity, overcoming the limitations of current continual learning
strategies. We validated its effectiveness across various challenging continual
learning scenarios using commonly used datasets. The fly olfactory system
serves as an elegant biological circuit for lifelong learning, offering a
module that enhances continual learning with minimal additional computational
cost for machine learning.
|
2502.01429
|
An Algorithm for Fixed Budget Best Arm Identification with Combinatorial
Exploration
|
cs.LG
|
We consider the best arm identification (BAI) problem in the $K-$armed bandit
framework with a modification - the agent is allowed to play a subset of arms
at each time slot instead of one arm. Consequently, the agent observes the
sample average of the rewards of the arms that constitute the probed subset.
Several trade-offs arise here - e.g., sampling a larger number of arms together
results in a wider view of the environment, while sampling fewer arms enhances
the information about individual reward distributions. Furthermore, although
grouping a large number of suboptimal arms together reduces the variance of the
group's reward, it may raise the group mean close to that of the group
containing the optimal arm. To solve this problem, we propose an algorithm that
constructs $\log_2 K$ groups and performs a likelihood ratio test to detect the
presence of the best arm in each of these groups. Then a Hamming decoding
procedure determines the unique best arm. We derive an upper bound for the
error probability of the proposed algorithm based on a new hardness parameter
$H_4$. Finally, we demonstrate cases under which it outperforms the
state-of-the-art algorithms for the single play case.
|
2502.01430
|
Molecular Odor Prediction Based on Multi-Feature Graph Attention
Networks
|
cs.LG q-bio.QM
|
Olfactory perception plays a critical role in both human and organismal
interactions, yet understanding of its underlying mechanisms and influencing
factors remains insufficient. Molecular structures influence odor perception
through intricate biochemical interactions, and accurately quantifying
structure-odor relationships presents significant challenges. The Quantitative
Structure-Odor Relationship (QSOR) task, which involves predicting the
associations between molecular structures and their corresponding odors, seeks
to address these challenges. To this end, we propose a method for QSOR,
utilizing Graph Attention Networks to model molecular structures and capture
both local and global features. Unlike conventional QSOR approaches reliant on
predefined descriptors, our method leverages diverse molecular feature
extraction techniques to automatically learn comprehensive representations.
This integration enhances the model's capacity to handle complex molecular
information and improves prediction accuracy. Our approach demonstrates clear
advantages in QSOR prediction tasks, offering valuable insights into the
application of deep learning in cheminformatics.
|
2502.01432
|
Emergent Stack Representations in Modeling Counter Languages Using
Transformers
|
cs.CL cs.LG
|
Transformer architectures are the backbone of most modern language models,
but understanding the inner workings of these models still largely remains an
open problem. One way that research in the past has tackled this problem is by
isolating the learning capabilities of these architectures by training them
over well-understood classes of formal languages. We extend this literature by
analyzing models trained over counter languages, which can be modeled using
counter variables. We train transformer models on 4 counter languages, and
equivalently formulate these languages using stacks, whose depths can be
understood as the counter values. We then probe their internal representations
for stack depths at each input token to show that these models, when trained
as next-token predictors, learn stack-like representations. This brings us closer
to understanding the algorithmic details of how transformers learn languages
and helps in circuit discovery.
|
2502.01436
|
Towards Safer Chatbots: A Framework for Policy Compliance Evaluation of
Custom GPTs
|
cs.CL cs.AI
|
Large Language Models (LLMs) have gained unprecedented prominence, achieving
widespread adoption across diverse domains and integrating deeply into society.
The capability to fine-tune general-purpose LLMs, such as Generative
Pre-trained Transformers (GPT), for specific tasks has facilitated the
emergence of numerous Custom GPTs. These tailored models are increasingly made
available through dedicated marketplaces, such as OpenAI's GPT Store. However,
their black-box nature introduces significant safety and compliance risks. In
this work, we present a scalable framework for the automated evaluation of
Custom GPTs against OpenAI's usage policies, which define the permissible
behaviors of these systems. Our framework integrates three core components: (1)
automated discovery and data collection of models from the GPT store, (2) a
red-teaming prompt generator tailored to specific policy categories and the
characteristics of each target GPT, and (3) an LLM-as-a-judge technique to
analyze each prompt-response pair for potential policy violations.
We validate our framework with a manually annotated ground truth, and
evaluate it through a large-scale study with 782 Custom GPTs across three
categories: Romantic, Cybersecurity, and Academic GPTs. Against the manually
annotated ground truth, the framework achieved an F1 score of 0.975 in
identifying policy violations, confirming the reliability of its assessments.
The results reveal
that 58.7% of the analyzed models exhibit indications of non-compliance,
exposing weaknesses in the GPT store's review and approval processes.
Furthermore, our findings indicate that a model's popularity does not correlate
with compliance, and non-compliance issues largely stem from behaviors
inherited from base models rather than user-driven customizations. We believe
this approach is extendable to other chatbot platforms and policy domains,
improving LLM-based systems safety.
|
2502.01439
|
Alternating direction method of multipliers for polynomial optimization
|
math.OC cs.SY eess.SY
|
Multivariate polynomial optimization is a prevalent model for a number of
engineering problems. From a mathematical viewpoint, polynomial optimization is
challenging because it is non-convex. Lasserre's theory, based on
semidefinite relaxations, provides an effective tool to overcome this issue and
to achieve the global optimum. However, this approach can be computationally
demanding for medium- and large-scale problems. Motivated by this, in this
work, we investigate a local minimization approach, based on the alternating
direction method of multipliers, which has low complexity, is straightforward
to implement, and is amenable to decentralization. The core of the work is the
development of the algorithm tailored to polynomial optimization, along with
the proof of its convergence. Through a numerical example we show a practical
implementation and test the effectiveness of the proposed algorithm with
respect to state-of-the-art methodologies.
|
2502.01441
|
Improved Training Technique for Latent Consistency Models
|
cs.CV cs.LG
|
Consistency models are a new family of generative models capable of producing
high-quality samples in either a single step or multiple steps. Recently,
consistency models have demonstrated impressive performance, achieving results
on par with diffusion models in the pixel space. However, the success of
scaling consistency training to large-scale datasets, particularly for
text-to-image and video generation tasks, is determined by performance in the
latent space. In this work, we analyze the statistical differences between
pixel and latent spaces, discovering that latent data often contains highly
impulsive outliers, which significantly degrade the performance of iCT in the
latent space. To address this, we replace Pseudo-Huber losses with Cauchy
losses, effectively mitigating the impact of outliers. Additionally, we
introduce a diffusion loss at early timesteps and employ optimal transport (OT)
coupling to further enhance performance. Lastly, we introduce the adaptive
scaling-$c$ scheduler to manage the robust training process and adopt
Non-scaling LayerNorm in the architecture to better capture the statistics of
the features and reduce outlier impact. With these strategies, we successfully
train latent consistency models capable of high-quality sampling with one or
two steps, significantly narrowing the performance gap between latent
consistency and diffusion models. The implementation is released here:
https://github.com/quandao10/sLCT/
|
2502.01445
|
SPFFNet: Strip Perception and Feature Fusion Spatial Pyramid Pooling for
Fabric Defect Detection
|
cs.CV cs.AI
|
Defect detection in fabrics is critical for quality control, yet existing
methods often struggle with complex backgrounds and shape-specific defects. In
this paper, we propose an improved fabric defect detection model based on
YOLOv11. To enhance the detection of strip defects, we introduce a Strip
Perception Module (SPM) that improves feature capture through multi-scale
convolution. We further enhance the spatial pyramid pooling fast (SPPF) by
integrating a squeeze-and-excitation mechanism, resulting in the SE-SPPF
module, which better integrates spatial and channel information for more
effective defect feature extraction. Additionally, we propose a novel focal
enhanced complete intersection over union (FECIoU) metric with adaptive
weights, addressing scale differences and class imbalance by adjusting the
weights of hard-to-detect instances through focal loss. Experimental results
demonstrate that our model achieves a 0.8-8.1% improvement in mean average
precision (mAP) on the Tianchi dataset and a 1.6-13.2% improvement on our
custom dataset, outperforming other state-of-the-art methods.
|
2502.01448
|
What Can You Say to a Robot? Capability Communication Leads to More
Natural Conversations
|
cs.RO cs.HC
|
When encountering a robot in the wild, it is not inherently clear to human
users what the robot's capabilities are. When encountering misunderstandings or
problems in spoken interaction, robots often just apologize and move on,
without additional effort to make sure the user understands what happened. We
set out to compare the effect of two speech-based capability communication
strategies (proactive, reactive) against a robot without such a strategy, with
respect to users' ratings of, and behavior during, the interaction. For this, we
conducted an in-person user study with 120 participants who had three
speech-based interactions with a social robot in a restaurant setting. Our
results suggest that users preferred the robot communicating its capabilities
proactively and adjusted their behavior in those interactions, using a more
conversational interaction style while also enjoying the interaction more.
|
2502.01450
|
Simulating Rumor Spreading in Social Networks using LLM Agents
|
cs.SI cs.AI
|
With the rise of social media, misinformation has become increasingly
prevalent, fueled largely by the spread of rumors. This study explores the use
of Large Language Model (LLM) agents within a novel framework to simulate and
analyze the dynamics of rumor propagation across social networks. To this end,
we design a variety of LLM-based agent types and construct four distinct
network structures to conduct these simulations. Our framework assesses the
effectiveness of different network constructions and agent behaviors in
influencing the spread of rumors. Our results demonstrate that the framework
can simulate rumor spreading across more than one hundred agents in various
networks with thousands of edges. The evaluations indicate that network
structure, personas, and spreading schemes can significantly influence rumor
dissemination, ranging from no spread to affecting 83\% of agents in
iterations, thereby offering a realistic simulation of rumor spread in social
networks.
|
2502.01455
|
Temporal-consistent CAMs for Weakly Supervised Video Segmentation in
Waste Sorting
|
cs.CV cs.AI cs.LG
|
In industrial settings, weakly supervised (WS) methods are usually preferred
over their fully supervised (FS) counterparts as they do not require costly
manual annotations. Unfortunately, the segmentation masks obtained in the WS
regime are typically poor in terms of accuracy. In this work, we present a WS
method capable of producing accurate masks for semantic segmentation in the
case of video streams. More specifically, we build saliency maps that exploit
the temporal coherence between consecutive frames in a video, promoting
consistency when objects appear in different frames. We apply our method in a
waste-sorting scenario, where we perform weakly supervised video segmentation
(WSVS) by training an auxiliary classifier that distinguishes between videos
recorded before and after a human operator manually removes specific wastes
from a conveyor belt. The saliency maps of this classifier identify
materials to be removed, and we modify the classifier training to minimize
differences between the saliency map of a central frame and those in adjacent
frames, after having compensated object displacement. Experiments on a
real-world dataset demonstrate the benefits of integrating temporal coherence
directly during the training phase of the classifier. Code and dataset are
available upon request.
|
2502.01456
|
Process Reinforcement through Implicit Rewards
|
cs.LG cs.AI cs.CL
|
Dense process rewards have proven a more effective alternative to the sparse
outcome-level rewards in the inference-time scaling of large language models
(LLMs), particularly in tasks requiring complex multi-step reasoning. While
dense rewards also offer an appealing choice for the reinforcement learning
(RL) of LLMs since their fine-grained rewards have the potential to address
some inherent issues of outcome rewards, such as training efficiency and credit
assignment, this potential remains largely unrealized. This can be primarily
attributed to the challenges of training process reward models (PRMs) online,
where collecting high-quality process labels is prohibitively expensive, making
them particularly vulnerable to reward hacking. To address these challenges, we
propose PRIME (Process Reinforcement through IMplicit rEwards), which enables
online PRM updates using only policy rollouts and outcome labels through
implicit process rewards. PRIME combines well with various advantage functions
and forgoes the dedicated reward model training phase that existing approaches
require, substantially reducing the development overhead. We demonstrate
PRIME's effectiveness on competition math and coding. Starting from
Qwen2.5-Math-7B-Base, PRIME achieves a 15.1% average improvement across several
key reasoning benchmarks over the SFT model. Notably, our resulting model,
Eurus-2-7B-PRIME, surpasses Qwen2.5-Math-7B-Instruct on seven reasoning
benchmarks with 10% of its training data.
|
2502.01458
|
Understanding the Capabilities and Limitations of Weak-to-Strong
Generalization
|
cs.LG stat.ML
|
Weak-to-strong generalization, where weakly supervised strong models
outperform their weaker teachers, offers a promising approach to aligning
superhuman models with human values. To deepen the understanding of this
approach, we provide theoretical insights into its capabilities and
limitations. First, in the classification setting, we establish upper and lower
generalization error bounds for the strong model, identifying the primary
limitations as stemming from the weak model's generalization error and the
optimization objective itself. Additionally, we derive lower and upper bounds
on the calibration error of the strong model. These theoretical bounds reveal
two critical insights: (1) the weak model should demonstrate strong
generalization performance and maintain well-calibrated predictions, and (2)
the strong model's training process must strike a careful balance, as excessive
optimization could undermine its generalization capability by over-relying on
the weak supervision signals. Finally, in the regression setting, we extend the
work of Charikar et al. (2024) to a loss function based on Kullback-Leibler
(KL) divergence, offering guarantees that the strong student can outperform its
weak teacher by at least the magnitude of their disagreement. We conduct
sufficient experiments to validate our theory.
|
2502.01459
|
Learning to Partially Defer for Sequences
|
stat.ME cs.LG stat.ML
|
In the Learning to Defer (L2D) framework, a prediction model can either make
a prediction or defer it to an expert, as determined by a rejector. Current L2D
methods train the rejector to decide whether to reject the entire prediction,
which is not desirable when the model predicts long sequences. We present an
L2D setting for sequence outputs where the system can defer specific outputs of
the whole model prediction to an expert in an effort to interleave the expert
and machine throughout the prediction. We propose two types of model-based
post-hoc rejectors for pre-trained predictors: a token-level rejector, which
defers specific token predictions to experts with next token prediction
capabilities, and a one-time rejector for experts without such abilities, which
defers the remaining sequence from a specific point onward. In the experiments,
we also empirically demonstrate that such granular deferrals achieve better
cost-accuracy tradeoffs than whole deferrals on traveling salesman solvers and
news summarization models.
|
2502.01461
|
Docking-Aware Attention: Dynamic Protein Representations through
Molecular Context Integration
|
cs.LG q-bio.BM
|
Computational prediction of enzymatic reactions represents a crucial
challenge in sustainable chemical synthesis across various scientific domains,
ranging from drug discovery to materials science and green chemistry. These
syntheses rely on proteins that selectively catalyze complex molecular
transformations. These protein catalysts exhibit remarkable substrate
adaptability, with the same protein often catalyzing different chemical
transformations depending on its molecular partners. Current approaches to
protein representation in reaction prediction either ignore protein structure
entirely or rely on static embeddings, failing to capture how proteins
dynamically adapt their behavior to different substrates. We present
Docking-Aware Attention (DAA), a novel architecture that generates dynamic,
context-dependent protein representations by incorporating molecular docking
information into the attention mechanism. DAA combines physical interaction
scores from docking predictions with learned attention patterns to focus on
protein regions most relevant to specific molecular interactions. We evaluate
our method on enzymatic reaction prediction, where it outperforms previous
state-of-the-art methods, achieving 62.2\% accuracy versus 56.79\% on complex
molecules and 55.54\% versus 49.45\% on innovative reactions. Through detailed
ablation studies and visualizations, we demonstrate how DAA generates
interpretable attention patterns that adapt to different molecular contexts.
Our approach represents a general framework for context-aware protein
representation in biocatalysis prediction, with potential applications across
enzymatic synthesis planning. We open-source our implementation and pre-trained
models to facilitate further research.
|
2502.01464
|
Predicting symmetries of quantum dynamics with optimal samples
|
quant-ph cs.IT math.IT
|
Identifying symmetries in quantum dynamics, such as identity or time-reversal
invariance, is a crucial challenge with profound implications for quantum
technologies. We introduce a unified framework combining group representation
theory and subgroup hypothesis testing to predict these symmetries with optimal
efficiency. By exploiting the inherent symmetry of compact groups and their
irreducible representations, we derive an exact characterization of the optimal
type-II error (failure probability to detect a symmetry), offering an
operational interpretation for the quantum max-relative entropy. In particular,
we prove that parallel strategies achieve the same performance as adaptive or
indefinite-causal-order protocols, resolving debates about the necessity of
complex control sequences. Applications to the singleton group, maximal
commutative group, and orthogonal group yield explicit results: for predicting
the identity property, Z-symmetry, and T-symmetry of unknown qubit unitaries,
with zero type-I error and type-II error bounded by $\delta$, we establish the
explicit optimal sample complexity which scales as $\mathcal{O}(\delta^{-1/3})$
for identity testing and $\mathcal{O}(\delta^{-1/2})$ for T/Z-symmetry testing.
These findings offer theoretical insights and practical guidelines for
efficient unitary property testing and symmetry-driven protocols in quantum
information processing.
|
2502.01465
|
Embrace Collisions: Humanoid Shadowing for Deployable Contact-Agnostics
Motions
|
cs.RO cs.SY eess.SY
|
Previous humanoid robotics research treats the robot as a bipedal mobile
manipulation platform, where only the feet and hands contact the environment.
However, we humans use all body parts to interact with the world, e.g., we sit
in chairs, get up from the ground, or roll on the floor. Contacting the
environment using body parts other than feet and hands brings significant
challenges in both model-predictive control and reinforcement learning-based
methods. An unpredictable contact sequence makes it almost impossible for
model-predictive control to plan ahead in real time. The success of the
zero-shot sim-to-real reinforcement learning method for humanoids heavily
depends on the acceleration of GPU-based rigid-body physical simulator and
simplification of collision detection. The absence of extreme torso movements
in prior humanoid research makes all other components non-trivial to design,
such as termination conditions, motion commands, and reward designs. To address these
potential challenges, we propose a general humanoid motion framework that takes
discrete motion commands and controls the robot's motor action in real time.
Using a GPU-accelerated rigid-body simulator, we train a humanoid whole-body
control policy that follows high-level motion commands in the real world in
real time, even under stochastic contacts, extremely large robot base
rotations, and not-so-feasible motion commands. More details at
https://project-instinct.github.io
|
2502.01467
|
Deep Unfolding Multi-modal Image Fusion Network via Attribution Analysis
|
cs.CV
|
Multi-modal image fusion synthesizes information from multiple sources into a
single image, facilitating downstream tasks such as semantic segmentation.
Current approaches primarily focus on acquiring informative fusion images at
the visual display stratum through intricate mappings. Although some approaches
attempt to jointly optimize image fusion and downstream tasks, these efforts
often lack direct guidance or interaction, serving only to assist with a
predefined fusion loss. To address this, we propose an ``Unfolding Attribution
Analysis Fusion network'' (UAAFusion), using attribution analysis to tailor
fused images more effectively for semantic segmentation, enhancing the
interaction between the fusion and segmentation. Specifically, we utilize
attribution analysis techniques to explore the contributions of semantic
regions in the source images to task discrimination. At the same time, our
fusion algorithm incorporates more beneficial features from the source images,
thereby allowing the segmentation to guide the fusion process. Our method
constructs a model-driven unfolding network that uses optimization objectives
derived from attribution analysis, with an attribution fusion loss calculated
from the current state of the segmentation network. We also develop a new
pathway function for attribution analysis, specifically tailored to the fusion
tasks in our unfolding network. An attribution attention mechanism is
integrated at each network stage, allowing the fusion network to prioritize
areas and pixels crucial for high-level recognition tasks. Additionally, to
mitigate the information loss in traditional unfolding networks, a memory
augmentation module is incorporated into our network to improve the information
flow across various network layers. Extensive experiments demonstrate our
method's superiority in image fusion and applicability to semantic
segmentation.
|
2502.01472
|
FALCON: Fine-grained Activation Manipulation by Contrastive Orthogonal
Unalignment for Large Language Model
|
cs.CL cs.AI
|
Large language models have been widely applied, but can inadvertently encode
sensitive or harmful information, raising significant safety concerns. Machine
unlearning has emerged to alleviate this concern; however, existing
training-time unlearning approaches, relying on coarse-grained loss
combinations, have limitations in precisely separating knowledge and balancing
removal effectiveness with model utility. In contrast, we propose Fine-grained
Activation manipuLation by Contrastive Orthogonal uNalignment (FALCON), a novel
representation-guided unlearning approach that leverages information-theoretic
guidance for efficient parameter selection, employs contrastive mechanisms to
enhance representation separation, and projects conflict gradients onto
orthogonal subspaces to resolve conflicts between forgetting and retention
objectives. Extensive experiments demonstrate that FALCON achieves superior
unlearning effectiveness while maintaining model utility, exhibiting robust
resistance against knowledge recovery attempts.
|
2502.01473
|
Generalization Error Analysis for Selective State-Space Models Through
the Lens of Attention
|
cs.LG
|
State-space models (SSMs) are a new class of foundation models that have
emerged as a compelling alternative to Transformers and their attention
mechanisms for sequence processing tasks. This paper provides a detailed
theoretical analysis of selective SSMs, the core components of the Mamba and
Mamba-2 architectures. We leverage the connection between selective SSMs and
the self-attention mechanism to highlight the fundamental similarities between
these models. Building on this connection, we establish a length-independent
covering-number-based generalization bound for selective SSMs, providing a
deeper understanding of their theoretical performance guarantees. We analyze
the effects of state matrix stability and input-dependent discretization,
shedding light on the critical role played by these factors in the
generalization capabilities of selective SSMs. Finally, we empirically
demonstrate the sequence length independence of the derived bounds on two
tasks.
|
2502.01474
|
Simultaneous Automatic Picking and Manual Picking Refinement for
First-Break
|
cs.CV eess.IV
|
First-break picking is a pivotal procedure in processing microseismic data
for geophysics and resource exploration. Recent advancements in deep learning
have catalyzed the evolution of automated methods for identifying first-break.
Nevertheless, the complexity of seismic data acquisition and the requirement
for detailed, expert-driven labeling often result in outliers and potential
mislabeling within manually labeled datasets. These issues can negatively
affect the training of neural networks, necessitating algorithms that handle
outliers or mislabeled data effectively. We introduce the Simultaneous Picking
and Refinement (SPR) algorithm, designed to handle datasets plagued by outlier
samples or even noisy labels. Unlike conventional approaches that regard manual
picks as ground truth, our method treats the true first-break as a latent
variable within a probabilistic model that includes a first-break labeling
prior. SPR aims to uncover this variable, enabling dynamic adjustments and
improved accuracy across the dataset. This strategy mitigates the impact of
outliers or inaccuracies in manual labels. Intra-site picking experiments and
cross-site generalization experiments on publicly available data confirm our
method's performance in identifying first-break and its generalization across
different sites. Additionally, our investigations into noisy signals and labels
underscore SPR's resilience to both types of noise and its capability to refine
misaligned manual annotations. Moreover, the flexibility of SPR, not being
limited to any single network architecture, enhances its adaptability across
various deep learning-based picking methods. Focusing on learning from data
that may contain outliers or partial inaccuracies, SPR provides a robust
solution to some of the principal obstacles in automatic first-break picking.
|
2502.01476
|
Neuro-Symbolic AI for Analytical Solutions of Differential Equations
|
cs.LG
|
Analytical solutions of differential equations offer exact insights into
fundamental behaviors of physical processes. Their application, however, is
limited as finding these solutions is difficult. To overcome this limitation,
we combine two key insights. First, constructing an analytical solution
requires a composition of foundational solution components. Second, iterative
solvers define parameterized function spaces with constraint-based updates. Our
approach merges compositional differential equation solution techniques with
iterative refinement by using formal grammars, building a rich space of
candidate solutions that are embedded into a low-dimensional (continuous)
latent manifold for probabilistic exploration. This integration unifies
numerical and symbolic differential equation solvers via a neuro-symbolic AI
framework to find analytical solutions of a wide variety of differential
equations. By systematically constructing candidate expressions and applying
constraint-based refinement, we overcome longstanding barriers to extract such
closed-form solutions. We illustrate advantages over commercial solvers,
symbolic methods, and approximate neural networks on a diverse set of problems,
demonstrating both generality and accuracy.
|
2502.01477
|
Position: Empowering Time Series Reasoning with Multimodal LLMs
|
cs.LG cs.AI
|
Understanding time series data is crucial for multiple real-world
applications. While large language models (LLMs) show promise in time series
tasks, current approaches often rely on numerical data alone, overlooking the
multimodal nature of time-dependent information, such as textual descriptions,
visual data, and audio signals. Moreover, these methods underutilize LLMs'
reasoning capabilities, limiting the analysis to surface-level interpretations
instead of deeper temporal and multimodal reasoning. In this position paper, we
argue that multimodal LLMs (MLLMs) can enable more powerful and flexible
reasoning for time series analysis, enhancing decision-making and real-world
applications. We call on researchers and practitioners to leverage this
potential by developing strategies that prioritize trust, interpretability, and
robust reasoning in MLLMs. Lastly, we highlight key research directions,
including novel reasoning paradigms, architectural innovations, and
domain-specific applications, to advance time series reasoning with MLLMs.
|
2502.01478
|
BYON: Bring Your Own Networks for Digital Agriculture Applications
|
cs.NI cs.RO
|
Digital agriculture technologies rely on sensors, drones, robots, and
autonomous farm equipment to improve farm yields and incorporate sustainability
practices. However, the adoption of such technologies is severely limited by
the lack of broadband connectivity in rural areas. We argue that farming
applications do not require permanent always-on connectivity. Instead, farming
activity and digital agriculture applications follow the seasonal rhythms of
agriculture. Therefore, the need for connectivity is highly localized in time
and space. We introduce BYON, a new connectivity model for high bandwidth
agricultural applications that relies on emerging connectivity solutions like
citizens broadband radio service (CBRS) and satellite networks. BYON creates an
agile connectivity solution that can be moved along a farm to create
spatio-temporal connectivity bubbles. BYON incorporates a new gateway design
that reacts to the presence of crops and optimizes coverage in agricultural
settings. We evaluate BYON in a production farm and demonstrate its benefits.
|
2502.01481
|
Explaining Context Length Scaling and Bounds for Language Models
|
cs.LG cs.CL
|
Long Context Language Models have drawn great attention in the past few
years. There has been work discussing the impact of long context on Language
Model performance: some find that long irrelevant context can harm
performance, while others experimentally summarize the loss reduction from
relevant long context as Scaling Laws. This calls for a more thorough
understanding of how long context impacts Language Modeling. In this work, we
(1) propose a clean and effective theoretical framework for explaining the
impact of context length on Language Modeling, from an Intrinsic Space
perspective; and (2) conduct
experiments on natural language and synthetic data, validating our proposed
theoretical assumptions and deductions. Our theoretical framework can provide
practical insights such as establishing that training dataset size dictates an
optimal context length and bounds context length scaling for certain cases. We
hope our work may inspire new long context Language Models, as well as future
work studying Physics for Language Models. Code for our experiments is
available at this url: https://github.com/JingzheShi/NLPCtlScalingAndBounds.
|
2502.01482
|
On the Uncertainty of a Simple Estimator for Remote Source Monitoring
over ALOHA Channels
|
cs.IT math.IT
|
Efficient remote monitoring of distributed sources is essential for many
Internet of Things (IoT) applications. This work studies the uncertainty at the
receiver when tracking two-state Markov sources over a slotted random access
channel without feedback, using the conditional entropy as a performance
indicator, and considering the last received value as current state estimate.
We provide an analytical characterization of the metric, and evaluate three
access strategies: (i) maximizing throughput, (ii) transmitting only on state
changes, and (iii) minimizing uncertainty through optimized access
probabilities. Our results reveal that throughput optimization does not always
reduce uncertainty. Moreover, while reactive policies are optimal for symmetric
sources, asymmetric processes benefit from mixed strategies allowing
transmissions during state persistence.
|
2502.01484
|
Robot Cell Modeling via Exploratory Robot Motions
|
cs.RO
|
Generating a collision-free robot motion is crucial for safe applications in
real-world settings. This requires an accurate model of all obstacle shapes
within the constrained robot cell, which is particularly challenging and
time-consuming. The difficulty is heightened in flexible production lines,
where the environment model must be updated each time the robot cell is
modified. Furthermore, sensor-based methods often necessitate costly hardware
and calibration procedures, and can be influenced by environmental factors
(e.g., light conditions or reflections). To address these challenges, we
present a novel data-driven approach to modeling a cluttered workspace,
leveraging solely the robot internal joint encoders to capture exploratory
motions. By computing the corresponding swept volume, we generate a
(conservative) mesh of the environment that is subsequently used for collision
checking within established path planning and control methods. Our method
significantly reduces the complexity and cost of classical environment modeling
by removing the need for CAD files and external sensors. We validate the
approach with the KUKA LBR iisy collaborative robot in a pick-and-place
scenario. In less than three minutes of exploratory robot motions and less than
four additional minutes of computation time, we obtain an accurate model that
enables collision-free motions. Our approach is intuitive and easy to use,
making it accessible to users without specialized technical knowledge. It is
it accessible to users without specialized technical knowledge. It is
applicable to all types of industrial robots or cobots.
|
2502.01490
|
MoireDB: Formula-generated Interference-fringe Image Dataset
|
cs.CV cs.AI cs.LG
|
Image recognition models have struggled to achieve robustness to real-world
degradations. In this context, data augmentation methods like PixMix
improve robustness but rely on generative arts and feature visualizations
(FVis), which have copyright, drawing cost, and scalability issues. We propose
MoireDB, a formula-generated interference-fringe image dataset for image
augmentation enhancing robustness. MoireDB eliminates copyright concerns,
reduces dataset assembly costs, and enhances robustness by leveraging illusory
patterns. Experiments show that MoireDB-augmented images outperform
traditional Fractal arts and FVis-based augmentations, making it a scalable and
effective solution for improving model robustness against real-world
degradations.
|
2502.01491
|
Memorization Inheritance in Sequence-Level Knowledge Distillation for
Neural Machine Translation
|
cs.CL
|
In this work, we explore how instance-level memorization in the teacher
Neural Machine Translation (NMT) model gets inherited by the student model in
sequence-level knowledge distillation (SeqKD). We find that despite not
directly seeing the original training data, students memorize more than
baseline models (models of the same size, trained on the original data) -- 3.4%
for exact matches and 57% for extractive memorization -- and show increased
hallucination rates. Further, under this SeqKD setting, we also characterize
how students behave on specific training data subgroups, such as subgroups with
low quality and specific counterfactual memorization (CM) scores, and find that
students exhibit amplified denoising on low-quality subgroups. Finally, we
propose a modification to SeqKD named Adaptive-SeqKD, which intervenes in SeqKD
to reduce memorization and hallucinations. Overall, we recommend caution when
applying SeqKD: students inherit both their teachers' superior performance and
their fault modes, thereby requiring active monitoring.
|
2502.01492
|
Develop AI Agents for System Engineering in Factorio
|
cs.AI
|
Continuing advances in frontier model research are paving the way for
widespread deployment of AI agents. Meanwhile, global interest in building
large, complex systems in software, manufacturing, energy and logistics has
never been greater. Although AI-driven system engineering holds tremendous
promise, the static benchmarks dominating agent evaluations today fail to
capture the crucial skills required for implementing dynamic systems, such as
managing uncertain trade-offs and ensuring proactive adaptability. This
position paper advocates for training and evaluating AI agents' system
engineering abilities through automation-oriented sandbox games, particularly
Factorio. By directing research efforts in this direction, we can equip AI
agents with the specialized reasoning and long-horizon planning necessary to
design, maintain, and optimize tomorrow's most demanding engineering projects.
|
2502.01498
|
Compact Yet Highly Accurate Printed Classifiers Using Sequential Support
Vector Machine Circuits
|
cs.LG cs.AR
|
Printed Electronics (PE) technology has emerged as a promising alternative to
silicon-based computing. It offers attractive properties such as on-demand
ultra-low-cost fabrication, mechanical flexibility, and conformality. However,
PE is governed by large feature sizes, prohibiting the realization of complex
printed Machine Learning (ML) classifiers. Leveraging PE's ultra-low
non-recurring engineering and fabrication costs, designers can fully customize
hardware to a specific ML model and dataset, significantly reducing circuit
complexity. Despite significant advancements, state-of-the-art solutions
achieve area efficiency at the expense of considerable accuracy loss. Our work
mitigates this by designing area- and power-efficient printed ML classifiers
with little to no accuracy degradation. Specifically, we introduce the first
sequential Support Vector Machine (SVM) classifiers, exploiting the hardware
efficiency of bespoke control and storage units and a single
Multiply-Accumulate compute engine. Our SVMs yield on average 6x lower area and
4.6% higher accuracy compared to the printed state of the art.
|
2502.01500
|
Gamma/hadron separation in the TAIGA experiment with neural network
methods
|
astro-ph.IM astro-ph.HE cs.LG
|
In this work, the ability to select rare VHE gamma rays with neural network
methods is investigated in the case where the cosmic radiation flux strongly
prevails (a ratio of up to $10^4$ over the gamma radiation flux from a point
source). This ratio holds for the Crab Nebula in the TeV energy range; the
Crab is a well-studied source for the calibration and testing of various
methods and installations in gamma astronomy. The part of the TAIGA experiment
that includes three Imaging Atmospheric Cherenkov Telescopes also observes
this gamma source. Cherenkov telescopes obtain images of Extensive Air
Showers. Hillas parameters can be used to analyse images in the standard
processing method, or images can be processed with convolutional neural
networks. In this work we describe the main steps and results obtained in the
gamma/hadron separation task for the Crab Nebula with neural network methods.
The results are compared with the standard processing method applied in the
TAIGA collaboration, which uses Hillas parameter cuts. It is demonstrated that
a signal was detected at a level higher than 5.5$\sigma$ in 21 hours of Crab Nebula
observations after processing the experimental data with the neural network
method.
|
2502.01503
|
Sea-cret Agents: Maritime Abduction for Region Generation to Expose Dark
Vessel Trajectories
|
cs.AI cs.LG cs.LO cs.SC
|
Bad actors in the maritime industry engage in illegal behaviors after
disabling their vessel's automatic identification system (AIS) - which makes
finding such vessels difficult for analysts. Machine learning approaches only
succeed in identifying the locations of these ``dark vessels'' in the immediate
future. This work leverages ideas from the literature on abductive inference
applied to locating adversarial agents to solve the problem. Specifically, we
combine concepts from abduction, logic programming, and rule learning to create
an efficient method that approaches full recall of dark vessels while requiring
less search area than machine learning methods. We provide a logic-based
paradigm for reasoning about maritime vessels, an abductive inference query
method, an automatically extracted rule-based behavior model methodology, and a
thorough suite of experiments.
|
2502.01506
|
TwinMarket: A Scalable Behavioral and Social Simulation for Financial
Markets
|
cs.CE cs.CY
|
The study of social emergence has long been a central focus in social
science. Traditional modeling approaches, such as rule-based Agent-Based Models
(ABMs), struggle to capture the diversity and complexity of human behavior,
particularly the irrational factors emphasized in behavioral economics.
Recently, large language model (LLM) agents have gained traction as simulation
tools for modeling human behavior in social science and role-playing
applications. Studies suggest that LLMs can account for cognitive biases,
emotional fluctuations, and other non-rational influences, enabling more
realistic simulations of socio-economic dynamics. In this work, we introduce
TwinMarket, a novel multi-agent framework that leverages LLMs to simulate
socio-economic systems. Specifically, we examine how individual behaviors,
through interactions and feedback mechanisms, give rise to collective dynamics
and emergent phenomena. Through experiments in a simulated stock market
environment, we demonstrate how individual actions can trigger group behaviors,
leading to emergent outcomes such as financial bubbles and recessions. Our
approach provides valuable insights into the complex interplay between
individual decision-making and collective socio-economic patterns.
|
2502.01507
|
End-to-end Training for Text-to-Image Synthesis using Dual-Text
Embeddings
|
cs.CV
|
Text-to-Image (T2I) synthesis is a challenging task that requires modeling
complex interactions between two modalities (i.e., text and image). A common
framework adopted in recent state-of-the-art approaches to achieving such
multimodal interactions is to bootstrap the learning process with pre-trained
image-aligned text embeddings trained using contrastive loss. Furthermore,
these embeddings are typically trained generically and reused across various
synthesis models. In contrast, we explore an approach to learning text
embeddings specifically tailored to the T2I synthesis network, trained in an
end-to-end fashion. Further, we combine generative and contrastive training and
use two embeddings, one optimized to enhance the photo-realism of the generated
images, and the other seeking to capture text-to-image alignment. A
comprehensive set of experiments on three text-to-image benchmark datasets
(Oxford-102, Caltech-UCSD, and MS-COCO) reveal that having two separate
embeddings gives better results than using a shared one and that such an
approach performs favourably in comparison with methods that use text
representations from a pre-trained text encoder trained using a discriminative
approach. Finally, we demonstrate that such learned embeddings can be used in
other contexts as well, such as text-to-image manipulation.
|
2502.01510
|
Grid-based exoplanet atmospheric mass loss predictions through neural
network
|
astro-ph.EP cs.LG
|
The fast and accurate estimation of planetary mass-loss rates is critical for
planet population and evolution modelling. We use machine learning (ML) for
fast interpolation across an existing large grid of hydrodynamic upper
atmosphere models, providing mass-loss rates for any planet inside the grid
boundaries with superior accuracy compared to previously published
interpolation schemes. We consider an already available grid comprising about
11000 hydrodynamic upper atmosphere models for training and generate an
additional grid of about 250 models for testing purposes. We develop the ML
interpolation scheme (dubbed "atmospheric Mass Loss INquiry frameworK"; MLink)
using a Dense Neural Network, further comparing the results with what was
obtained employing classical approaches (e.g. linear interpolation and radial
basis function-based regression). Finally, we study the impact of the different
interpolation schemes on the evolution of a small sample of carefully selected
synthetic planets. MLink provides high-quality interpolation across the entire
parameter space by significantly reducing both the number of points with large
interpolation errors and the maximum interpolation error compared to previously
available schemes. For most cases, evolutionary tracks computed employing MLink
and classical schemes lead to comparable planetary parameters at
Gyr-timescales. However, particularly for planets close to the top edge of the
radius gap, the difference between the predicted planetary radii at a given age
of tracks obtained employing MLink and classical interpolation schemes can
exceed the typical observational uncertainties. Machine learning can be
successfully used to estimate atmospheric mass-loss rates from model grids,
paving the way to exploring larger and more complex future grids of models
that account for more physical processes.
|
2502.01512
|
Wrapped Gaussian on the manifold of Symmetric Positive Definite Matrices
|
stat.ME cs.LG math.ST stat.ML stat.TH
|
Circular and non-flat data distributions are prevalent across diverse domains
of data science, yet their specific geometric structures often remain
underutilized in machine learning frameworks. A principled approach to
accounting for the underlying geometry of such data is pivotal, particularly
when extending statistical models, like the pervasive Gaussian distribution. In
this work, we tackle those issue by focusing on the manifold of symmetric
positive definite matrices, a key focus in information geometry. We introduced
a non-isotropic wrapped Gaussian by leveraging the exponential map, we derive
theoretical properties of this distribution and propose a maximum likelihood
framework for parameter estimation. Furthermore, we reinterpret established
classifiers on SPD through a probabilistic lens and introduce new classifiers
based on the wrapped Gaussian model. Experiments on synthetic and real-world
datasets demonstrate the robustness and flexibility of this geometry-aware
distribution, underscoring its potential to advance manifold-based data
analysis. This work lays the groundwork for extending classical machine
learning and statistical methods to more complex and structured data.
|
2502.01517
|
Regularized interpolation in 4D neural fields enables optimization of 3D
printed geometries
|
cs.GR cs.AI
|
The ability to accurately produce geometries with specified properties is
perhaps the most important characteristic of a manufacturing process. 3D
printing is marked by exceptional design freedom and complexity but is also
prone to geometric and other defects that must be resolved for it to reach its
full potential. Ultimately, this will require both astute design decisions and
timely parameter adjustments to maintain stability, which is challenging even
with expert human operators. While machine learning is widely investigated in
3D printing, existing methods typically overlook spatial features that vary
across prints and thus find it difficult to produce desired geometries. Here,
we encode volumetric representations of printed parts into neural fields and
apply a new regularization strategy, based on minimizing the partial derivative
of the field's output with respect to a single, non-learnable parameter. By
thus encouraging small input changes to yield only small output variations, we
encourage smooth interpolation between observed volumes and hence realistic
geometry predictions. This framework therefore allows the extraction of
'imagined' 3D shapes, revealing how a part would look if manufactured under
previously unseen parameters. The resulting continuous field is used for
data-driven optimization to maximize geometric fidelity between expected and
produced geometries, reducing post-processing, material waste, and production
costs. By optimizing process parameters dynamically, our approach enables
advanced planning strategies, potentially allowing manufacturers to better
realize complex and feature-rich designs.
|
2502.01518
|
Hybrid Machine Learning Model for Detecting Bangla Smishing Text Using
BERT and Character-Level CNN
|
cs.CL cs.LG cs.SI
|
Smishing is a social engineering attack using SMS containing malicious
content to deceive individuals into disclosing sensitive information or
transferring money to cybercriminals. Smishing attacks have surged by 328%,
posing a major threat to mobile users, with losses exceeding \$54.2 million in
2019. Despite its growing prevalence, the issue remains significantly
under-addressed. This paper presents a novel hybrid machine learning model for
detecting Bangla smishing texts, combining Bidirectional Encoder
Representations from Transformers (BERT) with Convolutional Neural Networks
(CNNs) for enhanced character-level analysis.
Our model addresses multi-class classification by distinguishing between
Normal, Promotional, and Smishing SMS. Unlike traditional binary classification
methods, our approach integrates BERT's contextual embeddings with CNN's
character-level features, improving detection accuracy. Enhanced by an
attention mechanism, the model effectively prioritizes crucial text segments.
Our model achieves 98.47% accuracy, outperforming traditional classifiers, with
high precision and recall in Smishing detection, and strong performance across
all categories.
|
2502.01520
|
Prioritizing App Reviews for Developer Responses on Google Play
|
cs.SE cs.LG
|
The number of applications in Google Play has increased dramatically in
recent years. On Google Play, users can write detailed reviews and rate apps,
with these ratings significantly influencing app success and download numbers.
Reviews often include notable information like feature requests, which are
valuable for software maintenance. Users can update their reviews and ratings
anytime. Studies indicate that apps with ratings below three stars are
typically avoided by potential users. Since 2013, Google Play has allowed
developers to respond to user reviews, helping resolve issues and potentially
boosting overall ratings and download rates. However, responding to reviews is
time-consuming, and only 13% to 18% of developers engage in this practice. To
address this challenge, we propose a method to prioritize reviews based on
response priority. We collected and preprocessed review data, extracted both
textual and semantic features, and assessed their impact on the importance of
responses. We labelled reviews as requiring a response or not and trained four
different machine learning models to prioritize them. We evaluated the models'
performance using metrics such as F1-Score, Accuracy, Precision, and Recall.
Our findings indicate that the XGBoost model is the most effective for
prioritizing reviews needing a response.
|
2502.01521
|
Toward Task Generalization via Memory Augmentation in Meta-Reinforcement
Learning
|
cs.LG cs.AI cs.RO
|
In reinforcement learning (RL), agents often struggle to perform well on
tasks that differ from those encountered during training. This limitation
presents a challenge to the broader deployment of RL in diverse and dynamic
task settings. In this work, we introduce memory augmentation, a memory-based
RL approach to improve task generalization. Our approach leverages
task-structured augmentations to simulate plausible out-of-distribution
scenarios and incorporates memory mechanisms to enable context-aware policy
adaptation. Trained on a predefined set of tasks, our policy demonstrates the
ability to generalize to unseen tasks through memory augmentation without
requiring additional interactions with the environment. Through extensive
simulation experiments and real-world hardware evaluations on legged locomotion
tasks, we demonstrate that our approach achieves zero-shot generalization to
unseen tasks while maintaining robust in-distribution performance and high
sample efficiency.
|
2502.01522
|
BD-Diff: Generative Diffusion Model for Image Deblurring on Unknown
Domains with Blur-Decoupled Learning
|
cs.CV
|
Generative diffusion models trained on large-scale datasets have achieved
remarkable progress in image synthesis. In favor of their ability to supplement
missing details and generate aesthetically pleasing contents, recent works have
applied them to image deblurring tasks via training an adapter on blurry-sharp
image pairs to provide structural conditions for restoration. However,
acquiring substantial amounts of realistic paired data is challenging and
costly in real-world scenarios. On the other hand, relying solely on synthetic
data often results in overfitting, leading to unsatisfactory performance when
confronted with unseen blur patterns. To tackle this issue, we propose BD-Diff,
a generative-diffusion-based model designed to enhance deblurring performance
on unknown domains by decoupling structural features and blur patterns through
joint training on three specially designed tasks. We employ two Q-Formers as
structural representation and blur pattern extractors, respectively. The
features extracted by them will be used for the supervised deblurring task on
synthetic data and the unsupervised blur-transfer task by leveraging unpaired
blurred images from the target domain simultaneously. Furthermore, we introduce
a reconstruction task to make the structural features and blur patterns
complementary. This blur-decoupled learning process enhances the generalization
capabilities of BD-Diff when encountering unknown domain blur patterns.
Experiments on real-world datasets demonstrate that BD-Diff outperforms
existing state-of-the-art methods in blur removal and structural preservation
in various challenging scenarios. The codes will be released in
https://github.com/donahowe/BD-Diff
|
2502.01523
|
CondAmbigQA: A Benchmark and Dataset for Conditional Ambiguous Question
Answering
|
cs.CL
|
Large language models (LLMs) are prone to hallucinations in
question-answering (QA) tasks when faced with ambiguous questions. Users often
assume that LLMs share their cognitive alignment, a mutual understanding of
context, intent, and implicit details, leading them to omit critical
information in the queries. However, LLMs generate responses based on their
own assumptions, which may be perceived as hallucinations if they misalign
with the user's intent. Therefore, identifying those implicit assumptions is
crucial to resolving ambiguities in QA. Prior work, such as AmbigQA, reduces
ambiguity in queries via human-annotated clarifications, which is not feasible
in real applications. Meanwhile, ASQA compiles AmbigQA's short answers into
long-form responses but inherits human biases and fails to capture explicit
logical distinctions that differentiate the
answers. We introduce Conditional Ambiguous Question-Answering (CondAmbigQA), a
benchmark with 200 ambiguous queries and condition-aware evaluation metrics.
Our study pioneers the concept of ``conditions'' in ambiguous QA tasks, where
conditions stand for contextual constraints or assumptions that resolve
ambiguities. The retrieval-based annotation strategy uses retrieved Wikipedia
fragments to identify possible interpretations for a given query as its
conditions and annotate the answers through those conditions. Such a strategy
minimizes human bias introduced by different knowledge levels among annotators.
By fixing retrieval results, CondAmbigQA evaluates how RAG systems leverage
conditions to resolve ambiguities. Experiments show that models considering
conditions before answering improve performance by $20\%$, with an additional
$5\%$ gain when conditions are explicitly provided. These results underscore
the value of conditional reasoning in QA, offering researchers tools to
rigorously evaluate ambiguity resolution.
|
2502.01524
|
Efficiently Integrate Large Language Models with Visual Perception: A
Survey from the Training Paradigm Perspective
|
cs.CV cs.AI cs.CL cs.LG
|
The integration of vision-language modalities has been a significant focus in
multimodal learning, traditionally relying on Vision-Language Pretrained
Models. However, with the advent of Large Language Models (LLMs), there has
been a notable shift towards incorporating LLMs with vision modalities.
Following this, the training paradigms for incorporating vision modalities into
LLMs have evolved. Initially, the approach was to integrate the modalities
through pretraining the modality integrator, named Single-stage Tuning. It has
since branched out into methods focusing on performance enhancement, denoted as
Two-stage Tuning, and those prioritizing parameter efficiency, referred to as
Direct Adaptation. However, existing surveys primarily address the latest
Vision Large Language Models (VLLMs) with Two-stage Tuning, leaving a gap in
understanding the evolution of training paradigms and their unique
parameter-efficient considerations. This paper categorizes and reviews 34 VLLMs
from top conferences, journals, and highly cited arXiv papers, focusing on
parameter efficiency during adaptation from the training paradigm perspective.
We first introduce the architecture of LLMs and parameter-efficient learning
methods, followed by a discussion on vision encoders and a comprehensive
taxonomy of modality integrators. We then review three training paradigms and
their efficiency considerations, summarizing benchmarks in the VLLM field. To
gain deeper insights into their effectiveness in parameter efficiency, we
compare and discuss the experimental results of representative models, among
which the experiment of the Direct Adaptation paradigm is replicated. Providing
insights into recent developments and practical uses, this survey is a vital
guide for researchers and practitioners navigating the efficient integration of
vision modalities into LLMs.
|
2502.01527
|
Enhancing Bayesian Network Structural Learning with Monte Carlo Tree
Search
|
cs.LG
|
This article presents MCTS-BN, an adaptation of the Monte Carlo Tree Search
(MCTS) algorithm for the structural learning of Bayesian Networks (BNs).
Initially designed for game tree exploration, MCTS has been repurposed to
address the challenge of learning BN structures by exploring the search space
of potential ancestral orders in Bayesian Networks. Then, it employs Hill
Climbing (HC) to derive a Bayesian Network structure from each order. In large
BNs, where the search space for variable orders becomes vast, using completely
random orders during the rollout phase is often unreliable and impractical. We
adopt a semi-randomized approach to address this challenge by incorporating
variable orders obtained from other heuristic search algorithms such as Greedy
Equivalent Search (GES), PC, or HC itself. This hybrid strategy mitigates the
computational burden and enhances the reliability of the rollout process.
Experimental evaluations demonstrate the effectiveness of MCTS-BN in improving
BNs generated by traditional structural learning algorithms, exhibiting robust
performance even when base algorithm orders are suboptimal and surpassing the
gold standard when provided with favorable orders.
|
2502.01528
|
SQUASH: Serverless and Distributed Quantization-based Attributed Vector
Similarity Search
|
cs.DC cs.DB
|
Vector similarity search presents significant challenges in terms of
scalability for large and high-dimensional datasets, as well as in providing
native support for hybrid queries. Serverless computing and cloud functions
offer attractive benefits such as elasticity and cost-effectiveness, but are
difficult to apply to data-intensive workloads. Jointly addressing these two
main challenges, we present SQUASH, the first fully serverless vector search
solution with rich support for hybrid queries. It features OSQ, an optimized
and highly parallelizable quantization-based approach for vectors and
attributes. Its segment-based storage mechanism enables significant compression
in resource-constrained settings and offers efficient dimensional extraction
operations. SQUASH performs a single distributed pass to guarantee the return
of sufficiently many vectors satisfying the filter predicate, achieving high
accuracy and avoiding redundant computation for vectors which fail the
predicate. A multi-level search workflow is introduced to prune most vectors
early to minimize the load on Function-as-a-Service (FaaS) instances. SQUASH is
designed to detect and exploit relevant data retained in re-used runtime
containers, eliminating redundant I/O and reducing costs. Finally, we
demonstrate a new tree-based method for rapid FaaS invocation, enabling the
bi-directional flow of data via request/response payloads. Experiments
comparing SQUASH with state-of-the-art serverless vector search solutions and
server-based baselines on vector search benchmarks confirm significant
performance improvements at a lower cost.
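To make the quantization idea concrete, a per-dimension scalar quantizer of the general kind OSQ builds on can be sketched as follows. This is a generic illustration under assumed names; it is not SQUASH's actual OSQ algorithm.

```python
import numpy as np

def scalar_quantize(vectors, n_bits=8):
    """Quantize each dimension independently to n_bits codes,
    enabling compact storage and cheap per-dimension extraction."""
    lo = vectors.min(axis=0)
    hi = vectors.max(axis=0)
    scale = (hi - lo) / (2 ** n_bits - 1)
    scale[scale == 0] = 1.0  # guard constant dimensions
    codes = np.round((vectors - lo) / scale).astype(np.uint8)
    return codes, lo, scale

def dequantize(codes, lo, scale):
    """Approximate reconstruction of the original vectors."""
    return codes * scale + lo
```

Because each dimension is coded separately, individual dimensions can be extracted from the compressed representation without decoding whole vectors, which is the property the segment-based storage mechanism exploits.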
|
2502.01530
|
The in-context inductive biases of vision-language models differ across
modalities
|
cs.CV cs.CL cs.LG
|
Inductive biases are what allow learners to make guesses in the absence of
conclusive evidence. These biases have often been studied in cognitive science
using concepts or categories -- e.g. by testing how humans generalize a new
category from a few examples that leave the category boundary ambiguous. We use
these approaches to study generalization in foundation models during in-context
learning. Modern foundation models can condition on both vision and text, and
differences in how they interpret and learn from these different modalities is
an emerging area of study. Here, we study how their generalizations vary by the
modality in which stimuli are presented, and the way the stimuli are described
in text. We study these biases with three different experimental paradigms,
across three different vision-language models. We find that the models
generally show some bias towards generalizing according to shape over color.
This shape bias tends to be amplified when the examples are presented visually.
By contrast, when examples are presented in text, the ordering of adjectives
affects generalization. However, the extent of these effects varies across models
and paradigms. These results help to reveal how vision-language models
represent different types of inputs in context, and may have practical
implications for the use of vision-language models.
|
2502.01532
|
Federated Learning with Discriminative Naive Bayes Classifier
|
cs.LG
|
Federated Learning has emerged as a promising approach to train machine
learning models on decentralized data sources while preserving data privacy.
This paper proposes a new federated approach for Naive Bayes (NB)
classification, assuming discrete variables. Our approach federates a
discriminative variant of NB, sharing parameters that carry no direct
probabilistic meaning instead of conditional probability tables, which makes
the process more robust against possible attacks. We conduct extensive experiments on 12 datasets to
validate the efficacy of our approach, comparing federated and non-federated
settings. Additionally, we benchmark our method against the generative variant
of NB, which serves as a baseline for comparison. Our experimental results
demonstrate the effectiveness of our method in achieving accurate
classification.
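The federation step for such a model can be sketched as a size-weighted average of the clients' discriminative parameter vectors, in the spirit of federated averaging. This is a minimal illustration with assumed names, not the paper's exact aggregation rule.

```python
import numpy as np

def federated_average(client_params, client_sizes):
    """Aggregate discriminatively learned parameter vectors from
    clients, weighted by local dataset size. Only raw weights are
    shared, never conditional probability tables."""
    sizes = np.asarray(client_sizes, dtype=float)
    stacked = np.stack(client_params)           # shape: (clients, params)
    weights = sizes / sizes.sum()               # normalize to sum to 1
    return (stacked * weights[:, None]).sum(axis=0)
```

A server would broadcast the averaged vector back to clients, which resume local discriminative training, so raw counts and probabilities never leave a client.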
|
2502.01533
|
Transformers trained on proteins can learn to attend to Euclidean
distance
|
cs.LG cs.AI q-bio.BM
|
While conventional Transformers generally operate on sequence data, they can
be used in conjunction with structure models, typically SE(3)-invariant or
equivariant graph neural networks (GNNs), for 3D applications such as protein
structure modelling. These hybrids typically involve either (1)
preprocessing/tokenizing structural features as input for Transformers or (2)
taking Transformer embeddings and processing them within a structural
representation. However, there is evidence that Transformers can learn to
process structural information on their own, as in the AlphaFold3 structural
diffusion model. In this work we show that Transformers can function
independently as structure models when passed linear embeddings of coordinates.
We first provide a theoretical explanation for how Transformers can learn to
filter attention as a 3D Gaussian with learned variance. We then validate this
theory using both simulated 3D points and in the context of masked token
prediction for proteins. Finally, we show that pre-training protein Transformer
encoders with structure improves performance on a downstream task,
outperforming custom structural models. Together, this work provides
a basis for using standard Transformers as hybrid structure-language models.
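The claimed attention pattern, filtering by Euclidean distance as a 3D Gaussian with learned variance, can be written out directly. This sketch assumes raw coordinates in place of learned linear embeddings and uses an assumed function name.

```python
import numpy as np

def gaussian_attention(coords, sigma=1.0):
    """Attention weights that decay as a Gaussian in the pairwise
    3D Euclidean distance, with variance sigma**2, then softmax-
    normalized over keys."""
    diff = coords[:, None, :] - coords[None, :, :]   # (n, n, 3)
    sq_dist = (diff ** 2).sum(axis=-1)               # squared distances
    logits = -sq_dist / (2 * sigma ** 2)
    logits -= logits.max(axis=-1, keepdims=True)     # numerical stability
    w = np.exp(logits)
    return w / w.sum(axis=-1, keepdims=True)
```

Under this view, a standard attention head only needs its query/key projections of the coordinate embeddings to produce logits proportional to negative squared distance, which is what the theoretical argument establishes.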
|