| id | title | categories | abstract |
|---|---|---|---|
2502.02167
|
Multilingual Attribute Extraction from News Web Pages
|
cs.CL cs.IR
|
This paper addresses the challenge of automatically extracting attributes
from news article web pages across multiple languages. Recent neural network
models have shown high efficacy in extracting information from semi-structured
web pages. However, these models are predominantly applied to domains like
e-commerce and are pre-trained using English data, complicating their
application to web pages in other languages. We prepared a multilingual dataset
comprising 3,172 marked-up news web pages across six languages (English,
German, Russian, Chinese, Korean, and Arabic) from 161 websites. The dataset is
publicly available on GitHub. We fine-tuned the pre-trained state-of-the-art
model, MarkupLM, to extract news attributes from these pages and evaluated the
impact of translating pages into English on extraction quality. Additionally,
we pre-trained another state-of-the-art model, DOM-LM, on multilingual data and
fine-tuned it on our dataset. We compared both fine-tuned models to existing
open-source news data extraction tools, achieving superior extraction metrics.
|
2502.02170
|
Graph Neural Networks for O-RAN Mobility Management: A Link Prediction
Approach
|
cs.NI cs.AI
|
Mobility performance has been a key focus in cellular networks up to 5G. To
enhance handover (HO) performance, 3GPP introduced Conditional Handover (CHO)
and Layer 1/Layer 2 Triggered Mobility (LTM) mechanisms in 5G. While these
reactive HO strategies address the trade-off between HO failures (HOF) and
ping-pong effects, they often result in inefficient radio resource utilization
due to additional HO preparations. To overcome these challenges, this article
proposes a proactive HO framework for mobility management in O-RAN, leveraging
user-cell link predictions to identify the optimal target cell for HO. We
explore various categories of Graph Neural Networks (GNNs) for link prediction
and analyze the complexity of applying them to the mobility management domain.
Two GNN models are compared using a real-world dataset, with experimental
results demonstrating their ability to capture the dynamic and graph-structured
nature of cellular networks. Finally, we present key insights from our study
and outline future steps to enable the integration of GNN-based link prediction
for mobility management in 6G networks.
|
2502.02171
|
DeepForest: Sensing Into Self-Occluding Volumes of Vegetation With
Aerial Imaging
|
cs.CV eess.IV
|
Access to below-canopy volumetric vegetation data is crucial for
understanding ecosystem dynamics. We address the long-standing limitation of
remote sensing to penetrate deep into dense canopy layers. LiDAR and radar are
currently considered the primary options for measuring 3D vegetation
structures, while cameras can only extract the reflectance and depth of top
layers. Using conventional, high-resolution aerial images, our approach allows
sensing deep into self-occluding vegetation volumes, such as forests. It is
similar in spirit to the imaging process of wide-field microscopy, but can
handle much larger scales and strong occlusion. We scan focal stacks by
synthetic-aperture imaging with drones and reduce out-of-focus signal
contributions using pre-trained 3D convolutional neural networks. The resulting
volumetric reflectance stacks contain low-frequency representations of the
vegetation volume. Combining multiple reflectance stacks from various spectral
channels provides insights into plant health, growth, and environmental
conditions throughout the entire vegetation volume.
|
2502.02172
|
EditIQ: Automated Cinematic Editing of Static Wide-Angle Videos via
Dialogue Interpretation and Saliency Cues
|
cs.MM cs.CV cs.HC
|
We present EditIQ, a completely automated framework for cinematically editing
scenes captured via a stationary, large field-of-view and high-resolution
camera. From the static camera feed, EditIQ initially generates multiple
virtual feeds, emulating a team of cameramen. These virtual camera shots, termed
rushes, are subsequently assembled using an automated editing algorithm, whose
objective is to present the viewer with the most vivid scene content. To
understand key scene elements and guide the editing process, we employ a
two-pronged approach: (1) a large language model (LLM)-based dialogue
understanding module to analyze conversational flow, coupled with (2) visual
saliency prediction to identify meaningful scene elements and camera shots
therefrom. We then formulate cinematic video editing as an energy minimization
problem over shot selection, where cinematic constraints determine shot
choices, transitions, and continuity. EditIQ synthesizes an aesthetically and
visually compelling representation of the original narrative while maintaining
cinematic coherence and a smooth viewing experience. Efficacy of EditIQ against
competing baselines is demonstrated via a psychophysical study involving twenty
participants on the BBC Old School dataset plus eleven theatre performance
videos. Video samples from EditIQ can be found at
https://editiq-ave.github.io/.
|
2502.02173
|
Mass-Editing Memory with Attention in Transformers: A cross-lingual
exploration of knowledge
|
cs.CL cs.AI
|
Recent research has explored methods for updating and modifying factual
knowledge in large language models, often focusing on specific multi-layer
perceptron blocks. This study expands on this work by examining the
effectiveness of existing knowledge editing methods across languages and
delving into the role of attention mechanisms in this process. Drawing from the
insights gained, we propose Mass-Editing Memory with Attention in Transformers
(MEMAT), a method that achieves significant improvements in all metrics while
requiring minimal parameter modifications. MEMAT delivers a remarkable 10%
increase in magnitude metrics, benefits languages not included in the training
data and also demonstrates a high degree of portability. Our code and data are
at https://github.com/dtamayo-nlp/MEMAT.
|
2502.02175
|
VLA-Cache: Towards Efficient Vision-Language-Action Model via Adaptive
Token Caching in Robotic Manipulation
|
cs.RO cs.CV cs.LG
|
Vision-Language-Action (VLA) models can process instructions and visual
perception to directly generate actions as output in an end-to-end fashion,
owing to their strong multi-modal reasoning capabilities. While the performance
of VLA models is promising, their computational cost can be substantial. This
raises challenges for applying them to robotics tasks, which require real-time
decision-making to respond quickly to environmental changes. Since robotic
control involves sequential decision-making, the visual input often exhibits
minimal variation between successive steps. A natural idea is to reuse the
computational results of unchanged visual tokens from the last step. Motivated
by this idea, we propose VLA-Cache, an efficient vision-language-action model.
VLA-Cache incorporates a token-selection mechanism that compares the visual
input at each step with the input from the previous step, adaptively
identifying visual tokens with minimal changes. The computational results for
these unchanged tokens are then reused in subsequent steps via KV-cache,
thereby significantly improving the efficiency of the VLA-Cache model.
Experimental results on both simulation (e.g., the LIBERO benchmark and SIMPLER)
and real-world robot experiments validate that VLA-Cache achieves practical
acceleration with minimal sacrifice in success rate.
|
2502.02179
|
Deep Ensemble approach for Enhancing Brain Tumor Segmentation in
Resource-Limited Settings
|
eess.IV cs.CV
|
Segmentation of brain tumors is a critical step in treatment planning, yet
manual segmentation is both time-consuming and subjective, relying heavily on
the expertise of radiologists. In Sub-Saharan Africa, this challenge is
magnified by overburdened medical systems and limited access to advanced
imaging modalities and expert radiologists. Automating brain tumor segmentation
using deep learning offers a promising solution. Convolutional Neural Networks
(CNNs), especially the U-Net architecture, have shown significant potential.
However, a major challenge remains: achieving generalizability across different
datasets. This study addresses this gap by developing a deep learning ensemble
that integrates UNet3D, V-Net, and MSA-VNet models for the semantic
segmentation of gliomas. By initially training on the BraTS-GLI dataset and
fine-tuning with the BraTS-SSA dataset, we enhance model performance. Our
ensemble approach significantly outperforms individual models, achieving Dice
scores of 0.8358 for Tumor Core, 0.8521 for Whole Tumor, and 0.8167 for
Enhancing Tumor. These results underscore the potential of ensemble methods in
improving the accuracy and reliability of automated brain tumor segmentation,
particularly in resource-limited settings.
|
2502.02180
|
The Elicitation Game: Evaluating Capability Elicitation Techniques
|
cs.AI cs.LG
|
Capability evaluations are required to understand and regulate AI systems
that may be deployed or further developed. Therefore, it is important that
evaluations provide an accurate estimation of an AI system's capabilities.
However, in numerous cases, previously latent capabilities have been elicited
from models, sometimes long after initial release. Accordingly, substantial
efforts have been made to develop methods for eliciting latent capabilities
from models. In this paper, we evaluate the effectiveness of capability
elicitation techniques by intentionally training model organisms -- language
models with hidden capabilities that are revealed by a password. We introduce a
novel method for training model organisms, based on circuit breaking, which is
more robust to elicitation techniques than standard password-locked models. We
focus on elicitation techniques based on prompting and activation steering, and
compare these to fine-tuning methods. Prompting techniques can elicit the
actual capability of both password-locked and circuit-broken model organisms in
an MCQA setting, while steering fails to do so. For a code-generation task,
only fine-tuning can elicit the hidden capabilities of our novel model
organism. Additionally, our results suggest that combining techniques improves
elicitation. Still, if possible, fine-tuning should be the method of choice to
improve the trustworthiness of capability evaluations.
|
2502.02182
|
Sequence models for continuous cell cycle stage prediction from
brightfield images
|
cs.CV
|
Understanding cell cycle dynamics is crucial for studying biological
processes such as growth, development and disease progression. While
fluorescent protein reporters like the Fucci system allow live monitoring of
cell cycle phases, they require genetic engineering and occupy additional
fluorescence channels, limiting broader applicability in complex experiments.
In this study, we conduct a comprehensive evaluation of deep learning methods
for predicting continuous Fucci signals using non-fluorescence brightfield
imaging, a widely available label-free modality. To that end, we generated a
large dataset of 1.3 M images of dividing RPE1 cells with full cell cycle
trajectories to quantitatively compare the predictive performance of distinct
model categories including single time-frame models, causal state space models
and bidirectional transformer models. We show that both causal and
transformer-based models significantly outperform single- and fixed-frame
approaches, enabling the prediction of visually imperceptible transitions like
G1/S within 1h resolution. Our findings underscore the importance of sequence
models for accurate predictions of cell cycle dynamics and highlight their
potential for label-free imaging.
|
2502.02185
|
Generative Kernel Spectral Clustering
|
cs.LG
|
Modern clustering approaches often trade interpretability for performance,
particularly in deep learning-based methods. We present Generative Kernel
Spectral Clustering (GenKSC), a novel model combining kernel spectral
clustering with generative modeling to produce both well-defined clusters and
interpretable representations. By augmenting weighted variance maximization
with reconstruction and clustering losses, our model creates an explorable
latent space where cluster characteristics can be visualized through traversals
along cluster directions. Results on MNIST and FashionMNIST datasets
demonstrate the model's ability to learn meaningful cluster representations.
|
2502.02187
|
ShapeShifter: 3D Variations Using Multiscale and Sparse Point-Voxel
Diffusion
|
cs.CV cs.AI
|
This paper proposes ShapeShifter, a new 3D generative model that learns to
synthesize shape variations based on a single reference model. While generative
methods for 3D objects have recently attracted much attention, current
techniques often lack geometric details and/or require long training times and
large resources. Our approach remedies these issues by combining sparse voxel
grids and point, normal, and color sampling within a multiscale neural
architecture that can be trained efficiently and in parallel. We show that our
resulting variations better capture the fine details of their original input
and can handle more general types of surfaces than previous SDF-based methods.
Moreover, we offer interactive generation of 3D shape variants, allowing more
human control in the design loop if needed.
|
2502.02189
|
deCIFer: Crystal Structure Prediction from Powder Diffraction Data using
Autoregressive Language Models
|
cs.LG
|
Novel materials drive progress across applications from energy storage to
electronics. Automated characterization of material structures with machine
learning methods offers a promising strategy for accelerating this key step in
material design. In this work, we introduce an autoregressive language model
that performs crystal structure prediction (CSP) from powder diffraction data.
The presented model, deCIFer, generates crystal structures in the widely used
Crystallographic Information File (CIF) format and can be conditioned on powder
X-ray diffraction (PXRD) data. Unlike earlier works that primarily rely on
high-level descriptors like composition, deCIFer performs CSP from diffraction
data. We train deCIFer on nearly 2.3M unique crystal structures and validate on
diverse sets of PXRD patterns for characterizing challenging inorganic crystal
systems. Qualitative and quantitative assessments using the residual weighted
profile and Wasserstein distance show that deCIFer produces structures that
more accurately match the target diffraction data when conditioned, compared to
the unconditioned case. Notably, deCIFer can achieve a 94% match rate on unseen
data. deCIFer bridges experimental diffraction data with computational CSP,
lending itself as a powerful tool for crystal structure characterization and
accelerating materials discovery.
|
2502.02190
|
Discovering Quality-Diversity Algorithms via Meta-Black-Box Optimization
|
cs.NE cs.LG
|
Quality-Diversity has emerged as a powerful family of evolutionary algorithms
that generate diverse populations of high-performing solutions by implementing
local competition principles inspired by biological evolution. While these
algorithms successfully foster diversity and innovation, their specific
mechanisms rely on heuristics, such as grid-based competition in MAP-Elites or
nearest-neighbor competition in unstructured archives. In this work, we propose
a fundamentally different approach: using meta-learning to automatically
discover novel Quality-Diversity algorithms. By parameterizing the competition
rules using attention-based neural architectures, we evolve new algorithms that
capture complex relationships between individuals in the descriptor space. Our
discovered algorithms demonstrate competitive or superior performance compared
to established Quality-Diversity baselines while exhibiting strong
generalization to higher dimensions, larger populations, and
out-of-distribution domains like robot control. Notably, even when optimized
solely for fitness, these algorithms naturally maintain diverse populations,
suggesting meta-learning rediscovers that diversity is fundamental to effective
optimization.
|
2502.02195
|
EFKAN: A KAN-Integrated Neural Operator For Efficient Magnetotelluric
Forward Modeling
|
physics.geo-ph cs.LG
|
Magnetotelluric (MT) forward modeling is fundamental for improving the
accuracy and efficiency of MT inversion. Neural operators (NOs) have been
effectively used for rapid MT forward modeling, demonstrating their promising
performance in solving the MT forward modeling-related partial differential
equations (PDEs). Particularly, they can obtain the electromagnetic field at
arbitrary locations and frequencies. In these NOs, the projection layers are
typically multi-layer perceptrons (MLPs), which may reduce solution accuracy
because MLPs suffer from drawbacks such as limited interpretability and a
tendency to overfit. Therefore, to
improve the accuracy of MT forward modeling with NOs and explore potential
alternatives to MLPs, we propose EFKAN, a novel neural operator that extends the
Fourier neural operator (FNO) with a Kolmogorov-Arnold network (KAN). Within
the EFKAN framework, the FNO serves as the branch network to calculate the
apparent resistivity and phase from the resistivity model in the frequency
domain. Meanwhile, the KAN acts as the trunk network to project the resistivity
and phase, determined by the FNO, to the desired locations and frequencies.
Experimental results demonstrate that the proposed method not only achieves
higher accuracy in obtaining apparent resistivity and phase compared to the NO
equipped with MLPs at the desired frequencies and locations but also
outperforms traditional numerical methods in terms of computational speed.
|
2502.02196
|
Exploiting Ensemble Learning for Cross-View Isolated Sign Language
Recognition
|
cs.CV cs.AI
|
In this paper, we present our solution to the Cross-View Isolated Sign
Language Recognition (CV-ISLR) challenge held at WWW 2025. CV-ISLR addresses a
critical issue in traditional Isolated Sign Language Recognition (ISLR), where
existing datasets predominantly capture sign language videos from a frontal
perspective, while real-world camera angles often vary. To accurately recognize
sign language from different viewpoints, models must be capable of
understanding gestures from multiple angles, making cross-view recognition
challenging. To address this, we explore the advantages of ensemble learning,
which enhances model robustness and generalization across diverse views. Our
approach, built on a multi-dimensional Video Swin Transformer model, leverages
this ensemble strategy to achieve competitive performance. Finally, our
solution ranked 3rd in both the RGB-based ISLR and RGB-D-based ISLR tracks,
demonstrating its effectiveness in handling the challenges of cross-view
recognition. The code is available at:
https://github.com/Jiafei127/CV_ISLR_WWW2025.
|
2502.02197
|
An Efficient Local Search Approach for Polarized Community Discovery in
Signed Networks
|
cs.LG cs.AI cs.SI
|
Signed networks, where edges are labeled as positive or negative to indicate
friendly or antagonistic interactions, offer a natural framework for studying
polarization, trust, and conflict in social systems. Detecting meaningful group
structures in these networks is crucial for understanding online discourse,
political division, and trust dynamics. A key challenge is to identify groups
that are cohesive internally yet antagonistic externally, while allowing for
neutral or unaligned vertices. In this paper, we address this problem by
identifying $k$ polarized communities that are large, dense, and balanced in
size. We develop an approach based on Frank-Wolfe optimization, leading to a
local search procedure with provable convergence guarantees. Our method is both
scalable and efficient, outperforming state-of-the-art baselines in solution
quality while remaining competitive in terms of computational efficiency.
|
2502.02199
|
When Dimensionality Hurts: The Role of LLM Embedding Compression for
Noisy Regression Tasks
|
cs.CL cs.CE cs.LG q-fin.CP
|
Large language models (LLMs) have shown remarkable success in language
modelling due to scaling laws found in model size and the hidden dimension of
the model's text representation. Yet, we demonstrate that compressed
representations of text can yield better performance in LLM-based regression
tasks. In this paper, we compare the relative performance of embedding
compression in three different signal-to-noise contexts: financial return
prediction, writing quality assessment and review scoring. Our results show
that compressing embeddings, in a minimally supervised manner using an
autoencoder's hidden representation, can mitigate overfitting and improve
performance on noisy tasks, such as financial return prediction; but that
compression reduces performance on tasks that have high causal dependencies
between the input and target data. Our results suggest that the success of
interpretable compressed representations such as sentiment may be due to a
regularising effect.
|
2502.02201
|
Can You Move These Over There? An LLM-based VR Mover for Supporting
Object Manipulation
|
cs.HC cs.AI cs.CL cs.ET
|
In our daily lives, we can naturally convey instructions for the spatial
manipulation of objects using words and gestures. Transposing this form of
interaction into virtual reality (VR) object manipulation can be beneficial. We
propose VR Mover, an LLM-empowered solution that can understand and interpret
the user's vocal instruction to support object manipulation. By simply pointing
and speaking, the LLM can manipulate objects without structured input. Our user
study demonstrates that VR Mover improves usability, overall experience
and performance on multi-object manipulation, while also reducing workload and
arm fatigue. Users prefer the proposed natural interface for broad movements
and may complementarily switch to gizmos or virtual hands for finer
adjustments. These findings are believed to contribute to design implications
for future LLM-based object manipulation interfaces, highlighting the potential
for more intuitive and efficient user interactions in VR environments.
|
2502.02202
|
Multi-level Supervised Contrastive Learning
|
cs.LG
|
Contrastive learning is a well-established paradigm in representation
learning. The standard framework of contrastive learning minimizes the distance
between "similar" instances and maximizes the distance between dissimilar ones
in the projection space, disregarding the various aspects of similarity that
can exist between two samples. Current methods rely on a single projection
head, which fails to capture the full complexity of different aspects of a
sample, leading to suboptimal performance, especially in scenarios with limited
training data. In this paper, we present a novel supervised contrastive
learning method in a unified framework called multi-level contrastive learning
(MLCL), which can be applied to both multi-label and hierarchical classification
tasks. The key strength of the proposed method is the ability to capture
similarities between samples across different labels and/or hierarchies using
multiple projection heads. Extensive experiments on text and image datasets
demonstrate that the proposed approach outperforms state-of-the-art contrastive
learning methods.
|
2502.02204
|
Backcasting Policies in Transport Systems as an Optimal Control Problem:
An Example with Electric Vehicle Purchase Incentives
|
math.OC cs.SY eess.SY
|
This study represents a first attempt to build a backcasting methodology to
identify the optimal policy roadmaps in transport systems. Specifically, it
considers a passenger car fleet subsystem, modelling its evolution and
greenhouse gas emissions. The policy decision under consideration is the
monetary incentive for the purchase of electric vehicles. This process is cast
as an optimal control problem whose objective is to minimize the total state
budget while reaching a desired CO$_2$ target. A case study applied to
Metropolitan France is presented to illustrate the approach. Alternative
policy scenarios are also analyzed.
|
2502.02205
|
From Uncertain to Safe: Conformal Fine-Tuning of Diffusion Models for
Safe PDE Control
|
cs.LG
|
The application of deep learning for partial differential equation
(PDE)-constrained control is gaining increasing attention. However, existing
methods rarely consider safety requirements crucial in real-world applications.
To address this limitation, we propose Safe Diffusion Models for PDE Control
(SafeDiffCon), which introduce the uncertainty quantile as model uncertainty
quantification to achieve optimal control under safety constraints through both
post-training and inference phases. Firstly, our approach post-trains a
pre-trained diffusion model to generate control sequences that better satisfy
safety constraints while achieving improved control objectives via a reweighted
diffusion loss, which incorporates the uncertainty quantile estimated using
conformal prediction. Secondly, during inference, the diffusion model
dynamically adjusts both its generation process and parameters through
iterative guidance and fine-tuning, conditioned on control targets while
simultaneously integrating the estimated uncertainty quantile. We evaluate
SafeDiffCon on three control tasks: 1D Burgers' equation, 2D incompressible
fluid, and a controlled nuclear fusion problem. Results demonstrate that
SafeDiffCon is the only method that satisfies all safety constraints, whereas
other classical and deep learning baselines fail. Furthermore, while adhering
to safety constraints, SafeDiffCon achieves the best control performance.
|
2502.02206
|
Target-aware Bayesian inference via generalized thermodynamic
integration
|
stat.CO cs.CE stat.ME
|
In Bayesian inference, we are usually interested in the numerical
approximation of integrals that are posterior expectations or marginal
likelihoods (a.k.a., Bayesian evidence). In this paper, we focus on the
computation of the posterior expectation of a function $f(\mathbf{x})$. We
consider a \emph{target-aware} scenario where $f(\mathbf{x})$ is known in
advance and can be exploited in order to improve the estimation of the
posterior expectation. In
this scenario, the task can be reduced to performing several independent
marginal likelihood estimation tasks. The idea of using a path of tempered
posterior distributions has been widely applied in the literature for the
computation of marginal likelihoods. Thermodynamic integration, path sampling,
and annealed importance sampling are well-known examples of algorithms
belonging to this
family of methods. In this work, we introduce a generalized thermodynamic
integration (GTI) scheme which is able to perform a target-aware Bayesian
inference, i.e., GTI can approximate the posterior expectation of a given
function. Several scenarios of application of GTI are discussed and different
numerical simulations are provided.
|
2502.02207
|
Human-Aided Trajectory Planning for Automated Vehicles through
Teleoperation and Arbitration Graphs
|
cs.RO cs.HC
|
Teleoperation enables remote human support of automated vehicles in scenarios
where the automation is not able to find an appropriate solution. Remote
assistance concepts, in which operators provide discrete inputs to aid specific
automation modules like planning, are gaining interest due to their reduced
workload on the human remote operator and improved safety. However, these
concepts are challenging to implement and maintain due to their deep
integration and interaction with the automated driving system. In this paper,
we propose a solution to facilitate the implementation of remote assistance
concepts that intervene on planning level and extend the operational design
domain of the vehicle at runtime. Using arbitration graphs, a modular
decision-making framework, we integrate remote assistance into an existing
automated driving system without modifying the original software components.
Our simulation-based implementation demonstrates this approach in two use cases,
allowing operators to adjust planner constraints and enable trajectory
generation beyond nominal operational design domains.
|
2502.02209
|
On the Expressivity of Selective State-Space Layers: A Multivariate
Polynomial Approach
|
cs.LG
|
Recent advances in efficient sequence modeling have introduced selective
state-space layers, a key component of the Mamba architecture, which have
demonstrated remarkable success in a wide range of NLP and vision tasks. While
Mamba's empirical performance has matched or surpassed SoTA transformers on
such diverse benchmarks, the theoretical foundations underlying its powerful
representational capabilities remain less explored. In this work, we
investigate the expressivity of selective state-space layers using multivariate
polynomials, and prove that they surpass linear transformers in expressiveness.
Consequently, our findings reveal that Mamba offers superior representational
power over linear attention-based models for long sequences, without
sacrificing generalization. Our theoretical insights are validated by a
comprehensive set of empirical experiments on various datasets.
|
2502.02215
|
InterLCM: Low-Quality Images as Intermediate States of Latent
Consistency Models for Effective Blind Face Restoration
|
cs.CV
|
Diffusion priors have been used for blind face restoration (BFR) by
fine-tuning diffusion models (DMs) on restoration datasets to recover
low-quality images. However, the naive application of DMs presents several key
limitations: (i) the diffusion prior has inferior semantic consistency (e.g.,
identity, structure, and color), increasing the difficulty of optimizing the BFR
model; (ii) reliance on hundreds of denoising iterations prevents effective
cooperation with perceptual losses, which are crucial for faithful
restoration. Observing that the latent consistency model (LCM) learns
consistency noise-to-data mappings on the ODE-trajectory and therefore shows
more semantic consistency in the subject identity, structural information and
color preservation, we propose InterLCM to leverage the LCM for its superior
semantic consistency and efficiency to counter the above issues. Treating
low-quality images as the intermediate state of LCM, InterLCM achieves a
balance between fidelity and quality by starting from earlier LCM steps. LCM
also allows the integration of perceptual loss during training, leading to
improved restoration quality, particularly in real-world scenarios. To mitigate
structural and semantic uncertainties, InterLCM incorporates a Visual Module to
extract visual features and a Spatial Encoder to capture spatial details,
enhancing the fidelity of restored images. Extensive experiments demonstrate
that InterLCM outperforms existing approaches in both synthetic and real-world
datasets while also achieving faster inference speed.
|
2502.02216
|
Flatten Graphs as Sequences: Transformers are Scalable Graph Generators
|
cs.LG stat.ML
|
We introduce AutoGraph, a novel autoregressive framework for generating large
attributed graphs using decoder-only transformers. At the core of our approach
is a reversible "flattening" process that transforms graphs into random
sequences. By sampling and learning from these sequences, AutoGraph enables
transformers to model and generate complex graph structures in a manner akin to
natural language. In contrast to diffusion models that rely on computationally
intensive node features, our approach operates exclusively on these sequences.
The sampling complexity and sequence length scale linearly with the number of
edges, making AutoGraph highly scalable for generating large sparse graphs.
Empirically, AutoGraph achieves state-of-the-art performance across diverse
synthetic and molecular graph generation benchmarks, while delivering a
100-fold generation speedup and a 3-fold training speedup compared to leading diffusion
models. Additionally, it demonstrates promising transfer capabilities and
supports substructure-conditioned generation without additional fine-tuning. By
extending language modeling techniques to graph generation, this work paves the
way for developing graph foundation models.
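As a toy illustration of a reversible graph-to-sequence "flattening" (a hypothetical sketch of the general idea, not AutoGraph's actual sequence construction), one can serialize an undirected graph under a random node ordering and invert the encoding exactly, with sequence length linear in the number of edges:

```python
# Illustrative sketch of a reversible graph "flattening": serialize an
# undirected graph as a token sequence under a random node ordering and
# invert it exactly. AutoGraph's actual construction is more elaborate;
# this only demonstrates the encode/decode round-trip idea.
import random

def flatten(n_nodes, edges, seed=0):
    """Return a flat token sequence: a node permutation header followed
    by relabeled edge pairs. Length is linear in the number of edges."""
    rng = random.Random(seed)
    perm = list(range(n_nodes))
    rng.shuffle(perm)
    relabel = {old: new for new, old in enumerate(perm)}
    seq = perm[:]  # header: the permutation itself makes decoding exact
    for u, v in edges:
        seq.extend(sorted((relabel[u], relabel[v])))
    return seq

def unflatten(n_nodes, seq):
    """Invert `flatten`, recovering the original edge set."""
    perm = seq[:n_nodes]
    edges = set()
    for i in range(n_nodes, len(seq), 2):
        u, v = perm[seq[i]], perm[seq[i + 1]]
        edges.add(tuple(sorted((u, v))))
    return edges
```

Sampling a fresh random ordering per training example is what turns one graph into many sequences to learn from.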
|
2502.02218
|
Digital Fairness Algorithms for Satellite Uplink NOMA
|
cs.IT eess.SP math.IT
|
Achieving digital fairness by using NOMA is one of the more pressing issues
in modern wireless communication systems for 5G/6G networks. This is
particularly true in the case of satellite uplink systems supporting a
population of IoT wireless devices scattered in a wide coverage area. In this
scenario, the variability of the link budget across space and time increases
the challenges of preventing a situation where only a subset of network users
can transmit while others are left unable to do so. This work investigates the
characteristics of an uplink NOMA system with the goal of equalizing the
achievable rate of the IoT network subscribers. Within the context of
single-slot NOMA, two key outcomes are achieved: the determination of the
optimal SIC ordering at the receiver and the exploration of power moderation,
coordinated by the receiver, to maximize the minimum user rate. In the context
of multi-slot NOMA, which is particularly relevant to the satellite scenario
under consideration, a user rate equalization algorithm is proposed and its
performance is analyzed numerically. The trade-off between network performance,
measured in terms of user rates, and complexity, determined by the number of
SIC steps implemented at the receiver, is thoroughly evaluated for the
satellite scenario under consideration.
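To make the role of the SIC ordering concrete, the sketch below (a textbook-style illustration, not the paper's optimization algorithm) computes per-user uplink rates for a given decoding order: the user decoded k-th sees only the not-yet-decoded users as interference, which is exactly what the choice of ordering and power moderation trade off when equalizing the minimum rate.

```python
# Per-user achievable rates in uplink NOMA under a given SIC decoding
# order (illustrative sketch; the paper's rate-equalization algorithm
# chooses ordering and powers on top of this basic model).
import math

def sic_rates(powers, gains, order, noise=1.0):
    """powers, gains: per-user transmit power and channel gain.
    order: user indices in SIC decoding order at the receiver."""
    rates = {}
    for k, u in enumerate(order):
        # Users decoded after u are still superimposed: interference.
        interference = sum(powers[v] * gains[v] for v in order[k + 1:])
        sinr = powers[u] * gains[u] / (noise + interference)
        rates[u] = math.log2(1.0 + sinr)
    return rates
```

With equal gains, decoding the stronger-powered user first gives the later-decoded weak user an interference-free channel, which is the lever the equalization schemes exploit.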
|
2502.02221
|
Bias Detection via Maximum Subgroup Discrepancy
|
cs.LG cs.AI stat.ML
|
Bias evaluation is fundamental to trustworthy AI, both in terms of checking
data quality and in terms of checking the outputs of AI systems. In testing
data quality, for example, one may study the distance of a given dataset, viewed
as a distribution, from a given ground-truth reference dataset. However,
classical metrics, such as the Total Variation and the Wasserstein distances,
are known to have high sample complexities and, therefore, may fail to provide
meaningful distinction in many practical scenarios.
In this paper, we propose a new notion of distance, the Maximum Subgroup
Discrepancy (MSD). In this metric, two distributions are close if, roughly,
discrepancies are low for all feature subgroups. While the number of subgroups
may be exponential, we show that the sample complexity is linear in the number
of features, thus making it feasible for practical applications. Moreover, we
provide a practical algorithm for the evaluation of the distance, based on
Mixed-integer optimization (MIO). We also note that the proposed distance is
easily interpretable, thus providing clearer paths to fixing the biases once
they have been identified. It also provides guarantees for all subgroups.
Finally, we empirically evaluate, compare with other metrics, and demonstrate
the above properties of MSD on real-world datasets.
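As a naive baseline for intuition, a subgroup-discrepancy distance can be evaluated by brute-force enumeration of all subgroups defined by fixing feature values; the sketch below (a hypothetical simplification, not the paper's MSD definition or its MIO algorithm, which avoids this exponential enumeration) reports the largest gap in subgroup frequency between two datasets.

```python
# Brute-force sketch of a subgroup-discrepancy distance: the maximum,
# over all subgroups defined by fixing a subset of categorical features,
# of the difference in empirical subgroup frequency between two
# datasets. Exponential in the number of features; the paper's
# mixed-integer optimization approach sidesteps this enumeration.
from itertools import product

def subgroup_discrepancy(data_a, data_b, feature_values):
    """data_a, data_b: lists of dicts mapping feature name -> value.
    feature_values: dict mapping feature name -> possible values."""
    features = list(feature_values)
    best = 0.0
    for mask in product([False, True], repeat=len(features)):
        fixed = [f for f, m in zip(features, mask) if m]
        if not fixed:
            continue  # the trivial "everyone" subgroup has gap 0
        for combo in product(*(feature_values[f] for f in fixed)):
            cond = dict(zip(fixed, combo))
            def frac(data):
                hits = sum(all(row[f] == v for f, v in cond.items())
                           for row in data)
                return hits / len(data)
            best = max(best, abs(frac(data_a) - frac(data_b)))
    return best
```

Because the witness is a concrete subgroup, a large value immediately names which subpopulation is over- or under-represented, illustrating the interpretability claim.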
|
2502.02222
|
Self-dual codes and LCD codes in sum-rank metric
|
cs.IT math.IT
|
Sum-rank codes are an important class of codes which can be utilized for
linear network coding, space-time coding and distributed storage. Based on the
duality theory of sum-rank codes [Byrne, Gluesing-Luerssen, Ravagnani, IEEE
TIT, 2021], it is interesting to study self-dual sum-rank codes and linear
complementary dual (LCD) sum-rank codes. First, we characterize the dual codes
of some sum-rank codes. Then we define self-dual sum-rank codes and LCD
sum-rank codes, provide some basic properties of such codes and then obtain two
methods of constructing self-dual sum-rank codes and LCD sum-rank codes from
Euclidean self-dual codes and Euclidean LCD codes. Some particular examples
especially some cyclic self-dual sum-rank codes and cyclic LCD sum-rank codes
with good parameters are also provided. Finally, we prove that there exist
asymptotically good self-dual sum-rank codes.
|
2502.02223
|
SurvHive: a package to consistently access multiple survival-analysis
packages
|
q-bio.QM cs.LG
|
Survival analysis, a foundational tool for modeling time-to-event data, has
seen growing integration with machine learning (ML) approaches to handle the
complexities of censored data and time-varying risks. Despite these advances,
leveraging state-of-the-art survival models remains a challenge due to the
fragmented nature of existing implementations, which lack standardized
interfaces and require extensive preprocessing. We introduce SurvHive, a
Python-based framework designed to unify survival analysis methods within a
coherent and extensible interface modeled on scikit-learn. SurvHive integrates
classical statistical models with cutting-edge deep learning approaches,
including transformer-based architectures and parametric survival models. Using
a consistent API, SurvHive simplifies model training, evaluation, and
optimization, significantly reducing the barrier to entry for ML practitioners
exploring survival analysis. The package includes enhanced support for
hyper-parameter tuning, time-dependent risk evaluation metrics, and
cross-validation strategies tailored to censored data. With its extensibility
and focus on usability, SurvHive provides a bridge between survival analysis
and the broader ML community, facilitating advancements in time-to-event
modeling across domains. The SurvHive code and documentation are available
freely at https://github.com/compbiomed-unito/survhive.
|
2502.02225
|
Exploring the latent space of diffusion models directly through singular
value decomposition
|
cs.CV cs.AI cs.MM
|
Despite the groundbreaking success of diffusion models in generating
high-fidelity images, their latent space remains relatively under-explored,
even though it holds significant promise for enabling versatile and
interpretable image editing capabilities. The complicated denoising trajectory
and high dimensionality of the latent space make it extremely challenging to
interpret. Existing methods mainly explore the feature space of U-Net in
Diffusion Models (DMs) instead of the latent space itself. In contrast, we
directly investigate the latent space via Singular Value Decomposition (SVD)
and discover three useful properties that can be used to control generation
results without requiring data collection, while maintaining the identity
fidelity of generated images. Based on these properties, we propose a novel image
editing framework that is capable of learning arbitrary attributes from one
pair of latent codes determined by text prompts in Stable Diffusion Models. To
validate our approach, extensive experiments are conducted to demonstrate its
effectiveness and flexibility in image editing. We will release our codes soon
to foster further research and applications in this area.
|
2502.02229
|
A Robust Remote Photoplethysmography Method
|
cs.CV
|
Remote photoplethysmography (rPPG) is a method for measuring a subject's heart
rate remotely using a camera. Factors such as subject movement, ambient light
level, and makeup complicate such measurements by distorting the observed
pulse. Recent works on this topic have proposed a variety of approaches for
accurately measuring heart rate in humans; however, these methods were tested in
ideal conditions, where the subject does not make significant movements and all
measurements are taken at the same level of illumination. In more realistic
conditions these methods suffer from decreased accuracy. This study proposes a
more robust method that is less susceptible to distortions and has minimal
hardware requirements. The proposed method uses a combination of mathematical
transforms to calculate the subject's heart rate. It performs best when used
with a camera that has been modified by removing its infrared filter, although
using an unmodified camera is also possible. The method was tested on 26 videos
taken from 19 volunteers of varying gender and age. The obtained results were
compared to reference data, and the average mean absolute error was found to be
1.95 beats per minute, which is noticeably better than the results from
previous works. The remote photoplethysmography method proposed in the present
article is more resistant to distortions than methods from previous
publications and thus allows one to remotely and accurately measure a
subject's heart rate without imposing any significant limitations on the
subject's behavior.
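The abstract does not specify which transforms are combined, but a generic rPPG back end can be sketched as follows (an illustrative pipeline under our own simplifying assumptions, not the paper's method): average a colour channel over the face region per frame, remove the DC component, and pick the dominant frequency in the physiological band.

```python
# Generic rPPG-style frequency estimator (illustrative sketch only):
# given one brightness sample per video frame, detrend the signal and
# select the strongest DFT bin within a plausible heart-rate band.
import math

def estimate_bpm(signal, fps, lo_bpm=40, hi_bpm=180):
    """Return the dominant frequency of `signal` in beats per minute,
    searched within [lo_bpm, hi_bpm]."""
    n = len(signal)
    mean = sum(signal) / n
    x = [s - mean for s in signal]            # remove DC offset
    best_bpm, best_power = lo_bpm, -1.0
    for k in range(1, n // 2):                # brute-force DFT bins
        bpm = k * fps / n * 60.0
        if not (lo_bpm <= bpm <= hi_bpm):
            continue
        re = sum(x[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = sum(x[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        power = re * re + im * im
        if power > best_power:
            best_power, best_bpm = power, bpm
    return best_bpm
```

Robust methods of the kind the paper describes would additionally suppress motion and illumination artifacts before this frequency-selection step.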
|
2502.02232
|
Combinatorial Optimization Perspective based Framework for
Multi-behavior Recommendation
|
cs.IR
|
In real-world recommendation scenarios, users engage with items through
various types of behaviors. Leveraging diversified user behavior information
for learning can enhance the recommendation of target behaviors (e.g., buy), as
demonstrated by recent multi-behavior methods. The mainstream multi-behavior
recommendation framework consists of two steps: fusion and prediction. Recent
approaches utilize graph neural networks for multi-behavior fusion and employ
multi-task learning paradigms for joint optimization in the prediction step,
achieving significant success. However, these methods have limited perspectives
on multi-behavior fusion, which leads to inaccurate capture of user behavior
patterns in the fusion step. Moreover, when using multi-task learning for
prediction, the relationship between the target task and auxiliary tasks is not
sufficiently coordinated, resulting in negative information transfer. To
address these problems, we propose a novel multi-behavior recommendation
framework based on the combinatorial optimization perspective, named COPF.
Specifically, we treat multi-behavior fusion as a combinatorial optimization
problem, imposing different constraints at various stages of each behavior to
restrict the solution space, thus significantly enhancing fusion efficiency
(COGCN). In the prediction step, we improve both forward and backward
propagation during the generation and aggregation of multiple experts to
mitigate negative transfer caused by differences in both feature and label
distributions (DFME). Comprehensive experiments on three real-world datasets
indicate the superiority of COPF. Further analyses also validate the
effectiveness of the COGCN and DFME modules. Our code is available at
https://github.com/1918190/COPF.
|
2502.02233
|
Variance-Adjusted Cosine Distance as Similarity Metric
|
stat.ML cs.LG
|
Cosine similarity is a popular distance measure that measures the similarity
between two vectors in the inner product space. It is widely used in many data
classification algorithms such as k-nearest neighbors and clustering. This study
demonstrates the limitations of applying cosine similarity. In particular,
it shows that the traditional cosine similarity metric is valid only
in Euclidean space, whereas the original data resides in a random-variable
space. When there is variance and correlation in the data, cosine distance
is not a completely accurate measure of similarity. While new similarity and
distance metrics have been developed to make up for the limitations of cosine
similarity, these metrics are used as substitutes to cosine distance, and do
not make modifications to cosine distance to overcome its limitations.
Subsequently, we propose a modified cosine similarity metric, where cosine
distance is adjusted by variance-covariance of the data. Application of
variance-adjusted cosine distance gives better similarity performance compared
to traditional cosine distance. KNN modelling on the Wisconsin Breast Cancer
Dataset is performed using both traditional and modified cosine similarity
measures and compared. The modified formula shows 100% test accuracy on the
data.
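One plausible instantiation of the idea (the paper's exact variance-covariance adjustment may differ; this sketch uses only a diagonal adjustment) is to divide each coordinate by its feature standard deviation before taking the cosine, so that high-variance features no longer dominate the angle:

```python
# Variance-adjusted cosine distance sketch: scale each coordinate by
# the inverse feature standard deviation (diagonal-covariance
# whitening) before computing the cosine. Illustrative only; the
# paper's modification may use the full variance-covariance matrix.
import math

def cosine_distance(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return 1.0 - dot / (nu * nv)

def variance_adjusted_cosine(u, v, stds):
    """Cosine distance after per-feature standardization by `stds`."""
    uw = [a / s for a, s in zip(u, stds)]
    vw = [b / s for b, s in zip(v, stds)]
    return cosine_distance(uw, vw)
```

With unit standard deviations this reduces to the ordinary cosine distance, so the adjustment is a strict generalization rather than a substitute metric.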
|
2502.02234
|
Mask-informed Deep Contrastive Incomplete Multi-view Clustering
|
cs.CV cs.LG
|
Multi-view clustering (MvC) utilizes information from multiple views to
uncover the underlying structures of data. Despite significant advancements in
MvC, mitigating the impact of missing samples in specific views on the
integration of knowledge from different views remains a critical challenge.
This paper proposes a novel Mask-informed Deep Contrastive Incomplete
Multi-view Clustering (Mask-IMvC) method, which elegantly identifies a
view-common representation for clustering. Specifically, we introduce a
mask-informed fusion network that aggregates incomplete multi-view information
while considering the observation status of samples across various views as a
mask, thereby reducing the adverse effects of missing values. Additionally, we
design a prior knowledge-assisted contrastive learning loss that boosts the
representation capability of the aggregated view-common representation by
injecting neighborhood information of samples from different views. Finally,
extensive experiments are conducted to demonstrate the superiority of the
proposed Mask-IMvC method over state-of-the-art approaches across multiple MvC
datasets, both in complete and incomplete scenarios.
|
2502.02238
|
Using ChatGPT to refine draft conceptual schemata in supply-driven
design of multidimensional cubes
|
cs.DB cs.SE
|
Refinement is a critical step in supply-driven conceptual design of
multidimensional cubes because it can hardly be automated. In fact, it includes
steps such as the labeling of attributes as descriptive and the removal of
uninteresting attributes, thus relying on the end-users' requirements on the
one hand, and on the semantics of measures, dimensions, and attributes on the
other. As a consequence, it is normally carried out manually by designers in
close collaboration with end-users. The goal of this work is to check whether
LLMs can act as facilitators for the refinement task, so as to let it be
carried out entirely -- or mostly -- by end-users. The Dimensional Fact Model
is the target formalism for our study; as a representative LLM, we use
ChatGPT's model GPT-4o. To achieve our goal, we formulate three research
questions aimed at (i) understanding the basic competences of ChatGPT in
multidimensional modeling; (ii) understanding the basic competences of ChatGPT
in refinement; and (iii) investigating if the latter can be improved via prompt
engineering. The results of our experiments show that, indeed, a careful prompt
engineering can significantly improve the accuracy of refinement, and that the
residual errors can quickly be fixed via one additional prompt. However, we
conclude that, at present, some involvement of designers in refinement is still
necessary to ensure the validity of the refined schemata.
|
2502.02247
|
Rotation-Adaptive Point Cloud Domain Generalization via Intricate
Orientation Learning
|
cs.CV cs.AI cs.LG
|
The vulnerability of 3D point cloud analysis to unpredictable rotations poses
an open yet challenging problem: orientation-aware 3D domain generalization.
Cross-domain robustness and adaptability of 3D representations are crucial but
not easily achieved through rotation augmentation. Motivated by the inherent
advantages of intricate orientations in enhancing generalizability, we propose
an innovative rotation-adaptive domain generalization framework for 3D point
cloud analysis. Our approach aims to alleviate orientational shifts by
leveraging intricate samples in an iterative learning process. Specifically, we
identify the most challenging rotation for each point cloud and construct an
intricate orientation set by optimizing intricate orientations. Subsequently,
we employ an orientation-aware contrastive learning framework that incorporates
an orientation consistency loss and a margin separation loss, enabling
effective learning of categorically discriminative and generalizable features
with rotation consistency. Extensive experiments and ablations conducted on 3D
cross-domain benchmarks firmly establish the state-of-the-art performance of
our proposed approach in the context of orientation-aware 3D domain
generalization.
|
2502.02249
|
Conversation AI Dialog for Medicare powered by Finetuning and Retrieval
Augmented Generation
|
cs.CL cs.AI
|
Large language models (LLMs) have shown impressive capabilities in natural
language processing tasks, including dialogue generation. This research aims to
conduct a novel comparative analysis of two prominent techniques, fine-tuning
with LoRA (Low-Rank Adaptation) and the Retrieval-Augmented Generation (RAG)
framework, in the context of doctor-patient chat conversations with multiple
datasets of mixed medical domains. The analysis involves three state-of-the-art
models: Llama-2, GPT, and the LSTM model. Employing real-world doctor-patient
dialogues, we comprehensively evaluate the performance of models, assessing key
metrics such as language quality (perplexity, BLEU score), factual accuracy
(fact-checking against medical knowledge bases), adherence to medical
guidelines, and overall human judgments (coherence, empathy, safety). The
findings provide insights into the strengths and limitations of each approach,
shedding light on their suitability for healthcare applications. Furthermore,
the research investigates the robustness of the models in handling diverse
patient queries, ranging from general health inquiries to specific medical
conditions. The impact of domain-specific knowledge integration is also
explored, highlighting the potential for enhancing LLM performance through
targeted data augmentation and retrieval strategies.
|
2502.02257
|
UNIP: Rethinking Pre-trained Attention Patterns for Infrared Semantic
Segmentation
|
cs.CV
|
Pre-training techniques significantly enhance the performance of semantic
segmentation tasks with limited training data. However, the efficacy under a
large domain gap between pre-training (e.g. RGB) and fine-tuning (e.g.
infrared) remains underexplored. In this study, we first benchmark the infrared
semantic segmentation performance of various pre-training methods and reveal
several phenomena distinct from the RGB domain. Next, our layerwise analysis of
pre-trained attention maps uncovers that: (1) There are three typical attention
patterns (local, hybrid, and global); (2) Pre-training tasks notably influence
the pattern distribution across layers; (3) The hybrid pattern is crucial for
semantic segmentation as it attends to both nearby and foreground elements; (4)
The texture bias impedes model generalization in infrared tasks. Building on
these insights, we propose UNIP, a UNified Infrared Pre-training framework, to
enhance the pre-trained model performance. This framework uses the
hybrid-attention distillation NMI-HAD as the pre-training target, a large-scale
mixed dataset InfMix for pre-training, and a last-layer feature pyramid network
LL-FPN for fine-tuning. Experimental results show that UNIP outperforms various
pre-training methods by up to 13.5\% in average mIoU on three infrared
segmentation tasks, evaluated using fine-tuning and linear probing metrics.
UNIP-S achieves performance on par with MAE-L while requiring only 1/10 of the
computational cost. Furthermore, UNIP significantly surpasses state-of-the-art
(SOTA) infrared or RGB segmentation methods and demonstrates broad potential
for application in other modalities, such as RGB and depth. Our code is
available at https://github.com/casiatao/UNIP.
|
2502.02260
|
Adversarial ML Problems Are Getting Harder to Solve and to Evaluate
|
cs.LG cs.CR
|
In the past decade, considerable research effort has been devoted to securing
machine learning (ML) models that operate in adversarial settings. Yet,
progress has been slow even for simple "toy" problems (e.g., robustness to
small adversarial perturbations) and is often hindered by non-rigorous
evaluations. Today, adversarial ML research has shifted towards studying
larger, general-purpose language models. In this position paper, we argue that
the situation is now even worse: in the era of LLMs, the field of adversarial
ML studies problems that are (1) less clearly defined, (2) harder to solve, and
(3) even more challenging to evaluate. As a result, we caution that yet another
decade of work on adversarial ML may fail to produce meaningful progress.
|
2502.02265
|
Adviser-Actor-Critic: Eliminating Steady-State Error in Reinforcement
Learning Control
|
cs.LG cs.AI
|
High-precision control tasks present substantial challenges for reinforcement
learning (RL) algorithms, frequently resulting in suboptimal performance
attributed to network approximation inaccuracies and inadequate sample
quality. These issues are exacerbated when the task requires the agent to
achieve a precise goal state, as is common in robotics and other real-world
applications. We introduce Adviser-Actor-Critic (AAC), designed to address the
precision control dilemma by combining the precision of feedback control theory
with the adaptive learning capability of RL, featuring an Adviser that
mentors the actor to refine control actions, thereby enhancing the precision of
goal attainment. Finally, through benchmark tests, AAC outperformed standard RL
algorithms in precision-critical, goal-conditioned tasks, demonstrating its
high precision, reliability, and robustness. Code is available at:
https://anonymous.4open.science/r/Adviser-Actor-Critic-8AC5.
|
2502.02269
|
Survey of Quantization Techniques for On-Device Vision-based Crack
Detection
|
cs.CV cs.LG
|
Structural Health Monitoring (SHM) ensures the safety and longevity of
infrastructure by enabling timely damage detection. Vision-based crack
detection, combined with UAVs, addresses the limitations of traditional
sensor-based SHM methods but requires the deployment of efficient deep learning
models on resource-constrained devices. This study evaluates two lightweight
convolutional neural network models, MobileNetV1x0.25 and MobileNetV2x0.5,
across TensorFlow, PyTorch, and Open Neural Network Exchange platforms using
three quantization techniques: dynamic quantization, post-training quantization
(PTQ), and quantization-aware training (QAT). Results show that QAT
consistently achieves near-floating-point accuracy, such as an F1-score of
0.8376 for MBNV2x0.5 with Torch-QAT, while maintaining efficient resource
usage. PTQ significantly reduces memory and energy consumption but suffers from
accuracy loss, particularly in TensorFlow. Dynamic quantization preserves
accuracy but faces deployment challenges on PyTorch. By leveraging QAT, this
work enables real-time, low-power crack detection on UAVs, enhancing safety,
scalability, and cost-efficiency in SHM applications, while providing insights
into balancing accuracy and efficiency across different platforms for
autonomous inspections.
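The mechanics common to the three surveyed techniques can be illustrated with a minimal affine quantizer for a single weight tensor (a framework-free sketch; TensorFlow, PyTorch, and ONNX each implement this per layer, and QAT additionally simulates it during training):

```python
# Minimal affine post-training quantization sketch: map floats to
# 8-bit integers with a scale and zero point, then dequantize.
# Illustrative only; real frameworks handle per-channel scales,
# calibration data, and fused operators.
def quantize(weights, num_bits=8):
    lo, hi = min(weights), max(weights)
    qmin, qmax = 0, 2 ** num_bits - 1
    scale = (hi - lo) / (qmax - qmin) or 1.0  # guard constant tensors
    zero_point = round(qmin - lo / scale)
    q = [max(qmin, min(qmax, round(w / scale) + zero_point))
         for w in weights]
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    return [(qi - zero_point) * scale for qi in q]
```

The reconstruction error is bounded by roughly one quantization step, which is why PTQ accuracy degrades when activations have wide or skewed ranges and why QAT, which lets training compensate for this rounding, recovers near-floating-point accuracy.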
|
2502.02270
|
Exact Sequence Classification with Hardmax Transformers
|
cs.LG math.OC stat.ML
|
We prove that hardmax attention transformers perfectly classify datasets of
$N$ labeled sequences in $\mathbb{R}^d$, $d\geq 2$. Specifically, given $N$
sequences with an arbitrary but finite length in $\mathbb{R}^d$, we construct a
transformer with $\mathcal{O}(N)$ blocks and $\mathcal{O}(Nd)$ parameters
perfectly classifying this dataset. Our construction achieves the best
complexity estimate to date, independent of the length of the sequences, by
innovatively alternating feed-forward and self-attention layers and by
capitalizing on the clustering effect inherent to the latter. Our novel
constructive method also uses low-rank parameter matrices within the attention
mechanism, a common practice in real-life transformer implementations.
Consequently, our analysis holds twofold significance: it substantially
advances the mathematical theory of transformers and it rigorously justifies
their exceptional real-world performance in sequence classification tasks.
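A minimal hardmax self-attention step, which underlies the clustering effect the construction exploits, can be sketched as follows (using a plain inner-product score for illustration; the paper's parameterization with low-rank attention matrices is richer):

```python
# Hardmax self-attention sketch: each token attends only to the
# token(s) maximizing its attention score, here the raw inner product
# <x_i, x_j>. Tokens are pulled toward local "leaders", producing the
# clustering effect exploited for classification.
def hardmax_attention(tokens):
    """tokens: list of d-dimensional tuples. Each token is replaced by
    the average of the tokens achieving its maximal inner product."""
    def dot(u, v):
        return sum(a * b for a, b in zip(u, v))
    out = []
    for x in tokens:
        scores = [dot(x, y) for y in tokens]
        m = max(scores)
        winners = [y for s, y in zip(scores, tokens) if s == m]
        out.append(tuple(sum(c) / len(winners) for c in zip(*winners)))
    return out
```

Iterating such layers collapses tokens onto a few attractors, which the alternating feed-forward layers can then separate by label.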
|
2502.02275
|
A User's Guide to Sampling Strategies for Sliced Optimal Transport
|
cs.LG math.PR
|
This paper serves as a user's guide to sampling strategies for sliced optimal
transport. We provide reminders and additional regularity results on the Sliced
Wasserstein distance. We detail the construction methods, generation time
complexity, theoretical guarantees, and conditions for each strategy.
Additionally, we provide insights into their suitability for sliced optimal
transport in theory. Extensive experiments on both simulated and real-world
data offer a representative comparison of the strategies, culminating in
practical recommendations for their best usage.
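The simplest strategy among those a user might compare is plain i.i.d. Monte Carlo sampling of projection directions; a self-contained sketch of the resulting Sliced Wasserstein estimator (for equal-sized empirical distributions, with p = 2) is:

```python
# Monte Carlo Sliced Wasserstein sketch: project both samples onto
# random directions and average the 1D Wasserstein-2 distances.
# This is the baseline i.i.d. strategy; the guide surveys more
# refined sampling schemes with better guarantees.
import math, random

def sliced_wasserstein(xs, ys, n_proj=200, seed=0):
    """xs, ys: equal-sized lists of d-dimensional points (tuples)."""
    rng = random.Random(seed)
    d = len(xs[0])
    total = 0.0
    for _ in range(n_proj):
        # Uniform direction on the sphere via a normalized Gaussian.
        theta = [rng.gauss(0, 1) for _ in range(d)]
        norm = math.sqrt(sum(t * t for t in theta))
        theta = [t / norm for t in theta]
        px = sorted(sum(a * t for a, t in zip(x, theta)) for x in xs)
        py = sorted(sum(b * t for b, t in zip(y, theta)) for y in ys)
        # 1D W2^2 between equal-sized samples: mean squared gap
        # between order statistics.
        total += sum((a - b) ** 2 for a, b in zip(px, py)) / len(px)
    return math.sqrt(total / n_proj)
```

Each projection costs only a sort, which is why the choice of directions, rather than per-slice cost, dominates the estimator's quality.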
|
2502.02277
|
Error Distribution Smoothing: Advancing Low-Dimensional Imbalanced
Regression
|
cs.LG cs.AI
|
In real-world regression tasks, datasets frequently exhibit imbalanced
distributions, characterized by a scarcity of data in high-complexity regions
and an abundance in low-complexity areas. This imbalance presents significant
challenges for existing classification methods with clear class boundaries,
while highlighting a scarcity of approaches specifically designed for
imbalanced regression problems. To better address these issues, we introduce a
novel concept of Imbalanced Regression, which takes into account both the
complexity of the problem and the density of data points, extending beyond
traditional definitions that focus only on data density. Furthermore, we
propose Error Distribution Smoothing (EDS) as a solution to tackle imbalanced
regression, effectively selecting a representative subset from the dataset to
reduce redundancy while maintaining balance and representativeness. Through
several experiments, EDS has shown its effectiveness, and the related code and
dataset can be accessed at
https://anonymous.4open.science/r/Error-Distribution-Smoothing-762F.
|
2502.02279
|
A Revisit of Total Correlation in Disentangled Variational Auto-Encoder
with Partial Disentanglement
|
cs.LG q-bio.NC
|
A fully disentangled variational auto-encoder (VAE) aims to identify
disentangled latent components from observations. However, enforcing full
independence between all latent components may be too strict for certain
datasets. In some cases, multiple factors may be entangled together in a
non-separable manner, or a single independent semantic meaning could be
represented by multiple latent components within a higher-dimensional manifold.
To address such scenarios with greater flexibility, we develop the Partially
Disentangled VAE (PDisVAE), which generalizes the total correlation (TC) term
in fully disentangled VAEs to a partial correlation (PC) term. This framework
can handle group-wise independence and can naturally reduce to either the
standard VAE or the fully disentangled VAE. Validation through three synthetic
experiments demonstrates the correctness and practicality of PDisVAE. When
applied to real-world datasets, PDisVAE discovers valuable information that is
difficult to find using fully disentangled VAEs, implying its versatility and
effectiveness.
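For orientation, the standard total correlation term and a group-wise partial variant can be written as follows (the grouping notation is our illustrative reading of the abstract, not necessarily PDisVAE's exact PC definition):

```latex
% Total correlation: full independence across all latent components
\mathrm{TC}(z) \;=\; D_{\mathrm{KL}}\!\Big( q(z) \,\Big\|\, \textstyle\prod_{j=1}^{d} q(z_j) \Big)

% Partial correlation over a partition z = (z_{G_1}, \dots, z_{G_K}):
% independence is enforced only across groups, not within them
\mathrm{PC}(z) \;=\; D_{\mathrm{KL}}\!\Big( q(z) \,\Big\|\, \textstyle\prod_{k=1}^{K} q(z_{G_k}) \Big)
```

With d singleton groups the PC term recovers the fully disentangled TC objective, and with a single group K = 1 it vanishes, recovering the standard VAE, consistent with the reductions described above.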
|
2502.02283
|
GP-GS: Gaussian Processes for Enhanced Gaussian Splatting
|
cs.CV cs.AI
|
3D Gaussian Splatting has emerged as an efficient photorealistic novel view
synthesis method. However, its reliance on sparse Structure-from-Motion (SfM)
point clouds consistently compromises the scene reconstruction quality. To
address these limitations, this paper proposes a novel 3D reconstruction
framework Gaussian Processes Gaussian Splatting (GP-GS), where a multi-output
Gaussian Process model is developed to achieve adaptive and uncertainty-guided
densification of sparse SfM point clouds. Specifically, we propose a dynamic
sampling and filtering pipeline that adaptively expands the SfM point clouds by
leveraging GP-based predictions to infer new candidate points from the input 2D
pixels and depth maps. The pipeline utilizes uncertainty estimates to guide the
pruning of high-variance predictions, ensuring geometric consistency and
enabling the generation of dense point clouds. The densified point clouds
provide high-quality initial 3D Gaussians to enhance reconstruction
performance. Extensive experiments conducted on synthetic and real-world
datasets across various scales validate the effectiveness and practicality of
the proposed framework.
|
2502.02287
|
Adaptive Resource Allocation Optimization Using Large Language Models in
Dynamic Wireless Environments
|
eess.SY cs.LG cs.SY
|
Deep learning (DL) has made notable progress in addressing complex radio
access network control challenges that conventional analytic methods have
struggled to solve. However, DL has shown limitations in solving constrained
NP-hard problems often encountered in network optimization, such as those
involving quality of service (QoS) or discrete variables like user indices.
Current solutions rely on domain-specific architectures or heuristic
techniques, and a general DL approach for constrained optimization remains
undeveloped. Moreover, even minor changes in communication objectives demand
time-consuming retraining, limiting their adaptability to dynamic environments
where task objectives, constraints, environmental factors, and communication
scenarios frequently change. To address these challenges, we propose a large
language model for resource allocation optimizer (LLM-RAO), a novel approach
that harnesses the capabilities of LLMs to address the complex resource
allocation problem while adhering to QoS constraints. By employing a
prompt-based tuning strategy to flexibly convey ever-changing task descriptions
and requirements to the LLM, LLM-RAO demonstrates robust performance and
seamless adaptability in dynamic environments without requiring extensive
retraining. Simulation results reveal that LLM-RAO achieves up to a 40%
performance enhancement compared to conventional DL methods and up to an 80%
improvement over analytical approaches. Moreover, in scenarios with fluctuating
communication objectives, LLM-RAO attains up to 2.9 times the performance of
traditional DL-based networks.
|
2502.02289
|
Evalita-LLM: Benchmarking Large Language Models on Italian
|
cs.CL
|
We describe Evalita-LLM, a new benchmark designed to evaluate Large Language
Models (LLMs) on Italian tasks. The distinguishing and innovative features of
Evalita-LLM are the following: (i) all tasks are native Italian, avoiding
issues of translating from Italian and potential cultural biases; (ii) in
addition to well established multiple-choice tasks, the benchmark includes
generative tasks, enabling more natural interaction with LLMs; (iii) all tasks
are evaluated against multiple prompts, thereby mitigating model
sensitivity to specific prompts and allowing a fairer and more objective evaluation.
We propose an iterative methodology, where candidate tasks and candidate
prompts are validated against a set of LLMs used for development. We report
experimental results from the benchmark's development phase, and provide
performance statistics for several state-of-the-art LLMs.
|
2502.02290
|
FRAUD-RLA: A new reinforcement learning adversarial attack against
credit card fraud detection
|
cs.LG cs.AI
|
Adversarial attacks pose a significant threat to data-driven systems, and
researchers have spent considerable resources studying them. Despite its
economic relevance, credit card fraud detection has been largely overlooked by
this line of work. To address this gap, we propose a new threat model that
demonstrates the limitations of existing attacks and highlights the necessity
to investigate new approaches. We then design a new adversarial attack for
credit card fraud detection, employing reinforcement learning to bypass
classifiers. This attack, called FRAUD-RLA, is designed to maximize the
attacker's reward by optimizing the exploration-exploitation tradeoff while
requiring significantly less knowledge than competing attacks. Our
experiments, conducted on three different heterogeneous datasets and against
two fraud detection systems, indicate that FRAUD-RLA is effective, even
considering the severe limitations imposed by our threat model.
|
2502.02295
|
Intelligent Reflecting Surface Based Localization of Mixed Near-Field
and Far-Field Targets
|
eess.SP cs.IT math.IT
|
This paper considers an intelligent reflecting surface (IRS)-assisted
bi-static localization architecture for the sixth-generation (6G) integrated
sensing and communication (ISAC) network. The system consists of a transmit
user, a receive base station (BS), an IRS, and multiple targets in either the
far-field or near-field region of the IRS. In particular, we focus on the
challenging scenario where the line-of-sight (LOS) paths between targets and
the BS are blocked, such that the emitted orthogonal frequency division
multiplexing (OFDM) signals from the user reach the BS merely via the
user-target-IRS-BS path. Based on the signals received by the BS, our goal is
to localize the targets by estimating their relative positions to the IRS,
instead of to the BS. We show that subspace-based methods, such as the multiple
signal classification (MUSIC) algorithm, can be applied to the BS's received
signals to estimate the relative states from the targets to the IRS. To this
end, we create a virtual signal via combining user-target-IRS-BS channels over
various time slots. By applying MUSIC on such a virtual signal, we are able to
detect the far-field targets and the near-field targets, and estimate the
angle-of-arrivals (AOAs) and/or ranges from the targets to the IRS.
Furthermore, we theoretically verify that the proposed method can perfectly
estimate the relative states from the targets to the IRS in the ideal case with
infinite coherence blocks. Numerical results verify the effectiveness of our
proposed IRS-assisted localization scheme. Our paper demonstrates the potential
of employing passive anchors, i.e., IRSs, to improve the sensing coverage of
the active anchors, i.e., BSs.
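As generic context for the subspace method named above, the sketch below runs a textbook MUSIC angle-of-arrival estimate on an invented uniform linear array with synthetic snapshots; it is not the paper's IRS-assisted virtual-signal construction, and the array geometry, angles, and noise level are all assumptions:

```python
import numpy as np

def music_spectrum(R, steering, n_sources, angles):
    """Classic MUSIC pseudospectrum from a sample covariance matrix R."""
    eigvals, eigvecs = np.linalg.eigh(R)           # eigenvalues in ascending order
    En = eigvecs[:, : R.shape[0] - n_sources]      # noise subspace (smallest eigenvalues)
    return np.array([1.0 / np.linalg.norm(En.conj().T @ steering(t)) ** 2
                     for t in angles])

# Invented setup: uniform linear array, M = 8 half-wavelength-spaced elements.
M, K, T = 8, 2, 2000

def ula_steering(theta_deg):
    theta = np.deg2rad(theta_deg)
    return np.exp(1j * np.pi * np.arange(M) * np.sin(theta))

rng = np.random.default_rng(0)
true_angles = [-20.0, 30.0]
A = np.stack([ula_steering(t) for t in true_angles], axis=1)            # (M, K)
S = rng.standard_normal((K, T)) + 1j * rng.standard_normal((K, T))      # source signals
noise = 0.1 * (rng.standard_normal((M, T)) + 1j * rng.standard_normal((M, T)))
X = A @ S + noise
R = X @ X.conj().T / T                                                  # sample covariance
angles = np.linspace(-90.0, 90.0, 361)                                  # 0.5 deg grid
P = music_spectrum(R, ula_steering, K, angles)
# Sharp peaks of P appear near the true angles of arrival.
```

The same pseudospectrum scan generalizes to joint angle/range grids for near-field steering vectors, which is the regime the paper's mixed near-field/far-field setting addresses.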
|
2502.02300
|
Density Ratio Estimation with Conditional Probability Paths
|
cs.LG
|
Density ratio estimation in high dimensions can be reframed as integrating a
certain quantity, the time score, over probability paths which interpolate
between the two densities. In practice, the time score has to be estimated
based on samples from the two densities. However, existing methods for this
problem remain computationally expensive and can yield inaccurate estimates.
Inspired by recent advances in generative modeling, we introduce a novel
framework for time score estimation, based on a conditioning variable. Choosing
the conditioning variable judiciously enables a closed-form objective function.
We demonstrate that, compared to previous approaches, our approach results in
faster learning of the time score and competitive or better estimation
accuracies of the density ratio on challenging tasks. Furthermore, we establish
theoretical guarantees on the error of the estimated density ratio.
|
2502.02302
|
EdgeGFL: Rethinking Edge Information in Graph Feature Preference
Learning
|
cs.LG cs.AI
|
Graph Neural Networks (GNNs) have significant advantages in handling
non-Euclidean data and have been widely applied across various areas, thus
receiving increasing attention in recent years. The framework of GNN models
mainly includes the information propagation phase and the aggregation phase,
treating nodes and edges as information entities and propagation channels,
respectively. However, most existing GNN models face the challenge of
disconnection between node and edge feature information, as these models
typically treat the learning of edge and node features as independent tasks. To
address this limitation, we aim to develop an edge-empowered graph feature
preference learning framework that can capture edge embeddings to assist node
embeddings. By leveraging the learned multidimensional edge feature matrix, we
construct multi-channel filters to more effectively capture accurate node
features, thereby obtaining the non-local structural characteristics and
fine-grained high-order node features. Specifically, the inclusion of
multidimensional edge information enhances the functionality and flexibility of
the GNN model, enabling it to handle complex and diverse graph data more
effectively. Additionally, integrating relational representation learning into
the message passing framework allows graph nodes to receive more useful
information, thereby facilitating node representation learning. Finally,
experiments on four real-world heterogeneous graphs demonstrate the
effectiveness of the proposed model.
|
2502.02304
|
Comparative Analysis of FPGA and GPU Performance for Machine
Learning-Based Track Reconstruction at LHCb
|
hep-ex cs.DC cs.LG physics.ins-det
|
In high-energy physics, the increasing luminosity and detector granularity at
the Large Hadron Collider are driving the need for more efficient data
processing solutions. Machine Learning has emerged as a promising tool for
reconstructing charged particle tracks, due to its potentially linear
computational scaling with detector hits. The recent implementation of a graph
neural network-based track reconstruction pipeline in the first level trigger
of the LHCb experiment on GPUs serves as a platform for comparative studies
between computational architectures in the context of high-energy physics. This
paper presents a novel comparison of the throughput of ML model inference
between FPGAs and GPUs, focusing on the first step of the track reconstruction
pipeline -- an implementation of a multilayer perceptron. Using
HLS4ML for FPGA deployment, we benchmark its performance against the GPU
implementation and demonstrate the potential of FPGAs for high-throughput,
low-latency inference without the need for expertise in FPGA development and
while consuming significantly less power.
|
2502.02305
|
Information-Theoretic Proofs for Diffusion Sampling
|
stat.ML cs.IT cs.LG math.IT
|
This paper provides an elementary, self-contained analysis of diffusion-based
sampling methods for generative modeling. In contrast to existing approaches
that rely on continuous-time processes and then discretize, our treatment works
directly with discrete-time stochastic processes and yields precise
non-asymptotic convergence guarantees under broad assumptions. The key insight
is to couple the sampling process of interest with an idealized comparison
process that has an explicit Gaussian-convolution structure. We then leverage
simple identities from information theory, including the I-MMSE relationship,
to bound the discrepancy (in terms of the Kullback-Leibler divergence) between
these two discrete-time processes. In particular, we show that, if the
diffusion step sizes are chosen sufficiently small and one can approximate
certain conditional mean estimators well, then the sampling distribution is
provably close to the target distribution. Our results also provide a
transparent view on how to accelerate convergence by introducing additional
randomness in each step to match higher order moments in the comparison
process.
|
2502.02307
|
UniGaze: Towards Universal Gaze Estimation via Large-scale Pre-Training
|
cs.CV
|
Despite decades of research on data collection and model architectures,
current gaze estimation models face significant challenges in generalizing
across diverse data domains. While recent advances in self-supervised
pre-training have shown remarkable potential for improving model generalization
in various vision tasks, their effectiveness in gaze estimation remains
unexplored due to the geometric nature of the gaze regression task. We propose
UniGaze, which leverages large-scale, in-the-wild facial datasets through
self-supervised pre-training for gaze estimation. We carefully curate multiple
facial datasets that capture diverse variations in identity, lighting,
background, and head poses. By directly applying Masked Autoencoder (MAE)
pre-training on normalized face images with a Vision Transformer (ViT)
backbone, our UniGaze learns appropriate feature representations within the
specific input space required by downstream gaze estimation models. Through
comprehensive experiments using challenging cross-dataset evaluation and novel
protocols, including leave-one-dataset-out and joint-dataset settings, we
demonstrate that UniGaze significantly improves generalization across multiple
data domains while minimizing reliance on costly labeled data. The source code
and pre-trained models will be released upon acceptance.
|
2502.02308
|
Real-Time Operator Takeover for Visuomotor Diffusion Policy Training
|
cs.RO cs.LG
|
We present a Real-Time Operator Takeover (RTOT) paradigm enabling operators
to seamlessly take control of a live visuomotor diffusion policy, guiding the
system back into desirable states or reinforcing specific demonstrations. We
present new insights into using the Mahalanobis distance to automatically
identify undesirable states. Once the operator has intervened and redirected
the system, the control is seamlessly returned to the policy, which resumes
generating actions until further intervention is required. We demonstrate that
incorporating the targeted takeover demonstrations significantly improves
policy performance compared to training solely with an equivalent number of,
but longer, initial demonstrations. We provide an in-depth analysis of using
the Mahalanobis distance to detect out-of-distribution states, illustrating its
utility for identifying critical failure points during execution. Supporting
materials, including videos of initial and takeover demonstrations and all rice
scooping experiments, are available on the project website:
https://operator-takeover.github.io/
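The out-of-distribution check described above can be illustrated with a generic Mahalanobis-distance detector; the Gaussian fit, the quantile threshold, and the toy state data below are invented assumptions, not the paper's implementation:

```python
import numpy as np

def fit_gaussian(states):
    """Fit mean and inverse covariance to in-distribution state vectors."""
    mu = states.mean(axis=0)
    c = states - mu
    cov = c.T @ c / (len(states) - 1)
    cov += 1e-6 * np.eye(cov.shape[0])   # regularize for invertibility
    return mu, np.linalg.inv(cov)

def mahalanobis(x, mu, cov_inv):
    d = x - mu
    return float(np.sqrt(d @ cov_inv @ d))

rng = np.random.default_rng(1)
train_states = rng.normal(0.0, 1.0, size=(5000, 4))   # nominal policy states
mu, cov_inv = fit_gaussian(train_states)

# Flag a state as out-of-distribution when its distance exceeds a high
# quantile of the distances observed on in-distribution data.
train_d = np.array([mahalanobis(s, mu, cov_inv) for s in train_states])
threshold = float(np.quantile(train_d, 0.99))

novel_state = np.full(4, 6.0)                         # far from nominal data
needs_takeover = mahalanobis(novel_state, mu, cov_inv) > threshold
```

In a takeover loop, `needs_takeover` would trigger the operator intervention; the quantile level trades off false alarms against missed failure states.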
|
2502.02309
|
Review of Demographic Bias in Face Recognition
|
cs.CV cs.CR
|
Demographic bias in face recognition (FR) has emerged as a critical area of
research, given its impact on fairness, equity, and reliability across diverse
applications. As FR technologies are increasingly deployed globally,
disparities in performance across demographic groups -- such as race,
ethnicity, and gender -- have garnered significant attention. These biases not
only compromise the credibility of FR systems but also raise ethical concerns,
especially when these technologies are employed in sensitive domains. This
review consolidates extensive research efforts providing a comprehensive
overview of the multifaceted aspects of demographic bias in FR.
We systematically examine the primary causes, datasets, assessment metrics,
and mitigation approaches associated with demographic disparities in FR. By
categorizing key contributions in these areas, this work provides a structured
approach to understanding and addressing the complexity of this issue. Finally,
we highlight current advancements and identify emerging challenges that need
further investigation. This article aims to provide researchers with a unified
perspective on the state-of-the-art while emphasizing the critical need for
equitable and trustworthy FR systems.
|
2502.02310
|
Gaussian processes for dynamics learning in model predictive control
|
eess.SY cs.SY
|
Due to its state-of-the-art estimation performance complemented by rigorous
and non-conservative uncertainty bounds, Gaussian process regression is a
popular tool for enhancing dynamical system models and coping with their
inaccuracies. This has enabled a plethora of successful implementations of
Gaussian process-based model predictive control in a variety of applications
in recent years. However, despite its evident practical effectiveness,
there are still many open questions when attempting to analyze the associated
optimal control problem theoretically and to exploit the full potential of
Gaussian process regression in view of safe learning-based control.
The contribution of this review is twofold. The first is to survey the
available literature on the topic, highlighting the major theoretical
challenges such as (i) addressing scalability issues of Gaussian process
regression; (ii) taking into account the necessary approximations to obtain a
tractable MPC formulation; (iii) including online model updates to refine the
dynamics description, exploiting data collected during operation. The second is
to provide an extensive discussion of future research directions, collecting
results on uncertainty quantification that are related to (but yet unexploited
in) optimal control, among others. Ultimately, this paper provides a toolkit to
study and advance Gaussian process-based model predictive control.
|
2502.02311
|
MAGNNET: Multi-Agent Graph Neural Network-based Efficient Task
Allocation for Autonomous Vehicles with Deep Reinforcement Learning
|
cs.RO cs.LG cs.MA
|
This paper addresses the challenge of decentralized task allocation within
heterogeneous multi-agent systems operating under communication constraints. We
introduce a novel framework that integrates graph neural networks (GNNs) with a
centralized training and decentralized execution (CTDE) paradigm, further
enhanced by a tailored Proximal Policy Optimization (PPO) algorithm for
multi-agent deep reinforcement learning (MARL). Our approach enables unmanned
aerial vehicles (UAVs) and unmanned ground vehicles (UGVs) to dynamically
allocate tasks efficiently without necessitating central coordination in a 3D
grid environment. The framework minimizes total travel time while
simultaneously avoiding conflicts in task assignments. For the cost calculation
and routing, we employ reservation-based A* and R* path planners. Experimental
results revealed that our method achieves a high 92.5% conflict-free success
rate, with only a 7.49% performance gap compared to the centralized Hungarian
method, while outperforming the heuristic decentralized baseline based on a
greedy approach. Additionally, the framework exhibits scalability with up to 20
agents with an allocation processing time of 2.8 s and robustness in responding to
dynamically generated tasks, underscoring its potential for real-world
applications in complex multi-agent scenarios.
|
2502.02315
|
VaiBot: Shuttle Between the Instructions and Parameters of Large
Language Models
|
cs.LG cs.CL
|
How to interact with LLMs through \emph{instructions} has been widely studied
by researchers. However, previous studies have treated the emergence of
instructions and the training of LLMs on task data as separate processes,
overlooking the inherent unity between the two. This paper proposes a neural
network framework, VaiBot, that integrates VAE and VIB, designed to uniformly
model, learn, and infer both deduction and induction tasks under LLMs. Through
experiments, we demonstrate that VaiBot performs on par with existing baseline
methods in terms of deductive capabilities while significantly surpassing them
in inductive capabilities. We also find that VaiBot can scale up using general
instruction-following data and exhibits excellent one-shot induction abilities.
We finally synergistically integrate the deductive and inductive processes of
VaiBot. Through t-SNE dimensionality reduction, we observe that its
inductive-deductive process significantly improves the distribution of training
parameters, enabling it to outperform baseline methods in inductive reasoning
tasks. The code and data for this paper can be found at
https://anonymous.4open.science/r/VaiBot-021F.
|
2502.02316
|
DIME: Diffusion-Based Maximum Entropy Reinforcement Learning
|
cs.LG
|
Maximum entropy reinforcement learning (MaxEnt-RL) has become the standard
approach to RL due to its beneficial exploration properties. Traditionally,
policies are parameterized using Gaussian distributions, which significantly
limits their representational capacity. Diffusion-based policies offer a more
expressive alternative, yet integrating them into MaxEnt-RL poses
challenges--primarily due to the intractability of computing their marginal
entropy. To overcome this, we propose Diffusion-Based Maximum Entropy RL
(DIME). DIME leverages recent advances in approximate inference with diffusion
models to derive a lower bound on the maximum entropy objective. Additionally,
we propose a policy iteration scheme that provably converges to the optimal
diffusion policy. Our method enables the use of expressive diffusion-based
policies while retaining the principled exploration benefits of MaxEnt-RL,
significantly outperforming other diffusion-based methods on challenging
high-dimensional control benchmarks. It is also competitive with
state-of-the-art non-diffusion based RL methods while requiring fewer
algorithmic design choices and smaller update-to-data ratios, reducing
computational complexity.
|
2502.02322
|
Improving Generalization Ability for 3D Object Detection by Learning
Sparsity-invariant Features
|
cs.CV cs.RO
|
In autonomous driving, 3D object detection is essential for accurately
identifying and tracking objects. Despite the continuous development of various
technologies for this task, a significant drawback is observed in most of
them-they experience substantial performance degradation when detecting objects
in unseen domains. In this paper, we propose a method to improve the
generalization ability for 3D object detection on a single domain. We primarily
focus on generalizing from a single source domain to target domains with
distinct sensor configurations and scene distributions. To learn
sparsity-invariant features from a single source domain, we selectively
subsample the source data to a specific beam, using confidence scores
determined by the current detector to identify the density that holds utmost
importance for the detector. Subsequently, we employ the teacher-student
framework to align the Bird's Eye View (BEV) features for different point
cloud densities. We also utilize feature content alignment (FCA) and
graph-based embedding relationship alignment (GERA) to instruct the detector to
be domain-agnostic. Extensive experiments demonstrate that our method exhibits
superior generalization capabilities compared to other baselines. Furthermore,
our approach even outperforms certain domain adaptation methods that have
access to the target domain data.
|
2502.02323
|
Hybrid Resolver Model Generalization for Fault Condition Modeling: A
Promising Tool for Reliability Study
|
eess.SY cs.SY
|
Resolvers, like all electromagnetic devices, are constantly under
investigation, both operationally and structurally. In this regard, a modeling
methodology that can save significant time without compromising accuracy is
highly valuable. In this study, a generalized hybrid model is suggested
that, in addition to the above benefits, has sufficient capability to ease
reliability study in the field of resolvers, where a large number of faulty
conditions must be investigated under different operating conditions, including
changes in angular velocity, voltage, and frequency of excitation; all of which
are highlighted in the context of fault coverage. This model also serves as a
promising tool for generating large datasets, which is advantageous for fault
diagnosis. A resolver with a non-uniform air gap is chosen as a case study to
challenge the suggested model, particularly in relation to eccentricity faults.
We generalize the suggested model to account for the most common faulty
conditions of resolvers: in-turn short circuits in signal and excitation
windings, as well as static and dynamic eccentricity faults. The close
agreement between the results of the suggested model and those from
Time-Stepping Finite Element Analysis (TS-FEA), along with significant time
savings in both healthy and faulty conditions, highlights the generality and
proficiency of the suggested model. Finally, the case study is prototyped, and
we verify the accuracy of the suggested model experimentally.
|
2502.02327
|
Policy-Guided Causal State Representation for Offline Reinforcement
Learning Recommendation
|
cs.IR cs.LG
|
In offline reinforcement learning-based recommender systems (RLRS), learning
effective state representations is crucial for capturing user preferences that
directly impact long-term rewards. However, raw state representations often
contain high-dimensional, noisy information and components that are not
causally relevant to the reward. Additionally, missing transitions in offline
data make it challenging to accurately identify features that are most relevant
to user satisfaction. To address these challenges, we propose Policy-Guided
Causal Representation (PGCR), a novel two-stage framework for causal feature
selection and state representation learning in offline RLRS. In the first
stage, we learn a causal feature selection policy that generates modified
states by isolating and retaining only the causally relevant components (CRCs)
while altering irrelevant components. This policy is guided by a reward
function based on the Wasserstein distance, which measures the causal effect of
state components on the reward and encourages the preservation of CRCs that
directly influence user interests. In the second stage, we train an encoder to
learn compact state representations by minimizing the mean squared error (MSE)
loss between the latent representations of the original and modified states,
ensuring that the representations focus on CRCs. We provide a theoretical
analysis proving the identifiability of causal effects from interventions,
validating the ability of PGCR to isolate critical state components for
decision-making. Extensive experiments demonstrate that PGCR significantly
improves recommendation performance, confirming its effectiveness for offline
RL-based recommender systems.
|
2502.02329
|
ReSpark: Leveraging Previous Data Reports as References to Generate New
Reports with LLMs
|
cs.HC cs.CL
|
Creating data reports is time-consuming, as it requires iterative exploration
and understanding of data, followed by summarizing the insights. While large
language models (LLMs) are powerful tools for data processing and text
generation, they often struggle to produce complete data reports that fully
meet user expectations. One significant challenge is effectively communicating
the entire analysis logic to LLMs. Moreover, determining a comprehensive
analysis logic can be mentally taxing for users. To address these challenges,
we propose ReSpark, an LLM-based method that leverages existing data reports as
references for creating new ones. Given a data table, ReSpark searches for
similar-topic reports, parses them into interdependent segments corresponding
to analytical objectives, and executes them with new data. It identifies
inconsistencies and customizes the objectives, data transformations, and
textual descriptions. ReSpark allows users to review real-time outputs, insert
new objectives, and modify report content. Its effectiveness was evaluated
through comparative and user studies.
|
2502.02331
|
On the Impact of Performative Risk Minimization for Binary Random
Variables
|
stat.ML cs.LG
|
Performativity, the phenomenon where outcomes are influenced by predictions,
is particularly prevalent in social contexts where individuals strategically
respond to a deployed model. In order to preserve the high accuracy of machine
learning models under distribution shifts caused by performativity, Perdomo et
al. (2020) introduced the concept of performative risk minimization (PRM).
While this framework ensures model accuracy, it overlooks the impact of the PRM
on the underlying distributions and the predictions of the model. In this
paper, we initiate the analysis of the impact of PRM, by studying
performativity for a sequential performative risk minimization problem with
binary random variables and linear performative shifts. We formulate two
natural measures of impact. In the case of full information, where the
distribution dynamics are known, we derive explicit formulas for the PRM
solution and our impact measures. In the case of partial information, we
provide performative-aware statistical estimators, as well as simulations. Our
analysis contrasts PRM to alternatives that do not model data shift and
indicates that PRM can have amplified side effects compared to such methods.
|
2502.02332
|
Coreset-Based Task Selection for Sample-Efficient Meta-Reinforcement
Learning
|
math.OC cs.LG
|
We study task selection to enhance sample efficiency in model-agnostic
meta-reinforcement learning (MAML-RL). Traditional meta-RL typically assumes
that all available tasks are equally important, which can lead to task
redundancy when they share significant similarities. To address this, we
propose a coreset-based task selection approach that selects a weighted subset
of tasks based on how diverse they are in gradient space, prioritizing the most
informative and diverse tasks. Such task selection reduces the number of
samples needed to find an $\epsilon$-close stationary solution by a factor of
O(1/$\epsilon$). Consequently, it guarantees a faster adaptation to unseen
tasks while focusing training on the most relevant tasks. As a case study, we
incorporate task selection to MAML-LQR (Toso et al., 2024b), and prove a sample
complexity reduction proportional to O(log(1/$\epsilon$)) when the
task-specific cost also satisfies gradient dominance. Our theoretical guarantees
underscore task selection as a key component for scalable and sample-efficient
meta-RL. We numerically validate this trend across multiple RL benchmark
problems, illustrating the benefits of task selection beyond the LQR baseline.
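Selecting a diverse, weighted subset of tasks in gradient space can be sketched with a greedy k-center heuristic; this is an illustrative assumption rather than the paper's exact coreset construction, and the clustered toy gradients are invented:

```python
import numpy as np

def kcenter_select(grads, k):
    """Greedy k-center over task gradients: repeatedly add the task whose
    gradient is farthest from every task selected so far."""
    selected = [0]                                     # seed with task 0
    dists = np.linalg.norm(grads - grads[0], axis=1)   # distance to selected set
    while len(selected) < k:
        nxt = int(np.argmax(dists))                    # farthest remaining task
        selected.append(nxt)
        dists = np.minimum(dists, np.linalg.norm(grads - grads[nxt], axis=1))
    return selected

rng = np.random.default_rng(0)
# Three clusters of near-duplicate task gradients (5 tasks each): a diverse
# subset should keep one representative per cluster and drop the redundancy.
centers = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
grads = np.vstack([c + 0.01 * rng.standard_normal((5, 2)) for c in centers])
picked = kcenter_select(grads, k=3)
```

Training on the picked subset (optionally with weights proportional to cluster sizes) is the mechanism by which such selection removes redundant meta-gradient computations.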
|
2502.02334
|
Event-aided Semantic Scene Completion
|
cs.CV cs.RO eess.IV
|
Autonomous driving systems rely on robust 3D scene understanding. Recent
advances in Semantic Scene Completion (SSC) for autonomous driving underscore
the limitations of RGB-based approaches, which struggle under motion blur, poor
lighting, and adverse weather. Event cameras, offering high dynamic range and
low latency, address these challenges by providing asynchronous data that
complements RGB inputs. We present DSEC-SSC, the first real-world benchmark
specifically designed for event-aided SSC, which includes a novel 4D labeling
pipeline for generating dense, visibility-aware labels that adapt dynamically
to object motion. Our proposed RGB-Event fusion framework, EvSSC, introduces an
Event-aided Lifting Module (ELM) that effectively bridges 2D RGB-Event features
to 3D space, enhancing view transformation and the robustness of 3D volume
construction across SSC models. Extensive experiments on DSEC-SSC and simulated
SemanticKITTI-E demonstrate that EvSSC is adaptable to both transformer-based
and LSS-based SSC architectures. Notably, evaluations on SemanticKITTI-C
demonstrate that EvSSC achieves consistently improved prediction accuracy
across five degradation modes and both In-domain and Out-of-domain settings,
achieving up to a 52.5% relative improvement in mIoU when the image sensor
partially fails. Additionally, we quantitatively and qualitatively validate the
superiority of EvSSC under motion blur and extreme weather conditions, where
autonomous driving is challenged. The established datasets and our codebase
will be made publicly available at https://github.com/Pandapan01/EvSSC.
|
2502.02336
|
Identifying Large-Scale Linear Parameter Varying Systems with Dynamic
Mode Decomposition Methods
|
eess.SY cs.LG cs.SY
|
Linear Parameter Varying (LPV) Systems are a well-established class of
nonlinear systems with a rich theory for stability analysis, control, and
analytical response finding, among other aspects. Although there are works on
data-driven identification of such systems, the literature is quite scarce in
terms of works that tackle the identification of LPV models for large-scale
systems. Since large-scale systems are ubiquitous in practice, this work
develops a methodology for the local and global identification of large-scale
LPV systems based on nonintrusive reduced-order modeling. The developed
method, coined DMD-LPV, is inspired by the Dynamic Mode Decomposition (DMD).
To validate the proposed identification method, we identify a system
described by a discretized linear diffusion equation, with the diffusion gain
defined by a polynomial over a parameter. The experiments show that the
proposed method can easily identify a reduced-order LPV model of a given
large-scale system without the need to perform identification in the full-order
dimension, and with almost no performance decay caused by the reduction,
given that the model structure is well-established.
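As background for the DMD machinery the method builds on, here is a minimal exact-DMD sketch on an invented two-state linear toy system; the paper's DMD-LPV handles parameter-varying, large-scale models, which this sketch does not attempt:

```python
import numpy as np

def dmd(X, Y, r):
    """Standard (exact) DMD: fit Y ~ A X with a rank-r linear operator
    and return its spectrum and modes."""
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    Ur, sr, Vr = U[:, :r], s[:r], Vh[:r].conj().T
    Atilde = Ur.conj().T @ Y @ Vr / sr      # operator projected onto POD basis
    eigvals, W = np.linalg.eig(Atilde)      # DMD eigenvalues
    modes = (Y @ Vr / sr) @ W               # exact DMD modes
    return eigvals, modes

# Toy linear system x_{k+1} = A x_k with known eigenvalues 0.9 and 0.5.
A = np.diag([0.9, 0.5])
snaps = [np.array([1.0, -1.0])]
for _ in range(20):
    snaps.append(A @ snaps[-1])
snaps = np.stack(snaps, axis=1)             # columns are snapshots in time
X, Y = snaps[:, :-1], snaps[:, 1:]          # shifted snapshot pair
eigvals, modes = dmd(X, Y, r=2)
```

Because the snapshots here are noise-free, the recovered spectrum matches the true system eigenvalues; with rank r smaller than the state dimension, the same projection yields the reduced-order operator that makes full-order identification unnecessary.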
|
2502.02338
|
Geometric Neural Process Fields
|
cs.CV cs.LG
|
This paper addresses the challenge of Neural Field (NeF) generalization,
where models must efficiently adapt to new signals given only a few
observations. To tackle this, we propose Geometric Neural Process Fields
(G-NPF), a probabilistic framework for neural radiance fields that explicitly
captures uncertainty. We formulate NeF generalization as a probabilistic
problem, enabling direct inference of NeF function distributions from limited
context observations. To incorporate structural inductive biases, we introduce
a set of geometric bases that encode spatial structure and facilitate the
inference of NeF function distributions. Building on these bases, we design a
hierarchical latent variable model, allowing G-NPF to integrate structural
information across multiple spatial levels and effectively parameterize INR
functions. This hierarchical approach improves generalization to novel scenes
and unseen signals. Experiments on novel-view synthesis for 3D scenes, as well
as 2D image and 1D signal regression, demonstrate the effectiveness of our
method in capturing uncertainty and leveraging structural information for
improved generalization.
|
2502.02339
|
Boosting Multimodal Reasoning with MCTS-Automated Structured Thinking
|
cs.CL
|
Multimodal large language models (MLLMs) exhibit impressive capabilities but
still face challenges in complex visual reasoning. While recent efforts attempt
to enhance MLLMs' reasoning by incorporating OpenAI o1-like structured thinking
through explicit search structures or teacher-guided distillation, they often
struggle to balance performance and efficiency. A critical limitation is their
heavy reliance on extensive data and search spaces, resulting in low-efficiency
implicit insight extraction and data utilization. To address this, we propose
AStar, an Automated Structured thinking paradigm for multimodal reasoning via
Monte Carlo Tree Search (MCTS). AStar automatically derives high-level
cognitive reasoning patterns from limited data using MCTS-powered hierarchical
structures. Building on these explicit patterns, we design a unified reasoning
framework that seamlessly integrates models' internal reasoning capabilities
and external reasoning guidelines, enabling efficient inference with minimal
tree iterations. This novel paradigm strikes a compelling balance between
performance and efficiency. Extensive experiments demonstrate AStar's
effectiveness, achieving superior accuracy (54.0$\%$) on the MathVerse
benchmark with a 7B backbone, surpassing GPT-4o (50.2$\%$) while maintaining
substantial data and computational efficiency.
|
2502.02340
|
Transfer Risk Map: Mitigating Pixel-level Negative Transfer in Medical
Segmentation
|
cs.CV
|
How to mitigate negative transfer in transfer learning is a long-standing and
challenging issue, especially in the application of medical image segmentation.
Existing methods for reducing negative transfer focus on classification or
regression tasks, ignoring the non-uniform negative transfer risk in different
image regions. In this work, we propose a simple yet effective weighted
fine-tuning method that directs the model's attention towards regions with
significant transfer risk for medical semantic segmentation. Specifically, we
compute a transferability-guided transfer risk map to quantify the transfer
hardness for each pixel and the potential risks of negative transfer. During
the fine-tuning phase, we introduce a map-weighted loss function, normalized
with image foreground size to counter class imbalance. Extensive experiments on
brain segmentation datasets show our method significantly improves the target
task performance, with gains of 4.37% on FeTS2021 and 1.81% on iSeg2019,
avoiding negative transfer across modalities and tasks. Meanwhile, a 2.9% gain
under a few-shot scenario validates the robustness of our approach.
|
2502.02341
|
Test Time Training for 4D Medical Image Interpolation
|
eess.IV cs.AI cs.CV
|
4D medical image interpolation is essential for improving temporal resolution
and diagnostic precision in clinical applications. Previous works ignore the
problem of distribution shifts, resulting in poor generalization under
different distributions. A natural solution would be to adapt the model to a new
test distribution, but this cannot be done if the test input comes without a
ground truth label. In this paper, we propose a novel test time training
framework which uses self-supervision to adapt the model to a new distribution
without requiring any labels. Indeed, before performing frame interpolation on
each test video, the model is trained on the same instance using a
self-supervised task, such as rotation prediction or image reconstruction. We
conduct experiments on two publicly available 4D medical image interpolation
datasets, Cardiac and 4D-Lung. The experimental results show that the proposed
method achieves strong performance across various evaluation metrics on
both datasets. It achieves higher peak signal-to-noise ratio values, 33.73dB on
Cardiac and 34.02dB on 4D-Lung. Our method not only advances 4D medical image
interpolation but also provides a template for domain adaptation in other
fields such as image segmentation and image registration.
|
2502.02345
|
Optimal Subspace Inference for the Laplace Approximation of Bayesian
Neural Networks
|
cs.LG
|
Subspace inference for neural networks assumes that a subspace of their
parameter space suffices to produce a reliable uncertainty quantification. In
this work, we mathematically derive the optimal subspace model for a Bayesian
inference scenario based on the Laplace approximation. We demonstrate
empirically that, in the optimal case, often a fraction of parameters less than
1% is sufficient to obtain a reliable estimate of the full Laplace
approximation. Since the optimal solution is derived, we can evaluate all other
subspace models against a baseline. In addition, we give an approximation of
our method that is applicable to larger problem settings, in which the optimal
solution is not computable, and compare it to existing subspace models from the
literature. In general, our approximation scheme outperforms previous work.
Furthermore, we present a metric to qualitatively compare different subspace
models even if the exact Laplace approximation is unknown.
|
2502.02347
|
Exponentially Stable Combined Adaptive Control under Finite Excitation
Condition
|
eess.SY cs.SY
|
In adaptive control, parameter convergence relies on a stringent persistent
excitation (PE) condition. Several works have proposed a memory term in the
last decade to translate the PE condition to a feasible finite excitation (FE)
condition. This work proposes a combined model reference adaptive control for a
class of uncertain nonlinear systems with an unknown control effectiveness
vector. The closed-loop system is exponentially stable under the FE condition.
The exponential rate of convergence is independent of the excitation level of
the regressor vector and is lower-bounded in terms of the system parameters and
user-designed gains. A numerical simulation is presented, validating the
results obtained with the proposed adaptive control.
|
2502.02351
|
Exploring the Feasibility of AI-Assisted Spine MRI Protocol Optimization
Using DICOM Image Metadata
|
cs.LG
|
Artificial intelligence (AI) is increasingly being utilized to optimize
magnetic resonance imaging (MRI) protocols. Given that image details are
critical for diagnostic accuracy, optimizing MRI acquisition protocols is
essential for enhancing image quality. While medical physicists are responsible
for this optimization, the variability in equipment usage and the wide range of
MRI protocols in clinical settings pose significant challenges. This study aims
to validate the application of AI in optimizing MRI protocols using dynamic
data from clinical practice, specifically DICOM metadata. To achieve this, four
MRI spine exam databases were created, with the target attribute being the
binary classification of image quality (good or bad). Five AI models were
trained to identify trends in acquisition parameters that influence image
quality, grounded in MRI theory. These trends were analyzed using SHAP graphs.
The models achieved F1 performance ranging from 77% to 93% for datasets
containing 292 or more instances, with the observed trends aligning with MRI
theory. The models effectively reflected the practical realities of clinical
MRI settings, offering a valuable tool for medical physicists in quality
control tasks. In conclusion, AI has demonstrated its potential to optimize MRI
protocols, supporting medical physicists in improving image quality and
enhancing the efficiency of quality control in clinical practice.
|
2502.02356
|
A Fast Decoding Algorithm for Generalized Reed-Solomon Codes and
Alternant Codes
|
cs.IT math.IT
|
In this paper, it is shown that the syndromes of generalized Reed-Solomon
(GRS) codes and alternant codes can be characterized in terms of the inverse fast
Fourier transform, regardless of code definitions. Then a fast decoding
algorithm is proposed, which has a computational complexity of $O(n\log(n-k) +
(n-k)\log^2(n-k))$ for all $(n,k)$ GRS codes and $(n,k)$ alternant codes.
Particularly, this provides a new decoding method for Goppa codes, which are an
important subclass of alternant codes. When decoding the binary Goppa code with
length $8192$ and correction capability $128$, the new algorithm is nearly 10
times faster than traditional methods. The decoding algorithm is suitable for
the McEliece cryptosystem, which is a candidate for post-quantum cryptography
techniques.
|
2502.02357
|
Graph-based Impact Analysis of Cyber-Attacks on Behind-the-Meter
Infrastructure
|
eess.SY cs.SY
|
Behind-the-Meter assets are getting more interconnected to realise new
applications like flexible tariffs. Cyber-attacks on the resulting control
infrastructure may impact a large number of devices, which can result in severe
impact on the power system. To analyse the possible impact of such attacks we
developed a graph model of the cyber-physical energy system, representing
interdependencies between the control infrastructure and the power system. This
model is then used for an impact analysis of cyber-attacks with different
attack vectors.
|
2502.02358
|
MotionLab: Unified Human Motion Generation and Editing via the
Motion-Condition-Motion Paradigm
|
cs.CV
|
Human motion generation and editing are key components of computer graphics
and vision. However, current approaches in this field tend to offer isolated
solutions tailored to specific tasks, which can be inefficient and impractical
for real-world applications. While some efforts have aimed to unify
motion-related tasks, these methods simply use different modalities as
conditions to guide motion generation. Consequently, they lack editing
capabilities, fine-grained control, and fail to facilitate knowledge sharing
across tasks. To address these limitations and provide a versatile, unified
framework capable of handling both human motion generation and editing, we
introduce a novel paradigm: Motion-Condition-Motion, which enables the unified
formulation of diverse tasks with three concepts: source motion, condition, and
target motion. Based on this paradigm, we propose a unified framework,
MotionLab, which incorporates rectified flows to learn the mapping from source
motion to target motion, guided by the specified conditions. In MotionLab, we
introduce the 1) MotionFlow Transformer to enhance conditional generation and
editing without task-specific modules; 2) Aligned Rotational Position Encoding
to guarantee the time synchronization between source motion and target motion;
3) Task Specified Instruction Modulation; and 4) Motion Curriculum Learning for
effective multi-task learning and knowledge sharing across tasks. Notably, our
MotionLab demonstrates promising generalization capabilities and inference
efficiency across multiple benchmarks for human motion. Our code and additional
video results are available at: https://diouo.github.io/motionlab.github.io/.
|
2502.02362
|
Premise-Augmented Reasoning Chains Improve Error Identification in Math
reasoning with LLMs
|
cs.CL
|
Chain-of-Thought (CoT) prompting enhances mathematical reasoning in large
language models (LLMs) by enabling detailed step-by-step solutions. However,
due to the verbosity of LLMs, the resulting reasoning chains can be long,
making it harder to verify the reasoning steps and trace errors arising from
dependencies between steps that may be far apart in the sequence.
Importantly, mathematical reasoning allows each step to be derived from
a small set of premises, which are a subset of the preceding steps in the
reasoning chain. In this paper, we present a framework that identifies the
premises for each step, to improve the evaluation of reasoning. We restructure
conventional linear reasoning chains into Premise Augmented Reasoning Chains
(PARC) by introducing premise links, resulting in a directed acyclic graph
where the nodes are the steps and the edges are the premise links. Through
experiments with a PARC-based dataset that we built, namely PERL (Premises and
ERrors identification in LLMs), we demonstrate that LLMs can reliably identify
premises within complex reasoning chains. In particular, even open-source LLMs
achieve 90% recall in premise identification. We also show that PARC helps to
identify errors in reasoning chains more reliably. The accuracy of error
identification improves by 6% to 16% absolute when step-by-step verification is
carried out in PARC under the premises. Our findings highlight the utility of
premise-centric representations in addressing complex problem-solving tasks and
open new avenues for improving the reliability of LLM-based reasoning
evaluations.
|
2502.02363
|
FAB-PPI: Frequentist, Assisted by Bayes, Prediction-Powered Inference
|
stat.ML cs.LG
|
Prediction-powered inference (PPI) enables valid statistical inference by
combining experimental data with machine learning predictions. When a
sufficient number of high-quality predictions is available, PPI results in more
accurate estimates and tighter confidence intervals than traditional methods.
In this paper, we propose to inform the PPI framework with prior knowledge on
the quality of the predictions. The resulting method, which we call
frequentist, assisted by Bayes, PPI (FAB-PPI), improves over PPI when the
observed prediction quality is likely under the prior, while maintaining its
frequentist guarantees. Furthermore, when using heavy-tailed priors, FAB-PPI
adaptively reverts to standard PPI in low prior probability regions. We
demonstrate the benefits of FAB-PPI in real and synthetic examples.
|
2502.02365
|
Measuring social mobility in temporal networks
|
cs.SI physics.soc-ph
|
In complex networks, the rich-get-richer effect (nodes with high degree at
one point in time gain more degree in their future) is commonly observed. In
practice this is often studied on a static network snapshot, for example, a
preferential attachment model assumed to explain the more highly connected
nodes, or a rich-club effect that analyses the most highly connected nodes. In
this paper, we consider temporal measures of how success (measured here as node
degree) propagates across time. By analogy with social mobility (a measure of
how people move within a social hierarchy through their lives), we define
hierarchical mobility to measure how a node's propensity to gain degree changes
over time. We introduce an associated taxonomy of temporal correlation
statistics including mobility, philanthropy and community. Mobility measures
the extent to which a node's degree gain in one time period predicts its degree
gain in the next. Philanthropy and community measure similar properties related
to node neighbourhood.
We apply these statistics both to artificial models and to 26 real temporal
networks. We find that most of our networks show a tendency for individual
nodes and their neighbourhoods to remain in similar hierarchical positions over
time, while most networks show low correlative effects between individuals and
their neighbourhoods. Moreover, we show that the mobility taxonomy can
discriminate between networks from different fields. We also generate
artificial network models to gain intuition about the behaviour and expected
range of the statistics. The artificial models show that the opposite of the
"rich-get-richer" effect requires the existence of inequality of degree in a
network. Overall, we show that measuring the hierarchical mobility of a
temporal network is an invaluable resource for discovering its underlying
structural dynamics.
|
2502.02367
|
Field Matching: an Electrostatic Paradigm to Generate and Transfer Data
|
cs.LG cs.AI cs.CV
|
We propose Electrostatic Field Matching (EFM), a novel method that is
suitable for both generative modeling and distribution transfer tasks. Our
approach is inspired by the physics of an electrical capacitor. We place source
and target distributions on the capacitor plates and assign them positive and
negative charges, respectively. We then learn the electrostatic field of the
capacitor using a neural network approximator. To map the distributions to each
other, we start at one plate of the capacitor and move the samples along the
learned electrostatic field lines until they reach the other plate. We
theoretically justify that this approach provably yields the distribution
transfer. In practice, we demonstrate the performance of our EFM in toy and
image data experiments.
|
2502.02368
|
Evaluating the Effectiveness of LLMs in Fixing Maintainability Issues in
Real-World Projects
|
cs.SE cs.AI
|
Large Language Models (LLMs) have gained attention for addressing coding
problems, but their effectiveness in fixing code maintainability remains
unclear. This study evaluates LLMs' capability to resolve 127 maintainability
issues from 10 GitHub repositories. We use zero-shot prompting for Copilot Chat
and Llama 3.1, and few-shot prompting with Llama only. The LLM-generated
solutions are assessed for compilation errors, test failures, and new
maintainability problems. Llama with few-shot prompting successfully fixed
44.9% of the methods, while Copilot Chat and Llama zero-shot fixed 32.29% and
30%, respectively. However, most solutions introduced errors or new
maintainability issues. We also conducted a human study with 45 participants to
evaluate the readability of 51 LLM-generated solutions. The human study showed
that 68.63% of participants observed improved readability. Overall, while LLMs
show potential for fixing maintainability issues, their introduction of errors
highlights their current limitations.
|
2502.02371
|
Accurate Pocket Identification for Binding-Site-Agnostic Docking
|
q-bio.BM cs.AI cs.LG physics.bio-ph physics.med-ph
|
Accurate identification of druggable pockets is essential for structure-based
drug design. However, most pocket-identification algorithms prioritize their
geometric properties over downstream docking performance. To address this
limitation, we developed RAPID-Net, a pocket-finding algorithm for seamless
integration with docking workflows. When guiding AutoDock Vina, RAPID-Net
outperforms DiffBindFR on the PoseBusters benchmark and enables blind docking
on large proteins that AlphaFold 3 cannot process as a whole. Furthermore,
RAPID-Net surpasses PUResNet and Kalasanty in docking accuracy and
pocket-ligand intersection rates across diverse datasets, including
PoseBusters, Astex Diverse Set, BU48, and Coach420. When accuracy is evaluated
as ``at least one correct pose in the ensemble'', RAPID-Net outperforms
AlphaFold 3 on the PoseBusters benchmark, suggesting that our approach can be
further improved with a suitable pose reweighting tool offering a
cost-effective and competitive alternative to AlphaFold 3 for docking. Finally,
using several therapeutically relevant examples, we demonstrate the ability of
RAPID-Net to identify remote functional sites, highlighting its potential to
facilitate the development of innovative therapeutics.
|
2502.02372
|
MaintaAvatar: A Maintainable Avatar Based on Neural Radiance Fields by
Continual Learning
|
cs.CV cs.AI
|
The generation of a virtual digital avatar is a crucial research topic in the
field of computer vision. Many existing works utilize Neural Radiance Fields
(NeRF) to address this issue and have achieved impressive results. However,
previous works assume that the training images of a person are available and
fixed, whereas in real-world scenarios a subject's appearances and poses
constantly change and accumulate. Updating the human avatar while maintaining
the ability to render the person's old appearances is a practical
challenge. One trivial solution is to combine the existing virtual avatar
models based on NeRF with continual learning methods. However, there are some
critical issues in this approach: learning new appearances and poses can cause
the model to forget past information, which in turn leads to a degradation in
the rendering quality of past appearances, especially color bleeding issues,
and incorrect human body poses. In this work, we propose a maintainable avatar
(MaintaAvatar) based on neural radiance fields by continual learning, which
resolves the issues by utilizing a Global-Local Joint Storage Module and a Pose
Distillation Module. Overall, our model requires only limited data collection
to quickly fine-tune the model while avoiding catastrophic forgetting, thus
achieving a maintainable virtual avatar. The experimental results validate the
effectiveness of our MaintaAvatar model.
|
2502.02377
|
A Minimax Approach to Ad Hoc Teamwork
|
cs.AI
|
We propose a minimax-Bayes approach to Ad Hoc Teamwork (AHT) that optimizes
policies against an adversarial prior over partners, explicitly accounting for
uncertainty about partners at time of deployment. Unlike existing methods that
assume a specific distribution over partners, our approach improves worst-case
performance guarantees. Extensive experiments, including evaluations on
coordinated cooking tasks from the Melting Pot suite, show our method's
superior robustness compared to self-play, fictitious play, and best response
learning. Our work highlights the importance of selecting an appropriate
training distribution over teammates to achieve robustness in AHT.
|
2502.02379
|
No Metric to Rule Them All: Toward Principled Evaluations of
Graph-Learning Datasets
|
cs.LG cs.SI stat.ML
|
Benchmark datasets have proved pivotal to the success of graph learning, and
good benchmark datasets are crucial to guide the development of the field.
Recent research has highlighted problems with graph-learning datasets and
benchmarking practices -- revealing, for example, that methods which ignore the
graph structure can outperform graph-based approaches on popular benchmark
datasets. Such findings raise two questions: (1) What makes a good
graph-learning dataset, and (2) how can we evaluate dataset quality in graph
learning? Our work addresses these questions. As the classic evaluation setup
uses datasets to evaluate models, it does not apply to dataset evaluation.
Hence, we start from first principles. Observing that graph-learning datasets
uniquely combine two modes -- the graph structure and the node features -- we
introduce RINGS, a flexible and extensible mode-perturbation framework to
assess the quality of graph-learning datasets based on dataset ablations --
i.e., by quantifying differences between the original dataset and its perturbed
representations. Within this framework, we propose two measures -- performance
separability and mode complementarity -- as evaluation tools, each assessing,
from a distinct angle, the capacity of a graph dataset to benchmark the power
and efficacy of graph-learning methods. We demonstrate the utility of our
framework for graph-learning dataset evaluation in an extensive set of
experiments and derive actionable recommendations for improving the evaluation
of graph-learning methods. Our work opens new research directions in
data-centric graph learning, and it constitutes a first step toward the
systematic evaluation of evaluations.
|
2502.02380
|
The Cost Perspective of Liquid Democracy: Feasibility and Control
|
cs.GT cs.AI
|
We examine an approval-based model of Liquid Democracy with a budget
constraint on voting and delegating costs, aiming to centrally select casting
voters ensuring complete representation of the electorate. From a computational
complexity perspective, we focus on minimizing overall costs, maintaining short
delegation paths, and preventing excessive concentration of voting power.
Furthermore, we explore computational aspects of strategic control,
specifically, whether external agents can change election components to
influence the voting power of certain voters.
|
2502.02382
|
Circular Microalgae-Based Carbon Control for Net Zero
|
math.DS cs.LG math.OC
|
The alteration of the climate in various areas of the world is of increasing
concern, since climate stability is a necessary condition for the survival of
humans and every other living organism. The main cause of climate change is the
greenhouse effect driven by the accumulation of carbon dioxide in the
atmosphere. In this paper, we design a networked system underpinned by
compartmental dynamical thermodynamics to circulate the atmospheric carbon
dioxide. Specifically, in the carbon dioxide emitter compartment, we develop an
initial-condition-dependent finite-time stabilizing controller that guarantees
stability within a desired time leveraging the system property of affinity in
the control. Then, to compensate for the carbon emissions, we show that a
microalgae cultivation with a volume 625 times that of the carbon emitter is
required. To increase the carbon uptake of the microalgae, we
implement the nonaffine-in-the-control microalgae dynamical equations as an
environment of a state-of-the-art library for reinforcement learning (RL),
namely, Stable-Baselines3, and then, through the library, we test the
performance of eight RL algorithms for training a controller that maximizes the
microalgae absorption of carbon through the light intensity. All eight
controllers increased the carbon absorption of the cultivation during a
training of 200,000 time steps with a maximum episode length of 200 time steps
and with no termination conditions. This work is a first step towards
approaching net zero as a classical and learning-based network control problem.
The source code is publicly available.
|
2502.02384
|
STAIR: Improving Safety Alignment with Introspective Reasoning
|
cs.CL
|
Ensuring the safety and harmlessness of Large Language Models (LLMs) has
become equally critical as their performance in applications. However, existing
safety alignment methods typically suffer from safety-performance trade-offs
and the susceptibility to jailbreak attacks, primarily due to their reliance on
direct refusals for malicious queries. In this paper, we propose STAIR, a novel
framework that integrates SafeTy Alignment with Itrospective Reasoning. We
enable LLMs to identify safety risks through step-by-step analysis by
self-improving chain-of-thought (CoT) reasoning with safety awareness. STAIR
first equips the model with a structured reasoning capability and then advances
safety alignment via iterative preference optimization on step-level reasoning
data generated using our newly proposed Safety-Informed Monte Carlo Tree Search
(SI-MCTS). We further train a process reward model on this data to guide
test-time searches for improved responses. Extensive experiments show that
STAIR effectively mitigates harmful outputs while better preserving
helpfulness, compared to instinctive alignment strategies. With test-time
scaling, STAIR achieves a safety performance comparable to Claude-3.5 against
popular jailbreak attacks. Relevant resources in this work are available at
https://github.com/thu-ml/STAIR.
|
2502.02385
|
Achieving Hiding and Smart Anti-Jamming Communication: A Parallel DRL
Approach against Moving Reactive Jammer
|
cs.IT cs.LG cs.SY eess.SY math.IT
|
This paper addresses the challenge of anti-jamming in moving reactive jamming
scenarios. The moving reactive jammer initiates high-power tracking jamming
upon detecting any transmission activity, and when unable to detect a signal,
resorts to indiscriminate jamming. This presents dual imperatives: remaining
hidden to avoid detection by the jammer and simultaneously evading
indiscriminate jamming. Spread spectrum techniques effectively reduce
transmitting power to elude detection but fall short in countering
indiscriminate jamming. Conversely, changing communication frequencies can help
evade indiscriminate jamming but makes the transmission vulnerable to tracking
jamming without spread spectrum techniques to remain hidden. Current
methodologies struggle with the complexity of simultaneously optimizing these
two requirements due to the expansive joint action spaces and the dynamics of
moving reactive jammers. To address these challenges, we propose a parallelized
deep reinforcement learning (DRL) strategy. The approach includes a
parallelized network architecture designed to decompose the action space. A
parallel exploration-exploitation selection mechanism replaces the
$\varepsilon$-greedy mechanism, accelerating convergence. Simulations demonstrate a nearly
90\% increase in normalized throughput.
|
2502.02386
|
Hypergraph Link Prediction via Hyperedge Copying
|
cs.SI nlin.AO physics.data-an physics.soc-ph
|
We propose a generative model of temporally-evolving hypergraphs in which
hyperedges form via noisy copying of previous hyperedges. Our proposed model
reproduces several stylized facts from many empirical hypergraphs, is learnable
from data, and defines a likelihood over a complete hypergraph rather than
ego-based or other sub-hypergraphs. Analyzing our model, we derive descriptions
of node degree, edge size, and edge intersection size distributions in terms of
the model parameters. We also show several features of empirical hypergraphs
which are and are not successfully captured by our model. We provide a scalable
stochastic expectation maximization algorithm with which we can fit our model
to hypergraph data sets with millions of nodes and edges. Finally, we assess
our model on a hypergraph link prediction task, finding that an instantiation
of our model with just 11 parameters can achieve competitive predictive
performance with large neural networks.
|
2502.02389
|
Rate-reliability functions for deterministic identification
|
cs.IT math.IT quant-ph
|
We investigate deterministic identification over arbitrary memoryless
channels under the constraint that the error probabilities of first and second
kind are exponentially small in the block length $n$, controlled by reliability
exponents $E_1,E_2 \geq 0$. In contrast to the regime of slowly vanishing
errors, where the identifiable message length scales as $\Theta(n\log n)$, here
we find that for positive exponents linear scaling is restored, now with a rate
that is a function of the reliability exponents. We give upper and lower bounds
on the ensuing rate-reliability function in terms of (the logarithm of) the
packing and covering numbers of the channel output set, which for small error
exponents $E_1,E_2>0$ can be expanded in leading order as the product of the
Minkowski dimension of a certain parametrisation of the channel output set and
$\log\min\{E_1,E_2\}$. These allow us to recover the previously observed
slightly superlinear identification rates, and offer a different perspective
for understanding them in more traditional information theory terms. We further
illustrate our results with a discussion of the case of dimension zero, and
extend them to classical-quantum channels and quantum channels with tensor
product input restriction.
|
2502.02390
|
CoAT: Chain-of-Associated-Thoughts Framework for Enhancing Large
Language Models Reasoning
|
cs.CL cs.AI
|
Research on LLM technologies is rapidly emerging, with most of it employing
a 'fast thinking' approach to inference. Most LLMs generate the final result
based solely on a single query and the LLM's reasoning capabilities. However, with
the advent of OpenAI-o1, 'slow thinking' techniques have garnered increasing
attention because their process is closer to the human thought process. Inspired
by the human ability to constantly associate and replenish knowledge during
thinking, we developed the novel Chain-of-Associated-Thoughts (CoAT) framework,
which introduces an innovative synergy between the Monte Carlo Tree Search
(MCTS) algorithm and a dynamic mechanism for integrating new key information,
termed 'associative memory'. By combining the structured exploration
capabilities of MCTS with the adaptive learning capacity of associative memory,
CoAT significantly expands the LLM search space, enabling our framework to
explore diverse reasoning pathways and dynamically update its knowledge base in
real-time. This allows the framework to not only revisit and refine earlier
inferences but also adaptively incorporate evolving information, ensuring that
the final output is both accurate and comprehensive. To validate the
effectiveness of our framework, we conducted extensive experiments across a
range of generative and reasoning tasks. These experiments demonstrated that
our framework outperforms conventional inference processes on accuracy,
coherence, and diversity. These gains stem from the framework's ability to
iteratively expand its search space while retaining contextually relevant
information.
|
2502.02391
|
FewTopNER: Integrating Few-Shot Learning with Topic Modeling and Named
Entity Recognition in a Multilingual Framework
|
cs.CL cs.AI
|
We introduce FewTopNER, a novel framework that integrates few-shot named
entity recognition (NER) with topic-aware contextual modeling to address the
challenges of cross-lingual and low-resource scenarios. FewTopNER leverages a
shared multilingual encoder based on XLM-RoBERTa, augmented with
language-specific calibration mechanisms, to generate robust contextual
embeddings. The architecture comprises a prototype-based entity recognition
branch, employing BiLSTM and Conditional Random Fields for sequence labeling,
and a topic modeling branch that extracts document-level semantic features
through hybrid probabilistic and neural methods. A cross-task bridge
facilitates dynamic bidirectional attention and feature fusion between entity
and topic representations, thereby enhancing entity disambiguation by
incorporating global semantic context. Empirical evaluations on multilingual
benchmarks across English, French, Spanish, German, and Italian demonstrate
that FewTopNER significantly outperforms existing state-of-the-art few-shot NER
models. In particular, the framework achieves improvements of 2.5-4.0
percentage points in F1 score and exhibits enhanced topic coherence, as
measured by normalized pointwise mutual information. Ablation studies further
confirm the critical contributions of the shared encoder and cross-task
integration mechanisms to the overall performance. These results underscore the
efficacy of incorporating topic-aware context into few-shot NER and highlight
the potential of FewTopNER for robust cross-lingual applications in
low-resource settings.
|
2502.02393
|
Lower Bounds for Chain-of-Thought Reasoning in Hard-Attention
Transformers
|
cs.LG cs.CC
|
Chain-of-thought reasoning and scratchpads have emerged as critical tools for
enhancing the computational capabilities of transformers. While theoretical
results show that polynomial-length scratchpads can extend transformers'
expressivity from $TC^0$ to $PTIME$, their required length remains poorly
understood. Empirical evidence suggests that transformers need scratchpads
even for many problems in $TC^0$, such as Parity or Multiplication, challenging
optimistic bounds derived from circuit complexity. In this work, we initiate
the study of systematic lower bounds for the number of CoT steps across
different algorithmic problems in the hard-attention regime. We study a
variety of algorithmic problems and provide bounds that are tight up to
logarithmic factors. Overall, these results contribute to an emerging
understanding of the power and limitations of chain-of-thought reasoning.
|
2502.02394
|
Robust contraction-based model predictive control for nonlinear systems
|
eess.SY cs.SY
|
Model Predictive Control (MPC) is a widely known control method that has
proved to be particularly effective in multivariable and constrained control.
Closed-loop stability and recursive feasibility can be guaranteed by employing
accurate models in prediction and suitable terminal ingredients, i.e. the
terminal cost function and the terminal constraint. Issues may arise in the
case of model mismatches or perturbed systems, as the state predictions could
be inaccurate, and for nonlinear systems, for which computing the terminal
ingredients can be challenging. In this manuscript, we exploit the
properties of component-wise uniformly continuous and stabilizable systems to
introduce a robust contraction-based MPC for the regulation of nonlinear
perturbed systems that employs an easy-to-design terminal cost function, does
not make use of terminal constraints, and selects the shortest prediction
horizon that guarantees the stability of the closed-loop system.
|
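The receding-horizon principle underlying the MPC abstract above can be shown with a minimal sketch: at each step, minimize a finite-horizon cost, apply only the first input, and repeat. This is a generic illustration on an assumed scalar system with a grid-searched input, not the paper's contraction-based design; the dynamics, weights, and horizon are all made-up values.

```python
import itertools

# Scalar system x+ = a*x + b*u with unstable open-loop dynamics (assumed values).
a, b = 1.2, 1.0
horizon = 3
inputs = [-1.0, -0.5, 0.0, 0.5, 1.0]   # coarse input grid for brute-force search

def cost(x, seq):
    """Finite-horizon quadratic cost plus a simple terminal penalty."""
    total = 0.0
    for u in seq:
        total += x * x + 0.1 * u * u
        x = a * x + b * u
    return total + 10.0 * x * x          # easy-to-design terminal cost

x = 2.0
for _ in range(8):
    # Pick the input sequence minimizing the predicted cost from the current state.
    best = min(itertools.product(inputs, repeat=horizon),
               key=lambda seq: cost(x, seq))
    x = a * x + b * best[0]              # apply only the first input, then re-plan
print(f"final state: {x:.3f}")           # state regulated toward the origin
```

The terminal penalty here plays the role a terminal cost plays in stability arguments; the paper's contribution is a principled, contraction-based way to design such ingredients without terminal constraints.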
2502.02406
|
LV-XAttn: Distributed Cross-Attention for Long Visual Inputs in
Multimodal Large Language Models
|
cs.CV cs.AI cs.DC cs.LG
|
Cross-attention is commonly adopted in multimodal large language models
(MLLMs) for integrating visual information into the language backbone. However,
in applications with large visual inputs, such as video understanding,
processing a large number of visual tokens in cross-attention layers leads to
high memory demands and often necessitates distributed computation across
multiple GPUs. Existing distributed attention mechanisms face significant
communication overheads, making cross-attention layers a critical bottleneck
for efficient training and inference of MLLMs. To address this, we propose
LV-XAttn, a distributed, exact cross-attention mechanism with minimal
communication overhead. We observe that in applications involving large visual
inputs, the size of the query block is typically much smaller than that of the
key-value blocks. Thus, in LV-XAttn we keep the large key-value blocks locally
on each GPU and exchange smaller query blocks across GPUs. We also introduce an
efficient activation recomputation technique enabling support for longer visual
context. We theoretically analyze the communication benefits of LV-XAttn and
show that it can achieve speedups for a wide range of models. Our evaluations
with mPLUG-Owl3 and OpenFlamingo models find that LV-XAttn achieves up to
5.58$\times$ end-to-end speedup compared to existing approaches.
|
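The core observation in the LV-XAttn abstract above, that query blocks are much smaller than key-value blocks so exchanging queries is cheaper, can be made concrete with a back-of-the-envelope communication count. All tensor sizes below are illustrative assumptions, not figures from the paper.

```python
# Back-of-the-envelope communication comparison for distributed
# cross-attention. Sizes are illustrative assumptions.
bytes_per_elem = 2            # fp16
d = 4096                      # hidden size
n_query = 1024                # text tokens attending to visual tokens
n_kv = 262144                 # visual tokens (e.g. a long video)
world = 8                     # GPUs in the ring

# Baseline: each GPU circulates its key-value shard (K and V tensors)
# around the ring, so every shard visits every other GPU once.
kv_traffic = 2 * (n_kv // world) * d * bytes_per_elem * (world - 1)

# LV-XAttn-style alternative: keep KV local and circulate the much
# smaller query shard instead.
q_traffic = (n_query // world) * d * bytes_per_elem * (world - 1)

print(f"KV exchange per GPU: {kv_traffic / 2**20:.0f} MiB")
print(f"Q exchange per GPU:  {q_traffic / 2**20:.0f} MiB")
print(f"reduction factor:    {kv_traffic / q_traffic:.0f}x")
```

With these assumed sizes the per-GPU traffic drops by a factor of 2 * n_kv / n_query, which is why keeping the large key-value blocks local pays off as visual context grows.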