| id | title | categories | abstract |
|---|---|---|---|
2501.16757
|
ITVTON: Virtual Try-On Diffusion Transformer Model Based on Integrated
Image and Text
|
cs.CV
|
Recent advancements in virtual fitting for characters and clothing have
leveraged diffusion models to improve the realism of garment fitting. However,
challenges remain in handling complex scenes and poses, which can result in
unnatural garment fitting and poorly rendered intricate patterns. In this work,
we introduce ITVTON, a novel method that enhances clothing-character
interactions by combining clothing and character images along spatial channels
as inputs, thereby improving fitting accuracy for the inpainting model.
Additionally, we incorporate integrated textual descriptions from multiple
images to boost the realism of the generated visual effects. To optimize
computational efficiency, we limit training to the attention parameters within
a single diffusion transformer (Single-DiT) block. To more rigorously address
the complexities of real-world scenarios, we curated training samples from the
IGPair dataset, thereby enhancing ITVTON's performance across diverse
environments. Extensive experiments demonstrate that ITVTON outperforms
baseline methods both qualitatively and quantitatively, setting a new standard
for virtual fitting tasks.
|
2501.16758
|
Meta-Federated Learning: A Novel Approach for Real-Time Traffic Flow
Management
|
cs.LG cs.DC eess.SP
|
Efficient management of traffic flow in urban environments presents a
significant challenge, exacerbated by dynamic changes and the sheer volume of
data generated by modern transportation networks. Traditional centralized
traffic management systems often struggle with scalability and privacy
concerns, hindering their effectiveness. This paper introduces a novel approach
by combining Federated Learning (FL) and Meta-Learning (ML) to create a
decentralized, scalable, and adaptive traffic management system. Our approach,
termed Meta-Federated Learning, leverages the distributed nature of FL to
process data locally at the edge, thereby enhancing privacy and reducing
latency. Simultaneously, ML enables the system to quickly adapt to new traffic
conditions without the need for extensive retraining. We implement our model
across a simulated network of smart traffic devices, demonstrating that
Meta-Federated Learning significantly outperforms traditional models in terms
of prediction accuracy and response time. Furthermore, our approach shows
remarkable adaptability to sudden changes in traffic patterns, suggesting a
scalable solution for real-time traffic management in smart cities. This study
not only paves the way for more resilient urban traffic systems but also
exemplifies the potential of integrated FL and ML in other real-world
applications.
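The abstract does not give the algorithm, but the combination it describes can be caricatured in a few lines: FedAvg-style local training with server-side averaging, followed by a quick per-node adaptation in the spirit of meta-learning. Everything below (the node model y = w·x, learning rates, step counts) is an illustrative assumption, not the paper's method:

```python
import random

random.seed(0)

def make_node(slope):
    """Synthetic traffic data for one edge node: flow = slope * time + noise."""
    return [(t, slope * t + random.gauss(0, 0.1)) for t in range(20)]

def local_sgd(w, data, steps=50, lr=0.001):
    """A few steps of local SGD on squared error for the model y = w * x."""
    for _ in range(steps):
        x, y = random.choice(data)
        w -= lr * 2 * (w * x - y) * x
    return w

# --- FedAvg-style rounds: each node trains locally, the server averages ---
nodes = [make_node(s) for s in (0.9, 1.0, 1.1)]
global_w = 0.0
for _ in range(5):
    local_ws = [local_sgd(global_w, d) for d in nodes]
    global_w = sum(local_ws) / len(local_ws)  # raw data never leaves the edge

# --- Meta-learning flavour: adapt quickly to an unseen traffic pattern ---
new_node = make_node(1.5)
adapted_w = local_sgd(global_w, new_node, steps=20)
```

The privacy claim comes from the first loop (only weights travel to the server); the adaptivity claim from the second (a handful of local steps suffice to track a new regime).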
|
2501.16759
|
Are Joins over LSM-trees Ready: Take RocksDB as an Example
|
cs.DB
|
LSM-tree-based data stores are widely adopted in industries for their
excellent performance. As data scales increase, disk-based join operations
become indispensable yet costly for the database, making the selection of
suitable join methods crucial for system optimization. Current LSM-based stores
generally adhere to conventional relational database practices and support only
a limited number of join methods. However, the LSM-tree delivers distinct read
and write efficiency compared to relational databases, which could
accordingly impact the performance of various join methods. Therefore, it is
necessary to reconsider the selection of join methods in this context to fully
explore the potential of various join algorithms and index designs. In this
work, we present a systematic study and an exhaustive benchmark for joins over
LSM-trees. We define a configuration space for join methods, encompassing
various join algorithms, secondary index types, and consistency strategies. We
also summarize a theoretical analysis to evaluate the overhead of each join
method for an in-depth understanding. Furthermore, we implement all join
methods in the configuration space on a unified platform and compare their
performance through extensive experiments. Our theoretical and experimental
results yield several insights and takeaways tailored to joins in LSM-based
stores that aid developers in choosing proper join methods based on their
working conditions.
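As a toy illustration of the configuration space the study explores, the sketch below contrasts two classic join methods over in-memory key-value lists standing in for LSM-tree scans; the data and helper names are invented, and no RocksDB API is used:

```python
# R and S are (key, payload) tables, as might be scanned from an LSM-tree.
R = [(1, "a"), (2, "b"), (3, "c"), (4, "d")]
S = [(2, "x"), (3, "y"), (3, "z"), (5, "w")]

def hash_join(r, s):
    """Build a hash table on one side, probe it with the other."""
    ht = {}
    for k, v in r:
        ht.setdefault(k, []).append(v)
    return [(k, rv, sv) for k, sv in s for rv in ht.get(k, [])]

def index_nested_loop_join(r, s_index):
    """Probe a (possibly secondary) index on S once per R row; cheap when
    R is small, but pays one index lookup per probe."""
    return [(k, rv, sv) for k, rv in r for sv in s_index.get(k, [])]

# A dict standing in for a secondary index on S.
s_index = {}
for k, v in S:
    s_index.setdefault(k, []).append(v)

out1 = sorted(hash_join(R, S))
out2 = sorted(index_nested_loop_join(R, s_index))
assert out1 == out2  # both methods produce the same join result
```

Which method wins in practice depends on exactly the factors the study benchmarks: index type, consistency strategy, and the LSM-tree's asymmetric read/write costs.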
|
2501.16760
|
AdaSemSeg: An Adaptive Few-shot Semantic Segmentation of Seismic Facies
|
cs.CV cs.LG
|
Automated interpretation of seismic images using deep learning methods is
challenging because of the limited availability of training data. Few-shot
learning is a suitable learning paradigm in such scenarios due to its ability
to adapt to a new task with limited supervision (small training budget).
Existing few-shot semantic segmentation (FSSS) methods fix the number of target
classes. Therefore, they do not support joint training on multiple datasets
varying in the number of classes. In the context of the interpretation of
seismic facies, fixing the number of target classes inhibits the generalization
capability of a model trained on one facies dataset to another, which is likely
to have a different number of facies. To address this shortcoming, we propose a
few-shot semantic segmentation method for interpreting seismic facies that can
adapt to the varying number of facies across datasets, dubbed AdaSemSeg.
In general, the backbone network of FSSS methods is initialized with the
statistics learned from the ImageNet dataset for better performance. The lack
of such a huge annotated dataset for seismic images motivates using a
self-supervised algorithm on seismic datasets to initialize the backbone
network. We have trained the AdaSemSeg on three public seismic facies datasets
with different numbers of facies and evaluated the proposed method on multiple
metrics. The performance of the AdaSemSeg on unseen datasets (not used in
training) is better than the prototype-based few-shot method and baselines.
|
2501.16762
|
Rate-Distortion under Neural Tracking of Speech: A Directed Redundancy
Approach
|
cs.IT math.IT
|
The data acquired at different scalp EEG electrodes when human subjects are
exposed to speech stimuli are highly redundant. The redundancy is partly due to
volume conduction effects and partly due to localized regions of the brain
synchronizing their activity in response to the stimuli. In a competing talker
scenario, we use a recent measure of directed redundancy to assess the amount
of redundant information that is causally conveyed from the attended stimuli to
the left temporal region of the brain. We observe that for the attended
stimuli, the transfer entropy as well as the directed redundancy is
proportional to the correlation between the speech stimuli and the
reconstructed signal from the EEG signals.
This demonstrates that both the rate as well as the rate-redundancy are
inversely proportional to the distortion in neural speech tracking. Thus, a
greater rate indicates a greater redundancy between the electrode signals, and
a greater correlation between the reconstructed signal and the attended
stimuli. A similar relationship is not observed for the distracting stimuli.
|
2501.16764
|
DiffSplat: Repurposing Image Diffusion Models for Scalable Gaussian
Splat Generation
|
cs.CV
|
Recent advancements in 3D content generation from text or a single image
struggle with limited high-quality 3D datasets and inconsistency from 2D
multi-view generation. We introduce DiffSplat, a novel 3D generative framework
that natively generates 3D Gaussian splats by taming large-scale text-to-image
diffusion models. It differs from previous 3D generative models by effectively
utilizing web-scale 2D priors while maintaining 3D consistency in a unified
model. To bootstrap the training, a lightweight reconstruction model is
proposed to instantly produce multi-view Gaussian splat grids for scalable
dataset curation. In conjunction with the regular diffusion loss on these
grids, a 3D rendering loss is introduced to facilitate 3D coherence across
arbitrary views. The compatibility with image diffusion models enables seamless
adaptations of numerous image-generation techniques to the 3D realm.
Extensive experiments reveal the superiority of DiffSplat in text- and
image-conditioned generation tasks and downstream applications. Thorough
ablation studies validate the efficacy of each critical design choice and
provide insights into the underlying mechanism.
|
2501.16767
|
Target-driven Self-Distillation for Partial Observed Trajectories
Forecasting
|
cs.CV
|
Accurate prediction of future trajectories of traffic agents is essential for
ensuring safe autonomous driving. However, partially observed trajectories can
significantly degrade the performance of even state-of-the-art models. Previous
approaches often rely on knowledge distillation to transfer features from fully
observed trajectories to partially observed ones. This involves first
training a fully observed model and then using a distillation process to create
the final model. While effective, they require multi-stage training, making the
training process very expensive. Moreover, knowledge distillation can lead to a
performance degradation of the model. In this paper, we introduce a
Target-driven Self-Distillation method (TSD) for motion forecasting. Our method
leverages predicted accurate targets to guide the model in making predictions
under partial observation conditions. By employing self-distillation, the model
learns from the feature distributions of both fully observed and partially
observed trajectories during a single end-to-end training process. This
enhances the model's ability to predict motion accurately in both fully
observed and partially observed scenarios. We evaluate our method on multiple
datasets and state-of-the-art motion forecasting models. Extensive experimental
results demonstrate that our approach achieves significant performance
improvements in both settings. To facilitate further research, we will release
our code and model checkpoints.
|
2501.16768
|
Towards the Generalization of Multi-view Learning: An
Information-theoretical Analysis
|
stat.ML cs.LG
|
Multi-view learning has drawn widespread attention for its efficacy in
leveraging cross-view consensus and complementary information to achieve a
comprehensive representation of data. While multi-view learning has undergone
vigorous development and achieved remarkable success, the theoretical
understanding of its generalization behavior remains elusive. This paper aims
to bridge this gap by developing information-theoretic generalization bounds
for multi-view learning, with a particular focus on multi-view reconstruction
and classification tasks. Our bounds underscore the importance of capturing
both consensus and complementary information from multiple different views to
achieve maximally disentangled representations. These results also indicate
that applying the multi-view information bottleneck regularizer is beneficial
for satisfactory generalization performance. Additionally, we derive novel
data-dependent bounds under both leave-one-out and supersample settings,
yielding computationally tractable and tighter bounds. In the interpolating
regime, we further establish the fast-rate bound for multi-view learning,
exhibiting a faster convergence rate compared to conventional square-root
bounds. Numerical results indicate a strong correlation between the true
generalization gap and the derived bounds across various learning scenarios.
|
2501.16769
|
Beyond-Labels: Advancing Open-Vocabulary Segmentation With
Vision-Language Models
|
cs.CV
|
Self-supervised learning can resolve numerous image or language processing
problems when models are effectively trained. This study investigated simple yet efficient
methods for adapting previously learned foundation models for open-vocabulary
semantic segmentation tasks. Our research proposed "Beyond-Labels," a
lightweight transformer-based fusion module that uses a handful of image
segmentation data to fuse frozen image representations with language concepts.
This strategy allows the model to effectively draw on the broad knowledge
of pretrained models without requiring extensive retraining, making the model
data-efficient and scalable. Furthermore, we efficiently captured positional
information in images using Fourier embeddings, thus improving the
generalization across various image sizes, addressing one of the key
limitations of previous methods. Extensive ablation tests were performed to
investigate the important components of our proposed method; when tested
against the common benchmark PASCAL-5i, it demonstrated superior performance
despite being trained on frozen image and language characteristics.
|
2501.16778
|
FlexMotion: Lightweight, Physics-Aware, and Controllable Human Motion
Generation
|
cs.CV cs.AI cs.GR cs.LG
|
Lightweight, controllable, and physically plausible human motion synthesis is
crucial for animation, virtual reality, robotics, and human-computer
interaction applications. Existing methods often compromise between
computational efficiency, physical realism, or spatial controllability. We
propose FlexMotion, a novel framework that leverages a computationally
lightweight diffusion model operating in the latent space, eliminating the need
for physics simulators and enabling fast and efficient training. FlexMotion
employs a multimodal pre-trained Transformer encoder-decoder, integrating joint
locations, contact forces, joint actuations and muscle activations to ensure
the physical plausibility of the generated motions. FlexMotion also introduces
a plug-and-play module, which adds spatial controllability over a range of
motion parameters (e.g., joint locations, joint actuations, contact forces, and
muscle activations). Our framework achieves realistic motion generation with
improved efficiency and control, setting a new benchmark for human motion
synthesis. We evaluate FlexMotion on extended datasets and demonstrate its
superior performance in terms of realism, physical plausibility, and
controllability.
|
2501.16783
|
A Stochastic Dynamical Theory of LLM Self-Adversariality: Modeling
Severity Drift as a Critical Process
|
cs.CL cs.AI nlin.AO
|
This paper introduces a continuous-time stochastic dynamical framework for
understanding how large language models (LLMs) may self-amplify latent biases
or toxicity through their own chain-of-thought reasoning. The model posits an
instantaneous "severity" variable $x(t) \in [0,1]$ evolving under a stochastic
differential equation (SDE) with a drift term $\mu(x)$ and diffusion
$\sigma(x)$. Crucially, such a process can be consistently analyzed via the
Fokker--Planck approach if each incremental step behaves nearly Markovian in
severity space. The analysis investigates critical phenomena, showing that
certain parameter regimes create phase transitions from subcritical
(self-correcting) to supercritical (runaway severity). The paper derives
stationary distributions, first-passage times to harmful thresholds, and
scaling laws near critical points. Finally, it highlights implications for
agents and extended LLM reasoning models: in principle, these equations might
serve as a basis for formal verification of whether a model remains stable or
propagates bias over repeated inferences.
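For reference, the severity process described above has the standard Itô form, and, under the abstract's near-Markov assumption with reflecting behaviour at the boundaries of $[0,1]$, the textbook zero-flux stationary solution of the associated Fokker--Planck equation:

```latex
dx(t) = \mu(x)\,dt + \sigma(x)\,dW_t, \qquad x(t) \in [0,1],
```

```latex
p_{\mathrm{st}}(x) \;\propto\; \frac{1}{\sigma^{2}(x)}
\exp\!\left( \int^{x} \frac{2\,\mu(y)}{\sigma^{2}(y)}\,dy \right).
```

The subcritical-to-supercritical transition the paper analyzes corresponds to parameter regimes where the mass of $p_{\mathrm{st}}$ shifts from the self-correcting low-severity region toward $x = 1$.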
|
2501.16786
|
Exploring the Role of Explicit Temporal Modeling in Multimodal Large
Language Models for Video Understanding
|
cs.CV cs.CL
|
Applying Multimodal Large Language Models (MLLMs) to video understanding
presents significant challenges due to the need to model temporal relations
across frames. Existing approaches adopt either implicit temporal modeling,
relying solely on the LLM decoder, or explicit temporal modeling, employing
auxiliary temporal encoders. To investigate this debate between the two
paradigms, we propose the Stackable Temporal Encoder (STE). STE enables
flexible explicit temporal modeling with adjustable temporal receptive fields
and token compression ratios. Using STE, we systematically compare implicit and
explicit temporal modeling across dimensions such as overall performance, token
compression effectiveness, and temporal-specific understanding. We also explore
STE's design considerations and broader impacts as a plug-in module and in
image modalities. Our findings emphasize the critical role of explicit temporal
modeling, providing actionable insights to advance video MLLMs.
|
2501.16787
|
Dynamic Hypergraph Representation for Bone Metastasis Cancer Analysis
|
cs.CV
|
Bone metastasis analysis is a significant challenge in pathology and plays a
critical role in determining patient quality of life and treatment strategies.
The microenvironment and specific tissue structures are essential for
pathologists to predict the primary bone cancer origins and primary bone cancer
subtyping. By digitizing bone tissue sections into whole slide images (WSIs)
and leveraging deep learning to model slide embeddings, this analysis can be
enhanced. However, tumor metastasis involves complex multivariate interactions
with diverse bone tissue structures, which traditional WSI analysis methods
such as multiple instance learning (MIL) fail to capture. Moreover, graph
neural networks (GNNs), limited to modeling pairwise relationships, are hard to
represent high-order biological associations. To address these challenges, we
propose a dynamic hypergraph neural network (DyHG) that overcomes the edge
construction limitations of traditional graph representations by connecting
multiple nodes via hyperedges. A low-rank strategy is used to reduce the
complexity of parameters in learning hypergraph structures, while a
Gumbel-Softmax-based sampling strategy optimizes the patch distribution across
hyperedges. An MIL aggregator is then used to derive a graph-level embedding
for comprehensive WSI analysis. To evaluate the effectiveness of DyHG, we
construct two large-scale datasets for primary bone cancer origins and
subtyping classification based on real-world bone metastasis scenarios.
Extensive experiments demonstrate that DyHG significantly outperforms
state-of-the-art (SOTA) baselines, showcasing its ability to model complex
biological interactions and improve the accuracy of bone metastasis analysis.
|
2501.16790
|
Exponential Family Attention
|
stat.ML cs.LG
|
The self-attention mechanism is the backbone of the transformer neural
network underlying most large language models. It can capture complex word
patterns and long-range dependencies in natural language. This paper introduces
exponential family attention (EFA), a probabilistic generative model that
extends self-attention to handle high-dimensional sequence, spatial, or
spatial-temporal data of mixed data types, including both discrete and
continuous observations. The key idea of EFA is to model each observation
conditional on all other existing observations, called the context, whose
relevance is learned in a data-driven way via an attention-based latent factor
model. In particular, unlike static latent embeddings, EFA uses the
self-attention mechanism to capture dynamic interactions in the context, where
the relevance of each context observation depends on other observations. We
establish an identifiability result and provide a generalization guarantee on
excess loss for EFA. Across real-world and synthetic data sets -- including
U.S. city temperatures, Instacart shopping baskets, and MovieLens ratings -- we
find that EFA consistently outperforms existing models in capturing complex
latent structures and reconstructing held-out data.
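As a minimal sketch of the core idea, predicting each observation's natural parameter by attention over its context, the toy Gaussian instance below uses scalar embeddings; all names and values are illustrative, not the paper's model:

```python
import math

def softmax(scores):
    """Numerically stable softmax over a list of scores."""
    m = max(scores)
    e = [math.exp(s - m) for s in scores]
    z = sum(e)
    return [x / z for x in e]

def efa_gaussian_mean(query, keys, values):
    """Attention-weighted natural parameter: the predicted mean of one
    observation is a softmax-weighted sum of its context's values."""
    scores = [query * k for k in keys]  # scalar dot-product scores
    weights = softmax(scores)
    return sum(w * v for w, v in zip(weights, values))

# Toy context: embeddings (keys) and observed values for 3 context points.
keys = [0.0, 1.0, 2.0]
values = [10.0, 20.0, 30.0]

# The prediction leans toward the context point whose key matches the query.
mu = efa_gaussian_mean(query=2.0, keys=keys, values=values)
```

Swapping the Gaussian mean for another exponential-family link (e.g. a Poisson rate via `exp`) is what lets the same mechanism handle mixed discrete and continuous observations.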
|
2501.16794
|
Algorithm for Automatic Legislative Text Consolidation
|
cs.CL
|
This study introduces a method for automating the consolidation process in a
legal context, a time-consuming task traditionally performed by legal
professionals. We present a generative approach that processes legislative
texts to automatically apply amendments. Our method employs a lightweight
quantized generative model, fine-tuned with LoRA, to generate accurate and
reliable amended texts. To the authors' knowledge, this is the first time
generative models are used for legislative text consolidation. Our dataset is
publicly available on HuggingFace. Experimental results demonstrate a
significant improvement in efficiency, offering faster updates to legal
documents. A fully automated pipeline of legislative text consolidation can run
in a few hours, with a success rate of more than 63% on a difficult bill.
|
2501.16800
|
DIRIGENt: End-To-End Robotic Imitation of Human Demonstrations Based on
a Diffusion Model
|
cs.RO cs.AI
|
There has been substantial progress in humanoid robots, with new skills
continuously being taught, ranging from navigation to manipulation. While these
abilities may seem impressive, the teaching methods often remain inefficient.
To enhance the process of teaching robots, we propose leveraging a mechanism
effectively used by humans: teaching by demonstrating. In this paper, we
introduce DIRIGENt (DIrect Robotic Imitation GENeration model), a novel
end-to-end diffusion approach that directly generates joint values from
observing human demonstrations, enabling a robot to imitate these actions
without any existing mapping between it and humans. We create a dataset in
which humans imitate a robot and then use this collected data to train a
diffusion model that enables a robot to imitate humans. The following three
aspects are the core of our contribution. First is our novel dataset with
natural pairs between human and robot poses, allowing our approach to imitate
humans accurately despite the gap between their anatomies. Second, the
diffusion input to our model alleviates the challenge of redundant joint
configurations, limiting the search space. And finally, our end-to-end
architecture from perception to action leads to an improved learning
capability. Through our experimental analysis, we show that combining these
three aspects allows DIRIGENt to outperform existing state-of-the-art
approaches in the field of generating joint values from RGB images.
|
2501.16803
|
RG-Attn: Radian Glue Attention for Multi-modality Multi-agent
Cooperative Perception
|
cs.RO cs.CV cs.NI eess.IV
|
Cooperative perception offers an optimal solution to overcome the perception
limitations of single-agent systems by leveraging Vehicle-to-Everything (V2X)
communication for data sharing and fusion across multiple agents. However, most
existing approaches focus on single-modality data exchange, limiting the
potential of both homogeneous and heterogeneous fusion across agents. This
overlooks the opportunity to utilize multi-modality data per agent, restricting
the system's performance. In the automotive industry, manufacturers adopt
diverse sensor configurations, resulting in heterogeneous combinations of
sensor modalities across agents. To harness the potential of every possible
data source for optimal performance, we design a robust LiDAR and camera
cross-modality fusion module, Radian-Glue-Attention (RG-Attn), applicable to
both intra-agent cross-modality fusion and inter-agent cross-modality fusion
scenarios, owing to the convenient coordinate conversion by transformation
matrix and the unified sampling/inversion mechanism. We also propose two
different architectures, named Paint-To-Puzzle (PTP) and
Co-Sketching-Co-Coloring (CoS-CoCo), for conducting cooperative perception. PTP
aims for maximum precision performance and achieves smaller data packet size by
limiting cross-agent fusion to a single instance, but requiring all
participants to be equipped with LiDAR. In contrast, CoS-CoCo supports agents
with any configuration: LiDAR-only, camera-only, or both LiDAR and camera,
offering greater generalization ability. Our approach achieves state-of-the-art
(SOTA) performance on both real and simulated cooperative perception datasets.
The code will be released on GitHub in early 2025.
|
2501.16811
|
Not Every Patch is Needed: Towards a More Efficient and Effective
Backbone for Video-based Person Re-identification
|
cs.CV
|
This paper proposes a new effective and efficient plug-and-play backbone for
video-based person re-identification (ReID). Conventional video-based ReID
methods typically use CNN or transformer backbones to extract deep features for
every position in every sampled video frame. Here, we argue that this
exhaustive feature extraction could be unnecessary, since we find that
different frames in a ReID video often exhibit small differences and contain
many similar regions due to the relatively slight movements of human beings.
Inspired by this, a more selective, efficient paradigm is explored in this
paper. Specifically, we introduce a patch selection mechanism to reduce
computational cost by choosing only the crucial and non-repetitive patches for
feature extraction. Additionally, we present a novel network structure that
generates and utilizes pseudo frame global context to address the issue of
incomplete views resulting from sparse inputs. By incorporating these new
designs, our backbone can achieve both high performance and low computational
cost. Extensive experiments on multiple datasets show that our approach reduces
the computational cost by 74% compared to ViT-B and 28% compared to ResNet50,
while the accuracy is on par with ViT-B and outperforms ResNet50 significantly.
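A minimal sketch of the patch-selection idea, assuming patches are compared to the co-located patch in the previous frame by Euclidean distance (the paper's actual selection criterion may differ):

```python
def select_patches(frames, threshold=0.1):
    """Keep every patch of the first frame; for later frames, keep only
    patches that differ enough from the same patch in the previous frame."""
    selected = [(0, i) for i in range(len(frames[0]))]
    for t in range(1, len(frames)):
        for i, (prev, cur) in enumerate(zip(frames[t - 1], frames[t])):
            diff = sum((a - b) ** 2 for a, b in zip(prev, cur)) ** 0.5
            if diff > threshold:
                selected.append((t, i))
    return selected

# 3 frames x 4 patches (each patch a tiny feature vector);
# only patch 2 moves between frames, so only it is re-extracted.
frames = [
    [[0.0, 0.0], [1.0, 1.0], [2.0, 2.0], [3.0, 3.0]],
    [[0.0, 0.0], [1.0, 1.0], [2.5, 2.5], [3.0, 3.0]],
    [[0.0, 0.0], [1.0, 1.0], [3.0, 3.0], [3.0, 3.0]],
]
kept = select_patches(frames)  # 6 of 12 patches survive
```

Only the surviving patches would be fed to the backbone, which is where the claimed compute savings come from; the pseudo-frame global context described above then compensates for the sparse input.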
|
2501.16813
|
Multimodal Magic Elevating Depression Detection with a Fusion of Text
and Audio Intelligence
|
cs.CL cs.SD eess.AS
|
This study proposes an innovative multimodal fusion model based on a
teacher-student architecture to enhance the accuracy of depression
classification. Our designed model addresses the limitations of traditional
methods in feature fusion and modality weight allocation by introducing
multi-head attention mechanisms and weighted multimodal transfer learning.
Leveraging the DAIC-WOZ dataset, the student fusion model, guided by textual
and auditory teacher models, achieves significant improvements in
classification accuracy. Ablation experiments demonstrate that the proposed
model attains an F1 score of 99.1% on the test set, significantly
outperforming unimodal and conventional approaches. Our method effectively
captures the complementarity between textual and audio features while
dynamically adjusting the contributions of the teacher models to enhance
generalization capabilities. The experimental results highlight the robustness
and adaptability of the proposed framework in handling complex multimodal data.
This research provides a novel technical framework for multimodal large model
learning in depression analysis, offering new insights into addressing the
limitations of existing methods in modality fusion and feature extraction.
|
2501.16817
|
Enhancing Non-Intrusive Load Monitoring with Features Extracted by
Independent Component Analysis
|
eess.SY cs.LG cs.SY
|
In this paper, a novel neural network architecture is proposed to address the
challenges in energy disaggregation algorithms. These challenges include the
limited availability of data and the complexity of disaggregating a large
number of appliances operating simultaneously. The proposed model utilizes
independent component analysis as the backbone of the neural network and is
evaluated using the F1-score for varying numbers of appliances working
concurrently. Our results demonstrate that the model is less prone to
overfitting, exhibits low complexity, and effectively decomposes signals with
many individual components. Furthermore, we show that the proposed model
outperforms existing algorithms when applied to real-world data.
|
2501.16823
|
Phase Noise Resilient Codebook Design for Sparse Code Multiple Access
|
cs.IT math.IT
|
Sparse code multiple access (SCMA) is a promising technique for future
machine type communication systems due to its superior spectral efficiency and
capability for supporting massive connectivity. This paper proposes a novel
class of sparse codebooks to improve the error rate performance of SCMA in the
presence of phase noise (PN). Specifically, we first analyze the error rate
performance of PN-impaired SCMA by looking into the pair-wise error probability.
Then, a novel codebook design metric, called minimum PN metric (MPNM), is
proposed. In addition, to design PN resilient codebooks, we propose a novel
pulse-amplitude modulation (PAM)-based low projection mother constellation
(LP-MC), called LP-PAM. The codebooks for different users are obtained by
rotating and scaling the MC, where the phase rotation angles and scaling
factors for different users are optimized by maximizing the proposed MPNM.
Numerical results show that the proposed PN-resilient codebooks have larger
MPNM values and achieve better error rate performance than state-of-the-art codebooks.
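The rotate-and-scale construction can be sketched directly with complex arithmetic; the mother constellation, angles, and scaling factors below are illustrative placeholders, not the MPNM-optimized values:

```python
import cmath

# Hypothetical 4-point PAM mother constellation, normalized to unit
# average energy.
MC = [-3.0, -1.0, 1.0, 3.0]
norm = (sum(abs(x) ** 2 for x in MC) / len(MC)) ** 0.5
MC = [x / norm for x in MC]

def user_codebook(mc, theta, scale):
    """Per-user codebook: rotate the mother constellation by theta
    and scale it, as in the rotation-and-scaling construction."""
    return [scale * cmath.exp(1j * theta) * x for x in mc]

# Illustrative (not optimized) angle/scale pairs for 3 users.
books = [user_codebook(MC, th, s)
         for th, s in [(0.0, 1.0), (cmath.pi / 4, 1.0), (cmath.pi / 2, 0.8)]]

# Rotation preserves pairwise distances within a codebook; scaling
# trades per-user energy against separation between users.
```

In the paper, the free parameters theta and scale per user would be chosen to maximize the proposed MPNM rather than set by hand.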
|
2501.16825
|
Can Transformers Learn Full Bayesian Inference in Context?
|
cs.LG
|
Transformers have emerged as the dominant architecture in the field of deep
learning, with a broad range of applications and remarkable in-context learning
(ICL) capabilities. While not yet fully understood, ICL has already proved to
be an intriguing phenomenon, allowing transformers to learn in context --
without requiring further training. In this paper, we further advance the
understanding of ICL by demonstrating that transformers can perform full
Bayesian inference for commonly used statistical models in context. More
specifically, we introduce a general framework that builds on ideas from prior
fitted networks and continuous normalizing flows which enables us to infer
complex posterior distributions for methods such as generalized linear models
and latent factor models. Extensive experiments on real-world datasets
demonstrate that our ICL approach yields posterior samples that are similar in
quality to state-of-the-art MCMC or variational inference methods not operating
in context.
|
2501.16828
|
Late Breaking Results: Energy-Efficient Printed Machine Learning
Classifiers with Sequential SVMs
|
cs.LG cs.SY eess.IV eess.SY
|
Printed Electronics (PE) provide a mechanically flexible and cost-effective
solution for machine learning (ML) circuits, compared to silicon-based
technologies. However, due to large feature sizes, printed classifiers are
limited by high power, area, and energy overheads, which restricts the
realization of battery-powered systems. In this work, we design sequential
printed bespoke Support Vector Machine (SVM) circuits that adhere to the power
constraints of existing printed batteries while minimizing energy consumption,
thereby boosting battery life. Our results show 6.5x energy savings while
maintaining higher accuracy compared to the state of the art.
|
2501.16830
|
Statistical Analysis of Risk Assessment Factors and Metrics to Evaluate
Radicalisation in Twitter
|
cs.SI cs.CY cs.LG
|
Nowadays, Social Networks have become essential communication tools,
producing a large amount of information about their users and their
interactions, which can be analysed with Data Mining methods. In recent years,
Social Networks have also been used to radicalise people. In this paper, we
study the performance of a set of indicators and their respective metrics,
devoted to assessing the risk of radicalisation of a specific individual, on
three different datasets. Keyword-based metrics, although dependent on the
written language, perform well when measuring frustration, perception of
discrimination, and declarations of negative and positive ideas about Western
society and Jihadism, respectively. However, metrics based on frequent habits,
such as writing ellipses, are not sufficient to characterise a user at risk of
radicalisation. The paper presents a detailed description of both the set of
indicators used to assess radicalisation in Social Networks and the set of
datasets used to evaluate them. Finally, an experimental study over these
datasets is carried out to evaluate the performance of the metrics considered.
|
2501.16831
|
Data-Driven vs Traditional Approaches to Power Transformer's Top-Oil
Temperature Estimation
|
cs.LG
|
Power transformers are subjected to electrical currents and temperature
fluctuations that, if not properly controlled, can lead to major deterioration
of their insulation system. Therefore, monitoring the temperature of a power
transformer is fundamental to ensure a long-term operational life. Models
presented in the IEC 60076-7 and IEEE standards, for example, monitor the
temperature by calculating the top-oil and the hot-spot temperatures. However,
these models are not very accurate and rely on the power transformers'
properties. This paper focuses on finding an alternative method to predict the
top-oil temperatures given previous measurements. Given the large quantities of
data available, machine learning methods for time series forecasting are
analyzed and compared to the real measurements and the corresponding prediction
of the IEC standard. The methods tested are Artificial Neural Networks (ANNs),
Time-series Dense Encoder (TiDE), and Temporal Convolutional Networks (TCN)
using different combinations of historical measurements. Each of these methods
outperformed the IEC 60076-7 model, and they are extended to estimate the
temperature rise over ambient. To enhance prediction reliability, we explore
the application of quantile regression to construct prediction intervals for
the expected top-oil temperature ranges. The best-performing model successfully
estimates conditional quantiles that provide sufficient coverage.
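The prediction-interval idea above can be sketched with standard quantile
regression. The following is a minimal illustration on synthetic data (not the
paper's models, dataset, or features), using scikit-learn's gradient boosting
with pinball loss to form a 90% interval for a made-up load-to-temperature
relation:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

# Synthetic stand-in: "load" drives a noisy "top-oil temperature".
load = rng.uniform(0.2, 1.0, size=2000)
temp = 40 + 30 * load + rng.normal(0, 2, size=2000)
X = load.reshape(-1, 1)

# Fit one model per quantile (pinball loss) to form a 90% interval.
models = {
    q: GradientBoostingRegressor(loss="quantile", alpha=q,
                                 random_state=0).fit(X, temp)
    for q in (0.05, 0.95)
}
lo = models[0.05].predict(X)
hi = models[0.95].predict(X)
coverage = np.mean((temp >= lo) & (temp <= hi))  # empirical interval coverage
```

"Sufficient coverage" in the abstract corresponds to this empirical coverage
landing near the nominal 90% level.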
|
2501.16836
|
Misspellings in Natural Language Processing: A survey
|
cs.CL cs.AI
|
This survey provides an overview of the challenges of misspellings in natural
language processing (NLP). While often unintentional, misspellings have become
ubiquitous in digital communication, especially with the proliferation of Web
2.0, user-generated content, and informal text mediums such as social media,
blogs, and forums. Even if humans can generally interpret misspelled text, NLP
models frequently struggle to handle it: this causes a decline in performance
in common tasks like text classification and machine translation. In this
paper, we reconstruct a history of misspellings as a scientific problem. We
then discuss the latest advancements to address the challenge of misspellings
in NLP. Main strategies to mitigate the effect of misspellings include data
augmentation, double step, character-order agnostic, and tuple-based methods,
among others. This survey also examines dedicated data challenges and
competitions to spur progress in the field. Critical safety and ethical
concerns are also examined, for example, the voluntary use of misspellings to
inject malicious messages and hate speech on social networks. Furthermore, the
survey explores psycholinguistic perspectives on how humans process
misspellings, potentially informing innovative computational techniques for
text normalization and representation. Finally, the misspelling-related
challenges and opportunities associated with modern large language models are
also analyzed, including benchmarks, datasets, and performances of the most
prominent language models against misspellings. This survey aims to be an
exhaustive resource for researchers seeking to mitigate the impact of
misspellings in the rapidly evolving landscape of NLP.
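As a toy illustration of the data-augmentation strategy mentioned above (a
generic sketch of character-level noise injection, not any specific surveyed
system; the `misspell` helper is hypothetical), one can corrupt training words
by randomly deleting, swapping, or substituting characters:

```python
import random
import string

def misspell(word, rng, p=0.3):
    """Return a noisy copy of `word` via one random character-level edit."""
    if len(word) < 2 or rng.random() > p:
        return word
    i = rng.randrange(len(word) - 1)
    op = rng.choice(["delete", "swap", "substitute"])
    if op == "delete":
        return word[:i] + word[i + 1:]
    if op == "swap":
        return word[:i] + word[i + 1] + word[i] + word[i + 2:]
    return word[:i] + rng.choice(string.ascii_lowercase) + word[i + 1:]

rng = random.Random(42)
augmented = [misspell(w, rng, p=1.0) for w in ["language", "processing"]]
```

Training on a mix of clean and augmented text is the usual way such noise
improves downstream robustness to misspellings.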
|
2501.16838
|
Spread Codes from Abelian non-cyclic groups
|
cs.IT math.IT
|
Given the finite field $\mathbb{F}_{q}$, for a prime power $q$, in this paper
we present a way of constructing spreads of $\mathbb{F}_{q}^{n}$. They will
arise as orbits under the action of an Abelian non-cyclic group. First, we
construct a family of orbit codes of maximum distance using this group, and
then we complete each of these codes to achieve a spread of the whole space
having an orbital structure.
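For context (standard background, not specific to this paper's construction): a
$k$-spread of $\mathbb{F}_q^n$ is a set $\mathcal{S}$ of $k$-dimensional
subspaces that partitions the nonzero vectors of the space, and such a spread
exists precisely when $k \mid n$:

```latex
\[
\bigcup_{V \in \mathcal{S}} V = \mathbb{F}_q^{n}, \qquad
V \cap W = \{0\} \ \text{for } V \neq W, \qquad
|\mathcal{S}| = \frac{q^{n}-1}{q^{k}-1}.
\]
```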
|
2501.16839
|
Flow Matching: Markov Kernels, Stochastic Processes and Transport Plans
|
cs.LG math.PR
|
Among generative neural models, flow matching techniques stand out for their
simple applicability and good scaling properties. Here, velocity fields of
curves connecting a simple latent and a target distribution are learned. Then
the corresponding ordinary differential equation can be used to sample from a
target distribution, starting in samples from the latent one. This paper
reviews from a mathematical point of view different techniques to learn the
velocity fields of absolutely continuous curves in the Wasserstein geometry. We
show how the velocity fields can be characterized and learned via i) transport
plans (couplings) between latent and target distributions, ii) Markov kernels
and iii) stochastic processes, where the latter two include the coupling
approach, but are in general broader. Besides this main goal, we show how flow
matching can be used for solving Bayesian inverse problems, where the
definition of conditional Wasserstein distances plays a central role. Finally,
we briefly address continuous normalizing flows and score matching techniques,
which approach the learning of velocity fields of curves from other directions.
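As a concrete instance of the velocity-field learning described above, the
widely used conditional flow matching objective with linear interpolation
between a latent sample $x_0 \sim p_0$ and a target sample $x_1 \sim p_1$
reads (standard formulation, not this paper's notation):

```latex
\[
x_t = (1-t)\,x_0 + t\,x_1, \qquad
\mathcal{L}(\theta)
= \mathbb{E}_{t \sim \mathcal{U}[0,1],\,(x_0,\,x_1)}
\bigl\| v_\theta(x_t, t) - (x_1 - x_0) \bigr\|^2,
\]
```

after which sampling amounts to solving the ODE
$\dot{x}_t = v_\theta(x_t, t)$ started at $x_0 \sim p_0$.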
|
2501.16841
|
Toward Explainable NILM: Real-Time Event-Based NILM Framework for
High-Frequency Data
|
eess.SY cs.SY
|
Non-Intrusive Load Monitoring (NILM) is an advanced, and cost-effective
technique for monitoring appliance-level energy consumption. However, its
adaptability is hindered by the lack of transparency and explainability. To
address this challenge, this paper presents an explainable, real-time,
event-based NILM framework specifically designed for high-frequency datasets.
The proposed framework ensures transparency at every stage by integrating a
z-score-based event detector, appliance signature estimation, Fourier-based
feature extraction, an XG-Boost classifier, and post hoc SHAP analysis. The
SHAP analysis further quantifies the contribution of individual features, such
as cosine of specific harmonic phases, to appliance classification. The
framework is trained and evaluated on the PLAID dataset, and achieved a
classification accuracy of 90% while maintaining low computational requirements
and a latency of less than one second.
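The z-score-based event detector in such a pipeline can be sketched as follows
(a generic illustration with an assumed trailing-window size and threshold, not
the paper's exact implementation; `detect_events` is a hypothetical helper):

```python
import numpy as np

def detect_events(power, window=50, z_thresh=4.0):
    """Flag samples whose deviation from a trailing window exceeds a z-score."""
    events = []
    for i in range(window, len(power)):
        ref = power[i - window:i]
        mu, sigma = ref.mean(), ref.std()
        if sigma > 0 and abs(power[i] - mu) / sigma > z_thresh:
            events.append(i)
    return events

# Synthetic aggregate signal: an appliance switches on at sample 200.
signal = np.concatenate([np.full(200, 100.0), np.full(100, 350.0)])
signal += np.random.default_rng(0).normal(0, 1.0, size=signal.size)
events = detect_events(signal)
```

Detected event indices then delimit the windows from which appliance signatures
and Fourier features would be extracted for the downstream classifier.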
|
2501.16847
|
Optimization and Learning in Open Multi-Agent Systems
|
math.OC cs.LG cs.MA cs.SY eess.SY
|
Modern artificial intelligence relies on networks of agents that collect
data, process information, and exchange it with neighbors to collaboratively
solve optimization and learning problems. This article introduces a novel
distributed algorithm to address a broad class of these problems in "open
networks", where the number of participating agents may vary due to several
factors, such as autonomous decisions, heterogeneous resource availability, or
DoS attacks. Extending the current literature, the convergence analysis of the
proposed algorithm is based on the newly developed "Theory of Open Operators",
which characterizes an operator as open when the set of components to be
updated changes over time, yielding time-varying operators acting on
sequences of points of different dimensions and compositions. The mathematical
tools and convergence results developed here provide a general framework for
evaluating distributed algorithms in open networks, making it possible to
characterize their performance in terms of the pointwise distance from the
optimal solution,
in contrast with regret-based metrics that assess cumulative performance over a
finite-time horizon. As illustrative examples, the proposed algorithm is used
to solve dynamic consensus or tracking problems on different metrics of
interest, such as average, median, and min/max value, as well as classification
problems with logistic loss functions.
|
2501.16848
|
Hybrid Phenology Modeling for Predicting Temperature Effects on Tree
Dormancy
|
cs.LG
|
Biophysical models offer valuable insights into climate-phenology
relationships in both natural and agricultural settings. However, there are
substantial structural discrepancies across models which require site-specific
recalibration, often yielding inconsistent predictions under similar climate
scenarios. Machine learning methods offer data-driven solutions, but often lack
interpretability and alignment with existing knowledge. We present a phenology
model describing dormancy in fruit trees, integrating conventional biophysical
models with a neural network to address their structural disparities. We
evaluate our hybrid model in an extensive case study predicting cherry tree
phenology in Japan, South Korea and Switzerland. Our approach consistently
outperforms both traditional biophysical and machine learning models in
predicting blooming dates across years. Additionally, the neural network's
adaptability facilitates parameter learning for specific tree varieties,
enabling robust generalization to new sites without site-specific
recalibration. This hybrid model leverages both biophysical constraints and
data-driven flexibility, offering a promising avenue for accurate and
interpretable phenology modeling.
|
2501.16863
|
HD-CB: The First Exploration of Hyperdimensional Computing for
Contextual Bandits Problems
|
cs.LG
|
Hyperdimensional Computing (HDC), also known as Vector Symbolic
Architectures, is a computing paradigm that combines the strengths of symbolic
reasoning with the efficiency and scalability of distributed connectionist
models in artificial intelligence. HDC has recently emerged as a promising
alternative for performing learning tasks in resource-constrained environments
thanks to its energy and computational efficiency, inherent parallelism, and
resilience to noise and hardware faults.
This work introduces the Hyperdimensional Contextual Bandits (HD-CB): the
first exploration of HDC to model and automate sequential decision-making
Contextual Bandits (CB) problems. The proposed approach maps environmental
states in a high-dimensional space and represents each action with dedicated
hypervectors (HVs). At each iteration, these HVs are used to select the optimal
action for the given context and are updated based on the received reward,
replacing computationally expensive ridge regression procedures required by
traditional linear CB algorithms with simple, highly parallel vector
operations. We propose four HD-CB variants, demonstrating their flexibility in
implementing different exploration strategies, as well as techniques to reduce
memory overhead and the number of hyperparameters. Extensive simulations on
synthetic datasets and a real-world benchmark reveal that HD-CB consistently
achieves competitive or superior performance compared to traditional linear CB
algorithms, while offering faster convergence time, lower computational
complexity, improved scalability, and high parallelism.
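To make the idea concrete, here is a heavily simplified sketch in the spirit of
HD-CB: an epsilon-greedy contextual bandit where contexts are mapped to bipolar
hypervectors via a fixed random projection and each action's weight hypervector
is updated with a simple reward-weighted vector addition. This is an
illustrative variant of my own, with an invented toy environment; the paper's
four HD-CB variants and their exact update rules differ.

```python
import numpy as np

rng = np.random.default_rng(0)
D, n_actions = 2048, 3                      # hypervector dimension, arms

proj = rng.standard_normal((n_actions, D))  # fixed random projection to HV space
weights = np.zeros((n_actions, D))          # one weight hypervector per action

def choose(ctx_hv, eps=0.1):
    """Epsilon-greedy selection via dot products with the weight HVs."""
    if rng.random() < eps:
        return int(rng.integers(n_actions))
    return int(np.argmax(weights @ ctx_hv))

# Toy environment: action i is rewarded iff context feature i is the largest.
for _ in range(3000):
    ctx = rng.random(n_actions)
    hv = np.sign(ctx @ proj)                # bipolar context hypervector
    a = choose(hv)
    reward = 1.0 if a == int(np.argmax(ctx)) else -1.0
    weights[a] += reward * hv               # simple, highly parallel update
```

The update replaces the per-step ridge regression of linear CB algorithms with
a single vector addition, which is where the parallelism and low complexity
claimed above come from.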
|
2501.16865
|
JRE-L: Journalist, Reader, and Editor LLMs in the Loop for Science
Journalism for the General Audience
|
cs.CL
|
Science journalism reports current scientific discoveries to non-specialists,
aiming to enable public comprehension of the state of the art. This task is
challenging as the audience often lacks specific knowledge about the presented
research. We propose JRE-L, a framework that integrates three LLMs to mimic the
writing-reading-feedback-revision loop. In JRE-L, one LLM acts as the
journalist, another LLM as the general public reader, and the third LLM as an
editor. The journalist's writing is iteratively refined by feedback from the
reader and suggestions from the editor. Our experiments demonstrate that by
leveraging the collaboration of two 7B and one 1.8B open-source LLMs, we can
generate articles that are more accessible than those generated by existing
methods, including prompting single advanced models such as GPT-4 and other
LLM-collaboration strategies. Our code is publicly available at
github.com/Zzoay/JRE-L.
|
2501.16867
|
Empirical modeling and hybrid machine learning framework for nucleate
pool boiling on microchannel structured surfaces
|
physics.app-ph cs.LG
|
Micro-structured surfaces influence nucleation characteristics and bubble
dynamics besides increasing the heat transfer surface area, thus enabling
efficient nucleate boiling heat transfer. Modeling the pool boiling heat
transfer characteristics of these surfaces under varied conditions is essential
in diverse applications. Since existing correlations are limited in accuracy
and by narrow operating ranges, a new empirical correlation for nucleate
boiling on microchannel structured surfaces is proposed, using data collected
from various experiments in previous studies. This study also examines
various Machine Learning (ML) algorithms and Deep Neural Networks (DNN) on the
microchannel structured surfaces dataset to predict the nucleate pool boiling
Heat Transfer Coefficient (HTC). With the aim of integrating both ML and
domain knowledge, a Physics-Informed Machine Learning Aided Framework (PIMLAF)
is proposed. The proposed correlation in this study is employed as the prior
physics-based model for PIMLAF, and a DNN is employed to model the residuals of
the prior model. This hybrid framework achieved the best performance in
comparison to the other ML models and DNNs. This framework is able to
generalize well across different datasets because the proposed correlation
provides the baseline knowledge of the boiling behavior. Also, SHAP
interpretation analysis identifies the critical parameters impacting the model
predictions and their effect on HTC prediction. This analysis further makes the
model more robust and reliable.
Keywords: Pool boiling, Microchannels, Heat transfer coefficient, Correlation
analysis, Machine learning, Deep neural network, Physics-informed machine
learning aided framework, SHAP analysis
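The residual-modeling pattern behind PIMLAF (a prior physics-based model plus
an ML model fitted to its residuals) can be sketched generically. Here a plain
least-squares fit stands in for the DNN, and a deliberately imperfect power law
stands in for the proposed correlation; all names and numbers are invented for
illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "experiments": heat flux q -> heat transfer coefficient h.
q = rng.uniform(10, 100, size=300)
h_meas = 2.0 * q**0.7 + 0.05 * q + rng.normal(0, 0.5, size=300)

def prior_model(q):
    # Stand-in physics-based correlation: right shape, biased coefficient.
    return 1.8 * q**0.7

# Fit a simple model to the prior's residuals (a DNN plays this role in PIMLAF).
residual = h_meas - prior_model(q)
A = np.column_stack([q, np.ones_like(q)])
coef, *_ = np.linalg.lstsq(A, residual, rcond=None)

def hybrid_model(q):
    return prior_model(q) + coef[0] * q + coef[1]
```

Because the prior supplies the baseline physics, the residual learner only has
to correct systematic bias, which is why such hybrids tend to generalize better
than purely data-driven models.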
|
2501.16868
|
Event-Based Adaptive Koopman Framework for Optic Flow-Guided Landing on
Moving Platforms
|
eess.SY cs.RO cs.SY
|
This paper presents an optic flow-guided approach for achieving soft landings
by resource-constrained unmanned aerial vehicles (UAVs) on dynamic platforms.
An offline data-driven linear model based on Koopman operator theory is
developed to describe the underlying (nonlinear) dynamics of optic flow output
obtained from a single monocular camera that maps to vehicle acceleration as
the control input. Moreover, a novel adaptation scheme within the Koopman
framework is introduced online to handle uncertainties such as unknown platform
motion and ground effect, which exert a significant influence during the
terminal stage of the descent process. Further, to minimize computational
overhead, an event-based adaptation trigger is incorporated into an
event-driven Model Predictive Control (MPC) strategy to regulate optic flow and
track a desired reference. A detailed convergence analysis ensures global
convergence of the tracking error to a uniform ultimate bound while ensuring
Zeno-free behavior. Simulation results demonstrate the algorithm's robustness
and effectiveness in landing on dynamic platforms under ground effect and
sensor noise, which compares favorably to non-adaptive event-triggered and
time-triggered adaptive schemes.
|
2501.16870
|
Experimenting with Affective Computing Models in Video Interviews with
Spanish-speaking Older Adults
|
cs.CV
|
Understanding emotional signals in older adults is crucial for designing
virtual assistants that support their well-being. However, existing affective
computing models often face significant limitations: (1) limited availability
of datasets representing older adults, especially in non-English-speaking
populations, and (2) poor generalization of models trained on younger or
homogeneous demographics. To address these gaps, this study evaluates
state-of-the-art affective computing models -- including facial expression
recognition, text sentiment analysis, and smile detection -- using videos of
older adults interacting with either a person or a virtual avatar. As part of
this effort, we introduce a novel dataset featuring Spanish-speaking older
adults engaged in human-to-human video interviews. Through three comprehensive
analyses, we investigate (1) the alignment between human-annotated labels and
automatic model outputs, (2) the relationships between model outputs across
different modalities, and (3) individual variations in emotional signals. Using
both the Wizard of Oz (WoZ) dataset and our newly collected dataset, we uncover
limited agreement between human annotations and model predictions, weak
consistency across modalities, and significant variability among individuals.
These findings highlight the shortcomings of generalized emotion perception
models and emphasize the need to incorporate personal variability and
cultural nuances into future systems.
|
2501.16875
|
Enhancing Web Service Anomaly Detection via Fine-grained Multi-modal
Association and Frequency Domain Analysis
|
cs.SE cs.LG
|
Anomaly detection is crucial for ensuring the stability and reliability of
web service systems. Logs and metrics contain complementary information that can
reflect the system's operational state and potential anomalies. Thus, existing
anomaly detection methods use logs and metrics to detect web service systems'
anomalies through data fusion approaches. They associate logs and metrics using
coarse-grained time window alignment and capture the normal patterns of system
operation through reconstruction. However, these methods have two issues that
limit their performance in anomaly detection. First, due to asynchrony between
logs and metrics, coarse-grained time window alignment cannot achieve a precise
association between the two modalities. Second, reconstruction-based methods
suffer from severe overgeneralization problems, resulting in even anomalous
inputs being accurately reconstructed and therefore missed. In this paper, we
propose a novel anomaly detection
method named FFAD to address these two issues. On the one hand, FFAD employs
graph-based alignment to mine and extract associations between the modalities
from the constructed log-metric relation graph, achieving precise associations
between logs and metrics. On the other hand, we improve the model's fit to
normal data distributions through Fourier Frequency Focus, thereby enhancing
the effectiveness of anomaly detection. We validated the effectiveness of our
model on two real-world industrial datasets and one open-source dataset. The
results show that our method achieves an average anomaly detection F1-score of
93.6%, representing an 8.8% improvement over previous state-of-the-art methods.
|
2501.16879
|
Ultra-high resolution multimodal MRI dense labelled holistic brain atlas
|
eess.IV cs.CV
|
In this paper, we introduce holiAtlas, a holistic, multimodal and
high-resolution human brain atlas. This atlas covers different levels of
details of the human brain anatomy, from the organ to the substructure level,
using a new dense labelled protocol generated from the fusion of multiple local
protocols at different scales. This atlas has been constructed by averaging
images and segmentations of 75 healthy subjects from the Human Connectome
Project database. Specifically, MR images of T1, T2 and WMn (White Matter
nulled) contrasts at 0.125 $mm^{3}$ resolution were nonlinearly registered and
averaged using symmetric group-wise normalisation to construct the atlas. At
the finest level, the holiAtlas protocol has 350 different labels derived from
10 different delineation protocols. These labels were grouped at different
scales to provide a holistic view of the brain at different levels in a
coherent and consistent manner. This multiscale and multimodal atlas can be
used for the development of new ultra-high resolution segmentation methods that
can potentially facilitate the early detection of neurological disorders.
|
2501.16884
|
Irony Detection, Reasoning and Understanding in Zero-shot Learning
|
cs.CL cs.AI
|
Irony is a powerful figurative language (FL) on social media that can
potentially mislead various NLP tasks, such as recommendation systems,
misinformation checks, and sentiment analysis. Understanding the implicit
meaning of this kind of subtle language is essential to mitigate irony's
negative impact on NLP tasks. However, building models to understand irony
presents a unique set of challenges, because irony is a complex form of
language that often relies on context, tone, and subtle cues to convey meaning
that is opposite or different from the literal interpretation. Large language
models, such as ChatGPT, are increasingly able to capture implicit and
contextual information. In this study, we investigate the generalization,
reasoning and understanding ability of ChatGPT on irony detection across six
irony detection datasets spanning different genres. Our findings suggest that
ChatGPT shows enhanced language understanding and reasoning ability, but that
its performance is highly sensitive to prompt engineering design. We therefore
propose a prompt engineering design framework, IDADP, to achieve higher irony
detection accuracy, improved understanding of irony, and more effective
explanations compared to other state-of-the-art ChatGPT zero-shot approaches.
Our experiments further indicate that prompts generated under the framework are
a promising way to address the generalization issues of LLMs.
|
2501.16888
|
Secure Federated Graph-Filtering for Recommender Systems
|
cs.IR cs.CR
|
Recommender systems often rely on graph-based filters, such as normalized
item-item adjacency matrices and low-pass filters. While effective, the
centralized computation of these components raises concerns about privacy,
security, and the ethical use of user data. This work proposes two
decentralized frameworks for securely computing these critical graph components
without centralizing sensitive information. The first approach leverages
lightweight Multi-Party Computation and distributed singular vector
computations to privately compute key graph filters. The second extends this
framework by incorporating low-rank approximations, enabling a trade-off
between communication efficiency and predictive performance. Empirical
evaluations on benchmark datasets demonstrate that the proposed methods achieve
comparable accuracy to centralized state-of-the-art systems while ensuring data
confidentiality and maintaining low communication costs. Our results highlight
the potential for privacy-preserving decentralized architectures to bridge the
gap between utility and user data protection in modern recommender systems.
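The centrally computed component that such frameworks protect can be
illustrated with a standard (non-private) linear item-item graph filter for
implicit feedback; a minimal sketch, assuming a binary user-item matrix `R` and
the common symmetric normalization (toy data, not this paper's protocol):

```python
import numpy as np

# Toy binary interaction matrix: 4 users x 5 items.
R = np.array([
    [1, 1, 0, 0, 0],
    [1, 1, 1, 0, 0],
    [0, 0, 1, 1, 0],
    [0, 0, 0, 1, 1],
], dtype=float)

# Normalized item-item adjacency: D^{-1/2} (R^T R) D^{-1/2}.
item_deg = R.sum(axis=0)                 # assumed nonzero in this toy example
d_inv_sqrt = np.diag(1.0 / np.sqrt(item_deg))
A = d_inv_sqrt @ R.T @ R @ d_inv_sqrt

scores = R @ A            # linear filter: propagate user histories over items
scores[R > 0] = -np.inf   # mask items already interacted with
top_item = int(scores[0].argmax())
```

In the decentralized setting, the point is to obtain `A` (or a low-rank
approximation of it) without any party ever seeing the raw rows of `R`.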
|
2501.16889
|
Extending Information Bottleneck Attribution to Video Sequences
|
cs.CV cs.AI
|
We introduce VIBA, a novel approach for explainable video classification by
adapting Information Bottlenecks for Attribution (IBA) to video sequences.
While most traditional explainability methods are designed for image models,
our IBA framework addresses the need for explainability in temporal models used
for video analysis. To demonstrate its effectiveness, we apply VIBA to video
deepfake detection, testing it on two architectures: the Xception model for
spatial features and a VGG11-based model for capturing motion dynamics through
optical flow. Using a custom dataset that reflects recent deepfake generation
techniques, we adapt IBA to create relevance and optical flow maps, visually
highlighting manipulated regions and motion inconsistencies. Our results show
that VIBA generates temporally and spatially consistent explanations, which
align closely with human annotations, thus providing interpretability for video
classification and particularly for deepfake detection.
|
2501.16894
|
DBSCAN in domains with periodic boundary conditions
|
cs.LG physics.comp-ph physics.flu-dyn
|
Many scientific problems involve data that is embedded in a space with
periodic boundary conditions. This can for instance be related to an inherent
cyclic or rotational symmetry in the data or a spatially extended periodicity.
When analyzing such data, well-tailored methods are needed to obtain efficient
approaches that obey the periodic boundary conditions of the problem. In this
work, we present a method for applying a clustering algorithm to data embedded
in a periodic domain based on the DBSCAN algorithm, a widely used unsupervised
machine learning method that identifies clusters in data. The proposed method
internally leverages the conventional DBSCAN algorithm for domains with open
boundaries, such that it remains compatible with all optimized implementations
for neighborhood searches in open domains. In this way, it retains the same
optimized runtime complexity of $O(N\log N)$. We demonstrate the workings of
the proposed method using synthetic data in one, two and three dimensions and
also apply it to a real-world example involving the clustering of bubbles in a
turbulent flow. The proposed approach is implemented in a ready-to-use Python
package that we make publicly available.
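The kind of wrapper described above can be approximated in a few lines: a naive
O(N^2) sketch that builds minimum-image pairwise distances and feeds them to
scikit-learn's DBSCAN with a precomputed metric. This illustrates the behavior
only and is not the paper's optimized O(N log N) method; `periodic_dbscan` is a
hypothetical helper.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def periodic_dbscan(points, box, eps, min_samples):
    """DBSCAN under periodic boundaries via minimum-image pairwise distances."""
    diff = points[:, None, :] - points[None, :, :]
    diff -= box * np.round(diff / box)          # minimum-image convention
    dist = np.sqrt((diff ** 2).sum(axis=-1))
    return DBSCAN(eps=eps, min_samples=min_samples,
                  metric="precomputed").fit_predict(dist)

# A cluster straddling the boundary of a [0, 10) x [0, 10) domain.
pts = np.array([[0.1, 5.0], [9.9, 5.0], [0.2, 5.1], [5.0, 5.0]])
labels = periodic_dbscan(pts, box=np.array([10.0, 10.0]),
                         eps=0.5, min_samples=2)
```

With open boundaries the first two points would be ~9.8 apart and end up in
different clusters; under the periodic metric they are 0.2 apart and merge.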
|
2501.16896
|
Frequency Matters: Explaining Biases of Face Recognition in the
Frequency Domain
|
cs.CV
|
Face recognition (FR) models are vulnerable to performance variations across
demographic groups. The causes for these performance differences are unclear
due to the highly complex deep learning-based structure of face recognition
models. Several works have explored possible roots of gender and ethnicity
bias, identifying semantic reasons such as hairstyle, make-up, or facial hair
as possible sources. Motivated by recent discoveries of the importance of
frequency patterns in convolutional neural networks, we explain bias in face
recognition using state-of-the-art frequency-based explanations. Our extensive
results show that different frequencies are important to FR models depending on
the ethnicity of the samples.
|
2501.16899
|
RDMM: Fine-Tuned LLM Models for On-Device Robotic Decision Making with
Enhanced Contextual Awareness in Specific Domains
|
cs.RO cs.AI
|
Large language models (LLMs) represent a significant advancement in
integrating physical robots with AI-driven systems. We showcase the
capabilities of our framework within the context of the real-world household
competition. This research introduces a framework that utilizes RDMM (Robotics
Decision-Making Models), which possess the capacity for decision-making within
domain-specific contexts, as well as an awareness of their personal knowledge
and capabilities. The framework leverages information to enhance the autonomous
decision-making of the system. In contrast to other approaches, our focus is on
real-time, on-device solutions, successfully operating on hardware with as
little as 8GB of memory. Our framework incorporates visual perception models
equipping robots with understanding of their environment. Additionally, the
framework has integrated real-time speech recognition capabilities, thus
enhancing the human-robot interaction experience. Experimental results
demonstrate that the RDMM framework can plan with 93\% accuracy.
Furthermore, we introduce a new dataset consisting of 27k planning instances,
as well as 1.3k text-image annotated samples derived from the competition. The
framework, benchmarks, datasets, and models developed in this work are publicly
available on our GitHub repository at https://github.com/shadynasrat/RDMM.
|
2501.16900
|
RAINER: A Robust Ensemble Learning Grid Search-Tuned Framework for
Rainfall Patterns Prediction
|
cs.LG
|
Rainfall prediction remains a persistent challenge due to the highly
nonlinear and complex nature of meteorological data. Existing approaches lack
systematic utilization of grid search for optimal hyperparameter tuning,
relying instead on heuristic or manual selection, frequently resulting in
sub-optimal performance. Additionally, these methods rarely incorporate newly
constructed meteorological features such as differences between temperature and
humidity to capture critical weather dynamics. Furthermore, there is a lack of
systematic evaluation of ensemble learning techniques and limited exploration
of diverse advanced models introduced in recent years. To address
these limitations, we propose a robust ensemble learning grid search-tuned
framework (RAINER) for rainfall prediction. RAINER incorporates a comprehensive
feature engineering pipeline, including outlier removal, imputation of missing
values, feature reconstruction, and dimensionality reduction via Principal
Component Analysis (PCA). The framework integrates novel meteorological
features to capture dynamic weather patterns and systematically evaluates
non-learning mathematical-based methods and a variety of machine learning
models, from weak classifiers to advanced neural networks such as
Kolmogorov-Arnold Networks (KAN). By leveraging grid search for hyperparameter
tuning and ensemble voting techniques, RAINER achieves promising results within
real-world datasets.
|
2501.16902
|
Document Screenshot Retrievers are Vulnerable to Pixel Poisoning Attacks
|
cs.IR
|
Recent advancements in dense retrieval have introduced vision-language model
(VLM)-based retrievers, such as DSE and ColPali, which leverage document
screenshots embedded as vectors to enable effective search and offer a
simplified pipeline over traditional text-only methods. In this study, we
propose three pixel poisoning attack methods designed to compromise VLM-based
retrievers and evaluate their effectiveness under various attack settings and
parameter configurations. Our empirical results demonstrate that injecting even
a single adversarial screenshot into the retrieval corpus can significantly
disrupt search results, poisoning the top-10 retrieved documents for 41.9% of
queries in the case of DSE and 26.4% for ColPali. These vulnerability rates
notably exceed those observed with equivalent attacks on text-only retrievers.
Moreover, when targeting a small set of known queries, the attack success rate
rises, achieving complete success in certain cases. By exposing the
vulnerabilities inherent in vision-language models, this work highlights the
potential risks associated with their deployment.
|
2501.16904
|
Adversarial Masked Autoencoder Purifier with Defense Transferability
|
cs.CV
|
The study of adversarial defense still struggles to combat advanced
adversarial attacks. In contrast to most prior studies that rely on the
diffusion model for test-time defense to remarkably increase the inference
time, we propose Masked AutoEncoder Purifier (MAEP), which integrates Masked
AutoEncoder (MAE) into an adversarial purifier framework for test-time
purification. While MAEP achieves promising adversarial robustness, it
particularly features model defense transferability and attack generalization
without relying on using additional data that is different from the training
dataset. To our knowledge, MAEP is the first study of adversarial purifier
based on MAE. Extensive experimental results demonstrate that our method can
not only maintain clean accuracy with only a slight drop but also exhibit a
close gap between the clean and robust accuracy. Notably, MAEP trained on
CIFAR10 achieves state-of-the-art performance even when tested directly on
ImageNet, outperforming existing diffusion-based models trained specifically on
ImageNet.
|
2501.16912
|
A Unified Evaluation Framework for Epistemic Predictions
|
cs.LG
|
Predictions of uncertainty-aware models are diverse, ranging from single
point estimates (often averaged over prediction samples) to predictive
distributions, to set-valued or credal-set representations. We propose a novel
unified evaluation framework for uncertainty-aware classifiers, applicable to a
wide range of model classes, which allows users to tailor the trade-off between
accuracy and precision of predictions via a suitably designed performance
metric. This makes possible the selection of the most suitable model for a
particular real-world application as a function of the desired trade-off. Our
experiments, concerning Bayesian, ensemble, evidential, deterministic, credal
and belief function classifiers on the CIFAR-10, MNIST and CIFAR-100 datasets,
show that the metric behaves as desired.
|
2501.16915
|
Understanding the Effect of Long-Term Memory Model Parameters in
Pole-Zero Identification for Stability Analysis of Power Amplifiers
|
eess.SY cs.SY
|
Understanding the nature of potential instabilities is indispensable for the
stabilization of power amplifiers. Pole-zero identification is one of the
techniques that can be used to determine the stability of a design in
large-signal operation. In this work, the possible presence of poles at the
fundamental frequency linked to the long-term memory parameters of the
transistor's model (self-heating and traps) is presented and discussed. The
paper shows how their effect on the identified frequency responses around the
fundamental frequency may compromise the stability analysis results and the
assessment of stability margins. The low observability of the poles at the
fundamental frequency highlights the importance of an accurate identification
of real poles in low-frequency bands. A specific algorithm for the automatic
frequency domain identification of non-resonant frequency responses and a
procedure for detecting and reducing overfitting of real poles are proposed in
this article. The benefits of the proposed methodology for correctly detecting
and analyzing real poles at low frequencies are demonstrated through Monte-Carlo
sensitivity analyses of two different amplifier designs.
|
2501.16917
|
B-FPGM: Lightweight Face Detection via Bayesian-Optimized Soft FPGM
Pruning
|
cs.CV
|
Face detection is a computer vision application that increasingly demands
lightweight models to facilitate deployment on devices with limited
computational resources. Neural network pruning is a promising technique that
can effectively reduce network size without significantly affecting
performance. In this work, we propose a novel face detection pruning pipeline
that leverages Filter Pruning via Geometric Median (FPGM) pruning, Soft Filter
Pruning (SFP) and Bayesian optimization in order to achieve a superior
trade-off between size and performance compared to existing approaches. FPGM
pruning is a structured pruning technique that allows pruning the least
significant filters in each layer, while SFP iteratively prunes the filters and
allows them to be updated in any subsequent training step. Bayesian
optimization is employed in order to optimize the pruning rates of each layer,
rather than relying on engineering expertise to determine the optimal pruning
rates for each layer. In our experiments across all three subsets of the WIDER
FACE dataset, our proposed approach B-FPGM consistently outperforms existing
ones in balancing model size and performance. All our experiments were applied
to EResFD, the currently smallest (in number of parameters) well-performing
face detector in the literature; a small ablation study with a second small
face detector, EXTD, is also reported. The source code and trained pruned face
detection models can be found at: https://github.com/IDTITI/B-FPGM.
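The FPGM selection rule described above can be sketched in a few lines: filters whose weights lie closest to the layer's geometric median are treated as redundant and soft-pruned, i.e. zeroed but kept updatable, as in SFP. The function name and the approximation of the geometric median by total pairwise distance are illustrative assumptions, not the released implementation.

```python
import numpy as np

# FPGM-style filter selection (sketch): score each filter by its total
# distance to the other filters in the layer; the lowest-scoring filters
# lie nearest the geometric median and are considered most redundant.
def fpgm_prune_mask(filters, prune_ratio):
    """filters: (n_filters, k) flattened weights -> boolean keep-mask."""
    n = filters.shape[0]
    dists = np.linalg.norm(filters[:, None, :] - filters[None, :, :], axis=-1)
    score = dists.sum(axis=1)               # distance to all other filters
    n_prune = int(round(prune_ratio * n))
    pruned = np.argsort(score)[:n_prune]    # nearest to the geometric median
    mask = np.ones(n, dtype=bool)
    mask[pruned] = False
    return mask

# Soft pruning keeps pruned filters in the network: they are zeroed after
# each epoch but may be updated again in subsequent training steps.
layer = np.random.randn(10, 27)             # 10 filters of 3x3x3
mask = fpgm_prune_mask(layer, prune_ratio=0.3)
layer *= mask[:, None]                      # zero the pruned filters
```

In B-FPGM the per-layer `prune_ratio` would be chosen by Bayesian optimization rather than fixed by hand as here.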
|
2501.16918
|
On Rollouts in Model-Based Reinforcement Learning
|
cs.LG
|
Model-based reinforcement learning (MBRL) seeks to enhance data efficiency by
learning a model of the environment and generating synthetic rollouts from it.
However, accumulated model errors during these rollouts can distort the data
distribution, negatively impacting policy learning and hindering long-term
planning. Thus, the accumulation of model errors is a key bottleneck in current
MBRL methods. We propose Infoprop, a model-based rollout mechanism that
separates aleatoric from epistemic model uncertainty and reduces the influence
of the latter on the data distribution. Further, Infoprop keeps track of
accumulated model errors along a model rollout and provides termination
criteria to limit data corruption. We demonstrate the capabilities of Infoprop
in the Infoprop-Dyna algorithm, reporting state-of-the-art performance in
Dyna-style MBRL on common MuJoCo benchmark tasks while substantially increasing
rollout length and data quality.
|
2501.16919
|
Projection-free Algorithms for Online Convex Optimization with
Adversarial Constraints
|
cs.LG
|
We study a generalization of the Online Convex Optimization (OCO) framework
with time-varying adversarial constraints. In this problem, after selecting a
feasible action from the convex decision set $X,$ a convex constraint function
is revealed alongside the cost function in each round. Our goal is to design a
computationally efficient learning policy that achieves a small regret with
respect to the cost functions and a small cumulative constraint violation (CCV)
with respect to the constraint functions over a horizon of length $T$. It is
well-known that the projection step constitutes the major computational
bottleneck of the standard OCO algorithms. However, for many structured
decision sets, linear functions can be efficiently optimized over the decision
set. We propose a *projection-free* online policy which makes a single call to
a Linear Program (LP) solver per round. Our method outperforms state-of-the-art
projection-free online algorithms with adversarial constraints, achieving
improved bounds of $\tilde{O}(T^{\frac{3}{4}})$ for both regret and CCV. The
proposed algorithm is conceptually simple - it first constructs a surrogate
cost function as a non-negative linear combination of the cost and constraint
functions. Then, it passes the surrogate costs to a new, adaptive version of
the online conditional gradient subroutine, which we propose in this paper.
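The round structure described above can be sketched as follows. We use a box decision set so the per-round linear optimization has a closed form standing in for the LP call; the surrogate weighting and the step-size schedule are illustrative choices, not the paper's exact adaptive subroutine.

```python
import numpy as np

def linear_oracle_box(d, lo=0.0, hi=1.0):
    """argmin over [lo, hi]^n of <d, v>: a one-call 'LP solver' for a box."""
    return np.where(d > 0, lo, hi)

def surrogate_grad(grad_f, grad_g, violation, lam=1.0):
    """Gradient of the surrogate f_t(x) + lam * max(0, g_t(x))."""
    return grad_f + (lam * grad_g if violation > 0 else 0.0)

x = np.full(2, 0.5)                       # start in X = [0, 1]^2
for t in range(1, 200):
    grad_f = np.array([1.0, -1.0])        # revealed cost gradient
    g_val = x[0] + x[1] - 1.0             # constraint g_t(x) = x0 + x1 - 1
    d = surrogate_grad(grad_f, np.ones(2), g_val)
    v = linear_oracle_box(d)              # single linear optimization per round
    eta = 1.0 / np.sqrt(t)                # assumed conditional-gradient step
    x = (1 - eta) * x + eta * v           # convex combination stays feasible
```

Because each iterate is a convex combination of feasible points, no projection is ever needed; feasibility holds by construction.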
|
2501.16921
|
Data-Efficient Extremum-Seeking Control Using Kernel-Based Function
Approximation
|
eess.SY cs.SY
|
Existing extremum-seeking control (ESC) approaches typically rely on applying
repeated perturbations to input parameters and performing measurements of the
corresponding performance output. Performing these measurements can be costly
in practical applications, e.g., due to the use of resources, making it
desirable to reduce the number of performed measurements. Moreover, the
required separation between the different timescales in the ESC loop typically
results in slow convergence. With these challenges in mind, this work presents
an approach aimed at both increasing the convergence rate and reducing the
number of measurements that need to be performed. In the proposed approach,
input-output data obtained during operation is used to construct online an
approximation of the system's underlying cost function. By using this
approximation to perform parameter updates whenever a decrease in the cost can
be guaranteed, instead of spending additional measurements on each update, more
efficient use is made of the collected data. As a result,
reductions in both the required number of measurements and update steps are
indeed obtained. In addition, a stability analysis of the novel ESC approach is
provided. The benefits of the synergy between kernel-based function
approximation and standard ESC are demonstrated in simulation on a multi-input
dynamical system.
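The core surrogate idea admits a compact sketch: fit a kernel ridge regressor to the input-output samples collected so far, and test a candidate parameter update on the surrogate before spending a real measurement. The RBF kernel, hyperparameters, and acceptance test below are illustrative assumptions, not the paper's algorithm or stability conditions.

```python
import numpy as np

def rbf(X1, X2, ell=0.5):
    """Squared-exponential kernel matrix between two sample sets."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * ell**2))

def fit_surrogate(X, y, lam=1e-6):
    """Kernel ridge regression: returns a callable cost approximation."""
    alpha = np.linalg.solve(rbf(X, X) + lam * np.eye(len(X)), y)
    return lambda x: float(rbf(np.atleast_2d(x), X) @ alpha)

# input-output data collected from a toy cost J(u) = (u - 1)^2
U = np.linspace(-1.0, 3.0, 15).reshape(-1, 1)
J = ((U - 1.0) ** 2).ravel()
J_hat = fit_surrogate(U, J)

# update step: accept a candidate only if the surrogate predicts a
# decrease, saving a real measurement of the plant
u, u_cand = 2.5, 2.0
if J_hat(np.array([u_cand])) < J_hat(np.array([u])):
    u = u_cand
```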
|
2501.16922
|
Agential AI for Integrated Continual Learning, Deliberative Behavior,
and Comprehensible Models
|
cs.AI cs.LG
|
The contemporary machine learning paradigm excels in statistical data
analysis, solving problems that classical AI could not. However, it faces key
limitations, such as a lack of integration with planning, an incomprehensible
internal structure, and an inability to learn continually. We present the initial design
for an AI system, Agential AI (AAI), in principle operating independently or on
top of statistical methods, designed to overcome these issues. AAI's core is a
learning method that models temporal dynamics with guarantees of completeness,
minimality, and continual learning, using component-level variation and
selection to learn the structure of the environment. It integrates this with a
behavior algorithm that plans on a learned model and encapsulates high-level
behavior patterns. Preliminary experiments on a simple environment show AAI's
effectiveness and potential.
|
2501.16923
|
In-Circuit Characterization of Low-Frequency Stability Margins in Power
Amplifiers
|
eess.SY cs.SY
|
Low-frequency resonances with low stability margins affect video bandwidth
characteristics of power amplifiers. In this work, a non-connectorized
measurement technique is presented to obtain the low-frequency critical poles
at internal nodes of a hybrid amplifier. The experimental setup uses a high
impedance probe connected to a vector network analyzer (VNA) to obtain a fully
calibrated closed-loop frequency response that is identified to get the poles
of the device at low frequency. Compared to previous connectorized solutions,
the approach avoids the ad-hoc insertion of extra RF connectors to access the
low-frequency dynamics of the amplifier. In addition, it simplifies the
characterization at multiple internal nodes, which is worthwhile for an
efficient detection and fixing of critical low frequency dynamics in multistage
power amplifiers. The technique is first applied to dc steady state regimes and
compared to the connectorized approach on a single stage amplifier. Next, it is
applied to a three-stage amplifier to show its potential to detect the origin
of the undesired dynamics and the most effective way to increase stability
margin. Finally, the technique has been extended to the large-signal case to
increase its usefulness for the design and diagnosis of high power amplifiers.
|
2501.16925
|
Detecting harassment and defamation in cyberbullying with
emotion-adaptive training
|
cs.CL
|
Existing research on detecting cyberbullying incidents on social media has
primarily concentrated on harassment and is typically approached as a binary
classification task. However, cyberbullying encompasses various forms, such as
denigration and harassment, which celebrities frequently face. Furthermore,
suitable training data for these diverse forms of cyberbullying remains scarce.
In this study, we first develop a celebrity cyberbullying dataset that
encompasses two distinct types of incidents: harassment and defamation. We
investigate various types of transformer-based models, namely masked (RoBERTa,
BERT and DistilBERT), replaced-token (ELECTRA), autoregressive (XLNet),
masked-and-permuted (MPNet), text-to-text (T5) and large language models (Llama 2 and
Llama 3) under low-resource settings. We find that they perform competitively on
explicit harassment binary detection. However, their performance is
substantially lower on harassment and denigration multi-classification tasks.
Therefore, we propose an emotion-adaptive training framework (EAT) that helps
transfer knowledge from the domain of emotion detection to the domain of
cyberbullying detection to help detect indirect cyberbullying events. EAT
consistently improves the average macro F1, precision and recall by 20% in
cyberbullying detection tasks across nine transformer-based models under
low-resource settings. Our claims are supported by intuitive theoretical
insights and extensive experiments.
|
2501.16928
|
Detecting Critical Resonances in Microwave Amplifiers through Noise
Simulations
|
physics.ins-det cs.SY eess.SY
|
The presence of critical resonances in microwave power amplifiers has a
negative impact on their behavior and performance. These critical resonances are
usually predicted from pole-zero stability simulations. In this paper, a
different and less demanding approach for the circuit designer is proposed. It
is based on performing noise simulations of the amplifier and observing the
rise in the noise spectrum that happens when the system has low damping poles.
Critical resonance detection is simplified since no additional probes have to
be inserted in the circuit and no post-processing for pole-zero analysis is
required. The technique is applied to two amplifier prototypes fabricated in
microstrip hybrid technology and the results are compared with the conventional
pole-zero approach.
|
2501.16929
|
Giving Sense to Inputs: Toward an Accessible Control Framework for
Shared Autonomy
|
cs.RO cs.HC
|
While shared autonomy offers significant potential for assistive robotics,
key questions remain about how to effectively map 2D control inputs to 6D robot
motions. An intuitive framework should allow users to input commands
effortlessly, with the robot responding as expected, without users needing to
anticipate the impact of their inputs. In this article, we propose a dynamic
input mapping framework that links joystick movements to motions on control
frames defined along a trajectory encoded with canal surfaces. We evaluate our
method in a user study with 20 participants, demonstrating that our input
mapping framework reduces the workload and improves usability compared to a
baseline mapping with similar motion encoding. To prepare for deployment in
assistive scenarios, we built on the development from the accessible gaming
community to select an accessible control interface. We then tested the system
in an exploratory study, where three wheelchair users controlled the robot for
both daily living activities and a creative painting task, demonstrating its
feasibility for users closer to our target population.
|
2501.16931
|
Quantifying Uncertainty and Variability in Machine Learning: Confidence
Intervals for Quantiles in Performance Metric Distributions
|
cs.LG stat.AP
|
Machine learning models are widely used in applications where reliability and
robustness are critical. Model evaluation often relies on single-point
estimates of performance metrics such as accuracy, F1 score, or mean squared
error, which fail to capture the inherent variability in model performance. This
variability arises from multiple sources, including train-test split, weights
initialization, and hyperparameter tuning. Investigating the characteristics of
performance metric distributions, rather than focusing on a single point only,
is essential for informed decision-making during model selection and
optimization, especially in high-stakes settings.
How does the performance metric vary due to intrinsic uncertainty in the
selected modeling approach, for example, when the train-test split changes, the
initial weights for optimization are re-drawn, or hyperparameter tuning is
performed with a probabilistic algorithm?
This shifts the focus from identifying a single best model to
understanding a distribution of the performance metric that captures
variability across different training conditions. By running multiple
experiments with varied settings, empirical distributions of performance
metrics can be generated. Analyzing these distributions can lead to more robust
models that generalize well across diverse scenarios.
This contribution explores the use of quantiles and confidence intervals to
analyze such distributions, providing a more complete understanding of model
performance and its uncertainty. Aimed at a statistically interested audience
within the machine learning community, the suggested approaches are easy to
implement and apply to various performance metrics for classification and
regression problems. Given the often long training times in ML, particular
attention is given to small sample sizes (in the order of 10-25).
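One distribution-free construction that fits the small-sample regime above uses order statistics: the number of runs falling below the q-quantile is Binomial(n, q), so a pair of order statistics can be chosen to cover that quantile with the desired confidence. This is a standard textbook construction sketched here, not code from the paper.

```python
import math

# P(X_(l) <= xi_q <= X_(u)) = P(l <= B <= u - 1) with B ~ Binomial(n, q),
# where B counts samples below the q-quantile. We search for the shortest
# order-statistic interval meeting the confidence level.

def binom_cdf(k, n, p):
    return sum(math.comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

def quantile_ci(samples, q=0.5, conf=0.95):
    """Shortest order-statistic interval covering the q-quantile with >= conf."""
    x = sorted(samples)
    n = len(x)
    best = None
    for l in range(1, n + 1):
        for u in range(l + 1, n + 1):
            cov = binom_cdf(u - 1, n, q) - binom_cdf(l - 1, n, q)
            if cov >= conf and (best is None or u - l < best[0]):
                best = (u - l, x[l - 1], x[u - 1])
    return (x[0], x[-1]) if best is None else (best[1], best[2])

# e.g. 12 F1 scores from repeated train-test splits and seed changes
runs = [0.71, 0.74, 0.69, 0.73, 0.75, 0.70, 0.72, 0.68, 0.76, 0.71, 0.73, 0.70]
lo, hi = quantile_ci(runs, q=0.5, conf=0.90)   # CI for the median F1
```

With n = 12 runs the interval is fairly wide, which is exactly the honest message for the small sample sizes discussed above.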
|
2501.16932
|
Online-BLS: An Accurate and Efficient Online Broad Learning System for
Data Stream Classification
|
cs.LG
|
State-of-the-art online learning models generally conduct a single online
gradient descent step when a new sample arrives and thus suffer from suboptimal
model weights. To this end, we introduce an online broad learning system
framework with closed-form solutions for each online update. Different from
employing existing incremental broad learning algorithms for online learning
tasks, which tend to incur degraded accuracy and expensive online update
overhead, we design an effective weight estimation algorithm and an efficient
online updating strategy to remedy the above two deficiencies, respectively.
Specifically, an effective weight estimation algorithm is first developed by
replacing notorious matrix inverse operations with Cholesky decomposition and
forward-backward substitution to improve model accuracy. Second, we devise an
efficient online updating strategy that dramatically reduces online update
time. Theoretical analysis establishes a favorable error bound and low time
complexity for our model. Standard test-then-train evaluation
experiments on various real-world datasets confirm its superiority and
efficiency. Furthermore, our framework is naturally extended to data stream
scenarios with concept drift and exceeds state-of-the-art baselines.
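The numerical substitution at the heart of the weight-estimation step can be sketched directly: solve the regularized normal equations with a Cholesky factorization and two triangular solves instead of forming an explicit inverse. Function names and the ridge term are our illustrative assumptions, not the paper's API.

```python
import numpy as np

def weights_via_cholesky(A, Y, lam=1e-3):
    """Solve (A^T A + lam I) W = A^T Y without forming a matrix inverse."""
    G = A.T @ A + lam * np.eye(A.shape[1])   # symmetric positive definite
    L = np.linalg.cholesky(G)                # G = L L^T
    # L Z = A^T Y (forward substitution in principle; np.linalg.solve is
    # used here for brevity, a dedicated triangular solver exploits L)
    Z = np.linalg.solve(L, A.T @ Y)
    W = np.linalg.solve(L.T, Z)              # L^T W = Z (backward substitution)
    return W

rng = np.random.default_rng(0)
A = rng.standard_normal((200, 30))           # mapped/enhancement features
Y = rng.standard_normal((200, 5))            # one-hot-style targets
W = weights_via_cholesky(A, Y)

# agrees with the explicit-inverse formula, but is better conditioned
W_inv = np.linalg.inv(A.T @ A + 1e-3 * np.eye(30)) @ A.T @ Y
```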
|
2501.16935
|
Beyond Human Intervention: Algorithmic Collusion through Multi-Agent
Learning Strategies
|
econ.TH cs.MA
|
Collusion in market pricing is a concept associated with human actions to
raise market prices through artificially limited supply. Recently, the idea of
algorithmic collusion was put forward, where the human action in the pricing
process is replaced by automated agents. Although experiments have shown that
collusive market equilibria can be reached through such techniques, without the
need for human intervention, many of the techniques developed remain
susceptible to exploitation by other players, making them difficult to
implement in practice. In this article, we explore a situation where an agent
has a multi-objective strategy, and not only learns to unilaterally exploit
market dynamics originating from other algorithmic agents, but also learns to
model the behaviour of other agents directly. Our results show how common
critiques about the viability of algorithmic collusion in real-life settings
can be overcome through the usage of slightly more complex algorithms.
|
2501.16937
|
TAID: Temporally Adaptive Interpolated Distillation for Efficient
Knowledge Transfer in Language Models
|
cs.LG cs.AI cs.CL
|
Causal language models have demonstrated remarkable capabilities, but their
size poses significant challenges for deployment in resource-constrained
environments. Knowledge distillation, a widely-used technique for transferring
knowledge from a large teacher model to a small student model, presents a
promising approach for model compression. A significant remaining issue lies in
the major differences between teacher and student models, namely the
substantial capacity gap, mode averaging, and mode collapse, which pose
barriers during distillation. To address these issues, we introduce
$\textit{Temporally Adaptive Interpolated Distillation (TAID)}$, a novel
knowledge distillation approach that dynamically interpolates student and
teacher distributions through an adaptive intermediate distribution, gradually
shifting from the student's initial distribution towards the teacher's
distribution. We provide a theoretical analysis demonstrating TAID's ability to
prevent mode collapse and empirically show its effectiveness in addressing the
capacity gap while balancing mode averaging and mode collapse. Our
comprehensive experiments demonstrate TAID's superior performance across
various model sizes and architectures in both instruction tuning and
pre-training scenarios. Furthermore, we showcase TAID's practical impact by
developing two state-of-the-art compact foundation models:
$\texttt{TAID-LLM-1.5B}$ for language tasks and $\texttt{TAID-VLM-2B}$ for
vision-language tasks. These results demonstrate TAID's effectiveness in
creating high-performing and efficient models, advancing the development of
more accessible AI technologies.
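The interpolation idea behind TAID can be sketched in a few lines: the distillation target is an intermediate distribution that moves from the student's current distribution toward the teacher's as training progresses. The linear schedule and probability-space mixing below are simplifying assumptions; the paper's schedule is temporally adaptive.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def taid_target(student_logits, teacher_logits, t, T):
    """Intermediate target: the student's distribution at t=0, the teacher's at t=T."""
    alpha = t / T                              # assumed monotone schedule
    return (1 - alpha) * softmax(student_logits) + alpha * softmax(teacher_logits)

def kl_to_target(target, student_logits):
    """KL(target || student), the distillation loss at this step."""
    p_s = softmax(student_logits)
    return float(np.sum(target * (np.log(target + 1e-12) - np.log(p_s + 1e-12))))

s = np.array([2.0, 0.5, -1.0])                 # student logits
te = np.array([0.0, 3.0, -2.0])                # teacher logits
early = kl_to_target(taid_target(s, te, t=0, T=100), s)   # target == student
late = kl_to_target(taid_target(s, te, t=100, T=100), s)  # target == teacher
```

Early in training the loss is near zero regardless of the capacity gap, and the effective target only gradually hardens toward the teacher, which is the mechanism the paper analyzes for avoiding mode collapse.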
|
2501.16944
|
Exact Computation of Any-Order Shapley Interactions for Graph Neural
Networks
|
cs.LG cs.AI
|
Despite the ubiquitous use of Graph Neural Networks (GNNs) in machine learning
(ML) prediction tasks involving graph-structured data, their interpretability
remains challenging. In explainable artificial intelligence (XAI), the Shapley
Value (SV) is the predominant method to quantify contributions of individual
features to an ML model's output. Addressing the limitations of SVs in complex
prediction models, Shapley Interactions (SIs) extend the SV to groups of
features. In this work, we explain single graph predictions of GNNs with SIs
that quantify node contributions and interactions among multiple nodes. By
exploiting the GNN architecture, we show that the structure of interactions in
node embeddings is preserved for graph prediction. As a result, the
exponential complexity of SIs depends only on the receptive fields, i.e. the
message-passing ranges determined by the connectivity of the graph and the
number of convolutional layers. Based on our theoretical results, we introduce
GraphSHAP-IQ, an efficient approach to compute any-order SIs exactly.
GraphSHAP-IQ is applicable to popular message passing techniques in conjunction
with a linear global pooling and output layer. We showcase that GraphSHAP-IQ
substantially reduces the exponential complexity of computing exact SIs on
multiple benchmark datasets. Beyond exact computation, we evaluate
GraphSHAP-IQ's approximation of SIs on popular GNN architectures and compare
with existing baselines. Lastly, we visualize SIs of real-world water
distribution networks and molecule structures using a SI-Graph.
|
2501.16945
|
ToolFactory: Automating Tool Generation by Leveraging LLM to Understand
REST API Documentations
|
cs.LG cs.AI cs.CL cs.SE
|
LLM-based tool agents offer natural language interfaces, enabling users to
seamlessly interact with computing services. While REST APIs are valuable
resources for building such agents, they must first be transformed into
AI-compatible tools. Automatically generating AI-compatible tools from REST API
documents can greatly streamline tool agent development and minimize user
learning curves. However, API documentation often suffers from a lack of
standardization, inconsistent schemas, and incomplete information. To address
these issues, we developed \textbf{ToolFactory}, an open-source pipeline for
automating tool generation from unstructured API documents. To enhance the
reliability of the developed tools, we implemented an evaluation method to
diagnose errors. Furthermore, we built a knowledge base of verified tools,
which we leveraged to infer missing information from poorly documented APIs. We
developed the API Extraction Benchmark, comprising 167 API documents and 744
endpoints in various formats, and designed a JSON schema to annotate them. This
annotated dataset was utilized to train and validate ToolFactory. The
experimental results highlight the effectiveness of ToolFactory. We also
demonstrated ToolFactory by creating a domain-specific AI agent for
glycomaterials research. ToolFactory exhibits significant potential for
facilitating the seamless integration of scientific REST APIs into AI
workflows.
|
2501.16947
|
Image-based Geo-localization for Robotics: Are Black-box Vision-Language
Models there yet?
|
cs.CV cs.RO
|
The advances in Vision-Language models (VLMs) offer exciting opportunities
for robotic applications involving image geo-localization, the problem of
identifying the geo-coordinates of a place based on visual data only. Recent
research has focused on using a VLM as an embedding extractor for
geo-localization; however, the most sophisticated VLMs may only be available as
black boxes that are accessible through an API, and come with a number of
limitations: there is no access to training data, model features and gradients;
retraining is not possible; the number of predictions may be limited by the
API; training on model outputs is often prohibited; and queries are open-ended.
The utilization of a VLM as a stand-alone, zero-shot geo-localization system
using a single text-based prompt is largely unexplored. To bridge this gap,
this paper undertakes the first systematic study, to the best of our knowledge,
to investigate the potential of some of the state-of-the-art VLMs as
stand-alone, zero-shot geo-localization systems in a black-box setting with
realistic constraints. We consider three main scenarios for this thorough
investigation: a) fixed text-based prompt; b) semantically-equivalent
text-based prompts; and c) semantically-equivalent query images. We also take
into account the auto-regressive and probabilistic generation process of the
VLMs when investigating their utility for the geo-localization task by using model
consistency as a metric in addition to traditional accuracy. Our work provides
new insights into the capabilities of different VLMs for the above-mentioned
scenarios.
|
2501.16952
|
Multiple Abstraction Level Retrieve Augment Generation
|
cs.CL cs.AI cs.LG
|
A Retrieval-Augmented Generation (RAG) model powered by a large language
model (LLM) provides a faster and more cost-effective solution for adapting to
new data and knowledge. It also delivers more specialized responses compared to
pre-trained LLMs. However, most existing approaches rely on retrieving
prefix-sized chunks as references to support question-answering (Q/A). This
approach is often deployed to address information needs at a single level of
abstraction, as it struggles to generate answers across multiple levels of
abstraction. In a RAG setting, while LLMs can summarize and answer questions
effectively when provided with sufficient details, retrieving excessive
information often leads to the 'lost in the middle' problem and exceeds token
limitations. We propose a novel RAG approach that uses chunks of multiple
abstraction levels (MAL), including multi-sentence-level, paragraph-level,
section-level, and document-level. The effectiveness of our approach is
demonstrated in an under-explored scientific domain of Glycoscience. Compared
to traditional single-level RAG approaches, our approach improves AI-evaluated
answer correctness of Q/A by 25.739\% on Glyco-related papers.
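The multi-level chunking can be sketched as a single pass that emits chunks at every abstraction level; at query time, retrieval then scores chunks from all levels in one index, so an answer can draw on whichever granularity fits the question. The delimiters and level names below are our assumptions, not the paper's pipeline.

```python
def mal_chunks(document, sentences_per_chunk=3):
    """Emit chunks at multi-sentence, paragraph, section and document level."""
    chunks = []
    for section in document.split("\n\n\n"):          # assumed section delimiter
        paragraphs = [p for p in section.split("\n\n") if p.strip()]
        for para in paragraphs:
            sents = [s.strip() for s in para.split(". ") if s.strip()]
            for i in range(0, len(sents), sentences_per_chunk):
                chunks.append(("multi-sentence",
                               ". ".join(sents[i:i + sentences_per_chunk])))
            chunks.append(("paragraph", para))
        chunks.append(("section", section))
    chunks.append(("document", document))
    return chunks

doc = ("First sentence. Second sentence. Third.\n\n"
       "Second paragraph here.\n\n\n"
       "Second section.")
levels = {level for level, _ in mal_chunks(doc)}
```

Retrieving from coarse levels answers summary-style questions without stuffing the context with every fine-grained chunk, which is how the approach sidesteps the 'lost in the middle' problem noted above.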
|
2501.16961
|
Instantiation-based Formalization of Logical Reasoning Tasks using
Language Models and Logical Solvers
|
cs.AI
|
Robustness of reasoning remains a significant challenge for large language
models, and addressing it is essential for the practical applicability of
AI-driven reasoning systems. We introduce Semantic Self-Verification (SSV), a
novel approach that addresses the key challenge in combining language models
with the rigor of logical solvers: to accurately formulate the reasoning
problem from natural language to the formal language of the solver. SSV uses a
consistency-based approach to produce strong abstract formalizations of
problems using concrete instantiations that are generated by the model and
verified by the solver. In addition to significantly advancing the overall
reasoning accuracy over the state-of-the-art, a key novelty that this approach
presents is a feature of verification that has near-perfect precision over a
significant coverage of cases, as we demonstrate on open reasoning benchmarks.
We propose such *near-certain reasoning* as a new approach to reduce the need
for manual verification in many cases, taking us closer to more dependable and
autonomous AI reasoning systems.
|
2501.16964
|
Few Edges Are Enough: Few-Shot Network Attack Detection with Graph
Neural Networks
|
cs.LG cs.CR
|
Detecting cyberattacks using Graph Neural Networks (GNNs) has seen promising
results recently. Most of the state-of-the-art models that leverage these
techniques require labeled examples, which are hard to obtain in many real-world
scenarios. To address this issue, unsupervised learning and Self-Supervised
Learning (SSL) have emerged as interesting approaches to reduce the dependency
on labeled data. Nonetheless, these methods tend to yield anomaly detection
algorithms rather than effective attack detection systems. This paper
introduces Few Edges Are Enough (FEAE), a GNN-based architecture trained with
SSL and Few-Shot Learning (FSL) to better distinguish between false positive
anomalies and actual attacks. To maximize the potential of few-shot examples,
our model employs a hybrid self-supervised objective that combines the
advantages of contrastive-based and reconstruction-based SSL. By leveraging
only a minimal number of labeled attack events, represented as attack edges,
FEAE achieves competitive performance on two well-known network datasets
compared to both supervised and unsupervised methods. Remarkably, our
experimental results reveal that employing only one malicious event for each
attack type in the dataset is sufficient to achieve substantial improvements.
FEAE not only outperforms self-supervised GNN baselines but also surpasses some
supervised approaches on one of the datasets.
|
2501.16966
|
Heterogeneity-aware Personalized Federated Learning via Adaptive
Dual-Agent Reinforcement Learning
|
cs.LG cs.AI
|
Federated Learning (FL) empowers multiple clients to collaboratively train
machine learning models without sharing local data, making it highly applicable
in heterogeneous Internet of Things (IoT) environments. However, intrinsic
heterogeneity in clients' model architectures and computing capabilities often
results in model accuracy loss and the intractable straggler problem, which
significantly impairs training effectiveness. To tackle these challenges, this
paper proposes a novel Heterogeneity-aware Personalized Federated Learning
method, named HAPFL, via multi-level Reinforcement Learning (RL) mechanisms.
HAPFL optimizes the training process by incorporating three strategic
components: 1) An RL-based heterogeneous model allocation mechanism. The
parameter server employs a Proximal Policy Optimization (PPO)-based RL agent to
adaptively allocate appropriately sized, differentiated models to clients based
on their performance, effectively mitigating performance disparities. 2) An
RL-based training intensity adjustment scheme. The parameter server leverages
another PPO-based RL agent to dynamically fine-tune the training intensity for
each client to further enhance training efficiency and reduce straggling
latency. 3) A knowledge distillation-based mutual learning mechanism. Each
client deploys both a heterogeneous local model and a homogeneous lightweight
model named LiteModel, where these models undergo mutual learning through
knowledge distillation. This uniform LiteModel plays a pivotal role in
aggregating and sharing global knowledge, significantly enhancing the
effectiveness of personalized local training. Experimental results across
multiple benchmark datasets demonstrate that HAPFL not only achieves high
accuracy but also substantially reduces the overall training time by
20.9%-40.4% and decreases straggling latency by 19.0%-48.0% compared to
existing solutions.
|
2501.16969
|
What Really Matters for Learning-based LiDAR-Camera Calibration
|
cs.CV
|
Calibration is an essential prerequisite for the accurate data fusion of
LiDAR and camera sensors. Traditional calibration techniques often require
specific targets or suitable scenes to obtain reliable 2D-3D correspondences.
To tackle the challenge of target-less and online calibration, deep neural
networks have been introduced to solve the problem in a data-driven manner.
While previous learning-based methods have achieved impressive performance on
specific datasets, they still struggle in complex real-world scenarios. Most
existing works focus on improving calibration accuracy but overlook the
underlying mechanisms. In this paper, we revisit the development of
learning-based LiDAR-Camera calibration and encourage the community to pay more
attention to the underlying principles to advance practical applications. We
systematically analyze the paradigm of mainstream learning-based methods, and
identify the critical limitations of regression-based methods with the widely
used data generation pipeline. Our findings reveal that most learning-based
methods inadvertently operate as retrieval networks, focusing more on
single-modality distributions rather than cross-modality correspondences. We
also investigate how the input data format and preprocessing operations impact
network performance and summarize the regression clues to inform further
improvements.
|
2501.16971
|
RODEO: Robust Outlier Detection via Exposing Adaptive
Out-of-Distribution Samples
|
cs.CV cs.LG
|
In recent years, there have been significant improvements in various forms of
image outlier detection. However, outlier detection performance under
adversarial settings lags far behind that in standard settings. This is due to
the lack of effective exposure to adversarial scenarios during training,
especially on unseen outliers, leading to detection models failing to learn
robust features. To bridge this gap, we introduce RODEO, a data-centric
approach that generates effective outliers for robust outlier detection. More
specifically, we show that incorporating outlier exposure (OE) and adversarial
training can be an effective strategy for this purpose, as long as the exposed
training outliers meet certain characteristics, including diversity, conceptual
differentiability, and analogy to the inlier samples. We leverage a
text-to-image model to achieve this goal. We demonstrate both quantitatively
and qualitatively that our adaptive OE method effectively generates ``diverse''
and ``near-distribution'' outliers, leveraging information from both text and
image domains. Moreover, our experimental results show that utilizing our
synthesized outliers significantly enhances the performance of the outlier
detector, particularly in adversarial settings.
|
2501.16973
|
Towards Open-Source and Modular Space Systems with ATMOS
|
cs.RO
|
In the near future, autonomous space systems will make up a large portion of
the spacecraft being deployed. Their tasks will involve autonomous rendezvous
and proximity operations with large structures, such as inspection or assembly
of orbiting space stations, as well as maintenance and human-assistance tasks
over shared workspaces. To promote replicable and reliable scientific results for
autonomous control of spacecraft, we present the design of a space systems
laboratory based on open-source and modular software and hardware. The
simulation software provides a software-in-the-loop (SITL) architecture that
seamlessly transfers simulated results to the ATMOS platforms, developed for
testing of multi-agent autonomy schemes for microgravity. The manuscript
presents the KTH space systems laboratory facilities and the ATMOS platform as
open-source hardware and software contributions. Preliminary results showcase
SITL and real testing.
|
2501.16974
|
Excited-state nonadiabatic dynamics in explicit solvent using machine
learned interatomic potentials
|
physics.chem-ph cs.LG
|
Excited-state nonadiabatic simulations with quantum mechanics/molecular
mechanics (QM/MM) are essential to understand photoinduced processes in
explicit environments. However, the high computational cost of the underlying
quantum chemical calculations limits its application in combination with
trajectory surface hopping methods. Here, we use FieldSchNet, a machine-learned
interatomic potential capable of incorporating electric field effects into the
electronic states, to replace traditional QM/MM electrostatic embedding with
its ML/MM counterpart for nonadiabatic excited state trajectories. The
developed method is applied to furan in water, including five coupled singlet
states. Our results demonstrate that with sufficiently curated training data,
the ML/MM model reproduces the electronic kinetics and structural
rearrangements of QM/MM surface hopping reference simulations. Furthermore, we
identify performance metrics that provide robust and interpretable validation
of model accuracy.
|
2501.16975
|
Over-Tokenized Transformer: Vocabulary is Generally Worth Scaling
|
cs.CL cs.LG
|
Tokenization is a fundamental component of large language models (LLMs), yet
its influence on model scaling and performance is not fully explored. In this
paper, we introduce Over-Tokenized Transformers, a novel framework that
decouples input and output vocabularies to improve language modeling
performance. Specifically, our approach scales up input vocabularies to
leverage multi-gram tokens. Through extensive experiments, we uncover a
log-linear relationship between input vocabulary size and training loss,
demonstrating that larger input vocabularies consistently enhance model
performance, regardless of model size. Using a large input vocabulary, we
achieve performance comparable to double-sized baselines with no additional
cost. Our findings highlight the importance of tokenization in scaling laws and
provide practical insight for tokenizer design, paving the way for more
efficient and powerful LLMs.
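The log-linear relationship reported above can be written as an illustrative fit (the constants $c$ and $k$ are hypothetical placeholders, not values from the paper):

```latex
\mathcal{L}(V_{\mathrm{in}}) \approx c - k \log V_{\mathrm{in}}, \qquad k > 0,
```

where $\mathcal{L}$ is the training loss and $V_{\mathrm{in}}$ the input vocabulary size; the claim is that larger input vocabularies consistently lower the loss, regardless of model size.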
|
2501.16981
|
Modulating CNN Features with Pre-Trained ViT Representations for
Open-Vocabulary Object Detection
|
cs.CV
|
Owing to large-scale image-text contrastive training, pre-trained vision
language model (VLM) like CLIP shows superior open-vocabulary recognition
ability. Most existing open-vocabulary object detectors attempt to utilize the
pre-trained VLM to obtain generalizable representations. F-ViT uses the pre-trained
visual encoder as the backbone network and freezes it during training. However,
the frozen backbone does not benefit from the labeled data to strengthen its
representation. Therefore, we propose a novel two-branch backbone network
design, named ViT-Feature-Modulated Multi-Scale Convolutional network
(VMCNet). VMCNet consists of a trainable convolutional branch, a frozen
pre-trained ViT branch and a feature modulation module. The trainable CNN
branch could be optimized with labeled data while the frozen pre-trained ViT
branch could keep the representation ability derived from large-scale
pre-training. Then, the proposed feature modulation module could modulate the
multi-scale CNN features with the representations from ViT branch. With the
proposed mixed structure, the detector is more likely to discover novel categories.
Evaluated on two popular benchmarks, our method boosts the detection
performance on novel categories and outperforms the baseline. On OV-COCO, the
proposed method achieves 44.3 AP$_{50}^{\mathrm{novel}}$ with ViT-B/16 and 48.5
AP$_{50}^{\mathrm{novel}}$ with ViT-L/14. On OV-LVIS, VMCNet with ViT-B/16 and
ViT-L/14 reaches 27.8 and 38.4 mAP$_{r}$.
|
2501.16986
|
Generative quantum combinatorial optimization by means of a novel
conditional generative quantum eigensolver
|
quant-ph cs.AI cs.LG
|
Quantum computing is entering a transformative phase with the emergence of
logical quantum processors, which hold the potential to tackle complex problems
beyond classical capabilities. While significant progress has been made,
applying quantum algorithms to real-world problems remains challenging. Hybrid
quantum-classical techniques have been explored to bridge this gap, but they
often face limitations in expressiveness, trainability, or scalability. In this
work, we introduce conditional Generative Quantum Eigensolver
(conditional-GQE), a context-aware quantum circuit generator powered by an
encoder-decoder Transformer. Focusing on combinatorial optimization, we train
our generator for solving problems with up to 10 qubits, exhibiting nearly
perfect performance on new problems. By leveraging the high expressiveness and
flexibility of classical generative models, along with an efficient
preference-based training scheme, conditional-GQE provides a generalizable and
scalable framework for quantum circuit generation. Our approach advances hybrid
quantum-classical computing and helps accelerate the transition toward
fault-tolerant quantum computing.
|
2501.16988
|
Marginal and Conditional Importance Measures from Machine Learning
Models and Their Relationship with Conditional Average Treatment Effect
|
stat.ML cs.LG
|
Interpreting black-box machine learning models is challenging due to their
strong dependence on data and inherently non-parametric nature. This paper
reintroduces the concept of importance through "Marginal Variable Importance
Metric" (MVIM), a model-agnostic measure of predictor importance based on the
true conditional expectation function. MVIM evaluates predictors' influence on
continuous or discrete outcomes. A permutation-based estimation approach,
inspired by \citet{breiman2001random} and \citet{fisher2019all}, is proposed to
estimate MVIM. The MVIM estimator is biased when predictors are highly correlated,
as black-box models struggle to extrapolate in low-probability regions. To
address this, we investigated the bias-variance decomposition of MVIM to
understand the source and pattern of the bias under high correlation. A
Conditional Variable Importance Metric (CVIM), adapted from
\citet{strobl2008conditional}, is introduced to reduce this bias. Both MVIM and
CVIM exhibit a quadratic relationship with the conditional average treatment
effect (CATE).
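As a rough illustration of the permutation-based estimation idea credited above to \citet{breiman2001random} and \citet{fisher2019all}, the following sketch shuffles one predictor column and measures the resulting loss increase. The function name, the dict-of-features data layout, and the loss interface are illustrative choices, not the paper's implementation (which targets MVIM via the true conditional expectation function):

```python
import random

def permutation_importance(model, X, y, feature, loss, n_repeats=10, seed=0):
    """Breiman-style marginal importance of `feature`: the average increase
    in loss when that column is shuffled, which breaks its association with
    the outcome while preserving its marginal distribution."""
    rng = random.Random(seed)
    base = loss(model(X), y)  # loss of the unperturbed data
    increases = []
    for _ in range(n_repeats):
        col = [row[feature] for row in X]
        rng.shuffle(col)
        # Rebuild the dataset with the shuffled feature column.
        X_perm = [dict(row, **{feature: v}) for row, v in zip(X, col)]
        increases.append(loss(model(X_perm), y) - base)
    return sum(increases) / n_repeats
```

On a toy problem where the model simply echoes feature `a`, permuting `a` inflates the loss while permuting a constant feature `b` leaves it unchanged, matching the intuition behind the metric.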
|
2501.16992
|
FedEFM: Federated Endovascular Foundation Model with Unseen Data
|
cs.CV
|
In endovascular surgery, the precise identification of catheters and
guidewires in X-ray images is essential for reducing intervention risks.
However, accurately segmenting catheter and guidewire structures is challenging
due to the limited availability of labeled data. Foundation models offer a
promising solution by enabling the collection of similar domain data to train
models whose weights can be fine-tuned for downstream tasks. Nonetheless,
large-scale data collection for training is constrained by the necessity of
maintaining patient privacy. This paper proposes a new method to train a
foundation model in a decentralized federated learning setting for endovascular
intervention. To ensure the feasibility of the training, we tackle the unseen
data issue using differentiable Earth Mover's Distance within a knowledge
distillation framework. Once trained, our foundation model's weights provide
valuable initialization for downstream tasks, thereby enhancing task-specific
performance. Intensive experiments show that our approach achieves new
state-of-the-art results, contributing to advancements in endovascular
intervention and robotic-assisted endovascular surgery, while addressing the
critical issue of data sharing in the medical domain.
|
2501.16997
|
MAUCell: An Adaptive Multi-Attention Framework for Video Frame
Prediction
|
cs.CV cs.LG cs.RO
|
Temporal sequence modeling is the foundation of video prediction systems,
real-time forecasting, and anomaly detection applications. Achieving accurate
predictions with efficient resource consumption remains an open problem in
contemporary temporal sequence modeling. We introduce the Multi-Attention Unit
(MAUCell), which combines Generative Adversarial Networks (GANs) and
spatio-temporal attention mechanisms to improve video frame prediction. Our
approach employs three types of attention models to capture intricate motion
sequences. A dynamic combination of their outputs allows the model to attain
both high decision accuracy and superior visual quality while remaining
computationally efficient. The GAN component makes generated frames appear more
lifelike, so the framework produces output sequences that mimic real-world
footage. The design maintains an equilibrium between temporal continuity and
spatial accuracy to deliver reliable video prediction. A comprehensive
evaluation combining the perceptual LPIPS measurement with the classic metrics
MSE, MAE, SSIM, and PSNR shows that MAUCell outperforms contemporary approaches
on the Moving MNIST, KTH Action, and CASIA-B (Preprocessed) benchmarks. Our
analysis also indicates that MAUCell meets practical runtime requirements.
These findings demonstrate how GANs combined with attention mechanisms yield
better video frame prediction.
|
2501.17002
|
Covert Adversarial Actuators in Finite MDPs
|
cs.IT math.IT
|
We consider a Markov decision process (MDP) in which actions prescribed by
the controller are executed by a separate actuator, which may behave
adversarially. At each time step, the controller selects and transmits an
action to the actuator; however, the actuator may deviate from the intended
action to degrade the control reward. Given that the controller observes only
the sequence of visited states, we investigate whether the actuator can
covertly deviate from the controller's policy to minimize its reward without
being detected. We establish conditions for covert adversarial behavior over an
infinite time horizon and formulate an optimization problem to determine the
optimal adversarial policy under these conditions. Additionally, we derive the
asymptotic error exponents for detection in two scenarios: (1) a binary
hypothesis testing framework, where the actuator either follows the prescribed
policy or a known adversarial strategy, and (2) a composite hypothesis testing
framework, where the actuator may employ any stationary policy. For the latter
case, we also propose an optimization problem to maximize the adversary's
performance.
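As standard information-theoretic background on the first (binary) scenario, not the paper's exact statement, the Chernoff-Stein exponent for testing between two ergodic Markov chains takes the form (all symbols here are illustrative):

```latex
\beta_n \doteq e^{-nE}, \qquad
E = \sum_{s} \mu_{\pi}(s)\, D\!\left(P_{\pi}(\cdot \mid s) \,\middle\|\, P_{\tilde{\pi}}(\cdot \mid s)\right),
```

where $\beta_n$ is the type-II error probability over $n$ observed transitions, $P_{\pi}$ and $P_{\tilde{\pi}}$ are the state-transition kernels induced by the prescribed and adversarial policies, and $\mu_{\pi}$ is the stationary distribution under the prescribed policy.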
|
2501.17010
|
New Quantum MDS Codes with Flexible Parameters from Hermitian
Self-Orthogonal GRS Codes
|
cs.IT math.IT
|
Let $q$ be a prime power.
Let $\lambda>1$ be a divisor of $q-1$, and let $\tau>1$ and $\rho>1$ be
divisors of $q+1$.
Under certain conditions we prove that there exists an MDS stabilizer quantum
code with length
$n=\lambda \tau \sigma$ where $2\le \sigma \le \rho$.
This is a flexible construction, which
includes new MDS parameters not known before.
|
2501.17011
|
MIDI-GPT: A Controllable Generative Model for Computer-Assisted
Multitrack Music Composition
|
cs.SD cs.LG cs.MM eess.AS
|
We present and release MIDI-GPT, a generative system based on the Transformer
architecture that is designed for computer-assisted music composition
workflows. MIDI-GPT supports the infilling of musical material at the track and
bar level, and can condition generation on attributes including: instrument
type, musical style, note density, polyphony level, and note duration. In order
to integrate these features, we employ an alternative representation for
musical material, creating a time-ordered sequence of musical events for each
track and concatenating several tracks into a single sequence, rather than
using a single time-ordered sequence where the musical events corresponding to
different tracks are interleaved. We also propose a variation of our
representation allowing for expressiveness. We present experimental results
that demonstrate that MIDI-GPT is able to consistently avoid duplicating the
musical material it was trained on, generate music that is stylistically
similar to the training dataset, and that attribute controls allow enforcing
various constraints on the generated material. We also outline several
real-world applications of MIDI-GPT, including collaborations with industry
partners that explore the integration and evaluation of MIDI-GPT into
commercial products, as well as several artistic works produced using it.
|
2501.17015
|
Revisit Mixture Models for Multi-Agent Simulation: Experimental Study
within a Unified Framework
|
cs.AI cs.MA cs.RO
|
Simulation plays a crucial role in assessing autonomous driving systems,
where the generation of realistic multi-agent behaviors is a key aspect. In
multi-agent simulation, the primary challenges include behavioral multimodality
and closed-loop distributional shifts. In this study, we revisit mixture models
for generating multimodal agent behaviors, which can cover the mainstream
methods including continuous mixture models and GPT-like discrete models.
Furthermore, we introduce a closed-loop sample generation approach tailored for
mixture models to mitigate distributional shifts. Within the unified mixture
model~(UniMM) framework, we recognize critical configurations from both model
and data perspectives. We conduct a systematic examination of various model
configurations, including positive component matching, continuous regression,
prediction horizon, and the number of components. Moreover, our investigation
into the data configuration highlights the pivotal role of closed-loop samples
in achieving realistic simulations. To extend the benefits of closed-loop
samples across a broader range of mixture models, we further address the
shortcut learning and off-policy learning issues. Leveraging insights from our
exploration, the distinct variants proposed within the UniMM framework,
including discrete, anchor-free, and anchor-based models, all achieve
state-of-the-art performance on the WOSAC benchmark.
|
2501.17018
|
Six-Degree-of-Freedom Motion Emulation for Data-Driven Modeling of
Underwater Vehicles
|
cs.RO
|
This article presents a collaborative research effort aimed at developing a
novel six-degree-of-freedom (6-DOF) motion platform for the empirical
characterization of hydrodynamic forces crucial for the control and stability
of surface and subsurface vehicles. Traditional experimental methods, such as
the Planar Motion Mechanism (PMM), are constrained by the number of simultaneously
articulated DOFs and restricted to single-frequency testing, making such
systems impractical for resolving frequency-dependent added mass or damping
matrices. The 6-DOF platform, termed a hexapod, overcomes these limitations by
offering enhanced maneuverability and the ability to test broad-banded
frequency spectra in multiple degrees of freedom in a single experiment.
|
2501.17021
|
On Oblivious Transfer Capacity of Noisy Multiple Access Channel
|
cs.IT cs.CR math.IT
|
This work investigates the problem of Oblivious Transfer (OT) over a noisy
Multiple Access Channel (MAC) involving two non-colluding senders and a single
receiver. The channel model is characterized by correlations among the parties,
with the parties assumed to be either honest-but-curious or, in the receiver's
case, potentially malicious. We propose a multiparty protocol for
honest-but-curious parties where the general MAC is reduced to a certain
correlation. In scenarios where the receiver is malicious, the protocol
achieves an achievable rate region.
|
2501.17022
|
Mobile Manipulation Instruction Generation from Multiple Images with
Automatic Metric Enhancement
|
cs.RO
|
We consider the problem of generating free-form mobile manipulation
instructions based on a target object image and receptacle image. Conventional
image captioning models are not able to generate appropriate instructions
because their architectures are typically optimized for single-image inputs. In this
study, we propose a model that handles both the target object and receptacle to
generate free-form instruction sentences for mobile manipulation tasks.
Moreover, we introduce a novel training method that effectively incorporates
the scores from both learning-based and n-gram based automatic evaluation
metrics as rewards. This method enables the model to learn the co-occurrence
relationships between words and appropriate paraphrases. Results demonstrate
that our proposed method outperforms baseline methods including representative
multimodal large language models on standard automatic evaluation metrics.
Moreover, physical experiments reveal that using our method to augment data on
language instructions improves the performance of an existing multimodal
language understanding model for mobile manipulation.
|
2501.17030
|
Challenges in Ensuring AI Safety in DeepSeek-R1 Models: The Shortcomings
of Reinforcement Learning Strategies
|
cs.LG cs.AI cs.CL cs.CR
|
Large Language Models (LLMs) have achieved remarkable progress in reasoning,
alignment, and task-specific performance. However, ensuring harmlessness in
these systems remains a critical challenge, particularly in advanced models
like DeepSeek-R1. This paper examines the limitations of Reinforcement Learning
(RL) as the primary approach for reducing harmful outputs in DeepSeek-R1 and
compares it with Supervised Fine-Tuning (SFT). While RL improves reasoning
capabilities, it faces challenges such as reward hacking, generalization
failures, language mixing, and high computational costs. We propose hybrid
training approaches combining RL and SFT to achieve robust harmlessness
reduction. Usage recommendations and future directions for deploying
DeepSeek-R1 responsibly are also presented.
|
2501.17037
|
Standardised schema and taxonomy for AI incident databases in critical
digital infrastructure
|
cs.CY cs.AI cs.HC
|
The rapid deployment of Artificial Intelligence (AI) in critical digital
infrastructure introduces significant risks, necessitating a robust framework
for systematically collecting AI incident data to prevent future incidents.
Existing databases lack the granularity as well as the standardized structure
required for consistent data collection and analysis, impeding effective
incident management. This work proposes a standardized schema and taxonomy for
AI incident databases, addressing these challenges by enabling detailed and
structured documentation of AI incidents across sectors. Key contributions
include developing a unified schema, introducing new fields such as incident
severity, causes, and harms caused, and proposing a taxonomy for classifying AI
incidents in critical digital infrastructure. The proposed solution facilitates
more effective incident data collection and analysis, thus supporting
evidence-based policymaking, enhancing industry safety measures, and promoting
transparency. This work lays the foundation for a coordinated global response
to AI incidents, ensuring trust, safety, and accountability in using AI across
regions.
|
2501.17039
|
Enhanced Retrieval of Long Documents: Leveraging Fine-Grained Block
Representations with Large Language Models
|
cs.IR
|
In recent years, large language models (LLMs) have demonstrated exceptional
power in various domains, including information retrieval. Most of the previous
practices involve leveraging these models to create a single embedding for each
query, each passage, or each document individually, a strategy exemplified and
used by the Retrieval-Augmented Generation (RAG) framework. While this method
has proven effective, we argue that it falls short in fully capturing the
nuanced intricacies of document-level texts due to its reliance on a relatively
coarse-grained representation. To address this limitation, we introduce a
novel, fine-grained approach aimed at enhancing the accuracy of relevance
scoring for long documents. Our methodology first segments a long document
into blocks, each of which is embedded using an LLM, for matching with the
query representation. When calculating the relevance score, we aggregate the
query-block relevance scores through a weighted sum method, yielding a
comprehensive score for the query with the entire document. Despite its
apparent simplicity, our experimental findings reveal that this approach
outperforms standard representation methods and achieves a significant
reduction in embedding generation latency. Moreover, carefully optimized
pairwise loss functions yield further performance gains.
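A minimal sketch of the block-level scoring just described: cosine similarity in pure Python stands in for the LLM-produced embeddings, and uniform weights are one simple choice for the weighted sum (the function names are illustrative, not the paper's API):

```python
import math

def cosine(u, v):
    """Cosine similarity between two dense vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def document_score(query_emb, block_embs, weights=None):
    """Aggregate query-block relevance into one document-level score
    via a weighted sum; uniform weights by default."""
    sims = [cosine(query_emb, b) for b in block_embs]
    if weights is None:
        weights = [1.0 / len(sims)] * len(sims)
    return sum(w * s for w, s in zip(weights, sims))
```

For example, a query aligned with one of two orthogonal blocks scores 0.5 under uniform weighting; learned or length-based weights could emphasize the most relevant blocks instead.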
|
2501.17041
|
Benchmarking Quantum Convolutional Neural Networks for Signal
Classification in Simulated Gamma-Ray Burst Detection
|
astro-ph.HE cs.AI quant-ph
|
This study evaluates the use of Quantum Convolutional Neural Networks (QCNNs)
for identifying signals resembling Gamma-Ray Bursts (GRBs) within simulated
astrophysical datasets in the form of light curves. The task addressed here
focuses on distinguishing GRB-like signals from background noise in simulated
Cherenkov Telescope Array Observatory (CTAO) data, the next-generation
astrophysical observatory for very high-energy gamma-ray science. QCNNs, a
quantum counterpart of classical Convolutional Neural Networks (CNNs), leverage
quantum principles to process and analyze high-dimensional data efficiently. We
implemented a hybrid quantum-classical machine learning technique using the
Qiskit framework, with the QCNNs trained on a quantum simulator. Several QCNN
architectures were tested, employing different encoding methods such as Data
Reuploading and Amplitude encoding. Key findings include that QCNNs achieved
accuracy comparable to classical CNNs, often surpassing 90\%, while using fewer
parameters, potentially leading to more efficient models in terms of
computational resources. A benchmark study further examined how hyperparameters
like the number of qubits and encoding methods affected performance, with more
qubits and advanced encoding methods generally enhancing accuracy but
increasing complexity. QCNNs showed robust performance on time-series datasets,
successfully detecting GRB signals with high precision. The research is a
pioneering effort in applying QCNNs to astrophysics, offering insights into
their potential and limitations. This work sets the stage for future
investigations to fully realize the advantages of QCNNs in astrophysical data
analysis.
|
2501.17042
|
Emergence of network communities driven by local rules
|
physics.soc-ph cond-mat.dis-nn cs.DM cs.SI math.CO
|
Natural systems are modeled by networks in which nodes represent the system
units and links their interactions. The nodes of a network are often segregated
into communities with different connectivity patterns. Node heterogeneity, such
as political affiliation in social networks or biological function in gene
networks, is often highlighted as a key factor driving the segregation of nodes
into communities. Here I demonstrate that node heterogeneity is not a necessary
requirement. Network communities are bound to emerge as a consequence of the
local nature of network evolution. To this end I introduce the Ramsey community
number, $r_C$, the minimum graph size that guarantees the emergence of network
communities with almost certainty. I show that the Watts-Strogatz, local-search,
and duplication-split network models all have finite $r_C$ values. In contrast,
random graphs do not have the emergent-communities property, and the
Barab\'asi-Albert model does not reach certainty for the emergence of
communities. I conclude that network communities are an emergent property
rooted in the local nature of network evolution.
|
2501.17044
|
Synthesizing 3D Abstractions by Inverting Procedural Buildings with
Transformers
|
cs.CV cs.AI cs.LG
|
We generate abstractions of buildings, reflecting the essential aspects of
their geometry and structure, by learning to invert procedural models. We first
build a dataset of abstract procedural building models paired with simulated
point clouds and then learn the inverse mapping through a transformer. Given a
point cloud, the trained transformer then infers the corresponding abstracted
building in terms of a programmatic language description. This approach
leverages expressive procedural models developed for gaming and animation, and
thereby retains desirable properties such as efficient rendering of the
inferred abstractions and strong priors for regularity and symmetry. Our
approach achieves good reconstruction accuracy in terms of geometry and
structure, as well as structurally consistent inpainting.
|
2501.17047
|
How Linguistics Learned to Stop Worrying and Love the Language Models
|
cs.CL
|
Language models can produce fluent, grammatical text. Nonetheless, some
maintain that language models don't really learn language and also that, even
if they did, that would not be informative for the study of human learning and
processing. On the other side, there have been claims that the success of LMs
obviates the need for studying linguistic theory and structure. We argue that
both extremes are wrong. LMs can contribute to fundamental questions about
linguistic structure, language processing, and learning. They force us to
rethink arguments about learning and are informative for major questions in
linguistic theory. But they do not replace linguistic structure and theory. We
offer an optimistic take on the relationship between language models and
linguistics.
|
2501.17049
|
Hellinger-Kantorovich Gradient Flows: Global Exponential Decay of
Entropy Functionals
|
math.AP cs.LG math.OC stat.ML
|
We investigate a family of gradient flows of positive and probability
measures, focusing on the Hellinger-Kantorovich (HK) geometry, which unifies
the transport mechanism of Otto-Wasserstein and the birth-death mechanism of
Hellinger (or Fisher-Rao). A central contribution is a complete
characterization of global exponential decay behaviors of entropy functionals
(e.g. KL, $\chi^2$) under Otto-Wasserstein and Hellinger-type gradient flows.
In particular, for the more challenging analysis of HK gradient flows on
positive measures -- where the typical log-Sobolev arguments fail -- we develop
a specialized shape-mass decomposition that enables new analysis results. Our
approach also leverages the (Polyak-)\L{}ojasiewicz-type functional
inequalities and a careful extension of classical dissipation estimates. These
findings provide a unified and complete theoretical framework for gradient
flows and underpin applications in computational algorithms for statistical
inference, optimization, and machine learning.
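For readers unfamiliar with the type of decay statement involved, the classical Otto-Wasserstein case reads as follows (standard background, not the paper's HK-specific result): if the target $\pi$ satisfies a log-Sobolev inequality with constant $\lambda > 0$, the Wasserstein gradient flow $(\rho_t)$ of the KL divergence obeys

```latex
\frac{d}{dt}\,\mathrm{KL}(\rho_t \,\|\, \pi) = -\,I(\rho_t \,\|\, \pi)
\;\le\; -2\lambda\, \mathrm{KL}(\rho_t \,\|\, \pi)
\;\;\Longrightarrow\;\;
\mathrm{KL}(\rho_t \,\|\, \pi) \le e^{-2\lambda t}\, \mathrm{KL}(\rho_0 \,\|\, \pi),
```

where $I$ denotes the relative Fisher information. The abstract's point is that the analogous HK statement on positive measures cannot be obtained by this log-Sobolev route, motivating the shape-mass decomposition.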
|
2501.17053
|
Contextual Self-paced Learning for Weakly Supervised Spatio-Temporal
Video Grounding
|
cs.CV
|
In this work, we focus on Weakly Supervised Spatio-Temporal Video Grounding
(WSTVG). It is a multimodal task aimed at localizing specific subjects
spatio-temporally based on textual queries without bounding box supervision.
Motivated by recent advancements in multi-modal foundation models for grounding
tasks, we first explore the potential of state-of-the-art object detection
models for WSTVG. Despite their robust zero-shot capabilities, our adaptation
reveals significant limitations, including inconsistent temporal predictions,
inadequate understanding of complex queries, and challenges in adapting to
difficult scenarios. We propose CoSPaL (Contextual Self-Paced Learning), a
novel approach which is designed to overcome these limitations. CoSPaL
integrates three core components: (1) Tubelet Phrase Grounding (TPG), which
introduces spatio-temporal prediction by linking textual queries to tubelets;
(2) Contextual Referral Grounding (CRG), which improves comprehension of
complex queries by extracting contextual information to refine object
identification over time; and (3) Self-Paced Scene Understanding (SPS), a
training paradigm that progressively increases task difficulty, enabling the
model to adapt to complex scenarios by transitioning from coarse to
fine-grained understanding.
|
2501.17054
|
Generative diffusion models from a PDE perspective
|
math.PR cs.LG
|
Diffusion models have become the de facto framework for generating new
datasets. The core of these models lies in the ability to reverse a diffusion
process in time. The goal of this manuscript is to explain, from a PDE
perspective, how this method works and how to derive the PDE governing the
reverse dynamics as well as to study its solution analytically. By linking
forward and reverse dynamics, we show that the reverse process's distribution
has its support contained within the original distribution. Consequently,
diffusion methods, in their analytical formulation, do not inherently
regularize the original distribution, and thus, there is no generalization
principle. This raises a question: where does generalization arise, given that
in practice it does occur? Moreover, we derive an explicit solution to the
reverse process's SDE under the assumption that the starting point of the
forward process is fixed. This provides a new derivation that links two popular
approaches to generative diffusion models: stable diffusion (discrete dynamics)
and the score-based approach (continuous dynamics). Finally, we explore the
case where the original distribution consists of a finite set of data points.
In this scenario, the reverse dynamics are explicit (i.e., the loss function
has a clear minimizer), and solving the dynamics fails to generate new samples:
the dynamics converge to the original samples. In a sense, solving the
minimization problem exactly is "too good for its own good" (i.e., an
overfitting regime).
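For orientation, the forward and reverse dynamics discussed above can be stated in the standard score-based form (textbook notation for an Ornstein-Uhlenbeck forward process, not necessarily the manuscript's exact conventions):

```latex
\text{Forward (OU) SDE:}\quad
dX_t = -X_t\,dt + \sqrt{2}\,dW_t, \qquad X_0 \sim p_{\mathrm{data}},
```
```latex
\text{Reverse SDE:}\quad
dY_t = \bigl[\,Y_t + 2\,\nabla \log p_{T-t}(Y_t)\bigr]\,dt + \sqrt{2}\,d\bar{W}_t,
\qquad Y_0 \sim p_T,
```

where $p_t$ is the law of $X_t$ and $\nabla \log p_t$ is the score. The Fokker-Planck equation of the forward process is the PDE whose time reversal governs the generative dynamics.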
|
2501.17059
|
Channel Estimation for XL-MIMO Systems with Decentralized Baseband
Processing: Integrating Local Reconstruction with Global Refinement
|
cs.IT eess.SP math.IT
|
In this paper, we investigate the channel estimation problem for extremely
large-scale multiple-input multiple-output (XL-MIMO) systems with a hybrid
analog-digital architecture, implemented within a decentralized baseband
processing (DBP) framework with a star topology. Existing centralized and fully
decentralized channel estimation methods face limitations due to excessive
computational complexity or degraded performance. To overcome these challenges,
we propose a novel two-stage channel estimation scheme that integrates local
sparse reconstruction with global fusion and refinement. Specifically, in the
first stage, by exploiting the sparsity of channels in the angular-delay
domain, the local reconstruction task is formulated as a sparse signal recovery
problem. To solve it, we develop a graph neural networks-enhanced sparse
Bayesian learning (SBL-GNNs) algorithm, which effectively captures dependencies
among channel coefficients, significantly improving estimation accuracy. In the
second stage, the local estimates from the local processing units (LPUs) are
aligned into a global angular domain for fusion at the central processing unit
(CPU). Based on the aggregated observations, the channel refinement is modeled
as a Bayesian denoising problem. To efficiently solve it, we devise a
variational message passing algorithm that incorporates a Markov chain-based
hierarchical sparse prior, effectively leveraging both the sparsity and the
correlations of the channels in the global angular-delay domain. Simulation
results validate the effectiveness and superiority of the proposed SBL-GNNs
algorithm over existing methods, demonstrating improved estimation performance
and reduced computational complexity.
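In a generic compressed-sensing formulation (standard notation, not necessarily the paper's), the first-stage local reconstruction at each LPU can be written as:

```latex
\mathbf{y} = \boldsymbol{\Phi}\,\mathbf{F}\,\mathbf{h}_{\mathrm{ad}} + \mathbf{n},
\qquad
p(h_i \mid \gamma_i) = \mathcal{N}(0, \gamma_i),
```

where $\mathbf{y}$ are the observations through the hybrid analog-digital front end, $\boldsymbol{\Phi}$ the analog combining matrix, $\mathbf{F}$ the angular-delay dictionary, $\mathbf{h}_{\mathrm{ad}}$ the sparse channel representation, and $\mathbf{n}$ noise. Sparse Bayesian learning places the hierarchical Gaussian prior shown on each coefficient and infers the variances $\gamma_i$; the GNN enhancement captures dependencies among coefficients that the factorized prior misses.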
|
2501.17062
|
EdgeMLOps: Operationalizing ML models with Cumulocity IoT and
thin-edge.io for Visual Quality Inspection
|
cs.LG cs.AI cs.CV
|
This paper introduces EdgeMLOps, a framework leveraging Cumulocity IoT and
thin-edge.io for deploying and managing machine learning models on
resource-constrained edge devices. We address the challenges of model
optimization, deployment, and lifecycle management in edge environments. The
framework's efficacy is demonstrated through a visual quality inspection (VQI)
use case where images of assets are processed on edge devices, enabling
real-time condition updates within an asset management system. Furthermore, we
evaluate the performance benefits of different quantization methods,
specifically static and dynamic signed-int8, on a Raspberry Pi 4, demonstrating
significant inference time reductions compared to FP32 precision. Our results
highlight the potential of EdgeMLOps to enable efficient and scalable AI
deployments at the edge for industrial applications.
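To make the quantization trade-off concrete, here is a minimal, self-contained sketch of per-tensor symmetric signed-int8 quantization. This illustrates the arithmetic only; EdgeMLOps itself would rely on an inference runtime's quantizer, and the function names here are hypothetical:

```python
def quantize_int8(values):
    """Map floats to signed int8 using a per-tensor symmetric scale."""
    max_abs = max(abs(v) for v in values) or 1.0
    scale = max_abs / 127.0  # symmetric scheme uses the range [-127, 127]
    # Round to the nearest integer and clamp to the signed-int8 range.
    q = [max(-128, min(127, round(v / scale))) for v in values]
    return q, scale

def dequantize_int8(q, scale):
    """Recover approximate float values from int8 codes."""
    return [qi * scale for qi in q]

weights = [0.5, -1.27, 0.0, 1.27]
q, scale = quantize_int8(weights)      # q == [50, -127, 0, 127]
restored = dequantize_int8(q, scale)   # close to the original weights
```

Static quantization fixes such scales ahead of time from calibration data, while dynamic quantization computes activation scales at inference time; both shrink each value from 32 to 8 bits, which is where the reported inference-time reductions over FP32 come from.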
|
2501.17070
|
Context is Key for Agent Security
|
cs.CR cs.CL cs.LG
|
Judging the safety of an action, whether taken by a human or a system, must
take into account the context in which the action takes place. For example,
deleting an email from a user's mailbox may or may not be appropriate depending
on the email's content, the user's goals, or even available space. Systems
today that make these judgements -- providing security against harmful or
inappropriate actions -- rely on manually-crafted policies or user confirmation
for each relevant context. With the upcoming deployment of systems like
generalist agents, we argue that we must rethink security designs to adapt to
the scale of contexts and capabilities of these systems. As a first step, this
paper explores contextual security in the domain of agents and proposes
contextual security for agents (Conseca), a framework to generate just-in-time,
contextual, and human-verifiable security policies.
|
2501.17074
|
DataLens: ML-Oriented Interactive Tabular Data Quality Dashboard
|
cs.DB
|
Maintaining high data quality is crucial for reliable data analysis and
machine learning (ML). However, existing data quality management tools often
lack automation, interactivity, and integration with ML workflows. This
demonstration paper introduces DataLens, a novel interactive dashboard designed
to streamline and automate the data quality management process for tabular
data. DataLens integrates a suite of data profiling, error detection, and
repair tools, including statistical, rule-based, and ML-based methods. It
features a user-in-the-loop module for interactive rule validation, data
labeling, and custom rule definition, enabling domain experts to guide the
cleaning process. Furthermore, DataLens implements an iterative cleaning module
that automatically selects optimal cleaning tools based on downstream ML model
performance. To ensure reproducibility, DataLens generates DataSheets capturing
essential metadata and integrates with MLflow and Delta Lake for experiment
tracking and data version control. This demonstration showcases DataLens's
capabilities in effectively identifying and correcting data errors, improving
data quality for downstream tasks, and promoting reproducibility in data
cleaning pipelines.
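The iterative cleaning idea, selecting the cleaning tool that maximizes a downstream metric, can be sketched as follows. This is a hypothetical illustration, not DataLens's API: the cleaners and the scoring function are stand-ins (a real system would train the downstream ML model and measure validation performance):

```python
def drop_missing(rows):
    """Cleaner 1: discard any row containing a missing value."""
    return [r for r in rows if None not in r]

def impute_zero(rows):
    """Cleaner 2: replace missing values with zero."""
    return [[0 if v is None else v for v in r] for r in rows]

def downstream_score(rows):
    """Stand-in for downstream ML performance; here, rows retained."""
    return len(rows)

def select_best_cleaner(rows, cleaners):
    """Apply each cleaner and keep the one with the best downstream score."""
    best_name, best_rows, best_score = None, rows, float("-inf")
    for name, clean in cleaners.items():
        cleaned = clean(rows)
        score = downstream_score(cleaned)
        if score > best_score:
            best_name, best_rows, best_score = name, cleaned, score
    return best_name, best_rows

data = [[1, 2], [None, 3], [4, None]]
name, cleaned = select_best_cleaner(
    data, {"drop_missing": drop_missing, "impute_zero": impute_zero}
)
```

In this toy run the zero-imputation cleaner wins because it retains all rows; with a real model-based score, the loop would instead favor whichever repair most improves the downstream task.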
|