| id | title | categories | abstract |
|---|---|---|---|
2501.11673
|
Randomized Kaczmarz Methods with Beyond-Krylov Convergence
|
math.NA cs.DS cs.LG cs.NA math.OC stat.ML
|
Randomized Kaczmarz methods form a family of linear system solvers which
converge by repeatedly projecting their iterates onto randomly sampled
equations. While effective in some contexts, such as highly over-determined
least squares, Kaczmarz methods are traditionally deemed secondary to Krylov
subspace methods, since this latter family of solvers can exploit outliers in
the input's singular value distribution to attain fast convergence on
ill-conditioned systems.
In this paper, we introduce Kaczmarz++, an accelerated randomized block
Kaczmarz algorithm that exploits outlying singular values in the input to
attain a fast Krylov-style convergence. Moreover, we show that Kaczmarz++
captures large outlying singular values provably faster than popular Krylov
methods, for both over- and under-determined systems. We also develop an
optimized variant for positive semidefinite systems, called CD++, demonstrating
empirically that it is competitive in arithmetic operations with both CG and
GMRES on a collection of benchmark problems. To attain these results, we
introduce several novel algorithmic improvements to the Kaczmarz framework,
including adaptive momentum acceleration, Tikhonov-regularized projections, and
a memoization scheme for reusing information from previously sampled
equation blocks.
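As background for the solver family this abstract builds on, here is a minimal sketch of the classical randomized Kaczmarz iteration with squared-row-norm sampling (Strohmer-Vershynin), not the paper's accelerated Kaczmarz++:

```python
import numpy as np

def randomized_kaczmarz(A, b, iters=2000, seed=0):
    """Basic randomized Kaczmarz: project the iterate onto one sampled row per step."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    # Sample rows with probability proportional to ||a_i||^2.
    probs = np.sum(A ** 2, axis=1) / np.sum(A ** 2)
    x = np.zeros(n)
    for _ in range(iters):
        i = rng.choice(m, p=probs)
        a = A[i]
        # Project x onto the hyperplane {y : <a_i, y> = b_i}.
        x += (b[i] - a @ x) / (a @ a) * a
    return x

# Consistent, highly over-determined system.
rng = np.random.default_rng(1)
A = rng.standard_normal((50, 5))
x_true = rng.standard_normal(5)
b = A @ x_true
x_hat = randomized_kaczmarz(A, b)
```

On well-conditioned consistent systems this simple scheme already converges linearly in expectation; the paper's contribution concerns acceleration and exploiting outlying singular values, which this sketch does not attempt.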
|
2501.11689
|
Randomness, exchangeability, and conformal prediction
|
cs.LG math.ST stat.ML stat.TH
|
This paper continues development of the functional theory of randomness, a
modification of the algorithmic theory of randomness getting rid of unspecified
additive constants. It introduces new kinds of confidence predictors, including
randomness predictors (the most general confidence predictors based on the
assumption of IID observations) and exchangeability predictors (the most
general confidence predictors based on the assumption of exchangeable
observations). The main result implies that both are close to conformal
predictors and quantifies the difference between randomness prediction and
conformal prediction.
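For context, a minimal split-conformal sketch of the kind of conformal predictor the paper relates to; the quantile rule below is the standard split-conformal construction, not the paper's randomness or exchangeability predictors:

```python
import numpy as np

def split_conformal_interval(cal_residuals, alpha=0.1):
    """Split conformal prediction: the conformal quantile of calibration
    residuals gives an interval half-width with finite-sample coverage
    under exchangeability."""
    n = len(cal_residuals)
    # Use the ceil((n+1)(1-alpha))-th order statistic.
    k = int(np.ceil((n + 1) * (1 - alpha)))
    return np.sort(cal_residuals)[k - 1]

rng = np.random.default_rng(0)
# Calibration residuals |y - y_hat| from an exchangeable sample.
residuals = np.abs(rng.standard_normal(999))
q = split_conformal_interval(residuals, alpha=0.1)
# Prediction set for a new point: [y_hat - q, y_hat + q], with >= 90%
# coverage under the exchangeability assumption.
```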
|
2501.11695
|
Spatially-Delineated Domain-Adapted AI Classification: An Application
for Oncology Data
|
cs.LG cs.AI
|
Given multi-type point maps from different place-types (e.g., tumor regions),
our objective is to develop a classifier trained on the source place-type to
accurately distinguish between two classes of the target place-type based on
their point arrangements. This problem is societally important for many
applications, such as generating clinical hypotheses for designing new
immunotherapies for cancer treatment. The challenge lies in the spatial
variability, the inherent heterogeneity and variation observed in spatial
properties or arrangements across different locations (i.e., place-types).
Previous techniques focus on self-supervised tasks to learn domain-invariant
features and mitigate domain differences; however, they often neglect the
underlying spatial arrangements among data points, leading to significant
discrepancies across different place-types. We explore a novel multi-task
self-learning framework that targets spatial arrangements, such as spatial
mix-up masking and spatial contrastive predictive coding, for
spatially-delineated domain-adapted AI classification. Experimental results on
real-world datasets (e.g., oncology data) show that the proposed framework
provides higher prediction accuracy than baseline methods.
|
2501.11699
|
Power Ramp-Rate Control via Power Regulation for Storageless
Grid-Connected Photovoltaic Systems
|
eess.SY cs.SY
|
Photovoltaic Power Ramp-Rate Control (PRRC) constitutes a key ancillary
service for future power systems. Although its implementation through the
installation of storage systems or irradiance sensors has been widely
investigated, fewer studies have explored the power curtailment approach. The
latter is less efficient, as it deliberately discards available power, yet it
is a cost-effective solution in terms of capital expenditures. This paper proposes
a novel storageless and sensorless photovoltaic PRRC for grid-connected
applications in which the photovoltaic power, rather than the voltage, is the
controlled magnitude. This contribution enables effective tracking of the
power ramp-rate limit, unlike the existing methods in the literature. The
method is assisted by a real-time curve-fitting
algorithm that estimates the Maximum Power Point while operating suboptimally.
Thus, no direct temperature or irradiance measurement systems are needed. The
validation of the proposed PRRC strategy has been tested by simulation and
compared to another approach available in the literature, considering
real-field highly variable irradiance data. Experimental validation of the
proposed strategy has been performed in real time via Controller
Hardware-in-the-Loop.
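A hypothetical, heavily simplified ramp-rate limiter illustrating the curtailment idea; the paper's controller acts through the converter's power reference with real-time MPP estimation, which this sketch omits entirely:

```python
def limit_ramp(p_available, p_prev, rr_limit, dt):
    """Clamp the commanded PV power so |dP/dt| stays within rr_limit (W/s).
    Storageless operation: upward ramps are curtailed, but a sharp drop in
    available power cannot be compensated without storage."""
    max_step = rr_limit * dt
    cmd = min(p_prev + max_step, p_available)  # curtail upward ramps
    cmd = max(cmd, p_prev - max_step)          # smooth downward ramps
    return min(cmd, p_available)               # never exceed available power

ramp_up = limit_ramp(1000.0, 500.0, 10.0, 1.0)   # curtailed to 510 W
ramp_down = limit_ramp(200.0, 500.0, 10.0, 1.0)  # 200 W: no storage to fill the gap
```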
|
2501.11704
|
Ultra-High Reliability by Predictive Interference Management Using
Extreme Value Theory
|
eess.SY cs.SY
|
Ultra-reliable low-latency communications (URLLC) require innovative
approaches to modeling channel and interference dynamics, extending beyond
traditional average estimates to encompass entire statistical distributions,
including rare and extreme events that challenge achieving ultra-reliability
performance regions. In this paper, we propose a risk-sensitive approach based
on extreme value theory (EVT) to predict the signal-to-interference-plus-noise
ratio (SINR) for efficient resource allocation in URLLC systems. We employ EVT
to estimate the statistics of rare and extreme interference values, and kernel
density estimation (KDE) to model the distribution of non-extreme events. Using
a mixture model, we develop an interference prediction algorithm based on
quantile prediction, introducing a confidence level parameter to balance
reliability and resource usage. While accounting for the risk sensitivity of
interference estimates, the prediction outcome is then used for appropriate
resource allocation of a URLLC transmission under link outage constraints.
Simulation results demonstrate that the proposed method outperforms the
state-of-the-art first-order discrete-time Markov chain (DTMC) approach by
reducing outage rates up to 100-fold, achieving target outage probabilities as
low as \(10^{-7}\). Simultaneously, it reduces radio resource usage by
\(\sim 15\%\) compared to DTMC, while remaining only \(\sim 20\%\) above
the optimal case with perfect interference knowledge, resulting in
significantly higher prediction accuracy. Additionally, the method is
sample-efficient, able to predict interference effectively with minimal
training data.
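A sketch of the peaks-over-threshold idea behind EVT-based quantile prediction, using a method-of-moments generalized Pareto fit on tail exceedances; the threshold choice, fitting method, and the empirical body (standing in for the paper's KDE component) are all simplifying assumptions:

```python
import numpy as np

def evt_quantile(samples, p, threshold_q=0.9):
    """Estimate a high quantile (p > threshold_q): empirical threshold for
    the body, generalized Pareto (GPD) tail fit on exceedances."""
    u = np.quantile(samples, threshold_q)
    exc = samples[samples > u] - u
    m, s2 = exc.mean(), exc.var()
    xi = 0.5 * (1.0 - m * m / s2)            # GPD shape (method of moments)
    sigma = 0.5 * m * (1.0 + m * m / s2)     # GPD scale
    # Conditional probability mass beyond the threshold.
    p_exc = (p - threshold_q) / (1.0 - threshold_q)
    if abs(xi) < 1e-9:
        return u - sigma * np.log(1.0 - p_exc)
    return u + sigma / xi * ((1.0 - p_exc) ** (-xi) - 1.0)

rng = np.random.default_rng(0)
# Synthetic "interference" samples; exponential tails are GPD with shape 0.
interference = rng.exponential(scale=1.0, size=20_000)
q999 = evt_quantile(interference, 0.999)
```

For the exponential distribution the true 0.999 quantile is about 6.91, so the tail fit recovers it far more reliably than a raw empirical quantile would with few extreme observations.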
|
2501.11705
|
Human services organizations and the responsible integration of AI:
Considering ethics and contextualizing risk(s)
|
cs.CY cs.AI
|
This paper examines the responsible integration of artificial intelligence
(AI) in human services organizations (HSOs), proposing a nuanced framework for
evaluating AI applications across multiple dimensions of risk. The authors
argue that ethical concerns about AI deployment -- including professional
judgment displacement, environmental impact, model bias, and data laborer
exploitation -- vary significantly based on implementation context and specific
use cases. They challenge the binary view of AI adoption, demonstrating how
different applications present varying levels of risk that can often be
effectively managed through careful implementation strategies. The paper
highlights promising solutions, such as local large language models, that can
facilitate responsible AI integration while addressing common ethical concerns.
The authors propose a dimensional risk assessment approach that considers
factors like data sensitivity, professional oversight requirements, and
potential impact on client wellbeing. They conclude by outlining a path forward
that emphasizes empirical evaluation, starting with lower-risk applications and
building evidence-based understanding through careful experimentation. This
approach enables organizations to maintain high ethical standards while
thoughtfully exploring how AI might enhance their capacity to serve clients and
communities effectively.
|
2501.11706
|
Trustformer: A Trusted Federated Transformer
|
cs.LG cs.CR
|
Transformers, a cornerstone of deep-learning architectures for sequential
data, have achieved state-of-the-art results in tasks like Natural Language
Processing (NLP). Models such as BERT and GPT-3 exemplify their success and
have driven the rise of large language models (LLMs). However, a critical
challenge persists: safeguarding the privacy of data used in LLM training.
Privacy-preserving techniques like Federated Learning (FL) offer potential
solutions, but practical limitations hinder their effectiveness for Transformer
training. Two primary issues are (I) the risk of sensitive information leakage
due to aggregation methods like FedAvg or FedSGD, and (II) the high
communication overhead caused by the large size of Transformer models.
This paper introduces a novel FL method that reduces communication overhead
while maintaining competitive utility. Our approach avoids sharing full model
weights by simulating a global model locally. We apply k-means clustering to
each Transformer layer, compute centroids locally, and transmit only these
centroids to the server instead of full weights or gradients. To enhance
security, we leverage Intel SGX for secure transmission of centroids. Evaluated
on a translation task, our method achieves utility comparable to
state-of-the-art baselines while significantly reducing communication costs.
This provides a more efficient and privacy-preserving FL solution for
Transformer models.
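A minimal sketch of the centroid-compression idea: cluster a layer's weights with k-means and communicate only centroids (and assignments) instead of full weights. The clustering granularity, SGX transport, and server-side aggregation are the paper's details and are not reproduced here:

```python
import numpy as np

def compress_layer(w, k=4, iters=20, seed=0):
    """Quantize a weight tensor with 1-D k-means (Lloyd iterations); only
    the k centroids and the assignment map would need to be transmitted."""
    rng = np.random.default_rng(seed)
    flat = w.ravel()
    centroids = rng.choice(flat, size=k, replace=False)
    for _ in range(iters):
        # Assign each weight to its nearest centroid, then recenter.
        assign = np.argmin(np.abs(flat[:, None] - centroids[None, :]), axis=1)
        for j in range(k):
            members = flat[assign == j]
            if members.size:
                centroids[j] = members.mean()
    return centroids, assign.reshape(w.shape)

rng = np.random.default_rng(1)
w = rng.standard_normal((32, 32))        # stand-in for one Transformer layer
cent, assign = compress_layer(w, k=8)
w_hat = cent[assign]                     # reconstruction from centroids alone
```

Even 8 centroids reconstruct a Gaussian weight matrix with small mean squared error, which is why centroid sharing can preserve utility while cutting communication.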
|
2501.11711
|
Leveraging graph neural networks and mobility data for COVID-19
forecasting
|
cs.LG cs.SI
|
The COVID-19 pandemic has claimed over 7 million lives to date, prompting
diverse research efforts. Spatio-temporal models combining mobility data with
machine learning have gained attention for disease forecasting. Here, we
explore Graph Convolutional Recurrent Network (GCRN) and Graph Convolutional
Long Short-Term Memory (GCLSTM), which combine the power of Graph Neural
Networks (GNN) with traditional architectures that deal with sequential data.
The aim is to forecast future values of COVID-19 cases in Brazil and China by
leveraging human mobility networks, whose nodes represent geographical
locations and links are flows of vehicles or people. We show that employing
backbone extraction to filter out negligible connections in the mobility
network enhances predictive stability. Comparing regression and classification
tasks demonstrates that binary classification yields smoother, more
interpretable results. Interestingly, we observe qualitatively equivalent
results for both Brazil and China datasets by introducing sliding windows of
variable size and prediction horizons. Compared to prior studies, introducing
the sliding-window and network backbone extraction strategies yields
improvements of about 80% in root mean squared error.
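Backbone extraction is not specified in the abstract; one standard choice for weighted mobility networks is the disparity filter of Serrano et al., sketched here on a toy edge list (an assumed choice, not necessarily the paper's filter):

```python
from collections import defaultdict

def backbone(edges, alpha=0.05):
    """Disparity filter: keep an edge if its weight is statistically
    significant for at least one endpoint, under a null model that splits
    a node's strength uniformly at random among its k edges."""
    strength, degree = defaultdict(float), defaultdict(int)
    for u, v, w in edges:
        for node in (u, v):
            strength[node] += w
            degree[node] += 1
    kept = []
    for u, v, w in edges:
        significant = False
        for node in (u, v):
            k = degree[node]
            if k > 1:
                p = w / strength[node]
                # p-value of the normalized weight under the null model.
                if (1.0 - p) ** (k - 1) < alpha:
                    significant = True
        if significant:
            kept.append((u, v, w))
    return kept

# Toy star network: one dominant flow and three negligible ones.
edges = [("A", "B", 100.0), ("A", "C", 1.0), ("A", "D", 1.0), ("A", "E", 1.0)]
kept = backbone(edges)
```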
|
2501.11712
|
YouLeQD: Decoding the Cognitive Complexity of Questions and Engagement
in Online Educational Videos from Learners' Perspectives
|
cs.CL
|
Questioning is a fundamental aspect of education, as it helps assess
students' understanding, promotes critical thinking, and encourages active
engagement. With the rise of artificial intelligence in education, there is a
growing interest in developing intelligent systems that can automatically
generate and answer questions and facilitate interactions in both virtual and
in-person education settings. However, to develop effective AI models for
education, it is essential to have a fundamental understanding of questioning.
In this study, we created the YouTube Learners' Questions on Bloom's Taxonomy
Dataset (YouLeQD), which contains learner-posed questions from YouTube lecture
video comments. Along with the dataset, we developed two RoBERTa-based
classification models leveraging Large Language Models to detect questions and
analyze their cognitive complexity using Bloom's Taxonomy. This dataset and our
findings provide valuable insights into the cognitive complexity of
learner-posed questions in educational videos and their relationship with
interaction metrics. This can aid in the development of more effective AI
models for education and improve the overall learning experience for students.
|
2501.11714
|
The Transition from Centralized Machine Learning to Federated Learning
for Mental Health in Education: A Survey of Current Methods and Future
Directions
|
cs.CY cs.LG
|
Research has increasingly explored the application of artificial intelligence
(AI) and machine learning (ML) within the mental health domain to enhance both
patient care and healthcare provider efficiency. Given that mental health
challenges frequently emerge during early adolescence -- the critical years of
high school and college -- investigating AI/ML-driven mental health solutions
within the education domain is of paramount importance. Nevertheless,
conventional AI/ML techniques follow a centralized model training architecture,
which poses privacy risks due to the need for transferring students' sensitive
data from institutions, universities, and clinics to central servers. Federated
learning (FL) has emerged as a solution to address these risks by enabling
distributed model training while maintaining data privacy. Despite its
potential, research on applying FL to analyze students' mental health remains
limited. In this paper, we aim to address this limitation by proposing a
roadmap for integrating FL into mental health data analysis within educational
settings. We begin by providing an overview of mental health issues among
students and reviewing existing studies where ML has been applied to address
these challenges. Next, we examine broader applications of FL in the mental
health domain to emphasize the lack of focus on educational contexts. Finally,
we propose promising research directions focused on using FL to address mental
health issues in the education sector, which entails discussing the synergies
between the proposed directions with broader human-centered domains. By
categorizing the proposed research directions into short- and long-term
strategies and highlighting the unique challenges at each stage, we aim to
encourage the development of privacy-conscious AI/ML-driven mental health
solutions.
|
2501.11715
|
GL-ICNN: An End-To-End Interpretable Convolutional Neural Network for
the Diagnosis and Prediction of Alzheimer's Disease
|
cs.CV cs.AI
|
Deep learning methods based on Convolutional Neural Networks (CNNs) have
shown great potential to improve early and accurate diagnosis of Alzheimer's
disease (AD) dementia based on imaging data. However, these methods have yet to
be widely adopted in clinical practice, possibly due to the limited
interpretability of deep learning models. The Explainable Boosting Machine
(EBM) is a glass-box model but cannot learn features directly from input
imaging data. In this study, we propose a novel interpretable model that
combines CNNs and EBMs for the diagnosis and prediction of AD. We develop an
innovative training strategy that alternatingly trains the CNN component as a
feature extractor and the EBM component as the output block to form an
end-to-end model. The model takes imaging data as input and provides both
predictions and interpretable feature importance measures. We validated the
proposed model on the Alzheimer's Disease Neuroimaging Initiative (ADNI)
dataset and the Health-RI Parelsnoer Neurodegenerative Diseases Biobank (PND)
as an external testing set. The proposed model achieved an area-under-the-curve
(AUC) of 0.956 for AD and control classification, and 0.694 for the prediction
of conversion of mild cognitive impairment (MCI) to AD on the ADNI cohort. The
proposed model is a glass-box model that achieves a comparable performance with
other state-of-the-art black-box models. Our code is publicly available at:
https://anonymous.4open.science/r/GL-ICNN.
|
2501.11720
|
Prediction of Lung Metastasis from Hepatocellular Carcinoma using the
SEER Database
|
q-bio.TO cs.LG
|
Hepatocellular carcinoma (HCC) is a leading cause of cancer-related
mortality, with lung metastases being the most common site of distant spread
and significantly worsening prognosis. Despite the growing availability of
clinical and demographic data, predictive models for lung metastasis in HCC
remain limited in scope and clinical applicability. In this study, we develop
and validate an end-to-end machine learning pipeline using data from the
Surveillance, Epidemiology, and End Results (SEER) database. We evaluated three
machine learning models (Random Forest, XGBoost, and Logistic Regression)
alongside a multilayer perceptron (MLP) neural network. Our models achieved
high AUROC values and recall, with the Random Forest and MLP models
demonstrating the best overall performance (AUROC = 0.82). However, the low
precision across models highlights the challenges of accurately predicting
positive cases. To address these limitations, we developed a custom loss
function incorporating recall optimization, enabling the MLP model to achieve
the highest sensitivity. An ensemble approach further improved overall recall
by leveraging the strengths of individual models. Feature importance analysis
revealed key predictors such as surgery status, tumor staging, and follow-up
duration, emphasizing the relevance of clinical interventions and disease
progression in metastasis prediction. While this study demonstrates the
potential of machine learning for identifying high-risk patients, limitations
include reliance on imbalanced datasets, incomplete feature annotations, and
the low precision of predictions. Future work should leverage the expanding
SEER dataset, improve data imputation techniques, and explore advanced
pre-trained models to enhance predictive accuracy and clinical utility.
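The custom recall-oriented loss is not specified in the abstract; one plausible form is a class-weighted cross-entropy that up-weights false negatives, sketched here as a hypothetical stand-in:

```python
import numpy as np

def recall_weighted_bce(y_true, p_pred, fn_weight=5.0, eps=1e-7):
    """Binary cross-entropy with an extra penalty on the positive-class
    term, so missed positives (false negatives) dominate the loss.
    fn_weight is a hypothetical hyperparameter, not the paper's value."""
    p = np.clip(p_pred, eps, 1 - eps)
    loss = -(fn_weight * y_true * np.log(p) + (1 - y_true) * np.log(1 - p))
    return loss.mean()

# A missed positive (p=0.1 for a true 1) now costs fn_weight times more
# than the symmetric false alarm (p=0.9 for a true 0).
miss = recall_weighted_bce(np.array([1.0]), np.array([0.1]))
false_alarm = recall_weighted_bce(np.array([0.0]), np.array([0.9]))
```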
|
2501.11721
|
Explain-Query-Test: Self-Evaluating LLMs Via Explanation and
Comprehension Discrepancy
|
cs.CL cs.LG
|
Large language models (LLMs) have demonstrated remarkable proficiency in
generating detailed and coherent explanations of complex concepts. However, the
extent to which these models truly comprehend the concepts they articulate
remains unclear. To assess the level of comprehension of a model relative to
the content it generates, we implemented a self-evaluation pipeline where
models: (i) given a topic, generate an excerpt with information about the
topic; (ii) given an excerpt, generate question-answer pairs; and finally
(iii) given a question, generate an answer. We refer to this self-evaluation approach as
Explain-Query-Test (EQT). Interestingly, the accuracy on generated questions
resulting from running the EQT pipeline correlates strongly with the model
performance as verified by typical benchmarks such as MMLU-Pro. In other words,
EQT's performance is predictive of MMLU-Pro's, and EQT can be used to rank
models without the need for any external source of evaluation data other than
lists of topics of interest. Moreover, our results reveal a disparity between
the models' ability to produce detailed explanations and their performance on
questions related to those explanations. This gap highlights fundamental
limitations in the internal knowledge representation and reasoning abilities of
current LLMs. We release the code at https://github.com/asgsaeid/EQT.
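The three EQT phases can be scaffolded as a single loop over a model callable; the `model` interface and the toy echo model below are hypothetical stand-ins for an actual LLM call, and a real run would grade answers by semantic match rather than string equality:

```python
def explain_query_test(model, topic, n_questions=3):
    """EQT self-evaluation: the same model (i) explains a topic, (ii) writes
    question-answer pairs about its explanation, then (iii) answers the
    questions cold; the score is the answer/key agreement rate."""
    excerpt = model(f"Write a short excerpt about: {topic}")
    qa_pairs = []
    for i in range(n_questions):
        q = model(f"Question {i} about this excerpt: {excerpt}")
        a = model(f"Answer briefly: {q}\nContext: {excerpt}")
        qa_pairs.append((q, a))
    # Phase (iii): answer each question without access to the excerpt.
    correct = sum(model(f"Answer briefly: {q}") == a for q, a in qa_pairs)
    return correct / n_questions

# Toy deterministic "model" that echoes the tail of its prompt, so the
# pipeline is fully consistent and scores 1.0 by construction.
toy = lambda prompt: prompt.split(":")[-1].strip()[:40]
score = explain_query_test(toy, "conformal prediction")
```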
|
2501.11729
|
SeRpEnt: Selective Resampling for Expressive State Space Models
|
cs.LG cs.CV
|
State Space Models (SSMs) have recently enjoyed a rise to prominence in the
field of deep learning for sequence modeling, especially as an alternative to
Transformers. Their success stems from avoiding two well-known drawbacks of
attention-based models: quadratic complexity with respect to the sequence
length and inability to model long-range dependencies. The SSM variant Mamba
has demonstrated performance comparable to Transformers without any form of
attention, thanks to the use of a selective mechanism for the state parameters.
Selectivity, however, is only evaluated empirically and the reasons of its
effectiveness remain unclear. In this work, we show how selectivity is related
to the sequence processing. Our analysis shows that selective time intervals in
Mamba act as linear approximators of information. Then, we propose our SeRpEnt
architecture, an SSM that further exploits selectivity to compress sequences in
an information-aware fashion. It employs a resampling mechanism that aggregates
elements based on their information content. Our empirical results in the Long
Range Arena benchmark and other language modeling tasks show benefits of the
SeRpEnt's resampling mechanism.
|
2501.11730
|
Transformer Vibration Forecasting for Advancing Rail Safety and
Maintenance 4.0
|
cs.LG cs.AI stat.ML
|
Maintaining railway axles is critical to preventing severe accidents and
financial losses. The railway industry is increasingly interested in advanced
condition monitoring techniques to enhance safety and efficiency, moving beyond
traditional periodic inspections toward Maintenance 4.0.
This study introduces a robust Deep Autoregressive solution that integrates
seamlessly with existing systems to avert mechanical failures. Our approach
simulates and predicts vibration signals under various conditions and fault
scenarios, improving dataset robustness for more effective detection systems.
These systems can alert maintenance needs, preventing accidents preemptively.
We use experimental vibration signals from accelerometers on train axles.
Our primary contributions include a transformer model, ShaftFormer, designed
for processing time series data, and an alternative model incorporating
spectral methods and enhanced observation models. Simulating vibration signals
under diverse conditions mitigates the high cost of obtaining experimental
signals for all scenarios. Given the non-stationary nature of railway vibration
signals, influenced by speed and load changes, our models address these
complexities, offering a powerful tool for predictive maintenance in the rail
industry.
|
2501.11733
|
Mobile-Agent-E: Self-Evolving Mobile Assistant for Complex Tasks
|
cs.CL cs.CV
|
Smartphones have become indispensable in modern life, yet navigating complex
tasks on mobile devices often remains frustrating. Recent advancements in large
multimodal model (LMM)-based mobile agents have demonstrated the ability to
perceive and act in mobile environments. However, current approaches face
significant limitations: they fall short in addressing real-world human needs,
struggle with reasoning-intensive and long-horizon tasks, and lack mechanisms
to learn and improve from prior experiences. To overcome these challenges, we
introduce Mobile-Agent-E, a hierarchical multi-agent framework capable of
self-evolution through past experience. By hierarchical, we mean an explicit
separation of high-level planning and low-level action execution. The framework
comprises a Manager, responsible for devising overall plans by breaking down
complex tasks into subgoals, and four subordinate agents--Perceptor, Operator,
Action Reflector, and Notetaker--which handle fine-grained visual perception,
immediate action execution, error verification, and information aggregation,
respectively. Mobile-Agent-E also features a novel self-evolution module which
maintains a persistent long-term memory comprising Tips and Shortcuts. Tips are
general guidance and lessons learned from prior tasks on how to effectively
interact with the environment. Shortcuts are reusable, executable sequences of
atomic operations tailored for specific subroutines. The inclusion of Tips and
Shortcuts facilitates continuous refinement in performance and efficiency.
Alongside this framework, we introduce Mobile-Eval-E, a new benchmark featuring
complex mobile tasks requiring long-horizon, multi-app interactions. Empirical
results show that Mobile-Agent-E achieves a 22% absolute improvement over
previous state-of-the-art approaches across three foundation model backbones.
Project page: https://x-plug.github.io/MobileAgent.
|
2501.11734
|
MedicoSAM: Towards foundation models for medical image segmentation
|
eess.IV cs.CV
|
Medical image segmentation is an important analysis task in clinical practice
and research. Deep learning has massively advanced the field, but current
approaches are mostly based on models trained for a specific task. Training
such models or adapting them to a new condition is costly due to the need for
(manually) labeled data. The emergence of vision foundation models, especially
Segment Anything, offers a path to universal segmentation for medical images,
overcoming these issues. Here, we study how to improve Segment Anything for
medical images by comparing different finetuning strategies on a large and
diverse dataset. We evaluate the finetuned models on a wide range of
interactive and (automatic) semantic segmentation tasks. We find that the
performance can be clearly improved for interactive segmentation. However,
semantic segmentation does not benefit from pretraining on medical images. Our
best model, MedicoSAM, is publicly available at
https://github.com/computational-cell-analytics/medico-sam. We show that it is
compatible with existing tools for data annotation and believe that it will be
of great practical value.
|
2501.11739
|
Episodic memory in AI agents poses risks that should be studied and
mitigated
|
cs.AI cs.CY
|
Most current AI models have little ability to store and later retrieve a
record or representation of what they do. In human cognition, episodic memories
play an important role in both recall of the past as well as planning for the
future. The ability to form and use episodic memories would similarly enable a
broad range of improved capabilities in an AI agent that interacts with and
takes actions in the world. Researchers have begun directing more attention to
developing memory abilities in AI models. It is therefore likely that models
with such a capability will become widespread in the near future. This could
in some ways contribute to making such AI agents safer by enabling users to
better monitor, understand, and control their actions. However, as a new
capability with wide applications, we argue that it will also introduce
significant new risks that researchers should begin to study and address. We
outline these risks and benefits and propose four principles to guide the
development of episodic memory capabilities so that these will enhance, rather
than undermine, the effort to keep AI safe and trustworthy.
|
2501.11740
|
PIR Over Wireless Channels: Achieving Privacy With Public Responses
|
cs.IT math.IT
|
In this paper, we address the problem of Private Information Retrieval (PIR)
over a public Additive White Gaussian Noise (AWGN) channel. In such a setup,
the server's responses are visible to other servers. Thus, a curious server can
listen to the other responses, compromising the user's privacy. Indeed,
previous works on PIR over a shared medium assumed the servers cannot
instantaneously listen to other responses. To address this gap, we present a
novel randomized lattice-based PIR coding scheme that jointly codes for
privacy, channel noise, and curious servers that may listen to other responses. We
demonstrate that a positive PIR rate is achievable even in cases where the
channel to the curious server is stronger than the channel to the user.
|
2501.11741
|
FaceQSORT: a Multi-Face Tracking Method based on Biometric and
Appearance Features
|
cs.CV
|
Tracking multiple faces is a difficult problem, as there may be partially
occluded or lateral faces. In multiple face tracking, association is typically
based on (biometric) face features. However, the models used to extract these
face features usually require frontal face images, which can limit the tracking
performance. In this work, a multi-face tracking method inspired by StrongSort,
FaceQSORT, is proposed. To mitigate the problem of partially occluded or
lateral faces, biometric face features are combined with visual appearance
features (i.e., generated by a generic object classifier), with both features
extracted from the same face patch. A comprehensive experimental evaluation
is performed, including a comparison of different face descriptors, an
evaluation of different parameter settings, and the application of a different
similarity metric. All experiments are conducted with a new multi-face tracking
dataset and a subset of the ChokePoint dataset. The 'Paris Lodron University
Salzburg Faces in a Queue' dataset consists of a total of seven fully annotated
sequences (12730 frames) and is made publicly available as part of this work.
Together with this dataset, annotations of 6 sequences from the ChokePoint
dataset are also provided.
|
2501.11742
|
Force-Aware Autonomous Robotic Surgery
|
cs.RO
|
This work demonstrates the benefits of using tool-tissue interaction forces
in the design of autonomous systems in robot-assisted surgery (RAS). Autonomous
systems in surgery must manipulate tissues of different stiffness levels and
hence should apply different levels of forces accordingly. We hypothesize that
this ability is enabled by using force measurements as input to policies
learned from human demonstrations. To test this hypothesis, we use
Action-Chunking Transformers (ACT) to train two policies through imitation
learning for automated tissue retraction with the da Vinci Research Kit (dVRK).
To quantify the effects of using tool-tissue interaction force data, we trained
a "no force policy" that uses only vision and robot kinematic data, and compared
it to a "force policy" that uses force, vision and robot kinematic data. When
tested on a previously seen tissue sample, the force policy is 3 times more
successful in autonomously performing the task compared with the no force
policy. In addition, the force policy is more gentle with the tissue compared
with the no force policy, exerting on average 62% less force on the tissue.
When tested on a previously unseen tissue sample, the force policy is 3.5 times
more successful in autonomously performing the task, exerting an order of
magnitude less force on the tissue, compared with the no force policy. These
results open the door to design force-aware autonomous systems that can meet
the surgical guidelines for tissue handling, especially using the newly
released RAS systems with force feedback capabilities such as the da Vinci 5.
|
2501.11743
|
Non-Reversible Langevin Algorithms for Constrained Sampling
|
cs.LG math.PR stat.CO
|
We consider the constrained sampling problem where the goal is to sample from
a target distribution on a constrained domain. We propose skew-reflected
non-reversible Langevin dynamics (SRNLD), a continuous-time stochastic
differential equation with skew-reflected boundary. We obtain non-asymptotic
convergence rate of SRNLD to the target distribution in both total variation
and 1-Wasserstein distances. By breaking reversibility, we show that the
convergence is faster than the special case of the reversible dynamics. Based
on the discretization of SRNLD, we propose skew-reflected non-reversible
Langevin Monte Carlo (SRNLMC), and obtain a non-asymptotic discretization error
bound relative to SRNLD, together with convergence guarantees to the target
distribution in 1-Wasserstein distance. We show better performance guarantees
than projected Langevin Monte Carlo algorithms in the literature, which are
based on the reversible dynamics. Numerical experiments on both synthetic and
real datasets show the efficiency of the proposed algorithms.
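For intuition, here is a plain reflected Langevin Monte Carlo sampler on an interval: the reversible special case, whereas SRNLD/SRNLMC add a skew-symmetric non-reversible drift at the boundary that this sketch omits:

```python
import numpy as np

def reflected_lmc(grad_log_p, x0, lo, hi, step=0.01, n=50_000, seed=0):
    """Reflected Langevin Monte Carlo on [lo, hi]: an Euler-Maruyama
    Langevin step followed by reflection at the boundary, targeting the
    truncation of the unconstrained density to the interval."""
    rng = np.random.default_rng(seed)
    x, out = x0, np.empty(n)
    for t in range(n):
        x = x + step * grad_log_p(x) + np.sqrt(2 * step) * rng.standard_normal()
        # Reflect back into [lo, hi].
        while x < lo or x > hi:
            x = 2 * lo - x if x < lo else 2 * hi - x
        out[t] = x
    return out

# Standard normal truncated to [-1, 1]: grad log p(x) = -x.
samples = reflected_lmc(lambda x: -x, 0.0, -1.0, 1.0)
```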
|
2501.11745
|
Personalized Federated Learning for Cellular VR: Online Learning and
Dynamic Caching
|
cs.IT cs.LG math.IT
|
Delivering an immersive experience to virtual reality (VR) users through
wireless connectivity offers the freedom to engage from anywhere at any time.
Nevertheless, it is challenging to ensure seamless wireless connectivity that
delivers real-time, high-quality video to VR users. This paper proposes
field-of-view (FoV)-aware caching for mobile edge computing (MEC)-enabled
wireless VR networks. In particular, the FoV of each VR user is
cached/prefetched at the base stations (BSs) based on the caching strategies
tailored to each BS. Specifically, decentralized and personalized federated
learning (DP-FL) based caching strategies with guarantees are presented.
Considering VR systems composed of multiple VR devices and BSs, a DP-FL caching
algorithm is implemented at each BS to personalize content delivery for VR
users. The utilized DP-FL algorithm guarantees a probably approximately correct
(PAC) bound on the conditional average cache hit. Further, to reduce the cost
of communicating gradients, one-bit quantization of the stochastic gradient
descent (OBSGD) is proposed, and a convergence guarantee of
$\mathcal{O}(1/\sqrt{T})$ is obtained for the proposed algorithm, where $T$ is
the number of iterations. Additionally, to better account for the wireless
channel dynamics, the FoVs are grouped into multicast or unicast groups based
on the number of requesting VR users. The performance of the proposed DP-FL
algorithm is validated on a realistic VR head-tracking dataset, where it
outperforms baseline algorithms in terms of average delay and cache hit rate.
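The one-bit gradient compression idea can be sketched as follows. The abstract does not specify the exact OBSGD quantizer, so this is a generic sign-plus-scale scheme of the kind commonly used to cut gradient communication to one bit per coordinate; the function names and the shared-scale choice are our assumptions.

```python
import numpy as np

def one_bit_quantize(grad):
    """Illustrative one-bit quantizer (assumed scheme, not the paper's
    exact OBSGD): transmit only the signs plus a single shared scale,
    so each coordinate costs one bit instead of a full float."""
    scale = np.mean(np.abs(grad))  # one shared magnitude per message
    return scale, np.sign(grad)

def dequantize(scale, signs):
    """Receiver-side reconstruction of the compressed gradient."""
    return scale * signs

g = np.array([0.5, -1.5, 2.0, -1.0])
scale, signs = one_bit_quantize(g)
g_hat = dequantize(scale, signs)
print(scale)  # 1.25
print(g_hat)  # [ 1.25 -1.25  1.25 -1.25]
```

The reconstruction is unbiased in direction but not in magnitude per coordinate, which is the usual source of the O(1/sqrt(T))-type rates quoted for such compressed-gradient methods.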
|
2501.11746
|
SILO: Solving Inverse Problems with Latent Operators
|
cs.CV cs.AI cs.LG
|
Consistent improvement of image priors over the years has led to the
development of better inverse problem solvers. Diffusion models are the
newcomers to this arena, posing the strongest known prior to date. Recently,
such models operating in a latent space have become increasingly predominant
due to their efficiency. In recent works, these models have been applied to
solve inverse problems. Working in the latent space typically requires multiple
applications of an Autoencoder during the restoration process, which leads to
both computational and restoration quality challenges. In this work, we propose
a new approach for handling inverse problems with latent diffusion models,
where a learned degradation function operates within the latent space,
emulating a known image-space degradation. Using the learned operator
reduces the dependency on the Autoencoder to only the initial and final steps
of the restoration process, facilitating faster sampling and superior
restoration quality. We demonstrate the effectiveness of our method on a
variety of image restoration tasks and datasets, achieving significant
improvements over prior art.
|
2501.11747
|
Optimizing Pretraining Data Mixtures with LLM-Estimated Utility
|
cs.CL cs.AI
|
Large Language Models improve with increasing amounts of high-quality
training data. However, leveraging larger datasets requires balancing quality,
quantity, and diversity across sources. After evaluating nine baseline methods
under both compute- and data-constrained scenarios, we find token-count
heuristics outperform manual and learned mixes, indicating that simple
approaches accounting for dataset size and diversity are surprisingly
effective. Building on this insight, we propose two complementary approaches:
UtiliMax, which extends token-based heuristics by incorporating utility
estimates from reduced-scale ablations, achieving up to a 10.6x speedup over
manual baselines; and Model Estimated Data Utility (MEDU), which leverages LLMs
to estimate data utility from small samples, matching ablation-based
performance while reducing computational requirements by $\sim$200x. Together,
these approaches establish a new framework for automated, compute-efficient
data mixing that is robust across training regimes.
|
2501.11752
|
Are generative models fair? A study of racial bias in dermatological
image generation
|
cs.CV
|
Racial bias in medicine, such as in dermatology, presents significant ethical
and clinical challenges. This likely stems from the significant
underrepresentation of darker skin tones in the training datasets of machine
learning models. While efforts to address bias in dermatology have
focused on improving dataset diversity and mitigating disparities in
discriminative models, the impact of racial bias on generative models remains
underexplored. Generative models, such as Variational Autoencoders (VAEs), are
increasingly used in healthcare applications, yet their fairness across diverse
skin tones is currently not well understood. In this study, we evaluate the
fairness of generative models in clinical dermatology with respect to racial
bias. For this purpose, we first train a VAE with a perceptual loss to generate
and reconstruct high-quality skin images across different skin tones. We
utilize the Fitzpatrick17k dataset to examine how racial bias influences the
representation and performance of these models. Our findings indicate that VAE
performance is, as expected, influenced by representation, i.e. increased skin
tone representation comes with increased performance on the given skin tone.
However, we also observe, even independently of representation, that the VAE
performs better for lighter skin tones. Additionally, the uncertainty estimates
produced by the VAE are ineffective in assessing the model's fairness. These
results highlight the need for more representative dermatological datasets, but
also the need to better understand the sources of bias in such models, as well
as for improved uncertainty quantification mechanisms to detect and address
racial bias in generative models for trustworthy healthcare technologies.
|
2501.11755
|
A generalizable 3D framework and model for self-supervised learning in
medical imaging
|
eess.IV cs.CV
|
Current self-supervised learning (SSL) methods for 3D medical imaging rely on
simple pretext formulations and organ- or modality-specific datasets, limiting
their generalizability and scalability. We present 3DINO, a cutting-edge SSL
method adapted to 3D datasets, and use it to pretrain 3DINO-ViT: a
method adapted to 3D datasets, and use it to pretrain 3DINO-ViT: a
general-purpose medical imaging model, on an exceptionally large, multimodal,
and multi-organ dataset of ~100,000 3D medical imaging scans from over 10
organs. We validate 3DINO-ViT using extensive experiments on numerous medical
imaging segmentation and classification tasks. Our results demonstrate that
3DINO-ViT generalizes across modalities and organs, including
out-of-distribution tasks and datasets, outperforming state-of-the-art methods
on the majority of evaluation metrics and labeled dataset sizes. Our 3DINO
framework and 3DINO-ViT will be made available to enable research on 3D
foundation models or further finetuning for a wide range of medical imaging
applications.
|
2501.11757
|
An Information Geometric Approach to Local Information Privacy with
Applications to Max-lift and Local Differential Privacy
|
cs.IT math.IT
|
We study an information-theoretic privacy mechanism design, where an agent
observes useful data $Y$ and wants to reveal the information to a user. Since
the useful data is correlated with the private data $X$, the agent uses a
privacy mechanism to produce disclosed data $U$ that can be released. We assume
that the agent observes $Y$ and has no direct access to $X$, i.e., the private
data is hidden. We study the privacy mechanism design that maximizes the
revealed information about $Y$ while satisfying a bounded Local Information
Privacy (LIP) criterion. When the leakage is sufficiently small, concepts from
information geometry allow us to locally approximate the mutual information. By
utilizing this approximation, the main privacy-utility trade-off problem can be
rewritten as a quadratic optimization problem that has a closed-form solution
under some constraints. For the cases where no closed-form solution is
obtained, we provide lower bounds on it. In contrast to previous works, which
suffer from complexity issues, we provide simple, low-complexity privacy
designs based on finding the maximum singular value and the corresponding
singular vector of a matrix. We follow two approaches: in the first, we find a
lower bound on the main problem and then approximate it; in the second, we
approximate the main problem directly. We present geometrical interpretations
of the proposed methods and, in a numerical example, compare both approaches
with the optimal solution and with previous methods. Furthermore, we discuss
how our method can be generalized to larger amounts of privacy leakage.
Finally, we discuss how the proposed methods can be applied to differential
privacy.
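Since the proposed low-complexity designs reduce to finding the maximum singular value and singular vector of a matrix, a generic power iteration illustrates the computational core. This is our sketch of that generic subroutine, not the paper's specific matrix or stopping rule.

```python
import numpy as np

def top_singular_pair(A, iters=200):
    """Power iteration on A^T A for the largest singular value and the
    corresponding right singular vector (generic sketch; the paper's
    specific matrix construction is not reproduced here)."""
    rng = np.random.default_rng(0)
    v = rng.standard_normal(A.shape[1])
    v /= np.linalg.norm(v)
    for _ in range(iters):
        v = A.T @ (A @ v)       # apply A^T A
        v /= np.linalg.norm(v)  # renormalize
    sigma = np.linalg.norm(A @ v)
    return sigma, v

A = np.array([[3.0, 0.0], [0.0, 1.0]])
sigma, v = top_singular_pair(A)
print(round(sigma, 6))  # 3.0
```

Each iteration costs only two matrix-vector products, which is what makes singular-vector-based designs attractive compared to solving the full optimization problem.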
|
2501.11758
|
A Review Paper of the Effects of Distinct Modalities and ML Techniques
to Distracted Driving Detection
|
cs.CV stat.ML
|
Distracted driving remains a significant global challenge with severe human
and economic repercussions, demanding improved detection and intervention
strategies. While previous studies have extensively explored single-modality
approaches, recent research indicates that these systems often fall short in
identifying complex distraction patterns, particularly cognitive distractions.
This systematic review addresses critical gaps by providing a comprehensive
analysis of machine learning (ML) and deep learning (DL) techniques applied
across various data modalities - visual, sensory, auditory, and multimodal. By
categorizing and evaluating studies based on modality, data accessibility, and
methodology, this review clarifies which approaches yield the highest accuracy
and are best suited for specific distracted driving detection goals. The
findings offer clear guidance on the advantages of multimodal versus
single-modal systems and capture the latest advancements in the field.
Ultimately, this review contributes valuable insights for developing robust
distracted driving detection frameworks, supporting enhanced road safety and
mitigation strategies.
|
2501.11759
|
Poison-RAG: Adversarial Data Poisoning Attacks on Retrieval-Augmented
Generation in Recommender Systems
|
cs.IR
|
This study presents Poison-RAG, a framework for adversarial data poisoning
attacks targeting retrieval-augmented generation (RAG)-based recommender
systems. Poison-RAG manipulates item metadata, such as tags and descriptions,
to influence recommendation outcomes. Using item metadata generated through a
large language model (LLM) and embeddings derived via the OpenAI API, we
explore the impact of adversarial poisoning attacks on the provider side, where
attacks are designed to promote long-tail items and demote popular ones. Two
attack strategies are proposed: local modifications, which personalize tags for
each item using BERT embeddings, and global modifications, applying uniform
tags across the dataset. Experiments conducted on the MovieLens dataset in a
black-box setting reveal that local strategies improve manipulation
effectiveness by up to 50\%, while global strategies risk boosting already
popular items. Results indicate that popular items are more susceptible to
attacks, whereas long-tail items are harder to manipulate. Approximately 70\%
of items lack tags, presenting a cold-start challenge; data augmentation and
synthesis are proposed as potential defense mechanisms to enhance RAG-based
systems' resilience. The findings emphasize the need for robust metadata
management to safeguard recommendation frameworks. Code and data are available
at https://github.com/atenanaz/Poison-RAG.
|
2501.11762
|
Disentangling stellar atmospheric parameters in astronomical spectra
using Generative Adversarial Neural Networks
|
astro-ph.IM astro-ph.GA astro-ph.SR cs.LG
|
A method based on Generative Adversarial Networks (GANs) is developed for
disentangling the physical (effective temperature and gravity) and chemical
(metallicity, overabundance of alpha-elements with respect to iron) atmospheric
properties in astronomical spectra. Using a projection of the stellar spectra,
commonly called a latent space, in which the contribution due to one or several
main stellar physicochemical properties is minimised while others are enhanced,
it was possible to maximise the information related to certain properties,
which can then be extracted using artificial neural networks (ANNs) as
regressors with higher accuracy than a reference method based on ANNs trained
with the original spectra. Methods. Our model utilises autoencoders, comprising
two artificial neural networks: an encoder and a decoder, which transform input
data into a low-dimensional representation known as the latent space. It also
uses discriminators, additional neural networks aimed at turning the
traditional autoencoder training into an adversarial approach, to disentangle
or reinforce the astrophysical parameters in the latent space. The GANDALF tool
is described. It was developed to define, train, and test our GAN model, with a
web framework to show visually how the disentangling algorithm works. It is
open to the community on GitHub. Results. The performance of our approach for
retrieving atmospheric stellar properties from spectra is demonstrated using
Gaia Radial Velocity Spectrograph (RVS) data from DR3. We use a data-driven
perspective and obtain very competitive values, all within the literature
errors, with the advantage of an important dimensionality reduction of the data
to be processed.
|
2501.11765
|
Is logical analysis performed by transformers taking place in
self-attention or in the fully connected part?
|
cs.CL cs.AI cs.LG
|
The transformer architecture applies self-attention to tokens represented as
vectors, before a fully connected (neural network) layer. These two parts can
be stacked many times. Traditionally, self-attention is seen as a mechanism for
aggregating information before logical operations are performed by the fully
connected layer. In this paper, we show that, quite counter-intuitively,
logical analysis can also be performed within self-attention. To demonstrate
this, we implement a handcrafted single-level encoder layer that performs the
logical analysis within self-attention. We then study the scenario in which a one-level
transformer model undergoes self-learning using gradient descent. We
investigate whether the model utilizes fully connected layers or self-attention
mechanisms for logical analysis when it has the choice. Given that gradient
descent can become stuck at undesired zeros, we explicitly calculate these
unwanted zeros and find ways to avoid them. We do all this in the context of
predicting grammatical category pairs of adjacent tokens in a text. We believe
that our findings have broader implications for understanding the potential
logical operations performed by self-attention.
|
2501.11770
|
The Value of Nothing: Multimodal Extraction of Human Values Expressed by
TikTok Influencers
|
cs.CL cs.CY cs.SI
|
Societal and personal values are transmitted to younger generations through
interaction and exposure. Traditionally, children and adolescents learned
values from parents, educators, or peers. Nowadays, social platforms serve as a
significant channel through which youth (and adults) consume information, as
the main medium of entertainment, and possibly the medium through which they
learn different values. In this paper we extract implicit values from TikTok
movies uploaded by online influencers targeting children and adolescents. We
curated a dataset of hundreds of TikTok movies and annotated them according to
the Schwartz Theory of Personal Values. We then experimented with an array of
masked and large language models, exploring how values can be detected.
Specifically, we considered two pipelines -- direct extraction of values from
video and a 2-step approach in which videos are first converted to elaborated
scripts and then values are extracted.
Achieving state-of-the-art results, we find that the 2-step approach performs
significantly better than the direct approach and that using a trainable Masked
Language Model as a second step significantly outperforms a few-shot
application of a number of Large Language Models. We further discuss the impact
of fine-tuning and compare the performance of the different models on
identifying values present or contradicted in the TikTok videos. Finally, we
share the first values-annotated dataset of TikTok videos. Our results pave the
way to further research on influence and value transmission in video-based
social platforms.
|
2501.11773
|
Can Bayesian Neural Networks Make Confident Predictions?
|
stat.ML cs.LG math.ST stat.TH
|
Bayesian inference promises a framework for principled uncertainty
quantification of neural network predictions. Barriers to adoption include the
difficulty of fully characterizing posterior distributions on network
parameters and the interpretability of posterior predictive distributions. We
demonstrate that under a discretized prior for the inner layer weights, we can
exactly characterize the posterior predictive distribution as a Gaussian
mixture. This setting allows us to define equivalence classes of network
parameter values which produce the same likelihood (training error) and to
relate the elements of these classes to the network's scaling regime -- defined
via ratios of the training sample size, the size of each layer, and the number
of final layer parameters. Of particular interest are distinct parameter
realizations that map to low training error and yet correspond to distinct
modes in the posterior predictive distribution. We identify settings that
exhibit such predictive multimodality, and thus provide insight into the
accuracy of unimodal posterior approximations. We also characterize the
capacity of a model to "learn from data" by evaluating contraction of the
posterior predictive in different scaling regimes.
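A toy version of this setting can be worked out explicitly. The sketch below uses a hypothetical one-hidden-unit model y = b * tanh(w * x) + noise with a three-point discretized prior on the inner weight w and a Gaussian prior on the outer weight b; conditional on w the model is conjugate, so the posterior predictive is a Gaussian mixture, and the sign-flip pair w = ±1.5 forms an equivalence class with identical likelihood. This is our simplification for illustration, not the paper's exact construction.

```python
import numpy as np

def log_evidence(f, y, tau2, s2):
    # Exact Gaussian evidence: y | w ~ N(0, tau2 * f f^T + s2 * I)
    C = tau2 * np.outer(f, f) + s2 * np.eye(len(y))
    _, logdet = np.linalg.slogdet(C)
    return -0.5 * (logdet + y @ np.linalg.solve(C, y)
                   + len(y) * np.log(2 * np.pi))

def posterior_predictive(ws, X, y, x_star, tau2=1.0, s2=0.1):
    """Gaussian-mixture posterior predictive at x_star for the toy
    model y = b * tanh(w * x) + noise, with a discrete prior on w."""
    feats = lambda w, x: np.tanh(w * x)
    log_wts, means, variances = [], [], []
    for w in ws:
        f = feats(w, X)
        post_var = 1.0 / (1.0 / tau2 + f @ f / s2)  # conjugate update
        post_mean = post_var * (f @ y) / s2
        f_star = feats(w, x_star)
        log_wts.append(log_evidence(f, y, tau2, s2))
        means.append(post_mean * f_star)
        variances.append(f_star ** 2 * post_var + s2)
    lw = np.array(log_wts)
    mix = np.exp(lw - lw.max())
    mix /= mix.sum()
    return mix, np.array(means), np.array(variances)

rng = np.random.default_rng(1)
X = rng.uniform(-2, 2, 30)
y = 0.8 * np.tanh(1.5 * X) + 0.1 * rng.standard_normal(30)
# w = -1.5 and w = 1.5 form an equivalence class: the sign flip is
# absorbed by the outer weight, so their evidence and mixture weights
# coincide exactly.
mix, mu, var = posterior_predictive([-1.5, 0.5, 1.5], X, y, 1.0)
print(abs(mix[0] - mix[2]) < 1e-8)  # True: identical evidence
```

In this toy case the equivalent parameter realizations also produce the same predictive component; the paper's interest is precisely in the settings where distinct low-error realizations instead map to distinct predictive modes.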
|
2501.11776
|
EfficientVITON: An Efficient Virtual Try-On Model using Optimized
Diffusion Process
|
cs.CV
|
Would it not be much more convenient for everybody to try on clothes simply by
looking into a mirror? The answer to that problem is virtual try-on, enabling
users to digitally experiment with outfits. The core challenge lies in
realistic image-to-image translation, where clothing must fit diverse human
forms, poses, and figures. Early methods, which used 2D transformations,
offered speed, but image quality was often disappointing and lacked the nuance
of deep learning. Though GAN-based techniques enhanced realism, their
dependence on paired data proved limiting. More adaptable methods offered great
visuals but demanded significant computing power and time. Recent advances in
diffusion models have shown promise for high-fidelity translation, yet the
current crop of virtual try-on tools still struggles with detail loss and
warping issues. To tackle these challenges, this paper proposes EfficientVITON,
a new virtual try-on system leveraging the impressive pre-trained Stable
Diffusion model for better images and deployment feasibility. The system
includes a spatial encoder to maintain clothing's finer details and zero
cross-attention blocks to capture the subtleties of how clothes fit a human
body. Input images are carefully prepared, and the diffusion process has been
tweaked to significantly cut generation time without image quality loss. The
training process involves two distinct stages of fine-tuning, carefully
incorporating a balance of loss functions to ensure both accurate try-on
results and high-quality visuals. Rigorous testing on the VITON-HD dataset,
supplemented with real-world examples, has demonstrated that EfficientVITON
achieves state-of-the-art results.
|
2501.11779
|
Glinthawk: A Two-Tiered Architecture for Offline LLM Inference
|
cs.LG cs.DC cs.PF
|
We introduce Glinthawk, an architecture for offline Large Language Model
(LLM) inference. By leveraging a two-tiered structure, Glinthawk optimizes the
utilization of the high-end accelerators ("Tier 1") by offloading the attention
mechanism to a lower-end compute tier ("Tier 2"). This separation allows the
memory demand of the attention, known as the key-value cache, to scale
independently from the model weights, enabling larger batch sizes and more
efficient accelerator usage. Prototyped with NVIDIA T4 GPUs and standard CPU
VMs, Glinthawk improves throughput by $5.9\times$ and reduces cost of
generation by $2.8\times$, compared to paged attention baselines. For long
sequence lengths, it achieves $16.3\times$ throughput improvement at
$2.4\times$ less cost. Our evaluation shows that this architecture can tolerate
moderate network latency with minimal performance degradation, making it highly
effective for latency-tolerant, throughput-focused applications such as batch
processing. The prototype is publicly available at
https://github.com/microsoft/glinthawk.
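The two-tier split can be caricatured in a few lines of Python (our sketch, not Glinthawk's implementation): the Tier-2 worker owns the growing key-value cache and evaluates attention, so Tier-1 accelerator memory never holds the cache and its size can scale independently of the model weights.

```python
import numpy as np

class Tier2AttentionWorker:
    """Illustrative sketch of the offloaded attention tier: this
    worker (e.g., a CPU VM) holds the growing key-value cache and
    computes attention on behalf of the accelerator tier."""
    def __init__(self, d):
        self.keys = np.empty((0, d))
        self.vals = np.empty((0, d))

    def attend(self, q, k, v):
        # Append the new token's key/value, then compute attention
        # over the full cache held on this tier.
        self.keys = np.vstack([self.keys, k])
        self.vals = np.vstack([self.vals, v])
        scores = self.keys @ q / np.sqrt(len(q))
        w = np.exp(scores - scores.max())
        w /= w.sum()
        return w @ self.vals

d = 4
worker = Tier2AttentionWorker(d)   # "Tier 2": holds the KV cache
rng = np.random.default_rng(0)
for step in range(3):              # "Tier 1" sends per-token q, k, v
    q, k, v = rng.standard_normal((3, d))
    out = worker.attend(q, k[None, :], v[None, :])
print(worker.keys.shape)  # (3, 4): the cache grows on Tier 2 only
```

In the real system the per-token vectors cross the network, which is why the evaluation's tolerance to moderate network latency matters.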
|
2501.11782
|
Human-AI Collaborative Game Testing with Vision Language Models
|
cs.HC cs.AI
|
As modern video games become increasingly complex, traditional manual testing
methods are proving costly and inefficient, limiting the ability to ensure
high-quality game experiences. While advancements in Artificial Intelligence
(AI) offer the potential to assist human testers, the effectiveness of AI in
truly enhancing real-world human performance remains underexplored. This study
investigates how AI can improve game testing by developing and experimenting
with an AI-assisted workflow that leverages state-of-the-art machine learning
models for defect detection. Through an experiment involving 800 test cases and
276 participants of varying backgrounds, we evaluate the effectiveness of AI
assistance under four conditions: with or without AI support, and with or
without detailed knowledge of defects and design documentation. The results
indicate that AI assistance significantly improves defect identification
performance, particularly when paired with detailed knowledge. However,
challenges arise when AI errors occur, negatively impacting human
decision-making. Our findings show the importance of optimizing human-AI
collaboration and implementing strategies to mitigate the effects of AI
inaccuracies. Through this research, we demonstrate AI's potential and problems
in enhancing efficiency and accuracy in game testing workflows and offer
practical insights for integrating AI into the testing process.
|
2501.11784
|
Generating visual explanations from deep networks using implicit neural
representations
|
cs.CV
|
Explaining deep learning models in a way that humans can easily understand is
essential for responsible artificial intelligence applications. Attribution
methods constitute an important area of explainable deep learning. The
attribution problem involves finding parts of the network's input that are the
most responsible for the model's output. In this work, we demonstrate that
implicit neural representations (INRs) constitute a good framework for
generating visual explanations. Firstly, we utilize coordinate-based implicit
networks to reformulate and extend the extremal perturbations technique and
generate attribution masks. Experimental results confirm the usefulness of our
method. For instance, by proper conditioning of the implicit network, we obtain
attribution masks that are well-behaved with respect to the imposed area
constraints. Secondly, we present an iterative INR-based method that can be
used to generate multiple non-overlapping attribution masks for the same image.
We show that a deep learning model may associate the image label with both
the appearance of the object of interest as well as with areas and textures
usually accompanying the object. Our study demonstrates that implicit networks
are well-suited for the generation of attribution masks and can provide
interesting insights about the performance of deep learning models.
|
2501.11786
|
Synthetic Data Can Mislead Evaluations: Membership Inference as Machine
Text Detection
|
cs.CL cs.CR cs.LG
|
Recent work shows membership inference attacks (MIAs) on large language
models (LLMs) produce inconclusive results, partly due to difficulties in
creating non-member datasets without temporal shifts. While researchers have
turned to synthetic data as an alternative, we show this approach can be
fundamentally misleading. Our experiments indicate that MIAs function as
machine-generated text detectors, incorrectly identifying synthetic data as
training samples regardless of the data source. This behavior persists across
different model architectures and sizes, from open-source models to commercial
ones such as GPT-3.5. Even synthetic text generated by different, potentially
larger models is classified as training data by the target model. Our findings
highlight a serious concern: using synthetic data in membership evaluations may
lead to false conclusions about model memorization and data leakage. We caution
that this issue could affect other evaluations based on model signals such as
loss, wherever synthetic or machine-generated (e.g., translated) data
substitutes for real-world samples.
|
2501.11788
|
OciorABA: Improved Error-Free Asynchronous Byzantine Agreement via
Partial Vector Agreement
|
cs.DC cs.CR cs.IT math.IT
|
In this work, we propose an error-free, information-theoretically secure
multi-valued asynchronous Byzantine agreement (ABA) protocol, called OciorABA.
This protocol achieves ABA consensus on an $\ell$-bit message with an expected
communication complexity of $O(n\ell + n^3 \log q )$ bits and an expected round
complexity of $O(1)$ rounds, under the optimal resilience condition $n \geq 3t
+ 1$ in an $n$-node network, where up to $t$ nodes may be dishonest. Here, $q$
denotes the alphabet size of the error correction code used in the protocol. In
our protocol design, we introduce a new primitive: asynchronous partial vector
agreement (APVA). In APVA, the distributed nodes input their vectors and aim to
output a common vector, where some of the elements of those vectors may be
missing or unknown. We propose an APVA protocol with an expected communication
complexity of $O( n^3 \log q )$ bits and an expected round complexity of $O(1)$
rounds. This APVA protocol serves as a key building block for our OciorABA
protocol.
|
2501.11790
|
Benchmarking Large Language Models via Random Variables
|
cs.CL cs.AI
|
Recent studies have raised concerns about the reliability of current
mathematical benchmarks, highlighting issues such as simplistic design and
potential data contamination. Therefore, creating a reliable benchmark that
effectively evaluates the genuine capabilities of large language models (LLMs)
in mathematical reasoning remains a significant challenge. To address this, we
propose RV-Bench, a framework for Benchmarking LLMs via Random Variables in
mathematical reasoning. Specifically, the background content of a random
variable question (RV question) mirrors the original problem in existing
benchmarks, but the variable combinations are randomized, making it "unseen" by
the LLMs. Models must completely understand the question pattern of the
original problem to correctly answer RV questions with various variable values.
As a result, the LLM's genuine capability in mathematical reasoning is
reflected by its accuracy and robustness on RV-Bench. We conducted extensive
experiments on over 30 representative LLMs across more than 1000 RV questions.
Our findings suggest that LLMs exhibit an imbalance in proficiency between
encountered and "unseen" data domains. Proficiency generalization across
similar mathematical reasoning tasks is verified to be limited by accuracy and
robustness, but it can still be enhanced through test-time scaling.
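The randomization idea behind RV questions can be sketched with a toy template (ours, not an actual RV-Bench item): the question pattern is held fixed while the variable combination is resampled, and grading checks the pattern rather than a memorized constant.

```python
import random

def rv_question(rng):
    """Toy RV-question generator (illustrative template only): the
    background wording mirrors a fixed original problem, but the
    variable combination is randomized, making the instance 'unseen'
    even if the original problem leaked into training data."""
    a = rng.randint(2, 9)     # pack size
    b = rng.randint(10, 99)   # total items
    question = (f"A shop packs pencils in boxes of {a}. "
                f"How many boxes are needed for {b} pencils?")
    answer = -(-b // a)       # ceiling division
    return question, answer, (a, b)

rng = random.Random(7)
q, ans, (a, b) = rv_question(rng)
# Grading verifies the question pattern, not a memorized constant:
print(str(a) in q and str(b) in q and ans * a >= b > (ans - 1) * a)
```

A model that only memorized the original instance would fail across resampled variable values, which is what the accuracy and robustness metrics on RV-Bench are designed to expose.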
|
2501.11795
|
Provably effective detection of effective data poisoning attacks
|
cs.CR cs.CV cs.LG stat.ML
|
This paper establishes a mathematically precise definition of dataset
poisoning attack and proves that the very act of effectively poisoning a
dataset ensures that the attack can be effectively detected. On top of a
mathematical guarantee that dataset poisoning is identifiable by a new
statistical test that we call the Conformal Separability Test, we provide
experimental evidence that we can adequately detect poisoning attempts in the
real world.
|
2501.11799
|
Policy-Adaptable Methods For Resolving Normative Conflicts Through
Argumentation and Graph Colouring
|
cs.AI cs.LO math.LO
|
In a multi-agent system, one may choose to govern the behaviour of an agent
by imposing norms, which act as guidelines for how agents should act either all
of the time or in given situations. However, imposing multiple norms on one or
more agents may result in situations where these norms conflict over how the
agent should behave. In any system with normative conflicts (such as safe
reinforcement models or systems which monitor safety protocols), one must
decide which norms should be followed such that the most important and most
relevant norms are maintained. We introduce a new method for resolving
normative conflicts through argumentation and graph colouring which is
compatible with a variety of normative conflict resolution policies. We prove
that this method always creates an admissible set of arguments under
argumentation semantics, meaning that it produces coherent outputs. We also
introduce more robust variants of this method, each building upon their
predecessor to create a superior output, and we include further mathematical
proof of their coherence. Our most advanced variant uses the existing concept
of curtailment, where one norm may supersede another without fully eliminating
it. The methods we introduce are all compatible with various pre-existing
policies for resolving normative conflicts. Empirical evaluations are also
performed to compare our algorithms to each other and to others in existing
literature.
|
2501.11800
|
TFLOP: Table Structure Recognition Framework with Layout Pointer
Mechanism
|
cs.CV
|
Table Structure Recognition (TSR) is a task aimed at converting table images
into a machine-readable format (e.g. HTML), to facilitate other applications
such as information retrieval. Recent works tackle this problem by identifying
the HTML tags and text regions, where the latter is used for text extraction
from the table document. These works, however, suffer from misalignment issues
when mapping text into the identified text regions. In this paper, we introduce
a new TSR framework, called TFLOP (TSR Framework with LayOut Pointer
mechanism), which reformulates the conventional text region prediction and
matching into a direct text region pointing problem. Specifically, TFLOP
utilizes text region information to identify both the table's structure tags
and its aligned text regions, simultaneously. Without the need for region
prediction and alignment, TFLOP circumvents the additional text region matching
stage, which requires finely-calibrated post-processing. TFLOP also employs
span-aware contrastive supervision to enhance the pointing mechanism in tables
with complex structure. As a result, TFLOP achieves the state-of-the-art
performance across multiple benchmarks such as PubTabNet, FinTabNet, and
SynthTabNet. In our extensive experiments, TFLOP not only exhibits competitive
performance but also shows promising results in industrial document TSR
scenarios, such as documents with watermarks or in non-English domains.
|
2501.11803
|
Automating High Quality RT Planning at Scale
|
cs.HC cs.LG cs.RO
|
Radiotherapy (RT) planning is complex, subjective, and time-intensive.
Advances in artificial intelligence (AI) promise to improve its precision,
efficiency, and consistency, but progress is often limited by the scarcity of
large, standardized datasets. To address this, we introduce the Automated
Iterative RT Planning (AIRTP) system, a scalable solution designed to generate
substantial volumes of consistently high-quality treatment plans, overcoming a
key obstacle in the advancement of AI-driven RT planning. Our AIRTP pipeline
adheres to clinical guidelines and automates essential steps, including
organ-at-risk (OAR) contouring, helper structure creation, beam setup,
optimization, and plan quality improvement, using AI integrated with RT
planning software like Varian Eclipse. Furthermore, we propose a novel approach
for determining optimization parameters to reproduce 3D dose distributions,
i.e., a method to convert dose predictions into deliverable treatment plans
constrained by machine limitations. A comparative analysis of plan quality
reveals that our
automated pipeline produces treatment plans of quality comparable to those
generated manually, which traditionally require several hours of labor per
plan. As a commitment to public research, the first data release from our AIRTP
pipeline includes nine cohorts covering head-and-neck and lung cancer sites to
support an AAPM 2025 challenge. To the best of our knowledge, this dataset
features more than 10 times the number of plans of the largest existing
well-curated public dataset.
Repo: https://github.com/RiqiangGao/GDP-HMM_AAPMChallenge
|
2501.11813
|
Utilising Deep Learning to Elicit Expert Uncertainty
|
cs.LG stat.OT
|
Recent work [14] has introduced a method for prior elicitation that
utilizes records of expert decisions to infer a prior distribution. While this
method provides a promising approach to eliciting expert uncertainty, it has
only been demonstrated using tabular data, which may not entirely represent the
information used by experts to make decisions. In this paper, we demonstrate
how analysts can adopt a deep learning approach to utilize the method proposed
in [14] with the actual information experts use. We provide an overview of
deep learning models that can effectively model expert decision-making to
elicit distributions that capture expert uncertainty and present an example
examining the risk of colon cancer to show in detail how these models can be
used.
|
2501.11815
|
CogMorph: Cognitive Morphing Attacks for Text-to-Image Models
|
cs.CV
|
The development of text-to-image (T2I) generative models, that enable the
creation of high-quality synthetic images from textual prompts, has opened new
frontiers in creative design and content generation. However, this paper
reveals a significant and previously unrecognized ethical risk inherent in this
technology and introduces a novel method, termed the Cognitive Morphing Attack
(CogMorph), which manipulates T2I models to generate images that retain the
original core subjects but embed toxic or harmful contextual elements. This
nuanced manipulation exploits the cognitive principle that human perception of
concepts is shaped by the entire visual scene and its context, producing images
that amplify emotional harm far beyond attacks that merely preserve the
original semantics. To address this, we first construct an imagery toxicity
taxonomy spanning 10 major and 48 sub-categories, aligned with human
cognitive-perceptual dimensions, and further build a toxicity risk matrix
resulting in 1,176 high-quality T2I toxic prompts. Based on this, our CogMorph
first introduces Cognitive Toxicity Augmentation, which develops a cognitive
toxicity knowledge base with rich external toxic representations for humans
(e.g., fine-grained visual features) that can be utilized to further guide the
optimization of adversarial prompts. In addition, we present Contextual
Hierarchical Morphing, which hierarchically extracts critical parts of the
original prompt (e.g., scenes, subjects, and body parts), and then iteratively
retrieves and fuses toxic features to inject harmful contexts. Extensive
experiments on multiple open-sourced T2I models and black-box commercial APIs
(e.g., DALLE-3) demonstrate the efficacy of CogMorph which significantly
outperforms other baselines by large margins (+20.62% on average).
|
2501.11817
|
Toward Effective Digraph Representation Learning: A Magnetic Adaptive
Propagation based Approach
|
cs.LG cs.AI cs.DB cs.SI
|
The $q$-parameterized magnetic Laplacian serves as the foundation of directed
graph (digraph) convolution, enabling this kind of digraph neural network
(MagDG) to encode node features and structural insights by complex-domain
message passing. As a generalization of undirected methods, MagDG shows
superior capability in modeling intricate web-scale topology. Despite the great
success achieved by existing MagDGs, limitations still exist: (1) Hand-crafted
$q$: The performance of MagDGs depends on selecting an appropriate
$q$-parameter to construct suitable graph propagation equations in the complex
domain. This parameter tuning, driven by downstream tasks, limits model
flexibility and significantly increases manual effort. (2) Coarse Message
Passing: Most approaches treat all nodes with the same complex-domain
propagation and aggregation rules, neglecting their unique digraph contexts.
This oversight results in sub-optimal performance. To address the above issues,
we propose two key techniques: (1) MAP, a plug-and-play complex-domain
propagation optimization strategy for digraph learning that can be seamlessly
integrated into any MagDG to improve predictions while maintaining high running
efficiency; and (2) MAP++, a new digraph learning framework that further
incorporates a learnable mechanism to achieve adaptive edge-wise propagation
and node-wise aggregation in the complex domain for
better performance. Extensive experiments on 12 datasets demonstrate that MAP
offers flexibility, as it can be incorporated into any MagDG, and scalability,
as it can handle web-scale digraphs. MAP++ achieves SOTA predictive
performance on 4 different downstream tasks.
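The complex-domain operator underlying MagDGs can be written down compactly. Below is a minimal numpy sketch of the q-parameterized magnetic Laplacian (the standard construction this line of work builds on, not code from the paper); the function name and the example digraph are our own:

```python
import numpy as np

def magnetic_laplacian(A, q=0.25):
    """q-parameterized magnetic Laplacian of a digraph adjacency matrix A."""
    A_s = 0.5 * (A + A.T)                  # symmetrized adjacency
    D_s = np.diag(A_s.sum(axis=1))         # symmetrized degree matrix
    theta = 2.0 * np.pi * q * (A - A.T)    # antisymmetric phase encoding direction
    H = A_s * np.exp(1j * theta)           # Hermitian complex-domain adjacency
    return D_s - H

# A 3-cycle digraph: 0 -> 1 -> 2 -> 0
A = np.array([[0, 1, 0],
              [0, 0, 1],
              [1, 0, 0]], dtype=float)
L = magnetic_laplacian(A, q=0.25)
print(np.allclose(L, L.conj().T))  # Hermitian by construction -> True
```

At q = 0 the phase vanishes and the operator reduces to the ordinary Laplacian of the symmetrized graph, which is exactly why tuning q matters: it controls how much directional information enters the complex domain.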
|
2501.11818
|
Group-Agent Reinforcement Learning with Heterogeneous Agents
|
cs.LG
|
Group-agent reinforcement learning (GARL) is an emerging learning scenario in
which multiple reinforcement learning agents learn together in a group, sharing
knowledge in an asynchronous fashion. The goal is to improve the
learning performance of each individual agent. Under a more general
heterogeneous setting where different agents learn using different algorithms,
we advance GARL by designing novel and effective group-learning mechanisms.
They guide the agents on whether and how to learn from the others' action
choices, and allow the agents to adopt available policy and value function
models sent by another agent if they perform better. We have conducted
extensive experiments on a total of 43 different Atari 2600 games to
demonstrate the superior performance of the proposed method. After the group
learning, among the 129 agents examined, 96% are able to achieve a learning
speed-up, and 72% are able to learn over 100 times faster. Also, around 41% of
those agents have achieved a higher accumulated reward score by learning in
less than 5% of the time steps required by a single agent when learning on its
own.
|
2501.11820
|
Comparative Analysis of Control Strategies for Position Regulation in DC
Servo Motors
|
eess.SY cs.SY
|
A servomotor is a closed-loop system designed for precise movement control,
utilizing position feedback to achieve accurate final positions. Due to the
ability to deliver higher power output and operate at enhanced speeds, DC servo
motors are considered ideal for applications requiring precision and
performance. This research aims to design, simulate, and compare various
control strategies for precise position control in DC servo motors (DSM). The
controllers evaluated in this study include proportional (P),
proportional-integral (PI), proportional-integral-derivative (PID),
state-feedback controllers (SFC), and state-feedback controllers augmented with
integral action (SFCIA). The performance of these controllers was evaluated
using MATLAB simulations, characterized by overshoot, settling time,
steady-state error, rise time, and peak time. The results indicate that the
state-feedback controller with integral action (SFCIA) surpasses other control
strategies by achieving zero steady-state error, minimal overshoot, the
shortest settling time, and optimized rise and peak times. These findings
highlight the effectiveness of SFCIA for tasks requiring high levels of
stability, precision, and dynamic performance.
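To make the steady-state-error comparison concrete, here is a minimal discrete-time PID position loop on a toy second-order motor model; the plant and gains are our own illustrative choices, not the paper's MATLAB setup. The integral term drives the steady-state error to zero, mirroring why PID/SFCIA outperform plain proportional control:

```python
# Toy DC-servo position loop: plant theta'' = u - 2*theta' (hypothetical model),
# controlled by a discrete PID. Gains are illustrative, chosen for stability.
dt, T = 0.001, 10.0
Kp, Ki, Kd = 10.0, 5.0, 2.0
setpoint = 1.0

theta, omega, integral = 0.0, 0.0, 0.0
for _ in range(int(T / dt)):
    error = setpoint - theta
    integral += error * dt
    u = Kp * error + Ki * integral - Kd * omega  # derivative on measurement
    omega += (u - 2.0 * omega) * dt              # forward-Euler plant step
    theta += omega * dt

print(abs(setpoint - theta) < 0.05)  # integral action removes the offset -> True
```

A pure P controller on the same plant would settle with a nonzero offset; sweeping the gains reproduces the overshoot/settling-time trade-offs the study tabulates.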
|
2501.11823
|
Toward Scalable Graph Unlearning: A Node Influence Maximization based
Approach
|
cs.LG cs.AI cs.DB cs.SI
|
Machine unlearning, as a pivotal technology for enhancing model robustness
and data privacy, has garnered significant attention in prevalent web mining
applications, especially in thriving graph-based scenarios. However, most
existing graph unlearning (GU) approaches face significant challenges due to
the intricate interactions among web-scale graph elements during the model
training: (1) The gradient-driven node entanglement hinders the complete
knowledge removal in response to unlearning requests; (2) The billion-level
graph elements in the web scenarios present inevitable scalability issues. To
break the above limitations, we open up a new perspective by drawing a
connection between GU and conventional social influence maximization. To this
end, we propose Node Influence Maximization (NIM) through the decoupled
influence propagation model and fine-grained influence function in a scalable
manner, which is crafted to be a plug-and-play strategy to identify potential
nodes affected by unlearning entities. This approach enables offline execution
independent of GU, allowing it to be seamlessly integrated into most GU methods
to improve their unlearning performance. Based on this, we introduce Scalable
Graph Unlearning (SGU) as a new fine-tuned framework, which balances the
forgetting and reasoning capability of the unlearned model by entity-specific
optimizations. Extensive experiments on 14 datasets, including large-scale
ogbn-papers100M, have demonstrated the effectiveness of our approach.
Specifically, NIM enhances the forgetting capability of most GU methods, while
SGU achieves comprehensive SOTA performance and maintains scalability.
|
2501.11827
|
PXGen: A Post-hoc Explainable Method for Generative Models
|
cs.LG cs.AI
|
With the rapid growth of generative AI in numerous applications, explainable
AI (XAI) plays a crucial role in ensuring the responsible development and
deployment of generative AI technologies. XAI has undergone notable
advancements and widespread adoption in recent years, reflecting a concerted
push to enhance the transparency, interpretability, and credibility of AI
systems. Recent research emphasizes that a proficient XAI method should adhere
to a set of criteria, primarily focusing on two key areas. Firstly, it should
ensure the quality and fluidity of explanations, encompassing aspects like
faithfulness, plausibility, completeness, and tailoring to individual needs.
Secondly, the design principle of the XAI system or mechanism should cover the
following factors such as reliability, resilience, the verifiability of its
outputs, and the transparency of its algorithm. However, research in XAI for
generative models remains relatively scarce, with little exploration into how
such methods can effectively meet these criteria in that domain. In this work,
we propose PXGen, a post-hoc explainable method for generative models. Given a
model that needs to be explained, PXGen prepares two materials for the
explanation, the Anchor set and intrinsic & extrinsic criteria. Those materials
are customizable by users according to their purpose and requirements. Via the
calculation of each criterion, each anchor obtains a set of feature values, and
PXGen provides example-based explanations according to the feature values
across all the anchors, which are illustrated and visualized for users via
tractable algorithms such as k-dispersion or k-center.
|
2501.11828
|
Fact-Preserved Personalized News Headline Generation
|
cs.CL cs.AI
|
Personalized news headline generation, which aims at generating user-specific
headlines based on readers' preferences, is a recently flourishing research
direction. Existing studies generally inject a user interest embedding into an
encoder-decoder headline generator to personalize the output, while the factual
consistency of the generated headlines is inadequately verified. In this paper,
we propose the Fact-Preserved Personalized News Headline Generation framework
(FPG for short), to achieve a tradeoff between personalization and consistency.
In FPG, the similarity between the candidate news to be exposed and the
historical clicked news is used to give different levels of attention to key
facts in the candidate news, and the similarity scores help to learn a
fact-aware global user embedding. Besides, an additional training procedure
based on contrastive learning is devised to further enhance the factual
consistency of generated headlines. Extensive experiments conducted on a
real-world benchmark PENS validate the superiority of FPG, especially on the
tradeoff between personalization and factual consistency.
|
2501.11830
|
ShadowGenes: Leveraging Recurring Patterns within Computational Graphs
for Model Genealogy
|
cs.LG cs.CR
|
Machine learning model genealogy enables practitioners to determine which
architectural family a neural network belongs to. In this paper, we introduce
ShadowGenes, a novel, signature-based method for identifying a given model's
architecture, type, and family. Our method involves building a computational
graph of the model that is agnostic of its serialization format, then analyzing
its internal operations to identify unique patterns, and finally building and
refining signatures based on these. We highlight important workings of the
underlying engine and demonstrate the technique used to construct a signature
and scan a given model. This approach to model genealogy can be applied to
model files without the need for additional external information. We test
ShadowGenes on a labeled dataset of over 1,400 models and achieve a mean true
positive rate of 97.49% and a precision score of 99.51%, which validates the
technique as a practical method for model genealogy. This enables practitioners
to understand the use cases of a given model, the internal computational
process, and identify possible security risks, such as the potential for model
backdooring.
|
2501.11833
|
Is your LLM trapped in a Mental Set? Investigative study on how mental
sets affect the reasoning capabilities of LLMs
|
cs.CL cs.AI
|
In this paper, we present an investigative study on how Mental Sets influence
the reasoning capabilities of LLMs. LLMs have excelled in diverse natural
language processing (NLP) tasks, driven by advancements in parameter-efficient
fine-tuning (PEFT) and emergent capabilities like in-context learning (ICL).
For complex reasoning tasks, selecting the right model for PEFT or ICL is
critical, often relying on scores on benchmarks such as MMLU, MATH, and GSM8K.
However, current evaluation methods, based on metrics like F1 Score or
reasoning chain assessments by larger models, overlook a key dimension:
adaptability to unfamiliar situations and overcoming entrenched thinking
patterns. In cognitive psychology, Mental Set refers to the tendency to persist
with previously successful strategies, even when they become inefficient - a
challenge for problem solving and reasoning. We compare the performance of LLM
models like Llama-3.1-8B-Instruct, Llama-3.1-70B-Instruct and GPT-4o in the
presence of mental sets. To the best of our knowledge, this is the first study
to integrate cognitive psychology concepts into the evaluation of LLMs for
complex reasoning tasks, providing deeper insights into their adaptability and
problem-solving efficacy.
|
2501.11834
|
PDA Construction via Union of Cartesian Product Cache Configurations for
Coded Caching
|
cs.IT math.IT
|
Caching is an efficient technique to reduce peak traffic by storing popular
content in local caches. Placement delivery array (PDA) proposed by Yan et al.
is a combinatorial structure to design coded caching schemes with uncoded
placement and one-shot linear delivery. By taking the $m$-fold Cartesian
product of a small base PDA, Wang et al. constructed a big PDA while
maintaining the memory ratio and transmission load unchanged, which achieves
linear growth in both the number of users and coded caching gain. In order to
achieve exponential growth in both the number of users and coded caching gain,
in this paper we propose a PDA construction by taking the union operation of
the cache configurations from the $m$-fold Cartesian product of a base PDA. The
resulting PDA leads to a coded caching scheme with subpacketization increasing
sub-exponentially with the number of users while keeping the load constant for
fixed memory ratio. By applying the proposed construction to existing base
PDAs, three new coded caching schemes are obtained, which cover some existing
schemes as special cases and can achieve lower load with simultaneously lower
subpacketization for some memory ratios.
|
2501.11835
|
Hybrid Adaptive Modeling using Neural Networks Trained with Nonlinear
Dynamics Based Features
|
cs.LG nlin.AO
|
Accurate models are essential for design, performance prediction, control,
and diagnostics in complex engineering systems. Physics-based models excel
during the design phase but often become outdated during system deployment due
to changing operational conditions, unknown interactions, excitations, and
parametric drift. While data-based models can capture the current state of
complex systems, they face significant challenges, including excessive data
dependence, limited generalizability to changing conditions, and inability to
predict parametric dependence. This has led to combining physics and data in
modeling, termed physics-infused machine learning, often using numerical
simulations from physics-based models. This paper introduces a novel approach
that departs from standard techniques by uncovering information from nonlinear
dynamical modeling and embedding it in data-based models. The goal is to create
a hybrid adaptive modeling framework that integrates data-based modeling with
newly measured data and analytical nonlinear dynamical models for enhanced
accuracy, parametric dependence, and improved generalizability. By explicitly
incorporating nonlinear dynamic phenomena through perturbation methods, the
predictive capabilities are more realistic and insightful compared to knowledge
obtained from brute-force numerical simulations. In particular, perturbation
methods are utilized to derive asymptotic solutions which are parameterized to
generate frequency responses. Frequency responses provide comprehensive
insights into dynamics and nonlinearity which are quantified and extracted as
high-quality features. A machine-learning model, trained by these features,
tracks parameter variations and updates the mismatched model. The results
demonstrate that this adaptive modeling method outperforms numerical gray box
models in prediction accuracy and computational efficiency.
|
2501.11836
|
Data-driven Detection and Evaluation of Damages in Concrete Structures:
Using Deep Learning and Computer Vision
|
cs.CV cs.AI cs.LG
|
Structural integrity is vital for maintaining the safety and longevity of
concrete infrastructures such as bridges, tunnels, and walls. Traditional
methods for detecting damages like cracks and spalls are labor-intensive,
time-consuming, and prone to human error. To address these challenges, this
study explores advanced data-driven techniques using deep learning for
automated damage detection and analysis. Two state-of-the-art instance
segmentation models, YOLO-v7 instance segmentation and Mask R-CNN, were
evaluated using a dataset comprising 400 images, augmented to 10,995 images
through geometric and color-based transformations to enhance robustness. The
models were trained and validated using a dataset split into 90% for training
and 10% for validation and testing. Performance metrics such as precision, recall,
mean average precision (mAP@0.5), and frames per second (FPS) were used for
evaluation. YOLO-v7 achieved a superior mAP@0.5 of 96.1% and processed 40 FPS,
outperforming Mask R-CNN, which achieved a mAP@0.5 of 92.1% with a slower
processing speed of 18 FPS. The findings recommend YOLO-v7 instance
segmentation model for real-time, high-speed structural health monitoring,
while Mask R-CNN is better suited for detailed offline assessments. This study
demonstrates the potential of deep learning to revolutionize infrastructure
maintenance, offering a scalable and efficient solution for automated damage
detection.
|
2501.11839
|
Supervised Learning for Analog and RF Circuit Design: Benchmarks and
Comparative Insights
|
cs.LG cs.AI cs.AR
|
Automating analog and radio-frequency (RF) circuit design using machine
learning (ML) significantly reduces the time and effort required for parameter
optimization. This study explores supervised ML-based approaches for designing
circuit parameters from performance specifications across various circuit
types, including homogeneous and heterogeneous designs. By evaluating diverse
ML models, from neural networks like transformers to traditional methods like
random forests, we identify the best-performing models for each circuit. Our
results show that simpler circuits, such as low-noise amplifiers, achieve
exceptional accuracy with mean relative errors as low as 0.3% due to their
linear parameter-performance relationships. In contrast, complex circuits, like
power amplifiers and voltage-controlled oscillators, present challenges due to
their non-linear interactions and larger design spaces. For heterogeneous
circuits, our approach achieves an 88% reduction in errors with increased
training data, with the receiver achieving a mean relative error as low as
0.23%, showcasing the scalability and accuracy of the proposed methodology.
Additionally, we provide insights into model strengths, with transformers
excelling in capturing non-linear mappings and k-nearest neighbors performing
robustly in moderately linear parameter spaces, especially in heterogeneous
circuits with larger datasets. This work establishes a foundation for extending
ML-driven design automation, enabling more efficient and scalable circuit
design workflows.
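The study's headline metric, mean relative error (MRE), is easy to reproduce on a toy regression. The sketch below is our own illustration, not the paper's benchmark: a tiny numpy k-nearest-neighbors regressor fit to a synthetic, roughly linear parameter-performance mapping, with its MRE reported:

```python
import numpy as np

def knn_regress(X_train, y_train, X_query, k=3):
    """Plain numpy k-NN regressor: average the targets of the k nearest rows."""
    d = np.linalg.norm(X_train[None, :, :] - X_query[:, None, :], axis=2)
    idx = np.argsort(d, axis=1)[:, :k]
    return y_train[idx].mean(axis=1)

rng = np.random.default_rng(0)
X_train = rng.uniform(0, 1, size=(1000, 2))
y_train = 3.0 * X_train[:, 0] + 2.0 * X_train[:, 1] + 1.0  # toy "circuit" map
X_test = rng.uniform(0.1, 0.9, size=(50, 2))
y_true = 3.0 * X_test[:, 0] + 2.0 * X_test[:, 1] + 1.0

y_pred = knn_regress(X_train, y_train, X_test, k=3)
mre = np.mean(np.abs(y_pred - y_true) / np.abs(y_true))  # mean relative error
print(f"MRE = {mre:.3f}")
```

On a near-linear mapping like this, k-NN interpolates well with enough data, consistent with the abstract's observation that it performs robustly in moderately linear parameter spaces with larger datasets.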
|
2501.11841
|
Survey on Monocular Metric Depth Estimation
|
cs.CV
|
Monocular Depth Estimation (MDE) is a fundamental computer vision task
underpinning applications such as spatial understanding, 3D reconstruction, and
autonomous driving. While deep learning-based MDE methods can predict relative
depth from a single image, their lack of metric scale information often results
in scale inconsistencies, limiting their utility in downstream tasks like
visual SLAM, 3D reconstruction, and novel view synthesis. Monocular Metric
Depth Estimation (MMDE) addresses these challenges by enabling precise,
scene-scale depth inference. MMDE improves depth consistency, enhances
sequential task stability, simplifies integration into downstream applications,
and broadens practical use cases. This paper provides a comprehensive review of
depth estimation technologies, highlighting the evolution from geometry-based
methods to state-of-the-art deep learning approaches. It emphasizes
advancements in scale-agnostic methods, which are crucial for enabling
zero-shot generalization as the foundational capability for MMDE. Recent
progress in zero-shot MMDE research is explored, focusing on challenges such as
model generalization and the loss of detail at scene boundaries. Innovative
strategies to address these issues include unlabelled data augmentation, image
patching, architectural optimization, and generative techniques. These
advancements, analyzed in detail, demonstrate significant contributions to
overcoming existing limitations. Finally, this paper synthesizes recent
developments in zero-shot MMDE, identifies unresolved challenges, and outlines
future research directions. By offering a clear roadmap and cutting-edge
insights, this work aims to deepen understanding of MMDE, inspire novel
applications, and drive technological innovation.
|
2501.11842
|
Harnessing Rydberg Atomic Receivers: From Quantum Physics to Wireless
Communications
|
cs.IT eess.SP math.IT
|
The intrinsic integration of Rydberg atomic receivers into wireless
communication systems is proposed, by harnessing the principles of quantum
physics in wireless communications. More particularly, we conceive a pair of
Rydberg atomic receivers, one incorporates a local oscillator (LO), referred to
as an LO-dressed receiver, while the other operates without an LO and is termed
an LO-free receiver. The appropriate wireless model is developed for each
configuration, elaborating on the receiver's responses to the radio frequency
(RF) signal, on the potential noise sources, and on the system performance.
Next, we investigate the associated distortion effects that might occur,
specifically demonstrating the boundaries of the linear dynamic regions, which
provides critical insights into practical implementations in wireless
systems. Extensive simulation results are provided for characterizing the
performance of wireless systems, harnessing this pair of Rydberg atomic
receivers. Our results demonstrate that they deliver complementary benefits:
LO-free systems excel in proximity operations, while LO-dressed systems are
eminently suitable for long-distance sensing at extremely low power levels.
More specifically, LO-dressed systems achieve a significant signal-to-noise
ratio (SNR) gain of approximately 44 dB over conventional RF receivers,
exhibiting an effective coverage range extension over conventional RF receivers
by a factor of 150. Furthermore, LO-dressed systems support higher-order
quadrature amplitude modulation (QAM) at reduced symbol error rates (SER)
compared to conventional RF receivers, hence significantly enhancing wireless
communication performance.
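As a back-of-the-envelope consistency check (our own arithmetic, assuming free-space 1/d² path loss, which the abstract does not state), a 44 dB SNR gain translates into a range extension of the same order as the reported ~150x:

```python
snr_gain_db = 44.0
power_ratio = 10 ** (snr_gain_db / 10)   # SNR advantage at equal distance
# Under free-space path loss, received power falls as 1/d^2, so a gain of
# G dB stretches the usable range by a factor of 10^(G/20).
range_factor = 10 ** (snr_gain_db / 20)
print(round(power_ratio))   # 25119
print(round(range_factor))  # 158, same order as the reported ~150x extension
```

The residual gap between 158 and 150 is plausibly absorbed by non-ideal propagation and receiver effects, which the abstract's distortion analysis addresses.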
|
2501.11847
|
A Survey on Memory-Efficient Large-Scale Model Training in AI for
Science
|
cs.LG cs.AI
|
Scientific research faces high costs and inefficiencies with traditional
methods, but the rise of deep learning and large language models (LLMs) offers
innovative solutions. This survey reviews LLM applications across scientific
fields such as biology, medicine, chemistry, and meteorology, underscoring
their role in advancing research. However, the continuous expansion of model
size has led to significant memory demands, hindering further development and
application of LLMs for science. To address this, we review memory-efficient
training techniques for LLMs based on the transformer architecture, including
distributed training, mixed precision training, and gradient checkpointing.
Using AlphaFold 2 as an example, we demonstrate how tailored memory
optimization methods can reduce storage needs while preserving prediction
accuracy. We also discuss the challenges of memory optimization in practice and
potential future directions, hoping to provide valuable insights for
researchers and engineers.
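The memory saving from gradient checkpointing, one of the surveyed techniques, follows from simple arithmetic: storing every activation costs O(n) in the number of layers, while checkpointing every sqrt(n)-th layer and recomputing one segment at a time costs O(sqrt(n)). A minimal cost model (our own illustrative accounting, with one layer's activations as the memory unit):

```python
import math

def activation_memory(n_layers, checkpoint=False):
    """Rough activation-memory model, in units of one layer's activations.

    Without checkpointing, all n activations are kept for the backward pass.
    With sqrt(n) checkpointing, only segment boundaries are stored, and one
    segment is recomputed at a time during the backward pass.
    """
    if not checkpoint:
        return n_layers
    seg = max(1, round(math.sqrt(n_layers)))      # segment length
    n_checkpoints = math.ceil(n_layers / seg)     # stored boundaries
    return n_checkpoints + seg                    # boundaries + one live segment

for n in (16, 64, 256):
    print(n, activation_memory(n), activation_memory(n, checkpoint=True))
```

The trade-off is extra compute: each segment's forward pass runs twice, which is the price models like AlphaFold 2 pay for fitting in device memory.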
|
2501.11849
|
Network-informed Prompt Engineering against Organized Astroturf
Campaigns under Extreme Class Imbalance
|
cs.CL cs.AI cs.SI
|
Detecting organized political campaigns is of paramount importance in
fighting against disinformation on social media. Existing approaches for the
identification of such organized actions employ techniques mostly from network
science, graph machine learning and natural language processing. Their ultimate
goal is to analyze the relationships and interactions (e.g. re-posting) among
users and the textual similarities of their posts. Despite their effectiveness
in recognizing astroturf campaigns, these methods face significant challenges,
notably the class imbalance in available training datasets. To mitigate this
issue, recent methods usually resort to data augmentation or increasing the
number of positive samples, which may not always be feasible or sufficient in
real-world settings. Following a different path, in this paper, we propose a
novel framework for identifying astroturf campaigns based solely on large
language models (LLMs), introducing a Balanced Retrieval-Augmented Generation
(Balanced RAG) component. Our approach first gives both textual information
concerning the posts (in our case tweets) and the user interactions of the
social network as input to a language model. Then, through prompt engineering
and the proposed Balanced RAG method, it effectively detects coordinated
disinformation campaigns on X (Twitter). The proposed framework does not
require any training or fine-tuning of the language model. Instead, by
strategically harnessing the strengths of prompt engineering and Balanced RAG,
it facilitates LLMs to overcome the effects of class imbalance and effectively
identify coordinated political campaigns. The experimental results demonstrate
that by incorporating the proposed prompt engineering and Balanced RAG methods,
our framework outperforms the traditional graph-based baselines, achieving
2x-3x improvements in terms of precision, recall and F1 scores.
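The abstract does not detail how Balanced RAG assembles its context, so the following is a generic sketch of the underlying idea of class-balanced retrieval: pick the top-k most similar exemplars per class, so a heavily imbalanced exemplar pool still yields a balanced prompt. Function names and data are our own:

```python
import numpy as np

def balanced_retrieve(query, embeddings, labels, k_per_class=2):
    """Retrieve the top-k most similar exemplars *per class*, keeping the
    prompt context balanced even when one class dominates the pool."""
    q = query / np.linalg.norm(query)
    E = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sims = E @ q                               # cosine similarity to the query
    picked = []
    for c in np.unique(labels):
        idx = np.where(labels == c)[0]
        top = idx[np.argsort(sims[idx])[::-1][:k_per_class]]
        picked.extend(top.tolist())
    return picked

rng = np.random.default_rng(0)
emb = rng.normal(size=(100, 8))
lab = np.array([0] * 95 + [1] * 5)             # extreme class imbalance
sel = balanced_retrieve(rng.normal(size=8), emb, lab, k_per_class=2)
print([int(lab[i]) for i in sel])  # [0, 0, 1, 1] -> two exemplars per class
```

Plain top-k retrieval over the same pool would almost surely return only majority-class exemplars, which is precisely the failure mode the balanced variant avoids.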
|
2501.11851
|
Challenges in Expanding Portuguese Resources: A View from Open
Information Extraction
|
cs.CL
|
Open Information Extraction (Open IE) is the task of extracting structured
information from textual documents, independent of domain. While traditional
Open IE methods were based on unsupervised approaches, recently, with the
emergence of robust annotated datasets, new data-based approaches have been
developed to achieve better results. These innovations, however, have focused
mainly on the English language due to a lack of datasets and the difficulty of
constructing such resources for other languages. In this work, we present a
high-quality manually annotated corpus for Open Information Extraction in the
Portuguese language, based on a rigorous methodology grounded in established
semantic theories. We discuss the challenges encountered in the annotation
process, propose a set of structural and contextual annotation rules, and
validate our corpus by evaluating the performance of state-of-the-art Open IE
systems. Our resource addresses the lack of datasets for Open IE in Portuguese
and can support the development and evaluation of new methods and systems in
this area.
|
2501.11852
|
Cross-Entropy Attacks to Language Models via Rare Event Simulation
|
cs.CL cs.CR cs.LG
|
Black-box textual adversarial attacks are challenging due to the lack of
model information and the discrete, non-differentiable nature of text. Existing
methods often lack versatility for attacking different models, suffer from
limited attacking performance due to the inefficient optimization with word
saliency ranking, and frequently sacrifice semantic integrity to achieve better
attack outcomes. This paper introduces a novel approach to textual adversarial
attacks, which we call Cross-Entropy Attacks (CEA), that uses Cross-Entropy
optimization to address the above issues. Our CEA approach defines adversarial
objectives for both soft-label and hard-label settings and employs CE
optimization to identify optimal replacements. Through extensive experiments on
document classification and language translation problems, we demonstrate that
our attack method excels in terms of attacking performance, imperceptibility,
and sentence quality.
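The abstract does not spell out the adversarial objectives, so below is the plain cross-entropy (CE) method the attack name refers to, applied to a toy discrete scoring function of our own: keep a categorical distribution per position, sample a population, and refit the distribution to the elite samples.

```python
import numpy as np

rng = np.random.default_rng(0)

def cross_entropy_search(score, n_vars, n_choices, iters=40, pop=200,
                         elite_frac=0.1, smooth=0.5):
    """Generic cross-entropy method over discrete variables (illustrative)."""
    probs = np.full((n_vars, n_choices), 1.0 / n_choices)
    n_elite = max(1, int(pop * elite_frac))
    best, best_score = None, -np.inf
    for _ in range(iters):
        # Sample a population, one independent categorical per position.
        samples = np.stack([rng.choice(n_choices, size=pop, p=probs[i])
                            for i in range(n_vars)], axis=1)
        scores = np.array([score(s) for s in samples])
        if scores.max() > best_score:
            best_score, best = scores.max(), samples[scores.argmax()].copy()
        # Refit each position's distribution to the elite samples, smoothed.
        elite = samples[np.argsort(scores)[-n_elite:]]
        for i in range(n_vars):
            freq = np.bincount(elite[:, i], minlength=n_choices) / len(elite)
            probs[i] = (1 - smooth) * probs[i] + smooth * freq
    return best, best_score

target = np.array([2, 0, 1, 3, 2])             # hidden toy optimum
best, s = cross_entropy_search(lambda x: int((x == target).sum()),
                               n_vars=5, n_choices=4)
print(s == 5)  # CE recovers the hidden target on this toy problem
```

In the attack setting, the scoring function would instead measure the adversarial objective (e.g., the victim model's loss on a candidate word substitution), which is what makes CE a gradient-free, black-box-friendly optimizer.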
|
2501.11854
|
WaveNet-SF: A Hybrid Network for Retinal Disease Detection Based on
Wavelet Transform in the Spatial-Frequency Domain
|
eess.IV cs.CV
|
Retinal diseases are a leading cause of vision impairment and blindness, with
timely diagnosis being critical for effective treatment. Optical Coherence
Tomography (OCT) has become a standard imaging modality for retinal disease
diagnosis, but OCT images often suffer from issues such as speckle noise,
complex lesion shapes, and varying lesion sizes, making interpretation
challenging. In this paper, we propose a novel framework, WaveNet-SF, to
enhance retinal disease detection by integrating spatial-domain and
frequency-domain learning. The framework utilizes wavelet transforms to
decompose OCT images into low- and high-frequency components, enabling the
model to extract both global structural features and fine-grained details. To
improve lesion detection, we introduce a multi-scale wavelet spatial attention
(MSW-SA) module, which enhances the model's focus on regions of interest at
multiple scales. Additionally, a high-frequency feature compensation block
(HFFC) is incorporated to recover edge information lost during wavelet
decomposition, suppress noise, and preserve fine details crucial for lesion
detection. Our approach achieves state-of-the-art (SOTA) classification
accuracies of 97.82% and 99.58% on the OCT-C8 and OCT2017 datasets,
respectively, surpassing existing methods. These results demonstrate the
efficacy of WaveNet-SF in addressing the challenges of OCT image analysis and
its potential as a powerful tool for retinal disease diagnosis.
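As a sketch of the decomposition step (the abstract does not specify which wavelet WaveNet-SF uses; a one-level Haar transform is used here purely for simplicity), the image splits into one low-frequency band and three high-frequency detail bands, and perfect reconstruction shows the detail bands carry exactly the edge information the HFFC block aims to recover:

```python
import numpy as np

def haar_dwt2(img):
    """One-level 2D Haar transform: splits an image into a low-frequency
    approximation (LL) and three high-frequency detail bands (LH, HL, HH).
    Assumes even height and width."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0   # vertical average
    d = (img[0::2, :] - img[1::2, :]) / 2.0   # vertical difference
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0      # global structure
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0      # horizontal detail
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0      # vertical detail
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0      # diagonal detail (edges, noise)
    return ll, lh, hl, hh

def haar_idwt2(ll, lh, hl, hh):
    """Inverse transform: recombining the low band with the detail bands
    recovers the image exactly, i.e. the high-frequency bands hold the
    information that would be lost if only LL were kept."""
    h, w = ll.shape
    a = np.empty((h, 2 * w)); d = np.empty((h, 2 * w))
    a[:, 0::2], a[:, 1::2] = ll + lh, ll - lh
    d[:, 0::2], d[:, 1::2] = hl + hh, hl - hh
    img = np.empty((2 * h, 2 * w))
    img[0::2, :], img[1::2, :] = a + d, a - d
    return img

rng = np.random.default_rng(0)
img = rng.normal(size=(8, 8))
ll, lh, hl, hh = haar_dwt2(img)
rec = haar_idwt2(ll, lh, hl, hh)
print(np.abs(rec - img).max())
```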
|
2501.11855
|
A New Construction Structure on Coded Caching with Linear
Subpacketization: Non-Half-Sum Disjoint Packing
|
cs.IT math.IT
|
Coded caching is a promising technique to effectively reduce peak traffic by
using local caches and the multicast gains generated by these local caches. We
prefer to design a coded caching scheme with the subpacketization $F$ and
transmission load $R$ as small as possible since these are the key metrics for
evaluating the implementation complexity and transmission efficiency of the
scheme, respectively. However, most of the existing coded caching schemes have
large subpacketizations which grow exponentially with the number of users $K$,
and there are a few schemes with linear subpacketizations which have large
transmission loads. In this paper, we focus on coded caching schemes with
linear subpacketization, i.e., $K=F$, and low transmission load.
Specifically, we first introduce a new combinatorial structure called
non-half-sum disjoint packing (NHSDP) which can be used to generate a coded
caching scheme with $K=F$. Then a class of new schemes is obtained by
constructing NHSDP. Theoretical and numerical comparisons show that (i)
compared to existing schemes with subpacketization linear in the number of
users, the proposed scheme achieves a lower load; (ii) compared to some
existing schemes with polynomial subpacketization, the proposed scheme can also
achieve a lower load in some cases; (iii) compared to some existing schemes
with exponential subpacketization, the proposed scheme has loads close to those
of these schemes in some cases. Moreover, the new concept of NHSDP is closely
related to the classical combinatorial structures such as cyclic difference
packing (CDP), non-three-term arithmetic progressions (NTAP), and perfect hash
family (PHF). These connections indicate that NHSDP is an important
combinatorial structure in the field of combinatorial design.
|
2501.11858
|
EmbodiedEval: Evaluate Multimodal LLMs as Embodied Agents
|
cs.CV cs.CL
|
Multimodal Large Language Models (MLLMs) have shown significant advancements,
providing a promising future for embodied agents. Existing benchmarks for
evaluating MLLMs primarily utilize static images or videos, limiting
assessments to non-interactive scenarios. Meanwhile, existing embodied AI
benchmarks are task-specific and insufficiently diverse, and thus do not adequately
evaluate the embodied capabilities of MLLMs. To address this, we propose
EmbodiedEval, a comprehensive and interactive evaluation benchmark for MLLMs
with embodied tasks. EmbodiedEval features 328 distinct tasks within 125 varied
3D scenes, each of which is rigorously selected and annotated. It covers a
broad spectrum of existing embodied AI tasks with significantly enhanced
diversity, all within a unified simulation and evaluation framework tailored
for MLLMs. The tasks are organized into five categories: navigation, object
interaction, social interaction, attribute question answering, and spatial
question answering to assess different capabilities of the agents. We evaluated
the state-of-the-art MLLMs on EmbodiedEval and found that they have a
significant shortfall compared to human level on embodied tasks. Our analysis
demonstrates the limitations of existing MLLMs in embodied capabilities,
providing insights for their future development. We open-source all evaluation
data and simulation framework at https://github.com/thunlp/EmbodiedEval.
|
2501.11860
|
Bayesian Despeckling of Structured Sources
|
cs.IT cs.LG math.IT stat.AP
|
Speckle noise is a fundamental challenge in coherent imaging systems,
significantly degrading image quality. Over the past decades, numerous
despeckling algorithms have been developed for applications such as Synthetic
Aperture Radar (SAR) and digital holography. In this paper, we aim to establish
a theoretically grounded approach to despeckling. We propose a method
applicable to general structured stationary stochastic sources. We demonstrate
the effectiveness of the proposed method on piecewise constant sources.
Additionally, we theoretically derive a lower bound on the despeckling
performance for such sources. The proposed despeckler, applied to 1-Markov
structured sources, achieves better reconstruction performance without strong
simplifying assumptions on the ground-truth signal model or the speckle noise.
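For intuition, here is a minimal toy model (assumed for illustration, not taken from the paper): fully developed intensity speckle acts multiplicatively on a piecewise-constant source, and even plain multi-look averaging, the simplest despeckler, shrinks the error roughly as 1/L:

```python
import numpy as np

rng = np.random.default_rng(0)

# Piecewise-constant 1D source (the structured class discussed above).
x = np.repeat([1.0, 3.0, 2.0, 5.0], 64)

# Fully developed intensity speckle is commonly modeled as multiplicative
# noise: each look is y = x * n with n ~ Exponential(1).
L = 16                                          # number of looks
looks = x * rng.exponential(1.0, size=(L, x.size))

# Baseline despeckler: multi-look averaging (per-sample variance ~ x^2 / L).
x_hat = looks.mean(axis=0)
mse = np.mean((x_hat - x) ** 2)
print(mse)
```

Structured (e.g. Markov) priors improve on this baseline by also exploiting dependencies across samples, which is the direction the paper formalizes.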
|
2501.11866
|
Evaluating multiple models using labeled and unlabeled data
|
cs.LG cs.CY
|
It remains difficult to evaluate machine learning classifiers in the absence
of a large, labeled dataset. While labeled data can be prohibitively expensive
or impossible to obtain, unlabeled data is plentiful. Here, we introduce
Semi-Supervised Model Evaluation (SSME), a method that uses both labeled and
unlabeled data to evaluate machine learning classifiers. SSME is the first
evaluation method to take advantage of the fact that: (i) there are frequently
multiple classifiers for the same task, (ii) continuous classifier scores are
often available for all classes, and (iii) unlabeled data is often far more
plentiful than labeled data. The key idea is to use a semi-supervised mixture
model to estimate the joint distribution of ground truth labels and classifier
predictions. We can then use this model to estimate any metric that is a
function of classifier scores and ground truth labels (e.g., accuracy or
expected calibration error). We present experiments in four domains where
obtaining large labeled datasets is often impractical: (1) healthcare, (2)
content moderation, (3) molecular property prediction, and (4) image
annotation. Our results demonstrate that SSME estimates performance more
accurately than do competing methods, reducing error by 5.1x relative to using
labeled data alone and 2.4x relative to the next best competing method. SSME
also improves accuracy when evaluating performance across subsets of the test
distribution (e.g., specific demographic subgroups) and when evaluating the
performance of language models.
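A stripped-down, unlabeled-only caricature of the idea (SSME itself also folds in labeled data, multiple classifiers, and richer mixtures): fit a mixture model to the classifier's scores, then read a metric such as accuracy off the fitted model instead of off scarce labels. All distributions below are assumptions of this toy example.

```python
import numpy as np
from math import erf

rng = np.random.default_rng(0)

# Hypothetical setup: a binary classifier emits one score per example;
# scores for negatives ~ N(-1, 1), positives ~ N(+1, 1), balanced classes.
n = 5000
labels = rng.integers(0, 2, size=n)
scores = rng.normal(2.0 * labels - 1.0, 1.0)

# Fit a two-component 1D Gaussian mixture to the *unlabeled* scores via EM.
mu = np.array([-0.5, 0.5]); sigma = np.array([1.0, 1.0]); pi = np.array([0.5, 0.5])
for _ in range(100):
    # E-step: responsibility of each component for each score.
    dens = pi * np.exp(-0.5 * ((scores[:, None] - mu) / sigma) ** 2) / sigma
    resp = dens / dens.sum(axis=1, keepdims=True)
    # M-step: re-estimate weights, means, and standard deviations.
    nk = resp.sum(axis=0)
    pi = nk / n
    mu = (resp * scores[:, None]).sum(axis=0) / nk
    sigma = np.sqrt((resp * (scores[:, None] - mu) ** 2).sum(axis=0) / nk)

def phi(z):  # standard normal CDF
    return 0.5 * (1 + erf(z / np.sqrt(2)))

# Accuracy of the threshold-at-0 rule, estimated purely from the mixture:
# P(correct) = pi0 * P(score < 0 | comp 0) + pi1 * P(score > 0 | comp 1).
acc_est = pi[0] * phi(-mu[0] / sigma[0]) + pi[1] * (1 - phi(-mu[1] / sigma[1]))
acc_true = np.mean((scores > 0) == labels)
print(acc_est, acc_true)
```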
|
2501.11869
|
Saturation in Snapshot Compressive Imaging
|
eess.IV cs.IT math.IT stat.AP
|
Snapshot Compressive Imaging (SCI) maps three-dimensional (3D) data cubes,
such as videos or hyperspectral images, into two-dimensional (2D) measurements
via optical modulation, enabling efficient data acquisition and reconstruction.
Recent advances have shown the potential of mask optimization to enhance SCI
performance, but most studies overlook nonlinear distortions caused by
saturation in practical systems. Saturation occurs when high-intensity
measurements exceed the sensor's dynamic range, leading to information loss
that standard reconstruction algorithms cannot fully recover. This paper
addresses the challenge of optimizing binary masks in SCI under saturation. We
theoretically characterize the performance of compression-based SCI recovery in
the presence of saturation and leverage these insights to optimize masks for
such conditions. Our analysis reveals trade-offs between mask statistics and
reconstruction quality in saturated systems. Experimental results using a
Plug-and-Play (PnP) style network validate the theory, demonstrating improved
recovery performance and robustness to saturation with our optimized binary
masks.
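As a toy illustration (not the paper's exact model), saturation can be emulated by clipping an SCI-style snapshot measurement at the sensor's dynamic range; sparser masks then trade light throughput for fewer clipped pixels, the kind of mask-statistics trade-off analyzed above. All sizes and levels below are made up for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy SCI forward model: B frames modulated by binary masks and summed
# into one 2D snapshot, then clipped by the sensor's dynamic range.
B, H, W = 8, 16, 16
frames = rng.uniform(0.0, 1.0, size=(B, H, W))
masks = rng.integers(0, 2, size=(B, H, W)).astype(float)   # density ~0.5
y_ideal = (masks * frames).sum(axis=0)

sat_level = 3.0                               # sensor saturation point
y = np.minimum(y_ideal, sat_level)            # nonlinear clipping
frac_saturated = np.mean(y_ideal > sat_level)

# Sparser masks lower the measurement intensity, so fewer pixels clip
# (at the cost of collecting less light per snapshot).
masks_sparse = (rng.uniform(size=(B, H, W)) < 0.25).astype(float)
y2 = np.minimum((masks_sparse * frames).sum(axis=0), sat_level)
frac_saturated_sparse = np.mean((masks_sparse * frames).sum(axis=0) > sat_level)
print(frac_saturated, frac_saturated_sparse)
```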
|
2501.11870
|
Coarse-to-Fine Lightweight Meta-Embedding for ID-Based Recommendation
|
cs.IR cs.AI
|
The state-of-the-art recommendation systems have shifted the attention to
efficient recommendation, e.g., on-device recommendation, under memory
constraints. To this end, existing methods either focus on lightweight
embeddings for both users and items, or build on-device systems that exploit
compact embeddings to enhance reusability and reduce space complexity. However,
they attend solely to the coarse granularity of embeddings while overlooking
fine-grained semantic nuances, which degrades the efficacy of meta-embeddings
in capturing the intricate relationships between users and items and
consequently yields suboptimal recommendations.
In this paper, we aim to study how the meta-embedding can efficiently learn
varied grained semantics, together with how the fine-grained meta-embedding can
strengthen the representation of coarse-grained meta-embedding. To answer these
questions, we develop a novel graph neural networks (GNNs) based recommender
where each user and item serves as the node, linked directly to coarse-grained
virtual nodes and indirectly to fine-grained virtual nodes, ensuring different
grained semantic learning, while disclosing: 1) In contrast to coarse-grained
semantics, fine-grained semantics are well captured through sparse
meta-embeddings, which adaptively 2) balance the embedding uniqueness and
memory constraint. Additionally, an initialization method built upon
SparsePCA, together with a soft-thresholding activation function, induces
sparsity in the meta-embeddings. We propose a weight-bridging update strategy
that focuses on matching each coarse-grained meta-embedding with several
fine-grained meta-embeddings based on the users/items' semantics. Extensive
experiments substantiate our method's superiority over existing baselines. Our
code is available at https://github.com/htyjers/C2F-MetaEmbed.
|
2501.11873
|
Demons in the Detail: On Implementing Load Balancing Loss for Training
Specialized Mixture-of-Expert Models
|
cs.LG cs.CL
|
This paper revisits the implementation of
$\textbf{L}$oad-$\textbf{b}$alancing $\textbf{L}$oss (LBL) when training
Mixture-of-Experts (MoEs) models. Specifically, LBL for MoEs is defined as $N_E
\sum_{i=1}^{N_E} f_i p_i$, where $N_E$ is the total number of experts, $f_i$
represents the frequency of expert $i$ being selected, and $p_i$ denotes the
average gating score of the expert $i$. Existing MoE training frameworks
usually employ the parallel training strategy so that $f_i$ and the LBL are
calculated within a $\textbf{micro-batch}$ and then averaged across parallel
groups. In essence, a micro-batch for training billion-scale LLMs normally
contains very few sequences. So, the micro-batch LBL is almost at the sequence
level, and the router is pushed to distribute the token evenly within each
sequence. Under this strict constraint, even tokens from a domain-specific
sequence ($\textit{e.g.}$, code) are uniformly routed to all experts, thereby
inhibiting expert specialization. In this work, we propose calculating LBL
using a $\textbf{global-batch}$ to loosen this constraint. Because a
global-batch contains far more diverse sequences than a micro-batch, this
encourages load balance at the corpus level. Specifically, we introduce an
extra communication step to synchronize $f_i$ across micro-batches and then use
it to calculate the LBL. Through experiments on training MoEs-based LLMs (up to
$\textbf{42.8B}$ total parameters and $\textbf{400B}$ tokens), we surprisingly
find that the global-batch LBL strategy yields excellent performance gains in
both pre-training perplexity and downstream tasks. Our analysis reveals that
the global-batch LBL also greatly improves the domain specialization of MoE
experts.
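The defined loss is easy to state in code. The sketch below (illustrative only; for the global variant it pools both $f_i$ and $p_i$ over all tokens, whereas the paper synchronizes $f_i$ across micro-batches) contrasts micro-batch LBL, averaged over micro-batches, with a global-batch LBL computed after pooling:

```python
import numpy as np

def lbl(gates, topk=2):
    """Load-balancing loss N_E * sum_i f_i * p_i over one batch of tokens.

    gates : (n_tokens, N_E) softmax gating scores.
    f_i   : fraction of tokens whose top-k routing includes expert i.
    p_i   : mean gating score of expert i over the batch.
    """
    n_tokens, n_experts = gates.shape
    topk_idx = np.argsort(gates, axis=1)[:, -topk:]
    sel = np.zeros_like(gates)
    np.put_along_axis(sel, topk_idx, 1.0, axis=1)
    f = sel.mean(axis=0)          # selection frequency per expert
    p = gates.mean(axis=0)        # mean gating score per expert
    return n_experts * float(f @ p)

rng = np.random.default_rng(0)
n_experts, n_micro, tokens_per_micro = 8, 16, 64
logits = rng.normal(size=(n_micro, tokens_per_micro, n_experts))
gates = np.exp(logits) / np.exp(logits).sum(axis=-1, keepdims=True)

# Micro-batch LBL (computed per micro-batch, then averaged) vs. a
# global-batch LBL (computed once over the pooled tokens): the latter
# only asks for balance at the corpus level, not within each micro-batch.
micro = float(np.mean([lbl(g) for g in gates]))
global_ = lbl(gates.reshape(-1, n_experts))
print(micro, global_)
```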
|
2501.11876
|
FNIN: A Fourier Neural Operator-based Numerical Integration Network for
Surface-from-gradients
|
cs.CV
|
Surface-from-gradients (SfG) aims to recover a three-dimensional (3D) surface
from its gradients. Traditional methods encounter significant challenges in
achieving high accuracy and handling high-resolution inputs, particularly
facing the complex nature of discontinuities and the inefficiencies associated
with large-scale linear solvers. Although recent advances in deep learning,
such as photometric stereo, have enhanced normal estimation accuracy, they do
not fully address the intricacies of gradient-based surface reconstruction. To
overcome these limitations, we propose a Fourier neural operator-based
Numerical Integration Network (FNIN) within a two-stage optimization framework.
In the first stage, our approach employs an iterative architecture for
numerical integration, harnessing an advanced Fourier neural operator to
approximate the solution operator in Fourier space. Additionally, a
self-learning attention mechanism is incorporated to effectively detect and
handle discontinuities. In the second stage, we refine the surface
reconstruction by formulating a weighted least squares problem, addressing the
identified discontinuities rationally. Extensive experiments demonstrate that
our method achieves significant improvements in both accuracy and efficiency
compared to current state-of-the-art solvers. This is particularly evident in
handling high-resolution images with complex data, achieving errors of less
than 0.1 mm on tested objects.
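The abstract does not give FNIN's operator details, but the classical Fourier-domain least-squares integrator (the Frankot-Chellappa method, shown here on a periodic toy surface) illustrates why Fourier space is a natural setting for numerical integration of gradient fields:

```python
import numpy as np

def fourier_integrate(p, q):
    """Integrate a gradient field (p = dZ/dx, q = dZ/dy) into a surface Z
    by solving the least-squares Poisson problem in Fourier space
    (classical Frankot-Chellappa integrator; assumes a periodic surface)."""
    h, w = p.shape
    wx = np.fft.fftfreq(w) * 2 * np.pi          # angular frequency grids
    wy = np.fft.fftfreq(h) * 2 * np.pi
    WX, WY = np.meshgrid(wx, wy)
    denom = WX ** 2 + WY ** 2
    denom[0, 0] = 1.0                           # avoid divide-by-zero at DC
    Fz = (-1j * WX * np.fft.fft2(p) - 1j * WY * np.fft.fft2(q)) / denom
    Fz[0, 0] = 0.0                              # mean height is unrecoverable
    return np.real(np.fft.ifft2(Fz))

# Band-limited periodic test surface with analytic gradients.
h, w = 32, 32
v, u = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
Z = np.sin(2 * np.pi * u / w) * np.cos(2 * np.pi * v / h)
p = (2 * np.pi / w) * np.cos(2 * np.pi * u / w) * np.cos(2 * np.pi * v / h)
q = -(2 * np.pi / h) * np.sin(2 * np.pi * u / w) * np.sin(2 * np.pi * v / h)

Zr = fourier_integrate(p, q)
err = np.abs(Zr - Z).max()
print(err)
```

This linear solver fails at discontinuities, which is exactly the gap FNIN's learned operator and attention mechanism target.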
|
2501.11877
|
From Drafts to Answers: Unlocking LLM Potential via Aggregation
Fine-Tuning
|
cs.CL cs.AI
|
Scaling data and model size has been proven effective for boosting the
performance of large language models. In addition to training-time scaling,
recent studies have revealed that increasing test-time computational resources
can further improve performance. In this work, we introduce Aggregation
Fine-Tuning (AFT), a supervised finetuning paradigm where the model learns to
synthesize multiple draft responses, referred to as proposals, into a single,
refined answer, termed aggregation. At inference time, a propose-and-aggregate
strategy further boosts performance by iteratively generating proposals and
aggregating them. Empirical evaluations on benchmark datasets show that
AFT-trained models substantially outperform standard SFT. Notably, an AFT
model, fine-tuned from Llama3.1-8B-Base with only 64k data, achieves a 41.3% LC
win rate on AlpacaEval 2, surpassing significantly larger LLMs such as
Llama3.1-405B-Instruct and GPT4. By combining sequential refinement and
parallel sampling, the propose-and-aggregate framework scales inference-time
computation in a flexible manner. Overall, these findings position AFT as a
promising approach to unlocking additional capabilities of LLMs without
resorting to increasing data volume or model size.
|
2501.11880
|
Community-Aware Temporal Walks: Parameter-Free Representation Learning
on Continuous-Time Dynamic Graphs
|
cs.LG cs.AI
|
Dynamic graph representation learning plays a crucial role in understanding
evolving behaviors. However, existing methods often struggle with flexibility,
adaptability, and the preservation of temporal and structural dynamics. To
address these issues, we propose Community-aware Temporal Walks (CTWalks), a
novel framework for representation learning on continuous-time dynamic graphs.
CTWalks integrates three key components: a community-based parameter-free
temporal walk sampling mechanism, an anonymization strategy enriched with
community labels, and an encoding process that leverages continuous temporal
dynamics modeled via ordinary differential equations (ODEs). This design
enables precise modeling of both intra- and inter-community interactions,
offering a fine-grained representation of evolving temporal patterns in
continuous-time dynamic graphs. CTWalks theoretically overcomes locality bias
in walks and establishes its connection to matrix factorization. Experiments on
benchmark datasets demonstrate that CTWalks outperforms established methods in
temporal link prediction tasks, achieving higher accuracy while maintaining
robustness.
|
2501.11881
|
Channel Resolvability Using Multiplicative Weight Update Algorithm
|
cs.IT math.IT
|
We study the channel resolvability problem, which is used to prove strong
converse of identification via channel. Channel resolvability has been solved
by only random coding in the literature. We prove channel resolvability using
the multiplicative weight update algorithm. This is the first approach to
channel resolvability using non-random coding.
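The abstract leaves the construction implicit; for reference, here is a hypothetical toy form of the algorithmic primitive it names: a generic multiplicative weight (exponential weights) update over $n$ actions against a sequence of bounded losses.

```python
import numpy as np

def multiplicative_weights(loss_matrix, eta=0.1):
    """Multiplicative weight update over n actions for T rounds.

    loss_matrix : (T, n) losses in [0, 1]; row t is revealed after the
    learner commits to its weights for round t. Returns the sequence of
    weight vectors (distributions) and the regret vs. the best fixed action.
    """
    T, n = loss_matrix.shape
    w = np.ones(n)
    history, total_loss = [], 0.0
    for t in range(T):
        p = w / w.sum()
        history.append(p)
        total_loss += p @ loss_matrix[t]
        w *= np.exp(-eta * loss_matrix[t])    # exponential-weights update
    regret = total_loss - loss_matrix.sum(axis=0).min()
    return np.array(history), regret

# Toy check: action 0 always has the smallest loss, so the weights
# concentrate on it and the regret stays small relative to T.
rng = np.random.default_rng(0)
T, n = 500, 5
L = rng.uniform(0.4, 1.0, size=(T, n))
L[:, 0] = rng.uniform(0.0, 0.3, size=T)
hist, regret = multiplicative_weights(L, eta=0.1)
print(hist[-1], regret)
```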
|
2501.11883
|
An Improved Lower Bound on Oblivious Transfer Capacity Using
Polarization and Interaction
|
cs.IT math.IT
|
We consider the oblivious transfer (OT) capacities of noisy channels against
the passive adversary; this problem has not been solved even for the binary
symmetric channel (BSC). In the literature, the general construction of OT has
been known only for generalized erasure channels (GECs); for the BSC, we
convert the channel to the binary symmetric erasure channel (BSEC), which is a
special instance of the GEC, via alphabet extension and erasure emulation. In a
previous paper by the authors, we derived an improved lower bound on the OT
capacity of BSC by proposing a method to recursively emulate BSEC via
interactive communication. In this paper, we introduce two new ideas of OT
construction: (i) via ``polarization" and interactive communication, we
recursively emulate GECs that are not necessarily BSECs; (ii) in addition to
the GEC emulation part, we also utilize interactive communication in the key
agreement part of OT protocol. By these methods, we derive lower bounds on the
OT capacity of BSC that are superior to the previous one for a certain range of
crossover probabilities of the BSC. Via our new lower bound, we show that the
slope of the tangent of the OT capacity is unbounded as the crossover
probability approaches zero.
|
2501.11884
|
Fast Underwater Scene Reconstruction using Multi-View Stereo and
Physical Imaging
|
cs.CV
|
Underwater scene reconstruction poses a substantial challenge because of the
intricate interplay between light and the medium, resulting in scattering and
absorption effects that make both depth estimation and rendering more complex.
While recent Neural Radiance Fields (NeRF) based methods for underwater scenes
achieve high-quality results by modeling and separating the scattering medium,
they still suffer from slow training and rendering speeds. To address these
limitations, we propose a novel method that integrates Multi-View Stereo (MVS)
with a physics-based underwater image formation model. Our approach consists of
two branches: one for depth estimation using the traditional cost volume
pipeline of MVS, and the other for rendering based on the physics-based image
formation model. The depth branch improves scene geometry, while the medium
branch determines the scattering parameters to achieve precise scene rendering.
Unlike traditional MVSNet methods that rely on ground-truth depth, our method
requires no ground-truth depth supervision, allowing for faster training and
rendering. By leveraging the medium subnet to estimate
the medium parameters and combining this with a color MLP for rendering, we
restore the true colors of underwater scenes and achieve higher-fidelity
geometric representations. Experimental results show that our method enables
high-quality synthesis of novel views in scattering media, clear views
restoration by removing the medium, and outperforms existing methods in
rendering quality and training efficiency.
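The abstract does not specify the exact image formation model, so the sketch below uses a common physics-based underwater model (per-channel attenuation plus backscatter veiling light) purely as an illustration of how estimated medium parameters let the clear scene be restored by inverting the model. All coefficients are made up.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative underwater image formation model:
#   I = J * exp(-beta_a * d) + B_inf * (1 - exp(-beta_b * d))
# J: clear scene radiance, d: per-pixel depth, beta_a/beta_b: attenuation
# and backscatter coefficients per color channel, B_inf: veiling light.
H, W = 32, 32
J = rng.uniform(0.2, 0.9, size=(H, W, 3))        # clear image
d = rng.uniform(1.0, 5.0, size=(H, W, 1))        # depth map (meters)
beta_a = np.array([0.40, 0.15, 0.10])            # red attenuates fastest
beta_b = np.array([0.30, 0.12, 0.08])
B_inf = np.array([0.10, 0.30, 0.45])             # bluish veiling light

I = J * np.exp(-beta_a * d) + B_inf * (1 - np.exp(-beta_b * d))

# Given estimated depth and medium parameters (what the depth branch and
# medium branch provide), invert the model to restore the true colors.
J_rec = (I - B_inf * (1 - np.exp(-beta_b * d))) * np.exp(beta_a * d)
print(np.abs(J_rec - J).max())
```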
|
2501.11885
|
Med-R$^2$: Crafting Trustworthy LLM Physicians through Retrieval and
Reasoning of Evidence-Based Medicine
|
cs.CL
|
In recent years, Large Language Models (LLMs) have exhibited remarkable
capabilities in clinical scenarios. However, despite their potential, existing
works face challenges when applying LLMs to medical settings. Strategies
relying on training with medical datasets are highly cost-intensive and may
suffer from outdated training data. Leveraging external knowledge bases is a
suitable alternative, yet it faces obstacles such as limited retrieval
precision and poor effectiveness in answer extraction. These issues
collectively prevent LLMs from demonstrating the expected level of proficiency
in mastering medical expertise. To address these challenges, we introduce
Med-R^2, a novel LLM physician framework that adheres to the Evidence-Based
Medicine (EBM) process, efficiently integrating retrieval mechanisms as well as
the selection and reasoning processes of evidence, thereby enhancing the
problem-solving capabilities of LLMs in healthcare scenarios and fostering a
trustworthy LLM physician. Our comprehensive experiments indicate that Med-R^2
achieves a 14.87\% improvement over vanilla RAG methods and even a 3.59\%
enhancement compared to fine-tuning strategies, without incurring additional
training costs.
|
2501.11887
|
Connection-Coordination Rapport (CCR) Scale: A Dual-Factor Scale to
Measure Human-Robot Rapport
|
cs.RO cs.HC
|
Robots, particularly in service and companionship roles, must develop
positive relationships with people they interact with regularly to be
successful. These positive human-robot relationships can be characterized as
establishing "rapport," which indicates mutual understanding and interpersonal
connection that form the groundwork for successful long-term human-robot
interaction. However, the human-robot interaction research literature lacks
scale instruments to assess human-robot rapport in a variety of situations. In
this work, we developed the 18-item Connection-Coordination Rapport (CCR) Scale
to measure human-robot rapport. We first ran Study 1 (N = 288) where online
participants rated videos of human-robot interactions using a set of candidate
items. Our Study 1 results revealed two factors in our scale,
which we named "Connection" and "Coordination." We then evaluated this scale by
running Study 2 (N = 201) where online participants rated a new set of
human-robot interaction videos with our scale and an existing rapport scale
from virtual agents research for comparison. We also validated our scale by
replicating a prior in-person human-robot interaction study, Study 3 (N = 44),
and found that rapport is rated significantly greater when participants
interacted with a responsive robot (responsive condition) as opposed to an
unresponsive robot (unresponsive condition). Results from these studies
demonstrate high reliability and validity for the CCR scale, which can be used
to measure rapport in both first-person and third-person perspectives. We
encourage the adoption of this scale in future studies to measure rapport in a
variety of human-robot interactions.
|
2501.11893
|
DynoSAM: Open-Source Smoothing and Mapping Framework for Dynamic SLAM
|
cs.RO
|
Traditional Visual Simultaneous Localization and Mapping (vSLAM) systems
focus solely on static scene structures, overlooking dynamic elements in the
environment. Although effective for accurate visual odometry in complex
scenarios, these methods discard crucial information about moving objects. By
incorporating this information into a Dynamic SLAM framework, the motion of
dynamic entities can be estimated, enhancing navigation whilst ensuring
accurate localization. However, the fundamental formulation of Dynamic SLAM
remains an open challenge, with no consensus on the optimal approach for
accurate motion estimation within a SLAM pipeline. Therefore, we developed
DynoSAM, an open-source framework for Dynamic SLAM that enables the efficient
implementation, testing, and comparison of various Dynamic SLAM optimization
formulations. DynoSAM integrates static and dynamic measurements into a unified
optimization problem solved using factor graphs, simultaneously estimating
camera poses, static scene, object motion or poses, and object structures. We
evaluate DynoSAM across diverse simulated and real-world datasets, achieving
state-of-the-art motion estimation in indoor and outdoor environments, with
substantial improvements over existing systems. Additionally, we demonstrate
DynoSAM's utility in downstream applications, including 3D reconstruction of
dynamic scenes and trajectory prediction, thereby showcasing potential for
advancing dynamic object-aware SLAM systems. DynoSAM is open-sourced at
https://github.com/ACFR-RPG/DynOSAM.
|
2501.11895
|
Contrastive Masked Autoencoders for Character-Level Open-Set Writer
Identification
|
cs.CV cs.LG
|
In the realm of digital forensics and document authentication, writer
identification plays a crucial role in determining the authors of documents
based on handwriting styles. The primary challenge in writer identification is
the "open-set scenario", where the goal is to accurately recognize writers
unseen during model training. Representation learning is key to overcoming this
challenge: it can capture unique handwriting features, enabling the model to
recognize styles not previously encountered during training. Building on this
concept, this paper introduces the Contrastive Masked Auto-Encoders (CMAE) for
Character-level Open-Set Writer Identification. We merge Masked Auto-Encoders
(MAE) with Contrastive Learning (CL), which respectively capture sequential
information and distinguish diverse handwriting styles.
Demonstrating its effectiveness, our model achieves state-of-the-art (SOTA)
results on the CASIA online handwriting dataset, reaching an impressive
precision rate of 89.7%. Our study advances universal writer-id with a
sophisticated representation learning approach, contributing substantially to
the ever-evolving landscape of digital handwriting analysis, and catering to
the demands of an increasingly interconnected world.
|
2501.11896
|
Systematic Abductive Reasoning via Diverse Relation Representations in
Vector-symbolic Architecture
|
cs.AI
|
In abstract visual reasoning, monolithic deep learning models suffer from
limited interpretability and generalization, while existing neuro-symbolic
approaches fall short in capturing the diversity and systematicity of
attributes and relation representations. To address these challenges, we
propose a Systematic Abductive Reasoning model with diverse relation
representations (Rel-SAR) in Vector-symbolic Architecture (VSA) to solve
Raven's Progressive Matrices (RPM). To derive attribute representations with
symbolic reasoning potential, we introduce not only various types of atomic
vectors that represent numeric, periodic and logical semantics, but also the
structured high-dimensional representation (SHDR) for the overall Grid
component. For systematic reasoning, we propose novel numerical and logical
relation functions and perform rule abduction and execution in a unified
framework that integrates these relation representations. Experimental results
demonstrate that Rel-SAR achieves significant improvement on RPM tasks and
exhibits robust out-of-distribution generalization. Rel-SAR leverages the
synergy between HD attribute representations and symbolic reasoning to achieve
systematic abductive reasoning with both interpretable and computable
semantics.
|
2501.11898
|
Highly Efficient Rotation-Invariant Spectral Embedding for Scalable
Incomplete Multi-View Clustering
|
cs.LG
|
Incomplete multi-view clustering presents significant challenges due to
missing views. Although many existing graph-based methods aim to recover
missing instances or complete similarity matrices with promising results, they
still face several limitations: (1) Recovered data may be unsuitable for
spectral clustering, as these methods often ignore guidance from spectral
analysis; (2) Complex optimization processes impose a high computational burden,
hindering scalability to large-scale problems; (3) Most methods do not address
the rotational mismatch problem in spectral embeddings. To address these
issues, we propose a highly efficient rotation-invariant spectral embedding
(RISE) method for scalable incomplete multi-view clustering. RISE learns
view-specific embeddings from incomplete bipartite graphs to capture the
complementary information. Meanwhile, a complete consensus representation with
second-order rotation-invariant property is recovered from these incomplete
embeddings in a unified model. Moreover, we design a fast alternating
optimization algorithm with linear complexity and promising convergence to
solve the proposed formulation. Extensive experiments on multiple datasets
demonstrate the effectiveness, scalability, and efficiency of RISE compared to
the state-of-the-art methods.
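For intuition on the rotational mismatch problem, note that spectral embeddings are only defined up to an orthogonal rotation. A standard remedy (shown here as a generic illustration, not RISE's own formulation, which avoids the alignment step by design) is orthogonal Procrustes alignment via an SVD:

```python
import numpy as np

def align_rotation(E_src, E_ref):
    """Orthogonal Procrustes: the rotation R minimizing ||E_src R - E_ref||_F
    is U V^T, where U S V^T is the SVD of E_src^T E_ref. Spectral embeddings
    from different views must be aligned this way before naive fusion."""
    U, _, Vt = np.linalg.svd(E_src.T @ E_ref)
    return U @ Vt

rng = np.random.default_rng(0)
E = rng.normal(size=(100, 4))                       # reference embedding
R_true, _ = np.linalg.qr(rng.normal(size=(4, 4)))   # random orthogonal matrix
E_rot = E @ R_true                                  # same embedding, rotated

R = align_rotation(E_rot, E)
err = np.linalg.norm(E_rot @ R - E)
print(err)
```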
|
2501.11899
|
LASER: Lip Landmark Assisted Speaker Detection for Robustness
|
cs.CV cs.LG
|
Active Speaker Detection (ASD) aims to identify speaking individuals in
complex visual scenes. While humans can easily detect speech by matching lip
movements to audio, current ASD models struggle to establish this
correspondence, often misclassifying non-speaking instances when audio and lip
movements are unsynchronized. To address this limitation, we propose Lip
landmark Assisted Speaker dEtection for Robustness (LASER). Unlike models that
rely solely on facial frames, LASER explicitly focuses on lip movements by
integrating lip landmarks in training. Specifically, given a face track, LASER
extracts frame-level visual features and the 2D coordinates of lip landmarks
using a lightweight detector. These coordinates are encoded into dense feature
maps, providing spatial and structural information on lip positions.
Recognizing that landmark detectors may sometimes fail under challenging
conditions (e.g., low resolution, occlusions, extreme angles), we incorporate
an auxiliary consistency loss to align predictions from both lip-aware and
face-only features, ensuring reliable performance even when lip data is absent.
Extensive experiments across multiple datasets show that LASER outperforms
state-of-the-art models, especially in scenarios with desynchronized audio and
visuals, demonstrating robust performance in real-world video contexts. Code is
available at \url{https://github.com/plnguyen2908/LASER_ASD}.
|
2501.11900
|
Panoramic Interests: Stylistic-Content Aware Personalized Headline
Generation
|
cs.CL cs.AI
|
Personalized news headline generation aims to provide users with
attention-grabbing headlines that are tailored to their preferences. Prevailing
methods focus on user-oriented content preferences, but most of them overlook
the fact that diverse stylistic preferences are integral to users' panoramic
interests, leading to suboptimal personalization. In view of this, we propose a
novel Stylistic-Content Aware Personalized Headline Generation (SCAPE)
framework. SCAPE extracts both content and stylistic features from headlines
with the aid of large language model (LLM) collaboration. It further adaptively
integrates users' long- and short-term interests through a contrastive
learning-based hierarchical fusion network. By incorporating the panoramic
interests into the headline generator, SCAPE reflects users' stylistic-content
preferences during the generation process. Extensive experiments on the
real-world dataset PENS demonstrate the superiority of SCAPE over baselines.
|
2501.11901
|
Enhancing Adversarial Transferability via Component-Wise Augmentation
Method
|
cs.CV
|
Deep Neural Networks (DNNs) are highly vulnerable to adversarial examples,
which pose significant challenges in security-sensitive applications. Among
various adversarial attack strategies, input transformation-based attacks have
demonstrated remarkable effectiveness in enhancing adversarial transferability.
However, existing methods fail to diversify attention regions across models
adequately and introduce excessive information loss during transformations. In
this paper, we introduce a novel input transformation-based method, termed
Component-Wise Augmentation (CWA), designed to enhance transferability by
locally applying block-wise transformations. CWA strategically integrates
interpolation and selective rotation on individual image blocks to diversify
model attention regions while preserving semantic integrity. Extensive
experiments on the standard ImageNet dataset show that CWA consistently
outperforms state-of-the-art methods in both attack success rates and stability
across CNN- and Transformer-based models, while also demonstrating superior
performance against multiple defense methods.
|
2501.11903
|
Finding the nearest bounded-real port-Hamiltonian system
|
math.OC cs.NA cs.SY eess.SY math.NA
|
In this paper, we consider linear time-invariant continuous control systems
which are bounded real, also known as scattering passive. Our main theoretical
contribution is to show the equivalence between such systems and
port-Hamiltonian (PH) systems whose factors satisfy certain linear matrix
inequalities. Based on this result, we propose a formulation for the problem of
finding the nearest bounded-real system to a given system, and design an
algorithm combining alternating optimization and Nesterov's fast gradient
method. This formulation also allows us to check whether a given system is
bounded real by solving a semidefinite program, and provide a PH
parametrization for it. We illustrate our proposed algorithms on real and
synthetic data sets.
|
2501.11905
|
Phase Transitions in Phase-Only Compressed Sensing
|
cs.IT eess.SP math.IT
|
The goal of phase-only compressed sensing is to recover a structured signal
$\mathbf{x}$ from the phases $\mathbf{z} = {\rm sign}(\mathbf{\Phi}\mathbf{x})$
under some complex-valued sensing matrix $\mathbf{\Phi}$. Exact reconstruction
of the signal's direction is possible: we can reformulate it as a linear
compressed sensing problem and use basis pursuit (i.e., constrained norm
minimization). For $\mathbf{\Phi}$ with i.i.d. complex-valued Gaussian entries,
this paper shows that the phase transition is approximately located at the
statistical dimension of the descent cone of a signal-dependent norm.
Leveraging this insight, we derive asymptotically precise formulas for the
phase transition locations in phase-only sensing of both sparse signals and
low-rank matrices. Our results prove that the minimum number of measurements
required for exact recovery is smaller for phase-only measurements than for
traditional linear compressed sensing. For instance, in recovering a 1-sparse
signal with sufficiently large dimension, phase-only compressed sensing
requires approximately 68% of the measurements needed for linear compressed
sensing. This result disproves an earlier conjecture suggesting that the two phase
transitions coincide. Our proof hinges on the Gaussian min-max theorem and the
key observation that, up to a signal-dependent orthogonal transformation, the
sensing matrix in the reformulated problem behaves as a nearly Gaussian matrix.
|
2501.11906
|
Multi-source Multi-level Multi-token Ethereum Dataset and Benchmark
Platform
|
cs.CE
|
This paper introduces 3MEthTaskforce (https://3meth.github.io), a
multi-source, multi-level, and multi-token Ethereum dataset addressing the
limitations of single-source datasets. Integrating over 300 million transaction
records, 3,880 token profiles, global market indicators, and Reddit sentiment
data from 2014 to 2024, it enables comprehensive studies on user behavior, market
sentiment, and token performance. 3MEthTaskforce defines benchmarks for user
behavior prediction and token price prediction tasks, using 6 dynamic graph
networks and 19 time-series models to evaluate performance. Its multimodal
design supports risk analysis and market fluctuation modeling, providing a
valuable resource for advancing blockchain analytics and decentralized finance
research.
|
2501.11909
|
Bridging the Communication Gap: Evaluating AI Labeling Practices for
Trustworthy AI Development
|
cs.AI
|
As artificial intelligence (AI) becomes integral to economy and society,
communication gaps between developers, users, and stakeholders hinder trust and
informed decision-making. High-level AI labels, inspired by frameworks like EU
energy labels, have been proposed to make the properties of AI models more
transparent. Without requiring deep technical expertise, they can inform on the
trade-off between predictive performance and resource efficiency. However, the
practical benefits and limitations of AI labeling remain underexplored. This
study evaluates AI labeling through qualitative interviews along four key
research questions. Based on thematic analysis and inductive coding, we found a
broad range of practitioners to be interested in AI labeling (RQ1). They see
benefits for alleviating communication gaps and aiding non-expert
decision-makers, however limitations, misunderstandings, and suggestions for
improvement were also discussed (RQ2). Compared to other reporting formats,
interviewees positively evaluated the reduced complexity of labels, increasing
overall comprehensibility (RQ3). Trust was influenced most by usability and the
credibility of the responsible labeling authority, with mixed preferences for
self-certification versus third-party certification (RQ4). Our insights
highlight that AI labels pose a trade-off between simplicity and complexity,
which could be resolved by developing customizable and interactive labeling
frameworks to address diverse user needs. Transparent labeling of resource
efficiency also nudged interviewee priorities towards paying more attention to
sustainability aspects during AI development. This study validates AI labels as
a valuable tool for enhancing trust and communication in AI, offering
actionable guidelines for their refinement and standardization.
|
2501.11911
|
Integrate Temporal Graph Learning into LLM-based Temporal Knowledge
Graph Model
|
cs.IR
|
Temporal Knowledge Graph Forecasting (TKGF) aims to predict future events
based on the observed events in history. Recently, Large Language Models (LLMs)
have exhibited remarkable capabilities, generating significant research
interest in their application for reasoning over temporal knowledge graphs
(TKGs). Existing LLM-based methods have integrated retrieved historical facts
or static graph representations into LLMs. Despite the notable performance of
LLM-based methods, they are limited by the insufficient modeling of temporal
patterns and ineffective cross-modal alignment between graph and language,
hindering the ability of LLMs to fully grasp the temporal and structural
information in TKGs. To tackle these issues, we propose a novel framework
TGL-LLM to integrate temporal graph learning into LLM-based temporal knowledge
graph model. Specifically, we introduce temporal graph learning to capture the
temporal and relational patterns and obtain the historical graph embedding.
Furthermore, we design a hybrid graph tokenization to sufficiently model the
temporal patterns within LLMs. To achieve better alignment between graph and
language, we employ a two-stage training paradigm to finetune LLMs on
high-quality and diverse data, thereby resulting in better performance.
Extensive experiments on three real-world datasets show that our approach
outperforms a range of state-of-the-art (SOTA) methods.
|
2501.11914
|
LuxVeri at GenAI Detection Task 1: Inverse Perplexity Weighted Ensemble
for Robust Detection of AI-Generated Text across English and Multilingual
Contexts
|
cs.CL cs.AI
|
This paper presents a system developed for Task 1 of the COLING 2025 Workshop
on Detecting AI-Generated Content, focusing on the binary classification of
machine-generated versus human-written text. Our approach utilizes an ensemble
of models, with weights assigned according to each model's inverse perplexity,
to enhance classification accuracy. For the English text detection task, we
combined RoBERTa-base, RoBERTa-base with the OpenAI detector, and
BERT-base-cased, achieving a Macro F1-score of 0.7458, which ranked us 12th out
of 35 teams. We ensembled RemBERT, XLM-RoBERTa-base, and
BERT-base-multilingual-cased for the multilingual text detection task, employing
the same inverse perplexity weighting technique. This resulted in a Macro
F1-score of 0.7513, positioning us 4th out of 25 teams. Our results demonstrate
the effectiveness of inverse perplexity weighting in improving the robustness
of machine-generated text detection across both monolingual and multilingual
settings, highlighting the potential of ensemble methods for this challenging
task.
|
2501.11915
|
Stabilizing Optimal Control for Nonlinear Stochastic Systems: A
Parametric Gradient-Based Approach
|
math.OC cs.SY eess.SY
|
This study proposes a method for designing stabilizing suboptimal controllers
for nonlinear stochastic systems. These systems include time-invariant
stochastic parameters that represent uncertainty of dynamics, posing two key
difficulties in optimal control. Firstly, the time-invariant stochastic nature
violates the principle of optimality and Hamilton-Jacobi equations, which are
fundamental tools for solving optimal control problems. Secondly, nonlinear
systems must be robustly stabilized against these stochastic parameters. To
overcome these difficulties simultaneously, this study presents a
parametric-gradient-based method with a penalty function. A controller and cost
function are parameterized using basis functions, and a gradient method is
employed to optimize the controller by minimizing the parameterized cost
function. Crucial challenges in this approach are parameterizing the cost
function appropriately and deriving the gradient of the cost. This study
provides explicit formulations of an optimally parameterized cost and its
gradient. Furthermore, a suitable penalty function is proposed to ensure robust
stability, even when using the gradient method. Consequently, the gradient
method produces a suboptimal feedback controller that guarantees robust
stability. The effectiveness of the proposed method is demonstrated through
numerical simulations, highlighting its performance in comparison with other
baseline methods.
|
2501.11916
|
Generating with Fairness: A Modality-Diffused Counterfactual Framework
for Incomplete Multimodal Recommendations
|
cs.IR
|
Incomplete scenario is a prevalent, practical, yet challenging setting in
Multimodal Recommendations (MMRec), where some item modalities are missing due
to various factors. Recently, a few efforts have sought to improve the
recommendation accuracy by exploring generic structures from incomplete data.
However, two significant gaps persist: 1) the difficulty in accurately
generating missing data due to the limited ability to capture modality
distributions; and 2) the critical but overlooked visibility bias, where items
with missing modalities are more likely to be disregarded due to the
prioritization of items' multimodal data over user preference alignment. This
bias raises serious concerns about the fair treatment of items. To bridge these
two gaps, we propose a novel Modality-Diffused Counterfactual (MoDiCF)
framework for incomplete multimodal recommendations. MoDiCF features two key
modules: a novel modality-diffused data completion module and a new
counterfactual multimodal recommendation module. The former, equipped with a
particularly designed multimodal generative framework, accurately generates and
iteratively refines missing data from learned modality-specific distribution
spaces. The latter, grounded in the causal perspective, effectively mitigates
the negative causal effects of visibility bias and thus assures fairness in
recommendations. Both modules work collaboratively to address the two
aforementioned significant gaps for generating more accurate and fair results.
Extensive experiments on three real-world datasets demonstrate the superior
performance of MoDiCF in terms of both recommendation accuracy and fairness.
The code and processed datasets are released at
https://github.com/JinLi-i/MoDiCF.
|
2501.11918
|
LuxVeri at GenAI Detection Task 3: Cross-Domain Detection of
AI-Generated Text Using Inverse Perplexity-Weighted Ensemble of Fine-Tuned
Transformer Models
|
cs.CL cs.AI
|
This paper presents our approach for Task 3 of the GenAI content detection
workshop at COLING-2025, focusing on Cross-Domain Machine-Generated Text (MGT)
Detection. We propose an ensemble of fine-tuned transformer models, enhanced by
inverse perplexity weighting, to improve classification accuracy across diverse
text domains. For Subtask A (Non-Adversarial MGT Detection), we combined a
fine-tuned RoBERTa-base model with an OpenAI detector-integrated RoBERTa-base
model, achieving an aggregate TPR score of 0.826, ranking 10th out of 23
detectors. In Subtask B (Adversarial MGT Detection), our fine-tuned
RoBERTa-base model achieved a TPR score of 0.801, securing 8th out of 22
detectors. Our results demonstrate the effectiveness of inverse
perplexity-based weighting for enhancing generalization and performance in both
non-adversarial and adversarial MGT detection, highlighting the potential for
transformer models in cross-domain AI-generated content detection.
|
2501.11919
|
Improving Fine-Tuning with Latent Cluster Correction
|
cs.LG
|
The existence of salient semantic clusters in the latent spaces of a neural
network during training strongly correlates with its final accuracy on
classification tasks. This paper proposes a novel fine-tuning method that
boosts performance by optimising the formation of these latent clusters, using
the Louvain community detection algorithm and a specifically designed
clustering loss function. We present preliminary results that demonstrate the
viability of this process on classical neural network architectures during
fine-tuning on the CIFAR-100 dataset.
|
2501.11921
|
Goal-oriented Transmission Scheduling: Structure-guided DRL with a
Unified Dual On-policy and Off-policy Approach
|
cs.IT cs.AI cs.LG cs.SY eess.SP eess.SY math.IT
|
Goal-oriented communications prioritize application-driven objectives over
data accuracy, enabling intelligent next-generation wireless systems. Efficient
scheduling in multi-device, multi-channel systems poses significant challenges
due to high-dimensional state and action spaces. We address these challenges by
deriving key structural properties of the optimal solution to the goal-oriented
scheduling problem, incorporating Age of Information (AoI) and channel states.
Specifically, we establish the monotonicity of the optimal state value function
(a measure of long-term system performance) w.r.t. channel states and prove its
asymptotic convexity w.r.t. AoI states. Additionally, we derive the
monotonicity of the optimal policy w.r.t. channel states, advancing the
theoretical framework for optimal scheduling. Leveraging these insights, we
propose the structure-guided unified dual on-off policy DRL (SUDO-DRL), a
hybrid algorithm that combines the stability of on-policy training with the
sample efficiency of off-policy methods. Through a novel structural property
evaluation framework, SUDO-DRL enables effective and scalable training,
addressing the complexities of large-scale systems. Numerical results show
SUDO-DRL improves system performance by up to 45% and reduces convergence time
by 40% compared to state-of-the-art methods. It also effectively handles
scheduling in much larger systems, where off-policy DRL fails and on-policy
benchmarks exhibit significant performance loss, demonstrating its scalability
and efficacy in goal-oriented communications.
|
2501.11923
|
Progressive Cross Attention Network for Flood Segmentation using
Multispectral Satellite Imagery
|
cs.CV cs.LG
|
In recent years, the integration of deep learning techniques with remote
sensing technology has revolutionized the way natural hazards, such as floods,
are monitored and managed. However, existing methods for flood segmentation
using remote sensing data often overlook the utility of correlative features
among multispectral satellite information. In this study, we introduce a
progressive cross attention network (ProCANet), a deep learning model that
progressively applies both self- and cross-attention mechanisms to
multispectral features, generating optimal feature combinations for flood
segmentation. The proposed model was compared with state-of-the-art approaches
using the Sen1Floods11 dataset and our bespoke flood data generated for the Citarum
River basin, Indonesia. Our model demonstrated superior performance with the
highest Intersection over Union (IoU) score of 0.815. Our results in this
study, coupled with the ablation assessment comparing scenarios with and
without attention across various modalities, open a promising path for
enhancing the accuracy of flood analysis using remote sensing technology.
|