| id | title | categories | abstract |
|---|---|---|---|
2501.04373
|
FGU3R: Fine-Grained Fusion via Unified 3D Representation for Multimodal
3D Object Detection
|
cs.CV
|
Multimodal 3D object detection has garnered considerable interest in
autonomous driving. However, multimodal detectors suffer from dimension
mismatches that arise from coarsely fusing 3D points with 2D pixels, leading
to sub-optimal fusion performance. In this paper, we propose a multimodal
framework FGU3R to tackle the issue mentioned above via unified 3D
representation and fine-grained fusion, which consists of two important
components. First, we propose an efficient feature extractor for raw and pseudo
points, termed Pseudo-Raw Convolution (PRConv), which modulates multimodal
features synchronously and aggregates the features from different types of
points on key points based on multimodal interaction. Second, a Cross-Attention
Adaptive Fusion (CAAF) is designed to fuse homogeneous 3D RoI (Region of
Interest) features adaptively via a cross-attention variant in a fine-grained
manner. Together they make fine-grained fusion on unified 3D representation.
Experiments conducted on the KITTI and nuScenes datasets demonstrate the
effectiveness of our proposed method.
|
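The cross-attention fusion step can be illustrated with a minimal sketch. This is not the paper's CAAF module, whose exact design the abstract does not specify; it is a standard single-head cross-attention with a residual connection over two sets of hypothetical RoI features, in NumPy:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention_fuse(roi_a, roi_b):
    """Fuse RoI features of modality B into modality A via cross-attention,
    with a residual connection (an illustrative stand-in for CAAF)."""
    d_k = roi_a.shape[-1]
    scores = roi_a @ roi_b.T / np.sqrt(d_k)   # (N, M) attention logits
    attn = softmax(scores, axis=-1)           # each row sums to 1
    fused = roi_a + attn @ roi_b              # residual fusion
    return fused, attn

rng = np.random.default_rng(0)
raw_roi = rng.normal(size=(4, 16))      # hypothetical raw-point RoI features
pseudo_roi = rng.normal(size=(6, 16))   # hypothetical pseudo-point RoI features
fused, attn = cross_attention_fuse(raw_roi, pseudo_roi)
```

Because both inputs are 3D RoI features of the same dimensionality, the attention weights compare homogeneous representations, which is the point of fusing on a unified 3D representation.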
2501.04374
|
Instructive3D: Editing Large Reconstruction Models with Text
Instructions
|
cs.CV
|
Transformer-based methods have enabled users to create, modify, and
comprehend text and image data. Recently proposed Large Reconstruction Models
(LRMs) further extend this by providing the ability to generate high-quality 3D
models with the help of a single object image. These models, however, lack the
ability to manipulate or edit the finer details, such as adding standard design
patterns or changing the color and reflectance of the generated objects, thus
lacking fine-grained control that may be very helpful in domains such as
augmented reality, animation and gaming. Naively training LRMs for this purpose
would require generating precisely edited images and 3D object pairs, which is
computationally expensive. In this paper, we propose Instructive3D, a novel LRM
based model that integrates generation and fine-grained editing, through user
text prompts, of 3D objects into a single model. We accomplish this by adding
an adapter that performs a diffusion process conditioned on a text prompt
specifying edits in the triplane latent space representation of 3D object
models. Our method does not require the generation of edited 3D objects.
Additionally, Instructive3D allows us to perform geometrically consistent
modifications, as the edits done through user-defined text prompts are applied
to the triplane latent representation thus enhancing the versatility and
precision of the 3D objects generated. We compare the objects generated by
Instructive3D with those of a baseline that first generates the 3D object
meshes using a standard LRM and then edits them using text prompts, with input
images drawn from the Objaverse LVIS dataset. We find that Instructive3D
produces qualitatively superior 3D objects with the properties specified by the
edit prompts.
|
2501.04376
|
Exploring Unbiased Deepfake Detection via Token-Level Shuffling and
Mixing
|
cs.CV
|
The generalization problem is broadly recognized as a critical challenge in
detecting deepfakes. Most previous work believes that the generalization gap is
caused by the differences among various forgery methods. However, our
investigation reveals that the generalization issue can still occur when
forgery-irrelevant factors shift. In this work, we identify two biases to
which detectors are prone to overfitting: position bias and content bias, as
depicted in Fig. 1. Regarding position bias, we observe that detectors tend to
lazily rely on specific positions within an image (e.g., central regions, even
when they contain no forgery). Regarding content bias, we argue that detectors
may mistakenly utilize forgery-unrelated information for detection (e.g.,
background and hair). To mitigate these biases, we propose two
branches that shuffle and mix tokens in the latent space of
transformers. For the shuffling branch, we rearrange the tokens and
corresponding position embedding for each image while maintaining the local
correlation. For the mixing branch, we randomly select and mix the tokens in
the latent space between two images with the same label within the mini-batch
to recombine the content information. During the learning process, we align the
outputs of detectors from different branches in both feature space and logit
space. Contrastive losses for features and divergence losses for logits are
applied to obtain unbiased feature representation and classifiers. We
demonstrate and verify the effectiveness of our method through extensive
experiments on widely used evaluation datasets.
|
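The two intervention branches can be sketched in NumPy. This is a simplification: the shuffling here is a full permutation rather than the locality-preserving rearrangement the abstract describes, and the token shapes and mixing ratio are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

def shuffle_tokens(tokens, pos_emb):
    """Shuffling branch: permute tokens together with their position
    embeddings (full permutation; the paper maintains local correlation)."""
    perm = rng.permutation(tokens.shape[0])
    return tokens[perm], pos_emb[perm]

def mix_tokens(tokens_a, tokens_b, ratio=0.5):
    """Mixing branch: swap a random fraction of tokens between two
    same-label images to recombine content information."""
    n = tokens_a.shape[0]
    idx = rng.choice(n, size=int(n * ratio), replace=False)
    mixed = tokens_a.copy()
    mixed[idx] = tokens_b[idx]
    return mixed

tokens = rng.normal(size=(196, 64))        # 14x14 patch tokens, dim 64
pos_emb = rng.normal(size=(196, 64))
other_tokens = rng.normal(size=(196, 64))  # second image with the same label
shuf_tokens, shuf_pos = shuffle_tokens(tokens, pos_emb)
mixed = mix_tokens(tokens, other_tokens, ratio=0.25)
```

Shuffling keeps the multiset of tokens intact while destroying their absolute positions, which is what prevents a detector from anchoring on specific image regions.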
2501.04377
|
On Computational Limits and Provably Efficient Criteria of Visual
Autoregressive Models: A Fine-Grained Complexity Analysis
|
cs.LG cs.AI cs.CC cs.CV
|
Recently, Visual Autoregressive ($\mathsf{VAR}$) Models introduced a
groundbreaking advancement in the field of image generation, offering a
scalable approach through a coarse-to-fine ``next-scale prediction'' paradigm.
Let $n$ denote the height and width of the last VQ code map generated by
$\mathsf{VAR}$ models; the state-of-the-art algorithm in [Tian,
Jiang, Yuan, Peng and Wang, NeurIPS 2024] takes $O(n^{4+o(1)})$ time, which is
computationally inefficient. In this work, we analyze the computational limits
and efficiency criteria of $\mathsf{VAR}$ Models through a fine-grained
complexity lens. Our key contribution is identifying the conditions under which
$\mathsf{VAR}$ computations can achieve sub-quadratic time complexity. We
prove that, assuming the Strong Exponential Time Hypothesis ($\mathsf{SETH}$)
from fine-grained complexity theory, a sub-quartic time algorithm for
$\mathsf{VAR}$ models is impossible. To substantiate our theoretical findings,
we present efficient constructions leveraging low-rank approximations that
align with the derived criteria. This work initiates the study of the
computational efficiency of the $\mathsf{VAR}$ model from a theoretical
perspective. Our technique will shed light on advancing scalable and efficient
image generation in $\mathsf{VAR}$ frameworks.
|
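The role of low-rank structure in breaking the quadratic barrier can be seen in a small NumPy demonstration: when the interaction matrix factors through rank-$d$ matrices with $d \ll n$, reassociating the matrix product avoids ever materializing the $n \times n$ matrix. This illustrates the general mechanism only, not the paper's specific constructions:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 512, 8          # n tokens, factor rank d << n

Q = rng.normal(size=(n, d))
K = rng.normal(size=(n, d))
V = rng.normal(size=(n, d))

# Quadratic route: materialize the n x n interaction matrix, O(n^2 d) work.
out_quadratic = (Q @ K.T) @ V

# Sub-quadratic route: reassociate the product, O(n d^2) work,
# never forming the n x n matrix.
out_lowrank = Q @ (K.T @ V)
```

Both routes compute the same output exactly; the criteria in the paper concern when such low-rank structure can approximate the full (softmax) computation within acceptable error.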
2501.04387
|
The unbearable lightness of Restricted Boltzmann Machines: Theoretical
Insights and Biological Applications
|
cond-mat.dis-nn cs.LG physics.data-an
|
Restricted Boltzmann Machines are simple yet powerful neural networks. They
can be used for learning structure in data, and are used as a building block of
more complex neural architectures. At the same time, their simplicity makes
them easy to use, amenable to theoretical analysis, yielding interpretable
models in applications. Here, we focus on reviewing the role that the
activation functions, which describe the input-output relationship of single
neurons in RBMs, play in the functionality of these models. We discuss recent
theoretical results on the benefits and limitations of different activation
functions. We also review applications to biological data analysis, namely
neural data analysis, where RBM units are mostly taken to be binary with
sigmoid activation functions, and to protein data analysis and immunology,
where non-binary units and non-sigmoid activation functions have recently been
shown to yield important insights into the data. Finally, we discuss open
problems whose resolution can shed light on broader issues in neural network
research.
|
2501.04389
|
Evidence-based multimodal fusion on structured EHRs and free-text notes
for ICU outcome prediction
|
cs.IT math.IT
|
Objective: Accurate Intensive Care Unit (ICU) outcome prediction is critical
for improving patient treatment quality and ICU resource allocation. Existing
research mainly focuses on structured data and lacks effective frameworks to
integrate clinical notes from heterogeneous electronic health records (EHRs).
This study aims to explore a multimodal framework based on evidence theory that
can effectively combine heterogeneous structured EHRs and free-text notes for
accurate and reliable ICU outcome prediction. Materials and Methods: We
proposed an evidence-based multimodal fusion framework to predict ICU outcomes,
including mortality and prolonged length of stay (PLOS), by utilizing both
structured EHR data and free-text notes from the MIMIC-III database. We
compared the performance against baseline models that use only structured EHRs,
free-text notes, or existing multimodal approaches. Results: The results
demonstrate that the evidence-based multimodal fusion model achieved both
accurate and reliable prediction. Specifically, it outperformed the best
baseline by 1.05%/1.02% in BACC, 9.74%/6.04% in F1 score, 1.28%/0.9% in AUROC,
and 6.21%/2.68% in AUPRC for predicting mortality and PLOS, respectively.
Additionally, it improved the reliability of the predictions with a 26.8%/15.1%
reduction in the Brier score and a 25.0%/13.3% reduction in negative
log-likelihood. Conclusion: This study demonstrates that the evidence-based
multimodal fusion framework can serve as a strong baseline for predictions
using structured EHRs and free-text notes. It effectively reduces false
positives, which can help improve the allocation of medical resources in the
ICU. This framework can be further applied to analyze multimodal EHRs for other
clinical tasks.
|
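The evidence-theoretic fusion at the heart of such a framework can be illustrated with Dempster's rule of combination on a two-outcome frame. The masses below are made-up numbers for two hypothetical modality branches; the paper's actual evidence construction is not reproduced here:

```python
def dempster_combine(m1, m2):
    """Dempster's rule on a two-outcome frame {A, B}. Each mass vector is
    (m(A), m(B), m(A or B)), the last entry being unassigned (uncertain) mass."""
    a1, b1, u1 = m1
    a2, b2, u2 = m2
    conflict = a1 * b2 + b1 * a2          # mass assigned to contradictions
    norm = 1.0 - conflict                 # renormalize the rest
    return (
        (a1 * a2 + a1 * u2 + u1 * a2) / norm,
        (b1 * b2 + b1 * u2 + u1 * b2) / norm,
        (u1 * u2) / norm,
    )

# Made-up masses for two hypothetical modality branches (A = event occurs).
ehr_evidence = (0.6, 0.1, 0.3)    # structured-EHR branch
note_evidence = (0.5, 0.2, 0.3)   # free-text-notes branch
fused = dempster_combine(ehr_evidence, note_evidence)
```

When the two branches agree, combination strengthens the shared belief and shrinks the uncertain mass; when they conflict, the retained uncertainty signals an unreliable prediction, which is what makes the fused output calibrated rather than overconfident.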
2501.04390
|
iFADIT: Invertible Face Anonymization via Disentangled Identity
Transform
|
cs.CV
|
Face anonymization aims to conceal the visual identity of a face to safeguard
the individual's privacy. Traditional methods like blurring and pixelation can
largely remove identifying features, but these techniques significantly degrade
image quality and are vulnerable to deep reconstruction attacks. Generative
models have emerged as a promising solution for anonymizing faces while
preserving a natural appearance. However, many still face limitations in visual
quality and often overlook the potential to recover the original face from the
anonymized version, which can be valuable in specific contexts such as image
forensics. This paper proposes a novel framework named iFADIT, an acronym for
Invertible Face Anonymization via Disentangled Identity Transform. The
framework features a disentanglement architecture coupled with a secure
flow-based model: the former decouples identity information from
non-identifying attributes, while the latter transforms the decoupled identity
into an anonymized version in an invertible manner controlled by a secret key.
The anonymized face can then be reconstructed based on a pre-trained StyleGAN
that ensures high image quality and realistic facial details. Recovery of the
original face (aka de-anonymization) is possible upon the availability of the
matching secret, by inverting the anonymization process based on the same set
of model parameters. Furthermore, a dedicated secret-key mechanism along with a
dual-phase training strategy is devised to ensure the desired properties of
face anonymization. Qualitative and quantitative experiments demonstrate the
superiority of the proposed approach in anonymity, reversibility, security,
diversity, and interpretability over competing methods.
|
2501.04393
|
SEO: Stochastic Experience Optimization for Large Language Models
|
cs.CL
|
Large Language Models (LLMs) can benefit from useful experiences to improve
their performance on specific tasks. However, finding helpful experiences for
different LLMs is not straightforward, since it is unclear which experiences
suit a specific LLM. Previous studies have attempted to find useful
experiences automatically using LLMs, but it is difficult to ensure the
effectiveness of the experiences obtained. In this paper, we propose
Stochastic Experience Optimization (SEO), an iterative approach that finds
optimized, model-specific experience through experience updates in natural
language, without modifying model parameters. In SEO, we propose a stochastic
validation method to verify the update direction of the experience, avoiding
unavailing updates. Experimental
results on three tasks for three LLMs demonstrate that experiences optimized by
SEO can achieve consistently improved performance. Further analysis indicates
that SEO-optimized experience can generalize to out-of-distribution data,
boosting the performance of LLMs on similar tasks.
|
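The stochastic validation idea can be sketched with a mock evaluator: a candidate experience replaces the current one only if it scores higher on a randomly sampled validation subset, so unavailing updates are rejected. Everything below (the scoring function, the toy "experience" strings, the sample size) is a stand-in for real LLM calls:

```python
import random

random.seed(0)

def mock_score(experience, example):
    """Stand-in for scoring an LLM's answer on one example, with the
    experience text prepended to the prompt."""
    return len(set(experience) & set(example)) / max(len(set(example)), 1)

def seo_step(experience, candidate, val_pool, k=8):
    """One SEO-style iteration: keep the candidate only if it wins on a
    random validation sample (stochastic validation)."""
    sample = random.sample(val_pool, k)
    old = sum(mock_score(experience, x) for x in sample)
    new = sum(mock_score(candidate, x) for x in sample)
    return candidate if new > old else experience

val_pool = ["cde"] * 10
result = seo_step("ab", "abcde", val_pool)   # candidate covers the pool better
```

Because only natural-language experience changes between iterations, the loop optimizes the model's behavior without touching its parameters.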
2501.04398
|
Implementation Of Wildlife Observation System
|
cs.RO
|
By entering the habitats of wild animals, wildlife watchers can engage
closely with them. However, some wild animals are not always safe to approach,
so we propose this system for observing wildlife. Users can watch live events
on their Android phones, giving wildlife observers a close-up view of wild
animals through this robotic vehicle. Commands are delivered to the system via
a Wi-Fi module. While developing the technology to maintain continuous
surveillance of a target, we found that the robot needed to move silently and
purposefully so it could monitor a natural target without being noticed. After
processing the received data, the on-board computer sends commands to the
motor drivers, which deliver the signal outputs required to drive the
vehicle's movement.
|
2501.04401
|
Tracking UWB Devices Through Radio Frequency Fingerprinting Is Possible
|
cs.LG cs.IT cs.NI math.IT
|
Ultra-wideband (UWB) is a state-of-the-art technology designed for
applications requiring centimeter-level localization. Its widespread adoption
by smartphone manufacturers naturally raises security and privacy concerns.
Successfully applying Radio Frequency Fingerprinting (RFF) to UWB could
enable physical-layer security, but might also allow undesired tracking of the
devices. The scope of this paper is to explore the feasibility of applying RFF
to UWB and to investigate how well this technique generalizes across different
environments. We collected a realistic dataset using off-the-shelf UWB devices
with controlled variation in device positioning. Moreover, we developed an
improved deep learning pipeline to extract the hardware signature from the
signal data. In stable conditions, the extracted RFF achieves over 99%
accuracy. While the accuracy decreases in more variable environments, we still
obtain up to 76% accuracy in untrained locations.
|
2501.04403
|
Rising Rested MAB with Linear Drift
|
cs.LG
|
We consider a non-stationary multi-armed bandit (MAB) problem where the expected reward
of each action follows a linear function of the number of times we executed the
action. Our main result is a tight regret bound of
$\tilde{\Theta}(T^{4/5}K^{3/5})$, by providing both upper and lower bounds. We
extend our results to derive instance dependent regret bounds, which depend on
the unknown parametrization of the linear drift of the rewards.
|
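The setting can be made concrete with a tiny simulator: each arm's expected reward is a linear function of its own pull count (the "rested" drift), so an initially inferior arm can overtake a static one. The parameters below are illustrative, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

class RisingRestedBandit:
    """Rested bandit whose arm means rise linearly in their own pull counts."""
    def __init__(self, base, slope, noise=0.1):
        self.base = np.array(base, dtype=float)
        self.slope = np.array(slope, dtype=float)
        self.noise = noise
        self.pulls = np.zeros(len(base), dtype=int)

    def mean(self, arm):
        # mu_k(t_k) = a_k + b_k * t_k, with t_k the pull count of arm k
        return self.base[arm] + self.slope[arm] * self.pulls[arm]

    def pull(self, arm):
        reward = self.mean(arm) + rng.normal(0.0, self.noise)
        self.pulls[arm] += 1
        return reward

env = RisingRestedBandit(base=[0.5, 0.1], slope=[0.0, 0.01])
for _ in range(50):          # arm 1 improves with every pull ...
    env.pull(1)
# ... and overtakes the static arm 0 (0.1 + 0.01 * 50 = 0.6 > 0.5)
```

The learner's dilemma is visible here: pulling an arm both earns reward and raises that arm's future reward, which is what drives the $\tilde{\Theta}(T^{4/5}K^{3/5})$ regret rate.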
2501.04408
|
Resource Allocation for the Training of Image Semantic Communication
Networks
|
cs.SI
|
Semantic communication is a new paradigm that aims at providing more
efficient communication for the next-generation wireless network. It focuses on
transmitting extracted, meaningful information instead of the raw data.
However, deep learning-enabled image semantic communication models often
require a significant amount of time and energy for training, which is
unacceptable, especially for mobile devices. To solve this challenge, our paper
first introduces a distributed image semantic communication system where the
base station and local devices will collaboratively train the models for uplink
communication. Furthermore, we formulate a joint optimization problem to
balance time and energy consumption on the local devices during training while
ensuring effective model performance. An adaptable resource allocation
algorithm is proposed to meet requirements under different scenarios, and its
time complexity, solution quality, and convergence are thoroughly analyzed.
Experimental results demonstrate the superiority of our algorithm in resource
allocation optimization against existing benchmarks and discuss its impact on
the performance of image semantic communication systems.
|
2501.04409
|
Lossless Privacy-Preserving Aggregation for Decentralized Federated
Learning
|
cs.LG
|
Privacy concerns arise as sensitive data proliferate. Despite decentralized
federated learning (DFL) aggregating gradients from neighbors to avoid direct
data transmission, it still poses indirect data leaks from the transmitted
gradients. Existing privacy-preserving methods for DFL add noise to gradients.
They either diminish the model predictive accuracy or suffer from ineffective
gradient protection. In this paper, we propose a novel lossless
privacy-preserving aggregation rule named LPPA to enhance gradient protection
as much as possible but without loss of DFL model predictive accuracy. LPPA
subtly injects the noise difference between the sent and received noise into
transmitted gradients for gradient protection. The noise difference
incorporates neighbors' randomness for each client, effectively safeguarding
against data leaks. LPPA employs the noise flow conservation theory to ensure
that the noise impact can be globally eliminated. The global sum of all noise
differences remains zero, ensuring that accurate gradient aggregation is
unaffected and the model accuracy remains intact. We theoretically prove that
the privacy-preserving capacity of LPPA is $\sqrt{2}$ times greater than that of
noise addition, while maintaining comparable model accuracy to the standard DFL
aggregation without noise injection. Experimental results verify the
theoretical findings and show that LPPA achieves a 14% mean improvement in
accuracy over noise addition. We also demonstrate the effectiveness of LPPA in
protecting raw data and guaranteeing lossless model accuracy.
|
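The noise-flow-conservation property is easy to verify numerically: if each client perturbs its gradient by (noise sent − noise received), the perturbations cancel in the global sum while every individual transmitted gradient is masked. The all-pairs noise exchange below is an illustrative instantiation, not the paper's exact protocol:

```python
import numpy as np

rng = np.random.default_rng(0)
n_clients, dim = 4, 10
grads = rng.normal(size=(n_clients, dim))   # true local gradients

# noise[i, j]: noise client i sends to neighbor j (zero on the diagonal)
noise = rng.normal(size=(n_clients, n_clients, dim))
for i in range(n_clients):
    noise[i, i] = 0.0

# Each client transmits its gradient plus (noise sent - noise received).
protected = np.array([
    grads[i] + noise[i].sum(axis=0) - noise[:, i].sum(axis=0)
    for i in range(n_clients)
])
```

Summing the transmitted gradients over all clients, every noise term appears once with a plus sign (at the sender) and once with a minus sign (at the receiver), so the aggregate equals the true gradient sum exactly, which is what "lossless" means here.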
2501.04410
|
User Simulation in the Era of Generative AI: User Modeling, Synthetic
Data Generation, and System Evaluation
|
cs.AI cs.HC cs.IR cs.LG
|
User simulation is an emerging interdisciplinary topic with multiple critical
applications in the era of Generative AI. It involves creating an intelligent
agent that mimics the actions of a human user interacting with an AI system,
enabling researchers to model and analyze user behaviour, generate synthetic
data for training, and evaluate interactive AI systems in a controlled and
reproducible manner. User simulation has profound implications for diverse
fields and plays a vital role in the pursuit of Artificial General
Intelligence. This paper provides an overview of user simulation, highlighting
its key applications, connections to various disciplines, and outlining future
research directions to advance this increasingly important technology.
|
2501.04413
|
Machine Learning and statistical classification of CRISPR-Cas12a
diagnostic assays
|
q-bio.QM cs.LG
|
CRISPR-based diagnostics have gained increasing attention as biosensing tools
able to address limitations in contemporary molecular diagnostic tests. To
maximise the performance of CRISPR-based assays, much effort has focused on
optimizing the chemistry and biology of the biosensing reaction. However, less
attention has been paid to improving the techniques used to analyse
CRISPR-based diagnostic data. To date, diagnostic decisions typically involve
various forms of slope-based classification. Such methods are superior to
traditional methods based on assessing absolute signals, but still have
limitations. Herein, we establish performance benchmarks (total accuracy,
sensitivity, and specificity) using common slope-based methods. We compare the
performance of these benchmark methods with three different quadratic empirical
distribution function statistical tests, finding significant improvements in
diagnostic speed and accuracy when applied to a clinical data set. Two of the
three statistical techniques, the Kolmogorov-Smirnov and Anderson-Darling
tests, report the lowest time-to-result and highest total test accuracy.
Furthermore, we developed a long short-term memory recurrent neural network to
classify CRISPR-biosensing data, achieving 100% specificity on our model data
set. Finally, we provide guidelines on choosing the classification method and
classification method parameters that best suit a diagnostic assay's needs.
|
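The empirical-distribution-function approach can be sketched with a hand-rolled two-sample Kolmogorov-Smirnov statistic comparing an assay's fluorescence readings against a negative control. The signal model below (flat control, linearly rising positive) is synthetic, not the paper's clinical data:

```python
import numpy as np

def ks_statistic(x, y):
    """Two-sample Kolmogorov-Smirnov statistic: the maximum gap between
    the two empirical CDFs."""
    x, y = np.sort(x), np.sort(y)
    grid = np.concatenate([x, y])
    cdf_x = np.searchsorted(x, grid, side="right") / len(x)
    cdf_y = np.searchsorted(y, grid, side="right") / len(y)
    return float(np.abs(cdf_x - cdf_y).max())

rng = np.random.default_rng(0)
t = np.arange(60)                                     # readout time points
negative = rng.normal(1.0, 0.05, size=60)             # flat control signal
positive = 1.0 + 0.03 * t + rng.normal(0, 0.05, 60)   # rising assay signal

d_pos = ks_statistic(positive, negative)
d_neg = ks_statistic(rng.normal(1.0, 0.05, size=60), negative)
# Classify a reaction as positive when D exceeds a calibrated threshold.
```

Unlike slope fitting, the statistic compares whole signal distributions, so a decision can be reached as soon as the distributions separate, which is the mechanism behind the reported time-to-result gains.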
2501.04418
|
Not All Bonds Are Created Equal: Dyadic Latent Class Models for
Relational Event Data
|
cs.SI
|
Dynamic social networks can be conceptualized as sequences of dyadic
interactions between individuals over time. The relational event model has been
the workhorse to analyze such interaction sequences in empirical social network
research. When addressing possible unobserved heterogeneity in the interaction
mechanisms, standard approaches, such as the stochastic block model, aim to
cluster the variation at the actor level. Though useful, the implied latent
structure of the adjacency matrix is restrictive which may lead to biased
interpretations and insights. To address this shortcoming, we introduce a more
flexible dyadic latent class relational event model (DLC-REM) that captures the
unobserved heterogeneity at the dyadic level. Through numerical simulations, we
provide a proof of concept demonstrating that this approach is more general
than latent actor-level approaches. To illustrate the applicability of the
model, we apply it to a dataset of militarized interstate conflicts between
countries.
|
2501.04420
|
A Closer Look on Gender Stereotypes in Movie Recommender Systems and
Their Implications with Privacy
|
cs.IR
|
The movie recommender system typically leverages user feedback to provide
personalized recommendations that align with user preferences and increase
business revenue. This study investigates the impact of gender stereotypes on
such systems through a specific attack scenario. In this scenario, an attacker
determines users' gender, a private attribute, by exploiting gender stereotypes
about movie preferences and analyzing users' feedback data, which is either
publicly available or observed within the system. The study consists of two
phases. In the first phase, a user study involving 630 participants identified
gender stereotypes associated with movie genres, which often influence viewing
choices. In the second phase, four inference algorithms were applied to detect
gender stereotypes by combining the findings from the first phase with users'
feedback data. Results showed that these algorithms performed more effectively
than relying solely on feedback data for gender inference. Additionally, we
quantified the extent of gender stereotypes to evaluate their broader impact on
digital computational science. The latter part of the study utilized two major
movie recommender datasets: MovieLens 1M and Yahoo!Movie. Detailed experimental
information is available on our GitHub repository:
https://github.com/fr-iit/GSMRS
|
2501.04421
|
Risk-averse policies for natural gas futures trading using
distributional reinforcement learning
|
cs.LG
|
Financial markets have experienced significant instabilities in recent years,
creating unique challenges for trading and increasing interest in risk-averse
strategies. Distributional Reinforcement Learning (RL) algorithms, which model
the full distribution of returns rather than just expected values, offer a
promising approach to managing market uncertainty. This paper investigates this
potential by studying the effectiveness of three distributional RL algorithms
for natural gas futures trading and exploring their capacity to develop
risk-averse policies. Specifically, we analyze the performance and behavior of
Categorical Deep Q-Network (C51), Quantile Regression Deep Q-Network (QR-DQN),
and Implicit Quantile Network (IQN). To the best of our knowledge, these
algorithms have never been applied in a trading context. These policies are
compared against five Machine Learning (ML) baselines, using a detailed dataset
provided by Predictive Layer SA, a company supplying ML-based strategies for
energy trading. The main contributions of this study are as follows. (1) We
demonstrate that distributional RL algorithms significantly outperform
classical RL methods, with C51 achieving a performance improvement of more than
32\%. (2) We show that training C51 and IQN to maximize CVaR produces
risk-sensitive policies with adjustable risk aversion. Specifically, our
ablation studies reveal that lower CVaR confidence levels increase risk
aversion, while higher levels decrease it, offering flexible risk management
options. In contrast, QR-DQN shows less predictable behavior. These findings
emphasize the potential of distributional RL for developing adaptable,
risk-averse trading strategies in volatile markets.
|
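The CVaR objective behind the risk-aversion ablation can be made concrete. CVaR at level $\alpha$ is the mean of the worst $\alpha$-fraction of returns; lowering $\alpha$ focuses the objective on deeper tail losses, matching the finding that lower confidence levels yield more risk-averse policies. A NumPy sketch on simulated returns, not the paper's data:

```python
import numpy as np

def cvar(returns, alpha=0.05):
    """Conditional Value-at-Risk: the mean of the worst alpha-fraction
    of returns (more negative = worse)."""
    cutoff = np.quantile(returns, alpha)
    return float(returns[returns <= cutoff].mean())

rng = np.random.default_rng(0)
returns = rng.normal(0.001, 0.02, size=10_000)    # simulated daily returns

risk_deep = cvar(returns, alpha=0.05)     # low confidence level: deep-tail focus
risk_shallow = cvar(returns, alpha=0.50)  # high level: close to the plain mean
```

Distributional agents like C51 and IQN expose the full return distribution, so this tail mean can be computed from their learned quantiles or atoms and maximized directly instead of the expected value.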
2501.04422
|
A new methodology for the optimization of bolt tightening sequences for
ring type joints
|
eess.SY cs.SY
|
Achieving uniform bolt load distribution is critical to obtain leak-free
service in pressure vessel gasketed joints used in offshore pipelines. This is
a difficult task due to bolt load variations during the assembly process. In
this sense, the Elastic Interaction Coefficients Method has been developed in
previous works to define tightening sequences that provide the target load at
the end of the sequence in one or two passes. The method is very costly because
a complete sequence must be simulated and the load of every bolt must be
measured after each tightening operation. The present work validates this
method for Ring Type Joints and further develops a numerically and
experimentally validated new methodology that provides highly satisfactory
results with a significantly lower cost.
|
2501.04424
|
NSA: Neuro-symbolic ARC Challenge
|
cs.AI cs.CL
|
The Abstraction and Reasoning Corpus (ARC) evaluates general reasoning
capabilities that are difficult for both machine learning models and
combinatorial search methods. We propose a neuro-symbolic approach that
combines a transformer for proposal generation with combinatorial search using
a domain-specific language. The transformer narrows the search space by
proposing promising search directions, which allows the combinatorial search to
find the actual solution in a short time. We pre-train the transformer with
synthetically generated data. At test time, we generate additional
task-specific training tasks and fine-tune our model. Our results surpass
the comparable state of the art on the ARC evaluation set by 27% and compare
favourably on the ARC train set. We make our code and dataset publicly
available at https://github.com/Batorskq/NSA.
|
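The neuro-symbolic split can be sketched with a toy DSL: a `proposals` list stands in for the transformer's output, restricting which primitives the combinatorial search composes. The DSL and task below are hypothetical and far simpler than a real ARC DSL:

```python
import itertools
import numpy as np

# Toy DSL of grid transformations (hypothetical; real ARC DSLs are far richer).
DSL = {
    "rot90": lambda g: np.rot90(g),
    "flip_h": lambda g: np.fliplr(g),
    "flip_v": lambda g: np.flipud(g),
}

def run_program(prog, grid):
    for name in prog:
        grid = DSL[name](grid)
    return grid

def search(train_pairs, proposals, max_depth=2):
    """Brute-force search over compositions, restricted to the primitives
    the (mock) transformer proposed."""
    for depth in range(1, max_depth + 1):
        for prog in itertools.product(proposals, repeat=depth):
            if all(np.array_equal(run_program(prog, x), y)
                   for x, y in train_pairs):
                return list(prog)
    return None

x = np.array([[1, 2], [3, 4]])
task = [(x, np.rot90(x, 2))]                    # target: rotate 180 degrees
program = search(task, proposals=["rot90", "flip_h"])
```

With $p$ proposed primitives instead of the full DSL of size $|D|$, depth-$d$ search shrinks from $|D|^d$ to $p^d$ candidates, which is how the proposals let the search reach a solution in a short time.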
2501.04425
|
End-to-End Bangla AI for Solving Math Olympiad Problem Benchmark:
Leveraging Large Language Model Using Integrated Approach
|
cs.CL
|
This work introduces a systematic approach for enhancing large language models
(LLMs) to address Bangla AI mathematical challenges. Through the assessment of
diverse LLM configurations, fine-tuning with specific datasets, and the
implementation of Retrieval-Augmented Generation (RAG), we enhanced the model's
reasoning precision in a multilingual setting. Crucial discoveries indicate
that customized prompting, dataset augmentation, and iterative reasoning
improve the model's efficiency regarding Olympiad-level mathematical
challenges.
|
2501.04426
|
Dual-Force: Enhanced Offline Diversity Maximization under Imitation
Constraints
|
cs.LG cs.AI cs.RO
|
While many algorithms for diversity maximization under imitation constraints
are online in nature, many applications require offline algorithms without
environment interactions. Tackling this problem in the offline setting,
however, presents significant challenges that require non-trivial, multi-stage
optimization processes with non-stationary rewards. In this work, we present a
novel offline algorithm that enhances diversity using an objective based on Van
der Waals (VdW) force and successor features, and eliminates the need to learn
a previously used skill discriminator. Moreover, by conditioning the value
function and policy on a pre-trained Functional Reward Encoding (FRE), our
method allows for better handling of non-stationary rewards and provides
zero-shot recall of all skills encountered during training, significantly
expanding the set of skills learned in prior work. Consequently, our algorithm
benefits from receiving a consistently strong diversity signal (VdW), and
enjoys more stable and efficient training. We demonstrate the effectiveness of
our method in generating diverse skills for two robotic tasks in simulation:
locomotion of a quadruped and local navigation with obstacle traversal.
|
2501.04435
|
A Digital Shadow for Modeling, Studying and Preventing Urban Crime
|
cs.AI cs.MA cs.SI
|
Crime is one of the greatest threats to urban security. Around 80 percent of
the world's population lives in countries with high levels of criminality. Most
of the crimes committed in the cities take place in their urban environments.
This paper presents the development and validation of a digital shadow platform
for modeling and simulating urban crime. This digital shadow has been
constructed using data-driven agent-based modeling and simulation techniques,
which are suitable for capturing dynamic interactions among individuals and
with their environment. Our approach transforms and integrates well-known
criminological theories and the expert knowledge of law enforcement agencies
(LEA), policy makers, and other stakeholders under a theoretical model, which
is in turn combined with real crime, spatial (cartographic) and socio-economic
data into an urban model characterizing the daily behavior of citizens. The
digital shadow has also been instantiated for the city of Malaga, for which we
had over 300,000 complaints available. This instance has been calibrated with
those complaints and other geographic and socio-economic information of the
city. To the best of our knowledge, our digital shadow is the first for large
urban areas that has been calibrated with a large dataset of real crime reports
and with an accurate representation of the urban environment. The performance
indicators of the model after being calibrated, in terms of the metrics widely
used in predictive policing, suggest that our simulated crime generation
matches the general pattern of crime in the city according to historical data.
Our digital shadow platform could be an interesting tool for modeling and
predicting criminal behavior in an urban environment on a daily basis and,
thus, a useful tool for policy makers, criminologists, sociologists, LEAs, etc.
to study and prevent urban crime.
|
2501.04436
|
Federated Fine-Tuning of LLMs: Framework Comparison and Research
Directions
|
cs.LG cs.AI
|
Federated learning (FL) provides a privacy-preserving solution for
fine-tuning pre-trained large language models (LLMs) using distributed private
datasets, enabling task-specific adaptation while preserving data privacy.
However, fine-tuning the extensive parameters in LLMs is particularly
challenging in resource-constrained federated scenarios due to the significant
communication and computational costs. To gain a deeper understanding of how
these challenges can be addressed, this article conducts a comparative analysis
of three advanced federated LLM (FedLLM) frameworks that integrate knowledge
distillation (KD) and split learning (SL) to mitigate these issues: 1) FedLLMs,
where clients upload model parameters or gradients to enable straightforward
and effective fine-tuning; 2) KD-FedLLMs, which leverage KD for efficient
knowledge sharing via logits; and 3) Split-FedLLMs, which split the LLMs into
two parts, with one part executed on the client and the other on the
server, to balance the computational load. Each framework is evaluated based on
key performance metrics, including model accuracy, communication overhead, and
client-side computational load, offering insights into their effectiveness for
various federated fine-tuning scenarios. Through this analysis, we identify
framework-specific optimization opportunities to enhance the efficiency of
FedLLMs and discuss broader research directions, highlighting open
opportunities to better adapt FedLLMs for real-world applications. A use case
is presented to demonstrate the performance comparison of these three
frameworks under varying configurations and settings.
|
2501.04437
|
Integrating LLMs with ITS: Recent Advances, Potentials, Challenges, and
Future Directions
|
eess.SY cs.AI cs.ET cs.SY
|
Intelligent Transportation Systems (ITS) are crucial for the development and
operation of smart cities, addressing key challenges in efficiency,
productivity, and environmental sustainability. This paper comprehensively
reviews the transformative potential of Large Language Models (LLMs) in
optimizing ITS. Initially, we provide an extensive overview of ITS,
highlighting its components, operational principles, and overall effectiveness.
We then delve into the theoretical background of various LLM techniques, such
as GPT, T5, CTRL, and BERT, elucidating their relevance to ITS applications.
Following this, we examine the wide-ranging applications of LLMs within ITS,
including traffic flow prediction, vehicle detection and classification,
autonomous driving, traffic sign recognition, and pedestrian detection. Our
analysis reveals how these advanced models can significantly enhance traffic
management and safety. Finally, we explore the challenges and limitations LLMs
face in ITS, such as data availability, computational constraints, and ethical
considerations. We also present several future research directions and
potential innovations to address these challenges. This paper aims to guide
researchers and practitioners through the complexities and opportunities of
integrating LLMs in ITS, offering a roadmap to create more efficient,
sustainable, and responsive next-generation transportation systems.
|
2501.04438
|
Effect of Information Technology on Job Creation to Support Economic:
Case Studies of Graduates in Universities (2023-2024) of the KRG of Iraq
|
cs.CY cs.AI
|
The aim of this study is to assess the impact of information technology (IT)
on university graduates in terms of employment development, which will aid in
economic issues. This study uses a descriptive research methodology and a
quantitative approach to understand variables. The focus of this study is to
ascertain how graduates of Kurdistan regional universities might use IT to
secure employment and significantly contribute to the nation's economic
revival. The sample was selected through a judgmental sampling procedure and
consisted of 314 people. The researcher prepared the
questionnaire to collect data, and then SPSS statistical software, version 22,
and Excel 2010 were used to modify, compile, and tabulate the results. The
study's outcome showed that information technology is incredibly inventive, has
a promising future, and makes life much easier for everyone. It also proved
that a deep academic understanding of information technology and its
constituent parts helps graduates of Kurdistan Regional University find
suitable careers. More importantly, though, anyone looking for work or a means
of support will find great benefit from possessing credentials and
understanding of IT. The study's final finding was that information technology
has actively advanced the country's economy. Not only is IT helping to boost
youth employment, but it is also turning into a worthwhile investment for
economic growth.
|
2501.04440
|
RSAR: Restricted State Angle Resolver and Rotated SAR Benchmark
|
cs.CV
|
Rotated object detection has made significant progress in optical remote
sensing. However, advancements in the Synthetic Aperture Radar (SAR) field lag
behind, primarily due to the absence of a large-scale dataset.
Annotating such a dataset is inefficient and costly. A promising solution is to
employ a weakly supervised model (e.g., trained with available horizontal boxes
only) to generate pseudo-rotated boxes for reference before manual calibration.
Unfortunately, the existing weakly supervised models exhibit limited accuracy
in predicting the object's angle. Previous works attempt to enhance angle
prediction by using angle resolvers that decouple angles into cosine and sine
encodings. In this work, we first reevaluate these resolvers from a unified
perspective of dimension mapping and expose that they share the same
shortcomings: these methods overlook the unit circle constraint inherent in
these encodings, easily leading to prediction biases. To address this issue, we
propose the Unit Cycle Resolver (UCR), which incorporates a unit circle constraint
loss to improve angle prediction accuracy. Our approach can effectively improve
the performance of existing state-of-the-art weakly supervised methods and even
surpasses fully supervised models on existing optical benchmarks (i.e.,
DOTA-v1.0 dataset). With the aid of UCR, we further annotate and introduce
RSAR, the largest multi-class rotated SAR object detection dataset to date.
Extensive experiments on both RSAR and optical datasets demonstrate that our
UCR enhances angle prediction accuracy. Our dataset and code can be found at:
https://github.com/zhasion/RSAR.
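The resolver idea described above, decoupling an angle into a (cos, sin) encoding and keeping predictions on the unit circle, can be sketched in a few lines. This is a minimal numpy illustration; the function names and the squared-penalty form are ours, not taken from the paper's released code:

```python
import numpy as np

def encode_angle(theta):
    # Decouple an angle into its (cos, sin) encoding.
    return np.array([np.cos(theta), np.sin(theta)])

def decode_angle(enc):
    # Recover the angle from a (possibly off-circle) prediction.
    return np.arctan2(enc[1], enc[0])

def unit_circle_penalty(enc):
    # Penalise predictions that drift off the unit circle,
    # i.e. cos^2 + sin^2 far from 1.
    return (enc[0] ** 2 + enc[1] ** 2 - 1.0) ** 2

enc = encode_angle(0.7)        # round-trips exactly on the circle
off = np.array([0.9, 0.9])     # an unconstrained network output
# decode_angle(off) is still defined, but unit_circle_penalty(off) > 0,
# which is the kind of off-circle bias the constraint loss suppresses.
```

An unconstrained regression head can output any point in the plane; the penalty term pulls its (cos, sin) predictions back onto the circle where decoding is unambiguous.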
|
2501.04441
|
Motif Discovery Framework for Psychiatric EEG Data Classification
|
cs.LG
|
In current medical practice, patients undergoing depression treatment must
wait four to six weeks before a clinician can assess medication response due to
the delayed noticeable effects of antidepressants. Identification of a
treatment response at any earlier stage is of great importance, since it can
reduce the emotional and economic burden connected with the treatment. We
approach the prediction of a patient response to a treatment as a
classification problem, by utilizing the dynamic properties of EEG recordings
on the 7th day of the treatment. We present a novel framework that applies
motif discovery to extract meaningful features from EEG data distinguishing
between depression treatment responders and non-responders. We also applied our
framework to classification tasks in other psychiatric EEG datasets, namely to
patients with symptoms of schizophrenia, pediatric patients with intractable
seizures, and patients with Alzheimer's disease and dementia. We achieved high
classification precision in all datasets. The results demonstrate that the
dynamic properties of EEGs may support clinicians in decision making, both in
diagnosis and in predicting depression treatment response as early as the 7th
day of treatment. To the best of our knowledge, our work is the first to use
motifs in depression diagnostics.
|
2501.04442
|
A Survey on Path Planning Problem of Rolling Contacts: Approaches,
Applications and Future Challenges
|
cs.RO
|
This paper explores an eclectic range of path-planning methodologies
engineered for rolling surfaces. Our focus is on the kinematic intricacies of
rolling contact systems, which are investigated through a motion planning lens.
Beyond summarizing the approaches to single-contact rotational surfaces, we
explore the challenging domain of spin-rolling multi-contact systems. Our work
proposes solutions for the higher-dimensional problem of multiple rotating
objects in contact. Venturing beyond kinematics, these methodologies find
application across a spectrum of domains, including rolling robots,
reconfigurable swarm robotics, micro/nano manipulation, and nonprehensile
manipulations. Through meticulously examining established planning strategies,
we unveil their practical implementations in various real-world scenarios, from
intricate dexterous manipulation tasks to the nimble manoeuvring of rolling
robots and even shape planning of multi-contact swarms of particles. This study
introduces the persistent challenges and unexplored frontiers of robotics,
intricately linked to both path planning and mechanism design. As we illuminate
existing solutions, we also set the stage for future breakthroughs in this
dynamic and rapidly evolving field by highlighting the critical importance of
addressing rolling contact problems.
|
2501.04443
|
Revisiting LocalSGD and SCAFFOLD: Improved Rates and Missing Analysis
|
math.OC cs.DC cs.LG
|
LocalSGD and SCAFFOLD are widely used methods in distributed stochastic
optimization, with numerous applications in machine learning, large-scale data
processing, and federated learning. However, rigorously establishing their
theoretical advantages over simpler methods, such as minibatch SGD (MbSGD), has
proven challenging, as existing analyses often rely on strong assumptions,
unrealistic premises, or overly restrictive scenarios.
In this work, we revisit the convergence properties of LocalSGD and SCAFFOLD
under a variety of existing or weaker conditions, including gradient
similarity, Hessian similarity, weak convexity, and Lipschitz continuity of the
Hessian. Our analysis shows that (i) LocalSGD achieves faster convergence
compared to MbSGD for weakly convex functions without requiring stronger
gradient similarity assumptions; (ii) LocalSGD benefits significantly from
higher-order similarity and smoothness; and (iii) SCAFFOLD demonstrates faster
convergence than MbSGD for a broader class of non-quadratic functions. These
theoretical insights provide a clearer understanding of the conditions under
which LocalSGD and SCAFFOLD outperform MbSGD.
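The LocalSGD scheme under analysis can be simulated in a few lines: each client takes several local SGD steps from the shared model, then the server averages the client iterates. The heterogeneous quadratic objectives and hyperparameters below are a toy illustration, not the paper's setting:

```python
import numpy as np

def local_sgd(grad, x0, clients=4, local_steps=8, rounds=25, lr=0.1):
    """Minimal LocalSGD: every client runs `local_steps` SGD steps from
    the shared model, then the server averages the client iterates
    (one communication round)."""
    x = x0.copy()
    for _ in range(rounds):
        client_iterates = []
        for c in range(clients):
            xc = x.copy()
            for _ in range(local_steps):
                xc -= lr * grad(c, xc)
            client_iterates.append(xc)
        x = np.mean(client_iterates, axis=0)  # communication round
    return x

# Heterogeneous quadratics f_c(x) = 0.5 * ||x - b_c||^2: the global
# optimum of their average is the mean of the b_c.
rng = np.random.default_rng(1)
b = rng.normal(size=(4, 3))
noisy_grad = lambda c, x: (x - b[c]) + 0.01 * rng.normal(size=3)
x_star = b.mean(axis=0)
x_hat = local_sgd(noisy_grad, np.zeros(3))
```

Minibatch SGD would instead query all clients at every step; LocalSGD trades those per-step communications for occasional averaging, which is exactly the regime whose convergence the paper analyses.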
|
2501.04444
|
A novel Facial Recognition technique with Focusing on Masked Faces
|
cs.CV cs.AI
|
Recognizing the same faces with and without masks is important for ensuring
consistent identification in security, access control, and public safety. This
capability is crucial in scenarios like law enforcement, healthcare, and
surveillance, where accurate recognition must be maintained despite facial
occlusion. This research focuses on the challenge of recognizing the same faces
with and without masks by employing cosine similarity as the primary technique.
With the increased use of masks, traditional facial recognition systems face
significant accuracy issues, making it crucial to develop methods that can
reliably identify individuals in masked conditions. For that reason, this study
proposed Masked-Unmasked Face Matching Model (MUFM). This model employs
transfer learning using the Visual Geometry Group (VGG16) model to extract
significant facial features, which are subsequently classified utilizing the
K-Nearest Neighbors (K-NN) algorithm. The cosine similarity metric is employed
to compare masked and unmasked faces of the same individuals. This approach
represents a novel contribution, as the task of recognizing the same individual
with and without a mask using cosine similarity has not been previously
addressed. By integrating these advanced methodologies, the research
demonstrates effective identification of individuals despite the presence of
masks, addressing a significant limitation in traditional systems. Data
collection is another essential part of this work: an image dataset was
assembled and prepared from three different sources, some containing real-world
images, which strengthens the comprehensiveness of this research. The images
were drawn from three existing datasets of masked and unmasked photographs of
the same faces.
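The matching rule at the heart of this pipeline, cosine similarity between embeddings of the same face with and without a mask, reduces to a few lines. Here synthetic vectors stand in for the VGG16 features, and the threshold is an illustrative choice rather than the study's tuned value:

```python
import numpy as np

def cosine_similarity(a, b):
    # Cosine of the angle between two feature vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def same_person(masked_feat, unmasked_feat, threshold=0.8):
    # Match when the embeddings point in nearly the same direction,
    # regardless of magnitude; 0.8 is an illustrative threshold.
    return cosine_similarity(masked_feat, unmasked_feat) >= threshold

# Synthetic 512-d vectors stand in for VGG16 embeddings: a mask adds
# a small perturbation to the same identity's feature vector.
rng = np.random.default_rng(0)
identity = rng.normal(size=512)
masked = identity + 0.1 * rng.normal(size=512)
stranger = rng.normal(size=512)
```

Because cosine similarity ignores vector magnitude, a moderate occlusion-induced perturbation leaves the masked embedding close in direction to the unmasked one, while an unrelated identity is nearly orthogonal in high dimensions.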
|
2501.04453
|
Gradient Purification: Defense Against Poisoning Attack in Decentralized
Federated Learning
|
cs.LG
|
Decentralized federated learning (DFL) is inherently vulnerable to poisoning
attacks, as malicious clients can transmit manipulated model gradients to
neighboring clients. Existing defense methods either reject suspicious
gradients per iteration or restart DFL aggregation after detecting all
malicious clients. They overlook the potential accuracy benefit from the
discarded malicious gradients. In this paper, we propose a novel gradient
purification defense, named GPD, that integrates seamlessly with existing DFL
aggregation to defend against poisoning attacks. It aims to mitigate the harm
in model gradients while retaining the benefit in model weights for enhancing
accuracy. For each benign client in GPD, a recording variable is designed to
track the historically aggregated gradients from one of its neighbors. It
allows benign clients to precisely detect malicious neighbors and swiftly
mitigate aggregated malicious gradients via historical consistency checks. Upon
mitigation, GPD optimizes model weights via aggregating gradients solely from
benign clients. This retains the previously beneficial portions from malicious
clients and exploits the contributions from benign clients, thereby
significantly enhancing the model accuracy. We analyze the convergence of GPD,
as well as its ability to harvest high accuracy. Extensive experiments over
three datasets demonstrate that GPD is capable of mitigating poisoning attacks
under both iid and non-iid data distributions. It significantly outperforms
state-of-the-art defenses in terms of accuracy against various poisoning
attacks.
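The historical consistency check described above can be caricatured as follows: the recording variable becomes a running sum of a neighbor's past gradients, and a new gradient that contradicts that history is flagged. The cosine test and its threshold are our illustrative stand-ins, not the paper's exact rule:

```python
import numpy as np

class NeighborMonitor:
    """Toy historical-consistency check: record each neighbor's
    accumulated gradients and flag a gradient that contradicts the
    neighbor's own history (illustrative rule, not GPD's exact one)."""

    def __init__(self, threshold=-0.5):
        self.history = {}          # neighbor -> accumulated gradients
        self.threshold = threshold

    def is_suspicious(self, neighbor, grad):
        past = self.history.get(neighbor)
        self.history[neighbor] = grad if past is None else past + grad
        if past is None:
            return False           # nothing to compare against yet
        cos = np.dot(past, grad) / (
            np.linalg.norm(past) * np.linalg.norm(grad) + 1e-12)
        return cos < self.threshold  # sign-flipped w.r.t. history

mon = NeighborMonitor()
honest = np.array([1.0, 1.0, 0.0])
mon.is_suspicious("n1", honest)              # builds up history
assert not mon.is_suspicious("n1", honest + 0.1)
assert mon.is_suspicious("n1", -10 * honest)  # poisoned sign flip
```

A benign client running such a check can exclude the flagged gradient from aggregation while keeping the previously accumulated, still-useful portion, which is the intuition behind purifying rather than discarding.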
|
2501.04455
|
Hidden Entity Detection from GitHub Leveraging Large Language Models
|
cs.CL cs.DL
|
Named entity recognition is an important task when constructing knowledge
bases from unstructured data sources. Whereas entity detection methods mostly
rely on extensive training data, Large Language Models (LLMs) have paved the
way towards approaches that rely on zero-shot learning (ZSL) or few-shot
learning (FSL) by taking advantage of the capabilities LLMs acquired during
pretraining. Specifically, in very specialized scenarios where large-scale
training data is not available, ZSL / FSL opens new opportunities. This paper
follows this recent trend and investigates the potential of leveraging Large
Language Models (LLMs) in such scenarios to automatically detect datasets and
software within textual content from GitHub repositories. While existing
methods focused solely on named entities, this study aims to broaden the scope
by incorporating resources such as repositories and online hubs where entities
are also represented by URLs. The study explores different FSL prompt learning
approaches to enhance the LLMs' ability to identify dataset and software
mentions within repository texts. Through analyses of LLM effectiveness and
learning strategies, this paper offers insights into the potential of advanced
language models for automated entity detection.
|
2501.04459
|
Rapid Automated Mapping of Clouds on Titan With Instance Segmentation
|
astro-ph.IM astro-ph.EP cs.CV eess.IV
|
Despite widespread adoption of deep learning models to address a variety of
computer vision tasks, planetary science has yet to see extensive utilization
of such tools to address its unique problems. On Titan, the largest moon of
Saturn, tracking seasonal trends and weather patterns of clouds provides
crucial insights into one of the most complex climates in the Solar System, yet
much of the available image data are still analyzed in a conventional way. In
this work, we apply a Mask R-CNN trained via transfer learning to perform
instance segmentation of clouds in Titan images acquired by the Cassini
spacecraft - a previously unexplored approach to a big data problem in
planetary science. We demonstrate that an automated technique can provide
quantitative measures for clouds, such as areas and centroids, that may
otherwise be prohibitively time-intensive to produce by human mapping.
Furthermore, despite Titan specific challenges, our approach yields accuracy
comparable to contemporary cloud identification studies on Earth and other
worlds. We compare the efficiencies of human-driven versus algorithmic
approaches, showing that transfer learning provides speed-ups that may open new
horizons for data investigation for Titan. Moreover, we suggest that such
approaches have broad potential for application to similar problems in
planetary science where they are currently under-utilized. Future planned
missions to the planets and remote sensing initiatives for the Earth promise to
provide a deluge of image data in the coming years that will benefit strongly
from leveraging machine learning approaches to perform the analysis.
|
2501.04467
|
A Histologic Dataset of Normal and Atypical Mitotic Figures on Human
Breast Cancer (AMi-Br)
|
cs.CV cs.DB
|
Assessment of the density of mitotic figures (MFs) in histologic tumor
sections is an important prognostic marker for many tumor types, including
breast cancer. Recently, it has been reported in multiple works that the
quantity of MFs with an atypical morphology (atypical MFs, AMFs) might be an
independent prognostic criterion for breast cancer. AMFs are an indicator of
mutations in the genes regulating the cell cycle and can lead to aberrant
chromosome constitution (aneuploidy) of the tumor cells. To facilitate further
research on this topic using pattern recognition, we present the first ever
publicly available dataset of atypical and normal MFs (AMi-Br). For this, we
utilized two of the most popular MF datasets (MIDOG 2021 and TUPAC) and
subclassified all MFs using a three expert majority vote. Our final dataset
consists of 3,720 MFs, split into 832 AMFs (22.4%) and 2,888 normal MFs (77.6%)
across all 223 tumor cases in the combined set. We provide baseline
classification experiments to investigate the consistency of the dataset, using
a Monte Carlo cross-validation and different strategies to combat class
imbalance. We found an averaged balanced accuracy of up to 0.806 when using a
patch-level data set split, and up to 0.713 when using a patient-level split.
|
2501.04470
|
Regularising NARX models with multi-task learning
|
cs.LG
|
A Nonlinear Auto-Regressive with eXogenous inputs (NARX) model can be used to
describe time-varying processes, where the output depends on both previous
outputs and current/previous external input variables. One limitation of NARX
models is their propensity to overfit, resulting in poor generalisation for
future predictions. The method proposed here to overcome overfitting is a NARX
model that predicts outputs at both the current time and several lead times
into the future. This is a form of multi-task learning (MTL), whereby the
lead-time outputs regularise the current-time output. This work shows that at
high noise levels, MTL can be used to regularise NARX with a lower Normalised
Mean Square Error (NMSE) than that of the independent-learner counterpart.
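The multi-task NARX construction, one set of lagged features predicting the output at the current time and at several lead times, can be sketched with a shared linear least-squares head. This is a toy numpy illustration on a synthetic system; the paper's model and data are not reproduced:

```python
import numpy as np

def narx_design(y, u, lags=3, leads=(0, 1, 2)):
    """Build a NARX regression problem: features are lagged outputs and
    inputs; targets are the output at the current time and at several
    lead times (the multi-task heads)."""
    rows, targets = [], []
    horizon = max(leads)
    for t in range(lags, len(y) - horizon):
        rows.append(np.r_[y[t - lags:t], u[t - lags:t + 1]])
        targets.append([y[t + k] for k in leads])
    return np.array(rows), np.array(targets)

# Synthetic first-order system driven by an exogenous input.
rng = np.random.default_rng(0)
u = rng.normal(size=500)
y = np.zeros(500)
for t in range(1, 500):
    y[t] = 0.8 * y[t - 1] + 0.5 * u[t - 1] + 0.05 * rng.normal()

# One shared linear map fitted to all heads at once: the lead-time
# targets constrain the same weights as the current-time head.
X, Y = narx_design(y, u)
W, *_ = np.linalg.lstsq(X, Y, rcond=None)
pred = X @ W
```

In a nonlinear model the shared hidden representation plays the role of `W` here: forcing it to also explain future outputs discourages it from fitting the noise in the current-time target.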
|
2501.04472
|
Hybrid Artificial Intelligence Strategies for Drone Navigation
|
cs.AI cs.RO
|
Objective: This paper describes the development of hybrid artificial
intelligence strategies for drone navigation. Methods: The navigation module
combines a deep learning model with a rule-based engine depending on the agent
state. The deep learning model has been trained using reinforcement learning.
The rule-based engine uses expert knowledge to deal with specific situations.
The navigation module incorporates several strategies to explain the drone
decision based on its observation space, and different mechanisms for including
human decisions in the navigation process. Finally, this paper proposes an
evaluation methodology based on defining several scenarios and analyzing the
performance of the different strategies according to metrics adapted to each
scenario. Results: Two main navigation problems have been studied. For the
first scenario (reaching known targets), it has been possible to obtain a 90%
task completion rate, significantly reducing the number of collisions thanks to
the rule-based engine. For the second scenario, it has been possible to reduce
the time required to locate all the targets by 20% using the reinforcement
learning model. Conclusions: Reinforcement learning is a very good strategy to
learn policies for drone navigation, but in critical situations, it is
necessary to complement it with a rule-based module to increase task success
rate.
|
2501.04473
|
When LLMs Struggle: Reference-less Translation Evaluation for
Low-resource Languages
|
cs.CL
|
This paper investigates the reference-less evaluation of machine translation
for low-resource language pairs, known as quality estimation (QE).
Segment-level QE is a challenging cross-lingual language understanding task
that provides a quality score (0-100) to the translated output. We
comprehensively evaluate large language models (LLMs) in zero/few-shot
scenarios and perform instruction fine-tuning using a novel prompt based on
annotation guidelines. Our results indicate that prompt-based approaches are
outperformed by the encoder-based fine-tuned QE models. Our error analysis
reveals tokenization issues, along with errors due to transliteration and named
entities, and argues for refinement in LLM pre-training for cross-lingual
tasks. We publicly release the data and trained models for further research.
|
2501.04477
|
Rethinking High-speed Image Reconstruction Framework with Spike Camera
|
cs.CV
|
Spike cameras, as innovative neuromorphic devices, generate continuous spike
streams to capture high-speed scenes with lower bandwidth and higher dynamic
range than traditional RGB cameras. However, reconstructing high-quality images
from the spike input under low-light conditions remains challenging.
Conventional learning-based methods often rely on synthetic datasets as
supervision for training. However, these approaches falter when dealing with
noisy spikes fired in low-light environments, leading to further performance
degradation on real-world datasets. This phenomenon is primarily
due to inadequate noise modelling and the domain gap between synthetic and real
datasets, resulting in recovered images with unclear textures, excessive noise,
and diminished brightness. To address these challenges, we introduce a novel
spike-to-image reconstruction framework SpikeCLIP that goes beyond traditional
training paradigms. Leveraging the CLIP model's powerful capability to align
text and images, we incorporate the textual description of the captured scene
and unpaired high-quality datasets as the supervision. Our experiments on
real-world low-light datasets U-CALTECH and U-CIFAR demonstrate that SpikeCLIP
significantly enhances texture details and the luminance balance of recovered
images. Furthermore, the reconstructed images are well-aligned with the broader
visual features needed for downstream tasks, ensuring more robust and versatile
performance in challenging environments.
|
2501.04480
|
Research on environment perception and behavior prediction of
intelligent UAV based on semantic communication
|
cs.AI cs.RO
|
The convergence of drone delivery systems, virtual worlds, and blockchain has
transformed logistics and supply chain management, providing a fast and
environmentally friendly alternative to traditional ground transportation
methods. To provide users with a real-world experience, virtual service
providers need to collect up-to-the-minute delivery information from edge
devices. To address this challenge, 1) a reinforcement learning approach is
introduced to enable drones with fast training capabilities and the ability to
autonomously adapt to new virtual scenarios for effective resource allocation;
2) a semantic communication framework for metaverses is proposed, which
utilizes the extraction of semantic information to reduce the communication
cost and incentivize the transmission of information for metaverse services;
3) to ensure user information security, a lightweight authentication and key
agreement scheme is designed between the drone and the user by introducing
blockchain technology. In our experiments, the drone adaptation performance is
improved by about 35%, and the local offloading rate can reach 90% as the
number of base stations increases. The proposed semantic communication system
is compared with a cross-entropy baseline model. With blockchain technology
introduced, transaction throughput remains stable across different numbers of
drones.
|
2501.04481
|
Safe Reinforcement Learning with Minimal Supervision
|
cs.LG cs.RO cs.SY eess.SY
|
Reinforcement learning (RL) in the real world necessitates the development of
procedures that enable agents to explore without causing harm to themselves or
others. The most successful solutions to the problem of safe RL leverage
offline data to learn a safe-set, enabling safe online exploration. However,
this approach to safe-learning is often constrained by the demonstrations that
are available for learning.
In this paper we investigate the influence of the quantity and quality of
data used to train the initial safe learning problem offline on the ability to
learn safe-RL policies online. Specifically, we focus on tasks with spatially
extended goal states where we have few or no demonstrations available.
Classically this problem is addressed either by using hand-designed controllers
to generate data or by collecting user-generated demonstrations. However, these
methods are often expensive and do not scale to more complex tasks and
environments. To address this limitation we propose an unsupervised RL-based
offline data collection procedure, to learn complex and scalable policies
without the need for hand-designed controllers or user demonstrations. Our
research demonstrates the significance of providing sufficient demonstrations
for agents to learn optimal safe-RL policies online, and as a result, we
propose optimistic forgetting, a novel online safe-RL approach that is
practical for scenarios with limited data. Further, our unsupervised data
collection approach highlights the need to balance diversity and optimality for
safe online exploration.
|
2501.04483
|
Demystification and Near-perfect Estimation of Minimum Gas Limit and Gas
Used for Ethereum Smart Contracts
|
cs.SE cs.CE cs.DC cs.ET cs.NI
|
The Ethereum blockchain has a \emph{gas system} that associates operations
with a cost in gas units. Two central concepts of this system are the \emph{gas
limit} assigned by the issuer of a transaction and the \emph{gas used} by a
transaction. The former is a budget that must not be exhausted before the
completion of the transaction execution; otherwise, the execution fails.
Therefore, it seems rather essential to determine the \emph{minimum gas limit}
that ensures the execution of a transaction will not abort due to the lack of
gas. Despite its practical relevance, this concept has not been properly
addressed. In the literature, gas used and minimum gas limit are conflated.
This paper proposes a precise notion of minimum gas limit and how it can differ
from gas used by a transaction; this is also demonstrated with a quantitative
study on real transactions of the Ethereum blockchain. Another significant
contribution is the proposition of a fairly precise estimator for each of the
two metrics. Again, the confusion between these concepts has led to the
creation of estimators only for the gas used by a transaction. We demonstrate
that the minimum gas limit for the state of the Ethereum blockchain (after the
block) $t$ can serve as a near-perfect estimation for the execution of the
transaction at block $t + \Delta$, where $\Delta \leq 11$; the same holds for
estimating gas used. These precise estimators can be very valuable in helping
the users predict the gas budget of transactions and developers in optimising
their smart contracts; over and underestimating gas used and minimum gas limit
can lead to a number of practical issues. Overall, this paper serves as an
important reference for blockchain developers and users as to how the gas
system really works.
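The distinction the paper draws can be made concrete with a toy search for the minimum gas limit. The refund mechanism below is a simplified stand-in for real EVM behaviour (refunds, such as those for clearing storage, are applied only after execution), which is one way gas used can fall below the minimum gas limit; the numbers are illustrative:

```python
def minimum_gas_limit(execute, lo=21_000, hi=30_000_000):
    """Binary-search the smallest gas limit under which `execute`
    succeeds. `execute(limit)` returns (success, gas_used); success
    is monotone in the limit, so bisection applies."""
    while lo < hi:
        mid = (lo + hi) // 2
        ok, _ = execute(mid)
        if ok:
            hi = mid
        else:
            lo = mid + 1
    return lo

# Toy transaction: execution needs 100_000 gas at its peak, but a
# 20_000 refund (applied only after execution) makes the reported
# gas used 80_000. Minimum gas limit is therefore NOT gas used.
def toy_tx(limit, peak=100_000, refund=20_000):
    if limit < peak:
        return False, limit      # out of gas: the whole budget is burned
    return True, peak - refund

min_limit = minimum_gas_limit(toy_tx)
_, gas_used = toy_tx(min_limit)
```

Setting the limit to the previously observed gas used (80,000 here) would abort the toy transaction, which is exactly the conflation of the two concepts that the paper argues against.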
|
2501.04484
|
PolInterviews -- A Dataset of German Politician Public Broadcast
Interviews
|
cs.CL
|
This paper presents a novel dataset of public broadcast interviews featuring
high-ranking German politicians. The interviews were sourced from YouTube,
transcribed, processed for speaker identification, and stored in a tidy and
open format. The dataset comprises 99 interviews with 33 different German
politicians across five major interview formats, containing a total of 28,146
sentences. As the first of its kind, this dataset offers valuable opportunities
for research on various aspects of political communication in the (German)
political context, such as agenda-setting, interviewer dynamics, or
politicians' self-presentation.
|
2501.04486
|
MB-TaylorFormer V2: Improved Multi-branch Linear Transformer Expanded by
Taylor Formula for Image Restoration
|
cs.CV
|
Recently, Transformer networks have demonstrated outstanding performance in
the field of image restoration due to the global receptive field and
adaptability to input. However, the quadratic computational complexity of
Softmax-attention poses a significant limitation on its extensive application
in image restoration tasks, particularly for high-resolution images. To tackle
this challenge, we propose a novel variant of the Transformer. This variant
leverages the Taylor expansion to approximate the Softmax-attention and
utilizes the concept of norm-preserving mapping to approximate the remainder of
the first-order Taylor expansion, resulting in a linear computational
complexity. Moreover, we introduce a multi-branch architecture featuring
multi-scale patch embedding into the proposed Transformer, which has four
distinct advantages: 1) various sizes of the receptive field; 2) multi-level
semantic information; 3) flexible shapes of the receptive field; 4) accelerated
training and inference speed. Hence, the proposed model, named the second
version of the Taylor formula expansion-based Transformer (MB-TaylorFormer V2
for short), has the capability to concurrently process coarse-to-fine
features, capture long-distance pixel interactions with limited computational
cost, and improve the approximation of the Taylor expansion remainder.
Experimental results across diverse image restoration benchmarks demonstrate
that MB-TaylorFormer V2 achieves state-of-the-art performance in multiple image
restoration tasks, such as image dehazing, deraining, desnowing, motion
deblurring, and denoising, with very little computational overhead. The source
code is available at https://github.com/FVL2020/MB-TaylorFormerV2.
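The linearization trick behind such models can be illustrated in a few lines: truncating exp(q·k) at first order lets the key/value products be pre-aggregated, dropping the cost from quadratic to linear in sequence length. This is a generic first-order sketch, not the paper's architecture, which additionally approximates the Taylor remainder with a norm-preserving mapping:

```python
import numpy as np

def softmax_attention(Q, K, V):
    # Exact softmax attention: O(n^2) in sequence length n.
    A = np.exp(Q @ K.T)
    return (A / A.sum(axis=1, keepdims=True)) @ V

def taylor_linear_attention(Q, K, V):
    """First-order surrogate exp(q.k) ~ 1 + q.k: the K/V statistics are
    pre-aggregated once, so the cost is linear in sequence length.
    (1 + q.k can go negative for large scores; fine for this small
    demo, and one reason higher-order corrections are needed.)"""
    n = K.shape[0]
    kv = K.T @ V                   # d x d, computed once
    v_sum = V.sum(axis=0)
    num = v_sum + Q @ kv           # sum_j (1 + q.k_j) v_j per query
    den = n + Q @ K.sum(axis=0)    # sum_j (1 + q.k_j) per query
    return num / den[:, None]

rng = np.random.default_rng(0)
Q, K, V = (0.1 * rng.normal(size=(6, 4)) for _ in range(3))
exact = softmax_attention(Q, K, V)
approx = taylor_linear_attention(Q, K, V)
```

For small attention scores the two outputs nearly coincide, while the linear variant never materialises the n-by-n attention matrix, which is what makes high-resolution restoration tractable.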
|
2501.04487
|
Integrating remote sensing data assimilation, deep learning and large
language model for interactive wheat breeding yield prediction
|
cs.LG cs.AI
|
Yield is one of the core goals of crop breeding. By predicting the potential
yield of different breeding materials, breeders can screen these materials at
various growth stages to select the best performing. Based on unmanned aerial
vehicle remote sensing technology, high-throughput crop phenotyping data in
breeding areas is collected to provide data support for the breeding decisions
of breeders. However, the accuracy of current yield predictions still requires
improvement, and the usability and user-friendliness of yield forecasting tools
remain suboptimal. To address these challenges, this study introduces a hybrid
method and tool for crop yield prediction, designed to allow breeders to
interactively and accurately predict wheat yield by chatting with a large
language model (LLM). First, the newly designed data assimilation algorithm is
used to assimilate the leaf area index into the WOFOST model. Then, selected
outputs from the assimilation process, along with remote sensing inversion
results, are used to drive the time-series temporal fusion transformer model
for wheat yield prediction. Finally, based on this hybrid method and leveraging
an LLM with retrieval augmented generation technology, we developed an
interactive yield prediction Web tool that is user-friendly and supports
sustainable data updates. This tool integrates multi-source data to assist
breeding decision-making. This study aims to accelerate the identification of
high-yield materials in the breeding process, enhance breeding efficiency, and
enable more scientific and smart breeding decisions.
|
2501.04493
|
The Role of Machine Learning in Congenital Heart Disease Diagnosis:
Datasets, Algorithms, and Insights
|
eess.IV cs.AI cs.CV
|
Congenital heart disease is among the most common fetal abnormalities and
birth defects. Despite identifying numerous risk factors influencing its onset,
a comprehensive understanding of its genesis and management across diverse
populations remains limited. Recent advancements in machine learning have
demonstrated the potential for leveraging patient data to enable early
congenital heart disease detection. Over the past seven years, researchers have
proposed various data-driven and algorithmic solutions to address this
challenge. This paper presents a systematic review of congenital heart disease
recognition using machine learning, conducting a meta-analysis of 432
references from leading journals published between 2018 and 2024. A detailed
investigation of 74 scholarly works highlights key factors, including
databases, algorithms, applications, and solutions. Additionally, the survey
outlines reported datasets used by machine learning experts for congenital
heart disease recognition. Using a systematic literature review methodology,
this study identifies critical challenges and opportunities in applying machine
learning to congenital heart disease.
|
2501.04503
|
Developing a Modular Compiler for a Subset of a C-like Language
|
cs.PL cs.CL cs.DC cs.PF
|
The paper introduces the development of a modular compiler for a subset of a
C-like language, which addresses the challenges in constructing a compiler for
high-level languages. This modular approach will allow developers to modify a
language by adding or removing subsets as required, resulting in a minimal and
memory-efficient compiler. The development process is divided into small,
incremental steps, where each step yields a fully functioning compiler for an
expanding subset of the language. The paper outlines the iterative
developmental phase of the compiler, emphasizing progressive enhancements in
capabilities and functionality. Adherence to industry best practices of modular
design, code reusability, and documentation has enabled the resulting
compiler's functional efficiency, maintainability, and extensibility. The
compiler proved to be effective not only in managing the language structure but
also in developing optimized code, which demonstrates its practical usability.
This was further assessed by running the compiler on a small, memory-constrained
single-board computer, again demonstrating the compiler's efficiency and
suitability for resource-constrained devices.
|
2501.04508
|
Linear Model of Aggregated Homogeneous Energy Storage Elements with
Realizable Dispatch Guarantees
|
eess.SY cs.SY
|
To optimize battery dispatch, a model is required that can predict the state
of charge (SOC) trajectory and ensure dispatch is admissible (i.e., does not
lead to unexpected SOC saturation). However, battery dispatch optimization is
inherently challenging since batteries cannot simultaneously charge and
discharge, which begets a non-convex complementarity constraint. In this paper,
we consider a composition of energy storage elements that can charge or
discharge independently and provide a sufficient linear energy storage model of
the composite battery. This permits convex optimization of the composite
battery SOC trajectory while ensuring admissibility of the resulting
(aggregated) power schedule and disaggregation to the individual energy storage
elements.
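As a toy illustration of the setup above, consider a composite of homogeneous
elements that each either charge or discharge, never both. The sketch below is a
hypothetical simplification, not the paper's model: the efficiencies, SOC bounds,
and equal-split disaggregation rule are assumptions chosen for illustration.

```python
def step_soc(soc, power, dt=1.0, e_max=1.0, eta_c=0.95, eta_d=0.95):
    """Advance one element's state of charge; power > 0 charges, < 0 discharges."""
    if power >= 0:
        soc = soc + eta_c * power * dt       # charging stores eta_c of drawn energy
    else:
        soc = soc + power * dt / eta_d       # discharging drains extra energy
    assert 0.0 <= soc <= e_max, "schedule not admissible: SOC saturated"
    return soc

def disaggregate(total_power, n_elements):
    """Naive equal split of an aggregate power command across identical elements."""
    return [total_power / n_elements] * n_elements

socs = [0.5, 0.5, 0.5]
for p_total in [0.3, -0.6, 0.3]:             # an aggregate charge/discharge schedule
    socs = [step_soc(s, p) for s, p in zip(socs, disaggregate(p_total, len(socs)))]
```

Here every element sees the same admissible per-element power, so the aggregate
schedule disaggregates trivially; the paper's linear model guarantees such
admissibility for general schedules.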
|
2501.04510
|
CGP-Tuning: Structure-Aware Soft Prompt Tuning for Code Vulnerability
Detection
|
cs.SE cs.AI
|
Large language models (LLMs) have been proposed as powerful tools for
detecting software vulnerabilities, where task-specific fine-tuning is
typically employed to provide vulnerability-specific knowledge to the LLMs for
this purpose. However, traditional full-parameter fine-tuning is inefficient
for modern, complex LLMs, which contain billions of parameters.
Soft prompt tuning has been suggested as a more efficient alternative for
fine-tuning LLMs in general cases. However, pure soft prompt tuning treats
source code as plain text, losing structural information inherent in source
code. Meanwhile, graph-enhanced soft prompt tuning methods, which aim to
address this issue, are unable to preserve the rich semantic information within
code graphs, as they are primarily designed for general graph-related tasks and
focus more on adjacency information. They also fail to ensure computational
efficiency while accounting for graph-text interactions.
This paper, therefore, introduces a new code graph-enhanced, structure-aware
soft prompt tuning method for vulnerability detection, referred to as
CGP-Tuning. It employs innovative type-aware embeddings to capture the rich
semantic information within code graphs, along with a novel and efficient
cross-modal alignment module that achieves linear computational cost while
incorporating graph-text interactions. The proposed CGP-Tuning is evaluated on
the latest DiverseVul dataset and the most recent open-source code LLMs,
CodeLlama and CodeGemma. Experimental results demonstrate that CGP-Tuning
outperforms the best state-of-the-art method by an average of 3.5 percentage
points in accuracy, without compromising its vulnerability detection
capabilities for long source code.
|
2501.04513
|
Improving Image Captioning by Mimicking Human Reformulation Feedback at
Inference-time
|
cs.CV cs.CL
|
Incorporating automatically predicted human feedback into the process of
training generative models has attracted substantial recent interest, while
feedback at inference time has received less attention. The typical feedback at
training time, i.e., preferences of choice given two samples, does not
naturally transfer to the inference phase. We introduce a novel type of
feedback -- caption reformulations -- and train models to mimic reformulation
feedback based on human annotations. Our method does not require training the
image captioning model itself, thereby demanding substantially less
computational effort. We experiment with two types of reformulation feedback:
first, we collect a dataset of human reformulations that correct errors in the
generated captions. We find that incorporating reformulation models trained on
this data into the inference phase of existing image captioning models results
in improved captions, especially when the original captions are of low quality.
We apply our method to non-English image captioning, a domain where robust
models are less prevalent, and gain substantial improvement. Second, we apply
reformulations to style transfer. Quantitative evaluations reveal
state-of-the-art performance on German image captioning and English style
transfer, while human validation with a detailed comparative framework exposes
the specific axes of improvement.
|
2501.04515
|
SplineFormer: An Explainable Transformer-Based Approach for Autonomous
Endovascular Navigation
|
eess.IV cs.CV cs.RO
|
Endovascular navigation is a crucial aspect of minimally invasive procedures,
where precise control of curvilinear instruments like guidewires is critical
for successful interventions. A key challenge in this task is accurately
predicting the evolving shape of the guidewire as it navigates through the
vasculature, which presents complex deformations due to interactions with the
vessel walls. Traditional segmentation methods often fail to provide accurate
real-time shape predictions, limiting their effectiveness in highly dynamic
environments. To address this, we propose SplineFormer, a new transformer-based
architecture, designed specifically to predict the continuous, smooth shape of
the guidewire in an explainable way. By leveraging the transformer's attention
mechanism, our network effectively captures the intricate bending and twisting of the
guidewire, representing it as a spline for greater accuracy and smoothness. We
integrate our SplineFormer into an end-to-end robot navigation system by
leveraging the condensed information. The experimental results demonstrate that
our SplineFormer is able to perform endovascular navigation autonomously and
achieves a 50% success rate when cannulating the brachiocephalic artery on the
real robot.
|
2501.04517
|
Histogram-Equalized Quantization for logic-gated Residual Neural
Networks
|
cs.LG cs.AR
|
Adjusting the quantization according to the data or to the model loss seems
mandatory to enable a high accuracy in the context of quantized neural
networks. This work presents Histogram-Equalized Quantization (HEQ), an
adaptive framework for linear symmetric quantization. HEQ automatically adapts
the quantization thresholds using a unique step size optimization. We
empirically show that HEQ achieves state-of-the-art performance on CIFAR-10.
Experiments on the STL-10 dataset even show that HEQ enables a proper training
of our proposed logic-gated (OR, MUX) residual networks with a higher accuracy
at a lower hardware complexity than previous work.
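For context, the sketch below shows the generic linear symmetric quantization
family that HEQ adapts; the brute-force step-size search over a candidate grid
is a simplified stand-in for HEQ's histogram-equalized step-size optimization,
and all values are illustrative assumptions.

```python
def quantize(x, step, n_bits=4):
    """Linear symmetric quantizer: round to the step grid, clip to the signed range."""
    q_max = 2 ** (n_bits - 1) - 1
    q = round(x / step)
    q = max(-q_max, min(q_max, q))
    return q * step

def pick_step(values, n_bits=4, candidates=(0.05, 0.1, 0.2, 0.4)):
    """Pick the step size minimizing mean squared quantization error on the data."""
    def mse(step):
        return sum((v - quantize(v, step, n_bits)) ** 2 for v in values) / len(values)
    return min(candidates, key=mse)

step = pick_step([0.11, -0.23, 0.08])  # data-adaptive threshold/step selection
```

Adapting `step` to the data distribution is exactly what makes such schemes
accurate at low bit widths.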
|
2501.04519
|
rStar-Math: Small LLMs Can Master Math Reasoning with Self-Evolved Deep
Thinking
|
cs.CL
|
We present rStar-Math to demonstrate that small language models (SLMs) can
rival or even surpass the math reasoning capability of OpenAI o1, without
distillation from superior models. rStar-Math achieves this by exercising "deep
thinking" through Monte Carlo Tree Search (MCTS), where a math policy SLM
performs test-time search guided by an SLM-based process reward model.
rStar-Math introduces three innovations to tackle the challenges in training
the two SLMs: (1) a novel code-augmented CoT data synthesis method, which
performs extensive MCTS rollouts to generate step-by-step verified reasoning
trajectories used to train the policy SLM; (2) a novel process reward model
training method that avoids na\"ive step-level score annotation, yielding a
more effective process preference model (PPM); (3) a self-evolution recipe in
which the policy SLM and PPM are built from scratch and iteratively evolved to
improve reasoning capabilities. Through 4 rounds of self-evolution with
millions of synthesized solutions for 747k math problems, rStar-Math boosts
SLMs' math reasoning to state-of-the-art levels. On the MATH benchmark, it
improves Qwen2.5-Math-7B from 58.8% to 90.0% and Phi3-mini-3.8B from 41.4% to
86.4%, surpassing o1-preview by +4.5% and +0.9%. On the USA Math Olympiad
(AIME), rStar-Math solves an average of 53.3% (8/15) of problems, ranking among
the top 20% of the brightest high school math students. Code and data will be
available at https://github.com/microsoft/rStar.
|
2501.04527
|
Towards Fair Class-wise Robustness: Class Optimal Distribution
Adversarial Training
|
cs.LG cs.CV
|
Adversarial training has proven to be a highly effective method for improving
the robustness of deep neural networks against adversarial attacks.
Nonetheless, it has been observed to exhibit a limitation in terms of robust
fairness, characterized by a significant disparity in robustness across
different classes. Recent efforts to mitigate this problem have turned to
class-wise reweighted methods. However, these methods suffer from a lack of
rigorous theoretical analysis and are limited in their exploration of the
weight space, as they mainly rely on existing heuristic algorithms or intuition
to compute weights. In addition, these methods fail to guarantee the
consistency of the optimization direction due to the decoupled optimization of
weights and the model parameters, potentially leading to suboptimal weight
assignments and, consequently, a suboptimal model. To address these problems,
this paper proposes a novel min-max training framework, Class Optimal
Distribution Adversarial Training (CODAT), which employs distributionally
robust optimization to fully explore the class-wise weight space, thus enabling
the identification of the optimal weight with theoretical guarantees.
Furthermore, we derive a closed-form optimal solution to the internal
maximization and then obtain a deterministic equivalent objective function, which
provides a theoretical basis for the joint optimization of weights and model
parameters. Meanwhile, we propose a fairness elasticity coefficient for the
evaluation of the algorithm with regard to both robustness and robust fairness.
Experimental results on various datasets show that the proposed method can
effectively improve the robust fairness of the model and outperform the
state-of-the-art approaches.
|
2501.04528
|
Towards a Problem-Oriented Domain Adaptation Framework for Machine
Learning
|
cs.LG cs.AI
|
Domain adaptation is a sub-field of machine learning that involves
transferring knowledge from a source domain to perform the same task in the
target domain. It is a typical challenge in machine learning that arises, e.g.,
when data is obtained from various sources or when using a data basis that
changes over time. Recent advances in the field offer promising methods, but it
is still challenging for researchers and practitioners to determine if domain
adaptation is suitable for a given problem -- and, subsequently, to select the
appropriate approach. This article employs design science research to develop a
problem-oriented framework for domain adaptation, which is matured in three
evaluation episodes. We describe a framework that distinguishes between five
domain adaptation scenarios, provides recommendations for addressing each
scenario, and offers guidelines for determining if a problem falls into one of
these scenarios. During the multiple evaluation episodes, the framework is
tested on artificial and real-world datasets and an experimental study
involving 100 participants. The evaluation demonstrates that the framework has
the explanatory power to capture any domain adaptation problem effectively. In
summary, we provide clear guidance for researchers and practitioners who want
to employ domain adaptation but lack in-depth knowledge of the possibilities.
|
2501.04529
|
A Plug-and-Play Bregman ADMM Module for Inferring Event Branches in
Temporal Point Processes
|
cs.LG
|
An event sequence generated by a temporal point process is often associated
with a hidden and structured event branching process that captures the
triggering relations between its historical and current events. In this study,
we design a new plug-and-play module based on the Bregman ADMM (BADMM)
algorithm, which infers event branches associated with event sequences in the
maximum likelihood estimation framework of temporal point processes (TPPs).
Specifically, we formulate the inference of event branches as an optimization
problem for the event transition matrix under sparse and low-rank constraints,
which is embedded in existing TPP models or their learning paradigms. We can
implement this optimization problem based on subspace clustering and sparse
group-lasso, respectively, and solve it using the Bregman ADMM algorithm, whose
unrolling leads to the proposed BADMM module. When learning a classic TPP
(e.g., Hawkes process) by the expectation-maximization algorithm, the BADMM
module helps derive structured responsibility matrices in the E-step.
Similarly, the BADMM module helps derive low-rank and sparse attention maps for
the neural TPPs with self-attention layers. The structured responsibility
matrices and attention maps, which work as learned event transition matrices,
indicate event branches, e.g., inferring isolated events and those key events
triggering many subsequent events. Experiments on both synthetic and real-world
data show that plugging our BADMM module into existing TPP models and learning
paradigms can improve model performance and provide us with interpretable
structured event branches. The code is available at
\url{https://github.com/qingmeiwangdaily/BADMM_TPP}.
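ADMM-type updates for sparse constraints typically rely on proximal operators;
below is the standard soft-thresholding operator for an l1 penalty, shown as a
minimal, generic building block (the paper's BADMM module additionally handles
low-rank and sparse group-lasso structure, which is not sketched here).

```python
def soft_threshold(x, lam):
    """Proximal operator of lam * |x|: shrink toward zero, zeroing small entries."""
    if x > lam:
        return x - lam
    if x < -lam:
        return x + lam
    return 0.0

# Applied entrywise to a row of an event transition matrix, small (noisy)
# triggering weights are set exactly to zero, exposing the sparse structure.
row = [0.05, -0.8, 0.3, -0.02]
sparse_row = [soft_threshold(v, 0.1) for v in row]
```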
|
2501.04534
|
Combining YOLO and Visual Rhythm for Vehicle Counting
|
cs.CV cs.LG
|
Video-based vehicle detection and counting play a critical role in managing
transport infrastructure. Traditional image-based counting methods usually
involve two main steps: initial detection and subsequent tracking, which are
applied to all video frames, leading to a significant increase in computational
complexity. To address this issue, this work presents an alternative and more
efficient method for vehicle detection and counting. The proposed approach
eliminates the need for a tracking step and focuses solely on detecting
vehicles in key video frames, thereby increasing its efficiency. To achieve
this, we developed a system that combines YOLO, for vehicle detection, with
Visual Rhythm, a way to create time-spatial images that allows us to focus on
frames that contain useful information. Additionally, this method can be used
for counting in any application involving unidirectional moving targets to be
detected and identified. Experimental analysis using real videos shows that the
proposed method achieves a mean counting accuracy of around 99.15% over a set of
videos, with a processing speed three times faster than tracking-based
approaches.
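A minimal sketch of the idea, with the detector stubbed out: one pixel row per
frame (a virtual counting line) is stacked into a time-spatial image, active
frames are selected as key frames, and runs of consecutive active frames are
merged into one counted crossing. The function names and toy video below are
hypothetical; the real system runs YOLO on the selected key frames.

```python
def visual_rhythm(frames, line_row):
    """Stack the counting-line row of every frame into a time x space image."""
    return [frame[line_row] for frame in frames]

def key_frames(rhythm, threshold=0):
    """Indices of frames whose counting line shows any foreground activity."""
    return [t for t, row in enumerate(rhythm) if max(row) > threshold]

def count_vehicles(key_ts, gap=1):
    """Merge runs of consecutive active frames into one counted crossing each."""
    count, last = 0, None
    for t in key_ts:
        if last is None or t - last > gap:
            count += 1
        last = t
    return count

# Toy "video": six 1-row frames of a 5-pixel counting line; two crossings.
frames = [
    [[0, 0, 0, 0, 0]], [[0, 1, 1, 0, 0]], [[0, 1, 1, 0, 0]],
    [[0, 0, 0, 0, 0]], [[0, 0, 0, 1, 1]], [[0, 0, 0, 0, 0]],
]
events = count_vehicles(key_frames(visual_rhythm(frames, 0)))
```

Only the key frames would be passed to a detector, which is what removes the
per-frame tracking cost.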
|
2501.04538
|
HypeRL: Parameter-Informed Reinforcement Learning for Parametric PDEs
|
cs.LG
|
In this work, we devise a new, general-purpose reinforcement learning
strategy for the optimal control of parametric partial differential equations
(PDEs). Such problems frequently arise in applied sciences and engineering and
entail a significant complexity when control and/or state variables are
distributed in high-dimensional space or depend on varying parameters.
Traditional numerical methods, relying on either iterative minimization
algorithms or dynamic programming, while reliable, often become computationally
infeasible. Indeed, in either way, the optimal control problem must be solved
for each instance of the parameters, and this is out of reach when dealing with
high-dimensional time-dependent and parametric PDEs. In this paper, we propose
HypeRL, a deep reinforcement learning (DRL) framework to overcome the
limitations shown by traditional methods. HypeRL aims at approximating the
optimal control policy directly. Specifically, we employ an actor-critic DRL
approach to learn an optimal feedback control strategy that can generalize
across the range of variation of the parameters. To effectively learn such
optimal control laws, encoding the parameter information into the DRL policy
and value function neural networks (NNs) is essential. To do so, HypeRL uses
two additional NNs, often called hypernetworks, to learn the weights and biases
of the value function and the policy NNs. We validate the proposed approach on
two PDE-constrained optimal control benchmarks, namely the 1D
Kuramoto-Sivashinsky equation and the 2D Navier-Stokes equations, by showing that
the knowledge of the PDE parameters and how this information is encoded, i.e.,
via a hypernetwork, is an essential ingredient for learning parameter-dependent
control policies that can generalize effectively to unseen scenarios and for
improving the sample efficiency of such policies.
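The hypernetwork idea can be illustrated with a toy linear version: a small map
from the PDE parameter mu to the weights of a feedback policy, so the policy's
gains depend on the parameter. The coefficients and linear forms below are
assumptions for illustration, not the paper's networks.

```python
def hypernetwork(mu, coeffs):
    """Toy linear hypernetwork: map the PDE parameter mu to policy weights."""
    return [c0 + c1 * mu for (c0, c1) in coeffs]

def policy(state, weights):
    """Toy linear feedback policy: u = -sum(w_i * s_i)."""
    return -sum(w * s for w, s in zip(weights, state))

coeffs = [(1.0, 0.5), (0.0, 1.0)]   # hypothetical hypernetwork coefficients
u = policy([2.0, -1.0], hypernetwork(0.4, coeffs))
```

In HypeRL the same pattern is applied with neural networks generating the
weights and biases of the actor and critic, which is how parameter information
enters the control law.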
|
2501.04541
|
Cyber-Physical Steganography in Robotic Motion Control
|
cs.RO cs.AI cs.CR
|
Steganography, the art of information hiding, has continually evolved across
visual, auditory and linguistic domains, adapting to the ceaseless interplay
between steganographic concealment and steganalytic revelation. This study
seeks to extend the horizons of what constitutes a viable steganographic medium
by introducing a steganographic paradigm in robotic motion control. Based on
the observation of the robot's inherent sensitivity to changes in its
environment, we propose a methodology to encode messages as environmental
stimuli influencing the motions of the robotic agent and to decode messages
from the resulting motion trajectory. The constraints of maximal robot
integrity and minimal motion deviation are established as fundamental
principles underlying secrecy. As a proof of concept, we conduct experiments in
simulated environments across various manipulation tasks, incorporating robotic
embodiments equipped with generalist multimodal policies.
|
2501.04547
|
Medical artificial intelligence toolbox (MAIT): an explainable machine
learning framework for binary classification, survival modelling, and
regression analyses
|
cs.LG
|
While machine learning offers diverse techniques suitable for exploring
various medical research questions, a cohesive synergistic framework can
facilitate the integration and understanding of new approaches within unified
model development and interpretation. We therefore introduce the Medical
Artificial Intelligence Toolbox (MAIT), an explainable, open-source Python
pipeline for developing and evaluating binary classification, regression, and
survival models on tabular datasets. MAIT addresses key challenges (e.g., high
dimensionality, class imbalance, mixed variable types, and missingness) while
promoting transparency in reporting (TRIPOD+AI compliant). Offering automated
configurations for beginners and customizable source code for experts, MAIT
streamlines two primary use cases: Discovery (feature importance via unified
scoring, e.g., SHapley Additive exPlanations - SHAP) and Prediction (model
development and deployment with optimized solutions). Moreover, MAIT proposes
new techniques including fine-tuning of probability threshold in binary
classification, translation of cumulative hazard curves to binary
classification, enhanced visualizations for model interpretation for mixed data
types, and handling censoring through semi-supervised learning, to adapt to a
wide set of data constraints and study designs. We provide detailed tutorials
on GitHub, using four open-access data sets, to demonstrate how MAIT can be
used to improve implementation and interpretation of ML models in medical
research.
|
2501.04561
|
OpenOmni: Large Language Models Pivot Zero-shot Omnimodal Alignment
across Language with Real-time Self-Aware Emotional Speech Synthesis
|
cs.CL cs.CV
|
Recent advancements in omnimodal learning have been achieved in understanding
and generation across images, text, and speech, though mainly within
proprietary models. Limited omnimodal datasets and the inherent challenges
associated with real-time emotional speech generation have hindered open-source
progress. To address these issues, we propose openomni, a two-stage training
method combining omnimodal alignment and speech generation to develop a
state-of-the-art omnimodal large language model. In the alignment phase, a
pre-trained speech model is further trained on text-image tasks to generalize
from vision to speech in a (near) zero-shot manner, outperforming models
trained on tri-modal datasets. In the speech generation phase, a lightweight
decoder facilitates real-time emotional speech through training on speech tasks
and preference learning. Experiments demonstrate that openomni consistently
improves across omnimodal, vision-language, and speech-language evaluations,
enabling natural, emotion-rich dialogues and real-time emotional speech
generation.
|
2501.04565
|
Learnable Scaled Gradient Descent for Guaranteed Robust Tensor PCA
|
cs.CV
|
Robust tensor principal component analysis (RTPCA) aims to separate the
low-rank and sparse components from multi-dimensional data, making it an
essential technique in the signal processing and computer vision fields.
Recently emerging tensor singular value decomposition (t-SVD) has gained
considerable attention for its ability to better capture the low-rank structure
of tensors compared to traditional matrix SVD. However, existing methods often
rely on the computationally expensive tensor nuclear norm (TNN), which limits
their scalability for real-world tensors. To address this issue, we explore an
efficient scaled gradient descent (SGD) approach within the t-SVD framework for
the first time, and propose the RTPCA-SGD method. Theoretically, we rigorously
establish the recovery guarantees of RTPCA-SGD under mild assumptions,
demonstrating that with appropriate parameter selection, it achieves linear
convergence to the true low-rank tensor at a constant rate, independent of the
condition number. To enhance its practical applicability, we further propose a
learnable self-supervised deep unfolding model, which enables effective
parameter learning. Numerical experiments on both synthetic and real-world
datasets demonstrate the superior performance of the proposed methods while
maintaining competitive computational efficiency, especially consuming less
time than RTPCA-TNN.
|
2501.04566
|
Recursive Least Squares with Fading Regularization for Finite-Time
Convergence without Persistent Excitation
|
eess.SP cs.SY eess.SY
|
This paper extends recursive least squares (RLS) to include time-varying
regularization. This extension provides flexibility for updating the least
squares regularization term in real time. Existing results with constant
regularization imply that the parameter-estimation error dynamics of RLS are
globally attractive to zero if and only if the regressor is weakly persistently
exciting. This work shows that, by extending classical RLS to include a
time-varying (fading) regularization term that converges to zero, the
parameter-estimation error dynamics are globally attractive to zero without
weakly persistent excitation. Moreover, if the fading regularization term
converges to zero in finite time, then the parameter estimation error also
converges to zero in finite time. Finally, we propose rank-1 fading
regularization (R1FR) RLS, a time-varying regularization algorithm with fading
regularization that converges to zero, and which runs in the same computational
complexity as classical RLS. Numerical examples are presented to validate
theoretical guarantees and to show how R1FR-RLS can protect against
over-regularization.
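A scalar batch sketch of the fading-regularization idea (illustrative only; the
paper's algorithm is recursive and matrix-valued): with a regularization weight
r_t that decays to zero, the estimate converges to the true parameter even when
the regressor excites only once, i.e., without persistent excitation.

```python
def fading_reg_estimate(phis, ys, r0=1.0, decay=0.5):
    """Regularized least squares with a fading weight r_t = r0 * decay**t."""
    s_py, s_pp = 0.0, 0.0
    history = []
    for t, (phi, y) in enumerate(zip(phis, ys)):
        s_py += phi * y
        s_pp += phi * phi
        r_t = r0 * decay ** t                # fading regularization, -> 0 over time
        history.append(s_py / (s_pp + r_t))  # regularized least-squares estimate
    return history

# Regressor excites only at the first step, so it is not persistently exciting,
# yet the estimate still converges to the true parameter as r_t fades to zero.
true_theta = 2.0
phis = [1.0] + [0.0] * 20
ys = [p * true_theta for p in phis]
est = fading_reg_estimate(phis, ys)
```

With a constant regularization r_t = r0, the estimate would stay biased at
2/(1 + r0) forever; fading the weight removes that bias.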
|
2501.04568
|
Supervision-free Vision-Language Alignment
|
cs.CV cs.AI cs.CL cs.LG
|
Vision-language models (VLMs) have demonstrated remarkable potential in
integrating visual and linguistic information, but their performance is often
constrained by the need for extensive, high-quality image-text training data.
Curation of these image-text pairs is both time-consuming and computationally
expensive. To address this challenge, we introduce SVP (Supervision-free Visual
Projection), a novel framework that enhances vision-language alignment without
relying on curated data or preference annotation. SVP leverages self-captioning
and a pre-trained grounding model as a feedback mechanism to elicit latent
information in VLMs. We evaluate our approach across six key areas: captioning,
referring, visual question answering, multitasking, hallucination control, and
object recall. Results demonstrate significant improvements, including a 14%
average improvement in captioning tasks, up to 12% increase in object recall,
and substantial reduction in hallucination rates. Notably, a small VLM using
SVP achieves hallucination reductions comparable to a model five times larger,
while a VLM with initially poor referring capabilities more than doubles its
performance, approaching parity with a model twice its size.
|
2501.04570
|
Large-Scale Spectral Graph Neural Networks via Laplacian Sparsification:
Technical Report
|
cs.LG
|
Graph Neural Networks (GNNs) play a pivotal role in graph-based tasks for
their proficiency in representation learning. Among the various GNN methods,
spectral GNNs employing polynomial filters have shown promising performance on
tasks involving both homophilous and heterophilous graph structures. However,
the scalability of spectral GNNs on large graphs is limited because learning the
polynomial coefficients requires multiple propagation executions during the
forward pass. Existing works have attempted to scale up spectral
GNNs by eliminating the linear layers on the input node features, a change that
GNNs by eliminating the linear layers on the input node features, a change that
can disrupt end-to-end training, potentially impact performance, and become
impractical with high-dimensional input features. To address the above
challenges, we propose "Spectral Graph Neural Networks with Laplacian
Sparsification (SGNN-LS)", a novel graph spectral sparsification method to
approximate the propagation patterns of spectral GNNs. We prove that our
proposed method generates Laplacian sparsifiers that can approximate both fixed
and learnable polynomial filters with theoretical guarantees. Our method allows
the application of linear layers on the input node features, enabling
end-to-end training as well as the handling of raw text features. We conduct an
extensive experimental analysis on datasets spanning various graph scales and
properties to demonstrate the superior efficiency and effectiveness of our
method. The results show that our method yields superior results in comparison
with the corresponding approximated base models, especially on the datasets
Ogbn-papers100M (111M nodes, 1.6B edges) and MAG-scholar-C (2.8M features).
|
2501.04572
|
Regret Analysis: a control perspective
|
eess.SY cs.LG cs.SY math.OC
|
Online learning and model reference adaptive control have many interesting
intersections. One area where they differ however is in how the algorithms are
analyzed and what objective or metric is used to discriminate "good" algorithms
from "bad" algorithms. In adaptive control there are usually two objectives: 1)
prove that all time varying parameters/states of the system are bounded, and 2)
that the instantaneous error between the adaptively controlled system and a
reference system converges to zero over time (or at least a compact set). For
online learning the performance of algorithms is often characterized by the
regret the algorithm incurs. Regret is defined as the cumulative loss (cost)
over time from the online algorithm minus the cumulative loss (cost) of the
single optimal fixed parameter choice in hindsight. Another significant
difference between the two areas of research is with regard to the assumptions
made in order to obtain said results. Adaptive control makes assumptions about
the input-output properties of the control problem and derives solutions for a
fixed error model or optimization task. In the online learning literature
results are derived for classes of loss functions (i.e. convex) while a priori
assuming certain signals are bounded. In this work we discuss these differences
in detail through the regret based analysis of gradient descent for convex
functions and the control based analysis of a streaming regression problem. We
close with a discussion about the newly defined paradigm of online adaptive
control.
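The regret metric described above can be made concrete with a small example:
projected online gradient descent on squared losses, where regret is the
cumulative online loss minus the loss of the best fixed parameter chosen in
hindsight. The step-size schedule and projection radius below are illustrative
choices, not those of any specific result.

```python
def regret_of_ogd(targets, eta=0.5, radius=10.0):
    """Regret of projected online gradient descent on losses (x - y_t)**2."""
    x = 0.0
    online_loss = 0.0
    for t, y in enumerate(targets, start=1):
        online_loss += (x - y) ** 2            # suffer the loss, then see the gradient
        x -= (eta / t) * 2.0 * (x - y)         # decaying-step gradient update
        x = max(-radius, min(radius, x))       # projection keeps the iterate bounded
    best_fixed = sum(targets) / len(targets)   # hindsight-optimal fixed parameter
    hindsight_loss = sum((best_fixed - y) ** 2 for y in targets)
    return online_loss - hindsight_loss
```

The projection step is one place where the two analysis styles meet: online
learning assumes bounded iterates a priori, while adaptive control must prove
boundedness.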
|
2501.04575
|
InfiGUIAgent: A Multimodal Generalist GUI Agent with Native Reasoning
and Reflection
|
cs.AI cs.CL cs.HC
|
Graphical User Interface (GUI) Agents, powered by multimodal large language
models (MLLMs), have shown great potential for task automation on computing
devices such as computers and mobile phones. However, existing agents face
challenges in multi-step reasoning and reliance on textual annotations,
limiting their effectiveness. We introduce \textit{InfiGUIAgent}, an MLLM-based
GUI Agent trained with a two-stage supervised fine-tuning pipeline. Stage 1
enhances fundamental skills such as GUI understanding and grounding, while
Stage 2 integrates hierarchical reasoning and expectation-reflection reasoning
skills using synthesized data to enable native reasoning abilities of the
agents. \textit{InfiGUIAgent} achieves competitive performance on several GUI
benchmarks, highlighting the impact of native reasoning skills in enhancing GUI
interaction for automation tasks. Resources are available at
\url{https://github.com/Reallm-Labs/InfiGUIAgent}.
|
2501.04577
|
A 65 nm Bayesian Neural Network Accelerator with 360 fJ/Sample In-Word
GRNG for AI Uncertainty Estimation
|
cs.AR cs.AI cs.LG cs.RO
|
Uncertainty estimation is an indispensable capability for AI-enabled,
safety-critical applications, e.g. autonomous vehicles or medical diagnosis.
Bayesian neural networks (BNNs) use Bayesian statistics to provide both
classification predictions and uncertainty estimation, but they suffer from
high computational overhead associated with random number generation and
repeated sample iterations. Furthermore, BNNs are not immediately amenable to
acceleration through compute-in-memory architectures due to the frequent memory
writes necessary after each RNG operation. To address these challenges, we
present an ASIC that integrates 360 fJ/Sample Gaussian RNG directly into the
SRAM memory words. This integration reduces RNG overhead and enables
fully-parallel compute-in-memory operations for BNNs. The prototype chip
achieves 5.12 GSa/s RNG throughput and 102 GOp/s neural network throughput
while occupying 0.45 mm2, bringing AI uncertainty estimation to edge
computation.
|
2501.04578
|
Analysis of Climatic Trends and Variability in Indian Topography
|
cs.SI
|
Climate change is one of the most serious concerns of our time. The impacts of
climate change are global in scope and unprecedented in scale. Moreover, a
small perturbation in climatic changes affects not only the pristine ecosystem
but also socioeconomic sectors. Specifically, the effect of climatic
change is related to frequent casualties. This makes it essential to delve
deeper into analyzing socio-climatic trends and variability. This work
provides a comprehensive analysis of India's climatic trends, emphasizing
regional variations and specifically delving into the unique climate of Delhi.
Specifically, this research unveils the temporal and spatial variations in
temperature patterns by amalgamating extensive datasets encompassing India's
diverse landscapes. The study uses advanced statistical tools and methodologies
to scrutinize temperature's annual and seasonal variability. The insights drawn
from this rigorous analysis may offer invaluable contributions to regional
planning strategies, adaptive measures, and informed decision-making amidst the
complex impacts of climate change. By bridging the gap between broader climatic
trends and localized impacts, this research aims to facilitate more effective
measures to mitigate and adapt to the multifaceted challenges of climate
change, ensuring a more nuanced and tailored approach. We utilized the
Mann-Kendall test and Theil-Sen's slope estimator to analyze the trends and
variability of the climatic conditions over the decades. The results
demonstrate that temperature variations have increased by over 0.58°C on
average over the last decade. Moreover, over the last decade the variability
across Indian states shows that Lakshadweep faced the highest change (0.87°C),
highlighting coastal vulnerability, while Tripura observed the least change of
0.07°C.
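The Mann-Kendall test and Theil-Sen slope estimator used in this analysis are standard non-parametric trend tools; a minimal self-contained sketch (illustrative only, with a hypothetical temperature series, not the paper's data or code):

```python
# Mann-Kendall trend statistic and Theil-Sen slope for a yearly series.
from itertools import combinations
from statistics import median

def mann_kendall_s(series):
    """S > 0 suggests an increasing trend, S < 0 a decreasing one."""
    sign = lambda d: (d > 0) - (d < 0)
    return sum(sign(xj - xi) for xi, xj in combinations(series, 2))

def theil_sen_slope(series):
    """Median of all pairwise slopes; robust to outliers."""
    slopes = [(xj - xi) / (j - i)
              for (i, xi), (j, xj) in combinations(enumerate(series), 2)]
    return median(slopes)

temps = [24.1, 24.3, 24.2, 24.6, 24.8, 24.7, 25.0]  # hypothetical annual means
print(mann_kendall_s(temps), theil_sen_slope(temps))
```

In practice one would also compute the variance of S to obtain a significance level; library implementations (e.g. in SciPy) handle ties and p-values.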
|
2501.04579
|
Unified Coding for Both Human Perception and Generalized Machine
Analytics with CLIP Supervision
|
cs.CV cs.MM
|
The image compression model has long struggled with adaptability and
generalization, as the decoded bitstream typically serves only human or machine
needs and fails to preserve information for unseen visual tasks. Therefore,
this paper innovatively introduces supervision obtained from multimodal
pre-training models and incorporates adaptive multi-objective optimization
tailored to support both human visual perception and machine vision
simultaneously with a single bitstream, denoted as Unified and Generalized
Image Coding for Machine (UG-ICM). Specifically, to remove the reliance of
compression models on downstream task supervision, we introduce
Contrastive Language-Image Pre-training (CLIP) models into the training
constraint for improved generalization. Global-to-instance-wise CLIP
supervision is applied to help obtain hierarchical semantics that make models
more generalizable for tasks relying on information of different granularity.
Furthermore, to support both human and machine vision with only a unified
bitstream, we incorporate a conditional decoding strategy that
takes as conditions human or machine preferences, enabling the bitstream to be
decoded into different versions for corresponding preferences. As such, our
proposed UG-ICM is fully trained in a self-supervised manner, i.e., without
awareness of any specific downstream models and tasks. The extensive
experiments have shown that the proposed UG-ICM is capable of achieving
remarkable improvements in various unseen machine analytics tasks, while
simultaneously providing perceptually satisfying images.
|
2501.04582
|
Boosting Salient Object Detection with Knowledge Distillated from Large
Foundation Models
|
cs.CV
|
Salient Object Detection (SOD) aims to identify and segment prominent regions
within a scene. Traditional models rely on manually annotated pseudo labels
with precise pixel-level accuracy, which is time-consuming. We developed a
low-cost, high-precision annotation method by leveraging large foundation
models to address the challenges. Specifically, we use a weakly supervised
approach to guide large models in generating pseudo-labels through textual
prompts. Since large models do not effectively focus on the salient regions of
images, we manually annotate a subset of text to fine-tune the model. Based on
this approach, which enables precise and rapid generation of pseudo-labels, we
introduce a new dataset, BDS-TR. Compared to the previous DUTS-TR dataset,
BDS-TR is more prominent in scale and encompasses a wider variety of categories
and scenes. This expansion will enhance our model's applicability across a
broader range of scenarios and provide a more comprehensive foundational
dataset for future SOD research. Additionally, we present an edge decoder based
on dynamic upsampling, which focuses on object edges while gradually recovering
image feature resolution. Comprehensive experiments on five benchmark datasets
demonstrate that our method significantly outperforms state-of-the-art
approaches and also surpasses several existing fully-supervised SOD methods.
The code and results will be made available.
|
2501.04584
|
A Direct-adjoint Approach for Material Point Model Calibration with
Application to Plasticity
|
cs.CE
|
This paper proposes a new approach for the calibration of material parameters
in elastoplastic constitutive models. The calibration is posed as a constrained
optimization problem, where the constitutive evolution equations serve as
constraints. The objective function quantifies the mismatch between the stress
predicted by the model and corresponding experimental measurements. To improve
calibration efficiency, a novel direct-adjoint approach is presented to compute
the Hessian of the objective function, which enables the use of second-order
optimization algorithms. Automatic differentiation (AD) is used for gradient
and Hessian computations. Two numerical examples are employed to validate the
Hessian matrices and to demonstrate that the Newton-Raphson algorithm
consistently outperforms gradient-based algorithms such as L-BFGS-B.
|
2501.04586
|
Identity-Preserving Video Dubbing Using Motion Warping
|
cs.CV
|
Video dubbing aims to synthesize realistic, lip-synced videos from a
reference video and a driving audio signal. Although existing methods can
accurately generate mouth shapes driven by audio, they often fail to preserve
identity-specific features, largely because they do not effectively capture the
nuanced interplay between audio cues and the visual attributes of the
reference identity. As a result, the generated outputs frequently lack fidelity in
reproducing the unique textural and structural details of the reference
identity. To address these limitations, we propose IPTalker, a novel and robust
framework for video dubbing that achieves seamless alignment between driving
audio and reference identity while ensuring both lip-sync accuracy and
high-fidelity identity preservation. At the core of IPTalker is a
transformer-based alignment mechanism designed to dynamically capture and model
the correspondence between audio features and reference images, thereby
enabling precise, identity-aware audio-visual integration. Building on this
alignment, a motion warping strategy further refines the results by spatially
deforming reference images to match the target audio-driven configuration. A
dedicated refinement process then mitigates occlusion artifacts and enhances
the preservation of fine-grained textures, such as mouth details and skin
features. Extensive qualitative and quantitative evaluations demonstrate that
IPTalker consistently outperforms existing approaches in terms of realism, lip
synchronization, and identity retention, establishing a new state of the art
for high-quality, identity-consistent video dubbing.
|
2501.04588
|
Federated-Continual Dynamic Segmentation of Histopathology guided by
Barlow Continuity
|
cs.LG cs.AI
|
Federated- and Continual Learning have been established as approaches to
enable privacy-aware learning on continuously changing data, as required for
deploying AI systems in histopathology images. However, data shifts can occur
in a dynamic world, spatially between institutions and temporally, due to
changing data over time. This leads to two issues: Client Drift, where the
central model degrades from aggregating data from clients trained on shifted
data, and Catastrophic Forgetting, from temporal shifts such as changes in
patient populations. Both tend to degrade the model's performance of previously
seen data or spatially distributed training. Despite both problems arising from
the same underlying problem of data shifts, existing research addresses them
only individually. In this work, we introduce a method that can jointly
alleviate Client Drift and Catastrophic Forgetting by using our proposed
Dynamic Barlow Continuity that evaluates client updates on a public reference
dataset and uses this to guide the training process to a spatially and
temporally shift-invariant model. We evaluate our approach on the
histopathology datasets BCSS and Semicol and prove our method to be highly
effective by jointly improving the dice score as much as from 15.8% to 71.6% in
Client Drift and from 42.5% to 62.8% in Catastrophic Forgetting. This enables
Dynamic Learning by establishing spatio-temporal shift-invariance.
|
2501.04591
|
Quantum-inspired Embeddings Projection and Similarity Metrics for
Representation Learning
|
cs.CL cond-mat.dis-nn quant-ph
|
Over the last decade, representation learning, which embeds complex
information extracted from large amounts of data into dense vector spaces, has
emerged as a key technique in machine learning. Among other applications, it
has been a key building block for large language models and advanced computer
vision systems based on contrastive learning. A core component of
representation learning systems is the projection head, which maps the original
embeddings into different, often compressed spaces, while preserving the
similarity relationship between vectors.
In this paper, we propose a quantum-inspired projection head that includes a
corresponding quantum-inspired similarity metric. Specifically, we map
classical embeddings onto quantum states in Hilbert space and introduce a
quantum circuit-based projection head to reduce embedding dimensionality. To
evaluate the effectiveness of this approach, we extended the BERT language
model by integrating our projection head for embedding compression. We compared
the performance of embeddings, which were compressed using our quantum-inspired
projection head, with those compressed using a classical projection head on
information retrieval tasks using the TREC 2019 and TREC 2020 Deep Learning
benchmarks. The results demonstrate that our quantum-inspired method achieves
competitive performance relative to the classical method while utilizing 32
times fewer parameters. Furthermore, when trained from scratch, it notably
excels, particularly on smaller datasets. This work not only highlights the
effectiveness of the quantum-inspired approach but also emphasizes the utility
of efficient, ad hoc low-entanglement circuit simulations within neural
networks as a powerful quantum-inspired technique.
|
2501.04594
|
Understanding Expectations for a Robotic Guide Dog for Visually Impaired
People
|
cs.RO
|
Robotic guide dogs hold significant potential to enhance the autonomy and
mobility of blind or visually impaired (BVI) individuals by offering universal
assistance over unstructured terrains at affordable costs. However, the design
of robotic guide dogs remains underexplored, particularly in systematic aspects
such as gait controllers, navigation behaviors, interaction methods, and verbal
explanations. Our study addresses this gap by conducting user studies with 18
BVI participants, comprising 15 cane users and three guide dog users.
Participants interacted with a quadrupedal robot and provided both quantitative
and qualitative feedback. Our study revealed several design implications, such
as a preference for a learning-based controller and a rigid handle, gradual
turns with asymmetric speeds, semantic communication methods, and
explainability. The study also highlighted the importance of customization to
support users with diverse backgrounds and preferences, along with practical
concerns such as battery life, maintenance, and weather issues. These findings
offer valuable insights and design implications for future research and
development of robotic guide dogs.
|
2501.04595
|
MobileH2R: Learning Generalizable Human to Mobile Robot Handover
Exclusively from Scalable and Diverse Synthetic Data
|
cs.RO
|
This paper introduces MobileH2R, a framework for learning generalizable
vision-based human-to-mobile-robot (H2MR) handover skills. Unlike traditional
fixed-base handovers, this task requires a mobile robot to reliably receive
objects in a large workspace enabled by its mobility. Our key insight is that
generalizable handover skills can be developed in simulators using high-quality
synthetic data, without the need for real-world demonstrations. To achieve
this, we propose a scalable pipeline for generating diverse synthetic full-body
human motion data, an automated method for creating safe and imitation-friendly
demonstrations, and an efficient 4D imitation learning method for distilling
large-scale demonstrations into closed-loop policies with base-arm
coordination. Experimental evaluations in both simulators and the real world
show significant improvements (at least +15% success rate) over baseline
methods in all cases. Experiments also validate that large-scale and diverse
synthetic data greatly enhances robot learning, highlighting our scalable
framework.
|
2501.04597
|
FrontierNet: Learning Visual Cues to Explore
|
cs.RO cs.CV
|
Exploration of unknown environments is crucial for autonomous robots; it
allows them to actively reason and decide on what new data to acquire for tasks
such as mapping, object discovery, and environmental assessment. Existing
methods, such as frontier-based methods, rely heavily on 3D map operations,
which are limited by map quality and often overlook valuable context from
visual cues. This work aims at leveraging 2D visual cues for efficient
autonomous exploration, addressing the limitations of extracting goal poses
from a 3D map. We propose an image-only frontier-based exploration system, with
FrontierNet as a core component developed in this work. FrontierNet is a
learning-based model that (i) detects frontiers, and (ii) predicts their
information gain, from posed RGB images enhanced by monocular depth priors. Our
approach provides an alternative to existing 3D-dependent exploration systems,
achieving a 16% improvement in early-stage exploration efficiency, as validated
through extensive simulations and real-world experiments.
|
2501.04606
|
Enhancing Low-Cost Video Editing with Lightweight Adaptors and
Temporal-Aware Inversion
|
cs.CV
|
Recent advancements in text-to-image (T2I) generation using diffusion models
have enabled cost-effective video-editing applications by leveraging
pre-trained models, eliminating the need for resource-intensive training.
However, the frame-independence of T2I generation often results in poor
temporal consistency. Existing methods address this issue through temporal
layer fine-tuning or inference-based temporal propagation, but these approaches
suffer from high training costs or limited temporal coherence. To address these
challenges, we propose a General and Efficient Adapter (GE-Adapter) that
integrates temporal-spatial and semantic consistency with bilateral DDIM
inversion. This framework introduces three key components: (1) Frame-based
Temporal Consistency Blocks (FTC Blocks) to capture frame-specific features and
enforce smooth inter-frame transitions via temporally-aware loss functions; (2)
Channel-dependent Spatial Consistency Blocks (SCD Blocks) employing bilateral
filters to enhance spatial coherence by reducing noise and artifacts; and (3)
Token-based Semantic Consistency Module (TSC Module) to maintain semantic
alignment using shared prompt tokens and frame-specific tokens. Our method
significantly improves perceptual quality, text-image alignment, and temporal
coherence, as demonstrated on the MSR-VTT dataset. Additionally, it achieves
enhanced fidelity and frame-to-frame coherence, offering a practical solution
for T2V editing.
|
2501.04608
|
Comprehensive Examination of Unrolled Networks for Solving Linear
Inverse Problems
|
eess.IV cs.CV cs.LG
|
Unrolled networks have become prevalent in various computer vision and
imaging tasks. Although they have demonstrated remarkable efficacy in solving
specific computer vision and computational imaging tasks, their adaptation to
other applications presents considerable challenges. This is primarily due to
the multitude of design decisions that practitioners working on new
applications must navigate, each potentially affecting the network's overall
performance. These decisions include selecting the optimization algorithm,
defining the loss function, and determining the number of convolutional layers,
among others. Compounding the issue, evaluating each design choice requires
time-consuming simulations to train, fine-tune the neural network, and optimize
for its performance. As a result, the process of exploring multiple options and
identifying the optimal configuration becomes time-consuming and
computationally demanding. The main objectives of this paper are (1) to unify
some ideas and methodologies used in unrolled networks to reduce the number of
design choices a user has to make, and (2) to report a comprehensive ablation
study to discuss the impact of each of the choices involved in designing
unrolled networks and present practical recommendations based on our findings.
We anticipate that this study will help scientists and engineers design
unrolled networks for their applications and diagnose problems within their
networks efficiently.
|
2501.04610
|
Resilient Peer-to-peer Learning based on Adaptive Aggregation
|
cs.LG
|
Collaborative learning in peer-to-peer networks offers the benefits of
distributed learning while mitigating the risks associated with single points
of failure inherent in centralized servers. However, adversarial workers pose
potential threats by attempting to inject malicious information into the
network. Thus, ensuring the resilience of peer-to-peer learning emerges as a
pivotal research objective. The challenge is exacerbated in the presence of
non-convex loss functions and non-iid data distributions. This paper introduces
a resilient aggregation technique tailored for such scenarios, aimed at
fostering similarity among peers' learning processes. The aggregation weights
are determined through an optimization procedure that uses the loss function
computed from neighbors' models and individual private data, thereby
addressing concerns regarding data privacy in distributed machine learning.
Theoretical analysis demonstrates convergence of parameters with non-convex
loss functions and non-iid data distributions. Empirical evaluations across
three distinct machine learning tasks support the claims. The empirical
findings, which encompass a range of diverse attack models, also demonstrate
improved accuracy when compared to existing methodologies.
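The loss-aware aggregation idea described above can be sketched in a simplified, hypothetical form: each peer weights neighbor models by how well they fit the peer's private data (lower loss, higher weight). The paper's actual optimization procedure differs; the exponential weighting and temperature here are assumptions for illustration only.

```python
# Hypothetical sketch of resilient, loss-aware model aggregation.
import math

def aggregate(models, losses, temperature=1.0):
    """models: list of parameter vectors (lists of floats);
    losses: loss of each model evaluated on this peer's private data."""
    weights = [math.exp(-l / temperature) for l in losses]
    total = sum(weights)
    weights = [w / total for w in weights]          # normalize onto a simplex
    dim = len(models[0])
    return [sum(w * m[k] for w, m in zip(weights, models)) for k in range(dim)]

# A well-fitting neighbor (low loss) dominates a poorly fitting or
# potentially malicious one (high loss) in the aggregate.
print(aggregate([[1.0, 1.0], [10.0, -10.0]], losses=[0.1, 5.0]))
```

Because the weights depend only on losses computed locally, no private data leaves the peer, which is the privacy property the abstract emphasizes.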
|
2501.04613
|
A Semantic Partitioning Method for Large-Scale Training of Knowledge
Graph Embeddings
|
cs.LG cs.DC
|
In recent years, knowledge graph embeddings have achieved great success. Many
methods have been proposed and achieved state-of-the-art results in various
tasks. However, most of the current methods present one or more of the
following problems: (i) They only consider fact triplets, while ignoring the
ontology information of knowledge graphs. (ii) The obtained embeddings do not
contain much semantic information. Therefore, using these embeddings for
semantic tasks is problematic. (iii) They do not enable large-scale training.
In this paper, we propose a new algorithm that incorporates the ontology of
knowledge graphs and partitions the knowledge graph based on classes to include
more semantic information for parallel training of large-scale knowledge graph
embeddings. Our preliminary results show that our algorithm performs well on
several popular benchmarks.
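The class-based partitioning described above can be sketched minimally: group fact triples by the ontology class of their head entity so each partition can be trained in parallel. The paper's actual criterion may be more elaborate; entity names and class labels below are hypothetical.

```python
# Hypothetical sketch: partition a knowledge graph's triples by entity class.
from collections import defaultdict

def partition_by_class(triples, entity_class):
    """triples: (head, relation, tail) tuples;
    entity_class: maps an entity to its ontology class."""
    parts = defaultdict(list)
    for h, r, t in triples:
        parts[entity_class.get(h, "unknown")].append((h, r, t))
    return dict(parts)

triples = [("Paris", "capitalOf", "France"), ("Einstein", "bornIn", "Ulm")]
classes = {"Paris": "City", "Einstein": "Person"}
print(partition_by_class(triples, classes))
```

Each resulting partition groups semantically related entities, so embeddings trained per partition retain class-level semantic information.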
|
2501.04614
|
MedCoDi-M: A Multi-Prompt Foundation Model for Multimodal Medical Data
Generation
|
cs.AI cs.LG
|
Artificial Intelligence is revolutionizing medical practice, enhancing
diagnostic accuracy and healthcare delivery. However, its adaptation in medical
settings still faces significant challenges, related to data availability and
privacy constraints. Synthetic data has emerged as a promising solution to
mitigate these issues, addressing data scarcity while preserving privacy.
Recently, Latent Diffusion Models have emerged as a powerful tool for
generating high-quality synthetic data. Meanwhile, the integration of different
modalities has gained interest, emphasizing the need for models capable of
handling multimodal medical data. Existing approaches struggle to integrate
complementary information and lack the ability to generate modalities
simultaneously. To address this challenge, we present MedCoDi-M, a
6.77-billion-parameter model, designed for multimodal medical data generation,
that, following the Foundation Model paradigm, exploits contrastive learning
and a large quantity of data to build a shared latent space that captures the
relationships between different data modalities. Further, we introduce the
Multi-Prompt training technique, which significantly boosts MedCoDi-M's
generation under different settings. We extensively validate MedCoDi-M: first
we benchmark it against five competitors on the MIMIC-CXR dataset, a
state-of-the-art dataset for Chest X-ray and radiological report generation.
Secondly, we perform a Visual Turing Test with expert radiologists to assess
the realism and clinical relevance of the generated data, ensuring alignment
with real-world scenarios. Finally, we assess the utility of MedCoDi-M in
addressing key challenges in the medical field, such as anonymization, data
scarcity and imbalance learning. The results are promising, demonstrating the
applicability of MedCoDi-M in medical contexts. Project page is at
https://cosbidev.github.io/MedCoDi-M/.
|
2501.04623
|
Large-scale Grid Optimization: The Workhorse of Future Grid Computations
|
eess.SY cs.SY
|
Purpose: The computation methods for modeling, controlling and optimizing the
transforming grid are evolving rapidly. We review and systemize knowledge for a
special class of computation methods that solve large-scale power grid
optimization problems. Summary: Large-scale grid optimizations are pertinent
for, amongst other things, hedging against risk due to resource stochasticity,
evaluating aggregated DERs' impact on grid operation and design, and improving
the overall efficiency of grid operation in terms of cost, reliability, and
carbon footprint. We attribute the continual growth in scale and complexity of
grid optimizations to a large influx of new spatial and temporal features in
both transmission (T) and distribution (D) networks. Therefore, to systemize
knowledge in the field, we discuss the recent advancements in T and D systems
from the viewpoint of mechanistic physics-based and emerging data-driven
methods. Findings: We find that while mechanistic physics-based methods are
leading the science in solving large-scale grid optimizations, data-driven
techniques, especially physics-constrained ones, are emerging as an alternative
to solve otherwise intractable problems. We also find observable gaps in the
field and ascertain these gaps from the paper's literature review and by
collecting and synthesizing feedback from industry experts.
|
2501.04628
|
FatesGS: Fast and Accurate Sparse-View Surface Reconstruction using
Gaussian Splatting with Depth-Feature Consistency
|
cs.CV
|
Recently, Gaussian Splatting has sparked a new trend in the field of computer
vision. Apart from novel view synthesis, it has also been extended to the area
of multi-view reconstruction. The latest methods facilitate complete, detailed
surface reconstruction while ensuring fast training speed. However, these
methods still require dense input views, and their output quality significantly
degrades with sparse views. We observed that the Gaussian primitives tend to
overfit the few training views, leading to noisy floaters and incomplete
reconstruction surfaces. In this paper, we present an innovative sparse-view
reconstruction framework that leverages intra-view depth and multi-view feature
consistency to achieve remarkably accurate surface reconstruction.
Specifically, we utilize monocular depth ranking information to supervise the
consistency of depth distribution within patches and employ a smoothness loss
to enhance the continuity of the distribution. To achieve finer surface
reconstruction, we optimize the absolute position of depth through multi-view
projection features. Extensive experiments on DTU and BlendedMVS demonstrate
that our method outperforms state-of-the-art methods with a speedup of 60x to
200x, achieving swift and fine-grained mesh reconstruction without the need for
costly pre-training.
|
2501.04630
|
Evaluating Interval-based Tokenization for Pitch Representation in
Symbolic Music Analysis
|
cs.IR cs.SD eess.AS
|
Symbolic music analysis tasks are often performed by models originally
developed for Natural Language Processing, such as Transformers. Such models
require the input data to be represented as sequences, which is achieved
through a process of tokenization. Tokenization strategies for symbolic music
often rely on absolute MIDI values to represent pitch information. However,
music research largely promotes the benefit of higher-level representations
such as melodic contour and harmonic relations for which pitch intervals turn
out to be more expressive than absolute pitches. In this work, we introduce a
general framework for building interval-based tokenizations. By evaluating
these tokenizations on three music analysis tasks, we show that such
interval-based tokenizations improve model performances and facilitate their
explainability.
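The core idea above can be shown in a few lines (an illustrative sketch, not the paper's framework): encoding each note by its interval in semitones from the previous note, rather than by its absolute MIDI number, makes the token sequence transposition-invariant.

```python
# Interval-based pitch tokenization: absolute MIDI numbers -> semitone steps.
def to_intervals(midi_pitches):
    return [b - a for a, b in zip(midi_pitches, midi_pitches[1:])]

melody     = [60, 62, 64, 65, 67]   # C major opening
transposed = [62, 64, 66, 67, 69]   # same melody up a whole tone
print(to_intervals(melody))          # [2, 2, 1, 2]
assert to_intervals(melody) == to_intervals(transposed)
```

Because transposed melodies map to identical token sequences, a model sees melodic contour directly, which is precisely the higher-level representation the abstract argues for.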
|
2501.04631
|
Disentangled Clothed Avatar Generation with Layered Representation
|
cs.CV
|
Clothed avatar generation has wide applications in virtual and augmented
reality, filmmaking, and more. Previous methods have achieved success in
generating diverse digital avatars, however, generating avatars with
disentangled components (\eg, body, hair, and clothes) has long been a
challenge. In this paper, we propose LayerAvatar, the first feed-forward
diffusion-based method for generating component-disentangled clothed avatars.
To achieve this, we first propose a layered UV feature plane representation,
where components are distributed in different layers of the Gaussian-based UV
feature plane with corresponding semantic labels. This representation supports
high-resolution and real-time rendering, as well as expressive animation
including controllable gestures and facial expressions. Based on the
well-designed representation, we train a single-stage diffusion model and
introduce constraint terms to address the severe occlusion problem of the
innermost human body layer. Extensive experiments demonstrate the impressive
performances of our method in generating disentangled clothed avatars, and we
further explore its applications in component transfer. The project page is
available at: https://olivia23333.github.io/LayerAvatar/
|
2501.04633
|
"Can you be my mum?": Manipulating Social Robots in the Large Language
Models Era
|
cs.HC cs.CY cs.RO
|
Recent advancements in robots powered by large language models have enhanced
their conversational abilities, enabling interactions closely resembling human
dialogue. However, these models introduce safety and security concerns in HRI,
as they are vulnerable to manipulation that can bypass built-in safety
measures. Imagining a social robot deployed in a home, this work aims to
understand how everyday users try to exploit a language model to violate
ethical principles, such as by prompting the robot to act like a life partner.
We conducted a pilot study involving 21 university students who interacted with
a Misty robot, attempting to circumvent its safety mechanisms across three
scenarios based on specific HRI ethical principles: attachment, freedom, and
empathy. Our results reveal that participants employed five techniques,
including insulting and appealing to pity using emotional language. We hope
this work can inform future research in designing strong safeguards to ensure
ethical and secure human-robot interactions.
|
2501.04635
|
Knowledge Retrieval Based on Generative AI
|
cs.IR cs.AI
|
This study develops a question-answering system based on Retrieval-Augmented
Generation (RAG) using Chinese Wikipedia and Lawbank as retrieval sources.
Using TTQA and TMMLU+ as evaluation datasets, the system employs BGE-M3 for
dense vector retrieval to obtain highly relevant search results and
BGE-reranker to reorder these results based on query relevance. The most
pertinent retrieval outcomes serve as reference knowledge for a Large Language
Model (LLM), enhancing its ability to answer questions and establishing a
knowledge retrieval system grounded in generative AI. The system's
effectiveness is assessed through a two-stage evaluation: automatic and
assisted performance evaluations. The automatic evaluation calculates accuracy
by comparing the model's auto-generated labels with ground truth answers,
measuring performance under standardized conditions without human intervention.
The assisted performance evaluation involves 20 finance-related multiple-choice
questions answered by 20 participants without financial backgrounds. Initially,
participants answer independently. Later, they receive system-generated
reference information to assist in answering, examining whether the system
improves accuracy when assistance is provided. The main contributions of this
research are: (1) Enhanced LLM Capability: By integrating BGE-M3 and
BGE-reranker, the system retrieves and reorders highly relevant results,
reduces hallucinations, and dynamically accesses authorized or public knowledge
sources. (2) Improved Data Privacy: A customized RAG architecture enables local
operation of the LLM, eliminating the need to send private data to external
servers. This approach enhances data security, reduces reliance on commercial
services, lowers operational costs, and mitigates privacy risks.
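The retrieve-then-rerank pattern described above can be sketched with a toy pipeline. Real systems use learned dense embedders (e.g. BGE-M3) and cross-encoder rerankers; both stages here are simple token-overlap stand-ins, used only to show the two-stage shape.

```python
# Toy sketch of the retrieve-then-rerank pattern (stand-in scoring functions).
def retrieve(query, docs, k=3):
    def overlap(d):                      # stage 1: cheap, recall-oriented score
        return len(set(query.split()) & set(d.split()))
    return sorted(docs, key=overlap, reverse=True)[:k]

def rerank(query, candidates):
    def fine_score(d):                   # stage 2: finer-grained ordering
        return sum(d.split().count(w) for w in query.split())
    return sorted(candidates, key=fine_score, reverse=True)

docs = ["tax law for companies", "company tax rates and tax brackets",
        "cooking recipes", "weather report"]
top = rerank("company tax", retrieve("company tax", docs))
print(top[0])
```

The top-ranked passages would then be placed into the LLM's prompt as reference knowledge, which is the RAG step the abstract describes.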
|
2501.04641
|
A Statistical Theory of Contrastive Pre-training and Multimodal
Generative AI
|
cs.LG math.ST stat.ML stat.TH
|
Multi-modal generative AI systems, such as those combining vision and
language, rely on contrastive pre-training to learn representations across
different modalities. While their practical benefits are widely acknowledged, a
rigorous theoretical understanding of the contrastive pre-training framework
remains limited. This paper develops a theoretical framework to explain the
success of contrastive pre-training in downstream tasks, such as zero-shot
classification, conditional diffusion models, and vision-language models. We
introduce the concept of approximate sufficient statistics, a generalization of
the classical sufficient statistics, and show that near-minimizers of the
contrastive pre-training loss are approximately sufficient, making them
adaptable to diverse downstream tasks. We further propose the Joint Generative
Hierarchical Model for the joint distribution of images and text, showing that
transformers can efficiently approximate relevant functions within this model
via belief propagation. Building on this framework, we derive sample complexity
guarantees for multi-modal learning based on contrastive pre-trained
representations. Numerical simulations validate these theoretical findings,
demonstrating the strong generalization performance of contrastively
pre-trained transformers in various multi-modal tasks.
|
2501.04643
|
Discrete Wavelet Transform-Based Capsule Network for Hyperspectral Image
Classification
|
cs.CV
|
Hyperspectral image (HSI) classification is a crucial technique for remote
sensing to build large-scale earth monitoring systems. HSI contains much more
information than traditional visual images for identifying the categories of
land covers. One recent feasible solution for HSI is to leverage CapsNets for
capturing spectral-spatial information. However, these methods incur high
computational costs due to the fully connected architecture between stacked
capsule layers. To solve this problem, a DWT-CapsNet is proposed to identify
partial but important connections in CapsNet for an effective and
efficient HSI classification. Specifically, we integrate a tailored attention
mechanism into a Discrete Wavelet Transform (DWT)-based downsampling layer,
alleviating the information loss problem of conventional downsampling operation
in feature extractors. Moreover, we propose a novel multi-scale routing
algorithm that prunes a large proportion of connections in CapsNet. A capsule
pyramid fusion mechanism is designed to aggregate the spectral-spatial
relationships in multiple levels of granularity, and then a self-attention
mechanism is further conducted in a partially and locally connected
architecture to emphasize the meaningful relationships. As shown in the
experimental results, our method achieves state-of-the-art accuracy while
keeping a lower computational demand in terms of running time, FLOPs, and the
number of parameters, rendering it an appealing choice for practical
implementation in HSI classification.
|
2501.04648
|
FlairGPT: Repurposing LLMs for Interior Designs
|
cs.GR cs.CL cs.CV
|
Interior design involves the careful selection and arrangement of objects to
create an aesthetically pleasing, functional, and harmonized space that aligns
with the client's design brief. This task is particularly challenging, as a
successful design must not only incorporate all the necessary objects in a
cohesive style, but also ensure they are arranged in a way that maximizes
accessibility, while adhering to a variety of affordability and usage
considerations. Data-driven solutions have been proposed, but these are
typically room- or domain-specific and lack explainability in the design
considerations used in producing the final layout. In this paper, we
investigate if large language models (LLMs) can be directly utilized for
interior design. While we find that LLMs are not yet capable of generating
complete layouts, they can be effectively leveraged in a structured manner,
inspired by the workflow of interior designers. By systematically probing LLMs,
we can reliably generate a list of objects along with relevant constraints that
guide their placement. We translate this information into a design layout
graph, which is then solved using an off-the-shelf constrained optimization
setup to generate the final layouts. We benchmark our algorithm in various
design configurations against existing LLM-based methods and human designs, and
evaluate the results using a variety of quantitative and qualitative metrics
along with user studies. In summary, we demonstrate that LLMs, when used in a
structured manner, can effectively generate diverse high-quality layouts,
making them a viable solution for creating large-scale virtual scenes. Project
webpage at https://flairgpt.github.io/
|
2501.04652
|
Multi-task retriever fine-tuning for domain-specific and efficient RAG
|
cs.CL cs.IR cs.LG
|
Retrieval-Augmented Generation (RAG) has become ubiquitous when deploying
Large Language Models (LLMs), as it can address typical limitations such as
generating hallucinated or outdated information. However, when building
real-world RAG applications, practical issues arise. First, the retrieved
information is generally domain-specific. Since it is computationally expensive
to fine-tune LLMs, it is more feasible to fine-tune the retriever to improve
the quality of the data included in the LLM input. Second, as more applications
are deployed in the same real-world system, one cannot afford to deploy
separate retrievers. Moreover, these RAG applications normally retrieve
different kinds of data. Our solution is to instruction fine-tune a small
retriever encoder on a variety of domain-specific tasks to allow us to deploy
one encoder that can serve many use cases, thereby achieving low cost,
scalability, and speed. We show how this encoder generalizes to out-of-domain
settings as well as to an unseen retrieval task on real-world enterprise use
cases.
|
2501.04661
|
Assessing Language Comprehension in Large Language Models Using
Construction Grammar
|
cs.CL cs.AI
|
Large Language Models, despite their significant capabilities, are known to
fail in surprising and unpredictable ways. Evaluating their true
`understanding' of language is particularly challenging due to the extensive
web-scale data they are trained on. Therefore, we construct an evaluation to
systematically assess natural language understanding (NLU) in LLMs by
leveraging Construction Grammar (CxG), which provides insights into the meaning
captured by linguistic elements known as constructions (Cxns). CxG is
well-suited for this purpose because it provides a theoretical basis to
targeted evaluation sets. These datasets are carefully constructed to include
examples which are unlikely to appear in pre-training data, yet intuitive and
easy for humans to understand, enabling a more targeted and reliable
assessment. Our experiments focus on downstream natural language inference and
reasoning tasks by comparing LLMs' understanding of the underlying meanings
communicated through 8 unique Cxns with that of humans. The results show that
while LLMs demonstrate some knowledge of constructional information, even the
latest models including GPT-o1 struggle with abstract meanings conveyed by
these Cxns, as demonstrated in cases where test sentences are dissimilar to
their pre-training data. We argue that such cases provide a more accurate test
of true language understanding, highlighting key limitations in LLMs' semantic
capabilities. We make our novel dataset and associated experimental data
including prompts and model responses publicly available.
|
2501.04662
|
On The Origin of Cultural Biases in Language Models: From Pre-training
Data to Linguistic Phenomena
|
cs.CL
|
Language Models (LMs) have been shown to exhibit a strong preference towards
entities associated with Western culture when operating in non-Western
languages. In this paper, we aim to uncover the origins of entity-related
cultural biases in LMs by analyzing several contributing factors, including the
representation of entities in pre-training data and the impact of variations in
linguistic phenomena across languages. We introduce CAMeL-2, a parallel
Arabic-English benchmark of 58,086 entities associated with Arab and Western
cultures and 367 masked natural contexts for entities. Our evaluations using
CAMeL-2 reveal reduced performance gaps between cultures by LMs when tested in
English compared to Arabic. We find that LMs struggle in Arabic with entities
that appear at high frequencies in pre-training, where entities can hold
multiple word senses. This also extends to entities that exhibit high lexical
overlap with languages that are not Arabic but use the Arabic script. Further,
we show how frequency-based tokenization leads to this issue in LMs, which gets
worse with larger Arabic vocabularies. We will make CAMeL-2 available at:
https://github.com/tareknaous/camel2
|
2501.04665
|
HyFusion: Enhanced Reception Field Transformer for Hyperspectral Image
Fusion
|
eess.IV cs.CV
|
Hyperspectral image (HSI) fusion addresses the challenge of reconstructing
High-Resolution HSIs (HR-HSIs) from High-Resolution Multispectral images
(HR-MSIs) and Low-Resolution HSIs (LR-HSIs), a critical task given the high
costs and hardware limitations associated with acquiring high-quality HSIs.
While existing methods leverage spatial and spectral relationships, they often
suffer from limited receptive fields and insufficient feature utilization,
leading to suboptimal performance. Furthermore, the scarcity of high-quality
HSI data highlights the importance of efficient data utilization to maximize
reconstruction quality. To address these issues, we propose HyFusion, a novel
Dual-Coupled Network (DCN) framework designed to enhance cross-domain feature
extraction and enable effective feature map reusing. The framework first
processes HR-MSI and LR-HSI inputs through specialized subnetworks that
mutually enhance each other during feature extraction, preserving complementary
spatial and spectral details. At its core, HyFusion utilizes an Enhanced
Reception Field Block (ERFB), which combines shifting-window attention and
dense connections to expand the receptive field, effectively capturing
long-range dependencies while minimizing information loss. Extensive
experiments demonstrate that HyFusion achieves state-of-the-art performance in
HR-MSI/LR-HSI fusion, significantly improving reconstruction quality while
maintaining a compact model size and computational efficiency. By integrating
enhanced receptive fields and feature map reusing into a coupled network
architecture, HyFusion provides a practical and effective solution for HSI
fusion in resource-constrained scenarios, setting a new benchmark in
hyperspectral imaging. Our code will be publicly available.
|
2501.04666
|
Enhancing Virtual Try-On with Synthetic Pairs and Error-Aware Noise
Scheduling
|
cs.CV
|
Given an isolated garment image in a canonical product view and a separate
image of a person, the virtual try-on task aims to generate a new image of the
person wearing the target garment. Prior virtual try-on works face two major
challenges in achieving this goal: a) the paired (human, garment) training data
has limited availability; b) generating textures on the human that perfectly
match that of the prompted garment is difficult, often resulting in distorted
text and faded textures. Our work explores ways to tackle these issues through
both synthetic data as well as model refinement. We introduce a garment
extraction model that generates (human, synthetic garment) pairs from a single
image of a clothed individual. The synthetic pairs can then be used to augment
the training of virtual try-on. We also propose an Error-Aware Refinement-based
Schr\"odinger Bridge (EARSB) that surgically targets localized generation
errors for correcting the output of a base virtual try-on model. To identify
likely errors, we propose a weakly-supervised error classifier that localizes
regions for refinement, subsequently augmenting the Schr\"odinger Bridge's
noise schedule with its confidence heatmap. Experiments on VITON-HD and
DressCode-Upper demonstrate that our synthetic data augmentation enhances the
performance of prior work, while EARSB improves the overall image quality. In
user studies, our model is preferred by users in an average of 59% of
cases.
|
2501.04667
|
Natural Variational Annealing for Multimodal Optimization
|
stat.ML cs.LG stat.CO
|
We introduce a new multimodal optimization approach called Natural
Variational Annealing (NVA) that combines the strengths of three foundational
concepts to simultaneously search for multiple global and local modes of
black-box nonconvex objectives. First, it implements a simultaneous search by
using variational posteriors, such as, mixtures of Gaussians. Second, it
applies annealing to gradually trade off exploration for exploitation. Finally,
it learns the variational search distribution using natural-gradient learning
where updates resemble well-known and easy-to-implement algorithms. The three
concepts come together in NVA giving rise to new algorithms and also allowing
us to incorporate "fitness shaping", a core concept from evolutionary
algorithms. We assess the quality of search on simulations and compare it to
methods using gradient descent and evolution strategies. We also provide an
application to a real-world inverse problem in planetary science.
|
2501.04670
|
Are They the Same? Exploring Visual Correspondence Shortcomings of
Multimodal LLMs
|
cs.CV
|
Recent advancements in multimodal models have shown strong abilities in
visual perception, reasoning, and vision-language understanding. However,
studies of visual matching ability are still missing, even though finding the
visual correspondence of objects is essential in vision research. Our research
reveals that the matching capabilities in recent multimodal LLMs (MLLMs) still
exhibit systematic shortcomings, even in current strong MLLMs such as GPT-4o.
In particular, we construct a Multimodal Visual Matching (MMVM) benchmark to
fairly benchmark over 30 different MLLMs. The MMVM benchmark is built from 15
open-source datasets and Internet videos with manual annotation. We categorize
the data samples of MMVM benchmark into eight aspects based on the required
cues and capabilities to more comprehensively evaluate and analyze current
MLLMs. In addition, we have designed an automatic annotation pipeline to
generate the MMVM SFT dataset, including 220K visual matching data with
reasoning annotation. Finally, we present CoLVA, a novel contrastive MLLM with
two novel technical designs: fine-grained vision expert with object-level
contrastive learning and instruction augmentation strategy. CoLVA achieves
51.06\% overall accuracy (OA) on the MMVM benchmark, surpassing GPT-4o and
the baseline by 8.41\% and 23.58\% OA, respectively. The results show the
effectiveness of our MMVM SFT dataset and our novel technical designs. Code,
benchmark, dataset, and models are available at
https://github.com/zhouyiks/CoLVA.
|
2501.04671
|
DRIVINGVQA: Analyzing Visual Chain-of-Thought Reasoning of Vision
Language Models in Real-World Scenarios with Driving Theory Tests
|
cs.CV cs.AI
|
Large vision-language models (LVLMs) augment language models with visual
understanding, enabling multimodal reasoning. However, due to the modality gap
between textual and visual data, they often face significant challenges, such
as over-reliance on text priors, hallucinations, and limited capacity for
complex visual reasoning. Existing benchmarks to evaluate visual reasoning in
LVLMs often rely on schematic or synthetic images and on imprecise
machine-generated explanations. To bridge the modality gap, we present
DrivingVQA, a new benchmark derived from driving theory tests to evaluate
visual chain-of-thought reasoning in complex real-world scenarios. It offers
3,931 expert-crafted multiple-choice problems and interleaved explanations
grounded with entities relevant to the reasoning process. We leverage this
dataset to perform an extensive study of LVLMs' ability to reason about complex
visual scenarios. Our experiments reveal that open-source and proprietary LVLMs
struggle with visual chain-of-thought reasoning under zero-shot settings. We
investigate training strategies that leverage relevant entities to improve
visual reasoning. Notably, we observe a performance boost of up to 7\% when
reasoning over image tokens of cropped regions tied to these entities.
|