| id | title | categories | abstract |
|---|---|---|---|
2501.09050
|
Generating Realistic Synthetic Head Rotation Data for Extended Reality
using Deep Learning
|
cs.CV cs.AI
|
Extended Reality is a revolutionary method of delivering multimedia content
to users. A large contributor to its popularity is the sense of immersion and
interactivity enabled by having real-world motion reflected in the virtual
experience accurately and immediately. This user motion, mainly caused by head
rotations, induces several technical challenges. For instance, which content is
generated and transmitted depends heavily on where the user is looking.
Seamless systems, taking user motion into account proactively, will therefore
require accurate predictions of upcoming rotations. Training and evaluating
such predictors requires vast amounts of orientational input data, which is
expensive to gather, as it requires human test subjects. A more feasible
approach is to gather a modest dataset through test subjects, and then extend
it to a more sizeable set using synthetic data generation methods. In this
work, we present a head rotation time series generator based on TimeGAN, an
extension of the well-known Generative Adversarial Network, designed
specifically for generating time series. This approach is able to extend a
dataset of head rotations with new samples closely matching the distribution of
the measured time series.
|
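A generator such as TimeGAN is trained on fixed-length sequences cut from longer recordings. The sketch below shows only that data-preparation step on a stand-in yaw-angle series (the window length, stride, and random-walk signal are illustrative assumptions, not values from the paper):

```python
import numpy as np

# Hypothetical recorded yaw angles (degrees) sampled at a fixed rate;
# a random walk stands in for real head-rotation measurements.
rng = np.random.default_rng(0)
yaw = np.cumsum(rng.normal(0.0, 0.5, size=1000))

def sliding_windows(series, length=100, stride=10):
    # Cut a long head-rotation recording into fixed-length training
    # sequences for a time-series generator such as TimeGAN.
    starts = range(0, len(series) - length + 1, stride)
    return np.stack([series[s:s + length] for s in starts])

windows = sliding_windows(yaw)
```

Each row of `windows` is one training sequence; a trained generator would then emit new sequences matching their distribution.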
2501.09051
|
Polyp detection in colonoscopy images using YOLOv11
|
cs.CV cs.AI
|
Colorectal cancer (CRC) is one of the most commonly diagnosed cancers
worldwide. It starts as a polyp in the inner lining of the colon. To prevent
CRC, early polyp detection is required. Colonoscopy is used for the inspection
of the colon. Generally, the images taken by the camera placed at the tip of
the endoscope are analyzed manually by experts. With the rise of machine
learning, various traditional machine learning models have been applied to
this task. Recently, deep learning models have shown greater effectiveness in
polyp detection because of their superiority in generalizing and learning
small features. Deep learning models for object detection can be divided into
two types: single-stage and two-stage. Two-stage models generally achieve
higher accuracy, but single-stage models have lower inference time, making
them well suited to quick object detection. YOLO is one of the single-stage
models used successfully for polyp detection; it has drawn researchers'
attention because of its low inference time. Different versions of YOLO have
been used so far, and with each newer version the accuracy of the model has
increased. This paper evaluates the effectiveness of the recently released
YOLOv11 for polyp detection. We analyzed the performance of all five YOLOv11
models (YOLO11n, YOLO11s, YOLO11m, YOLO11l, YOLO11x), using the Kvasir dataset
for training and testing. Two versions of the dataset were used: the first
consisted of the original dataset, and the second was created using
augmentation techniques. The performance of all the models on both versions of
the dataset has been analyzed.
|
2501.09052
|
Continual Test-Time Adaptation for Single Image Defocus Deblurring via
Causal Siamese Networks
|
eess.IV cs.LG
|
Single image defocus deblurring (SIDD) aims to restore an all-in-focus image
from a defocused one. Distribution shifts in defocused images generally lead to
performance degradation of existing methods during out-of-distribution
inferences. In this work, we probe the intrinsic reason behind this
performance degradation, which we identify as the heterogeneity of
lens-specific point spread functions. Empirical evidence supports this
finding, motivating us to employ a continual test-time adaptation (CTTA)
paradigm for SIDD. However, traditional CTTA methods, which primarily rely on
entropy minimization, cannot sufficiently explore task-dependent information
for pixel-level regression tasks like SIDD. To address this issue, we propose
a novel Siamese networks-based continual test-time adaptation framework, which
adapts source models to continuously changing target domains in an online
manner, requiring only unlabeled target data. To further mitigate
semantically erroneous textures
introduced by source SIDD models under severe degradation, we revisit the
learning paradigm through a structural causal model and propose Causal Siamese
networks (CauSiam). Our method leverages large-scale pre-trained
vision-language models to derive discriminative universal semantic priors and
integrates these priors into Siamese networks, ensuring causal identifiability
between blurry inputs and restored images. Extensive experiments demonstrate
that CauSiam effectively improves the generalization performance of existing
SIDD methods in continuously changing domains.
|
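The entropy-minimization objective that the abstract argues is insufficient for pixel-level regression can be stated in a few lines. This is a generic sketch of that baseline objective, not the CauSiam method itself:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def entropy_loss(logits):
    # Mean prediction entropy: the classic test-time-adaptation objective.
    # Minimizing it sharpens class predictions but carries no pixel-level
    # regression signal, which is the limitation the paper targets.
    p = softmax(logits)
    return float(-np.mean(np.sum(p * np.log(p + 1e-12), axis=-1)))

confident = np.array([[10.0, 0.0, 0.0]])   # near one-hot prediction
uncertain = np.array([[1.0, 1.0, 1.0]])    # uniform prediction
```

A confident prediction has low entropy, a uniform one has entropy ln(3); entropy minimization pushes the model toward the former.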
2501.09055
|
SHYI: Action Support for Contrastive Learning in High-Fidelity
Text-to-Image Generation
|
cs.CV
|
In this project, we address the issue of infidelity in text-to-image
generation, particularly for actions involving multiple objects. We build on
the CONFORM framework, which uses contrastive learning to improve the accuracy
of generated images containing multiple objects. However, the depiction of
actions involving multiple different objects still has large room for
improvement. To improve it, we employ semantically hypergraphic contrastive
adjacency learning, an enhanced contrastive structure combined with a
"contrast but link" technique. We further amend Stable Diffusion's
understanding of actions with InteractDiffusion. As evaluation metrics, we use
CLIP image-text similarity and TIFA; in addition, we conducted a user study.
Our method shows promising results even with verbs that Stable Diffusion
understands only moderately well. We then provide future directions by
analyzing the results.
Our codebase can be found on polybox under the link:
https://polybox.ethz.ch/index.php/s/dJm3SWyRohUrFxn
|
2501.09056
|
Decompose-ToM: Enhancing Theory of Mind Reasoning in Large Language
Models through Simulation and Task Decomposition
|
cs.CL cs.AI
|
Theory of Mind (ToM) is the ability to understand and reflect on the mental
states of others. Although this capability is crucial for human interaction,
testing on Large Language Models (LLMs) reveals that they possess only a
rudimentary understanding of it. While the most capable closed-source LLMs
have come close to human performance on some ToM tasks, they still perform
poorly on complex variations of the task that involve more structured
reasoning. In this work, we utilize the concept of "pretend-play", or
"Simulation Theory" from cognitive psychology, to propose "Decompose-ToM":
an LLM-based inference algorithm that improves model performance on complex ToM
tasks. We recursively simulate user perspectives and decompose the ToM task
into a simpler set of functions: subject identification, question reframing,
world-model updating, and knowledge availability. We test the algorithm on
higher-order ToM tasks and a task testing for ToM capabilities in a
conversational setting, demonstrating that our approach shows significant
improvement across models compared to baseline methods while requiring minimal
prompt tuning across tasks and no additional model training.
|
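The perspective-simulation idea can be illustrated with a classic Sally-Anne-style false-belief toy, stubbing out the LLM: track which agents witnessed each world-model update, then answer from the agent's possibly stale view. The scenario and function names are illustrative, not from the paper:

```python
# World model as a list of (event, agents-present) updates.
world = [("the ball is in the basket", {"Sally", "Anne"}),
         ("the ball moved to the box", {"Anne"})]   # Sally had left the room

def simulated_answer(agent):
    # Perspective simulation: replay only the events the agent witnessed
    # and report the most recent one about the ball.
    for event, present in reversed(world):
        if agent in present and "ball" in event:
            return event
    return None
```

Sally, absent for the second update, answers with the stale location; Anne answers with the current one. Decompose-ToM performs this kind of knowledge-availability filtering with LLM calls rather than set membership.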
2501.09064
|
Generative diffusion model with inverse renormalization group flows
|
cond-mat.stat-mech cond-mat.dis-nn cs.LG physics.app-ph physics.bio-ph
|
Diffusion models represent a class of generative models that produce data by
denoising a sample corrupted by white noise. Despite their success in computer
vision, audio synthesis, and point cloud generation, diffusion models have so
far overlooked inherent multiscale structures in data and suffer from a slow
generation process due to many iteration steps. In physics, the
renormalization group offers a fundamental framework for linking different
scales and producing an accurate coarse-grained model. Here we introduce a
renormalization group-based diffusion model that leverages the multiscale
nature of data distributions to realize high-quality data generation. In the
spirit of renormalization group procedures, we define a flow equation that
progressively erases data information from fine-scale details to
coarse-grained structures. By reversing the renormalization group flows, our
model generates high-quality samples in a coarse-to-fine manner. We validate
the versatility of the model through applications to protein structure
prediction and image generation. Our model consistently outperforms
conventional diffusion models across standard evaluation metrics, enhancing
sample quality and/or accelerating sampling speed by an order of magnitude.
The proposed method alleviates the need for data-dependent tuning of
hyperparameters in generative diffusion models, showing promise for
systematically increasing sample efficiency based on the concept of the
renormalization group.
|
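A forward flow that erases fine-scale detail can be sketched with real-space blocking (2x2 averaging), a standard renormalization-group-style transformation; this is an illustration of the coarse-graining idea, not the paper's actual flow equation:

```python
import numpy as np

def coarse_grain(x):
    # One blocking step: 2x2 block averaging removes the finest scale
    # while preserving coarse structure (and the overall mean exactly).
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

rng = np.random.default_rng(0)
x = rng.normal(size=(8, 8))          # stand-in "data sample"
flow = [x]
while flow[-1].shape[0] > 1:
    flow.append(coarse_grain(flow[-1]))
# A generative model in this spirit runs the flow in reverse,
# refining coarse-grained samples back toward fine-scale detail.
```

The flow passes through 8x8, 4x4, 2x2, and 1x1 scales, and the final 1x1 value equals the global mean of the input.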
2501.09080
|
Average-Reward Reinforcement Learning with Entropy Regularization
|
cs.LG cs.AI
|
The average-reward formulation of reinforcement learning (RL) has drawn
increased interest in recent years due to its ability to solve
temporally-extended problems without discounting. Independently, RL algorithms
have benefited from entropy regularization, an approach that makes the optimal
policy stochastic and thereby more robust to noise. Despite the distinct
benefits of the two approaches, the combination of entropy regularization with
an average-reward objective is not well-studied in the literature and there has
been limited development of algorithms for this setting. To address this gap in
the field, we develop algorithms for solving entropy-regularized average-reward
RL problems with function approximation. We experimentally validate our method,
comparing it with existing algorithms on standard benchmarks for RL.
|
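One common way to write the combined objective is the long-run average of reward plus policy entropy; the temperature $\tau$ and entropy symbol $\mathcal{H}$ are standard notation assumed here, not taken from the abstract:

```latex
\max_{\pi}\; \rho_\tau(\pi)
  = \lim_{T \to \infty} \frac{1}{T}\,
    \mathbb{E}_{\pi}\!\left[\, \sum_{t=0}^{T-1}
      \Big( r(s_t, a_t) + \tau\, \mathcal{H}\big(\pi(\cdot \mid s_t)\big) \Big) \right]
```

Setting $\tau = 0$ recovers the plain average-reward objective; larger $\tau$ trades reward for policy stochasticity.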
2501.09081
|
Inferring Transition Dynamics from Value Functions
|
cs.LG cs.AI
|
In reinforcement learning, the value function is typically trained to solve
the Bellman equation, which connects the current value to future values. This
temporal dependency hints that the value function may contain implicit
information about the environment's transition dynamics. By rearranging the
Bellman equation, we show that a converged value function encodes a model of
the underlying dynamics of the environment. We build on this insight to propose
a simple method for inferring dynamics models directly from the value function,
potentially mitigating the need for explicit model learning. Furthermore, we
explore the challenges of next-state identifiability, discussing conditions
under which the inferred dynamics model is well-defined. Our work provides a
theoretical foundation for leveraging value functions in dynamics modeling and
opens a new avenue for bridging model-free and model-based reinforcement
learning.
|
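The rearrangement is easy to demonstrate numerically on a toy deterministic MDP (all numbers here are illustrative, not from the paper): solve the Bellman equation for V, then invert it to recover each state's successor:

```python
import numpy as np

# Hypothetical 4-state deterministic chain: s -> s+1, state 3 absorbing.
gamma = 0.9
r = np.array([0.0, 0.0, 1.0, 0.0])
next_state = np.array([1, 2, 3, 3])

# Solve the Bellman equation V = r + gamma * P V exactly.
P = np.zeros((4, 4))
P[np.arange(4), next_state] = 1.0
V = np.linalg.solve(np.eye(4) - gamma * P, r)

# Rearranged Bellman equation: V(next(s)) = (V(s) - r(s)) / gamma.
# Identify the successor by matching the implied next-state value;
# this matching is only well-defined when state values are distinct,
# which is the identifiability issue the paper discusses.
implied = (V - r) / gamma
inferred = np.array([int(np.argmin(np.abs(V - v))) for v in implied])
```

Here `inferred` recovers `next_state` exactly because all four state values differ.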
2501.09086
|
Salient Information Preserving Adversarial Training Improves Clean and
Robust Accuracy
|
cs.CV
|
In this work we introduce Salient Information Preserving Adversarial Training
(SIP-AT), an intuitive method for relieving the robustness-accuracy trade-off
incurred by traditional adversarial training. SIP-AT uses salient image regions
to guide the adversarial training process in such a way that fragile features
deemed meaningful by an annotator remain unperturbed during training, allowing
models to learn highly predictive non-robust features without sacrificing
overall robustness. This technique is compatible with both human-based and
automatically generated salience estimates, allowing SIP-AT to be used as part
of human-driven model development without being reliant upon additional human
data. We perform experiments across multiple datasets and
architectures and demonstrate that SIP-AT is able to boost the clean accuracy
of models while maintaining a high degree of robustness against attacks at
multiple epsilon levels. We complement our central experiments with an
observational study measuring the rate at which human subjects successfully
identify perturbed images. This study helps build a more intuitive
understanding of adversarial attack strength and demonstrates the heightened
importance of low-epsilon robustness. Our results demonstrate the efficacy of
SIP-AT and provide valuable insight into the risks posed by adversarial samples
of various strengths.
|
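The core mechanism, keeping salient regions unperturbed during adversarial training, can be shown in miniature: mask the adversarial perturbation wherever the salience map marks a meaningful region. The FGSM-style sign perturbation and the epsilon value are illustrative assumptions:

```python
import numpy as np

def masked_perturbation(grad_sign, salience, eps=0.03):
    # Zero out the adversarial perturbation inside annotated salient
    # regions so the fragile features there stay unperturbed, while the
    # rest of the image is attacked as in standard adversarial training.
    return eps * grad_sign * (1.0 - salience)

grad_sign = np.ones((4, 4))                       # stand-in gradient signs
salience = np.zeros((4, 4)); salience[1:3, 1:3] = 1.0  # annotated region
delta = masked_perturbation(grad_sign, salience)
```

Pixels outside the salient box receive the full epsilon perturbation; pixels inside receive none.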
2501.09089
|
Physics-Aware POD-Based Learning for Ab initio QEM-Galerkin Simulations
of Periodic Nanostructures
|
physics.comp-ph cond-mat.mtrl-sci cs.CE
|
Quantum nanostructures offer crucial applications in electronics, photonics,
materials, and drug design. For accurate design and analysis of nanostructures
and materials, simulations of the Schrödinger or Schrödinger-like equation are
needed. For large nanostructures, these eigenvalue problems can be
computationally intensive. One effective solution is a learning method via
Proper Orthogonal Decomposition (POD), together with ab initio Galerkin
projection of the Schrödinger equation. POD-Galerkin projects the problem onto
a reduced-order space, with the POD basis representing electron wave functions
(WFs) guided by first principles in simulations. To minimize training effort
and enhance the robustness of POD-Galerkin in larger structures, the quantum
element method (QEM) was proposed previously, which partitions nanostructures
into generic quantum elements. Larger nanostructures can then be constructed
from the trained generic quantum elements, each of which is represented by its
POD-Galerkin model. This work investigates QEM-Galerkin thoroughly in
multi-element quantum-dot (QD) structures, examining approaches to further
improve its training effectiveness, simulation accuracy, and efficiency. To
further improve computing speed, POD and Fourier bases for periodic potentials
are also examined in QEM-Galerkin simulations. Results indicate that,
considering efficiency and accuracy, the POD potential basis is superior to
the Fourier potential basis even for periodic potentials. Overall,
QEM-Galerkin offers more than a two-order-of-magnitude speedup in computation
over direct numerical simulation for multi-element QD structures, and more
improvement is observed in structures comprising more elements.
|
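The POD-Galerkin ingredients, extracting a basis from solution snapshots via SVD and projecting an operator onto it, fit in a few lines. The random low-rank snapshots and diagonal stand-in Hamiltonian are assumptions for illustration; real snapshots would come from Schrödinger solves:

```python
import numpy as np

rng = np.random.default_rng(0)
# Snapshot matrix: each column a sampled wave function on a 50-point grid.
# A rank-3 random product stands in for physical snapshot data.
snapshots = rng.normal(size=(50, 3)) @ rng.normal(size=(3, 20))

# POD basis = dominant left singular vectors of the snapshot matrix.
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
n_modes = int(np.sum(s > 1e-10 * s[0]))
pod_basis = U[:, :n_modes]

# Galerkin projection of an operator onto the reduced-order space.
H = np.diag(np.linspace(0.0, 1.0, 50))   # stand-in Hamiltonian
H_reduced = pod_basis.T @ H @ pod_basis  # tiny eigenproblem to solve
```

The reduced eigenproblem is 3x3 instead of 50x50, which is where the reported speedups originate.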
2501.09092
|
SteLLA: A Structured Grading System Using LLMs with RAG
|
cs.CL cs.AI cs.CY
|
Large Language Models (LLMs) have shown strong general capabilities in many
applications. However, making them reliable tools for specific tasks such as
automated short answer grading (ASAG) remains a challenge. We present SteLLA
(Structured Grading System Using LLMs with RAG), in which a) a
Retrieval-Augmented Generation (RAG) approach is used to empower LLMs on the
ASAG task by extracting structured information from highly relevant and
reliable external knowledge based on the instructor-provided reference answer
and rubric, and b) an LLM performs a structured, question-answering-based
evaluation of student answers to provide analytical grades and feedback. A
real-world dataset containing students' exam answers was collected from a
college-level Biology course. Experiments show that our proposed system
achieves substantial agreement with the human grader while providing a
breakdown of grades and feedback on all the knowledge points examined in the
problem. A qualitative and error analysis of the feedback generated by GPT-4
shows that GPT-4 is good at capturing facts but may be prone to inferring too
much implication from the given text in the grading task, which provides
insights into the usage of LLMs in ASAG systems.
|
2501.09096
|
Self Pre-training with Adaptive Mask Autoencoders for Variable-Contrast
3D Medical Imaging
|
eess.IV cs.CV
|
The Masked Autoencoder (MAE) has recently demonstrated effectiveness in
pre-training Vision Transformers (ViT) for analyzing natural images. By
reconstructing complete images from partially masked inputs, the ViT encoder
gathers contextual information to predict the missing regions. This capability
to aggregate context is especially important in medical imaging, where
anatomical structures are functionally and mechanically linked to surrounding
regions. However, current methods do not consider variations in the number of
input images, which is typically the case in real-world Magnetic Resonance (MR)
studies. To address this limitation, we propose a 3D Adaptive Masked
Autoencoders (AMAE) architecture that accommodates a variable number of 3D
input contrasts per subject. A magnetic resonance imaging (MRI) dataset of
45,364 subjects was used for pretraining, and a subset of 1,648 training, 193
validation, and 215 test subjects was used for finetuning. The results
demonstrate that self pre-training of this adaptive masked autoencoder can
enhance infarct segmentation performance by 2.8%-3.7% for ViT-based
segmentation models.
|
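The adaptive part, masking a token pool whose size varies with the number of available contrasts, can be sketched as MAE-style random masking over a variable token count. The 75% mask ratio is the common MAE default and the patch-grid size is a made-up example:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_patch_mask(n_patches, mask_ratio=0.75):
    # MAE-style random masking: hide a fixed fraction of patch tokens;
    # the encoder sees only the visible remainder.
    n_mask = int(n_patches * mask_ratio)
    perm = rng.permutation(n_patches)
    mask = np.zeros(n_patches, dtype=bool)
    mask[perm[:n_mask]] = True
    return mask

# Variable number of 3D contrasts per subject (e.g. 2 vs 4 MR sequences);
# tokens from all available contrast volumes are pooled before masking.
for n_contrasts in (2, 4):
    tokens_per_volume = 6 * 6 * 6      # hypothetical 3D patch grid
    n_tokens = n_contrasts * tokens_per_volume
    mask = random_patch_mask(n_tokens)
```

The same masking routine works for any subject because the token count, not the architecture, absorbs the variable number of contrasts.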
2501.09101
|
Relation U-Net
|
eess.IV cs.CV
|
Towards clinical interpretations, this paper presents a new
"output-with-confidence" segmentation neural network with multiple input
images, multiple output segmentation maps, and their pairwise relations. A
confidence score for a test image without ground truth can be estimated from
the difference among the estimated relation maps. We evaluate the method
against the widely used vanilla U-Net for segmentation; our new model, named
Relation U-Net, can output segmentation maps of the input images as well as an
estimated confidence score for a test image without ground truth.
Experimental results on four public datasets show that Relation U-Net not only
provides better accuracy than vanilla U-Net but also estimates a confidence
score that is linearly correlated with the segmentation accuracy on test
images.
|
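The idea of scoring confidence from disagreement among multiple predicted maps can be illustrated with mean pairwise agreement; this is an illustrative form of the idea, not the paper's exact relation-map construction:

```python
import numpy as np

def confidence_from_maps(maps):
    # maps: (K, H, W) binary predictions whose pairwise relations should
    # agree when the network is internally consistent. Confidence is the
    # mean pairwise agreement; disagreement lowers the score.
    K = maps.shape[0]
    agree = [np.mean(maps[i] == maps[j])
             for i in range(K) for j in range(i + 1, K)]
    return float(np.mean(agree))

identical = np.ones((3, 4, 4))
noisy = identical.copy(); noisy[0, 0, 0] = 0.0   # one dissenting pixel
```

Perfectly consistent maps score 1.0; any disagreement pushes the score below 1, mirroring how lower consistency should track lower segmentation accuracy.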
2501.09102
|
Tracking the Takes and Trajectories of English-Language News Narratives
across Trustworthy and Worrisome Websites
|
cs.SI cs.AI cs.CY cs.LG
|
Understanding how misleading and outright false information enters news
ecosystems remains a difficult challenge that requires tracking how narratives
spread across thousands of fringe and mainstream news websites. To do this, we
introduce a system that utilizes encoder-based large language models and
zero-shot stance detection to scalably identify and track news narratives and
their attitudes across over 4,000 factually unreliable, mixed-reliability, and
factually reliable English-language news websites. Running our system over an
18-month period, we track the spread of 146K news stories. Using network-based
inference via the NETINF algorithm, we show that the paths of news
narratives and the stances of websites toward particular entities can be used
to uncover slanted propaganda networks (e.g., anti-vaccine and anti-Ukraine)
and to identify the most influential websites in spreading these attitudes in
the broader news ecosystem. We hope that increased visibility into our
distributed news ecosystem can help with the reporting and fact-checking of
propaganda and disinformation.
|
2501.09103
|
Similarity-Quantized Relative Difference Learning for Improved Molecular
Activity Prediction
|
cs.LG
|
Accurate prediction of molecular activities is crucial for efficient drug
discovery, yet remains challenging due to limited and noisy datasets. We
introduce Similarity-Quantized Relative Learning (SQRL), a learning framework
that reformulates molecular activity prediction as relative difference learning
between structurally similar pairs of compounds. SQRL uses precomputed
molecular similarities to enhance training of graph neural networks and other
architectures, and significantly improves accuracy and generalization in
low-data regimes common in drug discovery. We demonstrate its broad
applicability and real-world potential through benchmarking on public datasets
as well as proprietary industry data. Our findings demonstrate that leveraging
similarity-aware relative differences provides an effective paradigm for
molecular activity prediction.
|
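Reformulating activity prediction as relative difference learning between similar pairs can be sketched end to end with a linear stand-in model. Cosine similarity, the linear activity function, and all sizes are assumptions for illustration; the abstract does not specify the similarity measure or architecture:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 8))   # stand-in molecular descriptors
w_true = rng.normal(size=8)
y = X @ w_true                  # noiseless "activities" for the sketch

# Precomputed similarity (cosine here); pair each compound with its
# most similar training neighbour.
Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
S = Xn @ Xn.T
np.fill_diagonal(S, -np.inf)
nbr = S.argmax(axis=1)

# Relative difference learning: regress activity *differences* on
# feature differences between structurally similar pairs.
dX = X - X[nbr]
dy = y - y[nbr]
w_hat, *_ = np.linalg.lstsq(dX, dy, rcond=None)

# Predict a new compound by anchoring on its nearest neighbour.
x_new = rng.normal(size=8)
j = int(((x_new / np.linalg.norm(x_new)) @ Xn.T).argmax())
y_pred = y[j] + (x_new - X[j]) @ w_hat
```

Training on pair differences and anchoring predictions on a known neighbour is what lets the relative formulation exploit small, noisy datasets.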
2501.09104
|
A Non-autoregressive Model for Joint STT and TTS
|
cs.SD cs.AI eess.AS
|
In this paper, we take a step towards jointly modeling automatic speech
recognition (STT) and speech synthesis (TTS) in a fully non-autoregressive way.
We develop a novel multimodal framework capable of handling the speech and text
modalities as input either individually or together. The proposed model can
also be trained with unpaired speech or text data owing to its multimodal
nature. We further propose an iterative refinement strategy to improve the STT
and TTS performance of our model such that the partial hypothesis at the output
can be fed back to the input of our model, thus iteratively improving both STT
and TTS predictions. We show that our joint model can effectively perform both
STT and TTS tasks, outperforming the STT-specific baseline in all tasks and
performing competitively with the TTS-specific baseline across a wide range of
evaluation metrics.
|
2501.09106
|
Physical Layer Security in FAS-aided Wireless Powered NOMA Systems
|
cs.IT eess.SP math.IT
|
The rapid evolution of communication technologies and the emergence of
sixth-generation (6G) networks have introduced unprecedented opportunities for
ultra-reliable, low-latency, and energy-efficient communication. However, the
integration of advanced technologies like non-orthogonal multiple access (NOMA)
and wireless powered communication networks (WPCNs) brings significant
challenges, particularly in terms of energy constraints and security
vulnerabilities. Traditional antenna systems and orthogonal multiple access
schemes struggle to meet the increasing demands for performance and security in
such environments. To address this gap, this paper investigates the impact of
emerging fluid antenna systems (FAS) on the performance of physical layer
security (PLS) in WPCNs. Specifically, we consider a scenario in which a
transmitter, powered by a power beacon via an energy link, transmits
confidential messages to legitimate FAS-aided users over information links
while an external eavesdropper attempts to decode the transmitted signals.
Additionally, users leverage the NOMA scheme, where the far user may also act
as an internal eavesdropper. For the proposed model, we first derive the
distributions of the equivalent channels at each node and subsequently obtain
compact expressions for the secrecy outage probability (SOP) and average
secrecy capacity (ASC), using Gaussian quadrature methods. Our results
reveal that incorporating FAS for NOMA users, instead of traditional antenna
systems (TAS), enhances the performance of the proposed secure WPCN.
|
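The Gaussian quadrature step used for the SOP and ASC expressions is the standard Gauss-Legendre rule; here is a generic sketch on a known integrand (the paper's actual integrands are channel-dependent and not reproduced here):

```python
import numpy as np

def gauss_legendre_integral(f, a, b, n=30):
    # Approximate the integral of f over [a, b] with n Legendre
    # nodes/weights, mapping the nodes from [-1, 1] to [a, b].
    x, w = np.polynomial.legendre.leggauss(n)
    t = 0.5 * (b - a) * x + 0.5 * (b + a)
    return float(0.5 * (b - a) * np.sum(w * f(t)))

# Sanity check on a known integral: int_0^1 e^x dx = e - 1.
val = gauss_legendre_integral(np.exp, 0.0, 1.0)
```

For smooth integrands like the channel distributions in secrecy analysis, a few dozen nodes already give near machine-precision accuracy, which is why the technique yields compact, cheap-to-evaluate SOP/ASC expressions.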
2501.09107
|
Rethinking Post-Training Quantization: Introducing a Statistical
Pre-Calibration Approach
|
cs.LG
|
As Large Language Models (LLMs) become increasingly computationally complex,
developing efficient deployment strategies, such as quantization, becomes
crucial. State-of-the-art Post-training Quantization (PTQ) techniques often
rely on calibration processes to maintain the accuracy of these models.
However, while these calibration techniques can enhance performance in certain
domains, they may not be as effective in others. This paper aims to draw
attention to robust statistical approaches that can mitigate such issues. We
propose a weight-adaptive PTQ method that can be considered a precursor to
calibration-based PTQ methods, guiding the quantization process to preserve the
distribution of weights by minimizing the Kullback-Leibler divergence between
the quantized weights and the originally trained weights. This minimization
ensures that the quantized model retains the Shannon information content of the
original model to a great extent, guaranteeing robust and efficient deployment
across many tasks. As such, our proposed approach can perform on par with most
common calibration-based PTQ methods, establishing a new pre-calibration step
for further adjusting the quantized weights with calibration. We show that our
pre-calibration results achieve the same accuracy as some existing
calibration-based PTQ methods on various LLMs.
|
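The distribution-preserving idea, pick quantization parameters that minimize the KL divergence between the original and quantized weight distributions, can be sketched with a simple scale search over histograms. The 4-bit symmetric scheme, bin count, and candidate grid are illustrative assumptions, not the paper's algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(0.0, 1.0, size=10_000)   # stand-in weight tensor

def kl_hist(p_samples, q_samples, bins):
    # KL(P || Q) between histogram densities, lightly smoothed so the
    # divergence stays finite on empty bins.
    p, edges = np.histogram(p_samples, bins=bins)
    q, _ = np.histogram(q_samples, bins=edges)
    p = (p + 1e-6) / (p + 1e-6).sum()
    q = (q + 1e-6) / (q + 1e-6).sum()
    return float(np.sum(p * np.log(p / q)))

def quantize(w, scale, n_bits=4):
    # Symmetric uniform quantize-dequantize round trip.
    qmax = 2 ** (n_bits - 1) - 1
    return np.clip(np.round(w / scale), -qmax - 1, qmax) * scale

# Scan candidate scales; keep the one whose dequantized weights best
# preserve the original weight distribution in the KL sense.
bins = np.linspace(w.min(), w.max(), 65)
candidates = np.linspace(0.05, 1.0, 40) * np.abs(w).max() / 7
kls = [kl_hist(w, quantize(w, s), bins) for s in candidates]
best_scale = float(candidates[int(np.argmin(kls))])
```

Because the criterion needs only the weights themselves, this selection runs before (and independently of) any calibration data, matching the pre-calibration framing.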
2501.09112
|
Mantis Shrimp: Exploring Photometric Band Utilization in Computer Vision
Networks for Photometric Redshift Estimation
|
astro-ph.IM cs.AI
|
We present Mantis Shrimp, a multi-survey deep learning model for photometric
redshift estimation that fuses ultra-violet (GALEX), optical (PanSTARRS), and
infrared (UnWISE) imagery. Machine learning is now an established approach for
photometric redshift estimation, with generally acknowledged higher performance
in areas with a high density of spectroscopically identified galaxies over
template-based methods. Multiple works have shown that image-based
convolutional neural networks can outperform tabular-based color/magnitude
models. In comparison to tabular models, image models have additional design
complexities: it is largely unknown how to fuse inputs from different
instruments which have different resolutions or noise properties. The Mantis
Shrimp model estimates the conditional density estimate of redshift using
cutout images. The density estimates are well calibrated, and the point
estimates perform well in the distribution of available spectroscopically
confirmed galaxies, with bias = 1e-2, scatter (NMAD = 2.44e-2), and
catastrophic outlier rate ($\eta$ = 17.53$\%$). We find that early fusion
approaches (e.g., resampling and stacking images from different instruments)
match the performance of late fusion approaches (e.g., concatenating latent
space representations), so the design choice is ultimately left to the user.
Finally, we study how the models learn to use information across bands,
finding evidence that our models successfully incorporate information from all
surveys. The applicability of our model to the analysis of large populations of
galaxies is limited by the speed of downloading cutouts from external servers;
however, our model could be useful in smaller studies such as generating priors
over redshift for stellar population synthesis.
|
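The two fusion strategies compared in the abstract differ only in where the surveys are combined. A shape-level sketch makes the contrast concrete; the cutout sizes, nearest-neighbour resampling, and the mean/std "encoder" are illustrative stand-ins:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical cutouts from three surveys at different resolutions.
galex = rng.normal(size=(16, 16))    # UV
ps = rng.normal(size=(64, 64))       # optical
unwise = rng.normal(size=(32, 32))   # IR

def resample(img, size):
    # Nearest-neighbour resampling to a common grid (simplest choice).
    idx = (np.arange(size) * img.shape[0] / size).astype(int)
    return img[np.ix_(idx, idx)]

# Early fusion: resample every band onto one grid, stack as channels,
# and feed a single network.
early = np.stack([resample(im, 64) for im in (galex, ps, unwise)])

# Late fusion: encode each band separately, then concatenate the
# latent vectors (a global mean/std pair stands in for an encoder).
late = np.concatenate([[im.mean(), im.std()] for im in (galex, ps, unwise)])
```

Early fusion yields one multi-channel image; late fusion yields one concatenated latent vector. The abstract reports the two perform comparably.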
2501.09114
|
Generative Medical Image Anonymization Based on Latent Code Projection
and Optimization
|
cs.CV cs.AI
|
Medical image anonymization aims to protect patient privacy by removing
identifying information, while preserving the data utility to solve downstream
tasks. In this paper, we address the medical image anonymization problem with a
two-stage solution: latent code projection and optimization. In the projection
stage, we design a streamlined encoder to project input images into a latent
space and propose a co-training scheme to enhance the projection process. In
the optimization stage, we refine the latent code using two deep loss functions
designed to address the trade-off between identity protection and data utility
dedicated to medical images. Through a comprehensive set of qualitative and
quantitative experiments, we showcase the effectiveness of our approach on the
MIMIC-CXR chest X-ray dataset by generating anonymized synthetic images that
can serve as a training set for detecting lung pathologies. Source code is
available at https://github.com/Huiyu-Li/GMIA.
|
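The optimization stage's identity/utility trade-off can be mimicked with two quadratic stand-in losses on a 2-D "latent code"; the targets, weighting, and step size are all illustrative assumptions, not the paper's deep losses:

```python
import numpy as np

id_target = np.array([0.0, 0.0])      # where identity features are pushed
util_anchor = np.array([1.0, 2.0])    # stay close to the projected code

def losses(z, lam=0.5):
    # Combined objective: remove identity, preserve utility.
    identity = np.sum((z - id_target) ** 2)
    utility = np.sum((z - util_anchor) ** 2)
    return identity + lam * utility

# Plain gradient descent on the combined quadratic objective.
z = util_anchor.copy()
for _ in range(200):
    grad = 2 * (z - id_target) + 0.5 * 2 * (z - util_anchor)
    z = z - 0.05 * grad
# Closed-form optimum: (id_target + lam * util_anchor) / (1 + lam).
```

The converged code sits between the two targets, exactly the identity-vs-utility compromise the optimization stage negotiates.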
2501.09116
|
Deep Distance Map Regression Network with Shape-aware Loss for
Imbalanced Medical Image Segmentation
|
eess.IV cs.CV
|
Small object segmentation, like tumor segmentation, is a difficult and
critical task in the field of medical image analysis. Although deep learning
based methods have achieved promising performance, they are restricted to the
use of binary segmentation masks. Inspired by the rigorous mapping between a
binary segmentation mask and a distance map, we adopt the distance map as a
novel ground truth and employ a network to fulfill the computation of the
distance map. Specifically, we propose a new segmentation framework that
incorporates an existing binary segmentation network and a lightweight
regression network (dubbed LR-Net). The LR-Net converts the distance map
computation into a regression task and leverages the rich information of
distance maps. Additionally, we derive a shape-aware loss by employing
distance maps as penalty maps to infer the complete shape of an object. We
evaluated our approach on the MICCAI 2017 Liver Tumor Segmentation (LiTS)
Challenge dataset and a clinical dataset. Experimental results show that our
approach outperforms classification-based methods as well as other existing
state-of-the-art methods.
|
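The mask-to-distance-map correspondence and a distance-weighted penalty can be sketched with a brute-force transform on a tiny mask. The exact penalty form below is illustrative, not necessarily the paper's loss; in practice one would use an efficient transform such as SciPy's `distance_transform_edt`:

```python
import numpy as np

def distance_map(mask):
    # Unsigned Euclidean distance from every pixel to the nearest
    # foreground pixel (brute force over foreground, fine for tiny masks).
    ys, xs = np.nonzero(mask)
    fg = np.stack([ys, xs], axis=1)
    h, w = mask.shape
    grid = np.stack(np.meshgrid(np.arange(h), np.arange(w),
                                indexing="ij"), axis=-1)
    d = np.linalg.norm(grid[:, :, None, :] - fg[None, None, :, :], axis=-1)
    return d.min(axis=2)

def shape_aware_loss(pred, target_mask):
    # Penalise errors more the farther they fall from the object,
    # using the distance map as a pixelwise penalty map.
    penalty = 1.0 + distance_map(target_mask)
    return float(np.mean(penalty * (pred - target_mask) ** 2))

mask = np.zeros((8, 8)); mask[3:5, 3:5] = 1.0   # tiny "tumor"
```

A false positive far from the tumor costs more than one on its boundary, which is how the distance penalty encourages recovering the complete shape.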
2501.09117
|
Multi-Class Traffic Assignment using Multi-View Heterogeneous Graph
Attention Networks
|
cs.LG
|
Solving the traffic assignment problem for large networks is computationally
challenging when conventional optimization-based methods are used. In our
research, we develop an innovative surrogate model for traffic assignment
when multi-class vehicles are involved. We do so by employing heterogeneous
graph neural networks which use a multiple-view graph attention mechanism
tailored to different vehicle classes, along with additional links connecting
origin-destination pairs. We also integrate the node-based flow conservation
law into the loss function. As a result, our model adheres to flow conservation
while delivering highly accurate predictions for link flows and utilization
ratios. Through numerical experiments conducted on urban transportation
networks, we demonstrate that our model surpasses traditional neural network
approaches in convergence speed and predictive accuracy in both user
equilibrium and system optimal versions of traffic assignment.
|
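The node-based flow conservation term added to the loss can be written with a node-link incidence matrix: predicted link flows must balance node demands. The 3-node network and demand vector below are a made-up single-class example; the paper sums such terms over vehicle classes:

```python
import numpy as np

# Toy network, links: 0->1, 1->2, 0->2.
# Incidence matrix: +1 where a link leaves a node, -1 where it enters.
A = np.array([[ 1.0,  0.0,  1.0],   # node 0
              [-1.0,  1.0,  0.0],   # node 1
              [ 0.0, -1.0, -1.0]])  # node 2
demand = np.array([2.0, 0.0, -2.0])  # 2 units travel from node 0 to node 2

def conservation_loss(link_flows):
    # Squared residual of node-based flow conservation; adding this to
    # the prediction loss steers the surrogate toward feasible flows.
    residual = A @ link_flows - demand
    return float(np.mean(residual ** 2))

balanced = np.array([1.0, 1.0, 1.0])    # 1 unit via node 1, 1 unit direct
unbalanced = np.array([2.0, 0.0, 1.0])  # flow vanishes at node 1
```

A feasible flow pattern incurs zero penalty; any pattern that creates or destroys flow at a node is penalized quadratically.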
2501.09126
|
Augmenting Human-Annotated Training Data with Large Language Model
Generation and Distillation in Open-Response Assessment
|
cs.CL cs.CY cs.LG
|
Large Language Models (LLMs) like GPT-4o can help automate text
classification tasks at low cost and scale. However, there are major concerns
about the validity and reliability of LLM outputs. By contrast, human coding is
generally more reliable but expensive to procure at scale. In this study, we
propose a hybrid solution to leverage the strengths of both. We combine
human-coded data and synthetic LLM-produced data to fine-tune a classical
machine learning classifier, distilling both into a smaller BERT model. We
evaluate our method on a human-coded test set as a validity measure for LLM
output quality. In three experiments, we systematically vary LLM-generated
samples' size, variety, and consistency, informed by best practices in LLM
tuning. Our findings indicate that augmenting datasets with synthetic samples
improves classifier performance, with optimal results achieved at an 80%
synthetic to 20% human-coded data ratio. Lower temperature settings of 0.3,
corresponding to less variability in LLM generations, produced more stable
improvements but also limited model learning from augmented samples. In
contrast, higher temperature settings (0.7 and above) introduced greater
variability in performance estimates and, at times, lower performance. Hence,
LLMs may produce more uniform output that classifiers overfit to earlier or
produce more diverse output that runs the risk of deteriorating model
performance through information irrelevant to the prediction task. Filtering
out inconsistent synthetic samples did not enhance performance. We conclude
that integrating human and LLM-generated data to improve text classification
models in assessment offers a scalable solution that leverages both the
accuracy of human coding and the variety of LLM outputs.
|
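The reported 80% synthetic / 20% human-coded optimum is a simple data-mixing recipe; a sketch of assembling such a training set (text strings and counts are placeholders, not the study's data):

```python
import random

random.seed(0)
human = [("human text %d" % i, "label") for i in range(50)]       # stand-ins
synthetic = [("llm text %d" % i, "label") for i in range(1000)]

def mix(human, synthetic, synth_frac=0.8):
    # Build a training set at the target synthetic:human ratio; 0.8
    # reflects the 80%/20% optimum reported in the abstract.
    n_h = len(human)
    n_s = round(n_h * synth_frac / (1 - synth_frac))
    return human + random.sample(synthetic, n_s)

train = mix(human, synthetic)
```

With 50 human-coded examples, the mix draws 200 synthetic ones, giving the 80/20 split before fine-tuning the smaller BERT classifier.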
2501.09127
|
Multilingual LLMs Struggle to Link Orthography and Semantics in
Bilingual Word Processing
|
cs.CL
|
Bilingual lexical processing is shaped by the complex interplay of
phonological, orthographic, and semantic features of two languages within an
integrated mental lexicon. In humans, this is evident in the ease with which
cognate words - words similar in both orthographic form and meaning (e.g.,
blind, meaning "sightless" in both English and German) - are processed,
compared to the challenges posed by interlingual homographs, which share
orthographic form but differ in meaning (e.g., gift, meaning "present" in
English but "poison" in German). We investigate how multilingual Large Language
Models (LLMs) handle such phenomena, focusing on English-Spanish,
English-French, and English-German cognates, non-cognate, and interlingual
homographs. Specifically, we evaluate their ability to disambiguate meanings
and make semantic judgments, both when these word types are presented in
isolation or within sentence contexts. Our findings reveal that while certain
LLMs demonstrate strong performance in recognizing cognates and non-cognates in
isolation, they exhibit significant difficulty in disambiguating interlingual
homographs, often performing below random baselines. This suggests LLMs tend to
rely heavily on orthographic similarities rather than semantic understanding
when interpreting interlingual homographs. Further, we find that LLMs exhibit
difficulty in retrieving word meanings, with performance on isolated-word
disambiguation tasks showing no correlation with semantic understanding.
Finally, we study how the LLM processes interlingual homographs in incongruent
sentences. We find models to opt for different strategies in understanding
English and non-English homographs, highlighting a lack of a unified approach
to handling cross-lingual ambiguities.
|
2501.09129
|
Deep Self-Supervised Disturbance Mapping with the OPERA Sentinel-1
Radiometric Terrain Corrected SAR Backscatter Product
|
cs.CV cs.LG eess.IV
|
Mapping land surface disturbances supports disaster response, resource and
ecosystem management, and climate adaptation efforts. Synthetic aperture radar
(SAR) is an invaluable tool for disturbance mapping, providing consistent
time-series images of the ground regardless of weather or illumination
conditions. Despite SAR's potential for disturbance mapping, processing SAR
data to an analysis-ready format requires expertise and significant compute
resources, particularly for large-scale global analysis. In October 2023,
NASA's Observational Products for End-Users from Remote Sensing Analysis
(OPERA) project released the near-global Radiometric Terrain Corrected SAR
backscatter from Sentinel-1 (RTC-S1) dataset, providing publicly available,
analysis-ready SAR imagery. In this work, we utilize this new dataset to
systematically analyze land surface disturbances. As labeling SAR data is often
prohibitively time-consuming, we train a self-supervised vision transformer -
which requires no labels to train - on OPERA RTC-S1 data to estimate a
per-pixel distribution from the set of baseline imagery and assess disturbances
when there is significant deviation from the modeled distribution. To test our
model's capability and generality, we evaluate three different natural
disasters - which represent high-intensity, abrupt disturbances - from three
different regions of the world. Across events, our approach yields high quality
delineations: F1 scores exceeding 0.6 and Areas Under the Precision-Recall
Curve exceeding 0.65, consistently outperforming existing SAR disturbance
methods. Our findings suggest that a self-supervised vision transformer is
well-suited for global disturbance mapping and can be a valuable tool for
operational, near-global disturbance monitoring, particularly when labeled data
does not exist.
|
2501.09134
|
Benchmarking Robustness of Contrastive Learning Models for Medical
Image-Report Retrieval
|
cs.CV cs.AI cs.CL cs.IR cs.LG
|
Medical images and reports offer invaluable insights into patient health.
However, the heterogeneity and complexity of these data hinder effective
analysis. To bridge
this gap, we investigate contrastive learning models for cross-domain
retrieval, which associates medical images with their corresponding clinical
reports. This study benchmarks the robustness of four state-of-the-art
contrastive learning models: CLIP, CXR-RePaiR, MedCLIP, and CXR-CLIP. We
introduce an occlusion retrieval task to evaluate model performance under
varying levels of image corruption. Our findings reveal that all evaluated
models are highly sensitive to out-of-distribution data, as evidenced by the
proportional decrease in performance with increasing occlusion levels. While
MedCLIP exhibits slightly more robustness, its overall performance remains
significantly behind CXR-CLIP and CXR-RePaiR. CLIP, trained on a
general-purpose dataset, struggles with medical image-report retrieval,
highlighting the importance of domain-specific training data. The evaluation of
this work suggests that more effort needs to be spent on improving the
robustness of these models. By addressing these limitations, we can develop
more reliable cross-domain retrieval models for medical applications.
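The occlusion retrieval task can be illustrated with a toy sketch: mask an increasing fraction of an image and watch a stand-in embedding drift. The mean-pooling "embedder" below is purely illustrative, not one of the benchmarked models.

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.random((64, 64))                  # stand-in for a chest X-ray

def occlude(image, frac):
    out = image.copy()
    out[: int(image.shape[0] * frac), :] = 0.0   # zero the top rows
    return out

def embed(image):
    # toy embedding: 8x8 average pooling, flattened
    return image.reshape(8, 8, 8, 8).mean(axis=(1, 3)).ravel()

base = embed(img)
drift = [float(np.linalg.norm(embed(occlude(img, f)) - base))
         for f in (0.0, 0.25, 0.5)]
print([round(d, 3) for d in drift])  # distance grows with occlusion level
```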
|
2501.09136
|
Agentic Retrieval-Augmented Generation: A Survey on Agentic RAG
|
cs.AI cs.CL cs.IR
|
Large Language Models (LLMs) have revolutionized artificial intelligence (AI)
by enabling human-like text generation and natural language understanding.
However, their reliance on static training data limits their ability to respond
to dynamic, real-time queries, resulting in outdated or inaccurate outputs.
Retrieval-Augmented Generation (RAG) has emerged as a solution, enhancing LLMs
by integrating real-time data retrieval to provide contextually relevant and
up-to-date responses. Despite its promise, traditional RAG systems are
constrained by static workflows and lack the adaptability required for
multistep reasoning and complex task management.
Agentic Retrieval-Augmented Generation (Agentic RAG) transcends these
limitations by embedding autonomous AI agents into the RAG pipeline. These
agents leverage agentic design patterns (reflection, planning, tool use, and
multi-agent collaboration) to dynamically manage retrieval strategies,
iteratively refine contextual understanding, and adapt workflows to meet
complex task requirements. This integration enables Agentic RAG systems to
deliver unparalleled flexibility, scalability, and context awareness across
diverse applications.
This survey provides a comprehensive exploration of Agentic RAG, beginning
with its foundational principles and the evolution of RAG paradigms. It
presents a detailed taxonomy of Agentic RAG architectures, highlights key
applications in industries such as healthcare, finance, and education, and
examines practical implementation strategies. Additionally, it addresses
challenges in scaling these systems, ensuring ethical decision making, and
optimizing performance for real-world applications, while providing detailed
insights into frameworks and tools for implementing Agentic RAG.
|
2501.09137
|
Gradient Descent Converges Linearly to Flatter Minima than Gradient Flow
in Shallow Linear Networks
|
cs.LG math.OC stat.ML
|
We study the gradient descent (GD) dynamics of a depth-2 linear neural
network with a single input and output. We show that GD converges at an
explicit linear rate to a global minimum of the training loss, even with a
large stepsize -- about $2/\textrm{sharpness}$. It still converges for even
larger stepsizes, but may do so very slowly. We also characterize the solution
to which GD converges, which has lower norm and sharpness than the gradient
flow solution. Our analysis reveals a trade-off between the speed of
convergence and the magnitude of implicit regularization. This sheds light on
the benefits of training at the ``Edge of Stability'', which induces additional
regularization by delaying convergence and may have implications for training
more complex models.
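A toy instance of this setting (assumed for illustration, not the paper's experiments): a depth-2 network x -> a·b·x with target slope 1 and loss L(a,b) = ½(ab − 1)². At a minimum ab = 1, the sharpness (top Hessian eigenvalue) is a² + b², so the final factor norms reveal which minimum GD found.

```python
def gd(a, b, step, iters=4000):
    # plain gradient descent on L(a, b) = 0.5 * (a*b - 1)**2
    for _ in range(iters):
        r = a * b - 1.0
        a, b = a - step * r * b, b - step * r * a
    return a, b

a0, b0 = 2.0, 0.25                 # unbalanced initialization
a_s, b_s = gd(a0, b0, 0.01)        # small step: tracks gradient flow
a_l, b_l = gd(a0, b0, 0.5)         # large step, near 2/sharpness

print(round(a_s * b_s, 4), round(a_l * b_l, 4))  # both reach a*b = 1
# the large-stepsize solution is flatter (smaller a**2 + b**2)
print(a_s**2 + b_s**2 > a_l**2 + b_l**2)  # True
```

With the small stepsize, GD approximately conserves a² − b² (as gradient flow does) and lands at a sharper minimum; the large stepsize cannot settle at any minimum sharper than 2/step, so it drifts to a flatter one, matching the abstract's claim.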
|
2501.09138
|
Few-Shot Adaptation of Training-Free Foundation Model for 3D Medical
Image Segmentation
|
cs.CV
|
Vision foundation models have achieved remarkable progress across various
image analysis tasks. In the image segmentation task, foundation models like
the Segment Anything Model (SAM) enable generalizable zero-shot segmentation
through user-provided prompts. However, SAM, primarily trained on natural
images, lacks the domain-specific expertise of medical imaging. This limitation
poses challenges when applying SAM to medical image segmentation, including the
need for extensive fine-tuning on specialized medical datasets and a dependency
on manual prompts, which are both labor-intensive and require intervention from
medical experts.
This work introduces the Few-shot Adaptation of Training-frEe SAM (FATE-SAM),
a novel method designed to adapt the advanced Segment Anything Model 2 (SAM2)
for 3D medical image segmentation. FATE-SAM reassembles pre-trained modules of
SAM2 to enable few-shot adaptation, leveraging a small number of support
examples to capture anatomical knowledge and perform prompt-free segmentation,
without requiring model fine-tuning. To handle the volumetric nature of medical
images, we incorporate a Volumetric Consistency mechanism that enhances spatial
coherence across 3D slices. We evaluate FATE-SAM on multiple medical imaging
datasets and compare it with supervised learning methods, zero-shot SAM
approaches, and fine-tuned medical SAM methods. Results show that FATE-SAM
delivers robust and accurate segmentation while eliminating the need for large
annotated datasets and expert intervention. FATE-SAM provides a practical,
efficient solution for medical image segmentation, making it more accessible
for clinical applications.
|
2501.09143
|
Reducing real-time complexity via sub-control Lyapunov functions: from
theory to experiments
|
eess.SY cs.SY math.OC
|
The techniques to design control Lyapunov functions (CLF), along with a
proper stabilizing feedback, possibly in the presence of constraints, often
provide control laws that are too complex for proper implementation online,
especially when an optimization problem is involved. In this work, we show how
to acquire an alternative, computationally attractive feedback. Given a nominal
CLF and a nominal state feedback, we say that a different positive definite
function is a Sub-control Lyapunov function (SCLF) if its Lyapunov derivative
is negative-definite and bounded above by the Lyapunov derivative of the
nominal function with the nominal control. It turns out that if we consider a
family of basis functions, then a SCLF can be computed by linear programming,
with an infinite number of constraints. The idea is that although the offline
computational burden to achieve the new controller and solve the linear program
is considerable, the online computational burden is drastically reduced.
Comprehensive simulations and experiments on drone control are conducted to
demonstrate the effectiveness of the study.
|
2501.09146
|
Towards Federated Multi-Armed Bandit Learning for Content Dissemination
using Swarm of UAVs
|
cs.LG cs.NI
|
This paper introduces an Unmanned Aerial Vehicle (UAV)-enabled content management
architecture that is suitable for critical content access in communities of
users that are communication-isolated during diverse types of disaster
scenarios. The proposed architecture leverages a hybrid network of stationary
anchor UAVs and mobile Micro-UAVs for ubiquitous content dissemination. The
anchor UAVs are equipped with both vertical and lateral communication links,
and they serve local users, while the mobile micro-ferrying UAVs extend
coverage across communities with increased mobility. The focus is on developing
a content dissemination system that dynamically learns optimal caching policies
to maximize content availability. The core innovation is an adaptive content
dissemination framework based on distributed Federated Multi-Armed Bandit
learning. The goal is to optimize UAV content caching decisions based on
geo-temporal content popularity and user demand variations. A Selective Caching
Algorithm is also introduced to reduce redundant content replication by
incorporating inter-UAV information sharing. This method strategically
preserves the uniqueness in user preferences while amalgamating the
intelligence across a distributed learning system. This approach improves the
learning algorithm's ability to adapt to diverse user preferences. Functional
verification and performance evaluation confirm the proposed architecture's
utility across different network sizes, UAV swarms, and content popularity
patterns.
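As a rough single-UAV illustration of the bandit idea (invented popularity values; not the paper's federated algorithm), an epsilon-greedy learner can discover which content item yields the most cache hits:

```python
import random

def run_bandit(popularity, steps=5000, eps=0.1, seed=1):
    # epsilon-greedy choice of which content item to cache
    rng = random.Random(seed)
    counts = [0] * len(popularity)
    values = [0.0] * len(popularity)
    for _ in range(steps):
        if rng.random() < eps:
            arm = rng.randrange(len(popularity))      # explore
        else:
            arm = max(range(len(popularity)), key=values.__getitem__)
        reward = 1.0 if rng.random() < popularity[arm] else 0.0  # cache hit?
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]  # running mean
    return max(range(len(popularity)), key=values.__getitem__)

best = run_bandit([0.2, 0.7, 0.4])   # item 1 is requested most often
print(best)
```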
|
2501.09154
|
Towards Multilingual LLM Evaluation for Baltic and Nordic languages: A
study on Lithuanian History
|
cs.CL cs.AI
|
In this work, we evaluated Lithuanian and general history knowledge of
multilingual Large Language Models (LLMs) on a multiple-choice
question-answering task. The models were tested on a dataset of Lithuanian
national and general history questions translated into Baltic, Nordic, and
other languages (English, Ukrainian, Arabic) to assess the knowledge sharing
from culturally and historically connected groups. We evaluated GPT-4o,
LLaMa3.1 8b and 70b, QWEN2.5 7b and 72b, Mistral Nemo 12b, LLaMa3 8b, Mistral
7b, LLaMa3.2 3b, and Nordic fine-tuned models (GPT-SW3 and LLaMa3 8b).
Our results show that GPT-4o consistently outperformed all other models
across language groups, with slightly better results for Baltic and Nordic
languages. Larger open-source models like QWEN2.5 72b and LLaMa3.1 70b
performed well but showed weaker alignment with Baltic languages. Smaller
models (Mistral Nemo 12b, LLaMa3.2 3b, QWEN 7B, LLaMa3.1 8B, and LLaMa3 8b)
showed weaker alignment with Baltic languages, including Lithuanian, while
performing better on Nordic and other languages. The Nordic fine-tuned models
did not surpass multilingual models, indicating that shared cultural or
historical context alone does not guarantee better performance.
|
2501.09155
|
VCRScore: Image captioning metric based on V\&L Transformers, CLIP, and
precision-recall
|
cs.CV cs.CL
|
Image captioning has become an essential Vision & Language research task. It
is about predicting the most accurate caption given a specific image or video.
The research community has achieved impressive results by continuously
proposing new models and approaches to improve the overall model's performance.
Nevertheless, despite these increasing proposals, the performance metrics used
to measure their advances have remained practically untouched over the years.
As evidence of that, metrics like BLEU, METEOR, CIDEr, and ROUGE are still
widely used today, alongside more sophisticated metrics such as BERTScore and
CLIPScore.
Hence, it is essential to adjust how we measure the advances, limitations, and
scope of new image captioning proposals, as well as to adapt new metrics to
these advanced image captioning approaches.
This work proposes a new evaluation metric for the image captioning problem.
To do so, we first generated a human-labeled dataset to assess the degree to
which captions correlate with the image's content. Taking these human scores
as ground truth, we propose a new metric and compare it with several
well-known metrics, from classical to newer ones. The proposed metric
outperformed existing ones, and interesting insights are presented and
discussed.
|
2501.09158
|
Evaluating GenAI for Simplifying Texts for Education: Improving Accuracy
and Consistency for Enhanced Readability
|
cs.CL
|
Generative artificial intelligence (GenAI) holds great promise as a tool to
support personalized learning. Teachers need tools to efficiently and
effectively enhance content readability of educational texts so that they are
matched to individual students' reading levels, while retaining key details.
Large Language Models (LLMs) show potential to fill this need, but previous
research notes multiple shortcomings in current approaches. In this study, we
introduced a generalized approach and metrics for systematically evaluating
the accuracy and consistency with which LLMs, prompting techniques, and a novel
multi-agent architecture simplify sixty informational reading passages,
reducing each from the twelfth-grade level down to the eighth-, sixth-, and
fourth-grade levels. We calculated the degree to which each LLM and prompting
technique accurately achieved the targeted grade level for each passage,
percentage change in word count, and consistency in maintaining keywords and
key phrases (semantic similarity). One-sample t-tests and multiple regression
models revealed significant differences in the best performing LLM and prompt
technique for each of the four metrics. Both LLMs and prompting techniques
demonstrated variable utility in grade level accuracy and consistency of
keywords and key phrases when attempting to level content down to the fourth
grade reading level. These results demonstrate the promise of the application
of LLMs for efficient and precise automated text simplification, the
shortcomings of current models and prompting methods in attaining an ideal
balance across various evaluation criteria, and a generalizable method to
evaluate future systems.
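Grade-level accuracy checks of this kind often rest on readability formulas; below is a rough Flesch-Kincaid grade sketch with a naive vowel-group syllable counter (a heuristic stand-in, not the authors' exact metric):

```python
import re

def syllables(word):
    # crude heuristic: count groups of consecutive vowels
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def fk_grade(text):
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syl = sum(syllables(w) for w in words)
    # Flesch-Kincaid grade level formula
    return 0.39 * len(words) / sentences + 11.8 * syl / len(words) - 15.59

simple = "The cat sat. The dog ran."
hard = ("Sophisticated educational infrastructure necessitates "
        "comprehensive evaluation.")
print(fk_grade(simple) < fk_grade(hard))  # True: harder text, higher grade
```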
|
2501.09160
|
AutoLoop: Fast Visual SLAM Fine-tuning through Agentic Curriculum
Learning
|
cs.RO cs.AI cs.LG
|
Current visual SLAM systems face significant challenges in balancing
computational efficiency with robust loop closure handling. Traditional
approaches require careful manual tuning and incur substantial computational
overhead, while learning-based methods either lack explicit loop closure
capabilities or implement them through computationally expensive methods. We
present AutoLoop, a novel approach that combines automated curriculum learning
with efficient fine-tuning for visual SLAM systems. Our method employs a DDPG
(Deep Deterministic Policy Gradient) agent to dynamically adjust loop closure
weights during training, eliminating the need for manual hyperparameter search
while significantly reducing the required training steps. The approach
pre-computes potential loop closure pairs offline and leverages them through an
agent-guided curriculum, allowing the model to adapt efficiently to new
scenarios. Experiments, with training conducted on TartanAir and validation
across multiple benchmarks including KITTI, EuRoC, ICL-NUIM, and TUM RGB-D,
demonstrate
that AutoLoop achieves comparable or superior performance while reducing
training time by an order of magnitude compared to traditional approaches.
AutoLoop provides a practical solution for rapid adaptation of visual SLAM
systems, automating the weight tuning process that traditionally requires
multiple manual iterations. Our results show that this automated curriculum
strategy not only accelerates training but also maintains or improves the
model's performance across diverse environmental conditions.
|
2501.09162
|
A Vessel Bifurcation Landmark Pair Dataset for Abdominal CT Deformable
Image Registration (DIR) Validation
|
cs.CV physics.med-ph
|
Deformable image registration (DIR) is an enabling technology in many
diagnostic and therapeutic tasks. Despite this, DIR algorithms have limited
clinical use, largely due to a lack of benchmark datasets for quality assurance
during development. To support future algorithm development, here we introduce
our first-of-its-kind abdominal CT DIR benchmark dataset, comprising large
numbers of highly accurate landmark pairs on matching blood vessel
bifurcations. Abdominal CT image pairs of 30 patients were acquired from
several public repositories as well as the authors' institution with IRB
approval. The two CTs of each pair were originally acquired for the same
patient on different days. An image processing workflow was developed and
applied to each image pair: 1) Abdominal organs were segmented with a deep
learning model, and image intensity within organ masks was overwritten. 2)
Matching image patches were manually identified between two CTs of each image
pair. 3) Vessel bifurcation landmarks were labeled on one image of each image
patch pair. 4) Image patches were deformably registered, and landmarks were
projected onto the second image. 5) Landmark pair locations were refined
manually or with an automated process. This workflow resulted in 1895 total
landmark pairs, or 63 per case on average. Estimates of the landmark pair
accuracy using digital phantoms were 0.7+/-1.2mm. The data is published in
Zenodo at https://doi.org/10.5281/zenodo.14362785. Instructions for use can be
found at https://github.com/deshanyang/Abdominal-DIR-QA. This dataset is a
first-of-its-kind for abdominal DIR validation. The number, accuracy, and
distribution of landmark pairs will allow for robust validation of DIR
algorithms with precision beyond what is currently available.
|
2501.09163
|
Towards Understanding Extrapolation: a Causal Lens
|
cs.LG cs.AI stat.ML
|
Canonical work handling distribution shifts typically necessitates an entire
target distribution that lands inside the training distribution. However,
practical scenarios often involve only a handful of target samples, potentially
lying outside the training support, which requires the capability of
extrapolation. In this work, we aim to provide a theoretical understanding of
when extrapolation is possible and offer principled methods to achieve it
without requiring an on-support target distribution. To this end, we formulate
the extrapolation problem with a latent-variable model that embodies the
minimal change principle in causal mechanisms. Under this formulation, we cast
the extrapolation problem into a latent-variable identification problem. We
provide realistic conditions on shift properties and the estimation objectives
that lead to identification even when only one off-support target sample is
available, tackling the most challenging scenarios. Our theory reveals the
intricate interplay between the underlying manifold's smoothness and the shift
properties. We showcase how our theoretical results inform the design of
practical adaptation algorithms. Through experiments on both synthetic and
real-world data, we validate our theoretical findings and their practical
implications.
|
2501.09164
|
The Veln(ia)s is in the Details: Evaluating LLM Judgment on Latvian and
Lithuanian Short Answer Matching
|
cs.CL cs.AI
|
In this work, we address the challenge of evaluating large language models
(LLMs) on the short answer matching task for Latvian and Lithuanian languages.
We introduce novel datasets consisting of 502 Latvian and 690 Lithuanian
question-answer pairs. For each question-answer pair, we generated matched and
non-matched answers using a set of alteration rules specifically designed to
introduce small but meaningful changes in the text. These generated answers
serve as test cases to assess the ability of LLMs to detect subtle differences
in matching of the original answers. A subset of the datasets was manually
verified for quality and accuracy. Our results show that while larger LLMs,
such as QWEN2.5 72b and LLaMa3.1 70b, demonstrate near-perfect performance in
distinguishing matched and non-matched answers, smaller models show more
variance. For instance, LLaMa3.1 8b and EuroLLM 9b benefited from few-shot
examples, while Mistral Nemo 12b underperformed on detecting subtle text
alterations, particularly in Lithuanian, even with additional examples. QWEN2.5
7b and Mistral 7b obtained strong performance, comparable to the larger 70b
models, in zero- and few-shot experiments, although Mistral 7b's performance
was weaker in the few-shot setting.
|
2501.09166
|
Attention is All You Need Until You Need Retention
|
cs.LG cs.AI
|
This work introduces a novel Retention Layer mechanism for Transformer based
architectures, addressing their inherent lack of intrinsic retention
capabilities. Unlike human cognition, which can encode and dynamically recall
symbolic templates, Generative Pretrained Transformers rely solely on fixed
pretrained weights and ephemeral context windows, limiting their adaptability.
The proposed Retention Layer incorporates a persistent memory module capable of
real time data population, dynamic recall, and guided output generation. This
enhancement allows models to store, update, and reuse observed patterns across
sessions, enabling incremental learning and bridging the gap between static
pretraining and dynamic, context sensitive adaptation. The Retention Layer
design parallels social learning processes, encompassing attention, retention,
reproduction, and motivation stages. Technically, it integrates a memory
attention mechanism and episodic buffers to manage memory scalability, mitigate
overfitting, and ensure efficient recall. Applications span adaptive personal
assistants, real time fraud detection, autonomous robotics, content moderation,
and healthcare diagnostics. In each domain, the retention mechanism enables
systems to learn incrementally, personalize outputs, and respond to evolving
real world challenges effectively. By emulating key aspects of human learning,
this retention enhanced architecture fosters a more fluid and responsive AI
paradigm, paving the way for dynamic, session aware models that extend the
capabilities of traditional Transformers into domains requiring continual
adaptation.
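A minimal sketch of the persistent-memory idea (illustrative only; the class below is not the paper's architecture): store key-value vectors across calls and recall them with softmax attention over the stored keys.

```python
import numpy as np

class RetentionMemory:
    """Toy persistent memory with attention-based recall."""
    def __init__(self, dim):
        self.keys = np.empty((0, dim))
        self.values = np.empty((0, dim))

    def store(self, key, value):
        # append one (key, value) pair; persists across recall calls
        self.keys = np.vstack([self.keys, key])
        self.values = np.vstack([self.values, value])

    def recall(self, query):
        scores = self.keys @ query             # similarity to stored keys
        w = np.exp(scores - scores.max())
        w /= w.sum()                           # softmax attention weights
        return w @ self.values                 # weighted readout

mem = RetentionMemory(3)
mem.store(np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0]))
mem.store(np.array([0.0, 1.0, 0.0]), np.array([0.0, 0.0, 1.0]))
out = mem.recall(np.array([10.0, 0.0, 0.0]))   # strongly matches key 0
print(out.round(2))  # [0. 1. 0.]
```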
|
2501.09167
|
Embodied Scene Understanding for Vision Language Models via MetaVQA
|
cs.CV cs.RO
|
Vision Language Models (VLMs) demonstrate significant potential as embodied
AI agents for various mobility applications. However, a standardized,
closed-loop benchmark for evaluating their spatial reasoning and sequential
decision-making capabilities is lacking. To address this, we present MetaVQA: a
comprehensive benchmark designed to assess and enhance VLMs' understanding of
spatial relationships and scene dynamics through Visual Question Answering
(VQA) and closed-loop simulations. MetaVQA leverages Set-of-Mark prompting and
top-down view ground-truth annotations from nuScenes and Waymo datasets to
automatically generate extensive question-answer pairs based on diverse
real-world traffic scenarios, ensuring object-centric and context-rich
instructions. Our experiments show that fine-tuning VLMs with the MetaVQA
dataset significantly improves their spatial reasoning and embodied scene
comprehension in safety-critical simulations, evident not only in improved VQA
accuracies but also in emerging safety-aware driving maneuvers. In addition,
the learning demonstrates strong transferability from simulation to real-world
observation. Code and data will be publicly available at
https://metadriverse.github.io/metavqa .
|
2501.09171
|
Generative AI Takes a Statistics Exam: A Comparison of Performance
between ChatGPT3.5, ChatGPT4, and ChatGPT4o-mini
|
stat.OT cs.LG
|
Many believe that use of generative AI as a private tutor has the potential
to shrink access and achievement gaps between students and schools with
abundant resources versus those with fewer resources. Shrinking the gap is
possible only if paid and free versions of the platforms perform with the same
accuracy. In this experiment, we investigate the performance of GPT versions
3.5, 4.0, and 4o-mini on the same 16-question statistics exam given to a class
of first-year graduate students. While we do not advocate using any generative
AI platform to complete an exam, the use of exam questions allows us to explore
aspects of ChatGPT's responses to typical questions that students might
encounter in a statistics course. Results on accuracy indicate that GPT 3.5
would fail the exam, GPT4 would perform well, and GPT4o-mini would perform
somewhere in between. While we acknowledge the existence of other Generative
AI/LLMs, our discussion concerns only ChatGPT because it is the most widely
used platform on college campuses at this time. We further investigate
differences among the AI platforms in the answers for each problem using
methods developed for text analytics, such as reading level evaluation and
topic modeling. Results indicate that GPT3.5 and 4o-mini have characteristics
that are more similar to each other than either has with GPT4.
|
2501.09173
|
Formalising the intentional stance 2: a coinductive approach
|
math.OC cs.SY eess.SY math.PR
|
Given a stochastic process with inputs and outputs, how might its behaviour
be related to pursuit of a goal? We model this using 'transducers', objects
that capture only the external behaviour of a system and not its internal
state. A companion paper summarises our results for cognitive scientists; the
current paper gives formal definitions and proofs.
To formalise the concept of a system that behaves as if it were pursuing a
goal, we consider what happens when a transducer (a 'policy') is coupled to
another transducer that comes equipped with a success condition (a
'teleo-environment'). An optimal policy is identified with a transducer that
behaves as if it were perfectly rational in the pursuit of a goal; our
framework also allows us to model constrained rationality.
Optimal policies obey a version of Bellman's principle: a policy that's
optimal in one time step will again be optimal in the next time step, but with
respect to a different teleo-environment (obtained from the original one by a
modified version of Bayesian filtering). This property sometimes also applies
to the bounded-rational case; we give a sufficient condition.
A policy is deterministic if and only if there exists a teleo-environment for
which it is uniquely optimal among the set of all policies; we relate this to
classical representation theorems from decision theory. This result need not
hold in the bounded-rational case; we give an example related to the
absent-minded driver problem. The formalism is defined using coinduction,
following the style proposed by Czajka.
|
2501.09174
|
Short-time Variational Mode Decomposition
|
cs.IT math.IT
|
Variational mode decomposition (VMD) and its extensions like Multivariate VMD
(MVMD) decompose signals into ensembles of band-limited modes with narrow
central frequencies. These methods utilize Fourier transformations to shift
signals between time and frequency domains. However, since Fourier
transformations span the entire time-domain signal, they are suboptimal for
non-stationary time series.
We introduce Short-Time Variational Mode Decomposition (STVMD), an innovative
extension of the VMD algorithm that incorporates the Short-Time Fourier
transform (STFT) to minimize the impact of local disturbances. STVMD segments
signals into short time windows, converting these segments into the frequency
domain. It then formulates a variational optimization problem to extract
band-limited modes representing the windowed data. The optimization aims to
minimize the sum of the bandwidths of these modes across the windowed data,
extending the cost functions used in VMD and MVMD. Solutions are derived using
the alternating direction method of multipliers, ensuring the extraction of
modes with narrow bandwidths.
STVMD is divided into dynamic and non-dynamic types, depending on whether the
central frequencies vary with time. Our experiments show that non-dynamic STVMD
is comparable to VMD with properly sized time windows, while dynamic STVMD
better accommodates non-stationary signals, evidenced by reduced mode function
errors and tracking of dynamic central frequencies. This effectiveness is
validated by steady-state visual-evoked potentials in electroencephalogram
signals.
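The windowing step that STVMD builds on can be sketched with a plain STFT (illustration only; the variational mode extraction itself is not reproduced, and the signal parameters are invented):

```python
import numpy as np

fs = 200                                  # sampling rate, Hz
t = np.arange(0, 2, 1 / fs)
# non-stationary test signal: 10 Hz for the first second, then 40 Hz
x = np.where(t < 1, np.sin(2 * np.pi * 10 * t), np.sin(2 * np.pi * 40 * t))

win, hop = 40, 20                         # window and hop sizes in samples
frames = np.stack([x[i:i + win] * np.hanning(win)
                   for i in range(0, len(x) - win + 1, hop)])
spec = np.abs(np.fft.rfft(frames, axis=1))     # magnitude spectrogram
freqs = np.fft.rfftfreq(win, 1 / fs)           # 5 Hz bin spacing
peak = freqs[spec.argmax(axis=1)]              # dominant frequency per window

print(peak[1], peak[-2])  # 10.0 40.0
```

Because the analysis is per-window, the dominant frequency tracks the jump from 10 Hz to 40 Hz, which a whole-signal Fourier transform would blur together.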
|
2501.09178
|
Enhancing Graph Representation Learning with Localized Topological
Features
|
cs.LG cs.SI
|
Representation learning on graphs is a fundamental problem that can be
crucial in various tasks. Graph neural networks, the dominant approach for
graph representation learning, are limited in their representation power.
Therefore, it can be beneficial to explicitly extract and incorporate
high-order topological and geometric information into these models. In this
paper, we propose a principled approach to extract the rich connectivity
information of graphs based on the theory of persistent homology. Our method
utilizes the topological features to enhance the representation learning of
graph neural networks and achieve state-of-the-art performance on various node
classification and link prediction benchmarks. We also explore the option of
end-to-end learning of the topological features, i.e., treating topological
computation as a differentiable operator during learning. Our theoretical
analysis and empirical study provide insights and potential guidelines for
employing topological features in graph learning tasks.
|
2501.09182
|
A Blockchain-Enabled Approach to Cross-Border Compliance and Trust
|
cs.AI cs.CR cs.CY cs.SE
|
As artificial intelligence (AI) systems become increasingly integral to
critical infrastructure and global operations, the need for a unified,
trustworthy governance framework is more urgent than ever. This paper proposes
a novel approach to AI governance, utilizing blockchain and distributed ledger
technologies (DLT) to establish a decentralized, globally recognized framework
that ensures security, privacy, and trustworthiness of AI systems across
borders. The paper presents specific implementation scenarios within the
financial sector, outlines a phased deployment timeline over the next decade,
and addresses potential challenges with solutions grounded in current research.
By synthesizing advancements in blockchain, AI ethics, and cybersecurity, this
paper offers a comprehensive roadmap for a decentralized AI governance
framework capable of adapting to the complex and evolving landscape of global
AI regulation.
|
2501.09185
|
Cancer-Net PCa-Seg: Benchmarking Deep Learning Models for Prostate
Cancer Segmentation Using Synthetic Correlated Diffusion Imaging
|
eess.IV cs.CV
|
Prostate cancer (PCa) is the most prevalent cancer among men in the United
States, accounting for nearly 300,000 cases, 29% of all diagnoses and 35,000
total deaths in 2024. Traditional screening methods such as prostate-specific
antigen (PSA) testing and magnetic resonance imaging (MRI) have been pivotal in
diagnosis, but have faced limitations in specificity and generalizability. In
this paper, we explore the potential of enhancing PCa lesion segmentation using
a novel MRI modality called synthetic correlated diffusion imaging (CDI$^s$).
We employ several state-of-the-art deep learning models, including U-Net,
SegResNet, Swin UNETR, Attention U-Net, and LightM-UNet, to segment PCa lesions
from a 200 CDI$^s$ patient cohort. We find that SegResNet achieved superior
segmentation performance with a Dice-Sorensen coefficient (DSC) of $76.68 \pm
0.8$. Notably, the Attention U-Net, while slightly less accurate (DSC $74.82
\pm 2.0$), offered a favorable balance between accuracy and computational
efficiency. Our findings demonstrate the potential of deep learning models in
improving PCa lesion segmentation using CDI$^s$ to enhance PCa management and
clinical support.
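The Dice-Sorensen coefficient used for evaluation above is standard; a minimal sketch for binary masks (note the abstract reports DSC on a 0-100 scale):

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-8):
    """Dice-Sorensen coefficient between two binary segmentation masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

pred = np.array([[1, 1, 0], [0, 1, 0]])
target = np.array([[1, 0, 0], [0, 1, 1]])
print(round(dice_coefficient(pred, target), 4))
```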
|
2501.09186
|
Guiding Retrieval using LLM-based Listwise Rankers
|
cs.IR cs.AI
|
Large Language Models (LLMs) have shown strong promise as rerankers,
especially in "listwise" settings where an LLM is prompted to rerank several
search results at once. However, this "cascading" retrieve-and-rerank
approach is limited by the bounded recall problem: relevant documents not
retrieved initially are permanently excluded from the final ranking. Adaptive
retrieval techniques address this problem, but do not work with listwise
rerankers because they assume a document's score is computed independently from
other documents. In this paper, we propose an adaptation of an existing
adaptive retrieval method that supports the listwise setting and helps guide
the retrieval process itself (thereby overcoming the bounded recall problem for
LLM rerankers). Specifically, our proposed algorithm merges results both from
the initial ranking and feedback documents provided by the most relevant
documents seen up to that point. Through extensive experiments across diverse
LLM rerankers, first stage retrievers, and feedback sources, we demonstrate
that our method can improve nDCG@10 by up to 13.23% and recall by 28.02%--all
while keeping the total number of LLM inferences constant and overheads due to
the adaptive process minimal. The work opens the door to leveraging LLM-based
search in settings where the initial pool of results is limited, e.g., by
legacy systems, or by the cost of deploying a semantic first-stage.
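The abstract does not spell out the merging algorithm; the following is a purely illustrative sketch of the general idea it describes, interleaving the remaining initial ranking with feedback documents from the best results seen so far under a fixed reranking budget:

```python
def merge_frontier(initial, feedback, reranked, budget):
    """Build the next batch to send to the listwise reranker by
    interleaving the remaining initial ranking with feedback documents
    of the most relevant results seen so far (illustrative only)."""
    seen = set(reranked)
    pool = []
    # documents suggested by (e.g. similar to) the current top results
    fb = [d for top in reranked[:3] for d in feedback.get(top, [])]
    init = [d for d in initial if d not in seen]
    while len(pool) < budget and (fb or init):
        for src in (fb, init):                    # alternate the two sources
            while src and (src[0] in seen or src[0] in pool):
                src.pop(0)                        # skip already-used documents
            if src and len(pool) < budget:
                pool.append(src.pop(0))
    return pool

batch = merge_frontier(initial=["d1", "d2", "d3", "d4"],
                       feedback={"d1": ["d9", "d2"]},
                       reranked=["d1"], budget=3)
print(batch)
```

Because the batch size is fixed, the number of LLM inference calls stays constant while feedback documents (here the hypothetical "d9") can enter the ranking despite being absent from the initial retrieval.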
|
2501.09187
|
Patch-aware Vector Quantized Codebook Learning for Unsupervised Visual
Defect Detection
|
cs.CV cs.AI cs.LG
|
Unsupervised visual defect detection is critical in industrial applications,
requiring a representation space that captures normal data features while
detecting deviations. Achieving a balance between expressiveness and
compactness is challenging; an overly expressive space risks inefficiency and
mode collapse, impairing detection accuracy. We propose a novel approach using
an enhanced VQ-VAE framework optimized for unsupervised defect detection. Our
model introduces a patch-aware dynamic code assignment scheme, enabling
context-sensitive code allocation to optimize spatial representation. This
strategy enhances normal-defect distinction and improves detection accuracy
during inference. Experiments on MVTecAD, BTAD, and MTSD datasets show our
method achieves state-of-the-art performance.
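The patch-aware dynamic assignment is the paper's contribution; the standard vector-quantization step it builds on, assigning each patch embedding to its nearest codebook entry, looks like this:

```python
import numpy as np

def assign_codes(patches, codebook):
    """Assign each patch embedding to its nearest codebook vector (L2)."""
    # (n, d) vs (k, d) -> (n, k) squared-distance matrix
    d2 = ((patches[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    return d2.argmin(axis=1)

codebook = np.array([[0.0, 0.0], [1.0, 1.0], [0.0, 2.0]])
patches = np.array([[0.1, 0.1], [0.9, 1.2], [-0.1, 1.8]])
codes = assign_codes(patches, codebook)
print(codes.tolist())
```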
|
2501.09189
|
Testing Noise Assumptions of Learning Algorithms
|
cs.LG cs.DS
|
We pose a fundamental question in computational learning theory: can we
efficiently test whether a training set satisfies the assumptions of a given
noise model? This question has remained unaddressed despite decades of research
on learning in the presence of noise. In this work, we show that this task is
tractable and present the first efficient algorithm to test various noise
assumptions on the training data.
To model this question, we extend the recently proposed testable learning
framework of Rubinfeld and Vasilyan (2023) and require a learner to run an
associated test that satisfies the following two conditions: (1) whenever the
test accepts, the learner outputs a classifier along with a certificate of
optimality, and (2) the test must pass for any dataset drawn according to a
specified modeling assumption on both the marginal distribution and the noise
model. We then consider the problem of learning halfspaces over Gaussian
marginals with Massart noise (where each label can be flipped with probability
less than $1/2$ depending on the input features), and give a fully-polynomial
time testable learning algorithm.
We also show a separation between the classical setting of learning in the
presence of structured noise and testable learning. In fact, for the simple
case of random classification noise (where each label is flipped with fixed
probability $\eta = 1/2$), we show that testable learning requires
super-polynomial time while classical learning is trivial.
|
2501.09192
|
Estimation-Aware Trajectory Optimization with Set-Valued Measurement
Uncertainties
|
math.OC cs.RO cs.SY eess.SY
|
In this paper, we present an optimization-based framework for generating
estimation-aware trajectories in scenarios where measurement (output)
uncertainties are state-dependent and set-valued. The framework leverages the
concept of regularity for set-valued output maps. Specifically, we demonstrate
that, for output-regular maps, one can utilize a set-valued observability
measure that is concave with respect to finite-horizon state trajectories. By
maximizing this measure, optimized estimation-aware trajectories can be
designed for a broad class of systems, including those with locally linearized
dynamics. To illustrate the effectiveness of the proposed approach, we provide
a representative example in the context of trajectory planning for vision-based
estimation. We present an estimation-aware trajectory for an uncooperative
target-tracking problem that uses a machine learning (ML)-based estimation
module on an ego-satellite.
|
2501.09194
|
Grounding Text-to-Image Diffusion Models for Controlled High-Quality
Image Generation
|
cs.CV cs.AI
|
Text-to-image (T2I) generative diffusion models have demonstrated outstanding
performance in synthesizing diverse, high-quality visuals from text captions.
Several layout-to-image models have been developed to control the generation
process by utilizing a wide range of layouts, such as segmentation maps, edges,
and human keypoints. In this work, we propose ObjectDiffusion, a model that
conditions T2I diffusion models on semantic and spatial grounding information,
enabling the precise rendering and placement of desired objects in specific
locations defined by bounding boxes. To achieve this, we make substantial
modifications to the network architecture introduced in ControlNet to integrate
it with the grounding method proposed in GLIGEN. We fine-tune ObjectDiffusion
on the COCO2017 training dataset and evaluate it on the COCO2017 validation
dataset. Our model improves the precision and quality of controllable image
generation, achieving an AP$_{\text{50}}$ of 46.6, an AR of 44.5, and an FID of
19.8, outperforming the current SOTA model trained on open-source datasets
across all three metrics. ObjectDiffusion demonstrates a distinctive capability
in synthesizing diverse, high-quality, high-fidelity images that seamlessly
conform to the semantic and spatial control layout. Evaluated in qualitative
and quantitative tests, ObjectDiffusion exhibits remarkable grounding
capabilities in closed-set and open-set vocabulary settings across a wide
variety of contexts. The qualitative assessment verifies the ability of
ObjectDiffusion to generate multiple detailed objects in varying sizes, forms,
and locations.
|
2501.09198
|
Combining Movement Primitives with Contraction Theory
|
cs.RO
|
This paper presents a modular framework for motion planning using movement
primitives. Central to the approach is Contraction Theory, a modular stability
tool for nonlinear dynamical systems. The approach extends prior methods by
achieving parallel and sequential combinations of both discrete and rhythmic
movements, while enabling independent modulation of each movement. This modular
framework enables a divide-and-conquer strategy to simplify the programming of
complex robot motion planning. Simulation examples illustrate the flexibility
and versatility of the framework, highlighting its potential to address diverse
challenges in robot motion planning.
|
2501.09203
|
Unified Few-shot Crack Segmentation and its Precise 3D Automatic
Measurement in Concrete Structures
|
cs.CV cs.RO
|
Visual-spatial systems have become increasingly essential in concrete crack
inspection. However, existing methods often lack adaptability to diverse
scenarios, exhibit limited robustness in image-based approaches, and struggle
with curved or complex geometries. To address these limitations, an innovative
framework for two-dimensional (2D) crack detection, three-dimensional (3D)
reconstruction, and 3D automatic crack measurement was proposed by integrating
computer vision technologies and multi-modal Simultaneous localization and
mapping (SLAM) in this study. Firstly, building on a base DeepLabv3+
segmentation model, and incorporating specific refinements utilizing foundation
model Segment Anything Model (SAM), we developed a crack segmentation method
with strong generalization across unfamiliar scenarios, enabling the generation
of precise 2D crack masks. To enhance the accuracy and robustness of 3D
reconstruction, Light Detection and Ranging (LiDAR) point clouds were utilized
together with image data and segmentation masks. By leveraging both image- and
LiDAR-SLAM, we developed a multi-frame and multi-modal fusion framework that
produces dense, colorized point clouds, effectively capturing crack semantics
at a 3D real-world scale. Furthermore, the crack geometric attributions were
measured automatically and directly within 3D dense point cloud space,
surpassing the limitations of conventional 2D image-based measurements. This
advancement makes the method suitable for structural components with curved and
complex 3D geometries. Experimental results across various concrete structures
highlight the significant improvements and unique advantages of the proposed
method, demonstrating its effectiveness, accuracy, and robustness in real-world
applications.
|
2501.09209
|
Surgical Visual Understanding (SurgVU) Dataset
|
cs.CV
|
Owing to recent advances in machine learning and the ability to harvest large
amounts of data during robotic-assisted surgeries, surgical data science is
ripe for foundational work. We present a large dataset of surgical videos and
their accompanying labels for this purpose. We describe how the data was
collected and some of its unique attributes. Multiple example problems are
outlined. Although the dataset was curated for a particular set of scientific
challenges (in an accompanying paper), it is general enough to be used for a
broad range of machine learning questions. Our hope is that this dataset exposes
the larger machine learning community to the challenging problems within
surgical data science, and becomes a touchstone for future research. The videos
are available at
https://storage.googleapis.com/isi-surgvu/surgvu24_videos_only.zip, the labels
at https://storage.googleapis.com/isi-surgvu/surgvu24_labels_updated_v2.zip,
and a validation set for the tool detection problem at
https://storage.googleapis.com/isi-surgvu/cat1_test_set_public.zip.
|
2501.09211
|
Fuzzy Integration of Data Lake Tables
|
cs.DB cs.IR
|
Data integration is an important step in any data science pipeline where the
objective is to unify the information available in different datasets for
comprehensive analysis. Full Disjunction, which is an associative extension of
the outer join operator, has been shown to be an effective operator for
integrating datasets. It fully preserves and combines the available
information. Existing Full Disjunction algorithms only consider the equi-join
scenario where only tuples having the same value on joining columns are
integrated. This, however, does not realistically represent an open data
scenario, where datasets come from diverse sources with inconsistent values
(e.g., synonyms, abbreviations, etc.) and with limited metadata. So, joining
just on equal values severely limits the ability of Full Disjunction to fully
combine datasets. Thus, in this work, we propose an extension of Full
Disjunction to also account for "fuzzy" matches among tuples. We present a
novel data-driven approach to enable the joining of approximate or fuzzy
matches within Full Disjunction. Experimentally, we show that fuzzy Full
Disjunction does not add significant time overhead over a state-of-the-art Full
Disjunction implementation and also that it enhances the integration
effectiveness.
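Full Disjunction generalizes the outer join to many tables; a two-table fuzzy outer join conveys the core idea of matching on approximate rather than equal key values (the similarity measure and threshold here are illustrative stand-ins, not the paper's method):

```python
from difflib import SequenceMatcher

def fuzzy_full_outer_join(left, right, key, threshold=0.8):
    """Outer-join two lists of row dicts, matching join keys fuzzily
    (case-insensitive string similarity) instead of by equality."""
    sim = lambda a, b: SequenceMatcher(None, a.lower(), b.lower()).ratio()
    out, matched = [], set()
    for lrow in left:
        hits = [r for r in right if sim(lrow[key], r[key]) >= threshold]
        for r in hits:
            matched.add(id(r))
            out.append({**lrow, **{k: v for k, v in r.items() if k != key}})
        if not hits:
            out.append(dict(lrow))          # preserve unmatched left tuples
    out += [dict(r) for r in right if id(r) not in matched]  # and right ones
    return out

left = [{"city": "New York", "pop": 8.3}, {"city": "Boston", "pop": 0.7}]
right = [{"city": "new york", "state": "NY"}, {"city": "Chicago", "state": "IL"}]
result = fuzzy_full_outer_join(left, right, "city")
print(result)
```

An equi-join would miss the "New York" / "new york" pair entirely; the fuzzy variant combines the two rows while still preserving every unmatched tuple from both sides.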
|
2501.09213
|
FineMedLM-o1: Enhancing the Medical Reasoning Ability of LLM from
Supervised Fine-Tuning to Test-Time Training
|
cs.CL
|
Recent advancements in large language models (LLMs) have shown promise in
medical applications such as disease diagnosis and treatment planning. However,
most existing medical LLMs struggle with the advanced reasoning required for
complex clinical scenarios, such as differential diagnosis or personalized
treatment suggestions. We proposed FineMedLM-o1, which leverages high-quality
synthetic medical data and long-form reasoning data for Supervised Fine-Tuning
(SFT) and Direct Preference Optimization (DPO), enabling advanced dialogue and
deep reasoning capabilities. Additionally, we introduced Test-Time Training
(TTT) in the medical domain for the first time, facilitating domain adaptation
and ensuring reliable, accurate reasoning. Experimental results demonstrate
that FineMedLM-o1 achieves a 23% average performance improvement over prior
models on key medical benchmarks. Furthermore, the introduction of TTT provides
an additional 14% performance boost, highlighting its effectiveness in
enhancing medical reasoning capabilities. To support this process, we also
proposed a novel method for synthesizing medical dialogue. Compared to other
open-source datasets, our dataset stands out as superior in both quality and
complexity. The project and data will be released on GitHub.
|
2501.09214
|
Boosting Short Text Classification with Multi-Source Information
Exploration and Dual-Level Contrastive Learning
|
cs.CL
|
Short text classification, as a research subtopic in natural language
processing, is more challenging due to its semantic sparsity and insufficient
labeled samples in practical scenarios. We propose a novel model named
MI-DELIGHT for short text classification in this work. Specifically, it first
performs multi-source information (i.e., statistical information, linguistic
information, and factual information) exploration to alleviate the sparsity
issues. Then, the graph learning approach is adopted to learn the
representation of short texts, which are presented in graph forms. Moreover, we
introduce a dual-level (i.e., instance-level and cluster-level) contrastive
learning auxiliary task to effectively capture different-grained contrastive
information within massive unlabeled data. Meanwhile, previous models merely
perform the main task and auxiliary tasks in parallel, without considering the
relationship among tasks. Therefore, we introduce a hierarchical architecture
to explicitly model the correlations between tasks. We conduct extensive
experiments across various benchmark datasets, demonstrating that MI-DELIGHT
significantly surpasses previous competitive models. It even outperforms
popular large language models on several datasets.
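The abstract does not detail the dual-level loss; instance-level contrastive learning is typically an InfoNCE-style objective, sketched here on raw embeddings:

```python
import numpy as np

def info_nce(anchors, positives, temperature=0.5):
    """Instance-level contrastive (InfoNCE) loss: the i-th anchor's
    positive is row i of `positives`; all other rows act as negatives."""
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = a @ p.T / temperature
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))

a = np.array([[1.0, 0.0], [0.0, 1.0]])
p = np.array([[0.9, 0.1], [0.1, 0.9]])   # views aligned with the anchors
loss = info_nce(a, p)
print(loss > 0.0)
```

The loss decreases as each anchor moves closer to its own view and away from the others, which is the consistency signal a cluster-level variant would then aggregate over groups rather than instances.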
|
2501.09217
|
Adaptive Law-Based Transformation (ALT): A Lightweight Feature
Representation for Time Series Classification
|
cs.LG cs.AI cs.CV stat.ML
|
Time series classification (TSC) is fundamental in numerous domains,
including finance, healthcare, and environmental monitoring. However,
traditional TSC methods often struggle with the inherent complexity and
variability of time series data. Building on our previous work with the linear
law-based transformation (LLT) - which improved classification accuracy by
transforming the feature space based on key data patterns - we introduce
adaptive law-based transformation (ALT). ALT enhances LLT by incorporating
variable-length shifted time windows, enabling it to capture distinguishing
patterns of various lengths and thereby handle complex time series more
effectively. By mapping features into a linearly separable space, ALT provides
a fast, robust, and transparent solution that achieves state-of-the-art
performance with only a few hyperparameters.
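The variable-length shifted-window extraction that distinguishes ALT from LLT can be sketched as follows (the law-based transformation applied to each window is omitted; names are illustrative):

```python
import numpy as np

def variable_length_windows(x, lengths, shift=1):
    """Extract every shifted window of each requested length from series x."""
    wins = {}
    for L in lengths:
        n = (len(x) - L) // shift + 1
        wins[L] = np.stack([x[i * shift:i * shift + L] for i in range(n)])
    return wins

x = np.arange(10, dtype=float)
w = variable_length_windows(x, lengths=[3, 5], shift=2)
print(w[3].shape, w[5].shape)
```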
|
2501.09218
|
Interpretable Droplet Digital PCR Assay for Trustworthy Molecular
Diagnostics
|
q-bio.QM cs.AI
|
Accurate molecular quantification is essential for advancing research and
diagnostics in fields such as infectious diseases, cancer biology, and genetic
disorders. Droplet digital PCR (ddPCR) has emerged as a gold standard for
achieving absolute quantification. While computational ddPCR technologies have
advanced significantly, achieving automatic interpretation and consistent
adaptability across diverse operational environments remains a challenge. To
address these limitations, we introduce the intelligent interpretable droplet
digital PCR (I2ddPCR) assay, a comprehensive framework integrating front-end
predictive models (for droplet segmentation and classification) with GPT-4o
multimodal large language model (MLLM, for context-aware explanations and
recommendations) to automate and enhance ddPCR image analysis. This approach
surpasses the state-of-the-art models, affording 99.05% accuracy in processing
complex ddPCR images containing over 300 droplets per image with varying
signal-to-noise ratios (SNRs). By combining specialized neural networks and
large language models, the I2ddPCR assay offers a robust and adaptable solution
for absolute molecular quantification, achieving a sensitivity capable of
detecting low-abundance targets as low as 90.32 copies/{\mu}L. Furthermore, it
improves the model's transparency through detailed explanations and troubleshooting
guidance, empowering users to make informed decisions. This innovative
framework has the potential to benefit molecular diagnostics, disease research,
and clinical applications, especially in resource-constrained settings.
|
2501.09219
|
A Simple Graph Contrastive Learning Framework for Short Text
Classification
|
cs.CL
|
Short text classification has gained significant attention in the information
age due to its prevalence and real-world applications. Recent advancements in
graph learning combined with contrastive learning have shown promising results
in addressing the challenges of semantic sparsity and limited labeled data in
short text classification. However, existing models have certain limitations.
They rely on explicit data augmentation techniques to generate contrastive
views, resulting in semantic corruption and noise. Additionally, these models
only focus on learning the intrinsic consistency between the generated views,
neglecting valuable discriminative information from other potential views. To
address these issues, we propose a Simple graph contrastive learning framework
for Short Text Classification (SimSTC). Our approach involves performing graph
learning on multiple text-related component graphs to obtain multi-view text
embeddings. Subsequently, we directly apply contrastive learning on these
embeddings. Notably, our method eliminates the need for data augmentation
operations to generate contrastive views while still leveraging the benefits of
multi-view contrastive learning. Despite its simplicity, our model achieves
outstanding performance, surpassing large language models on various datasets.
|
2501.09221
|
ASCENT-ViT: Attention-based Scale-aware Concept Learning Framework for
Enhanced Alignment in Vision Transformers
|
cs.CV cs.LG
|
As Vision Transformers (ViTs) are increasingly adopted in sensitive vision
applications, there is a growing demand for improved interpretability. This has
led to efforts to forward-align these models with carefully annotated abstract,
human-understandable semantic entities - concepts. Concepts provide global
rationales to the model predictions and can be quickly understood/intervened on
by domain experts. Most current research focuses on designing model-agnostic,
plug-and-play generic concept-based explainability modules that do not
incorporate the inner workings of foundation models (e.g., inductive biases,
scale invariance, etc.) during training. To alleviate this issue for ViTs, in
this paper, we propose ASCENT-ViT, an attention-based, concept learning
framework that effectively composes scale and position-aware representations
from multiscale feature pyramids and ViT patch representations, respectively.
Further, these representations are aligned with concept annotations through
attention matrices - which incorporate spatial and global (semantic) concepts.
ASCENT-ViT can be utilized as a classification head on top of standard ViT
backbones for improved predictive performance and accurate and robust concept
explanations as demonstrated on five datasets, including three widely used
benchmarks (CUB, Pascal APY, Concept-MNIST) and two real-world datasets (AWA2,
KITS).
|
2501.09223
|
Foundations of Large Language Models
|
cs.CL cs.AI cs.LG
|
This is a book about large language models. As indicated by the title, it
primarily focuses on foundational concepts rather than comprehensive coverage
of all cutting-edge technologies. The book is structured into four main
chapters, each exploring a key area: pre-training, generative models, prompting
techniques, and alignment methods. It is intended for college students,
professionals, and practitioners in natural language processing and related
fields, and can serve as a reference for anyone interested in large language
models.
|
2501.09229
|
Tessellated Linear Model for Age Prediction from Voice
|
cs.LG cs.SD eess.AS
|
Voice biometric tasks, such as age estimation, require modeling the often
complex relationship between voice features and the biometric variable. While
deep learning models can handle such complexity, they typically require large
amounts of accurately labeled data to perform well. Such data are often scarce
for biometric tasks such as voice-based age prediction. On the other hand,
simpler models like linear regression can work with smaller datasets but often
fail to generalize to the underlying non-linear patterns present in the data.
In this paper we propose the Tessellated Linear Model (TLM), a piecewise linear
approach that combines the simplicity of linear models with the capacity of
non-linear functions. TLM tessellates the feature space into convex regions and
fits a linear model within each region. We optimize the tessellation and the
linear models using a hierarchical greedy partitioning. We evaluated TLM on the
TIMIT dataset on the task of age prediction from voice, where it outperformed
state-of-the-art deep learning models.
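A minimal sketch of the tessellate-then-fit idea, using equal-width 1-D regions as a stand-in for the paper's hierarchical greedy partitioning:

```python
import numpy as np

def fit_tlm(x, y, n_regions):
    """Tessellate a 1-D feature space into equal-width regions and fit an
    ordinary least-squares line inside each region."""
    edges = np.linspace(x.min(), x.max(), n_regions + 1)
    models = [np.polyfit(x[(x >= lo) & (x <= hi)], y[(x >= lo) & (x <= hi)], 1)
              for lo, hi in zip(edges[:-1], edges[1:])]
    return edges, models

def predict_tlm(edges, models, x):
    """Route each point to its region's linear model."""
    idx = np.clip(np.searchsorted(edges, x, side="right") - 1, 0, len(models) - 1)
    return np.array([np.polyval(models[i], xi) for i, xi in zip(idx, x)])

# piecewise-linear ground truth: slope +2 below 0, slope -1 above
x = np.linspace(-1, 1, 200)
y = np.where(x < 0, 2 * x, -x)
edges, models = fit_tlm(x, y, n_regions=2)
err = np.abs(predict_tlm(edges, models, x) - y).max()
print(err < 1e-6)
```

A single global linear model cannot fit this kink, while two per-region lines recover it exactly, which is the capacity gain the piecewise construction provides.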
|
2501.09238
|
Mono-Forward: Backpropagation-Free Algorithm for Efficient Neural
Network Training Harnessing Local Errors
|
cs.LG
|
Backpropagation is the standard method for achieving state-of-the-art
accuracy in neural network training, but it often imposes high memory costs and
lacks biological plausibility. In this paper, we introduce the Mono-Forward
algorithm, a purely local layerwise learning method inspired by Hinton's
Forward-Forward framework. Unlike backpropagation, Mono-Forward optimizes each
layer solely with locally available information, eliminating the reliance on
global error signals. We evaluated Mono-Forward on multi-layer perceptrons and
convolutional neural networks across multiple benchmarks, including MNIST,
Fashion-MNIST, CIFAR-10, and CIFAR-100. The test results show that Mono-Forward
consistently matches or surpasses the accuracy of backpropagation across all
tasks, with significantly reduced and more even memory usage, better
parallelizability, and a comparable convergence rate.
|
2501.09239
|
AI-based Identity Fraud Detection: A Systematic Review
|
cs.AI
|
With the rapid development of digital services, a large volume of personally
identifiable information (PII) is stored online and is subject to cyberattacks
such as identity fraud. Most recently, the use of Artificial Intelligence
(AI)-enabled deepfake technologies has significantly increased the complexity of
identity fraud. Fraudsters may use these technologies to create highly
sophisticated counterfeit personal identification documents, photos and videos.
These advancements in the identity fraud landscape pose challenges for identity
fraud detection and society at large. There is a pressing need to review and
understand identity fraud detection methods, their limitations and potential
solutions. This research aims to address this important need by using the
well-known systematic literature review method. This paper reviewed a selected
set of 43 papers across four major academic literature databases. In particular,
the review results highlight two types of identity fraud prevention and
detection methods, examined in depth, along with open challenges. The results were also
consolidated into a taxonomy of AI-based identity fraud detection and
prevention methods including key insights and trends. Overall, this paper
provides a foundational knowledge base to researchers and practitioners for
further research and development in this important area of digital identity
fraud.
|
2501.09240
|
Task Vectors in In-Context Learning: Emergence, Formation, and Benefit
|
cs.LG
|
In-context learning is a remarkable capability of transformers, referring to
their ability to adapt to specific tasks based on a short history or context.
Previous research has found that task-specific information is locally encoded
within models, though the emergence and functionality of these task vectors
remain unclear due to opaque pre-training processes. In this work, we
investigate the formation of
task vectors in a controlled setting, using models trained from scratch on
synthetic datasets. Our findings confirm that task vectors naturally emerge
under certain conditions, but the tasks may be relatively weakly and/or
non-locally encoded within the model. To promote strong task vectors encoded at
a prescribed location within the model, we propose an auxiliary training
mechanism based on a task vector prompting loss (TVP-loss). This method
eliminates the need to search for task-correlated encodings within the trained
model and demonstrably improves robustness and generalization.
|
2501.09254
|
Clone-Robust AI Alignment
|
cs.LG cs.AI cs.GT
|
A key challenge in training Large Language Models (LLMs) is properly aligning
them with human preferences. Reinforcement Learning with Human Feedback (RLHF)
uses pairwise comparisons from human annotators to train reward functions and
has emerged as a popular alignment method. However, input datasets in RLHF are
not necessarily balanced in the types of questions and answers that are
included. Therefore, we want RLHF algorithms to perform well even when the set
of alternatives is not uniformly distributed. Drawing on insights from social
choice theory, we introduce robustness to approximate clones, a desirable
property of RLHF algorithms which requires that adding near-duplicate
alternatives does not significantly change the learned reward function. We
first demonstrate that the standard RLHF algorithm based on regularized maximum
likelihood estimation (MLE) fails to satisfy this property. We then propose the
weighted MLE, a new RLHF algorithm that modifies the standard regularized MLE
by weighting alternatives based on their similarity to other alternatives. This
new algorithm guarantees robustness to approximate clones while preserving
desirable theoretical properties.
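The abstract leaves the exact weighting unspecified; one natural instantiation, shown here purely for illustration, weights each alternative by the inverse size of its approximate-clone group, so duplicating an alternative does not inflate the group's total influence on the learned reward:

```python
import numpy as np

def clone_weights(embeddings, sim_threshold=0.95):
    """Weight each alternative by 1 / (size of its approximate-clone group),
    where clones are detected by cosine similarity (illustrative scheme)."""
    E = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = E @ E.T                               # cosine similarity matrix
    group_sizes = (sim >= sim_threshold).sum(axis=1)
    return 1.0 / group_sizes

emb = np.array([[1.0, 0.0],
                [0.999, 0.01],                  # near-clone of the first row
                [0.0, 1.0]])
weights = clone_weights(emb)
print(weights.round(2).tolist())
```

The two near-duplicates split a total weight of 1 between them, so the weighted MLE objective sees the pair as roughly one alternative.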
|
2501.09258
|
Delayed Fusion: Integrating Large Language Models into First-Pass
Decoding in End-to-end Speech Recognition
|
cs.CL cs.SD eess.AS
|
This paper presents an efficient decoding approach for end-to-end automatic
speech recognition (E2E-ASR) with large language models (LLMs). Although
shallow fusion is the most common approach to incorporate language models into
E2E-ASR decoding, we face two practical problems with LLMs. (1) LLM inference
is computationally costly. (2) There may be a vocabulary mismatch between the
ASR model and the LLM. To resolve this mismatch, we need to retrain the ASR
model and/or the LLM, which is at best time-consuming and in many cases not
feasible. We propose "delayed fusion," which applies LLM scores to ASR
hypotheses with a delay during decoding and enables easier use of pre-trained
LLMs in ASR tasks. This method can reduce not only the number of hypotheses
scored by the LLM but also the number of LLM inference calls. It also allows
re-tokenization of ASR hypotheses during decoding if ASR and LLM employ different
tokenizations. We demonstrate that delayed fusion provides improved decoding
speed and accuracy compared to shallow fusion and N-best rescoring using the
LibriHeavy ASR corpus and three public LLMs, OpenLLaMA 3B & 7B and Mistral 7B.
|
2501.09259
|
OpticFusion: Multi-Modal Neural Implicit 3D Reconstruction of
Microstructures by Fusing White Light Interferometry and Optical Microscopy
|
cs.CV physics.app-ph physics.ins-det physics.optics
|
White Light Interferometry (WLI) is a precise optical tool for measuring the
3D topography of microstructures. However, conventional WLI cannot capture the
natural color of a sample's surface, which is essential for many microscale
research applications that require both 3D geometry and color information.
Previous methods have attempted to overcome this limitation by modifying WLI
hardware and analysis software, but these solutions are often costly. In this
work, we address this challenge from a computer vision multi-modal
reconstruction perspective for the first time. We introduce OpticFusion, a
novel approach that uses an additional digital optical microscope (OM) to
achieve 3D reconstruction with natural color textures using multi-view WLI and
OM images. Our method employs a two-step data association process to obtain the
poses of WLI and OM data. By leveraging the neural implicit representation, we
fuse multi-modal data and apply color decomposition technology to extract the
sample's natural color. Tested on our multi-modal dataset of various microscale
samples, OpticFusion achieves detailed 3D reconstructions with color textures.
Our method provides an effective tool for practical applications across
numerous microscale research fields. The source code and our real-world dataset
are available at https://github.com/zju3dv/OpticFusion.
|
2501.09262
|
On the convergence rate of noisy Bayesian Optimization with Expected
Improvement
|
stat.ML cs.LG math.OC
|
Expected improvement (EI) is one of the most widely used acquisition
functions in Bayesian optimization (BO). Despite its proven success in
applications for decades, important open questions remain on the theoretical
convergence behaviors and rates for EI. In this paper, we contribute to the
convergence theory of EI in three novel and critical areas. First, we consider
objective functions that fit under the Gaussian process (GP) prior assumption,
whereas existing works mostly focus on functions in the reproducing kernel
Hilbert space (RKHS). Second, we establish for the first time the asymptotic
error bound and its corresponding rate for GP-EI with noisy observations under
the GP prior assumption. Third, by investigating the exploration and
exploitation properties of the non-convex EI function, we establish improved
error bounds of GP-EI for both the noise-free and noisy cases.
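For context, the EI acquisition function itself has a closed form under a Gaussian posterior. The sketch below is the standard textbook formula in the minimization convention, not part of the paper's convergence analysis:

```python
import math

def expected_improvement(mu, sigma, best, xi=0.0):
    """Expected improvement of a GP posterior N(mu, sigma^2) over the
    incumbent value `best` (minimization convention):

        EI = (best - mu - xi) * Phi(z) + sigma * phi(z),
        z  = (best - mu - xi) / sigma,

    where Phi and phi are the standard normal CDF and PDF.
    """
    if sigma <= 0.0:
        return max(best - mu - xi, 0.0)
    z = (best - mu - xi) / sigma
    Phi = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    phi = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
    return (best - mu - xi) * Phi + sigma * phi
```

Noisy observations enter through the GP posterior (mu, sigma) and through the choice of incumbent, which is one reason the noisy analysis is delicate.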
|
2501.09265
|
Perspective Transition of Large Language Models for Solving Subjective
Tasks
|
cs.CL cs.AI
|
Large language models (LLMs) have revolutionized the field of natural
language processing, enabling remarkable progress in various tasks. Different
from objective tasks such as commonsense reasoning and arithmetic
question-answering, the performance of LLMs on subjective tasks is still
limited, where the perspective on the specific problem plays a crucial role in
better interpreting the context and giving a proper response. For example, in
certain scenarios, LLMs may perform better when answering from an expert role
perspective, potentially eliciting their relevant domain knowledge. In
contrast, in some scenarios, LLMs may provide more accurate responses when
answering from a third-person standpoint, enabling a more comprehensive
understanding of the problem and potentially mitigating inherent biases. In
this paper, we propose Reasoning through Perspective Transition (RPT), a method
based on in-context learning that enables LLMs to dynamically select among
direct, role, and third-person perspectives to best solve the corresponding
subjective problem. Through extensive experiments on a total of 12 subjective
tasks, using both closed-source and open-source LLMs including GPT-4, GPT-3.5,
Llama-3, and Qwen-2, our method outperforms widely used fixed single-perspective
methods such as chain-of-thought prompting and expert prompting, highlighting
the intricate ways in which LLMs can adapt their perspectives
to provide nuanced and contextually appropriate responses for different
problems.
|
2501.09267
|
Are Open-Vocabulary Models Ready for Detection of MEP Elements on
Construction Sites
|
cs.CV cs.RO
|
The construction industry has long explored robotics and computer vision, yet
their deployment on construction sites remains very limited. These technologies
have the potential to revolutionize traditional workflows by enhancing
accuracy, efficiency, and safety in construction management. Ground robots
equipped with advanced vision systems could automate tasks such as monitoring
mechanical, electrical, and plumbing (MEP) systems. The present research
evaluates the applicability of open-vocabulary vision-language models compared
to fine-tuned, lightweight, closed-set object detectors for detecting MEP
components using a mobile ground robotic platform. A dataset collected with
cameras mounted on a ground robot was manually annotated and analyzed to
compare model performance. The results demonstrate that, despite the
versatility of vision-language models, fine-tuned lightweight models still
largely outperform them in specialized environments and for domain-specific
tasks.
|
2501.09268
|
Knowledge Distillation for Image Restoration : Simultaneous Learning
from Degraded and Clean Images
|
cs.CV eess.IV
|
Model compression through knowledge distillation has seen extensive
application in classification and segmentation tasks. However, its potential in
image-to-image translation, particularly in image restoration, remains
underexplored. To address this gap, we propose a Simultaneous Learning
Knowledge Distillation (SLKD) framework tailored for model compression in image
restoration tasks. SLKD employs a dual-teacher, single-student architecture
with two distinct learning strategies: Degradation Removal Learning (DRL) and
Image Reconstruction Learning (IRL), simultaneously. In DRL, the student
encoder learns from Teacher A to focus on removing degradation factors, guided
by a novel BRISQUE extractor. In IRL, the student decoder learns from Teacher B
to reconstruct clean images, with the assistance of a proposed PIQE extractor.
These strategies enable the student to learn from degraded and clean images
simultaneously, ensuring high-quality compression of image restoration models.
Experimental results across five datasets and three tasks demonstrate that SLKD
achieves substantial reductions in FLOPs and parameters, exceeding 80\%, while
maintaining strong image restoration performance.
|
2501.09273
|
ThinTact:Thin Vision-Based Tactile Sensor by Lensless Imaging
|
cs.RO
|
Vision-based tactile sensors have drawn increasing interest in the robotics
community. However, traditional lens-based designs impose minimum thickness
constraints on these sensors, limiting their applicability in space-restricted
settings. In this paper, we propose ThinTact, a novel lensless vision-based
tactile sensor with a sensing field of over 200 mm^2 and a thickness of less
than 10 mm. ThinTact utilizes the mask-based lensless imaging technique to map
the contact information to CMOS signals. To ensure real-time tactile sensing,
we propose a real-time lensless reconstruction algorithm that leverages a
frequency-spatial-domain joint filter based on discrete cosine transform (DCT).
This algorithm achieves computation significantly faster than existing
optimization-based methods. Additionally, to improve the sensing quality, we
develop a mask optimization method based on the genetic algorithm and the
corresponding system matrix calibration algorithm. We evaluate the performance
of our proposed lensless reconstruction and tactile sensing through qualitative
and quantitative experiments. Furthermore, we demonstrate ThinTact's practical
applicability in diverse applications, including texture recognition and
contact-rich object manipulation. The paper will appear in the IEEE
Transactions on Robotics: https://ieeexplore.ieee.org/document/10842357. Video:
https://youtu.be/YrOO9BDMAHo
|
2501.09274
|
Large Language Model is Secretly a Protein Sequence Optimizer
|
cs.LG cs.AI q-bio.QM
|
We consider the protein sequence engineering problem, which aims to find
protein sequences with high fitness levels, starting from a given wild-type
sequence. Directed evolution, which iteratively generates variants and selects
them via experimental feedback, has been the dominant paradigm in this field. We
demonstrate that large language models (LLMs), despite being trained on massive
text corpora, are secretly protein sequence optimizers. With a directed
evolutionary method, LLM can perform protein engineering through Pareto and
experiment-budget constrained optimization, demonstrating success on both
synthetic and experimental fitness landscapes.
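A minimal sketch of the directed-evolution loop the abstract describes, with a random single-site mutator standing in for the LLM proposer and a toy fitness standing in for experimental feedback (the fitness function, mutation scheme, and all parameters are hypothetical):

```python
import random

AMINO = "ACDEFGHIKLMNPQRSTVWY"

def toy_fitness(seq):
    # Hypothetical stand-in for experimental feedback: reward 'A' residues.
    return seq.count("A")

def propose_variants(parents, rng, n_children=4):
    # Stand-in for LLM-proposed edits: random single-site substitutions.
    children = []
    for p in parents:
        for _ in range(n_children):
            i = rng.randrange(len(p))
            children.append(p[:i] + rng.choice(AMINO) + p[i + 1:])
    return children

def directed_evolution(wild_type, rounds=15, top_k=3, seed=0):
    rng = random.Random(seed)
    pool = [wild_type]
    for _ in range(rounds):
        pool = pool + propose_variants(pool, rng)
        # Select: keep the top_k fittest unique sequences for the next round.
        pool = sorted(set(pool), key=toy_fitness, reverse=True)[:top_k]
    return pool[0]
```

Because parents compete with their children at each selection step, the best fitness in the pool is non-decreasing over rounds; budget constraints would cap the total number of `toy_fitness` evaluations.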
|
2501.09275
|
MagnetDB: A Longitudinal Torrent Discovery Dataset with IMDb-Matched
Movies and TV Shows
|
cs.CY cs.MM cs.NI cs.SI
|
BitTorrent remains a prominent channel for illicit distribution of
copyrighted material, yet the supply side of such content remains understudied.
We introduce MagnetDB, a longitudinal dataset of torrents discovered through
the BitTorrent DHT between 2018 and 2024, containing more than 28.6 million
torrents and metadata of more than 950 million files. While our primary focus
is on enabling research based on the supply of pirated movies and TV shows, the
dataset also encompasses other legitimate and illegitimate torrents. By
applying IMDb-matching and annotation to movie and TV show torrents, MagnetDB
facilitates detailed analyses of pirated content evolution in the BitTorrent
network. Researchers can leverage MagnetDB to examine distribution trends,
subcultural practices, and the gift economy within piracy ecosystems. Through
its scale and temporal scope, MagnetDB presents a unique opportunity for
investigating the broader dynamics of BitTorrent and advancing empirical
knowledge on digital piracy.
|
2501.09277
|
Bias for Action: Video Implicit Neural Representations with Bias
Modulation
|
cs.CV
|
We propose a new continuous video modeling framework based on implicit neural
representations (INRs) called ActINR. At the core of our approach is the
observation that INRs can be considered as a learnable dictionary, with the
shapes of the basis functions governed by the weights of the INR, and their
locations governed by the biases. Given compact non-linear activation
functions, we hypothesize that an INR's biases are well suited to capturing motion
across images, and facilitate compact representations for video sequences.
Using these observations, we design ActINR to share INR weights across frames
of a video sequence, while using unique biases for each frame. We further model
the biases as the output of a separate INR conditioned on time index to promote
smoothness. By training the video INR and this bias INR together, we
demonstrate unique capabilities, including $10\times$ video slow motion,
$4\times$ spatial super resolution along with $2\times$ slow motion, denoising,
and video inpainting. ActINR performs remarkably well across numerous video
processing tasks (often achieving more than 6dB improvement), setting a new
standard for continuous modeling of videos.
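The weight-sharing idea can be sketched in a few lines: one weight matrix shared across all frames, one bias vector per frame. This is a toy SIREN-style layer for illustration, not the authors' architecture (the frequency 30.0 and all shapes are illustrative assumptions):

```python
import numpy as np

def siren_layer(x, W, b):
    # SIREN-style sinusoidal layer: sin(omega * (x @ W + b)).
    return np.sin(30.0 * (x @ W + b))

rng = np.random.default_rng(0)
W = rng.normal(size=(2, 16)) * 0.1                         # shared by every frame
biases = {t: rng.normal(size=16) * 0.1 for t in range(3)}  # one bias per frame

coords = rng.uniform(-1.0, 1.0, size=(5, 2))               # (x, y) query points
frames = {t: siren_layer(coords, W, biases[t]) for t in biases}
```

In the paper the per-frame biases are themselves produced by a second INR conditioned on the time index, which is what enables continuous interpolation (slow motion) between frames.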
|
2501.09278
|
Text-guided Synthetic Geometric Augmentation for Zero-shot 3D
Understanding
|
cs.CV
|
Zero-shot recognition models require extensive training data for
generalization. However, in zero-shot 3D classification, collecting 3D data and
captions is costly and labor-intensive, posing a significant barrier compared to
2D vision. Recent advances in generative models have achieved unprecedented
realism in synthetic data production, and recent research shows the potential
for using generated data as training data. This naturally raises the question:
can synthetic 3D data generated by generative models be used to expand limited
3D datasets? In response, we present a synthetic 3D dataset
expansion method, Text-guided Geometric Augmentation (TeGA). TeGA is tailored
for language-image-3D pretraining, which achieves SoTA in zero-shot 3D
classification, and uses a generative text-to-3D model to enhance and extend
limited 3D datasets. Specifically, we automatically generate text-guided
synthetic 3D data and introduce a consistency filtering strategy to discard
noisy samples where semantics and geometric shapes do not match with text. In
the experiment to double the original dataset size using TeGA, our approach
demonstrates improvements over the baselines, achieving zero-shot performance
gains of 3.0% on Objaverse-LVIS, 4.6% on ScanObjectNN, and 8.7% on ModelNet40.
These results demonstrate that TeGA effectively bridges the 3D data gap,
enabling robust zero-shot 3D classification even with limited real training
data and paving the way for zero-shot 3D vision application.
|
2501.09279
|
Text Semantics to Flexible Design: A Residential Layout Generation
Method Based on Stable Diffusion Model
|
cs.AI
|
Flexibility in the AI-based residential layout design remains a significant
challenge, as traditional methods like rule-based heuristics and graph-based
generation often lack flexibility and require substantial design knowledge from
users. To address these limitations, we propose a cross-modal design approach
based on the Stable Diffusion model for generating flexible residential
layouts. The method offers multiple input types for learning objectives,
allowing users to specify both boundaries and layouts. It incorporates natural
language as design constraints and introduces ControlNet to enable stable
layout generation through two distinct pathways. We also present a scheme that
encapsulates design expertise within a knowledge graph and translates it into
natural language, providing an interpretable representation of design
knowledge. This comprehensibility and diversity of input options enable
professionals and non-professionals to directly express design requirements,
enhancing flexibility and controllability. Finally, experiments verify the
flexibility of the proposed methods under multimodal constraints better than
state-of-the-art models, even when specific semantic information about room
areas or connections is incomplete.
|
2501.09281
|
SoccerSynth-Detection: A Synthetic Dataset for Soccer Player Detection
|
cs.CV
|
In soccer video analysis, player detection is essential for identifying key
events and reconstructing tactical positions. The presence of numerous players
and frequent occlusions, combined with copyright restrictions, severely
restricts the availability of datasets, leaving limited options such as
SoccerNet-Tracking and SportsMOT. These datasets suffer from a lack of
diversity, which hinders algorithms from adapting effectively to varied soccer
video contexts. To address these challenges, we developed
SoccerSynth-Detection, the first synthetic dataset designed for the detection
of synthetic soccer players. It includes a broad range of random lighting and
textures, as well as simulated camera motion blur. We validated its efficacy
using the object detection model (Yolov8n) against real-world datasets
(SoccerNet-Tracking and SportsMOT). In transfer tests, it matched the
performance of real datasets and significantly outperformed them in images with
motion blur; in pre-training tests, it demonstrated its efficacy as a
pre-training dataset, significantly enhancing the algorithm's overall
performance. Our work demonstrates the potential of synthetic datasets to
replace real datasets for algorithm training in the field of soccer video
analysis.
|
2501.09283
|
Free-Knots Kolmogorov-Arnold Network: On the Analysis of Spline Knots
and Advancing Stability
|
cs.LG
|
Kolmogorov-Arnold Neural Networks (KANs) have gained significant attention in
the machine learning community. However, their implementation often suffers
from poor training stability and a heavy trainable parameter count. Furthermore, there
is limited understanding of the behavior of the learned activation functions
derived from B-splines. In this work, we analyze the behavior of KANs through
the lens of spline knots and derive the lower and upper bound for the number of
knots in B-spline-based KANs. To address existing limitations, we propose a
novel Free Knots KAN that enhances the performance of the original KAN while
reducing the number of trainable parameters to match the trainable parameter
scale of standard Multi-Layer Perceptrons (MLPs). Additionally, we introduce
a new training strategy to ensure $C^2$ continuity of the learnable spline,
resulting in smoother activations than the original KAN and improved
training stability through range expansion. The proposed method is comprehensively
evaluated on 8 datasets spanning various domains, including image, text, time
series, multimodal, and function approximation tasks. The promising results
demonstrate the feasibility of KAN-based networks and the effectiveness of the
proposed method.
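A KAN edge applies a learnable 1D function expressed as a spline expansion over a knot grid; the knots are exactly the objects the paper analyzes. The sketch below uses degree-1 (hat) basis functions for brevity instead of the B-splines discussed in the paper, and the knot grid and coefficients are illustrative:

```python
import numpy as np

def hat_basis(x, knots):
    """Degree-1 B-spline (hat) basis functions on a uniform knot grid.

    Returns shape (len(x), len(knots)); column j is the piecewise-linear
    bump of width 2h centered at knots[j]. Inside the grid the columns
    form a partition of unity.
    """
    h = knots[1] - knots[0]                      # uniform spacing assumed
    x = np.asarray(x, dtype=float)[:, None]
    k = np.asarray(knots, dtype=float)[None, :]
    return np.clip(1.0 - np.abs(x - k) / h, 0.0, None)

def kan_edge_activation(x, coeffs, knots):
    # A KAN edge's learnable activation: phi(x) = sum_j c_j * B_j(x).
    return hat_basis(x, knots) @ coeffs
```

With `coeffs` set to the knot locations themselves, the activation reproduces the identity on the grid; training would instead learn `coeffs` (and, in the free-knots variant, the knot positions) by gradient descent.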
|
2501.09284
|
SEAL: Entangled White-box Watermarks on Low-Rank Adaptation
|
cs.AI cs.CR
|
Recently, LoRA and its variants have become the de facto strategy for
training and sharing task-specific versions of large pretrained models, thanks
to their efficiency and simplicity. However, the issue of copyright protection
for LoRA weights, especially through watermark-based techniques, remains
underexplored. To address this gap, we propose SEAL (SEcure wAtermarking on
LoRA weights), a universal white-box watermarking technique for LoRA. SEAL embeds a
secret, non-trainable matrix between trainable LoRA weights, serving as a
passport to claim ownership. SEAL then entangles the passport with the LoRA
weights through training, without extra loss for entanglement, and distributes
the fine-tuned weights after hiding the passport. When applying SEAL, we
observed no performance degradation across commonsense reasoning,
textual/visual instruction tuning, and text-to-image synthesis tasks. We
demonstrate that SEAL is robust against a variety of known attacks: removal,
obfuscation, and ambiguity attacks.
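The factorization behind the passport idea can be illustrated with plain linear algebra. This shows only the algebraic entanglement of a fixed matrix between the LoRA factors, not SEAL's training or ownership-verification procedure (all shapes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(42)
d, r = 8, 4

B = rng.normal(size=(d, r))   # trainable LoRA "up" factor
A = rng.normal(size=(r, d))   # trainable LoRA "down" factor
C = rng.normal(size=(r, r))   # secret, non-trainable passport

delta_W = B @ C @ A           # the update merged into the base weights

# Before release, the passport is folded into one factor so only the
# entangled product is distributed, hiding C from downstream users:
B_released = B @ C
```

Because `B` and `A` are trained around the fixed `C`, removing or replacing the passport after the fact would require disentangling a product the training process has baked in.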
|
2501.09289
|
Control Barrier Function-Based Safety Filters: Characterization of
Undesired Equilibria, Unbounded Trajectories, and Limit Cycles
|
math.OC cs.SY eess.SY
|
This paper focuses on safety filters designed based on Control Barrier
Functions (CBFs): these are modifications of a nominal stabilizing controller
typically utilized in safety-critical control applications to render a given
subset of states forward invariant. The paper investigates the dynamical
properties of the closed-loop systems, with a focus on characterizing
undesirable behaviors that may emerge due to the use of CBF-based filters.
These undesirable behaviors include unbounded trajectories, limit cycles, and
undesired equilibria, which can be locally stable and even form a continuum.
Our analysis offers the following contributions: (i) conditions under which
trajectories remain bounded; (ii) conditions under which limit cycles do not
exist; (iii) we show that undesired equilibria can be characterized by solving
an algebraic equation, and (iv) we provide examples that show that
asymptotically stable undesired equilibria can exist for a large class of
nominal controllers and design parameters of the safety filter (even for convex
safe sets). Further, for the specific class of planar systems, (v) we provide
explicit formulas for the total number of undesired equilibria and the
proportion of saddle points and asymptotically stable equilibria, and (vi) in
the case of linear planar systems, we present an exhaustive analysis of their
global stability properties. Examples throughout the paper illustrate the
results.
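For context, a CBF-based safety filter solves a QP that minimally modifies the nominal input subject to the barrier constraint; in the scalar-input case this reduces to a closed-form projection. A textbook-style sketch (not the paper's analysis machinery):

```python
def cbf_safety_filter(u_nom, a, b):
    """Closed-form solution of the scalar CBF-QP

        min_u (u - u_nom)^2   s.t.   a + b*u >= 0,

    where a = Lf h(x) + alpha(h(x)) and b = Lg h(x) for a barrier h.
    If the nominal input already satisfies the constraint it is passed
    through unchanged; otherwise u is projected onto the constraint
    boundary, making the constraint active (a + b*u = 0).
    """
    if a + b * u_nom >= 0.0:
        return u_nom                 # nominal input already safe
    if b == 0.0:
        raise ValueError("constraint infeasible: b == 0 and a < 0")
    return u_nom - (a + b * u_nom) / b
```

The paper's undesired equilibria arise exactly on the second branch, where the filter overrides the nominal controller and can create new fixed points of the closed loop.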
|
2501.09290
|
Interoceptive Robots for Convergent Shared Control in Collaborative
Construction Work
|
cs.RO
|
Building autonomous mobile robots (AMRs) with optimized efficiency and
adaptive capabilities-able to respond to changing task demands and dynamic
environments-is a strongly desired goal for advancing construction robotics.
Such robots can play a critical role in enabling automation, reducing
operational carbon footprints, and supporting modular construction processes.
Inspired by the adaptive autonomy of living organisms, we introduce
interoception, which centers on the robot's internal state representation, as a
foundation for developing self-reflection and conscious learning to enable
continual learning and adaptability in robotic agents. In this paper, we
factorize internal state variables and mathematical properties as "cognitive
dissonance" in shared control paradigms, where human interventions occasionally
occur. We offer a new perspective on how interoception can help build adaptive
motion planning in AMRs by integrating the legacy of heuristic costs from
grid/graph-based algorithms with recent advances in neuroscience and
reinforcement learning. Declarative and procedural knowledge extracted from
human semantic inputs is encoded into a hypergraph model that overlaps with the
spatial configuration of onsite layout for path planning. In addition, we
design a velocity-replay module using an encoder-decoder architecture with
few-shot learning to enable robots to replicate velocity profiles in
contextualized scenarios for multi-robot synchronization and handover
collaboration. These "cached" knowledge representations are demonstrated in
simulated environments for multi-robot motion planning and stacking tasks. The
insights from this study pave the way toward artificial general intelligence in
AMRs, fostering their progression from complexity to competence in construction
automation.
|
2501.09291
|
LAVCap: LLM-based Audio-Visual Captioning using Optimal Transport
|
cs.MM cs.AI cs.SD eess.AS
|
Automated audio captioning is a task that generates textual descriptions for
audio content, and recent studies have explored using visual information to
enhance captioning quality. However, current methods often fail to effectively
fuse audio and visual data, missing important semantic cues from each modality.
To address this, we introduce LAVCap, a large language model (LLM)-based
audio-visual captioning framework that effectively integrates visual
information with audio to improve audio captioning performance. LAVCap employs
an optimal transport-based alignment loss to bridge the modality gap between
audio and visual features, enabling more effective semantic extraction.
Additionally, we propose an optimal transport attention module that enhances
audio-visual fusion using an optimal transport assignment map. Experimental
results show that, combined with the proposed training strategy, each
component of our framework is effective. LAVCap outperforms existing
state-of-the-art methods on the AudioCaps dataset, without relying on large
datasets or post-processing. Code is available at
https://github.com/NAVER-INTEL-Co-Lab/gaudi-lavcap.
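An optimal-transport alignment of the kind described is typically computed with Sinkhorn iterations on an entropy-regularized problem. The minimal numpy sketch below assumes uniform marginals and a precomputed cost matrix between audio and visual features; it is illustrative, not LAVCap's implementation:

```python
import numpy as np

def sinkhorn(cost, eps=0.1, iters=200):
    """Entropy-regularized OT plan between uniform marginals.

    Alternates scaling of K = exp(-cost/eps) until the plan's rows and
    columns match the marginals; in an alignment loss, the transport
    cost (plan * cost).sum() measures the audio-visual modality gap,
    and the plan itself can serve as a soft attention/assignment map.
    """
    n, m = cost.shape
    K = np.exp(-cost / eps)
    r = np.full(n, 1.0 / n)          # row marginal (uniform)
    c = np.full(m, 1.0 / m)          # column marginal (uniform)
    v = np.ones(m)
    for _ in range(iters):
        u = r / (K @ v)
        v = c / (K.T @ u)
    return u[:, None] * K * v[None, :]
```

Smaller `eps` gives a sharper (more one-to-one) assignment at the cost of slower convergence, which is the usual trade-off when using the plan as an attention map.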
|
2501.09292
|
To Retrieve or Not to Retrieve? Uncertainty Detection for Dynamic
Retrieval Augmented Generation
|
cs.CL cs.AI cs.IR
|
Retrieval-Augmented Generation equips large language models with the
capability to retrieve external knowledge, thereby mitigating hallucinations by
incorporating information beyond the model's intrinsic abilities. However, most
prior works have focused on invoking retrieval deterministically, which makes
it unsuitable for tasks such as long-form question answering. Instead,
dynamically performing retrieval by invoking it only when the underlying LLM
lacks the required knowledge can be more efficient. In this context, we delve
deeper into the question, "To Retrieve or Not to Retrieve?" by exploring
multiple uncertainty detection methods. We evaluate these methods for the task
of long-form question answering, employing dynamic retrieval, and present our
comparisons. Our findings suggest that uncertainty detection metrics, such as
Degree Matrix Jaccard and Eccentricity, can reduce the number of retrieval
calls by almost half, with only a slight reduction in question-answering
accuracy.
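A toy version of consistency-based uncertainty detection over sampled answers is sketched below: sample several generations, build a pairwise Jaccard similarity matrix, and read uncertainty off the mean degree. This is an illustrative Jaccard-degree variant, not necessarily the exact Degree Matrix Jaccard or Eccentricity metrics evaluated in the paper:

```python
def jaccard(a, b):
    # Token-set Jaccard similarity between two sampled answers.
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb) if (sa | sb) else 1.0

def degree_uncertainty(samples):
    """Uncertainty from the degree of a pairwise similarity matrix.

    The mean off-diagonal similarity (node degree, normalized) is high
    when samples agree (low uncertainty) and low when they diverge; a
    dynamic-RAG system would invoke retrieval only above a threshold.
    """
    n = len(samples)
    if n < 2:
        return 0.0
    sims = [jaccard(samples[i], samples[j])
            for i in range(n) for j in range(n) if i != j]
    return 1.0 - sum(sims) / len(sims)
```

A decision rule such as `retrieve = degree_uncertainty(samples) > tau` is what converts this score into the "to retrieve or not" choice, with `tau` tuned on validation data.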
|
2501.09294
|
Efficient Few-Shot Medical Image Analysis via Hierarchical Contrastive
Vision-Language Learning
|
cs.CV cs.CL
|
Few-shot learning in medical image classification presents a significant
challenge due to the limited availability of annotated data and the complex
nature of medical imagery. In this work, we propose Adaptive Vision-Language
Fine-tuning with Hierarchical Contrastive Alignment (HiCA), a novel framework
that leverages the capabilities of Large Vision-Language Models (LVLMs) for
medical image analysis. HiCA introduces a two-stage fine-tuning strategy,
combining domain-specific pretraining and hierarchical contrastive learning to
align visual and textual representations at multiple levels. We evaluate our
approach on two benchmark datasets, Chest X-ray and Breast Ultrasound,
achieving state-of-the-art performance in both few-shot and zero-shot settings.
Further analyses demonstrate the robustness, generalizability, and
interpretability of our method, with substantial improvements in performance
compared to existing baselines. Our work highlights the potential of
hierarchical contrastive strategies in adapting LVLMs to the unique challenges
of medical imaging tasks.
|
2501.09298
|
Physics-informed deep learning for infectious disease forecasting
|
cs.LG q-bio.QM
|
Accurate forecasting of contagious illnesses has become increasingly
important to public health policymaking, and better prediction could prevent
the loss of millions of lives. To better prepare for future pandemics, it is
essential to improve forecasting methods and capabilities. In this work, we
propose a new infectious disease forecasting model based on physics-informed
neural networks (PINNs), an emerging area of scientific machine learning. The
proposed PINN model incorporates dynamical systems representations of disease
transmission into the loss function, thereby assimilating epidemiological
theory and data using neural networks (NNs). Our approach is designed to
prevent model overfitting, which often occurs when training deep learning
models with observation data alone. In addition, we employ an additional
sub-network to account for mobility, vaccination, and other covariates that
influence the transmission rate, a key parameter in the compartment model. To
demonstrate the capability of the proposed model, we examine the performance of
the model using state-level COVID-19 data in California. Our simulation results
show that the predictions of the PINN model on the number of cases, deaths, and
hospitalizations are consistent with existing benchmarks. In particular, the
PINN model outperforms the basic NN model and naive baseline forecast. We also
show that the performance of the PINN model is comparable to a sophisticated
Gaussian infection state space with time dependence (GISST) forecasting model
that integrates the compartment model with a data observation model and a
regression model for inferring parameters in the compartment model.
Nonetheless, the PINN model offers a simpler structure and is easier to
implement. Our results show that the proposed forecaster could potentially
serve as a new computational tool to enhance the current capacity of infectious
disease forecasting.
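The compartment-model residual that a PINN adds to its loss can be sketched with finite differences; an actual PINN would differentiate the network output automatically, and the SIR model here is a simplified stand-in for the paper's compartment model with its covariate-driven transmission rate:

```python
import numpy as np

def sir_residual(S, I, R, beta, gamma, dt):
    """Mean squared residual of the SIR compartment model

        dS/dt = -beta*S*I,  dI/dt = beta*S*I - gamma*I,  dR/dt = gamma*I,

    discretized with forward differences. In a PINN loss this physics
    term is added to the data-fit term, regularizing the network toward
    epidemiologically consistent trajectories and curbing overfitting.
    """
    dS = (S[1:] - S[:-1]) / dt
    dI = (I[1:] - I[:-1]) / dt
    dR = (R[1:] - R[:-1]) / dt
    SI = S[:-1] * I[:-1]
    r1 = dS + beta * SI
    r2 = dI - beta * SI + gamma * I[:-1]
    r3 = dR - gamma * I[:-1]
    return float(np.mean(r1**2 + r2**2 + r3**2))
```

A trajectory generated by the same discretized dynamics makes the residual vanish, which is the consistency the physics term rewards during training.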
|
2501.09302
|
Creating Virtual Environments with 3D Gaussian Splatting: A Comparative
Study
|
cs.CV cs.GR cs.HC
|
3D Gaussian Splatting (3DGS) has recently emerged as an innovative and
efficient 3D representation technique. While its potential for extended reality
(XR) applications is frequently highlighted, its practical effectiveness
remains underexplored. In this work, we examine three distinct 3DGS-based
approaches for virtual environment (VE) creation, leveraging their unique
strengths for efficient and visually compelling scene representation. By
conducting a comparative study, we evaluate the feasibility of 3DGS in creating
immersive VEs, identify its limitations in XR applications, and discuss future
research and development opportunities.
|
2501.09304
|
Finding the Trigger: Causal Abductive Reasoning on Video Events
|
cs.CV cs.LG
|
This paper introduces a new problem, Causal Abductive Reasoning on Video
Events (CARVE), which involves identifying causal relationships between events
in a video and generating hypotheses about causal chains that account for the
occurrence of a target event. To facilitate research in this direction, we
create two new benchmark datasets with both synthetic and realistic videos,
accompanied by trigger-target labels generated through a novel counterfactual
synthesis approach. To explore the challenge of solving CARVE, we present a
Causal Event Relation Network (CERN) that examines the relationships between
video events in temporal and semantic spaces to efficiently determine the
root-cause trigger events. Through extensive experiments, we demonstrate the
critical roles of event relational representation learning and interaction
modeling in solving video causal reasoning challenges. The introduction of the
CARVE task, along with the accompanying datasets and the CERN framework, will
advance future research on video causal reasoning and significantly facilitate
various applications, including video surveillance, root-cause analysis and
movie content management.
|
2501.09305
|
Domain-conditioned and Temporal-guided Diffusion Modeling for
Accelerated Dynamic MRI Reconstruction
|
eess.IV cs.CV physics.med-ph
|
Purpose: To propose a domain-conditioned and temporal-guided diffusion
modeling method, termed dynamic Diffusion Modeling (dDiMo), for accelerated
dynamic MRI reconstruction, enabling the diffusion process to characterize
spatiotemporal information for time-resolved multi-coil Cartesian and
non-Cartesian data. Methods: The dDiMo framework integrates temporal
information from time-resolved dimensions, allowing for the concurrent capture
of intra-frame spatial features and inter-frame temporal dynamics in diffusion
modeling. It employs additional spatiotemporal ($x$-$t$) and self-consistent
frequency-temporal ($k$-$t$) priors to guide the diffusion process. This
approach ensures precise temporal alignment and enhances the recovery of fine
image details. To facilitate a smooth diffusion process, the nonlinear
conjugate gradient algorithm is utilized during the reverse diffusion steps.
The proposed model was tested on two types of MRI data: Cartesian-acquired
multi-coil cardiac MRI and Golden-Angle-Radial-acquired multi-coil
free-breathing lung MRI, across various undersampling rates. Results: dDiMo
achieved high-quality reconstructions at various acceleration factors,
demonstrating improved temporal alignment and structural recovery compared to
other competitive reconstruction methods, both qualitatively and
quantitatively. This proposed diffusion framework exhibited robust performance
in handling both Cartesian and non-Cartesian acquisitions, effectively
reconstructing dynamic datasets in cardiac and lung MRI under different imaging
conditions. Conclusion: This study introduces a novel diffusion modeling method
for dynamic MRI reconstruction.
|
2501.09307
|
RoboReflect: Robotic Reflective Reasoning for Grasping
Ambiguous-Condition Objects
|
cs.RO
|
As robotic technology rapidly develops, robots are being employed in an
increasing number of fields. However, due to the complexity of deployment
environments or the prevalence of ambiguous-condition objects, the practical
application of robotics still faces many challenges, leading to frequent
errors. Traditional methods and some LLM-based approaches, although improved,
still require substantial human intervention and struggle with autonomous error
correction in complex scenarios. In this work, we propose RoboReflect, a novel
framework leveraging large vision-language models (LVLMs) to enable
self-reflection and autonomous error correction in robotic grasping tasks.
RoboReflect allows robots to automatically adjust their strategies based on
unsuccessful attempts until successful execution is achieved. The corrected
strategies are saved in a memory for future task reference. We evaluate
RoboReflect through extensive testing on eight common objects prone to
ambiguous conditions across three categories. Our results demonstrate that
RoboReflect not only outperforms existing grasp pose estimation methods like
AnyGrasp and high-level action planning techniques using GPT-4V but also
significantly enhances the robot's ability to adapt and correct errors
independently. These findings underscore the critical importance of autonomous
self-reflection in robotic systems while effectively addressing the challenges
posed by ambiguous environments.
|
2501.09309
|
Understanding Mental Health Content on Social Media and Its Effect
Towards Suicidal Ideation
|
cs.CY cs.AI cs.CL
|
This review underscores the critical need for effective strategies to
identify and support individuals with suicidal ideation, exploiting
technological innovations in ML and DL to further suicide prevention efforts.
The study details the application of these technologies in analyzing vast
amounts of unstructured social media data to detect linguistic patterns,
keywords, phrases, tones, and contextual cues associated with suicidal
thoughts. It explores various ML and DL models, such as SVMs, CNNs, LSTMs, and
neural networks, and their effectiveness in interpreting complex data patterns and
emotional nuances within text data. The review discusses the potential of these
technologies to serve as a life-saving tool by identifying at-risk individuals
through their digital traces. Furthermore, it evaluates the real-world
effectiveness, limitations, and ethical considerations of employing these
technologies for suicide prevention, stressing the importance of responsible
development and usage. The study aims to fill critical knowledge gaps by
analyzing recent studies, methodologies, tools, and techniques in this field.
It highlights the importance of synthesizing current literature to inform
practical tools and suicide prevention efforts, guiding innovation in reliable,
ethical systems for early intervention. This research synthesis evaluates the
intersection of technology and mental health, advocating for the ethical and
responsible application of ML, DL, and NLP to offer life-saving potential
worldwide while addressing challenges like generalizability, biases, privacy,
and the need for further research to ensure these technologies do not
exacerbate existing inequities and harms.
|
2501.09310
|
A Study of In-Context-Learning-Based Text-to-SQL Errors
|
cs.CL cs.AI cs.SE
|
Large language models (LLMs) have been adopted to perform text-to-SQL tasks,
utilizing their in-context learning (ICL) capability to translate natural
language questions into structured query language (SQL). However, such a
technique faces correctness problems and requires efficient repairing
solutions. In this paper, we conduct the first comprehensive study of
text-to-SQL errors. Our study covers four representative ICL-based techniques,
five basic repairing methods, two benchmarks, and two LLM settings. We find
that text-to-SQL errors are widespread and summarize 29 error types across 7
categories. We also find that existing repairing attempts achieve only limited
correctness improvement, at the cost of high computational overhead and many
mis-repairs. Based on these findings, we propose MapleRepair, a novel text-to-SQL
error detection and repairing framework. The evaluation demonstrates that
MapleRepair outperforms existing solutions by repairing 13.8% more queries with
negligible mis-repairs and 67.4% less overhead.
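One category of errors such a study surfaces, queries referencing columns that
do not exist in the schema, can be detected statically before execution. The
sketch below is a single illustrative detection rule, not MapleRepair itself;
the schema encoding and the SQL keyword list are assumptions of this example.

```python
import re

def unknown_columns(sql, schema):
    """Return identifiers in the query that no table in `schema` defines.

    `schema` maps table name -> list of column names.
    """
    known = {c for cols in schema.values() for c in cols} | set(schema)
    keywords = {"select", "from", "where", "and", "or", "not", "count", "avg",
                "max", "min", "sum", "as", "join", "on", "group", "by",
                "order", "limit", "distinct", "having"}
    tokens = set(re.findall(r"[A-Za-z_]\w*", sql.lower()))
    return sorted(tokens - known - keywords)
```

A non-empty result flags the query for repair before it is ever run against
the database.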
|
2501.09311
|
Shape-Based Single Object Classification Using Ensemble Method
Classifiers
|
cs.CV cs.AI cs.CL
|
Nowadays, more and more images are available. Annotation and retrieval of the
images pose classification problems, where each class is defined as the group
of database images labelled with a common semantic label. Various systems have
been proposed for content-based retrieval, as well as for image classification
and indexing. In this paper, a hierarchical classification framework has been
proposed for bridging the semantic gap effectively and achieving multi-category
image classification. A well-known pre-processing and post-processing method
was applied to three problems: image segmentation, object identification, and
image classification. The method was used to classify single-object images from
Amazon and Google datasets. The classification was tested with four different
classifiers: BayesNetwork (BN), Random Forest (RF), Bagging, and Vote. The
estimated classification accuracies ranged from 20% to 99% (using 10-fold cross
validation). The Bagging classifier delivered the best performance, followed by
the Random Forest classifier.
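As a minimal illustration of the Vote classifier mentioned above, the sketch
below combines base classifiers by majority vote; the paper's full pipeline
(segmentation, feature extraction, and the BayesNetwork/RF/Bagging learners)
is not reproduced here, and the callable-based classifier interface is an
assumption of this example.

```python
from collections import Counter

def majority_vote(classifiers, x):
    """Predict the label chosen by most base classifiers for sample x."""
    votes = [clf(x) for clf in classifiers]
    return Counter(votes).most_common(1)[0][0]
```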
|
2501.09316
|
SOP-Agent: Empower General Purpose AI Agent with Domain-Specific SOPs
|
cs.AI
|
Despite significant advancements in general-purpose AI agents, several
challenges still hinder their practical application in real-world scenarios.
First, the limited planning capabilities of Large Language Models (LLMs)
restrict AI agents from effectively solving complex tasks that require
long-horizon planning. Second, general-purpose AI agents struggle to
efficiently utilize domain-specific knowledge and human expertise. In this
paper, we introduce the Standard Operational Procedure-guided Agent
(SOP-agent), a novel framework for constructing domain-specific agents through
pseudocode-style Standard Operational Procedures (SOPs) written in natural
language. Formally, we represent an SOP as a decision graph, which is traversed
to guide the agent in completing tasks specified by the SOP. We conduct
extensive experiments across tasks in multiple domains, including
decision-making, search and reasoning, code generation, data cleaning, and
grounded customer service. The SOP-agent demonstrates excellent versatility,
achieving performance superior to general-purpose agent frameworks and
comparable to domain-specific agent systems. Additionally, we introduce the
Grounded Customer Service Benchmark, the first benchmark designed to evaluate
the grounded decision-making capabilities of AI agents in customer service
scenarios based on SOPs.
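The decision-graph traversal described above can be sketched as follows,
assuming a simple adjacency encoding in which each node maps to a list of
(condition, next-node) edges; node names, the condition callables, and the
terminal-node convention are illustrative assumptions, not the SOP-agent's
actual representation.

```python
def run_sop(graph, start, context):
    """Walk the decision graph from `start`, taking the first edge whose
    condition holds in `context`, until a terminal node (no edges) is reached."""
    node = start
    path = [node]
    while graph.get(node):                  # terminal nodes have no outgoing edges
        for condition, nxt in graph[node]:
            if condition(context):
                node = nxt
                break
        else:
            raise ValueError(f"no applicable transition at {node}")
        path.append(node)
    return path
```

For example, a two-step refund SOP routes a paid order to `refund` and any
other order to `reject`.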
|
2501.09320
|
Cooperative Decentralized Backdoor Attacks on Vertical Federated
Learning
|
cs.LG cs.CR
|
Federated learning (FL) is vulnerable to backdoor attacks, where adversaries
alter model behavior on target classification labels by embedding triggers into
data samples. While these attacks have received considerable attention in
horizontal FL, they are less understood for vertical FL (VFL), where devices
hold different features of the samples, and only the server holds the labels.
In this work, we propose a novel backdoor attack on VFL which (i) does not rely
on gradient information from the server and (ii) considers potential collusion
among multiple adversaries for sample selection and trigger embedding. Our
label inference model augments variational autoencoders with metric learning,
which adversaries can train locally. A consensus process over the adversary
graph topology determines which datapoints to poison. We further propose
methods for trigger splitting across the adversaries, with an intensity-based
implantation scheme skewing the server towards the trigger. Our convergence
analysis reveals the impact of backdoor perturbations on VFL indicated by a
stationarity gap for the trained model, which we verify empirically as well. We
conduct experiments comparing our attack with recent backdoor VFL approaches,
finding that ours obtains significantly higher success rates for the same main
task performance despite not using server information. Additionally, our
results verify the impact of collusion on attack performance.
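The consensus step over the adversary graph can be illustrated with plain
average consensus: each adversary repeatedly averages its local poisoning
scores with its neighbours' scores so that all converge to a shared ranking of
which datapoints to poison. The averaging rule and fixed round count are
assumptions of this sketch, not the paper's exact scheme.

```python
def consensus_scores(local_scores, neighbors, rounds=50):
    """local_scores: adversary -> per-sample scores; neighbors: adversary graph."""
    scores = {a: list(s) for a, s in local_scores.items()}
    for _ in range(rounds):
        updated = {}
        for a in scores:
            group = [scores[a]] + [scores[b] for b in neighbors[a]]
            # each adversary replaces its scores with the group average
            updated[a] = [sum(vals) / len(group) for vals in zip(*group)]
        scores = updated
    return scores
```

After convergence, every adversary holds the same scores, so a top-k selection
over them yields an identical poisoning set at each party.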
|
2501.09321
|
Soft Knowledge Distillation with Multi-Dimensional Cross-Net Attention
for Image Restoration Models Compression
|
cs.CV
|
Transformer-based encoder-decoder models have achieved remarkable success in
image-to-image transfer tasks, particularly in image restoration. However,
their high computational complexity, manifested in elevated FLOPs and parameter
counts, limits their application in real-world scenarios. Existing knowledge
distillation methods in image restoration typically employ lightweight student
models that directly mimic the intermediate features and reconstruction results
of the teacher, overlooking the implicit attention relationships between them.
To address this, we propose a Soft Knowledge Distillation (SKD) strategy that
incorporates a Multi-dimensional Cross-net Attention (MCA) mechanism for
compressing image restoration models. This mechanism facilitates interaction
between the student and teacher across both channel and spatial dimensions,
enabling the student to implicitly learn the attention matrices. Additionally,
we employ a Gaussian kernel function to measure the distance between student
and teacher features in kernel space, ensuring stable and efficient feature
learning. To further enhance the quality of reconstructed images, we replace
the commonly used L1 or KL divergence loss with a contrastive learning loss at
the image level. Experiments on three tasks (image deraining, deblurring, and
denoising) demonstrate that our SKD strategy significantly reduces computational
complexity while maintaining strong image restoration capabilities.
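The Gaussian-kernel feature distance mentioned above can be written without a
deep-learning framework: in kernel space, ||phi(s) - phi(t)||^2 = k(s,s) +
k(t,t) - 2 k(s,t), with k(x,y) = exp(-||x - y||^2 / (2 sigma^2)). The bandwidth
`sigma` and the flat feature vectors are assumptions of this sketch.

```python
import math

def gaussian_kernel(x, y, sigma=1.0):
    """k(x, y) = exp(-||x - y||^2 / (2 sigma^2))."""
    sq_dist = sum((a - b) ** 2 for a, b in zip(x, y))
    return math.exp(-sq_dist / (2 * sigma ** 2))

def kernel_distance(student_feat, teacher_feat, sigma=1.0):
    """Squared distance between the two features in kernel space."""
    return (gaussian_kernel(student_feat, student_feat, sigma)
            + gaussian_kernel(teacher_feat, teacher_feat, sigma)
            - 2 * gaussian_kernel(student_feat, teacher_feat, sigma))
```

The distance is zero when student and teacher features coincide and grows
smoothly as they diverge, which is what makes it a stable distillation target.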
|
2501.09324
|
Safety-Critical Control for Discrete-time Stochastic Systems with
Flexible Safe Bounds using Affine and Quadratic Control Barrier Functions
|
eess.SY cs.SY
|
This paper presents a safe controller synthesis for discrete-time stochastic
systems using Control Barrier Functions (CBFs). The proposed condition enables
a controller synthesis that ensures system safety while avoiding conservative
bounds on the safe probabilities. In particular, this
study focuses on the design of CBFs that provide flexibility in the choice of
functions to obtain tighter bounds on the safe probabilities. Numerical
examples demonstrate the effectiveness of the approach.
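As a generic illustration of a discrete-time CBF condition (the paper's
stochastic formulation with affine and quadratic CBFs is more general), a
common check requires h(x_{k+1}) >= (1 - gamma) h(x_k), where h(x) >= 0
defines the safe set and gamma in (0, 1] bounds how fast h may shrink; the
function h and the decay rate below are illustrative assumptions.

```python
def cbf_condition_holds(h, x_next, x_curr, gamma=0.5):
    """Check the discrete-time CBF decrease condition
    h(x_{k+1}) >= (1 - gamma) * h(x_k)."""
    return h(x_next) >= (1 - gamma) * h(x_curr)
```

A controller synthesis would only admit inputs whose successor states satisfy
this inequality, keeping the system inside the safe set {x : h(x) >= 0}.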
|
2501.09326
|
Algorithm for Semantic Network Generation from Texts of Low Resource
Languages Such as Kiswahili
|
cs.CL
|
Processing low-resource languages, such as Kiswahili, using machine learning
is difficult due to a lack of adequate training data. However, such
low-resource languages are still important for human communication, are
already in daily use, and their users need practical machine processing tasks
such as summarization, disambiguation, and even question answering (QA). One
method of processing such languages, while bypassing the need for training
data, is the use of semantic networks. Some low-resource languages, such as
Kiswahili, follow the subject-verb-object (SVO) structure, and semantic
networks are similarly triples of subject-predicate-object; hence SVO parts of
speech tags can map into a semantic network triple. An algorithm to process
raw natural language text and map it into a semantic network is therefore
necessary and desirable for structuring texts of low-resource languages. This
algorithm was tested on the Kiswahili QA task, achieving up to 78.6% exact
match.
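The SVO-to-triple mapping described above can be sketched in a few lines:
given part-of-speech-tagged tokens from an SVO-ordered sentence, emit a
subject-predicate-object triple. The tagset and the single-token-per-role
simplification are assumptions of this toy example; real Kiswahili tagging and
the paper's full algorithm are far richer.

```python
def svo_to_triple(tagged_tokens):
    """tagged_tokens: list of (word, tag) pairs with tags in {'SUBJ','VERB','OBJ'}.

    Returns a (subject, predicate, object) triple, or None if a role is missing.
    """
    parts = {tag: word for word, tag in tagged_tokens
             if tag in ("SUBJ", "VERB", "OBJ")}
    if {"SUBJ", "VERB", "OBJ"} <= parts.keys():
        return (parts["SUBJ"], parts["VERB"], parts["OBJ"])
    return None
```

Triples produced this way can then be accumulated into a semantic network for
downstream tasks such as QA.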
|