| id | title | categories | abstract |
|---|---|---|---|
2501.07186
|
Generalizable Graph Neural Networks for Robust Power Grid Topology
Control
|
cs.LG cs.AI stat.ML
|
The energy transition necessitates new congestion management methods. One
such method is controlling the grid topology with machine learning (ML). This
approach has gained popularity following the Learning to Run a Power Network
(L2RPN) competitions. Graph neural networks (GNNs) are a class of ML models
that reflect graph structure in their computation, which makes them suitable
for power grid modeling. Various GNN approaches for topology control have thus
been proposed. We propose the first GNN model for grid topology control that
uses only GNN layers. Additionally, we identify the busbar information
asymmetry problem that the popular homogeneous graph representation suffers
from, and propose a heterogeneous graph representation to resolve it. We train
both homogeneous and heterogeneous GNNs and fully connected neural network
(FCNN) baselines on an imitation learning task. We evaluate the models
according to their classification accuracy and grid operation ability. We find
that the heterogeneous GNNs perform best on in-distribution networks, followed
by the FCNNs, and lastly, the homogeneous GNNs. We also find that both GNN
types generalize better to out-of-distribution networks than FCNNs.
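The aggregation step that lets a GNN "reflect graph structure in its computation" can be illustrated with a minimal sketch. This is not the paper's model: the mean aggregation, single weight matrix, and toy three-bus line grid are illustrative assumptions.

```python
import numpy as np

def gnn_layer(node_feats, adj, weight):
    """One message-passing layer: aggregate neighbor features
    (mean over neighbors plus a self-loop), then apply a linear
    transform and ReLU."""
    n = adj.shape[0]
    adj_sl = adj + np.eye(n)                  # add self-loops
    deg = adj_sl.sum(axis=1, keepdims=True)   # node degrees
    agg = (adj_sl @ node_feats) / deg         # mean aggregation
    return np.maximum(agg @ weight, 0.0)      # linear + ReLU

# Toy 3-bus grid: bus 0 -- bus 1 -- bus 2
adj = np.array([[0, 1, 0],
                [1, 0, 1],
                [0, 1, 0]], dtype=float)
feats = np.array([[1.0], [2.0], [3.0]])  # one feature per bus
w = np.array([[1.0]])
out = gnn_layer(feats, adj, w)
```

Because the same weight is shared across all buses, such a layer can in principle be applied to grids of different sizes, which is the property behind the out-of-distribution comparison above.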
|
2501.07187
|
Real-time Mode-Aware Dataflow: A Dataflow Model to Specify and Analyze
Mode-dependent CPSs under Relaxed Timing Constraints
|
eess.SY cs.SY
|
Modern Cyber-Physical Systems (CPS) often exhibit both relaxed real-time
constraints and a mode-dependent execution. Relaxed real-time constraints mean
that only a subset of the processes of a CPS have real-time constraints, and a
mode-dependent CPS has conditional execution branches. Static analysis tools,
such as the PolyGraph model (a formalism extending the Cyclo-Static Dataflow
model with real-time constraints), can specify and analyze systems with relaxed
real-time constraints. However, PolyGraph is limited in its ability to specify
and analyze mode-dependent CPSs. This paper extends PolyGraph with routing
actors, yielding the Routed PolyGraph model. This model is further extended to
the Real-time Mode-Aware Dataflow (RMDF), which both leverages routing actors
and incorporates a new dataflow actor to specify mode-dependent CPSs under
relaxed real-time constraints. This paper also extends the static analyses of
PolyGraph to RMDF. We showcase the application of RMDF with a specification and
an analysis (derivation of timing constraints at the job-level and a
feasibility test) of the vision processing system of the Ingenuity Mars
helicopter.
|
2501.07191
|
Pre-Trained Large Language Model Based Remaining Useful Life Transfer
Prediction of Bearing
|
eess.SY cs.LG cs.SY
|
Accurately predicting the remaining useful life (RUL) of rotating machinery,
such as bearings, is essential for ensuring equipment reliability and
minimizing unexpected industrial failures. Traditional data-driven deep
learning methods face challenges in practical settings due to inconsistent
training and testing data distributions and limited generalization for
long-term predictions.
|
2501.07192
|
A4O: All Trigger for One sample
|
cs.CR cs.CV
|
Backdoor attacks have become a critical threat to deep neural networks
(DNNs), drawing considerable research interest. However, most of the studied
attacks employ a single type of trigger. Consequently, proposed backdoor
defenders often rely on the assumption that triggers appear in a unified way. In
this paper, we show that this naive assumption creates a loophole that allows
more sophisticated backdoor attacks to bypass them. We design a novel backdoor
attack mechanism that incorporates multiple types of backdoor triggers,
focusing on stealthiness and effectiveness. Our journey begins with the
intriguing observation that the performance of a backdoor attack in deep
learning models, as well as its detectability and removability, are all
proportional to the magnitude of the trigger. Based on this correlation, we
propose reducing the magnitude of each trigger type and combining them to
achieve a strong backdoor relying on the combined trigger while still staying
safely under the radar of defenders. Extensive experiments on three standard
datasets demonstrate that our method can achieve high attack success rates
(ASRs) while consistently bypassing state-of-the-art defenses.
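The core idea, several individually weak triggers combined into one poisoned sample, can be sketched as follows. This is a hedged illustration: the two trigger types (a global blend and a local patch), the opacity values, and the image size are assumptions, not the paper's exact construction.

```python
import numpy as np

def apply_blend_trigger(img, pattern, alpha):
    """Blend a global noise pattern into the image at low opacity."""
    return (1 - alpha) * img + alpha * pattern

def apply_patch_trigger(img, patch, x, y, strength):
    """Add a faint local patch at position (x, y), clipping to [0, 1]."""
    out = img.copy()
    h, w = patch.shape
    out[x:x + h, y:y + w] += strength * patch
    return np.clip(out, 0.0, 1.0)

rng = np.random.default_rng(0)
img = rng.random((8, 8))       # toy grayscale image in [0, 1)
pattern = rng.random((8, 8))   # global trigger pattern
patch = np.ones((2, 2))        # local trigger patch
poisoned = apply_patch_trigger(
    apply_blend_trigger(img, pattern, alpha=0.05),
    patch, x=0, y=0, strength=0.05)
# Each trigger alone has small magnitude; the backdoor is keyed
# to the combination, keeping each component under the radar.
```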
|
2501.07194
|
VAGeo: View-specific Attention for Cross-View Object Geo-Localization
|
cs.CV
|
Cross-view object geo-localization (CVOGL) aims to locate an object of
interest in a captured ground- or drone-view image within the satellite image.
However, existing works treat ground-view and drone-view query images
equivalently, overlooking their inherent viewpoint discrepancies and the
spatial correlation between the query image and the satellite-view reference
image. To this end, this paper proposes a novel View-specific Attention
Geo-localization method (VAGeo) for accurate CVOGL. Specifically, VAGeo
contains two key modules: view-specific positional encoding (VSPE) module and
channel-spatial hybrid attention (CSHA) module. At the object level, the VSPE
module designs viewpoint-specific positional encodings, tailored to the
characteristics of ground and drone query viewpoints, to more accurately
identify the click-point object in the query image. At the feature level, the
CSHA module introduces a hybrid attention that combines channel and spatial
attention mechanisms to learn discriminative features. Extensive experimental
results demonstrate that the
proposed VAGeo gains a significant performance improvement, i.e., improving
acc@0.25/acc@0.5 on the CVOGL dataset from 45.43%/42.24% to 48.21%/45.22% for
ground-view, and from 61.97%/57.66% to 66.19%/61.87% for drone-view.
|
2501.07196
|
Crowdsourced human-based computational approach for tagging peripheral
blood smear sample images from Sickle Cell Disease patients using non-expert
users
|
cs.HC cs.AI
|
In this paper, we present a human-based computation approach for the analysis
of peripheral blood smear (PBS) images in patients with Sickle Cell
Disease (SCD). We used the Mechanical Turk microtask market to crowdsource the
labeling of PBS images. We then used the expert-tagged erythrocytesIDB dataset
to assess the accuracy and reliability of our proposal. Our results showed that
when a robust consensus is achieved among the Mechanical Turk workers, the
probability of error is very low, based on comparison with expert analysis.
This suggests that our proposed approach can be used to annotate datasets of
PBS images, which can then be used to train automated methods for the diagnosis
of SCD. In future work, we plan to explore the potential integration of our
findings with outcomes obtained through automated methodologies. This could
lead to the development of more accurate and reliable methods for the diagnosis
of SCD.
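The consensus rule described above can be sketched as a simple majority vote with an agreement threshold. This is a hypothetical illustration: the 0.7 threshold and the label names are assumptions, not the paper's protocol.

```python
from collections import Counter

def consensus_label(worker_labels, min_agreement=0.7):
    """Return the majority label if the fraction of workers agreeing
    meets the threshold; otherwise return None, signalling that the
    image should be routed to an expert instead."""
    counts = Counter(worker_labels)
    label, votes = counts.most_common(1)[0]
    if votes / len(worker_labels) >= min_agreement:
        return label
    return None
```

Images that fail the threshold are exactly the "no robust consensus" cases where, per the results above, expert review remains necessary.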
|
2501.07197
|
Lung Cancer detection using Deep Learning
|
eess.IV cs.CV cs.LG
|
In this paper we discuss lung cancer detection using a hybrid model of
Convolutional Neural Networks (CNNs) and Support Vector Machines (SVMs), aimed
at the early detection of tumors, whether benign or malignant. The hybrid model
is trained on a dataset of Computed Tomography (CT) scans. Using deep learning
to detect lung cancer early is a cutting-edge method.
|
2501.07201
|
An Enhanced Zeroth-Order Stochastic Frank-Wolfe Framework for
Constrained Finite-Sum Optimization
|
cs.LG cs.NA math.NA
|
We propose an enhanced zeroth-order stochastic Frank-Wolfe framework to
address constrained finite-sum optimization problems, a structure prevalent in
large-scale machine-learning applications. Our method introduces a novel double
variance reduction framework that effectively reduces the gradient
approximation variance induced by zeroth-order oracles and the stochastic
sampling variance from finite-sum objectives. By leveraging this framework, our
algorithm achieves significant improvements in query efficiency, making it
particularly well-suited for high-dimensional optimization tasks. Specifically,
for convex objectives, the algorithm achieves a query complexity of
O(d\sqrt{n}/\epsilon) to find an \epsilon-suboptimal solution, where d is the
dimensionality and n is the number of functions in the finite-sum objective.
For non-convex objectives, it achieves a query complexity of
O(d^{3/2}\sqrt{n}/\epsilon^2) without requiring the computation of d partial
derivatives at each iteration. These complexities are the best known among
zeroth-order stochastic Frank-Wolfe algorithms that avoid explicit gradient
calculations. Empirical experiments on convex and non-convex machine learning
tasks, including sparse logistic regression, robust classification, and
adversarial attacks on deep networks, validate the computational efficiency and
scalability of our approach. Our algorithm demonstrates superior performance in
both convergence rate and query complexity compared to existing methods.
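The two ingredients, a zeroth-order gradient estimate built purely from function queries and a projection-free Frank-Wolfe update, can be sketched as follows. This is a simplified single-sample illustration without the paper's double variance reduction; the l1-ball constraint set and the smoothing parameter are assumptions.

```python
import numpy as np

def zo_gradient(f, x, mu=1e-4, n_dirs=20, rng=None):
    """Zeroth-order gradient estimate: average finite differences
    (f(x + mu*u) - f(x)) / mu along random Gaussian directions u."""
    if rng is None:
        rng = np.random.default_rng(0)
    g = np.zeros(x.size)
    fx = f(x)
    for _ in range(n_dirs):
        u = rng.standard_normal(x.size)
        g += (f(x + mu * u) - fx) / mu * u
    return g / n_dirs

def frank_wolfe_step(x, grad, radius, step):
    """One Frank-Wolfe step over the l1 ball of given radius: the
    linear minimizer is a signed coordinate vertex, so no projection
    is ever needed."""
    s = np.zeros_like(x)
    i = int(np.argmax(np.abs(grad)))
    s[i] = -radius * np.sign(grad[i])
    return x + step * (s - x)
```

Each iteration thus costs only a handful of function evaluations rather than d partial derivatives, which is the source of the query-efficiency claim above.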
|
2501.07202
|
FaceOracle: Chat with a Face Image Oracle
|
cs.CV
|
A face image is a mandatory part of ID and travel documents. Obtaining
high-quality face images when issuing such documents is crucial for both human
examiners and automated face recognition systems. In several international
standards, face image quality requirements are intricate and defined in detail.
Identifying and understanding non-compliance or defects in the submitted face
images is crucial for both issuing authorities and applicants. In this work, we
introduce FaceOracle, an LLM-powered AI assistant that helps its users analyze
a face image in a natural conversational manner using standard-compliant
algorithms. Leveraging the power of LLMs, users can get explanations of various
face image quality concepts as well as interpret the outcome of face image
quality assessment (FIQA) algorithms. We implement a proof-of-concept that
demonstrates how experts at an issuing authority could integrate FaceOracle
into their workflow to analyze, understand, and communicate their decisions
more efficiently, resulting in enhanced productivity.
|
2501.07206
|
A data-driven approach to discover and quantify systemic lupus
erythematosus etiological heterogeneity from electronic health records
|
cs.LG stat.AP
|
Systemic lupus erythematosus (SLE) is a complex heterogeneous disease with
many manifestational facets. We propose a data-driven approach to discover
probabilistic independent sources from multimodal imperfect EHR data. These
sources represent exogenous variables in the data generation process causal
graph that estimate latent root causes of the presence of SLE in the health
record. We objectively evaluated the sources against the original variables
from which they were discovered by training supervised models to discriminate
SLE from negative health records using a reduced set of labelled instances. We
found 19 predictive sources with high clinical validity and whose EHR
signatures define independent factors of SLE heterogeneity. Using the sources
as the input patient data representation enables models to provide rich
explanations that better capture the clinical reasons why a particular record
is (not) an SLE case. Providers may be willing to trade patient-level
interpretability for discrimination, especially in challenging cases.
|
2501.07212
|
Future-Conditioned Recommendations with Multi-Objective Controllable
Decision Transformer
|
cs.IR
|
Securing long-term success is the ultimate aim of recommender systems,
demanding strategies capable of foreseeing and shaping the impact of decisions
on future user satisfaction. Current recommendation strategies grapple with two
significant hurdles. Firstly, the future impacts of recommendation decisions
remain obscured, rendering it impractical to evaluate them through direct
optimization of immediate metrics. Secondly, conflicts often emerge between
multiple objectives, like enhancing accuracy versus exploring diverse
recommendations. Existing strategies, trapped in a "training, evaluation, and
retraining" loop, grow more labor-intensive as objectives evolve. To address
these challenges, we introduce a future-conditioned strategy for
multi-objective controllable recommendations, allowing for the direct
specification of future objectives and empowering the model to generate item
sequences that align with these goals autoregressively. We present the
Multi-Objective Controllable Decision Transformer (MocDT), an offline
Reinforcement Learning (RL) model capable of autonomously learning the mapping
from multiple objectives to item sequences, leveraging extensive offline data.
Consequently, it can produce recommendations tailored to any specified
objectives during the inference stage. Our empirical findings emphasize the
controllable recommendation strategy's ability to produce item sequences
according to different objectives while maintaining performance that is
competitive with current recommendation strategies across various objectives.
|
2501.07213
|
Multi-face emotion detection for effective Human-Robot Interaction
|
cs.HC cs.AI cs.CV cs.RO
|
The integration of dialogue interfaces in mobile devices has become
ubiquitous, providing a wide array of services. As technology progresses,
humanoid robots designed with human-like features to interact effectively with
people are gaining prominence, and the use of advanced human-robot dialogue
interfaces is continually expanding. In this context, emotion recognition plays
a crucial role in enhancing human-robot interaction by enabling robots to
understand human intentions. This research proposes a facial emotion detection
interface integrated into a mobile humanoid robot, capable of displaying
real-time emotions from multiple individuals on a user interface. To this end,
various deep neural network models for facial expression recognition were
developed and evaluated under consistent computer-based conditions, yielding
promising results. Afterwards, a trade-off between accuracy and memory
footprint was carefully considered to effectively implement this application on
a mobile humanoid robot.
|
2501.07214
|
TimeLogic: A Temporal Logic Benchmark for Video QA
|
cs.CV
|
Temporal logical understanding, a core facet of human cognition, plays a
pivotal role in capturing complex sequential events and their temporal
relationships within videos. This capability is particularly crucial in tasks
like Video Question Answering (VideoQA), where the goal is to process visual
data over time together with textual data to provide coherent answers. However,
current VideoQA benchmarks devote little focus to evaluating this critical
skill due to the challenge of annotating temporal logic. Despite the
advancement of vision-language models, assessing their temporal logical
reasoning abilities remains a challenge, primarily due to the lack of QA pairs
that demand formal, complex temporal reasoning. To bridge this gap, we introduce the
TimeLogic QA (TLQA) framework to automatically generate the QA pairs,
specifically designed to evaluate the temporal logical understanding. To this
end, TLQA leverages temporal annotations from existing video datasets together
with temporal operators derived from logic theory to construct questions that
test understanding of event sequences and their temporal relationships. The
TLQA framework is generic and scalable, capable of leveraging both existing
video action datasets with temporal action segmentation annotations and video
datasets with temporal scene graph annotations to automatically generate
temporal logical questions. We leverage 4 datasets, STAR, Breakfast, AGQA, and
CrossTask, and generate two VideoQA dataset variants - small (TLQA-S) and large
(TLQA-L) - containing 2k and 10k QA pairs for each category, resulting in 32k
and 160k total pairs per dataset. We undertake a comprehensive evaluation of
leading-edge VideoQA models, employing the TLQA to benchmark their temporal
logical understanding capabilities. We assess the VideoQA model's temporal
reasoning performance on 16 categories of temporal logic with varying temporal
complexity.
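The generation idea, turning temporal segment annotations plus a logical operator into QA pairs, can be sketched for the simplest "before/after" operators. This is a hypothetical illustration: the `(start, end, action)` annotation format and the question templates are assumptions, not the TLQA templates.

```python
def generate_temporal_qa(segments):
    """Given (start, end, action) annotations for one video, emit
    simple 'before/after' QA pairs for consecutive, non-overlapping
    events."""
    ordered = sorted(segments, key=lambda s: s[0])
    qa = []
    for (s1, e1, a1), (s2, e2, a2) in zip(ordered, ordered[1:]):
        if e1 <= s2:  # a1 strictly precedes a2
            qa.append((f"What happens after '{a1}'?", a2))
            qa.append((f"What happens before '{a2}'?", a1))
    return qa
```

Richer operators from temporal logic (e.g. "between", "while") would extend this pattern with additional templates over the same annotations.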
|
2501.07216
|
Temperature Driven Multi-modal/Single-actuated Soft Finger
|
cs.RO
|
Soft pneumatic fingers are of great research interest. However, their
significant potential is limited as most of them can generate only one motion,
mostly bending. The conventional design of soft fingers does not allow them to
switch to another motion mode. In this paper, we developed a novel multi-modal
and single-actuated soft finger where its motion mode is switched by changing
the finger's temperature. Our soft finger is capable of switching between three
distinct motion modes (bending, twisting, and extension) in approximately
five seconds. We carried out a detailed experimental study of the soft finger
and evaluated its repeatability and range of motion. It exhibited a repeatability
of around one millimeter and a fifty percent larger range of motion than a
standard bending actuator. We developed an analytical model for a
fiber-reinforced soft actuator for twisting motion. This helped us relate the
input pressure to the output twist radius of the twisting motion. This model
was validated by experimental verification. Further, a soft robotic gripper
with multiple grasp modes was developed using three actuators. This gripper can
adapt to and grasp objects spanning a wide range of sizes, shapes, and stiffnesses. We
showcased its grasping capabilities by successfully grasping a small berry, a
large roll, and a delicate tofu cube.
|
2501.07217
|
When lies are mostly truthful: automated verbal deception detection for
embedded lies
|
cs.CL
|
Background: Verbal deception detection research relies on narratives and
commonly assumes statements as truthful or deceptive. A more realistic
perspective acknowledges that the veracity of statements exists on a continuum
with truthful and deceptive parts being embedded within the same statement.
However, research on embedded lies has been lagging behind. Methods: We
collected a novel dataset of 2,088 truthful and deceptive statements with
annotated embedded lies. Using a within-subjects design, participants provided
a truthful account of an autobiographical event. They then rewrote their
statement in a deceptive manner by including embedded lies, which they
highlighted afterwards and judged on lie centrality, deceptiveness, and source.
Results: We show that a fine-tuned language model (Llama-3-8B) can classify
truthful statements and those containing embedded lies with 64% accuracy.
Individual differences, linguistic properties and explainability analysis
suggest that the challenge of moving the dial towards embedded lies stems from
their resemblance to truthful statements. Typical deceptive statements
consisted of 2/3 truthful information and 1/3 embedded lies, largely derived
from past personal experiences, with minimal linguistic differences from
their truthful counterparts. Conclusion: We present this dataset as a novel
resource to address this challenge and foster research on embedded lies in
verbal deception detection.
|
2501.07220
|
Multiple-Satellite Cooperative Information Communication and Location
Sensing in LEO Satellite Constellations
|
cs.IT math.IT
|
Integrated sensing and communication (ISAC) and ubiquitous connectivity are
two usage scenarios of sixth generation (6G) networks. In this context, low
earth orbit (LEO) satellite constellations, as an important component of 6G
networks, are expected to provide ISAC services across the globe. In this paper,
we propose a novel dual-function LEO satellite constellation framework that
simultaneously realizes information communication for multiple user equipments
(UEs) and location sensing for targets of interest with the same hardware
and spectrum. In order to improve both the information transmission rate and
location sensing accuracy within limited wireless resources under dynamic
environment, we design a multiple-satellite cooperative information
communication and location sensing algorithm by jointly optimizing
communication beamforming and sensing waveform according to the characteristics
of LEO satellite constellation. Finally, extensive simulation results are
presented to demonstrate the competitive performance of the proposed
algorithms.
|
2501.07221
|
Exploring the Use of Contrastive Language-Image Pre-Training for Human
Posture Classification: Insights from Yoga Pose Analysis
|
cs.CV cs.AI
|
Accurate human posture classification in images and videos is crucial for
automated applications across various fields, including work safety, physical
rehabilitation, sports training, and daily assisted living. Recently, multimodal
learning methods, such as Contrastive Language-Image Pretraining (CLIP), have
advanced significantly in jointly understanding images and text. This study
aims to assess the effectiveness of CLIP in classifying human postures,
focusing on its application in yoga. Despite the initial limitations of the
zero-shot approach, applying transfer learning on 15,301 images (real and
synthetic) with 82 classes has shown promising results. The article describes
the full fine-tuning procedure, including the choice of image description
syntax, model selection, and hyperparameter adjustment. The fine-tuned CLIP model,
tested on 3826 images, achieves an accuracy of over 85%, surpassing the current
state-of-the-art of previous works on the same dataset by approximately 6%,
while requiring 3.5 times less training time than a
YOLOv8-based model. For more application-oriented scenarios, with smaller
datasets of six postures each, containing 1301 and 401 training images, the
fine-tuned models attain an accuracy of 98.8% and 99.1%, respectively.
Furthermore, our experiments indicate that training with as few as 20 images
per pose can yield around 90% accuracy in a six-class dataset. This study
demonstrates that this multimodal technique can be effectively used for yoga
pose classification, and possibly for human posture classification, in general.
Additionally, CLIP inference time (around 7 ms) supports that the model can be
integrated into automated systems for posture evaluation, e.g., for developing
a real-time personal yoga assistant for performance assessment.
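After fine-tuning, classification reduces to comparing an image embedding against one text embedding per pose. A minimal numpy sketch of that cosine-similarity step follows; the 2-D embeddings and pose labels are toy stand-ins, not actual CLIP outputs.

```python
import numpy as np

def classify_pose(image_emb, text_embs, labels):
    """Pick the pose whose text embedding has the highest cosine
    similarity with the image embedding (CLIP-style matching)."""
    img = image_emb / np.linalg.norm(image_emb)
    txt = text_embs / np.linalg.norm(text_embs, axis=1, keepdims=True)
    return labels[int(np.argmax(txt @ img))]

# Toy 2-D embeddings standing in for CLIP's high-dimensional vectors
poses = ["warrior pose", "tree pose"]
text_embs = np.array([[0.9, 0.1],
                      [0.1, 0.9]])
```

Because inference is a single dot product per class after the encoders run, latencies in the few-millisecond range reported above are plausible for real-time use.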
|
2501.07223
|
Improving Incremental Nonlinear Dynamic Inversion Robustness Using
Robust Control in Aerial Robotics
|
cs.RO
|
Improving robustness to uncertainty and rejection of external disturbances
represents a significant challenge in aerial robotics. Nonlinear controllers
based on Incremental Nonlinear Dynamic Inversion (INDI), known for their
ability to estimate disturbances from measured, filtered data, have been
notably used in such applications. Typically, these controllers comprise two
cascaded loops: an inner loop employing nonlinear dynamic inversion and an
outer loop generating the virtual control inputs via linear controllers. In
this paper, a novel methodology is introduced that combines the advantages of
INDI with the robustness of linear structured $\mathcal{H}_\infty$ controllers.
A full cascaded architecture is proposed to control the dynamics of a
multirotor drone, covering both stabilization and guidance. In particular,
low-order $\mathcal{H}_\infty$ controllers are designed for the outer loop by
properly structuring the problem and solving it through non-smooth
optimization. A comparative analysis is conducted between an existing INDI/PD
approach and the proposed INDI/$\mathcal{H}_\infty$ strategy, showing a notable
enhancement in the rejection of external disturbances. It is carried out first
using MATLAB simulations involving a nonlinear model of a Parrot Bebop
quadcopter drone, and then experimentally using a customized quadcopter built
by the ENAC team. The results show an improvement of more than 50\% in the
rejection of disturbances such as gusts.
|
2501.07224
|
Touched by ChatGPT: Using an LLM to Drive Affective Tactile Interaction
|
cs.RO
|
Touch is a fundamental aspect of emotion-rich communication, playing a vital
role in human interaction and offering significant potential in human-robot
interaction. Previous research has demonstrated that a sparse representation of
human touch can effectively convey social tactile signals. However, advances in
human-robot tactile interaction remain limited, as many humanoid robots possess
simplistic capabilities, such as only opening and closing their hands,
restricting nuanced tactile expressions. In this study, we explore how a robot
can use sparse representations of tactile vibrations to convey emotions to a
person. To achieve this, we developed a wearable sleeve integrated with a 5x5
grid of vibration motors, enabling the robot to communicate diverse tactile
emotions and gestures. Using chain prompts within a Large Language Model (LLM),
we generated distinct 10-second vibration patterns corresponding to 10 emotions
(e.g., happiness, sadness, fear) and 6 touch gestures (e.g., pat, rub, tap).
Participants (N = 32) then rated each vibration stimulus based on perceived
valence and arousal. Participants were accurate at recognising the intended
emotions, a result that aligns with earlier findings. These results highlight the LLM's
ability to generate emotional haptic data and effectively convey emotions
through tactile signals. By translating complex emotional and tactile
expressions into vibratory patterns, this research demonstrates how LLMs can
enhance physical interaction between humans and robots.
|
2501.07227
|
MECD+: Unlocking Event-Level Causal Graph Discovery for Video Reasoning
|
cs.CV
|
Video causal reasoning aims to achieve a high-level understanding of videos
from a causal perspective. However, it exhibits limitations in its scope,
primarily executed in a question-answering paradigm and focusing on brief video
segments containing isolated events and basic causal relations, lacking
comprehensive and structured causality analysis for videos with multiple
interconnected events. To fill this gap, we introduce a new task and dataset,
Multi-Event Causal Discovery (MECD). It aims to uncover the causal relations
between events distributed chronologically across long videos. Given visual
segments and textual descriptions of events, MECD identifies the causal
associations between these events to derive a comprehensive and structured
event-level video causal graph explaining why and how the result event
occurred. To address the challenges of MECD, we devise a novel framework
inspired by the Granger Causality method, incorporating an efficient mask-based
event prediction model to perform an Event Granger Test. It estimates causality
by comparing the predicted result event when premise events are masked versus
unmasked. Furthermore, we integrate causal inference techniques such as
front-door adjustment and counterfactual inference to mitigate challenges in
MECD like causality confounding and illusory causality. Additionally, context
chain reasoning is introduced to conduct more robust and generalized reasoning.
Experiments validate the effectiveness of our framework in reasoning complete
causal relations, outperforming GPT-4o and VideoChat2 by 5.77% and 2.70%,
respectively. Further experiments demonstrate that causal relation graphs can
also contribute to downstream video understanding tasks such as video question
answering and video event prediction.
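The Event Granger Test described above can be sketched in miniature: score a candidate premise by how much masking it lowers the model's confidence in the result event. This is a hedged illustration; the `predict` callable and the masking-with-None convention are assumptions standing in for the paper's mask-based event prediction model.

```python
def event_granger_score(predict, events, premise_idx):
    """Causality proxy: confidence in the result event with the
    premise visible, minus confidence with the premise masked.
    `predict(events)` returns that confidence; a masked event is
    replaced by None."""
    masked = list(events)
    masked[premise_idx] = None
    return predict(events) - predict(masked)

# Toy predictor: confident that the result event occurred only if
# "fire starts" is visible in the premise context.
def toy_predict(events):
    return 0.9 if "fire starts" in events else 0.3

score = event_granger_score(toy_predict, ["fire starts", "rain stops"], 0)
```

A large score flags the premise as causally relevant; a near-zero score (as masking "rain stops" would give here) flags it as non-causal.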
|
2501.07236
|
CSTA: Spatial-Temporal Causal Adaptive Learning for Exemplar-Free Video
Class-Incremental Learning
|
cs.CV
|
Continual learning aims to acquire new knowledge while retaining past
information. Class-incremental learning (CIL) presents a challenging scenario
where classes are introduced sequentially. For video data, the task becomes
more complex than image data because it requires learning and preserving both
spatial appearance and temporal action involvement. To address this challenge,
we propose a novel exemplar-free framework that equips separate spatiotemporal
adapters to learn new class patterns, accommodating the incremental information
representation requirements unique to each class. While separate adapters are
proven to mitigate forgetting and fit unique requirements, naively applying
them hinders the intrinsic connection between spatial and temporal information
increments, affecting the efficiency of representing newly learned class
information. Motivated by this, we introduce two key innovations from a causal
perspective. First, a causal distillation module is devised to maintain the
relation between spatial-temporal knowledge for a more efficient
representation. Second, a causal compensation mechanism is proposed to reduce
the conflicts during increment and memorization between different types of
information. Extensive experiments conducted on benchmark datasets demonstrate
that our framework can achieve new state-of-the-art results, surpassing current
exemplar-based methods by 4.2% in accuracy on average.
|
2501.07237
|
Breaking Memory Limits: Gradient Wavelet Transform Enhances LLMs
Training
|
cs.LG cs.AI
|
Large language models (LLMs) have shown impressive performance across a range
of natural language processing tasks. However, their vast number of parameters
introduces significant memory challenges during training, particularly when
using memory-intensive optimizers like Adam. Existing memory-efficient
algorithms often rely on techniques such as singular value decomposition
projection or weight freezing. While these approaches help alleviate memory
constraints, they generally produce suboptimal results compared to full-rank
updates. In this paper, we investigate memory-efficient methods beyond
low-rank training, proposing a novel solution called Gradient Wavelet Transform
(GWT), which applies wavelet transforms to gradients in order to significantly
reduce the memory requirements for maintaining optimizer states. We demonstrate
that GWT can be seamlessly integrated with memory-intensive optimizers,
enabling efficient training without sacrificing performance. Through extensive
experiments on both pre-training and fine-tuning tasks, we show that GWT
achieves state-of-the-art performance compared with advanced memory-efficient
optimizers and full-rank approaches in terms of both memory usage and training
performance.
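The memory saving rests on a standard wavelet decomposition of the gradient. One level of the Haar transform, sketched below, splits a gradient vector into half-length coarse and detail parts; keeping optimizer state only for a reduced representation is the assumed source of the saving, and this generic transform is not GWT's specific construction.

```python
import numpy as np

def haar_forward(g):
    """One Haar level: pairwise averages (coarse) and differences
    (detail), each half the length of g (g must have even length)."""
    pairs = g.reshape(-1, 2)
    coarse = (pairs[:, 0] + pairs[:, 1]) / np.sqrt(2)
    detail = (pairs[:, 0] - pairs[:, 1]) / np.sqrt(2)
    return coarse, detail

def haar_inverse(coarse, detail):
    """Exact inverse of haar_forward."""
    out = np.empty(coarse.size * 2)
    out[0::2] = (coarse + detail) / np.sqrt(2)
    out[1::2] = (coarse - detail) / np.sqrt(2)
    return out
```

Because the transform is orthogonal and exactly invertible, an update computed in the wavelet domain can be mapped back to parameter space without the approximation error of a low-rank projection.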
|
2501.07238
|
Lessons From Red Teaming 100 Generative AI Products
|
cs.AI
|
In recent years, AI red teaming has emerged as a practice for probing the
safety and security of generative AI systems. Due to the nascency of the field,
there are many open questions about how red teaming operations should be
conducted. Based on our experience red teaming over 100 generative AI products
at Microsoft, we present our internal threat model ontology and eight main
lessons we have learned:
1. Understand what the system can do and where it is applied
2. You don't have to compute gradients to break an AI system
3. AI red teaming is not safety benchmarking
4. Automation can help cover more of the risk landscape
5. The human element of AI red teaming is crucial
6. Responsible AI harms are pervasive but difficult to measure
7. LLMs amplify existing security risks and introduce new ones
8. The work of securing AI systems will never be complete
By sharing these insights alongside case studies from our operations, we
offer practical recommendations aimed at aligning red teaming efforts with real
world risks. We also highlight aspects of AI red teaming that we believe are
often misunderstood and discuss open questions for the field to consider.
|
2501.07244
|
Can Vision-Language Models Evaluate Handwritten Math?
|
cs.CV cs.CL
|
Recent advancements in Vision-Language Models (VLMs) have opened new
possibilities in automatic grading of handwritten student responses,
particularly in mathematics. However, a comprehensive study to test the ability
of VLMs to evaluate and reason over handwritten content remains absent. To
address this gap, we introduce FERMAT, a benchmark designed to assess the
ability of VLMs to detect, localize and correct errors in handwritten
mathematical content. FERMAT spans four key error dimensions - computational,
conceptual, notational, and presentation - and comprises over 2,200 handwritten
math solutions derived from 609 manually curated problems from grades 7-12 with
intentionally introduced perturbations. Using FERMAT, we benchmark nine VLMs
across three tasks: error detection, localization, and correction. Our results
reveal significant shortcomings in current VLMs in reasoning over handwritten
text, with Gemini-1.5-Pro achieving the highest error correction rate (77%). We
also observed that some models struggle with processing handwritten content, as
their accuracy improves when handwritten inputs are replaced with printed text
or images. These findings highlight the limitations of current VLMs and reveal
new avenues for improvement. We release FERMAT and all the associated resources
in the open-source to drive further research.
|
2501.07245
|
Depth and Image Fusion for Road Obstacle Detection Using Stereo Camera
|
cs.CV cs.MM
|
This paper is devoted to the detection of objects on a road, performed with a
combination of two methods based on both the use of depth information and video
analysis of data from a stereo camera. Since neither the time at which an
object appears on the road nor its size and shape are known in advance,
ML/DL-based approaches are not applicable. The task becomes more complicated
due to variations in artificial illumination, inhomogeneous road surface
texture, and the unknown character and features of the object. To solve this
problem, we developed a depth and image fusion method that complements a
search for small low-contrast objects using an RGB-based method with obstacle
detection using a stereo-image-based approach with SLIC superpixel
segmentation. We conducted experiments with static and low-speed obstacles in
an underground parking lot and demonstrated that the developed technique
successfully detects and even tracks small objects, such as parking
infrastructure elements, items left on the road, wheels, and dropped boxes.
|
2501.07246
|
Audio-CoT: Exploring Chain-of-Thought Reasoning in Large Audio Language
Model
|
cs.SD cs.CL cs.MM eess.AS
|
Large Audio-Language Models (LALMs) have demonstrated remarkable performance
in tasks involving audio perception and understanding, such as speech
recognition and audio captioning. However, their reasoning capabilities -
critical for solving complex real-world problems - remain underexplored. In
this work, we conduct the first exploration into integrating Chain-of-Thought
(CoT) reasoning into LALMs to enhance their reasoning ability across auditory
modalities. We evaluate representative CoT methods, analyzing their performance
in both information extraction and reasoning tasks across sound, music, and
speech domains. Our findings reveal that CoT methods significantly improve
performance on easy and medium tasks but encounter challenges with hard tasks,
where reasoning chains can confuse the model rather than improve accuracy.
Additionally, we identify a positive correlation between reasoning path length
and accuracy, demonstrating the potential of scaling inference for advanced
instruction-following and reasoning. This study not only highlights the promise
of CoT in enhancing LALM reasoning capabilities but also identifies key
limitations and provides actionable directions for future research.
|
2501.07247
|
Interpretable machine-learning for predicting molecular weight of PLA
based on artificial bee colony optimization algorithm and adaptive neurofuzzy
inference system
|
eess.SY cs.LG cs.SY
|
This article discusses the integration of the Artificial Bee Colony (ABC)
algorithm with two supervised learning methods, namely Artificial Neural
Networks (ANNs) and Adaptive Network-based Fuzzy Inference System (ANFIS), for
feature selection from Near-Infrared (NIR) spectra for predicting the molecular
weight of medical-grade Polylactic Acid (PLA). During extrusion processing of
PLA, in-line NIR spectra were captured along with extrusion process and machine
setting data. With a dataset comprising 63 observations and 512 input features,
appropriate machine learning tools are essential for interpreting data and
selecting features to improve prediction accuracy. Initially, the ABC
optimization algorithm is coupled with ANN/ANFIS to forecast PLA molecular
weight. The objective functions of the ABC algorithm are to minimize the root
mean square error (RMSE) between experimental and predicted PLA molecular
weights while also minimizing the number of input features. Results indicate
that employing ABC-ANFIS yields the lowest RMSE of 282 Da and identifies four
significant parameters (NIR wavenumbers 6158 cm-1, 6310 cm-1, 6349 cm-1, and
melt temperature) for prediction. These findings demonstrate the effectiveness
of using the ABC algorithm with ANFIS for selecting a minimal set of features
to predict PLA molecular weight with high accuracy during processing.
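The RMSE objective named above is the standard root mean square error; a minimal stdlib-only sketch (illustrative, not the paper's implementation):

```python
from math import sqrt

def rmse(y_true, y_pred):
    # Root mean square error between experimental and predicted values
    assert len(y_true) == len(y_pred)
    return sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))
```

In the ABC objective described above, this term would be minimized jointly with the number of selected input features.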
|
2501.07248
|
Implicit Neural Representations for Registration of Left Ventricle
Myocardium During a Cardiac Cycle
|
eess.IV cs.CV
|
Understanding the movement of the left ventricle myocardium (LVmyo) during
the cardiac cycle is essential for assessing cardiac function. One way to model
this movement is through a series of deformable image registrations (DIRs) of
the LVmyo. Traditional deep learning methods for DIRs, such as those based on
convolutional neural networks, often require substantial memory and
computational resources. In contrast, implicit neural representations (INRs)
offer an efficient approach by operating on any number of continuous points.
This study extends the use of INRs for DIR to cardiac computed tomography (CT),
focusing on LVmyo registration. To enhance the precision of the registration
around the LVmyo, we incorporate the signed distance field of the LVmyo with
the Hounsfield Unit values from the CT frames. This guides the registration of
the LVmyo, while keeping the tissue information from the CT frames. Our
framework demonstrates high registration accuracy and provides a robust method
for temporal registration that facilitates further analysis of LVmyo motion.
|
2501.07250
|
Large Language Models: New Opportunities for Access to Science
|
astro-ph.IM cs.IR physics.soc-ph
|
The adaptation of Large Language Models like ChatGPT for information
retrieval from scientific data, software and publications is offering new
opportunities to simplify access to and understanding of science for persons
from all levels of expertise. They can become tools both to enhance the
usability of the open science environment we are building and to provide
systematic insight into a long-built corpus of scientific publications.
The uptake of Retrieval Augmented Generation-enhanced chat applications in the
construction of the open science environment of the KM3NeT neutrino detectors
serves as a focus point to explore and exemplify prospects for the wider
application of Large Language Models for our science.
|
2501.07251
|
MOS-Attack: A Scalable Multi-objective Adversarial Attack Framework
|
cs.LG cs.AI cs.CR cs.CV
|
Crafting adversarial examples is crucial for evaluating and enhancing the
robustness of Deep Neural Networks (DNNs), presenting a challenge equivalent to
maximizing a non-differentiable 0-1 loss function.
However, existing single-objective methods, namely adversarial attacks that
focus on a single surrogate loss function, do not fully harness the benefits of
engaging multiple loss functions, owing to an insufficient understanding of
their synergistic and conflicting nature.
To overcome these limitations, we propose the Multi-Objective Set-based
Attack (MOS Attack), a novel adversarial attack framework leveraging multiple
loss functions and automatically uncovering their interrelations.
The MOS Attack adopts a set-based multi-objective optimization strategy,
enabling the incorporation of numerous loss functions without additional
parameters.
It also automatically mines synergistic patterns among various losses,
facilitating the generation of potent adversarial attacks with fewer
objectives.
Extensive experiments have shown that our MOS Attack outperforms
single-objective attacks. Furthermore, by harnessing the identified synergistic
patterns, MOS Attack continues to show superior results with a reduced number
of loss functions.
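The paper does not publish its set-based selection rule; one illustrative ingredient of set-based multi-objective optimization is Pareto filtering over per-candidate loss vectors (a hedged sketch, where each tuple holds the values of several surrogate losses to be maximized):

```python
def dominates(a, b):
    # a dominates b if it is at least as good on every loss and strictly better on one
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_front(candidates):
    # Keep the candidate perturbations that no other candidate dominates
    return [c for c in candidates
            if not any(dominates(o, c) for o in candidates if o is not c)]
```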
|
2501.07255
|
GazeGrasp: DNN-Driven Robotic Grasping with Wearable Eye-Gaze Interface
|
cs.RO
|
We present GazeGrasp, a gaze-based manipulation system enabling individuals
with motor impairments to control collaborative robots using eye-gaze. The
system employs an ESP32 CAM for eye tracking, MediaPipe for gaze detection, and
YOLOv8 for object localization, integrated with a Universal Robot UR10 for
manipulation tasks. After user-specific calibration, the system allows
intuitive object selection with a magnetic snapping effect and robot control
via eye gestures. Experimental evaluation involving 13 participants
demonstrated that the magnetic snapping effect significantly reduced gaze
alignment time, improving task efficiency by 31%. GazeGrasp provides a robust,
hands-free interface for assistive robotics, enhancing accessibility and
autonomy for users.
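The magnetic snapping effect is described only at a high level; a hypothetical sketch of such a rule (function name and radius are assumptions, not from the paper) snaps the gaze point to the nearest detected object centre within a pixel radius:

```python
def snap_to_object(gaze, object_centres, radius):
    # Return the object whose centre is nearest to the gaze point,
    # provided it lies within `radius` pixels; otherwise None.
    best, best_d2 = None, radius * radius
    for name, (cx, cy) in object_centres.items():
        d2 = (gaze[0] - cx) ** 2 + (gaze[1] - cy) ** 2
        if d2 <= best_d2:
            best, best_d2 = name, d2
    return best
```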
|
2501.07256
|
EdgeTAM: On-Device Track Anything Model
|
cs.CV
|
On top of Segment Anything Model (SAM), SAM 2 further extends its capability
from image to video inputs through a memory bank mechanism and obtains a
remarkable performance compared with previous methods, making it a foundation
model for video segmentation task. In this paper, we aim at making SAM 2 much
more efficient so that it even runs on mobile devices while maintaining a
comparable performance. Despite several works optimizing SAM for better
efficiency, we find they are not sufficient for SAM 2 because they all focus on
compressing the image encoder, while our benchmark shows that the newly
introduced memory attention blocks are also the latency bottleneck. Given this
observation, we propose EdgeTAM, which leverages a novel 2D Spatial Perceiver
to reduce the computational cost. In particular, the proposed 2D Spatial
Perceiver encodes the densely stored frame-level memories with a lightweight
Transformer that contains a fixed set of learnable queries. Given that video
segmentation is a dense prediction task, we find preserving the spatial
structure of the memories is essential so that the queries are split into
global-level and patch-level groups. We also propose a distillation pipeline
that further improves the performance without inference overhead. As a result,
EdgeTAM achieves 87.7, 70.0, 72.3, and 71.7 J&F on DAVIS 2017, MOSE, SA-V val,
and SA-V test, while running at 16 FPS on iPhone 15 Pro Max.
|
2501.07259
|
PO-GVINS: Tightly Coupled GNSS-Visual-Inertial Integration with
Pose-Only Representation
|
cs.RO
|
Accurate and reliable positioning is crucial for perception, decision-making,
and other high-level applications in autonomous driving, unmanned aerial
vehicles, and intelligent robots. Given the inherent limitations of standalone
sensors, integrating heterogeneous sensors with complementary capabilities is
one of the most effective approaches to achieving this goal. In this paper, we
propose a filtering-based, tightly coupled global navigation satellite system
(GNSS)-visual-inertial positioning framework with a pose-only formulation
applied to the visual-inertial system (VINS), termed PO-GVINS. Specifically,
the multiple-view imaging used in current VINS requires a prior estimate of
each 3D feature and then jointly estimates camera poses and 3D feature
positions, which inevitably introduces feature linearization errors and faces
dimensional explosion. In contrast, the pose-only (PO) formulation, which has
been shown to be equivalent to multiple-view imaging and has been applied in
visual reconstruction, represents feature depth using two camera poses; the 3D
feature positions are thus removed from the state vector, avoiding the
aforementioned difficulties. Inspired by this, we first apply the PO
formulation to our VINS, i.e., PO-VINS. GNSS raw measurements are then
incorporated with integer ambiguity resolution to achieve accurate and
drift-free estimation. Extensive
experiments demonstrate that the proposed PO-VINS significantly outperforms the
multi-state constrained Kalman filter (MSCKF). By incorporating GNSS
measurements, PO-GVINS achieves accurate, drift-free state estimation, making
it a robust solution for positioning in challenging environments.
|
2501.07260
|
Skip Mamba Diffusion for Monocular 3D Semantic Scene Completion
|
cs.CV cs.AI
|
3D semantic scene completion is critical for multiple downstream tasks in
autonomous systems. It estimates missing geometric and semantic information in
the acquired scene data. Due to the challenging real-world conditions, this
task usually demands complex models that process multi-modal data to achieve
acceptable performance. We propose a unique neural model, leveraging advances
from the state space and diffusion generative modeling to achieve remarkable 3D
semantic scene completion performance with monocular image input. Our technique
processes the data in the conditioned latent space of a variational autoencoder
where diffusion modeling is carried out with an innovative state space
technique. A key component of our neural network is the proposed Skimba (Skip
Mamba) denoiser, which is adept at efficiently processing long-sequence data.
The Skimba diffusion model is integral to our 3D scene completion network,
incorporating a triple Mamba structure, dimensional decomposition residuals and
varying dilations along three directions. We also adopt a variant of this
network for the subsequent semantic segmentation stage of our method. Extensive
evaluation on the standard SemanticKITTI and SSCBench-KITTI360 datasets shows
that our approach not only outperforms other monocular techniques by a large
margin but also achieves competitive performance against stereo methods. The
code is available at https://github.com/xrkong/skimba
|
2501.07267
|
Transforming Role Classification in Scientific Teams Using LLMs and
Advanced Predictive Analytics
|
cs.DL cs.SI
|
Scientific team dynamics are critical in determining the nature and impact of
research outputs. However, existing methods for classifying author roles based
on self-reports and clustering lack comprehensive contextual analysis of
contributions. Thus, we present a transformative approach to classifying author
roles in scientific teams using advanced large language models (LLMs), which
offers a more refined analysis compared to traditional clustering methods.
Specifically, we seek to complement and enhance these traditional methods by
utilizing open source and proprietary LLMs, such as GPT-4, Llama3 70B, Llama2
70B, and Mistral 7x8B, for role classification. Utilizing few-shot prompting,
we categorize author roles and demonstrate that GPT-4 outperforms other models
across multiple categories, surpassing traditional approaches such as XGBoost
and BERT. Our methodology also includes building a predictive deep learning
model using 10 features. By training this model on a dataset derived from the
OpenAlex database, which provides detailed metadata on academic publications --
such as author-publication history, author affiliation, research topics, and
citation counts -- we achieve an F1 score of 0.76, demonstrating robust
classification of author roles.
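For reference, the F1 score reported above is the harmonic mean of precision and recall; a stdlib sketch from raw counts:

```python
def f1_score(tp, fp, fn):
    # Harmonic mean of precision and recall, computed from
    # true-positive, false-positive, and false-negative counts
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)
```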
|
2501.07273
|
An Extended Survey and a Comparison Framework for Dataflow Models of
Computation and Communication
|
eess.SY cs.SY
|
Dataflow Models of Computation and Communication (DF MoCCs) are a formalism
used to specify the behavior of Cyber-Physical Systems (CPSs). DF MoCCs are
widely used in the design of CPSs, as they provide a high level of abstraction
for specifying a system's behavior. DF MoCC rules give semantics to a dataflow
specification of a CPS, and static analysis algorithms rely on these semantics
to guarantee safety properties of the dataflow specification, such as bounded
memory usage and deadlock freeness. A wide range of DF MoCCs exists, each with
its own characteristics and static analyses. This paper presents a survey of
those DF MoCCs and a classification in eight categories. In addition, DF MoCCs
are characterized by a comprehensive list of features and static analyses,
which reflect their expressiveness and analyzability. Based on this
characterization, a framework is proposed to compare the expressiveness and the
analyzability of DF MoCCs quantitatively.
|
2501.07274
|
Mining Intraday Risk Factor Collections via Hierarchical Reinforcement
Learning based on Transferred Options
|
cs.CE
|
Traditional risk factors like beta, size/value, and momentum often lag behind
market dynamics in measuring and predicting stock return volatility.
Statistical models like PCA and factor analysis fail to capture hidden
nonlinear relationships. Genetic programming (GP) can identify nonlinear
factors but often lacks mechanisms for evaluating factor quality, and the
resulting formulas are complex. To address these challenges, we propose a
Hierarchical Proximal Policy Optimization (HPPO) framework for automated factor
generation and evaluation. HPPO uses two PPO models: a high-level policy
assigns weights to stock features, and a low-level policy identifies latent
nonlinear relationships. The Pearson correlation between generated factors and
return volatility serves as the reward signal. Transfer learning pre-trains the
high-level policy on large-scale historical data, fine-tuning it with the
latest data to adapt to new features and shifts. Experiments show the HPPO-TO
algorithm achieves a 25\% excess return in HFT markets across China (CSI
300/800), India (Nifty 100), and the US (S\&P 500). Code and data are available
at https://github.com/wencyxu/HRL-HF_risk_factor_set.
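The reward signal above is the Pearson correlation between a generated factor and return volatility; a minimal stdlib-only sketch (illustrative, not the released code):

```python
from math import sqrt

def pearson(xs, ys):
    # Pearson correlation between a factor series and realized volatility
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / sqrt(var_x * var_y)
```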
|
2501.07275
|
Generating Poisoning Attacks against Ridge Regression Models with
Categorical Features
|
cs.LG math.OC
|
Machine Learning (ML) models have become a very powerful tool to extract
information from large datasets and use it to make accurate predictions and
automated decisions. However, ML models can be vulnerable to external attacks,
causing them to underperform or deviate from their expected tasks. One way to
attack ML models is by injecting malicious data to mislead the algorithm during
the training phase, which is referred to as a poisoning attack. We can prepare
for such situations by designing anticipated attacks, which are later used for
creating and testing defence strategies. In this paper, we propose an algorithm
to generate strong poisoning attacks for a ridge regression model containing
both numerical and categorical features that explicitly models and poisons
categorical features. We model categorical features as SOS-1 sets and formulate
the problem of designing poisoning attacks as a bilevel optimization problem
that is nonconvex mixed-integer in the upper level and unconstrained convex
quadratic in the lower level. We present the mathematical formulation of the
problem, introduce a single-level reformulation based on the Karush-Kuhn-Tucker
(KKT) conditions of the lower level, find bounds for the lower-level variables
to accelerate solver performance, and propose a new algorithm to poison
categorical features. Numerical experiments show that our method improves the
mean squared error of all datasets compared to the previous benchmark in the
literature.
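Modeling a categorical feature as an SOS-1 set means that of the indicator variables encoding its categories, at most one may be nonzero; a small illustrative feasibility check (not the paper's solver formulation):

```python
def satisfies_sos1(indicators, tol=1e-9):
    # An SOS-1 set is feasible when at most one member is nonzero
    return sum(abs(v) > tol for v in indicators) <= 1
```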
|
2501.07276
|
Bridging Smart Meter Gaps: A Benchmark of Statistical, Machine Learning
and Time Series Foundation Models for Data Imputation
|
cs.AI cs.LG
|
The integrity of time series data in smart grids is often compromised by
missing values due to sensor failures, transmission errors, or disruptions.
Gaps in smart meter data can bias consumption analyses and hinder reliable
predictions, causing technical and economic inefficiencies. As smart meter data
grows in volume and complexity, conventional techniques struggle with its
nonlinear and nonstationary patterns. In this context, Generative Artificial
Intelligence offers promising solutions that may outperform traditional
statistical methods. In this paper, we evaluate two general-purpose Large
Language Models and five Time Series Foundation Models for smart meter data
imputation, comparing them with conventional Machine Learning and statistical
models. We introduce artificial gaps (30 minutes to one day) into an anonymized
public dataset to test inference capabilities. Results show that Time Series
Foundation Models, with their contextual understanding and pattern recognition,
could significantly enhance imputation accuracy in certain cases. However, the
trade-off between computational cost and performance gains remains a critical
consideration.
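The evaluation protocol above (inject artificial gaps, then impute) can be sketched with a simple linear-interpolation baseline; this sketch assumes every gap is bounded by observed readings and is not one of the benchmarked models:

```python
def inject_gap(series, start, length):
    # Replace a contiguous block of readings with None to simulate an outage
    masked = list(series)
    for i in range(start, start + length):
        masked[i] = None
    return masked

def linear_impute(masked):
    # Fill each gap by linear interpolation between its bounding observations
    filled = list(masked)
    i = 0
    while i < len(filled):
        if filled[i] is None:
            lo, hi = i - 1, i           # last observed index before the gap
            while filled[hi] is None:
                hi += 1                 # first observed index after the gap
            for j in range(i, hi):
                t = (j - lo) / (hi - lo)
                filled[j] = filled[lo] + t * (filled[hi] - filled[lo])
            i = hi
        else:
            i += 1
    return filled
```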
|
2501.07278
|
Lifelong Learning of Large Language Model based Agents: A Roadmap
|
cs.AI
|
Lifelong learning, also known as continual or incremental learning, is a
crucial component for advancing Artificial General Intelligence (AGI) by
enabling systems to continuously adapt in dynamic environments. While large
language models (LLMs) have demonstrated impressive capabilities in natural
language processing, existing LLM agents are typically designed for static
systems and lack the ability to adapt over time in response to new challenges.
This survey is the first to systematically summarize the potential techniques
for incorporating lifelong learning into LLM-based agents. We categorize the
core components of these agents into three modules: the perception module for
multimodal input integration, the memory module for storing and retrieving
evolving knowledge, and the action module for grounded interactions with the
dynamic environment. We highlight how these pillars collectively enable
continuous adaptation, mitigate catastrophic forgetting, and improve long-term
performance. This survey provides a roadmap for researchers and practitioners
working to develop lifelong learning capabilities in LLM agents, offering
insights into emerging trends, evaluation metrics, and application scenarios.
Relevant literature and resources are available at
https://github.com/qianlima-lab/awesome-lifelong-llm-agent.
|
2501.07279
|
Toward Universal Decoding of Binary Linear Block Codes via Enhanced
Polar Transformations
|
cs.IT math.IT
|
Binary linear block codes (BLBCs) are essential to modern communication, but
their diverse structures often require multiple decoders, increasing
complexity. This work introduces enhanced polar decoding ($\mathsf{PD}^+$), a
universal soft decoding algorithm that transforms any BLBC into a polar-like
code compatible with efficient polar code decoders such as successive
cancellation list (SCL) decoding. Key innovations in $\mathsf{PD}^+$ include
pruning polar kernels, shortening codes, and leveraging a simulated annealing
algorithm to optimize transformations. These enable $\mathsf{PD}^+$ to achieve
competitive or superior performance to state-of-the-art algorithms like OSD and
GRAND across various codes, including extended BCH, extended Golay, and binary
quadratic residue codes, with significantly lower complexity. Moreover,
$\mathsf{PD}^+$ is designed to be forward-compatible with advancements in polar
code decoding techniques and AI-driven search methods, making it a robust and
versatile solution for universal BLBC decoding in both present and future
systems.
|
2501.07288
|
LLM-Net: Democratizing LLMs-as-a-Service through Blockchain-based Expert
Networks
|
cs.AI
|
The centralization of Large Language Models (LLMs) development has created
significant barriers to AI advancement, limiting the democratization of these
powerful technologies. This centralization, coupled with the scarcity of
high-quality training data and mounting complexity of maintaining comprehensive
expertise across rapidly expanding knowledge domains, poses critical challenges
to the continued growth of LLMs. While solutions like Retrieval-Augmented
Generation (RAG) offer potential remedies, maintaining up-to-date expert
knowledge across diverse domains remains a significant challenge, particularly
given the exponential growth of specialized information. This paper introduces
LLMs Networks (LLM-Net), a blockchain-based framework that democratizes
LLMs-as-a-Service through a decentralized network of specialized LLM providers.
By leveraging collective computational resources and distributed domain
expertise, LLM-Net incorporates fine-tuned expert models for various specific
domains, ensuring sustained knowledge growth while maintaining service quality
through collaborative prompting mechanisms. The framework's robust design
includes blockchain technology for transparent transaction and performance
validation, establishing an immutable record of service delivery. Our
simulation, built on top of state-of-the-art LLMs such as Claude 3.5 Sonnet,
Llama 3.1, Grok-2, and GPT-4o, validates the effectiveness of the
reputation-based mechanism in maintaining service quality by selecting
high-performing respondents (LLM providers), thereby demonstrating the
potential of LLM-Net to sustain AI advancement through the integration of
decentralized expertise and blockchain-based accountability.
|
2501.07290
|
Principles for Responsible AI Consciousness Research
|
cs.AI
|
Recent research suggests that it may be possible to build conscious AI
systems now or in the near future. Conscious AI systems would arguably deserve
moral consideration, and it may be the case that large numbers of conscious
systems could be created and caused to suffer. Furthermore, AI systems or
AI-generated characters may increasingly give the impression of being
conscious, leading to debate about their moral status. Organisations involved
in AI research must establish principles and policies to guide research and
deployment choices and public communication concerning consciousness. Even if
an organisation chooses not to study AI consciousness as such, it will still
need policies in place, as those developing advanced AI systems risk
inadvertently creating conscious entities. Responsible research and deployment
practices are essential to address this possibility. We propose five principles
for responsible research and argue that research organisations should make
voluntary, public commitments to principles on these lines. Our principles
concern research objectives and procedures, knowledge sharing and public
communications.
|
2501.07292
|
Estimating quantum relative entropies on quantum computers
|
quant-ph cs.IT cs.LG cs.NA math.IT math.NA
|
Quantum relative entropy, a quantum generalization of the well-known
Kullback-Leibler divergence, serves as a fundamental measure of the
distinguishability between quantum states and plays a pivotal role in quantum
information science. Despite its importance, efficiently estimating quantum
relative entropy between two quantum states on quantum computers remains a
significant challenge. In this work, we propose the first quantum algorithm for
estimating quantum relative entropy and Petz R\'{e}nyi divergence from two
unknown quantum states on quantum computers, addressing open problems
highlighted in [Phys. Rev. A 109, 032431 (2024)] and [IEEE Trans. Inf. Theory
70, 5653-5680 (2024)]. This is achieved by combining quadrature approximations
of relative entropies, the variational representation of quantum f-divergences,
and a new technique for parameterizing Hermitian polynomial operators to
estimate their traces with quantum states. Notably, the circuit size of our
algorithm is at most 2n+1 with n being the number of qubits in the quantum
states and it is directly applicable to distributed scenarios, where quantum
states to be compared are hosted on cross-platform quantum computers. We
validate our algorithm through numerical simulations, laying the groundwork for
its future deployment on quantum hardware devices.
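For reference, the two divergences the algorithm estimates are the standard quantities

```latex
% Umegaki quantum relative entropy
D(\rho \,\|\, \sigma) = \operatorname{Tr}\!\left[\rho \left(\log \rho - \log \sigma\right)\right]

% Petz Renyi divergence of order \alpha
D_\alpha(\rho \,\|\, \sigma) = \frac{1}{\alpha - 1}\,
  \log \operatorname{Tr}\!\left[\rho^{\alpha} \sigma^{1-\alpha}\right]
```

which reduce to the classical Kullback-Leibler and Rényi divergences when the two states commute.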
|
2501.07294
|
Dataset-Agnostic Recommender Systems
|
cs.IR cs.LG
|
[This is a position paper and does not contain any empirical or theoretical
results] Recommender systems have become a cornerstone of personalized user
experiences, yet their development typically involves significant manual
intervention, including dataset-specific feature engineering, hyperparameter
tuning, and configuration. To this end, we introduce a novel paradigm:
Dataset-Agnostic Recommender Systems (DAReS) that aims to enable a single
codebase to autonomously adapt to various datasets without the need for
fine-tuning, for a given recommender system task. Central to this approach is
the Dataset Description Language (DsDL), a structured format that provides
metadata about the dataset's features and labels and allows the system to
understand the dataset's characteristics, enabling it to autonomously manage
processes such as feature selection, missing-value imputation, noise removal,
and hyperparameter optimization. By reducing the need for domain-specific expertise
and manual adjustments, DAReS offers a more efficient and scalable solution for
building recommender systems across diverse application domains. It addresses
critical challenges in the field, such as reusability, reproducibility, and
accessibility for non-expert users or entry-level researchers.
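The paper does not publish a DsDL schema; a purely hypothetical descriptor (all field names are assumptions for illustration) might look like:

```python
import json

# Hypothetical DsDL descriptor; the field names are illustrative
# assumptions, not taken from the paper.
dsdl = {
    "dataset": "example-interactions",
    "features": [
        {"name": "user_id", "type": "categorical"},
        {"name": "item_id", "type": "categorical"},
        {"name": "timestamp", "type": "numeric"},
    ],
    "label": {"name": "rating", "type": "numeric", "task": "rating_prediction"},
}

# A DAReS-style system would parse such metadata before configuring itself.
parsed = json.loads(json.dumps(dsdl))
```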
|
2501.07295
|
GestLLM: Advanced Hand Gesture Interpretation via Large Language Models
for Human-Robot Interaction
|
cs.RO
|
This paper introduces GestLLM, an advanced system for human-robot interaction
that enables intuitive robot control through hand gestures. Unlike conventional
systems, which rely on a limited set of predefined gestures, GestLLM leverages
large language models and feature extraction via MediaPipe to interpret a
diverse range of gestures. This integration addresses key limitations in
existing systems, such as restricted gesture flexibility and the inability to
recognize complex or unconventional gestures commonly used in human
communication.
By combining state-of-the-art feature extraction and language model
capabilities, GestLLM achieves performance comparable to leading
vision-language models while supporting gestures underrepresented in
traditional datasets. For example, this includes gestures from popular culture,
such as the ``Vulcan salute'' from Star Trek, without any additional
pretraining, prompt engineering, etc. This flexibility enhances the naturalness
and inclusivity of robot control, making interactions more intuitive and
user-friendly.
GestLLM provides a significant step forward in gesture-based interaction,
enabling robots to understand and respond to a wide variety of hand gestures
effectively. This paper outlines its design, implementation, and evaluation,
demonstrating its potential applications in advanced human-robot collaboration,
assistive robotics, and interactive entertainment.
|
2501.07296
|
Event-based Video Person Re-identification via Cross-Modality and
Temporal Collaboration
|
cs.CV
|
Video-based person re-identification (ReID) has become increasingly important
due to its applications in video surveillance. By employing events in
video-based person ReID, more motion information can be provided between
continuous frames to improve recognition accuracy. Previous approaches have
introduced event data into the video person ReID task as an auxiliary aid, but
they still cannot avoid the privacy leakage problem caused by RGB images. In order
to avoid privacy attacks and to take advantage of the benefits of event data,
we consider using only event data. To make full use of the information in the
event stream, we propose a Cross-Modality and Temporal Collaboration (CMTC)
network for event-based video person ReID. First, we design an event transform
network to obtain corresponding auxiliary information from the input of raw
events. Additionally, we propose a differential modality collaboration module
to balance the roles of events and auxiliaries to achieve complementary
effects. Furthermore, we introduce a temporal collaboration module to exploit
motion information and appearance cues. Experimental results demonstrate that
our method outperforms others in the task of event-based video person ReID.
|
2501.07297
|
Toward Realistic Camouflaged Object Detection: Benchmarks and Method
|
cs.CV
|
Camouflaged object detection (COD) primarily relies on semantic or instance
segmentation methods. While these methods have made significant advancements in
identifying the contours of camouflaged objects, they may be inefficient or not
cost-effective for tasks that only require the specific location of the object.
Object detection algorithms offer an optimized solution for Realistic
Camouflaged Object Detection (RCOD) in such cases. However, detecting
camouflaged objects remains a formidable challenge due to the high degree of
similarity between the features of the objects and their backgrounds. Unlike
segmentation methods that perform pixel-wise comparisons to differentiate
between foreground and background, object detectors omit this analysis, further
aggravating the challenge. To solve this problem, we propose a camouflage-aware
feature refinement (CAFR) strategy. Since camouflaged objects are not rare
categories, CAFR fully utilizes a clear perception of the current object within
the prior knowledge of large models to assist detectors in deeply understanding
the distinctions between background and foreground. Specifically, in CAFR, we
introduce the Adaptive Gradient Propagation (AGP) module that fine-tunes all
feature extractor layers in large detection models to fully refine
class-specific features from camouflaged contexts. We then design the Sparse
Feature Refinement (SFR) module that optimizes the transformer-based feature
extractor to focus primarily on capturing class-specific features in
camouflaged scenarios. To facilitate the assessment of RCOD tasks, we manually
annotate the labels required for detection on three existing segmentation COD
datasets, creating a new benchmark for RCOD tasks. Code and datasets are
available at: https://github.com/zhimengXin/RCOD.
|
2501.07299
|
ViewVR: Visual Feedback Modes to Achieve Quality of VR-based
Telemanipulation
|
cs.RO
|
The paper focuses on an immersive teleoperation system that enhances
operator's ability to actively perceive the robot's surroundings. A
consumer-grade HTC Vive VR system was used to synchronize the operator's hand
and head movements with a UR3 robot and a custom-built robotic head with two
degrees of freedom (2-DoF). The system's usability, manipulation efficiency,
and intuitiveness of control were evaluated in comparison with static head
camera positioning across three distinct tasks. Code and other supplementary
materials can be accessed by link: https://github.com/ErkhovArtem/ViewVR
|
2501.07300
|
Comparative analysis of optical character recognition methods for S\'ami
texts from the National Library of Norway
|
cs.CL cs.CV
|
Optical Character Recognition (OCR) is crucial to the National Library of
Norway's (NLN) digitisation process as it converts scanned documents into
machine-readable text. However, for the S\'ami documents in NLN's collection,
the OCR accuracy is insufficient. Given that OCR quality affects downstream
processes, evaluating and improving OCR for text written in S\'ami languages is
necessary to make these resources accessible. To address this need, this work
fine-tunes and evaluates three established OCR approaches, Transkribus,
Tesseract and TrOCR, for transcribing S\'ami texts from NLN's collection. Our
results show that Transkribus and TrOCR outperform Tesseract on this task,
while Tesseract achieves superior performance on an out-of-domain dataset.
Furthermore, we show that fine-tuning pre-trained models and supplementing
manual annotations with machine annotations and synthetic text images can yield
accurate OCR for S\'ami languages, even with a moderate amount of manually
annotated data.
|
2501.07301
|
The Lessons of Developing Process Reward Models in Mathematical
Reasoning
|
cs.CL cs.AI cs.LG
|
Process Reward Models (PRMs) emerge as a promising approach for process
supervision in mathematical reasoning of Large Language Models (LLMs), which
aim to identify and mitigate intermediate errors in the reasoning processes.
However, the development of effective PRMs faces significant challenges,
particularly in data annotation and evaluation methodologies. In this paper,
through extensive experiments, we demonstrate that commonly used Monte Carlo
(MC) estimation-based data synthesis for PRMs typically yields inferior
performance and generalization compared to LLM-as-a-judge and human annotation
methods. MC estimation relies on completion models to evaluate current-step
correctness, leading to inaccurate step verification. Furthermore, we identify
potential biases in conventional Best-of-N (BoN) evaluation strategies for
PRMs: (1) The unreliable policy models generate responses with correct answers
but flawed processes, leading to a misalignment between the evaluation criteria
of BoN and the PRM objectives of process verification. (2) The tolerance of
PRMs for such responses leads to inflated BoN scores. (3) Existing PRMs have a
significant proportion of minimum scores concentrated on the final answer
steps, revealing the shift from process to outcome-based assessment in BoN
Optimized PRMs. To address these challenges, we develop a consensus filtering
mechanism that effectively integrates MC estimation with LLM-as-a-judge and
advocates a more comprehensive evaluation framework that combines
response-level and step-level metrics. Based on these mechanisms, we
significantly improve both model performance and data efficiency in the BoN
evaluation and the step-wise error identification task. Finally, we release a
new state-of-the-art PRM that outperforms existing open-source alternatives and
provides practical guidelines for future research in building process
supervision models.
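As a minimal illustration of the Best-of-N evaluation discussed above, the sketch below scores each candidate response by the minimum of its per-step PRM scores and keeps the best candidate. The stub scorer and the min aggregation are illustrative assumptions, not the paper's actual PRM or evaluation protocol.

```python
# Hypothetical sketch of Best-of-N (BoN) selection with a process reward
# model (PRM). `prm_score` is a stand-in stub; a real PRM would score each
# reasoning step with a learned model. Aggregating a response by its
# minimum step score is one common convention, assumed here.

def bon_select(candidates, prm_score):
    """Pick the candidate whose weakest reasoning step scores highest."""
    def response_score(steps):
        return min(prm_score(s) for s in steps)
    return max(candidates, key=response_score)

# Toy example: each candidate is a list of reasoning steps, and the stub
# PRM penalizes steps marked as flawed.
def stub_prm(step):
    return 0.1 if "flawed" in step else 0.9

candidates = [
    ["step ok", "flawed shortcut", "answer 42"],  # right answer, bad process
    ["step ok", "step ok", "answer 42"],          # right answer, sound process
]
best = bon_select(candidates, stub_prm)
```

A process-aware selector prefers the second candidate even though both end with the same answer, which is exactly the distinction outcome-based BoN scoring can miss.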
|
2501.07304
|
Code and Pixels: Multi-Modal Contrastive Pre-training for Enhanced
Tabular Data Analysis
|
cs.CV cs.LG
|
Learning from tabular data is of paramount importance, as it complements the
conventional analysis of image and video data by providing a rich source of
structured information that is often critical for comprehensive understanding
and decision-making processes. We present Multi-task Contrastive Masked Tabular
Modeling (MT-CMTM), a novel method aiming to enhance tabular models by
leveraging the correlation between tabular data and corresponding images.
MT-CMTM employs a dual strategy combining contrastive learning with masked
tabular modeling, optimizing the synergy between these data modalities.
Central to our approach is a 1D Convolutional Neural Network with residual
connections and an attention mechanism (1D-ResNet-CBAM), designed to
efficiently process tabular data without relying on images. This enables
MT-CMTM to handle purely tabular data for downstream tasks, eliminating the
need for potentially costly image acquisition and processing.
We evaluated MT-CMTM on the DVM car dataset, which is uniquely suited for
this particular scenario, and the newly developed HIPMP dataset, which connects
membrane fabrication parameters with image data. Our MT-CMTM model outperforms
the proposed tabular 1D-ResNet-CBAM, which is trained from scratch, achieving a
1.48% improvement in relative MSE on HIPMP and a 2.38% increase in
absolute accuracy on DVM. These results demonstrate MT-CMTM's robustness and
its potential to advance the field of multi-modal learning.
|
2501.07305
|
The Devil is in the Spurious Correlation: Boosting Moment Retrieval via
Temporal Dynamic Learning
|
cs.CV
|
Given a textual query along with a corresponding video, moment retrieval
aims to localize the moments relevant to the query within the video. While
commendable results have been demonstrated by existing transformer-based
approaches, accurately predicting the temporal span of the target moment
remains a major challenge. In this paper, we reveal
that a crucial reason stems from the spurious correlation between the text
queries and the moment context. Namely, the model may associate the textual
query with the background frames rather than the target moment. To address this
issue, we propose a temporal dynamic learning approach for moment retrieval,
where two strategies are designed to mitigate the spurious correlation. First,
we introduce a novel video synthesis approach to construct a dynamic context
for the relevant moment. With separate yet similar videos mixed up, the
synthesis approach empowers our model to attend to the target moment of the
corresponding query under various dynamic contexts. Second, we enhance the
representation by learning temporal dynamics. Besides the visual
representation, text queries are aligned with temporal dynamic representations,
which enables our model to establish a non-spurious correlation between the
query-related moment and context. With the aforementioned proposed method, the
spurious correlation issue in moment retrieval can be largely alleviated. Our
method establishes a new state-of-the-art performance on two popular benchmarks
of moment retrieval, \ie, QVHighlights and Charades-STA. In addition, the
detailed ablation analyses demonstrate the effectiveness of the proposed
strategies. Our code will be publicly available.
|
2501.07306
|
Variable Bregman Majorization-Minimization Algorithm and its Application
to Dirichlet Maximum Likelihood Estimation
|
cs.LG math.OC
|
We propose a novel Bregman descent algorithm for minimizing a convex function
that is expressed as the sum of a differentiable part (defined over an open
set) and a possibly nonsmooth term. The approach, referred to as the Variable
Bregman Majorization-Minimization (VBMM) algorithm, extends the Bregman
Proximal Gradient method by allowing the Bregman function used in the
divergence to adaptively vary at each iteration, provided it satisfies a
majorizing condition on the objective function. This adaptive framework enables
the algorithm to approximate the objective more precisely at each iteration,
thereby allowing for accelerated convergence compared to the traditional
Bregman Proximal Gradient descent. We establish the convergence of the VBMM
algorithm to a minimizer under mild assumptions on the family of metrics used.
Furthermore, we introduce a novel application of both the Bregman Proximal
Gradient method and the VBMM algorithm to the estimation of the
multidimensional parameters of a Dirichlet distribution through the
maximization of its log-likelihood. Numerical experiments confirm that the VBMM
algorithm outperforms existing approaches in terms of convergence speed.
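In our own notation (not necessarily the paper's), one VBMM iteration for minimizing $f+g$, with $f$ differentiable and $D_{h_k}$ the Bregman divergence of an iteration-dependent function $h_k$, can be sketched as:

```latex
x_{k+1} \in \operatorname*{argmin}_{x}\; g(x) + f(x_k)
  + \langle \nabla f(x_k),\, x - x_k \rangle + D_{h_k}(x, x_k),
\qquad
D_{h_k}(x, y) = h_k(x) - h_k(y) - \langle \nabla h_k(y),\, x - y \rangle,
```

with $h_k$ chosen at each iteration so that the surrogate majorizes the objective; keeping $h_k$ fixed across iterations recovers the standard Bregman Proximal Gradient step.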
|
2501.07312
|
Localization-Aware Multi-Scale Representation Learning for Repetitive
Action Counting
|
cs.CV
|
Repetitive action counting (RAC) aims to estimate the number of
class-agnostic action occurrences in a video without exemplars. Most current
RAC methods rely on a raw frame-to-frame similarity representation for period
prediction. However, this approach can be significantly disrupted by common
noise such as action interruptions and inconsistencies, leading to sub-optimal
counting performance in realistic scenarios. In this paper, we introduce a
foreground localization optimization objective into similarity representation
learning to obtain more robust and efficient video features. We propose a
Localization-Aware Multi-Scale Representation Learning (LMRL) framework.
Specifically, we apply a Multi-Scale Period-Aware Representation (MPR) with a
scale-specific design to accommodate various action frequencies and learn more
flexible temporal correlations. Furthermore, we introduce the Repetition
Foreground Localization (RFL) method, which enhances the representation by
coarsely identifying periodic actions and incorporating global semantic
information. These two modules can be jointly optimized, resulting in a more
discerning periodic action representation. Our approach significantly reduces
the impact of noise, thereby improving counting accuracy. Additionally, the
framework is designed to be scalable and adaptable to different types of video
content. Experimental results on the RepCountA and UCFRep datasets demonstrate
that our proposed method effectively handles repetitive action counting.
|
2501.07314
|
FinerWeb-10BT: Refining Web Data with LLM-Based Line-Level Filtering
|
cs.CL
|
Data quality is crucial for training Large Language Models (LLMs).
Traditional heuristic filters often miss low-quality text or mistakenly remove
valuable content. In this paper, we introduce an LLM-based line-level filtering
method to enhance training data quality. We use GPT-4o mini to label a
20,000-document sample from FineWeb at the line level, allowing the model to
create descriptive labels for low-quality lines. These labels are grouped into
nine main categories, and we train a DeBERTa-v3 classifier to scale the
filtering to a 10B-token subset of FineWeb. To test the impact of our
filtering, we train GPT-2 models on both the original and the filtered
datasets. The results show that models trained on the filtered data achieve
higher accuracy on the HellaSwag benchmark and reach their performance targets
faster, even with up to 25\% less data. This demonstrates that LLM-based
line-level filtering can significantly improve data quality and training
efficiency for LLMs. We release our quality-annotated dataset, FinerWeb-10BT,
and the codebase to support further work in this area.
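The filtering step itself can be sketched as scoring each line and dropping those below a quality threshold. In the paper this scorer is a DeBERTa-v3 classifier distilled from GPT-4o mini labels; the heuristic `stub_score` below is a stand-in assumption purely for illustration.

```python
# Hypothetical sketch of line-level filtering. A real implementation would
# replace `stub_score` with a learned quality classifier; the threshold
# value is also an illustrative assumption.

def filter_lines(document, quality_score, threshold=0.5):
    """Keep only the lines the classifier deems high quality."""
    kept = [line for line in document.splitlines()
            if quality_score(line) >= threshold]
    return "\n".join(kept)

def stub_score(line):
    # Toy heuristic stand-in: boilerplate markers get a low score.
    return 0.1 if "click here" in line.lower() else 0.9

doc = "A useful sentence.\nCLICK HERE to subscribe!\nAnother useful sentence."
clean = filter_lines(doc, stub_score)
```

Operating at the line level lets the filter remove boilerplate embedded in otherwise useful documents, which document-level heuristics would either keep wholesale or discard wholesale.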
|
2501.07317
|
Evaluation of Artificial Intelligence Methods for Lead Time Prediction
in Non-Cycled Areas of Automotive Production
|
cs.LG cs.AI cs.RO
|
The present study examines the effectiveness of applying Artificial
Intelligence methods in an automotive production environment to predict unknown
lead times in a non-cycle-controlled production area. Data structures are
analyzed to identify contextual features and then preprocessed using one-hot
encoding. Method selection focuses on supervised machine learning techniques.
Within supervised learning, both regression and classification methods are
evaluated. Continuous regression is not feasible given the distribution of the
target variable. Analysis of classification methods shows that Ensemble Learning and
Support Vector Machines are the most suitable. Preliminary study results
indicate that gradient boosting algorithms LightGBM, XGBoost, and CatBoost
yield the best results. After further testing and extensive hyperparameter
optimization, the final method choice is the LightGBM algorithm. Depending on
feature availability and prediction interval granularity, relative prediction
accuracies of up to 90% can be achieved. Further tests highlight the importance
of periodic retraining of AI models to accurately represent complex production
processes using the database. The research demonstrates that AI methods can be
effectively applied to highly variable production data, adding business value
by providing an additional metric for various control tasks while outperforming
current non-AI-based systems.
|
2501.07318
|
Movable Antenna Enhanced Integrated Sensing and Communication Via
Antenna Position Optimization
|
cs.IT eess.SP math.IT
|
In this paper, we propose an integrated sensing and communication (ISAC)
system aided by the movable-antenna (MA) array, which can improve the
communication and sensing performance via flexible antenna movement over
conventional fixed-position antenna (FPA) array. First, we consider the
downlink multiuser communication, where each user is randomly distributed
within a given three-dimensional zone with local movement. To reduce the
overhead of frequent antenna movement, the antenna position vector (APV) is
designed based on users' statistical channel state information (CSI), so that
the antennas only need to be moved on a large timescale. Then, for target
sensing, the Cramer-Rao bounds (CRBs) of the estimation mean square error for
different spatial angles of arrival (AoAs) are derived as functions of MAs'
positions. Based on the above, we formulate an optimization problem to maximize
the expected minimum achievable rate among all communication users, with given
constraints on the maximum acceptable CRB thresholds for target sensing. An
alternating optimization algorithm is proposed to iteratively optimize one of
the horizontal and vertical APVs of the MA array with the other being fixed.
Numerical results demonstrate that our proposed MA arrays can significantly
enlarge the trade-off region between communication and sensing performance
compared to conventional FPA arrays with different inter-antenna spacing. It is
also revealed that the steering vectors of the designed MA arrays exhibit low
correlation in the angular domain, thus effectively reducing channel
correlation among communication users to enhance their achievable rates, while
alleviating ambiguity in target angle estimation to achieve improved sensing
accuracy.
|
2501.07324
|
Foundation Models at Work: Fine-Tuning for Fairness in Algorithmic
Hiring
|
cs.LG
|
Foundation models require fine-tuning to ensure their generative outputs
align with intended results for specific tasks. Automating this fine-tuning
process is challenging, as it typically needs human feedback that can be
expensive to acquire. We present AutoRefine, a method that leverages
reinforcement learning for targeted fine-tuning, utilizing direct feedback from
measurable performance improvements in specific downstream tasks. We
demonstrate the method for a problem arising in algorithmic hiring platforms
where linguistic biases influence a recommendation system. In this setting, a
generative model seeks to rewrite given job specifications to receive more
diverse candidate matches from a recommendation engine which matches jobs to
candidates. Our model detects and regulates biases in job descriptions to meet
diversity and fairness criteria. The experiments on a public hiring dataset and
a real-world hiring platform showcase how large language models can assist in
identifying and mitigating biases in the real world.
|
2501.07327
|
Community Aware Temporal Network Generation
|
cs.SI physics.soc-ph
|
The advantages of temporal networks in capturing complex dynamics, such as
diffusion and contagion, have led to breakthroughs in real-world systems across
numerous fields. In the case of human behavior, face-to-face interaction
networks enable us to understand the dynamics of how communities emerge and
evolve in time through the interactions, which is crucial in fields like
epidemiology, sociological studies and urban science. However, state-of-the-art
datasets suffer from a number of drawbacks, such as short time-span for data
collection and a small number of participants. Moreover, concerns arise for the
participants' privacy and the data collection costs. Over the past years, many
successful algorithms for static network generation have been proposed, but
they often do not tackle the social structure of interactions or their temporal
aspect. In this work, we extend a recent network generation approach to capture
the evolution of interactions between different communities. Our method labels
nodes based on their community affiliation and constructs surrogate networks
that reflect the interactions of the original temporal networks between nodes
with different labels. This enables the generation of synthetic networks that
replicate realistic behaviors. We validate our approach by comparing structural
measures between the original and generated networks across multiple
face-to-face interaction datasets.
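The label-based bookkeeping this approach relies on can be sketched as estimating how often nodes with each pair of community labels interact in the original temporal network; a surrogate generator would then sample interactions with matching label-pair rates. The data structures and the event format `(t, u, v)` are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical sketch: empirical interaction rates between community labels
# in a temporal contact network. A full surrogate generator (out of scope
# here) would draw synthetic events so that these rates are preserved.
from collections import Counter

def label_pair_rates(events, labels):
    """events: iterable of (t, u, v); returns label-pair frequencies."""
    counts = Counter(tuple(sorted((labels[u], labels[v])))
                     for _, u, v in events)
    total = sum(counts.values())
    return {pair: c / total for pair, c in counts.items()}

# Toy temporal network with two communities.
events = [(0, "a", "b"), (1, "a", "c"), (2, "b", "d"), (3, "c", "d")]
labels = {"a": "red", "b": "red", "c": "blue", "d": "blue"}
rates = label_pair_rates(events, labels)
```

Sorting each label pair makes the rates symmetric, so `("blue", "red")` and `("red", "blue")` interactions are counted together.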
|
2501.07329
|
Joint Automatic Speech Recognition And Structure Learning For Better
Speech Understanding
|
cs.SD cs.CL eess.AS
|
Spoken language understanding (SLU) is a structure prediction task in the
field of speech. Recently, many works on SLU that treat it as a
sequence-to-sequence task have achieved great success. However, this method is
not suitable for simultaneous speech recognition and understanding. In this
paper, we propose a joint speech recognition and structure learning framework
(JSRSL), an end-to-end SLU model based on span, which can accurately transcribe
speech and extract structured content simultaneously. We conduct experiments on
named entity recognition and intent classification using the Chinese dataset
AISHELL-NER and the English dataset SLURP. The results show that our proposed
method not only outperforms the traditional sequence-to-sequence method in both
transcription and extraction capabilities but also achieves state-of-the-art
performance on the two datasets.
|
2501.07331
|
Efficient Event-based Delay Learning in Spiking Neural Networks
|
cs.NE
|
Spiking Neural Networks (SNNs) are attracting increased attention as a more
energy-efficient alternative to traditional Artificial Neural Networks. Spiking
neurons are stateful and intrinsically recurrent, making them well-suited for
spatio-temporal tasks. However, this intrinsic memory is limited by synaptic
and membrane time constants. Delays are a powerful additional mechanism. In
this paper, we propose a novel event-based training method for SNNs with
delays, grounded in the EventProp formalism and enabling the calculation of
exact gradients with respect to weights and delays. Our method supports
multiple spikes per neuron and, to the best of our knowledge, is the first delay
learning algorithm to be applied to recurrent SNNs. We evaluate our method on a
simple sequence detection task, and the Yin-Yang, Spiking Heidelberg Digits and
Spiking Speech Commands datasets, demonstrating that our algorithm can optimize
delays from suboptimal initial conditions and enhance classification accuracy
compared to architectures without delays. Finally, we show that our approach
uses less than half the memory of the current state-of-the-art delay-learning
method and is up to 26x faster.
|
2501.07334
|
Anonymization of Documents for Law Enforcement with Machine Learning
|
cs.AI cs.CV
|
The steadily increasing utilization of data-driven methods and approaches in
areas that handle sensitive personal information, such as law enforcement,
mandates an ever-increasing effort in these institutions to comply with data
protection guidelines. In this work, we present a system for automatically
anonymizing images of scanned documents, reducing manual effort while ensuring
data protection compliance. Our method considers the viability of further
forensic processing after anonymization by minimizing automatically redacted
areas, combining automatic detection of sensitive regions with knowledge from
a manually anonymized reference document. Using a self-supervised image model
for instance retrieval of the reference document, our approach requires only
one anonymized example to efficiently redact all documents of the same type,
significantly reducing processing time. We show that our approach outperforms
both a purely automatic redaction system and also a naive copy-paste scheme of
the reference anonymization to other documents on a hand-crafted dataset of
ground truth redactions.
|
2501.07335
|
TempoGPT: Enhancing Temporal Reasoning via Quantizing Embedding
|
cs.LG cs.AI
|
Multi-modal language models have made advanced progress in vision and audio,
but still face significant challenges in dealing with complex reasoning tasks
in the time series domain. The reasons are twofold. First, labels for
multi-modal time series data are coarse and devoid of analysis or reasoning
processes. Training with these data cannot improve the model's reasoning
capabilities. Second, due to the lack of precise tokenization in processing
time series, the representation patterns for temporal and textual information
are inconsistent, which hampers the effectiveness of multi-modal alignment. To
address these challenges, we propose a multi-modal time series data
construction approach and a multi-modal time series language model (TLM),
TempoGPT. Specifically, we construct multi-modal data for complex reasoning tasks
by analyzing the variable-system relationships within a white-box system.
Additionally, the proposed TempoGPT achieves consistent representation between
temporal and textual information by quantizing temporal embeddings, where
temporal embeddings are quantized into a series of discrete tokens using a
predefined codebook; subsequently, a shared embedding layer processes both
temporal and textual tokens. Extensive experiments demonstrate that TempoGPT
accurately perceives temporal information, logically infers conclusions, and
achieves state-of-the-art performance on the constructed complex time series reasoning
tasks. Moreover, we quantitatively demonstrate the effectiveness of quantizing
temporal embeddings in enhancing multi-modal alignment and the reasoning
capabilities of TLMs. Code and data are available at
https://github.com/zhanghaochuan20/TempoGPT.
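The codebook quantization step described above (mapping continuous temporal embeddings to discrete tokens) can be sketched as a nearest-neighbor lookup. The codebook values, dimensionality, and squared-Euclidean distance below are illustrative assumptions; the actual TempoGPT codebook is learned and part of the released code.

```python
# Hypothetical sketch of quantizing temporal embeddings into discrete
# tokens via a predefined codebook: each embedding is replaced by the
# index of its nearest codebook entry, so temporal and textual inputs
# can share one embedding layer downstream.

def quantize(embeddings, codebook):
    """Map each embedding vector to the index of its nearest codebook entry."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return [min(range(len(codebook)), key=lambda i: sq_dist(e, codebook[i]))
            for e in embeddings]

# Toy 2-D codebook and two temporal embeddings.
codebook = [(0.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
temporal_embeddings = [(0.1, -0.1), (0.9, 1.2)]
tokens = quantize(temporal_embeddings, codebook)
```

Once the embeddings are token indices, they live in the same discrete space as text tokens, which is the alignment property the abstract emphasizes.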
|
2501.07337
|
Digital Operating Mode Classification of Real-World Amateur Radio
Transmissions
|
cs.LG eess.SP
|
This study presents an ML approach for classifying digital radio operating
modes evaluated on real-world transmissions. We generated 98 different
parameterized radio signals from 17 digital operating modes, transmitted each
of them on the 70 cm (UHF) amateur radio band, and recorded our transmissions
with two different architectures of SDR receivers. Three lightweight ML models
were trained exclusively on spectrograms of limited non-transmitted signals
with random characters as payloads. This training involved an online data
augmentation pipeline to simulate various radio channel impairments. Our best
model, EfficientNetB0, achieved an accuracy of 93.80% across the 17 operating
modes and 85.47% across all 98 parameterized radio signals, evaluated on our
real-world transmissions with Wikipedia articles as payloads. Furthermore, we
analyzed the impact of varying signal durations and the number of FFT bins on
classification, assessed the effectiveness of our simulated channel
impairments, and tested our models across multiple simulated SNRs.
|
2501.07342
|
A method for estimating roadway billboard salience
|
cs.CV
|
Roadside billboards and other forms of outdoor advertising play a crucial
role in marketing initiatives; however, they can also distract drivers,
potentially contributing to accidents. This study delves into the significance
of roadside advertising in images captured from a driver's perspective.
Firstly, it evaluates the effectiveness of neural networks in detecting
advertising along roads, focusing on the YOLOv5 and Faster R-CNN models.
Secondly, the study addresses the determination of billboard significance using
methods for saliency extraction. The UniSal and SpectralResidual methods were
employed to create saliency maps for each image. The study establishes a
database of eye tracking sessions captured during city highway driving to
assess the saliency models.
|
2501.07343
|
Fast-Revisit Coverage Path Planning for Autonomous Mobile Patrol Robots
Using Long-Range Sensor Information
|
cs.RO
|
The utilization of Unmanned Ground Vehicles (UGVs) for patrolling industrial
sites has expanded significantly. These UGVs are typically equipped with
perception systems, e.g., computer vision, with limited range due to sensor
limitations or site topology. High-level control of the UGVs requires Coverage
Path Planning (CPP) algorithms that navigate all relevant waypoints and
promptly start the next cycle. In this paper, we propose the novel Fast-Revisit
Coverage Path Planning (FaRe-CPP) algorithm using a greedy heuristic approach
to propose waypoints for maximum coverage area and a random search-based path
optimization technique to obtain a path along the proposed waypoints with
minimum revisit time. We evaluated the algorithm in a simulated environment
using Gazebo and a camera-equipped TurtleBot3 against a number of existing
algorithms. Compared to the average revisit times and path lengths of these
algorithms, FaRe-CPP achieved reductions of approximately 45% and 40%,
respectively, in these highly relevant performance indicators.
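The greedy waypoint-proposal idea can be illustrated as a set-cover heuristic: repeatedly pick the candidate waypoint that covers the most still-uncovered area. The 1-D grid, coverage radius, and candidate set below are toy assumptions; the actual FaRe-CPP heuristic and its revisit-time path optimization are more involved.

```python
# Hypothetical sketch of greedy waypoint selection for coverage.
# `covers(w)` returns the set of cells a sensor at waypoint w can see.

def greedy_waypoints(cells, candidates, covers):
    """Pick waypoints until all cells are covered (greedy set cover)."""
    uncovered = set(cells)
    chosen = []
    while uncovered:
        best = max(candidates, key=lambda w: len(covers(w) & uncovered))
        gain = covers(best) & uncovered
        if not gain:
            break  # remaining cells unreachable by any candidate
        chosen.append(best)
        uncovered -= gain
    return chosen

# Toy example: cells on a line, each waypoint covers itself +/- 1 cell.
cells = set(range(6))
candidates = [0, 1, 2, 3, 4, 5]
covers = lambda w: {w - 1, w, w + 1} & cells
chosen = greedy_waypoints(cells, candidates, covers)
```

Greedy set cover is a classic approximation; a separate path-optimization stage (random search in the paper) would then order the chosen waypoints to minimize revisit time.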
|
2501.07346
|
Enhancing Online Reinforcement Learning with Meta-Learned Objective from
Offline Data
|
cs.LG
|
A major challenge in Reinforcement Learning (RL) is the difficulty of
learning an optimal policy from sparse rewards. Prior works enhance online RL
with conventional Imitation Learning (IL) via a handcrafted auxiliary
objective, at the cost of restricting the RL policy to be sub-optimal when the
offline data is generated by a non-expert policy. Instead, to better leverage
valuable information in offline data, we develop Generalized Imitation Learning
from Demonstration (GILD), which meta-learns an objective that distills
knowledge from offline data and instills intrinsic motivation towards the
optimal policy. Distinct from prior works that are exclusive to a specific RL
algorithm, GILD is a flexible module intended for diverse vanilla off-policy RL
algorithms. In addition, GILD introduces no domain-specific hyperparameters and
only a minimal increase in computational cost. In four challenging MuJoCo tasks with
sparse rewards, we show that three RL algorithms enhanced with GILD
significantly outperform state-of-the-art methods.
|
2501.07358
|
Deep Generative Clustering with VAEs and Expectation-Maximization
|
cs.LG stat.ML
|
We propose a novel deep clustering method that integrates Variational
Autoencoders (VAEs) into the Expectation-Maximization (EM) framework. Our
approach models the probability distribution of each cluster with a VAE and
alternates between updating model parameters by maximizing the Evidence Lower
Bound (ELBO) of the log-likelihood and refining cluster assignments based on
the learned distributions. This enables effective clustering and generation of
new samples from each cluster. Unlike existing VAE-based methods, our approach
eliminates the need for a Gaussian Mixture Model (GMM) prior or additional
regularization techniques. Experiments on MNIST and FashionMNIST demonstrate
superior clustering performance compared to state-of-the-art methods.
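The EM-style alternation described above can be sketched with a simple 1-D Gaussian standing in for each cluster's VAE: `fit` plays the role of ELBO maximization and `log_likelihood` the role of the per-cluster ELBO estimate. The initialization scheme and stand-in density are illustrative assumptions, not the paper's method.

```python
# Minimal sketch of the alternation: assign points to the per-cluster
# model with highest likelihood (E-step), then refit each model on its
# assigned points (M-step). A VAE per cluster would replace GaussianModel.
import math

class GaussianModel:
    def fit(self, xs):                      # M-step stand-in (ELBO max.)
        self.mu = sum(xs) / len(xs)
        self.var = sum((x - self.mu) ** 2 for x in xs) / len(xs) or 1e-6
    def log_likelihood(self, x):            # E-step score stand-in (ELBO)
        return -0.5 * (math.log(2 * math.pi * self.var)
                       + (x - self.mu) ** 2 / self.var)

def em_cluster(data, k, iters=10):
    data = sorted(data)                     # labels returned in sorted order
    models = [GaussianModel() for _ in range(k)]
    for m, chunk in zip(models, (data[i::k] for i in range(k))):
        m.fit(chunk)                        # crude initialization
    for _ in range(iters):
        assign = [[] for _ in range(k)]
        for x in data:                      # E-step
            j = max(range(k), key=lambda i: models[i].log_likelihood(x))
            assign[j].append(x)
        for m, chunk in zip(models, assign):
            if chunk:                       # M-step
                m.fit(chunk)
    return [max(range(k), key=lambda i: models[i].log_likelihood(x))
            for x in data]

labels = em_cluster([0.0, 0.1, 0.2, 5.0, 5.1, 5.2], k=2)
```

With VAEs in place of the Gaussians, each cluster can model an arbitrarily complex distribution and generate new samples, which is what distinguishes the approach from a plain GMM.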
|
2501.07359
|
Emergent effects of scaling on the functional hierarchies within large
language models
|
cs.CL cs.AI
|
Large language model (LLM) architectures are often described as functionally
hierarchical: Early layers process syntax, middle layers begin to parse
semantics, and late layers integrate information. The present work revisits
these ideas. This research submits simple texts to an LLM (e.g., "A church and
organ") and extracts the resulting activations. Then, for each layer, support
vector machines and ridge regressions are fit to predict a text's label and
thus examine whether a given layer encodes some information. Analyses using a
small model (Llama-3.2-3b; 28 layers) partly bolster the common hierarchical
perspective: Item-level semantics are most strongly represented early (layers
2-7), then two-item relations (layers 8-12), and then four-item analogies
(layers 10-15). Afterward, the representation of items and simple relations
gradually decreases in deeper layers that focus on more global information.
However, several findings run counter to a steady hierarchy view: First,
although deep layers can represent document-wide abstractions, deep layers also
compress information from early portions of the context window without
meaningful abstraction. Second, when examining a larger model
(Llama-3.3-70b-Instruct), stark fluctuations in abstraction level appear: As
depth increases, two-item relations and four-item analogies initially increase
in their representation, then markedly decrease, and afterward increase again
momentarily. This peculiar pattern consistently emerges across several
experiments. Third, another emergent effect of scaling is coordination between
the attention mechanisms of adjacent layers. Across multiple experiments using
the larger model, adjacent layers fluctuate between what information they each
specialize in representing. In sum, an abstraction hierarchy often manifests
across layers, but large models also deviate from this structure in curious
ways.
|
2501.07360
|
TimberVision: A Multi-Task Dataset and Framework for Log-Component
Segmentation and Tracking in Autonomous Forestry Operations
|
cs.CV cs.LG
|
Timber represents an increasingly valuable and versatile resource. However,
forestry operations such as harvesting, handling and measuring logs still
require substantial human labor in remote environments, posing significant
safety risks. Progressively automating these tasks has the potential to
increase their efficiency as well as safety, but requires accurate
detection of individual logs as well as live trees and their context. Although
initial approaches have been proposed for this challenging application domain,
specialized data and algorithms are still too scarce to develop robust
solutions. To mitigate this gap, we introduce the TimberVision dataset,
consisting of more than 2k annotated RGB images containing a total of 51k trunk
components including cut and lateral surfaces, thereby surpassing any existing
dataset in this domain in terms of both quantity and detail by a large margin.
Based on this data, we conduct a series of ablation experiments for oriented
object detection and instance segmentation and evaluate the influence of
multiple scene parameters on model performance. We introduce a generic
framework to fuse the components detected by our models for both tasks into
unified trunk representations. Furthermore, we automatically derive geometric
properties and apply multi-object tracking to further enhance robustness. Our
detection and tracking approach provides highly descriptive and accurate trunk
representations solely from RGB image data, even under challenging
environmental conditions. Our solution is suitable for a wide range of
application scenarios and can be readily combined with other sensor modalities.
|
2501.07363
|
Several Families of Entanglement-Assisted Quantum Quasi-Cyclic LDPC
Codes
|
cs.IT math.IT
|
We introduce several families of entanglement-assisted (EA)
Calderbank-Shor-Steane (CSS) codes derived from two distinct classes of
low-density parity-check (LDPC) codes. We derive two families of EA quantum
QC-LDPC codes, namely, the spatially coupled (SC) and the non-spatially coupled
cases. These two families are constructed by tiling permutation matrices of
prime and composite orders. We establish several code properties along with
conditions for guaranteed girth for the proposed code families. The Tanner
graphs of the proposed EA quantum QC-LDPC and EA quantum QC-SC-LDPC codes have
girths greater than four, which is required for good error correction
performance. Some of the proposed families of codes require only
\textit{minimal} Bell pairs to be shared across the quantum transceiver.
Furthermore, we construct two families of EA quantum QC-LDPC codes based on a
single classical code, with Tanner graphs having girths greater than six,
further improving the error correction performance. We evaluate the performance
of these codes using both depolarizing and Markovian noise models to assess the
random and burst error performance. Using a modified version of the sum-product
algorithm over a quaternary alphabet, we show how correlated Pauli errors can
be handled within the decoding setup. Simulation results show that nearly an
order-of-magnitude improvement in error correction performance can be achieved
with the quaternary decoder compared to the binary decoder over the
depolarizing and Markovian error channels, thereby generalizing the approach of
QC-LDPC code designs to work with both random and burst quantum error models,
useful in practice.
|
2501.07365
|
Multimodal semantic retrieval for product search
|
cs.IR cs.LG
|
Semantic retrieval (also known as dense retrieval) based on textual data has
been extensively studied for both web search and product search application
fields, where the relevance of a query and a potential target document is
computed by comparing their dense vector representations. Product images are
crucial for e-commerce search interactions and a key factor for customers
during product exploration. However, their impact on semantic retrieval has not
been
well studied yet. In this research, we build a multimodal representation for
product items in e-commerce search in contrast to pure-text representation of
products, and investigate the impact of such representations. The models are
developed and evaluated on e-commerce datasets. We demonstrate that a
multimodal representation scheme for a product can show improvement either on
purchase recall or relevance accuracy in semantic retrieval. Additionally, we
provide a numerical analysis of the exclusive matches retrieved by a multimodal
semantic retrieval model versus a text-only semantic retrieval model, to
demonstrate the validity of multimodal solutions.
|
2501.07368
|
Extracting Participation in Collective Action from Social Media
|
cs.SI cs.CY physics.soc-ph
|
Social media play a key role in mobilizing collective action, holding the
potential for studying the pathways that lead individuals to actively engage in
addressing global challenges. However, quantitative research in this area has
been limited by the absence of granular and large-scale ground truth about the
level of participation in collective action among individual social media
users. To address this limitation, we present a novel suite of text classifiers
designed to identify expressions of participation in collective action from
social media posts, in a topic-agnostic fashion. Grounded in the theoretical
framework of social movement mobilization, our classification captures
participation and categorizes it into four levels: recognizing collective
issues, engaging in calls-to-action, expressing an intention to act, and
reporting active involvement. We constructed a labeled training dataset of
Reddit comments through crowdsourcing, which we used to train BERT classifiers
and fine-tune Llama3 models. Our findings show that smaller language models can
reliably detect expressions of participation (weighted F1=0.71), and rival
larger models in capturing nuanced levels of participation. By applying our
methodology to Reddit, we illustrate its effectiveness as a robust tool for
characterizing online communities in innovative ways compared to topic
modeling, stance detection, and keyword-based methods. Our framework
contributes to Computational Social Science research by providing a new source
of reliable annotations useful for investigating the social dynamics of
collective action.
|
2501.07371
|
Simulating the Hubbard Model with Equivariant Normalizing Flows
|
cond-mat.str-el cs.LG hep-lat
|
Generative models, particularly normalizing flows, have shown exceptional
performance in learning probability distributions across various domains of
physics, including statistical mechanics, collider physics, and lattice field
theory. In the context of lattice field theory, normalizing flows have been
successfully applied to accurately learn the Boltzmann distribution, enabling a
range of tasks such as direct estimation of thermodynamic observables and
sampling independent and identically distributed (i.i.d.) configurations.
In this work, we present a proof-of-concept demonstration that normalizing
flows can be used to learn the Boltzmann distribution for the Hubbard model.
This model is widely employed to study the electronic structure of graphene and
other carbon nanomaterials. State-of-the-art numerical simulations of the
Hubbard model, such as those based on Hybrid Monte Carlo (HMC) methods, often
suffer from ergodicity issues, potentially leading to biased estimates of
physical observables. Our numerical experiments demonstrate that leveraging
i.i.d.\ sampling from the normalizing flow effectively addresses these issues.
|
2501.07373
|
Dynami-CAL GraphNet: A Physics-Informed Graph Neural Network Conserving
Linear and Angular Momentum for Dynamical Systems
|
cs.LG cs.CE physics.comp-ph
|
Accurate, interpretable, and real-time modeling of multi-body dynamical
systems is essential for predicting behaviors and inferring physical properties
in natural and engineered environments. Traditional physics-based models face
scalability challenges and are computationally demanding, while data-driven
approaches like Graph Neural Networks (GNNs) often lack physical consistency,
interpretability, and generalization. In this paper, we propose Dynami-CAL
GraphNet, a Physics-Informed Graph Neural Network that integrates the learning
capabilities of GNNs with physics-based inductive biases to address these
limitations. Dynami-CAL GraphNet enforces pairwise conservation of linear and
angular momentum for interacting nodes using edge-local reference frames that
are equivariant to rotational symmetries, invariant to translations, and
equivariant to node permutations. This design ensures physically consistent
predictions of node dynamics while offering interpretable, edge-wise linear and
angular impulses resulting from pairwise interactions. Evaluated on a 3D
granular system with inelastic collisions, Dynami-CAL GraphNet demonstrates
stable error accumulation over extended rollouts, effective extrapolations to
unseen configurations, and robust handling of heterogeneous interactions and
external forces. Dynami-CAL GraphNet offers significant advantages in fields
requiring accurate, interpretable, and real-time modeling of complex multi-body
dynamical systems, such as robotics, aerospace engineering, and materials
science. By providing physically consistent and scalable predictions that
adhere to fundamental conservation laws, it enables the inference of forces and
moments while efficiently handling heterogeneous interactions and external
forces.
|
2501.07375
|
A RankNet-Inspired Surrogate-Assisted Hybrid Metaheuristic for Expensive
Coverage Optimization
|
cs.NE
|
Coverage optimization generally involves deploying a set of facilities (e.g.,
sensors) to best satisfy the demands of specified points, with wide
applications in fields such as location science and sensor networks. In
practical applications, coverage optimization focuses on target coverage, which
is typically formulated as Mixed-Variable Optimization Problems (MVOPs) due to
complex real-world constraints. Meanwhile, high-fidelity discretization and
visibility analysis may bring additional calculations, which significantly
increases the computational cost. These factors pose significant challenges for
fitness evaluations (FEs) in canonical Evolutionary Algorithms (EAs), and
evolve the coverage problem into an Expensive Mixed-Variable Optimization
Problem (EMVOP). To address these issues, we propose the RankNet-Inspired
Surrogate-assisted Hybrid Metaheuristic (RI-SHM), an extension of our previous
work. RI-SHM integrates three key components: (1) a RankNet-based pairwise
global surrogate that innovatively predicts rankings between pairs of
individuals, bypassing the challenges of fitness estimation in a discontinuous
solution space; (2) a surrogate-assisted local Estimation of Distribution
Algorithm (EDA) that enhances local exploitation and helps escape from local
optima; and (3) a fitness diversity-driven switching strategy that dynamically
balances exploration and exploitation. Experiments demonstrate that our
algorithm can effectively handle large-scale coverage optimization tasks of up
to 300 dimensions and more than 1,800 targets within desirable runtime.
Compared to state-of-the-art algorithms for EMVOPs, RI-SHM consistently
outperforms them by up to 56.5$\%$ across all tested instances.
|
2501.07378
|
FedSemiDG: Domain Generalized Federated Semi-supervised Medical Image
Segmentation
|
cs.CV
|
Medical image segmentation is challenging due to the diversity of medical
images and the lack of labeled data, which motivates recent developments in
federated semi-supervised learning (FSSL) to leverage a large amount of
unlabeled data from multiple centers for model training without sharing raw
data. However, what remains under-explored in FSSL is the domain shift
problem, which may cause suboptimal model aggregation and ineffective use of
unlabeled data, eventually leading to unsatisfactory performance
in unseen domains. In this paper, we explore this previously ignored scenario,
namely domain generalized federated semi-supervised learning (FedSemiDG), which
aims to learn a model in a distributed manner from multiple domains with
limited labeled data and abundant unlabeled data such that the model can
generalize well to unseen domains. We present a novel framework, Federated
Generalization-Aware SemiSupervised Learning (FGASL), to address the challenges
in FedSemiDG by effectively tackling critical issues at both global and local
levels. Globally, we introduce Generalization-Aware Aggregation (GAA),
assigning adaptive weights to local models based on their generalization
performance. Locally, we use a Dual-Teacher Adaptive Pseudo Label Refinement
(DR) strategy to combine global and domain-specific knowledge, generating more
reliable pseudo labels. Additionally, Perturbation-Invariant Alignment (PIA)
enforces feature consistency under perturbations, promoting domain-invariant
learning. Extensive experiments on three medical segmentation tasks (cardiac
MRI, spine MRI and bladder cancer MRI) demonstrate that our method
significantly outperforms state-of-the-art FSSL and domain generalization
approaches, achieving robust generalization on unseen domains.
|
2501.07382
|
Information-Theoretic Dual Memory System for Continual Learning
|
cs.LG cs.AI
|
Continuously acquiring new knowledge from a dynamic environment is a
fundamental capability for animals, facilitating their survival and ability to
address various challenges. This capability is referred to as continual
learning, which focuses on the ability to learn a sequence of tasks without the
detriment of previous knowledge. A prevalent strategy to tackle continual
learning involves selecting and storing numerous essential data samples from
prior tasks within a fixed-size memory buffer. However, the majority of current
memory-based techniques typically utilize a single memory buffer, which poses
challenges in concurrently managing newly acquired and previously learned
samples. Drawing inspiration from the Complementary Learning Systems (CLS)
theory, which defines rapid and gradual learning mechanisms for processing
information, we propose an innovative dual memory system called the
Information-Theoretic Dual Memory System (ITDMS). This system comprises a fast
memory buffer designed to retain temporary and novel samples, alongside a slow
memory buffer dedicated to preserving critical and informative samples. The
fast memory buffer is optimized employing an efficient reservoir sampling
process. Furthermore, we introduce a novel information-theoretic memory
optimization strategy that selectively identifies and retains diverse and
informative data samples for the slow memory buffer. Additionally, we propose a
novel balanced sample selection procedure that automatically identifies and
eliminates redundant memorized samples, thus freeing up memory capacity for new
data acquisitions, which can deal with a growing array of tasks. Our
methodology is rigorously assessed through a series of continual learning
experiments, with empirical results underscoring the effectiveness of the
proposed system.
|
2501.07390
|
Kolmogorov-Arnold Network for Remote Sensing Image Semantic Segmentation
|
cs.CV
|
Semantic segmentation plays a crucial role in remote sensing applications,
where the accurate extraction and representation of features are essential for
high-quality results. Despite the widespread use of encoder-decoder
architectures, existing methods often struggle with fully utilizing the
high-dimensional features extracted by the encoder and efficiently recovering
detailed information during decoding. To address these problems, we propose a
novel semantic segmentation network, namely DeepKANSeg, including two key
innovations based on the emerging Kolmogorov-Arnold Network (KAN). Notably, the
advantage of KAN lies in its ability to decompose high-dimensional complex
functions into univariate transformations, enabling efficient and flexible
representation of intricate relationships in data. First, we introduce a
KAN-based deep feature refinement module, namely DeepKAN to effectively capture
complex spatial and rich semantic relationships from high-dimensional features.
Second, we replace the traditional multi-layer perceptron (MLP) layers in the
global-local combined decoder with KAN-based linear layers, namely GLKAN. This
module enhances the decoder's ability to capture fine-grained details during
decoding. To evaluate the effectiveness of the proposed method, experiments are
conducted on two well-known fine-resolution remote sensing benchmark datasets,
namely ISPRS Vaihingen and ISPRS Potsdam. The results demonstrate that the
KAN-enhanced segmentation model achieves superior performance in terms of
accuracy compared to state-of-the-art methods. They highlight the potential of
KANs as a powerful alternative to traditional architectures in semantic
segmentation tasks. Moreover, the explicit univariate decomposition provides
improved interpretability, which is particularly beneficial for applications
requiring explainable learning in remote sensing.
|
2501.07391
|
Enhancing Retrieval-Augmented Generation: A Study of Best Practices
|
cs.CL cs.AI
|
Retrieval-Augmented Generation (RAG) systems have recently shown remarkable
advancements by integrating retrieval mechanisms into language models,
enhancing their ability to produce more accurate and contextually relevant
responses. However, the influence of various components and configurations
within RAG systems remains underexplored. A comprehensive understanding of
these elements is essential for tailoring RAG systems to complex retrieval
tasks and ensuring optimal performance across diverse applications. In this
paper, we develop several advanced RAG system designs that incorporate query
expansion, various novel retrieval strategies, and a novel Contrastive
In-Context Learning RAG. Our study systematically investigates key factors,
including language model size, prompt design, document chunk size, knowledge
base size, retrieval stride, query expansion techniques, Contrastive In-Context
Learning knowledge bases, multilingual knowledge bases, and Focus Mode, which
retrieves relevant context at the sentence level. Through extensive
experimentation, we provide a detailed analysis of how these factors influence
response quality. Our findings offer actionable insights for developing RAG
systems, striking a balance between contextual richness and
retrieval-generation efficiency, thereby paving the way for more adaptable and
high-performing RAG frameworks in diverse real-world scenarios. Our code and
implementation details are publicly available.
|
2501.07392
|
The Essentials of AI for Life and Society: An AI Literacy Course for the
University Community
|
cs.AI cs.CY
|
We describe the development of a one-credit course to promote AI literacy at
The University of Texas at Austin. In response to a call for the rapid
deployment of a class to serve a broad audience in Fall of 2023, we designed a
14-week seminar-style course that incorporated an interdisciplinary group of
speakers who lectured on topics ranging from the fundamentals of AI to societal
concerns including disinformation and employment. University students, faculty,
and staff, and even community members outside of the University, were invited
to enroll in this online offering: The Essentials of AI for Life and Society.
We collected feedback from course participants through weekly reflections and a
final survey. Satisfyingly, we found that attendees reported gains in their AI
literacy. We sought critical feedback through quantitative and qualitative
analysis, which uncovered challenges in designing a course for this general
audience. We utilized the course feedback to design a three-credit version of
the course that is being offered in Fall of 2024. The lessons we learned and
our plans for this new iteration may serve as a guide to instructors designing
AI courses for a broad audience.
|
2501.07396
|
Zero-Shot Scene Understanding for Automatic Target Recognition Using
Large Vision-Language Models
|
cs.CV
|
Automatic target recognition (ATR) plays a critical role in tasks such as
navigation and surveillance, where safety and accuracy are paramount. In
extreme use cases, such as military applications, these factors are often
challenged due to the presence of unknown terrains, environmental conditions,
and novel object categories. Current object detectors, including open-world
detectors, lack the ability to confidently recognize novel objects or operate
in unknown environments, as they have not been exposed to these new conditions.
However, Large Vision-Language Models (LVLMs) exhibit emergent properties that
enable them to recognize objects in varying conditions in a zero-shot manner.
Despite this, LVLMs struggle to localize objects effectively within a scene. To
address these limitations, we propose a novel pipeline that combines the
detection capabilities of open-world detectors with the recognition confidence
of LVLMs, creating a robust system for zero-shot ATR of novel classes and
unknown domains. In this study, we compare the performance of various LVLMs for
recognizing military vehicles, which are often underrepresented in training
datasets. Additionally, we examine the impact of factors such as distance
range, modality, and prompting methods on the recognition performance,
providing insights into the development of more reliable ATR systems for novel
conditions and classes.
|
2501.07397
|
VDOR: A Video-based Dataset for Object Removal via Sequence Consistency
|
cs.CV
|
Object removal, as a sub-task of image inpainting, has garnered significant
attention in recent years. Existing datasets related to object removal serve
as a valuable foundation for model validation and optimization. However, they
mainly
rely on inpainting techniques to generate pseudo-removed results, leading to
distribution gaps between synthetic and real-world data. While some real-world
datasets mitigate these issues, they face challenges such as limited
scalability, high annotation costs, and unrealistic representations of lighting
and shadows. To address these limitations, we propose a novel video-based
annotation pipeline for constructing a realistic illumination-aware object
removal dataset. Leveraging this pipeline, we introduce VDOR, a dataset
specifically designed for object removal tasks, which comprises triplets of
original frame images with objects, background images without objects, and
corresponding masks. By leveraging continuous real-world video frames, we
minimize distribution gaps and accurately capture realistic lighting and shadow
variations, ensuring close alignment with real-world scenarios. Our approach
significantly reduces annotation effort while providing a robust foundation for
advancing object removal research.
|
2501.07398
|
An ontology-based description of nano computed tomography measurements
in electronic laboratory notebooks: from metadata schema to first user
experience
|
cs.DB
|
In recent years, the importance of well-documented metadata has been
discussed increasingly in many research fields. Making all metadata generated
during scientific research available in a findable, accessible, interoperable,
and reusable (FAIR) manner remains a significant challenge for researchers
across fields. Scientific communities are agreeing to achieve this by making
all data available in a semantically annotated knowledge graph using semantic
web technologies. Most current approaches do not gather metadata in a
consistent and community-agreed standardized way, and there are insufficient
tools to support the process of turning them into a knowledge graph. We present
an example solution in which the creation of a schema and ontology are placed
at the beginning of the scientific process which is then - using the electronic
laboratory notebook framework Herbie - turned into a bespoke data collection
platform to facilitate validation and semantic annotation of the metadata
immediately during an experiment. Using the example of synchrotron
radiation-based nano computed tomography measurements, we present a holistic
approach which can capture the complex metadata of such research instruments in
a flexible and straightforward manner. Different instrument setups of this
beamline can be accommodated, allowing for a user-friendly experience. We show
how
Herbie turns all semantic documents into an accessible user interface, where
all data entered automatically fulfills all requirements of being FAIR, and
present how data can be directly extracted via competency questions without
requiring familiarity with the fine-grained structure of the knowledge graph.
|
2501.07399
|
Efficiently Closing Loops in LiDAR-Based SLAM Using Point Cloud Density
Maps
|
cs.RO
|
Consistent maps are key for most autonomous mobile robots. They often use
SLAM approaches to build such maps. Loop closures via place recognition help
maintain accurate pose estimates by mitigating global drift. This paper
presents a robust loop closure detection pipeline for outdoor SLAM with
LiDAR-equipped robots. The method handles various LiDAR sensors with different
scanning patterns, fields of view, and resolutions. It generates local maps from
LiDAR scans and aligns them using a ground alignment module to handle both
planar and non-planar motion of the LiDAR, ensuring applicability across
platforms. The method uses density-preserving bird's eye view projections of
these local maps and extracts ORB feature descriptors from them for place
recognition. It stores the feature descriptors in a binary search tree for
efficient retrieval, and self-similarity pruning addresses perceptual aliasing
in repetitive environments. Extensive experiments on public and self-recorded
datasets demonstrate accurate loop closure detection, long-term localization,
and cross-platform multi-map alignment, agnostic to the LiDAR scanning
patterns, fields of view, and motion profiles.
|
2501.07400
|
Derivation of effective gradient flow equations and dynamical truncation
of training data in Deep Learning
|
cs.LG cs.AI math.AP math.OC stat.ML
|
We derive explicit equations governing the cumulative biases and weights in
Deep Learning with ReLU activation function, based on gradient descent for the
Euclidean cost in the input layer, and under the assumption that the weights
are, in a precise sense, adapted to the coordinate system distinguished by the
activations. We show that gradient descent corresponds to a dynamical process
in the input layer, whereby clusters of data are progressively reduced in
complexity ("truncated") at an exponential rate that increases with the number
of data points that have already been truncated. We provide a detailed
discussion of several types of solutions to the gradient flow equations. A main
motivation for this work is to shed light on the interpretability question in
supervised learning.
|
2501.07405
|
PROTECT: Protein circadian time prediction using unsupervised learning
|
cs.LG cs.AI
|
Circadian rhythms regulate the physiology and behavior of humans and animals.
Despite advancements in understanding these rhythms and predicting circadian
phases at the transcriptional level, predicting circadian phases from proteomic
data remains elusive. This challenge is largely due to the scarcity of time
labels in proteomic datasets, which are often characterized by small sample
sizes, high dimensionality, and significant noise. Furthermore, existing
methods for predicting circadian phases from transcriptomic data typically rely
on prior knowledge of known rhythmic genes, making them unsuitable for
proteomic datasets. To address this gap, we developed a novel computational
method using unsupervised deep learning techniques to predict circadian sample
phases from proteomic data without requiring time labels or prior knowledge of
proteins or genes. Our model involves a two-stage training process optimized
for robust circadian phase prediction: an initial greedy one-layer-at-a-time
pre-training, which generates informative initial parameters, followed by
fine-tuning. During fine-tuning, a specialized loss function guides the model
to align protein expression levels with circadian patterns, enabling it to
accurately capture the underlying rhythmic structure within the data. We tested
our method on both time-labeled and unlabeled proteomic data. For labeled data,
we compared our predictions to the known time labels, achieving high accuracy,
while for unlabeled human datasets, including postmortem brain regions and
urine samples, we explored circadian disruptions. Notably, our analysis
identified disruptions in rhythmic proteins between Alzheimer's disease and
control subjects across these samples.
|
2501.07408
|
Initial Findings on Sensor based Open Vocabulary Activity Recognition
via Text Embedding Inversion
|
cs.AI
|
Conventional human activity recognition (HAR) relies on classifiers trained
to predict discrete activity classes, inherently limiting recognition to
activities explicitly present in the training set. Such classifiers invariably
fail, assigning zero likelihood, when encountering unseen activities.
We propose Open Vocabulary HAR (OV-HAR), a framework that overcomes this
limitation by first converting each activity into natural language and breaking
it into a sequence of elementary motions. This descriptive text is then encoded
into a fixed-size embedding. The model is trained to regress this embedding,
which is subsequently decoded back into natural language using a pre-trained
embedding inversion model. Unlike other works that rely on auto-regressive
large language models (LLMs) at their core, OV-HAR achieves open vocabulary
recognition without the computational overhead of such models. The generated
text can be transformed into a single activity class using LLM prompt
engineering. We have evaluated our approach on different modalities, including
vision (pose), IMU, and pressure sensors, demonstrating robust generalization
across unseen activities and modalities, offering a fundamentally different
paradigm from contemporary classifiers.
|
2501.07421
|
Empirical Comparison of Four Stereoscopic Depth Sensing Cameras for
Robotics Applications
|
cs.RO
|
Depth sensing is an essential technology in robotics and many other fields.
Many depth sensing (or RGB-D) cameras are available on the market and selecting
the best one for your application can be challenging. In this work, we tested
four stereoscopic RGB-D cameras that sense the distance by using two images
from slightly different views. We empirically compared four cameras (Intel
RealSense D435, Intel RealSense D455, StereoLabs ZED 2, and Luxonis OAK-D Pro)
in three scenarios: (i) planar surface perception, (ii) plastic doll
perception, (iii) household object perception (YCB dataset). We recorded and
evaluated more than 3,000 RGB-D frames for each camera. For table-top robotics
scenarios with distances to objects of up to one meter, the best performance is
provided by the D435 camera. For longer distances, the other three models
perform better, making them more suitable for some mobile robotics
applications. OAK-D Pro additionally offers integrated AI modules (e.g., object
and human keypoint detection). ZED 2 is not a standalone device and requires a
computer with a GPU for depth data acquisition. All data (more than 12,000
RGB-D frames) are publicly available at https://osf.io/f2seb.
|
2501.07423
|
An Investigation into Seasonal Variations in Energy Forecasting for
Student Residences
|
cs.LG cs.AI
|
This research provides an in-depth evaluation of various machine learning
models for energy forecasting, focusing on the unique challenges of seasonal
variations in student residential settings. The study assesses the performance
of baseline models, such as LSTM and GRU, alongside state-of-the-art
forecasting methods, including Autoregressive Feedforward Neural Networks,
Transformers, and hybrid approaches. Special attention is given to predicting
energy consumption amidst challenges like seasonal patterns, vacations,
meteorological changes, and irregular human activities that cause sudden
fluctuations in usage. The findings reveal that no single model consistently
outperforms others across all seasons, emphasizing the need for season-specific
model selection or tailored designs. Notably, the proposed Hyper Network based
LSTM and MiniAutoEncXGBoost models exhibit strong adaptability to seasonal
variations, effectively capturing abrupt changes in energy consumption during
summer months. This study advances the energy forecasting field by emphasizing
the critical role of seasonal dynamics and model-specific behavior in achieving
accurate predictions.
|
2501.07426
|
MVICAD2: Multi-View Independent Component Analysis with Delays and
Dilations
|
cs.LG
|
Machine learning techniques in multi-view settings face significant
challenges, particularly when integrating heterogeneous data, aligning feature
spaces, and managing view-specific biases. These issues are prominent in
neuroscience, where data from multiple subjects exposed to the same stimuli are
analyzed to uncover brain activity dynamics. In magnetoencephalography (MEG),
where signals are captured at the scalp level, estimating the brain's
underlying sources is crucial, especially in group studies where sources are
assumed to be similar for all subjects. Common methods, such as Multi-View
Independent Component Analysis (MVICA), assume identical sources across
subjects, but this assumption is often too restrictive due to individual
variability and age-related changes. Multi-View Independent Component Analysis
with Delays (MVICAD) addresses this by allowing sources to differ up to a
temporal delay. However, temporal dilation effects, particularly in auditory
stimuli, are common in brain dynamics, making the estimation of time delays
alone insufficient. To address this, we propose Multi-View Independent
Component Analysis with Delays and Dilations (MVICAD2), which allows sources to
differ across subjects in both temporal delays and dilations. We present a
model with identifiable sources, derive an approximation of its likelihood in
closed form, and use regularization and optimization techniques to enhance
performance. Through simulations, we demonstrate that MVICAD2 outperforms
existing multi-view ICA methods. We further validate its effectiveness using
the Cam-CAN dataset, showing how delays and dilations relate to aging.
|
2501.07429
|
Distance Measure Based on an Embedding of the Manifold of K-Component
Gaussian Mixture Models into the Manifold of Symmetric Positive Definite
Matrices
|
math.DG cs.LG
|
In this paper, a distance between Gaussian Mixture Models (GMMs) is
obtained based on an embedding of the K-component Gaussian Mixture Model into
the manifold of symmetric positive definite matrices. We prove that
K-component GMMs embed into the manifold of symmetric positive definite
matrices and that the image is a submanifold. We then prove that the manifold
of GMMs, equipped with the pullback of the induced metric, is isometric to
this submanifold with the induced metric. Through this embedding we obtain a
general lower bound for
the induced metric. Through this embedding we obtain a general lower bound for
the Fisher-Rao metric. This lower bound is a distance measure on the manifold
of GMMs and we employ it for the similarity measure of GMMs. The effectiveness
of this framework is demonstrated through an experiment on standard machine
learning benchmarks, achieving accuracy of 98%, 92%, and 93.33% on the UIUC,
KTH-TIPS, and UMD texture recognition datasets respectively.
|
2501.07430
|
Introducing 3D Representation for Medical Image Volume-to-Volume
Translation via Score Fusion
|
cs.CV cs.AI
|
In volume-to-volume translation in medical images, existing models often
struggle to capture the inherent volumetric distribution using 3D voxel-space
representations, due to high computational and dataset demands. We present
Score-Fusion, a novel volumetric translation model that effectively learns 3D
representations by ensembling perpendicularly trained 2D diffusion models in
score function space. By carefully initializing our model to start with an
average of 2D models as in TPDM, we reduce 3D training to a fine-tuning process
and thereby mitigate both computational and data demands. Furthermore, we
explicitly design the 3D model's hierarchical layers to learn ensembles of 2D
features, further enhancing efficiency and performance. Moreover, Score-Fusion
naturally extends to multi-modality settings, by fusing diffusion models
conditioned on different inputs for flexible, accurate integration. We
demonstrate that 3D representation is essential for better performance in
downstream recognition tasks, such as tumor segmentation, where most
segmentation models are based on 3D representation. Extensive experiments
demonstrate that Score-Fusion achieves superior accuracy and volumetric
fidelity in 3D medical image super-resolution and modality translation. Beyond
these improvements, our work also provides broader insight into learning-based
approaches for score function fusion.
|
2501.07432
|
Empirical Evaluation of the Implicit Hitting Set Approach for Weighted
CSPs
|
cs.AI
|
SAT technology has proven to be surprisingly effective in a large variety of
domains. However, for the Weighted CSP problem dedicated algorithms have always
been superior. One approach not well-studied so far is the use of SAT in
conjunction with the Implicit Hitting Set approach. In this work, we explore
some alternatives to the existing algorithm of reference. The alternatives,
mostly borrowed from related boolean frameworks, consider trade-offs for the
two main components of the IHS approach: the computation of low-cost hitting
vectors, and their transformation into high-cost cores. For each one, we
propose 4 levels of intensity. Since we also test the usefulness of cost
function merging, our experiments consider 32 different implementations. Our
empirical study shows that for WCSP it is not easy to identify the best
alternative. Nevertheless, the cost-function merging encoding combined with
the extraction of maximal cores seems to be a robust approach.
|
2501.07434
|
Guided SAM: Label-Efficient Part Segmentation
|
cs.CV
|
Localizing object parts precisely is essential for tasks such as object
recognition and robotic manipulation. Recent part segmentation methods require
extensive training data and labor-intensive annotations. Segment-Anything Model
(SAM) has demonstrated good performance on a wide range of segmentation
problems, but requires (manual) positional prompts to guide it where to
segment. Furthermore, since it has been trained on full objects instead of
object parts, it is prone to over-segmentation of parts. To address this, we
propose a novel approach that guides SAM towards the relevant object parts. Our
method learns positional prompts from coarse patch annotations that are easier
and cheaper to acquire. We train classifiers on image patches to identify part
classes and aggregate patches into regions of interest (ROIs) with positional
prompts. SAM is conditioned on these ROIs and prompts. This approach, termed
`Guided SAM', enhances efficiency and reduces manual effort, allowing effective
part segmentation with minimal labeled data. We demonstrate the efficacy of
Guided SAM on a dataset of car parts, improving the average IoU over
state-of-the-art models from 0.37 to 0.49 with annotations that are on average
five times
more efficient to acquire.
|
2501.07437
|
Pairwise Comparisons without Stochastic Transitivity: Model, Theory and
Applications
|
stat.ML cs.LG
|
Most statistical models for pairwise comparisons, including the Bradley-Terry
(BT) and Thurstone models and many extensions, make a relatively strong
assumption of stochastic transitivity. This assumption imposes the existence of
an unobserved global ranking among all the players/teams/items and monotone
constraints on the comparison probabilities implied by the global ranking.
However, the stochastic transitivity assumption does not hold in many
real-world scenarios of pairwise comparisons, especially games involving
multiple skills or strategies. As a result, models relying on this assumption
can have suboptimal predictive performance. In this paper, we propose a general
family of statistical models for pairwise comparison data without a stochastic
transitivity assumption, substantially extending the BT and Thurstone models.
In this model, the pairwise probabilities are determined by an (approximately)
low-dimensional skew-symmetric matrix. Likelihood-based estimation methods and
computational algorithms are developed, which allow for sparse data with only a
small proportion of observed pairs. Theoretical analysis shows that the
proposed estimator achieves minimax-rate optimality, which adapts effectively
to the sparsity level of the data. The spectral theory for skew-symmetric
matrices plays a crucial role in the implementation and theoretical analysis.
The proposed method's superiority against the BT model, along with its broad
applicability across diverse scenarios, is further supported by simulations and
real data analysis.
|
2501.07440
|
Attention when you need
|
q-bio.NC cs.AI
|
Being attentive to task-relevant features can improve task performance, but
paying attention comes with its own metabolic cost. Therefore, strategic
allocation of attention is crucial in performing the task efficiently. This
work aims to understand this strategy. Recently, de Gee et al. conducted
experiments involving mice performing an auditory sustained attention-value
task. This task required the mice to exert attention to identify whether a
high-order acoustic feature was present amid the noise. By varying the trial
duration and reward magnitude, the task allows us to investigate how an agent
should strategically deploy their attention to maximize their benefits and
minimize their costs. In our work, we develop a reinforcement learning-based
normative model of the mice to understand how they balance the cost of
attention against its benefits. The model is such that at each moment the mice
can choose
between two levels of attention and decide when to take costly actions that
could obtain rewards. Our model suggests that efficient use of attentional
resources involves alternating blocks of high attention with blocks of low
attention. In the extreme case where the agent disregards sensory input during
low attention states, we see that high attention is used rhythmically. Our
model provides evidence about how one should deploy attention as a function of
task utility, signal statistics, and how attention affects sensory evidence.
|
2501.07445
|
Online inductive learning from answer sets for efficient reinforcement
learning exploration
|
cs.AI
|
This paper presents a novel approach combining inductive logic programming
with reinforcement learning to improve training performance and explainability.
We exploit inductive learning of answer set programs from noisy examples to
learn a set of logical rules representing an explainable approximation of the
agent policy at each batch of experience. We then perform answer set reasoning
on the learned rules to guide the exploration of the learning agent at the next
batch, without requiring inefficient reward shaping and preserving optimality
with soft bias. The entire procedure is conducted during the online execution
of the reinforcement learning algorithm. We preliminarily validate the efficacy
of our approach by integrating it into the Q-learning algorithm for the Pac-Man
scenario in two maps of increasing complexity. Our methodology produces a
significant boost in the discounted return achieved by the agent, even in the
first batches of training. Moreover, inductive learning does not compromise the
computational time required by Q-learning, and the learned rules quickly converge to
an explanation of the agent policy.
|
2501.07446
|
Synthesis and Analysis of Data as Probability Measures with
Entropy-Regularized Optimal Transport
|
stat.ML cs.LG
|
We consider synthesis and analysis of probability measures using the
entropy-regularized Wasserstein-2 cost and its unbiased version, the Sinkhorn
divergence. The synthesis problem consists of computing the barycenter, with
respect to these costs, of $m$ reference measures given a set of coefficients
belonging to the $m$-dimensional simplex. The analysis problem consists of
finding the coefficients for the closest barycenter in the Wasserstein-2
distance to a given measure $\mu$. Under the weakest assumptions on the
measures thus far in the literature, we compute the derivative of the
entropy-regularized Wasserstein-2 cost. We leverage this to establish a
characterization of regularized barycenters as solutions to a fixed-point
equation for the average of the entropic maps from the barycenter to the
reference measures. This characterization yields a finite-dimensional, convex,
quadratic program for solving the analysis problem when $\mu$ is a barycenter.
It is shown that these coordinates, as well as the value of the barycenter
functional, can be estimated from samples with dimension-independent rates of
convergence, a hallmark of entropy-regularized optimal transport, and we verify
these rates experimentally. We also establish that barycentric coordinates are
stable with respect to perturbations in the Wasserstein-2 metric, suggesting a
robustness of these coefficients to corruptions. We employ the barycentric
coefficients as features for classification of corrupted point cloud data, and
show that compared to neural network baselines, our approach is more efficient
in small training data regimes.
|