| id | title | categories | abstract |
|---|---|---|---|
2501.03830 | MeshConv3D: Efficient convolution and pooling operators for triangular
3D meshes | cs.CV cs.GR | Convolutional neural networks (CNNs) have been pivotal in various 2D image
analysis tasks, including computer vision, image indexing and retrieval or
semantic classification. Extending CNNs to 3D data such as point clouds and 3D
meshes raises significant challenges since the very basic convolution and
pooling operators need to be completely re-visited and re-defined in an
appropriate manner to tackle irregular connectivity issues. In this paper, we
introduce MeshConv3D, a 3D mesh-dedicated methodology integrating specialized
convolution and face collapse-based pooling operators. MeshConv3D operates
directly on meshes of arbitrary topology, without any need of prior
re-meshing/conversion techniques. In order to validate our approach, we have
considered a semantic classification task. The experimental results obtained on
three distinct benchmark datasets show that the proposed approach makes it
possible to achieve equivalent or superior classification results, while
minimizing the related memory footprint and computational load.
|
2501.03832 | Three-dimensional attention Transformer for state evaluation in
real-time strategy games | cs.LG cs.AI | Situation assessment in Real-Time Strategy (RTS) games is crucial for
understanding decision-making in complex adversarial environments. However,
existing methods remain limited in processing multi-dimensional feature
information and temporal dependencies. Here we propose a tri-dimensional
Space-Time-Feature Transformer (TSTF Transformer) architecture, which
efficiently models battlefield situations through three independent but
cascaded modules: spatial attention, temporal attention, and feature attention.
On a dataset comprising 3,150 adversarial experiments, the 8-layer TSTF
Transformer demonstrates superior performance: achieving 58.7% accuracy in the
early game (~4% progress), significantly outperforming the conventional
TimeSformer's 41.8%; reaching 97.6% accuracy in the mid-game (~40% progress)
while maintaining low performance variation (standard deviation 0.114).
Meanwhile, this architecture requires fewer parameters (4.75M) compared to the
baseline model (5.54M). Our study not only provides new insights into situation
assessment in RTS games but also presents an innovative paradigm for
Transformer-based multi-dimensional temporal modeling.
|
2501.03833 | Sequence Reconstruction for the Single-Deletion Single-Substitution
Channel | cs.IT math.IT | The central problem in sequence reconstruction is to find the minimum number
of distinct channel outputs required to uniquely reconstruct the transmitted
sequence. According to Levenshtein's work in 2001, this number is determined by
the size of the maximum intersection between the error balls of any two
distinct input sequences of the channel. In this work, we study the sequence
reconstruction problem for single-deletion single-substitution channel,
assuming that the transmitted sequence belongs to a $q$-ary code with minimum
Hamming distance at least $2$, where $q\geq 2$ is any fixed integer.
Specifically, we prove that for any two $q$-ary sequences of length $n$ and
with Hamming distance $d\geq 2$, the size of the intersection of their error
balls is upper bounded by $2qn-3q-2-\delta_{q,2}$, where $\delta_{i,j}$ is the
Kronecker delta. We also prove the tightness of this bound by constructing two
sequences the intersection size of whose error balls achieves this bound.
|
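The intersection bound stated in the abstract above is a closed-form expression, so it can be evaluated directly. A minimal sketch (the function names are ours, purely illustrative); under the standard reconstruction criterion, the number of channel outputs needed is one more than the maximum intersection size:

```python
# Bound from the abstract: for two q-ary length-n sequences at Hamming
# distance >= 2, the intersection of their single-deletion
# single-substitution error balls is at most 2qn - 3q - 2 - delta(q, 2),
# where delta is the Kronecker delta.

def intersection_bound(q: int, n: int) -> int:
    """Upper bound on the error-ball intersection size."""
    kronecker = 1 if q == 2 else 0
    return 2 * q * n - 3 * q - 2 - kronecker

def reads_needed(q: int, n: int) -> int:
    """Distinct channel outputs sufficient for unique reconstruction:
    one more than the maximum intersection size."""
    return intersection_bound(q, n) + 1

print(intersection_bound(2, 10))  # binary case: 40 - 6 - 2 - 1 = 31
print(intersection_bound(4, 10))  # quaternary case: 80 - 12 - 2 - 0 = 66
```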
2501.03835 | TACLR: A Scalable and Efficient Retrieval-based Method for Industrial
Product Attribute Value Identification | cs.CL cs.AI cs.IR | Product Attribute Value Identification (PAVI) involves identifying attribute
values from product profiles, a key task for improving product search,
recommendations, and business analytics on e-commerce platforms. However,
existing PAVI methods face critical challenges, such as inferring implicit
values, handling out-of-distribution (OOD) values, and producing normalized
outputs. To address these limitations, we introduce Taxonomy-Aware Contrastive
Learning Retrieval (TACLR), the first retrieval-based method for PAVI. TACLR
formulates PAVI as an information retrieval task by encoding product profiles
and candidate values into embeddings and retrieving values based on their
similarity to the item embedding. It leverages contrastive training with
taxonomy-aware hard negative sampling and employs adaptive inference with
dynamic thresholds. TACLR offers three key advantages: (1) it effectively
handles implicit and OOD values while producing normalized outputs; (2) it
scales to thousands of categories, tens of thousands of attributes, and
millions of values; and (3) it supports efficient inference for high-load
industrial scenarios. Extensive experiments on proprietary and public datasets
validate the effectiveness and efficiency of TACLR. Moreover, it has been
successfully deployed in a real-world e-commerce platform, processing millions
of product listings daily while supporting dynamic, large-scale attribute
taxonomies.
|
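The retrieval formulation described above can be sketched in a few lines. This is not the authors' implementation; the embeddings, names, and threshold value below are toy stand-ins for the learned contrastive embeddings and adaptive thresholds the abstract describes:

```python
import numpy as np

# PAVI framed as retrieval: embed the product profile and all candidate
# attribute values, rank candidates by cosine similarity to the item
# embedding, and keep only those above a threshold, so an out-of-
# distribution attribute can return no value at all.

def retrieve_values(item_emb, value_embs, value_names, threshold=0.5):
    item = item_emb / np.linalg.norm(item_emb)
    values = value_embs / np.linalg.norm(value_embs, axis=1, keepdims=True)
    sims = values @ item                       # cosine similarity per candidate
    keep = np.flatnonzero(sims >= threshold)   # threshold handles OOD values
    ranked = keep[np.argsort(-sims[keep])]     # best match first
    return [value_names[i] for i in ranked]

# toy 2-d embeddings standing in for learned ones
item = np.array([1.0, 0.0])
candidates = np.array([[0.9, 0.1], [0.0, 1.0], [0.7, 0.7]])
names = ["red", "cotton", "maroon"]
print(retrieve_values(item, candidates, names, threshold=0.6))  # ['red', 'maroon']
```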
2501.03836 | SCC-YOLO: An Improved Object Detector for Assisting in Brain Tumor
Diagnosis | eess.IV cs.AI cs.CV | Brain tumors can result in neurological dysfunction, alterations in cognitive
and psychological states, increased intracranial pressure, and the occurrence
of seizures, thereby presenting a substantial risk to human life and health.
The You Only Look Once (YOLO) series models have demonstrated superior accuracy
in object detection for medical imaging. In this paper, we develop a novel
SCC-YOLO architecture by integrating the SCConv attention mechanism into
YOLOv9. The SCConv module reconstructs an efficient convolutional module by
reducing spatial and channel redundancy among features, thereby enhancing the
learning of image features. We investigate the impact of integrating different
attention mechanisms with the YOLOv9 model on brain tumor image detection using
both the Br35H dataset and our self-made dataset (Brain_Tumor_Dataset).
Experimental results show that on the Br35H dataset, SCC-YOLO achieved a 0.3%
improvement in mAP50 compared to YOLOv9, while on our self-made dataset,
SCC-YOLO exhibited a 0.5% improvement over YOLOv9. SCC-YOLO has reached
state-of-the-art performance in brain tumor detection. Source code is available
at: https://jihulab.com/healthcare-information-studio/SCC-YOLO/-/tree/master
|
2501.03838 | LM-Net: A Light-weight and Multi-scale Network for Medical Image
Segmentation | cs.CV | Current medical image segmentation approaches have limitations in deeply
exploring multi-scale information and effectively combining local detail
textures with global contextual semantic information. This results in
over-segmentation, under-segmentation, and blurred segmentation boundaries. To
tackle these challenges, we explore multi-scale feature representations from
different perspectives, proposing a novel, lightweight, and multi-scale
architecture (LM-Net) that integrates advantages of both Convolutional Neural
Networks (CNNs) and Vision Transformers (ViTs) to enhance segmentation
accuracy. LM-Net employs a lightweight multi-branch module to capture
multi-scale features at the same level. Furthermore, we introduce two modules
to concurrently capture local detail textures and global semantics with
multi-scale features at different levels: the Local Feature Transformer (LFT)
and Global Feature Transformer (GFT). The LFT integrates local window
self-attention to capture local detail textures, while the GFT leverages global
self-attention to capture global contextual semantics. By combining these
modules, our model achieves complementarity between local and global
representations, alleviating the problem of blurred segmentation boundaries in
medical image segmentation. To evaluate the feasibility of LM-Net, extensive
experiments have been conducted on three publicly available datasets with
different modalities. Our proposed model achieves state-of-the-art results,
surpassing previous methods, while only requiring 4.66G FLOPs and 5.4M
parameters. These state-of-the-art results on three datasets with different
modalities demonstrate the effectiveness and adaptability of our proposed
LM-Net for various medical image segmentation tasks.
|
2501.03839 | MedFocusCLIP: Improving few-shot classification in medical datasets
using pixel-wise attention | eess.IV cs.CV | With the popularity of foundational models, parameter-efficient fine-tuning
has become the de facto approach to leveraging pretrained models for
downstream tasks. Taking inspiration from recent advances in large language
models, techniques such as Visual Prompt Tuning learn an additional
prompt to efficiently fine-tune a pretrained vision foundational model. However,
we observe that such prompting is insufficient for fine-grained visual
classification tasks such as medical image classification, where there is small
inter-class variance and large intra-class variance. Hence, in this paper we
propose to leverage the advanced segmentation capabilities of the Segment
Anything Model 2 (SAM2) as a visual prompting cue for the visual encoder of
CLIP (Contrastive Language-Image Pretraining), guiding the attention of the
CLIP visual encoder to relevant regions in the image. This helps the model
focus on highly discriminative regions without being distracted by visually
similar background features, an essential requirement in a few-shot, fine-grained
classification setting. We evaluate our method on diverse medical datasets
including X-rays, CT scans, and MRI images, and report an accuracy of (71%,
81%, 86%, 58%) from the proposed approach on (COVID, lung-disease, brain-tumor,
breast-cancer) datasets against (66%, 70%, 68%, 29%) from a pretrained CLIP
model after few-shot training. The proposed approach also makes it possible to
obtain an interpretable explanation of the classification performance through the
localization obtained using segmentation.
|
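The mask-as-prompt idea above can be illustrated concretely. This is a minimal sketch of the assumed mechanics, not the released code: a SAM2-style foreground mask restricts attention over image patches by suppressing background scores before the softmax renormalisation:

```python
import numpy as np

# Restrict attention to the segmented (foreground) region: background
# patches receive -inf scores, so after the softmax their weight is zero
# and the remaining mass is renormalised over foreground patches.

def masked_attention(scores, mask):
    """scores: (n_patches,) raw attention scores; mask: 0/1 foreground."""
    scores = np.where(mask.astype(bool), scores, -np.inf)
    scores = scores - scores.max()      # numerically stable softmax
    weights = np.exp(scores)
    return weights / weights.sum()

scores = np.array([2.0, 1.0, 3.0, 0.5])
mask = np.array([1, 1, 0, 0])           # e.g. patches 0-1 cover the lesion
w = masked_attention(scores, mask)
print(w.round(3))                       # background patches get zero weight
```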
2501.03840 | Machine learning applications in archaeological practices: a review | cs.LG | Artificial intelligence and machine learning applications in archaeology have
increased significantly in recent years, and these now span all subfields,
geographical regions, and time periods. The prevalence and success of these
applications have remained largely unexamined, as recent reviews on the use of
machine learning in archaeology have focused only on specific subfields of
archaeology. Our review examined an exhaustive corpus of 135 articles published
between 1997 and 2022. We observed a significant increase in the number of
publications from 2019 onwards. Automatic structure detection and artefact
classification were the most represented tasks in the articles reviewed,
followed by taphonomy, and archaeological predictive modelling. From the
review, clustering and unsupervised methods were underrepresented compared to
supervised models. Artificial neural networks and ensemble learning account for
two thirds of the total number of models used. However, while machine learning
models are gaining in popularity, they remain subject to misunderstanding. In
some cases, we observed poorly defined requirements and caveats of the machine
learning methods used. Furthermore, the goals and the needs of machine learning
applications for archaeological purposes are in some cases unclear or poorly
expressed. To address this, we proposed a workflow guide for archaeologists to
develop coherent and consistent methodologies adapted to their research
questions, project scale and data. As in many other areas, machine learning is
rapidly becoming an important tool in archaeological research and practice,
useful for the analyses of large and multivariate data, although not without
limitations. This review highlights the importance of well-defined and
well-reported structured methodologies and collaborative practices to maximise
the potential of applications of machine learning methods in archaeology.
|
2501.03841 | OmniManip: Towards General Robotic Manipulation via Object-Centric
Interaction Primitives as Spatial Constraints | cs.RO | The development of general robotic systems capable of manipulating in
unstructured environments is a significant challenge. While Vision-Language
Models (VLMs) excel in high-level commonsense reasoning, they lack the
fine-grained 3D spatial understanding required for precise manipulation tasks.
Fine-tuning VLMs on robotic datasets to create Vision-Language-Action
Models (VLAs) is a potential solution, but it is hindered by high data collection
costs and generalization issues. To address these challenges, we propose a
novel object-centric representation that bridges the gap between VLM's
high-level reasoning and the low-level precision required for manipulation. Our
key insight is that an object's canonical space, defined by its functional
affordances, provides a structured and semantically meaningful way to describe
interaction primitives, such as points and directions. These primitives act as
a bridge, translating VLM's commonsense reasoning into actionable 3D spatial
constraints. In this context, we introduce a dual closed-loop, open-vocabulary
robotic manipulation system: one loop for high-level planning through primitive
resampling, interaction rendering and VLM checking, and another for low-level
execution via 6D pose tracking. This design ensures robust, real-time control
without requiring VLM fine-tuning. Extensive experiments demonstrate strong
zero-shot generalization across diverse robotic manipulation tasks,
highlighting the potential of this approach for automating large-scale
simulation data generation.
|
2501.03843 | BERTopic for Topic Modeling of Hindi Short Texts: A Comparative Study | cs.IR cs.CL cs.LG | As short text data in native languages like Hindi increasingly appear in
modern media, robust methods for topic modeling on such data have gained
importance. This study investigates the performance of BERTopic in modeling
Hindi short texts, an area that has been under-explored in existing research.
Using contextual embeddings, BERTopic can capture semantic relationships in
data, making it potentially more effective than traditional models, especially
for short and diverse texts. We evaluate BERTopic using 6 different document
embedding models and compare its performance against 8 established topic
modeling techniques, such as Latent Dirichlet Allocation (LDA), Non-negative
Matrix Factorization (NMF), Latent Semantic Indexing (LSI), Additive
Regularization of Topic Models (ARTM), Probabilistic Latent Semantic Analysis
(PLSA), Embedded Topic Model (ETM), Combined Topic Model (CTM), and Top2Vec.
The models are assessed using coherence scores across a range of topic counts.
Our results reveal that BERTopic consistently outperforms other models in
capturing coherent topics from short Hindi texts.
|
2501.03847 | Diffusion as Shader: 3D-aware Video Diffusion for Versatile Video
Generation Control | cs.CV cs.AI cs.GR | Diffusion models have demonstrated impressive performance in generating
high-quality videos from text prompts or images. However, precise control over
the video generation process, such as camera manipulation or content editing,
remains a significant challenge. Existing methods for controlled video
generation are typically limited to a single control type, lacking the
flexibility to handle diverse control demands. In this paper, we introduce
Diffusion as Shader (DaS), a novel approach that supports multiple video
control tasks within a unified architecture. Our key insight is that achieving
versatile video control necessitates leveraging 3D control signals, as videos
are fundamentally 2D renderings of dynamic 3D content. Unlike prior methods
limited to 2D control signals, DaS leverages 3D tracking videos as control
inputs, making the video diffusion process inherently 3D-aware. This innovation
allows DaS to achieve a wide range of video controls by simply manipulating the
3D tracking videos. A further advantage of using 3D tracking videos is their
ability to effectively link frames, significantly enhancing the temporal
consistency of the generated videos. With just 3 days of fine-tuning on 8 H800
GPUs using less than 10k videos, DaS demonstrates strong control capabilities
across diverse tasks, including mesh-to-video generation, camera control,
motion transfer, and object manipulation.
|
2501.03848 | Semise: Semi-supervised learning for severity representation in medical
image | eess.IV cs.CV | This paper introduces SEMISE, a novel method for representation learning in
medical imaging that combines self-supervised and supervised learning. By
leveraging both labeled and augmented data, SEMISE addresses the challenge of
data scarcity and enhances the encoder's ability to extract meaningful
features. This integrated approach leads to more informative representations,
improving performance on downstream tasks. As a result, our approach achieved a
12% improvement in classification and a 3% improvement in segmentation,
outperforming existing methods. These results demonstrate the potential of
SEMISE to advance medical image analysis and offer more accurate solutions for
healthcare applications, particularly in contexts where labeled data is
limited.
|
2501.03850 | Partitioning Strategies for Parallel Computation of Flexible Skylines | cs.DB | While classical skyline queries identify interesting data within large
datasets, flexible skylines introduce preferences through constraints on
attribute weights, and further reduce the data returned. However, computing
these queries can be time-consuming for large datasets. We propose and
implement a parallel computation scheme consisting of a parallel phase followed
by a sequential phase, and apply it to flexible skylines. We assess the
additional effect of an initial filtering phase to reduce dataset size before
parallel processing, and the elimination of the sequential part (the most
time-consuming) altogether. All our experiments are executed in the PySpark
framework for a number of different datasets of varying sizes and dimensions.
|
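The two-phase scheme described above can be sketched on the classical skyline operator (the weight-constraint part that makes skylines "flexible" is omitted here; partition sizes and names are our own). The key property is that every global skyline point survives its partition's local skyline, so the sequential merge over the union of local results is correct:

```python
# Parallel phase: compute a local skyline per partition.
# Sequential phase: one final skyline pass over the union of local results.
# Smaller is better on every attribute.

def dominates(a, b):
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def skyline(points):
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

def parallel_skyline(points, num_partitions=2):
    parts = [points[i::num_partitions] for i in range(num_partitions)]
    local = [p for part in parts for p in skyline(part)]   # "parallel" phase
    return skyline(local)                                  # sequential merge

data = [(1, 4), (2, 2), (4, 1), (3, 3), (5, 5)]
print(sorted(parallel_skyline(data)))  # [(1, 4), (2, 2), (4, 1)]
```

In PySpark the parallel phase would be a `mapPartitions` over the distributed dataset, with the merge collected to the driver.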
2501.03853 | Leveraging time and parameters for nonlinear model reduction methods | math.NA cs.LG cs.NA | In this paper, we consider model order reduction (MOR) methods for problems
with slowly decaying Kolmogorov $n$-widths as, e.g., certain wave-like or
transport-dominated problems. To overcome this Kolmogorov barrier within MOR,
nonlinear projections are used, which are often realized numerically using
autoencoders. These autoencoders generally consist of a nonlinear encoder and a
nonlinear decoder and involve costly training of the hyperparameters to obtain
a good approximation quality of the reduced system. To facilitate the training
process, we show that extending the to-be-reduced system and its corresponding
training data makes it possible to replace the nonlinear encoder with a linear
encoder without sacrificing accuracy, thus roughly halving the number of
hyperparameters to be trained.
|
2501.03854 | Comparison of Integration Methods for Cut Elements | cs.CE | Using an interface inserted in a background mesh is an alternative way of
constructing a complex geometrical shape with relatively low meshing effort.
However, this process may require special treatment of elements cut by the
interface. Our study focuses on comparing the integration of cut elements
defined by implicit and parametric curves. We investigate the efficiency and
robustness of open-source tools such as Algoim [5] (a library for quadrature on
implicitly defined geometries) and Ginkgo [2] (a library for isogeometric
analysis on Boolean operations with a parametric description) with numerical
examples computing the area defined by the interface and benchmarks for a 2D
elasticity problem using the open-source code GeoPDEs [7]. It is concluded that
neither of the two interface descriptions is preferable with respect to the
quality of the integration. Thus, the choice of the interface type depends only
on the studied problem and the available curve description, but not on the
numerical aspects of the integration.
|
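The "area defined by the interface" benchmark mentioned above can be illustrated with a crude stand-in for a quadrature library like Algoim (this is not Algoim's algorithm, just midpoint sampling on a background grid): integrate the region where an implicit level-set function is negative:

```python
import math

# Approximate the area of the disc phi(x, y) < 0, with
# phi(x, y) = x^2 + y^2 - 1, by classifying cell midpoints of a
# background grid and summing the areas of the cells judged inside.

def area_implicit(phi, lo, hi, n):
    h = (hi - lo) / n
    centers = ((lo + (i + 0.5) * h, lo + (j + 0.5) * h)
               for i in range(n) for j in range(n))
    return sum(h * h for (x, y) in centers if phi(x, y) < 0.0)

def phi(x, y):
    return x * x + y * y - 1.0

approx = area_implicit(phi, -1.5, 1.5, 400)
print(approx)  # close to math.pi at this resolution
```

A dedicated cut-cell quadrature converges far faster, since it resolves the interface inside each cut cell rather than classifying whole cells.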
2501.03855 | BabyLMs for isiXhosa: Data-Efficient Language Modelling in a
Low-Resource Context | cs.CL | The BabyLM challenge called on participants to develop sample-efficient
language models. Submissions were pretrained on a fixed English corpus, limited
to the number of words children are exposed to during development (<100m). The
challenge produced new architectures for data-efficient language modelling,
which outperformed models trained on trillions of words. This is promising for
low-resource languages, where available corpora are limited to much less than
100m words. In this paper, we explore the potential of BabyLMs for low-resource
languages, using the isiXhosa language as a case study. We pretrain two BabyLM
architectures, ELC-BERT and MLSM, on an isiXhosa corpus. They outperform a
vanilla pretrained model on POS tagging and NER, achieving notable gains (+3.2
F1) for the latter. In some instances, the BabyLMs even outperform XLM-R. Our
findings show that data-efficient models are viable for low-resource languages,
but highlight the continued importance, and lack of, high-quality pretraining
data. Finally, we visually analyse how BabyLM architectures encode isiXhosa.
|
2501.03857 | Progressive Document-level Text Simplification via Large Language Models | cs.CL | Research on text simplification has primarily focused on lexical and
sentence-level changes. Long document-level simplification (DS) is still
relatively unexplored. Large Language Models (LLMs), like ChatGPT, have
excelled in many natural language processing tasks. However, their performance
on DS tasks is unsatisfactory, as they often treat DS as merely document
summarization. For the DS task, the generated long sequences must not only
maintain consistency with the original document throughout, but also perform
moderate simplification operations at the discourse, sentence, and word
levels. Human editors employ a hierarchical complexity
simplification strategy to simplify documents. This study delves into
simulating this strategy through a multi-stage collaboration of LLMs. We
propose a progressive simplification method (ProgDS) that hierarchically
decomposes the task into discourse-level,
topic-level, and lexical-level simplification. Experimental results demonstrate
that ProgDS significantly outperforms existing smaller models or direct
prompting with LLMs, advancing the state-of-the-art in the document
simplification task.
|
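The staged decomposition above can be sketched as a simple pipeline. The stage functions below are placeholders of our own invention, not the authors' prompts; in ProgDS each stage would be an LLM call operating on the previous stage's output:

```python
# Progressive simplification: each stage consumes the output of the
# previous one, moving from coarse (discourse) to fine (lexical) edits.

def discourse_stage(doc):
    # placeholder: drop low-salience discourse units
    return [s for s in doc if s["salient"]]

def topic_stage(doc):
    # placeholder: split/merge sentences within a topic (identity here)
    return doc

def lexical_stage(doc):
    # placeholder: substitute a complex word with a simpler one
    return [{**s, "text": s["text"].replace("utilize", "use")} for s in doc]

def progds(doc):
    for stage in (discourse_stage, topic_stage, lexical_stage):
        doc = stage(doc)
    return doc

doc = [{"text": "We utilize a method.", "salient": True},
       {"text": "Tangential aside.", "salient": False}]
print(progds(doc))  # one sentence kept, with "utilize" simplified to "use"
```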
2501.03858 | Symmetry and Generalisation in Machine Learning | cs.LG stat.ML | This work is about understanding the impact of invariance and equivariance on
generalisation in supervised learning. We use the perspective afforded by an
averaging operator to show that for any predictor that is not equivariant,
there is an equivariant predictor with strictly lower test risk on all
regression problems where the equivariance is correctly specified. This
constitutes a rigorous proof that symmetry, in the form of invariance or
equivariance, is a useful inductive bias.
We apply these ideas to equivariance and invariance in random design least
squares and kernel ridge regression respectively. This allows us to specify the
reduction in expected test risk in more concrete settings and express it in
terms of properties of the group, the model and the data.
Along the way, we give examples and additional results to demonstrate the
utility of the averaging operator approach in analysing equivariant predictors.
In addition, we adopt an alternative perspective and formalise the common
intuition that learning with invariant models reduces to a problem in terms of
orbit representatives. The formalism extends naturally to a similar intuition
for equivariant models. We conclude by connecting the two perspectives and
giving some ideas for future work.
|
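The averaging operator at the heart of the argument above is easy to demonstrate: given any predictor f and a finite group G acting on inputs, define (Of)(x) as the mean of f(g.x) over g in G, which is invariant by construction. A small illustration with the four-element rotation group C4 acting on 2D points (the predictor f is an arbitrary non-invariant choice of ours):

```python
import numpy as np

def rotations():
    """The cyclic group C4 as 2x2 rotation matrices."""
    return [np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])
            for t in (0.0, np.pi / 2, np.pi, 3 * np.pi / 2)]

def average(f):
    """Symmetrisation operator: (Of)(x) = mean over g in G of f(g @ x)."""
    G = rotations()
    return lambda x: np.mean([f(g @ x) for g in G])

f = lambda x: x[0] + 2.0 * x[1]      # not invariant under rotation
Of = average(f)

x = np.array([1.0, 0.0])
gx = np.array([0.0, 1.0])            # x rotated by 90 degrees
print(f(x), f(gx))                   # 1.0 vs ~2.0: f is not invariant
print(np.isclose(Of(x), Of(gx)))     # True: Of is invariant by construction
```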
2501.03859 | A Synergistic Framework for Learning Shape Estimation and Shape-Aware
Whole-Body Control Policy for Continuum Robots | cs.RO | In this paper, we present a novel synergistic framework for learning shape
estimation and a shape-aware whole-body control policy for tendon-driven
continuum robots. Our approach leverages the interaction between two Augmented
Neural Ordinary Differential Equations (ANODEs) -- the Shape-NODE and
Control-NODE -- to achieve continuous shape estimation and shape-aware control.
The Shape-NODE integrates prior knowledge from Cosserat rod theory, allowing it
to adapt and account for model mismatches, while the Control-NODE uses this
shape information to optimize a whole-body control policy, trained in a Model
Predictive Control (MPC) fashion. This unified framework effectively overcomes
limitations of existing data-driven methods, such as poor shape awareness and
challenges in capturing complex nonlinear dynamics. Extensive evaluations in
both simulation and real-world environments demonstrate the framework's robust
performance in shape estimation, trajectory tracking, and obstacle avoidance.
The proposed method consistently outperforms state-of-the-art end-to-end,
Neural-ODE, and Recurrent Neural Network (RNN) models, particularly in terms of
tracking accuracy and generalization capabilities.
|
2501.03863 | Improving Dialectal Slot and Intent Detection with Auxiliary Tasks: A
Multi-Dialectal Bavarian Case Study | cs.CL | Reliable slot and intent detection (SID) is crucial in natural language
understanding for applications like digital assistants. Encoder-only
transformer models fine-tuned on high-resource languages generally perform well
on SID. However, they struggle with dialectal data, where no standardized form
exists and training data is scarce and costly to produce. We explore zero-shot
transfer learning for SID, focusing on multiple Bavarian dialects, for which we
release a new dataset for the Munich dialect. We evaluate models trained on
auxiliary tasks in Bavarian, and compare joint multi-task learning with
intermediate-task training. We also compare three types of auxiliary tasks:
token-level syntactic tasks, named entity recognition (NER), and language
modelling. We find that the included auxiliary tasks have a more positive
effect on slot filling than intent classification (with NER having the most
positive effect), and that intermediate-task training yields more consistent
performance gains. Our best-performing approach improves intent classification
performance on Bavarian dialects by 5.1 and slot filling F1 by 8.4 percentage
points.
|
2501.03865 | Truthful mechanisms for linear bandit games with private contexts | cs.LG cs.GT | The contextual bandit problem, where agents arrive sequentially with personal
contexts and the system adapts its arm allocation decisions accordingly, has
recently garnered increasing attention for enabling more personalized outcomes.
However, in many healthcare and recommendation applications, agents have
private profiles and may misreport their contexts to gain from the system. For
example, in adaptive clinical trials, where hospitals sequentially recruit
volunteers to test multiple new treatments and adjust plans based on
volunteers' reported profiles such as symptoms and interim data, participants
may misreport severe side effects like allergy and nausea to avoid perceived
suboptimal treatments. We are the first to study this issue of private context
misreporting in a stochastic contextual bandit game between the system and
non-repeated agents. We show that traditional low-regret algorithms, such as
UCB family algorithms and Thompson sampling, fail to ensure truthful reporting
and can result in linear regret in the worst case, while traditional truthful
algorithms like explore-then-commit (ETC) and $\epsilon$-greedy algorithm incur
sublinear but high regret. We propose a mechanism that uses a linear program to
ensure truthfulness while minimizing deviation from Thompson sampling, yielding
an $O(\ln T)$ frequentist regret. Our numerical experiments further demonstrate
strong performance in multiple contexts and across other distribution families.
|
2501.03870 | Add Noise, Tasks, or Layers? MaiNLP at the VarDial 2025 Shared Task on
Norwegian Dialectal Slot and Intent Detection | cs.CL | Slot and intent detection (SID) is a classic natural language understanding
task. Despite this, research has only more recently begun focusing on SID for
dialectal and colloquial varieties. Many approaches for low-resource scenarios
have not yet been applied to dialectal SID data, or compared to each other on
the same datasets. We participate in the VarDial 2025 shared task on slot and
intent detection in Norwegian varieties, and compare multiple set-ups: varying
the training data (English, Norwegian, or dialectal Norwegian), injecting
character-level noise, training on auxiliary tasks, and applying Layer
Swapping, a technique in which layers of models fine-tuned on different
datasets are assembled into a model. We find noise injection to be beneficial
while the effects of auxiliary tasks are mixed. Though some experimentation was
required to successfully assemble a model from layers, it worked surprisingly
well; a combination of models trained on English and small amounts of dialectal
data produced the most robust slot predictions. Our best models achieve 97.6%
intent accuracy and 85.6% slot F1 in the shared task.
|
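Layer Swapping as described above can be sketched conceptually. Here the string "weights" stand in for tensors, and the choice of which layers to take from the dialectal model is our own illustrative assumption (in practice one would experiment, as the abstract notes):

```python
# Represent each fine-tuned model as a mapping from layer name to weights,
# then assemble a new model by taking some layers from a model trained on
# English SID data and others from one trained on dialectal data.

english_model = {f"layer_{i}": f"en_weights_{i}" for i in range(6)}
dialect_model = {f"layer_{i}": f"dial_weights_{i}" for i in range(6)}

def swap_layers(base, donor, donor_layers):
    """Return a new state dict taking `donor_layers` from `donor`."""
    merged = dict(base)
    for name in donor_layers:
        merged[name] = donor[name]
    return merged

# e.g. take the lower layers (closer to the input) from the dialectal model
assembled = swap_layers(english_model, dialect_model, ["layer_0", "layer_1"])
print(assembled["layer_0"], assembled["layer_5"])  # dial_weights_0 en_weights_5
```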
2501.03874 | Neuromorphic Optical Tracking and Imaging of Randomly Moving Targets
through Strongly Scattering Media | cs.NE cs.CV cs.LG eess.IV | Tracking and acquiring simultaneous optical images of randomly moving targets
obscured by scattering media remains a challenging problem of importance to
many applications that require precise object localization and identification.
In this work we develop an end-to-end neuromorphic optical engineering and
computational approach to demonstrate how to track and image normally invisible
objects by combining an event detecting camera with a multistage neuromorphic
deep learning strategy. Photons emerging from dense scattering media are
detected by the event camera and converted to pixel-wise asynchronous spike
trains - a first step in isolating object-specific information from the
dominant uninformative background. Spiking data is fed into a deep spiking
neural network (SNN) engine where object tracking and image reconstruction are
performed by two separate yet interconnected modules running in parallel in
discrete time steps over the event duration. Through benchtop experiments we
demonstrate tracking and imaging randomly moving objects in dense turbid media
as well as image reconstruction of spatially stationary but optically dynamic
objects. Standardized character sets serve as representative proxies for
geometrically complex objects, underscoring the method's generality. The
results highlight the advantages of a fully neuromorphic approach in addressing
a major imaging-technology challenge with high computational efficiency and low power
consumption.
|
2501.03875 | ZDySS -- Zero-Shot Dynamic Scene Stylization using Gaussian Splatting | cs.CV | Stylizing a dynamic scene based on an exemplar image is critical for various
real-world applications, including gaming, filmmaking, and augmented and
virtual reality. However, achieving consistent stylization across both spatial
and temporal dimensions remains a significant challenge. Most existing methods
are designed for static scenes and often require an optimization process for
each style image, limiting their adaptability. We introduce ZDySS, a zero-shot
stylization framework for dynamic scenes, allowing our model to generalize to
previously unseen style images at inference. Our approach employs Gaussian
splatting for scene representation, linking each Gaussian to a learned feature
vector that renders a feature map for any given view and timestamp. By applying
style transfer on the learned feature vectors instead of the rendered feature
map, we enhance spatio-temporal consistency across frames. Our method
demonstrates superior performance and coherence over state-of-the-art baselines
in tests on real-world dynamic scenes, making it a robust solution for
practical applications.
|
2501.03877 | Stochastically Constrained Best Arm Identification with Thompson
Sampling | cs.LG | We consider the problem of best arm identification in the presence of
stochastic constraints, where there is a finite number of arms associated with
multiple performance measures. The goal is to identify the arm that optimizes
the objective measure subject to constraints on the remaining measures. We will
explore the popular idea of Thompson sampling (TS) as a means to solve it. To
the best of our knowledge, this is the first attempt to extend TS to this
problem. We will design a TS-based sampling algorithm, establish its asymptotic
optimality in the rate of posterior convergence, and demonstrate its superior
performance using numerical examples.
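One way to picture the setting is the sketch below: sample a posterior draw for every arm and measure, restrict attention to arms whose sampled constraint measure is feasible, and pull the feasible arm with the best sampled objective. The Gaussian posteriors, arm means, and threshold are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative setup: 3 arms, 2 measures each. Measure 0 is the objective
# (maximize); measure 1 is constrained to stay below THRESHOLD.
TRUE_MEANS = np.array([[1.0, 0.2],   # feasible, low objective
                       [2.0, 0.9],   # best objective, but infeasible
                       [1.5, 0.3]])  # best feasible arm
THRESHOLD = 0.5
SIGMA = 0.3  # known Gaussian observation noise (simplifying assumption)

def constrained_ts(n_rounds=2000):
    k, m = TRUE_MEANS.shape
    counts = np.ones((k, m))
    sums = TRUE_MEANS + rng.normal(0.0, SIGMA, (k, m))  # one initial pull each
    for _ in range(n_rounds):
        # One posterior sample per arm/measure (Gaussian likelihood, flat prior).
        theta = sums / counts + rng.normal(0.0, SIGMA, (k, m)) / np.sqrt(counts)
        feasible = theta[:, 1] < THRESHOLD
        if feasible.any():
            arm = int(np.argmax(np.where(feasible, theta[:, 0], -np.inf)))
        else:
            arm = int(rng.integers(k))
        obs = TRUE_MEANS[arm] + rng.normal(0.0, SIGMA, m)
        counts[arm] += 1
        sums[arm] += obs
    # Declare the empirically best arm among those estimated feasible.
    means = sums / counts
    feas = means[:, 1] < THRESHOLD
    return int(np.argmax(np.where(feas, means[:, 0], -np.inf)))
```

Arm 1 has the best objective but violates the constraint, so a correct procedure must learn to discard it in favor of arm 2.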
|
2501.03879 | CL3DOR: Contrastive Learning for 3D Large Multimodal Models via Odds
Ratio on High-Resolution Point Clouds | cs.CV cs.AI | Recent research has demonstrated that Large Language Models (LLMs) are not
limited to text-only tasks but can also function as multimodal models across
various modalities, including audio, images, and videos. In particular,
research on 3D Large Multimodal Models (3D LMMs) is making notable strides,
driven by the potential of processing higher-dimensional data like point
clouds. However, upon closer examination, we find that the visual and textual
content within each sample of existing training datasets lacks both high
informational granularity and clarity, which serve as a bottleneck for precise
cross-modal understanding. To address these issues, we propose CL3DOR,
Contrastive Learning for 3D large multimodal models via Odds ratio on
high-Resolution point clouds, designed to ensure greater specificity and
clarity in both visual and textual content. Specifically, we increase the
density of point clouds per object and construct informative hard negative
responses in the training dataset to penalize unwanted responses. To leverage
hard negative responses, we incorporate the odds ratio as an auxiliary term for
contrastive learning into the conventional language modeling loss. CL3DOR
achieves state-of-the-art performance in 3D scene understanding and reasoning
benchmarks. Additionally, we demonstrate the effectiveness of CL3DOR's key
components through extensive experiments.
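The "odds ratio as an auxiliary term" idea can be sketched in the style of ORPO-type objectives; CL3DOR's exact formulation may differ, and the length-normalized log-likelihood inputs here are an assumption.

```python
import math

def sigmoid(t):
    return 1.0 / (1.0 + math.exp(-t))

def odds_ratio_loss(logp_chosen, logp_rejected):
    """Contrastive odds-ratio term. Inputs are length-normalized
    log-likelihoods, so each probability p = exp(logp) lies in (0, 1)
    and its odds p / (1 - p) are well defined."""
    def log_odds(logp):
        p = math.exp(logp)
        return math.log(p / (1.0 - p))
    # Small when the chosen response is far more likely than the rejected one.
    return -math.log(sigmoid(log_odds(logp_chosen) - log_odds(logp_rejected)))

def total_loss(lm_loss, logp_chosen, logp_rejected, lam=0.1):
    """Conventional language-modeling loss plus the weighted odds-ratio term,
    penalizing the hard negative response relative to the preferred one."""
    return lm_loss + lam * odds_ratio_loss(logp_chosen, logp_rejected)
```

The term vanishes toward zero as the chosen response dominates and equals log 2 when both responses are equally likely, which is what makes it a useful penalty on unwanted (hard negative) responses.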
|
2501.03880 | SELMA3D challenge: Self-supervised learning for 3D light-sheet
microscopy image segmentation | eess.IV cs.CV cs.LG | Recent innovations in light sheet microscopy, paired with developments in
tissue clearing techniques, enable the 3D imaging of large mammalian tissues
with cellular resolution. Combined with the progress in large-scale data
analysis, driven by deep learning, these innovations empower researchers to
rapidly investigate the morphological and functional properties of diverse
biological samples. Segmentation, a crucial preliminary step in the analysis
process, can be automated using domain-specific deep learning models with
expert-level performance. However, these models exhibit high sensitivity to
domain shifts, leading to a significant drop in accuracy when applied to data
outside their training distribution. To address this limitation, and inspired
by the recent success of self-supervised learning in training generalizable
models, we organized the SELMA3D Challenge during the MICCAI 2024 conference.
SELMA3D provides a vast collection of light-sheet images from cleared mice and
human brains, comprising 35 large 3D images, each with over 1000^3 voxels, and
315 annotated small patches for finetuning, preliminary testing and final
testing. The dataset encompasses diverse biological structures, including
vessel-like and spot-like structures. Five teams participated in all phases of
the challenge, and their proposed methods are reviewed in this paper.
Quantitative and qualitative results from most participating teams demonstrate
that self-supervised learning on large datasets improves segmentation model
performance and generalization. We will continue to support and extend SELMA3D
as an inaugural MICCAI challenge focused on self-supervised learning for 3D
microscopy image segmentation.
|
2501.03881 | An LSTM-based Test Selection Method for Self-Driving Cars | cs.RO cs.SE | Self-driving cars require extensive testing, which can be costly in terms of
time. To optimize this process, simple and straightforward tests should be
excluded, focusing on challenging tests instead. This study addresses the test
selection problem for lane-keeping systems in self-driving cars. Road segment
features, such as angles and lengths, were extracted and treated as sequences,
enabling classification of the test cases as "safe" or "unsafe" using a long
short-term memory (LSTM) model. The proposed model is compared against machine
learning-based test selectors. Results demonstrated that the LSTM-based method
outperformed machine learning-based methods in accuracy and precision metrics
while exhibiting comparable performance in recall and F1 scores. This work
introduces a novel deep learning-based approach to the road classification
problem, providing an effective solution for self-driving car test selection
using a simulation environment.
|
2501.03884 | AlphaPO - Reward shape matters for LLM alignment | cs.CL | Reinforcement Learning with Human Feedback (RLHF) and its variants have made
huge strides toward the effective alignment of large language models (LLMs) to
follow instructions and reflect human values. More recently, Direct Alignment
Algorithms (DAAs) have emerged in which the reward modeling stage of RLHF is
skipped by characterizing the reward directly as a function of the policy being
learned. Some popular examples of DAAs include Direct Preference Optimization
(DPO) and Simple Preference Optimization (SimPO). These methods often suffer
from likelihood displacement, a phenomenon by which the probabilities of
preferred responses are undesirably reduced.
In this paper, we argue that, for DAAs, the reward (function) shape matters.
We introduce \textbf{AlphaPO}, a new DAA method that leverages an
$\alpha$-parameter to help change the shape of the reward function beyond the
standard log reward. AlphaPO helps maintain fine-grained control over
likelihood displacement and over-optimization. Compared to SimPO, one of the
best performing DAAs, AlphaPO leads to about 7\% to 10\% relative improvement
in alignment performance for the instruct versions of Mistral-7B and Llama3-8B
while achieving 15\% to 50\% relative improvement over DPO on the same models.
The analysis and results presented highlight the importance of the reward
shape, and how one can systematically change it to affect training dynamics, as
well as improve alignment performance.
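One natural way to interpolate reward shapes with a single α is a Box-Cox-style transform of the standard log reward, which recovers the plain log reward as α approaches 0. This is an illustrative assumption about the family of shapes, not AlphaPO's exact parameterization, which may differ.

```python
import math

def alpha_reward(logp_ratio, alpha):
    """Shape the standard DAA reward r = log(pi_theta / pi_ref) with an
    alpha-parameterized transform f_alpha(r) = (exp(alpha * r) - 1) / alpha.
    As alpha -> 0 this reduces to r itself (the usual log reward); larger
    alpha amplifies large reward values relative to small ones."""
    if abs(alpha) < 1e-12:
        return logp_ratio
    return (math.exp(alpha * logp_ratio) - 1.0) / alpha
```

Varying α changes how steeply the objective rewards already-likely preferred responses, which is the kind of knob the abstract credits with controlling likelihood displacement and over-optimization.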
|
2501.03888 | Neural DNF-MT: A Neuro-symbolic Approach for Learning Interpretable and
Editable Policies | cs.AI cs.LG cs.LO | Although deep reinforcement learning has been shown to be effective, the
model's black-box nature presents barriers to direct policy interpretation. To
address this problem, we propose a neuro-symbolic approach called neural DNF-MT
for end-to-end policy learning. The differentiable nature of the neural DNF-MT
model enables the use of deep actor-critic algorithms for training. At the same
time, its architecture is designed so that trained models can be directly
translated into interpretable policies expressed as standard (bivalent or
probabilistic) logic programs. Moreover, additional layers can be included to
extract abstract features from complex observations, acting as a form of
predicate invention. The logic representations are highly interpretable, and we
show how the bivalent representations of deterministic policies can be edited
and incorporated back into a neural model, facilitating manual intervention and
adaptation of learned policies. We evaluate our approach on a range of tasks
requiring learning deterministic or stochastic behaviours from various forms of
observations. Our empirical results show that our neural DNF-MT model performs
at the level of competing black-box methods whilst providing interpretable
policies.
|
2501.03891 | Superpixel Boundary Correction for Weakly-Supervised Semantic
Segmentation on Histopathology Images | cs.CV | With the rapid advancement of deep learning, computational pathology has made
significant progress in cancer diagnosis and subtyping. Tissue segmentation is
a core challenge, essential for prognosis and treatment decisions. Weakly
supervised semantic segmentation (WSSS) reduces the annotation requirement by
using image-level labels instead of pixel-level ones. However, Class Activation
Map (CAM)-based methods still suffer from low spatial resolution and unclear
boundaries. To address these issues, we propose a multi-level superpixel
correction algorithm that refines CAM boundaries using superpixel clustering
and flood fill. Experimental results show that our method achieves strong
performance on a breast cancer segmentation dataset, reaching an mIoU of 71.08%
and significantly improving tumor microenvironment boundary delineation.
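The boundary-snapping idea can be illustrated with a simplified, single-level version of the refinement: each superpixel is relabeled by the majority CAM vote inside it, so the refined mask inherits the superpixels' crisp boundaries. The multi-level scheme and flood-fill details of the paper are omitted here.

```python
import numpy as np

def superpixel_correct(cam_mask, superpixels):
    """Snap a coarse binary CAM mask to superpixel boundaries: every
    superpixel region takes the majority label of the CAM pixels it covers."""
    refined = np.zeros_like(cam_mask)
    for sp in np.unique(superpixels):
        region = superpixels == sp
        refined[region] = 1 if cam_mask[region].mean() > 0.5 else 0
    return refined
```

Because superpixels follow image edges, this removes the ragged, low-resolution fringes that CAMs produce around tissue boundaries.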
|
2501.03892 | LEAP: LLM-powered End-to-end Automatic Library for Processing Social
Science Queries on Unstructured Data | cs.DB | Social scientists are increasingly interested in analyzing the semantic
information (e.g., emotion) of unstructured data (e.g., Tweets), where the
semantic information is not natively present. Performing this analysis in a
cost-efficient manner requires using machine learning (ML) models to extract
the semantic information and subsequently analyze the now structured data.
However, this process remains challenging for domain experts.
To demonstrate the challenges in social science analytics, we collect a
dataset, QUIET-ML, of 120 real-world social science queries in natural language
and their ground truth answers. Existing systems struggle with these queries
since (1) they require selecting and applying ML models, and (2) more than a
quarter of these queries are vague, making standard tools like natural language
to SQL systems unsuited. To address these issues, we develop LEAP, an
end-to-end library that answers social science queries in natural language with
ML. LEAP filters vague queries to ensure that the answers are deterministic and
selects from internally supported and user-defined ML functions to extend the
unstructured data to structured tables with necessary annotations. LEAP further
generates and executes code to respond to these natural language queries. LEAP
achieves 100% pass@3 and 92% pass@1 on QUIET-ML, with a \$1.06 average
end-to-end cost, of which code generation costs \$0.02.
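The pass@k numbers can be read through the standard unbiased estimator used for code-generation benchmarks: the probability that at least one of k samples, drawn without replacement from n attempts of which c passed, is correct. How LEAP computes its figures is not stated in the abstract, so this is a general-purpose sketch.

```python
from math import comb

def pass_at_k(n, c, k):
    """Unbiased pass@k estimate: 1 - C(n - c, k) / C(n, k), i.e. one minus
    the probability that all k drawn attempts come from the n - c failures."""
    if n - c < k:
        return 1.0   # too few failures to fill k draws: some draw must pass
    return 1.0 - comb(n - c, k) / comb(n, k)
```

Averaging this per-query quantity over a benchmark's queries gives the aggregate pass@1 and pass@3 scores of the kind the abstract reports.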
|
2501.03894 | Robust Moving-horizon Estimation for Nonlinear Systems: From Perfect to
Imperfect Optimization | eess.SY cs.SY | Robust stability of moving-horizon estimators is investigated for nonlinear
discrete-time systems that are detectable in the sense of incremental
input/output-to-state stability and are affected by disturbances. The estimate
of a moving-horizon estimator stems from the on-line solution of a
least-squares minimization problem at each time instant. The resulting
stability guarantees depend on the optimization tolerance in solving such
minimization problems. Specifically, two main contributions are established:
(i) the robust stability of the estimation error, while supposing to solve
exactly the on-line minimization problem; (ii) the practical robust stability
of the estimation error with state estimates obtained by an imperfect
minimization. Finally, the construction of such robust moving-horizon
estimators and the performances resulting from the design based on the
theoretical findings are showcased with two numerical examples.
|
2501.03895 | LLaVA-Mini: Efficient Image and Video Large Multimodal Models with One
Vision Token | cs.CV cs.AI cs.CL | The advent of real-time large multimodal models (LMMs) like GPT-4o has
sparked considerable interest in efficient LMMs. LMM frameworks typically
encode visual inputs into vision tokens (continuous representations) and
integrate them and textual instructions into the context of large language
models (LLMs), where large-scale parameters and numerous context tokens
(predominantly vision tokens) result in substantial computational overhead.
Previous efforts towards efficient LMMs always focus on replacing the LLM
backbone with smaller models, while neglecting the crucial issue of token
quantity. In this paper, we introduce LLaVA-Mini, an efficient LMM with minimal
vision tokens. To achieve a high compression ratio of vision tokens while
preserving visual information, we first analyze how LMMs understand vision
tokens and find that most vision tokens only play a crucial role in the early
layers of LLM backbone, where they mainly fuse visual information into text
tokens. Building on this finding, LLaVA-Mini introduces modality pre-fusion to
fuse visual information into text tokens in advance, thereby facilitating the
extreme compression of vision tokens fed to LLM backbone into one token.
LLaVA-Mini is a unified large multimodal model that can support the
understanding of images, high-resolution images, and videos in an efficient
manner. Experiments across 11 image-based and 7 video-based benchmarks
demonstrate that LLaVA-Mini outperforms LLaVA-v1.5 with just 1 vision token
instead of 576. Efficiency analyses reveal that LLaVA-Mini can reduce FLOPs by
77%, deliver low-latency responses within 40 milliseconds, and process over
10,000 frames of video on GPU hardware with 24 GB of memory.
|
2501.03902 | Explainable Reinforcement Learning via Temporal Policy Decomposition | cs.LG cs.AI | We investigate the explainability of Reinforcement Learning (RL) policies
from a temporal perspective, focusing on the sequence of future outcomes
associated with individual actions. In RL, value functions compress information
about rewards collected across multiple trajectories and over an infinite
horizon, allowing a compact form of knowledge representation. However, this
compression obscures the temporal details inherent in sequential
decision-making, presenting a key challenge for interpretability. We present
Temporal Policy Decomposition (TPD), a novel explainability approach that
explains individual RL actions in terms of their Expected Future Outcome (EFO).
These explanations decompose generalized value functions into a sequence of
EFOs, one for each time step up to a prediction horizon of interest, revealing
insights into when specific outcomes are expected to occur. We leverage
fixed-horizon temporal difference learning to devise an off-policy method for
learning EFOs for both optimal and suboptimal actions, enabling contrastive
explanations consisting of EFOs for different state-action pairs. Our
experiments demonstrate that TPD generates accurate explanations that (i)
clarify the policy's future strategy and anticipated trajectory for a given
action and (ii) improve understanding of the reward composition, facilitating
fine-tuning of the reward function to align with human expectations.
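The fixed-horizon machinery can be sketched in tabular form: maintain one value table per horizon h, each bootstrapping from the table one step shorter. The deterministic chain environment below is an illustrative assumption; differencing consecutive horizons then isolates the expected reward exactly h steps ahead, the shape of an EFO sequence.

```python
import numpy as np

def fixed_horizon_td(transitions, n_states, horizon, alpha=0.1, sweeps=2000):
    """Tabular fixed-horizon TD. V[h][s] estimates the return collected over
    the next h steps from state s; each horizon h bootstraps from V[h-1],
    so no infinite-horizon discounting is needed."""
    V = np.zeros((horizon + 1, n_states))   # V[0] is identically zero
    for _ in range(sweeps):
        for s, r, s_next in transitions:    # (state, reward, next state)
            for h in range(1, horizon + 1):
                target = r + V[h - 1][s_next]
                V[h][s] += alpha * (target - V[h][s])
    return V
```

On a chain 0 -> 1 -> 2 where only the step out of state 1 pays reward 1, the tables reveal the timing of the outcome: from state 0, nothing is expected within one step (V[1][0] = 0) but one unit of reward arrives by the second step (V[2][0] = 1).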
|
2501.03904 | Exploring the Potential of Large Language Models in Public
Transportation: San Antonio Case Study | cs.LG cs.AI cs.IR | The integration of large language models (LLMs) into public transit systems
presents a transformative opportunity to enhance urban mobility. This study
explores the potential of LLMs to revolutionize public transportation
management within the context of San Antonio's transit system. Leveraging the
capabilities of LLMs in natural language processing and data analysis, we
investigate their potential to optimize route planning, reduce wait times,
and provide personalized travel assistance. By utilizing the General Transit
Feed Specification (GTFS) and other relevant data, this research aims to
demonstrate how LLMs can potentially improve resource allocation, elevate
passenger satisfaction, and inform data-driven decision-making in transit
operations. A comparative analysis of different ChatGPT models was conducted to
assess their ability to understand transportation information, retrieve
relevant data, and provide comprehensive responses. Findings from this study
suggest that while LLMs hold immense promise for public transit, careful
engineering and fine-tuning are essential to realizing their full potential.
San Antonio serves as a case study to inform the development of LLM-powered
transit systems in other urban environments.
|
2501.03905 | mFabric: An Efficient and Scalable Fabric for Mixture-of-Experts
Training | cs.NI cs.LG | Mixture-of-Experts (MoE) models outperform conventional models by selectively
activating different subnets, named \emph{experts}, on a per-token basis. This
gated computation generates dynamic communications that cannot be determined
beforehand, challenging the existing GPU interconnects that remain
\emph{static} during the distributed training process. In this paper, we
advocate for a first-of-its-kind system, called mFabric, that unlocks topology
reconfiguration \emph{during} distributed MoE training. Towards this vision, we
first perform a production measurement study and show that the MoE dynamic
communication pattern has \emph{strong locality}, alleviating the need for
global reconfiguration. Based on this, we design and implement a
\emph{regionally reconfigurable high-bandwidth domain} on top of existing
electrical interconnects using optical circuit switching (OCS), achieving
scalability while maintaining rapid adaptability. We have built a fully
functional mFabric prototype with commodity hardware and a customized
collective communication runtime that trains state-of-the-art MoE models with
\emph{in-training} topology reconfiguration across 32 A100 GPUs. Large-scale
packet-level simulations show that mFabric delivers performance comparable to
the non-blocking fat-tree fabric while boosting the training cost efficiency
(e.g., performance per dollar) of four representative MoE models by
1.2$\times$--1.5$\times$ and 1.9$\times$--2.3$\times$ at 100 Gbps and 400 Gbps
link bandwidths, respectively.
|
2501.03907 | Implicit Coordination using Active Epistemic Inference for Multi-Robot
Systems | cs.RO | A Multi-robot system (MRS) provides significant advantages for intricate
tasks such as environmental monitoring, underwater inspections, and space
missions. However, addressing potential communication failures or the lack of
communication infrastructure in these fields remains a challenge. A significant
portion of MRS research presumes that the system can maintain communication
with proximity constraints, but this approach does not solve situations where
communication is either non-existent, unreliable, or poses a security risk.
Some approaches tackle this issue using predictions about other robots while
not communicating, but these methods generally only permit agents to utilize
first-order reasoning, which involves reasoning based purely on their own
observations. In contrast, to deal with this problem, our proposed framework
utilizes Theory of Mind (ToM), employing higher-order reasoning by shifting a
robot's perspective to reason about a belief of others' observations. Our
approach has two main phases: i) an efficient runtime plan adaptation using
active inference to signal intentions and reason about a robot's own belief and
the beliefs of others in the system, and ii) a hierarchical epistemic planning
framework to iteratively reason about the current MRS mission state. The
proposed framework outperforms greedy and first-order reasoning approaches and
is validated using simulations and experiments with heterogeneous robotic
systems.
|
2501.03910 | HYB-VITON: A Hybrid Approach to Virtual Try-On Combining Explicit and
Implicit Warping | cs.CV | Virtual try-on systems have significant potential in e-commerce, allowing
customers to visualize garments on themselves. Existing image-based methods
fall into two categories: those that directly warp garment-images onto
person-images (explicit warping), and those using cross-attention to
reconstruct given garments (implicit warping). Explicit warping preserves
garment details but often produces unrealistic output, while implicit warping
achieves natural reconstruction but struggles with fine details. We propose
HYB-VITON, a novel approach that combines the advantages of each method and
includes both a preprocessing pipeline for warped garments and a novel training
option. These components allow us to utilize beneficial regions of explicitly
warped garments while leveraging the natural reconstruction of implicit
warping. A series of experiments demonstrates that HYB-VITON preserves garment
details more faithfully than recent diffusion-based methods, while producing
more realistic results than a state-of-the-art explicit warping method.
|
2501.03916 | Dolphin: Closed-loop Open-ended Auto-research through Thinking,
Practice, and Feedback | cs.AI cs.CL cs.CV | The scientific research paradigm is undergoing a profound transformation
owing to the development of Artificial Intelligence (AI). Recent works
demonstrate that various AI-assisted research methods can largely improve
research efficiency by improving data analysis, accelerating computation, and
fostering novel idea generation. To further move towards the ultimate goal
(i.e., automatic scientific research), in this paper, we propose Dolphin, the
first closed-loop open-ended auto-research framework to further build the
entire process of human scientific research. Dolphin can generate research
ideas, perform experiments, and get feedback from experimental results to
generate higher-quality ideas. More specifically, Dolphin first generates novel
ideas based on relevant papers which are ranked by the topic and task
attributes. Then, code is automatically generated and debugged using an
exception-traceback-guided local code structure. Finally, Dolphin automatically
analyzes the results of each idea and feeds the results back to the next round
of idea generation. Experiments are conducted on the benchmark datasets of
different topics and results show that Dolphin can generate novel ideas
continuously and complete the experiment in a loop. We highlight that Dolphin
can automatically propose methods that are comparable to the state-of-the-art
in some tasks such as 2D image classification and 3D point classification.
|
2501.03922 | Changing almost perfect nonlinear functions on affine subspaces of small
codimensions | math.CO cs.IT math.CA math.IT | In this article, we study algebraic decompositions and secondary
constructions of almost perfect nonlinear (APN) functions. In many cases, we
establish precise criteria which characterize when certain modifications of a
given APN function yield new ones. Furthermore, we show that some of the newly
constructed functions are extended-affine inequivalent to the original ones.
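The APN condition is easy to state computationally: F over GF(2^n) is APN when, for every nonzero a and every b, the equation F(x + a) + F(x) = b (addition being XOR) has at most 2 solutions. A small self-contained check, using the classical cube (Gold) function x -> x^3 over GF(2^3) as the APN example:

```python
def gf_mul(a, b, mod=0b1011, n=3):
    """Carry-less multiplication in GF(2^n), reducing by the irreducible
    polynomial x^3 + x + 1 (bit pattern 0b1011) for n = 3."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & (1 << n):
            a ^= mod
    return r

def is_apn(F, n):
    """Check the APN property: every nonzero derivative x -> F(x ^ a) ^ F(x)
    must hit each output value at most twice."""
    size = 1 << n
    for a in range(1, size):
        counts = {}
        for x in range(size):
            d = F[x ^ a] ^ F[x]
            counts[d] = counts.get(d, 0) + 1
        if max(counts.values()) > 2:
            return False
    return True

# x -> x^3, a Gold function, is APN over GF(2^n) whenever gcd(1, n) = 1.
cube = [gf_mul(gf_mul(x, x), x) for x in range(8)]
```

Any linear function fails the check, since its derivatives are constant; this exhaustive test is the basic tool for verifying that a modified function remains APN.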
|
2501.03923 | Explainable AI model reveals disease-related mechanisms in single-cell
RNA-seq data | q-bio.GN cs.CV cs.LG | Neurodegenerative diseases (NDDs) are complex and lack effective treatment
due to their poorly understood mechanisms. The increasing use of data from
single-nucleus RNA sequencing (snRNA-seq) makes it possible to explore
transcriptomic events at the single-cell level, yet interpreting the
mechanisms underlying a disease remains challenging. On the other hand, Neural Network (NN) models
can handle complex data to offer insights but can be seen as black boxes with
poor interpretability. In this context, explainable AI (XAI) emerges as a
solution that could help to understand disease-associated mechanisms when
combined with efficient NN models. However, limited research explores XAI in
single-cell data. In this work, we implement a method for identifying
disease-related genes and mechanistically explaining disease progression
based on an NN model combined with SHAP. We analyze available Huntington's disease
(HD) data to identify both HD-altered genes and mechanisms, adding Gene Set
Enrichment Analysis (GSEA) and comparing two methods: differential gene expression
analysis (DGE) and the NN-plus-SHAP approach. Our results show that DGE
and SHAP approaches offer both common and differential sets of altered genes
and pathways, reinforcing the usefulness of XAI methods for a broader
perspective of disease.
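SHAP rests on Shapley values from cooperative game theory, and a brute-force version makes explicit what the library approximates efficiently: each feature's attribution is its average marginal contribution over all coalitions, with absent features replaced by a baseline. The tiny linear "model" and zero baseline below are illustrative assumptions, not the paper's NN.

```python
from itertools import combinations
from math import factorial

def shapley_values(predict, x, baseline):
    """Exact Shapley attributions for one sample (exponential in the number
    of features, so only usable for small d; SHAP approximates this)."""
    d = len(x)
    phi = [0.0] * d

    def f(subset):
        # Evaluate the model with features outside `subset` set to baseline.
        z = [x[i] if i in subset else baseline[i] for i in range(d)]
        return predict(z)

    for i in range(d):
        others = [j for j in range(d) if j != i]
        for size in range(d):
            for S in combinations(others, size):
                w = factorial(size) * factorial(d - size - 1) / factorial(d)
                phi[i] += w * (f(set(S) | {i}) - f(set(S)))
    return phi
```

By the efficiency axiom, the attributions sum to the gap between the prediction on the sample and on the baseline, which is what lets per-gene SHAP scores be read as a decomposition of the model's disease-state output.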
|
2501.03928 | From Newswire to Nexus: Using text-based actor embeddings and
transformer networks to forecast conflict dynamics | cs.CY cs.CL cs.LG | This study advances the field of conflict forecasting by using text-based
actor embeddings with transformer models to predict dynamic changes in violent
conflict patterns at the actor level. More specifically, we combine newswire
texts with structured conflict event data and leverage recent advances in
Natural Language Processing (NLP) techniques to forecast escalations and
de-escalations among conflicting actors, such as governments, militias,
separatist movements, and terrorists. This new approach accurately and promptly
captures the inherently volatile patterns of violent conflicts, which existing
methods have not been able to achieve. To create this framework, we began by
curating and annotating a vast international newswire corpus, leveraging
hand-labeled event data from the Uppsala Conflict Data Program. By using this
hybrid dataset, our models can incorporate the textual context of news sources
along with the precision and detail of structured event data. This combination
enables us to make both dynamic and granular predictions about conflict
developments. We validate our approach through rigorous back-testing against
historical events, demonstrating superior out-of-sample predictive power. We
find that our approach is quite effective in identifying and predicting phases
of conflict escalation and de-escalation, surpassing the capabilities of
traditional models. By focusing on actor interactions, our explicit goal is to
provide actionable insights to policymakers, humanitarian organizations, and
peacekeeping operations in order to enable targeted and effective intervention
strategies.
|
2501.03930 | Towards Reliable Testing for Multiple Information Retrieval System
Comparisons | cs.IR | Null Hypothesis Significance Testing is the \textit{de facto} tool for
assessing effectiveness differences between Information Retrieval systems.
Researchers use statistical tests to check whether those differences will
generalise to online settings or are just due to the samples observed in the
laboratory. Much work has been devoted to studying which test is the most
reliable when comparing a pair of systems, but most real-world IR
experiments involve more than two. In the multiple comparisons scenario,
testing several systems simultaneously may inflate the errors committed by the
tests. In this paper, we use a new approach to assess the reliability of
multiple comparison procedures using simulated and real TREC data. Experiments
show that the Wilcoxon test combined with the Benjamini-Hochberg correction
yields Type I error rates in line with the significance level for typical
sample sizes, while being the best test in terms of statistical power.
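A minimal sketch of this testing pipeline: a Wilcoxon signed-rank test per system-vs-baseline comparison over per-topic scores, followed by a hand-rolled Benjamini-Hochberg step-up correction across the family of comparisons. The synthetic scores in the usage below are assumptions for illustration.

```python
import numpy as np
from scipy.stats import wilcoxon

def compare_systems(baseline_scores, system_scores, alpha=0.05):
    """Paired Wilcoxon signed-rank test for each system against the baseline
    on per-topic scores, then Benjamini-Hochberg adjustment of the p-values
    to control the false discovery rate across the multiple comparisons."""
    pvals = np.array([wilcoxon(s, baseline_scores).pvalue
                      for s in system_scores])
    m = len(pvals)
    order = np.argsort(pvals)          # ascending p-values
    adj = np.empty(m)
    running_min = 1.0
    for rank_from_top in range(m - 1, -1, -1):   # BH step-up, largest p first
        i = order[rank_from_top]
        running_min = min(running_min, pvals[i] * m / (rank_from_top + 1))
        adj[i] = running_min
    return adj, adj <= alpha
```

Without the correction, testing many systems at level alpha inflates the family-wise chance of a spurious "significant" improvement, which is exactly the failure mode the abstract targets.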
|
2501.03931 | Magic Mirror: ID-Preserved Video Generation in Video Diffusion
Transformers | cs.CV | We present Magic Mirror, a framework for generating identity-preserved videos
with cinematic-level quality and dynamic motion. While recent advances in video
diffusion models have shown impressive capabilities in text-to-video
generation, maintaining consistent identity while producing natural motion
remains challenging. Previous methods either require person-specific
fine-tuning or struggle to balance identity preservation with motion diversity.
Built upon Video Diffusion Transformers, our method introduces three key
components: (1) a dual-branch facial feature extractor that captures both
identity and structural features, (2) a lightweight cross-modal adapter with
Conditioned Adaptive Normalization for efficient identity integration, and (3)
a two-stage training strategy combining synthetic identity pairs with video
data. Extensive experiments demonstrate that Magic Mirror effectively balances
identity consistency with natural motion, outperforming existing methods across
multiple metrics while adding only a minimal number of parameters. The code and model
will be made publicly available at:
https://github.com/dvlab-research/MagicMirror/
|
2501.03932 | CoStruction: Conjoint radiance field optimization for urban scene
reconStruction with limited image overlap | cs.CV | Reconstructing the surrounding surface geometry from recorded driving
sequences poses a significant challenge due to the limited image overlap and
complex topology of urban environments. SoTA neural implicit surface
reconstruction methods often struggle in such settings, either failing due to
small visual overlap or exhibiting suboptimal performance in accurately
reconstructing both the surface and fine structures. To address these
limitations, we introduce CoStruction, a novel hybrid implicit surface
reconstruction method tailored for large driving sequences with limited camera
overlap. CoStruction leverages cross-representation uncertainty estimation to
filter out ambiguous geometry caused by limited observations. Our method
performs joint optimization of both radiance fields, in addition to guided
sampling, achieving accurate reconstruction of large areas along with fine
structures in complex urban scenarios. Extensive evaluation on major driving
datasets demonstrates the superiority of our approach in reconstructing large
driving sequences with limited image overlap, outperforming concurrent SoTA
methods.
|
2501.03936 | PPTAgent: Generating and Evaluating Presentations Beyond Text-to-Slides | cs.AI cs.CL | Automatically generating presentations from documents is a challenging task
that requires accommodating content quality, visual appeal, and structural
coherence. Existing methods primarily focus on improving and evaluating the
content quality in isolation, overlooking visual appeal and structural
coherence, which limits their practical applicability. To address these
limitations, we propose PPTAgent, which comprehensively improves presentation
generation through a two-stage, edit-based approach inspired by human
workflows. PPTAgent first analyzes reference presentations to extract
slide-level functional types and content schemas, then drafts an outline and
iteratively generates editing actions based on selected reference slides to
create new slides. To comprehensively evaluate the quality of generated
presentations, we further introduce PPTEval, an evaluation framework that
assesses presentations across three dimensions: Content, Design, and Coherence.
Results demonstrate that PPTAgent significantly outperforms existing automatic
presentation generation methods across all three dimensions.
|
2501.03937 | A precise asymptotic analysis of learning diffusion models: theory and
insights | cs.LG cond-mat.dis-nn | In this manuscript, we consider the problem of learning a flow or
diffusion-based generative model parametrized by a two-layer auto-encoder,
trained with online stochastic gradient descent, on a high-dimensional target
density with an underlying low-dimensional manifold structure. We derive a
tight asymptotic characterization of low-dimensional projections of the
distribution of samples generated by the learned model, ascertaining in
particular its dependence on the number of training samples. Building on this
analysis, we discuss how mode collapse can arise, and lead to model collapse
when the generative model is re-trained on generated synthetic data.
|
2501.03939 | Visual question answering: from early developments to recent advances --
a survey | cs.CV cs.MM | Visual Question Answering (VQA) is an evolving research field aimed at
enabling machines to answer questions about visual content by integrating image
and language processing techniques such as feature extraction, object
detection, text embedding, natural language understanding, and language
generation. With the growth of multimodal data research, VQA has gained
significant attention due to its broad applications, including interactive
educational tools, medical image diagnosis, customer service, entertainment,
and social media captioning. Additionally, VQA plays a vital role in assisting
visually impaired individuals by generating descriptive content from images.
This survey introduces a taxonomy of VQA architectures, categorizing them based
on design choices and key components to facilitate comparative analysis and
evaluation. We review major VQA approaches, focusing on deep learning-based
methods, and explore the emerging field of Large Visual Language Models (LVLMs)
that have demonstrated success in multimodal tasks like VQA. The paper further
examines available datasets and evaluation metrics essential for measuring VQA
system performance, followed by an exploration of real-world VQA applications.
Finally, we highlight ongoing challenges and future directions in VQA research,
presenting open questions and potential areas for further development. This
survey serves as a comprehensive resource for researchers and practitioners
interested in the latest advancements and future
|
2501.03940 | Not all tokens are created equal: Perplexity Attention Weighted Networks
for AI generated text detection | cs.CL cs.AI | The rapid advancement in large language models (LLMs) has significantly
enhanced their ability to generate coherent and contextually relevant text,
raising concerns about the misuse of AI-generated content and making it
critical to detect it. However, the task remains challenging, particularly in
unseen domains or with unfamiliar LLMs. Leveraging LLM next-token distribution
outputs offers a theoretically appealing approach for detection, as they
encapsulate insights from the models' extensive pre-training on diverse
corpora. Despite its promise, zero-shot methods that attempt to operationalize
these outputs have met with limited success. We hypothesize that one of the
problems is that they use the mean to aggregate next-token distribution metrics
across tokens, when some tokens are naturally easier or harder to predict and
should be weighted differently. Based on this idea, we propose the Perplexity
Attention Weighted Network (PAWN), which uses the last hidden states of the LLM
and positions to weight the sum of a series of features based on metrics from
the next-token distribution across the sequence length. Although not zero-shot,
our method allows us to cache the last hidden states and next-token
distribution metrics on disk, greatly reducing the training resource
requirements. PAWN shows competitive and even better performance
in-distribution than the strongest baselines (fine-tuned LMs) with a fraction
of their trainable parameters. Our model also generalizes better to unseen
domains and source models, with smaller variability in the decision boundary
across distribution shifts. It is also more robust to adversarial attacks, and
if the backbone has multilingual capabilities, it presents decent
generalization to languages not seen during supervised training, with LLaMA3-1B
reaching a mean macro-averaged F1 score of 81.46% in cross-validation with nine
languages.
|
2501.03941 | Synthetic Data Privacy Metrics | cs.LG cs.AI | Recent advancements in generative AI have made it possible to create
synthetic datasets that can be as accurate as real-world data for training AI
models, powering statistical insights, and fostering collaboration with
sensitive datasets while offering strong privacy guarantees. Effectively
measuring the empirical privacy of synthetic data is an important step in the
process. However, while there is a multitude of new privacy metrics being
published every day, there currently is no standardization. In this paper, we
review the pros and cons of popular metrics that include simulations of
adversarial attacks. We also review current best practices for amending
generative models to enhance the privacy of the data they create (e.g.
differential privacy).
|
2501.03944 | A GPU Implementation of Multi-Guiding Spark Fireworks Algorithm for
Efficient Black-Box Neural Network Optimization | cs.NE | Swarm intelligence optimization algorithms have gained significant attention
due to their ability to solve complex optimization problems. However, the
efficiency of optimization in large-scale problems limits the use of related
methods. This paper presents a GPU-accelerated version of the Multi-Guiding
Spark Fireworks Algorithm (MGFWA), which significantly improves the
computational efficiency compared to its traditional CPU-based counterpart. We
benchmark the GPU-MGFWA on several neural network black-box optimization
problems and demonstrate its superior performance in terms of both speed and
solution quality. By leveraging the parallel processing power of modern GPUs,
the proposed GPU-MGFWA results in faster convergence and reduced computation
time for large-scale optimization tasks. The proposed implementation offers a
promising approach to accelerate swarm intelligence algorithms, making them
more suitable for real-time applications and large-scale industrial problems.
Source code is released at https://github.com/mxxxr/MGFWA.
|
2501.03952 | Localizing AI: Evaluating Open-Weight Language Models for Languages of
Baltic States | cs.CL cs.AI | Although large language models (LLMs) have transformed our expectations of
modern language technologies, concerns over data privacy often restrict the use
of commercially available LLMs hosted outside of EU jurisdictions. This limits
their application in governmental, defence, and other data-sensitive sectors.
In this work, we evaluate the extent to which locally deployable open-weight
LLMs support lesser-spoken languages such as Lithuanian, Latvian, and Estonian.
We examine various size and precision variants of the top-performing
multilingual open-weight models, Llama 3, Gemma 2, Phi, and NeMo, on machine
translation, multiple-choice question answering, and free-form text generation.
The results indicate that while certain models like Gemma 2 perform close to
the top commercially available models, many LLMs struggle with these languages.
Most surprisingly, however, we find that these models, while showing close to
state-of-the-art translation performance, are still prone to lexical
hallucinations with errors in at least 1 in 20 words for all open-weight
multilingual LLMs.
|
2501.03957 | Vision Language Models as Values Detectors | cs.HC cs.CV | Large Language Models integrating textual and visual inputs have introduced
new possibilities for interpreting complex data. Despite their remarkable
ability to generate coherent and contextually relevant text based on visual
stimuli, the alignment of these models with human perception in identifying
relevant elements in images requires further exploration. This paper
investigates the alignment between state-of-the-art LLMs and human annotators
in detecting elements of relevance within home environment scenarios. We
created a set of twelve images depicting various domestic scenarios and
enlisted fourteen annotators to identify the key element in each image. We then
compared these human responses with outputs from five different LLMs, including
GPT-4o and four LLaVA variants. Our findings reveal a varied degree of
alignment, with LLaVA 34B showing the highest performance but still scoring
low. However, an analysis of the results highlights the models' potential to
detect value-laden elements in images, suggesting that with improved training
and refined prompts, LLMs could enhance applications in social robotics,
assistive technologies, and human-computer interaction by providing deeper
insights and more contextually relevant responses.
|
2501.03961 | Channel Coding based on Skew Polynomials and Multivariate Polynomials | cs.IT eess.SP math.IT | This dissertation considers new constructions and decoding approaches for
error-correcting codes based on non-conventional polynomials, with the
objective of providing new coding solutions to the applications mentioned
above. With skew polynomials, we construct codes that are dual-containing,
which is a desired property of quantum error-correcting codes. By considering
evaluation codes based on skew polynomials, a condition on the existence of
optimal support-constrained codes is derived and an application of such codes
in the distributed multi-source networks is proposed. For a class of multicast
networks, the advantage of vector network coding compared to scalar network
coding is investigated. Multivariate polynomials have been attracting
increasing interest in constructing codes with repair capabilities by accessing
only a small number of available symbols, which is required to build
failure-resistant distributed storage systems. A new class of bivariate
evaluation codes and their local recovery capability are studied.
Interestingly, the well-known Reed-Solomon codes are used in a class of locally
recoverable codes with availability (multiple disjoint recovery sets) via
subspace design. Aside from new constructions, decoding approaches are
considered in order to increase the error correction capability in the case
where the code is fixed. In particular, new lower and upper bounds on the
success probability of joint decoding interleaved alternant codes by a
syndrome-based decoder are derived, where alternant codes are an important
class of algebraic codes containing Goppa codes, BCH codes, and Reed-Muller
codes as sub-classes.
|
2501.03964 | A comparative study of uncertainty quantification methods in gust
response analysis of a Lift-Plus-Cruise eVTOL aircraft wing | cs.CE | Wind gusts, being inherently stochastic, can significantly influence the
safety and performance of aircraft. This study investigates a three-dimensional
uncertainty quantification (UQ) problem to explore how uncertainties in gust
and flight conditions affect the structural response of a Lift-Plus-Cruise
eVTOL aircraft wing. The analysis employs an unsteady aeroelastic model with a
one-way coupling between a panel method aerodynamic solver and a shell analysis
structural solver to predict the wing's response under varying conditions.
Additionally, this paper presents a comparative evaluation of commonly used
non-intrusive UQ methods, including non-intrusive polynomial chaos, kriging,
Monte Carlo, univariate dimension reduction, and gradient-enhanced univariate
dimension reduction. These methods are assessed based on their effectiveness in
estimating various risk measures-mean, standard deviation, and 95th
percentile-of critical structural response outputs such as maximum tip
displacement and average strain energy. The numerical results reveal
significant variability in the structural response outputs, even under
relatively small ranges of uncertain inputs. This highlights the sensitivity of
the system to uncertainties in gust and flight conditions. Furthermore, the
performance of the implemented UQ methods varies significantly depending on the
specific risk measures and the quantity of interest being analyzed.
|
2501.03967 | Temporal Feature Weaving for Neonatal Echocardiographic Viewpoint Video
Classification | cs.CV | Automated viewpoint classification in echocardiograms can help
under-resourced clinics and hospitals in providing faster diagnosis and
screening when expert technicians may not be available. We propose a novel
approach towards echocardiographic viewpoint classification. We show that
treating viewpoint classification as video classification rather than image
classification yields an advantage. We propose a CNN-GRU architecture with a novel
temporal feature weaving method, which leverages both spatial and temporal
information to yield a 4.33% increase in accuracy over baseline image
classification while using only four consecutive frames. The proposed approach
incurs minimal computational overhead. Additionally, we publish the Neonatal
Echocardiogram Dataset (NED), a professionally-annotated dataset providing
sixteen viewpoints and associated echocardiography videos to encourage future
work and development in this field. Code available at:
https://github.com/satchelfrench/NED
|
2501.03968 | VLM-driven Behavior Tree for Context-aware Task Planning | cs.RO cs.AI cs.CV cs.HC | The use of Large Language Models (LLMs) for generating Behavior Trees (BTs)
has recently gained attention in the robotics community, yet remains in its
early stages of development. In this paper, we propose a novel framework that
leverages Vision-Language Models (VLMs) to interactively generate and edit BTs
that address visual conditions, enabling context-aware robot operations in
visually complex environments. A key feature of our approach lies in the
conditional control through self-prompted visual conditions. Specifically, the
VLM generates BTs with visual condition nodes, where conditions are expressed
as free-form text. Another VLM process integrates the text into its prompt and
evaluates the conditions against real-world images during robot execution. We
validated our framework in a real-world cafe scenario, demonstrating both its
feasibility and limitations.
|
2501.03971 | Impact of Leg Stiffness on Energy Efficiency in One Legged Hopping | cs.RO | In the fields of robotics and biomechanics, the integration of elastic
elements such as springs and tendons in legged systems has long been recognized
for enabling energy-efficient locomotion. Yet, a significant challenge
persists: designing a robotic leg that performs consistently across diverse
operating conditions, especially varying average forward speeds. It remains
unclear whether, for such a range of operating conditions, the stiffness of the
elastic elements needs to be varied or if a similar performance can be obtained
by changing the motion and actuation while keeping the stiffness fixed. This
work explores the influence of the leg stiffness on the energy efficiency of a
monopedal robot through an extensive parametric study of its periodic hopping
motion. To this end, we formulate an optimal control problem parameterized by
average forward speed and leg stiffness, solving it numerically using direct
collocation. Our findings indicate that, compared to the use of a fixed
stiffness, employing variable stiffness in legged systems improves energy
efficiency by up to 20% and by 6.8% on average across a range of speeds.
|
2501.03972 | MAD-BA: 3D LiDAR Bundle Adjustment -- from Uncertainty Modelling to
Structure Optimization | cs.RO | The joint optimization of sensor poses and 3D structure is fundamental for
state estimation in robotics and related fields. Current LiDAR systems often
prioritize pose optimization, with structure refinement either omitted or
treated separately using representations like signed distance functions or
neural networks. This paper introduces a framework for simultaneous
optimization of sensor poses and the 3D map, represented as surfels. A generalized
LiDAR uncertainty model is proposed to address degraded or less reliable
measurements in varying scenarios. Experimental results on public datasets
demonstrate improved performance over most comparable state-of-the-art methods.
The system is provided as open-source software to support further research.
|
2501.03988 | Semantically Cohesive Word Grouping in Indian Languages | cs.CL | Indian languages are inflectional and agglutinative and typically follow
clause-free word order. The structure of sentences across most major Indian
languages is similar when their dependency parse trees are considered. While
some differences in the parsing structure occur due to peculiarities of a
language or its preferred natural way of conveying meaning, several apparent
differences are simply due to the granularity of representation of the smallest
semantic unit of processing in a sentence. The semantic unit is typically a
word, typographically separated by whitespaces. A single whitespace-separated
word in one language may correspond to a group of words in another. Hence,
grouping of words based on semantics helps unify the parsing structure of
parallel sentences across languages and, in the process, morphology. In this
work, we propose word grouping as a major preprocessing step for any
computational or linguistic processing of sentences for Indian languages. Among
Indian languages, since Hindi is one of the least agglutinative, we expect it
to benefit the most from word grouping. Hence, in this paper, we focus on Hindi
to study the effects of grouping. We perform quantitative assessment of our
proposal with an intrinsic method that perturbs sentences by shuffling words as
well as an extrinsic evaluation that verifies the importance of word grouping
for the task of Machine Translation (MT) using decomposed prompting. We also
qualitatively analyze certain aspects of the syntactic structure of sentences.
Our experiments and analyses show that the proposed grouping technique brings
uniformity in the syntactic structures, as well as aids underlying NLP tasks.
|
2501.03989 | (De)-Indexing and the Right to be Forgotten | cs.CY cs.IR | In the digital age, the challenge of forgetfulness has emerged as a
significant concern, particularly regarding the management of personal data and
its accessibility online. The right to be forgotten (RTBF) allows individuals
to request the removal of outdated or harmful information from public access,
yet implementing this right poses substantial technical difficulties for search
engines. This paper aims to introduce non-experts to the foundational concepts
of information retrieval (IR) and de-indexing, which are critical for
understanding how search engines can effectively "forget" certain content. We
will explore various IR models, including boolean, probabilistic, vector space,
and embedding-based approaches, as well as the role of Large Language Models
(LLMs) in enhancing data processing capabilities. By providing this overview,
we seek to highlight the complexities involved in balancing individual privacy
rights with the operational challenges faced by search engines in managing
information visibility.
|
2501.03991 | Influences on LLM Calibration: A Study of Response Agreement, Loss
Functions, and Prompt Styles | cs.CL | Calibration, the alignment between model confidence and prediction accuracy,
is critical for the reliable deployment of large language models (LLMs).
Existing works neglect to measure the generalization of their methods to other
prompt styles and different sizes of LLMs. To address this, we define a
controlled experimental setting covering 12 LLMs and four prompt styles. We
additionally investigate if incorporating the response agreement of multiple
LLMs and an appropriate loss function can improve calibration performance.
Concretely, we build Calib-n, a novel framework that trains an auxiliary model
for confidence estimation that aggregates responses from multiple LLMs to
capture inter-model agreement. To optimize calibration, we integrate focal and
AUC surrogate losses alongside binary cross-entropy. Experiments across four
datasets demonstrate that both response agreement and focal loss improve
calibration from baselines. We find that few-shot prompts are the most
effective for auxiliary model-based methods, and auxiliary models demonstrate
robust calibration performance across accuracy variations, outperforming LLMs'
internal probabilities and verbalized confidences. These insights deepen the
understanding of influence factors in LLM calibration, supporting their
reliable deployment in diverse applications.
|
2501.03992 | NeuralSVG: An Implicit Representation for Text-to-Vector Generation | cs.CV | Vector graphics are essential in design, providing artists with a versatile
medium for creating resolution-independent and highly editable visual content.
Recent advancements in vision-language and diffusion models have fueled
interest in text-to-vector graphics generation. However, existing approaches
often suffer from over-parameterized outputs or treat the layered structure - a
core feature of vector graphics - as a secondary goal, diminishing their
practical use. Recognizing the importance of layered SVG representations, we
propose NeuralSVG, an implicit neural representation for generating vector
graphics from text prompts. Inspired by Neural Radiance Fields (NeRFs),
NeuralSVG encodes the entire scene into the weights of a small MLP network,
optimized using Score Distillation Sampling (SDS). To encourage a layered
structure in the generated SVG, we introduce a dropout-based regularization
technique that strengthens the standalone meaning of each shape. We
additionally demonstrate that utilizing a neural representation provides an
added benefit of inference-time control, enabling users to dynamically adapt
the generated SVG based on user-provided inputs, all with a single learned
representation. Through extensive qualitative and quantitative evaluations, we
demonstrate that NeuralSVG outperforms existing methods in generating
structured and flexible SVG.
|
2501.03995 | RAG-Check: Evaluating Multimodal Retrieval Augmented Generation
Performance | cs.LG cs.CV cs.IR cs.IT math.IT | Retrieval-augmented generation (RAG) improves large language models (LLMs) by
using external knowledge to guide response generation, reducing hallucinations.
However, RAG, particularly multi-modal RAG, can introduce new hallucination
sources: (i) the retrieval process may select irrelevant pieces (e.g.,
documents, images) as raw context from the database, and (ii) retrieved images
are processed into text-based context via vision-language models (VLMs) or
directly used by multi-modal language models (MLLMs) like GPT-4o, which may
hallucinate. To address this, we propose a novel framework to evaluate the
reliability of multi-modal RAG using two performance measures: (i) the
relevancy score (RS), assessing the relevance of retrieved entries to the
query, and (ii) the correctness score (CS), evaluating the accuracy of the
generated response. We train RS and CS models using a ChatGPT-derived database
and human evaluator samples. Results show that both models achieve ~88%
accuracy on test data. Additionally, we construct a 5000-sample human-annotated
database evaluating the relevancy of retrieved pieces and the correctness of
response statements. Our RS model aligns with human preferences 20% more often
than CLIP in retrieval, and our CS model matches human preferences ~91% of the
time. Finally, we assess various RAG systems' selection and generation
performances using RS and CS.
|
2501.03999 | WAPTS: A Weighted Allocation Probability Adjusted Thompson Sampling
Algorithm for High-Dimensional and Sparse Experiment Settings | cs.LG stat.ML | Scenarios such as video content advertising, where different content options
compete for user engagement, can be modeled as multi-armed bandit problems and
call for more effective experiment design. When external factors, such as the
cost of conducting experiments, limit the number of available user
interactions, recommenders face tight data constraints. In addition, there is a trade-off between
selecting the best treatment and the ability to personalize and contextualize
based on individual factors. A popular solution to this dilemma is the
Contextual Bandit framework. It aims to maximize outcomes while incorporating
personalization (contextual) factors, customizing treatments such as a user's
profile to individual preferences. Despite their advantages, Contextual Bandit
algorithms face challenges like measurement bias and the 'curse of
dimensionality.' These issues complicate the management of numerous
interventions and often lead to data sparsity through participant segmentation.
To address these problems, we introduce the Weighted Allocation Probability
Adjusted Thompson Sampling (WAPTS) algorithm. WAPTS builds on the contextual
Thompson Sampling method by using a dynamic weighting parameter. This improves
the allocation process for interventions and enables rapid optimization in
data-sparse environments. We demonstrate the performance of our approach on
different numbers of arms and effect sizes.
|
2501.04000 | A Survey on Federated Learning in Human Sensing | cs.LG cs.HC | Human Sensing, a field that leverages technology to monitor human activities,
psycho-physiological states, and interactions with the environment, enhances
our understanding of human behavior and drives the development of advanced
services that improve overall quality of life. However, its reliance on
detailed and often privacy-sensitive data as the basis for its machine learning
(ML) models raises significant legal and ethical concerns. The recently
proposed ML approach of Federated Learning (FL) promises to alleviate many of
these concerns, as it is able to create accurate ML models without sending raw
user data to a central server. While FL has demonstrated its usefulness across
a variety of areas, such as text prediction and cyber security, its benefits in
Human Sensing are under-explored, given the particular challenges in this
domain. This survey conducts a comprehensive analysis of the current
state-of-the-art studies on FL in Human Sensing, and proposes a taxonomy and an
eight-dimensional assessment for FL approaches. Through the eight-dimensional
assessment, we then evaluate whether the surveyed studies consider a specific
FL-in-Human-Sensing challenge or not. Finally, based on the overall analysis,
we discuss open challenges and highlight five research aspects related to FL in
Human Sensing that require urgent research attention. Our work provides a
comprehensive corpus of FL studies and aims to assist FL practitioners in
developing and evaluating solutions that effectively address the real-world
complexities of Human Sensing.
|
2501.04001 | Sa2VA: Marrying SAM2 with LLaVA for Dense Grounded Understanding of
Images and Videos | cs.CV | This work presents Sa2VA, the first unified model for dense grounded
understanding of both images and videos. Unlike existing multi-modal large
language models, which are often limited to specific modalities and tasks,
Sa2VA supports a wide range of image and video tasks, including referring
segmentation and conversation, with minimal one-shot instruction tuning. Sa2VA
combines SAM-2, a foundation video segmentation model, with LLaVA, an advanced
vision-language model, and unifies text, image, and video into a shared LLM
token space. Using the LLM, Sa2VA generates instruction tokens that guide SAM-2
in producing precise masks, enabling a grounded, multi-modal understanding of
both static and dynamic visual content. Additionally, we introduce Ref-SAV, an
auto-labeled dataset containing over 72k object expressions in complex video
scenes, designed to boost model performance. We also manually validate 2k video
objects in the Ref-SAV datasets to benchmark referring video object
segmentation in complex environments. Experiments show that Sa2VA achieves
state-of-the-art across multiple tasks, particularly in referring video object
segmentation, highlighting its potential for complex real-world applications.
|
2501.04002 | Extraction Of Cumulative Blobs From Dynamic Gestures | cs.CV | Gesture recognition is a perceptual user interface based on computer vision
(CV) technology that allows a computer to interpret human motions as commands,
enabling users to communicate with a computer without using their hands and
thus making the mouse and keyboard superfluous. Its main weakness is lighting
conditions: because gesture control relies on computer vision, it depends
heavily on cameras that interpret gestures in 2D and 3D, so the extracted
information can vary with the light source, and such systems cannot work in a
dark environment. A simple night-vision camera can be used for motion capture,
as it also emits infrared light that is invisible to humans but clearly
visible to a camera without an infrared filter; this largely overcomes the
limitation of systems that fail in the dark. The video stream from the camera
is fed into a Raspberry Pi running a Python program that uses the OpenCV
module to detect, isolate, and track the path of a dynamic gesture; a machine
learning algorithm then recognizes the drawn pattern and controls the
Raspberry Pi's GPIOs accordingly to perform activities.
|
2501.04003 | Are VLMs Ready for Autonomous Driving? An Empirical Study from the
Reliability, Data, and Metric Perspectives | cs.CV cs.RO | Recent advancements in Vision-Language Models (VLMs) have sparked interest in
their use for autonomous driving, particularly in generating interpretable
driving decisions through natural language. However, the assumption that VLMs
inherently provide visually grounded, reliable, and interpretable explanations
for driving remains largely unexamined. To address this gap, we introduce
DriveBench, a benchmark dataset designed to evaluate VLM reliability across 17
settings (clean, corrupted, and text-only inputs), encompassing 19,200 frames,
20,498 question-answer pairs, three question types, four mainstream driving
tasks, and a total of 12 popular VLMs. Our findings reveal that VLMs often
generate plausible responses derived from general knowledge or textual cues
rather than true visual grounding, especially under degraded or missing visual
inputs. This behavior, concealed by dataset imbalances and insufficient
evaluation metrics, poses significant risks in safety-critical scenarios like
autonomous driving. We further observe that VLMs struggle with multi-modal
reasoning and display heightened sensitivity to input corruptions, leading to
inconsistencies in performance. To address these challenges, we propose refined
evaluation metrics that prioritize robust visual grounding and multi-modal
understanding. Additionally, we highlight the potential of leveraging VLMs'
awareness of corruptions to enhance their reliability, offering a roadmap for
developing more trustworthy and interpretable decision-making systems in
real-world autonomous driving contexts. The benchmark toolkit is publicly
accessible.
|
2501.04004 | LiMoE: Mixture of LiDAR Representation Learners from Automotive Scenes | cs.CV cs.LG cs.RO | LiDAR data pretraining offers a promising approach to leveraging large-scale,
readily available datasets for enhanced data utilization. However, existing
methods predominantly focus on sparse voxel representation, overlooking the
complementary attributes provided by other LiDAR representations. In this work,
we propose LiMoE, a framework that integrates the Mixture of Experts (MoE)
paradigm into LiDAR data representation learning to synergistically combine
multiple representations, such as range images, sparse voxels, and raw points.
Our approach consists of three stages: i) Image-to-LiDAR Pretraining, which
transfers prior knowledge from images to point clouds across different
representations; ii) Contrastive Mixture Learning (CML), which uses MoE to
adaptively activate relevant attributes from each representation and distills
these mixed features into a unified 3D network; iii) Semantic Mixture
Supervision (SMS), which combines semantic logits from multiple representations
to boost downstream segmentation performance. Extensive experiments across 11
large-scale LiDAR datasets demonstrate our effectiveness and superiority. The
code and model checkpoints have been made publicly accessible.
|
2501.04005 | LargeAD: Large-Scale Cross-Sensor Data Pretraining for Autonomous
Driving | cs.CV cs.LG cs.RO | Recent advancements in vision foundation models (VFMs) have revolutionized
visual perception in 2D, yet their potential for 3D scene understanding,
particularly in autonomous driving applications, remains underexplored. In this
paper, we introduce LargeAD, a versatile and scalable framework designed for
large-scale 3D pretraining across diverse real-world driving datasets. Our
framework leverages VFMs to extract semantically rich superpixels from 2D
images, which are aligned with LiDAR point clouds to generate high-quality
contrastive samples. This alignment facilitates cross-modal representation
learning, enhancing the semantic consistency between 2D and 3D data. We
introduce several key innovations: i) VFM-driven superpixel generation for
detailed semantic representation, ii) a VFM-assisted contrastive learning
strategy to align multimodal features, iii) superpoint temporal consistency to
maintain stable representations across time, and iv) multi-source data
pretraining to generalize across various LiDAR configurations. Our approach
delivers significant performance improvements over state-of-the-art methods in
both linear probing and fine-tuning tasks for both LiDAR-based segmentation and
object detection. Extensive experiments on eleven large-scale multi-modal
datasets highlight our superior performance, demonstrating the adaptability,
efficiency, and robustness of our approach in real-world autonomous driving scenarios.
|
2501.04006 | Advancing Similarity Search with GenAI: A Retrieval Augmented Generation
Approach | cs.IR | This article introduces an innovative Retrieval Augmented Generation approach
to similarity search. The proposed method uses a generative model to capture
nuanced semantic information and retrieve similarity scores based on advanced
context understanding. The study focuses on the BIOSSES dataset containing 100
pairs of sentences extracted from the biomedical domain, and introduces
similarity search correlation results that outperform those previously attained
on this dataset. Through an in-depth analysis of the model sensitivity, the
research identifies optimal conditions leading to the highest similarity search
accuracy: the results reveal high Pearson correlation scores, reaching
0.905 at a temperature of 0.5 with a sample size of 20 examples
provided in the prompt. The findings underscore the potential of generative
models for semantic information retrieval and point to a promising research
direction for similarity search.
|
2501.04007 | Untapped Potential in Self-Optimization of Hopfield Networks: The
Creativity of Unsupervised Learning | cs.NE nlin.AO | The Self-Optimization (SO) model can be considered as the third operational
mode of the classical Hopfield Network (HN), leveraging the power of
associative memory to enhance optimization performance. Moreover, it has been
argued to express characteristics of minimal agency which, together with its
biological plausibility, renders it useful for the study of artificial life. In
this article, we draw attention to another facet of the SO model: its capacity
for creativity. Drawing on the creativity studies literature, we argue that the
model satisfies the necessary and sufficient conditions of a creative process.
Moreover, we explore the dependency of different creative outcomes based on
learning parameters, specifically the learning and reset rates. We conclude
that the SO model allows for simulating and understanding the emergence of
creative behaviors in artificial systems that learn.
|
2501.04008 | A Generative AI-driven Metadata Modelling Approach | cs.DL cs.AI cs.IR | For decades, the modelling of metadata has been core to the functioning of
any academic library. Its importance has only grown with the increasing
pervasiveness of Generative Artificial Intelligence (AI)-driven information
activities and services which constitute a library's outreach. However, with
the rising importance of metadata, several outstanding problems have arisen in
the process of designing library metadata models, impacting their reusability,
crosswalks, and interoperability with other metadata models. This
paper posits that the above problems stem from an underlying thesis that there
should only be a few core metadata models which would be necessary and
sufficient for any information service using them, irrespective of the
heterogeneity of intra-domain or inter-domain settings. To that end, this paper
advances a contrary view of the above thesis and substantiates its argument in
three key steps. First, it introduces a novel way of thinking about a library
metadata model as an ontology-driven composition of five functionally
interlinked representation levels from perception to its intensional definition
via properties. Second, it introduces the representational manifoldness
implicit in each of the five levels which cumulatively contributes to a
conceptually entangled library metadata model. Finally, and most importantly,
it proposes a Generative AI-driven Human-Large Language Model (LLM)
collaboration based metadata modelling approach to disentangle the entanglement
inherent in each representation level leading to the generation of a
conceptually disentangled metadata model. Throughout the paper, the arguments
are exemplified by motivating scenarios and examples from representative
libraries handling cancer information.
|
2501.04009 | Multi-SpaCE: Multi-Objective Subsequence-based Sparse Counterfactual
Explanations for Multivariate Time Series Classification | cs.NE cs.LG stat.ML | Deep Learning systems excel in complex tasks but often lack transparency,
limiting their use in critical applications. Counterfactual explanations, a
core tool within eXplainable Artificial Intelligence (XAI), offer insights into
model decisions by identifying minimal changes to an input to alter its
predicted outcome. However, existing methods for time series data are limited
by univariate assumptions, rigid constraints on modifications, or lack of
validity guarantees. This paper introduces Multi-SpaCE, a multi-objective
counterfactual explanation method for multivariate time series. Using
non-dominated ranking genetic algorithm II (NSGA-II), Multi-SpaCE balances
proximity, sparsity, plausibility, and contiguity. Unlike most methods, it
ensures perfect validity, supports multivariate data and provides a Pareto
front of solutions, enabling flexibility to different end-user needs.
Comprehensive experiments on diverse datasets demonstrate the ability of
Multi-SpaCE to consistently achieve perfect validity and deliver superior
performance compared to existing methods.
|
2501.04012 | FlexCache: Flexible Approximate Cache System for Video Diffusion | cs.MM cs.LG | Text-to-Video applications receive increasing attention from the public.
Among these, diffusion models have emerged as the most prominent approach,
offering impressive quality in visual content generation. However, it still
suffers from substantial computational complexity, often requiring several
minutes to generate a single video. While prior research has addressed the
computational overhead in text-to-image diffusion models, the techniques
developed are not directly suitable for video diffusion models due to the
significantly larger cache requirements and enhanced computational demands
associated with video generation.
We present FlexCache, a flexible approximate cache system that addresses the
challenges with two main designs. First, we compress the caches before saving
them to storage; our compression strategy reduces storage consumption by 6.7
times on average. Second, we find that the approximate cache system achieves a
higher hit rate and greater computation savings by decoupling the object and
background. We further design a tailored cache replacement policy to better
support these two techniques. Through our evaluation, FlexCache reaches
1.26 times higher throughput and 25% lower cost compared to the
state-of-the-art diffusion approximate cache system.
|
2501.04014 | AICat: An AI Cataloguing Approach to Support the EU AI Act | cs.DL cs.AI cs.CY | The European Union's Artificial Intelligence Act (AI Act) requires providers
and deployers of high-risk AI applications to register their systems into the
EU database, wherein the information should be represented and maintained in an
easily-navigable and machine-readable manner. Given the uptake of open data and
Semantic Web-based approaches for other EU repositories, in particular the use
of the Data Catalogue vocabulary Application Profile (DCAT-AP), a similar
solution for managing the EU database of high-risk AI systems is needed. This
paper introduces AICat - an extension of DCAT for representing catalogues of AI
systems that provides consistency, machine-readability, searchability, and
interoperability in managing open metadata regarding AI systems. This open
approach to cataloguing ensures transparency, traceability, and accountability
in AI application markets beyond the immediate needs of high-risk AI compliance
in the EU. AICat is available online at https://w3id.org/aicat under the
CC-BY-4.0 license.
|
2501.04018 | MERCURY: A fast and versatile multi-resolution based global emulator of
compound climate hazards | physics.ao-ph cs.LG stat.AP | High-impact climate damages are often driven by compounding climate
conditions. For example, elevated heat stress conditions can arise from a
combination of high humidity and temperature. To explore future changes in
compounding hazards under a range of climate scenarios and with large
ensembles, climate emulators can provide light-weight, data-driven complements
to Earth System Models. Yet, only a few existing emulators can jointly emulate
multiple climate variables. In this study, we present the Multi-resolution
EmulatoR for CompoUnd climate Risk analYsis: MERCURY. MERCURY extends
multi-resolution analysis to a spatio-temporal framework for versatile
emulation of multiple variables. MERCURY leverages data-driven, image
compression techniques to generate emulations in a memory-efficient manner.
MERCURY consists of a regional component that represents the monthly, regional
response of a given variable to yearly Global Mean Temperature (GMT) using a
probabilistic regression based additive model, resolving regional
cross-correlations. It then adapts a reverse lifting-scheme operator to jointly
spatially disaggregate regional, monthly values to grid-cell level. We
demonstrate MERCURY's capabilities on representing the humid-heat metric, Wet
Bulb Globe Temperature, as derived from temperature and relative humidity
emulations. The emulated WBGT spatial correlations correspond well to those of
ESMs and the 95% and 97.5% quantiles of WBGT distributions are well captured,
with an average deviation of 5%. MERCURY's setup allows for region-specific
emulations from which one can efficiently "zoom" into the grid-cell level
across multiple variables by means of the reverse lifting-scheme operator. This
circumvents the traditional problem of having to emulate complete,
global-fields of climate data and resulting storage requirements.
|
2501.04023 | Approximation Rates in Fr\'echet Metrics: Barron Spaces, Paley-Wiener
Spaces, and Fourier Multipliers | math.NA cs.IT cs.LG cs.NA math.IT stat.ML | Operator learning is a recent development in the simulation of Partial
Differential Equations (PDEs) by means of neural networks. The idea behind this
approach is to learn the behavior of an operator, such that the resulting
neural network is an (approximate) mapping in infinite-dimensional spaces that
is capable of (approximately) simulating the solution operator governed by the
PDE. In our work, we study some general approximation capabilities for linear
differential operators by approximating the corresponding symbol in the Fourier
domain. Analogous to the structure of the class of H\"ormander-Symbols, we
consider the approximation with respect to a topology that is induced by a
sequence of semi-norms. In that sense, we measure the approximation error in
terms of a Fr\'echet metric, and our main result identifies sufficient
conditions for achieving a predefined approximation error. Secondly, we then
focus on a natural extension of our main theorem, in which we manage to reduce
the assumptions on the sequence of semi-norms. Based on existing approximation
results for the exponential spectral Barron space, we then present a concrete
example of symbols that can be approximated well.
|
2501.04038 | Listening and Seeing Again: Generative Error Correction for Audio-Visual
Speech Recognition | cs.MM cs.AI cs.SD eess.AS | Unlike traditional Automatic Speech Recognition (ASR), Audio-Visual Speech
Recognition (AVSR) takes audio and visual signals simultaneously to infer the
transcription. Recent studies have shown that Large Language Models (LLMs) can
be effectively used for Generative Error Correction (GER) in ASR by predicting
the best transcription from ASR-generated N-best hypotheses. However, these
LLMs lack the ability to understand audio and visual signals simultaneously, making the
GER approach challenging to apply in AVSR. In this work, we propose a novel GER
paradigm for AVSR, termed AVGER, that follows the concept of ``listening and
seeing again''. Specifically, we first use the powerful AVSR system to read the
audio and visual signals to get the N-Best hypotheses, and then use the
Q-former-based Multimodal Synchronous Encoder to read the audio and visual
information again and convert them into an audio and video compression
representation respectively that can be understood by LLM. Afterward, the
audio-visual compression representation and the N-Best hypothesis together
constitute a Cross-modal Prompt to guide the LLM in producing the best
transcription. In addition, we propose a Multi-Level Consistency Constraint
training criterion, spanning the logits, utterance, and representation levels,
to improve the correction accuracy while enhancing the
interpretability of audio and visual compression representations. The
experimental results on the LRS3 dataset show that our method outperforms
current mainstream AVSR systems. The proposed AVGER can reduce the Word Error
Rate (WER) by 24% compared to them. Code and models can be found at:
https://github.com/CircleRedRain/AVGER.
|
2501.04040 | A Survey on Large Language Models with some Insights on their
Capabilities and Limitations | cs.CL cs.AI cs.LG cs.NE | The rapid advancement of artificial intelligence, particularly with the
development of Large Language Models (LLMs) built on the transformer
architecture, has redefined the capabilities of natural language processing.
These models now exhibit remarkable performance across various language-related
tasks, such as text generation, question answering, translation, and
summarization, often rivaling human-like comprehension. More intriguingly, LLMs
have demonstrated emergent abilities extending beyond their core functions,
showing proficiency in tasks like commonsense reasoning, code generation, and
arithmetic. This survey paper explores the foundational components, scaling
mechanisms, and architectural strategies that drive these capabilities.
Emphasizing models like GPT and LLaMA, we analyze the impact of exponential
data and computational growth on LLM performance, while also addressing the
trade-offs associated with scaling. We also examine LLM applications across
sectors, such as healthcare, finance, education, and law, highlighting their
adaptability and potential to solve domain-specific challenges. Central to this
work are the questions of how LLMs generalize across diverse tasks, exhibit
planning, and reasoning abilities, and whether these emergent abilities can be
systematically elicited or enhanced. In particular, we provide some insights
into the CoT (Chain of Thought) and PoT (Plan of Thought) abilities within
LLMs, focusing on how pre-training data influences their emergence.
Additionally, we investigate LLM-modulo frameworks that integrate external
systems, allowing LLMs to handle complex, dynamic tasks. By analyzing these
factors, this paper aims to foster the ongoing discussion on the capabilities
and limits of LLMs, promoting their responsible development and application in
novel and increasingly complex environments.
|
2501.04046 | Traits of a Leader: User Influence Level Prediction through
Sociolinguistic Modeling | physics.soc-ph cs.AI cs.CY | Recognition of a user's influence level has attracted much attention as human
interactions move online. Influential users have the ability to sway others'
opinions to achieve some goals. As a result, predicting users' level of
influence can help to understand social networks, forecast trends, prevent
misinformation, etc. However, predicting user influence is a challenging
problem because the concept of influence is specific to a situation or a
domain, and user communications are limited to text. In this work, we define
user influence level as a function of community endorsement and develop a model
that significantly outperforms the baseline by leveraging demographic and
personality data. This approach consistently improves RankDCG scores across
eight different domains.
|
2501.04052 | The Power of Negative Zero: Datatype Customization for Quantized Large
Language Models | cs.LG cs.CL | Large language models (LLMs) have demonstrated remarkable performance across
various machine learning tasks, quickly becoming one of the most prevalent AI
workloads. Yet the substantial memory requirement of LLMs significantly hinders
their deployment for end users. Post-training quantization (PTQ) serves as one
of the most hardware-efficient methods to mitigate the memory and computational
demands of LLMs. Although the traditional integer (INT) datatype has received
widespread adoption in PTQ methods, floating-point (FP) quantization has
emerged as a viable alternative thanks to its effectiveness in fitting LLM
numerical distributions. However, the FP datatype in sign-magnitude binary
representation contains both positive and negative zero, which constrains its
representation capability, particularly under low precision (3 and 4 bits). In
this paper, we extend the basic FP datatype to perform Redundant Zero Remapping
(RaZeR), which remaps the negative zero FP encoding to a set of pre-defined
special values to maximally utilize FP quantization encodings and to better fit
LLM numerical distributions. Through careful selection of special values, RaZeR
outperforms conventional asymmetric INT quantization while achieving high
computational efficiency. We demonstrate that RaZeR can be seamlessly
integrated with quantization algorithms for both weights and KV-cache,
including advanced methods with clipping and transformations, and consistently
achieve better model accuracy. Additionally, we implement a fast GEMV kernel
with fused dequantization that efficiently converts the 4-bit RaZeR value to
FP16 through novel bit-level manipulation. On modern GPUs, our evaluation shows
that RaZeR improves the GEMV speed by up to 7.56$\times$ compared to the FP16
implementation, while achieving up to 2.72$\times$ speedup in the LLM decoding
throughput.
|
2501.04060 | SFADNet: Spatio-temporal Fused Graph based on Attention Decoupling
Network for Traffic Prediction | cs.LG | In recent years, traffic flow prediction has played a crucial role in the
management of intelligent transportation systems. However, traditional
prediction methods are often limited by static spatial modeling, making it
difficult to accurately capture the dynamic and complex relationships between
time and space, thereby affecting prediction accuracy. This paper proposes an
innovative traffic flow prediction network, SFADNet, which categorizes traffic
flow into multiple traffic patterns based on temporal and spatial feature
matrices. For each pattern, we construct an independent adaptive
spatio-temporal fusion graph based on a cross-attention mechanism, employing
residual graph convolution modules and time series modules to better capture
dynamic spatio-temporal relationships under different fine-grained traffic
patterns. Extensive experimental results demonstrate that SFADNet outperforms
current state-of-the-art baselines across four large-scale datasets.
|
2501.04061 | Causal Machine Learning Methods for Estimating Personalised Treatment
Effects -- Insights on validity from two large trials | cs.LG stat.ML | Causal machine learning (ML) methods hold great promise for advancing
precision medicine by estimating personalized treatment effects. However, their
reliability remains largely unvalidated in empirical settings. In this study,
we assessed the internal and external validity of 17 mainstream causal
heterogeneity ML methods -- including metalearners, tree-based methods, and
deep learning methods -- using data from two large randomized controlled
trials: the International Stroke Trial (N=19,435) and the Chinese Acute Stroke
Trial (N=21,106). Our findings reveal that none of the ML methods reliably
validated their performance, either internally or externally, showing significant
discrepancies between training and test data on the proposed evaluation
metrics. The individualized treatment effects estimated from training data
failed to generalize to the test data, even in the absence of distribution
shifts. These results raise concerns about the current applicability of causal
ML models in precision medicine, and highlight the need for more robust
validation techniques to ensure generalizability.
|
2501.04062 | ChronoLLM: A Framework for Customizing Large Language Model for Digital
Twins generalization based on PyChrono | cs.SE cs.AI cs.CE | Recently, the integration of advanced simulation technologies with artificial
intelligence (AI) is revolutionizing science and engineering research.
ChronoLlama introduces a novel framework that customizes the open-source LLMs,
specifically for code generation, paired with PyChrono for multi-physics
simulations. This integration aims to automate and improve the creation of
simulation scripts, thus enhancing model accuracy and efficiency. This
combination harnesses the speed of AI-driven code generation with the
reliability of physics-based simulations, providing a powerful tool for
researchers and engineers. Empirical results indicate substantial enhancements
in simulation setup speed, accuracy of the generated codes, and overall
computational efficiency. ChronoLlama not only expedites the development and
testing of multibody systems but also spearheads a scalable, AI-enhanced
approach to managing intricate mechanical simulations. This pioneering
integration of cutting-edge AI with traditional simulation platforms represents
a significant leap forward in automating and optimizing design processes in
engineering applications.
|
2501.04063 | Fuzzy Information Entropy and Region Biased Matrix Factorization for Web
Service QoS Prediction | cs.LG | Nowadays, there are many similar services available on the internet, making
Quality of Service (QoS) a key concern for users. Since collecting QoS values
for all services through user invocations is impractical, predicting QoS values
is a more feasible approach. Matrix factorization is considered an effective
prediction method. However, most existing matrix factorization algorithms focus
on capturing global similarities between users and services, overlooking the
local similarities between users and their similar neighbors, as well as the
non-interactive effects between users and services. This paper proposes a
matrix factorization approach based on user information entropy and region
bias, which utilizes a similarity measurement method based on fuzzy information
entropy to identify similar neighbors of users. Simultaneously, it integrates
the region bias between each user and service linearly into matrix
factorization to capture the non-interactive features between users and
services. This method demonstrates improved predictive performance in more
realistic and complex network environments. Additionally, numerous experiments
are conducted on real-world QoS datasets. The experimental results show that
the proposed method outperforms some of the state-of-the-art methods in the
field at matrix densities ranging from 5% to 20%.
|
2501.04066 | FedKD-hybrid: Federated Hybrid Knowledge Distillation for Lithography
Hotspot Detection | cs.LG cs.AR | Federated Learning (FL) provides novel solutions for machine learning
(ML)-based lithography hotspot detection (LHD) under distributed
privacy-preserving settings. Currently, two research pipelines have been
investigated to aggregate local models and achieve global consensus, including
parameter/nonparameter based (also known as knowledge distillation, namely KD).
While these two kinds of methods show effectiveness in specific scenarios, we
note they have not fully utilized and transferred the information learned,
leaving the potential of FL-based LHD unexplored. Thus, we propose
FedKD-hybrid in this study to close this research gap. Specifically,
FedKD-hybrid clients agree on several identical layers across all participants
and a public dataset for achieving global consensus. During training, the
trained local model will be evaluated on the public dataset, and the generated
logits will be uploaded along with the identical layer parameters. The
aggregated information is consequently used to update local models via the
public dataset as a medium. We compare our proposed FedKD-hybrid with several
state-of-the-art (SOTA) FL methods under ICCAD-2012 and FAB (real-world
collected) datasets with different settings; the experimental results
demonstrate the superior performance of the FedKD-hybrid algorithm. Our code is
available at https://github.com/itsnotacie/NN-FedKD-hybrid
|
2501.04067 | Explainable Time Series Prediction of Tyre Energy in Formula One Race
Strategy | cs.LG cs.AI | Formula One (F1) race strategy takes place in a high-pressure and fast-paced
environment where split-second decisions can drastically affect race results.
Two of the core decisions of race strategy are when to make pit stops (i.e.
replace the cars' tyres) and which tyre compounds (hard, medium or soft, in
normal conditions) to select. The optimal pit stop decisions can be determined
by estimating the tyre degradation of these compounds, which in turn can be
computed from the energy applied to each tyre, i.e. the tyre energy. In this
work, we trained deep learning models, using the Mercedes-AMG PETRONAS F1
team's historic race data consisting of telemetry, to forecast tyre energies
during races. Additionally, we fitted XGBoost, a decision tree-based machine
learning algorithm, to the same dataset and compared the results, with both
giving impressive performance. Furthermore, we incorporated two different
explainable AI methods, namely feature importance and counterfactual
explanations, to gain insights into the reasoning behind the forecasts. Our
contributions thus result in an explainable, automated method which could
assist F1 teams in optimising their race strategy.
|
2501.04068 | Explainable Reinforcement Learning for Formula One Race Strategy | cs.LG cs.AI | In Formula One, teams compete to develop their cars and achieve the highest
possible finishing position in each race. During a race, however, teams are
unable to alter the car, so they must improve their cars' finishing positions
via race strategy, i.e. optimising their selection of which tyre compounds to
put on the car and when to do so. In this work, we introduce a reinforcement
learning model, RSRL (Race Strategy Reinforcement Learning), to control race
strategies in simulations, offering a faster alternative to the industry
standard of hard-coded and Monte Carlo-based race strategies. Controlling cars
with a pace equating to an expected finishing position of P5.5 (where P1
represents first place and P20 is last place), RSRL achieves an average
finishing position of P5.33 on our test race, the 2023 Bahrain Grand Prix,
outperforming the best baseline of P5.63. We then demonstrate, in a
generalisability study, how performance for one track or multiple tracks can be
prioritised via training. Further, we supplement model predictions with feature
importance, decision tree-based surrogate models, and decision tree
counterfactuals towards improving user trust in the model. Finally, we provide
illustrations which exemplify our approach in real-world situations, drawing
parallels between simulations and reality.
|
2501.04070 | More is not always better? Enhancing Many-Shot In-Context Learning with
Differentiated and Reweighting Objectives | cs.LG cs.AI cs.CL | Large language models (LLMs) excel at few-shot in-context learning (ICL)
without requiring parameter updates. However, as the number of ICL
demonstrations increases from a few to many, performance tends to plateau and
eventually decline. We identify two primary causes for this trend: the
suboptimal negative log-likelihood (NLL) optimization objective and the
incremental data noise. To address these issues, we introduce DrICL, a novel
optimization method that enhances model performance through Differentiated
Learning and advantage-based Reweighting objectives. Globally, DrICL utilizes
differentiated learning to optimize the NLL objective, ensuring that many-shot
performance surpasses zero-shot levels. Locally, it dynamically adjusts the
weighting of many-shot demonstrations by leveraging cumulative advantages
inspired by reinforcement learning, thereby improving generalization. This
approach allows the model to handle varying numbers of shots effectively,
mitigating the impact of noisy data. Recognizing the lack of multi-task
datasets with diverse many-shot distributions, we develop the Many-Shot ICL
Benchmark (ICL-50)-a large-scale benchmark of 50 tasks that cover shot numbers
from 1 to 350 within sequences of up to 8,000 tokens-for fine-tuning purposes.
ICL-50 facilitates the evaluation of many-shot ICL strategies across seven
prominent NLP tasks and 50 distinct datasets. Experimental results demonstrate
that LLMs enhanced with DrICL achieve significant improvements in many-shot
setups across various tasks, including both in-domain and out-of-domain
scenarios. We release the code and benchmark dataset hoping to facilitate
further research in many-shot ICL.
|
2501.04072 | Multi-armed Bandit and Backbone boost Lin-Kernighan-Helsgaun Algorithm
for the Traveling Salesman Problems | cs.DS cs.AI | The Lin-Kernighan-Helsgaun (LKH) heuristic is a classic local search
algorithm for the Traveling Salesman Problem (TSP). LKH introduces an
$\alpha$-value to replace the traditional distance metric for evaluating the
edge quality, which leads to a significant improvement. However, we observe
that the $\alpha$-value does not make full use of the historical information
during the search, and single guiding information often makes LKH hard to
escape from some local optima. To address the above issues, we propose a novel
way to extract backbone information during the TSP local search process, which
is dynamic and can be updated once a local optimal solution is found. We
further propose to combine backbone information, $\alpha$-value, and distance
to evaluate the edge quality so as to guide the search. Moreover, we abstract
their different combinations to arms in a multi-armed bandit (MAB) and use an
MAB model to help the algorithm select an appropriate evaluation metric
dynamically. Both the backbone information and MAB can provide diverse guiding
information and learn from the search history to suggest the best metric. We
apply our methods to LKH and LKH-3, an extended version of LKH that
can be used to solve about 40 variant problems of TSP and Vehicle Routing
Problem (VRP). Extensive experiments show the excellent performance and
generalization capability of our proposed method, significantly improving LKH
for TSP and LKH-3 for two representative TSP and VRP variants, the Colored TSP
(CTSP) and Capacitated VRP with Time Windows (CVRPTW).
|
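The bandit idea in the abstract above, abstracting different combinations of edge-quality signals into arms of a multi-armed bandit, can be sketched with a simple epsilon-greedy policy. This is an illustrative reconstruction, not the paper's implementation: the scoring weights, arm names, and reward definition are all hypothetical.

```python
import random

def make_arms():
    # Each arm scores an edge from (alpha_value, backbone_freq, dist);
    # lower scores mean more promising edges. Weights are illustrative.
    return {
        "alpha_only": lambda a, b, d: a,
        "alpha_backbone": lambda a, b, d: a - 0.5 * b,
        "alpha_backbone_dist": lambda a, b, d: a - 0.5 * b + 0.1 * d,
    }

class EpsilonGreedyBandit:
    """Pick which edge-evaluation metric guides the next search round."""

    def __init__(self, arm_names, epsilon=0.1, seed=0):
        self.arm_names = list(arm_names)
        self.epsilon = epsilon
        self.counts = {name: 0 for name in self.arm_names}
        self.values = {name: 0.0 for name in self.arm_names}
        self.rng = random.Random(seed)

    def select(self):
        # Explore with probability epsilon, otherwise exploit the arm
        # with the best observed average reward.
        if self.rng.random() < self.epsilon:
            return self.rng.choice(self.arm_names)
        return max(self.arm_names, key=lambda n: self.values[n])

    def update(self, name, reward):
        # Incremental mean of rewards (e.g. tour-length improvement
        # obtained while that metric was guiding the search).
        self.counts[name] += 1
        self.values[name] += (reward - self.values[name]) / self.counts[name]
```

In use, each local-search round would call `select()` to choose a metric, run the search with it, and feed the resulting improvement back via `update()`.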
2501.04073 | Deep Learning for Ophthalmology: The State-of-the-Art and Future Trends | eess.IV cs.CV | The emergence of artificial intelligence (AI), particularly deep learning
(DL), has marked a new era in the realm of ophthalmology, offering
transformative potential for the diagnosis and treatment of posterior segment
eye diseases. This review explores the cutting-edge applications of DL across a
range of ocular conditions, including diabetic retinopathy, glaucoma,
age-related macular degeneration, and retinal vessel segmentation. We provide a
comprehensive overview of foundational ML techniques and advanced DL
architectures, such as CNNs, attention mechanisms, and transformer-based
models, highlighting the evolving role of AI in enhancing diagnostic accuracy,
optimizing treatment strategies, and improving overall patient care.
Additionally, we present key challenges in integrating AI solutions into
clinical practice, including ensuring data diversity, improving algorithm
transparency, and effectively leveraging multimodal data. This review
emphasizes AI's potential to improve disease diagnosis and enhance patient care
while stressing the importance of collaborative efforts to overcome these
barriers and fully harness AI's impact in advancing eye care.
|
2501.04074 | NeRFs are Mirror Detectors: Using Structural Similarity for Multi-View
Mirror Scene Reconstruction with 3D Surface Primitives | cs.CV | While neural radiance fields (NeRF) led to a breakthrough in photorealistic
novel view synthesis, handling mirroring surfaces still poses a particular
challenge as they introduce severe inconsistencies in the scene representation.
Previous attempts either focus on reconstructing single reflective objects or
rely on strong supervision guidance in terms of additional user-provided
annotations of visible image regions of the mirrors, thereby limiting the
practical usability. In contrast, in this paper, we present NeRF-MD, a method
which shows that NeRFs can act as mirror detectors and which is
capable of reconstructing neural radiance fields of scenes containing mirroring
surfaces without the need for prior annotations. To this end, we first compute
an initial estimate of the scene geometry by training a standard NeRF using a
depth reprojection loss. Our key insight lies in the fact that parts of the
scene corresponding to a mirroring surface will still exhibit a significant
photometric inconsistency, whereas the remaining parts are already
reconstructed in a plausible manner. This allows us to detect mirror surfaces
by fitting geometric primitives to such inconsistent regions in this initial
stage of the training. Using this information, we then jointly optimize the
radiance field and mirror geometry in a second training stage to refine their
quality. We demonstrate the capability of our method to allow the faithful
detection of mirrors in the scene as well as the reconstruction of a single
consistent scene representation, and demonstrate its potential in comparison to
baseline and mirror-aware approaches.
|
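The detection step described in the abstract above, flagging regions that stay photometrically inconsistent after the initial geometry fit and fitting a geometric primitive to them, can be sketched as follows. This is a hedged toy version: the threshold, the plane-only primitive, and the function names are assumptions, not the paper's actual pipeline.

```python
import numpy as np

def flag_inconsistent(residuals, threshold=0.2):
    """Boolean mask of pixels whose cross-view photometric residual
    exceeds the threshold; candidate mirror regions."""
    return np.asarray(residuals) > threshold

def fit_plane(points):
    """Least-squares plane through 3D points via SVD: returns (normal, d)
    with normal . x + d = 0."""
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    # The direction of least variance of the centered points is the
    # plane normal (last right singular vector).
    _, _, vt = np.linalg.svd(pts - centroid)
    normal = vt[-1]
    d = -normal @ centroid
    return normal, d
```

In a full system the flagged pixels would be back-projected to 3D using the initial depth estimate before `fit_plane` is applied, and the fitted primitive would then enter the second, joint optimization stage.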
2501.04099 | Neighbor displacement-based enhanced synthetic oversampling for
multiclass imbalanced data | cs.LG | Imbalanced multiclass datasets pose challenges for machine learning
algorithms. These datasets often contain minority classes that are important
for accurate prediction. Existing methods still suffer from sparse data and may
not accurately represent the original data patterns, leading to noise and poor
model performance. A hybrid method called Neighbor Displacement-based Enhanced
Synthetic Oversampling (NDESO) is proposed in this paper. This approach uses a
displacement strategy for noisy data points, computing the average distance to
their neighbors and moving them closer to their centroids. Random oversampling
is then performed to achieve dataset balance. Extensive evaluations compare 14
alternatives on nine classifiers across synthetic and 20 real-world datasets
with varying imbalance ratios. The results show that our method outperforms its
competitors regarding average G-mean score and achieves the lowest statistical
mean rank. This highlights its superiority and suitability for addressing data
imbalance in practical applications.
|
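The two-step procedure described above, displacing noisy points toward their class centroid based on average neighbor distance and then randomly oversampling to balance, can be sketched as below. All thresholds, the halfway-shift rule, and the function names are illustrative assumptions; the paper's exact neighbor criterion may differ.

```python
import numpy as np

def displace_noisy_points(X, k=3, threshold=1.0):
    """Move points whose mean distance to their k nearest neighbors
    exceeds `threshold` halfway toward the centroid of X."""
    X = np.asarray(X, dtype=float)
    centroid = X.mean(axis=0)
    out = X.copy()
    for i, x in enumerate(X):
        # Distances to the k nearest neighbors, excluding the point itself.
        dists = np.sort(np.linalg.norm(X - x, axis=1))[1:k + 1]
        if dists.mean() > threshold:
            out[i] = 0.5 * (x + centroid)
    return out

def random_oversample(X, y):
    """Duplicate minority-class rows at random until every class matches
    the majority class size."""
    rng = np.random.default_rng(0)
    X, y = np.asarray(X, dtype=float), np.asarray(y)
    classes, counts = np.unique(y, return_counts=True)
    target = counts.max()
    Xs, ys = [X], [y]
    for cls, cnt in zip(classes, counts):
        if cnt < target:
            idx = rng.choice(np.flatnonzero(y == cls), target - cnt)
            Xs.append(X[idx])
            ys.append(y[idx])
    return np.concatenate(Xs), np.concatenate(ys)
```

In the hybrid method, displacement would be applied per class before oversampling, so that the duplicated points are the already-denoised ones.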
2501.04102 | Enhancing Distribution and Label Consistency for Graph
Out-of-Distribution Generalization | cs.LG cs.AI | To deal with distribution shifts in graph data, various graph
out-of-distribution (OOD) generalization techniques have been recently
proposed. These methods often employ a two-step strategy that first creates
augmented environments and subsequently identifies invariant subgraphs to
improve generalizability. Nevertheless, this approach could be suboptimal from
the perspective of consistency. First, the process of augmenting environments
by altering the graphs while preserving labels may lead to graphs that are not
realistic or meaningfully related to the original distribution, thus lacking
distribution consistency. Second, the extracted subgraphs are obtained from
directly modifying graphs, and may not necessarily maintain a consistent
predictive relationship with their labels, thereby impacting label consistency.
In response to these challenges, we introduce an innovative approach that aims
to enhance these two types of consistency for graph OOD generalization. We
propose a modifier to obtain both augmented and invariant graphs in a unified
manner. With the augmented graphs, we enrich the training data without
compromising the integrity of label-graph relationships. The label consistency
enhancement in our framework further preserves the supervision information in
the invariant graph. We conduct extensive experiments on real-world datasets to
demonstrate the superiority of our framework over other state-of-the-art
baselines.
|
2501.04104 | Security by Design Issues in Autonomous Vehicles | eess.SY cs.CR cs.SY | As autonomous vehicle (AV) technology advances towards maturity, it becomes
imperative to examine the security vulnerabilities within these cyber-physical
systems. While conventional cyber-security concerns are often at the forefront
of discussions, it is essential to delve deeper into the various layers of
vulnerability that are often overlooked within mainstream frameworks. Our goal
is to spotlight imminent challenges faced by AV operators and explore emerging
technologies for comprehensive solutions. This research outlines the diverse
security layers, spanning physical, cyber, coding, and communication aspects,
in the context of AVs. Furthermore, we provide insights into potential
solutions for each potential attack vector, ensuring that autonomous vehicles
remain secure and resilient in an evolving threat landscape.
|
2501.04105 | DeepVIVONet: Using deep neural operators to optimize sensor locations
with application to vortex-induced vibrations | cs.LG math.OC physics.flu-dyn | We introduce DeepVIVONet, a new framework for optimal dynamic reconstruction
and forecasting of the vortex-induced vibrations (VIV) of a marine riser, using
field data. We demonstrate the effectiveness of DeepVIVONet in accurately
reconstructing the motion of an offshore marine riser by using sparse
spatio-temporal measurements. We also show the generalization of our model in
extrapolating to other flow conditions via transfer learning, underscoring its
potential to streamline operational efficiency and enhance predictive accuracy.
The trained DeepVIVONet serves as a fast and accurate surrogate model for the
marine riser, which we use in an outer-loop optimization algorithm to obtain
the optimal locations for placing the sensors. Furthermore, we employ an
existing sensor placement method based on proper orthogonal decomposition (POD)
to compare with our data-driven approach. We find that while POD offers a
good approach for initial sensor placement, DeepVIVONet's adaptive capabilities
yield more precise and cost-effective configurations.
|
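The POD-based baseline mentioned above can be sketched in a naive form: take the leading POD modes of the snapshot matrix and place sensors where those modes carry the most energy. This is an illustrative simplification (practical implementations typically use pivoted QR on the modes), and the function name is hypothetical.

```python
import numpy as np

def pod_sensor_locations(snapshots, n_sensors):
    """snapshots: array of shape (n_locations, n_times).
    Returns n_sensors spatial row indices, ranked by mode energy."""
    # POD modes are the left singular vectors of the snapshot matrix.
    U, _, _ = np.linalg.svd(snapshots, full_matrices=False)
    # Per-location energy accumulated over the leading modes.
    energy = (U[:, :n_sensors] ** 2).sum(axis=1)
    return np.argsort(energy)[::-1][:n_sensors]
```

A surrogate like DeepVIVONet would instead score candidate locations inside an outer optimization loop, which is what lets it adapt placement to the reconstruction objective rather than to mode energy alone.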
2501.04108 | TrojanDec: Data-free Detection of Trojan Inputs in Self-supervised
Learning | cs.CR cs.AI | An image encoder pre-trained by self-supervised learning can be used as a
general-purpose feature extractor to build downstream classifiers for various
downstream tasks. However, many studies showed that an attacker can embed a
trojan into an encoder such that multiple downstream classifiers built based on
the trojaned encoder simultaneously inherit the trojan behavior. In this work,
we propose TrojanDec, the first data-free method to identify and recover a test
input embedded with a trigger. Given a (trojaned or clean) encoder and a test
input, TrojanDec first predicts whether the test input is trojaned. If not, the
test input is processed in a normal way to maintain the utility. Otherwise, the
test input will be further restored to remove the trigger. Our extensive
evaluation shows that TrojanDec can effectively identify the trojan (if any)
from a given test input and recover it under state-of-the-art trojan attacks.
We further demonstrate by experiments that our TrojanDec outperforms the
state-of-the-art defenses.
|