| id | title | categories | abstract |
|---|---|---|---|
2502.06243
|
Multi-Scale Transformer Architecture for Accurate Medical Image
Classification
|
cs.CV cs.LG
|
This study introduces an AI-driven skin lesion classification algorithm built
on an enhanced Transformer architecture, addressing the challenges of accuracy
and robustness in medical image analysis. By integrating a multi-scale feature
fusion mechanism and refining the self-attention process, the model effectively
extracts both global and local features, enhancing its ability to detect
lesions with ambiguous boundaries and intricate structures. Performance
evaluation on the ISIC 2017 dataset demonstrates that the improved Transformer
surpasses established AI models, including ResNet50, VGG19, ResNext, and Vision
Transformer, across key metrics such as accuracy, AUC, F1-Score, and Precision.
Grad-CAM visualizations further highlight the interpretability of the model,
showcasing strong alignment between the algorithm's focus areas and actual
lesion sites. This research underscores the transformative potential of
advanced AI models in medical imaging, paving the way for more accurate and
reliable diagnostic tools. Future work will explore the scalability of this
approach to broader medical imaging tasks and investigate the integration of
multimodal data to enhance AI-driven diagnostic frameworks for intelligent
healthcare.
|
2502.06244
|
PiKE: Adaptive Data Mixing for Multi-Task Learning Under Low Gradient
Conflicts
|
cs.LG
|
Modern machine learning models are trained on diverse datasets and tasks to
improve generalization. A key challenge in multitask learning is determining
the optimal data mixing and sampling strategy across different data sources.
Prior research in this multi-task learning setting has primarily focused on
mitigating gradient conflicts between tasks. However, we observe that many
real-world multi-task learning scenarios, such as multilingual training and
multi-domain learning in large foundation models, exhibit predominantly
positive task interactions with minimal or no gradient conflict. Building on this
insight, we introduce PiKE (Positive gradient interaction-based K-task weights
Estimator), an adaptive data mixing algorithm that dynamically adjusts task
contributions throughout training. PiKE optimizes task sampling to minimize
overall loss, effectively leveraging positive gradient interactions with almost
no additional computational overhead. We establish theoretical convergence
guarantees for PiKE and demonstrate its superiority over static and
non-adaptive mixing strategies. Additionally, we extend PiKE to promote fair
learning across tasks, ensuring balanced progress and preventing task
underrepresentation. Empirical evaluations on large-scale language model
pretraining show that PiKE consistently outperforms existing heuristic and
static mixing strategies, leading to faster convergence and improved downstream
task performance.
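The idea of exploiting positive gradient interactions can be sketched in a few lines. This is a toy illustration with a simplified multiplicative rule and hypothetical names, not PiKE's actual estimator: task sampling weights are nudged toward tasks whose gradients align with the combined update direction.

```python
import numpy as np

def update_mixing_weights(task_grads, weights, lr=0.5):
    # Hypothetical rule (not the paper's): when gradient interactions are
    # positive, upweight tasks whose gradients align with the combined update.
    avg = sum(w * g for w, g in zip(weights, task_grads))
    align = np.array([g @ avg for g in task_grads])
    new = weights * np.exp(lr * align / (np.abs(align).max() + 1e-12))
    return new / new.sum()  # keep a valid sampling distribution

# Two tasks with positively interacting gradients; task 0 aligns more strongly.
grads = [np.array([1.0, 0.2]), np.array([0.6, 0.8])]
w = np.array([0.5, 0.5])
for _ in range(5):
    w = update_mixing_weights(grads, w)
print(w)
```

Static mixing corresponds to never updating `w`; the adaptive rule shifts sampling toward the task whose gradient contributes most, while the normalisation keeps every task represented.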
|
2502.06247
|
Advance sharing for stabilizer-based quantum secret sharing schemes
|
quant-ph cs.CR cs.IT math.IT
|
In stabilizer-based quantum secret sharing schemes, it is known that some
shares can be distributed to participants before a secret is given to the
dealer. This distribution is known as advance sharing. It is already known that
a set of shares is advance shareable only if it is a forbidden set. However, it
was not known whether every forbidden set is advance shareable. We provide an
example of a forbidden set that is not advance shareable in the previous
scheme. Furthermore, we propose a quantum secret sharing scheme for quantum
secrets in which every forbidden set is advance shareable.
|
2502.06249
|
Conditioning through indifference in quantum mechanics
|
quant-ph cs.AI math.PR
|
We can learn (more) about the state a quantum system is in through
measurements. We look at how to describe the uncertainty about a quantum
system's state conditional on executing such measurements. We show that by
exploiting the interplay between desirability, coherence and indifference, a
general rule for conditioning can be derived. We then apply this rule to
conditioning on measurement outcomes, and show how it generalises to
conditioning on a set of measurement outcomes.
|
2502.06250
|
DGNO: A Novel Physics-aware Neural Operator for Solving Forward and
Inverse PDE Problems based on Deep, Generative Probabilistic Modeling
|
cs.LG math-ph math.MP
|
Solving parametric partial differential equations (PDEs) and associated
PDE-based, inverse problems is a central task in engineering and physics, yet
existing neural operator methods struggle with high-dimensional, discontinuous
inputs and require large amounts of *labeled* training data. We propose the
Deep Generative Neural Operator (DGNO), a physics-aware framework that
addresses these challenges by leveraging a deep, generative, probabilistic
model in combination with a set of lower-dimensional, latent variables that
simultaneously encode PDE-inputs and PDE-outputs. This formulation can make use
of unlabeled data and significantly improves inverse problem-solving,
particularly for discontinuous or discrete-valued input functions. DGNO
enforces physics constraints without labeled data by incorporating, as virtual
observables, weak-form residuals based on compactly supported radial basis
functions (CSRBFs). These relax regularity constraints and eliminate
higher-order derivatives from the objective function. We also introduce
MultiONet, a novel neural operator architecture, which is a more expressive
generalization of the popular DeepONet that significantly enhances the
approximating power of the proposed model. These innovations make DGNO
particularly effective for challenging forward and inverse PDE-based problems,
such as those involving multi-phase media. Numerical experiments demonstrate
that DGNO achieves higher accuracy across multiple benchmarks while exhibiting
robustness to noise and strong generalization to out-of-distribution cases. Its
adaptability and its ability to handle sparse, noisy data while providing
probabilistic estimates make DGNO a powerful tool for scientific and
engineering applications.
|
2502.06252
|
Evaluating Entity Retrieval in Electronic Health Records: a Semantic Gap
Perspective
|
cs.IR cs.CL
|
Entity retrieval plays a crucial role in the utilization of Electronic Health
Records (EHRs) and is applied across a wide range of clinical practices.
However, a comprehensive evaluation of this task is lacking due to the absence
of a public benchmark. In this paper, we propose the development and release of
a novel benchmark for evaluating entity retrieval in EHRs, with a particular
focus on the semantic gap issue. Using discharge summaries from the MIMIC-III
dataset, we incorporate ICD codes and prescription labels associated with the
notes as queries, and annotate relevance judgments using GPT-4. In total, we
use 1,000 patient notes, generate 1,246 queries, and provide over 77,000
relevance annotations. To offer the first assessment of the semantic gap, we
introduce a novel classification system for relevance matches. Leveraging
GPT-4, we categorize each relevant pair into one of five categories: string,
synonym, abbreviation, hyponym, and implication. Using the proposed benchmark,
we evaluate several retrieval methods, including BM25, query expansion, and
state-of-the-art dense retrievers. Our findings show that BM25 provides a
strong baseline but struggles with semantic matches. Query expansion
significantly improves performance, though it slightly reduces string match
capabilities. Dense retrievers outperform traditional methods, particularly for
semantic matches, and general-domain dense retrievers often surpass those
trained specifically in the biomedical domain.
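For reference, the BM25 baseline scores documents by exact term overlap only, which is exactly why it misses synonym, abbreviation, hyponym, and implication matches. A minimal plain-Python version (illustrative, not the benchmark's implementation):

```python
import math
from collections import Counter

def bm25_scores(query, docs, k1=1.5, b=0.75):
    # Whitespace-tokenised BM25; no stemming or synonym handling.
    toks = [d.lower().split() for d in docs]
    N = len(toks)
    avgdl = sum(len(t) for t in toks) / N
    df = Counter()                      # document frequency per term
    for t in toks:
        df.update(set(t))
    scores = []
    for t in toks:
        tf = Counter(t)
        s = 0.0
        for term in query.lower().split():
            if term not in tf:
                continue                # exact string match required
            idf = math.log((N - df[term] + 0.5) / (df[term] + 0.5) + 1.0)
            s += idf * tf[term] * (k1 + 1) / (
                tf[term] + k1 * (1 - b + b * len(t) / avgdl))
        scores.append(s)
    return scores

docs = [
    "patient denies chest pain or dyspnea",
    "myocardial infarction ruled out chest pain resolved",
    "routine follow up no complaints",
]
scores = bm25_scores("chest pain", docs)
print(scores)
```

A query like "MI" would score zero against all three notes even though the second mentions myocardial infarction; that failure mode is what the semantic-gap categories quantify.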
|
2502.06255
|
Towards Efficient and Intelligent Laser Weeding: Method and Dataset for
Weed Stem Detection
|
cs.CV cs.AI
|
Weed control is a critical challenge in modern agriculture, as weeds compete
with crops for essential nutrient resources, significantly reducing crop yield
and quality. Traditional weed control methods, including chemical and
mechanical approaches, have practical limitations such as environmental
impact and limited efficiency. An emerging yet effective approach is
laser weeding, which uses a laser beam as the stem cutter. Although there have
been studies that use deep learning in weed recognition, its application in
intelligent laser weeding still requires a comprehensive understanding. Thus,
this study represents the first empirical investigation of weed recognition for
laser weeding. To maximize cutting efficiency and avoid damaging the crops of
interest, the laser beam should be aimed directly at the weed root. Yet weed
stem detection remains an under-explored problem. We integrate the
detection of crop and weed with the localization of weed stem into one
end-to-end system. To train and validate the proposed system in a real-life
scenario, we curate and construct a high-quality weed stem detection dataset
with human annotations. The dataset consists of 7,161 high-resolution pictures
collected in the field with annotations of 11,151 instances of weed.
Experimental results show that the proposed system improves weeding accuracy by
6.7% and reduces energy cost by 32.3% compared to existing weed recognition
systems.
|
2502.06257
|
K-ON: Stacking Knowledge On the Head Layer of Large Language Model
|
cs.CL cs.AI
|
Recent advancements in large language models (LLMs) have significantly
improved various natural language processing (NLP) tasks. Typically, LLMs are
trained to predict the next token, aligning well with many NLP tasks. However,
in knowledge graph (KG) scenarios, entities are the fundamental units and
identifying an entity requires at least several tokens. This leads to a
granularity mismatch between KGs and natural languages. To address this issue,
we propose K-ON, which integrates KG knowledge into the LLM by employing
multiple head layers for next k-step prediction. K-ON not only generates
entity-level results in one step but also enables a contrastive loss over
entities, one of the most powerful tools in KG representation learning.
Experimental results show that K-ON outperforms state-of-the-art methods that
incorporate text and even other modalities.
|
2502.06258
|
Emergent Response Planning in LLM
|
cs.CL cs.LG
|
In this work, we argue that large language models (LLMs), though trained to
predict only the next token, exhibit emergent planning behaviors:
$\textbf{their hidden representations encode future outputs beyond the next
token}$. Through simple probing, we demonstrate that LLM prompt representations
encode global attributes of their entire responses, including
$\textit{structural attributes}$ (response length, reasoning steps),
$\textit{content attributes}$ (character choices in storywriting,
multiple-choice answers at the end of response), and $\textit{behavioral
attributes}$ (answer confidence, factual consistency). In addition to
identifying response planning, we explore how it scales with model size across
tasks and how it evolves during generation. The finding that LLMs plan ahead
in their hidden representations suggests potential applications for improving
transparency and generation control.
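The probing recipe itself is simple: fit a linear map from prompt hidden states to a future response attribute and measure held-out fit. A synthetic sketch (the data and dimensions are made up; no actual LLM is involved):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: 200 "prompt hidden states" (16-dim) in which one
# direction linearly encodes the future response length, plus noise.
H = rng.normal(size=(200, 16))
w_true = rng.normal(size=16)
length = H @ w_true + 0.1 * rng.normal(size=200)

# A linear probe is just regularised least squares.
train, test = slice(0, 150), slice(150, 200)
lam = 1e-3
w = np.linalg.solve(H[train].T @ H[train] + lam * np.eye(16),
                    H[train].T @ length[train])
pred = H[test] @ w
r2 = 1 - np.sum((pred - length[test]) ** 2) / np.sum(
    (length[test] - length[test].mean()) ** 2)
print(round(r2, 3))
```

High held-out R² from the prompt representation alone is the evidence pattern the paper reports: the attribute is decodable before any response token is generated.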
|
2502.06261
|
Reducing Variance Caused by Communication in Decentralized Multi-agent
Deep Reinforcement Learning
|
cs.LG
|
In decentralized multi-agent deep reinforcement learning (MADRL),
communication helps agents gain a better understanding of the environment and
coordinate their behaviors. Nevertheless, communication may involve
uncertainty, which potentially introduces variance to the learning of
decentralized agents. In this paper, we focus on a specific decentralized MADRL
setting with communication and conduct a theoretical analysis to study the
variance that is caused by communication in policy gradients. We propose
modular techniques to reduce the variance in policy gradients during training.
We adopt our modular techniques into two existing algorithms for decentralized
MADRL with communication and evaluate them on multiple tasks in the StarCraft
Multi-Agent Challenge and Traffic Junction domains. The results show that
decentralized MADRL communication methods extended with our proposed techniques
not only achieve high-performing agents but also reduce variance in policy
gradients during training.
|
2502.06268
|
Spectral-factorized Positive-definite Curvature Learning for NN Training
|
stat.ML cs.LG
|
Many training methods, such as Adam(W) and Shampoo, learn a positive-definite
curvature matrix and apply an inverse root before preconditioning. Recently,
non-diagonal training methods, such as Shampoo, have gained significant
attention; however, they remain computationally inefficient and are limited to
specific types of curvature information due to the costly matrix root
computation via matrix decomposition. To address this, we propose a Riemannian
optimization approach that dynamically adapts spectral-factorized
positive-definite curvature estimates, enabling the efficient application of
arbitrary matrix roots and generic curvature learning. We demonstrate the
efficacy and versatility of our approach in positive-definite matrix
optimization and covariance adaptation for gradient-free optimization, as well
as its efficiency in curvature learning for neural net training.
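The convenience of keeping the curvature estimate in spectral-factorized form is that any matrix root reduces to an elementwise power of the eigenvalues. A numpy sketch of that one step (the paper's Riemannian update of the factors themselves is not shown):

```python
import numpy as np

def spectral_root(A, p):
    # For SPD A = Q diag(lam) Q^T, any root is A^(1/p) = Q diag(lam^(1/p)) Q^T.
    lam, Q = np.linalg.eigh(A)
    lam = np.clip(lam, 1e-12, None)    # guard against round-off negatives
    return (Q * lam ** (1.0 / p)) @ Q.T

rng = np.random.default_rng(1)
B = rng.normal(size=(4, 4))
A = B @ B.T + 4 * np.eye(4)            # positive definite test matrix
R = spectral_root(A, 4)                # fourth root
err = np.linalg.norm(R @ R @ R @ R - A)
print(err)
```

Once the factors (Q, lam) are tracked directly, switching between, say, an inverse square root and an inverse fourth root costs only a different exponent, which is the flexibility the abstract refers to.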
|
2502.06269
|
Progressive Collaborative and Semantic Knowledge Fusion for Generative
Recommendation
|
cs.IR
|
With the recent surge in interest surrounding generative paradigms,
generative recommendation has increasingly attracted the attention of
researchers in the recommendation community. This paradigm generally consists
of two stages. In the first stage, pretrained semantic embeddings or
collaborative ID embeddings are quantized to create item codes, aiming to
capture and preserve rich semantic or collaborative knowledge within these
codes. The second stage involves utilizing these discrete codes to perform an
autoregressive sequence generation task. Existing methods often either overlook
collaborative or semantic knowledge, or combine the two crudely. In this paper,
we observe that naively concatenating representations from the semantic and
collaborative modalities leads to a semantic domination issue, where the
resulting representation is overly influenced by semantic information,
effectively overshadowing the collaborative representation. Consequently,
downstream recommendation tasks fail to fully exploit the knowledge from both
modalities, resulting in suboptimal performance. To address this, we propose a
progressive collaborative and semantic knowledge fusion model for generative
recommendation, named PRORec, which integrates semantic and collaborative
knowledge with a unified code through a two-stage framework. Specifically, in
the first stage, we propose a cross-modality knowledge alignment task, which
integrates semantic knowledge into collaborative embeddings, enhancing their
representational capability. In the second stage, we propose an in-modality
knowledge distillation task, designed to effectively capture and integrate
knowledge from both semantic and collaborative modalities. Extensive
experiments on three widely used benchmarks validate the effectiveness of our
approach, demonstrating its superiority compared to existing methods.
|
2502.06272
|
Beyond Batch Learning: Global Awareness Enhanced Domain Adaptation
|
cs.LG
|
In domain adaptation (DA), the effectiveness of deep learning-based models is
often constrained by batch learning strategies that fail to fully capture the
global statistical and geometric characteristics of data distributions.
Addressing this gap, we introduce 'Global Awareness Enhanced Domain Adaptation'
(GAN-DA), a novel approach that transcends traditional batch-based limitations.
GAN-DA integrates a unique predefined feature representation (PFR) to
facilitate the alignment of cross-domain distributions, thereby achieving a
comprehensive global statistical awareness. This representation is innovatively
expanded to encompass orthogonal and common feature aspects, which enhances the
unification of global manifold structures and refines decision boundaries for
more effective DA. Our extensive experiments, encompassing 27 diverse
cross-domain image classification tasks, demonstrate GAN-DA's remarkable
superiority, outperforming 24 established DA methods by a significant margin.
Furthermore, our in-depth analyses shed light on the decision-making processes,
revealing insights into the adaptability and efficiency of GAN-DA. This
approach not only addresses the limitations of existing DA methodologies but
also sets a new benchmark in the realm of domain adaptation, offering broad
implications for future research and applications in this field.
|
2502.06274
|
HODDI: A Dataset of High-Order Drug-Drug Interactions for Computational
Pharmacovigilance
|
cs.LG cs.AI q-bio.MN
|
Drug-side effect research is vital for understanding adverse reactions
arising in complex multi-drug therapies. However, the scarcity of higher-order
datasets that capture the combinatorial effects of multiple drugs severely
limits progress in this field. Existing resources such as TWOSIDES primarily
focus on pairwise interactions. To fill this critical gap, we introduce HODDI,
the first Higher-Order Drug-Drug Interaction Dataset, constructed from U.S.
Food and Drug Administration (FDA) Adverse Event Reporting System (FAERS)
records spanning the past decade, to advance computational pharmacovigilance.
HODDI contains 109,744 records involving 2,506 unique drugs and 4,569 unique
side effects, specifically curated to capture multi-drug interactions and their
collective impact on adverse effects. Comprehensive statistical analyses
demonstrate HODDI's extensive coverage and robust analytical metrics, making it
a valuable resource for studying higher-order drug relationships. Evaluating
HODDI with multiple models, we found that simple Multi-Layer Perceptron (MLP)
can outperform graph models, while hypergraph models demonstrate superior
performance in capturing complex multi-drug interactions, further validating
HODDI's effectiveness. Our findings highlight the inherent value of
higher-order information in drug-side effect prediction and position HODDI as a
benchmark dataset for advancing research in pharmacovigilance, drug safety, and
personalized medicine. The dataset and codes are available at
https://github.com/TIML-Group/HODDI.
|
2502.06279
|
DebateBench: A Challenging Long Context Reasoning Benchmark For Large
Language Models
|
cs.CL cs.LG
|
We introduce DebateBench, a novel dataset consisting of an extensive
collection of transcripts and metadata from some of the world's most
prestigious competitive debates. The dataset consists of British Parliamentary
debates from prestigious debating tournaments on diverse topics, annotated with
detailed speech-level scores and house rankings sourced from official
adjudication data. We curate 256 speeches across 32 debates; each debate runs
over an hour, and each input averages 32,000 tokens.
Designed to capture long-context, large-scale reasoning tasks, DebateBench
provides a benchmark for evaluating modern large language models (LLMs) on
their ability to engage in argumentation, deliberation, and alignment with
human experts. To do well on DebateBench, LLMs must perform in-context
learning to understand the rules and evaluation criteria of the debates, then
analyze eight seven-minute speeches and reason about the arguments presented
by all speakers to produce the final results. Our preliminary evaluation using GPT
o1, GPT-4o, and Claude Haiku, shows that LLMs struggle to perform well on
DebateBench, highlighting the need to develop more sophisticated techniques for
improving their performance.
|
2502.06280
|
IceBerg: Debiased Self-Training for Class-Imbalanced Node Classification
|
cs.LG
|
Graph Neural Networks (GNNs) have achieved great success in dealing with
non-Euclidean graph-structured data and have been widely deployed in many
real-world applications. However, their effectiveness is often jeopardized
under class-imbalanced training sets. Most existing studies have analyzed
class-imbalanced node classification from a supervised learning perspective,
but they do not fully utilize the large number of unlabeled nodes in
semi-supervised scenarios. We claim that the supervised signal is just the tip
of the iceberg and a large number of unlabeled nodes have not yet been
effectively utilized. In this work, we propose IceBerg, a debiased
self-training framework to address the class-imbalanced and few-shot challenges
for GNNs at the same time. Specifically, to counteract the Matthew effect and
label distribution shift in self-training, we propose Double Balancing, a
simple plug-and-play module that can largely improve the performance of
existing baselines with just a few lines of code. Secondly, to enhance the long-range
propagation capability of GNNs, we disentangle the propagation and
transformation operations of GNNs. Therefore, the weak supervision signals can
propagate more effectively to address the few-shot issue. In summary, we find
that leveraging unlabeled nodes can significantly enhance the performance of
GNNs in class-imbalanced and few-shot scenarios, and even small, surgical
modifications can lead to substantial performance improvements. Systematic
experiments on benchmark datasets show that our method can deliver considerable
performance gain over existing class-imbalanced node classification baselines.
Additionally, due to IceBerg's outstanding ability to leverage unsupervised
signals, it also achieves state-of-the-art results in few-shot node
classification scenarios. The code of IceBerg is available at:
https://github.com/ZhixunLEE/IceBerg.
|
2502.06281
|
Application of quantum machine learning using quantum kernel algorithms
on multiclass neuron M type classification
|
quant-ph cs.LG
|
The functional characterization of different neuronal types has been a
longstanding and crucial challenge. With the advent of physical quantum
computers, it has become possible to apply quantum machine learning algorithms
to translate theoretical research into practical solutions. Previous studies
have shown the advantages of quantum algorithms on artificially generated
datasets, and initial experiments with small binary classification problems
have yielded comparable outcomes to classical algorithms. However, it is
essential to investigate the potential quantum advantage using real-world data.
To the best of our knowledge, this study is the first to propose the
utilization of quantum systems to classify neuron morphologies, thereby
enhancing our understanding of the performance of automatic multiclass neuron
classification using quantum kernel methods. We examined the influence of
feature engineering on classification accuracy and found that quantum kernel
methods achieved similar performance to classical methods, with certain
advantages observed in various configurations.
|
2502.06282
|
Jakiro: Boosting Speculative Decoding with Decoupled Multi-Head via MoE
|
cs.CL cs.AI cs.LG
|
Speculative decoding (SD) accelerates large language model inference by using
a smaller draft model to predict multiple tokens, which are then verified in
parallel by the larger target model. However, the limited capacity of the draft
model often necessitates tree-based sampling to improve prediction accuracy,
where multiple candidates are generated at each step. We identify a key
limitation in this approach: the candidates at the same step are derived from
the same representation, limiting diversity and reducing overall effectiveness.
To address this, we propose Jakiro, leveraging Mixture of Experts (MoE), where
independent experts generate diverse predictions, effectively decoupling
correlations among candidates. Furthermore, we introduce a hybrid inference
strategy, combining autoregressive decoding for initial tokens with parallel
decoding for subsequent stages, and enhance the latter with a contrastive
mechanism in the feature space to improve accuracy. Our method significantly boosts
prediction accuracy and achieves higher inference speedups. Extensive
experiments across diverse models validate the effectiveness and robustness of
our approach, establishing a new SOTA in speculative decoding. Our codes are
available at https://github.com/haiduo/Jakiro.
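For background, the generic accept/reject verification loop that speculative decoding relies on can be sketched as follows; this is the standard scheme, with Jakiro's MoE draft heads and contrastive mechanism not modelled:

```python
import numpy as np

rng = np.random.default_rng(0)

def speculative_step(p_target, q_draft, draft_tokens):
    # p_target, q_draft: (K, V) next-token distributions per drafted position.
    accepted = []
    for k, t in enumerate(draft_tokens):
        p, q = p_target[k], q_draft[k]
        if rng.random() < min(1.0, p[t] / q[t]):   # accept with prob p/q
            accepted.append(t)
        else:
            # On rejection, resample from the residual max(p - q, 0).
            resid = np.clip(p - q, 0.0, None)
            accepted.append(int(rng.choice(len(p), p=resid / resid.sum())))
            break                                   # discard later drafts
    return accepted

V, K = 8, 4
q = rng.dirichlet(np.ones(V), size=K)              # toy draft distributions
p = rng.dirichlet(np.ones(V), size=K)              # toy target distributions
drafts = [int(rng.choice(V, p=q[k])) for k in range(K)]
out = speculative_step(p, q, drafts)
print(out)
```

Each step emits between one and K tokens while preserving the target distribution; Jakiro's contribution is to make the candidates per step more diverse so more of them survive verification.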
|
2502.06283
|
On the Expressiveness of Rational ReLU Neural Networks With Bounded
Depth
|
cs.LG cs.DM
|
To confirm that the expressive power of ReLU neural networks grows with their
depth, the function $F_n = \max \{0,x_1,\ldots,x_n\}$ has been considered in
the literature. A conjecture by Hertrich, Basu, Di Summa, and Skutella [NeurIPS
2021] states that any ReLU network that exactly represents $F_n$ has at least
$\lceil\log_2 (n+1)\rceil$ hidden layers. The conjecture has recently been
confirmed for networks with integer weights by Haase, Hertrich, and Loho [ICLR
2023].
We follow up on this line of research and show that, within ReLU networks
whose weights are decimal fractions, $F_n$ can only be represented by networks
with at least $\lceil\log_3 (n+1)\rceil$ hidden layers. Moreover, if all
weights are $N$-ary fractions, then $F_n$ can only be represented by networks
with at least $\Omega( \frac{\ln n}{\ln \ln N})$ layers. These results are a
partial confirmation of the above conjecture for rational ReLU networks, and
provide the first non-constant lower bound on the depth of practically relevant
ReLU networks.
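The matching upper bound comes from the classic tournament construction: one hidden ReLU layer computes max(a, b) = (a + b)/2 + (relu(a - b) + relu(b - a))/2 for every pair, so $F_n$ needs only $\lceil\log_2 (n+1)\rceil$ hidden layers. A small numerical check:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def max_net(xs):
    # Tournament evaluation of F_n = max{0, x_1, ..., x_n}: each round is one
    # hidden ReLU layer, using max(a,b) = (a+b)/2 + (relu(a-b)+relu(b-a))/2.
    vals = np.concatenate([[0.0], np.asarray(xs, float)])   # include the 0
    depth = 0
    while len(vals) > 1:
        if len(vals) % 2:                   # odd count: carry one value through
            vals = np.append(vals, vals[-1])
        a, b = vals[0::2], vals[1::2]
        vals = (a + b) / 2 + (relu(a - b) + relu(b - a)) / 2
        depth += 1
    return vals[0], depth

val, depth = max_net([-3.0, 1.5, 0.7, -0.2, 1.4])   # n = 5
print(val, depth)  # max is 1.5, reached in ceil(log2(6)) = 3 rounds
```

The paper's lower bounds say this log-depth cannot be improved by much: with decimal-fraction weights at least $\lceil\log_3 (n+1)\rceil$ layers are needed, and with $N$-ary fractions $\Omega(\frac{\ln n}{\ln \ln N})$.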
|
2502.06285
|
End-to-End Multi-Microphone Speaker Extraction Using Relative Transfer
Functions
|
cs.SD cs.AI
|
This paper introduces a multi-microphone method for extracting a desired
speaker from a mixture involving multiple speakers and directional noise in a
reverberant environment. In this work, we propose leveraging the instantaneous
relative transfer function (RTF), estimated from a reference utterance recorded
in the same position as the desired source. The effectiveness of the RTF-based
spatial cue is compared with direction of arrival (DOA)-based spatial cue and
the conventional spectral embedding. Experimental results in challenging
acoustic scenarios demonstrate that using spatial cues yields better
performance than the spectral-based cue and that the instantaneous RTF
outperforms the DOA-based spatial cue.
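As background, a textbook single-bin estimate of the relative transfer function is the ratio of the microphones' cross-power spectral density to the reference auto-PSD. A toy sketch under an assumed narrowband signal model (not the paper's pipeline):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical model for one frequency bin: mic 1 observes S, mic 2 observes
# h * S plus sensor noise, where h is the RTF coefficient to be estimated
# from a reference utterance recorded at the desired source position.
h_true = 0.8 * np.exp(1j * 0.6)
S = rng.normal(size=4096) + 1j * rng.normal(size=4096)
x1 = S
x2 = h_true * S + 0.05 * (rng.normal(size=4096) + 1j * rng.normal(size=4096))

# Cross-PSD over auto-PSD recovers h: E[x2 x1*] / E[|x1|^2].
h_est = np.mean(x2 * np.conj(x1)) / np.mean(np.abs(x1) ** 2)
print(abs(h_est - h_true))
```

Repeating this per STFT bin yields the RTF vector used as the spatial cue; unlike a DOA, it captures the full reverberant path, which is consistent with its stronger performance in the reported experiments.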
|
2502.06287
|
CT-UIO: Continuous-Time UWB-Inertial-Odometer Localization Using
Non-Uniform B-spline with Fewer Anchors
|
cs.RO
|
Ultra-wideband (UWB) based positioning with fewer anchors has attracted
significant research interest in recent years, especially under
energy-constrained conditions. However, most existing methods rely on
discrete-time representations and smoothness priors to infer a robot's motion
states, which often struggle with ensuring multi-sensor data synchronization.
In this paper, we present an efficient UWB-Inertial-odometer localization
system, utilizing a non-uniform B-spline framework with fewer anchors. Unlike
traditional uniform B-spline-based continuous-time methods, we introduce an
adaptive knot-span adjustment strategy for non-uniform continuous-time
trajectory representation. This is accomplished by adjusting control points
dynamically based on movement speed. To enable efficient fusion of IMU and
odometer data, we propose an improved Extended Kalman Filter (EKF) with
innovation-based adaptive estimation to provide an accurate short-term motion
prior. Furthermore, to address the challenge of achieving a fully observable
UWB localization system under few-anchor conditions, the Virtual Anchor (VA)
generation method based on multiple hypotheses is proposed. At the backend, we
propose a CT-UIO factor graph with an adaptive sliding window for global
trajectory estimation. Comprehensive experiments conducted on corridor and
exhibition hall datasets validate the proposed system's high precision and
robust performance. The codebase and datasets of this work will be open-sourced
at https://github.com/JasonSun623/CT-UIO.
|
2502.06288
|
Enhancing Ground-to-Aerial Image Matching for Visual Misinformation
Detection Using Semantic Segmentation
|
cs.CV
|
The recent advancements in generative AI techniques, which have significantly
increased the online dissemination of altered images and videos, have raised
serious concerns about the credibility of digital media available on the
Internet and distributed through information channels and social networks. This
issue particularly affects domains that rely heavily on trustworthy data, such
as journalism, forensic analysis, and Earth observation. To address these
concerns, the ability to geolocate a non-geo-tagged ground-view image without
external information, such as GPS coordinates, has become increasingly
critical. This study tackles the challenge of linking a ground-view image,
potentially exhibiting varying fields of view (FoV), to its corresponding
satellite image without the aid of GPS data. To achieve this, we propose a
novel four-stream Siamese-like architecture, the Quadruple Semantic Align Net
(SAN-QUAD), which extends previous state-of-the-art (SOTA) approaches by
leveraging semantic segmentation applied to both ground and satellite imagery.
Experimental results on a subset of the CVUSA dataset demonstrate significant
improvements of up to 9.8% over prior methods across various FoV settings.
|
2502.06289
|
Is an Ultra Large Natural Image-Based Foundation Model Superior to a
Retina-Specific Model for Detecting Ocular and Systemic Diseases?
|
eess.IV cs.AI cs.CV
|
The advent of foundation models (FMs) is transforming the medical domain. In
ophthalmology, RETFound, a retina-specific FM pre-trained sequentially on 1.4
million natural images and 1.6 million retinal images, has demonstrated high
adaptability across clinical applications. Conversely, DINOv2, a
general-purpose vision FM pre-trained on 142 million natural images, has shown
promise in non-medical domains. However, its applicability to clinical tasks
remains underexplored. To address this, we conducted head-to-head evaluations
by fine-tuning RETFound and three DINOv2 models (large, base, small) for ocular
disease detection and systemic disease prediction tasks, across eight
standardized open-source ocular datasets, as well as the Moorfields AlzEye and
the UK Biobank datasets. The DINOv2-large model outperformed RETFound in detecting
diabetic retinopathy (AUROC=0.850-0.952 vs 0.823-0.944, across three datasets,
all P<=0.007) and multi-class eye diseases (AUROC=0.892 vs. 0.846, P<0.001). In
glaucoma, the DINOv2-base model outperformed RETFound (AUROC=0.958 vs 0.940,
P<0.001). Conversely, RETFound achieved superior performance over all DINOv2
models in predicting heart failure, myocardial infarction, and ischaemic stroke
(AUROC=0.732-0.796 vs 0.663-0.771, all P<0.001). These trends persisted even
with 10% of the fine-tuning data. These findings showcase the distinct
scenarios where general-purpose and domain-specific FMs excel, highlighting the
importance of aligning FM selection with task-specific requirements to optimise
clinical performance.
|
2502.06292
|
Occupancy-SLAM: An Efficient and Robust Algorithm for Simultaneously
Optimizing Robot Poses and Occupancy Map
|
cs.RO
|
Joint optimization of poses and features has been extensively studied and
demonstrated to yield more accurate results in feature-based SLAM problems.
However, research on jointly optimizing poses and non-feature-based maps
remains limited. Occupancy maps are widely used non-feature-based environment
representations because they effectively classify spaces into obstacles, free
areas, and unknown regions, providing robots with spatial information for
various tasks. In this paper, we propose Occupancy-SLAM, a novel
optimization-based SLAM method that enables the joint optimization of robot
trajectory and the occupancy map through a parameterized map representation.
The key novelty lies in optimizing both robot poses and occupancy values at
different cell vertices simultaneously, a significant departure from existing
methods where the robot poses need to be optimized first before the map can be
estimated. Evaluations using simulations and practical 2D laser datasets
demonstrate that the proposed approach can robustly obtain more accurate robot
trajectories and occupancy maps than state-of-the-art techniques with
comparable computational time. Preliminary results in the 3D case further
confirm the potential of the proposed method in practical 3D applications,
achieving more accurate results than existing methods.
|
2502.06295
|
DVFS-Aware DNN Inference on GPUs: Latency Modeling and Performance
Analysis
|
cs.LG cs.NI
|
The rapid development of deep neural networks (DNNs) is inherently
accompanied by the problem of high computational costs. To tackle this
challenge, dynamic voltage frequency scaling (DVFS) is emerging as a promising
technology for balancing the latency and energy consumption of DNN inference by
adjusting the computing frequency of processors. However, most existing models
of DNN inference time are based on the CPU-DVFS technique, and directly
applying the CPU-DVFS model to DNN inference on GPUs will lead to significant
errors in optimizing latency and energy consumption. In this paper, we propose
a DVFS-aware latency model to precisely characterize DNN inference time on
GPUs. We first formulate the DNN inference time based on extensive experiment
results for different devices and analyze the impact of fitting parameters.
Then by dividing DNNs into multiple blocks and obtaining the actual inference
time, the proposed model is further verified. Finally, we compare our proposed
model with the CPU-DVFS model in two specific cases. Evaluation results
demonstrate that local inference optimization with our proposed model achieves
a reduction of no less than 66% and 69% in inference time and energy
consumption respectively. In addition, cooperative inference with our proposed
model can improve the partition policy and reduce the energy consumption
compared to the CPU-DVFS model.
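The paper's exact GPU latency model is not given in the abstract; as a hedged sketch of the general idea, the snippet below fits a simple frequency-dependent form t(f) = a/f + b to hypothetical frequency/latency measurements (all numbers illustrative, generated from a = 9000, b = 6):

```python
import numpy as np

# Hypothetical measurements (illustrative, not from the paper): GPU core
# frequency (MHz) vs. per-block inference time (ms).
freq_mhz = np.array([600.0, 900.0, 1200.0, 1500.0, 1800.0])
time_ms = np.array([21.0, 16.0, 13.5, 12.0, 11.0])

# Assumed model t(f) = a/f + b, fit by linear least squares: 'a' captures
# work that scales with clock speed, 'b' the frequency-independent overhead.
A = np.column_stack([1.0 / freq_mhz, np.ones_like(freq_mhz)])
(a, b), *_ = np.linalg.lstsq(A, time_ms, rcond=None)
print(f"a = {a:.0f} ms*MHz, b = {b:.2f} ms")
```

A CPU-style model that assumes latency scales purely as 1/f would miss the constant term b, which is one plausible source of the errors the abstract mentions.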
|
2502.06298
|
SeaExam and SeaBench: Benchmarking LLMs with Local Multilingual
Questions in Southeast Asia
|
cs.CL cs.AI
|
This study introduces two novel benchmarks, SeaExam and SeaBench, designed to
evaluate the capabilities of Large Language Models (LLMs) in Southeast Asian
(SEA) application scenarios. Unlike existing multilingual datasets primarily
derived from English translations, these benchmarks are constructed based on
real-world scenarios from SEA regions. SeaExam draws from regional educational
exams to form a comprehensive dataset that encompasses subjects such as local
history and literature. In contrast, SeaBench is crafted around multi-turn,
open-ended tasks that reflect daily interactions within SEA communities. Our
evaluations demonstrate that SeaExam and SeaBench more effectively discern LLM
performance on SEA language tasks compared to their translated benchmarks. This
highlights the importance of using real-world queries to assess the
multilingual capabilities of LLMs.
|
2502.06300
|
The impact of allocation strategies in subset learning on the expressive
power of neural networks
|
cs.LG
|
In traditional machine learning, models are defined by a set of parameters,
which are optimized to perform specific tasks. In neural networks, these
parameters correspond to the synaptic weights. However, in reality, it is often
infeasible to control or update all weights. This challenge is not limited to
artificial networks but extends to biological networks, such as the brain,
where the extent of distributed synaptic weight modification during learning
remains unclear. Motivated by these insights, we theoretically investigate how
different allocations of a fixed number of learnable weights influence the
capacity of neural networks. Using a teacher-student setup, we introduce a
benchmark to quantify the expressivity associated with each allocation. We
establish conditions under which allocations have maximal or minimal expressive
power in linear recurrent neural networks and linear multi-layer feedforward
networks. For suboptimal allocations, we propose heuristic principles to
estimate their expressivity. These principles extend to shallow ReLU networks
as well. Finally, we validate our theoretical findings with empirical
experiments. Our results emphasize the critical role of strategically
distributing learnable weights across the network, showing that a more
widespread allocation generally enhances the network's expressive power.
|
2502.06301
|
Utilizing Novelty-based Evolution Strategies to Train Transformers in
Reinforcement Learning
|
cs.LG cs.NE
|
In this paper, we experiment with novelty-based variants of OpenAI-ES, the
NS-ES and NSR-ES algorithms, and evaluate their effectiveness in training
complex, transformer-based architectures designed for reinforcement learning,
such as Decision Transformers. We also test whether we can accelerate the
novelty-based training of these larger models by seeding the training with a
pretrained model. In this, we build on our previous work, where we tested the
ability of evolution strategies - specifically the aforementioned OpenAI-ES -
to train the Decision Transformer architecture. The results were mixed: NS-ES
showed progress, but it would clearly need many more iterations to yield
interesting results. NSR-ES, on the other hand, proved quite capable of being
applied straightforwardly to larger models, since its performance on the
feed-forward model and the Decision Transformer was as similar as it was for
OpenAI-ES in our previous work.
|
2502.06302
|
Latent Convergence Modulation in Large Language Models: A Novel Approach
to Iterative Contextual Realignment
|
cs.CL
|
Token prediction stability remains a challenge in autoregressive generative
models, where minor variations in early inference steps often lead to
significant semantic drift over extended sequences. A structured modulation
mechanism was introduced to regulate hidden state transitions, ensuring that
latent representation trajectories remain aligned with prior contextual
dependencies while preserving generative flexibility. The modulation framework
was designed to function within transformer-based architectures, dynamically
constraining representation evolution without imposing external memory
dependencies or extensive architectural modifications. Empirical evaluations
demonstrated that structured latent adjustments contributed to reductions in
perplexity fluctuations, entropy variance, and lexical instability, improving
coherence in long-form text generation. Gradient propagation stability was
further analyzed, revealing that the modulation process led to smoother
optimization pathways, mitigating erratic fluctuations in weight updates across
successive inference steps. The computational efficiency of the modulation
process was assessed, showing that its integration within transformer-based
architectures introduced only marginal overhead while maintaining compatibility
with existing optimization frameworks. The structured modulation constraints
also influenced syntactic variation, preventing excessive repetition while
maintaining balanced sentence length distributions. Comparative evaluations
against baseline models reinforced the role of controlled latent state
evolution in improving pronoun resolution, logical consistency, and contextual
alignment across autoregressive text generation tasks.
|
2502.06307
|
Cell Nuclei Detection and Classification in Whole Slide Images with
Transformers
|
cs.CV
|
Accurate and efficient cell nuclei detection and classification in
histopathological Whole Slide Images (WSIs) are pivotal for digital pathology
applications. Traditional cell segmentation approaches, while commonly used,
are computationally expensive and require extensive post-processing, limiting
their practicality for high-throughput clinical settings. In this paper, we
propose a paradigm shift from segmentation to detection for extracting cell
information from WSIs, introducing CellNuc-DETR as a more effective solution.
We evaluate the accuracy performance of CellNuc-DETR on the PanNuke dataset and
conduct cross-dataset evaluations on CoNSeP and MoNuSeg to assess robustness
and generalization capabilities. Our results demonstrate state-of-the-art
performance in both cell nuclei detection and classification tasks.
Additionally, we assess the efficiency of CellNuc-DETR on large WSIs, showing
that it not only outperforms current methods in accuracy but also significantly
reduces inference times. Specifically, CellNuc-DETR is twice as fast as the
fastest segmentation-based method, HoVer-NeXt, while achieving substantially
higher accuracy. Moreover, it surpasses CellViT in accuracy and is
approximately ten times more efficient in inference speed on WSIs. These
results establish CellNuc-DETR as a superior approach for cell analysis in
digital pathology, combining high accuracy with computational efficiency.
|
2502.06309
|
Analog In-memory Training on General Non-ideal Resistive Elements: The
Impact of Response Functions
|
cs.LG cs.AR math.OC
|
As the economic and environmental costs of training and deploying large
vision or language models increase dramatically, analog in-memory computing
(AIMC) emerges as a promising energy-efficient solution. However, the training
perspective, especially the training dynamics, remains underexplored. In AIMC
hardware, the trainable weights are represented by the conductance of resistive
elements and updated using consecutive electrical pulses. Among all the
physical properties of resistive elements, the response to the pulses directly
affects the training dynamics. This paper first provides a theoretical
foundation for gradient-based training on AIMC hardware and studies the impact
of response functions. We demonstrate that noisy updates and asymmetric
response functions negatively impact Analog SGD by imposing an implicit penalty
term on the objective. To overcome this issue, Tiki-Taka, a residual learning
algorithm, converges exactly to a critical point by optimizing a main array and
a residual array in a bilevel manner. The conclusion is supported by
simulations validating our
theoretical insights.
|
2502.06314
|
From Pixels to Components: Eigenvector Masking for Visual Representation
Learning
|
cs.LG cs.AI cs.CV
|
Predicting masked from visible parts of an image is a powerful
self-supervised approach for visual representation learning. However, the
common practice of masking random patches of pixels exhibits certain failure
modes, which can prevent learning meaningful high-level features, as required
for downstream tasks. We propose an alternative masking strategy that operates
on a suitable transformation of the data rather than on the raw pixels.
Specifically, we perform principal component analysis and then randomly mask a
subset of components, which accounts for a fixed ratio of the data variance.
The learning task then amounts to reconstructing the masked components from the
visible ones. Compared to local patches of pixels, the principal components of
images carry more global information. We thus posit that predicting masked from
visible components involves more high-level features, allowing our masking
strategy to extract more useful representations. This is corroborated by our
empirical findings which demonstrate improved image classification performance
for component over pixel masking. Our method thus constitutes a simple and
robust data-driven alternative to traditional masked image modeling approaches.
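As a rough illustration of the masking strategy described above (not the paper's implementation), the sketch below performs PCA on toy data, randomly masks components accounting for a fixed share of the variance, and exposes the masked coefficients as the reconstruction target; the data and the 40% variance ratio are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "images": 200 samples, 64 pixels each (flattened 8x8), illustrative only.
X = rng.normal(size=(200, 64))
X -= X.mean(axis=0)  # center before PCA

# PCA via SVD: rows of Vt are the principal components (eigenvectors).
U, S, Vt = np.linalg.svd(X, full_matrices=False)
var = S**2 / (len(X) - 1)
ratio = var / var.sum()

# Randomly mask components accounting for ~40% of the data variance;
# the self-supervised task is to predict the masked coefficients.
perm = rng.permutation(len(ratio))
cum = np.cumsum(ratio[perm])
masked = perm[: np.searchsorted(cum, 0.40) + 1]
visible = np.setdiff1d(perm, masked)

Z = X @ Vt.T                      # per-sample component coefficients
z_visible, z_target = Z[:, visible], Z[:, masked]
print("masked components:", len(masked), "visible:", len(visible))
```

Because each principal component mixes all pixels, the masked targets carry global structure rather than the local detail hidden by a random pixel patch.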
|
2502.06316
|
Can AI Examine Novelty of Patents?: Novelty Evaluation Based on the
Correspondence between Patent Claim and Prior Art
|
cs.CL
|
Assessing the novelty of patent claims is a critical yet challenging task
traditionally performed by patent examiners. While advancements in NLP have
enabled progress in various patent-related tasks, novelty assessment remains
unexplored. This paper introduces a novel challenge by evaluating the ability
of large language models (LLMs) to assess patent novelty by comparing claims
with cited prior art documents, following a process similar to that used by
patent examiners. We present the first dataset specifically designed for novelty
evaluation, derived from real patent examination cases, and analyze the
capabilities of LLMs to address this task. Our study reveals that while
classification models struggle to effectively assess novelty, generative models
make predictions with a reasonable level of accuracy, and their explanations
are accurate enough to understand the relationship between the target patent
and prior art. These findings demonstrate the potential of LLMs to assist in
patent evaluation, reducing the workload for both examiners and applicants. Our
contributions highlight the limitations of current models and provide a
foundation for improving AI-driven patent analysis through advanced models and
refined datasets.
|
2502.06323
|
A physics-based data-driven model for CO$_2$ gas diffusion electrodes to
drive automated laboratories
|
cond-mat.mtrl-sci cs.LG
|
The electrochemical reduction of atmospheric CO$_2$ into high-energy
molecules with renewable energy is a promising avenue for energy storage that
can take advantage of existing infrastructure especially in areas where
sustainable alternatives to fossil fuels do not exist. Automated laboratories
are currently being developed and used to optimize the composition and
operating conditions of gas diffusion electrodes (GDEs), the device in which
this reaction takes place. Improving the efficiency of GDEs is crucial for this
technology to become viable. Here we present a modeling framework to
efficiently explore the high-dimensional parameter space of GDE designs in an
active learning context. At the core of the framework is an uncertainty-aware
physics model calibrated with experimental data. The model has the flexibility
to capture various input parameter spaces and any carbon products which can be
modeled with Tafel kinetics. It is interpretable, and a Gaussian process layer
can capture deviations of real data from the function space of the physical
model itself. We deploy the model in a simulated active learning setup with
real electrochemical data gathered by the AdaCarbon automated laboratory and
show that it can be used to efficiently traverse the multi-dimensional
parameter space.
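To make the "Tafel kinetics" ingredient concrete, here is a minimal sketch of the Tafel relation for the partial current density of one carbon product; the exchange current density and Tafel slope are hypothetical placeholder values, not parameters from the paper:

```python
import numpy as np

# Tafel kinetics for one product: partial current density
# j = j0 * 10**(eta / b), with exchange current density j0 (mA/cm^2)
# and Tafel slope b (mV/decade). Parameters here are illustrative.
def tafel_current(eta_mv, j0=0.01, slope_mv_per_dec=120.0):
    return j0 * 10.0 ** (eta_mv / slope_mv_per_dec)

overpotentials = np.array([120.0, 240.0, 360.0])  # mV
j = tafel_current(overpotentials)
print(j)  # each 120 mV adds one decade of current at b = 120 mV/dec
```

In the framework described above, a physics model built from such relations would be calibrated against experimental data, with a Gaussian process layer absorbing systematic deviations.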
|
2502.06324
|
UniDemoir\'e: Towards Universal Image Demoir\'eing with Data Generation
and Synthesis
|
cs.CV cs.AI
|
Image demoir\'eing poses one of the most formidable challenges in image
restoration, primarily due to the unpredictable and anisotropic nature of
moir\'e patterns. Limited by the quantity and diversity of training data,
current methods tend to overfit to a single moir\'e domain, resulting in
performance degradation for new domains and restricting their robustness in
real-world applications. In this paper, we propose a universal image
demoir\'eing solution, UniDemoir\'e, which has superior generalization
capability. Notably, we propose innovative and effective data generation and
synthesis methods that can automatically provide vast high-quality moir\'e
images to train a universal demoir\'eing model. Our extensive experiments
demonstrate the cutting-edge performance and broad potential of our approach
for generalized image demoir\'eing.
|
2502.06327
|
Prompt-Driven Continual Graph Learning
|
cs.LG cs.AI
|
Continual Graph Learning (CGL), which aims to accommodate new tasks over
evolving graph data without forgetting prior knowledge, is garnering
significant research interest. Mainstream solutions adopt the memory
replay-based idea, i.e., caching representative data from earlier tasks for
retraining the graph model. However, this strategy struggles with scalability
issues for constantly evolving graphs and raises concerns regarding data
privacy. Inspired by recent advancements in the prompt-based learning paradigm,
this paper introduces a novel prompt-driven continual graph learning
(PROMPTCGL) framework, which learns a separate prompt for each incoming task
and maintains the underlying graph neural network model fixed. In this way,
PROMPTCGL naturally avoids catastrophic forgetting of knowledge from previous
tasks. More specifically, we propose hierarchical prompting to instruct the
model from both feature- and topology-level to fully address the variability of
task graphs in dynamic continual learning. Additionally, we develop a
personalized prompt generator to generate tailored prompts for each graph node
while minimizing the number of prompts needed, leading to constant memory
consumption regardless of the graph scale. Extensive experiments on four
benchmarks show that PROMPTCGL achieves superior performance against existing
CGL approaches while significantly reducing memory consumption. Our code is
available at https://github.com/QiWang98/PromptCGL.
|
2502.06329
|
Expect the Unexpected: FailSafe Long Context QA for Finance
|
cs.CL
|
We propose a new long-context financial benchmark, FailSafeQA, designed to
test the robustness and context-awareness of LLMs against six variations in
human-interface interactions in LLM-based query-answer systems within finance.
We concentrate on two case studies: Query Failure and Context Failure. In the
Query Failure scenario, we perturb the original query to vary in domain
expertise, completeness, and linguistic accuracy. In the Context Failure case,
we simulate the uploads of degraded, irrelevant, and empty documents. We employ
the LLM-as-a-Judge methodology with Qwen2.5-72B-Instruct and use fine-grained
rating criteria to define and calculate Robustness, Context Grounding, and
Compliance scores for 24 off-the-shelf models. The results suggest that
although some models excel at mitigating input perturbations, they must balance
robust answering with the ability to refrain from hallucinating. Notably,
Palmyra-Fin-128k-Instruct, recognized as the most compliant model, maintained
strong baseline performance but encountered challenges in sustaining robust
predictions in 17% of test cases. On the other hand, the most robust model,
OpenAI o3-mini, fabricated information in 41% of tested cases. The results
demonstrate that even high-performing models have significant room for
improvement and highlight the role of FailSafeQA as a tool for developing LLMs
optimized for dependability in financial applications. The dataset is available
at: https://huggingface.co/datasets/Writer/FailSafeQA
|
2502.06331
|
Conformal Prediction Regions are Imprecise Highest Density Regions
|
stat.ML cs.LG math.PR
|
Recently, Cella and Martin proved how, under an assumption called consonance,
a credal set (i.e. a closed and convex set of probabilities) can be derived
from the conformal transducer associated with transductive conformal
prediction. We show that the Imprecise Highest Density Region (IHDR) associated
with such a credal set corresponds to the classical Conformal Prediction
Region. In proving this result, we relate the set of probability density/mass
functions (pdf/pmf's) associated with the elements of the credal set to the
imprecise probabilistic concept of a cloud. As a result, we establish new
relationships between Conformal Prediction and Imprecise Probability (IP)
theories. A byproduct of our presentation is the discovery that consonant
plausibility functions are monoid homomorphisms, a new algebraic property of an
IP tool.
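For context, the classical Conformal Prediction Region that the result identifies with an IHDR can be sketched as follows: a split-conformal region built from absolute-residual scores on synthetic Gaussian data (all values illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)

# Calibration residuals define the classical conformal region.
calibration = rng.normal(loc=0.0, scale=1.0, size=999)
scores = np.abs(calibration)          # nonconformity scores

alpha = 0.1
n = len(scores)
# Finite-sample quantile: the ceil((n+1)(1-alpha))-th smallest score.
k = int(np.ceil((n + 1) * (1 - alpha)))
q = np.sort(scores)[k - 1]

# Prediction region for a new point with predicted value 0: [-q, q].
new_points = rng.normal(size=10000)
coverage = np.mean(np.abs(new_points) <= q)
print(f"q = {q:.3f}, empirical coverage = {coverage:.3f}")
```

The paper's contribution is to show that this familiar region coincides with the imprecise highest density region of the credal set derived from the conformal transducer.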
|
2502.06335
|
Microcanonical Langevin Ensembles: Advancing the Sampling of Bayesian
Neural Networks
|
cs.LG
|
Despite recent advances, sampling-based inference for Bayesian Neural
Networks (BNNs) remains a significant challenge in probabilistic deep learning.
While sampling-based approaches do not require a variational distribution
assumption, current state-of-the-art samplers still struggle to navigate the
complex and highly multimodal posteriors of BNNs. As a consequence, sampling
still requires considerably longer inference times than non-Bayesian methods
even for small neural networks, despite recent advances in making software
implementations more efficient. Besides the difficulty of finding
high-probability regions, the time until samplers provide sufficient
exploration of these areas remains unpredictable. To tackle these challenges,
we introduce an ensembling approach that leverages strategies from optimization
and a recently proposed sampler called Microcanonical Langevin Monte Carlo
(MCLMC) for efficient, robust and predictable sampling performance. Compared to
approaches based on the state-of-the-art No-U-Turn Sampler, our approach
delivers substantial speedups up to an order of magnitude, while maintaining or
improving predictive performance and uncertainty quantification across diverse
tasks and data modalities. The suggested Microcanonical Langevin Ensembles and
modifications to MCLMC additionally enhance the method's predictability in
resource requirements, facilitating easier parallelization. All in all, the
proposed method offers a promising direction for practical, scalable inference
for BNNs.
|
2502.06336
|
DefTransNet: A Transformer-based Method for Non-Rigid Point Cloud
Registration in the Simulation of Soft Tissue Deformation
|
cs.CV cs.AI
|
Soft-tissue surgeries, such as tumor resections, are complicated by tissue
deformations that can obscure the accurate location and shape of tissues. By
representing tissue surfaces as point clouds and applying non-rigid point cloud
registration (PCR) methods, surgeons can better understand tissue deformations
before, during, and after surgery. Existing non-rigid PCR methods, such as
feature-based approaches, struggle with robustness against challenges like
noise, outliers, partial data, and large deformations, making accurate point
correspondence difficult. Although learning-based PCR methods, particularly
Transformer-based approaches, have recently shown promise due to their
attention mechanisms for capturing interactions, their robustness remains
limited in challenging scenarios. In this paper, we present DefTransNet, a
novel end-to-end Transformer-based architecture for non-rigid PCR. DefTransNet
is designed to address the key challenges of deformable registration, including
large deformations, outliers, noise, and partial data, by inputting source and
target point clouds and outputting displacement vector fields. The proposed
method incorporates a learnable transformation matrix to enhance robustness to
affine transformations, integrates global and local geometric information, and
captures long-range dependencies among points using Transformers. We validate
our approach on four datasets: ModelNet, SynBench, 4DMatch, and DeformedTissue,
using both synthetic and real-world data to demonstrate the generalization of
our proposed method. Experimental results demonstrate that DefTransNet
outperforms current state-of-the-art registration networks across various
challenging conditions. Our code and data are publicly available.
|
2502.06337
|
Accelerating Outlier-robust Rotation Estimation by Stereographic
Projection
|
cs.CV cs.RO
|
Rotation estimation plays a fundamental role in many computer vision and
robot tasks. However, efficiently estimating rotation in large inputs
containing numerous outliers (i.e., mismatches) and noise is a recognized
challenge. Many robust rotation estimation methods have been designed to
address this challenge. Unfortunately, existing methods are often inapplicable
due to their long computation time and the risk of local optima. In this paper,
we propose an efficient and robust rotation estimation method. Specifically,
our method first investigates geometric constraints involving only the rotation
axis. Then, it uses stereographic projection and spatial voting techniques to
identify the rotation axis and angle. Furthermore, our method efficiently
obtains the optimal rotation estimation and can estimate multiple rotations
simultaneously. To verify the feasibility of our method, we conduct comparative
experiments using both synthetic and real-world data. The results show that,
with GPU assistance, our method can solve large-scale ($10^6$ points) and
severely corrupted (90\% outlier rate) rotation estimation problems within 0.07
seconds, with an angular error of only 0.01 degrees, which is superior to
existing methods in terms of accuracy and efficiency.
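The stereographic-projection-and-voting idea can be caricatured with a toy experiment (not the paper's algorithm): project unit axis hypotheses onto a plane and locate the vote peak on a 2-D grid; the inlier/outlier mix and grid size are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

def stereographic(axes):
    """Project unit vectors (from the south pole of S^2) onto the z = 0 plane."""
    x, y, z = axes[:, 0], axes[:, 1], axes[:, 2]
    return np.column_stack([x / (1.0 + z), y / (1.0 + z)])

# Hypothetical axis hypotheses: a cluster around the true rotation axis
# plus many outliers. Voting in the projected plane finds the peak.
true_axis = np.array([0.0, 0.0, 1.0])
inliers = true_axis + 0.02 * rng.normal(size=(300, 3))
outliers = rng.normal(size=(700, 3))
axes = np.vstack([inliers, outliers])
axes /= np.linalg.norm(axes, axis=1, keepdims=True)
axes[axes[:, 2] < 0] *= -1            # axis sign ambiguity: keep upper hemisphere

pts = stereographic(axes)
# Coarse 2-D voting grid over the projected plane.
H, xe, ye = np.histogram2d(pts[:, 0], pts[:, 1], bins=50, range=[[-1, 1], [-1, 1]])
i, j = np.unravel_index(H.argmax(), H.shape)
peak = (0.5 * (xe[i] + xe[i + 1]), 0.5 * (ye[j] + ye[j + 1]))
print("vote peak (true axis projects to the origin):", peak)
```

Reducing the axis search from the sphere to a plane is what makes this kind of voting amenable to the GPU-parallel, large-scale setting the abstract reports.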
|
2502.06338
|
Zero-shot Depth Completion via Test-time Alignment with Affine-invariant
Depth Prior
|
cs.CV
|
Depth completion, predicting dense depth maps from sparse depth measurements,
is an ill-posed problem requiring prior knowledge. Recent methods adopt
learning-based approaches to implicitly capture priors, but the priors
primarily fit in-domain data and do not generalize well to out-of-domain
scenarios. To address this, we propose a zero-shot depth completion method
composed of an affine-invariant depth diffusion model and test-time alignment.
We use pre-trained depth diffusion models as depth prior knowledge, which
implicitly understand how to fill in depth for scenes. Our approach aligns the
affine-invariant depth prior with metric-scale sparse measurements, enforcing
them as hard constraints via an optimization loop at test-time. Our zero-shot
depth completion method demonstrates generalization across various domain
datasets, achieving up to a 21\% average performance improvement over the
previous state-of-the-art methods while enhancing spatial understanding by
sharpening scene details. We demonstrate that aligning a monocular
affine-invariant depth prior with sparse metric measurements is a proven
strategy to achieve domain-generalizable depth completion without relying on
extensive training data. Project page:
https://hyoseok1223.github.io/zero-shot-depth-completion/.
|
2502.06341
|
Facial Analysis Systems and Down Syndrome
|
cs.CV cs.AI cs.HC cs.LG
|
The ethical, social and legal issues surrounding facial analysis technologies
have been widely debated in recent years. Key critics have argued that these
technologies can perpetuate bias and discrimination, particularly against
marginalized groups. We contribute to this field of research by reporting on
the limitations of facial analysis systems with the faces of people with Down
syndrome: this particularly vulnerable group has received very little attention
in the literature so far. This study involved the creation of a specific
dataset of face images: an experimental group with faces of people with Down
syndrome, and a control group with faces of people not affected by the
syndrome. Two commercial tools were tested on the dataset across three tasks:
gender recognition, age prediction and face labelling. The results show an
overall lower accuracy of prediction in the experimental group, and other
specific patterns of performance differences: i) high error rates in gender
recognition in the category of males with Down syndrome; ii) adults with Down
syndrome were more often incorrectly labelled as children; iii) social
stereotypes are propagated in both the control and experimental groups, with
labels related to aesthetics more often associated with women, and labels
related to education level and skills more often associated with men. These
results, although limited in scope, shed new light on the biases that alter
face classification when applied to faces of people with Down syndrome. They
confirm the structural limitation of the technology, which is inherently
dependent on the datasets used to train the models.
|
2502.06342
|
The exponential distribution of the orders of demonstrative, numeral,
adjective and noun
|
cs.CL physics.soc-ph
|
The frequency of the preferred order for a noun phrase formed by
demonstrative, numeral, adjective and noun has received significant attention
over the last two decades. We investigate the actual distribution of the
preferred 24 possible orders. There is no consensus on whether it can be
well-fitted by an exponential or a power law distribution. We find that an
exponential distribution is a much better model. This finding and other
circumstances where an exponential-like distribution is found challenge the
view that power-law distributions, e.g., Zipf's law for word frequencies, are
inevitable. We also investigate which of two exponential distributions gives a
better fit: an exponential model where the 24 orders have non-zero probability
or an exponential model where the number of orders that can have non-zero
probability is variable. When parsimony and generalizability are prioritized,
we find strong support for the exponential model where all 24 orders have
non-zero probability. This finding suggests that there is no hard constraint on
word order variation and that unattested orders merely result from
undersampling, consistent with Cysouw's view.
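The model comparison above can be sketched in a toy form (frequencies below are synthetic, generated from an exponential law, not the paper's data): fit both candidate models to log frequencies and compare residuals.

```python
import numpy as np

# Hypothetical frequencies for the 24 noun-phrase orders, ranked from most
# to least frequent; generated to be exactly exponential for illustration.
ranks = np.arange(1, 25)
freqs = 1000.0 * np.exp(-0.35 * ranks)

log_f = np.log(freqs)
# Exponential model: log f = c - lambda * r      (linear in rank)
A_exp = np.column_stack([ranks.astype(float), np.ones(24)])
res_exp = np.linalg.lstsq(A_exp, log_f, rcond=None)[1]
# Power-law model:   log f = c - a * log r       (linear in log rank)
A_pow = np.column_stack([np.log(ranks), np.ones(24)])
res_pow = np.linalg.lstsq(A_pow, log_f, rcond=None)[1]

print("exponential residual:", float(res_exp[0]))
print("power-law residual:  ", float(res_pow[0]))
```

On exponential-generated data the exponential fit wins by many orders of magnitude; the paper performs the analogous comparison on attested order frequencies.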
|
2502.06343
|
Causal Lifting of Neural Representations: Zero-Shot Generalization for
Causal Inferences
|
cs.LG stat.ML
|
A plethora of real-world scientific investigations is waiting to scale with
the support of trustworthy predictive models that can reduce the need for
costly data annotations. We focus on causal inferences on a target experiment
with unlabeled factual outcomes, retrieved by a predictive model fine-tuned on
a labeled similar experiment. First, we show that factual outcome estimation
via Empirical Risk Minimization (ERM) may fail to yield valid causal inferences
on the target population, even in a randomized controlled experiment and
infinite training samples. Then, we propose to leverage the observed
experimental settings during training to empower generalization to downstream
interventional investigations, ``Causal Lifting'' the predictive model. We
propose Deconfounded Empirical Risk Minimization (DERM), a new simple learning
procedure minimizing the risk over a fictitious target population, preventing
potential confounding effects. We validate our method on both synthetic and
real-world scientific data. Notably, for the first time, we zero-shot
generalize causal inferences on the ISTAnt dataset (without annotation) by
causally lifting a predictive model on our experiment variant.
|
2502.06348
|
AiRacleX: Automated Detection of Price Oracle Manipulations via
LLM-Driven Knowledge Mining and Prompt Generation
|
cs.CR cs.AI
|
Decentralized finance (DeFi) applications depend on accurate price oracles to
ensure secure transactions, yet these oracles are highly vulnerable to
manipulation, enabling attackers to exploit smart contract vulnerabilities for
unfair asset valuation and financial gain. Detecting such manipulations
traditionally relies on the manual effort of experienced experts, presenting
significant challenges. In this paper, we propose a novel LLM-driven framework
that automates the detection of price oracle manipulations by leveraging the
complementary strengths of different large language models (LLMs). Our approach
begins with domain-specific knowledge extraction, where an LLM synthesizes
precise insights about price oracle vulnerabilities from top-tier academic
papers, eliminating the need for profound expertise from developers or
auditors. This knowledge forms the foundation for a second LLM to generate
structured, context-aware chain-of-thought prompts, which guide a third LLM in
accurately identifying manipulation patterns in smart contracts. We validate
the effectiveness of the framework through experiments on 60 known
vulnerabilities from 46 real-world DeFi attacks or projects spanning 2021 to
2023. The best-performing combination of LLMs (Haiku-Haiku-4o-mini) identified
by AiRacleX demonstrates a 2.58-times improvement in recall (0.667 vs 0.259)
compared to the state-of-the-art tool GPTScan, while maintaining
comparable precision. Furthermore, our framework demonstrates the feasibility
of replacing commercial models with open-source alternatives, enhancing privacy
and security for developers.
|
2502.06349
|
Provably Near-Optimal Federated Ensemble Distillation with Negligible
Overhead
|
cs.LG
|
Federated ensemble distillation addresses client heterogeneity by generating
pseudo-labels for an unlabeled server dataset based on client predictions and
training the server model using the pseudo-labeled dataset. The unlabeled
server dataset can either be pre-existing or generated through a data-free
approach. The effectiveness of this approach critically depends on the method
of assigning weights to client predictions when creating pseudo-labels,
especially in highly heterogeneous settings. Inspired by theoretical results
from GANs, we propose a provably near-optimal weighting method that leverages
client discriminators trained with a server-distributed generator and local
datasets. Our experiments on various image classification tasks demonstrate
that the proposed method significantly outperforms baselines. Furthermore, we
show that the additional communication cost, client-side privacy leakage, and
client-side computational overhead introduced by our method are negligible,
both in scenarios with and without a pre-existing server dataset.
|
2502.06351
|
Calibrating LLMs with Information-Theoretic Evidential Deep Learning
|
cs.LG
|
Fine-tuned large language models (LLMs) often exhibit overconfidence,
particularly when trained on small datasets, resulting in poor calibration and
inaccurate uncertainty estimates. Evidential Deep Learning (EDL), an
uncertainty-aware approach, enables uncertainty estimation in a single forward
pass, making it a promising method for calibrating fine-tuned LLMs. However,
despite its computational efficiency, EDL is prone to overfitting, as its
training objective can result in overly concentrated probability distributions.
To mitigate this, we propose regularizing EDL by incorporating an information
bottleneck (IB). Our approach IB-EDL suppresses spurious information in the
evidence generated by the model and encourages truly predictive information to
influence both the predictions and uncertainty estimates. Extensive experiments
across various fine-tuned LLMs and tasks demonstrate that IB-EDL outperforms
both existing EDL and non-EDL approaches. By improving the trustworthiness of
LLMs, IB-EDL facilitates their broader adoption in domains requiring high
levels of confidence calibration. Code is available at
https://github.com/sandylaker/ib-edl.
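EDL's single-forward-pass uncertainty comes from treating network outputs as evidence for a Dirichlet distribution over class probabilities. A minimal sketch of that standard EDL formulation follows (illustrative only, not the authors' IB-EDL implementation; the softplus evidence mapping is one common choice):

```python
import numpy as np

def evidential_prediction(logits):
    """Map raw logits to Dirichlet evidence, expected class probabilities,
    and a scalar uncertainty (standard EDL formulation)."""
    evidence = np.log1p(np.exp(logits))   # softplus keeps evidence non-negative
    alpha = evidence + 1.0                # Dirichlet concentration parameters
    strength = alpha.sum()                # total evidence S
    probs = alpha / strength              # expected class probabilities
    uncertainty = len(alpha) / strength   # K/S: high when evidence is scarce
    return probs, uncertainty

# Strong evidence for one class -> low uncertainty
p_conf, u_conf = evidential_prediction(np.array([10.0, -5.0, -5.0]))
# Almost no evidence for any class -> near-uniform probabilities, uncertainty near 1
p_unsure, u_unsure = evidential_prediction(np.array([-20.0, -20.0, -20.0]))
```

The overfitting the abstract mentions corresponds to the evidence (and hence the total strength S) growing without bound, which is what the information-bottleneck regularizer counteracts.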
|
2502.06352
|
LANTERN++: Enhanced Relaxed Speculative Decoding with Static Tree
Drafting for Visual Auto-regressive Models
|
cs.CV
|
Speculative decoding has been widely used to accelerate autoregressive (AR)
text generation. However, its effectiveness in visual AR models remains limited
due to token selection ambiguity, where multiple tokens receive similarly low
probabilities, reducing acceptance rates. While dynamic tree drafting has been
proposed to improve speculative decoding, we show that it fails to mitigate
token selection ambiguity, resulting in shallow draft trees and suboptimal
acceleration. To address this, we introduce LANTERN++, a novel framework that
integrates static tree drafting with a relaxed acceptance condition, allowing
drafts to be selected independently of low-confidence predictions. This enables
deeper accepted sequences, improving decoding efficiency while preserving image
quality. Extensive experiments on state-of-the-art visual AR models demonstrate
that LANTERN++ significantly accelerates inference, achieving up to
$\mathbf{\times 2.56}$ speedup over standard AR decoding while maintaining high
image quality.
|
2502.06354
|
Guidance-based Diffusion Models for Improving Photoacoustic Image Quality
|
cs.CV
|
Photoacoustic (PA) imaging is a non-destructive and non-invasive technology
for visualizing minute blood vessel structures in the body using ultrasonic
sensors. In PA imaging, the image quality of a single-shot image is poor, and
it is necessary to improve the image quality by averaging many single-shot
images. Therefore, imaging the entire subject requires high imaging costs. In
our study, we propose a method to improve the quality of PA images using
diffusion models. In our method, we improve the reverse diffusion process using
the sensor information of PA imaging and introduce a guidance method based on
imaging condition information to generate high-quality images.
|
2502.06355
|
Fine-tuning Multimodal Transformers on Edge: A Parallel Split Learning
Approach
|
cs.DC cs.LG
|
Multimodal transformers integrate diverse data types like images, audio, and
text, advancing tasks such as audio-visual understanding and image-text
retrieval; yet their high parameterization limits deployment on
resource-constrained edge devices. Split Learning (SL), which partitions models
at a designated cut-layer to offload compute-intensive operations to the
server, offers a promising approach for distributed training of multimodal
transformers, though its application remains underexplored. We present MPSL, a
parallel SL approach for computationally efficient fine-tuning of multimodal
transformers in a distributed manner, while eliminating label sharing, client
synchronization, and per-client sub-model management. MPSL employs lightweight
client-side tokenizers and a unified modality-agnostic encoder, allowing
flexible adaptation to task-specific needs. Our evaluation across 7 multimodal
datasets demonstrates that MPSL matches or outperforms Federated Learning,
reduces client-side computations by 250x, and achieves superior scalability in
communication cost with model growth. Through extensive analysis, we highlight
task suitability, trade-offs, and scenarios where MPSL excels, inspiring
further exploration.
|
2502.06358
|
Towards bandit-based prompt-tuning for in-the-wild foundation agents
|
cs.LG
|
Prompting has emerged as the dominant paradigm for adapting large,
pre-trained transformer-based models to downstream tasks. The Prompting
Decision Transformer (PDT) enables large-scale, multi-task offline
reinforcement learning pre-training by leveraging stochastic trajectory prompts
to identify the target task. However, these prompts are sampled uniformly from
expert demonstrations, overlooking a critical limitation: Not all prompts are
equally informative for differentiating between tasks. To address this, we
propose an inference-time bandit-based prompt-tuning framework that explores
and optimizes trajectory prompt selection to enhance task performance. Our
experiments indicate not only clear performance gains due to bandit-based
prompt-tuning, but also better sample complexity, scalability, and prompt space
exploration compared to prompt-tuning baselines.
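The bandit view of prompt selection can be sketched with a plain UCB1 loop over a pool of candidate trajectory prompts. The reward model (Gaussian noise around a per-prompt mean return) and the exploration constant below are illustrative assumptions, not the paper's algorithm:

```python
import math
import random

def ucb_prompt_selection(prompt_returns, horizon=2000, c=2.0, seed=0):
    """UCB1 over a pool of candidate trajectory prompts.
    prompt_returns[i] is the (hidden) mean episodic return when prompt i
    is used; observed rewards are simulated with Gaussian noise."""
    rng = random.Random(seed)
    n = len(prompt_returns)
    counts = [0] * n
    means = [0.0] * n
    for t in range(1, horizon + 1):
        if t <= n:  # play each prompt once to initialize estimates
            arm = t - 1
        else:       # pick the prompt with the highest optimism bonus
            arm = max(range(n),
                      key=lambda i: means[i] + c * math.sqrt(math.log(t) / counts[i]))
        reward = prompt_returns[arm] + rng.gauss(0.0, 0.5)
        counts[arm] += 1
        means[arm] += (reward - means[arm]) / counts[arm]  # running mean update
    return counts

# Prompt 2 is the most informative (highest mean return) and should dominate.
counts = ucb_prompt_selection([0.2, 0.5, 1.0, 0.4])
```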
|
2502.06359
|
Occlusion-Aware Contingency Safety-Critical Planning for Autonomous
Vehicles
|
cs.RO
|
Ensuring safe driving while maintaining travel efficiency for autonomous
vehicles in dynamic and occluded environments is a critical challenge. This
paper proposes an occlusion-aware contingency safety-critical planning approach
for real-time autonomous driving in such environments. Leveraging reachability
analysis for risk assessment, forward reachable sets of occluded phantom
vehicles are computed to quantify dynamic velocity boundaries. These velocity
boundaries are incorporated into a biconvex nonlinear programming (NLP)
formulation, enabling simultaneous optimization of exploration and fallback
trajectories within a receding horizon planning framework. To facilitate
real-time optimization and ensure coordination between trajectories, we employ
the consensus alternating direction method of multipliers (ADMM) to decompose
the biconvex NLP problem into low-dimensional convex subproblems. The
effectiveness of the proposed approach is validated through simulation studies
and real-world experiments in occluded intersections. Experimental results
demonstrate enhanced safety and improved travel efficiency, enabling real-time
safe trajectory generation in dynamic occluded intersections under varying
obstacle conditions. A video showcasing the experimental results is available
at https://youtu.be/CHayG7NChqM.
|
2502.06361
|
Weld n'Cut: Automated fabrication of inflatable fabric actuators
|
cs.RO
|
Lightweight, durable textile-based inflatable soft actuators are widely used
in soft robotics, particularly for wearable robots in rehabilitation and in
enhancing human performance in demanding jobs. Fabricating these actuators
typically involves multiple steps: heat-sealable fabrics are fused with a heat
press, and non-stick masking layers define internal chambers. These layers must
be carefully removed post-fabrication, often making the process labor-intensive
and prone to errors. To address these challenges and improve the accuracy and
performance of inflatable actuators, we introduce the Weld n'Cut platform, an
open-source, automated manufacturing process that combines ultrasonic welding
for fusing textile layers with an oscillating knife for precise cuts, enabling
the creation of complex inflatable structures. We demonstrate the machine's
performance across various materials and designs with arbitrarily complex
geometries.
|
2502.06362
|
Proprioceptive Origami Manipulator
|
cs.RO
|
Origami offers a versatile framework for designing morphable structures and
soft robots by exploiting the geometry of folds. Tubular origami structures can
act as continuum manipulators that balance flexibility and strength. However,
precise control of such manipulators often requires reliance on vision-based
systems that limit their application in complex and cluttered environments.
Here, we propose a proprioceptive tendon-driven origami manipulator without
compromising its flexibility. Using conductive threads as actuating tendons, we
multiplex them with proprioceptive sensing capabilities. The change in the
active length of the tendons is reflected in their effective resistance, which
can be measured with a simple circuit. We correlated the change in the
resistance to the lengths of the tendons. We input this information into a
forward kinematic model to reconstruct the manipulator configuration and
end-effector position. This platform provides a foundation for the closed-loop
control of continuum origami manipulators while preserving their inherent
flexibility.
|
2502.06363
|
Improved Regret Analysis in Gaussian Process Bandits: Optimality for
Noiseless Reward, RKHS norm, and Non-Stationary Variance
|
cs.LG stat.ML
|
We study the Gaussian process (GP) bandit problem, whose goal is to minimize
regret under an unknown reward function lying in some reproducing kernel
Hilbert space (RKHS). The maximum posterior variance analysis is vital in
analyzing near-optimal GP bandit algorithms such as maximum variance reduction
(MVR) and phased elimination (PE). Therefore, we first show a new upper bound
on the maximum posterior variance, which improves the dependence on the noise
variance parameters of the GP. By leveraging this result, we refine MVR and
PE to obtain (i) a nearly optimal regret upper bound in the noiseless setting
and (ii) regret upper bounds that are optimal with respect to the RKHS norm of
the reward function. Furthermore, as another application of our proposed bound,
we analyze the GP bandit under the time-varying noise variance setting, which
is the kernelized extension of the linear bandit with heteroscedastic noise.
For this problem, we show that MVR and PE-based algorithms achieve noise
variance-dependent regret upper bounds, which match our regret lower bound.
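The quantity at the heart of the analysis, the GP posterior variance, follows the standard formula $\sigma^2(x) = k(x,x) - k(x,X)^\top (K + \lambda I)^{-1} k(X,x)$. A small numerical sketch (RBF kernel and noise level chosen purely for illustration) shows how the maximum posterior variance shrinks as observations accumulate:

```python
import numpy as np

def rbf_kernel(A, B, lengthscale=1.0):
    """Squared-exponential kernel matrix between row-wise point sets A, B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * lengthscale ** 2))

def posterior_variance(x, X, noise_var=0.1):
    """GP posterior variance at query points x given observed inputs X:
    sigma^2(x) = k(x,x) - k(x,X)(K + noise_var*I)^{-1} k(X,x),
    with k(x,x) = 1 for the RBF kernel."""
    K = rbf_kernel(X, X) + noise_var * np.eye(len(X))
    kx = rbf_kernel(x, X)
    return 1.0 - np.einsum('ij,jk,ik->i', kx, np.linalg.inv(K), kx)

grid = np.linspace(0, 1, 50)[:, None]
few = np.array([[0.5]])                    # a single observed input
many = np.linspace(0, 1, 10)[:, None]      # a denser design over the domain
v_few = posterior_variance(grid, few)
v_many = posterior_variance(grid, many)    # max posterior variance drops
```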
|
2502.06364
|
Automatic Identification of Samples in Hip-Hop Music via Multi-Loss
Training and an Artificial Dataset
|
cs.SD cs.LG eess.AS
|
Sampling, the practice of reusing recorded music or sounds from another
source in a new work, is common in popular music genres like hip-hop and rap.
Numerous services have emerged that allow users to identify connections between
samples and the songs that incorporate them, with the goal of enhancing music
discovery. Designing a system that can perform the same task automatically is
challenging, as samples are commonly altered with audio effects like pitch- and
time-stretching and may only be seconds long. Progress on this task has been
minimal and is further blocked by the limited availability of training data.
Here, we show that a convolutional neural network trained on an artificial
dataset can identify real-world samples in commercial hip-hop music. We extract
vocal, harmonic, and percussive elements from several databases of
non-commercial music recordings using audio source separation, and train the
model to fingerprint a subset of these elements in transformed versions of the
original audio. We optimize the model using a joint classification and metric
learning loss and show that it achieves 13% greater precision on real-world
instances of sampling than a fingerprinting system using acoustic landmarks,
and that it can recognize samples that have been both pitch shifted and time
stretched. We also show that, for half of the commercial music recordings we
tested, our model is capable of locating the position of a sample to within
five seconds.
|
2502.06367
|
FOCUS -- Multi-View Foot Reconstruction From Synthetically Trained Dense
Correspondences
|
cs.CV
|
Surface reconstruction from multiple, calibrated images is a challenging task,
often requiring a large number of collected images with significant overlap.
We look at the specific case of human foot reconstruction. As with previous
successful foot reconstruction work, we seek to extract rich per-pixel geometry
cues from multi-view RGB images, and fuse these into a final 3D object. Our
method, FOCUS, tackles this problem with 3 main contributions: (i) SynFoot2, an
extension of an existing synthetic foot dataset to include a new data type:
dense correspondence with the parameterized foot model FIND; (ii) an
uncertainty-aware dense correspondence predictor trained on our synthetic
dataset; (iii) two methods for reconstructing a 3D surface from dense
correspondence predictions: one inspired by Structure-from-Motion, and one
optimization-based using the FIND model. We show that our reconstruction
achieves state-of-the-art reconstruction quality in a few-view setting,
performing comparably to state-of-the-art when many views are available, and
runs substantially faster. We release our synthetic dataset to the research
community. Code is available at: https://github.com/OllieBoyne/FOCUS
|
2502.06374
|
Hyperparameters in Score-Based Membership Inference Attacks
|
cs.LG cs.AI
|
Membership Inference Attacks (MIAs) have emerged as a valuable framework for
evaluating privacy leakage by machine learning models. Score-based MIAs are
distinguished, in particular, by their ability to exploit the confidence scores
that the model generates for particular inputs. Existing score-based MIAs
implicitly assume that the adversary has access to the target model's
hyperparameters, which can be used to train the shadow models for the attack.
In this work, we demonstrate that the knowledge of target hyperparameters is
not a prerequisite for MIA in the transfer learning setting. Based on this, we
propose a novel approach to select the hyperparameters for training the shadow
models for MIA when the attacker has no prior knowledge about them by matching
the output distributions of target and shadow models. We demonstrate that using
the new approach yields hyperparameters that lead to an attack near
indistinguishable in performance from an attack that uses target
hyperparameters to train the shadow models. Furthermore, we study the empirical
privacy risk of unaccounted use of training data for hyperparameter
optimization (HPO) in differentially private (DP) transfer learning. We find no
statistically significant evidence that performing HPO using training data
would increase vulnerability to MIA.
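The distribution-matching idea can be sketched as follows: score each candidate hyperparameter setting by the distance between its shadow model's output-score distribution and the target model's, and pick the closest. The Kolmogorov-Smirnov statistic and the candidate settings below are illustrative assumptions, not the paper's exact criterion:

```python
import numpy as np

def ks_distance(a, b):
    """Two-sample Kolmogorov-Smirnov statistic between score samples."""
    grid = np.sort(np.concatenate([a, b]))
    cdf_a = np.searchsorted(np.sort(a), grid, side='right') / len(a)
    cdf_b = np.searchsorted(np.sort(b), grid, side='right') / len(b)
    return np.abs(cdf_a - cdf_b).max()

def select_shadow_hyperparams(target_scores, candidates):
    """Pick the hyperparameter setting whose shadow-model score
    distribution best matches the target model's output scores.
    `candidates` maps a setting name to that shadow model's scores."""
    return min(candidates, key=lambda h: ks_distance(target_scores, candidates[h]))

rng = np.random.default_rng(0)
target = rng.normal(2.0, 1.0, 500)            # target model confidence scores
candidates = {
    'lr=0.1':   rng.normal(0.0, 1.0, 500),
    'lr=0.01':  rng.normal(2.1, 1.0, 500),    # closest to the target distribution
    'lr=0.001': rng.normal(5.0, 1.0, 500),
}
best = select_shadow_hyperparams(target, candidates)
```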
|
2502.06376
|
Many-Task Federated Fine-Tuning via Unified Task Vectors
|
cs.LG cs.CV
|
Federated Learning (FL) traditionally assumes homogeneous client tasks;
however, in real-world scenarios, clients often specialize in diverse tasks,
introducing task heterogeneity. To address this challenge, Many-Task FL
(MaT-FL) has emerged, enabling clients to collaborate effectively despite task
diversity. Existing MaT-FL approaches rely on client grouping or personalized
layers, requiring the server to manage individual models and failing to account
for clients handling multiple tasks. We propose MaTU, a MaT-FL approach that
enables joint learning of task vectors across clients, eliminating the need for
clustering or client-specific weight storage at the server. Our method
introduces a novel aggregation mechanism that determines task similarity based
on the direction of clients' task vectors and constructs a unified task vector
encapsulating all tasks. To address task-specific requirements, we augment the
unified task vector with lightweight modulators that facilitate knowledge
transfer among related tasks while disentangling dissimilar ones. Evaluated
across 30 datasets, MaTU achieves superior performance over state-of-the-art
MaT-FL approaches, with results comparable to per-task fine-tuning, while
delivering significant communication savings.
|
2502.06379
|
Solving Linear-Gaussian Bayesian Inverse Problems with Decoupled
Diffusion Sequential Monte Carlo
|
cs.LG cs.AI stat.ML
|
A recent line of research has exploited pre-trained generative diffusion
models as priors for solving Bayesian inverse problems. We contribute to this
research direction by designing a sequential Monte Carlo method for
linear-Gaussian inverse problems which builds on "decoupled diffusion", where
the generative process is designed such that larger updates to the sample are
possible. The method is asymptotically exact and we demonstrate the
effectiveness of our Decoupled Diffusion Sequential Monte Carlo (DDSMC)
algorithm on both synthetic data and image reconstruction tasks. Further, we
demonstrate how the approach can be extended to discrete data.
|
2502.06380
|
Structure-preserving contrastive learning for spatial time series
|
cs.LG cs.CV
|
Informative representations enhance model performance and generalisability in
downstream tasks. However, learning self-supervised representations for
spatially characterised time series, like traffic interactions, poses
challenges as it requires maintaining fine-grained similarity relations in the
latent space. In this study, we incorporate two structure-preserving
regularisers for the contrastive learning of spatial time series: one
regulariser preserves the topology of similarities between instances, and the
other preserves the graph geometry of similarities across spatial and temporal
dimensions. To balance contrastive learning and structure preservation, we
propose a dynamic mechanism that adaptively weighs the trade-off and stabilises
training. We conduct experiments on multivariate time series classification, as
well as macroscopic and microscopic traffic prediction. For all three tasks,
our approach preserves the structures of similarity relations more effectively
and improves state-of-the-art task performances. The proposed approach can be
applied to an arbitrary encoder and is particularly beneficial for time series
with spatial or geographical features. Furthermore, this study suggests that
higher similarity structure preservation indicates more informative and useful
representations. This may help to understand the contribution of representation
learning in pattern recognition with neural networks. Our code is made openly
accessible with all resulting data at https://github.com/yiru-jiao/spclt.
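One generic way to preserve similarity structure (a sketch in the spirit of the paper's regularisers, not their exact formulation) is to penalise the mismatch between normalised pairwise-distance matrices in the input and latent spaces:

```python
import numpy as np

def pairwise_dist(X):
    """Euclidean distance matrix between the rows of X."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.sqrt(np.maximum(d2, 0.0))

def structure_loss(inputs, latents):
    """Penalise mismatch between the normalised pairwise-distance
    structure of an input batch and its latent representations."""
    D_in = pairwise_dist(inputs)
    D_lat = pairwise_dist(latents)
    D_in = D_in / (D_in.max() + 1e-12)    # normalising makes the loss
    D_lat = D_lat / (D_lat.max() + 1e-12)  # invariant to a global rescaling
    return ((D_in - D_lat) ** 2).mean()

rng = np.random.default_rng(1)
X = rng.normal(size=(32, 8))          # a batch of spatial time-series features
Z_scaled = 3.0 * X                    # globally rescaled: structure intact
Z_noise = rng.normal(size=(32, 8))    # unrelated latents: structure destroyed
```

A regulariser like this can be added to any contrastive objective; the paper's dynamic mechanism then adaptively weighs the two terms.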
|
2502.06387
|
How Humans Help LLMs: Assessing and Incentivizing Human Preference
Annotators
|
cs.LG cs.GT econ.TH
|
Human-annotated preference data play an important role in aligning large
language models (LLMs). In this paper, we investigate the questions of
assessing the performance of human annotators and incentivizing them to provide
high-quality annotations. The quality assessment of language/text annotation
faces two challenges: (i) the intrinsic heterogeneity among annotators, which
prevents the classic methods that assume the underlying existence of a true
label; and (ii) the unclear relationship between the annotation quality and the
performance of downstream tasks, which excludes the possibility of inferring
the annotators' behavior based on the model performance trained from the
annotation data. We then formulate a principal-agent model to characterize the
behaviors of and the interactions between the company and the human annotators.
The model rationalizes a practical mechanism of a bonus scheme to incentivize
annotators that benefits both parties, and it underscores the importance of the
joint presence of an assessment system and a proper contract scheme. From a
technical perspective, our analysis extends the existing literature on the
principal-agent model by considering a continuous action space for the agent.
We show that the gap between the first-best and the second-best solutions (under
the continuous action space) is $\Theta(1/\sqrt{n \log n})$ for the binary
contracts and $\Theta(1/n)$ for the linear contracts, where $n$ is the number
of samples used for performance assessment; this contrasts with the known
result of $\exp(-\Theta(n))$ for the binary contracts when the action space is
discrete. Throughout the paper, we use real preference annotation data to
accompany our discussions.
|
2502.06390
|
When Data Manipulation Meets Attack Goals: An In-depth Survey of Attacks
for VLMs
|
cs.CV
|
Vision-Language Models (VLMs) have gained considerable prominence in recent
years due to their remarkable capability to effectively integrate and process
both textual and visual information. This integration has significantly
enhanced performance across a diverse spectrum of applications, such as scene
perception and robotics. However, the deployment of VLMs has also given rise to
critical safety and security concerns, necessitating extensive research to
assess the potential vulnerabilities these VLM systems may harbor. In this
work, we present an in-depth survey of the attack strategies tailored for VLMs.
We categorize these attacks based on their underlying objectives, namely
jailbreak, camouflage, and exploitation, while also detailing the various
methodologies employed for data manipulation of VLMs. Meanwhile, we outline
corresponding defense mechanisms that have been proposed to mitigate these
vulnerabilities. By discerning key connections and distinctions among the
diverse types of attacks, we propose a compelling taxonomy for VLM attacks.
Moreover, we summarize the evaluation metrics that comprehensively describe the
characteristics and impact of different attacks on VLMs. Finally, we conclude
with a discussion of promising future research directions that could further
enhance the robustness and safety of VLMs, emphasizing the importance of
ongoing exploration in this critical area of study. To facilitate community
engagement, we maintain an up-to-date project page, accessible at:
https://github.com/AobtDai/VLM_Attack_Paper_List.
|
2502.06392
|
TANGLED: Generating 3D Hair Strands from Images with Arbitrary Styles
and Viewpoints
|
cs.CV cs.GR
|
Hairstyles are intricate and culturally significant with various geometries,
textures, and structures. Existing text or image-guided generation methods fail
to handle the richness and complexity of diverse styles. We present TANGLED, a
novel approach for 3D hair strand generation that accommodates diverse image
inputs across styles, viewpoints, and quantities of input views. TANGLED
employs a three-step pipeline. First, our MultiHair Dataset provides 457
diverse hairstyles annotated with 74 attributes, emphasizing complex and
culturally significant styles to improve model generalization. Second, we
propose a diffusion framework conditioned on multi-view linearts that can
capture topological cues (e.g., strand density and parting lines) while
filtering out noise. By leveraging a latent diffusion model with
cross-attention on lineart features, our method achieves flexible and robust 3D
hair generation across diverse input conditions. Third, a parametric
post-processing module enforces braid-specific constraints to maintain
coherence in complex structures. This framework not only advances hairstyle
realism and diversity but also enables culturally inclusive digital avatars and
novel applications like sketch-based 3D strand editing for animation and
augmented reality.
|
2502.06394
|
SynthDetoxM: Modern LLMs are Few-Shot Parallel Detoxification Data
Annotators
|
cs.CL
|
Existing approaches to multilingual text detoxification are hampered by the
scarcity of parallel multilingual datasets. In this work, we introduce a
pipeline for the generation of multilingual parallel detoxification data. We
also introduce SynthDetoxM, a manually collected and synthetically generated
multilingual parallel text detoxification dataset comprising 16,000
high-quality detoxification sentence pairs across German, French, Spanish and
Russian. The data was sourced from different toxicity evaluation datasets and
then rewritten with nine modern open-source LLMs in a few-shot setting. Our
experiments demonstrate that models trained on the produced synthetic datasets
have superior performance to those trained on the human-annotated
MultiParaDetox dataset even in a data-limited setting. Models trained on
SynthDetoxM outperform all evaluated LLMs in a few-shot setting. We release our
dataset and code to help further research in multilingual text detoxification.
|
2502.06395
|
AppVLM: A Lightweight Vision Language Model for Online App Control
|
cs.AI
|
The utilisation of foundation models as smartphone assistants, termed app
agents, is a critical research challenge. These agents aim to execute human
instructions on smartphones by interpreting textual instructions and performing
actions via the device's interface. While promising, current approaches face
significant limitations. Methods that use large proprietary models, such as
GPT-4o, are computationally expensive, while those that use smaller fine-tuned
models often lack adaptability to out-of-distribution tasks. In this work, we
introduce AppVLM, a lightweight Vision-Language Model (VLM). First, we
fine-tune it offline on the AndroidControl dataset. Then, we refine its policy
by collecting data from the AndroidWorld environment and performing further
training iterations. Our results indicate that AppVLM achieves the highest
action prediction accuracy in offline evaluation on the AndroidControl dataset,
compared to all evaluated baselines, and matches GPT-4o in online task
completion success rate in the AndroidWorld environment, while being up to ten
times faster. This makes AppVLM a practical and efficient solution for
real-world deployment.
|
2502.06398
|
Learning Counterfactual Outcomes Under Rank Preservation
|
cs.LG stat.ML
|
Counterfactual inference aims to estimate the counterfactual outcome at the
individual level given knowledge of an observed treatment and the factual
outcome, with broad applications in fields such as epidemiology, econometrics,
and management science. Previous methods rely on a known structural causal
model (SCM) or assume the homogeneity of the exogenous variable and strict
monotonicity between the outcome and exogenous variable. In this paper, we
propose a principled approach for identifying and estimating the counterfactual
outcome. We first introduce a simple and intuitive rank preservation assumption
to identify the counterfactual outcome without relying on a known structural
causal model. Building on this, we propose a novel ideal loss for theoretically
unbiased learning of the counterfactual outcome and further develop a
kernel-based estimator for its empirical estimation. Our theoretical analysis
shows that the rank preservation assumption is not stronger than the
homogeneity and strict monotonicity assumptions, and shows that the proposed
ideal loss is convex, and the proposed estimator is unbiased. Extensive
semi-synthetic and real-world experiments are conducted to demonstrate the
effectiveness of the proposed method.
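Rank preservation says an individual occupies the same quantile in both potential-outcome distributions, so the counterfactual outcome can be read off by quantile mapping between the two arms. A minimal empirical sketch (not the paper's kernel-based estimator):

```python
import numpy as np

def counterfactual_by_rank(y_factual, factual_arm, counterfactual_arm):
    """Under rank preservation, an individual's counterfactual outcome is
    the value at the same quantile rank in the other arm's outcome
    distribution."""
    rank = (factual_arm <= y_factual).mean()   # empirical quantile of y_factual
    return np.quantile(counterfactual_arm, rank)

rng = np.random.default_rng(0)
y0 = rng.normal(0.0, 1.0, 10000)   # outcomes under control
y1 = y0 + 2.0                      # treatment shifts every unit by +2 (ranks preserved)

# A unit observed under control with outcome 0.5 should map to roughly 2.5.
cf = counterfactual_by_rank(0.5, y0, y1)
```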
|
2502.06399
|
A Linearly Convergent Algorithm for Computing the Petz-Augustin
Information
|
quant-ph cs.IT math.IT math.OC
|
We propose an iterative algorithm for computing the Petz-Augustin information
of order $\alpha\in(1/2,1)\cup(1,\infty)$. The optimization error is guaranteed
to converge at a rate of $O\left(\vert 1-1/\alpha \vert^T\right)$, where $T$ is
the number of iterations. Let $n$ denote the cardinality of the input alphabet
of the classical-quantum channel, and $d$ the dimension of the quantum states.
The algorithm has an initialization time complexity of $O\left(n d^{3}\right)$
and a per-iteration time complexity of $O\left(n d^{2}+d^3\right)$. To the best
of our knowledge, this is the first algorithm for computing the Petz-Augustin
information with a non-asymptotic convergence guarantee.
|
2502.06401
|
Habitizing Diffusion Planning for Efficient and Effective Decision
Making
|
cs.LG
|
Diffusion models have shown great promise in decision-making, also known as
diffusion planning. However, the slow inference speeds limit their potential
for broader real-world applications. Here, we introduce Habi, a general
framework that transforms powerful but slow diffusion planning models into fast
decision-making models, mimicking the cognitive process in the brain whereby
costly goal-directed behavior gradually transitions to efficient habitual
behavior with repetitive practice. Even using a laptop CPU, the habitized model
can achieve an average 800+ Hz decision-making frequency (faster than previous
diffusion planners by orders of magnitude) on standard offline reinforcement
learning benchmarks D4RL, while maintaining comparable or even higher
performance compared to its corresponding diffusion planner. Our work proposes
a fresh perspective of leveraging powerful diffusion models for real-world
decision-making tasks. We also provide robust evaluations and analysis,
offering insights from both biological and engineering perspectives for
efficient and effective decision-making.
|
2502.06403
|
The AI off-switch problem as a signalling game: bounded rationality and
incomparability
|
cs.LG
|
The off-switch problem is a critical challenge in AI control: if an AI system
resists being switched off, it poses a significant risk. In this paper, we
model the off-switch problem as a signalling game, where a human decision-maker
communicates its preferences about some underlying decision problem to an AI
agent, which then selects actions to maximise the human's utility. We assume
that the human is a bounded rational agent and explore various bounded
rationality mechanisms. Using real machine learning models, we reprove prior
results and demonstrate that a necessary condition for an AI system to refrain
from disabling its off-switch is its uncertainty about the human's utility. We
also analyse how message costs influence optimal strategies and extend the
analysis to scenarios involving incomparability.
|
2502.06407
|
An Automated Machine Learning Framework for Surgical Suturing Action
Detection under Class Imbalance
|
cs.LG cs.RO
|
In laparoscopy surgical training and evaluation, real-time detection of
surgical actions with interpretable outputs is crucial for automated and
real-time instructional feedback and skill development. Such a capability would
enable the development of machine-guided training systems. This paper presents a
rapid deployment approach utilizing automated machine learning methods, based
on surgical action data collected from both experienced and trainee surgeons.
The proposed approach effectively tackles the challenge of highly imbalanced
class distributions, ensuring robust predictions across varying skill levels of
surgeons. Additionally, our method partially incorporates model transparency,
addressing the reliability requirements in medical applications. Compared to
deep learning approaches, traditional machine learning models not only
facilitate efficient rapid deployment but also offer significant advantages in
interpretability. Through experiments, this study demonstrates the potential of
this approach to provide quick, reliable and effective real-time detection in
surgical training environments.
|
2502.06412
|
Toolbox for Developing Physics Informed Neural Networks for Power
Systems Components
|
eess.SY cs.SY
|
This paper puts forward the vision of creating a library of
neural-network-based models for power system simulations. Traditional numerical
solvers struggle with the growing complexity of modern power systems,
necessitating faster and more scalable alternatives. Physics-Informed Neural
Networks (PINNs) offer promise for quickly solving the ordinary differential
equations (ODEs) governing power system dynamics. This is vital for the
reliability, cost optimization, and real-time decision-making in the
electricity grid. Despite their potential, standardized frameworks to train
PINNs remain scarce. This poses a barrier for the broader adoption and
reproducibility of PINNs; it also does not allow the streamlined creation of a
PINN-based model library. This paper addresses these gaps. It introduces a
Python-based toolbox for developing PINNs tailored to power system components,
available on GitHub at https://github.com/radiakos/PowerPINN. Using this
framework, we capture the dynamic characteristics of a 9th-order system, which
is probably the most complex power system component trained with a PINN to
date, demonstrating the toolbox's capabilities, limitations, and potential
improvements. The toolbox is open and free to use by anyone interested in
creating PINN-based models for power system components.
|
2502.06415
|
Systematic Outliers in Large Language Models
|
cs.CL cs.AI cs.LG
|
Outliers have been widely observed in Large Language Models (LLMs),
significantly impacting model performance and posing challenges for model
compression. Understanding the functionality and formation mechanisms of these
outliers is critically important. Existing works, however, largely focus on
reducing the impact of outliers from an algorithmic perspective, lacking an
in-depth investigation into their causes and roles. In this work, we provide a
detailed analysis of the formation process, underlying causes, and functions of
outliers in LLMs. We define and categorize three types of outliers-activation
outliers, weight outliers, and attention outliers-and analyze their
distributions across different dimensions, uncovering inherent connections
between their occurrences and their ultimate influence on the attention
mechanism. Based on these observations, we hypothesize and explore the
mechanisms by which these outliers arise and function, demonstrating through
theoretical derivations and experiments that they emerge due to the
self-attention mechanism's softmax operation. These outliers act as implicit
context-aware scaling factors within the attention mechanism. As these outliers
stem from systematic influences, we term them systematic outliers. Our study
not only enhances the understanding of Transformer-based LLMs but also shows
that structurally eliminating outliers can accelerate convergence and improve
model compression. The code is available at
https://github.com/an-yongqi/systematic-outliers.
|
2502.06418
|
Robust Watermarks Leak: Channel-Aware Feature Extraction Enables
Adversarial Watermark Manipulation
|
cs.CV cs.CR
|
Watermarking plays a key role in the provenance and detection of AI-generated
content. While existing methods prioritize robustness against real-world
distortions (e.g., JPEG compression and noise addition), we reveal a
fundamental tradeoff: such robust watermarks inherently increase the redundancy
of detectable patterns encoded into images, creating exploitable information
leakage. To leverage this, we propose an attack framework that extracts leakage
of watermark patterns through multi-channel feature learning using a
pre-trained vision model. Unlike prior works requiring massive data or detector
access, our method achieves both forgery and detection evasion with a single
watermarked image. Extensive experiments demonstrate that our method achieves a
60\% success rate gain in detection evasion and 51\% improvement in forgery
accuracy compared to state-of-the-art methods while maintaining visual
fidelity. Our work exposes the robustness-stealthiness paradox: current
"robust" watermarks sacrifice security for distortion resistance, providing
insights for future watermark design.
|
2502.06419
|
Occ-LLM: Enhancing Autonomous Driving with Occupancy-Based Large
Language Models
|
cs.RO
|
Large Language Models (LLMs) have made substantial advancements in robotics
and autonomous driving. This study presents the first Occupancy-based Large
Language Model (Occ-LLM), a pioneering effort to integrate LLMs with occupancy,
an important scene representation for autonomous driving. To effectively
encode occupancy as input for the LLM and address the category imbalances
associated with occupancy, we propose Motion Separation Variational Autoencoder
(MS-VAE). This innovative approach utilizes prior knowledge to distinguish
dynamic objects from static scenes before inputting them into a tailored
Variational Autoencoder (VAE). This separation enhances the model's capacity to
concentrate on dynamic trajectories while effectively reconstructing static
scenes. The efficacy of Occ-LLM has been validated across key tasks, including
4D occupancy forecasting, self-ego planning, and occupancy-based scene question
answering. Comprehensive evaluations demonstrate that Occ-LLM significantly
surpasses existing state-of-the-art methodologies, achieving gains of about 6\%
in Intersection over Union (IoU) and 4\% in mean Intersection over Union (mIoU)
for the task of 4D occupancy forecasting. These findings highlight the
transformative potential of Occ-LLM in reshaping current paradigms within
robotics and autonomous driving.
|
2502.06424
|
CS-SHAP: Extending SHAP to Cyclic-Spectral Domain for Better
Interpretability of Intelligent Fault Diagnosis
|
cs.LG cs.AI
|
Neural networks (NNs), with their powerful nonlinear mapping and end-to-end
capabilities, are widely applied in mechanical intelligent fault diagnosis
(IFD). However, as typical black-box models, they pose challenges in
understanding their decision basis and logic, limiting their deployment in
high-reliability scenarios. Hence, various methods have been proposed to
enhance the interpretability of IFD. Among these, post-hoc approaches can
provide explanations without changing model architecture, preserving its
flexibility and scalability. However, existing post-hoc methods often suffer
from limitations in explanation forms. They either require preprocessing that
disrupts the end-to-end nature or overlook fault mechanisms, leading to
suboptimal explanations. To address these issues, we derived the
cyclic-spectral (CS) transform and proposed the CS-SHAP by extending Shapley
additive explanations (SHAP) to the CS domain. CS-SHAP can evaluate
contributions from both carrier and modulation frequencies, aligning more
closely with fault mechanisms and delivering clearer and more accurate
explanations. Three datasets are utilized to validate the superior
interpretability of CS-SHAP, ensuring its correctness, reproducibility, and
practical performance. With open-source code and outstanding interpretability,
CS-SHAP has the potential to be widely adopted and become the post-hoc
interpretability benchmark in IFD, even in other classification tasks. The code
is available on https://github.com/ChenQian0618/CS-SHAP.
|
2502.06425
|
Generating Privacy-Preserving Personalized Advice with Zero-Knowledge
Proofs and LLMs
|
cs.CR cs.AI
|
Large language models (LLMs) are increasingly utilized in domains such as
finance, healthcare, and interpersonal relationships to provide advice tailored
to user traits and contexts. However, this personalization often relies on
sensitive data, raising critical privacy concerns and necessitating data
minimization. To address these challenges, we propose a framework that
integrates zero-knowledge proof (ZKP) technology, specifically zkVM, with
LLM-based chatbots. This integration enables privacy-preserving data sharing by
verifying user traits without disclosing sensitive information. Our research
introduces both an architecture and a prompting strategy for this approach.
Through empirical evaluation, we clarify the current constraints and
performance limitations of both zkVM and the proposed prompting strategy,
thereby demonstrating their practical feasibility in real-world scenarios.
|
2502.06427
|
Hybrid State-Space and GRU-based Graph Tokenization Mamba for
Hyperspectral Image Classification
|
cs.CV
|
Hyperspectral image (HSI) classification plays a pivotal role in domains such
as environmental monitoring, agriculture, and urban planning. However, it faces
significant challenges due to the high-dimensional nature of the data and the
complex spectral-spatial relationships inherent in HSI. Traditional methods,
including conventional machine learning and convolutional neural networks
(CNNs), often struggle to effectively capture these intricate spectral-spatial
features and global contextual information. Transformer-based models, while
powerful in capturing long-range dependencies, often demand substantial
computational resources, posing challenges in scenarios where labeled datasets
are limited, as is commonly seen in HSI applications. To overcome these
challenges, this work proposes GraphMamba, a hybrid model that combines
spectral-spatial token generation, graph-based token prioritization, and
cross-attention mechanisms. The model introduces a novel hybridization of
state-space modeling and Gated Recurrent Units (GRU), capturing both linear and
nonlinear spatial-spectral dynamics. GraphMamba enhances the ability to model
complex spatial-spectral relationships while maintaining scalability and
computational efficiency across diverse HSI datasets. Through comprehensive
experiments, we demonstrate that GraphMamba outperforms existing
state-of-the-art models, offering a scalable and robust solution for complex
HSI classification tasks.
|
2502.06428
|
CoS: Chain-of-Shot Prompting for Long Video Understanding
|
cs.CV
|
Multi-modal Large Language Models (MLLMs) struggle with long videos due to
the need for excessive visual tokens. These tokens massively exceed the context
length of MLLMs, and the surplus is filled with redundant, task-irrelevant shots. How to
select shots is a critical unsolved problem: sparse sampling risks missing key
details, while exhaustive sampling overwhelms the model with irrelevant
content, leading to video misunderstanding. To solve this problem, we propose
Chain-of-Shot prompting (CoS). The key idea is to frame shot selection as
test-time visual prompt optimisation, choosing shots adapted to the semantic
video-understanding task by optimising shot-task alignment. CoS has two key
parts: (1) a binary video summary mechanism that performs pseudo temporal
grounding, discovering a binary coding to identify task-relevant shots, and (2)
a video co-reasoning module that deploys the binary coding to pair (learning to
align) task-relevant positive shots with irrelevant negative shots. It embeds
the optimised shot selections into the original video, facilitating a focus on
relevant context to optimize long video understanding. Experiments across three
baselines and five datasets demonstrate the effectiveness and adaptability of
CoS. Code is available at https://lwpyh.github.io/CoS.
|
2502.06430
|
Content-Driven Local Response: Supporting Sentence-Level and
Message-Level Mobile Email Replies With and Without AI
|
cs.HC cs.CL
|
Mobile emailing demands efficiency in diverse situations, which motivates the
use of AI. However, generated text does not always reflect how people want to
respond. This challenges users with AI involvement tradeoffs not yet considered
in email UIs. We address this with a new UI concept called Content-Driven Local
Response (CDLR), inspired by microtasking. This allows users to insert
responses into the email by selecting sentences, which additionally serves to
guide AI suggestions. The concept supports combining AI for local suggestions
and message-level improvements. Our user study (N=126) compared CDLR with
manual typing and full reply generation. We found that CDLR supports flexible
workflows with varying degrees of AI involvement, while retaining the benefits
of reduced typing and errors. This work contributes a new approach to
integrating AI capabilities: By redesigning the UI for workflows with and
without AI, we can empower users to dynamically adjust AI involvement.
|
2502.06431
|
FCVSR: A Frequency-aware Method for Compressed Video Super-Resolution
|
cs.CV
|
Compressed video super-resolution (SR) aims to generate high-resolution (HR)
videos from the corresponding low-resolution (LR) compressed videos. Recently,
some compressed video SR methods attempt to exploit the spatio-temporal
information in the frequency domain, showing great promise in super-resolution
performance. However, these methods do not differentiate various frequency
subbands spatially or capture the temporal frequency dynamics, potentially
leading to suboptimal results. In this paper, we propose a deep frequency-based
compressed video SR model (FCVSR) consisting of a motion-guided adaptive
alignment (MGAA) network and a multi-frequency feature refinement (MFFR)
module. Additionally, a frequency-aware contrastive loss is proposed for
training FCVSR, in order to reconstruct finer spatial details. The proposed
model has been evaluated on three public compressed video super-resolution
datasets, with results demonstrating its effectiveness when compared to
existing works in terms of super-resolution performance (up to a 0.14dB gain in
PSNR over the second-best model) and complexity.
|
2502.06432
|
Prompt-SID: Learning Structural Representation Prompt via Latent
Diffusion for Single-Image Denoising
|
cs.CV cs.AI
|
Many studies have concentrated on constructing supervised models utilizing
paired datasets for image denoising, which proves to be expensive and
time-consuming. Current self-supervised and unsupervised approaches typically
rely on blind-spot networks or sub-image pairs sampling, resulting in pixel
information loss and destruction of detailed structural information, thereby
significantly constraining the efficacy of such methods. In this paper, we
introduce Prompt-SID, a prompt-learning-based single image denoising framework
that emphasizes the preservation of structural details. This approach is trained in a
self-supervised manner using downsampled image pairs. It captures
original-scale image information through structural encoding and integrates
this prompt into the denoiser. To achieve this, we propose a structural
representation generation model based on the latent diffusion process and
design a structural attention module within the transformer-based denoiser
architecture to decode the prompt. Additionally, we introduce a scale replay
training mechanism, which effectively mitigates the scale gap between images of
different resolutions. We conduct comprehensive experiments on synthetic,
real-world, and fluorescence imaging datasets, showcasing the remarkable
effectiveness of Prompt-SID.
|
2502.06434
|
Rethinking Large-scale Dataset Compression: Shifting Focus From Labels
to Images
|
cs.CV cs.LG
|
Dataset distillation and dataset pruning are two prominent techniques for
compressing datasets to improve computational and storage efficiency. Despite
their overlapping objectives, these approaches are rarely compared directly.
Even within each field, the evaluation protocols are inconsistent across
various methods, which complicates fair comparisons and hinders
reproducibility. Considering these limitations, we introduce in this paper a
benchmark that equitably evaluates methodologies across both distillation and
pruning literatures. Notably, our benchmark reveals that in the mainstream
dataset distillation setting for large-scale datasets, which heavily rely on
soft labels from pre-trained models, even randomly selected subsets can achieve
surprisingly competitive performance. This finding suggests that an
overemphasis on soft labels may be diverting attention from the intrinsic value
of the image data, while also imposing additional burdens in terms of
generation, storage, and application. To address these issues, we propose a new
framework for dataset compression, termed Prune, Combine, and Augment (PCA),
which focuses on leveraging image data exclusively, relies solely on hard
labels for evaluation, and achieves state-of-the-art performance in this setup.
By shifting the emphasis back to the images, our benchmark and PCA framework
pave the way for more balanced and accessible techniques in dataset compression
research. Our code is available at:
https://github.com/ArmandXiao/Rethinking-Dataset-Compression
|
2502.06438
|
FEMBA: Efficient and Scalable EEG Analysis with a Bidirectional Mamba
Foundation Model
|
cs.LG cs.AI
|
Accurate and efficient electroencephalography (EEG) analysis is essential for
detecting seizures and artifacts in long-term monitoring, with applications
spanning hospital diagnostics to wearable health devices. Robust EEG analytics
have the potential to greatly improve patient care. However, traditional deep
learning models, especially Transformer-based architectures, are hindered by
their quadratic time and memory complexity, making them less suitable for
resource-constrained environments. To address these challenges, we present
FEMBA (Foundational EEG Mamba + Bidirectional Architecture), a novel
self-supervised framework that establishes new efficiency benchmarks for EEG
analysis through bidirectional state-space modeling. Unlike Transformer-based
models, which incur quadratic time and memory complexity, FEMBA scales linearly
with sequence length, enabling more scalable and efficient processing of
extended EEG recordings. Trained on over 21,000 hours of unlabeled EEG and
fine-tuned on three downstream tasks, FEMBA achieves competitive performance in
comparison with transformer models, with significantly lower computational
cost. Specifically, it reaches 81.82% balanced accuracy (0.8921 AUROC) on TUAB
and 0.949 AUROC on TUAR, while a tiny 7.8M-parameter variant demonstrates
viability for resource-constrained devices. These results pave the way for
scalable, general-purpose EEG analytics in clinical settings and highlight FEMBA
as a promising candidate for wearable applications.
|
2502.06439
|
Testing software for non-discrimination: an updated and extended audit
in the Italian car insurance domain
|
cs.SE cs.AI cs.HC cs.LG
|
Context. As software systems become more integrated into society's
infrastructure, the responsibility of software professionals to ensure
compliance with various non-functional requirements increases. These
requirements include security, safety, privacy, and, increasingly,
non-discrimination.
Motivation. Fairness in pricing algorithms grants equitable access to basic
services without discriminating on the basis of protected attributes.
Method. We replicate a previous empirical study that used black box testing
to audit pricing algorithms used by Italian car insurance companies, accessible
through a popular online system. With respect to the previous study, we
enlarged the number of tests and the number of demographic variables under
analysis.
Results. Our work confirms and extends previous findings, highlighting the
problematic permanence of discrimination across time: demographic variables
significantly impact pricing to this day, with birthplace remaining the main
discriminatory factor against individuals not born in Italian cities. We also
found that driver profiles can determine the number of quotes available to the
user, denying equal opportunities to all.
Conclusion. The study underscores the importance of testing for
non-discrimination in software systems that affect people's everyday lives.
Performing algorithmic audits over time makes it possible to evaluate the
evolution of such algorithms. It also demonstrates the role that empirical
software engineering can play in making software systems more accountable.
|
2502.06440
|
SIGMA: Sheaf-Informed Geometric Multi-Agent Pathfinding
|
cs.RO cs.AI cs.MA
|
The Multi-Agent Path Finding (MAPF) problem aims to determine the shortest
and collision-free paths for multiple agents in a known, potentially
obstacle-ridden environment. It is the core challenge for robotic deployments
in large-scale logistics and transportation. Decentralized learning-based
approaches have shown great potential for addressing the MAPF problems,
offering more reactive and scalable solutions. However, existing learning-based
MAPF methods usually rely on agents making decisions based on a limited field
of view (FOV), resulting in short-sighted policies and inefficient cooperation
in complex scenarios. Here, a critical challenge is to achieve consensus on
potential movements between agents based on limited observations and
communications. To tackle this challenge, we introduce a new framework that
applies sheaf theory to decentralized deep reinforcement learning, enabling
agents to learn geometric cross-dependencies between each other through local
consensus and utilize them for tightly cooperative decision-making. In
particular, sheaf theory provides a mathematical proof of conditions for
achieving global consensus through local observation. Inspired by this, we
incorporate a neural network to approximately model the consensus in latent
space based on sheaf theory and train it through self-supervised learning.
During the task, in addition to normal features for MAPF as in previous works,
each agent distributedly reasons about a learned consensus feature, leading to
efficient cooperation on pathfinding and collision avoidance. As a result, our
proposed method demonstrates significant improvements over state-of-the-art
learning-based MAPF planners, especially in relatively large and complex
scenarios, demonstrating its superiority over baselines in various simulations
and real-world robot experiments.
|
2502.06443
|
Low-dimensional Functions are Efficiently Learnable under Randomly
Biased Distributions
|
cs.LG stat.ML
|
The problem of learning single index and multi index models has gained
significant interest as a fundamental task in high-dimensional statistics. Many
recent works have analysed gradient-based methods, particularly in the setting
of isotropic data distributions, often in the context of neural network
training. Such studies have uncovered precise characterisations of algorithmic
sample complexity in terms of certain analytic properties of the target
function, such as the leap, information, and generative exponents. These
properties establish a quantitative separation between low and high complexity
learning tasks. In this work, we show that high complexity cases are rare.
Specifically, we prove that introducing a small random perturbation to the data
distribution--via a random shift in the first moment--renders any Gaussian
single index model as easy to learn as a linear function. We further extend
this result to a class of multi index models, namely sparse Boolean functions,
also known as Juntas.
|
2502.06445
|
Benchmarking Vision-Language Models on Optical Character Recognition in
Dynamic Video Environments
|
cs.CV
|
This paper introduces an open-source benchmark for evaluating Vision-Language
Models (VLMs) on Optical Character Recognition (OCR) tasks in dynamic video
environments. We present a curated dataset containing 1,477 manually annotated
frames spanning diverse domains, including code editors, news broadcasts,
YouTube videos, and advertisements. Three state-of-the-art VLMs - Claude-3,
Gemini-1.5, and GPT-4o - are benchmarked against traditional OCR systems such as
EasyOCR and RapidOCR. Evaluation metrics include Word Error Rate (WER),
Character Error Rate (CER), and Accuracy. Our results highlight the strengths
and limitations of VLMs in video-based OCR tasks, demonstrating their potential
to outperform conventional OCR models in many scenarios. However, challenges
such as hallucinations, content security policies, and sensitivity to occluded
or stylized text remain. The dataset and benchmarking framework are publicly
available to foster further research.
|
2502.06452
|
SparseFocus: Learning-based One-shot Autofocus for Microscopy with
Sparse Content
|
cs.CV q-bio.QM
|
Autofocus is necessary for high-throughput and real-time scanning in
microscopic imaging. Traditional methods rely on complex hardware or iterative
hill-climbing algorithms. Recent learning-based approaches have demonstrated
remarkable efficacy in a one-shot setting, avoiding hardware modifications or
iterative mechanical lens adjustments. However, in this paper, we highlight a
significant challenge that the richness of image content can significantly
affect autofocus performance. When the image content is sparse, previous
autofocus methods, whether traditional hill-climbing or learning-based, tend to
fail. To tackle this, we propose a content-importance-based solution, named
SparseFocus, featuring a novel two-stage pipeline. The first stage measures the
importance of regions within the image, while the second stage calculates the
defocus distance from selected important regions. To validate our approach and
benefit the research community, we collect a large-scale dataset comprising
millions of labelled defocused images, encompassing dense, sparse, and
extremely sparse scenarios. Experimental results show that SparseFocus
surpasses existing methods, effectively handling all levels of content
sparsity. Moreover, we integrate SparseFocus into our Whole Slide Imaging (WSI)
system that performs well in real-world applications. The code and dataset will
be made available upon the publication of this paper.
|
2502.06453
|
MATH-Perturb: Benchmarking LLMs' Math Reasoning Abilities against Hard
Perturbations
|
cs.LG cs.AI cs.CL
|
Large language models have demonstrated impressive performance on challenging
mathematical reasoning tasks, which has triggered the discussion of whether the
performance is achieved by true reasoning capability or memorization. To
investigate this question, prior work has constructed mathematical benchmarks
when questions undergo simple perturbations -- modifications that still
preserve the underlying reasoning patterns of the solutions. However, no work
has explored hard perturbations, which fundamentally change the nature of the
problem so that the original solution steps do not apply. To bridge the gap, we
construct MATH-P-Simple and MATH-P-Hard via simple perturbation and hard
perturbation, respectively. Each consists of 279 perturbed math problems
derived from level-5 (hardest) problems in the MATH dataset (Hendrycks et al.,
2021). We observe significant performance drops on MATH-P-Hard across
various models, including o1-mini (-16.49%) and gemini-2.0-flash-thinking
(-12.9%). We also raise concerns about a novel form of memorization where
models blindly apply learned problem-solving skills without assessing their
applicability to modified contexts. This issue is amplified when using original
problems for in-context learning. We call for research efforts to address this
challenge, which is critical for developing more robust and reliable reasoning
models.
|
2502.06460
|
Group-CLIP Uncertainty Modeling for Group Re-Identification
|
cs.CV
|
Group Re-Identification (Group ReID) aims to match groups of pedestrians
across non-overlapping cameras. Unlike single-person ReID, Group ReID focuses
more on changes in group structure, emphasizing the number of members and
their spatial arrangement. However, most methods rely on certainty-based
models, which consider only the specific group structures in the group images,
often failing to match unseen group configurations. To this end, we propose a
novel Group-CLIP Uncertainty Modeling (GCUM) approach that adapts group text
descriptions to accommodate undetermined member and layout variations.
Specifically, we design a Member Variant Simulation (MVS) module that simulates
member exclusions using a Bernoulli distribution and a Group Layout Adaptation
(GLA) module that generates uncertain group text descriptions with
identity-specific tokens. In addition, we design a Group Relationship
Construction Encoder (GRCE) that uses group features to refine individual
features, and employ a cross-modal contrastive loss to obtain generalizable
knowledge from group text descriptions. It is worth noting that we are the
first to apply CLIP to Group ReID, and extensive experiments show that GCUM
significantly outperforms state-of-the-art Group ReID methods.
|
2502.06466
|
Inflatable Kirigami Crawlers
|
cs.RO
|
Kirigami offers unique opportunities for guided morphing by leveraging the
geometry of the cuts. This work presents inflatable kirigami crawlers created
by introducing cut patterns into heat-sealable textiles to achieve locomotion
upon cyclic pneumatic actuation. Inflating traditional air pouches results in
symmetric bulging and contraction. In inflated kirigami actuators, the
accumulated compressive forces uniformly break the symmetry, enhance
contraction twofold compared to simple air pouches, and trigger local
rotation of the sealed edges that overlap and self-assemble into an architected
surface with emerging scale-like features. As a result, the inflatable kirigami
actuators exhibit a uniform, controlled contraction with asymmetric localized
out-of-plane deformations. This process allows us to harness the geometric and
material nonlinearities to imbue inflatable textile-based kirigami actuators
with predictable locomotive functionalities. We thoroughly characterized the
programmed deformations of these actuators and their impact on friction. We
found that the kirigami actuators exhibit directional anisotropic friction
properties when inflated, having higher friction coefficients against the
direction of the movement, enabling them to move across surfaces with varying
roughness. We further enhanced the functionality of inflatable kirigami
actuators by introducing multiple channels and segments to create functional
soft robotic prototypes with versatile locomotion capabilities.
|
2502.06468
|
Beyond Literal Token Overlap: Token Alignability for Multilinguality
|
cs.CL
|
Previous work has considered token overlap, or even similarity of token
distributions, as predictors for multilinguality and cross-lingual knowledge
transfer in language models. However, these very literal metrics assign large
distances to language pairs with different scripts, which can nevertheless show
good cross-linguality. This limits the explanatory strength of token overlap
for knowledge transfer between language pairs that use distinct scripts or
follow different orthographic conventions. In this paper, we propose subword
token alignability as a new way to understand the impact and quality of
multilingual tokenisation. In particular, this metric predicts multilinguality
much better when scripts are disparate and the overlap of literal tokens is
low. We analyse this metric in the context of both encoder and decoder models,
look at data size as a potential distractor, and discuss how this insight may
be applied to multilingual tokenisation in future work. We recommend our
subword token alignability metric for identifying optimal language pairs for
cross-lingual transfer, as well as to guide the construction of better
multilingual tokenisers in the future. We publish our code and reproducibility
details.
|
2502.06469
|
Stochastic MPC with Online-optimized Policies and Closed-loop Guarantees
|
eess.SY cs.SY math.OC
|
This paper proposes a stochastic model predictive control method for linear
systems affected by additive Gaussian disturbances. Closed-loop satisfaction of
probabilistic constraints and recursive feasibility of the underlying convex
optimization problem are guaranteed. Online optimization over feedback policies
increases performance and reduces conservatism compared to fixed-feedback
approaches. The central mechanism is a finitely determined maximal admissible
set for probabilistic constraints, together with the reconditioning of the
predicted probabilistic constraints on the current knowledge at every time
step. The proposed method's reduced conservatism and improved performance in
terms of the achieved closed-loop cost are demonstrated in a numerical example.
|
2502.06470
|
A Survey of Theory of Mind in Large Language Models: Evaluations,
Representations, and Safety Risks
|
cs.CL cs.AI
|
Theory of Mind (ToM), the ability to attribute mental states to others and
predict their behaviour, is fundamental to social intelligence. In this paper,
we survey studies evaluating behavioural and representational ToM in Large
Language Models (LLMs), identify important safety risks from advanced LLM ToM
capabilities, and suggest several research directions for effective evaluation
and mitigation of these risks.
|
2502.06472
|
KARMA: Leveraging Multi-Agent LLMs for Automated Knowledge Graph
Enrichment
|
cs.CL cs.AI cs.CE cs.DL
|
Maintaining comprehensive and up-to-date knowledge graphs (KGs) is critical
for modern AI systems, but manual curation struggles to scale with the rapid
growth of scientific literature. This paper presents KARMA, a novel framework
employing multi-agent large language models (LLMs) to automate KG enrichment
through structured analysis of unstructured text. Our approach uses nine
collaborative agents spanning entity discovery, relation extraction, schema
alignment, and conflict resolution, which iteratively parse documents, verify
extracted knowledge, and integrate it into existing graph structures while
adhering to domain-specific schemas. Experiments on 1,200 PubMed articles from
three different domains demonstrate the effectiveness of KARMA in knowledge
graph enrichment, with the identification of up to 38,230 new entities while
achieving 83.1% LLM-verified correctness and reducing conflict edges by 18.6%
through multi-layer assessments.
|
2502.06474
|
UniMoD: Efficient Unified Multimodal Transformers with Mixture-of-Depths
|
cs.CV
|
Unified multimodal transformers, which handle both generation and
understanding tasks within a shared parameter space, have received increasing
attention in recent research. Although various unified transformers have been
proposed, training these models is costly due to redundant tokens and heavy
attention computation. Prior studies on large language models have
demonstrated that token-pruning methods, such as Mixture of Depths (MoD), can
significantly improve computational efficiency. MoD employs a router to select
the most important tokens for processing within a transformer layer. However,
directly applying MoD-based token pruning to unified transformers will result
in suboptimal performance because different tasks exhibit varying levels of
token redundancy. In our work, we analyze the unified transformers by (1)
examining attention weight patterns, (2) evaluating the layer importance and
token redundancy, and (3) analyzing task interactions. Our findings reveal that
token redundancy is primarily influenced by different tasks and layers.
Building on these findings, we introduce UniMoD, a task-aware token pruning
method that employs a separate router for each task to determine which tokens
should be pruned. We apply our method to Show-o and Emu3, reducing training
FLOPs by approximately 15% in Show-o and 40% in Emu3, while maintaining or
improving performance on several benchmarks. Code will be released at
https://github.com/showlab/UniMoD.
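The core mechanism described above, a per-task router that decides which tokens a layer processes, can be sketched as follows. This is a minimal illustration of MoD-style task-aware routing, not the UniMoD implementation: the linear routers, the per-task capacity fractions, and the single encoder layer are all illustrative assumptions.

```python
# Sketch of task-aware Mixture-of-Depths token routing: each task has its
# own router, and only a task-specific top-k fraction of tokens is processed
# by the layer; the remaining tokens skip it via the residual (identity) path.
import torch
import torch.nn as nn


class TaskAwareMoDLayer(nn.Module):
    def __init__(self, dim: int, tasks: list[str], capacity: dict[str, float]):
        super().__init__()
        self.block = nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True)
        # A separate router per task, since tasks differ in token redundancy.
        self.routers = nn.ModuleDict({t: nn.Linear(dim, 1) for t in tasks})
        self.capacity = capacity  # fraction of tokens each task keeps

    def forward(self, x: torch.Tensor, task: str) -> torch.Tensor:
        scores = self.routers[task](x).squeeze(-1)  # (batch, seq)
        k = max(1, int(self.capacity[task] * x.size(1)))
        topk = scores.topk(k, dim=-1).indices       # tokens to process
        out = x.clone()                             # unselected tokens pass through
        for b in range(x.size(0)):
            sel = topk[b].sort().values
            processed = self.block(x[b : b + 1, sel])
            out[b, sel] = processed[0]
        return out


layer = TaskAwareMoDLayer(
    32,
    tasks=["generation", "understanding"],
    capacity={"generation": 0.9, "understanding": 0.5},
)
y = layer(torch.randn(2, 16, 32), task="understanding")
print(y.shape)  # torch.Size([2, 16, 32])
```

With a capacity of 0.5, only half the tokens pass through the attention block for the "understanding" task, which is the source of the FLOP savings; the output keeps the full sequence shape because skipped tokens are carried forward unchanged.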
|
2502.06476
|
Image Intrinsic Scale Assessment: Bridging the Gap Between Quality and
Resolution
|
cs.CV
|
Image Quality Assessment (IQA) measures and predicts image quality as perceived
by human observers. Although recent studies have highlighted the critical
influence that variations in the scale of an image have on its perceived
quality, this relationship has not been systematically quantified. To bridge
this gap, we introduce the Image Intrinsic Scale (IIS), defined as the largest
scale at which an image exhibits its highest perceived quality. We also present
the Image Intrinsic Scale Assessment (IISA) task, which involves subjectively
measuring and predicting the IIS based on human judgments. We develop a
subjective annotation methodology and create the IISA-DB dataset, comprising
785 image-IIS pairs annotated by experts in a rigorously controlled
crowdsourcing study. Furthermore, we propose WIISA (Weak-labeling for Image
Intrinsic Scale Assessment), a strategy that leverages how the IIS of an image
varies with downscaling to generate weak labels. Experiments show that applying
WIISA during the training of several IQA methods adapted for IISA consistently
improves performance compared to using only ground-truth labels. We will
release the code, dataset, and pre-trained models upon acceptance.
|
2502.06480
|
Logarithmic Regret of Exploration in Average Reward Markov Decision
Processes
|
cs.LG stat.ML
|
In average reward Markov decision processes, state-of-the-art algorithms for
regret minimization follow a well-established framework: They are model-based,
optimistic and episodic. First, they maintain a confidence region from which
optimistic policies are computed using a well-known subroutine called Extended
Value Iteration (EVI). Second, these policies are used over time windows called
episodes, each ended by the Doubling Trick (DT) rule or a variant thereof. In
this work, without modifying EVI, we show that there is a significant advantage
in replacing (DT) by another simple rule, that we call the Vanishing
Multiplicative (VM) rule. When managing episodes with (VM), the algorithm's
regret is, both in theory and in practice, as good as, if not better than, with
(DT), while the one-shot behavior is greatly improved. More specifically, the
management of bad episodes (when sub-optimal policies are being used) is much
better under (VM) than (DT) by making the regret of exploration logarithmic
rather than linear. These results are made possible by a new in-depth
understanding of the contrasting behaviors of confidence regions during good
and bad episodes.
|