| id | title | categories | abstract |
|---|---|---|---|
2501.04675
|
Enhancing Financial VQA in Vision Language Models using Intermediate
Structured Representations
|
cs.CL cs.AI cs.CV cs.LG
|
Chart interpretation is crucial for visual data analysis, but accurately
extracting information from charts poses significant challenges for automated
models. This study investigates the fine-tuning of DEPLOT, a modality
conversion module that translates the image of a plot or chart to a linearized
table, on a custom dataset of 50,000 bar charts. The dataset comprises simple,
stacked, and grouped bar charts, targeting the unique structural features of
these visualizations. The fine-tuned DEPLOT model is evaluated against its base
version using a test set of 1,000 images and two metrics: Relative Mapping
Similarity (RMS), which measures categorical mapping accuracy, and Relative
Number Set Similarity (RNSS), which evaluates numerical interpretation
accuracy. To further explore the reasoning capabilities of large language
models (LLMs), we curate an additional set of 100 bar chart images paired with
question-answer sets. Our findings demonstrate that providing a structured
intermediate table alongside the image significantly enhances LLM reasoning
performance compared to direct image queries.
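As a rough illustration of the RNSS metric named above, the sketch below matches predicted numbers to target numbers so that total relative distance is minimal. This is a hedged reconstruction from the metric's description; the exact matching and normalization used in the DEPLOT evaluation may differ, and the brute-force matching is only practical for small sets.

```python
from itertools import permutations

def rnss(pred, target):
    """Relative Number Set Similarity sketch: optimally pair predicted and
    target numbers, score each pair by capped relative distance, and map the
    total cost to [0, 1] (1.0 = perfect numerical agreement)."""
    n = max(len(pred), len(target))

    def dist(p, t):
        # an unmatched (padded) number costs the full distance of 1
        if p is None or t is None:
            return 1.0
        return min(1.0, abs(p - t) / (1e-9 + abs(t)))

    pred = list(pred) + [None] * (n - len(pred))
    target = list(target) + [None] * (n - len(target))
    # brute-force optimal assignment; fine for the small sets of a bar chart
    best = min(
        sum(dist(p, target[i]) for i, p in enumerate(perm))
        for perm in permutations(pred)
    )
    return 1.0 - best / n

print(rnss([10, 20, 30], [10, 20, 30]))  # 1.0
```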
|
2501.04678
|
RadGPT: Constructing 3D Image-Text Tumor Datasets
|
eess.IV cs.CV
|
With over 85 million CT scans performed annually in the United States,
creating tumor-related reports is a challenging and time-consuming task for
radiologists. To address this need, we present RadGPT, an Anatomy-Aware
Vision-Language AI Agent for generating detailed reports from CT scans. RadGPT
first segments tumors, including benign cysts and malignant tumors, and their
surrounding anatomical structures, then transforms this information into both
structured reports and narrative reports. These reports provide tumor size,
shape, location, attenuation, volume, and interactions with surrounding blood
vessels and organs. Extensive evaluation on unseen hospitals shows that RadGPT
can produce accurate reports, with high sensitivity/specificity for small tumor
(<2 cm) detection: 80/73% for liver tumors, 92/78% for kidney tumors, and
77/77% for pancreatic tumors. For large tumors, sensitivity ranges from 89% to
97%. The results significantly surpass the state-of-the-art in abdominal CT
report generation.
RadGPT generated reports for 17 public datasets. Through radiologist review
and refinement, we have ensured the reports' accuracy, and created the first
publicly available image-text 3D medical dataset, comprising over 1.8 million
text tokens and 2.7 million images from 9,262 CT scans, including 2,947 tumor
scans/reports of 8,562 tumor instances. Our reports can: (1) localize tumors in
eight liver sub-segments and three pancreatic sub-segments annotated per-voxel;
(2) determine pancreatic tumor stage (T1-T4) in 260 reports; and (3) present
individual analyses of multiple tumors--rare in human-made reports.
Importantly, 948 of the reports are for early-stage tumors.
|
2501.04682
|
Towards System 2 Reasoning in LLMs: Learning How to Think With Meta
Chain-of-Thought
|
cs.AI cs.CL
|
We propose a novel framework, Meta Chain-of-Thought (Meta-CoT), which extends
traditional Chain-of-Thought (CoT) by explicitly modeling the underlying
reasoning required to arrive at a particular CoT. We present empirical evidence
from state-of-the-art models exhibiting behaviors consistent with in-context
search, and explore methods for producing Meta-CoT via process supervision,
synthetic data generation, and search algorithms. We then outline a
concrete pipeline for training a model to produce Meta-CoTs, incorporating
instruction tuning with linearized search traces and reinforcement learning
post-training. Finally, we discuss open research questions, including scaling
laws, verifier roles, and the potential for discovering novel reasoning
algorithms. This work provides a theoretical and practical roadmap to enable
Meta-CoT in LLMs, paving the way for more powerful and human-like reasoning in
artificial intelligence.
|
2501.04683
|
Toward Sufficient Statistical Power in Algorithmic Bias Assessment: A
Test for ABROCA
|
stat.ML cs.LG
|
Algorithmic bias is a pressing concern in educational data mining (EDM), as
it risks amplifying inequities in learning outcomes. The Area Between ROC
Curves (ABROCA) metric is frequently used to measure discrepancies in model
performance across demographic groups to quantify overall model fairness.
However, its skewed distribution--especially when class or group imbalances
exist--makes significance testing challenging. This study investigates ABROCA's
distributional properties and contributes robust methods for its significance
testing. Specifically, we address (1) whether ABROCA follows any known
distribution, (2) how to reliably test for algorithmic bias using ABROCA, and
(3) the statistical power achievable with ABROCA-based bias assessments under
typical EDM sample specifications. Simulation results confirm that ABROCA does
not match standard distributions, including those suited to accommodate
skewness. We propose nonparametric randomization tests for ABROCA and
demonstrate that reliably detecting bias with ABROCA requires large sample
sizes or substantial effect sizes, particularly in imbalanced settings.
Findings suggest that ABROCA-based bias evaluation at sample sizes common
in EDM tends to be underpowered, undermining the reliability of conclusions
about model fairness. By offering open-source code to simulate power and
statistically test ABROCA, this paper aims to foster more reliable statistical
testing in EDM research. It supports broader efforts toward replicability and
equity in educational modeling.
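The nonparametric randomization test proposed above can be sketched as follows: compute ABROCA between the two demographic groups, then shuffle group membership to build a null distribution. This is an illustrative stdlib-only sketch (piecewise-linear ROC interpolation on a uniform FPR grid), not the paper's released code.

```python
import random

def roc_points(scores, labels):
    """ROC curve as (fpr, tpr) points, sweeping thresholds high to low."""
    P = sum(labels)
    N = len(labels) - P
    if P == 0 or N == 0:  # degenerate group: fall back to the diagonal
        return [(0.0, 0.0), (1.0, 1.0)]
    tp = fp = 0
    pts = [(0.0, 0.0)]
    for s, y in sorted(zip(scores, labels), reverse=True):
        if y:
            tp += 1
        else:
            fp += 1
        pts.append((fp / N, tp / P))
    return pts

def interp(pts, x):
    """Piecewise-linear tpr at fpr = x."""
    for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
        if x0 <= x <= x1:
            return y0 if x1 == x0 else y0 + (y1 - y0) * (x - x0) / (x1 - x0)
    return pts[-1][1]

def abroca(sa, la, sb, lb, grid=200):
    """Approximate area between two groups' ROC curves on a uniform FPR grid."""
    ra, rb = roc_points(sa, la), roc_points(sb, lb)
    xs = [i / grid for i in range(grid + 1)]
    return sum(abs(interp(ra, x) - interp(rb, x)) for x in xs) / (grid + 1)

def abroca_permutation_test(scores, labels, groups, n_perm=200, seed=0):
    """Randomization test: shuffle group membership, recompute ABROCA, and
    report the fraction of permuted values at least as large as observed."""
    rng = random.Random(seed)

    def stat(gs):
        a = [i for i, g in enumerate(gs) if g == 0]
        b = [i for i, g in enumerate(gs) if g == 1]
        return abroca([scores[i] for i in a], [labels[i] for i in a],
                      [scores[i] for i in b], [labels[i] for i in b])

    observed = stat(groups)
    g = list(groups)
    null = []
    for _ in range(n_perm):
        rng.shuffle(g)
        null.append(stat(g))
    return observed, sum(v >= observed for v in null) / n_perm
```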
|
2501.04686
|
URSA: Understanding and Verifying Chain-of-thought Reasoning in
Multimodal Mathematics
|
cs.CL cs.AI cs.LG
|
Chain-of-Thought (CoT) reasoning is widely used to enhance the mathematical
reasoning capabilities of large language models (LLMs). The introduction of
process supervision for CoT trajectories has sparked discussions on improving
test-time scaling, thereby unlocking the System 2-style thinking capabilities
of these models. However, in multimodal mathematical reasoning, the scarcity of
high-quality CoT training data has hindered existing models from achieving both
deliberate reasoning and fine-grained verification. In this work, we propose a
novel framework that introduces System 2-style thinking to multimodal
mathematical reasoning. We introduce a three-module CoT data synthesis process
that integrates CoT distillation, trajectory-format rewriting, and format
unification. This process generates MMathCoT-1M, a high-quality CoT reasoning
instruction fine-tuning dataset. Furthermore, we implement a dual-view
trajectory labeling automation that targets both visual grounding fidelity and
deductive chain validity, resulting in the DualMath-1.1M dataset. The URSA-8B
model, trained on MMathCoT-1M, achieves new state-of-the-art (SOTA) performance
among similarly sized multimodal LLMs on six popular reasoning benchmarks.
Training URSA-8B further on the DualMath-1.1M dataset yields URSA-RM-8B, a
verifier that enhances URSA-8B's test-time performance and surpasses strong
closed-source MLLMs such as GPT-4o. The model weights, training data,
and code have been open-sourced: https://github.com/URSA-MATH/URSA-MATH.
|
2501.04689
|
SPAR3D: Stable Point-Aware Reconstruction of 3D Objects from Single
Images
|
cs.CV cs.GR
|
We study the problem of single-image 3D object reconstruction. Recent works
have diverged into two directions: regression-based modeling and generative
modeling. Regression methods efficiently infer visible surfaces, but struggle
with occluded regions. Generative methods handle uncertain regions better by
modeling distributions, but are computationally expensive and the generation is
often misaligned with visible surfaces. In this paper, we present SPAR3D, a
novel two-stage approach aiming to take the best of both directions. The first
stage of SPAR3D generates sparse 3D point clouds using a lightweight point
diffusion model, which has a fast sampling speed. The second stage uses both
the sampled point cloud and the input image to create highly detailed meshes.
Our two-stage design enables probabilistic modeling of the ill-posed
single-image 3D task while maintaining high computational efficiency and great
output fidelity. Using point clouds as an intermediate representation further
allows for interactive user edits. Evaluated on diverse datasets, SPAR3D
demonstrates superior performance over previous state-of-the-art methods, at an
inference speed of 0.7 seconds. Project page with code and model:
https://spar3d.github.io
|
2501.04690
|
Comparative Analysis of Quantum and Classical Support Vector Classifiers
for Software Bug Prediction: An Exploratory Study
|
cs.SE cs.LG
|
Purpose: Quantum computing promises to transform problem-solving across
various domains with rapid and practical solutions. Within Software Evolution
and Maintenance, Quantum Machine Learning (QML) remains mostly an underexplored
domain, particularly in addressing challenges such as detecting buggy software
commits from code repositories. Methods: In this study, we investigate the
practical application of Quantum Support Vector Classifiers (QSVC) for
detecting buggy software commits across 14 open-source software projects with
diverse dataset sizes encompassing 30,924 data instances. We compare the QML
algorithm PQSVC (Pegasos QSVC) and QSVC against the classical Support Vector
Classifier (SVC). Our technique addresses large datasets in QSVC algorithms by
dividing them into smaller subsets. We propose and evaluate an aggregation
method to combine predictions from these models to detect the entire test
dataset. We also introduce an incremental testing methodology to overcome the
difficulties of quantum feature mapping during the testing approach. Results:
The study shows the effectiveness of QSVC and PQSVC in detecting buggy software
commits. The aggregation technique successfully combines predictions from
smaller data subsets, enhancing the overall detection accuracy for the entire
test dataset. The incremental testing methodology effectively manages the
challenges associated with quantum feature mapping during the testing process.
Conclusion: We contribute to the advancement of QML algorithms in defect
prediction, unveiling the potential for further research in this domain. The
specific scenario of the Short-Term Activity Frame (STAF) highlights the early
detection of buggy software commits during the initial developmental phases of
software systems, particularly when dataset sizes remain insufficient to train
machine learning models.
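The aggregation step described above can be sketched as a majority vote over the per-subset models' predictions; this is a hedged illustration of the idea, and the paper's exact aggregation rule may differ.

```python
from collections import Counter

def aggregate_predictions(predictions_per_model):
    """Combine predictions from models trained on smaller data subsets by
    majority vote over the same test set (one prediction list per model)."""
    combined = []
    for votes in zip(*predictions_per_model):
        combined.append(Counter(votes).most_common(1)[0][0])
    return combined

# three subset-trained classifiers vote on four commits (1 = buggy)
print(aggregate_predictions([[1, 0, 1, 0],
                             [1, 1, 1, 0],
                             [0, 0, 1, 0]]))  # [1, 0, 1, 0]
```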
|
2501.04693
|
Beyond Sight: Finetuning Generalist Robot Policies with Heterogeneous
Sensors via Language Grounding
|
cs.RO cs.AI
|
Interacting with the world is a multi-sensory experience: achieving effective
general-purpose interaction requires making use of all available modalities --
including vision, touch, and audio -- to fill in gaps from partial observation.
For example, when vision is occluded while reaching into a bag, a robot should rely
on its senses of touch and sound. However, state-of-the-art generalist robot
policies are typically trained on large datasets to predict robot actions
solely from visual and proprioceptive observations. In this work, we propose
FuSe, a novel approach that enables finetuning visuomotor generalist policies
on heterogeneous sensor modalities for which large datasets are not readily
available by leveraging natural language as a common cross-modal grounding. We
combine a multimodal contrastive loss with a sensory-grounded language
generation loss to encode high-level semantics. In the context of robot
manipulation, we show that FuSe enables performing challenging tasks that
require reasoning jointly over modalities such as vision, touch, and sound in a
zero-shot setting, such as multimodal prompting, compositional cross-modal
prompting, and descriptions of objects it interacts with. We show that the same
recipe is applicable to widely different generalist policies, including both
diffusion-based generalist policies and large vision-language-action (VLA)
models. Extensive experiments in the real world show that FuSe is able to
increase success rates by over 20% compared to all considered baselines.
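The multimodal contrastive term mentioned above can be illustrated with a generic InfoNCE-style loss between two modality batches (say touch embeddings and language embeddings). This is a standard sketch of that loss family, not FuSe's exact objective or hyperparameters.

```python
import math

def infonce(z_a, z_b, temperature=0.1):
    """InfoNCE contrastive loss: for each embedding in z_a, treat the
    same-index embedding in z_b as the positive and the rest as negatives."""
    def dot(u, v):
        return sum(x * y for x, y in zip(u, v))

    def unit(u):
        n = math.sqrt(dot(u, u))
        return [x / n for x in u]

    za = [unit(z) for z in z_a]
    zb = [unit(z) for z in z_b]
    n = len(za)
    loss = 0.0
    for i in range(n):
        logits = [dot(za[i], zb[j]) / temperature for j in range(n)]
        m = max(logits)  # log-sum-exp stabilization
        loss += -(logits[i] - m) + math.log(sum(math.exp(l - m) for l in logits))
    return loss / n
```

Aligned cross-modal pairs produce a near-zero loss; mismatched pairs produce a large one, which is what drives the two modalities toward a shared embedding space.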
|
2501.04694
|
EpiCoder: Encompassing Diversity and Complexity in Code Generation
|
cs.CL cs.AI
|
Effective instruction tuning is indispensable for optimizing code LLMs,
aligning model behavior with user expectations and enhancing model performance
in real-world applications. However, most existing methods focus on code
snippets, which are limited to specific functionalities and rigid structures,
restricting the complexity and diversity of the synthesized data. To address
these limitations, we introduce a novel feature tree-based synthesis framework
inspired by Abstract Syntax Trees (AST). Unlike ASTs, which capture the syntactic
structure of code, our framework models semantic relationships between code
elements, enabling the generation of more nuanced and diverse data. The feature
tree is constructed from raw data and refined iteratively to increase the
quantity and diversity of the extracted features. This process enables the
identification of more complex patterns and relationships within the code. By
sampling subtrees with controlled depth and breadth, our framework allows
precise adjustments to the complexity of the generated code, supporting a wide
range of tasks from simple function-level operations to intricate multi-file
scenarios. We fine-tuned widely-used base models to create the EpiCoder series,
achieving state-of-the-art performance at both the function and file levels
across multiple benchmarks. Notably, empirical evidence indicates that our
approach shows significant potential in synthesizing highly complex
repository-level code data. Further analysis elucidates the merits of this
approach by rigorously assessing data complexity and diversity through software
engineering principles and an LLM-as-a-judge method.
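The subtree-sampling idea described above can be sketched as bounded recursive sampling from a nested-dict feature tree. The tree format and parameter names here are hypothetical, chosen only to illustrate how depth and breadth limits control the complexity of what gets sampled.

```python
import random

def sample_subtree(tree, max_depth, max_breadth, rng):
    """Sample a subtree with bounded depth and breadth from a feature tree
    represented as {feature_name: {child_name: {...}}} (illustrative format)."""
    if max_depth == 0 or not tree:
        return {}
    out = {}
    # sorted() makes the candidate order deterministic before random sampling
    for name, children in rng.sample(sorted(tree.items()),
                                     min(max_breadth, len(tree))):
        out[name] = sample_subtree(children, max_depth - 1, max_breadth, rng)
    return out

features = {"io": {"file": {"read": {}, "write": {}}, "net": {}},
            "math": {"algebra": {}}}
print(sample_subtree(features, max_depth=2, max_breadth=1,
                     rng=random.Random(0)))
```

Shallow, narrow samples correspond to simple function-level tasks; deep, broad samples to multi-file scenarios.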
|
2501.04695
|
Re-ranking the Context for Multimodal Retrieval Augmented Generation
|
cs.LG cs.CV cs.IR cs.IT math.IT
|
Retrieval-augmented generation (RAG) enhances large language models (LLMs) by
incorporating external knowledge to generate a response within a context with
improved accuracy and reduced hallucinations. However, multi-modal RAG systems
face unique challenges: (i) the retrieval process may select irrelevant entries
to user query (e.g., images, documents), and (ii) vision-language models or
multi-modal language models like GPT-4o may hallucinate when processing these
entries to generate RAG output. In this paper, we aim to address the first
challenge, i.e., improving the selection of relevant context from the
knowledge base in the retrieval phase of multi-modal RAG. Specifically, we
leverage the relevancy score (RS) measure designed in our previous work for
evaluating the RAG performance to select more relevant entries in retrieval
process. Retrieval based on embeddings, say CLIP-based embeddings, and
cosine similarity usually performs poorly, particularly for multi-modal data. We
show that by using a more advanced relevancy measure, one can enhance the
retrieval process by selecting more relevant pieces from the knowledge-base and
eliminate the irrelevant pieces from the context by adaptively selecting
up to $k$ entries instead of a fixed number of entries. Our evaluation on the COCO
dataset demonstrates significant enhancement in selecting relevant context and
accuracy of the generated response.
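The adaptive up-to-$k$ selection described above can be sketched as follows; the relevancy score itself is the authors' separate measure and is not reproduced here, so the scores below are placeholders.

```python
def select_context(entries, scores, k, threshold):
    """Select at most k entries whose relevancy score clears a threshold,
    instead of always returning a fixed top-k."""
    ranked = sorted(zip(scores, entries), reverse=True)
    return [e for s, e in ranked[:k] if s >= threshold]

docs = ["cat photo", "city map", "dog photo", "receipt scan"]
rs = [0.91, 0.40, 0.87, 0.12]  # placeholder relevancy scores
print(select_context(docs, rs, k=3, threshold=0.5))  # ['cat photo', 'dog photo']
```

With a low-relevance query, this returns fewer than k entries, keeping irrelevant context out of the generator's prompt.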
|
2501.04696
|
Test-Time Optimization for Domain Adaptive Open Vocabulary Segmentation
|
cs.CV
|
We present Seg-TTO, a novel framework for zero-shot, open-vocabulary semantic
segmentation (OVSS), designed to excel in specialized domain tasks. While
current open vocabulary approaches show impressive performance on standard
segmentation benchmarks under zero-shot settings, they fall short of supervised
counterparts on highly domain-specific datasets. We focus on
segmentation-specific test-time optimization to address this gap. Segmentation
requires an understanding of multiple concepts within a single image while
retaining the locality and spatial structure of representations. We propose a
novel self-supervised objective adhering to these requirements and use it to
align the model parameters with input images at test time. In the textual
modality, we learn multiple embeddings for each category to capture diverse
concepts within an image, while in the visual modality, we calculate
pixel-level losses followed by embedding aggregation operations specific to
preserving spatial structure. Our resulting framework, termed Seg-TTO, is a
plug-and-play module. We integrate Seg-TTO with three state-of-the-art OVSS
approaches and evaluate across 22 challenging OVSS tasks covering a range of
specialized domains. Our Seg-TTO demonstrates clear performance improvements
across these tasks, establishing a new state-of-the-art. Code:
https://github.com/UlinduP/SegTTO.
|
2501.04697
|
Grokking at the Edge of Numerical Stability
|
cs.LG cs.AI cs.CV stat.ML
|
Grokking, the sudden generalization that occurs after prolonged overfitting,
is a surprising phenomenon challenging our understanding of deep learning.
Although significant progress has been made in understanding grokking, the
reasons behind the delayed generalization and its dependence on regularization
remain unclear. In this work, we argue that without regularization, grokking
tasks push models to the edge of numerical stability, introducing floating
point errors in the Softmax function, which we refer to as Softmax Collapse
(SC). We demonstrate that SC prevents grokking and that mitigating SC enables
grokking without regularization. Investigating the root cause of SC, we find
that beyond the point of overfitting, the gradients strongly align with what we
call the naïve loss minimization (NLM) direction. This component of the
gradient does not alter the model's predictions but decreases the loss by
scaling the logits, typically by scaling the weights along their current
direction. We show that this scaling of the logits explains the delay in
generalization characteristic of grokking and eventually leads to SC, halting
further learning. To validate our hypotheses, we introduce two key
contributions that address the challenges in grokking tasks: StableMax, a new
activation function that prevents SC and enables grokking without
regularization, and $\perp$Grad, a training algorithm that promotes quick
generalization in grokking tasks by preventing NLM altogether. These
contributions provide new insights into grokking, elucidating its delayed
generalization, reliance on regularization, and the effectiveness of existing
grokking-inducing methods. Code for this paper is available at
https://github.com/LucasPrietoAl/grokking-at-the-edge-of-numerical-stability.
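The floating-point failure mode underlying Softmax Collapse can be demonstrated with the textbook example below: once logits are scaled large enough, the naive softmax breaks numerically, while the standard max-subtraction form stays finite. This illustrates the general phenomenon only; StableMax itself is a new activation defined in the paper and is not reproduced here.

```python
import math

def naive_softmax(logits):
    """Direct softmax; math.exp overflows once logits grow large."""
    exps = [math.exp(x) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def shifted_softmax(logits):
    """Mathematically identical max-subtraction form: the largest
    exponent is exp(0) = 1, so nothing overflows."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

print(shifted_softmax([1000.0, 999.0]))
try:
    naive_softmax([1000.0, 999.0])
except OverflowError:
    print("naive softmax overflowed")
```

This is the same logit-scaling direction the NLM analysis points at: scaling logits leaves predictions unchanged but pushes the softmax arithmetic toward the edge of numerical stability.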
|
2501.04698
|
ConceptMaster: Multi-Concept Video Customization on Diffusion
Transformer Models Without Test-Time Tuning
|
cs.CV
|
Text-to-video generation has made remarkable advancements through diffusion
models. However, Multi-Concept Video Customization (MCVC) remains a significant
challenge. We identify two key challenges in this task: 1) the identity
decoupling problem, where directly adopting existing customization methods
inevitably mixes attributes when handling multiple concepts simultaneously, and
2) the scarcity of high-quality video-entity pairs, which is crucial for
training such a model that represents and decouples various concepts well. To
address these challenges, we introduce ConceptMaster, an innovative framework
that effectively tackles the critical issues of identity decoupling while
maintaining concept fidelity in customized videos. Specifically, we introduce a
novel strategy of learning decoupled multi-concept embeddings that are injected
into the diffusion models in a standalone manner, which effectively guarantees
the quality of customized videos with multiple identities, even for highly
similar visual concepts. To further overcome the scarcity of high-quality MCVC
data, we carefully establish a data construction pipeline, which enables
systematic collection of precise multi-concept video-entity data across diverse
concepts. A comprehensive benchmark is designed to validate the effectiveness
of our model from three critical dimensions: concept fidelity, identity
decoupling ability, and video generation quality across six different concept
composition scenarios. Extensive experiments demonstrate that our ConceptMaster
significantly outperforms previous approaches for this task, paving the way for
generating personalized and semantically accurate videos across multiple
concepts.
|
2501.04699
|
EditAR: Unified Conditional Generation with Autoregressive Models
|
cs.CV
|
Recent progress in controllable image generation and editing is largely
driven by diffusion-based methods. Although diffusion models perform
exceptionally well in specific tasks with tailored designs, establishing a
unified model is still challenging. In contrast, autoregressive models
inherently feature a unified tokenized representation, which simplifies the
creation of a single foundational model for various tasks. In this work, we
propose EditAR, a single unified autoregressive framework for a variety of
conditional image generation tasks, e.g., image editing, depth-to-image,
edge-to-image, segmentation-to-image. The model takes both images and
instructions as inputs, and predicts the edited image tokens in a vanilla
next-token paradigm. To enhance the text-to-image alignment, we further propose
to distill the knowledge from foundation models into the autoregressive
modeling process. We evaluate its effectiveness across diverse tasks on
established benchmarks, showing competitive performance to various
state-of-the-art task-specific methods. Project page:
https://jitengmu.github.io/EditAR/
|
2501.04700
|
Planarian Neural Networks: Evolutionary Patterns from Basic Bilateria
Shaping Modern Artificial Neural Network Architectures
|
cs.NE cs.AI cs.CV cs.LG
|
This study examined the viability of enhancing the prediction accuracy of
artificial neural networks (ANNs) in image classification tasks by developing
ANNs with evolution patterns similar to those of biological neural networks.
ResNet is a widely used family of neural networks with both deep and wide
variants; therefore, it was selected as the base model for our investigation.
The aim of this study is to improve the image classification performance of
ANNs via a novel approach inspired by the biological nervous system
architecture of planarians, which comprises a brain and two nerve cords. We
believe that the unique neural architecture of planarians offers valuable
insights into the performance enhancement of ANNs. The proposed planarian
neural architecture-based neural network was evaluated on the CIFAR-10 and
CIFAR-100 datasets. Our results indicate that the proposed method exhibits
higher prediction accuracy than the baseline neural network models in image
classification tasks. These findings demonstrate the significant potential of
biologically inspired neural network architectures in improving the performance
of ANNs in a wide range of applications.
|
2501.04712
|
Pressing Intensity: An Intuitive Measure for Pressing in Soccer
|
stat.AP cs.LG
|
Pressing is a fundamental defensive strategy in football, characterized by
applying pressure on the ball-owning team to regain possession. Despite its
significance, existing metrics for measuring pressing often lack precision or
comprehensive consideration of positional data, player movement and speed. This
research introduces an innovative framework for quantifying pressing intensity,
leveraging advancements in positional tracking data and components from
Spearman's Pitch Control model. Our method integrates player velocities,
movement directions, and reaction times to compute the time required for a
defender to intercept an attacker or the ball. This time-to-intercept measure
is then transformed into probabilistic values using a logistic function,
enabling dynamic and intuitive analysis of pressing situations at the
individual frame level. The model captures how every player's movement
influences pressure on the field, offering actionable insights for coaches,
analysts, and decision-makers. By providing a robust and interpretable metric,
our approach facilitates the identification of pressing strategies, advanced
situational analyses, and the derivation of metrics, advancing the analytical
capabilities for modern football.
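The logistic transform at the heart of the framework can be sketched in a few lines; the scale and steepness parameters below are illustrative placeholders, not the paper's fitted values.

```python
import math

def pressing_probability(time_to_intercept, tti_scale=1.5, steepness=2.0):
    """Map a defender's time-to-intercept (seconds) to a pressing
    probability with a logistic function: short intercept times mean
    high pressure, long ones mean little."""
    return 1.0 / (1.0 + math.exp(steepness * (time_to_intercept - tti_scale)))

# a defender 0.5 s from interception exerts far more pressure than one 4 s away
print(round(pressing_probability(0.5), 3), round(pressing_probability(4.0), 3))
```

Summing such per-defender probabilities frame by frame yields the dynamic, player-level pressure picture the abstract describes.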
|
2501.04718
|
Knowledge-Guided Biomarker Identification for Label-Free Single-Cell
RNA-Seq Data: A Reinforcement Learning Perspective
|
q-bio.GN cs.AI
|
Gene panel selection aims to identify the most informative genomic biomarkers
in label-free genomic datasets. Traditional approaches, which rely on domain
expertise, embedded machine learning models, or heuristic-based iterative
optimization, often introduce biases and inefficiencies, potentially obscuring
critical biological signals. To address these challenges, we present an
iterative gene panel selection strategy that harnesses ensemble knowledge from
existing gene selection algorithms to establish preliminary boundaries or prior
knowledge, which guide the initial search space. Subsequently, we incorporate
reinforcement learning through a reward function shaped by expert behavior,
enabling dynamic refinement and targeted selection of gene panels. This
integration mitigates biases stemming from initial boundaries while
capitalizing on RL's stochastic adaptability. Comprehensive comparative
experiments, case studies, and downstream analyses demonstrate the
effectiveness of our method, highlighting its improved precision and efficiency
for label-free biomarker discovery. Our results underscore the potential of
this approach to advance single-cell genomics data analysis.
|
2501.04719
|
Calculating Customer Lifetime Value and Churn using Beta Geometric
Negative Binomial and Gamma-Gamma Distribution in a NFT based setting
|
stat.AP cs.AI
|
Customer Lifetime Value (CLV) is an important metric that measures the total
value a customer will bring to a business over their lifetime. The Beta
Geometric Negative Binomial Distribution (BGNBD) and Gamma-Gamma Distribution
are two models that can be used to calculate CLV, taking into account both the
frequency and value of customer transactions. This article explains the BGNBD
and Gamma Gamma Distribution models, and how they can be used to calculate CLV
for NFT (Non-Fungible Token) transaction data in a blockchain setting. By
estimating the parameters of these models using historical transaction data,
businesses can gain insights into the lifetime value of their customers and
make data-driven decisions about marketing and customer retention strategies.
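The Gamma-Gamma model's key quantity, the conditional expected average transaction value, has a closed form (the standard Fader-Hardie result): a weighted average of the population mean and the customer's observed mean spend. The parameter values in the demo are made up, not fitted to any NFT data.

```python
def expected_avg_value(mx, x, p, q, g):
    """Gamma-Gamma conditional expected average transaction value:
    E[M | mx, x] = w * (p*g/(q-1)) + (1-w) * mx, with w = (q-1)/(p*x+q-1).
    mx: observed mean spend, x: number of observed transactions,
    p, q, g: Gamma-Gamma parameters (placeholder values below)."""
    w = (q - 1) / (p * x + q - 1)
    return w * (p * g / (q - 1)) + (1 - w) * mx

# a customer with only 2 observed purchases is shrunk toward the
# population mean p*g/(q-1) = 30; with many purchases the estimate -> mx
print(round(expected_avg_value(mx=50.0, x=2, p=6.0, q=4.0, g=15.0), 6))  # 46.0
```

Combining this with the BGNBD's expected future transaction count gives the CLV estimate described above.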
|
2501.04721
|
A Shape-Based Functional Index for Objective Assessment of Pediatric
Motor Function
|
stat.AP cs.LG physics.med-ph
|
Clinical assessments for neuromuscular disorders, such as Spinal Muscular
Atrophy (SMA) and Duchenne Muscular Dystrophy (DMD), continue to rely on
subjective measures to monitor treatment response and disease progression. We
introduce a novel method using wearable sensors to objectively assess motor
function during daily activities in 19 patients with DMD, 9 with SMA, and 13
age-matched controls. Pediatric movement data is complex due to confounding
factors such as limb length variations in growing children and variability in
movement speed. Our approach uses Shape-based Principal Component Analysis to
align movement trajectories and identify distinct kinematic patterns, including
variations in motion speed and asymmetry. Both DMD and SMA cohorts have
individuals with motor function on par with healthy controls. Notably, patients
with SMA showed greater activation of the motion asymmetry pattern. We further
combined projections on these principal components with partial least squares
(PLS) to identify a covariation mode with a canonical correlation of r = 0.78
(95% CI: [0.34, 0.94]) with muscle fat infiltration, the Brooke score (a motor
function score), and age-related degenerative changes, proposing a novel motor
function index. This data-driven method can be deployed in home settings,
enabling better longitudinal tracking of treatment efficacy for children with
neuromuscular disorders.
|
2501.04724
|
Guiding Treatment Strategies: The Role of Adjuvant Anti-Her2 Neu Therapy
and Skin/Nipple Involvement in Local Recurrence-Free Survival in Breast
Cancer Patients
|
stat.AP cs.LG
|
This study explores how causal inference models, specifically the Linear
Non-Gaussian Acyclic Model (LiNGAM), can extract causal relationships between
demographic factors, treatments, conditions, and outcomes from observational
patient data, enabling insights beyond correlation. Unlike traditional
randomized controlled trials (RCTs), which establish causal relationships
within narrowly defined populations, our method leverages broader observational
data, improving generalizability. Using over 40 features in the Duke MRI Breast
Cancer dataset, we found that Adjuvant Anti-Her2 Neu Therapy increased local
recurrence-free survival by 169 days, while Skin/Nipple involvement reduced it
by 351 days. These findings highlight the therapy's importance for
Her2-positive patients and the need for targeted interventions for high-risk
cases, informing personalized treatment strategies.
|
2501.04727
|
A New Underdetermined Framework for Sparse Estimation of Fault Location
for Transmission Lines Using Limited Current Measurements
|
eess.SY cs.SY
|
This letter proposes an alternative underdetermined framework for fault
location that utilizes current measurements along with the branch-bus matrix,
providing another option besides the traditional voltage-based methods. To
enhance fault location accuracy in the presence of multiple outliers, the
robust YALL1 algorithm is used to resist outlier interference and accurately
recover the sparse vector, thereby pinpointing the fault precisely. The results
on the IEEE 39-bus test system demonstrate the effectiveness and robustness of
the proposed method.
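To make the underdetermined setup concrete, the toy sketch below recovers a single-fault location from current measurements and a branch-bus-style matrix by picking the column that best explains the measurement vector. This is a much-simplified stand-in for the letter's sparse $\ell_1$ recovery with the robust YALL1 algorithm, shown only to illustrate the measurement model.

```python
def locate_fault(B, m):
    """Return the bus index whose (normalized) column of B best matches
    the current-measurement vector m, assuming a single fault."""
    def dot(u, v):
        return sum(a * b for a, b in zip(u, v))

    def norm(u):
        return dot(u, u) ** 0.5

    scores = [abs(dot(col, m)) / (norm(col) or 1.0)
              for col in zip(*B)]  # iterate over columns
    return scores.index(max(scores))

# toy 4-measurement, 6-bus system with a noise-free fault at bus 2
B = [[1, 0, 2, 0, 1, 0],
     [0, 1, 1, 1, 0, 0],
     [1, 1, 0, 0, 0, 1],
     [0, 0, 3, 1, 0, 0]]
fault = [0, 0, 1.5, 0, 0, 0]  # sparse fault-current vector
m = [sum(B[i][j] * fault[j] for j in range(6)) for i in range(4)]
print(locate_fault(B, m))  # 2
```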
|
2501.04729
|
Stability Exchange near Folds: Analysis of an end-loaded Elastica with a
Lever Arm
|
math.OC cond-mat.soft cs.RO
|
Numerous problems in physical sciences can be expressed as
parameter-dependent variational problems. The associated family of equilibria
may or may not exist realistically and can be determined after examining its
stability. Hence, it is crucial to determine the stability and track its
transitions. Generally, the stability characteristics of the equilibria change
near the folds in the parameter space. The direction of stability change can be
encoded through a particular projection of the solutions. In this article, we
identify such projections for variational problems characterized by fixed-free
ends, a class of problems frequently found in mechanics. Using the developed
theory, we study an Elastica subject to an end load applied through a rigid
lever arm. The examples revealed several instances of snap-back instability in
these systems. These findings may aid in enhancing the design of soft robot
arms and other innovative switching mechanisms.
|
2501.04730
|
Relative Phase Equivariant Deep Neural Systems for Physical Layer
Communications
|
cs.IT cs.NI math.IT
|
In the era of telecommunications, the increasing demand for complex and
specialized communication systems has led to a focus on improving physical
layer communications. Artificial intelligence (AI) has emerged as a promising
solution avenue for doing so. Deep neural receivers have already shown
significant promise in improving the performance of communications systems.
However, a major challenge lies in developing deep neural receivers that match
the energy efficiency and speed of traditional receivers. This work
investigates the incorporation of inductive biases in the physical layer using
group-equivariant deep learning to improve the parameter efficiency of deep
neural receivers. We do so by constructing a deep neural receiver that is
equivariant with respect to the phase of arrival. We show that the inclusion of
relative phase equivariance significantly reduces the error rate of deep neural
receivers at similar model sizes. Thus, we show the potential of
group-equivariant deep learning in the domain of physical layer communications.
|
2501.04732
|
SNR-EQ-JSCC: Joint Source-Channel Coding with SNR-Based Embedding and
Query
|
cs.IT cs.AI math.IT
|
Coping with the impact of dynamic channels is a critical issue in joint
source-channel coding (JSCC)-based semantic communication systems. In this
paper, we propose a lightweight channel-adaptive semantic coding architecture
called SNR-EQ-JSCC. It is built upon the generic Transformer model and achieves
channel adaptation (CA) by Embedding the signal-to-noise ratio (SNR) into the
attention blocks and dynamically adjusting attention scores through
channel-adaptive Queries. Meanwhile, penalty terms are introduced in the loss
function to stabilize the training process. Considering that instantaneous SNR
feedback may be imperfect, we propose an alternative method that uses only the
average SNR, which requires no retraining of SNR-EQ-JSCC. Simulation results
conducted on image transmission demonstrate that the proposed SNR-EQ-JSCC
outperforms the state-of-the-art SwinJSCC in peak signal-to-noise ratio (PSNR)
and perception metrics while only requiring 0.05% of the storage overhead and
6.38% of the computational complexity for CA. Moreover, the channel-adaptive
query method demonstrates significant improvements in perception metrics. When
instantaneous SNR feedback is imperfect, SNR-EQ-JSCC using only the average SNR
still surpasses baseline schemes.
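The idea of conditioning attention on channel state can be sketched with a toy single-head attention block; the function names and the tanh projection of the SNR are illustrative assumptions, not the paper's architecture:

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def snr_modulated_attention(x, w_q, w_k, w_v, snr_db, w_snr):
    """Single-head attention whose queries are shifted by an SNR embedding.

    A toy stand-in for channel-adaptive Queries: the scalar SNR is projected
    to a vector (via tanh here, an illustrative choice) and added to every
    query, so attention scores adapt to channel state without retraining
    the rest of the network.
    """
    q = x @ w_q + np.tanh(snr_db * w_snr)   # SNR-conditioned queries
    k, v = x @ w_k, x @ w_v
    scores = q @ k.T / np.sqrt(k.shape[-1])
    return softmax(scores) @ v
```

Because only the small SNR projection is channel-dependent, adaptation adds little storage or compute on top of the backbone, in the spirit of the reported overhead figures.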
|
2501.04733
|
AI-Driven Reinvention of Hydrological Modeling for Accurate Predictions
and Interpretation to Transform Earth System Modeling
|
cs.AI cs.ET cs.LG physics.ao-ph
|
Traditional equation-driven hydrological models often struggle to accurately
predict streamflow in challenging regional Earth systems like the Tibetan
Plateau, while hybrid and existing algorithm-driven models face difficulties in
interpreting hydrological behaviors. This work introduces HydroTrace, an
algorithm-driven, data-agnostic model that substantially outperforms these
approaches, achieving a Nash-Sutcliffe Efficiency of 98% and demonstrating
strong generalization on unseen data. Moreover, HydroTrace leverages advanced
attention mechanisms to capture spatial-temporal variations and
feature-specific impacts, enabling the quantification and spatial resolution of
streamflow partitioning as well as the interpretation of hydrological behaviors
such as glacier-snow-streamflow interactions and monsoon dynamics.
Additionally, a large language model (LLM)-based application allows users to
easily understand and apply HydroTrace's insights for practical purposes. These
advancements position HydroTrace as a transformative tool in hydrological and
broader Earth system modeling, offering enhanced prediction accuracy and
interpretability.
|
2501.04734
|
Generative Style Transfer for MRI Image Segmentation: A Case of Glioma
Segmentation in Sub-Saharan Africa
|
eess.IV cs.AI cs.LG physics.med-ph
|
In Sub-Saharan Africa (SSA), the utilization of lower-quality Magnetic
Resonance Imaging (MRI) technology raises questions about the applicability of
machine learning methods for clinical tasks. This study aims to provide a
robust deep learning-based brain tumor segmentation (BraTS) method tailored for
the SSA population using a threefold approach. Firstly, the impact of domain
shift from the SSA training data on model efficacy was examined, revealing no
significant effect. Secondly, a comparative analysis of 3D and 2D
full-resolution models using the nnU-Net framework indicates similar
performance for both models when trained for 300 epochs, each achieving a
five-fold cross-validation score of 0.93. Lastly, addressing the performance gap observed
in SSA validation as opposed to the relatively larger BraTS glioma (GLI)
validation set, two strategies are proposed: fine-tuning SSA cases using the
GLI+SSA best-pretrained 2D fullres model at 300 epochs, and introducing a novel
neural style transfer-based data augmentation technique for the SSA cases. This
investigation underscores the potential of enhancing brain tumor prediction
within SSA's unique healthcare landscape.
|
2501.04735
|
Topology-based deep-learning segmentation method for deep anterior
lamellar keratoplasty (DALK) surgical guidance using M-mode OCT data
|
eess.IV cs.CV
|
Deep Anterior Lamellar Keratoplasty (DALK) is a partial-thickness corneal
transplant procedure used to treat corneal stromal diseases. A crucial step in
this procedure is the precise separation of the deep stroma from Descemet's
membrane (DM) using the Big Bubble technique. To simplify the tasks of needle
insertion and pneumo-dissection in this technique, we previously developed an
Optical Coherence Tomography (OCT)-guided, eye-mountable robot that uses
real-time tracking of corneal layers from M-mode OCT signals for control.
However, signal noise and instability during manipulation of the OCT fiber
sensor-integrated needle have hindered the performance of conventional
deep-learning segmentation methods, resulting in rough and inaccurate detection
of corneal layers. To address these challenges, we have developed a
topology-based deep-learning segmentation method that integrates a topological
loss function with a modified network architecture. This approach effectively
reduces the effects of noise and improves segmentation speed, precision, and
stability. Validation using in vivo, ex vivo, and hybrid rabbit eye datasets
demonstrates that our method outperforms traditional loss-based techniques,
providing fast, accurate, and robust segmentation of the epithelium and DM to
guide surgery.
|
2501.04746
|
Towards resilient cities: A hybrid simulation framework for risk
mitigation through data driven decision making
|
cs.MA cs.SY eess.SY
|
Providing a comprehensive view of the city operation and offering useful
metrics for decision making is a well known challenge for urban risk analysis
systems. Existing systems are, in many cases, generalizations of previous
domain-specific tools and/or methodologies that may not cover all urban
interdependencies, which makes it difficult to obtain homogeneous indicators. In
order to overcome this limitation while seeking for effective support to
decision makers, this article introduces a novel hybrid simulation framework
for risk mitigation. The framework is built on a proposed city concept that
considers urban space as a Complex Adaptive System composed of interconnected
Critical Infrastructures. In this concept, a Social System, which models daily
patterns and social interactions of the citizens in the Urban Landscape, drives
the CIs' demand to configure the full city picture. The framework's hybrid
design integrates agent-based and network-based modeling by breaking down city
agents into system-dependent subagents, enabling the simulation of both
inter- and intra-system interactions. A layered structure of indicators at
different aggregation levels is also developed, to ensure that decisions are
not only data driven but also explainable. Therefore, the proposed simulation
framework can serve as a decision-support system (DSS) tool that allows the quantitative analysis of the
impact of threats at different levels. First, system level metrics can be used
to get a broad view on the city resilience. Then, agent level metrics back
those figures and provide better explainability. On implementation, the
proposed framework enables component reusability (for eased coding), simulation
federation (enabling the integration of existing system oriented simulators),
discrete simulation in accelerated time (for rapid scenario simulation) and
decision oriented visualization (for informed outputs).
|
2501.04747
|
Discovering new robust local search algorithms with neuro-evolution
|
cs.NE cs.AI
|
This paper explores a novel approach aimed at overcoming existing challenges
in the realm of local search algorithms. Our aim is to improve the decision
process that takes place within a local search algorithm so as to make the best
possible transitions in the neighborhood at each iteration. To improve this
process, we propose to use a neural network that has the same input information
as conventional local search algorithms. In this paper, which is an extension
of the work [Goudet et al. 2024] presented at EvoCOP2024, we investigate
different ways of representing this information so as to make the algorithm as
efficient as possible but also robust to monotonic transformations of the
problem objective function. To assess the efficiency of this approach, we
develop an experimental setup centered around NK landscape problems, offering
the flexibility to adjust problem size and ruggedness. This approach offers a
promising avenue for the emergence of new local search algorithms and the
improvement of their problem-solving capabilities for black-box problems.
|
2501.04750
|
Efficient License Plate Recognition in Videos Using Visual Rhythm and
Accumulative Line Analysis
|
cs.CV cs.LG
|
Video-based Automatic License Plate Recognition (ALPR) involves extracting
vehicle license plate text information from video captures. Traditional systems
typically rely heavily on high-end computing resources and utilize multiple
frames to recognize license plates, leading to increased computational
overhead. In this paper, we propose two methods capable of efficiently
extracting exactly one frame per vehicle and recognizing its license plate
characters from this single image, thus significantly reducing computational
demands. The first method uses Visual Rhythm (VR) to generate time-spatial
images from videos, while the second employs Accumulative Line Analysis (ALA),
a novel algorithm based on single-line video processing for real-time
operation. Both methods leverage YOLO for license plate detection within the
frame and a Convolutional Neural Network (CNN) for Optical Character
Recognition (OCR) to extract textual information. Experiments on real videos
demonstrate that the proposed methods achieve results comparable to traditional
frame-by-frame approaches, with processing speeds three times faster.
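The Visual Rhythm construction, stacking one scanline per frame into a time-spatial image, can be sketched as follows; the helper name and the choice of the central row are assumptions for illustration:

```python
import numpy as np

def visual_rhythm(frames, row=None):
    """Build a time-spatial (VR) image by stacking one scanline per frame.

    frames: array of shape (T, H, W). By default the central row of each
    frame becomes one row of the output, so a vehicle crossing that line
    appears exactly once in the (T, W) rhythm image. Plate detection (YOLO)
    and CNN-based OCR then run on single frames selected from this image,
    which is not shown here.
    """
    T, H, W = frames.shape
    row = H // 2 if row is None else row
    return frames[:, row, :]          # shape (T, W)
```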
|
2501.04752
|
A mathematical model for the bullying dynamics in schools
|
physics.soc-ph cs.SI
|
We analyze a mathematical model to understand the dynamics of bullying in
schools. The model considers a population divided into four groups: susceptible
individuals, bullies, individuals exposed to bullying, and violent individuals.
Transitions between these states occur at rates designed to capture the complex
interactions among students, influenced by factors such as romantic rejection,
conflicts with peers and teachers, and other school-related challenges. These
interactions can escalate into bullying and violent behavior. The model also
incorporates the role of parents and school administrators in mitigating
bullying through intervention strategies. The results suggest that bullying can
be effectively controlled if anti-bullying programs implemented by schools are
sufficiently robust. Additionally, the conditions under which bullying persists
are explored.
|
2501.04754
|
Development of an Adaptive Sliding Mode Controller using Neural Networks
for Trajectory Tracking of a Cylindrical Manipulator
|
eess.SY cs.RO cs.SY physics.app-ph
|
Cylindrical manipulators are extensively used in industrial automation,
especially in emerging technologies like 3D printing, which represents a
significant future trend. However, controlling the trajectory of nonlinear
models with system uncertainties remains a critical challenge, often leading to
reduced accuracy and reliability. To address this, the study develops an
Adaptive Sliding Mode Controller (ASMC) integrated with Neural Networks (NNs)
to improve trajectory tracking for cylindrical manipulators. The ASMC leverages
the robustness of sliding mode control and the adaptability of neural networks
to handle uncertainties and dynamic variations effectively. Simulation results
validate that the proposed ASMC-NN achieves high trajectory tracking accuracy,
fast response time, and enhanced reliability, making it a promising solution
for applications in 3D printing and beyond.
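A minimal sketch of the sliding-mode core, without the paper's neural-network adaptive term, on a toy first-order plant with unknown drift and a bounded disturbance (all gains and dynamics below are illustrative assumptions):

```python
import numpy as np

def simulate_smc(steps=2000, dt=0.001):
    """Sliding mode tracking of x_ref(t) = sin(t) for dx = a*x + u + d(t),
    with unknown drift a and bounded disturbance d.

    A saturated switching term (boundary layer of width phi) replaces the
    pure sign function to avoid chattering. Returns |s| over time, where
    s = x - x_ref is the sliding variable for this first-order system.
    """
    a_true, x = -0.5, 0.0
    k, phi = 5.0, 0.05            # switching gain, boundary-layer width
    errs = []
    for i in range(steps):
        t = i * dt
        x_ref, dx_ref = np.sin(t), np.cos(t)
        s = x - x_ref
        u = dx_ref - k * np.clip(s / phi, -1.0, 1.0)  # feedforward + switching
        d = 0.2 * np.sin(5 * t)  # matched disturbance, unknown to the controller
        x += dt * (a_true * x + u + d)
        errs.append(abs(s))
    return errs
```

As long as the switching gain dominates the combined drift and disturbance, the tracking error is driven into the boundary layer and stays there; the paper's neural-network term additionally adapts to the uncertain dynamics.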
|
2501.04755
|
Improving Human-Robot Teaching by Quantifying and Reducing Mental Model
Mismatch
|
cs.RO cs.HC
|
The rapid development of artificial intelligence and robotics has had a
significant impact on our lives, with intelligent systems increasingly
performing tasks traditionally performed by humans. Efficient knowledge
transfer requires matching the mental model of the human teacher with the
capabilities of the robot learner. This paper introduces the Mental Model
Mismatch (MMM) Score, a feedback mechanism designed to quantify and reduce
mismatches by aligning human teaching behavior with robot learning behavior.
Using Large Language Models (LLMs), we analyze teacher intentions in natural
language to generate adaptive feedback. A study with 150 participants teaching
a virtual robot to solve a puzzle game shows that intention-based feedback
significantly outperforms traditional performance-based feedback or no
feedback. The results suggest that intention-based feedback improves
instructional outcomes, enhances understanding of the robot's learning
process, and reduces misconceptions. This research addresses a critical gap in
human-robot interaction (HRI) by providing a method to quantify and mitigate
discrepancies between human mental models and robot capabilities, with the goal
of improving robot learning and human teaching effectiveness.
|
2501.04757
|
DAREK -- Distance Aware Error for Kolmogorov Networks
|
eess.SP cs.LG
|
In this paper, we provide distance-aware error bounds for Kolmogorov Arnold
Networks (KANs). We call our new error bounds estimator DAREK -- Distance Aware
Error for Kolmogorov networks. Z. Liu et al. provide error bounds, which may be
loose, lack distance-awareness, and are defined only up to an unknown constant
of proportionality. We review the error bounds for Newton's polynomial, which
is then generalized to an arbitrary spline, under Lipschitz continuity
assumptions. We then extend these bounds to nested compositions of splines,
arriving at error bounds for KANs. We evaluate our method by estimating an
object's shape from sparse laser scan points. We use KAN to fit a smooth
function to the scans and provide error bounds for the fit. We find that our
method is faster than Monte Carlo approaches, and that our error bounds enclose
the true obstacle shape reliably.
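The classical interpolation error bound that the paper reviews and generalizes can be checked numerically; here a quadratic interpolant of sin, with M = 1 bounding the third derivative, so |f(x) - p(x)| <= M/(n+1)! * prod_i |x - x_i| (the nodes and test point are arbitrary choices):

```python
import numpy as np

# Interpolate f = sin with a quadratic through three nodes and compare the
# actual error at a test point against the Newton-form remainder bound
#   |f(x) - p(x)| <= M / (n+1)! * prod_i |x - x_i|,
# where M bounds |f'''| (here M = 1 since |sin'''| = |cos| <= 1). DAREK
# extends bounds of this kind to splines and their nested compositions.
nodes = np.array([0.0, 0.7, 1.4])
p = np.polyfit(nodes, np.sin(nodes), 2)     # degree-2 interpolant
x = 0.5
actual = abs(np.sin(x) - np.polyval(p, x))
bound = (1.0 / 6.0) * np.prod(np.abs(x - nodes))
```

The bound is distance-aware in exactly the sense the abstract emphasizes: it tightens as the query point approaches the data and loosens away from it.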
|
2501.04759
|
Optimize the parameters of the PID Controller using Genetic Algorithm
for Robot Manipulators
|
eess.SY cs.RO cs.SY math.OC
|
This paper presents the design of a Proportional-Integral-Derivative (PID)
controller with optimized parameters for a two-degree-of-freedom robotic arm. A
genetic algorithm (GA) is proposed to optimize the controller parameters,
addressing the difficulty that traditional methods face in determining PID
parameters for highly nonlinear systems such as robotic arms. The
GA-optimized PID controller significantly improves control accuracy and
performance over traditional control methods. Simulation results demonstrate
that the robotic arm system operates with high precision and stability.
Additionally, the shortened trajectory tracking response time enhances the
feasibility of applying this control algorithm in real-world scenarios. This
research not only confirms the suitability of PID-GA for robotic arms and
similar systems but also opens new avenues for applying this algorithm to real
physical systems.
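A minimal sketch of GA-based PID tuning on a toy first-order plant standing in for the two-DOF arm dynamics; the population size, operators, bounds, and plant model are illustrative choices, not the paper's setup:

```python
import numpy as np

def step_response_cost(gains, n=300, dt=0.01):
    """Integral absolute error of a PID loop on a toy first-order plant
    (tau * dy/dt = -y + u), standing in for the nonlinear arm model."""
    kp, ki, kd = gains
    y, integ, prev_e, cost, tau = 0.0, 0.0, 1.0, 0.0, 0.2
    for _ in range(n):
        e = 1.0 - y                                  # unit step reference
        integ += e * dt
        u = kp * e + ki * integ + kd * (e - prev_e) / dt
        prev_e = e
        y += dt * (-y + u) / tau
        cost += abs(e) * dt
        if abs(y) > 1e3:                             # penalise divergence
            return 1e3 + cost
    return cost

def ga_tune_pid(pop_size=30, gens=25, seed=0):
    """Minimal real-coded GA: binary tournaments, averaging crossover,
    Gaussian mutation, one elite kept per generation."""
    rng = np.random.default_rng(seed)
    pop = rng.uniform(0.0, 10.0, size=(pop_size, 3))  # (kp, ki, kd) bounds
    for _ in range(gens):
        costs = np.array([step_response_cost(g) for g in pop])
        new = [pop[costs.argmin()]]                   # elitism
        while len(new) < pop_size:
            parents = []
            for _ in range(2):
                i, j = rng.integers(pop_size, size=2)
                parents.append(pop[i] if costs[i] < costs[j] else pop[j])
            child = 0.5 * (parents[0] + parents[1]) + rng.normal(0, 0.3, 3)
            new.append(np.clip(child, 0.0, 10.0))
        pop = np.array(new)
    costs = np.array([step_response_cost(g) for g in pop])
    return pop[costs.argmin()], costs.min()
```

The fitness function is the only problem-specific piece; swapping in a simulation of the actual manipulator dynamics leaves the GA loop unchanged.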
|
2501.04761
|
Evolution of Spots and Stripes in Cellular Automata
|
nlin.CG cs.NE
|
Cellular automata are computers, similar to Turing machines. The main
difference is that Turing machines use a one-dimensional tape, whereas cellular
automata use a two-dimensional grid. The best-known cellular automaton is the
Game of Life, which is a universal computer. It belongs to a family of cellular
automata with 262,144 members. Playing the Game of Life generally involves
engineering; that is, assembling a device composed of various parts that are
combined to achieve a specific intended result. Instead of engineering cellular
automata, we propose evolving cellular automata. Evolution applies mutation and
selection to a population of organisms. If a mutation increases the fitness of
an organism, it may have many descendants, displacing the less fit organisms.
Unlike engineering, evolution does not work towards an imagined goal. Evolution
works towards increasing fitness, with no expectations about the specific form
of the final result. Mutation, selection, and fitness yield structures that
appear to be more organic and life-like than engineered structures. In our
experiments, the patterns resulting from evolving cellular automata look much
like the spots on leopards and the stripes on tigers.
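The 262,144-member family mentioned above consists of outer-totalistic rules defined by a birth set and a survival set, each a subset of {0,...,8}, giving 2^9 x 2^9 = 262,144 rules; a minimal vectorized update (the helper name is ours):

```python
import numpy as np

def ca_step(grid, birth={3}, survive={2, 3}):
    """One synchronous update of an outer-totalistic CA on a toroidal grid.

    The default (birth, survive) pair is B3/S23, i.e. the Game of Life;
    varying the two sets sweeps the 262,144-rule family. Evolving rather
    than engineering picks rules and seeds by mutation and selection.
    """
    # Count the 8 neighbours of every cell via shifted copies of the grid.
    n = sum(np.roll(np.roll(grid, dy, 0), dx, 1)
            for dy in (-1, 0, 1) for dx in (-1, 0, 1)
            if (dy, dx) != (0, 0))
    born = (grid == 0) & np.isin(n, list(birth))
    stay = (grid == 1) & np.isin(n, list(survive))
    return (born | stay).astype(np.uint8)
```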
|
2501.04762
|
Efficient and Responsible Adaptation of Large Language Models for Robust
and Equitable Top-k Recommendations
|
cs.IR cs.LG
|
Conventional recommendation systems (RSs) are typically optimized to enhance
performance metrics uniformly across all training samples, inadvertently
overlooking the needs of diverse user populations. The performance disparity
among various populations can harm the model's robustness to sub-populations
due to the varying user properties. While large language models (LLMs) show
promise in enhancing RS performance, their practical applicability is hindered
by high costs, inference latency, and degraded performance on long user
queries. To address these challenges, we propose a hybrid task allocation
framework designed to promote social good by equitably serving all user groups.
By adopting a two-phase approach, we promote a strategic assignment of tasks
for efficient and responsible adaptation of LLMs. Our strategy works by first
identifying the weak and inactive users that receive a suboptimal ranking
performance by RSs. Next, we use an in-context learning approach for such
users, wherein each user interaction history is contextualized as a distinct
ranking task. We evaluate our hybrid framework by incorporating eight different
recommendation algorithms and three different LLMs -- both open- and
closed-source. Our results on three real-world datasets show a significant
reduction in weak users and improved robustness to subpopulations without
disproportionately escalating costs.
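The first phase, flagging weak and inactive users for LLM re-ranking, can be sketched with a hypothetical routing rule; the metric choice, cutoff, and activity threshold below are illustrative, not the paper's:

```python
import numpy as np

def route_users(per_user_ndcg, activity, ndcg_cutoff=0.3, min_interactions=5):
    """Split users between the base recommender and an LLM re-ranker.

    Hypothetical routing rule in the spirit of the two-phase framework:
    users whose ranking quality is poor, or whose history is too short for
    the RS to learn from, are flagged for in-context LLM ranking; everyone
    else stays on the cheap conventional RS, keeping overall cost bounded.
    """
    weak = (per_user_ndcg < ndcg_cutoff) | (activity < min_interactions)
    return np.flatnonzero(weak), np.flatnonzero(~weak)
```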
|
2501.04763
|
Search engines in polarized media environment: Auditing political
information curation on Google and Bing prior to 2024 US elections
|
cs.CY cs.IR cs.SI
|
Search engines play an important role in the context of modern elections. By
curating information in response to user queries, search engines influence how
individuals are informed about election-related developments and perceive the
media environment in which elections take place. This has particular implications
for (perceived) polarization, especially if search engines' curation results in
a skewed treatment of information sources based on their political leaning.
So far, however, it has been unclear whether such a partisan gap emerges through
information curation on search engines and what user- and system-side factors
affect it. To address this shortcoming, we audit the two largest Western search
engines, Google and Bing, prior to the 2024 US presidential elections and
examine how these search engines' organic search results and additional
interface elements represent election-related information depending on the
queries' slant, user location, and time when the search was conducted. Our
findings indicate that both search engines tend to prioritize left-leaning
media sources, with the exact scope of search results' ideological slant
varying between Democrat- and Republican-focused queries. We also observe
limited effects of location- and time-based factors on organic search results,
whereas results for additional interface elements were more volatile over time
and specific US states. Together, our observations highlight that search
engines' information curation actively mirrors the partisan divides present in
the US media environments and has the potential to contribute to (perceived)
polarization within these environments.
|
2501.04764
|
Video Summarisation with Incident and Context Information using
Generative AI
|
cs.CV cs.MM
|
The proliferation of video content production has led to vast amounts of
data, posing substantial challenges in terms of analysis efficiency and
resource utilization. Addressing this issue calls for the development of robust
video analysis tools. This paper proposes a novel approach leveraging
Generative Artificial Intelligence (GenAI) to facilitate streamlined video
analysis. Our tool aims to deliver tailored textual summaries of user-defined
queries, offering a focused insight amidst extensive video datasets. Unlike
conventional frameworks that offer generic summaries or limited action
recognition, our method harnesses the power of GenAI to distil relevant
information, enhancing analysis precision and efficiency. Employing YOLO-V8 for
object detection and Gemini for comprehensive video and text analysis, our
solution achieves heightened contextual accuracy. By combining YOLO with
Gemini, our approach furnishes textual summaries extracted from extensive CCTV
footage, enabling users to swiftly navigate and verify pertinent events without
the need for exhaustive manual review. The quantitative evaluation revealed a
similarity of 72.8%, while the qualitative assessment rated an accuracy of 85%,
demonstrating the capability of the proposed method.
|
2501.04765
|
TREAD: Token Routing for Efficient Architecture-agnostic Diffusion
Training
|
cs.CV cs.AI
|
Diffusion models have emerged as the mainstream approach for visual
generation. However, these models usually suffer from sample inefficiency and
high training costs. This issue is particularly pronounced in the standard
diffusion transformer architecture due to its quadratic complexity relative to
input length. Recent works have addressed this by reducing the number of tokens
processed in the model, often through masking. In contrast, this work aims to
improve the training efficiency of the diffusion backbone through predefined
routes that store token information until it is reintroduced to deeper layers
of the model, rather than discarding these tokens entirely. Further, we combine
multiple routes and introduce an adapted auxiliary loss that accounts for all
applied routes. Our method is not limited to common transformer-based models;
it can also be applied to state-space models. Unlike most current approaches,
TREAD achieves this without architectural modifications. Finally, we show that
our method reduces the computational cost and simultaneously boosts model
performance on the standard benchmark ImageNet-1K 256 x 256 in
class-conditional synthesis. Both of these benefits multiply to a convergence
speedup of 9.55x at 400K training iterations compared to DiT and 25.39x
compared to the best benchmark performance of DiT at 7M training iterations.
|
2501.04766
|
Decoding rank metric Reed-Muller codes
|
cs.IT math.CO math.IT
|
In this article, we investigate the decoding of the rank metric Reed--Muller
codes introduced by Augot, Couvreur, Lavauzelle and Neri in 2021. We propose a
polynomial time algorithm that rests on the structure of Dickson matrices,
works on any such code and corrects up to half the minimum distance.
|
2501.04782
|
GaussianVideo: Efficient Video Representation via Hierarchical Gaussian
Splatting
|
cs.CV
|
Efficient neural representations for dynamic video scenes are critical for
applications ranging from video compression to interactive simulations. Yet,
existing methods often face challenges related to high memory usage, lengthy
training times, and temporal consistency. To address these issues, we introduce
a novel neural video representation that combines 3D Gaussian splatting with
continuous camera motion modeling. By leveraging Neural ODEs, our approach
learns smooth camera trajectories while maintaining an explicit 3D scene
representation through Gaussians. Additionally, we introduce a spatiotemporal
hierarchical learning strategy, progressively refining spatial and temporal
features to enhance reconstruction quality and accelerate convergence. This
memory-efficient approach achieves high-quality rendering at impressive speeds.
Experimental results show that our hierarchical learning, combined with robust
camera motion modeling, captures complex dynamic scenes with strong temporal
consistency, achieving state-of-the-art performance across diverse video
datasets in both high- and low-motion scenarios.
|
2501.04783
|
Traffic Simulations: Multi-City Calibration of Metropolitan Highway
Networks
|
cs.ET cs.SY eess.SY
|
This paper proposes an approach to perform travel demand calibration for
high-resolution stochastic traffic simulators. It employs abundant travel times
at the path-level, departing from the standard practice of resorting to scarce
segment-level sensor counts. The proposed approach is shown to tackle
high-dimensional instances in a sample-efficient way. For the first time, case
studies on 6 metropolitan highway networks are carried out, considering a total
of 54 calibration scenarios. This is the first work to show the ability of a
calibration algorithm to systematically scale across networks. Compared to the
state-of-the-art simultaneous perturbation stochastic approximation (SPSA)
algorithm, the proposed approach enhances fit to field data by an average 43.5%
with a maximum improvement of 80.0%, and does so within fewer simulation calls.
|
2501.04784
|
Leveraging Registers in Vision Transformers for Robust Adaptation
|
cs.CV cs.LG
|
Vision Transformers (ViTs) have shown success across a variety of tasks due
to their ability to capture global image representations. Recent studies have
identified the existence of high-norm tokens in ViTs, which can interfere with
unsupervised object discovery. To address this, the use of "registers",
additional tokens that isolate high-norm patch tokens while capturing
global image-level information, has been proposed. While registers have been
studied extensively for object discovery, their generalization properties,
particularly in out-of-distribution (OOD) scenarios, remain underexplored. In
this paper, we examine the utility of register token embeddings in providing
additional features for improving generalization and anomaly rejection. To that
end, we propose a simple method that combines the special CLS token embedding
commonly employed in ViTs with the average-pooled register embeddings to create
feature representations which are subsequently used for training a downstream
classifier. We find that this enhances OOD generalization and anomaly
rejection, while maintaining in-distribution (ID) performance. Extensive
experiments across multiple ViT backbones trained with and without registers
reveal consistent improvements of 2-4% in top-1 OOD accuracy and a 2-3%
reduction in false positive rates for anomaly detection. Importantly, these
gains are achieved without additional computational overhead.
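The proposed feature construction, concatenating the CLS embedding with average-pooled register embeddings, is simple to sketch; the token layout assumed below (CLS first, then registers, then patch tokens) is an illustrative convention:

```python
import numpy as np

def robust_features(tokens, num_registers=4):
    """Combine the CLS embedding with average-pooled register embeddings.

    Assumed token layout: [CLS, reg_1..reg_R, patch tokens], one embedding
    per row. The concatenated vector is the representation on which a
    downstream classifier would be trained, doubling the feature dimension
    at negligible cost.
    """
    cls = tokens[0]
    regs = tokens[1:1 + num_registers].mean(axis=0)
    return np.concatenate([cls, regs])     # shape (2 * D,)
```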
|
2501.04793
|
A Novel Observer Design for LuGre Friction Estimation and Control
|
eess.SY cs.SY
|
Dynamic components of the friction may directly impact the stability and
performance of the motion control systems. The LuGre model is a prevalent
friction model utilized to express this dynamic behavior. Since the LuGre model
is very comprehensive, friction compensation based on it might be challenging.
Inspired by this, we develop a novel observer to estimate and compensate for
LuGre friction. Furthermore, we present a Lyapunov stability analysis to show
that observer dynamics are asymptotically stable under certain conditions.
Compared to its counterparts, the proposed observer constitutes a simple and
standalone scheme that can be utilized with arbitrary control inputs in a
straightforward way. As a primary difference, the presented observer estimates
velocity and uses the velocity error to estimate friction in addition to
control input. The extensive simulations revealed that the introduced observer
enhances position and velocity tracking performance in the presence of
friction.
|
2501.04794
|
A Steerable Deep Network for Model-Free Diffusion MRI Registration
|
eess.IV cs.CV cs.LG
|
Nonrigid registration is vital to medical image analysis but remains
challenging for diffusion MRI (dMRI) due to its high-dimensional,
orientation-dependent nature. While classical methods are accurate, they are
computationally demanding, and deep neural networks, though efficient, have
been underexplored for nonrigid dMRI registration compared to structural
imaging. We present a novel, deep learning framework for model-free, nonrigid
registration of raw diffusion MRI data that does not require explicit
reorientation. Unlike previous methods relying on derived representations such
as diffusion tensors or fiber orientation distribution functions, in our
approach, we formulate the registration as an equivariant diffeomorphism of
position-and-orientation space. Central to our method is an
$\mathsf{SE}(3)$-equivariant UNet that generates velocity fields while
preserving the geometric properties of a raw dMRI's domain. We introduce a new
loss function based on the maximum mean discrepancy in Fourier space,
implicitly matching ensemble average propagators across images. Experimental
results on Human Connectome Project dMRI data demonstrate competitive
performance compared to state-of-the-art approaches, with the added advantage
of bypassing the overhead for estimating derived representations. This work
establishes a foundation for data-driven, geometry-aware dMRI registration
directly in the acquisition space.
|
2501.04796
|
Democratic Resilience and Sociotechnical Shocks
|
cs.SI cs.SY eess.SY stat.AP
|
We focus on the potential fragility of democratic elections given modern
information-communication technologies (ICT) in the Web 2.0 era. Our work
provides an explanation for the cascading attrition of public officials
recently in the United States and offers potential policy interventions from a
dynamic system's perspective. We propose that micro-level heterogeneity across
individuals within crucial institutions leads to vulnerabilities of election
support systems at the macro scale. Our analysis provides comparative
statistics to measure the fragility of systems against targeted harassment,
disinformation campaigns, and other adversarial manipulations that are now
cheaper to scale and deploy. Our analysis also informs policy interventions
that seek to retain public officials and increase voter turnout. We show how
limited resources (for example, salary incentives to public officials and
targeted interventions to increase voter turnout) can be allocated at the
population level to improve these outcomes and maximally enhance democratic
resilience. On the one hand, structural and individual heterogeneity cause
systemic fragility that adversarial actors can exploit; on the other hand,
they provide opportunities for effective interventions that offer significant
global improvements from limited and localized actions.
|
2501.04799
|
Cued Speech Generation Leveraging a Pre-trained Audiovisual
Text-to-Speech Model
|
cs.CL
|
This paper presents a novel approach for the automatic generation of Cued
Speech (ACSG), a visual communication system used by people with hearing
impairment to better perceive spoken language. We explore transfer learning
strategies by leveraging a pre-trained audiovisual autoregressive
text-to-speech model (AVTacotron2). This model is reprogrammed to infer Cued
Speech (CS) hand and lip movements from text input. Experiments are conducted
on two publicly available datasets, including one recorded specifically for
this study. Performance is assessed using an automatic CS recognition system.
With a decoding accuracy at the phonetic level reaching approximately 77%, the
results demonstrate the effectiveness of our approach.
|
2501.04802
|
Reproducing HotFlip for Corpus Poisoning Attacks in Dense Retrieval
|
cs.IR cs.CL
|
HotFlip is a topical gradient-based word substitution method for attacking
language models. Recently, this method has been further applied to attack
retrieval systems by generating malicious passages that are injected into a
corpus, i.e., corpus poisoning. However, HotFlip is known to be computationally
inefficient, with the majority of time being spent on gradient accumulation for
each query-passage pair during the adversarial token generation phase, making
it impossible to generate an adequate number of adversarial passages in a
reasonable amount of time. Moreover, the attack method itself assumes access to
a set of user queries, a strong assumption that does not correspond to how
real-world adversarial attacks are usually performed. In this paper, we first
significantly boost the efficiency of HotFlip, reducing the adversarial
generation process from 4 hours per document to only 15 minutes, using the same
hardware. We further contribute experiments and analysis on two additional
tasks: (1) transfer-based black-box attacks, and (2) query-agnostic attacks.
Whenever possible, we provide comparisons between the original method and our
improved version. Our experiments demonstrate that HotFlip can effectively
attack a variety of dense retrievers, with an observed trend that its attack
performance diminishes against more advanced and recent methods. Interestingly,
we observe that while HotFlip performs poorly in a black-box setting,
indicating limited capacity for generalization, in query-agnostic scenarios its
performance is correlated with the volume of injected adversarial passages.
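(Illustrative editorial sketch, not the authors' code: HotFlip's core substitution step picks the replacement token whose embedding change most increases the attack loss under a first-order approximation. The vocabulary size, embedding dimension, and gradient below are hypothetical stand-ins.)

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical setup: a 50-token vocabulary with 16-d embeddings, plus the
# gradient of the attack loss w.r.t. the embedding at one passage position.
vocab_emb = rng.normal(size=(50, 16))
cur_token = 7
grad = rng.normal(size=16)  # d(loss)/d(embedding) at this position

# HotFlip's first-order criterion: pick the substitution that maximizes
# (e_new - e_cur) . grad, i.e. the estimated increase in the attack loss.
gains = (vocab_emb - vocab_emb[cur_token]) @ grad
best = int(np.argmax(gains))
print(best, gains[best])
```

Keeping the current token gives a gain of exactly zero, so the argmax never makes the estimated loss worse; the paper's efficiency gains come from reorganizing how these gradients are accumulated across query-passage pairs.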
|
2501.04811
|
Fast, Fine-Grained Equivalence Checking for Neural Decompilers
|
cs.LG cs.CR cs.SE
|
Neural decompilers are machine learning models that reconstruct the source
code from an executable program. Critical to the lifecycle of any machine
learning model is an evaluation of its effectiveness. However, existing
techniques for evaluating neural decompilation models have substantial
weaknesses, especially when it comes to showing the correctness of the neural
decompiler's predictions. To address this, we introduce codealign, a novel
instruction-level code equivalence technique designed for neural decompilers.
We provide a formal definition of a relation between equivalent instructions,
which we term an equivalence alignment. We show how codealign generates
equivalence alignments, then evaluate codealign by comparing it with symbolic
execution. Finally, we show how the information codealign provides (which parts
of the functions are equivalent and how well the variable names match) is
substantially more detailed than existing state-of-the-art evaluation metrics,
which report unitless numbers measuring similarity.
|
2501.04815
|
Towards Generalizable Trajectory Prediction Using Dual-Level
Representation Learning And Adaptive Prompting
|
cs.CV
|
Existing vehicle trajectory prediction models struggle with generalizability,
prediction uncertainties, and handling complex interactions. This is often due to
limitations such as complex architectures customized for a specific dataset and
inefficient multimodal handling. We propose Perceiver with Register queries
(PerReg+), a novel trajectory prediction framework that introduces: (1)
Dual-Level Representation Learning via Self-Distillation (SD) and Masked
Reconstruction (MR), capturing global context and fine-grained details.
Additionally, our approach of reconstructing segment-level trajectories and lane
segments from masked inputs with query drop enables effective use of
contextual information and improves generalization; (2) Enhanced Multimodality
using register-based queries and pretraining, eliminating the need for
clustering and suppression; and (3) Adaptive Prompt Tuning during fine-tuning,
freezing the main architecture and optimizing a small number of prompts for
efficient adaptation. PerReg+ sets a new state-of-the-art performance on
nuScenes [1], Argoverse 2 [2], and Waymo Open Motion Dataset (WOMD) [3].
Remarkably, our pretrained model reduces the error by 6.8% on smaller datasets,
and multi-dataset training enhances generalization. In cross-domain tests,
PerReg+ reduces B-FDE by 11.8% compared to its non-pretrained variant.
|
2501.04816
|
Probabilistic Skip Connections for Deterministic Uncertainty
Quantification in Deep Neural Networks
|
cs.LG stat.ML
|
Deterministic uncertainty quantification (UQ) in deep learning aims to
estimate uncertainty with a single pass through a network by leveraging outputs
from the network's feature extractor. Existing methods require that the feature
extractor be both sensitive and smooth, ensuring meaningful input changes
produce meaningful changes in feature vectors. Smoothness enables
generalization, while sensitivity prevents feature collapse, where distinct
inputs are mapped to identical feature vectors. To meet these requirements,
current deterministic methods often retrain networks with spectral
normalization. Instead of modifying training, we propose using measures of
neural collapse to identify an existing intermediate layer that is both
sensitive and smooth. We then fit a probabilistic model to the feature vector
of this intermediate layer, which we call a probabilistic skip connection
(PSC). Through empirical analysis, we explore the impact of spectral
normalization on neural collapse and demonstrate that PSCs can effectively
disentangle aleatoric and epistemic uncertainty. Additionally, we show that
PSCs achieve uncertainty quantification and out-of-distribution (OOD) detection
performance that matches or exceeds existing single-pass methods requiring
training modifications. By retrofitting existing models, PSCs enable
high-quality UQ and OOD capabilities without retraining.
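(Illustrative editorial sketch, not the paper's PSC implementation: the idea of fitting a probabilistic model to an intermediate layer's features can be approximated by fitting a single Gaussian and thresholding the Mahalanobis distance for OOD detection. All dimensions and data below are synthetic.)

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical intermediate-layer feature vectors from in-distribution data.
feats = rng.normal(0.0, 1.0, size=(500, 8))

# Fit a single Gaussian to the features (a minimal stand-in for the
# probabilistic model attached at the chosen layer).
mu = feats.mean(axis=0)
cov = np.cov(feats, rowvar=False) + 1e-6 * np.eye(8)
cov_inv = np.linalg.inv(cov)

def mahalanobis(x):
    d = x - mu
    return float(np.sqrt(d @ cov_inv @ d))

# In-distribution features score low; a far-away feature vector scores high,
# so thresholding the distance gives a simple single-pass OOD detector.
in_dist = mahalanobis(feats[0])
ood = mahalanobis(np.full(8, 6.0))
print(in_dist, ood)
```

Because the density model is fitted after training, this retrofitting requires no change to the network's weights, which is the property the abstract emphasizes.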
|
2501.04817
|
Decentralised Resource Sharing in TinyML: Wireless Bilayer Gossip
Parallel SGD for Collaborative Learning
|
cs.LG cs.AI
|
With the growing computational capabilities of microcontroller units (MCUs),
edge devices can now support machine learning models. However, deploying
decentralised federated learning (DFL) on such devices presents key challenges,
including intermittent connectivity, limited communication range, and dynamic
network topologies. This paper proposes a novel framework, bilayer Gossip
Decentralised Parallel Stochastic Gradient Descent (GD PSGD), designed to
address these issues in resource-constrained environments. The framework
incorporates a hierarchical communication structure using Distributed Kmeans
(DKmeans) clustering for geographic grouping and a gossip protocol for
efficient model aggregation across two layers: intra-cluster and inter-cluster.
We evaluate the framework's performance against the Centralised Federated
Learning (CFL) baseline using the MCUNet model on the CIFAR-10 dataset under
IID and Non-IID conditions. Results demonstrate that the proposed method
achieves comparable accuracy to CFL on IID datasets, requiring only 1.8
additional rounds for convergence. On Non-IID datasets, the accuracy loss
remains under 8\% for moderate data imbalance. These findings highlight the
framework's potential to support scalable and privacy-preserving learning on
edge devices with minimal performance trade-offs.
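(Illustrative editorial sketch: a toy version of the two-layer aggregation idea, averaging model parameters within each cluster and then exchanging between cluster heads. The cluster assignments stand in for the DKmeans geographic grouping, and the scheme is simplified relative to the paper's gossip protocol.)

```python
import numpy as np

# Toy model parameters for 6 devices grouped into 2 geographic clusters.
params = np.arange(6, dtype=float).reshape(6, 1)  # one weight per device
clusters = {0: [0, 1, 2], 1: [3, 4, 5]}

def gossip_round(params, clusters):
    out = params.copy()
    # Layer 1 (intra-cluster): average within each cluster.
    for members in clusters.values():
        out[members] = out[members].mean(axis=0)
    # Layer 2 (inter-cluster): cluster heads (first member) exchange and
    # average, then broadcast back to their cluster.
    heads = [m[0] for m in clusters.values()]
    head_avg = out[heads].mean(axis=0)
    for members in clusters.values():
        out[members] = head_avg
    return out

result = gossip_round(params, clusters)
print(result.ravel())
```

With equal-sized clusters, one full round already reaches the global mean; on sparse or intermittently connected topologies, several rounds of partial exchanges would be needed, which is where the convergence-round comparison against CFL comes from.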
|
2501.04819
|
Planing It by Ear: Convolutional Neural Networks for Acoustic Anomaly
Detection in Industrial Wood Planers
|
cs.SD cs.AI eess.AS
|
In recent years, the wood product industry has been facing a skilled labor
shortage. The result is more frequent sudden failures, which impose additional
costs on companies already operating in a very competitive market.
Moreover, sawmills are challenging environments for machinery and sensors.
Given that experienced machine operators may be able to diagnose defects or
malfunctions, one possible way of assisting novice operators is through
acoustic monitoring. As a step towards the automation of wood-processing
equipment and decision support systems for machine operators, in this paper, we
explore using a deep convolutional autoencoder for acoustic anomaly detection
of wood planers on a new real-life dataset. Specifically, our convolutional
autoencoder with skip connections (Skip-CAE) and our Skip-CAE transformer
outperform the DCASE autoencoder baseline, one-class SVM, isolation forest and
a published convolutional autoencoder architecture, respectively obtaining an
area under the ROC curve of 0.846 and 0.875 on a dataset of real-factory planer
sounds. Moreover, we show that adding skip connections and an attention mechanism
in the form of a transformer encoder-decoder helps to further improve the
anomaly detection capabilities.
|
2501.04820
|
Unifying the Extremes: Developing a Unified Model for Detecting and
Predicting Extremist Traits and Radicalization
|
cs.SI cs.CL cs.CY
|
The proliferation of ideological movements into extremist factions via social
media has become a global concern. While radicalization has been studied
extensively within the context of specific ideologies, our ability to
accurately characterize extremism in more generalizable terms remains
underdeveloped. In this paper, we propose a novel method for extracting and
analyzing extremist discourse across a range of online community forums. By
focusing on verbal behavioral signatures of extremist traits, we develop a
framework for quantifying extremism at both user and community levels. Our
research identifies 11 distinct factors, which we term ``The Extremist
Eleven,'' as a generalized psychosocial model of extremism. Applying our method
to various online communities, we demonstrate an ability to characterize
ideologically diverse communities across the 11 extremist traits. We
demonstrate the power of this method by analyzing user histories from members
of the incel community. We find that our framework accurately predicts which
users join the incel community up to 10 months before their actual entry with
an AUC of $>0.6$, steadily increasing to AUC ~0.9 three to four months before
the event. Further, we find that upon entry into an extremist forum, the users
tend to maintain their level of extremism within the community, while still
remaining distinguishable from the general online discourse. Our findings
contribute to the study of extremism by introducing a more holistic,
cross-ideological approach that transcends traditional, trait-specific models.
|
2501.04823
|
Learning Robot Safety from Sparse Human Feedback using Conformal
Prediction
|
cs.RO math.OC stat.AP
|
Ensuring robot safety can be challenging; user-defined constraints can miss
edge cases, policies can become unsafe even when trained from safe data, and
safety can be subjective. Thus, we learn about robot safety by showing policy
trajectories to a human who flags unsafe behavior. From this binary feedback,
we use the statistical method of conformal prediction to identify a region of
states, potentially in learned latent space, guaranteed to contain a
user-specified fraction of future policy errors. Our method is
sample-efficient, as it builds on nearest neighbor classification and avoids
withholding data as is common with conformal prediction. By alerting if the
robot reaches the suspected unsafe region, we obtain a warning system that
mimics the human's safety preferences with guaranteed miss rate. From video
labeling, our system can detect when a quadcopter visuomotor policy will fail
to steer through a designated gate. We present an approach for policy
improvement by avoiding the suspected unsafe region. With it we improve a model
predictive controller's safety, as shown in experimental testing with 30
quadcopter flights across 6 navigation tasks. Code and videos are provided.
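(Illustrative editorial sketch: the calibration step can be shown with the standard split-conformal recipe on synthetic 2-D states; note the paper's method additionally avoids withholding calibration data, which this sketch does not.)

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 2-D state features: human-flagged unsafe states cluster near the
# origin.
unsafe = rng.normal(0.0, 0.5, size=(40, 2))

# Nonconformity score: distance to the nearest flagged-unsafe state
# (smaller = more suspicious), i.e. nearest-neighbor classification.
def score(x):
    return float(np.min(np.linalg.norm(unsafe - x, axis=1)))

# Split-conformal calibration on held-out unsafe examples: choose a radius so
# that at most eps of future unsafe states escape the alarm region.
calib = rng.normal(0.0, 0.5, size=(50, 2))
scores = np.sort([score(x) for x in calib])
eps = 0.1  # target miss rate
k = int(np.ceil((len(scores) + 1) * (1 - eps)))  # conformal rank
radius = scores[min(k, len(scores)) - 1]

def alarm(x):
    """Warn when the state falls inside the suspected-unsafe region."""
    return score(x) <= radius

print(alarm(np.array([0.0, 0.0])), alarm(np.array([5.0, 5.0])))
```

The conformal rank gives the distribution-free guarantee: among future states drawn like the calibration set, at most roughly an eps fraction will evade the warning system.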
|
2501.04826
|
Intelligent Gradient Boosting Algorithms for Estimating Strength of
Modified Subgrade Soil
|
cs.LG cs.AI cs.CE
|
The performance of pavement under loading depends on the strength of the
subgrade. However, experimental estimation of properties of pavement strengths
such as California bearing ratio (CBR), unconfined compressive strength (UCS)
and resistance value (R) are often tedious, time-consuming and costly, thereby
inspiring a growing interest in machine learning based tools which are simple,
cheap and fast alternatives. Thus, the potential application of two boosting
techniques, categorical boosting (CatBoost) and extreme gradient boosting
(XGBoost), together with support vector regression (SVR), is explored in this
study for estimating properties of subgrade soil modified with hydrated lime
activated rice husk ash (HARSH). Using 121 experimental data samples of varying
proportions of HARSH, plastic limit, liquid limit, plasticity index, clay
activity, optimum moisture content, and maximum dry density as input for CBR,
UCS and R estimation, four evaluation metrics namely coefficient of
determination (R2), root mean squared error (RMSE), mean absolute error (MAE)
and mean absolute percentage error (MAPE) are used to evaluate the models'
performance. The results indicate that XGBoost outperformed CatBoost and SVR in
estimating these properties, yielding R2 of 0.9994, 0.9995 and 0.9999 in
estimating the CBR, UCS and R respectively. Also, SVR outperformed CatBoost in
estimating the CBR and R, with an R2 of 0.9997 in both cases. On the other hand,
CatBoost outperformed SVR in estimating the UCS with R2 of 0.9994. Feature
sensitivity analysis shows that the three machine learning techniques are
unanimous that increasing the HARSH proportion leads to an increase in the
values of the estimated properties. A comparison with previous results also shows
superiority of XGBoost in estimating subgrade properties.
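(Illustrative editorial sketch: the four evaluation metrics named in the abstract are standard; a minimal numpy implementation, exercised on made-up values rather than the paper's data, is below.)

```python
import numpy as np

def regression_metrics(y_true, y_pred):
    """R2, RMSE, MAE and MAPE, the four metrics used to compare the models."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    resid = y_true - y_pred
    ss_res = np.sum(resid ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return {
        "R2": 1.0 - ss_res / ss_tot,
        "RMSE": float(np.sqrt(np.mean(resid ** 2))),
        "MAE": float(np.mean(np.abs(resid))),
        "MAPE": float(100.0 * np.mean(np.abs(resid / y_true))),
    }

# Made-up strength values, purely to exercise the formulas.
m = regression_metrics([10.0, 20.0, 30.0], [11.0, 19.0, 30.0])
print(m)  # R2 = 0.99, RMSE ~ 0.8165, MAE ~ 0.6667, MAPE = 5.0
```

R2 is scale-free while RMSE and MAE carry the target's units, which is why papers typically report several of these together.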
|
2501.04828
|
Building Foundations for Natural Language Processing of Historical
Turkish: Resources and Models
|
cs.CL
|
This paper introduces foundational resources and models for natural language
processing (NLP) of historical Turkish, a domain that has remained
underexplored in computational linguistics. We present the first named entity
recognition (NER) dataset, HisTR, and the first Universal Dependencies treebank,
OTA-BOUN, for a historical form of the Turkish language, along with
transformer-based models trained using these datasets for named entity
recognition, dependency parsing, and part-of-speech tagging tasks.
Additionally, we introduce Ottoman Text Corpus (OTC), a clean corpus of
transliterated historical Turkish texts that spans a wide range of historical
periods. Our experimental results show significant improvements in the
computational analysis of historical Turkish, achieving promising results in
tasks that require understanding of historical linguistic structures. They also
highlight existing challenges, such as domain adaptation and language
variations across time periods. All of the presented resources and models are
made available at https://huggingface.co/bucolin to serve as a benchmark for
future progress in historical Turkish NLP.
|
2501.04830
|
A Deep Learning-Based Method for Power System Resilience Evaluation
|
eess.SY cs.SY
|
Power systems are critical infrastructure in modern society, and power
outages can cause significant disruptions to communities and individuals' daily
lives. The resilience of a power system measures its ability to maintain power
supply during highly disruptive events such as hurricanes, earthquakes, and
thunderstorms. Traditional methods for quantifying power system resilience
include statistics-based and simulation-based approaches. Statistics-based
methods offer a retrospective analysis of system performance without requiring
a physical model, while simulation-based methods necessitate detailed physical
system information and often simplify real-world scenarios. This paper
introduces a deep learning-based method for evaluating power system resilience
using historical power outage data. The method leverages the generalization
capabilities of deep learning models and incorporates socio-economic and
demographic factors as weighting terms to highlight the impacts on vulnerable
demographic groups. The effectiveness of the proposed method is demonstrated
through two case studies: one with real historical outage data and the other
with simulated outage records. This approach provides valuable insights into
measuring power system resilience against hazardous weather events without
requiring a physical model of the target systems. The evaluation results can
further guide the planning of distributed energy resources for resilience
enhancement.
|
2501.04831
|
Quantum Hybrid Support Vector Machines for Stress Detection in Older
Adults
|
quant-ph cs.LG
|
Stress can increase the possibility of cognitive impairment and decrease the
quality of life in older adults. Smart healthcare can deploy quantum machine
learning to enable preventive and diagnostic support. This work introduces a
unique technique to address stress detection as an anomaly detection problem
that uses quantum hybrid support vector machines. With the help of a wearable
smartwatch, we mapped baseline sensor readings as normal data and stressed
sensor readings as anomaly data, using cortisol concentration as the ground
truth. We have used quantum computing techniques to explore the complex feature
spaces with kernel-based preprocessing. We illustrate the usefulness of our
method by doing experimental validation on 40 older adults with the help of the
TSST protocol. Our findings highlight that using a limited number of features,
quantum machine learning provides improved accuracy compared to classical
methods. We also observed that the recall value using quantum machine learning
is higher compared to the classical method. The higher recall value illustrates
the potential of quantum machine learning in healthcare, as missing anomalies
could result in delayed diagnostics or treatment.
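(Illustrative editorial sketch: the kernel-based anomaly framing can be shown classically, with an RBF kernel standing in for the quantum kernel; the features and distribution shift below are synthetic.)

```python
import numpy as np

rng = np.random.default_rng(3)

# Baseline (normal) feature windows, e.g. summarized smartwatch signals.
baseline = rng.normal(0.0, 1.0, size=(100, 4))

def rbf(a, b, gamma=0.5):
    """RBF similarity between each row of a and the vector b."""
    return np.exp(-gamma * np.sum((a - b) ** 2, axis=-1))

# Kernel-based anomaly score: 1 - mean similarity to the baseline set
# (a classical stand-in for the quantum-kernel preprocessing).
def anomaly_score(x):
    return 1.0 - float(np.mean(rbf(baseline, x)))

normal_x = rng.normal(0.0, 1.0, size=4)
stressed_x = rng.normal(4.0, 1.0, size=4)  # shifted features under stress
print(anomaly_score(normal_x), anomaly_score(stressed_x))
```

Thresholding this score yields the normal-versus-anomaly decision; a high recall (few missed anomalies) is exactly the property the abstract highlights as clinically important.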
|
2501.04832
|
ActPC-Geom: Towards Scalable Online Neural-Symbolic Learning via
Accelerating Active Predictive Coding with Information Geometry & Diverse
Cognitive Mechanisms
|
cs.AI cs.LG cs.NE
|
This paper introduces ActPC-Geom, an approach to accelerate Active Predictive
Coding (ActPC) in neural networks by integrating information geometry,
specifically using Wasserstein-metric-based methods for measure-dependent
gradient flows. We propose replacing KL-divergence in ActPC's predictive error
assessment with the Wasserstein metric, suggesting this may enhance network
robustness.
To make this computationally feasible, we present strategies including: (1)
neural approximators for inverse measure-dependent Laplacians, (2) approximate
kernel PCA embeddings for low-rank approximations feeding into these
approximators, and (3) compositional hypervector embeddings derived from kPCA
outputs, with algebra optimized for fuzzy FCA lattices learned through neural
architectures analyzing network states.
This results in an ActPC architecture capable of real-time online learning
and integrating continuous (e.g., transformer-like or Hopfield-net-like) and
discrete symbolic ActPC networks, including frameworks like OpenCog Hyperon or
ActPC-Chem for algorithmic chemistry evolution. Shared probabilistic,
concept-lattice, and hypervector models enable symbolic-subsymbolic
integration.
Key features include (1) compositional reasoning via hypervector embeddings
in transformer-like architectures for tasks like commonsense reasoning, and (2)
Hopfield-net dynamics enabling associative long-term memory and
attractor-driven cognitive features.
We outline how ActPC-Geom combines few-shot learning with online weight
updates, enabling deliberative thinking and seamless symbolic-subsymbolic
reasoning. Ideas from Galois connections are explored for efficient hybrid
ActPC/ActPC-Chem processing. Finally, we propose a specialized HPC design
optimized for real-time focused attention and deliberative reasoning tailored
to ActPC-Geom's demands.
|
2501.04835
|
Do Code LLMs Understand Design Patterns?
|
cs.SE cs.AI
|
Code Large Language Models (LLMs) demonstrate great versatility in adapting
to various downstream tasks, including code generation and completion, as well
as bug detection and fixing. However, Code LLMs often fail to capture existing
coding standards, leading to the generation of code that conflicts with the
required design patterns for a given project. As a result, developers must
post-process to adapt the generated code to the project's design norms. In this
work, we empirically investigate the biases of Code LLMs in software
development. Through carefully designed experiments, we assess the models'
understanding of design patterns across recognition, comprehension, and
generation. Our findings reveal that biases in Code LLMs significantly affect
the reliability of downstream tasks.
|
2501.04839
|
DRL-Based Medium-Term Planning of Renewable-Integrated Self-Scheduling
Cascaded Hydropower to Guide Wholesale Market Participation
|
eess.SY cs.SY
|
For self-scheduling cascaded hydropower (S-CHP) facilities, medium-term
planning is a critical step that coordinates water availability over the
medium-term horizon, providing water usage guidance for their short-term
operations in wholesale market participation. Typically, medium-term planning
strategies (e.g., reservoir storage targets at the end of each short-term
period) are determined by either optimization methods or rules of thumb.
However, with the integration of variable renewable energy sources (VRESs),
optimization-based methods suffer from deviations between the anticipated and
actual reservoir storage, while rules of thumb could be financially
conservative, thereby compromising short-term operating profitability in
wholesale market participation. This paper presents a deep reinforcement
learning (DRL)-based framework to derive medium-term planning policies for
VRES-integrated S-CHPs (VS-CHPs), which can leverage contextual information
underneath individual short-term periods and train planning policies by their
induced short-term operating profits in wholesale market participation. The
proposed DRL-based framework offers two practical merits. First, its planning
strategies consider both seasonal requirements of reservoir storage and needs
for short-term operating profits. Second, it adopts a multi-parametric
programming-based strategy to accelerate the expensive training process
associated with multi-step short-term operations. Finally, the DRL-based
framework is evaluated on a real-world VS-CHP, demonstrating its advantages
over current practice.
|
2501.04844
|
Enhancing Listened Speech Decoding from EEG via Parallel Phoneme
Sequence Prediction
|
eess.AS cs.AI cs.CL eess.SP
|
Brain-computer interfaces (BCI) offer numerous human-centered application
possibilities, particularly affecting people with neurological disorders. Text
or speech decoding from brain activities is a relevant domain that could
augment the quality of life for people with impaired speech perception. We
propose a novel approach to enhance listened speech decoding from
electroencephalography (EEG) signals by utilizing an auxiliary phoneme
predictor that simultaneously decodes textual phoneme sequences. The proposed
model architecture consists of three main parts: EEG module, speech module, and
phoneme predictor. The EEG module learns to properly represent EEG signals into
EEG embeddings. The speech module generates speech waveforms from the EEG
embeddings. The phoneme predictor outputs the decoded phoneme sequences in text
modality. Our proposed approach allows users to obtain decoded listened speech
from EEG signals in both modalities (speech waveforms and textual phoneme
sequences) simultaneously, eliminating the need for a concatenated sequential
pipeline for each modality. The proposed approach also outperforms previous
methods in both modalities. The source code and speech samples are publicly
available.
|
2501.04845
|
Intelligent experiments through real-time AI: Fast Data Processing and
Autonomous Detector Control for sPHENIX and future EIC detectors
|
physics.ins-det cs.LG hep-ex nucl-ex
|
This R\&D project, initiated by the DOE Nuclear Physics AI-Machine Learning
initiative in 2022, leverages AI to address data processing challenges in
high-energy nuclear experiments (RHIC, LHC, and future EIC). Our focus is on
developing a demonstrator for real-time processing of high-rate data streams
from sPHENIX experiment tracking detectors. The limitations of a 15 kHz maximum
trigger rate imposed by the calorimeters can be negated by intelligent use of
streaming technology in the tracking system. The approach efficiently
identifies low momentum rare heavy flavor events in high-rate p+p collisions
(3MHz), using Graph Neural Network (GNN) and High Level Synthesis for Machine
Learning (hls4ml). Success at sPHENIX promises immediate benefits, minimizing
resources and accelerating the heavy-flavor measurements. The approach is
transferable to other fields. For the EIC, we develop a DIS-electron tagger
using Artificial Intelligence - Machine Learning (AI-ML) algorithms for
real-time identification, showcasing the transformative potential of AI and
FPGA technologies in the real-time data processing pipelines of high-energy
nuclear and particle experiments.
|
2501.04846
|
EDMB: Edge Detector with Mamba
|
cs.CV
|
Transformer-based models have made significant progress in edge detection,
but their high computational cost is prohibitive. Recently, vision Mamba has
shown an excellent ability to capture long-range dependencies efficiently.
Drawing inspiration from this, we propose a novel edge detector with Mamba,
termed EDMB, to efficiently generate high-quality multi-granularity edges. In
EDMB, Mamba is combined with a global-local architecture so that it can
focus on both global information and fine-grained cues. The fine-grained cues
play a crucial role in edge detection, but are usually ignored by ordinary
Mamba. We design a novel decoder to construct learnable Gaussian distributions
by fusing global features and fine-grained features. The multi-granularity
edges are then generated by sampling from the distributions. In order to make
multi-granularity edges applicable to single-label data, we introduce Evidence
Lower Bound loss to supervise the learning of the distributions. On the
multi-label dataset BSDS500, our proposed EDMB achieves competitive
single-granularity ODS 0.837 and multi-granularity ODS 0.851 without
multi-scale test or extra PASCAL-VOC data. Remarkably, EDMB can be extended to
single-label datasets such as NYUDv2 and BIPED. The source code is available at
https://github.com/Li-yachuan/EDMB.
|
2501.04848
|
Exploring Large Language Models for Semantic Analysis and Categorization
of Android Malware
|
cs.CR cs.AI
|
Malware analysis is a complex process of examining and evaluating malicious
software's functionality, origin, and potential impact. This arduous process
typically involves dissecting the software to understand its components,
infection vector, propagation mechanism, and payload. Over the years, deep
reverse engineering of malware has become increasingly tedious, mainly due to
modern malicious codebases' fast evolution and sophistication. Essentially,
analysts are tasked with identifying the elusive needle in the haystack within
the complexities of zero-day malware, all while under tight time constraints.
Thus, in this paper, we explore leveraging Large Language Models (LLMs) for
semantic malware analysis to expedite the analysis of known and novel samples.
Built on the GPT-4o-mini model, \msp is designed to augment malware analysis for
Android through a hierarchical-tiered summarization chain and strategic prompt
engineering. Additionally, \msp performs malware categorization, distinguishing
potential malware from benign applications, thereby saving time during the
malware reverse engineering process. Despite not being fine-tuned for Android
malware analysis, we demonstrate that through optimized and advanced prompt
engineering \msp can achieve up to 77% classification accuracy while providing
highly robust summaries at functional, class, and package levels. In addition,
leveraging the backward tracing of the summaries from package to function
levels allowed us to pinpoint the precise code snippets responsible for
malicious behavior.
|
2501.04852
|
Classification of Self-Dual Constacyclic Codes of Prime Power Length
$p^s$ Over $\frac{\mathbb{F}_{p^m}[u]}{\left\langle u^3\right\rangle} $
|
cs.IT math.IT math.RA
|
Let $\mathbb{F}_{p^m}$ be a finite field of cardinality $p^m$, where $p$ is a
prime number and $m$ is a positive integer. Self-dual constacyclic codes of
length \( p^s \) over \( \frac{\mathbb{F}_{p^m}[u]}{\langle u^3 \rangle} \)
exist only when \( p = 2 \). In this work, we classify and enumerate all
self-dual cyclic codes of length \( 2^s \) over \(
\frac{\mathbb{F}_{2^m}[u]}{\langle u^3 \rangle} \), thereby completing the
classification and enumeration of self-dual constacyclic codes of length \( p^s
\) over \( \frac{\mathbb{F}_{p^m}[u]}{\langle u^3 \rangle} \). Additionally, we
correct and improve results from B. Kim and Y. Lee (2020) in
\cite{kim2020classification}.
|
2501.04854
|
Higher-order Delsarte Dual LPs: Lifting, Constructions and Completeness
|
cs.IT cs.DM math.CO math.IT
|
A central and longstanding open problem in coding theory is the
rate-versus-distance trade-off for binary error-correcting codes. In a seminal
work, Delsarte introduced a family of linear programs establishing relaxations
on the size of optimum codes. To date, the state-of-the-art upper bounds for
binary codes come from dual feasible solutions to these LPs. Still, these
bounds are exponentially far from the best-known existential constructions.
Recently, hierarchies of linear programs extending and strengthening
Delsarte's original LPs were introduced for linear codes, which we refer to as
higher-order Delsarte LPs. These new hierarchies were shown to provably
converge to the actual value of optimum codes, namely, they are complete
hierarchies. Therefore, understanding them and their dual formulations becomes
a valuable line of investigation. Nonetheless, their higher-order structure
poses challenges. In fact, analysis of all known convex programming hierarchies
strengthening Delsarte's original LPs has turned out to be exceedingly
difficult and essentially nothing is known, stalling progress in the area since
the 1970s.
Our main result is an analysis of the higher-order Delsarte LPs via their
dual formulation. Although quantitatively, our current analysis only matches
the best-known upper bounds, it shows, for the first time, how to tame the
complexity of analyzing a hierarchy strengthening Delsarte's original LPs. In
doing so, we reach a better understanding of the structure of the hierarchy,
which may serve as the foundation for further quantitative improvements. We
provide two additional structural results for this hierarchy. First, we show
how to \emph{explicitly} lift any feasible dual solution from level $k$ to a
(suitable) larger level $\ell$ while retaining the objective value. Second, we
give a novel proof of completeness using the dual formulation.
|
2501.04855
|
A new rotation-free isogeometric thin shell formulation and a
corresponding continuity constraint for patch boundaries
|
cs.CE
|
This paper presents a general non-linear computational formulation for
rotation-free thin shells based on isogeometric finite elements. It is a
displacement-based formulation that admits general material models. The
formulation allows for a wide range of constitutive laws, including both shell
models that are extracted from existing 3D continua using numerical integration
and those that are directly formulated in 2D manifold form, like the Koiter,
Canham and Helfrich models. Further, a unified approach to enforce the
$G^1$-continuity between patches, fix the angle between surface folds, enforce
symmetry conditions and prescribe rotational Dirichlet boundary conditions, is
presented using penalty and Lagrange multiplier methods. The formulation is
fully described in the natural curvilinear coordinate system of the finite
element description, which facilitates an efficient computational
implementation. It contains existing isogeometric thin shell formulations as
special cases. Several classical numerical benchmark examples are considered to
demonstrate the robustness and accuracy of the proposed formulation. The
presented constitutive models, in particular the simple mixed Koiter model that
does not require any thickness integration, show excellent performance, even
for large deformations.
|
2501.04858
|
Advancing Retrieval-Augmented Generation for Persian: Development of
Language Models, Comprehensive Benchmarks, and Best Practices for
Optimization
|
cs.CL
|
This paper examines the specific obstacles of constructing
Retrieval-Augmented Generation (RAG) systems in low-resource languages, with a
focus on Persian's complicated morphology and versatile syntax. The research
aims to improve retrieval and generation accuracy by introducing
Persian-specific models, namely MatinaRoberta (a masked language model) and
MatinaSRoberta (a fine-tuned Sentence-BERT), along with a comprehensive
benchmarking framework. Three datasets, covering general knowledge (PQuad),
scientifically specialized texts, and organizational reports, were used to
assess these models after they were trained on a varied corpus of 73.11 billion
Persian tokens. The
methodology involved extensive pretraining, fine-tuning with tailored loss
functions, and systematic evaluations using both traditional metrics and the
Retrieval-Augmented Generation Assessment framework. The results show that
MatinaSRoberta outperformed previous embeddings, achieving superior contextual
relevance and retrieval accuracy across datasets. Temperature tweaking, chunk
size modifications, and document summary indexing were explored to enhance RAG
setups. Larger models like Llama-3.1 (70B) consistently demonstrated the
highest generation accuracy, while smaller models faced challenges with
domain-specific and formal contexts. The findings underscore the potential for
developing RAG systems in Persian through customized embeddings and
retrieval-generation settings and highlight the enhancement of NLP applications
such as search engines and legal document analysis in low-resource languages.
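A minimal sketch of the dense-retrieval step such a RAG pipeline performs. The 4-dimensional vectors below are toy stand-ins for real sentence embeddings (a model like MatinaSRoberta would produce hundreds of dimensions); only the cosine-similarity ranking logic is illustrated:

```python
import numpy as np

# Toy dense retrieval: rank documents by cosine similarity to the query
# embedding and return the indices of the top_k closest documents.
def retrieve(query_vec, doc_vecs, top_k=2):
    q = query_vec / np.linalg.norm(query_vec)
    d = doc_vecs / np.linalg.norm(doc_vecs, axis=1, keepdims=True)
    scores = d @ q                       # cosine similarity per document
    return np.argsort(scores)[::-1][:top_k]

# Hypothetical embeddings: docs 0 and 1 are near the query, doc 2 is not.
docs = np.array([[1.0, 0.0, 0.0, 0.1],
                 [0.9, 0.1, 0.0, 0.0],
                 [0.0, 1.0, 0.2, 0.0]])
query = np.array([1.0, 0.05, 0.0, 0.0])
print(retrieve(query, docs))  # indices of the two documents closest to the query
```

Chunk-size tuning, as explored in the paper, amounts to changing what text unit each row of `doc_vecs` represents.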
|
2501.04860
|
Exploring the Use of Robots for Diary Studies
|
cs.RO cs.HC
|
As interest in studying in-the-wild human-robot interaction grows, there is a
need for methods to collect data over time and in naturalistic or potentially
private environments. HRI researchers have increasingly used the diary method
for these studies, asking study participants to self-administer a structured
data collection instrument, i.e., a diary, over a period of time. Although the
diary method offers a unique window into settings that researchers may not have
access to, it also lacks the interactivity and probing that interview-based
methods offer. In this paper, we explore a novel data collection method in
which a robot plays the role of an interactive diary. We developed the Diary
Robot system and performed in-home deployments for a week to evaluate the
feasibility and effectiveness of this approach. Using traditional text-based
and audio-based diaries as benchmarks, we found that robots are able to
effectively elicit the intended information. We reflect on our findings, and
describe scenarios where the utilization of robots in diary studies as a data
collection instrument may be especially applicable.
|
2501.04861
|
LayerMix: Enhanced Data Augmentation through Fractal Integration for
Robust Deep Learning
|
cs.CV
|
Deep learning models have demonstrated remarkable performance across various
computer vision tasks, yet their vulnerability to distribution shifts remains a
critical challenge. Despite sophisticated neural network architectures,
existing models often struggle to maintain consistent performance when
confronted with Out-of-Distribution (OOD) samples, including natural
corruptions, adversarial perturbations, and anomalous patterns. We introduce
LayerMix, an innovative data augmentation approach that systematically enhances
model robustness through structured fractal-based image synthesis. By
meticulously integrating structural complexity into training datasets, our
method generates semantically consistent synthetic samples that significantly
improve neural network generalization capabilities. Unlike traditional
augmentation techniques that rely on random transformations, LayerMix employs a
structured mixing pipeline that preserves original image semantics while
introducing controlled variability. Extensive experiments across multiple
benchmark datasets, including CIFAR-10, CIFAR-100, ImageNet-200, and
ImageNet-1K, demonstrate that LayerMix achieves superior classification
accuracy and substantially enhances critical Machine Learning (ML) safety
metrics, including resilience to natural image corruptions, robustness against
adversarial attacks, model calibration, and prediction consistency. LayerMix
represents a significant advancement toward developing
more reliable and adaptable artificial intelligence systems by addressing the
fundamental challenges of deep learning generalization. The code is available
at https://github.com/ahmadmughees/layermix.
|
2501.04864
|
A hybrid pressure formulation of the face-centred finite volume method
for viscous laminar incompressible flows
|
math.NA cs.CE cs.NA physics.flu-dyn
|
This work presents a hybrid pressure face-centred finite volume (FCFV) solver
to simulate steady-state incompressible Navier-Stokes flows. The method
leverages the robustness, in the incompressible limit, of the hybridisable
discontinuous Galerkin paradigm for compressible and weakly compressible flows
to derive the formulation of a novel, low-order face-based discretisation. The
incompressibility constraint is enforced in a weak sense, by introducing an
inter-cell mass flux defined in terms of a new, hybrid variable, representing
the pressure at the cell faces. This results in a new hybridisation strategy
where cell variables (velocity, pressure and deviatoric strain rate tensor) are
expressed as a function of velocity and pressure at the barycentre of the cell
faces. The hybrid pressure formulation provides first-order convergence of all
variables, including the stress, independently of cell type, stretching and
distortion. Numerical benchmarks of Navier-Stokes flows at low and moderate
Reynolds numbers, in two and three dimensions, are presented to evaluate
accuracy and robustness of the method. In particular, the hybrid pressure
formulation outperforms the FCFV method when convective effects are relevant,
achieving accurate predictions on significantly coarser meshes.
|
2501.04870
|
Deep Transfer $Q$-Learning for Offline Non-Stationary Reinforcement
Learning
|
stat.ML cs.LG
|
In dynamic decision-making scenarios across business and healthcare,
leveraging sample trajectories from diverse populations can significantly
enhance reinforcement learning (RL) performance for specific target
populations, especially when sample sizes are limited. While existing transfer
learning methods primarily focus on linear regression settings, they lack
direct applicability to reinforcement learning algorithms. This paper pioneers
the study of transfer learning for dynamic decision scenarios modeled by
non-stationary finite-horizon Markov decision processes, utilizing neural
networks as powerful function approximators and backward inductive learning. We
demonstrate that naive sample pooling strategies, effective in regression
settings, fail in Markov decision processes. To address this challenge, we
introduce a novel ``re-weighted targeting procedure'' to construct
``transferable RL samples'' and propose ``transfer deep $Q^*$-learning'',
enabling neural network approximation with theoretical guarantees. We assume
that the reward functions are transferable and address both the case in which
the transition densities are transferable and the case in which they are not. Our
analytical techniques for transfer learning in neural network approximation and
transition density transfers have broader implications, extending to supervised
transfer learning with neural networks and domain shift scenarios. Empirical
experiments on both synthetic and real datasets corroborate the advantages of
our method, showcasing its potential for improving decision-making through
strategically constructing transferable RL samples in non-stationary
reinforcement learning contexts.
|
2501.04871
|
RieszBoost: Gradient Boosting for Riesz Regression
|
stat.ML cs.LG stat.ME
|
Answering causal questions often involves estimating linear functionals of
conditional expectations, such as the average treatment effect or the effect of
a longitudinal modified treatment policy. By the Riesz representation theorem,
these functionals can be expressed as the expected product of the conditional
expectation of the outcome and the Riesz representer, a key component in doubly
robust estimation methods. Traditionally, the Riesz representer is estimated
indirectly by deriving its explicit analytical form, estimating its components,
and substituting these estimates into the known form (e.g., the inverse
propensity score). However, deriving or estimating the analytical form can be
challenging, and substitution methods are often sensitive to practical
positivity violations, leading to higher variance and wider confidence
intervals. In this paper, we propose a novel gradient boosting algorithm to
directly estimate the Riesz representer without requiring its explicit
analytical form. This method is particularly suited for tabular data, offering
a flexible, nonparametric, and computationally efficient alternative to
existing methods for Riesz regression. Through simulation studies, we
demonstrate that our algorithm performs on par with or better than indirect
estimation techniques across a range of functionals, providing a user-friendly
and robust solution for estimating causal quantities.
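The loss that such a direct boosting algorithm can minimize comes from the Riesz representation itself. This is the standard Riesz loss from the automatic debiased machine learning literature, stated here with my notation (data $W$, functional $f \mapsto \mathbb{E}[m(W; f)]$), not spelled out in the abstract:

```latex
\alpha_0 \;=\; \arg\min_{\alpha}\; \mathbb{E}\!\left[\alpha(W)^2 - 2\, m(W; \alpha)\right],
```

which works because $\mathbb{E}[m(W;\alpha)] = \mathbb{E}[\alpha_0(W)\,\alpha(W)]$ by the Riesz representation, so the objective equals $\mathbb{E}[(\alpha(W)-\alpha_0(W))^2] - \mathbb{E}[\alpha_0(W)^2]$. For the average treatment effect, $m(W;\alpha) = \alpha(1, X) - \alpha(0, X)$, and the minimizer recovers the inverse-propensity representer without its analytical form ever being written down.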
|
2501.04873
|
Back Home: A Machine Learning Approach to Seashell Classification and
Ecosystem Restoration
|
cs.CV cs.AI cs.LG
|
In Costa Rica, an average of 5 tons of seashells are extracted from
ecosystems annually. Confiscated seashells cannot be returned to their
ecosystems because their origin cannot be determined. To address this issue, we
developed a convolutional neural network (CNN) specifically for seashell
identification. We built a dataset from scratch, consisting of approximately
19,000 images from the Pacific and Caribbean coasts. Using this dataset, the
model achieved a classification accuracy exceeding 85%. The model has been
integrated into a user-friendly application, which has classified over 36,000
seashells to date, delivering real-time results within 3 seconds per image. To
further enhance the system's accuracy, an anomaly detection mechanism was
incorporated to filter out irrelevant or anomalous inputs, ensuring only valid
seashell images are processed.
|
2501.04877
|
Real-Time Textless Dialogue Generation
|
cs.CL cs.AI cs.SD eess.AS
|
Recent advancements in large language models (LLMs) have led to significant
progress in text-based dialogue systems. These systems can now generate
high-quality responses that are accurate and coherent across a wide range of
topics and tasks. However, spoken dialogue systems still lag behind in terms of
naturalness. They tend to produce robotic interactions, with issues such as
slow response times, overly generic or cautious replies, and a lack of natural
rhythm and fluid turn-taking. This shortcoming is largely due to the
over-reliance on the traditional cascaded design, which involves separate,
sequential components, as well as the use of text as an intermediate
representation. This paper proposes a real-time, textless spoken dialogue
generation model (RTTL-DG) that aims to overcome these challenges. Our system
enables fluid turn-taking and generates responses with minimal delay by
processing streaming spoken conversation directly. Additionally, our model
incorporates backchannels, fillers, laughter, and other paralinguistic signals,
which are often absent in cascaded dialogue systems, to create more natural and
human-like interactions. The implementations and generated samples are
available in our repository: https://github.com/mailong25/rts2s-dg
|
2501.04878
|
Topological Classification of points in $Z^2$ by using Topological
Numbers for $2$D discrete binary images
|
cs.CV cs.CG
|
In this paper, we propose a topological classification of points for 2D
discrete binary images. This classification is based on the computed values of
topological numbers. Six classes of points are proposed: isolated
point, interior point, simple point, curve point, point of intersection of 3
curves, point of intersection of 4 curves. The number of configurations of each
class is also given.
|
2501.04879
|
Multilinear Tensor Low-Rank Approximation for Policy-Gradient Methods in
Reinforcement Learning
|
cs.LG
|
Reinforcement learning (RL) aims to estimate the action to take given a
(time-varying) state, with the goal of maximizing a cumulative reward function.
Predominantly, there are two families of algorithms to solve RL problems:
value-based and policy-based methods, with the latter designed to learn a
probabilistic parametric policy from states to actions. Most contemporary
approaches implement this policy using a neural network (NN). However, NNs
usually face issues related to convergence, architectural suitability,
hyper-parameter selection, and underutilization of the redundancies of the
state-action representations (e.g. locally similar states). This paper
postulates multi-linear mappings to efficiently estimate the parameters of the
RL policy. More precisely, we leverage the PARAFAC decomposition to design
tensor low-rank policies. The key idea involves collecting the policy
parameters into a tensor and leveraging tensor-completion techniques to enforce
low rank. We establish theoretical guarantees of the proposed methods for
various policy classes and validate their efficacy through numerical
experiments. Specifically, we demonstrate that tensor low-rank policy models
reduce computational and sample complexities in comparison to NN models while
achieving similar rewards.
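A minimal sketch of the key idea, a policy whose logit tensor is held in PARAFAC (CP) form rather than as a full table. The dimensions, rank, and random factors below are illustrative assumptions, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setting: a 2-D discretized state (two factors) and a discrete action set.
D1, D2, A, RANK = 6, 5, 4, 2

# PARAFAC factors: the full logit tensor is never materialized;
# theta[i, j, a] = sum_r U[i, r] * V[j, r] * W[a, r].
U = rng.normal(size=(D1, RANK))
V = rng.normal(size=(D2, RANK))
W = rng.normal(size=(A, RANK))

def policy(i, j):
    """Softmax policy over actions for discretized state (i, j)."""
    logits = (U[i] * V[j]) @ W.T          # shape (A,): contract the rank index
    z = np.exp(logits - logits.max())
    return z / z.sum()

p = policy(2, 3)
assert np.isclose(p.sum(), 1.0) and (p >= 0).all()
# Parameter count: (D1 + D2 + A) * RANK versus D1 * D2 * A for a full table.
print((D1 + D2 + A) * RANK, D1 * D2 * A)  # 30 120
```

The parameter-count comparison at the end is the source of the reduced sample and computational complexity the abstract claims: the low-rank form ties together locally similar states instead of parameterizing each one independently.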
|
2501.04880
|
Leveraging Log Probabilities in Language Models to Forecast Future
Events
|
cs.CL cs.LG
|
In the constantly changing field of data-driven decision making, accurately
predicting future events is crucial for strategic planning in various sectors.
The emergence of Large Language Models (LLMs) marks a significant advancement
in this area, offering advanced tools that utilise extensive text data for
prediction. In this industry paper, we introduce a novel method for AI-driven
foresight using LLMs. Building on top of previous research, we employ data on
current trends and their trajectories for generating forecasts on 15 different
topics. Subsequently, we estimate their probabilities via a multi-step approach
based on log probabilities. We show that we achieve a Brier score of 0.186, a
+26% improvement over random chance and a +19% improvement over
widely-available AI systems.
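A sketch of the two numeric ingredients involved: turning token log probabilities into a forecast probability, and scoring forecasts with the Brier score. The log-probability values and the binary Yes/No framing are illustrative assumptions, not the paper's multi-step procedure:

```python
import math

# Hypothetical per-token log probabilities an LLM API might return for the
# first token of its answer to "Will event X happen? Answer Yes or No."
logprobs = {"Yes": -0.9, "No": -0.6}   # illustrative numbers

# Renormalize over the two answer tokens to get a forecast probability.
p_yes = math.exp(logprobs["Yes"])
p_no = math.exp(logprobs["No"])
forecast = p_yes / (p_yes + p_no)

# Brier score over (forecast, outcome) pairs; lower is better, and an
# always-0.5 forecaster scores 0.25, the "random chance" baseline.
def brier(pairs):
    return sum((p - o) ** 2 for p, o in pairs) / len(pairs)

print(round(forecast, 3))
print(brier([(0.8, 1), (0.3, 0), (0.6, 1)]))
```

Against the 0.25 random-chance baseline, the paper's reported 0.186 is the +26% improvement quoted above.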
|
2501.04881
|
Geophysical inverse problems with measurement-guided diffusion models
|
physics.geo-ph cs.LG
|
Solving inverse problems with the reverse process of a diffusion model
represents an appealing avenue to produce highly realistic, yet diverse
solutions from incomplete and possibly noisy measurements, ultimately enabling
uncertainty quantification at scale. However, because of the intractable nature
of the score function of the likelihood term (i.e., $\nabla_{\mathbf{x}_t}
p(\mathbf{y} | \mathbf{x}_t)$), various samplers have been proposed in the
literature that use different (more or less accurate) approximations of such a
gradient to guide the diffusion process towards solutions that match the
observations. In this work, I consider two sampling algorithms recently
proposed under the name of Diffusion Posterior Sampling (DPS) and
Pseudo-inverse Guided Diffusion Model (PGDM), respectively. In DPS, the
guidance term used at each step of the reverse diffusion process is obtained by
applying the adjoint of the modeling operator to the residual obtained from a
one-step denoising estimate of the solution. On the other hand, PGDM utilizes a
pseudo-inverse operator that originates from the fact that the one-step
denoised solution is not assumed to be deterministic, but rather modeled as a
Gaussian distribution. Through an extensive set of numerical examples on two
geophysical inverse problems (namely, seismic interpolation and seismic
inversion), I show that two key aspects for the success of any
measurement-guided diffusion process are: i) our ability to re-parametrize the
inverse problem such that the sought-after model is bounded between -1 and 1 (a
pre-requisite for any diffusion model); ii) the choice of the training dataset
used to learn the implicit prior that guides the reverse diffusion process.
Numerical examples on synthetic and field datasets reveal that PGDM outperforms
DPS in both scenarios at limited additional cost.
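Schematically, the DPS guidance described above approximates the intractable likelihood score as follows, assuming Gaussian measurement noise with variance $\sigma^2$ and, as the abstract's description suggests, dropping the Jacobian of the denoiser:

```latex
\nabla_{\mathbf{x}_t} \log p(\mathbf{y} \mid \mathbf{x}_t)
\;\approx\;
-\frac{1}{2\sigma^2}\,
\nabla_{\mathbf{x}_t}
\bigl\| \mathbf{y} - \mathbf{A}\,\hat{\mathbf{x}}_0(\mathbf{x}_t) \bigr\|_2^2
\;\;\longrightarrow\;\;
\frac{1}{\sigma^2}\,
\mathbf{A}^{*}\bigl( \mathbf{y} - \mathbf{A}\,\hat{\mathbf{x}}_0(\mathbf{x}_t) \bigr),
```

where $\hat{\mathbf{x}}_0(\mathbf{x}_t)$ is the one-step denoised estimate, $\mathbf{A}$ the modeling operator, and $\mathbf{A}^{*}$ its adjoint. PGDM replaces the adjoint with a pseudo-inverse-like operator arising from the Gaussian model of the one-step denoised solution.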
|
2501.04882
|
Reach Measurement, Optimization and Frequency Capping In Targeted Online
Advertising Under k-Anonymity
|
cs.GT cs.AI cs.LG stat.AP stat.ML
|
The growth in the use of online advertising to foster brand awareness over
recent years is largely attributable to the ubiquity of social media. One
pivotal technology contributing to the success of online brand advertising is
frequency capping, a mechanism that enables marketers to control the number of
times an ad is shown to a specific user. However, the very foundation of this
technology is being scrutinized as the industry gravitates towards advertising
solutions that prioritize user privacy. This paper delves into the issue of
reach measurement and optimization within the context of $k$-anonymity, a
privacy-preserving model gaining traction across major online advertising
platforms. We outline how to report reach within this new privacy landscape and
demonstrate how probabilistic discounting, a probabilistic adaptation of
traditional frequency capping, can be employed to optimize campaign
performance. Experiments are performed to assess the trade-off between user
privacy and the efficacy of online brand advertising. Notably, we discern a
significant dip in performance as soon as privacy is introduced, yet this comes
with only a limited additional cost for advertising platforms to offer their
users more privacy.
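A minimal sketch of probabilistic discounting under anonymity. Since the platform cannot count impressions per user, each ad opportunity is served with a discounted probability chosen so that the *expected* per-user frequency meets the cap; the cap, opportunity count, and discount rule below are illustrative assumptions, not the paper's formulation:

```python
import random

random.seed(1)

# Serve each opportunity with probability q = cap / expected_opportunities,
# so the expected number of impressions per user equals the cap.
def serve_probability(cap, expected_opportunities):
    return min(1.0, cap / expected_opportunities)

CAP, OPPORTUNITIES, USERS = 3, 10, 100_000
q = serve_probability(CAP, OPPORTUNITIES)

# Simulate many anonymous users, each with the same number of opportunities.
impressions = [sum(random.random() < q for _ in range(OPPORTUNITIES))
               for _ in range(USERS)]
mean_freq = sum(impressions) / USERS
print(round(q, 2), round(mean_freq, 2))  # expected per-user frequency ~ cap
```

Unlike a hard cap, individual users can exceed the cap by chance; the performance dip discussed above stems from exactly this looser, in-expectation control.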
|
2501.04894
|
A Look into How Machine Learning is Reshaping Engineering Models: the
Rise of Analysis Paralysis, Optimal yet Infeasible Solutions, and the
Inevitable Rashomon Paradox
|
cs.LG stat.ME
|
The widespread acceptance of empirically derived codal provisions and
equations in civil engineering stands in stark contrast to the skepticism
facing machine learning (ML) models, despite their shared statistical
foundations. This paper examines this philosophical tension through the lens of
structural engineering and explores how integrating ML challenges traditional
engineering philosophies and professional identities. Recent efforts have
documented how ML enhances predictive accuracy, optimizes designs, and analyzes
complex behaviors. However, one might also raise concerns about the diminishing
role of human intuition and the interpretability of algorithms. To showcase
this rarely explored front, this paper presents how ML can be successfully
integrated into various engineering problems by means of formulation via
deduction, induction, and abduction. Then, this paper identifies three
principal paradoxes that could arise when adopting ML: analysis paralysis
(increased prediction accuracy leading to a reduced understanding of physical
mechanisms), infeasible solutions (optimization resulting in unconventional
designs that challenge engineering intuition), and the Rashomon effect (where
contradictions in explainability methods and physics arise). This paper
concludes by addressing these paradoxes and arguing the need to rethink
epistemological shifts in engineering and engineering education and
methodologies to harmonize traditional principles with ML.
|
2501.04896
|
Quantifying Itch and its Impact on Sleep Using Machine Learning and
Radio Signals
|
cs.LG cs.AI cs.CY
|
Chronic itch affects 13% of the US population, is highly debilitating, and
underlies many medical conditions. A major challenge in clinical care and new
therapeutics development is the lack of an objective measure for quantifying
itch, leading to reliance on subjective measures like patients' self-assessment
of itch severity. In this paper, we show that a home radio device paired with
artificial intelligence (AI) can concurrently capture scratching and evaluate
its impact on sleep quality by analyzing radio signals bouncing in the
environment. The device eliminates the need for wearable sensors or skin
contact, enabling monitoring of chronic itch over extended periods at home
without burdening patients or interfering with their skin condition. To
validate the technology, we conducted an observational clinical study of
chronic pruritus patients, monitored at home for one month using both the radio
device and an infrared camera. Comparing the output of the device to ground
truth data from the camera demonstrates its feasibility and accuracy (ROC AUC =
0.997, sensitivity = 0.825, specificity = 0.997). The results reveal a
significant correlation between scratching and low sleep quality, manifested as
a reduction in sleep efficiency (R = 0.6, p < 0.001) and an increase in sleep
latency (R = 0.68, p < 0.001). Our study underscores the potential of passive,
long-term, at-home monitoring of chronic scratching and its sleep implications,
offering a valuable tool for both clinical care of chronic itch patients and
pharmaceutical clinical trials.
|
2501.04897
|
Online Continual Learning: A Systematic Literature Review of Approaches,
Challenges, and Benchmarks
|
cs.LG
|
Online Continual Learning (OCL) is a critical area in machine learning,
focusing on enabling models to adapt to evolving data streams in real-time
while addressing challenges such as catastrophic forgetting and the
stability-plasticity trade-off. This study conducts the first comprehensive
Systematic Literature Review (SLR) on OCL, analyzing 81 approaches, extracting
over 1,000 features (specific tasks addressed by these approaches), and
identifying more than 500 components (sub-models within approaches, including
algorithms and tools). We also review 83 datasets spanning applications like
image classification, object detection, and multimodal vision-language tasks.
Our findings highlight key challenges, including reducing computational
overhead, developing domain-agnostic solutions, and improving scalability in
resource-constrained environments. Furthermore, we identify promising
directions for future research, such as leveraging self-supervised learning for
multimodal and sequential data, designing adaptive memory mechanisms that
integrate sparse retrieval and generative replay, and creating efficient
frameworks for real-world applications with noisy or evolving task boundaries.
By providing a rigorous and structured synthesis of the current state of OCL,
this review offers a valuable resource for advancing this field and addressing
its critical challenges and opportunities. The complete SLR methodology steps
and extracted data are publicly available through the provided link:
https://github.com/kiyan-rezaee/Systematic-Literature-Review-on-Online-Continual-Learning
|
2501.04898
|
Optimality and Adaptivity of Deep Neural Features for Instrumental
Variable Regression
|
stat.ML cs.LG
|
We provide a convergence analysis of deep feature instrumental variable
(DFIV) regression (Xu et al., 2021), a nonparametric approach to IV regression
using data-adaptive features learned by deep neural networks in two stages. We
prove that the DFIV algorithm achieves the minimax optimal learning rate when
the target structural function lies in a Besov space. This is shown under
standard nonparametric IV assumptions, and an additional smoothness assumption
on the regularity of the conditional distribution of the covariate given the
instrument, which controls the difficulty of Stage 1. We further demonstrate
that DFIV, as a data-adaptive algorithm, is superior to fixed-feature (kernel
or sieve) IV methods in two ways. First, when the target function possesses low
spatial homogeneity (i.e., it has both smooth and spiky/discontinuous regions),
DFIV still achieves the optimal rate, while fixed-feature methods are shown to
be strictly suboptimal. Second, comparing with kernel-based two-stage
regression estimators, DFIV is provably more data efficient in the Stage 1
samples.
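The two-stage structure referenced above can be written schematically (notation mine, not the paper's): with outcome $Y$, endogenous covariate $X$, and instrument $Z$, the structural function $f_0$ satisfies

```latex
Y = f_0(X) + \varepsilon, \qquad \mathbb{E}[\varepsilon \mid Z] = 0
\;\;\Longrightarrow\;\;
\mathbb{E}[Y \mid Z] = \mathbb{E}[f_0(X) \mid Z].
```

Modeling $f(x) = w^{\top}\varphi_\theta(x)$ with learned neural features $\varphi_\theta$, Stage 1 fits $\hat{g}(z) \approx \mathbb{E}[\varphi_\theta(X) \mid Z = z]$, the step whose difficulty the smoothness assumption on the conditional distribution controls, and Stage 2 solves $\min_w \mathbb{E}\bigl[(Y - w^{\top}\hat{g}(Z))^2\bigr]$.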
|
2501.04899
|
SUGAR: Leveraging Contextual Confidence for Smarter Retrieval
|
cs.CL cs.AI
|
Bearing in mind the limited parametric knowledge of Large Language Models
(LLMs), retrieval-augmented generation (RAG) which supplies them with the
relevant external knowledge has served as an approach to mitigate the issue of
hallucinations to a certain extent. However, uniformly retrieving supporting
context makes response generation source-inefficient, as triggering the
retriever is not always necessary and can even be harmful when a model gets
distracted by noisy retrieved content and produces an unhelpful answer.
Motivated by these issues, we introduce Semantic Uncertainty Guided Adaptive
Retrieval (SUGAR), where we leverage context-based entropy to actively decide
whether to retrieve and to further determine between single-step and multi-step
retrieval. Our empirical results show that selective retrieval guided by
semantic uncertainty estimation improves the performance across diverse
question answering tasks, as well as achieves a more efficient inference.
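An illustrative stand-in for the decision rule: estimate the model's uncertainty from the entropy of several sampled answers and trigger retrieval only when the entropy exceeds a threshold. Here answers are clustered by exact match as a crude proxy for semantic equivalence, and the threshold is an assumption, not a value from the paper:

```python
import math
from collections import Counter

# Shannon entropy (nats) of the empirical answer distribution.
def answer_entropy(samples):
    counts = Counter(samples)
    n = len(samples)
    return -sum((c / n) * math.log(c / n) for c in counts.values())

# Retrieve only when the model's sampled answers disagree enough.
def should_retrieve(samples, threshold=0.5):
    return answer_entropy(samples) > threshold

confident = ["Paris"] * 5
uncertain = ["Paris", "Lyon", "Paris", "Marseille", "Lyon"]
print(should_retrieve(confident), should_retrieve(uncertain))  # False True
```

The same entropy estimate could, with a second threshold, arbitrate between single-step and multi-step retrieval as the abstract describes.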
|
2501.04901
|
ThriftLLM: On Cost-Effective Selection of Large Language Models for
Classification Queries
|
cs.DB
|
In recent years, large language models (LLMs) have demonstrated remarkable
capabilities in comprehending and generating natural language content. An
increasing number of services offer LLMs for various tasks via APIs. Different
LLMs demonstrate expertise in different domains of queries (e.g., text
classification queries). Meanwhile, LLMs of different scales, complexity, and
performance are priced diversely. Driven by this, several researchers are
investigating strategies for selecting an ensemble of LLMs, aiming to decrease
overall usage costs while enhancing performance. However, to the best of our
knowledge, none of the existing works addresses the problem of finding an LLM
ensemble, subject to a cost budget, that maximizes the ensemble performance
with guarantees.
In this paper, we formalize the performance of an ensemble of models (LLMs)
using the notion of prediction accuracy which we formally define. We develop an
approach for aggregating responses from multiple LLMs to enhance ensemble
performance. Building on this, we formulate the Optimal Ensemble Selection
problem of selecting a set of LLMs subject to a cost budget that maximizes the
overall prediction accuracy. We show that prediction accuracy is non-decreasing
and non-submodular and provide evidence that the Optimal Ensemble Selection
problem is likely to be NP-hard. By leveraging a submodular function that upper
bounds prediction accuracy, we develop an algorithm called ThriftLLM and prove
that it achieves an instance-dependent approximation guarantee with high
probability. In addition, it achieves state-of-the-art performance for text
classification and entity matching queries on multiple real-world datasets
against various baselines in our extensive experimental evaluation, while using
a relatively lower cost budget, strongly supporting the effectiveness and
superiority of our method.
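A generic budgeted-greedy sketch of the selection problem, not ThriftLLM itself. The surrogate objective 1 - prod(1 - acc_i), the chance that at least one model answers correctly under an independence assumption, is monotone and submodular, mirroring the upper-bounding surrogate the paper exploits; the model names, costs, and accuracies are hypothetical:

```python
MODELS = {  # hypothetical (cost, standalone accuracy) per LLM
    "small": (1.0, 0.70),
    "medium": (3.0, 0.80),
    "large": (8.0, 0.90),
}

def surrogate(ensemble):
    """1 - P(every selected model is wrong), assuming independent errors."""
    p_all_wrong = 1.0
    for name in ensemble:
        p_all_wrong *= 1.0 - MODELS[name][1]
    return 1.0 - p_all_wrong

def greedy_select(budget):
    """Repeatedly add the model with the best marginal gain per unit cost."""
    chosen, spent = [], 0.0
    while True:
        best, best_ratio = None, 0.0
        for name, (cost, _) in MODELS.items():
            if name in chosen or spent + cost > budget:
                continue
            gain = surrogate(chosen + [name]) - surrogate(chosen)
            if gain / cost > best_ratio:
                best, best_ratio = name, gain / cost
        if best is None:
            return chosen, surrogate(chosen)
        chosen.append(best)
        spent += MODELS[best][0]

print(greedy_select(budget=4.0))
```

With a budget of 4.0, the greedy picks "small" then "medium" (the large model does not fit), reaching a surrogate accuracy of 0.94; cost-ratio greedy on a submodular objective is what makes instance-dependent approximation guarantees of the kind the paper proves possible.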
|
2501.04903
|
Towards understanding the bias in decision trees
|
stat.ML cs.LG
|
There is a widespread and longstanding belief that machine learning models
are biased towards the majority (or negative) class when learning from
imbalanced data, leading them to neglect or ignore the minority (or positive)
class. In this study, we show that this belief is not necessarily correct for
decision trees, and that their bias can actually be in the opposite direction.
Motivated by a recent simulation study that suggested that decision trees can
be biased towards the minority class, our paper aims to reconcile the conflict
between that study and decades of other works. First, we critically evaluate
past literature on this problem, finding that failing to consider the data
generating process has led to incorrect conclusions about the bias in decision
trees. We then prove that, under specific conditions related to the predictors,
decision trees fit to purity and trained on a dataset with only one positive
case are biased towards the minority class. Finally, we demonstrate that splits
in a decision tree are also biased when there is more than one positive case.
Our findings have implications on the use of popular tree-based models, such as
random forests.
|
2501.04904
|
JELLY: Joint Emotion Recognition and Context Reasoning with LLMs for
Conversational Speech Synthesis
|
cs.CL cs.SD eess.AS
|
Recently, there has been a growing demand for conversational speech synthesis
(CSS) that generates more natural speech by considering the conversational
context. To address this, we introduce JELLY, a novel CSS framework that
integrates emotion recognition and context reasoning for generating appropriate
speech in conversation by fine-tuning a large language model (LLM) with
multiple partial LoRA modules. We propose an Emotion-aware Q-former encoder,
which enables the LLM to perceive emotions in speech. The encoder is trained to
align speech emotions with text, utilizing datasets of emotional speech. The
entire model is then fine-tuned with conversational speech data to infer
emotional context for generating emotionally appropriate speech in
conversation. Our experimental results demonstrate that JELLY excels in
emotional context modeling, synthesizing speech that naturally aligns with
conversation, while mitigating the scarcity of emotional conversational speech
datasets.
|
2501.04911
|
A Machine Learning Model for Crowd Density Classification in Hajj Video
Frames
|
cs.CV cs.CY
|
Managing the massive annual gatherings of Hajj and Umrah presents significant
challenges, particularly as the Saudi government aims to increase the number of
pilgrims. Currently, around two million pilgrims attend Hajj and 26 million
attend Umrah, making crowd control, especially in critical areas like the Grand
Mosque during Tawaf, a major concern. Additional risks arise in managing dense
crowds at key sites such as Arafat, where the potential for stampedes, fires,
and pandemics poses serious threats to public safety. This research proposes a
machine learning model to classify crowd density into three levels: moderate
crowd, overcrowded, and very dense crowd in video frames recorded during Hajj,
with a flashing red light to alert organizers in real-time when a very dense
crowd is detected. While current research efforts in processing Hajj
surveillance videos focus solely on using CNNs to detect abnormal behaviors,
this research focuses more on high-risk crowds that can lead to disasters.
Hazardous crowd conditions require a robust method, as incorrect classification
could trigger unnecessary alerts and government intervention, while failure to
classify could result in disaster. The proposed model integrates Local Binary
Pattern (LBP) texture analysis, which enhances feature extraction for
differentiating crowd density levels, along with edge density and area-based
features. The model was tested on the KAU-Smart Crowd 'HAJJv2' dataset which
contains 18 videos from various key locations during Hajj including 'Massaa',
'Jamarat', 'Arafat' and 'Tawaf'. The model achieved an accuracy rate of 87%
with a 2.14% error percentage (misclassification rate), demonstrating its
ability to detect and classify various crowd conditions effectively. This
contributes to enhanced crowd management and safety during large-scale events
like Hajj.
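As a rough illustration of the kind of texture features the abstract describes (LBP histograms plus edge density), a generic sketch is shown below. This is not the authors' pipeline; the neighbor ordering and the gradient threshold are assumptions:

```python
import numpy as np

def lbp_histogram(gray):
    """Basic 8-neighbor Local Binary Pattern histogram.

    `gray` is a 2-D image array; border pixels are skipped.
    Returns a normalized 256-bin histogram of LBP codes.
    """
    gray = np.asarray(gray, dtype=float)
    c = gray[1:-1, 1:-1]  # center pixels
    # 8 neighbors, ordered clockwise from the top-left
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros(c.shape, dtype=int)
    for bit, (dy, dx) in enumerate(offsets):
        neigh = gray[1 + dy:gray.shape[0] - 1 + dy,
                     1 + dx:gray.shape[1] - 1 + dx]
        codes += (neigh >= c).astype(int) << bit  # set one bit per neighbor test
    hist = np.bincount(codes.ravel(), minlength=256).astype(float)
    return hist / hist.sum()

def edge_density(gray, thresh=30.0):
    """Fraction of pixels whose gradient magnitude exceeds `thresh` (assumed value)."""
    gy, gx = np.gradient(np.asarray(gray, dtype=float))
    return float((np.hypot(gx, gy) > thresh).mean())
```

In practice such per-frame feature vectors would feed a conventional classifier that outputs one of the three density levels.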
|
2501.04914
|
From Mesh Completion to AI Designed Crown
|
cs.CV cs.LG
|
Designing a dental crown is a time-consuming and labor intensive process. Our
goal is to simplify crown design and minimize the tediousness of making manual
adjustments while still ensuring the highest level of accuracy and consistency.
To this end, we present a new end-to-end deep learning approach, coined Dental
Mesh Completion (DMC), to generate a crown mesh conditioned on a point cloud
context. The dental context includes the tooth prepared to receive a crown and
its surroundings, namely the two adjacent teeth and the three closest teeth in
the opposing jaw. We formulate crown generation in terms of completing this
point cloud context. A feature extractor first converts the input point cloud
into a set of feature vectors that represent local regions in the point cloud.
The set of feature vectors is then fed into a transformer to predict a new set
of feature vectors for the missing region (crown). Subsequently, a point
reconstruction head, followed by a multi-layer perceptron, is used to predict a
dense set of points with normals. Finally, a differentiable point-to-mesh layer
serves to reconstruct the crown surface mesh. We compare our DMC method to a
graph-based convolutional neural network which learns to deform a crown mesh
from a generic crown shape to the target geometry. Extensive experiments on our
dataset demonstrate the effectiveness of our method, which attains an average
of 0.062 Chamfer Distance. The code is available at:
https://github.com/Golriz-code/DMC.gi
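The reported Chamfer Distance can be computed for small point sets with a brute-force sketch like the one below. The symmetric squared-distance variant shown is one common convention; the paper may use a different normalization:

```python
import numpy as np

def chamfer_distance(a, b):
    """Symmetric Chamfer distance between point sets a (N, 3) and b (M, 3).

    For each point, take the squared distance to its nearest neighbor in the
    other set; average each direction and sum the two averages.
    """
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)  # (N, M) pairwise squared distances
    return float(d2.min(axis=1).mean() + d2.min(axis=0).mean())
```

For large meshes a KD-tree nearest-neighbor query would replace the O(NM) pairwise matrix.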
|
2501.04916
|
SpecTf: Transformers Enable Data-Driven Imaging Spectroscopy Cloud
Detection
|
cs.LG
|
Current and upcoming generations of visible-shortwave infrared (VSWIR)
imaging spectrometers promise unprecedented capacity to quantify Earth System
processes across the globe. However, reliable cloud screening remains a
fundamental challenge for these instruments, where traditional spatial and
temporal approaches are limited by cloud variability and limited temporal
coverage. The Spectroscopic Transformer (SpecTf) addresses these challenges
with a spectroscopy-specific deep learning architecture that performs cloud
detection using only spectral information (no spatial or temporal data are
required). By treating spectral measurements as sequences rather than image
channels, SpecTf learns fundamental physical relationships without relying on
spatial context. Our experiments demonstrate that SpecTf significantly
outperforms the current baseline approach implemented for the EMIT instrument,
and performs comparably with other machine learning methods with orders of
magnitude fewer learned parameters. Critically, we demonstrate SpecTf's
inherent interpretability through its attention mechanism, revealing physically
meaningful spectral features the model has learned. Finally, we present
SpecTf's potential for cross-instrument generalization by applying it to a
different instrument on a different platform without modifications, opening the
door to instrument agnostic data driven algorithms for future imaging
spectroscopy tasks.
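The core idea of treating spectral bands as a sequence rather than image channels can be illustrated with a minimal single-head self-attention pass in NumPy. This is a toy, not the SpecTf architecture; the weight shapes are assumptions:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(tokens, wq, wk, wv):
    """Single-head scaled dot-product self-attention over a band sequence.

    tokens: (bands, d) — one embedding per spectral band, so the attention
    matrix expresses which bands the model relates to which (the source of
    the interpretability claimed in the abstract).
    """
    q, k, v = tokens @ wq, tokens @ wk, tokens @ wv
    scores = q @ k.T / np.sqrt(k.shape[-1])
    attn = softmax(scores, axis=-1)  # rows sum to 1 over attended bands
    return attn @ v, attn
```

Inspecting `attn` row by row is what reveals which wavelengths drive a given prediction.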
|
2501.04926
|
FLowHigh: Towards Efficient and High-Quality Audio Super-Resolution with
Single-Step Flow Matching
|
eess.AS cs.AI cs.CL cs.SD
|
Audio super-resolution is challenging owing to its ill-posed nature.
Recently, the application of diffusion models in audio super-resolution has
shown promising results in alleviating this challenge. However, diffusion-based
models have limitations, primarily the necessity for numerous sampling steps,
which causes significantly increased latency when synthesizing high-quality
audio samples. In this paper, we propose FLowHigh, a novel approach that
integrates flow matching, a highly efficient generative model, into audio
super-resolution. We also explore probability paths specially tailored for
audio super-resolution, which effectively capture high-resolution audio
distributions, thereby enhancing reconstruction quality. The proposed method
generates high-fidelity, high-resolution audio through a single-step sampling
process across various input sampling rates. The experimental results on the
VCTK benchmark dataset demonstrate that FLowHigh achieves state-of-the-art
performance in audio super-resolution, as evaluated by log-spectral distance
and ViSQOL, while maintaining computational efficiency with only a single-step
sampling process.
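Single-step sampling in flow matching amounts to one Euler step of the learned probability-flow ODE. Below is a toy sketch using an analytic optimal-transport vector field in place of the trained network `v`; everything here is illustrative, not the FLowHigh model:

```python
import numpy as np

def flow_sample(v_field, x0, n_steps=1):
    """Euler integration of dx/dt = v(x, t) from t=0 to t=1.

    With n_steps=1 this is the single-step sampling regime the paper targets.
    """
    x, dt = x0, 1.0 / n_steps
    for i in range(n_steps):
        x = x + dt * v_field(x, i * dt)
    return x

# Toy conditional-OT field whose straight-line flow transports any x0 to mu;
# a trained network would replace this closed form.
mu = np.array([2.0, -1.0])
v = lambda x, t: (mu - x) / (1.0 - t) if t < 1.0 else mu - x
sample = flow_sample(v, np.zeros(2))
```

Because the toy field induces straight paths, one Euler step already lands exactly on the target, which is the intuition behind fast few-step flow-matching samplers.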
|
2501.04927
|
Investigating Numerical Translation with Large Language Models
|
cs.CL
|
The inaccurate translation of numbers can lead to significant security
issues, ranging from financial setbacks to medical inaccuracies. While large
language models (LLMs) have made significant advancements in machine
translation, their capacity for translating numbers has not been thoroughly
explored. This study focuses on evaluating the reliability of LLM-based machine
translation systems when handling numerical data. In order to systematically
test the numerical translation capabilities of current open-source LLMs, we
have constructed a numerical translation dataset between Chinese and English
based on real business data, encompassing ten types of numerical translation.
Experiments on the dataset indicate that errors in numerical translation are a
common issue, with most open-source LLMs faltering when faced with our test
scenarios. Especially when it comes to numerical types involving large units
like "million", "billion", and "yi", even the latest llama3.1 8b model can
have error rates as high as 20%. Finally, we introduce three potential
strategies to mitigate the numerical mistranslations for large units.
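A simple way to test whether a translation preserves numbers with large units is to normalize both sides to plain magnitudes before comparing. The unit table and regex below are illustrative assumptions, not the paper's evaluation code:

```python
import re

# Unit multipliers: English thousand/million/billion plus romanized Chinese
# units "wan" (1e4) and "yi" (1e8), which lack single-word English
# equivalents and are common sources of mistranslation.
UNITS = {"thousand": 1e3, "wan": 1e4, "million": 1e6, "yi": 1e8, "billion": 1e9}

def extract_values(text):
    """Pull out numbers with optional unit words, normalized to floats."""
    vals = []
    for num, unit in re.findall(r"([\d.,]+)\s*(\w+)?", text.lower()):
        try:
            x = float(num.replace(",", ""))
        except ValueError:
            continue  # skip stray punctuation runs
        vals.append(x * UNITS.get(unit or "", 1.0))
    return vals

def numbers_match(src, hyp):
    """True if the translation preserves every source number's magnitude."""
    return sorted(extract_values(src)) == sorted(extract_values(hyp))
```

Under this normalization "3 yi" and "300 million" agree, while a "3 million" rendering is flagged as a unit error.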
|
2501.04928
|
Image2CADSeq: Computer-Aided Design Sequence and Knowledge Inference
from Product Images
|
cs.CV cs.AI
|
Computer-aided design (CAD) tools empower designers to design and modify 3D
models through a series of CAD operations, commonly referred to as a CAD
sequence. In scenarios where digital CAD files are not accessible, reverse
engineering (RE) has been used to reconstruct 3D CAD models. Recent advances
have seen the rise of data-driven approaches for RE, with a primary focus on
converting 3D data, such as point clouds, into 3D models in boundary
representation (B-rep) format. However, obtaining 3D data poses significant
challenges, and B-rep models do not reveal knowledge about the 3D modeling
process of designs. To this end, our research introduces a novel data-driven
approach with an Image2CADSeq neural network model. This model aims to reverse
engineer CAD models by processing images as input and generating CAD sequences.
These sequences can then be translated into B-rep models using a solid modeling
kernel. Unlike B-rep models, CAD sequences offer enhanced flexibility to modify
individual steps of model creation, providing a deeper understanding of the
construction process of CAD models. To quantitatively and rigorously evaluate
the predictive performance of the Image2CADSeq model, we have developed a
multi-level evaluation framework for model assessment. The model was trained on
a specially synthesized dataset, and various network architectures were
explored to optimize the performance. The experimental and validation results
show great potential for the model in generating CAD sequences from 2D image
data.
|
2501.04929
|
What Drives You to Interact?: The Role of User Motivation for a Robot in
the Wild
|
cs.HC cs.RO
|
In this paper, we aim to understand how user motivation shapes human-robot
interaction (HRI) in the wild. To explore this, we conducted a field study by
deploying a fully autonomous conversational robot in a shopping mall over two
days. Through sequential video analysis, we identified five patterns of
interaction fluency (Smooth, Awkward, Active, Messy, and Quiet), four types of
user motivation for interacting with the robot (Function, Experiment,
Curiosity, and Education), and user positioning towards the robot. We further
analyzed how these motivations and positioning influence interaction fluency.
Our findings suggest that incorporating users' motivation types into the design
of robot behavior can enhance interaction fluency, engagement, and user
satisfaction in real-world HRI scenarios.
|
2501.04931
|
Jailbreaking Multimodal Large Language Models via Shuffle Inconsistency
|
cs.CR cs.AI cs.CL
|
Multimodal Large Language Models (MLLMs) have achieved impressive performance
and have been put into practical use in commercial applications, but they still
have potential safety mechanism vulnerabilities. Jailbreak attacks are red
teaming methods that aim to bypass safety mechanisms and discover MLLMs'
potential risks. Existing MLLMs' jailbreak methods often bypass the model's
safety mechanism through complex optimization methods or carefully designed
image and text prompts. Despite achieving some progress, they have a low attack
success rate on commercial closed-source MLLMs. Unlike previous research, we
empirically find that there exists a Shuffle Inconsistency between MLLMs'
comprehension ability and safety ability for the shuffled harmful instruction.
That is, from the perspective of comprehension ability, MLLMs can understand
the shuffled harmful text-image instructions well. However, they can be easily
bypassed by the shuffled harmful instructions from the perspective of safety
ability, leading to harmful responses. Then we innovatively propose a
text-image jailbreak attack named SI-Attack. Specifically, to fully utilize the
Shuffle Inconsistency and overcome the shuffle randomness, we apply a
query-based black-box optimization method to select the most harmful shuffled
inputs based on the feedback of the toxic judge model. A series of experiments
show that SI-Attack can improve the attack's performance on three benchmarks.
In particular, SI-Attack can obviously improve the attack success rate for
commercial MLLMs such as GPT-4o or Claude-3.5-Sonnet.
|
2501.04934
|
Plug-and-Play DISep: Separating Dense Instances for Scene-to-Pixel
Weakly-Supervised Change Detection in High-Resolution Remote Sensing Images
|
cs.CV
|
Existing Weakly-Supervised Change Detection (WSCD) methods often encounter
the problem of "instance lumping" under scene-level supervision, particularly
in scenarios with a dense distribution of changed instances (i.e., changed
objects). In these scenarios, unchanged pixels between changed instances are
also mistakenly identified as changed, causing multiple changes to be
mistakenly viewed as one. In practical applications, this issue prevents the
accurate quantification of the number of changes. To address this issue, we
propose a Dense Instance Separation (DISep) method as a plug-and-play solution,
refining pixel features from a unified instance perspective under scene-level
supervision. Specifically, our DISep comprises a three-step iterative training
process: 1) Instance Localization: We locate instance candidate regions for
changed pixels using high-pass class activation maps. 2) Instance Retrieval: We
identify and group these changed pixels into different instance IDs through
connectivity searching. Then, based on the assigned instance IDs, we extract
corresponding pixel-level features on a per-instance basis. 3) Instance
Separation: We introduce a separation loss to enforce intra-instance pixel
consistency in the embedding space, thereby ensuring separable instance feature
representations. The proposed DISep adds only minimal training cost and no
inference cost. It can be seamlessly integrated to enhance existing WSCD
methods. We achieve state-of-the-art performance by enhancing three
Transformer-based and four ConvNet-based methods on the LEVIR-CD, WHU-CD,
DSIFN-CD, SYSU-CD, and CDD datasets. Additionally, our DISep can be used to
improve fully-supervised change detection methods. Code is available at
https://github.com/zhenghuizhao/Plug-and-Play-DISep-for-Change-Detection.
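The Instance Retrieval step (grouping changed pixels into instance IDs via connectivity searching) can be sketched with a plain 4-connectivity BFS labeling; this is a generic illustration, not the authors' implementation:

```python
from collections import deque

def label_instances(mask):
    """Assign an instance ID to each changed pixel via 4-connectivity BFS.

    mask: 2-D list of 0/1 change flags. Returns a same-shape grid of IDs
    (0 = unchanged, 1..K = instance IDs), so per-instance pixel features
    can then be pooled separately.
    """
    h, w = len(mask), len(mask[0])
    ids = [[0] * w for _ in range(h)]
    next_id = 0
    for sy in range(h):
        for sx in range(w):
            if mask[sy][sx] and not ids[sy][sx]:
                next_id += 1  # start a new instance
                ids[sy][sx] = next_id
                q = deque([(sy, sx)])
                while q:
                    y, x = q.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w \
                                and mask[ny][nx] and not ids[ny][nx]:
                            ids[ny][nx] = next_id
                            q.append((ny, nx))
    return ids
```

With instance IDs in hand, a separation loss can pull pixels of the same ID together in embedding space while keeping different IDs apart.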
|