| id | title | categories | abstract |
|---|---|---|---|
2501.03508 | A Sequential Optimal Learning Approach to Automated Prompt Engineering
in Large Language Models | cs.CL | Designing effective prompts is essential to guiding large language models
(LLMs) toward desired responses. Automated prompt engineering aims to reduce
reliance on manual effort by streamlining the design, refinement, and
optimization of natural language prompts. This paper proposes an optimal
learning framework for automated prompt engineering, designed to sequentially
identify effective prompt features while efficiently allocating a limited
evaluation budget. We introduce a feature-based method to express prompts,
which significantly broadens the search space. Bayesian regression is employed
to utilize correlations among similar prompts, accelerating the learning
process. To efficiently explore the large space of prompt features for a high
quality prompt, we adopt the forward-looking Knowledge-Gradient (KG) policy for
sequential optimal learning. The KG policy is computed efficiently by solving
mixed-integer second-order cone optimization problems, making it scalable and
capable of accommodating prompts characterized only through constraints. We
demonstrate that our method significantly outperforms a set of benchmark
strategies assessed on instruction induction tasks. The results highlight the
advantages of using the KG policy for prompt learning given a limited
evaluation budget. Our framework provides a solution for deploying automated
prompt engineering in a wider range of applications where prompt evaluation is
costly.
|
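The knowledge-gradient idea in the abstract above can be illustrated with a toy ranking-and-selection sketch. This is not the paper's mixed-integer second-order cone formulation: it assumes independent normal posteriors over a handful of discrete alternatives and known Gaussian observation noise, and all names are illustrative.

```python
import math

def _pdf(z):
    # Standard normal density
    return math.exp(-z * z / 2.0) / math.sqrt(2.0 * math.pi)

def _cdf(z):
    # Standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def knowledge_gradient(mu, sigma, noise_std):
    """One-step KG score per alternative, assuming independent normal
    posteriors N(mu[i], sigma[i]^2) and Gaussian noise with std noise_std."""
    kg = []
    for i in range(len(mu)):
        # Std of the change in posterior mean if alternative i is measured once
        sigma_tilde = sigma[i] ** 2 / math.sqrt(sigma[i] ** 2 + noise_std ** 2)
        best_other = max(mu[j] for j in range(len(mu)) if j != i)
        z = -abs(mu[i] - best_other) / sigma_tilde
        # f(z) = z * Phi(z) + phi(z), the expected improvement factor
        kg.append(sigma_tilde * (z * _cdf(z) + _pdf(z)))
    return kg
```

Measuring an alternative whose value is still uncertain is worth more than re-measuring one that is already pinned down, which is why KG spends a limited evaluation budget efficiently.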
2501.03510 | Salient Region Matching for Fully Automated MR-TRUS Registration | eess.IV cs.CV | Prostate cancer is a leading cause of cancer-related mortality in men. The
registration of magnetic resonance (MR) and transrectal ultrasound (TRUS) can
provide guidance for the targeted biopsy of prostate cancer. In this study, we
propose a salient region matching framework for fully automated MR-TRUS
registration. The framework consists of prostate segmentation, rigid alignment
and deformable registration. Prostate segmentation is performed using two
segmentation networks on MR and TRUS respectively, and the predicted salient
regions are used for the rigid alignment. The rigidly-aligned MR and TRUS
images serve as initialization for the deformable registration. The deformable
registration network has a dual-stream encoder with cross-modal spatial
attention modules to facilitate multi-modality feature learning, and a salient
region matching loss to consider both structure and intensity similarity within
the prostate region. Experiments on a public MR-TRUS dataset demonstrate that
our method achieves satisfactory registration results, outperforming several
cutting-edge methods. The code is publicly available at
https://github.com/mock1ngbrd/salient-region-matching.
|
2501.03515 | Effects of Robot Competency and Motion Legibility on Human Correction
Feedback | cs.RO | As robot deployments become more commonplace, people are likely to take on
the role of supervising robots (i.e., correcting their mistakes) rather than
directly teaching them. Prior works on Learning from Corrections (LfC) have
relied on three key assumptions to interpret human feedback: (1) people correct
the robot only when there is significant task objective divergence; (2) people
can accurately predict if a correction is necessary; and (3) people trade off
precision and physical effort when giving corrections. In this work, we study
how two key factors (robot competency and motion legibility) affect how people
provide correction feedback and their implications on these existing
assumptions. We conduct a user study ($N=60$) under an LfC setting where
participants supervise and correct a robot performing pick-and-place tasks. We
find that people are more sensitive to suboptimal behavior by a highly
competent robot compared to an incompetent robot when the motions are legible
($p=0.0015$) and predictable ($p=0.0055$). In addition, people also tend to
withhold necessary corrections ($p < 0.0001$) when supervising an incompetent
robot and are more prone to offering unnecessary ones ($p = 0.0171$) when
supervising a highly competent robot. We also find that physical effort
positively correlates with correction precision, providing empirical evidence
to support this common assumption. We also find that this correlation is
significantly weaker for an incompetent robot with legible motions than an
incompetent robot with predictable motions ($p = 0.0075$). Our findings offer
insights for accounting for competency and legibility when designing robot
interaction behaviors and learning task objectives from corrections.
|
2501.03516 | The Multiple Equal-Difference Structure of Cyclotomic Cosets | math.NT cs.IT math.IT | In this paper we introduce the definition of equal-difference cyclotomic
coset, and prove that in general any cyclotomic coset can be decomposed into a
disjoint union of equal-difference subsets. Among the equal-difference
decompositions of a cyclotomic coset, an important class consists of those in
the form of cyclotomic decompositions, called the multiple equal-difference
representations of the coset. There is an equivalent correspondence between the
multiple equal-difference representations of $q$-cyclotomic cosets modulo $n$
and the irreducible factorizations of $X^{n}-1$ in binomial form over finite
extension fields of $\mathbb{F}_{q}$. We give an explicit characterization of
the multiple equal-difference representations of any $q$-cyclotomic coset
modulo $n$, through which a criterion for $X^{n}-1$ factoring into irreducible
binomials is obtained. In addition, we present an algorithm to simplify the
computation of the leaders of cyclotomic cosets.
|
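The objects in the abstract above are easy to compute by brute force. The sketch below enumerates the q-cyclotomic cosets modulo n and their leaders; it is a naive baseline for illustration, not the paper's simplified leader algorithm.

```python
def cyclotomic_cosets(q, n):
    """All q-cyclotomic cosets modulo n (requires gcd(q, n) == 1)."""
    seen, cosets = set(), []
    for s in range(n):
        if s in seen:
            continue
        coset, x = [], s
        # Orbit of s under multiplication by q modulo n
        while x not in coset:
            coset.append(x)
            x = (x * q) % n
        seen.update(coset)
        cosets.append(coset)
    return cosets

def coset_leaders(q, n):
    """Leader (smallest element) of each coset, by brute force."""
    return sorted(min(c) for c in cyclotomic_cosets(q, n))
```

For example, the 2-cyclotomic cosets modulo 7 are {0}, {1, 2, 4}, and {3, 5, 6}, matching the factorization of X^7 - 1 over F_2 into one linear and two cubic irreducible factors.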
2501.03518 | Transfer Learning for Deep-Unfolded Combinatorial Optimization Solver
with Quantum Annealer | quant-ph cs.LG | Quantum annealing (QA) has attracted research interest as a sampler and
combinatorial optimization problem (COP) solver. A recently proposed
sampling-based solver for QA significantly reduces the required number of
qubits, making it capable of handling large COPs. Building on this, a trainable
sampling-based COP solver has been proposed that optimizes its internal
parameters from a dataset by using a deep learning technique called deep
unfolding. Although learning the internal parameters accelerates the
convergence speed, the sampler in the trainable solver is restricted to using a
classical sampler owing to the training cost. In this study, to utilize QA in
the trainable solver, we propose classical-quantum transfer learning, where
parameters are trained classically, and the trained parameters are used in the
solver with QA. The results of numerical experiments demonstrate that the
trainable quantum COP solver using classical-quantum transfer learning improves
convergence speed and execution time over the original solver.
|
2501.03523 | Vocal Tract Length Warped Features for Spoken Keyword Spotting | cs.SD cs.AI cs.LG eess.AS | In this paper, we propose several methods that incorporate vocal tract length
(VTL) warped features for spoken keyword spotting (KWS). The first method,
VTL-independent KWS, involves training a single deep neural network (DNN) that
utilizes VTL features with various warping factors. During training, a specific
VTL feature is randomly selected per epoch, allowing the exploration of VTL
variations. During testing, the VTL features with different warping factors of
a test utterance are scored against the DNN and combined with equal weight. The
second method scores the conventional features of a test utterance (without
VTL warping) against the DNN. The third method, VTL-concatenation KWS,
concatenates VTL warped features to form high-dimensional features for KWS.
Evaluations carried out on the English Google Command dataset demonstrate that
the proposed methods improve the accuracy of KWS.
|
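The first method's two ingredients, per-epoch random selection of a warping factor during training and equal-weight fusion of scores at test time, can be sketched as follows. The `score_fn` callable and the warp-factor values are placeholders, not details taken from the paper.

```python
import random

def pick_training_warp(warp_factors, epoch, seed=0):
    """Randomly select one VTL warping factor for this training epoch,
    so the DNN sees VTL variation across epochs."""
    return random.Random(seed + epoch).choice(warp_factors)

def fuse_vtl_scores(score_fn, warped_features):
    """Equal-weight fusion of DNN keyword scores computed on the same
    test utterance under several VTL warping factors."""
    scores = [score_fn(f) for f in warped_features]
    return sum(scores) / len(scores)
```

Seeding by epoch keeps the warp choice reproducible while still varying it over training.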
2501.03525 | TexHOI: Reconstructing Textures of 3D Unknown Objects in Monocular
Hand-Object Interaction Scenes | cs.CV | Reconstructing 3D models of dynamic, real-world objects with high-fidelity
textures from monocular frame sequences has been a challenging problem in
recent years. This difficulty stems from factors such as shadows, indirect
illumination, and inaccurate object-pose estimations due to occluding
hand-object interactions. To address these challenges, we propose a novel
approach that predicts the hand's impact on environmental visibility and
indirect illumination on the object's surface albedo. Our method first learns
the geometry and low-fidelity texture of the object, hand, and background
through composite rendering of radiance fields. Simultaneously, we optimize the
hand and object poses to achieve accurate object-pose estimations. We then
refine physics-based rendering parameters - including roughness, specularity,
albedo, hand visibility, skin color reflections, and environmental illumination
- to produce precise albedo, and accurate hand illumination and shadow regions.
Our approach surpasses state-of-the-art methods in texture reconstruction and,
to the best of our knowledge, is the first to account for hand-object
interactions in object texture reconstruction.
|
2501.03526 | FgC2F-UDiff: Frequency-guided and Coarse-to-fine Unified Diffusion Model
for Multi-modality Missing MRI Synthesis | eess.IV cs.CV cs.LG | Multi-modality magnetic resonance imaging (MRI) is essential for the
diagnosis and treatment of brain tumors. However, missing modalities are
commonly observed due to limitations in scan time, scan corruption, artifacts,
motion, and contrast agent intolerance. Synthesis of missing MRI has been a
means to address the limitations of modality insufficiency in clinical practice
and research. However, there are still some challenges, such as poor
generalization, inaccurate non-linear mapping, and slow processing speeds. To
address the aforementioned issues, we propose a novel unified synthesis model,
the Frequency-guided and Coarse-to-fine Unified Diffusion Model (FgC2F-UDiff),
designed for multiple inputs and outputs. Specifically, the Coarse-to-fine
Unified Network (CUN) fully exploits the iterative denoising properties of
diffusion models, from global to detail, by dividing the denoising process into
two stages, coarse and fine, to enhance the fidelity of synthesized images.
Secondly, the Frequency-guided Collaborative Strategy (FCS) harnesses
appropriate frequency information as prior knowledge to guide the learning of a
unified, highly non-linear mapping. Thirdly, the Specific-acceleration Hybrid
Mechanism (SHM) integrates specific mechanisms to accelerate the diffusion
model and enhance the feasibility of many-to-many synthesis. Extensive
experimental evaluations have demonstrated that our proposed FgC2F-UDiff model
achieves superior performance on two datasets, validated through a
comprehensive assessment that includes both qualitative observations and
quantitative metrics, such as PSNR, SSIM, LPIPS, and FID.
|
2501.03533 | Anomaly Triplet-Net: Progress Recognition Model Using Deep Metric
Learning Considering Occlusion for Manual Assembly Work | cs.CV | In this paper, a progress recognition method that considers occlusion using deep
metric learning is proposed to visualize the product assembly process in a
factory. First, the target assembly product is detected from images acquired
from a fixed-point camera installed in the factory using a deep learning-based
object detection method. Next, the detection area is cropped from the image.
Finally, by using a classification method based on deep metric learning on the
cropped image, the progress of the product assembly work is estimated as a
rough progress step.
As a specific progress estimation model, we propose an Anomaly Triplet-Net
that adds anomaly samples to Triplet Loss for progress estimation considering
occlusion.
In experiments, an 82.9% success rate is achieved for the progress estimation
method using Anomaly Triplet-Net.
We also evaluated the practicality of the full pipeline of detection,
cropping, and progress estimation, and confirmed the effectiveness of the
overall system.
|
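The standard triplet loss that Anomaly Triplet-Net builds on can be written in a few lines. The anomaly term below is a hypothetical reading (the anomaly sample is treated as an extra negative), sketched for illustration rather than reproducing the paper's exact formulation.

```python
def sq_dist(a, b):
    # Squared Euclidean distance between two embedding vectors
    return sum((x - y) ** 2 for x, y in zip(a, b))

def triplet_loss(anchor, pos, neg, margin=1.0):
    """Standard triplet loss: pull the positive toward the anchor,
    push the negative at least `margin` further away."""
    return max(0.0, sq_dist(anchor, pos) - sq_dist(anchor, neg) + margin)

def anomaly_triplet_loss(anchor, pos, neg, anomaly, margin=1.0):
    """Hypothetical extension: add a second triplet term that also
    pushes an anomaly (e.g. occluded) sample away from the anchor."""
    return triplet_loss(anchor, pos, neg, margin) \
        + triplet_loss(anchor, pos, anomaly, margin)
```

The extra term gives occluded samples an explicit repulsive role in the embedding space instead of leaving them to contaminate the positive clusters.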
2501.03535 | SenseRAG: Constructing Environmental Knowledge Bases with Proactive
Querying for LLM-Based Autonomous Driving | cs.AI cs.RO | This study addresses the critical need for enhanced situational awareness in
autonomous driving (AD) by leveraging the contextual reasoning capabilities of
large language models (LLMs). Unlike traditional perception systems that rely
on rigid, label-based annotations, it integrates real-time, multimodal sensor
data into a unified, LLM-readable knowledge base, enabling LLMs to dynamically
understand and respond to complex driving environments. To overcome the
inherent latency and modality limitations of LLMs, a proactive
Retrieval-Augmented Generation (RAG) framework is designed for AD, combined with a
chain-of-thought prompting mechanism, ensuring rapid and context-rich
understanding. Experimental results using real-world Vehicle-to-everything
(V2X) datasets demonstrate significant improvements in perception and
prediction performance, highlighting the potential of this framework to enhance
safety, adaptability, and decision-making in next-generation AD systems.
|
2501.03538 | Efficient and Accurate Tuberculosis Diagnosis: Attention Residual U-Net
and Vision Transformer Based Detection Framework | eess.IV cs.CV | Tuberculosis (TB), an infectious disease caused by Mycobacterium
tuberculosis, continues to be a major global health threat despite being
preventable and curable. This burden is particularly high in low and middle
income countries. Microscopy remains essential for diagnosing TB by enabling
direct visualization of Mycobacterium tuberculosis in sputum smear samples,
offering a cost effective approach for early detection and effective treatment.
Given the labour-intensive nature of microscopy, automating the detection of
bacilli in microscopic images is crucial to improve both the expediency and
reliability of TB diagnosis. The current methodologies for detecting
tuberculosis bacilli in bright field microscopic sputum smear images are
hindered by limited automation capabilities, inconsistent segmentation quality,
and constrained classification precision. This paper proposes a two-stage deep
learning methodology for tuberculosis bacilli detection, comprising bacilli
segmentation followed by classification. In the initial phase, an advanced
U-Net model employing attention blocks and residual connections is proposed to
segment microscopic sputum smear images, enabling the extraction of Regions of
Interest (ROIs). The extracted ROIs are then classified using a Vision
Transformer, which we specifically customized as TBViT to enhance the precise
detection of bacilli within the images. For the experiments, a newly developed
dataset of microscopic sputum smear images derived from Ziehl-Neelsen-stained
slides is used in conjunction with existing public datasets. The qualitative
and quantitative evaluation of the experiments using various metrics
demonstrates that the proposed model achieves significantly improved
segmentation performance, higher classification accuracy, and a greater level
of automation, surpassing existing methods.
|
2501.03539 | Enhanced Tuberculosis Bacilli Detection using Attention-Residual U-Net
and Ensemble Classification | eess.IV cs.CV | Tuberculosis (TB), caused by Mycobacterium tuberculosis, remains a critical
global health issue, necessitating timely diagnosis and treatment. Current
methods for detecting tuberculosis bacilli from bright field microscopic sputum
smear images suffer from low automation, inadequate segmentation performance,
and limited classification accuracy. This paper proposes an efficient hybrid
approach that combines deep learning for segmentation and an ensemble model for
classification. An enhanced U-Net model incorporating attention blocks and
residual connections is introduced to precisely segment microscopic sputum
smear images, facilitating the extraction of Regions of Interest (ROIs). These
ROIs are subsequently classified using an ensemble classifier comprising
Support Vector Machine (SVM), Random Forest, and Extreme Gradient Boost
(XGBoost), resulting in an accurate identification of bacilli within the
images. Experiments conducted on a newly created dataset, along with public
datasets, demonstrate that the proposed model achieves superior segmentation
performance, higher classification accuracy, and enhanced automation compared
to existing methods.
|
2501.03540 | Deep Learning within Tabular Data: Foundations, Challenges, Advances and
Future Directions | cs.LG cs.AI | Tabular data remains one of the most prevalent data types across a wide range
of real-world applications, yet effective representation learning for this
domain poses unique challenges due to its irregular patterns, heterogeneous
feature distributions, and complex inter-column dependencies. This survey
provides a comprehensive review of state-of-the-art techniques in tabular data
representation learning, structured around three foundational design elements:
training data, neural architectures, and learning objectives. Unlike prior
surveys that focus primarily on either architecture design or learning
strategies, we adopt a holistic perspective that emphasizes the universality
and robustness of representation learning methods across diverse downstream
tasks. We examine recent advances in data augmentation and generation,
specialized neural network architectures tailored to tabular data, and
innovative learning objectives that enhance representation quality.
Additionally, we highlight the growing influence of self-supervised learning
and the adaptation of transformer-based foundation models for tabular data. Our
review is based on a systematic literature search using rigorous inclusion
criteria, encompassing 127 papers published since 2020 in top-tier conferences
and journals. Through detailed analysis and comparison, we identify emerging
trends, critical gaps, and promising directions for future research, aiming to
guide the development of more generalizable and effective tabular data
representation methods.
|
2501.03543 | Distributionally Robust Joint Chance-Constrained Optimal Power Flow
using Relative Entropy | math.OC cs.SY eess.SY | Designing robust algorithms for the optimal power flow (OPF) problem is
critical for the control of large-scale power systems under uncertainty. The
chance-constrained OPF (CCOPF) problem provides a natural formulation of the
trade-off between the operating cost and the constraint satisfaction rate. In
this work, we propose a new data-driven algorithm for the CCOPF problem, based
on distributionally robust optimization (DRO). We show that the
proposed reformulation of the distributionally robust chance constraints is
exact, whereas other approaches in the CCOPF literature rely on conservative
approximations. We establish out-of-sample robustness guarantees for the
distributionally robust solution and prove that the solution is the most
efficient among all approaches enjoying the same guarantees. We apply the
proposed algorithm to the CCOPF problem and compare the performance of our
approach with existing methods using simulations on IEEE benchmark power
systems.
|
2501.03544 | PromptGuard: Soft Prompt-Guided Unsafe Content Moderation for
Text-to-Image Models | cs.CV cs.AI cs.CR | Text-to-image (T2I) models have been shown to be vulnerable to misuse,
particularly in generating not-safe-for-work (NSFW) content, raising serious
ethical concerns. In this work, we present PromptGuard, a novel content
moderation technique that draws inspiration from the system prompt mechanism in
large language models (LLMs) for safety alignment. Unlike LLMs, T2I models lack
a direct interface for enforcing behavioral guidelines. Our key idea is to
optimize a safety soft prompt that functions as an implicit system prompt
within the T2I model's textual embedding space. This universal soft prompt (P*)
directly moderates NSFW inputs, enabling safe yet realistic image generation
without altering the inference efficiency or requiring proxy models. Extensive
experiments across three datasets demonstrate that PromptGuard effectively
mitigates NSFW content generation while preserving high-quality benign outputs.
PromptGuard runs 7.8 times faster than prior content moderation methods,
surpassing eight state-of-the-art defenses with an optimal unsafe ratio down to
5.84%.
|
2501.03545 | Beyond Factual Accuracy: Evaluating Coverage of Diverse Factual
Information in Long-form Text Generation | cs.CL | This paper presents ICAT, an evaluation framework for measuring coverage of
diverse factual information in long-form text generation. ICAT breaks down a
long output text into a list of atomic claims and not only verifies each claim
through retrieval from a (reliable) knowledge source, but also computes the
alignment between the atomic factual claims and various aspects expected to be
presented in the output. We study three implementations of the ICAT framework,
each with a different assumption on the availability of aspects and alignment
method. By adopting data from the diversification task in the TREC Web Track
and the ClueWeb corpus, we evaluate the ICAT framework. We demonstrate strong
correlation with human judgments and provide comprehensive evaluation across
multiple state-of-the-art LLMs. Our framework further offers interpretable and
fine-grained analysis of diversity and coverage. Its modular design allows for
easy adaptation to different domains and datasets, making it a valuable tool
for evaluating the qualitative aspects of long-form responses produced by LLMs.
|
2501.03552 | Proxy Control Barrier Functions: Integrating Barrier-Based and
Lyapunov-Based Safety-Critical Control Design | eess.SY cs.SY math.OC | This work introduces a novel Proxy Control Barrier Function (PCBF) scheme
that integrates barrier-based and Lyapunov-based safety-critical control
strategies for strict-feedback systems with potentially unknown dynamics. The
proposed method employs a modular design procedure, decomposing the original
system into a proxy subsystem and a virtual tracking subsystem that are
controlled by the control barrier function (CBF)-based and Lyapunov-based
controllers, respectively. By integrating these separately designed
controllers, the overall system's safety is ensured. Moreover, a new
filter-based disturbance observer is utilized to design a PCBF-based safe
controller for strict-feedback systems subject to mismatched disturbances. This
approach broadens the class of systems to which CBF-based methods can be
applied and significantly simplifies CBF construction by requiring only the
model of the proxy subsystem. The effectiveness of the proposed method is
demonstrated through numerical simulations.
|
2501.03560 | KG-TRICK: Unifying Textual and Relational Information Completion of
Knowledge for Multilingual Knowledge Graphs | cs.CL cs.AI cs.LG | Multilingual knowledge graphs (KGs) provide high-quality relational and
textual information for various NLP applications, but they are often
incomplete, especially in non-English languages. Previous research has shown
that combining information from KGs in different languages aids either
Knowledge Graph Completion (KGC), the task of predicting missing relations
between entities, or Knowledge Graph Enhancement (KGE), the task of predicting
missing textual information for entities. Although previous efforts have
considered KGC and KGE as independent tasks, we hypothesize that they are
interdependent and mutually beneficial. To this end, we introduce KG-TRICK, a
novel sequence-to-sequence framework that unifies the tasks of textual and
relational information completion for multilingual KGs. KG-TRICK demonstrates
that: i) it is possible to unify the tasks of KGC and KGE into a single
framework, and ii) combining textual information from multiple languages is
beneficial to improve the completeness of a KG. As part of our contributions,
we also introduce WikiKGE10++, the largest manually-curated benchmark for
textual information completion of KGs, which features over 25,000 entities
across 10 diverse languages.
|
2501.03562 | Rethinking Adversarial Attacks in Reinforcement Learning from Policy
Distribution Perspective | cs.LG cs.AI | Deep Reinforcement Learning (DRL) suffers from uncertainties and inaccuracies
in the observation signal in real-world applications. Adversarial attack is an
effective method for evaluating the robustness of DRL agents. However, existing
attack methods targeting individual sampled actions have limited impacts on the
overall policy distribution, particularly in continuous action spaces. To
address these limitations, we propose the Distribution-Aware Projected Gradient
Descent attack (DAPGD). DAPGD uses distribution similarity as the gradient
perturbation input to attack the policy network, which leverages the entire
policy distribution rather than relying on individual samples. We utilize the
Bhattacharyya distance in DAPGD to measure policy similarity, enabling
sensitive detection of subtle but critical differences between probability
distributions. Our experiment results demonstrate that DAPGD achieves SOTA
results compared to the baselines in three robot navigation tasks, achieving an
average 22.03% higher reward drop compared to the best baseline.
|
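For Gaussian action distributions, the Bhattacharyya distance that DAPGD uses as its policy-similarity measure has a closed form. The sketch below covers only the univariate case for illustration; the paper's policies may be multivariate.

```python
import math

def bhattacharyya_gauss(mu1, sd1, mu2, sd2):
    """Bhattacharyya distance between the univariate Gaussian action
    distributions N(mu1, sd1^2) and N(mu2, sd2^2)."""
    v1, v2 = sd1 ** 2, sd2 ** 2
    # Mean-separation term plus variance-mismatch term
    return 0.25 * (mu1 - mu2) ** 2 / (v1 + v2) \
        + 0.5 * math.log((v1 + v2) / (2.0 * sd1 * sd2))
```

Unlike a distance between individual sampled actions, this quantity reacts to any shift in the whole distribution, including pure changes in spread, which is what makes it sensitive to subtle policy perturbations.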
2501.03565 | Bridged Semantic Alignment for Zero-shot 3D Medical Image Diagnosis | cs.CV | 3D medical images such as Computed tomography (CT) are widely used in
clinical practice, offering a great potential for automatic diagnosis.
Supervised learning-based approaches have achieved significant progress but
rely heavily on extensive manual annotations, limited by the availability of
training data and the diversity of abnormality types. Vision-language alignment
(VLA) offers a promising alternative by enabling zero-shot learning without
additional annotations. However, we empirically discover that the visual and
textual embeddings produced by the alignment efforts of existing VLA methods form
two well-separated clusters, presenting a wide gap to be bridged. To bridge
this gap, we propose a Bridged Semantic Alignment (BrgSA) framework. First, we
utilize a large language model to perform semantic summarization of reports,
extracting high-level semantic information. Second, we design a Cross-Modal
Knowledge Interaction (CMKI) module that leverages a cross-modal knowledge bank
as a semantic bridge, facilitating interaction between the two modalities,
narrowing the gap, and improving their alignment. To comprehensively evaluate
our method, we construct a benchmark dataset that includes 15 underrepresented
abnormalities as well as utilize two existing benchmark datasets. Experimental
results demonstrate that BrgSA achieves state-of-the-art performances on both
public benchmark datasets and our custom-labeled dataset, with significant
improvements in zero-shot diagnosis of underrepresented abnormalities.
|
2501.03566 | Applying Large Language Models in Knowledge Graph-based Enterprise
Modeling: Challenges and Opportunities | cs.MA cs.AI cs.SE | The role of large language models (LLMs) in enterprise modeling has recently
started to shift from academic research to that of industrial applications.
Thereby, LLMs represent a further building block for the machine-supported
generation of enterprise models. In this paper we employ a knowledge
graph-based approach for enterprise modeling and investigate the potential
benefits of LLMs in this context. In addition, the findings of an expert survey
and ChatGPT-4o-based experiments demonstrate that LLM-based model generations
exhibit minimal variability, yet remain constrained to specific tasks, with
reliability declining for more intricate tasks. The survey results further
suggest that the supervision and intervention of human modeling experts are
essential to ensure the accuracy and integrity of the generated models.
|
2501.03567 | Evaluating Image Caption via Cycle-consistent Text-to-Image Generation | cs.CV | Evaluating image captions typically relies on reference captions, which are
costly to obtain and exhibit significant diversity and subjectivity. While
reference-free evaluation metrics have been proposed, most focus on cross-modal
evaluation between captions and images. Recent research has revealed that the
modality gap generally exists in the representation of contrastive
learning-based multi-modal systems, undermining the reliability of
cross-modality metrics like CLIPScore. In this paper, we propose CAMScore, a
cyclic reference-free automatic evaluation metric for image captioning models.
To circumvent the aforementioned modality gap, CAMScore utilizes a
text-to-image model to generate images from captions and subsequently evaluates
these generated images against the original images. Furthermore, to provide
fine-grained information for a more comprehensive evaluation, we design a
three-level evaluation framework for CAMScore that encompasses pixel-level,
semantic-level, and objective-level perspectives. Extensive experiment results
across multiple benchmark datasets show that CAMScore achieves a superior
correlation with human judgments compared to existing reference-based and
reference-free metrics, demonstrating the effectiveness of the framework.
|
2501.03568 | Advanced Tutorial: Label-Efficient Two-Sample Tests | cs.LG stat.ME | Hypothesis testing is a statistical inference approach used to determine
whether data supports a specific hypothesis. An important type is the
two-sample test, which evaluates whether two sets of data points are from
identical distributions. This test is widely used, such as by clinical
researchers comparing treatment effectiveness. This tutorial explores
two-sample testing in a context where an analyst has many features from two
samples, but determining the sample membership (or labels) of these features is
costly. In machine learning, a similar scenario is studied in active learning.
This tutorial extends active learning concepts to two-sample testing within
this \textit{label-costly} setting while maintaining statistical validity and
high testing power. Additionally, the tutorial discusses practical applications
of these label-efficient two-sample tests.
|
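The label-efficient setting is easiest to appreciate against the classical fully-labeled baseline. The sketch below is a plain permutation two-sample test with a difference-of-means statistic; it is background for the tutorial's topic, not its active-learning procedure.

```python
import random

def permutation_two_sample(x, y, n_perm=1000, seed=0):
    """Permutation p-value for H0: x and y come from the same
    distribution, using |mean(x) - mean(y)| as the test statistic."""
    rng = random.Random(seed)
    obs = abs(sum(x) / len(x) - sum(y) / len(y))
    pooled = list(x) + list(y)
    hits = 0
    for _ in range(n_perm):
        # Under H0 the sample labels are exchangeable
        rng.shuffle(pooled)
        px, py = pooled[:len(x)], pooled[len(x):]
        if abs(sum(px) / len(px) - sum(py) / len(py)) >= obs:
            hits += 1
    return (hits + 1) / (n_perm + 1)  # add-one smoothing keeps p > 0
```

This baseline assumes every observation's sample membership is known for free; the tutorial's setting is precisely the one where obtaining those labels is the expensive step.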
2501.03571 | AADNet: Exploring EEG Spatiotemporal Information for Fast and Accurate
Orientation and Timbre Detection of Auditory Attention Based on A Cue-Masked
Paradigm | cs.LG cs.SD eess.AS q-bio.NC | Auditory attention decoding from electroencephalogram (EEG) could infer to
which source the user is attending in noisy environments. Decoding algorithms
and experimental paradigm designs are crucial for the development of technology
in practical applications. To simulate real-world scenarios, this study
proposed a cue-masked auditory attention paradigm to avoid information leakage
before the experiment. To obtain high decoding accuracy with low latency, an
end-to-end deep learning model, AADNet, was proposed to exploit the
spatiotemporal information from the short time window of EEG signals. The
results showed that with a 0.5-second EEG window, AADNet achieved an average
accuracy of 93.46% and 91.09% in decoding auditory orientation attention (OA)
and timbre attention (TA), respectively. It significantly outperformed five
previous methods and did not require knowledge of the original audio source.
This work demonstrated that it was possible to detect the orientation and
timbre of auditory attention from EEG signals fast and accurately. The results
are promising for real-time multi-property auditory attention decoding,
facilitating the application of the neuro-steered hearing aids and other
assistive listening devices.
|
2501.03572 | From Code to Compliance: Assessing ChatGPT's Utility in Designing an
Accessible Webpage -- A Case Study | cs.HC cs.AI cs.CL | Web accessibility ensures that individuals with disabilities can access and
interact with digital content without barriers, yet a significant majority of
the most widely used websites fail to meet accessibility standards. This study evaluates
ChatGPT's (GPT-4o) ability to generate and improve web pages in line with Web
Content Accessibility Guidelines (WCAG). While ChatGPT can effectively address
accessibility issues when prompted, its default code often lacks compliance,
reflecting limitations in its training data and prevailing inaccessible web
practices. Automated and manual testing revealed strengths in resolving simple
issues but challenges with complex tasks, requiring human oversight and
additional iterations. Unlike prior studies, we incorporate manual evaluation,
dynamic elements, and use the visual reasoning capability of ChatGPT along with
the prompts to fix accessibility issues. Providing screenshots alongside
prompts enhances the LLM's ability to address accessibility issues by allowing
it to analyze surrounding components, such as determining appropriate contrast
colors. We found that effective prompt engineering, such as providing concise,
structured feedback and incorporating visual aids, significantly enhances
ChatGPT's performance. These findings highlight the potential and limitations
of large language models for accessible web development, offering practical
guidance for developers to create more inclusive websites.
|
2501.03573 | Neural Cellular Automata and Deep Equilibrium Models | cs.NE cs.FL | This essay discusses the connections and differences between two emerging
paradigms in deep learning, namely Neural Cellular Automata and Deep
Equilibrium Models, and trains a simple Deep Equilibrium convolutional model to
demonstrate the inherent similarity of NCA- and DEQ-based methods. Finally, this
essay speculates about ways to combine theoretical and practical aspects of
both approaches for future research.
|
2501.03575 | Cosmos World Foundation Model Platform for Physical AI | cs.CV cs.AI cs.LG cs.RO | Physical AI needs to be trained digitally first. It needs a digital twin of
itself, the policy model, and a digital twin of the world, the world model. In
this paper, we present the Cosmos World Foundation Model Platform to help
developers build customized world models for their Physical AI setups. We
position a world foundation model as a general-purpose world model that can be
fine-tuned into customized world models for downstream applications. Our
platform covers a video curation pipeline, pre-trained world foundation models,
examples of post-training of pre-trained world foundation models, and video
tokenizers. To help Physical AI builders solve the most critical problems of
our society, we make our platform open-source and our models open-weight with
permissive licenses available via https://github.com/NVIDIA/Cosmos.
|
2501.03577 | Wireless Channel Measurements and Characterization in Industrial IoT
Scenarios | eess.SY cs.SY | Wireless Fidelity (Wi-Fi) communication technologies hold significant
potential for realizing the Industrial Internet of Things (IIoT). In this
paper, both Single-Input Single-Output (SISO) and polarized Multiple-Input
Multiple-Output (MIMO) channel measurements are conducted in an IIoT scenario
at the less congested Wi-Fi band, i.e., 5.5~GHz. The purpose is to investigate
wireless characteristics of communications between access points and terminals
mounted on automated guided vehicles as well as those surrounding manufacturing
areas. For SISO channel measurements, statistical properties including the
delay Power Spectral Density (PSD), path loss, shadowing fading, delay spread,
excess delay, K-factor, and amplitude distribution of small-scale fading are
analyzed and compared with those observed in an office scenario. For MIMO
channel measurements, results show that there are multiple Dense Multipath
Component (DMC) processes in the delay PSD. An estimation algorithm based on
the algorithm for a single DMC process is proposed to effectively process the
multi-process data. Moreover, delay, angular, power, and polarization
properties of DMCs are investigated and compared with those of specular
multipath components. Furthermore, effects of DMCs on Singular Values (SVs) and
channel capacities are explored. Ignoring DMCs can overestimate SVs and
underestimate channel capacities.
|
2501.03580 | BASIC: Semi-supervised Multi-organ Segmentation with Balanced Subclass
Regularization and Semantic-conflict Penalty | cs.CV | Semi-supervised learning (SSL) has shown notable potential in relieving the
heavy demand of dense prediction tasks on large-scale well-annotated datasets,
especially for the challenging multi-organ segmentation (MoS). However, the
prevailing class-imbalance problem in MoS caused by the substantial variations
in organ size exacerbates the learning difficulty of the SSL network. To
address this issue, in this paper, we propose an innovative semi-supervised
network with BAlanced Subclass regularIzation and semantic-Conflict penalty
mechanism (BASIC) to effectively learn the unbiased knowledge for
semi-supervised MoS. Concretely, we construct a novel auxiliary subclass
segmentation (SCS) task based on previously generated balanced subclasses, thus
deeply excavating the unbiased information for the main MoS task in the
fashion of multi-task learning. Additionally, based on a mean teacher
framework, we elaborately design a balanced subclass regularization to utilize
the teacher predictions of SCS task to supervise the student predictions of MoS
task, thus effectively transferring unbiased knowledge to the MoS subnetwork
and alleviating the influence of the class-imbalance problem. Considering the
similar semantic information inside the subclasses and their corresponding
original classes (i.e., parent classes), we devise a semantic-conflict penalty
mechanism that penalizes SCS predictions with wrong parent classes more
heavily and provides a more accurate constraint on the MoS
predictions. Extensive experiments conducted on two publicly available
datasets, i.e., the WORD dataset and the MICCAI FLARE 2022 dataset, have
verified the superior performance of our proposed BASIC compared to other
state-of-the-art methods.
|
2501.03583 | STContext: A Multifaceted Dataset for Developing Context-aware
Spatio-temporal Crowd Mobility Prediction Models | cs.AI cs.LG | In smart cities, context-aware spatio-temporal crowd flow prediction (STCFP)
models leverage contextual features (e.g., weather) to identify unusual crowd
mobility patterns and enhance prediction accuracy. However, the best practice
for incorporating contextual features remains unclear due to inconsistent usage
of contextual features in different papers. Developing a multifaceted dataset
with rich types of contextual features and STCFP scenarios is crucial for
establishing a principled context modeling paradigm. Existing open crowd flow
datasets lack an adequate range of contextual features, which poses an urgent
requirement to build a multifaceted dataset to fill these research gaps. To
this end, we create STContext, a multifaceted dataset for developing
context-aware STCFP models. Specifically, STContext provides nine
spatio-temporal datasets across five STCFP scenarios and includes ten
contextual features, such as weather, air quality index, holidays, points of
interest, road networks, etc. Besides, we propose a unified workflow for
incorporating contextual features into deep STCFP methods, with steps including
feature transformation, dependency modeling, representation fusion, and
training strategies. Through extensive experiments, we have obtained several
useful guidelines for effective context modeling and insights for future
research. STContext is open-sourced at
https://github.com/Liyue-Chen/STContext.
|
2501.03584 | Discriminative Representation learning via Attention-Enhanced
Contrastive Learning for Short Text Clustering | cs.LG cs.CL | Contrastive learning has gained significant attention in short text
clustering, yet it has an inherent drawback of mistakenly identifying samples
from the same category as negatives and then separating them in the feature
space (false negative separation), which hinders the generation of superior
representations. To generate more discriminative representations for efficient
clustering, we propose a novel short text clustering method, called
Discriminative Representation learning via \textbf{A}ttention-\textbf{E}nhanced
\textbf{C}ontrastive \textbf{L}earning for Short Text Clustering
(\textbf{AECL}). The \textbf{AECL} consists of two modules which are the
pseudo-label generation module and the contrastive learning module. Both
modules build a sample-level attention mechanism to capture similarity
relationships between samples and aggregate cross-sample features to generate
consistent representations. Then, the former module uses the more
discriminative consistent representations to produce reliable supervision
information to assist clustering, while the latter module exploits similarity
relationships and consistent representations to optimize the construction of
positive samples and perform similarity-guided contrastive learning, effectively
addressing the false negative separation issue. Experimental results
demonstrate that the proposed \textbf{AECL} outperforms state-of-the-art
methods. If the paper is accepted, we will open-source the code.
|
2501.03585 | Collision Risk Quantification and Conflict Resolution in Trajectory
Tracking for Acceleration-Actuated Multi-Robot Systems | cs.RO | One of the pivotal challenges in a multi-robot system is how to give
attention to accuracy and efficiency while ensuring safety. Prior work either
cannot strictly guarantee collision-free motion for arbitrarily large numbers
of robots or yields considerably conservative results. The smoothness of the
avoidance trajectory also needs further optimization. This paper proposes an
acceleration-actuated simultaneous obstacle avoidance and trajectory tracking
method for arbitrarily large teams of robots that provides a nonconservative
collision avoidance strategy and offers approaches for deadlock avoidance. We
propose two ways of deadlock resolution; one incorporates an auxiliary
velocity vector into the error function of the trajectory tracking module,
which is proven to have no influence on the global convergence of the tracking
error. Furthermore, unlike traditional methods that address conflicts only
after a deadlock occurs, our decision-making mechanism avoids near-zero
velocities, which is much safer and more efficient in crowded environments.
Extensive comparisons show that the proposed method is superior to existing
studies when deployed in large-scale robot systems, with minimal
invasiveness.
|
2501.03592 | A Value Mapping Virtual Staining Framework for Large-scale Histological
Imaging | eess.IV cs.CV physics.optics | The emergence of virtual staining technology provides a rapid and efficient
alternative for researchers in tissue pathology. It enables the utilization of
unlabeled microscopic samples to generate virtual replicas of chemically
stained histological slices, or facilitates the transformation of one staining
type into another. The remarkable performance of generative networks, such as
CycleGAN, offers an unsupervised learning approach for virtual coloring,
overcoming the limitations of high-quality paired data required in supervised
learning. Nevertheless, large-scale color transformation necessitates
processing large field-of-view images in patches, often resulting in
significant boundary inconsistency and artifacts. Additionally, the
transformation between different colorized modalities typically needs further
efforts to modify loss functions and tune hyperparameters for independent
training of networks. In this study, we introduce a general virtual staining
framework that is adaptable to various conditions. We propose a loss function
based on the value mapping constraint to ensure the accuracy of virtual
coloring between different pathological modalities, termed the Value Mapping
Generative Adversarial Network (VM-GAN). Meanwhile, we present a
confidence-based tiling method to address the challenge of boundary
inconsistency arising from patch-wise processing. Experimental results on
diverse data with varying staining protocols demonstrate that our method
achieves superior quantitative indicators and improved visual perception.
|
2501.03598 | RecKG: Knowledge Graph for Recommender Systems | cs.IR cs.AI | Knowledge graphs have proven successful in integrating heterogeneous data
across various domains. However, there remains a noticeable dearth of research
on their seamless integration among heterogeneous recommender systems, despite
knowledge graph-based recommender systems garnering extensive research
attention. This study aims to fill this gap by proposing RecKG, a standardized
knowledge graph for recommender systems. RecKG ensures the consistent
representation of entities across different datasets, accommodating diverse
attribute types for effective data integration. Through a meticulous
examination of various recommender system datasets, we select attributes for
RecKG, ensuring standardized formatting through consistent naming conventions.
Owing to these characteristics, RecKG can seamlessly integrate heterogeneous data
sources, enabling the discovery of additional semantic information within the
integrated knowledge graph. We apply RecKG to standardize real-world datasets,
subsequently developing an application for RecKG using a graph database.
Finally, we validate RecKG's interoperability through a
qualitative comparison between RecKG and other studies.
|
2501.03605 | ConcealGS: Concealing Invisible Copyright Information in 3D Gaussian
Splatting | cs.CV cs.MM eess.IV | With the rapid development of 3D reconstruction technology, the widespread
distribution of 3D data has become a future trend. While traditional visual
data (such as images and videos) and NeRF-based formats already have mature
techniques for copyright protection, steganographic techniques for the emerging
3D Gaussian Splatting (3D-GS) format have yet to be fully explored. To address
this, we propose ConcealGS, an innovative method for embedding implicit
information into 3D-GS. By introducing the knowledge distillation and gradient
optimization strategy based on 3D-GS, ConcealGS overcomes the limitations of
NeRF-based models and enhances the robustness of implicit information and the
quality of 3D reconstruction. We evaluate ConcealGS in various potential
application scenarios, and experimental results have demonstrated that
ConcealGS not only successfully recovers implicit information but also has
almost no impact on rendering quality, providing a new approach for embedding
invisible and recoverable information into 3D models in the future.
|
2501.03606 | VTAO-BiManip: Masked Visual-Tactile-Action Pre-training with Object
Understanding for Bimanual Dexterous Manipulation | cs.RO cs.CV | Bimanual dexterous manipulation remains a significant challenge in robotics
due to the high DoFs of each hand and the coordination between them. Existing single-hand
manipulation techniques often leverage human demonstrations to guide RL methods
but fail to generalize to complex bimanual tasks involving multiple sub-skills.
In this paper, we introduce VTAO-BiManip, a novel framework that combines
visual-tactile-action pretraining with object understanding to facilitate
curriculum RL to enable human-like bimanual manipulation. We improve prior
learning by incorporating hand motion data, providing more effective guidance
for dual-hand coordination than binary tactile feedback. Our pretraining model
predicts future actions as well as object pose and size using masked multimodal
inputs, facilitating cross-modal regularization. To address the multi-skill
learning challenge, we introduce a two-stage curriculum RL approach to
stabilize training. We evaluate our method on a bottle-cap unscrewing task,
demonstrating its effectiveness in both simulated and real-world environments.
Our approach achieves a success rate that surpasses existing visual-tactile
pretraining methods by over 20%.
|
2501.03608 | A 3D Continuous-Space Electromagnetic Channel Model for 6G Tri-Polarized
Multi-user Communications | eess.SY cs.SY | It is envisioned that the sixth generation (6G) and beyond 6G (B6G) wireless
communication networks will enable global coverage in space, air, ground, and
sea. In this case, both base stations and users can be mobile and will tend to
move continuously in three-dimensional (3D) space. Therefore, obtaining channel
state information (CSI) in 3D continuous-space is crucial for the design and
performance evaluation of future 6G and B6G wireless systems. On the other
hand, new 6G technologies such as integrated sensing and communications (ISAC)
will also require prior knowledge of CSI in 3D continuous-space. In this paper,
a 3D continuous-space electromagnetic channel model is proposed for
tri-polarized multi-user communications, taking into account scatterers and
spherical wavefronts. Scattered fields are calculated using the method of
moments (MoM) with high accuracy. Spherical wave functions are utilized to
decompose the dyadic Green's functions that connect the transmitted source
currents and the received electric fields. Simulation results demonstrate that
transmit power, apertures, scatterers, and sample intervals have significant
impacts on statistical properties and channel capacities, providing insights
into the performance of continuous-space electromagnetic channel models and the
design of future wireless systems.
|
2501.03611 | Is social media hindering or helping Academic Performance? A case study
of Walter Sisulu University Buffalo City Campus | cs.CY cs.SI | Social media platforms are popular among higher education students and have
seen increased usage for academic purposes, especially during the COVID-19
pandemic. However, excessive use of social media can negatively impact
students' academic performance. This preliminary study examines social media's
impact on students' academic performance at Walter Sisulu University (WSU),
Buffalo City campus. Using a positivist paradigm and a quantitative approach,
randomly sampled data were collected from 71 students through a survey to
identify trends and generate preliminary insights. Results indicate that while
social media can facilitate academic work, it predominantly acts as a
distraction, negatively affecting academic performance, particularly for
first-year students. Notably, 84.5% of the students spend more than four hours
daily on social media, and 39.4% agree that it negatively impacts their
assignment completion. The study underscores the need for students to balance
their social media use and academic responsibilities, highlighting the
importance of this issue. Recommendations for achieving this balance, such as
adopting time management strategies and integrating social media into teaching
methodologies, are discussed.
|
2501.03616 | BTMTrack: Robust RGB-T Tracking via Dual-template Bridging and
Temporal-Modal Candidate Elimination | cs.CV | RGB-T tracking leverages the complementary strengths of RGB and thermal
infrared (TIR) modalities to address challenging scenarios such as low
illumination and adverse weather. However, existing methods often fail to
effectively integrate temporal information and perform efficient cross-modal
interactions, which constrain their adaptability to dynamic targets. In this
paper, we propose BTMTrack, a novel framework for RGB-T tracking. The core of
our approach lies in the dual-template backbone network and the Temporal-Modal
Candidate Elimination (TMCE) strategy. The dual-template backbone effectively
integrates temporal information, while the TMCE strategy focuses the model on
target-relevant tokens by evaluating temporal and modal correlations, reducing
computational overhead and avoiding irrelevant background noise. Building upon
this foundation, we propose the Temporal Dual Template Bridging (TDTB) module,
which facilitates precise cross-modal fusion through dynamically filtered
tokens. This approach further strengthens the interaction between templates and
the search region. Extensive experiments conducted on three benchmark datasets
demonstrate the effectiveness of BTMTrack. Our method achieves state-of-the-art
performance, with a 72.3% precision rate on the LasHeR test set and competitive
results on RGBT210 and RGBT234 datasets.
|
2501.03619 | Deep Learning-based Compression Detection for explainable Face Image
Quality Assessment | cs.CV | The assessment of face image quality is crucial to ensure reliable face
recognition. In order to provide data subjects and operators with explainable
and actionable feedback regarding captured face images, relevant quality
components have to be measured. Quality components that are known to negatively
impact the utility of face images include JPEG and JPEG 2000 compression
artefacts, among others. Compression can result in a loss of important image
details which may impair the recognition performance. In this work, deep neural
networks are trained to detect compression artefacts in face images. For
this purpose, artefact-free facial images are compressed with the JPEG and JPEG
2000 compression algorithms. Subsequently, the PSNR and SSIM metrics are
employed to obtain training labels, based on which a single neural network is
trained to detect JPEG and JPEG 2000 artefacts, respectively.
The evaluation of the proposed method shows promising results: in terms of
detection accuracy, error rates of 2-3% are obtained when utilizing PSNR labels
during training. In addition, we show that error rates of different open-source
and commercial face recognition systems can be significantly reduced by
discarding face images exhibiting severe compression artefacts. To minimize
resource consumption, EfficientNetV2 serves as the basis for the presented
algorithm, which is available as part of the OFIQ software.
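As a rough sketch of the labeling idea described above (compress, measure fidelity, derive a training label), the Python fragment below computes PSNR and thresholds it into a binary artefact label. The 35 dB threshold and the binary labeling scheme are illustrative assumptions, not values taken from the paper.

```python
import math

def psnr(ref, test, max_val=255.0):
    """Peak signal-to-noise ratio between two equally sized images,
    given here as flat lists of pixel values."""
    mse = sum((a - b) ** 2 for a, b in zip(ref, test)) / len(ref)
    if mse == 0:
        return float("inf")          # identical images
    return 10.0 * math.log10(max_val ** 2 / mse)

def quality_label(psnr_db, threshold=35.0):
    """Binary training label: 1 = severe compression artefacts (low PSNR).
    The 35 dB cut-off is a hypothetical choice for illustration."""
    return 1 if psnr_db < threshold else 0
```

In practice the reference would be the artefact-free image and the test input its JPEG or JPEG 2000 re-encoding; the network then learns to predict the label from the compressed image alone.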
|
2501.03624 | LlaMADRS: Prompting Large Language Models for Interview-Based Depression
Assessment | cs.HC cs.CL | This study introduces LlaMADRS, a novel framework leveraging open-source
Large Language Models (LLMs) to automate depression severity assessment using
the Montgomery-Asberg Depression Rating Scale (MADRS). We employ a zero-shot
prompting strategy with carefully designed cues to guide the model in
interpreting and scoring transcribed clinical interviews. Our approach, tested
on 236 real-world interviews from the Context-Adaptive Multimodal Informatics
(CAMI) dataset, demonstrates strong correlations with clinician assessments.
The Qwen 2.5--72b model achieves near-human level agreement across most MADRS
items, with Intraclass Correlation Coefficients (ICC) closely approaching those
between human raters. We provide a comprehensive analysis of model performance
across different MADRS items, highlighting strengths and current limitations.
Our findings suggest that LLMs, with appropriate prompting, can serve as
efficient tools for mental health assessment, potentially increasing
accessibility in resource-limited settings. However, challenges remain,
particularly in assessing symptoms that rely on non-verbal cues, underscoring
the need for multimodal approaches in future work.
|
2501.03627 | Coupled Hierarchical Structure Learning using Tree-Wasserstein Distance | cs.LG stat.ML | In many applications, both data samples and features have underlying
hierarchical structures. However, existing methods for learning these latent
structures typically focus on either samples or features, ignoring possible
coupling between them. In this paper, we introduce a coupled hierarchical
structure learning method using tree-Wasserstein distance (TWD). Our method
jointly computes TWDs for samples and features, representing their latent
hierarchies as trees. We propose an iterative, unsupervised procedure to build
these sample and feature trees based on diffusion geometry, hyperbolic
geometry, and wavelet filters. We show that this iterative procedure converges
and empirically improves the quality of the constructed trees. The method is
also computationally efficient and scales well in high-dimensional settings.
Our method can be seamlessly integrated with hyperbolic graph convolutional
networks (HGCN). We demonstrate that our method outperforms competing
approaches in sparse approximation and unsupervised Wasserstein distance
learning on several word-document and single-cell RNA-sequencing datasets. In
addition, integrating our method into HGCN enhances performance in link
prediction and node classification tasks.
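For readers unfamiliar with TWD itself: on a rooted tree with edge weights, it reduces to a weighted sum, over edges, of the absolute difference in probability mass contained in the subtree below each edge. The sketch below illustrates only this standard closed form, not the authors' coupled, iterative tree-learning procedure.

```python
def tree_wasserstein(parent, weight, mu, nu):
    """TWD between distributions mu and nu on the nodes of a rooted tree.
    parent[i] is the parent of node i (-1 for the root);
    weight[i] is the weight of the edge (i, parent[i])."""
    n = len(parent)

    def depth(i):
        d = 0
        while parent[i] != -1:
            i = parent[i]
            d += 1
        return d

    order = sorted(range(n), key=depth, reverse=True)  # leaves first
    sub = [mu[i] - nu[i] for i in range(n)]            # subtree mass difference
    twd = 0.0
    for i in order:
        if parent[i] != -1:
            twd += weight[i] * abs(sub[i])             # cost across edge (i, parent)
            sub[parent[i]] += sub[i]                   # propagate mass upward
    return twd
```

For point masses on two leaves, this recovers the shortest-path distance between them, as expected for a 1-Wasserstein metric on a tree.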
|
2501.03628 | A Novel Approach to Real-Time Short-Term Traffic Prediction based on
Distributed Fiber-Optic Sensing and Data Assimilation with a Stochastic
Cell-Automata Model | cond-mat.stat-mech cs.SY eess.SY nlin.CG physics.soc-ph | This paper demonstrates real-time short-term traffic flow prediction through
distributed fiber-optic sensing (DFOS) and data assimilation with a stochastic
cell-automata-based traffic model. Traffic congestion on expressways is a
severe issue. To alleviate its negative impacts, it is necessary to optimize
traffic flow before congestion becomes serious. For this purpose, real-time
short-term traffic flow prediction is promising. However, conventional traffic
monitoring apparatus used in prediction methods faces a technical issue due to
the sparsity in traffic flow data. To overcome the issue for realizing
real-time traffic prediction, this paper employs DFOS, which makes it possible
to obtain spatially continuous and real-time traffic flow data along the road
without dead zones. Using mean velocities derived from DFOS data as extracted
features, this paper proposes a real-time data assimilation method for
short-term prediction. As the theoretical model, the stochastic
short-term prediction. As the theoretical model, the stochastic
Nishinari-Fukui-Schadschneider model is adopted. Future traffic flow is
simulated with the optimal values of model parameters estimated from observed
mean velocities and the initial condition estimated as the latest microscopic
traffic state. This concept is validated using two congestion scenarios
obtained in Japanese expressways. The results show that the mean absolute error
of the predicted mean velocities is 10-15 km/h in the prediction horizon of 30
minutes. Furthermore, the prediction error in congestion length and travel time
decreases by 40-84% depending on congestion scenarios when compared with
conventional methods with traffic counters. This paper concludes that real-time
data assimilation using DFOS enables an accurate short-term traffic prediction.
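The stochastic Nishinari-Fukui-Schadschneider model referenced above extends the classic Nagel-Schreckenberg cellular automaton. As a hedged illustration of this family of models, the sketch below implements one update step of the simpler Nagel-Schreckenberg rule (acceleration, braking, random slowdown, movement) on a ring road; it is not the S-NFS model itself nor the paper's assimilation pipeline, and the parameter values are illustrative.

```python
import random

def nagel_schreckenberg(positions, velocities, L, vmax=5, p=0.3, rng=None):
    """One synchronous update of the Nagel-Schreckenberg stochastic CA on a
    ring of L cells. positions must be sorted; a one-step sketch, so wrap-around
    reordering after the move is not handled."""
    rng = rng or random.Random(0)
    n = len(positions)
    new_v = []
    for i in range(n):
        # Empty cells to the car ahead (periodic boundary).
        gap = (positions[(i + 1) % n] - positions[i] - 1) % L
        v = min(velocities[i] + 1, vmax)   # 1. accelerate toward vmax
        v = min(v, gap)                    # 2. brake to avoid collision
        if v > 0 and rng.random() < p:     # 3. random slowdown with prob. p
            v -= 1
        new_v.append(v)
    new_pos = [(positions[i] + new_v[i]) % L for i in range(n)]  # 4. move
    return new_pos, new_v
```

In a data-assimilation setting along these lines, the slowdown probability and the initial cell occupancy would be the quantities estimated from observed mean velocities before simulating forward.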
|
2501.03629 | CFFormer: Cross CNN-Transformer Channel Attention and Spatial Feature
Fusion for Improved Segmentation of Low Quality Medical Images | cs.CV | Hybrid CNN-Transformer models are designed to combine the advantages of
Convolutional Neural Networks (CNNs) and Transformers to efficiently model both
local information and long-range dependencies. However, most research tends to
focus on integrating the spatial features of CNNs and Transformers, while
overlooking the critical importance of channel features. This is particularly
significant for model performance in low-quality medical image segmentation.
Effective channel feature extraction can significantly enhance the model's
ability to capture contextual information and improve its representation
capabilities. To address this issue, we propose a hybrid CNN-Transformer model,
CFFormer, and introduce two modules: the Cross Feature Channel Attention (CFCA)
module and the X-Spatial Feature Fusion (XFF) module. The model incorporates
dual encoders, with the CNN encoder focusing on capturing local features and
the Transformer encoder modeling global features. The CFCA module filters and
facilitates interactions between the channel features from the two encoders,
while the XFF module effectively reduces the significant semantic information
differences in spatial features, enabling a smooth and cohesive spatial feature
fusion. We evaluate our model across eight datasets covering five modalities to
test its generalization capability. Experimental results demonstrate that our
model outperforms current state-of-the-art (SOTA) methods, with particularly
superior performance on datasets characterized by blurry boundaries and low
contrast.
|
2501.03630 | MC-VTON: Minimal Control Virtual Try-On Diffusion Transformer | cs.CV | Virtual try-on methods based on diffusion models achieve realistic try-on
effects. They use an extra reference network or an additional image encoder to
process multiple conditional image inputs, which adds pre-processing
complexity and additional computational costs. Besides, they require more than 25
inference steps, bringing longer inference time. In this work, with the
development of diffusion transformer (DiT), we rethink the necessity of
additional reference network or image encoder and introduce MC-VTON, which
leverages DiT's intrinsic backbone to seamlessly integrate minimal conditional
try-on inputs. Compared to existing methods, the superiority of MC-VTON is
demonstrated in four aspects: (1) Superior detail fidelity. Our DiT-based
MC-VTON exhibits superior fidelity in preserving fine-grained details. (2)
Simplified network and inputs. We remove any extra reference network or image
encoder. We also remove unnecessary conditions like the long prompt, pose
estimation, human parsing, and depth map. We require only the masked person
image and the garment image. (3) Parameter-efficient training. To process the
try-on task, we fine-tune the FLUX.1-dev with only 39.7M additional parameters
(0.33% of the backbone parameters). (4) Less inference steps. We apply
distillation diffusion on MC-VTON and only need 8 steps to generate a realistic
try-on image, with only 86.8M additional parameters (0.72% of the backbone
parameters). Experiments show that MC-VTON achieves superior qualitative and
quantitative results with fewer condition inputs, trainable parameters, and
inference steps than baseline methods.
|
2501.03631 | Exploring Iterative Manifold Constraint for Zero-shot Image Editing | cs.CV | Editability and fidelity are two essential demands for text-driven image
editing, which expects that the editing area should align with the target
prompt and the rest remain unchanged separately. The current cutting-edge
editing methods usually obey an "inversion-then-editing" pipeline, where the
input image is inverted to an approximate Gaussian noise ${z}_T$, based on
which a sampling process is conducted using the target prompt. Nevertheless, we
argue that it is not a good choice to use a near-Gaussian noise as a pivot for
further editing since it would bring plentiful fidelity errors. We verify this
by a pilot analysis, discovering that intermediate-inverted latents can achieve
a better trade-off between editability and fidelity than the fully-inverted
${z}_T$. Based on this, we propose a novel zero-shot editing paradigm dubbed
ZZEdit, which first locates a qualified intermediate-inverted latent marked as
${z}_p$ as a better editing pivot, which is sufficient-for-editing while
structure-preserving. Then, a ZigZag process is designed to execute denoising
and inversion alternately, which progressively injects target guidance into
${z}_p$ while preserving the structure information at step $p$. Afterwards, to
achieve the same step number of inversion and denoising, we execute a pure
sampling process under the target prompt. Essentially, our ZZEdit performs
iterative manifold constraints between the manifolds $M_{p}$ and $M_{p-1}$,
leading to fewer fidelity errors. Extensive experiments highlight the
effectiveness of ZZEdit in diverse image editing scenarios compared with the
"inversion-then-editing" pipeline.
|
2501.03635 | MHGNet: Multi-Heterogeneous Graph Neural Network for Traffic Prediction | cs.LG cs.AI | In recent years, traffic flow prediction has played a crucial role in the
management of intelligent transportation systems. However, traditional
forecasting methods often model non-Euclidean low-dimensional traffic data as a
simple graph with single-type nodes and edges, failing to capture similar
trends among nodes of the same type. To address this limitation, this paper
proposes MHGNet, a novel framework for modeling spatiotemporal
multi-heterogeneous graphs. Within this framework, the STD Module decouples
single-pattern traffic data into multi-pattern traffic data through feature
mappings of timestamp embedding matrices and node embedding matrices.
Subsequently, the Node Clusterer leverages the Euclidean distance between nodes
and different types of limit points to perform clustering with O(N) time
complexity. The nodes within each cluster undergo residual subgraph convolution
within the spatiotemporal fusion subgraphs generated by the DSTGG Module,
followed by processing in the SIE Module for node repositioning and
redistribution of weights. To validate the effectiveness of MHGNet, this paper
conducts extensive ablation studies and quantitative evaluations on four widely
used benchmarks, demonstrating its superior performance.
|
2501.03637 | Advancing the Understanding of Fine-Grained 3D Forest Structures using
Digital Cousins and Simulation-to-Reality: Methods and Datasets | cs.CV | Understanding and analyzing the spatial semantics and structure of forests is
essential for accurate forest resource monitoring and ecosystem research.
However, the lack of large-scale and annotated datasets has limited the
widespread use of advanced intelligent techniques in this field. To address
this challenge, a fully automated synthetic data generation and processing
framework based on the concepts of Digital Cousins and Simulation-to-Reality
(Sim2Real) is proposed, offering versatility and scalability to any size and
platform. Using this process, we created the Boreal3D, the world's largest
forest point cloud dataset. It includes 1000 highly realistic and structurally
diverse forest plots across four different platforms, totaling 48,403 trees and
over 35.3 billion points. Each point is labeled with semantic, instance, and
viewpoint information, while each tree is described with structural parameters
such as diameter, crown width, leaf area, and total volume. We designed and
conducted extensive experiments to evaluate the potential of Boreal3D in
advancing fine-grained 3D forest structure analysis in real-world applications.
The results demonstrate that with certain strategies, models pre-trained on
synthetic data can significantly improve performance when applied to real
forest datasets. Notably, the findings reveal that fine-tuning with only 20%
of real-world data enables the model to achieve performance comparable to
models trained exclusively on the entire real-world dataset, highlighting the value
and potential of our proposed framework. The Boreal3D dataset, and more
broadly, the synthetic data augmentation framework, is poised to become a
critical resource for advancing research in large-scale 3D forest scene
understanding and structural parameter estimation.
|
2501.03639 | A case study on the transformative potential of AI in software
engineering on LeetCode and ChatGPT | cs.DB cs.SE | The recent surge in the field of generative artificial intelligence (GenAI)
has the potential to bring about transformative changes across a range of
sectors, including software engineering and education. As GenAI tools, such as
OpenAI's ChatGPT, are increasingly utilised in software engineering, it becomes
imperative to understand the impact of these technologies on the software
product. This study employs an approach comprising web scraping and data
mining from LeetCode, with the objective of comparing the software
quality of Python programs produced by LeetCode users with that generated by
GPT-4o. In order to gain insight into these matters, this study addresses the
question whether GPT-4o produces software of superior quality to that produced
by humans.
The findings indicate that GPT-4o does not present a considerable impediment
to code quality, understandability, or runtime when generating code on a
limited scale. Indeed, the generated code even exhibits significantly lower
values across all three metrics in comparison to the user-written code.
However, no significantly superior values were observed for the generated code
in terms of memory usage in comparison to the user code, which contravened
expectations. Furthermore, we demonstrate that GPT-4o encountered
challenges in generalising to problems that were not included in the training
data set.
This contribution presents a first large-scale study comparing generated code
with human-written code on the LeetCode platform across multiple measures,
including code quality, code understandability, time behaviour and resource
utilisation. All data is publicly available for further research.
|
2501.03643 | Effective and Efficient Mixed Precision Quantization of Speech
Foundation Models | cs.SD cs.AI eess.AS | This paper presents a novel mixed-precision quantization approach for speech
foundation models that tightly integrates mixed-precision learning and
quantized model parameter estimation into one single model compression stage.
Experiments conducted on LibriSpeech dataset with fine-tuned wav2vec2.0-base
and HuBERT-large models suggest the resulting mixed-precision quantized models
increased the lossless compression ratio by factors up to 1.7x and 1.9x over
the respective uniform-precision and two-stage mixed-precision quantized
baselines that perform precision learning and model parameters quantization in
separate and disjointed stages, while incurring no statistically significant
word error rate (WER) increase over the 32-bit full-precision models. The system
compression time of wav2vec2.0-base and HuBERT-large models is reduced by up to
1.9 and 1.5 times over the two-stage mixed-precision baselines, while both
produce lower WERs. The best-performing 3.5-bit mixed-precision quantized
HuBERT-large model produces a lossless compression ratio of 8.6x over the
32-bit full-precision system.
|
2501.03647 | Hierarchical Datacubes | cs.DB | Many approaches have been proposed to pre-compute data cubes in order to
efficiently respond to OLAP queries in data warehouses. However, few have
proposed solutions integrating all of the possible outcomes, and it is this
idea that motivates the integration of hierarchical dimensions into these
responses. To meet this need, we propose in this paper a complete
redefinition of the framework and the formal definition of traditional database
analysis through the prism of hierarchical dimensions. After characterizing the
hierarchical data cube lattice, we introduce the hierarchical data cube and its
most concise reduced representation, the closed hierarchical data cube. It
offers compact replication so as to optimize storage space by removing
redundancies of strongly correlated data. Such data are typical of data
warehouses, and in particular in video games, our field of study and
experimentation, where hierarchical dimension attributes are widely
represented.
|
2501.03653 | Study of Frictional and Impact Transients in Active-Passive Mechanical
Pair | eess.SY cs.SY | We consider an active-passive mechanical pair in which the relative motion of
the latter is constrained by the mechanical impact. The system dynamics is
described by the previously introduced modeling frameworks of force transition
and dissipation through nonlinear Coulomb friction and structural damping,
the latter in accord with Hertzian contact theory. The focus of the present
study is on combining both interaction mechanisms, and on a detailed
experimental evaluation which discloses the validity of the modeling
assumptions. Such mechanical pair interactions can be found in various
mechatronic systems and mechanisms, for example clutches, backlash elements,
sliding items on shaking and inclined surfaces, conveyor belts, and others. This practical study
demonstrates and discusses the transients of a vibro-impact dynamics and shows
theoretical developments in line with experimental evaluation.
|
2501.03654 | Data Augmentation for Deep Learning Regression Tasks by Machine Learning
Models | cs.LG | Deep learning (DL) models have gained prominence in domains such as computer
vision and natural language processing but remain underutilized for regression
tasks involving tabular data. In these cases, traditional machine learning (ML)
models often outperform DL models. In this study, we propose and evaluate
various data augmentation (DA) techniques to improve the performance of DL
models for tabular data regression tasks. We compare the performance gains of
neural networks under different DA strategies, ranging from a naive method of
duplicating existing observations and adding noise to a more sophisticated DA
strategy that preserves the underlying statistical relationship in the data.
Our analysis demonstrates that the advanced DA method significantly improves DL
model performance across multiple datasets and regression tasks, resulting in
an average performance increase of over 10\% compared to baseline models
without augmentation. The efficacy of these DA strategies was rigorously
validated across 30 distinct datasets, with multiple iterations and evaluations
using three different automated deep learning (AutoDL) frameworks: AutoKeras,
H2O, and AutoGluon. This study demonstrates that by leveraging advanced DA
techniques, DL models can realize their full potential in regression tasks,
thereby contributing to broader adoption and enhanced performance in practical
applications.
|
2501.03659 | DehazeGS: Seeing Through Fog with 3D Gaussian Splatting | cs.CV | Current novel view synthesis tasks primarily rely on high-quality and clear
images. However, in foggy scenes, scattering and attenuation can significantly
degrade the reconstruction and rendering quality. Although NeRF-based dehazing
reconstruction algorithms have been developed, their use of deep fully
connected neural networks and per-ray sampling strategies leads to high
computational costs. Moreover, NeRF's implicit representation struggles to
recover fine details from hazy scenes. In contrast, recent advancements in 3D
Gaussian Splatting achieve high-quality 3D scene reconstruction by explicitly
modeling point clouds into 3D Gaussians. In this paper, we propose leveraging
the explicit Gaussian representation to explain the foggy image formation
process through a physically accurate forward rendering process. We introduce
DehazeGS, a method capable of decomposing and rendering a fog-free background
from participating media using only multi-view foggy images as input. We model
the transmission within each Gaussian distribution to simulate the formation of
fog. During this process, we jointly learn the atmospheric light and scattering
coefficient while optimizing the Gaussian representation of the hazy scene. In
the inference stage, we eliminate the effects of scattering and attenuation on
the Gaussians and directly project them onto a 2D plane to obtain a clear view.
Experiments on both synthetic and real-world foggy datasets demonstrate that
DehazeGS achieves state-of-the-art performance in terms of both rendering
quality and computational efficiency. Visualizations are available at
https://dehazegs.github.io/
|
2501.03664 | Local Compositional Complexity: How to Detect a Human-readable Message | cs.CV | Data complexity is an important concept in the natural sciences and related
areas, but lacks a rigorous and computable definition. In this paper, we focus
on a particular sense of complexity that is high if the data is structured in a
way that could serve to communicate a message. In this sense, human speech,
written language, drawings, diagrams and photographs are high complexity,
whereas data that is close to uniform throughout or populated by random values
is low complexity. We describe a general framework for measuring data
complexity based on dividing the shortest description of the data into a
structured and an unstructured portion, and taking the size of the former as
the complexity score. We outline an application of this framework in
statistical mechanics that may allow a more objective characterisation of the
macrostate and entropy of a physical system. Then, we derive a more precise and
computable definition geared towards human communication, by proposing local
compositionality as an appropriate specific structure. We demonstrate
experimentally that this method can distinguish meaningful signals from noise
or repetitive signals in auditory, visual and text domains, and could
potentially help determine whether an extra-terrestrial signal contained a
message.
|
2501.03666 | Hybrid Machine Learning Model with a Constrained Action Space for
Trajectory Prediction | cs.RO cs.LG | Trajectory prediction is crucial for advancing autonomous driving and
improving safety and efficiency. Although end-to-end models based on deep learning have
great potential, they often do not consider vehicle dynamic limitations,
leading to unrealistic predictions. To address this problem, this work
introduces a novel hybrid model that combines deep learning with a kinematic
motion model. It is able to predict object attributes such as acceleration and
yaw rate and generate trajectories based on them. A key contribution is the
incorporation of expert knowledge into the learning objective of the deep
learning model. This results in the constraint of the available action space,
thus enabling the prediction of physically feasible object attributes and
trajectories, thereby increasing safety and robustness. The proposed hybrid
model facilitates enhanced interpretability, thereby reinforcing the
trustworthiness of deep learning methods and promoting the development of safe
planning solutions. Experiments conducted on the publicly available real-world
Argoverse dataset demonstrate realistic driving behaviour, with benchmark
comparisons and ablation studies showing promising results.
|
2501.03670 | A Diversity-Enhanced Knowledge Distillation Model for Practical Math
Word Problem Solving | cs.CL cs.AI | Math Word Problem (MWP) solving is a critical task in natural language
processing and has garnered significant research interest in recent years. Various
recent studies heavily rely on Seq2Seq models and their extensions (e.g.,
Seq2Tree and Graph2Tree) to generate mathematical equations. While effective,
these models struggle to generate diverse yet equivalent solution equations,
limiting their generalization across various math problem scenarios. In this
paper, we introduce a novel Diversity-enhanced Knowledge Distillation (DivKD)
model for practical MWP solving. Our approach proposes an adaptive diversity
distillation method, in which a student model learns diverse equations by
selectively transferring high-quality knowledge from a teacher model.
Additionally, we design a diversity prior-enhanced student model to better
capture the diversity distribution of equations by incorporating a conditional
variational auto-encoder. Extensive experiments on four MWP benchmark
datasets demonstrate that our approach achieves higher answer accuracy than
strong baselines while maintaining high efficiency for practical applications.
|
2501.03671 | Imitation Learning of MPC with Neural Networks: Error Guarantees and
Sparsification | eess.SY cs.LG cs.SY | This paper presents a framework for bounding the approximation error in
imitation model predictive controllers utilizing neural networks. Leveraging
the Lipschitz properties of these neural networks, we derive a bound that
guides dataset design to ensure the approximation error remains at chosen
limits. We discuss how this method can be used to design a stable neural
network controller with performance guarantees employing existing robust model
predictive control approaches for data generation. Additionally, we introduce a
training adjustment, which is based on the sensitivities of the optimization
problem and reduces dataset density requirements based on the derived bounds.
We verify that the proposed augmentation results in improvements to the
network's predictive capabilities and a reduction of the Lipschitz constant.
Moreover, on a simulated inverted pendulum problem, we show that the approach
results in a closer match of the closed-loop behavior between the imitation and
the original model predictive controller.
|
2501.03674 | Action Quality Assessment via Hierarchical Pose-guided Multi-stage
Contrastive Regression | cs.CV cs.AI | Action Quality Assessment (AQA), which aims at automatic and fair evaluation
of athletic performance, has gained increasing attention in recent years.
However, athletes are often in rapid movement and the corresponding visual
appearance variances are subtle, making it challenging to capture fine-grained
pose differences and leading to poor estimation performance. Furthermore, most
common AQA tasks, such as diving in sports, are usually divided into multiple
sub-actions, each with a different duration. However, existing methods focus
on segmenting the video into fixed frames, which disrupts the temporal
continuity of sub-actions, resulting in unavoidable prediction errors.
To address these challenges, we propose a novel action quality assessment
method through hierarchically pose-guided multi-stage contrastive regression.
Firstly, we introduce a multi-scale dynamic visual-skeleton encoder to capture
fine-grained spatio-temporal visual and skeletal features. Then, a procedure
segmentation network is introduced to separate different sub-actions and obtain
segmented features. Afterwards, the segmented visual and skeletal features are
both fed into a multi-modal fusion module as physics structural priors, to
guide the model in learning refined activity similarities and variances.
Finally, a multi-stage contrastive learning regression approach is employed to
learn discriminative representations and output prediction results. In
addition, we introduce a newly-annotated FineDiving-Pose Dataset to improve the
current low-quality human pose labels. In experiments, the results on
FineDiving and MTL-AQA datasets demonstrate the effectiveness and superiority
of our proposed approach. Our source code and dataset are available at
https://github.com/Lumos0507/HP-MCoRe.
|
2501.03675 | SMIR: Efficient Synthetic Data Pipeline To Improve Multi-Image Reasoning | cs.CV | Vision-Language Models (VLMs) excel at understanding single images, aided by
high-quality instruction datasets. However, multi-image reasoning remains
underexplored in the open-source community due to two key challenges: (1)
scaling datasets with correlated images and complex reasoning instructions is
resource-intensive, and (2) robust evaluation benchmarks for multi-image tasks
are lacking. To address this, we introduce SMiR, a synthetic data-generation
pipeline for multi-image reasoning, along with a high-quality dataset generated
using this pipeline. SMiR efficiently extracts correlated images via multimodal
embeddings, integrates visual and descriptive information, and leverages
open-source LLMs to generate quality instructions. Using this approach, we
produce 160K synthetic training samples, offering a cost-effective alternative
to closed-source solutions. Additionally, we present SMiR-Bench, a multi-image
reasoning benchmark comprising 200 diverse examples across seven complex
reasoning tasks. SMiR-Bench is multi-turn and employs a VLM judge to evaluate
free-form responses, providing a comprehensive assessment of model
expressiveness and reasoning capability across modalities. We demonstrate the
effectiveness of SMiR by fine-tuning open-source VLMs and evaluating them on
SMiR-Bench.
|
2501.03676 | SALE-Based Offline Reinforcement Learning with Ensemble Q-Networks | cs.LG cs.AI | In this work, we build upon the offline reinforcement learning algorithm TD7,
which incorporates State-Action Learned Embeddings (SALE) and a prioritized
experience replay buffer (LAP). We propose a model-free actor-critic algorithm
that integrates ensemble Q-networks and a gradient diversity penalty from EDAC.
The ensemble Q-networks introduce penalties to guide the actor network toward
in-distribution actions, effectively addressing the challenge of
out-of-distribution actions. Meanwhile, the gradient diversity penalty
encourages diverse Q-value gradients, further suppressing overestimation for
out-of-distribution actions. Additionally, our method retains an adjustable
behavior cloning (BC) term that directs the actor network toward dataset
actions during early training stages, while gradually reducing its influence as
the precision of the Q-ensemble improves. These enhancements work
synergistically to improve the stability and precision of the training.
Experimental results on the D4RL MuJoCo benchmarks demonstrate that our
algorithm achieves higher convergence speed, stability, and performance
compared to existing methods.
|
2501.03681 | SLAM: Towards Efficient Multilingual Reasoning via Selective Language
Alignment | cs.CL cs.AI | Despite the significant improvements achieved by large language models (LLMs)
in English reasoning tasks, these models continue to struggle with multilingual
reasoning. Recent studies leverage a full-parameter and two-stage training
paradigm to teach models to first understand non-English questions and then
reason. However, this method suffers from both substantial computational
costs and catastrophic forgetting. The fundamental cause is that,
with the primary goal of enhancing multilingual comprehension, an excessive
number of irrelevant layers and parameters are tuned during the first stage.
Given our findings that the representation learning of languages is merely
conducted in lower-level layers, we propose an efficient multilingual reasoning
alignment approach that precisely identifies and fine-tunes the layers
responsible for handling multilingualism. Experimental results show that our
method, SLAM, only tunes 6 layers' feed-forward sub-layers including 6.5-8% of
all parameters within 7B and 13B LLMs, achieving superior average performance
than all strong baselines across 10 languages. Meanwhile, SLAM only involves
one training stage, reducing training time by a factor of 4.1-11.9 compared to the
two-stage method.
|
2501.03687 | Run-and-tumble chemotaxis using reinforcement learning | q-bio.CB cs.LG physics.bio-ph | Bacterial cells use run-and-tumble motion to climb up the attractant
concentration gradient in their environment. By extending the uphill runs and
shortening the downhill runs the cells migrate towards the higher attractant
zones. Motivated by this, we formulate a reinforcement learning (RL) algorithm
where an agent moves in one dimension in the presence of an attractant
gradient. The agent can perform two actions: either persistent motion in the
same direction or reversal of direction. We assign costs for these actions
based on the recent history of the agent's trajectory. We ask the question:
which RL strategy works best in different types of attractant environments. We
quantify the efficiency of the RL strategy by the ability of the agent (a) to
localize in the favorable zones after large times, and (b) to learn about its
complete environment. Depending on the attractant profile and the initial
condition, we find an optimum balance is needed between exploration and
exploitation to ensure the most efficient performance.
|
2501.03689 | MAJL: A Model-Agnostic Joint Learning Framework for Music Source
Separation and Pitch Estimation | cs.SD cs.AI eess.AS | Music source separation and pitch estimation are two vital tasks in music
information retrieval. Typically, the input of pitch estimation is obtained
from the output of music source separation. Therefore, existing methods have
tried to perform these two tasks simultaneously, so as to leverage the mutually
beneficial relationship between both tasks. However, these methods still face
two critical challenges that limit the improvement of both tasks: the lack of
labeled data and joint learning optimization. To address these challenges, we
propose a Model-Agnostic Joint Learning (MAJL) framework for both tasks. MAJL
is a generic framework and can use variant models for each task. It includes a
two-stage training method and a dynamic weighting method named Dynamic Weights
on Hard Samples (DWHS), which addresses the lack of labeled data and joint
learning optimization, respectively. Experimental results on public music
datasets show that MAJL outperforms state-of-the-art methods on both tasks,
with significant improvements of 0.92 in Signal-to-Distortion Ratio (SDR) for
music source separation and 2.71% in Raw Pitch Accuracy (RPA) for pitch
estimation. Furthermore, comprehensive studies not only validate the
effectiveness of each component of MAJL, but also indicate the great generality
of MAJL in adapting to different model architectures.
|
2501.03691 | Stabilization of Strictly Pre-Dissipative Receding Horizon Linear
Quadratic Control by Terminal Costs | math.OC cs.SY eess.SY | Asymptotic stability in receding horizon control is obtained under a strict
pre-dissipativity assumption, in the presence of suitable state constraints. In
this paper we analyze how terminal constraints can be replaced by suitable
terminal costs. We restrict to the linear-quadratic setting as that allows us
to obtain stronger results, while we analyze the full nonlinear case in a
separate contribution.
|
2501.03696 | Exploring Molecule Generation Using Latent Space Graph Diffusion | cs.LG cs.AI | Generating molecular graphs is a challenging task due to their discrete
nature and the competitive objectives involved. Diffusion models have emerged
as SOTA approaches in data generation across various modalities. For molecular
graphs, graph neural networks (GNNs) as a diffusion backbone have achieved
impressive results. Latent space diffusion, where diffusion occurs in a
low-dimensional space via an autoencoder, has demonstrated computational
efficiency. However, the literature on latent space diffusion for molecular
graphs is scarce, and no commonly accepted best practices exist. In this work,
we explore different approaches and hyperparameters, contrasting generative
flow models (denoising diffusion, flow matching, heat dissipation) and
architectures (GNNs and E(3)-equivariant GNNs). Our experiments reveal a high
sensitivity to the choice of approach and design decisions. Code is made
available at
github.com/Prashanth-Pombala/Molecule-Generation-using-Latent-Space-Graph-Diffusion.
|
2501.03697 | Deep Networks are Reproducing Kernel Chains | cs.LG math.FA stat.ML | Identifying an appropriate function space for deep neural networks remains a
key open question. While shallow neural networks are naturally associated with
Reproducing Kernel Banach Spaces (RKBS), deep networks present unique
challenges. In this work, we extend RKBS to chain RKBS (cRKBS), a new framework
that composes kernels rather than functions, preserving the desirable
properties of RKBS. We prove that any deep neural network function is a neural
cRKBS function, and conversely, any neural cRKBS function defined on a finite
dataset corresponds to a deep neural network. This approach provides a sparse
solution to the empirical risk minimization problem, requiring no more than $N$
neurons per layer, where $N$ is the number of data points.
|
2501.03699 | Motion-Aware Generative Frame Interpolation | cs.CV | Generative frame interpolation, empowered by large-scale pre-trained video
generation models, has demonstrated remarkable advantages in complex scenes.
However, existing methods heavily rely on the generative model to independently
infer the correspondences between input frames, an ability that is inadequately
developed during pre-training. In this work, we propose a novel framework,
termed Motion-aware Generative frame interpolation (MoG), to significantly
enhance the model's motion awareness by integrating explicit motion guidance.
Specifically, we investigate two key questions: what can serve as effective
motion guidance, and how this guidance can be seamlessly embedded into the
generative model. For the first question, we reveal that the intermediate flow
from flow-based interpolation models could efficiently provide task-oriented
motion guidance. Regarding the second, we first obtain guidance-based
representations of intermediate frames by warping input frames' representations
using guidance, and then integrate them into the model at both latent and
feature levels. To demonstrate the versatility of our method, we train MoG on
both real-world and animation datasets. Comprehensive evaluations show that our
MoG significantly outperforms the existing methods in both domains, achieving
superior video quality and improved fidelity.
|
2501.03700 | AuxDepthNet: Real-Time Monocular 3D Object Detection with
Depth-Sensitive Features | cs.CV cs.AI | Monocular 3D object detection is a challenging task in autonomous systems due
to the lack of explicit depth information in single-view images. Existing
methods often depend on external depth estimators or expensive sensors, which
increase computational complexity and hinder real-time performance. To overcome
these limitations, we propose AuxDepthNet, an efficient framework for real-time
monocular 3D object detection that eliminates the reliance on external depth
maps or pre-trained depth models. AuxDepthNet introduces two key components:
the Auxiliary Depth Feature (ADF) module, which implicitly learns
depth-sensitive features to improve spatial reasoning and computational
efficiency, and the Depth Position Mapping (DPM) module, which embeds depth
positional information directly into the detection process to enable accurate
object localization and 3D bounding box regression. Leveraging the DepthFusion
Transformer architecture, AuxDepthNet globally integrates visual and
depth-sensitive features through depth-guided interactions, ensuring robust and
efficient detection. Extensive experiments on the KITTI dataset show that
AuxDepthNet achieves state-of-the-art performance, with $\text{AP}_{3D}$ scores
of 24.72\% (Easy), 18.63\% (Moderate), and 15.31\% (Hard), and
$\text{AP}_{\text{BEV}}$ scores of 34.11\% (Easy), 25.18\% (Moderate), and
21.90\% (Hard) at an IoU threshold of 0.7.
|
2501.03707 | A Poincar\'e Lower Bound Approach for Performance Trade-offs in MIMO
ISAC Systems with Blockage | cs.IT math.IT | Characterizing the performance trade-offs between sensing and communication
subsystems is essential for enabling integrated sensing and communication
systems. Various metrics exist for each subsystem; however, this study focuses
on the ergodic capacity of the communication subsystem. Due to the complexity
of deriving the sensing mean square error (MSE) and the inapplicability of the
Bayesian Cram\'er-Rao Bound to channels with discrete or mixed distributions,
this work proposes a Poincar\'e lower bound on the sensing MSE to address these
issues. An achievable inner bound for the rate-sensing trade-off in a fading
multiple-input multiple-output channel with additive white Gaussian noise and
blockage probability is established. In addition, a strategy that is
asymptotically optimal for sensing is provided.
|
2501.03711 | Unsupervised Speech Segmentation: A General Approach Using Speech
Language Models | cs.CL cs.AI cs.LG cs.SD eess.AS | In this paper, we introduce an unsupervised approach for Speech Segmentation,
which builds on previously researched approaches, e.g., Speaker Diarization,
while being applicable to an inclusive set of acoustic-semantic distinctions,
paving a path towards a general Unsupervised Speech Segmentation approach.
Unlike traditional speech and audio segmentation, which mainly focuses on
spectral changes in the input signal, e.g., phone segmentation, our approach
tries to segment the spoken utterance into chunks with differing
acoustic-semantic styles, focusing on acoustic-semantic information that does
not translate well into text, e.g., emotion or speaker. While most Speech
Segmentation tasks only handle one style change, e.g., emotion diarization, our
approach tries to handle multiple acoustic-semantic style changes. Leveraging
recent advances in Speech Language Models (SLMs), we propose a simple
unsupervised method to segment a given speech utterance. We empirically
demonstrate the effectiveness of the proposed approach by considering several
setups. Results suggest that the proposed method is superior to the evaluated
baselines on boundary detection, segment purity, and over-segmentation. Code is
available at
https://github.com/avishaiElmakies/unsupervised_speech_segmentation_using_slm.
|
2501.03714 | MoDec-GS: Global-to-Local Motion Decomposition and Temporal Interval
Adjustment for Compact Dynamic 3D Gaussian Splatting | cs.CV | 3D Gaussian Splatting (3DGS) has made significant strides in scene
representation and neural rendering, with intense efforts focused on adapting
it for dynamic scenes. Despite delivering remarkable rendering quality and
speed, existing methods struggle with storage demands and representing complex
real-world motions. To tackle these issues, we propose MoDecGS, a
memory-efficient Gaussian splatting framework designed for reconstructing novel
views in challenging scenarios with complex motions. We introduce Global-to-Local
Motion Decomposition (GLMD) to effectively capture dynamic motions in a
coarse-to-fine manner. This approach leverages Global Canonical Scaffolds (Global
CS) and Local Canonical Scaffolds (Local CS), extending static Scaffold
representation to dynamic video reconstruction. For Global CS, we propose
Global Anchor Deformation (GAD) to efficiently represent global dynamics along
complex motions, by directly deforming the implicit Scaffold attributes which
are anchor position, offset, and local context features. Next, we finely adjust
local motions via the Local Gaussian Deformation (LGD) of Local CS explicitly.
Additionally, we introduce Temporal Interval Adjustment (TIA) to automatically
control the temporal coverage of each Local CS during training, allowing
MoDecGS to find optimal interval assignments based on the specified number of
temporal segments. Extensive evaluations demonstrate that MoDecGS achieves an
average 70% reduction in model size over state-of-the-art methods for dynamic 3D
Gaussians from real-world dynamic videos while maintaining or even improving
rendering quality.
|
2501.03715 | Neural Deconstruction Search for Vehicle Routing Problems | cs.AI cs.LG | Autoregressive construction approaches generate solutions to vehicle routing
problems in a step-by-step fashion, leading to high-quality solutions that are
nearing the performance achieved by handcrafted, operations research
techniques. In this work, we challenge the conventional paradigm of sequential
solution construction and introduce an iterative search framework where
solutions are instead deconstructed by a neural policy. Throughout the search,
the neural policy collaborates with a simple greedy insertion algorithm to
rebuild the deconstructed solutions. Our approach surpasses the performance of
state-of-the-art operations research methods across three challenging vehicle
routing problems of various problem sizes.
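The deconstruct-then-rebuild loop described above can be sketched in a few lines; here a random removal step stands in for the learned neural deconstruction policy, and a single TSP-style tour stands in for a full vehicle routing solution:

```python
import math
import random

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def route_length(route, pts):
    """Closed-tour length over point indices."""
    return sum(dist(pts[route[i]], pts[route[(i + 1) % len(route)]])
               for i in range(len(route)))

def greedy_insert(route, customer, pts):
    """Insert `customer` at the position that adds the least tour length."""
    best_pos, best_cost = 0, float("inf")
    for i in range(len(route) + 1):
        cand = route[:i] + [customer] + route[i:]
        c = route_length(cand, pts)
        if c < best_cost:
            best_pos, best_cost = i, c
    return route[:best_pos] + [customer] + route[best_pos:]

def deconstruction_search(pts, iters=200, k=3, seed=0):
    """Iteratively deconstruct k customers and greedily reinsert them,
    keeping any non-worsening solution."""
    rng = random.Random(seed)
    route = list(range(len(pts)))
    best = route_length(route, pts)
    for _ in range(iters):
        removed = rng.sample(route, k)  # random stand-in for the neural policy
        partial = [c for c in route if c not in removed]
        for c in removed:
            partial = greedy_insert(partial, c, pts)
        cost = route_length(partial, pts)
        if cost <= best:
            route, best = partial, cost
    return route, best
```

In the paper's framework the removal decision is made by a trained neural policy rather than uniform sampling; the greedy insertion rebuild is the simple collaborator it pairs with.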
|
2501.03717 | Materialist: Physically Based Editing Using Single-Image Inverse
Rendering | cs.CV cs.AI cs.GR | To perform image editing based on single-view, inverse physically based
rendering, we present a method combining a learning-based approach with
progressive differentiable rendering. Given an image, our method leverages
neural networks to predict initial material properties. Progressive
differentiable rendering is then used to optimize the environment map and
refine the material properties with the goal of closely matching the rendered
result to the input image. We require only a single image while other inverse
rendering methods based on the rendering equation require multiple views. In
comparison to single-view methods that rely on neural renderers, our approach
achieves more realistic light-material interactions, accurate shadows, and
global illumination. Furthermore, with optimized material properties and
illumination, our method enables a variety of tasks, including physically based
material editing, object insertion, and relighting. We also propose a method
for material transparency editing that operates effectively without requiring
full scene geometry. Compared with methods based on Stable Diffusion, our
approach offers stronger interpretability and more realistic light refraction
based on empirical results.
|
2501.03722 | Self-adaptive vision-language model for 3D segmentation of pulmonary
artery and vein | cs.CV cs.AI | Accurate segmentation of pulmonary structures is crucial in clinical
diagnosis, disease study, and treatment planning. Significant progress has been
made in deep learning-based segmentation techniques, but most require much
labeled data for training. Consequently, developing precise segmentation
methods that demand fewer labeled datasets is paramount in medical image
analysis. The emergence of pre-trained vision-language foundation models, such
as CLIP, recently opened the door for universal computer vision tasks.
Exploiting the generalization ability of these pre-trained foundation models on
downstream tasks, such as segmentation, leads to unexpected performance with a
relatively small amount of labeled data. However, exploring these models for
pulmonary artery-vein segmentation is still limited. This paper proposes a
novel framework called Language-guided self-adaptive Cross-Attention Fusion
Framework. Our method adopts pre-trained CLIP as a strong feature extractor for
generating the segmentation of 3D CT scans, while adaptively aggregating the
cross-modality of text and image representations. We propose a specially
designed adapter module to fine-tune pre-trained CLIP with a self-adaptive
learning strategy to effectively fuse the two modalities of embeddings. We
extensively validate our method on a local dataset, which is the largest
pulmonary artery-vein CT dataset to date and consists of 718 labeled data in
total. The experiments show that our method outperformed other state-of-the-art
methods by a large margin. Our data and code will be made publicly available
upon acceptance.
|
2501.03727 | Detecting Neurocognitive Disorders through Analyses of Topic Evolution
and Cross-modal Consistency in Visual-Stimulated Narratives | eess.AS cs.LG | Early detection of neurocognitive disorders (NCDs) is crucial for timely
intervention and disease management. Speech analysis offers a non-intrusive and
scalable screening method, particularly through narrative tasks in
neuropsychological assessment tools. Traditional narrative analysis often
focuses on local indicators in microstructure, such as word usage and syntax.
While these features provide insights into language production abilities, they
often fail to capture global narrative patterns, or macrostructures.
Macrostructures include coherence, thematic organization, and logical
progressions, reflecting essential cognitive skills potentially critical for
recognizing NCDs. Addressing this gap, we propose to investigate specific
cognitive and linguistic challenges by analyzing topical shifts, temporal
dynamics, and the coherence of narratives over time, aiming to reveal cognitive
deficits by identifying narrative impairments, and exploring their impact on
communication and cognition. The investigation is based on the CU-MARVEL Rabbit
Story corpus, which comprises recordings of a story-telling task from 758 older
adults. We developed two approaches: the Dynamic Topic Models (DTM)-based
temporal analysis to examine the evolution of topics over time, and the
Text-Image Temporal Alignment Network (TITAN) to evaluate the coherence between
spoken narratives and visual stimuli. The DTM-based approach validated the
effectiveness of dynamic topic consistency as a macrostructural metric
(F1=0.61, AUC=0.78). The TITAN approach achieved the highest performance
(F1=0.72, AUC=0.81), surpassing established microstructural and macrostructural
feature sets. Cross-comparison and regression tasks further demonstrated the
effectiveness of proposed dynamic macrostructural modeling approaches for NCD
detection.
|
2501.03729 | Realistic Test-Time Adaptation of Vision-Language Models | cs.CV | The zero-shot capabilities of Vision-Language Models (VLMs) have been widely
leveraged to improve predictive performance. However, previous works on
transductive or test-time adaptation (TTA) often make strong assumptions about
the data distribution, such as the presence of all classes. Our work challenges
these favorable deployment scenarios, and introduces a more realistic
evaluation framework, including: (i) a variable number of effective classes for
adaptation within a single batch, and (ii) non-i.i.d. batches of test samples
in online adaptation settings. We provide comprehensive evaluations,
comparisons, and ablation studies that demonstrate how current transductive or
TTA methods for VLMs systematically compromise the models' initial zero-shot
robustness across various realistic scenarios, favoring performance gains under
advantageous assumptions about the test samples' distributions. Furthermore, we
introduce StatA, a versatile method that can handle a wide range of
deployment scenarios, including those with a variable number of effective
classes at test time. Our approach incorporates a novel regularization term
designed specifically for VLMs, which acts as a statistical anchor preserving
the initial text-encoder knowledge, particularly in low-data regimes. Code
available at https://github.com/MaxZanella/StatA.
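The "statistical anchor" idea can be caricatured as a transductive objective with an L2 penalty pinning class prototypes to the frozen text-encoder embeddings. The function name and exact form below are illustrative assumptions, not StatA's actual formulation:

```python
import numpy as np

def anchored_objective(assign, feats, prototypes, text_anchors, lam=1.0):
    """Toy transductive objective: fit class prototypes to test-batch
    features while an L2 anchor term penalizes drift away from the frozen
    text-encoder class embeddings. assign: (N, K) soft assignments,
    feats: (N, D), prototypes / text_anchors: (K, D)."""
    fit = np.sum(assign * ((feats[:, None, :] - prototypes[None]) ** 2).sum(-1))
    anchor = lam * np.sum((prototypes - text_anchors) ** 2)
    return fit + anchor
```

With a large `lam` the prototypes stay pinned to the zero-shot text embeddings (preserving initial robustness in low-data batches); `lam` near zero recovers unregularized transduction.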
|
2501.03737 | Re-Visible Dual-Domain Self-Supervised Deep Unfolding Network for MRI
Reconstruction | eess.IV cs.CV | Magnetic Resonance Imaging (MRI) is widely used in clinical practice, but
suffers from prolonged acquisition times. Although deep learning methods have
been proposed to accelerate acquisition and demonstrate promising performance,
they rely on high-quality fully-sampled datasets for training in a supervised
manner. However, such datasets are time-consuming and expensive to collect,
which constrains their broader applications. On the other hand, self-supervised
methods offer an alternative by enabling learning from under-sampled data
alone, but most existing methods rely on further partitioned under-sampled
k-space data as the model's input for training, resulting in a loss of valuable
information. Additionally, their models have not fully incorporated image
priors, leading to degraded reconstruction performance. In this paper, we
propose a novel re-visible dual-domain self-supervised deep unfolding network
to address these issues when only under-sampled datasets are available.
Specifically, by incorporating re-visible dual-domain loss, all under-sampled
k-space data are utilized during training to mitigate information loss caused
by further partitioning. This design enables the model to implicitly adapt to
all under-sampled k-space data as input. Additionally, we design a deep
unfolding network based on Chambolle and Pock Proximal Point Algorithm
(DUN-CP-PPA) to achieve end-to-end reconstruction, incorporating imaging
physics and image priors to guide the reconstruction process. By employing a
Spatial-Frequency Feature Extraction (SFFE) block to capture global and local
feature representation, we enhance the model's ability to learn
comprehensive image priors. Experiments conducted on the fastMRI and IXI
datasets demonstrate that our method significantly outperforms state-of-the-art
approaches in terms of reconstruction performance.
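The self-supervised setting's input, under-sampled k-space data, can be sketched with retrospective Cartesian undersampling and a zero-filled reconstruction. The mask pattern and kept center band are illustrative assumptions; the paper's sampling and loss partitioning are more involved:

```python
import numpy as np

def undersample(img, accel=4, seed=0):
    """Retrospectively undersample k-space along the phase-encode axis
    (keeping roughly 1/accel of the lines plus a low-frequency band) and
    return the zero-filled reconstruction together with the mask."""
    rng = np.random.default_rng(seed)
    k = np.fft.fftshift(np.fft.fft2(img))
    mask = rng.random(img.shape[0]) < 1.0 / accel    # random PE lines
    mid = img.shape[0] // 2
    mask[mid - 4: mid + 4] = True                    # always keep k-space centre
    k_us = k * mask[:, None]
    recon = np.abs(np.fft.ifft2(np.fft.ifftshift(k_us)))
    return recon, mask
```

A zero-filled reconstruction of this kind is what a self-supervised network sees as input; the re-visible dual-domain loss in the paper aims to use all of the retained k-space lines rather than further partitioning them.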
|
2501.03746 | A Multimodal Lightweight Approach to Fault Diagnosis of Induction Motors
in High-Dimensional Dataset | cs.LG cs.SY eess.SP eess.SY | An accurate AI-based diagnostic system for induction motors (IMs) holds the
potential to enhance proactive maintenance, mitigating unplanned downtime and
curbing overall maintenance costs within an industrial environment. Notably,
among the prevalent faults in IMs, a Broken Rotor Bar (BRB) fault is frequently
encountered. Researchers have proposed various fault diagnosis approaches using
signal processing (SP), machine learning (ML), deep learning (DL), and hybrid
architectures for BRB faults. One limitation in the existing literature is the
training of these architectures on relatively small datasets, risking
overfitting when implementing such systems in industrial environments. This
paper addresses this limitation by implementing large-scale data of BRB faults
by using a transfer-learning-based lightweight DL model named ShuffleNetV2 for
diagnosing one, two, three, and four BRB faults using current and vibration
signal data. Spectral images for training and testing are generated using a
Short-Time Fourier Transform (STFT). The dataset comprises 57,500 images, with
47,500 used for training and 10,000 for testing. Remarkably, the ShuffleNetV2
model exhibited superior performance, achieving lower computational cost while
accurately classifying 98.856% of spectral images. To further enhance the
visualization of harmonic sidebands resulting from broken bars, Fast Fourier
Transform (FFT) is applied to current and vibration data. The paper also
provides insights into the training and testing times for each model,
contributing to a comprehensive understanding of the proposed fault diagnosis
methodology. The findings of our research provide valuable insights into the
performance and efficiency of different ML and DL models, offering a foundation
for the development of robust fault diagnosis systems for induction motors in
industrial settings.
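The spectral-image generation step can be approximated as follows, using a toy stator-current signal with an assumed slip and sideband amplitudes (the paper's acquisition parameters differ); a broken rotor bar introduces (1 ± 2s)f sidebands around the supply frequency, which the STFT image makes visible to the CNN:

```python
import numpy as np
from scipy.signal import stft

fs = 10_000                                  # assumed sampling rate (Hz)
t = np.arange(0, 1.0, 1 / fs)
s_slip = 0.03                                # assumed slip
supply = np.sin(2 * np.pi * 50 * t)          # 50 Hz supply fundamental
sidebands = (0.05 * np.sin(2 * np.pi * 50 * (1 - 2 * s_slip) * t)
             + 0.05 * np.sin(2 * np.pi * 50 * (1 + 2 * s_slip) * t))
current = supply + sidebands + 0.01 * np.random.default_rng(0).normal(size=t.size)

# Short-Time Fourier Transform -> dB-scaled image for the classifier
f, seg_t, Z = stft(current, fs=fs, nperseg=1024)
spectrogram = 20 * np.log10(np.abs(Z) + 1e-12)
```

Each such spectrogram becomes one training image; repeating this over current and vibration recordings for one to four broken bars yields the 57,500-image dataset described above.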
|
2501.03747 | Context-Alignment: Activating and Enhancing LLM Capabilities in Time
Series | cs.LG cs.CL stat.AP | Recently, leveraging pre-trained Large Language Models (LLMs) for time series
(TS) tasks has gained increasing attention, which involves activating and
enhancing LLMs' capabilities. Many methods aim to activate LLMs' capabilities
based on token-level alignment but overlook LLMs' inherent strength on natural
language processing -- their deep understanding of linguistic logic and
structure rather than superficial embedding processing. We propose
Context-Alignment, a new paradigm that aligns TS with a linguistic component in
the language environments familiar to LLMs to enable LLMs to contextualize and
comprehend TS data, thereby activating their capabilities. Specifically, such
context-level alignment comprises structural alignment and logical alignment,
which is achieved by a Dual-Scale Context-Alignment GNNs (DSCA-GNNs) applied to
TS-language multimodal inputs. Structural alignment utilizes dual-scale nodes
to describe hierarchical structure in TS-language, enabling LLMs to treat long TS
data as a whole linguistic component while preserving intrinsic token features.
Logical alignment uses directed edges to guide logical relationships, ensuring
coherence in the contextual semantics. Demonstration example prompts are
employed to construct Demonstration Examples based Context-Alignment (DECA)
following the DSCA-GNNs framework. DECA can be flexibly and repeatedly integrated
into various layers of pre-trained LLMs to improve awareness of logic and
structure, thereby enhancing performance. Extensive experiments show the
effectiveness of DECA and the importance of Context-Alignment across tasks,
particularly in few-shot and zero-shot forecasting, confirming that
Context-Alignment provides powerful prior knowledge on context.
|
2501.03763 | 3D Printable Gradient Lattice Design for Multi-Stiffness Robotic Fingers | cs.RO | Human fingers achieve exceptional dexterity and adaptability by combining
structures with varying stiffness levels, from soft tissues (low) to tendons
and cartilage (medium) to bones (high). This paper explores developing a
robotic finger with similar multi-stiffness characteristics. Specifically, we
propose using a lattice configuration, parameterized by voxel size and unit
cell geometry, to optimize and achieve fine-tuned stiffness properties with
high granularity. A significant advantage of this approach is the feasibility
of 3D printing the designs in a single process, eliminating the need for manual
assembly of elements with differing stiffness. Based on this method, we present
a novel, human-like finger, and a soft gripper. We integrate the latter with a
rigid manipulator and demonstrate its effectiveness in pick-and-place tasks.
|
2501.03764 | SelectiveFinetuning: Enhancing Transfer Learning in Sleep Staging
through Selective Domain Alignment | eess.SP cs.AI | In practical sleep stage classification, a key challenge is the variability
of EEG data across different subjects and environments. Differences in
physiology, age, health status, and recording conditions can lead to domain
shifts between data. These domain shifts often result in decreased model
accuracy and reliability, particularly when the model is applied to new data
with characteristics different from those it was originally trained on, which
is a typical manifestation of negative transfer. To address this, we propose
SelectiveFinetuning in this paper. Our method utilizes a pretrained
Multi-Resolution Convolutional Neural Network (MRCNN) to extract EEG features,
capturing the distinctive characteristics of different sleep stages. To
mitigate the effect of domain shifts, we introduce a domain aligning mechanism
that employs Earth Mover's Distance (EMD) to evaluate and select source domain
data closely matching the target domain. By finetuning the model with selective
source data, our SelectiveFinetuning enhances the model's performance on a target
domain that exhibits domain shifts compared to the data used for training.
Experimental results show that our method outperforms existing baselines,
offering greater robustness and adaptability in practical scenarios where data
distributions are often unpredictable.
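The EMD-based selection step can be sketched with scipy's 1-D Wasserstein distance. This is a simplification: real EEG features are multi-dimensional, and the function and dictionary names here are illustrative:

```python
from scipy.stats import wasserstein_distance

def select_source_subjects(source_feats, target_feats, top_k=2):
    """Rank source-domain subjects by the 1-D Earth Mover's Distance
    between their (pooled, scalar) feature distributions and the target's,
    keeping the closest `top_k` for finetuning."""
    scores = [(sid, wasserstein_distance(feats, target_feats))
              for sid, feats in source_feats.items()]
    scores.sort(key=lambda x: x[1])
    return [sid for sid, _ in scores[:top_k]]
```

Finetuning then proceeds only on the selected subjects, which is how the method avoids negative transfer from source data far from the target distribution.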
|
2501.03765 | Image Segmentation: Inducing graph-based learning | cs.CV eess.IV | This study explores the potential of graph neural networks (GNNs) to enhance
semantic segmentation across diverse image modalities. We evaluate the
effectiveness of a novel GNN-based U-Net architecture on three distinct
datasets: PascalVOC, a standard benchmark for natural image segmentation,
WoodScape, a challenging dataset of fisheye images commonly used in autonomous
driving, introducing significant geometric distortions; and ISIC2016, a dataset
of dermoscopic images for skin lesion segmentation. We compare our proposed
UNet-GNN model against established convolutional neural networks (CNNs) based
segmentation models, including U-Net and U-Net++, as well as the
transformer-based SwinUNet. Unlike these methods, which primarily rely on local
convolutional operations or global self-attention, GNNs explicitly model
relationships between image regions by constructing and operating on a graph
representation of the image features. This approach allows the model to capture
long-range dependencies and complex spatial relationships, which we hypothesize
will be particularly beneficial for handling geometric distortions present in
fisheye imagery and capturing intricate boundaries in medical images. Our
analysis demonstrates the versatility of GNNs in addressing diverse
segmentation challenges and highlights their potential to improve segmentation
accuracy in various applications, including autonomous driving and medical
image analysis.
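The core idea, message passing over a graph of image regions rather than purely local convolution, can be sketched minimally. A 4-neighbour grid over feature-map cells stands in for the graph construction (real pipelines may use superpixels or learned affinities), and one mean-aggregation step stands in for a GNN layer:

```python
import numpy as np

def grid_graph(h, w):
    """Symmetric 4-neighbour adjacency over an h x w grid of image regions."""
    n = h * w
    A = np.zeros((n, n))
    for i in range(h):
        for j in range(w):
            u = i * w + j
            for di, dj in ((1, 0), (0, 1)):
                if i + di < h and j + dj < w:
                    v = (i + di) * w + (j + dj)
                    A[u, v] = A[v, u] = 1.0
    return A

def gnn_layer(X, A):
    """One mean-aggregation message-passing step: each region's feature
    becomes the mean of its neighbours' features."""
    deg = A.sum(1, keepdims=True)
    return (A @ X) / np.maximum(deg, 1.0)
```

Stacking such layers (with learned weights and nonlinearities) lets information travel across distant regions, which is the hypothesized advantage for fisheye distortions and intricate lesion boundaries.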
|
2501.03767 | AutoFish: Dataset and Benchmark for Fine-grained Analysis of Fish | cs.CV | Automated fish documentation processes are expected to play an essential role
in sustainable fisheries management and in addressing the challenges of
overfishing in the near future. In this paper, we present a novel and publicly
available dataset named AutoFish designed for fine-grained fish analysis. The
dataset comprises 1,500 images of 454 specimens of visually similar fish placed
in various constellations on a white conveyor belt and annotated with instance
segmentation masks, IDs, and length measurements. The data was collected in a
controlled environment using an RGB camera. The annotation procedure involved
manual point annotations, initial segmentation masks proposed by the Segment
Anything Model (SAM), and subsequent manual correction of the masks. We
establish baseline instance segmentation results using two variations of the
Mask2Former architecture, with the best performing model reaching an mAP of
89.15%. Additionally, we present two baseline length estimation methods, the
best performing being a custom MobileNetV2-based regression model reaching an
MAE of 0.62cm in images with no occlusion and 1.38cm in images with occlusion.
Link to project page: https://vap.aau.dk/autofish/.
|
2501.03769 | Multi-label Cross-lingual automatic music genre classification from
lyrics with Sentence BERT | cs.IR cs.LG cs.SD eess.AS | Music genres are shaped by both the stylistic features of songs and the
cultural preferences of artists' audiences. Automatic classification of music
genres using lyrics can be useful in several applications such as
recommendation systems, playlist creation, and library organization. We present
a multi-label, cross-lingual genre classification system based on multilingual
sentence embeddings generated by sBERT. Using a bilingual Portuguese-English
dataset with eight overlapping genres, we demonstrate the system's ability to
train on lyrics in one language and predict genres in another. Our approach
outperforms the baseline approach of translating lyrics and using a
bag-of-words representation, improving the genre-wise average F1-Score from 0.35
to 0.69. The classifier uses a one-vs-all architecture, enabling it to assign
multiple genre labels to a single lyric. Experimental results reveal that
dataset centralization notably improves cross-lingual performance. This
approach offers a scalable solution for genre classification across
underrepresented languages and cultural domains, advancing the capabilities of
music information retrieval systems.
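The one-vs-all multi-label head can be sketched with scikit-learn. Random genre "directions" stand in for sBERT embeddings here (in the paper, X would be multilingual sentence embeddings of lyrics); the genre names are illustrative:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.preprocessing import MultiLabelBinarizer

rng = np.random.default_rng(0)
# Each genre gets a direction; a song embeds near the sum of its genres.
genre_dirs = {g: rng.normal(size=32) for g in ["rock", "pop", "mpb"]}
label_sets = [["rock"], ["pop"], ["rock", "pop"], ["mpb"]] * 50
X = np.array([sum(genre_dirs[g] for g in labs) + 0.1 * rng.normal(size=32)
              for labs in label_sets])

# Binary indicator matrix + one binary classifier per genre (one-vs-all),
# so a single lyric can receive multiple genre labels.
mlb = MultiLabelBinarizer()
Y = mlb.fit_transform(label_sets)
clf = OneVsRestClassifier(LogisticRegression(max_iter=1000)).fit(X, Y)
```

Because each genre has its own binary classifier, thresholded independently, a lyric near both the "rock" and "pop" directions receives both labels, which is the multi-label behaviour described above.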
|
2501.03775 | Strip R-CNN: Large Strip Convolution for Remote Sensing Object Detection | cs.CV | Despite rapid recent progress, remote sensing object detection
remains challenging for detecting high aspect ratio objects. This paper shows
that large strip convolutions are good feature representation learners for
remote sensing object detection and can detect objects of various aspect ratios
well. Based on large strip convolutions, we build a new network architecture
called Strip R-CNN, which is simple, efficient, and powerful. Unlike recent
remote sensing object detectors that leverage large-kernel convolutions with
square shapes, our Strip R-CNN takes advantage of sequential orthogonal large
strip convolutions to capture spatial information. In addition, we enhance the
localization capability of remote-sensing object detectors by decoupling the
detection heads and equipping the localization head with strip convolutions to
better localize the target objects. Extensive experiments on several
benchmarks, e.g., DOTA, FAIR1M, HRSC2016, and DIOR, show that our Strip R-CNN
can largely improve previous works. Notably, our 30M model achieves 82.75% mAP
on DOTA-v1.0, setting a new state-of-the-art record. Code is available at
https://github.com/YXB-NKU/Strip-R-CNN.
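The sequential orthogonal strip-convolution idea can be illustrated in numpy: a 1 x k pass along rows followed by a k x 1 pass along columns covers a k x k receptive field at O(2k) cost instead of O(k^2). Normalized box kernels stand in for the learned weights:

```python
import numpy as np
from scipy.ndimage import convolve1d

def strip_conv(feat, k=11):
    """Two orthogonal strip convolutions over a 2-D feature map."""
    kern = np.ones(k) / k
    out = convolve1d(feat, kern, axis=1, mode="nearest")  # 1 x k horizontal strip
    out = convolve1d(out, kern, axis=0, mode="nearest")   # k x 1 vertical strip
    return out
```

The elongated 1 x k and k x 1 supports are what make strip kernels a better match for high-aspect-ratio objects than square large-kernel convolutions.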
|
2501.03782 | Vision Transformer Neural Architecture Search for Out-of-Distribution
Generalization: Benchmark and Insights | cs.LG | While Vision Transformers (ViTs) have achieved success across machine learning
tasks, deploying them in real-world scenarios faces a critical challenge:
generalizing under out-of-distribution (OoD) shifts.
A crucial research gap exists in understanding how to design ViT architectures,
both manually and automatically, for better OoD generalization. To this end, we
introduce OoD-ViT-NAS, the first systematic benchmark for ViT NAS focused on
OoD generalization. This benchmark includes 3000 ViT architectures of varying
computational budgets evaluated on 8 common OoD datasets. Using this benchmark,
we analyze factors contributing to OoD generalization. Our findings reveal key
insights. First, ViT architecture designs significantly affect OoD
generalization. Second, ID accuracy is often a poor indicator of OoD accuracy,
highlighting the risk of optimizing ViT architectures solely for ID
performance. Third, we perform the first study of NAS for ViTs' OoD robustness,
analyzing 9 Training-free NAS methods. We find that existing Training-free NAS
methods are largely ineffective in predicting OoD accuracy despite excelling at
ID accuracy. Simple proxies like Param or Flop surprisingly outperform complex
Training-free NAS methods in predicting OoD accuracy. Finally, we study how ViT
architectural attributes impact OoD generalization and discover that increasing
embedding dimensions generally enhances performance. Our benchmark shows that
ViT architectures exhibit a wide range of OoD accuracy, with up to 11.85%
improvement for some OoD shifts. This underscores the importance of studying
ViT architecture design for OoD. We believe OoD-ViT-NAS can catalyze further
research into how ViT designs influence OoD generalization.
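How well a proxy (parameter count, FLOPs, or a training-free score) predicts accuracy is typically measured by rank correlation across the architecture pool. A toy version with synthetic numbers (the relationship strengths are assumptions for illustration, not the benchmark's values) shows why a proxy can rank ID accuracy well yet rank OoD accuracy poorly:

```python
import numpy as np
from scipy.stats import kendalltau

rng = np.random.default_rng(0)
n = 50
params = rng.uniform(1, 100, size=n)              # stand-in parameter counts (M)
# Toy accuracies: ID tracks model size tightly, OoD only loosely.
id_acc = 0.5 * np.log(params) + 0.05 * rng.normal(size=n)
ood_acc = 0.2 * np.log(params) + 0.3 * rng.normal(size=n)

tau_id, _ = kendalltau(params, id_acc)            # high rank correlation
tau_ood, _ = kendalltau(params, ood_acc)          # much weaker
```

A large gap between the two Kendall taus is exactly the pattern the benchmark reports: proxies that excel at ranking ID accuracy can be largely uninformative about OoD accuracy.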
|
2501.03783 | How to Select Pre-Trained Code Models for Reuse? A Learning Perspective | cs.SE cs.CL | Pre-training a language model and then fine-tuning it has shown to be an
efficient and effective technique for a wide range of code intelligence tasks,
such as code generation, code summarization, and vulnerability detection.
However, pretraining language models on a large-scale code corpus is
computationally expensive. Fortunately, many off-the-shelf Pre-trained Code
Models (PCMs), such as CodeBERT, CodeT5, CodeGen, and Code Llama, have been
released publicly. These models acquire general code understanding and
generation capability during pretraining, which enhances their performance on
downstream code intelligence tasks. With an increasing number of these public
pre-trained models, selecting the most suitable one to reuse for a specific
task is essential. In this paper, we systematically investigate the reusability
of PCMs. We first explore three intuitive model selection methods that select
by size, training data, or brute-force fine-tuning. Experimental results show
that these straightforward techniques either perform poorly or suffer high
costs. Motivated by these findings, we explore learning-based model selection
strategies that utilize pre-trained models without altering their parameters.
Specifically, we train proxy models to gauge the performance of pre-trained
models, and measure the distribution deviation between a model's latent
features and the task's labels, using their closeness as an indicator of model
transferability. We conduct experiments on 100 widely-used open-source PCMs for
code intelligence tasks, with sizes ranging from 42.5 million to 3 billion
parameters. The results demonstrate that learning-based selection methods
reduce selection time to 100 seconds, compared to 2,700 hours with brute-force
fine-tuning, with less than 6% performance degradation across related tasks.
|
2501.03786 | KAnoCLIP: Zero-Shot Anomaly Detection through Knowledge-Driven Prompt
Learning and Enhanced Cross-Modal Integration | cs.CV | Zero-shot anomaly detection (ZSAD) identifies anomalies without needing
training samples from the target dataset, essential for scenarios with privacy
concerns or limited data. Vision-language models like CLIP show potential in
ZSAD but have limitations: relying on manually crafted fixed textual
descriptions or anomaly prompts is time-consuming and prone to semantic
ambiguity, and CLIP struggles with pixel-level anomaly segmentation, focusing
more on global semantics than local details. To address these limitations, we
introduce KAnoCLIP, a novel ZSAD framework that leverages vision-language
models. KAnoCLIP combines general knowledge from a Large Language Model
(GPT-3.5) and fine-grained, image-specific knowledge from a Visual Question
Answering system (Llama3) via Knowledge-Driven Prompt Learning (KnPL). KnPL
uses a knowledge-driven (KD) loss function to create learnable anomaly prompts,
removing the need for fixed text prompts and enhancing generalization. KAnoCLIP
includes the CLIP visual encoder with V-V attention (CLIP-VV), Bi-Directional
Cross-Attention for Multi-Level Cross-Modal Interaction (Bi-CMCI), and
Conv-Adapter. These components preserve local visual semantics, improve local
cross-modal fusion, and align global visual features with textual information,
enhancing pixel-level anomaly detection. KAnoCLIP achieves state-of-the-art
performance in ZSAD across 12 industrial and medical datasets, demonstrating
superior generalization compared to existing methods.
|
2501.03795 | Self-Adaptive ERP: Embedding NLP into Petri-Net creation and Model
Matching | cs.SE cs.AI | Enterprise Resource Planning (ERP) consultants play a vital role in
customizing systems to meet specific business needs by processing large amounts
of data and adapting functionalities. However, the process is
resource-intensive, time-consuming, and requires continuous adjustments as
business demands evolve. This research introduces a Self-Adaptive ERP Framework
that automates customization using enterprise process models and system usage
analysis. It leverages Artificial Intelligence (AI) & Natural Language
Processing (NLP) for Petri nets to transform business processes into adaptable
models, addressing both structural and functional matching. The framework,
built using Design Science Research (DSR) and a Systematic Literature Review
(SLR), reduces reliance on manual adjustments, improving ERP customization
efficiency and accuracy while minimizing the need for consultants.
|
2501.03799 | Some properties and applications of the new quantum $f$-divergences | quant-ph cs.IT math.IT | Recently, a new definition for quantum $f$-divergences was introduced based
on an integral representation. These divergences have shown remarkable
properties, for example when investigating contraction coefficients under noisy
channels. At the same time, many properties well known for other definitions
have remained elusive for the new quantum $f$-divergence because of its unusual
representation. In this work, we investigate alternative ways of expressing
these quantum $f$-divergences. We leverage these expressions to prove new
properties of these $f$-divergences and demonstrate some applications. In
particular, we give a new proof of the achievability of the quantum Chernoff
bound by establishing a strengthening of an inequality by Audenaert et al. We
also establish inequalities between some previously known R\'enyi divergences and
the new R\'enyi divergence. We further investigate some monotonicity and
convexity properties of the new $f$-divergences, and prove inequalities between
these divergences for various functions.
|
2501.03800 | MADation: Face Morphing Attack Detection with Foundation Models | cs.CV cs.CR | Despite the considerable performance improvements of face recognition
algorithms in recent years, the same scientific advances responsible for this
progress can also be used to create efficient ways to attack them, posing a
threat to their secure deployment. Morphing attack detection (MAD) systems aim
to detect a specific type of threat, morphing attacks, at an early stage,
preventing them from being considered for verification in critical processes.
Foundation models (FM) learn from extensive amounts of unlabelled data,
achieving remarkable zero-shot generalization to unseen domains. Although this
generalization capacity might be weak when dealing with domain-specific
downstream tasks such as MAD, FMs can easily adapt to these settings while
retaining the built-in knowledge acquired during pre-training. In this work, we
recognize the potential of FMs to perform well in the MAD task when properly
adapted to its specificities. To this end, we adapt FM CLIP architectures with
LoRA weights while simultaneously training a classification header. The
proposed framework, MADation, surpasses our alternative FM and transformer-based
frameworks and constitutes the first adaptation of FMs to the MAD task. MADation
presents competitive results with current MAD solutions in the literature and
even surpasses them in several evaluation scenarios. To encourage
reproducibility and facilitate further research in MAD, we publicly release the
implementation of MADation at https://github.com/gurayozgur/MADation
|
2501.03802 | Quasi-optimal cyclic orbit codes | cs.IT math.CO math.IT | We focus on two aspects of cyclic orbit codes: invariants under equivalence
and quasi-optimality. Regarding the first aspect, we establish a connection
between the codewords of a cyclic orbit code and a certain linear set on the
projective line. This allows us to derive new bounds on the parameters of the
code. In the second part, we study a particular family of (quasi-)optimal
cyclic orbit codes and derive a general existence theorem for quasi-optimal
codes in even-dimensional vector spaces over finite fields of any
characteristic. Finally, for our particular code family we describe the
automorphism groups under the general linear group and a suitable Galois group.
|
2501.03805 | Detecting the Undetectable: Assessing the Efficacy of Current Spoof
Detection Methods Against Seamless Speech Edits | cs.SD cs.CL eess.AS | Neural speech editing advancements have raised concerns about their misuse in
spoofing attacks. Traditional partially edited speech corpora primarily focus
on cut-and-paste edits, which, while maintaining speaker consistency, often
introduce detectable discontinuities. Recent methods, like
A\textsuperscript{3}T and Voicebox, improve transitions by leveraging
contextual information. To foster spoofing detection research, we introduce the
Speech INfilling Edit (SINE) dataset, created with Voicebox. We detail the
process of re-implementing Voicebox training and dataset creation. Subjective
evaluations confirm that speech edited using this novel technique is more
challenging to detect than conventional cut-and-paste methods. Despite human
difficulty, experimental results demonstrate that self-supervised-based
detectors can achieve remarkable performance in detection, localization, and
generalization across different edit methods. The dataset and related models
will be made publicly available.
|
2501.03811 | Extending ChatGPT with a Browserless System for Web Product Price
Extraction | cs.IR | With the advent of ChatGPT, we can find very clean, precise answers to a
wide variety of questions. However, for questions such as 'find the price of
the lemon cake at zingerman's', the answer looks like 'I can't browse the web
right now'. In this paper, we propose a system, called Wextractor, which
extends ChatGPT to answer questions as the one mentioned before. Obviously, our
system cannot be labeled as `artificial intelligence'. Simply, it offers to
cover a kind of transactional search that is not included in the current
version of ChatGPT. Moreover, Wextractor includes two improvements with respect
to the initial version: social extraction and pointing pattern extraction to
improve the answer speed.
|
2501.03819 | An innovative mixed reality approach for Robotics Surgery | cs.RO | Robotic-assisted procedures offer numerous advantages over traditional
approaches, including improved dexterity, reduced fatigue, minimized trauma,
and superior outcomes. However, the main challenge of these systems remains the
poor visualization and perception of the surgical field. The goal of this paper
is to provide an innovative approach concerning an application able to improve
the surgical procedures offering assistance in both preplanning and
intraoperative steps of the surgery. The system has been designed to offer a
better understanding of the patient through techniques that provide medical
images visualization, 3D anatomical structures perception and robotic planning.
The application was designed to be intuitive and user-friendly, providing an
augmented reality experience through the Hololens 2 device. It was tested in
laboratory conditions, yielding positive results.
|
2501.03821 | The Choice of Normalization Influences Shrinkage in Regularized
Regression | stat.ML cs.LG stat.ME | Regularized models are often sensitive to the scales of the features in the
data and it has therefore become standard practice to normalize (center and
scale) the features before fitting the model. But there are many different ways
to normalize the features and the choice may have dramatic effects on the
resulting model. In spite of this, there has so far been no research on this
topic. In this paper, we begin to bridge this knowledge gap by studying
normalization in the context of lasso, ridge, and elastic net regression. We
focus on normal and binary features and show that the class balances of binary
features directly influence the regression coefficients and that this effect
depends on the combination of normalization and regularization methods used. We
demonstrate that this effect can be mitigated by scaling binary features with
their variance in the case of the lasso and standard deviation in the case of
ridge regression, but that this comes at the cost of increased variance. For
the elastic net, we show that scaling the penalty weights, rather than the
features, can achieve the same effect. Finally, we also tackle mixes of binary
and normal features as well as interactions and provide some initial results on
how to normalize features in these cases.
|
2501.03824 | Online Reinforcement Learning-Based Dynamic Adaptive Evaluation Function
for Real-Time Strategy Tasks | cs.AI | Effective evaluation of real-time strategy tasks requires adaptive mechanisms
to cope with dynamic and unpredictable environments. This study proposes a
method to improve evaluation functions for real-time responsiveness to
battlefield situation changes, utilizing an online reinforcement
learning-based dynamic weight adjustment mechanism within the real-time
strategy game. Building on traditional static evaluation functions, the method
employs gradient descent in online reinforcement learning to update weights
dynamically, incorporating weight decay techniques to ensure stability.
Additionally, the AdamW optimizer is integrated to adjust the learning rate and
decay rate of online reinforcement learning in real time, further reducing the
dependency on manual parameter tuning. Round-robin competition experiments
demonstrate that this method significantly enhances the application
effectiveness of the Lanchester combat model evaluation function, Simple
evaluation function, and Simple Sqrt evaluation function in planning algorithms
including IDABCD, IDRTMinimax, and Portfolio AI. The method achieves a notable
improvement in scores, with the enhancement becoming more pronounced as the
map size increases. Furthermore, the increase in evaluation function
computation time induced by this method is kept below 6% for all evaluation
functions and planning algorithms. The proposed dynamic adaptive evaluation
function demonstrates a promising approach for real-time strategy task
evaluation.
|
2501.03825 | Deep Sylvester Posterior Inference for Adaptive Compressed Sensing in
Ultrasound Imaging | eess.IV cs.AI cs.CV | Ultrasound images are commonly formed by sequential acquisition of
beam-steered scan-lines. Minimizing the number of required scan-lines can
significantly enhance frame rate, field of view, energy efficiency, and data
transfer speeds. Existing approaches typically use static subsampling schemes
in combination with sparsity-based or, more recently, deep-learning-based
recovery. In this work, we introduce an adaptive subsampling method that
maximizes intrinsic information gain in-situ, employing a Sylvester Normalizing
Flow encoder to infer an approximate Bayesian posterior under partial
observation in real-time. Using the Bayesian posterior and a deep generative
model for future observations, we determine the subsampling scheme that
maximizes the mutual information between the subsampled observations and the
next frame of the video. We evaluate our approach using the EchoNet cardiac
ultrasound video dataset and demonstrate that our active sampling method
outperforms competitive baselines, including uniform and variable-density
random sampling, as well as equidistantly spaced scan-lines, improving mean
absolute reconstruction error by 15%. Moreover, posterior inference and the
sampling scheme generation are performed in just 0.015 seconds (66Hz), making
it fast enough for real-time 2D ultrasound imaging applications.
|
2501.03826 | Investigating the Impact of Data Selection Strategies on Language Model
Performance | cs.CL cs.LG | Data selection is critical for enhancing the performance of language models,
particularly when aligning training datasets with a desired target
distribution. This study explores the effects of different data selection
methods and feature types on model performance. We evaluate whether selecting
data subsets can influence downstream tasks, whether n-gram features improve
alignment with target distributions, and whether embedding-based neural
features provide complementary benefits. Through comparative experiments using
baseline random selection methods and distribution-aligned approaches, we
provide insights into the interplay between data selection strategies and model
training efficacy. All code for this study can be found on
\href{https://github.com/jgu13/HIR-Hybrid-Importance-Resampling-for-Language-Models}{github
repository}.
|