| id | title | categories | abstract |
|---|---|---|---|
2501.10642
|
Iterative Tree Analysis for Medical Critics
|
cs.CL
|
Large Language Models (LLMs) have been widely adopted across various domains,
yet their application in the medical field poses unique challenges,
particularly concerning the generation of hallucinations. Hallucinations in
open-ended long medical text manifest as misleading critical claims, which are
difficult to verify for two reasons. First, critical claims are often deeply
entangled within the text and cannot be extracted based solely on surface-level
presentation. Second, verifying these claims is challenging because
surface-level token-based retrieval often lacks precise or specific evidence,
leaving the claims unverifiable without deeper mechanism-based analysis. In
this paper, we introduce a novel method termed Iterative Tree Analysis (ITA)
for medical critics. ITA is designed to extract implicit claims from long
medical texts and verify each claim through an iterative and adaptive tree-like
reasoning process. This process involves a combination of top-down task
decomposition and bottom-up evidence consolidation, enabling precise
verification of complex medical claims through detailed mechanism-level
reasoning. Our extensive experiments demonstrate that ITA significantly
outperforms previous methods in detecting factual inaccuracies in complex
medical text verification tasks by 10%. Additionally, we will release a
comprehensive test set to the public, aiming to foster further advancements in
research within this domain.
|
2501.10644
|
UAV-Assisted Multi-Task Federated Learning with Task Knowledge Sharing
|
cs.LG cs.MA
|
The rapid development of unmanned aerial vehicle (UAV) technology has
spawned a wide variety of applications, such as emergency communications,
regional surveillance, and disaster relief. Due to their limited battery
capacity and processing power, multiple UAVs are often required for complex
tasks. In such cases, a control center is crucial for coordinating their
activities, which fits well with the federated learning (FL) framework.
However, conventional FL approaches often focus on a single task, ignoring the
potential of training multiple related tasks simultaneously. In this paper, we
propose a UAV-assisted multi-task federated learning scheme, in which data
collected by multiple UAVs can be used to train multiple related tasks
concurrently. The scheme facilitates the training process by sharing feature
extractors across related tasks and introduces a task attention mechanism to
balance task performance and encourage knowledge sharing. To provide an
analytical description of training performance, the convergence analysis of the
proposed scheme is performed. Additionally, the optimal bandwidth allocation
for UAVs under limited bandwidth conditions is derived to minimize
communication time. Meanwhile, a UAV-EV association strategy based on a
coalition formation game is proposed. Simulation results validate the
effectiveness of
the proposed scheme in enhancing multi-task performance and training speed.
|
2501.10645
|
Constrained Coding for Composite DNA: Channel Capacity and Efficient
Constructions
|
cs.IT math.IT
|
Composite DNA is a recent novel method to increase the information capacity
of DNA-based data storage above the theoretical limit of 2 bits/symbol. In this
method, each composite symbol stores not a single DNA nucleotide but a
mixture of the four nucleotides in a predetermined ratio. By using different
mixtures and ratios, the alphabet can be extended to have much more than four
symbols in the naive approach. While this method enables higher data content
per synthesis cycle, potentially reducing the DNA synthesis cost, it also
imposes significant challenges for accurate DNA sequencing since the base-level
errors can easily change the mixture of bases and their ratio, resulting in
changes to the composite symbols. With this motivation, we propose efficient
constrained coding techniques to enforce the biological constraints, including
the runlength-limited constraint and the GC-content constraint, into every DNA
synthesized oligo, regardless of the mixture of bases in each composite letter
and their corresponding ratio. Our goals include computing the capacity of the
constrained channel, constructing efficient encoders/decoders, and providing
the best options for the composite letters to obtain capacity-approaching
codes. For certain code parameters, our methods incur only one redundant
symbol.
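The two biological constraints named above are easy to state concretely. Below is a minimal Python sketch of a constraint checker for a synthesized oligo; the maximum run length of 3 and the 40-60% GC window are illustrative parameters, not values taken from the paper.

```python
def max_runlength(oligo: str) -> int:
    """Length of the longest homopolymer run in a DNA string."""
    longest, run = 1, 1
    for prev, cur in zip(oligo, oligo[1:]):
        run = run + 1 if cur == prev else 1
        longest = max(longest, run)
    return longest

def gc_fraction(oligo: str) -> float:
    """Fraction of G/C bases in a DNA string."""
    return sum(base in "GC" for base in oligo) / len(oligo)

def satisfies_constraints(oligo: str, max_run: int = 3,
                          gc_low: float = 0.4, gc_high: float = 0.6) -> bool:
    """Check the runlength-limited and GC-content constraints together."""
    return (max_runlength(oligo) <= max_run
            and gc_low <= gc_fraction(oligo) <= gc_high)
```

A constrained encoder's job is then to map arbitrary data onto strings for which `satisfies_constraints` holds, with as little redundancy as possible.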
|
2501.10648
|
DNA 1.0 Technical Report
|
cs.CL
|
In this report, we present DNA 1.0 8B Instruct, a state-of-the-art bilingual
language model optimized for Korean and English language tasks. By applying
continual pre-training (CPT) with high-quality Korean datasets to Llama 3.1 8B
and subsequent supervised fine-tuning (SFT), we create an instruction-following
model with enhanced Korean language capabilities. This model is then merged
with Llama 3.1 8B Instruct via spherical linear interpolation (SLERP) and
undergoes further optimization through direct preference optimization (DPO) and
knowledge distillation (KD). DNA 1.0 8B Instruct achieves state-of-the-art
results on Korean-specific tasks, including KMMLU (53.26%), KoBEST (83.40%),
and BELEBELE (57.99%), while maintaining strong English capabilities on MMLU
(66.64%), MMLU-Pro (43.05%), and GSM8K (80.52%). DNA 1.0 8B Instruct
represents a significant advancement in bilingual language modeling. As an open
model, it is freely available through
https://huggingface.co/dnotitia/Llama-DNA-1.0-8B-Instruct . For commercial
licensing inquiries or feedback, please contact us at
https://www.dnotitia.com/contact/post-form
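SLERP merging, as used above, interpolates along the great circle between two weight vectors rather than along the straight line between them. A minimal sketch, under the assumption that checkpoints are merged tensor by tensor (the function name and the flatten-then-reshape treatment are illustrative, not the authors' implementation):

```python
import numpy as np

def slerp(w_a: np.ndarray, w_b: np.ndarray, t: float,
          eps: float = 1e-8) -> np.ndarray:
    """Spherical linear interpolation between two weight tensors.

    Treats each tensor as one flat vector; falls back to plain linear
    interpolation when the two vectors are nearly colinear."""
    a, b = w_a.ravel(), w_b.ravel()
    cos_theta = np.clip(np.dot(a / np.linalg.norm(a),
                               b / np.linalg.norm(b)), -1.0, 1.0)
    theta = np.arccos(cos_theta)           # angle between the two vectors
    if theta < eps:                        # nearly parallel: LERP is safer
        merged = (1 - t) * a + t * b
    else:
        s = np.sin(theta)
        merged = (np.sin((1 - t) * theta) / s) * a + (np.sin(t * theta) / s) * b
    return merged.reshape(w_a.shape)
```

In a real merge this would be applied to each pair of matching parameter tensors of the two checkpoints, with `t` controlling how far the result leans toward the second model.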
|
2501.10651
|
MOFA: Discovering Materials for Carbon Capture with a GenAI- and
Simulation-Based Workflow
|
cs.DC cond-mat.mtrl-sci cs.LG
|
We present MOFA, an open-source generative AI (GenAI) plus simulation
workflow for high-throughput generation of metal-organic frameworks (MOFs) on
large-scale high-performance computing (HPC) systems. MOFA addresses key
challenges in integrating GPU-accelerated computing for GPU-intensive GenAI
tasks, including distributed training and inference, alongside CPU- and
GPU-optimized tasks for screening and filtering AI-generated MOFs using
molecular dynamics, density functional theory, and Monte Carlo simulations.
These heterogeneous tasks are unified within an online learning framework that
optimizes the utilization of available CPU and GPU resources across HPC
systems. Performance metrics from a 450-node (14,400 AMD Zen 3 CPUs + 1800
NVIDIA A100 GPUs) supercomputer run demonstrate that MOFA achieves
high-throughput generation of novel MOF structures, with CO$_2$ adsorption
capacities ranking among the top 10 in the hypothetical MOF (hMOF) dataset.
Furthermore, the production of high-quality MOFs exhibits a linear relationship
with the number of nodes utilized. The modular architecture of MOFA will
facilitate its integration into other scientific applications that dynamically
combine GenAI with large-scale simulations.
|
2501.10658
|
LUT-DLA: Lookup Table as Efficient Extreme Low-Bit Deep Learning
Accelerator
|
cs.AR cs.AI cs.LG
|
The emergence of neural network capabilities invariably leads to a
significant surge in computational demands due to expanding model sizes and
increased computational complexity. To reduce model size and lower inference
costs, recent research has focused on simplifying models and designing hardware
accelerators using low-bit quantization. However, due to numerical
representation limits, scalar quantization cannot reduce the bit width below
1 bit, diminishing its benefits. To break through these limitations, we
introduce LUT-DLA, a Look-Up Table (LUT) Deep Learning Accelerator Framework
that utilizes vector quantization to convert neural network models into LUTs,
achieving extreme low-bit quantization. The LUT-DLA framework facilitates
efficient and cost-effective hardware accelerator designs and supports the
LUTBoost algorithm, which helps to transform various DNN models into LUT-based
models via multistage training, drastically cutting both computational and
hardware overhead. Additionally, through co-design space exploration, LUT-DLA
assesses the impact of various model and hardware parameters to fine-tune
hardware configurations for different application scenarios, optimizing
performance and efficiency. Our comprehensive experiments show that LUT-DLA
achieves improvements in power efficiency and area efficiency with gains of
$1.4$~$7.0\times$ and $1.5$~$146.1\times$, respectively, while maintaining only
a modest accuracy drop. For CNNs, accuracy decreases by $0.1\%$~$3.1\%$ using
the $L_2$ distance similarity, $0.1\%$~$3.4\%$ with the $L_1$ distance
similarity, and $0.1\%$~$3.8\%$ when employing the Chebyshev distance
similarity. For transformer-based models, the accuracy drop ranges from $1.4\%$
to $3.0\%$.
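The core idea of converting a model into LUTs via vector quantization can be illustrated in a few lines: weight rows are split into short sub-vectors, each sub-vector is replaced by the index of its nearest codeword (here under the $L_2$ distance, one of the three similarity measures evaluated above), and a matrix-vector product then reduces to building one small table of codeword-activation products and summing looked-up entries. The chunk size, codebook, and function names below are illustrative, not LUT-DLA's actual design.

```python
import numpy as np

def quantize(weights: np.ndarray, codebook: np.ndarray) -> np.ndarray:
    """Replace each length-`dim` weight sub-vector with the index of its
    nearest codeword under the L2 distance."""
    dim = codebook.shape[1]
    vecs = weights.reshape(-1, dim)
    dists = np.linalg.norm(vecs[:, None, :] - codebook[None, :, :], axis=-1)
    return dists.argmin(axis=1)

def lut_matvec(assign: np.ndarray, codebook: np.ndarray,
               x: np.ndarray) -> np.ndarray:
    """Approximate W @ x using one tiny GEMM plus table lookups.

    lut[k, c] holds codeword k dotted with the c-th chunk of x; every
    weight row then only looks up and sums its chunks' entries."""
    dim = codebook.shape[1]
    chunks = x.reshape(-1, dim)            # split the activation into chunks
    n_chunks = chunks.shape[0]
    lut = codebook @ chunks.T              # (n_codes, n_chunks) lookup table
    idx = assign.reshape(-1, n_chunks)     # one code index per (row, chunk)
    return lut[idx, np.arange(n_chunks)].sum(axis=1)
```

When the codebook covers the sub-vectors exactly, the lookup product reproduces `W @ x`; in general the gap between the two is the quantization error that LUTBoost-style training would be driving down.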
|
2501.10661
|
Unveiling the Mystery of Weight in Large Foundation Models: Gaussian
Distribution Never Fades
|
cs.LG cs.AI cs.CL
|
This paper presents a pioneering exploration of the mechanisms underlying
large foundation models' (LFMs) weights, aiming to simplify AI research.
Through extensive observation and analysis on prevailing LFMs, we find that
regardless of initialization strategies, their weights predominantly follow a
Gaussian distribution, with occasional sharp, inverted T-shaped, or linear
patterns. We further discover that the weights share the i.i.d. properties of
Gaussian noise, and explore their direct relationship. We find that
transformation weights can be derived from Gaussian noise, and they primarily
serve to increase the standard deviation of pre-trained weights, with their
standard deviation growing with layer depth. In other words, transformation
weights broaden the acceptable deviation from the optimal weights, facilitating
adaptation to downstream tasks. Building upon the above conclusions, we
thoroughly discussed the nature of optimal weights, ultimately concluding that
they should exhibit zero-mean, symmetry, and sparsity, with the sparse values
being a truncated Gaussian distribution and a few outliers. Our experiments in
LFM adaptation and editing demonstrate the effectiveness of these insights. We
hope these findings can provide a foundational understanding to pave the way
for future advancements in the LFM community.
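A quick way to probe the paper's central observation, that weights predominantly follow a Gaussian distribution, is to check the third and fourth standardized moments of a weight tensor: a Gaussian has zero skewness and zero excess kurtosis. The sketch below runs the check on synthetic Gaussian "weights", since no real LFM checkpoint is part of this abstract.

```python
import numpy as np

# Synthetic stand-in for a weight tensor: if weights are Gaussian, their
# skewness should be ~0 and their excess kurtosis ~0.
rng = np.random.default_rng(0)
w = rng.normal(loc=0.0, scale=0.02, size=1_000_000)

centered = w - w.mean()
std = w.std()
skew = (centered ** 3).mean() / std ** 3
excess_kurt = (centered ** 4).mean() / std ** 4 - 3.0
print(f"skewness={skew:.4f}, excess kurtosis={excess_kurt:.4f}")
```

Running the same two moments over the parameter tensors of a downloaded checkpoint is a cheap first test of the claimed Gaussianity, though it would not detect the sharp, inverted T-shaped, or linear patterns the authors also report.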
|
2501.10663
|
PB-NBV: Efficient Projection-Based Next-Best-View Planning Framework for
Reconstruction of Unknown Objects
|
cs.RO
|
Completely capturing the three-dimensional (3D) data of an object is
essential in industrial and robotic applications. The task of next-best-view
(NBV) planning is to calculate the next optimal viewpoint based on the current
data, gradually achieving a complete 3D reconstruction of the object. However,
many existing NBV planning algorithms incur heavy computational costs due to
the extensive use of ray-casting. To address this, we propose PB-NBV, an
efficient projection-based NBV planning framework. Specifically, this
framework refits different types of voxel clusters into ellipsoids based on
the voxel structure. Then, the
next optimal viewpoint is selected from the candidate views using a
projection-based viewpoint quality evaluation function in conjunction with a
global partitioning strategy. This process replaces extensive ray-casting,
significantly improving the computational efficiency. Comparison experiments in
the simulation environment show that our framework achieves the highest point
cloud coverage with low computational time compared to other frameworks. The
real-world experiments also confirm the efficiency and feasibility of the
framework. Our method will be made open source to benefit the community.
|
2501.10666
|
Speech Emotion Detection Based on MFCC and CNN-LSTM Architecture
|
cs.SD cs.LG eess.AS
|
Emotion detection techniques have mainly been applied to two kinds of
features: facial images and vocal audio. The latter remains challenging, not
only because of the complexity of speech audio processing but also because of
the difficulty of extracting appropriate features. Part of the SAVEE and
RAVDESS datasets are selected and combined as the dataset, containing seven
sorts of common emotions (i.e. happy, neutral, sad, anger, disgust, fear, and
surprise) and thousands of samples. Based on the Librosa package, this paper
processes the initial audio input into waveplot and spectrum for analysis and
concentrates on multiple features including MFCC as targets for feature
extraction. The hybrid CNN-LSTM architecture is adopted by virtue of its strong
capability to deal with sequential data and time series, which mainly consists
of four convolutional layers and three long short-term memory layers. As a
result, the architecture achieved an accuracy of 61.07% comprehensively for the
test set, among which the detection of anger and neutral reaches a performance
of 75.31% and 71.70% respectively. It can also be concluded that
classification accuracy depends to some extent on the properties of the
emotion: common, distinctly featured emotions are less likely to be
misclassified into other categories. Emotions such as surprise, whose meaning
depends on the specific context, are more likely to be confused with positive
or negative emotions, and negative emotions may also be mixed up with one
another.
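MFCC extraction, the central feature above, follows a fixed pipeline: frame the signal, take the power spectrum, pool it through a triangular mel filterbank, take logs, and decorrelate with a DCT-II. With the Librosa package the paper uses, this reduces to one call (`librosa.feature.mfcc`); the NumPy sketch below spells the steps out, with all frame sizes and filter counts chosen purely for illustration.

```python
import numpy as np

def mfcc(signal, sr=16000, n_fft=512, hop=256, n_mels=26, n_mfcc=13):
    """Minimal MFCC: framing -> power spectrum -> mel filterbank -> log -> DCT-II."""
    # frame the signal with a Hann window
    n_frames = 1 + (len(signal) - n_fft) // hop
    frames = np.stack([signal[i * hop: i * hop + n_fft] for i in range(n_frames)])
    power = np.abs(np.fft.rfft(frames * np.hanning(n_fft), axis=1)) ** 2
    # triangular mel filterbank between 0 Hz and Nyquist
    mel = lambda f: 2595.0 * np.log10(1.0 + f / 700.0)
    inv_mel = lambda m: 700.0 * (10.0 ** (m / 2595.0) - 1.0)
    mel_pts = inv_mel(np.linspace(mel(0), mel(sr / 2), n_mels + 2))
    bins = np.floor((n_fft + 1) * mel_pts / sr).astype(int)
    fbank = np.zeros((n_mels, n_fft // 2 + 1))
    for m in range(1, n_mels + 1):
        l, c, r = bins[m - 1], bins[m], bins[m + 1]
        fbank[m - 1, l:c] = (np.arange(l, c) - l) / max(c - l, 1)
        fbank[m - 1, c:r] = (r - np.arange(c, r)) / max(r - c, 1)
    log_energy = np.log(power @ fbank.T + 1e-10)
    # DCT-II decorrelates filterbank energies into cepstral coefficients
    n = np.arange(n_mels)
    dct = np.cos(np.pi * np.outer(np.arange(n_mfcc), (2 * n + 1) / (2 * n_mels)))
    return log_energy @ dct.T
```

The resulting (frames x coefficients) matrix is exactly the kind of sequential input the paper's convolutional and LSTM layers then consume.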
|
2501.10667
|
Precision Adaptive Imputation Network: A Unified Technique for Mixed
Datasets
|
cs.LG stat.ML
|
The challenge of missing data remains a significant obstacle across various
scientific domains, necessitating the development of advanced imputation
techniques that can effectively address complex missingness patterns. This
study introduces the Precision Adaptive Imputation Network (PAIN), a novel
algorithm designed to enhance data reconstruction by dynamically adapting to
diverse data types, distributions, and missingness mechanisms. PAIN employs a
tri-step process that integrates statistical methods, random forests, and
autoencoders, ensuring balanced accuracy and efficiency in imputation. Through
rigorous evaluation across multiple datasets, including those characterized by
high-dimensional and correlated features, PAIN consistently outperforms
traditional imputation methods, such as mean and median imputation, as well as
other advanced techniques like MissForest. The findings highlight PAIN's
superior ability to preserve data distributions and maintain analytical
integrity, particularly in complex scenarios where missingness is not
completely at random. This research not only contributes to a deeper
understanding of missing data reconstruction but also provides a critical
framework for future methodological innovations in data science and machine
learning, paving the way for more effective handling of mixed-type datasets in
real-world applications.
|
2501.10668
|
MappedTrace: Tracing Pointer Remotely with Compiler-generated Maps
|
cs.PL cs.CL
|
Existing precise pointer tracing methods introduce substantial runtime
overhead to the program being traced and are applicable only at specific
program execution points. We propose MappedTrace that leverages
compiler-generated read-only maps to accurately identify all pointers in any
given snapshot of a program's execution state. The maps record the locations
and types of pointers, allowing the tracer to precisely identify pointers
without requiring the traced program to maintain bookkeeping data structures or
poll at safe points, thereby reducing runtime overhead. By running the tracer
from a different address space or machine, MappedTrace presents new
opportunities to improve memory management techniques like memory leak
detection and enables novel use cases such as infinite memory abstraction for
resource-constrained environments.
|
2501.10670
|
Computing Capacity-Cost Functions for Continuous Channels in Wasserstein
Space
|
cs.IT eess.SP math.IT math.OC
|
This paper investigates the problem of computing capacity-cost (C-C)
functions for continuous channels. Motivated by the Kullback-Leibler divergence
(KLD) proximal reformulation of the classical Blahut-Arimoto (BA) algorithm,
the Wasserstein distance is introduced to the proximal term for the continuous
case, resulting in an iterative algorithm related to the Wasserstein gradient
descent. Practical implementation involves moving particles along the negative
gradient direction of the objective function's first variation in the
Wasserstein space and approximating integrals by the importance sampling (IS)
technique. Such formulation is also applied to the rate-distortion (R-D)
function for continuous source spaces and thus provides a unified computation
framework for both problems.
|
2501.10672
|
Homotopical Entropy
|
math.CT cs.IT math-ph math.IT math.MP
|
We present a "homotopification" of fundamental concepts from information
theory. Using homotopy type theory, we define homotopy types that behave
analogously to probability spaces, random variables, and the exponentials of
Shannon entropy and relative entropy. The original analytic theories emerge
through homotopy cardinality, which maps homotopy types to real numbers and
generalizes the cardinality of sets.
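For reference, the classical quantities being "homotopified" here are Shannon entropy and its exponential, the latter being what the paper's homotopy types are built to recover through homotopy cardinality:

```latex
H(p) = -\sum_i p_i \log p_i,
\qquad
e^{H(p)} = \prod_i p_i^{-p_i}.
```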
|
2501.10673
|
Hybrid-Quantum Neural Architecture Search for The Proximal Policy
Optimization Algorithm
|
quant-ph cs.LG cs.NE
|
Recent studies in quantum machine learning have advocated hybrid models to
work around the limitations of currently existing Noisy Intermediate Scale
Quantum (NISQ) devices, but most of them lack explanations and interpretations
of the choices made to pick those exact architectures, and of what
differentiates good hybrid architectures from bad ones. This research attempts
to fill that gap in the literature by using the Regularized Evolution
algorithm to search for the optimal hybrid classical-quantum architecture for
Proximal Policy Optimization (PPO), a well-known reinforcement learning
algorithm. Ultimately, the classical models dominated the leaderboard, with
the best hybrid model coming in eleventh place among all unique models. We
also attempt to explain the factors that contributed to these results, and why
some models behave better than others, in the hope of building better
intuition about good practices for designing an efficient hybrid architecture.
|
2501.10674
|
Can Multimodal LLMs do Visual Temporal Understanding and Reasoning? The
answer is No!
|
cs.CV cs.CL
|
Multimodal Large Language Models (MLLMs) have achieved significant
advancements in tasks like Visual Question Answering (VQA) by leveraging
foundational Large Language Models (LLMs). However, their abilities in specific
areas such as visual temporal understanding, which is crucial for comprehending
real-world dynamics, remain underexplored. To address this, we propose a
challenging evaluation benchmark named TemporalVQA, consisting of two parts: 1)
Temporal Order Understanding and 2) Time-lapse Estimation. The first part
requires MLLMs to determine the sequence of events by analyzing temporally
consecutive video frames. The second part presents image pairs with varying
time differences, framed as multiple-choice questions, asking MLLMs to estimate
the time-lapse between images with options ranging from seconds to years. Our
evaluations of advanced MLLMs, including models like GPT-4o and Gemini-1.5-Pro,
reveal significant challenges: GPT-4o achieved only 49.1% average consistent
accuracy on the temporal order task and 70% on time-lapse estimation, with
open-source models performing even worse. These findings underscore the
limitations of current MLLMs in visual temporal understanding and reasoning,
highlighting the need to further improve their temporal capabilities.
Our dataset can be found at
https://huggingface.co/datasets/fazliimam/temporal-vqa.
|
2501.10677
|
Class-Imbalanced-Aware Adaptive Dataset Distillation for Scalable
Pretrained Model on Credit Scoring
|
cs.LG cs.AI q-fin.RM
|
The advent of artificial intelligence has significantly enhanced credit
scoring technologies. Despite the remarkable efficacy of advanced deep learning
models, mainstream adoption continues to favor tree-structured models due to
their robust predictive performance on tabular data. Although pretrained models
have seen considerable development, their application within the financial
realm predominantly revolves around question-answering tasks and the use of
such models for tabular-structured credit scoring datasets remains largely
unexplored. Tabular-oriented large models, such as TabPFN, have made the
application of large models in credit scoring feasible, albeit only with
limited sample sizes. This paper provides a novel framework that combines a
tabular-tailored dataset distillation technique with the pretrained model,
enabling TabPFN to scale. Furthermore, although class imbalance is common in
financial datasets, its influence during dataset distillation has not been
explored. We thus integrate
imbalance-aware techniques during dataset distillation, resulting in improved
performance in financial datasets (e.g., a 2.5% enhancement in AUC). This study
presents a novel framework for scaling up the application of large pretrained
models on financial tabular datasets and offers a comparative analysis of the
influence of class imbalance on the dataset distillation process. We believe
this approach can broaden the applications and downstream tasks of large models
in the financial domain.
|
2501.10684
|
Deep Operator Networks for Bayesian Parameter Estimation in PDEs
|
cs.LG cs.CE stat.ML
|
We present a novel framework combining Deep Operator Networks (DeepONets)
with Physics-Informed Neural Networks (PINNs) to solve partial differential
equations (PDEs) and estimate their unknown parameters. By integrating
data-driven learning with physical constraints, our method achieves robust and
accurate solutions across diverse scenarios. Bayesian training is implemented
through variational inference, allowing for comprehensive uncertainty
quantification for both aleatoric and epistemic uncertainties. This ensures
reliable predictions and parameter estimates even in noisy conditions or when
some of the physical equations governing the problem are missing. The framework
demonstrates its efficacy in solving forward and inverse problems, including
the 1D unsteady heat equation and 2D reaction-diffusion equations, as well as
regression tasks with sparse, noisy observations. This approach provides a
computationally efficient and generalizable method for addressing uncertainty
quantification in PDE surrogate modeling.
|
2501.10685
|
Harnessing the Potential of Large Language Models in Modern Marketing
Management: Applications, Future Directions, and Strategic Recommendations
|
cs.CL
|
Large Language Models (LLMs) have revolutionized customer engagement,
campaign optimization, and content generation in marketing management. In this
paper, we explore the transformative potential of LLMs, along with current
applications, future directions, and strategic recommendations for marketers.
In particular, we focus on LLMs' major business drivers, such as
personalization, real-time interactive customer insights, and content
automation, and how they improve customer and business outcomes. We also cover
the ethical aspects of AI with respect to data privacy, transparency, and
mitigation of bias, with the goal of promoting responsible use of the
technology. Through best practices and the adoption of new technologies,
businesses can tap into the potential of LLMs, helping them grow and stay one
step ahead in the turmoil of digital marketing. This article is designed to
give marketers the necessary guidance, based on industry best practices, to
integrate these powerful LLMs into their marketing strategy and innovation
without compromising the ethos of their brand.
|
2501.10687
|
EMO2: End-Effector Guided Audio-Driven Avatar Video Generation
|
cs.CV
|
In this paper, we propose a novel audio-driven talking head method capable of
simultaneously generating highly expressive facial expressions and hand
gestures. Unlike existing methods that focus on generating full-body or
half-body poses, we investigate the challenges of co-speech gesture generation
and identify the weak correspondence between audio features and full-body
gestures as a key limitation. To address this, we redefine the task as a
two-stage process. In the first stage, we generate hand poses directly from
audio input, leveraging the strong correlation between audio signals and hand
movements. In the second stage, we employ a diffusion model to synthesize video
frames, incorporating the hand poses generated in the first stage to produce
realistic facial expressions and body movements. Our experimental results
demonstrate that the proposed method outperforms state-of-the-art approaches,
such as CyberHost and Vlogger, in terms of both visual quality and
synchronization accuracy. This work provides a new perspective on audio-driven
gesture generation and a robust framework for creating expressive and natural
talking head animations.
|
2501.10688
|
Neural Algorithmic Reasoning for Hypergraphs with Looped Transformers
|
cs.LG cs.AI cs.CC cs.CL
|
Looped Transformers have shown exceptional neural algorithmic reasoning
capability in simulating traditional graph algorithms, but their application to
more complex structures like hypergraphs remains underexplored. Hypergraphs
generalize graphs by modeling higher-order relationships among multiple
entities, enabling richer representations but introducing significant
computational challenges. In this work, we extend the Loop Transformer
architecture's neural algorithmic reasoning capability to simulate hypergraph
algorithms, addressing the gap between neural networks and combinatorial
optimization over hypergraphs. Specifically, we propose a novel degradation
mechanism for reducing hypergraphs to graph representations, enabling the
simulation of graph-based algorithms, such as Dijkstra's shortest path.
Furthermore, we introduce a hyperedge-aware encoding scheme to simulate
hypergraph-specific algorithms, exemplified by Helly's algorithm. We establish
theoretical guarantees for these simulations, demonstrating the feasibility of
processing high-dimensional and combinatorial data using Loop Transformers.
This work highlights the potential of Transformers as general-purpose
algorithmic solvers for structured data.
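The abstract does not spell out the degradation mechanism, so as one plausible instance, the sketch below uses clique expansion, a standard hypergraph-to-graph reduction, and then runs Dijkstra's shortest path on the result; the rule that pairwise edges inherit the lightest covering hyperedge's weight is an assumption for illustration.

```python
import heapq
from itertools import combinations

def clique_expand(hyperedges):
    """Reduce a weighted hypergraph to a graph: each hyperedge becomes a
    clique whose pairwise edges inherit the hyperedge weight."""
    adj = {}
    for nodes, w in hyperedges:
        for u, v in combinations(nodes, 2):
            # keep the lightest weight if several hyperedges connect u and v
            best = min(adj.get((u, v), w), w)
            adj[(u, v)] = adj[(v, u)] = best
    graph = {}
    for (u, v), w in adj.items():
        graph.setdefault(u, {})[v] = w
    return graph

def dijkstra(graph, src):
    """Standard Dijkstra shortest-path distances from `src`."""
    dist = {src: 0.0}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue                          # stale queue entry
        for v, w in graph.get(u, {}).items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist
```

In the paper's setting, it is this kind of reduce-then-run-a-graph-algorithm pipeline that the Looped Transformer is shown to simulate step by step.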
|
2501.10690
|
Insights from the application of nonlinear model predictive control to a
cart-pendulum
|
eess.SY cs.SY
|
Inspired greatly by Mills et al. (2009) and the solution within, this paper
aims to more clearly explain the mathematics and implementation details of
such a powerful control algorithm. While the aforementioned paper is well
written and of sound mathematics, it is extremely dense and requires some time
and patience to decipher, especially as it draws on many other sources to
complete the algorithm. This density is a clear result of the paper being
restricted to the brief form, with important details omitted as a result. We
provide the much-needed elaboration here for the benefit of the reader.
|
2501.10692
|
Multi-modal Fusion and Query Refinement Network for Video Moment
Retrieval and Highlight Detection
|
cs.CV
|
Given a video and a linguistic query, video moment retrieval and highlight
detection (MR&HD) aim to locate all the relevant spans while simultaneously
predicting saliency scores. Most existing methods utilize RGB images as input,
overlooking the inherent multi-modal visual signals like optical flow and
depth. In this paper, we propose a Multi-modal Fusion and Query Refinement
Network (MRNet) to learn complementary information from multi-modal cues.
Specifically, we design a multi-modal fusion module to dynamically combine RGB,
optical flow, and depth map. Furthermore, to simulate human understanding of
sentences, we introduce a query refinement module that merges text at different
granularities, containing word-, phrase-, and sentence-wise levels.
Comprehensive experiments on QVHighlights and Charades datasets indicate that
MRNet outperforms current state-of-the-art methods, achieving notable
improvements in MR-mAP@Avg (+3.41) and HD-HIT@1 (+3.46) on QVHighlights.
|
2501.10693
|
Distributionally Robust Policy Evaluation and Learning for Continuous
Treatment with Observational Data
|
cs.AI cs.LG
|
Using offline observational data for policy evaluation and learning allows
decision-makers to evaluate and learn a policy that connects characteristics
and interventions. Most existing literature has focused on either discrete
treatment spaces or assumed no difference in the distributions between the
policy-learning and policy-deployed environments. These restrict applications
in many real-world scenarios where distribution shifts are present with
continuous treatment. To overcome these challenges, this paper focuses on
developing a distributionally robust policy under a continuous treatment
setting. The proposed distributionally robust estimators are established using
the Inverse Probability Weighting (IPW) method extended from the discrete one
for policy evaluation and learning under continuous treatments. Specifically,
we introduce a kernel function into the proposed IPW estimator to mitigate the
exclusion of observations that can occur in the standard IPW method to
continuous treatments. We then provide finite-sample analysis that guarantees
the convergence of the proposed distributionally robust policy evaluation and
learning estimators. The comprehensive experiments further verify the
effectiveness of our approach when distribution shifts are present.
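The kernel modification described above can be made concrete: a hard indicator 1{A_i = pi(X_i)} in the standard IPW estimator would exclude almost every observation under a continuous treatment, so it is replaced by a kernel that down-weights treatments far from the policy's prescription. The sketch below shows this standard construction with a Gaussian kernel and a known propensity density; the function names and bandwidth are illustrative, and the paper's distributionally robust extension is omitted.

```python
import numpy as np

def kernel_ipw_value(X, A, Y, policy, density, h=0.5):
    """Kernel-smoothed IPW estimate of a continuous-treatment policy value.

    A hard indicator 1{A_i == policy(X_i)} would discard almost every
    observation; the Gaussian kernel instead keeps nearby treatments with
    a weight that decays with distance, scaled by the bandwidth h."""
    target = np.array([policy(x) for x in X])      # treatment the policy picks
    k = np.exp(-0.5 * ((A - target) / h) ** 2) / (h * np.sqrt(2 * np.pi))
    return float(np.mean(k * Y / density(A, X)))   # inverse-propensity weight
```

Shrinking `h` reduces the smoothing bias but raises the variance, which is exactly the trade-off the paper's finite-sample analysis has to control.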
|
2501.10694
|
Energy Efficiency Maximization for Movable Antenna-Enhanced System Based
on Statistical CSI
|
cs.IT eess.SP math.IT
|
This paper investigates an innovative movable antenna (MA)-enhanced
multiple-input multiple-output (MIMO) system designed to enhance communication
performance. We aim to maximize the energy efficiency (EE) under statistical
channel state information (S-CSI) through a joint optimization of the transmit
covariance matrix and the antenna position vectors (APVs). To solve the
stochastic problem, we consider the large number of antennas scenario and
resort to deterministic equivalent (DE) technology to reformulate the system EE
w.r.t. the transmit variables, i.e., the transmit covariance matrix and APV,
and the receive variables, i.e., the receive APV, respectively. Then, we
propose an alternating optimization (AO) algorithm that updates the transmit
variables and the receive variables in turn to maximize the system EE. Our
numerical results reveal that the proposed MA-enhanced system can
significantly improve EE compared to several benchmark schemes and the optimal
performance can be achieved with a finite size of movement regions for MAs.
|
2501.10695
|
Exploring Transferable Homogeneous Groups for Compositional Zero-Shot
Learning
|
cs.CV
|
Conditional dependency presents one of the trickiest problems in Compositional
Zero-Shot Learning, leading to significant property variations of the same
state (object) across different objects (states). To address this problem,
existing approaches often adopt either all-to-one or one-to-one representation
paradigms. However, these extremes create an imbalance in the seesaw between
transferability and discriminability, favoring one at the expense of the other.
Comparatively, humans are adept at analogizing and reasoning in a hierarchical
clustering manner, intuitively grouping categories with similar properties to
form cohesive concepts. Motivated by this, we propose Homogeneous Group
Representation Learning (HGRL), a new perspective that formulates state (object)
representation learning as multiple homogeneous sub-group representation
learning. HGRL seeks to achieve a balance between semantic transferability and
discriminability by adaptively discovering and aggregating categories with
shared properties, learning distributed group centers that retain
group-specific discriminative features. Our method integrates three core
components designed to simultaneously enhance both the visual and prompt
representation capabilities of the model. Extensive experiments on three
benchmark datasets validate the effectiveness of our method.
|
2501.10696
|
Algorithmic Derivation of Human Spatial Navigation Indices From Eye
Movement Data
|
cs.HC cs.AI
|
Spatial navigation is a complex cognitive function involving sensory inputs,
such as visual, auditory, and proprioceptive information, to understand and
move within space. This ability allows humans to create mental maps, navigate
through environments, and process directional cues, crucial for exploring new
places and finding one's way in unfamiliar surroundings. This study takes an
algorithmic approach to extract indices relevant to human spatial navigation
using eye movement data. Leveraging electrooculography signals, we analyzed
statistical features and applied feature engineering techniques to study eye
movements during navigation tasks. The proposed work combines signal processing
and machine learning approaches to develop indices for navigation and
orientation, spatial anxiety, landmark recognition, path survey, and path
route. The analysis yielded five subscore indices with notable accuracy. Among
these, the navigation and orientation subscore achieved an R2 score of 0.72,
while the landmark recognition subscore attained an R2 score of 0.50.
Additionally, statistical features highly correlated with eye movement metrics,
including blinks, saccades, and fixations, were identified. The findings of
this study can lead to more cognitive assessments and enable early detection of
spatial navigation impairments, particularly among individuals at risk of
cognitive decline.
|
2501.10698
|
An Interpretable Neural Control Network with Adaptable Online Learning
for Sample Efficient Robot Locomotion Learning
|
cs.RO cs.LG
|
Robot locomotion learning using reinforcement learning suffers from training
sample inefficiency and exhibits a non-understandable/black-box nature. Thus,
this work presents a novel SME-AGOL framework to address these problems. Firstly,
the Sequential Motion Executor (SME) is a three-layer interpretable neural network,
where the first layer produces the sequentially propagating hidden states, the
second constructs the corresponding triangular bases with minor non-neighbor
interference, and the third maps the bases to the motor commands. Secondly, the
Adaptable Gradient-weighting Online Learning (AGOL) algorithm prioritizes the
update of the parameters with high relevance score, allowing the learning to
focus more on the highly relevant ones. Thus, these two components lead to an
analyzable framework, where each sequential hidden state/basis represents the
learned key poses/robot configuration. Compared to state-of-the-art methods,
the SME-AGOL requires 40% fewer samples and receives 150% higher final
reward/locomotion performance on a simulated hexapod robot, while taking merely
10 minutes of learning time from scratch on a physical hexapod robot. Taken
together, this work not only proposes the SME-AGOL for sample efficient and
understandable locomotion learning but also emphasizes the potential
exploitation of interpretability for improving sample efficiency and learning
performance.
|
2501.10700
|
Subcodes of Second-Order Reed-Muller Codes via Recursive Subproducts
|
cs.IT math.IT
|
We use a simple construction called `recursive subproducts' (that is known to
yield good codes of lengths $n^m$, $n \geq 3$) to identify a family of codes
sandwiched between first-order and second-order Reed-Muller (RM) codes. These
codes are subcodes of multidimensional product codes that use first-order RM
codes as components. We identify the minimum weight codewords of all the codes
in this family, and numerically determine the weight distribution of some of
them. While these codes have the same minimum distance and a smaller rate than
second-order RM codes, they have significantly fewer minimum weight codewords.
Further, these codes can be decoded via modifications to known RM decoders
which yield codeword error rates within 0.25 dB of second-order RM codes and
better than CRC-aided Polar codes (in terms of $E_b/N_0$ for lengths $256, 512,
1024$), thereby offering rate adaptation options for RM codes in low-capacity
scenarios.
|
2501.10705
|
Secure Communication in Dynamic RDARS-Driven Systems
|
cs.IT eess.SP math.IT
|
In this letter, we investigate a dynamic reconfigurable distributed antenna
and reflection surface (RDARS)-driven secure communication system, where the
working mode of the RDARS can be flexibly configured. We aim to maximize the
secrecy rate by jointly designing the active beamforming vectors, reflection
coefficients, and the channel-aware mode selection matrix. To address the
non-convex binary and cardinality constraints introduced by dynamic mode
selection, we propose an efficient alternating optimization (AO) framework that
employs penalty-based fractional programming (FP) and successive convex
approximation (SCA) transformations. Simulation results demonstrate the
potential of RDARS in enhancing the secrecy rate and show its superiority
compared to existing reflection surface-based schemes.
|
2501.10709
|
Revisiting Ensemble Methods for Stock Trading and Crypto Trading Tasks
at ACM ICAIF FinRL Contest 2023-2024
|
cs.CE cs.AI stat.ML
|
Reinforcement learning has demonstrated great potential for performing
financial tasks. However, it faces two major challenges: policy instability and
sampling bottlenecks. In this paper, we revisit ensemble methods with massively
parallel simulations on graphics processing units (GPUs), significantly
enhancing the computational efficiency and robustness of trained models in
volatile financial markets. Our approach leverages the parallel processing
capability of GPUs to significantly improve the sampling speed for training
ensemble models. The ensemble models combine the strengths of component agents
to improve the robustness of financial decision-making strategies. We conduct
experiments in both stock and cryptocurrency trading tasks to evaluate the
effectiveness of our approach. Massively parallel simulation on a single GPU
improves the sampling speed by up to $1,746\times$ using $2,048$ parallel
environments compared to a single environment. The ensemble models have high
cumulative returns and outperform some individual agents, reducing maximum
drawdown by up to $4.17\%$ and improving the Sharpe ratio by up to $0.21$.
This paper describes trading tasks at ACM ICAIF FinRL Contests in 2023 and
2024.
|
2501.10711
|
How Should We Build A Benchmark? Revisiting 274 Code-Related Benchmarks
For LLMs
|
cs.SE cs.AI cs.CL
|
Various benchmarks have been proposed to assess the performance of large
language models (LLMs) in different coding scenarios. We refer to them as
code-related benchmarks. However, there are no systematic guidelines by which
such a benchmark should be developed to ensure its quality, reliability, and
reproducibility. We propose How2Bench, a 55-criterion checklist that serves as a
set of guidelines to govern the development of code-related benchmarks
comprehensively. Using How2Bench, we profiled 274 benchmarks released within the
past decade and found concerning issues. Nearly 70% of the benchmarks did not
take measures for data quality assurance; over 10% were not open-sourced, or
were only partially open-sourced. Many highly cited benchmarks
have loopholes, including duplicated samples, incorrect reference
codes/tests/prompts, and unremoved sensitive/confidential information. Finally,
we conducted a human study involving 49 participants, which revealed
significant gaps in awareness of the importance of data quality,
reproducibility, and transparency.
|
2501.10712
|
Poisson Hail on a Wireless Ground
|
cs.IT cs.NI math.IT
|
This paper defines a new model which incorporates three key ingredients of a
large class of wireless communication systems: (1) spatial interactions through
interference, (2) dynamics of the queueing type, with users joining and
leaving, and (3) carrier sensing and collision avoidance as used in, e.g.,
WiFi. In systems using (3), rather than directly accessing the shared resources
upon arrival, a customer is considerate and waits to access them until nearby
users in service have left. This new model can be seen as a missing piece of a
larger puzzle that contains such dynamics as spatial birth-and-death processes,
the Poisson-Hail model, and wireless dynamics as key other pieces. It is shown
that, under natural assumptions, this model can be represented as a Markov
process on the space of counting measures. The main results are then two-fold.
The first is on the shape of the stability region and, more precisely, on the
characterization of the critical value of the arrival rate that separates
stability from instability. The second is of a more qualitative or perhaps even
ethical nature. There is evidence that for natural values of the system
parameters, the implementation of sensing and collision avoidance stabilizes a
system that would be unstable if immediate access to the shared resources would
be granted. In other words, for these parameters, renouncing greedy access
makes sharing sustainable, whereas indulging in greedy access kills the system.
|
2501.10713
|
Human-like Nonverbal Behavior with MetaHumans in Real-World Interaction
Studies: An Architecture Using Generative Methods and Motion Capture
|
cs.HC cs.RO
|
Socially interactive agents are gaining prominence in domains like
healthcare, education, and service contexts, particularly virtual agents due to
their inherent scalability. To facilitate authentic interactions, these systems
require verbal and nonverbal communication through, e.g., facial expressions and
gestures. While natural language processing technologies have rapidly advanced,
incorporating human-like nonverbal behavior into real-world interaction
contexts is crucial for enhancing the success of communication, yet this area
remains underexplored. One barrier is creating autonomous systems with
sophisticated conversational abilities that integrate human-like nonverbal
behavior. This paper presents a distributed architecture using Epic Games'
MetaHuman, combined with advanced conversational AI and camera-based user
management, that supports methods like motion capture, handcrafted animation,
and generative approaches for nonverbal behavior. We share insights into a
system architecture designed to investigate nonverbal behavior in socially
interactive agents, deployed in a three-week field study in the Deutsches
Museum Bonn, showcasing its potential in realistic nonverbal behavior research.
|
2501.10714
|
FSMoE: A Flexible and Scalable Training System for Sparse
Mixture-of-Experts Models
|
cs.LG
|
Recent large language models (LLMs) have tended to leverage sparsity to
reduce computations, employing the sparsely activated mixture-of-experts (MoE)
technique. MoE introduces four modules, namely token routing, token
communication, expert computation, and expert parallelism, that impact model
quality and training efficiency. To enable versatile usage of MoE models, we
introduce FSMoE, a flexible training system optimizing task scheduling with
three novel techniques: 1) Unified abstraction and online profiling of MoE
modules for task scheduling across various MoE implementations. 2)
Co-scheduling intra-node and inter-node communications with computations to
minimize communication overheads. 3) To support near-optimal task scheduling,
we design an adaptive gradient partitioning method for gradient aggregation and
a schedule to adaptively pipeline communications and computations. We conduct
extensive experiments with configured MoE layers and real-world MoE models on
two GPU clusters. Experimental results show that 1) our FSMoE supports four
popular types of MoE routing functions and is more efficient than existing
implementations (with up to a 1.42$\times$ speedup), and 2) FSMoE outperforms
the state-of-the-art MoE training systems (DeepSpeed-MoE and Tutel) by
1.18$\times$-1.22$\times$ on 1458 MoE layers and 1.19$\times$-3.01$\times$ on
real-world MoE models based on GPT-2 and Mixtral using a popular routing
function.
|
2501.10722
|
A Unified Regularization Approach to High-Dimensional Generalized Tensor
Bandits
|
cs.LG stat.ML
|
Modern decision-making scenarios often involve data that is both
high-dimensional and rich in higher-order contextual information, where
existing bandit algorithms fail to generate effective policies. In response,
we propose in this paper a generalized linear tensor bandits algorithm designed
to tackle these challenges by incorporating low-dimensional tensor structures,
and further derive a unified analytical framework of the proposed algorithm.
Specifically, our framework introduces a convex optimization approach with the
weakly decomposable regularizers, enabling it to not only achieve better
results based on the tensor low-rankness structure assumption but also extend
to cases involving other low-dimensional structures such as slice sparsity and
low-rankness. The theoretical analysis shows that, compared to existing
low-rank tensor results, our framework not only provides better bounds but
also has broader applicability. Notably, in the special case of degenerating
to low-rank matrices, our bounds still offer advantages in certain scenarios.
|
2501.10727
|
In the Picture: Medical Imaging Datasets, Artifacts, and their Living
Review
|
cs.CV cs.AI eess.IV
|
Datasets play a critical role in medical imaging research, yet issues such as
label quality, shortcuts, and metadata are often overlooked. This lack of
attention may harm the generalizability of algorithms and, consequently,
negatively impact patient outcomes. While existing medical imaging literature
reviews mostly focus on machine learning (ML) methods, with only a few focusing
on datasets for specific applications, these reviews remain static -- they are
published once and not updated thereafter. This fails to account for emerging
evidence, such as biases, shortcuts, and additional annotations that other
researchers may contribute after the dataset is published. We refer to these
newly discovered findings of datasets as research artifacts. To address this
gap, we propose a living review that continuously tracks public datasets and
their associated research artifacts across multiple medical imaging
applications. Our approach includes a framework for the living review to
monitor data documentation artifacts, and an SQL database to visualize the
citation relationships between research artifacts and datasets. Lastly, we
discuss key considerations for creating medical imaging datasets, review best
practices for data annotation, discuss the significance of shortcuts and
demographic diversity, and emphasize the importance of managing datasets
throughout their entire lifecycle. Our demo is publicly available at
http://130.226.140.142.
|
2501.10729
|
Robust Local Polynomial Regression with Similarity Kernels
|
stat.ME cs.LG stat.ML
|
Local Polynomial Regression (LPR) is a widely used nonparametric method for
modeling complex relationships due to its flexibility and simplicity. It
estimates a regression function by fitting low-degree polynomials to localized
subsets of the data, weighted by proximity. However, traditional LPR is
sensitive to outliers and high-leverage points, which can significantly affect
estimation accuracy. This paper revisits the kernel function used to compute
regression weights and proposes a novel framework that incorporates both
predictor and response variables in the weighting mechanism. By introducing two
positive definite kernels, the proposed method robustly estimates weights,
mitigating the influence of outliers through localized density estimation. The
method is implemented in Python and is publicly available at
https://github.com/yaniv-shulman/rsklpr, demonstrating competitive performance
in synthetic benchmark experiments. Compared to standard LPR, the proposed
approach consistently improves robustness and accuracy, especially in
heteroscedastic and noisy environments, without requiring multiple iterations.
This advancement provides a promising extension to traditional LPR, opening new
possibilities for robust regression applications.
|
2501.10731
|
Characterizing the Effects of Translation on Intertextuality using
Multilingual Embedding Spaces
|
cs.CL
|
Rhetorical devices are difficult to translate, but they are crucial to the
translation of literary documents. We investigate the use of multilingual
embedding spaces to characterize the preservation of intertextuality, one
common rhetorical device, across human and machine translation. To do so, we
use Biblical texts, which are both full of intertextual references and are
highly translated works. We introduce a metric to characterize intertextuality at
the corpus level and provide a quantitative analysis of the preservation of
this rhetorical device across extant human translations and machine-generated
counterparts. We go on to provide qualitative analysis of cases wherein human
translations over- or underemphasize the intertextuality present in the text,
whereas machine translations provide a neutral baseline. This provides support
for established scholarship proposing that human translators have a propensity
to amplify certain literary characteristics of the original manuscripts.
|
2501.10733
|
A CNN-Transformer for Classification of Longitudinal 3D MRI Images -- A
Case Study on Hepatocellular Carcinoma Prediction
|
cs.CV
|
Longitudinal MRI analysis is crucial for predicting disease outcomes,
particularly in chronic conditions like hepatocellular carcinoma (HCC), where
early detection can significantly influence treatment strategies and patient
prognosis. Yet, due to challenges like limited data availability, subtle
parenchymal changes, and the irregular timing of medical screenings, approaches
to date have focused on cross-sectional imaging data. To address
this, we propose HCCNet, a novel model architecture that integrates a 3D
adaptation of the ConvNeXt CNN architecture with a Transformer encoder,
capturing both the intricate spatial features of 3D MRIs and the complex
temporal dependencies across different time points. HCCNet utilizes a two-stage
pre-training process tailored for longitudinal MRI data. The CNN backbone is
pre-trained using a self-supervised learning framework adapted for 3D MRIs,
while the Transformer encoder is pre-trained with a sequence-order-prediction
task to enhance its understanding of disease progression over time. We
demonstrate the effectiveness of HCCNet by applying it to a cohort of liver
cirrhosis patients undergoing regular MRI screenings for HCC surveillance. Our
results show that HCCNet significantly improves predictive accuracy and
reliability over baseline models, providing a robust tool for personalized HCC
surveillance. The methodological approach presented in this paper is versatile
and can be adapted to various longitudinal MRI screening applications. Its
ability to handle varying patient record lengths and irregular screening
intervals establishes it as an invaluable framework for monitoring chronic
diseases, where timely and accurate disease prognosis is critical for effective
treatment planning.
|
2501.10734
|
GEC-RAG: Improving Generative Error Correction via Retrieval-Augmented
Generation for Automatic Speech Recognition Systems
|
eess.AS cs.AI cs.SD
|
Automatic Speech Recognition (ASR) systems have demonstrated remarkable
performance across various applications. However, limited data and the unique
language features of specific domains, such as low-resource languages,
significantly degrade their performance and lead to higher Word Error Rates
(WER). In this study, we propose Generative Error Correction via
Retrieval-Augmented Generation (GEC-RAG), a novel approach designed to improve
ASR accuracy for low-resource domains, like Persian. Our approach treats the
ASR system as a black-box, a common practice in cloud-based services, and
proposes a Retrieval-Augmented Generation (RAG) approach within the In-Context
Learning (ICL) scheme to enhance the quality of ASR predictions. By
constructing a knowledge base that pairs ASR predictions (1-best and 5-best
hypotheses) with their corresponding ground truths, GEC-RAG retrieves lexically
similar examples to the ASR transcription using the Term Frequency-Inverse
Document Frequency (TF-IDF) measure. This process provides relevant error
patterns of the system alongside the ASR transcription to the Generative Large
Language Model (LLM), enabling targeted corrections. Our results demonstrate
that this strategy significantly reduces WER in Persian and highlights a
potential for domain adaptation and low-resource scenarios. This research
underscores the effectiveness of using RAG in enhancing ASR systems without
requiring direct model modification or fine-tuning, making it adaptable to any
domain by simply updating the transcription knowledge base with domain-specific
data.
|
2501.10736
|
Semi-supervised Semantic Segmentation for Remote Sensing Images via
Multi-scale Uncertainty Consistency and Cross-Teacher-Student Attention
|
cs.CV cs.AI
|
Semi-supervised learning offers an appealing solution for remote sensing (RS)
image segmentation to relieve the burden of labor-intensive pixel-level
labeling. However, RS images pose unique challenges, including rich multi-scale
features and high inter-class similarity. To address these problems, this paper
proposes a novel semi-supervised Multi-Scale Uncertainty and
Cross-Teacher-Student Attention (MUCA) model for RS image semantic segmentation
tasks. Specifically, MUCA constrains the consistency among feature maps at
different layers of the network by introducing a multi-scale uncertainty
consistency regularization. It improves the multi-scale learning capability of
semi-supervised algorithms on unlabeled data. Additionally, MUCA utilizes a
Cross-Teacher-Student attention mechanism that guides the student network
to construct more discriminative feature representations
through complementary features from the teacher network. This design
effectively integrates weak and strong augmentations (WA and SA) to further
boost segmentation performance. To verify the effectiveness of our model, we
conduct extensive experiments on ISPRS-Potsdam and LoveDA datasets. The
experimental results show the superiority of our method over state-of-the-art
semi-supervised methods. Notably, our model excels in distinguishing highly
similar objects, showcasing its potential for advancing semi-supervised RS
image segmentation tasks.
|
2501.10739
|
Computational Discovery of Chiasmus in Ancient Religious Text
|
cs.CL
|
Chiasmus, a debated literary device in Biblical texts, has captivated mystics
while sparking ongoing scholarly discussion. In this paper, we introduce the
first computational approach to systematically detect chiasmus within Biblical
passages. Our method leverages neural embeddings to capture lexical and
semantic patterns associated with chiasmus, applied at multiple levels of
textual granularity (half-verses, verses). We also involve expert annotators to
review a subset of the detected patterns. Despite its computational efficiency,
our method achieves robust results, with high inter-annotator agreement and
system precision@k of 0.80 at the verse level and 0.60 at the half-verse level.
We further provide a qualitative analysis of the distribution of detected
chiasmi, along with selected examples that highlight the effectiveness of our
approach.
|
2501.10741
|
Development of Application-Specific Large Language Models to Facilitate
Research Ethics Review
|
cs.CL cs.CY
|
Institutional review boards (IRBs) play a crucial role in ensuring the
ethical conduct of human subjects research, but face challenges including
inconsistency, delays, and inefficiencies. We propose the development and
implementation of application-specific large language models (LLMs) to
facilitate IRB review processes. These IRB-specific LLMs would be fine-tuned on
IRB-specific literature and institutional datasets, and equipped with retrieval
capabilities to access up-to-date, context-relevant information. We outline
potential applications, including pre-review screening, preliminary analysis,
consistency checking, and decision support. While addressing concerns about
accuracy, context sensitivity, and human oversight, we acknowledge remaining
challenges such as over-reliance on AI and the need for transparency. By
enhancing the efficiency and quality of ethical review while maintaining human
judgment in critical decisions, IRB-specific LLMs offer a promising tool to
improve research oversight. We call for pilot studies to evaluate the
feasibility and impact of this approach.
|
2501.10743
|
Analysis of Age-Energy Trade-off in IoT Networks Using Stochastic
Geometry
|
cs.IT math.IT
|
We study an internet of things (IoT) network where devices harvest energy
from transmitter power. IoT devices use this harvested energy to operate and
decode data packets. We propose a slot division scheme based on a parameter
$\xi$, where the first phase is for energy harvesting (EH) and the second phase
is for data transmission. We define the joint success probability (JSP) metric
as the probability of the event that both the harvested energy and the received
signal-to-interference ratio (SIR) exceed their respective thresholds. We
provide lower and upper bounds of (JSP), as obtaining an exact JSP expression
is challenging. Then, the peak age-of-information (PAoI) of data packets is
determined using this framework. Higher slot intervals for EH reduce data
transmission time, requiring higher link rates. In contrast, a lower EH slot
interval will leave IoT devices without enough energy to decode the packets. We
demonstrate that both non-preemptive and preemptive queuing disciplines may
have the same optimal slot partitioning factor for maximizing the JSP and
minimizing the PAoI. For different transmit powers and deployment areas, we
recommend the optimal slot partitioning factor for the above two metrics under
both queuing disciplines.
|
2501.10750
|
PEARL: Preconditioner Enhancement through Actor-critic Reinforcement
Learning
|
cs.LG cs.NA math.NA stat.ML
|
We present PEARL (Preconditioner Enhancement through Actor-critic
Reinforcement Learning), a novel approach to learning matrix preconditioners.
Existing preconditioners such as Jacobi, Incomplete LU, and Algebraic Multigrid
methods offer problem-specific advantages but rely heavily on hyperparameter
tuning. Recent advances have explored using deep neural networks to learn
preconditioners, though challenges such as misbehaved objective functions and
costly training procedures remain. PEARL introduces a reinforcement learning
approach for learning preconditioners, specifically a contextual bandit
formulation. The framework utilizes an actor-critic model, where the actor
generates the incomplete Cholesky decomposition of preconditioners, and the
critic evaluates them based on reward-specific feedback. To further guide the
training, we design a dual-objective function, combining updates from the
critic and condition number. PEARL contributes a generalizable preconditioner
learning method, dynamic sparsity exploration, and cosine schedulers for
improved stability and exploratory power. We compare our approach to
traditional and neural preconditioners, demonstrating improved flexibility and
iterative solving speed.
|
2501.10752
|
Quadcopter Position Hold Function using Optical Flow in a
Smartphone-based Flight Computer
|
cs.CV
|
Purpose. This paper explores the capability of smartphones as computing
devices for a quadcopter, specifically in terms of the ability of drones to
maintain a position known as the position hold function. Image processing can
be performed with the phone's sensors and powerful built-in camera. Method.
Using Shi-Tomasi corner detection and the Lucas-Kanade sparse optical flow
algorithms, ground features are recognized and tracked using the
downward-facing camera. The position is maintained by computing quadcopter
displacement from the center of the image using Euclidean distance, and the
corresponding pitch and roll estimate is calculated using the PID controller.
Results. Actual flights show a double standard deviation of 18.66 cm from the
center for outdoor tests. Given a quadcopter size of 58 cm x 58 cm, this
implies that 95% of the time, the quadcopter is within a diameter of 96 cm. For
indoor tests, a double standard deviation of 10.55 cm means that 95% of the
time, the quadcopter is within a diameter of 79 cm. Conclusion. Smartphone
sensors and cameras can be used to perform optical flow position hold
functions, proving their potential as computing devices for drones.
Recommendations. To further improve the positioning system of the phone-based
quadcopter system, it is suggested that potential sensor fusion be explored
with the phone's GNSS sensor, which gives absolute positioning information for
outdoor applications. Research Implications. As different devices and gadgets
are integrated into the smartphone, this paper presents an opportunity for
phone manufacturers and researchers to explore the potential of smartphones for
a drone use-case.
|
2501.10753
|
Pinching Antennas: Principles, Applications and Challenges
|
cs.IT eess.SP math.IT
|
Flexible-antenna systems, such as fluid antennas and movable antennas, have
been recognized as key enabling technologies for sixth-generation (6G) wireless
networks, as they can intelligently reconfigure the effective channel gains of
the users and hence significantly improve their data transmission capabilities.
However, existing flexible-antenna systems have been designed to combat
small-scale fading in non-line-of-sight (NLoS) conditions. As a result, they
lack the ability to establish line-of-sight (LoS) links, which are typically 100
times stronger than NLoS links. In addition, existing flexible-antenna systems
have limited flexibility, where adding/removing an antenna is not
straightforward. This article introduces an innovative flexible-antenna system
called pinching antennas, which are realized by applying small dielectric
particles to waveguides. We first describe the basics of pinching-antenna
systems and their ability to provide strong LoS links by deploying pinching
antennas close to the users as well as their capability to scale up/down the
antenna system. We then focus on communication scenarios with different numbers
of waveguides and pinching antennas, where innovative approaches to implement
multiple-input multiple-output and non-orthogonal multiple access are
discussed. In addition, promising 6G-related applications of pinching antennas,
including integrated sensing and communication and next-generation multiple
access, are presented. Finally, important directions for future research, such
as waveguide deployment and channel estimation, are highlighted.
|
2501.10755
|
An Experimental Study on Joint Modeling for Sound Event Localization and
Detection with Source Distance Estimation
|
cs.SD cs.LG eess.AS
|
In traditional sound event localization and detection (SELD) tasks, the focus
is typically on sound event detection (SED) and direction-of-arrival (DOA)
estimation, but they fall short of providing full spatial information about the
sound source. The 3D SELD task addresses this limitation by integrating source
distance estimation (SDE), allowing for complete spatial localization. We
propose three approaches to tackle this challenge: a novel method with
independent training and joint prediction, which first treats DOA and
distance estimation as separate tasks and then combines them to solve 3D SELD;
a dual-branch representation with source Cartesian coordinate used for
simultaneous DOA and distance estimation; and a three-branch structure that
jointly models SED, DOA, and SDE within a unified framework. Our proposed
method ranked first in the DCASE 2024 Challenge Task 3, demonstrating the
effectiveness of joint modeling for addressing the 3D SELD task. The relevant
code for this paper will be open-sourced in the future.
|
2501.10756
|
D2D Coded Caching Schemes for Multiaccess Networks with Combinatorial
Access Topology
|
cs.IT math.IT
|
This paper considers wireless device-to-device (D2D) coded caching in a
multiaccess network, where the users communicate with each other and each user
can access multiple cache nodes. Access topologies derived from two
combinatorial designs known as the $t$-design and $t$-group divisible design
($t$-GDD), referred to as the $t$-design and $t$-GDD topologies respectively,
which subsume a few other known topologies, have been studied for the
multiaccess coded caching (MACC) network by Cheng \textit{et al.} in
\cite{MACC_des}. These access topologies are extended to a multiaccess D2D
coded caching (MADCC) network, and novel MADCC schemes are proposed. The MADCC
network has so far been studied only for the cyclic wrap-around topology. Apart
from the proposed novel MADCC schemes, MADCC schemes are also derived from the
existing MACC schemes in \cite{MACC_des}. To compare the performance of
different MADCC schemes, the metrics of load per user and subpacketization
level are used while keeping the number of caches and cache memory size same.
The proposed MADCC scheme with $t$-design topology performs better in terms of
subpacketization level while achieving the same load per user compared to the
MADCC scheme derived from the MACC scheme with $t$-design topology in
\cite{MACC_des}. The proposed MADCC scheme with $t$-GDD topology performs
better in terms of load per user while achieving the same subpacketization
level compared to the MADCC scheme derived from the MACC scheme with $t$-GDD
topology in \cite{MACC_des} in some cases. Compared to the existing MADCC
scheme with cyclic wrap-around topology, the proposed MADCC scheme with
$t$-design topology performs better in terms of load per user, and the proposed
MADCC scheme with $t$-GDD topology performs better in terms of subpacketization
level at the expense of an increase in load per user.
|
2501.10757
|
Deformable Image Registration of Dark-Field Chest Radiographs for Local
Lung Signal Change Assessment
|
eess.IV cs.CV physics.med-ph
|
Dark-field radiography of the human chest has been demonstrated to have
promising potential for the analysis of the lung microstructure and the
diagnosis of respiratory diseases. However, previous studies of dark-field
chest radiographs evaluated the lung signal only in the inspiratory breathing
state. Our work aims to add a new perspective to these previous assessments by
locally comparing dark-field lung information between different respiratory
states. To this end, we discuss suitable image registration methods for
dark-field chest radiographs to enable consistent spatial alignment of the lung
in distinct breathing states. Utilizing full inspiration and expiration scans
from a clinical chronic obstructive pulmonary disease study, we assess the
performance of the proposed registration framework and outline applicable
evaluation approaches. Our regional characterization of lung dark-field signal
changes between the breathing states provides a proof-of-principle that dynamic
radiography-based lung function assessment approaches may benefit from
considering registered dark-field images in addition to standard plain chest
radiographs.
|
2501.10761
|
Infrared and Visible Image Fusion: From Data Compatibility to Task
Adaption
|
cs.CV
|
Infrared-visible image fusion (IVIF) is a critical task in computer vision,
aimed at integrating the unique features of both infrared and visible spectra
into a unified representation. Since 2018, the field has entered the deep
learning era, with an increasing variety of approaches introducing a range of
networks and loss functions to enhance visual performance. However, challenges
such as data compatibility, perception accuracy, and efficiency remain.
Unfortunately, there is a lack of recent comprehensive surveys that address
this rapidly expanding domain. This paper fills that gap by providing a
thorough survey covering a broad range of topics. We introduce a
multi-dimensional framework to elucidate common learning-based IVIF methods,
from visual enhancement strategies to data compatibility and task adaptability.
We also present a detailed analysis of these approaches, accompanied by a
lookup table clarifying their core ideas. Furthermore, we summarize performance
comparisons, both quantitatively and qualitatively, focusing on registration,
fusion, and subsequent high-level tasks. Beyond technical analysis, we discuss
potential future directions and open issues in this area. For further details,
visit our GitHub repository: https://github.com/RollingPlain/IVIF_ZOO.
|
2501.10768
|
MAPS: Advancing Multi-Modal Reasoning in Expert-Level Physical Science
|
cs.AI
|
Pre-trained on extensive text and image corpora, current Multi-Modal Large
Language Models (MLLM) have shown strong capabilities in general visual
reasoning tasks. However, their performance is still lacking in physical
domains that require understanding diagrams with complex physical structures
and quantitative analysis based on multi-modal information. To address this, we
develop a new framework, named Multi-Modal Scientific Reasoning with Physics
Perception and Simulation (MAPS), based on an MLLM. MAPS decomposes the
expert-level multi-modal reasoning task into physical diagram understanding via
a Physical Perception Model (PPM) and reasoning with physical knowledge via a
simulator.
The PPM module is obtained by fine-tuning a visual language model using
carefully designed synthetic data with paired physical diagrams and
corresponding simulation language descriptions. At the inference stage, MAPS
integrates the simulation language description of the input diagram provided by
PPM and results obtained through a Chain-of-Simulation process with MLLM to
derive the underlying rationale and the final answer. Validated using our
collected college-level circuit analysis problems, MAPS significantly improves
the reasoning accuracy of MLLMs and outperforms all existing models. The
results confirm that MAPS offers a promising direction for enhancing the
multi-modal scientific reasoning ability of MLLMs. We will release our code,
model, and dataset used for our experiments upon publication of this paper.
|
2501.10770
|
Enhancing Diagnostic in 3D COVID-19 Pneumonia CT-scans through
Explainable Uncertainty Bayesian Quantification
|
eess.IV cs.AI cs.CV cs.LG
|
Accurately classifying COVID-19 pneumonia in 3D CT scans remains a
significant challenge in the field of medical image analysis. Although
deterministic neural networks have shown promising results in this area, they
provide only point-estimate outputs, yielding poor diagnostic support in
clinical decision-making. In this paper, we explore the use of Bayesian neural
networks for classifying COVID-19 pneumonia in 3D CT scans, providing
uncertainties in their predictions. We compare deterministic networks with
their Bayesian counterparts, enhancing decision-making accuracy with
uncertainty information. Remarkably, our findings reveal that lightweight
architectures achieve the highest accuracy of 96\% after extensive
hyperparameter tuning. Furthermore, the Bayesian counterparts of these
architectures, obtained via the Multiplicative Normalizing Flows technique,
maintain comparable performance along with
calibrated uncertainty estimates. Finally, we have developed a 3D-visualization
approach to explain the neural network outcomes based on SHAP values. We
conclude that explainability along with uncertainty quantification will offer
better clinical decisions in medical image analysis, contributing to ongoing
efforts for improving the diagnosis and treatment of COVID-19 pneumonia.
|
2501.10774
|
Model Monitoring in the Absence of Labeled Data via Feature Attributions
Distributions
|
cs.LG
|
Model monitoring involves analyzing AI algorithms once they have been
deployed and detecting changes in their behaviour. This thesis explores
machine learning (ML) model monitoring before the predictions impact
real-world decisions or users. This step is characterized by one particular
condition: the absence of labelled data at test time, which makes it
challenging, and often even impossible, to calculate performance metrics.
The thesis is structured around two main themes: (i) AI alignment, measuring
whether AI models behave in a manner consistent with human values, and (ii)
performance monitoring, measuring whether the models achieve specific accuracy
goals or targets.
The thesis uses a common methodology that unifies all its sections. It
explores feature attribution distributions for both monitoring dimensions.
Using these feature attribution explanations, we can exploit their theoretical
properties to derive and establish certain guarantees and insights into model
monitoring.
|
2501.10775
|
MedFILIP: Medical Fine-grained Language-Image Pre-training
|
cs.CV cs.AI
|
Medical vision-language pretraining (VLP) that leverages naturally-paired
medical image-report data is crucial for medical image analysis. However,
existing methods struggle to accurately characterize associations between
images and diseases, leading to inaccurate or incomplete diagnostic results. In
this work, we propose MedFILIP, a fine-grained VLP model that introduces
medical image-specific knowledge through contrastive learning. Specifically:
1) An information extractor based on a large language model is proposed to
decouple comprehensive disease details from reports; it excels at extracting
disease details through flexible prompt engineering, thereby effectively
reducing text complexity while retaining rich information at a tiny cost. 2) A
knowledge
injector is proposed to construct relationships between categories and visual
attributes, which helps the model make judgments based on image features and
fosters knowledge extrapolation to unfamiliar disease categories. 3) A semantic
similarity matrix based on fine-grained annotations is proposed, providing
smoother, information-richer labels, thus allowing fine-grained image-text
alignment. 4) We validate MedFILIP on numerous datasets, e.g., RSNA-Pneumonia,
NIH ChestX-ray14, VinBigData, and COVID-19. For single-label, multi-label, and
fine-grained classification, our model achieves state-of-the-art performance,
with classification accuracy improved by up to 6.69\%. The code is available
at https://github.com/PerceptionComputingLab/MedFILIP.
|
2501.10777
|
The working principles of model-based GAs fall within the PAC framework:
A mathematical theory of problem decomposition
|
cs.NE
|
The concepts of linkage, building blocks, and problem decomposition have long
existed in the genetic algorithm (GA) field and have guided the development of
model-based GAs for decades. However, their definitions are usually vague,
making it difficult to develop theoretical support. This paper provides an
algorithm-independent definition to describe the concept of linkage. With this
definition, the paper proves that any problems with a bounded degree of linkage
are decomposable and that proper problem decomposition is possible via linkage
learning. The way of decomposition given in this paper also offers a new
perspective on nearly decomposable problems with bounded difficulty and
building blocks from the theoretical aspect. Finally, this paper relates
problem decomposition to PAC learning and proves that the global optima of
these problems and the minimum decomposition blocks are PAC learnable under
certain conditions.
|
2501.10781
|
Simultaneous Computation with Multiple Prioritizations in Multi-Agent
Motion Planning
|
cs.MA cs.AI cs.RO
|
Multi-agent path finding (MAPF) in large networks is computationally
challenging. An approach for MAPF is prioritized planning (PP), in which agents
plan sequentially according to their priority. Albeit a computationally
efficient approach for MAPF, the solution quality strongly depends on the
prioritization. Most prioritizations rely either on heuristics, which do not
generalize well, or iterate to find adequate priorities, which costs
computational effort. In this work, we show how agents can compute with
multiple prioritizations simultaneously. Our approach is general as it does not
rely on domain-specific knowledge. The context of this work is multi-agent
motion planning (MAMP) with a receding horizon subject to computation time
constraints. MAMP considers the system dynamics in more detail compared to
MAPF. In numerical experiments on MAMP, we demonstrate that our approach to
prioritization comes close to optimal prioritization and outperforms
state-of-the-art methods with only a minor increase in computation time. We
show real-time capability in an experiment on a road network with ten vehicles
in our Cyber-Physical Mobility Lab.
|
2501.10782
|
ML-SceGen: A Multi-level Scenario Generation Framework
|
cs.AI
|
Current scientific research has seen various attempts at applying Large
Language Models to scenario generation, but these attempts focus only on
comprehensive or dangerous scenarios. In this paper, we seek to build a
three-stage framework
that not only lets users regain controllability over the generated scenarios
but also generates comprehensive scenarios containing danger factors in
uncontrolled intersection settings. In the first stage, LLM agents will
contribute to translating the key components of the description of the expected
scenarios into Functional Scenarios. For the second stage, we use Answer Set
Programming (ASP) solver Clingo to help us generate comprehensive logical
traffic within intersections. During the last stage, we use LLM to update
relevant parameters to increase the critical level of the concrete scenario.
|
2501.10784
|
Measuring Fairness in Financial Transaction Machine Learning Models
|
cs.LG
|
Mastercard, a global leader in financial services, develops and deploys
machine learning models aimed at optimizing card usage and preventing attrition
through advanced predictive models. These models use aggregated and anonymized
card usage patterns, including cross-border transactions and industry-specific
spending, to tailor bank offerings and maximize revenue opportunities.
Mastercard has established an AI Governance program, based on its Data and Tech
Responsibility Principles, to evaluate any built and bought AI for efficacy,
fairness, and transparency. As part of this effort, Mastercard has sought
expertise from the Turing Institute through a Data Study Group to better assess
fairness in more complex AI/ML models. The Data Study Group challenge lies in
defining, measuring, and mitigating fairness in these predictions, which can be
complex due to the various interpretations of fairness, gaps in the research
literature, and ML-operations challenges.
|
2501.10787
|
LD-DETR: Loop Decoder DEtection TRansformer for Video Moment Retrieval
and Highlight Detection
|
cs.CV cs.IR cs.LG
|
Video Moment Retrieval and Highlight Detection aim to find corresponding
content in the video based on a text query. Existing models usually first use
contrastive learning methods to align video and text features, then fuse and
extract multimodal information, and finally use a Transformer Decoder to decode
multimodal information. However, existing methods face several issues: (1)
Overlapping semantic information between different samples in the dataset
hinders the model's multimodal aligning performance; (2) Existing models are
not able to efficiently extract local features of the video; (3) The
Transformer Decoder used by the existing model cannot adequately decode
multimodal features. To address the above issues, we proposed the LD-DETR model
for Video Moment Retrieval and Highlight Detection tasks. Specifically, we
first distilled the similarity matrix into the identity matrix to mitigate the
impact of overlapping semantic information. Then, we designed a method that
enables convolutional layers to extract multimodal local features more
efficiently. Finally, we fed the output of the Transformer Decoder back into
itself to adequately decode multimodal information. We evaluated LD-DETR on
four public benchmarks and conducted extensive experiments to demonstrate the
superiority and effectiveness of our approach. Our model outperforms the
State-Of-The-Art models on QVHighlight, Charades-STA and TACoS datasets. Our
code is available at https://github.com/qingchen239/ld-detr.
|
2501.10788
|
Decoupling Appearance Variations with 3D Consistent Features in Gaussian
Splatting
|
cs.CV
|
Gaussian Splatting has emerged as a prominent 3D representation in novel view
synthesis, but it still suffers from appearance variations, which are caused by
various factors, such as modern camera ISPs, different time of day, weather
conditions, and local light changes. These variations can lead to floaters and
color distortions in the rendered images/videos. Recent appearance modeling
approaches in Gaussian Splatting are either tightly coupled with the rendering
process, hindering real-time rendering, or they only account for mild global
variations, performing poorly in scenes with local light changes. In this
paper, we propose DAVIGS, a method that decouples appearance variations in a
plug-and-play and efficient manner. By transforming the rendering results at
the image level instead of the Gaussian level, our approach can model
appearance variations with minimal optimization time and memory overhead.
Furthermore, our method gathers appearance-related information in 3D space to
transform the rendered images, thus building 3D consistency across views
implicitly. We validate our method on several appearance-variant scenes, and
demonstrate that it achieves state-of-the-art rendering quality with minimal
training time and memory usage, without compromising rendering speeds.
Additionally, it provides performance improvements for different Gaussian
Splatting baselines in a plug-and-play manner.
|
2501.10789
|
CS-Net:Contribution-based Sampling Network for Point Cloud
Simplification
|
cs.CV
|
Point cloud sampling plays a crucial role in reducing computation costs and
storage requirements for various vision tasks. Traditional sampling methods,
such as farthest point sampling, lack task-specific information and, as a
result, cannot guarantee optimal performance in specific applications.
Learning-based methods train a network to sample the point cloud for the
targeted downstream task. However, they do not guarantee that the sampled
points are the most relevant ones. Moreover, they may result in duplicate
sampled points, which requires completion of the sampled point cloud through
post-processing techniques. To address these limitations, we propose a
contribution-based sampling network (CS-Net), where the sampling operation is
formulated as a Top-k operation. To ensure that the network can be trained in
an end-to-end way using gradient descent algorithms, we use a differentiable
approximation to the Top-k operation via entropy regularization of an optimal
transport problem. Our network consists of a feature embedding module, a
cascade attention module, and a contribution scoring module. The feature
embedding module includes a specifically designed spatial pooling layer to
reduce parameters while preserving important features. The cascade attention
module combines the outputs of three skip connected offset attention layers to
emphasize the attractive features and suppress less important ones. The
contribution scoring module generates a contribution score for each point and
guides the sampling process to prioritize the most important ones. Experiments
on the ModelNet40 and PU147 datasets showed that CS-Net achieved state-of-the-art
performance in two semantic-based downstream tasks (classification and
registration) and two reconstruction-based tasks (compression and surface
reconstruction).
|
2501.10791
|
A Novel Precoder for Peak-to-Average Power Ratio Reduction in OTFS
Systems
|
cs.IT eess.SP math.IT
|
We consider the issue of high peak-to-average-power ratio (PAPR) of
Orthogonal time frequency space (OTFS) modulated signals. This paper proposes a
low-complexity novel iterative PAPR reduction method which achieves a PAPR
reduction of roughly 5 dB when compared to an OTFS modulated signal without any
PAPR compensation. Simulations reveal that the PAPR achieved by the proposed
method is significantly better than that achieved by other state-of-the-art
methods. Simulations also reveal that the error rate performance of OTFS-based
systems with the proposed PAPR reduction is similar to that achieved with the
other state-of-the-art methods.
|
2501.10796
|
Dynamic Trend Fusion Module for Traffic Flow Prediction
|
cs.LG
|
Accurate traffic flow prediction is essential for applications like transport
logistics but remains challenging due to complex spatio-temporal correlations
and non-linear traffic patterns. Existing methods often model spatial and
temporal dependencies separately, failing to effectively fuse them. To overcome
this limitation, the Dynamic Spatial-Temporal Trend Transformer (DST2former) is
proposed to capture spatio-temporal correlations through adaptive embedding and
to fuse dynamic and static information for learning multi-view dynamic features
of traffic networks. The approach employs the Dynamic Trend Representation
Transformer (DTRformer) to generate dynamic trends using encoders for both
temporal and spatial dimensions, fused via Cross Spatial-Temporal Attention.
Predefined graphs are compressed into a representation graph to extract static
attributes and reduce redundancy. Experiments on four real-world traffic
datasets demonstrate that our framework achieves state-of-the-art performance.
|
2501.10799
|
Step-KTO: Optimizing Mathematical Reasoning through Stepwise Binary
Feedback
|
cs.LG cs.AI
|
Large language models (LLMs) have recently demonstrated remarkable success in
mathematical reasoning. Despite progress in methods like chain-of-thought
prompting and self-consistency sampling, these advances often focus on final
correctness without ensuring that the underlying reasoning process is coherent
and reliable. This paper introduces Step-KTO, a training framework that
combines process-level and outcome-level binary feedback to guide LLMs toward
more trustworthy reasoning trajectories. By providing binary evaluations for
both the intermediate reasoning steps and the final answer, Step-KTO encourages
the model to adhere to logical progressions rather than relying on superficial
shortcuts. Our experiments on challenging mathematical benchmarks show that
Step-KTO significantly improves both final answer accuracy and the quality of
intermediate reasoning steps. For example, on the MATH-500 dataset, Step-KTO
achieves a notable improvement in Pass@1 accuracy over strong baselines. These
results highlight the promise of integrating stepwise process feedback into LLM
training, paving the way toward more interpretable and dependable reasoning
capabilities.
|
2501.10800
|
Jailbreaking Large Language Models in Infinitely Many Ways
|
cs.LG cs.CR
|
We discuss the "Infinitely Many Meanings" attacks (IMM), a category of
jailbreaks that leverages the increasing capabilities of a model to handle
paraphrases and encoded communications to bypass their defensive mechanisms.
The viability of IMMs grows with a model's capability to handle and bind the
semantics of simple mappings between tokens, and they work extremely well in
practice, posing a concrete threat to users of the most powerful commercial
LLMs. We show how one can bypass the safeguards of the most powerful open-
and closed-source LLMs and generate content that explicitly violates their
safety policies. One can protect against IMMs by improving the guardrails and
making them scale with the LLMs' capabilities. For two categories of attacks
that are straightforward to implement, i.e., bijection and encoding, we discuss
two defensive strategies, one in token and the other in embedding space. We
conclude with some research questions we believe should be prioritised to
enhance the defensive mechanisms of LLMs and our understanding of their safety.
|
2501.10806
|
Non-Expansive Mappings in Two-Time-Scale Stochastic Approximation:
Finite-Time Analysis
|
math.OC cs.LG cs.SY eess.SY stat.ML
|
Two-time-scale stochastic approximation is an iterative algorithm used in
applications such as optimization, reinforcement learning, and control.
Finite-time analysis of these algorithms has primarily focused on fixed point
iterations where both time-scales have contractive mappings. In this paper, we
study two-time-scale iterations, where the slower time-scale has a
non-expansive mapping. For such algorithms, the slower time-scale can be
considered a stochastic inexact Krasnoselskii-Mann iteration. We show that the
mean square error decays at a rate $O(1/k^{1/4-\epsilon})$, where $\epsilon>0$
is arbitrarily small. We also show almost sure convergence of iterates to the
set of fixed points. We show the applicability of our framework by applying our
results to minimax optimization, linear stochastic approximation, and
Lagrangian optimization.
|
2501.10808
|
Optimizing MACD Trading Strategies A Dance of Finance, Wavelets, and
Genetics
|
cs.CE
|
In today's financial markets, quantitative trading has become an essential
trading method, with the MACD indicator widely employed in quantitative trading
strategies. This paper begins by screening and cleaning the dataset,
establishing a model that adheres to the basic buy and sell rules of the MACD,
and calculating key metrics such as the win rate, return, Sharpe ratio, and
maximum drawdown for each stock. However, the MACD often generates erroneous
signals in highly volatile markets. To address this, wavelet transform is
applied to reduce noise, smoothing the DIF image, and a model is developed
based on this to optimize the identification of buy and sell points. The
results show that the annualized return has increased by 5%, verifying the
feasibility of the method.
Subsequently, the divergence principle is used to further optimize the
trading strategy, enhancing the model's performance. Additionally, a genetic
algorithm is employed to optimize the MACD parameters, tailoring the strategy
to the characteristics of different stocks. To improve computational
efficiency, the MindSpore framework is used for resource management and
parallel computing. The optimized strategy demonstrates improved win rates,
returns, Sharpe ratios, and a reduction in maximum drawdown in backtesting.
|
2501.10809
|
Efficient Auto-Labeling of Large-Scale Poultry Datasets (ALPD) Using
Semi-Supervised Models, Active Learning, and Prompt-then-Detect Approach
|
cs.CV cs.AI
|
The rapid growth of AI in poultry farming has highlighted the challenge of
efficiently labeling large, diverse datasets. Manual annotation is
time-consuming, making it impractical for modern systems that continuously
generate data. This study explores semi-supervised auto-labeling methods,
integrating active learning, and prompt-then-detect paradigm to develop an
efficient framework for auto-labeling of large poultry datasets aimed at
advancing AI-driven behavior and health monitoring. Video data were collected
from broilers and laying hens housed at the University of Arkansas and the
University of Georgia. The collected videos were converted into images,
pre-processed, augmented, and labeled. Various machine learning models,
including zero-shot models like Grounding DINO, YOLO-World, and CLIP, and
supervised models like YOLO and Faster-RCNN, were utilized for broilers, hens,
and behavior detection. The results showed that YOLOv8s-World and YOLOv9s
performed better when comparing performance metrics for broiler and hen
detection under supervised learning, while among the semi-supervised models,
YOLOv8s-ALPD achieved the highest precision (96.1%) and recall (99.0%) with an
RMSE of 1.9. The hybrid YOLO-World model, incorporating the optimal YOLOv8s
backbone, demonstrated the highest overall performance. It achieved a precision
of 99.2%, recall of 99.4%, and an F1 score of 98.7% for breed detection,
alongside a precision of 88.4%, recall of 83.1%, and an F1 score of 84.5% for
individual behavior detection. Additionally, semi-supervised models showed
significant improvements in behavior detection, achieving up to 31% improvement
in precision and 16% in F1-score. The semi-supervised models with minimal
active learning reduced annotation time by over 80% compared to full manual
labeling. Moreover, integrating zero-shot models enhanced detection and
behavior identification.
|
2501.10810
|
Convergence and Running Time of Time-dependent Ant Colony Algorithms
|
cs.DS cs.NE
|
Ant Colony Optimization (ACO) is a well-known method inspired by the foraging
behavior of ants and is extensively used to solve combinatorial optimization
problems. In this paper, we first consider a general framework based on the
concept of a construction graph - a graph associated with an instance of the
optimization problem under study, where feasible solutions are represented by
walks. We analyze the running time of this ACO variant, known as the
Graph-based Ant System with time-dependent evaporation rate (GBAS/tdev), and
prove that the algorithm's solution converges to the optimal solution of the
problem with probability 1 for a slightly stronger evaporation rate function
than was previously known. We then consider two time-dependent adaptations of
Attiratanasunthron and Fakcharoenphol's $n$-ANT algorithm: $n$-ANT with
time-dependent evaporation rate ($n$-ANT/tdev) and $n$-ANT with time-dependent
lower pheromone bound ($n$-ANT/tdlb). We analyze both variants on the single
destination shortest path problem (SDSP). Our results show that $n$-ANT/tdev
has a super-polynomial time lower bound on the SDSP. In contrast, we show that
$n$-ANT/tdlb achieves a polynomial time upper bound on this problem.
|
2501.10812
|
Graph Coloring to Reduce Computation Time in Prioritized Planning
|
cs.MA cs.AI cs.RO
|
Distributing computations among agents in large networks reduces
computational effort in multi-agent path finding (MAPF). One distribution
strategy is prioritized planning (PP). In PP, we couple and prioritize
interacting agents to achieve a desired behavior across all agents in the
network. We characterize the interaction with a directed acyclic graph (DAG).
The computation time for solving the MAPF problem using PP is mainly determined
through the longest path in this DAG. The longest path depends on the fixed
undirected coupling graph and the variable prioritization. The approaches from
literature to prioritize agents are numerous and pursue various goals. This
article presents an approach for prioritization in PP to reduce the longest
path length in the coupling DAG and thus the computation time for MAPF using
PP. We prove that this problem can be mapped to a graph-coloring problem, in
which the number of colors required corresponds to the longest path length in
the coupling DAG. We propose a decentralized graph-coloring algorithm to
determine priorities for the agents. We evaluate the approach by applying it to
multi-agent motion planning (MAMP), a variant of MAPF, for connected and
automated vehicles (CAVs) on roads.
|
2501.10814
|
No More Sliding Window: Efficient 3D Medical Image Segmentation with
Differentiable Top-k Patch Sampling
|
eess.IV cs.AI cs.CV cs.LG
|
3D models are favored over 2D for 3D medical image segmentation tasks due to
their ability to leverage inter-slice relationship, yielding higher
segmentation accuracy. However, 3D models demand significantly more GPU memory
with increased model size and intermediate tensors. A common solution is to use
patch-based training and make whole-volume predictions with sliding window (SW)
inference. SW inference reduces memory usage but is slower, because it
allocates equal resources to every patch, and less accurate, because it
overlooks global features beyond patch boundaries.
We propose NMSW-Net (No-More-Sliding-Window-Net), a novel framework that
enhances efficiency and accuracy of any given 3D segmentation model by
eliminating SW inference and incorporating global predictions when necessary.
NMSW-Net incorporates a differentiable Top-k module to sample only the relevant
patches that enhance segmentation accuracy, thereby minimizing redundant
computations. Additionally, it learns to leverage coarse global predictions
when patch prediction alone is insufficient. NMSW-Net is model-agnostic, making
it compatible with any 3D segmentation model that previously relied on SW
inference.
Evaluated across 3 tasks with 3 segmentation backbones, NMSW-Net achieves
competitive or sometimes superior accuracy compared to SW, while reducing
computational complexity by 90% (87.5 to 7.95 TFLOPS), delivering 4x faster
inference on the H100 GPU (19.0 to 4.3 sec), and 7x faster inference on the
Intel Xeon Gold CPU (1710 to 230 seconds).
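As an illustration of what a differentiable top-k relaxation can look like, the sketch below uses a generic sigmoid-threshold construction; it is not necessarily NMSW-Net's exact module:

```python
import numpy as np

def soft_topk_weights(scores, k, temperature=0.1):
    """Smooth relaxation of hard top-k patch selection.

    Each patch gets a weight in (0, 1) from a sigmoid centered between
    the k-th and (k+1)-th largest relevance scores; as temperature -> 0
    the weights approach a hard 0/1 top-k mask, while staying
    differentiable w.r.t. the scores at any positive temperature.
    Requires k < len(scores).
    """
    sorted_scores = np.sort(scores)[::-1]
    tau = 0.5 * (sorted_scores[k - 1] + sorted_scores[k])
    return 1.0 / (1.0 + np.exp(-(scores - tau) / temperature))

scores = np.array([0.9, 0.1, 0.7, 0.2, 0.4])
w = soft_topk_weights(scores, k=2, temperature=0.01)
print(np.round(w, 3))  # weights near 1 for patches 0 and 2, near 0 elsewhere
```

At low temperature the weighted sum of per-patch computations effectively processes only the k selected patches, which is what removes the need to sweep a sliding window over every location.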
|
2501.10815
|
An Interpretable Measure for Quantifying Predictive Dependence between
Continuous Random Variables -- Extended Version
|
cs.LG math.ST stat.ML stat.TH
|
A fundamental task in statistical learning is quantifying the joint
dependence or association between two continuous random variables. We introduce
a novel, fully non-parametric measure that assesses the degree of association
between continuous variables $X$ and $Y$, capable of capturing a wide range of
relationships, including non-functional ones. A key advantage of this measure
is its interpretability: it quantifies the expected relative loss in predictive
accuracy when the distribution of $X$ is ignored in predicting $Y$. This
measure is bounded within the interval [0,1] and is equal to zero if and only
if $X$ and $Y$ are independent. We evaluate the performance of our measure on
over 90,000 real and synthetic datasets, benchmarking it against leading
alternatives. Our results demonstrate that the proposed measure provides
valuable insights into underlying relationships, particularly in cases where
existing methods fail to capture important dependencies.
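The stated interpretation suggests a simple plug-in estimate: compare the error of predicting $Y$ with and without $X$. The nearest-neighbor sketch below is purely illustrative and is not the authors' estimator:

```python
import numpy as np

def predictive_dependence(x, y, k=5):
    """Estimate the relative reduction in squared prediction error for Y
    when X is used, i.e. 1 - MSE(Y | X) / MSE(Y), clipped to [0, 1].

    Conditional predictions average the k nearest neighbors on each side
    in X (leave-one-out); the marginal prediction is the sample mean.
    Values near 0 suggest independence, values near 1 suggest
    near-deterministic (possibly non-functional) dependence.
    """
    n = len(x)
    ys = y[np.argsort(x)]
    cond_pred = np.empty(n)
    for i in range(n):
        lo, hi = max(0, i - k), min(n, i + k + 1)
        window = np.concatenate([ys[lo:i], ys[i + 1:hi]])
        cond_pred[i] = window.mean()
    mse_cond = np.mean((ys - cond_pred) ** 2)
    return max(0.0, 1.0 - mse_cond / np.var(y))

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 2000)
strong = predictive_dependence(x, np.sin(3 * x) + 0.05 * rng.normal(size=2000))
indep = predictive_dependence(x, rng.normal(size=2000))
print(round(strong, 2), round(indep, 2))  # near 1 vs. clipped to 0
```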
|
2501.10819
|
GAUDA: Generative Adaptive Uncertainty-guided Diffusion-based
Augmentation for Surgical Segmentation
|
cs.CV cs.LG
|
Augmentation by generative modelling yields a promising alternative to the
accumulation of surgical data, where ethical, organisational and regulatory
aspects must be considered. Yet, the joint synthesis of (image, mask) pairs for
segmentation, a major application in surgery, is rather unexplored. We propose
to learn semantically comprehensive yet compact latent representations of the
(image, mask) space, which we jointly model with a Latent Diffusion Model. We
show that our approach can effectively synthesise unseen high-quality paired
segmentation data of remarkable semantic coherence. Generative augmentation is
typically applied before training, by synthesising a fixed number of additional
training samples to improve downstream task models. To enhance this approach,
we further propose Generative Adaptive Uncertainty-guided Diffusion-based
Augmentation (GAUDA), leveraging the epistemic uncertainty of a Bayesian
downstream model for targeted online synthesis. We condition the generative
model on classes with high estimated uncertainty during training to produce
additional unseen samples for these classes. By adaptively utilising the
generative model online, we can minimise the number of additional training
samples and centre them around the currently most uncertain parts of the data
distribution. GAUDA effectively improves downstream segmentation results over
comparable methods by an average absolute IoU of 1.6% on CaDISv2 and 1.5% on
CholecSeg8k, two prominent surgical datasets for semantic segmentation.
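One simple way to realise "condition the generative model on classes with high estimated uncertainty" is to sample conditioning classes in proportion to per-class epistemic uncertainty; the sketch below is an illustrative policy, not necessarily GAUDA's exact one:

```python
import numpy as np

def classes_to_synthesise(class_uncertainty, n_samples, rng=None):
    """Pick conditioning classes for online generative augmentation,
    sampling each class proportionally to its estimated epistemic
    uncertainty so that synthesis concentrates on the currently most
    uncertain parts of the data distribution."""
    rng = rng or np.random.default_rng()
    u = np.asarray(class_uncertainty, dtype=float)
    return rng.choice(len(u), size=n_samples, p=u / u.sum())

# Class 2 is the most uncertain, so it dominates the synthesis budget.
uncert = [0.05, 0.10, 0.70, 0.15]
picks = classes_to_synthesise(uncert, 1000, np.random.default_rng(1))
print(np.bincount(picks, minlength=4) / 1000)  # roughly [.05, .10, .70, .15]
```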
|
2501.10822
|
Addressing Multilabel Imbalance with an Efficiency-Focused Approach
Using Diffusion Model-Generated Synthetic Samples
|
cs.LG cs.AI
|
Predictive models trained on imbalanced data tend to produce biased results.
This problem is exacerbated when there is not just one output label, but a set
of them. This is the case for multilabel learning (MLL) algorithms used to
classify patterns, rank labels, or learn the distribution of outputs. Many
solutions have been proposed in the literature. The one that can be applied
universally, independent of the algorithm used to build the model, is data
resampling. The generation of new instances associated with minority labels, so
that empty areas of the feature space are filled, helps to improve the obtained
models. The quality of these new instances depends on the algorithm used to
generate them. In this paper, a diffusion model tailored to produce new
instances for MLL data, called MLDM (\textit{MultiLabel Diffusion Model}), is
proposed. Diffusion models have been mainly used to generate artificial images
and videos. Our proposed MLDM is based on this type of model. The experiments
conducted compare MLDM with several other MLL resampling algorithms. The
results show that MLDM is competitive while improving efficiency.
|
2501.10824
|
Information Content and Entropy of Finite Patterns from a Combinatorial
Perspective
|
cs.IT cs.DM math.IT
|
We give a unified combinatorial definition of the information content and
entropy of different types of patterns, compatible with the traditional
concepts of information and entropy while going beyond the limitations of
Shannon information, which is interpretable only for ergodic Markov
processes. We compare the information content
of various finite patterns and derive general properties of information
quantity from these comparisons. Using these properties, we define normalized
information estimation methods based on compression algorithms and Kolmogorov
complexity. From a combinatorial point of view, we redefine the concept of
entropy in a way that is asymptotically compatible with traditional entropy.
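A minimal instance of a combinatorial, model-free information measure (illustrative only, not necessarily the paper's definition) counts the sequences sharing a pattern's symbol statistics:

```python
import math

def combinatorial_information(pattern):
    """Information content of a finite pattern, counted as log2 of the
    number of sequences sharing its symbol counts (the multinomial
    coefficient) -- a purely combinatorial quantity needing no
    probabilistic model of the source."""
    n = len(pattern)
    ways = math.factorial(n)
    for s in set(pattern):
        ways //= math.factorial(pattern.count(s))
    return math.log2(ways)

def shannon_entropy(pattern):
    """Empirical (plug-in) Shannon entropy in bits per symbol."""
    n = len(pattern)
    return -sum((pattern.count(s) / n) * math.log2(pattern.count(s) / n)
                for s in set(pattern))

# Per-symbol combinatorial information is bounded by the empirical
# Shannon entropy and approaches it for long patterns (via Stirling).
p = "ababbaabab" * 20
print(combinatorial_information(p) / len(p), shannon_entropy(p))
```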
|
2501.10825
|
Statistical Design of Thermal Protection System Using Physics-Informed
Machine learning
|
cs.CE
|
Estimating the material properties of thermal protection films is crucial for
their effective design and application, particularly in high-temperature
environments. This work presents a novel approach to determine the properties
using uncertainty quantification simulations. We quantify uncertainty in the
material properties for effective insulation by proposing a Bayesian
distribution for them. Sampling from this distribution is performed using Monte
Carlo simulations, which require repeatedly solving the predictive thermal
model. To address the computational inefficiency of conventional numerical
simulations, we develop a parametric Physics-Informed Neural Network (PINN) to
solve the heat transfer problem. The proposed PINN significantly reduces
computational time while maintaining accuracy, as verified against traditional
numerical solutions. Additionally, we use the Sequential Monte Carlo (SMC)
method to enable vectorized and parallel computations, further enhancing the
computational speedup. Our results demonstrate that integrating Markov chain
Monte Carlo (MCMC) sampling with the PINN substantially decreases
computational time compared to standard numerical
methods. Moreover, combining the SMC method with the PINN yields a multifold
computational speedup, making this approach highly effective for the rapid and
accurate estimation of material properties.
|
2501.10827
|
Integrating Expert and Physics Knowledge for Modeling Heat Load in
District Heating Systems
|
eess.SY cs.SY
|
New residential neighborhoods are often supplied with heat via district
heating systems (DHS). Improving the energy efficiency of a DHS is critical for
increasing sustainability and satisfying user requirements. In this paper, we
present HELIOS, a dedicated artificial intelligence (AI) model designed
specifically for modeling the heat load in DHS. HELIOS leverages a combination
of established physical principles and expert knowledge, resulting in superior
performance compared to existing state-of-the-art models. HELIOS is
explainable, enabling enhanced accountability and traceability in its
predictions. We evaluate HELIOS against ten state-of-the-art data-driven models
in modeling the heat load in a DHS case study in the Netherlands. HELIOS
emerges as the top-performing model while maintaining complete accountability.
The applications of HELIOS extend beyond the present case study, potentially
supporting the adoption of AI by DHS and contributing to sustainable energy
management on a larger scale.
|
2501.10834
|
Visual RAG: Expanding MLLM visual knowledge without fine-tuning
|
cs.CV cs.AI cs.LG
|
Multimodal Large Language Models (MLLMs) have achieved notable performance in
computer vision tasks that require reasoning across visual and textual
modalities, yet their capabilities are limited to their pre-trained data,
requiring extensive fine-tuning for updates. Recent research has explored
the use of In-Context Learning (ICL) to overcome these challenges by providing
a set of demonstration examples as context to augment MLLM performance on
several tasks, showing that many-shot ICL leads to substantial improvements
compared to few-shot ICL. However, the reliance on numerous demonstration
examples and the limited context windows of MLLMs present significant obstacles.
This paper aims to address these challenges by introducing a novel approach,
Visual RAG, that synergistically combines the MLLM's capability to learn from
the context with a retrieval mechanism. The crux of this approach is to
augment the MLLM's knowledge by selecting only the demonstration examples most
relevant to the query, pushing it to learn by analogy. In this way, relying on
the new information provided dynamically during inference time, the resulting
system is not limited to the knowledge extracted from the training data, but
can be updated rapidly and easily without fine-tuning. Furthermore, this
greatly reduces the computational costs for improving the model image
classification performance, and augments the model knowledge to new visual
domains and tasks it was not trained for. Extensive experiments on eight
different datasets in the state of the art spanning several domains and image
classification tasks show that the proposed Visual RAG, compared to the most
recent state of the art (i.e., many-shot ICL), is able to obtain an accuracy
that is very close or even higher (approx. +2% improvement on average) while
using a much smaller set of demonstration examples (approx. only 23% on
average).
|
2501.10835
|
Anatomy of a Historic Blackout: Decoding Spatiotemporal Dynamics of
Power Outages and Disparities During Hurricane Beryl
|
cs.CE
|
This study investigates the spatial patterns and temporal variations in
outage duration, intensity, and restoration/recovery following the 2024
Hurricane Beryl in Houston, Texas. This historic blackout caused widespread
power disruptions across the Houston metropolitan area, leaving more than 2
million customers without power over several days, resulting in more than 143
million total customer-out hours. The findings reveal that areas with higher
population density and proximity to the hurricane's path experienced more
severe initial impacts. Regions with higher median income showed faster
recovery, while lower-income areas exhibited prolonged restoration periods,
even with favorable infrastructural conditions, suggesting disparities in
restoration speed. The study also highlights how urban development features,
such as road density and land elevation, explain spatial disparities in power
outage impacts and recovery. This research advances the understanding of power
outage dynamics in large metropolitan regions through four key contributions:
(1) empirical characterization of outages from a historic hurricane,
highlighting infrastructure vulnerabilities in a high-density urban context;
(2) comprehensive analysis using multiple metrics to capture spatiotemporal
dynamics of outages and restoration; (3) leveraging of high-resolution outage
data at fine geographic scales and frequent intervals to quantify and reveal
previously masked spatial disparities; and (4) systematic examination of
socioeconomic, urban development, and environmental factors in shaping
disparities in outage impacts and recovery timelines. These findings provide
infrastructure managers, operators, utilities, and decision-makers with crucial
empirical insights to quantify power outage impacts, justify resilience
investments, and address vulnerability and equity issues in the power
infrastructure during hazard events.
|
2501.10836
|
BAP v2: An Enhanced Task Framework for Instruction Following in
Minecraft Dialogues
|
cs.CL cs.AI
|
Interactive agents capable of understanding and executing instructions in the
physical world have long been a central goal in AI research. The Minecraft
Collaborative Building Task (MCBT) provides one such setting to work towards
this goal (Narayan-Chen, Jayannavar, and Hockenmaier 2019). It is a two-player
game in which an Architect (A) instructs a Builder (B) to construct a target
structure in a simulated Blocks World Environment. We focus on the challenging
Builder Action Prediction (BAP) subtask of predicting correct action sequences
in a given multimodal game context with limited training data (Jayannavar,
Narayan-Chen, and Hockenmaier 2020). We take a closer look at evaluation and
data for the BAP task, discovering key challenges and making significant
improvements on both fronts to propose BAP v2, an upgraded version of the task.
This will allow future work to make more efficient and meaningful progress on
it. BAP v2 comprises: (1) an enhanced evaluation benchmark that includes a
cleaner test set and fairer, more insightful metrics, and (2) additional
synthetic training data generated from novel Minecraft dialogue and target
structure simulators emulating the MCBT. We show that the synthetic data can be
used to train more performant and robust neural models even with relatively
simple training methods. Looking ahead, such data could also be crucial for
training more sophisticated, data-hungry deep transformer models and
training/fine-tuning increasingly large LLMs. Although modeling is not the
primary focus of this work, we also illustrate the impact of our data and
training methodologies on a simple LLM- and transformer-based model, thus
validating the robustness of our approach, and setting the stage for more
advanced architectures and LLMs going forward.
|
2501.10839
|
Systems Engineering for Autonomous Vehicles; Supervising AI using Large
Language Models (SSuperLLM)
|
eess.SY cs.SY
|
Generative Artificial Intelligence (GAI) and the idea of using hierarchical
models have been around for some years now. GAI has proved to be an extremely
useful tool for Autonomous Vehicles (AVs). AVs need to perform robustly in
their environment. Thus the AV behavior and short-term trajectory planning
needs to be: a) designed and architected using safeguarding and supervisory
systems and b) verified using proper Systems Engineering (SysEng) Principles.
Can AV Systems Engineering also use Large Language Models (LLMs) to support
AV development? This reader-friendly paper advocates the use of LLMs in 1)
requirements (Reqs) development and 2) Reqs verification, and 3) provides a
proof-of-concept of AV supervisory control. The latter uses a simulation
environment with a simple planar (bicycle) vehicle dynamics model and a
Linear Quadratic Regulator (LQR) controller with an LLM Application
Programming Interface (API). The open-source simulation software is available
from the author and accessible to readers, so that they can engage with the
AV stack, the LLM API and rules, SysEng and Reqs, and fundamental vehicle
dynamics and control.
|
2501.10841
|
Practical and Ready-to-Use Methodology to Assess the re-identification
Risk in Anonymized Datasets
|
cs.CR cs.AI cs.DB
|
To prove that a dataset is sufficiently anonymized, many privacy policies
suggest that a re-identification risk assessment be performed, but do not
provide a precise methodology for doing so, leaving the industry alone with the
problem. This paper proposes a practical and ready-to-use methodology for
re-identification risk assessment, the originality of which is manifold: (1) it
is the first to follow well-known risk analysis methods (e.g. EBIOS) that have
been used in the cybersecurity field for years, which consider not only the
ability to perform an attack, but also the impact such an attack can have on an
individual; (2) it is the first to qualify attributes and attribute values
with, e.g., a degree of exposure, as known real-world attacks mainly target
certain types of attributes and not others.
|
2501.10842
|
BOOST: Microgrid Sizing using Ordinal Optimization
|
eess.SY cs.SY
|
The transition to sustainable energy systems has highlighted the critical
need for efficient sizing of renewable energy resources in microgrids. In
particular, designing photovoltaic (PV) and battery systems to meet residential
loads is challenging due to trade-offs between cost, reliability, and
environmental impact. While previous studies have employed dynamic programming
and heuristic techniques for microgrid sizing, these approaches often fail to
balance computational efficiency and accuracy. In this work, we propose BOOST,
or Battery-solar Ordinal Optimization Sizing Technique, a novel framework for
optimizing the sizing of PV and battery components in microgrids. Ordinal
optimization enables computationally efficient evaluations of potential designs
while preserving accuracy through robust ranking of solutions. To determine the
optimal operation of the system at any given time, we introduce a mixed-integer
linear programming (MILP) approach, which achieves lower costs than the
commonly used dynamic programming methods. Our numerical experiments
demonstrate that the proposed framework identifies optimal designs that achieve
a levelized cost of energy (LCOE) as low as 8.84 cents/kWh, underscoring its
potential for cost-effective microgrid design. The implications of our work are
significant: BOOST provides a scalable and accurate methodology for integrating
renewable energy into residential microgrids, addressing economic and
environmental goals simultaneously.
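The core ordinal-optimization idea behind BOOST-style sizing, screening many candidate designs cheaply and evaluating only the top-ranked few exactly, can be sketched as follows (a made-up toy cost surface, not the paper's microgrid model):

```python
import numpy as np

def ordinal_select(designs, crude_cost, exact_cost, top_fraction=0.1):
    """Ordinal-optimization-style search: rank all candidate designs
    with a cheap surrogate model, then evaluate only the top-ranked
    fraction with the expensive model. Rankings are far more robust to
    surrogate noise than the cost values themselves, which is what
    makes the screening step safe."""
    crude = np.array([crude_cost(d) for d in designs])
    keep = np.argsort(crude)[: max(1, int(len(designs) * top_fraction))]
    return min((exact_cost(designs[i]), designs[i]) for i in keep)

# Hypothetical toy problem: choose (pv_kw, battery_kwh) minimising a
# made-up LCOE surface with its optimum at (7, 12); the crude model is
# the true cost corrupted by noise.
designs = [(pv, b) for pv in range(1, 21) for b in range(1, 21)]
true_lcoe = lambda d: (d[0] - 7) ** 2 + (d[1] - 12) ** 2 + 8.84
rng = np.random.default_rng(0)
noisy_lcoe = lambda d: true_lcoe(d) + rng.normal(0, 2)
cost, design = ordinal_select(designs, noisy_lcoe, true_lcoe)
print(design, round(cost, 2))  # typically (7, 12) with cost 8.84
```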
|
2501.10848
|
Fake Advertisements Detection Using Automated Multimodal Learning: A
Case Study for Vietnamese Real Estate Data
|
cs.LG cs.AI
|
The popularity of e-commerce has given rise to fake advertisements that can
expose users to financial and data risks while damaging the reputation of these
e-commerce platforms. For these reasons, detecting and removing such fake
advertisements are important for the success of e-commerce websites. In this
paper, we propose FADAML, a novel end-to-end machine learning system to detect
and filter out fake online advertisements. Our system combines techniques in
multimodal machine learning and automated machine learning to achieve a high
detection rate. As a case study, we apply FADAML to detect fake advertisements
on popular Vietnamese real estate websites. Our experiments show that we can
achieve 91.5% detection accuracy, which significantly outperforms three
different state-of-the-art fake news detection systems.
|
2501.10851
|
Exploring Siamese Networks in Self-Supervised Fast MRI Reconstruction
|
eess.IV cs.CV
|
Reconstructing MR images using deep neural networks from undersampled k-space
data without using fully sampled training references offers significant value
in practice, which is a self-supervised regression problem calling for
effective prior knowledge and supervision. Siamese architectures are
motivated by the notion of invariance and show promising results in
unsupervised visual representation learning. Building homologous transformed
images and avoiding trivial solutions are two major challenges in
Siamese-based self-supervised models. In this work, we explore the Siamese
architecture for MRI reconstruction in a self-supervised training fashion,
called SiamRecon. We show that the proposed approach mimics an
expectation-maximization algorithm. The alternating optimization provides an
effective supervision signal and avoids collapse. The proposed SiamRecon
achieves state-of-the-art reconstruction
accuracy in the field of self-supervised learning on both single-coil brain MRI
and multi-coil knee MRI.
|
2501.10854
|
Achievable DoF Bounds for Cache-Aided Asymmetric MIMO Communications
|
cs.IT eess.SP math.IT
|
Integrating coded caching (CC) into multiple-input multiple-output (MIMO)
communications can significantly enhance the achievable degrees of freedom
(DoF) in wireless networks. This paper investigates a practical cache-aided
asymmetric MIMO configuration with cache ratio $\gamma$, where a server
equipped with $L$ transmit antennas communicates with $K$ users, each having
$G_k$ receive antennas. We propose three content-aware MIMO-CC strategies: the
\emph{min-G} scheme, which treats the system as symmetric by assuming all users
have the same number of antennas, equal to the smallest among them; the
\emph{Grouping} scheme, which maximizes spatial multiplexing gain separately
within each user subset at the cost of some global caching gain; and the
\emph{Phantom} scheme, which dynamically redistributes spatial resources using
virtual or ``phantom'' antennas at the users, bridging the performance gains of
the min-$G$ and Grouping schemes. These strategies jointly optimize the number
of users, $\Omega$, and the parallel streams decoded by each user, $\beta_k$,
ensuring linear decodability for all target users. Analytical and numerical
results confirm that the proposed schemes achieve significant DoF improvements
across various system configurations.
|
2501.10857
|
Learning Nonverbal Cues in Multiparty Social Interactions for Robotic
Facilitators
|
cs.RO cs.LG
|
Conventional behavior cloning (BC) models often struggle to replicate the
subtleties of human actions. Previous studies have attempted to address this
issue through the development of a new BC technique: Implicit Behavior Cloning
(IBC). This new technique consistently outperformed the conventional Mean
Squared Error (MSE) BC models in a variety of tasks. Our goal is to replicate
the performance of the IBC model by Florence [in Proceedings of the 5th
Conference on Robot Learning, 164:158-168, 2022], for social interaction tasks
using our custom dataset. While previous studies have explored the use of large
language models (LLMs) for enhancing group conversations, they often overlook
the significance of non-verbal cues, which constitute a substantial part of
human communication. We propose using IBC to replicate nonverbal cues like gaze
behaviors. The model is evaluated against various types of facilitator data and
compared to an explicit, MSE BC model. Results show that the IBC model
outperforms the MSE BC model across session types using the same metrics used
in the previous IBC paper. Despite some metrics showing mixed results, which
are explainable given the custom social-interaction dataset, we successfully
replicated the IBC model to generate nonverbal cues. Our contributions are (1)
the replication and extension of the IBC model, and (2) a nonverbal cues
generation model for social interaction. These advancements facilitate the
integration of robots into the complex interactions between robots and humans,
e.g., in the absence of a human facilitator.
|
2501.10858
|
Reliable Text-to-SQL with Adaptive Abstention
|
cs.DB cs.AI
|
Large language models (LLMs) have revolutionized natural language interfaces
for databases, particularly in text-to-SQL conversion. However, current
approaches often generate unreliable outputs when faced with ambiguity or
insufficient context. We present Reliable Text-to-SQL (RTS), a novel framework
that enhances query generation reliability by incorporating abstention and
human-in-the-loop mechanisms. RTS focuses on the critical schema linking phase,
which aims to identify the key database elements needed for generating SQL
queries. It autonomously detects potential errors during the answer generation
process and responds by either abstaining or engaging in user interaction. A
vital component of RTS is the Branching Point Prediction (BPP) module, which
utilizes statistical conformal techniques on the hidden layers of the LLM for
schema linking, providing probabilistic guarantees on schema-linking accuracy.
We validate our approach through comprehensive experiments on the BIRD
benchmark, demonstrating significant improvements in robustness and
reliability. Our findings highlight the potential of combining transparent-box
LLMs with human-in-the-loop processes to create more robust natural language
interfaces for databases. For the BIRD benchmark, our approach achieves
near-perfect schema linking accuracy, autonomously involving a human when
needed. Combined with query generation, we demonstrate that near-perfect schema
linking and a small query generation model can almost match SOTA accuracy
achieved with a model orders of magnitude larger than the one we use.
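BPP's probabilistic guarantee builds on standard split-conformal calibration. Below is a generic sketch of such an abstention rule, with synthetic nonconformity scores standing in for scores derived from LLM hidden states:

```python
import numpy as np

def conformal_threshold(cal_scores, alpha=0.05):
    """Split-conformal threshold: if calibration and test scores are
    exchangeable, a new *correct* answer's nonconformity score falls
    below this threshold with probability at least 1 - alpha."""
    s = np.sort(cal_scores)
    k = int(np.ceil((len(s) + 1) * (1 - alpha))) - 1  # 0-based index
    return s[min(k, len(s) - 1)]

def decide(score, threshold):
    """Abstain (and involve a human) when the answer looks nonconforming."""
    return "abstain" if score > threshold else "answer"

# Toy calibration set of nonconformity scores; BPP instead derives
# such scores from the LLM's hidden layers, which we do not model here.
rng = np.random.default_rng(0)
cal = rng.exponential(1.0, size=500)
t = conformal_threshold(cal, alpha=0.05)
print(decide(0.3, t), decide(t + 5.0, t))  # answer abstain
```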
|
2501.10859
|
Which price to pay? Auto-tuning building MPC controller for optimal
economic cost
|
eess.SY cs.LG cs.SY math.OC
|
Model predictive control (MPC) is considered for temperature management in
buildings, but its performance heavily depends on hyperparameters.
Consequently, MPC necessitates meticulous hyperparameter tuning to attain
optimal performance under diverse contracts. However, conventional building
controller design is an open-loop process without critical hyperparameter
optimization, often leading to suboptimal performance due to unexpected
environmental disturbances and modeling errors. Furthermore, these
hyperparameters are not adapted to different pricing schemes and may lead to
non-economic operations. To address these issues, we propose an efficient
performance-oriented building MPC controller tuning method based on a
cutting-edge efficient constrained Bayesian optimization algorithm, CONFIG,
with global optimality guarantees. We demonstrate that this technique can be
applied to efficiently deal with real-world DSM program selection problems
under customized black-box constraints and objectives. In this study, a simple
MPC controller, which offers the advantages of reduced commissioning costs and
enhanced computational efficiency, was optimized to perform at a level
comparable to a delicately designed and computationally expensive MPC
controller.
The results also indicate that with an optimized simple MPC, the monthly
electricity cost of a household can be reduced by up to 26.90% compared with
the cost when controlled by a basic rule-based controller under the same
constraints. Then we compared 12 real electricity contracts in Belgium for a
household family with customized black-box occupant comfort constraints. The
results indicate a monthly electricity bill saving up to 20.18% when the most
economic contract is compared with the worst one, which again illustrates the
significance of choosing a proper electricity contract.
|
2501.10860
|
Zero-shot and Few-shot Learning with Instruction-following LLMs for
Claim Matching in Automated Fact-checking
|
cs.CL cs.AI
|
The claim matching (CM) task can benefit an automated fact-checking pipeline
by putting together claims that can be resolved with the same fact-check. In
this work, we are the first to explore zero-shot and few-shot learning
approaches to the task. We consider CM as a binary classification task and
experiment with a set of instruction-following large language models
(GPT-3.5-turbo, Gemini-1.5-flash, Mistral-7B-Instruct, and
Llama-3-8B-Instruct), investigating prompt templates. We introduce a new CM
dataset, ClaimMatch, which will be released upon acceptance. We put LLMs to the
test in the CM task and find that it can be tackled by leveraging more mature
yet similar tasks such as natural language inference or paraphrase detection.
We also propose a pipeline for CM, which we evaluate on texts of different
lengths.
|
2501.10861
|
Dynamic Continual Learning: Harnessing Parameter Uncertainty for
Improved Network Adaptation
|
cs.LG cs.AI
|
When fine-tuning Deep Neural Networks (DNNs) to new data, DNNs are prone to
overwriting network parameters required for task-specific functionality on
previously learned tasks, resulting in a loss of performance on those tasks. We
propose using parameter-based uncertainty to determine which parameters are
relevant to a network's learned function and regularize training to prevent
change in these important parameters. We approach this regularization in two
ways: (1), we constrain critical parameters from significant changes by
associating more critical parameters with lower learning rates, thereby
limiting alterations in those parameters; (2), important parameters are
restricted from change by imposing a higher regularization weighting, causing
parameters to revert to their states prior to the learning of subsequent tasks.
We leverage a Bayesian Moment Propagation framework which learns network
parameters concurrently with their associated uncertainties while allowing each
parameter to contribute uncertainty to the network's predictive distribution,
avoiding the pitfalls of existing sampling-based methods. The proposed approach
is evaluated for common sequential benchmark datasets and compared to existing
published approaches from the Continual Learning community. Ultimately, we show
improved Continual Learning performance for Average Test Accuracy and Backward
Transfer metrics compared to sampling-based methods and other
non-uncertainty-based approaches.
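Approach (1) can be sketched as per-parameter learning rates scaled by posterior variance; this is an illustrative rule, not the paper's exact regularizer:

```python
import numpy as np

def uncertainty_scaled_update(params, grads, sigma2, base_lr=0.1):
    """Continual-learning-style update: parameters the posterior is
    certain about (small variance sigma2) are treated as critical to
    previously learned tasks and receive proportionally smaller steps,
    while uncertain parameters remain free to adapt to new data."""
    lr = base_lr * sigma2 / sigma2.max()
    return params - lr * grads

params = np.array([1.0, 1.0, 1.0])
grads = np.array([1.0, 1.0, 1.0])
sigma2 = np.array([0.01, 0.5, 1.0])  # first parameter is most critical
print(uncertainty_scaled_update(params, grads, sigma2))
# the critical parameter barely moves; the uncertain one takes a full step
```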
|
2501.10866
|
QGAPHEnsemble : Combining Hybrid QLSTM Network Ensemble via Adaptive
Weighting for Short Term Weather Forecasting
|
cs.LG
|
Accurate weather forecasting holds significant importance, serving as a
crucial tool for decision-making in various industrial sectors. The limitations
of statistical models, assuming independence among data points, highlight the
need for advanced methodologies. The correlation between meteorological
variables necessitates models capable of capturing complex dependencies. This
research highlights the practical efficacy of employing advanced machine
learning techniques, proposing the GenHybQLSTM and BO-QEnsemble architectures
based on an adaptive weight-adjustment strategy. Through comprehensive
hyper-parameter
optimization using hybrid quantum genetic particle swarm optimisation algorithm
and Bayesian Optimization, our model demonstrates a substantial improvement in
the accuracy and reliability of meteorological predictions through the
assessment of performance metrics such as MSE (Mean Squared Error) and MAPE
(Mean Absolute Percentage Error). The paper highlights the importance of
optimized ensemble techniques for improving performance on the given weather
forecasting task.
|
2501.10868
|
Generating Structured Outputs from Language Models: Benchmark and
Studies
|
cs.CL cs.AI
|
Reliably generating structured outputs has become a critical capability for
modern language model (LM) applications. Constrained decoding has emerged as
the dominant technology across sectors for enforcing structured outputs during
generation. Despite its growing adoption, there has been little systematic
evaluation of the behaviors and performance of constrained decoding.
Constrained decoding frameworks have standardized around JSON Schema as a
structured data format, with most uses guaranteeing constraint compliance given
a schema. However, there is poor understanding of the effectiveness of the
methods in practice. We present an evaluation framework to assess constrained
decoding approaches across three critical dimensions: efficiency in generating
constraint-compliant outputs, coverage of diverse constraint types, and quality
of the generated outputs. To facilitate this evaluation, we introduce
JSONSchemaBench, a benchmark for constrained decoding comprising 10K real-world
JSON schemas that encompass a wide range of constraints with varying
complexity. We pair the benchmark with the existing official JSON Schema Test
Suite and evaluate six state-of-the-art constrained decoding frameworks,
including Guidance, Outlines, Llamacpp, XGrammar, OpenAI, and Gemini. Through
extensive experiments, we gain insights into the capabilities and limitations
of constrained decoding on structured generation with real-world JSON schemas.
Our work provides actionable insights for improving constrained decoding
frameworks and structured generation tasks, setting a new standard for
evaluating constrained decoding and structured generation. We release
JSONSchemaBench at https://github.com/guidance-ai/jsonschemabench
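As a rough illustration of what constraint compliance means here, the sketch
below validates a decoded JSON output against a tiny subset of JSON Schema
(`type`, `required`, `properties`). Frameworks such as Guidance, Outlines, or
XGrammar enforce the schema during decoding; this hedged, after-the-fact check
is only meant to make the notion of compliance concrete:

```python
import json

# Map a small subset of JSON Schema "type" keywords to Python types.
TYPE_MAP = {"object": dict, "array": list, "string": str,
            "number": (int, float), "integer": int, "boolean": bool}

def complies(instance, schema):
    # Check "type", then "required" keys, then recurse into "properties".
    t = schema.get("type")
    if t and not isinstance(instance, TYPE_MAP[t]):
        return False
    if t == "integer" and isinstance(instance, bool):
        return False  # bool is a subclass of int in Python; reject it here
    for key in schema.get("required", []):
        if not isinstance(instance, dict) or key not in instance:
            return False
    for key, sub in schema.get("properties", {}).items():
        if isinstance(instance, dict) and key in instance and not complies(instance[key], sub):
            return False
    return True

# Hypothetical schema and two candidate LM outputs.
schema = {"type": "object", "required": ["city", "temp"],
          "properties": {"city": {"type": "string"},
                         "temp": {"type": "number"}}}
good = json.loads('{"city": "Oslo", "temp": -3.5}')
bad = json.loads('{"city": "Oslo"}')
```

A full JSON Schema validator handles many more keywords (enums, patterns,
array items, composition); the benchmark's schemas exercise exactly that
breadth.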
|
2501.10869
|
Diffusion-Based Imitation Learning for Social Pose Generation
|
cs.LG cs.RO
|
Intelligent agents, such as robots and virtual agents, must understand the
dynamics of complex social interactions to interact with humans. Effectively
representing social dynamics is challenging because we require multi-modal,
synchronized observations to understand a scene. We explore how using a single
modality, the pose behavior, of multiple individuals in a social interaction
can be used to generate nonverbal social cues for the facilitator of that
interaction. The facilitator acts to make a social interaction proceed smoothly
and is an essential role for intelligent agents to replicate in human-robot
interactions. In this paper, we adapt an existing diffusion behavior cloning
model to learn and replicate facilitator behaviors. Furthermore, we evaluate
two representations of pose observations from a scene, one representation has
pre-processing applied and one does not. The purpose of this paper is twofold.
The first is to introduce a new use of diffusion behavior cloning for pose
generation in social interactions. The second is to understand the
relationship between performance and computational load when generating social
pose behavior using two different techniques for collecting scene
observations. In effect, we are testing the effectiveness of two different
types of conditioning for a diffusion model. We then evaluate the resulting
generated behavior from
each technique using quantitative measures such as mean per-joint position
error (MPJPE), training time, and inference time. Additionally, we plot
training and inference time against MPJPE to examine the trade-offs between
efficiency and performance. Our results suggest that the further pre-processed
data can successfully condition diffusion models to generate realistic social
behavior, with reasonable trade-offs in accuracy and processing time.
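The MPJPE metric used above has a simple definition: the average Euclidean
distance between corresponding joints of a predicted and a ground-truth pose.
A minimal sketch, with poses given as lists of (x, y, z) joint coordinates and
all names illustrative:

```python
import math

def mpjpe(pred_pose, true_pose):
    # Mean per-joint position error: average Euclidean distance between
    # corresponding joints of the predicted and ground-truth poses.
    assert len(pred_pose) == len(true_pose), "poses must have the same joints"
    total = 0.0
    for p, t in zip(pred_pose, true_pose):
        total += math.dist(p, t)  # Euclidean distance for this joint
    return total / len(pred_pose)
```

In practice the metric is averaged again over frames and sequences, and often
reported in millimeters after aligning the root joint.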
|
2501.10870
|
Model-Robust and Adaptive-Optimal Transfer Learning for Tackling Concept
Shifts in Nonparametric Regression
|
stat.ML cs.LG
|
When concept shifts and sample scarcity are present in the target domain of
interest, nonparametric regression learners often struggle to generalize
effectively. The technique of transfer learning remedies these issues by
leveraging data or pre-trained models from similar source domains. While
existing generalization analyses of kernel-based transfer learning typically
rely on correctly specified models, we present a transfer learning procedure
that is robust against model misspecification while adaptively attaining
optimality. To facilitate our analysis and avoid the risk of saturation found
in classical misspecified results, we establish a novel result in the
misspecified single-task learning setting, showing that spectral algorithms
with fixed-bandwidth Gaussian kernels can attain minimax convergence rates
given that the true function lies in a Sobolev space, a result which may be of
independent interest. Building on this, we derive the adaptive convergence rates of the
excess risk for specifying Gaussian kernels in a prevalent class of hypothesis
transfer learning algorithms. Our results are minimax optimal up to logarithmic
factors and elucidate the key determinants of transfer efficiency.
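One concrete member of the spectral-algorithm family analyzed above is kernel
ridge regression with a fixed-bandwidth Gaussian kernel. The following
self-contained sketch (bandwidth, regularization, and data are illustrative
assumptions) fits the dual coefficients and predicts at a new point:

```python
import math

def gauss_kernel(x, z, bandwidth=1.0):
    # Fixed-bandwidth Gaussian (RBF) kernel on the real line.
    return math.exp(-((x - z) ** 2) / (2 * bandwidth ** 2))

def solve(A, b):
    # Gaussian elimination with partial pivoting for a small dense system.
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def krr_fit(xs, ys, lam=0.1):
    # Solve (K + lam * I) alpha = y for the dual coefficients alpha.
    n = len(xs)
    K = [[gauss_kernel(xs[i], xs[j]) + (lam if i == j else 0.0)
          for j in range(n)] for i in range(n)]
    return solve(K, ys)

def krr_predict(xs, alpha, x_new):
    # f_hat(x_new) = sum_i alpha_i * k(x_i, x_new)
    return sum(a * gauss_kernel(x, x_new) for a, x in zip(alpha, xs))
```

The paper's point is that this estimator remains rate-optimal even when the
Gaussian kernel's RKHS does not contain the true Sobolev-smooth function, i.e.
under misspecification; the sketch only shows the mechanics of the estimator.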
|
2501.10871
|
Enhancing User Intent for Recommendation Systems via Large Language
Models
|
cs.IR
|
Recommendation systems play a critical role in enhancing user experience and
engagement in various online platforms. Traditional methods, such as
Collaborative Filtering (CF) and Content-Based Filtering (CBF), rely heavily on
past user interactions or item features. However, these models often fail to
capture the dynamic and evolving nature of user preferences. To address these
limitations, we propose DUIP (Dynamic User Intent Prediction), a novel
framework that combines LSTM networks with Large Language Models (LLMs) to
dynamically capture user intent and generate personalized item recommendations.
The LSTM component models the sequential and temporal dependencies of user
behavior, while the LLM utilizes the LSTM-generated prompts to predict the next
item of interest. Experimental results on three diverse datasets (ML-1M,
Games, and Bundle) show that DUIP outperforms a wide range of baseline models,
demonstrating its ability to handle the cold-start problem and real-time intent
adaptation. The integration of dynamic prompts based on recent user
interactions allows DUIP to provide more accurate, context-aware, and
personalized recommendations. Our findings suggest that DUIP is a promising
approach for next-generation recommendation systems, with potential for further
improvements in cross-modal recommendations and scalability.
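The "dynamic prompt" idea above can be made concrete with a small sketch:
recent interactions plus a text rendering of the LSTM's inferred intent are
templated into a prompt for the LLM. The template, the summarization step, and
all names are placeholder assumptions, not DUIP's exact design:

```python
def build_dynamic_prompt(recent_items, lstm_intent_summary):
    # recent_items: the user's last few interactions, newest last.
    # lstm_intent_summary: a short text rendering of the LSTM's intent state
    # (the real system derives this from the LSTM hidden state, not a string).
    history = ", ".join(recent_items)
    return (f"The user recently interacted with: {history}. "
            f"Inferred intent: {lstm_intent_summary}. "
            f"Predict the single next item the user is most likely to want.")

prompt = build_dynamic_prompt(
    ["Portal 2", "Half-Life", "Black Mesa"],
    "interest in narrative first-person puzzle/shooter games")
```

Because the prompt is rebuilt from the latest interactions at request time, the
recommendation can track intent shifts without retraining the backbone model.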
|
2501.10875
|
RIS Deployment Optimization with Iterative Detection and Decoding in
Multiuser Multiple-Antenna Systems
|
cs.IT eess.SP math.IT
|
This work investigates a Reconfigurable Intelligent Surface (RIS)-assisted
uplink system employing iterative detection and decoding (IDD) techniques. We
analyze the impact of system parameter tuning for several deployment
configurations, including the number of users, access point (AP) antennas, and
RIS elements on the IDD performance. Analytical results for both active and
passive RIS in a single-input single-output (SISO) scenario demonstrate how
deployment choices affect system performance. Numerical simulations confirm the
robustness of the RIS-assisted IDD system to variations in these parameters,
showing performance gains in certain configurations. Moreover, the findings
indicate that the insights derived from SISO analysis extend to multiuser MIMO
IDD systems.
|
2501.10876
|
Certifying Robustness via Topological Representations
|
stat.ML cs.CG cs.LG
|
We propose a neural network architecture that can learn discriminative
geometric representations of data from persistence diagrams, common descriptors
of Topological Data Analysis. The learned representations enjoy Lipschitz
stability with a controllable Lipschitz constant. In adversarial learning, this
stability can be used to certify $\epsilon$-robustness for samples in a
dataset, which we demonstrate on the ORBIT5K dataset representing the orbits of
a discrete dynamical system.
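The certification step described above follows a standard Lipschitz argument:
if the learned score function is L-Lipschitz and the margin between the top
class and the runner-up at a sample is m, then no perturbation of norm below
m / (2L) can flip the prediction. A minimal sketch of that check (the constant
and scores are illustrative):

```python
def certified_radius(scores, lipschitz_const):
    # Largest perturbation norm that provably cannot change the argmax,
    # given per-class scores from an L-Lipschitz function.
    ranked = sorted(scores, reverse=True)
    margin = ranked[0] - ranked[1]
    return margin / (2.0 * lipschitz_const)

def is_certified(scores, lipschitz_const, epsilon):
    # True if the prediction is provably stable under any perturbation
    # of norm at most epsilon.
    return certified_radius(scores, lipschitz_const) > epsilon
```

Controlling the network's Lipschitz constant, as the architecture above does,
directly enlarges the certifiable radius for a fixed margin.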
|
2501.10877
|
Distributed Quasi-Newton Method for Fair and Fast Federated Learning
|
cs.LG
|
Federated learning (FL) is a promising technology that enables edge
devices/clients to collaboratively and iteratively train a machine learning
model under the coordination of a central server. The most common approach to
FL is first-order methods, where clients send their local gradients to the
server in each iteration. However, these methods often suffer from slow
convergence rates. As a remedy, second-order methods, such as quasi-Newton, can
be employed in FL to accelerate its convergence. Unfortunately, similarly to
the first-order FL methods, the application of second-order methods in FL can
lead to unfair models, achieving high average accuracy while performing poorly
on certain clients' local datasets. To tackle this issue, in this paper we
introduce a novel second-order FL framework, dubbed \textbf{d}istributed
\textbf{q}uasi-\textbf{N}ewton \textbf{fed}erated learning (DQN-Fed). This
approach seeks to ensure fairness while leveraging the fast convergence
properties of quasi-Newton methods in the FL context. Specifically, DQN-Fed
helps the server update the global model in such a way that (i) all local loss
functions decrease to promote fairness, and (ii) the rate of change in local
loss functions aligns with that of the quasi-Newton method. We prove the
convergence of DQN-Fed and demonstrate its \textit{linear-quadratic}
convergence rate. Moreover, we validate the efficacy of DQN-Fed across a range
of federated datasets, showing that it surpasses state-of-the-art fair FL
methods in fairness, average accuracy and convergence speed.
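Condition (i) above, that all local losses decrease, amounts to requiring the
server's update direction to be a common descent direction: its inner product
with every client's local gradient must be positive. A hedged sketch of that
check (not the full quasi-Newton update; names are illustrative):

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def is_common_descent(direction, client_grads):
    # Stepping along -direction decreases every client's loss to first order
    # exactly when <direction, g_i> > 0 for all local gradients g_i.
    return all(dot(direction, g) > 0 for g in client_grads)

def average_gradient(client_grads):
    # Plain FedAvg-style aggregation, shown for contrast: it need not
    # satisfy the common-descent condition when client gradients conflict.
    n = len(client_grads)
    return [sum(g[i] for g in client_grads) / n
            for i in range(len(client_grads[0]))]
```

When client gradients conflict, the plain average can fail this test, which is
the unfairness a method like DQN-Fed is designed to avoid while still matching
quasi-Newton rates of change.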
|