| id | title | categories | abstract |
|---|---|---|---|
2501.15087
|
PatchRec: Multi-Grained Patching for Efficient LLM-based Sequential
Recommendation
|
cs.IR
|
Large Language Models for sequential recommendation (LLM4SR), which recast
user-item interactions as language modeling, have shown promising results.
However, due to the limitations of context window size and the computational
costs associated with Large Language Models (LLMs), current approaches
primarily truncate user history by only considering the textual information of
items from the most recent interactions in the input prompt. This truncation
fails to fully capture the long-term behavioral patterns of users. To address
this, we propose a multi-grained patching framework -- PatchRec. It compresses
the textual tokens of an item title into a compact item patch, and further
compresses multiple item patches into a denser session patch, with earlier
interactions being compressed to a greater degree. The framework consists of
two stages: (1) Patch Pre-training, which familiarizes LLMs with item-level
compression patterns, and (2) Patch Fine-tuning, which teaches LLMs to model
sequences at multiple granularities. Through this simple yet effective
approach, empirical results demonstrate that PatchRec outperforms existing
methods, achieving significant performance gains with fewer tokens fed to the
LLM. Specifically, PatchRec shows up to a 32% improvement in HR@20 on the
Goodreads dataset over the uncompressed baseline, while using only 7% of the
tokens. This multi-grained sequence modeling paradigm, with an adjustable
compression ratio, enables LLMs to be efficiently deployed in real-world
recommendation systems that handle extremely long user behavior sequences.
|
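The patching idea above can be illustrated with a toy sketch (the mean-pooling operator, the patch sizes, and all function names are assumptions for illustration, not PatchRec's actual implementation): each item's token embeddings are pooled into an item patch, and older item patches are pooled again into coarser session patches, so earlier history is compressed to a greater degree.

```python
import numpy as np

def compress_history(item_token_embs, session_size=4):
    """Toy multi-grained patching: mean-pool each item's token embeddings
    into one item patch, then pool older item patches (session_size at a
    time) into coarser session patches, so earlier history is compressed
    to a greater degree."""
    item_patches = [np.mean(tokens, axis=0) for tokens in item_token_embs]
    recent = item_patches[-session_size:]            # recent items stay fine-grained
    older = item_patches[:-session_size]
    session_patches = [
        np.mean(older[i:i + session_size], axis=0)   # one coarse patch per session
        for i in range(0, len(older), session_size)
    ]
    return session_patches + recent

# 10 interacted items, each with 6 token embeddings of dimension 8
history = [np.random.randn(6, 8) for _ in range(10)]
patches = compress_history(history)
print(len(patches))  # 6: two session patches for the 6 oldest items + 4 recent item patches
```

The sequence fed to the model shrinks from 10 x 6 token vectors to 6 patch vectors, with the adjustable `session_size` playing the role of the compression ratio.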
2501.15089
|
LongReason: A Synthetic Long-Context Reasoning Benchmark via Context
Expansion
|
cs.CL
|
Large language models (LLMs) have demonstrated remarkable progress in
understanding long-context inputs. However, benchmarks for evaluating the
long-context reasoning abilities of LLMs have not kept pace. Existing
benchmarks often focus on a narrow range of tasks or those that do not demand
complex reasoning. To address this gap and enable a more comprehensive
evaluation of the long-context reasoning capabilities of current LLMs, we
propose a new synthetic benchmark, LongReason, which is constructed by
synthesizing long-context reasoning questions from a varied set of
short-context reasoning questions through context expansion. LongReason
consists of 794 multiple-choice reasoning questions with diverse reasoning
patterns across three task categories: reading comprehension, logical
inference, and mathematical word problems. We evaluate 21 LLMs on LongReason,
revealing that most models experience significant performance drops as context
length increases. Our further analysis shows that even state-of-the-art LLMs
still have significant room for improvement in providing robust reasoning
across different tasks. We will open-source LongReason to support the
comprehensive evaluation of LLMs' long-context reasoning capabilities.
|
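The context-expansion construction can be sketched as follows (the function and the fact/distractor framing are illustrative assumptions, not LongReason's pipeline): the facts a short reasoning question depends on are scattered among irrelevant passages, so answering requires locating and combining them across a long context.

```python
import random

def expand_context(question_facts, distractors, target_len=12, seed=0):
    """Toy context expansion: scatter the facts a short reasoning question
    depends on among irrelevant distractor passages, forcing retrieval
    over a long context."""
    rng = random.Random(seed)
    filler = [rng.choice(distractors)
              for _ in range(target_len - len(question_facts))]
    passages = question_facts + filler
    rng.shuffle(passages)                  # facts land at arbitrary positions
    return "\n\n".join(passages)

facts = ["Alice has 3 apples.", "Bob gives Alice 2 more apples."]
noise = ["The weather was mild.", "A train left at noon.", "The cafe was busy."]
context = expand_context(facts, noise)
print(all(f in context for f in facts))  # True: every needed fact survives expansion
```

Growing `target_len` lengthens the context without changing the underlying reasoning problem, which is what lets the benchmark isolate context length as the variable.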
2501.15090
|
Speech Translation Refinement using Large Language Models
|
cs.CL
|
Recent advancements in large language models (LLMs) have demonstrated their
remarkable capabilities across various language tasks. Inspired by the success
of text-to-text translation refinement, this paper investigates how LLMs can
improve the performance of speech translation by introducing a joint refinement
process. Through the joint refinement of speech translation (ST) and automatic
speech recognition (ASR) transcription via LLMs, the performance of the ST
model is significantly improved in both training-free in-context learning and
parameter-efficient fine-tuning scenarios. Additionally, we explore the effect
of document-level context on refinement under the context-aware fine-tuning
scenario. Experimental results on the MuST-C and CoVoST 2 datasets, which
include seven translation tasks, demonstrate the effectiveness of the proposed
approach using several popular LLMs including GPT-3.5-turbo, LLaMA3-8B, and
Mistral-12B. Further analysis suggests that jointly refining both
transcription and translation yields better performance compared to refining
translation alone. Meanwhile, incorporating document-level context
significantly enhances refinement performance. We release our code and datasets
on GitHub.
|
2501.15091
|
Deep Reinforcement Learning for Energy Efficiency Maximization in
RSMA-IRS-Assisted ISAC System
|
cs.IT eess.SP math.IT
|
This paper proposes a three-dimensional (3D) geometry-based channel model to
accurately represent intelligent reflecting surfaces (IRS)-enhanced integrated
sensing and communication (ISAC) networks using rate-splitting multiple access
(RSMA) in practical urban environments. Based on this model, we formulate an
energy efficiency (EE) maximization problem that incorporates transceiver
beamforming constraints, IRS phase adjustments, and quality-of-service (QoS)
requirements to optimize communication and sensing functions. To solve this
problem, we use the proximal policy optimization (PPO) algorithm within a deep
reinforcement learning (DRL) framework. Our numerical results confirm the
effectiveness of the proposed method in improving EE and satisfying QoS
requirements. Additionally, we observe that system EE drops at higher
frequencies, especially under double-Rayleigh fading.
|
2501.15096
|
Towards Better Robustness: Progressively Joint Pose-3DGS Learning for
Arbitrarily Long Videos
|
cs.CV
|
3D Gaussian Splatting (3DGS) has emerged as a powerful representation due to
its efficiency and high-fidelity rendering. However, 3DGS training requires a
known camera pose for each input view, typically obtained by
Structure-from-Motion (SfM) pipelines. Pioneering works have attempted to relax
this restriction but still face difficulties when handling long sequences with
complex camera trajectories. In this work, we propose Rob-GS, a robust
framework to progressively estimate camera poses and optimize 3DGS for
arbitrarily long video sequences. Leveraging the inherent continuity of videos,
we design an adjacent pose tracking method to ensure stable pose estimation
between consecutive frames. To handle arbitrarily long inputs, we adopt a
"divide and conquer" scheme that adaptively splits the video sequence into
several segments and optimizes them separately. Extensive experiments on the
Tanks and Temples dataset and our collected real-world dataset show that our
Rob-GS outperforms state-of-the-art methods.
|
2501.15098
|
CFT-RAG: An Entity Tree Based Retrieval Augmented Generation Algorithm
With Cuckoo Filter
|
cs.LG cs.AI
|
Although retrieval-augmented generation (RAG) significantly improves
generation quality by retrieving external knowledge and integrating it into
generated content, it faces computational efficiency bottlenecks, particularly
in knowledge retrieval tasks involving hierarchical structures, as in Tree-RAG.
This paper proposes a Tree-RAG acceleration method based on the improved Cuckoo
Filter, which optimizes entity localization during the retrieval process to
achieve significant performance improvements. Tree-RAG effectively organizes
entities through the introduction of a hierarchical tree structure, while the
Cuckoo Filter serves as an efficient data structure that supports rapid
membership queries and dynamic updates. Experimental results demonstrate that
our method is much faster than naive Tree-RAG while maintaining high levels of
generative quality. When the number of trees is large, our method is hundreds
of times faster than naive Tree-RAG. Our work is available at
https://github.com/TUPYP7180/CFT-RAG-2025.
|
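The core data structure can be sketched with a minimal cuckoo filter (this is the textbook structure, not the paper's improved variant): short fingerprints are stored in one of two candidate buckets, giving O(1) membership queries with eviction-based insertion.

```python
import hashlib
import random

class CuckooFilter:
    """Minimal cuckoo filter: approximate set membership with O(1) queries.
    Each item's short fingerprint lives in one of two candidate buckets."""

    def __init__(self, num_buckets=128, bucket_size=4, max_kicks=500):
        self.buckets = [[] for _ in range(num_buckets)]
        self.num_buckets = num_buckets
        self.bucket_size = bucket_size
        self.max_kicks = max_kicks

    def _fingerprint(self, item):
        return hashlib.sha256(item.encode()).hexdigest()[:4]

    def _index(self, key):
        return int(hashlib.md5(key.encode()).hexdigest(), 16) % self.num_buckets

    def _alt_index(self, i, fp):
        # Partial-key cuckoo hashing: the alternate bucket is derived from
        # the current bucket index and the fingerprint alone.
        return (i ^ self._index(fp)) % self.num_buckets

    def insert(self, item):
        fp = self._fingerprint(item)
        i1 = self._index(item)
        i2 = self._alt_index(i1, fp)
        for i in (i1, i2):
            if len(self.buckets[i]) < self.bucket_size:
                self.buckets[i].append(fp)
                return True
        # Both buckets full: evict ("kick") fingerprints to their alternates.
        i = random.choice((i1, i2))
        for _ in range(self.max_kicks):
            j = random.randrange(len(self.buckets[i]))
            fp, self.buckets[i][j] = self.buckets[i][j], fp
            i = self._alt_index(i, fp)
            if len(self.buckets[i]) < self.bucket_size:
                self.buckets[i].append(fp)
                return True
        return False  # give up: the filter is considered full

    def contains(self, item):
        fp = self._fingerprint(item)
        i1 = self._index(item)
        i2 = self._alt_index(i1, fp)
        return fp in self.buckets[i1] or fp in self.buckets[i2]

cf = CuckooFilter()
for entity in ["protein", "enzyme", "kinase"]:
    cf.insert(entity)
# Inserted entities are always found; absent ones are rejected with high probability.
print(cf.contains("kinase"), cf.contains("unseen-entity"))
```

Unlike a Bloom filter, fingerprints can also be deleted, which is what supports the dynamic updates mentioned above.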
2501.15099
|
Bringing RGB and IR Together: Hierarchical Multi-Modal Enhancement for
Robust Transmission Line Detection
|
cs.CV cs.LG
|
Ensuring a stable power supply in rural areas relies heavily on effective
inspection of power equipment, particularly transmission lines (TLs). However,
detecting TLs from aerial imagery can be challenging when dealing with
misalignments between visible light (RGB) and infrared (IR) images, as well as
mismatched high- and low-level features in convolutional networks. To address
these limitations, we propose a novel Hierarchical Multi-Modal Enhancement
Network (HMMEN) that integrates RGB and IR data for robust and accurate TL
detection. Our method introduces two key components: (1) a Mutual Multi-Modal
Enhanced Block (MMEB), which fuses and enhances hierarchical RGB and IR feature
maps in a coarse-to-fine manner, and (2) a Feature Alignment Block (FAB) that
corrects misalignments between decoder outputs and IR feature maps by
leveraging deformable convolutions. We employ MobileNet-based encoders for both
RGB and IR inputs to accommodate edge-computing constraints and reduce
computational overhead. Experimental results on diverse weather and lighting
conditions (fog, night, snow, and daytime) demonstrate the superiority and
robustness of our approach compared to state-of-the-art methods, resulting in
fewer false positives, enhanced boundary delineation, and better overall
detection performance. This framework thus shows promise for practical
large-scale power line inspections with unmanned aerial vehicles.
|
2501.15103
|
Each Rank Could be an Expert: Single-Ranked Mixture of Experts LoRA for
Multi-Task Learning
|
cs.LG cs.AI
|
Low-Rank Adaptation (LoRA) is widely used for adapting large language models
(LLMs) to specific domains due to its efficiency and modularity. However,
vanilla LoRA struggles with task conflicts in multi-task scenarios. Recent
works adopt Mixture of Experts (MoE) by treating each LoRA module as an expert,
thereby mitigating task interference through multiple specialized LoRA modules.
While effective, these methods often isolate knowledge within individual tasks,
failing to fully exploit the shared knowledge across related tasks. In this
paper, we establish a connection between single LoRA and multi-LoRA MoE,
integrating them into a unified framework. We demonstrate that the dynamic
routing of multiple LoRAs is functionally equivalent to rank partitioning and
block-level activation within a single LoRA. We further empirically demonstrate
that finer-grained LoRA partitioning, within the same total and activated
parameter constraints, leads to better performance gains across heterogeneous
tasks. Building on these findings, we propose Single-ranked Mixture of Experts
LoRA (\textbf{SMoRA}), which embeds MoE into LoRA by \textit{treating each rank
as an independent expert}. With a \textit{dynamic rank-wise activation}
mechanism, SMoRA promotes finer-grained knowledge sharing while mitigating task
conflicts. Experiments demonstrate that SMoRA activates fewer parameters yet
achieves better performance in multi-task scenarios.
|
2501.15105
|
A New Approach for Knowledge Generation Using Active Inference
|
cs.AI q-bio.NC
|
Various models have been proposed for how knowledge is generated in the human
brain, including the semantic networks model. Although this model has been
widely studied and computational implementations exist, it was formed around
semantic memory and declarative knowledge, so its application is largely
limited to semantic knowledge and it struggles to explain procedural and
conditional knowledge. Given the importance of an appropriate model of
knowledge generation, especially for improving human cognitive functions or
building intelligent machines, improving existing models or providing more
comprehensive ones is of great value. In the current study, based on the free
energy principle of the brain, we propose a model for generating three types of
knowledge: declarative, procedural, and conditional. Beyond explaining these
types of knowledge, the model can compute and generate concepts from stimuli
using probabilistic mathematics and the action-perception process (active
inference). The model learns without supervision: as a generative model, it
updates itself from combinations of stimuli and can form new concepts from
unlabeled input. In this model, the active inference process generates
procedural and conditional knowledge, while the perception process generates
declarative knowledge.
|
2501.15106
|
In-Context Operator Learning for Linear Propagator Models
|
q-fin.TR cs.LG math.OC q-fin.CP
|
We study operator learning in the context of linear propagator models for
optimal order execution problems with transient price impact à la Bouchaud et
al. (2004) and Gatheral (2010). Transient price impact persists and decays over
time according to some propagator kernel. Specifically, we propose to use
In-Context Operator Networks (ICON), a novel transformer-based neural network
architecture introduced by Yang et al. (2023), which facilitates data-driven
learning of operators by merging offline pre-training with an online few-shot
prompting inference. First, we train ICON to learn the operator from various
propagator models that maps the trading rate to the induced transient price
impact. The inference step is then based on in-context prediction, where ICON
is presented only with a few examples. We illustrate that ICON is capable of
accurately inferring the underlying price impact model from the data prompts,
even with propagator kernels not seen in the training data. In a second step,
we employ the pre-trained ICON model provided with context as a surrogate
operator in solving an optimal order execution problem via a neural network
control policy, and demonstrate that the exact optimal execution strategies
from Abi Jaber and Neuman (2022) for the models generating the context are
correctly retrieved. The methodology we introduce is very general, offering a new
approach to solving optimal stochastic control problems with unknown state
dynamics, inferred data-efficiently from a limited number of examples by
leveraging the few-shot and transfer learning capabilities of transformer
networks.
|
2501.15108
|
Knowledge Hierarchy Guided Biological-Medical Dataset Distillation for
Domain LLM Training
|
cs.CL
|
The rapid advancement of large language models (LLMs) in biological-medical
applications has highlighted a gap between their potential and the limited
scale and often low quality of available open-source annotated textual
datasets. In addition, the inherent complexity of the biomedical knowledge
hierarchy significantly hampers efforts to bridge this gap. Can LLMs themselves
play a pivotal role in overcoming this limitation? Motivated by this question,
we investigate this challenge in the present study. We propose a framework that
automates the distillation of high-quality textual training data from the
extensive scientific literature. Our approach self-evaluates and generates
questions that are more closely aligned with the biomedical domain, guided by
the biomedical knowledge hierarchy through medical subject headings (MeSH).
This comprehensive framework establishes an automated workflow, thereby
eliminating the need for manual intervention. Furthermore, we conducted
comprehensive experiments to evaluate the impact of our framework-generated
data on downstream language models of varying sizes. Our approach substantially
improves question-answering tasks compared to pre-trained models from the life
sciences domain and powerful closed-source models such as GPT-4. Notably, the
generated AI-Ready dataset enabled the Llama3-70B base model to outperform
GPT-4 with MedPrompt, despite GPT-4 having many times more parameters. Detailed
case studies and ablation experiments underscore the significance of each
component within our framework.
|
2501.15109
|
Clear Preferences Leave Traces: Reference Model-Guided Sampling for
Preference Learning
|
cs.LG cs.AI
|
Direct Preference Optimization (DPO) has emerged as a de-facto approach for
aligning language models with human preferences. Recent work has shown DPO's
effectiveness relies on training data quality. In particular, clear quality
differences between preferred and rejected responses enhance learning
performance. Current methods for identifying and obtaining such high-quality
samples demand additional resources or external models. We discover that
reference model probability space naturally detects high-quality training
samples. Using this insight, we present a sampling strategy that achieves
consistent improvements (+0.1 to +0.4) on MT-Bench while using less than half
(30-50%) of the training data. We observe substantial improvements (+0.4 to
+0.98) for technical tasks (coding, math, and reasoning) across multiple models
and hyperparameter settings.
|
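The reference-model-guided selection can be sketched as follows (the margin-based scoring rule, the stand-in scorer, and all names are assumptions, not the paper's exact criterion): pairs whose chosen and rejected responses are clearly separated in the reference model's probability space are kept for training.

```python
def select_clear_pairs(pairs, ref_logprob, keep_frac=0.5):
    """Toy reference-guided sampling: score each (chosen, rejected) pair by
    the reference model's log-probability margin and keep only the pairs
    with the clearest quality difference."""
    scored = [(ref_logprob(c) - ref_logprob(r), c, r) for c, r in pairs]
    scored.sort(key=lambda t: t[0], reverse=True)    # largest margin first
    k = max(1, int(len(scored) * keep_frac))
    return [(c, r) for _, c, r in scored[:k]]

# Hypothetical stand-in scorer: longer responses get higher "reference" log-prob.
toy_ref = lambda text: -1.0 / (1 + len(text))
pairs = [("a good detailed answer", "bad"),
         ("ok", "an equally plausible alternative"),
         ("thorough, correct solution", "short wrong reply"),
         ("fine", "also fine")]
kept = select_clear_pairs(pairs, toy_ref, keep_frac=0.5)
print(len(kept))  # 2: only the pairs with a clear reference-model margin remain
```

In practice `ref_logprob` would be the frozen DPO reference model's sequence log-probability; the point of the sketch is that no extra reward model or external judge is needed to rank pairs.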
2501.15111
|
HumanOmni: A Large Vision-Speech Language Model for Human-Centric Video
Understanding
|
cs.CV
|
In human-centric scenes, the ability to simultaneously understand visual and
auditory information is crucial. While recent omni models can process multiple
modalities, they generally lack effectiveness in human-centric scenes due to
the absence of large-scale specialized datasets and the use of non-targeted
architectures. In this work, we developed HumanOmni, the industry's first
human-centric Omni-multimodal large language model. We constructed a dataset
containing over 2.4 million human-centric video clips with detailed captions
and more than 14 million instructions, facilitating the understanding of
diverse human-centric scenes. HumanOmni includes three specialized branches for
understanding different types of scenes. It adaptively fuses features from
these branches based on user instructions, significantly enhancing visual
understanding in scenes centered around individuals. Moreover, HumanOmni
integrates audio features to ensure a comprehensive understanding of
environments and individuals. Our experiments validate HumanOmni's advanced
capabilities in handling human-centric scenes across a variety of tasks,
including emotion recognition, facial expression description, and action
understanding. Our model will be open-sourced to facilitate further development
and collaboration within both academia and industry.
|
2501.15113
|
Task-KV: Task-aware KV Cache Optimization via Semantic Differentiation
of Attention Heads
|
cs.CL
|
KV cache is a widely used acceleration technique for large language model
(LLM) inference. However, its memory requirement grows rapidly with input
length. Previous studies have reduced the size of KV cache by either removing
the same number of unimportant tokens for all attention heads or by allocating
differentiated KV cache budgets for pre-identified attention heads. However,
because the importance of attention heads varies across tasks, these
pre-identified heads fail to adapt effectively to various downstream
tasks. To address this issue, we propose Task-KV, a method that leverages the
semantic differentiation of attention heads to allocate differentiated KV cache
budgets across various tasks. We demonstrate that attention heads far from the
semantic center (called heterogeneous heads) make a significant contribution
to task outputs and semantic understanding. In contrast, the remaining heads
mainly aggregate important information and focus the model's reasoning.
Task-KV allocates full KV cache budget to heterogeneous heads to preserve
comprehensive semantic information, while reserving a small number of recent
tokens and attention sinks for non-heterogeneous heads. Furthermore, we
innovatively introduce middle activations to preserve key contextual
information aggregated from non-heterogeneous heads. To dynamically perceive
semantic differences among attention heads, we design a semantic separator to
distinguish heterogeneous heads from non-heterogeneous ones based on their
distances from the semantic center. Experimental results on multiple benchmarks
and different model architectures demonstrate that Task-KV significantly
outperforms existing baseline methods.
|
2501.15118
|
ABXI: Invariant Interest Adaptation for Task-Guided Cross-Domain
Sequential Recommendation
|
cs.IR
|
Cross-Domain Sequential Recommendation (CDSR) has recently gained attention
for countering data sparsity by transferring knowledge across domains. A common
approach merges domain-specific sequences into cross-domain sequences, serving
as bridges to connect domains. One key challenge is to correctly extract the
shared knowledge among these sequences and appropriately transfer it. Most
existing works directly transfer unfiltered cross-domain knowledge rather than
extracting domain-invariant components and adaptively integrating them into
domain-specific modeling. Another challenge lies in aligning the
domain-specific and cross-domain sequences. Existing methods align these
sequences based on timestamps, but this approach can cause prediction
mismatches when the current tokens and their targets belong to different
domains. In such cases, the domain-specific knowledge carried by the current
tokens may degrade performance. To address these challenges, we propose the
A-B-Cross-to-Invariant Learning Recommender (ABXI). Specifically, leveraging
LoRA's effectiveness for efficient adaptation, ABXI incorporates two types of
LoRAs to facilitate knowledge adaptation. First, all sequences are processed
through a shared encoder that employs a domain LoRA for each sequence, thereby
preserving unique domain characteristics. Next, we introduce an invariant
projector that extracts domain-invariant interests from cross-domain
representations, utilizing an invariant LoRA to adapt these interests into
modeling each specific domain. Besides, to avoid prediction mismatches, all
domain-specific sequences are aligned to match the domains of the cross-domain
ground truths. Experimental results on three datasets demonstrate that our
approach outperforms other CDSR counterparts by a large margin. The code is
available at https://github.com/DiMarzioBian/ABXI.
|
2501.15119
|
Efficient Video Neural Network Processing Based on Motion Estimation
|
cs.CV eess.IV
|
Video neural network (VNN) processing using the conventional pipeline first
converts Bayer video information into human-understandable RGB videos using
image signal processing (ISP) on a pixel-by-pixel basis. Then, VNN processing
is performed on a frame-by-frame basis. Both ISP and VNN are computationally
expensive with high power consumption and latency. In this paper, we propose an
efficient VNN processing framework. Instead of using ISP, computer vision tasks
are directly accomplished using Bayer pattern information. To accelerate VNN
processing, motion estimation is introduced to find temporal redundancies in
input video data so as to avoid repeated and unnecessary computations.
Experiments show greater than 67\% computation reduction, while maintaining
computer vision task accuracy for typical computer vision tasks and data sets.
|
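The temporal-redundancy idea can be sketched with block-wise frame differencing (a crude stand-in for real motion estimation; all names and thresholds are illustrative): expensive per-block computation is skipped for blocks unchanged since the previous frame.

```python
import numpy as np

def process_video(frames, block=8, thresh=1e-3, expensive=lambda b: b.mean()):
    """Toy temporal-redundancy skipping: recompute `expensive` only for
    blocks whose content changed since the previous frame; unchanged
    blocks reuse cached results."""
    h, w = frames[0].shape
    cache, outputs, computed = {}, [], 0
    for frame in frames:
        out = {}
        for y in range(0, h, block):
            for x in range(0, w, block):
                blk = frame[y:y + block, x:x + block]
                prev = cache.get((y, x))
                if prev is None or np.abs(blk - prev[0]).mean() > thresh:
                    cache[(y, x)] = (blk.copy(), expensive(blk))
                    computed += 1                    # block actually processed
                out[(y, x)] = cache[(y, x)][1]
        outputs.append(out)
    return outputs, computed

frames = [np.zeros((16, 16)) for _ in range(5)]
for fr in frames[3:]:
    fr[0:8, 0:8] = 1.0                               # one block changes at frame 3
_, computed = process_video(frames)
print(computed)  # 5: 4 blocks in the first frame + 1 changed block later
```

Without the cache, 5 frames x 4 blocks would require 20 evaluations; exploiting temporal redundancy cuts that to 5, mirroring the computation reduction the abstract reports.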
2501.15120
|
Technology Mapping with Large Language Models
|
cs.IR cs.DB cs.ET cs.LG
|
In today's fast-evolving business landscape, having insight into the
technology stacks that organizations use is crucial for forging partnerships,
uncovering market openings, and informing strategic choices. However,
conventional technology mapping, which typically hinges on keyword searches,
struggles with the sheer scale and variety of data available, often failing to
capture nascent technologies. To overcome these hurdles, we present STARS
(Semantic Technology and Retrieval System), a novel framework that harnesses
Large Language Models (LLMs) and Sentence-BERT to pinpoint relevant
technologies within unstructured content, build comprehensive company profiles,
and rank each firm's technologies according to their operational importance. By
integrating entity extraction with Chain-of-Thought prompting and employing
semantic ranking, STARS provides a precise method for mapping corporate
technology portfolios. Experimental results show that STARS markedly boosts
retrieval accuracy, offering a versatile and high-performance solution for
cross-industry technology mapping.
|
2501.15122
|
Snapshot Compressed Imaging Based Single-Measurement Computer Vision for
Videos
|
cs.CV cs.AI
|
Snapshot compressive imaging (SCI) is a promising technique for capturing
high-speed video at low bandwidth and low power, typically by compressing
multiple frames into a single measurement. However, similar to traditional
CMOS-image-sensor-based imaging systems, SCI also faces challenges in low-light,
photon-limited, and low-signal-to-noise-ratio conditions. In this paper,
we propose a novel Compressive Denoising Autoencoder (CompDAE) using the
STFormer architecture as the backbone, to explicitly model noise
characteristics and provide computer vision functionalities such as edge
detection and depth estimation directly from compressed sensing measurements,
while accounting for realistic low-photon conditions. We evaluate the
effectiveness of CompDAE across various datasets and demonstrate significant
improvements in task performance compared to conventional RGB-based methods.
Under ultra-low lighting (APC $\leq$ 20), where conventional methods fail, the
proposed algorithm still maintains competitive performance.
|
2501.15125
|
FreqMoE: Enhancing Time Series Forecasting through Frequency
Decomposition Mixture of Experts
|
cs.LG
|
Long-term time series forecasting is essential in areas like finance and
weather prediction. Besides traditional methods that operate in the time
domain, many recent models transform time series data into the frequency domain
to better capture complex patterns. However, these methods often use filtering
techniques to remove certain frequency signals as noise, which may
unintentionally discard important information and reduce prediction accuracy.
To address this, we propose the Frequency Decomposition Mixture of Experts
(FreqMoE) model, which dynamically decomposes time series data into frequency
bands, each processed by a specialized expert. A gating mechanism adjusts the
importance of each expert's output based on frequency characteristics, and the
aggregated results are fed into a prediction module that iteratively refines
the forecast using residual connections. Our experiments demonstrate that
FreqMoE outperforms state-of-the-art models, achieving the best performance on
51 out of 70 metrics across all tested datasets, while significantly reducing
the number of required parameters to under 50k, providing notable efficiency
advantages.
|
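The frequency-band decomposition can be sketched in NumPy (identity experts and a fixed gate replace the learned per-band expert networks and learned gate of the actual model; all names are illustrative):

```python
import numpy as np

def freq_band_mix(x, n_bands=3, gate_logits=None):
    """Toy frequency-decomposition mixture: split a series into rFFT bands,
    treat each band's inverse transform as one "expert", and blend the
    experts with softmax gate weights."""
    spec = np.fft.rfft(x)
    edges = np.linspace(0, len(spec), n_bands + 1).astype(int)
    experts = []
    for b in range(n_bands):
        masked = np.zeros_like(spec)
        masked[edges[b]:edges[b + 1]] = spec[edges[b]:edges[b + 1]]
        experts.append(np.fft.irfft(masked, n=len(x)))
    logits = np.zeros(n_bands) if gate_logits is None else np.asarray(gate_logits, float)
    w = np.exp(logits) / np.exp(logits).sum()        # softmax gate
    return sum(wi * e for wi, e in zip(w, experts))

t = np.linspace(0, 1, 128, endpoint=False)
x = np.sin(2 * np.pi * 3 * t) + 0.3 * np.sin(2 * np.pi * 40 * t)
low_heavy = freq_band_mix(x, gate_logits=[5.0, 0.0, -5.0])  # emphasize the low band
print(np.allclose(3 * freq_band_mix(x), x))  # True: bands partition x under a uniform gate
```

Because the bands partition the spectrum rather than filtering parts of it out, no frequency content is discarded outright; the gate only reweights it, which is the design point the abstract makes against filtering-based methods.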
2501.15128
|
MAP-based Problem-Agnostic diffusion model for Inverse Problems
|
eess.IV cs.CV
|
Diffusion models have shown great promise in solving inverse problems
in image processing. In this paper, we propose a novel, problem-agnostic
diffusion model called the maximum a posteriori (MAP)-based guided term
estimation method for inverse problems. We divide the conditional score
function into two terms according to Bayes' rule: the unconditional score
function and the guided term. We design the MAP-based guided term estimation
method, while the unconditional score function is approximated by an existing
score network. To estimate the guided term, we rely on the assumption that the
space of clean natural images is inherently smooth, and introduce a MAP
estimate of the $t$-th latent variable. We then substitute this estimation into
the expression of the inverse problem and obtain the approximation of the
guided term. We evaluate our method extensively on super-resolution,
inpainting, and denoising tasks, and demonstrate comparable performance to
DDRM, DMPS, DPS and $\Pi$GDM.
|
2501.15129
|
EvoRL: A GPU-accelerated Framework for Evolutionary Reinforcement
Learning
|
cs.NE
|
Evolutionary Reinforcement Learning (EvoRL) has emerged as a promising
approach to overcoming the limitations of traditional reinforcement learning
(RL) by integrating the Evolutionary Computation (EC) paradigm with RL.
However, the population-based nature of EC significantly increases
computational costs, thereby restricting the exploration of algorithmic design
choices and scalability in large-scale settings. To address this challenge, we
introduce \texttt{EvoRL}, the first end-to-end EvoRL framework
optimized for GPU acceleration. The framework executes the entire training
pipeline on accelerators, including environment simulations and EC processes,
leveraging hierarchical parallelism through vectorization and compilation
techniques to achieve superior speed and scalability. This design enables the
efficient training of large populations on a single machine. In addition to its
performance-oriented design, \texttt{EvoRL} offers a comprehensive
platform for EvoRL research, encompassing implementations of traditional RL
algorithms (e.g., A2C, PPO, DDPG, TD3, SAC), Evolutionary Algorithms (e.g.,
CMA-ES, OpenES, ARS), and hybrid EvoRL paradigms such as Evolutionary-guided RL
(e.g., ERL, CEM-RL) and Population-Based AutoRL (e.g., PBT). The framework's
modular architecture and user-friendly interface allow researchers to
seamlessly integrate new components, customize algorithms, and conduct fair
benchmarking and ablation studies. The project is open-source and available at:
https://github.com/EMI-Group/evorl.
|
2501.15130
|
Community Detection in Large-Scale Complex Networks via Structural
Entropy Game
|
cs.SI
|
Community detection is a critical task in graph theory, social network
analysis, and bioinformatics, where communities are defined as clusters of
densely interconnected nodes. However, detecting communities in large-scale
networks with millions of nodes and billions of edges remains challenging due
to the inefficiency and unreliability of existing methods. Moreover, many
current approaches are limited to specific graph types, such as unweighted or
undirected graphs, reducing their broader applicability. To address these
issues, we propose a novel heuristic community detection algorithm, termed
CoDeSEG, which identifies communities by minimizing the two-dimensional (2D)
structural entropy of the network within a potential game framework. In the
game, nodes decide to stay in their current community or move to another based
strategy that maximizes the 2D structural entropy utility function.
Additionally, we introduce a structural entropy-based node overlapping
heuristic for detecting overlapping communities, with a near-linear time
complexity. Experimental results on real-world networks demonstrate that CoDeSEG
is the fastest method available and achieves state-of-the-art performance in
overlapping normalized mutual information (ONMI) and F1 score.
|
2501.15131
|
Difference vs. Quotient: A Novel Algorithm for Dominant Eigenvalue
Problem
|
math.OC cs.LG
|
The computation of the dominant eigenvector of symmetric positive
semidefinite matrices is a cornerstone operation in numerous machine learning
applications. Traditional approaches predominantly rely on the constrained
Quotient formulation, which underpins most existing methods. However, these
methods often suffer from challenges related to computational efficiency and
dependence on spectral prior knowledge. This paper introduces a novel
perspective by reformulating the eigenvalue problem using an unconstrained
Difference formulation. This new approach sheds light on classical methods,
revealing that the power method can be interpreted as a specific instance of
Difference of Convex Algorithms. Building on this insight, we develop a
generalized family of Difference-Type methods, which encompasses the power
method as a special case. Within this family, we propose the Split-Merge
algorithm, which achieves maximal acceleration without spectral prior knowledge
and operates solely through matrix-vector products, making it both efficient
and easy to implement. Extensive empirical evaluations on both synthetic and
real-world datasets highlight that the Split-Merge algorithm achieves over a
$\boldsymbol{10\times}$ speedup compared to the basic power method, offering
significant advancements in efficiency and practicality for large-scale machine
learning problems.
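As a point of reference for the Difference-Type family, the basic power method that Split-Merge is benchmarked against can be sketched in a few lines of plain Python (an illustrative baseline only, not the proposed algorithm):

```python
def matvec(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def power_method(A, num_iters=200):
    """Dominant eigenpair of a symmetric PSD matrix via power iteration."""
    v = [1.0] * len(A)
    for _ in range(num_iters):
        w = matvec(A, v)
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]  # normalize to avoid overflow
    # Rayleigh quotient estimates the dominant eigenvalue
    lam = sum(vi * wi for vi, wi in zip(v, matvec(A, v)))
    return lam, v
```

For the symmetric tridiagonal matrix [[2,1,0],[1,3,1],[0,1,2]], whose eigenvalues are 1, 2, and 4, the iteration converges to the dominant eigenvalue 4; like the Split-Merge algorithm, it relies only on matrix-vector products.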
|
2501.15138
|
TranStable: Towards Robust Pixel-level Online Video Stabilization by
Jointing Transformer and CNN
|
cs.CV
|
Video stabilization often struggles with distortion and excessive cropping.
This paper proposes a novel end-to-end framework, named TranStable, to address
these challenges, comprising a generator and a discriminator. We establish
TransformerUNet (TUNet) as the generator to utilize the Hierarchical Adaptive
Fusion Module (HAFM), integrating Transformer and CNN to leverage both global
and local features across multiple visual cues. By modeling frame-wise
relationships, it generates robust pixel-level warping maps for stable
geometric transformations. Furthermore, we design the Stability Discriminator
Module (SDM), which provides pixel-wise supervision of authenticity and
consistency during training, ensuring a more complete field of view while
minimizing jitter artifacts and enhancing visual fidelity. Extensive
experiments on NUS, DeepStab, and Selfie benchmarks demonstrate
state-of-the-art performance.
|
2501.15140
|
Analyzing and Boosting the Power of Fine-Grained Visual Recognition for
Multi-modal Large Language Models
|
cs.CV cs.AI cs.CL cs.LG
|
Multi-modal large language models (MLLMs) have shown remarkable abilities in
various visual understanding tasks. However, MLLMs still struggle with
fine-grained visual recognition (FGVR), which aims to identify
subordinate-level categories from images. This can negatively impact more
advanced capabilities of MLLMs, such as object-centric visual question
answering and reasoning. In our study, we revisit three quintessential
capabilities of MLLMs for FGVR, namely object information extraction,
category knowledge reserve, and object-category alignment, and position the
root cause as a misalignment problem. To address this issue, we present Finedefics,
an MLLM that enhances the model's FGVR capability by incorporating informative
attribute descriptions of objects into the training phase. We employ
contrastive learning on object-attribute pairs and attribute-category pairs
simultaneously and use examples from similar but incorrect categories as hard
negatives, naturally bringing representations of visual objects and category
names closer. Extensive evaluations across multiple popular FGVR datasets
demonstrate that Finedefics outperforms existing MLLMs of comparable parameter
sizes, showcasing its remarkable efficacy. The code is available at
https://github.com/PKU-ICST-MIPL/Finedefics_ICLR2025.
|
2501.15142
|
DAGPrompT: Pushing the Limits of Graph Prompting with a
Distribution-aware Graph Prompt Tuning Approach
|
cs.LG cs.AI
|
The pre-train then fine-tune approach has advanced GNNs by enabling general
knowledge capture without task-specific labels. However, an objective gap
between pre-training and downstream tasks limits its effectiveness. Recent
graph prompting methods aim to close this gap through task reformulations and
learnable prompts. Despite this, they struggle with complex graphs like
heterophily graphs. Freezing the GNN encoder can reduce the impact of
prompting, while simple prompts fail to handle diverse hop-level distributions.
This paper identifies two key challenges in adapting graph prompting methods
for complex graphs: (1) adapting the model to new distributions in downstream
tasks to mitigate pre-training and fine-tuning discrepancies from heterophily
and (2) customizing prompts for hop-specific node requirements. To overcome
these challenges, we propose Distribution-aware Graph Prompt Tuning
(DAGPrompT), which integrates a GLoRA module for optimizing the GNN encoder's
projection matrix and message-passing schema through low-rank adaptation.
DAGPrompT also incorporates hop-specific prompts accounting for varying graph
structures and distributions among hops. Evaluations on 10 datasets and 14
baselines demonstrate that DAGPrompT improves accuracy by up to 4.79 points in node
and graph classification tasks, setting a new state-of-the-art while preserving
efficiency. Codes are available at GitHub.
|
2501.15144
|
Exploring Primitive Visual Measurement Understanding and the Role of
Output Format in Learning in Vision-Language Models
|
cs.CV
|
This work investigates the capabilities of current vision-language models
(VLMs) in visual understanding and attribute measurement of primitive shapes
using a benchmark focused on controlled 2D shape configurations with variations
in spatial positioning, occlusion, rotation, size, and shape attributes such as
type, quadrant, center-coordinates, rotation, occlusion status, and color as
shown in Figure 1 and supplementary Figures S3-S81. We fine-tune
state-of-the-art VLMs (2B-8B parameters) using Low-Rank Adaptation (LoRA) and
validate them on multiple out-of-domain (OD) scenarios from our proposed
benchmark. Our findings reveal that coherent sentence-based outputs outperform
tuple formats, particularly in OD scenarios with large domain gaps.
Additionally, we demonstrate that scaling numeric tokens during loss
computation enhances numerical approximation capabilities, further improving
performance on spatial and measurement tasks. These results highlight the
importance of output format design, loss scaling strategies, and robust
generalization techniques in enhancing the training and fine-tuning of VLMs,
particularly for tasks requiring precise spatial approximations and strong OD
generalization.
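The loss-scaling idea can be illustrated with a toy weighted cross-entropy in which numeric tokens receive a larger weight. The digit-detection rule and the weight value below are illustrative assumptions, not the paper's exact scheme:

```python
import math

def weighted_token_loss(logprobs, targets, tokens, numeric_weight=2.0):
    """Cross-entropy over a token sequence, up-weighting numeric tokens.

    logprobs: list of dicts mapping token -> log-probability at each step.
    targets:  list of gold tokens.
    tokens:   decoded token strings (used to detect numeric tokens).
    """
    total, weight_sum = 0.0, 0.0
    for lp, tgt, tok in zip(logprobs, targets, tokens):
        # Assumed rule: any all-digit token counts as numeric
        w = numeric_weight if tok.strip().isdigit() else 1.0
        total += -w * lp[tgt]
        weight_sum += w
    return total / weight_sum
```

With `numeric_weight > 1`, gradient signal concentrates on the digits of coordinates and measurements, which is the effect the abstract credits for improved numerical approximation.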
|
2501.15147
|
A Causality-aware Paradigm for Evaluating Creativity of Multimodal Large
Language Models
|
cs.AI cs.HC
|
Recently, numerous benchmarks have been developed to evaluate the logical
reasoning abilities of large language models (LLMs). However, assessing the
equally important creative capabilities of LLMs is challenging due to the
subjective, diverse, and data-scarce nature of creativity, especially in
multimodal scenarios. In this paper, we consider the comprehensive pipeline for
evaluating the creativity of multimodal LLMs, with a focus on suitable
evaluation platforms and methodologies. First, we identify the Oogiri game, a
creativity-driven task requiring humor, associative thinking, and the ability
to produce unexpected responses to text, images, or both. This game aligns well
with the input-output structure of modern multimodal LLMs and benefits from a
rich repository of high-quality, human-annotated creative responses, making it
an ideal platform for studying LLM creativity. Next, beyond using the Oogiri
game for standard evaluations like ranking and selection, we propose LoTbench,
an interactive, causality-aware evaluation framework, to further address some
intrinsic risks in standard evaluations, such as information leakage and
limited interpretability. The proposed LoTbench not only quantifies LLM
creativity more effectively but also visualizes the underlying creative thought
processes. Our results show that while most LLMs exhibit constrained
creativity, the performance gap between LLMs and humans is not insurmountable.
Furthermore, we observe a strong correlation between results from the
multimodal cognition benchmark MMMU and LoTbench, but only a weak connection
with traditional creativity metrics. This suggests that LoTbench better aligns
with human cognitive theories, highlighting cognition as a critical foundation
in the early stages of creativity and enabling the bridging of diverse
concepts. https://lotbench.github.io
|
2501.15149
|
Mapping Galaxy Images Across Ultraviolet, Visible and Infrared Bands
Using Generative Deep Learning
|
astro-ph.IM astro-ph.GA cs.AI
|
We demonstrate that generative deep learning can translate galaxy
observations across ultraviolet, visible, and infrared photometric bands.
Leveraging mock observations from the Illustris simulations, we develop and
validate a supervised image-to-image model capable of performing both band
interpolation and extrapolation. The resulting trained models exhibit high
fidelity in generating outputs, as verified by both general image comparison
metrics (MAE, SSIM, PSNR) and specialized astronomical metrics (GINI
coefficient, M20). Moreover, we show that our model can be used to predict
real-world observations, using data from the DECaLS survey as a case study.
These findings highlight the potential of generative learning to augment
astronomical datasets, enabling efficient exploration of multi-band information
in regions where observations are incomplete. This work opens new pathways for
optimizing mission planning, guiding high-resolution follow-ups, and enhancing
our understanding of galaxy morphology and evolution.
|
2501.15151
|
SpikSSD: Better Extraction and Fusion for Object Detection with Spiking
Neuron Networks
|
cs.CV
|
As the third generation of neural networks, Spiking Neural Networks (SNNs)
have gained widespread attention due to their low energy consumption and
biological interpretability. Recently, SNNs have made considerable advancements
in computer vision. However, efficiently conducting feature extraction and
fusion under the spiking characteristics of SNNs for object detection remains a
pressing challenge. To address this problem, we propose the SpikSSD, a novel
Spiking Single Shot Multibox Detector. Specifically, we design a full-spiking
backbone network, MDS-ResNet, which effectively adjusts the membrane synaptic
input distribution at each layer, achieving better spiking feature extraction.
Additionally, for spiking feature fusion, we introduce the Spiking Bi-direction
Fusion Module (SBFM), which for the first time realizes bi-directional fusion of
spiking features, enhancing the multi-scale detection capability of the model.
Experimental results show that SpikSSD achieves 40.8% mAP on the GEN1 dataset,
76.3% and 52.4% mAP@0.5 on VOC 2007 and COCO 2017 datasets respectively with
the lowest firing rate, outperforming existing SNN-based approaches at ultralow
energy consumption. This work sets a new benchmark for future research in
SNN-based object detection. Our code is publicly available in
https://github.com/yimeng-fan/SpikSSD.
|
2501.15157
|
Median of Forests for Robust Density Estimation
|
stat.ML cs.LG
|
Robust density estimation refers to the consistent estimation of the density
function even when the data is contaminated by outliers. We find that existing
forest density estimation at a certain point is inherently resistant to the
outliers outside the cells containing the point, which we call
\textit{non-local outliers}, but not to the remaining \textit{local outliers}.
To achieve robustness against all outliers, we propose an ensemble
learning algorithm called \textit{medians of forests for robust density
estimation} (\textit{MFRDE}), which adopts a pointwise median operation on
forest density estimators fitted on subsampled datasets. Compared to existing
robust kernel-based methods, MFRDE enables us to choose larger subsampling
sizes, sacrificing less accuracy for density estimation while achieving
robustness. On the theoretical side, we introduce the local outlier exponent to
quantify the number of local outliers. Under this exponent, we show that even
if the number of outliers reaches a certain polynomial order in the sample
size, MFRDE is able to achieve almost the same convergence rate as the same
algorithm on uncontaminated data, whereas robust kernel-based methods fail. On
the practical side, real data experiments show that MFRDE outperforms existing
robust kernel-based methods. Moreover, we apply MFRDE to anomaly detection to
showcase a further application.
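The pointwise-median construction behind MFRDE can be sketched with simple histogram density estimators standing in for the forest estimators (an illustrative toy under that substitution, not the paper's implementation):

```python
import random
import statistics

def histogram_density(sample, bins=10, lo=0.0, hi=1.0):
    """Return a density estimate f(x) from a 1-D histogram on [lo, hi)."""
    width = (hi - lo) / bins
    counts = [0] * bins
    for x in sample:
        if lo <= x < hi:
            counts[min(int((x - lo) / width), bins - 1)] += 1
    n = len(sample)

    def f(x):
        if not (lo <= x < hi):
            return 0.0
        return counts[min(int((x - lo) / width), bins - 1)] / (n * width)

    return f

def mfrde(data, query, n_estimators=15, subsample=100, seed=0):
    """Pointwise median over density estimators fitted on subsamples."""
    rng = random.Random(seed)
    estimates = []
    for _ in range(n_estimators):
        sub = rng.sample(data, min(subsample, len(data)))
        estimates.append(histogram_density(sub)(query))
    return statistics.median(estimates)
```

The median step is what grants robustness: a handful of contaminated subsamples can drag individual estimates far off, but not the pointwise median across the ensemble.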
|
2501.15163
|
Learning with Noisy Labels: the Exploration of Error Bounds in
Classification
|
cs.LG stat.ML
|
Numerous studies have shown that label noise can lead to poor generalization
performance, negatively affecting classification accuracy. Therefore,
understanding the effectiveness of classifiers trained using deep neural
networks in the presence of noisy labels is of considerable practical
significance. In this paper, we focus on the error bounds of excess risks for
classification problems with noisy labels within deep learning frameworks. We
begin by exploring loss functions with noise-tolerant properties, ensuring that
the empirical minimizer on noisy data aligns with that on the true data. Next,
we estimate the error bounds of the excess risks, expressed as a sum of
statistical error and approximation error. We estimate the statistical error on
a dependent (mixing) sequence, bounding it with the help of the associated
independent block sequence. For the approximation error, we first express the
classifiers as the composition of the softmax function and a continuous
function from $[0,1]^d$ to $\mathbb{R}^K$. The main task is then to estimate
the approximation error for the continuous function from $[0,1]^d$ to
$\mathbb{R}^K$. Finally, we focus on the curse of dimensionality based on the
low-dimensional manifold assumption.
|
2501.15164
|
UAV-Assisted MEC Architecture for Collaborative Task Offloading in Urban
IoT Environment
|
cs.NI cs.SY eess.SP eess.SY
|
Mobile edge computing (MEC) is a promising technology to meet the increasing
demands and computing limitations of complex Internet of Things (IoT) devices.
However, implementing MEC in urban environments can be challenging due to
factors like high device density, complex infrastructure, and limited network
coverage. Network congestion and connectivity issues can adversely affect user
satisfaction. Hence, in this article, we use unmanned aerial vehicle
(UAV)-assisted collaborative MEC architecture to facilitate task offloading of
IoT devices in urban environments. We utilize the combined capabilities of UAVs
and ground edge servers (ESs) to maximize user satisfaction and thereby also
maximize the service provider's (SP) profit. We formulate IoT task offloading
as a joint IoT-UAV-ES association and UAV-network topology optimization
problem. Due to its NP-hard nature, we decompose the problem into two
subproblems: offloading strategy optimization and UAV topology optimization. We
develop a Three-sided Matching with Size and Cyclic preference (TMSC) based
task offloading algorithm to find a stable association among IoT devices, UAVs,
and ESs that achieves the system objective. We also propose a K-means based
iterative algorithm to decide the minimum number of UAVs and their positions
needed to provide offloading services to the maximum number of IoT devices in
offloading scheme over benchmark schemes through simulation-based evaluation.
The proposed scheme outperforms by 19%, 12%, and 25% on average in terms of
percentage of served IoTs, average user satisfaction, and SP profit,
respectively, with 25% lesser UAVs, making it an effective solution to support
IoT task requirements in urban environments using UAV-assisted MEC
architecture.
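The K-means based placement step can be sketched as follows; the coverage rule and parameters are illustrative assumptions rather than the paper's exact formulation:

```python
import math
import random

def kmeans(points, k, iters=50, seed=0):
    """Plain Lloyd's algorithm on 2-D points."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k), key=lambda i: math.dist(p, centers[i]))
            clusters[j].append(p)
        centers = [
            (sum(x for x, _ in c) / len(c), sum(y for _, y in c) / len(c))
            if c else centers[i]  # keep an empty cluster's old center
            for i, c in enumerate(clusters)
        ]
    return centers

def place_uavs(devices, coverage_radius, max_uavs=10):
    """Smallest k whose k-means centers cover every device within the radius."""
    k_max = min(max_uavs, len(devices))
    centers = []
    for k in range(1, k_max + 1):
        centers = kmeans(devices, k)
        if all(min(math.dist(d, c) for c in centers) <= coverage_radius
               for d in devices):
            return k, centers
    return k_max, centers
```

Incrementing k until every device falls within the coverage radius of some center mirrors the iterative search for the minimum number of UAVs described above.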
|
2501.15165
|
A* Based Algorithm for Reduced Complexity ML Decoding of Tailbiting
Codes
|
cs.IT math.IT
|
The A* algorithm is a graph search algorithm which has shown good results in
terms of computational complexity for Maximum Likelihood (ML) decoding of
tailbiting convolutional codes. The decoding of tailbiting codes with this
algorithm is performed in two phases. In the first phase, a typical Viterbi
decoding is employed to collect information regarding the trellis. The A*
algorithm is then applied in the second phase, using the information obtained
in the first one to calculate the heuristic function. The improvements proposed
in this work decrease the computational complexity of the A* algorithm using
further information from the first phase of the algorithm. This information is
used for obtaining a more accurate heuristic function and finding early
terminating conditions for the A* algorithm. Simulation results show that the
proposed modifications decrease the complexity of ML decoding with the A*
algorithm in terms of the performed number of operations.
|
2501.15167
|
Enhancing Intent Understanding for Ambiguous Prompts through
Human-Machine Co-Adaptation
|
cs.CV
|
Today's image generation systems are capable of producing realistic and
high-quality images. However, user prompts often contain ambiguities, making it
difficult for these systems to interpret users' actual intentions.
Consequently, many users must modify their prompts several times to ensure the
generated images meet their expectations. While some methods focus on enhancing
prompts to make the generated images fit user needs, models still struggle to
understand users' real needs, especially those of non-expert users. In this
research, we aim to enhance the visual parameter-tuning process, making the
model user-friendly for individuals without specialized knowledge and enabling
it to better understand user needs. We propose a human-machine co-adaptation
strategy using
mutual information between the user's prompts and the pictures under
modification as the optimizing target to make the system better adapt to user
needs. We find that an improved model can reduce the necessity for multiple
rounds of adjustments. We also collect multi-round dialogue datasets containing
prompt-image pairs and user intents. Various experiments demonstrate the
effectiveness of the proposed method on our dataset. Our annotation tools and
several examples from our dataset are available at
https://zenodo.org/records/14876029. We will open-source our full dataset and
code.
|
2501.15172
|
DeepDIVE: Optimizing Input-Constrained Distributions for Composite DNA
Storage via Multinomial Channel
|
cs.IT eess.SP math.IT
|
We address the challenge of optimizing the capacity-achieving input
distribution for a multinomial channel under the constraint of limited input
support size, which is a crucial aspect in the design of DNA storage systems.
We propose an algorithm that extends the Multidimensional Dynamic Assignment
Blahut-Arimoto (M-DAB) algorithm. Our proposed algorithm integrates a
variational autoencoder, which determines the optimal locations of the input
distribution, into the alternating optimization of the input distribution's
locations and weights.
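For context, the classical Blahut-Arimoto iteration that M-DAB builds on can be sketched in plain Python; the proposed algorithm additionally optimizes the locations of the support points, which this baseline does not:

```python
import math

def blahut_arimoto(P, iters=500):
    """Capacity-achieving input distribution for a DMC P[x][y] = p(y|x)."""
    nx, ny = len(P), len(P[0])
    r = [1.0 / nx] * nx  # start from the uniform input distribution
    for _ in range(iters):
        q = [sum(r[x] * P[x][y] for x in range(nx)) for y in range(ny)]
        # Reweight r[x] by exp of the KL divergence D(P[x] || q)
        w = []
        for x in range(nx):
            d = sum(P[x][y] * math.log(P[x][y] / q[y])
                    for y in range(ny) if P[x][y] > 0)
            w.append(r[x] * math.exp(d))
        s = sum(w)
        r = [wi / s for wi in w]
    q = [sum(r[x] * P[x][y] for x in range(nx)) for y in range(ny)]
    cap = sum(r[x] * P[x][y] * math.log2(P[x][y] / q[y])
              for x in range(nx) for y in range(ny) if P[x][y] > 0)
    return r, cap  # capacity in bits per channel use
```

For the binary symmetric channel with crossover probability 0.1, this recovers the uniform input and the capacity 1 - H(0.1), approximately 0.531 bits per use.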
|
2501.15174
|
On Spectral Approach to the Synthesis of Shaping Filters
|
eess.SY cs.SY math.OC math.PR
|
This paper describes various approaches to modeling a random process with a
given rational power spectral density. The main attention is paid to the
spectral form of mathematical description, which allows one to obtain a
relation for the shaping filter using a transfer function without any
additional calculations. The paper provides all necessary relations for the
implementation of the shaping filter based on the spectral form of mathematical
description.
|
2501.15175
|
Option-ID Based Elimination For Multiple Choice Questions
|
cs.CL cs.AI cs.LG
|
Multiple choice questions (MCQs) are a popular and important task for
evaluating large language models (LLMs). Based on common strategies people use
when answering MCQs, the process of elimination (PoE) has been proposed as an
effective problem-solving method. Existing PoE methods generally fall
into two categories: one involves having the LLM directly select the incorrect
options, while the other involves scoring the options. However, both methods
incur high computational costs and often perform worse than methods that
directly answer the MCQs with the option IDs. To address this issue, this paper
proposes a PoE based on option IDs. Specifically, our method eliminates options
by selecting the option ID with the lowest probability. We conduct experiments
with 10 different LLMs in zero-shot settings on 7 publicly available datasets.
The experimental results demonstrate that our method significantly improves the
LLM's performance. Further analysis reveals that the sequential elimination
strategy can effectively enhance the LLM's reasoning ability. Additionally, we
find that sequential elimination is also applicable to few-shot settings and
can be combined with debias methods to further improve LLM's performance.
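A minimal sketch of option-ID-based sequential elimination, assuming per-option-ID probabilities have already been read off the LLM; in practice the probabilities would be re-queried after each elimination, so renormalizing the initial scores here is a simplification:

```python
def sequential_elimination(option_probs):
    """Iteratively drop the option ID with the lowest probability.

    option_probs: dict mapping option ID (e.g. "A") -> probability.
    Returns the last remaining ID and the elimination order.
    """
    remaining = dict(option_probs)
    order = []
    while len(remaining) > 1:
        worst = min(remaining, key=remaining.get)
        order.append(worst)
        del remaining[worst]
        total = sum(remaining.values())  # renormalize the survivors
        remaining = {k: v / total for k, v in remaining.items()}
    return next(iter(remaining)), order
```

For scores {A: 0.1, B: 0.4, C: 0.3, D: 0.2}, the options are eliminated in the order A, D, C, leaving B as the answer.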
|
2501.15183
|
Generating Negative Samples for Multi-Modal Recommendation
|
cs.IR
|
Multi-modal recommender systems (MMRS) have gained significant attention due
to their ability to leverage information from various modalities to enhance
recommendation quality. However, existing negative sampling techniques often
struggle to effectively utilize the multi-modal data, leading to suboptimal
performance. In this paper, we identify two key challenges in negative sampling
for MMRS: (1) producing cohesive negative samples contrasting with positive
samples and (2) maintaining a balanced influence across different modalities.
To address these challenges, we propose NegGen, a novel framework that utilizes
multi-modal large language models (MLLMs) to generate balanced and contrastive
negative samples. We design three different prompt templates to enable NegGen
to analyze and manipulate item attributes across multiple modalities, and then
generate negative samples that introduce better supervision signals and ensure
modality balance. Furthermore, NegGen employs a causal learning module to
disentangle the effect of intervened key features and irrelevant item
attributes, enabling fine-grained learning of user preferences. Extensive
experiments on real-world datasets demonstrate the superior performance of
NegGen compared to state-of-the-art methods in both negative sampling and
multi-modal recommendation.
|
2501.15186
|
An Iterative Deep Ritz Method for Monotone Elliptic Problems
|
math.NA cs.LG cs.NA
|
In this work, we present a novel iterative deep Ritz method (IDRM) for
solving a general class of elliptic problems. It is inspired by the iterative
procedure for minimizing the loss during the training of the neural network,
but at each step encodes the geometry of the underlying function space and
incorporates a convex penalty to enhance the performance of the algorithm. The
algorithm is applicable to elliptic problems involving a monotone operator (not
necessarily of variational form) and does not impose any stringent regularity
assumption on the solution. It improves several existing neural PDE solvers,
e.g., physics informed neural network and deep Ritz method, in terms of the
accuracy for the concerned class of elliptic problems. Further, we establish a
convergence rate for the method using tools from geometry of Banach spaces and
theory of monotone operators, and also analyze the learning error. To
illustrate the effectiveness of the method, we present several challenging
examples, including a comparative study with existing techniques.
|
2501.15187
|
Uni-Sign: Toward Unified Sign Language Understanding at Scale
|
cs.CV
|
Sign language pre-training has gained increasing attention for its ability to
enhance performance across various sign language understanding (SLU) tasks.
However, existing methods often suffer from a gap between pre-training and
fine-tuning, leading to suboptimal results. To address this, we propose
Uni-Sign, a unified pre-training framework that eliminates the gap between
pre-training and downstream SLU tasks through a large-scale generative
pre-training strategy and a novel fine-tuning paradigm. First, we introduce
CSL-News, a large-scale Chinese Sign Language (CSL) dataset containing 1,985
hours of video paired with textual annotations, which enables effective
large-scale pre-training. Second, Uni-Sign unifies SLU tasks by treating
downstream tasks as a single sign language translation (SLT) task during
fine-tuning, ensuring seamless knowledge transfer between pre-training and
fine-tuning. Furthermore, we incorporate a prior-guided fusion (PGF) module and
a score-aware sampling strategy to efficiently fuse pose and RGB information,
addressing keypoint inaccuracies and improving computational efficiency.
Extensive experiments across multiple SLU benchmarks demonstrate that Uni-Sign
achieves state-of-the-art performance across multiple downstream SLU tasks.
Dataset and code are available at github.com/ZechengLi19/Uni-Sign.
|
2501.15188
|
Who is the root in a syntactic dependency structure?
|
cs.CL cs.SI physics.soc-ph
|
The syntactic structure of a sentence can be described as a tree that
indicates the syntactic relationships between words. In spite of significant
progress in unsupervised methods that retrieve the syntactic structure of
sentences, guessing the right direction of edges is still a challenge. As in a
syntactic dependency structure edges are oriented away from the root, the
challenge of guessing the right direction can be reduced to finding an
undirected tree and the root. The limited performance of current unsupervised
methods demonstrates the lack of a proper understanding of what a root vertex
is from first principles. We consider an ensemble of centrality scores, some
that only take into account the free tree (non-spatial scores) and others that
take into account the position of vertices (spatial scores). We test the
hypothesis that the root vertex is an important or central vertex of the
syntactic dependency structure. We confirm that hypothesis and find that the
best performance in guessing the root is achieved by novel scores that only
take into account the position of a vertex and that of its neighbours. We
provide theoretical and empirical foundations towards a universal notion of
rootness from a network science perspective.
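One simple instance of a non-spatial centrality score for root guessing is closeness centrality on the undirected tree, sketched below; this is an illustrative baseline, and the paper's best-performing scores also use vertex positions:

```python
from collections import deque

def bfs_distances(adj, src):
    """Hop distances from src in an unweighted graph."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def guess_root(edges):
    """Pick the vertex of maximal closeness centrality as the root guess."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, []).append(v)
        adj.setdefault(v, []).append(u)

    def closeness(u):
        d = bfs_distances(adj, u)
        return (len(adj) - 1) / sum(d.values())

    return max(adj, key=closeness)
```

On a star-shaped dependency tree, the hub word maximizes closeness and is correctly guessed as the root.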
|
2501.15189
|
Extracting Forward Invariant Sets from Neural Network-Based Control
Barrier Functions
|
cs.LG cs.RO cs.SY eess.SY stat.ML
|
Training Neural Networks (NNs) to serve as Barrier Functions (BFs) is a
popular way to improve the safety of autonomous dynamical systems. Despite
significant practical success, these methods are not generally guaranteed to
produce true BFs in a provable sense, which undermines their intended use as
safety certificates. In this paper, we consider the problem of formally
certifying a learned NN as a BF with respect to state avoidance for an
autonomous system: viz. computing a region of the state space on which the
candidate NN is provably a BF. In particular, we propose a sound algorithm that
efficiently produces such a certificate set for a shallow NN. Our algorithm
combines two novel approaches: it first uses NN reachability tools to identify
a subset of states for which the output of the NN does not increase along
system trajectories; then, it uses a novel enumeration algorithm for hyperplane
arrangements to find the intersection of the NN's zero-sub-level set with the
first set of states. In this way, our algorithm soundly finds a subset of
states on which the NN is certified as a BF. We further demonstrate the
effectiveness of our algorithm at certifying for real-world NNs as BFs in two
case studies. We complemented these with scalability experiments that
demonstrate the efficiency of our algorithm.
|
2501.15190
|
A Floating Normalization Scheme for Deep Learning-Based Custom-Range
Parameter Extraction in BSIM-CMG Compact Models
|
cs.LG eess.SP
|
A deep-learning (DL) based methodology for automated extraction of BSIM-CMG
compact model parameters from experimental gate capacitance vs gate voltage
(Cgg-Vg) and drain current vs gate voltage (Id-Vg) measurements is proposed in
this paper. The proposed method introduces a floating normalization scheme
within a cascaded forward and inverse ANN architecture enabling user-defined
parameter extraction ranges. Unlike conventional DL-based extraction
techniques, which are often constrained by fixed normalization ranges, the
floating normalization approach adapts dynamically to user-specified ranges,
allowing for fine-tuned control over the extracted parameters. Experimental
validation, using a TCAD calibrated 14 nm FinFET process, demonstrates high
accuracy for both Cgg-Vg and Id-Vg parameter extraction. The proposed framework
offers enhanced flexibility, making it applicable to various compact models
beyond BSIM-CMG.
|
2501.15194
|
Reliable Pseudo-labeling via Optimal Transport with Attention for Short
Text Clustering
|
cs.LG stat.CO stat.ML
|
Short text clustering has gained significant attention in the data mining
community. However, the limited valuable information contained in short texts
often leads to low-discriminative representations, increasing the difficulty of
clustering. This paper proposes a novel short text clustering framework, called
Reliable \textbf{P}seudo-labeling via \textbf{O}ptimal \textbf{T}ransport with
\textbf{A}ttention for Short Text Clustering (\textbf{POTA}), that generates
reliable pseudo-labels to aid discriminative representation learning for
clustering. Specifically, \textbf{POTA} first implements an instance-level
attention mechanism to capture the semantic relationships among samples, which
are then incorporated as a semantic consistency regularization term into an
optimal transport problem. By solving this OT problem, we can yield reliable
pseudo-labels that simultaneously account for sample-to-sample semantic
consistency and sample-to-cluster global structure information. Additionally,
the proposed OT can adaptively estimate cluster distributions, making
\textbf{POTA} well-suited for varying degrees of imbalanced datasets. Then, we
utilize the pseudo-labels to guide contrastive learning to generate
discriminative representations and achieve efficient clustering. Extensive
experiments demonstrate \textbf{POTA} outperforms state-of-the-art methods. The
code is available at:
\href{https://github.com/YZH0905/POTA-STC/tree/main}{https://github.com/YZH0905/POTA-STC/tree/main}.
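The optimal transport step can be illustrated with plain entropic-regularized Sinkhorn iterations; the attention-derived semantic-consistency regularizer and adaptive cluster-distribution estimation of POTA are omitted in this sketch:

```python
import math

def sinkhorn(cost, row_marg, col_marg, eps=0.1, iters=200):
    """Entropic optimal transport via Sinkhorn iterations.

    cost[i][j]: cost of assigning sample i to cluster j.
    Returns a transport plan whose rows act as soft pseudo-label assignments.
    """
    n, m = len(cost), len(cost[0])
    K = [[math.exp(-c / eps) for c in row] for row in cost]  # Gibbs kernel
    u, v = [1.0] * n, [1.0] * m
    for _ in range(iters):
        # Alternate scaling to match the row and column marginals
        u = [row_marg[i] / sum(K[i][j] * v[j] for j in range(m))
             for i in range(n)]
        v = [col_marg[j] / sum(K[i][j] * u[i] for i in range(n))
             for j in range(m)]
    return [[u[i] * K[i][j] * v[j] for j in range(m)] for i in range(n)]
```

Each row of the resulting plan respects the prescribed cluster-size marginals, which is what lets OT-based pseudo-labels encode sample-to-cluster global structure.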
|
2501.15196
|
A Review on Self-Supervised Learning for Time Series Anomaly Detection:
Recent Advances and Open Challenges
|
stat.ML cs.LG
|
Time series anomaly detection presents various challenges due to the
sequential and dynamic nature of time-dependent data. Traditional unsupervised
methods frequently encounter difficulties in generalization, often overfitting
to known normal patterns observed during training and struggling to adapt to
unseen normality. In response to this limitation, self-supervised techniques
for time series have garnered attention as a potential solution to overcome
this obstacle and enhance the performance of anomaly detectors. This paper
presents a comprehensive review of the recent methods that make use of
self-supervised learning for time series anomaly detection. A taxonomy is
proposed to categorize these methods based on their primary characteristics,
facilitating a clear understanding of their diversity within this field. The
information contained in this survey, along with additional details that will
be periodically updated, is available on the following GitHub repository:
https://github.com/Aitorzan3/Awesome-Self-Supervised-Time-Series-Anomaly-Detection.
|
2501.15198
|
Towards Conscious Service Robots
|
cs.RO cs.AI
|
Deep learning's success in perception, natural language processing, etc.
inspires hopes for advancements in autonomous robotics. However, real-world
robotics face challenges like variability, high-dimensional state spaces,
non-linear dependencies, and partial observability. A key issue is the
non-stationarity of robots, environments, and tasks, leading to performance
drops with out-of-distribution data. Unlike current machine learning models,
humans adapt quickly to changes and new tasks due to a cognitive architecture
that enables systematic generalization and meta-cognition. The human brain's
System 1 handles routine tasks unconsciously, while System 2 manages complex tasks
consciously, facilitating flexible problem-solving and self-monitoring. For
robots to achieve human-like learning and reasoning, they need to integrate
causal models, working memory, planning, and metacognitive processing. By
incorporating human cognition insights, the next generation of service robots
will handle novel situations and monitor themselves to avoid risks and mitigate
errors.
|
2501.15201
|
A Training-free Synthetic Data Selection Method for Semantic
Segmentation
|
cs.CV
|
Training semantic segmenters with synthetic data has attracted great
attention due to its easy accessibility and huge quantity. Most previous
methods focus on producing large-scale synthetic image-annotation samples and
then training the segmenter with all of them. However, such a solution faces a
main challenge: poor-quality samples are unavoidable, and using them to train
the model damages the training process. In this paper, we
propose a training-free Synthetic Data Selection (SDS) strategy with CLIP to
select high-quality samples for building a reliable synthetic dataset.
Specifically, given massive synthetic image-annotation pairs, we first design a
Perturbation-based CLIP Similarity (PCS) to measure the reliability of each
synthetic image, thus removing samples with low-quality images. Then we propose
a class-balance Annotation Similarity Filter (ASF) by comparing the synthetic
annotation with the response of CLIP to remove the samples related to
low-quality annotations. The experimental results show that using our method
significantly reduces the data size by half, while the trained segmenter
achieves higher performance. The code is released at
https://github.com/tanghao2000/SDS.
|
2501.15203
|
Reinforcement Learning Controlled Adaptive PSO for Task Offloading in
IIoT Edge Computing
|
cs.LG cs.DC
|
Industrial Internet of Things (IIoT) applications demand efficient task
offloading to handle heavy data loads with minimal latency. While Mobile Edge
Computing (MEC) brings computation closer to devices to reduce latency and
server load, optimal performance requires advanced optimization techniques. We
propose a novel solution combining Adaptive Particle Swarm Optimization (APSO)
with Reinforcement Learning, specifically Soft Actor Critic (SAC), to enhance
task offloading decisions in MEC environments. This hybrid approach leverages
swarm intelligence and predictive models to adapt to dynamic variables such as
human interactions and environmental changes. Our method improves resource
management and service quality, achieving optimal task offloading and resource
distribution in IIoT edge computing.
|
2501.15206
|
Engineering-Oriented Design of Drift-Resilient MTJ Random Number
Generator via Hybrid Control Strategies
|
physics.app-ph cond-mat.dis-nn cs.SY eess.SY
|
In the quest for secure and reliable random number generation, Magnetic
Tunnel Junctions (MTJs) have emerged as a promising technology due to their
unique ability to exploit the stochastic nature of magnetization switching.
This paper presents an engineering-oriented design of a drift-resilient
MTJ-based True Random Number Generator (TRNG) utilizing a hybrid control
strategy. We address the critical issue of switching probability drift, which
can compromise the randomness and bias the output of MTJ-based TRNGs. Our
approach combines a self-stabilization strategy, which dynamically adjusts the
driving voltage based on real-time feedback, with pulse width modulation to
enhance control over the switching probability. Through comprehensive
experimental and simulation results, we demonstrate significant improvements in
the stability, uniformity, and quality of the random numbers generated. The
proposed system offers flexibility and adaptability for diverse applications,
making it a reliable solution for high-quality randomness in cryptography,
secure communications, and beyond.
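The self-stabilization strategy can be sketched as a simple proportional feedback loop on the driving voltage; the device response model `measure_p` and the gain below are hypothetical stand-ins, not the paper's calibrated parameters, and the pulse-width-modulation branch is omitted:

```python
def stabilize_probability(measure_p, v0=0.5, gain=0.1, rounds=300, target=0.5):
    """Proportional feedback: nudge the MTJ driving voltage so the measured
    switching probability tracks `target` despite drift. `measure_p(v)` is a
    hypothetical stand-in for a real-time probability measurement."""
    v = v0
    for _ in range(rounds):
        v += gain * (target - measure_p(v))  # correct toward the target probability
    return v
```

With any monotone device response this loop settles at the voltage where the switching probability equals the target, which is the essence of drift resilience.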
|
2501.15207
|
Hybrid Near/Far-Field Frequency-Dependent Beamforming via Joint
Phase-Time Arrays
|
cs.IT eess.SP math.IT
|
Joint phase-time arrays (JPTA) emerge as a cost-effective and
energy-efficient architecture for frequency-dependent beamforming in wideband
communications by utilizing both true-time delay units and phase shifters. This
paper exploits the potential of JPTA to simultaneously serve multiple users in
both near- and far-field regions with a single radio frequency chain. The goal
is to jointly optimize JPTA-based beamforming and subband allocation to
maximize overall system performance. To this end, we formulate a system utility
maximization problem, including sum-rate maximization and proportional fairness
as special cases. We develop a 3-step alternating optimization (AO) algorithm
and an efficient deep learning (DL) method for this problem. The DL approach
includes a 2-layer convolutional neural network, a 3-layer graph attention
network (GAT), and a normalization module for resource and beamforming
optimization. The GAT efficiently captures the interactions between resource
allocation and analog beamformers. Simulation results confirm that JPTA
outperforms conventional phased arrays (PA) in enhancing user rate and strikes
a good balance between PA and the fully-digital approach in energy efficiency.
Employing a logarithmic utility function for user rates ensures greater
fairness than maximizing sum-rates. Furthermore, the DL network achieves
comparable performance to the AO approach, while having orders of magnitude
lower computational complexity.
|
2501.15211
|
"Stones from Other Hills can Polish Jade": Zero-shot Anomaly Image
Synthesis via Cross-domain Anomaly Injection
|
cs.CV
|
Industrial image anomaly detection (IAD) is a pivotal topic with huge value.
Due to the nature of anomalies, real anomalies in a specific modern industrial domain
(i.e. domain-specific anomalies) are usually too rare to collect, which
severely hinders IAD. Thus, zero-shot anomaly synthesis (ZSAS), which
synthesizes pseudo anomaly images without any domain-specific anomaly, emerges
as a vital technique for IAD. However, existing solutions are either unable to
synthesize authentic pseudo anomalies, or require cumbersome training. Thus, we
focus on ZSAS and propose a brand-new paradigm that can realize both authentic
and training-free ZSAS. It is based on a chronically-ignored fact: Although
domain-specific anomalies are rare, real anomalies from other domains (i.e.
cross-domain anomalies) are actually abundant and directly applicable to ZSAS.
Specifically, our new ZSAS paradigm makes three-fold contributions: First, we
propose a novel method named Cross-domain Anomaly Injection (CAI), which
directly exploits cross-domain anomalies to enable highly authentic ZSAS in a
training-free manner. Second, to supply CAI with sufficient cross-domain
anomalies, we build what is, to the best of our knowledge, the first
domain-agnostic anomaly dataset, which provides ZSAS with abundant real anomaly
patterns. Third, we
propose a CAI-guided Diffusion Mechanism, which further breaks the quantity
limit of real anomalies and enables unlimited anomaly synthesis. Our
head-to-head comparison with existing ZSAS solutions justifies our paradigm's
superior performance for IAD and demonstrates it as an effective and pragmatic
ZSAS solution.
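The training-free core of cross-domain injection can be illustrated as a masked blend of a real anomaly region from another domain into a normal target image; the blend weight and mask handling are illustrative assumptions, not the paper's exact CAI procedure:

```python
import numpy as np

def inject_anomaly(normal_img, anomaly_patch, mask, alpha=0.9):
    """Paste a cross-domain anomaly region into a normal image (sketch).

    normal_img, anomaly_patch: float arrays of identical shape.
    mask: boolean array marking the anomaly pixels to transplant.
    alpha: blend weight (an assumed, not paper-specified, parameter).
    """
    out = normal_img.copy()
    out[mask] = alpha * anomaly_patch[mask] + (1 - alpha) * normal_img[mask]
    return out
```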
|
2501.15214
|
Zero-shot Robotic Manipulation with Language-guided Instruction and
Formal Task Planning
|
cs.RO cs.LG
|
Robotic manipulation is often challenging due to long-horizon tasks and
complex object relationships. A common solution is to develop a task and
motion planning framework that integrates planning for high-level task and
low-level motion. Recently, inspired by the powerful reasoning ability of Large
Language Models (LLMs), LLM-based planning approaches have achieved remarkable
progress. However, these methods still heavily rely on expert-specific
knowledge, often generating invalid plans for unseen and unfamiliar tasks. To
address this issue, we propose an innovative language-guided symbolic task
planning (LM-SymOpt) framework with optimization. It is the first expert-free
planning framework since we combine the world knowledge from LLMs with formal
reasoning, resulting in improved generalization capability to new tasks.
Specifically, unlike most existing work, our LM-SymOpt employs LLMs to
translate natural language instructions into symbolic representations, thereby
representing actions as high-level symbols and reducing the search space for
planning. Next, after evaluating the action probability of completing the task
using LLMs, a weighted random sampling method is introduced to generate
candidate plans. Their feasibility is assessed through symbolic reasoning and
their cost efficiency is then evaluated using trajectory optimization to
select the optimal plan. Our experimental results show that LM-SymOpt
outperforms existing LLM-based planning approaches.
|
2501.15217
|
Predictive Lagrangian Optimization for Constrained Reinforcement
Learning
|
cs.LG cs.SY eess.SY
|
Constrained optimization is widely used in reinforcement learning for
addressing complex control tasks. From a dynamical-systems perspective,
iteratively solving a constrained optimization problem can be framed as the
temporal evolution of a feedback control system. Classical constrained
optimization methods, such as penalty and Lagrangian approaches, inherently use
proportional and integral feedback controllers. In this paper, we propose a
more generic equivalence framework to build the connection between constrained
optimization and feedback control system, for the purpose of developing more
effective constrained RL algorithms. Firstly, we define that each step of the
system evolution determines the Lagrange multiplier by solving a multiplier
feedback optimal control problem (MFOCP). In this problem, the control input
is the multiplier, the state is the policy parameters, the dynamics are
described by policy
gradient descent, and the objective is to minimize constraint violations. Then,
we introduce a multiplier guided policy learning (MGPL) module to perform
policy parameter updates. We prove that the resulting optimal policy,
achieved through alternating MFOCP and MGPL, aligns with the solution of the
primal constrained RL problem, thereby establishing our equivalence framework.
Furthermore, we point out that the existing PID Lagrangian is merely one
special case within our framework that utilizes a PID controller. We also
accommodate the integration of other various feedback controllers, thereby
facilitating the development of new algorithms. As a representative, we employ
model predictive control (MPC) as the feedback controller and consequently
propose a new algorithm called predictive Lagrangian optimization (PLO).
Numerical experiments demonstrate its superiority over the PID Lagrangian
method, enlarging the feasible region by up to 7.2% with a comparable average
reward.
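The view of the Lagrangian method as a proportional-integral feedback controller can be illustrated on a toy convex problem (minimize x^2 subject to x >= 1); the gains and step sizes are arbitrary illustrative choices, and the paper's PLO replaces this controller with MPC:

```python
def pi_lagrangian(kp=0.5, ki=0.1, lr=0.05, steps=2000):
    """Minimize x^2 subject to x >= 1 via a PI-controlled dual update.

    The constraint violation e = 1 - x is the feedback error: the integral
    state is the classical Lagrange multiplier, and the proportional term
    speeds up the response (the PID Lagrangian is the special case the
    abstract mentions).
    """
    x, integral = 0.0, 0.0
    for _ in range(steps):
        e = 1.0 - x                             # constraint violation
        integral = max(0.0, integral + ki * e)  # integral (multiplier) state
        lam = max(0.0, integral + kp * e)       # PI feedback -> multiplier
        x -= lr * (2.0 * x - lam)               # primal step on the Lagrangian
    return x, lam
```

At the optimum x = 1 the multiplier settles at lam = 2, matching the KKT conditions of this toy problem.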
|
2501.15219
|
Faster Machine Translation Ensembling with Reinforcement Learning and
Competitive Correction
|
cs.CL
|
Ensembling neural machine translation (NMT) models to produce higher-quality
translations than the $L$ individual models has been extensively studied.
Recent methods typically employ a candidate selection block (CSB) and an
encoder-decoder fusion block (FB), requiring inference across \textit{all}
candidate models, leading to significant computational overhead, generally
$\Omega(L)$. This paper introduces \textbf{SmartGen}, a reinforcement learning
(RL)-based strategy that improves the CSB by selecting a small, fixed number of
candidates and identifying optimal groups to pass to the fusion block for each
input sentence. Furthermore, in prior work the CSB and FB were trained
independently, leading to suboptimal NMT performance. Our DQN-based
\textbf{SmartGen} addresses this by using feedback from the FB block as a
reward during training. We also resolve a key issue in earlier methods, where
candidates were passed to the FB without modification, by introducing a
Competitive Correction Block (CCB). Finally, we validate our approach with
extensive experiments on English-Hindi translation tasks in both directions.
|
2501.15221
|
Performance analysis of tail-minimization and the linear rate of
convergence of a proximal algorithm for sparse signal recovery
|
cs.IT math.IT
|
Recovery error bounds of tail-minimization and the rate of convergence of an
efficient proximal alternating algorithm for sparse signal recovery are
considered in this article. Tail-minimization focuses on minimizing the energy
in the complement $T^c$ of an estimated support $T$. Under the restricted
isometry property (RIP) condition, we prove that tail-$\ell_1$ minimization can
exactly recover sparse signals in the noiseless case for a given $T$. In the
noisy case, two recovery results for the tail-$\ell_1$ minimization and the
tail-lasso models are established. Error bounds are improved over existing
results. Additionally, we show that the RIP condition becomes surprisingly
relaxed, allowing the RIP constant to approach $1$ as the estimate $T$
closely approximates the true support $S$. Finally, an efficient proximal
alternating minimization algorithm is introduced for solving the tail-lasso
problem using Hadamard product parametrization. The linear rate of convergence
is established using the Kurdyka-{\L}ojasiewicz inequality. Numerical results
demonstrate that the proposed algorithm significantly improves signal recovery
performance compared to state-of-the-art techniques.
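In the notation of the abstract, the two models can be written as follows (the tail-lasso weighting shown is a standard form and may differ in detail from the paper's):

```latex
% tail-l1 minimization (noiseless): penalize only the energy outside
% the estimated support T
\min_{x}\ \|x_{T^c}\|_1 \quad \text{subject to} \quad Ax = b.

% tail-lasso (noisy): trade the tail penalty against data fidelity
\min_{x}\ \lambda \|x_{T^c}\|_1 + \tfrac{1}{2}\|Ax - b\|_2^2.
```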
|
2501.15223
|
Efficient and Interpretable Neural Networks Using Complex Lehmer
Transform
|
cs.LG cs.AI
|
We propose an efficient and interpretable neural network with a novel
activation function called the weighted Lehmer transform. This new activation
function enables adaptive feature selection and extends to the complex domain,
capturing phase-sensitive and hierarchical relationships within data. Notably,
it provides greater interpretability and transparency compared to existing
machine learning models, facilitating a deeper understanding of its
functionality and decision-making processes. We analyze the mathematical
properties of both real-valued and complex-valued Lehmer activation units and
demonstrate their applications in modeling nonlinear interactions. Empirical
evaluations demonstrate that our proposed neural network achieves competitive
accuracy on benchmark datasets with significantly improved computational
efficiency. A single layer of real-valued or complex-valued Lehmer activation
units is shown to deliver state-of-the-art performance, balancing efficiency
with interpretability.
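The real-valued weighted Lehmer mean underlying the activation can be sketched as below; the exact learnable parameterization (and its complex-valued extension) in the paper may differ:

```python
import numpy as np

def lehmer_unit(x, w, p):
    """Weighted Lehmer mean of positive inputs x with positive weights w.

    p = 1 recovers the weighted arithmetic mean; larger p shifts weight
    toward the largest inputs, which is the adaptive feature selection
    behavior described above.
    """
    xp = np.power(x, p - 1.0)
    return float(np.sum(w * xp * x) / np.sum(w * xp))
```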
|
2501.15225
|
SEAL: Scaling to Emphasize Attention for Long-Context Retrieval
|
cs.CL cs.AI cs.LG
|
In this work, we introduce a novel approach called Scaling to Emphasize
Attention for Long-context retrieval (SEAL), which enhances the retrieval
performance of large language models (LLMs) over extended contexts. Previous
studies have shown that each attention head in LLMs has a unique functionality
and collectively contributes to the overall behavior of the model. Similarly,
we observe that specific heads are closely tied to long-context retrieval,
showing positive or negative correlation with retrieval scores. Building on this
insight, we propose a learning-based mechanism using zero-shot generated data
to emphasize these heads, improving the model's performance in long-context
retrieval tasks. By applying SEAL, we can achieve significant improvements in
in-domain retrieval performance, including document QA tasks from LongBench,
and considerable improvements in out-of-domain cases. Additionally, when
combined with existing training-free context extension techniques, SEAL extends
the context limits of LLMs while maintaining highly reliable outputs, opening
new avenues for research in this field.
|
2501.15227
|
Detecting Unauthorized Drones with Cell-Free Integrated Sensing and
Communication
|
cs.IT eess.SP math.IT
|
Integrated sensing and communication (ISAC) boosts network efficiency by
using existing resources for diverse sensing applications. In this work, we
propose a cell-free massive MIMO (multiple-input multiple-output)-ISAC
framework to detect unauthorized drones while simultaneously ensuring
communication requirements. We develop a detector to identify passive aerial
targets by analyzing signals from distributed access points (APs). In addition
to sensing precision, the timeliness of sensing information is also crucial,
as drones may leave the area before the sensing procedure finishes. We
introduce the age of sensing (AoS) and sensing coverage as our
sensing performance metrics and propose a joint sensing blocklength and power
optimization algorithm to minimize AoS and maximize sensing coverage while
meeting communication requirements. Moreover, we propose an adaptive weight
selection algorithm based on the concave-convex procedure to balance the inherent
trade-off between AoS and sensing coverage. Our numerical results show that
increasing the communication requirements would significantly reduce both the
sensing coverage and the timeliness of the sensing. Furthermore, the proposed
adaptive weight selection algorithm can provide high sensing coverage and
reduce the AoS by 45% compared to the fixed weights, demonstrating efficient
utilization of both power and sensing blocklength.
|
2501.15228
|
Improving Retrieval-Augmented Generation through Multi-Agent
Reinforcement Learning
|
cs.CL cs.IR
|
Retrieval-augmented generation (RAG) is extensively utilized to incorporate
external, current knowledge into large language models, thereby minimizing
hallucinations. A standard RAG pipeline may comprise several components, such
as query rewriting, document retrieval, document filtering, and answer
generation. However, these components are typically optimized separately
through supervised fine-tuning, which can lead to misalignments between the
objectives of individual modules and the overarching aim of generating accurate
answers in question-answering (QA) tasks. Although recent efforts have explored
reinforcement learning (RL) to optimize specific RAG components, these
approaches often focus on overly simplistic pipelines with only two components
or do not adequately address the complex interdependencies and collaborative
interactions among the modules. To overcome these challenges, we propose
treating the RAG pipeline as a multi-agent cooperative task, with each
component regarded as an RL agent. Specifically, we present MMOA-RAG, a
Multi-Module joint Optimization Algorithm for RAG, which employs multi-agent
reinforcement learning to harmonize all agents' goals towards a unified reward,
such as the F1 score of the final answer. Experiments conducted on various QA
datasets demonstrate that MMOA-RAG improves the overall pipeline performance
and outperforms existing baselines. Furthermore, comprehensive ablation studies
validate the contributions of individual components and the adaptability of
MMOA-RAG across different RAG components and datasets. The code of MMOA-RAG is
on https://github.com/chenyiqun/MMOA-RAG.
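The unified reward mentioned above is the token-level F1 of the final answer; a standard implementation of that scalar (which MMOA-RAG would broadcast to all agents) looks like:

```python
from collections import Counter

def f1_reward(prediction, ground_truth):
    """Token-level F1 between a predicted and a gold answer string."""
    pred, gold = prediction.split(), ground_truth.split()
    overlap = sum((Counter(pred) & Counter(gold)).values())  # multiset overlap
    if overlap == 0:
        return 0.0
    precision, recall = overlap / len(pred), overlap / len(gold)
    return 2 * precision * recall / (precision + recall)
```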
|
2501.15235
|
Large-Scale Riemannian Meta-Optimization via Subspace Adaptation
|
cs.LG cs.CV
|
Riemannian meta-optimization provides a promising approach to solving
non-linear constrained optimization problems, which trains neural networks as
optimizers to perform optimization on Riemannian manifolds. However, existing
Riemannian meta-optimization methods incur huge memory footprints in
large-scale optimization settings, as the learned optimizer can only adapt
gradients of a fixed size and thus cannot be shared across different Riemannian
parameters. In this paper, we propose an efficient Riemannian meta-optimization
method that significantly reduces the memory burden for large-scale
optimization via a subspace adaptation scheme. Our method trains neural
networks to individually adapt the row and column subspaces of Riemannian
gradients, instead of directly adapting the full gradient matrices in existing
Riemannian meta-optimization methods. In this case, our learned optimizer can
be shared across Riemannian parameters with different sizes. Our method reduces
the model memory consumption by six orders of magnitude when optimizing an
orthogonal mainstream deep neural network (e.g., ResNet50). Experiments on
multiple Riemannian tasks show that our method can not only reduce the memory
consumption but also improve the performance of Riemannian meta-optimization.
|
2501.15240
|
Hardware-Aware DNN Compression for Homogeneous Edge Devices
|
cs.LG cs.AI
|
Deploying deep neural networks (DNNs) across homogeneous edge devices (devices
with the same SKU as labeled by the manufacturer) often assumes identical
performance among them. However, once a device model is widely deployed, the
performance of individual devices diverges after a period of operation. This is
caused by the differences in user configurations, environmental conditions,
manufacturing variances, battery degradation, etc. Existing DNN compression
methods have not taken this scenario into consideration and cannot guarantee
good compression results across all homogeneous edge devices. To address this, we
propose Homogeneous-Device Aware Pruning (HDAP), a hardware-aware DNN
compression framework explicitly designed for homogeneous edge devices, aiming
to achieve optimal average performance of the compressed model across all
devices. To deal with the difficulty of time-consuming hardware-aware
evaluations for thousands or millions of homogeneous edge devices, HDAP
partitions all the devices into several clusters, which dramatically reduces
the number of devices to evaluate, and uses surrogate-based evaluation in place
of real-time hardware evaluation. Experiments on ResNet50 and
MobileNetV1 with the ImageNet dataset show that HDAP consistently achieves
lower average inference latency compared with state-of-the-art methods, with
substantial speedup gains (e.g., 2.86 $\times$ speedup at 1.0G FLOPs for
ResNet50) on the homogeneous device clusters. HDAP offers an effective solution
for scalable, high-performance DNN deployment methods for homogeneous edge
devices.
|
2501.15245
|
ASRank: Zero-Shot Re-Ranking with Answer Scent for Document Retrieval
|
cs.CL
|
Retrieval-Augmented Generation (RAG) models have drawn considerable attention
in modern open-domain question answering. The effectiveness of RAG depends on
the quality of the top retrieved documents. However, conventional retrieval
methods sometimes fail to rank the most relevant documents at the top. In this
paper, we introduce ASRank, a new re-ranking method based on scoring retrieved
documents using a zero-shot answer scent, which relies on a pre-trained large
language model to compute the likelihood of the document-derived answers
aligning with the answer scent. Our approach demonstrates marked improvements
across several datasets, including NQ, TriviaQA, WebQA, ArchivalQA, HotpotQA,
and Entity Questions. Notably, ASRank increases Top-1 retrieval accuracy on NQ
from $19.2\%$ to $46.5\%$ for MSS and $22.1\%$ to $47.3\%$ for BM25. It also
shows strong retrieval performance on several datasets compared to
state-of-the-art methods (47.3 Top-1 by ASRank vs. 35.4 by UPR, both with BM25).
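The re-ranking step reduces to sorting documents by an LLM-based alignment score; `loglik_fn` below is a hypothetical stand-in for the pre-trained model's likelihood call, not ASRank's actual interface:

```python
def asrank_rerank(question, docs, answer_scent, loglik_fn, top_k=5):
    """Re-rank retrieved docs by the likelihood that a document-derived
    answer aligns with the zero-shot answer scent for the question.
    `loglik_fn(question, doc, scent)` is a hypothetical scoring callable."""
    scored = sorted(docs, key=lambda d: loglik_fn(question, d, answer_scent),
                    reverse=True)
    return scored[:top_k]
```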
|
2501.15247
|
Prompting ChatGPT for Chinese Learning as L2: A CEFR and EBCL Level
Study
|
cs.CL cs.AI
|
The use of chatbots in language learning has evolved significantly since the
1960s, with platforms becoming more sophisticated as generative AI emerged. These
tools now simulate natural conversations, adapting to individual learners'
needs, including those studying Chinese. Our study explores how learners can
use specific prompts to engage Large Language Models (LLMs) as personalized
chatbots, aiming to target their language level based on the Common European
Framework of Reference for Languages (CEFR) and the European Benchmarking
Chinese Language (EBCL) project. Focusing on A1, A1+ and A2 levels, we examine
the teaching of Chinese, which presents unique challenges due to its
logographic writing system. Our goal is to develop prompts that integrate oral
and written skills, using high-frequency character lists and controlling oral
lexical productions. These tools, powered by generative AI, aim to enhance
language practice by crossing lexical and sinographic recurrence. While
generative AI shows potential as a personalized tutor, further evaluation is
needed to assess its effectiveness. We conducted a systematic series of
experiments using ChatGPT models to evaluate their adherence to constraints
specified in the prompts. The results indicate that incorporating level A1 and
A1+ characters, along with the associated reference list, significantly
enhances compliance with the EBCL character set. Properly prompted, LLMs can
increase exposure to the target language and offer interactive exchanges to
develop language skills.
|
2501.15248
|
Enhancing Fetal Plane Classification Accuracy with Data Augmentation
Using Diffusion Models
|
cs.CV
|
Ultrasound imaging is widely used in medical diagnosis, especially for fetal
health assessment. However, the availability of high-quality annotated
ultrasound images is limited, which restricts the training of machine learning
models. In this paper, we investigate the use of diffusion models to generate
synthetic ultrasound images to improve the performance on fetal plane
classification. We train different classifiers first on synthetic images and
then fine-tune them with real images. Extensive experimental results
demonstrate that incorporating generated images into training pipelines leads
to better classification accuracy than training with real images alone. The
findings suggest that generating synthetic data using diffusion models can be a
valuable tool in overcoming the challenges of data scarcity in ultrasound
medical imaging.
|
2501.15249
|
An Automatic Sound and Complete Abstraction Method for Generalized
Planning with Baggable Types
|
cs.AI
|
Generalized planning is concerned with how to find a single plan to solve
multiple similar planning instances. Abstractions are widely used for solving
generalized planning, and QNP (qualitative numeric planning) is a popular
abstract model. Recently, Cui et al. showed that a plan solves a sound and
complete abstraction of a generalized planning problem if and only if the
refined plan solves the original problem. However, existing work on automatic
abstraction for generalized planning can hardly guarantee soundness let alone
completeness. In this paper, we propose an automatic sound and complete
abstraction method for generalized planning with baggable types. We use a
variant of QNP, called bounded QNP (BQNP), where integer variables are
increased or decreased by only one. Since BQNP is undecidable, we propose and
implement a sound but incomplete solver for BQNP. We present an automatic
method to abstract a BQNP problem from a classical planning instance with
baggable types. The basic idea for abstraction is to introduce a counter for
each bag of indistinguishable tuples of objects. We define a class of domains
called proper baggable domains, and show that for such domains, the BQNP
problem obtained by our automatic method is a sound and complete abstraction
for a generalized planning problem whose instances share the same bags as the
given instance, although the bag sizes may differ. Thus, the refined plan
of a solution to the BQNP problem is a solution to the generalized planning
problem. Finally, we implement our abstraction method, and experiments on a
number of domains demonstrate the promise of our approach.
|
2501.15253
|
Generalizable Deepfake Detection via Effective Local-Global Feature
Extraction
|
cs.CV
|
The rapid advancement of GANs and diffusion models has led to the generation
of increasingly realistic fake images, posing significant hidden dangers and
threats to society. Consequently, deepfake detection has become a pressing
issue in today's world. While some existing methods focus on forgery features
from either a local or global perspective, they often overlook the
complementary nature of these features. Other approaches attempt to incorporate
both local and global features but rely on simplistic strategies, such as
cropping, which fail to capture the intricate relationships between local
features. To address these limitations, we propose a novel method that
effectively combines local spatial-frequency domain features with global
frequency domain information, capturing detailed and holistic forgery traces.
Specifically, our method uses Discrete Wavelet Transform (DWT) and sliding
windows to tile forged features and leverages attention mechanisms to extract
local spatial-frequency domain information. Simultaneously, the phase component
of the Fast Fourier Transform (FFT) is integrated with attention mechanisms to
extract global frequency domain information, complementing the local features
and ensuring the integrity of forgery detection. Comprehensive evaluations on
open-world datasets generated by 34 distinct generative models demonstrate a
significant improvement of 2.9% over existing state-of-the-art methods.
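The global branch's use of the FFT phase component can be sketched with NumPy; the attention modules over these features, and the DWT-based local branch, are omitted:

```python
import numpy as np

def fft_phase(img):
    """Return the global phase spectrum of a grayscale image (H, W).

    Phase encodes structural layout; the paper combines it with attention
    mechanisms, which are not shown here.
    """
    return np.angle(np.fft.fft2(img))
```

A basic sanity check is that the phase, together with the magnitude spectrum, reconstructs the image exactly.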
|
2501.15255
|
Lightweight and Post-Training Structured Pruning for On-Device Large
Language Models
|
cs.LG cs.AI
|
Given its hardware-friendly characteristics and broad applicability,
structured pruning has emerged as an efficient solution to reduce the resource
demands of large language models (LLMs) on resource-constrained devices.
Traditional structured pruning methods often need fine-tuning to recover
performance loss, which incurs high memory overhead and substantial data
requirements, rendering them unsuitable for on-device applications.
Additionally, post-training structured pruning techniques typically necessitate
specific activation functions or architectural modifications, thereby limiting
their scope of applications. Herein, we introduce COMP, a lightweight
post-training structured pruning method that employs a hybrid-granularity
pruning strategy. COMP initially prunes selected model layers based on their
importance at a coarse granularity, followed by fine-grained neuron pruning
within the dense layers of each remaining model layer. To more accurately
evaluate neuron importance, COMP introduces a new matrix condition-based
metric. Subsequently, COMP utilizes mask tuning to recover accuracy without the
need for fine-tuning, significantly reducing memory consumption. Experimental
results demonstrate that COMP improves performance by 6.13\% on the LLaMA-2-7B
model with a 20\% pruning ratio compared to LLM-Pruner, while simultaneously
reducing memory overhead by 80\%.
|
2501.15257
|
Pre-trained Model Guided Mixture Knowledge Distillation for Adversarial
Federated Learning
|
cs.CV
|
This paper aims to improve the robustness of a small global model while
maintaining clean accuracy under adversarial attacks and non-IID challenges in
federated learning. By leveraging the concise knowledge embedded in the class
probabilities from a pre-trained model for both clean and adversarial image
classification, we propose a Pre-trained Model-guided Adversarial Federated
Learning (PM-AFL) training paradigm. This paradigm integrates vanilla mixture
and adversarial mixture knowledge distillation to effectively balance accuracy
and robustness while promoting local models to learn from diverse data.
Specifically, for clean accuracy, we adopt a dual distillation strategy where
the class probabilities of randomly paired images and their blended versions
are aligned between the teacher model and the local models. For adversarial
robustness, we use a similar distillation approach but replace clean samples on
the local side with adversarial examples. Moreover, considering the bias
between local and global models, we also incorporate a consistency
regularization term to ensure that local adversarial predictions stay aligned
with their corresponding global clean ones. These strategies collectively
enable local models to absorb diverse knowledge from the teacher model while
maintaining close alignment with the global model, thereby mitigating
overfitting to local optima and enhancing the generalization of the global
model. Experiments demonstrate that the PM-AFL-based paradigm outperforms other
methods that integrate defense strategies by a notable margin.
|
2501.15259
|
Scalable Decentralized Learning with Teleportation
|
cs.LG math.OC stat.ML
|
Decentralized SGD can run with low communication costs, but its sparse
communication characteristics deteriorate the convergence rate, especially when
the number of nodes is large. In decentralized learning settings, communication
is assumed to occur on only a given topology, while in many practical cases,
the topology merely represents a preferred communication pattern, and
connecting to arbitrary nodes is still possible. Previous studies have tried to
alleviate the convergence rate degradation in these cases by designing
topologies with large spectral gaps. However, the degradation is still
significant when the number of nodes is substantial. In this work, we propose
TELEPORTATION. TELEPORTATION activates only a subset of nodes, and the active
nodes fetch the parameters from previous active nodes. Then, the active nodes
update their parameters by SGD and perform gossip averaging on a relatively
small topology comprising only the active nodes. We show that by activating
only a proper number of nodes, TELEPORTATION can completely alleviate the
convergence rate degradation. Furthermore, we propose an efficient
hyperparameter-tuning method to search for the appropriate number of nodes to
be activated. Experimentally, we showed that TELEPORTATION can train neural
networks more stably and achieve higher accuracy than Decentralized SGD.
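The activate-fetch-update-gossip loop described above can be sketched as a toy simulation (not the paper's implementation; the node count, the ring mixing topology, and the per-node quadratic losses are all hypothetical choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (hypothetical): node i holds the quadratic loss
# f_i(x) = 0.5 * ||x - b_i||^2, so the global optimum is mean(b).
n_nodes, dim, rounds, lr, k_active = 32, 4, 200, 0.1, 8
b = rng.normal(loc=3.0, size=(n_nodes, dim))
params = np.zeros((n_nodes, dim))      # one parameter vector per node
prev_active = np.arange(k_active)      # initial active set

for _ in range(rounds):
    active = rng.choice(n_nodes, size=k_active, replace=False)
    # Active nodes fetch parameters from the previous active nodes.
    fetched = params[prev_active].copy()
    # Each active node takes one local SGD step on its own loss.
    for j, i in enumerate(active):
        fetched[j] -= lr * (fetched[j] - b[i])
    # Gossip averaging on a small ring topology over the active set only.
    mixed = np.empty_like(fetched)
    for j in range(k_active):
        mixed[j] = (fetched[j - 1] + fetched[j] + fetched[(j + 1) % k_active]) / 3
    params[active] = mixed
    prev_active = active

consensus = params[prev_active].mean(axis=0)
print(np.linalg.norm(consensus - b.mean(axis=0)))   # small residual error
```

Because every round samples fresh nodes, the small active team effectively performs SGD on the average loss while gossiping only over a tiny topology, which is the intuition behind the claimed alleviation of the convergence-rate degradation.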
|
2501.15260
|
Breaking the Stigma! Unobtrusively Probe Symptoms in Depression Disorder
Diagnosis Dialogue
|
cs.CL cs.CY
|
Stigma has emerged as one of the major obstacles to effectively diagnosing
depression, as it prevents users from open conversations about their struggles.
This requires advanced questioning skills to carefully probe the presence of
specific symptoms in an unobtrusive manner. While recent efforts have been made
on depression-diagnosis-oriented dialogue systems, they largely ignore this
problem, ultimately hampering their practical utility. To this end, we propose
a novel and effective method, UPSD$^{4}$, developing a series of strategies to
promote a sense of unobtrusiveness within the dialogue system and assessing
depression disorder by probing symptoms. We experimentally show that UPSD$^{4}$
demonstrates a significant improvement over current baselines, including
unobtrusiveness evaluation of dialogue content and diagnostic accuracy. We
believe our work contributes to developing more accessible and user-friendly
tools for addressing the widespread need for depression diagnosis.
|
2501.15262
|
Dynamic Estimation of Tea Flowering Based on an Improved YOLOv5 and ANN
Model
|
cs.CV q-bio.QM
|
Tea flowers play a crucial role in taxonomic research and hybrid breeding for
the tea plant. Tea flowering consumes the plant's nutrients, and flower
thinning can regulate carbon-nitrogen metabolism, enhancing the yield and
quality of young shoots. As traditional methods of observing tea flower traits
are labor-intensive and inaccurate, we propose an effective framework for
quantifying tea flowering. In this study, a highly representative and diverse
dataset was constructed by collecting flower images from 29 tea accessions.
Based on this dataset, the TflosYOLO model was built on the YOLOv5 architecture
and enhanced with the Squeeze-and-Excitation (SE) network, making it the first
model to offer a viable solution for detecting tea flowers and predicting
flower quantities. The TflosYOLO model achieved an mAP50 of 0.874,
outperforming YOLOv5, YOLOv7 and YOLOv8. Furthermore, this model was tested on
34 datasets encompassing 26 tea accessions, five flowering stages, various
lighting conditions, and pruned/unpruned plants, demonstrating high
generalization and robustness. The coefficient of determination ($R^2$) between
the predicted and actual flower counts was 0.974. Additionally, the TFSC (Tea
Flowering Stage Classification) model, a novel Artificial Neural Network (ANN),
was designed for automatic classification of the flowering stages. TFSC
achieved an accuracy of 0.899. Dynamic analysis of flowering across 29 tea
accessions in 2023 and 2024 was conducted, revealing significant variability in
flower quantity and dynamics, with genetically similar accessions showing more
consistent flowering patterns. This framework provides a solution for
quantifying tea flowering, and can serve as a reference for precision
horticulture.
|
2501.15263
|
Explainable YOLO-Based Dyslexia Detection in Synthetic Handwriting Data
|
cs.CV cs.LG
|
Dyslexia affects reading and writing skills across many languages. This work
describes a new application of YOLO-based object detection to isolate and label
handwriting patterns (Normal, Reversal, Corrected) within synthetic images that
resemble real words. Individual letters are first collected, preprocessed into
32x32 samples, then assembled into larger synthetic 'words' to simulate
realistic handwriting. Our YOLOv11 framework simultaneously localizes each
letter and classifies it into one of three categories, reflecting key dyslexia
traits. Empirically, we achieve near-perfect performance, with precision,
recall, and F1 metrics typically exceeding 0.999. This surpasses earlier
single-letter approaches that rely on conventional CNNs or transfer-learning
classifiers (for example, MobileNet-based methods in Robaa et al.
arXiv:2410.19821). Unlike simpler pipelines that consider each letter in
isolation, our solution processes complete word images, resulting in more
authentic representations of handwriting. Although relying on synthetic data
raises concerns about domain gaps, these experiments highlight the promise of
YOLO-based detection for faster and more interpretable dyslexia screening.
Future work will expand to real-world handwriting, other languages, and deeper
explainability methods to build confidence among educators, clinicians, and
families.
|
2501.15265
|
Kernel-Based Anomaly Detection Using Generalized Hyperbolic Processes
|
cs.LG
|
We present a novel approach to anomaly detection by integrating Generalized
Hyperbolic (GH) processes into kernel-based methods. The GH distribution, known
for its flexibility in modeling skewness, heavy tails, and kurtosis, helps to
capture complex patterns in data that deviate from Gaussian assumptions. We
propose a GH-based kernel function and utilize it within Kernel Density
Estimation (KDE) and One-Class Support Vector Machines (OCSVM) to develop
anomaly detection frameworks. Theoretical results confirmed the positive
semi-definiteness and consistency of the GH-based kernel, ensuring its
suitability for machine learning applications. Empirical evaluation on
synthetic and real-world datasets showed that our method improves detection
performance in scenarios involving heavy-tailed and asymmetric or imbalanced
distributions. https://github.com/paulinebourigault/GHKernelAnomalyDetect
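A minimal sketch of the KDE side of this idea follows. The full GH density involves Bessel functions, so the kernel below is a simplified stand-in from the symmetric hyperbolic family, K(u) ∝ exp(-alpha·sqrt(1+u²)); it keeps the heavier-than-Gaussian exponential tails that motivate the method, but the shape parameters, bandwidth, and threshold rule are all assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in kernel (assumption): K(u) ∝ exp(-alpha * sqrt(1 + u^2)),
# a symmetric hyperbolic-family shape with exponential tails.
alpha, h = 1.5, 0.4
grid = np.linspace(-50.0, 50.0, 200001)
Z = np.exp(-alpha * np.sqrt(1 + grid**2)).sum() * (grid[1] - grid[0])

def kernel(u):
    return np.exp(-alpha * np.sqrt(1 + u**2)) / Z

def kde_score(x, data):
    # Average kernel response of each x against every training point.
    return kernel((x[:, None] - data[None, :]) / h).mean(axis=1) / h

train = rng.normal(size=500)                 # inlier data
test = np.array([0.0, 0.3, 8.0, -9.0])       # last two are anomalies
scores = kde_score(test, train)
threshold = np.quantile(kde_score(train, train), 0.01)
print(scores < threshold)                    # True flags an anomaly
```

Points far from the training mass receive near-zero density and fall below the low-quantile threshold, while inliers score well above it.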
|
2501.15266
|
Enhanced Intrusion Detection in IIoT Networks: A Lightweight Approach
with Autoencoder-Based Feature Learning
|
cs.LG
|
The rapid expansion of the Industrial Internet of Things (IIoT) has
significantly advanced digital technologies and interconnected industrial
systems, creating substantial opportunities for growth. However, this growth
has also heightened the risk of cyberattacks, necessitating robust security
measures to protect IIoT networks. Intrusion Detection Systems (IDS) are
essential for identifying and preventing abnormal network behaviors and
malicious activities. Despite the potential of Machine Learning (ML)--based IDS
solutions, existing models often face challenges with class imbalance and
multiclass IIoT datasets, resulting in reduced detection accuracy. This
research directly addresses these challenges by implementing six innovative
approaches to enhance IDS performance, including leveraging an autoencoder for
dimensionality reduction, which improves feature learning and overall detection
accuracy. Our proposed Decision Tree model achieved an exceptional F1 score and
accuracy of 99.94% on the Edge-IIoTset dataset. Furthermore, we prioritized
lightweight model design, ensuring deployability on resource-constrained edge
devices. Notably, we are the first to deploy our model on a Jetson Nano,
achieving inference times of 0.185 ms for binary classification and 0.187 ms
for multiclass classification. These results highlight the novelty and
robustness of our approach, offering a practical and efficient solution to the
challenges posed by imbalanced and multiclass IIoT datasets, thereby enhancing
the detection and prevention of network intrusions.
|
2501.15268
|
New Evaluation Paradigm for Lexical Simplification
|
cs.CL
|
Lexical Simplification (LS) methods use a three-step pipeline: complex word
identification, substitute generation, and substitute ranking, each with
separate evaluation datasets. We found large language models (LLMs) can
simplify sentences directly with a single prompt, bypassing the traditional
pipeline. However, existing LS datasets are not suitable for evaluating these
LLM-generated simplified sentences, as they focus on providing substitutes for
single complex words without identifying all complex words in a sentence.
To address this gap, we propose a new annotation method for constructing an
all-in-one LS dataset through human-machine collaboration. Automated methods
generate a pool of potential substitutes, which human annotators then assess,
suggesting additional alternatives as needed. Additionally, we explore
LLM-based methods with single prompts, in-context learning, and
chain-of-thought techniques. We introduce a multi-LLM collaboration approach
to simulate each step of the LS task. Experimental results demonstrate that the
multi-LLM approach significantly outperforms existing baselines.
|
2501.15269
|
Mirage in the Eyes: Hallucination Attack on Multi-modal Large Language
Models with Only Attention Sink
|
cs.LG cs.CR cs.CV
|
Fusing visual understanding into language generation, Multi-modal Large
Language Models (MLLMs) are revolutionizing visual-language applications. Yet,
these models are often plagued by the hallucination problem, which involves
generating inaccurate objects, attributes, and relationships that do not match
the visual content. In this work, we delve into the internal attention
mechanisms of MLLMs to reveal the underlying causes of hallucination, exposing
the inherent vulnerabilities in the instruction-tuning process.
We propose a novel hallucination attack against MLLMs that exploits attention
sink behaviors to trigger hallucinated content with minimal image-text
relevance, posing a significant threat to critical downstream applications.
Distinguished from previous adversarial methods that rely on fixed patterns,
our approach generates dynamic, effective, and highly transferable visual
adversarial inputs, without sacrificing the quality of model responses.
Comprehensive experiments on 6 prominent MLLMs demonstrate the efficacy of our
attack in compromising black-box MLLMs even with extensive mitigating
mechanisms, as well as the promising results against cutting-edge commercial
APIs, such as GPT-4o and Gemini 1.5. Our code is available at
https://huggingface.co/RachelHGF/Mirage-in-the-Eyes.
|
2501.15270
|
Inductive Biases for Zero-shot Systematic Generalization in
Language-informed Reinforcement Learning
|
cs.LG cs.AI cs.CL
|
Sample efficiency and systematic generalization are two long-standing
challenges in reinforcement learning. Previous studies have shown that
involving natural language along with other observation modalities can improve
generalization and sample efficiency due to its compositional and open-ended
nature. However, to transfer these properties of language to the
decision-making process, it is necessary to establish a proper language
grounding mechanism. One approach to this problem is applying inductive biases
to extract fine-grained and informative representations from the observations,
which makes them more connectable to the language units. We provide
architecture-level inductive biases for modularity and sparsity mainly based on
Neural Production Systems (NPS). Alongside NPS, we assign a central role to
memory in our architecture. It can be seen as a high-level information
aggregator which feeds policy/value heads with comprehensive information and
simultaneously guides selective attention in NPS through attentional feedback.
Our results in the BabyAI environment suggest that the proposed model's
systematic generalization and sample efficiency are improved significantly
compared to previous models. An extensive ablation study on variants of the
proposed method is conducted, and the effectiveness of each employed technique
on generalization, sample efficiency, and training stability is specified.
|
2501.15271
|
Killing it with Zero-Shot: Adversarially Robust Novelty Detection
|
cs.LG
|
Novelty Detection (ND) plays a crucial role in machine learning by
identifying new or unseen data during model inference. This capability is
especially important for the safe and reliable operation of automated systems.
Despite advances in this field, existing techniques often fail to maintain
their performance when subject to adversarial attacks. Our research addresses
this gap by marrying the merits of nearest-neighbor algorithms with robust
features obtained from models pretrained on ImageNet. We focus on enhancing the
robustness and performance of ND algorithms. Experimental results demonstrate
that our approach significantly outperforms current state-of-the-art methods
across various benchmarks, particularly under adversarial conditions. By
incorporating robust pretrained features into the k-NN algorithm, we establish
a new standard for performance and robustness in the field of robust ND. This
work opens up new avenues for research aimed at fortifying machine learning
systems against adversarial vulnerabilities. Our implementation is publicly
available at https://github.com/rohban-lab/ZARND.
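The k-NN scoring step at the heart of this approach can be sketched as follows; the robust pretrained feature extractor is replaced here by synthetic feature vectors, and the dimensions and k are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 16-d "features" stand in for robust pretrained embeddings.
train_feats = rng.normal(size=(500, 16))                    # normal data
test_feats = np.vstack([rng.normal(size=(5, 16)),           # normal-like
                        rng.normal(loc=6.0, size=(5, 16))]) # novel

def knn_score(x, bank, k=5):
    # Novelty score: distance to the k-th nearest neighbor in the bank.
    d = np.linalg.norm(x[:, None, :] - bank[None, :, :], axis=-1)
    return np.sort(d, axis=1)[:, k - 1]

scores = knn_score(test_feats, train_feats)
print(scores)   # novel samples receive much larger scores
```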
|
2501.15272
|
Safe and Agile Transportation of Cable-Suspended Payload via Multiple
Aerial Robots
|
cs.RO
|
Transporting a heavy payload using multiple aerial robots (MARs) is an
efficient manner to extend the load capacity of a single aerial robot. However,
existing schemes for the multiple aerial robots transportation system (MARTS)
still lack the capability to generate a collision-free and dynamically feasible
trajectory in real-time and further track an agile trajectory especially when
there are no sensors available to measure the states of payload and cable.
Therefore, they are limited to low-agility transportation in simple
environments. To bridge the gap, we propose complete planning and control
schemes for the MARTS, achieving safe and agile aerial transportation (SAAT) of
a cable-suspended payload in complex environments. Flatness maps for the aerial
robot considering the complete kinematical constraint and the dynamical
coupling between each aerial robot and payload are derived. To improve the
responsiveness for the generation of the safe, dynamically feasible, and agile
trajectory in complex environments, a real-time spatio-temporal trajectory
planning scheme is proposed for the MARTS. Besides, we break away from the
reliance on the state measurement for both the payload and cable, as well as
the closed-loop control for the payload, and propose a fully distributed
control scheme to track the agile trajectory that is robust against imprecise
payload mass and non-point mass payload. The proposed schemes are extensively
validated through benchmark comparisons, ablation studies, and simulations.
Finally, extensive real-world experiments are conducted on a MARTS integrated
by three aerial robots with onboard computers and sensors. The result validates
the efficiency and robustness of our proposed schemes for SAAT in complex
environments.
|
2501.15273
|
Into the Void: Mapping the Unseen Gaps in High Dimensional Data
|
cs.LG cs.HC
|
We present a comprehensive pipeline, augmented by a visual analytics system
named ``GapMiner'', that is aimed at exploring and exploiting untapped
opportunities within the empty areas of high-dimensional datasets. Our approach
begins with an initial dataset and then uses a novel Empty Space Search
Algorithm (ESA) to identify the center points of these uncharted voids, which
are regarded as reservoirs containing potentially valuable novel
configurations. Initially, this process is guided by user interactions
facilitated by GapMiner. GapMiner visualizes the Empty Space Configurations
(ESC) identified by the search within the context of the data, enabling domain
experts to explore and adjust ESCs using a linked parallel-coordinate display.
These interactions enhance the dataset and contribute to the iterative training
of a connected deep neural network (DNN). As the DNN trains, it gradually
assumes the task of identifying high-potential ESCs, diminishing the need for
direct user involvement. Ultimately, once the DNN achieves adequate accuracy,
it autonomously guides the exploration of optimal configurations by predicting
performance and refining configurations, using a combination of gradient ascent
and improved empty-space searches. Domain users were actively engaged
throughout the development of our system. Our findings demonstrate that our
methodology consistently produces substantially superior novel configurations
compared to conventional randomization-based methods. We illustrate the
effectiveness of our method through several case studies addressing various
objectives, including parameter optimization, adversarial learning, and
reinforcement learning.
|
2501.15276
|
Exploring the Collaborative Co-Creation Process with AI: A Case Study in
Novice Music Production
|
cs.HC cs.AI
|
Artificial intelligence is reshaping creative domains, yet its co-creative
processes, especially in group settings with novice users, remain
underexplored. To bridge this gap, we conducted a case study in a college-level
course where nine undergraduate students were tasked with creating three
original music tracks using AI tools over 10 weeks. The study spanned the
entire creative journey from ideation to releasing these songs on Spotify.
Participants leveraged AI for music and lyric production, cover art, and
distribution. Our findings highlight how AI transforms creative workflows:
accelerating ideation but compressing the traditional preparation stage, and
requiring novices to navigate a challenging idea selection and validation
phase. We also identified a new "collaging and refinement" stage, where
participants creatively combined diverse AI-generated outputs into cohesive
works. Furthermore, AI influenced group social dynamics and role division among
human creators. Based on these insights, we propose the Human-AI Co-Creation
Stage Model and the Human-AI Agency Model, offering new perspectives on
collaborative co-creation with AI.
|
2501.15278
|
PIP: Perturbation-based Iterative Pruning for Large Language Models
|
cs.LG cs.CL
|
The rapid increase in the parameter counts of Large Language Models (LLMs),
reaching billions or even trillions, presents significant challenges for their
practical deployment, particularly in resource-constrained environments. To
ease this issue, we propose PIP (Perturbation-based Iterative Pruning), a novel
double-view structured pruning method to optimize LLMs, which combines
information from two different views: the unperturbed view and the perturbed
view. By calculating gradient differences, PIP iteratively prunes those
parameters that struggle to distinguish between these two views. Our experiments
show that PIP reduces the parameter count by approximately 20% while retaining
over 85% of the original model's accuracy across varied benchmarks. In some
cases, the performance of the pruned model is within 5% of the unpruned
version, demonstrating PIP's ability to preserve key aspects of model
effectiveness. Moreover, PIP consistently outperforms existing state-of-the-art
(SOTA) structured pruning methods, establishing it as a leading technique for
optimizing LLMs in environments with constrained resources. Our code is
available at: https://github.com/caoyiiiiii/PIP.
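A toy sketch of the double-view scoring idea is below. A small linear model stands in for an LLM layer; the loss, sizes, perturbation scale, and the rule "prune the rows whose gradients change least between views" are illustrative assumptions, not the paper's exact procedure:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear model standing in for an LLM layer (all sizes hypothetical).
d_in, d_out, n = 16, 10, 256
X = rng.normal(size=(n, d_in))
W = rng.normal(size=(d_out, d_in))
y = rng.normal(size=(n, d_out))

def grad(W, X):
    # Gradient of the squared loss 0.5 * ||X W^T - y||^2 w.r.t. W.
    return (X @ W.T - y).T @ X / n

eps = 0.05 * rng.normal(size=X.shape)        # perturbed view of the input
diff = grad(W, X) - grad(W, X + eps)         # gradient difference of two views
scores = np.linalg.norm(diff, axis=1)        # one score per output structure
n_prune = int(0.2 * d_out)                   # 20% structured pruning ratio
keep = np.ones(d_out, dtype=bool)
keep[np.argsort(scores)[:n_prune]] = False   # drop least view-sensitive rows
W_pruned = W[keep]
print(W_pruned.shape)
```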
|
2501.15280
|
Who's Driving? Game Theoretic Path Risk of AGI Development
|
cs.AI cs.CY cs.GT
|
Who controls the development of Artificial General Intelligence (AGI) might
matter less than how we handle the fight for control itself. We formalize this
"steering wheel problem": humanity's greatest near-term existential risk may
stem not from misaligned AGI, but from the dynamics of competing to develop it.
Just as a car crash can occur from passengers fighting over the wheel before
reaching any destination, catastrophic outcomes could arise from development
competition long before AGI exists. While technical alignment research focuses
on ensuring safe arrival, we show how coordination failures during development
could drive us off the cliff first.
We present a game theoretic framework modeling AGI development dynamics and
prove conditions for sustainable cooperative equilibria. Drawing from nuclear
control while accounting for AGI's unique characteristics, we propose concrete
mechanisms including pre-registration, shared technical infrastructure, and
automated deterrence to stabilize cooperation. Our key insight is that AGI
creates network effects in safety: shared investments become more valuable as
participation grows, enabling mechanism designs where cooperation dominates
defection. This work bridges formal methodology and policy frameworks,
providing foundations for practical governance of AGI competition risks.
|
2501.15281
|
Pre-training a Transformer-Based Generative Model Using a Small Sepedi
Dataset
|
cs.CL cs.AI cs.LG
|
Due to the scarcity of data in low-resourced languages, the development of
language models for these languages has been very slow. Currently, pre-trained
language models have gained popularity in natural language processing,
especially, in developing domain-specific models for low-resourced languages.
In this study, we experiment with the impact of using occlusion-based
techniques when training a language model for a text generation task. We curate
2 new datasets, the Sepedi monolingual (SepMono) dataset from several South
African resources and the Sepedi radio news (SepNews) dataset from the radio
news domain. We use the SepMono dataset to pre-train transformer-based models
using the occlusion and non-occlusion pre-training techniques and compare
performance. The SepNews dataset is specifically used for fine-tuning. Our
results show that the non-occlusion models perform better compared to the
occlusion-based models when measuring validation loss and perplexity. However,
evaluating the generated text with the BLEU score, which measures generation
quality, shows a slightly higher score for the occlusion-based models than for
the non-occlusion models.
|
2501.15282
|
AutoG: Towards automatic graph construction from tabular data
|
cs.LG
|
Recent years have witnessed significant advancements in graph machine
learning (GML), with its applications spanning numerous domains. However, the
focus of GML has predominantly been on developing powerful models, often
overlooking a crucial initial step: constructing suitable graphs from common
data formats, such as tabular data. This construction process is fundamental to
applying graph-based models, yet it remains largely understudied and lacks
formalization. Our research aims to address this gap by formalizing the graph
construction problem and proposing an effective solution. We identify two
critical challenges to achieve this goal: 1. The absence of dedicated datasets
to formalize and evaluate the effectiveness of graph construction methods, and
2. Existing automatic construction methods can only be applied to some specific
cases, while tedious human engineering is required to generate high-quality
graphs. To tackle these challenges, we present a two-fold contribution. First,
we introduce a set of datasets to formalize and evaluate graph construction
methods. Second, we propose an LLM-based solution, AutoG, automatically
generating high-quality graph schemas without human intervention. The
experimental results demonstrate that the quality of constructed graphs is
critical to downstream task performance, and AutoG can generate high-quality
graphs that rival those produced by human experts.
|
2501.15283
|
Are Human Interactions Replicable by Generative Agents? A Case Study on
Pronoun Usage in Hierarchical Interactions
|
cs.CL
|
As Large Language Models (LLMs) advance in their capabilities, researchers
have increasingly employed them for social simulation. In this paper, we
investigate whether interactions among LLM agents resemble those of humans.
Specifically, we focus on the pronoun usage difference between leaders and
non-leaders, examining whether the simulation would lead to human-like pronoun
usage patterns during the LLMs' interactions. Our evaluation reveals the
significant discrepancies between LLM-based simulations and human pronoun
usage, with prompt-based or specialized agents failing to demonstrate
human-like pronoun usage patterns. In addition, we reveal that even if LLMs
understand the human pronoun usage patterns, they fail to demonstrate them in
the actual interaction process. Our study highlights the limitations of social
simulations based on LLM agents, urging caution in using such social simulation
in practitioners' decision-making process.
|
2501.15286
|
Efficient Point Clouds Upsampling via Flow Matching
|
cs.CV eess.SP
|
Diffusion models are a powerful framework for tackling ill-posed problems,
with recent advancements extending their use to point cloud upsampling. Despite
their potential, existing diffusion models struggle with inefficiencies as they
map Gaussian noise to real point clouds, overlooking the geometric information
inherent in sparse point clouds. To address these inefficiencies, we propose
PUFM, a flow matching approach to directly map sparse point clouds to their
high-fidelity dense counterparts. Our method first employs midpoint
interpolation to sparse point clouds, resolving the density mismatch between
sparse and dense point clouds. Since point clouds are unordered
representations, we introduce a pre-alignment method based on Earth Mover's
Distance (EMD) optimization to ensure coherent interpolation between sparse and
dense point clouds, which enables a more stable learning path in flow matching.
Experiments on synthetic datasets demonstrate that our method delivers superior
upsampling quality but with fewer sampling steps. Further experiments on
ScanNet and KITTI also show that our approach generalizes well on RGB-D point
clouds and LiDAR point clouds, making it more practical for real-world
applications.
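The two preprocessing steps described above, midpoint interpolation and EMD-based pre-alignment, can be sketched as follows (a simplified illustration on random clouds; the cloud sizes, the nearest-neighbor midpoint rule, and the use of an equal-mass assignment as the EMD are assumptions):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(0)

sparse = rng.normal(size=(32, 3))   # sparse input cloud
dense = rng.normal(size=(64, 3))    # dense target cloud

# 1) Midpoint interpolation: insert the midpoint between each point and its
#    nearest neighbor, doubling the density to match the target.
d2 = ((sparse[:, None] - sparse[None, :]) ** 2).sum(-1)
np.fill_diagonal(d2, np.inf)
nn = d2.argmin(axis=1)
upsampled = np.concatenate([sparse, 0.5 * (sparse + sparse[nn])])

# 2) Pre-alignment: a one-to-one assignment minimizing total squared cost
#    (the equal-mass EMD) reorders the target so each interpolated point
#    has a coherent destination for flow matching.
cost = ((upsampled[:, None] - dense[None, :]) ** 2).sum(-1)
rows, cols = linear_sum_assignment(cost)     # rows come back in sorted order
aligned_target = dense[cols]
print(upsampled.shape, aligned_target.shape)
```

The optimal matching is never worse than an arbitrary pairing, which is what makes the learned sparse-to-dense path more stable.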
|
2501.15288
|
A Two-Stage CAE-Based Federated Learning Framework for Efficient Jamming
Detection in 5G Networks
|
cs.CR cs.LG
|
Cyber-security for 5G networks is drawing notable attention due to an
increase in complex jamming attacks that could target the critical 5G Radio
Frequency (RF) domain. These attacks pose a significant risk to heterogeneous
network (HetNet) architectures, leading to degradation in network performance.
Conventional machine-learning techniques for jamming detection rely on
centralized training, which raises data privacy concerns. To address
these challenges, this paper proposes a decentralized two-stage federated
learning (FL) framework for jamming detection in 5G femtocells. Our proposed
distributed framework uses the Federated Averaging (FedAVG) algorithm in the
first stage to train a Convolutional Autoencoder (CAE) for unsupervised learning.
In the second stage, we use a fully connected network (FCN) built on the
pre-trained CAE encoder that is trained using Federated Proximal (FedProx)
algorithm to perform supervised classification. Our experimental results depict
that our proposed framework (FedAVG and FedProx) accomplishes efficient
training and prediction across non-IID client datasets without compromising
data privacy. Specifically, our framework achieves a precision of 0.94, recall
of 0.90, F1-score of 0.92, and an accuracy of 0.92, while minimizing
communication rounds to 30 and achieving robust convergence in detecting jammed
signals with an optimal client count of 6.
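The FedAVG aggregation used in the first stage can be sketched with a toy round structure; the clients here run a few local steps on a simple quadratic standing in for CAE training, and the client count, data sizes, and learning rate are hypothetical (FedProx and the CAE itself are omitted):

```python
import numpy as np

rng = np.random.default_rng(0)

n_clients, dim, lr = 6, 5, 0.1
sizes = np.array([50, 80, 30, 60, 40, 90])       # non-IID client data sizes
targets = rng.normal(size=(n_clients, dim))      # per-client local optima
global_w = np.zeros(dim)

for _ in range(30):                              # 30 communication rounds
    local = []
    for c in range(n_clients):
        w = global_w.copy()
        for _ in range(5):                       # local epochs
            w -= lr * (w - targets[c])           # grad of 0.5 * ||w - t_c||^2
        local.append(w)
    global_w = np.average(local, axis=0, weights=sizes)   # FedAVG step

print(global_w)   # converges to the size-weighted mean of client optima
```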
|
2501.15290
|
Advanced Real-Time Fraud Detection Using RAG-Based LLMs
|
cs.CR cs.AI
|
Artificial Intelligence has become a double-edged sword in modern society,
being both a boon and a bane. While it empowers individuals, it also enables
malicious actors to perpetrate scams such as fraudulent phone calls and user
impersonations. This growing threat necessitates a robust system to protect
individuals. In this paper, we introduce a novel real-time fraud detection
mechanism using Retrieval-Augmented Generation (RAG) technology to address
this challenge on two fronts. First, our system incorporates a continuously
updating policy-checking feature that transcribes phone calls in real time and
uses RAG-based models to verify that the caller is not soliciting private
information, thus ensuring transparency and the authenticity of the
conversation. Second, we implement a real-time user impersonation check with a
two-step verification process to confirm the caller's identity, ensuring
accountability. A key innovation of our system is the ability to update
policies without retraining the entire model, enhancing its adaptability. We
validated our RAG-based approach using synthetic call recordings, achieving an
accuracy of 97.98% and an F1-score of 97.44% with 100 calls, outperforming
state-of-the-art methods. This robust and flexible fraud detection system is
well suited for real-world deployment.
|
2501.15293
|
Deep Learning in Early Alzheimer's disease's Detection: A Comprehensive
Survey of Classification, Segmentation, and Feature Extraction Methods
|
cs.LG
|
Alzheimer's disease is a deadly neurological condition that impairs important
memory and brain functions. Alzheimer's disease promotes brain shrinkage,
ultimately leading to dementia. Dementia diagnosis typically takes 2.8 to 4.4
years after the first clinical indication. Advancements in computing and
information technology have led to many techniques for studying Alzheimer's
disease. Early identification and therapy are crucial for preventing
Alzheimer's disease; early-onset dementia hits people before the age of 65,
while late-onset dementia occurs after this age. According to the 2015 World
Alzheimer's disease Report, there are 46.8 million individuals worldwide
suffering from dementia, with an anticipated 74.7 million more by 2030 and
131.5 million by 2050. Deep Learning has outperformed conventional Machine
Learning techniques by identifying intricate structures in high-dimensional
data. Convolutional Neural Networks (CNNs) and Recurrent Neural Networks
(RNNs) have achieved an accuracy of up to 96.0% for Alzheimer's disease
classification, and 84.2% for mild cognitive impairment (MCI) conversion
prediction. The few existing literature surveys on applying ML to predict
dementia lack comprehensive observations; in contrast, this survey focuses on
a specific data channel for dementia detection. This study evaluated Deep
Learning algorithms for early Alzheimer's disease detection, using openly
accessible datasets, feature segmentation, and classification methods. This
article also identifies research gaps and limits in detecting Alzheimer's
disease, which can inform future research.
|
2501.15296
|
You Only Prune Once: Designing Calibration-Free Model Compression With
Policy Learning
|
cs.CL
|
The ever-increasing size of large language models (LLMs) presents significant
challenges for deployment due to their heavy computational and memory
requirements. Current model pruning techniques attempt to alleviate these
issues by relying heavily on external calibration datasets to determine which
parameters to prune or compress, thus limiting their flexibility and
scalability across different compression ratios. Moreover, these methods often
cause severe performance degradation, particularly in downstream tasks, when
subjected to higher compression rates. In this paper, we propose PruneNet, a
novel model compression method that addresses these limitations by
reformulating model pruning as a policy learning process. PruneNet decouples
the pruning process from the model architecture, eliminating the need for
calibration datasets. It learns a stochastic pruning policy to assess parameter
importance solely based on intrinsic model properties while preserving the
spectral structure to minimize information loss. PruneNet can compress the
LLaMA-2-7B model in just 15 minutes, achieving over 80% retention of its
zero-shot performance with a 30% compression ratio, outperforming existing
methods that retain only 75% performance. Furthermore, on complex multitask
language understanding tasks, PruneNet demonstrates its robustness by
preserving up to 80% performance of the original model, proving itself a
superior alternative to conventional structured compression techniques.
|
2501.15301
|
Separable Computation of Information Measures
|
cs.IT cs.LG math.IT stat.ML
|
We study a separable design for computing information measures, where the
information measure is computed from learned feature representations instead of
raw data. Under mild assumptions on the feature representations, we demonstrate
that a class of information measures admit such separable computation,
including mutual information, $f$-information, Wyner's common information,
G{\'a}cs--K{\"o}rner common information, and Tishby's information bottleneck.
Our development establishes several new connections between information
measures and the statistical dependence structure. The characterizations also
provide theoretical guarantees of practical designs for estimating information
measures through representation learning.
|
2501.15304
|
Music Generation using Human-In-The-Loop Reinforcement Learning
|
cs.SD cs.AI cs.HC cs.LG eess.AS
|
This paper presents an approach that combines Human-In-The-Loop Reinforcement
Learning (HITL RL) with principles derived from music theory to facilitate
real-time generation of musical compositions. HITL RL, previously employed in
diverse applications such as modelling humanoid robot mechanics and enhancing
language models, harnesses human feedback to refine the training process. In
this study, we develop a HITL RL framework that can leverage the constraints
and principles in music theory. In particular, we propose an episodic tabular
Q-learning algorithm with an epsilon-greedy exploration policy. The system
generates musical tracks (compositions), continuously enhancing its quality
through iterative human-in-the-loop feedback. The reward function for this
process is the subjective musical taste of the user.
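The episodic tabular Q-learning loop with epsilon-greedy exploration described above can be sketched as follows. The state/action spaces, hyperparameters, and reward values here are illustrative assumptions; in the paper's setting the reward would come from the user's human-in-the-loop feedback on generated tracks:

```python
import random
from collections import defaultdict

# Illustrative hyperparameters (assumptions, not the paper's values).
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2
ACTIONS = list(range(12))  # e.g. 12 pitch classes (assumption)

Q = defaultdict(float)  # Q[(state, action)] -> estimated value

def choose_action(state):
    """Epsilon-greedy: explore with probability EPSILON, else exploit."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def update(state, action, reward, next_state):
    """Standard Q-learning update toward the bootstrapped target."""
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
```

In a HITL setting, `update` would be called after the user rates a generated track, with that rating as the reward.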
|
2501.15305
|
Enhancing Disaster Resilience with UAV-Assisted Edge Computing: A
Reinforcement Learning Approach to Managing Heterogeneous Edge Devices
|
cs.ET cs.AI cs.DC
|
Edge sensing and computing is rapidly becoming part of intelligent
infrastructure architecture leading to operational reliance on such systems in
disaster or emergency situations. In such scenarios there is a high chance of
power supply failure due to power grid issues, and communication system issues
due to base stations losing power or being damaged by the elements, e.g.,
flooding, wildfires etc. Mobile edge computing in the form of unmanned aerial
vehicles (UAVs) has been proposed to provide computation offloading from these
devices to conserve their battery, while the use of UAVs as relay network nodes
has also been investigated previously. This paper considers the use of UAVs
with further constraints on power and connectivity to prolong the life of the
network while also ensuring that the data is received from the edge nodes in a
timely manner. Reinforcement learning is used to investigate numerous scenarios
of various levels of power and communication failure. This approach is able to
identify the device most likely to fail in a given scenario, thus providing
priority guidance for maintenance personnel. The evacuations of a rural town
and urban downtown area are also simulated to demonstrate the effectiveness of
the approach at extending the life of the most critical edge devices.
|
2501.15309
|
Investigating the Feasibility of Patch-based Inference for Generalized
Diffusion Priors in Inverse Problems for Medical Images
|
eess.IV cs.CV cs.LG
|
Plug-and-play approaches to solving inverse problems such as restoration and
super-resolution have recently benefited from Diffusion-based generative priors
for natural as well as medical images. However, solutions often use the
standard albeit computationally intensive route of training and inferring with
the whole image on the diffusion prior. While patch-based approaches to
evaluating diffusion priors in plug-and-play methods have received some
interest, they remain an open area of study. In this work, we explore the
feasibility of using patches for training and inference of a diffusion prior
on MRI images. We examine the minor adaptations necessary for artifact
avoidance, the performance and memory efficiency of patch-based methods, and
the adaptability of whole-image training to patch-based evaluation,
evaluating across multiple plug-and-play methods, tasks, and datasets.
|
2501.15310
|
The Multicultural Medical Assistant: Can LLMs Improve Medical ASR Errors
Across Borders?
|
cs.CL cs.SD eess.AS
|
The global adoption of Large Language Models (LLMs) in healthcare shows
promise to enhance clinical workflows and improve patient outcomes. However,
Automatic Speech Recognition (ASR) errors in critical medical terms remain a
significant challenge. These errors can compromise patient care and safety if
not detected. This study investigates the prevalence and impact of ASR errors
in medical transcription in Nigeria, the United Kingdom, and the United States.
By evaluating raw and LLM-corrected transcriptions of accented English in these
regions, we assess the potential and limitations of LLMs to address challenges
related to accents and medical terminology in ASR. Our findings highlight
significant disparities in ASR accuracy across regions and identify specific
conditions under which LLM corrections are most effective.
|
2501.15316
|
ToMoE: Converting Dense Large Language Models to Mixture-of-Experts
through Dynamic Structural Pruning
|
cs.LG cs.CL
|
Large Language Models (LLMs) have demonstrated remarkable abilities in
tackling a wide range of complex tasks. However, their huge computational and
memory costs raise significant challenges in deploying these models on
resource-constrained devices or efficiently serving them. Prior approaches have
attempted to alleviate these problems by permanently removing less important
model structures, yet these methods often result in substantial performance
degradation due to the permanent deletion of model parameters. In this work,
we mitigate this issue by reducing the number of active parameters without
permanently removing them. Specifically, we introduce a differentiable
dynamic pruning method that pushes dense models to maintain a fixed number of
active parameters by converting their MLP layers into a Mixture of Experts
(MoE) architecture. Our method, even without fine-tuning, consistently
outperforms previous structural pruning techniques across diverse model
families, including Phi-2, LLaMA-2, LLaMA-3, and Qwen-2.5.
|
2501.15318
|
A Post-Processing-Based Fair Federated Learning Framework
|
cs.LG cs.AI cs.CY
|
Federated Learning (FL) allows collaborative model training among distributed
parties without pooling local datasets at a central server. However, the
distributed nature of FL poses challenges in training fair federated learning
models. Existing techniques are often limited in the fairness flexibility
they offer to clients and in their performance. We formally define and empirically
analyze a simple and intuitive post-processing-based framework to improve group
fairness in FL systems. This framework can be divided into two stages: a
standard FL training stage followed by a completely decentralized local
debiasing stage. In the first stage, a global model is trained without fairness
constraints using a standard federated learning algorithm (e.g. FedAvg). In the
second stage, each client applies fairness post-processing on the global model
using their respective local dataset. This allows for customized fairness
improvements based on clients' desired and context-guided fairness
requirements. We demonstrate two well-established post-processing techniques in
this framework: model output post-processing and final layer fine-tuning. We
evaluate the framework against three common baselines on four different
datasets, including tabular, signal, and image data, each with varying levels
of data heterogeneity across clients. Our work shows that this framework not
only simplifies fairness implementation in FL but also provides significant
fairness improvements with minimal accuracy loss or even accuracy gain, across
data modalities and machine learning methods, being especially effective in
more heterogeneous settings.
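As an illustration of the "model output post-processing" stage described above, the sketch below applies per-group decision thresholds to a frozen global model's scores so that positive prediction rates are equalized across groups (demographic parity). The function names and the quantile-based threshold search are assumptions for illustration, not the paper's actual implementation:

```python
import numpy as np

def fit_group_thresholds(scores, groups, target_rate):
    """Pick a per-group threshold (a score quantile) so that each group
    ends up with the same positive prediction rate (assumed criterion)."""
    thresholds = {}
    for g in set(groups):
        g_scores = scores[groups == g]
        thresholds[g] = np.quantile(g_scores, 1.0 - target_rate)
    return thresholds

def predict(scores, groups, thresholds):
    """Binarize scores with each sample's group-specific threshold."""
    return np.array([s >= thresholds[g] for s, g in zip(scores, groups)])
```

In the framework's second stage, each client would fit such thresholds on its own local data, giving the customized, decentralized debiasing the abstract describes.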
|
2501.15319
|
PSO and the Traveling Salesman Problem: An Intelligent Optimization
Approach
|
cs.NE math.OC
|
The Traveling Salesman Problem (TSP) is a well-known combinatorial
optimization problem that aims to find the shortest possible route that visits
each city exactly once and returns to the starting point. This paper explores
the application of Particle Swarm Optimization (PSO), a population-based
optimization algorithm, to solve TSP. Although PSO was originally designed for
continuous optimization problems, this work adapts PSO for the discrete nature
of TSP by treating the order of cities as a permutation. A local search
strategy, including 2-opt and 3-opt techniques, is applied to improve the
solution after updating the particle positions. The performance of the proposed
PSO algorithm is evaluated using benchmark TSP instances and compared to other
popular optimization algorithms, such as Genetic Algorithms (GA) and Simulated
Annealing (SA). Results show that PSO performs well for small to medium-sized
problems, though its performance diminishes for larger instances due to
difficulties in escaping local optima. This paper concludes that PSO is a
promising approach for solving TSP, with potential for further improvement
through hybridization with other optimization techniques.
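The 2-opt local-search step mentioned above can be sketched as follows; the tour representation (a list of city indices) and the precomputed distance matrix are assumptions for illustration:

```python
def tour_length(tour, dist):
    """Total length of a closed tour, including the return edge."""
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def two_opt(tour, dist):
    """Repeatedly reverse tour segments while doing so shortens the route."""
    improved = True
    while improved:
        improved = False
        for i in range(1, len(tour) - 1):
            for j in range(i + 1, len(tour)):
                candidate = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]
                if tour_length(candidate, dist) < tour_length(tour, dist):
                    tour, improved = candidate, True
    return tour
```

In the hybrid scheme, such a local-search pass would be applied to each particle's tour after the PSO position update, repairing the crossings that the swarm update alone tends to leave behind.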
|