id stringlengths 9 16 | title stringlengths 4 278 | categories stringlengths 5 104 | abstract stringlengths 6 4.09k |
|---|---|---|---|
2501.00785 | NMM-HRI: Natural Multi-modal Human-Robot Interaction with Voice and
Deictic Posture via Large Language Model | cs.RO | Translating human intent into robot commands is crucial for the future of
service robots in an aging society. Existing Human-Robot Interaction (HRI)
systems relying on gestures or verbal commands are impractical for the elderly
due to difficulties with complex syntax or sign language. To address the
challenge, this paper introduces a multi-modal interaction framework that
combines voice and deictic posture information to create a more natural HRI
system. The visual cues are first processed by the object detection model to
gain a global understanding of the environment, and then bounding boxes are
estimated based on depth information. By using a large language model (LLM)
with voice-to-text commands and temporally aligned selected bounding boxes,
robot action sequences can be generated, while key control syntax constraints
are applied to avoid potential LLM hallucination issues. The system is
evaluated on real-world tasks with varying levels of complexity using a
Universal Robots UR3e manipulator. Our method demonstrates significantly better
performance in HRI in terms of accuracy and robustness. To benefit the research
community and the general public, we will make our code and design open-source.
|
2501.00790 | LENS-XAI: Redefining Lightweight and Explainable Network Security
through Knowledge Distillation and Variational Autoencoders for Scalable
Intrusion Detection in Cybersecurity | cs.CR cs.AI cs.CY cs.ET | The rapid proliferation of Industrial Internet of Things (IIoT) systems
necessitates advanced, interpretable, and scalable intrusion detection systems
(IDS) to combat emerging cyber threats. Traditional IDS face challenges such as
high computational demands, limited explainability, and inflexibility against
evolving attack patterns. To address these limitations, this study introduces
the Lightweight Explainable Network Security framework (LENS-XAI), which
combines robust intrusion detection with enhanced interpretability and
scalability. LENS-XAI integrates knowledge distillation, variational
autoencoder models, and attribution-based explainability techniques to achieve
high detection accuracy and transparency in decision-making. By leveraging a
training set comprising 10% of the available data, the framework optimizes
computational efficiency without sacrificing performance. Experimental
evaluation on four benchmark datasets: Edge-IIoTset, UKM-IDS20, CTU-13, and
NSL-KDD, demonstrates the framework's superior performance, achieving detection
accuracies of 95.34%, 99.92%, 98.42%, and 99.34%, respectively. Additionally,
the framework excels in reducing false positives and adapting to complex attack
scenarios, outperforming existing state-of-the-art methods. Key strengths of
LENS-XAI include its lightweight design, suitable for resource-constrained
environments, and its scalability across diverse IIoT and cybersecurity
contexts. Moreover, the explainability module enhances trust and transparency,
critical for practical deployment in dynamic and sensitive applications. This
research contributes significantly to advancing IDS by addressing computational
efficiency, feature interpretability, and real-world applicability. Future work
could focus on extending the framework to ensemble AI systems for distributed
environments, further enhancing its robustness and adaptability.
|
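The 10% training subset mentioned in the LENS-XAI abstract can be illustrated with a minimal sketch; the uniform, seeded sampling below (the function name `training_subset` and its parameters are invented for illustration) is an assumption, not the paper's actual selection scheme:

```python
import random

def training_subset(dataset, fraction=0.10, seed=0):
    """Uniformly sample a fraction of the dataset for training.

    Assumption: uniform, seeded sampling. The paper states that 10% of
    the available data is used but may select that subset differently.
    """
    rng = random.Random(seed)
    k = max(1, int(len(dataset) * fraction))
    return rng.sample(dataset, k)

subset = training_subset(list(range(1000)))
print(len(subset))  # -> 100
```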
2501.00795 | Multimodal Large Models Are Effective Action Anticipators | cs.CV | The task of long-term action anticipation demands solutions that can
effectively model temporal dynamics over extended periods while deeply
understanding the inherent semantics of actions. Traditional approaches, which
primarily rely on recurrent units or Transformer layers to capture long-term
dependencies, often fall short in addressing these challenges. Large Language
Models (LLMs), with their robust sequential modeling capabilities and extensive
commonsense knowledge, present new opportunities for long-term action
anticipation. In this work, we introduce the ActionLLM framework, a novel
approach that treats video sequences as successive tokens, leveraging LLMs to
anticipate future actions. Our baseline model simplifies the LLM architecture
by setting future tokens, incorporating an action tuning module, and reducing
the textual decoder layer to a linear layer, enabling straightforward action
prediction without the need for complex instructions or redundant descriptions.
To further harness the commonsense reasoning of LLMs, we predict action
categories for observed frames and use sequential textual clues to guide
semantic understanding. In addition, we introduce a Cross-Modality Interaction
Block, designed to explore the specificity within each modality and capture
interactions between vision and textual modalities, thereby enhancing
multimodal tuning. Extensive experiments on benchmark datasets demonstrate the
superiority of the proposed ActionLLM framework, encouraging a promising
direction to explore LLMs in the context of action anticipation. Code is
available at https://github.com/2tianyao1/ActionLLM.git.
|
2501.00798 | Make Shuffling Great Again: A Side-Channel Resistant Fisher-Yates
Algorithm for Protecting Neural Networks | cs.CR cs.AI | Neural network models implemented in embedded devices have been shown to be
susceptible to side-channel attacks (SCAs), allowing recovery of proprietary
model parameters, such as weights and biases. Countermeasure methods already
used to protect cryptographic implementations can be tailored to protect
embedded neural network models.
Shuffling, a hiding-based countermeasure that randomly shuffles the order of
computations, was shown to be vulnerable to SCA when the Fisher-Yates algorithm
is used. In this paper, we propose a design of an SCA-secure version of the
Fisher-Yates algorithm. By integrating the masking technique for modular
reduction and Blakely's method for modular multiplication, we effectively
remove the vulnerability in the division operation that led to side-channel
leakage in the original version of the algorithm. We experimentally evaluate
that the countermeasure is effective against SCA by implementing a correlation
power analysis attack on an embedded neural network model implemented on ARM
Cortex-M4. Compared to the original proposal, the memory overhead is $2\times$
the biggest layer of the network, while the time overhead varies from $4\%$ to
$0.49\%$ for layers with $100$ and $1000$ neurons, respectively.
|
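For context on the Fisher-Yates abstract above, here is the textbook (unprotected) shuffle; the modular reduction that maps a raw random word to a bounded index is the division operation the abstract identifies as the side-channel leakage point. This is a reference sketch, not the authors' masked variant:

```python
import random

def fisher_yates(seq, rand_word=lambda: random.getrandbits(32)):
    """In-place Fisher-Yates shuffle of a mutable sequence.

    The reduction `rand_word() % (i + 1)` mirrors how embedded
    implementations map a raw random word to a bounded index; this
    division is the operation the paper hardens with masking and
    Blakely's modular multiplication.
    """
    for i in range(len(seq) - 1, 0, -1):
        j = rand_word() % (i + 1)  # leaky reduction in the naive version
        seq[i], seq[j] = seq[j], seq[i]
    return seq

perm = fisher_yates(list(range(10)))
print(sorted(perm))  # -> [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
```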
2501.00799 | Follow The Approximate Sparse Leader for No-Regret Online Sparse Linear
Approximation | cs.LG math.OC | We consider the problem of \textit{online sparse linear approximation}, where
one predicts the best sparse approximation of a sequence of measurements in
terms of linear combination of columns of a given measurement matrix. Such
online prediction problems are ubiquitous, ranging from medical trials to web
caching to resource allocation. The inherent difficulty of offline recovery
also makes the online problem challenging. In this letter, we propose
Follow-The-Approximate-Sparse-Leader, an efficient online meta-policy to
address this online problem. Through a detailed theoretical analysis, we prove
that under certain assumptions on the measurement sequence, the proposed policy
enjoys a data-dependent sublinear upper bound on the static regret, which can
range from logarithmic to square-root. Numerical simulations are performed to
corroborate the theoretical findings and demonstrate the efficacy of the
proposed online policy.
|
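The online sparse linear approximation setting can be made concrete with a toy follow-the-leader sketch restricted to 1-sparse predictions; this simplification (and the helper name `one_sparse_leader`) is purely illustrative and is not the paper's Follow-The-Approximate-Sparse-Leader meta-policy:

```python
def one_sparse_leader(columns, measurements):
    """Toy 1-sparse 'follow the leader' for online linear approximation.

    After each measurement, pick the single column (with a scalar
    least-squares coefficient) that best fits the mean of all
    measurements seen so far, and use it to predict the next one.
    Illustrative only; the paper's meta-policy is more general.
    """
    dot = lambda u, v: sum(a * b for a, b in zip(u, v))
    seen, predictions = [], []
    for y in measurements:
        if seen:
            mean = [sum(ys) / len(seen) for ys in zip(*seen)]
            best = max(columns, key=lambda c: dot(c, mean) ** 2 / dot(c, c))
            coef = dot(best, mean) / dot(best, best)
            predictions.append([coef * b for b in best])
        else:
            predictions.append([0.0] * len(y))  # no data yet: predict zero
        seen.append(y)
    return predictions
```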
2501.00803 | Reasoning-Oriented and Analogy-Based Methods for Locating and Editing in
Zero-Shot Event-Relational Reasoning | cs.CL cs.AI | Zero-shot event-relational reasoning is an important task in natural language
processing, and existing methods jointly learn a variety of event-relational
prefixes and inference-form prefixes to achieve such tasks. However, training
prefixes consumes large computational resources and lacks interpretability.
Additionally, learning various relational and inferential knowledge
inefficiently exploits the connections between tasks. Therefore, we first
propose a method for Reasoning-Oriented Locating and Editing (ROLE), which
locates and edits the key modules of the language model for reasoning about
event relations, enhancing interpretability and also resource-efficiently
optimizing the reasoning ability. Subsequently, we propose a method for
Analogy-Based Locating and Editing (ABLE), which efficiently exploits the
similarities and differences between tasks to optimize the zero-shot reasoning
capability. Experimental results show that ROLE improves interpretability and
reasoning performance with reduced computational cost. ABLE achieves SOTA
results in zero-shot reasoning.
|
2501.00804 | Automatic Text Pronunciation Correlation Generation and Application for
Contextual Biasing | eess.AS cs.CL | Effectively distinguishing the pronunciation correlations between different
written texts is a significant issue in linguistic acoustics. Traditionally,
such pronunciation correlations are obtained through manually designed
pronunciation lexicons. In this paper, we propose a data-driven method to
automatically acquire these pronunciation correlations, called automatic text
pronunciation correlation (ATPC). The supervision required for this method is
consistent with the supervision needed for training end-to-end automatic speech
recognition (E2E-ASR) systems, i.e., speech and corresponding text annotations.
First, the iteratively-trained timestamp estimator (ITSE) algorithm is employed
to align the speech with its corresponding annotated text symbols. Then, a
speech encoder is used to convert the speech into speech embeddings. Finally,
we compare the speech embedding distances of different text symbols to obtain
ATPC. Experimental results on Mandarin show that ATPC enhances E2E-ASR
performance in contextual biasing and holds promise for dialects or languages
lacking artificial pronunciation lexicons.
|
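The final step of the ATPC pipeline, comparing embedding distances between text symbols, can be sketched as follows; the cosine-similarity-of-centroids formulation is an assumption for illustration, with the alignment (ITSE) and speech-encoder stages taken as given:

```python
import math

def symbol_correlation(embeddings_by_symbol):
    """Pairwise similarity between text symbols' mean speech embeddings.

    Input: {text_symbol: [embedding vectors of its aligned segments]}.
    Output: {(sym_a, sym_b): cosine similarity of their centroids}.
    The centroid + cosine choice is assumed, not taken from the paper.
    """
    def mean(vecs):
        return [sum(x) / len(vecs) for x in zip(*vecs)]
    def cosine(u, v):
        num = sum(a * b for a, b in zip(u, v))
        return num / (math.sqrt(sum(a * a for a in u))
                      * math.sqrt(sum(b * b for b in v)))
    centroids = {s: mean(vs) for s, vs in embeddings_by_symbol.items()}
    symbols = sorted(centroids)
    return {(a, b): cosine(centroids[a], centroids[b])
            for i, a in enumerate(symbols) for b in symbols[i + 1:]}
```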
2501.00805 | SLIDE: Integrating Speech Language Model with LLM for Spontaneous Spoken
Dialogue Generation | eess.AS cs.CL cs.SD | Recently, "textless" speech language models (SLMs) based on speech units
have made huge progress in generating naturalistic speech, including non-verbal
vocalizations. However, the generated speech samples often lack semantic
coherence. In this paper, we propose SLM and LLM Integration for spontaneous
spoken Dialogue gEneration (SLIDE). Specifically, we first utilize an LLM to
generate the textual content of spoken dialogue. Next, we convert the textual
dialogues into phoneme sequences and use a two-tower transformer-based duration
predictor to predict the duration of each phoneme. Finally, an SLM conditioned
on the spoken phoneme sequences is used to vocalize the textual dialogue.
Experimental results on the Fisher dataset demonstrate that our system can
generate naturalistic spoken dialogue while maintaining high semantic
coherence.
|
2501.00811 | Regression Guided Strategy to Automated Facial Beauty Optimization
through Image Synthesis | cs.CV cs.LG | The use of beauty filters on social media, which enhance the appearance of
individuals in images, is a well-researched area, with existing methods proving
to be highly effective. Traditionally, such enhancements are performed using
rule-based approaches that leverage domain knowledge of facial features
associated with attractiveness, applying very specific transformations to
maximize these attributes. In this work, we present an alternative approach
that projects facial images as points on the latent space of a pre-trained GAN,
which are then optimized to produce beautiful faces. The movement of the latent
points is guided by a newly developed facial beauty evaluation regression
network, which learns to distinguish attractive facial features, outperforming
many existing facial beauty evaluation models in this domain. By using this
data-driven approach, our method can automatically capture holistic patterns in
beauty directly from data rather than relying on predefined rules, enabling
more dynamic and potentially broader applications of facial beauty editing.
This work demonstrates a potential new direction for automated aesthetic
enhancement, offering a complementary alternative to existing methods.
|
2501.00816 | MixSA: Training-free Reference-based Sketch Extraction via
Mixture-of-Self-Attention | cs.CV | Current sketch extraction methods either require extensive training or fail
to capture a wide range of artistic styles, limiting their practical
applicability and versatility. We introduce Mixture-of-Self-Attention (MixSA),
a training-free sketch extraction method that leverages strong diffusion priors
for enhanced sketch perception. At its core, MixSA employs a
mixture-of-self-attention technique, which manipulates self-attention layers by
substituting the keys and values with those from reference sketches. This
allows for the seamless integration of brushstroke elements into initial
outline images, offering precise control over texture density and enabling
interpolation between styles to create novel, unseen styles. By aligning
brushstroke styles with the texture and contours of colored images,
particularly in late decoder layers handling local textures, MixSA addresses
the common issue of color averaging by adjusting initial outlines. Evaluated
with various perceptual metrics, MixSA demonstrates superior performance in
sketch quality, flexibility, and applicability. This approach not only
overcomes the limitations of existing methods but also empowers users to
generate diverse, high-fidelity sketches that more accurately reflect a wide
range of artistic expressions.
|
2501.00817 | Hardness of Learning Fixed Parities with Neural Networks | cs.LG stat.ML | Learning parity functions is a canonical problem in learning theory, which
although computationally tractable, is not amenable to standard learning
algorithms such as gradient-based methods. This hardness is usually explained
via statistical query lower bounds [Kearns, 1998]. However, these bounds only
imply that for any given algorithm, there is some worst-case parity function
that will be hard to learn. Thus, they do not explain why fixed parities - say,
the full parity function over all coordinates - are difficult to learn in
practice, at least with standard predictors and gradient-based methods [Abbe
and Boix-Adsera, 2022]. In this paper, we address this open problem, by showing
that for any fixed parity of some minimal size, using it as a target function
to train one-hidden-layer ReLU networks with perturbed gradient descent will
fail to produce anything meaningful. To establish this, we prove a new result
about the decay of the Fourier coefficients of linear threshold (or weighted
majority) functions, which may be of independent interest.
|
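The full parity function discussed in the abstract is easy to state concretely; the snippet below also checks one face of its hardness, namely that parity is uncorrelated with any single input coordinate under the uniform distribution:

```python
from itertools import product

def parity(x):
    """Full parity over {0,1}^n in the +/-1 convention common in
    Fourier analysis of Boolean functions: 1 if the number of ones
    is even, -1 otherwise."""
    return 1 if sum(x) % 2 == 0 else -1

# Over the uniform distribution on {0,1}^4, parity has zero correlation
# with any single coordinate (here the first, mapped to +/-1).
n = 4
corr = sum(parity(x) * (2 * x[0] - 1) for x in product([0, 1], repeat=n))
print(corr)  # -> 0
```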
2501.00818 | SPARNet: Continual Test-Time Adaptation via Sample Partitioning Strategy
and Anti-Forgetting Regularization | cs.CV | Test-time Adaptation (TTA) aims to improve model performance when the model
encounters domain changes after deployment. The standard TTA mainly considers
the case where the target domain is static, while the continual TTA needs to
undergo a sequence of domain changes. This encounters a significant challenge
as the model must adapt over the long term without knowing when domain changes
occur, which makes the quality of pseudo-labels hard to guarantee. Noisy
pseudo-labels produced by simple self-training methods can cause error
accumulation and catastrophic forgetting. In this work, we propose a new
framework named SPARNet, which consists of two parts: a sample partitioning
strategy and an anti-forgetting regularization. The sample partitioning strategy
divides samples into two groups, namely reliable samples and unreliable
samples. According to the characteristics of each group of samples, we choose
different strategies to deal with different groups of samples. This ensures
that reliable samples contribute more to the model. At the same time, the
negative impacts of unreliable samples are eliminated by the mean teacher's
consistency learning. Finally, we introduce a regularization term to alleviate
the catastrophic forgetting problem, which can limit important parameters from
excessive changes. This term enables long-term adaptation of parameters in the
network. The effectiveness of our method is demonstrated in continual TTA
scenario by conducting a large number of experiments on CIFAR10-C, CIFAR100-C
and ImageNet-C.
|
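The sample partitioning step can be sketched with a confidence threshold on pseudo-label probabilities; the max-softmax criterion and threshold value here are assumptions for illustration, not necessarily SPARNet's exact rule:

```python
def partition_samples(probs, threshold=0.9):
    """Split pseudo-labeled samples into reliable / unreliable groups
    by maximum class probability.

    probs: list of per-class probability vectors, one per sample.
    Returns (reliable_indices, unreliable_indices). The max-softmax
    criterion and the threshold are illustrative assumptions.
    """
    reliable, unreliable = [], []
    for i, p in enumerate(probs):
        (reliable if max(p) >= threshold else unreliable).append(i)
    return reliable, unreliable
```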
2501.00823 | Decoupling Knowledge and Reasoning in Transformers: A Modular
Architecture with Generalized Cross-Attention | cs.LG cs.AI cs.CL | Transformers have achieved remarkable success across diverse domains, but
their monolithic architecture presents challenges in interpretability,
adaptability, and scalability. This paper introduces a novel modular
Transformer architecture that explicitly decouples knowledge and reasoning
through a generalized cross-attention mechanism to a globally shared knowledge
base with layer-specific transformations, specifically designed for effective
knowledge retrieval. Critically, we provide a rigorous mathematical derivation
demonstrating that the Feed-Forward Network (FFN) in a standard Transformer is
a specialized case (a closure) of this generalized cross-attention, revealing
its role in implicit knowledge retrieval and validating our design. This
theoretical framework provides a new lens for understanding FFNs and lays the
foundation for future research exploring enhanced interpretability,
adaptability, and scalability, enabling richer interplay with external
knowledge bases and other systems.
|
2501.00824 | Information Sifting Funnel: Privacy-preserving Collaborative Inference
Against Model Inversion Attacks | cs.CR cs.IT math.IT | The complexity of neural networks and inference tasks, coupled with demands
for computational efficiency and real-time feedback, poses significant
challenges for resource-constrained edge devices. Collaborative inference
mitigates this by assigning shallow feature extraction to edge devices and
offloading features to the cloud for further inference, reducing computational
load. However, transmitted features remain susceptible to model inversion
attacks (MIAs), which can reconstruct original input data. Current defenses,
such as perturbation and information bottleneck techniques, offer explainable
protection but face limitations, including the lack of standardized criteria
for assessing MIA difficulty, challenges in mutual information estimation, and
trade-offs among usability, privacy, and deployability.
To address these challenges, we introduce the first criterion to evaluate MIA
difficulty in collaborative inference, supported by theoretical analysis of
existing attacks and defenses, validated using experiments with the Mutual
Information Neural Estimator (MINE). Based on these findings, we propose
SiftFunnel, a privacy-preserving framework for collaborative inference. The
edge model is trained with linear and non-linear correlation constraints to
reduce redundant information in transmitted features, enhancing privacy
protection. Label smoothing and a cloud-based upsampling module are added to
balance usability and privacy. To improve deployability, the edge model
incorporates a funnel-shaped structure and attention mechanisms, preserving
both privacy and usability. Extensive experiments demonstrate that SiftFunnel
outperforms state-of-the-art defenses against MIAs, achieving superior privacy
protection with less than 3% accuracy loss and striking an optimal balance
among usability, privacy, and practicality.
|
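Of the ingredients SiftFunnel combines, label smoothing is standard and easy to show; the snippet below is the textbook formulation (the epsilon value is illustrative, and the framework's other components are not modeled here):

```python
def smooth_labels(one_hot, eps=0.1):
    """Standard label smoothing: mix a one-hot target with the uniform
    distribution over k classes. One ingredient SiftFunnel uses to
    balance usability and privacy; eps here is illustrative."""
    k = len(one_hot)
    return [(1 - eps) * y + eps / k for y in one_hot]

print(smooth_labels([1.0, 0.0], eps=0.1))  # close to [0.95, 0.05]
```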
2501.00826 | LLM-Powered Multi-Agent System for Automated Crypto Portfolio Management | q-fin.TR cs.AI | Cryptocurrency investment is inherently difficult due to its shorter history
compared to traditional assets, the need to integrate vast amounts of data from
various modalities, and the requirement for complex reasoning. While deep
learning approaches have been applied to address these challenges, their
black-box nature raises concerns about trust and explainability. Recently,
large language models (LLMs) have shown promise in financial applications due
to their ability to understand multi-modal data and generate explainable
decisions. However, a single LLM faces limitations in complex, comprehensive
tasks such as asset investment. These limitations are even more pronounced in
cryptocurrency investment, where LLMs have less domain-specific knowledge in
their training corpora.
To overcome these challenges, we propose an explainable, multi-modal,
multi-agent framework for cryptocurrency investment. Our framework uses
specialized agents that collaborate within and across teams to handle subtasks
such as data analysis, literature integration, and investment decision-making
for the top 30 cryptocurrencies by market capitalization. The expert training
module fine-tunes agents using multi-modal historical data and professional
investment literature, while the multi-agent investment module employs
real-time data to make informed cryptocurrency investment decisions. Unique
intra-team and inter-team collaboration mechanisms enhance prediction accuracy by
adjusting final predictions based on confidence levels within agent teams and
facilitating information sharing between teams. Empirical evaluation using data
from November 2023 to September 2024 demonstrates that our framework
outperforms single-agent models and market benchmarks in classification, asset
pricing, portfolio, and explainability performance.
|
2501.00828 | Embedding Style Beyond Topics: Analyzing Dispersion Effects Across
Different Language Models | cs.CL cs.AI | This paper analyzes how writing style affects the dispersion of embedding
vectors across multiple, state-of-the-art language models. While early
transformer models primarily aligned with topic modeling, this study examines
the role of writing style in shaping embedding spaces. Using a literary corpus
that alternates between topics and styles, we compare the sensitivity of
language models across French and English. By analyzing the particular impact
of style on embedding dispersion, we aim to better understand how language
models process stylistic information, contributing to their overall
interpretability.
|
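Embedding dispersion can be measured in several ways; one simple choice, shown below, is the mean pairwise cosine distance of a set of embedding vectors. The abstract does not specify its metric, so this formulation is an assumption:

```python
import math

def dispersion(vectors):
    """Mean pairwise cosine distance of a set of embedding vectors --
    one simple dispersion measure of the kind the abstract studies.
    Requires at least two vectors."""
    def cosine(u, v):
        num = sum(a * b for a, b in zip(u, v))
        return num / (math.sqrt(sum(a * a for a in u))
                      * math.sqrt(sum(b * b for b in v)))
    pairs = [(i, j) for i in range(len(vectors))
             for j in range(i + 1, len(vectors))]
    return sum(1 - cosine(vectors[i], vectors[j]) for i, j in pairs) / len(pairs)
```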
2501.00829 | An LLM-Empowered Adaptive Evolutionary Algorithm For Multi-Component
Deep Learning Systems | cs.NE cs.AI | Multi-objective evolutionary algorithms (MOEAs) are widely used for searching
optimal solutions in complex multi-component applications. Traditional MOEAs
for multi-component deep learning (MCDL) systems face challenges in enhancing
the search efficiency while maintaining the diversity. To combat these, this
paper proposes $\mu$MOEA, the first LLM-empowered adaptive evolutionary search
algorithm to detect safety violations in MCDL systems. Inspired by the
context-understanding ability of Large Language Models (LLMs), $\mu$MOEA
prompts the LLM to comprehend the optimization problem and generate an initial
population tailored to evolutionary objectives. Subsequently, it employs adaptive
selection and variation to iteratively produce offspring, balancing the
evolutionary efficiency and diversity. During the evolutionary process, to
navigate away from the local optima, $\mu$MOEA integrates the evolutionary
experience back into the LLM. This utilization harnesses the LLM's quantitative
reasoning prowess to generate differential seeds, breaking away from current
optimal solutions. We evaluate $\mu$MOEA in finding safety violations of MCDL
systems, and compare its performance with state-of-the-art MOEA methods.
Experimental results show that $\mu$MOEA can significantly improve the
efficiency and diversity of the evolutionary search.
|
2501.00830 | LLM+AL: Bridging Large Language Models and Action Languages for Complex
Reasoning about Actions | cs.CL cs.AI | Large Language Models (LLMs) have made significant strides in various
intelligent tasks but still struggle with complex action reasoning tasks that
require systematic search. To address this limitation, we propose a method that
bridges the natural language understanding capabilities of LLMs with the
symbolic reasoning strengths of action languages. Our approach, termed
"LLM+AL," leverages the LLM's strengths in semantic parsing and commonsense
knowledge generation alongside the action language's proficiency in automated
reasoning based on encoded knowledge. We compare LLM+AL against
state-of-the-art LLMs, including ChatGPT-4, Claude 3 Opus, Gemini Ultra 1.0,
and o1-preview, using benchmarks for complex reasoning about actions. Our
findings indicate that, although all methods exhibit errors, LLM+AL, with
relatively minimal human corrections, consistently leads to correct answers,
whereas standalone LLMs fail to improve even with human feedback. LLM+AL also
contributes to automated generation of action languages.
|
2501.00836 | Recognizing Artistic Style of Archaeological Image Fragments Using Deep
Style Extrapolation | cs.CV | Ancient artworks obtained in archaeological excavations usually suffer from a
certain degree of fragmentation and physical degradation. Often, fragments of
multiple artifacts from different periods or artistic styles could be found on
the same site. With each fragment containing only partial information about its
source, and pieces from different objects being mixed, categorizing broken
artifacts based on their visual cues could be a challenging task, even for
professionals. As classification is a common function of many machine learning
models, the power of modern architectures can be harnessed for efficient and
accurate fragment classification. In this work, we present a generalized
deep-learning framework for predicting the artistic style of image fragments,
achieving state-of-the-art results for pieces with varying styles and
geometries.
|
2501.00838 | Spatially-guided Temporal Aggregation for Robust Event-RGB Optical Flow
Estimation | cs.CV cs.LG | Current optical flow methods exploit the stable appearance of frame (or RGB)
data to establish robust correspondences across time. Event cameras, on the
other hand, provide high-temporal-resolution motion cues and excel in
challenging scenarios. These complementary characteristics underscore the
potential of integrating frame and event data for optical flow estimation.
However, most cross-modal approaches fail to fully utilize the complementary
advantages, relying instead on simply stacking information. This study
introduces a novel approach that uses a spatially dense modality to guide the
aggregation of the temporally dense event modality, achieving effective
cross-modal fusion. Specifically, we propose an event-enhanced frame
representation that preserves the rich texture of frames and the basic
structure of events. We use the enhanced representation as the guiding modality
and employ events to capture temporally dense motion information. The robust
motion features derived from the guiding modality direct the aggregation of
motion information from events. To further enhance fusion, we propose a
transformer-based module that complements sparse event motion features with
spatially rich frame information and enhances global information propagation.
Additionally, a mix-fusion encoder is designed to extract comprehensive
spatiotemporal contextual features from both modalities. Extensive experiments
on the MVSEC and DSEC-Flow datasets demonstrate the effectiveness of our
framework. Leveraging the complementary strengths of frames and events, our
method achieves leading performance on the DSEC-Flow dataset. Compared to the
event-only model, frame guidance improves accuracy by 10\%. Furthermore, it
outperforms the state-of-the-art fusion-based method with a 4\% accuracy gain
and a 45\% reduction in inference time.
|
2501.00840 | Distilled Lifelong Self-Adaptation for Configurable Systems | cs.SE cs.AI | Modern configurable systems provide tremendous opportunities for engineering
future intelligent software systems. A key difficulty thereof is how to
effectively self-adapt the configuration of a running system such that its
performance (e.g., runtime and throughput) can be optimized under time-varying
workloads. This unfortunately remains unaddressed in existing approaches as
they either overlook the available past knowledge or rely on static
exploitation of past knowledge without reasoning the usefulness of information
when planning for self-adaptation. In this paper, we tackle this challenging
problem by proposing DLiSA, a framework that self-adapts configurable systems.
DLiSA comes with two properties: firstly, it supports lifelong planning, and
thereby the planning process runs continuously throughout the lifetime of the
system, allowing dynamic exploitation of the accumulated knowledge for rapid
adaptation. Secondly, the planning for a newly emerged workload is boosted via
distilled knowledge seeding, in which the knowledge is dynamically purified
such that only useful past configurations are seeded when necessary, mitigating
misleading information. Extensive experiments suggest that the proposed DLiSA
significantly outperforms state-of-the-art approaches, demonstrating a
performance improvement of up to 229% and a resource acceleration of up to
2.22x on generating promising adaptation configurations. All data and sources
can be found at our repository: https://github.com/ideas-labo/dlisa.
|
2501.00843 | FusionSORT: Fusion Methods for Online Multi-object Visual Tracking | cs.CV | In this work, we investigate four different fusion methods for associating
detections to tracklets in multi-object visual tracking. In addition to
considering strong cues such as motion and appearance information, we also
consider weak cues such as height intersection-over-union (height-IoU) and
tracklet confidence information in the data association using different fusion
methods. These fusion methods include minimum, weighted sum based on IoU,
Kalman filter (KF) gating, and Hadamard product of costs due to the different
cues. We conduct extensive evaluations on validation sets of MOT17, MOT20 and
DanceTrack datasets, and find out that the choice of a fusion method is key for
data association in multi-object visual tracking. We hope that this
investigative work helps the computer vision research community to use the
right fusion method for data association in multi-object visual tracking.
|
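Three of the four fusion methods named in the abstract (minimum, weighted sum, Hadamard product) can be sketched as element-wise combinations of two cost matrices; KF gating is omitted for brevity, and the fixed scalar weight below stands in for the paper's IoU-based weighting:

```python
def fuse_costs(motion, appearance, method="min", w=0.5):
    """Element-wise fusion of a motion-cost and an appearance-cost
    matrix (lists of rows), illustrating the kinds of fusion
    FusionSORT compares. KF gating is not modeled, and the scalar
    weight `w` replaces the paper's IoU-based weighting."""
    fused = []
    for row_m, row_a in zip(motion, appearance):
        if method == "min":
            fused.append([min(m, a) for m, a in zip(row_m, row_a)])
        elif method == "weighted_sum":
            fused.append([w * m + (1 - w) * a for m, a in zip(row_m, row_a)])
        elif method == "hadamard":
            fused.append([m * a for m, a in zip(row_m, row_a)])
        else:
            raise ValueError(f"unknown fusion method: {method}")
    return fused
```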
2501.00848 | IllusionBench: A Large-scale and Comprehensive Benchmark for Visual
Illusion Understanding in Vision-Language Models | cs.CV | Current Visual Language Models (VLMs) show impressive image understanding but
struggle with visual illusions, especially in real-world scenarios. Existing
benchmarks focus on classical cognitive illusions, which have been learned by
state-of-the-art (SOTA) VLMs, revealing issues such as hallucinations and
limited perceptual abilities. To address this gap, we introduce IllusionBench,
a comprehensive visual illusion dataset that encompasses not only classic
cognitive illusions but also real-world scene illusions. This dataset features
1,051 images, 5,548 question-answer pairs, and 1,051 golden text descriptions
that address the presence, causes, and content of the illusions. We evaluate
ten SOTA VLMs on this dataset using true-or-false, multiple-choice, and
open-ended tasks. In addition to real-world illusions, we design trap illusions
that resemble classical patterns but differ in reality, highlighting
hallucination issues in SOTA models. The top-performing model, GPT-4o, achieves
80.59% accuracy on true-or-false tasks and 76.75% on multiple-choice questions,
but still lags behind human performance. In the semantic description task,
GPT-4o's hallucinations on classical illusions result in low scores for trap
illusions, even falling behind some open-source models. IllusionBench is, to
the best of our knowledge, the largest and most comprehensive benchmark for
visual illusions in VLMs to date.
|
2501.00851 | Scale-wise Bidirectional Alignment Network for Referring Remote Sensing
Image Segmentation | cs.CV | The goal of referring remote sensing image segmentation (RRSIS) is to extract
specific pixel-level regions within an aerial image via a natural language
expression. Recent advancements, particularly Transformer-based fusion designs,
have demonstrated remarkable progress in this domain. However, existing methods
primarily focus on refining visual features using language-aware guidance
during the cross-modal fusion stage, neglecting the complementary
vision-to-language flow. This limitation often leads to irrelevant or
suboptimal representations. In addition, the diverse spatial scales of ground
objects in aerial images pose significant challenges to the visual perception
capabilities of existing models when conditioned on textual inputs. In this
paper, we propose an innovative framework called Scale-wise Bidirectional
Alignment Network (SBANet) to address these challenges for RRSIS. Specifically,
we design a Bidirectional Alignment Module (BAM) with learnable query tokens to
selectively and effectively represent visual and linguistic features,
emphasizing regions associated with key tokens. BAM is further enhanced with a
dynamic feature selection block, designed to provide both macro- and
micro-level visual features, preserving global context and local details to
facilitate more effective cross-modal interaction. Furthermore, SBANet
incorporates a text-conditioned channel and spatial aggregator to bridge the
gap between the encoder and decoder, enhancing cross-scale information exchange
in complex aerial scenarios. Extensive experiments demonstrate that our
proposed method achieves superior performance in comparison to previous
state-of-the-art methods on the RRSIS-D and RefSegRS datasets, both
quantitatively and qualitatively. The code will be released after publication.
|
2501.00852 | Hybridising Reinforcement Learning and Heuristics for Hierarchical
Directed Arc Routing Problems | cs.LG | The Hierarchical Directed Capacitated Arc Routing Problem (HDCARP) is an
extension of the Capacitated Arc Routing Problem (CARP), where the arcs of a
graph are divided into classes based on their priority. The traversal of these
classes is determined by either precedence constraints or a hierarchical
objective, resulting in two distinct HDCARP variants. To the best of our
knowledge, only one matheuristic has been proposed for these variants, but it
performs relatively slowly, particularly for large-scale instances (Ha et al.,
2024). In this paper, we propose a fast heuristic to efficiently address the
computational challenges of HDCARP. Furthermore, we incorporate Reinforcement
Learning (RL) into our heuristic to effectively guide the selection of local
search operators, resulting in a hybrid algorithm. We name this hybrid
algorithm the Hybrid Reinforcement Learning and Heuristic Algorithm for
Directed Arc Routing (HRDA). The hybrid algorithm adapts dynamically to changes
in the problem, using real-time feedback to improve routing strategies and
solution quality by integrating heuristic methods. Extensive computational
experiments on artificial instances demonstrate that this hybrid approach
significantly improves the speed of the heuristic without deteriorating the
solution quality. Our source code is publicly available at:
https://github.com/HySonLab/ArcRoute
|
2501.00854 | A Graphical Approach to State Variable Selection in Off-policy Learning | stat.ME cs.LG | Sequential decision problems are widely studied across many areas of science.
A key challenge when learning policies from historical data - a practice
commonly referred to as off-policy learning - is how to ``identify'' the impact
of a policy of interest when the observed data are not randomized. Off-policy
learning has mainly been studied in two settings: dynamic treatment regimes
(DTRs), where the focus is on controlling confounding in medical problems with
short decision horizons, and offline reinforcement learning (RL), where the
focus is on dimension reduction in closed systems such as games. The gap
between these two well studied settings has limited the wider application of
off-policy learning to many real-world problems. Using the theory of causal
inference based on acyclic directed mixed graphs (ADMGs), we provide a set of
graphical identification criteria in general decision processes that encompass
both DTRs and MDPs. We discuss how our results relate to the often implicit
causal assumptions made in the DTR and RL literatures and further clarify
several common misconceptions. Finally, we present a realistic simulation study
for the dynamic pricing problem encountered in container logistics, and
demonstrate how violations of our graphical criteria can lead to suboptimal
policies.
|
2501.00855 | What is a Social Media Bot? A Global Comparison of Bot and Human
Characteristics | cs.CY cs.AI cs.SI | Chatter on social media is 20% bots and 80% humans. Chatter by bots and
humans is consistently different: bots tend to use linguistic cues that can be
easily automated while humans use cues that require dialogue understanding.
Bots use words that match the identities they choose to present, while humans
may send messages that are not related to the identities they present. Bots and
humans differ in their communication structure: sampled bots have a star
interaction structure, while sampled humans have a hierarchical structure.
These conclusions are based on a large-scale analysis of social media tweets
from approximately 200 million users across 7 events. Social media bots took
the world by storm when social-cybersecurity researchers realized that social
media users consisted not only of humans but also of artificial agents called
bots. These bots wreak havoc online by spreading disinformation and
manipulating narratives. Most research on bots is based on special-purpose
definitions, mostly predicated on the event studied. This article begins by asking, "What is a bot?",
and we study the underlying principles of how bots are different from humans.
We develop a first-principle definition of a social media bot. With this
definition as a premise, we systematically compare characteristics between bots
and humans across global events, and reflect on how the software-programmed bot
is an Artificial Intelligent algorithm, and its potential for evolution as
technology advances. Based on our results, we provide recommendations for the
use and regulation of bots. Finally, we discuss open challenges and future
directions: Detect, to systematically identify these automated and potentially
evolving bots; Differentiate, to evaluate the goodness of the bot in terms of
their content postings and relationship interactions; Disrupt, to moderate the
impact of malicious bots.
|
2501.00856 | Advances in UAV Avionics Systems Architecture, Classification and
Integration: A Comprehensive Review and Future Perspectives | eess.SY cs.SY | Avionics systems of an Unmanned Aerial Vehicle (UAV) or drone are the
critical electronic components found onboard that regulate, navigate, and
control UAV travel while ensuring public safety. Contemporary UAV avionics work
together to facilitate success of UAV missions by enabling stable
communication, secure identification protocols, novel energy solutions,
multi-sensor accurate perception and autonomous navigation, precise path
planning that guarantees collision avoidance, reliable trajectory control, and
efficient data transfer within the UAV system. Moreover, special consideration
must be given to electronic warfare threats prevention, detection, and
mitigation, and the regulatory framework associated with UAV operations. This
review presents the role and taxonomy of each UAV avionics system while
covering shortcomings and benefits of available alternatives within each
system. UAV communication systems, antennas, and location communication
tracking are surveyed. Identification systems that respond to air-to-air or
air-to-ground interrogating signals are presented. UAV classical and more
innovative power sources are discussed. The rapid development of perception
systems improves UAV autonomous navigation and control capabilities. The paper
reviews common perception systems, navigation techniques, path planning
approaches, obstacle avoidance methods, and tracking control. Modern electronic
warfare uses advanced techniques and has to be counteracted by equally advanced
methods to keep the public safe. Consequently, this work presents a detailed
overview of common electronic warfare threats and state-of-the-art
countermeasures and defensive aids. UAV safety occurrences are analyzed in the
context of national regulatory framework and the certification process. Databus
communication and standards for UAVs are reviewed as they enable efficient and
fast real-time data transfer.
|
2501.00862 | DiffETM: Diffusion Process Enhanced Embedded Topic Model | cs.CL cs.AI cs.IR cs.LG | The embedded topic model (ETM) is a widely used approach that assumes the
sampled document-topic distribution conforms to the logistic normal
distribution for easier optimization. However, this assumption oversimplifies
the real document-topic distribution, limiting the model's performance. In
response, we propose a novel method that introduces the diffusion process into
the sampling process of document-topic distribution to overcome this limitation
and maintain an easy optimization process. We validate our method through
extensive experiments on two mainstream datasets, proving its effectiveness in
improving topic modeling performance.
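To make the logistic normal assumption concrete, here is a minimal sketch of how a standard ETM samples a document-topic distribution: draw a Gaussian latent vector and push it through a softmax. The function name and parameterization are illustrative:

```python
import numpy as np

def sample_doc_topic_logistic_normal(mu, sigma, rng=None):
    """Sample a document-topic distribution from a logistic normal, as
    assumed by standard ETMs: a Gaussian draw followed by a softmax.
    `mu` and `sigma` are per-topic mean and std arrays (illustrative).
    """
    rng = np.random.default_rng() if rng is None else rng
    delta = rng.normal(mu, sigma)    # Gaussian latent
    e = np.exp(delta - delta.max())  # numerically stable softmax
    return e / e.sum()
```

The proposed method would replace this single Gaussian draw with a diffusion process over the latent, aiming at a more expressive document-topic distribution.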
|
2501.00865 | Negative to Positive Co-learning with Aggressive Modality Dropout | cs.CL cs.LG | This paper aims to document an effective way to improve multimodal
co-learning by using aggressive modality dropout. We find that by using
aggressive modality dropout we are able to reverse negative co-learning (NCL)
to positive co-learning (PCL). Aggressive modality dropout can be used to
"prep" a multimodal model for unimodal deployment, and dramatically increases
model performance during negative co-learning, where during some experiments we
saw a 20% gain in accuracy. We also benchmark our modality dropout technique
against PCL to show that our modality dropout technique improves co-learning
during PCL, although it does not have as substantial an effect as it
does during NCL. Github: https://github.com/nmagal/modality_drop_for_colearning
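A rough sketch of what aggressive modality dropout could look like on a batch of multimodal features; the drop probability, the dict layout, and the keep-at-least-one rule are our assumptions, not the paper's exact setting:

```python
import numpy as np

def modality_dropout(modalities, p_drop=0.8, rng=None):
    """Aggressively zero out entire modalities during training.

    `modalities` maps modality name -> feature array. `p_drop` is the
    per-modality drop probability (the "aggressive" rate is assumed).
    At least one modality is always kept so the model sees some input.
    """
    rng = np.random.default_rng() if rng is None else rng
    names = list(modalities)
    keep = [n for n in names if rng.random() >= p_drop]
    if not keep:  # never drop everything
        keep = [rng.choice(names)]
    return {n: (m if n in keep else np.zeros_like(m))
            for n, m in modalities.items()}
```

Applied during training, this forces the model to form predictions from whichever modalities survive, which is the mechanism credited with preparing the model for unimodal deployment.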
|
2501.00867 | Interactionalism: Re-Designing Higher Learning for the Large Language
Agent Era | cs.HC cs.MA | We introduce Interactionalism as a new set of guiding principles and
heuristics for the design and architecture of learning now available due to
Generative AI (GenAI) platforms. Specifically, we articulate interactional
intelligence as a net new skill set that is increasingly important when core
cognitive tasks are automatable and augmentable by GenAI functions. We break
down these skills into core sets of meta-cognitive and meta-emotional
components and show how working with Large Language Model (LLM)-based agents
can be proactively used to help develop learners. Interactionalism is advanced
not as a theory of learning but as a blueprint for the practice of learning,
in coordination with GenAI.
|
2501.00868 | Large Language Models Are Read/Write Policy-Makers for Simultaneous
Generation | cs.CL | Simultaneous generation models write generation results while reading
streaming inputs, necessitating a policy-maker to determine the appropriate
output timing. Existing simultaneous generation methods generally adopt the
traditional encoder-decoder architecture and learn the generation and
policy-making capabilities through complex dynamic programming techniques.
Although LLMs excel at text generation, they face challenges in taking on the
role of policy-makers through traditional training methods, limiting their
exploration in simultaneous generation. To overcome these limitations, we
propose a novel LLM-driven Simultaneous Generation (LSG) framework, which
allows the off-the-shelf LLM to decide the generation timing and produce output
concurrently. Specifically, LSG selects the generation policy that minimizes
latency as the baseline policy. Referring to the baseline policy, LSG enables
the LLM to devise an improved generation policy that better balances latency
and generation quality, and writes generation results accordingly. Experiments
on simultaneous translation and streaming automatic speech recognition tasks
show that our method can achieve state-of-the-art performance utilizing the
open-source LLMs and demonstrate practicality in real-world scenarios.
|
2501.00872 | Observer-Based Data-Driven Consensus Control for Nonlinear Multi-Agent
Systems against DoS and FDI attacks | eess.SY cs.SY | Existing data-driven control methods generally do not address False Data
Injection (FDI) and Denial-of-Service (DoS) attacks simultaneously. This letter
introduces a distributed data-driven attack-resilient consensus problem under
both FDI and DoS attacks and proposes a data-driven consensus control
framework, consisting of a group of comprehensive attack-resilient observers.
The proposed group of observers is designed to estimate FDI attacks, external
disturbances, and lumped disturbances, combined with a DoS attack compensation
mechanism. A rigorous stability analysis of the approach is provided to ensure
the boundedness of the distributed neighborhood estimation consensus error. The
effectiveness of the approach is validated through numerical examples involving
both leaderless consensus and leader-follower consensus, demonstrating
significantly improved resilient performance compared to existing data-driven
control approaches.
|
2501.00873 | Exploring Structured Semantic Priors Underlying Diffusion Score for
Test-time Adaptation | cs.CV cs.LG | Capitalizing on the complementary advantages of generative and discriminative
models has always been a compelling vision in machine learning, backed by a
growing body of research. This work discloses the hidden semantic structure
within score-based generative models, unveiling their potential as effective
discriminative priors. Inspired by our theoretical findings, we propose DUSA to
exploit the structured semantic priors underlying diffusion score to facilitate
the test-time adaptation of image classifiers or dense predictors. Notably,
DUSA extracts knowledge from a single timestep of denoising diffusion, lifting
the curse of Monte Carlo-based likelihood estimation over timesteps. We
demonstrate the efficacy of our DUSA in adapting a wide variety of competitive
pre-trained discriminative models on diverse test-time scenarios. Additionally,
a thorough ablation study is conducted to dissect the pivotal elements in DUSA.
Code is publicly available at https://github.com/BIT-DA/DUSA.
|
2501.00874 | LUSIFER: Language Universal Space Integration for Enhanced Multilingual
Embeddings with Large Language Models | cs.CL cs.IR | Recent advancements in large language models (LLMs) based embedding models
have established new state-of-the-art benchmarks for text embedding tasks,
particularly in dense vector-based retrieval. However, these models
predominantly focus on English, leaving multilingual embedding capabilities
largely unexplored. To address this limitation, we present LUSIFER, a novel
zero-shot approach that adapts LLM-based embedding models for multilingual
tasks without requiring multilingual supervision. LUSIFER's architecture
combines a multilingual encoder, serving as a language-universal learner, with
an LLM-based embedding model optimized for embedding-specific tasks. These
components are seamlessly integrated through a minimal set of trainable
parameters that act as a connector, effectively transferring the multilingual
encoder's language understanding capabilities to the specialized embedding
model. Additionally, to comprehensively evaluate multilingual embedding
performance, we introduce a new benchmark encompassing 5 primary embedding
tasks, 123 diverse datasets, and coverage across 14 languages. Extensive
experimental results demonstrate that LUSIFER significantly enhances the
multilingual performance across various embedding tasks, particularly for
medium and low-resource languages, without requiring explicit multilingual
training data.
|
2501.00876 | A Novel Approach using CapsNet and Deep Belief Network for Detection and
Identification of Oral Leukopenia | eess.IV cs.CV cs.LG | Oral cancer constitutes a significant global health concern, resulting in
277,484 fatalities in 2023, with the highest prevalence observed in low- and
middle-income nations. Facilitating automation in the detection of possibly
malignant and malignant lesions in the oral cavity could result in
cost-effective and early disease diagnosis. Establishing an extensive
repository of meticulously annotated oral lesions is essential. In this
research, photos are collected from global clinical experts, who are equipped
with an annotation tool to generate comprehensive labels. This research
presents a novel approach for integrating bounding box annotations from
various doctors. Additionally, a Deep Belief Network combined with CapsNet is
employed to develop automated systems that extract intricate patterns to
address this challenging problem. This study evaluated two deep learning-based
computer vision methodologies for the automated detection and classification of
oral lesions to facilitate the early detection of oral cancer: image
classification utilizing CapsNet, and object detection. Image classification
attained an F1 score of 94.23% for detecting photos with lesions and 93.46%
for identifying images necessitating referral. Object detection attained an F1
score of 89.34% for identifying lesions for referral. Further performance
figures are reported for classification by type of referral decision. Our preliminary findings
indicate that deep learning possesses the capability to address this complex
problem.
|
2501.00877 | FGAseg: Fine-Grained Pixel-Text Alignment for Open-Vocabulary Semantic
Segmentation | cs.CV | Open-vocabulary segmentation aims to identify and segment specific regions
and objects based on text-based descriptions. A common solution is to leverage
powerful vision-language models (VLMs), such as CLIP, to bridge the gap between
vision and text information. However, VLMs are typically pretrained for
image-level vision-text alignment, focusing on global semantic features. In
contrast, segmentation tasks require fine-grained pixel-level alignment and
detailed category boundary information, which VLMs alone cannot provide. As a
result, information extracted directly from VLMs cannot meet the requirements of
segmentation tasks. To address this limitation, we propose FGAseg, a model
designed for fine-grained pixel-text alignment and category boundary
supplementation. The core of FGAseg is a Pixel-Level Alignment module that
employs a cross-modal attention mechanism and a text-pixel alignment loss to
refine the coarse-grained alignment from CLIP, achieving finer-grained
pixel-text semantic alignment. Additionally, to enrich category boundary
information, we introduce the alignment matrices as optimizable pseudo-masks
during forward propagation and propose a Category Information Supplementation
module. These pseudo-masks, derived from cosine and convolutional similarity,
provide essential global and local boundary information between different
categories. By combining these two strategies, FGAseg effectively enhances
pixel-level alignment and category boundary information, addressing key
challenges in open-vocabulary segmentation. Extensive experiments demonstrate
that FGAseg outperforms existing methods on open-vocabulary semantic
segmentation benchmarks.
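As an illustration of the cosine-similarity alignment matrices mentioned above, here is a minimal pixel-text similarity map between pixel embeddings and category text embeddings; the shapes, temperature, and function name are assumptions, not FGAseg's actual implementation:

```python
import numpy as np

def pixel_text_similarity(pixel_feats, text_feats, tau=0.07):
    """Cosine similarity between every pixel embedding (H, W, D) and
    every category text embedding (C, D), scaled by a temperature.
    Such per-category similarity maps are the kind of alignment
    matrices usable as optimizable pseudo-masks. Illustrative only.
    """
    P = pixel_feats / np.linalg.norm(pixel_feats, axis=-1, keepdims=True)
    T = text_feats / np.linalg.norm(text_feats, axis=-1, keepdims=True)
    return np.einsum("hwd,cd->hwc", P, T) / tau  # (H, W, C) logits
```

Taking an argmax over the category axis of the resulting map yields a coarse segmentation, which is what finer-grained alignment losses would refine.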
|
2501.00879 | TrustRAG: Enhancing Robustness and Trustworthiness in RAG | cs.CL | Retrieval-Augmented Generation (RAG) systems enhance large language models
(LLMs) by integrating external knowledge sources, enabling more accurate and
contextually relevant responses tailored to user queries. However, these
systems remain vulnerable to corpus poisoning attacks that can significantly
degrade LLM performance through the injection of malicious content. To address
these challenges, we propose TrustRAG, a robust framework that systematically
filters compromised and irrelevant contents before they are retrieved for
generation. Our approach implements a two-stage defense mechanism: At the first
stage, it employs K-means clustering to identify potential attack patterns in
retrieved documents using cosine similarity and ROUGE metrics as guidance,
effectively isolating suspicious content. Secondly, it performs a
self-assessment which detects malicious documents and resolves discrepancies
between the model's internal knowledge and external information. TrustRAG
functions as a plug-and-play, training-free module that integrates seamlessly
with any language model, whether open or closed-source. In addition, TrustRAG
maintains high contextual relevance while strengthening defenses against corpus
poisoning attacks. Through extensive experimental validation, we demonstrate
that TrustRAG delivers substantial improvements in retrieval accuracy,
efficiency, and attack resistance compared to existing approaches across
multiple model architectures and datasets. We have made TrustRAG available as
open-source software at \url{https://github.com/HuichiZhou/TrustRAG}.
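To give a flavor of the first-stage defense, here is a simplified sketch that flags near-duplicate retrieved documents by cosine similarity; poisoning payloads are often near-duplicates. The paper's actual first stage uses K-means clustering with cosine similarity and ROUGE guidance, so this keeps only the cosine-similarity idea, and the threshold is an assumption:

```python
import numpy as np

def flag_near_duplicates(doc_embeddings, sim_threshold=0.95):
    """Return indices of documents to keep, dropping any document that
    is suspiciously similar to an earlier one. A simplified stand-in
    for a clustering-based poisoning filter; threshold is illustrative.
    """
    X = np.asarray(doc_embeddings, dtype=float)
    X = X / np.linalg.norm(X, axis=1, keepdims=True)  # unit-normalize
    sims = X @ X.T                                     # cosine matrix
    n = len(X)
    drop = set()
    for i in range(n):
        for j in range(i + 1, n):
            if sims[i, j] >= sim_threshold:
                drop.add(j)  # keep the first of each duplicate pair
    return [i for i in range(n) if i not in drop]
```

The surviving documents would then pass to the second-stage self-assessment before generation.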
|
2501.00880 | Improving Autoregressive Visual Generation with Cluster-Oriented Token
Prediction | cs.CV | Employing LLMs for visual generation has recently become a research focus.
However, the existing methods primarily transfer the LLM architecture to visual
generation but rarely investigate the fundamental differences between language
and vision. This oversight may lead to suboptimal utilization of visual
generation capabilities within the LLM framework. In this paper, we explore the
characteristics of visual embedding space under the LLM framework and discover
that the correlation between visual embeddings can help achieve more stable and
robust generation results. We present IAR, an Improved AutoRegressive Visual
Generation Method that enhances the training efficiency and generation quality
of LLM-based visual generation models. Firstly, we propose a Codebook
Rearrangement strategy that uses a balanced k-means clustering algorithm to
rearrange the visual codebook into clusters, ensuring high similarity among
visual features within each cluster. Leveraging the rearranged codebook, we
propose a Cluster-oriented Cross-entropy Loss that guides the model to
correctly predict the cluster where the token is located. This approach ensures
that even if the model predicts the wrong token index, there is a high
probability the predicted token is located in the correct cluster, which
significantly enhances the generation quality and robustness. Extensive
experiments demonstrate that our method consistently enhances the model
training efficiency and performance from 100M to 1.4B, reducing the training
time by half while achieving the same FID. Additionally, our approach can be
applied to various LLM-based visual generation models and adheres to the
scaling law, providing a promising direction for future research in LLM-based
visual generation.
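A minimal sketch of a cluster-oriented cross-entropy of the kind described above: token probabilities are summed within the target token's cluster and the loss is applied at the cluster level. This is our simplified reading; the paper's loss may combine this with the standard token-level term:

```python
import numpy as np

def cluster_cross_entropy(logits, target_token, token_to_cluster):
    """Cross-entropy over clusters rather than individual tokens.

    `logits` are per-token scores over the codebook; `token_to_cluster`
    maps each codebook index to its cluster id. Names are illustrative.
    """
    logits = np.asarray(logits, dtype=float)
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()                      # softmax over tokens
    target_cluster = token_to_cluster[target_token]
    cluster_prob = probs[[i for i, c in enumerate(token_to_cluster)
                          if c == target_cluster]].sum()
    return -np.log(cluster_prob + 1e-12)
```

Under such a loss, predicting a wrong token inside the right cluster is not penalized, which matches the robustness argument in the abstract.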
|
2501.00881 | Agentic Systems: A Guide to Transforming Industries with Vertical AI
Agents | cs.MA | The evolution of agentic systems represents a significant milestone in
artificial intelligence and modern software systems, driven by the demand for
vertical intelligence tailored to diverse industries. These systems enhance
business outcomes through adaptability, learning, and interaction with dynamic
environments. At the forefront of this revolution are Large Language Model
(LLM) agents, which serve as the cognitive backbone of these intelligent
systems. In response to the need for consistency and scalability, this work
attempts to define a level of standardization for Vertical AI agent design
patterns by identifying core building blocks and proposing a \textbf{Cognitive
Skills} Module, which incorporates domain-specific, purpose-built inference
capabilities. Building on these foundational concepts, this paper offers a
comprehensive introduction to agentic systems, detailing their core components,
operational patterns, and implementation strategies. It further explores
practical use cases and examples across various industries, highlighting the
transformative potential of LLM agents in driving industry-specific
applications.
|
2501.00882 | FullTransNet: Full Transformer with Local-Global Attention for Video
Summarization | cs.CV | Video summarization mainly aims to produce a compact, short, informative, and
representative synopsis of raw videos, which is of great importance for
browsing, analyzing, and understanding video content. Dominant video
summarization approaches are generally based on recurrent or convolutional
neural networks, even recent encoder-only transformers. We propose using full
transformer as an alternative architecture to perform video summarization. The
full transformer with an encoder-decoder structure, specifically designed for
handling sequence transduction problems, is naturally suitable for video
summarization tasks. This work considers supervised video summarization and
casts it as a sequence-to-sequence learning problem. Our key idea is to
directly apply the full transformer to the video summarization task, which is
intuitively sound and effective. Also, considering the efficiency problem, we
replace full attention with the combination of local and global sparse
attention, which enables modeling long-range dependencies while reducing
computational costs. Based on this, we propose a transformer-like architecture,
named FullTransNet, which has a full encoder-decoder structure with
local-global sparse attention for video summarization. Specifically, both the
encoder and decoder in FullTransNet are stacked in the same way as those in the
vanilla transformer, and the local-global sparse attention is used only at the
encoder side. Extensive experiments on two public multimedia benchmark datasets
SumMe and TVSum demonstrate that our proposed model can outperform other video
summarization approaches, achieving F-Measures of 54.4% on SumMe and 63.9% on
TVSum with relatively lower compute and memory requirements, verifying its
effectiveness and efficiency. The code and models are publicly available on
GitHub.
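A small sketch of a local-global sparse attention mask of the kind the abstract describes, combining a sliding local window with a few global tokens; the window size and global positions here are illustrative, not FullTransNet's configuration:

```python
import numpy as np

def local_global_mask(seq_len, window=2, global_idx=(0,)):
    """Boolean attention mask: mask[i, j] is True where position i may
    attend to position j. Local band plus designated global tokens.
    """
    mask = np.zeros((seq_len, seq_len), dtype=bool)
    for i in range(seq_len):
        lo, hi = max(0, i - window), min(seq_len, i + window + 1)
        mask[i, lo:hi] = True      # local sliding window
    for g in global_idx:
        mask[g, :] = True          # global token attends everywhere
        mask[:, g] = True          # every token attends to it
    return mask
```

Applying such a mask inside standard attention keeps the band and global entries while reducing the quadratic cost of full attention over long frame sequences.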
|
2501.00884 | Diversity Optimization for Travelling Salesman Problem via Deep
Reinforcement Learning | cs.LG cs.AI | Existing neural methods for the Travelling Salesman Problem (TSP) mostly aim
at finding a single optimal solution. To discover diverse yet high-quality
solutions for Multi-Solution TSP (MSTSP), we propose a novel deep reinforcement
learning based neural solver, which is primarily featured by an encoder-decoder
structured policy. Concretely, on the one hand, a Relativization Filter (RF) is
designed to enhance the robustness of the encoder to affine transformations of
the instances, so as to potentially improve the quality of the found solutions.
On the other hand, a Multi-Attentive Adaptive Active Search (MA3S) is tailored
to allow the decoders to strike a balance between the optimality and diversity.
Experimental evaluations on benchmark instances demonstrate the superiority of
our method over recent neural baselines across different metrics, and its
competitive performance against state-of-the-art traditional heuristics with
significantly reduced computational time, ranging from $1.3\times$ to
$15\times$ faster. Furthermore, we demonstrate that our method can also be
applied to the Capacitated Vehicle Routing Problem (CVRP).
|
2501.00885 | Representation in large language models | cs.CL cs.AI cs.LG | The extraordinary success of recent Large Language Models (LLMs) on a diverse
array of tasks has led to an explosion of scientific and philosophical
theorizing aimed at explaining how they do what they do. Unfortunately,
disagreement over fundamental theoretical issues has led to stalemate, with
entrenched camps of LLM optimists and pessimists often committed to very
different views of how these systems work. Overcoming stalemate requires
agreement on fundamental questions, and the goal of this paper is to address
one such question, namely: is LLM behavior driven partly by
representation-based information processing of the sort implicated in
biological cognition, or is it driven entirely by processes of memorization and
stochastic table look-up? This is a question about what kind of algorithm LLMs
implement, and the answer carries serious implications for higher level
questions about whether these systems have beliefs, intentions, concepts,
knowledge, and understanding. I argue that LLM behavior is partially driven by
representation-based information processing, and then I describe and defend a
series of practical techniques for investigating these representations and
developing explanations on their basis. The resulting account provides a
groundwork for future theorizing about language models and their successors.
|
2501.00888 | Unfolding the Headline: Iterative Self-Questioning for News Retrieval
and Timeline Summarization | cs.CL | In the fast-changing realm of information, the capacity to construct coherent
timelines from extensive event-related content has become increasingly
significant and challenging. The complexity arises in aggregating related
documents to build a meaningful event graph around a central topic. This paper
proposes CHRONOS - Causal Headline Retrieval for Open-domain News Timeline
SummarizatiOn via Iterative Self-Questioning, which offers a fresh perspective
on the integration of Large Language Models (LLMs) to tackle the task of
Timeline Summarization (TLS). By iteratively reflecting on how events are
linked and posing new questions regarding a specific news topic to gather
information online or from an offline knowledge base, LLMs produce and refresh
chronological summaries based on documents retrieved in each round.
Furthermore, we curate Open-TLS, a novel dataset of timelines on recent news
topics authored by professional journalists to evaluate open-domain TLS where
information overload makes it impossible to find comprehensive relevant
documents from the web. Our experiments indicate that CHRONOS is not only adept
at open-domain timeline summarization, but it also rivals the performance of
existing state-of-the-art systems designed for closed-domain applications,
where a related news corpus is provided for summarization.
|
2501.00889 | Evaluating Time Series Foundation Models on Noisy Periodic Time Series | cs.LG | While recent advancements in foundation models have significantly impacted
machine learning, rigorous tests on the performance of time series foundation
models (TSFMs) remain largely underexplored. This paper presents an empirical
study evaluating the zero-shot, long-horizon forecasting abilities of several
leading TSFMs over two synthetic datasets constituting noisy periodic time
series. We assess model efficacy across different noise levels, underlying
frequencies, and sampling rates. As benchmarks for comparison, we choose two
statistical techniques: a Fourier transform (FFT)-based approach and a linear
autoregressive (AR) model. Our findings demonstrate that while for time series
with bounded periods and higher sampling rates, TSFMs can match or outperform
the statistical approaches, their forecasting abilities deteriorate with longer
periods, higher noise levels, lower sampling rates and more complex shapes of
the time series.
|
2501.00890 | Spatial Temporal Attention based Target Vehicle Trajectory Prediction
for Internet of Vehicles | cs.RO cs.LG | Forecasting vehicle behavior within complex traffic environments is pivotal
within Intelligent Transportation Systems (ITS). Though this technology plays a
significant role in alleviating the prevalent operational difficulties in
logistics and transportation systems, the precise prediction of vehicle
trajectories still poses a substantial challenge. To address this, our study
introduces the Spatio Temporal Attention-based methodology for Target Vehicle
Trajectory Prediction (STATVTPred). This approach integrates Global Positioning
System (GPS) localization technology to track target movement and dynamically
predict the vehicle's future path using comprehensive spatio-temporal
trajectory data. We map the vehicle trajectory onto a directed graph, after
which spatial attributes are extracted via a Graph Attention Network (GAT).
The Transformer technology is employed to yield temporal features from the
sequence. These elements are then amalgamated with local road network structure
maps to filter and deliver a smooth trajectory sequence, resulting in precise
vehicle trajectory prediction. This study validates our proposed STATVTPred
method on T-Drive and Chengdu taxi-trajectory datasets. The experimental
results demonstrate that STATVTPred achieves 6.38% and 10.55% higher Average
Match Rate (AMR) than the Transformer model on the Beijing and Chengdu
datasets, respectively. Compared to the LSTM Encoder-Decoder model, STATVTPred
boosts AMR by 37.45% and 36.06% on the same datasets. This is expected to
establish STATVTPred as a new approach for handling trajectory prediction of
targets in logistics and transportation scenarios, thereby enhancing prediction
accuracy.
|
2501.00891 | Demystifying Online Clustering of Bandits: Enhanced Exploration Under
Stochastic and Smoothed Adversarial Contexts | cs.LG cs.AI stat.ML | The contextual multi-armed bandit (MAB) problem is crucial in sequential
decision-making. A line of research, known as online clustering of bandits,
extends contextual MAB by grouping similar users into clusters, utilizing
shared features to improve learning efficiency. However, existing algorithms,
which rely on the upper confidence bound (UCB) strategy, struggle to gather
adequate statistical information to accurately identify unknown user clusters.
As a result, their theoretical analyses require several strong assumptions
about the "diversity" of contexts generated by the environment, leading to
impractical settings, complicated analyses, and poor practical performance.
Removing these assumptions has been a long-standing open problem in the
clustering of bandits literature. In this paper, we provide two solutions to
this open problem. First, following the i.i.d. context generation setting in
existing studies, we propose two novel algorithms, UniCLUB and PhaseUniCLUB,
which incorporate enhanced exploration mechanisms to accelerate cluster
identification. Remarkably, our algorithms require substantially weaker
assumptions while achieving regret bounds comparable to prior work. Second,
inspired by the smoothed analysis framework, we propose a more practical
setting that eliminates the requirement for i.i.d. context generation used in
previous studies, thus enhancing the performance of existing algorithms for
online clustering of bandits. Our technique can be applied to both graph-based
and set-based clustering of bandits frameworks. Extensive evaluations on both
synthetic and real-world datasets demonstrate that our proposed algorithms
consistently outperform existing approaches.
|
2501.00895 | Text2Earth: Unlocking Text-driven Remote Sensing Image Generation with a
Global-Scale Dataset and a Foundation Model | cs.CV | Generative foundation models have advanced large-scale text-driven natural
image generation, becoming a prominent research trend across various vertical
domains. However, in the remote sensing field, there is still a lack of
research on large-scale text-to-image (text2image) generation technology.
Existing remote sensing image-text datasets are small in scale and confined to
specific geographic areas and scene types. Besides, existing text2image methods
have struggled to achieve global-scale, multi-resolution controllable, and
unbounded image generation. To address these challenges, this paper presents
two key contributions: the Git-10M dataset and the Text2Earth foundation model.
Git-10M is a global-scale image-text dataset comprising 10 million image-text
pairs, 5 times larger than the previous largest one. The dataset covers a wide
range of geographic scenes and contains resolution information, significantly
surpassing existing datasets in both size and diversity. Building on Git-10M,
we propose Text2Earth, a 1.3 billion parameter generative foundation model
based on the diffusion framework to model global-scale remote sensing scenes.
Text2Earth integrates a resolution guidance mechanism, enabling users to
specify image resolutions. A dynamic condition adaptation strategy is proposed
for training and inference to improve image quality. Text2Earth excels in
zero-shot text2image generation and demonstrates robust generalization and
flexibility across multiple tasks, including unbounded scene construction,
image editing, and cross-modal image generation. This robust capability
surpasses previous models, which are restricted to fixed image sizes and
limited scene types. On the previous benchmark dataset, Text2Earth outperforms previous
models with an improvement of +26.23 FID and +20.95% Zero-shot Cls-OA
metric. Our project page is \url{https://chen-yang-liu.github.io/Text2Earth}
|
2501.00906 | Large Language Model Based Multi-Agent System Augmented Complex Event
Processing Pipeline for Internet of Multimedia Things | cs.MA cs.AI cs.MM | This paper presents the development and evaluation of a multi-agent
system framework based on Large Language Models (LLMs), also known as
foundation models, for complex event processing (CEP) with a focus on video query processing use
cases. The primary goal is to create a proof-of-concept (POC) that integrates
state-of-the-art LLM orchestration frameworks with publish/subscribe (pub/sub)
tools to address the integration of LLMs with current CEP systems. Utilizing
the Autogen framework in conjunction with Kafka message brokers, the system
demonstrates an autonomous CEP pipeline capable of handling complex workflows.
Extensive experiments evaluate the system's performance across varying
configurations, complexities, and video resolutions, revealing the trade-offs
between functionality and latency. The results show that while higher agent
count and video complexities increase latency, the system maintains high
consistency in narrative coherence. This research builds upon, and contributes
to, existing approaches to distributed AI systems, offering detailed
insights into integrating such systems into existing infrastructures.
|
2501.00907 | U-GIFT: Uncertainty-Guided Firewall for Toxic Speech in Few-Shot
Scenario | cs.SD cs.CL eess.AS | With the widespread use of social media, user-generated content has surged on
online platforms. When such content includes hateful, abusive, offensive, or
cyberbullying behavior, it is classified as toxic speech, posing a significant
threat to the online ecosystem's integrity and safety. While manual content
moderation is still prevalent, the overwhelming volume of content and the
psychological strain on human moderators underscore the need for automated
toxic speech detection. Previously proposed detection methods often rely on
large annotated datasets; however, acquiring such datasets is both costly and
challenging in practice. To address this issue, we propose an
uncertainty-guided firewall for toxic speech in few-shot scenarios, U-GIFT,
that utilizes self-training to enhance detection performance even when labeled
data is limited. Specifically, U-GIFT combines active learning with Bayesian
Neural Networks (BNNs) to automatically identify high-quality samples from
unlabeled data, prioritizing the selection of pseudo-labels with higher
confidence for training based on uncertainty estimates derived from model
predictions. Extensive experiments demonstrate that U-GIFT significantly
outperforms competitive baselines in few-shot detection scenarios. In the
5-shot setting, it achieves a 14.92\% performance improvement over the basic
model. Importantly, U-GIFT is user-friendly and adaptable to various
pre-trained language models (PLMs). It also exhibits robust performance in
scenarios with sample imbalance and cross-domain settings, while showcasing
strong generalization across various language applications. We believe that
U-GIFT provides an efficient solution for few-shot toxic speech detection,
offering substantial support for automated content moderation in cyberspace,
thereby acting as a firewall to promote advancements in cybersecurity.
|
2501.00909 | RIS-Aided Integrated Sensing and Communication Systems under
Dual-polarized Channels | cs.IT eess.SP math.IT | This paper considers reconfigurable intelligent surface (RIS)-aided
integrated sensing and communication (ISAC) systems under dual-polarized (DP)
channels.
Unlike existing ISAC systems, which ignore the polarization of
electromagnetic waves, this study adopts a DP base station (BS) and a DP RIS to
serve users with a pair of DP antennas.
The achievable sum rate is maximized through jointly optimizing the
beamforming matrix at the DP BS, and the reflecting coefficients at the DP RIS.
To address this problem, we first utilize the weighted minimum mean-square
error (WMMSE) method to transform the objective function into a more tractable
form, and then an alternating optimization (AO) method is employed to decouple
the original problem into two subproblems.
Due to the constant modulus constraint, the DP RIS reflection matrix
optimization problem is addressed by the majorization-minimization (MM) method.
For the DP beamforming matrix, we propose a penalty-based algorithm that can
obtain a low-complexity closed-form solution.
Simulation results validate the advantage of deploying a DP transmit array and
DP RIS in the considered ISAC systems.
|
2501.00910 | Population Aware Diffusion for Time Series Generation | cs.LG cs.AI | Diffusion models have shown promising ability in generating high-quality time
series (TS) data. Despite the initial success, existing works mostly focus on
the authenticity of data at the individual level, but pay less attention to
preserving the population-level properties on the entire dataset. Such
population-level properties include value distributions for each dimension and
distributions of certain functional dependencies (e.g., cross-correlation, CC)
between different dimensions. For instance, when generating house energy
consumption TS data, the value distributions of the outside temperature and the
kitchen temperature should be preserved, as well as the distribution of CC
between them. Preserving such TS population-level properties is critical in
maintaining the statistical insights of the datasets, mitigating model bias,
and augmenting downstream tasks like TS prediction. Yet, it is often overlooked
by existing models. Hence, data generated by existing models often bear
distribution shifts from the original data. We propose Population-aware
Diffusion for Time Series (PaD-TS), a new TS generation model that better
preserves the population-level properties. The key novelties of PaD-TS include
1) a new training method explicitly incorporating TS population-level property
preservation, and 2) a new dual-channel encoder model architecture that better
captures the TS data structure. Empirical results in major benchmark datasets
show that PaD-TS can improve the average CC distribution shift score between
real and synthetic data by 5.9x while maintaining a performance comparable to
state-of-the-art models on individual-level authenticity.
|
2501.00911 | Aligning LLMs with Domain Invariant Reward Models | cs.LG | Aligning large language models (LLMs) to human preferences is challenging in
domains where preference data is unavailable. We address the problem of
learning reward models for such target domains by leveraging feedback collected
from simpler source domains, where human preferences are easier to obtain. Our
key insight is that, while domains may differ significantly, human preferences
convey \emph{domain-agnostic} concepts that can be effectively captured by a
reward model. We propose \method, a framework that trains domain-invariant
reward models by optimizing a dual loss: a domain loss that minimizes the
divergence between source and target distribution, and a source loss that
optimizes preferences on the source domain. We show \method is a general
approach that we evaluate and analyze across 4 distinct settings: (1)
Cross-lingual transfer (accuracy: $0.621 \rightarrow 0.661$), (2)
Clean-to-noisy (accuracy: $0.671 \rightarrow 0.703$), (3) Few-shot-to-full
transfer (accuracy: $0.845 \rightarrow 0.920$), and (4) Simple-to-complex tasks
transfer (correlation: $0.508 \rightarrow 0.556$). Our code, models and data
are available at \url{https://github.com/portal-cornell/dial}.
|
2501.00912 | AutoPresent: Designing Structured Visuals from Scratch | cs.CV cs.CL | Designing structured visuals such as presentation slides is essential for
communicative needs, necessitating both content creation and visual planning
skills. In this work, we tackle the challenge of automated slide generation,
where models produce slide presentations from natural language (NL)
instructions. We first introduce the SlidesBench benchmark, the first benchmark
for slide generation with 7k training and 585 testing examples derived from 310
slide decks across 10 domains. SlidesBench supports evaluations that are
(i) reference-based to measure similarity to a target slide, and
(ii) reference-free to measure the design quality of generated slides alone. We
benchmark end-to-end image generation and program generation methods with a
variety of models, and find that programmatic methods produce higher-quality
slides in user-interactable formats. Built on the success of program
generation, we create AutoPresent, an 8B Llama-based model trained on 7k pairs
of instructions paired with code for slide generation, and achieve results
comparable to the closed-source model GPT-4o. We further explore iterative
design refinement, where the model is tasked to self-refine its own output, and
we find that this process improves slide quality. We hope that our work
will provide a basis for future work on generating structured visuals.
|
2501.00913 | $\beta$-DQN: Improving Deep Q-Learning By Evolving the Behavior | cs.LG cs.AI | While many sophisticated exploration methods have been proposed, their lack
of generality and high computational cost often lead researchers to favor
simpler methods like $\epsilon$-greedy. Motivated by this, we introduce
$\beta$-DQN, a simple and efficient exploration method that augments the
standard DQN with a behavior function $\beta$. This function estimates the
probability that each action has been taken at each state. By leveraging
$\beta$, we generate a population of diverse policies that balance exploration
between state-action coverage and overestimation bias correction. An adaptive
meta-controller is designed to select an effective policy for each episode,
enabling flexible and explainable exploration. $\beta$-DQN is straightforward
to implement and adds minimal computational overhead to the standard DQN.
Experiments on both simple and challenging exploration domains show that
$\beta$-DQN outperforms existing baseline methods across a wide range of tasks,
providing an effective solution for improving exploration in deep reinforcement
learning.
|
2501.00915 | Diffusion Policies for Generative Modeling of Spacecraft Trajectories | cs.RO cs.LG cs.SY eess.SY math.OC | Machine learning has demonstrated remarkable promise for solving the
trajectory generation problem and in paving the way for online use of
trajectory optimization for resource-constrained spacecraft. However, a key
shortcoming in current machine learning-based methods for trajectory generation
is that they require large datasets and even small changes to the original
trajectory design requirements necessitate retraining new models to learn the
parameter-to-solution mapping. In this work, we leverage compositional
diffusion modeling to efficiently adapt to out-of-distribution data and problem
variations in a few-shot framework for 6 degree-of-freedom (DoF) powered
descent trajectory generation. Unlike traditional deep learning methods that
can only learn the underlying structure of one specific trajectory optimization
problem, diffusion models are a powerful generative modeling framework that
represents the solution as a probability density function (PDF) and this allows
for the composition of PDFs encompassing a variety of trajectory design
specifications and constraints. We demonstrate the capability of compositional
diffusion models for inference-time 6 DoF minimum-fuel landing site selection
and composable constraint representations. Using these samples as initial
guesses for 6 DoF powered descent guidance enables dynamically feasible and
computationally efficient trajectory generation.
|
2501.00917 | Hierarchical Vision-Language Alignment for Text-to-Image Generation via
Diffusion Models | cs.CV | Text-to-image generation has witnessed significant advancements with the
integration of Large Vision-Language Models (LVLMs), yet challenges remain in
aligning complex textual descriptions with high-quality, visually coherent
images. This paper introduces the Vision-Language Aligned Diffusion (VLAD)
model, a generative framework that addresses these challenges through a
dual-stream strategy combining semantic alignment and hierarchical diffusion.
VLAD utilizes a Contextual Composition Module (CCM) to decompose textual
prompts into global and local representations, ensuring precise alignment with
visual features. Furthermore, it incorporates a multi-stage diffusion process
with hierarchical guidance to generate high-fidelity images. Experiments
conducted on MARIO-Eval and INNOVATOR-Eval benchmarks demonstrate that VLAD
significantly outperforms state-of-the-art methods in terms of image quality,
semantic alignment, and text rendering accuracy. Human evaluations further
validate the superior performance of VLAD, making it a promising approach for
text-to-image generation in complex scenarios.
|
2501.00919 | Exploring Geometric Representational Alignment through Ollivier-Ricci
Curvature and Ricci Flow | cs.LG | Representational analysis explores how input data of a neural system are
encoded in high dimensional spaces of its distributed neural activations, and
how we can compare different systems, for instance, artificial neural networks
and brains, on those grounds. While existing methods offer important insights,
they typically do not account for local intrinsic geometrical properties within
the high-dimensional representation spaces. To go beyond these limitations, we
explore Ollivier-Ricci curvature and Ricci flow as tools to study the alignment
of representations between humans and artificial neural systems on a geometric
level. As a proof-of-principle study, we compared the representations of face
stimuli between VGG-Face, a human-aligned version of VGG-Face, and
corresponding human similarity judgments from a large online study. Using this
discrete geometric framework, we were able to identify local structural
similarities and differences by examining the distributions of node and edge
curvature and higher-level properties by detecting and comparing community
structure in the representational graphs.
|
2501.00921 | Aligning Netlist to Source Code using SynAlign | cs.AR cs.CL | In current chip design processes, using multiple tools to obtain a gate-level
netlist often results in the loss of source code correlation. SynAlign
addresses this challenge by automating the alignment process, simplifying
iterative design, reducing overhead, and maintaining correlation across various
tools. This enhances the efficiency and effectiveness of chip design workflows.
Improving characteristics such as frequency through iterative design is
essential for enhancing accelerators and chip designs. While synthesis tools
produce netlists with critical path information, designers often lack the tools
to trace these netlist cells back to their original source code. Mapping
netlist components to source code provides early feedback on timing and power
for frontend designers.
SynAlign automatically aligns post-optimized netlists with the original
source code without altering compilers or synthesis processes. Its alignment
strategy relies on the consistent design structure throughout the chip design
cycle, even with changes in compiler flow. This consistency allows engineers to
maintain a correlation between modified designs and the original source code
across various tools. Remarkably, SynAlign can tolerate up to 61\% design net
changes without impacting alignment accuracy.
|
2501.00924 | On the Low-Complexity of Fair Learning for Combinatorial Multi-Armed
Bandit | cs.LG | Combinatorial Multi-Armed Bandit with fairness constraints is a framework
where multiple arms form a super arm and can be pulled in each round under
uncertainty to maximize cumulative rewards while ensuring the minimum average
reward required by each arm. The existing pessimistic-optimistic algorithm
linearly combines virtual queue-lengths (tracking the fairness violations) and
Upper Confidence Bound estimates as a weight for each arm and selects a super
arm with the maximum total weight. In many scenarios, the number of super arms
can be exponential in the number of arms; in wireless networks, for example,
interference constraints cause exactly such exponential growth. Evaluating all
the feasible super arms
to find the one with the maximum total weight can incur extremely high
computational complexity in the pessimistic-optimistic algorithm. To avoid
this, we develop a low-complexity fair learning algorithm based on the
so-called pick-and-compare approach that involves randomly picking $M$ feasible
super arms to evaluate. By setting $M$ to a constant, the number of comparison
steps in the pessimistic-optimistic algorithm can be reduced to a constant,
thereby significantly reducing the computational complexity. Our theoretical
proof shows this low-complexity design incurs only a slight sacrifice in
fairness and regret performance. Finally, we validate the theoretical result by
extensive simulations.
|
2501.00930 | Tight Constraint Prediction of Six-Degree-of-Freedom Transformer-based
Powered Descent Guidance | math.OC cs.LG cs.RO cs.SY eess.SY | This work introduces Transformer-based Successive Convexification (T-SCvx),
an extension of Transformer-based Powered Descent Guidance (T-PDG),
generalizable for efficient six-degree-of-freedom (DoF) fuel-optimal powered
descent trajectory generation. Our approach significantly enhances the sample
efficiency and solution quality for nonconvex powered descent guidance by
employing a rotation invariant transformation of the sampled dataset. T-PDG was
previously applied to the 3-DoF minimum fuel powered descent guidance problem,
improving solution times by up to an order of magnitude compared to lossless
convexification (LCvx). By learning to predict the set of tight or active
constraints at the optimal control problem's solution, T-SCvx creates the
minimal reduced-size problem
initialized with only the tight constraints, then uses the solution of this
reduced problem to warm-start the direct optimization solver. 6-DoF powered
descent guidance is known to be challenging to solve quickly and reliably due
to the nonlinear and non-convex nature of the problem, the discretization
scheme heavily influencing solution validity, and reference trajectory
initialization determining algorithm convergence or divergence. Our
contributions in this work address these challenges by extending T-PDG to learn
the set of tight constraints for the successive convexification (SCvx)
formulation of the 6-DoF powered descent guidance problem. In addition to
reducing the problem size, feasible and locally optimal reference trajectories
are also learned to facilitate convergence from the initial guess. T-SCvx
enables onboard computation of real-time guidance trajectories, demonstrated by
a 6-DoF Mars powered landing application problem.
|
2501.00935 | Multiscaled Multi-Head Attention-based Video Transformer Network for
Hand Gesture Recognition | cs.CV cs.HC | Dynamic gesture recognition is one of the challenging research areas due to
variations in pose, size, and shape of the signer's hand. In this letter,
Multiscaled Multi-Head Attention Video Transformer Network (MsMHA-VTN) for
dynamic hand gesture recognition is proposed. A pyramidal hierarchy of
multiscale features is extracted using the transformer multiscaled head
attention model. The proposed model employs different attention dimensions for
each head of the transformer which enables it to provide attention at the
multiscale level. Further, in addition to single modality, recognition
performance using multiple modalities is examined. Extensive experiments
demonstrate the superior performance of the proposed MsMHA-VTN with an overall
accuracy of 88.22\% and 99.10\% on NVGesture and Briareo datasets,
respectively.
|
2501.00941 | A Novel Diffusion Model for Pairwise Geoscience Data Generation with
Unbalanced Training Dataset | cs.LG cs.CV physics.geo-ph | Recently, the advent of generative AI technologies has made transformational
impacts on our daily lives, yet its application in scientific applications
remains in its early stages. Data scarcity is a major, well-known barrier in
data-driven scientific computing, so physics-guided generative AI holds
significant promise. In scientific computing, most tasks study the conversion
of multiple data modalities to describe physical phenomena, for example,
spatial and waveform in seismic imaging, time and frequency in signal
processing, and temporal and spectral in climate modeling; as such, multi-modal
pairwise data generation is required, rather than the single-modal data
generation usually used for natural images (e.g., faces, scenery).
Moreover, real-world applications commonly exhibit an imbalance in the data
available across modalities; for example, the spatial data (i.e., velocity
maps) in seismic imaging can be easily simulated, but real-world seismic
waveform is largely lacking. While the most recent efforts enable the powerful
diffusion model to generate multi-modal data, how to leverage the unbalanced
available data is still unclear. In this work, we use seismic imaging in
subsurface geophysics as a vehicle to present ``UB-Diff'', a novel diffusion
model for multi-modal paired scientific data generation. One major innovation
is a one-in-two-out encoder-decoder network structure, which can ensure
pairwise data is obtained from a co-latent representation. Then, the co-latent
representation will be used by the diffusion process for pairwise data
generation. Experimental results on the OpenFWI dataset show that UB-Diff
significantly outperforms existing techniques in terms of Fr\'{e}chet Inception
Distance (FID) score and pairwise evaluation, indicating the generation of
reliable and useful multi-modal pairwise data.
|
2501.00942 | Efficient Unsupervised Shortcut Learning Detection and Mitigation in
Transformers | cs.LG cs.CV | Shortcut learning, i.e., a model's reliance on undesired features not
directly relevant to the task, is a major challenge that severely limits the
applications of machine learning algorithms, particularly when deploying them
to assist in making sensitive decisions, such as in medical diagnostics. In
this work, we leverage recent advancements in machine learning to create an
unsupervised framework that is capable of both detecting and mitigating
shortcut learning in transformers. We validate our method on multiple datasets.
Results demonstrate that our framework significantly improves both worst-group
accuracy (samples misclassified due to shortcuts) and average accuracy, while
minimizing human annotation effort. Moreover, we demonstrate that the detected
shortcuts are meaningful and informative to human experts, and that our
framework is computationally efficient, allowing it to be run on consumer
hardware.
|
2501.00944 | Diffusion Prism: Enhancing Diversity and Morphology Consistency in
Mask-to-Image Diffusion | cs.CV eess.IV | The emergence of generative AI and controllable diffusion has made
image-to-image synthesis increasingly practical and efficient. However, when
input images exhibit low entropy and sparsity, the inherent characteristics of
diffusion models often result in limited diversity. This constraint
significantly interferes with data augmentation. To address this, we propose
Diffusion Prism, a training-free framework that efficiently transforms binary
masks into realistic and diverse samples while preserving morphological
features. We find that a small amount of artificial noise significantly
assists the image-denoising process. To demonstrate this novel
mask-to-image concept, we use nano-dendritic patterns as an example to
demonstrate the merit of our method compared to existing controllable diffusion
models. Furthermore, we extend the proposed framework to other biological
patterns, highlighting its potential applications across various fields.
|
2501.00946 | Cached Adaptive Token Merging: Dynamic Token Reduction and Redundant
Computation Elimination in Diffusion Model | cs.CV | Diffusion models have emerged as a promising approach for generating
high-quality, high-dimensional images. Nevertheless, these models are hindered
by their high computational cost and slow inference, partly due to the
quadratic computational complexity of the self-attention mechanisms with
respect to input size. Various approaches have been proposed to address this
drawback. One such approach focuses on reducing the number of tokens fed into
the self-attention, known as token merging (ToMe). In our method, which is
called cached adaptive token merging (CA-ToMe), we calculate the similarity
between tokens and then merge the r proportion of the most similar tokens.
However, due to the repetitive patterns observed in adjacent steps and the
variation in the frequency of similarities, we aim to enhance this approach by
implementing an adaptive threshold for merging tokens and adding a caching
mechanism that stores similar pairs across several adjacent steps. Empirical
results demonstrate that our method operates as a training-free acceleration
method, achieving a speedup factor of 1.24 in the denoising process while
maintaining the same FID scores compared to existing approaches.
|
2501.00953 | Incremental Dialogue Management: Survey, Discussion, and Implications
for HRI | cs.CL cs.AI | Efforts towards endowing robots with the ability to speak have benefited from
recent advancements in NLP, in particular large language models. However, as
powerful as current models have become, they still operate on sentence or
multi-sentence level input, not on the word-by-word input that humans operate
on, affecting the degree of responsiveness that they offer, which is critical
in situations where humans interact with robots using speech. In this paper, we
review the literature on interactive systems that operate incrementally (i.e.,
at the word level or below it). We motivate the need for incremental systems,
and survey incremental modeling of important aspects of dialogue, such as
speech recognition and language generation. Our primary focus is on the part of
the system that makes decisions, known as the dialogue manager. We find that
there is very little research on incremental dialogue management, offer some
requirements for practical incremental dialogue management, and discuss the
implications of incremental dialogue for embodied, robotic platforms.
|
2501.00954 | Enhancing Early Diabetic Retinopathy Detection through Synthetic DR1
Image Generation: A StyleGAN3 Approach | eess.IV cs.AI cs.CV | Diabetic Retinopathy (DR) is a leading cause of preventable blindness. Early
detection at the DR1 stage is critical but is hindered by a scarcity of
high-quality fundus images. This study uses StyleGAN3 to generate synthetic DR1
images characterized by microaneurysms with high fidelity and diversity. The
aim is to address data scarcity and enhance the performance of supervised
classifiers. A dataset of 2,602 DR1 images was used to train the model,
followed by a comprehensive evaluation using quantitative metrics, including
Frechet Inception Distance (FID), Kernel Inception Distance (KID), and
Equivariance with respect to translation (EQ-T) and rotation (EQ-R).
Qualitative assessments included Human Turing tests, where trained
ophthalmologists evaluated the realism of synthetic images. Spectral analysis
further validated image quality. The model achieved a final FID score of 17.29,
outperforming the mean FID of 21.18 (95% confidence interval: 20.83 to
21.56) derived from bootstrap resampling. Human Turing tests demonstrated the
model's ability to produce highly realistic images, though minor artifacts near
the borders were noted. These findings suggest that StyleGAN3-generated
synthetic DR1 images hold significant promise for augmenting training datasets,
enabling more accurate early detection of Diabetic Retinopathy. This
methodology highlights the potential of synthetic data in advancing medical
imaging and AI-driven diagnostics.
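The FID metric reported above reduces to a closed-form distance between two Gaussians fitted to feature sets. A sketch of that formula, assuming the Inception-v3 features are supplied externally (any feature matrix works for illustration):

```python
import numpy as np
from scipy.linalg import sqrtm

def fid(feats_real: np.ndarray, feats_fake: np.ndarray) -> float:
    """Frechet Inception Distance between two feature sets of shape (N, D):
    ||mu1 - mu2||^2 + Tr(S1 + S2 - 2 * (S1 @ S2)^(1/2))."""
    mu1, mu2 = feats_real.mean(0), feats_fake.mean(0)
    s1 = np.cov(feats_real, rowvar=False)
    s2 = np.cov(feats_fake, rowvar=False)
    covmean = sqrtm(s1 @ s2)
    if np.iscomplexobj(covmean):        # drop tiny numerical imaginary parts
        covmean = covmean.real
    return float(((mu1 - mu2) ** 2).sum() + np.trace(s1 + s2 - 2 * covmean))
```

Identical feature distributions give an FID near zero; a pure mean shift contributes its squared norm.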
|
2501.00958 | 2.5 Years in Class: A Multimodal Textbook for Vision-Language
Pretraining | cs.CV cs.CL cs.LG | Compared to image-text pair data, interleaved corpora enable Vision-Language
Models (VLMs) to understand the world more naturally like humans. However, such
existing datasets are crawled from webpages, facing challenges like low
knowledge density, loose image-text relations, and poor logical coherence
between images. On the other hand, the internet hosts vast instructional videos
(e.g., online geometry courses) that are widely used by humans to learn
foundational subjects, yet these valuable resources remain underexplored in VLM
training. In this paper, we introduce a high-quality \textbf{multimodal
textbook} corpus with richer foundational knowledge for VLM pretraining. It
collects over 2.5 years of instructional videos, totaling 22,000 class hours.
We first use an LLM-proposed taxonomy to systematically gather instructional
videos. Then we progressively extract and refine visual (keyframes), audio
(ASR), and textual knowledge (OCR) from the videos, and organize them as an
image-text interleaved corpus based on temporal order. Compared to its
counterparts, our video-centric textbook offers more coherent context, richer
knowledge, and better image-text alignment. Experiments demonstrate its superb
pretraining performance, particularly in knowledge- and reasoning-intensive
tasks like ScienceQA and MathVista. Moreover, VLMs pre-trained on our textbook
exhibit outstanding interleaved context awareness, leveraging visual and
textual cues in their few-shot context for task solving. Our code is available
at https://github.com/DAMO-NLP-SG/multimodal_textbook.
|
2501.00961 | The Silent Majority: Demystifying Memorization Effect in the Presence of
Spurious Correlations | cs.LG cs.AI cs.CV eess.IV | Machine learning models often rely on simple spurious features -- patterns in
training data that correlate with targets but are not causally related to them,
like image backgrounds in foreground classification. This reliance typically
leads to imbalanced test performance across minority and majority groups. In
this work, we take a closer look at the fundamental cause of such imbalanced
performance through the lens of memorization, which refers to predicting
accurately on \textit{atypical} examples (minority groups) in the training set
while failing to achieve the same accuracy on the test set.
This paper systematically shows the ubiquitous existence of spurious features
in a small set of neurons within the network, providing the first-ever evidence
that memorization may contribute to imbalanced group performance. Through three
experimental sources of converging empirical evidence, we find that a small
subset of neurons or channels memorizes minority-group information.
Inspired by these findings, we articulate the hypothesis: the imbalanced group
performance is a byproduct of ``noisy'' spurious memorization confined to a
small set of neurons. To further substantiate this hypothesis, we show that
eliminating these unnecessary spurious memorization patterns via a novel
framework during training can significantly affect the model performance on
minority groups. Our experimental results across various architectures and
benchmarks offer new insights on how neural networks encode core and spurious
knowledge, laying the groundwork for future research in demystifying robustness
to spurious correlation.
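The basic ablation behind testing whether a small neuron subset carries the memorized information can be sketched as follows (a toy two-layer network, not the paper's framework; the function name and setup are illustrative):

```python
import numpy as np

def ablate_forward(x, W1, W2, ablated):
    """Forward pass of a 2-layer ReLU network with a chosen set of hidden
    units zeroed out, so their contribution to the output (e.g. minority-group
    predictions) can be measured by comparison with the intact network."""
    h = np.maximum(x @ W1, 0)           # hidden activations
    h[:, list(ablated)] = 0.0           # silence the suspected neurons
    return h @ W2
```

Comparing outputs (or group accuracies) with and without the ablated units isolates what those units encode.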
|
2501.00962 | OASIS Uncovers: High-Quality T2I Models, Same Old Stereotypes | cs.CV cs.CY cs.LG | Images generated by text-to-image (T2I) models often exhibit visual biases
and stereotypes of concepts such as culture and profession. Existing
quantitative measures of stereotypes are based on statistical parity that does
not align with the sociological definition of stereotypes and, therefore,
incorrectly categorizes biases as stereotypes. Instead of oversimplifying
stereotypes as biases, we propose a quantitative measure of stereotypes that
aligns with its sociological definition. We then propose OASIS to measure the
stereotypes in a generated dataset and understand their origins within the T2I
model. OASIS includes two scores to measure stereotypes from a generated image
dataset: (M1) Stereotype Score to measure the distributional violation of
stereotypical attributes, and (M2) WALS to measure spectral variance in the
images along a stereotypical attribute. OASIS also includes two methods to
understand the origins of stereotypes in T2I models: (U1) StOP to discover
attributes that the T2I model internally associates with a given concept, and
(U2) SPI to quantify the emergence of stereotypical attributes in the latent
space of the T2I model during image generation. Despite the considerable
progress in image fidelity, using OASIS, we conclude that newer T2I models such
as FLUX.1 and SDv3 contain strong stereotypical predispositions about concepts
and still generate images with widespread stereotypical attributes.
Additionally, the quantity of stereotypes worsens for nationalities with lower
Internet footprints.
|
2501.00967 | On the Implementation of a Bayesian Optimization Framework for
Interconnected Systems | stat.ML cs.LG | Bayesian optimization (BO) is an effective paradigm for the optimization of
expensive-to-sample systems. Standard BO learns the performance of a system
$f(x)$ by using a Gaussian Process (GP) model; this treats the system as a
black-box and limits its ability to exploit available structural knowledge
(e.g., physics and sparse interconnections in a complex system). Grey-box
modeling, wherein the performance function is treated as a composition of known
and unknown intermediate functions $f(x, y(x))$ (where $y(x)$ is a GP model)
offers a solution to this limitation; however, generating an analytical
probability density for $f$ from the Gaussian density of $y(x)$ is often an
intractable problem (e.g., when $f$ is nonlinear). Previous work has handled
this issue by using sampling techniques or by solving an auxiliary problem over
an augmented space where the values of $y(x)$ are constrained by confidence
intervals derived from the GP models; such solutions are computationally
intensive. In this work, we provide a detailed implementation of a recently
proposed grey-box BO paradigm, BOIS, that uses adaptive linearizations of $f$
to obtain analytical expressions for the statistical moments of the composite
function. We show that the BOIS approach enables the exploitation of structural
knowledge, such as that arising in interconnected systems as well as systems
that embed multiple GP models and combinations of physics and GP models. We
benchmark the effectiveness of BOIS against standard BO and existing grey-box
BO algorithms using a pair of case studies focused on chemical process
optimization and design. Our results indicate that BOIS performs as well as or
better than existing grey-box methods, while also being less computationally
intensive.
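The core linearization idea can be sketched in a few lines (an illustrative simplification of the BOIS paradigm, with a finite-difference derivative standing in for whatever gradient the implementation uses): when $y \sim \mathcal{N}(\mu_y, \sigma_y^2)$ comes from a GP and $f(x, y)$ is known, a first-order expansion around $\mu_y$ gives closed-form moments of the composite function.

```python
def composite_moments(f, x, mu_y, sigma_y, eps=1e-6):
    """First-order moments of f(x, y) when y ~ N(mu_y, sigma_y^2):
    mean ~ f(x, mu_y), var ~ (df/dy)^2 * sigma_y^2, via linearization
    around the GP mean (central finite difference for df/dy)."""
    f0 = f(x, mu_y)
    dfdy = (f(x, mu_y + eps) - f(x, mu_y - eps)) / (2 * eps)
    return f0, (dfdy * sigma_y) ** 2
```

For a linear composite the moments are exact; for nonlinear $f$ they are the adaptive approximation that avoids sampling or the augmented-space auxiliary problem.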
|
2501.00973 | Defense Strategies for Autonomous Multi-agent Systems: Ensuring Safety
and Resilience Under Exponentially Unbounded FDI Attacks | eess.SY cs.SY | False data injection (FDI) attacks pose a significant threat to autonomous
multi-agent systems (MASs). While resilient control strategies address FDI
attacks, they typically have strict assumptions on the attack signals and
overlook safety constraints, such as collision avoidance. In practical
applications, leader agents equipped with advanced sensors or weaponry span a
safe region to guide heterogeneous follower agents, ensuring coordinated
operations while addressing collision avoidance to prevent financial losses and
mission failures. This letter addresses these gaps by introducing and studying
the safety-aware and attack-resilient (SAAR) control problem under
exponentially unbounded FDI (EU-FDI) attacks. Specifically, a novel
attack-resilient observer layer (OL) is first designed to defend against EU-FDI
attacks on the OL. Then, by solving an optimization problem via quadratic
programming (QP), the safety constraints for collision avoidance are further
integrated into the SAAR controller design to prevent collisions among
followers. An attack-resilient compensational signal is finally designed to
mitigate the adverse effects caused by the EU-FDI attack on control input layer
(CIL). Rigorous Lyapunov-based stability analysis certifies the SAAR
controller's effectiveness in ensuring both safety and resilience. This study
also pioneers a three-dimensional simulation of the SAAR containment control
problem for autonomous MASs, demonstrating its applicability in realistic
multi-agent scenarios.
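The safety-filtering QP has a closed form in the single-constraint case, which is enough to illustrate how a nominal control input is minimally modified to satisfy a safety constraint (an illustrative sketch; the paper's QP carries multiple collision-avoidance constraints):

```python
import numpy as np

def safety_filter(u_des, a, b):
    """Closed-form solution of  min ||u - u_des||^2  s.t.  a . u >= b.
    If the nominal input already satisfies the constraint it is returned
    unchanged; otherwise it is projected onto the constraint boundary."""
    slack = b - a @ u_des
    if slack <= 0:
        return u_des                      # nominal input already safe
    return u_des + (slack / (a @ a)) * a  # minimal correction along a
```

This projection view is the standard way QP-based safety layers sit on top of a resilient controller.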
|
2501.00975 | CoordFlow: Coordinate Flow for Pixel-wise Neural Video Representation | cs.CV cs.LG | In the field of video compression, the pursuit for better quality at lower
bit rates remains a long-lasting goal. Recent developments have demonstrated
the potential of Implicit Neural Representation (INR) as a promising
alternative to traditional transform-based methodologies. Video INRs can be
roughly divided into frame-wise and pixel-wise methods according to the
structure the network outputs. While the pixel-based methods are better for
upsampling and parallelization, frame-wise methods demonstrated better
performance. We introduce CoordFlow, a novel pixel-wise INR for video
compression. It yields state-of-the-art results compared to other pixel-wise
INRs and on-par performance compared to leading frame-wise techniques. The
method is based on the separation of the visual information into visually
consistent layers, each represented by a dedicated network that compensates for
the layer's motion. When the layers are integrated, a byproduct is an
unsupervised segmentation of the video sequence. Object motion trajectories are implicitly
utilized to compensate for visual-temporal redundancies. Additionally, the
proposed method provides inherent video upsampling, stabilization, inpainting,
and denoising capabilities.
|
2501.00982 | Are LLMs effective psychological assessors? Leveraging adaptive RAG for
interpretable mental health screening through psychometric practice | cs.CL cs.AI | In psychological practice, standardized questionnaires serve as essential
tools for assessing mental constructs (e.g., attitudes, traits, and emotions)
through structured questions (aka items). With the increasing prevalence of
social media platforms where users share personal experiences and emotions,
researchers are exploring computational methods to leverage this data for rapid
mental health screening. In this study, we propose a novel adaptive
Retrieval-Augmented Generation (RAG) approach that completes psychological
questionnaires by analyzing social media posts. Our method retrieves the most
relevant user posts for each question in a psychological survey and uses Large
Language Models (LLMs) to predict questionnaire scores in a zero-shot setting.
Our findings are twofold. First, we demonstrate that this approach can
effectively predict users' responses to psychological questionnaires, such as
the Beck Depression Inventory II (BDI-II), achieving performance comparable to
or surpassing state-of-the-art models on Reddit-based benchmark datasets
without relying on training data. Second, we show how this methodology can be
generalized as a scalable screening tool, as the final assessment is
systematically derived by completing standardized questionnaires and tracking
how individual item responses contribute to the diagnosis, aligning with
established psychometric practices.
|
2501.00987 | Search Plurality | cs.IR cs.CY cs.HC | In light of Phillips' contention regarding the impracticality of Search
Neutrality, asserting that non-epistemic factors presently dictate result
prioritization, our objective in this study is to confront this constraint by
questioning prevailing design practices in search engines. We posit that the
concept of prioritization warrants scrutiny, along with the consistent
hierarchical ordering that underlies this lack of neutrality. We introduce the
term Search Plurality to encapsulate the idea of emphasizing the various means
a query can be approached. This is demonstrated in a design that prioritizes
the display of categories over specific search items, helping users grasp the
breadth of their search. Whether a query allows for multiple interpretations or
invites diverse opinions, the presentation of categories highlights the
significance of organizing data based on relevance, importance, and relative
significance, akin to traditional methods. However, unlike previous approaches,
this method enriches our comprehension of the overall information landscape,
countering the potential bias introduced by ranked lists.
|
2501.00988 | Optimizing Noise Schedules of Generative Models in High Dimensions | cs.LG | Recent works have shown that diffusion models can undergo phase transitions,
the resolution of which is needed for accurately generating samples. This has
motivated the use of different noise schedules, the two most common choices
being referred to as variance preserving (VP) and variance exploding (VE). Here
we revisit these schedules within the framework of stochastic interpolants.
Using the Gaussian Mixture (GM) and Curie-Weiss (CW) data distributions as test
case models, we first investigate the effect of the variance of the initial
noise distribution and show that VP recovers the low-level feature (the
distribution of each mode) but misses the high-level feature (the asymmetry
between modes), whereas VE performs oppositely. We also show that this
dichotomy, which happens when denoising by a constant amount in each step, can
be avoided by using noise schedules specific to VP and VE that allow for the
recovery of both high- and low-level features. Finally we show that these
schedules yield generative models for the GM and CW models whose probability
flow ODE can be discretized using $\Theta_d(1)$ steps in dimension $d$ instead
of the $\Theta_d(\sqrt{d})$ steps required by constant denoising.
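The VP/VE distinction can be written down concretely in the interpolant form $x_t = a(t)\,x_0 + s(t)\,z$ with $z \sim \mathcal{N}(0, I)$ (the specific coefficient functions below are common illustrative choices, not the schedules tuned in the paper):

```python
import numpy as np

def noise_schedule(t, kind):
    """Interpolant coefficients (a, s) for x_t = a(t) x0 + s(t) z.

    'vp' keeps the total variance fixed (a^2 + s^2 = 1 for unit-variance
    data); 've' leaves the data untouched and lets the noise variance grow.
    """
    if kind == "vp":
        return np.cos(np.pi * t / 2), np.sin(np.pi * t / 2)
    if kind == "ve":
        return 1.0, 5.0 * t          # sigma_max = 5 as an example
    raise ValueError(kind)
```

The paper's point is that the choice of these functions, not just the VP/VE family, determines which features (mode asymmetry vs. per-mode shape) are recovered.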
|
2501.00989 | Bootstrapped Reward Shaping | cs.LG cs.AI | In reinforcement learning, especially in sparse-reward domains, many
environment steps are required to observe reward information. In order to
increase the frequency of such observations, "potential-based reward shaping"
(PBRS) has been proposed as a method of providing a more dense reward signal
while leaving the optimal policy invariant. However, the required "potential
function" must be carefully designed with task-dependent knowledge to not deter
training performance. In this work, we propose a "bootstrapped" method of
reward shaping, termed BSRS, in which the agent's current estimate of the
state-value function acts as the potential function for PBRS. We provide
convergence proofs for the tabular setting, give insights into training
dynamics for deep RL, and show that the proposed method improves training speed
in the Atari suite.
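The PBRS term that BSRS builds on takes a few lines; this sketch uses a fixed potential table, whereas BSRS substitutes the agent's evolving state-value estimate for phi:

```python
def shaped_rewards(rewards, states, phi, gamma):
    """Augment a trajectory's rewards with the PBRS term
    F(s_t, s_{t+1}) = gamma * phi(s_{t+1}) - phi(s_t)."""
    return [r + gamma * phi[states[t + 1]] - phi[states[t]]
            for t, r in enumerate(rewards)]

def discounted_return(rewards, gamma):
    return sum(gamma ** t * r for t, r in enumerate(rewards))
```

By construction, the shaped and raw discounted returns differ only by the telescoped term $\gamma^T \phi(s_T) - \phi(s_0)$, which does not depend on the actions taken; this is why PBRS leaves the optimal policy invariant.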
|
2501.00990 | Cyber-physical Defense for Heterogeneous Multi-agent Systems Against
Exponentially Unbounded Attacks on Signed Digraphs | eess.SY cs.SY | Cyber-physical systems (CPSs) are subjected to attacks on both cyber and
physical spaces. In reality, the attackers could launch exponentially unbounded
false data injection (EU-FDI) attacks, which are more destructive and could
lead to the system's collapse or instability. Existing literature generally
addresses bounded attack signals and/or bounded-first-order-derivative attack
signals, which exposes the CPSs to significant threats. In contrast, this paper
proposes a fully-distributed attack-resilient bi-layer defense framework to
address the bipartite output containment problem for heterogeneous multi-agent
systems on signed digraphs, in the presence of EU-FDI attacks on both
cyber-physical layer (CPL) and observer layer (OL). First, we design
attack-resilient dynamic compensators that utilize data communicated on the OL
to estimate the convex combinations of the states and negative states of the
leaders. The attack-resilient compensators address the EU-FDI attacks on the OL
and guarantee the uniformly ultimately bounded (UUB) estimation of the leaders'
states. Then, by using the compensators' states, fully-distributed
attack-resilient controllers are designed on the CPL to further address the
EU-FDI attacks on the actuators. Rigorous mathematical proof based on Lyapunov
stability analysis is provided, establishing the theoretical soundness of the
proposed bi-layer resilient defense framework, by preserving the UUB consensus
and stability against EU-FDI attacks on both CPL and OL. Finally, a comparative
case study for heterogeneous multi-agent systems validates the enhanced
resilience of the proposed defense strategies.
|
2501.00995 | Is It Still Fair? Investigating Gender Fairness in Cross-Corpus Speech
Emotion Recognition | cs.LG | Speech emotion recognition (SER) is a vital component in various everyday
applications. Cross-corpus SER models are increasingly recognized for their
ability to generalize performance. However, concerns arise regarding fairness
across demographics in diverse corpora. Existing fairness research often
focuses solely on corpus-specific fairness, neglecting its generalizability in
cross-corpus scenarios. Our study focuses on this underexplored area, examining
the gender fairness generalizability in cross-corpus SER scenarios. We
emphasize that the performance of cross-corpus SER models and their fairness
are two distinct considerations. Moreover, we propose the approach of a
combined fairness adaptation mechanism to enhance gender fairness in the SER
transfer learning tasks by addressing both source and target genders. Our
findings bring one of the first insights into the generalizability of gender
fairness in cross-corpus SER systems.
|
2501.00999 | Exploring Information Processing in Large Language Models: Insights from
Information Bottleneck Theory | cs.CL cs.AI | Large Language Models (LLMs) have demonstrated remarkable performance across
a wide range of tasks by understanding input information and predicting
corresponding outputs. However, the internal mechanisms by which LLMs
comprehend input and make effective predictions remain poorly understood. In
this paper, we explore the working mechanism of LLMs in information processing
from the perspective of Information Bottleneck Theory. We propose a
non-training construction strategy to define a task space and identify the
following key findings: (1) LLMs compress input information into specific task
spaces (e.g., sentiment space, topic space) to facilitate task understanding;
(2) they then extract and utilize relevant information from the task space at
critical moments to generate accurate predictions. Based on these insights, we
introduce two novel approaches: an Information Compression-based Context
Learning (IC-ICL) and a Task-Space-guided Fine-Tuning (TS-FT). IC-ICL enhances
reasoning performance and inference efficiency by compressing retrieved example
information into the task space. TS-FT employs a space-guided loss to fine-tune
LLMs, encouraging the learning of more effective compression and selection
mechanisms. Experiments across multiple datasets validate the effectiveness of
task space construction. Additionally, IC-ICL not only improves performance but
also improves inference speed by over 40\%, while TS-FT achieves superior
results with a minimal strategy adjustment.
|
2501.01000 | Physics-informed Gaussian Processes for Safe Envelope Expansion | cs.LG | Flight test analysis often requires predefined test points with arbitrarily
tight tolerances, leading to extensive and resource-intensive experimental
campaigns. To address this challenge, we propose a novel approach to flight
test analysis using Gaussian processes (GPs) with physics-informed mean
functions to estimate aerodynamic quantities from arbitrary flight test data,
validated using real T-38 aircraft data collected in collaboration with the
United States Air Force Test Pilot School. We demonstrate our method by
estimating the pitching moment coefficient without requiring predefined or
repeated flight test points, significantly reducing the need for extensive
experimental campaigns. Our approach incorporates aerodynamic models as priors
within the GP framework, enhancing predictive accuracy across diverse flight
conditions and providing robust uncertainty quantification. Key contributions
include the integration of physics-based priors in a probabilistic model, which
allows for precise computation from arbitrary flight test maneuvers, and the
demonstration of our method capturing relevant dynamic characteristics such as
short-period mode behavior. The proposed framework offers a scalable and
generalizable solution for efficient data-driven flight test analysis and is
able to accurately predict the short period frequency and damping for the T-38
across several Mach and dynamic pressure profiles.
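The physics-informed-mean construction amounts to fitting a GP to the residual between data and the physical model, then adding the model back at prediction time. A minimal sketch with an RBF kernel (illustrative; the mean function stands in for an aerodynamic model, and hyperparameters are arbitrary):

```python
import numpy as np

def gp_predict(X, y, Xs, mean_fn, length=1.0, noise=1e-6):
    """GP regression with a physics-informed prior mean: fit the GP to the
    residual y - m(X) and add m(Xs) back at the query points Xs."""
    def k(a, b):                              # RBF kernel on 1-D inputs
        d = a[:, None] - b[None, :]
        return np.exp(-0.5 * (d / length) ** 2)
    r = y - mean_fn(X)                        # residual the GP must explain
    K = k(X, X) + noise * np.eye(len(X))
    alpha = np.linalg.solve(K, r)
    return mean_fn(Xs) + k(Xs, X) @ alpha
```

Far from the data the kernel term decays and predictions fall back to the physical model, which is what makes the approach robust across flight conditions.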
|
2501.01002 | Multi-Objective Optimization-Based Anonymization of Structured Data for
Machine Learning | cs.LG math.OC | Data is essential for secondary use, but ensuring its privacy while allowing
such use is a critical challenge. Various techniques have been proposed to
address privacy concerns in data sharing and publishing. However, these methods
often degrade data utility, impacting the performance of machine learning (ML)
models. Our research identifies key limitations in existing optimization models
for privacy preservation, particularly in handling categorical variables,
assessing data utility, and evaluating effectiveness across diverse datasets.
We propose a novel multi-objective optimization model that simultaneously
minimizes information loss and maximizes protection against attacks. This model
is empirically validated using diverse datasets and compared with two existing
algorithms. We assess information loss, the number of individuals subject to
linkage or homogeneity attacks, and ML performance after anonymization. The
results indicate that our model achieves lower information loss and more
effectively mitigates the risk of attacks, reducing the number of individuals
susceptible to these attacks compared to alternative algorithms in some cases.
Additionally, our model maintains comparable ML performance relative to the
original data or data anonymized by other methods. Our findings highlight
significant improvements in privacy protection and ML model performance,
offering a comprehensive framework for balancing privacy and utility in data
sharing.
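The linkage-attack count relates to the classic k-anonymity condition, which can be checked in a few lines (an illustrative helper, not the paper's optimization model):

```python
from collections import Counter

def is_k_anonymous(rows, quasi_cols, k):
    """True iff every combination of quasi-identifier values is shared by
    at least k records, so no record can be singled out by linkage on
    those columns alone."""
    groups = Counter(tuple(r[c] for c in quasi_cols) for r in rows)
    return min(groups.values()) >= k
```

An anonymization model trades information loss against making such group sizes large enough.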
|
2501.01003 | EasySplat: View-Adaptive Learning makes 3D Gaussian Splatting Easy | cs.CV | 3D Gaussian Splatting (3DGS) techniques have achieved satisfactory 3D scene
representation. Despite their impressive performance, they confront challenges
due to the limitations of structure-from-motion (SfM) methods in acquiring
accurate scene initialization, or the inefficiency of the densification strategy.
In this paper, we introduce a novel framework EasySplat to achieve high-quality
3DGS modeling. Instead of using SfM for scene initialization, we employ a novel
method to unleash the power of large-scale pointmap approaches. Specifically,
we propose an efficient grouping strategy based on view similarity, and use
robust pointmap priors to obtain high-quality point clouds and camera poses for
3D scene initialization. After obtaining a reliable scene structure, we propose
a novel densification approach that adaptively splits Gaussian primitives based
on the average shape of neighboring Gaussian ellipsoids, utilizing a KNN
scheme. In this way, the proposed method tackles the limitations of initialization and
optimization, leading to an efficient and accurate 3DGS modeling. Extensive
experiments demonstrate that EasySplat outperforms the current state-of-the-art
(SOTA) in handling novel view synthesis.
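The KNN-based splitting criterion can be sketched as flagging Gaussians whose scale is an outlier relative to their neighbors (the threshold ratio and isotropic-scale simplification are illustrative, not the paper's exact rule):

```python
import numpy as np

def split_mask(centers, scales, k=4, ratio=1.5):
    """Flag Gaussians for splitting when their scale exceeds `ratio` times
    the average scale of their k nearest neighbors.
    centers: (N, dims); scales: (N,) isotropic scales for simplicity."""
    d = np.linalg.norm(centers[:, None] - centers[None], axis=-1)
    np.fill_diagonal(d, np.inf)              # exclude self from neighbors
    nn = np.argsort(d, axis=1)[:, :k]        # k nearest neighbor indices
    return scales > ratio * scales[nn].mean(axis=1)
```

Adapting the split decision to the local neighborhood, rather than a global threshold, is the view-adaptive densification idea.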
|
2501.01005 | FlashInfer: Efficient and Customizable Attention Engine for LLM
Inference Serving | cs.DC cs.AI cs.LG | Transformers, driven by attention mechanisms, form the foundation of large
language models (LLMs). As these models scale up, efficient GPU attention
kernels become essential for high-throughput and low-latency inference. Diverse
LLM applications demand flexible and high-performance attention solutions. We
present FlashInfer: a customizable and efficient attention engine for LLM
serving. FlashInfer tackles KV-cache storage heterogeneity using block-sparse
format and composable formats to optimize memory access and reduce redundancy.
It also offers a customizable attention template, enabling adaptation to
various settings through Just-In-Time (JIT) compilation. Additionally,
FlashInfer's load-balanced scheduling algorithm adjusts to dynamism of user
requests while maintaining compatibility with CUDAGraph, which requires static
configuration. FlashInfer has been integrated into leading LLM serving
frameworks like SGLang, vLLM and MLC-Engine. Comprehensive kernel-level and
end-to-end evaluations demonstrate FlashInfer's ability to significantly boost
kernel performance across diverse inference scenarios: compared to
state-of-the-art LLM serving solutions, FlashInfer achieves a 29-69%
inter-token-latency reduction over compiler backends on an LLM serving
benchmark, a 28-30% latency reduction for long-context inference, and a 13-17%
speedup for LLM serving with parallel generation.
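The paged/block layout of the KV cache can be illustrated with a single-query attention sketch in NumPy (a toy gather-then-attend model of the storage idea, not FlashInfer's kernels; shapes and names are ours):

```python
import numpy as np

def paged_attention(q, kv_pages, page_table):
    """Single-query attention over a paged KV cache.

    q: (D,) query; kv_pages: (P, page_size, 2, D) physical pages holding
    (key, value) rows; page_table: logical-to-physical page indices for one
    request. Gathering the listed pages recovers the contiguous KV sequence.
    """
    kv = kv_pages[page_table].reshape(-1, 2, kv_pages.shape[-1])
    k, v = kv[:, 0], kv[:, 1]
    logits = q @ k.T
    w = np.exp(logits - logits.max())        # numerically stable softmax
    w /= w.sum()
    return w @ v
```

Because pages can live anywhere in physical memory, requests with different lengths share storage without fragmentation; block-sparse formats generalize this indirection.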
|
2501.01007 | Deep Reinforcement Learning for Job Scheduling and Resource Management
in Cloud Computing: An Algorithm-Level Review | cs.DC cs.AI | Cloud computing has revolutionized the provisioning of computing resources,
offering scalable, flexible, and on-demand services to meet the diverse
requirements of modern applications. At the heart of efficient cloud operations
are job scheduling and resource management, which are critical for optimizing
system performance and ensuring timely and cost-effective service delivery.
However, the dynamic and heterogeneous nature of cloud environments presents
significant challenges for these tasks, as workloads and resource availability
can fluctuate unpredictably. Traditional approaches, including heuristic and
meta-heuristic algorithms, often struggle to adapt to these real-time changes
due to their reliance on static models or predefined rules. Deep Reinforcement
Learning (DRL) has emerged as a promising solution to these challenges by
enabling systems to learn and adapt policies based on continuous observations
of the environment, facilitating intelligent and responsive decision-making.
This survey provides a comprehensive review of DRL-based algorithms for job
scheduling and resource management in cloud computing, analyzing their
methodologies, performance metrics, and practical applications. We also
highlight emerging trends and future research directions, offering valuable
insights into leveraging DRL to advance both job scheduling and resource
management in cloud computing.
|
2501.01010 | CryptoMamba: Leveraging State Space Models for Accurate Bitcoin Price
Prediction | cs.LG cs.AI cs.CE | Predicting Bitcoin price remains a challenging problem due to the high
volatility and complex non-linear dynamics of cryptocurrency markets.
Traditional time-series models, such as ARIMA and GARCH, and recurrent neural
networks, like LSTMs, have been widely applied to this task but struggle to
capture the regime shifts and long-range dependencies inherent in the data. In
this work, we propose CryptoMamba, a novel Mamba-based State Space Model (SSM)
architecture designed to effectively capture long-range dependencies in
financial time-series data. Our experiments show that CryptoMamba not only
provides more accurate predictions but also offers enhanced generalizability
across different market conditions, surpassing the limitations of previous
models. Coupled with trading algorithms for real-world scenarios, CryptoMamba
demonstrates its practical utility by translating accurate forecasts into
financial outcomes. Our findings signal a significant advantage for SSMs in stock and
cryptocurrency price forecasting tasks.
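The linear recurrence at the heart of an SSM can be sketched in a few lines (fixed matrices for illustration; Mamba's selectivity makes A, B, C input-dependent, and real implementations use a parallel scan rather than a Python loop):

```python
import numpy as np

def ssm_scan(u, A, B, C):
    """Discrete linear state-space model over a scalar input sequence:
    h_k = A h_{k-1} + B u_k,  y_k = C h_k.  Runs in O(T) time with
    constant memory, which is what gives SSMs long-range efficiency."""
    h = np.zeros(A.shape[0])
    ys = []
    for uk in np.atleast_1d(u):
        h = A @ h + B * uk
        ys.append(C @ h)
    return np.array(ys)
```

With a stable A (spectral radius below one), the state carries an exponentially weighted memory of the whole input history.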
|
2501.01011 | Prediction of Geoeffective CMEs Using SOHO Images and Deep Learning | cs.LG astro-ph.SR physics.space-ph | The application of machine learning to the study of coronal mass ejections
(CMEs) and their impacts on Earth has seen significant growth recently.
Understanding and forecasting CME geoeffectiveness is crucial for protecting
infrastructure in space and ensuring the resilience of technological systems on
Earth. Here we present GeoCME, a deep-learning framework designed to predict,
deterministically or probabilistically, whether a CME event that arrives at
Earth will cause a geomagnetic storm. A geomagnetic storm is defined as a
disturbance of the Earth's magnetosphere during which the minimum Dst index
value is less than -50 nT. GeoCME is trained on observations from the
instruments including LASCO C2, EIT and MDI on board the Solar and Heliospheric
Observatory (SOHO), focusing on a dataset that includes 136 halo/partial halo
CMEs in Solar Cycle 23. Using ensemble and transfer learning techniques, GeoCME
is capable of extracting features hidden in the SOHO observations and making
predictions based on the learned features. Our experimental results demonstrate
the good performance of GeoCME, achieving a Matthews correlation coefficient
of 0.807 and a true skill statistics score of 0.714 when the tool is used as a
deterministic prediction model. When the tool is used as a probabilistic
forecasting model, it achieves a Brier score of 0.094 and a Brier skill score
of 0.493. These results are promising, showing that the proposed GeoCME can
help enhance our understanding of CME-triggered solar-terrestrial interactions.
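The probabilistic-forecast metrics quoted above are simple to state; the Brier score is the mean squared error between forecast probabilities and binary outcomes, and the skill score compares it to a reference forecast (e.g. climatology):

```python
def brier_score(probs, outcomes):
    """Mean squared error between forecast probabilities in [0, 1] and
    binary outcomes in {0, 1}; lower is better, 0 is perfect."""
    return sum((p - o) ** 2 for p, o in zip(probs, outcomes)) / len(probs)

def brier_skill_score(probs, outcomes, ref_probs):
    """BSS = 1 - BS / BS_ref; positive values beat the reference forecast."""
    return 1.0 - brier_score(probs, outcomes) / brier_score(ref_probs, outcomes)
```

A BSS of 0.493, as reported for GeoCME, means the model roughly halves the Brier score of its reference forecast.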
|
2501.01014 | MDSF: Context-Aware Multi-Dimensional Data Storytelling Framework based
on Large language Model | cs.CL cs.AI | The exponential growth of data and advancements in big data technologies have
created a demand for more efficient and automated approaches to data analysis
and storytelling. However, automated data analysis systems still face
challenges in leveraging large language models (LLMs) for data insight
discovery, augmented analysis, and data storytelling. This paper introduces the
Multidimensional Data Storytelling Framework (MDSF) based on large language
models for automated insight generation and context-aware storytelling. The
framework incorporates advanced preprocessing techniques, augmented analysis
algorithms, and a unique scoring mechanism to identify and prioritize
actionable insights. The use of fine-tuned LLMs enhances contextual
understanding and generates narratives with minimal manual intervention. The
architecture also includes an agent-based mechanism for real-time storytelling
continuation control. Key findings reveal that MDSF outperforms existing
methods across various datasets in terms of insight ranking accuracy,
descriptive quality, and narrative coherence. The experimental evaluation
demonstrates MDSF's ability to automate complex analytical tasks, reduce
interpretive biases, and improve user satisfaction. User studies further
underscore its practical utility in enhancing content structure, conclusion
extraction, and richness of detail.
|
2501.01015 | Boosting Adversarial Transferability with Spatial Adversarial Alignment | cs.CV cs.CR | Deep neural networks are vulnerable to adversarial examples that exhibit
transferability across various models. Numerous approaches are proposed to
enhance the transferability of adversarial examples, including advanced
optimization, data augmentation, and model modifications. However, these
methods still show limited transferability, particularly in cross-architecture
scenarios, such as from CNN to ViT. To achieve high transferability, we propose
a technique termed Spatial Adversarial Alignment (SAA), which employs an
alignment loss and leverages a witness model to fine-tune the surrogate model.
Specifically, SAA consists of two key parts: spatial-aware alignment and
adversarial-aware alignment. First, we minimize the divergences of features
between the two models in both global and local regions, facilitating spatial
alignment. Second, we introduce a self-adversarial strategy that leverages
adversarial examples to impose further constraints, aligning features from an
adversarial perspective. Through this alignment, the surrogate model is trained
to concentrate on the common features extracted by the witness model. This
facilitates adversarial attacks on these shared features, thereby yielding
perturbations that exhibit enhanced transferability. Extensive experiments on
various architectures on ImageNet show that aligned surrogate models based on
SAA can generate adversarial examples with higher transferability, especially in
cross-architecture attacks.
|
2501.01022 | Efficient Connectivity-Preserving Instance Segmentation with
Supervoxel-Based Loss Function | cs.CV q-bio.NC | Reconstructing the intricate local morphology of neurons and their long-range
projecting axons can address many connectivity related questions in
neuroscience. The main bottleneck in connectomics pipelines is correcting
topological errors, as segmenting multiple entangled neuronal arbors is a challenging
instance segmentation problem. More broadly, segmentation of curvilinear,
filamentous structures continues to pose significant challenges. To address
this problem, we extend the notion of simple points from digital topology to
connected sets of voxels (i.e. supervoxels) and propose a topology-aware neural
network segmentation method with minimal computational overhead. We demonstrate
its effectiveness on a new public dataset of 3-d light microscopy images of
mouse brains, along with the benchmark datasets DRIVE, ISBI12, and CrackTree.
|
2501.01023 | Hadamard Attention Recurrent Transformer: A Strong Baseline for Stereo
Matching Transformer | cs.CV | In light of the advancements in transformer technology, extant research
posits the construction of stereo transformers as a potential solution to the
binocular stereo matching challenge. However, constrained by the low-rank
bottleneck and quadratic complexity of attention mechanisms, stereo
transformers still fail to demonstrate sufficient nonlinear expressiveness
within a reasonable inference time. The lack of focus on key homonymous points
renders the representations of such methods vulnerable to challenging
conditions, including reflections and weak textures. Furthermore, slow
computation speed hinders practical application. To overcome these
difficulties, we present the \textbf{H}adamard \textbf{A}ttention
\textbf{R}ecurrent Stereo \textbf{T}ransformer (HART) that incorporates the
following components: 1) For faster inference, we present a Hadamard product
paradigm for the attention mechanism, achieving linear computational
complexity. 2) We design a Dense Attention Kernel (DAK) to amplify the
differences between relevant and irrelevant feature responses. This allows HART
to focus on important details. DAK also converts zero elements to non-zero
elements to mitigate the reduced expressiveness caused by the low-rank
bottleneck. 3) To compensate for the spatial and channel interaction missing in
the Hadamard product, we propose MKOI to capture both global and local
information through the interleaving of large and small kernel convolutions.
Experimental results demonstrate the effectiveness of our HART. In reflective
areas, HART ranked \textbf{1st} on the KITTI 2012 benchmark among all published
methods at the time of submission. Code is available at
\url{https://github.com/ZYangChen/HART}.
|
2501.01025 | Towards Adversarially Robust Deep Metric Learning | cs.LG cs.AI | Deep Metric Learning (DML) has shown remarkable successes in many domains by
taking advantage of powerful deep neural networks. Deep neural networks are
prone to adversarial attacks and could be easily fooled by adversarial
examples. The current progress on this robustness issue is mainly about deep
classification models but pays little attention to DML models. Existing works
fail to thoroughly inspect the robustness of DML and neglect an important DML
scenario, the clustering-based inference. In this work, we first point out the
robustness issue of DML models in clustering-based inference scenarios. We find
that, for the clustering-based inference, existing defenses designed for DML
cannot be reused, and adaptations of defenses designed for deep classification
models cannot achieve satisfactory robustness performance. To
alleviate the hazard of adversarial examples, we propose a new defense, the
Ensemble Adversarial Training (EAT), which exploits ensemble learning and
adversarial training. EAT promotes the diversity of the ensemble, encouraging
each model in the ensemble to have different robustness features, and employs a
self-transferring mechanism to make full use of the robustness statistics of
the whole ensemble in the update of every single model. We evaluate the EAT
method on three widely-used datasets with two popular model architectures. The
results show that the proposed EAT method greatly outperforms the adaptations
of defenses designed for deep classification models.
|
2501.01028 | KaLM-Embedding: Superior Training Data Brings A Stronger Embedding Model | cs.CL | As retrieval-augmented generation prevails in large language models,
embedding models are becoming increasingly crucial. Despite the growing number
of general embedding models, prior work often overlooks the critical role of
training data quality. In this work, we introduce KaLM-Embedding, a general
multilingual embedding model that leverages a large quantity of cleaner, more
diverse, and domain-specific training data. Our model has been trained with key
techniques proven to enhance performance: (1) persona-based synthetic data to
create diversified examples distilled from LLMs, (2) ranking consistency
filtering to remove less informative samples, and (3) semi-homogeneous task
batch sampling to improve training efficacy. Departing from traditional
BERT-like architectures, we adopt Qwen2-0.5B as the pre-trained model,
facilitating the adaptation of auto-regressive language models for general
embedding tasks. Extensive evaluations on the MTEB benchmark across multiple
languages show that our model outperforms others of comparable size, setting a
new standard for multilingual embedding models with <1B parameters.
|
2501.01029 | State-of-the-art AI-based Learning Approaches for Deepfake Generation
and Detection, Analyzing Opportunities, Threading through Pros, Cons, and
Future Prospects | cs.LG | The rapid advancement of deepfake technologies, specifically designed to
create incredibly lifelike facial imagery and video content, has ignited a
remarkable level of interest and curiosity across many fields, including
forensic analysis, cybersecurity and the innovative creation of digital
characters. By harnessing the latest breakthroughs in deep learning methods,
such as Generative Adversarial Networks, Variational Autoencoders, Few-Shot
Learning Strategies, and Transformers, the outcomes achieved in generating
deepfakes have been nothing short of astounding and transformative. Also, the
ongoing evolution of detection technologies is being developed to counteract
the potential for misuse associated with deepfakes, effectively addressing
critical concerns that range from political manipulation to the dissemination
of fake news and the ever-growing issue of cyberbullying. This comprehensive
review paper meticulously investigates the most recent developments in deepfake
generation and detection, including around 400 publications, providing an
in-depth analysis of the cutting-edge innovations shaping this rapidly evolving
landscape. Starting with a thorough examination of systematic literature review
methodologies, we embark on a journey that delves into the complex technical
intricacies inherent in the various techniques used for deepfake generation,
comprehensively addressing the challenges faced, potential solutions available,
and the nuanced details surrounding manipulation formulations. Subsequently,
the paper is dedicated to accurately benchmarking leading approaches against
prominent datasets, offering thorough assessments of the contributions that
have significantly impacted these vital domains. Ultimately, we engage in a
thoughtful discussion of the existing challenges, paving the way for continuous
advancements in this critical and ever-dynamic study area.
|
2501.01030 | Reasoning based on symbolic and parametric knowledge bases: a survey | cs.CL cs.AI | Reasoning is fundamental to human intelligence, and essential for
problem-solving, decision-making, and critical thinking. Reasoning refers to
drawing new conclusions based on existing knowledge, which can support various
applications like clinical diagnosis, basic education, and financial analysis.
Though a good number of surveys have been proposed for reviewing
reasoning-related methods, none of them has systematically investigated these
methods from the viewpoint of their dependent knowledge base. Both the
scenarios to which the knowledge bases are applied and their storage formats
are significantly different. Hence, investigating reasoning methods from the
knowledge base perspective helps us better understand the challenges and future
directions. To fill this gap, this paper first classifies the knowledge base
into symbolic and parametric ones. The former explicitly stores information in
human-readable symbols, and the latter implicitly encodes knowledge within
parameters. Then, we provide a comprehensive overview of reasoning methods
using symbolic knowledge bases, parametric knowledge bases, and both of them.
Finally, we identify the future direction toward enhancing reasoning
capabilities to bridge the gap between human and machine intelligence.
|
2501.01031 | ValuesRAG: Enhancing Cultural Alignment Through Retrieval-Augmented
Contextual Learning | cs.CL cs.AI cs.SI | Cultural values alignment in Large Language Models (LLMs) is a critical
challenge due to their tendency to embed Western-centric biases from training
data, leading to misrepresentations and fairness issues in cross-cultural
contexts. Recent approaches, such as role-assignment and few-shot learning,
often struggle with reliable cultural alignment as they heavily rely on
pre-trained knowledge, lack scalability, and fail to capture nuanced cultural
values effectively. To address these issues, we propose ValuesRAG, a novel and
effective framework that applies Retrieval-Augmented Generation (RAG) with
In-Context Learning (ICL) to integrate cultural and demographic knowledge
dynamically during text generation. Leveraging the World Values Survey (WVS)
dataset, ValuesRAG first generates summaries of values for each individual.
Subsequently, we curate several representative regional datasets to serve as
test datasets and retrieve relevant summaries of values based on demographic
features, followed by a reranking step to select the top-k relevant summaries.
ValuesRAG consistently outperforms baseline methods, both in the main
experiment and in the ablation study where only the values summary was
provided. Notably, ValuesRAG achieves a 21% accuracy improvement over the
other baseline methods, highlighting its potential to foster culturally aligned
AI systems and enhance the inclusivity of AI-driven applications.
|
2501.01032 | DynamicLip: Shape-Independent Continuous Authentication via Lip
Articulator Dynamics | cs.CV cs.CR | Biometrics authentication has become increasingly popular due to its security
and convenience; however, traditional biometrics are becoming less desirable in
scenarios such as new mobile devices, Virtual Reality, and Smart Vehicles. For
example, while face authentication is widely used, it suffers from significant
privacy concerns. The collection of complete facial data makes it less
desirable for privacy-sensitive applications. Lip authentication, on the other
hand, has emerged as a promising biometrics method. However, existing lip-based
authentication methods heavily depend on static lip shape when the mouth is
closed, which is less robust to dynamic lip motion and can barely
work when the user is speaking. In this paper, we revisit the nature of lip
biometrics and extract shape-independent features from the lips. We study the
dynamic characteristics of lip biometrics based on articulator motion. Building
on this knowledge, we propose a system for shape-independent continuous
authentication via lip articulator dynamics. This system enables robust,
shape-independent and continuous authentication, making it particularly
suitable for scenarios with high security and privacy requirements. We
conducted comprehensive experiments in different environments and attack
scenarios and collected a dataset of 50 subjects. The results indicate that our
system achieves an overall accuracy of 99.06% and demonstrates robustness under
advanced mimic attacks and AI deepfake attacks, making it a viable solution for
continuous biometric authentication in various applications.
|
2501.01034 | Advancing Singlish Understanding: Bridging the Gap with Datasets and
Multimodal Models | cs.CL cs.SD eess.AS | Singlish, a Creole language rooted in English, is a key focus in linguistic
research within multilingual and multicultural contexts. However, its spoken
form remains underexplored, limiting insights into its linguistic structure and
applications. To address this gap, we standardize and annotate the largest
spoken Singlish corpus, introducing the Multitask National Speech Corpus
(MNSC). These datasets support diverse tasks, including Automatic Speech
Recognition (ASR), Spoken Question Answering (SQA), Spoken Dialogue
Summarization (SDS), and Paralinguistic Question Answering (PQA). We release
standardized splits and a human-verified test set to facilitate further
research. Additionally, we propose SingAudioLLM, a multi-task multimodal model
leveraging multimodal large language models to handle these tasks concurrently.
Experiments reveal our model's adaptability to the Singlish context, achieving
state-of-the-art performance and outperforming prior models by 10-30% in
comparison with other AudioLLMs and cascaded solutions.
|
2501.01037 | MSC-Bench: Benchmarking and Analyzing Multi-Sensor Corruption for
Driving Perception | cs.RO cs.AI cs.CV | Multi-sensor fusion models play a crucial role in autonomous driving
perception, particularly in tasks like 3D object detection and HD map
construction. These models provide essential and comprehensive static
environmental information for autonomous driving systems. While camera-LiDAR
fusion methods have shown promising results by integrating data from both
modalities, they often depend on complete sensor inputs. This reliance can lead
to low robustness and potential failures when sensors are corrupted or missing,
raising significant safety concerns. To tackle this challenge, we introduce the
Multi-Sensor Corruption Benchmark (MSC-Bench), the first comprehensive
benchmark aimed at evaluating the robustness of multi-sensor autonomous driving
perception models against various sensor corruptions. Our benchmark includes 16
combinations of corruption types that disrupt both camera and LiDAR inputs,
either individually or concurrently. Extensive evaluations of six 3D object
detection models and four HD map construction models reveal substantial
performance degradation under adverse weather conditions and sensor failures,
underscoring critical safety issues. The benchmark toolkit and affiliated code
and model checkpoints have been made publicly accessible.
|