| id | title | categories | abstract |
|---|---|---|---|
2501.15760
|
Investigating Application of Deep Neural Networks in Intrusion Detection
System Design
|
cs.CR cs.LG
|
Despite decades of development, existing IDSs still face challenges in
improving detection accuracy, resisting evasion, and detecting unknown attacks.
To address these problems, many researchers have focused on designing and
developing IDSs that use Deep Neural Networks (DNNs), which provide advanced
methods of threat investigation and detection. The motivation of this research
is therefore to learn how effectively applications of Deep Neural Networks
(DNNs) can detect and identify malicious network intrusions, while advancing
the frontiers of their optimal use in network intrusion detection. Using the
ASNM-TUN dataset, the study applied a Multilayer Perceptron modeling approach
to identify network intrusions and to distinguish among legitimate network
traffic, direct network attacks, and obfuscated network attacks. To further
enhance the speed and efficiency of this DNN solution, a thorough feature
selection technique called Forward Feature Selection (FFS) was implemented,
resulting in a significant reduction of the feature subset. Test results with
the Multilayer Perceptron model show no support for the model's ability to
accurately distinguish the classes of network intrusion.
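As an illustration of the pipeline described above, here is a minimal sketch of forward feature selection wrapped around an MLP classifier, using scikit-learn on synthetic data (the ASNM-TUN schema and the paper's exact settings are not reproduced here; feature counts are placeholders):

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in for ASNM-TUN: three classes (legitimate traffic,
# direct attacks, obfuscated attacks). Real features are not reproduced here.
X, y = make_classification(n_samples=300, n_features=10, n_informative=4,
                           n_classes=3, n_clusters_per_class=1, random_state=0)

mlp = MLPClassifier(hidden_layer_sizes=(16,), max_iter=300, random_state=0)

# Greedy forward selection: add one best-scoring feature at a time.
ffs = SequentialFeatureSelector(mlp, n_features_to_select=3,
                                direction="forward", cv=3)
ffs.fit(X, y)
selected = ffs.get_support(indices=True)
print(len(selected))  # 3
```

The reduced feature subset is then used to retrain the final MLP, trading a small amount of accuracy for faster inference.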
|
2501.15763
|
NanoHTNet: Nano Human Topology Network for Efficient 3D Human Pose
Estimation
|
cs.CV
|
The widespread application of 3D human pose estimation (HPE) is limited by
resource-constrained edge devices, requiring more efficient models. A key
approach to enhancing efficiency involves designing networks based on the
structural characteristics of input data. However, effectively utilizing the
structural priors in human skeletal inputs remains challenging. To address
this, we leverage both explicit and implicit spatio-temporal priors of the
human body through innovative model design and a pre-training proxy task.
First, we propose a Nano Human Topology Network (NanoHTNet), a tiny 3D HPE
network with stacked Hierarchical Mixers to capture explicit features.
Specifically, the spatial Hierarchical Mixer efficiently learns the human
physical topology across multiple semantic levels, while the temporal
Hierarchical Mixer with discrete cosine transform and low-pass filtering
captures local instantaneous movements and global action coherence. Moreover,
Efficient Temporal-Spatial Tokenization (ETST) is introduced to enhance
spatio-temporal interaction and reduce computational complexity significantly.
Second, PoseCLR is proposed as a general pre-training method based on
contrastive learning for 3D HPE, aimed at extracting implicit representations
of human topology. By aligning 2D poses from diverse viewpoints in the proxy
task, PoseCLR aids 3D HPE encoders like NanoHTNet in more effectively capturing
the high-dimensional features of the human body, leading to further performance
improvements. Extensive experiments verify that NanoHTNet with PoseCLR
outperforms other state-of-the-art methods in efficiency, making it ideal for
deployment on edge devices like the Jetson Nano. Code and models are available
at https://github.com/vefalun/NanoHTNet.
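The temporal Hierarchical Mixer's use of the discrete cosine transform with low-pass filtering can be illustrated in isolation. This is an assumed, simplified form of the idea applied to a single noisy joint trajectory, not the authors' code:

```python
import numpy as np
from scipy.fft import dct, idct

rng = np.random.default_rng(0)
T = 64
t = np.linspace(0, 1, T)
clean = np.sin(2 * np.pi * 2 * t)            # smooth "global" motion
traj = clean + 0.3 * rng.standard_normal(T)  # plus local jitter

coeffs = dct(traj, norm="ortho")
k = 8                  # keep the 8 lowest frequencies (cutoff is a free choice)
coeffs[k:] = 0.0
smooth = idct(coeffs, norm="ortho")

# The low-pass reconstruction is closer to the clean motion than the noisy input.
print(np.abs(smooth - clean).mean() < np.abs(traj - clean).mean())  # True
```

Retaining only low-frequency DCT coefficients separates global action coherence from local instantaneous movement, which the two temporal branches then process separately.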
|
2501.15767
|
Formal Verification of Markov Processes with Learned Parameters
|
cs.LG cs.AI math.OC
|
We introduce the problem of formally verifying properties of Markov processes
where the parameters are the output of machine learning models. Our formulation
is general and solves a wide range of problems, including verifying properties
of probabilistic programs that use machine learning, and subgroup analysis in
healthcare modeling. We show that for a broad class of machine learning models,
including linear models, tree-based models, and neural networks, verifying
properties of Markov chains like reachability, hitting time, and total reward
can be formulated as a bilinear program. We develop a decomposition and bound
propagation scheme for solving the bilinear program and show through
computational experiments that our method solves the problem to global
optimality up to 100x faster than state-of-the-art solvers. We also release
$\texttt{markovml}$, an open-source tool for building Markov processes,
integrating pretrained machine learning models, and verifying their properties,
available at https://github.com/mmaaz-git/markovml.
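The Markov-chain properties mentioned (reachability, hitting time, total reward) reduce to linear algebra once the transition parameters are fixed numbers; the paper's contribution is handling parameters produced by ML models, which yields a bilinear program instead. A fixed-parameter toy case with invented probabilities:

```python
import numpy as np

# Three transient states plus one absorbing state; Q is the transient block.
P = np.array([
    [0.5, 0.3, 0.1, 0.1],
    [0.2, 0.5, 0.2, 0.1],
    [0.1, 0.2, 0.5, 0.2],
    [0.0, 0.0, 0.0, 1.0],
])
Q = P[:3, :3]

# Expected steps to absorption from each transient state: t = (I - Q)^{-1} 1
t = np.linalg.solve(np.eye(3) - Q, np.ones(3))
print(t)  # expected hitting times per starting state
```

When entries of `P` are themselves outputs of, say, a neural network over patient features, products of decision variables appear and the verification problem becomes bilinear, which is what the decomposition scheme targets.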
|
2501.15768
|
Error-State LQR Formulation for Quadrotor UAV Trajectory Tracking
|
cs.RO cs.SY eess.SY
|
This article presents an error-state Linear Quadratic Regulator (LQR)
formulation for robust trajectory tracking in quadrotor Unmanned Aerial
Vehicles (UAVs). The proposed approach leverages error-state dynamics and
employs exponential coordinates to represent orientation errors, enabling a
linearized system representation for real-time control. The control strategy
integrates an LQR-based full-state feedback controller for trajectory tracking,
combined with a cascaded bodyrate controller to handle actuator dynamics.
Detailed derivations of the error-state dynamics, the linearization process,
and the controller design are provided, highlighting the applicability of the
method for precise and stable quadrotor control in dynamic environments.
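For intuition, a minimal LQR design on a double integrator, a stand-in for one linearized error-state axis rather than the paper's full quadrotor model (the weights are arbitrary choices):

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Error state x = [position error, velocity error] for one axis.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
Q = np.diag([10.0, 1.0])   # penalize tracking error
R = np.array([[1.0]])      # penalize control effort

P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)        # full-state feedback u = -K x
eig = np.linalg.eigvals(A - B @ K)
print(np.all(eig.real < 0))  # True: closed loop is stable
```

The error-state formulation applies the same recipe to the full linearized error dynamics, with orientation errors expressed in exponential coordinates.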
|
2501.15773
|
Is It Navajo? Accurate Language Detection in Endangered Athabaskan
Languages
|
cs.CL
|
Endangered languages, such as Navajo - the most widely spoken Native American
language - are significantly underrepresented in contemporary language
technologies, exacerbating the challenges of their preservation and
revitalization. This study evaluates Google's Language Identification (LangID)
tool, which does not currently support any Native American languages. To
address this, we introduce a random forest classifier trained on Navajo and
twenty languages erroneously suggested by LangID. Despite its simplicity, the
classifier achieves near-perfect accuracy (97-100%). Additionally, the model
demonstrates robustness across other Athabaskan languages - a family of Native
American languages spoken primarily in Alaska, the Pacific Northwest, and parts
of the Southwestern United States - suggesting its potential for broader
application. Our findings underscore the pressing need for NLP systems that
prioritize linguistic diversity and adaptability over centralized,
one-size-fits-all solutions, especially in supporting underrepresented
languages in a multicultural world. This work directly contributes to ongoing
efforts to address cultural biases in language models and advocates for the
development of culturally localized NLP tools that serve diverse linguistic
communities.
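The classifier described above can be sketched with scikit-learn. The snippet below uses synthetic stand-in text, not actual Navajo data, and character n-gram features are an assumption about the representation:

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.pipeline import make_pipeline

# Two synthetic "languages" with distinct character inventories.
texts = ["aa bb aa", "bb aa bb", "aa aa bb", "cc dd cc", "dd cc dd", "cc cc dd"]
labels = ["L1", "L1", "L1", "L2", "L2", "L2"]

clf = make_pipeline(
    CountVectorizer(analyzer="char", ngram_range=(1, 3)),  # character n-grams
    RandomForestClassifier(n_estimators=50, random_state=0),
)
clf.fit(texts, labels)
print(clf.predict(["dd dd cc"])[0])  # L2
```

Character-level features are a common choice for language ID because they capture orthographic signatures without requiring large vocabularies, which suits low-resource settings.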
|
2501.15774
|
Efficient Attention-Sharing Information Distillation Transformer for
Lightweight Single Image Super-Resolution
|
cs.CV
|
Transformer-based Super-Resolution (SR) methods have demonstrated superior
performance compared to convolutional neural network (CNN)-based SR approaches
due to their capability to capture long-range dependencies. However, their high
computational complexity necessitates the development of lightweight approaches
for practical use. To address this challenge, we propose the Attention-Sharing
Information Distillation (ASID) network, a lightweight SR network that
integrates attention-sharing and an information distillation structure
specifically designed for Transformer-based SR methods. We modify the
information distillation scheme, originally designed for efficient CNN
operations, to reduce the computational load of stacked self-attention layers,
effectively addressing the efficiency bottleneck. Additionally, we introduce
attention-sharing across blocks to further minimize the computational cost of
self-attention operations. By combining these strategies, ASID achieves
competitive performance with existing SR methods while requiring only around
300K parameters - significantly fewer than existing CNN-based and
Transformer-based SR models. Furthermore, ASID outperforms state-of-the-art SR
methods when the number of parameters is matched, demonstrating its efficiency
and effectiveness. The code and supplementary material are available on the
project page.
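The attention-sharing idea, computing one attention map and reusing it across blocks to skip repeated QK^T softmaxes, can be sketched in NumPy. This is an illustrative toy, not the ASID architecture:

```python
import numpy as np

def softmax(z, axis=-1):
    e = np.exp(z - z.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(0)
N, d = 16, 8                               # tokens, channels
x = rng.standard_normal((N, d))
Wq, Wk = rng.standard_normal((2, d, d))    # shared query/key projections
Wv1, Wv2 = rng.standard_normal((2, d, d))  # per-block value projections

# One softmax(QK^T) computed once and reused by both blocks.
attn = softmax((x @ Wq) @ (x @ Wk).T / np.sqrt(d))
out_block1 = attn @ (x @ Wv1)
out_block2 = attn @ (x @ Wv2)              # reuses the same attention map
print(out_block1.shape, out_block2.shape)  # (16, 8) (16, 8)
```

Each shared block thus pays only for its value projection and the weighted sum, removing the quadratic attention-map computation from all but the first block.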
|
2501.15775
|
Do Existing Testing Tools Really Uncover Gender Bias in Text-to-Image
Models?
|
cs.CV cs.SE
|
Text-to-Image (T2I) models have recently gained significant attention due to
their ability to generate high-quality images and are consequently used in a
wide range of applications. However, there are concerns about the gender bias
of these models. Previous studies have shown that T2I models can perpetuate or
even amplify gender stereotypes when provided with neutral text prompts.
Researchers have proposed automated gender bias uncovering detectors for T2I
models, but a crucial gap exists: no existing work comprehensively compares the
various detectors or examines how the gender bias they detect deviates
from the actual situation. This study addresses this gap by validating previous
gender bias detectors using a manually labeled dataset and comparing how the
bias identified by various detectors deviates from the actual bias in T2I
models, as verified by manual confirmation. We create a dataset consisting of
6,000 images generated from three cutting-edge T2I models: Stable Diffusion XL,
Stable Diffusion 3, and Dreamlike Photoreal 2.0. During the human-labeling
process, we find that all three T2I models generate a portion (12.48% on
average) of low-quality images (e.g., images with no face present),
where human annotators cannot determine the gender of the person. Our analysis
reveals that all three T2I models show a preference for generating male images,
with SDXL being the most biased. Additionally, images generated using prompts
containing professional descriptions (e.g., lawyer or doctor) show the most
bias. We evaluate seven gender bias detectors and find that none fully capture
the actual level of bias in T2I models, with some detectors overestimating bias
by up to 26.95%. We further investigate the causes of inaccurate estimations,
highlighting the limitations of detectors in dealing with low-quality images.
Based on our findings, we propose an enhanced detector...
|
2501.15777
|
Automatic Feedback Generation for Short Answer Questions using Answer
Diagnostic Graphs
|
cs.CL
|
Short-reading comprehension questions help students understand text structure
but lack effective feedback. Students struggle to identify and correct errors,
while manual feedback creation is labor-intensive. This highlights the need for
automated feedback linking responses to a scoring rubric for deeper
comprehension.
Despite advances in Natural Language Processing (NLP), research has focused
on automatic grading, with limited work on feedback generation. To address
this, we propose a system that generates feedback for student responses.
Our contributions are twofold. First, we introduce the first system for
feedback on short-answer reading comprehension. These answers are derived from
the text, requiring structural understanding. We propose an "answer diagnosis
graph," integrating the text's logical structure with feedback templates. Using
this graph and NLP techniques, we estimate students' comprehension and generate
targeted feedback.
Second, we evaluate our feedback through an experiment with Japanese high
school students (n=39). They answered two 70-80 word questions and were divided
into two groups with minimal academic differences. One received a model answer,
the other system-generated feedback. Both re-answered the questions, and we
compared score changes. A questionnaire assessed perceptions and motivation.
Results showed no significant score improvement between groups, but
system-generated feedback helped students identify errors and key points in the
text. It also significantly increased motivation. However, further refinement
is needed to enhance text structure understanding.
|
2501.15781
|
Large Language Models to Diffusion Finetuning
|
cs.CL cs.AI cs.LG
|
We propose a new finetuning method to provide pre-trained large language
models (LMs) the ability to scale test-time compute through the diffusion
framework. By increasing the number of diffusion steps, we show our finetuned
models achieve monotonically increasing accuracy, directly translating to
improved performance across downstream tasks. Furthermore, our finetuned models
can expertly answer questions on specific topics by integrating powerful
guidance techniques, and autonomously determine the compute required for a
given problem by leveraging adaptive ODE solvers. Our method is universally
applicable to any foundation model pre-trained with a cross-entropy loss and
does not modify any of its original weights, fully preserving its strong
single-step generation capabilities. We show our method is more effective and
fully compatible with traditional finetuning approaches, introducing an
orthogonal new direction to unify the strengths of the autoregressive and
diffusion frameworks.
|
2501.15785
|
Memorization and Regularization in Generative Diffusion Models
|
cs.LG math.DS math.OC
|
Diffusion models have emerged as a powerful framework for generative
modeling. At the heart of the methodology is score matching: learning gradients
of families of log-densities for noisy versions of the data distribution at
different scales. When the loss function adopted in score matching is evaluated
using empirical data, rather than the population loss, the minimizer
corresponds to the score of a time-dependent Gaussian mixture. However, use of
this analytically tractable minimizer leads to data memorization: in both
unconditioned and conditioned settings, the generative model returns the
training samples. This paper contains an analysis of the dynamical mechanism
underlying memorization. The analysis highlights the need for regularization to
avoid reproducing the analytically tractable minimizer; and, in so doing, lays
the foundations for a principled understanding of how to regularize. Numerical
experiments investigate the properties of: (i) Tikhonov regularization; (ii)
regularization designed to promote asymptotic consistency; and (iii)
regularizations induced by under-parameterization of a neural network or by
early stopping when training a neural network. These experiments are evaluated
in the context of memorization, and directions for future development of
regularization are highlighted.
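The memorization mechanism described above can be reproduced numerically: the empirical minimizer is the score of a Gaussian mixture centered at the training points, and following it while annealing the noise scale collapses onto a training sample. A small sketch (not the paper's code; the annealing schedule is an arbitrary choice):

```python
import numpy as np

def empirical_score(x, data, sigma):
    """Score of the Gaussian mixture (1/n) sum_i N(x_i, sigma^2 I) at x."""
    d2 = ((x - data) ** 2).sum(axis=1)
    w = np.exp(-(d2 - d2.min()) / (2 * sigma**2))
    w /= w.sum()
    return (w[:, None] * (data - x)).sum(axis=0) / sigma**2

rng = np.random.default_rng(0)
data = rng.standard_normal((5, 2))          # 5 "training samples" in 2D
x = rng.standard_normal(2) * 3.0            # start far from the data

for sigma in np.geomspace(1.0, 0.01, 200):  # anneal the noise scale down
    x = x + sigma**2 * empirical_score(x, data, sigma)

# The sample has collapsed onto one of the training points: memorization.
print(np.linalg.norm(data - x, axis=1).min() < 1e-3)  # True
```

Regularization, in this view, must keep the learned score away from this analytically tractable minimizer so that generated samples do not simply reproduce the training set.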
|
2501.15790
|
Enhancing Synthetic Oversampling for Imbalanced Datasets Using
Proxima-Orion Neighbors and q-Gaussian Weighting Technique
|
cs.LG stat.ML
|
In this article, we propose a novel oversampling algorithm to increase the
number of instances of minority class in an imbalanced dataset. We select two
instances, Proxima and Orion, from the set of all minority class instances,
based on a combination of relative distance weights and density estimation of
majority class instances. Furthermore, the q-Gaussian distribution is used as a
weighting mechanism to produce new synthetic instances to improve the
representation and diversity. We conduct a comprehensive experiment on 42
datasets extracted from KEEL software and eight datasets from the UCI ML
repository to evaluate the usefulness of the proposed PO-QG algorithm. The
Wilcoxon signed-rank test is used to compare the proposed algorithm with five
other existing algorithms. The test results show that the proposed technique
improves overall classification performance. We also apply the PO-QG algorithm
to a dataset of Indian patients with sarcopenia.
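The evaluation statistic mentioned above is straightforward to reproduce; the paired per-dataset scores below are made-up numbers for illustration only:

```python
from scipy.stats import wilcoxon

# Made-up paired per-dataset F1 scores for two oversampling algorithms.
alg_a = [0.81, 0.78, 0.85, 0.90, 0.76, 0.88, 0.83, 0.79]
alg_b = [0.74, 0.76, 0.80, 0.85, 0.70, 0.84, 0.80, 0.77]

# One-sided test: is algorithm A's paired score distribution shifted upward?
stat, p = wilcoxon(alg_a, alg_b, alternative="greater")
print(p < 0.05)  # significant on these toy numbers
```

The signed-rank test is preferred over a paired t-test here because per-dataset score differences are rarely normally distributed across heterogeneous benchmarks.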
|
2501.15791
|
Harnessing Diverse Perspectives: A Multi-Agent Framework for Enhanced
Error Detection in Knowledge Graphs
|
cs.AI cs.MA
|
Knowledge graphs are widely used in industrial applications, making error
detection crucial for ensuring the reliability of downstream applications.
Existing error detection methods often fail to effectively utilize fine-grained
subgraph information and rely solely on fixed graph structures, while also
lacking transparency in their decision-making processes, which results in
suboptimal detection performance. In this paper, we propose a novel Multi-Agent
framework for Knowledge Graph Error Detection (MAKGED) that utilizes multiple
large language models (LLMs) in a collaborative setting. By concatenating
fine-grained, bidirectional subgraph embeddings with LLM-based query embeddings
during training, our framework integrates these representations to produce four
specialized agents. These agents utilize subgraph information from different
dimensions to engage in multi-round discussions, thereby improving error
detection accuracy and ensuring a transparent decision-making process.
Extensive experiments on FB15K and WN18RR demonstrate that MAKGED outperforms
state-of-the-art methods, enhancing the accuracy and robustness of KG
evaluation. For specific industrial scenarios, our framework can facilitate the
training of specialized agents using domain-specific knowledge graphs for error
detection, which highlights the potential industrial application value of our
framework. Our code and datasets are available at
https://github.com/kse-ElEvEn/MAKGED.
|
2501.15795
|
Can Multimodal Large Language Models be Guided to Improve Industrial
Anomaly Detection?
|
cs.CV
|
In industrial settings, the accurate detection of anomalies is essential for
maintaining product quality and ensuring operational safety. Traditional
industrial anomaly detection (IAD) models often struggle with flexibility and
adaptability, especially in dynamic production environments where new defect
types and operational changes frequently arise. Recent advancements in
Multimodal Large Language Models (MLLMs) hold promise for overcoming these
limitations by combining visual and textual information processing
capabilities. MLLMs excel in general visual understanding due to their training
on large, diverse datasets, but they lack domain-specific knowledge, such as
industry-specific defect tolerance levels, which limits their effectiveness in
IAD tasks. To address these challenges, we propose Echo, a novel multi-expert
framework designed to enhance MLLM performance for IAD. Echo integrates four
expert modules: Reference Extractor which provides a contextual baseline by
retrieving similar normal images, Knowledge Guide which supplies
domain-specific insights, Reasoning Expert which enables structured, stepwise
reasoning for complex queries, and Decision Maker which synthesizes information
from all modules to deliver precise, context-aware responses. Evaluated on the
MMAD benchmark, Echo demonstrates significant improvements in adaptability,
precision, and robustness, moving closer to meeting the demands of real-world
industrial anomaly detection.
|
2501.15797
|
LemmaHead: RAG Assisted Proof Generation Using Large Language Models
|
cs.LG cs.CL cs.IR
|
Developing the logic necessary to solve mathematical problems or write
mathematical proofs is one of the more difficult objectives for large language
models (LLMs). Currently, the most popular methods in the literature consist of
fine-tuning the model on written mathematical content such as academic
publications and textbooks, so that the model can learn to emulate the style of
mathematical writing. In this project, we explore the effectiveness of using
retrieval augmented generation (RAG) to address gaps in the mathematical
reasoning of LLMs. We develop LemmaHead, a RAG knowledge base that supplements
queries to the model with relevant mathematical context, with particular focus
on context from published textbooks. To measure our model's performance in
mathematical reasoning, our testing paradigm focuses on the task of automated
theorem proving via generating proofs to a given mathematical claim in the Lean
formal language.
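The retrieval step of such a RAG pipeline can be sketched with TF-IDF similarity; LemmaHead's actual knowledge base and retriever are not specified here, so the corpus snippets and query below are invented:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Invented textbook snippets standing in for the knowledge base.
corpus = [
    "A group is a set with an associative binary operation, identity, and inverses.",
    "The derivative of a product fg is f'g + fg'.",
    "A prime number has exactly two positive divisors.",
]
query = "Prove that every prime p > 1 has no divisors other than 1 and p."

vec = TfidfVectorizer()
M = vec.fit_transform(corpus + [query])
sims = cosine_similarity(M[-1], M[:-1]).ravel()
context = corpus[sims.argmax()]   # prepended to the proof-generation prompt
print(context)
```

The retrieved context is then supplied alongside the claim when prompting the LLM to emit a Lean proof attempt.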
|
2501.15798
|
MM-Retinal V2: Transfer an Elite Knowledge Spark into Fundus
Vision-Language Pretraining
|
cs.CV
|
Vision-language pretraining (VLP) has been investigated to generalize across
diverse downstream tasks for fundus image analysis. Although recent methods
showcase promising achievements, they significantly rely on large-scale private
image-text data but pay less attention to the pretraining manner, which limits
their further advancements. In this work, we introduce MM-Retinal V2, a
high-quality image-text paired dataset comprising CFP, FFA, and OCT image
modalities. Then, we propose a novel fundus vision-language pretraining model,
namely KeepFIT V2, which is pretrained by integrating knowledge from the elite
data spark into categorical public datasets. Specifically, a preliminary
textual pretraining is adopted to equip the text encoder with primarily
ophthalmic textual knowledge. Moreover, a hybrid image-text knowledge injection
module is designed for knowledge transfer, which is essentially based on a
combination of global semantic concepts from contrastive learning and local
appearance details from generative learning. Extensive experiments across
zero-shot, few-shot, and linear probing settings highlight the generalization
and transferability of KeepFIT V2, delivering performance competitive to
state-of-the-art fundus VLP models trained on large-scale private image-text
datasets. Our dataset and model are publicly available via
https://github.com/lxirich/MM-Retinal.
|
2501.15799
|
Can Molecular Evolution Mechanism Enhance Molecular Representation?
|
q-bio.BM cs.LG cs.NE
|
Molecular evolution is the process of simulating the natural evolution of
molecules in chemical space to explore potential molecular structures and
properties. The relationships between similar molecules are often described
through transformations such as adding, deleting, and modifying atoms and
chemical bonds, reflecting specific evolutionary paths. Existing molecular
representation methods mainly focus on mining data, such as atomic-level
structures and chemical bonds directly from the molecules, often overlooking
their evolutionary history. Consequently, we aim to explore the possibility of
enhancing molecular representations by simulating the evolutionary process. We
extract and analyze the changes along the evolutionary pathway and explore
combining them with existing molecular representations. To this end, this paper
proposes the molecular evolutionary network (MEvoN) for molecular
representations. First, we construct the MEvoN using molecules with a small
number of atoms and generate evolutionary paths utilizing similarity
calculations. Then, by modeling the atomic-level changes, MEvoN reveals their
impact on molecular properties. Experimental results show that the MEvoN-based
molecular property prediction method significantly improves the performance of
traditional end-to-end algorithms on several molecular datasets. The code is
available at https://anonymous.4open.science/r/MEvoN-7416/.
|
2501.15802
|
Adaptive AI-based Decentralized Resource Management in the Cloud-Edge
Continuum
|
cs.DC cs.AI
|
The increasing complexity of application requirements and the dynamic nature
of the Cloud-Edge Continuum present significant challenges for efficient
resource management. These challenges stem from the ever-changing
infrastructure, which is characterized by additions, removals, and
reconfigurations of nodes and links, as well as the variability of application
workloads. Traditional centralized approaches struggle to adapt to these
changes due to their static nature, while decentralized solutions face
challenges such as limited global visibility and coordination overhead. This
paper proposes a hybrid decentralized framework for dynamic application
placement and resource management. The framework utilizes Graph Neural Networks
(GNNs) to embed resource and application states, enabling comprehensive
representation and efficient decision-making. It employs a collaborative
multi-agent reinforcement learning (MARL) approach, where local agents optimize
resource management in their neighborhoods and a global orchestrator ensures
system-wide coordination. By combining decentralized application placement with
centralized oversight, our framework addresses the scalability, adaptability,
and accuracy challenges inherent in the Cloud-Edge Continuum. This work
contributes to the development of decentralized application placement
strategies, the integration of GNN embeddings, and collaborative MARL systems,
providing a foundation for efficient, adaptive and scalable resource
management.
|
2501.15806
|
Autonomous Horizon-based Asteroid Navigation With
Observability-constrained Maneuvers
|
cs.RO math.OC
|
Asteroid exploration is a pertinent challenge due to the varying complexity
of asteroids' dynamical environments and shapes, as well as communication
delays caused by distance. Thus, autonomous navigation methods are continually
being developed and improved in current research to enable their safe
exploration. These
methods often involve using horizon-based Optical Navigation (OpNav) to
determine the spacecraft's location, which is reliant on the visibility of the
horizon. It is critical to ensure the reliability of this measurement such that
the spacecraft may maintain an accurate state estimate throughout its mission.
This paper presents an algorithm that generates control maneuvers for
spacecraft to follow trajectories that allow continuously usable optical
measurements to maintain system observability for safe navigation. This
algorithm improves upon existing asteroid navigation capabilities by allowing
the safe and robust autonomous targeting of various trajectories and orbits at
a wide range of distances within optical measurement range. It is adaptable to
different asteroid scenarios. Overall, the approach develops an
all-encompassing system that simulates the asteroid dynamics, synthetic image
generation, edge detection, horizon-based OpNav, filtering and
observability-enhancing control.
|
2501.15808
|
ClearSight: Human Vision-Inspired Solutions for Event-Based Motion
Deblurring
|
cs.CV
|
Motion deblurring addresses the challenge of image blur caused by camera or
scene movement. Event cameras provide motion information that is encoded in the
asynchronous event streams. To efficiently leverage the temporal information of
event streams, we employ Spiking Neural Networks (SNNs) for motion feature
extraction and Artificial Neural Networks (ANNs) for color information
processing. Due to the non-uniform distribution and inherent redundancy of
event data, existing cross-modal feature fusion methods exhibit certain
limitations. Inspired by the visual attention mechanism in the human visual
system, this study introduces a bioinspired dual-drive hybrid network (BDHNet).
Specifically, the Neuron Configurator Module (NCM) is designed to dynamically
adjust neuron configurations based on cross-modal features, thereby focusing
the spikes on blurry regions and adapting to varying blur scenarios.
Additionally, the Region of Blurry Attention Module (RBAM) is
introduced to generate a blurry mask in an unsupervised manner, effectively
extracting motion clues from the event features and guiding more accurate
cross-modal feature fusion. Extensive subjective and objective evaluations
demonstrate that our method outperforms current state-of-the-art methods on
both synthetic and real-world datasets.
|
2501.15816
|
AdaF^2M^2: Comprehensive Learning and Responsive Leveraging Features in
Recommendation System
|
cs.IR cs.AI
|
Feature modeling, which involves feature representation learning and
leveraging, plays an essential role in industrial recommendation systems.
However, the data distribution in real-world applications usually follows a
highly skewed long-tail pattern due to the popularity bias, which easily leads
to over-reliance on ID-based features, such as user/item IDs and ID sequences
of interactions. Such over-reliance makes it hard for models to learn features
comprehensively, especially for those non-ID meta features, e.g., user/item
characteristics. Further, it limits the feature leveraging ability in models,
getting less generalized and more susceptible to data noise. Previous studies
on feature modeling focus on feature extraction and interaction, hardly
noticing the problems brought about by the long-tail data distribution. To
achieve better feature representation learning and leveraging on real-world
data, we propose a model-agnostic framework AdaF^2M^2, short for Adaptive
Feature Modeling with Feature Mask. The feature-mask mechanism helps
comprehensive feature learning via multi-forward training with augmented
samples, while the adapter applies adaptive weights on features responsive to
different user/item states. By arming base models with AdaF^2M^2, we conduct
online A/B tests on multiple recommendation scenarios, obtaining +1.37% and
+1.89% cumulative improvements on user active days and app duration
respectively. Besides, the extended offline experiments on different models
show improvements as well. AdaF^2M^2 has been widely deployed on both
retrieval and ranking tasks in multiple applications of Douyin Group,
indicating its superior effectiveness and universality.
|
2501.15817
|
Long-Term Interest Clock: Fine-Grained Time Perception in Streaming
Recommendation System
|
cs.IR cs.AI
|
User interests manifest a dynamic pattern within the course of a day, e.g., a
user usually favors soft music at 8 a.m. but may turn to ambient music at 10
p.m. To model dynamic interests in a day, hour embedding is widely used in
traditional daily-trained industrial recommendation systems. However, its
discreteness can cause periodical online patterns and instability in recent
streaming recommendation systems. Recently, Interest Clock has achieved
remarkable performance in streaming recommendation systems. Nevertheless, it
models users' dynamic interests in a coarse-grained manner, merely encoding
users' discrete interests of 24 hours from short-term behaviors. In this paper,
we propose a fine-grained method for perceiving time information for streaming
recommendation systems, named Long-term Interest Clock (LIC). The key idea of
LIC is adaptively calculating current user interests by taking into
consideration the relevance of long-term behaviors around the current time
(e.g., 8 a.m.) given a candidate item. LIC consists of two modules: (1)
Clock-GSU retrieves a sub-sequence by searching through long-term behaviors,
using query information from the candidate item and the current time; (2)
Clock-ESU employs a time-gap-aware attention mechanism to aggregate the
sub-sequence with the candidate
item. With Clock-GSU and Clock-ESU, LIC is capable of capturing users' dynamic
fine-grained interests from long-term behaviors. We conduct online A/B tests,
obtaining +0.122% improvements on user active days. Besides, the extended
offline experiments show improvements as well. Long-term Interest Clock has
been integrated into Douyin Music App's recommendation system.
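One plausible form of the time-gap-aware attention in Clock-ESU (the paper's exact parameterization is not given here) is to penalize each behavior's attention logit by the gap between its timestamp and the current time:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Dot-product relevance of four retrieved long-term behaviors to the candidate.
relevance = np.array([2.0, 2.0, 1.0, 0.5])
gap_hours = np.array([0.5, 20.0, 1.0, 2.0])  # time gap to "now" (e.g., 8 a.m.)

lam = 0.2                                    # decay strength (a free choice)
weights = softmax(relevance - lam * gap_hours)
# A relevant behavior far from the current time (index 1) is down-weighted.
print(weights.argmax())  # 0
```

Behaviors that are both relevant to the candidate item and close to the current time of day dominate the aggregation, which is what lets the model express fine-grained within-day interests.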
|
2501.15820
|
FuzzyLight: A Robust Two-Stage Fuzzy Approach for Traffic Signal Control
Works in Real Cities
|
eess.SY cs.AI cs.SY
|
Effective traffic signal control (TSC) is crucial in mitigating urban
congestion and reducing emissions. Recently, reinforcement learning (RL) has
been the research trend for TSC. However, existing RL algorithms face several
real-world challenges that hinder their practical deployment in TSC: (1) Sensor
accuracy deteriorates with increased sensor detection range, and data
transmission is prone to noise, potentially resulting in unsafe TSC decisions.
(2) During the training of online RL, interactions with the environment could
be unstable, potentially leading to inappropriate traffic signal phase (TSP)
selection and traffic congestion. (3) Most current TSC algorithms focus only on
TSP decisions, overlooking the critical aspect of phase duration, affecting
safety and efficiency. To overcome these challenges, we propose a robust
two-stage fuzzy approach called FuzzyLight, which integrates compressed sensing
and RL for TSC deployment. FuzzyLight offers several key contributions: (1) It
employs fuzzy logic and compressed sensing to address sensor noise and enhances
the efficiency of TSP decisions. (2) It maintains stable performance during
training and combines fuzzy logic with RL to generate precise phases. (3) It
works in real cities across 22 intersections and demonstrates superior
performance in both real-world and simulated environments. Experimental results
indicate that FuzzyLight enhances traffic efficiency by 48% compared to
expert-designed timings in the real world. Furthermore, it achieves
state-of-the-art (SOTA) performance in simulated environments using six
real-world datasets with transmission noise. The code and deployment video are
available at URL1.
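The paper's actual fuzzy rules are not given in the abstract; the toy sketch below (membership breakpoints and durations are invented values) only illustrates how triangular fuzzy sets can map a noisy queue-length estimate to a phase duration:

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b over the support [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def phase_duration(queue_len):
    """Map a (noisy) queue-length estimate to a green-phase duration via
    three fuzzy sets and a weighted-average defuzzification step."""
    low    = tri(queue_len, -1, 0, 10)
    medium = tri(queue_len,  5, 15, 25)
    high   = tri(queue_len, 20, 30, 60)
    # Representative durations (seconds) for each set -- illustrative only.
    durations = {"low": 10.0, "medium": 25.0, "high": 45.0}
    num = (low * durations["low"] + medium * durations["medium"]
           + high * durations["high"])
    den = low + medium + high
    return num / den if den > 0 else durations["medium"]

print(round(phase_duration(15), 1))  # 25.0 (fully "medium" queue)
```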
|
2501.15826
|
MADP: Multi-Agent Deductive Planning for Enhanced Cognitive-Behavioral
Mental Health Question Answer
|
cs.CL
|
The Mental Health Question Answer (MHQA) task requires the seeker and
supporter to complete the support process in a one-turn dialogue. Given the
richness of help-seeker posts, supporters must thoroughly understand the
content and provide logical, comprehensive, and well-structured responses.
Previous works in MHQA mostly focus on single-agent approaches based on the
cognitive element of Cognitive Behavioral Therapy (CBT), but they overlook the
interactions among various CBT elements, such as emotion and cognition. This
limitation hinders the models' ability to thoroughly understand the distress of
help-seekers. To address this, we propose a framework named Multi-Agent
Deductive Planning (MADP), which is based on the interactions between the
various psychological elements of CBT. This method guides Large Language Models
(LLMs) to achieve a deeper understanding of the seeker's context and provide
more personalized assistance based on individual circumstances. Furthermore, we
construct a new dataset based on the MADP framework and use it to fine-tune
LLMs, resulting in a specialized model named MADP-LLM. We conduct extensive
experiments, including comparisons with multiple LLMs, human evaluations, and
automatic evaluations, to validate the effectiveness of the MADP framework and
MADP-LLM.
|
2501.15828
|
Hybrid Quantum Neural Networks with Amplitude Encoding: Advancing
Recovery Rate Predictions
|
q-fin.CP cs.LG quant-ph
|
Recovery rate prediction plays a pivotal role in bond investment strategies,
enhancing risk assessment, optimizing portfolio allocation, improving pricing
accuracy, and supporting effective credit risk management. However, forecasting
faces challenges like high-dimensional features, small sample sizes, and
overfitting. We propose a hybrid Quantum Machine Learning model incorporating
Parameterized Quantum Circuits (PQC) within a neural network framework. PQCs
inherently preserve unitarity, avoiding computationally costly orthogonality
constraints, while amplitude encoding enables exponential data compression,
reducing qubit requirements logarithmically. Applied to a global dataset of
1,725 observations (1996-2023), our method achieved superior accuracy (RMSE
0.228) compared to classical neural networks (0.246) and quantum models with
angle encoding (0.242), with efficient computation times. This work highlights
the potential of hybrid quantum-classical architectures in advancing recovery
rate forecasting.
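Amplitude encoding itself is standard: a d-dimensional feature vector is packed into the amplitudes of a ceil(log2 d)-qubit state, which is the logarithmic qubit saving the abstract mentions. A minimal classical-side sketch of the state preparation (unrelated to the paper's specific PQC):

```python
import numpy as np

def amplitude_encode(x):
    """Encode a real feature vector as quantum state amplitudes:
    pad to the next power of two and L2-normalize so the squared
    amplitudes sum to one. d features need ceil(log2(d)) qubits."""
    x = np.asarray(x, dtype=float)
    n_qubits = max(1, int(np.ceil(np.log2(len(x)))))
    padded = np.zeros(2 ** n_qubits)
    padded[: len(x)] = x
    norm = np.linalg.norm(padded)
    if norm == 0:
        raise ValueError("cannot encode the zero vector")
    return padded / norm, n_qubits

state, n = amplitude_encode([3.0, 0.0, 4.0])  # 3 features -> 2 qubits
print(n, state)  # amplitudes [0.6, 0.0, 0.8, 0.0]
```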
|
2501.15830
|
SpatialVLA: Exploring Spatial Representations for Visual-Language-Action
Model
|
cs.RO cs.AI
|
In this paper, we claim that spatial understanding is the key point in robot
manipulation, and propose SpatialVLA to explore effective spatial
representations for the robot foundation model. Specifically, we introduce
Ego3D Position Encoding to inject 3D information into the input observations of
the visual-language-action model, and propose Adaptive Action Grids to
represent spatial robot movement actions with adaptive discretized action
grids, facilitating the learning of generalizable and transferable spatial
action knowledge for cross-robot control. SpatialVLA is first pre-trained on
top of a vision-language model with 1.1 million real-world robot episodes, to learn a
generalist manipulation policy across multiple robot environments and tasks.
After pre-training, SpatialVLA is directly applied to perform numerous tasks in
a zero-shot manner. The superior results in both simulation and real-world
robots demonstrate its advantage of inferring complex robot motion trajectories
and its strong in-domain multi-task generalization ability. We further show the
proposed Adaptive Action Grids offer a new and effective way to fine-tune the
pre-trained SpatialVLA model for new simulation and real-world setups, where
the pre-learned action grids are re-discretized to capture robot-specific
spatial action movements of new setups. The superior results from extensive
evaluations demonstrate the exceptional in-distribution generalization and
out-of-distribution adaptation capability, highlighting the crucial benefit of
the proposed spatial-aware representations for generalist robot policy
learning. All details and code will be open-sourced.
|
2501.15831
|
Pfungst and Clever Hans: Identifying the unintended cues in a widely
used Alzheimer's disease MRI dataset using explainable deep learning
|
eess.IV cs.CV cs.LG
|
Background.
Deep neural networks have demonstrated high accuracy in classifying
Alzheimer's disease (AD). This study aims to illuminate the underlying
black-box nature and reveal the individual contributions of T1-weighted (T1w)
gray-white matter texture, volumetric information, and preprocessing to
classification performance.
Methods.
We utilized T1w MRI data from the Alzheimer's Disease Neuroimaging Initiative
to distinguish matched AD patients (990 MRIs) from healthy controls (990 MRIs).
Preprocessing included skull stripping and binarization at varying thresholds
to systematically eliminate texture information. A deep neural network was
trained on these configurations, and the model performance was compared using
McNemar tests with discrete Bonferroni-Holm correction. Layer-wise Relevance
Propagation (LRP) and structural similarity metrics between heatmaps were
applied to analyze learned features.
Results.
Classification performance metrics (accuracy, sensitivity, and specificity)
were comparable across all configurations, indicating a negligible influence of
T1w gray- and white-matter signal texture. Models trained on binarized images
demonstrated similar feature performance and relevance distributions, with
volumetric features such as atrophy and skull-stripping features emerging as
primary contributors.
Conclusions.
We revealed a previously undiscovered Clever Hans effect in a widely used AD
MRI dataset. Deep neural network classification relies predominantly on
volumetric features, while eliminating gray-white matter T1w texture did not
decrease performance. This study clearly demonstrates an overestimation of
the importance of gray-white matter contrasts, at least for widely used
structural T1w images, and highlights potential misinterpretation of
performance metrics.
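The binarization step described in Methods, which systematically removes intensity texture while preserving volumetric shape, can be illustrated with a toy example (the threshold value here is arbitrary):

```python
import numpy as np

def binarize(volume, threshold):
    """Threshold an intensity volume to {0, 1}, discarding gray-white
    texture while keeping volumetric (shape/atrophy) information."""
    return (np.asarray(volume) >= threshold).astype(np.uint8)

# Toy 2D "slice" with intensities in 0..255
slice_ = np.array([[10, 120, 200],
                   [30, 140, 210],
                   [ 5,  90, 250]])
mask = binarize(slice_, threshold=100)
print(mask)
```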
|
2501.15833
|
Mode Switching-Induced Instability of Multi-source Feed DC Microgrid
|
eess.SY cs.SY
|
In DC microgrids (DCMGs), DC-bus signaling based control strategy is
extensively used for power management, where mode switching plays a crucial
role in achieving multi-source coordination. However, few studies have noticed
the impact of mode switching and switching strategies on system voltage
stability. To fill this gap, this paper aims to provide a general analysis
framework for mode switching-induced instability in multi-source DCMGs. First,
manifold theory is employed to analyze the stability of the DCMG switched
system. Subsequently, the instability mechanism and its physical interpretation
are explored. The positive feedback activated by the decreasing DC bus voltage
during the switching process leads to instability. Switching strategy may
inadvertently contribute to this instability. To improve stability, a novel
control method based on mode scheduling is proposed, by adjusting switching
strategy and thereby correcting the system trajectory. Finally, both real-time
simulations and experimental tests on a DCMG system verify the correctness and
effectiveness of theoretical analysis results.
|
2501.15836
|
Intelligent Code Embedding Framework for High-Precision Ransomware
Detection via Multimodal Execution Path Analysis
|
cs.CR cs.AI
|
Modern threat landscapes continue to evolve with increasing sophistication,
challenging traditional detection methodologies and necessitating innovative
solutions capable of addressing complex adversarial tactics. A novel framework
was developed to identify ransomware activity through multimodal execution path
analysis, integrating high-dimensional embeddings and dynamic heuristic
derivation mechanisms to capture behavioral patterns across diverse attack
variants. The approach demonstrated high adaptability, effectively mitigating
obfuscation strategies and polymorphic characteristics often employed by
ransomware families to evade detection. Comprehensive experimental evaluations
revealed significant advancements in precision, recall, and accuracy metrics
compared to baseline techniques, particularly under conditions of variable
encryption speeds and obfuscated execution flows. The framework achieved
scalable and computationally efficient performance, ensuring robust
applicability across a range of system configurations, from
resource-constrained environments to high-performance infrastructures. Notable
findings included reduced false positive rates and enhanced detection latency,
even for ransomware families employing sophisticated encryption mechanisms. The
modular design allowed seamless integration of additional modalities, enabling
extensibility and future-proofing against emerging threat vectors. Quantitative
analyses further highlighted the system's energy efficiency, emphasizing its
practicality for deployment in environments with stringent operational
constraints. The results underline the importance of integrating advanced
computational techniques and dynamic adaptability to safeguard digital
ecosystems from increasingly complex threats.
|
2501.15838
|
CrySPAI: A new Crystal Structure Prediction Software Based on Artificial
Intelligence
|
cond-mat.mtrl-sci cs.AI
|
Crystal structure predictions based on the combination of first-principles
calculations and machine learning have achieved significant success in
materials science. However, most of these approaches are limited to predicting
specific systems, which hinders their application to unknown or unexplored
domains. In this paper, we present CrySPAI, a crystal structure prediction
package developed using artificial intelligence (AI) to predict energetically
stable crystal structures of inorganic materials given their chemical
compositions. The software consists of three key modules, an evolutionary
optimization algorithm (EOA) that searches for all possible crystal structure
configurations, density functional theory (DFT) that provides the accurate
energy values for these structures, and a deep neural network (DNN) that learns
the relationship between crystal structures and their corresponding energies.
To optimize the process across these modules, a distributed framework is
implemented to parallelize tasks, and an automated workflow has been integrated
into CrySPAI for seamless execution. This paper reports the development and
implementation of the AI-based CrySPAI crystal structure prediction software
tool and its unique features.
|
2501.15839
|
Controllable Hand Grasp Generation for HOI and Efficient Evaluation
Methods
|
cs.CV
|
Controllable affordance Hand-Object Interaction (HOI) generation has become
an increasingly important area of research in computer vision. In HOI
generation, the hand grasp generation is a crucial step for effectively
controlling the geometry of the hand. Current hand grasp generation methods
rely on 3D information for both the hand and the object. In addition, these
methods lack controllability concerning the hand's location and orientation. We
treat the hand pose as a discrete graph structure and exploit geometric
priors. It is well established that higher-order contextual dependency among
the points improves the quality of the results in general. We propose a
framework of higher-order geometric representations (HORs) inspired by
spectral graph theory and vector algebra to improve the quality of generated
hand poses. We demonstrate the effectiveness of our proposed HORs in devising
a novel controllable diffusion method (based on 2D information) for hand grasp
generation that outperforms the state of the art (SOTA), overcoming the
limitations of existing methods, namely their lack of controllability and
their dependency on 3D information. Once poses are generated, it is natural to
evaluate them using a metric. Popular metrics like FID and MMD are biased and
inefficient for evaluating generated hand poses. Using our proposed HORs,
we introduce an efficient and stable framework of evaluation metrics for grasp
generation methods, addressing inefficiencies and biases in FID and MMD.
|
2501.15842
|
Beyond In-Distribution Performance: A Cross-Dataset Study of Trajectory
Prediction Robustness
|
cs.LG cs.AI stat.ML
|
We study the Out-of-Distribution (OoD) generalization ability of three SotA
trajectory prediction models with comparable In-Distribution (ID) performance
but different model designs. We investigate the influence of inductive bias,
size of training data and data augmentation strategy by training the models on
Argoverse 2 (A2) and testing on Waymo Open Motion (WO) and vice versa. We find
that the smallest model with highest inductive bias exhibits the best OoD
generalization across different augmentation strategies when trained on the
smaller A2 dataset and tested on the large WO dataset. In the converse setting,
training all models on the larger WO dataset and testing on the smaller A2
dataset, we find that all models generalize poorly, even though the model with
the highest inductive bias still exhibits the best generalization ability. We
discuss possible reasons for this surprising finding and draw conclusions about
the design and test of trajectory prediction models and benchmarks.
|
2501.15847
|
Can Location Embeddings Enhance Super-Resolution of Satellite Imagery?
|
cs.CV
|
Publicly available satellite imagery, such as Sentinel-2, often lacks the
spatial resolution required for accurate analysis of remote sensing tasks
including urban planning and disaster response. Current super-resolution
techniques are typically trained on limited datasets, leading to poor
generalization across diverse geographic regions. In this work, we propose a
novel super-resolution framework that enhances generalization by incorporating
geographic context through location embeddings. Our framework employs
Generative Adversarial Networks (GANs) and incorporates techniques from
diffusion models to enhance image quality. Furthermore, we address tiling
artifacts by integrating information from neighboring images, enabling the
generation of seamless, high-resolution outputs. We demonstrate the
effectiveness of our method on the building segmentation task, showing
significant improvements over state-of-the-art methods and highlighting its
potential for real-world applications.
|
2501.15849
|
Gaussian Process-Based Prediction and Control of Hammerstein-Wiener
Systems
|
eess.SY cs.LG cs.SY
|
This work investigates data-driven prediction and control of
Hammerstein-Wiener systems using physics-informed Gaussian process models.
Data-driven prediction algorithms have been developed for structured nonlinear
systems based on Willems' fundamental lemma. However, existing frameworks
cannot treat output nonlinearities and require a dictionary of basis functions
for Hammerstein systems. In this work, an implicit predictor structure is
considered, leveraging the multi-step-ahead ARX structure for the linear part
of the model. This implicit function is learned by Gaussian process regression
with kernel functions designed from Gaussian process priors for the
nonlinearities. The linear model parameters are estimated as hyperparameters by
assuming a stable spline hyperprior. The implicit Gaussian process model
provides explicit output prediction by optimizing selected optimality criteria.
The model is also applied to receding horizon control with the expected control
cost and chance constraint satisfaction guarantee. Numerical results
demonstrate that the proposed prediction and control algorithms are superior to
black-box Gaussian process models.
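The Hammerstein-Wiener structure named in the abstract (a static input nonlinearity, a linear ARX block, and a static output nonlinearity) can be sketched as a simulator; the specific f, g, and coefficients below are arbitrary toy choices, not the paper's identified model:

```python
import numpy as np

def simulate_hw(u, a, b, f, g):
    """Simulate a Hammerstein-Wiener system: a static input nonlinearity f,
    a linear ARX block x_t = sum_i a_i x_{t-1-i} + sum_j b_j f(u_{t-j}),
    and a static output nonlinearity y_t = g(x_t)."""
    x = np.zeros(len(u))
    v = f(u)  # Hammerstein (input) nonlinearity
    for t in range(len(u)):
        ar = sum(a[i] * x[t - 1 - i] for i in range(len(a)) if t - 1 - i >= 0)
        ex = sum(b[j] * v[t - j] for j in range(len(b)) if t - j >= 0)
        x[t] = ar + ex
    return g(x)  # Wiener (output) nonlinearity

u = np.linspace(-1.0, 1.0, 50)
y = simulate_hw(u, a=[0.5], b=[1.0, 0.3],
                f=lambda z: z ** 2,  # toy static input map
                g=np.tanh)           # toy saturating output map
print(y.shape)  # (50,)
```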
|
2501.15850
|
LLM-attacker: Enhancing Closed-loop Adversarial Scenario Generation for
Autonomous Driving with Large Language Models
|
cs.LG cs.CV cs.RO
|
Ensuring and improving the safety of autonomous driving systems (ADS) is
crucial for the deployment of highly automated vehicles, especially in
safety-critical events. To address the rarity of such events, adversarial
scenario generation methods have been developed, in which the behaviors of
traffic participants
are manipulated to induce safety-critical events. However, existing methods
still face two limitations. First, identification of the adversarial
participant directly impacts the effectiveness of the generation. However, the
complexity of real-world scenarios, with numerous participants and diverse
behaviors, makes identification challenging. Second, the potential of generated
safety-critical scenarios to continuously improve ADS performance remains
underexplored. To address these issues, we propose LLM-attacker: a closed-loop
adversarial scenario generation framework leveraging large language models
(LLMs). Specifically, multiple LLM agents are designed and coordinated to
identify optimal attackers. Then, the trajectories of the attackers are
optimized to generate adversarial scenarios. These scenarios are iteratively
refined based on the performance of ADS, forming a feedback loop to improve
ADS. Experimental results show that LLM-attacker can create more dangerous
scenarios than other methods, and the ADS trained with it achieves a collision
rate half that of training with normal scenarios. This indicates the ability of
LLM-attacker to test and enhance the safety and robustness of ADS. Video
demonstrations are provided at:
https://drive.google.com/file/d/1Zv4V3iG7825oyiKbUwS2Y-rR0DQIE1ZA/view.
|
2501.15851
|
Coding for Strand Breaks in Composite DNA
|
cs.IT math.IT
|
Even though DNA can be considered a very stable long-term storage medium,
errors must be expected during storage. Experiments show that the most common
error type arising from storage is strand breaks. We address the problem
of correcting strand breaks in DNA sequences resulting from composite DNA
synthesis. We introduce a novel channel model with realistic assumptions about
the errors resulting from long term storage. Our proposed coding scheme employs
marker codes to correct single breaks. For this purpose, we generalize
run-length-limited codes for the composite setting and derive bounds on the
code size.
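The basic marker-code principle can be illustrated with a toy sketch; this is not the paper's construction (which additionally generalizes run-length-limited codes to the composite setting), just the idea that periodic known markers let a single break be localized on read-back:

```python
def add_markers(seq, marker="AT", k=4):
    """Insert a known marker after every k payload symbols so a single
    strand break can be localized between two markers on read-back."""
    out = []
    for i in range(0, len(seq), k):
        out.append(seq[i:i + k])
        out.append(marker)
    return "".join(out)

def locate_break(fragment, marker="AT", k=4):
    """Given the prefix fragment left of a break, count complete
    (payload + marker) blocks to bound where the break occurred."""
    block = k + len(marker)
    full_blocks = len(fragment) // block
    return full_blocks * k  # payload symbols known to be intact

encoded = add_markers("ACGTGGCA")    # -> "ACGTATGGCAAT"
print(encoded)
print(locate_break(encoded[:7]))     # break inside second block -> 4
```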
|
2501.15852
|
CausalSR: Structural Causal Model-Driven Super-Resolution with
Counterfactual Inference
|
cs.CV
|
Physical and optical factors interacting with sensor characteristics create
complex image degradation patterns. Despite advances in deep learning-based
super-resolution, existing methods overlook the causal nature of degradation by
adopting simplistic black-box mappings. This paper formulates super-resolution
using structural causal models to reason about image degradation processes. We
establish a mathematical foundation that unifies principles from causal
inference, deriving necessary conditions for identifying latent degradation
mechanisms and corresponding propagation. We propose a novel counterfactual
learning strategy that leverages semantic guidance to reason about hypothetical
degradation scenarios, leading to theoretically-grounded representations that
capture invariant features across different degradation conditions. The
framework incorporates an adaptive intervention mechanism with provable bounds
on treatment effects, allowing precise manipulation of degradation factors
while maintaining semantic consistency. Through extensive empirical validation,
we demonstrate that our approach achieves significant improvements over
state-of-the-art methods, particularly in challenging scenarios with compound
degradations. On standard benchmarks, our method consistently outperforms
existing approaches by significant margins (0.86-1.21dB PSNR), while providing
interpretable insights into the restoration process. The theoretical framework
and empirical results demonstrate the fundamental importance of causal
reasoning in understanding image restoration systems.
|
2501.15857
|
Are Transformers Able to Reason by Connecting Separated Knowledge in
Training Data?
|
cs.AI cs.CL cs.LG
|
Humans exhibit remarkable compositional reasoning by integrating knowledge
from various sources. For example, if someone learns B = f(A) from one source
and C = g(B) from another, they can deduce C = g(B) = g(f(A)) even without
encountering A, B, and C together, showcasing the generalization ability of
human intelligence. In this paper, we introduce a synthetic learning task,
"FTCT" (Fragmented at Training, Chained at Testing), to validate the potential
of Transformers in replicating this skill and interpret its inner mechanism. In
the training phase, data consist of separated knowledge fragments from an
overall causal graph. During testing, Transformers must infer complete causal
graph traces by integrating these fragments. Our findings demonstrate that
few-shot Chain-of-Thought prompting enables Transformers to perform
compositional reasoning on FTCT by revealing correct combinations of fragments,
even if such combinations were absent in the training data. Furthermore, the
emergence of compositional reasoning ability is strongly correlated with the
model complexity and training-testing data similarity. We propose, both
theoretically and empirically, that Transformers learn an underlying
generalizable program from training, enabling effective compositional reasoning
during testing.
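The FTCT setup (knowledge fragments at training, full causal-chain traces only at testing) can be sketched as a toy data generator; the chain symbols and fragment length below are illustrative, not the paper's actual task parameters:

```python
import random

def make_ftct_data(chain, frag_len=2, n=6, seed=0):
    """From one causal chain, emit training examples that are contiguous
    sub-chains of length frag_len, plus a full-chain test example, so the
    complete trace never appears at training time."""
    rng = random.Random(seed)
    starts = list(range(len(chain) - frag_len + 1))
    train = [chain[s:s + frag_len] for s in rng.choices(starts, k=n)]
    test = list(chain)  # full trace is only seen at test time
    return train, test

chain = ["A", "B", "C", "D"]
train, test = make_ftct_data(chain)
print(train[0], test)
```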
|
2501.15858
|
Potential Applications of Artificial Intelligence for Cross-language
Intelligibility Assessment of Dysarthric Speech
|
cs.CL cs.SD eess.AS
|
Purpose: This commentary introduces how artificial intelligence (AI) can be
leveraged to advance cross-language intelligibility assessment of dysarthric
speech. Method: We propose a conceptual framework consisting of a universal
model that captures language-universal speech impairments and a
language-specific intelligibility model that incorporates linguistic nuances.
Additionally, we identify key barriers to cross-language intelligibility
assessment, including data scarcity, annotation complexity, and limited
linguistic insights, and present AI-driven solutions to overcome these
challenges. Conclusion: Advances in AI offer transformative opportunities to
enhance cross-language intelligibility assessment for dysarthric speech by
balancing scalability across languages with adaptability to individual
languages.
|
2501.15860
|
The Components of Collaborative Joint Perception and Prediction -- A
Conceptual Framework
|
cs.CV
|
Connected Autonomous Vehicles (CAVs) benefit from Vehicle-to-Everything (V2X)
communication, which enables the exchange of sensor data to achieve
Collaborative Perception (CP). To reduce cumulative errors in perception
modules and mitigate the visual occlusion, this paper introduces a new task,
Collaborative Joint Perception and Prediction (Co-P&P), and provides a
conceptual framework for its implementation to improve motion prediction of
surrounding objects, thereby enhancing vehicle awareness in complex traffic
scenarios. The framework consists of two decoupled core modules, the
Collaborative Scene Completion (CSC) module and the Joint Perception and
Prediction (P&P) module, which
simplify practical deployment and enhance scalability. Additionally, we outline
the challenges in Co-P&P and discuss future directions for this research area.
|
2501.15865
|
Transfer of Knowledge through Reverse Annealing: A Preliminary Analysis
of the Benefits and What to Share
|
quant-ph cs.AI cs.ET
|
Being immersed in the NISQ-era, current quantum annealers present limitations
for solving optimization problems efficiently. To mitigate these limitations,
D-Wave Systems developed a mechanism called Reverse Annealing, a specific type
of quantum annealing designed to perform local refinement of good states found
elsewhere. Despite the research activity around Reverse Annealing, no work has
theorized about the possible benefits related to the transfer of knowledge
under this paradigm. This work moves in that direction and is driven by
experimentation focused on answering two key research questions: i) is reverse
annealing a paradigm that can benefit from knowledge transfer between similar
problems? and ii) can we infer the characteristics that an input solution
should meet to help increase the probability of success? To properly guide the
tests in this paper, the well-known Knapsack Problem has been chosen for
benchmarking purposes, using a total of 34 instances composed of 14 and 16
items.
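For context, the Knapsack Problem is typically handed to an annealer as a penalty/QUBO-style energy minimization; the sketch below uses a simplified penalty energy (a strict QUBO would encode the capacity slack with extra binary variables) and brute-force enumeration in place of annealing, with invented toy values:

```python
from itertools import product

def knapsack_energy(x, values, weights, capacity, penalty=10.0):
    """Penalty-based energy for the knapsack problem: lower is better.
    Overweight selections are penalized quadratically; feasible ones
    score the negated total value."""
    value = sum(v * xi for v, xi in zip(values, x))
    weight = sum(w * xi for w, xi in zip(weights, x))
    over = max(0, weight - capacity)
    return -value + penalty * over ** 2

values, weights, capacity = [10, 7, 5], [4, 3, 2], 5
best = min(product([0, 1], repeat=3),
           key=lambda x: knapsack_energy(x, values, weights, capacity))
print(best)  # (0, 1, 1): value 12 at weight 5
```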
|
2501.15870
|
D-PLS: Decoupled Semantic Segmentation for
4D-Panoptic-LiDAR-Segmentation
|
cs.CV cs.AI
|
This paper introduces a novel approach to 4D Panoptic LiDAR Segmentation that
decouples semantic and instance segmentation, leveraging single-scan semantic
predictions as prior information for instance segmentation. Our method, D-PLS,
first performs single-scan semantic segmentation and aggregates the results
over time, using them to guide instance segmentation. The modular design of
D-PLS allows for seamless integration on top of any semantic segmentation
architecture, without requiring architectural changes or retraining. We
evaluate our approach on the SemanticKITTI dataset, where it demonstrates
significant improvements over the baseline in both classification and
association tasks, as measured by the LiDAR Segmentation and Tracking Quality
(LSTQ) metric. Furthermore, we show that our decoupled architecture not only
enhances instance prediction but also surpasses the baseline due to
advancements in single-scan semantic segmentation.
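The temporal aggregation of single-scan semantic predictions could, for instance, be a per-point majority vote over scans; the paper's exact mechanism is not given in the abstract, so the following is only an illustrative stand-in:

```python
from collections import Counter

def aggregate_semantics(per_scan_labels):
    """Aggregate single-scan semantic predictions for one tracked point
    over time by majority vote, yielding a prior for instance grouping."""
    return Counter(per_scan_labels).most_common(1)[0][0]

print(aggregate_semantics(["car", "car", "truck", "car"]))  # car
```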
|
2501.15871
|
Transient Finite Element Simulation of Accelerator Magnets Using Thermal
Thin Shell Approximation
|
physics.acc-ph cond-mat.supr-con cs.CE physics.comp-ph
|
Thermal transient responses of superconducting magnets can be simulated using
the finite element (FE) method. Some accelerator magnets use cables whose
electric insulation is significantly thinner than the bare electric conductor.
The FE discretisation of such geometries with high-quality meshes leads to many
degrees of freedom. This increases the computational time, particularly since
non-linear material properties are involved. In this work, we propose to use a
thermal thin-shell approximation (TSA) to improve the computational efficiency
when solving the heat diffusion equation in two dimensions. We apply the method
to compute the thermal transient response of superconducting accelerator
magnets used for CERN's Large Hadron Collider (LHC) and High-Luminosity LHC.
The TSA collapses thin electrical insulation layers into lines while accurately
representing the thermal gradient across the insulation's thickness. The TSA is
implemented in the multipole module of the open-source Finite Element Quench
Simulator (FiQuS), which can generate the multipole magnet models
programmatically from input text files. First, the TSA approach is verified by
comparison to classical FE simulations with meshed surface insulation regions
for a simple block of four cables and a detailed model of the MBH dipole. The
results show that the TSA approach reduces the computational time significantly
while preserving the accuracy of the solution. Second, the quench heater (QH)
delay computed with the TSA method is compared to measurements for the MBH
magnet. To this end, the thermal transient simulation is coupled to a
magnetostatic solution to account for magneto-resistive effects. Third, the
TSA's full capabilities are showcased in non-linear magneto-thermal simulations
of several LHC and HL-LHC superconducting magnet models. The full source code,
including all input files, is publicly available.
|
2501.15875
|
LCTG Bench: LLM Controlled Text Generation Benchmark
|
cs.CL
|
The rise of large language models (LLMs) has led to more diverse and
higher-quality machine-generated text. However, their high expressive power
makes it difficult to control outputs based on specific business instructions.
In response, benchmarks focusing on the controllability of LLMs have been
developed, but several issues remain: (1) They primarily cover major languages
like English and Chinese, neglecting low-resource languages like Japanese; (2)
Current benchmarks employ task-specific evaluation metrics, lacking a unified
framework for selecting models based on controllability across different use
cases. To address these challenges, this research introduces LCTG Bench, the
first Japanese benchmark for evaluating the controllability of LLMs. LCTG Bench
provides a unified framework for assessing control performance, enabling users
to select the most suitable model for their use cases based on controllability.
By evaluating nine diverse Japanese-specific and multilingual LLMs like GPT-4,
we highlight the current state and challenges of controllability in Japanese
LLMs and reveal the significant gap between multilingual models and
Japanese-specific models.
|
2501.15876
|
Optimizing Sentence Embedding with Pseudo-Labeling and Model Ensembles:
A Hierarchical Framework for Enhanced NLP Tasks
|
cs.CL cs.AI
|
Sentence embedding tasks are important in natural language processing (NLP),
but improving their performance while keeping them reliable is still hard. This
paper presents a framework that combines pseudo-label generation and model
ensemble techniques to improve sentence embeddings. We use external data from
SimpleWiki, Wikipedia, and BookCorpus to make sure the training data is
consistent. The framework includes a hierarchical model with an encoding layer,
refinement layer, and ensemble prediction layer, using ALBERT-xxlarge,
RoBERTa-large, and DeBERTa-large models. Cross-attention layers combine
external context, and data augmentation techniques like synonym replacement and
back-translation increase data variety. Experimental results show large
improvements in accuracy and F1-score compared to basic models, and studies
confirm that cross-attention and data augmentation make a difference. This work
presents an effective way to improve sentence embedding tasks and lays the
groundwork for future NLP research.
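The ensemble prediction layer is not specified in the abstract; one common, simple baseline (purely illustrative here, with made-up vectors standing in for ALBERT/RoBERTa/DeBERTa outputs) is a weighted mean of per-model L2-normalized embeddings:

```python
import numpy as np

def ensemble_embed(embeddings, weights=None):
    """Combine sentence embeddings from several models: L2-normalize each
    model's vector so scales are comparable, then take a weighted mean."""
    E = np.stack([e / np.linalg.norm(e) for e in embeddings])
    w = (np.full(len(E), 1.0 / len(E)) if weights is None
         else np.asarray(weights, dtype=float))
    w = w / w.sum()
    return w @ E

# Toy stand-ins for three models' embeddings of one sentence
e1, e2, e3 = np.array([1.0, 0.0]), np.array([0.0, 2.0]), np.array([3.0, 4.0])
combined = ensemble_embed([e1, e2, e3])
print(combined)
```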
|
2501.15877
|
Boli: A dataset for understanding stuttering experience and analyzing
stuttered speech
|
cs.HC cs.AI
|
There is a growing need for diverse, high-quality stuttered speech data,
particularly in the context of Indian languages. This paper introduces Project
Boli, a multi-lingual stuttered speech dataset designed to advance scientific
understanding and technology development for individuals who stutter,
particularly in India. The dataset comprises (a) anonymized metadata (gender,
age, country, mother tongue) and responses to a questionnaire about how
stuttering affects participants' daily lives, (b) both read speech (using the
Rainbow Passage) and spontaneous speech (through image description tasks) for
each participant, and (c) detailed annotations of five stutter types: blocks,
prolongations, interjections, sound repetitions, and word repetitions.
We present a comprehensive analysis of the dataset, including the data
collection procedure, experience summarization of people who stutter, severity
assessment of stuttering events and technical validation of the collected data.
The dataset is released as open access to further speech technology
development.
|
2501.15878
|
Slot-Guided Adaptation of Pre-trained Diffusion Models for
Object-Centric Learning and Compositional Generation
|
cs.CV cs.LG
|
We present SlotAdapt, an object-centric learning method that combines slot
attention with pretrained diffusion models by introducing adapters for
slot-based conditioning. Our method preserves the generative power of
pretrained diffusion models, while avoiding their text-centric conditioning
bias. We also incorporate an additional guidance loss into our architecture to
align cross-attention from adapter layers with slot attention. This enhances
the alignment of our model with the objects in the input image without using
external supervision. Experimental results show that our method outperforms
state-of-the-art techniques in object discovery and image generation tasks
across multiple datasets, including those with real images. Furthermore, we
demonstrate through experiments that our method performs remarkably well on
complex real-world images for compositional generation, in contrast to other
slot-based generative methods in the literature. The project page can be found
at https://kaanakan.github.io/SlotAdapt/.
|
2501.15880
|
Movable Antennas Meet Intelligent Reflecting Surface: Friends or Foes?
|
cs.IT eess.SP math.IT
|
Movable antenna (MA) and intelligent reflecting surface (IRS) are considered
promising technologies for the next-generation wireless communication systems
due to their shared channel reconfiguration capabilities. This, however, raises
a fundamental question: Does the performance gain of MAs over conventional
fixed-position antennas (FPAs) still exist in the presence of the IRS? To
answer this question, we investigate in this paper an IRS-assisted multi-user
multiple-input single-output (MISO) MA system, where a multi-MA base station
(BS) transmits to multiple single-FPA users. We formulate a sum-rate
maximization problem by jointly optimizing the active/passive beamforming of
the BS/IRS and the MA positions within a one-dimensional transmit region, which
is challenging to solve optimally. To derive essential insights, we first
study a simplified case with a single user. Then, we analyze the performance
gain of MAs over FPAs in the line-of-sight (LoS) BS-IRS channel and derive the
conditions under which this gain becomes more or less significant. In addition,
we propose an alternating optimization (AO) algorithm to solve the
signal-to-noise ratio (SNR) maximization problem in the single-user case by
combining the block coordinate descent (BCD) method and the graph-based method.
For the general multi-user case, our performance analysis unveils that the
performance gain of MAs over FPAs diminishes with typical transmit precoding
strategies at the BS under certain conditions. We also propose a high-quality
suboptimal solution to the sum-rate maximization problem by applying the AO
algorithm that combines the weighted minimum mean square error (WMMSE)
algorithm, manifold optimization method and discrete sampling method. Numerical
results validate our theoretical analyses and demonstrate that the performance
gain of MAs over FPAs may be reduced if the IRS passive beamforming is
optimized.
|
2501.15881
|
Multivariate Feature Selection and Autoencoder Embeddings of Ovarian
Cancer Clinical and Genetic Data
|
cs.LG
|
This study explores a data-driven approach to discovering novel clinical and
genetic markers in ovarian cancer (OC). Two main analyses were performed: (1) a
nonlinear examination of an OC dataset using autoencoders, which compress data
into a 3-dimensional latent space to detect potential intrinsic separability
between platinum-sensitive and platinum-resistant groups; and (2) an adaptation
of the informative variable identifier (IVI) to determine which features
(clinical or genetic) are most relevant to disease progression. In the
autoencoder analysis, a clearer pattern emerged when using clinical features
and the combination of clinical and genetic data, indicating that disease
progression groups can be distinguished more effectively after supervised
fine-tuning. For genetic data alone, this separability was less apparent but became
more pronounced with a supervised approach. Using the IVI-based feature
selection, key clinical variables (such as type of surgery and neoadjuvant
chemotherapy) and certain gene mutations showed strong relevance, along with
low-risk genetic factors. These findings highlight the strength of combining
machine learning tools (autoencoders) with feature selection methods (IVI) to
gain insights into ovarian cancer progression. They also underscore the
potential for identifying new biomarkers that integrate clinical and genomic
indicators, ultimately contributing to improved patient stratification and
personalized treatment strategies.
|
2501.15889
|
Adaptive Width Neural Networks
|
cs.LG cs.AI
|
For almost 70 years, researchers have mostly relied on hyper-parameter tuning
to pick the width of neural networks' layers from many possible choices. This
paper challenges the status quo by introducing an easy-to-use technique to
learn an unbounded width of a neural network's layer during training. The
technique does not rely on alternate optimization nor hand-crafted gradient
heuristics; rather, it jointly optimizes the width and the parameters of each
layer via simple backpropagation. We apply the technique to a broad range of
data domains such as tables, images, texts, and graphs, showing how the width
adapts to the task's difficulty. By imposing a soft ordering of importance
among neurons, it is possible to truncate the trained network at virtually zero
cost, achieving a smooth trade-off between performance and compute resources in
a structured way. Alternatively, one can dynamically compress the network with
no performance degradation. In light of recent foundation models trained on
large datasets, believed to require billions of parameters and where
hyper-parameter tuning is unfeasible due to huge training costs, our approach
stands as a viable alternative for width learning.
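The soft importance ordering and zero-cost truncation can be sketched as a forward pass. This is an illustrative toy, not the paper's method: the importance vector here is a fixed decaying sequence, whereas the real technique optimizes it jointly with the weights by backpropagation.

```python
import numpy as np

def adaptive_layer(x, W, b, importance, keep=None):
    """Forward pass of a layer whose units carry per-neuron importance weights.

    `importance` is assumed to decay with the neuron index (the soft
    ordering described in the abstract), so truncating to the first
    `keep` neurons discards only the least important ones.
    """
    h = np.maximum(x @ W + b, 0.0) * importance   # importance-scaled ReLU units
    if keep is not None:
        h = h[:, :keep]                           # truncation costs virtually nothing
    return h

rng = np.random.default_rng(1)
x, W, b = rng.normal(size=(2, 4)), rng.normal(size=(4, 8)), np.zeros(8)
imp = np.exp(-0.5 * np.arange(8))                 # decaying soft importance
full = adaptive_layer(x, W, b, imp)
trunc = adaptive_layer(x, W, b, imp, keep=5)
```

Because the kept neurons' activations are identical to their values in the full network, the performance/compute trade-off is traversed simply by choosing `keep` at inference time.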
|
2501.15890
|
Complexity in Complexity: Understanding Visual Complexity Through
Structure, Color, and Surprise
|
cs.CV cs.AI
|
Understanding human perception of visual complexity is crucial in visual
cognition. Recently, Shen et al. (2024) proposed an interpretable
segmentation-based model that accurately predicted complexity across various
datasets, supporting the idea that complexity can be explained simply. In this
work, we investigate the failure of their model to capture structural, color
and surprisal contributions to complexity. To this end, we propose Multi-Scale
Sobel Gradient which measures spatial intensity variations, Multi-Scale Unique
Color which quantifies colorfulness across multiple scales, and surprise scores
generated using a Large Language Model. We test our features on existing
benchmarks and a novel dataset containing surprising images from Visual Genome.
Our experiments demonstrate that modeling complexity accurately is not as
simple as previously thought, requiring additional perceptual and semantic
factors to address dataset biases. Thus our results offer deeper insights into
how humans assess visual complexity.
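A Multi-Scale Sobel Gradient feature of the kind named above can be sketched directly. The exact scale set and aggregation rule are assumptions for illustration; only the idea of averaging Sobel gradient magnitude across dyadic downsamplings comes from the abstract.

```python
import numpy as np

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
SOBEL_Y = SOBEL_X.T

def conv2d(img, k):
    """Valid-mode 2D convolution via explicit sliding windows."""
    h, w = img.shape
    kh, kw = k.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * k)
    return out

def multiscale_sobel(img, scales=(1, 2, 4)):
    """Mean Sobel gradient magnitude, averaged over dyadic downsamplings."""
    vals = []
    for s in scales:
        im = img[::s, ::s]                        # crude downsampling by striding
        gx, gy = conv2d(im, SOBEL_X), conv2d(im, SOBEL_Y)
        vals.append(np.sqrt(gx ** 2 + gy ** 2).mean())
    return float(np.mean(vals))

flat_score = multiscale_sobel(np.zeros((16, 16)))  # no intensity variation
edge = np.zeros((16, 16))
edge[:, 8:] = 1.0
edge_score = multiscale_sobel(edge)                # one strong vertical edge
```

A flat image scores zero while any edge raises the score, matching the feature's role as a measure of spatial intensity variation.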
|
2501.15891
|
Any2AnyTryon: Leveraging Adaptive Position Embeddings for Versatile
Virtual Clothing Tasks
|
cs.CV
|
Image-based virtual try-on (VTON) aims to generate a virtual try-on result by
transferring an input garment onto a target person's image. However, the
scarcity of paired garment-model data makes it challenging for existing methods
to achieve high generalization and quality in VTON. It also limits the ability
to generate mask-free try-ons. To tackle the data scarcity problem, approaches
such as Stable Garment and MMTryon use a synthetic data strategy, effectively
increasing the amount of paired data on the model side. However, existing
methods are typically limited to performing specific try-on tasks and lack
user-friendliness. To enhance the generalization and controllability of VTON
generation, we propose Any2AnyTryon, which can generate try-on results based on
different textual instructions and model garment images to meet various needs,
eliminating the reliance on masks, poses, or other conditions. Specifically, we
first construct the virtual try-on dataset LAION-Garment, the largest known
open-source garment try-on dataset. Then, we introduce adaptive position
embedding, which enables the model to generate satisfactory outfitted model
images or garment images based on input images of different sizes and
categories, significantly enhancing the generalization and controllability of
VTON generation. In our experiments, we demonstrate the effectiveness of our
Any2AnyTryon and compare it with existing methods. The results show that
Any2AnyTryon enables flexible, controllable, and high-quality image-based
virtual try-on generation. Project page:
https://logn-2024.github.io/Any2anyTryonProjectPage/
|
2501.15893
|
Benchmarking Quantum Reinforcement Learning
|
quant-ph cs.LG
|
Benchmarking and establishing proper statistical validation metrics for
reinforcement learning (RL) remain ongoing challenges, where no consensus has
been established yet. The emergence of quantum computing and its potential
applications in quantum reinforcement learning (QRL) further complicate
benchmarking efforts. To enable valid performance comparisons and to streamline
current research in this area, we propose a novel benchmarking methodology,
which is based on a statistical estimator for sample complexity and a
definition of statistical outperformance. Furthermore, considering QRL, our
methodology casts doubt on some previous claims regarding its superiority. We
conducted experiments on a novel benchmarking environment with flexible levels
of complexity. While we still identify possible advantages, our findings are
more nuanced overall. We discuss the potential limitations of these results and
explore their implications for empirical research on quantum advantage in QRL.
|
2501.15897
|
MPC4RL -- A Software Package for Reinforcement Learning based on Model
Predictive Control
|
eess.SY cs.SY
|
In this paper, we present an early software integrating Reinforcement
Learning (RL) with Model Predictive Control (MPC). Our aim is to make recent
theoretical contributions from the literature more accessible to both the RL
and MPC communities. We combine standard software tools developed by the RL
community, such as Gymnasium, stable-baselines3, and CleanRL, with the acados
toolbox, a widely-used software package for efficient MPC algorithms. Our core
contribution is MPC4RL, an open-source Python package that supports
learning-enhanced MPC schemes for existing acados implementations. The package
is designed to be modular, extensible, and user-friendly, facilitating the
tuning of MPC algorithms for a broad range of control problems. It is available
on GitHub.
|
2501.15899
|
Asynchronous distributed collision avoidance with intention consensus
for inland autonomous ships
|
eess.SY cs.SY
|
This paper focuses on the problem of collaborative collision avoidance for
autonomous inland ships. Two solutions are provided to solve the problem in a
distributed manner. We first present a distributed model predictive control
(MPC) algorithm that allows ships to directly negotiate their intention to
avoid collision in a synchronous communication framework. Moreover, we
introduce a new approach to shape the ship's behavior to follow the waterway
traffic regulations. The conditional convergence toward a stationary solution
of this algorithm is guaranteed by the theory of the Alternating Direction
Method of Multipliers (ADMM). To overcome the problem of asynchronous
communication between ships, we adopt a new asynchronous nonlinear ADMM and
present an asynchronous distributed MPC algorithm based on it. Several
simulations and field experiments show that the proposed algorithms can prevent
ship collisions even in complex scenarios.
|
2501.15900
|
Investigating the Sensitivity of Pre-trained Audio Embeddings to Common
Effects
|
cs.LG
|
In recent years, foundation models have significantly advanced data-driven
systems across various domains. Yet, their underlying properties, especially
when functioning as feature extractors, remain under-explored. In this paper,
we investigate the sensitivity to audio effects of audio embeddings extracted
from widely-used foundation models, including OpenL3, PANNs, and CLAP. We focus
on audio effects as the source of sensitivity due to their prevalence
in large audio datasets. By applying parameterized audio effects (gain,
low-pass filtering, reverberation, and bitcrushing), we analyze the correlation
between the deformation trajectories and the effect strength in the embedding
space. We propose to quantify the dimensionality and linearizability of the
deformation trajectories induced by audio effects using canonical correlation
analysis. We find that there exists a direction along which the embeddings move
monotonically as the audio effect strength increases, but that the subspace
containing the displacements is generally high-dimensional. This shows that
pre-trained audio embeddings do not globally linearize the effects. Our
empirical results on instrument classification downstream tasks confirm that
projecting out the estimated deformation directions cannot generally improve
the robustness of pre-trained embeddings to audio effects.
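The key measurement, the effective dimensionality of the subspace containing effect-induced displacements, can be sketched with a plain SVD. This is a simplified stand-in for the paper's CCA-based analysis, and the 95% energy threshold is an assumption.

```python
import numpy as np

def trajectory_dimensionality(embeddings, var_threshold=0.95):
    """Effective dimensionality of an effect-induced deformation trajectory.

    embeddings: (n_strengths, dim) embeddings of one clip under increasing
    effect strength; row 0 is the unprocessed clip. Displacements from the
    clean embedding are decomposed with an SVD, and we count how many
    singular directions explain `var_threshold` of the displacement energy.
    """
    disp = embeddings[1:] - embeddings[0]
    s = np.linalg.svd(disp, compute_uv=False)
    energy = np.cumsum(s ** 2) / np.sum(s ** 2)
    return int(np.searchsorted(energy, var_threshold) + 1)

# A perfectly linearized effect: all displacements share one direction
traj = np.stack([t * np.ones(8) for t in range(4)])
dim = trajectory_dimensionality(traj)
```

A globally linearized effect would yield dimensionality 1; the paper's finding is that real pre-trained embeddings instead spread the displacements over a high-dimensional subspace.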
|
2501.15901
|
Robust Mobile Robot Path Planning via LLM-Based Dynamic Waypoint
Generation
|
cs.RO
|
Mobile robot path planning in complex environments remains a significant
challenge, especially in achieving efficient, safe, and robust paths.
Traditional path planning techniques, such as DRL models, are typically trained
for a given configuration of starting and target positions and only perform
well when those conditions are satisfied. In this paper, we propose a novel
path planning framework that embeds Large Language Models to
empower mobile robots with the capability of dynamically interpreting natural
language commands and autonomously generating efficient, collision-free
navigation paths. The proposed framework uses LLMs to translate high-level user
inputs into actionable waypoints while dynamically adjusting paths in response
to obstacles. We experimentally evaluated our proposed LLM-based approach
across three different environments of progressive complexity, showing the
robustness of our approach with llama3.1 model that outperformed other LLM
models in path planning time, waypoint generation success rate, and collision
avoidance. This underlines the promising contribution of LLMs for enhancing the
capability of mobile robots, especially when their operation involves complex
decisions in large and complex environments. Our framework provides safer,
more reliable navigation and opens a new direction for future research. The
source code of this work is publicly available on GitHub.
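The translate-then-check loop can be sketched around a mock model reply. Everything here is hypothetical: the paper does not specify its waypoint format, so we assume the LLM is prompted to emit a JSON array of [x, y] pairs, and we use a toy circular-obstacle clearance test in place of a real collision checker.

```python
import json

def parse_waypoints(llm_response: str):
    """Extract waypoints from a reply assumed to contain a JSON array of [x, y] pairs."""
    start, end = llm_response.index("["), llm_response.rindex("]") + 1
    return [tuple(p) for p in json.loads(llm_response[start:end])]

def collision_free(waypoints, obstacles, clearance=0.5):
    """Reject paths with a waypoint within `clearance` of any circular obstacle (cx, cy, r)."""
    for x, y in waypoints:
        for cx, cy, r in obstacles:
            if (x - cx) ** 2 + (y - cy) ** 2 < (r + clearance) ** 2:
                return False
    return True

# Mock reply standing in for a real llama3.1 call
reply = "Here is the path: [[0, 0], [2, 2], [4, 4]]"
wps = parse_waypoints(reply)
```

In a full system, a rejected path would trigger re-prompting with the offending obstacle described in natural language, which is the dynamic adjustment the abstract refers to.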
|
2501.15907
|
Emilia: A Large-Scale, Extensive, Multilingual, and Diverse Dataset for
Speech Generation
|
cs.SD cs.CL eess.AS
|
Recent advancements in speech generation have been driven by large-scale
training datasets. However, current models fall short of capturing the
spontaneity and variability inherent in real-world human speech, due to their
reliance on audiobook datasets limited to formal read-aloud speech styles. To
bridge this gap, we introduce Emilia-Pipe, an open-source preprocessing
pipeline to extract high-quality training data from valuable yet underexplored
in-the-wild data that capture spontaneous human speech in real-world contexts.
By leveraging Emilia-Pipe, we construct Emilia, the first multilingual speech
generation dataset derived from in-the-wild speech data. This dataset comprises
over 101k hours of speech across six languages: English, Chinese, German,
French, Japanese, and Korean. In addition, we expand Emilia to Emilia-Large, a
dataset exceeding 216k hours, making it the largest open-source speech
generation dataset available. Extensive experiments demonstrate that Emilia
significantly outperforms traditional audiobook datasets in generating
spontaneous and human-like speech, showcasing superior performance in capturing
diverse speaker timbre and speaking styles of real-world human speech.
Furthermore, this work underscores the importance of scaling dataset size to
advance speech generation research and validates the effectiveness of Emilia
for both multilingual and crosslingual speech generation.
|
2501.15908
|
Evidential Physics-Informed Neural Networks
|
cs.LG cs.AI physics.comp-ph
|
We present a novel class of Physics-Informed Neural Networks that is
formulated based on the principles of Evidential Deep Learning, where the model
incorporates uncertainty quantification by learning parameters of a
higher-order distribution. The dependent and trainable variables of the PDE
residual loss and data-fitting loss terms are recast as functions of the
hyperparameters of an evidential prior distribution. Our model is equipped with
an information-theoretic regularizer that contains the Kullback-Leibler
divergence between two inverse-gamma distributions characterizing predictive
uncertainty. Relative to Bayesian physics-informed neural networks, our
framework appears to exhibit higher sensitivity to data noise, preserve
boundary conditions more faithfully and yield empirical coverage probabilities
closer to nominal ones. Toward examining its relevance for data mining in
scientific discoveries, we demonstrate how to apply our model to inverse
problems involving 1D and 2D nonlinear differential equations.
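For reference, the central quantity in that regularizer has a standard closed form. With the shape-scale parametrization $\mathrm{IG}(\alpha,\beta)$, and using that KL divergence is invariant under the bijection $x \mapsto 1/x$ (so it coincides with the gamma-gamma KL with the same parameters):

```latex
D_{\mathrm{KL}}\!\left(\mathrm{IG}(\alpha_1,\beta_1)\,\middle\|\,\mathrm{IG}(\alpha_2,\beta_2)\right)
= (\alpha_1-\alpha_2)\,\psi(\alpha_1)
+ \ln\frac{\Gamma(\alpha_2)}{\Gamma(\alpha_1)}
+ \alpha_2 \ln\frac{\beta_1}{\beta_2}
+ \alpha_1\,\frac{\beta_2-\beta_1}{\beta_1},
```

where $\psi$ is the digamma function. Every term vanishes when the two distributions coincide, as a divergence must.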
|
2501.15910
|
The Sample Complexity of Online Reinforcement Learning: A Multi-model
Perspective
|
cs.LG cs.SY eess.SY math.OC stat.ML
|
We study the sample complexity of online reinforcement learning for nonlinear
dynamical systems with continuous state and action spaces. Our analysis
accommodates a large class of dynamical systems ranging from a finite set of
nonlinear candidate models to models with bounded and Lipschitz continuous
dynamics, to systems that are parametrized by a compact and real-valued set of
parameters. In the most general setting, our algorithm achieves a policy regret
of $\mathcal{O}(N \epsilon^2 + \mathrm{ln}(m(\epsilon))/\epsilon^2)$, where $N$
is the time horizon, $\epsilon$ is a user-specified discretization width, and
$m(\epsilon)$ measures the complexity of the function class under consideration
via its packing number. In the special case where the dynamics are parametrized
by a compact and real-valued set of parameters (such as neural networks,
transformers, etc.), we prove a policy regret of $\mathcal{O}(\sqrt{N p})$,
where $p$ denotes the number of parameters, recovering earlier
sample-complexity results that were derived for linear time-invariant dynamical
systems. While this article focuses on characterizing sample complexity, the
proposed algorithms are likely to be useful in practice, due to their
simplicity, the ability to incorporate prior knowledge, and their benign
transient behavior.
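As a sanity check on the general bound (and treating $m(\epsilon)$ as constant in $\epsilon$, which is an oversimplification), the discretization width that balances the two terms follows from first-order optimality:

```latex
\frac{d}{d\epsilon}\left(N\epsilon^2 + \frac{\ln m}{\epsilon^2}\right)
= 2N\epsilon - \frac{2\ln m}{\epsilon^3} = 0
\;\;\Longrightarrow\;\;
\epsilon^\ast = \left(\frac{\ln m}{N}\right)^{1/4},
\qquad
N(\epsilon^\ast)^2 + \frac{\ln m}{(\epsilon^\ast)^2} = 2\sqrt{N \ln m}.
```

This is consistent with the $\mathcal{O}(\sqrt{Np})$ rate quoted for the parametric case, where $\ln m(\epsilon)$ scales with the number of parameters $p$ up to logarithmic factors.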
|
2501.15915
|
Parametric Retrieval Augmented Generation
|
cs.CL cs.IR
|
Retrieval-augmented generation (RAG) techniques have emerged as a promising
solution to enhance the reliability of large language models (LLMs) by
addressing issues like hallucinations, outdated knowledge, and domain
adaptation. In particular, existing RAG methods append relevant documents
retrieved from external corpus or databases to the input of LLMs to guide their
generation process, which we refer to as the in-context knowledge injection
method. While this approach is simple and often effective, it has inherent
limitations. Firstly, increasing the context length and number of relevant
documents can lead to higher computational overhead and degraded performance,
especially in complex reasoning tasks. More importantly, in-context knowledge
injection operates primarily at the input level, but LLMs store their internal
knowledge in their parameters. This gap fundamentally limits the capacity of
in-context methods. To this end, we introduce Parametric retrieval-augmented
generation (Parametric RAG), a new RAG paradigm that integrates external
knowledge directly into the parameters of feed-forward networks (FFN) of an LLM
through document parameterization. This approach not only saves online
computational costs by eliminating the need to inject multiple documents into
the LLMs' input context, but also deepens the integration of external knowledge
into the parametric knowledge space of the LLM. Experimental results
demonstrate that Parametric RAG substantially enhances both the effectiveness
and efficiency of knowledge augmentation in LLMs. Also, it can be combined with
in-context RAG methods to achieve even better performance.
We have open-sourced all the code, data, and models in the following
anonymized GitHub link: https://github.com/oneal2000/PRAG
|
2501.15916
|
Online Housing Market
|
cs.GT cs.AI
|
This paper studies an online variant of the celebrated housing market
problem, where each agent has a single house and seeks to exchange it for
another based on her preferences. In this online setting, agents may arrive and
depart at any time, meaning that not all agents are present on the housing
market simultaneously. I extend the well-known serial dictatorship and Gale's
top trading cycles mechanisms to this online scenario, aiming to retain their
desirable properties such as Pareto efficiency, individual rationality, and
strategy-proofness. These extensions also seek to prevent agents from
strategically delaying their arrival or advancing their departure. I
demonstrate that achieving all of these properties simultaneously is impossible
in the online context, and I present several variants that achieve different
subsets of these properties.
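For background, the static mechanism being extended is classic and fits in a few lines. This is Gale's top trading cycles in its standard textbook form (house $i$ identified with its initial owner $i$); the online variant must additionally cope with agents arriving and departing over time.

```python
def top_trading_cycles(prefs):
    """Gale's top trading cycles for the static housing market.

    prefs: dict agent -> list of all agents' houses in decreasing
    preference. Returns the final allocation agent -> house.
    """
    allocation, remaining = {}, set(prefs)
    while remaining:
        # Each remaining agent points at the owner of her favorite remaining house
        point = {a: next(h for h in prefs[a] if h in remaining) for a in remaining}
        # Walk the pointer graph until a node repeats: that closes a cycle
        seen, a = [], next(iter(remaining))
        while a not in seen:
            seen.append(a)
            a = point[a]
        cycle = seen[seen.index(a):]
        for b in cycle:                 # everyone in the cycle receives the house she points at
            allocation[b] = point[b]
        remaining -= set(cycle)
    return allocation

# Three agents: 0 and 1 each prefer the other's house, 2 keeps her own
alloc = top_trading_cycles({0: [1, 0, 2], 1: [0, 1, 2], 2: [2, 0, 1]})
```

The static allocation is unique, Pareto efficient, individually rational, and strategy-proof; the paper's impossibility result says these cannot all survive once arrival and departure times become strategic.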
|
2501.15920
|
Vienna Mosaic: Navigating Social Borders in a Melting Pot
|
physics.soc-ph cs.SI physics.data-an
|
Urban segregation poses a critical challenge in cities, exacerbating
inequalities, social tensions, fears, and polarization. It emerges from a
complex interplay of socioeconomic disparities and residential preferences,
disproportionately impacting migrant communities. In this paper, using
comprehensive administrative data from Vienna, where nearly 40% of the
population consists of international migrants, we analyse co-residence
preferences between migrants and locals at the neighbourhood level. Our
findings reveal two major clusters in Vienna shaped by wealth disparities,
district diversity, and nationality-based homophily. These insights shed light
on the underlying mechanisms of urban segregation and inform the design of
policies for better integration.
|
2501.15921
|
CREATOR Case: PMSM and IM Electric Machine Data for Validation and
Benchmarking of Simulation and Modeling Approaches
|
cs.CE
|
This paper describes the complete sets of data of two different machines, a
PMSM and an IM, that are made available to the public for modeling and
simulation validation and benchmarking. For both machines, not only the
complete sets of design parameters, i.e., motor geometry, electrical
parameters, material properties, and winding schemes as well as the measured
low-frequency equivalent circuit parameters are provided, but also
comprehensive measurement results on six different drive cycles to allow for
transient investigations. The data packages provide all the required
information in terms of design parameters and measurement results that
facilitate modeling and simulation validation and benchmarking results for
verification of different modeling approaches. The paper serves as the key
reference user manual for the extensive and comprehensive sets of data made
available. It is therefore recommended to follow up on the reading of the paper
by a study of the data packages themselves. To the authors' best knowledge,
this is the first time that the complete sets of machine data of two different
machines are published, allowing for benchmarking of modeling and simulation
projects, and reproducibility and reusability following the FAIR data principle
of analyses based on these data.
|
2501.15922
|
SkillScope: A Tool to Predict Fine-Grained Skills Needed to Solve Issues
on GitHub
|
cs.SE cs.LG
|
New contributors often struggle to find tasks that they can tackle when
onboarding onto a new Open Source Software (OSS) project. One reason for this
difficulty is that issue trackers lack explanations about the knowledge or
skills needed to complete a given task successfully. These explanations can be
complex and time-consuming to produce. Past research has partially addressed
this problem by labeling issues with issue types, issue difficulty level, and
issue skills. However, current approaches are limited to a small set of labels
and lack in-depth details about their semantics, which may not sufficiently
help contributors identify suitable issues. To surmount this limitation, this
paper explores large language models (LLMs) and Random Forest (RF) to predict
the multilevel skills required to solve open issues. We introduce a novel
tool, SkillScope, which retrieves current issues from Java projects hosted on
GitHub and predicts the multilevel programming skills required to resolve these
issues. In a case study, we demonstrate that SkillScope could predict 217
multilevel skills for tasks with 91% precision, 88% recall, and 89% F-measure
on average. Practitioners can use this tool to better delegate or choose tasks
to solve in OSS projects.
|
2501.15924
|
Stabilization of an unstable reaction-diffusion PDE with input delay
despite state and input quantization
|
eess.SY cs.SY math.AP
|
We solve the global asymptotic stability problem of an unstable
reaction-diffusion Partial Differential Equation (PDE) subject to input delay
and state quantization by developing a switched predictor-feedback law. To deal
with the input delay, we reformulate the problem as an actuated transport PDE
coupled with the original reaction-diffusion PDE. Then, we design a quantized
predictor-based feedback mechanism that employs a dynamic switching strategy to
adjust the quantization range and error over time. The stability of the
closed-loop system is proven by properly combining backstepping with a small-gain
approach and input-to-state stability techniques, for deriving estimates on
solutions, despite the quantization effect and the system's instability. We
also extend this result to the input quantization case.
|
2501.15925
|
Efficient Distillation of Deep Spiking Neural Networks for Full-Range
Timestep Deployment
|
cs.LG q-bio.NC
|
Spiking Neural Networks (SNNs) are emerging as a brain-inspired alternative
to traditional Artificial Neural Networks (ANNs), prized for their potential
energy efficiency on neuromorphic hardware. Despite this, SNNs often suffer
from accuracy degradation compared to ANNs and face deployment challenges due
to fixed inference timesteps, which require retraining for adjustments,
limiting operational flexibility. To address these issues, our work considers
the spatio-temporal property inherent in SNNs, and proposes a novel
distillation framework for deep SNNs that optimizes performance across
full-range timesteps without specific retraining, enhancing both efficacy and
deployment adaptability. We provide both theoretical analysis and empirical
validations to illustrate that training guarantees the convergence of all
implicit models across full-range timesteps. Experimental results on CIFAR-10,
CIFAR-100, CIFAR10-DVS, and ImageNet demonstrate state-of-the-art performance
among distillation-based SNNs training methods.
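The timestep-dependence that motivates full-range training is visible already in a single neuron. A minimal leaky integrate-and-fire (LIF) sketch, with illustrative threshold and decay values not taken from the paper:

```python
import numpy as np

def lif_rate(inputs, threshold=1.0, decay=0.5):
    """Firing rate of a leaky integrate-and-fire neuron over T timesteps.

    inputs: (T,) input current per timestep. The rate -- and hence a rate-coded
    SNN's output -- depends on the timestep budget T = len(inputs), which is why
    fixed-timestep SNNs need retraining when T changes.
    """
    v, spikes = 0.0, 0
    for i_t in inputs:
        v = decay * v + i_t       # leaky membrane integration
        if v >= threshold:
            spikes += 1
            v = 0.0               # hard reset after a spike
    return spikes / len(inputs)

rate6 = lif_rate(np.full(6, 0.6))   # constant drive: spikes at steps 3 and 6
```

A distillation objective that supervises the network at every timestep budget, rather than one fixed T, is what lets a single trained model be deployed across the full timestep range.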
|
2501.15928
|
Generative AI for Lyapunov Optimization Theory in UAV-based Low-Altitude
Economy Networking
|
cs.NI cs.AI cs.LG
|
Lyapunov optimization theory has recently emerged as a powerful mathematical
framework for solving complex stochastic optimization problems by transforming
long-term objectives into a sequence of real-time short-term decisions while
ensuring system stability. This theory is particularly valuable in unmanned
aerial vehicle (UAV)-based low-altitude economy (LAE) networking scenarios,
where it could effectively address inherent challenges of dynamic network
conditions, multiple optimization objectives, and stability requirements.
Recently, generative artificial intelligence (GenAI) has garnered significant
attention for its unprecedented capability to generate diverse digital content.
Extending beyond content generation, in this paper, we propose a framework
integrating generative diffusion models with reinforcement learning to address
Lyapunov optimization problems in UAV-based LAE networking. We begin by
introducing the fundamentals of Lyapunov optimization theory and analyzing the
limitations of both conventional methods and traditional AI-enabled approaches.
We then examine various GenAI models and comprehensively analyze their
potential contributions to Lyapunov optimization. Subsequently, we develop a
Lyapunov-guided generative diffusion model-based reinforcement learning
framework and validate its effectiveness through a UAV-based LAE networking
case study. Finally, we outline several directions for future research.
|
2501.15941
|
SAPPHIRE: Preconditioned Stochastic Variance Reduction for Faster
Large-Scale Statistical Learning
|
stat.ML cs.LG
|
Regularized empirical risk minimization (rERM) has become important in
data-intensive fields such as genomics and advertising, with stochastic
gradient methods typically used to solve the largest problems. However,
ill-conditioned objectives and non-smooth regularizers undermine the
performance of traditional stochastic gradient methods, leading to slow
convergence and significant computational costs. To address these challenges,
we propose the $\texttt{SAPPHIRE}$ ($\textbf{S}$ketching-based
$\textbf{A}$pproximations for $\textbf{P}$roximal $\textbf{P}$reconditioning
and $\textbf{H}$essian $\textbf{I}$nexactness with Variance-$\textbf{RE}$duced
Gradients) algorithm, which integrates sketch-based preconditioning to tackle
ill-conditioning and uses a scaled proximal mapping to minimize the non-smooth
regularizer. This stochastic variance-reduced algorithm achieves
condition-number-free linear convergence to the optimum, delivering an
efficient and scalable solution for ill-conditioned composite large-scale
convex machine learning problems. Extensive experiments on lasso and logistic
regression demonstrate that $\texttt{SAPPHIRE}$ often converges $20$ times
faster than other common choices such as $\texttt{Catalyst}$, $\texttt{SAGA}$,
and $\texttt{SVRG}$. This advantage persists even when the objective is
non-convex or the preconditioner is infrequently updated, highlighting its
robust and practical effectiveness.
|
2501.15942
|
TimeHF: Billion-Scale Time Series Models Guided by Human Feedback
|
cs.LG
|
Time series neural networks perform exceptionally well in real-world
applications but encounter challenges such as limited scalability, poor
generalization, and suboptimal zero-shot performance. Inspired by large
language models, there is interest in developing large time series models (LTM)
to address these issues. However, current methods struggle with training
complexity, adapting human feedback, and achieving high predictive accuracy. We
introduce TimeHF, a novel pipeline for creating LTMs with 6 billion parameters,
incorporating human feedback. We use patch convolutional embedding to capture
long time series information and design a human feedback mechanism called
time-series policy optimization. Deployed in JD.com's supply chain, TimeHF
handles automated replenishment for over 20,000 products, improving prediction
accuracy by 33.21% over existing methods. This work advances LTM technology and
shows significant industrial benefits.
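Patch embedding for time series, splitting the input into fixed-length patches and projecting each into the model dimension, can be sketched generically as below; TimeHF's convolutional variant and its billion-parameter scale are not reproduced, and all sizes are made-up illustrations.

```python
import numpy as np

def patch_embed(series, patch_len, d_model, rng):
    """Split a 1D series into non-overlapping patches and linearly project
    each patch to d_model dimensions (generic sketch, not TimeHF's exact
    patch convolutional embedding)."""
    n_patches = len(series) // patch_len
    patches = series[: n_patches * patch_len].reshape(n_patches, patch_len)
    W = rng.standard_normal((patch_len, d_model)) / np.sqrt(patch_len)
    return patches @ W            # one token per patch

rng = np.random.default_rng(0)
emb = patch_embed(np.sin(np.arange(96) / 10.0), patch_len=16, d_model=8, rng=rng)
```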
|
2501.15946
|
Impact of Lead Time on Aggregate EV Flexibility for Congestion
Management Services
|
eess.SY cs.SY
|
Increased electrification of energy end-usage can lead to network congestion
during periods of high consumption. Flexibility of loads, such as aggregate
smart charging of Electric Vehicles (EVs), is increasingly leveraged to manage
grid congestion through various market-based mechanisms. Under such an
arrangement, this paper quantifies the effect of lead time on the aggregate
flexibility of EV fleets. Simulations using real-world charging transactions
spanning over different categories of charging stations are performed for two
flexibility products (redispatch and capacity limitations) when offered along
with different business-as-usual (BAU) schedules. Results show that the
variation of tradable flexibility depends mainly on the BAU schedules, the
duration of the requested flexibility, and its start time. Further, the
implication of these flexibility products on the average energy costs and
emissions is also studied for different cases. Simulations show that
bidirectional (V2G) charging outperforms unidirectional smart charging in all
cases.
|
2501.15949
|
Enhancing the Convergence of Federated Learning Aggregation Strategies
with Limited Data
|
cs.LG
|
Deep learning is widely applied to medical data, particularly for image
diagnosis. This type of data is subject to privacy and legal restrictions that
often prevent it from being processed on central servers. However,
collaboration between different research centers is critical for creating
models that are as robust as possible, trained with the largest quantity and
diversity of data available. In this sense, privacy-aware distributed
architectures such as federated learning arise. When applying this type of
architecture, the server aggregates
the different local models trained with the data of each data owner to build a
global model. This point is critical and therefore it is fundamental to analyze
different ways of aggregation according to the use case, taking into account
the distribution of the clients, the characteristics of the model, etc. In this
paper we propose a novel aggregation strategy and we apply it to a use case of
cerebral magnetic resonance image classification. In this use case the
aggregation function proposed manages to improve the convergence obtained over
the rounds of the federated learning process in relation to different
aggregation strategies classically implemented and applied.
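For context, the classical baseline that such aggregation strategies are compared against is FedAvg, which averages client parameters weighted by sample counts. A minimal sketch follows; the paper's novel strategy is not specified in the abstract, so this shows only the standard point of comparison.

```python
import numpy as np

def fedavg(local_weights, n_samples):
    """FedAvg aggregation: average each layer's parameters across clients,
    weighted by the number of local training samples."""
    total = sum(n_samples)
    return [
        sum(w * (n / total) for w, n in zip(layer_ws, n_samples))
        for layer_ws in zip(*local_weights)
    ]

# Two clients, each holding a single-layer "model"; client 2 has 3x the data
w1, w2 = [np.array([1.0, 2.0])], [np.array([3.0, 4.0])]
global_w = fedavg([w1, w2], n_samples=[10, 30])
```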
|
2501.15953
|
Understanding Long Videos via LLM-Powered Entity Relation Graphs
|
cs.IR cs.CV
|
The analysis of extended video content poses unique challenges in artificial
intelligence, particularly when dealing with the complexity of tracking and
understanding visual elements across time. Current methodologies that process
video frames sequentially struggle to maintain coherent tracking of objects,
especially when these objects temporarily vanish and later reappear in the
footage. A critical limitation of these approaches is their inability to
effectively identify crucial moments in the video, largely due to their limited
grasp of temporal relationships. To overcome these obstacles, we present
GraphVideoAgent, a cutting-edge system that leverages the power of graph-based
object tracking in conjunction with large language model capabilities. At its
core, our framework employs a dynamic graph structure that maps and monitors
the evolving relationships between visual entities throughout the video
sequence. This innovative approach enables more nuanced understanding of how
objects interact and transform over time, facilitating improved frame selection
through comprehensive contextual awareness. Our approach demonstrates
remarkable effectiveness when tested against industry benchmarks. In
evaluations on the EgoSchema dataset, GraphVideoAgent achieved a 2.2
improvement over existing methods while requiring analysis of only 8.2 frames
on average. Similarly, testing on the NExT-QA benchmark yielded a 2.0
performance increase with an average frame requirement of 8.1. These results
underscore the efficiency of our graph-guided methodology in enhancing both
accuracy and computational performance in long-form video understanding tasks.
|
2501.15955
|
Rethinking the Bias of Foundation Model under Long-tailed Distribution
|
cs.LG cs.CV stat.ML
|
Long-tailed learning has garnered increasing attention due to its practical
significance. Among the various approaches, the fine-tuning paradigm has gained
considerable interest with the advent of foundation models. However, most
existing methods primarily focus on leveraging knowledge from these models,
overlooking the inherent biases introduced by the imbalanced training data they
rely on. In this paper, we examine how such imbalances from pre-training affect
long-tailed downstream tasks. Specifically, we identify the imbalance biases
inherited by foundation models on downstream tasks as parameter imbalance and
data imbalance. During fine-tuning, we observe that parameter imbalance plays a
more critical role, while data imbalance can be mitigated using existing
re-balancing strategies. Moreover, we find that parameter imbalance cannot be
effectively addressed by current re-balancing techniques, such as adjusting the
logits, during training, unlike data imbalance. To tackle both imbalances
simultaneously, we build our method on causal learning and view the incomplete
semantic factor as the confounder, which brings spurious correlations between
input samples and labels. To resolve the negative effects of this, we propose a
novel backdoor adjustment method that learns the true causal effect between
input samples and labels, rather than merely fitting the correlations in the
data. Notably, we achieve an average performance increase of about $1.67\%$ on
each dataset.
|
2501.15957
|
Inverse Reinforcement Learning via Convex Optimization
|
cs.LG cs.CE math.OC q-bio.NC
|
We consider the inverse reinforcement learning (IRL) problem, where an
unknown reward function of some Markov decision process is estimated based on
observed expert demonstrations. In most existing approaches, IRL is formulated
and solved as a nonconvex optimization problem, posing challenges in scenarios
where robustness and reproducibility are critical. We discuss a convex
formulation of the IRL problem (CIRL) initially proposed by Ng and Russell, and
reformulate the problem such that the domain-specific language CVXPY can be
applied directly to specify and solve the convex problem. We also extend the
CIRL problem to scenarios where the expert policy is not given analytically but
as trajectories of state-action pairs, which may be strongly inconsistent with
optimality, by augmenting some of the constraints. Theoretical analysis and
practical implementation for hyperparameter auto-selection are introduced. This
note helps the users to easily apply CIRL for their problems, without
background knowledge on convex optimization.
|
2501.15963
|
Evaluating Data Influence in Meta Learning
|
cs.LG cs.AI cs.CV
|
As one of the most fundamental models, meta learning aims to effectively
address few-shot learning challenges. However, it still faces significant
issues related to the training data, such as training inefficiencies due to
numerous low-contribution tasks in large datasets and substantial noise from
incorrect labels. Thus, training data attribution methods are needed for meta
learning. However, the dual-layer structure of meta learning complicates the
modeling of training data contributions because of the interdependent influence
between meta-parameters and task-specific parameters, making existing data
influence evaluation tools inapplicable or inaccurate. To address these
challenges, based on the influence function, we propose a general data
attribution evaluation framework for meta-learning within the bilevel
optimization framework. Our approach introduces task influence functions
(task-IF) and instance influence functions (instance-IF) to accurately assess
the impact of specific tasks and individual data points in closed forms. This
framework comprehensively models data contributions across both the inner and
outer training processes, capturing the direct effects of data points on
meta-parameters as well as their indirect influence through task-specific
parameters. We also provide several strategies to enhance computational
efficiency and scalability. Experimental results demonstrate the framework's
effectiveness in training data evaluation via several downstream tasks.
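The influence-function machinery behind task-IF and instance-IF can be illustrated in the classic single-level setting for squared loss, where the Hessian is available in closed form; the paper's bilevel extensions are more involved. The damping term and toy data below are assumptions for illustration.

```python
import numpy as np

def influence_on_params(X, y, w, i, damping=1e-3):
    """Single-level influence function: the approximate parameter change from
    upweighting example i is -H^{-1} grad_w loss_i, computed here with a
    damped Hessian of the squared loss for numerical stability."""
    H = X.T @ X + damping * np.eye(X.shape[1])
    grad_i = (X[i] @ w - y[i]) * X[i]     # gradient of 0.5*(x_i.w - y_i)^2
    return -np.linalg.solve(H, grad_i)

rng = np.random.default_rng(1)
X = rng.standard_normal((50, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.standard_normal(50)
w = np.linalg.lstsq(X, y, rcond=None)[0]   # fitted parameters
infl = influence_on_params(X, y, w, i=0)
```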
|
2501.15968
|
Multi-View Attention Syntactic Enhanced Graph Convolutional Network for
Aspect-based Sentiment Analysis
|
cs.CL cs.AI
|
Aspect-based Sentiment Analysis (ABSA) is the task aimed at predicting the
sentiment polarity of aspect words within sentences. Recently, incorporating
graph neural networks (GNNs) to capture additional syntactic structure
information in the dependency tree derived from syntactic dependency parsing
has been proven to be an effective paradigm for boosting ABSA. Despite GNNs
enhancing model capability by fusing more types of information, most works only
utilize a single topology view of the dependency tree or simply conflate
different perspectives of information without distinction, which limits the
model performance. To address these challenges, in this paper, we propose a new
multi-view attention syntactic enhanced graph convolutional network (MASGCN)
that weighs different syntactic information of views using attention
mechanisms. Specifically, we first construct distance mask matrices from the
dependency tree to obtain multiple subgraph views for GNNs. To aggregate
features from different views, we propose a multi-view attention mechanism to
calculate the attention weights of views. Furthermore, to incorporate more
syntactic information, we fuse the dependency type information matrix into the
adjacency matrices and present a structural entropy loss to learn the
dependency type adjacency matrix. Comprehensive experiments on four benchmark
datasets demonstrate that our model outperforms state-of-the-art methods. The
codes and datasets are available at https://github.com/SELGroup/MASGCN.
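One plausible reading of the distance mask matrices is that view k keeps only word pairs within tree distance k of each other. A sketch under that assumption, computing pairwise distances over the dependency tree with BFS (toy sentence; not the paper's exact construction):

```python
from collections import deque
import numpy as np

def distance_mask_views(edges, n, max_dist=3):
    """Build subgraph views of a dependency tree: view k masks the adjacency
    so that only pairs with tree distance <= k are connected."""
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    dist = np.full((n, n), np.inf)
    for s in range(n):                     # BFS from every word
        dist[s, s], q = 0, deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if dist[s, v] == np.inf:
                    dist[s, v] = dist[s, u] + 1
                    q.append(v)
    return [(dist <= k).astype(float) for k in range(1, max_dist + 1)]

# Toy 4-word chain-shaped dependency tree
views = distance_mask_views([(0, 1), (1, 2), (2, 3)], n=4)
```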
|
2501.15969
|
An Explainable Disease Surveillance System for Early Prediction of
Multiple Chronic Diseases
|
cs.LG cs.AI
|
This study addresses a critical gap in the healthcare system by developing a
clinically meaningful, practical, and explainable disease surveillance system
for multiple chronic diseases, utilizing routine EHR data from multiple U.S.
practices integrated with CureMD's EMR/EHR system. Unlike traditional
systems--using AI models that rely on features from patients' labs--our
approach focuses on routinely available data, such as medical history, vitals,
diagnoses, and medications, to preemptively assess the risks of chronic
diseases in the next year. We trained three distinct models for each chronic
disease: prediction models that forecast the risk of a disease 3, 6, and 12
months before a potential diagnosis. We developed Random Forest models, which
were internally validated using F1 scores and AUROC as performance metrics and
further evaluated by a panel of expert physicians for clinical relevance based
on inferences grounded in medical knowledge. Additionally, we discuss our
implementation of integrating these models into a practical EMR system. Beyond
using Shapley attributes and surrogate models for explainability, we also
introduce a new rule-engineering framework to enhance the intrinsic
explainability of Random Forests.
|
2501.15971
|
REINFORCE-ING Chemical Language Models in Drug Design
|
cs.LG
|
Chemical language models, combined with reinforcement learning, have shown
significant promise to efficiently traverse large chemical spaces in drug
design. However, the performance of various RL algorithms and their best
practices for practical drug design are still unclear. Here, starting from the
principles of the REINFORCE algorithm, we investigate the effect of different
components from RL theory including experience replay, hill-climbing, baselines
to reduce variance, and alternative reward shaping. Additionally we demonstrate
how RL hyperparameters can be fine-tuned for effectiveness, efficiency, or
chemical regularization as demonstrated using the MolOpt benchmark.
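The variance-reducing role of a baseline in REINFORCE can be seen on a toy softmax-policy bandit; the arm means, learning rate, and reward noise below are made-up illustrations, not the chemical-language-model setting.

```python
import numpy as np

rng = np.random.default_rng(0)
true_means = np.array([0.1, 0.5, 0.9])   # hypothetical 3-arm bandit rewards
theta = np.zeros(3)                       # softmax policy logits
baseline, lr, beta = 0.0, 0.1, 0.9

for _ in range(5000):
    p = np.exp(theta - theta.max())
    p /= p.sum()
    a = rng.choice(3, p=p)
    r = true_means[a] + 0.05 * rng.standard_normal()
    grad_logp = -p                        # grad of log softmax at action a
    grad_logp[a] += 1.0
    theta += lr * (r - baseline) * grad_logp   # REINFORCE update
    baseline = beta * baseline + (1 - beta) * r  # moving-average baseline

best_arm = int(np.argmax(theta))
```

Subtracting the moving-average baseline leaves the gradient unbiased while shrinking its variance, which is exactly the effect the abstract investigates.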
|
2501.15972
|
Flexible Blood Glucose Control: Offline Reinforcement Learning from
Human Feedback
|
cs.AI cs.LG
|
Reinforcement learning (RL) has demonstrated success in automating insulin
dosing in simulated type 1 diabetes (T1D) patients but is currently unable to
incorporate patient expertise and preference. This work introduces PAINT
(Preference Adaptation for INsulin control in T1D), an original RL framework
for learning flexible insulin dosing policies from patient records. PAINT
employs a sketch-based approach for reward learning, where past data is
annotated with a continuous reward signal to reflect the patient's desired
outcomes. Labelled data trains a reward model, informing the actions of a novel
safety-constrained offline RL algorithm, designed to restrict actions to a safe
strategy and enable preference tuning via a sliding scale. In-silico evaluation
shows PAINT achieves common glucose goals through simple labelling of desired
states, reducing glycaemic risk by 15% over a commercial benchmark. Action
labelling can also be used to incorporate patient expertise, demonstrating an
ability to pre-empt meals (+10% time-in-range post-meal) and address certain
device errors (-1.6% variance post-error) with patient guidance. These results
hold under realistic conditions, including limited samples, labelling errors,
and intra-patient variability. This work illustrates PAINT's potential in
real-world T1D management and more broadly any tasks requiring rapid and
precise preference learning under safety constraints.
|
2501.15973
|
Integrating Probabilistic Trees and Causal Networks for Clinical and
Epidemiological Data
|
cs.LG q-bio.QM
|
Healthcare decision-making requires not only accurate predictions but also
insights into how factors influence patient outcomes. While traditional Machine
Learning (ML) models excel at predicting outcomes, such as identifying high
risk patients, they are limited in addressing what-if questions about
interventions. This study introduces the Probabilistic Causal Fusion (PCF)
framework, which integrates Causal Bayesian Networks (CBNs) and Probability
Trees (PTrees) to extend beyond predictions. PCF leverages causal relationships
from CBNs to structure PTrees, enabling both the quantification of factor
impacts and simulation of hypothetical interventions. PCF was validated on
three real-world healthcare datasets i.e. MIMIC-IV, Framingham Heart Study, and
Diabetes, chosen for their clinically diverse variables. It demonstrated
predictive performance comparable to traditional ML models while providing
additional causal reasoning capabilities. To enhance interpretability, PCF
incorporates sensitivity analysis and SHapley Additive exPlanations (SHAP).
Sensitivity analysis quantifies the influence of causal parameters on outcomes
such as Length of Stay (LOS), Coronary Heart Disease (CHD), and Diabetes, while
SHAP highlights the importance of individual features in predictive modeling.
By combining causal reasoning with predictive modeling, PCF bridges the gap
between clinical intuition and data-driven insights. Its ability to uncover
relationships between modifiable factors and simulate hypothetical scenarios
provides clinicians with a clearer understanding of causal pathways. This
approach supports more informed, evidence-based decision-making, offering a
robust framework for addressing complex questions in diverse healthcare
settings.
|
2501.15977
|
Classification Error Bound for Low Bayes Error Conditions in Machine
Learning
|
cs.LG stat.ML
|
In statistical classification and machine learning, classification error is
an important performance measure, which is minimized by the Bayes decision
rule. In practice, the unknown true distribution is usually replaced with a
model distribution estimated from the training data in the Bayes decision rule.
This substitution introduces a mismatch between the Bayes error and the
model-based classification error. In this work, we apply classification error
bounds to study the relationship between the error mismatch and the
Kullback-Leibler divergence in machine learning. Motivated by recent
observations of low model-based classification errors in many machine learning
tasks, which imply that the Bayes error is also low, we propose a linear
approximation of the classification error bound for low Bayes error conditions.
We then discuss bounds involving the class priors. Moreover, we extend the
classification error bound to sequences. Using automatic speech recognition as a
representative example of machine learning applications, this work analytically
discusses the correlations among different performance measures with extended
bounds, including cross-entropy loss, language model perplexity, and word error
rate.
|
2501.15981
|
MatCLIP: Light- and Shape-Insensitive Assignment of PBR Material Models
|
cs.CV cs.GR cs.LG
|
Assigning realistic materials to 3D models remains a significant challenge in
computer graphics. We propose MatCLIP, a novel method that extracts shape- and
lighting-insensitive descriptors of Physically Based Rendering (PBR) materials
to assign plausible textures to 3D objects based on images, such as the output
of Latent Diffusion Models (LDMs) or photographs. Matching PBR materials to
static images is challenging because the PBR representation captures the
dynamic appearance of materials under varying viewing angles, shapes, and
lighting conditions. By extending an Alpha-CLIP-based model on material
renderings across diverse shapes and lighting, and encoding multiple viewing
conditions for PBR materials, our approach generates descriptors that bridge
the domains of PBR representations with photographs or renderings, including
LDM outputs. This enables consistent material assignments without requiring
explicit knowledge of material relationships between different parts of an
object. MatCLIP achieves a top-1 classification accuracy of 76.6%,
outperforming state-of-the-art methods such as PhotoShape and MatAtlas by over
15 percentage points on publicly available datasets. Our method can be used to
construct material assignments for 3D shape datasets such as ShapeNet,
3DCoMPaT++, and Objaverse. All code and data will be released.
|
2501.15987
|
MultiPDENet: PDE-embedded Learning with Multi-time-stepping for
Accelerated Flow Simulation
|
math.NA cs.AI cs.NA
|
Solving partial differential equations (PDEs) with numerical methods is
computationally costly, since accurate solutions require fine grids and small
time steps. Machine learning can accelerate this process, but struggles with
weak generalizability, interpretability, and data dependency, and suffers in
long-term prediction. To this end, we propose
a PDE-embedded network with multiscale time stepping (MultiPDENet), which fuses
the scheme of numerical methods and machine learning, for accelerated
simulation of flows. In particular, we design a convolutional filter based on
the structure of finite difference stencils with a small number of parameters
to optimize, which estimates the equivalent form of spatial derivative on a
coarse grid to minimize the equation's residual. A Physics Block with a
4th-order Runge-Kutta integrator at the fine time scale is established that
embeds the structure of PDEs to guide the prediction. To alleviate the curse of
temporal error accumulation in long-term prediction, we introduce a multiscale
time integration approach, where a neural network is used to correct the
prediction error at a coarse time scale. Experiments across various PDE
systems, including the Navier-Stokes equations, demonstrate that MultiPDENet
can accurately predict long-term spatiotemporal dynamics, even given small and
incomplete training data, e.g., spatiotemporally down-sampled datasets.
MultiPDENet achieves state-of-the-art performance compared with other neural
baseline models, with a clear speedup over classical numerical methods.
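The Physics Block's two ingredients, a finite-difference stencil for spatial derivatives and a classical 4th-order Runge-Kutta integrator, can be sketched on a 1D heat equation as a toy stand-in for the flows studied in the paper (grid sizes and viscosity are illustrative assumptions):

```python
import numpy as np

def dxx(u, dx):
    """Second spatial derivative via the 3-point stencil [1, -2, 1]/dx^2
    with periodic boundaries."""
    return (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / dx**2

def rk4_step(u, dt, rhs):
    """Classical 4th-order Runge-Kutta time step."""
    k1 = rhs(u)
    k2 = rhs(u + 0.5 * dt * k1)
    k3 = rhs(u + 0.5 * dt * k2)
    k4 = rhs(u + dt * k3)
    return u + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

# 1D heat equation u_t = nu * u_xx on a periodic grid
nu, nx, dt = 0.1, 64, 1e-4
dx = 1.0 / nx
x = np.linspace(0, 1, nx, endpoint=False)
u = np.sin(2 * np.pi * x)
for _ in range(100):
    u = rk4_step(u, dt, lambda v: nu * dxx(v, dx))
```

The sine mode decays smoothly under diffusion, which provides an easy correctness check on both the stencil and the integrator.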
|
2501.15990
|
3CEL: A corpus of legal Spanish contract clauses
|
cs.CL
|
Legal corpora for Natural Language Processing (NLP) are valuable and scarce
resources in languages like Spanish due to two main reasons: data accessibility
and legal expert knowledge availability. INESData 2024 is a European Union
funded project led by the Universidad Polit\'ecnica de Madrid (UPM) and
developed by Instituto de Ingenier\'ia del Conocimiento (IIC) to create a
series of state-of-the-art NLP resources applied to the legal/administrative
domain in Spanish. The goal of this paper is to present the Corpus of Legal
Spanish Contract Clauses (3CEL), which is a contract information extraction
corpus developed within the framework of INESData 2024. 3CEL contains 373
manually annotated tenders using 19 defined categories (4,782 total tags) that
identify key information for contract understanding and reviewing.
|
2501.15991
|
Modeling and stability analysis of live systems with time-varying
dimension
|
math.OC cs.SY eess.SY math.DS
|
A major limitation of the classical control theory is the assumption that the
state space and its dimension do not change with time. This prevents analyzing
and even formalizing the stability and control problems for open multi-agent
systems whose agents may enter or leave the network, industrial processes where
the sensors or actuators may be exchanged frequently, smart grids, etc. In this
work, we propose a framework of live systems that covers a rather general class
of systems with a time-varying state space. We argue that input-to-state
stability is a proper stability notion for this class of systems, and many of
the classic tools and results, such as Lyapunov methods and superposition
theorems, can be extended to this setting.
|
2501.15994
|
Real-Time Brain Tumor Detection in Intraoperative Ultrasound Using
YOLO11: From Model Training to Deployment in the Operating Room
|
eess.IV cs.CV
|
Intraoperative ultrasound (ioUS) is a valuable tool in brain tumor surgery
due to its versatility, affordability, and seamless integration into the
surgical workflow. However, its adoption remains limited, primarily because of
the challenges associated with image interpretation and the steep learning
curve required for effective use. This study aimed to enhance the
interpretability of ioUS images by developing a real-time brain tumor detection
system deployable in the operating room. We collected 2D ioUS images from the
Brain Tumor Intraoperative Database (BraTioUS) and the public ReMIND dataset,
annotated with expert-refined tumor labels. Using the YOLO11 architecture and
its variants, we trained object detection models to identify brain tumors. The
dataset included 1,732 images from 192 patients, divided into training,
validation, and test sets. Data augmentation expanded the training set to
11,570 images. In the test dataset, YOLO11s achieved the best balance of
precision and computational efficiency, with a mAP@50 of 0.95, mAP@50-95 of
0.65, and a processing speed of 34.16 frames per second. The proposed solution
was prospectively validated in a cohort of 15 consecutively operated patients
diagnosed with brain tumors. Neurosurgeons confirmed its seamless integration
into the surgical workflow, with real-time predictions accurately delineating
tumor regions. These findings highlight the potential of real-time object
detection algorithms to enhance ioUS-guided brain tumor surgery, addressing key
challenges in interpretation and providing a foundation for future development
of computer vision-based tools for neuro-oncological surgery.
|
2501.15995
|
Brain-Inspired Decentralized Satellite Learning in Space Computing Power
Networks
|
cs.LG cs.DC cs.NI eess.SP
|
Satellite networks are able to collect massive space information with
advanced remote sensing technologies, which is essential for real-time
applications such as natural disaster monitoring. However, traditional
centralized processing by the ground server incurs a severe timeliness issue
caused by the transmission bottleneck of raw data. To this end, Space Computing
Power Networks (Space-CPN) emerges as a promising architecture to coordinate
the computing capability of satellites and enable on board data processing.
Nevertheless, due to the natural limitations of solar panels, satellite power
systems struggle to meet the energy requirements of the ever-increasing
intelligent computation tasks of artificial neural networks. To tackle this
issue, we propose to employ spiking neural networks (SNNs), which are supported
by the neuromorphic computing architecture, for on-board data processing. The
extreme sparsity of their computation enables high energy efficiency.
Furthermore, to achieve effective training of these on-board models, we put
forward a decentralized neuromorphic learning framework, where a
communication-efficient inter-plane model aggregation method is developed with
the inspiration from RelaySum. We provide a theoretical analysis to
characterize the convergence behavior of the proposed algorithm, which reveals
a network diameter related convergence speed. We then formulate a minimum
diameter spanning tree problem on the inter-plane connectivity topology and
solve it to further improve the learning performance. Extensive experiments are
conducted to evaluate the superiority of the proposed method over benchmarks.
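The energy argument for SNNs rests on sparse, event-driven computation. A minimal leaky integrate-and-fire neuron (generic textbook model with made-up parameters, not the paper's architecture) makes that sparsity concrete:

```python
import numpy as np

def lif_forward(inputs, tau=2.0, v_th=1.0):
    """Leaky integrate-and-fire neuron: the membrane potential leaks toward
    the input current and emits a binary spike (then hard-resets) when it
    crosses the threshold v_th."""
    v, spikes = 0.0, []
    for x in inputs:
        v = v + (x - v) / tau        # leaky integration
        if v >= v_th:
            spikes.append(1)
            v = 0.0                  # reset after spiking
        else:
            spikes.append(0)
    return np.array(spikes)

rng = np.random.default_rng(0)
s = lif_forward(rng.uniform(0, 2.0, size=100))
sparsity = 1.0 - s.mean()            # fraction of silent timesteps
```

Because most timesteps produce no spike, downstream multiply-accumulate work is skipped, which is the source of the energy savings the abstract relies on.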
|
2501.15998
|
Controllable Forgetting Mechanism for Few-Shot Class-Incremental
Learning
|
cs.CV cs.AI cs.LG
|
Class-incremental learning in the context of limited personal labeled samples
(few-shot) is critical for numerous real-world applications, such as smart home
devices. A key challenge in these scenarios is balancing the trade-off between
adapting to new, personalized classes and maintaining the performance of the
model on the original, base classes. Fine-tuning the model on novel classes
often leads to the phenomenon of catastrophic forgetting, where the accuracy of
base classes declines unpredictably and significantly. In this paper, we
propose a simple yet effective mechanism to address this challenge by
controlling the trade-off between novel and base class accuracy. We
specifically target the ultra-low-shot scenario, where only a single example is
available per novel class. Our approach introduces a Novel Class Detection
(NCD) rule, which adjusts the degree of forgetting a priori while
simultaneously enhancing performance on novel classes. We demonstrate the
versatility of our solution by applying it to state-of-the-art Few-Shot
Class-Incremental Learning (FSCIL) methods, showing consistent improvements
across different settings. To better quantify the trade-off between novel and
base class performance, we introduce new metrics: NCR@2FOR and NCR@5FOR. Our
approach achieves up to a 30% improvement in novel class accuracy on the
CIFAR100 dataset (1-shot, 1 novel class) while maintaining a controlled base
class forgetting rate of 2%.
|
2501.16002
|
ScaDyG:A New Paradigm for Large-scale Dynamic Graph Learning
|
cs.LG
|
Dynamic graphs (DGs), which capture time-evolving relationships between graph
entities, have widespread real-world applications. To efficiently encode DGs
for downstream tasks, most dynamic graph neural networks follow the traditional
message-passing mechanism and extend it with time-based techniques. Despite
their effectiveness, the growth of historical interactions introduces
significant scalability issues, particularly in industry scenarios. To address
this limitation, we propose ScaDyG, with the core idea of designing a
time-aware scalable learning paradigm as follows: 1) Time-aware Topology
Reformulation: ScaDyG first segments historical interactions into time steps
(intra and inter) based on dynamic modeling, enabling weight-free and
time-aware graph propagation within pre-processing. 2) Dynamic Temporal
Encoding: To further achieve fine-grained graph propagation within time steps,
ScaDyG integrates temporal encoding through a combination of exponential
functions in a scalable manner. 3) Hypernetwork-driven Message Aggregation:
After obtaining the propagated features (i.e., messages), ScaDyG utilizes
hypernetwork to analyze historical dependencies, implementing node-wise
representation by an adaptive temporal fusion. Extensive experiments on 12
datasets demonstrate that ScaDyG performs comparably well or even outperforms
other SOTA methods in both node and link-level downstream tasks, with fewer
learnable parameters and higher efficiency.
|
2501.16003
|
Improving Tropical Cyclone Forecasting With Video Diffusion Models
|
cs.CV physics.ao-ph
|
Tropical cyclone (TC) forecasting is crucial for disaster preparedness and
mitigation. While recent deep learning approaches have shown promise, existing
methods often treat TC evolution as a series of independent frame-to-frame
predictions, limiting their ability to capture long-term dynamics. We present a
novel application of video diffusion models for TC forecasting that explicitly
models temporal dependencies through additional temporal layers. Our approach
enables the model to generate multiple frames simultaneously, better capturing
cyclone evolution patterns. We introduce a two-stage training strategy that
significantly improves individual-frame quality and performance in low-data
regimes. Experimental results show our method outperforms the previous approach
of Nath et al. by 19.3% in MAE, 16.2% in PSNR, and 36.1% in SSIM. Most notably,
we extend the reliable forecasting horizon from 36 to 50 hours. Through
comprehensive evaluation using both traditional metrics and Fr\'echet Video
Distance (FVD), we demonstrate that our approach produces more temporally
coherent forecasts while maintaining competitive single-frame quality. Code
accessible at https://github.com/Ren-creater/forecast-video-diffmodels.
|
2501.16004
|
Epidemics on the Move: How Public Transport Demand and Capacity Shape
Disease Spread
|
cs.SI physics.soc-ph
|
Understanding the dynamics of passenger interactions and their
epidemiological impact throughout public transportation systems is crucial for
both service efficiency and public health. High passenger density and close
physical proximity have been shown to accelerate the spread of infectious
diseases. During the COVID-19 pandemic, many public transportation companies
took measures to slow down and minimize disease spreading. One of these
measures was introducing spacing and capacity constraints to public transit
vehicles. Our objective is to explore the effects of demand changes and
transportation measures from an epidemiological point of view, offering
alternative measures to public transportation companies to keep the system
alive while minimizing the epidemiological risk as much as possible.
|
2501.16006
|
Underactuated dexterous robotic grasping with reconfigurable passive
joints
|
cs.RO
|
We introduce a novel reconfigurable passive joint (RP-joint), which has been
implemented and tested on an underactuated three-finger robotic gripper.
RP-joint has no actuation, but instead it is lightweight and compact. It can be
easily reconfigured by applying external forces and locked to perform complex
dexterous manipulation tasks, but only after tension is applied to the
connected tendon. Additionally, we present an approach that allows learning
dexterous grasps from single examples with underactuated grippers and
automatically configures the RP-joints for dexterous manipulation. This is
enhanced by integrating kinaesthetic contact optimization, which improves grasp
performance even further. The proposed RP-joint gripper and grasp planner have
been tested on over 370 grasps executed on 42 IKEA objects and on the YCB
object dataset, achieving grasping success rates of 80% and 87%, on IKEA and
YCB, respectively.
|
2501.16008
|
Gaussian credible intervals in Bayesian nonparametric estimation of the
unseen
|
stat.ME cs.LG stat.ML stat.OT
|
The unseen-species problem assumes $n\geq1$ samples from a population of
individuals belonging to different species, possibly infinite, and calls for
estimating the number $K_{n,m}$ of hitherto unseen species that would be
observed if $m\geq1$ new samples were collected from the same population. This
is a long-standing problem in statistics, which has gained renewed relevance in
biological and physical sciences, particularly in settings with large values of
$n$ and $m$. In this paper, we adopt a Bayesian nonparametric approach to the
unseen-species problem under the Pitman-Yor prior, and propose a novel
methodology to derive large $m$ asymptotic credible intervals for $K_{n,m}$,
for any $n\geq1$. By leveraging a Gaussian central limit theorem for the
posterior distribution of $K_{n,m}$, our method improves upon competitors in
two key aspects: firstly, it enables the full parameterization of the
Pitman-Yor prior, including the Dirichlet prior; secondly, it avoids the need
of Monte Carlo sampling, enhancing computational efficiency. We validate the
proposed method on synthetic and real data, demonstrating that it improves the
empirical performance of competitors by significantly narrowing the gap between
asymptotic and exact credible intervals for any $m\geq1$.
|
2501.16011
|
MEL: Legal Spanish Language Model
|
cs.CL
|
Legal texts, characterized by complex and specialized terminology, present a
significant challenge for Language Models. Adding an underrepresented language,
such as Spanish, to the mix makes it even more challenging. While pre-trained
models like XLM-RoBERTa have shown capabilities in handling multilingual
corpora, their performance on domain-specific documents remains underexplored.
This paper presents the development and evaluation of MEL, a legal language
model based on XLM-RoBERTa-large, fine-tuned on legal documents such as BOE
(Bolet\'in Oficial del Estado, Spain's official state gazette) and congressional
texts. We detail the data collection, processing, training, and evaluation
processes. Evaluation benchmarks show a significant improvement over baseline
models in understanding the legal Spanish language. We also present case
studies demonstrating the model's application to new legal texts, highlighting
its potential to achieve top results across different NLP tasks.
|
2501.16018
|
Strategic Multi-Armed Bandit Problems Under Debt-Free Reporting
|
cs.LG cs.GT
|
We consider the classical multi-armed bandit problem, but with strategic
arms. In this context, each arm is characterized by a bounded support reward
distribution and strategically aims to maximize its own utility by potentially
retaining a portion of its reward, and disclosing only a fraction of it to the
learning agent. This scenario unfolds as a game over $T$ rounds, leading to a
competition of objectives between the learning agent, aiming to minimize their
regret, and the arms, motivated by the desire to maximize their individual
utilities. To address these dynamics, we introduce a new mechanism that
establishes an equilibrium wherein each arm behaves truthfully and discloses as
much of its rewards as possible. With this mechanism, the agent can attain the
second-highest average (true) reward among arms, with a cumulative regret
bounded by $O(\log(T)/\Delta)$ (problem-dependent) or $O(\sqrt{T\log(T)})$
(worst-case).
|
2501.16022
|
Freestyle Sketch-in-the-Loop Image Segmentation
|
cs.CV
|
In this paper, we expand the domain of sketch research into the field of
image segmentation, aiming to establish freehand sketches as a query modality
for subjective image segmentation. Our innovative approach introduces a
"sketch-in-the-loop" image segmentation framework, enabling the segmentation of
visual concepts partially, completely, or in groupings - a truly "freestyle"
approach - without the need for a purpose-made dataset (i.e., mask-free). This
framework capitalises on the synergy between sketch-based image retrieval
(SBIR) models and large-scale pre-trained models (CLIP or DINOv2). The former
provides an effective training signal, while fine-tuned versions of the latter
execute the subjective segmentation. Additionally, our purpose-made
augmentation strategy enhances the versatility of our sketch-guided mask
generation, allowing segmentation at multiple granularity levels. Extensive
evaluations across diverse benchmark datasets underscore the superior
performance of our method in comparison to existing approaches across various
evaluation scenarios.
|
2501.16029
|
FDLLM: A Text Fingerprint Detection Method for LLMs in Multi-Language,
Multi-Domain Black-Box Environments
|
cs.CR cs.AI
|
Using large language model (LLM) integration platforms without transparency
about which LLM is being invoked can lead to potential security risks.
Specifically, attackers may exploit this black-box scenario to deploy malicious
models and embed viruses in the code provided to users. In this context, it is
increasingly urgent for users to clearly identify the LLM they are interacting
with, in order to avoid unknowingly becoming victims of malicious models.
However, existing studies primarily focus on mixed classification of human and
machine-generated text, with limited attention to classifying texts generated
solely by different models. Current research also faces dual bottlenecks: poor
quality of LLM-generated text (LLMGT) datasets and limited coverage of
detectable LLMs, resulting in poor detection performance for various LLMGT in
black-box scenarios. We propose the first LLMGT fingerprint detection model,
\textbf{FDLLM}, based on Qwen2.5-7B and fine-tuned using LoRA to address these
challenges. FDLLM can more efficiently handle detection tasks across
multilingual and multi-domain scenarios. Furthermore, we constructed a dataset
named \textbf{FD-Datasets}, consisting of 90,000 samples that span multiple
languages and domains, covering 20 different LLMs. Experimental results
demonstrate that FDLLM achieves a macro F1 score 16.7\% higher than the best
baseline method, LM-D.
|
2501.16033
|
PRISMe: A Novel LLM-Powered Tool for Interactive Privacy Policy
Assessment
|
cs.HC cs.AI
|
Protecting online privacy requires users to engage with and comprehend
website privacy policies, but many policies are difficult and tedious to read.
We present PRISMe (Privacy Risk Information Scanner for Me), a novel Large
Language Model (LLM)-driven privacy policy assessment tool, which helps users
to understand the essence of a lengthy, complex privacy policy while browsing.
The tool, a browser extension, integrates a dashboard and an LLM chat. One
major contribution is the first rigorous evaluation of such a tool. In a
mixed-methods user study (N=22), we evaluate PRISMe's efficiency, usability,
understandability of the provided information, and impacts on awareness. While
our tool improves privacy awareness by providing a comprehensible quick
overview and a quality chat for in-depth discussion, users note issues with
consistency and building trust in the tool. From our insights, we derive
important design implications to guide future policy analysis tools.
|
2501.16037
|
Addressing Out-of-Label Hazard Detection in Dashcam Videos: Insights
from the COOOL Challenge
|
cs.CV
|
This paper presents a novel approach for hazard analysis in dashcam footage,
addressing the detection of driver reactions to hazards, the identification of
hazardous objects, and the generation of descriptive captions. We first
introduce a method for detecting driver reactions through speed and sound
anomaly detection, leveraging unsupervised learning techniques. For hazard
detection, we employ a set of heuristic rules as weak classifiers, which are
combined using an ensemble method. This ensemble approach is further refined
with differential privacy to mitigate overconfidence, ensuring robustness
despite the lack of labeled data. Lastly, we use state-of-the-art
vision-language models for hazard captioning, generating descriptive labels for
the detected hazards. Our method achieved the highest scores in the Challenge
on Out-of-Label in Autonomous Driving, demonstrating its effectiveness across
all three tasks. Source codes are publicly available at
https://github.com/ffyyytt/COOOL_2025.
|
2501.16046
|
Revisiting Projection-Free Online Learning with Time-Varying Constraints
|
cs.LG stat.ML
|
We investigate constrained online convex optimization, in which decisions
must belong to a fixed and typically complicated domain, and are required to
approximately satisfy additional time-varying constraints over the long term.
In this setting, the commonly used projection operations are often
computationally expensive or even intractable. To avoid the time-consuming
operation, several projection-free methods have been proposed with an
$\mathcal{O}(T^{3/4} \sqrt{\log T})$ regret bound and an $\mathcal{O}(T^{7/8})$
cumulative constraint violation (CCV) bound for general convex losses. In this
paper, we improve this result and further establish \textit{novel} regret and
CCV bounds when loss functions are strongly convex. The primary idea is to
first construct a composite surrogate loss, involving the original loss and
constraint functions, by utilizing the Lyapunov-based technique. Then, we
propose a parameter-free variant of the classical projection-free method,
namely online Frank-Wolfe (OFW), and run this new extension over the
online-generated surrogate loss. Theoretically, for general convex losses, we
achieve an $\mathcal{O}(T^{3/4})$ regret bound and an $\mathcal{O}(T^{3/4} \log
T)$ CCV bound, both of which are order-wise tighter than existing results. For
strongly convex losses, we establish new guarantees of an
$\mathcal{O}(T^{2/3})$ regret bound and an $\mathcal{O}(T^{5/6})$ CCV bound.
Moreover, we also extend our methods to a more challenging setting with bandit
feedback, obtaining similar theoretical findings. Empirically, experiments on
real-world datasets have demonstrated the effectiveness of our methods.
|