| id | title | categories | abstract |
|---|---|---|---|
2502.05428
|
Aero-engines Anomaly Detection using an Unsupervised Fisher Autoencoder
|
eess.SP cs.SY eess.SY
|
Reliable aero-engine anomaly detection is crucial for ensuring aircraft
safety and operational efficiency. This research explores the application of
the Fisher autoencoder as an unsupervised deep learning method for detecting
anomalies in aero-engine multivariate sensor data, using a Gaussian mixture as
the prior distribution of the latent space. The proposed method aims to
minimize the Fisher divergence between the true and the modeled data
distribution in order to train an autoencoder that can capture the normal
patterns of aero-engine behavior. The Fisher divergence is robust to model
uncertainty, meaning it can handle noisy or incomplete data. The Fisher
autoencoder also has well-defined latent space regions, which makes it more
generalizable and regularized for various types of aero-engines as well as
facilitates diagnostic purposes. The proposed approach improves the accuracy of
anomaly detection and reduces false alarms. Simulations using the CMAPSS
dataset demonstrate the model's efficacy in achieving timely anomaly detection,
even in the case of an unbalanced dataset.
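For intuition, the Fisher divergence compares the score functions (gradients of the log-density) of two distributions. The sketch below is a toy Monte Carlo estimate for 1-D Gaussians, not the paper's autoencoder; all names are illustrative.

```python
import numpy as np

def gaussian_score(x, mean, std):
    # Score function (gradient of the log-density) of a 1-D Gaussian.
    return -(x - mean) / std**2

def fisher_divergence(p_mean, p_std, q_mean, q_std, n=100_000, seed=0):
    # Monte Carlo estimate of E_{x~p}[(score_p(x) - score_q(x))^2].
    rng = np.random.default_rng(seed)
    x = rng.normal(p_mean, p_std, size=n)
    diff = gaussian_score(x, p_mean, p_std) - gaussian_score(x, q_mean, q_std)
    return np.mean(diff**2)

print(fisher_divergence(0.0, 1.0, 0.0, 1.0))  # 0.0: identical distributions
print(fisher_divergence(0.0, 1.0, 1.0, 1.0))  # positive: the scores disagree
```

Minimizing such a score-matching objective is what lets the model fit the data distribution without access to normalized densities.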
|
2502.05431
|
APE: Faster and Longer Context-Augmented Generation via Adaptive
Parallel Encoding
|
cs.LG cs.AI
|
Context-augmented generation (CAG) techniques, including RAG and ICL, require
the efficient combination of multiple contexts to generate responses to user
queries. Directly inputting these contexts as a sequence introduces a
considerable computational burden by re-encoding the combined selection of
contexts for every request. To address this, we explore the promising potential
of parallel encoding to independently pre-compute and cache each context's KV
states. This approach enables the direct loading of cached states during
inference while accommodating more contexts through position reuse across
contexts. However, due to misalignments in attention distribution, directly
applying parallel encoding results in a significant performance drop. To enable
effective and efficient CAG, we propose Adaptive Parallel Encoding
($\textbf{APE}$), which brings shared prefix, attention temperature, and
scaling factor to align the distribution of parallel encoding with sequential
encoding. Results on RAG and ICL tasks demonstrate that APE can preserve 98%
and 93% of sequential encoding performance using the same inputs while
outperforming parallel encoding by 3.6% and 7.9%, respectively. It also scales
to many-shot CAG, effectively encoding hundreds of contexts in parallel.
Efficiency evaluation shows that APE can achieve an end-to-end 4.5$\times$
speedup by reducing 28$\times$ prefilling time for a 128K-length context.
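As a toy illustration of the alignment idea (not APE's actual implementation), an attention temperature rescales logits before the softmax so a flattened parallel-encoding distribution can be sharpened toward its sequential counterpart; `scale` here is a hypothetical stand-in for the scaling factor.

```python
import numpy as np

def attention_weights(scores, temperature=1.0, scale=1.0):
    # Temperature-scaled softmax; lowering `temperature` sharpens the
    # distribution. Both knobs are illustrative stand-ins for APE's
    # alignment parameters, not its actual implementation.
    z = scores / temperature
    z = z - z.max()              # subtract max for numerical stability
    w = np.exp(z)
    return scale * w / w.sum()

scores = np.array([2.0, 1.0, 0.5])
print(attention_weights(scores))                   # baseline distribution
print(attention_weights(scores, temperature=0.5))  # sharper: mass shifts to top score
```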
|
2502.05432
|
MoFM: A Large-Scale Human Motion Foundation Model
|
cs.CV cs.LG
|
Foundation Models (FMs) have increasingly drawn the attention of researchers
due to their scalability and generalization across diverse tasks. Inspired by
the success of FMs and the principles that have driven advancements in Large
Language Models (LLMs), we introduce MoFM as a novel Motion Foundation Model.
MoFM is designed for the semantic understanding of complex human motions in
both time and space. To facilitate large-scale training, MotionBook, a
comprehensive human motion dictionary of discretized motions, is designed and
employed. MotionBook utilizes Thermal Cubes to capture spatio-temporal motion
heatmaps, applying principles from discrete variational models to encode human
movements into discrete units for a more efficient and scalable representation.
MoFM, trained on a large corpus of motion data, provides a foundational
backbone adaptable to diverse downstream tasks, supporting paradigms such as
one-shot, unsupervised, and supervised tasks. This versatility makes MoFM
well-suited for a wide range of motion-based applications.
|
2502.05433
|
AdaFlow: Efficient Long Video Editing via Adaptive Attention Slimming
And Keyframe Selection
|
cs.CV
|
Despite great progress, text-driven long video editing is still notoriously
challenging mainly due to excessive memory overhead. Although recent efforts
have simplified this task into a two-step process of keyframe translation and
interpolation generation, the token-wise keyframe translation still plagues the
upper limit of video length. In this paper, we propose a novel and
training-free approach towards efficient and effective long video editing,
termed AdaFlow. We first reveal that not all tokens of video frames hold equal
importance for keyframe translation, based on which we propose an Adaptive
Attention Slimming scheme for AdaFlow to squeeze the $KV$ sequence, thus
increasing the number of keyframes for translations by an order of magnitude.
In addition, an Adaptive Keyframe Selection scheme is also equipped to select
the representative frames for joint editing, further improving generation
quality. With these innovative designs, AdaFlow achieves high-quality editing
of minutes-long videos in a single inference, i.e., more than 1$k$ frames on one
A800 GPU, which is about ten times longer than the compared methods, e.g.,
TokenFlow. To validate AdaFlow, we also build a new benchmark for long video
editing with high-quality annotations, termed LongV-EVAL. Our code is released
at: https://github.com/jidantang55/AdaFlow.
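The slimming idea can be sketched generically: score each cached key against the query and keep only the top-scoring KV pairs. The selection criterion below is a simple stand-in, not AdaFlow's actual token-importance measure.

```python
import numpy as np

def slim_kv(keys, values, query, keep):
    # Keep only the `keep` KV pairs whose keys score highest against the
    # query -- a simplified stand-in for Adaptive Attention Slimming
    # (AdaFlow's actual token-importance criterion may differ).
    scores = keys @ query
    idx = np.argsort(scores)[-keep:]   # indices of the top-`keep` scores
    return keys[idx], values[idx]

rng = np.random.default_rng(0)
K = rng.normal(size=(1000, 64))        # cached keys for 1000 tokens
V = rng.normal(size=(1000, 64))        # cached values
q = rng.normal(size=64)                # current query
K_s, V_s = slim_kv(K, V, q, keep=100)
print(K_s.shape)                       # (100, 64): KV sequence squeezed 10x
```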
|
2502.05434
|
Sample-Efficient Reinforcement Learning from Human Feedback via
Information-Directed Sampling
|
cs.LG
|
We study the problem of reinforcement learning from human feedback (RLHF), a
critical problem in training large language models, from a theoretical
perspective. Our main contribution is the design of novel sample-efficient RLHF
algorithms based on information-directed sampling (IDS), an online
decision-making principle inspired by information theory. Our algorithms
maximize the sum of the value function and a mutual information term that
encourages exploration of the unknown environment by quantifying the
information gained about the environment through observed human feedback data.
To tackle the challenge of large state spaces and improve sample efficiency, we
construct a simplified \emph{surrogate environment} and introduce a novel
distance measure (named the \emph{$\ell_g$-distance}), enabling our IDS-based
algorithm to achieve a Bayesian regret upper bound of order
$O(H^{\frac{3}{2}}\sqrt{\log(K(\epsilon)) T})$, where $H$ is the episode
length, $T$ is the number of episodes, and $K(\epsilon)$ is related to the
covering number of the environment. Specializing to the tabular settings, this
regret bound is of order $\tilde{O}(H^2\sqrt{SAT})$, where $S$ and $A$ are the
numbers of states and actions. Finally, we propose an Approximate-IDS algorithm
that is computationally more efficient while maintaining nearly the same sample
efficiency. The design principle of this approximate algorithm is not only
effective in RLHF settings but also applicable to the standard RL framework.
Moreover, our work showcases the value of information theory in reinforcement
learning and in the training of large language models.
|
2502.05435
|
Unbiased Sliced Wasserstein Kernels for High-Quality Audio Captioning
|
eess.AS cs.AI cs.LG
|
Teacher-forcing training for audio captioning usually leads to exposure bias
due to training and inference mismatch. Prior works propose the contrastive
method to deal with caption degeneration. However, the contrastive method
ignores the temporal information when measuring similarity across acoustic and
linguistic modalities, leading to inferior performance. In this work, we
develop the temporal-similarity score by introducing the unbiased sliced
Wasserstein RBF (USW-RBF) kernel equipped with rotary positional embedding to
account for temporal information across modalities. In contrast to the
conventional sliced Wasserstein RBF kernel, we can form an unbiased estimation
of USW-RBF kernel via Monte Carlo estimation. Therefore, it is well-suited to
stochastic gradient optimization algorithms, and its approximation error
decreases at a parametric rate of $\mathcal{O}(L^{-1/2})$ with $L$ Monte Carlo
samples. Additionally, we introduce an audio captioning framework based on the
unbiased sliced Wasserstein kernel, incorporating stochastic decoding methods
to mitigate caption degeneration during the generation process. We conduct
extensive quantitative and qualitative experiments on two datasets, AudioCaps
and Clotho, to illustrate the capability of generating high-quality audio
captions. Experimental results show that our framework is able to increase
caption length, lexical diversity, and text-to-audio self-retrieval accuracy.
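For context, the conventional (biased) sliced Wasserstein distance underlying the kernel can be estimated with random projections; the paper's unbiased USW-RBF estimator and rotary positional embedding are not reproduced in this sketch.

```python
import numpy as np

def sliced_wasserstein(X, Y, n_proj=256, seed=0):
    # Monte Carlo sliced Wasserstein-2 distance between equal-size point
    # clouds: project onto random unit directions, sort, and average the
    # squared 1-D transport cost over the projections.
    rng = np.random.default_rng(seed)
    theta = rng.normal(size=(n_proj, X.shape[1]))
    theta /= np.linalg.norm(theta, axis=1, keepdims=True)
    px = np.sort(X @ theta.T, axis=0)   # (n, n_proj) sorted projections
    py = np.sort(Y @ theta.T, axis=0)
    return np.mean((px - py) ** 2)

def sw_rbf_kernel(X, Y, gamma=1.0, **kw):
    # RBF kernel built on the sliced Wasserstein distance (a plain SW-RBF
    # sketch, not the unbiased USW-RBF estimator of the paper).
    return np.exp(-gamma * sliced_wasserstein(X, Y, **kw))

X = np.random.default_rng(1).normal(size=(50, 8))
print(sw_rbf_kernel(X, X))   # 1.0: zero distance from a set to itself
```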
|
2502.05437
|
Approximating the total variation distance between spin systems
|
cs.DS cs.LG math.PR
|
Spin systems form an important class of undirected graphical models. For two
Gibbs distributions $\mu$ and $\nu$ induced by two spin systems on the same
graph $G = (V, E)$, we study the problem of approximating the total variation
distance $d_{TV}(\mu,\nu)$ with an $\epsilon$-relative error. We propose a new
reduction that connects the problem of approximating the TV-distance to
sampling and approximate counting. Our applications include the hardcore model
and the antiferromagnetic Ising model in the uniqueness regime, the
ferromagnetic Ising model, and the general Ising model satisfying the spectral
condition.
Additionally, we explore the computational complexity of approximating the
total variation distance $d_{TV}(\mu_S,\nu_S)$ between two marginal
distributions on an arbitrary subset $S \subseteq V$. We prove that this
problem remains hard even when both $\mu$ and $\nu$ admit polynomial-time
sampling and approximate counting algorithms.
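On a small explicit state space the quantity being approximated is elementary; the difficulty the paper addresses arises when the support of the Gibbs distributions is exponentially large and must be accessed through sampling and counting oracles.

```python
import numpy as np

def tv_distance(mu, nu):
    # Total variation distance between two discrete distributions:
    # d_TV(mu, nu) = (1/2) * sum_x |mu(x) - nu(x)|.
    mu, nu = np.asarray(mu, float), np.asarray(nu, float)
    return 0.5 * np.abs(mu - nu).sum()

print(tv_distance([0.5, 0.5], [0.5, 0.5]))  # 0.0: identical distributions
print(tv_distance([1.0, 0.0], [0.0, 1.0]))  # 1.0: disjoint supports
```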
|
2502.05439
|
Agentic AI Systems Applied to tasks in Financial Services: Modeling and
model risk management crews
|
cs.AI cs.CE cs.CL cs.LG
|
The advent of large language models has ushered in a new era of agentic
systems, where artificial intelligence programs exhibit remarkable autonomous
decision-making capabilities across diverse domains. This paper explores
agentic system workflows in the financial services industry. In particular, we
build agentic crews that can effectively collaborate to perform complex
modeling and model risk management (MRM) tasks. The modeling crew consists of a
manager and multiple agents who perform specific tasks such as exploratory data
analysis, feature engineering, model selection, hyperparameter tuning, model
training, model evaluation, and writing documentation. The MRM crew consists of
a manager along with specialized agents who perform tasks such as checking
compliance of modeling documentation, model replication, conceptual soundness,
analysis of outcomes, and writing documentation. We demonstrate the
effectiveness and robustness of modeling and MRM crews by presenting a series
of numerical examples applied to credit card fraud detection, credit card
approval, and portfolio credit risk modeling datasets.
|
2502.05440
|
Non-cooperative Stochastic Target Encirclement by Anti-synchronization
Control via Range-only Measurement
|
cs.RO
|
This paper investigates the stochastic moving target encirclement problem in
a realistic setting. In contrast to typical assumptions in related works, the
target in our work is non-cooperative and capable of escaping the circle
containment by boosting its speed to maximum for a short duration. Considering
the extreme environment, such as GPS denial, weight limit, and lack of ground
guidance, two agents can only rely on their onboard single-modality perception
tools to measure the distances to the target. The distance measurement allows
for creating a position estimator by providing a target position-dependent
variable. Furthermore, the construction of the unique distributed
anti-synchronization controller (DASC) can guarantee that the two agents track
and encircle the target swiftly. The convergence of the estimator and
controller is rigorously evaluated using the Lyapunov technique. A real-world
UAV-based experiment is conducted to illustrate the performance of the proposed
methodology in addition to a MATLAB numerical simulation. Our video
demonstration can be found in the URL https://youtu.be/JXu1gib99yQ.
|
2502.05442
|
The Odyssey of the Fittest: Can Agents Survive and Still Be Good?
|
cs.AI cs.CY cs.HC cs.LG
|
As AI models grow in power and generality, understanding how agents learn and
make decisions in complex environments is critical to promoting ethical
behavior. This paper examines the ethical implications of implementing
biological drives, specifically self-preservation, into three different
agents. A Bayesian agent optimized with NEAT, a Bayesian agent optimized with
stochastic variational inference, and a GPT-4o agent play a simulated,
LLM-generated, text-based adventure game. The agents select actions in each
scenario to survive, adapting to increasingly challenging scenarios.
Post-simulation analysis evaluates the ethical scores of the agents' decisions,
uncovering the
tradeoffs they navigate to survive. Specifically, analysis finds that when
danger increases, agents ignore ethical considerations and opt for unethical
behavior. The agents' collective behavior, trading ethics for survival,
suggests that prioritizing survival increases the risk of unethical behavior.
In the context of AGI, designing agents to prioritize survival may amplify the
likelihood of unethical decision making and unintended emergent behaviors,
raising fundamental questions about goal design in AI safety research.
|
2502.05444
|
Diverse Image Generation with Diffusion Models and Cross Class Label
Learning for Polyp Classification
|
eess.IV cs.CV
|
Pathologic diagnosis is a critical phase in deciding the optimal treatment
procedure for dealing with colorectal cancer (CRC). Colonic polyps, precursors
to CRC, can pathologically be classified into two major types: adenomatous and
hyperplastic. For precise classification and early diagnosis of such polyps,
the medical procedure of colonoscopy has been widely adopted paired with
various imaging techniques, including narrow band imaging and white light
imaging. However, the existing classification techniques mainly rely on a
single imaging modality and show limited performance due to data scarcity.
Recently, generative artificial intelligence has been gaining prominence in
overcoming such issues. Additionally, various generation-controlling mechanisms
using text prompts and images have been introduced to obtain visually appealing
and desired outcomes. However, such mechanisms require class labels to make the
model respond efficiently to the provided control input. In the colonoscopy
domain, such controlling mechanisms are rarely explored; specifically, the text
prompt is a completely uninvestigated area. Moreover, the unavailability of
expensive class-wise labels for diverse sets of images limits such
explorations. Therefore, we develop a novel model, PathoPolyp-Diff, that
generates text-controlled synthetic images with diverse characteristics in
terms of pathology, imaging modalities, and quality. We introduce cross-class
label learning to make the model learn features from other classes, reducing
the burdensome task of data annotation. The experimental results report an
improvement of up to 7.91% in balanced accuracy using a publicly available
dataset. Moreover, cross-class label learning achieves a statistically
significant improvement of up to 18.33% in balanced accuracy during video-level
analysis. The code is available at https://github.com/Vanshali/PathoPolyp-Diff.
|
2502.05445
|
Unsupervised Self-Prior Embedding Neural Representation for Iterative
Sparse-View CT Reconstruction
|
eess.IV cs.CV
|
Emerging unsupervised implicit neural representation (INR) methods, such as
NeRP, NeAT, and SCOPE, have shown great potential to address sparse-view
computed tomography (SVCT) inverse problems. Although these INR-based methods
perform well in relatively dense SVCT reconstructions, they struggle to achieve
comparable performance to supervised methods in sparser SVCT scenarios. They
are prone to being affected by noise, limiting their applicability in real
clinical settings. Additionally, current methods have not fully explored the
use of image domain priors for solving SVCT inverse problems. In this work, we
demonstrate that imperfect reconstruction results can provide effective image
domain priors for INRs to enhance performance. To leverage this, we introduce
Self-prior embedding neural representation (Spener), a novel unsupervised
method for SVCT reconstruction that integrates iterative reconstruction
algorithms. During each iteration, Spener extracts local image prior features
from the previous iteration and embeds them to constrain the solution space.
Experimental results on multiple CT datasets show that our unsupervised Spener
method achieves performance comparable to supervised state-of-the-art (SOTA)
methods on in-domain data while outperforming them on out-of-domain datasets.
Moreover, Spener significantly improves the performance of INR-based methods in
handling SVCT with noisy sinograms. Our code is available at
https://github.com/MeijiTian/Spener.
|
2502.05446
|
Stochastic Forward-Backward Deconvolution: Training Diffusion Models
with Finite Noisy Datasets
|
cs.LG
|
Recent diffusion-based generative models achieve remarkable results by
training on massive datasets, yet this practice raises concerns about
memorization and copyright infringement. A proposed remedy is to train
exclusively on noisy data with potential copyright issues, ensuring the model
never observes original content. However, through the lens of deconvolution
theory, we show that although it is theoretically feasible to learn the data
distribution from noisy samples, the practical challenge of collecting
sufficient samples makes successful learning nearly unattainable. To overcome
this limitation, we propose to pretrain the model with a small fraction of
clean data to guide the deconvolution process. Combined with our Stochastic
Forward--Backward Deconvolution (SFBD) method, we attain an FID of $6.31$ on
CIFAR-10 with just $4\%$ clean images (and $3.58$ with $10\%$). Theoretically,
we prove that SFBD guides the model to learn the true data distribution. The
result also highlights the importance of pretraining on limited but clean data
or the alternative from similar datasets. Empirical studies further support
these findings and offer additional insights.
|
2502.05448
|
Distributionally Robust Model Predictive Control with Mixture of
Gaussian Processes
|
eess.SY cs.SY math.OC
|
Despite the success of Gaussian process based Model Predictive Control (MPC)
in robotic control, its applicability scope is greatly hindered by multimodal
disturbances that are prevalent in real-world settings. Here we propose a novel
Mixture of Gaussian Processes based Distributionally Robust MPC (MoGP-DR-MPC)
framework for linear time invariant systems subject to potentially multimodal
state-dependent disturbances. This framework utilizes MoGP to automatically
determine the number of modes from disturbance data. Using the mean and
variance information provided by each mode-specific predictive distribution, it
constructs a data-driven state-dependent ambiguity set, which allows for
flexible and fine-grained disturbance modeling. Based on this ambiguity set, we
impose Distributionally Robust Conditional Value-at-Risk (DR-CVaR) constraints
to effectively achieve distributional robustness against errors in the
predictive distributions. To address the computational challenge posed by these
constraints in the resulting MPC problem, we equivalently reformulate the
DR-CVaR constraints into tractable second-order cone constraints. Furthermore,
we provide theoretical guarantees on the recursive feasibility and stability of
the proposed framework. The enhanced control performance of MoGP-DR-MPC is
validated through both numerical experiments and simulations on a quadrotor
system, demonstrating notable reductions in closed-loop cost by 17% and 4%
respectively compared against Gaussian process based MPC.
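For reference, the empirical (non-robust) CVaR that DR-CVaR robustifies is simply the mean of the worst alpha-fraction of losses; the sketch below is this plain sample estimate, not the paper's second-order cone reformulation.

```python
import numpy as np

def cvar(losses, alpha=0.1):
    # Empirical Conditional Value-at-Risk: the mean of the worst
    # alpha-fraction of losses (a plain sample estimate, not the
    # distributionally robust reformulation used in the paper).
    losses = np.sort(np.asarray(losses, float))[::-1]  # worst losses first
    k = max(1, int(np.ceil(alpha * len(losses))))
    return losses[:k].mean()

losses = np.array([1.0, 2.0, 3.0, 4.0, 100.0])
print(cvar(losses, alpha=0.2))  # 100.0: the single worst loss
print(cvar(losses, alpha=0.4))  # 52.0: mean of the two worst losses
```

Unlike a hard worst-case bound, CVaR penalizes the tail of the disturbance distribution, which is why it pairs naturally with the mode-specific ambiguity sets described above.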
|
2502.05449
|
Iterative Deepening Sampling for Large Language Models
|
cs.CL cs.AI cs.LG
|
The recent release of OpenAI's o1 models and other similar frameworks
showcasing test-time scaling laws has demonstrated their exceptional capability
to tackle complex reasoning tasks. Inspired by this, subsequent research has
revealed that such test-time scaling laws hinge on the model's ability to
search both within a single response (intra-response) and across multiple
responses (inter-response) during training. Crucially, beyond selecting a
single optimal response, the model must also develop robust self-correction
capabilities within its own outputs. However, training models to achieve
effective self-evaluation and self-correction remains a significant challenge,
heavily dependent on the quality of self-reflection data. In this paper, we
address this challenge by focusing on enhancing the quality of self-reflection
data generation for complex problem-solving, which can subsequently improve the
training of next-generation large language models (LLMs). Specifically, we
explore how manually triggering a model's self-correction mechanisms can
improve performance on challenging reasoning tasks. To this end, we propose a
novel iterative deepening sampling algorithm framework designed to enhance
self-correction and generate higher-quality samples. Through extensive
experiments on Math500 and AIME benchmarks, we demonstrate that our method
achieves a higher success rate on difficult tasks and provide detailed ablation
studies to analyze its effectiveness across diverse settings.
|
2502.05450
|
ConRFT: A Reinforced Fine-tuning Method for VLA Models via Consistency
Policy
|
cs.RO cs.AI
|
Vision-Language-Action (VLA) models have shown substantial potential in
real-world robotic manipulation. However, fine-tuning these models through
supervised learning struggles to achieve robust performance due to limited,
inconsistent demonstrations, especially in contact-rich environments. In this
paper, we propose a reinforced fine-tuning approach for VLA models, named
ConRFT, which consists of offline and online fine-tuning with a unified
consistency-based training objective, to address these challenges. In the
offline stage, our method integrates behavior cloning and Q-learning to
effectively extract a policy from a small set of demonstrations and stabilize
value estimation. In the online stage, the VLA model is further fine-tuned via
a consistency policy, with human interventions to ensure safe exploration and
high sample efficiency. We evaluate our approach on eight diverse real-world
manipulation tasks. It achieves an average success rate of 96.3% within 45-90
minutes of online fine-tuning, outperforming prior supervised methods with a
144% improvement in success rate and 1.9x shorter episode length. This work
highlights the potential of integrating reinforcement learning to enhance the
performance of VLA models for real-world robotic applications.
|
2502.05451
|
Inversion of Magnetic Data using Learned Dictionaries and Scale Space
|
physics.geo-ph cs.CV cs.LG
|
Magnetic data inversion is an important tool in geophysics, used to infer
subsurface magnetic susceptibility distributions from surface magnetic field
measurements. This inverse problem is inherently ill-posed, characterized by
non-unique solutions, depth ambiguity, and sensitivity to noise. Traditional
inversion approaches rely on predefined regularization techniques to stabilize
solutions, limiting their adaptability to complex or diverse geological
scenarios. In this study, we propose an approach that integrates variable
dictionary learning and scale-space methods to address these challenges. Our
method employs learned dictionaries, allowing for adaptive representation of
complex subsurface features that are difficult to capture with predefined
bases. Additionally, we extend classical variational inversion by incorporating
multi-scale representations through a scale-space framework, enabling the
progressive introduction of structural detail while mitigating overfitting. We
implement both fixed and dynamic dictionary learning techniques, with the
latter introducing iteration-dependent dictionaries for enhanced flexibility.
Using a synthetic dataset to simulate geological scenarios, we demonstrate
significant improvements in reconstruction accuracy and robustness compared to
conventional variational and dictionary-based methods. Our results highlight
the potential of learned dictionaries, especially when coupled with scale-space
dynamics, to improve model recovery and noise handling. These findings
underscore the promise of our data-driven approach for advancing magnetic data
inversion and its applications in geophysical exploration, environmental
assessment, and mineral prospecting.
|
2502.05453
|
LLM-Powered Decentralized Generative Agents with Adaptive Hierarchical
Knowledge Graph for Cooperative Planning
|
cs.AI cs.MA
|
Developing intelligent agents for long-term cooperation in dynamic open-world
scenarios is a major challenge in multi-agent systems. Traditional Multi-agent
Reinforcement Learning (MARL) frameworks like centralized training
decentralized execution (CTDE) struggle with scalability and flexibility. They
require centralized long-term planning, which is difficult without custom
reward functions, and face challenges in processing multi-modal data. CTDE
approaches also assume fixed cooperation strategies, making them impractical in
dynamic environments where agents need to adapt and plan independently. To
address decentralized multi-agent cooperation, we propose Decentralized
Adaptive Knowledge Graph Memory and Structured Communication System (DAMCS) in
a novel Multi-agent Crafter environment. Our generative agents, powered by
Large Language Models (LLMs), are more scalable than traditional MARL agents by
leveraging external knowledge and language for long-term planning and
reasoning. Instead of fully sharing information from all past experiences,
DAMCS introduces a multi-modal memory system organized as a hierarchical
knowledge graph and a structured communication protocol to optimize agent
cooperation. This allows agents to reason from past interactions and share
relevant information efficiently. Experiments on novel multi-agent open-world
tasks show that DAMCS outperforms both MARL and LLM baselines in task
efficiency and collaboration. Compared to single-agent scenarios, the two-agent
scenario achieves the same goal with 63% fewer steps, and the six-agent
scenario with 74% fewer steps, highlighting the importance of adaptive memory
and structured communication in achieving long-term goals. We publicly release
our project at: https://happyeureka.github.io/damcs.
|
2502.05454
|
Temporal Representation Alignment: Successor Features Enable Emergent
Compositionality in Robot Instruction Following
|
cs.RO cs.LG
|
Effective task representations should facilitate compositionality, such that
after learning a variety of basic tasks, an agent can perform compound tasks
consisting of multiple steps simply by composing the representations of the
constituent steps together. While this is conceptually simple and appealing, it
is not clear how to automatically learn representations that enable this sort
of compositionality. We show that learning to associate the representations of
current and future states with a temporal alignment loss can improve
compositional generalization, even in the absence of any explicit subtask
planning or reinforcement learning. We evaluate our approach across diverse
robotic manipulation tasks as well as in simulation, showing substantial
improvements for tasks specified with either language or goal images.
|
2502.05457
|
Content-based Video Retrieval in Traffic Videos using Latent Dirichlet
Allocation Topic Model
|
cs.CV
|
Content-based video retrieval is one of the most challenging tasks in
surveillance systems. In this study, Latent Dirichlet Allocation (LDA) topic
model is used to annotate surveillance videos in an unsupervised manner. In
scene understanding methods, some of the learned patterns are ambiguous and
represent a mixture of atomic actions. To address this ambiguity, the
proposed method processes the feature vectors and the primary model to obtain
a secondary model that describes the scene with primitive patterns that lack
any ambiguity. Experiments show performance improvement in the retrieval task
compared to other topic model-based methods. In terms of false positive and
true positive responses, the proposed method achieves at least 80\% and 124\%
improvement respectively. Four search strategies are proposed, and users can
define and search for a variety of activities using the proposed query
formulation which is based on topic models. In addition, the lightweight
database in our method requires much less storage, which in turn speeds up
search procedure compared to the methods which are based on low-level features.
|
2502.05458
|
Block Graph Neural Networks for tumor heterogeneity prediction
|
cs.CV cs.LG stat.ML
|
Accurate tumor classification is essential for selecting effective
treatments, but current methods have limitations. Standard tumor grading, which
categorizes tumors based on cell differentiation, is not recommended as a
stand-alone procedure, as some well-differentiated tumors can be malignant.
Tumor heterogeneity assessment via single-cell sequencing offers profound
insights but can be costly and may still require significant manual
intervention. Many existing statistical machine learning methods for tumor data
still require complex pre-processing of MRI and histopathological data.
In this paper, we propose to build on a mathematical model that simulates
tumor evolution (Ożański, 2017) and generate artificial datasets for
tumor classification. Tumor heterogeneity is estimated using normalized
entropy, with a threshold to classify tumors as having high or low
heterogeneity. Our contributions are threefold: (1) the cut and graph
generation processes from the artificial data, (2) the design of tumor
features, and (3) the construction of Block Graph Neural Networks (BGNN), a
Graph Neural Network-based approach to predict tumor heterogeneity. The
experimental results reveal that the combination of the proposed features and
models yields excellent results on artificially generated data ($89.67\%$
accuracy on the test data). In particular, in alignment with the emerging
trends in AI-assisted grading and spatial transcriptomics, our results suggest
that enriching traditional grading methods with birth (e.g., Ki-67
proliferation index) and death markers can improve heterogeneity prediction and
enhance tumor classification.
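The normalized-entropy heterogeneity measure mentioned above is easy to state concretely; the threshold value in `classify` is hypothetical, chosen only to illustrate the high/low split.

```python
import numpy as np

def normalized_entropy(counts):
    # Shannon entropy of a composition divided by log(K), so the score
    # lies in [0, 1]; K is the number of categories (e.g. cell clones).
    p = np.asarray(counts, float)
    p = p / p.sum()
    p = p[p > 0]                          # treat 0 * log(0) as 0
    return -np.sum(p * np.log(p)) / np.log(len(counts))

def classify(counts, threshold=0.5):
    # Hypothetical high/low split in the spirit of the paper's rule.
    return "high" if normalized_entropy(counts) > threshold else "low"

print(classify([25, 25, 25, 25]))  # high: maximally heterogeneous
print(classify([97, 1, 1, 1]))     # low: one dominant clone
```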
|
2502.05459
|
DCENWCNet: A Deep CNN Ensemble Network for White Blood Cell
Classification with LIME-Based Explainability
|
cs.CV cs.AI q-bio.CB stat.ML
|
White blood cells (WBC) are important parts of our immune system, and they
protect our body against infections by eliminating viruses, bacteria, parasites
and fungi. The number of WBC types and the total number of WBCs provide
important information about our health status. Convolutional neural networks
(CNNs), a widely used deep learning architecture, can classify blood cells
from an image region and perform object recognition. Various
CNN models exhibit potential; however, their development often involves ad-hoc
processes that retain unnecessary layers, leading to issues with imbalanced
datasets and insufficient data augmentation. To address these challenges, we
propose a novel ensemble approach that integrates three CNN architectures, each
uniquely configured with different dropout and max-pooling layer settings to
enhance feature learning. This ensemble model, named DCENWCNet, effectively
balances the bias-variance trade-off. When evaluated on the widely recognized
Raabin-WBC dataset, our model outperforms existing state-of-the-art networks,
achieving the highest mean accuracy. Additionally, it demonstrates superior
performance in precision, recall, F1-score, and Area Under the ROC Curve (AUC)
across all categories. To delve deeper into the interpretability of
classifiers, we employ reliable post-hoc explanation techniques, including
Local Interpretable Model-Agnostic Explanations (LIME). These methods
approximate the behavior of a black-box model by elucidating the relationships
between feature values and predictions. Interpretable results enable users to
comprehend and validate the model's predictions, thereby increasing their
confidence in the automated diagnosis.
|
2502.05462
|
Motion Planning of Nonholonomic Cooperative Mobile Manipulators
|
cs.RO cs.MA cs.SY eess.SY math.OC
|
We propose a real-time implementable motion planning technique for
cooperative object transportation by nonholonomic mobile manipulator robots
(MMRs) in an environment with static and dynamic obstacles. The proposed motion
planning technique works in two steps. A novel visibility vertices-based path
planning algorithm computes a global piece-wise linear path between the start
and the goal location in the presence of static obstacles offline. It defines
the static obstacle free space around the path with a set of convex polygons
for the online motion planner. We employ a Nonlinear Model Predictive Control
(NMPC) based online motion planning technique for nonholonomic MMRs that
jointly plans for the mobile base and the manipulator's arm. It efficiently
utilizes the locomotion capability of the mobile base and the manipulation
capability of the arm. The motion planner plans feasible motion for the MMRs
and generates trajectories for object transportation considering the kinodynamic
constraints and the static and dynamic obstacles. The efficiency of our
approach is validated by numerical simulation and hardware experiments in
varied environments.
|
2502.05463
|
Learning Memory and Material Dependent Constitutive Laws
|
math.NA cs.LG cs.NA
|
The theory of homogenization provides a systematic approach to the derivation
of macroscale constitutive laws, obviating the need to repeatedly resolve
complex microstructure. However, the unit cell problem that defines the
constitutive model is typically not amenable to explicit evaluation. It is
therefore of interest to learn constitutive models from data generated by the
unit cell problem. Many viscoelastic and elastoviscoplastic materials are
characterized by memory-dependent constitutive laws. In order to amortize the
computational investment in finding such memory-dependent constitutive laws, it
is desirable to learn their dependence on the material microstructure. While
prior work has addressed learning memory dependence and material dependence
separately, their joint learning has not been considered. This paper focuses on
the joint learning problem and proposes a novel neural operator framework to
address it.
In order to provide firm foundations, the homogenization problem for linear
Kelvin-Voigt viscoelastic materials is studied. The theoretical properties of
the cell problem in this Kelvin-Voigt setting are used to motivate the proposed
general neural operator framework; these theoretical properties are also used
to prove a universal approximation theorem for the learned macroscale
constitutive model. This formulation of learnable constitutive models is then
deployed beyond the Kelvin-Voigt setting. Numerical experiments are presented
showing that the resulting data-driven methodology accurately learns history-
and microstructure-dependent linear viscoelastic and nonlinear
elastoviscoplastic constitutive models, and numerical results also demonstrate
that the resulting constitutive models can be deployed in macroscale simulation
of material deformation.
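For background, the linear Kelvin-Voigt model referenced above combines elastic and viscous responses in parallel; in the scalar case the standard constitutive law (textbook form, not the paper's homogenized macroscale law) reads:

```latex
\sigma(t) = E\,\varepsilon(t) + \eta\,\dot{\varepsilon}(t)
```

where $E$ is the elastic modulus and $\eta$ the viscosity; homogenization replaces this local law with an effective, memory-dependent macroscale law, which is what the proposed neural operator learns.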
|
2502.05464
|
Prescribed-Time Newton Extremum Seeking using Delays and Time-Periodic
Gains
|
eess.SY cs.SY math.OC
|
We study prescribed-time extremum seeking (ES) for scalar maps in the
presence of time delay. The problem has been solved by Yilmaz and Krstic using
chirpy probing and time-varying singular gains. To alleviate the gain
singularity, we present an alternative approach, employing delays with bounded
time-periodic gains, for achieving prescribed-time convergence to the extremum.
Our results are not extensions or refinements but a new methodological
direction, even in the absence of the delay on the map. The main result we
present compensates the map's delay and uses perturbation-based and the Newton
(rather than gradient) approaches. The simultaneous presence of perturbation
period, and two delays -- a map delay and a seeking feedback delay -- whose
values are different (feedback delay must be longer than map delay), makes for
an intricate situation in the design and analysis.
ES can settle arbitrarily soon after four times the map delay. In the absence
of a map delay, the settling time is arbitrarily short, with feedback delay
chosen as one quarter of the prescribed settling time, i.e., the search settles
after four times any positive feedback delay. In addition to removing the gain
singularity of the Yilmaz-Krstic singular-gain prescribed-time ES, we go beyond
that method's limitation to operating only up to the terminal time. With the
help of averaging theorems in infinite dimension, we conduct a prescribed-time
convergence analysis on a suitable perturbation-averaged \textit{target} ES
system, which contains the time-periodic gains of the map and feedback delays.
Since the notion of ``dead-beat'' Lyapunov stabilization by time-periodic
delayed feedback originates from Hale and Verduyn-Lunel (analysis, 1993) and
Karafyllis (feedback design, 2006), we refer to our approach to prescribed-time
ES as the ``Karafyllis, Hale, Verduyn-Lunel'' (KHV) PT-ES approach.
|
2502.05467
|
Position: LLMs Can be Good Tutors in Foreign Language Education
|
cs.CL cs.AI
|
While recent efforts have begun integrating large language models (LLMs) into
foreign language education (FLE), they often rely on traditional approaches to
learning tasks without fully embracing educational methodologies, thus lacking
adaptability to language learning. To address this gap, we argue that LLMs have
the potential to serve as effective tutors in FLE. Specifically, LLMs can play
three critical roles: (1) as data enhancers, improving the creation of learning
materials or serving as student simulations; (2) as task predictors, supporting
learner assessment or optimizing learning pathways; and (3) as agents, enabling
personalized and inclusive education. We encourage interdisciplinary research
to explore these roles, fostering innovation while addressing challenges and
risks, ultimately advancing FLE through the thoughtful integration of LLMs.
|
2502.05468
|
Gen-DFL: Decision-Focused Generative Learning for Robust Decision Making
|
cs.LG
|
Decision-focused learning (DFL) integrates predictive models with downstream
optimization, directly training machine learning models to minimize decision
errors. While DFL has been shown to provide substantial advantages when
compared to a counterpart that treats the predictive and prescriptive models
separately, it has also been shown to struggle in high-dimensional and
risk-sensitive settings, limiting its applicability in real-world settings. To
address this limitation, this paper introduces decision-focused generative
learning (Gen-DFL), a novel framework that leverages generative models to
adaptively model uncertainty and improve decision quality. Instead of relying
on fixed uncertainty sets, Gen-DFL learns a structured representation of the
optimization parameters and samples from the tail regions of the learned
distribution to enhance robustness against worst-case scenarios. This approach
mitigates over-conservatism while capturing complex dependencies in the
parameter space. The paper shows, theoretically, that Gen-DFL achieves improved
worst-case performance bounds compared to traditional DFL. Empirically, it
evaluates Gen-DFL on various scheduling and logistics problems, demonstrating
its strong performance against existing DFL methods.
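Gen-DFL's idea of sampling from the tail regions of a learned distribution to harden decisions can be illustrated with a crude CVaR-style sketch. The Gaussian stand-in sampler, the cost function, and the candidate grid below are all hypothetical; the paper works with learned generative models inside an optimization layer, not this enumeration:

```python
import random

def tail_samples(sampler, cost, decision, n=1000, alpha=0.1, seed=0):
    """Draw n parameter samples from a (stand-in) generative model and
    keep the worst alpha-fraction of them under a candidate decision."""
    rng = random.Random(seed)
    samples = [sampler(rng) for _ in range(n)]
    samples.sort(key=lambda s: cost(decision, s), reverse=True)
    return samples[: max(1, int(alpha * n))]

def robust_decision(candidates, sampler, cost, alpha=0.1):
    """Pick the candidate with the best average cost over its own tail,
    a rough surrogate for optimizing against worst-case scenarios."""
    def tail_cost(x):
        tail = tail_samples(sampler, cost, x, alpha=alpha)
        return sum(cost(x, s) for s in tail) / len(tail)
    return min(candidates, key=tail_cost)
```

In a newsvendor-style toy problem (underage five times costlier than overage, demand roughly N(10, 2)), the tail-averaged choice lands above the mean demand, reflecting the robustness the framework targets.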
|
2502.05469
|
Data-Driven Distributionally Robust Mixed-Integer Control through Lifted
Control Policy
|
math.OC cs.SY eess.SY
|
This paper investigates the finite-horizon distributionally robust
mixed-integer control (DRMIC) of uncertain linear systems. However, deriving an
optimal causal feedback control policy for this DRMIC problem is computationally
formidable for most ambiguity sets. To address the computational challenge, we
propose a novel distributionally robust lifted control policy (DR-LCP) method
to derive a high-quality approximate solution to this DRMIC problem for a rich
class of Wasserstein metric-based ambiguity sets, including the Wasserstein
ambiguity set and its variants. In theory, we analyze the asymptotic
performance and establish a tight non-asymptotic bound of the proposed method.
In numerical experiments, the proposed DR-LCP method empirically demonstrates
superior performance compared with existing methods in the literature.
|
2502.05472
|
Robust Deep Signed Graph Clustering via Weak Balance Theory
|
cs.SI
|
Signed graph clustering is a critical technique for discovering community
structures in graphs that exhibit both positive and negative relationships. We
have identified two significant challenges in this domain: i) existing signed
spectral methods are highly vulnerable to noise, which is prevalent in
real-world scenarios; ii) the guiding principle ``an enemy of my enemy is my
friend'', rooted in \textit{Social Balance Theory}, often narrows or disrupts
cluster boundaries in mainstream signed graph neural networks. Addressing these
challenges, we propose the \underline{D}eep \underline{S}igned
\underline{G}raph \underline{C}lustering framework (DSGC), which leverages
\textit{Weak Balance Theory} to enhance preprocessing and encoding for robust
representation learning. First, DSGC introduces Violation Sign-Refine to
denoise the signed network by correcting noisy edges with high-order neighbor
information. Subsequently, Density-based Augmentation enhances semantic
structures by adding positive edges within clusters and negative edges across
clusters, following \textit{Weak Balance} principles. The framework then
utilizes \textit{Weak Balance} principles to develop clustering-oriented signed
neural networks to broaden cluster boundaries by emphasizing distinctions
between negatively linked nodes. Finally, DSGC optimizes clustering assignments
by minimizing a regularized clustering loss. Comprehensive experiments on
synthetic and real-world datasets demonstrate DSGC consistently outperforms all
baselines, establishing a new benchmark in signed graph clustering.
|
2502.05473
|
LMS-Net: A Learned Mumford-Shah Network For Few-Shot Medical Image
Segmentation
|
cs.CV
|
Few-shot semantic segmentation (FSS) methods have shown great promise in
handling data-scarce scenarios, particularly in medical image segmentation
tasks. However, most existing FSS architectures lack sufficient
interpretability and fail to fully incorporate the underlying physical
structures of semantic regions. To address these issues, in this paper, we
propose a novel deep unfolding network, called the Learned Mumford-Shah Network
(LMS-Net), for the FSS task. Specifically, motivated by the effectiveness of
pixel-to-prototype comparison in prototypical FSS methods and the capability of
deep priors to model complex spatial structures, we leverage our learned
Mumford-Shah model (LMS model) as a mathematical foundation to integrate these
insights into a unified framework. By reformulating the LMS model into
prototype update and mask update tasks, we propose an alternating optimization
algorithm to solve it efficiently. Further, the iterative steps of this
algorithm are unfolded into corresponding network modules, resulting in LMS-Net
with clear interpretability. Comprehensive experiments on three publicly
available medical segmentation datasets verify the effectiveness of our method,
demonstrating superior accuracy and robustness in handling complex structures
and adapting to challenging segmentation scenarios. These results highlight the
potential of LMS-Net to advance FSS in medical imaging applications. Our code
will be available at: https://github.com/SDZhang01/LMSNet
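The alternating optimization the abstract describes (a mask update followed by a prototype update) has the flavor of a prototype-based clustering loop. The sketch below is a deliberately minimal 1-D, two-class analogue; the paper's actual updates derive from the learned Mumford-Shah energy and involve learned deep priors, which are omitted here:

```python
def alternate_segment(features, n_iters=10):
    """Toy alternating optimization in the spirit of prototype/mask
    updates: (1) the mask update assigns each pixel feature to its
    nearest prototype, (2) the prototype update recomputes each
    prototype as the mean of its assigned features."""
    protos = [min(features), max(features)]
    mask = [0] * len(features)
    for _ in range(n_iters):
        # mask update: nearest-prototype assignment
        mask = [0 if abs(f - protos[0]) <= abs(f - protos[1]) else 1
                for f in features]
        # prototype update: per-class means
        for c in (0, 1):
            assigned = [f for f, m in zip(features, mask) if m == c]
            if assigned:
                protos[c] = sum(assigned) / len(assigned)
    return mask, protos
```

Unfolding each iteration of such a loop into a network module, as the abstract describes, is what gives the resulting architecture its interpretability.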
|
2502.05475
|
You Are What You Eat -- AI Alignment Requires Understanding How Data
Shapes Structure and Generalisation
|
cs.LG
|
In this position paper, we argue that understanding the relation between
structure in the data distribution and structure in trained models is central
to AI alignment. First, we discuss how two neural networks can have equivalent
performance on the training set but compute their outputs in essentially
different ways and thus generalise differently. For this reason, standard
testing and evaluation are insufficient for obtaining assurances of safety for
widely deployed generally intelligent systems. We argue that to progress beyond
evaluation to a robust mathematical science of AI alignment, we need to develop
statistical foundations for an understanding of the relation between structure
in the data distribution, internal structure in models, and how these
structures underlie generalisation.
|
2502.05476
|
Convolutional Neural Network Segmentation for Satellite Imagery Data to
Identify Landforms Using U-Net Architecture
|
cs.CV
|
This study demonstrates a novel use of the U-Net architecture in the field of
semantic segmentation to detect landforms using preprocessed satellite imagery.
The study applies the U-Net model for effective feature extraction by using
Convolutional Neural Network (CNN) segmentation techniques. Dropout is
strategically used for regularization to improve the model's robustness, and
the Adam optimizer is used for effective training. The study thoroughly
assesses the performance of the U-Net architecture utilizing a large sample of
preprocessed satellite topographical images. The model excels in semantic
segmentation tasks, displaying high-resolution outputs, quick feature
extraction, and flexibility to a wide range of applications. The findings
highlight the U-Net architecture's substantial contribution to the advancement
of machine learning and image processing technologies. The U-Net approach,
which emphasizes pixel-wise categorization and comprehensive segmentation map
production, is helpful in practical applications such as autonomous driving,
disaster management, and land use planning. This study not only investigates
the complexities of U-Net architecture for semantic segmentation, but also
highlights its real-world applications in image classification, analysis, and
landform identification. The study demonstrates the U-Net model's key role in
shaping the landscape of modern technology.
|
2502.05478
|
OntoTune: Ontology-Driven Self-training for Aligning Large Language
Models
|
cs.CL
|
Existing domain-specific Large Language Models (LLMs) are typically developed
by fine-tuning general-purpose LLMs with large-scale domain-specific corpora.
However, training on large-scale corpora often fails to effectively organize
domain knowledge of LLMs, leading to fragmented understanding. Inspired by how
humans connect concepts and organize knowledge through mind maps, we aim to
emulate this approach by using an ontology with hierarchical conceptual
knowledge to reorganize the LLM's domain knowledge. From this perspective, we propose an
ontology-driven self-training framework called OntoTune, which aims to align
LLMs with ontology through in-context learning, enabling the generation of
responses guided by the ontology. We leverage in-context learning to identify
whether the LLM has acquired the specific concept's ontology knowledge, and
select the entries not yet mastered by the LLM as the training set to further align
the LLM with ontology. Compared to existing domain LLMs based on newly
collected large-scale domain-specific corpora, our OntoTune, which relies on
the existing, long-established ontology and the LLM itself, significantly
reduces data maintenance costs and offers improved generalization ability. We
conduct our study in the medical domain to evaluate the effectiveness of
OntoTune, utilizing a standardized medical ontology, SNOMED CT as our ontology
source. Experimental results demonstrate that OntoTune achieves
state-of-the-art performance in both in-ontology task hypernym discovery and
out-of-ontology task medical domain QA. Moreover, compared to the latest direct
ontology injection method TaxoLLaMA, our OntoTune better preserves original
knowledge of LLM. The code and data are available at
https://github.com/zjukg/OntoTune.
|
2502.05479
|
Model Validity in Observers: When to Increase the Complexity of Your
Model?
|
cs.RO
|
Model validity is key to the accurate and safe behavior of autonomous
vehicles. Using invalid vehicle models in vehicle planning and control
frameworks puts the stability of the vehicle, and thus its safety, at stake. In
this work, we analyze the validity of several popular vehicle models
used in the literature with respect to a real vehicle and we prove that serious
accuracy issues are encountered beyond a specific lateral acceleration point.
We set a clear lateral acceleration domain in which the used models are an
accurate representation of the behavior of the vehicle. We then target the
necessity of using learned methods to model the vehicle's behavior. The effects
of model validity on state observers are investigated. The performance of
model-based observers is compared to learning-based ones. Overall, the
presented work emphasizes the validity of vehicle models and presents clear
operational domains in which models could be used safely.
|
2502.05482
|
Robustifying Fourier Features Embeddings for Implicit Neural
Representations
|
cs.CV
|
Implicit Neural Representations (INRs) employ neural networks to represent
continuous functions by mapping coordinates to the corresponding values of the
target function, with applications in, e.g., inverse graphics. However, INRs face a
challenge known as spectral bias when dealing with scenes containing varying
frequencies. To overcome spectral bias, the most common approach is Fourier
features-based methods such as positional encoding. However, Fourier
features-based methods introduce noise into the output, which degrades their
performance when applied to downstream tasks. In response, this paper
initially hypothesizes that combining multi-layer perceptrons (MLPs) with
Fourier feature embeddings mutually enhances their strengths, yet
simultaneously introduces limitations inherent in Fourier feature embeddings.
By presenting a simple theorem, we validate our hypothesis, which serves as a
foundation for the design of our solution. Leveraging these insights, we
propose the use of multi-layer perceptrons (MLPs) without additive
|
2502.05485
|
HAMSTER: Hierarchical Action Models For Open-World Robot Manipulation
|
cs.RO cs.AI cs.CV
|
Large foundation models have shown strong open-world generalization to
complex problems in vision and language, but similar levels of generalization
have yet to be achieved in robotics. One fundamental challenge is the lack of
robotic data, which are typically obtained through expensive on-robot
operation. A promising remedy is to leverage cheaper, off-domain data such as
action-free videos, hand-drawn sketches or simulation data. In this work, we
posit that hierarchical vision-language-action (VLA) models can be more
effective in utilizing off-domain data than standard monolithic VLA models that
directly finetune vision-language models (VLMs) to predict actions. In
particular, we study a class of hierarchical VLA models, where the high-level
VLM is finetuned to produce a coarse 2D path indicating the desired robot
end-effector trajectory given an RGB image and a task description. The
intermediate 2D path prediction then serves as guidance to the low-level,
3D-aware control policy capable of precise manipulation. Doing so alleviates
the high-level VLM from fine-grained action prediction, while reducing the
low-level policy's burden on complex task-level reasoning. We show that, with
the hierarchical design, the high-level VLM can transfer across significant
domain gaps between the off-domain finetuning data and real-robot testing
scenarios, including differences in embodiment, dynamics, visual appearance,
and task semantics. In the real-robot experiments, we observe an average
of 20% improvement in success rate across seven different axes of
generalization over OpenVLA, representing a 50% relative gain. Visual results
are provided at: https://hamster-robot.github.io/
|
2502.05487
|
Modeling of Core Loss Based on Machine Learning and Deep Learning
|
cs.LG eess.SP
|
This article proposes a Mix Neural Network (MNN) based on CNN-FCNN for
predicting magnetic loss of different materials. In traditional magnetic core
loss models, empirical equations usually need to be regressed under the same
external conditions. Each magnetic core material must be treated as a separate
case, and as external factors multiply, so does the number of models required,
making the modeling process extremely cumbersome. Traditional empirical
equations also suffer from low accuracy; although various correction equations
have since been introduced, the accuracy has remained unsatisfactory. By
introducing machine
learning and deep learning, it is possible to simultaneously solve prediction
problems with low accuracy of empirical equations and complex conditions. Based
on the MagNet database, through the training of the newly proposed MNN, it is
found that a single model is sufficient to make predictions for at least four
different materials under varying temperatures, frequencies, and waveforms,
with accuracy far exceeding that of traditional models. At the same time, we
also used three other machine learning and deep learning models (Random Forest,
XGBoost, MLP-LSTM) for training, all of which had much higher accuracy than
traditional models. Building on these predictions, a hybrid model combining
MNN and XGBoost via weighted averaging was proposed, further improving the
accuracy. This provides a solution for
modeling magnetic core loss under different materials and operating modes.
|
2502.05489
|
Mechanistic Interpretability of Emotion Inference in Large Language
Models
|
cs.CL cs.AI
|
Large language models (LLMs) show promising capabilities in predicting human
emotions from text. However, the mechanisms through which these models process
emotional stimuli remain largely unexplored. Our study addresses this gap by
investigating how autoregressive LLMs infer emotions, showing that emotion
representations are functionally localized to specific regions in the model.
Our evaluation includes diverse model families and sizes and is supported by
robustness checks. We then show that the identified representations are
psychologically plausible by drawing on cognitive appraisal theory, a
well-established psychological framework positing that emotions emerge from
evaluations (appraisals) of environmental stimuli. By causally intervening on
construed appraisal concepts, we steer the generation and show that the outputs
align with theoretical and intuitive expectations. This work highlights a novel
way to causally intervene and precisely shape emotional text generation,
potentially benefiting safety and alignment in sensitive affective domains.
|
2502.05491
|
Lie-algebra Adaptive Tracking Control for Rigid Body Dynamics
|
cs.RO cs.SY eess.SY
|
Adaptive tracking control for rigid body dynamics is of critical importance
in control and robotics, particularly for addressing uncertainties or
variations in system model parameters. However, most existing adaptive control
methods are designed for systems with states in vector spaces, often neglecting
the manifold constraints inherent to robotic systems. In this work, we propose
a novel Lie-algebra-based adaptive control method that leverages the intrinsic
relationship between the special Euclidean group and its associated Lie
algebra. By transforming the state space from the group manifold to a vector
space, we derive a linear error dynamics model that decouples model parameters
from the system state. This formulation enables the development of an adaptive
optimal control method that is both geometrically consistent and
computationally efficient. Extensive simulations demonstrate the effectiveness
and efficiency of the proposed method. We have made our source code publicly
available to the community to support further research and collaboration.
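The core move in the abstract above is transforming states from the group manifold to a vector space via the Lie algebra. As an illustration only, the sketch below implements that transformation for the rotation part, SO(3), using the hat map, the Rodrigues exponential, and its log inverse (the paper works on the full special Euclidean group, which this sketch does not cover):

```python
import math

def hat(w):
    """so(3) hat map: 3-vector -> skew-symmetric 3x3 matrix."""
    x, y, z = w
    return [[0.0, -z, y], [z, 0.0, -x], [-y, x, 0.0]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def exp_so3(w):
    """Rodrigues formula: Lie-algebra vector -> rotation matrix."""
    th = math.sqrt(sum(c * c for c in w))
    I = [[1.0 if i == j else 0.0 for j in range(3)] for i in range(3)]
    if th < 1e-12:
        return I
    K = hat([c / th for c in w])
    K2 = matmul(K, K)
    return [[I[i][j] + math.sin(th) * K[i][j] + (1 - math.cos(th)) * K2[i][j]
             for j in range(3)] for i in range(3)]

def log_so3(R):
    """Inverse map: rotation matrix -> 3-vector in the Lie algebra
    (valid away from the angle-pi singularity)."""
    tr = R[0][0] + R[1][1] + R[2][2]
    th = math.acos(max(-1.0, min(1.0, (tr - 1.0) / 2.0)))
    if th < 1e-12:
        return [0.0, 0.0, 0.0]
    s = th / (2.0 * math.sin(th))
    return [s * (R[2][1] - R[1][2]), s * (R[0][2] - R[2][0]),
            s * (R[1][0] - R[0][1])]
```

Working in the vector-space image of `log_so3` is what lets the error dynamics become linear and the model parameters decouple from the state, as the abstract describes.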
|
2502.05494
|
Multi-scale Masked Autoencoder for Electrocardiogram Anomaly Detection
|
cs.LG cs.AI stat.AP
|
Electrocardiogram (ECG) analysis is a fundamental tool for diagnosing
cardiovascular conditions, yet anomaly detection in ECG signals remains
challenging due to their inherent complexity and variability. We propose
Multi-scale Masked Autoencoder for ECG anomaly detection (MMAE-ECG), a novel
end-to-end framework that effectively captures both global and local
dependencies in ECG data. Unlike state-of-the-art methods that rely on
heartbeat segmentation or R-peak detection, MMAE-ECG eliminates the need for
such pre-processing steps, enhancing its suitability for clinical deployment.
MMAE-ECG partitions ECG signals into non-overlapping segments, with each
segment assigned learnable positional embeddings. A novel multi-scale masking
strategy and multi-scale attention mechanism, along with distinct positional
embeddings, enable a lightweight Transformer encoder to effectively capture
both local and global dependencies. The masked segments are then reconstructed
using a single-layer Transformer block, with an aggregation strategy employed
during inference to refine the outputs. Experimental results demonstrate that
our method achieves performance comparable to state-of-the-art approaches while
significantly reducing computational complexity, requiring approximately 1/78 of the
floating-point operations (FLOPs) required for inference. Ablation studies
further validate the effectiveness of each component, highlighting the
potential of multi-scale masked autoencoders for anomaly detection.
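The front end of the pipeline described above (non-overlapping segmentation plus multi-scale masking) can be sketched simply. The masking ratios below are illustrative placeholders, not the paper's settings, and the learnable positional embeddings and Transformer encoder are omitted:

```python
import random

def partition(signal, seg_len):
    """Split a 1-D ECG signal into non-overlapping segments,
    dropping any trailing remainder."""
    n = len(signal) // seg_len
    return [signal[i * seg_len:(i + 1) * seg_len] for i in range(n)]

def multiscale_mask(n_segments, ratios=(0.25, 0.5), seed=0):
    """Toy multi-scale masking: one boolean mask per scale, each hiding
    a different fraction of the segments."""
    rng = random.Random(seed)
    masks = []
    for r in ratios:
        k = int(round(r * n_segments))
        hidden = set(rng.sample(range(n_segments), k))
        masks.append([i in hidden for i in range(n_segments)])
    return masks
```

An autoencoder trained to reconstruct the hidden segments at several scales is then scored at inference time by reconstruction error, with large errors flagging anomalies.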
|
2502.05496
|
Feature Explosion: a generic optimization strategy for outlier detection
algorithms
|
cs.LG
|
Outlier detection tasks aim at discovering potential issues or opportunities
and are widely used in cybersecurity, financial security, industrial
inspection, etc. To date, thousands of outlier detection algorithms have been
proposed. Clearly, in real-world scenarios, such a large number of algorithms
is unnecessary. In other words, a large number of outlier detection algorithms
are redundant. We believe the root cause of this redundancy lies in the current
highly customized (i.e., non-generic) optimization strategies. Specifically,
when researchers seek to improve the performance of existing outlier detection
algorithms, they have to design separate optimized versions tailored to the
principles of each algorithm, leading to an ever-growing number of outlier
detection algorithms. To address this issue, in this paper, we introduce the
explosion from physics into the outlier detection task and propose a generic
optimization strategy based on feature explosion, called OSD (Optimization
Strategy for outlier Detection algorithms). In the future, when improving the
performance of existing outlier detection algorithms, it will be sufficient to
invoke the OSD plugin without the need to design customized optimized versions
for them. We compared the performances of 14 outlier detection algorithms on 24
datasets before and after invoking the OSD plugin. The experimental results
show that the performance of all outlier detection algorithms improves on
almost all datasets. On average, OSD improves these algorithms by 15% in AUC
and 63.7% in AP.
|
2502.05497
|
DeepThink: Aligning Language Models with Domain-Specific User Intents
|
cs.CL
|
Supervised fine-tuning with synthesized instructions has been a common
practice for adapting LLMs to domain-specific QA tasks. However, the
synthesized instructions deviate from real user questions and expected answers.
This study proposes a novel framework called DeepThink to generate high-quality
instructions. DeepThink first generates a few seed questions to mimic actual
user questions, simulates conversations to uncover hidden user needs, and
refines the answers using conversational contexts and retrieved documents to
make them more comprehensive. Experiments demonstrate that DeepThink achieves an
average performance improvement of 7.92% compared to a GPT-4-turbo+RAG-based
assistant on the real user test set in the advertising domain across dimensions
such as relevance, completeness, clarity, accuracy, and actionability.
|
2502.05498
|
Riemannian Manifold Learning for Stackelberg Games with Neural Flow
Representations
|
cs.LG cs.AI cs.GT cs.MA
|
We present a novel framework for online learning in Stackelberg general-sum
games, where two agents, the leader and follower, engage in sequential
turn-based interactions. At the core of this approach is a learned
diffeomorphism that maps the joint action space to a smooth Riemannian
manifold, referred to as the Stackelberg manifold. This mapping, facilitated by
neural normalizing flows, ensures the formation of tractable isoplanar
subspaces, enabling efficient techniques for online learning. By assuming
linearity between the agents' reward functions on the Stackelberg manifold, our
construct allows the application of standard bandit algorithms. We then provide
a rigorous theoretical basis for regret minimization on convex manifolds and
establish finite-time bounds on simple regret for learning Stackelberg
equilibria. This integration of manifold learning into game theory uncovers a
previously unrecognized potential for neural normalizing flows as an effective
tool for multi-agent learning. We present empirical results demonstrating the
effectiveness of our approach compared to standard baselines, with applications
spanning domains such as cybersecurity and economic supply chain optimization.
|
2502.05500
|
Vision-Ultrasound Robotic System based on Deep Learning for Gas and Arc
Hazard Detection in Manufacturing
|
cs.RO cs.AI
|
Gas leaks and arc discharges present significant risks in industrial
environments, requiring robust detection systems to ensure safety and
operational efficiency. Inspired by human protocols that combine visual
identification with acoustic verification, this study proposes a deep
learning-based robotic system for autonomously detecting and classifying gas
leaks and arc discharges in manufacturing settings. The system is designed to
execute all experimental tasks entirely onboard the robot. Utilizing a
112-channel acoustic camera operating at a 96 kHz sampling rate to capture
ultrasonic frequencies, the system processes real-world datasets recorded in
diverse industrial scenarios. These datasets include multiple gas leak
configurations (e.g., pinhole, open end) and partial discharge types (Corona,
Surface, Floating) under varying environmental noise conditions. The proposed
system integrates visual detection and a beamforming-enhanced acoustic analysis
pipeline. Signals are transformed using STFT and refined through Gamma
Correction, enabling robust feature extraction. An Inception-inspired CNN
further classifies hazards, achieving 99% gas leak detection accuracy. The
system not only detects individual hazard sources but also enhances
classification reliability by fusing multi-modal data from both vision and
acoustic sensors. When tested in reverberation and noise-augmented
environments, the system outperformed conventional models by up to 44%p, with
experimental tasks meticulously designed to ensure fairness and
reproducibility. Additionally, the system is optimized for real-time
deployment, maintaining an inference time of 2.1 seconds on a mobile robotic
platform. By emulating human-like inspection protocols and integrating vision
with acoustic modalities, this study presents an effective solution for
industrial automation, significantly improving safety and operational
reliability.
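The STFT-plus-gamma-correction feature extraction described above can be sketched in plain numpy. The window length, gamma value, and the synthetic 40 kHz tone are illustrative assumptions, not values taken from the paper:

```python
import numpy as np

FS = 96_000  # 96 kHz sampling rate, as used by the acoustic camera

def gamma_spectrogram(x, gamma=0.5, nperseg=1024):
    """Magnitude STFT (Hann-windowed, non-overlapping frames for brevity)
    followed by gamma correction; gamma=0.5 is an illustrative choice."""
    win = np.hanning(nperseg)
    n_frames = len(x) // nperseg
    frames = x[: n_frames * nperseg].reshape(n_frames, nperseg) * win
    mag = np.abs(np.fft.rfft(frames, axis=1))
    mag /= mag.max() + 1e-12           # normalize to [0, 1]
    return mag ** gamma                # gamma < 1 boosts faint components

t = np.arange(FS) / FS                 # 1 s of synthetic signal
sig = np.sin(2 * np.pi * 40_000 * t)   # 40 kHz ultrasonic "leak-like" tone
feat = gamma_spectrogram(sig)
print(feat.shape)                      # (n_frames, nperseg // 2 + 1)
```

The resulting 2-D feature map is the kind of input an Inception-style CNN classifier could consume.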
|
2502.05503
|
A Physical Coherence Benchmark for Evaluating Video Generation Models
via Optical Flow-guided Frame Prediction
|
cs.CV cs.AI
|
Recent advances in video generation models demonstrate their potential as
world simulators, but they often struggle with videos deviating from physical
laws, a key concern overlooked by most text-to-video benchmarks. We introduce a
benchmark designed specifically to assess the Physical Coherence of generated
videos, PhyCoBench. Our benchmark includes 120 prompts covering 7 categories of
physical principles, capturing key physical laws observable in video content.
We evaluated four state-of-the-art (SoTA) T2V models on PhyCoBench and
conducted manual assessments. Additionally, we propose an automated evaluation
model: PhyCoPredictor, a diffusion model that generates optical flow and video
frames in a cascade manner. Through a consistency evaluation comparing
automated and manual sorting, the experimental results show that PhyCoPredictor
currently aligns most closely with human evaluation. Therefore, it can
effectively evaluate the physical coherence of videos, providing insights for
future model optimization. Our benchmark, including physical coherence prompts,
the automatic evaluation tool PhyCoPredictor, and the generated video dataset,
has been released on GitHub at https://github.com/Jeckinchen/PhyCoBench.
|
2502.05504
|
Physics-Conditioned Diffusion Models for Lattice Gauge Theory
|
hep-lat cs.LG
|
We develop diffusion models for simulating lattice gauge theories, where
stochastic quantization is explicitly incorporated as a physical condition for
sampling. We demonstrate the applicability of this novel sampler to U(1) gauge
theory in two spacetime dimensions and find that a model trained at a small
inverse coupling constant can be extrapolated to larger inverse coupling
regions without encountering the topological freezing problem. Additionally,
the trained model can be employed to sample configurations on different lattice
sizes without requiring further training. The exactness of the generated
samples is ensured by incorporating Metropolis-adjusted Langevin dynamics into
the generation process. Furthermore, we demonstrate that this approach enables
more efficient sampling of topological quantities compared to traditional
algorithms such as Hybrid Monte Carlo and Langevin simulations.
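The Metropolis-adjusted Langevin step that guarantees exactness can be illustrated on a toy one-dimensional target; the standard-normal density and step size below are assumptions for demonstration, not the lattice action from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def log_p(x):
    # Toy target: standard normal log-density (up to a constant).
    return -0.5 * x**2

def grad_log_p(x):
    return -x

def mala_step(x, eps):
    """One Metropolis-adjusted Langevin step: a Langevin proposal
    followed by an accept/reject test that makes the chain exact
    with respect to the target density."""
    prop = x + eps * grad_log_p(x) + np.sqrt(2 * eps) * rng.normal()

    def log_q(a, b):  # log-density (up to a constant) of moving b -> a
        mean = b + eps * grad_log_p(b)
        return -(a - mean) ** 2 / (4 * eps)

    log_alpha = log_p(prop) + log_q(x, prop) - log_p(x) - log_q(prop, x)
    return prop if np.log(rng.uniform()) < log_alpha else x

x, samples = 3.0, []
for _ in range(5000):
    x = mala_step(x, eps=0.5)
    samples.append(x)
```

In the paper's setting the same accept/reject correction is applied to the diffusion model's generation process rather than to a hand-written density.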
|
2502.05505
|
Differentially Private Synthetic Data via APIs 3: Using Simulators
Instead of Foundation Model
|
cs.LG cs.CR cs.CV stat.ML
|
Differentially private (DP) synthetic data, which closely resembles the
original private data while maintaining strong privacy guarantees, has become a
key tool for unlocking the value of private data without compromising privacy.
Recently, Private Evolution (PE) has emerged as a promising method for
generating DP synthetic data. Unlike other training-based approaches, PE only
requires access to inference APIs from foundation models, enabling it to
harness the power of state-of-the-art models. However, a suitable foundation
model for a specific private data domain is not always available. In this
paper, we discover that the PE framework is sufficiently general to allow
inference APIs beyond foundation models. Specifically, we show that simulators
-- such as computer graphics-based image synthesis tools -- can also serve as
effective APIs within the PE framework. This insight greatly expands the
applicability of PE, enabling the use of a wide variety of domain-specific
simulators for DP data synthesis. We explore the potential of this approach,
named Sim-PE, in the context of image synthesis. Across three diverse
simulators, Sim-PE performs well, improving the downstream classification
accuracy of PE by up to 3x and reducing the FID score by up to 80%. We also
show that simulators and foundation models can be easily leveraged together
within the PE framework to achieve further improvements. The code is
open-sourced in the Private Evolution Python library:
https://github.com/microsoft/DPSDA.
|
2502.05509
|
Do Spikes Protect Privacy? Investigating Black-Box Model Inversion
Attacks in Spiking Neural Networks
|
cs.LG cs.CR cs.NE
|
As machine learning models become integral to security-sensitive
applications, concerns over data leakage from adversarial attacks continue to
rise. Model Inversion (MI) attacks pose a significant privacy threat by
enabling adversaries to reconstruct training data from model outputs. While MI
attacks on Artificial Neural Networks (ANNs) have been widely studied, Spiking
Neural Networks (SNNs) remain largely unexplored in this context. Due to their
event-driven and discrete computations, SNNs introduce fundamental differences
in information processing that may offer inherent resistance to such attacks. A
critical yet underexplored aspect of this threat lies in black-box settings,
where attackers operate through queries without direct access to model
parameters or gradients, representing a more realistic adversarial scenario in
deployed systems. This work presents the first study of black-box MI attacks on
SNNs. We adapt a generative adversarial MI framework to the spiking domain by
incorporating rate-based encoding for input transformation and decoding
mechanisms for output interpretation. Our results show that SNNs exhibit
significantly greater resistance to MI attacks than ANNs, as demonstrated by
degraded reconstructions, increased instability in attack convergence, and
overall reduced attack effectiveness across multiple evaluation metrics.
Further analysis suggests that the discrete and temporally distributed nature
of SNN decision boundaries disrupts surrogate modeling, limiting the attacker's
ability to approximate the target model.
|
2502.05510
|
Data-Driven Neural Certificate Synthesis
|
eess.SY cs.SY
|
We investigate the problem of verifying different properties of discrete-time
dynamical systems, namely, reachability, safety and reach-while-avoid. To
achieve this, we adopt a data-driven perspective and, using past system
trajectories as data, we aim to learn a specific function termed
\emph{certificate} for each property we wish to verify. The certificate
construction problem is treated as a safety informed neural network training
process, where we use a neural network to learn the parameterization of each
certificate, while the loss function we seek to minimize is designed to
encompass conditions on the certificate to be learned that encode the
satisfaction of the associated property. Besides learning a certificate, we
quantify probabilistically its generalization properties, namely, how likely it
is for a certificate to be valid (and hence for the associated property to be
satisfied) when it comes to a new system trajectory not included in the
training data set. We view this problem under the realm of probably
approximately correct (PAC) learning under the notion of compression, and use
recent advancements of the so-called scenario approach to obtain scalable
generalization bounds on the learned certificates. To achieve this, we design a
novel algorithm that minimizes the loss function and hence constructs a
certificate, and at the same time determines a quantity termed compression,
which is instrumental in obtaining meaningful probabilistic guarantees. This
process is novel per se and provides a constructive mechanism for compression
set calculation, thus opening the road for its use to more general non-convex
optimization problems. We verify the efficacy of our methodology on several
numerical case studies, and compare it (both theoretically and numerically)
with closely related results on data-driven property verification.
|
2502.05512
|
IndexTTS: An Industrial-Level Controllable and Efficient Zero-Shot
Text-To-Speech System
|
cs.SD cs.AI eess.AS
|
Recently, large language model (LLM) based text-to-speech (TTS) systems have
gradually become the mainstream in the industry due to their high naturalness
and powerful zero-shot voice cloning capabilities. Here, we introduce the
IndexTTS system, which is mainly based on the XTTS and Tortoise models. We add
some novel improvements. Specifically, in Chinese scenarios, we adopt a hybrid
modeling method that combines characters and pinyin, making the pronunciations
of polyphonic characters and long-tail characters controllable. We also
performed a comparative analysis of Vector Quantization (VQ) and
Finite-Scalar Quantization (FSQ) for codebook utilization of acoustic speech
tokens. To further enhance the effect and stability of voice cloning, we
introduce a conformer-based speech conditional encoder and replace the
speechcode decoder with BigVGAN2. Compared with XTTS, it has achieved
significant improvements in naturalness, content consistency, and zero-shot
voice cloning. Compared with popular open-source TTS systems such as
Fish-Speech, CosyVoice2, FireRedTTS and F5-TTS, IndexTTS has a relatively
simple training process, more controllable usage, and faster inference speed.
Moreover, its performance surpasses that of these systems. Our demos are
available at https://index-tts.github.io.
|
2502.05516
|
Evaluating Differential Privacy on Correlated Datasets Using Pointwise
Maximal Leakage
|
cs.CR cs.IT math.IT
|
Data-driven advancements significantly contribute to societal progress, yet
they also pose substantial risks to privacy. In this landscape, differential
privacy (DP) has become a cornerstone in privacy preservation efforts. However,
the adequacy of DP in scenarios involving correlated datasets has sometimes
been questioned, and multiple studies have hinted at potential vulnerabilities.
In this work, we delve into the nuances of applying DP to correlated datasets
by leveraging the concept of pointwise maximal leakage (PML) for a quantitative
assessment of information leakage. Our investigation reveals that DP's
guarantees can be arbitrarily weak for correlated databases when assessed
through the lens of PML. More precisely, we prove the existence of a pure DP
mechanism with PML levels arbitrarily close to that of a mechanism which
releases individual entries from a database without any perturbation. By
shedding light on the limitations of DP on correlated datasets, our work aims
to foster a deeper understanding of subtle privacy risks and highlight the need
for the development of more effective privacy-preserving mechanisms tailored to
diverse scenarios.
|
2502.05517
|
Evaluation of Vision Transformers for Multimodal Image Classification: A
Case Study on Brain, Lung, and Kidney Tumors
|
cs.CV
|
Neural networks have become the standard technique for medical diagnostics,
especially in cancer detection and classification. This work evaluates the
performance of Vision Transformers architectures, including Swin Transformer
and MaxViT, in several datasets of magnetic resonance imaging (MRI) and
computed tomography (CT) scans. We used three training sets of images with
brain, lung, and kidney tumors. Each dataset includes different classification
labels, from brain gliomas and meningiomas to benign and malignant lung
conditions and kidney anomalies such as cysts and cancers. This work aims to
analyze the behavior of the neural networks in each dataset and the benefits of
combining different image modalities and tumor classes. We designed several
experiments by fine-tuning the models on combined and individual image
modalities. The results revealed that the Swin Transformer provided high
accuracy, achieving up to 99.9\% for kidney tumor classification and 99.3\%
accuracy in a combined dataset. MaxViT also provided excellent results on
individual datasets but performed poorly when data were combined. This research
highlights the adaptability of Transformer-based models to various image
modalities and features. However, challenges persist, including limited
annotated data and interpretability issues. Future works will expand this study
by incorporating other image modalities and enhancing diagnostic capabilities.
Integrating these models across diverse datasets could mark a pivotal advance
in precision medicine, paving the way for more efficient and comprehensive
healthcare solutions.
|
2502.05523
|
Adaptive Domain Scaling for Personalized Sequential Modeling in
Recommenders
|
cs.IR
|
Users generally exhibit complex behavioral patterns and diverse intentions in
multiple business scenarios of super applications like Douyin, presenting great
challenges to current industrial multi-domain recommenders. To mitigate the
discrepancies across diverse domains, research and industrial practice
generally emphasize sophisticated network structures to accommodate diverse data
distributions, while neglecting the inherent understanding of user behavioral
sequence from the multi-domain perspective. In this paper, we present Adaptive
Domain Scaling (ADS) model, which comprehensively enhances the personalization
capability in target-aware sequence modeling across multiple domains.
Specifically, ADS comprises two major modules: personalized
sequence representation generation (PSRG) and personalized candidate
representation generation (PCRG). The modules contribute to the tailored
multi-domain learning by dynamically learning both the user behavioral sequence
item representation and the candidate target item representation under
different domains, facilitating adaptive user intention understanding.
Experiments are performed on both a public dataset and two billion-scale
industrial datasets, and the extensive results verify the high effectiveness
and compatibility of ADS. Besides, we conduct online experiments on two
influential business scenarios including Douyin Advertisement Platform and
Douyin E-commerce Service Platform, both of which show substantial business
improvements. Currently, ADS has been fully deployed in many recommendation
services at ByteDance, serving billions of users.
|
2502.05526
|
Towards Learning Scalable Agile Dynamic Motion Planning for Robosoccer
Teams with Policy Optimization
|
cs.RO cs.AI cs.LG cs.MA
|
In fast-paced, ever-changing environments, dynamic Motion Planning for
Multi-Agent Systems in the presence of obstacles is a universal and unsolved
problem. Be it from path planning around obstacles to the movement of robotic
arms, or in planning navigation of robot teams in settings such as Robosoccer,
dynamic motion planning is needed to avoid collisions while reaching the
targeted destination when multiple agents occupy the same area. In continuous
domains where the world changes quickly, existing classical Motion Planning
algorithms such as RRT* and A* become computationally expensive to rerun at
every time step. Many variations of classical and well-formulated non-learning
path-planning methods have been proposed to solve this universal problem but
fall short due to limitations in speed, smoothness, optimality, etc. Deep
learning models overcome these challenges through their ability to adapt to
varying environments based on past experience. However, current learned
motion-planning models use discretized environments, do not account for
heterogeneous agents or replanning, and are built to improve the efficiency of
classical motion planners, leading to issues with scalability. To prevent
collisions between heterogeneous team members and with obstacles while reaching
the target location, we present a learning-based dynamic navigation model and
demonstrate it in a simple Robosoccer game environment.
|
2502.05534
|
Fg-T2M++: LLMs-Augmented Fine-Grained Text Driven Human Motion
Generation
|
cs.CV
|
We address the challenging problem of fine-grained text-driven human motion
generation. Existing works generate imprecise motions that fail to accurately
capture relationships specified in text due to: (1) lack of effective text
parsing for detailed semantic cues regarding body parts, (2) not fully modeling
linguistic structures between words to comprehend text comprehensively. To
tackle these limitations, we propose a novel fine-grained framework Fg-T2M++
that consists of: (1) an LLMs semantic parsing module to extract body part
descriptions and semantics from text, (2) a hyperbolic text representation
module to encode relational information between text units by embedding the
syntactic dependency graph into hyperbolic space, and (3) a multi-modal fusion
module to hierarchically fuse text and motion features. Extensive experiments
on HumanML3D and KIT-ML datasets demonstrate that Fg-T2M++ outperforms SOTA
methods, validating its ability to accurately generate motions adhering to
comprehensive text semantics.
|
2502.05535
|
Rate-Matching Framework for RSMA-Enabled Multibeam LEO Satellite
Communications
|
cs.IT cs.NI math.IT
|
With the goal of ubiquitous global connectivity, multibeam low Earth orbit
(LEO) satellite communication (SATCOM) has attracted significant attention in
recent years. The traffic demands of users are heterogeneous within the broad
coverage of SATCOM due to different geographical conditions and user
distributions. Motivated by this, this paper proposes a novel rate-matching
(RM) framework based on rate-splitting multiple access (RSMA) that minimizes
the difference between the traffic demands and offered rates while
simultaneously minimizing transmit power for power-hungry satellite payloads.
Moreover, channel phase perturbations arising from channel estimation and
feedback errors are considered to capture realistic multibeam LEO SATCOM
scenarios. To tackle the non-convexity of the RSMA-based RM problem under phase
perturbations, we convert it into a tractable convex form via the successive
convex approximation method and present an efficient algorithm to solve the RM
problem. Through extensive numerical analysis across various traffic demand
distributions and channel state information accuracy levels at LEO satellites, we
demonstrate that RSMA flexibly allocates the power between common and private
streams according to different traffic patterns across beams, thereby
efficiently satisfying users' non-uniform traffic demands. In particular, the
use of common messages plays a vital role in overcoming the limited spatial
dimension available at LEO satellites, enabling it to manage inter- and
intra-beam interference effectively in the presence of phase perturbation.
|
2502.05537
|
Sequential Stochastic Combinatorial Optimization Using Hierarchal
Reinforcement Learning
|
cs.AI cs.LG
|
Reinforcement learning (RL) has emerged as a promising tool for combinatorial
optimization (CO) problems due to its ability to learn fast, effective, and
generalizable solutions. Nonetheless, existing works mostly focus on one-shot
deterministic CO, while sequential stochastic CO (SSCO) has rarely been studied
despite its broad applications such as adaptive influence maximization (IM) and
infectious disease intervention. In this paper, we study the SSCO problem where
we first decide the budget (e.g., number of seed nodes in adaptive IM)
allocation for all time steps, and then select a set of nodes for each time
step. The few existing studies on SSCO simplify the problems by assuming a
uniformly distributed budget allocation over the time horizon, yielding
suboptimal solutions. We propose a generic hierarchical RL (HRL) framework
called wake-sleep option (WS-option), a two-layer option-based framework that
simultaneously decides adaptive budget allocation on the higher layer and node
selection on the lower layer. WS-option starts with a coherent formulation of
the two-layer Markov decision processes (MDPs), capturing the interdependencies
between the two layers of decisions. Building on this, WS-option employs
several innovative designs to balance the model's training stability and
computational efficiency, preventing the vicious cyclic interference issue
between the two layers. Empirical results show that WS-option exhibits
significantly improved effectiveness and generalizability compared to
traditional methods. Moreover, the learned model can be generalized to larger
graphs, which significantly reduces the overhead of computational resources.
|
2502.05538
|
Coalition Formation for Heterogeneous Federated Learning Enabled Channel
Estimation in RIS-assisted Cell-free MIMO
|
cs.IT math.IT
|
Downlink channel estimation remains a significant bottleneck in
reconfigurable intelligent surface-assisted cell-free multiple-input
multiple-output communication systems. Conventional approaches primarily rely
on centralized deep learning methods to estimate the high-dimensional and
complex cascaded channels. These methods require data aggregation from all
users for centralized model training, leading to excessive communication
overhead and significant data privacy concerns. Additionally, the large size of
local learning models imposes heavy computational demands on end users,
necessitating strong computational capabilities that most commercial devices
lack. To address the aforementioned challenges, a coalition-formation-guided
heterogeneous federated learning (FL) framework is proposed. This framework
leverages coalition formation to guide the formation of heterogeneous FL user
groups for efficient channel estimation. Specifically, by utilizing a
distributed deep reinforcement learning (DRL) approach, each FL user
intelligently and independently decides whether to join or leave a coalition,
aiming at improving channel estimation accuracy, while reducing local model
size and computational costs for end users. Moreover, to accelerate the DRL-FL
convergence process and reduce computational burdens on end users, a transfer
learning method is introduced. This method incorporates both received reference
signal power and distance similarity metrics, by considering that nodes with
similar distances to the base station and comparable received signal power have
a strong likelihood of experiencing similar channel fading. Extensive
experiments reveal that, compared with the benchmarks, the proposed framework
significantly reduces the computational overhead of end users by 16%, enhances
data privacy, and improves channel estimation accuracy by 20%.
|
2502.05539
|
SSH: Sparse Spectrum Adaptation via Discrete Hartley Transformation
|
cs.CV cs.LG
|
Low-rank adaptation (LoRA) has been demonstrated to be effective in reducing
the number of trainable parameters when fine-tuning a large language model (LLM).
However, it still encounters computational and memory challenges when scaling
to larger models or addressing more complex task adaptation.
In this work, we introduce Sparse Spectrum Adaptation via Discrete Hartley
Transformation (SSH), a novel approach that significantly reduces the number of
trainable parameters while enhancing model performance. It selects the most
informative spectral components across all layers, under the guidance of the
initial weights after a discrete Hartley transformation (DHT). The lightweight
inverse DHT then projects the spectrum back into the spatial domain for
updates.
Extensive experiments across both single-modality tasks such as language
understanding and generation and multi-modality tasks such as video-text
understanding demonstrate that SSH outperforms existing parameter-efficient
fine-tuning (PEFT) methods while achieving substantial reductions in
computational cost and memory requirements.
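The DHT round trip at the core of SSH can be sketched with numpy's FFT, since the Hartley transform satisfies H_k = Re(X_k) - Im(X_k) and is its own inverse up to a 1/N factor. The weight size, number of kept components, and update magnitude below are illustrative assumptions, not values from the paper:

```python
import numpy as np

def dht(x):
    # Discrete Hartley transform via the FFT: H_k = Re(X_k) - Im(X_k).
    X = np.fft.fft(x)
    return X.real - X.imag

def idht(H):
    # The DHT is an involution up to a 1/N scaling factor.
    return dht(H) / len(H)

rng = np.random.default_rng(0)
w = rng.normal(size=64)              # stand-in for a flattened weight slice
H = dht(w)

# Keep only the k spectral components with the largest initial magnitude;
# in SSH these selected components are the trainable parameters.
k = 8
keep = np.argsort(np.abs(H))[-k:]
delta = np.zeros_like(H)
delta[keep] = 0.01                   # illustrative sparse spectral update

w_updated = w + idht(delta)          # project the update back to weights
print(np.allclose(idht(dht(w)), w))  # exact round trip (to float precision)
```

Because only the k spectral coefficients are trained, the parameter count is independent of the full weight dimension, which is the source of the method's efficiency.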
|
2502.05540
|
Demystifying Catastrophic Forgetting in Two-Stage Incremental Object
Detector
|
cs.CV
|
Catastrophic forgetting is a critical challenge for incremental object
detection (IOD). Most existing methods treat the detector monolithically,
relying on instance replay or knowledge distillation without analyzing
component-specific forgetting. Through dissection of Faster R-CNN, we reveal a
key insight: Catastrophic forgetting is predominantly localized to the RoI Head
classifier, while regressors retain robustness across incremental stages. This
finding challenges conventional assumptions, motivating us to develop a
framework termed NSGP-RePRE. Regional Prototype Replay (RePRE) mitigates
classifier forgetting via replay of two types of prototypes: coarse prototypes
represent class-wise semantic centers of RoI features, while fine-grained
prototypes model intra-class variations. Null Space Gradient Projection (NSGP)
is further introduced to eliminate prototype-feature misalignment by updating
the feature extractor in directions orthogonal to subspace of old inputs via
gradient projection, aligning RePRE with incremental learning dynamics. Our
simple yet effective design allows NSGP-RePRE to achieve state-of-the-art
performance on the Pascal VOC and MS COCO datasets under various settings. Our
work not only advances IOD methodology but also provides pivotal insights for
catastrophic forgetting mitigation in IOD. Code will be available soon.
|
2502.05542
|
Democratic Training Against Universal Adversarial Perturbations
|
cs.LG
|
Despite their advances and success, real-world deep neural networks are known
to be vulnerable to adversarial attacks. The universal adversarial perturbation
(UAP), an input-agnostic attack, poses a serious threat to their deployment in
security-sensitive systems. In this case, a single universal adversarial
perturbation deceives the model on a range of clean inputs without requiring
input-specific optimization, which makes it particularly threatening. In this
work, we observe that universal adversarial perturbations usually lead to
an abnormal entropy spectrum in hidden layers, which suggests that the
prediction is dominated by a small number of ``features'' in such cases (rather than
democratically by many features). Inspired by this, we propose an efficient yet
effective defense method for mitigating UAPs called \emph{Democratic Training}
by performing entropy-based model enhancement to suppress the effect of the
universal adversarial perturbations in a given model. \emph{Democratic
Training} is evaluated with 7 neural networks trained on 5 benchmark datasets
and 5 types of state-of-the-art universal adversarial attack methods. The
results show that it effectively reduces the attack success rate, improves
model robustness and preserves the model accuracy on clean samples.
|
2502.05547
|
Dual Defense: Enhancing Privacy and Mitigating Poisoning Attacks in
Federated Learning
|
cs.CR cs.AI
|
Federated learning (FL) is inherently susceptible to privacy breaches and
poisoning attacks. To tackle these challenges, researchers have separately
devised secure aggregation mechanisms to protect data privacy and robust
aggregation methods that withstand poisoning attacks. However, simultaneously
addressing both concerns is challenging; secure aggregation facilitates
poisoning attacks as most anomaly detection techniques require access to
unencrypted local model updates, which are obscured by secure aggregation. Few
recent efforts to simultaneously tackle both challenges often depend on the
impractical assumption of non-colluding two-server setups that disrupt FL's
topology, or on three-party computation, which introduces scalability issues,
complicating deployment and application. To overcome this dilemma, this paper
introduces a Dual Defense Federated learning (DDFed) framework. DDFed
simultaneously boosts privacy protection and mitigates poisoning attacks,
without introducing new participant roles or disrupting the existing FL
topology. DDFed initially leverages cutting-edge fully homomorphic encryption
(FHE) to securely aggregate model updates, without the impractical requirement
for non-colluding two-server setups and ensures strong privacy protection.
Additionally, we propose a unique two-phase anomaly detection mechanism for
encrypted model updates, featuring secure similarity computation and
feedback-driven collaborative selection, with additional measures to prevent
potential privacy breaches from Byzantine clients incorporated into the
detection process. We conducted extensive experiments on various model
poisoning attacks and FL scenarios, including both cross-device and cross-silo
FL. Experiments on publicly available datasets demonstrate that DDFed
successfully protects model privacy and effectively defends against model
poisoning threats.
|
2502.05550
|
4DR P2T: 4D Radar Tensor Synthesis with Point Clouds
|
cs.CV
|
In four-dimensional (4D) Radar-based point cloud generation, clutter removal
is commonly performed using the constant false alarm rate (CFAR) algorithm.
However, CFAR may not fully capture the spatial characteristics of objects. To
address this limitation, this paper proposes the 4D Radar Point-to-Tensor (4DR P2T)
model, which generates tensor data suitable for deep learning applications
while minimizing measurement loss. Our method employs a conditional generative
adversarial network (cGAN), modified to effectively process 4D Radar point
cloud data and generate tensor data. Experimental results on the K-Radar
dataset validate the effectiveness of the 4DR P2T model, achieving an average
PSNR of 30.39dB and SSIM of 0.96. Additionally, our analysis of different point
cloud generation methods highlights that the 5% percentile method provides the
best overall performance, while the 1% percentile method optimally balances
data volume reduction and performance, making it well-suited for deep learning
applications.
|
2502.05551
|
FRAME: Boosting LLMs with A Four-Quadrant Multi-Stage Pretraining
Strategy
|
cs.CL
|
Large language models (LLMs) have significantly advanced human language
understanding and generation, with pretraining data quality and organization
being crucial to their performance. Multi-stage pretraining is a promising
approach, but existing methods often lack quantitative criteria for data
partitioning and instead rely on intuitive heuristics. In this paper, we
propose the novel Four-quadRAnt Multi-stage prEtraining strategy (FRAME),
guided by the established principle of organizing the pretraining process into
four stages to achieve four significant loss reductions. This principle
is grounded in two key findings: first, training on high-perplexity (PPL) data
followed by low-PPL data, and second, training on low-PPL-difference (PD) data
followed by high-PD data, each causes the loss to drop significantly twice and
enhances performance. By partitioning data into four quadrants and
strategically organizing them, FRAME achieves a remarkable 16.8% average
improvement over a random data order across MMLU and CMMLU for the 3B model,
boosting LLM performance.
|
2502.05553
|
Latent Structure Modulation in Large Language Models Through Stochastic
Concept Embedding Transitions
|
cs.CL
|
Stochastic embedding transitions introduce a probabilistic mechanism for
adjusting token representations dynamically during inference, mitigating the
constraints imposed through static or deterministic embeddings. A transition
framework was proposed in which each token embedding evolved through
probabilistic updates, ensuring adaptability while preserving semantic
integrity across linguistic contexts. Empirical evaluations demonstrated that
models incorporating stochastic transitions exhibited greater lexical
diversity, improved generative coherence, and enhanced retention of
low-frequency vocabulary, contributing to more varied sentence structures and
reduced reliance on high-probability token selections. Statistical analyses of
embedding drift across transformer layers indicated that representations
evolved more flexibly without losing coherence, supporting the hypothesis that
controlled stochasticity facilitated context-sensitive representation learning.
Experimental results revealed that probabilistic embeddings introduced minor
computational overhead while maintaining generative efficiency, reinforcing
their feasibility in large-scale applications. A comparative study with
traditional embedding approaches highlighted measurable gains in text
completion accuracy, dialogue coherence, and structural complexity, confirming
the effectiveness of stochastic transitions in enhancing representation
expressiveness. Clustering patterns in the embedding space suggested that
probabilistic updates preserved meaningful semantic groupings while enabling
context-driven shifts, further validating the stability of the transition
mechanism. Performance metrics indicated that stochastic transitions balanced
adaptability and control, ensuring that generative outputs remained
linguistically coherent without excessive randomness.
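The transition mechanism this abstract describes can be sketched as a noisy update anchored to the token's original embedding. The concrete rule, the function name, and all constants below are illustrative assumptions, since the abstract does not specify the exact update:

```python
import numpy as np

def stochastic_transition(current, anchor, sigma=0.02, anchor_weight=0.9, rng=None):
    """One illustrative probabilistic update for token embeddings: Gaussian
    drift plus a pull back toward the original (anchor) embedding, so lexical
    variety can increase while semantic integrity is preserved. This is a
    sketch of the general idea, not the paper's exact transition rule."""
    rng = np.random.default_rng(rng)
    noise = rng.normal(0.0, sigma, size=np.shape(current))
    # Convex combination anchors the drifted embedding to its original vector.
    return anchor_weight * np.asarray(anchor) + (1.0 - anchor_weight) * np.asarray(current) + noise
```

With `sigma = 0` and `anchor_weight = 1` the update reduces to the static embedding, which makes the deterministic baseline a special case of the stochastic mechanism.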
|
2502.05555
|
Efficient Reinforcement Learning Through Adaptively Pretrained Visual
Encoder
|
cs.CV
|
While Reinforcement Learning (RL) agents can successfully learn to handle
complex tasks, effectively generalizing acquired skills to unfamiliar settings
remains a challenge. One reason is that the visual encoders used
are task-dependent, preventing effective feature extraction in different
settings. To address this issue, recent studies have tried to pretrain encoders
with diverse visual inputs in order to improve their performance. However, they
rely on existing pretrained encoders without further exploring the impact of
the pretraining period. In this work, we propose APE: efficient reinforcement
learning through Adaptively Pretrained visual Encoder -- a framework that
utilizes an adaptive augmentation strategy during the pretraining phase and
extracts generalizable features with only a few interactions within the task
environments in the policy learning period. Experiments are conducted across
various domains, including DeepMind Control Suite, Atari Games and Memory Maze
benchmarks, to verify the effectiveness of our method. Results show that
mainstream RL methods, such as DreamerV3 and DrQ-v2, achieve state-of-the-art
performance when equipped with APE. In addition, APE significantly improves the
sampling efficiency using only visual inputs during learning, approaching the
efficiency of state-based methods in several control tasks. These findings
demonstrate the potential of adaptive pretraining of the encoder in enhancing the
generalization ability and efficiency of visual RL algorithms.
|
2502.05556
|
Knowledge is Power: Harnessing Large Language Models for Enhanced
Cognitive Diagnosis
|
cs.AI
|
Cognitive Diagnosis Models (CDMs) are designed to assess students' cognitive
states by analyzing their performance across a series of exercises. However,
existing CDMs often struggle with diagnosing infrequent students and exercises
due to a lack of rich prior knowledge. With the advancement in large language
models (LLMs), which possess extensive domain knowledge, their integration into
cognitive diagnosis presents a promising opportunity. Despite this potential,
integrating LLMs with CDMs poses significant challenges. LLMs are not
well-suited for capturing the fine-grained collaborative interactions between
students and exercises, and the disparity between the semantic space of LLMs
and the behavioral space of CDMs hinders effective integration. To address
these issues, we propose a novel Knowledge-enhanced Cognitive Diagnosis (KCD)
framework: a model-agnostic approach that utilizes LLMs to enhance CDMs and is
compatible with various CDM architectures. The KCD framework operates in
two stages: LLM Diagnosis and Cognitive Level Alignment. In the LLM Diagnosis
stage, both students and exercises are diagnosed to achieve comprehensive and
detailed modeling. In the Cognitive Level Alignment stage, we bridge the gap
between the CDMs' behavioral space and the LLMs' semantic space using
contrastive learning and mask-reconstruction approaches. Experiments on several
real-world datasets demonstrate the effectiveness of our proposed framework.
|
2502.05557
|
MMHMER: Multi-viewer and Multi-task for Handwritten Mathematical
Expression Recognition
|
cs.CV
|
Handwritten Mathematical Expression Recognition (HMER) methods have made
remarkable progress, with most existing HMER approaches based either on a
hybrid CNN/RNN architecture with GRU or on Transformer architectures. Each
of these has its strengths and weaknesses. Leveraging different model
structures as viewers and effectively integrating their diverse capabilities
presents an intriguing avenue for exploration. This involves addressing two key
challenges: 1) How to fuse these two methods effectively, and 2) How to achieve
higher performance under an appropriate level of complexity. This paper
proposes an efficient CNN-Transformer multi-viewer, multi-task approach to
enhance the model's recognition performance. Our MMHMER model achieves 63.96%,
62.51%, and 65.46% ExpRate on CROHME14, CROHME16, and CROHME19, outperforming
Posformer with an absolute gain of 1.28%, 1.48%, and 0.58%. The main
contribution of our approach is that we propose a new multi-viewer, multi-task
framework that can effectively integrate the strengths of CNN and Transformer.
By leveraging the feature extraction capabilities of CNN and the sequence
modeling capabilities of Transformer, our model can better handle the
complexity of handwritten mathematical expressions.
|
2502.05558
|
Large Memory Network for Recommendation
|
cs.IR
|
Modeling user behavior sequences in recommender systems is essential for
understanding user preferences over time, enabling personalized and accurate
recommendations for improving user retention and enhancing business values.
Despite its significance, there are two challenges for current sequential
modeling approaches. From the spatial dimension, it is difficult to mutually
perceive similar users' interests for a generalized intention understanding;
from the temporal dimension, current methods are generally prone to forgetting
long-term interests due to the fixed-length input sequence. In this paper, we
present Large Memory Network (LMN), providing a novel idea by compressing and
storing user history behavior information in a large-scale memory block. With
the elaborated online deployment strategy, the memory block can be easily
scaled up to million-scale in the industry. Extensive offline comparison
experiments, memory scaling up experiments, and online A/B test on Douyin
E-Commerce Search (ECS) are performed, validating the superior performance of
LMN. Currently, LMN has been fully deployed in Douyin ECS, serving millions of
users each day.
|
2502.05561
|
Diffusion Model for Interest Refinement in Multi-Interest Recommendation
|
cs.IR
|
Multi-interest candidate matching plays a pivotal role in personalized
recommender systems, as it captures diverse user interests from their
historical behaviors. Most existing methods utilize attention mechanisms to
generate interest representations by aggregating historical item embeddings.
However, these methods only capture overall item-level relevance, leading to
coarse-grained interest representations that include irrelevant information. To
address this issue, we propose the Diffusion Multi-Interest model (DMI), a
novel framework for refining user interest representations at the dimension
level. Specifically, DMI first introduces controllable noise into
coarse-grained interest representations at the dimension level. Then, in the
iterative reconstruction process, DMI combines a cross-attention mechanism and
an item pruning strategy to reconstruct the personalized interest vectors with
the guidance of tailored collaborative information. Extensive experiments
demonstrate the effectiveness of DMI, surpassing state-of-the-art methods on
offline evaluations and an online A/B test. Successfully deployed in the
real-world recommender system, DMI effectively enhances user satisfaction and
system performance at scale, serving the major traffic of hundreds of millions
of daily active users. (The code will be released for reproducibility once the
paper is accepted.)
|
2502.05562
|
Can Large Language Models Be Query Optimizer for Relational Databases?
|
cs.DB
|
Query optimization, which finds the optimized execution plan for a given
query, is a complex planning and decision-making problem within the
exponentially growing plan space in database management systems (DBMS).
Traditional optimizers heavily rely on a certain cost model constructed by
various heuristics and empirical tuning, probably leading to generating
suboptimal plans. Recent developments of Large Language Models (LLMs) have
demonstrated their potential in solving complex planning and decision-making
problems, such as arithmetic and programmatic tasks. In this paper, we try to
explore the potential of LLMs in handling query optimization and propose a
tentative LLM-based query optimizer dubbed LLM-QO, established on PostgreSQL's
execution engine. In LLM-QO, we formulate query optimization in an
autoregressive fashion which directly generates the execution plan without
explicit plan enumeration. To investigate the essential input of LLM-QO, we
design a customized data recipe named QInstruct to collect the training data
from various optimizers and serialize the database's meta data, queries and
corresponding plans into a textual format. Based on QInstruct, we implement a
two-stage fine-tuning pipeline, Query Instruction Tuning (QIT) and Query Direct
Preference Optimization (QDPO), to empower the capability of general-purpose
LLMs in handling query optimization. In our experiments, LLM-QO can generate
valid and high-quality plans and consistently outperforms both traditional and
learned optimizers on three query workloads. Our findings verify that LLMs can
be derived as query optimizers where generalization, efficiency and adaptivity
deserve further research efforts.
|
2502.05564
|
TabICL: A Tabular Foundation Model for In-Context Learning on Large Data
|
cs.LG cs.AI
|
The long-standing dominance of gradient-boosted decision trees on tabular
data is currently challenged by tabular foundation models using In-Context
Learning (ICL): setting the training data as context for the test data and
predicting in a single forward pass without parameter updates. While the very
recent TabPFNv2 foundation model (2025) excels on tables with up to 10K
samples, its alternating column- and row-wise attentions make handling large
training sets computationally prohibitive. So, can ICL be effectively scaled
and deliver a benefit for larger tables? We introduce TabICL, a tabular
foundation model for classification, pretrained on synthetic datasets with up
to 60K samples and capable of handling 500K samples on affordable resources.
This is enabled by a novel two-stage architecture: a column-then-row attention
mechanism to build fixed-dimensional embeddings of rows, followed by a
transformer for efficient ICL. Across 200 classification datasets from the
TALENT benchmark, TabICL is on par with TabPFNv2 while being systematically
faster (up to 10 times), and significantly outperforms all other approaches. On
56 datasets with over 10K samples, TabICL surpasses both TabPFNv2 and CatBoost,
demonstrating the potential of ICL for large data.
|
2502.05565
|
Multi-Scale Conformal Prediction: A Theoretical Framework with Coverage
Guarantees
|
math.ST cs.SY eess.SY stat.TH
|
We propose a multi-scale extension of conformal prediction, an approach that
constructs prediction sets with finite-sample coverage guarantees under minimal
statistical assumptions. Classic conformal prediction relies on a single notion
of conformity, overlooking the multi-level structures that arise in
applications such as image analysis, hierarchical data exploration, and
multi-resolution time series modeling. In contrast, the proposed framework
defines a distinct conformity function at each relevant scale or resolution,
producing multiple conformal predictors whose prediction sets are then
intersected to form the final multi-scale output. We establish theoretical
results confirming that the multi-scale prediction set retains the marginal
coverage guarantees of the original conformal framework and can, in fact, yield
smaller or more precise sets in practice. By distributing the total miscoverage
probability across scales in proportion to their informative power, the method
further refines the set sizes. We also show that dependence between scales can
lead to conservative coverage, ensuring that the actual coverage exceeds the
nominal level. Numerical experiments in a synthetic classification setting
demonstrate that multi-scale conformal prediction achieves or surpasses the
nominal coverage level while generating smaller prediction sets compared to
single-scale conformal methods.
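The construction this abstract outlines, one conformal predictor per scale, the total miscoverage split across scales, and the per-scale sets intersected, can be sketched for classification. The split-conformal subroutine is standard; the weight-proportional miscoverage allocation is a stand-in for the "informative power" weighting, whose exact form the abstract does not give:

```python
import numpy as np

def split_conformal_set(cal_scores, test_scores, alpha):
    """Standard split-conformal set for classification. cal_scores holds the
    nonconformity scores of the true labels on a calibration set; test_scores[k]
    is the score of candidate label k (lower = more conforming)."""
    n = len(cal_scores)
    level = min(1.0, np.ceil((n + 1) * (1.0 - alpha)) / n)  # finite-sample correction
    q = np.quantile(cal_scores, level, method="higher")
    return {k for k, s in enumerate(test_scores) if s <= q}

def multiscale_conformal_set(cal_per_scale, test_per_scale, alpha, weights):
    """One conformal predictor per scale; the total miscoverage alpha is split
    across scales in proportion to `weights` (an assumed proxy for each scale's
    informative power) and the per-scale sets are intersected. By a union bound,
    the intersection retains >= 1 - alpha marginal coverage."""
    w = np.asarray(weights, dtype=float)
    alphas = alpha * w / w.sum()
    sets = [split_conformal_set(c, t, a)
            for c, t, a in zip(cal_per_scale, test_per_scale, alphas)]
    result = sets[0]
    for s in sets[1:]:
        result &= s
    return result
```

Because each per-scale set covers with probability at least 1 - alpha_s and the alpha_s sum to alpha, the intersected set keeps the nominal guarantee while never being larger than any single-scale set.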
|
2502.05567
|
ATLAS: Autoformalizing Theorems through Lifting, Augmentation, and
Synthesis of Data
|
cs.CL cs.AI cs.LG
|
Autoformalization, the process of automatically translating natural language
mathematics into machine-verifiable formal language, has demonstrated
advancements with the progress of large language models (LLMs). However, a key
obstacle to further advancements is the scarcity of paired datasets that align
natural language with formal language. To address this challenge, we introduce
ATLAS (Autoformalizing Theorems through Lifting, Augmentation, and Synthesis of
Data), an iterative data generation framework designed to produce large-scale,
high-quality parallel theorem statements. With the proposed ATLAS running for
10 iterations, we construct an undergraduate-level dataset comprising 300k
theorem statements and develop the ATLAS translator, achieving accuracies of
80.59% (pass@8) and 92.99% (pass@128) on ProofNet, significantly outperforming
the base model (23.99% and 47.17%) and InternLM2-Math-Plus-7B (50.94% and
80.32%). Furthermore, the ATLAS translator also achieves state-of-the-art
performance on both the high-school-level miniF2F dataset and the
graduate-level MathQual dataset introduced in this work. The datasets, model,
and code will be released to the public soon.
|
2502.05568
|
Large Multimodal Models for Low-Resource Languages: A Survey
|
cs.CL cs.AI cs.LG
|
In this survey, we systematically analyze techniques used to adapt large
multimodal models (LMMs) for low-resource (LR) languages, examining approaches
ranging from visual enhancement and data creation to cross-modal transfer and
fusion strategies. Through a comprehensive analysis of 106 studies across 75 LR
languages, we identify key patterns in how researchers tackle the challenges of
limited data and computational resources. We find that visual information often
serves as a crucial bridge for improving model performance in LR settings,
though significant challenges remain in areas such as hallucination mitigation
and computational efficiency. We aim to provide researchers with a clear
understanding of current approaches and remaining challenges in making LMMs
more accessible to speakers of LR (understudied) languages. We complement our
survey with an open-source repository available at:
https://github.com/marianlupascu/LMM4LRL-Survey.
|
2502.05573
|
Low-Rank Agent-Specific Adaptation (LoRASA) for Multi-Agent Policy
Learning
|
cs.MA cs.AI cs.LG cs.RO
|
Multi-agent reinforcement learning (MARL) often relies on parameter sharing
(PS) to scale efficiently. However, purely shared policies can stifle each
agent's unique specialization, reducing overall performance in heterogeneous
environments. We propose Low-Rank Agent-Specific Adaptation (LoRASA), a novel
approach that treats each agent's policy as a specialized "task" fine-tuned
from a shared backbone. Drawing inspiration from parameter-efficient transfer
methods, LoRASA appends small, low-rank adaptation matrices to each layer of
the shared policy, naturally inducing parameter-space sparsity that promotes
both specialization and scalability. We evaluate LoRASA on challenging
benchmarks including the StarCraft Multi-Agent Challenge (SMAC) and Multi-Agent
MuJoCo (MAMuJoCo), implementing it atop widely used algorithms such as MAPPO
and A2PO. Across diverse tasks, LoRASA matches or outperforms existing
baselines while reducing memory and computational overhead. Ablation studies
on adapter rank,
placement, and timing validate the method's flexibility and efficiency. Our
results suggest LoRASA's potential to establish a new norm for MARL policy
parameterization: combining a shared foundation for coordination with low-rank
agent-specific refinements for individual specialization.
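The low-rank adapter mechanism named in this abstract can be sketched for a single linear policy layer: a shared weight matrix plus a small per-agent rank-r correction. Class name, initialization, and shapes below are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

class LoRAPolicyLayer:
    """Shared linear policy layer with per-agent low-rank adapters:
    y = x @ (W + A_i @ B_i), where W is the shared backbone and
    A_i (d_in x r), B_i (r x d_out) are small agent-specific matrices.
    Zero-initializing B makes every adapter a no-op at the start, so all
    agents begin from the shared policy (an assumed but common choice)."""
    def __init__(self, d_in, d_out, n_agents, rank=4, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(0.0, 0.02, size=(d_in, d_out))   # shared backbone weights
        self.A = rng.normal(0.0, 0.02, size=(n_agents, d_in, rank))
        self.B = np.zeros((n_agents, rank, d_out))           # zero init: adapters start inert

    def forward(self, x, agent_id):
        delta = self.A[agent_id] @ self.B[agent_id]          # agent-specific, rank <= `rank`
        return x @ (self.W + delta)
```

The memory argument in the abstract follows directly: each agent adds only r(d_in + d_out) parameters per layer instead of a full d_in x d_out copy.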
|
2502.05574
|
Event Stream-based Visual Object Tracking: HDETrack V2 and A
High-Definition Benchmark
|
cs.CV cs.AI
|
We then introduce a novel hierarchical knowledge distillation strategy that
incorporates the similarity matrix, feature representation, and response
map-based distillation to guide the learning of the student Transformer
network. We also enhance the model's ability to capture temporal dependencies
by applying the temporal Fourier transform to establish temporal relationships
between video frames. We adapt the network model to specific target objects
during testing via a newly proposed test-time tuning strategy to achieve high
performance and flexibility in target tracking. Recognizing the limitations of
existing event-based tracking datasets, which are predominantly low-resolution,
we propose EventVOT, the first large-scale high-resolution event-based tracking
dataset. It comprises 1141 videos spanning diverse categories such as
pedestrians, vehicles, UAVs, ping pong, etc. Extensive experiments on both
low-resolution (FE240hz, VisEvent, FELT), and our newly proposed
high-resolution EventVOT dataset fully validated the effectiveness of our
proposed method. Both the benchmark dataset and source code have been released
on https://github.com/Event-AHU/EventVOT_Benchmark
|
2502.05575
|
Graph-Based Vector Search: An Experimental Evaluation of the
State-of-the-Art
|
cs.IR cs.PF
|
Vector data is prevalent across business and scientific applications, and its
popularity is growing with the proliferation of learned embeddings. Vector data
collections often reach billions of vectors with thousands of dimensions, thus,
increasing the complexity of their analysis. Vector search is the backbone of
many critical analytical tasks, and graph-based methods have become the best
choice for analytical tasks that do not require guarantees on the quality of
the answers. We briefly survey in-memory graph-based vector search, outline the
chronology of the different methods and classify them according to five main
design paradigms: seed selection, incremental insertion, neighborhood
propagation, neighborhood diversification, and divide-and-conquer. We conduct
an exhaustive experimental evaluation of twelve state-of-the-art methods on
seven real data collections, with sizes up to 1 billion vectors. We share key
insights about the strengths and limitations of these methods; e.g., the best
approaches are typically based on incremental insertion and neighborhood
diversification, and the choice of the base graph can hurt scalability.
Finally, we discuss open research directions, such as the importance of
devising more sophisticated data-adaptive seed selection and diversification
strategies.
|
2502.05586
|
A Cost-Benefit Analysis of Additive Manufacturing as a Service
|
cs.ET cs.CE cs.CY
|
The global manufacturing landscape is undergoing a fundamental shift from
resource-intensive mass production to sustainable, localised manufacturing.
This paper presents a comprehensive analysis of a Cloud Crafting Platform that
enables Manufacturing as a Service (MaaS) through additive manufacturing
technologies. The platform connects web shops with local three-dimensional (3D)
printing facilities, allowing customers to purchase products that are
manufactured on-demand in their vicinity. We present the platform's
Service-Oriented Architecture (SOA), deployment on the Microsoft Azure cloud,
and integration with three different 3D printer models in a testbed
environment. A detailed cost-benefit analysis demonstrates the economic
viability of the approach, which generates significant profit margins. The
platform implements a weighted profit-sharing model that fairly compensates all
stakeholders based on their investment and operational responsibilities. Our
results show that on-demand, localised manufacturing through MaaS is not only
technically feasible but also economically viable, while reducing environmental
impact through shortened supply chains and elimination of inventory waste. The
platform's extensible architecture allows for future integration of additional
manufacturing technologies beyond 3D printing.
|
2502.05588
|
Optimizing Information Freshness of IEEE 802.11ax Uplink OFDMA-Based
Random Access
|
cs.IT math.IT
|
The latest WiFi standard, IEEE 802.11ax (WiFi 6), introduces a novel uplink
random access mechanism called uplink orthogonal frequency division multiple
access-based random access (UORA). While existing work has evaluated the
performance of UORA using conventional performance metrics, such as throughput
and delay, its information freshness performance has not been thoroughly
investigated in the literature. This is of practical significance as WiFi 6 and
beyond are expected to support real-time applications. This paper presents the
first attempt to fill this gap by investigating the information freshness,
quantified by the Age of Information (AoI) metric, in UORA networks. We
establish an analytical framework comprising two discrete-time Markov chains
(DTMCs) to characterize the transmission states of stations (STAs) in UORA
networks. Building on the formulated DTMCs, we derive an analytical expression
for the long-term average AoI (AAoI), facilitating the optimization of UORA
parameters for enhanced AoI performance through exhaustive search. To gain
deeper design insights and improve the effectiveness of UORA parameter
optimization, we derive a closed-form expression for the AAoI and its
approximated lower bound for a simplified scenario characterized by a fixed
backoff contention window and generate-at-will status updates. By analyzing the
approximated lower bound of the AAoI, we propose efficient UORA parameter
optimization algorithms that can be realized with only a few comparisons of
different possible values of the parameters to be optimized. Simulation results
validate our analysis and demonstrate that the AAoI achieved through our
proposed parameter optimization algorithm closely approximates the optimal AoI
performance obtained via exhaustive search, outperforming the round-robin and
max-AoI policies in large and low-traffic networks.
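The AoI dynamics underlying this analysis can be sketched with a one-station Monte-Carlo model: under generate-at-will updates, the age grows by one each slot and resets to 1 after a successful transmission. Treating successes as i.i.d. with probability `p_success` is a deliberate simplification of the UORA contention process, not the paper's full DTMC framework:

```python
import numpy as np

def simulate_aaoi(p_success, n_slots=200_000, seed=0):
    """Monte-Carlo estimate of the long-term average AoI (AAoI) for a single
    station with generate-at-will status updates. Each slot succeeds i.i.d.
    with probability p_success (a stand-in for the UORA contention outcome);
    the age is sampled every slot and resets to 1 after a success."""
    rng = np.random.default_rng(seed)
    age, total = 1, 0
    for success in rng.random(n_slots) < p_success:
        total += age                      # sample the current age
        age = 1 if success else age + 1   # reset after a successful delivery
    return total / n_slots
```

For this simplified model the AAoI has the closed form 1/p_success (geometric inter-success times), which is a convenient sanity check before layering on the backoff contention window that the paper's DTMCs capture.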
|
2502.05589
|
On Memory Construction and Retrieval for Personalized Conversational
Agents
|
cs.CL cs.AI
|
To deliver coherent and personalized experiences in long-term conversations,
existing approaches typically perform retrieval augmented response generation
by constructing memory banks from conversation history at the turn level, at
the session level, or through summarization techniques. In this paper,
we present two key findings: (1) The granularity of the memory unit matters:
Turn-level, session-level, and summarization-based methods each exhibit
limitations in both memory retrieval accuracy and the semantic quality of the
retrieved content. (2) Prompt compression methods, such as
LLMLingua-2, can effectively serve as a denoising mechanism, enhancing
memory retrieval accuracy across different granularities. Building on these
insights, we propose SeCom, a method that constructs a memory bank with topical
segments by introducing a conversation Segmentation model, while performing
memory retrieval based on Compressed memory units. Experimental results show
that SeCom outperforms turn-level, session-level, and several
summarization-based methods on long-term conversation benchmarks such as LOCOMO
and Long-MT-Bench+. Additionally, the proposed conversation segmentation method
demonstrates superior performance on dialogue segmentation datasets such as
DialSeg711, TIAGE, and SuperDialSeg.
|
2502.05593
|
Semantic Data Augmentation Enhanced Invariant Risk Minimization for
Medical Image Domain Generalization
|
cs.CV
|
Deep learning has achieved remarkable success in medical image
classification. However, its clinical application is often hindered by data
heterogeneity caused by variations in scanner vendors, imaging protocols, and
operators. Approaches such as invariant risk minimization (IRM) aim to address
this challenge of out-of-distribution generalization. For instance, VIRM
improves upon IRM by tackling the issue of insufficient feature support
overlap, demonstrating promising potential. Nonetheless, these methods face
limitations in medical imaging due to the scarcity of annotated data and the
inefficiency of augmentation strategies. To address these issues, we propose a
novel domain-oriented direction selector to replace the random augmentation
strategy used in VIRM. Our method leverages inter-domain covariance to guide
the augmentation direction, steering data augmentation towards the target
domain. This approach effectively reduces domain discrepancies and enhances
generalization performance. Experiments on a multi-center diabetic retinopathy
dataset demonstrate that our method outperforms state-of-the-art approaches,
particularly under limited data conditions and significant domain
heterogeneity.
|
2502.05594
|
A Hybrid Tabu Scatter Search Algorithm for Simulation-Based Optimization
of Multi-Objective Runway Operations Scheduling
|
cs.NE
|
This dissertation addresses the growing challenge of air traffic flow
management by proposing a simulation-based optimization (SbO) approach for
multi-objective runway operations scheduling. The goal is to optimize airport
capacity utilization while minimizing delays, fuel consumption, and
environmental impacts. Given the NP-Hard complexity of the problem, traditional
analytical methods often rely on oversimplifications and fail to account for
real-world uncertainties, limiting their practical applicability. The proposed
SbO framework integrates a discrete-event simulation model to handle stochastic
conditions and a hybrid Tabu-Scatter Search algorithm to identify
Pareto-optimal solutions, explicitly incorporating uncertainty and fairness
among aircraft as key objectives. Computational experiments using real-world
data from a major U.S. airport demonstrate the approach's effectiveness and
tractability, outperforming traditional methods such as First-Come-First-Served
(FCFS) and deterministic approaches while maintaining schedule fairness. The
algorithm's ability to generate trade-off solutions between competing
objectives makes it a promising decision support tool for air traffic
controllers managing complex runway operations.
|
2502.05595
|
Data efficient Robotic Object Throwing with Model-Based Reinforcement
Learning
|
cs.RO
|
Pick-and-place (PnP) operations, featuring object grasping and trajectory
planning, are fundamental in industrial robotics applications. Despite many
advancements in the field, PnP is limited by workspace constraints, reducing
flexibility. Pick-and-throw (PnT) is a promising alternative where the robot
throws objects to target locations, leveraging extrinsic resources like gravity
to improve efficiency and expand the workspace. However, PnT execution is
complex, requiring precise coordination of high-speed movements and object
dynamics. Solutions to the PnT problem are categorized into analytical and
learning-based approaches. Analytical methods focus on system modeling and
trajectory generation but are time-consuming and offer limited generalization.
Learning-based solutions, in particular Model-Free Reinforcement Learning
(MFRL), offer automation and adaptability but require extensive interaction
time. This paper introduces a Model-Based Reinforcement Learning (MBRL)
framework, MC-PILOT, which combines data-driven modeling with policy
optimization for efficient and accurate PnT tasks. MC-PILOT accounts for model
uncertainties and release errors, demonstrating superior performance in
simulations and real-world tests with a Franka Emika Panda manipulator. The
proposed approach generalizes rapidly to new targets, offering advantages over
analytical and Model-Free methods.
|
2502.05599
|
Online Bidding Algorithms with Strict Return on Spend (ROS) Constraint
|
cs.GT cs.DS cs.LG
|
Auto-bidding problem under a strict return-on-spend constraint (ROSC) is
considered, where an algorithm has to make decisions about how much to bid for
an ad slot depending on the revealed value, and the hidden allocation and
payment function that describes the probability of winning the ad-slot
depending on its bid. The objective of an algorithm is to maximize the expected
utility (product of ad value and probability of winning the ad slot) summed
across all time slots subject to the total expected payment being less than the
total expected utility, called the ROSC. A (surprising) impossibility result is
derived that shows that no online algorithm can achieve a sub-linear regret
even when the value, allocation, and payment functions are drawn i.i.d. from an
unknown distribution. The problem is non-trivial even when the revealed value
remains constant across time slots, and an algorithm with regret guarantee that
is optimal up to logarithmic factor is derived.
|
2502.05605
|
ARIES: Stimulating Self-Refinement of Large Language Models by Iterative
Preference Optimization
|
cs.CL cs.LG
|
A truly intelligent Large Language Model (LLM) should be capable of
correcting errors in its responses through external interactions. However, even
the most advanced models often face challenges in improving their outputs. In
this paper, we explore how to cultivate LLMs with the self-refinement
capability through iterative preference training, and how this ability can be
leveraged to improve model performance during inference. To this end, we
introduce a novel post-training and inference framework, called ARIES: Adaptive
Refinement and Iterative Enhancement Structure. This method iteratively
performs preference training and self-refinement-based data collection. During
training, ARIES strengthens the model's direct question-answering capability
while simultaneously unlocking its self-refinement potential. During inference,
ARIES harnesses this self-refinement capability to generate a series of
progressively refined responses, which are then filtered using either the
Reward Model Scoring or a simple yet effective Rule-Based Selection mechanism,
specifically tailored to our approach, to construct a dataset for the next
round of preference training. Experimental results demonstrate the remarkable
performance of ARIES. When applied to the Llama-3.1-8B model and under the
self-refinement setting, ARIES surpasses powerful models such as GPT-4o,
achieving a 62.3% length-controlled (LC) win rate and a 63.3% raw win rate on AlpacaEval
2, outperforming Iterative DPO by 27.8% and 35.5% respectively, as well as a
50.3% win rate on Arena-Hard, surpassing Iterative DPO by 26.6%. Furthermore,
ARIES consistently enhances performance on mathematical reasoning tasks like
GSM8K and MATH.
|
2502.05606
|
FreeBlend: Advancing Concept Blending with Staged Feedback-Driven
Interpolation Diffusion
|
cs.CV
|
Concept blending is a promising yet underexplored area in generative models.
While recent approaches, such as embedding mixing and latent modification based
on structural sketches, have been proposed, they often suffer from incompatible
semantic information and discrepancies in shape and appearance. In this work,
we introduce FreeBlend, an effective, training-free framework designed to
address these challenges. To mitigate cross-modal loss and enhance feature
detail, we leverage transferred image embeddings as conditional inputs. The
framework employs a stepwise increasing interpolation strategy between latents,
progressively adjusting the blending ratio to seamlessly integrate auxiliary
features. Additionally, we introduce a feedback-driven mechanism that updates
the auxiliary latents in reverse order, facilitating global blending and
preventing rigid or unnatural outputs. Extensive experiments demonstrate that
our method significantly improves both the semantic coherence and visual
quality of blended images, yielding compelling and coherent results.
|
2502.05608
|
Closing the Responsibility Gap in AI-based Network Management: An
Intelligent Audit System Approach
|
cs.AI cs.NI
|
Existing network paradigms have achieved lower downtime as well as a higher
Quality of Experience (QoE) through the use of Artificial Intelligence
(AI)-based network management tools. These AI management systems allow for
automatic responses to changes in network conditions, lowering operation costs
for operators, and improving overall performance. While adopting AI-based
management tools enhances the overall network performance, it also introduces
challenges such as removing human supervision, privacy violations, algorithmic
bias, and model inaccuracies. Furthermore, AI-based agents that fail to address
these challenges should be culpable themselves rather than the network as a
whole. To address this accountability gap, a framework consisting of a Deep
Reinforcement Learning (DRL) model and a Machine Learning (ML) model is
proposed to identify and assign numerical values of responsibility to the
AI-based management agents involved in any decision-making regarding the
network conditions, which eventually affects the end-user. A simulation
environment was created for the framework to be trained using simulated network
operation parameters. In testing, the DRL model identified the AI-based
management agents with 96% accuracy, while the gradient-descent-based ML model
learned the network conditions with 83% accuracy.
|
2502.05609
|
Lossless Acceleration of Large Language Models with Hierarchical
Drafting based on Temporal Locality in Speculative Decoding
|
cs.CL
|
Accelerating inference in Large Language Models (LLMs) is critical for
real-time interactions, as they have been widely incorporated into real-world
services. Speculative decoding, a fully algorithmic solution, has gained
attention for improving inference speed by drafting and verifying tokens,
thereby generating multiple tokens in a single forward pass. However, current
drafting strategies usually require significant fine-tuning or have
inconsistent performance across tasks. To address these challenges, we propose
Hierarchy Drafting (HD), a novel lossless drafting approach that organizes
various token sources into multiple databases in a hierarchical framework based
on temporal locality. In the drafting step, HD sequentially accesses multiple
databases to obtain draft tokens from the highest to the lowest locality,
ensuring consistent acceleration across diverse tasks and minimizing drafting
latency. Our experiments on Spec-Bench using LLMs with 7B and 13B parameters
demonstrate that HD outperforms existing database drafting methods, achieving
robust inference speedups across model sizes, tasks, and temperatures.
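A minimal sketch of the hierarchical-lookup idea (not the paper's implementation): draft tokens are retrieved from n-gram databases ordered by temporal locality, falling back to lower-locality sources on a miss:

```python
def build_ngram_db(tokens, n=2):
    """Map each (n-1)-gram prefix to the token that most recently followed it."""
    db = {}
    for i in range(len(tokens) - n + 1):
        db[tuple(tokens[i:i + n - 1])] = tokens[i + n - 1]
    return db

def hierarchical_draft(prefix, databases, max_draft=4):
    """Draft tokens by querying databases from highest to lowest locality.
    `databases` is ordered, e.g. [current_context_db, history_db, corpus_db]."""
    draft, cur = [], tuple(prefix)
    for _ in range(max_draft):
        nxt = None
        for db in databases:  # highest temporal locality first
            if cur in db:
                nxt = db[cur]
                break
        if nxt is None:
            break
        draft.append(nxt)
        cur = cur[1:] + (nxt,)
    return draft

# Toy usage with word-level tokens standing in for LLM tokens.
context = "the cat sat on the mat and the cat sat".split()
history = "the dog ran home".split()
dbs = [build_ngram_db(context), build_ngram_db(history)]
print(hierarchical_draft(("the",), dbs, max_draft=3))
```

Here a miss in the high-locality context database falls through to the history database, which is the essence of drafting "from the highest to the lowest locality".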
|
2502.05610
|
Towards Sustainable NLP: Insights from Benchmarking Inference Energy in
Large Language Models
|
cs.CL
|
Large language models (LLMs) are increasingly recognized for their
exceptional generative capabilities and versatility across various tasks.
However, the high inference costs associated with these models have not
received adequate attention, particularly when compared to the focus on
training costs in existing research. In response to this gap, our study
conducts a comprehensive benchmarking of LLM inference energy across a wide
range of NLP tasks, where we analyze the impact of different models, tasks,
prompts, and system-related factors on inference energy. Specifically, our
experiments reveal several interesting insights, including strong correlation
of inference energy with output token length and response time. Also, we find
that quantization and optimal batch sizes, along with targeted prompt phrases,
can significantly reduce energy usage. This study is the first to thoroughly
benchmark LLM inference across such a diverse range of aspects, providing
insights and offering several recommendations for improving energy efficiency
in model deployment.
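The reported strong correlation between inference energy and output token length can be checked with a plain Pearson correlation; the measurements below are illustrative placeholders, not figures from the study:

```python
def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical measurements: output token counts vs. energy in joules.
tokens = [32, 64, 128, 256, 512]
energy = [1.1, 2.0, 4.2, 8.1, 16.5]
print(round(pearson(tokens, energy), 3))
```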
|
2502.05615
|
XiHeFusion: Harnessing Large Language Models for Science Communication
in Nuclear Fusion
|
cs.CV cs.AI
|
Nuclear fusion is one of the most promising ways for humans to obtain
infinite energy. Currently, with the rapid development of artificial
intelligence, the mission of nuclear fusion has also entered a critical period
of its development. Helping more people understand nuclear fusion and join its
research is an effective way to accelerate its realization. This paper
proposes the first large model in the
field of nuclear fusion, XiHeFusion, which is obtained through supervised
fine-tuning based on the open-source large model Qwen2.5-14B. We have collected
multi-source knowledge about nuclear fusion tasks to support the training of
this model, including Common Crawl, eBooks, arXiv papers, dissertations, etc. After
the model has mastered the knowledge of the nuclear fusion field, we further
applied chain-of-thought prompting to enhance its logical reasoning ability,
enabling XiHeFusion to provide more accurate and logical answers. In addition, we
propose a test questionnaire containing 180+ questions to assess the
conversational ability of this science popularization large model. Extensive
experimental results show that our nuclear fusion dialogue model, XiHeFusion,
can perform well in answering science popularization knowledge. The pre-trained
XiHeFusion model is released on https://github.com/Event-AHU/XiHeFusion.
|
2502.05620
|
dynoGP: Deep Gaussian Processes for dynamic system identification
|
stat.ML cs.LG
|
In this work, we present a novel approach to system identification for
dynamical systems, based on a specific class of Deep Gaussian Processes (Deep
GPs). These models are constructed by interconnecting linear dynamic GPs
(equivalent to stochastic linear time-invariant dynamical systems) and static
GPs (to model static nonlinearities). Our approach combines the strengths of
data-driven methods, such as those based on neural network architectures, with
the ability to output a probability distribution. This offers a more
comprehensive framework for system identification that includes uncertainty
quantification. Using both simulated and real-world data, we demonstrate the
effectiveness of the proposed approach.
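The block structure the abstract describes (a linear dynamic system feeding a static nonlinearity) is the classical Wiener composition. A deterministic toy version, with the GPs replaced by a fixed first-order filter and tanh purely for illustration, looks like:

```python
import math

def lti_filter(u, a=0.8, b=0.2):
    """First-order LTI block: x[t] = a*x[t-1] + b*u[t]."""
    x, out = 0.0, []
    for ut in u:
        x = a * x + b * ut
        out.append(x)
    return out

def wiener_system(u):
    """Linear dynamic block followed by a static nonlinearity (tanh),
    mirroring the LTI + static-block composition in the abstract."""
    return [math.tanh(z) for z in lti_filter(u)]

# Step response: the filter state approaches b/(1-a) = 1.0, so the
# output approaches tanh(1.0).
y = wiener_system([1.0] * 100)
print(round(y[-1], 3))
```

In dynoGP both blocks would be Gaussian processes, giving predictive distributions rather than point outputs; this sketch only shows the interconnection pattern.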
|
2502.05622
|
Social inequality and cultural factors impact the awareness and reaction
during the cryptic transmission period of pandemic
|
cs.SI
|
The World Health Organization (WHO) declared the COVID-19 outbreak a Public
Health Emergency of International Concern (PHEIC) on January 31, 2020. However,
rumors of a "mysterious virus" had already been circulating in China in
December 2019, possibly preceding the first confirmed COVID-19 case.
Understanding how awareness about an emerging pandemic spreads through society
is vital not only for enhancing disease surveillance, but also for mitigating
demand shocks and social inequities, such as shortages of personal protective
equipment (PPE) and essential supplies. Here we leverage a massive e-commerce
dataset comprising 150 billion online queries and purchase records from 94
million people to detect the traces of early awareness and public response
during the cryptic transmission period of COVID-19. Our analysis focuses on
identifying information gaps across different demographic cohorts, revealing
significant social inequities and the role of cultural factors in shaping
awareness diffusion and response behaviors. By modeling awareness diffusion in
heterogeneous social networks and analyzing online shopping behavior, we
uncover the evolving characteristics of vulnerable populations. Our findings
expand the theoretical understanding of awareness spread and social inequality
in the early stages of a pandemic, highlighting the critical importance of
e-commerce and social network data for addressing future pandemic challenges
effectively and promptly. We also provide actionable recommendations to
better manage and mitigate dynamic social inequalities in public health crises.
|
2502.05623
|
Mixing Time of the Proximal Sampler in Relative Fisher Information via
Strong Data Processing Inequality
|
cs.IT cs.LG math.IT math.OC math.ST stat.TH
|
We study the mixing time guarantee for sampling in relative Fisher
information via the Proximal Sampler algorithm, which is an approximate
proximal discretization of the Langevin dynamics. We show that when the target
probability distribution is strongly log-concave, the relative Fisher
information converges exponentially fast along the Proximal Sampler; this
matches the exponential convergence rate of the relative Fisher information
along the continuous-time Langevin dynamics for strongly log-concave target.
When combined with a standard implementation of the Proximal Sampler via
rejection sampling, this exponential convergence rate provides a high-accuracy
iteration complexity guarantee for the Proximal Sampler in relative Fisher
information when the target distribution is strongly log-concave and
log-smooth. Our proof proceeds by establishing a strong data processing
inequality for relative Fisher information along the Gaussian channel under
strong log-concavity, and a data processing inequality along the reverse
Gaussian channel for a special distribution. The forward and reverse Gaussian
channels compose to form the Proximal Sampler, and these data processing
inequalities imply the exponential convergence rate of the relative Fisher
information along the Proximal Sampler.
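The forward/reverse Gaussian-channel composition is concrete for a 1D standard Gaussian target, where the reverse channel has a closed form. This sketch, with hypothetical step size `eta`, illustrates the algorithm being analyzed (not the paper's proof machinery):

```python
import random

def proximal_sampler(x0, eta, n_steps, rng):
    """Proximal Sampler for the 1D standard Gaussian target pi = N(0, 1).
    Alternates the forward Gaussian channel y ~ N(x, eta) with the exact
    reverse channel x ~ pi(x | y) = N(y/(1+eta), eta/(1+eta))."""
    x = x0
    for _ in range(n_steps):
        y = rng.gauss(x, eta ** 0.5)                            # forward channel
        x = rng.gauss(y / (1 + eta), (eta / (1 + eta)) ** 0.5)  # reverse channel
    return x

# Starting far from the target, the chain should mix to N(0, 1).
rng = random.Random(0)
samples = [proximal_sampler(5.0, eta=1.0, n_steps=50, rng=rng) for _ in range(2000)]
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
print(round(mean, 2), round(var, 2))
```

For general strongly log-concave targets the reverse channel has no closed form and is implemented via rejection sampling, which is the setting of the paper's iteration-complexity guarantee.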
|
2502.05625
|
Training-Free Constrained Generation With Stable Diffusion Models
|
cs.LG
|
Stable diffusion models represent the state-of-the-art in data synthesis
across diverse domains and hold transformative potential for applications in
science and engineering, e.g., by facilitating the discovery of novel solutions
and simulating systems that are computationally intractable to model
explicitly. However, their current utility in these fields is severely limited
by an inability to enforce strict adherence to physical laws and
domain-specific constraints. Without this grounding, the deployment of such
models in critical applications, ranging from material science to
safety-critical systems, remains impractical. This paper addresses this
fundamental limitation by proposing a novel approach to integrate stable
diffusion models with constrained optimization frameworks, enabling them to
generate outputs that satisfy stringent physical and functional requirements.
We demonstrate the effectiveness of this approach through material science
experiments requiring adherence to precise morphometric properties, inverse
design problems involving the generation of stress-strain responses using video
generation with a simulator in the loop, and safety settings where outputs must
avoid copyright infringement.
|
2502.05628
|
AnyEdit: Edit Any Knowledge Encoded in Language Models
|
cs.CL
|
Large language models (LLMs) often produce incorrect or outdated information,
necessitating efficient and precise knowledge updates. Current model editing
methods, however, struggle with long-form knowledge in diverse formats, such as
poetry, code snippets, and mathematical derivations. These limitations arise
from their reliance on editing a single token's hidden state, a bottleneck we
term the "efficacy barrier". To solve this, we propose AnyEdit, a new
autoregressive editing paradigm. It decomposes long-form knowledge into
sequential chunks and iteratively edits the key token in each chunk, ensuring
consistent and accurate outputs. Theoretically, we ground AnyEdit in the Chain
Rule of Mutual Information, showing its ability to update any knowledge within
LLMs. Empirically, it outperforms strong baselines by 21.5% on benchmarks
including UnKEBench, AKEW, and our new EditEverything dataset for long-form
diverse-formatted knowledge. Additionally, AnyEdit serves as a plug-and-play
framework, enabling current editing methods to update knowledge with arbitrary
length and format, significantly advancing the scope and practicality of LLM
knowledge editing.
|
2502.05629
|
TrackDiffuser: Nearly Model-Free Bayesian Filtering with Diffusion Model
|
cs.LG eess.SP
|
State estimation remains a fundamental challenge across numerous domains,
from autonomous driving, aircraft tracking to quantum system control. Although
Bayesian filtering has been the cornerstone solution, its classical model-based
paradigm faces two major limitations: it struggles with inaccurate state space
model (SSM) and requires extensive prior knowledge of noise characteristics. We
present TrackDiffuser, a generative framework addressing both challenges by
reformulating Bayesian filtering as a conditional diffusion model. Our approach
implicitly learns system dynamics from data to mitigate the effects of
inaccurate SSM, while simultaneously circumventing the need for explicit
measurement models and noise priors by establishing a direct relationship
between measurements and states. Through an implicit predict-and-update
mechanism, TrackDiffuser preserves the interpretability advantage of
traditional model-based filtering methods. Extensive experiments demonstrate
that our framework substantially outperforms both classical and contemporary
hybrid methods, especially in challenging non-linear scenarios involving
non-Gaussian noise. Notably, TrackDiffuser exhibits remarkable robustness to
SSM inaccuracies, offering a practical solution for real-world state estimation
problems where perfect models and prior knowledge are unavailable.
|
2502.05632
|
Amorphous Fortress Online: Collaboratively Designing Open-Ended
Multi-Agent AI and Game Environments
|
cs.AI
|
This work introduces Amorphous Fortress Online -- a web-based platform where
users can design petri-dish-like environments and games consisting of
multi-agent AI characters. Users can play, create, and share artificial life
and game environments made up of microscopic but transparent finite-state
machine agents that interact with each other. The website features multiple
interactive editors and accessible settings to view the multi-agent
interactions directly from the browser. This system serves to provide a
database of thematically diverse AI and game environments that use the emergent
behaviors of simple AI agents.
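The "microscopic but transparent finite-state machine agents" can be pictured with a minimal FSM sketch; the states and events below are invented for illustration, not taken from the platform:

```python
class FSMAgent:
    """Tiny finite-state-machine agent: transitions fire on named events;
    unknown (state, event) pairs leave the state unchanged."""
    def __init__(self, transitions, state="idle"):
        self.transitions = transitions  # maps (state, event) -> next state
        self.state = state

    def step(self, event):
        self.state = self.transitions.get((self.state, event), self.state)
        return self.state

# Hypothetical two-state creature: wanders until it sees another agent.
agent = FSMAgent({("idle", "see"): "chase", ("chase", "lost"): "idle"})
print([agent.step(e) for e in ["see", "lost", "lost"]])
```

Emergent behavior in such environments comes from many agents of this kind reacting to events generated by one another.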
|
2502.05633
|
Mol-MoE: Training Preference-Guided Routers for Molecule Generation
|
cs.LG
|
Recent advances in language models have enabled framing molecule generation
as sequence modeling. However, existing approaches often rely on
single-objective reinforcement learning, limiting their applicability to
real-world drug design, where multiple competing properties must be optimized.
Traditional multi-objective reinforcement learning (MORL) methods require
costly retraining for each new objective combination, making rapid exploration
of trade-offs impractical. To overcome these limitations, we introduce Mol-MoE,
a mixture-of-experts (MoE) architecture that enables efficient test-time
steering of molecule generation without retraining. Central to our approach is
a preference-based router training objective that incentivizes the router to
combine experts in a way that aligns with user-specified trade-offs. This
provides improved flexibility in exploring the chemical property space at test
time, facilitating rapid trade-off exploration. Benchmarking against
state-of-the-art methods, we show that Mol-MoE achieves superior sample quality
and steerability.
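Test-time steering with a mixture of experts reduces, at its simplest, to combining per-property expert outputs with preference weights; this sketch uses toy logits and a fixed weight vector standing in for the (learned) router's output:

```python
def mix_experts(expert_logits, weights):
    """Combine per-property expert logits with user-preference weights,
    as a learned router would at test time (weights must sum to 1)."""
    assert abs(sum(weights) - 1.0) < 1e-9
    vocab = len(expert_logits[0])
    return [sum(w * e[i] for w, e in zip(weights, expert_logits))
            for i in range(vocab)]

# Two toy "experts" over a 3-token vocabulary, each favoring one property
# (names like QED and SA are common drug-design objectives, used here
# purely as placeholders).
qed_expert = [2.0, 0.0, 0.0]
sa_expert = [0.0, 2.0, 0.0]
mixed = mix_experts([qed_expert, sa_expert], weights=[0.75, 0.25])
print(mixed)
```

Changing the weight vector re-steers generation toward a different property trade-off without any retraining, which is the flexibility the abstract claims.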
|
2502.05637
|
Adversarial Machine Learning: Attacks, Defenses, and Open Challenges
|
cs.CR cs.AI
|
Adversarial Machine Learning (AML) addresses vulnerabilities in AI systems
where adversaries manipulate inputs or training data to degrade performance.
This article provides a comprehensive analysis of evasion and poisoning
attacks, formalizes defense mechanisms with mathematical rigor, and discusses
the challenges of implementing robust solutions in adaptive threat models.
Additionally, it highlights open challenges in certified robustness,
scalability, and real-world deployment.
|