| id | title | categories | abstract |
|---|---|---|---|
2502.07056
|
Autonomous Deep Agent
|
cs.AI cs.LG
|
This technical brief introduces Deep Agent, an advanced autonomous AI system
designed to manage complex multi-phase tasks through a novel hierarchical task
management architecture. The system's foundation is built on our Hierarchical
Task DAG (HTDAG) framework, which dynamically decomposes high-level objectives
into manageable sub-tasks while rigorously maintaining dependencies and
execution coherence. Deep Agent advances beyond traditional agent systems
through three key innovations: First, it implements a recursive two-stage
planner-executor architecture that enables continuous task refinement and
adaptation as circumstances change. Second, it features an Autonomous API &
Tool Creation (AATC) system that automatically generates reusable components
from UI interactions, substantially reducing operational costs for similar
tasks. Third, it incorporates a Prompt Tweaking Engine and Autonomous Prompt
Feedback Learning components that optimize Large Language Model prompts for
specific scenarios, enhancing both inference accuracy and operational
stability. These components are integrated to form a service infrastructure
that manages user contexts, handles complex task dependencies, and orchestrates
end-to-end agentic workflow execution. Through this sophisticated architecture,
Deep Agent establishes a novel paradigm in self-governing AI systems,
demonstrating robust capability to independently handle intricate, multi-step
tasks while maintaining consistent efficiency and reliability through
continuous self-optimization.
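The HTDAG framework itself is specific to Deep Agent, but its core invariant, executing a sub-task only after all of its dependencies complete, can be sketched with a topological sort over a task DAG. This is an illustrative sketch; the task names and dependency structure below are hypothetical.

```python
from collections import defaultdict, deque

def topological_order(tasks, deps):
    """Return an execution order for `tasks` that respects `deps`,
    a dict mapping each task to its list of prerequisite tasks."""
    indegree = {t: 0 for t in tasks}
    children = defaultdict(list)
    for task, prereqs in deps.items():
        for p in prereqs:
            children[p].append(task)
            indegree[task] += 1
    queue = deque(t for t in tasks if indegree[t] == 0)
    order = []
    while queue:
        t = queue.popleft()
        order.append(t)
        for c in children[t]:
            indegree[c] -= 1
            if indegree[c] == 0:
                queue.append(c)
    if len(order) != len(tasks):
        raise ValueError("cycle detected: not a DAG")
    return order
```

A real planner-executor loop would re-run this after each dynamic re-decomposition, since the DAG changes as circumstances do.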
|
2502.07057
|
Tokenization Standards for Linguistic Integrity: Turkish as a Benchmark
|
cs.CL
|
Tokenization is a fundamental preprocessing step in NLP, directly impacting
large language models' (LLMs) ability to capture syntactic, morphosyntactic,
and semantic structures. This paper introduces a novel framework for
systematically evaluating tokenization strategies, addressing challenges in
morphologically rich and low-resource languages. Using a Turkish dataset of
6,200 multiple-choice questions from the Massive Multitask Language
Understanding (MMLU) benchmark, the framework assesses tokenizers across five
key metrics: vocabulary size, token count, processing time, language-specific
token percentages (%TR), and token purity (%Pure). These metrics provide a
structured approach to evaluating how well tokenizers preserve linguistic
structures. While %TR measures the proportion of valid words in the target
language, %Pure assesses the alignment of tokens with meaningful linguistic
units, such as roots and valid morphemes, minimizing semantic fragmentation. The
findings reveal that %TR, introduced as a critical metric, exhibits a stronger
correlation with downstream performance (e.g., MMLU scores) than token purity,
emphasizing its role in improving model accuracy. Additionally, larger model
parameters do not necessarily yield better tokenization quality or enhanced
results, highlighting the importance of tailored tokenization strategies that
prioritize linguistic alignment. This framework sets a new standard for
developing robust tokenization methods optimized for morphologically complex
and low-resource languages. Future work will refine morphological analysis,
explore domain-specific customizations, and conduct cross-linguistic
evaluations to further enhance tokenization practices.
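The abstract does not give the exact formulas for %TR and %Pure; a plausible sketch, assuming %TR counts tokens that are valid words in a target-language lexicon and %Pure counts tokens matching valid roots or morphemes (the toy lexicon, morpheme set, and subword markers below are illustrative only):

```python
def pct_tr(tokens, lexicon):
    """%TR: percentage of tokens that are valid words in the target language.
    Subword markers (SentencePiece '▁', WordPiece '#') are stripped first."""
    valid = sum(1 for t in tokens if t.strip("▁#").lower() in lexicon)
    return 100.0 * valid / len(tokens)

def pct_pure(tokens, morphemes):
    """%Pure: percentage of tokens aligning with meaningful linguistic units
    (roots or valid morphemes), i.e. tokens avoiding semantic fragmentation."""
    pure = sum(1 for t in tokens if t.strip("▁#").lower() in morphemes)
    return 100.0 * pure / len(tokens)
```

For a Turkish word like "evlerde" ("in the houses"), a tokenizer emitting `ev + ler + de` would score high on %Pure even though only `ev` is a standalone word counted by %TR.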
|
2502.07058
|
Using Contextually Aligned Online Reviews to Measure LLMs' Performance
Disparities Across Language Varieties
|
cs.CL cs.HC
|
A language can have different varieties. These varieties can affect the
performance of natural language processing (NLP) models, including large
language models (LLMs), which are often trained on data from widely spoken
varieties. This paper introduces a novel and cost-effective approach to
benchmark model performance across language varieties. We argue that
international online review platforms, such as Booking.com, can serve as
effective data sources for constructing datasets that capture comments in
different language varieties from similar real-world scenarios, like reviews
for the same hotel with the same rating using the same language (e.g., Mandarin
Chinese) but different language varieties (e.g., Taiwan Mandarin, Mainland
Mandarin). As a proof of concept, we constructed a contextually aligned dataset
comprising reviews in Taiwan Mandarin and Mainland Mandarin and tested six LLMs
in a sentiment analysis task. Our results show that LLMs consistently
underperform in Taiwan Mandarin.
|
2502.07059
|
Federated Continual Learning: Concepts, Challenges, and Solutions
|
cs.LG cs.AI
|
Federated Continual Learning (FCL) has emerged as a robust solution for
collaborative model training in dynamic environments, where data samples are
continuously generated and distributed across multiple devices. This survey
provides a comprehensive review of FCL, focusing on key challenges such as
heterogeneity, model stability, communication overhead, and privacy
preservation. We explore various forms of heterogeneity and their impact on
model performance. Solutions for non-IID data, resource-constrained platforms,
and personalized learning are reviewed to illustrate the complexities of
handling heterogeneous data distributions. Next, we review techniques for
ensuring model stability and avoiding catastrophic forgetting, which are
critical in non-stationary environments. We also review privacy-preserving
techniques for FCL. Finally, the survey integrates insights from federated
learning and continual learning to present strategies for improving the
efficacy and scalability of FCL systems, making them applicable to a wide
range of real-world scenarios.
|
2502.07064
|
Contextual Thompson Sampling via Generation of Missing Data
|
cs.LG cs.AI stat.ML
|
We introduce a framework for Thompson sampling contextual bandit algorithms,
in which the algorithm's ability to quantify uncertainty and make decisions
depends on the quality of a generative model that is learned offline. Instead
of viewing uncertainty in the environment as arising from unobservable latent
parameters, our algorithm treats uncertainty as stemming from missing, but
potentially observable, future outcomes. If these future outcomes were all
observed, one could simply make decisions using an "oracle" policy fit on the
complete dataset. Inspired by this conceptualization, at each decision-time,
our algorithm uses a generative model to probabilistically impute missing
future outcomes, fits a policy using the imputed complete dataset, and uses
that policy to select the next action. We formally show that this algorithm is
a generative formulation of Thompson Sampling and prove a state-of-the-art
regret bound for it. Notably, our regret bound i) depends on the probabilistic
generative model only through the quality of its offline prediction loss, and
ii) applies to any method of fitting the "oracle" policy, which easily allows
one to adapt Thompson sampling to decision-making settings with fairness and/or
resource constraints.
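A minimal sketch of the decision loop described above, for a toy 2-armed Bernoulli bandit: bootstrap resampling of observed rewards stands in for the offline-learned generative model, and "fit the oracle policy" collapses to picking the best empirical mean on the completed dataset. Both substitutions are simplifications, not the paper's method.

```python
import random

def impute_and_act(history, n_future, rng):
    """One decision step of generative Thompson sampling (toy version).
    `history` maps each arm to its list of observed 0/1 rewards. A stand-in
    generative model (bootstrap resampling) imputes `n_future` missing
    future outcomes per arm; the 'oracle' policy fit on the imputed complete
    dataset then selects the arm with the best empirical mean."""
    means = {}
    for arm, rewards in history.items():
        imputed = [rng.choice(rewards) for _ in range(n_future)]  # impute
        complete = rewards + imputed
        means[arm] = sum(complete) / len(complete)                # fit policy
    return max(means, key=means.get)                              # act
```

Because the imputation is random, repeated calls explore: arms with uncertain histories sometimes look best, which is exactly the Thompson-sampling behavior the paper formalizes.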
|
2502.07065
|
Active Inference through Incentive Design in Markov Decision Processes
|
eess.SY cs.SY
|
We present a method for active inference with partial observations in
stochastic systems through incentive design, also known as the leader-follower
game. Consider a leader agent who aims to infer a follower agent's type given a
finite set of possible types. Different types of followers differ in either the
dynamical model, the reward function, or both. We assume the leader can
partially observe a follower's behavior in the stochastic system modeled as a
Markov decision process, in which the follower takes an optimal policy to
maximize a total reward. To improve inference accuracy and efficiency, the
leader can offer side payments (incentives) to the followers such that
followers of different types, under the incentive design, exhibit diverging
behaviors that facilitate the leader's inference task. We show the problem of
active inference through incentive design can be formulated as a special class
of leader-follower games, where the leader's objective is to balance the
information gain and cost of incentive design. The information gain is measured
by the entropy of the estimated follower's type given partial observations.
Furthermore, we demonstrate that this problem can be solved by reducing it to a
single-level optimization through softmax temporal consistency between
followers' policies and value functions. This reduction allows us to develop an
efficient gradient-based algorithm. We utilize observable operators in the
hidden Markov model (HMM) to compute the necessary gradients and demonstrate
the effectiveness of our approach through experiments in stochastic grid world
environments.
|
2502.07067
|
Repository-level Code Search with Neural Retrieval Methods
|
cs.IR
|
This paper presents a multi-stage reranking system for repository-level code
search, which leverages the widely available commit histories of large
open-source repositories to aid in bug fixing. We define the task of
repository-level code search as retrieving the set of files from the current
state of a code repository that are most relevant to addressing a user's
question or bug. The proposed approach combines BM25-based retrieval over
commit messages with neural reranking using CodeBERT to identify the most
pertinent files. By learning patterns from diverse repositories and their
commit histories, the system can surface relevant files for the task at hand.
The system leverages both commit messages and source code for relevance
matching, and is evaluated in both normal and oracle settings. Experiments on a
new dataset created from 7 popular open-source repositories demonstrate
substantial improvements of up to 80% in MAP, MRR and P@1 over the BM25
baseline, across a diverse set of queries, demonstrating the effectiveness of
this approach. We hope this work aids LLM agents as a tool for better code
search and understanding. Our code and results are publicly available.
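The first retrieval stage can be illustrated with a small pure-Python Okapi BM25 over tokenized commit messages; the CodeBERT reranking stage is omitted, and the documents below are illustrative, not from the paper's dataset.

```python
import math
from collections import Counter

def bm25_scores(query_terms, docs, k1=1.5, b=0.75):
    """Score each document (a list of terms, e.g. a tokenized commit
    message) against the query using the Okapi BM25 formula."""
    N = len(docs)
    avgdl = sum(len(d) for d in docs) / N
    df = Counter()                       # document frequency per term
    for d in docs:
        df.update(set(d))
    scores = []
    for d in docs:
        tf = Counter(d)                  # term frequency in this document
        s = 0.0
        for q in query_terms:
            if tf[q] == 0:
                continue
            idf = math.log(1 + (N - df[q] + 0.5) / (df[q] + 0.5))
            s += idf * tf[q] * (k1 + 1) / (tf[q] + k1 * (1 - b + b * len(d) / avgdl))
        scores.append(s)
    return scores
```

In the paper's pipeline, the top-scoring commits would map back to the files they touched, which a neural reranker then reorders.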
|
2502.07068
|
Specializing Large Language Models to Simulate Survey Response
Distributions for Global Populations
|
cs.CL
|
Large-scale surveys are essential tools for informing social science research
and policy, but running surveys is costly and time-intensive. The ability to
accurately simulate group-level survey results would therefore be very
valuable to social science research. Prior work has explored the use of large
language models (LLMs) for simulating human behaviors, mostly through
prompting. In this paper, we are the first to specialize LLMs for the task of
simulating survey response distributions. As a testbed, we use country-level
results from two global cultural surveys. We devise a fine-tuning method based
on first-token probabilities to minimize divergence between predicted and
actual response distributions for a given question. Then, we show that this
method substantially outperforms other methods and zero-shot classifiers, even
on unseen questions, countries, and a completely unseen survey. While even our
best models struggle with the task, especially on unseen questions, our results
demonstrate the benefits of specialization for simulation, which may accelerate
progress towards sufficiently accurate simulation in the future.
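The abstract does not name the divergence being minimized; one common choice is a KL divergence between a survey's empirical answer shares and the model's first-token probabilities over the answer options, sketched here (the distributions are illustrative):

```python
import math

def kl_divergence(actual, predicted, eps=1e-12):
    """KL(actual || predicted) between a survey's empirical response
    distribution and a model's first-token probabilities over the same
    answer options. `eps` guards against zero predicted probabilities."""
    return sum(a * math.log(a / (p + eps)) for a, p in zip(actual, predicted) if a > 0)
```

During fine-tuning, this quantity (or a related divergence) would serve as the per-question loss, driving the model's first-token distribution toward the observed country-level response shares.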
|
2502.07069
|
Semantics-Aware Updates from Remote IoT Devices to Interconnected LEO
Satellites
|
cs.NI cs.IT math.IT
|
Providing timely and informative data in Integrated Terrestrial and
Non-Terrestrial Networks (T-NTNs) is critical as data volume continues to grow
while the resources available on devices remain limited. To address this, we
adopt a semantics-aware approach to optimize the Version Age of Information
(VAoI) in a status update system in which a remote Energy Harvesting (EH)
Internet of Things (IoT) device samples data and transmits it to a network of
interconnected Low Earth Orbit (LEO) satellites for dissemination and
utilization. The optimal update policy is derived through stochastic modeling
and optimization of the VAoI across the network. The results indicate that this
policy reduces the frequency of updates by skipping stale or irrelevant data,
significantly improving energy efficiency.
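The "skip stale or irrelevant data" behavior can be caricatured in a few lines; a fixed version-age threshold stands in for the paper's stochastically optimized policy.

```python
def simulate_vaoi(fresh_flags, threshold):
    """Toy Version Age of Information trace for an EH IoT device. Each step,
    `fresh_flags` says whether the source generated a new version (VAoI grows
    by 1); a threshold rule (standing in for the optimal update policy)
    transmits, resetting VAoI to 0, only once enough versions are stale."""
    vaoi, transmissions, trace = 0, 0, []
    for fresh in fresh_flags:
        if fresh:
            vaoi += 1
        if vaoi >= threshold:
            vaoi = 0
            transmissions += 1
        trace.append(vaoi)
    return trace, transmissions
```

Raising the threshold trades a slightly higher average VAoI for fewer transmissions, which is the energy-efficiency lever the abstract describes.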
|
2502.07070
|
Comprehensive Analysis of Thermal Dissipation in Lithium-Ion Battery
Packs
|
eess.SY cs.SY hep-ph
|
Effective thermal management is critical for the safe and efficient operation
of lithium-ion battery packs, particularly in applications such as drones, where
compact designs and varying airflow conditions present unique challenges. This
study investigates the thermal performance of a 16-cell lithium-ion battery
pack by optimizing cooling airflow configurations and integrating phase change
materials (PCMs) for enhanced heat dissipation. Seven geometric configurations
were evaluated under airflow speeds ranging from 0 to 15 m/s, reflecting the
operational conditions of civilian drones. A comprehensive 3D simulation
approach was used to analyze the effects of inlet and outlet configurations,
airflow dynamics, and PCM phase transition behavior. Results indicate that the
trapezoidal (wide-base) configuration, paired with a 5-inlet and 1-outlet
setup, achieves the most balanced performance, effectively maintaining optimal
operating temperatures across low and high-speed airflow conditions. PCM
integration further stabilized thermal behavior, with phase change durations
extending to 12.5 min under tested conditions. These findings highlight the
importance of geometric optimization and material integration in advancing
compact and reliable thermal management systems for energy-dense battery packs.
This study provides a foundation for designing efficient cooling strategies
tailored to lightweight applications such as drones and portable energy storage
systems.
|
2502.07071
|
TRADES: Generating Realistic Market Simulations with Diffusion Models
|
q-fin.TR cs.AI cs.LG q-fin.CP
|
Financial markets are complex systems characterized by high statistical
noise, nonlinearity, and constant evolution. Thus, modeling them is extremely
hard. We address the task of generating realistic and responsive Limit Order
Book (LOB) market simulations, which are fundamental for calibrating and
testing trading strategies, performing market impact experiments, and
generating synthetic market data. Previous works fall short in the realism,
usefulness, and responsiveness of the generated simulations. To bridge this
gap, we propose a
novel TRAnsformer-based Denoising Diffusion Probabilistic Engine for LOB
Simulations (TRADES). TRADES generates realistic order flows conditioned on the
state of the market, leveraging a transformer-based architecture that captures
the temporal and spatial characteristics of high-frequency market data. There
is a notable absence of quantitative metrics for evaluating generative market
simulation models in the literature. To tackle this problem, we adapt the
predictive score, a metric measured as a mean absolute error (MAE), by training a stock price
predictive model on synthetic data and testing it on real data. We compare
TRADES with previous works on two stocks, reporting 3.27x and 3.47x
improvements over the SoTA according to the predictive score, demonstrating that we
generate useful synthetic market data for financial downstream tasks. We assess
TRADES's market simulation realism and responsiveness, showing that it
effectively learns the conditional data distribution and successfully reacts to
an experimental agent, paving the way for calibration and evaluation of
trading strategies and market impact experiments. We developed DeepMarket,
the first open-source Python framework for market simulation with deep
learning. Our repository includes a synthetic LOB dataset composed of TRADES's
generated simulations. We release the code at
github.com/LeonardoBerti00/DeepMarket.
|
2502.07072
|
IRepair: An Intent-Aware Approach to Repair Data-Driven Errors in Large
Language Models
|
cs.CL cs.AI cs.SE
|
Not a day goes by without hearing about the impressive feats of large
language models (LLMs), and equally, not a day passes without hearing about
their challenges. LLMs are notoriously vulnerable to biases in their dataset,
leading to issues such as toxicity. While domain-adaptive training has been
employed to mitigate these issues, these techniques often address all model
parameters indiscriminately during the repair process, resulting in poor repair
quality and reduced model versatility. In this paper, we introduce a novel
dynamic slicing-based intent-aware LLM repair strategy, IRepair. This approach
selectively targets the most error-prone sections of the model for repair.
Specifically, we propose dynamically slicing the model's most sensitive layers
that require immediate attention, concentrating repair efforts on those areas.
This method enables more effective repairs with potentially less impact on the
model's overall performance by altering a smaller portion of the model. We
evaluated our technique on three models from the GPT2 and GPT-Neo families,
with parameters ranging from 800M to 1.6B, in a toxicity mitigation setup. Our
results show that IRepair repairs errors 43.6% more effectively while causing
46% less disruption to general performance compared to the closest baseline,
direct preference optimization. Our empirical analysis also reveals that errors
are more concentrated in a smaller section of the model, with the top 20% of
layers exhibiting 773% more error density than the remaining 80%. This
highlights the need for selective repair. Additionally, we demonstrate that a
dynamic selection approach is essential for addressing errors dispersed
throughout the model, ensuring a robust and efficient repair.
|
2502.07076
|
On the use of neural networks for the structural characterization of
polymeric porous materials
|
cond-mat.soft cond-mat.mtrl-sci cs.CV eess.IV
|
Structural characterization is an essential task in the study of porous
materials. To achieve reliable results, it requires evaluating images with
hundreds of pores. Current methods are time-consuming and subject to human
error and subjectivity. A completely automatic tool would not only
speed up the process but also enhance its reliability and reproducibility.
Therefore, the main objective of this article is the study of a
deep-learning-based technique for the structural characterization of porous
materials, through the use of a convolutional neural network. Several
fine-tuned Mask R-CNN models are evaluated using different training
configurations in four separate datasets each composed of numerous SEM images
of diverse polymeric porous materials: closed-pore extruded polystyrene (XPS),
polyurethane (PU), and poly(methyl methacrylate) (PMMA), and open-pore PU.
Results prove the tool capable of providing very accurate results, equivalent
to those achieved by time-consuming manual methods, in a matter of seconds.
|
2502.07077
|
Multi-turn Evaluation of Anthropomorphic Behaviours in Large Language
Models
|
cs.CL cs.CY cs.HC
|
The tendency of users to anthropomorphise large language models (LLMs) is of
growing interest to AI developers, researchers, and policy-makers. Here, we
present a novel method for empirically evaluating anthropomorphic LLM
behaviours in realistic and varied settings. Going beyond single-turn static
benchmarks, we contribute three methodological advances in state-of-the-art
(SOTA) LLM evaluation. First, we develop a multi-turn evaluation of 14
anthropomorphic behaviours. Second, we present a scalable, automated approach
by employing simulations of user interactions. Third, we conduct an
interactive, large-scale human subject study (N=1101) to validate that the
model behaviours we measure predict real users' anthropomorphic perceptions. We
find that all SOTA LLMs evaluated exhibit similar behaviours, characterised by
relationship-building (e.g., empathy and validation) and first-person pronoun
use, and that the majority of behaviours only first occur after multiple turns.
Our work lays an empirical foundation for investigating how design choices
influence anthropomorphic model behaviours and for progressing the ethical
debate on the desirability of these behaviours. It also showcases the necessity
of multi-turn evaluations for complex social phenomena in human-AI interaction.
|
2502.07081
|
Fast Clustering of Categorical Big Data
|
cs.LG cs.DB
|
The K-Modes algorithm, developed for clustering categorical data, is
algorithmically simple but suffers from unreliable clustering quality and
efficiency, both heavily influenced by the choice of initial cluster centers.
In this paper, we investigate Bisecting K-Modes (BK-Modes), a successive
bisecting process for finding clusters, examining how well the cluster centers
produced by the bisecting process serve as initial centers for K-Modes.
BK-Modes works by splitting a dataset into
multiple clusters iteratively with one cluster being chosen and bisected into
two clusters in each iteration. We use the sum of distances of data to their
cluster centers as the selection metric to choose a cluster to be bisected in
each iteration. This iterative process stops when K clusters are produced. The
centers of these K clusters are then used as the initial cluster centers for
the K-Modes. Experimental studies of the BK-Modes were carried out and were
compared against the K-Modes with multiple sets of initial cluster centers as
well as the best of the existing methods we found so far in our survey.
Experimental results indicated good performance of BK-Modes in both
clustering quality and efficiency for large datasets.
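The bisecting procedure described above can be sketched as follows. The seeding of each bisection and the fixed iteration count are simplifications of the actual algorithm; distances are Hamming distances and centers are per-attribute modes, as in standard K-Modes.

```python
from collections import Counter

def hamming(a, b):
    """Number of attributes on which two categorical rows disagree."""
    return sum(x != y for x, y in zip(a, b))

def mode_center(rows):
    """Per-attribute mode of a set of categorical rows."""
    return tuple(Counter(col).most_common(1)[0][0] for col in zip(*rows))

def bisect_once(rows):
    """Split rows into two clusters with a short 2-Modes pass."""
    c1, c2 = rows[0], rows[-1]           # naive seeding for the sketch
    for _ in range(5):
        g1 = [r for r in rows if hamming(r, c1) <= hamming(r, c2)]
        g2 = [r for r in rows if hamming(r, c1) > hamming(r, c2)]
        if not g2:                       # avoid an empty cluster
            g1, g2 = g1[:-1], g1[-1:]
        c1, c2 = mode_center(g1), mode_center(g2)
    return g1, g2

def bk_modes_centers(rows, k):
    """Repeatedly bisect the cluster with the largest sum of distances to
    its center until k clusters exist; return their centers as K-Modes seeds."""
    clusters = [rows]
    while len(clusters) < k:
        cost = lambda c: sum(hamming(r, mode_center(c)) for r in c)
        target = max(clusters, key=cost)
        clusters.remove(target)
        clusters.extend(bisect_once(target))
    return [mode_center(c) for c in clusters]
```

The returned centers would then initialize a full K-Modes run, which is the usage the paper evaluates.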
|
2502.07082
|
"Once Upon a Time..." Literary Narrative Connectedness Progresses with
Grade Level: Potential Impact on Reading Fluency and Literacy Skills
|
cs.CL
|
Selecting an appropriate book is crucial for fostering reading habits in
children. While children exhibit varying levels of complexity when generating
oral narratives, the question arises: do children's books also differ in
narrative complexity? This study explores the narrative dynamics of literary
texts used in schools, focusing on how their complexity evolves across
different grade levels. Using Word-Recurrence Graph Analysis, we examined a
dataset of 1,627 literary texts spanning 13 years of education. The findings
reveal significant exponential growth in connectedness, particularly during the
first three years of schooling, mirroring patterns observed in children's oral
narratives. These results highlight the potential of literary texts as a tool
to support the development of literacy skills.
|
2502.07085
|
Real-time Optimization for Wind-to-H2 Driven Critical Infrastructures
Based on Active Constraints Identification and Integer Variables Prediction
|
eess.SY cs.SY
|
This paper proposes a concept of wind-to-hydrogen-driven critical
infrastructure (W2H-CI) as an engineering solution for decarbonizing the power
generation sector: W2H-CI utilizes wind power to produce hydrogen through
electrolysis and combines it with the carbon captured from fossil fuel power
plants. First, a convex mathematical model of W2H-CI is developed. Then, an
optimization model for optimal operation of W2H-CI, which is a large-scale
mixed-integer convex program (MICP), is proposed. Moreover, we propose to solve
this problem in real-time in order to hedge against the uncertainty of wind
power. For this purpose, a novel solution method based on active constraints
identification and integer variable prediction is introduced. This method can
solve MICP problems very fast since it uses historical optimization data to
predict the values of binary variables and a limited number of constraints
which most likely contain all active constraints. We validate the effectiveness
of the proposed fast solution method using two W2H-CI case studies.
|
2502.07087
|
Evaluating the Systematic Reasoning Abilities of Large Language Models
through Graph Coloring
|
cs.LG
|
Contemporary large language models are powerful problem-solving tools, but
they exhibit weaknesses in their reasoning abilities which ongoing research
seeks to mitigate. We investigate graph coloring as a means of evaluating an
LLM's capacities for systematic step-by-step reasoning and possibility space
exploration, as well as effects of semantic problem framing. We test Claude 3.5
Sonnet, Llama 3.1 405B, Gemini 1.5 Pro, GPT-4o, o1-mini, and DeepSeek-R1 on a
dataset of $k$-coloring problems with $2 \leq k \leq 4$ and vertex count $4
\leq n \leq 8$, using partial algorithmic solvers to further categorize
problems by difficulty. In addition to substantial but varying framing effects,
we find that all models except o1-mini and R1 exhibit $>60\%$ error rates on
difficult problem types in all frames ($>15\%$ for o1-mini and $>10\%$ for R1),
and no model achieves perfect accuracy even in the simple domain of 2-coloring
4-vertex graphs. Our results highlight both the considerable recent progress in
LLM systematic reasoning and the limits of its reliability, especially in
relation to increasing computational costs. We expect that more complex graph
coloring problems, and procedural generation of arbitrary-complexity reasoning
problems more broadly, offer further untapped potential for LLM benchmarking.
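With $n \leq 8$ and $k \leq 4$, these instances are small enough to solve exactly by brute force; a sketch of a checker and exhaustive solver of the kind one could use to grade model answers (not necessarily the paper's own solver):

```python
from itertools import product

def valid_coloring(edges, coloring):
    """Check that no edge joins two same-colored vertices; `coloring` is
    indexed by vertex id."""
    return all(coloring[u] != coloring[v] for u, v in edges)

def k_colorable(n, edges, k):
    """Brute-force k-colorability over all k**n assignments, feasible
    for the small instances (n <= 8, k <= 4) in the dataset."""
    return any(valid_coloring(edges, c) for c in product(range(k), repeat=n))
```

An LLM's proposed coloring can be validated with `valid_coloring`, while `k_colorable` establishes the ground-truth label for each instance.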
|
2502.07088
|
Kernels of Selfhood: GPT-4o shows humanlike patterns of cognitive
consistency moderated by free choice
|
cs.CY cs.AI cs.CL cs.HC cs.LG
|
Large Language Models (LLMs) show emergent patterns that mimic human
cognition. We explore whether they also mirror other, less deliberative human
psychological processes. Drawing upon classical theories of cognitive
consistency, two preregistered studies tested whether GPT-4o changed its
attitudes toward Vladimir Putin in the direction of a positive or negative
essay it wrote about the Russian leader. Indeed, GPT displayed patterns of
attitude change mimicking cognitive consistency effects in humans. Even more
remarkably, the degree of change increased sharply when the LLM was offered an
illusion of choice about which essay (positive or negative) to write. This
result suggests that GPT-4o manifests a functional analog of humanlike
selfhood, although how faithfully the chatbot's behavior reflects the
mechanisms of human attitude change remains to be understood.
|
2502.07090
|
Generative Distribution Prediction: A Unified Approach to Multimodal
Learning
|
stat.ML cs.AI cs.LG
|
Accurate prediction with multimodal data, encompassing tabular, textual, and
visual inputs or outputs, is fundamental to advancing analytics in diverse
application domains. Traditional approaches often struggle to integrate
heterogeneous data types while maintaining high predictive accuracy. We
introduce Generative Distribution Prediction (GDP), a novel framework that
leverages multimodal synthetic data generation, such as conditional diffusion
models, to enhance predictive performance across structured and unstructured
modalities. GDP is model-agnostic, compatible with any high-fidelity generative
model, and supports transfer learning for domain adaptation. We establish a
rigorous theoretical foundation for GDP, providing statistical guarantees on
its predictive accuracy when using diffusion models as the generative backbone.
By estimating the data-generating distribution and adapting to various loss
functions for risk minimization, GDP enables accurate point predictions across
multimodal settings. We empirically validate GDP on four supervised learning
tasks-tabular data prediction, question answering, image captioning, and
adaptive quantile regression-demonstrating its versatility and effectiveness
across diverse domains.
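The final step, turning an estimated predictive distribution into a point prediction adapted to the loss, can be sketched for two common losses; the generative model itself is abstracted away as a list of drawn samples, and the function below is illustrative rather than GDP's implementation.

```python
def point_prediction(samples, loss="squared", q=0.5):
    """Reduce samples from an (assumed) conditional generative model to a
    point prediction by empirical risk minimization: the mean minimizes
    squared loss; the q-th empirical quantile minimizes quantile (pinball)
    loss, covering the adaptive quantile regression use case."""
    s = sorted(samples)
    if loss == "squared":
        return sum(s) / len(s)
    if loss == "quantile":
        return s[min(int(q * len(s)), len(s) - 1)]
    raise ValueError("unsupported loss: " + loss)
```

Swapping the reduction rule while keeping the same generative backbone is what makes the approach adapt "to various loss functions for risk minimization."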
|
2502.07096
|
Lotus: Creating Short Videos From Long Videos With Abstractive and
Extractive Summarization
|
cs.HC cs.CV
|
Short-form videos are popular on platforms like TikTok and Instagram as they
quickly capture viewers' attention. Many creators repurpose their long-form
videos to produce short-form videos, but creators report that planning,
extracting, and arranging clips from long-form videos is challenging.
Currently, creators make extractive short-form videos composed of existing
long-form video clips or abstractive short-form videos by adding newly recorded
narration to visuals. While extractive videos maintain the original connection
between audio and visuals, abstractive videos offer flexibility in selecting
content to be included in a shorter time. We present Lotus, a system that
combines both approaches to balance preserving the original content with
flexibility over the content. Lotus first creates an abstractive short-form
video by generating both a short-form script and its corresponding speech, then
matching long-form video clips to the generated narration. Creators can then
add extractive clips with an automated method or Lotus's editing interface.
Lotus's interface can be used to further refine the short-form video. We
compare short-form videos generated by Lotus with those using an extractive
baseline method. In our user study, we compare creating short-form videos using
Lotus to participants' existing practice.
|
2502.07101
|
SMAB: MAB based word Sensitivity Estimation Framework and its
Applications in Adversarial Text Generation
|
cs.CL
|
To understand the complexity of sequence classification tasks, Hahn et al.
(2021) proposed sensitivity as the number of disjoint subsets of the input
sequence that can each be individually changed to change the output. Though
effective, calculating sensitivity at scale using this framework is costly
because of exponential time complexity. Therefore, we introduce a
Sensitivity-based Multi-Armed Bandit framework (SMAB), which provides a
scalable approach for calculating word-level local (sentence-level) and global
(aggregated) sensitivities concerning an underlying text classifier for any
dataset. We establish the effectiveness of our approach through various
applications. We perform a case study on a CHECKLIST-generated sentiment
analysis dataset, where we show that our algorithm indeed captures intuitively
high- and low-sensitivity words. Through experiments on multiple tasks and
languages, we
show that sensitivity can serve as a proxy for accuracy in the absence of gold
data. Lastly, we show that guiding perturbation prompts using sensitivity
values in adversarial example generation improves attack success rate by
15.58%, whereas using sensitivity as an additional reward in adversarial
paraphrase generation gives a 12.00% improvement over SOTA approaches. Warning:
Contains potentially offensive content.
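The abstract does not specify SMAB's exact bandit; an epsilon-greedy caricature in which each arm is a word and the 0/1 reward is whether perturbing that word flips the classifier's output conveys the idea. The `flip_prob` table below stands in for a real classifier and perturbation oracle and is purely hypothetical.

```python
import random

def smab_estimate(words, flip_prob, pulls, rng, eps=0.2):
    """Epsilon-greedy bandit sketch for word-sensitivity estimation.
    `flip_prob[w]` simulates the (unknown) chance that perturbing word w
    flips the classifier's label; each pull perturbs one word, observes a
    0/1 flip, and updates that word's empirical sensitivity."""
    counts = {w: 0 for w in words}
    flips = {w: 0 for w in words}
    est = lambda w: flips[w] / counts[w] if counts[w] else 0.0
    for _ in range(pulls):
        if rng.random() < eps or all(c == 0 for c in counts.values()):
            w = rng.choice(words)          # explore
        else:
            w = max(words, key=est)        # exploit the most sensitive word
        counts[w] += 1
        flips[w] += 1 if rng.random() < flip_prob[w] else 0
    return {w: est(w) for w in words}
```

The bandit spends most pulls on high-sensitivity words, which is what lets SMAB avoid the exponential cost of exact sensitivity computation.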
|
2502.07102
|
Optimal Steady-State Secondary Control of MT-HVdc Grids with Reduced
Communications
|
eess.SY cs.SY math.OC
|
In this paper, we propose a centralized secondary control for the real-time
steady-state optimization of multi-terminal HVdc grids under voltage and
current limits. First, we present the dynamic models of the grid components,
including the modular multilevel converter (MMC) stations and their different
control layers. We also derive the quasi-static input-output model of the
system, which is suitable for the steady-state control design. Second, we
formulate a general optimization problem using this quasi-static model and find
the Karush-Kuhn-Tucker optimality conditions of its solutions. Third, we
propose a secondary control based on primal-dual dynamics to adjust the voltage
setpoints of the dispatchable MMCs, with which the system asymptotically
converges to a steady state that satisfies these optimality conditions. Fourth,
we provide a communication triggering mechanism to reduce the communication
traffic between the secondary control unit and the MMC stations. Finally, we
verify our proposal for different case studies by adapting it to an offshore
multi-terminal HVdc grid composed of heterogeneous MMC stations simulated in
the MATLAB/Simulink environment. Proportional current minimization and loss
reduction are treated as two special case studies.
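The primal-dual dynamics underlying the proposed secondary control can be illustrated on a scalar toy problem (not the HVdc model): minimize (x - 3)^2 subject to x <= 2, whose KKT point is x* = 2 with multiplier lambda* = 2.

```python
# Minimal sketch of primal-dual gradient dynamics, the mechanism the
# secondary control builds on. Toy problem (not the HVdc grid model):
#   minimize (x - 3)^2  subject to  x <= 2
# KKT solution: x* = 2, lambda* = 2.

def primal_dual(eta=0.01, steps=20000):
    x, lam = 0.0, 0.0
    for _ in range(steps):
        grad_x = 2 * (x - 3) + lam           # d/dx of the Lagrangian
        x -= eta * grad_x                    # primal descent step
        lam = max(0.0, lam + eta * (x - 2))  # dual ascent, projected to >= 0
    return x, lam

x, lam = primal_dual()
assert abs(x - 2.0) < 1e-2 and abs(lam - 2.0) < 1e-2
```

The iterates converge to the saddle point of the Lagrangian, i.e., to a point satisfying the KKT optimality conditions, which is the property the paper exploits at grid scale.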
|
2502.07107
|
A Framework for Supervised and Unsupervised Segmentation and
Classification of Materials Microstructure Images
|
stat.AP cs.CV stat.ML
|
Microstructure of materials is often characterized through image analysis to
understand processing-structure-properties linkages. We propose a largely
automated framework that integrates unsupervised and supervised learning
methods to classify micrographs according to microstructure phase/class and,
for multiphase microstructures, segments them into different homogeneous
regions. With advances in manufacturing and imaging techniques, the ultra-high
resolution of imaging, which reveals the complexity of microstructures, and the
rapidly increasing quantity of images (i.e., micrographs) both enable and
necessitate a more powerful and automated framework to extract materials
characteristics and knowledge. The framework we propose can be used to
gradually build a database of microstructure classes relevant to a particular
process or group of materials, which can help in analyzing and
discovering/identifying new materials. The framework has three steps: (1)
segmentation of multiphase micrographs through a recently developed score-based
method so that different microstructure homogeneous regions can be identified
in an unsupervised manner; (2) identification and classification of
homogeneous regions of micrographs through an uncertainty-aware supervised
classification network trained using the segmented micrographs from Step $1$
with their identified labels verified via the built-in uncertainty
quantification and minimal human inspection; (3) supervised segmentation (more
powerful than the segmentation in Step $1$) of multiphase microstructures
through a segmentation network trained with micrographs and the results from
Steps $1$-$2$ using a form of data augmentation. This framework can iteratively
characterize/segment new homogeneous or multiphase materials while expanding
the database to enhance performance. The framework is demonstrated on various
sets of materials and texture images.
|
2502.07109
|
Game of Coding With an Unknown Adversary
|
cs.IT cs.LG math.IT
|
Motivated by emerging decentralized applications, the \emph{game of coding}
framework has been recently introduced to address scenarios where the
adversary's control over coded symbols surpasses the fundamental limits of
traditional coding theory. Still, the reward mechanism available in
decentralized systems motivates the adversary to act rationally. While the
decoder, as the data collector (DC), has an acceptance and rejection mechanism,
followed by an estimation module, the adversary aims to maximize its utility,
as an increasing function of (1) the chance of acceptance (to increase the
reward), and (2) estimation error. On the other hand, the decoder also adjusts
its acceptance rule to maximize its own utility, as (1) an increasing function
of the chance of acceptance (to keep the system functional), and (2) a
decreasing function of the estimation error. Prior works within this framework rely on the
assumption that the game is complete, that is, both the DC and the adversary
are fully aware of each other's utility functions. However, in practice, the
decoder is often unaware of the utility of the adversary. To address this
limitation, we develop an algorithm enabling the DC to commit to a strategy
that achieves within the vicinity of the equilibrium, without knowledge of the
adversary's utility function. Our approach builds on an observation that at the
equilibrium, the relationship between the probability of acceptance and the
mean squared error (MSE) follows a predetermined curve independent of the
specific utility functions of the players. By exploiting this invariant
relationship, the DC can iteratively refine its strategy based on observable
parameters, converging to a near-optimal solution. We provide theoretical
guarantees on sample complexity and accuracy of the proposed scheme.
|
2502.07111
|
Likelihood-Free Estimation for Spatiotemporal Hawkes processes with
missing data and application to predictive policing
|
cs.LG stat.AP stat.ME
|
With the growing use of AI technology, many police departments use
forecasting software to predict probable crime hotspots and allocate patrolling
resources effectively for crime prevention. The clustered nature of crime data
makes self-exciting Hawkes processes a popular modeling choice. However, one
significant challenge in fitting such models is the inherent missingness in
crime data due to non-reporting, which can bias the estimated parameters of the
predictive model, leading to inaccurate downstream hotspot forecasts, often
resulting in over or under-policing in various communities, especially the
vulnerable ones. Our work introduces a Wasserstein Generative Adversarial
Network (WGAN)-driven likelihood-free approach to account for unreported
crimes in Spatiotemporal Hawkes models. We demonstrate through empirical
analysis how this methodology improves the accuracy of parametric estimation in
the presence of data missingness, leading to more reliable and efficient
policing strategies.
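As background, a purely temporal self-exciting Hawkes process (the building block of the spatiotemporal model, without the spatial component or the missing-data machinery) can be simulated with Ogata's thinning algorithm; the parameter values below are illustrative, not estimates from crime data.

```python
import math
import random

# Sketch of a (purely temporal) self-exciting Hawkes process, simulated
# by Ogata's thinning. Intensity with exponential excitation kernel:
#   lambda(t) = mu + sum_{t_i < t} alpha * exp(-beta * (t - t_i))
# Parameters are illustrative; stability requires alpha / beta < 1.

def intensity(t, events, mu=0.5, alpha=0.8, beta=1.2):
    return mu + sum(alpha * math.exp(-beta * (t - ti)) for ti in events if ti < t)

def simulate_hawkes(horizon=50.0, mu=0.5, alpha=0.8, beta=1.2, seed=0):
    rng = random.Random(seed)
    events, t = [], 0.0
    while t < horizon:
        # Upper bound on intensity until the next event (+alpha covers a
        # jump from an event at exactly time t).
        lam_bar = intensity(t, events, mu, alpha, beta) + alpha
        t += rng.expovariate(lam_bar)        # candidate waiting time
        if t < horizon and rng.random() * lam_bar <= intensity(t, events, mu, alpha, beta):
            events.append(t)                 # accept candidate (thinning)
    return events

events = simulate_hawkes()
assert len(events) > 0
assert all(a < b for a, b in zip(events, events[1:]))  # strictly increasing
```

Fitting such a model by maximum likelihood is what non-reporting biases; the WGAN-driven approach above sidesteps the likelihood entirely.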
|
2502.07114
|
Online Covariance Matrix Estimation in Sketched Newton Methods
|
stat.ML cs.LG cs.NA math.NA math.OC stat.CO
|
Given the ubiquity of streaming data, online algorithms have been widely used
for parameter estimation, with second-order methods particularly standing out
for their efficiency and robustness. In this paper, we study an online sketched
Newton method that leverages a randomized sketching technique to perform an
approximate Newton step in each iteration, thereby eliminating the
computational bottleneck of second-order methods. While existing studies have
established the asymptotic normality of sketched Newton methods, a consistent
estimator of the limiting covariance matrix remains an open problem. We propose
a fully online covariance matrix estimator that is constructed entirely from
the Newton iterates and requires no matrix factorization. Compared to
covariance estimators for first-order online methods, our estimator for
second-order methods is batch-free. We establish the consistency and
convergence rate of our estimator, and coupled with asymptotic normality
results, we can then perform online statistical inference for the model
parameters based on sketched Newton methods. We also discuss the extension of
our estimator to constrained problems, and demonstrate its superior performance
on regression problems as well as benchmark problems in the CUTEst set.
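For intuition, one sketched Newton iteration can be illustrated with a coordinate sketch on a tiny quadratic (the paper's covariance estimator is not reproduced here); taking the sketch matrix S = e_i restricts each Newton solve to one randomly chosen coordinate.

```python
import random

# Minimal sketch of a sketched Newton iteration with coordinate sketching:
# each step solves the Newton system restricted to one random coordinate.
# Objective: f(x) = 0.5 x^T A x - b^T x, so grad = A x - b, Hessian = A.
# Toy stand-in; not the paper's online covariance estimator.

A = [[3.0, 0.5], [0.5, 2.0]]
b = [1.0, 1.0]

def grad(x):
    return [A[0][0] * x[0] + A[0][1] * x[1] - b[0],
            A[1][0] * x[0] + A[1][1] * x[1] - b[1]]

def sketched_newton(steps=200, seed=1):
    rng = random.Random(seed)
    x = [0.0, 0.0]
    for _ in range(steps):
        i = rng.randrange(2)          # random coordinate sketch S = e_i
        g = grad(x)
        x[i] -= g[i] / A[i][i]        # solve (S^T A S) p = -S^T g
    return x

x = sketched_newton()
# Exact solution of A x = b via Cramer's rule: det(A) = 5.75.
assert abs(x[0] - 1.5 / 5.75) < 1e-6 and abs(x[1] - 2.5 / 5.75) < 1e-6
```

Each restricted solve avoids factorizing the full Hessian, which is the computational point of sketching; the paper's contribution is estimating the covariance of such iterates fully online.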
|
2502.07115
|
Online Scheduling for LLM Inference with KV Cache Constraints
|
cs.LG cs.AI math.OC
|
Large Language Model (LLM) inference, where a trained model generates text
one word at a time in response to user prompts, is a computationally intensive
process requiring efficient scheduling to optimize latency and resource
utilization. A key challenge in LLM inference is the management of the
Key-Value (KV) cache, which reduces redundant computations but introduces
memory constraints. In this work, we model LLM inference with KV cache
constraints theoretically and propose novel batching and scheduling algorithms
that minimize inference latency while effectively managing the KV cache's
memory.
We analyze both semi-online and fully online scheduling models, and our
results are threefold. First, we provide a polynomial-time algorithm that
achieves exact optimality in terms of average latency in the semi-online prompt
arrival model. Second, in the fully online case with a stochastic prompt
arrival, we introduce an efficient online scheduling algorithm with constant
regret. Third, we prove that no algorithm (deterministic or randomized) can
achieve a constant competitive ratio in fully online adversarial settings. Our
empirical evaluations on a public LLM inference dataset, using the Llama-70B
model on A100 GPUs, show that our approach significantly outperforms benchmark
algorithms used currently in practice, achieving lower latency while reducing
energy consumption. Overall, our results offer a path toward more sustainable
and cost-effective LLM deployment.
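A toy version of KV-cache-constrained admission (purely illustrative; not the paper's optimal semi-online or constant-regret algorithms): each request reserves prompt-plus-generation tokens of KV memory, and a shortest-job-first greedy rule fills the batch up to the budget.

```python
# Toy illustration of KV-cache-constrained batching. Each request needs
# (prompt + generated) tokens of KV memory at its peak; a greedy scheduler
# admits requests while projected peak KV usage stays within the budget.
# Hypothetical sketch, not the scheduling algorithms analyzed in the paper.

def greedy_admit(requests, kv_budget):
    """requests: list of (prompt_len, max_new_tokens). Returns (indices, used)."""
    batch, used = [], 0
    # Shortest-job-first ordering tends to reduce average latency.
    order = sorted(range(len(requests)), key=lambda i: sum(requests[i]))
    for i in order:
        peak = requests[i][0] + requests[i][1]  # peak KV tokens for request i
        if used + peak <= kv_budget:
            batch.append(i)
            used += peak
    return sorted(batch), used

reqs = [(100, 50), (400, 400), (30, 20), (200, 100)]
batch, used = greedy_admit(reqs, kv_budget=500)
assert batch == [0, 2, 3] and used == 500
```

The hard part the paper addresses is doing this online, with stochastic or adversarial arrivals, while provably controlling latency.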
|
2502.07117
|
Choroidal image analysis for OCT image sequences with applications in
systemic health
|
eess.IV cs.CV cs.LG cs.MS
|
The choroid, a highly vascular layer behind the retina, is an extension of
the central nervous system and has parallels with the renal cortex, with blood
flow far exceeding that of the brain and kidney. Thus, there has been growing
interest of choroidal blood flow reflecting physiological status of systemic
disease. Optical coherence tomography (OCT) enables high-resolution imaging of
the choroid, but conventional analysis methods remain manual or semi-automatic,
limiting reproducibility, standardisation and clinical utility. In this thesis,
I develop several new methods to analyse the choroid in OCT image sequences,
with each successive method improving on its predecessors. I first develop two
semi-automatic approaches for choroid region (Gaussian Process Edge Tracing,
GPET) and vessel (Multi-scale Median Cut Quantisation, MMCQ) analysis, which
improve on manual approaches but remain user-dependent. To address this, I
introduce DeepGPET, a deep learning-based region segmentation method which
improves on execution time, reproducibility, and end-user accessibility, but
lacks choroid vessel analysis and automatic feature measurement. Improving on
this, I develop Choroidalyzer, a deep learning-based pipeline to segment the
choroidal space and vessels and generate fully automatic, clinically meaningful
and reproducible choroidal features. I provide rigorous evaluation of these
four approaches and consider their potential clinical value in three
applications in systemic health: OCTANE, assessing choroidal changes in renal
transplant recipients and donors; PREVENT, exploring choroidal associations
with Alzheimer's risk factors at mid-life; D-RISCii, assessing choroidal
variation and feasibility of OCT in critical care. In short, this thesis
contributes many open-source tools for standardised choroidal measurement and
highlights the choroid's potential as a biomarker in systemic health.
|
2502.07119
|
SAFE: Self-Supervised Anomaly Detection Framework for Intrusion
Detection
|
cs.CR cs.LG
|
The proliferation of IoT devices has significantly increased network
vulnerabilities, creating an urgent need for effective Intrusion Detection
Systems (IDS). Machine Learning-based IDS (ML-IDS) offer advanced detection
capabilities but rely on labeled attack data, which limits their ability to
identify unknown threats. Self-Supervised Learning (SSL) presents a promising
solution by using only normal data to detect patterns and anomalies. This paper
introduces SAFE, a novel framework that transforms tabular network intrusion
data into an image-like format, enabling Masked Autoencoders (MAEs) to learn
robust representations of network behavior. The features extracted by the MAEs
are then incorporated into a lightweight novelty detector, enhancing the
effectiveness of anomaly detection. Experimental results demonstrate that SAFE
outperforms the state-of-the-art anomaly detection method, Scale Learning-based
Deep Anomaly Detection method (SLAD), by up to 26.2% and surpasses the
state-of-the-art SSL-based network intrusion detection approach, Anomal-E, by
up to 23.5% in F1-score.
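The tabular-to-image step can be sketched as follows; the grid size, zero padding, and 50% mask ratio are assumptions for illustration, not SAFE's actual configuration.

```python
import math
import random

# Hedged sketch of the tabular-to-image idea: pad a feature vector to a
# square grid, then hide a fraction of cells as a Masked Autoencoder would
# during pre-training. Grid size, padding, and mask ratio are assumptions,
# not SAFE's actual configuration.

def to_grid(features):
    side = math.ceil(math.sqrt(len(features)))
    padded = features + [0.0] * (side * side - len(features))
    return [padded[r * side:(r + 1) * side] for r in range(side)]

def mask_cells(grid, ratio=0.5, seed=0):
    rng = random.Random(seed)
    side = len(grid)
    cells = [(r, c) for r in range(side) for c in range(side)]
    hidden = set(rng.sample(cells, int(ratio * len(cells))))
    masked = [[None if (r, c) in hidden else grid[r][c] for c in range(side)]
              for r in range(side)]
    return masked, hidden

grid = to_grid([0.1 * i for i in range(10)])  # 10 features -> 4x4 grid
masked, hidden = mask_cells(grid)
assert len(grid) == 4 and len(hidden) == 8
```

An MAE trained to reconstruct the hidden cells from the visible ones learns a representation of normal traffic; reconstruction-based features then feed the lightweight novelty detector.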
|
2502.07120
|
Is Long Range Sequential Modeling Necessary For Colorectal Tumor
Segmentation?
|
cs.CV
|
Segmentation of colorectal cancer (CRC) tumors in 3D medical imaging is both
complex and clinically critical, providing vital support for effective
radiation therapy planning and survival outcome assessment. Recently, 3D
volumetric segmentation architectures incorporating long-range sequence
modeling mechanisms, such as Transformers and Mamba, have gained attention for
their capacity to achieve high accuracy in 3D medical image segmentation. In
this work, we evaluate the effectiveness of these global token modeling
techniques by pitting them against our proposed MambaOutUNet within the context
of our newly introduced colorectal tumor segmentation dataset (CTS-204). Our
findings suggest that robust local token interactions can outperform long-range
modeling techniques in cases where the region of interest is small and
anatomically complex, proposing a potential shift in 3D tumor segmentation
research.
|
2502.07124
|
Structural Reformation of Large Language Model Neuron Encapsulation for
Divergent Information Aggregation
|
cs.CL
|
Structured neuron encapsulation introduces a modular framework that enables
more effective aggregation and specialization of information within deep
learning architectures. A model modified through this framework demonstrated
improved perplexity scores, greater lexical variability, and enhanced
consistency in logical reasoning, suggesting that structured parameter
distribution contributes to more efficient language representation. Statistical
analyses of generated text highlighted a wider range of sentence structures and
reduced redundancy in token selection, indicating that encapsulation fosters
more adaptable language generation. A detailed evaluation of attention weight
distributions revealed that the experimental model exhibited greater divergence
in cross-layer activations, supporting the hypothesis that encapsulated neurons
assume specialized processing roles. Logical consistency assessments further
demonstrated that modular architectures mitigate contradictory outputs,
reducing internal conflicts in inferred relationships between linguistic
constructs. Computational trade-offs were analyzed, with results showing a
minor increase in processing overhead, though improvements in parameter
efficiency and structured decision-making compensated for the additional
complexity. The mathematical formulation of the encapsulation mechanism
confirmed that modular aggregation maintains stable convergence properties
while promoting distinct functional roles for different neuron clusters.
|
2502.07128
|
Cardiverse: Harnessing LLMs for Novel Card Game Prototyping
|
cs.CL cs.AI cs.MM
|
The prototyping of computer games, particularly card games, requires
extensive human effort in creative ideation and gameplay evaluation. Recent
advances in Large Language Models (LLMs) offer opportunities to automate and
streamline these processes. However, it remains challenging for LLMs to design
novel game mechanics beyond existing databases, generate consistent gameplay
environments, and develop scalable gameplay AI for large-scale evaluations.
This paper addresses these challenges by introducing a comprehensive automated
card game prototyping framework. The approach highlights a graph-based indexing
method for generating novel game designs, an LLM-driven system for consistent
game code generation validated by gameplay records, and a gameplay AI
construction method that uses an ensemble of LLM-generated action-value
functions optimized through self-play. These contributions aim to accelerate
card game prototyping, reduce human labor, and lower barriers to entry for game
developers.
|
2502.07129
|
Fourier-enhanced Neural Networks For Systems Biology Applications
|
cs.LG q-bio.QM
|
In the field of systems biology, differential equations are commonly used to
model biological systems, but solving them for large-scale and complex systems
can be computationally expensive. Recently, the integration of machine learning
and mathematical modeling has offered new opportunities for scientific
discoveries in biology and health. The emerging physics-informed neural network
(PINN) has been proposed as a solution to this problem. However, PINN can be
computationally expensive and unreliable for complex biological systems. To
address these issues, we propose the Fourier-enhanced Neural Networks for
systems biology (SB-FNN). SB-FNN uses an embedded Fourier neural network with
an adaptive activation function and a cyclic penalty function to optimize the
prediction of biological dynamics, particularly for biological systems that
exhibit oscillatory patterns. Experimental results on cellular and population
models demonstrate that SB-FNN outperforms PINN in both accuracy and
efficiency, making it a promising alternative for handling complex biological
models. The proposed method achieved better performance on six biological
models and is expected to replace PINN as the most advanced method in systems
biology.
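The core intuition of a Fourier-enhanced representation for oscillatory dynamics can be sketched by projecting a periodic signal onto [1, sin(kt), cos(kt)] features; this linear version is a stand-in for the embedded Fourier neural network, and the frequency k is assumed known here.

```python
import math

# Sketch of the idea behind Fourier features for oscillatory dynamics:
# project a periodic signal onto [1, sin(k t), cos(k t)]. A linear
# stand-in for SB-FNN's embedded Fourier network; frequency k is assumed.

N, k = 1000, 3
ts = [2 * math.pi * i / N for i in range(N)]
y = [2.0 * math.sin(k * t) + 1.0 for t in ts]  # target oscillatory signal

# Discrete orthogonality on a full period gives the coefficients directly.
a0 = sum(y) / N                                           # constant term
a1 = 2.0 * sum(yi * math.sin(k * t) for yi, t in zip(y, ts)) / N
a2 = 2.0 * sum(yi * math.cos(k * t) for yi, t in zip(y, ts)) / N

recon = [a0 + a1 * math.sin(k * t) + a2 * math.cos(k * t) for t in ts]
max_err = max(abs(r - yi) for r, yi in zip(recon, y))
assert abs(a0 - 1.0) < 1e-9 and abs(a1 - 2.0) < 1e-9 and max_err < 1e-9
```

A plain fully-connected network must spend capacity discovering such periodic structure; building the Fourier basis into the architecture is what makes oscillatory systems easier to fit.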
|
2502.07130
|
Unconstrained Body Recognition at Altitude and Range: Comparing Four
Approaches
|
cs.CV cs.AI cs.LG
|
This study presents an investigation of four distinct approaches to long-term
person identification using body shape. Unlike short-term re-identification
systems that rely on temporary features (e.g., clothing), we focus on learning
persistent body shape characteristics that remain stable over time. We
introduce a body identification model based on a Vision Transformer (ViT) (Body
Identification from Diverse Datasets, BIDDS) and on a Swin-ViT model
(Swin-BIDDS). We also expand on previous approaches based on the Linguistic and
Non-linguistic Core ResNet Identity Models (LCRIM and NLCRIM), but with
improved training. All models are trained on a large and diverse dataset of
over 1.9 million images of approximately 5k identities across 9 databases.
Performance was evaluated on standard re-identification benchmark datasets
(MARS, MSMT17, Outdoor Gait, DeepChange) and on an unconstrained dataset that
includes images at a distance (from close-range to 1000m), at altitude (from an
unmanned aerial vehicle, UAV), and with clothing change. A comparative analysis
across these models provides insights into how different backbone architectures
and input image sizes impact long-term body identification performance across
real-world conditions.
|
2502.07131
|
TWICE: What Advantages Can Low-Resource Domain-Specific Embedding Model
Bring? -- A Case Study on Korea Financial Texts
|
cs.CL q-fin.CP
|
Domain specificity of embedding models is critical for effective performance.
However, existing benchmarks, such as FinMTEB, are primarily designed for
high-resource languages, leaving low-resource settings, such as Korean,
under-explored. Directly translating established English benchmarks often fails
to capture the linguistic and cultural nuances present in low-resource domains.
In this paper, we introduce KorFinMTEB, a novel benchmark for the Korean
financial domain, specifically tailored to reflect its unique linguistic and
cultural characteristics. Our experimental results reveal that while the models perform
robustly on a translated version of FinMTEB, their performance on KorFinMTEB
uncovers subtle yet critical discrepancies, especially in tasks requiring
deeper semantic understanding, that underscore the limitations of direct
translation. This discrepancy highlights the necessity of benchmarks that
incorporate language-specific idiosyncrasies and cultural nuances. The insights
from our study advocate for the development of domain-specific evaluation
frameworks that can more accurately assess and drive the progress of embedding
models in low-resource settings.
|
2502.07132
|
Interactive Data Harmonization with LLM Agents
|
cs.AI cs.DB
|
Data harmonization is an essential task that entails integrating datasets
from diverse sources. Despite years of research in this area, it remains a
time-consuming and challenging task due to schema mismatches, varying
terminologies, and differences in data collection methodologies. This paper
presents the case for agentic data harmonization as a means to both empower
experts to harmonize their data and to streamline the process. We introduce
Harmonia, a system that combines LLM-based reasoning, an interactive user
interface, and a library of data harmonization primitives to automate the
synthesis of data harmonization pipelines. We demonstrate Harmonia in a
clinical data harmonization scenario, where it helps to interactively create
reusable pipelines that map datasets to a standard format. Finally, we discuss
challenges and open problems, and suggest research directions for advancing our
vision.
|
2502.07133
|
Cross-platform Learning-based Fault Tolerant Surfacing Controller for
Underwater Robots
|
cs.RO
|
In this paper, we propose a novel cross-platform fault-tolerant surfacing
controller for underwater robots, based on reinforcement learning (RL). Unlike
conventional approaches, which require explicit identification of
malfunctioning actuators, our method allows the robot to surface using only the
remaining operational actuators without needing to pinpoint the failures. The
proposed controller learns a robust policy capable of handling diverse failure
scenarios across different actuator configurations. Moreover, we introduce a
transfer learning mechanism that shares a part of the control policy across
various underwater robots with different actuators, thus improving learning
efficiency and generalization across platforms. To validate our approach, we
conduct simulations on three different types of underwater robots: a
hovering-type AUV, a torpedo-shaped AUV, and a turtle-shaped robot (U-CAT).
Additionally, real-world experiments are performed, successfully transferring
the learned policy from simulation to a physical U-CAT in a controlled
environment. Our RL-based controller demonstrates superior performance in terms
of stability and success rate compared to a baseline controller, achieving an
85.7 percent success rate in real-world tests versus the baseline's 57.1
percent. This research provides a scalable and efficient solution
for fault-tolerant control for diverse underwater platforms, with potential
applications in real-world aquatic missions.
|
2502.07135
|
One-Shot Learning for k-SAT
|
cs.DS cs.LG math.ST stat.ML stat.TH
|
Consider a $k$-SAT formula $\Phi$ where every variable appears at most $d$
times, and let $\sigma$ be a satisfying assignment of $\Phi$ sampled
proportionally to $e^{\beta m(\sigma)}$ where $m(\sigma)$ is the number of
variables set to true and $\beta$ is a real parameter. Given $\Phi$ and
$\sigma$, can we learn the value of $\beta$ efficiently?
This problem falls into a recent line of works about single-sample
("one-shot") learning of Markov random fields. The $k$-SAT setting we consider
here was recently studied by Galanis, Kandiros, and Kalavasis (SODA'24) where
they showed that single-sample learning is possible when roughly $d\leq
2^{k/6.45}$ and impossible when $d\geq (k+1) 2^{k-1}$. Crucially, for their
impossibility results they used the existence of unsatisfiable instances which,
aside from the gap in $d$, left open the question of whether the feasibility
threshold for one-shot learning is dictated by the satisfiability threshold of
$k$-SAT formulas of bounded degree.
Our main contribution is to answer this question negatively. We show that
one-shot learning for $k$-SAT is infeasible well below the satisfiability
threshold; in fact, we obtain impossibility results for degrees $d$ as low as
$k^2$ when $\beta$ is sufficiently large, and bootstrap this to small values of
$\beta$ when $d$ scales exponentially with $k$, via a probabilistic
construction. On the positive side, we simplify the analysis of the learning
algorithm and obtain significantly stronger bounds on $d$ in terms of $\beta$.
In particular, for the uniform case $\beta\rightarrow 0$ that has been studied
extensively in the sampling literature, our analysis shows that learning is
possible under the condition $d\lesssim 2^{k/2}$. This is nearly optimal (up to
constant factors) in the sense that it is known that sampling a
uniformly-distributed satisfying assignment is NP-hard for $d\gtrsim 2^{k/2}$.
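The learning problem itself is easy to state concretely. A brute-force sketch for a tiny formula (far from the bounded-degree regime the paper studies): enumerate satisfying assignments, weight them by exp(beta * m(sigma)), and recover beta from one sample by maximizing the log-likelihood over a grid.

```python
import math
from itertools import product

# Brute-force illustration of one-shot learning of beta for a tiny formula
# (x1 OR x2). Satisfying assignments sigma are weighted by
# exp(beta * m(sigma)), with m = number of variables set to true.
# Purely illustrative; the paper's regime is bounded-degree k-SAT at scale.

def satisfies(assign):               # formula: (x1 OR x2)
    return assign[0] or assign[1]

sat = [a for a in product([False, True], repeat=2) if satisfies(a)]

def log_likelihood(beta, m_obs):
    log_z = math.log(sum(math.exp(beta * sum(a)) for a in sat))
    return beta * m_obs - log_z

m_obs = 2                            # observed sigma = (True, True)
grid = [i / 100 for i in range(-300, 301)]
beta_hat = max(grid, key=lambda b: log_likelihood(b, m_obs))
# With m_obs at its maximum, the likelihood is increasing in beta, so the
# grid MLE sits at the top of the search range.
assert abs(beta_hat - 3.0) < 1e-9
```

The single-sample MLE here is degenerate precisely because one observation carries little information; the interesting question, which the paper answers, is when one sample suffices as the formula and degree grow.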
|
2502.07136
|
A Safe Hybrid Control Framework for Car-like Robot with Guaranteed
Global Path-Invariance using a Control Barrier Function
|
cs.RO cs.SY eess.SY
|
This work proposes a hybrid framework for car-like robots with obstacle
avoidance, global convergence, and safety, where safety is interpreted as path
invariance, namely, once the robot converges to the path, it never leaves the
path. Given an a priori obstacle-free feasible path, around which obstacles
may lie, the task is to avoid obstacles while reaching the path and then
staying on the path without leaving it. The problem is solved in two stages.
Firstly, we define a ``tight'' obstacle-free neighborhood along the path and
design a local controller to ensure convergence to the path and path
invariance. Control barrier function techniques are employed in the control
design to steer the system away from its singularity points, where the local
path invariant controller is not defined. Secondly, we design a hybrid control
framework that integrates this local path-invariant controller with any global
tracking controller from the existing literature without path invariance
guarantee, ensuring convergence from any position to the desired path, namely,
global convergence. This framework guarantees path invariance and robustness to
sensor noise. Detailed simulation results affirm the effectiveness of the
proposed scheme.
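A one-dimensional control barrier function filter (a generic textbook construction, not the paper's car-like robot design) shows the invariance mechanism: for x_dot = u and safe set {x >= 0} with h(x) = x, the CBF condition h_dot + alpha*h >= 0 reduces to u >= -alpha*x, giving a closed-form clamp on the nominal input.

```python
# One-dimensional sketch of a control barrier function (CBF) safety filter.
# System x_dot = u, safe set {x >= 0}, barrier h(x) = x. The CBF condition
# h_dot + alpha * h >= 0 becomes u >= -alpha * x, so the minimally invasive
# filter is a closed-form clamp. Generic illustration, not the paper's
# car-like robot construction.

def cbf_filter(u_nom, x, alpha=1.0):
    return max(u_nom, -alpha * x)       # closest admissible input to u_nom

def simulate(x0=1.0, u_nom=-5.0, dt=0.001, steps=5000):
    x = x0
    for _ in range(steps):
        x += dt * cbf_filter(u_nom, x)  # forward-Euler filtered dynamics
    return x

x_final = simulate()
assert x_final >= 0.0                   # invariance: never leaves {x >= 0}
```

Although the nominal input pushes hard toward the unsafe region, the filtered state only decays toward the boundary, which is exactly the path-invariance property the hybrid framework above guarantees for the car-like robot.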
|
2502.07138
|
Towards a Robust Framework for Multimodal Hate Detection: A Study on
Video vs. Image-based Content
|
cs.CV cs.CL cs.LG
|
Social media platforms enable the propagation of hateful content across
different modalities such as textual, auditory, and visual, necessitating
effective detection methods. While recent approaches have shown promise in
handling individual modalities, their effectiveness across different modality
combinations remains unexplored. This paper presents a systematic analysis of
fusion-based approaches for multimodal hate detection, focusing on their
performance across video and image-based content. Our comprehensive evaluation
reveals significant modality-specific limitations: while simple embedding
fusion achieves state-of-the-art performance on video content (HateMM dataset)
with a 9.9 percentage-point F1-score improvement, it struggles with complex image-text
relationships in memes (Hateful Memes dataset). Through detailed ablation
studies and error analysis, we demonstrate how current fusion approaches fail
to capture nuanced cross-modal interactions, particularly in cases involving
benign confounders. Our findings provide crucial insights for developing more
robust hate detection systems and highlight the need for modality-specific
architectural considerations. The code is available at
https://github.com/gak97/Video-vs-Meme-Hate.
|
2502.07139
|
Language-TPP: Integrating Temporal Point Processes with Language Models
for Event Analysis
|
cs.CL cs.LG
|
Temporal Point Processes (TPPs) have been widely used for event sequence
modeling, but they often struggle to incorporate rich textual event
descriptions effectively. Conversely, while Large Language Models (LLMs) have
shown remarkable capabilities in processing textual data, they lack
mechanisms for handling temporal dynamics. To bridge this gap, we introduce
Language-TPP, a unified framework that integrates TPPs with LLMs for enhanced
event sequence modeling. Language-TPP introduces a novel temporal encoding
mechanism that converts continuous time intervals into specialized byte-tokens,
enabling seamless integration with standard LLM architectures. This approach
allows Language-TPP to achieve state-of-the-art performance across multiple TPP
tasks, including event time prediction, type prediction, and intensity
estimation, on five datasets. Additionally, we demonstrate that incorporating
temporal information significantly improves the quality of generated event
descriptions.
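One plausible realization of byte-tokens for continuous intervals (an assumption for illustration; the exact encoding used by Language-TPP may differ) packs the interval as a 4-byte float and treats each byte as a token id in a 256-symbol vocabulary.

```python
import struct

# One plausible realization of "byte-tokens" for continuous time intervals
# (an assumption; Language-TPP's exact scheme may differ): pack the
# interval as a big-endian 4-byte float and treat each byte as a token id
# drawn from a 256-symbol vocabulary.

def interval_to_byte_tokens(dt):
    return list(struct.pack(">f", dt))      # 4 token ids in [0, 255]

def byte_tokens_to_interval(tokens):
    return struct.unpack(">f", bytes(tokens))[0]

tokens = interval_to_byte_tokens(2.5)
assert len(tokens) == 4 and all(0 <= t <= 255 for t in tokens)
assert byte_tokens_to_interval(tokens) == 2.5   # 2.5 is exact in float32
```

The appeal of such an encoding is that a fixed, tiny vocabulary of byte symbols slots directly into a standard LLM tokenizer without discretizing time onto a coarse grid.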
|
2502.07140
|
Few-Shot Multi-Human Neural Rendering Using Geometry Constraints
|
cs.CV cs.AI cs.GR
|
We present a method for recovering the shape and radiance of a scene
consisting of multiple people given solely a few images. Multi-human scenes are
complex due to additional occlusion and clutter. For single-human settings,
existing approaches using implicit neural representations have achieved
impressive results that deliver accurate geometry and appearance. However, it
remains challenging to extend these methods for estimating multiple humans from
sparse views. We propose a neural implicit reconstruction method that addresses
the inherent challenges of this task through the following contributions:
First, we propose to use geometry constraints by exploiting pre-computed meshes
using a human body model (SMPL). Specifically, we regularize the signed
distances using the SMPL mesh and leverage bounding boxes for improved
rendering. Second, we propose a ray regularization scheme to minimize rendering
inconsistencies, and a saturation regularization for robust optimization in
variable illumination. Extensive experiments on both real and synthetic
datasets demonstrate the benefits of our approach and show state-of-the-art
performance against existing neural reconstruction methods.
|
2502.07141
|
Small steps no more: Global convergence of stochastic gradient bandits
for arbitrary learning rates
|
cs.LG
|
We provide a new understanding of the stochastic gradient bandit algorithm by
showing that it converges to a globally optimal policy almost surely using
\emph{any} constant learning rate. This result demonstrates that the stochastic
gradient algorithm continues to balance exploration and exploitation
appropriately even in scenarios where standard smoothness and noise control
assumptions break down. The proofs are based on novel findings about action
sampling rates and the relationship between cumulative progress and noise, and
extend the current understanding of how simple stochastic gradient methods
behave in bandit settings.
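The algorithm in question is the classic softmax gradient bandit; a minimal sketch with a constant learning rate and assumed Bernoulli arms:

```python
import math
import random

# Sketch of the stochastic (softmax) gradient bandit with a constant
# learning rate, the algorithm analyzed above. Two Bernoulli arms
# (means are assumptions); the policy is a softmax over preferences
# updated by the REINFORCE gradient.

def softmax(h):
    m = max(h)
    e = [math.exp(v - m) for v in h]
    s = sum(e)
    return [v / s for v in e]

def run(means=(0.1, 0.9), eta=0.1, steps=30000, seed=0):
    rng = random.Random(seed)
    h = [0.0, 0.0]
    for _ in range(steps):
        pi = softmax(h)
        a = 0 if rng.random() < pi[0] else 1          # sample an action
        r = 1.0 if rng.random() < means[a] else 0.0   # Bernoulli reward
        for b in range(2):                             # REINFORCE update
            grad = (1.0 - pi[b]) if b == a else -pi[b]
            h[b] += eta * r * grad
    return softmax(h)

pi = run()
assert pi[1] > 0.9        # policy concentrates on the better arm
```

The result above says this convergence to the optimal arm holds almost surely for any constant learning rate, not just for carefully tuned or decaying step sizes.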
|
2502.07143
|
Ask Patients with Patience: Enabling LLMs for Human-Centric Medical
Dialogue with Grounded Reasoning
|
cs.CL
|
Accurate and efficient diagnosis in online medical consultations remains a
challenge for current large language models. These models often rely on
single-turn interactions and lack the ability to refine their predictions
through follow-up questions. Additionally, their responses frequently contain
complex medical terminology, making them less accessible to non-medical users
and creating barriers to effective communication. In this paper, we introduce
Ask Patients with Patience (APP), the first multi-turn dialogue framework that enables
LLMs to iteratively refine diagnoses based on grounded reasoning. By
integrating medical guidelines and entropy minimization, APP improves both
diagnostic accuracy and efficiency. Furthermore, it features human-centric
communication that bridges the gap between user comprehension and medical
terminology, significantly enhancing user accessibility and engagement. We
evaluated APP using a subset of the ReMeDi dataset, comparing it with
single-turn and traditional multi-turn LLM baselines. APP achieved higher
similarity scores in diagnosis predictions, demonstrating better alignment with
ground truth diagnoses. Entropy analysis showed that APP reduces diagnostic
uncertainty more rapidly across iterations, increasing confidence in its
predictions. APP also excels in user accessibility and empathy, further
bridging the gap between complex medical language and user understanding. Code
will be released at: https://github.com/SuperMedIntel/AskPatients.
|
2502.07145
|
Mesh2SSM++: A Probabilistic Framework for Unsupervised Learning of
Statistical Shape Model of Anatomies from Surface Meshes
|
cs.CV
|
Anatomy evaluation is crucial for understanding the physiological state,
diagnosing abnormalities, and guiding medical interventions. Statistical shape
modeling (SSM) is vital in this process. By enabling the extraction of
quantitative morphological shape descriptors from MRI and CT scans, SSM
provides comprehensive descriptions of anatomical variations within a
population. However, the effectiveness of SSM in anatomy evaluation hinges on
the quality and robustness of the shape models. While deep learning techniques
show promise in addressing these challenges by learning complex nonlinear
representations of shapes, existing models still have limitations and often
require pre-established shape models for training. To overcome these issues, we
propose Mesh2SSM++, a novel approach that learns to estimate correspondences
from meshes in an unsupervised manner. This method leverages unsupervised,
permutation-invariant representation learning to estimate how to deform a
template point cloud into subject-specific meshes, forming a
correspondence-based shape model. Additionally, our probabilistic formulation
allows learning a population-specific template, reducing potential biases
associated with template selection. A key feature of Mesh2SSM++ is its ability
to quantify aleatoric uncertainty, which captures inherent data variability and
is essential for ensuring reliable model predictions and robust decision-making
in clinical tasks, especially under challenging imaging conditions. Through
extensive validation across diverse anatomies, evaluation metrics, and
downstream tasks, we demonstrate that Mesh2SSM++ outperforms existing methods.
Its ability to operate directly on meshes, combined with computational
efficiency and interpretability through its probabilistic framework, makes it
an attractive alternative to traditional and deep learning-based SSM
approaches.
|
2502.07148
|
Expressing entropy and cross-entropy in expansions of common meadows
|
cs.IT math.IT
|
A common meadow is an enrichment of a field with a partial division operation
that is made total by assuming that division by zero takes a default value,
a special element $\bot$ adjoined to the field. To a common meadow of real
numbers we add a binary logarithm $\log_2(-)$, which we also assume to be total
with $\log_2(p) = \bot$ for $p \leq 0$. With these and other auxiliary
operations, such as a sign function, we form algebras over which entropy and
cross entropy can be defined for probability mass functions on a finite sample
space by algebraic formulae that are simple terms built from the operations of
the algebras and without case distinctions or conventions to avoid partiality.
We discuss the advantages of algebras based on common meadows, whose theory is
established, and alternative methods to define entropy and other information
measures completely for all arguments using single terms.
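As a deliberately naive illustration of totalised operations, the sketch below treats $\bot$ as an absorbing element and makes the binary logarithm total. The names `BOT`, `log2_total`, and `mul` are invented for this sketch, and, unlike the paper's single algebraic terms, it still needs an explicit case distinction to propagate $\bot$:

```python
import math

BOT = object()  # stand-in for the absorbing element ⊥ adjoined to the field

def log2_total(p):
    """Totalised binary log: log2(p) = ⊥ for p <= 0."""
    return math.log2(p) if (p is not BOT and p > 0) else BOT

def mul(a, b):
    # ⊥ is absorbing for every operation of a common meadow.
    return BOT if BOT in (a, b) else a * b

def entropy(ps):
    """H(p) = -sum_i p_i * log2(p_i) over total operations. With the
    convention log2(0) = ⊥, a zero-mass outcome yields ⊥ here; the
    paper's terms are crafted precisely to avoid such partiality, so
    this sketch assumes strictly positive masses."""
    total = 0.0
    for p in ps:
        term = mul(p, log2_total(p))
        if term is BOT:
            return BOT
        total -= term
    return total
```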
|
2502.07151
|
Conditional Distribution Quantization in Machine Learning
|
cs.LG
|
Conditional expectation $\mathbb{E}(Y \mid X)$ often fails to capture the
complexity of multimodal conditional distributions $\mathcal{L}(Y \mid X)$. To
address this, we propose using $n$-point conditional quantizations--functional
mappings of $X$ that are learnable via gradient descent--to approximate
$\mathcal{L}(Y \mid X)$. This approach adapts Competitive Learning Vector
Quantization (CLVQ), tailored for conditional distributions. It goes beyond
single-valued predictions by providing multiple representative points that
better reflect multimodal structures. It enables the approximation of the true
conditional law in the Wasserstein distance. The resulting framework is
theoretically grounded and useful for uncertainty quantification and multimodal
data generation tasks. For example, in computer vision inpainting tasks,
multiple plausible reconstructions may exist for the same partially observed
input image $X$. We demonstrate the effectiveness of our approach through
experiments on synthetic and real-world datasets.
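A minimal unconditional CLVQ sketch of the competitive step (the paper's codepoints are functions of $X$ trained by gradient descent, which is not reproduced here; `clvq` and its parameters are illustrative names):

```python
import numpy as np

def clvq(y, n_points=2, lr=0.05, init_spread=0.1, seed=0):
    """Plain CLVQ: n codepoints compete for each sample and only the
    winner moves toward it, so the points spread across the modes
    instead of collapsing onto the single (unimodal) mean."""
    rng = np.random.default_rng(seed)
    q = rng.uniform(-init_spread, init_spread, n_points)
    for yi in y:
        i = np.argmin(np.abs(q - yi))  # competition: closest point wins
        q[i] += lr * (yi - q[i])       # move the winner toward the sample
    return np.sort(q)
```

On bimodal data centered at $\pm 2$, the two codepoints land near the two modes, whereas a single conditional mean would sit near 0 and represent neither.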
|
2502.07153
|
Feature Importance Depends on Properties of the Data: Towards Choosing
the Correct Explanations for Your Data and Decision Trees based Models
|
cs.LG cs.AI
|
To ensure the reliability of explanations of machine learning models, it is
crucial to establish their advantages and limitations, and the cases in which
each method outperforms the others. However, the current understanding of
when and how each explanation method can be used is insufficient. To fill
this gap, we perform a comprehensive empirical evaluation by synthesizing
multiple datasets with the desired properties. Our main objective is to assess
the quality of feature importance estimates provided by local explanation
methods, which are used to explain predictions made by decision tree-based
models. By analyzing the results obtained from synthetic datasets as well as
publicly available binary classification datasets, we observe notable
disparities in the magnitude and sign of the feature importance estimates
generated by these methods. Moreover, we find that these estimates are
sensitive to specific properties present in the data. Although some model
hyper-parameters do not significantly influence feature importance assignment,
it is important to recognize that each method of explanation has limitations in
specific contexts. Our assessment highlights these limitations and provides
valuable insight into the suitability and reliability of different explanatory
methods in various scenarios.
|
2502.07154
|
Rethinking Fine-Tuning when Scaling Test-Time Compute: Limiting
Confidence Improves Mathematical Reasoning
|
cs.LG cs.AI
|
Recent progress in large language models (LLMs) highlights the power of
scaling test-time compute to achieve strong performance on complex tasks, such
as mathematical reasoning and code generation. This raises a critical question:
how should model training be modified to optimize performance under a
subsequent test-time compute strategy and budget? To explore this, we focus on
pass@N, a simple test-time strategy that searches for a correct answer in $N$
independent samples. We show, surprisingly, that training with cross-entropy
(CE) loss can be ${\it misaligned}$ with pass@N in that pass@N accuracy ${\it
decreases}$ with longer training. We explain the origins of this misalignment
in terms of model overconfidence induced by CE, and experimentally verify our
prediction of overconfidence as an impediment to scaling test-time compute via
pass@N. Furthermore we suggest a principled, modified training loss that is
better aligned to pass@N by limiting model confidence and rescuing pass@N test
performance. Our algorithm demonstrates improved mathematical reasoning on MATH
and MiniF2F benchmarks under several scenarios: (1) providing answers to math
questions; and (2) proving theorems by searching over proof trees of varying
shapes. Overall our work underscores the importance of co-designing two
traditionally separate phases of LLM development: training-time protocols and
test-time search and reasoning strategies.
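For concreteness, the pass@N quantities discussed above can be sketched as follows (the standard combinatorial estimator; the paper's exact evaluation protocol may differ):

```python
from math import comb

def pass_at_n(n_samples: int, n_correct: int, n: int) -> float:
    """Unbiased pass@N estimate from n_samples independent attempts,
    of which n_correct were correct: the chance that a random size-n
    subset contains at least one correct attempt."""
    if n_samples - n_correct < n:
        return 1.0  # every size-n subset must contain a correct sample
    return 1.0 - comb(n_samples - n_correct, n) / comb(n_samples, n)

def pass_at_n_from_p(p: float, n: int) -> float:
    """Population version: probability that at least one of n i.i.d.
    samples succeeds. An overconfident model pushes its per-problem
    success probability toward 0 or 1, so the problems it gets wrong
    gain nothing from a larger n."""
    return 1.0 - (1.0 - p) ** n
```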
|
2502.07156
|
Explaining 3D Computed Tomography Classifiers with Counterfactuals
|
cs.CV cs.AI
|
Counterfactual explanations in medical imaging are critical for understanding
the predictions made by deep learning models. We extend the Latent Shift
counterfactual generation method from 2D applications to 3D computed tomography
(CT) scans. We address the challenges associated with 3D data, such as limited
training samples and high memory demands, by implementing a slice-based
approach. This method leverages a 2D encoder trained on CT slices, which are
subsequently combined to maintain 3D context. We demonstrate this technique on
two models for clinical phenotype prediction and lung segmentation. Our
approach is both memory-efficient and effective for generating interpretable
counterfactuals in high-resolution 3D medical imaging.
|
2502.07158
|
Early Risk Prediction of Pediatric Cardiac Arrest from Electronic Health
Records via Multimodal Fused Transformer
|
cs.LG cs.AI
|
Early prediction of pediatric cardiac arrest (CA) is critical for timely
intervention in high-risk intensive care settings. We introduce PedCA-FT, a
novel transformer-based framework that fuses the tabular view of the EHR with
a derived textual view to fully capture the interactions of
high-dimensional risk factors and their dynamics. By employing dedicated
transformer modules for each modality view, PedCA-FT captures complex temporal
and contextual patterns to produce robust CA risk estimates. Evaluated on a
curated pediatric cohort from the CHOA-CICU database, our approach outperforms
ten other artificial intelligence models across five key performance metrics
and identifies clinically meaningful risk factors. These findings underscore
the potential of multimodal fusion techniques to enhance early CA detection and
improve patient care.
|
2502.07160
|
HDCompression: Hybrid-Diffusion Image Compression for Ultra-Low Bitrates
|
cs.CV cs.MM
|
Image compression under ultra-low bitrates remains challenging for both
conventional learned image compression (LIC) and generative vector-quantized
(VQ) modeling. Conventional LIC suffers from severe artifacts due to heavy
quantization, while generative VQ modeling gives poor fidelity due to the
mismatch between learned generative priors and specific inputs. In this work,
we propose Hybrid-Diffusion Image Compression (HDCompression), a dual-stream
framework that utilizes both generative VQ-modeling and diffusion models, as
well as conventional LIC, to achieve both high fidelity and high perceptual
quality. Different from previous hybrid methods that directly use pre-trained
LIC models to generate low-quality fidelity-preserving information from heavily
quantized latent, we use diffusion models to extract high-quality complementary
fidelity information from the ground-truth input, which can enhance the system
performance in several aspects: improving indices map prediction, enhancing the
fidelity-preserving output of the LIC stream, and refining conditioned image
reconstruction with VQ-latent correction. In addition, our diffusion model is
based on a dense representative vector (DRV), which is lightweight with very
simple sampling schedulers. Extensive experiments demonstrate that our
HDCompression outperforms the previous conventional LIC, generative
VQ-modeling, and hybrid frameworks in both quantitative metrics and qualitative
visualization, providing balanced robust compression performance at ultra-low
bitrates.
|
2502.07161
|
A Survey on Mamba Architecture for Vision Applications
|
cs.CV cs.AI
|
Transformers have become foundational for visual tasks such as object
detection, semantic segmentation, and video understanding, but their quadratic
complexity in attention mechanisms presents scalability challenges. To address
these limitations, the Mamba architecture utilizes state-space models (SSMs)
for linear scalability, efficient processing, and improved contextual
awareness. This paper investigates Mamba architecture for visual domain
applications and its recent advancements, including Vision Mamba (ViM) and
VideoMamba, which introduce bidirectional scanning, selective scanning
mechanisms, and spatiotemporal processing to enhance image and video
understanding. Architectural innovations like position embeddings, cross-scan
modules, and hierarchical designs further optimize the Mamba framework for
global and local feature extraction. These advancements position Mamba as a
promising architecture in computer vision research and applications.
|
2502.07164
|
Does Training on Synthetic Data Make Models Less Robust?
|
cs.CL cs.AI cs.LG
|
An increasingly common practice is to train large language models (LLMs)
using synthetic data. Often this synthetic data is produced by the same or
similar LLMs as those it is being used to train. This raises the question of
whether the synthetic data might in fact exacerbate certain "blindspots" by
reinforcing heuristics that the LLM already encodes. In this paper, we conduct
simulated experiments on the natural language inference (NLI) task with
Llama-2-7B-hf models. We use MultiNLI as the general task and HANS, a targeted
evaluation set designed to measure the presence of specific heuristic
strategies for NLI, as our "blindspot" task. Our goal is to determine whether
performance disparities between the general and blindspot tasks emerge. Our
results indicate that synthetic data does not reinforce blindspots in the way
we expected. Specifically, we see that, while fine-tuning with synthetic data
doesn't necessarily reduce the use of the heuristic, it also does not make it
worse as we hypothesized.
|
2502.07165
|
Don't Just Demo, Teach Me the Principles: A Principle-Based Multi-Agent
Prompting Strategy for Text Classification
|
cs.CL cs.AI
|
We present PRINCIPLE-BASED PROMPTING, a simple but effective multi-agent
prompting strategy for text classification. It first asks multiple LLM agents
to independently generate candidate principles based on analysis of
demonstration samples with or without labels, consolidates them into final
principles via a finalizer agent, and then sends them to a classifier agent to
perform downstream classification tasks. Extensive experiments on binary and
multi-class classification datasets with different sizes of LLMs show that our
approach not only achieves substantial performance gains (1.55% - 19.37%) over
zero-shot prompting on macro-F1 score but also outperforms other strong
baselines (CoT and stepback prompting). Principles generated by our approach
help LLMs perform better on classification tasks than human crafted principles
on two private datasets. Our multi-agent PRINCIPLE-BASED PROMPTING approach
also shows on-par or better performance compared to demonstration-based
few-shot prompting approaches, yet with substantially lower inference costs.
Ablation studies show that label information and the multi-agent cooperative
LLM framework play an important role in generating high-quality principles to
facilitate downstream classification tasks.
|
2502.07166
|
Bayesian Optimization for Building Social-Influence-Free Consensus
|
cs.MA cs.GT cs.LG stat.ML
|
We introduce Social Bayesian Optimization (SBO), a vote-efficient algorithm
for consensus-building in collective decision-making. In contrast to
single-agent scenarios, collective decision-making encompasses group dynamics
that may distort agents' preference feedback, thereby impeding their capacity
to achieve a social-influence-free consensus -- the most preferable decision
based on the aggregated agent utilities. We demonstrate that under mild
rationality axioms, reaching social-influence-free consensus using noisy
feedback alone is impossible. To address this, SBO employs a dual voting
system: cheap but noisy public votes (e.g., show of hands in a meeting), and
more accurate, though expensive, private votes (e.g., one-to-one interview). We
model social influence using an unknown social graph and leverage the dual
voting system to efficiently learn this graph. Our theoretical findings show
that social graph estimation converges faster than the black-box estimation of
agents' utilities, allowing us to reduce reliance on costly private votes early
in the process. This enables efficient consensus-building primarily through
noisy public votes, which are debiased based on the estimated social graph to
infer social-influence-free feedback. We validate the efficacy of SBO across
multiple real-world applications, including thermal comfort, team building,
travel negotiation, and energy trading collaboration.
|
2502.07169
|
Advancing Geological Carbon Storage Monitoring With 3d Digital Shadow
Technology
|
physics.comp-ph cs.LG physics.geo-ph
|
Geological Carbon Storage (GCS) is a key technology for achieving global
climate goals by capturing and storing CO2 in deep geological formations. Its
effectiveness and safety rely on accurate monitoring of subsurface CO2
migration using advanced time-lapse seismic imaging. A Digital Shadow framework
integrates field data, including seismic and borehole measurements, to track
CO2 saturation over time. Machine learning-assisted data assimilation
techniques, such as generative AI and nonlinear ensemble Bayesian filtering,
update a digital model of the CO2 plume while incorporating uncertainties in
reservoir properties. Compared to 2D approaches, 3D monitoring enhances the
spatial accuracy of GCS assessments, capturing the full extent of CO2
migration. This study extends the uncertainty-aware 2D Digital Shadow framework
by incorporating 3D seismic imaging and reservoir modeling, improving
decision-making and risk mitigation in CO2 storage projects.
|
2502.07170
|
Efficient classical error correction for parity encoded spin systems
|
quant-ph cs.IT math.IT
|
Fast solvers for combinatorial optimization problems (COPs) have attracted
engineering interest in various industrial and social applications. Quantum
annealing (QA) has emerged as a promising candidate and significant efforts
have been dedicated to its development. Since COP is encoded in the Ising
interaction between logical spins, its realization requires a spin system with
all-to-all connectivity, which poses technical difficulties in the physical
implementation of large-scale QA devices. W. Lechner, P. Hauke, and P. Zoller
proposed parity-encoding (PE) architecture, consisting of a larger system of
physical spins with only local connectivities between them, to avoid this
difficulty in near-future QA device development. They suggested that this
architecture not only reduces implementation difficulties and improves
scalability, but also has intrinsic fault tolerance because logical spins are
redundantly and nonlocally encoded into the physical spins. Nevertheless, it
remains unclear how these advantageous features can be exploited. This paper
addresses how to correct errors in a spin readout of PE architecture. Our work
is based on the close connection between PE architecture and classical
low-density parity-check (LDPC) codes. We have shown that independent and
identically distributed errors in a spin readout can be corrected by a very
simple decoding algorithm that can be regarded as a bit flipping (BF) algorithm
for the LDPC codes. The BF algorithm was shown to have comparable performance
to the belief propagation (BP) decoding algorithm. Furthermore, it is suggested
that the introduction of post-readout BF decoding reduces the total
computational cost and improves the performance of the global optimal solution
search using the PE architecture. We believe that our results indicate that the
PE architecture is a promising platform for near-term QA devices.
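A generic bit-flipping decoder in the spirit described above can be sketched as follows (not the paper's exact PE-architecture decoder; `H` is any classical parity-check matrix, and the greedy single-flip rule is one common BF variant):

```python
import numpy as np

def bit_flip_decode(H, y, max_iters=50):
    """Greedy bit-flipping: repeatedly flip the bit involved in the
    largest number of unsatisfied parity checks until the syndrome
    vanishes. H: (m, n) parity-check matrix over GF(2), y: (n,) bits."""
    x = y.copy()
    for _ in range(max_iters):
        syndrome = (H @ x) % 2
        if not syndrome.any():
            break                     # all checks satisfied: done
        unsat = H.T @ syndrome        # unsatisfied checks touching each bit
        j = int(np.argmax(unsat))     # most-suspect bit
        x[j] ^= 1
    return x
```

With the (7,4) Hamming code's parity-check matrix, any single readout error participates in every unsatisfied check, so this rule corrects it.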
|
2502.07171
|
Enhancing Robustness Of Digital Shadow For CO2 Storage Monitoring With
Augmented Rock Physics Modeling
|
physics.comp-ph cs.LG physics.geo-ph
|
To meet climate targets, the IPCC underscores the necessity of technologies
capable of removing gigatonnes of CO2 annually, with Geological Carbon Storage
(GCS) playing a central role. GCS involves capturing CO2 and injecting it into
deep geological formations for long-term storage, requiring precise monitoring
to ensure containment and prevent leakage. Time-lapse seismic imaging is
essential for tracking CO2 migration but often struggles to capture the
complexities of multi-phase subsurface flow. Digital Shadows (DS), leveraging
machine learning-driven data assimilation techniques such as nonlinear Bayesian
filtering and generative AI, provide a more detailed, uncertainty-aware
monitoring approach. By incorporating uncertainties in reservoir properties, DS
frameworks improve CO2 migration forecasts, reducing risks in GCS operations.
However, data assimilation depends on assumptions regarding reservoir
properties, rock physics models, and initial conditions, which, if inaccurate,
can compromise prediction reliability. This study demonstrates that augmenting
forecast ensembles with diverse rock physics models mitigates the impact of
incorrect assumptions and improves predictive accuracy, particularly in
differentiating uniform versus patchy saturation models.
|
2502.07172
|
SemiHMER: Semi-supervised Handwritten Mathematical Expression
Recognition using pseudo-labels
|
cs.CV cs.AI
|
In this paper, we study semi-supervised Handwritten Mathematical Expression
Recognition (HMER) via exploring both labeled data and extra unlabeled data. We
propose a novel consistency regularization framework, termed SemiHMER, which
introduces dual-branch semi-supervised learning. Specifically, we enforce
consistency between the two networks for the same input image. The
pseudo-label, generated by one perturbed recognition network, is utilized to
supervise the other network using the standard cross-entropy loss. The SemiHMER
consistency encourages high similarity between the predictions of the two
perturbed networks for the same input image and expands the training data by
leveraging unlabeled data with pseudo-labels. We further introduce a
weak-to-strong strategy by applying different levels of augmentation to each
branch, effectively expanding the training data and enhancing the quality of
network training. Additionally, we propose a novel module, the Global Dynamic
Counting Module (GDCM), to enhance the performance of the HMER decoder by
alleviating recognition inaccuracies in long-distance formula recognition and
reducing the occurrence of repeated characters. The experimental results
demonstrate that our work achieves significant performance improvements, with
an average accuracy increase of 5.47% on CROHME14, 4.87% on CROHME16, and 5.25%
on CROHME19, compared to our baselines.
|
2502.07175
|
Foreign-Object Detection in High-Voltage Transmission Line Based on
Improved YOLOv8m
|
cs.CV cs.AI
|
The safe operation of high-voltage transmission lines ensures the power
grid's security. Various foreign objects attached to the transmission lines,
such as balloons, kites and nesting birds, can significantly affect the safe
and stable operation of high-voltage transmission lines. With the advancement
of computer vision technology, periodic automatic inspection of foreign objects
is efficient and necessary. Existing detection methods have low accuracy
because foreign objects attached to the transmission lines are complex,
including occlusions, diverse object types, significant scale variations, and
complex backgrounds. In response to the practical needs of the Yunnan Branch of
China Southern Power Grid Co., Ltd., this paper proposes an improved
YOLOv8m-based model for detecting foreign objects on transmission lines.
Experiments are conducted on a dataset collected from Yunnan Power Grid. The
proposed model enhances the original YOLOv8m by incorporating a Global
Attention Module (GAM) into the backbone to focus on occluded foreign objects,
replacing the SPPF module with the SPPCSPC module to augment the model's
multiscale feature extraction capability, and introducing the Focal-EIoU loss
function to address the issue of high- and low-quality sample imbalances. These
improvements accelerate model convergence and enhance detection accuracy. The
experimental results demonstrate that our proposed model achieves a 2.7%
increase in mAP_0.5, a 4% increase in mAP_0.5:0.95, and a 6% increase in
recall.
|
2502.07176
|
MatrixKAN: Parallelized Kolmogorov-Arnold Network
|
cs.LG
|
Kolmogorov-Arnold Networks (KAN) are a new class of neural network
architecture representing a promising alternative to the Multilayer Perceptron
(MLP), demonstrating improved expressiveness and interpretability. However,
KANs suffer from slow training and inference speeds relative to MLPs due in
part to the recursive nature of the underlying B-spline calculations. This
issue is particularly apparent with respect to KANs utilizing high-degree
B-splines, as the number of required non-parallelizable recursions is
proportional to B-spline degree. We solve this issue by proposing MatrixKAN, a
novel optimization that parallelizes B-spline calculations with matrix
representation and operations, thus significantly improving effective
computation time for models utilizing high-degree B-splines. In this paper, we
demonstrate the superior scaling of MatrixKAN's computation time relative to
B-spline degree. Further, our experiments demonstrate speedups of approximately
40x relative to KAN, with significant additional speedup potential for larger
datasets or higher spline degrees.
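The sequential bottleneck in question is the Cox-de Boor recursion, whose depth grows with the spline degree; a plain recursive sketch of it (illustrative only, not MatrixKAN's matrix formulation, which replaces this recursion with precomputed basis matrices):

```python
def bspline_basis(i, k, t, x):
    """Cox-de Boor recursion for the i-th degree-k B-spline basis over
    knot vector t, evaluated at x. Each level depends on the level
    below it, so the k recursion levels cannot run in parallel."""
    if k == 0:
        return 1.0 if t[i] <= x < t[i + 1] else 0.0
    left = 0.0 if t[i + k] == t[i] else \
        (x - t[i]) / (t[i + k] - t[i]) * bspline_basis(i, k - 1, t, x)
    right = 0.0 if t[i + k + 1] == t[i + 1] else \
        (t[i + k + 1] - x) / (t[i + k + 1] - t[i + 1]) * \
        bspline_basis(i + 1, k - 1, t, x)
    return left + right
```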
|
2502.07178
|
Online Aggregation of Trajectory Predictors
|
cs.RO
|
Trajectory prediction, the task of forecasting future agent behavior from
past data, is central to safe and efficient autonomous driving. A diverse set
of methods (e.g., rule-based or learned with different architectures and
datasets) have been proposed, yet it is often the case that the performance of
these methods is sensitive to the deployment environment (e.g., how well the
design rules model the environment, or how accurately the test data match the
training data). Building upon the principled theory of online convex
optimization but also going beyond convexity and stationarity, we present a
lightweight and model-agnostic method to aggregate different trajectory
predictors online. We propose treating each individual trajectory predictor as
an "expert" and maintaining a probability vector to mix the outputs of
different experts. Then, the key technical approach lies in leveraging online
data -- the true agent behavior to be revealed at the next timestep -- to form
a convex-or-nonconvex, stationary-or-dynamic loss function whose gradient
steers the probability vector towards choosing the best mixture of experts. We
instantiate this method to aggregate trajectory predictors trained on different
cities in the NUSCENES dataset and show that it performs just as well, if not
better than, any singular model, even when deployed on the out-of-distribution
LYFT dataset.
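The expert-weighting mechanism can be sketched with multiplicative weights, a classical special case of online expert aggregation (the paper's online gradient method handles nonconvex and dynamic losses and is more general; all names here are illustrative):

```python
import numpy as np

def aggregate_experts(expert_preds, truths, eta=1.0):
    """Maintain a probability vector over K 'experts' and reweight it
    after each revealed ground truth. expert_preds: (T, K) scalar
    predictions from K experts; truths: (T,) revealed outcomes."""
    T, K = expert_preds.shape
    w = np.full(K, 1.0 / K)                # uniform prior over experts
    mixtures = np.empty(T)
    for t in range(T):
        mixtures[t] = w @ expert_preds[t]  # prediction before the reveal
        losses = (expert_preds[t] - truths[t]) ** 2
        w *= np.exp(-eta * losses)         # downweight poor experts
        w /= w.sum()                       # project back onto the simplex
    return w, mixtures
```

When one expert consistently matches the revealed outcomes, the mixture concentrates on it within a few rounds.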
|
2502.07179
|
Improved YOLOv7 model for insulator defect detection
|
cs.CV cs.AI
|
Insulators are crucial insulation components and structural supports in power
grids, playing a vital role in the transmission lines. Due to temperature
fluctuations, internal stress, or damage from hail, insulators are prone to
injury. Automatic detection of damaged insulators faces challenges such as
diverse types, small defect targets, and complex backgrounds and shapes. Most
research for detecting insulator defects has focused on a single defect type or
a specific material. However, the insulators in the grid's transmission lines
have different colors and materials. Various insulator defects coexist, and the
existing methods have difficulty meeting the practical application
requirements. Current methods suffer from low detection accuracy, and their
mAP_0.5 cannot meet application requirements. This paper proposes an improved YOLOv7
model for multi-type insulator defect detection. First, our model replaces the
SPPCSPC module with the RFB module to enhance the network's feature extraction
capability. Second, a CA mechanism is introduced into the head part to enhance
the network's feature representation ability and to improve detection accuracy.
Third, a WIoU loss function is employed to address the low-quality samples
hindering model generalization during training, thereby improving the model's
overall performance. The experimental results indicate that the proposed model
exhibits enhancements across various performance metrics. Specifically, there
is a 1.6% advancement in mAP_0.5, a corresponding 1.6% enhancement in
mAP_0.5:0.95, a 1.3% elevation in precision, and a 1% increase in recall.
Moreover, the model achieves parameter reduction by 3.2 million, leading to a
decrease of 2.5 GFLOPS in computational cost. Notably, there is also an
improvement of 2.81 milliseconds in single-image detection speed.
|
2502.07181
|
Tab2Visual: Overcoming Limited Data in Tabular Data Classification Using
Deep Learning with Visual Representations
|
cs.LG cs.CV
|
This research addresses the challenge of limited data in tabular data
classification, particularly prevalent in domains with constraints like
healthcare. We propose Tab2Visual, a novel approach that transforms
heterogeneous tabular data into visual representations, enabling the
application of powerful deep learning models. Tab2Visual effectively addresses
data scarcity by incorporating novel image augmentation techniques and
facilitating transfer learning. We extensively evaluate the proposed approach
on diverse tabular datasets, comparing its performance against a wide range of
machine learning algorithms, including classical methods, tree-based ensembles,
and state-of-the-art deep learning models specifically designed for tabular
data. We also perform an in-depth analysis of factors influencing Tab2Visual's
performance. Our experimental results demonstrate that Tab2Visual outperforms
other methods in classification problems with limited tabular data.
|
2502.07182
|
Morphing Wing Designs in Commercial Aviation
|
eess.SY cs.SY
|
With increasing demands for fuel efficiency and operational adaptability in
commercial aviation, this paper provides a systematic review and
classification of morphing wing technologies, analyzing their aerodynamic
performance characteristics and atmospheric condition adaptability. We first
develop a comprehensive classification framework for morphing wing designs
based on their scale of morphing, actuation mechanisms, and intended purposes.
Through analysis of historical developments and current implementations, we
evaluate two significant case studies: the Mission Adaptive Compliant Wing
(MACW) and Adaptive Aspect Ratio (AdAR) morphing wing, demonstrating
performance improvements of up to 25% in drag reduction and 40% in control
authority. Our investigation reveals critical trade-offs between full-span and
partial morphing approaches, particularly regarding implementation complexity,
certification requirements, and operational reliability. The study concludes
with an assessment of technical barriers and opportunities, providing specific
recommendations for advancing morphing wing technology in commercial aviation
applications. Key findings indicate that while material science and control
system advances enable practical implementation, certification pathways and
maintenance considerations remain critical challenges for widespread adoption.
|
2502.07183
|
Space-Aware Instruction Tuning: Dataset and Benchmark for Guide Dog
Robots Assisting the Visually Impaired
|
cs.RO cs.CV
|
Guide dog robots offer promising solutions to enhance mobility and safety for
visually impaired individuals, addressing the limitations of traditional guide
dogs, particularly in perceptual intelligence and communication. With the
emergence of Vision-Language Models (VLMs), robots are now capable of
generating natural language descriptions of their surroundings, aiding in safer
decision-making. However, existing VLMs often struggle to accurately interpret
and convey spatial relationships, which is crucial for navigation in complex
environments such as street crossings. We introduce the Space-Aware Instruction
Tuning (SAIT) dataset and the Space-Aware Benchmark (SA-Bench) to address the
limitations of current VLMs in understanding physical environments. Our
automated data generation pipeline focuses on the virtual path to the
destination in 3D space and the surroundings, enhancing environmental
comprehension and enabling VLMs to provide more accurate guidance to visually
impaired individuals. We also propose an evaluation protocol to assess VLM
effectiveness in delivering walking guidance. Comparative experiments
demonstrate that our space-aware instruction-tuned model outperforms
state-of-the-art algorithms. We have fully open-sourced the SAIT dataset and
SA-Bench, along with the related code, at
https://github.com/byungokhan/Space-awareVLM
|
2502.07184
|
Refine Knowledge of Large Language Models via Adaptive Contrastive
Learning
|
cs.CL cs.AI
|
How to alleviate the hallucinations of Large Language Models (LLMs) has
always been the fundamental goal pursued by the LLMs research community.
Looking through numerous hallucination-related studies, a mainstream category
of methods is to reduce hallucinations by optimizing the knowledge
representation of LLMs to change their output. Considering that the core focus
of these works is the knowledge acquired by models, and knowledge has long been
a central theme in human societal progress, we believe that the process of
models refining knowledge can greatly benefit from the way humans learn. In our
work, by imitating the human learning process, we design an Adaptive
Contrastive Learning strategy. Our method flexibly constructs different
positive and negative samples for contrastive learning based on LLMs' actual
mastery of knowledge. This strategy helps LLMs consolidate the correct
knowledge they already possess, deepen their understanding of the correct
knowledge they have encountered but not fully grasped, forget the incorrect
knowledge they previously learned, and honestly acknowledge the knowledge they
lack. Extensive experiments and detailed analyses on widely used datasets
demonstrate the effectiveness of our method.
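The contrastive principle described above can be sketched generically. The following InfoNCE-style loss is a standard formulation, not the paper's adaptive sample-construction strategy, and the toy embeddings are hypothetical:

```python
import numpy as np

def info_nce(anchor, positives, negatives, tau=0.1):
    """Generic contrastive (InfoNCE-style) loss: pull the anchor toward
    positive samples and push it away from negative samples."""
    def sim(a, b):
        # Cosine similarity between two embedding vectors.
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    pos = np.array([np.exp(sim(anchor, p) / tau) for p in positives])
    neg = np.array([np.exp(sim(anchor, n) / tau) for n in negatives])
    return float(-np.log(pos.sum() / (pos.sum() + neg.sum())))

if __name__ == "__main__":
    a = np.array([1.0, 0.0])
    # Loss is near zero when positives align with the anchor,
    # and large when the roles are reversed.
    print(info_nce(a, [np.array([1.0, 0.0])], [np.array([-1.0, 0.0])]))
    print(info_nce(a, [np.array([-1.0, 0.0])], [np.array([1.0, 0.0])]))
```

In the paper's strategy, which samples serve as positives and negatives is chosen adaptively based on the model's actual mastery of each piece of knowledge.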
|
2502.07186
|
Perceived Confidence Scoring for Data Annotation with Zero-Shot LLMs
|
cs.CL cs.LG
|
Zero-shot LLMs are increasingly used for text classification tasks, such as
sentiment or emotion detection of a given sentence or article. However,
their performance can be suboptimal in such data annotation tasks. We introduce
a novel technique, Perceived Confidence Scoring (PCS), that evaluates an LLM's
confidence for its classification of an input by leveraging Metamorphic
Relations (MRs). The MRs generate semantically equivalent yet textually mutated
versions of the input. Following the principles of Metamorphic Testing (MT),
the mutated versions are expected to have annotation labels similar to the
input. By analyzing the consistency of LLM responses across these variations,
PCS computes a confidence score based on the frequency of predicted labels. PCS
can be used in both single-LLM and multiple-LLM settings (e.g., majority
voting). We introduce an algorithm, Perceived Differential Evolution (PDE), that
determines the optimal weights assigned to the MRs and the LLMs for a
classification task. Empirical evaluation shows PCS significantly improves
zero-shot accuracy for Llama-3-8B-Instruct (4.96%) and Mistral-7B-Instruct-v0.3
(10.52%), with Gemma-2-9b-it showing a 9.39% gain. When combining all three
models, PCS significantly outperforms majority voting by 7.75%.
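A minimal sketch of the frequency-based confidence idea follows; the classifier and metamorphic rewrites below are toy stand-ins, not the paper's actual MRs or PDE weighting:

```python
from collections import Counter

def perceived_confidence(classify_fn, text, mutate_fns):
    """Score a label by its consistency across metamorphic variants.

    classify_fn: callable mapping a text to a label (e.g., an LLM call).
    mutate_fns: metamorphic relations producing semantically equivalent texts.
    """
    variants = [text] + [m(text) for m in mutate_fns]
    labels = [classify_fn(v) for v in variants]
    label, freq = Counter(labels).most_common(1)[0]
    # Confidence = frequency of the majority label across all variants.
    return label, freq / len(labels)

if __name__ == "__main__":
    # Deterministic stand-in for an LLM classifier.
    fake_llm = lambda t: "positive" if "good" in t.lower() else "negative"
    mrs = [str.upper, lambda t: t + " indeed", lambda t: "Well, " + t]
    print(perceived_confidence(fake_llm, "A good movie", mrs))
```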
|
2502.07187
|
Local Regularizers Are Not Transductive Learners
|
cs.LG stat.ML
|
We partly resolve an open question raised by Asilis et al. (COLT 2024):
whether the algorithmic template of local regularization -- an intriguing
generalization of explicit regularization, a.k.a. structural risk minimization
-- suffices to learn all learnable multiclass problems. Specifically, we
provide a negative answer to this question in the transductive model of
learning. We exhibit a multiclass classification problem which is learnable in
both the transductive and PAC models, yet cannot be learned transductively by
any local regularizer. The corresponding hypothesis class, and our proof, are
based on principles from cryptographic secret sharing. We outline challenges in
extending our negative result to the PAC model, leaving open the tantalizing
possibility of a PAC/transductive separation with respect to local
regularization.
|
2502.07188
|
A Large-Scale Benchmark for Vietnamese Sentence Paraphrases
|
cs.CL
|
This paper presents ViSP, a high-quality Vietnamese dataset for sentence
paraphrasing, consisting of 1.2M original-paraphrase pairs collected from
various domains. The dataset was constructed using a hybrid approach that
combines automatic paraphrase generation with manual evaluation to ensure high
quality. We conducted experiments using methods such as back-translation, EDA,
and baseline models like BART and T5, as well as large language models (LLMs),
including GPT-4o, Gemini-1.5, Aya, Qwen-2.5, and Meta-Llama-3.1 variants. To
the best of our knowledge, this is the first large-scale study on Vietnamese
paraphrasing. We hope that our dataset and findings will serve as a valuable
foundation for future research and applications in Vietnamese paraphrase tasks.
|
2502.07189
|
Exploring Neural Network Pruning with Screening Methods
|
cs.LG stat.ML
|
Deep neural networks (DNNs) such as convolutional neural networks (CNNs) for
visual tasks, recurrent neural networks (RNNs) for sequence data, and
transformer models for rich linguistic or multimodal tasks have achieved
unprecedented performance on a wide range of tasks. The impressive performance
of modern DNNs is partially attributed to their sheer scale. The latest deep
learning models have tens to hundreds of millions of parameters, which makes
inference resource-intensive. The high computational complexity of
these networks prevents their deployment on resource-limited devices such as
mobile platforms, IoT devices, and edge computing systems because these devices
require energy-efficient and real-time processing capabilities. This paper
proposes and evaluates a network pruning framework that eliminates
non-essential parameters based on a statistical analysis of network component
significance across classification categories. The proposed method uses
screening methods coupled with a weighted scheme to assess connection and
channel contributions for unstructured and structured pruning which allows for
the elimination of unnecessary network elements without significantly degrading
model performance. Extensive experimental validation on real-world vision
datasets for both fully connected neural networks (FNNs) and CNNs has shown
that the proposed framework produces competitive lean networks compared to the
original networks. Moreover, the proposed framework outperforms
state-of-the-art network pruning methods in two out of three cases.
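As a rough illustration of score-based unstructured pruning (a generic stand-in: the paper's screening statistics over class-wise component significance are more involved than the simple scores used here):

```python
import numpy as np

def prune_unstructured(W, scores, sparsity=0.5):
    """Zero the fraction `sparsity` of weights with the lowest screening
    scores, keeping the rest unchanged."""
    flat = scores.ravel()
    k = int(sparsity * flat.size)
    thresh = np.partition(flat, k)[k]   # k-th smallest score
    mask = scores >= thresh             # keep high-scoring connections
    return W * mask, mask

if __name__ == "__main__":
    W = np.arange(1.0, 11.0).reshape(2, 5)
    # Magnitude as a placeholder score; the paper derives scores from
    # statistical screening of component significance across classes.
    Wp, mask = prune_unstructured(W, np.abs(W), sparsity=0.4)
    print(Wp)
```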
|
2502.07190
|
Understanding LLMs' Fluid Intelligence Deficiency: An Analysis of the
ARC Task
|
cs.AI
|
While LLMs have exhibited strong performance on various NLP tasks, it is
noteworthy that most of these tasks rely on utilizing the vast amount of
knowledge encoded in LLMs' parameters, rather than solving new problems without
prior knowledge. In cognitive research, the latter ability is referred to as
fluid intelligence, which is considered to be critical for assessing human
intelligence. Recent research on fluid intelligence assessments has highlighted
significant deficiencies in LLMs' abilities. In this paper, we analyze the
challenges LLMs face in demonstrating fluid intelligence through controlled
experiments, using the most representative ARC task as an example. Our study
revealed three major limitations in existing LLMs: limited ability for skill
composition, unfamiliarity with abstract input formats, and the intrinsic
deficiency of left-to-right decoding. Our data and code are available at
https://wujunjie1998.github.io/araoc-benchmark.github.io/.
|
2502.07191
|
Bag of Tricks for Inference-time Computation of LLM Reasoning
|
cs.AI
|
With the advancement of large language models (LLMs), solving complex
reasoning tasks has gained increasing attention. Inference-time computation
methods (e.g., Best-of-N, beam search) are particularly valuable as
they can enhance reasoning performance without modifying model parameters or
requiring additional training. However, these techniques come with
implementation challenges, and most existing methods remain at the
proof-of-concept stage with limited practical adoption due to their
computational complexity and varying effectiveness across different tasks. In
this paper, we investigate and benchmark diverse inference-time computation
strategies across reasoning tasks of varying complexity. Since most current
methods rely on a proposer-verifier pipeline that first generates candidate
solutions (e.g., reasoning solutions) and then selects the best one based on
reward signals (e.g., RLHF rewards, process rewards), our research focuses on
optimizing both candidate solution generation (e.g., instruction prompts,
hyperparameters such as temperature and top-p) and reward mechanisms (e.g.,
self-evaluation, reward types). Through extensive experiments (more than 20,000
A100-80G GPU hours with over 1,000 experiments) across a variety of models
(e.g., Llama, Qwen, and Mistral families) of various sizes, our ablation
studies reveal that previously overlooked strategies can significantly enhance
performance (e.g., tuning temperature can improve reasoning task performance by
up to 5%). Furthermore, we establish a standardized benchmark for
inference-time computation by systematically evaluating six representative
methods across eight reasoning tasks. These findings provide a stronger
foundation for future research. The code is available at
https://github.com/usail-hkust/benchmark_inference_time_computation_LLM
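For reference, the Best-of-N baseline in the proposer-verifier pipeline can be sketched as follows; the toy generator and reward functions stand in for LLM sampling and a reward model:

```python
import random

def best_of_n(generate_fn, reward_fn, prompt, n=8, temperature=0.8):
    """Best-of-N: sample n candidate solutions, return the one the
    reward signal scores highest."""
    candidates = [generate_fn(prompt, temperature) for _ in range(n)]
    return max(candidates, key=reward_fn)

if __name__ == "__main__":
    random.seed(0)
    # Toy "generation": noisy numeric guesses; reward prefers values near 42.
    gen = lambda prompt, temp: random.gauss(40, 5 * temp)
    reward = lambda x: -abs(x - 42)
    print(best_of_n(gen, reward, "solve", n=16))
```

Temperature and top-p in the real pipeline control candidate diversity, which is one of the hyperparameters the paper's ablations tune.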
|
2502.07192
|
OscNet: Machine Learning on CMOS Oscillator Networks
|
cs.CV
|
Machine learning and AI have achieved remarkable advancements but at the cost
of significant computational resources and energy consumption. This has created
an urgent need for a novel, energy-efficient computational fabric to replace
the current computing pipeline. Recently, a promising approach has emerged by
mimicking spiking neurons in the brain and leveraging oscillators on CMOS for
direct computation. In this context, we propose a new, energy-efficient
machine learning framework implemented on CMOS Oscillator Networks (OscNet). We
model the developmental processes of the prenatal brain's visual system using
OscNet, updating weights based on the biologically inspired Hebbian rule. This
same pipeline is then directly applied to standard machine learning tasks.
OscNet is specially designed hardware and is inherently energy-efficient. Its
reliance on forward propagation alone for training further enhances its energy
efficiency while maintaining biological plausibility. Simulation validates our
designs of OscNet architectures. Experimental results demonstrate that the
Hebbian learning pipeline on OscNet achieves performance comparable to or even
surpassing traditional machine learning algorithms, highlighting its potential
as an energy-efficient and effective computational paradigm.
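The biologically inspired Hebbian rule mentioned above can be illustrated with a minimal forward-only update. This is Oja's stabilized variant, a common textbook form, not OscNet's exact hardware rule:

```python
import numpy as np

def hebbian_step(W, x, lr=0.01):
    """One forward-only Hebbian update: y = Wx, then strengthen weights
    between co-active inputs and outputs. Oja's decay term keeps the
    weight norm bounded."""
    y = W @ x
    W = W + lr * (np.outer(y, x) - (y ** 2)[:, None] * W)  # Oja's rule
    return W, y

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    W = rng.normal(size=(1, 2)) * 0.1
    # Inputs dominated by the direction [1, 0]; Hebbian learning should
    # align the weight vector with this principal direction.
    for _ in range(3000):
        x = rng.normal() * np.array([1.0, 0.0]) + 0.1 * rng.normal(size=2)
        W, _ = hebbian_step(W, x, lr=0.05)
    print(W[0] / np.linalg.norm(W[0]))
```

Note that no backward pass is needed, which is the property the abstract credits for OscNet's energy efficiency.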
|
2502.07193
|
Provably Efficient RLHF Pipeline: A Unified View from Contextual Bandits
|
cs.LG stat.ML
|
Reinforcement Learning from Human Feedback (RLHF) is a widely used approach
for aligning Large Language Models (LLMs) with human preferences. While recent
advancements have provided valuable insights into various stages and settings
of RLHF, a comprehensive theoretical understanding of the entire RLHF pipeline
remains lacking. Towards this end, we propose a unified framework for the RLHF
pipeline from the view of contextual bandits and provide provable efficiency
guarantees. In particular, we decompose the RLHF process into two distinct
stages: (post-)training and deployment, exploring both passive and active data
collection strategies during the training phase. By employing the Bradley-Terry
preference model with a linearly parameterized reward function, we reformulate
RLHF as a contextual preference bandit problem. We then develop novel
algorithms for each stage, demonstrating significant improvements over existing
approaches in both statistical and computational efficiency. Finally, we apply
our method to train and deploy Llama-3-8B-Instruct on the
Ultrafeedback-binarized dataset, and empirical results confirm the
effectiveness of our approach.
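The Bradley-Terry model with a linearly parameterized reward, as used in the reformulation above, can be sketched as follows. The stochastic-gradient fitting loop is an illustrative stand-in, not the paper's algorithm:

```python
import numpy as np

def bt_prob(theta, phi_a, phi_b):
    """Bradley-Terry with linear reward r = <theta, phi>:
    P(a preferred over b) = sigmoid(r_a - r_b)."""
    return 1.0 / (1.0 + np.exp(-(theta @ (phi_a - phi_b))))

def bt_grad_step(theta, phi_a, phi_b, lr=0.1):
    """One gradient ascent step on the log-likelihood of observing a > b."""
    p = bt_prob(theta, phi_a, phi_b)
    return theta + lr * (1.0 - p) * (phi_a - phi_b)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    theta_true = np.array([1.0, -2.0])   # hidden reward parameters
    theta = np.zeros(2)
    for _ in range(2000):
        pa, pb = rng.normal(size=2), rng.normal(size=2)
        # Sample a preference from the true model, then update.
        if rng.random() < bt_prob(theta_true, pa, pb):
            theta = bt_grad_step(theta, pa, pb)
        else:
            theta = bt_grad_step(theta, pb, pa)
    print(theta)
```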
|
2502.07194
|
Dense Object Detection Based on De-homogenized Queries
|
cs.CV cs.AI
|
Dense object detection is widely used in automatic driving, video
surveillance, and other fields. This paper focuses on the challenging task of
dense object detection. Currently, detection methods based on greedy
algorithms, such as non-maximum suppression (NMS), often produce many
repetitive predictions or missed detections in dense scenarios, which is a
common problem faced by NMS-based algorithms. Studying the end-to-end DETR
(DEtection TRansformer), a type of detector that incorporates the
post-processing de-duplication capability of NMS into the network, we found
that homogeneous queries in the query-based detector reduce the network's
de-duplication capability and the encoder's learning efficiency, resulting in
duplicate predictions and missed detections.
To solve this problem, we propose learnable differentiated encoding to
de-homogenize the queries, and at the same time, queries can communicate with
each other via differentiated encoding information, replacing the previous
self-attention among the queries. In addition, we used joint loss on the output
of the encoder that considered both location and confidence prediction to give
a higher-quality initialization for queries. Without cumbersome decoder
stacking, and while maintaining accuracy, our proposed end-to-end detection
framework was more concise and reduced the number of parameters by about 8% compared to
deformable DETR. Our method achieved excellent results on the challenging
CrowdHuman dataset with 93.6% average precision (AP), 39.2% MR-2, and 84.3% JI.
This performance surpassed previous SOTA methods, such as Iter-E2EDet
(Progressive End-to-End Object Detection) and MIP (One proposal, Multiple
predictions). In addition, our method is more robust in various scenarios with
different densities.
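For context, the greedy NMS procedure that motivates this line of work looks like this in its classical form (boxes are (x1, y1, x2, y2) tuples):

```python
def nms(boxes, scores, iou_thresh=0.5):
    """Greedy non-maximum suppression: repeatedly keep the highest-scoring
    box and drop boxes overlapping it above the IoU threshold. In dense
    scenes this greedy rule causes the duplicate/missed detections the
    abstract describes."""
    def iou(a, b):
        x1, y1 = max(a[0], b[0]), max(a[1], b[1])
        x2, y2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0, x2 - x1) * max(0, y2 - y1)
        area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
        return inter / (area(a) + area(b) - inter + 1e-9)

    order = sorted(range(len(boxes)), key=lambda i: -scores[i])
    keep = []
    while order:
        i = order.pop(0)
        keep.append(i)
        order = [j for j in order if iou(boxes[i], boxes[j]) <= iou_thresh]
    return keep

if __name__ == "__main__":
    boxes = [(0, 0, 10, 10), (1, 1, 11, 11), (20, 20, 30, 30)]
    print(nms(boxes, [0.9, 0.8, 0.7]))
```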
|
2502.07196
|
Parameter Optimization of Optical Six-Axis Force/Torque Sensor for
Legged Robots
|
cs.RO
|
This paper introduces a novel six-axis force/torque sensor tailored for
compact and lightweight legged robots. Unlike traditional strain gauge-based
sensors, the proposed non-contact design employs photocouplers, enhancing
resistance to physical impacts and reducing damage risk. This approach
simplifies manufacturing, lowers costs, and meets the demands of legged robots
by combining small size, light weight, and a wide force measurement range. A
methodology for optimizing sensor parameters is also presented, focusing on
maximizing sensitivity and minimizing error. Precise modeling and analysis of
objective functions enabled the derivation of optimal design parameters. The
sensor's performance was validated through extensive testing and integration
into quadruped robots, demonstrating alignment with theoretical modeling. The
sensor's precise measurement capabilities make it suitable for diverse robotic
environments, particularly in analyzing interactions between robot feet and the
ground. This innovation addresses existing sensor limitations while
contributing to advancements in robotics and sensor technology, paving the way
for future applications in robotic systems.
|
2502.07199
|
Fixed-Confidence Best Arm Identification with Decreasing Variance
|
cs.LG cs.IT math.IT math.ST stat.ML stat.TH
|
We focus on the problem of best-arm identification in a stochastic multi-arm
bandit with temporally decreasing variances for the arms' rewards. We model arm
rewards as Gaussian random variables with fixed means and variances that
decrease with time. The cost incurred by the learner is modeled as a weighted
sum of the time needed by the learner to identify the best arm, and the number
of samples of arms collected by the learner before termination. Under this cost
function, there is an incentive for the learner to not sample arms in all
rounds, especially in the initial rounds. On the other hand, not sampling
increases the termination time of the learner, which also increases cost. This
trade-off necessitates new sampling strategies. We propose two policies. The
first policy has an initial wait period with no sampling followed by continuous
sampling. The second policy samples periodically and uses a weighted average of
the rewards observed to identify the best arm. We provide analytical guarantees
on the performance of both policies and supplement our theoretical results with
simulations which show that our policies outperform the state-of-the-art
policies for the classical best arm identification problem.
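A toy simulation of the second policy's idea follows (periodic sampling with precision-weighted averaging); the schedule and weighting here are illustrative, not the paper's exact policy:

```python
import random

def weighted_best_arm(means, sigma0=2.0, horizon=200, period=4, seed=0):
    """Sample arms only every `period` rounds; reward variance at round t is
    sigma0^2 / t, so later, less noisy samples get larger inverse-variance
    weights in the running estimate."""
    rng = random.Random(seed)
    k = len(means)
    num = [0.0] * k   # weighted reward sums
    den = [0.0] * k   # weight sums
    for t in range(1, horizon + 1):
        if t % period:            # skip non-sampling rounds to save cost
            continue
        var = sigma0 ** 2 / t     # variance decreases with time
        w = 1.0 / var             # inverse-variance weight
        for a in range(k):
            num[a] += w * rng.gauss(means[a], var ** 0.5)
            den[a] += w
    est = [n / d for n, d in zip(num, den)]
    return max(range(k), key=est.__getitem__)

if __name__ == "__main__":
    print(weighted_best_arm([0.2, 0.9, 0.5]))
```

Waiting before sampling (the first policy) trades a longer termination time for cheaper, lower-variance samples, which is exactly the cost trade-off the abstract describes.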
|
2502.07200
|
Color-Quality Invariance for Robust Medical Image Segmentation
|
eess.IV cs.CV
|
Single-source domain generalization (SDG) in medical image segmentation
remains a significant challenge, particularly for images with varying color
distributions and qualities. Previous approaches often struggle when models
trained on high-quality images fail to generalize to low-quality test images
due to these color and quality shifts. In this work, we propose two novel
techniques to enhance generalization: dynamic color image normalization (DCIN)
module and color-quality generalization (CQG) loss. The DCIN dynamically
normalizes the color of test images using two reference image selection
strategies. Specifically, the DCIN utilizes a global reference image selection
(GRIS), which finds a universal reference image, and a local reference image
selection (LRIS), which selects a semantically similar reference image per test
sample. Additionally, CQG loss enforces invariance to color and quality
variations by ensuring consistent segmentation predictions across transformed
image pairs. Experimental results show that our proposals significantly improve
segmentation performance over the baseline on two target domain datasets,
despite being trained solely on a single source domain. Notably, our model
achieved up to a 32.3-point increase in Dice score compared to the baseline,
consistently producing robust and usable results even under substantial domain
shifts. Our work contributes to the development of more robust medical image
segmentation models that generalize across unseen domains. The implementation
code is available at https://github.com/RaviShah1/DCIN-CQG.
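Reference-based color normalization can be illustrated with classical per-channel statistics matching. This is a simple stand-in for the DCIN module, whose GRIS/LRIS reference-selection strategies are more elaborate:

```python
import numpy as np

def match_color_stats(test_img, ref_img, eps=1e-6):
    """Shift and scale each channel of test_img so its per-channel mean and
    std match those of the reference image (Reinhard-style normalization)."""
    t = test_img.astype(np.float64)
    r = ref_img.astype(np.float64)
    t_mu, t_sd = t.mean(axis=(0, 1)), t.std(axis=(0, 1)) + eps
    r_mu, r_sd = r.mean(axis=(0, 1)), r.std(axis=(0, 1)) + eps
    out = (t - t_mu) / t_sd * r_sd + r_mu
    return np.clip(out, 0, 255).astype(np.uint8)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    test = rng.integers(0, 100, size=(8, 8, 3)).astype(np.uint8)   # dark image
    ref = rng.integers(100, 200, size=(8, 8, 3)).astype(np.uint8)  # reference
    print(match_color_stats(test, ref).mean())
```

In DCIN terms, the reference image would come from either the global (GRIS) or the semantically matched local (LRIS) selection strategy.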
|
2502.07202
|
Monte Carlo Tree Diffusion for System 2 Planning
|
cs.AI cs.LG
|
Diffusion models have recently emerged as a powerful tool for planning.
However, unlike Monte Carlo Tree Search (MCTS), whose performance naturally
improves with additional test-time computation (TTC), standard diffusion-based
planners offer only limited avenues for TTC scalability. In this paper, we
introduce Monte Carlo Tree Diffusion (MCTD), a novel framework that integrates
the generative strength of diffusion models with the adaptive search
capabilities of MCTS. Our method reconceptualizes denoising as a
tree-structured process, allowing partially denoised plans to be iteratively
evaluated, pruned, and refined. By selectively expanding promising trajectories
while retaining the flexibility to revisit and improve suboptimal branches,
MCTD achieves the benefits of MCTS, such as controlling exploration-exploitation
trade-offs, within the diffusion framework. Empirical results on challenging
long-horizon tasks show that MCTD outperforms diffusion baselines, yielding
higher-quality solutions as TTC increases.
|
2502.07203
|
Playmate: Flexible Control of Portrait Animation via 3D-Implicit Space
Guided Diffusion
|
cs.CV
|
Recent diffusion-based talking face generation models have demonstrated
impressive potential in synthesizing videos that accurately match a speech
audio clip with a given reference identity. However, existing approaches still
encounter significant challenges due to uncontrollable factors, such as
inaccurate lip-sync, inappropriate head posture, and the lack of fine-grained
control over facial expressions. In order to introduce more face-guided
conditions beyond speech audio clips, a novel two-stage training framework
Playmate is proposed to generate more lifelike facial expressions and talking
faces. In the first stage, we introduce a decoupled implicit 3D representation
along with a meticulously designed motion-decoupled module to facilitate more
accurate attribute disentanglement and generate expressive talking videos
directly from audio cues. Then, in the second stage, we introduce an
emotion-control module to encode emotion control information into the latent
space, enabling fine-grained control over emotions and thereby achieving the
ability to generate talking videos with desired emotion. Extensive experiments
demonstrate that Playmate outperforms existing state-of-the-art methods in
terms of video quality and lip-synchronization, and improves flexibility in
controlling emotion and head pose. The code will be available at
https://playmate111.github.io.
|
2502.07205
|
VINP: Variational Bayesian Inference with Neural Speech Prior for Joint
ASR-Effective Speech Dereverberation and Blind RIR Identification
|
eess.AS cs.AI cs.LG
|
Reverberant speech, denoting the speech signal degraded by the process of
reverberation, contains crucial knowledge of both anechoic source speech and
room impulse response (RIR). This work proposes a variational Bayesian
inference (VBI) framework with neural speech prior (VINP) for joint speech
dereverberation and blind RIR identification. In VINP, a probabilistic signal
model is constructed in the time-frequency (T-F) domain based on convolution
transfer function (CTF) approximation. For the first time, we propose using an
arbitrary discriminative dereverberation deep neural network (DNN) to predict
the prior distribution of anechoic speech within a probabilistic model. By
integrating both reverberant speech and the anechoic speech prior, VINP yields
the maximum a posteriori (MAP) and maximum likelihood (ML) estimations of the
anechoic speech spectrum and CTF filter, respectively. After simple
transformations, the waveforms of anechoic speech and RIR are estimated.
Moreover, VINP is effective for automatic speech recognition (ASR) systems,
which sets it apart from most deep learning (DL)-based single-channel
dereverberation approaches. Experiments on single-channel speech
dereverberation demonstrate that VINP reaches an advanced level in most metrics
related to human perception and displays unquestionable state-of-the-art (SOTA)
performance in ASR-related metrics. For blind RIR identification, experiments
indicate that VINP attains the SOTA level in blind estimation of reverberation
time at 60 dB (RT60) and direct-to-reverberation ratio (DRR). Codes and audio
samples are available online.
|
2502.07207
|
A Study on the Importance of Features in Detecting Advanced Persistent
Threats Using Machine Learning
|
cs.CR cs.AI cs.LG
|
Advanced Persistent Threats (APTs) pose a significant security risk to
organizations and industries. These attacks often lead to severe data breaches
and compromise systems for extended periods. Mitigating these sophisticated
attacks is highly challenging due to the stealthy and persistent nature of
APTs. Machine learning models are often employed to tackle this challenge by
bringing automation and scalability to APT detection. Nevertheless, these
intelligent methods are data-driven, and thus, highly affected by the quality
and relevance of input data. This paper analyzes the measurements considered
when recording network traffic and determines which features contribute most to
detecting APT samples. To do this, we study the features associated with
various APT cases and determine their importance using a machine learning
framework. To ensure the generalization of our findings, several feature
selection techniques are employed and paired with different classifiers to
evaluate their effectiveness. Our findings provide insights into how APT
detection can be enhanced in real-world scenarios.
|
2502.07209
|
Enhancing Physics-Informed Neural Networks Through Feature Engineering
|
cs.LG
|
Physics-Informed Neural Networks (PINNs) seek to solve partial differential
equations (PDEs) with deep learning. Mainstream approaches that deploy
fully-connected multi-layer deep learning architectures require prolonged
training to achieve even moderate accuracy, while recent work on feature
engineering allows higher accuracy and faster convergence. This paper
introduces SAFE-NET, a Single-layered Adaptive Feature Engineering NETwork that
achieves orders-of-magnitude lower errors with far fewer parameters than
baseline feature engineering methods. SAFE-NET returns to basic ideas in
machine learning, using Fourier features, a simplified single hidden layer
network architecture, and an effective optimizer that improves the conditioning
of the PINN optimization problem. Numerical results show that SAFE-NET
converges faster and typically outperforms deeper networks and more complex
architectures. It consistently uses fewer parameters -- on average, 65% fewer
than the competing feature engineering methods -- while achieving comparable
accuracy in less than 30% of the training epochs. Moreover, each SAFE-NET epoch
is 95% faster than those of competing feature engineering approaches. These
findings challenge the prevailing belief that modern PINNs effectively learn
features in these scientific applications and highlight the efficiency gains
possible through feature engineering.
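The Fourier-feature idea at the heart of SAFE-NET can be sketched with a random feature map and a linear least-squares readout. This is an illustrative simplification; SAFE-NET's adaptive features and optimizer are not reproduced here:

```python
import numpy as np

def fourier_features(x, B):
    """Map inputs x of shape (n, d) through random Fourier features
    [sin(xB), cos(xB)], a common remedy for spectral bias in PINNs."""
    proj = x @ B
    return np.concatenate([np.sin(proj), np.cos(proj)], axis=1)

def fit_single_layer(x, y, B, ridge=1e-8):
    """Ridge least-squares fit of a linear readout on Fourier features:
    a minimal stand-in for a single-hidden-layer feature network."""
    Phi = fourier_features(x, B)
    A = Phi.T @ Phi + ridge * np.eye(Phi.shape[1])
    return np.linalg.solve(A, Phi.T @ y)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = np.linspace(0, 1, 200)[:, None]
    y = np.sin(8 * np.pi * x[:, 0])           # high-frequency target
    B = rng.normal(scale=20.0, size=(1, 64))  # random frequency matrix
    w = fit_single_layer(x, y, B)
    print(np.abs(fourier_features(x, B) @ w - y).max())
```

A plain coordinate-input MLP struggles to fit such high-frequency targets quickly, which is the spectral-bias motivation behind feature engineering in PINNs.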
|
2502.07211
|
Improve the Training Efficiency of DRL for Wireless Communication
Resource Allocation: The Role of Generative Diffusion Models
|
cs.LG
|
Dynamic resource allocation in mobile wireless networks involves complex,
time-varying optimization problems, motivating the adoption of deep
reinforcement learning (DRL). However, most existing works rely on pre-trained
policies, overlooking dynamic environmental changes that rapidly invalidate the
policies. Periodic retraining becomes inevitable but incurs prohibitive
computational costs and energy consumption, critical concerns for
resource-constrained wireless systems. We identify three root causes of
inefficient retraining: high-dimensional state spaces, suboptimal
exploration-exploitation trade-offs in the action space, and reward design limitations. To overcome
these limitations, we propose Diffusion-based Deep Reinforcement Learning
(D2RL), which leverages generative diffusion models (GDMs) to holistically
enhance all three DRL components. Iterative refinement process and distribution
modelling of GDMs enable (1) the generation of diverse state samples to improve
environmental understanding, (2) balanced action space exploration to escape
local optima, and (3) the design of discriminative reward functions that better
evaluate action quality. Our framework operates in two modes: Mode I leverages
GDMs to explore reward spaces and design discriminative reward functions that
rigorously evaluate action quality, while Mode II synthesizes diverse state
samples to enhance environmental understanding and generalization. Extensive
experiments demonstrate that D2RL achieves faster convergence and reduced
computational costs over conventional DRL methods for resource allocation in
wireless communications while maintaining competitive policy performance. This
work underscores the transformative potential of GDMs in overcoming fundamental
DRL training bottlenecks for wireless networks, paving the way for practical,
real-time deployments.
|
2502.07213
|
Evaluation for Regression Analyses on Evolving Data Streams
|
cs.LG cs.AI
|
The paper explores the challenges of regression analysis in evolving data
streams, an area that remains relatively underexplored compared to
classification. We propose a standardized evaluation process for regression and
prediction interval tasks in streaming contexts. Additionally, we introduce an
innovative drift simulation strategy capable of synthesizing various drift
types, including the less-studied incremental drift. Comprehensive experiments
with state-of-the-art methods, conducted under the proposed process, validate
the effectiveness and robustness of our approach.
|
2502.07214
|
Pareto Optimal Algorithmic Recourse in Multi-cost Function
|
cs.LG cs.AI cs.DS
|
In decision-making systems, algorithmic recourse aims to identify
minimal-cost actions to alter an individual's features, thereby obtaining a
desired outcome. This empowers individuals to understand, question, or alter
decisions that negatively affect them. However, due to the variety and
sensitivity of system environments and individual personalities, quantifying
cost with a single function is nearly impossible when multiple criteria must be
considered. Most current recourse mechanisms use gradient-based
methods that assume cost functions are differentiable, often not applicable in
real-world scenarios, resulting in sub-optimal solutions that compromise
various criteria. These solutions are typically intractable and lack rigorous
theoretical foundations, raising concerns regarding interpretability,
reliability, and transparency from the explainable AI (XAI) perspective.
To address these issues, this work proposes an algorithmic recourse framework
that handles non-differentiable and discrete multi-cost functions. By
formulating recourse as a multi-objective optimization problem and assigning
weights to different criteria based on their importance, our method identifies
Pareto optimal recourse recommendations. To demonstrate scalability, we
incorporate the concept of epsilon-net, proving the ability to find
approximated Pareto optimal actions. Experiments show the trade-off between
different criteria and the method's scalability in large graphs. Compared to
current heuristic practices, our approach provides a stronger theoretical
foundation and better aligns recourse suggestions with real-world requirements.
|
2502.07215
|
PDV: Prompt Directional Vectors for Zero-shot Composed Image Retrieval
|
cs.CV
|
Zero-shot composed image retrieval (ZS-CIR) enables image search using a
reference image and text prompt without requiring specialized text-image
composition networks trained on large-scale paired data. However, current
ZS-CIR approaches face three critical limitations in their reliance on composed
text embeddings: static query embedding representations, insufficient
utilization of image embeddings, and suboptimal performance when fusing text
and image embeddings. To address these challenges, we introduce the Prompt
Directional Vector (PDV), a simple yet effective training-free enhancement that
captures semantic modifications induced by user prompts. PDV enables three key
improvements: (1) dynamic composed text embeddings where prompt adjustments are
controllable via a scaling factor, (2) composed image embeddings through
semantic transfer from text prompts to image features, and (3) weighted fusion
of composed text and image embeddings that enhances retrieval by balancing
visual and semantic similarity. Our approach serves as a plug-and-play
enhancement for existing ZS-CIR methods with minimal computational overhead.
Extensive experiments across multiple benchmarks demonstrate that PDV
consistently improves retrieval performance when integrated with
state-of-the-art ZS-CIR approaches, particularly for methods that generate
accurate compositional embeddings. The code will be publicly available.
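The directional-vector idea can be sketched as follows; `embed_text`, the scaling factor, and the fusion weight are hypothetical stand-ins for a real text encoder and the paper's tuned parameters:

```python
import numpy as np

def _unit(v):
    return v / (np.linalg.norm(v) + 1e-12)

def pdv_compose(embed_text, image_emb, base_caption, prompt, alpha=1.0, w=0.5):
    """Sketch of the prompt-directional-vector idea: the prompt's semantic
    shift is the embedding difference between caption+prompt and the caption
    alone; that direction is transferred (scaled by alpha) onto the image
    embedding, then fused with the composed text embedding."""
    composed_txt = embed_text(base_caption + " " + prompt)
    pdv = composed_txt - embed_text(base_caption)
    composed_img = _unit(image_emb + alpha * pdv)
    return _unit(w * _unit(composed_txt) + (1 - w) * composed_img)

def rank(query, gallery):
    """Rank gallery embeddings by cosine similarity to the query."""
    sims = [float(query @ _unit(g)) for g in gallery]
    return sorted(range(len(gallery)), key=lambda i: -sims[i])
```

The scaling factor `alpha` is what makes the composed embedding dynamic: increasing it strengthens the prompt's modification relative to the reference image.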
|
2502.07216
|
SparseFormer: Detecting Objects in HRW Shots via Sparse Vision
Transformer
|
cs.CV cs.AI
|
Recent years have seen an increase in the use of gigapixel-level image and
video capture systems and benchmarks with high-resolution wide (HRW) shots.
However, unlike close-up shots in the MS COCO dataset, the higher resolution
and wider field of view raise unique challenges, such as extreme sparsity and
huge scale changes, making existing close-up detectors inaccurate and
inefficient. In this paper, we present a novel model-agnostic sparse vision
transformer, dubbed SparseFormer, to bridge the gap of object detection between
close-up and HRW shots. The proposed SparseFormer selectively uses attentive
tokens to scrutinize the sparsely distributed windows that may contain objects.
In this way, it can jointly explore global and local attention by fusing
coarse- and fine-grained features to handle huge scale changes. SparseFormer
also benefits from a novel Cross-slice non-maximum suppression (C-NMS)
algorithm to precisely localize objects from noisy windows and a simple yet
effective multi-scale strategy to improve accuracy. Extensive experiments on
two HRW benchmarks, PANDA and DOTA-v1.0, demonstrate that the proposed
SparseFormer significantly improves detection accuracy (up to 5.8%) and speed
(up to 3x) over the state-of-the-art approaches.
|
2502.07218
|
LUNAR: LLM Unlearning via Neural Activation Redirection
|
cs.LG cs.AI
|
Large Language Models (LLMs) benefit from training on ever larger amounts of
textual data, but as a result, they increasingly incur the risk of leaking
private information. The ability to selectively remove knowledge from LLMs is,
therefore, a highly desirable capability. In this paper, we propose LUNAR, a
novel unlearning methodology grounded in the Linear Representation Hypothesis.
LUNAR operates by redirecting the representations of unlearned data to regions
that trigger the model's inherent ability to express its inability to answer.
LUNAR achieves state-of-the-art unlearning performance while significantly
enhancing the controllability of the unlearned model during inference.
Specifically, LUNAR achieves 2.9x to 11.7x improvements on the combined
"unlearning efficacy" and "model utility" score ("Deviation Score") on the
PISTOL dataset across various base models. We also demonstrate, through
quantitative analysis and qualitative examples, LUNAR's superior
controllability in generating coherent and contextually aware responses,
mitigating undesired side effects of existing methods. Moreover, we demonstrate
that LUNAR is robust against white-box adversarial attacks and versatile in
handling real-world scenarios, such as processing sequential unlearning
requests.
|
2502.07219
|
DOGR: Leveraging Document-Oriented Contrastive Learning in Generative
Retrieval
|
cs.IR
|
Generative retrieval constitutes an innovative approach in information
retrieval, leveraging generative language models (LM) to generate a ranked list
of document identifiers (docid) for a given query. It simplifies the retrieval
pipeline by replacing the large external index with model parameters. However,
existing works merely learn the relationship between queries and document
identifiers, which fails to directly represent the relevance between
queries and documents. To address this problem, we propose a novel and
general generative retrieval framework, namely Leveraging Document-Oriented
Contrastive Learning in Generative Retrieval (DOGR), which leverages
contrastive learning to improve generative retrieval tasks. It adopts a
two-stage learning strategy that captures the relationship between queries and
documents comprehensively through direct interactions. Furthermore, negative
sampling methods and corresponding contrastive learning objectives are
implemented to enhance the learning of semantic representations, thereby
promoting a thorough comprehension of the relationship between queries and
documents. Experimental results demonstrate that DOGR achieves state-of-the-art
performance compared to existing generative retrieval methods on two public
benchmark datasets. Further experiments have shown that our framework is
generally effective for common identifier construction techniques.
|
2502.07221
|
MLLM4PUE: Toward Universal Embeddings in Computational Pathology through
Multimodal LLMs
|
cs.CV
|
Pathology plays a critical role in diagnosing a wide range of diseases, yet
existing approaches often rely heavily on task-specific models trained on
extensive, well-labeled datasets. These methods face sustainability challenges
due to the diversity of pathologies and the labor-intensive nature of data
collection. To address these limitations, we highlight the need for universal
multimodal embeddings that can support multiple downstream tasks. Previous
approaches often involve fine-tuning CLIP-based models, which handle images and
text separately, limiting their ability to capture complex multimodal
relationships. Additionally, these models are evaluated across diverse datasets
without a unified benchmark for assessing multimodal embeddings in pathology.
To address these challenges, we propose MLLM4PUE, a novel framework that
leverages Multimodal Large Language Models (MLLMs) to generate Pathology
Universal Embeddings. The MLLM4PUE framework not only facilitates robust
integration of images and text but also enhances understanding and fusion
capabilities across various tasks. We further introduce the Pathology
Multimodal Embedding Benchmark (PMEB), a comprehensive benchmark designed to
assess the quality of pathology multimodal embeddings. PMEB comprises 15
original tasks drawn from 14 datasets, organized into three meta-tasks:
retrieval, classification, and composed retrieval. Experimental results
demonstrate the superiority of MLLM4PUE, illustrating MLLM-based models can
effectively support a wide range of downstream tasks and unify the research
direction for foundation models in pathology.
|
2502.07222
|
A Memory Efficient Randomized Subspace Optimization Method for Training
Large Language Models
|
cs.LG
|
The memory challenges associated with training Large Language Models (LLMs)
have become a critical concern, particularly when using the Adam optimizer. To
address this issue, numerous memory-efficient techniques have been proposed,
with GaLore standing out as a notable example designed to reduce the memory
footprint of optimizer states. However, these approaches do not alleviate the
memory burden imposed by activations, rendering them unsuitable for scenarios
involving long context sequences or large mini-batches. Moreover, their
convergence properties are still not well-understood in the literature. In this
work, we introduce a Randomized Subspace Optimization framework for
pre-training and fine-tuning LLMs. Our approach decomposes the high-dimensional
training problem into a series of lower-dimensional subproblems. At each
iteration, a random subspace is selected, and the parameters within that
subspace are optimized. This structured reduction in dimensionality allows our
method to simultaneously reduce memory usage for both activations and optimizer
states. We establish comprehensive convergence guarantees and derive rates for
various scenarios, accommodating different optimization strategies to solve the
subproblems. Extensive experiments validate the superior memory and
communication efficiency of our method, achieving performance comparable to
GaLore and Adam.
|
2502.07223
|
Graph RAG-Tool Fusion
|
cs.CL
|
Recent developments in retrieval-augmented generation (RAG) for selecting
relevant tools from a tool knowledge base enable LLM agents to scale their
complex tool calling capabilities to hundreds or thousands of external tools,
APIs, or agents-as-tools. However, traditional RAG-based tool retrieval fails
to capture structured dependencies between tools, limiting the retrieval
accuracy of a retrieved tool's dependencies. For example, among a vector
database of tools, a "get stock price" API requires a "stock ticker" parameter
from a "get stock ticker" API, and both depend on OS-level internet
connectivity tools. In this paper, we address this limitation by introducing
Graph RAG-Tool Fusion, a novel plug-and-play approach that combines the
strengths of vector-based retrieval with efficient graph traversal to capture
all relevant tools (nodes) along with any nested dependencies (edges) within
the predefined tool knowledge graph. We also present ToolLinkOS, a new tool
selection benchmark of 573 fictional tools, spanning over 15 industries, each
with an average of 6.3 tool dependencies. We demonstrate that Graph RAG-Tool
Fusion achieves absolute improvements of 71.7% and 22.1% over naïve RAG on
ToolLinkOS and ToolSandbox benchmarks, respectively (mAP@10). The ToolLinkOS
dataset is available at
https://github.com/EliasLumer/Graph-RAG-Tool-Fusion-ToolLinkOS
|
2502.07225
|
CAT: Contrastive Adversarial Training for Evaluating the Robustness of
Protective Perturbations in Latent Diffusion Models
|
cs.CV
|
Latent diffusion models have recently demonstrated superior capabilities in
many downstream image synthesis tasks. However, customization of latent
diffusion models using unauthorized data can severely compromise the privacy
and intellectual property rights of data owners. Adversarial examples as
protective perturbations have been developed to defend against unauthorized
data usage by introducing imperceptible noise to customization samples,
preventing diffusion models from effectively learning them. In this paper, we
first reveal that the primary reason adversarial examples are effective as
protective perturbations in latent diffusion models is the distortion of their
latent representations, as demonstrated through qualitative and quantitative
experiments. We then propose Contrastive Adversarial Training (CAT), which
utilizes adapters as an adaptive attack against these protection methods,
highlighting their lack of robustness. Extensive experiments demonstrate that
our CAT method significantly reduces the effectiveness of protective
perturbations in customization configurations, urging the community to
reconsider and enhance the robustness of existing protective perturbation
methods. Code is available at https://github.com/senp98/CAT.
|
2502.07226
|
Non-Iterative Coordination of Interconnected Power Grids via
Dimension-Decomposition-Based Flexibility Aggregation
|
eess.SY cs.SY
|
The bulk power grid is divided into regional grids interconnected with
multiple tie-lines for efficient operation. Since interconnected power grids
are operated by different control centers, it is a challenging task to realize
coordinated dispatch of multiple regional grids. A viable solution is to
compute a flexibility aggregation model for each regional power grid, then
optimize the tie-line schedule using the aggregated models to implement
non-iterative coordinated dispatch. However, challenges such as intricate
interdependencies and the curse of dimensionality persist in computing the
aggregated models in high-dimensional space. Existing methods like
Fourier-Motzkin elimination, vertex search, and multi-parameter programming are
limited by dimensionality and conservatism, hindering practical application.
This paper presents a novel dimension-decomposition-based flexibility
aggregation algorithm for calculating the aggregated models of multiple
regional power grids, enabling non-iterative coordination in large-scale
interconnected systems. Compared to existing methods, the proposed approach
yields a significantly less conservative flexibility region. The derived
flexibility aggregation model for each regional power grid has a well-defined
physical counterpart, which facilitates intuitive analysis of multi-port
regional power grids and provides valuable insights into their internal
resource endowments. Numerical tests validate the feasibility of the aggregated
model and demonstrate its accuracy in coordinating interconnected power grids.
|
2502.07230
|
Physics-Informed Recurrent Network for Gas Pipeline Network Parameters
Identification
|
eess.SY cs.SY
|
As a part of the integrated energy system (IES), gas pipeline networks can
provide additional flexibility to power systems through coordinated optimal
dispatch. An accurate pipeline network model is critical for the optimal
operation and control of IESs. However, inaccuracies or unavailability of
accurate pipeline parameters often introduce errors in the mathematical models
of such networks. This paper proposes a physics-informed recurrent network
(PIRN) model to identify the state-space model of gas pipelines. The approach
combines data-driven learning from measurement data with the fluid dynamics
described by partial differential equations. By embedding the physical
state-space model within the recurrent network, parameter identification is
transformed into a training process for a PIRN. Similar to standard recurrent
neural networks, this model can be implemented using the PyTorch framework and
trained via backpropagation. Case studies demonstrate that our method
accurately estimates gas pipeline models from sparse terminal node
measurements, providing robust performance and significantly higher parameter
efficiency. Furthermore, the identified models can be seamlessly integrated
into optimization frameworks.
|
2502.07232
|
Simplifying Adversarially Robust PAC Learning with Tolerance
|
cs.LG
|
Adversarially robust PAC learning has proved to be challenging, with the
currently best known learners [Montasser et al., 2021a] relying on improper
methods based on intricate compression schemes, resulting in sample complexity
exponential in the VC-dimension. A series of follow up work considered a
slightly relaxed version of the problem called adversarially robust learning
with tolerance [Ashtiani et al., 2023, Bhattacharjee et al., 2023, Raman et
al., 2024] and achieved better sample complexity in terms of the VC-dimension.
However, those algorithms were either improper and complex, or required
additional assumptions on the hypothesis class H. We prove, for the first time,
the existence of a simpler learner that achieves a sample complexity linear in
the VC-dimension without requiring additional assumptions on H. Even though our
learner is improper, it is "almost proper" in the sense that it outputs a
hypothesis that is "similar" to a hypothesis in H.
We also use the ideas from our algorithm to construct a semi-supervised
learner in the tolerant setting. This simple algorithm achieves comparable
bounds to the previous (non-tolerant) semi-supervised algorithm of Attias et
al. [2022a], but avoids the use of intricate subroutines from previous works,
and is "almost proper."
|