| id | title | categories | abstract |
|---|---|---|---|
2502.06052
|
A Comprehensive Energy Management Application Method considering Smart
Home Occupant Behavior using IoT and Real Big Data
|
eess.SY cs.SY
|
One of the most far-reaching use cases of the internet of things is in smart
grid and smart home operation. The smart home concept allows residents to
control, monitor, and manage their energy consumption with minimum loss and
self-involvement. Since each household's lifestyle and energy consumption are
unique, the management system needs background knowledge about residents'
energy consumption behavioral patterns for more accurate planning. To obtain
this information, data related to residents' consumption records must be
processed. This research has attempted to provide an optimal decentralized
management system consisting of interoperable sections to forecast, optimize,
schedule, and implement load management in a smart home. After comparing
different prediction models on four years of 1-minute-interval real data from a
smart home with photovoltaic (PV) generation and an electric vehicle (EV), the
system forecasts non-controllable loads and, taking a deterministic approach
across different scenarios, uses mixed-integer linear programming (MILP) to
schedule loads so as to optimally reduce total energy cost with minimal changes
to the household's desired consumption relative to the initial state. The
results show that the proposed system performs reliably thanks to the high
precision of the forecast, increasing energy efficiency and reducing energy
cost (by up to 62.05\%), peak-to-average ratio (PAR) (by up to 44.19\%), and
standard deviation (SD) (by up to 19.70\%) of net consumption.
|
2502.06058
|
Regular LDPC codes on BMS wiretap channels: Security bounds
|
cs.IT math.IT
|
We improve the secrecy guarantees for transmission over general binary
memoryless symmetric wiretap channels that relies on regular LDPC codes.
Previous works showed that LDPC codes achieve secrecy capacity of some classes
of wiretap channels while leaking $o(n)$ bits of information over $n$ uses of
the channel. In this note, we improve the security component of these results
by reducing the leakage parameter to $O(\log^2 n)$. While this result stops
short of proving \emph{strong security}, it goes beyond the general secrecy
guarantees derived from properties of capacity-approaching code families.
|
2502.06060
|
Training Language Models for Social Deduction with Multi-Agent
Reinforcement Learning
|
cs.AI cs.CL cs.LG cs.MA
|
Communicating in natural language is a powerful tool in multi-agent settings,
as it enables independent agents to share information in partially observable
settings and allows zero-shot coordination with humans. However, most prior
works are limited as they either rely on training with large amounts of human
demonstrations or lack the ability to generate natural and useful communication
strategies. In this work, we train language models to have productive
discussions about their environment in natural language without any human
demonstrations. We decompose the communication problem into listening and
speaking. Our key idea is to leverage the agent's goal to predict useful
information about the world as a dense reward signal that guides communication.
Specifically, we improve a model's listening skills by training it to predict
information about the environment based on discussions, and we simultaneously
improve a model's speaking skills with multi-agent reinforcement learning by
rewarding messages based on their influence on other agents. To investigate the
role and necessity of communication in complex social settings, we study an
embodied social deduction game based on Among Us, where the key question to
answer is the identity of an adversarial impostor. We analyze emergent
behaviors due to our technique, such as accusing suspects and providing
evidence, and find that it enables strong discussions, doubling the win rates
compared to standard RL. We release our code and models at
https://socialdeductionllm.github.io/
|
2502.06061
|
Online Reward-Weighted Fine-Tuning of Flow Matching with Wasserstein
Regularization
|
cs.LG cs.AI cs.CV stat.ML
|
Recent advancements in reinforcement learning (RL) have achieved great
success in fine-tuning diffusion-based generative models. However, fine-tuning
continuous flow-based generative models to align with arbitrary user-defined
reward functions remains challenging, particularly due to issues such as policy
collapse from overoptimization and the prohibitively high computational cost of
likelihoods in continuous-time flows. In this paper, we propose an easy-to-use
and theoretically sound RL fine-tuning method, which we term Online
Reward-Weighted Conditional Flow Matching with Wasserstein-2 Regularization
(ORW-CFM-W2). Our method integrates RL into the flow matching framework to
fine-tune generative models with arbitrary reward functions, without relying on
gradients of rewards or filtered datasets. By introducing an online
reward-weighting mechanism, our approach guides the model to prioritize
high-reward regions in the data manifold. To prevent policy collapse and
maintain diversity, we incorporate Wasserstein-2 (W2) distance regularization
into our method and derive a tractable upper bound for it in flow matching,
effectively balancing exploration and exploitation of policy optimization. We
provide theoretical analyses to demonstrate the convergence properties and
induced data distributions of our method, establishing connections with
traditional RL algorithms featuring Kullback-Leibler (KL) regularization and
offering a more comprehensive understanding of the underlying mechanisms and
learning behavior of our approach. Extensive experiments on tasks including
target image generation, image compression, and text-image alignment
demonstrate the effectiveness of our method, which achieves optimal
policy convergence while allowing controllable trade-offs between reward
maximization and diversity preservation.
|
2502.06062
|
Multi-modal Data Fusion and Deep Ensemble Learning for Accurate Crop
Yield Prediction
|
eess.IV cs.AI
|
This study introduces RicEns-Net, a novel Deep Ensemble model designed to
predict crop yields by integrating diverse data sources through multimodal data
fusion techniques. The research focuses specifically on the use of synthetic
aperture radar (SAR), optical remote sensing data from Sentinel 1, 2, and 3
satellites, and meteorological measurements such as surface temperature and
rainfall. The initial field data for the study were acquired through Ernst &
Young's (EY) Open Science Challenge 2023. The primary objective is to enhance
the precision of crop yield prediction by developing a machine-learning
framework capable of handling complex environmental data. A comprehensive data
engineering process was employed to select the most informative features from
over 100 potential predictors, reducing the set to 15 features from 5 distinct
modalities. This step mitigates the ``curse of dimensionality'' and enhances
model performance. The RicEns-Net architecture combines multiple machine
learning algorithms in a deep ensemble framework, integrating the strengths of
each technique to improve predictive accuracy. Experimental results demonstrate
that RicEns-Net achieves a mean absolute error (MAE) of 341 kg/ha (roughly
corresponding to 5-6\% of the lowest average yield in the region),
significantly surpassing previous state-of-the-art models, including those
developed during the EY challenge.
|
2502.06065
|
Benchmarking Prompt Sensitivity in Large Language Models
|
cs.CL cs.AI cs.IR
|
Large Language Models (LLMs) are highly sensitive to variations in prompt
formulation, which can significantly impact their ability to generate accurate
responses. In this paper, we introduce a new task, Prompt Sensitivity
Prediction, and a dataset PromptSET designed to investigate the effects of
slight prompt variations on LLM performance. Using TriviaQA and HotpotQA
datasets as the foundation of our work, we generate prompt variations and
evaluate their effectiveness across multiple LLMs. We benchmark the prompt
sensitivity prediction task employing state-of-the-art methods from related
tasks, including LLM-based self-evaluation, text classification, and query
performance prediction techniques. Our findings reveal that existing methods
struggle to effectively address prompt sensitivity prediction, underscoring the
need to understand how information needs should be phrased for accurate LLM
responses.
|
2502.06067
|
Lipschitz-Driven Inference: Bias-corrected Confidence Intervals for
Spatial Linear Models
|
stat.ML cs.LG stat.ME
|
Linear models remain ubiquitous in modern spatial applications - including
climate science, public health, and economics - due to their interpretability,
speed, and reproducibility. While practitioners generally report a form of
uncertainty, popular spatial uncertainty quantification methods do not jointly
handle model misspecification and distribution shift - despite both being
essentially always present in spatial problems. In the present paper, we show
that existing methods for constructing confidence (or credible) intervals in
spatial linear models fail to provide correct coverage due to unaccounted-for
bias. In contrast to classical methods that rely on an i.i.d. assumption that
is inappropriate in spatial problems, in the present work we instead make a
spatial smoothness (Lipschitz) assumption. We are then able to propose a new
confidence-interval construction that accounts for bias in the estimation
procedure. We demonstrate that our new method achieves nominal coverage via
both theory and experiments. Code to reproduce experiments is available at
https://github.com/DavidRBurt/Lipschitz-Driven-Inference.
|
2502.06072
|
ID policy (with reassignment) is asymptotically optimal for
heterogeneous weakly-coupled MDPs
|
cs.LG math.OC math.PR
|
Heterogeneity poses a fundamental challenge for many real-world large-scale
decision-making problems but remains largely understudied. In this paper, we
study the fully heterogeneous setting of a prominent class of such problems,
known as weakly-coupled Markov decision processes (WCMDPs). Each WCMDP consists
of $N$ arms (or subproblems), which have distinct model parameters in the fully
heterogeneous setting, leading to the curse of dimensionality when $N$ is
large. We show that, under mild assumptions, a natural adaptation of the ID
policy, although originally proposed for a homogeneous special case of WCMDPs,
in fact achieves an $O(1/\sqrt{N})$ optimality gap in long-run average reward
per arm for fully heterogeneous WCMDPs as $N$ becomes large. This is the first
asymptotic optimality result for fully heterogeneous average-reward WCMDPs. Our
techniques highlight the construction of a novel projection-based Lyapunov
function, which witnesses the convergence of rewards and costs to an optimal
region in the presence of heterogeneity.
|
2502.06075
|
Deconstructing Depression Stigma: Integrating AI-driven Data Collection
and Analysis with Causal Knowledge Graphs
|
cs.HC cs.CL cs.CY
|
Mental-illness stigma is a persistent social problem, hampering both
treatment-seeking and recovery. Accordingly, there is a pressing need to
understand it more clearly, but analyzing the relevant data is highly
labor-intensive. Therefore, we designed a chatbot to engage participants in
conversations; coded those conversations qualitatively with AI assistance; and,
based on those coding results, built causal knowledge graphs to decode stigma.
The results we obtained from 1,002 participants demonstrate that conversation
with our chatbot can elicit rich information about people's attitudes toward
depression, while our AI-assisted coding was strongly consistent with
human-expert coding. Our novel approach combining large language models (LLMs)
and causal knowledge graphs uncovered patterns in individual responses and
illustrated the interrelationships of psychological constructs in the dataset
as a whole. The paper also discusses these findings' implications for HCI
researchers in developing digital interventions, decomposing human
psychological constructs, and fostering inclusive attitudes.
|
2502.06076
|
A Planning Framework for Adaptive Labeling
|
cs.LG
|
Ground truth labels/outcomes are critical for advancing scientific and
engineering applications, e.g., evaluating the treatment effect of an
intervention or performance of a predictive model. Since randomly sampling
inputs for labeling can be prohibitively expensive, we introduce an adaptive
labeling framework where measurement effort can be reallocated in batches. We
formulate this problem as a Markov decision process where posterior beliefs
evolve over time as batches of labels are collected (state transition), and
batches (actions) are chosen to minimize uncertainty at the end of data
collection. We design a computational framework that is agnostic to different
uncertainty quantification approaches including those based on deep learning,
and allows a diverse array of policy gradient approaches by relying on
continuous policy parameterizations. On real and synthetic datasets, we
demonstrate that even a one-step lookahead policy can substantially outperform
common adaptive labeling heuristics, highlighting the virtue of planning. On
the methodological side, we note that standard REINFORCE-style policy gradient
estimators can suffer high variance since they rely only on zeroth order
information. We propose a direct backpropagation-based approach,
Smoothed-Autodiff, based on a carefully smoothed version of the original
non-differentiable MDP. Our method enjoys low variance at the price of
introducing bias, and we theoretically and empirically show that this trade-off
can be favorable.
|
2502.06079
|
Debiasing Guidance for Discrete Diffusion with Sequential Monte Carlo
|
cs.LG
|
Discrete diffusion models are a class of generative models that produce
samples from an approximated data distribution within a discrete state space.
Often, there is a need to target specific regions of the data distribution.
Current guidance methods aim to sample from a distribution with mass
proportional to $p_0(x_0) p(\zeta|x_0)^\alpha$ but fail to achieve this in
practice. We introduce a Sequential Monte Carlo algorithm that generates
unbiasedly from this target distribution, utilising the learnt unconditional
and guided process. We validate our approach on low-dimensional distributions,
controlled image generation, and text generation. For text generation, our method
provides strong control while maintaining low perplexity compared to
guidance-based approaches.
|
2502.06084
|
Physics-Guided Foundation Model for Scientific Discovery: An Application
to Aquatic Science
|
cs.LG cs.AI cs.NE
|
Physics-guided machine learning (PGML) has become a prevalent approach in
studying scientific systems due to its ability to integrate scientific theories
for enhancing machine learning (ML) models. However, most PGML approaches are
tailored to isolated and relatively simple tasks, which limits their
applicability to complex systems involving multiple interacting processes and
numerous influencing features. In this paper, we propose a
\textit{\textbf{P}hysics-\textbf{G}uided \textbf{F}oundation \textbf{M}odel
(\textbf{PGFM})} that combines pre-trained ML models and physics-based models
and leverages their complementary strengths to improve the modeling of multiple
coupled processes. To effectively conduct pre-training, we construct a
simulated environmental system that encompasses a wide range of influencing
features and various simulated variables generated by physics-based models. The
model is pre-trained in this system to adaptively select important feature
interactions guided by multi-task objectives. We then fine-tune the model for
each specific task using true observations, while maintaining consistency with
established physical theories, such as the principles of mass and energy
conservation. We demonstrate the effectiveness of this methodology in modeling
water temperature and dissolved oxygen dynamics in real-world lakes. The
proposed PGFM is also broadly applicable to a range of scientific fields where
physics-based models are being used.
|
2502.06086
|
Is a Peeled Apple Still Red? Evaluating LLMs' Ability for Conceptual
Combination with Property Type
|
cs.CL
|
Conceptual combination is a cognitive process that merges basic concepts,
enabling the creation of complex expressions. During this process, the
properties of a combination (e.g., the whiteness of a peeled apple) can be
inherited from basic concepts, newly emerge, or be canceled. However, previous
studies have evaluated a limited set of properties and have not examined the
generative process. To address this gap, we introduce the Conceptual
Combination with Property Type dataset (CCPT), which consists of 12.3K
annotated triplets of noun phrases, properties, and property types. Using CCPT,
we establish three types of tasks to evaluate LLMs for conceptual combination
thoroughly. Our key findings are threefold: (1) Our automatic metric for
grading property emergence and cancellation corresponds closely with human
judgments. (2) LLMs, including OpenAI's o1, struggle to generate noun phrases
that possess given emergent properties. (3) Our proposed method, inspired by a
cognitive psychology model of how relationships between concepts are formed,
improves performance on all generative tasks. The dataset and
experimental code are available at https://github.com/seokwon99/CCPT.git.
|
2502.06087
|
ConMeC: A Dataset for Metonymy Resolution with Common Nouns
|
cs.CL
|
Metonymy plays an important role in our daily communication. People naturally
think about things using their most salient properties or commonly related
concepts. For example, by saying "The bus decided to skip our stop today," we
actually mean that the bus driver made the decision, not the bus. Prior work on
metonymy resolution has mainly focused on named entities. However, metonymy
involving common nouns (such as desk, baby, and school) is also a frequent and
challenging phenomenon. We argue that NLP systems should be capable of
identifying the metonymic use of common nouns in context. We create a new
metonymy dataset ConMeC, which consists of 6,000 sentences, where each sentence
is paired with a target common noun and annotated by humans to indicate whether
that common noun is used metonymically or not in that context. We also
introduce a chain-of-thought based prompting method for detecting metonymy
using large language models (LLMs). We evaluate our LLM-based pipeline, as well
as a supervised BERT model on our dataset and three other metonymy datasets.
Our experimental results demonstrate that LLMs could achieve performance
comparable to the supervised BERT model on well-defined metonymy categories,
while still struggling with instances requiring nuanced semantic understanding.
Our dataset is publicly available at: https://github.com/SaptGhosh/ConMeC.
|
2502.06089
|
On the Computability of Multiclass PAC Learning
|
cs.LG stat.ML
|
We study the problem of computable multiclass learnability within the
Probably Approximately Correct (PAC) learning framework of Valiant (1984). In
the recently introduced computable PAC (CPAC) learning framework of Agarwal et
al. (2020), both learners and the functions they output are required to be
computable. We focus on the case of finite label space and start by proposing a
computable version of the Natarajan dimension and showing that it characterizes
CPAC learnability in this setting. We further generalize this result by
establishing a meta-characterization of CPAC learnability for a certain family
of dimensions: computable distinguishers. Distinguishers were defined by
Ben-David et al. (1992) as a certain family of embeddings of the label space,
with each embedding giving rise to a dimension. It was shown that the
finiteness of each such dimension characterizes multiclass PAC learnability for
finite label space in the non-computable setting. We show that the
corresponding computable dimensions for distinguishers characterize CPAC
learning. We conclude our analysis by proving that the DS dimension, which
characterizes PAC learnability for infinite label space, cannot be expressed as
a distinguisher (even in the case of finite label space).
|
2502.06094
|
Fair-MoE: Fairness-Oriented Mixture of Experts in Vision-Language Models
|
cs.CV
|
Fairness is a fundamental principle in medical ethics. Vision Language Models
(VLMs) have shown significant potential in the medical field due to their
ability to leverage both visual and linguistic contexts, reducing the need for
large datasets and enabling the performance of complex tasks. However, the
exploration of fairness within VLM applications remains limited. Applying VLMs
without a comprehensive analysis of fairness could lead to concerns about equal
treatment opportunities and diminish public trust in medical deep learning
models. To build trust in medical VLMs, we propose Fair-MoE, a model
specifically designed to ensure both fairness and effectiveness. Fair-MoE
comprises two key components: \textit{the Fairness-Oriented Mixture of Experts
(FO-MoE)} and \textit{the Fairness-Oriented Loss (FOL)}. FO-MoE is designed to
leverage the expertise of various specialists to filter out biased patch
embeddings and use an ensemble approach to extract more equitable information
relevant to specific tasks. FOL is a novel fairness-oriented loss function that
not only minimizes the distances between different attributes but also
optimizes the differences in the dispersion of various attributes'
distributions. Extensive experiments demonstrate the effectiveness and fairness
of Fair-MoE. Tested on the Harvard-FairVLMed dataset, Fair-MoE showed
improvements in both fairness and accuracy across all four attributes. Code
will be publicly available.
|
2502.06095
|
Rateless Joint Source-Channel Coding, and a Blueprint for 6G Semantic
Communications System Design
|
cs.IT cs.AI math.IT
|
This paper introduces rateless joint source-channel coding (rateless JSCC).
The code is rateless in that it is designed and optimized for a continuum of
coding rates such that it achieves a desired distortion for any rate in that
continuum. We further introduce rate-adaptive and stable communication link
operation to accommodate rateless JSCCs. The link operation resembles a ``bit
pipe'' that is identified by its rate in bits per frame and by the rate of
bits flipped in each frame. Thus, the link operation is rate-adaptive
such that it punctures the rateless JSCC codeword to adapt its length (and
coding rate) to the underlying channel capacity, and is stable in maintaining
the bit flipping ratio across time frames.
Next, a new family of autoencoder rateless JSCC codes is introduced. The
code family is dubbed RLACS code (read as relax code, standing for rateless and
lossy autoencoder channel and source code). The code is tested for
reconstruction loss of image signals and demonstrates powerful performance that
is resilient to variations in channel quality. RLACS code is readily applicable
to the case of semantic distortion, suiting a variety of semantic and
effectiveness communication use cases.
In the second part of the paper, we dive into the practical concerns around
semantic communication and provide a blueprint for semantic networking system
design relying on updating the existing network systems with some essential
modifications. We further outline a comprehensive list of open research
problems and development challenges towards a practical 6G communications
system design that enables semantic networking.
|
2502.06096
|
Post-detection inference for sequential changepoint localization
|
stat.ML cs.AI cs.LG stat.ME
|
This paper addresses a fundamental but largely unexplored challenge in
sequential changepoint analysis: conducting inference following a detected
change. We study the problem of localizing the changepoint using only the data
observed up to a data-dependent stopping time at which a sequential detection
algorithm $\mathcal A$ declares a change. We first construct confidence sets
for the unknown changepoint when pre- and post-change distributions are assumed
to be known. We then extend our framework to composite pre- and post-change
scenarios. We impose no conditions on the observation space or on $\mathcal A$
-- we only need to be able to run $\mathcal A$ on simulated data sequences. In
summary, this work offers both theoretically sound and practically effective
tools for sequential changepoint localization.
|
2502.06097
|
NLGR: Utilizing Neighbor Lists for Generative Rerank in Personalized
Recommendation Systems
|
cs.IR cs.AI
|
Reranking plays a crucial role in modern multi-stage recommender systems by
rearranging the initial ranking list. Due to the inherent challenges of
combinatorial search spaces, some current research adopts an
evaluator-generator paradigm, with a generator generating feasible sequences
and an evaluator selecting the best sequence based on the estimated list
utility. However, these methods still face two issues. Firstly, due to the goal
inconsistency problem between the evaluator and generator, the generator tends
to fit the local optimal solution of exposure distribution rather than
combinatorial space optimization. Secondly, the strategy of generating target
items one by one struggles to achieve optimality because it ignores
information about subsequent items.
To address these issues, we propose a model utilizing Neighbor Lists for
Generative Reranking (NLGR), which aims to improve the performance of the
generator in the combinatorial space. NLGR follows the evaluator-generator
paradigm and improves the generator's training and generating methods.
Specifically, we use neighbor lists in combination space to enhance the
training process, making the generator perceive the relative scores and find
the optimization direction. Furthermore, we propose a novel sampling-based
non-autoregressive generation method, which allows the generator to jump
flexibly from the current list to any neighbor list. Extensive experiments on
public and industrial datasets validate NLGR's effectiveness and we have
successfully deployed NLGR on the Meituan food delivery platform.
|
2502.06099
|
Fine-Tuning Federated Learning-Based Intrusion Detection Systems for
Transportation IoT
|
cs.LG
|
The rapid advancement of machine learning (ML) and on-device computing has
revolutionized various industries, including transportation, through the
development of Connected and Autonomous Vehicles (CAVs) and Intelligent
Transportation Systems (ITS). These technologies improve traffic management and
vehicle safety, but also introduce significant security and privacy concerns,
such as cyberattacks and data breaches. Traditional Intrusion Detection Systems
(IDS) are increasingly inadequate in detecting modern threats, leading to the
adoption of ML-based IDS solutions. Federated Learning (FL) has emerged as a
promising method for enabling the decentralized training of IDS models on
distributed edge devices without sharing sensitive data. However, deploying
FL-based IDS in CAV networks poses unique challenges, including limited
computational and memory resources on edge devices, competing demands from
critical applications such as navigation and safety systems, and the need to
scale across diverse hardware and connectivity conditions. To address these
issues, we propose a hybrid server-edge FL framework that offloads pre-training
to a central server while enabling lightweight fine-tuning on edge devices.
This approach reduces memory usage by up to 42%, decreases training times by up
to 75%, and achieves competitive IDS accuracy of up to 99.2%. Scalability
analysis further demonstrates minimal performance degradation as the number of
clients increases, highlighting the framework's feasibility for CAV networks and
other IoT applications.
|
2502.06100
|
Col-OLHTR: A Novel Framework for Multimodal Online Handwritten Text
Recognition
|
cs.CV eess.SP
|
Online Handwritten Text Recognition (OLHTR) has gained considerable attention
for its diverse range of applications. Current approaches usually treat OLHTR
as a sequence recognition task, employing either a single trajectory or image
encoder, or multi-stream encoders, combined with a CTC or attention-based
recognition decoder. However, these approaches face several drawbacks: 1)
single encoders typically focus on either local trajectories or visual regions,
lacking the ability to dynamically capture relevant global features in
challenging cases; 2) multi-stream encoders, while more comprehensive, suffer
from complex structures and increased inference costs. To tackle this, we
propose a Collaborative learning-based OLHTR framework, called Col-OLHTR, that
learns multimodal features during training while maintaining a single-stream
inference process. Col-OLHTR consists of a trajectory encoder, a
Point-to-Spatial Alignment (P2SA) module, and an attention-based decoder. The
P2SA module is designed to learn image-level spatial features through
trajectory-encoded features and 2D rotary position embeddings. During training,
an additional image-stream encoder-decoder is collaboratively trained to
provide supervision for P2SA features. At inference, the extra streams are
discarded, and only the P2SA module is used and merged before the decoder,
simplifying the process while preserving high performance. Extensive
experimental results on several OLHTR benchmarks demonstrate the
state-of-the-art (SOTA) performance, proving the effectiveness and robustness
of our design.
|
2502.06101
|
RALLRec: Improving Retrieval Augmented Large Language Model
Recommendation with Representation Learning
|
cs.IR cs.CL
|
Large Language Models (LLMs) have been integrated into recommendation systems
to enhance user behavior comprehension. The Retrieval Augmented Generation
(RAG) technique is further incorporated into these systems to retrieve more
relevant items and improve system performance. However, existing RAG methods
rely primarily on textual semantics and often fail to incorporate the most
relevant items, limiting the effectiveness of the systems.
In this paper, we propose Representation learning for retrieval-Augmented
Large Language model Recommendation (RALLRec). Specifically, we enhance textual
semantics by prompting LLMs to generate more detailed item descriptions,
followed by joint representation learning of textual and collaborative
semantics, which are extracted by the LLM and recommendation models,
respectively. Considering the potential time-varying characteristics of user
interest, a simple yet effective reranking method is further introduced to
capture the dynamics of user preference. We conducted extensive experiments on
three real-world datasets, and the evaluation results validated the
effectiveness of our method. Code is made public at
https://github.com/JianXu95/RALLRec.
|
2502.06105
|
Comprehensive Framework for Evaluating Conversational AI Chatbots
|
cs.CY cs.AI
|
Conversational AI chatbots are transforming industries by streamlining
customer service, automating transactions, and enhancing user engagement.
However, evaluating these systems remains a challenge, particularly in
financial services, where compliance, user trust, and operational efficiency
are critical. This paper introduces a novel evaluation framework that
systematically assesses chatbots across four dimensions: cognitive and
conversational intelligence, user experience, operational efficiency, and
ethical and regulatory compliance. By integrating advanced AI methodologies
with financial regulations, the framework bridges theoretical foundations and
real-world deployment challenges. Additionally, we outline future research
directions, emphasizing improvements in conversational coherence, real-time
adaptability, and fairness.
|
2502.06106
|
Circuit-tuning: A Mechanistic Approach for Identifying Parameter
Redundancy and Fine-tuning Neural Networks
|
cs.LG cs.AI cs.CL
|
The study of mechanistic interpretability aims to reverse-engineer a model to
explain its behaviors. While recent studies have focused on the static
mechanism of a certain behavior, the training dynamics inside a model remain to
be explored. In this work, we develop an interpretable method for fine-tuning
and reveal the mechanism behind learning. We first propose the concept of node
redundancy as an extension of intrinsic dimension and explain the idea behind
circuit discovery from a fresh view. Based on the theory, we propose
circuit-tuning, a two-stage algorithm that iteratively performs circuit
discovery to mask out irrelevant edges and updates the remaining parameters
responsible for a specific task. Experiments show that our method not only
improves performance on a wide range of tasks but is also scalable while
preserving general capabilities. We visualize and analyze the circuits before,
during, and after fine-tuning, providing new insights into the
self-organization mechanism of a neural network in the learning process.
|
2502.06109
|
CDM: Contact Diffusion Model for Multi-Contact Point Localization
|
cs.RO
|
In this paper, we propose a Contact Diffusion Model (CDM), a novel
learning-based approach for multi-contact point localization. We consider a
robot equipped with joint torque sensors and a force/torque sensor at the base.
By leveraging a diffusion model, CDM addresses the singularity where multiple
pairs of contact points and forces produce identical sensor measurements. We
formulate CDM to be conditioned on past model outputs to account for the
time-dependent characteristics of the multi-contact scenarios. Moreover, to
effectively address the complex shape of the robot surfaces, we incorporate the
signed distance field in the denoising process. Consequently, CDM can localize
contacts at arbitrary locations with high accuracy. Simulation and real-world
experiments demonstrate the effectiveness of the proposed method. In
particular, CDM operates at 15.97ms and, in the real world, achieves an error
of 0.44cm in single-contact scenarios and 1.24cm in dual-contact scenarios.
|
2502.06111
|
CSR-Bench: Benchmarking LLM Agents in Deployment of Computer Science
Research Repositories
|
cs.SE cs.AI cs.LG
|
The increasing complexity of computer science research projects demands more
effective tools for deploying code repositories. Large Language Models (LLMs),
such as Anthropic Claude and Meta Llama, have demonstrated significant
advancements across various fields of computer science research, including the
automation of diverse software engineering tasks. To evaluate the effectiveness
of LLMs in handling complex code development tasks of research projects,
particularly for NLP/CV/AI/ML/DM topics, we introduce CSR-Bench, a benchmark
for Computer Science Research projects. This benchmark assesses LLMs from
various aspects including accuracy, efficiency, and deployment script quality,
aiming to explore their potential in conducting computer science research
autonomously. We also introduce a novel framework, CSR-Agents, that utilizes
multiple LLM agents to automate the deployment of GitHub code repositories of
computer science research projects. Specifically, by checking instructions from
markdown files and interpreting repository structures, the model generates and
iteratively improves bash commands that set up the experimental environments
and deploy the code to conduct research tasks. Preliminary results from
CSR-Bench indicate that LLM agents can significantly enhance the workflow of
repository deployment, thereby boosting developer productivity and improving
the management of developmental workflows.
|
2502.06112
|
Pcodec: Better Compression for Numerical Sequences
|
cs.IT cs.DS math.IT
|
We present Pcodec (Pco), a format and algorithm for losslessly compressing
numerical sequences. Pco's core and most novel component is a binning algorithm
that quickly converges to the true entropy of smoothly, independently, and
identically distributed (SIID) data. To automatically handle more general data,
Pco has two opinionated preprocessing steps. The first step, Pco's mode,
decomposes the data into more smoothly distributed latent variables. The second
step, delta encoding, makes the latents more independently and identically
distributed. We prove that, given $k$ bins, binning uses only
$\mathcal{O}(1/k)$ bits more than the SIID data's entropy. Additionally, we
demonstrate that Pco achieves 29-94% higher compression ratio than other
approaches on six real-world datasets while using less compression time.
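The two preprocessing steps can be sketched in miniature: delta encoding makes a smooth sequence's values more i.i.d., after which a uniform-binning entropy estimate approximates the per-value cost binning converges to. This is a toy estimate, not Pco's actual binning algorithm:

```python
import math
from collections import Counter

def delta_encode(seq):
    """Delta encoding: difference consecutive values so smooth trends
    collapse into a narrow, roughly i.i.d. distribution."""
    return [b - a for a, b in zip(seq, seq[1:])]

def binned_entropy_bits(values, k=16):
    """Rough per-value entropy (bits) after uniform binning into k bins."""
    lo, hi = min(values), max(values)
    width = (hi - lo) / k or 1  # avoid zero width for constant data
    counts = Counter(min(int((v - lo) / width), k - 1) for v in values)
    n = len(values)
    return -sum(c / n * math.log2(c / n) for c in counts.values())
```

A linear ramp delta-encodes to a constant (0 bits), while an alternating sequence delta-encodes to two roughly equiprobable symbols (~1 bit).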
|
2502.06113
|
Towards Bio-inspired Heuristically Accelerated Reinforcement Learning
for Adaptive Underwater Multi-Agents Behaviour
|
cs.RO cs.SY eess.SY
|
This paper describes the problem of coordination of an autonomous Multi-Agent
System which aims to solve the coverage planning problem in a complex
environment. The considered applications are the detection and identification
of objects of interest while covering an area. These tasks, which are highly
relevant for space applications, are also of interest among various domains
including the underwater context, which is the focus of this study. In this
context, coverage planning is traditionally modelled as a Markov Decision
Process (MDP) where a coordinated MAS, a swarm of heterogeneous autonomous
underwater vehicles, is required to survey an area and search for objects. This
MDP is associated with several challenges: environment uncertainties,
communication constraints, and an ensemble of hazards, including time-varying
and unpredictable changes in the underwater environment. Multi-agent
reinforcement learning (MARL) algorithms can solve
highly non-linear problems using deep neural networks and display great
scalability against an increased number of agents. Nevertheless, most of the
current results in the underwater domain are limited to simulation due to the
high learning time of MARL algorithms. For this reason, a novel strategy is
introduced to accelerate this convergence rate by incorporating biologically
inspired heuristics to guide the policy during training. Particle Swarm
Optimization (PSO), which is inspired by the collective behaviour of groups of
animals, is selected as the heuristic.
It allows the policy to explore the highest quality regions of the action and
state spaces, from the beginning of the training, optimizing the
exploration/exploitation trade-off. The resulting agent requires fewer
interactions to reach optimal performance. The method is applied to the MSAC
algorithm and evaluated for a 2D covering area mission in a continuous control
environment.
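The canonical PSO update the heuristic is built on is shown below for scalar particle positions; the coupling of this heuristic to the MSAC policy during training is not shown, and the coefficients are the textbook defaults rather than the paper's values:

```python
import random

def pso_step(positions, velocities, pbest, gbest, w=0.7, c1=1.5, c2=1.5, rng=None):
    """One canonical PSO step: each particle's velocity blends inertia (w),
    attraction to its personal best (c1), and attraction to the swarm's
    global best (c2), then the position moves by the new velocity."""
    rng = rng or random.Random(0)
    new_pos, new_vel = [], []
    for x, v, pb in zip(positions, velocities, pbest):
        r1, r2 = rng.random(), rng.random()
        v_new = w * v + c1 * r1 * (pb - x) + c2 * r2 * (gbest - x)
        new_vel.append(v_new)
        new_pos.append(x + v_new)
    return new_pos, new_vel
```

The stochastic weights `r1`, `r2` keep the swarm exploring while both "best" terms steer it toward high-quality regions, which is exactly the exploration/exploitation balance the abstract appeals to.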
|
2502.06114
|
A Novel Multi-Teacher Knowledge Distillation for Real-Time Object
Detection using 4D Radar
|
cs.CV
|
Accurate 3D object detection is crucial for safe autonomous navigation,
requiring reliable performance across diverse weather conditions. While LiDAR
performance deteriorates in challenging weather, Radar systems maintain their
reliability. Traditional Radars have limitations due to their lack of elevation
data, but the recent 4D Radars overcome this by measuring elevation alongside
range, azimuth, and Doppler velocity, making them invaluable for autonomous
vehicles. The primary challenge in utilizing 4D Radars is the sparsity of their
point clouds. Previous works address this by developing architectures that
better capture semantics and context in sparse point clouds, largely drawing
from LiDAR-based approaches. However, these methods often overlook a unique
advantage of 4D Radars: the dense Radar tensor, which encapsulates power
measurements across three spatial dimensions and the Doppler dimension. Our
paper leverages this tensor to tackle the sparsity issue. We introduce a novel
knowledge distillation framework that enables a student model to densify its
sparse input in the latent space by emulating an ensemble of teacher models.
Our experiments demonstrate a 25% performance improvement over the
state-of-the-art RTNH model on the K-Radar dataset. Notably, this improvement
is achieved while still maintaining a real-time inference speed.
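A generic form of the multi-teacher objective can be sketched as the student's latent features regressing onto the averaged teacher ensemble. This mean-ensemble MSE is a common distillation choice and an assumption here; the paper's exact loss may differ:

```python
import numpy as np

def multi_teacher_distill_loss(student_latent, teacher_latents):
    """MSE between the student's (sparse-input) latent features and the
    average of several teachers' latent features, encouraging the student
    to densify its representation toward the ensemble."""
    ensemble = np.mean(teacher_latents, axis=0)
    return float(np.mean((student_latent - ensemble) ** 2))
```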
|
2502.06115
|
Task-driven Layerwise Additive Activation Intervention
|
cs.CL cs.LG
|
Modern language models (LMs) have significantly advanced generative modeling
in natural language processing (NLP). Despite their success, LMs often struggle
with adaptation to new contexts in real-time applications. A promising approach
to task adaptation is activation intervention, which steers the LMs' generation
process by identifying and manipulating the activations. However, existing
interventions are highly dependent on heuristic rules or require many prompt
inputs to determine effective interventions. This paper proposes a layer-wise
additive activation intervention framework that optimizes the intervention
process, thus enhancing the sample efficiency. We benchmark our framework on
various datasets, demonstrating improvements in the accuracy of pre-trained LMs
and competing intervention baselines.
|
2502.06116
|
Event Vision Sensor: A Review
|
physics.ins-det cs.CV
|
By monitoring temporal contrast, event-based vision sensors can provide high
temporal resolution and low latency while maintaining low power consumption and
simplicity in circuit structure. These characteristics have garnered
significant attention in both academia and industry. In recent years, the
application of back-illuminated (BSI) technology, wafer stacking techniques,
and industrial interfaces has brought new opportunities for enhancing the
performance of event-based vision sensors. This is evident in the substantial
advancements made in reducing noise, improving resolution, and increasing
readout rates. Additionally, the integration of these technologies has enhanced
the compatibility of event-based vision sensors with current and edge vision
systems, providing greater possibilities for their practical applications. This
paper will review the progression from neuromorphic engineering to
state-of-the-art event-based vision sensor technologies, including their
development trends, operating principles, and key features. Moreover, we will
delve into the sensitivity of event-based vision sensors and the opportunities
and challenges they face in the realm of infrared imaging, providing references
for future research and applications.
|
2502.06117
|
Revisiting Dynamic Graph Clustering via Matrix Factorization
|
cs.LG cs.AI stat.ML
|
Dynamic graph clustering aims to detect and track time-varying clusters in
dynamic graphs, revealing the evolutionary mechanisms of complex real-world
dynamic systems. Matrix factorization-based methods are promising approaches
for this task; however, these methods often struggle with scalability and can
be time-consuming when applied to large-scale dynamic graphs. Moreover, they
tend to lack robustness and are vulnerable to real-world noisy data. To address
these issues, we make three key contributions. First, to improve scalability,
we propose temporal separated matrix factorization, where a single matrix is
divided into multiple smaller matrices for independent factorization, resulting
in faster computation. Second, to improve robustness, we introduce
bi-clustering regularization, which jointly optimizes graph embedding and
clustering, thereby filtering out noisy features from the graph embeddings.
Third, to further enhance effectiveness and efficiency, we propose selective
embedding updating, where we update only the embeddings of dynamic nodes while
the embeddings of static nodes are fixed among different timestamps.
Experimental results on six synthetic and five real-world benchmarks
demonstrate the scalability, robustness and effectiveness of our proposed
method. Source code is available at https://github.com/Clearloveyuan/DyG-MF.
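The first contribution, temporal separated factorization, amounts to factorizing each snapshot independently instead of one large matrix. A minimal sketch using truncated SVD is below; the bi-clustering regularization and selective embedding updating are omitted:

```python
import numpy as np

def temporal_separated_mf(adj_snapshots, rank):
    """Factorize each temporal adjacency snapshot independently via
    truncated SVD, returning one node-embedding matrix per timestamp.
    Independent per-snapshot problems are smaller and trivially parallel,
    which is the scalability idea."""
    embeddings = []
    for A in adj_snapshots:
        u, s, _ = np.linalg.svd(A, full_matrices=False)
        embeddings.append(u[:, :rank] * s[:rank])
    return embeddings
```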
|
2502.06118
|
Token-Domain Multiple Access: Exploiting Semantic Orthogonality for
Collision Mitigation
|
cs.IT eess.SP math.IT
|
Token communications is an emerging generative semantic communication concept
that reduces transmission rates by using context and transformer-based token
processing, with tokens serving as universal semantic units. In this paper, we
propose a semantic multiple access scheme in the token domain, referred to as
ToDMA, where a large number of devices share a tokenizer and a modulation
codebook for source and channel coding, respectively. Specifically, the source
signal is tokenized into sequences, with each token modulated into a codeword.
Codewords from multiple devices are transmitted simultaneously, resulting in
overlap at the receiver. The receiver detects the transmitted tokens, assigns
them to their respective sources, and mitigates token collisions by leveraging
context and semantic orthogonality across the devices' messages. Simulations
demonstrate that the proposed ToDMA framework outperforms context-unaware
orthogonal and non-orthogonal communication methods in image transmission
tasks, achieving lower latency and better image quality.
|
2502.06119
|
An Appearance Defect Detection Method for Cigarettes Based on
C-CenterNet
|
cs.CV
|
Due to the poor adaptability of traditional methods in the cigarette
detection task on the automatic cigarette production line, it is difficult to
accurately identify whether a cigarette has defects and the types of defects;
thus, a cigarette appearance defect detection method based on C-CenterNet is
proposed. This detector uses keypoint estimation to locate center points and
regresses all other defect properties. First, ResNet-50 is used as the
backbone feature extraction network, and the convolutional block attention
mechanism (CBAM) is introduced to enhance the network's ability to extract
effective features and reduce the interference of non-target information. At
the same time, the feature pyramid network is used to enhance the feature
extraction of each layer. Then, deformable convolution is used to replace part
of the common convolution to enhance the learning ability of different shape
defects. Finally, the activation function ACON (ActivateOrNot) is used instead
of the ReLU activation function, and the activation operation of some neurons
is adaptively selected to improve the detection accuracy of the network. The
experimental results are mainly acquired via the mean Average Precision (mAP).
The experimental results show that the mAP of the C-CenterNet model applied in
the cigarette appearance defect detection task is 95.01%. Compared with the
original CenterNet model, the model's success rate is increased by 6.14%, so it
can meet the requirements of precision and adaptability in cigarette detection
tasks on the automatic cigarette production line.
|
2502.06123
|
Real-Time LiDAR Point Cloud Compression and Transmission for
Resource-constrained Robots
|
cs.RO
|
LiDARs are widely used in autonomous robots due to their ability to provide
accurate environment structural information. However, the large size of point
clouds poses challenges in terms of data storage and transmission. In this
paper, we propose a novel point cloud compression and transmission framework
for resource-constrained robotic applications, called RCPCC. We iteratively fit
the surface of point clouds with a similar range value and eliminate redundancy
through their spatial relationships. Then, we use Shape-adaptive DCT (SA-DCT)
to transform the unfit points and reduce the data volume by quantizing the
transformed coefficients. We design an adaptive bitrate control strategy based
on QoE as the optimization goal to control the quality of the transmitted point
cloud. Experiments show that our framework achieves compression rates of
40$\times$ to 80$\times$ while maintaining high accuracy for downstream
applications. Our method significantly outperforms other baselines in terms of
accuracy when the compression rate exceeds 70$\times$. Furthermore, in
situations of reduced communication bandwidth, our adaptive bitrate control
strategy demonstrates significant QoE improvements. The code will be available
at https://github.com/HITSZ-NRSL/RCPCC.git.
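The transform-and-quantize stage for the unfit points can be illustrated with a plain 1-D DCT-II followed by uniform quantization. The shape-adaptive part of SA-DCT and the step size are omitted/assumed here; this only shows where the rate reduction comes from:

```python
import math

def dct2(x):
    """Naive orthonormal DCT-II of a 1-D signal: smooth inputs concentrate
    energy into a few low-frequency coefficients."""
    n = len(x)
    out = []
    for k in range(n):
        s = sum(v * math.cos(math.pi * k * (2 * i + 1) / (2 * n))
                for i, v in enumerate(x))
        scale = math.sqrt(1 / n) if k == 0 else math.sqrt(2 / n)
        out.append(scale * s)
    return out

def quantize(coeffs, step=0.5):
    """Uniform quantization: small coefficients collapse to 0, shrinking the
    data volume at a bounded reconstruction error of step/2 per coefficient."""
    return [round(c / step) for c in coeffs]

def dequantize(q, step=0.5):
    return [v * step for v in q]
```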
|
2502.06124
|
Foundation Model of Electronic Medical Records for Adaptive Risk
Estimation
|
cs.LG cs.AI
|
We developed the Enhanced Transformer for Health Outcome Simulation (ETHOS),
an AI model that tokenizes patient health timelines (PHTs) from EHRs. ETHOS
predicts future PHTs using transformer-based architectures. The Adaptive Risk
Estimation System (ARES) employs ETHOS to compute dynamic and personalized risk
probabilities for clinician-defined critical events. ARES incorporates a
personalized explainability module that identifies key clinical factors
influencing risk estimates for individual patients. ARES was evaluated on the
MIMIC-IV v2.2 dataset in emergency department (ED) settings, benchmarking its
performance against traditional early warning systems and machine learning
models. We processed 299,721 unique patients from MIMIC-IV into 285,622 PHTs,
with 60% including hospital admissions. The dataset contained over 357 million
tokens. ETHOS outperformed benchmark models in predicting hospital admissions,
ICU admissions, and prolonged hospital stays, achieving superior AUC scores.
ETHOS-based risk estimates demonstrated robustness across demographic subgroups
with strong model reliability, confirmed via calibration curves. The
personalized explainability module provides insights into patient-specific
factors contributing to risk. ARES, powered by ETHOS, advances predictive
healthcare AI by providing dynamic, real-time, and personalized risk estimation
with patient-specific explainability to enhance clinician trust. Its
adaptability and superior accuracy position it as a transformative tool for
clinical decision-making, potentially improving patient outcomes and resource
allocation in emergency and inpatient settings. We release the full code at
github.com/ipolharvard/ethos-ares to facilitate future research.
|
2502.06126
|
Graph Pseudotime Analysis and Neural Stochastic Differential Equations
for Analyzing Retinal Degeneration Dynamics and Beyond
|
cs.LG
|
Understanding disease progression at the molecular pathway level usually
requires capturing both structural dependencies between pathways and the
temporal dynamics of disease evolution. In this work, we solve the former
challenge by developing a biologically informed graph-forming method to
efficiently construct pathway graphs for subjects from our newly curated JR5558
mouse transcriptomics dataset. We then develop Graph-level Pseudotime Analysis
(GPA) to infer graph-level trajectories that reveal how disease progresses at
the population level, rather than in individual subjects. Based on the
trajectories estimated by GPA, we identify the most sensitive pathways that
drive disease stage transitions. In addition, we measure changes in pathway
features using neural stochastic differential equations (SDEs), which enables
us to formally define and compute pathway stability and disease bifurcation
points (points of no return), two fundamental problems in disease progression
research. We further extend our theory to the case when pathways can interact
with each other, enabling a more comprehensive and multi-faceted
characterization of disease phenotypes. The comprehensive experimental results
demonstrate the effectiveness of our framework in reconstructing the dynamics
of the pathway, identifying critical transitions, and providing novel insights
into the mechanistic understanding of disease evolution.
|
2502.06127
|
Improved YOLOv5s model for key components detection of power
transmission lines
|
cs.CV cs.AI
|
High-voltage transmission lines are located far from the road, resulting in
inconvenient inspection work and rising maintenance costs. Intelligent
inspection of power transmission lines has become increasingly important.
However, subsequent intelligent inspection relies on accurately detecting
various key components. Due to the low detection accuracy of key components in
transmission line image inspection, this paper proposes an improved object
detection model based on the YOLOv5s (You Only Look Once Version 5 Small) model
to improve the detection accuracy of key components of transmission lines.
According to the characteristics of the power grid inspection image, we first
modify the distance measurement in the k-means clustering to improve the anchor
matching of the YOLOv5s model. Then, we add the convolutional block attention
module (CBAM) attention mechanism to the backbone network to improve accuracy.
Finally, we apply the focal loss function to reduce the impact of class
imbalance. Our improved method's mAP (mean average precision) reached 98.1%,
the precision reached 97.5%, the recall reached 94.4%, and the detection rate
reached 84.8 FPS (frames per second). The experimental results show that our
improved model improves detection accuracy and has performance advantages over
other models.
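The modified-distance anchor clustering can be sketched with the common YOLO recipe of k-means under a 1 − IoU distance on box widths/heights; the paper's exact distance modification may differ from this standard form:

```python
import numpy as np

def iou_wh(boxes, anchors):
    """IoU between boxes and anchors assuming a shared top-left corner,
    i.e. using only widths and heights (the usual anchor-clustering trick)."""
    inter = np.minimum(boxes[:, None, 0], anchors[None, :, 0]) * \
            np.minimum(boxes[:, None, 1], anchors[None, :, 1])
    area_b = boxes[:, 0] * boxes[:, 1]
    area_a = anchors[:, 0] * anchors[:, 1]
    return inter / (area_b[:, None] + area_a[None, :] - inter)

def kmeans_anchors(boxes, k, iters=50, seed=0):
    """k-means where distance = 1 - IoU (so assignment maximizes IoU),
    making anchors scale-aware rather than driven by raw Euclidean size."""
    rng = np.random.default_rng(seed)
    anchors = boxes[rng.choice(len(boxes), k, replace=False)]
    for _ in range(iters):
        assign = np.argmax(iou_wh(boxes, anchors), axis=1)
        for c in range(k):
            if np.any(assign == c):
                anchors[c] = boxes[assign == c].mean(axis=0)
    return anchors
```

Unlike Euclidean k-means, the IoU distance does not systematically penalize large boxes, which is why it yields better anchor matching for detectors.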
|
2502.06128
|
Intelligent Reconfigurable Optical Wireless Ether
|
eess.SY cs.SY eess.SP
|
Optical wireless communication (OWC) uses light for wireless data
transmission, potentially providing faster and more secure communication than
traditional radio-frequency-based techniques like Wi-Fi. However, light's high
directionality and its limited penetration ability restrict the signal
coverage. To address this limitation, we propose an artificial "optical
wireless ether" (OWE) fabric. OWE acts as a reconfigurable electromagnetic (EM)
wave-propagating medium, intelligently enhancing the strength of light signals
and redirecting their propagation to cover a broader area. Our proposed ether
fabric comprises simple optical signal amplification units, called ether
amplifiers (EAs), strategically placed in the environment, e.g., on ceilings.
The EAs amplify and propagate signals at the analog level and are agnostic to
the signal format: Signals propagate wirelessly between the EAs, losing
strength due to attenuation during transmission but regaining it as they pass
through the EAs. The key challenge in OWE design is that, while increasing EA
gains can extend signal coverage, doing so can also create positive feedback
loops, resulting in self-interference and amplifier saturation that distort
the signals. This paper presents a
systematic theoretical analysis to prevent amplifier saturation while
optimizing the performance of OWE in both single-basic-service-set (single-BSS)
and multiple-BSS scenarios. Optimization objectives could include
signal-to-noise ratio, resource allocation fairness, and mutual interference.
Furthermore, we conducted simulations and experiments to corroborate our
theories. To our knowledge, ours is the first experimental demonstration of the
feasibility of an artificial ether fabric for extending and guiding light
propagation, laying a solid groundwork for future development and exploration
of OWE.
|
2502.06130
|
Self-Correcting Decoding with Generative Feedback for Mitigating
Hallucinations in Large Vision-Language Models
|
cs.CV cs.CL
|
While recent Large Vision-Language Models (LVLMs) have shown remarkable
performance in multi-modal tasks, they are prone to generating hallucinatory
text responses that do not align with the given visual input, which restricts
their practical applicability in real-world scenarios. In this work, inspired
by the observation that the text-to-image generation process is the inverse of
image-conditioned response generation in LVLMs, we explore the potential of
leveraging text-to-image generative models to assist in mitigating
hallucinations in LVLMs. We discover that generative models can offer valuable
self-feedback for mitigating hallucinations at both the response and token
levels. Building on this insight, we introduce self-correcting Decoding with
Generative Feedback (DeGF), a novel training-free algorithm that incorporates
feedback from text-to-image generative models into the decoding process to
effectively mitigate hallucinations in LVLMs. Specifically, DeGF generates an
image from the initial response produced by LVLMs, which acts as an auxiliary
visual reference and provides self-feedback to verify and correct the initial
response through complementary or contrastive decoding. Extensive experimental
results validate the effectiveness of our approach in mitigating diverse types
of hallucinations, consistently surpassing state-of-the-art methods across six
benchmarks. Code is available at https://github.com/zhangce01/DeGF.
|
2502.06132
|
Enhancing Document Key Information Localization Through Data
Augmentation
|
cs.CV cs.CL
|
The Visually Rich Form Document Intelligence and Understanding (VRDIU) Track
B focuses on the localization of key information in document images. The goal
is to develop a method capable of localizing objects in both digital and
handwritten documents, using only digital documents for training. This paper
presents a simple yet effective approach that includes a document augmentation
phase and an object detection phase. Specifically, we augment the training set
of digital documents by mimicking the appearance of handwritten documents. Our
experiments demonstrate that this pipeline enhances the models' generalization
ability and achieves high performance in the competition.
|
2502.06134
|
Integrating Sequence and Image Modeling in Irregular Medical Time Series
Through Self-Supervised Learning
|
cs.CV cs.AI
|
Medical time series are often irregular and face significant missingness,
posing challenges for data analysis and clinical decision-making. Existing
methods typically adopt a single modeling perspective, either treating series
data as sequences or transforming them into image representations for further
classification. In this paper, we propose a joint learning framework that
incorporates both sequence and image representations. We also design three
self-supervised learning strategies to facilitate the fusion of sequence and
image representations, capturing a more generalizable joint representation. The
results indicate that our approach outperforms seven other state-of-the-art
models in three representative real-world clinical datasets. We further
validate our approach by simulating two major types of real-world missingness
through leave-sensors-out and leave-samples-out techniques. The results
demonstrate that our approach is more robust and significantly surpasses other
baselines in terms of classification performance.
|
2502.06136
|
Graph Neural Networks at a Fraction
|
cs.LG cs.AI
|
Graph Neural Networks (GNNs) have emerged as powerful tools for learning
representations of graph-structured data. In addition to real-valued GNNs,
quaternion GNNs also perform well on tasks on graph-structured data. With the
aim of reducing the energy footprint, we reduce the model size while
maintaining accuracy comparable to that of the original-sized GNNs. This paper
introduces Quaternion Message Passing Neural Networks (QMPNNs), a framework
that leverages quaternion space to compute node representations. Our approach
offers a generalizable method for incorporating quaternion representations into
GNN architectures at one-fourth of the original parameter count. Furthermore,
we present a novel perspective on Graph Lottery Tickets, redefining their
applicability within the context of GNNs and QMPNNs. We specifically aim to
find the initialization lottery from the subnetwork of the GNNs that can
achieve performance comparable to the original GNN upon training, thereby
reducing the trainable model parameters even further. To validate the
effectiveness of our proposed QMPNN framework and the lottery ticket
hypothesis (LTH) for both GNNs and QMPNNs,
we evaluate their performance on real-world datasets across three fundamental
graph-based tasks: node classification, link prediction, and graph
classification.
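The one-fourth parameter count comes from the quaternion (Hamilton product) weight layout, in which four shared sub-matrices parameterize a full real-valued map. The sign convention below is one common layout; conventions vary across quaternion-network papers:

```python
import numpy as np

def quaternion_weight(r, i, j, k):
    """Assemble the real-valued weight matrix implied by a quaternion linear
    map from four shared n x n sub-matrices. The resulting 4n x 4n map is
    parameterized by 4*n*n values -- one quarter of a dense layer's count."""
    return np.block([
        [r, -i, -j, -k],
        [i,  r, -k,  j],
        [j,  k,  r, -i],
        [k, -j,  i,  r],
    ])
```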
|
2502.06138
|
Enhanced Hybrid Deep Learning Approach for Botnet Attacks Detection in
IoT Environment
|
cs.CR cs.CV
|
Cyberattacks in an Internet of Things (IoT) environment can have significant
impacts because of the interconnected nature of devices and systems. An
attacker uses a network of compromised IoT devices in a botnet attack to carry
out various harmful activities. Detecting botnet attacks poses several
challenges because of the intricate and evolving nature of these threats.
Botnet attacks erode trust in IoT devices and systems, undermining confidence
in their security, reliability, and integrity. Deep learning techniques have
significantly enhanced the detection of botnet attacks due to their ability to
analyze and learn from complex patterns in data. This research proposed the
stacking of Deep convolutional neural networks, Bi-Directional Long Short-Term
Memory (Bi-LSTM), Bi-Directional Gated Recurrent Unit (Bi-GRU), and Recurrent
Neural Networks (RNN) for botnet attack detection, evaluated on the UNSW-NB15
dataset. According to the experimental results, the proposed model accurately
captures the intricate patterns and features of
botnet attacks, with a testing accuracy of 99.76%. The proposed model also
identifies botnets with a high ROC-AUC curve value of 99.18%. A performance
comparison of the proposed method with existing state-of-the-art models
confirms its higher performance. The outcomes of this research could strengthen
cyber security procedures and safeguard against new attacks.
|
2502.06139
|
LCIRC: A Recurrent Compression Approach for Efficient Long-form Context
and Query Dependent Modeling in LLMs
|
cs.CL
|
While large language models (LLMs) excel in generating coherent and
contextually rich outputs, their capacity to efficiently handle long-form
contexts is limited by fixed-length position embeddings. Additionally, the
computational cost of processing long sequences increases quadratically, making
it challenging to extend context length. To address these challenges, we
propose Long-form Context Injection with Recurrent Compression (LCIRC), a
method that enables efficient processing of long-form sequences beyond the
model's length limit through recurrent compression without retraining the
entire model. We further introduce query dependent context modeling, which
selectively compresses query-relevant information, ensuring that the model
retains the most pertinent content. Our empirical results demonstrate that
Query Dependent LCIRC (QD-LCIRC) significantly improves LLM's ability to manage
extended contexts, making it well-suited for tasks that require both
comprehensive context understanding and query relevance.
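The recurrent-compression idea can be sketched as folding an arbitrarily long stream of feature chunks into a fixed-size memory via a recurrence. The random linear recurrence below is purely illustrative; LCIRC learns its compressor:

```python
import numpy as np

def recurrent_compress(chunks, mem_dim, seed=0):
    """Fold a sequence of (tokens x dim) feature chunks into a fixed-size
    memory vector: each step mixes the previous memory with a summary of
    the next chunk, so memory cost stays constant in sequence length."""
    rng = np.random.default_rng(seed)
    d = chunks[0].shape[1]
    W_m = rng.standard_normal((mem_dim, mem_dim)) / np.sqrt(mem_dim)
    W_x = rng.standard_normal((mem_dim, d)) / np.sqrt(d)
    mem = np.zeros(mem_dim)
    for chunk in chunks:
        mem = np.tanh(W_m @ mem + W_x @ chunk.mean(axis=0))
    return mem
```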
|
2502.06141
|
Mixed Reality Outperforms Virtual Reality for Remote Error Resolution in
Pick-and-Place Tasks
|
cs.RO cs.HC
|
This study evaluates the performance and usability of Mixed Reality (MR),
Virtual Reality (VR), and camera stream interfaces for remote error resolution
tasks, such as correcting warehouse packaging errors. Specifically, we consider
a scenario where a robotic arm halts after detecting an error, requiring a
remote operator to intervene and resolve it via pick-and-place actions.
Twenty-one participants performed simulated pick-and-place tasks using each
interface. A linear mixed model (LMM) analysis of task resolution time,
usability scores (SUS), and mental workload scores (NASA-TLX) showed that the
MR interface outperformed both VR and camera interfaces. MR enabled
significantly faster task completion, was rated higher in usability, and was
perceived to be less cognitively demanding. Notably, the MR interface, which
projected a virtual robot onto a physical table, provided superior spatial
understanding and physical reference cues. Post-study surveys further confirmed
participants' preference for MR over other interfaces.
|
2502.06142
|
Linear Bandits with Partially Observable Features
|
stat.ML cs.LG
|
We introduce a novel linear bandit problem with partially observable
features, resulting in partial reward information and spurious estimates.
Without properly addressing the latent part, the regret can grow linearly in
the decision horizon $T$, as the latent features' influence on rewards is unknown. To tackle
this, we propose a novel analysis to handle the latent features and an
algorithm that achieves sublinear regret. The core of our algorithm involves
(i) augmenting basis vectors orthogonal to the observed feature space, and (ii)
introducing an efficient doubly robust estimator. Our approach achieves a
regret bound of $\tilde{O}(\sqrt{(d + d_h)T})$, where $d$ is the dimension of
observed features, and $d_h$ is the unknown dimension of the subspace of the
unobserved features. Notably, our algorithm requires no prior knowledge of the
unobserved feature space, which may expand as more features become hidden.
Numerical experiments confirm that our algorithm outperforms both
non-contextual multi-armed bandits and linear bandit algorithms depending
solely on observed features.
|
2502.06145
|
Animate Anyone 2: High-Fidelity Character Image Animation with
Environment Affordance
|
cs.CV
|
Recent character image animation methods based on diffusion models, such as
Animate Anyone, have made significant progress in generating consistent and
generalizable character animations. However, these approaches fail to produce
reasonable associations between characters and their environments. To address
this limitation, we introduce Animate Anyone 2, aiming to animate characters
with environment affordance. Beyond extracting motion signals from source
video, we additionally capture environmental representations as conditional
inputs. The environment is formulated as the region excluding the characters,
and our model generates characters to populate these regions while
maintaining coherence with the environmental context. We propose a
shape-agnostic mask strategy that more effectively characterizes the
relationship between character and environment. Furthermore, to enhance the
fidelity of object interactions, we leverage an object guider to extract
features of interacting objects and employ spatial blending for feature
injection. We also introduce a pose modulation strategy that enables the model
to handle more diverse motion patterns. Experimental results demonstrate the
superior performance of the proposed method.
|
2502.06146
|
Guided Exploration for Efficient Relational Model Learning
|
cs.LG cs.AI
|
Efficient exploration is critical for learning relational models in
large-scale environments with complex, long-horizon tasks. Random exploration
methods often collect redundant or irrelevant data, limiting their ability to
learn accurate relational models of the environment. Goal-literal babbling
(GLIB) improves upon random exploration by setting and planning to novel goals,
but its reliance on random actions and random novel goal selection limits its
scalability to larger domains. In this work, we identify the principles
underlying efficient exploration in relational domains: (1) operator
initialization with demonstrations that cover the distinct lifted effects
necessary for planning and (2) refining preconditions to collect maximally
informative transitions by selecting informative goal-action pairs and
executing plans to them. To demonstrate these principles, we introduce
Baking-Large, a challenging domain with extensive state-action spaces and
long-horizon tasks. We evaluate methods using oracle-driven demonstrations for
operator initialization and precondition-targeting guidance to efficiently
gather critical transitions. Experiments show that both the oracle
demonstrations and precondition-targeting oracle guidance significantly improve
sample efficiency and generalization, paving the way for future methods to use
these principles to efficiently learn accurate relational models in complex
domains.
|
2502.06147
|
LegalViz: Legal Text Visualization by Text To Diagram Generation
|
cs.CL
|
Legal documents including judgments and court orders require highly
sophisticated legal knowledge for understanding. To disclose expert knowledge
for non-experts, we explore the problem of visualizing legal texts with
easy-to-understand diagrams and propose a novel dataset of LegalViz with 23
languages and 7,010 cases of legal document and visualization pairs, using the
DOT graph description language of Graphviz. LegalViz provides a simple diagram
from a complicated legal corpus, identifying at a glance the legal entities,
transactions, legal sources, and statements that are essential in each judgment. In
addition, we provide new evaluation metrics for the legal diagram visualization
by considering graph structures, textual similarities, and legal contents. We
conducted empirical studies on few-shot and finetuning large language models
for generating legal diagrams and evaluated them with these metrics, including
legal content-based evaluation within 23 languages. Models trained with
LegalViz outperform existing models including GPTs, confirming the
effectiveness of our dataset.
|
2502.06148
|
Optimizing Knowledge Integration in Retrieval-Augmented Generation with
Self-Selection
|
cs.CL cs.IR
|
Retrieval-Augmented Generation (RAG), which integrates external knowledge
into Large Language Models (LLMs), has proven effective in enabling LLMs to
produce more accurate and reliable responses. However, how to effectively
integrate external retrieved knowledge with internal parametric knowledge in
LLMs remains a significant challenge. In this work, we propose a novel
Self-Selection RAG framework, where the LLM is made to select from pairwise
responses generated with internal parametric knowledge solely and with external
retrieved knowledge together to achieve enhanced accuracy. To this end, we
devise a Self-Selection-RGP method to enhance the capabilities of the LLM in
both generating and selecting the correct answer, by training the LLM with
Direct Preference Optimization (DPO) over a curated Retrieval Generation
Preference (RGP) dataset. Experimental results with two open-source LLMs (i.e.,
Llama2-13B-Chat and Mistral-7B) well demonstrate the superiority of our
approach over other baseline methods on the Natural Questions (NQ) and TriviaQA
datasets.
|
2502.06149
|
Reward-Based Collision-Free Algorithm for Trajectory Planning of
Autonomous Robots
|
cs.RO cs.SY eess.SY
|
This paper introduces a new mission planning algorithm for autonomous robots
that enables the reward-based selection of an optimal waypoint sequence from a
predefined set. The algorithm computes a feasible trajectory and corresponding
control inputs for a robot to navigate between waypoints while avoiding
obstacles, maximizing the total reward, and adhering to constraints on state,
input and its derivatives, mission time window, and maximum distance. This also
solves a generalized prize-collecting traveling salesman problem. The proposed
algorithm employs a new genetic algorithm that evolves solution candidates
toward the optimal solution based on a fitness function and crossover. During
fitness evaluation, a penalty method enforces constraints, and the differential
flatness property with clothoid curves efficiently penalizes infeasible
trajectories. The Euler spiral method showed promising results for trajectory
parameterization compared to minimum snap and jerk polynomials. Due to the
discrete exploration space, crossover is performed using a dynamic
time-warping-based method and extended convex combination with projection. A
mutation step enhances exploration. Results demonstrate the algorithm's ability
to find the optimal waypoint sequence, fulfill constraints, avoid infeasible
waypoints, and prioritize high-reward ones. Simulations and experiments with a
ground vehicle, quadrotor, and quadruped are presented, complemented by
benchmarking and a time-complexity analysis.
|
2502.06150
|
Scaling Public Health Text Annotation: Zero-Shot Learning vs.
Crowdsourcing for Improved Efficiency and Labeling Accuracy
|
cs.CL
|
Public health researchers are increasingly interested in using social media
data to study health-related behaviors, but manually labeling this data can be
labor-intensive and costly. This study explores whether zero-shot labeling
using large language models (LLMs) can match or surpass conventional
crowd-sourced annotation for Twitter posts related to sleep disorders, physical
activity, and sedentary behavior. Multiple annotation pipelines were designed
to compare labels produced by domain experts, crowd workers, and LLM-driven
approaches under varied prompt-engineering strategies. Our findings indicate
that LLMs can rival human performance in straightforward classification tasks
and significantly reduce labeling time, yet their accuracy diminishes for tasks
requiring more nuanced domain knowledge. These results clarify the trade-offs
between automated scalability and human expertise, demonstrating conditions
under which LLM-based labeling can be efficiently integrated into public health
research without undermining label quality.
|
2502.06151
|
Powerformer: A Transformer with Weighted Causal Attention for
Time-series Forecasting
|
cs.LG cs.AI stat.ML
|
Transformers have recently shown strong performance in time-series
forecasting, but their all-to-all attention mechanism overlooks the (temporal)
causal and often (temporally) local nature of data. We introduce Powerformer, a
novel Transformer variant that replaces noncausal attention weights with causal
weights that are reweighted according to a smooth heavy-tailed decay. This
simple yet effective modification endows the model with an inductive bias
favoring temporally local dependencies, while still allowing sufficient
flexibility to learn the unique correlation structure of each dataset. Our
empirical results demonstrate that Powerformer not only achieves
state-of-the-art accuracy on public time-series benchmarks, but also that it
offers improved interpretability of attention patterns. Our analyses show that
the model's locality bias is amplified during training, demonstrating an
interplay between time-series data and power-law-based attention. These
findings highlight the importance of domain-specific modifications to the
Transformer architecture for time-series forecasting, and they establish
Powerformer as a strong, efficient, and principled baseline for future research
and real-world applications.
|
2502.06152
|
The Value of Information in Human-AI Decision-making
|
cs.AI cs.LG
|
Humans and AIs are often paired on decision tasks with the expectation of
achieving complementary performance, where the combination of human and AI
outperforms either one alone. However, how to improve the performance of a
human-AI team is often unclear without knowing more about what particular
information and strategies each agent employs. We provide a decision-theoretic framework
for characterizing the value of information -- and consequently, opportunities
for agents to better exploit available information -- in AI-assisted decision
workflows. We demonstrate the use of the framework for model selection,
empirical evaluation of human-AI performance, and explanation design. We
propose a novel information-based instance-level explanation technique that
adapts a conventional saliency-based explanation to explain information value
in decision making.
|
2502.06153
|
Low Tensor-Rank Adaptation of Kolmogorov--Arnold Networks
|
cs.LG cs.AI
|
Kolmogorov--Arnold networks (KANs) have demonstrated their potential as an
alternative to multi-layer perceptrons (MLPs) in various domains, especially
for science-related tasks. However, transfer learning of KANs remains a
relatively unexplored area. In this paper, inspired by Tucker decomposition of
tensors and evidence on the low tensor-rank structure in KAN parameter updates,
we develop low tensor-rank adaptation (LoTRA) for fine-tuning KANs. We study
the expressiveness of LoTRA based on Tucker decomposition approximations.
Furthermore, we provide a theoretical analysis to select the learning rates for
each LoTRA component to enable efficient training. Our analysis also shows that
using identical learning rates across all components leads to inefficient
training, highlighting the need for an adaptive learning rate strategy. Beyond
theoretical insights, we explore the application of LoTRA for efficiently
solving various partial differential equations (PDEs) by fine-tuning KANs.
Additionally, we propose Slim KANs that incorporate the inherent
low-tensor-rank properties of KAN parameter tensors to reduce model size while
maintaining superior performance. Experimental results validate the efficacy of
the proposed learning rate selection strategy and demonstrate the effectiveness
of LoTRA for transfer learning of KANs in solving PDEs. Further evaluations on
Slim KANs for function representation and image classification tasks highlight
the expressiveness of LoTRA and the potential for parameter reduction through
low tensor-rank decomposition.
|
2502.06155
|
Efficient-vDiT: Efficient Video Diffusion Transformers With Attention
Tile
|
cs.CV
|
Despite the promise of synthesizing high-fidelity videos, Diffusion
Transformers (DiTs) with 3D full attention suffer from expensive inference due
to the complexity of attention computation and numerous sampling steps. For
example, the popular Open-Sora-Plan model consumes more than 9 minutes for
generating a single video of 29 frames. This paper addresses the inefficiency
issue from two aspects: 1) Prune the 3D full attention based on the redundancy
within video data; We identify a prevalent tile-style repetitive pattern in the
3D attention maps for video data, and advocate a new family of sparse 3D
attention that holds a linear complexity w.r.t. the number of video frames. 2)
Shorten the sampling process by adopting existing multi-step consistency
distillation; We split the entire sampling trajectory into several segments and
perform consistency distillation within each one to activate few-step
generation capacities. We further devise a three-stage training pipeline to
conjoin the low-complexity attention and few-step generation capacities.
Notably, with 0.1% pretraining data, we turn the Open-Sora-Plan-1.2 model into
an efficient one that is 7.4x-7.8x faster for 29- and 93-frame 720p video
generation with a marginal performance trade-off in VBench. In addition, we
demonstrate that our approach is amenable to distributed inference, achieving
an additional 3.91x speedup when running on 4 GPUs with sequence parallelism.
|
2502.06156
|
Axial current as the origin of quantum intrinsic orbital angular
momentum
|
hep-ph cs.SY eess.SY
|
We show that it is impossible to experimentally observe the quantum intrinsic
orbital angular momentum (IOAM) effect without its axial current. Broadly
speaking, we argue that the spiral or interference characteristics of the axial
current density determine the occurrence of nonlinear or tunneling effects in
any spacetime-dependent quantum system. Our findings offer a comprehensive
theoretical framework that addresses the limitations of Keldysh theory and
provides new insights into the angular momentum properties of quantum systems,
particularly in tunneling-dominated regimes. Using Wigner function methods, a
fermionic generalized two-level model, and Berry phase simulations, we predict
that the IOAM effect can persist even in pure quantum tunneling processes. These
results open the door for experimental verification of IOAM effects in future
high-intensity QED experiments, such as those using X-ray free electron lasers.
|
2502.06159
|
Analysis and Optimization of Robustness in Multiplex Flow Networks
Against Cascading Failures
|
eess.SY cs.SY
|
Networked systems are susceptible to cascading failures, where the failure of
an initial set of nodes propagates through the network, often leading to
system-wide failures. In this work, we propose a multiplex flow network model
to study robustness against cascading failures triggered by random failures.
The model is inspired by systems where nodes carry or support multiple types of
flows, and failures result in the redistribution of flows within the same layer
rather than between layers. To represent different types of interdependencies
between the layers of the multiplex network, we define two cases of failure
conditions: layer-independent overload and layer-influenced overload. We
provide recursive equations and their solutions to calculate the steady-state
fraction of surviving nodes, validate them through a set of simulation
experiments, and discuss optimal load-capacity allocation strategies. Our
results demonstrate that allocating the total excess capacity to each layer
proportional to the mean effective load in the layer and distributing that
excess capacity equally among the nodes within the layer ensures maximum
robustness. The proposed framework for different failure conditions allows us
to analyze the two overload conditions presented and can be extended to explore
more complex interdependent relationships.
|
2502.06163
|
Scalable k-Means Clustering for Large k via Seeded Approximate
Nearest-Neighbor Search
|
cs.LG cs.CG stat.ML
|
For very large values of $k$, we consider methods for fast $k$-means
clustering of massive datasets with $10^7\sim10^9$ points in high-dimensions
($d\geq100$). All current practical methods for this problem have runtimes at
least $\Omega(k^2)$. We find that initialization routines are not a bottleneck
for this case. Instead, it is critical to improve the speed of Lloyd's
local-search algorithm, particularly the step that reassigns points to their
closest center. Attempting to improve this step naturally leads us to leverage
approximate nearest-neighbor search methods, although this alone is not enough
to be practical. Instead, we propose a family of problems we call "Seeded
Approximate Nearest-Neighbor Search", for which we propose "Seeded
Search-Graph" methods as a solution.
|
2502.06164
|
Generalized Temporal Tensor Decomposition with Rank-revealing Latent-ODE
|
cs.LG stat.ML
|
Tensor decomposition is a fundamental tool for analyzing multi-dimensional
data by learning low-rank factors to represent high-order interactions. While
recent works on temporal tensor decomposition have made significant progress by
incorporating continuous timestamps in latent factors, they still struggle with
general tensor data with continuous indexes not only in the temporal mode but
also in other modes, such as spatial coordinates in climate data. Additionally,
the problem of determining the tensor rank remains largely unexplored in
temporal tensor models. To address these limitations, we propose
\underline{G}eneralized temporal tensor decomposition with
\underline{R}ank-r\underline{E}vealing laten\underline{T}-ODE (GRET).
Our approach encodes continuous spatial indexes as learnable Fourier features
and employs neural ODEs in latent space to learn the temporal trajectories of
factors. To automatically reveal the rank of temporal tensors, we introduce a
rank-revealing Gaussian-Gamma prior over the factor trajectories. We develop an
efficient variational inference scheme with an analytical evidence lower bound,
enabling sampling-free optimization. Through extensive experiments on both
synthetic and real-world datasets, we demonstrate that GRET not only reveals
the underlying ranks of temporal tensors but also significantly outperforms
existing methods in prediction performance and robustness against noise.
|
2502.06166
|
Portable, High-Frequency, and High-Voltage Control Circuits for
Untethered Miniature Robots Driven by Dielectric Elastomer Actuators
|
cs.RO
|
In this work, we propose a high-voltage, high-frequency control circuit for
the untethered applications of dielectric elastomer actuators (DEAs). The
circuit board leverages low-voltage resistive components connected in series to
control voltages of up to 1.8 kV within a compact size, suitable for
frequencies ranging from 0 to 1 kHz. A single-channel control board weighs only
2.5 g. We tested the performance of the control circuit under different load
conditions and power supplies. Based on this control circuit, along with a
commercial miniature high-voltage power converter, we construct an untethered
crawling robot driven by a cylindrical DEA. The 42-g untethered robot
successfully achieved crawling locomotion on a bench and within a pipeline at a
driving frequency of 15 Hz, while simultaneously transmitting real-time video
data via an onboard camera and antenna. Our work provides a practical way to
use low-voltage control electronics to achieve the untethered driving of DEAs,
and therefore portable and wearable devices.
|
2502.06167
|
Universal Approximation of Visual Autoregressive Transformers
|
cs.LG cs.AI cs.CL cs.CV
|
We investigate the fundamental limits of transformer-based foundation models,
extending our analysis to include Visual Autoregressive (VAR) transformers. VAR
represents a big step toward generating images using a novel, scalable,
coarse-to-fine ``next-scale prediction'' framework. These models set a new
quality bar, outperforming all previous methods, including Diffusion
Transformers, while having state-of-the-art performance for image synthesis
tasks. Our primary contribution establishes that single-head VAR transformers
with a single self-attention layer and a single interpolation layer are
universal. From the statistical perspective, we prove
that such simple VAR transformers are universal approximators for any
image-to-image Lipschitz functions. Furthermore, we demonstrate that flow-based
autoregressive transformers inherit similar approximation capabilities. Our
results provide important design principles for effective and computationally
efficient VAR Transformer strategies that can be used to extend their utility
to more sophisticated VAR models in image generation and other related areas.
|
2502.06168
|
Dynamic Pricing with Adversarially-Censored Demands
|
stat.ML cs.LG econ.EM math.OC
|
We study an online dynamic pricing problem where the potential demand at each
time period $t=1,2,\ldots, T$ is stochastic and dependent on the price.
However, a perishable inventory level is imposed at the beginning of each time period $t$,
censoring the potential demand if it exceeds the inventory level. To address
this problem, we introduce a pricing algorithm based on the optimistic
estimates of derivatives. We show that our algorithm achieves
$\tilde{O}(\sqrt{T})$ optimal regret even with adversarial inventory series.
Our findings advance the state-of-the-art in online decision-making problems
with censored feedback, offering a theoretically optimal solution against
adversarial observations.
|
2502.06170
|
An Interpretable Implicit-Based Approach for Modeling Local Spatial
Effects: A Case Study of Global Gross Primary Productivity
|
cs.CV cs.AI cs.LG
|
In Earth sciences, unobserved factors exhibit non-stationary spatial
distributions, causing the relationships between features and targets to
display spatial heterogeneity. In geographic machine learning tasks,
conventional statistical learning methods often struggle to capture spatial
heterogeneity, leading to unsatisfactory prediction accuracy and unreliable
interpretability. While approaches like Geographically Weighted Regression
(GWR) capture local variations, they fall short of uncovering global patterns
and tracking the continuous evolution of spatial heterogeneity. Motivated by
this limitation, we propose a novel perspective: simultaneously modeling
common features across different locations alongside spatial
differences using deep neural networks. The proposed method is a dual-branch
neural network with an encoder-decoder structure. In the encoding stage, the
method aggregates node information in a spatiotemporal conditional graph using
GCN and LSTM, encoding location-specific spatiotemporal heterogeneity as an
implicit conditional vector. Additionally, a self-attention-based encoder is
used to extract location-invariant common features from the data. In the
decoding stage, the approach employs a conditional generation strategy that
predicts response variables and interpretative weights based on data features
under spatiotemporal conditions. The approach is validated by predicting
vegetation gross primary productivity (GPP) using global climate and land cover
data from 2001 to 2020. Trained on 50 million samples and tested on 2.8
million, the proposed model achieves an RMSE of 0.836, outperforming LightGBM
(1.063) and TabNet (0.944). Visualization analyses indicate that our method can
reveal the distribution differences of the dominant factors of GPP across
various times and locations.
|
2502.06171
|
A Data-Efficient Pan-Tumor Foundation Model for Oncology CT
Interpretation
|
eess.IV cs.CV
|
Artificial intelligence-assisted imaging analysis has made substantial
strides in tumor diagnosis and management. Here we present PASTA, a pan-tumor
CT foundation model that achieves state-of-the-art performance on 45 of 46
representative oncology tasks -- including lesion segmentation, tumor detection
in plain CT, tumor staging, survival prediction, structured report generation,
and cross-modality transfer learning, significantly outperforming the
second-best models on 35 tasks. This remarkable advancement is driven by our
development of PASTA-Gen, an innovative synthetic tumor generation framework
that produces a comprehensive dataset of 30,000 CT scans with pixel-level
annotated lesions and paired structured reports, encompassing malignancies
across ten organs and five benign lesion types. By leveraging this rich,
high-quality synthetic data, we overcome a longstanding bottleneck in the
development of CT foundation models -- specifically, the scarcity of publicly
available, high-quality annotated datasets due to privacy constraints and the
substantial labor required for scaling precise data annotation. Encouragingly,
PASTA demonstrates exceptional data efficiency with promising practical value,
markedly improving performance on various tasks with only a small amount of
real-world data. The open release of both the synthetic dataset and PASTA
foundation model effectively addresses the challenge of data scarcity, thereby
advancing oncological research and clinical translation.
|
2502.06172
|
PLATTER: A Page-Level Handwritten Text Recognition System for Indic
Scripts
|
cs.CV
|
In recent years, the field of Handwritten Text Recognition (HTR) has seen the
emergence of various new models, each claiming to outperform the others in
specific scenarios. However, making a fair comparison of
these models is challenging due to inconsistent choices and diversity in test
sets. Furthermore, recent advancements in HTR often fail to account for the
diverse languages, especially Indic languages, likely due to the scarcity of
relevant labeled datasets. Moreover, much of the previous work has focused
primarily on character-level or word-level recognition, overlooking the crucial
stage of Handwritten Text Detection (HTD) necessary for building a page-level
end-to-end handwritten OCR pipeline. Through our paper, we address these gaps
by making three pivotal contributions. Firstly, we present an end-to-end
framework for Page-Level hAndwriTTen TExt Recognition (PLATTER) by treating it
as a two-stage problem involving word-level HTD followed by HTR. This approach
enables us to identify, assess, and address challenges in each stage
independently. Secondly, we demonstrate the usage of PLATTER to measure the
performance of our language-agnostic HTD model and present a consistent
comparison of six trained HTR models on ten diverse Indic languages, thereby
encouraging standardized evaluation. Finally, we also release a Corpus of
Handwritten Indic Scripts (CHIPS), a meticulously curated, page-level Indic
handwritten OCR dataset labeled for both detection and recognition purposes.
Additionally, we release our code and trained models, to encourage further
contributions in this direction.
|
2502.06173
|
Uncertainty-Aware Adaptation of Large Language Models for
Protein-Protein Interaction Analysis
|
cs.LG cs.AI cs.CL stat.AP stat.ML
|
Identification of protein-protein interactions (PPIs) helps derive cellular
mechanistic understanding, particularly in the context of complex conditions
such as neurodegenerative disorders, metabolic syndromes, and cancer. Large
Language Models (LLMs) have demonstrated remarkable potential in predicting
protein structures and interactions via automated mining of vast biomedical
literature; yet their inherent uncertainty remains a key challenge for deriving
reproducible findings, critical for biomedical applications. In this study, we
present an uncertainty-aware adaptation of LLMs for PPI analysis, leveraging
fine-tuned LLaMA-3 and BioMedGPT models. To enhance prediction reliability, we
integrate LoRA ensembles and Bayesian LoRA models for uncertainty
quantification (UQ), ensuring confidence-calibrated insights into protein
behavior. Our approach achieves competitive performance in PPI identification
across diverse disease contexts while addressing model uncertainty, thereby
enhancing trustworthiness and reproducibility in computational biology. These
findings underscore the potential of uncertainty-aware LLM adaptation for
advancing precision medicine and biomedical research.
|
2502.06178
|
Bayesian Optimization by Kernel Regression and Density-based Exploration
|
math.OC cs.LG stat.ML
|
Bayesian optimization is highly effective for optimizing
expensive-to-evaluate black-box functions, but it faces significant
computational challenges due to the high computational complexity of Gaussian
processes, which results in a total time complexity that is quartic with
respect to the number of iterations. To address this limitation, we propose the
Bayesian Optimization by Kernel regression and density-based Exploration (BOKE)
algorithm. BOKE uses kernel regression for efficient function approximation,
kernel density for exploration, and the improved kernel regression upper
confidence bound criteria to guide the optimization process, thus reducing
computational costs to quadratic. Our theoretical analysis rigorously
establishes the global convergence of BOKE and ensures its robustness. Through
extensive numerical experiments on both synthetic and real-world optimization
tasks, we demonstrate that BOKE not only performs competitively compared to
Gaussian process-based methods but also exhibits superior computational
efficiency. These results highlight BOKE's effectiveness in
resource-constrained environments, providing a practical approach for
optimization problems in engineering applications.
|
2502.06180
|
RideKE: Leveraging Low-Resource, User-Generated Twitter Content for
Sentiment and Emotion Detection in Kenyan Code-Switched Dataset
|
cs.CL cs.AI
|
Social media has become a crucial open-access platform for individuals to
express opinions and share experiences. However, leveraging low-resource
language data from Twitter is challenging due to scarce, poor-quality content
and the major variations in language use, such as slang and code-switching.
Identifying tweets in these languages can be difficult as Twitter primarily
supports high-resource languages. We analyze Kenyan code-switched data and
evaluate four state-of-the-art (SOTA) transformer-based pretrained models for
sentiment and emotion classification, using supervised and semi-supervised
methods. We detail the methodology behind data collection and annotation, and
the challenges encountered during the data curation phase. Our results show
that XLM-R outperforms other models; for sentiment analysis, the supervised
XLM-R model achieves the highest accuracy (69.2\%) and F1 score (66.1\%),
followed by semi-supervised XLM-R (67.2\% accuracy, 64.1\% F1 score). In
emotion analysis, supervised DistilBERT leads in accuracy (59.8\%) and F1 score
(31\%), followed by semi-supervised mBERT (59\% accuracy, 26.5\% F1 score).
AfriBERTa models show the lowest accuracy and F1 scores. All models tend to
predict neutral sentiment, with AfriBERTa showing the highest bias and a unique
sensitivity to the empathy emotion. https://github.com/NEtori21/Ride_hailing
|
2502.06181
|
CANeRV: Content Adaptive Neural Representation for Video Compression
|
cs.CV
|
Recent advances in video compression introduce implicit neural representation
(INR) based methods, which effectively capture global dependencies and
characteristics of entire video sequences. Unlike traditional and deep learning
based approaches, INR-based methods optimize network parameters from a global
perspective, resulting in superior compression potential. However, most current
INR methods utilize a fixed and uniform network architecture across all frames,
limiting their adaptability to dynamic variations within and between video
sequences. This often leads to suboptimal compression outcomes as these methods
struggle to capture the distinct nuances and transitions in video content. To
overcome these challenges, we propose Content Adaptive Neural Representation
for Video Compression (CANeRV), an innovative INR-based video compression
network that adaptively conducts structure optimisation based on the specific
content of each video sequence. To better capture dynamic information across
video sequences, we propose a dynamic sequence-level adjustment (DSA).
Furthermore, to enhance the capture of dynamics between frames within a
sequence, we implement a dynamic frame-level adjustment (DFA). Finally, to
effectively capture spatial structural information within video frames, thereby
enhancing the detail restoration capabilities of CANeRV, we devise a
hierarchical structure-level adaptation (HSA). Experimental results
demonstrate that CANeRV can outperform both H.266/VVC and state-of-the-art
INR-based video compression techniques across diverse video datasets.
|
2502.06185
|
Discourse-Driven Evaluation: Unveiling Factual Inconsistency in Long
Document Summarization
|
cs.CL cs.AI
|
Detecting factual inconsistency for long document summarization remains
challenging, given the complex structure of the source article and long summary
length. In this work, we study factual inconsistency errors and connect them
with a line of discourse analysis. We find that errors are more common in
complex sentences and are associated with several discourse features. We
propose a framework that decomposes long texts into discourse-inspired chunks
and utilizes discourse information to better aggregate sentence-level scores
predicted by natural language inference models. Our approach shows improved
performance on top of different model baselines over several evaluation
benchmarks, covering rich domains of texts, focusing on long document
summarization. This underscores the significance of incorporating discourse
features in developing models for scoring summaries for long document factual
inconsistency.
|
2502.06186
|
Learning the Frequency Dynamics of the Power System Using Higher-order
Dynamic Mode Decomposition
|
eess.SY cs.SY
|
The increasing penetration of renewable energy sources, characterised by low
inertia and intermittent disturbances, presents substantial challenges to power
system stability. As critical indicators of system stability, frequency
dynamics and associated oscillatory phenomena have attracted significant
research attention. While existing studies predominantly employ linearized
models, our findings demonstrate that linear approximations exhibit
considerable errors when predicting frequency oscillation dynamics across
multiple time scales, thus necessitating the incorporation of nonlinear
characteristics. This paper proposes a data-driven approach based on
higher-order dynamic mode decomposition (HODMD) for learning frequency
dynamics. The proposed method offers distinct advantages over alternative
nonlinear methods, including no prior knowledge required, adaptability to
high-dimensional systems, and robust performance. Furthermore, HODMD
demonstrates superior capability in capturing system-wide spatio-temporal
modes, successfully identifying modal behaviour that remains undetectable
through standard Dynamic Mode Decomposition techniques. The efficacy of the
proposed methodology is validated through comprehensive case studies on both
IEEE 14-bus and WECC systems.
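The core HODMD step, delay-embedding the snapshot matrix before a standard exact-DMD eigendecomposition, can be sketched as below. This is a minimal illustration; the function name `hodmd_eigs`, the delay depth `d`, and the truncation rank `r` are assumptions, not the paper's implementation.

```python
import numpy as np

def hodmd_eigs(X, d=2, r=None):
    """Higher-order DMD sketch: stack d time-delayed copies of the
    snapshot matrix, then run exact DMD on the enlarged matrix; the
    delay embedding lets it recover modes plain DMD cannot separate."""
    n, m = X.shape
    Z = np.vstack([X[:, i:m - d + 1 + i] for i in range(d)])  # delay embedding
    A, B = Z[:, :-1], Z[:, 1:]                                # snapshot pairs
    U, s, Vh = np.linalg.svd(A, full_matrices=False)
    if r is not None:                                         # rank truncation
        U, s, Vh = U[:, :r], s[:r], Vh[:r]
    Atilde = U.conj().T @ B @ Vh.conj().T / s                 # reduced operator
    return np.linalg.eigvals(Atilde)                          # DMD eigenvalues
```

On a noiseless oscillation x_k = (cos wk, sin wk), the eigenvalues come out as e^{+iw} and e^{-iw} on the unit circle, i.e. an undamped mode at frequency w.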
|
2502.06189
|
Multi-Level Decoupled Relational Distillation for Heterogeneous
Architectures
|
cs.CV
|
Heterogeneous distillation is an effective way to transfer knowledge from
cross-architecture teacher models to student models. However, existing
heterogeneous distillation methods do not take full advantage of the dark
knowledge hidden in the teacher's output, limiting their performance. To this
end, we propose a novel framework named Multi-Level Decoupled Relational
Knowledge Distillation (MLDR-KD) to unleash the potential of relational
distillation in heterogeneous distillation. Concretely, we first introduce
Decoupled Finegrained Relation Alignment (DFRA) in both logit and feature
levels to balance the trade-off between distilled dark knowledge and the
confidence in the correct category of the heterogeneous teacher model. Then,
a Multi-Scale Dynamic Fusion (MSDF) module is applied to dynamically fuse the
projected logits of multi-scale features at different stages in the student
model, further improving the performance of our method at the feature level. We
verify our
method on four architectures (CNNs, Transformers, MLPs, and Mambas) and two
datasets (CIFAR-100 and Tiny-ImageNet). Compared with the best available
method, our MLDR-KD improves student model performance with gains of up to
4.86% on CIFAR-100 and 2.78% on Tiny-ImageNet datasets respectively, showing
robustness and generality in heterogeneous distillation. Code will be released
soon.
|
2502.06190
|
Is Science Inevitable?
|
cs.DL cs.SI
|
Using large-scale citation data and a breakthrough metric, the study
systematically evaluates the inevitability of scientific breakthroughs. We find
that scientific breakthroughs emerge as multiple discoveries rather than
singular events. Through analysis of over 40 million journal articles, we
identify multiple discoveries as papers that independently displace the same
reference using the Disruption Index (D-index), suggesting functional
equivalence. Our findings support Merton's core argument that scientific
discoveries arise from historical context rather than individual genius. The
results reveal a long-tail distribution pattern of multiple discoveries across
various datasets, challenging Merton's Poisson model while reinforcing the
structural inevitability of scientific progress.
|
2502.06192
|
Right Time to Learn: Promoting Generalization via Bio-inspired Spacing
Effect in Knowledge Distillation
|
cs.LG cs.AI
|
Knowledge distillation (KD) is a powerful strategy for training deep neural
networks (DNNs). Although it was originally proposed to train a more compact
``student'' model from a large ``teacher'' model, many recent efforts have
focused on adapting it to promote generalization of the model itself, such as
online KD and self KD. Here, we propose an accessible and
compatible strategy named Spaced KD to improve the effectiveness of both online
KD and self KD, in which the student model distills knowledge from a teacher
model trained with a space interval ahead. This strategy is inspired by a
prominent theory named \emph{spacing effect} in biological learning and memory,
positing that appropriate intervals between learning trials can significantly
enhance learning performance. With both theoretical and empirical analyses, we
demonstrate that the benefits of the proposed Spaced KD stem from convergence
to a flatter loss landscape during stochastic gradient descent (SGD). We
perform extensive experiments to validate the effectiveness of Spaced KD in
improving the learning performance of DNNs (e.g., the performance gain is up to
2.31\% and 3.34\% on Tiny-ImageNet over online KD and self KD, respectively).
|
2502.06193
|
Can LLMs Replace Human Evaluators? An Empirical Study of LLM-as-a-Judge
in Software Engineering
|
cs.SE cs.AI
|
Recently, large language models (LLMs) have been deployed to tackle various
software engineering (SE) tasks like code generation, significantly advancing
the automation of SE tasks. However, assessing the quality of these
LLM-generated code and text remains challenging. The commonly used Pass@k
metric necessitates extensive unit tests and configured environments, demands a
high labor cost, and is not suitable for evaluating LLM-generated text.
Conventional metrics like BLEU, which measure only lexical rather than semantic
similarity, have also come under scrutiny. In response, a new trend has emerged
to employ LLMs for automated evaluation, known as LLM-as-a-judge. These
LLM-as-a-judge methods are claimed to better mimic human assessment than
conventional metrics without relying on high-quality reference answers.
Nevertheless, their exact human alignment in SE tasks remains unexplored. In
this paper, we empirically explore LLM-as-a-judge methods for evaluating SE
tasks, focusing on their alignment with human judgments. We select seven
LLM-as-a-judge methods that utilize general-purpose LLMs, alongside two LLMs
specifically fine-tuned for evaluation. After generating and manually scoring
LLM responses on three recent SE datasets of code translation, code generation,
and code summarization, we then prompt these methods to evaluate each response.
Finally, we compare the scores generated by these methods with human
evaluation. The results indicate that output-based methods reach the highest
Pearson correlation of 81.32 and 68.51 with human scores in code translation
and generation, achieving near-human evaluation, noticeably outperforming
ChrF++, one of the best conventional metrics, at 34.23 and 64.92. Such
output-based methods prompt LLMs to output judgments directly, and exhibit more
balanced score distributions that resemble human score patterns. Finally, we
provide...
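The human-alignment figures quoted above are Pearson correlations between method scores and human scores; for reference, the statistic reduces to:

```python
import math

def pearson(xs, ys):
    """Sample Pearson correlation between two equal-length score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)
```

In practice `scipy.stats.pearsonr` computes the same quantity along with a p-value.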
|
2502.06194
|
Multimodal Task Representation Memory Bank vs. Catastrophic Forgetting
in Anomaly Detection
|
cs.CV
|
Unsupervised Continuous Anomaly Detection (UCAD) faces significant challenges
in multi-task representation learning, with existing methods suffering from
incomplete representation and catastrophic forgetting. Unlike supervised
models, unsupervised scenarios lack prior information, making it difficult to
effectively distinguish redundant and complementary multimodal features. To
address this, we propose the Multimodal Task Representation Memory Bank (MTRMB)
method, built on two key technical innovations: (1) a Key-Prompt-Multimodal
Knowledge (KPMK) mechanism that uses concise key prompts to guide cross-modal
feature interaction between BERT and ViT; and (2) Refined Structure-based
Contrastive Learning (RSCL), which leverages Grounding DINO and SAM to generate
precise segmentation masks, pulling features of the same structural region
closer while pushing different structural regions apart. Experiments on the
MVTec AD and VisA datasets
demonstrate MTRMB's superiority, achieving an average detection accuracy of
0.921 at the lowest forgetting rate, significantly outperforming
state-of-the-art methods. We plan to open-source the code on GitHub.
|
2502.06195
|
Calibration of Multiple Asynchronous Microphone Arrays using Hybrid TDOA
|
cs.SD cs.RO
|
Accurate calibration of acoustic sensing systems made of multiple
asynchronous microphone arrays is essential for satisfactory performance in
sound source localization and tracking. State-of-the-art calibration methods
for this type of system rely on the time difference of arrival and direction of
arrival measurements among the microphone arrays (denoted as TDOA-M and DOA,
respectively). In this paper, to enhance calibration accuracy, we propose to
incorporate the time difference of arrival measurements between adjacent sound
events (TDOA-S) with respect to the microphone arrays. More specifically, we
propose a two-stage calibration approach, including an initial value estimation
(IVE) procedure and the final joint optimization step. The IVE stage first
initializes all parameters except for microphone array orientations, using
hybrid TDOA (i.e., TDOA-M and TDOA-S), odometer data from a moving robot
carrying a speaker, and DOA. Subsequently, microphone orientations are
estimated through the iterative closest point method. The final joint
optimization step estimates multiple microphone array locations, orientations,
time offsets, clock drift rates, and sound source locations simultaneously.
Both simulation and experiment results show that for scenarios with low or
moderate TDOA noise levels, our approach outperforms existing methods in terms
of accuracy. All code and data are available at
https://github.com/AISLABsustech/Hybrid-TDOA-Multi-Calib.
|
2502.06196
|
Improved Extrinsic Calibration of Acoustic Cameras via Batch
Optimization
|
cs.RO cs.SD
|
Acoustic cameras have found many applications in practice. Accurate and
reliable extrinsic calibration of the microphone array and visual sensors
within acoustic cameras is crucial for fusing visual and auditory measurements.
Existing calibration methods either require prior knowledge of the microphone
array geometry or rely on grid search which suffers from slow iteration speed
or poor convergence. To overcome these limitations, in this paper, we propose
an automatic calibration technique using a calibration board with both visual
and acoustic markers to identify each microphone position in the camera frame.
We formulate the extrinsic calibration problem (between microphones and the
visual sensor) as a nonlinear least squares problem and employ a batch
optimization strategy to solve the associated problem. Extensive numerical
simulations and real-world experiments show that the proposed method improves
both the accuracy and robustness of extrinsic parameter calibration for
acoustic cameras, in comparison to existing methods. To benefit the community,
we open-source all the codes and data at
https://github.com/AISLAB-sustech/AcousticCamera.
|
2502.06200
|
On the query complexity of sampling from non-log-concave distributions
|
cs.DS cs.LG stat.ML
|
We study the problem of sampling from a $d$-dimensional distribution with
density $p(x)\propto e^{-f(x)}$, which does not necessarily satisfy good
isoperimetric conditions.
Specifically, we show that for any $L,M$ satisfying $LM\ge d\ge 5$,
$\epsilon\in \left(0,\frac{1}{32}\right)$, and any algorithm with query
accesses to the value of $f(x)$ and $\nabla f(x)$, there exists an
$L$-log-smooth distribution with second moment at most $M$ such that the
algorithm requires $\left(\frac{LM}{d\epsilon}\right)^{\Omega(d)}$ queries to
compute a sample whose distribution is within $\epsilon$ in total variation
distance to the target distribution. We complement the lower bound with an
algorithm requiring $\left(\frac{LM}{d\epsilon}\right)^{\mathcal O(d)}$
queries, thereby characterizing the tight (up to the constant in the exponent)
query complexity for sampling from the family of non-log-concave distributions.
Our results are in sharp contrast with the recent work of Huang et al.
(COLT'24), where an algorithm with quasi-polynomial query complexity was
proposed for sampling from a non-log-concave distribution when
$M=\mathtt{poly}(d)$. Their algorithm works under the stronger condition that
all distributions along the trajectory of the Ornstein-Uhlenbeck process,
starting from the target distribution, are $\mathcal O(1)$-log-smooth. We
investigate this condition and prove that it is strictly stronger than
requiring the target distribution to be $\mathcal O(1)$-log-smooth.
Additionally, we study this condition in the context of mixtures of Gaussians.
Finally, we place our results within the broader theme of ``sampling versus
optimization'', as studied in Ma et al. (PNAS'19). We show that for a wide
range of parameters, sampling is strictly easier than optimization by a
super-exponential factor in the dimension $d$.
|
2502.06201
|
Comparing Image Segmentation Algorithms
|
cs.CV
|
This paper presents a novel approach for denoising binary images using
simulated annealing (SA), a global optimization technique that addresses the
inherent challenges of non-convex energy functions. Binary images are often
corrupted by noise, necessitating effective restoration methods. We propose an
energy function E(x, y) that captures the relationship between the noisy image
y and the desired clean image x. Our algorithm combines simulated annealing
with a localized optimization strategy to efficiently navigate the solution
space, minimizing the energy function while maintaining computational
efficiency. We evaluate the performance of the proposed method against
traditional iterative conditional modes (ICM), employing a binary image with
10% pixel corruption as a test case. Experimental results demonstrate that the
simulated annealing method achieves a significant restoration improvement,
yielding a 99.19% agreement with the original image compared to 96.21% for ICM.
Visual assessments reveal that simulated annealing effectively removes noise
while preserving structural details, making it a promising approach for binary
image denoising. This work contributes to the field of image processing by
highlighting the advantages of incorporating global optimization techniques in
restoration tasks.
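The approach described, an Ising-style energy E(x, y) minimized by simulated annealing with localized single-pixel moves, can be sketched as follows for a {-1,+1} image. The coupling weights, cooling schedule, and sweep count here are illustrative guesses, not the paper's settings.

```python
import numpy as np

def denoise_sa(y, h=1.0, beta=2.0, T0=1.0, cooling=0.9, sweeps=40, seed=0):
    """Simulated-annealing denoiser for a {-1,+1} binary image (sketch).

    Ising-style energy E(x, y) = -h*sum_i x_i*y_i - beta*sum_<i,j> x_i*x_j
    couples each pixel to its noisy observation and its 4-neighbours;
    a flip that raises E is still accepted with prob exp(-dE/T), and T
    follows a geometric cooling schedule, one decay per sweep.
    """
    rng = np.random.default_rng(seed)
    x = y.copy()
    H, W = x.shape
    T = T0
    for _ in range(sweeps):
        for _ in range(H * W):
            i, j = rng.integers(H), rng.integers(W)
            nb = sum(x[a, b] for a, b in ((i-1, j), (i+1, j), (i, j-1), (i, j+1))
                     if 0 <= a < H and 0 <= b < W)
            dE = 2 * x[i, j] * (h * y[i, j] + beta * nb)  # energy change if flipped
            if dE < 0 or rng.random() < np.exp(-dE / T):
                x[i, j] = -x[i, j]
        T *= cooling
    return x
```

ICM corresponds to the zero-temperature limit of the same loop, accepting a flip only when dE < 0, which is exactly what makes it prone to the local minima that annealing escapes.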
|
2502.06204
|
Non-literal Understanding of Number Words by Language Models
|
cs.CL
|
Humans naturally interpret numbers non-literally, effortlessly combining
context, world knowledge, and speaker intent. We investigate whether large
language models (LLMs) interpret numbers similarly, focusing on hyperbole and
pragmatic halo effects. Through systematic comparison with human data and
computational models of pragmatic reasoning, we find that LLMs diverge from
human interpretation in striking ways. By decomposing pragmatic reasoning into
testable components, grounded in the Rational Speech Act framework, we pinpoint
where LLM processing diverges from human cognition -- not in prior knowledge,
but in reasoning with it. This insight leads us to develop a targeted solution
-- chain-of-thought prompting inspired by an RSA model makes LLMs'
interpretations more human-like. Our work demonstrates how computational
cognitive models can both diagnose AI-human differences and guide development
of more human-like language understanding capabilities.
|
2502.06205
|
C-3PO: Compact Plug-and-Play Proxy Optimization to Achieve Human-like
Retrieval-Augmented Generation
|
cs.CL cs.AI cs.LG
|
Retrieval-augmented generation (RAG) systems face a fundamental challenge in
aligning independently developed retrievers and large language models (LLMs).
Existing approaches typically involve modifying either component or introducing
simple intermediate modules, resulting in practical limitations and sub-optimal
performance. Inspired by human search behavior, which typically involves a
back-and-forth process of proposing search queries and reviewing documents, we
propose C-3PO, a proxy-centric framework that facilitates communication between
retrievers and LLMs through a lightweight multi-agent system. Our framework
implements three specialized agents that collaboratively optimize the entire
RAG pipeline without altering the retriever and LLMs. These agents work
together to assess the need for retrieval, generate effective queries, and
select information suitable for the LLMs. To enable effective multi-agent
coordination, we develop a tree-structured rollout approach for reward credit
assignment in reinforcement learning. Extensive experiments in both in-domain
and out-of-distribution scenarios demonstrate that C-3PO significantly enhances
RAG performance while maintaining plug-and-play flexibility and superior
generalization capabilities.
|
2502.06207
|
Unveiling the Capabilities of Large Language Models in Detecting
Offensive Language with Annotation Disagreement
|
cs.CL cs.AI
|
Large Language Models (LLMs) have become essential for offensive language
detection, yet their ability to handle annotation disagreement remains
underexplored. Disagreement samples, which arise from subjective
interpretations, pose a unique challenge due to their ambiguous nature.
Understanding how LLMs process these cases, particularly their confidence
levels, can offer insight into their alignment with human annotators. This
study systematically evaluates the performance of multiple LLMs in detecting
offensive language at varying levels of annotation agreement. We analyze binary
classification accuracy, examine the relationship between model confidence and
human disagreement, and explore how disagreement samples influence model
decision-making during few-shot learning and instruction fine-tuning. Our
findings reveal that LLMs struggle with low-agreement samples, often exhibiting
overconfidence in these ambiguous cases. However, utilizing disagreement
samples in training improves both detection accuracy and model alignment with
human judgment. These insights provide a foundation for enhancing LLM-based
offensive language detection in real-world moderation tasks.
|
2502.06208
|
Product gales and Finite state dimension
|
cs.IT math.IT
|
In this work, we introduce the notion of product gales, a modification of an
$s$-gale in which $k$ separate bets can be placed at each symbol. The product
of the bets placed is taken into the capital function of the product gale. We
show that Hausdorff dimension can be characterised using product gales.
A $k$-bet finite-state gambler is one that can place $k$ separate bets at
each symbol. We call the notion of finite-state dimension, characterized by
product gales induced by $k$-bet finite-state gamblers, as multi-bet
finite-state dimension. Bourke, Hitchcock and Vinodchandran gave an equivalent
characterisation of finite state dimension by disjoint block entropy rates. We
show that multi-bet finite state dimension can be characterised using sliding
block entropy rates. Further, we show that multi-bet finite state dimension can
also be characterised by disjoint block entropy rates.
Hence we show that finite state dimension and multi-bet finite state
dimension are the same notions, thereby giving a new characterisation of finite
state dimension using $k$-bet finite state $s$-gales. We also provide a proof
of equivalence between sliding and disjoint block entropy rates, providing an
alternate, automata-based proof of the result by Kozachinskiy and Shen.
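For concreteness, the disjoint block entropy rates underlying these characterisations can be estimated empirically as below; this is a generic sketch of the quantity, not the paper's construction.

```python
import math
from collections import Counter

def block_entropy_rate(s, l, alphabet_size=2):
    """Empirical disjoint l-block entropy rate of a finite string (sketch).

    Splits s into consecutive non-overlapping blocks of length l, takes
    the Shannon entropy of the empirical block distribution, and
    normalises by l*log2(alphabet) so the rate lies in [0, 1].
    """
    blocks = [s[i:i + l] for i in range(0, len(s) - l + 1, l)]
    counts = Counter(blocks)
    n = len(blocks)
    H = -sum((c / n) * math.log2(c / n) for c in counts.values())
    return H / (l * math.log2(alphabet_size))
```

A periodic string has rate 0 for block lengths dividing its period, while a string whose l-blocks are uniformly distributed has rate 1.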
|
2502.06209
|
Enhancing Cost Efficiency in Active Learning with Candidate Set Query
|
cs.LG cs.CV
|
This paper introduces a cost-efficient active learning (AL) framework for
classification, featuring a novel query design called candidate set query.
Unlike traditional AL queries requiring the oracle to examine all possible
classes, our method narrows down the set of candidate classes likely to include
the ground-truth class, significantly reducing the search space and labeling
cost. Moreover, we leverage conformal prediction to dynamically generate small
yet reliable candidate sets, adapting to model enhancement over successive AL
rounds. To this end, we introduce an acquisition function designed to
prioritize data points that offer high information gain at lower cost.
Empirical evaluations on CIFAR-10, CIFAR-100, and ImageNet64x64 demonstrate the
effectiveness and scalability of our framework. Notably, it reduces labeling
cost by 42% on ImageNet64x64.
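A standard split-conformal construction yields candidate sets of the kind described: calibrate a threshold on held-out nonconformity scores, then keep every class under it. This sketch is generic; the paper's acquisition function and exact set construction may differ.

```python
import numpy as np

def conformal_candidate_sets(cal_probs, cal_labels, test_probs, alpha=0.1):
    """Split-conformal candidate sets for labeling queries (generic sketch).

    Nonconformity of a calibration point is 1 - p(true class).  The
    conformal quantile of those scores gives a threshold q; each test
    point's candidate set keeps every class with 1 - p(class) <= q,
    covering the true class with probability >= 1 - alpha.
    """
    n = len(cal_labels)
    scores = 1.0 - cal_probs[np.arange(n), cal_labels]
    level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)  # finite-sample correction
    q = np.quantile(scores, level, method="higher")
    return [np.where(1.0 - p <= q)[0] for p in test_probs]
```

The oracle then only searches the (typically small) candidate set instead of all classes, which is where the labeling-cost savings come from.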
|
2502.06210
|
Position: Continual Learning Benefits from An Evolving Population over
An Unified Model
|
cs.LG
|
Deep neural networks have demonstrated remarkable success in machine
learning; however, they remain fundamentally ill-suited for Continual Learning
(CL). Recent research has increasingly focused on achieving CL without the need
for rehearsal. Among these, parameter isolation-based methods have proven
particularly effective in enhancing CL by optimizing model weights for each
incremental task. Despite their success, they fall short in optimizing
architectures tailored to distinct incremental tasks. To address this
limitation, updating a group of models with different architectures offers a
promising alternative to the traditional CL paradigm that relies on a single
unified model. Building on this insight, this study introduces a novel
Population-based Continual Learning (PCL) framework. PCL extends CL to the
architectural level by maintaining and evolving a population of neural network
architectures, which are continually refined for the current task through NAS.
Importantly, the well-evolved population for the current incremental task is
naturally inherited by the subsequent one, thereby facilitating forward
transfer, a crucial objective in CL. Throughout the CL process, the population
evolves, yielding task-specific architectures that collectively form a robust
CL system. Experimental results demonstrate that PCL outperforms
state-of-the-art rehearsal-free CL methods that employ a unified model,
highlighting its potential as a new paradigm for CL.
|
2502.06212
|
AVSim -- Realistic Simulation Framework for Airborne and Vector-Borne
Disease Dynamics
|
eess.SY cs.SY
|
The COVID-19 pandemic underscored the critical need for rapid epidemic trend
identification and effective intervention strategies to mitigate disease
progression and its socio-economic impact. Concurrent with emerging threats,
endemic diseases like dengue continue to strain healthcare systems,
particularly in populous, economically challenged nations. This paper
introduces AVSim (Airborne and Vectorborne Simulator), an agent-based model
designed to provide granular insights for optimizing resource allocation within
existing healthcare management frameworks. AVSim leverages realistic human
mobility and behavioral patterns to simulate disease propagation within a
detailed, scalable environment encompassing homes, schools, hospitals, and
commercial venues. Human movement is modeled based on occupational and
behavioral patterns, including age-specific activities. The simulator
incorporates age- and environment-specific disease outcomes, host-host and
host-vector interactions, and multiple disease stages, including mild, severe,
and critical phases. Immunity, quarantine, and hospitalization are also
modeled. Furthermore, AVSim supports tracing the path of disease spread,
providing micro-level insights into transmission dynamics. Implemented in
Python, AVSim offers flexibility and extensibility, enabling users to create
highly customized scenarios for airborne and vector-borne disease modeling.
Case studies demonstrating AVSim's application to COVID-19 and dengue
illustrate its potential for generating actionable epidemic insights, thereby
enhancing public health planning and response.
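At its core, an agent-based airborne model of this kind iterates venue visits and per-contact transmission. The toy loop below illustrates only that mechanic; all names and parameters are invented for illustration, and AVSim additionally models roles, vectors, disease phases, quarantine, and hospitalization.

```python
import random

def simulate_airborne(n_agents=200, n_venues=10, p_transmit=0.08,
                      recover_after=5, steps=60, seed=42):
    """Toy agent-based airborne-spread loop (illustrative only).

    Each step, agents visit a random venue; a susceptible agent sharing
    a venue with k infectious agents is infected with probability
    1 - (1 - p_transmit)**k, and infections recover after a fixed time.
    """
    rng = random.Random(seed)
    state = ["S"] * n_agents      # S(usceptible) / I(nfected) / R(ecovered)
    days_inf = [0] * n_agents
    state[0] = "I"                # patient zero
    for _ in range(steps):
        venue = [rng.randrange(n_venues) for _ in range(n_agents)]
        inf_per_venue = [0] * n_venues
        for a in range(n_agents):
            if state[a] == "I":
                inf_per_venue[venue[a]] += 1
        for a in range(n_agents):
            if state[a] == "S":
                k = inf_per_venue[venue[a]]
                if k and rng.random() < 1 - (1 - p_transmit) ** k:
                    state[a] = "I"
        for a in range(n_agents):
            if state[a] == "I":
                days_inf[a] += 1
                if days_inf[a] >= recover_after:
                    state[a] = "R"
    return state.count("S"), state.count("I"), state.count("R")
```

Per-venue infection counts are what make such simulators able to trace the path of spread at the micro level.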
|
2502.06215
|
LessLeak-Bench: A First Investigation of Data Leakage in LLMs Across 83
Software Engineering Benchmarks
|
cs.SE cs.AI cs.CL
|
Large Language Models (LLMs) are widely utilized in software engineering (SE)
tasks, such as code generation and automated program repair. However, their
reliance on extensive and often undisclosed pre-training datasets raises
significant concerns about data leakage, where the evaluation benchmark data is
unintentionally ``seen'' by LLMs during the model's construction phase. The
data leakage issue could largely undermine the validity of LLM-based research
and evaluations. Despite the increasing use of LLMs in the SE community, there
is no comprehensive study that assesses the extent of data leakage in SE
benchmarks for LLMs yet. To address this gap, this paper presents the first
large-scale analysis of data leakage in 83 SE benchmarks concerning LLMs. Our
results show that in general, data leakage in SE benchmarks is minimal, with
average leakage ratios of only 4.8\%, 2.8\%, and 0.7\% for Python, Java, and
C/C++ benchmarks, respectively. However, some benchmarks exhibit relatively
higher leakage ratios, which raises concerns about their bias in evaluation.
For instance, QuixBugs and BigCloneBench have leakage ratios of 100.0\% and
55.7\%, respectively. Furthermore, we observe that data leakage has a
substantial impact on LLM evaluation. We also identify key causes of high data
leakage, such as the direct inclusion of benchmark data in pre-training
datasets and the use of coding platforms like LeetCode for benchmark
construction. To address the data leakage, we introduce
\textbf{LessLeak-Bench}, a new benchmark that removes leaked samples from the
83 SE benchmarks, enabling more reliable LLM evaluations in future research.
Our study enhances the understanding of data leakage in SE benchmarks and
provides valuable insights for future research involving LLMs in SE.
|
2502.06217
|
Examining False Positives under Inference Scaling for Mathematical
Reasoning
|
cs.CL cs.AI
|
Recent advancements in language models have led to significant improvements
in mathematical reasoning across various benchmarks. However, most of these
benchmarks rely on automatic evaluation methods that only compare final answers
using heuristics, without verifying the underlying reasoning steps. This
limitation results in false positive solutions, where models may produce
correct final answers but with flawed deduction paths. In this paper, we
systematically examine the prevalence of false positive solutions in
mathematical problem solving for language models. We analyze the
characteristics and extent of this issue across different open-source models,
datasets of varying difficulty levels, and decoding strategies. Specifically,
we explore how false positives influence the inference time scaling behavior of
language models. Our experimental results reveal that: (1) false positive
solutions persist across different models, datasets, and decoding methods, (2)
sampling-based inference time scaling methods do not alleviate the problem, and
(3) the pass@N evaluation metric is more susceptible to false positives,
suggesting a significantly lower scaling ceiling than what automatic
evaluations indicate. Additionally, we analyze specific instances of false
positives and discuss potential limitations in self-improvement techniques and
synthetic data generation under such conditions.
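The pass@N behavior discussed above is usually measured with the unbiased pass@k estimator of Chen et al. (the Codex paper); since every false positive counts toward the number of "correct" samples c, large k drives the estimate toward 1 quickly:

```python
from math import comb

def pass_at_k(n, c, k):
    """Unbiased pass@k estimator: probability that at least one of k
    samples drawn without replacement from n generations (c of them
    judged correct) passes.  pass@k = 1 - C(n-c, k) / C(n, k)."""
    if n - c < k:
        return 1.0  # fewer incorrect samples than draws: some draw passes
    return 1.0 - comb(n - c, k) / comb(n, k)
```

For example, a single false positive among 100 generations leaves pass@1 at 0.01 but pushes pass@100 all the way to 1.0, which is why the pass@N ceiling is so sensitive to flawed deduction paths.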
|
2502.06219
|
Fully Exploiting Vision Foundation Model's Profound Prior Knowledge for
Generalizable RGB-Depth Driving Scene Parsing
|
cs.CV
|
Recent vision foundation models (VFMs), typically based on Vision Transformer
(ViT), have significantly advanced numerous computer vision tasks. Despite
their success in tasks focused solely on RGB images, the potential of VFMs in
RGB-depth driving scene parsing remains largely under-explored. In this
article, we take one step toward this emerging research area by investigating a
feasible technique to fully exploit VFMs for generalizable RGB-depth driving
scene parsing. Specifically, we explore the inherent characteristics of RGB and
depth data, thereby presenting a Heterogeneous Feature Integration Transformer
(HFIT). This network enables the efficient extraction and integration of
comprehensive heterogeneous features without re-training ViTs. Relative depth
prediction results from VFMs, used as inputs to the HFIT side adapter, overcome
the limitation of depending on depth maps. Our proposed HFIT demonstrates
superior performance compared to all other traditional single-modal and
data-fusion scene parsing networks, pre-trained VFMs, and ViT adapters on the
Cityscapes and KITTI Semantics datasets. We believe this novel strategy paves
the way for future innovations in VFM-based data-fusion techniques for driving
scene parsing. Our source code is publicly available at
https://mias.group/HFIT.
|
2502.06220
|
FunduSAM: A Specialized Deep Learning Model for Enhanced Optic Disc and
Cup Segmentation in Fundus Images
|
cs.CV cs.IR
|
The Segment Anything Model (SAM) has gained popularity as a versatile image
segmentation method, thanks to its strong generalization capabilities across
various domains. However, when applied to optic disc (OD) and optic cup (OC)
segmentation tasks, SAM encounters challenges due to the complex structures,
low contrast, and blurred boundaries typical of fundus images, leading to
suboptimal performance. To overcome these challenges, we introduce a novel
model, FunduSAM, which incorporates several Adapters into SAM to create a deep
network specifically designed for OD and OC segmentation. The FunduSAM utilizes
Adapter into each transformer block after encoder for parameter fine-tuning
(PEFT). It enhances SAM's feature extraction capabilities by designing a
Convolutional Block Attention Module (CBAM), addressing issues related to
blurred boundaries and low contrast. Given the unique requirements of OD and OC
segmentation, polar transformation is used to convert the original fundus OD
images into a format better suited for training and evaluating FunduSAM. A
joint loss is used to achieve structure preservation between the OD and OC,
while accurate segmentation. Extensive experiments on the REFUGE dataset,
comprising 1,200 fundus images, demonstrate the superior performance of
FunduSAM compared to five mainstream approaches.
|
2502.06221
|
Interaction-aware Conformal Prediction for Crowd Navigation
|
cs.RO
|
During crowd navigation, a robot's motion plan needs to account for human
motion uncertainty, and that uncertainty in turn depends on the robot's motion
plan. We introduce Interaction-aware Conformal Prediction (ICP) to alternate
between uncertainty-aware robot motion planning and decision-dependent human
motion uncertainty quantification. ICP is composed of a trajectory predictor to
predict human trajectories, a model predictive controller to plan robot motion
with confidence interval radii added for probabilistic safety, a human
simulator to collect a human trajectory calibration dataset conditioned on the
planned robot motion, and a conformal prediction module to quantify trajectory
prediction error on the decision-dependent calibration dataset. Crowd
navigation simulation experiments show that ICP strikes a good balance of
performance among navigation efficiency, social awareness, and uncertainty
quantification compared to previous works. ICP generalizes well to navigation
tasks under various crowd densities. The fast runtime and efficient memory
usage make ICP practical for real-world applications. Code is available at
https://github.com/tedhuang96/icp.
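The conformal prediction module described above can be sketched as a standard split-conformal quantile computation: the confidence-interval radius is the finite-sample-corrected quantile of prediction errors on the calibration set. This is a generic illustration under usual split-conformal assumptions (function and variable names are ours), not the authors' exact implementation:

```python
import math

def conformal_radius(calibration_errors, alpha=0.1):
    """Return a confidence-interval radius such that, under exchangeability,
    a new prediction error falls within the radius with probability >= 1 - alpha.

    `calibration_errors` are nonconformity scores, e.g. Euclidean distances
    between predicted and observed human positions on the calibration set.
    """
    n = len(calibration_errors)
    # Finite-sample-corrected rank: ceil((n + 1) * (1 - alpha))
    rank = math.ceil((n + 1) * (1 - alpha))
    if rank > n:
        # Calibration set too small for the requested coverage level
        return float("inf")
    return sorted(calibration_errors)[rank - 1]
```

In ICP this radius would then inflate the predicted human trajectory into a safety region for the model predictive controller; because the calibration data are conditioned on the planned robot motion, planning and quantification alternate.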
|
2502.06227
|
Unsupervised deep learning for semantic segmentation of multispectral
LiDAR forest point clouds
|
cs.CV
|
Point clouds captured with laser scanning systems from forest environments
can be utilized in a wide variety of applications within forestry and plant
ecology, such as the estimation of tree stem attributes, leaf angle
distribution, and above-ground biomass. However, effectively utilizing the data
in such tasks requires the semantic segmentation of the data into wood and
foliage points, also known as leaf-wood separation. The traditional approach to
leaf-wood separation has been geometry- and radiometry-based unsupervised
algorithms, which tend to perform poorly on data captured with airborne laser
scanning (ALS) systems, even with a high point density. While recent machine
and deep learning approaches achieve great results even on sparse point clouds,
they require manually labeled training data, which is often extremely laborious
to produce. Multispectral (MS) information has been demonstrated to have
potential for improving the accuracy of leaf-wood separation, but quantitative
assessment of its effects has been lacking. This study proposes a fully
unsupervised deep learning method, GrowSP-ForMS, which is specifically designed
for leaf-wood separation of high-density MS ALS point clouds and based on the
GrowSP architecture. GrowSP-ForMS achieved a mean accuracy of 84.3% and a mean
intersection over union (mIoU) of 69.6% on our MS test set, outperforming the
unsupervised reference methods by a significant margin. When compared to
supervised deep learning methods, our model performed similarly to the slightly
older PointNet architecture but was outclassed by more recent approaches.
Finally, two ablation studies were conducted, which demonstrated that our
proposed changes increased the test set mIoU of GrowSP-ForMS by 29.4 percentage
points (pp) in comparison to the original GrowSP model and that utilizing MS
data improved the mIoU by 5.6 pp from the monospectral case.
|
2502.06231
|
Falsification of Unconfoundedness by Testing Independence of Causal
Mechanisms
|
stat.ME cs.LG stat.ML
|
A major challenge in estimating treatment effects in observational studies is
the reliance on untestable conditions such as the assumption of no unmeasured
confounding. In this work, we propose an algorithm that can falsify the
assumption of no unmeasured confounding in a setting with observational data
from multiple heterogeneous sources, which we refer to as environments. Our
proposed falsification strategy leverages a key observation that unmeasured
confounding can cause observed causal mechanisms to appear dependent. Building
on this observation, we develop a novel two-stage procedure that detects these
dependencies with high statistical power while controlling false positives. The
algorithm does not require access to randomized data and, in contrast to other
falsification approaches, functions even under transportability violations when
the environment has a direct effect on the outcome of interest. To showcase the
practical relevance of our approach, we show that our method is able to
efficiently detect confounding on both simulated and real-world data.
|
2502.06233
|
Confidence Improves Self-Consistency in LLMs
|
cs.CL cs.AI
|
Self-consistency decoding enhances LLMs' performance on reasoning tasks by
sampling diverse reasoning paths and selecting the most frequent answer.
However, it is computationally expensive, as sampling many of these (lengthy)
paths is required to increase the chances that the correct answer emerges as
the most frequent one. To address this, we introduce Confidence-Informed
Self-Consistency (CISC). CISC performs a weighted majority vote based on
confidence scores obtained directly from the model. By prioritizing
high-confidence paths, it can identify the correct answer with a significantly
smaller sample size. When tested on nine models and four datasets, CISC
outperforms self-consistency in nearly all configurations, reducing the
required number of reasoning paths by over 40% on average. In addition, we
introduce the notion of within-question confidence evaluation, after showing
that standard evaluation methods are poor predictors of success in
distinguishing correct and incorrect answers to the same question. In fact, the
most calibrated confidence method proved to be the least effective for CISC.
Lastly, beyond these practical implications, our results and analyses show that
LLMs can effectively judge the correctness of their own outputs, contributing
to the ongoing debate on this topic.
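The weighted majority vote at the core of CISC can be sketched in a few lines; this is an illustrative sketch with hypothetical names, not the authors' implementation, and it leaves the choice of confidence score to the caller:

```python
from collections import defaultdict

def cisc_vote(samples):
    """Confidence-Informed Self-Consistency: weighted majority vote.

    `samples` is a list of (answer, confidence) pairs, one per sampled
    reasoning path, where confidence is a model-derived score for that path.
    Returns the answer with the largest total confidence weight.
    """
    weights = defaultdict(float)
    for answer, confidence in samples:
        weights[answer] += confidence
    return max(weights, key=weights.get)

def self_consistency(answers):
    # Plain self-consistency is the special case of uniform weights.
    return cisc_vote([(a, 1.0) for a in answers])
```

With well-chosen confidence scores, a single high-confidence correct path can outvote several low-confidence incorrect ones, which is why CISC needs fewer samples than the uniform vote.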
|
2502.06235
|
Conditioning and AGM-like belief change in the Desirability-Indifference
framework
|
cs.AI math.PR quant-ph
|
We show how the AGM framework for belief change (expansion, revision,
contraction) can be extended to deal with conditioning in the so-called
Desirability-Indifference framework, based on abstract notions of accepting and
rejecting options, as well as on abstract notions of events. This level of
abstraction allows us to deal simultaneously with classical and quantum
probability theory.
|
2502.06238
|
XNet-Enhanced Deep BSDE Method and Numerical Analysis
|
cs.CE
|
Solving high-dimensional semilinear parabolic partial differential equations
(PDEs) challenges traditional numerical methods due to the "curse of
dimensionality." Deep learning, particularly through the Deep BSDE method,
offers a promising alternative by leveraging neural networks' capability to
approximate high-dimensional functions. This paper introduces a novel network
architecture, XNet, which significantly enhances the computational efficiency
and accuracy of the Deep BSDE method. XNet demonstrates superior approximation
capabilities with fewer parameters, addressing the trade-off between
approximation and optimization errors found in existing methods. We detail the
implementation of XNet within the Deep BSDE framework and present results that
show marked improvements in solving high-dimensional PDEs, potentially setting
a new standard for such computations.
|
2502.06239
|
Pre-Equalization Aided Grant-Free Massive Access in Massive MIMO System
|
eess.SP cs.IT math.IT
|
The spatial diversity and multiplexing advantages of massive
multi-input-multi-output (mMIMO) can significantly improve the capacity of
massive non-orthogonal multiple access (NOMA) in machine type communications.
However, state-of-the-art grant-free massive NOMA schemes for mMIMO systems
require accurate estimation of random access channels to perform activity
detection and the following coherent data demodulation, which suffers from
excessive pilot overhead and access latency. To address this, we propose a
pre-equalization aided grant-free massive access scheme for mMIMO systems,
where an iterative detection scheme is conceived. Specifically, the base
station (BS) first activates one of its antennas (i.e., the beacon antenna) to
broadcast a beacon signal, which enables the user equipment (UEs) to
perform downlink channel estimation and pre-equalize the uplink random access
signal with respect to the channels associated with the beacon antenna. During
the uplink transmission stage, the BS detects UEs' activity and data by using
the proposed iterative detection algorithm, which consists of three modules:
coarse data detection (DD), data-aided channel estimation (CE), and fine DD. In
the proposed algorithm, joint activity and data detection is first performed
based on the signals received by the beacon antenna. Subsequently, the DD is
further refined by iteratively performing the data-aided CE and fine DD modules using
signals received by all BS antennas. Our simulation results demonstrate that
the proposed scheme outperforms state-of-the-art mMIMO-based grant-free massive
NOMA schemes with the same access latency. Simulation codes are provided to
reproduce the results in this article: https://github.com/owenwang517/tvt-2025.
|