| id | title | categories | abstract |
|---|---|---|---|
2501.06478
|
Speech Recognition for Automatically Assessing Afrikaans and isiXhosa
Preschool Oral Narratives
|
eess.AS cs.CL cs.SD
|
We develop automatic speech recognition (ASR) systems for stories told by
Afrikaans and isiXhosa preschool children. Oral narratives provide a way to
assess children's language development before they learn to read. We consider a
range of prior child-speech ASR strategies to determine which is best suited to
this unique setting. Using Whisper and only 5 minutes of transcribed in-domain
child speech, we find that additional in-domain adult data (adult speech
matching the story domain) provides the biggest improvement, especially when
coupled with voice conversion. Semi-supervised learning also helps for both
languages, while parameter-efficient fine-tuning helps on Afrikaans but not on
isiXhosa (which is under-represented in the Whisper model). Few child-speech
studies look at non-English data, and even fewer at the preschool ages of 4 and
5. Our work therefore represents a unique validation of a wide range of
previous child-speech ASR strategies in an under-explored setting.
|
2501.06480
|
Flash Window Attention: speedup the attention computation for Swin
Transformer
|
cs.CV
|
To address the high resolution of image pixels, the Swin Transformer
introduces window attention. This mechanism divides an image into
non-overlapping windows and restricts attention computation to within each
window, significantly enhancing computational efficiency. To further optimize
this process, one might consider replacing standard attention with flash
attention, which has proven to be more efficient in language models. However, a
direct substitution is ineffective. Flash attention is designed for long
sequences, whereas window attention deals with shorter sequences but must
handle many of them in parallel. In this report, we present an optimized
solution called Flash Window Attention, tailored specifically for window
attention. Flash Window Attention improves attention computation efficiency by
up to 300% and enhances end-to-end runtime efficiency by up to 30%. Our code is
available online.
|
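The window mechanism described above is easy to sketch: partition the feature map into non-overlapping windows and run plain softmax attention inside each one. The NumPy snippet below illustrates window attention itself, not the paper's optimized Flash Window Attention kernel; the single-head, unprojected Q=K=V setup and the shapes are simplifying assumptions.

```python
import numpy as np

def window_partition(x, ws):
    """Split an (H, W, C) feature map into non-overlapping (ws*ws, C) windows."""
    H, W, C = x.shape
    x = x.reshape(H // ws, ws, W // ws, ws, C)
    return x.transpose(0, 2, 1, 3, 4).reshape(-1, ws * ws, C)

def window_attention(x, ws):
    """Plain softmax attention computed independently inside each window."""
    win = window_partition(x, ws)                  # (num_windows, ws*ws, C)
    scale = win.shape[-1] ** -0.5
    scores = win @ win.transpose(0, 2, 1) * scale  # per-window similarity
    scores -= scores.max(axis=-1, keepdims=True)   # numerical stability
    attn = np.exp(scores)
    attn /= attn.sum(axis=-1, keepdims=True)
    return attn @ win

out = window_attention(np.random.rand(8, 8, 16), ws=4)
print(out.shape)  # (4, 16, 16): four windows of 4*4 = 16 tokens each
```

Because each window's sequence is short (here 16 tokens), the efficiency question is how to batch many such small attentions, which is the gap the paper targets.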
2501.06481
|
Focus-N-Fix: Region-Aware Fine-Tuning for Text-to-Image Generation
|
cs.CV
|
Text-to-image (T2I) generation has made significant advances in recent years,
but challenges still remain in the generation of perceptual artifacts,
misalignment with complex prompts, and safety. The prevailing approach to
address these issues involves collecting human feedback on generated images,
training reward models to estimate human feedback, and then fine-tuning T2I
models based on the reward models to align them with human preferences.
However, while existing reward fine-tuning methods can produce images with
higher rewards, they may change model behavior in unexpected ways. For example,
fine-tuning for one quality aspect (e.g., safety) may degrade other aspects
(e.g., prompt alignment), or may lead to reward hacking (e.g., finding a way to
increase rewards without having the intended effect). In this paper, we propose
Focus-N-Fix, a region-aware fine-tuning method that trains models to correct
only previously problematic image regions. The resulting fine-tuned model
generates images with the same high-level structure as the original model but
shows significant improvements in regions where the original model was
deficient in safety (over-sexualization and violence), plausibility, or other
criteria. Our experiments demonstrate that Focus-N-Fix improves these localized
quality aspects with little or no degradation to others and typically
imperceptible changes in the rest of the image. Disclaimer: This paper contains
images that may be overly sexual, violent, offensive, or harmful.
|
2501.06485
|
A Diffusive Data Augmentation Framework for Reconstruction of Complex
Network Evolutionary History
|
cs.AI
|
The evolutionary processes of complex systems contain critical information
regarding their functional characteristics. The generation time of edges
provides insights into the historical evolution of various networked complex
systems, such as protein-protein interaction networks, ecosystems, and social
networks. Recovering these evolutionary processes holds significant scientific
value, including aiding in the interpretation of the evolution of
protein-protein interaction networks. However, existing methods are capable of
predicting the generation times of remaining edges given a partial temporal
network but often perform poorly in cross-network prediction tasks. These
methods frequently fail in edge generation time recovery tasks for static
networks that lack timestamps. In this work, we adopt a comparative
paradigm-based framework that fuses multiple networks for training, enabling
cross-network learning of the relationship between network structure and edge
generation times. Compared to separate training, this approach yields an
average accuracy improvement of 16.98%. Furthermore, given the difficulty in
collecting temporal networks, we propose a novel diffusion-model-based
generation method to produce a large number of temporal networks. By combining
real temporal networks with generated ones for training, we achieve an
additional average accuracy improvement of 5.46% through joint training.
|
2501.06488
|
NVS-SQA: Exploring Self-Supervised Quality Representation Learning for
Neurally Synthesized Scenes without References
|
cs.CV cs.AI cs.HC cs.MM eess.IV
|
Neural View Synthesis (NVS), such as NeRF and 3D Gaussian Splatting,
effectively creates photorealistic scenes from sparse viewpoints, typically
evaluated by quality assessment methods like PSNR, SSIM, and LPIPS. However,
these full-reference methods, which compare synthesized views to reference
views, may not fully capture the perceptual quality of neurally synthesized
scenes (NSS), particularly due to the limited availability of dense reference
views. Furthermore, the challenges in acquiring human perceptual labels hinder
the creation of extensive labeled datasets, risking model overfitting and
reduced generalizability. To address these issues, we propose NVS-SQA, an NSS
quality assessment method to learn no-reference quality representations through
self-supervision without reliance on human labels. Traditional self-supervised
learning predominantly relies on the "same instance, similar representation"
assumption and extensive datasets. However, given that these conditions do not
apply in NSS quality assessment, we employ heuristic cues and quality scores as
learning objectives, along with a specialized contrastive pair preparation
process to improve the effectiveness and efficiency of learning. The results
show that NVS-SQA outperforms 17 no-reference methods by a large margin (i.e.,
on average 109.5% in SRCC, 98.6% in PLCC, and 91.5% in KRCC over the second
best) and even exceeds 16 full-reference methods across all evaluation metrics
(i.e., 22.9% in SRCC, 19.1% in PLCC, and 18.6% in KRCC over the second best).
|
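For context, the full-reference scores mentioned above are simple to compute when reference views exist; PSNR, for instance, is just a log-scaled mean squared error. A minimal sketch (the `max_val` normalization is an assumption about the image range):

```python
import numpy as np

def psnr(ref, test, max_val=1.0):
    """Peak signal-to-noise ratio in dB: the kind of full-reference metric
    that requires the dense reference views NVS-SQA does without."""
    mse = np.mean((ref - test) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(max_val ** 2 / mse)

clean = np.zeros((8, 8))
noisy = clean + 0.1          # uniform error of 0.1 -> MSE = 0.01
print(psnr(clean, noisy))    # 20.0 dB
```

The dependence on a pixel-aligned reference is exactly why such metrics break down for sparse-view neural synthesis.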
2501.06490
|
Sequential Classification of Aviation Safety Occurrences with Natural
Language Processing
|
cs.CL cs.LG
|
Safety is a critical aspect of the air transport system, given that even slight
operational anomalies can result in serious consequences. To reduce the chances
of aviation safety occurrences, accidents and incidents are reported to
establish the root cause and propose safety recommendations. However, the
analysis narratives of pre-accident events are presented as raw, unstructured
text that is readily understood by humans but not by computer systems. The ability
to classify and categorise safety occurrences from their textual narratives
would help aviation industry stakeholders make informed safety-critical
decisions. To classify and categorise safety occurrences, we applied natural
language processing (NLP) and artificial intelligence (AI) models to the text
narratives. The study aimed to answer the question: how well can the damage
level caused to the aircraft in a safety occurrence be inferred from the text
narrative using natural language processing? The classification performance of
various deep learning models, including LSTM, BLSTM, GRU, and sRNN, as well as
their combinations (LSTM+GRU, BLSTM+GRU, sRNN+LSTM, sRNN+BLSTM, sRNN+GRU,
sRNN+BLSTM+GRU, and sRNN+LSTM+GRU), was evaluated on a set of 27,000 safety
occurrence reports from the NTSB. The results indicate that all models
investigated performed competitively, each recording an accuracy above 87.9%,
well above the 25% random-guess baseline for this four-class classification
problem. The models also recorded high precision, recall, and F1 scores, above
80%, 88%, and 85%, respectively. sRNN slightly outperformed the other single
models in recall (90%) and accuracy (90%), while LSTM reported slightly better
precision (87%).
|
2501.06491
|
Improving Requirements Classification with SMOTE-Tomek Preprocessing
|
cs.SE cs.AI cs.SY eess.SY
|
This study targets the domain of requirements engineering, applying the
SMOTE-Tomek preprocessing technique combined with stratified K-fold
cross-validation to address class imbalance in the PROMISE dataset. This
dataset comprises 969 categorized requirements, classified into functional and
non-functional types. The proposed approach enhances the representation of
minority classes while maintaining the integrity of validation folds, leading
to a notable improvement in classification accuracy. Logistic regression
achieved 76.16%, significantly surpassing the baseline of 58.31%. These
results highlight the applicability and efficiency of machine learning models
as scalable and interpretable solutions.
|
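As a rough illustration of the oversampling half of SMOTE-Tomek: the SMOTE step synthesizes minority-class points by interpolating between nearest minority neighbors. This is a minimal sketch, not the implementation the study uses (typically imbalanced-learn's `SMOTETomek`); `k`, the Euclidean distance, and the seed are illustrative choices.

```python
import numpy as np

def smote_sample(X_min, n_new, k=3, seed=0):
    """Minimal SMOTE sketch: create n_new synthetic minority points, each an
    interpolation between a random minority point and one of its k nearest
    minority-class neighbors."""
    rng = np.random.default_rng(seed)
    synthetic = []
    for _ in range(n_new):
        i = rng.integers(len(X_min))
        dists = np.linalg.norm(X_min - X_min[i], axis=1)
        neighbors = np.argsort(dists)[1:k + 1]   # skip the point itself
        j = rng.choice(neighbors)
        lam = rng.random()                       # interpolation factor in [0, 1)
        synthetic.append(X_min[i] + lam * (X_min[j] - X_min[i]))
    return np.array(synthetic)

X_minority = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
print(smote_sample(X_minority, 5).shape)  # (5, 2)
```

The Tomek-link half then removes borderline majority/minority pairs, cleaning the class boundary after oversampling.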
2501.06492
|
A New Flexible Train-Test Split Algorithm, an approach for choosing
among the Hold-out, K-fold cross-validation, and Hold-out iteration
|
cs.LG
|
Artificial intelligence has transformed industries such as engineering,
medicine, and finance. Predictive models rely on supervised learning, a vital
subset of machine learning. Cross-validation, including re-substitution,
hold-out, and K-fold schemes, is crucial for model evaluation. This study focuses on improving the
accuracy of ML algorithms across three different datasets. To evaluate
Hold-out, Hold-out with iteration, and K-fold Cross-Validation techniques, we
created a flexible Python program. By modifying parameters like test size,
Random State, and 'k' values, we were able to improve accuracy assessment. The
outcomes demonstrate the Hold-out validation method's persistent superiority,
particularly with a test size of 10%. Hold-out with iteration shows little
accuracy variance across iteration counts and Random State settings.
Performance also varies by algorithm: Decision Tree performs best on the
Framingham dataset, while Naive Bayes and K-Nearest Neighbors perform best on
COVID-19. Different
datasets require different optimal K values in K-Fold Cross Validation,
highlighting these considerations. This study challenges the universality of K
values in K-Fold Cross Validation and suggests a 10% test size and 90% training
size for better outcomes. It also emphasizes the contextual impact of dataset
features, sample size, feature count, and selected methodologies. Researchers
can adapt the provided code to their own datasets to obtain the highest
accuracy under their specific evaluation setup.
|
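The hold-out and K-fold schemes compared above can be sketched in plain Python. This is a simplified stand-in for the study's program; parameter names like `test_size` and `seed` are illustrative.

```python
import random

def hold_out(data, test_size=0.1, seed=0):
    """Single shuffled train/test split (the study's favored 90/10 hold-out)."""
    rng = random.Random(seed)
    idx = list(range(len(data)))
    rng.shuffle(idx)
    cut = int(len(data) * (1 - test_size))
    return [data[i] for i in idx[:cut]], [data[i] for i in idx[cut:]]

def k_fold(data, k=5):
    """Yield (train, test) pairs; each item lands in exactly one test fold."""
    folds = [data[i::k] for i in range(k)]
    for i in range(k):
        train = [x for j, fold in enumerate(folds) if j != i for x in fold]
        yield train, folds[i]

train, test = hold_out(list(range(100)), test_size=0.1)
print(len(train), len(test))  # 90 10
```

"Hold-out with iteration" simply repeats `hold_out` with different seeds and averages the resulting accuracies.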
2501.06493
|
Whole-Body Integrated Motion Planning for Aerial Manipulators
|
cs.RO
|
Efficient motion planning for Aerial Manipulators (AMs) is essential for
tackling complex manipulation tasks, yet achieving coupled trajectory planning
remains challenging. In this work, we propose, to the best of our knowledge,
the first whole-body integrated motion planning framework for aerial
manipulators, which is facilitated by an improved Safe Flight Corridor (SFC)
generation strategy and high-dimensional collision-free trajectory planning. In
particular, we formulate an optimization problem to generate feasible
trajectories for both the quadrotor and manipulator while ensuring collision
avoidance, dynamic feasibility, kinematic feasibility, and waypoint
constraints. To achieve collision avoidance, we introduce a variable geometry
approximation method, which dynamically models the changing collision volume
induced by different manipulator configurations. Moreover, waypoint constraints
in our framework are defined in $\mathrm{SE(3)\times\mathbb{R}^3}$, allowing
the aerial manipulator to traverse specified positions while maintaining
desired attitudes and end-effector states. The effectiveness of our framework
is validated through comprehensive simulations and real-world experiments
across various environments.
|
2501.06494
|
TopoFormer: Integrating Transformers and ConvLSTMs for Coastal
Topography Prediction
|
eess.SP cs.AI
|
This paper presents \textit{TopoFormer}, a novel hybrid deep learning
architecture that integrates transformer-based encoders with convolutional long
short-term memory (ConvLSTM) layers for the precise prediction of topographic
beach profiles referenced to elevation datums, with a particular focus on Mean
Low Water Springs (MLWS) and Mean Low Water Neaps (MLWN). Accurate topographic
estimation down to MLWS is critical for coastal management, navigation safety,
and environmental monitoring. Leveraging a comprehensive dataset from the Wales
Coastal Monitoring Centre (WCMC), consisting of over 2000 surveys across 36
coastal survey units, TopoFormer addresses key challenges in topographic
prediction, including temporal variability and data gaps in survey
measurements. The architecture uniquely combines multi-head attention
mechanisms and ConvLSTM layers to capture both long-range dependencies and
localized temporal patterns inherent in beach profile data. TopoFormer's
predictive performance was rigorously evaluated against state-of-the-art
models, including DenseNet, 1D/2D CNNs, and LSTMs. While all models
demonstrated strong performance, \textit{TopoFormer} achieved the lowest mean
absolute error (MAE), as low as 2 cm, and provided superior accuracy in both
in-distribution (ID) and out-of-distribution (OOD) evaluations.
|
2501.06496
|
Analyzing the Role of Context in Forecasting with Large Language Models
|
cs.CL cs.IR
|
This study evaluates the forecasting performance of recent large language
models (LLMs) on binary forecasting questions. We first introduce a novel dataset of
over 600 binary forecasting questions, augmented with related news articles and
their concise question-related summaries. We then explore the impact of input
prompts with varying levels of context on forecasting performance. The results
indicate that incorporating news articles significantly improves performance,
while using few-shot examples leads to a decline in accuracy. We find that
larger models consistently outperform smaller models, highlighting the
potential of LLMs in enhancing automated forecasting.
|
2501.06497
|
PASS: Presentation Automation for Slide Generation and Speech
|
cs.CL cs.AI
|
In today's fast-paced world, effective presentations have become an essential
tool for communication in both online and offline meetings. The crafting of a
compelling presentation requires significant time and effort, from gathering
key insights to designing slides that convey information clearly and concisely.
However, despite the wealth of resources available, people often find
themselves manually extracting crucial points, analyzing data, and organizing
content in a way that ensures clarity and impact. Furthermore, a successful
presentation goes beyond just the slides; it demands rehearsal and the ability
to weave a captivating narrative to fully engage the audience. Although there
has been some exploration of automating document-to-slide generation, existing
research is largely centered on converting research papers. In addition,
automation of the delivery of these presentations has yet to be addressed. We
introduce PASS, a pipeline that generates slides from general Word documents,
not just research papers, and that also automates the oral delivery of the
generated slides. PASS analyzes user documents to create a dynamic,
engaging presentation with an AI-generated voice. Additionally, we developed an
LLM-based evaluation metric to assess our pipeline across three critical
dimensions of presentations: relevance, coherence, and redundancy. The data and
codes are available at https://github.com/AggarwalTushar/PASS.
|
2501.06505
|
Online Algorithm for Aggregating Experts' Predictions with Unbounded
Quadratic Loss
|
cs.LG
|
We consider the problem of online aggregation of expert predictions with the
quadratic loss function. We propose an algorithm for aggregating expert
predictions which does not require a prior knowledge of the upper bound on the
losses. The algorithm is based on the exponential reweighing of expert losses.
|
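The exponential reweighing mentioned above can be illustrated with a fixed learning rate `eta`; note that the paper's contribution is precisely an algorithm that does not need the loss upper bound a fixed rate implicitly assumes, so this sketch shows the classical scheme, not the proposed one.

```python
import numpy as np

def aggregate(expert_preds, outcomes, eta=0.5):
    """Aggregate expert predictions by exponential reweighing of cumulative
    quadratic loss (fixed-learning-rate exponential weights sketch)."""
    weights = np.ones(expert_preds.shape[1])
    master = []
    for preds, y in zip(expert_preds, outcomes):
        p = weights / weights.sum()
        master.append(p @ preds)                     # weighted-mean prediction
        weights *= np.exp(-eta * (preds - y) ** 2)   # penalize quadratic loss
    return np.array(master)

# Two constant experts; the truth is always 1, so weight shifts to expert 1.
preds = np.tile([0.0, 1.0], (20, 1))
m = aggregate(preds, np.ones(20), eta=1.0)
print(round(m[0], 2), round(m[-1], 2))  # 0.5 1.0
```

With bounded losses a safe `eta` can be set from the bound; when losses are unbounded that choice is unavailable, motivating the adaptive algorithm.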
2501.06506
|
Resource Allocation under the Latin Square Constraint
|
cs.GT cs.AI cs.MA
|
A Latin square is an $n \times n$ matrix filled with $n$ distinct symbols,
each of which appears exactly once in each row and exactly once in each column.
We introduce a problem of allocating $n$ indivisible items among $n$ agents
over $n$ rounds while satisfying the Latin square constraint. This constraint
ensures that each agent receives no more than one item per round and receives
each item at most once. Each agent has an additive valuation on the item--round
pairs. Real-world applications like scheduling, resource management, and
experimental design require the Latin square constraint to satisfy fairness or
balancedness in allocation. Our goal is to find a partial or complete
allocation that maximizes the sum of the agents' valuations (utilitarian social
welfare) or the minimum of the agents' valuations (egalitarian social welfare).
For the problem of maximizing utilitarian social welfare, we prove NP-hardness
even when the valuations are binary additive. We then provide $(1-1/e)$ and
$(1-1/e)/4$-approximation algorithms for partial and complete settings,
respectively. Additionally, we present fixed-parameter tractable (FPT)
algorithms with respect to the order of Latin square and the optimum value for
both partial and complete settings. For the problem of maximizing egalitarian
social welfare, we establish that deciding whether the optimum value is at most
$1$ or at least $2$ is NP-hard for both the partial and complete settings, even
when the valuations are binary. Furthermore, we demonstrate that checking the
existence of a complete allocation that satisfies each of envy-free,
proportional, equitable, envy-free up to any good, proportional up to any good,
or equitable up to any good is NP-hard, even when the valuations are identical.
|
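The Latin square constraint on a complete allocation is easy to check directly. The cyclic construction below is a standard way to produce one and is illustrative only, not the paper's approximation algorithm.

```python
def is_latin_square(m):
    """Check the Latin square constraint on an allocation m[agent][round] = item:
    each item appears exactly once per agent (row) and once per round (column)."""
    n = len(m)
    symbols = set(range(n))
    rows_ok = all(set(row) == symbols for row in m)
    cols_ok = all({m[r][c] for r in range(n)} == symbols for c in range(n))
    return rows_ok and cols_ok

# Cyclic construction: agent a receives item (a + t) % n in round t.
n = 4
alloc = [[(a + t) % n for t in range(n)] for a in range(n)]
print(is_latin_square(alloc))  # True
```

The optimization problems in the paper then search over such (partial) squares for the welfare-maximizing one, which is where the NP-hardness arises.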
2501.06510
|
Cooperative Optimal Output Tracking for Discrete-Time Multiagent
Systems: Stabilizing Policy Iteration Frameworks and Analysis
|
eess.SY cs.SY
|
In this paper, two model-free optimal output tracking frameworks based on
policy iteration for discrete-time multi-agent systems are proposed. First, we
establish a framework of stabilizing policy iteration that can start from any
initial feedback control policy, relaxing the dependence of traditional policy
iteration on the initial stabilizing control policy. Then, another efficient
and equivalent $Q$-learning policy iteration framework is developed, which is
shown to require less system data to obtain the same results as the
stabilizing policy iteration. Both frameworks obtain stabilizing control policy
by iterating the stabilizing virtual closed-loop system step-by-step to the
actual closed-loop system. Multiple explicit schemes for the iteration
step-size/coefficient are designed and their stability during the above
iterations is analyzed. By using the generated closed-loop stabilizing control
policy and two frameworks, the optimal feedback control gain is obtained. The
approximate solution of the regulator equations is found by model-free
iteration, which leads to the optimal feedforward gain. Finally, the
cooperative optimal output tracking is realized by a distributed
feedforward-feedback controller. The proposed algorithms are validated by
simulation.
|
2501.06514
|
Neural Codec Source Tracing: Toward Comprehensive Attribution in
Open-Set Condition
|
cs.SD cs.AI eess.AS
|
Current research in audio deepfake detection is gradually transitioning from
binary classification to multi-class tasks, referred to as audio deepfake
source tracing. However, existing studies on source tracing consider only
closed-set scenarios and overlook the challenges posed by open-set
conditions. In this paper, we define the Neural Codec Source Tracing (NCST)
task, which is capable of performing open-set neural codec classification and
interpretable ALM detection. Specifically, we constructed the ST-Codecfake
dataset for the NCST task, which includes bilingual audio samples generated by
11 state-of-the-art neural codec methods and ALM-based out-of-distribution (OOD)
test samples. Furthermore, we establish a comprehensive source tracing
benchmark to assess NCST models in open-set conditions. The experimental
results reveal that although the NCST models perform well in in-distribution
(ID) classification and OOD detection, they lack robustness in classifying
unseen real audio. The ST-Codecfake dataset and code are available.
|
2501.06521
|
Fine-tuning Large Language Models for Improving Factuality in Legal
Question Answering
|
cs.CL
|
Hallucination, or the generation of incorrect or fabricated information,
remains a critical challenge in large language models (LLMs), particularly in
high-stakes domains such as legal question answering (QA). To mitigate
the hallucination rate in legal QA, we first introduce a benchmark called
LegalHalBench and three automatic metrics to evaluate the common hallucinations
when LLMs answer legal questions. We then propose a hallucination mitigation
method that integrates behavior cloning and a novel Hard Sample-aware Iterative
Direct Preference Optimization (HIPO). We conduct extensive real-data
experiments to validate the effectiveness of our approach. Our results
demonstrate remarkable improvements in various metrics, including the newly
proposed Non-Hallucinated Statute Rate, Statute Relevance Rate, Legal Claim
Truthfulness, as well as traditional metrics such as METEOR, BERTScore,
ROUGE-L, and win rates.
|
2501.06524
|
Multi-View Factorizing and Disentangling: A Novel Framework for
Incomplete Multi-View Multi-Label Classification
|
cs.CV
|
Multi-view multi-label classification (MvMLC) has recently garnered
significant research attention due to its wide range of real-world
applications. However, incompleteness in views and labels is a common
challenge, often resulting from data collection oversights and uncertainties in
manual annotation. Furthermore, the task of learning robust multi-view
representations that are both view-consistent and view-specific from diverse
views remains a challenging problem in MvMLC. To address these issues, we propose a
novel framework for incomplete multi-view multi-label classification (iMvMLC).
Our method factorizes multi-view representations into two independent sets of
factors: view-consistent and view-specific, and we correspondingly design a
graph disentangling loss to fully reduce redundancy between these
representations. Additionally, our framework innovatively decomposes consistent
representation learning into three key sub-objectives: (i) how to extract
view-shared information across different views, (ii) how to eliminate
intra-view redundancy in consistent representations, and (iii) how to preserve
task-relevant information. To this end, we design a robust task-relevant
consistency learning module that collaboratively learns high-quality consistent
representations, leveraging a masked cross-view prediction (MCP) strategy and
information theory. Notably, all modules in our framework are developed to
function effectively under conditions of incomplete views and labels, making
our method adaptable to various multi-view and multi-label datasets. Extensive
experiments on five datasets demonstrate that our method outperforms other
leading approaches.
|
2501.06527
|
Scaffolding Creativity: Integrating Generative AI Tools and Real-world
Experiences in Business Education
|
cs.AI cs.HC
|
This case study explores the integration of Generative AI tools and
real-world experiences in business education. Through a study of an innovative
undergraduate course, we investigate how AI-assisted learning, combined with
experiential components, impacts students' creative processes and learning
outcomes. Our findings reveal that this integrated approach accelerates
knowledge acquisition, enables students to overcome traditional creative
barriers, and facilitates a dynamic interplay between AI-generated insights and
real-world observations. The study also highlights challenges, including the
need for instructors with high AI literacy and the rapid evolution of AI tools
creating a moving target for curriculum design. These insights contribute to
the growing body of literature on AI in education and provide actionable
recommendations for educators preparing students for the complexities of modern
business environments.
|
2501.06528
|
Safe Circumnavigation of a Hostile Target Using Range-Based Measurements
|
cs.RO cs.SY eess.SY
|
Robotic systems are frequently deployed in missions that are dull, dirty, and
dangerous, where ensuring their safety is of paramount importance when
designing stabilizing controllers to achieve their desired goals. This paper
addresses the problem of safe circumnavigation around a hostile target by a
nonholonomic robot, with the objective of maintaining a desired safe distance
from the target. Our solution approach involves incorporating an auxiliary
circle into the problem formulation, which assists in navigating the robot
around the target using available range-based measurements. By leveraging the
concept of a barrier Lyapunov function, we propose a novel control law that
ensures stable circumnavigation around the target while preventing the robot
from entering the safety circle. This controller is designed based on a
parameter that depends on the radii of three circles, namely the stabilizing
circle, the auxiliary circle, and the safety circle. By identifying an
appropriate range for this design parameter, we rigorously prove the stability
of the desired equilibrium of the closed-loop system. Additionally, we provide
an analysis of the robot's motion within the auxiliary circle, which is
influenced by a gain parameter in the proposed controller. Simulation and
experimental results are presented to illustrate the key theoretical
developments.
|
2501.06532
|
Determination of galaxy photometric redshifts using Conditional
Generative Adversarial Networks (CGANs)
|
astro-ph.IM astro-ph.CO cs.AI
|
Accurate and reliable photometric redshift determination is one of the key
aspects of wide-field photometric surveys. Photometric redshift determination
for galaxies has traditionally been solved by machine-learning and artificial
intelligence techniques trained on a calibration sample of galaxies for which
both photometry and spectrometry are available. In this paper, we present a new
algorithmic approach for determining photometric redshifts of galaxies using
Conditional Generative Adversarial Networks (CGANs). The proposed CGAN
implementation approaches photometric redshift determination as a probabilistic
regression: instead of determining a single value for the estimated redshift of
a galaxy, a full probability density is computed. The proposed methodology is
tested on Dark Energy Survey (DES) Y1 data and compared with existing
algorithms such as a Random Forest regressor.
|
2501.06533
|
DivTrackee versus DynTracker: Promoting Diversity in Anti-Facial
Recognition against Dynamic FR Strategy
|
cs.CV cs.CR
|
The widespread adoption of facial recognition (FR) models raises serious
concerns about their potential misuse, motivating the development of
anti-facial recognition (AFR) to protect user facial privacy. In this paper, we
argue that the static FR strategy, predominantly adopted in prior literature
for evaluating AFR efficacy, cannot faithfully characterize the actual
capabilities of determined trackers who aim to track a specific target
identity. In particular, we introduce \emph{DynTracker}, a dynamic FR strategy
where the model's gallery database is iteratively updated with newly recognized
target identity images. Surprisingly, such a simple approach renders all the
existing AFR protections ineffective. To mitigate the privacy threats posed by
DynTracker, we advocate for explicitly promoting diversity in the AFR-protected
images. We hypothesize that the lack of diversity is the primary cause of the
failure of existing AFR methods. Specifically, we develop \emph{DivTrackee}, a
novel method for crafting diverse AFR protections that builds upon a
text-guided image generation framework and diversity-promoting adversarial
losses. Through comprehensive experiments on various facial image benchmarks
and feature extractors, we demonstrate DynTracker's strength in breaking
existing AFR methods and the superiority of DivTrackee in preventing user
facial images from being identified by dynamic FR strategies. We believe our
work can act as an important initial step towards developing more effective AFR
methods for protecting user facial privacy against determined trackers.
|
2501.06534
|
Dynamic Causal Structure Discovery and Causal Effect Estimation
|
stat.ML cs.LG
|
To represent the causal relationships between variables, a directed acyclic
graph (DAG) is widely utilized in many areas, such as social sciences,
epidemics, and genetics. Many causal structure learning approaches are
developed to learn the hidden causal structure utilizing deep-learning
approaches. However, these approaches have a hidden assumption that the causal
relationship remains unchanged over time, which may not hold in real life. In
this paper, we develop a new framework to model the dynamic causal graph where
the causal relations are allowed to be time-varying. We incorporate the basis
approximation method into the score-based causal discovery approach to capture
the dynamic pattern of the causal graphs. Utilizing the autoregressive model
structure, we could capture both contemporaneous and time-lagged causal
relationships while allowing them to vary with time. We propose an algorithm
that could provide both past-time estimates and future-time predictions on the
causal graphs, and conduct simulations to demonstrate the usefulness of the
proposed method. We also apply the proposed method to COVID-19 data analysis,
providing causal estimates of how the effects of policy restrictions change over time.
|
2501.06536
|
Dispersion Measures as Predictors of Lexical Decision Time, Word
Familiarity, and Lexical Complexity
|
cs.CL
|
Various measures of dispersion have been proposed to paint a fuller picture
of a word's distribution in a corpus, but little has been done to validate
them externally. We evaluate a wide range of dispersion measures as predictors
of lexical decision time, word familiarity, and lexical complexity in five
diverse languages. We find that the logarithm of range is not only a better
predictor than log-frequency across all tasks and languages, but that it is
also the most powerful additional variable to log-frequency, consistently
outperforming the more complex dispersion measures. We discuss the effects of
corpus part granularity and logarithmic transformation, shedding light on
contradictory results of previous studies.
|
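The range measure evaluated above is straightforward to compute: count the corpus parts in which a word occurs, then take the logarithm. A minimal sketch; real corpora would use tokenized frequency lists per part rather than the toy sets shown here.

```python
import math

def log_range(word, corpus_parts):
    """Range dispersion: the number of corpus parts containing the word,
    log-transformed as in the evaluation above."""
    r = sum(word in part for part in corpus_parts)
    return math.log(r) if r > 0 else float("-inf")

parts = [{"the", "cat"}, {"the", "dog"}, {"the"}]
print(log_range("the", parts))  # log(3), about 1.0986
```

Unlike frequency, range ignores how often a word occurs within a part, which is exactly what makes it a dispersion rather than a frequency measure.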
2501.06540
|
CeViT: Copula-Enhanced Vision Transformer in multi-task learning and
bi-group image covariates with an application to myopia screening
|
cs.CV math.ST stat.AP stat.ME stat.TH
|
We aim to assist image-based myopia screening by resolving two longstanding
problems, "how to integrate the information of ocular images of a pair of eyes"
and "how to incorporate the inherent dependence among high-myopia status and
axial length for both eyes." The classification-regression task is modeled as a
novel 4-dimensional multi-response regression, where discrete responses are
allowed, that relates to two dependent 3rd-order tensors (3D ultrawide-field
fundus images). We present a Vision Transformer-based bi-channel architecture,
named CeViT, where the common features of a pair of eyes are extracted via a
shared Transformer encoder, and the interocular asymmetries are modeled through
separated multilayer perceptron heads. Statistically, we model the conditional
dependence among mixture of discrete-continuous responses given the image
covariates by a so-called copula loss. We establish a new theoretical framework
regarding fine-tuning on CeViT based on latent representations, making the
black-box fine-tuning procedure interpretable and guaranteeing higher relative
efficiency of fine-tuning weight estimation in the asymptotic setting. We apply
CeViT to an annotated ultrawide-field fundus image dataset collected by
Shanghai Eye & ENT Hospital, demonstrating that CeViT improves on the baseline
model in both high-myopia classification accuracy and axial length (AL)
prediction for both eyes.
|
2501.06545
|
Energy-Aware Resource Allocation for Energy Harvesting Powered Wireless
Sensor Nodes
|
cs.IT eess.SP math.IT
|
Low harvested energy poses a significant challenge to sustaining continuous
communication in energy harvesting (EH)-powered wireless sensor networks. This
is mainly due to intermittent and limited power availability from radio
frequency signals. In this paper, we introduce a novel energy-aware resource
allocation problem aimed at enabling the asynchronous accumulate-then-transmit
protocol, offering an alternative to the extensively studied
harvest-then-transmit approach. Specifically, we jointly optimize power
allocation and time fraction dedicated to EH to maximize the average long-term
system throughput, accounting for both data and energy queue lengths. By
leveraging inner approximation and network utility maximization techniques, we
develop a simple yet efficient iterative algorithm that guarantees at least a
local optimum and achieves long-term utility improvement. Numerical results
highlight the proposed approach's effectiveness in terms of both queue length
and sustained system throughput.
|
2501.06546
|
Natural Language Supervision for Low-light Image Enhancement
|
cs.CV cs.AI
|
With the development of deep learning, numerous methods for low-light image
enhancement (LLIE) have demonstrated remarkable performance. Mainstream LLIE
methods typically learn an end-to-end mapping based on pairs of low-light and
normal-light images. However, normal-light images under varying illumination
conditions serve as reference images, making it difficult to define a
``perfect'' reference image. This leads to the challenge of reconciling
metric-oriented and visually friendly results. Recently, many cross-modal studies
have found that side information from other related modalities can guide visual
representation learning. Based on this, we introduce a Natural Language
Supervision (NLS) strategy, which learns feature maps from text corresponding
to images, offering a general and flexible interface for describing an image
under different illumination.
However, image distributions conditioned on textual descriptions are highly
multimodal, which makes training difficult. To address this issue, we design a
Textual Guidance Conditioning Mechanism (TCM) that incorporates the connections
between image regions and sentence words, enhancing the ability to capture
fine-grained cross-modal cues for images and text. This strategy not only
utilizes a wider range of supervised sources, but also provides a new paradigm
for LLIE based on visual and textual feature alignment. In order to effectively
identify and merge features from various levels of image and textual
information, we design an Information Fusion Attention (IFA) module to enhance
different regions at different levels. We integrate the proposed TCM and IFA
into a Natural Language Supervision network for LLIE, named NaLSuper. Finally,
extensive experiments demonstrate the robustness and superior effectiveness of
our proposed NaLSuper.
|
2501.06550
|
CoreNet: Conflict Resolution Network for Point-Pixel Misalignment and
Sub-Task Suppression of 3D LiDAR-Camera Object Detection
|
cs.CV
|
Fusing multi-modality inputs from different sensors is an effective way to
improve the performance of 3D object detection. However, current methods
overlook two important conflicts: point-pixel misalignment and sub-task
suppression. The former means a pixel feature from the opaque object is
projected to multiple point features of the same ray in the world space, and
the latter means the classification prediction and bounding box regression may
cause mutual suppression. In this paper, we propose a novel method named
Conflict Resolution Network (CoreNet) to address the aforementioned issues.
Specifically, we first propose a dual-stream transformation module to tackle
point-pixel misalignment. It consists of ray-based and point-based 2D-to-BEV
transformations. Both of them achieve approximately unique mapping from the
image space to the world space. Moreover, we introduce a task-specific
predictor to tackle sub-task suppression. It uses a dual-branch structure
that assigns a class-specific query and a Bbox-specific query to the
corresponding sub-tasks. Each task-specific query is constructed from a
task-specific feature and a general feature, which allows the heads to
adaptively select information of
interest based on different sub-tasks. Experiments on the large-scale nuScenes
dataset demonstrate the superiority of our proposed CoreNet, by achieving
75.6% NDS and 73.3% mAP on the nuScenes test set without test-time
augmentation and model ensemble techniques. The ample ablation study also
demonstrates the effectiveness of each component. The code is released on
https://github.com/liyih/CoreNet.
|
2501.06552
|
When xURLLC Meets NOMA: A Stochastic Network Calculus Perspective
|
eess.SP cs.IT cs.SY eess.SY math.IT
|
The advent of next-generation ultra-reliable and low-latency communications
(xURLLC) presents stringent and unprecedented requirements for key performance
indicators (KPIs). As a disruptive technology, non-orthogonal multiple access
(NOMA) harbors the potential to fulfill these stringent KPIs essential for
xURLLC. However, the immaturity of research on the tail distributions of these
KPIs significantly impedes the application of NOMA to xURLLC. Stochastic
network calculus (SNC), as a potent methodology, is leveraged to provide
dependable theoretical insights into tail distribution analysis and statistical
QoS provisioning (SQP). In this article, we develop a NOMA-assisted uplink
xURLLC network architecture that incorporates an SNC-based SQP theoretical
framework (SNC-SQP) to support tail distribution analysis in terms of delay,
age-of-information (AoI), and reliability. Based on SNC-SQP, an SQP-driven
power optimization problem is proposed to minimize transmit power while
guaranteeing xURLLC's KPIs on delay, AoI, reliability, and power consumption.
Extensive simulations validate our proposed theoretical framework and
demonstrate that the proposed power allocation scheme significantly reduces
uplink transmit power and outperforms conventional schemes in terms of SQP
performance.
|
2501.06553
|
VASparse: Towards Efficient Visual Hallucination Mitigation for Large
Vision-Language Model via Visual-Aware Sparsification
|
cs.CV
|
Large Vision-Language Models (LVLMs) may produce outputs that are unfaithful
to reality, also known as visual hallucinations (VH), which significantly
impedes their real-world usage. To alleviate VH, various decoding strategies
have been proposed to enhance visual information. However, many of these
methods may require secondary decoding and rollback, which significantly
reduces inference speed. In this work, we propose an efficient plug-and-play
decoding algorithm via Visual-Aware Sparsification (VASparse) from the
perspective of token sparsity for mitigating VH. VASparse is inspired by
empirical observations: (1) the sparse activation of attention in LVLMs, and
(2) visual-agnostic token sparsification exacerbates VH. Based on these
insights, we propose a novel token sparsification strategy that balances
efficiency and trustworthiness. Specifically, VASparse implements a
visual-aware token selection strategy during decoding to reduce redundant
tokens while preserving visual context effectively. Additionally, we
innovatively introduce a sparse-based visual contrastive decoding method to
recalibrate the distribution of hallucinated outputs without the time overhead
associated with secondary decoding. Subsequently, VASparse recalibrates
attention scores to penalize attention sinking of LVLMs towards text tokens.
Extensive experiments across four popular benchmarks confirm the effectiveness
of VASparse in mitigating VH across different LVLM families without requiring
additional training or post-processing. Impressively, VASparse achieves
state-of-the-art performance for mitigating VH while maintaining competitive
decoding speed. Code is available at
https://github.com/mengchuang123/VASparse-github.
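The visual-aware selection idea can be caricatured in a few lines (the rule below is a stand-in, not the paper's actual criterion): rank tokens by attention score, but guarantee that a minimum quota of visual tokens survives sparsification, so visual context is never dropped entirely.

```python
def visual_aware_select(scores, is_visual, keep, min_visual):
    """Keep the `keep` highest-scoring tokens, but swap the lowest-scoring
    text tokens for visual tokens until at least `min_visual` survive."""
    order = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    kept = order[:keep]
    need = min_visual - sum(is_visual[i] for i in kept)
    pool = [i for i in order[keep:] if is_visual[i]]  # best dropped visual tokens
    while need > 0 and pool:
        text_kept = [i for i in kept if not is_visual[i]]
        if not text_kept:
            break
        kept.remove(min(text_kept, key=lambda i: scores[i]))
        kept.append(pool.pop(0))
        need -= 1
    return sorted(kept)

scores = [0.9, 0.8, 0.7, 0.2, 0.1, 0.05]
is_visual = [False, False, False, True, True, True]
print(visual_aware_select(scores, is_visual, keep=3, min_visual=2))  # [0, 3, 4]
```

A purely score-based (visual-agnostic) rule would keep `[0, 1, 2]` here and drop all visual tokens, which is exactly the failure mode the abstract warns about.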
|
2501.06554
|
Hierarchical Reinforcement Learning for Optimal Agent Grouping in
Cooperative Systems
|
cs.LG cs.AI cs.MA
|
This paper presents a hierarchical reinforcement learning (RL) approach to
address the agent grouping or pairing problem in cooperative multi-agent
systems. The goal is to simultaneously learn the optimal grouping and agent
policy. By employing a hierarchical RL framework, we distinguish between
high-level decisions of grouping and low-level agents' actions. Our approach
utilizes the CTDE (Centralized Training with Decentralized Execution) paradigm,
ensuring efficient learning and scalable execution. We incorporate
permutation-invariant neural networks to handle the homogeneity and cooperation
among agents, enabling effective coordination. The option-critic algorithm is
adapted to manage the hierarchical decision-making process, allowing for
dynamic and optimal policy adjustments.
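The permutation-invariance property can be sketched in the DeepSets style (toy weights, not the paper's network): a shared per-agent embedding followed by sum pooling makes a grouping score independent of the order in which homogeneous agents are listed.

```python
def phi(obs):  # shared per-agent embedding (toy fixed weights)
    return [2.0 * obs[0] + obs[1], obs[0] - obs[1]]

def score(agent_obs):
    # sum pooling over agents, then a small readout "rho"
    pooled = [sum(phi(o)[k] for o in agent_obs) for k in range(2)]
    return 0.5 * pooled[0] - 0.25 * pooled[1]

team = [(1.0, 2.0), (3.0, -1.0), (0.5, 0.5)]
print(score(team) == score(list(reversed(team))))  # True: order-invariant
```

Because the pooled summary is a sum, any permutation of the agents yields the identical score, which is the property such networks exploit when grouping interchangeable agents.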
|
2501.06557
|
A Survey on Spoken Italian Datasets and Corpora
|
cs.CL cs.AI cs.DL
|
Spoken language datasets are vital for advancing linguistic research, Natural
Language Processing, and speech technology. However, resources dedicated to
Italian, a linguistically rich and diverse Romance language, remain
underexplored compared to major languages like English or Mandarin. This survey
provides a comprehensive analysis of 66 spoken Italian datasets, highlighting
their characteristics, methodologies, and applications. The datasets are
categorized by speech type, source and context, and demographic and linguistic
features, with a focus on their utility in fields such as Automatic Speech
Recognition, emotion detection, and education. Challenges related to dataset
scarcity, representativeness, and accessibility are discussed alongside
recommendations for enhancing dataset creation and utilization. The full
dataset inventory is publicly accessible via GitHub and archived on Zenodo,
serving as a valuable resource for researchers and developers. By addressing
current gaps and proposing future directions, this work aims to support the
advancement of Italian speech technologies and linguistic research.
|
2501.06561
|
Where to Go Next Day: Multi-scale Spatial-Temporal Decoupled Model for
Mid-term Human Mobility Prediction
|
cs.AI
|
Predicting individual mobility patterns is crucial across various
applications. While current methods mainly focus on predicting the next
location for personalized services like recommendations, they often fall short
in supporting broader applications such as traffic management and epidemic
control, which require longer-period forecasts of human mobility. This study
addresses mid-term mobility prediction, aiming to capture daily travel patterns
and forecast trajectories for the upcoming day or week. We propose a novel
Multi-scale Spatial-Temporal Decoupled Predictor (MSTDP) designed to
efficiently extract spatial and temporal information by decoupling daily
trajectories into distinct location-duration chains. Our approach employs a
hierarchical encoder to model multi-scale temporal patterns, including daily
recurrence and weekly periodicity, and utilizes a transformer-based decoder to
globally attend to predicted information in the location or duration chain.
Additionally, we introduce a spatial heterogeneous graph learner to capture
multi-scale spatial relationships, enhancing semantic-rich representations.
Extensive experiments, including statistical physics analysis, are conducted on
large-scale mobile phone records in five cities (Boston, Los Angeles, SF Bay
Area, Shanghai, and Tokyo), to demonstrate MSTDP's advantages. Applied to
epidemic modeling in Boston, MSTDP significantly outperforms the
best-performing baseline, achieving a remarkable 62.8% reduction in MAE for
cumulative new cases.
|
2501.06562
|
Discrete Speech Unit Extraction via Independent Component Analysis
|
eess.AS cs.AI cs.LG cs.SD
|
Self-supervised speech models (S3Ms) have become a common tool for the speech
processing community, leveraging representations for downstream tasks.
Clustering S3M representations yields discrete speech units (DSUs), which serve
as compact representations for speech signals. DSUs are typically obtained by
k-means clustering. Using DSUs often leads to strong performance in various
tasks, including automatic speech recognition (ASR). However, despite the
high dimensionality and redundancy of S3M representations, preprocessing them
for better clustering remains unexplored, even though it can affect the
quality of DSUs. In this paper, we investigate the potential of
linear preprocessing methods for extracting DSUs. We evaluate standardization,
principal component analysis, whitening, and independent component analysis
(ICA) on DSU-based ASR benchmarks and demonstrate their effectiveness as
preprocessing for k-means. We also conduct extensive analyses of their
behavior, such as orthogonality or interpretability of individual components of
ICA.
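A minimal sketch of why linear preprocessing matters before k-means-style clustering (toy 2-D "representations" and standardization only; the paper also evaluates PCA, whitening, and ICA): without rescaling, a high-variance nuisance dimension dominates the distance computation.

```python
import math

def standardize(X):  # per-dimension z-scoring, one of the evaluated preprocessings
    d, n = len(X[0]), len(X)
    mu = [sum(x[k] for x in X) / n for k in range(d)]
    sd = [math.sqrt(sum((x[k] - mu[k]) ** 2 for x in X) / n) for k in range(d)]
    return [[(x[k] - mu[k]) / sd[k] for k in range(d)] for x in X]

def nearest(i, X):  # nearest neighbor under squared Euclidean distance
    return min((j for j in range(len(X)) if j != i),
               key=lambda j: sum((a - b) ** 2 for a, b in zip(X[i], X[j])))

# dim 0 carries the contrast (clusters near 0 and 1); dim 1 is a
# high-variance nuisance that dominates raw distances
X = [[0.0, 0.0], [0.05, 30.0], [1.0, 10.0], [1.05, 40.0]]
print(nearest(0, X), nearest(0, standardize(X)))  # 2 (wrong) vs 1 (right)
```

Any centroid-based clusterer (k-means included) inherits this distance geometry, which is why such linear transforms can change the quality of the resulting discrete units.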
|
2501.06564
|
Natural Language Processing and Deep Learning Models to Classify Phase
of Flight in Aviation Safety Occurrences
|
cs.CL cs.LG
|
The air transport system recognizes the criticality of safety, as even minor
anomalies can have severe consequences. Reporting accidents and incidents play
a vital role in identifying their causes and proposing safety recommendations.
However, the narratives describing pre-accident events are presented in
unstructured text that is not easily understood by computer systems.
Classifying and categorizing safety occurrences based on these narratives can
support informed decision-making by aviation industry stakeholders. In this
study, researchers applied natural language processing (NLP) and artificial
intelligence (AI) models to process text narratives to classify the flight
phases of safety occurrences. The classification performance of two deep
learning models, ResNet and sRNN, was evaluated using an initial dataset of
27,000 safety occurrence reports from the NTSB. The results demonstrated good
performance, with both models achieving an accuracy exceeding 68%, well above
the random guess rate of 14% for a seven-class classification problem. The
models also exhibited high precision, recall, and F1 scores. The sRNN model
greatly outperformed the simplified ResNet model architecture used in this
study. These findings indicate that NLP and deep learning models can infer the
flight phase from raw text narratives, enabling effective analysis of safety
occurrences.
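As a hedged stand-in for the deep models in the study, a few lines of bag-of-words Naive Bayes show the task shape, narrative in and flight phase out (the training snippets below are invented, not NTSB data).

```python
import math
from collections import Counter, defaultdict

train = [
    ("aircraft veered off runway during landing roll", "landing"),
    ("hard touchdown short of the runway threshold", "landing"),
    ("engine failure shortly after takeoff rotation", "takeoff"),
    ("rejected takeoff after bird strike on the roll", "takeoff"),
    ("turbulence encounter at cruise altitude", "cruise"),
    ("cabin pressure warning while in cruise flight", "cruise"),
]
word_counts = defaultdict(Counter)
phase_counts = Counter()
for text, phase in train:
    phase_counts[phase] += 1
    word_counts[phase].update(text.split())
vocab = {w for c in word_counts.values() for w in c}

def classify(text):
    def logp(phase):  # Laplace-smoothed log posterior
        total = sum(word_counts[phase].values())
        s = math.log(phase_counts[phase] / len(train))
        for w in text.split():
            s += math.log((word_counts[phase][w] + 1) / (total + len(vocab)))
        return s
    return max(phase_counts, key=logp)

print(classify("loss of control during landing flare"))  # "landing"
```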
|
2501.06566
|
Cooperative Aerial Robot Inspection Challenge: A Benchmark for
Heterogeneous Multi-UAV Planning and Lessons Learned
|
cs.RO cs.SY eess.SY
|
We propose the Cooperative Aerial Robot Inspection Challenge (CARIC), a
simulation-based benchmark for motion planning algorithms in heterogeneous
multi-UAV systems. CARIC features UAV teams with complementary sensors,
realistic constraints, and evaluation metrics prioritizing inspection quality
and efficiency. It offers a ready-to-use perception-control software stack and
diverse scenarios to support the development and evaluation of task allocation
and motion planning algorithms. Competitions using CARIC were held at IEEE CDC
2023 and the IROS 2024 Workshop on Multi-Robot Perception and Navigation,
attracting innovative solutions from research teams worldwide. This paper
examines the top three teams from CDC 2023, analyzing their exploration,
inspection, and task allocation strategies while drawing insights into their
performance across scenarios. The results highlight the task's complexity and
suggest promising directions for future research in cooperative multi-UAV
systems.
|
2501.06570
|
Aster: Enhancing LSM-structures for Scalable Graph Database
|
cs.DB
|
There is a proliferation of applications requiring the management of
large-scale, evolving graphs under workloads with intensive graph updates and
lookups. Driven by this challenge, we introduce Poly-LSM, a high-performance
key-value storage engine for graphs with the following novel techniques: (1)
Poly-LSM embeds a new graph-oriented LSM-tree design
that features a hybrid storage model for concisely and effectively storing
graph data. (2) Poly-LSM utilizes an adaptive mechanism to handle edge
insertions and deletions on graphs with optimized I/O efficiency. (3) Poly-LSM
exploits the skewness of graph data to encode the key-value entries. Building
upon this foundation, we further implement Aster, a robust and versatile graph
database that supports Gremlin query language facilitating various graph
applications. In our experiments, we compared Aster against several mainstream
real-world graph databases. The results demonstrate that Aster outperforms all
baseline graph databases, especially on large-scale graphs. Notably, on the
billion-scale Twitter graph dataset, Aster achieves up to 17x throughput
improvement compared to the best-performing baseline graph system.
|
2501.06571
|
Active Rule Mining for Multivariate Anomaly Detection in Radio Access
Networks
|
cs.LG cs.AI
|
Multivariate anomaly detection finds its importance in diverse applications.
Despite the existence of many detectors to solve this problem, one cannot
simply define why an obtained anomaly inferred by the detector is anomalous.
This reasoning is required for network operators to understand the root cause
of the anomaly and the remedial action that should be taken to counteract its
occurrence. Existing solutions in explainable AI may give cues to features that
influence an anomaly, but they do not formulate generalizable rules that can be
assessed by a domain expert. Furthermore, not all outliers are anomalous in a
business sense. There is an unfulfilled need for a system that can interpret
anomalies predicted by a multivariate anomaly detector and map these patterns
to actionable rules. This paper aims to fulfill this need by proposing a
semi-autonomous anomaly rule miner. The proposed method is applicable to both
discrete and time series data and is tailored for radio access network (RAN)
anomaly detection use cases. The proposed method is demonstrated in this paper
with time series RAN data.
|
2501.06572
|
Physics-Informed Neuro-Evolution (PINE): A Survey and Prospects
|
cs.NE cs.CE cs.LG
|
Deep learning models trained on finite data lack a complete understanding of
the physical world. On the other hand, physics-informed neural networks (PINNs)
are infused with such knowledge through the incorporation of mathematically
expressible laws of nature into their training loss function. By complying with
physical laws, PINNs provide advantages over purely data-driven models in
limited-data regimes. This feature has propelled them to the forefront of
scientific machine learning, a domain characterized by scarce and costly data.
However, the vision of accurate physics-informed learning comes with
significant challenges. This review examines PINNs for the first time in terms
of model optimization and generalization, shedding light on the need for new
algorithmic advances to overcome issues pertaining to the training speed,
precision, and generalizability of today's PINN models. Of particular interest
are the gradient-free methods of neuroevolution for optimizing the uniquely
complex loss landscapes arising in PINN training. Methods synergizing gradient
descent and neuroevolution for discovering bespoke neural architectures and
balancing multiple conflicting terms in physics-informed learning objectives
are positioned as important avenues for future research. Yet another exciting
track is to cast neuroevolution as a meta-learner of generalizable PINN models.
|
2501.06573
|
Modeling the residual queue and queue-dependent capacity in a static
traffic assignment problem
|
eess.SY cs.SY
|
The residual queue during a given study period (e.g., peak hour) is an
important feature that should be considered when solving a traffic assignment
problem under equilibrium for strategic traffic planning. Although studies have
focused extensively on static or quasi-dynamic traffic assignment models
considering the residual queue, they have failed to capture the situation
wherein the equilibrium flow passing through a link is less than the link's
physical capacity under congested conditions. To address this critical
issue, we introduce a novel static traffic assignment model that explicitly
incorporates the residual queue and queue-dependent link capacity. The proposed
model ensures that equilibrium link flows remain within the physical capacity
bounds, yielding estimations more aligned with data observed by traffic
detectors, especially in oversaturated scenarios. A generalized link cost
function considering queue-dependent capacity, with an additional queuing delay
term is proposed. The queuing delay term represents the added travel cost under
congestion, offering a framework wherein conventional static models, both with
and without physical capacity constraints, become special cases of our model.
Our study rigorously analyzes the mathematical properties of the new model,
establishing the theoretical uniqueness of solutions for link flow and residual
queue under certain conditions. We also introduce a gradient projection-based
alternating minimization algorithm tailored for the proposed model. Numerical
examples are conducted to demonstrate the superiority and merit of the proposed
model and solution algorithm.
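A hedged illustration of the generalized cost idea (the functional forms and constants below are assumptions, not the paper's): a BPR-style travel time whose effective capacity shrinks with the residual queue, plus an explicit queuing-delay term.

```python
def link_cost(flow, t0=10.0, cap0=1000.0, queue=0.0,
              alpha=0.15, beta=4.0, theta=0.0005, period=1.0):
    """Generalized link cost: BPR delay with queue-dependent capacity,
    plus a queuing-delay term for the residual queue."""
    cap = cap0 / (1.0 + theta * queue)       # capacity shrinks as the queue grows
    bpr = t0 * (1.0 + alpha * (flow / cap) ** beta)
    queuing_delay = queue / cap * period     # extra cost of the residual queue
    return bpr + queuing_delay

# queue = 0 reduces to the conventional static (BPR) model
print(round(link_cost(800), 3), round(link_cost(800, queue=400), 3))
```

Setting `queue=0` recovers an ordinary static cost function, mirroring the abstract's claim that conventional models arise as special cases.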
|
2501.06577
|
Transforming Social Science Research with Transfer Learning: Social
Science Survey Data Integration with AI
|
cs.AI
|
Large-N nationally representative surveys, which have profoundly shaped
American politics scholarship, represent related but distinct domains, a key
condition for transfer learning applications. These surveys are related through
their shared demographic, party identification, and ideological variables, yet
differ in that individual surveys often lack specific policy preference
questions that researchers require. Our study introduces a novel application of
transfer learning (TL) to address these gaps, marking the first systematic use
of TL paradigms in the context of survey data. Specifically, models pre-trained
on the Cooperative Election Study (CES) dataset are fine-tuned for use in the
American National Election Studies (ANES) dataset to predict policy questions
based on demographic variables. Even with a naive architecture, our transfer
learning approach achieves approximately 92 percent accuracy in predicting
missing variables across surveys, demonstrating the robust potential of this
method. Beyond this specific application, our paper argues that transfer
learning is a promising framework for maximizing the utility of existing survey
data. We contend that artificial intelligence, particularly transfer learning,
opens new frontiers in social science methodology by enabling systematic
knowledge transfer between well-administered surveys that share common
variables but differ in their outcomes of interest.
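The transfer setup can be mimicked with a toy (synthetic data and a plain logistic model, nothing like the actual surveys): pretrain on a large "source survey", then fine-tune on a small "target survey" whose coefficients are shifted but related.

```python
import math
import random

random.seed(0)

def make_data(n, w_true, noise):
    data = []
    for _ in range(n):
        x = [random.uniform(-1, 1) for _ in w_true]
        z = sum(wi * xi for wi, xi in zip(w_true, x)) + random.gauss(0, noise)
        data.append((x, 1 if z > 0 else 0))
    return data

def train(data, w, lr=0.5, epochs=200):  # plain SGD logistic regression
    for _ in range(epochs):
        for x, y in data:
            z = sum(wi * xi for wi, xi in zip(w, x))
            p = 1 / (1 + math.exp(-max(min(z, 30.0), -30.0)))  # clipped logit
            w = [wi + lr * (y - p) * xi for wi, xi in zip(w, x)]
    return w

def accuracy(w, data):
    return sum((sum(wi * xi for wi, xi in zip(w, x)) > 0) == (y == 1)
               for x, y in data) / len(data)

source = make_data(500, [1.0, -2.0, 0.5], 0.1)  # large "CES-like" survey
target = make_data(20, [1.2, -1.8, 0.7], 0.1)   # tiny survey, shifted effects
test = make_data(200, [1.2, -1.8, 0.7], 0.1)

w_pre = train(source, [0.0, 0.0, 0.0])  # pretrain on the source survey
w_ft = train(target, w_pre, epochs=50)  # fine-tune on the small target survey
print(round(accuracy(w_ft, test), 2))
```

The shared features make the pretrained weights a good initialization, which is the condition the abstract identifies for transfer between related surveys.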
|
2501.06581
|
Recommending the right academic programs: An interest mining approach
using BERTopic
|
cs.LG cs.CY cs.IR
|
Prospective students face the challenging task of selecting a university
program that will shape their academic and professional careers. For
decision-makers and support services, it is often time-consuming and extremely
difficult to match personal interests with suitable programs due to the vast
and complex catalogue information available. This paper presents the first
information system that provides students with efficient recommendations based
on both program content and personal preferences. We use BERTopic, a powerful
topic modeling algorithm that leverages text embedding techniques to generate
topic representations. It enables us to mine interest topics from all
course descriptions, representing the full body of knowledge taught at the
institution. Underpinned by the student's individual choice of topics, a
shortlist of the most relevant programs is computed through statistical
backtracking in the knowledge map, a novel characterization of the
program-course relationship. This approach can be applied to a wide range of
educational settings, including professional and vocational training. A case
study at a post-secondary school with 80 programs and over 5,000 courses shows
that the system provides immediate and effective decision support. The
presented interest topics are meaningful, leading to positive effects such as
serendipity, personalization, and fairness, as revealed by a qualitative study
involving 65 students. Over 98% of users indicated that the recommendations
aligned with their interests, and about 94% stated they would use the tool in
the future. Quantitative analysis shows the system can be configured to ensure
fairness, achieving 98% program coverage while maintaining a personalization
score of 0.77. These findings suggest that this real-time, user-centered,
data-driven system could improve the program selection process.
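The backtracking step can be sketched independently of the topic model (course and program names below are invented; in the system the topics come from BERTopic over course descriptions): programs are ranked by how strongly their courses' topics overlap the student's chosen interests.

```python
from collections import Counter

# topic assignments per course (produced by the topic model in the paper)
course_topics = {
    "CS101": {"programming"}, "CS201": {"programming", "ai"},
    "BIO110": {"genetics"},   "BIO210": {"genetics", "ai"},
    "ART100": {"design"},
}
# the program-course relationship ("knowledge map")
program_courses = {
    "Computer Science": ["CS101", "CS201"],
    "Bioinformatics":   ["BIO210", "CS201"],
    "Graphic Design":   ["ART100", "CS101"],
}

def recommend(interests):
    scores = Counter()
    for program, courses in program_courses.items():
        for c in courses:
            scores[program] += len(course_topics[c] & interests)
    return [p for p, _ in scores.most_common()]

print(recommend({"ai", "genetics"}))  # Bioinformatics ranks first
```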
|
2501.06582
|
ACORD: An Expert-Annotated Retrieval Dataset for Legal Contract Drafting
|
cs.CL
|
Information retrieval, specifically contract clause retrieval, is
foundational to contract drafting because lawyers rarely draft contracts from
scratch; instead, they locate and revise the most relevant precedent. We
introduce the Atticus Clause Retrieval Dataset (ACORD), the first retrieval
benchmark for contract drafting fully annotated by experts. ACORD focuses on
complex contract clauses such as Limitation of Liability, Indemnification,
Change of Control, and Most Favored Nation. It includes 114 queries and over
126,000 query-clause pairs, each ranked on a scale from 1 to 5 stars. The task
is to find the most relevant precedent clauses to a query. The bi-encoder
retriever paired with pointwise LLM re-rankers shows promising results.
However, substantial improvements are still needed to effectively manage the
complex legal work typically undertaken by lawyers. As the first retrieval
benchmark for contract drafting annotated by experts, ACORD can serve as a
valuable IR benchmark for the NLP community.
|
2501.06583
|
Optimizing wheel loader performance: an end-to-end approach
|
cs.CE cs.SY eess.SY
|
Wheel loaders in mines and construction sites repeatedly load soil from a
pile to load receivers. This task presents a challenging optimization problem
since each loading's performance depends on the pile state, which depends on
previous loadings. We investigate an end-to-end optimization approach
considering future loading outcomes and V-cycle transportation costs. To
predict the evolution of the pile state and the loading performance, we use
world models that leverage deep neural networks trained on numerous simulated
loading cycles. A look-ahead tree search optimizes the sequence of loading
actions by evaluating the performance of thousands of action candidates, which
expand into subsequent action candidates under the predicted pile states
recursively. Test results demonstrate that, over a horizon of 15 sequential
loadings, the look-ahead tree search is 6% more efficient than a greedy
strategy, which always selects the action that maximizes the current single
loading performance, and 14% more efficient than using a fixed loading
controller optimized for the nominal case.
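The planning trade-off can be shown with a toy (dynamics invented): each scoop's yield depends on a pile-quality state that the scoop itself degrades, so a depth-limited tree search over a simple world model beats the greedy rule.

```python
def step(q, action):
    """Return (load, next pile quality) for one scoop."""
    if action == "deep":
        return q, max(q - 4, 0)       # big scoop, but degrades the pile
    return 0.6 * q, max(q - 1, 0)     # gentle scoop preserves the pile

def greedy(q, horizon):
    total = 0.0
    for _ in range(horizon):
        action = max(("deep", "shallow"), key=lambda a: step(q, a)[0])
        gain, q = step(q, action)
        total += gain
    return total

def tree_search(q, horizon):  # exhaustive look-ahead over the toy world model
    if horizon == 0:
        return 0.0
    best = 0.0
    for a in ("deep", "shallow"):
        gain, q2 = step(q, a)
        best = max(best, gain + tree_search(q2, horizon - 1))
    return best

print(greedy(10.0, 4), tree_search(10.0, 4))  # 18.0 vs 23.4
```

The look-ahead planner delays the aggressive scoops to the end of the horizon, mirroring the paper's finding that optimizing over future pile states beats maximizing each single loading.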
|
2501.06585
|
Boundary-enhanced time series data imputation with long-term dependency
diffusion models
|
cs.LG cs.SI
|
Data imputation is crucial for addressing challenges posed by missing values
in multivariate time series data across various fields, such as healthcare,
traffic, and economics, and has garnered significant attention. Among various
methods, diffusion model-based approaches show notable performance
improvements. However, existing methods often cause disharmonious boundaries
between missing and known regions and overlook long-range dependencies in
missing data estimation, leading to suboptimal results. To address these
issues, we propose a Diffusion-based time Series Data Imputation (DSDI)
framework. We develop a weight-reducing injection strategy that incorporates
the predicted values of missing points with reducing weights into the reverse
diffusion process to mitigate boundary inconsistencies. Further, we introduce a
multi-scale S4-based U-Net, which combines hierarchical information from
different levels via multi-resolution integration to capture long-term
dependencies. Experimental results demonstrate that our model outperforms
existing imputation methods.
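The weight-reducing injection can be sketched in one dimension (the linear "denoiser" and schedule below are placeholders, not the trained model): the externally predicted value for a missing point is blended in with a weight that decays over the reverse process, so late steps are dominated by the diffusion model and the boundary stays smooth.

```python
T = 10
predicted = 5.0                  # external prediction for the missing point

def reverse_step(x, t):          # stand-in for one learned reverse-diffusion step
    return 0.9 * x + 0.1 * 4.0   # the model pulls toward its own estimate (4.0)

x = 0.0                          # deterministic "noise" start, for clarity
for t in range(T, 0, -1):
    x = reverse_step(x, t)
    w = t / T                    # injection weight shrinks as t -> 0
    x = w * predicted + (1 - w) * x
print(round(x, 3))               # lands between the model estimate and the prediction
```

Early steps are anchored near the prediction (avoiding a hard boundary against known values), while the final steps hand control back to the model.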
|
2501.06588
|
A Tight VC-Dimension Analysis of Clustering Coresets with Applications
|
cs.CG cs.DS cs.LG
|
We consider coresets for $k$-clustering problems, where the goal is to assign
points to centers minimizing powers of distances. A popular example is the
$k$-median objective $\sum_{p}\min_{c\in C}dist(p,c)$. Given a point set $P$, a
coreset $\Omega$ is a small weighted subset that approximates the cost of $P$
for all candidate solutions $C$ up to a $(1\pm\varepsilon )$ multiplicative
factor. In this paper, we give a sharp VC-dimension based analysis for coreset
construction. As a consequence, we obtain improved $k$-median coreset bounds
for the following metrics:
Coresets of size $\tilde{O}\left(k\varepsilon^{-2}\right)$ for shortest path
metrics in planar graphs, improving over the bounds
$\tilde{O}\left(k\varepsilon^{-6}\right)$ by [Cohen-Addad, Saulpic,
Schwiegelshohn, STOC'21] and $\tilde{O}\left(k^2\varepsilon^{-4}\right)$ by
[Braverman, Jiang, Krauthgamer, Wu, SODA'21].
Coresets of size $\tilde{O}\left(kd\ell\varepsilon^{-2}\log m\right)$ for
clustering $d$-dimensional polygonal curves of length at most $m$ with curves
of length at most $\ell$ with respect to Fréchet metrics, improving over the
bounds $\tilde{O}\left(k^3d\ell\varepsilon^{-3}\log m\right)$ by [Braverman,
Cohen-Addad, Jiang, Krauthgamer, Schwiegelshohn, Toftrup, and Wu, FOCS'22] and
$\tilde{O}\left(k^2d\ell\varepsilon^{-2}\log m \log |P|\right)$ by [Conradi,
Kolbe, Psarros, Rohde, SoCG'24].
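The coreset contract itself is easy to state in code (uniform sampling below, not the VC-dimension-based construction of the paper): a weighted subset whose 1-median cost tracks the full point set's cost for any candidate center.

```python
import random

random.seed(1)

P = [random.gauss(0, 1) for _ in range(5000)]
m = 500
sample = random.sample(P, m)
weight = len(P) / m  # each sampled point stands in for n/m original points

def cost(points, center, w=1.0):
    return w * sum(abs(p - center) for p in points)  # 1-median objective

for c in (-1.0, 0.0, 2.0):  # a few candidate solutions
    print(c, round(cost(sample, c, weight) / cost(P, c), 3))  # ratios near 1
```

The paper's contribution is bounding how small such a subset can be while keeping this ratio within $1\pm\varepsilon$ for *all* candidate solutions simultaneously, via a sharp VC-dimension analysis rather than uniform sampling.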
|
2501.06589
|
Ladder-residual: parallelism-aware architecture for accelerating large
model inference with communication overlapping
|
cs.LG cs.CL cs.DC
|
Large language model inference is both memory-intensive and time-consuming,
often requiring distributed algorithms to efficiently scale. Various model
parallelism strategies are used in multi-GPU training and inference to
partition computation across multiple devices, reducing memory load and
computation time. However, using model parallelism necessitates communication
of information between GPUs, which has been a major bottleneck and limits the
gains obtained by scaling up the number of devices. We introduce Ladder
Residual, a simple architectural modification applicable to all residual-based
models that enables straightforward overlapping that effectively hides the
latency of communication. Our insight is that in addition to systems
optimization, one can also redesign the model architecture to decouple
communication from computation. While Ladder Residual can allow
communication-computation decoupling in conventional parallelism patterns, we
focus on Tensor Parallelism in this paper, which is particularly bottlenecked
by its heavy communication. For a Transformer model with 70B parameters,
applying Ladder Residual to all its layers can achieve a 29% end-to-end wall-clock
speedup at inference time with TP sharding over 8 devices. We refer to the
resulting Transformer model as the Ladder Transformer. We train a 1B and 3B
Ladder Transformer from scratch and observe comparable performance to a
standard dense transformer baseline. We also show that it is possible to
convert parts of the Llama-3.1 8B model to our Ladder Residual architecture
with minimal accuracy degradation by only retraining for 3B tokens. We release
our code for training and inference for easier replication of experiments.
|
2501.06590
|
ChemAgent: Self-updating Library in Large Language Models Improves
Chemical Reasoning
|
cs.CL cs.AI
|
Chemical reasoning usually involves complex, multi-step processes that demand
precise calculations, where even minor errors can lead to cascading failures.
Furthermore, large language models (LLMs) encounter difficulties handling
domain-specific formulas, executing reasoning steps accurately, and integrating
code effectively when tackling chemical reasoning tasks. To address these
challenges, we present ChemAgent, a novel framework designed to improve the
performance of LLMs through a dynamic, self-updating library. This library is
developed by decomposing chemical tasks into sub-tasks and compiling these
sub-tasks into a structured collection that can be referenced for future
queries. Then, when presented with a new problem, ChemAgent retrieves and
refines pertinent information from the library, which we call memory,
facilitating effective task decomposition and the generation of solutions. Our
method designs three types of memory and a library-enhanced reasoning
component, enabling LLMs to improve over time through experience. Experimental
results on four chemical reasoning datasets from SciBench demonstrate that
ChemAgent achieves performance gains of up to 46% (GPT-4), significantly
outperforming existing methods. Our findings suggest substantial potential for
future applications, including tasks such as drug discovery and materials
science. Our code can be found at https://github.com/gersteinlab/chemagent
|
2501.06591
|
Exploring Pose-Based Anomaly Detection for Retail Security: A Real-World
Shoplifting Dataset and Benchmark
|
cs.CV cs.AI
|
Shoplifting poses a significant challenge for retailers, resulting in
billions of dollars in annual losses. Traditional security measures often fall
short, highlighting the need for intelligent solutions capable of detecting
shoplifting behaviors in real time. This paper frames shoplifting detection as
an anomaly detection problem, focusing on the identification of deviations from
typical shopping patterns. We introduce PoseLift, a privacy-preserving dataset
specifically designed for shoplifting detection, addressing challenges such as
data scarcity, privacy concerns, and model biases. PoseLift is built in
collaboration with a retail store and contains anonymized human pose data from
real-world scenarios. By preserving essential behavioral information while
anonymizing identities, PoseLift balances privacy and utility. We benchmark
state-of-the-art pose-based anomaly detection models on this dataset,
evaluating performance using a comprehensive set of metrics. Our results
demonstrate that pose-based approaches achieve high detection accuracy while
effectively addressing privacy and bias concerns inherent in traditional
methods. As one of the first datasets capturing real-world shoplifting
behaviors, PoseLift offers researchers a valuable tool to advance computer
vision ethically and will be publicly available to foster innovation and
collaboration. The dataset is available at
https://github.com/TeCSAR-UNCC/PoseLift.
|
2501.06597
|
EmoXpt: Analyzing Emotional Variances in Human Comments and
LLM-Generated Responses
|
cs.LG cs.CL cs.HC
|
The widespread adoption of generative AI has generated diverse opinions, with
individuals expressing both support and criticism of its applications. This
study investigates the emotional dynamics surrounding generative AI by
analyzing human tweets referencing terms such as ChatGPT, OpenAI, Copilot, and
LLMs. To further understand the emotional intelligence of ChatGPT, we examine
its responses to selected tweets, highlighting differences in sentiment between
human comments and LLM-generated responses. We introduce EmoXpt, a sentiment
analysis framework designed to assess both human perspectives on generative AI
and the sentiment embedded in ChatGPT's responses. Unlike prior studies that
focus exclusively on human sentiment, EmoXpt uniquely evaluates the emotional
expression of ChatGPT. Experimental results demonstrate that LLM-generated
responses are notably more efficient, cohesive, and consistently positive than
human responses.
|
2501.06598
|
ChartCoder: Advancing Multimodal Large Language Model for Chart-to-Code
Generation
|
cs.AI
|
Multimodal Large Language Models (MLLMs) have demonstrated remarkable
capabilities in chart understanding tasks. However, interpreting charts with
textual descriptions often leads to information loss, as it fails to fully
capture the dense information embedded in charts. In contrast, parsing charts
into code provides lossless representations that can effectively contain all
critical details. Although existing open-source MLLMs have achieved success in
chart understanding tasks, they still face two major challenges when applied to
chart-to-code tasks: (1) Low executability and poor restoration of chart
details in the generated code and (2) Lack of large-scale and diverse training
data. To address these challenges, we propose \textbf{ChartCoder}, the first
dedicated chart-to-code MLLM, which leverages Code LLMs as the language
backbone to enhance the executability of the generated code. Furthermore, we
introduce \textbf{Chart2Code-160k}, the first large-scale and diverse dataset
for chart-to-code generation, and propose the \textbf{Snippet-of-Thought (SoT)}
method, which transforms direct chart-to-code generation data into step-by-step
generation. Experiments demonstrate that ChartCoder, with only 7B parameters,
surpasses existing open-source MLLMs on chart-to-code benchmarks, achieving
superior chart restoration and code executability. Our code will be available at
https://github.com/thunlp/ChartCoder.
|
2501.06602
|
A Comparative Performance Analysis of Classification and Segmentation
Models on Bangladeshi Pothole Dataset
|
cs.CV
|
The study involves a comprehensive performance analysis of popular
classification and segmentation models, applied over a Bangladeshi pothole
dataset developed by the authors of this research. This custom dataset
of 824 samples, collected from the streets of Dhaka and Bogura, performs
competitively against the existing industrial and custom datasets utilized in
the present literature. The dataset was further augmented four-fold for
segmentation and ten-fold for classification evaluation. We tested nine
classification models (CCT, CNN, INN, Swin Transformer, ConvMixer, VGG16,
ResNet50, DenseNet201, and Xception) and four segmentation models (U-Net,
ResU-Net, U-Net++, and Attention-Unet) over both the datasets. Among the
classification models, lightweight models namely CCT, CNN, INN, Swin
Transformer, and ConvMixer were emphasized due to their low computational
requirements and faster prediction times. The lightweight models performed
respectably, often matching the performance of heavyweight models. In
addition, augmentation was found to enhance the performance of all the tested
models. The experimental results show that our dataset performs on par with or
outperforms the similar classification models utilized in the existing
literature, reaching accuracy and F1-scores over 99%. The dataset also
performed on par with the existing datasets for segmentation, achieving model
Dice Similarity Coefficient up to 67.54% and IoU scores up to 59.39%.
|
2501.06603
|
Preconditioned Sharpness-Aware Minimization: Unifying Analysis and a
Novel Learning Algorithm
|
cs.LG
|
Targeting solutions over `flat' regions of the loss landscape,
sharpness-aware minimization (SAM) has emerged as a powerful tool to improve
generalizability of deep neural network based learning. While several SAM
variants have been developed to this end, a unifying approach that also guides
principled algorithm design has been elusive. This contribution leverages
preconditioning (pre) to unify SAM variants and provide not only unifying
convergence analysis, but also valuable insights. Building upon preSAM, a novel
algorithm termed infoSAM is introduced to address the so-called adversarial
model degradation issue in SAM by adjusting gradients depending on noise
estimates. Extensive numerical tests demonstrate the superiority of infoSAM
across various benchmarks.
|
2501.06605
|
RoboHorizon: An LLM-Assisted Multi-View World Model for Long-Horizon
Robotic Manipulation
|
cs.RO
|
Efficient control in long-horizon robotic manipulation is challenging due to
complex representation and policy learning requirements. Model-based visual
reinforcement learning (RL) has shown great potential in addressing these
challenges but still faces notable limitations, particularly in handling sparse
rewards and complex visual features in long-horizon environments. To address
these limitations, we propose the Recognize-Sense-Plan-Act (RSPA) pipeline for
long-horizon tasks and further introduce RoboHorizon, an LLM-assisted
multi-view world model tailored for long-horizon robotic manipulation. In
RoboHorizon, pre-trained LLMs generate dense reward structures for multi-stage
sub-tasks based on task language instructions, enabling robots to better
recognize long-horizon tasks. Keyframe discovery is then integrated into the
multi-view masked autoencoder (MAE) architecture to enhance the robot's ability
to sense critical task sequences, strengthening its multi-stage perception of
long-horizon processes. Leveraging these dense rewards and multi-view
representations, a robotic world model is constructed to efficiently plan
long-horizon tasks, enabling the robot to reliably act through RL algorithms.
Experiments on two representative benchmarks, RLBench and FurnitureBench, show
that RoboHorizon outperforms state-of-the-art visual model-based RL methods,
achieving a 23.35% improvement in task success rates on RLBench's 4
short-horizon tasks and a 29.23% improvement on 6 long-horizon tasks from
RLBench and 3 furniture assembly tasks from FurnitureBench.
|
2501.06608
|
Dual-Modality Representation Learning for Molecular Property Prediction
|
cs.LG q-bio.QM
|
Molecular property prediction has attracted substantial attention recently.
Accurate prediction of drug properties relies heavily on effective molecular
representations. The structures of chemical compounds are commonly represented
as graphs or SMILES sequences. Recent advances in learning drug properties
commonly employ Graph Neural Networks (GNNs) based on the graph representation.
For the SMILES representation, Transformer-based architectures have been
adopted by treating each SMILES string as a sequence of tokens. Because each
representation has its own advantages and disadvantages, combining both
representations in learning drug properties is a promising direction. We
propose a method named Dual-Modality Cross-Attention (DMCA) that can
effectively combine the strengths of two representations by employing the
cross-attention mechanism. DMCA was evaluated across eight datasets including
both classification and regression tasks. Results show that our method achieves
the best overall performance, highlighting its effectiveness in leveraging the
complementary information from both graph and SMILES modalities.
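The cross-attention fusion described above can be sketched generically as a single attention head in NumPy; the token counts, dimensions, and projection matrices below are illustrative assumptions, not DMCA's actual configuration:

```python
import numpy as np

def cross_attention(queries_mod, kv_mod, Wq, Wk, Wv):
    """One cross-attention head: queries from one modality (e.g. graph tokens),
    keys/values from the other (e.g. SMILES tokens)."""
    Q, K, V = queries_mod @ Wq, kv_mod @ Wk, kv_mod @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[1])
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)  # softmax over the other modality
    return weights @ V

rng = np.random.default_rng(0)
graph_tokens = rng.normal(size=(10, 64))    # hypothetical graph-branch tokens
smiles_tokens = rng.normal(size=(20, 64))   # hypothetical SMILES-branch tokens
Wq, Wk, Wv = (rng.normal(size=(64, 32)) for _ in range(3))
fused = cross_attention(graph_tokens, smiles_tokens, Wq, Wk, Wv)  # shape (10, 32)
```

Each graph token is updated with a convex combination of SMILES-token values, which is the mechanism that lets one modality attend to complementary information in the other.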
|
2501.06625
|
Guided Code Generation with LLMs: A Multi-Agent Framework for Complex
Code Tasks
|
cs.AI
|
Large Language Models (LLMs) have shown remarkable capabilities in code
generation tasks, yet they face significant limitations in handling complex,
long-context programming challenges and demonstrating complex compositional
reasoning abilities. This paper introduces a novel agentic framework for
``guided code generation'' that tries to address these limitations through a
deliberately structured, fine-grained approach to code generation tasks. Our
framework leverages LLMs' strengths as fuzzy searchers and approximate
information retrievers while mitigating their weaknesses in long sequential
reasoning and long-context understanding. Empirical evaluation using OpenAI's
HumanEval benchmark with Meta's Llama 3.1 8B model (int4 precision)
demonstrates a 23.79\% improvement in solution accuracy compared to direct
one-shot generation. Our results indicate that structured, guided approaches to
code generation can significantly enhance the practical utility of LLMs in
software development while overcoming their inherent limitations in
compositional reasoning and context handling.
|
2501.06628
|
Quantifying Relational Exploration in Cultural Heritage Knowledge Graphs
with LLMs: A Neuro-Symbolic Approach
|
cs.AI
|
This paper introduces a neuro-symbolic approach for relational exploration in
cultural heritage knowledge graphs, leveraging Large Language Models (LLMs) for
explanation generation and a novel mathematical framework to quantify the
interestingness of relationships. We demonstrate the importance of the
interestingness measure through a quantitative analysis, highlighting its
impact on the overall performance of our proposed system, particularly in terms
of precision, recall, and F1-score. Using the Wikidata Cultural Heritage Linked
Open Data (WCH-LOD) dataset, our approach yields a precision of 0.70, recall of
0.68, and an F1-score of 0.69, representing an improvement compared to
graph-based (precision: 0.28, recall: 0.25, F1-score: 0.26) and knowledge-based
baselines (precision: 0.45, recall: 0.42, F1-score: 0.43). Furthermore, our
LLM-powered explanations exhibit better quality, reflected in BLEU (0.52),
ROUGE-L (0.58), and METEOR (0.63) scores, all higher than the baseline
approaches. We show a strong correlation (0.65) between the interestingness measure
and the quality of generated explanations, validating its effectiveness. The
findings highlight the importance of LLMs and a mathematical formalization for
interestingness in enhancing the effectiveness of relational exploration in
cultural heritage knowledge graphs, with results that are measurable and
testable. We further show that the system enables more effective exploration
compared to purely knowledge-based and graph-based methods.
|
2501.06635
|
A Reduced Order Iterative Linear Quadratic Regulator (ILQR) Technique
for the Optimal Control of Nonlinear Partial Differential Equations
|
eess.SY cs.SY
|
In this paper, we introduce a reduced order model-based reinforcement
learning (MBRL) approach, utilizing the Iterative Linear Quadratic Regulator
(ILQR) algorithm for the optimal control of nonlinear partial differential
equations (PDEs). The approach proposes a novel modification of the ILQR
technique: it uses the Method of Snapshots to identify a reduced order Linear
Time Varying (LTV) approximation of the nonlinear PDE dynamics around a current
estimate of the optimal trajectory, utilizes the identified LTV model to solve
a time-varying reduced order LQR problem to obtain an improved estimate of the
optimal trajectory along with a new reduced basis, and iterates until
convergence. The convergence behavior of the reduced order approach is analyzed
and the algorithm is shown to converge to a limit set that is dependent on the
truncation error in the reduction. The proposed approach is tested on the
viscous Burgers' equation and two phase-field models for microstructure
evolution in materials, and the results show that there is a significant
reduction in the computational burden over the standard ILQR approach, without
significantly sacrificing performance.
|
2501.06636
|
Dual use issues in the field of Natural Language Generation
|
cs.CL
|
This report documents the results of a recent survey in the SIGGEN community,
focusing on Dual Use issues in Natural Language Generation (NLG). SIGGEN is the
Special Interest Group (SIG) of the Association for Computational Linguistics
(ACL) for researchers working on NLG. The survey was prompted by the ACL
executive board, which asked all SIGs to provide an overview of dual use issues
within their respective subfields. The survey was sent out in October 2024 and
the results were processed in January 2025. With 23 respondents, the survey is
presumably not representative of all SIGGEN members, but at least this document
offers a helpful resource for future discussions.
This report is open to feedback from the SIGGEN community. Let me know if you
have any questions or comments!
|
2501.06638
|
Scaling Down Semantic Leakage: Investigating Associative Bias in Smaller
Language Models
|
cs.CL
|
Semantic leakage is a phenomenon recently introduced by Gonen et al. (2024).
It refers to a situation in which associations learnt from the training data
emerge in language model generations in an unexpected and sometimes undesired
way. Prior work has focused on leakage in large language models (7B+
parameters). In this study, I use the Qwen2.5 model family to explore whether
smaller models, ranging from 500M to 7B parameters, demonstrate less semantic
leakage due to their limited capacity for capturing complex associations.
Building on the previous dataset from Gonen et al. (2024), I introduce a new
dataset of color-focused prompts, categorized into specific types of semantic
associations, to systematically evaluate the models' performance. Results
indicate that smaller models exhibit less semantic leakage overall, although
this trend is not strictly linear, with medium-sized models sometimes
surpassing larger ones in leaking behavior. The dataset, the model generations,
and the evaluation code are publicly available at
https://github.com/smilni/semantic_leakage_project.
|
2501.06639
|
Enhancing Path Planning Performance through Image Representation
Learning of High-Dimensional Configuration Spaces
|
cs.RO cs.AI
|
This paper presents a novel method for accelerating path-planning tasks in
unknown scenes with obstacles by utilizing Wasserstein Generative Adversarial
Networks (WGANs) with Gradient Penalty (GP) to approximate the distribution of
waypoints for a collision-free path using the Rapidly-exploring Random Tree
algorithm. Our approach involves conditioning the WGAN-GP with a forward
diffusion process in a continuous latent space to handle multimodal datasets
effectively. We also propose encoding the waypoints of a collision-free path as
a matrix, where the multidimensional ordering of the waypoints is naturally
preserved. This method not only improves model learning but also enhances
training convergence. Furthermore, we propose a method to assess whether the
trained model fails to accurately capture the true waypoints. In such cases, we
revert to uniform sampling to ensure the algorithm's probabilistic
completeness; a process that traditionally involves manually determining an
optimal ratio for each scenario in other machine learning-based methods. Our
experiments demonstrate promising results in accelerating path-planning tasks
under critical time constraints. The source code is openly available at
https://bitbucket.org/joro3001/imagewgangpplanning/src/master/.
|
2501.06641
|
A Permutation-Free Length 3 Decimal Check Digit Code
|
cs.IT math.CO math.IT
|
In 1969 J. Verhoeff provided the first examples of a decimal error detecting
code using a single check digit to provide protection against all single,
transposition and adjacent twin errors. The three codes he presented are length
3-digit codes with 2 information digits. Existence of a 4-digit code would
imply the existence of 10 such disjoint 3-digit codes. Apparently, not even a
pair of such disjoint 3-digit codes is known. The code developed herein has
the property that the knowledge of any two digits is sufficient to determine
the entire codeword, even when their positions are unknown. This fulfills
Verhoeff's desire to eliminate "cyclic errors". Phonetic errors, where 2 digit
pairs of the forms X0 and 1X are interchanged, are also eliminated.
|
2501.06642
|
Common Sense Is All You Need
|
cs.AI
|
Artificial intelligence (AI) has made significant strides in recent years,
yet it continues to struggle with a fundamental aspect of cognition present in
all animals: common sense. Current AI systems, including those designed for
complex tasks like autonomous driving, problem-solving challenges such as the
Abstraction and Reasoning Corpus (ARC), and conversational benchmarks like the
Turing Test, often lack the ability to adapt to new situations without
extensive prior knowledge. This manuscript argues that integrating common sense
into AI systems is essential for achieving true autonomy and unlocking the full
societal and commercial value of AI.
We propose a shift in the order of knowledge acquisition emphasizing the
importance of developing AI systems that start from minimal prior knowledge and
are capable of contextual learning, adaptive reasoning, and embodiment -- even
within abstract domains. Additionally, we highlight the need to rethink the AI
software stack to address this foundational challenge. Without common sense, AI
systems may never reach true autonomy, instead exhibiting asymptotic
performance that approaches theoretical ideals like AIXI but remains
unattainable in practice due to infinite resource and computation requirements.
While scaling AI models and passing benchmarks like the Turing Test have
brought significant advancements in applications that do not require autonomy,
these approaches alone are insufficient to achieve autonomous AI with common
sense. By redefining existing benchmarks and challenges to enforce constraints
that require genuine common sense, and by broadening our understanding of
embodiment to include both physical and abstract domains, we can encourage the
development of AI systems better equipped to handle the complexities of
real-world and abstract environments.
|
2501.06645
|
FocalPO: Enhancing Preference Optimizing by Focusing on Correct
Preference Rankings
|
cs.CL cs.AI
|
Efficient preference optimization algorithms such as Direct Preference
Optimization (DPO) have become a popular approach in aligning large language
models (LLMs) with human preferences. These algorithms implicitly treat the LLM
as a reward model, and focus on training it to correct misranked preference
pairs. However, recent work~\citep{chen2024preference} empirically finds that
DPO training \textit{rarely improves these misranked preference pairs}, despite
its gradient emphasizing these cases. We introduce FocalPO, a DPO variant
that instead \textit{down-weighs} misranked preference pairs and prioritizes
enhancing the model's understanding of pairs that it can already rank
correctly. Inspired by Focal Loss used in vision tasks, FocalPO achieves this
by adding a modulating factor to dynamically scale DPO loss. Our experiment
demonstrates that FocalPO surpasses DPO and its variants on popular benchmarks
like Alpaca Eval 2.0 using Mistral-Base-7B and Llama-3-Instruct-8B.
Additionally, we empirically reveal how FocalPO affects training on correct
and incorrect sample groups, further underscoring its effectiveness.
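The modulating-factor idea can be sketched for a single preference pair in plain Python; the values of `beta` and `gamma` and the exact form of the factor are assumptions by analogy with Focal Loss, not FocalPO's published formula:

```python
import math

def focal_dpo_loss(policy_logratio, ref_logratio, beta=0.1, gamma=2.0):
    """FocalPO-style loss for one pair: DPO loss scaled by a focal-type factor.
    Each logratio argument is log p(chosen) - log p(rejected) under that model."""
    margin = beta * (policy_logratio - ref_logratio)
    p = 1.0 / (1.0 + math.exp(-margin))  # model's prob. of ranking the pair correctly
    dpo = -math.log(p)                   # standard DPO loss = -log sigmoid(margin)
    return (p ** gamma) * dpo            # small p (misranked) shrinks the contribution
```

For a misranked pair (negative margin), `p` is small, so the factor `p ** gamma` shrinks its loss relative to plain DPO, shifting emphasis toward pairs the model already ranks correctly (the opposite of vanilla Focal Loss, which emphasizes hard examples).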
|
2501.06650
|
SafeSplit: A Novel Defense Against Client-Side Backdoor Attacks in Split
Learning
|
cs.CR cs.DC cs.LG
|
Split Learning (SL) is a distributed deep learning approach enabling multiple
clients and a server to collaboratively train and infer on a shared deep neural
network (DNN) without requiring clients to share their private local data. The
DNN is partitioned in SL, with most layers residing on the server and a few
initial layers and inputs on the client side. This configuration allows
resource-constrained clients to participate in training and inference. However,
the distributed architecture exposes SL to backdoor attacks, where malicious
clients can manipulate local datasets to alter the DNN's behavior. Existing
defenses from other distributed frameworks like Federated Learning are not
applicable, and there is a lack of effective backdoor defenses specifically
designed for SL.
We present SafeSplit, the first defense against client-side backdoor attacks
in Split Learning (SL). SafeSplit enables the server to detect and filter out
malicious client behavior by employing circular backward analysis after a
client's training is completed, iteratively reverting to a trained checkpoint
where the model under examination is found to be benign. It uses a two-fold
analysis to identify client-induced changes and detect poisoned models. First,
a static analysis in the frequency domain measures the differences in the
layer's parameters at the server. Second, a dynamic analysis introduces a novel
rotational distance metric that assesses the orientation shifts of the server's
layer parameters during training. Our comprehensive evaluation across various
data distributions, client counts, and attack scenarios demonstrates the high
efficacy of this dual analysis in mitigating backdoor attacks while preserving
model utility.
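One plausible form of such a rotational distance is the angle between flattened parameter vectors before and after a client's training round; this is a hypothetical sketch, and the paper's exact definition may differ:

```python
import numpy as np

def rotational_distance(theta_before, theta_after):
    """Angle (radians) between flattened layer-parameter vectors; a large value
    signals an orientation shift that may indicate a poisoned update."""
    a, b = np.ravel(theta_before), np.ravel(theta_after)
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))
```

An unchanged layer gives distance 0, while an update orthogonal to the previous parameters gives pi/2, so a simple threshold on this angle could separate benign drift from abrupt reorientation.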
|
2501.06651
|
Parking Space Detection in the City of Granada
|
cs.CV
|
This paper addresses the challenge of parking space detection in urban areas,
focusing on the city of Granada. Utilizing aerial imagery, we develop and apply
semantic segmentation techniques to accurately identify parked cars, moving
cars and roads. A significant aspect of our research is the creation of a
proprietary dataset specific to Granada, which is instrumental in training our
neural network model. We employ Fully Convolutional Networks, Pyramid Networks
and Dilated Convolutions, demonstrating their effectiveness in urban semantic
segmentation. Our approach involves comparative analysis and optimization of
various models, including Dynamic U-Net, PSPNet and DeepLabV3+, tailored for
the segmentation of aerial images. The study includes a thorough
experimentation phase, using datasets such as UDD5 and UAVid, alongside our
custom Granada dataset. We evaluate our models using metrics like Foreground
Accuracy, Dice Coefficient and Jaccard Index. Our results indicate that
DeepLabV3+ offers the most promising performance. We conclude with future
directions, emphasizing the need for a dedicated neural network for parked car
detection and the potential for application in other urban environments. This
work contributes to the fields of urban planning and traffic management,
providing insights into efficient utilization of parking spaces through
advanced image processing techniques.
|
2501.06653
|
Theoretical Characterization of Effect of Masks in Snapshot Compressive
Imaging
|
cs.IT eess.IV math.IT stat.AP
|
Snapshot compressive imaging (SCI) refers to the recovery of
three-dimensional data cubes-such as videos or hyperspectral images-from their
two-dimensional projections, which are generated by a special encoding of the
data with a mask. SCI systems commonly use binary-valued masks that follow
certain physical constraints. Optimizing these masks subject to these
constraints is expected to improve system performance. However, prior
theoretical work on SCI systems focuses solely on independently and identically
distributed (i.i.d.) Gaussian masks, which do not permit such optimization. On
the other hand, existing practical mask optimizations rely on computationally
intensive joint optimizations that provide limited insight into the role of
masks and are expected to be sub-optimal due to the non-convexity and
complexity of the optimization. In this paper, we analytically characterize the
performance of SCI systems employing binary masks and leverage our analysis to
optimize hardware parameters. Our findings provide a comprehensive and
fundamental understanding of the role of binary masks - with both independent
and dependent elements - and their optimization. We also present simulation
results that confirm our theoretical findings and further illuminate different
aspects of mask design.
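The encoding referred to here, a single 2-D snapshot of a mask-modulated 3-D cube, can be written in a few lines; the i.i.d. Bernoulli mask statistics below are purely illustrative, and are exactly the restricted case the paper's analysis moves beyond:

```python
import numpy as np

rng = np.random.default_rng(1)
B, H, W = 8, 64, 64                          # data cube: B frames of H x W
cube = rng.random((B, H, W))                 # e.g. video frames or spectral bands
masks = rng.integers(0, 2, size=(B, H, W))   # binary-valued masks, one per frame
snapshot = (masks * cube).sum(axis=0)        # single 2-D measurement: sum_t M_t * x_t
```

Recovery then means inverting this many-to-one map: estimating all `B` frames of `cube` from `snapshot` and the known `masks`.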
|
2501.06655
|
Personalized Preference Fine-tuning of Diffusion Models
|
cs.LG cs.CV
|
RLHF techniques like DPO can significantly improve the generation quality of
text-to-image diffusion models. However, these methods optimize for a single
reward that aligns model generation with population-level preferences,
neglecting the nuances of individual users' beliefs or values. This lack of
personalization limits the efficacy of these models. To bridge this gap, we
introduce PPD, a multi-reward optimization objective that aligns diffusion
models with personalized preferences. With PPD, a diffusion model learns the
individual preferences of a population of users in a few-shot way, enabling
generalization to unseen users. Specifically, our approach (1) leverages a
vision-language model (VLM) to extract personal preference embeddings from a
small set of pairwise preference examples, and then (2) incorporates the
embeddings into diffusion models through cross attention. Conditioning on user
embeddings, the text-to-image models are fine-tuned with the DPO objective,
simultaneously optimizing for alignment with the preferences of multiple users.
Empirical results demonstrate that our method effectively optimizes for
multiple reward functions and can interpolate between them during inference. In
real-world user scenarios, with as few as four preference examples from a new
user, our approach achieves an average win rate of 76\% over Stable Cascade,
generating images that more accurately reflect specific user preferences.
|
2501.06659
|
TWIX: Automatically Reconstructing Structured Data from Templatized
Documents
|
cs.DB cs.CV
|
Many documents, which we call templatized documents, are programmatically
generated by populating fields in a visual template. Effective data extraction
from these documents is crucial to supporting downstream analytical tasks.
Current data extraction tools often struggle with complex document layouts,
incur high latency and/or cost on large datasets, and often require significant
human effort, when extracting tables or values given user-specified fields from
documents. The key insight of our tool, TWIX, is to predict the underlying
template used to create such documents, modeling the visual and structural
commonalities across documents. Data extraction based on this predicted
template provides a more principled, accurate, and efficient solution at a low
cost. Comprehensive evaluations on 34 diverse real-world datasets show that
uncovering the template is crucial for data extraction from templatized
documents. TWIX achieves over 90% precision and recall on average,
outperforming tools from industry: Textract and Azure Document Intelligence,
and vision-based LLMs like GPT-4-Vision, by over 25% in precision and recall.
TWIX scales easily to large datasets and is 734X faster and 5836X cheaper than
vision-based LLMs for extracting data from a large document collection with 817
pages.
|
2501.06660
|
MapGS: Generalizable Pretraining and Data Augmentation for Online
Mapping via Novel View Synthesis
|
cs.CV cs.RO
|
Online mapping reduces the reliance of autonomous vehicles on high-definition
(HD) maps, significantly enhancing scalability. However, recent advancements
often overlook cross-sensor configuration generalization, leading to
performance degradation when models are deployed on vehicles with different
camera intrinsics and extrinsics. With the rapid evolution of novel view
synthesis methods, we investigate the extent to which these techniques can be
leveraged to address the sensor configuration generalization challenge. We
propose a novel framework leveraging Gaussian splatting to reconstruct scenes
and render camera images in target sensor configurations. The rendered
target-configuration sensor data, along with labels mapped to the target
configuration, are used to train
online mapping models. Our proposed framework on the nuScenes and Argoverse 2
datasets demonstrates a performance improvement of 18% through effective
dataset augmentation, achieves faster convergence and efficient training, and
exceeds state-of-the-art performance when using only 25% of the original
training data. This enables data reuse and reduces the need for laborious data
labeling. Project page at https://henryzhangzhy.github.io/mapgs.
|
2501.06661
|
Learning dynamical systems with hit-and-run random feature maps
|
cs.LG physics.data-an stat.ME stat.ML
|
We show how random feature maps can be used to forecast dynamical systems
with excellent forecasting skill. We consider the tanh activation function and
judiciously choose the internal weights in a data-driven manner such that the
resulting features explore the nonlinear, non-saturated regions of the
activation function. We introduce skip connections and construct a deep variant
of random feature maps by combining several units. To mitigate the curse of
dimensionality, we introduce localization where we learn local maps, employing
conditional independence. Our modified random feature maps provide excellent
forecasting skill for both single trajectory forecasts as well as long-time
estimates of statistical properties, for a range of chaotic dynamical systems
with dimensions up to 512. In contrast to other methods such as reservoir
computers which require extensive hyperparameter tuning, we effectively need to
tune only a single hyperparameter, and are able to achieve state-of-the-art
forecast skill with much smaller networks.
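The pipeline described above, random tanh features with a trained linear readout rolled out autoregressively, can be sketched as follows. This is a minimal illustration with uniformly sampled internal weights; the paper's hit-and-run sampling instead chooses them in a data-driven way to keep features in the non-saturated region of tanh, and the skip connections, deep variants, and localization are omitted here.

```python
import numpy as np

def train_forecaster(traj, D_r=300, reg=1e-6, seed=0):
    """Fit a linear readout on random tanh features to predict u_{n+1} from u_n.

    Simplified sketch: internal weights (W, b) are drawn uniformly at random,
    whereas the paper selects them judiciously so features stay non-saturated.
    """
    rng = np.random.default_rng(seed)
    d = traj.shape[1]
    W = rng.uniform(-0.5, 0.5, size=(D_r, d))
    b = rng.uniform(-0.5, 0.5, size=D_r)
    phi = lambda U: np.tanh(U @ W.T + b)    # random feature map
    X = phi(traj[:-1])                      # features of current states
    Y = traj[1:]                            # next states (targets)
    # Ridge-regression readout: Y ~ X @ C.T
    C = np.linalg.solve(X.T @ X + reg * np.eye(D_r), X.T @ Y).T
    return phi, C

def forecast(phi, C, u0, n_steps):
    # Autonomous rollout: feed each prediction back in as the next input
    u, out = u0, []
    for _ in range(n_steps):
        u = phi(u[None, :])[0] @ C.T
        out.append(u)
    return np.array(out)
```

Usage: train on one trajectory of the system, then call `forecast` from the last observed state to produce single-trajectory predictions.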
|
2501.06662
|
The Magnitude of Categories of Texts Enriched by Language Models
|
math.CT cs.CL
|
The purpose of this article is twofold. Firstly, we use the next-token
probabilities given by a language model to explicitly define a
$[0,1]$-enrichment of a category of texts in natural language, in the sense of
Bradley, Terilla, and Vlassopoulos. We consider explicitly the terminating
conditions for text generation and determine when the enrichment itself can be
interpreted as a probability over texts. Secondly, we compute the M\"obius
function and the magnitude of an associated generalized metric space
$\mathcal{M}$ of texts using a combinatorial version of these quantities
recently introduced by Vigneaux. The magnitude function $f(t)$ of $\mathcal{M}$
is a sum over texts $x$ (prompts) of the Tsallis $t$-entropies of the
next-token probability distributions $p(-|x)$ plus the cardinality of the
model's possible outputs. The derivative of $f$ at $t=1$ recovers a sum of
Shannon entropies, which justifies seeing magnitude as a partition function.
Following Leinster and Schulman, we also express the magnitude function of
$\mathcal M$ as an Euler characteristic of magnitude homology and provide an
explicit description of the zeroth and first magnitude homology groups.
|
2501.06663
|
Ultra Memory-Efficient On-FPGA Training of Transformers via
Tensor-Compressed Optimization
|
cs.LG cs.AR cs.CL
|
Transformer models have achieved state-of-the-art performance across a wide
range of machine learning tasks. There is growing interest in training
transformers on resource-constrained edge devices due to considerations such as
privacy, domain adaptation, and on-device scientific machine learning. However,
the significant computational and memory demands required for transformer
training often exceed the capabilities of an edge device. Leveraging low-rank
tensor compression, this paper presents the first on-FPGA accelerator for
end-to-end transformer training. On the algorithm side, we present a
bi-directional contraction flow for tensorized transformer training,
significantly reducing the computational FLOPS and intra-layer memory costs
compared to existing tensor operations. On the hardware side, we store all
highly compressed model parameters and gradient information on chip, creating
an on-chip-memory-only framework for each stage in training. This reduces
off-chip communication and minimizes latency and energy costs. Additionally, we
implement custom computing kernels for each training stage and employ
intra-layer parallelism and pipe-lining to further enhance run-time and memory
efficiency. Through experiments on transformer models within $36.7$ to $93.5$
MB using FP-32 data formats on the ATIS dataset, our tensorized FPGA
accelerator could conduct single-batch end-to-end training on the AMD Alevo U50
FPGA, with a memory budget of less than $6$-MB BRAM and $22.5$-MB URAM.
Compared to uncompressed training on the NVIDIA RTX 3090 GPU, our on-FPGA
training achieves a memory reduction of $30\times$ to $51\times$. Our FPGA
accelerator also achieves up to $3.6\times$ less energy cost per epoch compared
with tensor Transformer training on an NVIDIA RTX 3090 GPU.
|
2501.06669
|
Challenging reaction prediction models to generalize to novel chemistry
|
cs.LG physics.chem-ph
|
Deep learning models for anticipating the products of organic reactions have
found many use cases, including validating retrosynthetic pathways and
constraining synthesis-based molecular design tools. Despite compelling
performance on popular benchmark tasks, strange and erroneous predictions
sometimes ensue when using these models in practice. The core issue is that
common benchmarks test models in an in-distribution setting, whereas many
real-world uses for these models are in out-of-distribution settings and
require a greater degree of extrapolation. To better understand how current
reaction predictors work in out-of-distribution domains, we report a series of
more challenging evaluations of a prototypical SMILES-based deep learning
model. First, we illustrate how performance on randomly sampled datasets is
overly optimistic compared to performance when generalizing to new patents or
new authors. Second, we conduct time splits that evaluate how models perform
when tested on reactions published in years after those in their training set,
mimicking real-world deployment. Finally, we consider extrapolation across
reaction classes to reflect what would be required for the discovery of novel
reaction types. This panel of tasks can reveal the capabilities and limitations
of today's reaction predictors, acting as a crucial first step in the
development of tomorrow's next-generation models capable of reaction discovery.
|
2501.06670
|
A Geometric Analysis-Based Safety Assessment Framework for MASS Route
Decision-Making in Restricted Waters
|
eess.SY cs.SY
|
To enhance the safety of Maritime Autonomous Surface Ships (MASS) navigating
in restricted waters, this paper aims to develop a geometric analysis-based
route safety assessment (GARSA) framework, specifically designed for their
route decision-making in irregularly shaped waterways. Utilizing line and point
geometric elements to define waterway boundaries, the framework enables the
construction of a dynamic width characterization function to quantify spatial safety
along intricate waterways. An iterative method is developed to calculate this
function, enabling an abstracted spatial property representation of the
waterways. Based on this, we introduce a navigational safety index that
balances global navigational safety and local risk to determine the safest
route. To accommodate ship kinematic constraints, path modifications are
applied using a dynamic window approach. A case study in a simulated Port of
Hamburg environment shows that GARSA effectively identifies safe routes and
avoids the risk of entering narrow waterways in an autonomous manner, thereby
prioritizing safety in route decision-making for MASS in confined waters.
|
2501.06678
|
Imbalanced Medical Image Segmentation with Pixel-dependent Noisy Labels
|
cs.CV cs.AI
|
Accurate medical image segmentation is often hindered by noisy labels in
training data, due to the challenges of annotating medical images. Prior
research works addressing noisy labels tend to make class-dependent
assumptions, overlooking the pixel-dependent nature of most noisy labels.
Furthermore, existing methods typically apply fixed thresholds to filter out
noisy labels, risking the removal of minority classes and consequently
degrading segmentation performance. To bridge these gaps, our proposed
framework, Collaborative Learning with Curriculum Selection (CLCS), addresses
pixel-dependent noisy labels with class imbalance. CLCS advances the existing
works by i) treating noisy labels as pixel-dependent and addressing them
through a collaborative learning framework, ii) employing a curriculum
dynamic thresholding approach that adapts to model learning progress to select
clean data samples and mitigate the class imbalance issue, and iii) applying a
noise balance loss to noisy data samples to improve data utilization instead of
discarding them outright. Specifically, our CLCS contains two modules:
Curriculum Noisy Label Sample Selection (CNS) and Noise Balance Loss (NBL). In
the CNS module, we design a two-branch network with a discrepancy loss for
collaborative learning, so that different feature representations of the same
instance can be extracted from distinct views and used to vote on the class
probabilities of pixels. In addition, a curriculum dynamic threshold is adopted to
select clean-label samples through probability voting. In the NBL module,
instead of directly dropping the suspiciously noisy labels, we further adopt a
robust loss to leverage such instances to boost the performance.
|
2501.06679
|
Coordinated Deliverable Energy Flexibility from EV Aggregators in
Distribution Networks
|
eess.SY cs.SY
|
This paper presents a coordinated framework to optimize electric vehicle (EV)
charging considering grid constraints and system uncertainties. The proposed
framework consists of two optimization models. In particular, the distribution
system operator (DSO) solves the first model to optimize the amount of
deliverable energy flexibility that can be obtained from EV aggregators. To
address the uncertainties of loads and solar energy generation, a hybrid
robust/stochastic approach is employed, enabling the transformation of
uncertainty-related constraints into a set of equivalent deterministic
constraints. Once the DSO has computed the optimal energy flexibility, each
aggregator utilizes the second optimization model to optimize the charging
schedule for its respective fleet of EVs. Numerical simulations are performed
on a modified IEEE 33-bus distribution network to illustrate the efficiency of
the proposed framework.
|
2501.06680
|
Application of Vision-Language Model to Pedestrians Behavior and Scene
Understanding in Autonomous Driving
|
cs.CV cs.AI cs.LG cs.RO
|
Autonomous driving (AD) has experienced significant improvements in recent
years and achieved promising 3D detection, classification, and localization
results. However, many challenges remain, e.g. semantic understanding of
pedestrians' behaviors, and downstream handling for pedestrian interactions.
Recent studies in applications of Large Language Models (LLM) and
Vision-Language Models (VLM) have achieved promising results in scene
understanding and high-level maneuver planning in diverse traffic scenarios.
However, deploying the billion-parameter LLMs to vehicles requires significant
computation and memory resources. In this paper, we analyzed effective
knowledge distillation of semantic labels to smaller Vision networks, which can
be used for the semantic representation of complex scenes for downstream
decision-making for planning and control.
|
2501.06682
|
Generative AI in Education: From Foundational Insights to the Socratic
Playground for Learning
|
cs.AI
|
This paper explores the synergy between human cognition and Large Language
Models (LLMs), highlighting how generative AI can drive personalized learning
at scale. We discuss parallels between LLMs and human cognition, emphasizing
both the promise and new perspectives on integrating AI systems into education.
After examining challenges in aligning technology with pedagogy, we review
AutoTutor, one of the earliest Intelligent Tutoring Systems (ITS), and detail its
successes, limitations, and unfulfilled aspirations. We then introduce the
Socratic Playground, a next-generation ITS that uses advanced transformer-based
models to overcome AutoTutor's constraints and provide personalized, adaptive
tutoring. To illustrate its evolving capabilities, we present a JSON-based
tutoring prompt that systematically guides learner reflection while tracking
misconceptions. Throughout, we underscore the importance of placing pedagogy at
the forefront, ensuring that technology's power is harnessed to enhance
teaching and learning rather than overshadow it.
|
2501.06685
|
Tab-Shapley: Identifying Top-k Tabular Data Quality Insights
|
cs.LG stat.ML
|
We present an unsupervised method for aggregating anomalies in tabular
datasets by identifying the top-k tabular data quality insights. Each insight
consists of a set of anomalous attributes and the corresponding subsets of
records that serve as evidence to the user. The process of identifying these
insight blocks is challenging due to (i) the absence of labeled anomalies, (ii)
the exponential size of the subset search space, and (iii) the complex
dependencies among attributes, which obscure the true sources of anomalies.
Simple frequency-based methods fail to capture these dependencies, leading to
inaccurate results. To address this, we introduce Tab-Shapley, a cooperative
game theory based framework that uses Shapley values to quantify the
contribution of each attribute to the data's anomalous nature. While
calculating Shapley values typically requires exponential time, we show that
our game admits a closed-form solution, making the computation efficient. We
validate the effectiveness of our approach through empirical analysis on
real-world tabular datasets with ground-truth anomaly labels.
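The Shapley attribution at the heart of this framework can be illustrated with the standard exact formula, which enumerates all coalitions of players (here, attributes). This generic sketch is exponential in the number of attributes; the paper's contribution is precisely that its anomaly-attribution game admits a closed-form solution avoiding this cost. The value function below is a placeholder, not the paper's game.

```python
from itertools import combinations
from math import factorial

def shapley_values(players, value_fn):
    """Exact Shapley values by enumerating every coalition.

    phi_p = sum over coalitions S not containing p of
            |S|! (n - |S| - 1)! / n! * [v(S + {p}) - v(S)]
    """
    n = len(players)
    phi = {p: 0.0 for p in players}
    for p in players:
        others = [q for q in players if q != p]
        for k in range(n):
            for S in combinations(others, k):
                w = factorial(k) * factorial(n - k - 1) / factorial(n)
                phi[p] += w * (value_fn(set(S) | {p}) - value_fn(set(S)))
    return phi
```

For an additive value function, each player's Shapley value reduces to its individual contribution, which makes a convenient sanity check.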
|
2501.06686
|
Understanding and Mitigating Membership Inference Risks of Neural
Ordinary Differential Equations
|
cs.CR cs.LG
|
Neural ordinary differential equations (NODEs) are an emerging paradigm in
scientific computing for modeling dynamical systems. By accurately learning
underlying dynamics in data in the form of differential equations, NODEs have
been widely adopted in various domains, such as healthcare, finance, computer
vision, and language modeling. However, there remains a limited understanding
of the privacy implications of these fundamentally different models,
particularly with regard to their membership inference risks.
In this work, we study the membership inference risks associated with NODEs.
We first comprehensively evaluate NODEs against membership inference attacks.
We show that NODEs are twice as resistant to these privacy attacks as
conventional feedforward models such as ResNets. By analyzing the variance in
membership risks across different NODE models, we identify the factors that
contribute to their lower risks. We then demonstrate, both theoretically and
empirically, that membership inference risks can be further mitigated by
utilizing a stochastic variant of NODEs: Neural stochastic differential
equations (NSDEs). We show that NSDEs are differentially-private (DP) learners
that provide the same provable privacy guarantees as DP-SGD, the de-facto
mechanism for training private models. NSDEs are also effective in mitigating
existing membership inference attacks, demonstrating risks comparable to
private models trained with DP-SGD while offering an improved privacy-utility
trade-off. Moreover, we propose a drop-in-replacement strategy that efficiently
integrates NSDEs into conventional feedforward models to enhance their privacy.
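A concrete instance of the membership inference attacks evaluated in this line of work is the classic loss-threshold baseline: training members tend to incur lower loss than non-members. The sketch below shows only this simple baseline; the paper evaluates a broader suite of attacks.

```python
import numpy as np

def loss_threshold_attack(losses, threshold):
    """Predict membership: flag a sample as a training member when the
    model's loss on it falls below the threshold (members are fit better)."""
    return np.asarray(losses) < threshold

def attack_advantage(member_losses, nonmember_losses, threshold):
    # Membership advantage = true-positive rate minus false-positive rate
    tpr = loss_threshold_attack(member_losses, threshold).mean()
    fpr = loss_threshold_attack(nonmember_losses, threshold).mean()
    return tpr - fpr
```

An advantage near zero indicates the model leaks little membership signal at that threshold, which is the regime the NSDE results aim for.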
|
2501.06689
|
TAPO: Task-Referenced Adaptation for Prompt Optimization
|
cs.CL
|
Prompt engineering can significantly improve the performance of large
language models (LLMs), with automated prompt optimization (APO) gaining
significant attention due to the time-consuming and laborious nature of manual
prompt design. However, much of the existing work in APO overlooks
task-specific characteristics, resulting in prompts that lack domain
specificity and are not well-suited for task-specific optimization. In this
paper, we introduce TAPO, a multitask-aware prompt optimization framework
composed of three key modules. First, a task-aware metric selection module is
proposed to enhance task-specific prompt generation capabilities. Second, we
present a multi-metrics evaluation module to jointly evaluate prompts from
multiple perspectives. Third, an evolution-based optimization framework is
introduced for automatic prompt refinement, which improves adaptability across
various tasks. Extensive experiments on six datasets demonstrate the
effectiveness of our approach, and our code is publicly available.
|
2501.06692
|
PGP-SAM: Prototype-Guided Prompt Learning for Efficient Few-Shot Medical
Image Segmentation
|
cs.CV cs.AI
|
The Segment Anything Model (SAM) has demonstrated strong and versatile
segmentation capabilities, along with intuitive prompt-based interactions.
However, customizing SAM for medical image segmentation requires massive
amounts of pixel-level annotations and precise point- or box-based prompt
designs. To address these challenges, we introduce PGP-SAM, a novel
prototype-based few-shot tuning approach that uses limited samples to replace
tedious manual prompts. Our key idea is to leverage inter- and intra-class
prototypes to capture class-specific knowledge and relationships. We propose
two main components: (1) a plug-and-play contextual modulation module that
integrates multi-scale information, and (2) a class-guided cross-attention
mechanism that fuses prototypes and features for automatic prompt generation.
Experiments on a public multi-organ dataset and a private ventricle dataset
demonstrate that PGP-SAM achieves superior mean Dice scores compared with
existing prompt-free SAM variants, while using only 10\% of the 2D slices.
|
2501.06693
|
Vid2Sim: Realistic and Interactive Simulation from Video for Urban
Navigation
|
cs.CV cs.RO
|
Sim-to-real gap has long posed a significant challenge for robot learning in
simulation, preventing the deployment of learned models in the real world.
Previous work has primarily focused on domain randomization and system
identification to mitigate this gap. However, these methods are often limited
by the inherent constraints of the simulation and graphics engines. In this
work, we propose Vid2Sim, a novel framework that effectively bridges the
sim2real gap through a scalable and cost-efficient real2sim pipeline for neural
3D scene reconstruction and simulation. Given a monocular video as input,
Vid2Sim can generate photorealistic and physically interactable 3D simulation
environments to enable the reinforcement learning of visual navigation agents
in complex urban environments. Extensive experiments demonstrate that Vid2Sim
significantly improves the performance of urban navigation in the digital twins
and real world by 31.2% and 68.3% in success rate compared with agents trained
with prior simulation methods.
|
2501.06695
|
DVM: Towards Controllable LLM Agents in Social Deduction Games
|
cs.AI
|
Large Language Models (LLMs) have advanced the capability of game agents in
social deduction games (SDGs). These games rely heavily on conversation-driven
interactions and require agents to infer, make decisions, and express based on
such information. While this progress leads to more sophisticated and strategic
non-player characters (NPCs) in SDGs, there exists a need to control the
proficiency of these agents. This control not only ensures that NPCs can adapt
to varying difficulty levels during gameplay, but also provides insights into
the safety and fairness of LLM agents. In this paper, we present DVM, a novel
framework for developing controllable LLM agents for SDGs, and demonstrate its
implementation on one of the most popular SDGs, Werewolf. DVM comprises three
main components: Predictor, Decider, and Discussor. By integrating
reinforcement learning with a win rate-constrained decision chain reward
mechanism, we enable agents to dynamically adjust their gameplay proficiency to
achieve specified win rates. Experiments show that DVM not only outperforms
existing methods in the Werewolf game, but also successfully modulates its
performance levels to meet predefined win rate targets. These results pave the
way for LLM agents' adaptive and balanced gameplay in SDGs, opening new avenues
for research in controllable game agents.
|
2501.06697
|
Mamba-MOC: A Multicategory Remote Object Counting via State Space Model
|
cs.CV cs.AI
|
Multicategory remote object counting is a fundamental task in computer
vision, aimed at accurately estimating the number of objects of various
categories in remote images. Existing methods rely on CNNs and Transformers,
but CNNs struggle to capture global dependencies, and Transformers are
computationally expensive, which limits their effectiveness in remote
applications. Recently, Mamba has emerged as a promising solution in the field
of computer vision, offering a linear complexity for modeling global
dependencies. To this end, we propose Mamba-MOC, a Mamba-based network designed
for multi-category remote object counting, which represents the first
application of Mamba to remote sensing object counting. Specifically, we
propose a cross-scale interaction module to facilitate the deep integration of
hierarchical features. Then we design a context state space model to capture
both global and local contextual information and provide local neighborhood
information during the scan process. Experimental results in large-scale
realistic scenarios demonstrate that our proposed method achieves
state-of-the-art performance compared with some mainstream counting algorithms.
|
2501.06699
|
Large Language Models, Knowledge Graphs and Search Engines: A Crossroads
for Answering Users' Questions
|
cs.AI cs.IR cs.SC
|
Much has been discussed about how Large Language Models, Knowledge Graphs and
Search Engines can be combined in a synergistic manner. A dimension largely
absent from current academic discourse is the user perspective. In particular,
there remain many open questions regarding how best to address the diverse
information needs of users, incorporating varying facets and levels of
difficulty. This paper introduces a taxonomy of user information needs, which
guides us to study the pros, cons and possible synergies of Large Language
Models, Knowledge Graphs and Search Engines. From this study, we derive a
roadmap for future research.
|
2501.06700
|
Average Reward Reinforcement Learning for Wireless Radio Resource
Management
|
cs.IT cs.LG cs.NI eess.SP math.IT
|
In this paper, we address a crucial but often overlooked issue in applying
reinforcement learning (RL) to radio resource management (RRM) in wireless
communications: the mismatch between the discounted reward RL formulation and
the undiscounted goal of wireless network optimization. To the best of our
knowledge, we are the first to systematically investigate this discrepancy,
starting with a discussion of the problem formulation followed by simulations
that quantify the extent of the gap. To bridge this gap, we introduce the use
of average reward RL, a method that aligns more closely with the long-term
objectives of RRM. We propose a new method, Average Reward Off-policy
Soft Actor-Critic (ARO SAC), an adaptation of the well-known Soft Actor-Critic
algorithm to the average reward framework. This new method achieves a
significant performance improvement: our simulation results demonstrate a 15%
gain in system performance over the traditional discounted reward RL
approach, underscoring the potential of average reward RL in enhancing the
efficiency and effectiveness of wireless network optimization.
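The average-reward formulation advocated here can be illustrated in its simplest tabular form, differential Q-learning, which replaces discounting with a learned average-reward estimate rho subtracted from each TD error. This is only a toy sketch of the formulation; ARO SAC itself is an off-policy soft actor-critic with neural function approximation.

```python
import numpy as np

def differential_q_learning(P, R, n_steps=20000, alpha=0.1, beta=0.01, seed=0):
    """Tabular average-reward (differential) Q-learning on a deterministic
    toy MDP: P[s, a] gives the next state, R[s, a] the reward.

    TD error uses (r - rho) in place of a discounted return, and rho itself
    is learned from the same TD errors.
    """
    n_s, n_a = R.shape
    rng = np.random.default_rng(seed)
    Q = np.zeros((n_s, n_a))
    rho = 0.0                       # running estimate of the average reward
    s = 0
    for _ in range(n_steps):
        # epsilon-greedy behavior policy
        a = rng.integers(n_a) if rng.random() < 0.1 else int(Q[s].argmax())
        s2, r = P[s, a], R[s, a]
        td = r - rho + Q[s2].max() - Q[s, a]   # differential TD error
        Q[s, a] += alpha * td
        rho += beta * td                       # update average-reward estimate
        s = s2
    return Q, rho
```

On a two-state chain where one state pays reward, rho converges toward the optimal long-run average reward rather than a discounted value.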
|
2501.06701
|
Sequential Portfolio Selection under Latent Side Information-Dependence
Structure: Optimality and Universal Learning Algorithms
|
q-fin.MF cs.IT cs.LG math.IT math.PR q-fin.PM
|
This paper investigates the investment problem of constructing an optimal
no-short sequential portfolio strategy in a market with a latent dependence
structure between asset prices and partly unobservable side information, which
is often high-dimensional. The results demonstrate that a dynamic strategy,
which forms a portfolio based on perfect knowledge of the dependence structure
and full market information over time, may not grow at a higher rate infinitely
often than a constant strategy, which remains invariant over time.
Specifically, if the market is stationary, implying that the dependence
structure is statistically stable, the growth rate of an optimal dynamic
strategy, utilizing the maximum capacity of the entire market information,
almost surely decays over time into an equilibrium state, asymptotically
converging to the growth rate of a constant strategy.
Technically, this work reassesses the common belief that a constant strategy
only attains the optimal limiting growth rate of dynamic strategies when the
market process is identically and independently distributed. By analyzing the
dynamic log-optimal portfolio strategy as the optimal benchmark in a stationary
market with side information, we show that a random optimal constant strategy
almost surely exists, even when a limiting growth rate for the dynamic strategy
does not. Consequently, two approaches to learning algorithms for portfolio
construction are discussed, demonstrating the safety of removing side
information from the learning process while still guaranteeing an asymptotic
growth rate comparable to that of the optimal dynamic strategy.
|
2501.06704
|
Fine-tuning ChatGPT for Automatic Scoring of Written Scientific
Explanations in Chinese
|
cs.AI cs.CL
|
The development of explanations for scientific phenomena is essential in
science assessment, but scoring student-written explanations remains
challenging and resource-intensive. Large language models (LLMs) have shown
promise in addressing this issue, particularly in alphabetic languages like
English. However, their applicability to logographic languages is less
explored. This study investigates the potential of fine-tuning ChatGPT, a
leading LLM, to automatically score scientific explanations written in Chinese.
Student responses to seven scientific explanation tasks were collected and
automatically scored, with scoring accuracy examined in relation to reasoning
complexity using the Kendall correlation. A qualitative analysis explored how
linguistic features influenced scoring accuracy. The results show that
domain-specific adaptation enables ChatGPT to score Chinese scientific
explanations accurately. However, scoring accuracy correlates with reasoning
complexity: a negative correlation for lower-level responses and a positive one
for higher-level responses. The model overrates complex reasoning in low-level
responses with intricate sentence structures and underrates high-level
responses using concise causal reasoning. These correlations stem from
linguistic features--simplicity and clarity enhance accuracy for lower-level
responses, while comprehensiveness improves accuracy for higher-level ones.
Simpler, shorter responses tend to score more accurately at lower levels,
whereas longer, information-rich responses yield better accuracy at higher
levels. These findings demonstrate the effectiveness of LLMs in automatic
scoring within a Chinese context and emphasize the importance of linguistic
features and reasoning complexity in fine-tuning scoring models for educational
assessments.
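The Kendall correlation used above to relate scoring accuracy to reasoning complexity can be computed with the standard pairwise formula. The sketch below is the tau-a variant without tie handling; a real analysis of ordinal score levels would typically use a tie-corrected variant such as tau-b.

```python
def kendall_tau(x, y):
    """Kendall rank correlation (tau-a): concordant minus discordant pairs,
    divided by the total number of pairs. Assumes no tied values."""
    n = len(x)
    concordant = discordant = 0
    for i in range(n):
        for j in range(i + 1, n):
            s = (x[i] - x[j]) * (y[i] - y[j])
            if s > 0:
                concordant += 1
            elif s < 0:
                discordant += 1
    return (concordant - discordant) / (n * (n - 1) / 2)
```

A value of +1 means the two rankings agree on every pair, -1 that they disagree on every pair.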
|
2501.06705
|
Quantum Data Sketches
|
cs.DB quant-ph
|
Recent advancements in quantum technologies, particularly in quantum sensing
and simulation, have facilitated the generation and analysis of inherently
quantum data. This progress underscores the necessity for developing efficient
and scalable quantum data management strategies. This goal faces immense
challenges due to the exponential dimensionality of quantum data and its unique
quantum properties such as no-cloning and measurement stochasticity.
Specifically, classical storage and manipulation of an arbitrary n-qubit
quantum state requires exponential space and time. Hence, there is a critical
need to revisit foundational data management concepts and algorithms for
quantum data. In this paper, we propose succinct quantum data sketches to
support basic database operations such as search and selection. We view our
work as an initial step towards the development of quantum data management
model, opening up many possibilities for future research in this direction.
|
2501.06706
|
AIOpsLab: A Holistic Framework to Evaluate AI Agents for Enabling
Autonomous Clouds
|
cs.AI cs.DC cs.MA cs.SE
|
AI for IT Operations (AIOps) aims to automate complex operational tasks, such
as fault localization and root cause analysis, to reduce human workload and
minimize customer impact. While traditional DevOps tools and AIOps algorithms
often focus on addressing isolated operational tasks, recent advances in Large
Language Models (LLMs) and AI agents are revolutionizing AIOps by enabling
end-to-end and multitask automation. This paper envisions a future where AI
agents autonomously manage operational tasks throughout the entire incident
lifecycle, leading to self-healing cloud systems, a paradigm we term AgentOps.
Realizing this vision requires a comprehensive framework to guide the design,
development, and evaluation of these agents. To this end, we present AIOPSLAB,
a framework that not only deploys microservice cloud environments, injects
faults, generates workloads, and exports telemetry data but also orchestrates
these components and provides interfaces for interacting with and evaluating
agents. We discuss the key requirements for such a holistic framework and
demonstrate how AIOPSLAB can facilitate the evaluation of next-generation AIOps
agents. Through evaluations of state-of-the-art LLM agents within the benchmark
created by AIOPSLAB, we provide insights into their capabilities and
limitations in handling complex operational tasks in cloud environments.
|
2501.06707
|
ELIZA Reanimated: The world's first chatbot restored on the world's
first time sharing system
|
cs.AI cs.CY cs.SC
|
ELIZA, created by Joseph Weizenbaum at MIT in the early 1960s, is usually
considered the world's first chatbot. It was developed in MAD-SLIP on MIT's
CTSS, the world's first time-sharing system, on an IBM 7094. We discovered an
original ELIZA printout in Prof. Weizenbaum's archives at MIT, including an
early version of the famous DOCTOR script, a nearly complete version of the
MAD-SLIP code, and various support functions in MAD and FAP. Here we describe
the reanimation of this original ELIZA on a restored CTSS, itself running on an
emulated IBM 7094. The entire stack is open source, so that any user of a
unix-like OS can run the world's first chatbot on the world's first
time-sharing system.
|
2501.06708
|
Evaluating Sample Utility for Data Selection by Mimicking Model Weights
|
cs.LG cs.AI
|
Foundation models are trained on large-scale web-crawled datasets, which
often contain noise, biases, and irrelevant information. This motivates the use
of data selection techniques, which can be divided into model-free variants --
relying on heuristic rules and downstream datasets -- and model-based, e.g.,
using influence functions. The former can be expensive to design and risk
introducing unwanted dependencies, while the latter are often computationally
prohibitive. Instead, we propose an efficient, model-based approach using the
Mimic Score, a new data quality metric that leverages the weights of a
reference model to assess the usefulness of individual samples for training a
new model. It relies on the alignment between gradients and a target direction
induced by the reference model. Using the derived Mimic Scores, we develop
Grad-Mimic, a framework that prioritizes samples for learning, creates
effective filters, and automates data selection. Empirically, using Mimic
Scores to guide training improves data efficiency, yields consistent
performance gains across six image datasets, and enhances CLIP models.
Moreover, Mimic Score-based filters improve upon existing filtering
methods, e.g., cutting 4.7 million samples to train better CLIP models while
offering accurate estimation of training dataset quality.
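The gradient-alignment idea behind the Mimic Score can be pictured in a few lines. The snippet below is a toy sketch, not the paper's implementation: it scores a sample by the cosine alignment between the sample's descent step (negative gradient) and the direction from the current weights toward the reference model's weights. All names and the flattened-weight representation are illustrative assumptions.

```python
import math

def mimic_score(sample_grad, w_current, w_ref):
    """Toy sketch: cosine alignment between a sample's descent step and
    the target direction induced by the reference model's weights."""
    # direction from the current weights toward the reference weights
    direction = [r - c for r, c in zip(w_ref, w_current)]
    # gradient descent moves along the negative gradient
    step = [-g for g in sample_grad]
    dot = sum(s * d for s, d in zip(step, direction))
    norm = math.sqrt(sum(s * s for s in step)) * math.sqrt(sum(d * d for d in direction))
    return dot / (norm + 1e-12)

w_cur = [0.0, 0.0, 0.0]
w_ref = [1.0, 0.0, 0.0]
helpful = mimic_score([-1.0, 0.0, 0.0], w_cur, w_ref)  # step points toward the reference
harmful = mimic_score([1.0, 0.0, 0.0], w_cur, w_ref)   # step points away from it
print(helpful, harmful)
```

Samples whose updates pull the model toward the reference score near +1 and would be kept; samples pulling it away score near -1 and would be filtered.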
|
2501.06710
|
Multi-task Visual Grounding with Coarse-to-Fine Consistency Constraints
|
cs.CV cs.AI
|
Multi-task visual grounding involves the simultaneous execution of
localization and segmentation in images based on textual expressions. Most
advanced methods focus on transformer-based multimodal fusion, aiming to
extract robust multimodal representations.
However, ambiguity between referring expression comprehension (REC) and
referring image segmentation (RIS) is error-prone, leading to inconsistencies
between multi-task predictions. Besides, insufficient multimodal understanding
directly contributes to biased target perception. To overcome these challenges,
we propose a Coarse-to-fine Consistency Constraints Visual Grounding
architecture ($\text{C}^3\text{VG}$), which integrates implicit and explicit
modeling approaches within a two-stage framework. Initially, query and pixel
decoders are employed to generate preliminary detection and segmentation
outputs, a process referred to as the Rough Semantic Perception (RSP) stage.
These coarse predictions are subsequently refined through the proposed
Mask-guided Interaction Module (MIM) and a novel explicit bidirectional
consistency constraint loss to ensure consistent representations across tasks,
which we term the Refined Consistency Interaction (RCI) stage. Furthermore, to
address the challenge of insufficient multimodal understanding, we leverage
pre-trained models based on visual-linguistic fusion representations. Empirical
evaluations on the RefCOCO, RefCOCO+, and RefCOCOg datasets demonstrate the
efficacy and soundness of $\text{C}^3\text{VG}$, which outperforms
state-of-the-art REC and RIS methods by a substantial margin. Code
and model will be available at \url{https://github.com/Dmmm1997/C3VG}.
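One way to picture an explicit cross-task consistency term: derive the tightest box implied by the predicted mask and penalize its IoU disagreement with the predicted box. This is a simplified stand-in, not the paper's bidirectional loss, and every helper below is hypothetical.

```python
def mask_to_box(mask):
    """Tightest (x0, y0, x1, y1) box around the nonzero cells of a binary mask."""
    xs = [x for row in mask for x, v in enumerate(row) if v]
    ys = [y for y, row in enumerate(mask) for v in row if v]
    return min(xs), min(ys), max(xs) + 1, max(ys) + 1

def box_iou(a, b):
    ix0, iy0 = max(a[0], b[0]), max(a[1], b[1])
    ix1, iy1 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix1 - ix0) * max(0, iy1 - iy0)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter)

def consistency_penalty(pred_box, pred_mask):
    """Zero when the detection box matches the box implied by the mask."""
    return 1.0 - box_iou(pred_box, mask_to_box(pred_mask))

# 8x8 mask covering the square [2, 6) x [2, 6)
mask = [[1 if 2 <= x < 6 and 2 <= y < 6 else 0 for x in range(8)] for y in range(8)]
print(consistency_penalty((2, 2, 6, 6), mask))       # 0.0: box and mask agree
print(consistency_penalty((0, 0, 4, 4), mask) > 0)   # a shifted box is penalized
```

A differentiable version of this idea (plus the reverse mask-from-box direction) is what a bidirectional consistency constraint would optimize during training.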
|
2501.06713
|
MiniRAG: Towards Extremely Simple Retrieval-Augmented Generation
|
cs.AI
|
The growing demand for efficient and lightweight Retrieval-Augmented
Generation (RAG) systems has highlighted significant challenges when deploying
Small Language Models (SLMs) in existing RAG frameworks. Current approaches
face severe performance degradation due to SLMs' limited semantic understanding
and text processing capabilities, creating barriers for widespread adoption in
resource-constrained scenarios. To address these fundamental limitations, we
present MiniRAG, a novel RAG system designed for extreme simplicity and
efficiency. MiniRAG introduces two key technical innovations: (1) a
semantic-aware heterogeneous graph indexing mechanism that combines text chunks
and named entities in a unified structure, reducing reliance on complex
semantic understanding, and (2) a lightweight topology-enhanced retrieval
approach that leverages graph structures for efficient knowledge discovery
without requiring advanced language capabilities. Our extensive experiments
demonstrate that MiniRAG achieves comparable performance to LLM-based methods
even when using SLMs while requiring only 25\% of the storage space.
Additionally, we contribute a comprehensive benchmark dataset for evaluating
lightweight RAG systems under realistic on-device scenarios with complex
queries. We fully open-source our implementation and datasets at:
https://github.com/HKUDS/MiniRAG.
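To make the indexing idea concrete, here is a toy bipartite chunk-entity index with topology-only retrieval. This is a simplified sketch, not MiniRAG's actual code; the capitalized-token "entity extractor" is a deliberate placeholder for a real NER step.

```python
from collections import defaultdict

def extract_entities(text):
    # placeholder NER: treat capitalized tokens as named entities
    return {tok.strip(".,?!") for tok in text.split() if tok[0].isupper()}

def build_index(chunks):
    # entity -> chunk ids: the heterogeneous graph reduced to its bipartite edges
    index = defaultdict(set)
    for cid, text in enumerate(chunks):
        for ent in extract_entities(text):
            index[ent].add(cid)
    return index

def retrieve(index, query, top_k=2):
    # score chunks by how many query entities link to them (no LLM involved)
    scores = defaultdict(int)
    for ent in extract_entities(query):
        for cid in index.get(ent, ()):
            scores[cid] += 1
    return sorted(scores, key=lambda cid: -scores[cid])[:top_k]

chunks = [
    "Paris is the capital of France.",
    "Berlin is the capital of Germany.",
    "France borders Germany.",
]
index = build_index(chunks)
print(retrieve(index, "Which cities are in France?"))
```

Because retrieval walks graph edges rather than asking a model to judge relevance, it stays cheap enough for the small-language-model setting the abstract targets.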
|
2501.06714
|
F3D-Gaus: Feed-forward 3D-aware Generation on ImageNet with
Cycle-Consistent Gaussian Splatting
|
cs.CV
|
This paper tackles the problem of generalizable 3D-aware generation from
monocular datasets, e.g., ImageNet. The key challenge of this task is learning
a robust 3D-aware representation without multi-view or dynamic data, while
ensuring consistent texture and geometry across different viewpoints. Although
some baseline methods are capable of 3D-aware generation, the quality of the
generated images still lags behind state-of-the-art 2D generation approaches,
which excel in producing high-quality, detailed images. To address this severe
limitation, we propose a novel feed-forward pipeline based on pixel-aligned
Gaussian Splatting, coined as F3D-Gaus, which can produce more realistic and
reliable 3D renderings from monocular inputs. In addition, we introduce a
self-supervised cycle-consistent constraint to enforce cross-view consistency
in the learned 3D representation. This training strategy naturally allows
aggregation of multiple aligned Gaussian primitives and significantly
alleviates the interpolation limitations inherent in single-view pixel-aligned
Gaussian Splatting. Furthermore, we incorporate video model priors to perform
geometry-aware refinement, enhancing the generation of fine details in
wide-viewpoint scenarios and improving the model's capability to capture
intricate 3D textures. Extensive experiments demonstrate that our approach not
only achieves high-quality, multi-view consistent 3D-aware generation from
monocular datasets, but also significantly improves training and inference
efficiency.
|
2501.06715
|
ZNO-Eval: Benchmarking reasoning capabilities of large language models
in Ukrainian
|
cs.CL cs.AI
|
As the usage of large language models for problems outside of simple text
understanding or generation increases, assessing their abilities and
limitations becomes crucial. While significant progress has been made in this
area over the last few years, most research has focused on benchmarking
English, leaving other languages underexplored. This makes evaluating the
reasoning and robustness level of language models in Ukrainian particularly
challenging. The purpose of this work is to establish a comprehensive benchmark
for the reasoning capabilities evaluation of large language models in the
Ukrainian language. This paper presents the ZNO-Eval benchmark based on real
exam tasks from Ukraine's standardized educational testing system: the External
Independent Evaluation and the National Multi-subject Test. With single-answer
options, multiple-choice, matching, and open-ended questions from diverse
subjects, including Ukrainian language, mathematics, history, and geography,
this dataset paves the way toward a thorough analysis of reasoning capabilities
across different domains and complexities. Evaluation of several well-known
language models, such as GPT-3.5-Turbo, GPT-4o, GPT-4-Turbo, Mistral Large,
Claude 3 Opus, and Gemini-1.5 Pro on this benchmark demonstrated the
superiority of GPT-4o in both common knowledge reasoning and intricate language
tasks. At the same time, Gemini Pro and GPT-4 Turbo excelled in the arithmetic
domain, leading in single-answer and open-ended math problems. While all models
were close to maximum performance in text-only common-knowledge tasks like
history and geography, a gap remains for Ukrainian language and math, thus
highlighting the importance of developing specialized language benchmarks for
more accurate assessments of model capabilities and limitations across
different languages and contexts.
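Subject-wise scoring of the kind reported above reduces to a small aggregation. The snippet below is an illustrative sketch only; the field names are made up and are not the benchmark's schema.

```python
from collections import defaultdict

def per_subject_accuracy(records):
    """Aggregate exact-match accuracy per subject (toy evaluation loop)."""
    hits, totals = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r["subject"]] += 1
        hits[r["subject"]] += int(r["prediction"] == r["answer"])
    return {subj: hits[subj] / totals[subj] for subj in totals}

runs = [
    {"subject": "mathematics", "prediction": "A", "answer": "A"},
    {"subject": "mathematics", "prediction": "B", "answer": "C"},
    {"subject": "history", "prediction": "D", "answer": "D"},
]
print(per_subject_accuracy(runs))  # {'mathematics': 0.5, 'history': 1.0}
```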
|
2501.06718
|
DRDT3: Diffusion-Refined Decision Test-Time Training Model
|
cs.LG
|
Decision Transformer (DT), a trajectory modeling method, has shown
competitive performance compared to traditional offline reinforcement learning
(RL) approaches on various classic control tasks. However, it struggles to
learn optimal policies from suboptimal, reward-labeled trajectories. In this
study, we explore the use of conditional generative modeling to facilitate
trajectory stitching given its high-quality data generation ability.
Additionally, recent advancements in Recurrent Neural Networks (RNNs) have
demonstrated linear complexity and sequence modeling performance competitive
with Transformers. We leverage the Test-Time Training (TTT) layer, an RNN that
updates hidden states during testing, to model trajectories in the form of DT.
We introduce a unified framework, called Diffusion-Refined Decision TTT
(DRDT3), to achieve performance beyond DT models. Specifically, we propose the
Decision TTT (DT3) module, which harnesses the sequence modeling strengths of
both self-attention and the TTT layer to capture recent contextual information
and make coarse action predictions. We further integrate DT3 with the diffusion
model using a unified optimization objective. With experiments on multiple
tasks of Gym and AntMaze in the D4RL benchmark, our DT3 model without diffusion
refinement demonstrates improved performance over standard DT, while DRDT3
further achieves superior results compared to state-of-the-art conventional
offline RL and DT-based methods.
|
2501.06719
|
Hierarchical Sampling-based Planner with LTL Constraints and Text
Prompting
|
cs.RO cs.SY eess.SY
|
This project introduces a hierarchical planner integrating Linear Temporal
Logic (LTL) constraints with natural language prompting for robot motion
planning. The framework decomposes maps into regions, generates directed
graphs, and converts them into transition systems for high-level planning. Text
instructions are translated into LTL formulas and converted to Deterministic
Finite Automata (DFA) for sequential goal-reaching tasks while adhering to
safety constraints. High-level plans, derived via Breadth-First Search (BFS),
guide low-level planners such as Rapidly-exploring Random Trees (RRT) and
Probabilistic Roadmaps (PRM) for obstacle-avoiding navigation that satisfies
the LTL tasks. The
approach demonstrates adaptability to various task complexities, though
challenges such as graph construction overhead and suboptimal path generation
remain. Future directions include extending the framework to account for
terrain conditions and to incorporate higher-order dynamics.
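The high-level step can be pictured as plain BFS over the region graph. In the actual framework the search runs on a transition system constrained by the DFA of the LTL formula, which this sketch omits; the region graph below is a made-up example.

```python
from collections import deque

def bfs_plan(graph, start, goal):
    """Shortest region sequence from start to goal via breadth-first search."""
    parent = {start: None}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        if node == goal:
            # reconstruct the path by walking parent links back to the start
            path = []
            while node is not None:
                path.append(node)
                node = parent[node]
            return path[::-1]
        for nxt in graph.get(node, ()):
            if nxt not in parent:
                parent[nxt] = node
                queue.append(nxt)
    return None  # goal unreachable

regions = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
print(bfs_plan(regions, "A", "D"))  # ['A', 'B', 'D']
```

Each waypoint in the returned region sequence would then be handed to the low-level RRT or PRM planner for collision-free motion within and between regions.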
|