| id | title | categories | abstract |
|---|---|---|---|
2501.00037 | Effects of Turbulence Modeling and Parcel Approach on Dispersed
Two-Phase Swirling Flow | physics.flu-dyn cs.CE cs.NA math.NA | Several numerical simulations of a co-axial particle-laden swirling air flow
in a vertical circular pipe were performed. The air flow was modeled using the
unsteady Favre-averaged Navier-Stokes equations. A Lagrangian model was used
for the particle motion. The gas and particles are coupled through two-way
momentum exchange. The results of the simulations using three versions of the
k-epsilon turbulence model (standard, re-normalization group (RNG), and
realizable) are compared with experimental mean velocity profiles. The standard
model achieved the best overall performance. The realizable model was unable to
satisfactorily predict the radial velocity; it is also the most
computationally expensive model. The simulations using the RNG model predicted
additional recirculation zones. We also compared the particle and parcel
approaches in solving the particle motion. In the latter, multiple similar
particles are grouped in a single parcel, thereby reducing the amount of
computation.
|
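The parcel approach described in the abstract above is easy to sketch: several similar particles are replaced by one computational parcel that carries their combined statistical weight while conserving total mass and momentum. A minimal pure-Python illustration, assuming 1-D velocities and a fixed parcel size (simplifications, not the paper's actual setup):

```python
from dataclasses import dataclass

@dataclass
class Parcel:
    """A computational parcel standing in for `weight` identical particles."""
    weight: int      # number of physical particles represented
    mass: float      # mass of one physical particle
    velocity: float  # shared velocity (1-D for brevity)

def group_into_parcels(particle_velocities, particle_mass, parcel_size):
    """Group every `parcel_size` similar particles into one parcel.

    Total mass and momentum are conserved: each parcel moves with the
    mean velocity of its members and carries their combined weight.
    """
    parcels = []
    for i in range(0, len(particle_velocities), parcel_size):
        chunk = particle_velocities[i:i + parcel_size]
        mean_v = sum(chunk) / len(chunk)
        parcels.append(Parcel(weight=len(chunk), mass=particle_mass,
                              velocity=mean_v))
    return parcels

velocities = [1.0, 1.2, 0.9, 1.1, 2.0, 2.2]
parcels = group_into_parcels(velocities, particle_mass=1e-6, parcel_size=3)
total_momentum = sum(p.weight * p.mass * p.velocity for p in parcels)
```

Tracking two parcels instead of six particles is what reduces the amount of computation; the trade-off is that within-parcel velocity variance is lost.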
2501.00038 | Sound-Based Recognition of Touch Gestures and Emotions for Enhanced
Human-Robot Interaction | cs.HC cs.RO cs.SD eess.AS | Emotion recognition and touch gesture decoding are crucial for advancing
human-robot interaction (HRI), especially in social environments where
emotional cues and tactile perception play important roles. However, many
humanoid robots, such as Pepper, Nao, and Furhat, lack full-body tactile skin,
limiting their ability to engage in touch-based emotional and gesture
interactions. In addition, vision-based emotion recognition methods usually
face strict GDPR compliance challenges due to the need to collect personal
facial data. To address these limitations and avoid privacy issues, this paper
studies the potential of using the sounds produced by touching during HRI to
recognise tactile gestures and classify emotions along the arousal and valence
dimensions. Using a dataset of tactile gestures and emotional interactions from
28 participants with the humanoid robot Pepper, we design an audio-only
lightweight touch gesture and emotion recognition model with only 0.24M
parameters, 0.94MB model size, and 0.7G FLOPs. Experimental results show that
the proposed sound-based touch gesture and emotion recognition model
effectively recognises the arousal and valence states of different emotions, as
well as various tactile gestures, when the input audio length varies. The
proposed model is low-latency and achieves results comparable to well-known
pretrained audio neural networks (PANNs), but with far fewer FLOPs and
parameters and a much smaller model size.
|
2501.00039 | Speech Recognition With LLMs Adapted to Disordered Speech Using
Reinforcement Learning | eess.AS cs.CL cs.LG cs.SD | We introduce a large language model (LLM) capable of processing speech inputs
and show that tuning it further with reinforcement learning on human preference
(RLHF) enables it to adapt better to disordered speech than traditional
fine-tuning. Our method replaces low-frequency text tokens in an LLM's
vocabulary with audio tokens and enables the model to recognize speech by
fine-tuning it on speech with transcripts. We then use RL with rewards based on
syntactic and semantic accuracy measures to further generalize the LLM to
recognize disordered speech. While the resulting LLM does not outperform
existing systems for speech recognition, we find that tuning with reinforcement
learning using custom rewards leads to substantially better performance than
supervised fine-tuning of the language model, specifically when adapting to
speech in a different setting. This presents a compelling alternative tuning
strategy for speech recognition using large language models.
|
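The vocabulary-swapping step described above, replacing low-frequency text tokens with audio tokens, can be sketched with a plain dictionary; the `<audio_k>` token format and the toy frequency table are illustrative assumptions, not the paper's actual tokenizer:

```python
def replace_low_frequency_tokens(vocab_freq, num_audio_tokens):
    """Repurpose the `num_audio_tokens` least-frequent text tokens as audio tokens.

    `vocab_freq`: dict mapping token string -> (token_id, corpus frequency).
    Returns an id -> token table where the rarest text tokens' ids now
    denote discrete audio tokens <audio_0> ... <audio_{k-1}>.
    """
    # Sort tokens by ascending corpus frequency; rarest come first.
    rarest = sorted(vocab_freq.items(), key=lambda kv: kv[1][1])[:num_audio_tokens]
    id_to_token = {tid: tok for tok, (tid, _) in vocab_freq.items()}
    for slot, (tok, (tid, _)) in enumerate(rarest):
        id_to_token[tid] = f"<audio_{slot}>"
    return id_to_token

# Hypothetical miniature vocabulary: token -> (id, frequency).
vocab = {"the": (0, 9000), "of": (1, 7000), "zyzzyva": (2, 1), "qoph": (3, 2)}
table = replace_low_frequency_tokens(vocab, num_audio_tokens=2)
```

Keeping the vocabulary size fixed this way means the LLM's embedding and output layers need no structural change; only the fine-tuning data teaches the reused ids their new acoustic meaning.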
2501.00042 | Resource-Efficient Transformer Architecture: Optimizing Memory and
Execution Time for Real-Time Applications | cs.LG cs.AI | This paper describes a memory-efficient transformer model designed to
substantially reduce memory usage and execution time while keeping performance
close to that of the original model.
Recently, new architectures of transformers were presented, focused on
parameter efficiency and computational optimization; however, such models
usually require considerable resources in terms of hardware when deployed in
real-world applications on edge devices. This approach addresses this concern
by halving embedding size and applying targeted techniques such as parameter
pruning and quantization to optimize the memory footprint with minimum
sacrifices in terms of accuracy. Experimental results include a 52% reduction
in memory usage and a 33% decrease in execution time, resulting in better
efficiency than state-of-the-art models. We compared our model with
established compact architectures such as MobileBERT and DistilBERT and
demonstrated its suitability for resource-friendly deep learning,
particularly in real-time and resource-constrained applications.
|
2501.00045 | Cross-Linguistic Examination of Machine Translation Transfer Learning | cs.CL cs.LG | This study investigates the effectiveness of transfer learning in machine
translation across diverse linguistic families by evaluating five distinct
language pairs. Leveraging pre-trained models on high-resource languages, these
models were fine-tuned on low-resource languages, examining variations in
hyperparameters such as learning rate, batch size, number of epochs, and weight
decay. The research encompasses language pairs from different linguistic
backgrounds: Semitic (Modern Standard Arabic - Levantine Arabic), Bantu (Hausa
- Zulu), Romance (Spanish - Catalan), Slavic (Slovakian - Macedonian), and
language isolates (Eastern Armenian - Western Armenian). Results demonstrate
that transfer learning is effective across different language families,
although the impact of hyperparameters varies. A moderate batch size (e.g., 32)
is generally more effective, while very high learning rates can disrupt model
training. The study highlights the universality of transfer learning in
multilingual contexts and suggests that consistent hyperparameter settings can
simplify and enhance the efficiency of multilingual model training.
|
2501.00046 | Numerical solutions of fixed points in two-dimensional
Kuramoto-Sivashinsky equation expedited by reinforcement learning | cs.LG | This paper presents a combined approach to enhancing the effectiveness of
Jacobian-Free Newton-Krylov (JFNK) method by deep reinforcement learning (DRL)
in identifying fixed points within the 2D Kuramoto-Sivashinsky Equation (KSE).
The JFNK approach requires a good initial guess for reliable convergence when
searching for fixed points. With a properly defined reward function, we utilise
DRL as a preliminary step to enhance the initial guess in the converging
process. We report new results of fixed points in the 2D KSE which have not
been reported in the literature. Additionally, we explored control optimization
for the 2D KSE to navigate the system trajectories between known fixed points,
based on parallel reinforcement learning techniques. This combined method
underscores the improved JFNK approach to finding new fixed-point solutions
within the context of 2D KSE, which may be instructive for other
high-dimensional dynamical systems.
|
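The abstract above rests on two ideas: Newton-type iteration for fixed points and the quality of the initial guess. A 1-D toy version of Jacobian-free Newton iteration for a fixed point f(x) = x shows the mechanics; the 2-D KSE and the DRL guess-refinement are, of course, far beyond this sketch:

```python
import math

def find_fixed_point(f, x0, tol=1e-10, max_iter=50):
    """Newton iteration for a fixed point f(x) = x, i.e. a root of g(x) = f(x) - x.

    Jacobian-free flavour: g'(x) is approximated by a finite difference,
    the 1-D analogue of the Jacobian-vector products used in JFNK.
    """
    x, h = x0, 1e-7
    for _ in range(max_iter):
        g = f(x) - x
        if abs(g) < tol:
            return x
        dg = ((f(x + h) - (x + h)) - g) / h  # finite-difference g'(x)
        x -= g / dg
    return x

# Toy map: f(x) = cos(x) has a unique fixed point near 0.7391 (the Dottie number).
x_star = find_fixed_point(math.cos, x0=0.7)
```

A better starting point (here 0.7, in the paper a DRL-refined guess) shrinks the number of Newton iterations needed; with a poor guess, Newton's method can stall or diverge.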
2501.00048 | Stroke Prediction using Clinical and Social Features in Machine Learning | cs.LG cs.AI | Every year in the United States, 800,000 individuals suffer a stroke - one
person every 40 seconds, with a death occurring every four minutes. While
individual factors vary, certain predictors are more prevalent in determining
stroke risk. As strokes are the second leading cause of death and disability
worldwide, predicting stroke likelihood based on lifestyle factors is crucial.
Showing individuals their stroke risk could motivate lifestyle changes, and
machine learning offers solutions to this prediction challenge. Neural networks
excel at predicting outcomes based on training features like lifestyle
factors; however, they are not the only option. Logistic regression models can also
effectively compute the likelihood of binary outcomes based on independent
variables, making them well-suited for stroke prediction. This analysis will
compare both neural networks (dense and convolutional) and logistic regression
models for stroke prediction, examining their pros, cons, and differences to
develop the most effective predictor that minimizes false negatives.
|
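As a concrete illustration of the logistic-regression option discussed above, here is a from-scratch gradient-descent fit on a tiny synthetic dataset; the two features (scaled age, hypertension flag) are hypothetical stand-ins for the study's clinical and social predictors, not its actual data:

```python
import math

def train_logistic_regression(X, y, lr=0.5, epochs=2000):
    """Plain gradient-descent logistic regression for a binary outcome."""
    n_features = len(X[0])
    w, b = [0.0] * n_features, 0.0
    for _ in range(epochs):
        gw, gb = [0.0] * n_features, 0.0
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            p = 1.0 / (1.0 + math.exp(-z))  # predicted stroke probability
            err = p - yi
            for j in range(n_features):
                gw[j] += err * xi[j]
            gb += err
        w = [wj - lr * gwj / len(X) for wj, gwj in zip(w, gw)]
        b -= lr * gb / len(X)
    return w, b

def predict(w, b, xi):
    """Sigmoid of the linear score: probability of the positive class."""
    return 1.0 / (1.0 + math.exp(-(sum(wj * xj for wj, xj in zip(w, xi)) + b)))

# Hypothetical features: [age / 100, hypertension flag]; label 1 = stroke.
X = [[0.8, 1], [0.75, 1], [0.7, 0], [0.3, 0],
     [0.25, 0], [0.4, 1], [0.85, 1], [0.2, 0]]
y = [1, 1, 1, 0, 0, 0, 1, 0]
w, b = train_logistic_regression(X, y)
```

Minimizing false negatives, as the abstract emphasizes, would then amount to lowering the 0.5 decision threshold rather than changing the model.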
2501.00049 | Seq2Seq Model-Based Chatbot with LSTM and Attention Mechanism for
Enhanced User Interaction | cs.CL cs.ET | A chatbot is an intelligent software application that automates conversations
and engages users in natural language through messaging platforms. Leveraging
artificial intelligence (AI), chatbots serve various functions, including
customer service, information gathering, and casual conversation. Existing
virtual assistant chatbots, such as ChatGPT and Gemini, demonstrate the
potential of AI in Natural Language Processing (NLP). However, many current
solutions rely on predefined APIs, which can result in vendor lock-in and high
costs. To address these challenges, this work proposes a chatbot developed
using a Sequence-to-Sequence (Seq2Seq) model with an encoder-decoder
architecture that incorporates attention mechanisms and Long Short-Term Memory
(LSTM) cells. By avoiding predefined APIs, this approach ensures flexibility
and cost-effectiveness. The chatbot is trained, validated, and tested on a
dataset specifically curated for the tourism sector in Draa-Tafilalet, Morocco.
Key evaluation findings indicate that the proposed Seq2Seq model-based chatbot
achieved high accuracies: approximately 99.58% in training, 98.03% in
validation, and 94.12% in testing. These results demonstrate the chatbot's
effectiveness in providing relevant and coherent responses within the tourism
domain, highlighting the potential of specialized AI applications to enhance
user experience and satisfaction in niche markets.
|
2501.00050 | Learning in Multiple Spaces: Few-Shot Network Attack Detection with
Metric-Fused Prototypical Networks | cs.CR cs.LG | Network intrusion detection systems face significant challenges in
identifying emerging attack patterns, especially when limited data samples are
available. To address this, we propose a novel Multi-Space Prototypical
Learning (MSPL) framework tailored for few-shot attack detection. The framework
operates across multiple metric spaces (Euclidean, cosine, Chebyshev, and
Wasserstein distances), integrated through a constrained weighting scheme to
enhance embedding robustness and improve pattern recognition. By leveraging
Polyak-averaged prototype generation, the framework stabilizes the learning
process and effectively adapts to rare and zero-day attacks. Additionally, an
episodic training paradigm ensures balanced representation across diverse
attack classes, enabling robust generalization. Experimental results on
benchmark datasets demonstrate that MSPL outperforms traditional approaches in
detecting low-profile and novel attack types, establishing it as a robust
solution for zero-day attack detection.
|
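The multi-space idea above, class prototypes compared under several metrics fused by a constrained (convex) weighting, can be sketched as follows; the Wasserstein term is omitted for brevity and the weights are fixed illustrative values, not learned ones:

```python
import math

def prototype(embeddings):
    """Class prototype = mean of the support embeddings."""
    n, d = len(embeddings), len(embeddings[0])
    return [sum(e[j] for e in embeddings) / n for j in range(d)]

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def chebyshev(a, b):
    return max(abs(x - y) for x, y in zip(a, b))

def cosine_distance(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return 1.0 - dot / (na * nb)

def fused_distance(a, b, weights=(0.4, 0.3, 0.3)):
    """Convex combination of the metrics (weights constrained to sum to 1)."""
    metrics = (euclidean, chebyshev, cosine_distance)
    return sum(w * m(a, b) for w, m in zip(weights, metrics))

def classify(query, prototypes):
    """Assign the query to the class with the nearest fused prototype."""
    return min(prototypes, key=lambda label: fused_distance(query, prototypes[label]))

protos = {"benign": prototype([[0.0, 1.0], [0.2, 0.8]]),
          "attack": prototype([[5.0, 0.1], [4.8, 0.3]])}
label = classify([4.9, 0.2], protos)
```

In the few-shot setting the prototypes are recomputed per episode from the handful of available support samples, which is what makes this family of methods data-efficient.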
2501.00051 | DDD-GenDT: Dynamic Data-driven Generative Digital Twin Framework | cs.LG cs.AI cs.SY eess.SY | Digital twin (DT) technology has emerged as a transformative approach to
simulate, predict, and optimize the behavior of physical systems, with
applications that span manufacturing, healthcare, climate science, and more.
However, the development of DT models often faces challenges such as high data
requirements, integration complexity, and limited adaptability to dynamic
changes in physical systems. This paper presents a new method inspired by
dynamic data-driven applications systems (DDDAS), called the dynamic
data-driven generative digital twin framework (DDD-GenDT), which combines
the physical system with an LLM, allowing the LLM to act as a DT that interacts
with the physical system's operating status and generates the corresponding
physical behaviors. We apply DDD-GenDT to the computer numerical control (CNC) machining
process, and we use the spindle current measurement data in the NASA milling
wear data set as an example to enable LLMs to forecast the physical behavior
from historical data and interact with current observations. Experimental
results show that in the zero-shot prediction setting, the LLM-based DT can
adapt to the change in the system, and the average RMSE of the GPT-4 prediction
is 0.479A, which is 4.79% of the maximum spindle motor current measurement of
10A, with little training data and instructions required. Furthermore, we
analyze the performance of DDD-GenDT in this specific application and its
potential for constructing digital twins. We also discuss the limitations and
challenges that may arise in practical implementations.
|
2501.00052 | Efficient and Scalable Deep Reinforcement Learning for Mean Field
Control Games | cs.LG cs.GT cs.MA | Mean Field Control Games (MFCGs) provide a powerful theoretical framework for
analyzing systems of infinitely many interacting agents, blending elements from
Mean Field Games (MFGs) and Mean Field Control (MFC). However, solving the
coupled Hamilton-Jacobi-Bellman and Fokker-Planck equations that characterize
MFCG equilibria remains a significant computational challenge, particularly in
high-dimensional or complex environments.
This paper presents a scalable deep Reinforcement Learning (RL) approach to
approximate equilibrium solutions of MFCGs. Building on previous work, we
reformulate the infinite-agent stochastic control problem as a Markov Decision
Process, where each representative agent interacts with the evolving mean field
distribution. We use the actor-critic algorithm of a previous paper
(Angiuli et al., 2024) as the baseline and propose several more scalable and
efficient variants, using techniques including parallel sample collection
(batching), mini-batching, target networks, proximal policy optimization
(PPO), generalized advantage estimation (GAE), and entropy regularization. By
leveraging these techniques, we effectively improved the
efficiency, scalability, and training stability of the baseline algorithm.
We evaluate our method on a linear-quadratic benchmark problem, where an
analytical solution to the MFCG equilibrium is available. Our results show that
some versions of our proposed approach achieve faster convergence and closely
approximate the theoretical optimum, outperforming the baseline algorithm by an
order of magnitude in sample efficiency. Our work lays the foundation for
adapting deep RL to solve more complicated MFCGs closely related to real life,
such as large-scale autonomous transportation systems, multi-firm economic
competition, and inter-bank borrowing problems.
|
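Of the techniques listed above, generalized advantage estimation (GAE) is the most self-contained to sketch: advantages are exponentially weighted sums of temporal-difference residuals, computed backwards over a trajectory. A minimal version (the toy rewards and values are illustrative):

```python
def generalized_advantage_estimation(rewards, values, gamma=0.99, lam=0.95):
    """GAE(lambda): exponentially weighted sum of TD residuals.

    `values` has one more entry than `rewards` — the bootstrap value of
    the state after the final step.
    """
    advantages = [0.0] * len(rewards)
    gae = 0.0
    for t in reversed(range(len(rewards))):
        delta = rewards[t] + gamma * values[t + 1] - values[t]  # TD residual
        gae = delta + gamma * lam * gae
        advantages[t] = gae
    return advantages

# With gamma = lam = 1 the advantages reduce to plain Monte Carlo returns
# minus the value baseline, which makes the arithmetic easy to check by hand.
adv = generalized_advantage_estimation(
    rewards=[1.0, 0.0, 1.0], values=[0.5, 0.4, 0.3, 0.0], gamma=1.0, lam=1.0)
```

Lowering `lam` trades variance for bias, which is one lever such papers use to stabilize training in high-dimensional mean-field settings.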
2501.00053 | Implementing Trust in Non-Small Cell Lung Cancer Diagnosis with a
Conformalized Uncertainty-Aware AI Framework in Whole-Slide Images | eess.IV cs.AI cs.LG | Ensuring trustworthiness is fundamental to the development of artificial
intelligence (AI) that is considered societally responsible, particularly in
cancer diagnostics, where a misdiagnosis can have dire consequences. Current
digital pathology AI models lack systematic solutions to address
trustworthiness concerns arising from model limitations and data discrepancies
between model deployment and development environments. To address this issue,
we developed TRUECAM, a framework designed to ensure both data and model
trustworthiness in non-small cell lung cancer subtyping with whole-slide
images. TRUECAM integrates 1) a spectral-normalized neural Gaussian process for
identifying out-of-scope inputs and 2) an ambiguity-guided elimination of tiles
to filter out highly ambiguous regions, addressing data trustworthiness, as
well as 3) conformal prediction to ensure controlled error rates. We
systematically evaluated the framework across multiple large-scale cancer
datasets, leveraging both task-specific and foundation models, demonstrating that
an AI model wrapped with TRUECAM significantly outperforms models that lack
such guidance, in terms of classification accuracy, robustness,
interpretability, and data efficiency, while also achieving improvements in
fairness. These findings highlight TRUECAM as a versatile wrapper framework for
digital pathology AI models with diverse architectural designs, promoting their
responsible and effective applications in real-world settings.
|
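The conformal-prediction component above can be illustrated with split conformal classification: calibration nonconformity scores (one minus the predicted probability of the true class) set a quantile threshold, and the prediction set for a new slide keeps every class scoring below it. The toy probabilities are illustrative, not from the paper:

```python
import math

def conformal_threshold(cal_probs, cal_labels, alpha=0.1):
    """Split conformal prediction: nonconformity = 1 - prob of the true class.

    Returns the (1 - alpha)-quantile of the calibration scores, with the
    standard finite-sample correction on the quantile index.
    """
    scores = sorted(1.0 - p[y] for p, y in zip(cal_probs, cal_labels))
    n = len(scores)
    k = math.ceil((n + 1) * (1 - alpha)) - 1  # corrected quantile index
    return scores[min(k, n - 1)]

def prediction_set(probs, qhat):
    """All classes whose nonconformity stays below the threshold."""
    return {c for c, p in enumerate(probs) if 1.0 - p <= qhat}

# Toy calibration set: per-sample class probabilities and true labels.
cal_probs = [[0.9, 0.1], [0.8, 0.2], [0.3, 0.7], [0.6, 0.4], [0.85, 0.15]]
cal_labels = [0, 0, 1, 0, 0]
qhat = conformal_threshold(cal_probs, cal_labels, alpha=0.2)
pred_set = prediction_set([0.65, 0.35], qhat)
```

The guarantee is marginal coverage: across test cases drawn like the calibration data, the true subtype falls inside the prediction set with probability at least 1 - alpha, which is the "controlled error rate" the abstract refers to.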
2501.00054 | AdvAnchor: Enhancing Diffusion Model Unlearning with Adversarial Anchors | cs.LG cs.AI cs.CL | Security concerns surrounding text-to-image diffusion models have driven
researchers to unlearn inappropriate concepts through fine-tuning. Recent
fine-tuning methods typically align the prediction distributions of unsafe
prompts with those of predefined text anchors. However, these techniques
exhibit a considerable performance trade-off between eliminating undesirable
concepts and preserving other concepts. In this paper, we systematically
analyze the impact of diverse text anchors on unlearning performance. Guided by
this analysis, we propose AdvAnchor, a novel approach that generates
adversarial anchors to alleviate the trade-off issue. These adversarial anchors
are crafted to closely resemble the embeddings of undesirable concepts to
maintain overall model performance, while selectively excluding defining
attributes of these concepts for effective erasure. Extensive experiments
demonstrate that AdvAnchor outperforms state-of-the-art methods. Our code is
publicly available at https://anonymous.4open.science/r/AdvAnchor.
|
2501.00055 | LLM-Virus: Evolutionary Jailbreak Attack on Large Language Models | cs.CR cs.AI cs.CL | While safety-aligned large language models (LLMs) are increasingly used as
the cornerstone for powerful systems such as multi-agent frameworks to solve
complex real-world problems, they still suffer from potential adversarial
queries, such as jailbreak attacks, which attempt to induce harmful content.
Researching attack methods allows us to better understand the limitations of
LLM and make trade-offs between helpfulness and safety. However, existing
jailbreak attacks are primarily based on opaque optimization techniques (e.g.
token-level gradient descent) and heuristic search methods like LLM refinement,
which fall short in terms of transparency, transferability, and computational
cost. In light of these limitations, we draw inspiration from the evolution and
infection processes of biological viruses and propose LLM-Virus, a jailbreak
attack method based on an evolutionary algorithm, termed evolutionary jailbreak.
LLM-Virus treats jailbreak attacks as both an evolutionary and transfer
learning problem, utilizing LLMs as heuristic evolutionary operators to ensure
high attack efficiency, transferability, and low time cost. Our experimental
results on multiple safety benchmarks show that LLM-Virus achieves competitive
or even superior performance compared to existing attack methods.
|
2501.00056 | Transforming CCTV cameras into NO$_2$ sensors at city scale for adaptive
policymaking | cs.LG cs.AI cs.CY | Air pollution in cities, especially NO\textsubscript{2}, is linked to
numerous health problems, ranging from mortality to mental health challenges
and attention deficits in children. While cities globally have initiated
policies to curtail emissions, real-time monitoring remains challenging due to
limited environmental sensors and their inconsistent distribution. This gap
hinders the creation of adaptive urban policies that respond to the sequence of
events and daily activities affecting pollution in cities. Here, we demonstrate
how city CCTV cameras can act as pseudo-NO\textsubscript{2} sensors. Using a
predictive graph deep model, we utilised traffic flow from London's cameras in
addition to environmental and spatial factors, generating NO\textsubscript{2}
predictions from over 133 million frames. Our analysis of London's mobility
patterns unveiled critical spatiotemporal connections, showing how specific
traffic patterns affect NO\textsubscript{2} levels, sometimes with temporal
lags of up to 6 hours. For instance, if trucks only drive at night, their
effects on NO\textsubscript{2} levels are most likely to be seen in the morning
when people commute. These findings cast doubt on the efficacy of some of the
urban policies currently being implemented to reduce pollution. By leveraging
existing camera infrastructure and our introduced methods, city planners and
policymakers could cost-effectively monitor and mitigate the impact of
NO\textsubscript{2} and other pollutants.
|
2501.00057 | VisTabNet: Adapting Vision Transformers for Tabular Data | cs.LG cs.AI cs.CV | Although deep learning models have had great success in natural language
processing and computer vision, we do not observe comparable improvements in
the case of tabular data, which is still the most common data type used in
biological, industrial and financial applications. In particular, it is
challenging to transfer large-scale pre-trained models to downstream tasks
defined on small tabular datasets. To address this, we propose VisTabNet -- a
cross-modal transfer learning method, which allows for adapting Vision
Transformer (ViT) with pre-trained weights to process tabular data. By
projecting tabular inputs to patch embeddings acceptable by ViT, we can
directly apply a pre-trained Transformer Encoder to tabular inputs. This
approach eliminates the conceptual cost of designing a suitable architecture
for processing tabular data, while reducing the computational cost of training
the model from scratch. Experimental results on multiple small tabular datasets
(less than 1k samples) demonstrate VisTabNet's superiority, outperforming both
traditional ensemble methods and recent deep learning models. The proposed
method goes beyond conventional transfer learning practice and shows that
pre-trained image models can be transferred to solve tabular problems,
extending the boundaries of transfer learning.
|
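The core trick above, projecting a tabular row into a sequence of patch embeddings that a frozen ViT encoder can consume, reduces to a per-patch linear map. A minimal sketch with random weights standing in for the learned projection (dimensions are illustrative):

```python
import random

def project_to_patches(row, n_patches, embed_dim, weights):
    """Project one tabular row into `n_patches` ViT-style patch embeddings.

    `weights[p][j][i]` maps feature i to embedding dimension j for patch p.
    The resulting sequence of vectors can be fed to a pre-trained
    Transformer encoder in place of image-patch embeddings.
    """
    patches = []
    for p in range(n_patches):
        patch = [sum(weights[p][j][i] * row[i] for i in range(len(row)))
                 for j in range(embed_dim)]
        patches.append(patch)
    return patches

random.seed(0)
n_features, n_patches, embed_dim = 4, 3, 8
# Random stand-in for the learned projection weights.
W = [[[random.gauss(0, 0.1) for _ in range(n_features)]
      for _ in range(embed_dim)] for _ in range(n_patches)]
tokens = project_to_patches([0.2, 1.5, -0.3, 0.7], n_patches, embed_dim, W)
```

Only this small projection (and typically a classification head) needs training, which is why the approach suits the sub-1k-sample datasets the abstract targets.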
2501.00059 | Large Language Models for Mathematical Analysis | cs.CL cs.AI | Mathematical problem-solving is a key field in artificial intelligence (AI)
and a critical benchmark for evaluating the capabilities of large language
models (LLMs). While extensive research has focused on mathematical
problem-solving, most existing work and datasets concentrate on computational
tasks, leaving gaps in areas like mathematical analysis, which demands rigorous
proofs and formal reasoning. We developed the DEMI-MathAnalysis dataset,
comprising proof-based problems from mathematical analysis topics such as
Sequences and Limits, Infinite Series, and Convex Functions. We also designed a
guiding framework to rigorously enhance LLMs' ability to solve these problems.
Through fine-tuning LLMs on this dataset and employing our framework, we
observed significant improvements in their capability to generate logical,
complete, and elegant proofs. This work addresses critical gaps in mathematical
reasoning and contributes to advancing trustworthy AI capable of handling
formalized mathematical language. The code is publicly accessible at LLMs for
Mathematical Analysis.
|
2501.00061 | Training-free Heterogeneous Model Merging | cs.LG cs.AI | Model merging has attracted significant attention as a powerful paradigm for
model reuse, facilitating the integration of task-specific models into a
singular, versatile framework endowed with multifarious capabilities. Previous
studies, predominantly utilizing methods such as Weight Average (WA), have
shown that model merging can effectively leverage pretrained models without the
need for laborious retraining. However, the inherent heterogeneity among models
poses a substantial constraint on its applicability, particularly when
confronted with discrepancies in model architectures. To overcome this
challenge, we propose an innovative model merging framework designed for
heterogeneous models, encompassing both depth and width heterogeneity. To
address depth heterogeneity, we introduce a layer alignment strategy that
harmonizes model layers by segmenting deeper models, treating consecutive
layers with similar representations as a cohesive segment, thus enabling the
seamless merging of models with differing layer depths. For width
heterogeneity, we propose a novel elastic neuron zipping algorithm that
projects the weights from models of varying widths onto a common dimensional
space, eliminating the need for identical widths. Extensive experiments
validate the efficacy of these proposed methods, demonstrating that the merging
of structurally heterogeneous models can achieve performance levels comparable
to those of homogeneous merging, across both vision and NLP tasks. Our code is
publicly available at
https://github.com/zju-vipa/training_free_heterogeneous_model_merging.
|
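The depth-alignment step above groups consecutive layers of the deeper model into segments matched one-to-one with the shallower model's layers. The paper segments by representation similarity; the sketch below substitutes an even contiguous split purely to show the bookkeeping:

```python
def segment_layers(n_deep, n_shallow):
    """Partition `n_deep` consecutive layer indices into `n_shallow`
    contiguous segments as evenly as possible, so each shallow layer can
    be merged with one segment of the deeper model.

    Simplifying assumption: even splitting stands in for the paper's
    similarity-based segmentation.
    """
    base, extra = divmod(n_deep, n_shallow)
    segments, start = [], 0
    for s in range(n_shallow):
        size = base + (1 if s < extra else 0)
        segments.append(list(range(start, start + size)))
        start += size
    return segments

# A 7-layer model aligned against a 3-layer one.
segments = segment_layers(n_deep=7, n_shallow=3)
```

Once segments are fixed, each one is treated as a single unit whose weights can be fused with the corresponding shallow layer, which is what makes merging across differing depths possible.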
2501.00062 | ELECTRA and GPT-4o: Cost-Effective Partners for Sentiment Analysis | cs.CL cs.AI | Bidirectional transformers excel at sentiment analysis, and Large Language
Models (LLM) are effective zero-shot learners. Might they perform better as a
team? This paper explores collaborative approaches between ELECTRA and GPT-4o
for three-way sentiment classification. We fine-tuned (FT) four models (ELECTRA
Base/Large, GPT-4o/4o-mini) using a mix of reviews from Stanford Sentiment
Treebank (SST) and DynaSent. We provided input from ELECTRA to GPT as:
predicted label, probabilities, and retrieved examples. Sharing ELECTRA Base FT
predictions with GPT-4o-mini significantly improved performance over either
model alone (82.74 macro F1 vs. 79.29 ELECTRA Base FT, 79.52 GPT-4o-mini) and
yielded the lowest cost/performance ratio (\$0.12/F1 point). However, when GPT
models were fine-tuned, including predictions decreased performance. GPT-4o
FT-M was the top performer (86.99), with GPT-4o-mini FT close behind (86.77) at
much less cost (\$0.38 vs. \$1.59/F1 point). Our results show that augmenting
prompts with predictions from fine-tuned encoders is an efficient way to boost
performance, and a fine-tuned GPT-4o-mini is nearly as good as GPT-4o FT at 76%
less cost. Both are affordable options for projects with limited resources.
|
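The prompt-augmentation setup above (sharing ELECTRA's predicted label, class probabilities, and retrieved examples with GPT) can be sketched as a template builder; every field name and the template wording here are assumptions for illustration, not the paper's exact prompt:

```python
def build_augmented_prompt(review, electra_label, electra_probs, examples):
    """Compose a sentiment prompt that shares a fine-tuned encoder's
    prediction, class probabilities, and retrieved examples with an LLM.
    """
    shots = "\n".join(f'Review: "{r}" -> {lbl}' for r, lbl in examples)
    probs = ", ".join(f"{c}: {p:.2f}" for c, p in electra_probs.items())
    return (
        f"{shots}\n"
        f'Review: "{review}"\n'
        f"Auxiliary model prediction: {electra_label} (probabilities: {probs})\n"
        "Label (negative/neutral/positive):"
    )

prompt = build_augmented_prompt(
    review="The plot dragged but the acting was superb.",
    electra_label="neutral",
    electra_probs={"negative": 0.21, "neutral": 0.48, "positive": 0.31},
    examples=[("A masterpiece.", "positive"), ("Dull and lifeless.", "negative")],
)
```

The abstract's finding suggests this augmentation helps a zero-shot or lightly tuned LLM, but hurts once the GPT model itself is fine-tuned, so the template would be used only in the former regime.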
2501.00063 | Generative Models for Financial Time Series Data: Enhancing
Signal-to-Noise Ratio and Addressing Data Scarcity in A-Share Market | cs.LG cs.AI | The financial industry is increasingly seeking robust methods to address the
challenges posed by data scarcity and low signal-to-noise ratios, which limit
the application of deep learning techniques in stock market analysis. This
paper presents two innovative generative model-based approaches to synthesize
stock data, specifically tailored for different scenarios within the A-share
market in China. The first method, a sector-based synthesis approach, enhances
the signal-to-noise ratio of stock data by classifying the characteristics of
stocks from various sectors in China's A-share market. This method employs an
Approximate Non-Local Total Variation algorithm to smooth the generated data, a
bandpass filtering method based on Fourier Transform to eliminate noise, and
Denoising Diffusion Implicit Models to accelerate sampling speed. The second
method, a recursive stock data synthesis approach based on pattern recognition,
is designed to synthesize data for stocks with short listing periods and
limited comparable companies. It leverages pattern recognition techniques and
Markov models to learn and generate variable-length stock sequences, while
introducing a sub-time-level data augmentation method to alleviate data
scarcity issues. We validate the effectiveness of these methods through
extensive experiments on various datasets, including those from the main board,
STAR Market, Growth Enterprise Market Board, Beijing Stock Exchange, NASDAQ,
NYSE, and AMEX. The results demonstrate that our synthesized data not only
improve the performance of predictive models but also enhance the
signal-to-noise ratio of individual stock signals in price trading strategies.
Furthermore, the introduction of sub-time-level data significantly improves the
quality of synthesized data.
|
2501.00064 | Lungmix: A Mixup-Based Strategy for Generalization in Respiratory Sound
Classification | cs.SD cs.LG eess.AS | Respiratory sound classification plays a pivotal role in diagnosing
respiratory diseases. While deep learning models have shown success with
various respiratory sound datasets, our experiments indicate that models
trained on one dataset often fail to generalize effectively to others, mainly
due to data collection and annotation \emph{inconsistencies}. To address this
limitation, we introduce \emph{Lungmix}, a novel data augmentation technique
inspired by Mixup. Lungmix generates augmented data by blending waveforms using
loudness and random masks while interpolating labels based on their semantic
meaning, helping the model learn more generalized representations.
Comprehensive evaluations across three datasets, namely ICBHI, SPR, and HF,
demonstrate that Lungmix significantly enhances model generalization to unseen
data. In particular, Lungmix boosts the 4-class classification score by up to
3.55\%, achieving performance comparable to models trained directly on the
target dataset.
|
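Lungmix's blending step can be sketched as a masked sample-wise mix of two waveforms with the label interpolated by the fraction of samples kept from each source; the real method also applies loudness-based scaling, which this simplified version omits:

```python
import random

def lungmix(wave_a, wave_b, label_a, label_b, seed=0):
    """Blend two waveforms with a random mask and interpolate soft labels.

    Each sample of the mixed waveform comes from one of the two
    recordings; the soft label is interpolated by the fraction of
    samples taken from each (a Mixup-style convention).
    """
    rng = random.Random(seed)
    mask = [rng.random() < 0.5 for _ in wave_a]  # True -> take from wave_a
    mixed = [a if m else b for a, b, m in zip(wave_a, wave_b, mask)]
    lam = sum(mask) / len(mask)  # mixing coefficient
    label = [lam * la + (1 - lam) * lb for la, lb in zip(label_a, label_b)]
    return mixed, label

wave_a = [0.1] * 8   # toy stand-in for one respiratory recording
wave_b = [-0.2] * 8  # toy stand-in for another
mixed, label = lungmix(wave_a, wave_b, label_a=[1, 0], label_b=[0, 1])
```

Because the mixed examples interpolate between datasets' annotation conventions, the model is pushed toward representations that survive the cross-dataset inconsistencies the abstract identifies.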
2501.00065 | Predicting Preschoolers' Externalizing Problems with Mother-Child
Interaction Dynamics and Deep Learning | cs.LG cs.AI | Objective: Predicting children's future levels of externalizing problems
helps to identify children at risk and guide targeted prevention. Existing
studies have shown that mothers providing support in response to children's
dysregulation was associated with children's lower levels of externalizing
problems. The current study aims to evaluate and improve the accuracy of
predicting children's externalizing problems with mother-child interaction
dynamics. Method: This study used mother-child interaction dynamics during a
challenging puzzle task to predict children's externalizing problems six months
later (N=101, 46 boys, Mage=57.41 months, SD=6.58). Performance of the Residual
Dynamic Structural Equation Model (RDSEM) was compared with the Attention-based
Sequential Behavior Interaction Modeling (ASBIM) model, developed using the
deep learning techniques. Results: The RDSEM revealed that children whose
mothers provided more autonomy support after increases of child defeat had
lower levels of externalizing problems. Five-fold cross-validation showed that
the RDSEM had good prediction accuracy. The ASBIM model further improved
prediction accuracy, especially after including child inhibitory control as a
personalized individual feature. Conclusions: The dynamic process of
mother-child interaction provides important information for predicting
children's externalizing problems, especially maternal autonomy supportive
response to child defeat. The deep learning model is a useful tool to further
improve prediction accuracy.
|
2501.00066 | On Adversarial Robustness of Language Models in Transfer Learning | cs.CL cs.AI cs.CR cs.LG | We investigate the adversarial robustness of LLMs in transfer learning
scenarios. Through comprehensive experiments on multiple datasets (MBIB Hate
Speech, MBIB Political Bias, MBIB Gender Bias) and various model architectures
(BERT, RoBERTa, GPT-2, Gemma, Phi), we reveal that transfer learning, while
improving standard performance metrics, often leads to increased vulnerability
to adversarial attacks. Our findings demonstrate that larger models exhibit
greater resilience to this phenomenon, suggesting a complex interplay between
model size, architecture, and adaptation methods. Our work highlights the
crucial need for considering adversarial robustness in transfer learning
scenarios and provides insights into maintaining model security without
compromising performance. These findings have significant implications for the
development and deployment of LLMs in real-world applications where both
performance and robustness are paramount.
|
2501.00067 | Ensemble of classifiers for speech evaluation | cs.SD cs.AI eess.AS | The article describes an attempt to apply an ensemble of binary classifiers
to solve the problem of speech assessment in medicine. A dataset was compiled
based on quantitative and expert assessments of syllable pronunciation quality.
Quantitative assessments of 7 selected metrics were used as features: dynamic
time warping distance, Minkowski distance, correlation coefficient, longest common
subsequence (LCSS), edit distance on real sequence (EDR), edit distance with
real penalty (ERP), and move-split-merge (MSM). Expert assessment of pronunciation
quality was used as a class label: class 1 means high-quality speech, class 0
means distorted. A comparison of training results was carried out for five
classification methods: logistic regression (LR), support vector machine (SVM),
naive Bayes (NB), decision trees (DT), and K-nearest neighbors (KNN). The
results of using the mixture method to build an ensemble of classifiers are
also presented. The use of an ensemble for the studied data sets allowed us to
slightly increase the classification accuracy compared to the use of individual
binary classifiers.
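The combination step above can be sketched as a simple majority vote over the base classifiers' outputs; this is a minimal stand-in for the mixture method the article describes, and the predictions below are made up for illustration:

```python
from collections import Counter

def majority_vote(per_classifier_preds):
    """Combine binary predictions (one list per classifier) into a single
    ensemble prediction per sample by majority vote."""
    return [Counter(col).most_common(1)[0][0]
            for col in zip(*per_classifier_preds)]

# Made-up predictions from three of the base classifiers (e.g. LR, SVM, KNN)
preds = [
    [1, 0, 1, 1, 0],   # classifier A
    [1, 1, 1, 0, 0],   # classifier B
    [0, 0, 1, 1, 0],   # classifier C
]
print(majority_vote(preds))  # [1, 0, 1, 1, 0]
```

With an odd number of voters the vote is never tied, which is one practical reason to ensemble an odd number of binary classifiers.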
|
2501.00068 | Dynamic Optimization of Storage Systems Using Reinforcement Learning
Techniques | cs.OS cs.DC cs.LG | The exponential growth of data-intensive applications has placed
unprecedented demands on modern storage systems, necessitating dynamic and
efficient optimization strategies. Traditional heuristics employed for storage
performance optimization often fail to adapt to the variability and complexity
of contemporary workloads, leading to significant performance bottlenecks and
resource inefficiencies. To address these challenges, this paper introduces
RL-Storage, a novel reinforcement learning (RL)-based framework designed to
dynamically optimize storage system configurations. RL-Storage leverages deep
Q-learning algorithms to continuously learn from real-time I/O patterns and
predict optimal storage parameters, such as cache size, queue depths, and
readahead settings. The proposed framework operates within the storage
kernel, ensuring minimal latency and low computational overhead. Through an
adaptive feedback mechanism, RL-Storage dynamically adjusts critical
parameters, achieving efficient resource utilization across a wide range of
workloads. Experimental evaluations conducted on a range of benchmarks,
including RocksDB and PostgreSQL, demonstrate significant improvements, with
throughput gains of up to 2.6x and latency reductions of 43% compared to
baseline heuristics. Additionally, RL-Storage achieves these performance
enhancements with a negligible CPU overhead of 0.11% and a memory footprint of
only 5 KB, making it suitable for seamless deployment in production
environments. This work underscores the transformative potential of
reinforcement learning techniques in addressing the dynamic nature of modern
storage systems. By autonomously adapting to workload variations in real time,
RL-Storage provides a robust and scalable solution for optimizing storage
performance, paving the way for next-generation intelligent storage
infrastructures.
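The learning loop behind such a tuner can be sketched with a tiny epsilon-greedy Q-learning agent. Everything here is invented for illustration: the discrete configuration space, the toy throughput function, and the tabular update (a stand-in for the deep Q-network the abstract describes):

```python
import random

# Hypothetical discrete configuration space: (cache size in MB, readahead in KB)
ACTIONS = [(64, 128), (256, 128), (64, 512), (256, 512)]

def simulated_throughput(action, workload):
    """Toy benchmark stand-in: we assume sequential workloads reward large
    readahead and random workloads reward large cache (pure illustration)."""
    cache_mb, readahead_kb = action
    return (readahead_kb if workload == "seq" else cache_mb) / 512

def tune(episodes=2000, alpha=0.1, epsilon=0.2, seed=0):
    """Epsilon-greedy one-step Q-learning over (workload, action) pairs."""
    rng = random.Random(seed)
    q = {(w, a): 0.0 for w in ("seq", "rand") for a in ACTIONS}
    for _ in range(episodes):
        w = rng.choice(("seq", "rand"))
        if rng.random() < epsilon:
            a = rng.choice(ACTIONS)            # explore
        else:
            a = max(ACTIONS, key=lambda x: q[(w, x)])  # exploit
        reward = simulated_throughput(a, w)
        q[(w, a)] += alpha * (reward - q[(w, a)])
    return q

q = tune()
best_seq = max(ACTIONS, key=lambda a: q[("seq", a)])
print(best_seq)
```

Under these assumptions the agent learns to pick a large-readahead configuration for sequential workloads without ever being told the rule explicitly.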
|
2501.00069 | Adversarial Negotiation Dynamics in Generative Language Models | cs.CL cs.AI | Generative language models are increasingly used for contract drafting and
enhancement, creating a scenario where competing parties deploy different
language models against each other. This introduces not only a game-theory
challenge but also significant concerns related to AI safety and security, as
the language model employed by the opposing party can be unknown. These
competitive interactions can be seen as adversarial testing grounds, where
models are effectively red-teamed to expose vulnerabilities such as generating
biased, harmful or legally problematic text. Despite the importance of these
challenges, the competitive robustness and safety of these models in
adversarial settings remain poorly understood. In this small study, we approach
this problem by evaluating the performance and vulnerabilities of major
open-source language models in head-to-head competitions, simulating real-world
contract negotiations. We further explore how these adversarial interactions
can reveal potential risks, informing the development of more secure and
reliable models. Our findings contribute to the growing body of research on AI
safety, offering insights into model selection and optimisation in competitive
legal contexts and providing actionable strategies for mitigating risks.
|
2501.00070 | ICLR: In-Context Learning of Representations | cs.CL cs.AI cs.LG | Recent work has demonstrated that semantics specified by pretraining data
influence how representations of different concepts are organized in a large
language model (LLM). However, given the open-ended nature of LLMs, e.g., their
ability to in-context learn, we can ask whether models alter these pretraining
semantics to adopt alternative, context-specified ones. Specifically, if we
provide in-context exemplars wherein a concept plays a different role than what
the pretraining data suggests, do models reorganize their representations in
accordance with these novel semantics? To answer this question, we take
inspiration from the theory of conceptual role semantics and define a toy
"graph tracing" task wherein the nodes of the graph are referenced via concepts
seen during training (e.g., apple, bird, etc.) and the connectivity of the
graph is defined via some predefined structure (e.g., a square grid). Given
exemplars that indicate traces of random walks on the graph, we analyze
intermediate representations of the model and find that as the amount of
context is scaled, there is a sudden re-organization from pretrained semantic
representations to in-context representations aligned with the graph structure.
Further, we find that when reference concepts have correlations in their
semantics (e.g., Monday, Tuesday, etc.), the context-specified graph structure
is still present in the representations, but is unable to dominate the
pretrained structure. To explain these results, we analogize our task to energy
minimization for a predefined graph topology, providing evidence towards an
implicit optimization process to infer context-specified semantics. Overall,
our findings indicate scaling context-size can flexibly re-organize model
representations, possibly unlocking novel capabilities.
|
2501.00072 | Open-Book Neural Algorithmic Reasoning | cs.LG cs.AI | Neural algorithmic reasoning is an emerging area of machine learning that
focuses on building neural networks capable of solving complex algorithmic
tasks. Recent advancements predominantly follow the standard supervised
learning paradigm -- feeding an individual problem instance into the network
each time and training it to approximate the execution steps of a classical
algorithm. We challenge this paradigm and propose a novel open-book learning
framework. In this framework, whether during training or testing, the network
can access and utilize all instances in the training dataset when reasoning for
a given instance.
Empirical evaluation is conducted on the challenging CLRS Algorithmic
Reasoning Benchmark, which consists of 30 diverse algorithmic tasks. Our
open-book learning framework exhibits a significant enhancement in neural
reasoning capabilities. Further, we notice that there is recent literature
suggesting that multi-task training on CLRS can improve the reasoning accuracy
of certain tasks, implying intrinsic connections between different algorithmic
tasks. We delve into this direction via the open-book framework. When the
network reasons for a specific task, we enable it to aggregate information from
training instances of other tasks in an attention-based manner. We show that
this open-book attention mechanism offers insights into the inherent
relationships among various tasks in the benchmark and provides a robust tool
for interpretable multi-task training.
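The aggregation step can be sketched as standard dot-product attention from a query instance over a "book" of stored training instances; the 2-dimensional keys and values below are invented, and this is only a minimal illustration of the mechanism, not the paper's architecture:

```python
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def open_book_lookup(query, book_keys, book_values):
    """Attention-weighted aggregation over stored training instances:
    scores are dot products between the query and each stored key."""
    scores = [sum(q * k for q, k in zip(query, key)) for key in book_keys]
    weights = softmax(scores)
    dim = len(book_values[0])
    return [sum(w * v[i] for w, v in zip(weights, book_values))
            for i in range(dim)]

# Hypothetical 2-dim encodings of three stored training instances
keys = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
values = [[10.0, 0.0], [0.0, 10.0], [5.0, 5.0]]
out = open_book_lookup([1.0, 0.0], keys, values)
print([round(x, 2) for x in out])  # [6.33, 3.67]
```

The attention weights also serve as an interpretability signal: inspecting which stored instances receive high weight reveals which tasks the network considers related.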
|
2501.00073 | Position Information Emerges in Causal Transformers Without Positional
Encodings via Similarity of Nearby Embeddings | cs.CL cs.LG | Transformers with causal attention can solve tasks that require positional
information without using positional encodings. In this work, we propose and
investigate a new hypothesis about how positional information can be stored
without using explicit positional encoding. We observe that nearby embeddings
are more similar to each other than faraway embeddings, allowing the
transformer to potentially reconstruct the positions of tokens. We show that
this pattern can occur in both the trained and the randomly initialized
Transformer models with causal attention and no positional encodings over a
common range of hyperparameters.
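The core observation can be reproduced with a toy model: if each position's representation is a uniform causal average of all preceding token embeddings (a crude stand-in for one causal-attention layer, not the paper's setup), nearby positions end up more similar than faraway ones:

```python
import math
import random

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

random.seed(0)
dim, n_tokens = 64, 32
tokens = [[random.gauss(0, 1) for _ in range(dim)] for _ in range(n_tokens)]

# Uniform causal mixing: position t averages token embeddings 0..t
mixed, running = [], [0.0] * dim
for t, tok in enumerate(tokens):
    running = [r + x for r, x in zip(running, tok)]
    mixed.append([r / (t + 1) for r in running])

near = cosine(mixed[10], mixed[11])   # adjacent positions
far = cosine(mixed[10], mixed[31])    # distant positions
print(near > far)
```

Since positions m < n share m+1 of the averaged embeddings, the expected similarity decays with distance, which is enough signal to reconstruct token order.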
|
2501.00076 | A Novel Framework for Learning Stochastic Representations for Sequence
Generation and Recognition | cs.LG cs.AI cs.RO | The ability to generate and recognize sequential data is fundamental for
autonomous systems operating in dynamic environments. Inspired by the key
principles of the brain (predictive coding and the Bayesian brain), we propose a
novel stochastic Recurrent Neural Network with Parametric Biases (RNNPB). The
proposed model incorporates stochasticity into the latent space using the
reparameterization trick used in variational autoencoders. This approach
enables the model to learn probabilistic representations of multidimensional
sequences, capturing uncertainty and enhancing robustness against overfitting.
We tested the proposed model on a robotic motion dataset to assess its
performance in generating and recognizing temporal patterns. The experimental
results showed that the stochastic RNNPB model outperformed its deterministic
counterpart in generating and recognizing motion sequences. The results
highlighted the proposed model's capability to quantify and adjust uncertainty
during both learning and inference. The stochasticity resulted in a continuous
latent space representation, facilitating stable motion generation and enhanced
generalization when recognizing novel sequences. Our approach provides a
biologically inspired framework for modeling temporal patterns and advances the
development of robust and adaptable systems in artificial intelligence and
robotics.
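The reparameterization trick mentioned above can be sketched in a few lines; the 4-dimensional latent vector and its parameters are invented for illustration:

```python
import math
import random

def reparameterize(mu, log_var, rng=random):
    """Sample z = mu + sigma * eps with eps ~ N(0, 1): the randomness enters
    only through eps, so gradients can flow through mu and log_var."""
    return [m + math.exp(0.5 * lv) * rng.gauss(0, 1)
            for m, lv in zip(mu, log_var)]

# Hypothetical 4-dimensional latent state (e.g. a parametric-bias vector)
mu = [0.0, 0.5, -0.5, 1.0]
log_var = [0.0, -2.0, -2.0, 0.0]
z = reparameterize(mu, log_var, random.Random(1))
print(len(z))  # 4
```

Writing the sample this way, rather than drawing z directly from N(mu, sigma^2), is what makes the stochastic latent space trainable by backpropagation.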
|
2501.00078 | Human-like Bots for Tactical Shooters Using Compute-Efficient Sensors | cs.HC cs.AI cs.LG | Artificial intelligence (AI) has enabled agents to master complex video
games, from first-person shooters like Counter-Strike to real-time strategy
games such as StarCraft II and racing games like Gran Turismo. While these
achievements are notable, applying these AI methods in commercial video game
production remains challenging due to computational constraints. In commercial
scenarios, the majority of computational resources are allocated to 3D
rendering, leaving limited capacity for AI methods, which often demand high
computational power, particularly those relying on pixel-based sensors.
Moreover, the gaming industry prioritizes creating human-like behavior in AI
agents to enhance player experience, unlike academic models that focus on
maximizing game performance. This paper introduces a novel methodology for
training neural networks via imitation learning to play a complex,
commercial-standard, VALORANT-like 2v2 tactical shooter game, requiring only
modest CPU hardware during inference. Our approach leverages an innovative,
pixel-free perception architecture using a small set of ray-cast sensors, which
capture essential spatial information efficiently. These sensors allow AI to
perform competently without the computational overhead of traditional methods.
Models are trained to mimic human behavior using supervised learning on human
trajectory data, resulting in realistic and engaging AI agents. Human
evaluation tests confirm that our AI agents provide human-like gameplay
experiences while operating efficiently under computational constraints. This
offers a significant advancement in AI model development for tactical shooter
games and possibly other genres.
|
2501.00083 | AI Agent for Education: von Neumann Multi-Agent System Framework | cs.MA cs.AI cs.CY | The development of large language models has ushered in new paradigms for
education. This paper centers on the multi-Agent system in education and
proposes the von Neumann multi-Agent system framework. It breaks down each AI
Agent into four modules: control unit, logic unit, storage unit, and
input-output devices, defining four types of operations: task deconstruction,
self-reflection, memory processing, and tool invocation. Furthermore, it
introduces related technologies such as Chain-of-Thought, ReAct (Reason+Act), and
Multi-Agent Debate associated with these four types of operations. The paper
also discusses the ability enhancement cycle of a multi-Agent system for
education, including the outer circulation for human learners to promote
knowledge construction and the inner circulation for LLM-based-Agents to
enhance swarm intelligence. Through collaboration and reflection, the
multi-Agent system can better facilitate human learners' learning and enhance
their teaching abilities in this process.
|
2501.00085 | Machine Learning-Based Security Policy Analysis | cs.LG cs.AI cs.CR | Security-Enhanced Linux (SELinux) is a robust security mechanism that
enforces mandatory access controls (MAC), but its policy language's complexity
creates challenges for policy analysis and management. This research
investigates the automation of SELinux policy analysis using graph-based
techniques combined with machine learning approaches to detect policy
anomalies. The study addresses two key questions: Can SELinux policy analysis
be automated through graph analysis, and how do different anomaly detection
models compare in analyzing SELinux policies? We compare different
machine learning models by evaluating their effectiveness in detecting policy
violations and anomalies. Our approach utilizes Neo4j for graph representation
of policies, with Node2vec transforming these graph structures into meaningful
vector embeddings that can be processed by our machine learning models. In our
results, the MLP Neural Network consistently demonstrated superior performance
across different dataset sizes, achieving 95% accuracy with balanced precision
and recall metrics, while both Random Forest and SVM models showed competitive
but slightly lower performance in detecting policy violations. This combination
of graph-based modeling and machine learning provides a more sophisticated and
automated approach to understanding and analyzing complex SELinux policies
compared to traditional manual analysis methods.
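The graph representation at the heart of this pipeline can be sketched with plain data structures; the allow rules below are invented, and the rule-based anomaly flag is a crude stand-in for the Node2vec-plus-classifier stage described above:

```python
from collections import defaultdict

# Hypothetical SELinux allow rules as (source domain, target type, permission)
rules = [
    ("httpd_t", "httpd_log_t", "write"),
    ("httpd_t", "httpd_config_t", "read"),
    ("sshd_t", "sshd_key_t", "read"),
    ("backup_t", "shadow_t", "read"),   # unexpected reach into shadow_t
    ("backup_t", "etc_t", "read"),
]

# Policy graph: each source domain maps to the set of target types it reaches
graph = defaultdict(set)
for src, tgt, _perm in rules:
    graph[src].add(tgt)

# Crude anomaly flag: untrusted domains that touch sensitive target types
SENSITIVE = {"shadow_t"}
TRUSTED = {"sshd_t"}
anomalies = sorted(src for src, tgts in graph.items()
                   if src not in TRUSTED and tgts & SENSITIVE)
print(anomalies)  # ['backup_t']
```

In the learned version, node embeddings replace the hand-written sensitivity rule, letting the classifier pick up structural anomalies no analyst enumerated.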
|
2501.00087 | High-Dimensional Markov-switching Ordinary Differential Processes | stat.ME cs.LG math.ST stat.AP stat.TH | We investigate the parameter recovery of Markov-switching ordinary
differential processes from discrete observations, where the differential
equations are nonlinear additive models. This framework has been widely applied
in biological systems, control systems, and other domains; however, limited
research has been conducted on reconstructing the generating processes from
observations. In contrast, many physical systems, such as human brains, cannot
be directly experimented upon and rely on observations to infer the underlying
systems. To address this gap, this manuscript presents a comprehensive study of
the model, encompassing algorithm design, optimization guarantees, and
quantification of statistical errors. Specifically, we develop a two-stage
algorithm that first recovers the continuous sample path from discrete samples
and then estimates the parameters of the processes. We provide novel
theoretical insights into the statistical error and linear convergence
guarantee when the processes are $\beta$-mixing. Our analysis is based on the
truncation of the latent posterior processes and demonstrates that the
truncated processes approximate the true processes under mixing conditions. We
apply this model to investigate the differences in resting-state brain networks
between the ADHD group and normal controls, revealing differences in the
transition rate matrices of the two groups.
|
2501.00089 | Insights on Galaxy Evolution from Interpretable Sparse Feature Networks | astro-ph.GA cs.LG | Galaxy appearances reveal the physics of how they formed and evolved. Machine
learning models can now exploit galaxies' information-rich morphologies to
predict physical properties directly from image cutouts. Learning the
relationship between pixel-level features and galaxy properties is essential
for building a physical understanding of galaxy evolution, but we are still
unable to explicate the details of how deep neural networks represent image
features. To address this lack of interpretability, we present a novel neural
network architecture called a Sparse Feature Network (SFNet). SFNets produce
interpretable features that can be linearly combined in order to estimate
galaxy properties like optical emission line ratios or gas-phase metallicity.
We find that SFNets do not sacrifice accuracy in order to gain
interpretability, and that they perform comparably well to cutting-edge models
on astronomical machine learning tasks. Our novel approach is valuable for
finding physical patterns in large datasets and helping astronomers interpret
machine learning results.
|
2501.00093 | Machine Learning Gravity Compactifications on Negatively Curved
Manifolds | hep-th cs.LG gr-qc | Constructing the landscape of vacua of higher-dimensional theories of gravity
by directly solving the low-energy (semi-)classical equations of motion is
notoriously difficult. In this work, we investigate the feasibility of Machine
Learning techniques as tools for solving the equations of motion for general
warped gravity compactifications. As a proof-of-concept we use Neural Networks
to solve the Einstein PDEs on non-trivial three manifolds obtained by filling
one or more cusps of hyperbolic manifolds. While in three dimensions an
Einstein metric is also locally hyperbolic, the generality and scalability of
Machine Learning methods, the availability of explicit families of hyperbolic
manifolds in higher dimensions, and the universality of the filling procedure
strongly suggest that the methods and code developed in this work can be of
broader applicability. Specifically, they can be used to tackle both the
geometric problem of numerically constructing novel higher-dimensional
negatively curved Einstein metrics, as well as the physical problem of
constructing four-dimensional de Sitter compactifications of M-theory on the
same manifolds.
|
2501.00097 | CaseSumm: A Large-Scale Dataset for Long-Context Summarization from U.S.
Supreme Court Opinions | cs.CL cs.AI cs.CY cs.LG | This paper introduces CaseSumm, a novel dataset for long-context
summarization in the legal domain that addresses the need for longer and more
complex datasets for summarization evaluation. We collect 25.6K U.S. Supreme
Court (SCOTUS) opinions and their official summaries, known as "syllabuses."
Our dataset is the largest open legal case summarization dataset, and is the
first to include summaries of SCOTUS decisions dating back to 1815.
We also present a comprehensive evaluation of LLM-generated summaries using
both automatic metrics and expert human evaluation, revealing discrepancies
between these assessment methods. Our evaluation shows Mistral 7b, a smaller
open-source model, outperforms larger models on most automatic metrics and
successfully generates syllabus-like summaries. In contrast, human expert
annotators indicate that Mistral summaries contain hallucinations. The
annotators consistently rank GPT-4 summaries as clearer and exhibiting greater
sensitivity and specificity. Further, we find that LLM-based evaluations are
not more correlated with human evaluations than traditional automatic metrics.
Furthermore, our analysis identifies specific hallucinations in generated
summaries, including precedent citation errors and misrepresentations of case
facts. These findings demonstrate the limitations of current automatic
evaluation methods for legal summarization and highlight the critical role of
human evaluation in assessing summary quality, particularly in complex,
high-stakes domains.
CaseSumm is available at https://huggingface.co/datasets/ChicagoHAI/CaseSumm
|
2501.00103 | LTX-Video: Realtime Video Latent Diffusion | cs.CV | We introduce LTX-Video, a transformer-based latent diffusion model that
adopts a holistic approach to video generation by seamlessly integrating the
responsibilities of the Video-VAE and the denoising transformer. Unlike
existing methods, which treat these components as independent, LTX-Video aims
to optimize their interaction for improved efficiency and quality. At its core
is a carefully designed Video-VAE that achieves a high compression ratio of
1:192, with spatiotemporal downscaling of 32 x 32 x 8 pixels per token, enabled
by relocating the patchifying operation from the transformer's input to the
VAE's input. Operating in this highly compressed latent space enables the
transformer to efficiently perform full spatiotemporal self-attention, which is
essential for generating high-resolution videos with temporal consistency.
However, the high compression inherently limits the representation of fine
details. To address this, our VAE decoder is tasked with both latent-to-pixel
conversion and the final denoising step, producing the clean result directly in
pixel space. This approach preserves the ability to generate fine details
without incurring the runtime cost of a separate upsampling module. Our model
supports diverse use cases, including text-to-video and image-to-video
generation, with both capabilities trained simultaneously. It achieves
faster-than-real-time generation, producing 5 seconds of 24 fps video at
768x512 resolution in just 2 seconds on an Nvidia H100 GPU, outperforming all
existing models of similar scale. The source code and pre-trained models are
publicly available, setting a new benchmark for accessible and scalable video
generation.
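The stated 1:192 compression ratio can be checked with a little arithmetic; the 128-channel latent dimension below is an assumption introduced to make the numbers work out, not a figure taken from the abstract:

```python
# One latent token covers a 32 x 32 x 8 spatiotemporal patch; assuming RGB
# input and a 128-channel latent per token, the 1:192 ratio follows:
h, w, t, channels = 32, 32, 8, 3
raw_values = h * w * t * channels      # 24576 raw pixel values per token
token_dim = 128                        # assumed latent channels per token
print(raw_values // token_dim)         # 192
```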
|
2501.00106 | LicenseGPT: A Fine-tuned Foundation Model for Publicly Available Dataset
License Compliance | cs.SE cs.AI | Dataset license compliance is a critical yet complex aspect of developing
commercial AI products, particularly with the increasing use of publicly
available datasets. Ambiguities in dataset licenses pose significant legal
risks, making it challenging even for software IP lawyers to accurately
interpret rights and obligations. In this paper, we introduce LicenseGPT, a
fine-tuned foundation model (FM) specifically designed for dataset license
compliance analysis. We first evaluate existing legal FMs (i.e., FMs
specialized in understanding and processing legal texts) and find that the
best-performing model achieves a Prediction Agreement (PA) of only 43.75%.
LicenseGPT, fine-tuned on a curated dataset of 500 licenses annotated by legal
experts, significantly improves PA to 64.30%, outperforming both legal and
general-purpose FMs. Through an A/B test and user study with software IP
lawyers, we demonstrate that LicenseGPT reduces analysis time by 94.44%, from
108 seconds to 6 seconds per license, without compromising accuracy. Software
IP lawyers perceive LicenseGPT as a valuable supplementary tool that enhances
efficiency while acknowledging the need for human oversight in complex cases.
Our work underscores the potential of specialized AI tools in legal practice
and offers a publicly available resource for practitioners and researchers.
|
2501.00107 | An Unsupervised Anomaly Detection in Electricity Consumption Using
Reinforcement Learning and Time Series Forest Based Framework | cs.LG cs.AI | Anomaly detection (AD) plays a crucial role in time series applications,
primarily because time series data is employed across real-world scenarios.
Detecting anomalies poses significant challenges since anomalies take diverse
forms, making them hard to pinpoint accurately. Previous research has explored
different AD models, making specific assumptions with varying sensitivity
toward particular anomaly types. To address this issue, we propose a novel
model selection for unsupervised AD using a combination of time series forest
(TSF) and reinforcement learning (RL) approaches that dynamically chooses an AD
technique. Our approach allows for effective AD without explicitly depending on
ground truth labels that are often scarce and expensive to obtain. Results from
the real-world time series dataset demonstrate that the proposed model selection
approach outperforms all other AD models in terms of the F1 score metric. For
the synthetic dataset, our proposed model surpasses all other AD models except
for KNN, with an impressive F1 score of 0.989. The proposed model selection
framework also exceeded the performance of GPT-4 when prompted to act as an
anomaly detector on the synthetic dataset. Exploring different reward functions
revealed that the original reward function in our proposed AD model selection
approach yielded the best overall scores. We evaluated the performance of the
six AD models on an additional three datasets, having global, local, and
clustered anomalies respectively, showing that each AD model exhibited distinct
performance depending on the type of anomalies. This emphasizes the
significance of our proposed AD model selection framework, maintaining high
performance across all datasets, and showcasing superior performance across
different anomaly types.
|
2501.00110 | Modelling and Control of Spatial Behaviours in Multi-Agent Systems with
Applications to Biology and Robotics | eess.SY cs.MA cs.RO cs.SY | Large-Scale Multi-Agent Systems (LS-MAS) consist of several autonomous
components, interacting in a non-trivial way, so that the emerging behaviour of
the ensemble depends on the individual dynamics of the components and their
reciprocal interactions. These models can describe a rich variety of natural
systems, as well as artificial ones, characterised by unparalleled scalability,
robustness, and flexibility. Indeed, a crucial objective is devising efficient
strategies to model and control the spatial behaviours of LS-MAS to achieve
specific goals. However, the inherent complexity of these systems and the wide
spectrum of their emerging behaviours pose significant challenges. The
overarching goal of this thesis is, therefore, to advance methods for
modelling, analyzing and controlling the spatial behaviours of LS-MAS, with
applications to cellular populations and swarm robotics. The thesis begins with
an overview of the existing literature, and is then organized into two distinct
parts. In the context of swarm robotics, Part I deals with distributed control
algorithms to spatially organize agents on geometric patterns. The contribution
is twofold: developing original control algorithms, and providing a novel formal
analysis that guarantees the emergence
of specific geometric patterns. In Part II, looking at the spatial behaviours
of biological agents, experiments are carried out to study the movement of
microorganisms and their response to light stimuli. This allows the derivation
and parametrization of mathematical models that capture these behaviours, and
pave the way for the development of innovative approaches for the spatial
control of microorganisms. The results presented in the thesis were developed
by leveraging formal analytical tools, simulations, and experiments, using
innovative platforms and original computational frameworks.
|
2501.00112 | Steppability-informed Quadrupedal Contact Planning through Deep Visual
Search Heuristics | cs.RO | In this work, we introduce a method for predicting environment steppability
-- the ability of a legged robot platform to place a foothold at a particular
location in the local environment -- in the image space. This novel environment
representation captures this critical geometric property of the local terrain
while allowing us to exploit the computational benefits of sensing and planning
in the image space. We adapt a primitive shapes-based synthetic data generation
scheme to create geometrically rich and diverse simulation scenes and extract
ground truth semantic information in order to train a steppability model. We
then integrate this steppability model into an existing interleaved graph
search and trajectory optimization-based footstep planner to demonstrate how
this steppability paradigm can inform footstep planning in complex, unknown
environments. We analyze the steppability model performance to demonstrate its
validity, and we deploy the perception-informed footstep planner both in
offline and online settings to experimentally verify planning performance.
|
2501.00113 | AltGen: AI-Driven Alt Text Generation for Enhancing EPUB Accessibility | cs.AI | Digital accessibility is a cornerstone of inclusive content delivery, yet
many EPUB files fail to meet fundamental accessibility standards, particularly
in providing descriptive alt text for images. Alt text plays a critical role in
enabling visually impaired users to understand visual content through assistive
technologies. However, generating high-quality alt text at scale is a
resource-intensive process, creating significant challenges for organizations
aiming to ensure accessibility compliance. This paper introduces AltGen, a
novel AI-driven pipeline designed to automate the generation of alt text for
images in EPUB files. By integrating state-of-the-art generative models,
including advanced transformer-based architectures, AltGen achieves
contextually relevant and linguistically coherent alt text descriptions. The
pipeline encompasses multiple stages, starting with data preprocessing to
extract and prepare relevant content, followed by visual analysis using
computer vision models such as CLIP and ViT. The extracted visual features are
enriched with contextual information from surrounding text, enabling the
fine-tuned language models to generate descriptive and accurate alt text.
Validation of the generated output employs both quantitative metrics, such as
cosine similarity and BLEU scores, and qualitative feedback from visually
impaired users.
Experimental results demonstrate the efficacy of AltGen across diverse
datasets, achieving a 97.5% reduction in accessibility errors and high scores
in similarity and linguistic fidelity metrics. User studies highlight the
practical impact of AltGen, with participants reporting significant
improvements in document usability and comprehension. Furthermore, comparative
analyses reveal that AltGen outperforms existing approaches in terms of
accuracy, relevance, and scalability.
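The cosine-similarity validation mentioned above can be illustrated with a bag-of-words version; real pipelines would compare sentence embeddings instead, and the alt-text strings here are made up:

```python
import math
from collections import Counter

def cosine_sim(a, b):
    """Bag-of-words cosine similarity between generated and reference alt
    text: a crude, embedding-free stand-in for the validation step above."""
    ca, cb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(ca[w] * cb[w] for w in ca)
    na = math.sqrt(sum(c * c for c in ca.values()))
    nb = math.sqrt(sum(c * c for c in cb.values()))
    return dot / (na * nb)

generated = "a brown dog running across a grassy field"
reference = "a dog runs across a green grassy field"
print(round(cosine_sim(generated, reference), 2))  # 0.8
```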
|
2501.00116 | Text-to-Image GAN with Pretrained Representations | cs.CV cs.AI cs.LG | Generating desired images conditioned on given text descriptions has received
considerable attention. Recently, diffusion models and autoregressive models have
demonstrated outstanding expressivity and have gradually replaced GANs as the
favored architecture for text-to-image synthesis. However, they still face
some obstacles: slow inference speed and expensive training costs. To achieve
more powerful and faster text-to-image synthesis under complex scenes, we
propose TIGER, a text-to-image GAN with pretrained representations. To be
specific, we propose a vision-empowered discriminator and a high-capacity
generator. (i) The vision-empowered discriminator absorbs the complex scene
understanding ability and the domain generalization ability from pretrained
vision models to enhance model performance. Unlike previous works, we explore
stacking multiple pretrained models in our discriminator to collect multiple
different representations. (ii) The high-capacity generator aims to achieve
effective text-image fusion while increasing the model capacity. The
high-capacity generator consists of multiple novel high-capacity fusion blocks
(HFBlock). Each HFBlock contains several deep fusion modules and a global
fusion module, which play complementary roles in our model. Extensive
experiments demonstrate the outstanding performance of our proposed TIGER both
on standard and zero-shot text-to-image synthesis tasks. On the standard
text-to-image synthesis task, TIGER achieves state-of-the-art performance on
two challenging datasets, obtaining new FID scores of 5.48 (COCO) and 9.38 (CUB). On
the zero-shot text-to-image synthesis task, we achieve comparable performance
with fewer model parameters, smaller training data size and faster inference
speed. Additionally, more experiments and analyses are conducted in the
Supplementary Material.
|
2501.00119 | Post Launch Evaluation of Policies in a High-Dimensional Setting | stat.ML cs.LG stat.AP stat.ME | A/B tests, also known as randomized controlled experiments (RCTs), are the
gold standard for evaluating the impact of new policies, products, or
decisions. However, these tests can be costly in terms of time and resources,
potentially exposing users, customers, or other test subjects (units) to
inferior options. This paper explores practical considerations in applying
methodologies inspired by "synthetic control" as an alternative to traditional
A/B testing in settings with very large numbers of units, involving up to
hundreds of millions of units, which is common in modern applications such as
e-commerce and ride-sharing platforms. This method is particularly valuable in
settings where the treatment affects only a subset of units, leaving many units
unaffected. In these scenarios, synthetic control methods leverage data from
unaffected units to estimate counterfactual outcomes for treated units. After
the treatment is implemented, these estimates can be compared to actual
outcomes to measure the treatment effect. A key challenge in creating accurate
counterfactual outcomes is interpolation bias, a well-documented phenomenon
that occurs when control units differ significantly from treated units. To
address this, we propose a two-phase approach: first using nearest neighbor
matching based on unit covariates to select similar control units, then
applying supervised learning methods suitable for high-dimensional data to
estimate counterfactual outcomes. Testing using six large-scale experiments
demonstrates that this approach successfully improves estimate accuracy.
However, our analysis reveals that machine learning bias -- which arises from
methods that trade off bias for variance reduction -- can impact results and
affect conclusions about treatment effects. We document this bias in
large-scale experimental settings and propose effective de-biasing techniques
to address this challenge.
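The two-phase procedure above can be sketched on synthetic panel data; the unit counts, covariate model, and the ridge-regularized least-squares second stage are illustrative assumptions, not the paper's exact pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy panel: outcome = unit effect (linear in covariates) + common time shock + noise.
n_control, n_cov, t_pre = 200, 5, 30
beta = rng.normal(size=n_cov)
X_control = rng.normal(size=(n_control, n_cov))      # control-unit covariates
x_treated = rng.normal(size=n_cov)                   # treated-unit covariates
shocks = rng.normal(size=t_pre + 1)                  # last period is "post-launch"
Y_control = X_control @ beta[:, None] + shocks + 0.05 * rng.normal(size=(n_control, t_pre + 1))
y_treated = x_treated @ beta + shocks + 0.05 * rng.normal(size=t_pre + 1)

# Phase 1: nearest-neighbor matching on covariates to pick similar controls,
# limiting interpolation bias.
k = 20
dists = np.linalg.norm(X_control - x_treated, axis=1)
idx = np.argsort(dists)[:k]

# Phase 2: supervised learning on pre-treatment outcomes (ridge-regularized
# least squares here, standing in for a high-dimensional learner).
A = Y_control[idx, :t_pre].T                         # (t_pre, k)
lam = 0.1
w = np.linalg.solve(A.T @ A + lam * np.eye(k), A.T @ y_treated[:t_pre])
counterfactual = Y_control[idx, t_pre] @ w

truth = x_treated @ beta + shocks[t_pre]             # untreated post outcome
assert abs(counterfactual - truth) < 0.5
```

Comparing `counterfactual` to the actual post-launch outcome of the treated unit yields the treatment-effect estimate; the regularization term is one simple guard against the machine-learning bias the abstract discusses.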
|
2501.00124 | PQD: Post-training Quantization for Efficient Diffusion Models | cs.CV cs.LG | Diffusion models (DMs) have demonstrated remarkable achievements in synthesizing
images of high fidelity and diversity. However, the extensive computational
requirements and slow generative speed of diffusion models have limited their
widespread adoption. In this paper, we propose a novel post-training
quantization for diffusion models (PQD), which is a time-aware optimization
framework for diffusion models based on post-training quantization. The
proposed framework optimizes the inference process by selecting representative
samples and conducting time-aware calibration. Experimental results show that
our proposed method is able to directly quantize full-precision diffusion
models into 8-bit or 4-bit models while maintaining comparable performance in a
training-free manner, with only a minor change in FID on ImageNet for unconditional
image generation. Our approach demonstrates compatibility and can also be
applied to 512x512 text-guided image generation for the first time.
|
2501.00129 | A Data-Centric Approach to Detecting and Mitigating Demographic Bias in
Pediatric Mental Health Text: A Case Study in Anxiety Detection | cs.CL cs.AI | Introduction: Healthcare AI models often inherit biases from their training
data. While efforts have primarily targeted bias in structured data, mental
health heavily depends on unstructured data. This study aims to detect and
mitigate linguistic differences related to non-biological differences in the
training data of AI models designed to assist in pediatric mental health
screening. Our objectives are: (1) to assess the presence of bias by evaluating
outcome parity across sex subgroups, (2) to identify bias sources through
textual distribution analysis, and (3) to develop a de-biasing method for
mental health text data. Methods: We examined classification parity across
demographic groups and assessed how gendered language influences model
predictions. A data-centric de-biasing method was applied, focusing on
neutralizing biased terms while retaining salient clinical information. This
methodology was tested on a model for automatic anxiety detection in pediatric
patients. Results: Our findings revealed a systematic under-diagnosis of female
adolescent patients, with a 4% lower accuracy and a 9% higher False Negative
Rate (FNR) compared to male patients, likely due to disparities in information
density and linguistic differences in patient notes. Notes for male patients
were on average 500 words longer, and linguistic similarity metrics indicated
distinct word distributions between genders. Implementing our de-biasing
approach reduced diagnostic bias by up to 27%, demonstrating its effectiveness
in enhancing equity across demographic groups. Discussion: We developed a
data-centric de-biasing framework to address gender-based content disparities
within clinical text. By neutralizing biased language and enhancing focus on
clinically essential information, our approach demonstrates an effective
strategy for mitigating bias in AI healthcare models trained on text.
|
2501.00135 | GroverGPT: A Large Language Model with 8 Billion Parameters for Quantum
Searching | quant-ph cs.AI cs.LG | Quantum computing is an exciting non-Von Neumann paradigm, offering provable
speedups over classical computing for specific problems. However, the practical
limits of classical simulatability for quantum circuits remain unclear,
especially with current noisy quantum devices. In this work, we explore the
potential of leveraging Large Language Models (LLMs) to simulate the output of
a quantum Turing machine using Grover's quantum circuits, known to provide
quadratic speedups over classical counterparts. To this end, we developed
GroverGPT, a specialized model based on LLaMA's 8-billion-parameter
architecture, trained on over 15 trillion tokens. Unlike brute-force
state-vector simulations, which demand substantial computational resources,
GroverGPT employs pattern recognition to approximate quantum search algorithms
without explicitly representing quantum states. Analyzing 97K quantum search
instances, GroverGPT consistently outperformed OpenAI's GPT-4o (45\% accuracy),
achieving nearly 100\% accuracy on 6- and 10-qubit datasets when trained on
4-qubit or larger datasets. It also demonstrated strong generalization,
surpassing 95\% accuracy for systems with over 20 qubits when trained on 3- to
6-qubit data. Analysis indicates GroverGPT captures quantum features of
Grover's search rather than classical patterns, supported by novel prompting
strategies to enhance performance. Although accuracy declines with increasing
system size, these findings offer insights into the practical boundaries of
classical simulatability. This work suggests task-specific LLMs can surpass
general-purpose models like GPT-4o in quantum algorithm learning and serve as
powerful tools for advancing quantum research.
|
2501.00136 | Detection-Fusion for Knowledge Graph Extraction from Videos | cs.CV cs.AI cs.LG | One of the challenging tasks in the field of video understanding is
extracting semantic content from video inputs. Most existing systems use
language models to describe videos in natural language sentences, but this has
several major shortcomings. Such systems can rely too heavily on the language
model component and base their output on statistical regularities in natural
language text rather than on the visual contents of the video. Additionally,
natural language annotations cannot be readily processed by a computer, are
difficult to evaluate with performance metrics and cannot be easily translated
into a different natural language. In this paper, we propose a method to
annotate videos with knowledge graphs, and so avoid these problems.
Specifically, we propose a deep-learning-based model for this task that first
predicts pairs of individuals and then the relations between them.
Additionally, we propose an extension of our model for the inclusion of
background knowledge in the construction of knowledge graphs.
|
2501.00138 | NiaAutoARM: Automated generation and evaluation of Association Rule
Mining pipelines | cs.NE cs.AI | The Numerical Association Rule Mining paradigm that includes concurrent
dealing with numerical and categorical attributes is beneficial for discovering
associations from datasets consisting of both feature types. The process is not
trivial, since it incorporates several processing steps that run sequentially
and form an entire pipeline, e.g., preprocessing, algorithm selection,
hyper-parameter optimization, and the definition of metrics evaluating the
quality of the association rules. In this paper, we propose NiaAutoARM, a novel
Automated Machine Learning method that constructs full association rule mining
pipelines automatically using stochastic population-based
meta-heuristics. Along with the theoretical representation of the
proposed method, we also present a comprehensive experimental evaluation of the
proposed method.
|
2501.00142 | Minimalist Vision with Freeform Pixels | cs.CV eess.IV | A minimalist vision system uses the smallest number of pixels needed to solve
a vision task. While traditional cameras use a large grid of square pixels, a
minimalist camera uses freeform pixels that can take on arbitrary shapes to
increase their information content. We show that the hardware of a minimalist
camera can be modeled as the first layer of a neural network, where the
subsequent layers are used for inference. Training the network for any given
task yields the shapes of the camera's freeform pixels, each of which is
implemented using a photodetector and an optical mask. We have designed
minimalist cameras for monitoring indoor spaces (with 8 pixels), measuring room
lighting (with 8 pixels), and estimating traffic flow (with 8 pixels). The
performance demonstrated by these systems is on par with a traditional camera
with orders of magnitude more pixels. Minimalist vision has two major
advantages. First, it naturally tends to preserve the privacy of individuals in
the scene since the captured information is inadequate for extracting visual
details. Second, since the number of measurements made by a minimalist camera
is very small, we show that it can be fully self-powered, i.e., function
without an external power supply or a battery.
|
2501.00149 | LASSE: Learning Active Sampling for Storm Tide Extremes in
Non-Stationary Climate Regimes | physics.ao-ph cs.LG physics.geo-ph | Identifying tropical cyclones that generate destructive storm tides for risk
assessment, such as from large downscaled storm catalogs for climate studies,
is often intractable because it entails many expensive Monte Carlo hydrodynamic
simulations. Here, we show that surrogate models are promising from accuracy,
recall, and precision perspectives, and they "generalize" to novel climate
scenarios. We then present an informative online learning approach to rapidly
search for extreme storm tide-producing cyclones using only a few hydrodynamic
simulations. Starting from a minimal subset of TCs with detailed storm tide
hydrodynamic simulations, a surrogate model selects informative data to retrain
online and iteratively improves its predictions of damaging TCs. Results on an
extensive catalog of downscaled TCs indicate 100% precision in retrieving rare
destructive storms using less than 20% of the simulations as training. The
informative sampling approach is efficient, scalable to large storm catalogs,
and generalizable to climate scenarios.
|
2501.00152 | Temporal reasoning for timeline summarisation in social media | cs.CL cs.AI | This paper explores whether enhancing temporal reasoning capabilities in
Large Language Models (LLMs) can improve the quality of timeline summarisation,
the task of summarising long texts containing sequences of events, such as
social media threads. We first introduce NarrativeReason, a novel dataset
focused on temporal relationships among sequential events within narratives,
distinguishing it from existing temporal reasoning datasets that primarily
address pair-wise event relationships. Our approach then combines temporal
reasoning with timeline summarisation through a knowledge distillation
framework, where we first fine-tune a teacher model on temporal reasoning tasks
and then distill this knowledge into a student model while simultaneously
training it for the task of timeline summarisation. Experimental results
demonstrate that our model achieves superior performance on out-of-domain
mental health-related timeline summarisation tasks, which involve long social
media threads with repetitions of events and a mix of emotions, highlighting
the importance and generalisability of leveraging temporal reasoning to improve
timeline summarisation.
|
2501.00154 | Probabilistic Explanations for Linear Models | cs.AI cs.CC | Formal XAI is an emerging field that focuses on providing explanations with
mathematical guarantees for the decisions made by machine learning models. A
significant amount of work in this area is centered on the computation of
"sufficient reasons". Given a model $M$ and an input instance $\vec{x}$, a
sufficient reason for the decision $M(\vec{x})$ is a subset $S$ of the features
of $\vec{x}$ such that for any instance $\vec{z}$ that has the same values as
$\vec{x}$ for every feature in $S$, it holds that $M(\vec{x}) = M(\vec{z})$.
Intuitively, this means that the features in $S$ are sufficient to fully
justify the classification of $\vec{x}$ by $M$. For sufficient reasons to be
useful in practice, they should be as small as possible, and a natural way to
reduce the size of sufficient reasons is to consider a probabilistic
relaxation; the probability of $M(\vec{x}) = M(\vec{z})$ must be at least some
value $\delta \in (0,1]$, for a random instance $\vec{z}$ that coincides with
$\vec{x}$ on the features in $S$. Computing small $\delta$-sufficient reasons
($\delta$-SRs) is known to be a theoretically hard problem; even over decision
trees--traditionally deemed simple and interpretable models--strong
inapproximability results make the efficient computation of small $\delta$-SRs
unlikely. We propose the notion of $(\delta, \epsilon)$-SR, a simple relaxation
of $\delta$-SRs, and show that this kind of explanation can be computed
efficiently over linear models.
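To make the probabilistic relaxation concrete, the following Monte Carlo sketch estimates $\delta$ for a candidate set $S$ under a toy linear classifier over boolean features; the weights, bias, and instance are illustrative assumptions, not an example from the paper:

```python
import random

random.seed(0)

# Hypothetical linear classifier M over 5 boolean features.
w = [2.0, -1.0, 0.5, 0.25, 0.1]
b = -0.5
def M(x):
    return int(sum(wi * xi for wi, xi in zip(w, x)) + b > 0)

x = [1, 0, 1, 1, 0]
S = {0, 1}  # candidate sufficient reason: features 0 and 1

def estimate_delta(x, S, n_samples=20000):
    # Monte Carlo estimate of P(M(z) = M(x)) over random z that
    # agrees with x on the features in S.
    hits = 0
    for _ in range(n_samples):
        z = [x[i] if i in S else random.randint(0, 1) for i in range(len(x))]
        hits += M(z) == M(x)
    return hits / n_samples

delta_hat = estimate_delta(x, S)
assert delta_hat == 1.0  # {0, 1} fully determines the decision here
```

For this model, fixing features 0 and 1 already forces the decision, so the estimate is exactly 1; dropping feature 0 from $S$ yields an estimate well below 1, illustrating how shrinking $S$ trades size against $\delta$.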
|
2501.00158 | Urban Water Consumption Forecasting Using Deep Learning and Correlated
District Metered Areas | cs.LG cs.CY | Accurate water consumption forecasting is a crucial tool for water utilities
and policymakers, as it helps ensure a reliable supply, optimize operations,
and support infrastructure planning. Urban Water Distribution Networks (WDNs)
are divided into District Metered Areas (DMAs), where water flow is monitored
to efficiently manage resources. This work focuses on short-term forecasting of
DMA consumption using deep learning and aims to address two key challenging
issues. First, forecasting based solely on a DMA's historical data may lack
broader context and provide limited insights. Second, DMAs may experience
sensor malfunctions providing incorrect data, or some DMAs may not be monitored
at all due to computational costs, complicating accurate forecasting. We
propose a novel method that first identifies DMAs with correlated consumption
patterns and then uses these patterns, along with the DMA's local data, as
input to a deep learning model for forecasting. In a real-world study with data
from five DMAs, we show that: i) the deep learning model outperforms a
classical statistical model; ii) accurate forecasting can be carried out using
only correlated DMAs' consumption patterns; and iii) even when a DMA's local
data is available, including correlated DMAs' data improves accuracy.
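The first step, identifying DMAs with correlated consumption patterns, can be sketched with Pearson correlation on toy hourly series; the series and the number of DMAs are fabricated for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy hourly consumption for five hypothetical DMAs: DMA 0 and DMA 1 share
# a daily pattern; the remaining three are independent noise.
t = np.arange(24 * 14)                     # two weeks of hourly readings
daily = np.sin(2 * np.pi * t / 24)
series = np.stack([
    daily + 0.1 * rng.normal(size=t.size),
    daily + 0.1 * rng.normal(size=t.size),
    rng.normal(size=t.size),
    rng.normal(size=t.size),
    rng.normal(size=t.size),
])

corr = np.corrcoef(series)                 # pairwise Pearson correlations
target = 0                                 # DMA whose consumption we forecast
others = [i for i in range(5) if i != target]
best = max(others, key=lambda i: corr[target, i])
assert best == 1                           # DMA 1 shares the daily pattern
```

The consumption history of the selected correlated DMAs would then be concatenated with the target DMA's local data as input to the forecasting model.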
|
2501.00160 | Deterministic Model of Incremental Multi-Agent Boltzmann Q-Learning:
Transient Cooperation, Metastability, and Oscillations | cs.MA nlin.AO physics.soc-ph | Multi-Agent Reinforcement Learning involves agents that learn together in a
shared environment, leading to emergent dynamics sensitive to initial
conditions and parameter variations. A Dynamical Systems approach, which
studies the evolution of multi-component systems over time, has uncovered some
of the underlying dynamics by constructing deterministic approximation models
of stochastic algorithms. In this work, we demonstrate that even in the
simplest case of independent Q-learning with a Boltzmann exploration policy,
significant discrepancies arise between the actual algorithm and previous
approximations. We elaborate on why these models actually approximate interesting
variants rather than the original incremental algorithm. To explain the
discrepancies, we introduce a new discrete-time approximation model that
explicitly accounts for agents' update frequencies within the learning process
and show that its dynamics fundamentally differ from the simplified dynamics of
prior models. We illustrate the usefulness of our approach by applying it to
the question of spontaneous cooperation in social dilemmas, specifically the
Prisoner's Dilemma as the simplest case study. We identify conditions under
which the learning behaviour appears as long-term stable cooperation from an
external perspective. However, our model shows that this behaviour is merely a
metastable transient phase and not a true equilibrium, making it exploitable.
We further exemplify how specific parameter settings can significantly
exacerbate the moving target problem in independent learning. Through a
systematic analysis of our model, we show that increasing the discount factor
induces oscillations, preventing convergence to a joint policy. These
oscillations arise from a supercritical Neimark-Sacker bifurcation, which
transforms the unique stable fixed point into an unstable focus surrounded by a
stable limit cycle.
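The incremental algorithm under study, independent Q-learning with a Boltzmann exploration policy, can be sketched in its simplest stateless form; the payoffs and temperature below are illustrative, not the paper's experimental settings:

```python
import math
import random

random.seed(0)

def boltzmann_policy(q_values, temperature):
    # Softmax (Boltzmann) action probabilities from Q-values.
    prefs = [math.exp(q / temperature) for q in q_values]
    total = sum(prefs)
    return [p / total for p in prefs]

alpha, temperature = 0.1, 0.5
q = [0.0, 0.0]                    # Q-values for two actions
reward = {0: 3.0, 1: 5.0}         # toy payoffs against a fixed opponent

for _ in range(500):
    probs = boltzmann_policy(q, temperature)
    action = 0 if random.random() < probs[0] else 1
    # The incremental update touches only the sampled action, so each
    # action's update frequency depends on the policy itself -- exactly
    # the effect the deterministic approximation model must account for.
    q[action] += alpha * (reward[action] - q[action])

assert q[1] > q[0]                # the higher-payoff action dominates
```

Deterministic approximations that average over all actions at every step miss this policy-dependent update frequency, which is the source of the discrepancies the abstract describes.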
|
2501.00162 | Class-based Subset Selection for Transfer Learning under Extreme Label
Shift | cs.LG cs.AI | Existing work within transfer learning often follows a two-step process --
pre-training over a large-scale source domain and then finetuning over limited
samples from the target domain. Yet, despite its popularity, this methodology
has been shown to suffer in the presence of distributional shift --
specifically when the output spaces diverge. Previous work has focused on
increasing model performance within this setting by identifying and classifying
only the shared output classes between distributions. However, these methods
are inherently limited as they ignore classes outside the shared class set,
disregarding potential information relevant to the model transfer. This paper
proposes a new process for few-shot transfer learning that selects and weighs
classes from the source domain to optimize the transfer between domains. More
concretely, we use Wasserstein distance to choose a set of source classes and
their weights that minimize the distance between the source and target domain.
To justify our proposed algorithm, we provide a generalization analysis of the
performance of the learned classifier over the target domain and show that our
method corresponds to a bound minimization algorithm. We empirically
demonstrate the effectiveness of our approach (WaSS) by experimenting on
several different datasets and presenting superior performance within various
label shift settings, including the extreme case where the label spaces are
disjoint.
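The class-selection idea can be sketched in one dimension, where the Wasserstein-1 distance between equal-size empirical samples is the mean absolute difference of sorted values; the class features below are toy Gaussians, and the greedy ranking stands in for the paper's joint weight optimization:

```python
import random

random.seed(0)

def wasserstein_1d(a, b):
    # Wasserstein-1 distance between equal-size 1-D empirical samples:
    # mean absolute difference of the sorted values.
    assert len(a) == len(b)
    return sum(abs(x - y) for x, y in zip(sorted(a), sorted(b))) / len(a)

# Toy 1-D "features": three hypothetical source classes and one target class.
n = 500
target = [random.gauss(0.0, 1.0) for _ in range(n)]
source_classes = {
    "close":  [random.gauss(0.2, 1.0) for _ in range(n)],
    "medium": [random.gauss(2.0, 1.0) for _ in range(n)],
    "far":    [random.gauss(5.0, 1.0) for _ in range(n)],
}

# Rank source classes by distributional distance to the target domain.
ranked = sorted(source_classes,
                key=lambda c: wasserstein_1d(source_classes[c], target))
assert ranked[0] == "close" and ranked[-1] == "far"
```

Classes nearest to the target distribution would be kept (and weighted) for pre-training, while distant ones are down-weighted or discarded.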
|
2501.00164 | Measuring Large Language Models Capacity to Annotate Journalistic
Sourcing | cs.CL cs.CY | Since the launch of ChatGPT in late 2022, the capacities of Large Language
Models and their evaluation have been under constant discussion both in
academic research and in industry. Scenarios and benchmarks have
been developed in several areas such as law, medicine and math (Bommasani et
al., 2023) and there is continuous evaluation of model variants. One area that
has not received sufficient scenario development attention is journalism, and
in particular journalistic sourcing and ethics. Journalism is a crucial
truth-determination function in democracy (Vincent, 2023), and sourcing is a
crucial pillar to all original journalistic output. Evaluating the capacities
of LLMs to annotate stories for the different signals of sourcing and how
reporters justify them is a crucial scenario that warrants a benchmark
approach. It offers potential to build automated systems to contrast more
transparent and ethically rigorous forms of journalism with everyday fare. In
this paper we lay out a scenario to evaluate LLM performance on identifying and
annotating sourcing in news stories on a five-category schema inspired from
journalism studies (Gans, 2004). We present the use case, our dataset, and
metrics as a first step towards systematic benchmarking. Our accuracy findings
indicate that LLM-based approaches have more catching up to do in identifying
all the sourced statements in a story and, equally, in matching the types of sources. An
even harder task is spotting source justifications.
|
2501.00165 | Dynamic Graph Communication for Decentralised Multi-Agent Reinforcement
Learning | cs.MA | This work presents a novel communication framework for decentralized
multi-agent systems operating in dynamic network environments. Integrated into
a multi-agent reinforcement learning system, the framework is designed to
enhance decision-making by optimizing the network's collective knowledge
through efficient communication. Key contributions include adapting a static
network packet-routing scenario to a dynamic setting with node failures,
incorporating a graph attention network layer in a recurrent message-passing
framework, and introducing a multi-round communication targeting mechanism.
This approach enables an attention-based aggregation mechanism to be
successfully trained within a sparse-reward, dynamic network packet-routing
environment using only reinforcement learning. Experimental results show
improvements in routing performance, including a 9.5 percent increase in
average rewards and a 6.4 percent reduction in communication overhead compared
to a baseline system. The study also examines the ethical and legal
implications of deploying such systems in critical infrastructure and military
contexts, identifies current limitations, and suggests potential directions for
future research.
|
2501.00167 | On Functional Observability of Nonlinear Systems and the Design of
Functional Observers with Assignable Error Dynamics | eess.SY cs.SY | This paper proposes a novel approach for designing functional observers for
nonlinear systems, with linear error dynamics and assignable poles. Sufficient
conditions for functional observability are first derived, leading to
functional relationships between the Lie derivatives of the output to be
estimated and the ones of the measured output. These are directly used in the
proposed design of the functional observer. The functional observer is defined
in differential input-output form, satisfying an appropriate invariance
condition that emerges from the state-space invariance conditions of the
literature. A concept of functional observer index is also proposed, to
characterize the lowest feasible order of functional observer with pole
assignment. Two chemical reactor applications are used to illustrate the
proposed approach.
|
2501.00169 | DeepLL: Considering Linear Logic for the Analysis of Deep Learning
Experiments | cs.PL cs.AI cs.CL cs.SE | Deep Learning experiments have critical requirements regarding the careful
handling of their datasets as well as the efficient and correct usage of APIs
that interact with hardware accelerators. On the one hand, software mistakes
during data handling can contaminate experiments and lead to incorrect results.
On the other hand, poorly coded APIs that interact with the hardware can lead
to sub-optimal usage and untrustworthy conclusions. In this work we investigate
the use of Linear Logic for the analysis of Deep Learning experiments. We show
that primitives and operators of Linear Logic can be used to express: (i) an
abstract representation of the control flow of an experiment, (ii) a set of
available experimental resources, such as API calls to the underlying
data-structures and hardware as well as (iii) reasoning rules about the correct
consumption of resources during experiments. Our proposed model is not only
lightweight but also easy to comprehend having both a symbolic and a visual
component. Finally, its artifacts are themselves proofs in Linear Logic that
can be readily verified by off-the-shelf reasoners.
|
2501.00170 | Federated Learning with Workload Reduction through Partial Training of
Client Models and Entropy-Based Data Selection | cs.LG cs.AI cs.DC | With the rapid expansion of edge devices, such as IoT devices, where crucial
data needed for machine learning applications is generated, it becomes
essential to promote their participation in privacy-preserving Federated
Learning (FL) systems. The best way to achieve this desideratum is to reduce
their training workload to match their constrained computational resources.
While prior FL research has addressed the workload constraints by introducing
lightweight models on the edge, limited attention has been given to optimizing
on-device training efficiency by reducing the amount of data needed during
training. In this work, we propose FedFT-EDS, a novel approach that combines
Fine-Tuning of partial client models with Entropy-based Data Selection to
reduce training workloads on edge devices. By actively selecting the most
informative local instances for learning, FedFT-EDS reduces training data
significantly in FL and demonstrates that not all user data is equally
beneficial for FL in every round. Our experiments on CIFAR-10 and CIFAR-100 show
that FedFT-EDS uses only 50% user data while improving the global model
performance compared to baseline methods, FedAvg and FedProx. Importantly,
FedFT-EDS improves client learning efficiency by up to 3 times, using one third
of training time on clients to achieve an equivalent performance to the
baselines. This work highlights the importance of data selection in FL and
presents a promising pathway to scalable and efficient Federated Learning.
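The entropy-based selection step can be sketched as follows; the softmax outputs are synthetic stand-ins for a client model's predictions, and the 50% retention rate mirrors the setting reported above:

```python
import math
import random

random.seed(0)

def prediction_entropy(probs):
    # Shannon entropy (nats) of a predicted class distribution.
    return -sum(p * math.log(p) for p in probs if p > 0)

def random_softmax(k, sharpness):
    # Hypothetical per-sample softmax output from a client's local model.
    logits = [random.gauss(0, sharpness) for _ in range(k)]
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    s = sum(exps)
    return [e / s for e in exps]

samples = [random_softmax(10, sharpness=random.uniform(0.1, 5.0))
           for _ in range(1000)]
entropies = [prediction_entropy(p) for p in samples]

# Keep the 50% most informative (highest-entropy) local instances.
order = sorted(range(len(samples)), key=lambda i: entropies[i], reverse=True)
selected = order[: len(samples) // 2]
assert len(selected) == 500
```

High-entropy samples are those the local model is least certain about, so training on them first concentrates the constrained on-device compute where it is most informative.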
|
2501.00172 | Algebraic Control: Complete Stable Inversion with Necessary and
Sufficient Conditions | math.OC cs.SY eess.SY | Recent advances in learning-based control have increased interest in stable
inversion to meet growing performance demands. Here, we establish necessary and
sufficient conditions for stable inversion, addressing challenges in
non-minimum phase, non-square, and singular systems. An H-Infinity based
algebraic approximation is introduced for near-perfect tracking without
preview. Additionally, we propose a novel robust control strategy combining the
nominal model with dual feedforward control to form a feedback structure.
Numerical comparison demonstrates the approach's effectiveness.
|
2501.00174 | The Text Classification Pipeline: Starting Shallow going Deeper | cs.CL cs.AI cs.IR | Text Classification (TC) stands as a cornerstone within the realm of Natural
Language Processing (NLP), particularly when viewed through the lens of
computer science and engineering. The past decade has seen deep learning
revolutionize TC, propelling advancements in text retrieval, categorization,
information extraction, and summarization. The scholarly literature is rich
with datasets, models, and evaluation criteria, with English being the
predominant language of focus, despite studies involving Arabic, Chinese,
Hindi, and others. The efficacy of TC models relies heavily on their ability to
capture intricate textual relationships and nonlinear correlations,
necessitating a comprehensive examination of the entire TC pipeline.
This monograph provides an in-depth exploration of the TC pipeline, with a
particular emphasis on evaluating the impact of each component on the overall
performance of TC models. The pipeline includes state-of-the-art datasets, text
preprocessing techniques, text representation methods, classification models,
evaluation metrics, current results and future trends. Each chapter
meticulously examines these stages, presenting technical innovations and
significant recent findings. The work critically assesses various
classification strategies, offering comparative analyses, examples, case
studies, and experimental evaluations. These contributions extend beyond a
typical survey, providing a detailed and insightful exploration of TC.
|
2501.00184 | TrajLearn: Trajectory Prediction Learning using Deep Generative Models | cs.LG cs.CV cs.RO | Trajectory prediction aims to estimate an entity's future path using its
current position and historical movement data, benefiting fields like
autonomous navigation, robotics, and human movement analytics. Deep learning
approaches have become key in this area, utilizing large-scale trajectory
datasets to model movement patterns, but face challenges in managing complex
spatial dependencies and adapting to dynamic environments. To address these
challenges, we introduce TrajLearn, a novel model for trajectory prediction
that leverages generative modeling of higher-order mobility flows based on
hexagonal spatial representation. TrajLearn predicts the next $k$ steps by
integrating a customized beam search for exploring multiple potential paths
while maintaining spatial continuity. We conducted a rigorous evaluation of
TrajLearn, benchmarking it against leading state-of-the-art approaches and
meaningful baselines. The results indicate that TrajLearn achieves significant
performance gains, with improvements of up to ~40% across multiple real-world
trajectory datasets. In addition, we evaluated different prediction horizons
(i.e., various values of $k$), conducted resolution sensitivity analysis, and
performed ablation studies to assess the impact of key model components.
Furthermore, we developed a novel algorithm to generate mixed-resolution maps
by hierarchically subdividing hexagonal regions into finer segments within a
specified observation area. This approach supports selective detailing,
applying finer resolution to areas of interest or high activity (e.g., urban
centers) while using coarser resolution for less significant regions (e.g.,
rural areas), effectively reducing data storage requirements and computational
overhead. We promote reproducibility and adaptability by offering complete
code, data, and detailed documentation with flexible configuration options for
various applications.
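The customized beam search over a hexagonal tiling can be sketched generically; the axial-coordinate neighborhood is a standard hex-grid convention, and the momentum-style scoring function is a toy stand-in for the learned generative model:

```python
# Axial-coordinate neighbors of a hexagonal cell.
HEX_DIRS = [(1, 0), (1, -1), (0, -1), (-1, 0), (-1, 1), (0, 1)]

def neighbors(cell):
    q, r = cell
    return [(q + dq, r + dr) for dq, dr in HEX_DIRS]

def step_logprob(path, cell):
    # Toy score standing in for the model: prefer continuing straight.
    if len(path) < 2:
        return 0.0
    dq, dr = path[-1][0] - path[-2][0], path[-1][1] - path[-2][1]
    straight = (path[-1][0] + dq, path[-1][1] + dr)
    return 0.0 if cell == straight else -1.0

def beam_search(history, k, beam_width=3):
    # Each beam item: (cumulative log-prob, path). Expanding only to
    # hexagonal neighbors enforces spatial continuity of the prediction.
    beams = [(0.0, list(history))]
    for _ in range(k):
        candidates = []
        for logp, path in beams:
            for cell in neighbors(path[-1]):
                candidates.append((logp + step_logprob(path, cell),
                                   path + [cell]))
        candidates.sort(key=lambda c: c[0], reverse=True)
        beams = candidates[:beam_width]
    return beams[0][1][len(history):]   # the next k predicted cells

pred = beam_search([(0, 0), (1, 0)], k=3)
assert pred == [(2, 0), (3, 0), (4, 0)]  # straight continuation wins
```

Replacing `step_logprob` with a trained next-cell model recovers the general scheme: the beam keeps several candidate paths alive while the neighbor constraint guarantees each is spatially continuous.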
|
2501.00190 | SepsisCalc: Integrating Clinical Calculators into Early Sepsis
Prediction via Dynamic Temporal Graph Construction | cs.LG cs.AI cs.HC | Sepsis is an organ dysfunction caused by a dysregulated immune response to an
infection. Early sepsis prediction and identification allow for timely
intervention, leading to improved clinical outcomes. Clinical calculators
(e.g., the six-organ dysfunction assessment of SOFA) play a vital role in
sepsis identification within clinicians' workflow, providing evidence-based
risk assessments essential for sepsis diagnosis. However, artificial
intelligence (AI) sepsis prediction models typically generate a single sepsis
risk score without incorporating clinical calculators for assessing organ
dysfunctions, making the models less convincing and transparent to clinicians.
To bridge the gap, we propose to mimic clinicians' workflow with a novel
framework SepsisCalc to integrate clinical calculators into the predictive
model, yielding a clinically transparent and precise model for utilization in
clinical settings. Practically, clinical calculators usually combine
information from multiple component variables in Electronic Health Records
(EHR), and might not be applicable when the variables are (partially) missing.
We mitigate this issue by representing EHRs as temporal graphs and integrating
a learning module that dynamically adds accurately estimated calculator results to the
graphs. Experimental results on real-world datasets show that the proposed
model outperforms state-of-the-art methods on sepsis prediction tasks.
Moreover, we developed a system to identify organ dysfunctions and potential
sepsis risks, providing a human-AI interaction tool for deployment, which can
help clinicians understand the prediction outputs and prepare timely
interventions for the corresponding dysfunctions, paving the way for actionable
clinical decision-making support for early intervention.
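As an illustration of how a clinical calculator combines component variables, and why missing components are a problem, here is a toy severity-scoring sketch (cutoff values are illustrative only, not a clinical tool; in the paper's framework a learning module estimates a calculator rather than dropping it when components are missing):

```python
def organ_score(value, cutoffs):
    """Map one organ-system variable to a 0-4 severity score: the score is
    the number of ascending, higher-is-worse cutoffs crossed. Missing
    values return None instead of silently contributing zero."""
    if value is None:
        return None
    return sum(value >= c for c in cutoffs)

def total_score(observations, variable_cutoffs):
    """Combine component scores, summing only the ones that are known."""
    parts = {v: organ_score(observations.get(v), cs)
             for v, cs in variable_cutoffs.items()}
    return parts, sum(s for s in parts.values() if s is not None)
```

A partially missing record yields a partial total, which is exactly the gap the learned estimation module is meant to close.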
|
2501.00191 | Equilibria in Network Constrained Markets with Market Maker | cs.GT cs.MA cs.SI cs.SY eess.SY math.OC | We study a networked economic system composed of $n$ producers supplying a
single homogeneous good to a number of geographically separated markets and of
a centralized authority, called the market maker. Producers compete \`a la
Cournot, by choosing the quantities of good to supply to each market they have
access to in order to maximize their profit. Every market is characterized by
its inverse demand functions returning the unit price of the considered good as
a function of the total available quantity. Markets are interconnected by a
dispatch network through which quantities of the considered good can flow
within finite capacity constraints. Such flows are determined by the market
maker, who aims at maximizing a designated welfare function. We model such
competition as a strategic game with $n+1$ players: the producers and the
market maker. For this game, we first establish the existence of Nash equilibria
under standard concavity assumptions. We then identify sufficient conditions
for the game to be potential with an essentially unique Nash equilibrium. Next,
we present a general result that connects the optimal action of the market
maker with the capacity constraints imposed on the network. For the commonly
used Walrasian welfare, our finding proves a connection between capacity
bottlenecks in the market network and the emergence of price differences
between markets separated by saturated lines. This phenomenon is frequently
observed in real-world scenarios, for instance in power networks. Finally, we
validate the model with data from the Italian day-ahead electricity market.
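For intuition on the Cournot side, a minimal single-market sketch with linear inverse demand is below; the paper's networked setting with capacity constraints and a welfare-maximizing market maker is substantially richer:

```python
def cournot_equilibrium(n, a, b, cost, iters=500):
    """Best-response iteration for one market with linear inverse demand
    p(Q) = a - b*Q. Firm i's best response to its rivals' total output is
    max(0, (a - cost[i] - b*q_rivals) / (2*b)); under the concavity
    assumptions in the abstract the iteration settles at a Nash equilibrium."""
    q = [0.0] * n
    for _ in range(iters):
        for i in range(n):
            rivals = sum(q) - q[i]
            q[i] = max(0.0, (a - cost[i] - b * rivals) / (2 * b))
    return q
```

For two symmetric zero-cost firms with a = 12, b = 1, this converges to the textbook equilibrium quantity (a - c)/(3b) = 4 per firm.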
|
2501.00192 | MLLM-as-a-Judge for Image Safety without Human Labeling | cs.CV cs.CL cs.CY cs.LG | Image content safety has become a significant challenge with the rise of
visual media on online platforms. Meanwhile, in the age of AI-generated content
(AIGC), many image generation models are capable of producing harmful content,
such as images containing sexual or violent material. Thus, it becomes crucial
to identify such unsafe images based on established safety rules. Pre-trained
Multimodal Large Language Models (MLLMs) offer potential in this regard, given
their strong pattern recognition abilities. Existing approaches typically
fine-tune MLLMs with human-labeled datasets, which however brings a series of
drawbacks. First, relying on human annotators to label data following intricate
and detailed guidelines is both expensive and labor-intensive. Furthermore,
users of safety judgment systems may need to frequently update safety rules,
making fine-tuning on human-based annotation more challenging. This raises the
research question: Can we detect unsafe images by querying MLLMs in a zero-shot
setting using a predefined safety constitution (a set of safety rules)? Our
research showed that simply querying pre-trained MLLMs does not yield
satisfactory results. This lack of effectiveness stems from factors such as the
subjectivity of safety rules, the complexity of lengthy constitutions, and the
inherent biases in the models. To address these challenges, we propose an
MLLM-based method that includes objectifying safety rules, assessing the relevance
between rules and images, making quick judgments based on debiased token
probabilities with logically complete yet simplified precondition chains for
safety rules, and conducting more in-depth reasoning with cascaded
chain-of-thought processes if necessary. Experiment results demonstrate that
our method is highly effective for zero-shot image safety judgment tasks.
|
2501.00193 | A Pseudo-random Number Generator for Multi-Sequence Generation with
Programmable Statistics | cs.CR cs.IT math.IT | Pseudo-random number generators (PRNGs) are essential in a wide range of
applications, from cryptography to statistical simulations and optimization
algorithms. While uniform randomness is crucial for security-critical areas
like cryptography, many domains, such as simulated annealing and CMOS-based
Ising Machines, benefit from controlled or non-uniform randomness to enhance
solution exploration and optimize performance. This paper presents a hardware
PRNG that can simultaneously generate multiple uncorrelated sequences with
programmable statistics tailored to specific application needs. Designed in
a 65nm process, the PRNG occupies an area of approximately 0.0013mm^2 and has an
energy consumption of 0.57pJ/bit. Simulations confirm the PRNG's effectiveness
in modulating the statistical distribution while demonstrating high-quality
randomness properties.
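The core idea of programmable statistics can be sketched in software as thresholding uniform draws (a behavioral sketch only; the paper's contribution is the 65nm hardware design):

```python
import random

def biased_bits(p_one, n, seed=0):
    """One sequence with programmable statistics: compare each uniform draw
    against the programmable threshold p_one, so P(bit = 1) = p_one.
    Independent generator instances with distinct seeds give multiple
    (pseudo-)uncorrelated sequences."""
    rng = random.Random(seed)
    return [1 if rng.random() < p_one else 0 for _ in range(n)]
```

The empirical fraction of ones tracks the programmed probability for long enough sequences.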
|
2501.00195 | Towards Unraveling and Improving Generalization in World Models | cs.LG cs.AI | World models have recently emerged as a promising approach to reinforcement
learning (RL), achieving state-of-the-art performance across a wide range of
visual control tasks. This work aims to obtain a deep understanding of the
robustness and generalization capabilities of world models. Thus motivated, we
develop a stochastic differential equation formulation by treating the world
model learning as a stochastic dynamical system, and characterize the impact of
latent representation errors on robustness and generalization, for both cases
with zero-drift representation errors and with non-zero-drift representation
errors. Our somewhat surprising findings, based on both theoretic and
experimental studies, reveal that for the case with zero drift, modest latent
representation errors can in fact function as implicit regularization and hence
result in improved robustness. We further propose a Jacobian regularization
scheme to mitigate the compounding error propagation effects of non-zero drift,
thereby enhancing training stability and robustness. Our experimental studies
corroborate that this regularization approach not only stabilizes training but
also accelerates convergence and improves accuracy of long-horizon prediction.
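The flavor of a Jacobian regularization term can be sketched with finite differences (an illustrative sketch; the paper's scheme operates on a learned world model's latent dynamics):

```python
def jacobian_penalty(f, z, eps=1e-4):
    """Finite-difference estimate of the squared Frobenius norm of the
    Jacobian of a latent transition map f at point z. Adding this penalty
    to the training loss damps the compounding propagation of
    representation errors across rollout steps."""
    base = f(z)
    total = 0.0
    for i in range(len(z)):
        zp = list(z)
        zp[i] += eps  # perturb one latent coordinate
        shifted = f(zp)
        total += sum(((s - b) / eps) ** 2 for s, b in zip(shifted, base))
    return total
```

For a linear map the estimate matches the exact Frobenius norm of its matrix, which makes the sketch easy to sanity-check.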
|
2501.00199 | GPT-4 on Clinic Depression Assessment: An LLM-Based Pilot Study | cs.CL cs.AI | Depression has impacted millions of people worldwide and has become one of
the most prevalent mental disorders. Early mental disorder detection can lead
to cost savings for public health agencies and avoid the onset of other major
comorbidities. Additionally, the shortage of specialized personnel is a
critical issue because clinical depression diagnosis is highly dependent on
expert professionals and is time-consuming.
In this study, we explore the use of GPT-4 for clinical depression assessment
based on transcript analysis. We examine the model's ability to classify
patient interviews into binary categories: depressed and not depressed. A
comparative analysis is conducted considering prompt complexity (e.g., using
both simple and complex prompts) as well as varied temperature settings to
assess the impact of prompt complexity and randomness on the model's
performance.
Results indicate that GPT-4 exhibits considerable variability in accuracy and
F1-Score across configurations, with optimal performance observed at lower
temperature values (0.0-0.2) for complex prompts. However, beyond a certain
threshold (temperature >= 0.3), the relationship between randomness and
performance becomes unpredictable, diminishing the gains from prompt
complexity.
These findings suggest that, while GPT-4 shows promise for clinical
assessment, the configuration of the prompts and model parameters requires
careful calibration to ensure consistent results. This preliminary study
contributes to understanding the dynamics between prompt engineering and large
language models, offering insights for future development of AI-powered tools
in clinical settings.
|
2501.00200 | Scalable Neural Network Verification with Branch-and-bound Inferred
Cutting Planes | cs.LG cs.CR math.OC | Recently, cutting-plane methods such as GCP-CROWN have been explored to
enhance neural network verifiers and made significant advances. However,
GCP-CROWN currently relies on generic cutting planes (cuts) generated from
external mixed integer programming (MIP) solvers. Due to the poor scalability
of MIP solvers, large neural networks cannot benefit from these cutting planes.
In this paper, we exploit the structure of the neural network verification
problem to generate efficient and scalable cutting planes specific for this
problem setting. We propose a novel approach, Branch-and-bound Inferred Cuts
with COnstraint Strengthening (BICCOS), which leverages the logical
relationships of neurons within verified subproblems in the branch-and-bound
search tree, and introduces cuts that preclude these relationships in other
subproblems. We develop a mechanism that assigns influence scores to neurons in
each path to allow the strengthening of these cuts. Furthermore, we design a
multi-tree search technique to identify more cuts, effectively narrowing the
search space and accelerating the BaB algorithm. Our results demonstrate that
BICCOS can generate hundreds of useful cuts during the branch-and-bound process
and consistently increase the number of verifiable instances compared to other
state-of-the-art neural network verifiers on a wide range of benchmarks,
including large networks that previous cutting plane methods could not scale
to. BICCOS is part of the $\alpha,\beta$-CROWN verifier, the VNN-COMP 2024
winner. The code is available at http://github.com/Lemutisme/BICCOS .
|
2501.00201 | Hierarchical Functionality Prioritization in Multicast ISAC: Optimal
Admission Control and Discrete-Phase Beamforming | eess.SP cs.IT math.IT | We investigate the joint admission control and discrete-phase multicast
beamforming design for integrated sensing and communications (ISAC) systems,
where sensing and communications functionalities have different hierarchies.
Specifically, the ISAC system first allocates resources to the higher-hierarchy
functionality and opportunistically uses the remaining resources to support the
lower-hierarchy one. This resource allocation problem is a nonconvex
mixed-integer nonlinear program (MINLP). We propose an exact mixed-integer
linear program (MILP) reformulation, leading to a globally optimal solution. In
addition, we implemented three baselines for comparison, which our proposed
method outperforms by more than 39%.
|
2501.00204 | MSM-BD: Multimodal Social Media Bot Detection Using Heterogeneous
Information | cs.MM cs.SI | Although social bots can be engineered for constructive applications, their
potential for misuse in manipulative schemes and malware distribution cannot be
overlooked. This dichotomy underscores the critical need to detect social bots
on social media platforms. Advances in artificial intelligence have improved
the abilities of social bots, allowing them to generate content that is almost
indistinguishable from human-created content. These advancements require the
development of more advanced detection techniques to accurately identify these
automated entities. Given the heterogeneous information landscape on social
media, spanning images, texts, and user statistical features, we propose
MSM-BD, a Multimodal Social Media Bot Detection approach using heterogeneous
information. MSM-BD incorporates specialized encoders for heterogeneous
information and introduces a cross-modal fusion technique, Cross-Modal
Residual Cross-Attention (CMRCA), to enhance detection accuracy. We validate
the effectiveness of our model through extensive experiments using the
TwiBot-22 dataset.
|
2501.00208 | An Empirical Evaluation of Large Language Models on Consumer Health
Questions | cs.CL cs.AI | This study evaluates the performance of several Large Language Models (LLMs)
on MedRedQA, a dataset of consumer-based medical questions and answers by
verified experts extracted from the AskDocs subreddit. While LLMs have shown
proficiency in clinical question answering (QA) benchmarks, their effectiveness
on real-world, consumer-based, medical questions remains less understood.
MedRedQA presents unique challenges, such as informal language and the need for
precise responses suited to non-specialist queries. To assess model
performance, responses were generated using five LLMs: GPT-4o mini, Llama 3.1:
70B, Mistral-123B, Mistral-7B, and Gemini-Flash. A cross-evaluation method was
used, where each model evaluated its responses as well as those of others to
minimize bias. The results indicated that GPT-4o mini achieved the highest
alignment with expert responses according to four of the five model judges,
while Mistral-7B scored lowest according to three of the five model judges.
This study highlights the potential and limitations of current LLMs for
consumer health medical question answering, indicating avenues for further
development.
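The cross-evaluation protocol can be sketched as follows, with a toy judge standing in for an LLM-as-judge call:

```python
def cross_evaluate(responses, judge_fn):
    """Cross-evaluation: every model judges every model's response (its own
    included); a model's final score averages the judgments it receives,
    so no single judge's bias dominates. judge_fn(judge, response) is a
    placeholder for an LLM-as-judge scoring call."""
    scores = {j: {m: judge_fn(j, resp) for m, resp in responses.items()}
              for j in responses}
    avg = {m: sum(scores[j][m] for j in scores) / len(scores)
           for m in responses}
    return scores, avg
```

Self-judgments are retained here; dropping the diagonal of the score matrix is an equally reasonable variant.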
|
2501.00210 | Debunking the CUDA Myth Towards GPU-based AI Systems | cs.DC cs.AI cs.AR | With the rise of AI, NVIDIA GPUs have become the de facto standard for AI
system design. This paper presents a comprehensive evaluation of Intel Gaudi
NPUs as an alternative to NVIDIA GPUs for AI model serving. First, we create a
suite of microbenchmarks to compare Intel Gaudi-2 with NVIDIA A100, showing
that Gaudi-2 achieves competitive performance not only in primitive AI compute,
memory, and communication operations but also in executing several important AI
workloads end-to-end. We then assess Gaudi NPU's programmability by discussing
several software-level optimization strategies to employ for implementing
critical FBGEMM operators and vLLM, evaluating their efficiency against
GPU-optimized counterparts. Results indicate that Gaudi-2 achieves energy
efficiency comparable to A100, though there are notable areas for improvement
in terms of software maturity. Overall, we conclude that, with effective
integration into high-level AI frameworks, Gaudi NPUs could challenge NVIDIA
GPU's dominance in the AI server market, though further improvements are
necessary to fully compete with NVIDIA's robust software ecosystem.
|
2501.00214 | OciorMVBA: Near-Optimal Error-Free Asynchronous MVBA | cs.CR cs.DC cs.IT math.IT | In this work, we propose an error-free, information-theoretically secure,
asynchronous multi-valued validated Byzantine agreement (MVBA) protocol, called
OciorMVBA. This protocol achieves MVBA consensus on a message $\boldsymbol{w}$
with expected $O(n |\boldsymbol{w}|\log n + n^2 \log q)$ communication bits,
expected $O(n^2)$ messages, expected $O(\log n)$ rounds, and expected $O(\log
n)$ common coins, under optimal resilience $n \geq 3t + 1$ in an $n$-node
network, where up to $t$ nodes may be dishonest. Here, $q$ denotes the alphabet
size of the error correction code used in the protocol. When error correction
codes with a constant alphabet size (e.g., Expander Codes) are used, $q$
becomes a constant. An MVBA protocol that guarantees all required properties
without relying on any cryptographic assumptions, such as signatures or
hashing, except for the common coin assumption, is said to be
information-theoretically secure (IT secure). Under the common coin assumption,
an MVBA protocol that guarantees all required properties in all executions is
said to be error-free.
We also propose another error-free, IT-secure, asynchronous MVBA protocol,
called OciorMVBArr. This protocol achieves MVBA consensus with expected $O(n
|\boldsymbol{w}| + n^2 \log n)$ communication bits, expected $O(1)$ rounds, and
expected $O(1)$ common coins, under a relaxed resilience (RR) of $n \geq 5t +
1$. Additionally, we propose a hash-based asynchronous MVBA protocol, called
OciorMVBAh. This protocol achieves MVBA consensus with expected $O(n
|\boldsymbol{w}| + n^3)$ bits, expected $O(1)$ rounds, and expected $O(1)$
common coins, under optimal resilience $n \geq 3t + 1$.
|
2501.00217 | The Potential of LLMs in Automating Software Testing: From Generation to
Reporting | cs.SE cs.AI | High-quality software is essential in software engineering, which
requires robust validation and verification processes during testing
activities. Manual testing, while effective, can be time-consuming and costly,
leading to an increased demand for automated methods. Recent advancements in
Large Language Models (LLMs) have significantly influenced software
engineering, particularly in areas like requirements analysis, test automation,
and debugging. This paper explores an agent-oriented approach to automated
software testing, using LLMs to reduce human intervention and enhance testing
efficiency. The proposed framework integrates LLMs to generate unit tests,
visualize call graphs, and automate test execution and reporting. Evaluations
across multiple applications in Python and Java demonstrate the system's high
test coverage and efficient operation. This research underscores the potential
of LLM-powered agents to streamline software testing workflows while addressing
challenges in scalability and accuracy.
|
2501.00219 | Autonomous Minibus Service with Semi-on-demand Routes in Grid Networks | eess.SY cs.SY math.OC | This paper investigates the potential of autonomous minibuses that take
on-demand directional routes for pick-up and drop-off in a low-density grid
network covering a wider area, followed by fixed routes in areas with demand.
A mathematical formulation of generalized costs demonstrates the benefits, with
indicators proposed to select existing bus routes for conversion, with the
options of zonal express and parallel routes. Simulations on modeled scenarios
and case studies with bus routes in Chicago show reductions in both passenger
costs and generalized costs over existing fixed-route bus service between
suburban areas and CBD.
|
2501.00220 | DecoratingFusion: A LiDAR-Camera Fusion Network with the Combination of
Point-level and Feature-level Fusion | cs.CV cs.LG | LiDARs and cameras play essential roles in autonomous driving, offering
complementary information for 3D detection. The state-of-the-art fusion methods
integrate them at the feature level, but they mostly rely on the learned soft
association between point clouds and images, which lacks interpretability and
neglects the hard association between them. In this paper, we combine
feature-level fusion with point-level fusion, using hard association
established by the calibration matrices to guide the generation of object
queries. Specifically, in the early fusion stage, we use the 2D CNN features of
images to decorate the point cloud data, and employ two independent sparse
convolutions to extract the decorated point cloud features. In the mid-level
fusion stage, we initialize the queries with a center heatmap and embed the
predicted class labels as auxiliary information into the queries, making the
initial positions closer to the actual centers of the targets. Extensive
experiments conducted on two popular datasets, i.e., KITTI and Waymo, demonstrate
the superiority of DecoratingFusion.
|
2501.00223 | CancerKG.ORG A Web-scale, Interactive, Verifiable Knowledge Graph-LLM
Hybrid for Assisting with Optimal Cancer Treatment and Care | cs.AI cs.IR cs.LG | Here, we describe one of the first Web-scale hybrid Knowledge Graph
(KG)-Large Language Model (LLM), populated with the latest peer-reviewed
medical knowledge on colorectal Cancer. It is currently being evaluated to
assist with both medical research and clinical information retrieval tasks at
Moffitt Cancer Center, which is one of the top Cancer centers in the U.S. and
in the world. Our hybrid is remarkable as it serves the user needs better than
just an LLM, KG or a search-engine in isolation. LLMs as is are known to
exhibit hallucinations and catastrophic forgetting as well as are trained on
outdated corpora. The state of the art KGs, such as PrimeKG, cBioPortal,
ChEMBL, NCBI, and other require manual curation, hence are quickly getting
stale. CancerKG is unsupervised and is capable of automatically ingesting and
organizing the latest medical findings. To alleviate the LLMs shortcomings, the
verified KG serves as a Retrieval Augmented Generation (RAG) guardrail.
CancerKG exhibits 5 different advanced user interfaces, each tailored to serve
different data modalities better and more convenient for the user.
|
2501.00224 | Extracting effective solutions hidden in large language models via
generated comprehensive specialists: case studies in developing electronic
devices | cs.CL cs.AI cs.LG | Recently, many studies have explored the use of large language
models (LLMs) to generate research ideas and scientific hypotheses. However,
real-world research and development often require solving complex,
interdisciplinary challenges where solutions may not be readily found through
existing knowledge related to the problem. Therefore, it is desirable to
leverage the vast, comprehensive knowledge of LLMs to generate effective,
breakthrough solutions by integrating various perspectives from other
disciplines. Here, we propose SELLM (Solution Enumeration via comprehensive
List and LLM), a framework leveraging LLMs and structured guidance using MECE
(Mutually Exclusive, Collectively Exhaustive) principles, such as International
Patent Classification (IPC) and the periodic table of elements. SELLM
systematically constructs comprehensive expert agents from the list to generate
cross-disciplinary and effective solutions. To evaluate SELLM's practicality,
we applied it to two challenges: improving light extraction in organic
light-emitting diode (OLED) lighting and developing electrodes for
next-generation memory materials. The results demonstrate that SELLM
significantly facilitates the generation of effective solutions compared to
cases without specific customization or effort, showcasing the potential of
SELLM to enable LLMs to generate effective solutions even for challenging
problems.
|
2501.00226 | Generative Emergent Communication: Large Language Model is a Collective
World Model | cs.AI cs.CL | This study proposes a unifying theoretical framework called generative
emergent communication (generative EmCom) that bridges emergent communication,
world models, and large language models (LLMs) through the lens of collective
predictive coding (CPC). The proposed framework formalizes the emergence of
language and symbol systems through decentralized Bayesian inference across
multiple agents, extending beyond conventional discriminative model-based
approaches to emergent communication. This study makes the following two key
contributions: First, we propose generative EmCom as a novel framework for
understanding emergent communication, demonstrating how communication emergence
in multi-agent reinforcement learning (MARL) can be derived from control as
inference while clarifying its relationship to conventional discriminative
approaches. Second, we propose a mathematical formulation showing the
interpretation of LLMs as collective world models that integrate multiple
agents' experiences through CPC. The framework provides a unified theoretical
foundation for understanding how shared symbol systems emerge through
collective predictive coding processes, bridging individual cognitive
development and societal language evolution. Through mathematical formulations
and discussion of prior works, we demonstrate how this framework explains
fundamental aspects of language emergence and offers practical insights for
understanding LLMs and developing sophisticated AI systems for improving
human-AI interaction and multi-agent systems.
|
2501.00230 | Federated Deep Subspace Clustering | cs.LG cs.AI cs.CR | This paper introduces FDSC, a privacy-protected subspace clustering (SC)
approach with a federated learning (FL) schema. Each client holds a deep
subspace clustering network responsible for grouping its isolated data,
composed of an encoder network, a self-expressive layer, and a decoder network.
FDSC is achieved by uploading the encoder network to the server to communicate
with other clients. FDSC is further enhanced by preserving the local
neighborhood relationship in each client. Through the combined effects of
federated learning and locality preservation, the data features learned by the
encoder are boosted, enhancing self-expressiveness learning and yielding better
clustering performance. Experiments on public datasets compare FDSC with other
clustering methods, demonstrating its effectiveness.
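The server-side communication step resembles FedAvg-style parameter averaging; a minimal sketch follows (parameter names are illustrative, not FDSC's actual layers):

```python
def fedavg(client_params):
    """Server-side aggregation: average the encoder parameters uploaded by
    each client, FedAvg-style, then broadcast the merged encoder back.
    client_params is a list of {parameter_name: value} dicts."""
    n = len(client_params)
    return {name: sum(p[name] for p in client_params) / n
            for name in client_params[0]}
```

Only the encoder is shared; the self-expressive layer and decoder stay local, which is what keeps client data private.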
|
2501.00233 | Zero-Shot Strategies for Length-Controllable Summarization | cs.CL | Large language models (LLMs) struggle with precise length control,
particularly in zero-shot settings. We conduct a comprehensive study evaluating
LLMs' length control capabilities across multiple measures and propose
practical methods to improve controllability. Our experiments with LLaMA 3
reveal stark differences in length adherence across measures and highlight
inherent biases of the model. To address these challenges, we introduce a set
of methods: length approximation, target adjustment, sample filtering, and
automated revisions. By combining these methods, we demonstrate substantial
improvements in length compliance while maintaining or enhancing summary
quality, providing highly effective zero-shot strategies for precise length
control without the need for model fine-tuning or architectural changes. With
our work, we not only advance our understanding of LLM behavior in controlled
text generation but also pave the way for more reliable and adaptable
summarization systems in real-world applications.
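Two of the proposed methods, target adjustment and sample filtering, can be sketched directly (the word-count targets and the 10% tolerance are illustrative assumptions, not the paper's exact settings):

```python
def adjust_target(requested_words, observed_ratio):
    """Target adjustment: if the model is measured to produce
    observed_ratio times the requested length on average, request a
    compensated target instead of the raw one."""
    return max(1, round(requested_words / observed_ratio))

def filter_samples(candidates, target_words, tol=0.1):
    """Sample filtering: keep only generations whose word count falls
    within tol (a fraction) of the target length."""
    return [c for c in candidates
            if abs(len(c.split()) - target_words) <= tol * target_words]
```

A model that overshoots 100-word requests by 25% on average would thus be asked for 80 words, and off-target samples are discarded before any automated revision step.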
|
2501.00237 | Make Domain Shift a Catastrophic Forgetting Alleviator in
Class-Incremental Learning | cs.CV cs.LG | In the realm of class-incremental learning (CIL), alleviating the
catastrophic forgetting problem is a pivotal challenge. This paper reports a
counter-intuitive observation: by incorporating domain shift into CIL tasks,
the forgetting rate is significantly reduced. Our comprehensive studies
demonstrate that incorporating domain shift leads to a clearer separation in
the feature distribution across tasks and helps reduce parameter interference
during the learning process. Inspired by this observation, we propose a simple
yet effective method named DisCo to deal with CIL tasks. DisCo introduces a
lightweight prototype pool that utilizes contrastive learning to promote
distinct feature distributions for the current task relative to previous ones,
effectively mitigating interference across tasks. DisCo can be easily
integrated into existing state-of-the-art class-incremental learning methods.
Experimental results show that incorporating our method into various CIL
methods achieves substantial performance improvements, validating the benefits
of our approach in enhancing class-incremental learning by separating feature
representation and reducing interference. These findings illustrate that DisCo
can serve as a robust approach for future research in class-incremental
learning.
|
2501.00241 | Exploring Variability in Fine-Tuned Models for Text Classification with
DistilBERT | cs.CL cs.AI | This study evaluates fine-tuning strategies for text classification using the
DistilBERT model, specifically the
distilbert-base-uncased-finetuned-sst-2-english variant. Through structured
experiments, we examine the influence of hyperparameters such as learning rate,
batch size, and epochs on accuracy, F1-score, and loss. Polynomial regression
analyses capture foundational and incremental impacts of these hyperparameters,
focusing on fine-tuning adjustments relative to a baseline model.
Results reveal variability in metrics due to hyperparameter configurations,
showing trade-offs among performance metrics. For example, a higher learning
rate reduces loss in relative analysis (p=0.027) but challenges accuracy
improvements. Meanwhile, batch size significantly impacts accuracy and F1-score
in absolute regression (p=0.028 and p=0.005) but has limited influence on loss
optimization (p=0.170). The interaction between epochs and batch size maximizes
F1-score (p=0.001), underscoring the importance of hyperparameter interplay.
These findings highlight the need for fine-tuning strategies addressing
non-linear hyperparameter interactions to balance performance across metrics.
Such variability and metric trade-offs are relevant for tasks beyond text
classification, including NLP and computer vision. This analysis informs
fine-tuning strategies for large language models and promotes adaptive designs
for broader model applicability.
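The polynomial-regression analysis can be illustrated with a small least-squares quadratic fit of a metric against one hyperparameter (the fitting code is generic; any data and variable choices are the experimenter's, not taken from the study):

```python
def quadratic_fit(xs, ys):
    """Least-squares fit of y = c0 + c1*x + c2*x^2 via the normal
    equations, solved with Cramer's rule. With c2 < 0, the vertex
    -c1 / (2*c2) locates the metric-maximizing hyperparameter value."""
    s = [sum(x ** k for x in xs) for k in range(5)]               # power sums
    t = [sum(y * x ** k for x, y in zip(xs, ys)) for k in range(3)]
    A = [[s[0], s[1], s[2]], [s[1], s[2], s[3]], [s[2], s[3], s[4]]]

    def det3(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
                - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
                + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

    d = det3(A)
    coeffs = []
    for col in range(3):
        M = [row[:] for row in A]
        for r in range(3):
            M[r][col] = t[r]
        coeffs.append(det3(M) / d)
    return coeffs  # [c0, c1, c2]
```

Interaction effects, as in the epochs-by-batch-size term the study reports, would require adding cross terms to the design matrix.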
|
2501.00242 | Automotive Speed Estimation: Sensor Types and Error Characteristics from
OBD-II to ADAS | eess.SP cs.RO | Modern on-road navigation systems heavily depend on integrating speed
measurements with inertial navigation systems (INS) and global navigation
satellite systems (GNSS). Telemetry-based applications typically source speed
data from the On-Board Diagnostic II (OBD-II) system. However, the method of
deriving speed, as well as the types of sensors used to measure wheel speed,
differs across vehicles. These differences result in varying error
characteristics that must be accounted for in navigation and autonomy
applications. This paper addresses this gap by examining the diverse
speed-sensing technologies employed in standard automotive systems and
alternative techniques used in advanced systems designed for higher levels of
autonomy, such as Advanced Driver Assistance Systems (ADAS), Autonomous Driving
(AD), or surveying applications. We propose a method to identify the type of
speed sensor in a vehicle and present strategies for accurately modeling its
error characteristics. To validate our approach, we collected and analyzed data
from three long real road trajectories conducted in urban environments in
Toronto and Kingston, Ontario, Canada. The results underscore the critical role
of integrating multiple sensor modalities to achieve more accurate speed
estimation, thus improving automotive navigation state estimation, particularly
in GNSS-denied environments.
|
2501.00243 | Cross-Layer Cache Aggregation for Token Reduction in Ultra-Fine-Grained
Image Recognition | cs.CV | Ultra-fine-grained image recognition (UFGIR) is a challenging task that
involves classifying images within a macro-category. While traditional FGIR
deals with classifying different species, UFGIR goes beyond by classifying
sub-categories within a species, such as cultivars of a plant. Recently, the
use of Vision Transformer-based backbones has allowed methods to obtain
outstanding recognition performance in this task, but at a significant
computational cost, especially since this task benefits significantly from
incorporating higher-resolution images. Therefore, techniques such as token
reduction have emerged to reduce the computational cost. However, dropping
tokens leads to loss of essential information for fine-grained categories,
especially as the token keep rate is reduced. To counteract the information
loss brought by token reduction, we propose a novel Cross-Layer Aggregation
Classification Head and a
Cross-Layer Cache mechanism to recover and access information from previous
layers in later locations. Extensive experiments covering more than 2000 runs
across diverse settings including 5 datasets, 9 backbones, 7 token reduction
methods, 5 keep rates, and 2 image sizes demonstrate the effectiveness of the
proposed plug-and-play modules and allow us to push the boundaries of accuracy
vs cost for UFGIR by reducing the kept tokens to extremely low ratios of up to
10\% while maintaining a competitive accuracy to state-of-the-art models. Code
is available at: \url{https://github.com/arkel23/CLCA}
|
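The token-reduction-plus-cache idea in the abstract above can be illustrated with a toy sketch: keep only the top-scoring fraction of tokens and stash the dropped ones so a later stage can still reach them. The `reduce_tokens` helper, the scoring, and the cache layout are illustrative assumptions, not the paper's actual CLCA design:

```python
def reduce_tokens(tokens, scores, keep_rate=0.1, cache=None):
    """Keep only the top-scoring fraction of tokens, preserving sequence
    order; stash the dropped tokens in a cross-layer cache so a later
    layer (or classification head) can still access them."""
    k = max(1, int(len(tokens) * keep_rate))
    order = sorted(range(len(tokens)), key=lambda i: scores[i], reverse=True)
    kept = [tokens[i] for i in sorted(order[:k])]
    if cache is not None:
        cache.extend(tokens[i] for i in sorted(order[k:]))
    return kept

cache = []
tokens = [f"t{i}" for i in range(20)]
scores = [abs(10 - i) for i in range(20)]  # stand-in for attention scores
kept = reduce_tokens(tokens, scores, keep_rate=0.1, cache=cache)
print(kept, len(cache))
```

At a 10% keep rate, 18 of the 20 tokens land in the cache rather than being lost outright, which is the recovery opportunity the abstract's cache mechanism exploits.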
2501.00244 | Have We Designed Generalizable Structural Knowledge Promptings?
Systematic Evaluation and Rethinking | cs.CL | Large language models (LLMs) have demonstrated exceptional performance in
text generation within current NLP research. However, the lack of factual
accuracy is still a dark cloud hanging over the LLM skyscraper. Structural
knowledge prompting (SKP) is a prominent paradigm to integrate external
knowledge into LLMs by incorporating structural representations, achieving
state-of-the-art results in many knowledge-intensive tasks. However, existing
methods often focus on specific problems, lacking a comprehensive exploration
of the generalization and capability boundaries of SKP. This paper aims to
evaluate and rethink the generalization capability of the SKP paradigm from
four perspectives including Granularity, Transferability, Scalability, and
Universality. To provide a thorough evaluation, we introduce a novel
multi-granular, multi-level benchmark called SUBARU, consisting of 9 different
tasks with varying levels of granularity and difficulty.
|
2501.00249 | A Universal Controller for Grid-Tied Inverters | eess.SY cs.SY | This paper presents the development of "Control-Sync," a novel firmware for
universal inverters in microgrids, designed to enhance grid stability and
flexibility. As hybrid PV-battery systems become increasingly prevalent, there
is a critical need for inverters capable of efficiently transitioning between
grid-forming (GFM) and grid-following (GFL) modes. Our firmware introduces dual
control paths that allow for seamless transitions without reliance on external
control devices, reducing communication overhead and increasing operational
reliability. Key features include direct phase-angle detection and frequency
restoration capabilities, essential for managing asymmetrical power grids and
dynamic load changes. The efficacy of Control-Sync is demonstrated through
rigorous testing with grid emulators and multi-phase inverters, confirming its
potential to improve microgrid reliability and efficiency. This study offers a
scalable solution to enhance inverter adaptability in various grid conditions,
fostering a more resilient energy infrastructure.
|
2501.00252 | Towards Pattern-aware Data Augmentation for Temporal Knowledge Graph
Completion | cs.LG cs.DB cs.IR | Predicting missing facts for temporal knowledge graphs (TKGs) is a
fundamental task, called temporal knowledge graph completion (TKGC). One key
challenge in this task is the imbalance in data distribution, where facts are
unevenly spread across entities and timestamps. This imbalance can lead to poor
completion performance on long-tail entities and timestamps, and unstable
training due to the introduction of false negative samples. Unfortunately, few
previous studies have investigated how to mitigate these effects. Moreover, for
the first time, we found that existing methods suffer from model preferences,
revealing that entities with specific properties (e.g., recently active) are
favored by different models. Such preferences will lead to error accumulation
and further exacerbate the effects of imbalanced data distribution, but are
overlooked by previous studies. To alleviate the impacts of imbalanced data and
model preferences, we introduce Booster, the first data augmentation strategy
for TKGs. The unique requirements here lie in generating new samples that fit
the complex semantic and temporal patterns within TKGs, and identifying
hard-learning samples specific to models. Therefore, we propose a hierarchical
scoring algorithm based on triadic closures within TKGs. By incorporating both
global semantic patterns and local time-aware structures, the algorithm enables
pattern-aware validation for new samples. Meanwhile, we propose a two-stage
training approach to identify samples that deviate from the model's preferred
patterns. With a well-designed frequency-based filtering strategy, this
approach also helps to avoid the misleading of false negatives. Experiments
justify that Booster can seamlessly adapt to existing TKGC models and achieve
up to an 8.7% performance improvement.
|
2501.00254 | Automatically Planning Optimal Parallel Strategy for Large Language
Models | cs.AI cs.CL | The number of parameters in large-scale language models based on transformers
is gradually increasing, and the scale of computing clusters is also growing.
Techniques for quickly mobilizing large amounts of computing resources for
parallel training are therefore increasingly important. In this paper, we
propose an automatic parallel algorithm that automatically plans the parallel
strategy with maximum throughput based on model and hardware information. By
decoupling the training time into computation, communication, and overlap, we
established a training duration simulation model. Based on this simulation
model, we prune the parallel solution space to shorten the search time
required. The multi-node experiment results show that the algorithm can
estimate the parallel training duration in real time with an average accuracy
of 96%. In our tests, the strategy recommended by the algorithm is always
globally optimal.
|
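The search the abstract above describes, enumerating parallel configurations, estimating step time from decoupled compute, communication, and overlap terms, and keeping the fastest, can be sketched as follows. The cost model and every constant here are illustrative placeholders, not the paper's simulation model:

```python
from itertools import product

def estimate_step_time(dp, tp, pp, params_b=7.0, gpus=8,
                       flops_per_gpu=300e12, bw=200e9):
    """Toy decomposition of one training step into compute, communication,
    and overlap for a (data, tensor, pipeline) parallel configuration."""
    if dp * tp * pp != gpus:
        return None  # configuration does not use the whole cluster
    # Forward+backward FLOPs (~6 * params * tokens), sharded across tp*pp.
    compute = 6 * params_b * 1e9 * 2048 / (flops_per_gpu * tp * pp)
    # Ring all-reduce of fp16 gradients across the dp group.
    grad_sync = 2 * (dp - 1) / dp * params_b * 1e9 * 2 / (bw * tp * pp)
    # Activation collectives grow with the tensor-parallel degree.
    tp_comm = 0.1 * compute * (tp - 1)
    # Assume half of the gradient sync hides under the backward pass.
    overlap = 0.5 * min(compute, grad_sync)
    return compute + grad_sync + tp_comm - overlap

configs = [c for c in product([1, 2, 4, 8], repeat=3)
           if c[0] * c[1] * c[2] == 8]
best = min(configs, key=lambda c: estimate_step_time(*c))
print(best)
```

A real planner would also prune the solution space (as the abstract notes) rather than enumerate exhaustively; exhaustive search is only viable here because the toy cluster has 8 GPUs.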
2501.00257 | EQUATOR: A Deterministic Framework for Evaluating LLM Reasoning with
Open-Ended Questions. # v1.0.0-beta | cs.CL | Despite the remarkable coherence of Large Language Models (LLMs), existing
evaluation methods often suffer from fluency bias and rely heavily on
multiple-choice formats, making it difficult to assess factual accuracy and
complex reasoning effectively. LLMs thus frequently generate factually
inaccurate responses, especially in complex reasoning tasks, highlighting two
prominent challenges: (1) the inadequacy of existing methods to evaluate
reasoning and factual accuracy effectively, and (2) the reliance on human
evaluators for nuanced judgment, as illustrated by Williams and Huckle
(2024)[1], who found manual grading indispensable despite automated grading
advancements.
To address evaluation gaps in open-ended reasoning tasks, we introduce the
EQUATOR Evaluator (Evaluation of Question Answering Thoroughness in Open-ended
Reasoning). This framework combines deterministic scoring with a focus on
factual accuracy and robust reasoning assessment. Using a vector database,
EQUATOR pairs open-ended questions with human-evaluated answers, enabling more
precise and scalable evaluations. In practice, EQUATOR significantly reduces
reliance on human evaluators for scoring and improves scalability compared to
Williams and Huckle's (2024)[1] methods.
Our results demonstrate that this framework significantly outperforms
traditional multiple-choice evaluations while maintaining high accuracy
standards. Additionally, we introduce an automated evaluation process
leveraging smaller, locally hosted LLMs. We used LLaMA 3.2B, running on the
Ollama binaries to streamline our assessments. This work establishes a new
paradigm for evaluating LLM performance, emphasizing factual accuracy and
reasoning ability, and provides a robust methodological foundation for future
research.
|
2501.00258 | Optimal design of frame structures with mixed categorical and continuous
design variables using the Gumbel-Softmax method | cs.CE math.OC | In optimizing real-world structures, due to fabrication or budgetary
constraints, the design variables may be restricted to a set of standard
engineering choices. Such variables, commonly called categorical variables, are
discrete and unordered in essence, precluding the utilization of gradient-based
optimizers for the problems containing them. In this paper, incorporating the
Gumbel-Softmax (GSM) method, we propose a new gradient-based optimizer for
handling such variables in the optimal design of large-scale frame structures.
The GSM method provides a means to draw differentiable samples from categorical
distributions, thereby enabling sensitivity analysis for the variables
generated from such distributions. The sensitivity information can greatly
reduce the computational cost of traversing high-dimensional and discrete
design spaces in comparison to employing gradient-free optimization methods. In
addition, since the developed optimizer is gradient-based, it can naturally
handle the simultaneous optimization of categorical and continuous design
variables. Through three numerical case studies, different aspects of the
proposed optimizer are studied and its advantages over population-based
optimizers, specifically a genetic algorithm, are demonstrated.
|
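The core trick described above, drawing differentiable (relaxed) samples from a categorical distribution, can be shown with a small numerical sketch. This is pure-Python and illustrative only; a real optimizer would backpropagate through the relaxed sample, and the cross-section example is a hypothetical stand-in for the paper's frame design variables:

```python
import math
import random

def gumbel_softmax(probs, tau=0.5, seed=0):
    """Relaxed categorical sample: Gumbel(0,1) noise added to log-probs,
    followed by a temperature softmax. As tau -> 0 the output approaches
    a one-hot vector; larger tau gives smoother samples."""
    rng = random.Random(seed)
    # log p + Gumbel noise, where Gumbel(0,1) = -log(-log(U)) for U~Uniform.
    y = [(math.log(p) - math.log(-math.log(rng.random()))) / tau for p in probs]
    m = max(y)  # subtract the max for numerical stability
    e = [math.exp(v - m) for v in y]
    s = sum(e)
    return [v / s for v in e]

# Three hypothetical standard cross-sections as one categorical design variable.
sample = gumbel_softmax([0.2, 0.5, 0.3], tau=0.1)
print([round(v, 3) for v in sample])
```

Because the sample is a smooth function of the distribution parameters, gradients can flow back through it, which is what makes sensitivity analysis over categorical choices possible.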
2501.00260 | Detection and Prevention of Smishing Attacks | cs.CR cs.SI | Phishing is an online identity theft technique where attackers steal users'
personal information, leading to financial losses for individuals and
organizations. With the increasing adoption of smartphones, which provide
functionalities similar to desktop computers, attackers are targeting mobile
users. Smishing, a phishing attack carried out through Short Messaging Service
(SMS), has become prevalent due to the widespread use of SMS-based services. It
involves deceptive messages designed to extract sensitive information. Despite
the growing number of smishing attacks, limited research focuses on detecting
these threats. This work presents a smishing detection model using a
content-based analysis approach. To address the challenge posed by slang,
abbreviations, and short forms in text communication, the model normalizes
these into standard forms. A machine learning classifier is employed to
classify messages as smishing or ham. Experimental results demonstrate the
model's effectiveness, achieving classification accuracies of 97.14% for smishing
and 96.12% for ham messages, with an overall accuracy of 96.20%.
|
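The pipeline described above, normalizing slang and abbreviations to standard forms and then classifying messages as smishing or ham, can be sketched with a tiny Naive Bayes. The slang table, training messages, and classifier choice are illustrative assumptions, not the paper's actual model or data:

```python
import math
from collections import Counter

# Illustrative slang -> standard-form table (a real system would be larger).
SLANG = {"u": "you", "ur": "your", "txt": "text", "msg": "message",
         "pls": "please"}

def normalize(text):
    return [SLANG.get(w, w) for w in text.lower().split()]

class NaiveBayes:
    """Multinomial Naive Bayes with Laplace smoothing over normalized words."""

    def fit(self, docs, labels):
        self.counts = {lab: Counter() for lab in set(labels)}
        self.priors = Counter(labels)
        for doc, lab in zip(docs, labels):
            self.counts[lab].update(normalize(doc))
        self.vocab = {w for c in self.counts.values() for w in c}
        return self

    def predict(self, doc):
        n = sum(self.priors.values())
        best, best_lp = None, -math.inf
        for lab, cnt in self.counts.items():
            lp = math.log(self.priors[lab] / n)
            denom = sum(cnt.values()) + len(self.vocab)
            for w in normalize(doc):
                lp += math.log((cnt[w] + 1) / denom)  # Laplace smoothing
            if lp > best_lp:
                best, best_lp = lab, lp
        return best

train = [
    ("urgent claim ur prize now reply with bank details", "smish"),
    ("verify your account click this link immediately", "smish"),
    ("pls txt me when u get home", "ham"),
    ("meeting moved to 3pm see you there", "ham"),
]
clf = NaiveBayes().fit([t for t, _ in train], [l for _, l in train])
print(clf.predict("claim your prize reply with details"))
```

Note how normalization makes "ur" and "your" count as the same feature, which is exactly the leverage the abstract attributes to handling slang and short forms.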
2501.00261 | Collaborative Approaches to Enhancing Smart Vehicle Cybersecurity by
AI-Driven Threat Detection | cs.CR cs.AI | This paper explores collaborative approaches to
bolstering smart vehicle cybersecurity through AI-driven threat detection. As
the automotive industry increasingly adopts connected and automated vehicles
(CAVs), the need for robust cybersecurity measures becomes paramount. With the
emergence of new vulnerabilities and security requirements, the integration of
advanced technologies such as 5G networks, blockchain, and quantum computing
presents promising avenues for enhancing CAV cybersecurity. Additionally, the
roadmap for cybersecurity in autonomous vehicles emphasizes the importance of
efficient intrusion detection systems and AI-based techniques, along with the
integration of secure hardware, software stacks, and advanced threat
intelligence to address cybersecurity challenges in future autonomous vehicles.
|
2501.00262 | Integrating Cascade Pumped Micro-Hydro Storage: A Sustainable Approach
to Energy and Water Management | eess.SY cs.SY | As traditional large hydropower has been extensively exploited, micro-hydro
systems have attracted increasing research interest. New engineering challenges
arise in developing micro-hydro systems in areas with significant elevation but
prohibitive horizontal distances between primary reservoirs. This study
addresses these challenges by proposing a cascade-pumped micro-hydro storage
(CPMHS) system that leverages intermediate reservoirs to bridge long horizontal
distances, enabling efficient energy transfer and storage. The methodology
utilizes naturally occurring lakes with substantial head heights but limited
feasibility for direct pumped storage due to horizontal separations.
Integrating smaller, strategically placed intermediate reservoirs maximizes
energy capture along the cascading path, making pumped storage viable in
geographically constrained locations. The proposed system will enhance energy
generation potential and provide additional benefits for water management.
Using geographical data and a detailed case study focused on Mountain Lake and
surrounding lakes, this paper demonstrates the energy efficiency and viability
of cascade-based micro-hydro storage. A practical methodology for implementing
CPMHS systems is proposed and validated by case studies. An optimization
framework is developed for efficient energy capture in regions with challenging
topography.
|
2501.00264 | Enhancing Wireless Sensor Network Security through Integration with the
ServiceNow Cloud Platform | cs.CR cs.AI | Wireless Sensor Networks (WSNs) continue to experience rapid developments and
integration into modern-day applications. Overall, WSNs collect and process
relevant data through sensors or nodes and communicate with different networks
for superior information management. Nevertheless, a primary concern relative
to WSNs is security. Considering the high constraints on throughput, battery,
processing power, and memory, typical security procedures present limitations
for application in WSNs. This research focuses on the integration of WSNs with
the cloud platform, specifically to address these security risks. The cloud
platform also adopts a security-driven approach and has attracted many
applications across various sectors globally. This research specifically
explores how cloud computing could be leveraged to prevent Denial of Service
attacks from endangering WSNs. WSNs are now deployed in various low-powered
applications, including disaster management, homeland security, battlefield
surveillance, agriculture, and the healthcare industry. WSNs are distinguished
from traditional networks by the numerous wireless connected sensors being
deployed to conduct an assigned task. In testing scenarios, the size of WSNs
ranges from a few nodes to several thousand. The overarching requirements of WSNs
include rapid processing of collected data, low-cost installation and
maintenance, and low latency in network operations. Given that a substantial
amount of WSN applications are used in high-risk and volatile environments,
they must effectively address security concerns. This includes the secure
movement, storage, and communication of data through networks, an environment
in which WSNs are notably vulnerable. The limitations of WSNs have meant that
they are predominantly used in unsecured applications despite positive
advancements. This study explores methods for integrating the WSN with the
cloud.
|
2501.00265 | Outlier-Robust Training of Machine Learning Models | cs.LG cs.CV | Robust training of machine learning models in the presence of outliers has
garnered attention across various domains. The use of robust losses is a
popular approach and is known to mitigate the impact of outliers. We bring to
light two literatures that have diverged in their ways of designing robust
losses: one using M-estimation, which is popular in robotics and computer
vision, and another using a risk-minimization framework, which is popular in
deep learning. We first show that a simple modification of the Black-Rangarajan
duality provides a unifying view. The modified duality brings out a definition
of a robust loss kernel $\sigma$ that is satisfied by robust losses in both
literatures. Secondly, using the modified duality, we propose an Adaptive
Alternation Algorithm (AAA) for training machine learning models with outliers.
The algorithm iteratively trains the model by using a weighted version of the
non-robust loss, while updating the weights at each iteration. The algorithm is
augmented with a novel parameter update rule by interpreting the weights as
inlier probabilities, and obviates the need for complex parameter tuning.
Thirdly, we investigate convergence of the adaptive alternation algorithm to
outlier-free optima. Considering arbitrary outliers (i.e., with no
distributional assumption on the outliers), we show that the use of robust loss
kernels $\sigma$ increases the region of convergence. We experimentally show
the efficacy of our algorithm on regression, classification, and neural scene
reconstruction problems. We release our implementation code:
https://github.com/MIT-SPARK/ORT.
|
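The alternation the abstract above describes, training with a weighted version of the non-robust loss and then updating the weights from residuals, can be illustrated on a 1-D line fit. This toy uses a fixed Geman-McClure-style kernel and omits the paper's adaptive parameter update rule:

```python
def robust_line_fit(xs, ys, iters=20, c=1.0):
    """Alternate between a weighted least-squares fit of y = a*x + b and a
    weight update from residuals, so gross outliers are progressively
    down-weighted (weights act like inlier probabilities)."""
    w = [1.0] * len(xs)
    a = b = 0.0
    for _ in range(iters):
        # Closed-form weighted least squares for the 2-parameter line.
        sw = sum(w)
        sx = sum(wi * x for wi, x in zip(w, xs))
        sy = sum(wi * y for wi, y in zip(w, ys))
        sxx = sum(wi * x * x for wi, x in zip(w, xs))
        sxy = sum(wi * x * y for wi, x, y in zip(w, xs, ys))
        d = sw * sxx - sx * sx
        a = (sw * sxy - sx * sy) / d
        b = (sxx * sy - sx * sxy) / d
        # Geman-McClure-style weights: w = (c^2 / (c^2 + r^2))^2.
        w = [(c * c / (c * c + (y - (a * x + b)) ** 2)) ** 2
             for x, y in zip(xs, ys)]
    return a, b

xs = list(range(10))
ys = [2.0 * x + 1.0 for x in xs]
ys[3] = 50.0  # inject one gross outlier
a, b = robust_line_fit(xs, ys)
print(round(a, 2), round(b, 2))
```

After a few iterations the outlier's weight collapses toward zero and the fit recovers the inlier line, which is the "outlier-free optimum" behavior the abstract's convergence analysis concerns.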