| id | title | categories | abstract |
|---|---|---|---|
2502.08649
|
Principles for Open Data Curation: A Case Study with the New York City
311 Service Request Data
|
cs.DB cs.CY stat.ME
|
In the early 21st century, the open data movement began to transform
societies and governments by promoting transparency, innovation, and public
engagement. The City of New York (NYC) has been at the forefront of this
movement since the enactment of the Open Data Law in 2012, creating the NYC
Open Data portal. The portal currently hosts 2,700 datasets, serving as a
crucial resource for research across various domains, including health, urban
development, and transportation. However, the effective use of open data relies
heavily on data quality and usability, challenges that remain insufficiently
addressed in the literature. This paper examines these challenges via a case
study of the NYC 311 Service Request dataset, identifying key issues in data
validity, consistency, and curation efficiency. We propose a set of data
curation principles, tailored for government-released open data, to address
these challenges. Our findings highlight the importance of harmonized field
definitions, streamlined storage, and automated quality checks, offering
practical guidelines for improving the reliability and utility of open
datasets.
|
2502.08651
|
Democratizing AI Governance: Balancing Expertise and Public
Participation
|
cs.CY cs.LG
|
The development and deployment of artificial intelligence (AI) systems, with
their profound societal impacts, raise critical challenges for governance.
Historically, technological innovations have been governed by concentrated
expertise with limited public input. However, AI's pervasive influence across
domains such as healthcare, employment, and justice necessitates inclusive
governance approaches. This article explores the tension between expert-led
oversight and democratic participation, analyzing models of participatory and
deliberative democracy. Using case studies from France and Brazil, we highlight
how inclusive frameworks can bridge the gap between technical complexity and
public accountability. Recommendations are provided for integrating these
approaches into a balanced governance model tailored to the European Union,
emphasizing transparency, diversity, and adaptive regulation to ensure that AI
governance reflects societal values while maintaining technical rigor. This
analysis underscores the importance of hybrid frameworks that unite expertise
and public voice in shaping the future of AI policy.
|
2502.08652
|
LegalScore: Development of a Benchmark for Evaluating AI Models in Legal
Career Exams in Brazil
|
cs.CY cs.AI
|
This research introduces LegalScore, a specialized index for assessing how
generative artificial intelligence models perform in a selected range of career
exams that require a legal background in Brazil. The index evaluates fourteen
different types of artificial intelligence models' performance, from
proprietary to open-source models, in answering objective questions applied to
these exams. The research examines how English-trained large language models
respond in Brazilian legal contexts, leading us to reflect on the importance of
and need for Brazil-specific training data in generative artificial
intelligence models. Performance analysis shows that while proprietary and
better-known models achieved better results overall, local and smaller models
showed promising performance due to their Brazilian-context alignment in
training. By establishing an evaluation framework with
metrics including accuracy, confidence intervals, and normalized scoring,
LegalScore enables systematic assessment of artificial intelligence performance
in legal examinations in Brazil. While the study demonstrates artificial
intelligence's potential value for exam preparation and question development,
it concludes that significant improvements are needed before AI can match human
performance in advanced legal assessments. The benchmark creates a foundation
for continued research, highlighting the importance of local adaptation in
artificial intelligence development.
|
2502.08655
|
Personalizing Education through an Adaptive LMS with Integrated LLMs
|
cs.AI
|
The widespread adoption of large language models (LLMs) marks a
transformative era in technology, especially within the educational sector.
This paper explores the integration of LLMs within learning management systems
(LMSs) to develop an adaptive learning management system (ALMS) personalized
for individual learners across various educational stages. Traditional LMSs,
while facilitating the distribution of educational materials, fall short in
addressing the nuanced needs of diverse student populations, particularly in
settings with limited instructor availability. Our proposed system leverages
the flexibility of AI to provide a customizable learning environment that
adjusts to each user's evolving needs. By integrating a suite of
general-purpose and domain-specific LLMs, this system aims to minimize common
issues such as factual inaccuracies and outdated information, characteristic of
general LLMs like OpenAI's ChatGPT. This paper details the development of an
ALMS that not only addresses privacy concerns and the limitations of existing
educational tools but also enhances the learning experience by maintaining
engagement through personalized educational content.
|
2502.08657
|
Refining Positive and Toxic Samples for Dual Safety Self-Alignment of
LLMs with Minimal Human Interventions
|
cs.CL cs.AI
|
Recent AI agents, such as ChatGPT and LLaMA, primarily rely on instruction
tuning and reinforcement learning to calibrate the output of large language
models (LLMs) with human intentions, ensuring the outputs are harmless and
helpful. Existing methods heavily depend on the manual annotation of
high-quality positive samples, while contending with issues such as noisy
labels and minimal distinctions between preferred and dispreferred response
data. However, readily available toxic samples with clear safety distinctions
are often filtered out, removing valuable negative references that could aid
LLMs in safety alignment. In response, we propose PT-ALIGN, a novel safety
self-alignment approach that minimizes human supervision by automatically
refining positive and toxic samples and performing fine-grained dual
instruction tuning. Positive samples are harmless responses, while toxic
samples deliberately contain extremely harmful content, serving as a new
supervisory signal. Specifically, we utilize the LLM itself to iteratively
generate and refine training instances, requiring fewer than 50 human
annotations. We then employ two losses, i.e., maximum likelihood estimation
(MLE) and fine-grained unlikelihood training (UT), to jointly learn to enhance
the LLM's safety. The MLE loss encourages an LLM to maximize the generation of
harmless content based on positive samples. Conversely, the fine-grained UT
loss guides the LLM to minimize the output of harmful words based on negative
samples at the token-level, thereby guiding the model to decouple safety from
effectiveness, directing it toward safer fine-tuning objectives, and increasing
the likelihood of generating helpful and reliable content. Experiments on 9
popular open-source LLMs demonstrate the effectiveness of our PT-ALIGN for
safety alignment, while maintaining comparable levels of helpfulness and
usefulness.
|
2502.08658
|
Analyzable Parameters Dominated Vehicle Platoon Dynamics Modeling and
Analysis: A Physics-Encoded Deep Learning Approach
|
cs.RO cs.AI
|
Artificial intelligence (AI)-enabled nonlinear vehicle platoon dynamics
modeling plays a crucial role in predicting and optimizing the interactions
between vehicles. However, existing efforts fail to extract and capture
vehicle behavior interaction features at the platoon scale. More
importantly, maintaining high modeling accuracy without losing physical
analyzability remains to be solved. To this end, this paper proposes a novel
physics-encoded deep learning network, named PeMTFLN, to model the nonlinear
vehicle platoon dynamics. Specifically, an analyzable parameters encoded
computational graph (APeCG) is designed to guide the platoon to respond to the
driving behavior of the lead vehicle while ensuring local stability. Besides, a
multi-scale trajectory feature learning network (MTFLN) is constructed to
capture platoon following patterns and infer the physical parameters required
for APeCG from trajectory data. The human-driven vehicle trajectory datasets
(HIGHSIM) were used to train the proposed PeMTFLN. The trajectory prediction
experiments show that PeMTFLN outperforms the baseline models in terms of
predictive accuracy in speed and gap. The stability analysis shows that the
physical parameters in APeCG are able to reproduce platoon stability under
real-world conditions. In simulation experiments, PeMTFLN achieves low
inference error in platoon trajectory generation. Moreover, PeMTFLN also
accurately reproduces ground-truth safety statistics. The code of the proposed
PeMTFLN is open source.
|
2502.08659
|
Deployment-friendly Lane-changing Intention Prediction Powered by
Brain-inspired Spiking Neural Networks
|
cs.RO
|
Accurate and real-time prediction of surrounding vehicles' lane-changing
intentions is a critical challenge in deploying safe and efficient autonomous
driving systems in open-world scenarios. Existing high-performing methods
remain hard to deploy due to their high computational cost, long training
times, and excessive memory requirements. Here, we propose an efficient
lane-changing intention prediction approach based on brain-inspired Spiking
Neural Networks (SNNs). By leveraging their event-driven nature, the
proposed approach encodes the vehicle's states in a more efficient
manner. Comparison experiments conducted on HighD and NGSIM datasets
demonstrate that our method significantly improves training efficiency and
reduces deployment costs while maintaining comparable prediction accuracy.
Particularly, compared to the baseline, our approach reduces training time by
75% and memory usage by 99.9%. These results validate the efficiency and
reliability of our method in lane-changing predictions, highlighting its
potential for safe and efficient autonomous driving systems while offering
significant advantages in deployment, including reduced training time, lower
memory usage, and faster inference.
|
2502.08660
|
Semantic Role Labeling: A Systematical Survey
|
cs.CL
|
Semantic role labeling (SRL) is a central natural language processing (NLP)
task aiming to understand the semantic roles within texts, facilitating a wide
range of downstream applications. While SRL has garnered extensive and enduring
research, there is currently a lack of a comprehensive survey that thoroughly
organizes and synthesizes the field. This paper aims to review the entire
research trajectory of the SRL community over the past two decades. We begin by
providing a complete definition of SRL. To offer a comprehensive taxonomy, we
categorize SRL methodologies into four key perspectives: model architectures,
syntax feature modeling, application scenarios, and multi-modal extensions.
Further, we discuss SRL benchmarks, evaluation metrics, and paradigm modeling
approaches, while also exploring practical applications across various domains.
Finally, we analyze future research directions in SRL, addressing the evolving
role of SRL in the age of large language models (LLMs) and its potential impact
on the broader NLP landscape. We maintain a public repository and consistently
update related resources at: https://github.com/DreamH1gh/Awesome-SRL
|
2502.08661
|
Few-shot LLM Synthetic Data with Distribution Matching
|
cs.CL cs.AI
|
As large language models (LLMs) advance, their ability to perform in-context
learning and few-shot language generation has improved significantly. This has
spurred the use of LLMs to produce high-quality synthetic data to enhance the
performance of smaller models like online retrievers or weak LLMs. However,
LLM-generated synthetic data often differs from the real data in key language
attributes (e.g., styles, tones, content proportions, etc.). As a result,
mixing these synthetic data directly with real data may distort the original
data distribution, potentially hindering performance improvements. To solve
this, we introduce SynAlign: a synthetic data generation and filtering
framework based on key attribute distribution matching. Before generation,
SynAlign employs an uncertainty tracker surrogated by the Gaussian Process
model to iteratively select data clusters distinct from selected ones as
demonstrations for new data synthesis, facilitating efficient exploration of
the diversity of the real data. Then, a latent attribute reasoning method is
employed: the LLM summarizes the linguistic attributes of the demonstrations
and then synthesizes new data based on them. This approach facilitates
synthesizing diverse data with linguistic attributes that appear in real data.
After generation, the Maximum Mean Discrepancy is used as the objective function to
learn the sampling weight of each synthetic data, ensuring distribution
matching with the real data. Our experiments on multiple text prediction tasks
show significant performance improvements. We also conducted an online A/B test
on an online retriever to demonstrate SynAlign's effectiveness.
|
2502.08662
|
RoToR: Towards More Reliable Responses for Order-Invariant Inputs
|
cs.CL cs.AI
|
Mitigating positional bias of language models (LMs) for listwise inputs is a
well-known and important problem (e.g., lost-in-the-middle). While zero-shot
order-invariant LMs have been proposed to solve this issue, their success on
practical listwise problems has been limited. In this work, as a first
contribution, we identify and overcome two limitations to make zero-shot
invariant LMs more practical: (1) training and inference distribution mismatch
arising from modifying positional ID assignments to enforce invariance, and (2)
failure to adapt to a mixture of order-invariant and sensitive inputs in
practical listwise problems. To overcome them, we propose (1) RoToR, a zero-shot
invariant LM for genuinely order-invariant inputs with minimal modifications of
positional IDs, and (2) Selective Routing, an adaptive framework that handles
both order-invariant and order-sensitive inputs in listwise tasks. On the Lost
in the middle (LitM), Knowledge Graph Question Answering (KGQA), and MMLU
benchmarks, we show that RoToR with Selective Routing can effectively handle
practical listwise input tasks in a zero-shot manner.
|
2502.08663
|
Hallucination Detection: A Probabilistic Framework Using Embeddings
Distance Analysis
|
cs.CL cs.AI cs.CY
|
Hallucinations are one of the major issues affecting LLMs, hindering their
wide adoption in production systems. While current research solutions for
detecting hallucinations are mainly based on heuristics, in this paper we
introduce a mathematically sound methodology to reason about hallucination, and
leverage it to build a tool to detect hallucinations. To the best of our
knowledge, we are the first to show that hallucinated content has structural
differences with respect to correct content. To prove this result, we resort to
the Minkowski distances in the embedding space. Our findings demonstrate
statistically significant differences in the embedding distance distributions,
which are also scale-free: they hold qualitatively regardless of the distance
norm used and the number of keywords, questions, or responses. We leverage
these structural differences to develop a tool to detect hallucinated
responses, achieving an accuracy of 66% for a specific configuration of system
parameters -- comparable with the best results in the field. In conclusion, the
suggested methodology is promising and novel, possibly paving the way for
further research in the domain, also along the directions highlighted in our
future work.
|
2502.08664
|
Motion Forecasting for Autonomous Vehicles: A Survey
|
cs.RO cs.AI
|
In recent years, the field of autonomous driving has attracted increasingly
significant public interest. Accurately forecasting the future behavior of
various traffic participants is essential for the decision-making of Autonomous
Vehicles (AVs). In this paper, we focus on both scenario-based and
perception-based motion forecasting for AVs. We propose a formal problem
formulation for motion forecasting and summarize the main challenges
confronting this area of research. We also detail representative datasets and
evaluation metrics pertinent to this field. Furthermore, this study classifies
recent research into two main categories: supervised learning and
self-supervised learning, reflecting the evolving paradigms in both
scenario-based and perception-based motion forecasting. In the context of
supervised learning, we thoroughly examine and analyze each key element of the
methodology. For self-supervised learning, we summarize commonly adopted
techniques. The paper concludes and discusses potential research directions,
aiming to propel progress in this vital area of AV technology.
|
2502.08666
|
Hallucination, Monofacts, and Miscalibration: An Empirical Investigation
|
cs.CL cs.AI
|
Recent theoretical work by [Kalai and Vempala 2024] proves that a particular
notion of hallucination rate in LLMs must be lower bounded by the training data
monofact rate (related to the classical Good-Turing missing mass estimator)
minus model miscalibration. Through systematic experiments with n-gram models
and in-context learning with LLMs, we empirically investigate and validate this
theory by examining how different underlying data distributions affect the
monofact rate and a model's tendency to hallucinate. We then vary model
miscalibration through controlled upweighting of training samples while holding
monofact rates constant, allowing us to isolate the effect of miscalibration in
reducing hallucination. These findings suggest that both the distribution of
fact frequencies in training data and the calibration-hallucination trade-off
are inherent to probabilistic language generation. Our results also suggest
that current practices of aggressive deduplication in training data may need to
be reconsidered, as selective duplication could serve as a principled mechanism
for reducing hallucination.
|
2502.08667
|
Unpaired Image-to-Image Translation with Content Preserving Perspective:
A Review
|
eess.IV cs.CV
|
Image-to-image translation (I2I) transforms an image from a source domain to
a target domain while preserving source content. Many computer vision
applications, such as style transfer, image segmentation, and photo
enhancement, fall within the field of image-to-image translation. The degree of preservation
of the content of the source images in the translation process can be different
according to the problem and the intended application. From this point of view,
in this paper, we divide the different tasks in the field of image-to-image
translation into three categories: Fully Content preserving, Partially Content
preserving, and Non-Content preserving. We present the tasks, datasets,
methods, and results for these three categories. We categorize I2I methods
based on the architecture of different models
and study each category separately. In addition, we introduce well-known
evaluation criteria in the I2I translation field. Specifically, nearly 70
different I2I models were analyzed, and more than 10 quantitative evaluation
metrics and 30 distinct tasks and datasets relevant to the I2I translation
problem were both introduced and assessed. Translating from simulation to real
images could be well viewed as an application of fully content preserving or
partially content preserving unsupervised image-to-image translation methods.
So, we provide a benchmark for Sim-to-Real translation, which can be used to
evaluate different methods. In general, we conclude that because the extent
to which content must be preserved varies across applications, this issue
should be considered when choosing a suitable I2I model for a specific
application.
|
2502.08668
|
Style Extraction on Text Embeddings Using VAE and Parallel Dataset
|
cs.CL
|
This study investigates the stylistic differences among various Bible
translations using a Variational Autoencoder (VAE) model. By embedding textual
data into high-dimensional vectors, the study aims to detect and analyze
stylistic variations between translations, with a specific focus on
distinguishing the American Standard Version (ASV) from other translations. The
results demonstrate that each translation exhibits a unique stylistic
distribution, which can be effectively identified using the VAE model. These
findings suggest that the VAE model is proficient in capturing and
differentiating textual styles, although it is primarily optimized for
distinguishing a single style. The study highlights the model's potential for
broader applications in AI-based text generation and stylistic analysis, while
also acknowledging the need for further model refinement to address the
complexity of multi-dimensional stylistic relationships. Future research could
extend this methodology to other text domains, offering deeper insights into
the stylistic features embedded within various types of textual data.
|
2502.08669
|
Assessing the Impact of the Quality of Textual Data on Feature
Representation and Machine Learning Models
|
cs.CL
|
Background: Data collected in controlled settings typically results in
high-quality datasets. However, in real-world applications, the quality of data
collection is often compromised. It is well established that the quality of a
dataset significantly impacts the performance of machine learning models.
Methods: A rudimentary error rate metric was developed to evaluate textual
dataset quality at the token level. The Mixtral Large Language Model (LLM) was
used to quantify and correct errors in low-quality datasets. The study analyzed
two healthcare datasets: the high-quality MIMIC-III public hospital dataset and
a lower-quality private dataset from Australian aged care homes (ACH). Errors
were systematically introduced into MIMIC at varying rates, while the ACH
dataset quality was improved using the LLM.
Results: For the sampled 35,774 and 6,336 patients from the MIMIC and ACH
datasets respectively, we used Mixtral to introduce errors in MIMIC and correct
errors in ACH. Mixtral correctly detected errors in 63% of progress notes, with
17% containing a single token misclassified due to medical terminology. LLMs
demonstrated potential for improving progress note quality by addressing
various errors. Under varying error rates, feature representation performance
was tolerant to lower error rates (<10%) but declined significantly at higher
rates.
Conclusions: The study revealed that models performed relatively well on
datasets with lower error rates (<10%), but their performance declined
significantly as error rates increased (>=10%). Therefore, it is crucial to
evaluate the quality of a dataset before utilizing it for machine learning
tasks. For datasets with higher error rates, implementing corrective measures
is essential to ensure the reliability and effectiveness of machine learning
models.
|
2502.08671
|
Color Universal Design Neural Network for the Color Vision Deficiencies
|
eess.IV cs.CV
|
Information conveyed by images should be visually understandable by anyone,
including those with color vision deficiency. However, such information is not
recognizable when a color that appears distorted to individuals with color
vision deficiency borders an adjacent object. The aim of this paper is to
propose a color universal
design network, called CUD-Net, that generates images that are visually
understandable by individuals with color deficiency. CUD-Net is a convolutional
deep neural network that can preserve color and distinguish colors for input
images by regressing the node point of a piecewise linear function and using a
specific filter for each image. To generate CUD images for color deficiencies,
we follow a four-step process. First, we refine the CUD dataset based on
specific criteria by color experts. Second, we expand the input image
information through pre-processing that is specialized for color deficiency
vision. Third, we employ a multi-modality fusion architecture to combine
features and process the expanded images. Finally, we propose a conjugate loss
function based on the composition of the predicted image through the model to
address one-to-many problems that arise from the dataset. Our approach is able
to produce high-quality CUD images that maintain color and contrast stability.
The code for CUD-Net is available in a GitHub repository.
|
2502.08673
|
High-Throughput SAT Sampling
|
cs.AI cs.LG
|
In this work, we present a novel technique for GPU-accelerated Boolean
satisfiability (SAT) sampling. Unlike conventional sampling algorithms that
directly operate on conjunctive normal form (CNF), our method transforms the
logical constraints of SAT problems by factoring their CNF representations into
simplified multi-level, multi-output Boolean functions. It then leverages
gradient-based optimization to guide the search for a diverse set of valid
solutions. Our method operates directly on the circuit structure of refactored
SAT instances, reinterpreting the SAT problem as a supervised multi-output
regression task. This differentiable technique enables independent bit-wise
operations on each tensor element, allowing parallel execution of learning
processes. As a result, we achieve GPU-accelerated sampling with significant
runtime improvements ranging from $33.6\times$ to $523.6\times$ over
state-of-the-art heuristic samplers. We demonstrate the superior performance of
our sampling method through an extensive evaluation on $60$ instances from a
public domain benchmark suite utilized in previous studies.
|
2502.08674
|
COutfitGAN: Learning to Synthesize Compatible Outfits Supervised by
Silhouette Masks and Fashion Styles
|
cs.CV cs.GR cs.MM
|
How to recommend outfits has gained considerable attention in both academia
and industry in recent years. Many studies have been carried out regarding
fashion compatibility learning, to determine whether the fashion items in an
outfit are compatible or not. These methods mainly focus on evaluating the
compatibility of existing outfits and rarely consider applying such knowledge
to 'design' new fashion items. We propose the new task of generating
complementary and compatible fashion items based on an arbitrary number of
given fashion items. In particular, given some fashion items that can make up
an outfit, the aim of this paper is to synthesize photo-realistic images of
other, complementary, fashion items that are compatible with the given ones. To
achieve this, we propose an outfit generation framework, referred to as
COutfitGAN, which includes a pyramid style extractor, an outfit generator, a
UNet-based real/fake discriminator, and a collocation discriminator. To train
and evaluate this framework, we collected a large-scale fashion outfit dataset
with over 200K outfits and 800K fashion items from the Internet. Extensive
experiments show that COutfitGAN outperforms other baselines in terms of
similarity, authenticity, and compatibility measurements.
|
2502.08676
|
LIR-LIVO: A Lightweight, Robust LiDAR/Vision/Inertial Odometry with
Illumination-Resilient Deep Features
|
cs.RO cs.CV cs.SY eess.SP eess.SY
|
In this paper, we propose LIR-LIVO, a lightweight and robust
LiDAR-inertial-visual odometry system designed for challenging illumination and
degraded environments. The proposed method leverages deep learning-based
illumination-resilient features and LiDAR-Inertial-Visual Odometry (LIVO). By
incorporating advanced techniques such as uniform depth distribution of
features enabled by depth association with LiDAR point clouds and adaptive
feature matching utilizing Superpoint and LightGlue, LIR-LIVO achieves
state-of-the-art (SOTA) accuracy and robustness with low computational cost.
Experiments are conducted on benchmark datasets, including NTU-VIRAL, Hilti'22,
and R3LIVE-Dataset. The corresponding results demonstrate that our proposed
method outperforms other SOTA methods on both standard and challenging
datasets. Particularly, the proposed method demonstrates robust pose estimation
under poor ambient lighting conditions in the Hilti'22 dataset. The code of
this work is publicly accessible on GitHub to facilitate advancements in the
robotics community.
|
2502.08678
|
Multispectral Remote Sensing for Weed Detection in West Australian
Agricultural Lands
|
cs.CV eess.IV
|
The Kondinin region in Western Australia faces significant agricultural
challenges due to pervasive weed infestations, causing economic losses and
ecological impacts. This study constructs a tailored multispectral remote
sensing dataset and an end-to-end framework for weed detection to advance
precision agriculture practices. Unmanned aerial vehicles were used to collect
raw multispectral data from two experimental areas (E2 and E8) over four years,
covering 0.6046 km^2, and ground-truth annotations were created with
GPS-enabled vehicles to manually label weeds and crops. The dataset is
specifically designed for agricultural applications in Western Australia. We
propose an end-to-end framework for weed detection that includes extensive
preprocessing steps, such as denoising, radiometric calibration, image
alignment, orthorectification, and stitching. The proposed method combines
vegetation indices (NDVI, GNDVI, EVI, SAVI, MSAVI) with multispectral channels
to form classification features, and employs several deep learning models to
identify weeds based on the input features. Among these models, ResNet achieves
the highest performance, with a weed detection accuracy of 0.9213, an F1-Score
of 0.8735, an mIOU of 0.7888, and an mDC of 0.8865, validating the efficacy of
the dataset and the proposed weed detection method.
|
2502.08679
|
Deep Learning-Driven Malware Classification with API Call Sequence
Analysis and Concept Drift Handling
|
cs.LG cs.AI cs.CR
|
Malware classification in dynamic environments presents a significant
challenge due to concept drift, where the statistical properties of malware
data evolve over time, complicating detection efforts. To address this issue,
we propose a deep learning framework enhanced with a genetic algorithm to
improve malware classification accuracy and adaptability. Our approach
incorporates mutation operations and fitness score evaluations within genetic
algorithms to continuously refine the deep learning model, ensuring robustness
against evolving malware threats. Experimental results demonstrate that this
hybrid method significantly enhances classification performance and
adaptability, outperforming traditional static models. Our proposed approach
offers a promising solution for real-time malware classification in
ever-changing cybersecurity landscapes.
|
2502.08680
|
Mathematical Reasoning in Large Language Models: Assessing Logical and
Arithmetic Errors across Wide Numerical Ranges
|
cs.LG cs.AI cs.CL
|
Mathematical reasoning in Large Language Models (LLMs) is often evaluated
using benchmarks with limited numerical ranges, failing to reflect real-world
problem-solving across diverse scales. Furthermore, most existing evaluation
methods only compare model outputs to ground-truth answers, obscuring insights
into reasoning processes. To address these limitations, we introduce
GSM-Ranges, a dataset generator derived from GSM8K that systematically perturbs
numerical values in math problems to assess model robustness across varying
numerical scales. Additionally, we propose a novel grading methodology that
distinguishes between logical and non-logical errors, offering a more precise
evaluation of reasoning processes beyond computational accuracy. Our
experiments with various models reveal a significant increase in logical error
rates, up to 14 percentage points, as numerical complexity rises, demonstrating a
general weakness in reasoning with out-of-distribution numerical values.
Moreover, while models demonstrate high accuracy on standalone arithmetic
tasks, their performance deteriorates substantially when computations are
embedded within word problems. These findings provide a comprehensive
evaluation of LLMs' mathematical reasoning capabilities and inform future
research directions for improving numerical generalization in language models.
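GSM-Ranges' actual generation rules are not spelled out in the abstract; a minimal sketch of the core idea, systematically rescaling the numbers in a word problem to probe other numerical ranges, could look like this (the function name and scheme are illustrative):

```python
import re

def perturb_numbers(problem, factor):
    """Rescale every integer in a word problem by `factor` -- a toy stand-in
    for GSM-Ranges-style numerical perturbation of GSM8K problems."""
    return re.sub(r"\d+", lambda m: str(int(m.group()) * factor), problem)

q = "Alice has 3 boxes with 12 apples each. How many apples in total?"
print(perturb_numbers(q, 1000))
```

Grading the model's worked solution on the perturbed variant, rather than only its final answer, is what lets the paper separate logical from arithmetic errors.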
|
2502.08681
|
Centrally Coordinated Multi-Agent Reinforcement Learning for Power Grid
Topology Control
|
cs.MA cs.AI cs.LG
|
Power grid operation is becoming more complex due to the increase in
generation of renewable energy. The recent series of Learning To Run a Power
Network (L2RPN) competitions have encouraged the use of artificial agents to
assist human dispatchers in operating power grids. However, the combinatorial
nature of the action space poses a challenge to both conventional optimizers
and learned controllers. Action space factorization, which breaks down
decision-making into smaller sub-tasks, is one approach to tackle the curse of
dimensionality. In this study, we propose a centrally coordinated multi-agent
(CCMA) architecture for action space factorization. In this approach, regional
agents propose actions and subsequently a coordinating agent selects the final
action. We investigate several implementations of the CCMA architecture, and
benchmark in different experimental settings against various L2RPN baseline
approaches. The CCMA architecture exhibits higher sample efficiency and
superior final performance than the baseline approaches. The results suggest
high potential of the CCMA approach for further application in
higher-dimensional L2RPN as well as real-world power grid settings.
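The CCMA control flow described above, regional agents propose, a coordinator selects, can be illustrated with a deliberately tiny scalar toy (the heuristics and cost function below are hypothetical stand-ins, not the paper's learned agents):

```python
def regional_agent(region_state, actions):
    """Each regional agent proposes its locally best action (toy heuristic)."""
    return min(actions, key=lambda a: abs(region_state - a))

def coordinator(grid_state, proposals, global_cost):
    """The coordinating agent evaluates all regional proposals and selects
    the one with the lowest system-wide cost."""
    return min(proposals, key=lambda a: global_cost(grid_state, a))

# Toy setup: one scalar "load" per region; actions shift load uniformly.
regions = [0.9, -0.4, 0.2]
actions = [-1, 0, 1]
proposals = [regional_agent(s, actions) for s in regions]

def global_cost(state, action):
    # Hypothetical cost: total residual imbalance after applying the action.
    return sum(abs(s - action) for s in state)

final = coordinator(regions, proposals, global_cost)
print(proposals, final)
```

The factorization benefit is that each regional agent searches only its own small action set, while the coordinator resolves conflicts between locally optimal but globally suboptimal proposals.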
|
2502.08682
|
On the Role of Pre-trained Embeddings in Binary Code Analysis
|
cs.LG cs.AI
|
Deep learning has enabled remarkable progress in binary code analysis. In
particular, pre-trained embeddings of assembly code have become a gold standard
for solving analysis tasks, such as measuring code similarity or recognizing
functions. These embeddings are capable of learning a vector representation
from unlabeled code. In contrast to natural language processing, however, label
information is not scarce for many tasks in binary code analysis. For example,
labeled training data for function boundaries, optimization levels, and
argument types can be easily derived from debug information provided by a
compiler. Consequently, the main motivation of embeddings does not transfer
directly to binary code analysis.
In this paper, we explore the role of pre-trained embeddings from a critical
perspective. To this end, we systematically evaluate recent embeddings for
assembly code on five downstream tasks using a corpus of 1.2 million functions
from the Debian distribution. We observe that several embeddings perform
similarly when sufficient labeled data is available, and that differences
reported in prior work are hardly noticeable. Surprisingly, we find that
end-to-end learning without pre-training performs best on average, which calls
into question the need for specialized embeddings. By varying the amount of
labeled data, we eventually derive guidelines for when embeddings offer
advantages and when end-to-end learning is preferable for binary code analysis.
|
2502.08683
|
A Deep Learning approach for parametrized and time dependent Partial
Differential Equations using Dimensionality Reduction and Neural ODEs
|
cs.LG
|
Partial Differential Equations (PDEs) are central to science and engineering.
Since solving them is computationally expensive, a lot of effort has been put
into approximating their solution operator via both traditional and recently
increasingly Deep Learning (DL) techniques. A conclusive methodology capable of
accounting both for (continuous) time and parameter dependency in such DL
models, however, is still lacking. In this paper, we propose an autoregressive
and data-driven method using the analogy with classical numerical solvers for
time-dependent, parametric and (typically) nonlinear PDEs. We present how
Dimensionality Reduction (DR) can be coupled with Neural Ordinary Differential
Equations (NODEs) in order to learn the solution operator of arbitrary PDEs.
The idea of our work is that it is possible to map the high-fidelity (i.e.,
high-dimensional) PDE solution space into a reduced (low-dimensional) space,
which subsequently exhibits dynamics governed by a (latent) Ordinary
Differential Equation (ODE). Solving this (easier) ODE in the reduced space
allows avoiding solving the PDE in the high-dimensional solution space, thus
decreasing the computational burden for repeated calculations for e.g.,
uncertainty quantification or design optimization purposes. The main outcome of
this work is the importance of exploiting DR as opposed to the recent trend of
building large and complex architectures: we show that by leveraging DR we can
deliver not only more accurate predictions, but also a considerably lighter and
faster DL model compared to existing methodologies.
|
2502.08684
|
Self-Evaluation for Job-Shop Scheduling
|
cs.LG cs.AI
|
Combinatorial optimization problems, such as scheduling and route planning,
are crucial in various industries but are computationally intractable due to
their NP-hard nature. Neural Combinatorial Optimization methods leverage
machine learning to address these challenges but often depend on sequential
decision-making, which is prone to error accumulation as small mistakes
propagate throughout the process. Inspired by self-evaluation techniques in
Large Language Models, we propose a novel framework that generates and
evaluates subsets of assignments, moving beyond traditional stepwise
approaches. Applied to the Job-Shop Scheduling Problem, our method integrates a
heterogeneous graph neural network with a Transformer to build a policy model
and a self-evaluation function. Experimental validation on challenging,
well-known benchmarks demonstrates the effectiveness of our approach,
surpassing state-of-the-art methods.
|
2502.08685
|
Beyond Models! Explainable Data Valuation and Metric Adaption for
Recommendation
|
cs.LG cs.AI
|
User behavior records serve as the foundation for recommender systems. While
behavior data is easy to acquire, it often suffers from varying
quality. Current methods employ data valuation to discern high-quality data
from low-quality data. However, they tend to employ black-box design, lacking
transparency and interpretability. Besides, they are typically tailored to
specific evaluation metrics, leading to limited generality across various
tasks. To overcome these issues, we propose an explainable and versatile
framework DVR which can enhance the efficiency of data utilization tailored to
any requirements of the model architectures and evaluation metrics. For
explainable data valuation, a data valuator is presented to evaluate the data
quality via calculating its Shapley value from the game-theoretic perspective,
ensuring robust mathematical properties and reliability. In order to
accommodate various evaluation metrics, including differentiable and
non-differentiable ones, a metric adapter is devised based on reinforcement
learning, where a metric is treated as the reinforcement reward that guides
model optimization. Extensive experiments conducted on various benchmarks
verify that our framework can improve the performance of current recommendation
algorithms on various metrics including ranking accuracy, diversity, and
fairness. Specifically, our framework achieves up to 34.7% improvement over
existing methods on the representative NDCG metric. The code is available
at https://github.com/renqii/DVR.
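The game-theoretic valuation mentioned above can be made concrete with an exact Shapley computation on a tiny dataset (feasible only at toy scale; the utility function and data below are illustrative, not DVR's actual valuator):

```python
import itertools, math

def utility(train_subset, val):
    """Toy utility: accuracy of a nearest-centroid classifier built from the
    subset, evaluated on a held-out validation set."""
    if not train_subset:
        return 0.0
    groups = {}
    for x, y in train_subset:
        groups.setdefault(y, []).append(x)
    centroids = {y: sum(xs) / len(xs) for y, xs in groups.items()}
    correct = sum(
        int(min(centroids, key=lambda c: abs(centroids[c] - x)) == y)
        for x, y in val
    )
    return correct / len(val)

def shapley_values(train, val):
    """Exact Shapley value of each training point by enumerating all subsets."""
    n = len(train)
    values = [0.0] * n
    for i in range(n):
        rest = train[:i] + train[i + 1:]
        for r in range(n):
            for subset in itertools.combinations(rest, r):
                weight = math.factorial(r) * math.factorial(n - r - 1) / math.factorial(n)
                with_i = utility(list(subset) + [train[i]], val)
                without_i = utility(list(subset), val)
                values[i] += weight * (with_i - without_i)
    return values

train = [(0.0, "a"), (0.1, "a"), (1.0, "b"), (5.0, "a")]  # last point is mislabeled noise
val = [(0.05, "a"), (0.95, "b")]
vals = shapley_values(train, val)
print([round(v, 3) for v in vals])
```

The mislabeled outlier receives a negative value, which is the explainability property the abstract highlights: low-quality records are identified by their harmful marginal contribution rather than by a black-box score.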
|
2502.08686
|
EEG Artifact Detection and Correction with Deep Autoencoders
|
cs.LG cs.AI
|
EEG signals convey important information about brain activity both in healthy
and pathological conditions. However, they are inherently noisy, which poses
significant challenges for accurate analysis and interpretation. Traditional
EEG artifact removal methods, while effective, often require extensive expert
intervention. This study presents LSTEEG, a novel LSTM-based autoencoder
designed for the detection and correction of artifacts in EEG signals.
Leveraging deep learning, particularly LSTM layers, LSTEEG captures non-linear
dependencies in sequential EEG data. LSTEEG demonstrates superior performance
in both artifact detection and correction tasks compared to other
state-of-the-art convolutional autoencoders. Our methodology enhances the
interpretability and utility of the autoencoder's latent space, enabling
data-driven automated artifact removal in EEG and supporting its application in downstream
tasks. This research advances the field of efficient and accurate multi-channel
EEG preprocessing, and promotes the implementation and usage of automated EEG
analysis pipelines for brain health applications.
|
2502.08687
|
Data Augmentation to Improve Large Language Models in Food Hazard and
Product Detection
|
cs.CL
|
The primary objective of this study is to demonstrate the impact of data
augmentation using ChatGPT-4o-mini on food hazard and product analysis. The
augmented data is generated using ChatGPT-4o-mini and subsequently used to
train two large language models: RoBERTa-base and Flan-T5-base. The models are
evaluated on test sets. The results indicate that using augmented data helped
improve model performance across key metrics, including recall, F1 score,
precision, and accuracy, compared to using only the provided dataset. The full
code, including model training and the augmented dataset, can be found in this
repository: https://github.com/AREEG94FAHAD/food-hazard-prdouct-cls
|
2502.08688
|
FAST: A Future Aircraft Sizing Tool for Advanced Aircraft and Propulsion
System Design
|
cs.CE physics.comp-ph
|
Without radical technological advancements, the global aviation industry will
continue to be a major carbon emitter. To reduce aviation's carbon emissions,
innovative aircraft technology, including electrified aircraft propulsion, is
under development. However, current aircraft sizing tools require detailed
design information that may not be available early in the development process,
particularly for novel technologies. This can yield suboptimal designs and
inhibits innovation. A computational tool is needed to easily and rapidly size
an aircraft configuration while allowing the designer to explore the design
space, examine tradeoffs, and evaluate alternative designs. The Future Aircraft
Sizing Tool (FAST), developed in Matlab, addresses this challenge by rapidly
sizing aircraft with any propulsion architecture, including conventional,
electric, and hybrid electric systems, even with limited initial data. FAST
enables engineers to explore various aircraft configurations, evaluate design
alternatives, assess performance across a flight envelope, and visualize
concepts during the sizing process. By supporting early stage design, FAST
addresses a gap in currently available computational tools for developing
sustainable aviation technologies to help reduce the industry's carbon
footprint.
|
2502.08689
|
Advancing machine fault diagnosis: A detailed examination of
convolutional neural networks
|
cs.LG cs.AI
|
The growing complexity of machinery and the increasing demand for operational
efficiency and safety have driven the development of advanced fault diagnosis
techniques. Among these, convolutional neural networks (CNNs) have emerged as a
powerful tool, offering robust and accurate fault detection and classification
capabilities. This comprehensive review delves into the application of CNNs in
machine fault diagnosis, covering its theoretical foundation, architectural
variations, and practical implementations. The strengths and limitations of
CNNs are analyzed in this domain, discussing their effectiveness in handling
various fault types, data complexities, and operational environments.
Furthermore, we explore the evolving landscape of CNN-based fault diagnosis,
examining recent advancements in data augmentation, transfer learning, and
hybrid architectures. Finally, we highlight future research directions and
potential challenges to further enhance the application of CNNs for reliable
and proactive machine fault diagnosis.
|
2502.08690
|
Skrr: Skip and Re-use Text Encoder Layers for Memory Efficient
Text-to-Image Generation
|
cs.LG cs.AI cs.CV
|
Large-scale text encoders in text-to-image (T2I) diffusion models have
demonstrated exceptional performance in generating high-quality images from
textual prompts. Unlike denoising modules that rely on multiple iterative
steps, text encoders require only a single forward pass to produce text
embeddings. However, despite their minimal contribution to total inference time
and floating-point operations (FLOPs), text encoders demand significantly
higher memory usage, up to eight times more than denoising modules. To address
this inefficiency, we propose Skip and Re-use layers (Skrr), a simple yet
effective pruning strategy specifically designed for text encoders in T2I
diffusion models. Skrr exploits the inherent redundancy in transformer blocks
by selectively skipping or reusing certain layers in a manner tailored for T2I
tasks, thereby reducing memory consumption without compromising performance.
Extensive experiments demonstrate that Skrr maintains image quality comparable
to the original model even under high sparsity levels, outperforming existing
blockwise pruning methods. Furthermore, Skrr achieves state-of-the-art memory
efficiency while preserving performance across multiple evaluation metrics,
including the FID, CLIP, DreamSim, and GenEval scores.
|
2502.08691
|
AgentSociety: Large-Scale Simulation of LLM-Driven Generative Agents
Advances Understanding of Human Behaviors and Society
|
cs.SI cs.AI
|
Understanding human behavior and society is a central focus in social
sciences, with the rise of generative social science marking a significant
paradigmatic shift. By leveraging bottom-up simulations, it replaces costly and
logistically challenging traditional experiments with scalable, replicable, and
systematic computational approaches for studying complex social dynamics.
Recent advances in large language models (LLMs) have further transformed this
research paradigm, enabling the creation of human-like generative social agents
and realistic simulacra of society. In this paper, we propose AgentSociety, a
large-scale social simulator that integrates LLM-driven agents, a realistic
societal environment, and a powerful large-scale simulation engine. Based on
the proposed simulator, we generate social lives for over 10k agents,
simulating their 5 million interactions both among agents and between agents
and their environment. Furthermore, we explore the potential of AgentSociety as
a testbed for computational social experiments, focusing on four key social
issues: polarization, the spread of inflammatory messages, the effects of
universal basic income policies, and the impact of external shocks such as
hurricanes. These four issues serve as valuable cases for assessing
AgentSociety's support for typical research methods -- such as surveys,
interviews, and interventions -- as well as for investigating the patterns,
causes, and underlying mechanisms of social issues. The alignment between
AgentSociety's outcomes and real-world experimental results not only
demonstrates its ability to capture human behaviors and their underlying
mechanisms, but also underscores its potential as an important platform for
social scientists and policymakers.
|
2502.08692
|
Efficient Split Learning LSTM Models for FPGA-based Edge IoT Devices
|
cs.LG cs.DC
|
Split Learning (SL) recently emerged as an efficient paradigm for distributed
Machine Learning (ML) suitable for Internet of Things (IoT)-Cloud systems.
However, deploying SL on resource-constrained edge IoT platforms poses a
significant challenge in terms of balancing the model performance against the
processing, memory, and energy resources. In this work, we present a practical
study of deploying SL framework on a real-world Field-Programmable Gate Array
(FPGA)-based edge IoT platform. We address the SL framework applied to a
time-series processing model based on Recurrent Neural Networks (RNNs). Set in
the context of river water quality monitoring and using real-world data, we
train, optimize, and deploy a Long Short-Term Memory (LSTM) model on a given
edge IoT FPGA platform in different SL configurations. Our results demonstrate
the importance of aligning design choices with specific application
requirements, whether it is maximizing speed, minimizing power, or optimizing
for resource constraints.
|
2502.08695
|
A Bayesian Nonparametric Perspective on Mahalanobis Distance for Out of
Distribution Detection
|
stat.ML cs.LG
|
Bayesian nonparametric methods are naturally suited to the problem of
out-of-distribution (OOD) detection. However, these techniques have largely
been eschewed in favor of simpler methods based on distances between
pre-trained or learned embeddings of data points. Here we show a formal
relationship between Bayesian nonparametric models and the relative Mahalanobis
distance score (RMDS), a commonly used method for OOD detection. Building on
this connection, we propose Bayesian nonparametric mixture models with
hierarchical priors that generalize the RMDS. We evaluate these models on the
OpenOOD detection benchmark and show that Bayesian nonparametric methods can
improve upon existing OOD methods, especially in regimes where training classes
differ in their covariance structure and where there are relatively few data
points per class.
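The relative Mahalanobis distance score (RMDS) that this paper connects to Bayesian nonparametrics can be sketched in a few lines; note this is a simplified diagonal-covariance version (RMDS as usually defined uses full, often class-shared, covariances):

```python
import math

def fit_gaussian(points):
    """Diagonal-covariance Gaussian fit (a simplification for illustration)."""
    n, d = len(points), len(points[0])
    mean = [sum(p[j] for p in points) / n for j in range(d)]
    var = [max(sum((p[j] - mean[j]) ** 2 for p in points) / n, 1e-6) for j in range(d)]
    return mean, var

def mahalanobis_sq(x, mean, var):
    return sum((x[j] - mean[j]) ** 2 / var[j] for j in range(len(x)))

def rmds(x, class_gaussians, background):
    """Relative Mahalanobis distance score: per-class distance minus the
    distance under a single background Gaussian; lower = more in-distribution."""
    bg = mahalanobis_sq(x, *background)
    return min(mahalanobis_sq(x, *g) - bg for g in class_gaussians)

# Two training classes in 2-D (toy embeddings).
class_a = [(0.0, 0.0), (0.2, -0.1), (-0.1, 0.1), (0.1, 0.2)]
class_b = [(3.0, 3.0), (3.1, 2.9), (2.9, 3.2), (3.2, 3.1)]
background = fit_gaussian(class_a + class_b)
gaussians = [fit_gaussian(class_a), fit_gaussian(class_b)]

in_dist = rmds((0.05, 0.05), gaussians, background)
out_dist = rmds((10.0, -8.0), gaussians, background)
print(in_dist < out_dist)
```

Subtracting the background distance is what makes the score "relative"; the paper's hierarchical Bayesian nonparametric mixtures generalize exactly this construction, helping when classes differ in covariance structure.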
|
2502.08696
|
Scalable Discrete Diffusion Samplers: Combinatorial Optimization and
Statistical Physics
|
cs.LG cond-mat.stat-mech cs.AI physics.comp-ph stat.ML
|
Learning to sample from complex unnormalized distributions over discrete
domains emerged as a promising research direction with applications in
statistical physics, variational inference, and combinatorial optimization.
Recent work has demonstrated the potential of diffusion models in this domain.
However, existing methods face limitations in memory scaling and thus the
number of attainable diffusion steps since they require backpropagation through
the entire generative process. To overcome these limitations we introduce two
novel training methods for discrete diffusion samplers, one grounded in the
policy gradient theorem and the other one leveraging Self-Normalized Neural
Importance Sampling (SN-NIS). These methods yield memory-efficient training and
achieve state-of-the-art results in unsupervised combinatorial optimization.
Numerous scientific applications additionally require unbiased
sampling. We introduce adaptations of SN-NIS and Neural Markov Chain Monte
Carlo that enable for the first time the application of discrete diffusion
models to this problem. We validate our methods on Ising model benchmarks and
find that they outperform popular autoregressive approaches. Our work opens new
avenues for applying diffusion models to a wide range of scientific
applications in discrete domains that were hitherto restricted to exact
likelihood models.
|
2502.08697
|
Bilevel Learning for Bilevel Planning
|
cs.RO
|
A robot that learns from demonstrations should not just imitate what it sees
-- it should understand the high-level concepts that are being demonstrated and
generalize them to new tasks. Bilevel planning is a hierarchical model-based
approach where predicates (relational state abstractions) can be leveraged to
achieve compositional generalization. However, previous bilevel planning
approaches depend on predicates that are either hand-engineered or restricted
to very simple forms, limiting their scalability to sophisticated,
high-dimensional state spaces. To address this limitation, we present IVNTR,
the first bilevel planning approach capable of learning neural predicates
directly from demonstrations. Our key innovation is a neuro-symbolic bilevel
learning framework that mirrors the structure of bilevel planning. In IVNTR,
symbolic learning of the predicate "effects" and neural learning of the
predicate "functions" alternate, with each providing guidance for the other. We
evaluate IVNTR in six diverse robot planning domains, demonstrating its
effectiveness in abstracting various continuous and high-dimensional states.
While most existing approaches struggle to generalize (with <35% success rate),
our IVNTR achieves an average of 77% success rate on unseen tasks.
Additionally, we showcase IVNTR on a mobile manipulator, where it learns to
perform real-world mobile manipulation tasks and generalizes to unseen test
scenarios that feature new objects, new states, and longer task horizons. Our
findings underscore the promise of learning and planning with abstractions as a
path towards high-level generalization.
|
2502.08728
|
A Comparative Study of Machine Learning Algorithms for Stock Price
Prediction Using Insider Trading Data
|
cs.LG
|
The research paper empirically investigates several machine learning
algorithms to forecast stock prices depending on insider trading information.
Insider trading offers special insights into market sentiment, pointing to
upcoming changes in stock prices. This study examines the effectiveness of
algorithms like decision trees, random forests, support vector machines (SVM)
with different kernels, and K-Means Clustering using a dataset of Tesla stock
transactions. Examining past data from April 2020 to March 2023, this study
focuses on how well these algorithms identify trends and forecast stock price
fluctuations. The paper uses Recursive Feature Elimination (RFE) and feature
importance analysis to optimize the feature set and, hence, increase prediction
accuracy. While it requires substantially greater processing time than other
models, SVM with the Radial Basis Function (RBF) kernel displays the best
accuracy. This paper highlights the trade-offs between accuracy and efficiency
in machine learning models and proposes the possibility of pooling multiple
data sources to raise prediction performance. The results of this paper aim to
help financial analysts and investors in choosing strong algorithms to optimize
investment strategies.
|
2502.08729
|
Policy Selection and Schedules for Exclusive Bus Lane and High Occupancy
Vehicle Lane in a Bi-modal Transportation Corridor
|
math.OC cs.SY eess.SY
|
Efficient management of transportation corridors is critical for sustaining
urban mobility, directly influencing transportation efficiency. Two prominent
strategies for enhancing public transit services and alleviating congestion,
Exclusive Bus Lane (EBL) and High Occupancy Vehicle Lane (HOVL), are gaining
increasing attention. EBLs prioritize bus transit by providing dedicated lanes
for faster travel times, while HOVLs encourage carpooling by reserving lanes
for high-occupancy vehicles. However, static implementations of these policies
may underutilize road resources and disrupt general-purpose lanes. Dynamic
control of these policies, based on real-time demand, can potentially maximize
road efficiency and minimize negative impacts. This study develops cost
functions for Mixed Traffic Policy (MTP), Exclusive Bus Lane Policy (EBLP), and
High Occupancy Vehicle Lane Policy (HOVLP), incorporating optimized bus
frequency and demand split under equilibrium condition. Switching thresholds
for policy selection are derived to identify optimal periods for implementing
each policy based on dynamic demand simulated using an Ornstein-Uhlenbeck (O-U)
process. Results reveal significant reductions in total system costs with the
proposed dynamic policy integration. Compared to static implementations, the
combined policy achieves cost reductions of 12.0%, 5.3% and 42.5% relative to
MTP-only, EBLP-only, and HOVLP-only scenarios, respectively. Additionally, in
two real case studies of existing EBL and HOVL operations, the proposed dynamic
policy reduces total costs by 32.2% and 27.9%, respectively. The findings
provide valuable insights for policymakers and transit planners, offering a
robust framework for dynamically scheduling and integrating EBL and HOVL
policies to optimize urban corridor efficiency and reduce overall system costs.
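The dynamic demand driving the policy switching is modeled as an Ornstein-Uhlenbeck (O-U) process; a standard Euler-Maruyama simulation of such a process, with purely illustrative parameter values, is:

```python
import math, random

def simulate_ou(x0, mu, theta, sigma, dt, steps, seed=0):
    """Euler-Maruyama simulation of an Ornstein-Uhlenbeck process,
    dX = theta * (mu - X) dt + sigma dW.  Parameter values are
    illustrative stand-ins for the paper's calibrated demand model."""
    rng = random.Random(seed)
    x, path = x0, [x0]
    for _ in range(steps):
        x += theta * (mu - x) * dt + sigma * math.sqrt(dt) * rng.gauss(0, 1)
        path.append(x)
    return path

# Demand (veh/h) reverting toward a long-run mean of 2000.
demand = simulate_ou(x0=500.0, mu=2000.0, theta=0.5, sigma=50.0, dt=0.1, steps=500)
print(round(sum(demand[-100:]) / 100))
```

A threshold rule on the simulated demand level would then select among MTP, EBLP, and HOVLP at each decision epoch, which is the switching scheme the cost comparisons above evaluate.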
|
2502.08730
|
New Bounds for Sparse Variational Gaussian Processes
|
cs.LG stat.ME stat.ML
|
Sparse variational Gaussian processes (GPs) construct tractable posterior
approximations to GP models. At the core of these methods is the assumption
that the true posterior distribution over training function values ${\bf f}$
and inducing variables ${\bf u}$ is approximated by a variational distribution
that incorporates the conditional GP prior $p({\bf f} | {\bf u})$ in its
factorization. While this assumption is considered fundamental, we show that
for model training we can relax it through the use of a more general
variational distribution $q({\bf f} | {\bf u})$ that depends on $N$ extra
parameters, where $N$ is the number of training examples. In GP regression, we
can analytically optimize the evidence lower bound over the extra parameters
and express a tractable collapsed bound that is tighter than the previous
bound. The new bound is also amenable to stochastic optimization and its
implementation requires minor modifications to existing sparse GP code.
Further, we also describe extensions to non-Gaussian likelihoods. On several
datasets we demonstrate that our method can reduce bias when learning the
hyperparameters and can lead to better predictive performance.
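For context, the "previous bound" being tightened here is presumably the standard collapsed bound of sparse variational GP regression (Titsias, 2009), which in this notation reads

$$\log p({\bf y}) \;\ge\; \log \mathcal{N}\big({\bf y} \,\big|\, {\bf 0},\, {\bf Q}_{nn} + \sigma^2 {\bf I}\big) \;-\; \frac{1}{2\sigma^2}\,\mathrm{tr}\big({\bf K}_{nn} - {\bf Q}_{nn}\big), \qquad {\bf Q}_{nn} = {\bf K}_{nm}{\bf K}_{mm}^{-1}{\bf K}_{mn},$$

where ${\bf K}_{nm}$ collects kernel evaluations between the $N$ training inputs and the inducing inputs. The abstract's $N$ extra parameters in $q({\bf f} \,|\, {\bf u})$ are optimized analytically and collapsed out, yielding a bound of the same tractable form but provably no looser.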
|
2502.08731
|
Equity-aware Design and Timing of Fare-free Transit Zoning under Demand
Uncertainty
|
math.OC cs.SY eess.SY
|
We propose the first analytical stochastic model for optimizing the
configuration and implementation policies of fare-free transit. The model
focuses on a transportation corridor with two transportation modes: automobiles
and buses. The corridor is divided into two sections, an inner one with
fare-free transit service and an outer one with fare-based transit service.
Under the static version of the model, the optimized length and frequency of
the fare-free transit zone can be determined by maximizing total social
welfare. The findings indicate that implementing fare-free transit can increase
transit ridership and reduce automobile use within the fare-free zone while
social equity among the demand groups can be enhanced by lengthening the
fare-free zone. Notably, the optimal zone length increases when both social
welfare and equity are considered jointly, compared to only prioritizing social
welfare. The dynamic model, framed within a market entry and exit real options
approach, solves the fare policy switching problem, establishing optimal timing
policies for activating or terminating fare-free service. The results from
dynamic models reveal earlier implementation and extended durations of
fare-free transit in the social welfare-aware regime, driven by lower
thresholds compared to the social equity-aware regime.
|
2502.08736
|
Recurrent Memory for Online Interdomain Gaussian Processes
|
cs.LG stat.ML
|
We propose a novel online Gaussian process (GP) model that is capable of
capturing long-term memory in sequential data in an online regression setting.
Our model, Online HiPPO Sparse Variational Gaussian Process Regression
(OHSGPR), leverages the HiPPO (High-order Polynomial Projection Operators)
framework, which is popularized in the RNN domain due to its long-range memory
modeling capabilities. We interpret the HiPPO time-varying orthogonal
projections as inducing variables with time-dependent orthogonal polynomial
basis functions, which allows the SGPR inducing points to memorize the process
history. We show that the HiPPO framework fits naturally into the interdomain
GP framework and demonstrate that the kernel matrices can also be updated
online in a recurrence form based on the ODE evolution of HiPPO. We evaluate
our method on time series regression tasks, showing that it outperforms the
existing online GP method in terms of predictive performance and computational
efficiency.
|
2502.08742
|
"Active Neighbour": A Novel Monitoring Model for Cyber-Physical Systems
|
eess.SY cs.SY
|
Over the past decade, advancements in technology have enabled Cyber-Physical
Systems (CPS) to monitor sensor networks through various methodologies.
However, these developments have concurrently introduced significant security
challenges, necessitating robust protective measures. As a result, securing CPS
has become a critical area of research. This paper reviews existing CPS
monitoring models and introduces an innovative role-based monitoring model
designed to meet contemporary security requirements. The proposed model is
implemented within the COOJA simulator of the Contiki OS and evaluated under
three distinct security configurations. Preliminary results demonstrate
promising outcomes, although further comprehensive testing is ongoing.
|
2502.08744
|
Are Expressions for Music Emotions the Same Across Cultures?
|
cs.CL cs.HC cs.SD eess.AS
|
Music evokes profound emotions, yet the universality of emotional descriptors
across languages remains debated. A key challenge in cross-cultural research on
music emotion is biased stimulus selection and manual curation of taxonomies,
predominantly relying on Western music and languages. To address this, we
propose a balanced experimental design with nine online experiments in Brazil,
the US, and South Korea, involving N=672 participants. First, we sample a
balanced set of popular music from these countries. Using an open-ended tagging
pipeline, we then gather emotion terms to create culture-specific taxonomies.
Finally, using these bottom-up taxonomies, participants rate emotions of each
song. This allows us to map emotional similarities within and across cultures.
Results show consistency in high arousal, high valence emotions but greater
variability in others. Notably, machine translations were often inadequate to
capture music-specific meanings. These findings together highlight the need for
a domain-sensitive, open-ended, bottom-up emotion elicitation approach to
reduce cultural biases in emotion research.
|
2502.08745
|
IHEval: Evaluating Language Models on Following the Instruction
Hierarchy
|
cs.CL
|
The instruction hierarchy, which establishes a priority order from system
messages to user messages, conversation history, and tool outputs, is essential
for ensuring consistent and safe behavior in language models (LMs). Despite its
importance, this topic receives limited attention, and there is a lack of
comprehensive benchmarks for evaluating models' ability to follow the
instruction hierarchy. We bridge this gap by introducing IHEval, a novel
benchmark comprising 3,538 examples across nine tasks, covering cases where
instructions in different priorities either align or conflict. Our evaluation
of popular LMs highlights their struggle to recognize instruction priorities.
All evaluated models experience a sharp performance decline when facing
conflicting instructions, compared to their original instruction-following
performance. Moreover, the most competitive open-source model only achieves 48%
accuracy in resolving such conflicts. Our results underscore the need for
targeted optimization in the future development of LMs.
|
2502.08754
|
HistoSmith: Single-Stage Histology Image-Label Generation via
Conditional Latent Diffusion for Enhanced Cell Segmentation and
Classification
|
cs.CV cs.AI
|
Precise segmentation and classification of cell instances are vital for
analyzing the tissue microenvironment in histology images, supporting medical
diagnosis, prognosis, treatment planning, and studies of brain
cytoarchitecture. However, the creation of high-quality annotated datasets for
training remains a major challenge. This study introduces a novel single-stage
approach (HistoSmith) for generating image-label pairs to augment histology
datasets. Unlike state-of-the-art methods that utilize diffusion models with
separate components for label and image generation, our approach employs a
latent diffusion model to learn the joint distribution of cellular layouts,
classification masks, and histology images. This model enables tailored data
generation by conditioning on user-defined parameters such as cell types,
quantities, and tissue types. Trained on the Conic H&E histopathology dataset
and the Nissl-stained CytoDArk0 dataset, the model generates realistic and
diverse labeled samples. Experimental results demonstrate improvements in cell
instance segmentation and classification, particularly for underrepresented
cell types like neutrophils in the Conic dataset. These findings underscore the
potential of our approach to address data scarcity challenges.
|
2502.08756
|
From PowerPoint UI Sketches to Web-Based Applications: Pattern-Driven
Code Generation for GIS Dashboard Development Using Knowledge-Augmented LLMs,
Context-Aware Visual Prompting, and the React Framework
|
cs.AI cs.SE
|
Developing web-based GIS applications, commonly known as CyberGIS dashboards,
for querying and visualizing GIS data in environmental research often demands
repetitive and resource-intensive efforts. While Generative AI offers
automation potential for code generation, it struggles with complex scientific
applications due to challenges in integrating domain knowledge, software
engineering principles, and UI design best practices. This paper introduces a
knowledge-augmented code generation framework that retrieves software
engineering best practices, domain expertise, and advanced technology stacks
from a specialized knowledge base to enhance Generative Pre-trained
Transformers (GPT) for front-end development. The framework automates the
creation of GIS-based web applications (e.g., dashboards, interfaces) from
user-defined UI wireframes sketched in tools like PowerPoint or Adobe
Illustrator. A novel Context-Aware Visual Prompting method, implemented in
Python, extracts layouts and interface features from these wireframes to guide
code generation. Our approach leverages Large Language Models (LLMs) to
generate front-end code by integrating structured reasoning, software
engineering principles, and domain knowledge, drawing inspiration from
Chain-of-Thought (CoT) prompting and Retrieval-Augmented Generation (RAG). A
case study demonstrates the framework's capability to generate a modular,
maintainable web platform hosting multiple dashboards for visualizing
environmental and energy data (e.g., time-series, shapefiles, rasters) from
user-sketched wireframes. By employing a knowledge-driven approach, the
framework produces scalable, industry-standard front-end code using design
patterns such as Model-View-ViewModel (MVVM) and frameworks like React. This
significantly reduces manual effort in design and coding, pioneering an
automated and efficient method for developing smart city software.
|
2502.08757
|
A Low-Complexity Plug-and-Play Deep Learning Model for Massive MIMO
Precoding Across Sites
|
eess.SP cs.LG
|
Massive multiple-input multiple-output (mMIMO) technology has transformed
wireless communication by enhancing spectral efficiency and network capacity.
This paper proposes a novel deep learning-based mMIMO precoder to tackle the
complexity challenges of existing approaches, such as weighted minimum mean
square error (WMMSE), while leveraging meta-learning domain generalization and
a teacher-student architecture to improve generalization across diverse
communication environments. When deployed to a previously unseen site, the
proposed model achieves excellent sum-rate performance while maintaining low
computational complexity by avoiding matrix inversions and by using a simpler
neural network structure. The model is trained and tested on a custom
ray-tracing dataset composed of several base station locations. The
experimental results indicate that our method effectively balances
computational efficiency with high sum-rate performance while showcasing strong
generalization performance in unseen environments. Furthermore, with
fine-tuning, the proposed model outperforms WMMSE across all tested sites and
SNR conditions while reducing complexity by at least 73$\times$.
|
2502.08758
|
Compression of Site-Specific Deep Neural Networks for Massive MIMO
Precoding
|
eess.SP cs.LG
|
The deployment of deep learning (DL) models for precoding in massive
multiple-input multiple-output (mMIMO) systems is often constrained by high
computational demands and energy consumption. In this paper, we investigate the
compute energy efficiency of mMIMO precoders using DL-based approaches,
comparing them to conventional methods such as zero forcing and weighted
minimum mean square error (WMMSE). Our energy consumption model accounts for
both memory access and calculation energy within DL accelerators. We propose a
framework that incorporates mixed-precision quantization-aware training and
neural architecture search to reduce energy usage without compromising
accuracy. Using a ray-tracing dataset covering various base station sites, we
analyze how site-specific conditions affect the energy efficiency of compressed
models. Our results show that deep neural network compression generates
precoders with up to 35 times higher energy efficiency than WMMSE at equal
performance, depending on the scenario and the desired rate. These results
establish a foundation and a benchmark for the development of energy-efficient
DL-based mMIMO precoders.
|
2502.08759
|
Contextual bandits with entropy-based human feedback
|
cs.AI
|
In recent years, preference-based human feedback mechanisms have become
essential for enhancing model performance across diverse applications,
including conversational AI systems such as ChatGPT. However, existing
approaches often neglect critical aspects, such as model uncertainty and the
variability in feedback quality. To address these challenges, we introduce an
entropy-based human feedback framework for contextual bandits, which
dynamically balances exploration and exploitation by soliciting expert feedback
only when model entropy exceeds a predefined threshold. Our method is
model-agnostic and can be seamlessly integrated with any contextual bandit
agent employing stochastic policies. Through comprehensive experiments, we show
that our approach achieves significant performance improvements while requiring
minimal human feedback, even under conditions of suboptimal feedback quality.
This work not only presents a novel strategy for feedback solicitation but also
highlights the robustness and efficacy of incorporating human guidance into
machine learning systems. Our code is publicly available:
https://github.com/BorealisAI/CBHF
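The gating rule the abstract describes, soliciting expert feedback only when the policy's entropy exceeds a threshold, can be sketched in a few lines. The function names and the defer-to-expert behavior below are illustrative assumptions, not taken from the paper's released code:

```python
import numpy as np

def policy_entropy(probs):
    """Shannon entropy (in nats) of the agent's action distribution."""
    p = np.clip(probs, 1e-12, 1.0)
    return float(-(p * np.log(p)).sum())

def act_with_feedback(probs, expert_action, threshold):
    """Follow the stochastic policy when it is confident; defer to the
    expert (i.e., solicit feedback) when entropy exceeds the threshold."""
    if policy_entropy(probs) > threshold:
        return expert_action, True   # feedback solicited
    return int(np.argmax(probs)), False
```

With a near-uniform action distribution the entropy approaches log(num_actions) and the expert is queried; a confident policy acts on its own, which is how the method keeps the amount of human feedback minimal.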
|
2502.08764
|
Demand Response Optimization MILP Framework for Microgrids with DERs
|
eess.SY cs.LG cs.SY
|
The integration of renewable energy sources in microgrids introduces
significant operational challenges due to their intermittent nature and the
mismatch between generation and demand patterns. Effective demand response (DR)
strategies are crucial for maintaining system stability and economic
efficiency, particularly in microgrids with high renewable penetration. This
paper presents a comprehensive mixed-integer linear programming (MILP)
framework for optimizing DR operations in a microgrid with solar generation and
battery storage systems. The framework incorporates load classification,
dynamic price thresholding, and multi-period coordination for optimal DR event
scheduling. Analysis across seven distinct operational scenarios demonstrates
consistent peak load reduction of 10\% while achieving energy cost savings
ranging from 13.1\% to 38.0\%. The highest performance was observed in
scenarios with high solar generation, where the framework achieved 38.0\%
energy cost reduction through optimal coordination of renewable resources and
DR actions. The results validate the framework's effectiveness in managing
diverse operational challenges while maintaining system stability and economic
efficiency.
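The binary DR-event decision at the heart of such a MILP can be illustrated at toy scale by brute force. The loads, prices, and per-load shifting penalty below are hypothetical; a real implementation would hand the same constraint structure to a MILP solver:

```python
from itertools import product

def schedule_dr(loads, peak_price, offpeak_price, shift_penalty, peak_target):
    """Pick which shiftable loads (kWh) to move off-peak so that peak-hour
    load drops by at least `peak_target` (a fraction of total load), at
    minimum cost. Brute force over the binary shift decisions; a MILP
    solver handles the same structure at realistic scale."""
    total = sum(loads)
    best = None
    for shift in product([0, 1], repeat=len(loads)):
        shifted = sum(l for l, s in zip(loads, shift) if s)
        if shifted < peak_target * total:
            continue  # peak-reduction constraint violated
        cost = ((total - shifted) * peak_price   # energy left on peak
                + shifted * offpeak_price        # shifted energy
                + shift_penalty * sum(shift))    # per-load inconvenience
        if best is None or cost < best[0]:
            best = (cost, shift)
    return best
```

The shifting penalty plays the role of the comfort/operational constraints in the full framework: without it, every load would trivially move off-peak.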
|
2502.08766
|
Unlocking Mental Health: Exploring College Students' Well-being through
Smartphone Behaviors
|
cs.CY cs.HC cs.LG cs.SE
|
The global mental health crisis is a pressing concern, with college students
particularly vulnerable to rising mental health disorders. The widespread use
of smartphones among young adults, while offering numerous benefits, has also
been linked to negative outcomes such as addiction and regret, significantly
impacting well-being. Leveraging the longest longitudinal dataset collected
over four college years through passive mobile sensing, this study is the first
to examine the relationship between students' smartphone unlocking behaviors
and their mental health at scale in real-world settings. We provide the first
evidence demonstrating the predictability of phone unlocking behaviors for
mental health outcomes based on a large dataset, highlighting the potential of
these novel features for future predictive models. Our findings reveal
important variations in smartphone usage across genders and locations, offering
a deeper understanding of the interplay between digital behaviors and mental
health. We highlight future research directions aimed at mitigating adverse
effects and promoting digital well-being in this population.
|
2502.08767
|
SelfElicit: Your Language Model Secretly Knows Where is the Relevant
Evidence
|
cs.CL cs.AI
|
Providing Language Models (LMs) with relevant evidence in the context (either
via retrieval or user-provided) can significantly improve their ability to
provide factually correct grounded responses. However, recent studies have
found that LMs often struggle to fully comprehend and utilize key evidence from
the context, especially when it contains noise and irrelevant information - an
issue common in real-world scenarios. To address this, we propose SelfElicit,
an inference-time approach that helps LMs focus on key contextual evidence
through self-guided explicit highlighting. By leveraging the inherent
evidence-finding capabilities of LMs using the attention scores of deeper
layers, our method automatically identifies and emphasizes key evidence within
the input context, facilitating more accurate and factually grounded responses
without additional training or iterative prompting. We demonstrate that
SelfElicit brings consistent and significant improvement on multiple
evidence-based QA tasks for various LM families while maintaining computational
efficiency. Our code and documentation are available at
https://github.com/ZhiningLiu1998/SelfElicit.
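The core idea, using the model's own deep-layer attention over the context to mark evidence before answering, can be caricatured in a few lines. The thresholding rule (mean plus one standard deviation) and the marker format here are illustrative assumptions, not the paper's exact method:

```python
import numpy as np

def highlight_evidence(tokens, attn, alpha=1.0):
    """Wrap context tokens whose (deep-layer) attention score is
    unusually high in explicit markers, so the LM attends to them
    when generating its answer."""
    attn = np.asarray(attn, dtype=float)
    thresh = attn.mean() + alpha * attn.std()
    return [f"<evidence>{t}</evidence>" if a > thresh else t
            for t, a in zip(tokens, attn)]
```

Because the highlighting is injected at inference time, the approach needs no additional training or iterative prompting, matching the abstract's claim of maintained computational efficiency.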
|
2502.08769
|
Cluster and Predict Latent Patches for Improved Masked Image Modeling
|
cs.CV cs.AI
|
Masked Image Modeling (MIM) offers a promising approach to self-supervised
representation learning; however, existing MIM models still lag behind the
state-of-the-art. In this paper, we systematically analyze target
representations, loss functions, and architectures, to introduce CAPI - a novel
pure-MIM framework that relies on the prediction of latent clusterings. Our
approach leverages a clustering-based loss, which is stable to train, and
exhibits promising scaling properties. Our ViT-L backbone, CAPI, achieves 83.8%
accuracy on ImageNet and 32.1% mIoU on ADE20K with simple linear probes,
substantially outperforming previous MIM methods and approaching the
performance of the current state-of-the-art, DINOv2. We release all our code
and models.
|
2502.08773
|
Universal Model Routing for Efficient LLM Inference
|
cs.CL cs.LG
|
Significant advances in the capabilities of large language models are
accompanied by substantial increases in inference costs. Model routing is a
simple
technique for reducing inference cost, wherein one maintains a pool of
candidate LLMs, and learns to route each prompt to the smallest feasible LLM.
Existing works focus on learning a router for a fixed pool of LLMs. In this
paper, we consider the problem of dynamic routing, where new, previously
unobserved LLMs are available at test time. We propose a new approach to this
problem that relies on representing each LLM as a feature vector, derived based
on predictions on a set of representative prompts. Based on this, we detail two
effective strategies, relying on cluster-based routing and a learned cluster
map respectively. We prove that these strategies are estimates of a
theoretically optimal routing rule, and provide an excess risk bound to
quantify their errors. Experiments on a range of public benchmarks show the
effectiveness of the proposed strategies in routing amongst more than 30 unseen
LLMs.
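The two ingredients, a per-model feature vector built from predictions on representative prompts and cheapest-feasible routing over prompt clusters, can be sketched as follows. The accuracy threshold and cost model are illustrative assumptions, not the paper's exact rule:

```python
import numpy as np

def feature_vector(model_fn, probe_prompts, references):
    """Represent an LLM by its correctness on a fixed probe set,
    so new models can be added at test time without retraining."""
    return np.array([float(model_fn(p) == r)
                     for p, r in zip(probe_prompts, references)])

def route(prompt_cluster, cluster_accuracy, model_costs, min_acc=0.8):
    """Pick the cheapest model whose estimated accuracy on this prompt's
    cluster clears the threshold; fall back to the most accurate one."""
    ok = [m for m in range(len(model_costs))
          if cluster_accuracy[m, prompt_cluster] >= min_acc]
    if not ok:
        return int(np.argmax(cluster_accuracy[:, prompt_cluster]))
    return min(ok, key=lambda m: model_costs[m])
```

Because an unseen LLM is summarized purely by its behavior on the probe prompts, the router never needs access to the model's weights, which is what makes the dynamic-pool setting tractable.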
|
2502.08774
|
Exploring Test Time Adaptation for Subcortical Segmentation of the Fetal
Brain in 3D Ultrasound
|
cs.CV cs.AI cs.LG
|
Monitoring the growth of subcortical regions of the fetal brain in ultrasound
(US) images can help identify the presence of abnormal development. Manually
segmenting these regions is a challenging task, but recent work has shown that
it can be automated using deep learning. However, applying pretrained models to
unseen freehand US volumes often leads to a degradation of performance due to
the vast differences in acquisition and alignment. In this work, we first
demonstrate that test time adaptation (TTA) can be used to improve model
performance in the presence of both real and simulated domain shifts. We
further propose a novel TTA method by incorporating a normative atlas as a
prior for anatomy. In the presence of various types of domain shifts, we
benchmark the performance of different TTA methods and demonstrate the
improvements brought by our proposed approach, which may further facilitate
automated monitoring of fetal brain development. Our code is available at
https://github.com/joshuaomolegan/TTA-for-3D-Fetal-Subcortical-Segmentation.
|
2502.08776
|
Treatment response as a latent variable
|
stat.ME cs.LG stat.ML
|
Scientists often need to analyze the samples in a study that responded to
treatment in order to refine their hypotheses and find potential causal drivers
of response. Natural variation in outcomes makes teasing apart responders from
non-responders a statistical inference problem. To handle latent responses, we
introduce the causal two-groups (C2G) model, a causal extension of the
classical two-groups model. The C2G model posits that treated samples may or
may not experience an effect, according to some prior probability. We propose
two empirical Bayes procedures for the causal two-groups model, one under
semi-parametric conditions and another under fully nonparametric conditions.
The semi-parametric model assumes additive treatment effects and is
identifiable from observed data. The nonparametric model is unidentifiable, but
we show it can still be used to test for response in each treated sample. We
show empirically and theoretically that both methods for selecting responders
control the false discovery rate at the target level with near-optimal power.
We also propose two novel estimands of interest and provide a strategy for
deriving estimand intervals in the unidentifiable nonparametric model. On a
cancer immunotherapy dataset, the nonparametric C2G model recovers
clinically-validated predictive biomarkers of both positive and negative
outcomes. Code is available at https://github.com/tansey-lab/causal2groups.
|
2502.08777
|
Zero-Shot Belief: A Hard Problem for LLMs
|
cs.CL
|
We present two LLM-based approaches to zero-shot source-and-target belief
prediction on FactBank: a unified system that identifies events, sources, and
belief labels in a single pass, and a hybrid approach that uses a fine-tuned
DeBERTa tagger for event detection. We show that multiple open-source,
closed-source, and reasoning-based LLMs struggle with the task. Using the
hybrid approach, we achieve new state-of-the-art results on FactBank and offer
a detailed error analysis. Our approach is then tested on the Italian belief
corpus ModaFact.
|
2502.08779
|
SB-Bench: Stereotype Bias Benchmark for Large Multimodal Models
|
cs.CV
|
Stereotype biases in Large Multimodal Models (LMMs) perpetuate harmful
societal prejudices, undermining the fairness and equity of AI applications. As
LMMs grow increasingly influential, addressing and mitigating inherent biases
related to stereotypes, harmful generations, and ambiguous assumptions in
real-world scenarios has become essential. However, existing datasets
evaluating stereotype biases in LMMs often lack diversity and rely on synthetic
images, leaving a gap in bias evaluation for real-world visual contexts. To
address this, we introduce the Stereotype Bias Benchmark (SB-bench), the most
comprehensive framework to date for assessing stereotype biases across nine
diverse categories with non-synthetic images. SB-bench rigorously evaluates
LMMs through carefully curated, visually grounded scenarios, challenging them
to reason accurately about visual stereotypes. It offers a robust evaluation
framework featuring real-world visual samples, image variations, and
multiple-choice question formats. By introducing visually grounded queries that
isolate visual biases from textual ones, SB-bench enables a precise and nuanced
assessment of a model's reasoning capabilities across varying levels of
difficulty. Through rigorous testing of state-of-the-art open-source and
closed-source LMMs, SB-bench provides a systematic approach to assessing
stereotype biases in LMMs across key social dimensions. This benchmark
represents a significant step toward fostering fairness in AI systems and
reducing harmful biases, laying the groundwork for more equitable and socially
responsible LMMs. Our code and dataset are publicly available.
|
2502.08782
|
A comparative study of different TSO-DSO coordination in the reserve
market
|
eess.SY cs.SY
|
The increasing penetration of Distributed Energy Resources (DERs) in the
distribution system has led to the emergence of a new market actor - the
aggregator. The aggregator serves as a facilitator, enabling flexibility asset
owners to access different markets. Among these, electric vehicle (EV)
aggregators are gaining attention due to their expanding use and their
potential to provide services in various markets, particularly the reserve
market. Currently, the transmission system operator (TSO) indirectly utilizes
these resources under the management of the distribution system operator
(DSO), which can negatively impact the distribution grid. Conversely,
adjustments made by the DSO can impair service provision to the TSO, since
the DSO lacks information about the TSO's usage. These factors
highlight the importance of evaluating the service provision from aggregators
under different TSO-DSO coordination schemes. This paper focuses on the
provision of flexibility by electric vehicle (EV) aggregators for balancing
services under the TSO-DSO hybrid-managed coordination scheme and compares it
with the DSO-managed scheme. The behavior of aggregators reacting to price
fluctuations and TSO requests under different coordination schemes and
simulation scenarios is thoroughly evaluated. Additionally, their impact on the
grid is analyzed through the DSO's congestion management process and validated
using data from a real part of the Dutch distribution network. Results show
that the hybrid-managed coordination scheme benefits the aggregator more than
the DSO-managed scheme, and that the EV aggregator earns more profit in winter
than in summer because more upward regulation service is needed.
|
2502.08783
|
Learning Discontinuous Galerkin Solutions to Elliptic Problems via Small
Linear Convolutional Neural Networks
|
cs.LG cs.NA math.NA
|
In recent years, there has been an increasing interest in using deep learning
and neural networks to tackle scientific problems, particularly in solving
partial differential equations (PDEs). However, many neural network-based
methods, such as physics-informed neural networks, depend on automatic
differentiation and the sampling of collocation points, which can result in a
lack of interpretability and lower accuracy compared to traditional numerical
methods. To address this issue, we propose two approaches for learning
discontinuous Galerkin solutions to PDEs using small linear convolutional
neural networks. Our first approach is supervised and depends on labeled data,
while our second approach is unsupervised and does not rely on any training
data. In both cases, our methods use substantially fewer parameters than
similar numerics-based neural networks while also demonstrating comparable
accuracy to the true and DG solutions for elliptic problems.
|
2502.08784
|
Acoustic Wave Manipulation Through Sparse Robotic Actuation
|
cs.RO cs.AI
|
Recent advancements in robotics, control, and machine learning have
facilitated progress in the challenging area of object manipulation. These
advancements include, among others, the use of deep neural networks to
represent dynamics that are partially observed by robot sensors, as well as
effective control using sparse control signals. In this work, we explore a more
general problem: the manipulation of acoustic waves, which are partially
observed by a robot capable of influencing the waves through spatially sparse
actuators. This problem holds great potential for the design of new artificial
materials, ultrasonic cutting tools, energy harvesting, and other applications.
We develop an efficient data-driven method for robot learning that is
applicable to either focusing scattered acoustic energy in a designated region
or suppressing it, depending on the desired task. The proposed method is
better in terms of solution quality and computational complexity than a
state-of-the-art learning-based method for manipulating dynamical systems
governed by partial differential equations. Furthermore, our proposed method
is competitive with a classical semi-analytical method in acoustics research on
the demonstrated tasks. We have made the project code publicly available, along
with a web page featuring video demonstrations:
https://gladisor.github.io/waves/.
|
2502.08785
|
Decision Tree Based Wrappers for Hearing Loss
|
cs.LG cs.SD
|
Audiology entities are using Machine Learning (ML) models to guide their
screening towards people at risk. Feature Engineering (FE) focuses on
optimizing data for ML models, with evolutionary methods being effective in
feature selection and construction tasks. This work aims to benchmark an
evolutionary FE wrapper, using decision-tree-based models as proxies. The
FEDORA framework is applied to a Hearing Loss (HL) dataset, reducing data
dimensionality while statistically maintaining baseline performance.
Compared to traditional methods, FEDORA demonstrates superior performance, with
a maximum balanced accuracy of 76.2%, using 57 features. The framework also
generated an individual that achieved 72.8% balanced accuracy using a single
feature.
|
2502.08786
|
MRUCT: Mixed Reality Assistance for Acupuncture Guided by Ultrasonic
Computed Tomography
|
cs.HC cs.CV cs.GR
|
Chinese acupuncture practitioners primarily depend on muscle memory and
tactile feedback to insert needles and accurately target acupuncture points, as
the current workflow lacks imaging modalities and visual aids. Consequently,
new practitioners often learn through trial and error, requiring years of
experience to become proficient and earn the trust of patients. Medical
students face similar challenges in mastering this skill. To address these
challenges, we developed an innovative system, MRUCT, that integrates
ultrasonic computed tomography (UCT) with mixed reality (MR) technology to
visualize acupuncture points in real-time. This system offers offline image
registration and real-time guidance during needle insertion, enabling
practitioners to
accurately position needles based on anatomical structures such as bones,
muscles, and auto-generated reference points, with the potential for clinical
implementation. In this paper, we outline the non-rigid registration methods
used to reconstruct anatomical structures from UCT data, as well as the key
design considerations of the MR system. We evaluated two different 3D user
interface (3DUI) designs and compared the performance of our system to
traditional workflows for both new practitioners and medical students. The
results highlight the potential of MR to enhance therapeutic medical practices
and demonstrate the effectiveness of the system we developed.
|
2502.08788
|
If Multi-Agent Debate is the Answer, What is the Question?
|
cs.CL cs.LG
|
Multi-agent debate (MAD) has emerged as a promising approach to enhance the
factual accuracy and reasoning quality of large language models (LLMs) by
engaging multiple agents in iterative discussions during inference. Despite its
potential, we argue that current MAD research suffers from critical
shortcomings in evaluation practices, including limited dataset overlap and
inconsistent baselines, raising significant concerns about generalizability.
Correspondingly, this paper presents a systematic evaluation of five
representative MAD methods across nine benchmarks using four foundational
models. Surprisingly, our findings reveal that MAD methods fail to reliably
outperform simple single-agent baselines such as Chain-of-Thought and
Self-Consistency, even when consuming additional inference-time computation.
Our analysis shows that model heterogeneity can significantly improve MAD
frameworks. We propose Heter-MAD, which enables a single LLM agent to access
the outputs of heterogeneous foundation models and boosts the performance of
current MAD frameworks. Finally, we outline potential directions for advancing
MAD, aiming to spark a broader conversation and inspire future work in this
area.
|
2502.08789
|
Delay Analysis of 5G HARQ in the Presence of Decoding and Feedback
Latencies
|
cs.IT cs.SY eess.SY math.IT
|
The growing demand for stringent quality of service (QoS) guarantees in 5G
networks requires accurate characterisation of delay performance, often
measured using Delay Violation Probability (DVP) for a given target delay.
Widely used retransmission schemes like Automatic Repeat reQuest (ARQ) and
Hybrid ARQ (HARQ) improve QoS through effective feedback, incremental
redundancy (IR), and parallel retransmission processes. However, existing works
to quantify the DVP under these retransmission schemes overlook practical
aspects such as decoding complexity, feedback delays, and the resulting need
for multiple parallel ARQ/HARQ processes that enable packet transmissions
without waiting for previous feedback, thus exploiting valuable transmission
opportunities. This work proposes a comprehensive multi-server delay model for
ARQ/HARQ that incorporates these aspects. Using a finite blocklength error
model, we derive closed-form expressions and algorithms for accurate DVP
evaluation under realistic 5G configurations aligned with 3GPP standards. Our
numerical evaluations demonstrate notable improvements in DVP accuracy over the
state-of-the-art, highlight the impact of parameter tuning and resource
allocation, and reveal how DVP affects system throughput.
|
2502.08791
|
ClipRover: Zero-shot Vision-Language Exploration and Target Discovery by
Mobile Robots
|
cs.RO
|
Vision-language navigation (VLN) has emerged as a promising paradigm,
enabling mobile robots to perform zero-shot inference and execute tasks without
specific pre-programming. However, current systems often separate map
exploration and path planning, with exploration relying on inefficient
algorithms due to limited (partially observed) environmental information. In
this paper, we present a novel navigation pipeline named ''ClipRover'' for
simultaneous exploration and target discovery in unknown environments,
leveraging the capabilities of a vision-language model named CLIP. Our approach
requires only monocular vision and operates without any prior map or knowledge
about the target. For comprehensive evaluations, we design the functional
prototype of a UGV (unmanned ground vehicle) system named ''Rover Master'', a
customized platform for general-purpose VLN tasks. We integrate and deploy the
ClipRover pipeline on Rover Master to evaluate its throughput, obstacle
avoidance capability, and trajectory performance across various real-world
scenarios. Experimental results demonstrate that ClipRover consistently
outperforms traditional map traversal algorithms and achieves performance
comparable to path-planning methods that depend on prior map and target
knowledge. Notably, ClipRover offers real-time active navigation without
requiring pre-captured candidate images or pre-built node graphs, addressing
key limitations of existing VLN pipelines.
|
2502.08792
|
Auction Design using Value Prediction with Hallucinations
|
cs.GT cs.AI
|
We investigate a Bayesian mechanism design problem where a seller seeks to
maximize revenue by selling an indivisible good to one of n buyers,
incorporating potentially unreliable predictions (signals) of buyers' private
values derived from a machine learning model. We propose a framework where
these signals are sometimes reflective of buyers' true valuations but other
times are hallucinations, which are uncorrelated with the buyers' true
valuations. Our main contribution is a characterization of the optimal auction
under this framework. Our characterization establishes a near-decomposition of
how to treat types above and below the signal. For the one buyer case, the
seller's optimal strategy is to post one of three fairly intuitive prices
depending on the signal, which we call the "ignore", "follow" and "cap"
actions.
|
2502.08794
|
Spectral Journey: How Transformers Predict the Shortest Path
|
cs.LG
|
Decoder-only transformers have led to a step-change in the capabilities of
large language models. However, opinions are mixed as to whether they really
plan or reason. One path to progress in this direction is to study the model's
behavior in a setting with carefully controlled data, then interpret the
learned representations and reverse-engineer the computation performed
internally. We study decoder-only transformer language models trained
from scratch to predict shortest paths on simple, connected and undirected
graphs. In this setting, the representations and the dynamics learned by the
model are interpretable. We present three major results: (1) Two-layer
decoder-only language models can learn to predict shortest paths on simple,
connected graphs containing up to 10 nodes. (2) Models learn a graph embedding
that is correlated with the spectral decomposition of the line graph. (3)
Following these insights, we discover a novel approximate path-finding
algorithm, Spectral Line Navigator (SLN), that finds shortest paths by
greedily selecting nodes in the spectral embedding space of the line graph.
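Result (2) can be reproduced at toy scale: build the line graph (one node per edge, two nodes connected when their edges share an endpoint) and take the low-frequency Laplacian eigenvectors as the embedding. This sketches the mathematical object the paper compares against, not the model's learned representation:

```python
import numpy as np
from itertools import combinations

def line_graph_spectral_embedding(edges, k=2):
    """Embed each edge of an undirected graph using the k lowest
    non-trivial eigenvectors of its line graph's Laplacian."""
    n = len(edges)
    A = np.zeros((n, n))
    for i, j in combinations(range(n), 2):
        if set(edges[i]) & set(edges[j]):   # edges share an endpoint
            A[i, j] = A[j, i] = 1.0
    L = np.diag(A.sum(axis=1)) - A          # combinatorial Laplacian
    vals, vecs = np.linalg.eigh(L)          # ascending eigenvalues
    return vecs[:, 1:k + 1]                 # skip the constant eigenvector
```

Nearby edges in the graph land close together in this embedding, which is what makes a greedy nearest-neighbor walk through embedding space a plausible approximate path-finder.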
|
2502.08795
|
Low-Resolution Neural Networks
|
cs.LG
|
The expanding scale of large neural network models introduces significant
challenges, driving efforts to reduce memory usage and enhance computational
efficiency. Such measures are crucial to ensure the practical implementation
and effective application of these sophisticated models across a wide array of
use cases. This study examines the impact of parameter bit precision on model
performance compared to standard 32-bit models, with a focus on multiclass
object classification in images. The models analyzed include those with fully
connected layers, convolutional layers, and transformer blocks, with model
weight resolution ranging from 1 bit to 4.08 bits. The findings indicate that
models with lower parameter bit precision achieve results comparable to 32-bit
models, showing promise for use in memory-constrained devices. While
low-resolution models with a small number of parameters require more training
epochs to achieve accuracy comparable to 32-bit models, those with a large
number of parameters achieve similar performance within the same number of
epochs. Additionally, data augmentation can destabilize training in
low-resolution models, but including zero as a potential value in the weight
parameters helps maintain stability and prevents performance degradation.
Overall, 2.32-bit weights offer the optimal balance of memory reduction,
performance, and efficiency. However, further research should explore other
dataset types and more complex and larger models. These findings suggest a
potential new era for optimized neural network models with reduced memory
requirements and improved computational efficiency, though advancements in
dedicated hardware are necessary to fully realize this potential.
|
2502.08796
|
A Systematic Review on the Evaluation of Large Language Models in Theory
of Mind Tasks
|
cs.CL cs.CY cs.HC
|
In recent years, evaluating the Theory of Mind (ToM) capabilities of large
language models (LLMs) has received significant attention within the research
community. As the field rapidly evolves, navigating the diverse approaches and
methodologies has become increasingly complex. This systematic review
synthesizes current efforts to assess LLMs' ability to perform ToM tasks, an
essential aspect of human cognition involving the attribution of mental states
to oneself and others. Despite notable advancements, the proficiency of LLMs in
ToM remains a contentious issue. By categorizing benchmarks and tasks through a
taxonomy rooted in cognitive science, this review critically examines
evaluation techniques, prompting strategies, and the inherent limitations of
LLMs in replicating human-like mental state reasoning. A recurring theme in the
literature reveals that while LLMs demonstrate emerging competence in ToM
tasks, significant gaps persist in their emulation of human cognitive
abilities.
|
2502.08803
|
Deep EEG Super-Resolution: Upsampling EEG Spatial Resolution with
Generative Adversarial Networks
|
cs.LG
|
Electroencephalography (EEG) activity contains a wealth of information about
what is happening within the human brain. Recording more of this data has the
potential to unlock endless future applications. However, the cost of EEG
hardware rises steeply with the number of EEG channels recorded
simultaneously. In this paper, we combat this problem by proposing a
novel deep EEG super-resolution (SR) approach based on Generative Adversarial
Networks (GANs). This approach can produce high spatial resolution EEG data
from low resolution samples, by generating channel-wise upsampled data to
effectively interpolate numerous missing channels, thus reducing the need for
expensive EEG equipment. We tested the performance using an EEG dataset from a
mental imagery task. Our proposed GAN model provided 10^4-fold and 10^2-fold
reductions in mean-squared error (MSE) and mean-absolute error (MAE),
respectively, over the baseline bicubic interpolation method. We further
validate our method by training a classifier on the original classification
task, which displayed minimal loss in accuracy while using the super-resolved
data. The proposed SR EEG by GAN is a promising approach to improve the spatial
resolution of low density EEG headsets.
|
2502.08806
|
CLOVER: A Test Case Generation Benchmark with Coverage, Long-Context,
and Verification
|
cs.SE cs.AI cs.LG
|
Software testing is a critical aspect of software development, yet generating
test cases remains a routine task for engineers. This paper presents a
benchmark, CLOVER, to evaluate models' capabilities in generating and
completing test cases under specific conditions. Spanning from simple assertion
completions to writing test cases that cover specific code blocks across
multiple files, these tasks are based on 12 Python repositories, analyzing 845
problems with context lengths ranging from 4k to 128k tokens. Utilizing code
testing frameworks, we propose a method to construct retrieval contexts using
coverage information. While models exhibit comparable performance with short
contexts, notable differences emerge with 16k contexts. Notably, models like
GPT-4o and Claude 3.5 can effectively leverage relevant snippets; however, all
models score below 35\% on the complex Task III, even with the oracle context
provided, underscoring the benchmark's significance and the potential for model
improvement. The benchmark is containerized for code execution across tasks,
and we will release the code, data, and construction methodologies.
|
2502.08807
|
InTAR: Inter-Task Auto-Reconfigurable Accelerator Design for High Data
Volume Variation in DNNs
|
cs.AR cs.LG
|
The rise of deep neural networks (DNNs) has driven a boom in AI services,
which results in an increased demand for computing power and memory. In modern
DNNs, the data sizes produced and consumed are highly varied across operations
(high data volume variation, HDV). Because existing design paradigms use fixed
execution patterns that lead to either low computational efficiency due to
pipeline stalls or frequent off-chip memory accesses to manage large
intermediate data, HDV applications are challenging to accelerate on FPGAs. To
address these challenges, we introduce the Inter-Task Auto-Reconfigurable
Accelerator (InTAR), a novel accelerator design for HDV applications on FPGAs.
InTAR combines the high computational efficiency of sequential execution with
the reduced off-chip memory overhead of dataflow execution. It switches
execution patterns automatically with a static schedule determined before
circuit design based on resource constraints and model parameters. Unlike
previous reconfigurable accelerators, InTAR encodes reconfiguration schedules
during circuit design, allowing model-specific optimizations that allocate only
the necessary logic and interconnects. Thus, InTAR achieves a high clock
frequency with fewer resources and low reconfiguration time. Furthermore, InTAR
supports high-level tools such as HLS for fast design generation. We implement
a set of multi-task kernels in various HDV DNNs using InTAR. Compared with
dataflow and sequential accelerators, InTAR exhibits $1.8\times$ and
$7.1\times$ speedups, respectively. We also implement InTAR for GPT-2 medium as a
more complex example, which achieves a speedup of $\mathbf{3.65 \sim
39.14\times}$ and a $\mathbf{1.72 \sim 10.44\times}$ boost in DSP efficiency
compared to the corresponding SoTA accelerators on FPGAs.
|
2502.08808
|
A First-order Generative Bilevel Optimization Framework for Diffusion
Models
|
cs.LG math.OC stat.ML
|
Diffusion models, which iteratively denoise data samples to synthesize
high-quality outputs, have achieved empirical success across domains. However,
optimizing these models for downstream tasks often involves nested bilevel
structures, such as tuning hyperparameters for fine-tuning tasks or noise
schedules in training dynamics, where traditional bilevel methods fail due to
the infinite-dimensional probability space and prohibitive sampling costs. We
formalize this challenge as a generative bilevel optimization problem and
address two key scenarios: (1) fine-tuning pre-trained models via an
inference-only lower-level solver paired with a sample-efficient gradient
estimator for the upper level, and (2) training diffusion models from scratch
with noise schedule optimization by reparameterizing the lower-level problem
and designing a computationally tractable gradient estimator. Our first-order
bilevel framework overcomes the incompatibility of conventional bilevel methods
with diffusion processes, offering theoretical grounding and computational
practicality. Experiments demonstrate that our method outperforms existing
fine-tuning and hyperparameter search baselines.
|
2502.08813
|
Measuring Anxiety Levels with Head Motion Patterns in Severe Depression
Population
|
cs.CV
|
Depression and anxiety are prevalent mental health disorders that frequently
co-occur, with anxiety significantly influencing both the manifestation and
treatment of depression. An accurate assessment of anxiety levels in
individuals with depression is crucial to develop effective and personalized
treatment plans. This study proposes a new noninvasive method for quantifying
anxiety severity by analyzing head movements -- specifically speed,
acceleration, and angular displacement -- during video-recorded interviews with
patients suffering from severe depression. Using data from the new CALYPSO
Depression Dataset, we extracted head motion characteristics and applied
regression analysis to predict clinically evaluated anxiety levels. Our results
demonstrate a high level of precision, achieving a mean absolute error (MAE) of
0.35 in predicting the severity of psychological anxiety based on head movement
patterns. This indicates that our approach can enhance the understanding of
anxiety's role in depression and assist psychiatrists in refining treatment
strategies for individuals.
|
2502.08818
|
Lexical Manifold Reconfiguration in Large Language Models: A Novel
Architectural Approach for Contextual Modulation
|
cs.CL
|
Contextual adaptation in token embeddings plays a central role in determining
how well language models maintain coherence and retain semantic relationships
over extended text sequences. Static embeddings often impose constraints on
lexical flexibility, leading to suboptimal performance when faced with complex
sentence structures or domain-specific terminology shifts. To address this
limitation, a structured approach was developed for dynamically reconfiguring
token embeddings through continuous geometric transformations, ensuring that
representations evolve in response to shifting discourse structures. A
manifold-based transformation mechanism was integrated to regulate lexical
positioning, allowing embeddings to undergo controlled shifts while preserving
linguistic relationships across varying textual contexts. Empirical evaluations
demonstrated that embedding reconfiguration contributed to reductions in
perplexity, improved lexical coherence, and enhanced sentence-level continuity,
particularly in structured and domain-adaptive text generation tasks.
Comparative analyses of embedding drift indicated that dynamically restructured
representations maintained stronger contextual consistency, reducing
misalignment in token dependencies while preserving fluency in language
modeling outputs. Computational overhead assessments confirmed that while
training complexity increased due to the iterative refinement of embeddings,
inference remained efficient, ensuring practical feasibility for real-time
generation. Evaluations across multiple datasets further demonstrated that
dynamically modulated embeddings exhibited broader lexical diversity, reducing
repetitive token patterns and enabling a more adaptable representation learning
process.
|
2502.08820
|
Can a Single Model Master Both Multi-turn Conversations and Tool Use?
CoALM: A Unified Conversational Agentic Language Model
|
cs.AI cs.CL
|
Large Language Models (LLMs) with API-calling capabilities enabled building
effective Language Agents (LA), while also revolutionizing the conventional
task-oriented dialogue (TOD) paradigm. However, current approaches face a
critical dilemma: TOD systems are often trained on a limited set of target
APIs, requiring new data to maintain their quality when interfacing with new
services, while LAs are not trained to maintain user intent over multi-turn
conversations. Because both robust multi-turn management and advanced function
calling are crucial for effective conversational agents, we evaluate these
skills on three popular benchmarks: MultiWOZ 2.4 (TOD), BFCL V3 (LA), and
API-Bank (LA), and our analyses reveal that specialized approaches excel in one
domain but underperform in the other. To bridge this chasm, we introduce CoALM
(Conversational Agentic Language Model), a unified approach that integrates
both conversational and agentic capabilities. We created CoALM-IT, a carefully
constructed multi-task dataset that interleaves multi-turn ReAct reasoning with
complex API usage. Using CoALM-IT, we train three models, CoALM 8B, CoALM 70B,
and CoALM 405B, which outperform top domain-specific models, including GPT-4o,
across all three benchmarks. This demonstrates the feasibility of a single
model approach for both TOD and LA, setting a new standard for conversational
agents.
|
2502.08821
|
DejAIvu: Identifying and Explaining AI Art on the Web in Real-Time with
Saliency Maps
|
cs.CV cs.AI cs.LG
|
The recent surge in advanced generative models, such as diffusion models and
generative adversarial networks (GANs), has led to an alarming rise in
AI-generated images across various domains on the web. While such technologies
offer benefits such as democratizing artistic creation, they also pose
challenges in misinformation, digital forgery, and authenticity verification.
Additionally, the uncredited use of AI-generated images in media and marketing
has sparked significant backlash from online communities. In response to this,
we introduce DejAIvu, a Chrome Web extension that combines real-time
AI-generated image detection with saliency-based explainability while users
browse the web. Using an ONNX-optimized deep learning model, DejAIvu
automatically analyzes images on websites such as Google Images, identifies
AI-generated content using model inference, and overlays a saliency heatmap to
highlight AI-related artifacts. Our approach integrates efficient in-browser
inference, gradient-based saliency analysis, and a seamless user experience,
ensuring that AI detection is both transparent and interpretable. We also
evaluate DejAIvu across multiple pretrained architectures and benchmark
datasets, demonstrating high accuracy and low latency, making it a practical
and deployable tool for enhancing AI image accountability. The code for this
system can be found at https://github.com/Noodulz/dejAIvu.
|
2502.08822
|
CSMAE: Cataract Surgical Masked Autoencoder (MAE) based Pre-training
|
cs.CV
|
Automated analysis of surgical videos is crucial for improving surgical
training, workflow optimization, and postoperative assessment. We introduce
CSMAE, a Masked Autoencoder (MAE)-based pretraining approach developed
specifically for cataract surgery video analysis, in which tokens are selected
for masking based on their spatiotemporal importance rather than at random. We
created a large dataset of cataract surgery videos
to improve the model's learning efficiency and expand its robustness in
low-data regimes. Our pre-trained model can be easily adapted for specific
downstream tasks via fine-tuning, serving as a robust backbone for further
analysis. Through rigorous testing on a downstream step-recognition task on two
Cataract Surgery video datasets, D99 and Cataract-101, our approach surpasses
current state-of-the-art self-supervised pretraining and adapter-based transfer
learning methods by a significant margin. This advancement not only
demonstrates the potential of our MAE-based pretraining in the field of
surgical video analysis but also sets a new benchmark for future research.
|
2502.08825
|
Examining and Adapting Time for Multilingual Classification via Mixture
of Temporal Experts
|
cs.CL
|
Time is implicitly embedded in the classification process: classifiers are
usually built on existing data but applied to future data, whose distributions
(e.g., label and token) may change. However, existing state-of-the-art
classification models rarely consider temporal variation and primarily focus on
English corpora, leaving temporal studies underexplored, let alone in
multilingual settings. In this study, we fill the gap
by treating time as domains (e.g., 2024 vs. 2025), examining temporal effects,
and developing a domain adaptation framework to generalize classifiers over
time on multiple languages. Our framework proposes Mixture of Temporal Experts
(MoTE) to leverage both semantic and data distributional shifts to learn and
adapt temporal trends into classification models. Our analysis shows
classification performance varies over time across different languages, and we
experimentally demonstrate that MoTE can enhance classifier generalizability
over temporal data shifts. Our study provides analytic insights and addresses
the need for time-aware models that perform robustly in multilingual scenarios.
|
2502.08826
|
Ask in Any Modality: A Comprehensive Survey on Multimodal
Retrieval-Augmented Generation
|
cs.CL cs.AI cs.IR
|
Large Language Models (LLMs) struggle with hallucinations and outdated
knowledge due to their reliance on static training data. Retrieval-Augmented
Generation (RAG) mitigates these issues by integrating dynamic external
information, thereby enhancing factual and up-to-date grounding. Recent advances in
multimodal learning have led to the development of Multimodal RAG,
incorporating multiple modalities such as text, images, audio, and video to
enhance the generated outputs. However, cross-modal alignment and reasoning
introduce unique challenges to Multimodal RAG, distinguishing it from
traditional unimodal RAG. This survey offers a structured and comprehensive
analysis of Multimodal RAG systems, covering datasets, metrics, benchmarks,
evaluation, methodologies, and innovations in retrieval, fusion, augmentation,
and generation. We precisely review training strategies, robustness
enhancements, and loss functions, while also exploring the diverse Multimodal
RAG scenarios. Furthermore, we discuss open challenges and future research
directions to support advancements in this evolving field. This survey lays the
foundation for developing more capable and reliable AI systems that effectively
leverage multimodal dynamic external knowledge bases. Resources are available
at https://github.com/llm-lab-org/Multimodal-RAG-Survey.
|
2502.08828
|
A Survey on Data-Centric AI: Tabular Learning from Reinforcement
Learning and Generative AI Perspective
|
cs.LG cs.AI
|
Tabular data is one of the most widely used data formats across various
domains such as bioinformatics, healthcare, and marketing. As artificial
intelligence moves towards a data-centric perspective, improving data quality
is essential for enhancing model performance in tabular data-driven
applications. This survey focuses on data-driven tabular data optimization,
specifically exploring reinforcement learning (RL) and generative approaches
for feature selection and feature generation as fundamental techniques for
refining data spaces. Feature selection aims to identify and retain the most
informative attributes, while feature generation constructs new features to
better capture complex data patterns. We systematically review existing
generative methods for tabular data engineering, analyzing their latest
advancements, real-world applications, and respective strengths and
limitations. This survey emphasizes how RL-based and generative techniques
contribute to the automation and intelligence of feature engineering. Finally,
we summarize the existing challenges and discuss future research directions,
aiming to provide insights that drive continued innovation in this field.
|
2502.08829
|
PLayer-FL: A Principled Approach to Personalized Layer-wise Cross-Silo
Federated Learning
|
cs.LG
|
Non-identically distributed data is a major challenge in Federated Learning
(FL). Personalized FL tackles this by balancing local model adaptation with
global model consistency. One variant, partial FL, federates only the early
layers, leveraging the observation that they learn more transferable features.
However, current partial FL approaches use predetermined,
architecture-specific rules to select layers, limiting their applicability. We
introduce Principled Layer-wise-FL (PLayer-FL), which uses a novel federation
sensitivity metric to identify layers that benefit from federation. This
metric, inspired by model pruning, quantifies each layer's contribution to
cross-client generalization after the first training epoch, identifying a
transition point in the network where the benefits of federation diminish. We
first demonstrate that our federation sensitivity metric shows strong
correlation with established generalization measures across diverse
architectures. Next, we show that PLayer-FL outperforms existing FL algorithms
on a range of tasks, also achieving more uniform performance improvements
across clients.
|
2502.08832
|
LSM Trees in Adversarial Environments
|
cs.DB cs.CR
|
The Log Structured Merge (LSM) Tree is a popular choice for key-value stores
that focus on optimized write throughput while maintaining performant,
production-ready read latencies. To optimize read performance, LSM stores rely
on a probabilistic data structure called the Bloom Filter (BF). In this paper,
we focus on adversarial workloads that lead to a sharp degradation in read
performance by impacting the accuracy of BFs used within the LSM store. Our
evaluation shows up to $800\%$ increase in the read latency of lookups for
popular LSM stores. We define adversarial models and security definitions for
LSM stores. We implement adversary resilience into two popular LSM stores,
LevelDB and RocksDB. We use our implementations to demonstrate how performance
degradation under adversarial workloads can be mitigated.
|
2502.08834
|
A Reversible Solver for Diffusion SDEs
|
cs.LG cs.AI stat.ML
|
Diffusion models have quickly become the state-of-the-art for generation
tasks across many different data modalities. An important capability of
diffusion models is encoding samples from the data distribution back into the
sampling prior distribution. This is useful for performing alterations to
real data samples along with guided generation via the continuous adjoint
equations. We propose an algebraically reversible solver for diffusion SDEs
that can exactly invert real data samples into the prior distribution.
|
2502.08835
|
A Bundle-based Augmented Lagrangian Framework: Algorithm, Convergence,
and Primal-dual Principles
|
math.OC cs.SY eess.SY
|
We propose a new bundle-based augmented Lagrangian framework for solving
constrained convex problems. Unlike the classical (inexact) augmented
Lagrangian method (ALM) that has a nested double-loop structure, our framework
features a $\textit{single-loop}$ process. Motivated by the proximal bundle
method (PBM), we use a $\textit{bundle}$ of past iterates to approximate the
subproblem in ALM to get a computationally efficient update at each iteration.
We establish sub-linear convergence rates for primal feasibility, primal cost
values, and dual iterates under mild assumptions. With further regularity
conditions, such as quadratic growth, our algorithm enjoys $\textit{linear}$
convergence. Importantly, this linear convergence can happen for a class of
conic optimization problems, including semidefinite programs. Our proof
techniques leverage deep connections with inexact ALM and primal-dual
principles with PBM.
|
2502.08836
|
Survey on Single-Image Reflection Removal using Deep Learning Techniques
|
cs.CV
|
The phenomenon of reflection is quite common in digital images, posing
significant challenges for various applications such as computer vision,
photography, and image processing. Traditional methods for reflection removal
often struggle to achieve clean results while maintaining high fidelity and
robustness, particularly in real-world scenarios. Over the past few decades,
numerous deep learning-based approaches for reflection removal have emerged,
yielding impressive results. In this survey, we conduct a comprehensive review
of the current literature by focusing on key venues such as ICCV, ECCV, CVPR,
NeurIPS, etc., as these conferences and journals have been central to advances
in the field. Our review follows a structured paper selection process, and we
critically assess both single-stage and two-stage deep learning methods for
reflection removal. The contribution of this survey is three-fold: first, we
provide a comprehensive summary of the most recent work on single-image
reflection removal; second, we outline task hypotheses, current deep learning
techniques, publicly available datasets, and relevant evaluation metrics; and
third, we identify key challenges and opportunities in deep learning-based
reflection removal, highlighting the potential of this rapidly evolving
research area.
|
2502.08840
|
Thresholds for Reconstruction of Random Hypergraphs From Graph
Projections
|
math.ST cs.IT math.IT math.PR stat.TH
|
The graph projection of a hypergraph is a simple graph with the same vertex
set and with an edge between each pair of vertices that appear in a hyperedge.
We consider the problem of reconstructing a random $d$-uniform hypergraph from
its projection. Feasibility of this task depends on $d$ and the density of
hyperedges in the random hypergraph. For $d=3$ we precisely determine the
threshold, while for $d\geq 4$ we give bounds. All of our feasibility results
are obtained by exhibiting an efficient algorithm for reconstructing the
original hypergraph, while infeasibility is information-theoretic.
Our results also apply to mildly inhomogeneous random hypergraphs, including
hypergraph stochastic block models (HSBM). A consequence of our results is an
optimal HSBM recovery algorithm, improving on a result of Gaudio and Joshi in
2023.
|
2502.08841
|
Delayed takedown of illegal content on social media makes moderation
ineffective
|
cs.SI cs.CY
|
Social media platforms face legal and regulatory demands to swiftly remove
illegal content, sometimes under strict takedown deadlines. However, the
effects of moderation speed and the impact of takedown deadlines remain
underexplored. This study models the relationship between the timeliness of
illegal content removal and its prevalence, reach, and exposure on social
media. By simulating illegal content diffusion using empirical data from the
DSA Transparency Database, we demonstrate that rapid takedown (within hours)
significantly reduces illegal content prevalence and exposure, while longer
delays decrease the effectiveness of moderation efforts. While these findings
support tight takedown deadlines for content removal, such deadlines cannot
address the delay in identifying the illegal content and can adversely affect
the quality of content moderation.
|
2502.08844
|
MuJoCo Playground
|
cs.RO
|
We introduce MuJoCo Playground, a fully open-source framework for robot
learning built with MJX, with the express goal of streamlining simulation,
training, and sim-to-real transfer onto robots. With a simple "pip install
playground", researchers can train policies in minutes on a single GPU.
Playground supports diverse robotic platforms, including quadrupeds, humanoids,
dexterous hands, and robotic arms, enabling zero-shot sim-to-real transfer from
both state and pixel inputs. This is achieved through an integrated stack
comprising a physics engine, batch renderer, and training environments. Along
with video results, the entire framework is freely available at
playground.mujoco.org
|
2502.08845
|
Optimal Dataset Size for Recommender Systems: Evaluating Algorithms'
Performance via Downsampling
|
cs.IR
|
This thesis investigates dataset downsampling as a strategy to optimize
energy efficiency in recommender systems while maintaining competitive
performance. With increasing dataset sizes posing computational and
environmental challenges, this study explores the trade-offs between energy
efficiency and recommendation quality in Green Recommender Systems, which aim
to reduce environmental impact. By applying two downsampling approaches to
seven datasets, 12 algorithms, and two levels of core pruning, the research
demonstrates significant reductions in runtime and carbon emissions. For
example, a 30% downsampling portion can reduce runtime by 52% compared to the
full dataset, leading to a carbon emission reduction of up to 51.02 kgCO2e
during the training of a single algorithm on a single dataset. The analysis
reveals that algorithm performance under different downsampling portions
depends on factors like dataset characteristics, algorithm complexity, and the
specific downsampling configuration (scenario dependent). Some algorithms,
which showed lower nDCG@10 scores compared to higher-performing ones, exhibited
lower sensitivity to the amount of training data, offering greater potential
for efficiency in lower downsampling portions. On average, these algorithms
retained 81% of full-size performance using only 50% of the training set. In
certain downsampling configurations, where more users were progressively
included while keeping the test set size fixed, they even showed higher nDCG@10
scores than when using the full dataset. These findings highlight the
feasibility of balancing sustainability and effectiveness, providing insights
for designing energy-efficient recommender systems and promoting sustainable AI
practices.
|
2502.08856
|
A Systematic Evaluation of Generative Models on Tabular Transportation
Data
|
cs.LG
|
The sharing of large-scale transportation data is beneficial for
transportation planning and policymaking. However, it also raises significant
security and privacy concerns, as the data may include identifiable personal
information, such as individuals' home locations. To address these concerns,
synthetic data generation based on real transportation data offers a promising
solution that allows privacy protection while potentially preserving data
utility. Although there are various synthetic data generation techniques, they
are often not tailored to the unique characteristics of transportation data,
such as the inherent structure of transportation networks formed by all trips
in the datasets. In this paper, we use New York City taxi data as a case study
to conduct a systematic evaluation of the performance of widely used tabular
data generative models. In addition to traditional metrics such as distribution
similarity, coverage, and privacy preservation, we propose a novel graph-based
metric tailored specifically for transportation data. This metric evaluates the
similarity between real and synthetic transportation networks, providing
potentially deeper insights into their structural and functional alignment. We
also introduce an improved privacy metric to address the limitations of the
commonly used one. Our experimental results reveal that existing tabular data
generative models often fail to perform as consistently as claimed in the
literature, particularly when applied to transportation data use cases.
Furthermore, our novel graph metric reveals a significant gap between synthetic
and real data. This work underscores the potential need to develop generative
models specifically tailored to take advantage of the unique characteristics of
emerging domains, such as transportation.
|
2502.08858
|
Estimating Probabilities of Causation with Machine Learning Models
|
cs.AI
|
Probabilities of causation play a crucial role in modern decision-making.
This paper addresses the challenge of predicting probabilities of causation for
subpopulations with insufficient data using machine learning models. Tian and
Pearl first defined and derived tight bounds for three fundamental
probabilities of causation: the probability of necessity and sufficiency (PNS),
the probability of sufficiency (PS), and the probability of necessity (PN).
However, estimating these probabilities requires both experimental and
observational distributions specific to each subpopulation, which are often
unavailable or impractical to obtain with limited population-level data. We
assume that the probabilities of causation for each subpopulation are
determined by its characteristics. To estimate these probabilities for
subpopulations with insufficient data, we propose using machine learning models
that draw insights from subpopulations with sufficient data. Our evaluation of
multiple machine learning models indicates that, given sufficient
population-level data and an appropriate choice of machine learning model and
activation function, PNS can be effectively predicted. Through simulation
studies, we show that our multilayer perceptron (MLP) model with the Mish
activation function achieves a mean absolute error (MAE) of approximately 0.02
in predicting PNS for 32,768 subpopulations using data from around 2,000
subpopulations.
|
2502.08859
|
EnigmaEval: A Benchmark of Long Multimodal Reasoning Challenges
|
cs.AI cs.CL
|
As language models master existing reasoning benchmarks, we need new
challenges to evaluate their cognitive frontiers. Puzzle-solving events are
rich repositories of challenging multimodal problems that test a wide range of
advanced reasoning and knowledge capabilities, making them a unique testbed for
evaluating frontier language models. We introduce EnigmaEval, a dataset of
problems and solutions derived from puzzle competitions and events that probes
models' ability to perform implicit knowledge synthesis and multi-step
deductive reasoning. Unlike existing reasoning and knowledge benchmarks, puzzle
solving challenges models to discover hidden connections between seemingly
unrelated pieces of information to uncover solution paths. The benchmark
comprises 1184 puzzles of varying complexity -- each typically requiring teams
of skilled solvers hours to days to complete -- with unambiguous, verifiable
solutions that enable efficient evaluation. State-of-the-art language models
achieve extremely low accuracy on these puzzles, even lower than on other
difficult benchmarks such as Humanity's Last Exam, unveiling models'
shortcomings when challenged with problems requiring unstructured and lateral
reasoning.
|
2502.08860
|
Brain in the Dark: Design Principles for Neuromimetic Inference under
the Free Energy Principle
|
cs.NE
|
Deep learning has revolutionised artificial intelligence (AI) by enabling
automatic feature extraction and function approximation from raw data. However,
it faces challenges such as a lack of out-of-distribution generalisation,
catastrophic forgetting and poor interpretability. In contrast, biological
neural networks, such as those in the human brain, do not suffer from these
issues, inspiring AI researchers to explore neuromimetic deep learning, which
aims to replicate brain mechanisms within AI models. A foundational theory for
this approach is the Free Energy Principle (FEP), which, despite its potential,
is often considered too complex to understand and implement in AI as it
requires an interdisciplinary understanding across a variety of fields. This
paper seeks to demystify the FEP and provide a comprehensive framework for
designing neuromimetic models with human-like perception capabilities. We
present a roadmap for implementing these models and a PyTorch code repository
for applying FEP in a predictive coding network.
|
2502.08864
|
Off-Switching Not Guaranteed
|
cs.AI
|
Hadfield-Menell et al. (2017) propose the Off-Switch Game, a model of
Human-AI cooperation in which AI agents always defer to humans because they are
uncertain about our preferences. I explain two reasons why AI agents might not
defer. First, AI agents might not value learning. Second, even if AI agents
value learning, they might not be certain to learn our actual preferences.
|
2502.08866
|
BrainWavLM: Fine-tuning Speech Representations with Brain Responses to
Language
|
cs.CL
|
Speech encoding models use auditory representations to predict how the human
brain responds to spoken language stimuli. The most performant encoding models
linearly map the hidden states of artificial neural networks to brain data, but
this linear restriction may limit their effectiveness. In this work, we use
low-rank adaptation (LoRA) to fine-tune a WavLM-based encoding model end-to-end
on a brain encoding objective, producing a model we name BrainWavLM. We show
that fine-tuning across all of cortex improves average encoding performance
with greater stability than without LoRA. This improvement comes at the expense
of low-level regions like auditory cortex (AC), but selectively fine-tuning on
these areas improves performance in AC, while largely retaining gains made in
the rest of cortex. Fine-tuned models generalized across subjects, indicating
that they learned robust brain-like representations of the speech stimuli.
Finally, by training linear probes, we showed that the brain data strengthened
semantic representations in the speech model without any explicit annotations.
Our results demonstrate that brain fine-tuning produces best-in-class speech
encoding models, and that non-linear methods have the potential to bridge the
gap between artificial and biological representations of semantics.
|
2502.08869
|
Harnessing Vision Models for Time Series Analysis: A Survey
|
cs.LG cs.AI cs.CV
|
Time series analysis has evolved from traditional autoregressive models and
deep learning models to recent Transformers and Large
Language Models (LLMs). Efforts in leveraging vision models for time series
analysis have also been made along the way but are less visible to the
community due to the predominant research on sequence modeling in this domain.
However, the discrepancy between continuous time series and the discrete token
space of LLMs, and the challenges in explicitly modeling the correlations of
variates in multivariate time series have shifted some research attention to
the equally successful Large Vision Models (LVMs) and Vision Language Models
(VLMs). To fill this gap in the existing literature, this survey discusses the
advantages of vision models over LLMs in time series analysis. It provides a
comprehensive and in-depth overview of the existing methods, with a detailed
dual-view taxonomy that answers the key research questions of how to
encode time series as images and how to model the imaged time series for
various tasks. Additionally, we address the challenges in the pre- and
post-processing steps involved in this framework and outline future directions
to further advance time series analysis with vision models.
|