| id | title | categories | abstract |
|---|---|---|---|
2501.12400
|
Interpolation for Data Augmentation: Application to Weed Management in
Sugarcane Crops in Réunion
|
q-bio.QM cs.LG stat.AP stat.ME stat.ML
|
Data augmentation is a crucial step in the development of robust supervised
learning models, especially when dealing with limited datasets. This study
explores interpolation techniques for the augmentation of geo-referenced data,
with the aim of predicting the presence of Commelina benghalensis L. in
sugarcane plots in La Réunion. Given the spatial nature of the data and the
high cost of data collection, we evaluated two interpolation approaches:
Gaussian processes (GPs) with different kernels and kriging with various
variograms. The objectives of this work are threefold: (i) to identify which
interpolation methods offer the best predictive performance for various
regression algorithms, (ii) to analyze the evolution of performance as a
function of the number of observations added, and (iii) to assess the spatial
consistency of augmented datasets. The results show that GP-based methods, in
particular with combined kernels (GP-COMB), significantly improve the
performance of regression algorithms while requiring less additional data.
Although kriging shows slightly lower performance, it is distinguished by a
more homogeneous spatial coverage, a potential advantage in certain contexts.
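The core interpolation step can be sketched as plain GP regression. This is a minimal illustration, not the paper's implementation: it uses a single RBF kernel (the paper's GP-COMB combines several kernels), and the plot coordinates and presence scores are toy values.

```python
# Minimal GP interpolation sketch for augmenting geo-referenced data.
import numpy as np

def rbf_kernel(A, B, lengthscale=1.0, variance=1.0):
    """Squared-exponential kernel between coordinate sets A (n,d) and B (m,d)."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * d2 / lengthscale**2)

def gp_interpolate(X_obs, y_obs, X_new, noise=1e-6):
    """Posterior mean of a zero-mean GP evaluated at new locations X_new."""
    K = rbf_kernel(X_obs, X_obs) + noise * np.eye(len(X_obs))
    K_star = rbf_kernel(X_new, X_obs)
    alpha = np.linalg.solve(K, y_obs)   # K^{-1} y
    return K_star @ alpha

# Hypothetical observed plot coordinates and weed-presence scores
X_obs = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
y_obs = np.array([0.0, 1.0, 1.0, 0.0])
X_new = np.array([[0.5, 0.5]])          # location to augment
y_aug = gp_interpolate(X_obs, y_obs, X_new)
```

The interpolated value at `X_new` becomes an additional pseudo-observation for the downstream regression algorithms.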
|
2501.12405
|
Scopes of Alignment
|
cs.CY cs.AI cs.CL
|
Much of the research on AI alignment seeks to align large language
models and other foundation models to the context-less and generic values of
helpfulness, harmlessness, and honesty. Frontier model providers also strive to
align their models with these values. In this paper, we motivate why we need to
move beyond such a limited conception and propose three dimensions for doing
so. The first scope of alignment is competence: knowledge, skills, or behaviors
the model must possess to be useful for its intended purpose. The second scope
of alignment is transience: either semantic or episodic depending on the
context of use. The third scope of alignment is audience: either mass, public,
small-group, or dyadic. At the end of the paper, we use the proposed framework
to position some technologies and workflows that go beyond prevailing notions
of alignment.
|
2501.12407
|
The Streaming Batch Model for Efficient and Fault-Tolerant Heterogeneous
Execution
|
cs.DC cs.LG
|
While ML model training and inference are both GPU-intensive, CPU-based data
processing is often the bottleneck. Distributed data processing systems based
on the batch or stream processing models assume homogeneous resource
requirements. They excel at CPU-based computation but either under-utilize
heterogeneous resources or impose high overheads on failure and
reconfiguration. We introduce the streaming batch model, a hybrid of the two
models that enables efficient and fault-tolerant heterogeneous execution. The
key idea is to execute one partition at a time to allow lineage-based recovery
with dynamic resource allocation. This enables memory-efficient pipelining
across heterogeneous resources, similar to stream processing, but also offers
the elasticity and fault tolerance properties of batch processing. We present
Ray Data, an implementation of the streaming batch model that improves
throughput on heterogeneous batch inference pipelines by 3--8$\times$ compared
to traditional batch and stream processing systems. When training Stable
Diffusion, Ray Data matches the throughput of single-node ML data loaders while
additionally leveraging distributed heterogeneous clusters to further improve
training throughput by 31%.
|
2501.12408
|
Control-ITRA: Controlling the Behavior of a Driving Model
|
cs.AI cs.LG cs.RO cs.SY eess.SY stat.ML
|
Simulating realistic driving behavior is crucial for developing and testing
autonomous systems in complex traffic environments. Equally important is the
ability to control the behavior of simulated agents to tailor scenarios to
specific research needs and safety considerations. This paper extends the
general-purpose multi-agent driving behavior model ITRA (Scibior et al., 2021),
by introducing a method called Control-ITRA to influence agent behavior through
waypoint assignment and target speed modulation. By conditioning agents on
these two aspects, we provide a mechanism for them to adhere to specific
trajectories and indirectly adjust their aggressiveness. We compare different
approaches for integrating these conditions during training and demonstrate
that our method can generate controllable, infraction-free trajectories while
preserving realism in both seen and unseen locations.
|
2501.12415
|
Comparative Analysis of Hand-Crafted and Machine-Driven
Histopathological Features for Prostate Cancer Classification and
Segmentation
|
eess.IV cs.CV cs.LG q-bio.QM
|
Histopathological image analysis is a reliable method for prostate cancer
identification. In this paper, we present a comparative analysis of two
approaches for segmenting glandular structures in prostate images to automate
Gleason grading. The first approach utilizes a hand-crafted learning technique,
combining Gray Level Co-Occurrence Matrix (GLCM) and Local Binary Pattern (LBP)
texture descriptors to highlight spatial dependencies and minimize information
loss at the pixel level. For machine-driven feature extraction, we employ a
U-Net convolutional neural network to perform semantic segmentation of prostate
gland stroma tissue. Support vector machine-based learning of hand-crafted
features achieves impressive classification accuracies of 99.0% and 95.1% for
GLCM and LBP, respectively, while the U-Net-based machine-driven features
attain 94% accuracy. Furthermore, a comparative analysis demonstrates superior
segmentation quality for histopathological grades 1, 2, 3, and 4 using the
U-Net approach, as assessed by Jaccard and Dice metrics. This work underscores
the utility of machine-driven features in clinical applications that rely on
automated pixel-level segmentation in prostate tissue images.
|
2501.12417
|
Egoistic MDS-based Rigid Body Localization
|
cs.RO eess.SP
|
We consider a novel anchorless rigid body localization (RBL) approach
suitable for application in autonomous driving (AD), insofar as the algorithm
enables a rigid body to egoistically detect the location (relative
translation) and orientation (relative rotation) of another body, without
knowledge of the latter's shape, based only on a set of measurements of the
distances between sensors of one vehicle and the other. A key point of the
proposed method is that the translation vector between the two bodies is
modeled using the double-centering operator from multidimensional scaling
(MDS) theory, enabling the method to be used between rigid bodies regardless
of their shapes, in contrast to conventional approaches which require both
bodies to have the same shape. Simulation results illustrate the good
performance of the proposed
technique in terms of root mean square error (RMSE) of the estimates in
different setups.
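The double-centering operator mentioned above is standard in classical MDS: applied to a matrix of squared pairwise distances, it recovers a centered Gram matrix whose eigendecomposition yields relative coordinates. A minimal sketch with hypothetical sensor positions:

```python
# Double-centering operator from classical MDS theory.
import numpy as np

def double_center(D2):
    """B = -1/2 * J @ D2 @ J, where J is the centering matrix."""
    n = D2.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    return -0.5 * J @ D2 @ J

# Toy sensor layout: three collinear points at 0, 1, and 3
X = np.array([[0.0], [1.0], [3.0]])
D2 = (X - X.T) ** 2          # squared pairwise distances
B = double_center(D2)        # centered Gram matrix
# An eigendecomposition of B recovers the sensor geometry
# up to rotation and translation.
```

The identity `B = Xc @ Xc.T` (with `Xc` the centered coordinates) is what lets the method work from distances alone, with no assumption on body shape.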
|
2501.12418
|
ImageRef-VL: Enabling Contextual Image Referencing in Vision-Language
Models
|
cs.CV cs.AI
|
Vision-Language Models (VLMs) have demonstrated remarkable capabilities in
understanding multimodal inputs and have been widely integrated into
Retrieval-Augmented Generation (RAG) based conversational systems. While
current VLM-powered chatbots can provide textual source references in their
responses, they exhibit significant limitations in referencing contextually
relevant images during conversations. In this paper, we introduce Contextual
Image Reference -- the ability to appropriately reference relevant images from
retrieval documents based on conversation context -- and systematically
investigate VLMs' capability in this aspect. We conduct the first evaluation
for contextual image referencing, comprising a dedicated testing dataset and
evaluation metrics. Furthermore, we propose ImageRef-VL, a method that
significantly enhances open-source VLMs' image referencing capabilities through
instruction fine-tuning on a large-scale, manually curated multimodal
conversation dataset. Experimental results demonstrate that ImageRef-VL not
only outperforms proprietary models but also achieves an 88% performance
improvement over state-of-the-art open-source VLMs in contextual image
referencing tasks. Our code is available at
https://github.com/bytedance/ImageRef-VL.
|
2501.12419
|
Ensemble score filter with image inpainting for data assimilation in
tracking surface quasi-geostrophic dynamics with partial observations
|
physics.ao-ph cs.LG physics.data-an physics.flu-dyn stat.ML
|
Data assimilation plays a pivotal role in understanding and predicting
turbulent systems within geoscience and weather forecasting, where data
assimilation is used to address three fundamental challenges, i.e.,
high-dimensionality, nonlinearity, and partial observations. Recent advances in
machine learning (ML)-based data assimilation methods have demonstrated
encouraging results. In this work, we develop an ensemble score filter (EnSF)
that integrates image inpainting to solve the data assimilation problems with
partial observations. The EnSF method exploits a specially designed,
training-free diffusion model to solve high-dimensional nonlinear data
assimilation problems. Its performance has been successfully demonstrated in
the context of having full observations, i.e., all the state variables are
directly or indirectly observed. However, because the EnSF does not use a
covariance matrix to capture the dependence between the observed and unobserved
state variables, it is nontrivial to extend the original EnSF method to the
partial observation scenario. In this work, we incorporate various image
inpainting techniques into the EnSF to predict the unobserved states during
data assimilation. At each filtering step, we first use the diffusion model to
estimate the observed states by integrating the likelihood information into the
score function. Then, we use image inpainting methods to predict the unobserved
state variables. We demonstrate the performance of the EnSF with inpainting by
tracking the Surface Quasi-Geostrophic (SQG) model dynamics under a variety of
scenarios. The successful proof of concept paves the way to more in-depth
investigations on exploiting modern image inpainting techniques to advance data
assimilation methodology for practical geoscience and weather forecasting
problems.
|
2501.12420
|
Consolidating TinyML Lifecycle with Large Language Models: Reality,
Illusion, or Opportunity?
|
cs.SE cs.AI cs.LG
|
The evolving requirements of Internet of Things (IoT) applications are
driving an increasing shift toward bringing intelligence to the edge, enabling
real-time insights and decision-making within resource-constrained
environments. Tiny Machine Learning (TinyML) has emerged as a key enabler of
this evolution, facilitating the deployment of ML models on devices such as
microcontrollers and embedded systems. However, the complexity of managing the
TinyML lifecycle, including stages such as data processing, model optimization
and conversion, and device deployment, presents significant challenges and
often requires substantial human intervention. Motivated by these challenges,
we began exploring whether Large Language Models (LLMs) could help automate and
streamline the TinyML lifecycle. We developed a framework that leverages the
natural language processing (NLP) and code generation capabilities of LLMs to
reduce development time and lower the barriers to entry for TinyML deployment.
Through a case study involving a computer vision classification model, we
demonstrate the framework's ability to automate key stages of the TinyML
lifecycle. Our findings suggest that LLM-powered automation holds potential for
improving the lifecycle development process and adapting to diverse
requirements. However, while this approach shows promise, there remain
obstacles and limitations, particularly in achieving fully automated solutions.
This paper sheds light on both the challenges and opportunities of integrating
LLMs into TinyML workflows, providing insights into the path forward for
efficient, AI-assisted embedded system development.
|
2501.12421
|
Tackling Small Sample Survival Analysis via Transfer Learning: A Study
of Colorectal Cancer Prognosis
|
cs.LG cs.AI q-bio.QM
|
Survival prognosis is crucial for medical informatics. Practitioners often
confront small-sized clinical data, especially cancer patient cases, which can
be insufficient to induce useful patterns for survival predictions. This study
deals with small sample survival analysis by leveraging transfer learning, a
useful machine learning technique that can enhance the target analysis with
related knowledge pre-learned from other data. We propose and develop various
transfer learning methods designed for common survival models. For parametric
models such as DeepSurv, Cox-CC (Cox-based neural networks), and DeepHit
(end-to-end deep learning model), we apply standard transfer learning
techniques like pretraining and fine-tuning. For non-parametric models such as
Random Survival Forest, we propose a new transfer survival forest (TSF) model
that transfers tree structures from source tasks and fine-tunes them with
target data. We evaluated the transfer learning methods on colorectal cancer
(CRC) prognosis. The source data are 27,379 SEER CRC stage I patients, and the
target data are 728 CRC stage I patients from the West China Hospital. When
enhanced by transfer learning, Cox-CC's $C^{td}$ value was boosted from 0.7868
to 0.8111, DeepHit's from 0.8085 to 0.8135, DeepSurv's from 0.7722 to 0.8043,
and RSF's from 0.7940 to 0.8297 (the highest performance). All models trained
with as few as 50 target samples demonstrated even greater improvement. In
conclusion, the survival models currently used for cancer prognosis can be
enhanced by properly designed transfer learning techniques.
The source code used in this study is available at
https://github.com/YonghaoZhao722/TSF.
|
2501.12422
|
CroMe: Multimodal Fake News Detection using Cross-Modal Tri-Transformer
and Metric Learning
|
cs.LG cs.AI cs.CV
|
Multimodal Fake News Detection has received increasing attention recently.
Existing methods rely on independently encoded unimodal data and overlook the
advantages of capturing intra-modality relationships and integrating
inter-modal similarities using advanced techniques. To address these issues,
Cross-Modal Tri-Transformer and Metric Learning for Multimodal Fake News
Detection (CroMe) is proposed. CroMe utilizes Bootstrapping Language-Image
Pre-training with Frozen Image Encoders and Large Language Models (BLIP2) as
encoders to capture detailed text, image and combined image-text
representations. The metric learning module employs a proxy anchor method to
capture intra-modality relationships while the feature fusion module uses a
Cross-Modal and Tri-Transformer for effective integration. The final fake news
detector processes the fused features through a classifier to predict the
authenticity of the content. Experiments on datasets show that CroMe excels in
multimodal fake news detection.
|
2501.12423
|
FREYR: A Framework for Recognizing and Executing Your Requests
|
cs.SE cs.AI
|
Large language models excel as conversational agents, but their capabilities
can be further extended through tool usage, i.e., executable code, to enhance
response accuracy or address specialized domains. Current approaches to enable
tool usage often rely on model-specific prompting or fine-tuning a model for
function-calling instructions. Both approaches have notable limitations,
including reduced adaptability to unseen tools and high resource requirements.
This paper introduces FREYR, a streamlined framework that modularizes the tool
usage process into separate steps. Through this decomposition, we show that
FREYR achieves superior performance compared to conventional tool usage
methods. We evaluate FREYR on a set of real-world test cases specific to video
game design and compare it against traditional tool usage as provided by the
Ollama API.
|
2501.12424
|
Multi-Modality Collaborative Learning for Sentiment Analysis
|
cs.LG cs.AI cs.IR
|
Multimodal sentiment analysis (MSA) identifies individuals' sentiment states
in videos by integrating visual, audio, and text modalities. Despite progress
in existing methods, the inherent modality heterogeneity limits the effective
capture of interactive sentiment features across modalities. In this paper, by
introducing a Multi-Modality Collaborative Learning (MMCL) framework, we
facilitate cross-modal interactions and capture enhanced and complementary
features from modality-common and modality-specific representations,
respectively. Specifically, we design a parameter-free decoupling module and
separate unimodal representations into modality-common and modality-specific
components
through semantics assessment of cross-modal elements. For modality-specific
representations, inspired by the act-reward mechanism in reinforcement
learning, we design policy models to adaptively mine complementary sentiment
features under the guidance of a joint reward. For modality-common
representations, intra-modal attention is employed to highlight crucial
components, playing enhanced roles among modalities. Experimental results,
including superiority evaluations on four databases, effectiveness verification
of each module, and assessment of complementary features, demonstrate that MMCL
successfully learns collaborative features across modalities and significantly
improves performance. The code is available at
https://github.com/smwanghhh/MMCL.
|
2501.12425
|
Multi-stage intermediate fusion for multimodal learning to classify
non-small cell lung cancer subtypes from CT and PET
|
eess.IV cs.AI cs.CV q-bio.QM
|
Accurate classification of histological subtypes of non-small cell lung
cancer (NSCLC) is essential in the era of precision medicine, yet current
invasive techniques are not always feasible and may lead to clinical
complications. This study presents a multi-stage intermediate fusion approach
to classify NSCLC subtypes from CT and PET images. Our method integrates the
two modalities at different stages of feature extraction, using voxel-wise
fusion to exploit complementary information across varying abstraction levels
while preserving spatial correlations. We compare our method against unimodal
approaches using only CT or PET images to demonstrate the benefits of modality
fusion, and further benchmark it against early and late fusion techniques to
highlight the advantages of intermediate fusion during feature extraction.
Additionally, we compare our model with the only existing intermediate fusion
method for histological subtype classification using PET/CT images. Our results
demonstrate that the proposed method outperforms all alternatives across key
metrics, with an accuracy of 0.724 and an AUC of 0.681, respectively. This
non-invasive approach has the potential to significantly improve diagnostic
accuracy, facilitate more informed treatment decisions, and advance
personalized care in lung cancer management.
|
2501.12427
|
SafePowerGraph-HIL: Real-Time HIL Validation of Heterogeneous GNNs for
Bridging Sim-to-Real Gap in Power Grids
|
cs.LG cs.AI
|
As machine learning (ML) techniques gain prominence in power system research,
validating these methods' effectiveness under real-world conditions requires
real-time hardware-in-the-loop (HIL) simulations. HIL simulation platforms
enable the integration of computational models with physical devices, allowing
rigorous testing across diverse scenarios critical to system resilience and
reliability. In this study, we develop SafePowerGraph-HIL, a framework that
utilizes HIL simulations on the IEEE 9-bus system, modeled in Hypersim, to
generate high-fidelity data, which is then transmitted in real-time via SCADA
to an AWS cloud database before being input into a Heterogeneous Graph Neural
Network (HGNN) model designed for power system state estimation and dynamic
analysis. By leveraging Hypersim's capabilities, we simulate complex grid
interactions, providing a robust dataset that captures critical parameters for
HGNN training. The trained HGNN is subsequently validated using newly generated
data under varied system conditions, demonstrating accuracy and robustness in
predicting power system states. The results underscore the potential of
integrating HIL with advanced neural network architectures to enhance the
real-time operational capabilities of power systems. This approach represents a
significant advancement toward the development of intelligent, adaptive control
strategies that support the robustness and resilience of evolving power grids.
|
2501.12428
|
SplitQuant: Layer Splitting for Low-Bit Neural Network Quantization
|
cs.LG cs.AI
|
Quantization for deep neural networks (DNNs) is the process of mapping the
parameter values of DNNs from original data types to other data types of lower
precision to reduce model sizes and make inference faster. Quantization often
maps different original values to a single quantized value because the range of
the original values is larger than the range of the quantized values. This
leads to the degradation of the accuracy of the quantized DNNs. Outliers are a
main cause of the degradation of quantization resolution because they enlarge
the range of original values. To solve the problem, the percentile method is
often used to clip outliers. However, clipping the outliers has another problem
of removing the important and strong signals in the DNNs. This paper proposes
SplitQuant to keep the outliers and improve the quantization resolution at the
same time. SplitQuant narrows down the range of the original values and
mitigates the effect of outliers by splitting each quantizable layer into
three mathematically equivalent layers and applying different scaling
factors. In particular, weights and biases are clustered into lower, middle,
and upper clusters for an optimized split. By preprocessing DNNs with SplitQuant,
quantization algorithms can achieve better results. SplitQuant was applied on
two BERT-Tiny models and improved the accuracy of INT2 quantization by 3.3%p
and 2.1%p, achieving accuracies comparable to those of the original FP32
models.
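The layer-splitting idea can be illustrated with a small sketch. This is not the paper's implementation: it splits a weight vector at percentile thresholds rather than by clustering, but it shows why per-cluster scales preserve quantization resolution when outliers are present.

```python
# Sketch: quantize lower/middle/upper clusters with separate scales so
# outliers no longer stretch a single quantization range.
import numpy as np

def quantize(w, bits=2):
    """Uniform quantization of array w to 2**bits levels over its range."""
    levels = 2 ** bits - 1
    lo, hi = w.min(), w.max()
    scale = (hi - lo) / levels if hi > lo else 1.0
    return np.round((w - lo) / scale) * scale + lo

def split_quantize(w, bits=2):
    """Quantize lower/middle/upper clusters separately, then recombine."""
    t1, t2 = np.percentile(w, [5, 95])   # simple thresholds (assumption)
    out = np.empty_like(w)
    for mask in (w < t1, (w >= t1) & (w <= t2), w > t2):
        if mask.any():
            out[mask] = quantize(w[mask], bits)
    return out

rng = np.random.default_rng(0)
w = np.concatenate([rng.normal(0, 0.1, 1000), [5.0, -5.0]])  # with outliers
err_plain = np.abs(quantize(w) - w).mean()
err_split = np.abs(split_quantize(w) - w).mean()
```

With a single scale, the two outliers force a coarse grid over [-5, 5]; per-cluster scales keep a fine grid for the bulk of the weights.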
|
2501.12429
|
Fuel Efficiency Analysis of the Public Transportation System Based on
the Gaussian Mixture Model Clustering
|
cs.LG cs.AI
|
Public transportation is a major source of greenhouse gas emissions,
highlighting the need to improve bus fuel efficiency. Clustering algorithms
assist in analyzing fuel efficiency by grouping data into clusters, but
irrelevant features may complicate the analysis and choosing the optimal number
of clusters remains a challenging task. Therefore, this paper employs
Gaussian mixture models to cluster the solo fuel-efficiency dataset. Moreover,
an integration method that combines the Silhouette index, Calinski-Harabasz
index, and Davies-Bouldin index is developed to select the optimal cluster
numbers. A dataset of 4,006 bus trips in North Jutland, Denmark, is utilized as
the case study. Trips are first split into three groups, then one group is
divided further, resulting in four categories: extreme, normal, low, and
extremely low fuel efficiency. A preliminary study using visualization analysis
is conducted to investigate how driving behaviors and route conditions affect
fuel efficiency. The results indicate that both individual driving habits and
route characteristics have a significant influence on fuel efficiency.
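The index-combination step can be sketched as a simple rank vote over candidate cluster counts. The combination rule below is an assumption (the paper develops its own integration method), and the data are synthetic.

```python
# Sketch: fit GMMs for several k and combine three cluster-validity
# indices by rank voting to pick the cluster count.
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.metrics import (silhouette_score, calinski_harabasz_score,
                             davies_bouldin_score)

rng = np.random.default_rng(0)
# Toy "fuel efficiency" features: three well-separated groups
X = np.vstack([rng.normal(c, 0.3, size=(60, 2)) for c in (0.0, 3.0, 6.0)])

scores = {}
for k in range(2, 6):
    labels = GaussianMixture(n_components=k, random_state=0).fit_predict(X)
    scores[k] = (silhouette_score(X, labels),
                 calinski_harabasz_score(X, labels),
                 -davies_bouldin_score(X, labels))  # lower DB is better

# Rank each index across candidate k (higher rank = better), sum the ranks
ks = sorted(scores)
ranks = np.zeros(len(ks))
for i in range(3):
    vals = [scores[k][i] for k in ks]
    ranks += np.argsort(np.argsort(vals))
best_k = ks[int(np.argmax(ranks))]
```

Rank voting makes the three indices commensurable despite their very different scales; any monotone combination would serve the same illustrative purpose.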
|
2501.12430
|
SCFCRC: Simultaneously Counteract Feature Camouflage and Relation
Camouflage for Fraud Detection
|
cs.LG cs.AI
|
In fraud detection, fraudsters often interact with many benign users,
camouflaging their features or relations to hide themselves. Most existing work
concentrates solely on either feature camouflage or relation camouflage, or
decoupling feature learning and relation learning to prevent the two
camouflages from affecting each other. However, this inadvertently neglects the valuable
information derived from features or relations, which could mutually enhance
their adversarial camouflage strategies. In response to this gap, we propose
SCFCRC, a Transformer-based fraud detector that Simultaneously Counteracts
Feature Camouflage and Relation Camouflage. SCFCRC consists of two components:
Feature Camouflage Filter and Relation Camouflage Refiner. The feature
camouflage filter utilizes pseudo-labels generated through label propagation
to train the filter, and uses contrastive learning that combines
instance-wise and prototype-wise objectives to improve feature quality. The
relation camouflage refiner uses a Mixture-of-Experts (MoE) network to
disassemble the multi-relation graph into multiple substructures and conquer
them individually, mitigating the degradation of detection performance caused
by relation camouflage.
Furthermore, we introduce a regularization method for MoE to enhance the
robustness of the model. Extensive experiments on two fraud detection benchmark
datasets demonstrate that our method outperforms state-of-the-art baselines.
|
2501.12431
|
Modality Interactive Mixture-of-Experts for Fake News Detection
|
cs.LG cs.AI cs.CL
|
The proliferation of fake news on social media platforms disproportionately
impacts vulnerable populations, eroding trust, exacerbating inequality, and
amplifying harmful narratives. Detecting fake news in multimodal contexts --
where deceptive content combines text and images -- is particularly challenging
due to the nuanced interplay between modalities. Existing multimodal fake news
detection methods often emphasize cross-modal consistency but ignore the
complex interactions between text and visual elements, which may complement,
contradict, or independently influence the predicted veracity of a post. To
address these challenges, we present Modality Interactive Mixture-of-Experts
for Fake News Detection (MIMoE-FND), a novel hierarchical Mixture-of-Experts
framework designed to enhance multimodal fake news detection by explicitly
modeling modality interactions through an interaction gating mechanism. Our
approach evaluates two key aspects of modality interaction: unimodal
prediction agreement and semantic alignment. The
hierarchical structure of MIMoE-FND allows for distinct learning pathways
tailored to different fusion scenarios, adapting to the unique characteristics
of each modality interaction. By tailoring fusion strategies to diverse
modality interaction scenarios, MIMoE-FND provides a more robust and nuanced
approach to multimodal fake news detection. We evaluate our approach on three
real-world benchmarks spanning two languages, demonstrating its superior
performance compared to state-of-the-art methods. By enhancing the accuracy and
interpretability of fake news detection, MIMoE-FND offers a promising tool to
mitigate the spread of misinformation, with the potential to better safeguard
vulnerable communities against its harmful effects.
|
2501.12432
|
Divide-Then-Aggregate: An Efficient Tool Learning Method via Parallel
Tool Invocation
|
cs.LG cs.AI cs.CL
|
Although current Large Language Models (LLMs) exhibit impressive
capabilities, performing complex real-world tasks still requires tool learning.
Mainstream methods, such as CoT/ReAct, rely on step-by-step tool invocation to
interact with external environments, but they are limited in perceptual scope
and lack adequate task-planning capability. To address these limitations, other
studies introduce the Depth-First Search-based Decision Tree (DFSDT), which
still suffers from high computational cost. In this paper, we introduce a novel
parallel tool invocation paradigm, DTA-Llama (Divide-Then-Aggregate Llama).
First, we transform traditional tree-based tool search paths into a Directed
Acyclic Graph (DAG) structure, generating a high-quality parallel tool
invocation dataset. The DTA-Llama is then trained on the dataset to learn to
iteratively divide the current task into several parallel tool invocation
sub-tasks and aggregate the invocation results to decide the next actions.
Furthermore, we introduce an efficient inference framework inspired by the
Process/Threads mechanism when applying the DTA-Llama to practical tasks.
Experimental results show that our approach substantially enhances task
performance while reducing token consumption and inference time. Llama2-7B,
using our method, is comparable to the official parallel function calling
method of GPT-3.5. The relevant code, dataset, and model weights are available
at https://corn0205.github.io/
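The divide-then-aggregate execution pattern can be sketched with a thread pool: independent tool calls in one DAG layer run in parallel, and their results are aggregated before the next action. The tools and the DAG below are hypothetical stand-ins, not part of DTA-Llama.

```python
# Sketch: execute one DAG layer of independent tool calls in parallel,
# then aggregate the results for the next step.
from concurrent.futures import ThreadPoolExecutor

def search_weather(city):       # stand-in "tool"
    return f"{city}: sunny"

def search_flights(city):       # stand-in "tool"
    return f"{city}: 3 flights"

def summarize(results):         # aggregation step
    return " | ".join(results)

# Layer 1 of the DAG: two sub-tasks with no dependency on each other
layer_1 = [(search_weather, "Paris"), (search_flights, "Paris")]
with ThreadPoolExecutor() as pool:
    futures = [pool.submit(fn, arg) for fn, arg in layer_1]
    partial = [f.result() for f in futures]   # preserves submission order
answer = summarize(partial)
```

Step-by-step invocation (CoT/ReAct style) would serialize the two calls; the DAG formulation is what exposes the parallelism that reduces inference time.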
|
2501.12433
|
Owls are wise and foxes are unfaithful: Uncovering animal stereotypes in
vision-language models
|
cs.CV cs.AI cs.CL
|
Animal stereotypes are deeply embedded in human culture and language. They
often shape our perceptions and expectations of various species. Our study
investigates how animal stereotypes manifest in vision-language models during
the task of image generation. Through targeted prompts, we explore whether
DALL-E perpetuates stereotypical representations of animals, such as "owls as
wise," "foxes as unfaithful," etc. Our findings reveal significant stereotyped
instances where the model consistently generates images aligned with cultural
biases. The current work is the first of its kind to examine animal
stereotyping in vision-language models systematically and to highlight a
critical yet underexplored dimension of bias in AI-generated visual content.
|
2501.12434
|
Enhancing Retrosynthesis with Conformer: A Template-Free Method
|
cs.LG cs.AI
|
Retrosynthesis plays a crucial role in the fields of organic synthesis and
drug development, where the goal is to identify suitable reactants that can
yield a target product molecule. Although existing methods have achieved
notable success, they typically overlook the 3D conformational details and
internal spatial organization of molecules. This oversight makes it challenging
to predict reactants that conform to genuine chemical principles, particularly
when dealing with complex molecular structures, such as polycyclic and
heteroaromatic compounds. In response to this challenge, we introduce a novel
transformer-based, template-free approach that incorporates 3D conformer data
and spatial information. Our approach includes an Atom-align Fusion module that
integrates 3D positional data at the input stage, ensuring correct alignment
between atom tokens and their respective 3D coordinates. Additionally, we
propose a Distance-weighted Attention mechanism that refines the self-attention
process, narrowing the model's focus to relevant atom pairs in 3D space.
Extensive experiments on the USPTO-50K dataset demonstrate that our model
outperforms previous template-free methods, setting a new benchmark for the
field. A case study further highlights our method's ability to predict
reasonable and accurate reactants.
|
2501.12447
|
Tight relations and equivalences between smooth relative entropies
|
quant-ph cs.IT math-ph math.IT math.MP
|
The precise one-shot characterisation of operational tasks in classical and
quantum information theory relies on different forms of smooth entropic
quantities. A particularly important connection is between the hypothesis
testing relative entropy and the smoothed max-relative entropy, which together
govern many operational settings. We first strengthen this connection into a
type of equivalence: we show that the hypothesis testing relative entropy is
equivalent to a variant of the smooth max-relative entropy based on the
information spectrum divergence, which can be alternatively understood as a
measured smooth max-relative entropy. Furthermore, we improve a fundamental
lemma due to Datta and Renner that connects the different variants of the
smoothed max-relative entropy, introducing a modified proof technique based on
matrix geometric means and a tightened gentle measurement lemma. We use the
unveiled connections and tools to strictly improve on previously known one-shot
bounds and duality relations between the smooth max-relative entropy and the
hypothesis testing relative entropy, sharpening also bounds that connect the
max-relative entropy with R\'enyi divergences.
|
2501.12456
|
Deploying Privacy Guardrails for LLMs: A Comparative Analysis of
Real-World Applications
|
cs.CR cs.AI cs.LG cs.SE
|
The adoption of Large Language Models (LLMs) has revolutionized AI
applications but poses significant challenges in safeguarding user privacy.
Ensuring compliance with privacy regulations such as GDPR and CCPA while
addressing nuanced privacy risks requires robust and scalable frameworks. This
paper presents a detailed study of OneShield Privacy Guard, a framework
designed to mitigate privacy risks in user inputs and LLM outputs across
enterprise and open-source settings. We analyze two real-world deployments: (1)
a multilingual privacy-preserving system integrated with Data and Model
Factory, focusing on enterprise-scale data governance; and (2) PR Insights, an
open-source repository emphasizing automated triaging and community-driven
refinements. In Deployment 1, OneShield achieved a 0.95 F1 score in detecting
sensitive entities like dates, names, and phone numbers across 26 languages,
outperforming state-of-the-art tools such as StarPII and Presidio by up to 12\%.
Deployment 2, with an average F1 score of 0.86, reduced manual effort by over
300 hours in three months, accurately flagging 8.25\% of 1,256 pull requests
for privacy risks with enhanced context sensitivity. These results demonstrate
OneShield's adaptability and efficacy in diverse environments, offering
actionable insights for context-aware entity recognition, automated compliance,
and ethical AI adoption. This work advances privacy-preserving frameworks,
supporting user trust and compliance across operational contexts.
|
2501.12465
|
Adaptive PII Mitigation Framework for Large Language Models
|
cs.LG cs.AI cs.CR
|
Artificial Intelligence (AI) faces growing challenges from evolving data
protection laws and enforcement practices worldwide. Regulations like GDPR and
CCPA impose strict compliance requirements on Machine Learning (ML) models,
especially concerning personal data use. These laws grant individuals rights
such as data correction and deletion, complicating the training and deployment
of Large Language Models (LLMs) that rely on extensive datasets. Public data
availability does not guarantee its lawful use for ML, amplifying these
challenges.
This paper introduces an adaptive system for mitigating risk of Personally
Identifiable Information (PII) and Sensitive Personal Information (SPI) in
LLMs. It dynamically aligns with diverse regulatory frameworks and integrates
seamlessly into Governance, Risk, and Compliance (GRC) systems. The system uses
advanced NLP techniques, context-aware analysis, and policy-driven masking to
ensure regulatory compliance.
Benchmarks highlight the system's effectiveness, with an F1 score of 0.95 for
Passport Numbers, outperforming tools like Microsoft Presidio (0.33) and Amazon
Comprehend (0.54). In human evaluations, the system achieved an average user
trust score of 4.6/5, with participants acknowledging its accuracy and
transparency. Observations demonstrate stricter anonymization under GDPR
compared to CCPA, which permits pseudonymization and user opt-outs. These
results validate the system as a scalable and robust solution for enterprise
privacy compliance.
|
2501.12473
|
RIS-Aided Monitoring With Cooperative Jamming: Design and Performance
Analysis
|
eess.SY cs.SY
|
We investigate a reconfigurable intelligent surface (RIS) aided wireless
surveillance system. In this system, a monitor not only receives the signal from
a suspicious transmitter via a RIS-enhanced legitimate surveillance (LS) link but
also simultaneously takes control of multiple jammers to degrade the quality of
the received suspicious signal. Under this setup, enhancing monitoring performance
requires improving both the received signal quality at the monitor and
the cooperative jamming (CJ). Considering that the surveillance system is aided
by one RIS, whose phase shift optimization involves both channel state
information (CSI) of the LS and CJ links, we utilize partial CSI to alleviate
the CSI acquisition burden in our design. We propose two RIS-aided monitoring
schemes with optimal jammer selection (OJS), and derive their closed-form
expressions of surveillance success probability (SSP), respectively.
Furthermore, we consider RIS-aided monitoring schemes with random jammer
selection as corresponding benchmarks. Thereafter, we analyze special cases
where the jammers use power control to avoid being found, making the scheme
appear like passive monitoring. Also, the effect of the RIS is highlighted by
considering an asymptotically large number of RIS elements. Numerical results
verify that the proposed OJS strategy further enhances the RIS-aided monitoring
performance compared with non-jammer-selection RISLR and RISCR schemes, where
the superiority comes at the cost of CSI knowledge and becomes marginal in the
region of high jamming power. In addition, RISLO shows a surveillance
performance advantage over RISCO when the suspicious power is low or when the
number of RIS elements is large.
|
2501.12477
|
Slot-BERT: Self-supervised Object Discovery in Surgical Video
|
eess.IV cs.CV
|
Object-centric slot attention is a powerful framework for unsupervised
learning of structured and explainable representations that can support
reasoning about objects and actions, including in surgical videos. While
conventional object-centric methods for videos leverage recurrent processing to
achieve efficiency, they often struggle with maintaining long-range temporal
coherence required for long videos in surgical applications. On the other hand,
fully parallel processing of entire videos enhances temporal consistency but
introduces significant computational overhead, making it impractical for
implementation on hardware in medical facilities. We present Slot-BERT, a
bidirectional long-range model that learns object-centric representations in a
latent space while ensuring robust temporal coherence. Slot-BERT scales object
discovery seamlessly to long videos of unconstrained lengths. A novel slot
contrastive loss further reduces redundancy and improves the representation
disentanglement by enhancing slot orthogonality. We evaluate Slot-BERT on
real-world surgical video datasets from abdominal, cholecystectomy, and
thoracic procedures. Our method surpasses state-of-the-art object-centric
approaches under unsupervised training, achieving superior performance across
diverse domains. We also demonstrate efficient zero-shot domain adaptation to
data from diverse surgical specialties and databases.
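A slot contrastive penalty that enhances slot orthogonality, as mentioned above, can be sketched as a mean squared off-diagonal cosine similarity; the paper's exact loss likely differs:

```python
import numpy as np

def slot_orthogonality_loss(slots):
    """Penalize cosine similarity between distinct slot vectors.

    slots: (num_slots, dim) array. Returns the mean squared off-diagonal
    cosine similarity -- 0 for mutually orthogonal slots, 1 for collapsed ones.
    """
    norm = slots / np.linalg.norm(slots, axis=1, keepdims=True)
    sim = norm @ norm.T                       # (num_slots, num_slots) cosine matrix
    off = sim - np.eye(len(slots))            # zero out self-similarity
    return float((off ** 2).sum() / (len(slots) * (len(slots) - 1)))

orthogonal = np.eye(3)                        # three mutually orthogonal slots
collapsed = np.ones((3, 4))                   # three identical (redundant) slots
loss_ortho = slot_orthogonality_loss(orthogonal)
loss_collapsed = slot_orthogonality_loss(collapsed)
```

Minimizing such a term pushes slots apart, reducing the redundancy the abstract describes.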
|
2501.12479
|
Degree-Based Logical Adjacency Checking (DBLAC): A Novel Heuristic for
Vertex Coloring
|
cs.DM cs.AI
|
We introduce Degree-Based Logical Adjacency Checking (DBLAC), an efficient
graph-coloring heuristic built on logical AND operations. The logical AND
operation yields more effective color assignments and fewer induced colors when
vertices share common edges. In this work, we provide a detailed theoretical
analysis of DBLAC's time and space complexity, and we demonstrate its
effectiveness through extensive experiments on standard benchmark graphs. We
compare it with existing algorithms, namely DSATUR and Recursive Largest First
(RLF), and show that DBLAC achieves competitive results with respect to both
the number of colors used and runtime performance.
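The abstract gives no pseudocode, but one plausible reading of degree-based coloring with logical AND adjacency checks is the following sketch, where each color class is a vertex bitmask and a candidate color is validated with a single bitwise AND. This is an illustrative interpretation, not the authors' exact algorithm:

```python
def dblac_like_coloring(n, edges):
    """Greedy coloring in descending-degree order; each color class is checked
    against the vertex's adjacency bitmask with one bitwise AND.
    Illustrative reading of DBLAC, not the published algorithm.
    """
    adj = [0] * n
    for u, v in edges:
        adj[u] |= 1 << v
        adj[v] |= 1 << u
    order = sorted(range(n), key=lambda v: bin(adj[v]).count("1"), reverse=True)
    classes = []                      # classes[c] = bitmask of vertices with color c
    color = [None] * n
    for v in order:
        for c, members in enumerate(classes):
            if members & adj[v] == 0:         # logical AND: no neighbor in class c
                classes[c] |= 1 << v
                color[v] = c
                break
        else:
            classes.append(1 << v)            # open a new color class
            color[v] = len(classes) - 1
    return color

# a 5-cycle is not bipartite, so any proper coloring needs 3 colors
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
coloring = dblac_like_coloring(5, edges)
```

The bitmask representation makes each feasibility check a single machine-word (or big-int) AND rather than a per-neighbor loop.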
|
2501.12482
|
TOFFE -- Temporally-binned Object Flow from Events for High-speed and
Energy-Efficient Object Detection and Tracking
|
cs.CV cs.ET cs.LG cs.NE cs.RO
|
Object detection and tracking is an essential perception task for enabling
fully autonomous navigation in robotic systems. Edge robot systems such as
small drones need to execute complex maneuvers at high-speeds with limited
resources, which places strict constraints on the underlying algorithms and
hardware. Traditionally, frame-based cameras are used for vision-based
perception due to their rich spatial information and simplified synchronous
sensing capabilities. However, obtaining detailed information across frames
incurs high energy consumption and may not even be required. In addition, their
low temporal resolution renders them ineffective in high-speed motion
scenarios. Event-based cameras offer a biologically-inspired solution to this
by capturing only changes in intensity levels at exceptionally high temporal
resolution and low power consumption, making them ideal for high-speed motion
scenarios. However, their asynchronous and sparse outputs are not natively
compatible with conventional deep learning methods. In this work, we propose
TOFFE, a lightweight hybrid framework for performing event-based object motion
estimation (including pose, direction, and speed estimation), referred to as
Object Flow. TOFFE integrates bio-inspired Spiking Neural Networks (SNNs) and
conventional Analog Neural Networks (ANNs), to efficiently process events at
high temporal resolutions while being simple to train. Additionally, we present
a novel event-based synthetic dataset involving high-speed object motion to
train TOFFE. Our experimental results show that TOFFE achieves 5.7x/8.3x
reduction in energy consumption and 4.6x/5.8x reduction in latency on edge
GPU (Jetson TX2) / hybrid hardware (Loihi-2 and Jetson TX2), compared to previous
event-based object detection baselines.
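Temporal binning of an event stream, as TOFFE's name suggests, can be sketched as follows; the exact representation TOFFE feeds its SNN/ANN stages is an assumption here:

```python
import numpy as np

def bin_events(events, num_bins, height, width):
    """Accumulate (x, y, t, polarity) events into `num_bins` temporal frames.

    events: (N, 4) array with columns x, y, timestamp, polarity (+1/-1).
    Returns a (num_bins, height, width) tensor of signed event counts.
    """
    x = events[:, 0].astype(int)
    y = events[:, 1].astype(int)
    t = events[:, 2]
    p = events[:, 3]
    t0, t1 = t.min(), t.max()
    # map each timestamp to a bin index in [0, num_bins - 1]
    idx = np.minimum(((t - t0) / max(t1 - t0, 1e-9) * num_bins).astype(int),
                     num_bins - 1)
    frames = np.zeros((num_bins, height, width))
    np.add.at(frames, (idx, y, x), p)        # unbuffered scatter-add of polarities
    return frames

# toy stream: two ON events early at (1, 2), one OFF event late at (3, 0)
events = np.array([[1, 2, 0.00, 1],
                   [1, 2, 0.01, 1],
                   [3, 0, 0.99, -1]])
frames = bin_events(events, num_bins=2, height=4, width=4)
```

The resulting dense frames are what make the sparse, asynchronous sensor output digestible by conventional network layers.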
|
2501.12483
|
A Smart IoT Framework for Climate-Resilient and Sustainable Maize
Farming In Uganda
|
cs.CE
|
This study provides a framework that incorporates the Internet of Things
(IoT) technology into maize farming activities in Central Uganda as a solution
to various challenges including climate change, sub-optimal resource use and
low crop yields. Using IoT-based modeling and simulation, the presented
solution recommends cost-effective and efficient approaches to irrigation, crop
yield improvement enhancement and prevention of drinking water loss while being
practical for smallholder farmers. The framework is developed in a manner that
is appropriate for low resource use regions by using local strategies that are
easily understandable and actionable for the farmers thus solving the issue of
technology access and social economic constraints. Research in this area
brought to light the promise that the IoT holds for the evolution of
agriculture into a more data-informed, climate-smart sector, contributes to the
much-needed food in the world, is economically viable, facilitates sustainable
rural development and is a huge step for the agriculture modernization of
Uganda.
|
2501.12485
|
R2D2: Remembering, Reflecting and Dynamic Decision Making for Web Agents
|
cs.AI
|
The proliferation of web agents necessitates advanced navigation and
interaction strategies within complex web environments. Current models often
struggle with efficient navigation and action execution due to limited
visibility and understanding of web structures. Our proposed R2D2 framework
addresses these challenges by integrating two paradigms: Remember and Reflect.
The Remember paradigm utilizes a replay buffer that aids agents in
reconstructing the web environment dynamically, thus enabling the formulation
of a detailed ``map'' of previously visited pages. This helps in reducing
navigational errors and optimizing the decision-making process during web
interactions. Conversely, the Reflect paradigm allows agents to learn from past
mistakes by providing a mechanism for error analysis and strategy refinement,
enhancing overall task performance. We evaluate R2D2 using the WEBARENA
benchmark, demonstrating significant improvements over existing methods,
including a 50% reduction in navigation errors and a threefold increase in task
completion rates. Our findings suggest that a combination of memory-enhanced
navigation and reflective learning promisingly advances the capabilities of web
agents, potentially benefiting various applications such as automated customer
service and personal digital assistants.
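The Remember paradigm (a replay buffer from which a map of visited pages is reconstructed) can be sketched minimally; the class and method names below are illustrative, not R2D2's actual API:

```python
from collections import deque

class ReplayBuffer:
    """Minimal 'Remember' sketch: log visits, rebuild a page-link map.
    Names are illustrative assumptions, not the paper's implementation."""

    def __init__(self, capacity=1000):
        self.visits = deque(maxlen=capacity)   # (from_url, action, to_url) triples

    def record(self, from_url, action, to_url):
        self.visits.append((from_url, action, to_url))

    def page_map(self):
        """Adjacency map of pages reachable from each visited page."""
        graph = {}
        for src, _action, dst in self.visits:
            graph.setdefault(src, set()).add(dst)
        return graph

buf = ReplayBuffer()
buf.record("/home", "click:login", "/login")
buf.record("/home", "click:cart", "/cart")
buf.record("/login", "submit", "/account")
site_map = buf.page_map()
```

An agent consulting such a map can avoid re-exploring known dead ends, which is the navigational-error reduction the abstract reports.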
|
2501.12486
|
The Journey Matters: Average Parameter Count over Pre-training Unifies
Sparse and Dense Scaling Laws
|
cs.LG cs.CL
|
Pruning eliminates unnecessary parameters in neural networks; it offers a
promising solution to the growing computational demands of large language
models (LLMs). While many focus on post-training pruning, sparse
pre-training--which combines pruning and pre-training into a single
phase--provides a simpler alternative. In this work, we present the first
systematic exploration of optimal sparse pre-training configurations for LLMs
through an examination of 80 unique pruning schedules across different sparsity
levels and training durations. We find that initiating pruning at 25% of total
training compute and concluding at 75% achieves near-optimal final evaluation
loss. These findings provide valuable insights for efficient and effective
sparse pre-training of LLMs. Furthermore, we propose a new scaling law that
modifies the Chinchilla scaling law to use the average parameter count over
pre-training. Through empirical and theoretical validation, we demonstrate that
this modified scaling law accurately models evaluation loss for both sparsely
and densely pre-trained LLMs, unifying scaling laws across pre-training
paradigms. Our findings indicate that while sparse pre-training achieves the
same final model quality as dense pre-training for equivalent compute budgets,
it provides substantial benefits through reduced model size, enabling
significant potential computational savings during inference.
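The modified scaling law replaces the parameter count with its average over pre-training; under the 25%-to-75% schedule above, that average can be computed as below. The linear sparsity ramp is an illustrative assumption:

```python
def average_param_count(n_dense, final_sparsity, total_steps,
                        prune_start_frac=0.25, prune_end_frac=0.75):
    """Average parameter count over pre-training for a pruning schedule that
    ramps sparsity linearly between the start and end fractions of training.
    (The linear ramp is an illustrative assumption, not the paper's schedule.)
    """
    start = prune_start_frac * total_steps
    end = prune_end_frac * total_steps
    total = 0.0
    for step in range(total_steps):
        if step < start:
            sparsity = 0.0                                   # dense phase
        elif step >= end:
            sparsity = final_sparsity                        # fully pruned phase
        else:
            sparsity = final_sparsity * (step - start) / (end - start)
        total += n_dense * (1.0 - sparsity)
    return total / total_steps

# 1B dense parameters, pruned to 50% sparsity between 25% and 75% of training
avg = average_param_count(1_000_000_000, 0.5, total_steps=1000)
```

This average (here about 750M for a 1B dense model) would be the `N` plugged into a Chinchilla-style loss formula in place of the final parameter count.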
|
2501.12487
|
fabSAM: A Farmland Boundary Delineation Method Based on the Segment
Anything Model
|
cs.CV cs.AI eess.IV
|
Delineating farmland boundaries is essential for agricultural management such
as crop monitoring and agricultural census. Traditional methods using remote
sensing imagery have been efficient but limited in generalisation. The Segment
Anything Model (SAM), known for its impressive zero-shot performance, has been
adapted for remote sensing tasks through prompt learning and fine-tuning. Here,
we propose a SAM based farmland boundary delineation framework 'fabSAM' that
combines a Deeplabv3+-based Prompter and SAM. Also, a fine-tuning strategy was
introduced to enable SAM's decoder to improve the use of prompt information.
Experimental results on the AI4Boundaries and AI4SmallFarms datasets have shown
that fabSAM has a significant improvement in farmland region identification and
boundary delineation. Compared to zero-shot SAM, fabSAM surpassed it by 23.5%
and 15.1% in mIOU on the AI4Boundaries and AI4SmallFarms datasets,
respectively. For Deeplabv3+, fabSAM outperformed it by 4.9% and 12.5% in mIOU,
respectively. These results highlight the effectiveness of fabSAM, which also
means that we can more easily obtain global farmland region and boundary maps
from open-source satellite image datasets like Sentinel-2.
|
2501.12488
|
Bidirectional Brain Image Translation using Transfer Learning from
Generic Pre-trained Models
|
eess.IV cs.CV q-bio.TO
|
Brain imaging plays a crucial role in the diagnosis and treatment of various
neurological disorders, providing valuable insights into the structure and
function of the brain. Techniques such as magnetic resonance imaging (MRI) and
computed tomography (CT) enable non-invasive visualization of the brain, aiding
in the understanding of brain anatomy, abnormalities, and functional
connectivity. However, cost and radiation dose may limit the acquisition of
specific image modalities, so medical image synthesis can be used to generate
required medical images without actual acquisition. In the medical domain, where
obtaining labeled medical images is labor-intensive and expensive, addressing
data scarcity is a major challenge. Recent studies propose using transfer
learning to overcome this issue. This involves adapting pre-trained CycleGAN
models, initially trained on non-medical data, to generate realistic medical
images. In this work, transfer learning was applied to the task of MR-CT image
translation and vice versa using 18 pre-trained non-medical models, and the
models were fine-tuned to have the best result. The models' performance was
evaluated using four widely used image quality metrics:
Peak-signal-to-noise-ratio, Structural Similarity Index, Universal Quality
Index, and Visual Information Fidelity. Quantitative evaluation and qualitative
perceptual analysis by radiologists demonstrate the potential of transfer
learning in medical imaging and the effectiveness of the generic pre-trained
model. The results provide compelling evidence of the model's exceptional
performance, which can be attributed to the high quality and similarity of the
training images to actual human brain images. These results underscore the
significance of carefully selecting appropriate and representative training
images to optimize performance in brain image analysis tasks.
|
2501.12489
|
Large-image Object Detection for Fine-grained Recognition of Punches
Patterns in Medieval Panel Painting
|
cs.CV cs.AI cs.LG
|
The attribution of the author of an art piece is typically a laborious manual
process, usually relying on subjective evaluations of expert figures. However,
there are some situations in which quantitative features of the artwork can
support these evaluations. The extraction of these features can sometimes be
automated, for instance, with the use of Machine Learning (ML) techniques. An
example of these features is represented by repeated, mechanically impressed
patterns, called punches, present chiefly in 13th and 14th-century panel
paintings from Tuscany. Previous research in art history showcased a strong
connection between the shapes of punches and specific artists or workshops,
suggesting the possibility of using these quantitative cues to support the
attribution. In the present work, we first collect a dataset of large-scale
images of these panel paintings. Then, using YOLOv10, a recent and popular
object detection model, we train a ML pipeline to perform object detection on
the punches contained in the images. Due to the large size of the images, the
detection procedure is split across multiple frames by adopting a
sliding-window approach with overlaps, after which the predictions are combined
for the whole image using a custom non-maximal suppression routine. Our results
indicate how art historians working in the field can reliably use our method
for the identification and extraction of punches.
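The sliding-window pipeline above can be sketched as follows: tile the large image into overlapping windows, run the detector per window, then merge predictions. The IoU-threshold NMS below is a simple stand-in for the paper's custom non-maximal suppression routine:

```python
def tile_windows(img_w, img_h, win, overlap):
    """Top-left corners of overlapping square windows covering the image."""
    stride = win - overlap
    xs = list(range(0, max(img_w - win, 0) + 1, stride))
    ys = list(range(0, max(img_h - win, 0) + 1, stride))
    if xs[-1] + win < img_w:
        xs.append(img_w - win)            # snap a final window to the right edge
    if ys[-1] + win < img_h:
        ys.append(img_h - win)            # ... and to the bottom edge
    return [(x, y) for y in ys for x in xs]

def iou(a, b):
    ax1, ay1, ax2, ay2 = a
    bx1, by1, bx2, by2 = b
    iw = max(0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union else 0.0

def nms(boxes, scores, thresh=0.5):
    """Keep highest-scoring boxes, dropping overlaps above `thresh` IoU."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) <= thresh for j in keep):
            keep.append(i)
    return keep

corners = tile_windows(1000, 1000, win=640, overlap=100)
# duplicate detections of the same punch from two overlapping windows
boxes = [(0, 0, 10, 10), (1, 1, 11, 11), (50, 50, 60, 60)]
keep = nms(boxes, scores=[0.9, 0.8, 0.7])
```

Boxes from neighboring windows that cover the same punch overlap heavily, so the merge step collapses them to the single highest-scoring detection.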
|
2501.12491
|
Optimizing Blockchain Analysis: Tackling Temporality and Scalability
with an Incremental Approach with Metropolis-Hastings Random Walks
|
cs.CE stat.ML
|
Blockchain technology, with implications in the financial domain, offers data
in the form of large-scale transaction networks. Analyzing transaction networks
facilitates fraud detection, market analysis, and supports government
regulation. Despite many graph representation learning methods for transaction
network analysis, we pinpoint two salient limitations that merit more
investigation. First, existing methods predominantly focus on snapshots of
transaction networks, sidelining the evolving nature of blockchain transaction
networks. Second, existing methodologies may not sufficiently emphasize efficient,
incremental learning capabilities, which are essential for addressing the
scalability challenges in ever-expanding large-scale transaction networks. To
address these challenges, we employed an incremental approach for random
walk-based node representation learning in transaction networks. Further, we
proposed a Metropolis-Hastings-based random walk mechanism for improved
efficiency. The empirical evaluation conducted on blockchain transaction
datasets reveals comparable performance in node classification tasks while
reducing computational overhead. Potential applications include transaction
network monitoring, the efficient classification of blockchain addresses for
fraud detection or the identification of specialized address types within the
network.
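A Metropolis-Hastings random walk of the kind mentioned above can be sketched as follows. Targeting the uniform node distribution (a common choice, assumed here) corrects the degree bias of a plain random walk via the acceptance ratio deg(u)/deg(v):

```python
import random

def mh_random_walk(adj, start, length, seed=0):
    """Metropolis-Hastings random walk targeting the uniform node distribution.

    A simple random walk visits nodes proportionally to their degree; the MH
    acceptance ratio deg(u)/deg(v) removes that bias. `adj` maps node -> neighbors.
    """
    rng = random.Random(seed)
    walk = [start]
    u = start
    for _ in range(length - 1):
        v = rng.choice(adj[u])
        # accept with prob min(1, deg(u)/deg(v)); otherwise self-loop at u
        if rng.random() < min(1.0, len(adj[u]) / len(adj[v])):
            u = v
        walk.append(u)
    return walk

# star graph: a plain walk spends half its time at the hub; the MH walk
# lingers on leaves instead, approaching a uniform 1/4 share per node
adj = {0: [1, 2, 3], 1: [0], 2: [0], 3: [0]}
walk = mh_random_walk(adj, start=0, length=2000)
hub_share = walk.count(0) / len(walk)
```

Per-step work stays constant, which is what makes the approach attractive for incremental updates on growing transaction graphs.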
|
2501.12493
|
ELEGNT: Expressive and Functional Movement Design for
Non-anthropomorphic Robot
|
cs.RO cs.HC
|
Nonverbal behaviors such as posture, gestures, and gaze are essential for
conveying internal states, both consciously and unconsciously, in human
interaction. For robots to interact more naturally with humans, robot movement
design should likewise integrate expressive qualities, such as intention,
attention, and emotions, alongside traditional functional considerations like
task fulfillment and time efficiency. In this paper, we present the design and
prototyping of a lamp-like robot that explores the interplay between functional
and expressive objectives in movement design. Using a research-through-design
methodology, we document the hardware design process, define expressive
movement primitives, and outline a set of interaction scenario storyboards. We
propose a framework that incorporates both functional and expressive utilities
during movement generation, and implement the robot behavior sequences in
different function- and social-oriented tasks. Through a user study comparing
expression-driven versus function-driven movements across six task scenarios,
our findings indicate that expression-driven movements significantly enhance
user engagement and perceived robot qualities. This effect is especially
pronounced in social-oriented tasks.
|
2501.12500
|
Identification of Nonparametric Dynamic Causal Structure and Latent
Process in Climate System
|
cs.LG stat.ME
|
The study of learning causal structure with latent variables has advanced the
understanding of the world by uncovering causal relationships and latent
factors, e.g., Causal Representation Learning (CRL). However, in real-world
scenarios, such as those in climate systems, causal relationships are often
nonparametric, dynamic, and exist among both observed variables and latent
variables. These challenges motivate us to consider a general setting in which
causal relations are nonparametric and unrestricted in their occurrence, which
is unconventional to current methods. To solve this problem, with the aid of
3-measurement in temporal structure, we theoretically show that both latent
variables and processes can be identified up to minor indeterminacy under mild
assumptions. Moreover, we tackle the general nonlinear Causal Discovery (CD)
from observations, e.g., temperature, as a specific task of learning
independent representation, through the principle of functional equivalence.
Based on these insights, we develop an estimation approach simultaneously
recovering both the observed causal structure and latent causal process in a
nontrivial manner. Simulation studies validate the theoretical foundations and
demonstrate the effectiveness of the proposed methodology. In the experiments
involving climate data, this approach offers a powerful and in-depth
understanding of the climate system.
|
2501.12502
|
Sequence Spreading-Based Semantic Communication Under High RF
Interference
|
cs.NI cs.LG
|
In the evolving landscape of wireless communications, semantic communication
(SemCom) has recently emerged as a 6G enabler that prioritizes the transmission
of meaning and contextual relevance over conventional bit-centric metrics.
However, the deployment of SemCom systems in industrial settings presents
considerable challenges, such as high radio frequency interference (RFI), that
can adversely affect system performance. To address this problem, in this work,
we propose a novel approach based on integrating sequence spreading techniques
with SemCom to enhance system robustness against such adverse conditions and
enable scalable multi-user (MU) SemCom. In addition, we propose a novel signal
refining network (SRN) to refine the received signal after despreading and
equalization. The proposed network eliminates the need for computationally
intensive end-to-end (E2E) training while improving performance metrics,
achieving a 25% gain in BLEU score and a 12% increase in semantic similarity
compared to E2E training using the same bandwidth.
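Classical sequence spreading of the kind the abstract builds on can be sketched as follows. The Walsh codes and the direct-sequence framing are textbook DSSS assumptions; the actual system spreads semantic symbols within a learned pipeline:

```python
import numpy as np

def spread(symbols, code):
    """Spread each symbol by the user's +/-1 chip sequence (outer product)."""
    return np.outer(symbols, code).ravel()

def despread(chips, code):
    """Correlate received chips with the code to recover the user's symbols."""
    return chips.reshape(-1, len(code)) @ code / len(code)

# two users with orthogonal Walsh codes share the channel
code_a = np.array([1, 1, 1, 1])
code_b = np.array([1, -1, 1, -1])
sym_a = np.array([1.0, -1.0, 1.0])
sym_b = np.array([-1.0, -1.0, 1.0])

rng = np.random.default_rng(0)
rx = spread(sym_a, code_a) + spread(sym_b, code_b)       # superposed MU signal
rx_noisy = rx + 0.1 * rng.normal(size=rx.shape)          # mild interference/noise
rec_clean = despread(rx, code_a)                         # exact recovery
rec_a = despread(rx_noisy, code_a)
```

Because the codes are orthogonal, despreading with user A's code cancels user B's contribution exactly, and the averaging over chips suppresses wideband interference, which is the robustness property the paper exploits.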
|
2501.12508
|
The Finite Element Neural Network Method: One Dimensional Study
|
cs.CE cs.AI
|
The potential of neural networks (NN) in engineering is rooted in their
capacity to understand intricate patterns and complex systems, leveraging their
universal nonlinear approximation capabilities and high expressivity.
Meanwhile, conventional numerical methods, backed by years of meticulous
refinement, continue to be the standard for accuracy and dependability.
Bridging these paradigms, this research introduces the finite element neural
network method (FENNM) within the framework of the Petrov-Galerkin method using
convolution operations to approximate the weighted residual of the differential
equations. The NN generates the global trial solution, while the test functions
belong to the Lagrange test function space. FENNM introduces several key
advantages. Notably, the weak-form of the differential equations introduces
flux terms that contribute information to the loss function compared to VPINN,
hp-VPINN, and cv-PINN. This enables the integration of forcing terms and
natural boundary conditions into the loss function similar to conventional
finite element method (FEM) solvers, facilitating its optimization, and
extending its applicability to more complex problems, which will ease
industrial adoption. This study will elaborate on the derivation of FENNM,
highlighting its similarities with FEM. Additionally, it will provide insights
into optimal utilization strategies and user guidelines to ensure
cost-efficiency. Finally, the study illustrates the robustness and accuracy of
FENNM by presenting multiple numerical case studies and applying adaptive mesh
refinement techniques.
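The weighted residual at the heart of the method can be illustrated in 1D for the model problem -u'' = f with homogeneous Dirichlet conditions and Lagrange hat test functions. For brevity the NN trial solution is replaced by a closed-form candidate, and simple trapezoid quadrature stands in for the paper's convolution-based evaluation:

```python
import numpy as np

def weak_residuals(du, f, nodes, n_quad=64):
    """Weighted residuals R_i = int u'(x) v_i'(x) dx - int f(x) v_i(x) dx for
    the interior hat test functions v_i on the 1D mesh `nodes` (model problem
    -u'' = f, u = 0 at both ends). `du` stands in for the derivative of the
    NN trial solution.
    """
    def trap(y, x):                      # simple trapezoid quadrature
        return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

    res = []
    for i in range(1, len(nodes) - 1):
        a, b, c = nodes[i - 1], nodes[i], nodes[i + 1]
        r = 0.0
        # v_i rises on [a, b] with slope 1/(b-a), falls on [b, c] with -1/(c-b)
        for lo, hi, slope in [(a, b, 1.0 / (b - a)), (b, c, -1.0 / (c - b))]:
            x = np.linspace(lo, hi, n_quad)
            v = (x - a) / (b - a) if hi == b else (c - x) / (c - b)
            r += trap(du(x) * slope - f(x) * v, x)
        res.append(r)
    return np.array(res)

# exact solution of -u'' = pi^2 sin(pi x) is u = sin(pi x):
# all weighted residuals vanish up to quadrature error
f = lambda x: np.pi ** 2 * np.sin(np.pi * x)
du = lambda x: np.pi * np.cos(np.pi * x)
nodes = np.linspace(0.0, 1.0, 11)
R = weak_residuals(du, f, nodes)
```

A FENNM-style loss would be built from these residuals (e.g. their sum of squares), with flux and forcing terms entering through the weak form exactly as in a conventional FEM assembly.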
|
2501.12516
|
Robustness of Selected Learning Models under Label-Flipping Attack
|
cs.LG cs.CR
|
In this paper we compare traditional machine learning and deep learning
models trained on a malware dataset when subjected to adversarial attack based
on label-flipping. Specifically, we investigate the robustness of Support
Vector Machines (SVM), Random Forest, Gaussian Naive Bayes (GNB), Gradient
Boosting Machine (GBM), LightGBM, XGBoost, Multilayer Perceptron (MLP),
Convolutional Neural Network (CNN), MobileNet, and DenseNet models when facing
varying percentages of misleading labels. We empirically assess the
accuracy of each of these models under such an adversarial attack on the
training data. This research aims to provide insights into which models are
inherently more robust, in the sense of being better able to resist intentional
disruptions to the training data. We find wide variation in the robustness of
the models tested to adversarial attack, with our MLP model achieving the best
combination of initial accuracy and robustness.
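The label-flipping attack itself is simple to sketch: corrupt a fraction of training labels, retrain, and compare accuracy. A 1-nearest-neighbour classifier stands in below for the paper's SVM/MLP/CNN models; since 1-NN memorizes its training set, accuracy on the training points degrades by exactly the flip rate, making the effect easy to see:

```python
import random

def flip_labels(labels, fraction, seed=0):
    """Return a copy of the binary labels with `fraction` of them flipped."""
    rng = random.Random(seed)
    flipped = list(labels)
    for i in rng.sample(range(len(labels)), int(fraction * len(labels))):
        flipped[i] = 1 - flipped[i]
    return flipped

def knn1(points, labels):
    """1-nearest-neighbour classifier on 1D points (stand-in model)."""
    data = list(zip(points, labels))
    return lambda x: min(data, key=lambda d: abs(d[0] - x))[1]

def accuracy(model, xs, ys):
    return sum(model(x) == y for x, y in zip(xs, ys)) / len(ys)

# toy malware-style data: class 0 clustered low, class 1 clustered high
train_x = [i / 100 for i in range(100)]
train_y = [0] * 50 + [1] * 50
clean = knn1(train_x, train_y)
poisoned = knn1(train_x, flip_labels(train_y, 0.4))
acc_clean = accuracy(clean, train_x, train_y)
acc_poisoned = accuracy(poisoned, train_x, train_y)
```

Models with stronger inductive biases (the paper's robust ones) degrade more gracefully than this memorizing baseline, which is precisely the variation the study measures.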
|
2501.12521
|
An Empirically-grounded tool for Automatic Prompt Linting and Repair: A
Case Study on Bias, Vulnerability, and Optimization in Developer Prompts
|
cs.SE cs.AI
|
The tidal wave of advancements in Large Language Models (LLMs) has led to
their swift integration into application-level logic. Many software systems now
use prompts to interact with these black-box models, combining natural language
with dynamic values interpolated at runtime, to perform tasks ranging from
sentiment analysis to question answering. Due to the programmatic and
structured natural language aspects of these prompts, we refer to them as
Developer Prompts. Unlike traditional software artifacts, Dev Prompts blend
natural language instructions with artificial languages such as programming and
markup languages, thus requiring specialized tools for analysis, distinct from
classical software evaluation methods.
In response to this need, we introduce PromptDoctor, a tool explicitly
designed to detect and correct issues of Dev Prompts. PromptDoctor identifies
and addresses problems related to bias, vulnerability, and sub-optimal
performance in Dev Prompts, helping mitigate their possible harms. In our
analysis of 2,173 Dev Prompts, selected as a representative sample of 40,573
Dev Prompts, we found that 3.46% contained one or more forms of bias, and 10.75%
were vulnerable to prompt injection attacks. Additionally, 3,310 were amenable
to automated prompt optimization. To address these issues, we applied
PromptDoctor to the flawed Dev Prompts we discovered. PromptDoctor de-biased
68.29% of the biased Dev Prompts, hardened 41.81% of the vulnerable Dev
Prompts, and improved the performance of 37.1% of the sub-optimal Dev Prompts.
Finally, we developed a PromptDoctor VSCode extension, enabling developers to
easily enhance Dev Prompts in their existing development workflows. The data
and source code for this work are available at
|
2501.12522
|
Topology of Out-of-Distribution Examples in Deep Neural Networks
|
cs.LG
|
As deep neural networks (DNNs) become increasingly common, concerns about
their robustness do as well. A longstanding problem for deployed DNNs is their
behavior in the face of unfamiliar inputs; specifically, these models tend to
be overconfident and incorrect when encountering out-of-distribution (OOD)
examples. In this work, we present a topological approach to characterizing OOD
examples using latent layer embeddings from DNNs. Our goal is to identify
topological features, referred to as landmarks, that indicate OOD examples. We
conduct extensive experiments on benchmark datasets and a realistic DNN model,
revealing a key insight for OOD detection. Well-trained DNNs have been shown to
induce a topological simplification on training data for simple models and
datasets; we show that this property holds for realistic, large-scale test and
training data, but does not hold for OOD examples. More specifically, we find
that the average lifetime (or persistence) of OOD examples is statistically
longer than that of training or test examples. This indicates that DNNs
struggle to induce topological simplification on unfamiliar inputs. Our
empirical results provide novel evidence of topological simplification in
realistic DNNs and lay the groundwork for topologically-informed OOD detection
strategies.
|
2501.12523
|
Federated Discrete Denoising Diffusion Model for Molecular Generation
with OpenFL
|
cs.LG cs.CR
|
Generating unique molecules with biochemically desired properties to serve as
viable drug candidates is a difficult task that requires specialized domain
expertise. In recent years, diffusion models have shown promising results in
accelerating the drug design process through AI-driven molecular generation.
However, training these models requires massive amounts of data, which are
often isolated in proprietary silos. OpenFL is a federated learning framework
that enables privacy-preserving collaborative training across these
decentralized data sites. In this work, we present a federated discrete
denoising diffusion model that was trained using OpenFL. The federated model
achieves comparable performance with a model trained on centralized data when
evaluating the uniqueness and validity of the generated molecules. This
demonstrates the utility of federated learning in the drug design process.
OpenFL is available at: https://github.com/securefederatedai/openfl
|
2501.12524
|
Efficient Lung Ultrasound Severity Scoring Using Dedicated Feature
Extractor
|
eess.IV cs.AI cs.CV
|
With the advent of the COVID-19 pandemic, ultrasound imaging has emerged as a
promising technique for COVID-19 detection, due to its non-invasive nature,
affordability, and portability. In response, researchers have focused on
developing AI-based scoring systems to provide real-time diagnostic support.
However, the limited size and lack of proper annotation in publicly available
ultrasound datasets pose significant challenges for training a robust AI model.
This paper proposes MeDiVLAD, a novel pipeline to address the above issue for
multi-level lung-ultrasound (LUS) severity scoring. In particular, we leverage
self-knowledge distillation to pretrain a vision transformer (ViT) without
labels and aggregate frame-level features via dual-level VLAD aggregation. We
show that with minimal finetuning, MeDiVLAD outperforms conventional
fully-supervised methods in both frame- and video-level scoring, while offering
classification reasoning with exceptional quality. This superior performance
enables key applications such as the automatic identification of critical lung
pathology areas and provides a robust solution for broader medical video
classification tasks.
|
2501.12528
|
Improved Coded Caching Scheme for Multi-User Information Retrieval
System
|
cs.IT math.IT
|
In this paper, we study the coded caching scheme for the $(L, K, M, N)$
multi-user information retrieval (MIR) system, which consists of a content
library containing $N$ files, a base station (BS) with $L$ antennas that cannot
access the library, and $K$ single-antenna users, each of which can cache at
most $M$ files from the library. The users communicate with the others assisted
by the BS to decode their required files. In this paper, we focus on designing
a coded caching scheme with low communication latency measured by normalized
delivery time (NDT), computational complexity, and subpacketizations. When
$\frac{KM}{N}\geq L$, we first simplify the precoding matrix in the downlink
step to an identity matrix and use the multiple-antenna placement delivery
array (MAPDA), which was originally proposed for multiple-input single-output
networks, to generate several new schemes for the MIR system. Compared to the
existing schemes, both the theoretical and numerical analyses show that our new
schemes achieve much lower computational complexity and smaller
subpacketizations with the same NDT.
|
2501.12535
|
How Does the Spatial Distribution of Pre-training Data Affect Geospatial
Foundation Models?
|
cs.LG cs.CV
|
Foundation models have made rapid advances in many domains including Earth
observation, where Geospatial Foundation Models (GFMs) can help address global
challenges such as climate change, agriculture, and disaster response. Previous
work on GFMs focused on tailoring model architecture and pre-text tasks, and
did not investigate the impact of pre-training data selection on model
performance. However, recent works from other domains show that the
pre-training data distribution is an important factor influencing the
performance of the foundation models. With this motivation, our research
explores how the geographic distribution of pre-training data affects the
performance of GFMs. We evaluated several pre-training data distributions by
sampling different compositions from a global data pool. Our experiments with
two GFMs on downstream tasks indicate that balanced and globally representative
data compositions often outperform region-specific sampling, highlighting the
importance of diversity and global coverage in pre-training data. Our results
suggest that the most appropriate data sampling technique may depend on the
specific GFM architecture. These findings will support the development of
robust GFMs by incorporating quality pre-training data distributions,
ultimately improving machine learning solutions for Earth observation.
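The balanced, globally representative composition described above can be sketched as stratified sampling from per-region data pools; the region names and tile ids below are hypothetical:

```python
import random

def balanced_sample(pools, n_total, seed=0):
    """Draw an (approximately) equal share of tiles from every region's pool."""
    rng = random.Random(seed)
    per_region = n_total // len(pools)
    sample = []
    for region in sorted(pools):  # deterministic region order
        tiles = pools[region]
        sample.extend(rng.sample(tiles, min(per_region, len(tiles))))
    return sample

pools = {r: [f"{r}-{i}" for i in range(100)] for r in ("africa", "asia", "europe")}
s = balanced_sample(pools, 30)
print(len(s))  # → 30, ten tiles per region
```

A region-specific baseline would instead draw all `n_total` tiles from a single pool.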
|
2501.12536
|
Interaction Dataset of Autonomous Vehicles with Traffic Lights and Signs
|
cs.RO cs.AI
|
This paper presents the development of a comprehensive dataset capturing
interactions between Autonomous Vehicles (AVs) and traffic control devices,
specifically traffic lights and stop signs. Derived from the Waymo Motion
dataset, our work addresses a critical gap in the existing literature by
providing real-world trajectory data on how AVs navigate these traffic control
devices. We propose a methodology for identifying and extracting relevant
interaction trajectory data from the Waymo Motion dataset, incorporating over
37,000 instances with traffic lights and 44,000 with stop signs. Our
methodology includes defining rules to identify various interaction types,
extracting trajectory data, and applying a wavelet-based denoising method to
smooth the acceleration and speed profiles and eliminate anomalous values,
thereby enhancing the trajectory quality. Quality assessment metrics indicate
that trajectories obtained in this study have anomaly proportions in
acceleration and jerk profiles reduced to near-zero levels across all
interaction categories. By making this dataset publicly available, we aim to
address the current gap in datasets containing AV interaction behaviors with
traffic lights and signs. Based on the organized and published dataset, we can
gain a more in-depth understanding of AVs' behavior when interacting with
traffic lights and signs. This will facilitate research on AV integration into
existing transportation infrastructures and networks, supporting the
development of more accurate behavioral models and simulation tools.
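The wavelet-based denoising step can be sketched with a single-level Haar transform plus soft thresholding of the detail coefficients; the paper does not state its wavelet family or threshold rule, so both are illustrative choices here:

```python
import math

def haar_dwt(x):
    """Single-level orthonormal Haar transform; len(x) must be even."""
    a = [(x[2*i] + x[2*i + 1]) / math.sqrt(2) for i in range(len(x) // 2)]
    d = [(x[2*i] - x[2*i + 1]) / math.sqrt(2) for i in range(len(x) // 2)]
    return a, d

def haar_idwt(a, d):
    x = []
    for ai, di in zip(a, d):
        x += [(ai + di) / math.sqrt(2), (ai - di) / math.sqrt(2)]
    return x

def soft(v, t):
    """Soft-threshold a single coefficient."""
    return math.copysign(max(abs(v) - t, 0.0), v)

def denoise(x, thresh):
    """Shrink the detail coefficients, keep the smooth approximation."""
    a, d = haar_dwt(x)
    return haar_idwt(a, [soft(di, thresh) for di in d])

speed = [10.0, 10.4, 9.8, 10.1, 25.0, 10.2, 9.9, 10.0]  # one anomalous spike
print(denoise(speed, thresh=2.0))
```

With a zero threshold the transform is perfectly invertible; a positive threshold attenuates spikes like the anomalous speed value while leaving smooth segments nearly unchanged.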
|
2501.12537
|
Enhancing Privacy in the Early Detection of Sexual Predators Through
Federated Learning and Differential Privacy
|
cs.CL cs.CY
|
The increased screen time and isolation caused by the COVID-19 pandemic have
led to a significant surge in cases of online grooming, which is the use of
strategies by predators to lure children into sexual exploitation. Previous
efforts to detect grooming in industry and academia have involved accessing and
monitoring private conversations through centrally-trained models or sending
private conversations to a global server. In this work, we implement a
privacy-preserving pipeline for the early detection of sexual predators. We
leverage federated learning and differential privacy in order to create safer
online spaces for children while respecting their privacy. We investigate
various privacy-preserving implementations and discuss their benefits and
shortcomings. Our extensive evaluation using real-world data proves that
privacy and utility can coexist with only a slight reduction in utility.
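A typical building block in such a pipeline is the Gaussian mechanism: clip each client update to a fixed L2 norm, then add calibrated noise before it leaves the device. The sketch below is a generic DP-SGD-style step, not the paper's exact configuration:

```python
import math
import random

def dp_sanitize(grad, clip_norm, noise_mult, rng):
    """Clip an update to L2 norm `clip_norm`, then add Gaussian noise
    (the Gaussian mechanism used in DP-SGD-style training)."""
    norm = math.sqrt(sum(g * g for g in grad))
    scale = min(1.0, clip_norm / norm) if norm > 0 else 1.0
    clipped = [g * scale for g in grad]
    sigma = noise_mult * clip_norm
    return [g + rng.gauss(0.0, sigma) for g in clipped]

rng = random.Random(0)
update = dp_sanitize([3.0, 4.0], clip_norm=1.0, noise_mult=0.5, rng=rng)
print(len(update))  # a noised, norm-clipped version of the raw gradient
```

Clipping bounds each user's influence on the model, which is what lets the added noise translate into a formal privacy guarantee.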
|
2501.12538
|
Academic Case Reports Lack Diversity: Assessing the Presence and
Diversity of Sociodemographic and Behavioral Factors related to Post COVID-19
Condition
|
cs.CL cs.AI
|
Understanding the prevalence, disparities, and symptom variations of Post
COVID-19 Condition (PCC) for vulnerable populations is crucial to improving
care and addressing intersecting inequities. This study aims to develop a
comprehensive framework for integrating social determinants of health (SDOH)
into PCC research by leveraging NLP techniques to analyze disparities and
variations in SDOH representation within PCC case reports. Following
construction of a PCC Case Report Corpus, comprising over 7,000 case reports
from the LitCOVID repository, a subset of 709 reports were annotated with 26
core SDOH-related entity types using pre-trained named entity recognition (NER)
models, human review, and data augmentation to improve quality, diversity and
representation of entity types. An NLP pipeline integrating NER, natural
language inference (NLI), trigram and frequency analyses was developed to
extract and analyze these entities. Both encoder-only transformer models and
RNN-based models were assessed for the NER objective.
Fine-tuned encoder-only BERT models outperformed traditional RNN-based models
in generalizability to distinct sentence structures and under greater class sparsity.
Exploratory analysis revealed variability in entity richness, with prevalent
entities like condition, age, and access to care, and underrepresentation of
sensitive categories like race and housing status. Trigram analysis highlighted
frequent co-occurrences among entities, including age, gender, and condition.
The NLI objective (entailment and contradiction analysis) showed attributes
like "Experienced violence or abuse" and "Has medical insurance" had high
entailment rates (82.4%-80.3%), while attributes such as "Is
female-identifying," "Is married," and "Has a terminal condition" exhibited
high contradiction rates (70.8%-98.5%).
|
2501.12539
|
Compositional Instruction Following with Language Models and
Reinforcement Learning
|
cs.LG cs.CL
|
Combining reinforcement learning with language grounding is challenging as
the agent needs to explore the environment while simultaneously learning
multiple language-conditioned tasks. To address this, we introduce a novel
method: the compositionally-enabled reinforcement learning language agent
(CERLLA). Our method reduces the sample complexity of tasks specified with
language by leveraging compositional policy representations and a semantic
parser trained using reinforcement learning and in-context learning. We
evaluate our approach in an environment requiring function approximation and
demonstrate compositional generalization to novel tasks. Our method
significantly outperforms the previous best non-compositional baseline in terms
of sample complexity on 162 tasks designed to test compositional
generalization. Our model attains a higher success rate and learns in fewer
steps than the non-compositional baseline. It reaches a success rate equal to
an oracle policy's upper-bound performance of 92%. With the same number of
environment steps, the baseline only reaches a success rate of 80%.
|
2501.12540
|
Comparative Approaches to Sentiment Analysis Using Datasets in Major
European and Arabic Languages
|
cs.CL
|
This study explores transformer-based models such as BERT, mBERT, and XLM-R
for multi-lingual sentiment analysis across diverse linguistic structures. Key
contributions include the identification of XLM-R's superior adaptability in
morphologically complex languages, achieving accuracy levels above 88%. The
work highlights fine-tuning strategies and emphasizes their significance for
improving sentiment classification in underrepresented languages.
|
2501.12542
|
Reinforcement Learning Constrained Beam Search for Parameter
Optimization of Paper Drying Under Flexible Constraints
|
cs.LG cs.AI cs.SY eess.SY
|
Existing approaches to enforcing design constraints in Reinforcement Learning
(RL) applications often rely on training-time penalties in the reward function
or training/inference-time invalid action masking, but these methods either
cannot be modified after training, or are limited in the types of constraints
that can be implemented. To address this limitation, we propose Reinforcement
Learning Constrained Beam Search (RLCBS) for inference-time refinement in
combinatorial optimization problems. This method respects flexible,
inference-time constraints that support exclusion of invalid actions and forced
inclusion of desired actions, and employs beam search to maximize sequence
probability for more sensible constraint incorporation. RLCBS is extensible to
RL-based planning and optimization problems that do not require real-time
solutions, and we apply the method to optimize process parameters for a novel
modular testbed for paper drying. An RL agent is trained to minimize energy
consumption across varying machine speed levels by generating optimal dryer
module and air supply temperature configurations. Our results demonstrate that
RLCBS outperforms NSGA-II under complex design constraints on drying module
configurations at inference-time, while providing a 2.58-fold or higher speed
improvement.
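The constraint semantics described above (excluding invalid actions and forcing desired ones) combined with beam search over sequence probability can be sketched as follows; the per-step action log-probabilities are hypothetical stand-ins for an RL policy's outputs, not RLCBS itself:

```python
import math

def constrained_beam_search(step_logprobs, beam_width, excluded=(), forced=None):
    """Beam search over per-step action log-probabilities with inference-time
    constraints: `excluded` actions are never taken, and `forced` maps a step
    index to the action that must be taken at that step."""
    forced = forced or {}
    beams = [([], 0.0)]
    for t, logp in enumerate(step_logprobs):
        candidates = []
        for seq, score in beams:
            for action, lp in logp.items():
                if action in excluded:
                    continue
                if t in forced and action != forced[t]:
                    continue
                candidates.append((seq + [action], score + lp))
        beams = sorted(candidates, key=lambda b: -b[1])[:beam_width]
    return beams[0]

steps = [
    {"x": math.log(0.7), "y": math.log(0.2), "z": math.log(0.1)},
    {"x": math.log(0.5), "y": math.log(0.4), "z": math.log(0.1)},
]
seq, score = constrained_beam_search(steps, beam_width=2,
                                     excluded={"z"}, forced={1: "y"})
print(seq)  # → ['x', 'y']
```

Because the constraints are applied at expansion time, they can be changed between inference runs without retraining the agent.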
|
2501.12547
|
Human-like conceptual representations emerge from language prediction
|
cs.CL cs.AI
|
Recent advances in large language models (LLMs) provide a new opportunity to
address the long-standing question of how concepts are represented and
organized in the mind, which is central to unravelling the nature of human
cognition. Here, we reframed the classic reverse dictionary task to simulate
human concept inference in context and investigated the emergence of human-like
conceptual representations within LLMs. We found that LLMs were able to infer
concepts from definitional descriptions and construct representation spaces
that converge towards a shared, context-independent structure. These
representations effectively predicted human behavioural judgments and aligned
well with neural activity patterns in the human brain, offering evidence for
biological plausibility. These findings demonstrate that human-like conceptual
representations and organization can naturally emerge from language prediction,
even without real-world grounding. Our work supports the view that LLMs serve
as valuable tools for understanding complex human cognition and paves the way
for better alignment between artificial and human intelligence.
|
2501.12548
|
Galaxy Codes: Advancing Achievability for Deterministic Identification
via Gaussian Channels
|
cs.IT math.IT
|
Deterministic identification offers an efficient solution for scenarios where
decoding entire messages is unnecessary. It is commonly used in alarm systems
and control systems. A key advantage of this approach is that the capacity for
deterministic identification in Gaussian channels with power constraints grows
superexponentially, unlike Shannon's transmission capacity. This allows for a
significantly higher number of messages to be transmitted using this
event-driven method. So far, only upper and lower bounds for deterministic
identification capacity have been established. Our work introduces a novel
construction: galaxy codes for deterministic identification. Using these codes,
we demonstrate an improvement in the achievability bound from 1/4 to 3/8,
representing a previously unknown advance that opens new possibilities for
efficient communication.
|
2501.12553
|
ViDDAR: Vision Language Model-Based Task-Detrimental Content Detection
for Augmented Reality
|
cs.CV
|
In Augmented Reality (AR), virtual content enhances user experience by
providing additional information. However, improperly positioned or designed
virtual content can be detrimental to task performance, as it can impair users'
ability to accurately interpret real-world information. In this paper we
examine two types of task-detrimental virtual content: obstruction attacks, in
which virtual content prevents users from seeing real-world objects, and
information manipulation attacks, in which virtual content interferes with
users' ability to accurately interpret real-world information. We provide a
mathematical framework to characterize these attacks and create a custom
open-source dataset for attack evaluation. To address these attacks, we
introduce ViDDAR (Vision language model-based Task-Detrimental content Detector
for Augmented Reality), a comprehensive full-reference system that leverages
Vision Language Models (VLMs) and advanced deep learning techniques to monitor
and evaluate virtual content in AR environments, employing a user-edge-cloud
architecture to balance performance with low latency. To the best of our
knowledge, ViDDAR is the first system to employ VLMs for detecting
task-detrimental content in AR settings. Our evaluation results demonstrate
that ViDDAR effectively understands complex scenes and detects task-detrimental
content, achieving up to 92.15% obstruction detection accuracy with a detection
latency of 533 ms, and an 82.46% information manipulation content detection
accuracy with a latency of 9.62 s.
|
2501.12554
|
Generalization Performance of Hypergraph Neural Networks
|
cs.LG
|
Hypergraph neural networks have been promising tools for handling learning
tasks involving higher-order data, with notable applications in web graphs,
such as modeling multi-way hyperlink structures and complex user interactions.
Yet, their theoretical generalization abilities remain less clear. In this
paper, we seek to develop margin-based generalization bounds for four
representative classes of hypergraph neural networks, including
convolutional-based methods (UniGCN), set-based aggregation (AllDeepSets),
invariant and equivariant transformations (M-IGN), and tensor-based approaches
(T-MPHN). Through the PAC-Bayes framework, our results reveal the manner in
which hypergraph structure and spectral norms of the learned weights can affect
the generalization bounds, where the key technical challenge lies in developing
new perturbation analysis for hypergraph neural networks, which offers a
rigorous understanding of how variations in the model's weights and hypergraph
structure impact its generalization behavior. Our empirical study examines the
relationship between the practical performance and theoretical bounds of the
models over synthetic and real-world datasets. One of our primary observations
is the strong correlation between the theoretical bounds and empirical loss,
with statistically significant consistency in most cases.
|
2501.12557
|
Understanding the LLM-ification of CHI: Unpacking the Impact of LLMs at
CHI through a Systematic Literature Review
|
cs.HC cs.AI cs.CL cs.CY
|
Large language models (LLMs) have been positioned to revolutionize HCI, by
reshaping not only the interfaces, design patterns, and sociotechnical systems
that we study, but also the research practices we use. To-date, however, there
has been little understanding of LLMs' uptake in HCI. We address this gap via a
systematic literature review of 153 CHI papers from 2020-24 that engage with
LLMs. We taxonomize: (1) domains where LLMs are applied; (2) roles of LLMs in
HCI projects; (3) contribution types; and (4) acknowledged limitations and
risks. We find LLM work in 10 diverse domains, primarily via empirical and
artifact contributions. Authors use LLMs in five distinct roles, including as
research tools or simulated users. Still, authors often raise validity and
reproducibility concerns, and overwhelmingly study closed models. We outline
opportunities to improve HCI research with and on LLMs, and provide guiding
questions for researchers to consider the validity and appropriateness of
LLM-related work.
|
2501.12558
|
Structural and mechanical properties of W-Cu compounds characterized by
a neural-network-based potential
|
cond-mat.mtrl-sci cs.LG
|
Tungsten-copper (W-Cu) compounds are widely utilized in various industrial
fields due to their exceptional mechanical properties. In this study, we have
developed a neural-network-based deep potential (DP) model that covers a wide
range of temperatures, ranging from 0 to 3,000 K, and pressures, varying from 0
to 10 GPa. This study presents a model trained using density functional theory
data for full concentration CuxW100-x compounds. Through this model, we
systematically investigate the structural and mechanical properties of W-Cu
alloys and have the following findings. First, the bulk modulus (B) and Young's
modulus (E) of W-Cu alloys exhibit a linear decline as the Cu content
increases, indicating a softening trend in the CuxW100-x compounds as the Cu
concentration rises. Second, a higher Cu content results in higher critical
strain and lower critical stress for these compounds. A brittle-to-ductile
transition in the deformation mode is predicted at around 37.5 at.% Cu
content. Third, tensile loading tests in the W-Cu gradient structure reveal
that the Cu-poor region serves as a barrier, hindering shear band propagation while
promoting new shear band formation in the Cu-rich region. The above results
from the DP model are anticipated to aid in exploring the physical mechanisms
underlying the complex phenomena of W-Cu systems and contribute to the
advancement of methodologies for materials simulation.
|
2501.12564
|
Energy Landscape Shaping for Robust Control of Atoms in Optical Lattices
|
quant-ph cs.SY eess.SY
|
Robust quantum control is crucial for realizing practical quantum
technologies. Energy landscape shaping offers an alternative to conventional
dynamic control, providing theoretically enhanced robustness and simplifying
implementation for certain applications. This work demonstrates the feasibility
of robust energy landscape control in a practical implementation with ultracold
atoms. We leverage a digital mirror device (DMD) to shape optical potentials,
creating complex energy landscapes. To achieve a desired objective, such as
efficient quantum state transfer, we formulate a novel hybrid optimization
approach that effectively handles both continuous (laser power) and discrete
(DMD pixel activation) control parameters. This approach combines constrained
quasi-Newton methods with surrogate models for efficient exploration of the
vast parameter space. Furthermore, we introduce a framework for analyzing the
robustness of the resulting control schemes against experimental uncertainties.
By modeling uncertainties as structured perturbations, we systematically assess
controller performance and identify robust solutions. We apply these techniques
to maximize spin transfer in a chain of trapped atoms, achieving high-fidelity
control while maintaining robustness. Our findings provide insights into the
experimental viability of controlled spin transfer in cold atom systems. More
broadly, the presented optimization and robustness analysis methods apply to a
wide range of quantum control problems, offering a toolkit for designing and
evaluating robust controllers in complex experimental settings.
|
2501.12570
|
O1-Pruner: Length-Harmonizing Fine-Tuning for O1-Like Reasoning Pruning
|
cs.CL
|
Recently, long-thought reasoning LLMs, such as OpenAI's O1, adopt extended
reasoning processes similar to how humans ponder over complex problems. This
reasoning paradigm significantly enhances the model's problem-solving abilities
and has achieved promising results. However, the long-thought reasoning process
leads to a substantial increase in inference time. A pressing challenge is
reducing the inference overhead of long-thought LLMs while ensuring accuracy.
In this paper, we experimentally demonstrate that long-thought reasoning models
struggle to effectively allocate token budgets based on problem difficulty and
reasoning redundancies. To address this, we propose Length-Harmonizing
Fine-Tuning (O1-Pruner), aiming at minimizing reasoning overhead while
maintaining accuracy. This effective fine-tuning method first estimates the
LLM's baseline performance through pre-sampling and then uses RL-style
fine-tuning to encourage the model to generate shorter reasoning processes
under accuracy constraints. This allows the model to achieve efficient
reasoning with lower redundancy while maintaining accuracy. Experiments on
various mathematical reasoning benchmarks show that O1-Pruner not only
significantly reduces inference overhead but also achieves higher accuracy,
providing a novel and promising solution to this challenge. Our code is coming
soon at https://github.com/StarDewXXX/O1-Pruner
|
2501.12571
|
Exploring Unknown Social Networks for Discovering Hidden Nodes
|
cs.SI cs.CY
|
In this paper, we address the challenge of discovering hidden nodes in
unknown social networks, formulating three types of hidden-node discovery
problems, namely, Sybil-node discovery, peripheral-node discovery, and
influencer discovery. We tackle these problems by employing a graph exploration
framework grounded in machine learning. Leveraging the structure of the
subgraph gradually obtained from graph exploration, we construct prediction
models to identify target hidden nodes in unknown social graphs. Through
empirical investigations of real social graphs, we evaluate the efficiency
of graph exploration strategies in uncovering hidden nodes. Our results show
that our graph exploration strategies discover hidden nodes with an efficiency
comparable to that when the graph structure is known. Specifically, the query
cost of discovering 10% of the hidden nodes is at most only 1.2 times that when
the topology is known, and the query-cost multiplier for discovering 90% of the
hidden nodes is at most only 1.4. Furthermore, our results suggest that using
node embeddings, which are low-dimensional vector representations of nodes, for
hidden-node discovery is a double-edged sword: it is effective in certain
scenarios but sometimes degrades the efficiency of node discovery. Guided by
this observation, we examine the effectiveness of using a bandit algorithm to
combine the prediction models that use node embeddings with those that do not,
and our analysis shows that the bandit-based graph exploration strategy
achieves efficient node discovery across a wide array of settings.
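The bandit-based combination of prediction models can be sketched with UCB1, treating each candidate model (with or without node embeddings) as an arm and query success as the reward; the constant-reward arms below are a toy stand-in for real predictors:

```python
import math

def ucb1(reward_fns, horizon):
    """Repeatedly pick among candidate predictors; each call to an arm's
    reward function returns a reward in [0, 1] (e.g. 'queried node was a
    target hidden node')."""
    k = len(reward_fns)
    counts, sums = [0] * k, [0.0] * k
    for t in range(1, horizon + 1):
        if t <= k:
            arm = t - 1  # play every arm once first
        else:
            arm = max(range(k), key=lambda i:
                      sums[i] / counts[i] + math.sqrt(2 * math.log(t) / counts[i]))
        r = reward_fns[arm]()
        counts[arm] += 1
        sums[arm] += r
    return counts

counts = ucb1([lambda: 0.9, lambda: 0.1], horizon=200)
print(counts)  # the better predictor (arm 0) is queried far more often
```

The exploration bonus lets the strategy keep checking the embedding-based model in case it becomes useful, matching the "double-edged sword" observation above.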
|
2501.12573
|
Leveraging LLMs to Create a Haptic Devices' Recommendation System
|
cs.MM cs.AI cs.HC cs.SY eess.SY
|
Haptic technology has seen significant growth, yet a lack of awareness of
existing haptic device design knowledge hinders development. This paper
addresses these limitations by leveraging advancements in Large Language Models
(LLMs) to develop a haptic agent, focusing specifically on Grounded Force
Feedback (GFF) devices recommendation. Our approach involves automating the
creation of a structured haptic device database using information from research
papers and product specifications. This database enables the recommendation of
relevant GFF devices based on user queries. To ensure precise and contextually
relevant recommendations, the system employs a dynamic retrieval method that
combines both conditional and semantic searches. Benchmarking against the
established UEQ and existing haptic device searching tools, the proposed haptic
recommendation agent ranks in the top 10% across all UEQ categories with mean
differences favoring the agent in nearly all subscales, and maintains no
significant performance bias across different user groups, showcasing superior
usability and user satisfaction.
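The dynamic retrieval idea, a conditional filter followed by semantic ranking, can be sketched as follows; the device entries and sparse feature vectors are hypothetical, not the paper's actual database schema:

```python
import math

def cosine(u, v):
    """Cosine similarity between two sparse feature dicts."""
    dot = sum(u.get(k, 0.0) * v.get(k, 0.0) for k in set(u) | set(v))
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def hybrid_search(devices, condition, query_vec, top_k=3):
    """Conditional filter first, then rank the survivors semantically."""
    pool = [d for d in devices if condition(d)]
    return sorted(pool, key=lambda d: -cosine(d["vec"], query_vec))[:top_k]

devices = [
    {"name": "DeviceA", "dof": 6, "vec": {"force": 1.0, "vr": 0.2}},
    {"name": "DeviceB", "dof": 3, "vec": {"force": 0.9}},
    {"name": "DeviceC", "dof": 6, "vec": {"textile": 1.0}},
]
best = hybrid_search(devices, lambda d: d["dof"] == 6, {"force": 1.0}, top_k=1)
print(best[0]["name"])  # → DeviceA
```

The hard filter guarantees that structured requirements (e.g. degrees of freedom) are met before semantic similarity decides the ranking.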
|
2501.12582
|
Ultralow-dimensionality reduction for identifying critical transitions
by spatial-temporal PCA
|
stat.ML cs.LG
|
Discovering dominant patterns and exploring dynamic behaviors especially
critical state transitions and tipping points in high-dimensional time-series
data are challenging tasks in the study of real-world complex systems, which
demand interpretable data representations to facilitate comprehension of both
spatial and temporal information within the original data space. Here, we propose a
general and analytical ultralow-dimensionality reduction method for dynamical
systems named spatial-temporal principal component analysis (stPCA) to fully
represent the dynamics of a high-dimensional time-series by only a single
latent variable without distortion, which transforms high-dimensional spatial
information into one-dimensional temporal information based on nonlinear
delay-embedding theory. The dynamics of this single variable is analytically
solved and theoretically preserves the temporal property of original
high-dimensional time-series, thereby accurately and reliably identifying the
tipping point before an upcoming critical transition. Its applications to
real-world datasets such as individual-specific heterogeneous ICU records
demonstrated the effectiveness of stPCA, which quantitatively and robustly
provides the early-warning signals of the critical/tipping state on each
patient.
|
2501.12583
|
Chasing price drains liquidity
|
cs.CE
|
Assuming that the price in a Uniswap v3 style Automated Market Maker (AMM)
follows a Geometric Brownian Motion (GBM), we prove that the strategy that
adjusts the position of liquidity to track the current price leads to a
deterministic and exponentially fast decay of liquidity. Next, assuming that
there is a Centralized Exchange (CEX), in which the price follows a GBM and the
AMM price mean reverts to the CEX price, we show numerically that the same
strategy still leads to decay. Last, we propose a strategy that increases the
liquidity even without compounding fees earned through liquidity provision.
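The GBM price model assumed in the analysis can be simulated with exact log-normal increments; this sketch covers only the price process, not Uniswap v3's liquidity accounting or the repositioning strategy itself:

```python
import math
import random

def gbm_path(s0, mu, sigma, dt, n_steps, seed=0):
    """Exact log-normal discretization of dS = mu*S dt + sigma*S dW."""
    rng = random.Random(seed)
    path = [s0]
    for _ in range(n_steps):
        z = rng.gauss(0.0, 1.0)
        path.append(path[-1] * math.exp((mu - 0.5 * sigma ** 2) * dt
                                        + sigma * math.sqrt(dt) * z))
    return path

path = gbm_path(s0=1.0, mu=0.0, sigma=0.5, dt=1 / 365, n_steps=365)
print(min(path), max(path))
```

A price-tracking LP strategy would reposition its liquidity range around `path[t]` at each step; the paper's result is that doing so drains liquidity deterministically and exponentially fast.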
|
2501.12584
|
Entropy Polarization-Based Data Compression Without Frozen Set
Construction
|
cs.IT eess.SP math.IT
|
Classical source polar codes require the construction of frozen sets for
given sources. While this scheme offers excellent theoretical performance, it
faces challenges in practical data compression systems, including sensitivity
to the accuracy and computational complexity of the construction algorithm. In
this letter, we explore the feasibility of construction-free polar compression
schemes. By optimally selecting output symbols based on the decoder's behavior,
the proposed scheme not only enhances flexibility but also achieves significant
improvements in compression rates. Several enhancements are introduced to
facilitate the practical implementation of the proposed scheme. Numerical
results demonstrate the superior performance compared to existing polar
compression approaches.
|
2501.12587
|
How Collective Intelligence Emerges in a Crowd of People Through Learned
Division of Labor: A Case Study
|
cs.MA
|
This paper investigates the factors fostering collective intelligence (CI)
through a case study of LinYi's Experiment, where over 2000 human players
collectively control an avatar car. By conducting theoretical analysis and
replicating observed behaviors through numerical simulations, we demonstrate
how self-organized division of labor (DOL) among individuals fosters the
emergence of CI and identify two essential conditions fostering CI by
formulating this problem as a stability problem of a Markov Jump Linear
System (MJLS). These conditions, independent of external stimuli, emphasize
the importance of both elite and common players in fostering CI. Additionally,
we propose an index for the emergence of CI and a distributed method for estimating
joint actions, enabling individuals to learn their optimal social roles without
global action information of the whole crowd.
|
2501.12588
|
Fundamental Limits of Non-Adaptive Group Testing with Markovian
Correlation
|
cs.IT math.IT
|
We study a correlated group testing model where items are infected according
to a Markov chain, which creates bursty infection patterns. Focusing on a very
sparse infection regime, we propose a non-adaptive testing strategy with an
efficient decoding scheme that is nearly optimal. Specifically, it achieves
asymptotically vanishing error with a number of tests that is within a
$1/\ln(2) \approx 1.44$ multiplicative factor of the fundamental entropy bound,
a result that parallels the independent group testing setting. We show that the
number of tests reduces with an increase in the expected burst length of
infected items, quantifying the advantage of exploiting correlation in test
design.
|
2501.12592
|
FedGrAINS: Personalized SubGraph Federated Learning with Adaptive
Neighbor Sampling
|
cs.LG cs.AI cs.DC cs.IR
|
Graphs are crucial for modeling relational and biological data. As datasets
grow larger in real-world scenarios, the risk of exposing sensitive information
increases, making privacy-preserving training methods like federated learning
(FL) essential to ensure data security and compliance with privacy regulations.
Recently proposed personalized subgraph FL methods have become the de-facto
standard for training personalized Graph Neural Networks (GNNs) in a federated
manner while dealing with the missing links across clients' subgraphs due to
privacy restrictions. However, personalized subgraph FL faces significant
challenges due to the heterogeneity in client subgraphs, such as degree
distributions among the nodes, which complicate federated training of graph
models. To address these challenges, we propose FedGrAINS, a novel
data-adaptive and sampling-based regularization method for subgraph FL.
FedGrAINS leverages generative flow networks (GFlowNets) to evaluate node
importance concerning clients' tasks, dynamically adjusting the message-passing
step in clients' GNNs. This adaptation reflects task-optimized sampling aligned
with a trajectory balance objective. Experimental results demonstrate that the
inclusion of \textit{FedGrAINS} as a regularizer consistently improves the FL
performance compared to baselines that do not leverage such regularization.
|
2501.12594
|
A 3-Step Optimization Framework with Hybrid Models for a Humanoid
Robot's Jump Motion
|
cs.RO
|
High dynamic jump motions are challenging tasks for humanoid robots to
achieve environment adaptation and obstacle crossing. The trajectory
optimization is a practical method to achieve high-dynamic and explosive
jumping. This paper proposes a 3-step trajectory optimization framework for
generating a jump motion for a humanoid robot. To improve iteration speed and
achieve ideal performance, the framework comprises three sub-optimizations. The
first optimization incorporates momentum, inertia, and center of pressure
(CoP), treating the robot as a static reaction momentum pendulum (SRMP) model
to generate corresponding trajectories. The second optimization maps these
trajectories to joint space using effective Quadratic Programming (QP) solvers.
Finally, the third optimization generates whole-body joint trajectories
utilizing trajectories generated by previous parts. With the combined
consideration of momentum and inertia, the robot achieves agile forward jump
motions. Simulations and experiments (Fig. \ref{Fig First page fig}) of a
forward jump with a distance of 1.0 m and a height of 0.5 m are presented in
this paper, validating the applicability of the proposed framework.
|
2501.12595
|
A Unified Invariant Learning Framework for Graph Classification
|
cs.LG cs.AI
|
Invariant learning demonstrates substantial potential for enhancing the
generalization of graph neural networks (GNNs) with out-of-distribution (OOD)
data. It aims to recognize stable features in graph data for classification,
based on the premise that these features causally determine the target label,
and their influence is invariant to changes in distribution. Along this line,
most studies have attempted to pinpoint these stable features by emphasizing
explicit substructures in the graph, such as masked or attentive subgraphs, and
primarily enforcing the invariance principle in the semantic space, i.e., graph
representations. However, we argue that focusing only on the semantic space may
not accurately identify these stable features. To address this, we introduce
the Unified Invariant Learning (UIL) framework for graph classification. It
provides a unified perspective on invariant graph learning, emphasizing both
structural and semantic invariance principles to identify more robust stable
features. In the graph space, UIL adheres to the structural invariance
principle by reducing the distance between graphons over a set of stable
features across different environments. Simultaneously, to confirm semantic
invariance, UIL underscores that the acquired graph representations should
demonstrate exemplary performance across diverse environments. We present both
theoretical and empirical evidence to confirm our method's ability to recognize
superior stable features. Moreover, through a series of comprehensive
experiments complemented by in-depth analyses, we demonstrate that UIL
considerably enhances OOD generalization, surpassing the performance of leading
baseline methods. Our codes are available at https://github.com/yongduosui/UIL.
|
2501.12596
|
Adapting OpenAI's CLIP Model for Few-Shot Image Inspection in
Manufacturing Quality Control: An Expository Case Study with Multiple
Application Examples
|
cs.CV stat.AP stat.OT
|
This expository paper introduces a simplified approach to image-based quality
inspection in manufacturing using OpenAI's CLIP (Contrastive Language-Image
Pretraining) model adapted for few-shot learning. While CLIP has demonstrated
impressive capabilities in general computer vision tasks, its direct
application to manufacturing inspection presents challenges due to the domain
gap between its training data and industrial applications. We evaluate CLIP's
effectiveness through five case studies: metallic pan surface inspection, 3D
printing extrusion profile analysis, stochastic textured surface evaluation,
automotive assembly inspection, and microstructure image classification. Our
results show that CLIP can achieve high classification accuracy with relatively
small learning sets (50-100 examples per class) for single-component and
texture-based applications. However, the performance degrades with complex
multi-component scenes. We provide a practical implementation framework that
enables quality engineers to quickly assess CLIP's suitability for their
specific applications before pursuing more complex solutions. This work
establishes CLIP-based few-shot learning as an effective baseline approach that
balances implementation simplicity with robust performance, demonstrated in
several manufacturing quality control applications.
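One common few-shot adaptation (a standard baseline, not necessarily the paper's exact recipe) builds class prototypes by averaging frozen CLIP embeddings of the labeled examples and classifies new images by cosine similarity. The toy 2-D vectors below stand in for real CLIP features.

```python
import math

def centroid(vectors):
    """Mean of a list of equal-length embedding vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def fit_prototypes(examples):
    """examples: {class_name: [embedding, ...]} -> {class_name: centroid}."""
    return {cls: centroid(vecs) for cls, vecs in examples.items()}

def classify(embedding, prototypes):
    """Assign the class whose prototype is most cosine-similar."""
    return max(prototypes, key=lambda cls: cosine(embedding, prototypes[cls]))

# Toy stand-ins for CLIP image embeddings of "ok" and "defect" parts:
protos = fit_prototypes({
    "ok": [[0.9, 0.1], [1.0, 0.0]],
    "defect": [[0.1, 0.9], [0.0, 1.0]],
})
```

With 50-100 real embeddings per class, this is the kind of lightweight pipeline a quality engineer could use to assess suitability before investing in heavier models.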
|
2501.12597
|
Multi-Instance Partial-Label Learning with Margin Adjustment
|
cs.LG
|
Multi-instance partial-label learning (MIPL) is an emerging learning
framework where each training sample is represented as a multi-instance bag
associated with a candidate label set. Existing MIPL algorithms often overlook
the margins for attention scores and predicted probabilities, leading to
suboptimal generalization performance. A critical issue with these algorithms
is that the highest prediction probability of the classifier may appear on a
non-candidate label. In this paper, we propose an algorithm named MIPLMA, i.e.,
Multi-Instance Partial-Label learning with Margin Adjustment, which adjusts the
margins for attention scores and predicted probabilities. We introduce a
margin-aware attention mechanism to dynamically adjust the margins for
attention scores and propose a margin distribution loss to constrain the
margins between the predicted probabilities on candidate and non-candidate
label sets. Experimental results demonstrate the superior performance of MIPLMA
over existing MIPL algorithms, as well as other well-established multi-instance
learning algorithms and partial-label learning algorithms.
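The candidate/non-candidate margin at the heart of this idea can be sketched as a hinge penalty. This is an illustrative simplification of the paper's margin distribution loss; the target margin below is a hypothetical hyperparameter.

```python
def candidate_margin(probs, candidates):
    """Difference between the best candidate-label probability and the best
    non-candidate-label probability; negative means the classifier's top
    prediction falls outside the candidate set."""
    best_cand = max(probs[i] for i in candidates)
    non_cand = [p for i, p in enumerate(probs) if i not in candidates]
    best_non = max(non_cand) if non_cand else 0.0
    return best_cand - best_non

def margin_loss(probs, candidates, target_margin=0.2):
    """Hinge penalty that pushes the margin above target_margin."""
    return max(0.0, target_margin - candidate_margin(probs, candidates))
```

A bag predicted as [0.2, 0.3, 0.5] with candidate set {0, 1} has a negative margin, exactly the failure mode described above, and incurs a positive loss.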
|
2501.12598
|
On Accelerating Deep Neural Network Mutation Analysis by Neuron and
Mutant Clustering
|
cs.SE cs.LG cs.NE
|
Mutation analysis of deep neural networks (DNNs) is a promising method for
effective evaluation of test data quality and model robustness, but it can be
computationally expensive, especially for large models. To alleviate this, we
present DEEPMAACC, a technique and a tool that speeds up DNN mutation analysis
through neuron and mutant clustering. DEEPMAACC implements two methods: (1)
neuron clustering to reduce the number of generated mutants and (2) mutant
clustering to reduce the number of mutants to be tested by selecting
representative mutants for testing. Both use hierarchical agglomerative
clustering to group neurons and mutants with similar weights, with the goal of
improving efficiency while maintaining mutation score. DEEPMAACC has been
evaluated on 8 DNN models across 4 popular classification datasets and two DNN
architectures. When compared to exhaustive, or vanilla, mutation analysis, the
results provide empirical evidence that the neuron clustering approach, on
average, accelerates mutation analysis by 69.77%, with an average -26.84% error
in mutation score. Meanwhile, the mutant clustering approach, on average,
accelerates mutation analysis by 35.31%, with an average 1.96% error in
mutation score. Our
results demonstrate that a trade-off can be made between mutation testing speed
and mutation score error.
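The grouping step can be sketched with a tiny average-linkage agglomerative clustering over weight vectors. This is a minimal stand-in for the tool's hierarchical clustering, with hypothetical 1-D weights; one representative per cluster would then be mutated or tested.

```python
def agglomerate(points, k):
    """Average-linkage agglomerative clustering of weight vectors down to
    k clusters; returns clusters as lists of point indices."""
    clusters = [[i] for i in range(len(points))]

    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(points[a], points[b])) ** 0.5

    def linkage(c1, c2):
        return sum(dist(a, b) for a in c1 for b in c2) / (len(c1) * len(c2))

    while len(clusters) > k:
        # merge the closest pair of clusters
        i, j = min(
            ((i, j) for i in range(len(clusters)) for j in range(i + 1, len(clusters))),
            key=lambda ij: linkage(clusters[ij[0]], clusters[ij[1]]),
        )
        clusters[i] = clusters[i] + clusters.pop(j)
    return clusters

# Neurons 0/1 and 2/3 have similar weights, so they merge first:
groups = agglomerate([[0.0], [0.1], [5.0], [5.1]], k=2)
representatives = [c[0] for c in groups]  # one representative neuron per group
```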
|
2501.12599
|
Kimi k1.5: Scaling Reinforcement Learning with LLMs
|
cs.AI cs.LG
|
Language model pretraining with next token prediction has proved effective
for scaling compute but is limited to the amount of available training data.
Scaling reinforcement learning (RL) unlocks a new axis for the continued
improvement of artificial intelligence, with the promise that large language
models (LLMs) can scale their training data by learning to explore with
rewards. However, prior published work has not produced competitive results. In
light of this, we report on the training practice of Kimi k1.5, our latest
multi-modal LLM trained with RL, including its RL training techniques,
multi-modal data recipes, and infrastructure optimization. Long context scaling
and improved policy optimization methods are key ingredients of our approach,
which establishes a simple, effective RL framework without relying on more
complex techniques such as Monte Carlo tree search, value functions, and
process reward models. Notably, our system achieves state-of-the-art reasoning
performance across multiple benchmarks and modalities -- e.g., 77.5 on AIME,
96.2 on MATH 500, 94th percentile on Codeforces, 74.9 on MathVista -- matching
OpenAI's o1. Moreover, we present effective long2short methods that use
long-CoT techniques to improve short-CoT models, yielding state-of-the-art
short-CoT reasoning results -- e.g., 60.8 on AIME, 94.6 on MATH500, 47.3 on
LiveCodeBench -- outperforming existing short-CoT models such as GPT-4o and
Claude Sonnet 3.5 by a large margin (up to +550%).
|
2501.12602
|
BLR-MoE: Boosted Language-Routing Mixture of Experts for Domain-Robust
Multilingual E2E ASR
|
cs.CL cs.SD eess.AS
|
Recently, the Mixture of Experts (MoE) architecture, such as LR-MoE, is often
used to alleviate the impact of language confusion on the multilingual ASR
(MASR) task. However, it still faces language confusion issues, especially in
mismatched domain scenarios. In this paper, we decouple language confusion in
LR-MoE into confusion in self-attention and router. To alleviate the language
confusion in self-attention, we propose, building on LR-MoE, an attention-MoE
architecture for MASR. In our new architecture, MoE is applied not only in the
feed-forward network (FFN) but also in self-attention. In addition, to improve
the robustness of the LID-based router on language confusion, we propose expert
pruning and router augmentation methods. Combining the above, we get the
boosted language-routing MoE (BLR-MoE) architecture. We verify the
effectiveness of the proposed BLR-MoE in a 10,000-hour MASR dataset.
|
2501.12604
|
Image Motion Blur Removal in the Temporal Dimension with Video Diffusion
Models
|
eess.IV cs.CV cs.LG
|
Most motion deblurring algorithms rely on spatial-domain convolution models,
which struggle with the complex, non-linear blur arising from camera shake and
object motion. In contrast, we propose a novel single-image deblurring approach
that treats motion blur as a temporal averaging phenomenon. Our core innovation
lies in leveraging a pre-trained video diffusion transformer model to capture
diverse motion dynamics within a latent space. It sidesteps explicit kernel
estimation and effectively accommodates diverse motion patterns. We implement
the algorithm within a diffusion-based inverse problem framework. Empirical
results on synthetic and real-world datasets demonstrate that our method
outperforms existing techniques in deblurring complex motion blur scenarios.
This work paves the way for utilizing powerful video diffusion models to
address single-image deblurring challenges.
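The "temporal averaging" view of blur can be made concrete: a blurred frame is modeled as the per-pixel mean of the sharp frames captured during the exposure. This is only the toy forward model; the paper inverts it with a video diffusion prior.

```python
def temporal_blur(frames):
    """Per-pixel average of a list of equally-sized grayscale frames,
    modeling the single image recorded over one exposure."""
    n = len(frames)
    h, w = len(frames[0]), len(frames[0][0])
    return [[sum(f[i][j] for f in frames) / n for j in range(w)] for i in range(h)]

# A bright pixel sweeping right over three sub-frames leaves a streak:
frames = [
    [[1.0, 0.0, 0.0]],
    [[0.0, 1.0, 0.0]],
    [[0.0, 0.0, 1.0]],
]
blurred = temporal_blur(frames)
```

Deblurring then amounts to recovering the sequence of sharp frames (or one of them) from their average, which is where the learned motion prior comes in.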
|
2501.12607
|
Low-Dimensional Representation-Driven TSK Fuzzy System for Feature
Selection
|
cs.LG
|
Feature selection can select important features to address the curse of
dimensionality. Subspace learning, a widely used dimensionality reduction
method, can
project the original data into a low-dimensional space. However, the
low-dimensional representation is often transformed back into the original
space, resulting in information loss. Additionally, gate-function-based methods
in the Takagi-Sugeno-Kang fuzzy system (TSK-FS) are commonly less
discriminative.
To address these issues, this paper proposes a novel feature selection method
that integrates subspace learning with TSK-FS. Specifically, a projection
matrix is used to fit the intrinsic low-dimensional representation.
Subsequently, the low-dimensional representation is fed to TSK-FS to assess
its usefulness. The firing strength is relaxed so that TSK-FS is not limited
by numerical underflow. Finally, the $\ell _{2,1}$-norm is introduced to select
significant features and the connection to related works is discussed. The
proposed method is evaluated against six state-of-the-art methods on eighteen
datasets, and the results demonstrate the superiority of the proposed method.
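The role of the $\ell_{2,1}$-norm can be sketched directly: it sums the $\ell_2$ norms of the projection matrix's rows, so penalizing it drives whole rows (features) toward zero, and the surviving rows mark the selected features. This is a generic illustration with a made-up matrix, not the paper's full optimization.

```python
def l21_norm(W):
    """Sum of row-wise l2 norms of a matrix W (features x components)."""
    row_norms = [sum(x * x for x in row) ** 0.5 for row in W]
    return sum(row_norms), row_norms

def select_features(W, k):
    """Indices of the k rows (features) with the largest l2 norm."""
    _, row_norms = l21_norm(W)
    return sorted(range(len(W)), key=lambda i: -row_norms[i])[:k]

W = [[0.01, 0.0],   # near-zero row -> feature 0 would be discarded
     [1.0, 2.0],
     [0.5, 0.5]]
```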
|
2501.12610
|
Exploring Wikipedia Gender Diversity Over Time -- The
Wikipedia Gender Dashboard (WGD)
|
cs.CY cs.IR
|
The Wikipedia editors' community has been actively pursuing the intent of
achieving gender equality. To that end, it is important to explore the
historical evolution of underlying gender disparities in Wikipedia articles.
This paper presents the Wikipedia Gender Dashboard (WGD), a tool designed to
enable the interaction with gender distribution data, including the average age
in every subclass of individuals (e.g., Astronauts, Politicians) over the
years. Wikipedia APIs, DBpedia, and Wikidata endpoints were used to query the
data to ensure persistent data collection. The WGD was then created with
Microsoft Power BI before being embedded on a public website. The analysis of
the data available in the WGD found that articles about women represent only
around 17% of English Wikipedia, but their share has been growing steadily over
the last 20 years. Meanwhile, the average age across genders decreased over
time. WGD also
shows that most subclasses of `Person' are male-dominated. Wikipedia editors
can make use of WGD to locate areas with marginalized genders in Wikipedia, and
increase their efforts to produce more content providing coverage for those
genders to achieve better gender equality in Wikipedia.
|
2501.12612
|
T2ISafety: Benchmark for Assessing Fairness, Toxicity, and Privacy in
Image Generation
|
cs.CL cs.CR
|
Text-to-image (T2I) models have rapidly advanced, enabling the generation of
high-quality images from text prompts across various domains. However, these
models present notable safety concerns, including the risk of generating
harmful, biased, or private content. Current research on assessing T2I safety
remains in its early stages. While some efforts have been made to evaluate
models on specific safety dimensions, many critical risks remain unexplored. To
address this gap, we introduce T2ISafety, a safety benchmark that evaluates T2I
models across three key domains: toxicity, fairness, and bias. We build a
detailed hierarchy of 12 tasks and 44 categories based on these three domains,
and meticulously collect 70K corresponding prompts. Based on this taxonomy and
prompt set, we build a large-scale T2I dataset with 68K manually annotated
images and train an evaluator capable of detecting critical risks that previous
work has failed to identify, including risks that even ultra-large proprietary
models like GPTs cannot correctly detect. We evaluate 12 prominent diffusion
models on T2ISafety and reveal several concerns including persistent issues
with racial fairness, a tendency to generate toxic content, and significant
variation in privacy protection across the models, even with defense methods
like concept erasing. Data and evaluator are released under
https://github.com/adwardlee/t2i_safety.
|
2501.12615
|
GATE: Adaptive Learning with Working Memory by Information Gating in
Multi-lamellar Hippocampal Formation
|
q-bio.NC cs.AI
|
Hippocampal formation (HF) can rapidly adapt to varied environments and build
flexible working memory (WM). To mirror the HF's mechanism on generalization
and WM, we propose a model named Generalization and Associative Temporary
Encoding (GATE), which deploys a 3-D multi-lamellar dorsoventral (DV)
architecture and learns to build up internal representations layer by layer
from externally driven information. In each lamella, the HF regions
EC3-CA1-EC5-EC3 form a re-entrant loop that selectively maintains information
through EC3 persistent activity and reads out the retained information via CA1
neurons. CA3 and EC5 further provide gating functions that control these
processes. After learning complex WM tasks, GATE forms neuron
representations that align with experimental records, including splitter, lap,
evidence, trace, delay-active cells, as well as conventional place cells.
Crucially, the DV architecture in GATE also captures information ranging from
detailed to abstract, which enables rapid generalization when the cue,
environment, or task changes, with learned representations inherited. GATE
promises a viable framework for understanding the HF's flexible memory
mechanisms and for progressively developing brain-inspired intelligent systems.
|
2501.12617
|
Deep Learning-Based Identification of Inconsistent Method Names: How Far
Are We?
|
cs.SE cs.AI
|
Concise and meaningful method names are crucial for program comprehension and
maintenance. However, method names may become inconsistent with their
corresponding implementations, causing confusion and errors. Several deep
learning (DL)-based approaches have been proposed to identify such
inconsistencies, with initial evaluations showing promising results. However,
these evaluations typically use a balanced dataset, where the number of
inconsistent and consistent names are equal. This setup, along with flawed
dataset construction, leads to false positives, making reported performance
less reliable in real-world scenarios, where most method names are consistent.
In this paper, we present an empirical study that evaluates state-of-the-art
DL-based methods for identifying inconsistent method names. We create a new
benchmark by combining automatic identification from commit histories and
manual developer inspections, reducing false positives. We evaluate five
representative DL approaches (one retrieval-based and four generation-based) on
this benchmark. Our results show that performance drops substantially when
moving from the balanced dataset to the new benchmark. We further conduct
quantitative and qualitative analyses to understand the strengths and
weaknesses of the approaches. Retrieval-based methods perform well on simple
methods and those with popular name sub-tokens but fail due to inefficient
representation techniques. Generation-based methods struggle with inaccurate
similarity calculations and immature name generation. Based on these findings,
we propose improvements using contrastive learning and large language models
(LLMs). Our study suggests that significant improvements are needed before
these DL approaches can be effectively applied to real-world software systems.
|
2501.12619
|
Quantification of Large Language Model Distillation
|
cs.CL
|
Model distillation is a fundamental technique in building large language
models (LLMs), transferring knowledge from a teacher model to a student model.
However, distillation can lead to model homogenization, reducing diversity
among models and impairing their ability to robustly handle complex or novel
tasks. These limitations underscore the need to systematically quantify the
distillation process and its impact. In this work, we propose a framework to
evaluate and quantify model distillation. Our method addresses two key aspects:
(1) Identifying identity cognition contradictions to assess discrepancies in
how models perceive and represent identity-related information, and (2)
Analyzing multi-granularity response similarities across models to measure the
extent of homogenization. Experimental results demonstrate two key insights:
(1) Well-known closed-source and open-source LLMs usually exhibit high
distillation degrees, except for Claude, Doubao, and Gemini. (2) Base LLMs show
higher distillation degrees compared to aligned LLMs. By offering a systematic
approach to improve the transparency of LLM data distillation, we call for LLMs
with more independent development and more transparent technical reports to
improve LLMs' robustness and safety. The code and data are available under
https://github.com/Aegis1863/LLMs-Distillation-Quantification.
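A crude stand-in for the multi-granularity response similarity (illustrative only; the paper's actual metric differs) averages character-level and word-level sequence overlap between two model outputs:

```python
from difflib import SequenceMatcher

def response_similarity(a, b):
    """Average of character-level and word-level overlap ratios between
    two model responses; 1.0 means identical at both granularities."""
    char_sim = SequenceMatcher(None, a, b).ratio()
    word_sim = SequenceMatcher(None, a.split(), b.split()).ratio()
    return (char_sim + word_sim) / 2
```

Averaged over many prompts, high cross-model similarity of this kind is the sort of signal one would read as homogenization.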
|
2501.12620
|
Adaptive Data Exploitation in Deep Reinforcement Learning
|
cs.LG cs.AI
|
We introduce ADEPT: Adaptive Data ExPloiTation, a simple yet powerful
framework to enhance the data efficiency and generalization in deep
reinforcement learning (RL). Specifically, ADEPT adaptively manages the use of
sampled data across different learning stages via multi-armed bandit (MAB)
algorithms, optimizing data utilization while mitigating overfitting. Moreover,
ADEPT can significantly reduce the computational overhead and accelerate a wide
range of RL algorithms. We test ADEPT on benchmarks including Procgen,
MiniGrid, and PyBullet. Extensive simulation demonstrates that ADEPT can
achieve superior performance with remarkable computational efficiency, offering
a practical solution to data-efficient RL. Our code is available at
https://github.com/yuanmingqi/ADEPT.
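The bandit-driven scheduling idea can be sketched with a standard UCB1 learner choosing among data-reuse settings. This is a generic sketch; the arms and the reward signal below are hypothetical, not ADEPT's actual configuration.

```python
import math

class UCB1:
    """UCB1 bandit over a small set of data-exploitation settings
    (e.g., replay ratios); reward could be a learning-progress signal."""
    def __init__(self, n_arms, c=2.0):
        self.counts = [0] * n_arms
        self.values = [0.0] * n_arms
        self.c = c
        self.t = 0

    def select(self):
        self.t += 1
        for arm, n in enumerate(self.counts):
            if n == 0:          # play every arm once first
                return arm
        return max(
            range(len(self.counts)),
            key=lambda a: self.values[a]
            + self.c * math.sqrt(math.log(self.t) / self.counts[a]),
        )

    def update(self, arm, reward):
        """Incremental mean update of the chosen arm's value estimate."""
        self.counts[arm] += 1
        self.values[arm] += (reward - self.values[arm]) / self.counts[arm]

# Hypothetical mean "learning progress" per replay-ratio arm:
arm_reward = [0.1, 0.9, 0.5]
bandit = UCB1(n_arms=3)
for _ in range(300):
    a = bandit.select()
    bandit.update(a, arm_reward[a])
```

Over training, the bandit concentrates pulls on the setting that yields the most progress, which is the mechanism for adapting data usage across learning stages.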
|
2501.12622
|
Towards Robust Multi-tab Website Fingerprinting
|
cs.CR cs.AI
|
Website fingerprinting enables an eavesdropper to determine which websites a
user is visiting over an encrypted connection. State-of-the-art website
fingerprinting (WF) attacks have demonstrated effectiveness even against
Tor-protected network traffic. However, existing WF attacks have critical
limitations on accurately identifying websites in multi-tab browsing sessions,
where the holistic pattern of individual websites is no longer preserved, and
the number of tabs opened by a client is unknown a priori. In this paper, we
propose ARES, a novel WF framework natively designed for multi-tab WF attacks.
ARES formulates the multi-tab attack as a multi-label classification problem
and solves it using the novel Transformer-based models. Specifically, ARES
extracts local patterns based on multi-level traffic aggregation features and
utilizes the improved self-attention mechanism to analyze the correlations
between these local patterns, effectively identifying websites. We implement a
prototype of ARES and extensively evaluate its effectiveness using our
large-scale datasets collected over multiple months. The experimental results
illustrate that ARES achieves optimal performance in several realistic
scenarios. Further, ARES remains robust even against various WF defenses.
|
2501.12624
|
Toward Model-centric Heterogeneous Federated Graph Learning: A
Knowledge-driven Approach
|
cs.LG cs.DC
|
Federated graph learning (FGL) has emerged as a promising paradigm for
collaborative machine learning, enabling multiple parties to jointly train
models while preserving the privacy of raw graph data. However, existing FGL
methods often overlook the model-centric heterogeneous FGL (MHtFGL) problem,
which arises in real-world applications, such as the aggregation of models from
different companies with varying scales and architectures. MHtFGL presents an
additional challenge: the diversity of client model architectures hampers
common learning and integration of graph representations. To address this
issue, we propose the Federated Graph Knowledge Collaboration (FedGKC)
framework, comprising two key components: Client-side Self-Mutual Knowledge
Distillation, which fosters effective knowledge sharing among clients through
copilot models; and Server-side Knowledge-Aware Model Aggregation, which
enhances model integration by accounting for the knowledge acquired by clients.
Experiments on eight benchmark datasets demonstrate that FedGKC achieves an
average accuracy improvement of 3.74% over baseline models in MHtFGL scenarios,
while also maintaining excellent performance in homogeneous settings.
|
2501.12626
|
The Intrinsic State Variable in Fundamental Lemma and Its Use in
Stability Design for Data-based Control
|
eess.SY cs.SY math.DS
|
In the data-based setting, analysis and control design of dynamical systems
using measured data are typically based on overlapping trajectory segments of
the input and output variables. This could lead to complex designs because the
system internal dynamics, which is typically reflected by the system state
variable, is unavailable. In this paper, we will show that the coefficient
vector in a modified version of Willems' fundamental lemma is an intrinsic and
observable state variable for the system behavior. This argument evolves from
the behavioral framework without requiring prior knowledge of the causality
among system variables or any predefined representation structure
(e.g., a state space representation). Such a view allows for the construction
of a state map based on the fundamental lemma, bridging the trajectory space
and the state space. The state property of the coefficient vector allows for a
simple stability design approach using memoryless quadratic functions of it as
Lyapunov functions, from which the control action for each step can be
explicitly constructed. Using the coefficient vector as a state variable could
find wide application in the analysis and control design of dynamical systems,
including directions beyond those discussed in this paper.
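The trajectory-to-state bridge rests on the Hankel matrix of measured data: any sufficiently long system trajectory can be written as $Hg$ for some coefficient vector $g$, and that vector is the state-like object discussed above. A minimal sketch of the construction, for a scalar signal and illustrative data only:

```python
def hankel(signal, depth):
    """Hankel matrix of a scalar signal: row i, column j holds
    signal[i + j], so column j is the length-`depth` window starting at j."""
    cols = len(signal) - depth + 1
    return [[signal[i + j] for j in range(cols)] for i in range(depth)]

def combine(H, g):
    """Trajectory segment H @ g generated by coefficient vector g."""
    return [sum(h * gi for h, gi in zip(row, g)) for row in H]

H = hankel([1, 2, 3, 4, 5], depth=2)   # columns: [1,2], [2,3], [3,4], [4,5]
```

Each unit vector g reproduces one recorded window; general g mixes the recorded windows, which is exactly how new trajectories are parameterized in the fundamental lemma.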
|
2501.12627
|
Deep Reinforcement Learning with Hybrid Intrinsic Reward Model
|
cs.LG
|
Intrinsic reward shaping has emerged as a prevalent approach to solving
hard-exploration and sparse-rewards environments in reinforcement learning
(RL). While single intrinsic rewards, such as curiosity-driven or novelty-based
methods, have shown effectiveness, they often limit the diversity and
efficiency of exploration. Moreover, the potential and principle of combining
multiple intrinsic rewards remains insufficiently explored. To address this
gap, we introduce HIRE (Hybrid Intrinsic REward), a flexible and elegant
framework for creating hybrid intrinsic rewards through deliberate fusion
strategies. With HIRE, we conduct a systematic analysis of the application of
hybrid intrinsic rewards in both general and unsupervised RL across multiple
benchmarks. Extensive experiments demonstrate that HIRE can significantly
enhance exploration efficiency and diversity, as well as skill acquisition in
complex and dynamic settings.
|
2501.12632
|
TeD-Loc: Text Distillation for Weakly Supervised Object Localization
|
cs.CV cs.LG
|
Weakly supervised object localization (WSOL) using classification models
trained with only image-class labels remains an important challenge in computer
vision. Given their reliance on classification objectives, traditional WSOL
methods like class activation mapping focus on the most discriminative object
parts, often missing the full spatial extent. In contrast, recent WSOL methods
based on vision-language models like CLIP require ground truth classes or
external classifiers to produce a localization map, limiting their deployment
in downstream tasks. Moreover, methods like GenPromp attempt to address these
issues but introduce considerable complexity due to their reliance on
conditional denoising processes and intricate prompt learning. This paper
introduces Text Distillation for Localization (TeD-Loc), an approach that
directly distills knowledge from CLIP text embeddings into the model backbone
and produces patch-level localization. Multiple instance learning of these
image patches allows for accurate localization and classification using one
model without requiring external classifiers. Such integration of textual and
visual modalities addresses the longstanding challenge of achieving accurate
localization and classification concurrently, as WSOL methods in the literature
typically converge at different epochs. Extensive experiments show that
leveraging text embeddings and localization cues provides a cost-effective WSOL
model. TeD-Loc improves Top-1 LOC accuracy over state-of-the-art models by
about 5% on both CUB and ILSVRC datasets, while significantly reducing
computational complexity compared to GenPromp.
|
2501.12633
|
Inverse Reinforcement Learning with Switching Rewards and History
Dependency for Characterizing Animal Behaviors
|
cs.LG cs.AI
|
Traditional approaches to studying decision-making in neuroscience focus on
simplified behavioral tasks where animals perform repetitive, stereotyped
actions to receive explicit rewards. While informative, these methods constrain
our understanding of decision-making to short timescale behaviors driven by
explicit goals. In natural environments, animals exhibit more complex,
long-term behaviors driven by intrinsic motivations that are often
unobservable. Recent works in time-varying inverse reinforcement learning (IRL)
aim to capture shifting motivations in long-term, freely moving behaviors.
However, a crucial challenge remains: animals make decisions based on their
history, not just their current state. To address this, we introduce SWIRL
(SWitching IRL), a novel framework that extends traditional IRL by
incorporating time-varying, history-dependent reward functions. SWIRL models
long behavioral sequences as transitions between short-term decision-making
processes, each governed by a unique reward function. SWIRL incorporates
biologically plausible history dependency to capture how past decisions and
environmental contexts shape behavior, offering a more accurate description of
animal decision-making. We apply SWIRL to simulated and real-world animal
behavior datasets and show that it outperforms models lacking history
dependency, both quantitatively and qualitatively. This work presents the first
IRL model to incorporate history-dependent policies and rewards to advance our
understanding of complex, naturalistic decision-making in animals.
|
2501.12635
|
Multiple Queries with Multiple Keys: A Precise Prompt Matching Paradigm
for Prompt-based Continual Learning
|
cs.CV
|
Continual learning requires machine learning models to continuously acquire
new knowledge in dynamic environments while avoiding the forgetting of previous
knowledge. Prompt-based continual learning methods effectively address the
issue of catastrophic forgetting through prompt expansion and selection.
However, existing approaches often suffer from low accuracy in prompt
selection, which can result in the model receiving biased knowledge and making
biased predictions. To address this issue, we propose the Multiple Queries with
Multiple Keys (MQMK) prompt matching paradigm for precise prompt selection. The
goal of MQMK is to select the prompts whose training data distribution most
closely matches that of the test sample. Specifically, Multiple Queries enable
precise breadth search by introducing task-specific knowledge, while Multiple
Keys perform deep search by representing the feature distribution of training
samples at a fine-grained level. Experiments show that MQMK enhances the prompt
matching rate by over 30% in challenging scenarios and achieves
state-of-the-art performance on three widely adopted continual learning
benchmarks. Once this paper is accepted, we will release the code.
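The multiple-queries/multiple-keys matching can be sketched as a max-over-pairs cosine score: each prompt keeps several keys describing its training-data features, each test sample produces several queries, and the prompt with the best query-key match wins. The embeddings below are toy stand-ins for the paper's learned vectors.

```python
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def select_prompt(queries, prompt_keys):
    """Score each prompt by its best query-key cosine match and return the
    highest-scoring prompt id.  queries: list of vectors; prompt_keys:
    {prompt_id: [key vectors]}."""
    def score(keys):
        return max(cosine(q, k) for q in queries for k in keys)
    return max(prompt_keys, key=lambda pid: score(prompt_keys[pid]))

prompts = {
    "task_A": [[1.0, 0.0], [0.9, 0.1]],
    "task_B": [[0.0, 1.0], [0.1, 0.9]],
}
```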
|
2501.12637
|
DWTNeRF: Boosting Few-shot Neural Radiance Fields via Discrete Wavelet
Transform
|
cs.CV
|
Neural Radiance Fields (NeRF) has achieved superior performance in novel view
synthesis and 3D scene representation, but its practical applications are
hindered by slow convergence and reliance on dense training views. To this end,
we present DWTNeRF, a unified framework based on Instant-NGP's fast-training
hash encoding. It is coupled with regularization terms designed for few-shot
NeRF, which operates on sparse training views. Our DWTNeRF additionally
includes a novel Discrete Wavelet loss that allows explicit prioritization of
low frequencies directly in the training objective, reducing few-shot NeRF's
overfitting on high frequencies in earlier training stages. We also introduce a
model-based approach, based on multi-head attention, that is compatible with
INGP, which is sensitive to architectural changes. On the 3-shot LLFF
benchmark, DWTNeRF outperforms Vanilla INGP by 15.07% in PSNR, 24.45% in SSIM
and 36.30% in LPIPS. Our approach encourages a re-thinking of current few-shot
approaches for fast-converging implicit representations like INGP or 3DGS.
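The low-frequency-first objective can be sketched with a single-level Haar transform that weights approximation (low-frequency) errors more than detail (high-frequency) errors. The 1-D signals and weights below are illustrative, not the paper's exact loss.

```python
def haar_dwt(x):
    """Single-level 1-D Haar transform: (approximation, detail) given by
    neighboring-sample averages and half-differences."""
    approx = [(x[2 * i] + x[2 * i + 1]) / 2 for i in range(len(x) // 2)]
    detail = [(x[2 * i] - x[2 * i + 1]) / 2 for i in range(len(x) // 2)]
    return approx, detail

def wavelet_loss(pred, target, low_w=1.0, high_w=0.1):
    """MSE in the wavelet domain, weighting low frequencies more heavily."""
    pa, pd = haar_dwt(pred)
    ta, td = haar_dwt(target)
    low = sum((p - t) ** 2 for p, t in zip(pa, ta)) / len(pa)
    high = sum((p - t) ** 2 for p, t in zip(pd, td)) / len(pd)
    return low_w * low + high_w * high

target = [1.0, 1.0, 2.0, 2.0]
high_freq_err = wavelet_loss([1.1, 0.9, 2.0, 2.0], target)  # detail-only error
low_freq_err = wavelet_loss([1.1, 1.1, 2.0, 2.0], target)   # approx-only error
```

An equal-magnitude perturbation is penalized more when it lands in the low-frequency band, which is the prioritization the loss is meant to express.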
|
2501.12640
|
Dynamics of Toxicity in Political Podcasts
|
cs.CL cs.AI
|
Toxicity in digital media poses significant challenges, yet little attention
has been given to its dynamics within the rapidly growing medium of podcasts.
This paper addresses this gap by analyzing political podcast data to study the
emergence and propagation of toxicity, focusing on conversation chains:
structured reply patterns within podcast transcripts. Leveraging
state-of-the-art transcription models and advanced conversational analysis
techniques, we systematically examine toxic discourse in over 30 popular
political podcasts in the United States. Our key contributions include: (1)
creating a comprehensive dataset of transcribed and diarized political
podcasts, identifying thousands of toxic instances using Google's Perspective
API, (2) uncovering concerning trends where a majority of episodes contain at
least one toxic instance, (3) introducing toxic conversation chains and
analyzing their structural and linguistic properties, revealing characteristics
such as longer durations, repetitive patterns, figurative language, and
emotional cues tied to anger and annoyance, (4) identifying demand-related
words like 'want', 'like', and 'know' as precursors to toxicity, and (5)
developing predictive models to anticipate toxicity shifts based on annotated
change points. Our findings provide critical insights into podcast toxicity and
establish a foundation for future research on real-time monitoring and
intervention mechanisms to foster healthier discourse in this influential
medium.
|
2501.12644
|
Current Opinions on Memristor-Accelerated Machine Learning Hardware
|
cs.ET cs.AR cs.LG eess.SP physics.app-ph
|
The unprecedented advancement of artificial intelligence has placed immense
demands on computing hardware, but traditional silicon-based semiconductor
technologies are approaching their physical and economic limits, prompting the
exploration of novel computing paradigms. Memristor offers a promising
solution, enabling in-memory analog computation and massive parallelism, which
leads to low latency and power consumption. This manuscript reviews the current
status of memristor-based machine learning accelerators, highlighting the
milestones achieved in developing prototype chips that not only accelerate
neural network inference but also tackle other machine learning tasks. More
importantly, it discusses our opinion on current key challenges that remain in
this field, such as device variation, the need for efficient peripheral
circuitry, and systematic co-design and optimization. We also share our
perspective on potential future directions, some of which address existing
challenges while others explore untouched territories. By addressing these
challenges through interdisciplinary efforts spanning device engineering,
circuit design, and systems architecture, memristor-based accelerators could
significantly advance the capabilities of AI hardware, particularly for edge
applications where power efficiency is paramount.
|
2501.12651
|
The potential -- and the pitfalls -- of using pre-trained language
models as cognitive science theories
|
cs.CL cs.AI
|
Many studies have evaluated the cognitive alignment of Pre-trained Language
Models (PLMs), i.e., their correspondence to adult performance across a range
of cognitive domains. Recently, the focus has expanded to the developmental
alignment of these models: identifying phases during training where
improvements in model performance track improvements in children's thinking
over development. However, there are many challenges to the use of PLMs as
cognitive science theories, including different architectures, different
training data modalities and scales, and limited model interpretability. In
this paper, we distill lessons learned from treating PLMs not as engineering
artifacts, but as cognitive science and developmental science models. We review
assumptions used by researchers to map measures of PLM performance to measures
of human performance. We identify potential pitfalls of this approach to
understanding human thinking, and we end by enumerating criteria for using PLMs
as credible accounts of cognition and cognitive development.
|
2501.12654
|
AnyNav: Visual Neuro-Symbolic Friction Learning for Off-road Navigation
|
cs.RO
|
Off-road navigation is essential for a wide range of applications in field
robotics such as planetary exploration and disaster response. However, it
remains an unresolved challenge due to the unstructured environments and
inherent complexity of terrain-vehicle interactions. Traditional physics-based
methods struggle to accurately model the nonlinear dynamics of these
interactions, while data-driven approaches often suffer from overfitting to
specific motion patterns, vehicle sizes, and types, limiting their
generalizability. To overcome these challenges, we introduce a vision-based
friction estimation framework grounded in neuro-symbolic principles,
integrating neural networks for visual perception with symbolic reasoning for
physical modeling. This enables significantly improved generalization through
explicit physical reasoning that incorporates the predicted friction.
Additionally, we develop a physics-informed planner that leverages the learned
friction coefficient to generate physically feasible and efficient paths, along
with corresponding speed profiles. We refer to our approach as AnyNav and
evaluate it in both simulation and real-world experiments, demonstrating its
utility and robustness across various off-road scenarios and multiple types of
four-wheeled vehicles. These results mark an important step toward developing
neuro-symbolic spatial intelligence to reason about complex, unstructured
environments and enable autonomous off-road navigation in challenging
scenarios. Video demonstrations are available at https://sairlab.org/anynav/,
where the source code will also be released.
|
2501.12656
|
PPO-Based Vehicle Control for Ramp Merging Scheme Assisted by Enhanced
C-V2X
|
cs.NI cs.LG
|
On-ramp merging presents a critical challenge in autonomous driving, as
vehicles from merging lanes need to dynamically adjust their positions and
speeds while monitoring traffic on the main road to prevent collisions. To
address this challenge, we propose a novel merging control scheme based on
reinforcement learning, which integrates lateral control mechanisms. This
approach ensures the smooth integration of vehicles from the merging lane onto
the main road, optimizing both fuel efficiency and passenger comfort.
Furthermore, we recognize the impact of vehicle-to-vehicle (V2V) communication
on control strategies and introduce an enhanced protocol leveraging Cellular
Vehicle-to-Everything (C-V2X) Mode 4. This protocol aims to reduce the Age of
Information (AoI) and improve communication reliability. In our simulations, we
employ two AoI-based metrics to rigorously assess the protocol's effectiveness
in autonomous driving scenarios. By combining the NS3 network simulator with
Python, we simulate V2V communication and vehicle control simultaneously. The
results demonstrate that the enhanced C-V2X Mode 4 outperforms the standard
version, while the proposed control scheme ensures safe and reliable vehicle
operation during on-ramp merging.
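The Age of Information metric used to evaluate the protocol can be computed as follows. This is a generic sketch of time-averaged AoI, not the paper's specific metrics: age grows linearly between receptions and resets to the packet's delay on each reception, so the time integral is a sum of trapezoids.

```python
def average_aoi(events, horizon):
    """Average Age of Information over [0, horizon].

    events: list of (recv_time, gen_time) tuples, sorted by recv_time,
            each marking reception of a packet generated at gen_time.
    Assumes age 0 at t = 0.
    """
    area = 0.0
    t_prev, age_prev = 0.0, 0.0
    for recv, gen in events:
        dt = recv - t_prev
        area += dt * (age_prev + dt / 2.0)  # linear growth segment
        age_prev = recv - gen               # age resets to packet delay
        t_prev = recv
    dt = horizon - t_prev
    area += dt * (age_prev + dt / 2.0)      # tail segment after last reception
    return area / horizon
```

Lower average AoI means fresher state information at the receiving vehicle, which is what the enhanced C-V2X Mode 4 protocol targets.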
|
2501.12660
|
Extracting General-use Transformers for Low-resource Languages via
Knowledge Distillation
|
cs.CL
|
In this paper, we propose the use of simple knowledge distillation to produce
smaller and more efficient single-language transformers from Massively
Multilingual Transformers (MMTs) to alleviate the tradeoffs associated with
their use in low-resource settings. Using Tagalog as a case study, we show that
these smaller single-language models perform on-par with strong baselines in a
variety of benchmark tasks in a much more efficient manner. Furthermore, we
investigate additional steps during the distillation process that improve the
soft supervision of the target language, and provide a number of analyses and
ablations to show the efficacy of the proposed method.
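The standard soft-supervision objective underlying such distillation can be sketched as below: a temperature-softened cross-entropy against the teacher's distribution, mixed with the usual hard-label loss. The temperature and mixing weight are illustrative defaults, not the paper's settings.

```python
import numpy as np

def softmax(z, T=1.0):
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """Mix of soft (teacher) and hard (gold-label) cross-entropy terms."""
    p_t = softmax(teacher_logits, T)
    log_p_s = np.log(softmax(student_logits, T))
    # T**2 rescaling keeps soft-gradient magnitudes comparable across T.
    soft = -(p_t * log_p_s).sum(axis=-1).mean() * (T ** 2)
    hard = -np.log(
        softmax(student_logits)[np.arange(len(labels)), labels]
    ).mean()
    return alpha * soft + (1 - alpha) * hard
```

A single-language student trained this way inherits the MMT's behavior on the target language while discarding the parameters spent on other languages.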
|
2501.12666
|
Explicit Eigenvalue Regularization Improves Sharpness-Aware Minimization
|
cs.LG cs.CV
|
Sharpness-Aware Minimization (SAM) has attracted significant attention for
its effectiveness in improving generalization across various tasks. However,
its underlying principles remain poorly understood. In this work, we analyze
SAM's training dynamics using the maximum eigenvalue of the Hessian as a
measure of sharpness, and propose a third-order stochastic differential
equation (SDE), which reveals that the dynamics are driven by a complex mixture
of second- and third-order terms. We show that alignment between the
perturbation vector and the top eigenvector is crucial for SAM's effectiveness
in regularizing sharpness, but find that this alignment is often inadequate in
practice, limiting SAM's efficiency. Building on these insights, we introduce
Eigen-SAM, an algorithm that explicitly aims to regularize the top Hessian
eigenvalue by aligning the perturbation vector with the leading eigenvector. We
validate the effectiveness of our theory and the practical advantages of our
proposed approach through comprehensive experiments. Code is available at
https://github.com/RitianLuo/EigenSAM.
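The alignment step at the heart of Eigen-SAM can be illustrated with power iteration on Hessian-vector products. This is a schematic sketch, not the released implementation: the exact update that combines the gradient direction with the estimated top eigenvector is an assumption here.

```python
import numpy as np

def top_eigenvector(hvp, dim, iters=50, seed=0):
    """Estimate the leading Hessian eigenvector via power iteration.

    hvp: function v -> H @ v (Hessian-vector product; in practice obtained
         from autodiff rather than an explicit Hessian).
    """
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(dim)
    for _ in range(iters):
        v = hvp(v)
        v = v / np.linalg.norm(v)
    return v

def aligned_perturbation(grad, v_top, rho=0.05):
    """SAM-style perturbation with its component along v_top reinforced."""
    e = grad / (np.linalg.norm(grad) + 1e-12)
    # Keep the sign of the projection so the step ascends the sharp direction.
    e = e + np.sign(e @ v_top) * v_top
    return rho * e / np.linalg.norm(e)
```

Plain SAM perturbs along the gradient alone; when that direction is nearly orthogonal to the top eigenvector, sharpness regularization weakens, which is the gap this alignment targets.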
|
2501.12667
|
Sequential Change Point Detection via Denoising Score Matching
|
stat.ML cs.LG
|
Sequential change-point detection plays a critical role in numerous
real-world applications, where timely identification of distributional shifts
can greatly mitigate adverse outcomes. Classical methods commonly rely on
parametric density assumptions of pre- and post-change distributions, limiting
their effectiveness for high-dimensional, complex data streams. This paper
proposes a score-based CUSUM change-point detection method, in which the score
functions of the data distribution are estimated by injecting noise and
applying denoising score matching. We consider both offline and online versions
of score estimation. Through theoretical analysis, we demonstrate that
denoising score matching can enhance detection power by effectively controlling
the injected noise scale. Finally, we validate the practical efficacy of our
method through numerical experiments on two synthetic datasets and a real-world
earthquake precursor detection task, demonstrating its effectiveness in
challenging scenarios.
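The CUSUM recursion that such a detector runs online is standard and can be sketched as follows; the per-sample statistic fed into it (here left abstract) is where the denoising-score-matching estimates enter.

```python
def cusum(stats, threshold):
    """CUSUM recursion: S_t = max(0, S_{t-1} + s_t); alarm when S_t > threshold.

    stats: per-sample detection scores s_t, negative in expectation before the
           change and positive after it (e.g., a statistic built from the
           estimated pre- and post-change score functions).
    Returns the index of the first alarm, or None if no alarm is raised.
    """
    S = 0.0
    for t, s in enumerate(stats):
        S = max(0.0, S + s)  # drift resets at zero, accumulates after change
        if S > threshold:
            return t
    return None
```

The threshold trades off false-alarm rate against detection delay; the paper's analysis shows the injected noise scale controls how sharply the statistic's drift separates before and after the change.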
|