| id | title | categories | abstract |
|---|---|---|---|
2501.15539
|
Studying Behavioral Addiction by Combining Surveys and Digital Traces: A
Case Study of TikTok
|
cs.SI cs.CY
|
Opaque algorithms disseminate and mediate the content that users consume on
online social media platforms. This algorithmic mediation serves users with
content to their liking; on the other hand, it may pose several inadvertent
risks to society at scale. While some of these risks, e.g., filter bubbles or
the dissemination of hateful content, are well studied in the community,
behavioral addiction, designated by the Digital Services Act (DSA) as a
potential systemic risk, has been understudied. In this work, we aim to study
whether one can effectively diagnose behavioral addiction using digital data
traces from social media platforms. Focusing on the TikTok short-format video
platform as a case study, we employ a novel mixed methodology combining survey
responses with data donations of behavioral traces. We survey 1590 TikTok
users and stratify them into three addiction groups (i.e.,
less/moderately/highly likely addicted). Then, we obtain data donations from
107 surveyed participants. By analyzing users' data we find, among other
things, that highly likely addicted users spend more time watching TikTok
videos and keep coming back to TikTok throughout the day, indicating a
compulsion to use the platform. Finally, using basic user engagement features,
we train classifier models to identify highly likely addicted users with
$F_1 \geq 0.55$. The performance of the classifier models suggests that
predicting addicted users solely from their usage is rather difficult.
|
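For reference, the $F_1 \geq 0.55$ threshold quoted above combines precision and recall into a single score. A minimal Python illustration; the confusion-matrix counts are invented, not the paper's data:

```python
# F1 = 2PR / (P + R), the harmonic mean of precision P and recall R.
def f1_score(tp, fp, fn):
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# e.g. 50 true positives, 40 false positives, 40 false negatives
print(f1_score(50, 40, 40))  # ~0.556, just above the reported 0.55
print(f1_score(10, 0, 0))    # perfect classifier -> 1.0
```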
2501.15542
|
Estimating the Optimal Number of Clusters in Categorical Data Clustering
by Silhouette Coefficient
|
cs.LG
|
The problem of estimating the number of clusters (say k) is one of the major
challenges for partitional clustering. This paper proposes an algorithm
named k-SCC to estimate the optimal k in categorical data clustering. For the
clustering step, the algorithm uses the kernel density estimation approach to
define cluster centers. In addition, it uses an information-theoretic
dissimilarity to measure the distance between centers and objects in each
cluster. A silhouette-analysis-based approach is then used to evaluate the
quality of the different clusterings obtained in the former step and to choose
the best k. Comparative experiments were conducted on both synthetic and real
datasets to compare the performance of k-SCC with three other algorithms.
Experimental results show that k-SCC outperforms the compared algorithms in
determining the number of clusters for each dataset.
|
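The silhouette step above can be sketched in plain Python. This toy version uses Euclidean distance on numeric points, whereas k-SCC pairs the same idea with a categorical, information-theoretic dissimilarity:

```python
# Silhouette coefficient: s(i) = (b(i) - a(i)) / max(a(i), b(i)), where a(i)
# is the mean distance from point i to its own cluster and b(i) the mean
# distance to the nearest other cluster. The dataset-level score is the mean
# over all points; picking k maximizing this score is the selection idea.
def silhouette(points, labels):
    def dist(p, q):
        return sum((x - y) ** 2 for x, y in zip(p, q)) ** 0.5
    clusters = {}
    for i, l in enumerate(labels):
        clusters.setdefault(l, []).append(i)
    scores = []
    for i, l in enumerate(labels):
        own = [j for j in clusters[l] if j != i]
        if not own:            # singleton cluster: silhouette defined as 0
            scores.append(0.0)
            continue
        a = sum(dist(points[i], points[j]) for j in own) / len(own)
        b = min(sum(dist(points[i], points[j]) for j in idx) / len(idx)
                for m, idx in clusters.items() if m != l)
        scores.append((b - a) / max(a, b))
    return sum(scores) / len(scores)

# Two well-separated clusters score close to 1; a bad labeling goes negative.
pts = [(0, 0), (0, 1), (10, 10), (10, 11)]
print(silhouette(pts, [0, 0, 1, 1]))
print(silhouette(pts, [0, 1, 0, 1]))
```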
2501.15544
|
Advancing Generative Artificial Intelligence and Large Language Models
for Demand Side Management with Internet of Electric Vehicles
|
cs.LG cs.AI
|
Generative artificial intelligence, particularly through large language
models (LLMs), is poised to transform energy optimization and demand side
management (DSM) within microgrids. This paper explores the integration of LLMs
into energy management, emphasizing their roles in automating the optimization
of DSM strategies with the Internet of Electric Vehicles. We investigate challenges
and solutions associated with DSM and explore the new opportunities presented
by leveraging LLMs. Then, we propose an innovative solution that enhances LLMs
with retrieval-augmented generation for automatic problem formulation, code
generation, and customized optimization. We present a case study to
demonstrate the effectiveness of our proposed solution in charging scheduling
and optimization for electric vehicles, highlighting our solution's significant
advancements in energy efficiency and user adaptability. This work underscores
the potential of LLMs for energy optimization and fosters a new era of
intelligent DSM solutions.
|
2501.15547
|
Building Efficient Lightweight CNN Models
|
cs.CV cs.AI cs.LG
|
Convolutional Neural Networks (CNNs) are pivotal in image classification
tasks due to their robust feature extraction capabilities. However, their high
computational and memory requirements pose challenges for deployment in
resource-constrained environments. This paper introduces a methodology to
construct lightweight CNNs while maintaining competitive accuracy. The
approach integrates two stages of training: a dual-input-output model and
transfer learning with progressive unfreezing. The dual-input-output model
trains on original and augmented datasets, enhancing robustness. Progressive
unfreezing is applied to the unified model to optimize pre-learned features
during fine-tuning, enabling faster convergence and improved model accuracy.
The methodology was evaluated on three benchmark datasets: handwritten digit
MNIST, Fashion MNIST, and CIFAR-10. The proposed model achieved a
state-of-the-art accuracy of 99% on handwritten digit MNIST and 89% on
Fashion MNIST, with only 14,862 parameters and a model size of 0.17 MB. While
performance on CIFAR-10 was comparatively lower (65% with fewer than 20,000
parameters), the results highlight the scalability of this method. The final
model demonstrated fast inference times and low latency, making it suitable for
real-time applications.
Future directions include exploring advanced augmentation techniques,
improving architectural scalability for complex datasets, and extending the
methodology to tasks beyond classification. This research underscores the
potential for creating efficient, scalable, and task-specific CNNs for diverse
applications.
|
2501.15549
|
Optimal Transport on Categorical Data for Counterfactuals using
Compositional Data and Dirichlet Transport
|
cs.LG stat.ME
|
Recently, optimal transport-based approaches have gained attention for
deriving counterfactuals, e.g., to quantify algorithmic discrimination.
However, in the general multivariate setting, these methods are often opaque
and difficult to interpret. To address this, alternative methodologies have
been proposed, using causal graphs combined with iterative quantile regressions
(Ple\v{c}ko and Meinshausen (2020)) or sequential transport (Fernandes Machado
et al. (2025)) to examine fairness at the individual level, often referred to
as ``counterfactual fairness.'' Despite these advancements, transporting
categorical variables remains a significant challenge in practical applications
with real datasets. In this paper, we propose a novel approach to address this
issue. Our method involves (1) converting categorical variables into
compositional data and (2) transporting these compositions within the
probabilistic simplex of $\mathbb{R}^d$. We demonstrate the applicability and
effectiveness of this approach through an illustration on real-world data, and
discuss limitations.
|
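The two-step recipe above can be sketched under simplifying assumptions. The smoothing in `category_to_composition` and the Aitchison perturbation below are illustrative stand-ins for moving points within the simplex, not the paper's actual transport map:

```python
# Step (1): a categorical value becomes a composition, i.e. a point on the
# probabilistic simplex (non-negative entries summing to 1).
# Step (2): compositions are moved within the simplex; here via an Aitchison
# perturbation, a standard operation in compositional data analysis.
def closure(v):
    s = sum(v)
    return [x / s for x in v]

def category_to_composition(category, levels, eps=0.05):
    # soft one-hot: mass 1 on the observed level, eps elsewhere, renormalized
    return closure([1.0 if l == category else eps for l in levels])

def perturb(x, g):
    # Aitchison perturbation: componentwise product followed by closure
    return closure([a * b for a, b in zip(x, g)])

levels = ["red", "green", "blue"]
x = category_to_composition("red", levels)
print(x)                      # a composition summing to 1
print(perturb(x, [1, 2, 1]))  # moved, but still on the simplex
```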
2501.15552
|
Community-centric modeling of citation dynamics explains collective
citation patterns in science, law, and patents
|
physics.soc-ph cs.SI
|
Many human knowledge systems, such as science, law, and invention, are built
on documents and the citations that link them. Citations, while serving
multiple purposes, primarily function as a way to explicitly document the use
of prior work and thus have become central to the study of knowledge systems.
Analyzing citation dynamics has revealed statistical patterns that shed light
on knowledge production, recognition, and formalization, and has helped
identify key mechanisms driving these patterns. However, most quantitative
findings are confined to scientific citations, raising the question of the
universality of these findings. Moreover, existing models of individual
citation trajectories fail to explain phenomena such as delayed recognition,
calling for a unifying framework. Here, we analyze a newly available corpus of
U.S. case law, in addition to scientific and patent citation networks, to show
that they share remarkably similar citation patterns, including a heavy-tailed
distribution of sleeping beauties. We propose a holistic model that captures
the three core mechanisms driving collective dynamics and replicates the
elusive phenomenon of delayed recognition. We demonstrate that the model not
only replicates observed citation patterns, but also better predicts future
successes by considering the whole system. Our work offers insights into key
mechanisms that govern large-scale patterns of collective human knowledge
systems and may provide generalizable perspectives on discovery and innovation
across domains.
|
2501.15554
|
BoTier: Multi-Objective Bayesian Optimization with Tiered Composite
Objectives
|
cs.LG math.OC stat.ME stat.ML
|
Scientific optimization problems are usually concerned with balancing
multiple competing objectives, which come as preferences over both the outcomes
of an experiment (e.g. maximize the reaction yield) and the corresponding input
parameters (e.g. minimize the use of an expensive reagent). Typically,
practical and economic considerations define a hierarchy over these objectives,
which must be reflected in algorithms for sample-efficient experiment planning.
Herein, we introduce BoTier, a composite objective that can flexibly represent
a hierarchy of preferences over both experiment outcomes and input parameters.
We provide systematic benchmarks on synthetic and real-life surfaces,
demonstrating the robust applicability of BoTier across a number of use cases.
Importantly, BoTier is implemented in an auto-differentiable fashion, enabling
seamless integration with the BoTorch library, thereby facilitating adoption by
the scientific community.
|
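The hierarchy-of-preferences idea can be illustrated with a generic tiered scalarization, in which a lower tier only contributes once the tier above it meets its threshold. This is a hypothetical form for intuition only; `tiered_score` and its thresholds are invented, not BoTier's actual composite objective:

```python
# Generic lexicographic-style tiered score: tiers are ordered most to least
# important; an unmet tier earns partial credit and masks all lower tiers.
def tiered_score(values, tiers):
    # tiers: list of (threshold, weight) pairs, one per objective
    score = 0.0
    for v, (threshold, weight) in zip(values, tiers):
        if v < threshold:
            # unmet tier: credit partial progress, ignore lower tiers
            return score + weight * (v / threshold)
        score += weight
    return score

tiers = [(0.8, 100.0), (0.5, 10.0)]     # e.g. yield first, then reagent use
print(tiered_score([0.9, 0.6], tiers))  # both tiers met -> 110.0
print(tiered_score([0.4, 0.99], tiers)) # top tier unmet -> 50.0
```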
2501.15555
|
Distributionally Robust Graph Out-of-Distribution Recommendation via
Diffusion Model
|
cs.LG cs.AI cs.GR stat.ML
|
Distributionally robust optimization (DRO)-based graph neural network
methods improve recommendation systems' out-of-distribution (OOD)
generalization by optimizing the model's worst-case performance. However, these
studies fail to consider the impact of noisy samples in the training data,
which results in diminished generalization capabilities and lower accuracy.
Through experimental and theoretical analysis, this paper reveals that current
DRO-based graph recommendation methods assign greater weight to the noise
distribution, leading model parameter learning to be dominated by it. When
the model overly focuses on fitting noise samples in the training data, it may
learn irrelevant or meaningless features that cannot be generalized to OOD
data. To address this challenge, we design a Distributionally Robust Graph
model for OOD recommendation (DRGO). Specifically, our method first employs a
simple and effective diffusion paradigm to alleviate the noisy effect in the
latent space. Additionally, an entropy regularization term is introduced in the
DRO objective function to avoid extreme sample weights in the worst-case
distribution. Finally, we provide a theoretical proof of the generalization
error bound of DRGO as well as a theoretical analysis of how our approach
mitigates noisy sample effects, which helps to better understand the proposed
framework from a theoretical perspective. We conduct extensive experiments on
four datasets to evaluate the effectiveness of our framework against three
typical distribution shifts, and the results demonstrate its superiority in
both the independent and identically distributed (IID) and OOD settings.
|
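The entropy-regularized worst case mentioned above has a well-known closed form: maximizing expected loss minus a KL penalty toward the uniform distribution yields softmax weights over per-sample losses. A sketch under that standard formulation (the losses and regularization strengths are illustrative, not DRGO's):

```python
import math

# max_w  sum_i w_i * loss_i  -  lam * KL(w || uniform),  w on the simplex,
# is solved by w_i proportional to exp(loss_i / lam): a softmax over losses.
# Larger lam pulls the weights back toward uniform, avoiding extreme weights.
def worst_case_weights(losses, lam):
    m = max(losses)
    exps = [math.exp((l - m) / lam) for l in losses]  # shift max for stability
    z = sum(exps)
    return [e / z for e in exps]

losses = [0.1, 0.1, 0.1, 5.0]           # the 5.0 could be a noisy sample
print(worst_case_weights(losses, 0.1))  # nearly all weight on the outlier
print(worst_case_weights(losses, 50.0)) # strong regularization -> near uniform
```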
2501.15556
|
Commute Your Domains: Trajectory Optimality Criterion for Multi-Domain
Learning
|
cs.LG cs.CL
|
In multi-domain learning, a single model is trained on diverse data domains
to leverage shared knowledge and improve generalization. The order in which the
data from these domains is used for training can significantly affect the
model's performance on each domain. However, this dependence is under-studied.
In this paper, we investigate the influence of training order (or data mixing)
in multi-domain learning using the concept of Lie bracket of gradient vector
fields. By analyzing the infinitesimal effects of changing the training order,
we identify regions in the parameter space where altering the order between two
training domains can benefit the target loss. We validate the predictions of
our theoretical framework on the influence of training order (or data mixing)
both on a toy example and bilingual LLM pre-training.
|
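The order-dependence analyzed above shows up already in two gradient steps: unless the domains' gradient fields commute (vanishing Lie bracket), training on A then B lands at different parameters than B then A. A toy sketch with invented losses:

```python
# One gradient-descent step per "domain". grad_a and grad_b are the gradient
# fields of two toy losses f_A(x, y) = x*y and f_B(x, y) = x^2; because these
# fields do not commute, the two training orders end at different points.
def step(point, grad_fn, lr=0.1):
    g = grad_fn(point)
    return tuple(p - lr * gi for p, gi in zip(point, g))

grad_a = lambda p: (p[1], p[0])      # gradient of f_A(x, y) = x * y
grad_b = lambda p: (2 * p[0], 0.0)   # gradient of f_B(x, y) = x^2

start = (1.0, 1.0)
ab = step(step(start, grad_a), grad_b)   # train on A, then B
ba = step(step(start, grad_b), grad_a)   # train on B, then A
print(ab)  # ~(0.72, 0.90)
print(ba)  # ~(0.70, 0.92) -- the discrepancy is order-of-lr^2, as the
           # Lie-bracket analysis predicts
```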
2501.15558
|
Ocean-OCR: Towards General OCR Application via a Vision-Language Model
|
cs.CV
|
Multimodal large language models (MLLMs) have shown impressive capabilities
across various domains, excelling in processing and understanding information
from multiple modalities. Despite the rapid progress made previously,
insufficient OCR ability hinders MLLMs from excelling in text-related tasks. In
this paper, we present \textbf{Ocean-OCR}, a 3B MLLM with state-of-the-art
performance on various OCR scenarios and comparable understanding ability on
general tasks. We employ Native Resolution ViT to enable variable resolution
input and utilize a substantial collection of high-quality OCR datasets to
enhance the model performance. We demonstrate the superiority of Ocean-OCR
through comprehensive experiments on open-source OCR benchmarks and across
various OCR scenarios. These scenarios encompass document understanding, scene
text recognition, and handwritten recognition, highlighting the robust OCR
capabilities of Ocean-OCR. Note that Ocean-OCR is the first MLLM to outperform
professional OCR models such as TextIn and PaddleOCR.
|
2501.15559
|
Towards Sharper Information-theoretic Generalization Bounds for
Meta-Learning
|
stat.ML cs.LG
|
In recent years, information-theoretic generalization bounds have emerged as
a promising approach for analyzing the generalization capabilities of
meta-learning algorithms. However, existing results are confined to two-step
bounds, failing to provide a sharper characterization of the
meta-generalization gap that simultaneously accounts for environment-level and
task-level dependencies. This paper addresses this fundamental limitation by
establishing novel single-step information-theoretic bounds for meta-learning.
Our bounds exhibit substantial advantages over prior MI- and CMI-based bounds,
especially in terms of tightness, scaling behavior associated with sampled
tasks and samples per task, and computational tractability. Furthermore, we
provide novel theoretical insights into the generalization behavior of two
classes of noisy, iterative meta-learning algorithms via gradient covariance
analysis, where the meta-learner uses either the entire meta-training data
(e.g., Reptile) or separate training and test data within the task (e.g.,
model-agnostic meta-learning (MAML)). Numerical results validate the
effectiveness of the derived bounds in capturing the generalization dynamics of
meta-learning.
|
2501.15562
|
CE-SDWV: Effective and Efficient Concept Erasure for Text-to-Image
Diffusion Models via a Semantic-Driven Word Vocabulary
|
cs.CV cs.AI
|
Large-scale text-to-image (T2I) diffusion models have achieved remarkable
generative performance across a wide range of concepts. Given privacy and
safety constraints in practice, generative capability concerning NSFW (Not
Safe For Work) concepts is undesirable, e.g., producing sexually explicit
photos or licensed images. The concept erasure task for T2I diffusion models has
attracted considerable attention and requires an effective and efficient
method. To achieve this goal, we propose the CE-SDWV framework, which removes
target concepts (e.g., NSFW concepts) from T2I diffusion models in the text
semantic space by adjusting only the text condition tokens, without
re-training the original T2I diffusion model's weights. Specifically, our
framework first builds a target concept-related word vocabulary to enhance the
representation of the target concepts within the text semantic space, and then
utilizes an adaptive semantic component suppression strategy to ablate the
target concept-related semantic information in the text condition tokens. To
further adapt the above text condition tokens to the original image semantic
space, we propose an end-to-end gradient-orthogonal token optimization
strategy. Extensive experiments on I2P and UnlearnCanvas benchmarks demonstrate
the effectiveness and efficiency of our method.
|
2501.15563
|
PCAP-Backdoor: Backdoor Poisoning Generator for Network Traffic in
CPS/IoT Environments
|
cs.LG cs.CR cs.NI
|
The rapid expansion of connected devices has made them prime targets for
cyberattacks. To address these threats, deep learning-based, data-driven
intrusion detection systems (IDS) have emerged as powerful tools for detecting
and mitigating such attacks. These IDSs analyze network traffic to identify
unusual patterns and anomalies that may indicate potential security breaches.
However, prior research has shown that deep learning models are vulnerable to
backdoor attacks, where attackers inject triggers into the model to manipulate
its behavior and cause misclassifications of network traffic. In this paper, we
explore the susceptibility of deep learning-based IDS systems to backdoor
attacks in the context of network traffic analysis. We introduce
\texttt{PCAP-Backdoor}, a novel technique that facilitates backdoor poisoning
attacks on PCAP datasets. Our experiments on real-world Cyber-Physical Systems
(CPS) and Internet of Things (IoT) network traffic datasets demonstrate that
attackers can effectively backdoor a model by poisoning as little as 1\% or
less of the entire training dataset. Moreover, we show that an attacker can
introduce a trigger into benign traffic during model training yet cause the
backdoored model to misclassify malicious traffic when the trigger is present.
Finally, we highlight the difficulty of detecting this trigger-based backdoor,
even when using existing backdoor defense techniques.
|
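The roughly 1% poisoning budget described above can be pictured with a toy injection routine. The record layout and trigger value below are invented for illustration; PCAP-Backdoor itself crafts triggers inside PCAP network traces rather than these toy rows:

```python
import random

# Stamp a trigger value into a small random fraction of training records.
def poison(dataset, rate=0.01, trigger_value=9999, seed=0):
    rng = random.Random(seed)
    n_poison = max(1, int(len(dataset) * rate))
    idx = rng.sample(range(len(dataset)), n_poison)
    poisoned = [list(row) for row in dataset]
    for i in idx:
        poisoned[i][-1] = trigger_value  # overwrite the last feature
    return poisoned, sorted(idx)

clean = [[i, i % 5, 0] for i in range(1000)]  # synthetic "flow" records
poisoned, idx = poison(clean)
print(len(idx))  # 10 records poisoned out of 1000, i.e. a 1% budget
```

A backdoored model trained on `poisoned` would learn to associate the trigger feature with the attacker's chosen label while behaving normally on clean inputs.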
2501.15564
|
Diffusion-Based Planning for Autonomous Driving with Flexible Guidance
|
cs.RO cs.AI cs.LG
|
Achieving human-like driving behaviors in complex open-world environments is
a critical challenge in autonomous driving. Contemporary learning-based
planning approaches, such as imitation learning methods, often struggle to
balance competing objectives and lack safety assurance, due to limited
adaptability and inadequacy in learning the complex multi-modal behaviors
commonly exhibited in human planning, not to mention their strong reliance on
fallback strategies with predefined rules. We propose a novel transformer-based
Diffusion Planner for closed-loop planning, which can effectively model
multi-modal driving behavior and ensure trajectory quality without any
rule-based refinement. Our model supports joint modeling of both prediction and
planning tasks under the same architecture, enabling cooperative behaviors
between vehicles. Moreover, by learning the gradient of the trajectory score
function and employing a flexible classifier guidance mechanism, Diffusion
Planner effectively achieves safe and adaptable planning behaviors. Evaluations
on the large-scale real-world autonomous planning benchmark nuPlan and our
newly collected 200-hour delivery-vehicle driving dataset demonstrate that
Diffusion Planner achieves state-of-the-art closed-loop performance with robust
transferability in diverse driving styles.
|
2501.15570
|
ARWKV: Pretrain is not what we need, an RNN-Attention-Based Language
Model Born from Transformer
|
cs.CL
|
As is known, hybrid quadratic and subquadratic attention models in multi-head
architectures have surpassed both Transformer and linear RNN models, with
these works primarily focusing on reducing KV complexity and improving
efficiency. For further research on expressiveness, we introduce our series of
models distilled from Qwen 2.5, based on pure native RWKV-7 attention, which
aims to make RNNs more expressive and demonstrates state-tracking ability
beyond Transformers. We work with QRWK 32B based on the RWKV-6 architecture,
another approach that reduces the entire knowledge-processing time to just 8
hours using 16 AMD MI300X GPUs while maintaining Qwen 2.5's performance. In
fact, the distillation process can utilize any LLM, not just Qwen, and enables
knowledge transfer from larger LLMs to smaller ones with fewer tokens. We will
explain the detailed process and share our insights on building more powerful
foundation models. Please note that this is an ongoing work that will be
updated continuously. The model checkpoints and source code are available at
\href{https://github.com/yynil/RWKVInside}{https://github.com/yynil/RWKVInside},
\href{https://huggingface.co/RWKV-Red-Team/ARWKV-7B-Preview-0.1}{https://huggingface.co/RWKV-Red-Team/ARWKV-7B-Preview-0.1}.
|
2501.15571
|
Cross-Cultural Fashion Design via Interactive Large Language Models and
Diffusion Models
|
cs.CL
|
Fashion content generation is an emerging area at the intersection of
artificial intelligence and creative design, with applications ranging from
virtual try-on to culturally diverse design prototyping. Existing methods often
struggle with cultural bias, limited scalability, and alignment between textual
prompts and generated visuals, particularly under weak supervision. In this
work, we propose a novel framework that integrates Large Language Models (LLMs)
with Latent Diffusion Models (LDMs) to address these challenges. Our method
leverages LLMs for semantic refinement of textual prompts and introduces a weak
supervision filtering module to effectively utilize noisy or weakly labeled
data. By fine-tuning the LDM on an enhanced DeepFashion+ dataset enriched with
global fashion styles, the proposed approach achieves state-of-the-art
performance. Experimental results demonstrate that our method significantly
outperforms baselines, achieving lower Frechet Inception Distance (FID) and
higher Inception Scores (IS), while human evaluations confirm its ability to
generate culturally diverse and semantically relevant fashion content. These
results highlight the potential of LLM-guided diffusion models in driving
scalable and inclusive AI-driven fashion innovation.
|
2501.15572
|
Comparative Clinical Evaluation of "Memory-Efficient" Synthetic 3D
Generative Adversarial Networks (GAN) Head-to-Head with the State of the Art:
Results on Computed Tomography of the Chest
|
eess.IV cs.AI cs.CV
|
Introduction: Generative Adversarial Networks (GANs) are increasingly used to
generate synthetic medical images, addressing the critical shortage of
annotated data for training Artificial Intelligence (AI) systems. This study
introduces a novel memory-efficient GAN architecture, incorporating Conditional
Random Fields (CRFs) to generate high-resolution 3D medical images and
evaluates its performance against the state-of-the-art hierarchical (HA)-GAN
model.
Materials and Methods: The CRF-GAN was trained using the open-source lung CT
LUNA16 dataset. The architecture was compared to HA-GAN through a quantitative
evaluation, using Frechet Inception Distance (FID) and Maximum Mean Discrepancy
(MMD) metrics, and a qualitative evaluation, through a two-alternative forced
choice (2AFC) test completed by a pool of 12 resident radiologists, in order to
assess the realism of the generated images.
Results: CRF-GAN outperformed HA-GAN with lower FID (0.047 vs. 0.061) and MMD
(0.084 vs. 0.086) scores, indicating better image fidelity. The 2AFC test
showed a significant preference for images generated by CRF-GAN over those
generated by HA-GAN with a p-value of 1.93e-05. Additionally, CRF-GAN
demonstrated 9.34% lower memory usage at 256 resolution and achieved up to
14.6% faster training speeds, offering substantial computational savings.
Discussion: The CRF-GAN model successfully generates high-resolution 3D medical
images with non-inferior quality to conventional models, while being more
memory-efficient and faster. Computational power and time saved can be used to
improve the spatial resolution and anatomical accuracy of generated images,
which is still a critical factor limiting their direct clinical applicability.
|
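For intuition about the FID scores reported above: in one dimension, the Frechet distance between two Gaussians reduces to a simple closed form. Real FID is computed on multivariate deep-feature statistics; the values below are illustrative:

```python
# Frechet distance between N(m1, s1^2) and N(m2, s2^2) in 1D:
# (m1 - m2)^2 + s1^2 + s2^2 - 2*s1*s2  =  (m1 - m2)^2 + (s1 - s2)^2.
# Lower is better: 0 means the two Gaussian fits coincide.
def fid_1d(m1, s1, m2, s2):
    return (m1 - m2) ** 2 + (s1 - s2) ** 2

print(fid_1d(0.0, 1.0, 0.0, 1.0))  # identical distributions -> 0.0
print(fid_1d(0.0, 1.0, 0.5, 1.2))  # ~0.29: mean shift plus spread mismatch
```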
2501.15573
|
Approximate Message Passing for Bayesian Neural Networks
|
cs.LG cs.CV
|
Bayesian neural networks (BNNs) offer the potential for reliable uncertainty
quantification and interpretability, which are critical for trustworthy AI in
high-stakes domains. However, existing methods often struggle with issues such
as overconfidence, hyperparameter sensitivity, and posterior collapse, leaving
room for alternative approaches. In this work, we advance message passing (MP)
for BNNs and present a novel framework that models the predictive posterior as
a factor graph. To the best of our knowledge, our framework is the first MP
method that handles convolutional neural networks and avoids double-counting
training data, a limitation of previous MP methods that causes overconfidence.
We evaluate our approach on CIFAR-10 with a convolutional neural network of
roughly 890k parameters and find that it can compete with the SOTA baselines
AdamW and IVON, even having an edge in terms of calibration. On synthetic data,
we validate the uncertainty estimates and observe a strong correlation (0.9)
between posterior credible intervals and their probability of covering the true
data-generating function outside the training range. While our method scales to
an MLP with 5.6 million parameters, further improvements are necessary to match
the scale and performance of state-of-the-art variational inference methods.
|
2501.15574
|
Instruction Tuning for Story Understanding and Generation with Weak
Supervision
|
cs.CL
|
Story understanding and generation have long been a challenging task in
natural language processing (NLP), especially when dealing with various levels
of instruction specificity. In this paper, we propose a novel approach called
"Weak to Strong Instruction Tuning" for improving story generation by tuning
models with instructions of varying clarity. We explore the potential of large
language models (LLMs) to adapt to different types of instructions, weak and
strong, and show that our method significantly enhances performance in story
comprehension and generation. By leveraging the strength of instruction tuning,
we train models to understand the nuances of story plots, characters, and
themes while generating coherent and engaging narratives. Through extensive
experiments on several benchmark datasets and comparison with state-of-the-art
baselines, we demonstrate that our method outperforms existing techniques,
yielding substantial improvements in both automatic evaluation metrics and
human evaluations. Our work shows that adaptive instruction tuning can be a
powerful tool in refining generative models for complex narrative tasks.
|
2501.15576
|
First Real-Time Detection of Ambient Backscatters using Uplink Sounding
Reference Signals of a Commercial 4G Smartphone
|
cs.IT math.IT
|
Recently, cellular Ambient Backscattering has been proposed for 4G/5G/6G
networks. An ambient backscatter tag broadcasts its message by backscattering
ambient downlink waves from the closest base station according to a predefined
pattern. A tag is detected by smartphones nearby. This paper presents, for the
first time, a novel ambient backscatter communication system exploiting uplink
ambient waves from smartphones instead of downlink waves. In this novel system,
the base station connected to a smartphone monitors the uplink pilot signals
and detects tags in proximity. The proposed system is implemented and tested
with prototypes of tags, a commercial 4G smartphone and a 4G Software Defined
Radio base station. At the base station side, a non-coherent correlator
receiver is implemented, and a novel technique based on pre-correlation data
processing is proposed to separate useful variations on pilot signals due to
tags from variations due to time varying channel effects. To deal with
collision between multiple tags, distinct Gold pseudo noise codes with minimum
cross correlation are used. Tests are run in different indoor and outdoor
environments. A receiver detection probability of 95% has been achieved at a
0.5% false-alarm probability when the tag is 5 meters from the UE. At the refresh
rate of 2 seconds, the proposed scheme is suitable for tracking objects at
moderate speeds and can therefore be used for many passive IoT-based
applications.
|
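The minimum-cross-correlation property of Gold codes mentioned above can be sketched with a preferred pair of degree-5 LFSR sequences. The register length and tap sets below are illustrative choices; the paper does not state its actual code parameters:

```python
# Two Fibonacci LFSRs with tap sets [5, 2] and [5, 4, 3, 2], a classic
# degree-5 preferred pair: each produces a maximal-length (period-31)
# sequence, and the pair's bipolar cross-correlation is three-valued and
# small, which is what keeps simultaneous tags separable.
def lfsr(taps, n=5, length=31):
    state = [1] * n                 # any nonzero seed works
    out = []
    for _ in range(length):
        out.append(state[-1])
        fb = 0
        for t in taps:
            fb ^= state[t - 1]
        state = [fb] + state[:-1]
    return out

def cross_correlation(a, b, shift):
    # bipolar (+1/-1) correlation of b cyclically shifted against a
    return sum((1 - 2 * x) * (1 - 2 * b[(i + shift) % len(b)])
               for i, x in enumerate(a))

seq1 = lfsr([5, 2])
seq2 = lfsr([5, 4, 3, 2])
values = {cross_correlation(seq1, seq2, s) for s in range(31)}
print(sorted(values))  # at most three values, bounded by 9 in magnitude
```

XOR-ing cyclic shifts of such a pair yields the family of Gold codes, giving many codes with the same pairwise cross-correlation bound.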
2501.15579
|
ConceptCLIP: Towards Trustworthy Medical AI via Concept-Enhanced
Contrastive Language-Image Pre-training
|
cs.CV cs.CL
|
Trustworthiness is essential for the precise and interpretable application of
artificial intelligence (AI) in medical imaging. Traditionally, precision and
interpretability have been addressed as separate tasks, namely medical image
analysis and explainable AI, each developing its own models independently. In
this study, for the first time, we investigate the development of a unified
medical vision-language pre-training model that can achieve both accurate
analysis and interpretable understanding of medical images across various
modalities. To build the model, we construct MedConcept-23M, a large-scale
dataset comprising 23 million medical image-text pairs extracted from 6.2
million scientific articles, enriched with concepts from the Unified Medical
Language System (UMLS). Based on MedConcept-23M, we introduce ConceptCLIP, a
medical AI model utilizing concept-enhanced contrastive language-image
pre-training. The pre-training of ConceptCLIP involves two primary components:
image-text alignment learning (IT-Align) and patch-concept alignment learning
(PC-Align). This dual alignment strategy enhances the model's capability to
associate specific image regions with relevant concepts, thereby improving both
the precision of analysis and the interpretability of the AI system. We
conducted extensive experiments on 5 diverse types of medical image analysis
tasks, spanning 51 subtasks across 10 image modalities, with the broadest range
of downstream tasks. The results demonstrate the effectiveness of the proposed
vision-language pre-training model. Further explainability analysis across 6
modalities reveals that ConceptCLIP achieves superior performance, underscoring
its robust ability to advance explainable AI in medical imaging. These findings
highlight ConceptCLIP's capability in promoting trustworthy AI in the field of
medicine.
|
2501.15581
|
Error Classification of Large Language Models on Math Word Problems: A
Dynamically Adaptive Framework
|
cs.CL
|
Large Language Models (LLMs) have demonstrated remarkable capabilities across
various domains. Math Word Problems (MWPs) serve as a crucial benchmark for
evaluating LLMs' reasoning abilities. While most research primarily focuses on
improving accuracy, it often neglects understanding and addressing the
underlying patterns of errors. Current error classification methods rely on
static and predefined categories, which limit their ability to capture the full
spectrum of error patterns in mathematical reasoning. To enable systematic
error analysis, we collect error samples from 15 different LLMs of varying
sizes across four distinct MWP datasets using multiple sampling strategies.
Based on this extensive collection, we introduce MWPES-300K, a comprehensive
dataset containing 304,865 error samples that cover diverse error patterns and
reasoning paths. To reduce human bias and enable fine-grained analysis of error
patterns, we propose a novel framework for automated dynamic error
classification in mathematical reasoning. Experimental results demonstrate that
dataset characteristics significantly shape error patterns, which evolve from
basic to complex manifestations as model capabilities increase. With deeper
insights into error patterns, we propose error-aware prompting that
incorporates common error patterns as explicit guidance, leading to significant
improvements in mathematical reasoning performance.
|
2501.15585
|
Twin Transition or Competing Interests? Validation of the Artificial
Intelligence and Sustainability Perceptions Inventory (AISPI)
|
cs.CY cs.AI
|
As artificial intelligence (AI) and sustainability initiatives increasingly
intersect, understanding public perceptions of their relationship becomes
crucial for successful implementation. However, no validated instrument exists
to measure these specific perceptions. This paper presents the development and
validation of the Artificial Intelligence and Sustainability Perceptions
Inventory (AISPI), a novel 13-item instrument measuring how individuals view
the relationship between AI advancement and environmental sustainability.
Through factor analysis (N=105), we identified two distinct dimensions: Twin
Transition and Competing Interests. The instrument demonstrated strong
reliability (alpha=.89) and construct validity through correlations with
established measures of AI and sustainability attitudes. Our findings suggest
that individuals can simultaneously recognize both synergies and tensions in
the AI-sustainability relationship, offering important implications for
researchers and practitioners working at this critical intersection. This work
provides a foundational tool for future research on public perceptions of AI's
role in sustainable development.
|
2501.15587
|
SCP-116K: A High-Quality Problem-Solution Dataset and a Generalized
Pipeline for Automated Extraction in the Higher Education Science Domain
|
cs.CL cs.AI cs.IR
|
Recent breakthroughs in large language models (LLMs) exemplified by the
impressive mathematical and scientific reasoning capabilities of the o1 model
have spotlighted the critical importance of high-quality training data in
advancing LLM performance across STEM disciplines. While the mathematics
community has benefited from a growing body of curated datasets, the scientific
domain at the higher education level has long suffered from a scarcity of
comparable resources. To address this gap, we present SCP-116K, a new
large-scale dataset of 116,756 high-quality problem-solution pairs,
automatically extracted from heterogeneous sources using a streamlined and
highly generalizable pipeline. Our approach involves stringent filtering to
ensure the scientific rigor and educational level of the extracted materials,
while maintaining adaptability for future expansions or domain transfers. By
openly releasing both the dataset and the extraction pipeline, we seek to
foster research on scientific reasoning, enable comprehensive performance
evaluations of new LLMs, and lower the barrier to replicating the successes of
advanced models like o1 in the broader science community. We believe SCP-116K
will serve as a critical resource, catalyzing progress in high-level scientific
reasoning tasks and promoting further innovations in LLM development. The
dataset and code are publicly available at
https://github.com/AQA6666/SCP-116K-open.
|
2501.15588
|
Tumor Detection, Segmentation and Classification Challenge on Automated
3D Breast Ultrasound: The TDSC-ABUS Challenge
|
eess.IV cs.CV
|
Breast cancer is one of the most common causes of death among women
worldwide. Early detection helps in reducing the number of deaths. Automated 3D
Breast Ultrasound (ABUS) is a newer approach for breast screening, which has
many advantages over handheld mammography such as safety, speed, and higher
detection rate of breast cancer. Tumor detection, segmentation, and
classification are key components in the analysis of medical images, especially
challenging in the context of 3D ABUS due to the significant variability in
tumor size and shape, unclear tumor boundaries, and a low signal-to-noise
ratio. The lack of publicly accessible, well-labeled ABUS datasets further
hinders the advancement of systems for breast tumor analysis. Addressing this
gap, we have organized the inaugural Tumor Detection, Segmentation, and
Classification Challenge on Automated 3D Breast Ultrasound 2023
(TDSC-ABUS2023). This initiative aims to spearhead research in this field and
create a definitive benchmark for tasks associated with 3D ABUS image analysis.
In this paper, we summarize the top-performing algorithms from the challenge
and provide critical analysis for ABUS image examination. We offer the
TDSC-ABUS challenge as an open-access platform at
https://tdsc-abus2023.grand-challenge.org/ to benchmark and inspire future
developments in algorithmic research.
|
2501.15590
|
Assessing and Predicting Air Pollution in Asia: A Regional and Temporal
Study (2018-2023)
|
cs.LG stat.AP
|
This study analyzes and predicts air pollution in Asia, focusing on PM 2.5
levels from 2018 to 2023 across five regions: Central, East, South, Southeast,
and West Asia. South Asia emerged as the most polluted region, with Bangladesh,
India, and Pakistan consistently having the highest PM 2.5 levels and death
rates, especially in Nepal, Pakistan, and India. East Asia showed the lowest
pollution levels. K-means clustering categorized countries into high, moderate,
and low pollution groups. The ARIMA model effectively predicted 2023 PM 2.5
levels (MAE: 3.99, MSE: 33.80, RMSE: 5.81, R: 0.86). The findings emphasize the
need for targeted interventions to address severe pollution and health risks in
South Asia.
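The reported error metrics (MAE, MSE, RMSE, correlation) can be computed from any forecast; a minimal sketch with numpy, using hypothetical observed vs. predicted PM 2.5 values rather than the paper's data:

```python
import numpy as np

def forecast_metrics(y_true, y_pred):
    """MAE, MSE, RMSE and Pearson correlation for a forecast."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    err = y_pred - y_true
    mae = np.mean(np.abs(err))
    mse = np.mean(err ** 2)
    rmse = np.sqrt(mse)
    r = np.corrcoef(y_true, y_pred)[0, 1]
    return mae, mse, rmse, r

# Hypothetical monthly PM 2.5 observations and ARIMA predictions (ug/m3)
obs = [55.0, 60.0, 48.0, 70.0, 65.0]
pred = [50.0, 63.0, 50.0, 66.0, 68.0]
mae, mse, rmse, r = forecast_metrics(obs, pred)
print(mae, rmse, round(r, 3))
```

The same function applied to an actual ARIMA model's hold-out predictions would reproduce the style of figures quoted in the abstract.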
|
2501.15592
|
Information Consistent Pruning: How to Efficiently Search for Sparse
Networks?
|
cs.LG cs.IT cs.NE math.IT
|
Iterative magnitude pruning methods (IMPs), proven to be successful in
reducing the number of insignificant nodes in over-parameterized deep neural
networks (DNNs), have been getting an enormous amount of attention with the
rapid deployment of DNNs into cutting-edge technologies with computation and
memory constraints. Despite IMPs' popularity in pruning networks, a fundamental
limitation of existing IMP algorithms is the significant training time required
for each pruning iteration. Our paper introduces a novel \textit{stopping
criterion} for IMPs that monitors information and gradient flows between
network layers and minimizes the training time. Information Consistent Pruning
(\ourmethod{}) eliminates the need to retrain the network to its original
performance during intermediate steps while maintaining overall performance at
the end of the pruning process. Through our experiments, we demonstrate that
our algorithm is more efficient than current IMPs across multiple dataset-DNN
combinations. We also provide theoretical insights into the core idea of our
algorithm alongside mathematical explanations of flow-based IMP. Our code is
available at \url{https://github.com/Sekeh-Lab/InfCoP}.
|
2501.15595
|
SedarEval: Automated Evaluation using Self-Adaptive Rubrics
|
cs.CV
|
The evaluation paradigm of LLM-as-judge has gained popularity due to its
significant reduction in human labor and time costs. This approach utilizes one
or more large language models (LLMs) to assess the quality of outputs from
other LLMs. However, existing methods rely on generic scoring rubrics that fail
to consider the specificities of each question and its problem-solving process,
compromising precision and stability in assessments. Inspired by human
examination scoring processes, we propose a new evaluation paradigm based on
self-adaptive rubrics. Specifically, we create detailed scoring rubrics for
each question, capturing the primary and secondary criteria in a structured
format of scoring and deduction points that mimic a human evaluator's
analytical process. Building on this paradigm, we further develop a novel
benchmark called SedarEval, which covers a range of domains including long-tail
knowledge, mathematics, coding, and logical reasoning. SedarEval consists of
1,000 meticulously crafted questions, each with its own self-adaptive rubric.
To further streamline the evaluation, we train a specialized evaluator language
model (evaluator LM) to supplant human graders. Using the same training data,
our evaluator LM achieves a higher concordance rate with human grading results
than other paradigms, including GPT-4, highlighting the superiority and
efficiency of our approach. We release our dataset at
https://github.com/wwn1233/sedareval.
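A per-question rubric of the kind described, with scoring and deduction points, can be represented and applied mechanically; a hypothetical sketch (the criteria and point values are illustrative, not SedarEval's actual rubrics):

```python
def apply_rubric(answer_checks, rubric):
    """Score an answer against a per-question self-adaptive rubric.

    rubric: {"scoring": {criterion: points}, "deduction": {criterion: points}}
    answer_checks: criterion -> bool (did the answer satisfy/trigger it?)
    """
    score = sum(p for c, p in rubric["scoring"].items() if answer_checks.get(c))
    score -= sum(p for c, p in rubric["deduction"].items() if answer_checks.get(c))
    return max(score, 0)  # clamp: deductions cannot go below zero

# Hypothetical rubric for one math question
rubric = {
    "scoring": {"correct_final_answer": 5, "valid_derivation": 3},
    "deduction": {"arithmetic_slip": 1},
}
checks = {"correct_final_answer": True, "valid_derivation": True,
          "arithmetic_slip": True}
print(apply_rubric(checks, rubric))  # 5 + 3 - 1 = 7
```

In the paper's setup the boolean checks would come from the trained evaluator LM rather than a hand-coded dictionary.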
|
2501.15598
|
Diffusion Generative Modeling for Spatially Resolved Gene Expression
Inference from Histology Images
|
q-bio.QM cs.AI cs.CV cs.LG stat.ML
|
Spatial Transcriptomics (ST) allows a high-resolution measurement of RNA
sequence abundance by systematically connecting cell morphology depicted in
Hematoxylin and Eosin (H&E) stained histology images to spatially resolved gene
expressions. ST is a time-consuming, expensive yet powerful experimental
technique that provides new opportunities to understand cancer mechanisms at a
fine-grained molecular level, which is critical for uncovering new approaches
for disease diagnosis and treatments. Here, we present $\textbf{Stem}$
($\textbf{S}$pa$\textbf{T}$ially resolved gene $\textbf{E}$xpression inference
with diffusion $\textbf{M}$odel), a novel computational tool that leverages a
conditional diffusion generative model to enable in silico gene expression
inference from H&E stained images. Through better capturing the inherent
stochasticity and heterogeneity in ST data, $\textbf{Stem}$ achieves
state-of-the-art performance on spatial gene expression prediction and
generates biologically meaningful gene profiles for new H&E stained images at
test time. We evaluate the proposed algorithm on datasets with various tissue
sources and sequencing platforms, where it demonstrates clear improvement over
existing approaches. $\textbf{Stem}$ generates high-fidelity gene expression
predictions that share similar gene variation levels as ground truth data,
suggesting that our method preserves the underlying biological heterogeneity.
Our proposed pipeline opens up the possibility of analyzing existing, easily
accessible H&E stained histology images from a genomics point of view without
physically performing gene expression profiling and empowers potential
biological discovery from H&E stained histology images.
|
2501.15602
|
Rethinking External Slow-Thinking: From Snowball Errors to Probability
of Correct Reasoning
|
cs.AI cs.CL
|
Test-time scaling, which is also often referred to as slow-thinking, has been
demonstrated to enhance multi-step reasoning in large language models (LLMs).
However, despite its widespread utilization, the mechanisms underlying
slow-thinking methods remain poorly understood. This paper explores the
mechanisms of external slow-thinking from a theoretical standpoint. We begin by
examining the snowball error effect within the LLM reasoning process and
connect it to the likelihood of correct reasoning using information theory.
Building on this, we show that external slow-thinking methods can be
interpreted as strategies to mitigate the error probability. We further provide
a comparative analysis of popular external slow-thinking approaches, ranging
from simple to complex, highlighting their differences and interrelationships.
Our findings suggest that the efficacy of these methods is not primarily
determined by the specific framework employed, and that expanding the search
scope or the model's internal reasoning capacity may yield more sustained
improvements in the long term. We open-source our code at
https://github.com/ZyGan1999/Snowball-Errors-and-Probability.
|
2501.15603
|
Advancing TDFN: Precise Fixation Point Generation Using Reconstruction
Differences
|
cs.CV
|
Wang and Wang (2025) proposed the Task-Driven Fixation Network (TDFN) based
on the fixation mechanism, which leverages low-resolution information along
with high-resolution details near fixation points to accomplish specific visual
tasks. The model employs reinforcement learning to generate fixation points.
However, training reinforcement learning models is challenging, particularly
when aiming to generate pixel-level accurate fixation points on high-resolution
images. This paper introduces an improved fixation point generation method by
leveraging the difference between the reconstructed image and the input image
to train the fixation point generator. This approach directs fixation points to
areas with significant differences between the reconstructed and input images.
Experimental results demonstrate that this method achieves highly accurate
fixation points, significantly enhances the network's classification accuracy,
and reduces the average number of required fixations to achieve a predefined
accuracy level.
|
2501.15610
|
Radiologist-in-the-Loop Self-Training for Generalizable CT Metal
Artifact Reduction
|
eess.IV cs.CV
|
Metal artifacts in computed tomography (CT) images can significantly degrade
image quality and impede accurate diagnosis. Supervised metal artifact
reduction (MAR) methods, trained using simulated datasets, often struggle to
perform well on real clinical CT images due to a substantial domain gap.
Although state-of-the-art semi-supervised methods use pseudo ground-truths
generated by a prior network to mitigate this issue, their reliance on a fixed
prior limits both the quality and quantity of these pseudo ground-truths,
introducing confirmation bias and reducing clinical applicability. To address
these limitations, we propose a novel Radiologist-In-the-loop SElf-training
framework for MAR, termed RISE-MAR, which can integrate radiologists' feedback
into the semi-supervised learning process, progressively improving the quality
and quantity of pseudo ground-truths for enhanced generalization on real
clinical CT images. For quality assurance, we introduce a clinical quality
assessor model that emulates radiologist evaluations, effectively selecting
high-quality pseudo ground-truths for semi-supervised training. For quantity
assurance, our self-training framework iteratively generates additional
high-quality pseudo ground-truths, expanding the clinical dataset and further
improving model generalization. Extensive experimental results on multiple
clinical datasets demonstrate the superior generalization performance of our
RISE-MAR over state-of-the-art methods, advancing the development of MAR models
for practical application. Code is available at
https://github.com/Masaaki-75/rise-mar.
|
2501.15611
|
Nuisance-free Automatic Ground Collision Avoidance System Design:
Merging Exponential-CBF and Adaptive Sliding Manifolds
|
eess.SY cs.SY
|
The significance of the automatic ground collision avoidance system
(Auto-GCAS) is underscored by the fatal crashes that have occurred over decades.
Even though extensive efforts have been put forth in the literature to address
ground collision avoidance, the notion of being nuisance-free has not been
sufficiently addressed. In this study, the
Auto-GCAS design is formulated by merging exponential control barrier functions
with sliding manifolds to manipulate the barrier function dynamics. The
adaptive properties of the sliding manifolds are tailored to the key and
governing flight parameters, ensuring that the nuisance-free requirement is
satisfied. Furthermore, to ensure all safety requirements are met, a flight
envelope protection algorithm is designed using control barrier functions to
assess the commands generated by the Auto-GCAS. Eventually, the performance of
the proposed methodology is demonstrated, focusing on authority-sharing,
collision avoidance capability, and nuisance-free operation through various
scenarios and Monte Carlo simulations.
|
2501.15613
|
Stepback: Enhanced Disentanglement for Voice Conversion via Multi-Task
Learning
|
cs.SD cs.CL eess.AS
|
Voice conversion (VC) modifies voice characteristics while preserving
linguistic content. This paper presents the Stepback network, a novel model for
converting speaker identity using non-parallel data. Unlike traditional VC
methods that rely on parallel data, our approach leverages deep learning
techniques to enhance disentanglement completion and linguistic content
preservation. The Stepback network incorporates a dual flow of different domain
data inputs and uses constraints with self-destructive amendments to optimize
the content encoder. Extensive experiments show that our model significantly
improves VC performance, reducing training costs while achieving high-quality
voice conversion. The Stepback network's design offers a promising solution for
advanced voice conversion tasks.
|
2501.15615
|
Deterministic Reservoir Computing for Chaotic Time Series Prediction
|
cs.LG
|
Reservoir Computing has been shown in recent years to be an efficient approach
to learning networks for time series tasks. Its randomized initialization,
while a computational benefit, hinders the theoretical analysis of large random
graphs, which is why deterministic variants remain an open field of research.
Building upon Next-Gen Reservoir Computing and
the Temporal Convolution Derived Reservoir Computing, we propose a
deterministic alternative to the higher-dimensional mapping therein, TCRC-LM
and TCRC-CM, utilizing the parametrized but deterministic Logistic mapping and
Chebyshev maps. To further enhance the predictive capabilities in the task of
time series forecasting, we propose the novel utilization of the Lobachevsky
function as non-linear activation function.
As a result, we observe that the new, fully deterministic network outperforms
TCRCs and classical Reservoir Computing, in the form of the prominent Echo
State Networks, by up to $99.99\%$ for the non-chaotic time series and
$87.13\%$ for the chaotic ones.
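The deterministic maps named above are simple to instantiate; a hedged sketch (parameter values are illustrative, not the paper's) of generating state sequences from the logistic and Chebyshev maps:

```python
import numpy as np

def logistic_states(x0, r=3.9, n=100):
    """Iterate the logistic map x_{k+1} = r * x_k * (1 - x_k), x in [0, 1]."""
    xs = np.empty(n)
    x = x0
    for k in range(n):
        x = r * x * (1.0 - x)
        xs[k] = x
    return xs

def chebyshev_states(x0, degree=4, n=100):
    """Iterate the Chebyshev map x_{k+1} = cos(degree * arccos(x_k)), x in [-1, 1]."""
    xs = np.empty(n)
    x = x0
    for k in range(n):
        x = np.cos(degree * np.arccos(x))
        xs[k] = x
    return xs

print(logistic_states(0.5, n=5))
print(chebyshev_states(0.3, n=5))
```

Both iterations are parametrized yet fully deterministic: rerunning with the same seed state reproduces the sequence exactly, which is what makes the resulting reservoirs amenable to theoretical analysis.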
|
2501.15616
|
IPVTON: Image-based 3D Virtual Try-on with Image Prompt Adapter
|
cs.CV
|
Given a pair of images depicting a person and a garment separately,
image-based 3D virtual try-on methods aim to reconstruct a 3D human model that
realistically portrays the person wearing the desired garment. In this paper,
we present IPVTON, a novel image-based 3D virtual try-on framework. IPVTON
employs score distillation sampling with image prompts to optimize a hybrid 3D
human representation, integrating target garment features into diffusion priors
through an image prompt adapter. To avoid interference with non-target areas,
we leverage mask-guided image prompt embeddings to focus the image features on
the try-on regions. Moreover, we impose geometric constraints on the 3D model
with a pseudo silhouette generated by ControlNet, ensuring that the clothed 3D
human model retains the shape of the source identity while accurately wearing
the target garments. Extensive qualitative and quantitative experiments
demonstrate that IPVTON outperforms previous methods in image-based 3D virtual
try-on tasks, excelling in both geometry and texture.
|
2501.15617
|
I-trustworthy Models. A framework for trustworthiness evaluation of
probabilistic classifiers
|
stat.ML cs.LG stat.ME
|
As probabilistic models continue to permeate various facets of our society
and contribute to scientific advancements, it becomes a necessity to go beyond
traditional metrics such as predictive accuracy and error rates and assess
their trustworthiness. Grounded in the competence-based theory of trust, this
work formalizes the I-trustworthy framework, a novel framework for assessing
the trustworthiness of probabilistic classifiers on inference tasks by linking
local calibration to trustworthiness. To assess I-trustworthiness, we use the
local calibration error (LCE) and develop a hypothesis-testing method. This
method utilizes a kernel-based test statistic, Kernel Local Calibration Error
(KLCE), to test local calibration of a probabilistic classifier. This study
provides theoretical guarantees by offering convergence bounds for an unbiased
estimator of KLCE. Additionally, we present a diagnostic tool designed to
identify and measure biases in cases of miscalibration. The effectiveness of
the proposed test statistic is demonstrated through its application to both
simulated and real-world datasets. Finally, we study the LCE of related
recalibration methods and provide evidence that existing methods are
insufficient to achieve I-trustworthiness.
|
2501.15618
|
Your Learned Constraint is Secretly a Backward Reachable Tube
|
cs.RO cs.AI cs.LG
|
Inverse Constraint Learning (ICL) is the problem of inferring constraints
from safe (i.e., constraint-satisfying) demonstrations. The hope is that these
inferred constraints can then be used downstream to search for safe policies
for new tasks and, potentially, under different dynamics. Our paper explores
the question of what mathematical entity ICL recovers. Somewhat surprisingly,
we show that both in theory and in practice, ICL recovers the set of states
where failure is inevitable, rather than the set of states where failure has
already happened. In the language of safe control, this means we recover a
backwards reachable tube (BRT) rather than a failure set. In contrast to the
failure set, the BRT depends on the dynamics of the data collection system. We
discuss the implications of the dynamics-conditionedness of the recovered
constraint on both the sample-efficiency of policy search and the
transferability of learned constraints.
|
2501.15619
|
GaussianToken: An Effective Image Tokenizer with 2D Gaussian Splatting
|
cs.CV cs.AI
|
Effective image tokenization is crucial for both multi-modal understanding
and generation tasks due to the necessity of the alignment with discrete text
data. To this end, existing approaches utilize vector quantization (VQ) to
project pixels onto a discrete codebook and reconstruct images from the
discrete representation. However, compared with the continuous latent space,
the limited discrete codebook space significantly restricts the representational
ability of these image tokenizers. In this paper, we propose GaussianToken: An
Effective Image Tokenizer with 2D Gaussian Splatting as a solution. We first
represent the encoded samples as multiple flexible featured 2D Gaussians
characterized by positions, rotation angles, scaling factors, and feature
coefficients. We adopt the standard quantization for the Gaussian features and
then concatenate the quantization results with the other intrinsic Gaussian
parameters before the corresponding splatting operation and the subsequent
decoding module. In general, GaussianToken integrates the local influence of 2D
Gaussian distribution into the discrete space and thus enhances the
representation capability of the image tokenizer. Competitive reconstruction
performances on CIFAR, Mini-ImageNet, and ImageNet-1K demonstrate the
effectiveness of our framework. Our code is available at:
https://github.com/ChrisDong-THU/GaussianToken.
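The "standard quantization" step the abstract refers to is nearest-neighbour vector quantization of feature vectors against a codebook; a minimal numpy sketch (codebook size and dimensions are illustrative, not GaussianToken's configuration):

```python
import numpy as np

def vector_quantize(features, codebook):
    """Map each feature row to its nearest codebook entry (L2 distance)."""
    # (N, 1, D) - (1, K, D) broadcast -> (N, K) squared distances
    d2 = ((features[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    idx = d2.argmin(axis=1)            # discrete token ids
    return idx, codebook[idx]          # ids and quantized features

rng = np.random.default_rng(0)
codebook = rng.normal(size=(16, 8))    # K=16 codes, D=8 dims
feats = rng.normal(size=(4, 8))        # e.g. per-Gaussian feature coefficients
ids, quant = vector_quantize(feats, codebook)
print(ids)
```

In the framework described, only the Gaussian feature coefficients would pass through this quantizer, while positions, rotation angles, and scaling factors remain continuous and are concatenated back in before splatting.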
|
2501.15624
|
Improving Estonian Text Simplification through Pretrained Language
Models and Custom Datasets
|
cs.CL
|
This study introduces an approach to Estonian text simplification using two
model architectures: a neural machine translation model and a fine-tuned large
language model (LLaMA). Given the limited resources for Estonian, we developed
a new dataset, the Estonian Simplification Dataset, combining translated data
and GPT-4.0-generated simplifications. We benchmarked OpenNMT, a neural machine
translation model that frames text simplification as a translation task, and
fine-tuned the LLaMA model on our dataset to tailor it specifically for
Estonian simplification. Manual evaluations on the test set show that the LLaMA
model consistently outperforms OpenNMT in readability, grammaticality, and
meaning preservation. These findings underscore the potential of large language
models for low-resource languages and provide a basis for further research in
Estonian text simplification.
|
2501.15627
|
HardML: A Benchmark For Evaluating Data Science And Machine Learning
knowledge and reasoning in AI
|
cs.LG cs.AI
|
We present HardML, a benchmark designed to evaluate the knowledge and
reasoning abilities in the fields of data science and machine learning. HardML
comprises a diverse set of 100 challenging multiple-choice questions,
handcrafted over a period of 6 months, covering the most popular and modern
branches of data science and machine learning. These questions are challenging
even for a typical Senior Machine Learning Engineer to answer correctly. To
minimize the risk of data contamination, HardML uses mostly original content
devised by the author. Current state-of-the-art AI models achieve a 30% error
rate on this benchmark, roughly three times the error rate on the equivalent,
well-known MMLU ML.
aiming to push the frontier, primarily due to its multiple choice nature, it
serves as a rigorous and modern testbed to quantify and track the progress of
top AI. While plenty of benchmarks and LLM-evaluation experimentation exist in
other STEM fields such as mathematics, physics, and chemistry, the subfields of
data science and machine learning remain fairly underexplored.
|
2501.15630
|
Quantum-Enhanced Attention Mechanism in NLP: A Hybrid Classical-Quantum
Approach
|
cs.CL quant-ph
|
Transformer-based models have achieved remarkable results in natural language
processing (NLP) tasks such as text classification and machine translation.
However, their computational complexity and resource demands pose challenges
for scalability and accessibility. This research proposes a hybrid
quantum-classical transformer model that integrates a quantum-enhanced
attention mechanism to address these limitations. By leveraging quantum kernel
similarity and variational quantum circuits (VQC), the model captures intricate
token dependencies while improving computational efficiency. Experimental
results on the IMDb dataset demonstrate that the quantum-enhanced model
outperforms the classical baseline across all key metrics, achieving a 1.5%
improvement in accuracy (65.5% vs. 64%), precision, recall, and F1 score.
Statistical significance tests validate these improvements, highlighting the
robustness of the quantum approach. These findings illustrate the
transformative potential of quantum-enhanced attention mechanisms in optimizing
NLP architectures for real-world applications.
|
2501.15631
|
BoKDiff: Best-of-K Diffusion Alignment for Target-Specific 3D Molecule
Generation
|
q-bio.BM cs.LG
|
Structure-based drug design (SBDD) leverages the 3D structure of biomolecular
targets to guide the creation of new therapeutic agents. Recent advances in
generative models, including diffusion models and geometric deep learning, have
demonstrated promise in optimizing ligand generation. However, the scarcity of
high-quality protein-ligand complex data and the inherent challenges in
aligning generated ligands with target proteins limit the effectiveness of
these methods. We propose BoKDiff, a novel framework that enhances ligand
generation by combining multi-objective optimization and Best-of-K alignment
methodologies. Built upon the DecompDiff model, BoKDiff generates diverse
candidates and ranks them using a weighted evaluation of molecular properties
such as QED, SA, and docking scores. To address alignment challenges, we
introduce a method that relocates the center of mass of generated ligands to
their docking poses, enabling accurate sub-component extraction. Additionally,
we integrate a Best-of-N (BoN) sampling approach, which selects the optimal
ligand from multiple generated candidates without requiring fine-tuning. BoN
achieves exceptional results, with QED values exceeding 0.6, SA scores above
0.75, and a success rate surpassing 35%, demonstrating its efficiency and
practicality. BoKDiff achieves state-of-the-art results on the CrossDocked2020
dataset, including a -8.58 average Vina docking score and a 26% success rate in
molecule generation. This study is the first to apply Best-of-K alignment and
Best-of-N sampling to SBDD, highlighting their potential to bridge generative
modeling with practical drug discovery requirements. The code is provided at
https://github.com/khodabandeh-ali/BoKDiff.git.
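The Best-of-N selection described above reduces to scoring K candidates on a weighted combination of properties and keeping the argmax; a hedged sketch with hypothetical property weights (not BoKDiff's actual weighting):

```python
def best_of_n(candidates, weights):
    """Pick the candidate maximizing a weighted sum of its properties.

    candidates: list of dicts holding the scored molecular properties.
    weights: property -> weight; use a negative weight for 'lower is
             better' terms such as Vina docking scores.
    """
    def score(c):
        return sum(w * c[k] for k, w in weights.items())
    return max(candidates, key=score)

# Hypothetical generated ligands (values are illustrative only)
ligands = [
    {"qed": 0.55, "sa": 0.70, "vina": -7.2},
    {"qed": 0.65, "sa": 0.78, "vina": -8.6},
    {"qed": 0.60, "sa": 0.74, "vina": -8.0},
]
weights = {"qed": 1.0, "sa": 1.0, "vina": -0.1}  # more negative vina is better
best = best_of_n(ligands, weights)
print(best["vina"])  # -8.6
```

No fine-tuning is involved: the generator is sampled K times and selection happens purely at the ranking stage, which is what makes the approach cheap to bolt onto an existing diffusion model.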
|
2501.15634
|
Be Intentional About Fairness!: Fairness, Size, and Multiplicity in the
Rashomon Set
|
cs.CY cs.LG
|
When selecting a model from a set of equally performant models, how much
unfairness can you really reduce? Is it important to be intentional about
fairness when choosing among this set, or is arbitrarily choosing among the set
of ''good'' models good enough? Recent work has highlighted that the phenomenon
of model multiplicity, where multiple models with nearly identical predictive
accuracy exist for the same task, has both positive and negative implications
for fairness, from strengthening the enforcement of civil rights law in AI
systems to showcasing arbitrariness in AI decision-making. Despite the enormous
implications of model multiplicity, there is little work that explores the
properties of sets of equally accurate models, or Rashomon sets, in general. In
this paper, we present five main theoretical and methodological contributions
which help us to understand the relatively unexplored properties of the
Rashomon set, in particular with regards to fairness. Our contributions include
methods for efficiently sampling models from this set and techniques for
identifying the fairest models according to key fairness metrics such as
statistical parity. We also derive the probability that an individual's
prediction will be flipped within the Rashomon set, as well as expressions for
the set's size and the distribution of error tolerance used across models.
These results lead to policy-relevant takeaways, such as the importance of
intentionally looking for fair models within the Rashomon set, and
understanding which individuals or groups may be more susceptible to arbitrary
decisions.
|
2501.15638
|
A Comprehensive Survey on Self-Interpretable Neural Networks
|
cs.LG cs.AI
|
Neural networks have achieved remarkable success across various fields.
However, the lack of interpretability limits their practical use, particularly
in critical decision-making scenarios. Post-hoc interpretability, which
provides explanations for pre-trained models, is often at risk of robustness
and fidelity. This has inspired a rising interest in self-interpretable neural
networks, which inherently reveal the prediction rationale through the model
structures. Although there exist surveys on post-hoc interpretability, a
comprehensive and systematic survey of self-interpretable neural networks is
still missing. To address this gap, we first collect and review existing works
on self-interpretable neural networks and provide a structured summary of their
methodologies from five key perspectives: attribution-based, function-based,
concept-based, prototype-based, and rule-based self-interpretation. We also
present concrete, visualized examples of model explanations and discuss their
applicability across diverse scenarios, including image, text, graph data, and
deep reinforcement learning. Additionally, we summarize existing evaluation
metrics for self-interpretability and identify open challenges in this field,
offering insights for future research. To support ongoing developments, we
present a publicly accessible resource to track advancements in this domain:
https://github.com/yangji721/Awesome-Self-Interpretable-Neural-Network.
|
2501.15641
|
Bringing Characters to New Stories: Training-Free Theme-Specific Image
Generation via Dynamic Visual Prompting
|
cs.CV
|
The stories and characters that captivate us as we grow up shape unique
fantasy worlds, with images serving as the primary medium for visually
experiencing these realms. Personalizing generative models through fine-tuning
with theme-specific data has become a prevalent approach in text-to-image
generation. However, unlike object customization, which focuses on learning
specific objects, theme-specific generation encompasses diverse elements such
as characters, scenes, and objects. Such diversity also introduces a key
challenge: how to adaptively generate multi-character, multi-concept, and
continuous theme-specific images (TSI). Moreover, fine-tuning approaches often
come with significant computational overhead, time costs, and risks of
overfitting. This paper explores a fundamental question: Can image generation
models directly leverage images as contextual input, similarly to how large
language models use text as context? To address this, we present T-Prompter, a
novel training-free TSI method for generation. T-Prompter introduces visual
prompting, a mechanism that integrates reference images into generative models,
allowing users to seamlessly specify the target theme without requiring
additional training. To further enhance this process, we propose a Dynamic
Visual Prompting (DVP) mechanism, which iteratively optimizes visual prompts to
improve the accuracy and quality of generated images. Our approach enables
diverse applications, including consistent story generation, character design,
realistic character generation, and style-guided image generation. Comparative
evaluations against state-of-the-art personalization methods demonstrate that
T-Prompter achieves significantly better results and excels in maintaining
character identity, style consistency, and text alignment, offering a robust
and flexible solution for theme-specific image generation.
|
2501.15645
|
Individual Confidential Computing of Polynomials over Non-Uniform
Information
|
cs.IT math.IT
|
In this paper, we address the problem of secure distributed computation in
scenarios where user data is not uniformly distributed, extending existing
frameworks that assume uniformity, an assumption that is challenging to enforce
on real-world data. Motivated by the pervasive reliance on single service
providers for data storage and computation, we propose a privacy-preserving
scheme that achieves information-theoretic security guarantees for computing
polynomials over non-uniform data distributions. Our framework builds upon the
concept of perfect subset privacy and employs linear hashing techniques to
transform non-uniform data into approximately uniform distributions, enabling
robust and secure computation. We derive leakage bounds and demonstrate that
information leakage of any subset of user data to untrusted service providers,
i.e., not only to colluding workers but also (and more importantly) to the
admin, remains negligible under the proposed scheme.
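The linear-hashing step can be sketched in a few lines: a random binary matrix applied over GF(2) maps input bit-strings to shorter digests, and any such map is linear, the property such schemes exploit to smooth non-uniform inputs toward uniformity. The sampling and bit-widths below are illustrative, not the paper's construction:

```python
import random

def random_linear_hash(n_in: int, n_out: int, seed: int = 0):
    """Sample a random n_out x n_in binary matrix defining a linear hash over GF(2)."""
    rng = random.Random(seed)
    return [[rng.randint(0, 1) for _ in range(n_in)] for _ in range(n_out)]

def apply_hash(matrix, bits):
    """Matrix-vector product over GF(2): each output bit is a parity of input bits."""
    return [sum(m * b for m, b in zip(row, bits)) % 2 for row in matrix]

# Hash 8-bit (possibly non-uniform) inputs down to 3 bits.
H = random_linear_hash(8, 3, seed=42)
digest = apply_hash(H, [1, 0, 1, 1, 0, 0, 1, 0])
```

Linearity over GF(2) means the hash of an XOR of inputs equals the XOR of their hashes, which is what makes leakage analysis of such constructions tractable.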
|
2501.15646
|
Mathematical analysis of the gradients in deep learning
|
cs.LG cs.NA math.NA
|
Deep learning algorithms -- typically consisting of a class of deep
artificial neural networks (ANNs) trained by a stochastic gradient descent
(SGD) optimization method -- are nowadays an integral part of many areas of
science, industry, and our day-to-day life. Roughly speaking, in their
most basic form, ANNs can be regarded as functions that consist of a series of
compositions of affine-linear functions with multidimensional versions of
so-called activation functions. One of the most popular of such activation
functions is the rectified linear unit (ReLU) function $\mathbb{R} \ni x
\mapsto \max\{ x, 0 \} \in \mathbb{R}$. The ReLU function is, however, not
differentiable and, typically, this lack of regularity transfers to the cost
function of the supervised learning problem under consideration. Despite this
lack of differentiability, deep learning practitioners apply SGD
methods based on suitably generalized gradients in standard deep learning
libraries like {\sc TensorFlow} or {\sc Pytorch}. In this work we reveal an
accurate and concise mathematical description of such generalized gradients in
the training of deep fully-connected feedforward ANNs and we also study the
resulting generalized gradient function analytically. Specifically, we provide
an appropriate approximation procedure that uniquely describes the generalized
gradient function, we prove that the generalized gradients are limiting
Fr\'echet subgradients of the cost functional, and we conclude that the
generalized gradients must coincide with the standard gradient of the cost
functional on every open set on which the cost functional is continuously
differentiable.
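The convention used in practice can be made concrete with a minimal sketch of ReLU and its generalized gradient. The value 0 at the kink is the convention adopted by common libraries such as PyTorch; away from 0 it coincides with the classical derivative, consistent with the conclusion above on open sets where the cost is continuously differentiable:

```python
def relu(x: float) -> float:
    """The rectified linear unit: x -> max(x, 0)."""
    return max(x, 0.0)

def relu_generalized_gradient(x: float) -> float:
    """Generalized gradient used in practice: 1 for x > 0, else 0.
    The choice at x == 0 is a convention (common libraries use 0);
    for x != 0 it equals the classical derivative."""
    return 1.0 if x > 0 else 0.0

assert relu_generalized_gradient(2.0) == 1.0    # classical derivative region
assert relu_generalized_gradient(-3.0) == 0.0   # classical derivative region
assert relu_generalized_gradient(0.0) == 0.0    # convention at the kink
```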
|
2501.15648
|
Can Pose Transfer Models Generate Realistic Human Motion?
|
cs.CV cs.AI cs.LG
|
Recent pose-transfer methods aim to generate temporally consistent and fully
controllable videos of human action where the motion from a reference video is
reenacted by a new identity. We evaluate three state-of-the-art pose-transfer
methods -- AnimateAnyone, MagicAnimate, and ExAvatar -- by generating videos
with actions and identities outside the training distribution and conducting a
participant study about the quality of these videos. In a controlled
environment of 20 distinct human actions, we find that participants, presented
with the pose-transferred videos, correctly identify the desired action only
42.92% of the time. Moreover, the participants find the actions in the
generated videos consistent with the reference (source) videos only 36.46% of
the time. These results vary by method: participants find the splatting-based
ExAvatar more consistent and photorealistic than the diffusion-based
AnimateAnyone and MagicAnimate.
|
2501.15652
|
Rate Distortion Approach to Joint Communication and Sensing With Markov
States: Open Loop Case
|
cs.IT math.IT
|
We investigate a joint communication and sensing (JCAS) framework in which a
transmitter concurrently transmits information to a receiver and estimates a
state of interest based on noisy observations. The state is assumed to evolve
according to a known dynamical model. Past state estimates may then be used to
inform current state estimates. We show that Bayesian filtering constitutes the
optimal sensing strategy. We analyze JCAS performance under an open loop
encoding strategy with results presented in terms of the tradeoff between
asymptotic communication rate and expected per-block distortion of the state.
We illustrate the general result by specializing the analysis to a
beam-pointing model with mobile state tracking. Our results shed light on the
relative performance of two beam control strategies, beam-switching and
multi-beam.
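The Bayesian-filtering step can be illustrated with a scalar Kalman filter, the closed-form Bayesian filter for a linear-Gaussian state model. This toy model is not the paper's beam-pointing setup; the dynamics and noise levels below are made up:

```python
def kalman_1d(zs, a=1.0, q=0.04, r=0.25, x0=0.0, p0=1.0):
    """Scalar Kalman filter for x_{t+1} = a*x_t + w (Var q), z_t = x_t + v (Var r).
    Returns the posterior mean and variance after each observation."""
    x, p = x0, p0
    means, variances = [], []
    for z in zs:
        # Predict using the known dynamical model.
        x, p = a * x, a * a * p + q
        # Update with the noisy observation.
        k = p / (p + r)          # Kalman gain
        x = x + k * (z - x)
        p = (1.0 - k) * p
        means.append(x)
        variances.append(p)
    return means, variances

# Feeding in a stream of noisy range observations shrinks the posterior variance.
means, variances = kalman_1d([0.1, -0.2, 0.05, 0.0, 0.1] * 4)
```

Past estimates inform current ones through the predict step, which is exactly how a known state dynamic lets the sensor accumulate information across blocks.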
|
2501.15653
|
A Privacy Enhancing Technique to Evade Detection by Street Video Cameras
Without Using Adversarial Accessories
|
cs.CV
|
In this paper, we propose a privacy-enhancing technique leveraging an
inherent property of automatic pedestrian detection algorithms, namely, that
the training of deep neural network (DNN) based methods is generally performed
using curated datasets and laboratory settings, while the operational areas of
these methods are dynamic real-world environments. In particular, we leverage a
novel side effect of this gap between the laboratory and the real world:
location-based weakness in pedestrian detection. We demonstrate that the
position (distance, angle, height) of a person, and ambient light level,
directly impact the confidence of a pedestrian detector when detecting the
person. We then demonstrate that this phenomenon is present in pedestrian
detectors observing a stationary scene of pedestrian traffic, with blind spot
areas of weak detection of pedestrians with low confidence. We show how
privacy-concerned pedestrians can leverage these blind spots to evade detection
by constructing a minimum confidence path between two points in a scene,
reducing the maximum confidence and average confidence of the path by up to
0.09 and 0.13, respectively, over direct and random paths through the scene. To
counter this phenomenon, and force the use of more costly and sophisticated
methods to leverage this vulnerability, we propose a novel countermeasure to
improve the confidence of pedestrian detectors in blind spots, raising the
max/average confidence of paths generated by our technique by 0.09 and 0.05,
respectively. In addition, we demonstrate that our countermeasure improves a
Faster R-CNN-based pedestrian detector's TPR and average true positive
confidence by 0.03 and 0.15, respectively.
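The minimum-confidence path construction can be sketched as a shortest-path search over a grid of per-location detector confidences. A minimal sketch assuming a 4-connected grid and summed confidence as the path cost (a proxy for the average-confidence criterion); the grid values are made up:

```python
import heapq

def min_confidence_path(grid, start, goal):
    """Dijkstra over a grid of per-cell detector confidences, minimising the
    summed confidence along a 4-connected path between start and goal."""
    rows, cols = len(grid), len(grid[0])
    dist = {start: grid[start[0]][start[1]]}
    prev = {}
    heap = [(dist[start], start)]
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if (r, c) == goal:
            break
        if d > dist[(r, c)]:
            continue  # stale heap entry
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + grid[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = (r, c)
                    heapq.heappush(heap, (nd, (nr, nc)))
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1], dist[goal]

# A low-confidence "blind spot" column that an evading pedestrian would follow.
grid = [[0.9, 0.1, 0.9],
        [0.9, 0.1, 0.9],
        [0.9, 0.1, 0.9]]
path, total = min_confidence_path(grid, (0, 0), (2, 2))
```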
|
2501.15654
|
People who frequently use ChatGPT for writing tasks are accurate and
robust detectors of AI-generated text
|
cs.CL cs.AI
|
In this paper, we study how well humans can detect text generated by
commercial LLMs (GPT-4o, Claude, o1). We hire annotators to read 300
non-fiction English articles, label them as either human-written or
AI-generated, and provide paragraph-length explanations for their decisions.
Our experiments show that annotators who frequently use LLMs for writing tasks
excel at detecting AI-generated text, even without any specialized training or
feedback. In fact, the majority vote among five such "expert" annotators
misclassifies only 1 of 300 articles, significantly outperforming most
commercial and open-source detectors we evaluated even in the presence of
evasion tactics like paraphrasing and humanization. Qualitative analysis of the
experts' free-form explanations shows that while they rely heavily on specific
lexical clues ('AI vocabulary'), they also pick up on more complex phenomena
within the text (e.g., formality, originality, clarity) that are challenging to
assess for automatic detectors. We release our annotated dataset and code to
spur future research into both human and automated detection of AI-generated
text.
|
2501.15655
|
A Machine Learning Approach to Automatic Fall Detection of Soldiers
|
cs.LG cs.NE
|
Military personnel and security agents often face significant physical risks
during conflict and engagement situations, particularly in urban operations.
Ensuring the rapid and accurate communication of incidents involving injuries
is crucial for the timely execution of rescue operations. This article presents
research conducted under the scope of the Brazilian Navy's ``Soldier of the
Future'' project, focusing on the development of a Casualty Detection System to
identify injuries that could incapacitate a soldier and lead to severe blood
loss. The study specifically addresses the detection of soldier falls, which
may indicate critical injuries such as hypovolemic hemorrhagic shock. To
generate the publicly available dataset, we used smartwatches and smartphones
as wearable devices to collect inertial data from soldiers during various
activities, including simulated falls. The data were used to train 1D
Convolutional Neural Networks (CNN1D) with the objective of accurately
classifying falls that could result from life-threatening injuries. We explored
different sensor placements (on the wrists and near the center of mass) and
various approaches to using inertial variables, including linear and angular
accelerations. The neural network models were optimized using Bayesian
techniques to enhance their performance. The best-performing model and its
results, discussed in this article, contribute to the advancement of automated
systems for monitoring soldier safety and improving response times in
engagement scenarios.
|
2501.15656
|
Classifying Deepfakes Using Swin Transformers
|
cs.CV
|
The proliferation of deepfake technology poses significant challenges to the
authenticity and trustworthiness of digital media, necessitating the
development of robust detection methods. This study explores the application of
Swin Transformers, a state-of-the-art architecture leveraging shifted windows
for self-attention, in detecting and classifying deepfake images. Using the
Real and Fake Face Detection dataset by Yonsei University's Computational
Intelligence Photography Lab, we evaluate the Swin Transformer and hybrid
models such as Swin-ResNet and Swin-KNN, focusing on their ability to identify
subtle manipulation artifacts. Our results demonstrate that the Swin
Transformer outperforms conventional CNN-based architectures, including VGG16,
ResNet18, and AlexNet, achieving a test accuracy of 71.29%. Additionally, we
present insights into hybrid model design, highlighting the complementary
strengths of transformer and CNN-based approaches in deepfake detection. This
study underscores the potential of transformer-based architectures for
improving accuracy and generalizability in image-based manipulation detection,
paving the way for more effective countermeasures against deepfake threats.
|
2501.15659
|
AirIO: Learning Inertial Odometry with Enhanced IMU Feature
Observability
|
cs.RO cs.CV cs.LG
|
Inertial odometry (IO) using only Inertial Measurement Units (IMUs) offers a
lightweight and cost-effective solution for Unmanned Aerial Vehicle (UAV)
applications, yet existing learning-based IO models often fail to generalize to
UAVs due to the highly dynamic and non-linear flight patterns that differ from
pedestrian motion. In this work, we identify that the conventional practice of
transforming raw IMU data to global coordinates undermines the observability of
critical kinematic information in UAVs. By preserving the body-frame
representation, our method achieves substantial performance improvements, with
a 66.7% average increase in accuracy across three datasets. Furthermore,
explicitly encoding attitude information into the motion network results in an
additional 23.8% improvement over prior results. Combined with a data-driven
IMU correction model (AirIMU) and an uncertainty-aware Extended Kalman Filter
(EKF), our approach ensures robust state estimation under aggressive UAV
maneuvers without relying on external sensors or control inputs. Notably, our
method also demonstrates strong generalizability to unseen data not included in
the training set, underscoring its potential for real-world UAV applications.
|
2501.15660
|
Marker Track: Accurate Fiducial Marker Tracking for Evaluation of
Residual Motions During Breath-Hold Radiotherapy
|
cs.CV cs.AI eess.IV physics.med-ph
|
Fiducial marker positions in projection image of cone-beam computed
tomography (CBCT) scans have been studied to evaluate daily residual motion
during breath-hold radiation therapy. Fiducial marker migration posed
challenges in accurately locating markers, prompting the development of a novel
algorithm that reconstructs volumetric probability maps of marker locations
from filtered gradient maps of projections. This guides the development of a
Python-based algorithm to detect fiducial markers in projection images using
Meta AI's Segment Anything Model 2 (SAM 2). Retrospective data from a
pancreatic cancer patient with two fiducial markers were analyzed. The
three-dimensional (3D) marker positions from simulation computed tomography
(CT) were compared to those reconstructed from CBCT images, revealing a
decrease in relative distances between markers over time. Fiducial markers were
successfully detected in 2777 out of 2786 projection frames. The average
standard deviation of superior-inferior (SI) marker positions was 0.56 mm per
breath-hold, with differences in average SI positions between two breath-holds
in the same scan reaching up to 5.2 mm, and a gap of up to 7.3 mm between the
end of the first and beginning of the second breath-hold. 3D marker positions
were calculated using projection positions and confirmed marker migration. This
method effectively calculates marker probability volume and enables accurate
fiducial marker tracking during treatment without requiring any specialized
equipment, additional radiation doses, or manual initialization and labeling.
It has significant potential for automatically assessing daily residual motion
to adjust planning margins, functioning as an adaptive radiation therapy tool.
|
2501.15661
|
Constrained Hybrid Metaheuristic Algorithm for Probabilistic Neural
Networks Learning
|
cs.NE cs.AI
|
This study investigates the potential of hybrid metaheuristic algorithms to
enhance the training of Probabilistic Neural Networks (PNNs) by leveraging the
complementary strengths of multiple optimisation strategies. Traditional
learning methods, such as gradient-based approaches, often struggle to optimise
high-dimensional and uncertain environments, while single-method metaheuristics
may fail to exploit the solution space fully. To address these challenges, we
propose the constrained Hybrid Metaheuristic (cHM) algorithm, a novel approach
that combines multiple population-based optimisation techniques into a unified
framework. The proposed procedure operates in two phases: an initial probing
phase evaluates multiple metaheuristics to identify the best-performing one
based on the error rate, followed by a fitting phase where the selected
metaheuristic refines the PNN to achieve optimal smoothing parameters. This
iterative process ensures efficient exploration and convergence, enhancing the
network's generalisation and classification accuracy. cHM integrates several
popular metaheuristics, such as BAT, Simulated Annealing, Flower Pollination
Algorithm, Bacterial Foraging Optimization, and Particle Swarm Optimisation as
internal optimisers. To evaluate cHM performance, experiments were conducted on
16 datasets with varying characteristics, including binary and multiclass
classification tasks, balanced and imbalanced class distributions, and diverse
feature dimensions. The results demonstrate that cHM effectively combines the
strengths of individual metaheuristics, leading to faster convergence and more
robust learning. By optimising the smoothing parameters of PNNs, the proposed
method enhances classification performance across diverse datasets, proving its
application flexibility and efficiency.
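The two-phase probe-then-fit procedure can be sketched with stand-in optimisers (random search and hill climbing in place of BAT, PSO, and the other metaheuristics named above) on a toy objective standing in for the PNN smoothing-parameter error surface:

```python
import random

def random_search(f, lo, hi, iters, rng):
    """Stand-in metaheuristic: sample uniformly, keep the best point."""
    best_x, best_f = None, float("inf")
    for _ in range(iters):
        x = rng.uniform(lo, hi)
        if f(x) < best_f:
            best_x, best_f = x, f(x)
    return best_x, best_f

def hill_climb(f, lo, hi, iters, rng):
    """Stand-in metaheuristic: greedy local perturbations."""
    x = rng.uniform(lo, hi)
    fx = f(x)
    for _ in range(iters):
        cand = min(hi, max(lo, x + rng.gauss(0, 0.1)))
        if f(cand) < fx:
            x, fx = cand, f(cand)
    return x, fx

def constrained_hybrid(f, lo, hi, probe_iters=50, fit_iters=500, seed=0):
    """Two-phase scheme in the spirit of cHM: probe each candidate optimiser
    briefly, then let the best-scoring one refine the solution."""
    rng = random.Random(seed)
    candidates = {"random_search": random_search, "hill_climb": hill_climb}
    scores = {name: opt(f, lo, hi, probe_iters, rng)[1]
              for name, opt in candidates.items()}
    winner = min(scores, key=scores.get)          # lowest probing error wins
    return winner, candidates[winner](f, lo, hi, fit_iters, rng)

# Toy error surface with its optimum at smoothing parameter 0.3.
winner, (best_s, best_err) = constrained_hybrid(lambda s: (s - 0.3) ** 2, 0.0, 1.0)
```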
|
2501.15665
|
StagFormer: Time Staggering Transformer Decoding for Running Layers In
  Parallel
|
cs.LG cs.AI
|
Standard decoding in a Transformer based language model is inherently
sequential as we wait for a token's embedding to pass through all the layers in
the network before starting the generation of the next token. In this work, we
propose a new architecture, StagFormer (Staggered Transformer), which staggers
execution along the time axis and thereby enables parallelizing the decoding
process along the depth of the model. We achieve this by breaking the
dependency of the token representation at time step $i$ in layer $l$ upon the
representations of tokens until time step $i$ from layer $l-1$. Instead, we
stagger the execution and only allow a dependency on token representations
until time step $i-1$. The later sections of the Transformer still get access
to the ``rich" representations from the prior section but only from those token
positions which are one time step behind. StagFormer allows for different
sections of the model to be executed in parallel, yielding a potential 33\%
speedup in decoding while being quality neutral in our simulations. We also
explore many natural variants of this idea. We present how weight-sharing
across the different sections being staggered can be more practical in settings
with limited memory. We show how one can approximate a recurrent model during
inference using such weight-sharing. We explore the efficacy of using a bounded
window attention to pass information from one section to another, which helps
drive further latency gains for some applications. We also demonstrate the
scalability of the staggering idea over more than two sections of the
Transformer.
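The staggered dependency can be illustrated with plain attention masks: a later section at step i normally sees earlier-section representations up to step i, whereas staggering restricts it to step i-1 so both sections can run on the current step in parallel. A minimal sketch where `True` marks an allowed attention edge; the actual StagFormer wiring is more involved:

```python
def causal_mask(T: int):
    """Standard causal dependency: position i may use positions j <= i."""
    return [[j <= i for j in range(T)] for i in range(T)]

def staggered_mask(T: int):
    """Staggered cross-section dependency: step i in a later section may only
    use earlier-section representations up to step i - 1."""
    return [[j <= i - 1 for j in range(T)] for i in range(T)]
```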
|
2501.15666
|
MimicGait: A Model Agnostic approach for Occluded Gait Recognition using
Correlational Knowledge Distillation
|
cs.CV
|
Gait recognition is an important biometric technique over large distances.
State-of-the-art gait recognition systems perform very well in controlled
environments at close range. Recently, there has been an increased interest in
gait recognition in the wild prompted by the collection of outdoor, more
challenging datasets containing variations in terms of illumination, pitch
angles, and distances. An important problem in these environments is that of
occlusion, where the subject is partially blocked from camera view. While
important, this problem has received little attention. Thus, we propose
MimicGait, a model-agnostic approach for gait recognition in the presence of
occlusions. We train the network using a multi-instance correlational
distillation loss to capture both inter-sequence and intra-sequence
correlations in the occluded gait patterns of a subject, utilizing an auxiliary
Visibility Estimation Network to guide the training of the proposed mimic
network. We demonstrate the effectiveness of our approach on challenging
real-world datasets like GREW, Gait3D and BRIAR. We release the code in
https://github.com/Ayush-00/mimicgait.
|
2501.15674
|
TensorLLM: Tensorising Multi-Head Attention for Enhanced Reasoning and
Compression in LLMs
|
cs.CL cs.LG
|
The reasoning abilities of Large Language Models (LLMs) can be improved by
structurally denoising their weights, yet existing techniques primarily focus
on denoising the feed-forward network (FFN) of the transformer block, and can
not efficiently utilise the Multi-head Attention (MHA) block, which is the core
of transformer architectures. To address this issue, we propose a novel
intuitive framework that, at its very core, performs MHA compression through a
multi-head tensorisation process and the Tucker decomposition. This enables
both higher-dimensional structured denoising and compression of the MHA
weights, by enforcing a shared higher-dimensional subspace across the weights
of the multiple attention heads. We demonstrate that this approach consistently
enhances the reasoning capabilities of LLMs across multiple benchmark datasets,
and for both encoder-only and decoder-only architectures, while achieving
compression rates of up to $\sim 250$ times in the MHA weights, all without
requiring any additional data, training, or fine-tuning. Furthermore, we show
that the proposed method can be seamlessly combined with existing
FFN-only-based denoising techniques to achieve further improvements in LLM
reasoning performance.
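The multi-head tensorisation can be illustrated by stacking per-head weight matrices into a 3-way tensor and unfolding it along the head mode; a shared subspace across heads then shows up as low rank in this unfolding, which a Tucker factorisation would exploit. The actual decomposition (e.g. via higher-order SVD) is omitted, and the numbers are toy values:

```python
def mode1_unfolding(tensor):
    """Unfold an (H, d_out, d_in) tensor of stacked head weights along the
    head mode into an H x (d_out * d_in) matrix: each row flattens one head."""
    return [[x for row in head for x in row] for head in tensor]

# Two 2x2 heads that are scalar multiples of each other: the unfolding is rank 1.
heads = [[[1, 2], [3, 4]],
         [[2, 4], [6, 8]]]
M = mode1_unfolding(heads)
```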
|
2501.15675
|
Joint Communication and Sensing with Bipartite Entanglement over Bosonic
Channels
|
quant-ph cs.IT math.IT
|
We consider a joint communication and sensing problem in an optical link in
which a low-power transmitter attempts to communicate with a receiver while
simultaneously identifying the range of a defect creating a backscattered
signal. We model the system as a lossy thermal noise bosonic channel in which
the location of the target, modeled as a beamsplitter, affects the timing of
the backscattered signal. Motivated by the envisioned deployment of
entanglement sharing quantum networks, we allow the transmitter to exploit
entanglement to assist its sensing and communication. Since entanglement is
known to enhance sensing, as known from quantum illumination, and increase
communication rates, as known from the characterization of the
entanglement-assisted capacity, the transmitter is faced with a trade-off and
must judiciously allocate its entanglement resources. Our main result is a
characterization of the trade-offs incurred in the form of an achievable
rate/error-exponent region which can beat time-sharing in certain cases. The
proof of our result relies on technical results of independent interest, by
which we carefully show how to extend the known asymptotic characterization of
multi-hypothesis testing Chernoff exponent in finite-dimensional spaces to
infinite-dimensional spaces and provide a characterization of phase-shift-keying
modulated displaced thermal states in the Fock basis.
|
2501.15677
|
Exploring the Feasibility of Deep Learning Models for Long-term Disease
Prediction: A Case Study for Wheat Yellow Rust in England
|
cs.LG
|
Wheat yellow rust, caused by the fungus Puccinia striiformis, is a critical
disease affecting wheat crops across Britain, leading to significant yield
losses and economic consequences. Given the rapid environmental changes and the
evolving virulence of pathogens, there is a growing need for innovative
approaches to predict and manage such diseases over the long term. This study
explores the feasibility of using deep learning models to predict outbreaks of
wheat yellow rust in British fields, offering a proactive approach to disease
management. We construct a yellow rust dataset with historical weather
information and disease indicators across multiple regions in England. We
employ two powerful deep learning models, fully connected neural networks and
long short-term memory networks, to develop predictive models capable of
recognizing patterns and predicting future disease outbreaks. The models are
trained and validated on randomly sliced datasets. The performance of these
models at different predictive time steps is evaluated based on their
accuracy, precision, recall, and F1-score. Preliminary results indicate that
deep learning models can effectively capture the complex interactions between
multiple factors influencing disease dynamics, demonstrating a promising
capacity to forecast wheat yellow rust with considerable accuracy.
Specifically, the fully connected neural network achieved 83.65% accuracy in a
disease prediction task with a six-month predictive time step. These findings
highlight the potential of deep learning to transform disease management
strategies, enabling earlier and more precise interventions. Our study not only
provides a methodological framework for employing deep learning in agricultural
settings but also opens avenues for future research to enhance the robustness and
applicability of predictive models in combating crop diseases globally.
|
2501.15678
|
Blissful (A)Ignorance: People form overly positive impressions of others
based on their written messages, despite wide-scale adoption of Generative AI
|
cs.CY cs.AI cs.CL cs.HC
|
As the use of Generative AI (GenAI) tools becomes more prevalent in
interpersonal communication, understanding their impact on social perceptions
is crucial. According to signaling theory, GenAI may undermine the credibility
of social signals conveyed in writing, since it reduces the cost of writing and
makes it hard to verify the authenticity of messages. Using a pre-registered
large-scale online experiment (N = 647; Prolific), featuring scenarios in a
range of communication contexts (personal vs. professional; close others vs.
strangers), we explored how senders' use of GenAI influenced recipients'
impressions of senders, both when GenAI use was known or uncertain. Consistent
with past work, we found strong negative effects on social impressions when
disclosing that a message was AI-generated, compared to when the same message
was human-written. However, under the more realistic condition when potential
GenAI use was not explicitly highlighted, recipients did not exhibit any
skepticism towards senders, and these "uninformed" impressions were virtually
indistinguishable from those of fully human-written messages. Even when we
highlighted the potential (but uncertain) use of GenAI, recipients formed
overly positive impressions. These results are especially striking given that
46% of our sample admitted having used such tools for writing messages, just
within the past two weeks. Our findings put past work in a new light: While
social judgments can be substantially affected when GenAI use is explicitly
disclosed, this information may not be readily available in more realistic
communication settings, making recipients blissfully ignorant about others'
potential use of GenAI.
|
2501.15688
|
Transformer-Based Multimodal Knowledge Graph Completion with Link-Aware
Contexts
|
cs.CL cs.AI cs.LG
|
Multimodal knowledge graph completion (MMKGC) aims to predict missing links
in multimodal knowledge graphs (MMKGs) by leveraging information from various
modalities alongside structural data. Existing MMKGC approaches primarily
extend traditional knowledge graph embedding (KGE) models, which often require
creating an embedding for every entity. This results in large model sizes and
inefficiencies in integrating multimodal information, particularly for
real-world graphs. Meanwhile, Transformer-based models have demonstrated
competitive performance in knowledge graph completion (KGC). However, their
focus on single-modal knowledge limits their capacity to utilize cross-modal
information. Recently, large vision-language models (VLMs) have shown potential
in cross-modal tasks but are constrained by the high cost of training. In this
work, we propose a novel approach that integrates Transformer-based KGE models
with cross-modal context generated by pre-trained VLMs, thereby extending their
applicability to MMKGC. Specifically, we employ a pre-trained VLM to transform
relevant visual information from entities and their neighbors into textual
sequences. We then frame KGC as a sequence-to-sequence task, fine-tuning the
model with the generated cross-modal context. This simple yet effective method
significantly reduces model size compared to traditional KGE approaches while
achieving competitive performance across multiple large-scale datasets with
minimal hyperparameter tuning.
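The serialisation of a query triple plus VLM-generated visual context into a sequence-to-sequence input might look as follows; the tag layout and field names are hypothetical, for illustration only:

```python
def kgc_input(head, relation, neighbors, visual_caption):
    """Serialise a link-prediction query, its graph neighbourhood, and a
    VLM-generated caption of the entity's image into one text sequence
    for a sequence-to-sequence KGC model (hypothetical format)."""
    ctx = "; ".join(f"{h} {r} {t}" for h, r, t in neighbors)
    return (f"predict tail: {head} [{relation}] "
            f"| visual: {visual_caption} | context: {ctx}")

seq = kgc_input("Eiffel_Tower", "located_in",
                [("Eiffel_Tower", "built_in", "1889")],
                "a tall iron lattice tower")
```

The model is then fine-tuned to emit the missing tail entity as plain text, so no per-entity embedding table is needed.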
|
2501.15690
|
Refined climatologies of future precipitation over High Mountain Asia
using probabilistic ensemble learning
|
physics.ao-ph cs.LG stat.ML
|
High Mountain Asia holds the largest concentration of frozen water outside
the polar regions, serving as a crucial water source for more than 1.9 billion
people. In the face of climate change, precipitation represents the largest
source of uncertainty for hydrological modelling in this area. Future
precipitation predictions remain challenging due to complex orography, lack of
in situ hydrological observations, and limitations in climate model resolution
and parametrisation for this region. To address the uncertainty posed by these
challenges, climate models are often aggregated into multi-model ensembles.
While multi-model ensembles are known to improve the predictive accuracy and
analysis of future climate projections, consensus regarding how models are
aggregated is lacking. In this study, we propose a probabilistic machine
learning framework to systematically combine 13 regional climate models from
the Coordinated Regional Downscaling Experiment (CORDEX) over High Mountain
Asia. Our approach accounts for seasonal and spatial biases within the models,
enabling the prediction of more faithful precipitation distributions. The
framework is validated against gridded historical precipitation data and is
used to generate projections for the near-future (2036-2065) and far-future
(2066-2095) under RCP4.5 and RCP8.5 scenarios.
|
2501.15693
|
Beyond Benchmarks: On The False Promise of AI Regulation
|
cs.LG cs.AI cs.CL
|
The rapid advancement of artificial intelligence (AI) systems in critical
domains like healthcare, justice, and social services has sparked numerous
regulatory initiatives aimed at ensuring their safe deployment. Current
regulatory frameworks, exemplified by recent US and EU efforts, primarily focus
on procedural guidelines while presuming that scientific benchmarking can
effectively validate AI safety, similar to how crash tests verify vehicle
safety or clinical trials validate drug efficacy. However, this approach
fundamentally misunderstands the unique technical challenges posed by modern AI
systems. Through systematic analysis of successful technology regulation case
studies, we demonstrate that effective scientific regulation requires a causal
theory linking observable test outcomes to future performance - for instance,
how a vehicle's crash resistance at one speed predicts its safety at lower
speeds. We show that deep learning models, which learn complex statistical
patterns from training data without explicit causal mechanisms, preclude such
guarantees. This limitation renders traditional regulatory approaches
inadequate for ensuring AI safety. Moving forward, we call for regulators to
reckon with this limitation, and propose a preliminary two-tiered regulatory
framework that acknowledges these constraints: mandating human oversight for
high-risk applications while developing appropriate risk communication
strategies for lower-risk uses. Our findings highlight the urgent need to
reconsider fundamental assumptions in AI regulation and suggest a concrete path
forward for policymakers and researchers.
|
2501.15694
|
A Statistical Learning Approach to Mediterranean Cyclones
|
physics.ao-ph cs.LG
|
Mediterranean cyclones are extreme meteorological events of which much less
is known compared to their tropical, oceanic counterparts. The rising interest
in such phenomena stems from their impact on a region increasingly affected by
climate change, but a precise characterization remains a non-trivial task.
In this work we showcase how a Bayesian algorithm (Latent Dirichlet Allocation)
can classify Mediterranean cyclones relying on wind velocity data, leading to a
drastic dimensional reduction that allows the use of supervised statistical
learning techniques for detecting and tracking new cyclones.
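The topic-model step above can be sketched with scikit-learn's LDA. The setup is an illustrative assumption, not the paper's actual pipeline: a random count matrix stands in for wind-velocity data discretized into "words", and the number of topics is arbitrary.

```python
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation

rng = np.random.default_rng(0)

# Hypothetical setup: each "document" is one atmospheric snapshot and each
# "word" is a discretized wind-velocity bin at a grid cell; the random
# count matrix below stands in for real reanalysis data.
n_snapshots, n_bins = 200, 50
counts = rng.poisson(lam=2.0, size=(n_snapshots, n_bins))

# Fit LDA: each snapshot is reduced to a low-dimensional topic mixture,
# the kind of drastic dimensional reduction the abstract describes.
lda = LatentDirichletAllocation(n_components=5, random_state=0)
topic_mix = lda.fit_transform(counts)

print(topic_mix.shape)  # (200, 5): 50-dim counts reduced to 5 topic weights
```

The resulting 5-dimensional mixtures, rather than the raw wind fields, would then feed a supervised classifier for detection and tracking.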
|
2501.15695
|
Contextual Knowledge Sharing in Multi-Agent Reinforcement Learning with
Decentralized Communication and Coordination
|
cs.MA cs.AI
|
Decentralized Multi-Agent Reinforcement Learning (Dec-MARL) has emerged as a
pivotal approach for addressing complex tasks in dynamic environments. Existing
Multi-Agent Reinforcement Learning (MARL) methodologies typically assume a
shared objective among agents and rely on centralized control. However, many
real-world scenarios feature agents with individual goals and limited
observability of other agents, complicating coordination and hindering
adaptability. Existing Dec-MARL strategies prioritize either communication or
coordination, lacking an integrated approach that leverages both. This paper
presents a novel Dec-MARL framework that integrates peer-to-peer communication
and coordination, incorporating goal-awareness and time-awareness into the
agents' knowledge-sharing processes. Our framework equips agents with the
ability to (i) share contextually relevant knowledge to assist other agents,
and (ii) reason based on information acquired from multiple agents, while
considering their own goals and the temporal context of prior knowledge. We
evaluate our approach through several complex multi-agent tasks in environments
with dynamically appearing obstacles. Our work demonstrates that incorporating
goal-aware and time-aware knowledge sharing significantly enhances overall
performance.
|
2501.15696
|
Random Walk Guided Hyperbolic Graph Distillation
|
cs.LG
|
Graph distillation (GD) is an effective approach to extract useful
information from large-scale network structures. However, existing methods,
which operate in Euclidean space to generate condensed graphs, struggle to
capture the inherent tree-like geometry of real-world networks, resulting in
distilled graphs with limited task-specific information for downstream tasks.
Furthermore, these methods often fail to extract dynamic properties from
graphs, which are crucial for understanding information flow and facilitating
graph continual learning. This paper presents the Hyperbolic Graph Distillation
with Random Walks Optimization (HyDRO), a novel graph distillation approach
that leverages hyperbolic embeddings to capture complex geometric patterns and
optimize the spectral gap in hyperbolic space. Experiments show that HyDRO
demonstrates strong task generalization, consistently outperforming
state-of-the-art methods in both node classification and link prediction tasks.
HyDRO also effectively preserves graph random walk properties, producing
condensed graphs that achieve enhanced performance in continual graph learning.
Additionally, HyDRO achieves competitive results on mainstream graph
distillation benchmarks, while maintaining a strong balance between privacy and
utility, and exhibiting robust resistance to noises.
|
2501.15700
|
Adapting Biomedical Abstracts into Plain language using Large Language
Models
|
cs.CL
|
A vast amount of medical knowledge is available for public use through online
health forums and question-answering platforms on social media. However, the
majority of the population in the United States lacks the health literacy
needed to make the best use of that information. Health literacy is the ability
to obtain and comprehend basic health information in order to make appropriate
health decisions. To bridge this gap, organizations advocate adapting this
medical knowledge into plain language.
Building robust systems to automate the adaptations helps both medical and
non-medical professionals best leverage the available information online. The
goal of the Plain Language Adaptation of Biomedical Abstracts (PLABA) track is
to adapt the biomedical abstracts in English language extracted from PubMed
based on the questions asked in MedlinePlus for the general public using plain
language at the sentence level. As part of this track, we leveraged the best
open-source Large Language Models suited to, and fine-tuned for, dialog use
cases. We compare and present the results for all of our systems and our
ranking among the other participants' submissions. Our top-performing
GPT-4-based model ranked first on the average simplicity measure and third on
the average accuracy measure.
|
2501.15705
|
Disentanglement Analysis in Deep Latent Variable Models Matching
Aggregate Posterior Distributions
|
cs.LG stat.ML
|
Deep latent variable models (DLVMs) are designed to learn meaningful
representations in an unsupervised manner, such that the hidden explanatory
factors are interpretable by independent latent variables (aka
disentanglement). The variational autoencoder (VAE) is a popular DLVM widely
studied in disentanglement analysis due to the modeling of the posterior
distribution using a factorized Gaussian distribution that encourages the
alignment of the latent factors with the latent axes. Several metrics have been
proposed recently, assuming that the latent variables explaining the variation
in data are aligned with the latent axes (cardinal directions). However, there
are other DLVMs, such as the AAE and WAE-MMD (matching the aggregate posterior
to the prior), where the latent variables might not be aligned with the latent
axes. In this work, we propose a statistical method to evaluate disentanglement
for any DLVM in general. The proposed technique discovers the latent vectors
representing the generative factors of a dataset that can be different from the
cardinal latent axes. We empirically demonstrate the advantage of the method on
two datasets.
|
2501.15708
|
StaICC: Standardized Evaluation for Classification Task in In-context
Learning
|
cs.CL cs.AI
|
Classification tasks are widely investigated in the In-Context Learning (ICL)
paradigm. However, current efforts are evaluated on disjoint benchmarks and
settings, and their performance is significantly influenced by incidental
variables such as prompt templates, data sampling, and instructions. This leads
to significant inconsistencies in the results reported across the literature,
preventing fair comparison or meta-analysis across papers. This paper therefore
proposes StaICC, a standardized and easy-to-use evaluation toolkit for
in-context classification. For the normal classification task, we provide
StaICC-Normal, which selects 10 widely used datasets and generates prompts in a
fixed form to mitigate variance among experimental implementations. To enrich
the usage of our benchmark, we also provide a sub-benchmark, StaICC-Diag, for
diagnosing ICL from several aspects, aiming at more robust inference
processing.
|
2501.15712
|
SeqSeg: Learning Local Segments for Automatic Vascular Model
Construction
|
eess.IV cs.CV q-bio.TO
|
Computational modeling of cardiovascular function has become a critical part
of diagnosing, treating and understanding cardiovascular disease. Most
strategies involve constructing anatomically accurate computer models of
cardiovascular structures, which is a multistep, time-consuming process. To
improve the model generation process, we herein present SeqSeg (sequential
segmentation): a novel deep learning based automatic tracing and segmentation
algorithm for constructing image-based vascular models. SeqSeg leverages local
U-Net-based inference to sequentially segment vascular structures from medical
image volumes. We tested SeqSeg on CT and MR images of aortic and aortofemoral
models and compared the predictions to those of benchmark 2D and 3D global
nnU-Net models, which have previously shown excellent accuracy for medical
image segmentation. We demonstrate that SeqSeg is able to segment more complete
vasculature and is able to generalize to vascular structures not annotated in
the training data.
|
2501.15713
|
Modeling shared micromobility as a label propagation process for
detecting the overlapping communities
|
cs.SI physics.app-ph
|
Shared micro-mobility such as e-scooters has gained significant popularity in
many cities. However, existing methods for detecting community structures in
mobility networks often overlook potential overlaps between communities. In
this study, we conceptualize shared micro-mobility in urban spaces as a process
of information exchange, where locations are connected through e-scooters,
facilitating the interaction and propagation of community affiliations. As a
result, similar locations are assigned the same label. Based on this concept,
we developed a Geospatial Interaction Propagation model (GIP) by designing a
Speaker-Listener Label Propagation Algorithm (SLPA) that accounts for
geographic distance decay, incorporating anomaly detection to ensure the
derived community structures reflect meaningful spatial patterns. We applied
this model to detect overlapping communities within the e-scooter system in
Washington, D.C. The results demonstrate that our algorithm outperforms
existing models of overlapping community detection in both efficiency and
modularity.
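The speaker-listener propagation with distance decay can be sketched as follows. This is an illustration of the general idea, not the authors' GIP model: the decay weight `1/(1 + alpha*d)`, the 20% memory threshold, and the toy graph are all hypothetical choices.

```python
import random
from collections import Counter

def slpa_distance_decay(edges, coords, rounds=20, alpha=1.0, seed=0):
    """Minimal speaker-listener label propagation with geographic distance
    decay. edges: undirected links (e.g. e-scooter trips between locations);
    coords: {node: (x, y)} positions used for the decay weight."""
    rng = random.Random(seed)
    nodes = sorted({n for e in edges for n in e})
    nbrs = {n: [] for n in nodes}
    for u, v in edges:
        nbrs[u].append(v)
        nbrs[v].append(u)

    memory = {n: [n] for n in nodes}  # each node starts with its own label

    def weight(u, v):
        (x1, y1), (x2, y2) = coords[u], coords[v]
        d = ((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5
        return 1.0 / (1.0 + alpha * d)  # closer neighbors speak louder

    for _ in range(rounds):
        for listener in rng.sample(nodes, len(nodes)):
            if not nbrs[listener]:
                continue
            heard = Counter()
            for speaker in nbrs[listener]:
                label = rng.choice(memory[speaker])  # speaker emits one label
                heard[label] += weight(listener, speaker)
            memory[listener].append(heard.most_common(1)[0][0])

    # Labels retained often enough form (possibly overlapping) communities.
    return {n: {l for l, c in Counter(m).items() if c / len(m) >= 0.2}
            for n, m in memory.items()}

# Toy example: two spatial clusters joined by a single bridge edge.
coords = {0: (0, 0), 1: (0, 1), 2: (1, 0), 3: (10, 10), 4: (10, 11), 5: (11, 10)}
edges = [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]
communities = slpa_distance_decay(edges, coords)
print(communities)
```

Because each node keeps a multiset of labels rather than a single one, nodes on the bridge can end up with labels from both sides, which is exactly what makes the detected communities overlapping.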
|
2501.15717
|
Physics-Aware Decoding for Communication Channels Governed by Partial
Differential Equations
|
cs.IT math.IT
|
Digital communication systems inherently operate through physical media
governed by partial differential equations (PDEs). In this paper, we introduce
a physics-aware decoding framework that integrates gradient descent-based error
correcting algorithms with PDE-based channel modeling using differentiable PDE
solvers. At the core of our approach is gradient flow decoding, which harnesses
gradient information directly from the PDE solver to guide the decoding
process. We validate our method through numerical experiments on both the heat
equation and the nonlinear Schr\"odinger equation (NLSE), demonstrating
significant improvements in decoding performance. The implications of this work
extend beyond decoding applications, establishing a new paradigm for
physics-aware signal processing that shows promise for various signal detection
and signal recovery tasks.
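For the linear heat-equation channel, the gradient of the squared reconstruction error has a closed form, which allows a minimal self-contained sketch of gradient flow decoding. Everything below is an illustrative assumption (grid size, step counts, BPSK-like signaling, the `heat_step`/`forward` helpers); a real implementation would differentiate through the PDE solver rather than materialize the channel matrix, and would exploit code structure.

```python
import numpy as np

def heat_step(u, r=0.1):
    """One explicit finite-difference step of the 1-D heat equation with
    periodic boundaries; r = alpha*dt/dx^2 (stable for r <= 0.5)."""
    return u + r * (np.roll(u, 1) - 2 * u + np.roll(u, -1))

def forward(x, n_steps=3):
    """The channel: the transmitted signal diffuses for a few time steps."""
    for _ in range(n_steps):
        x = heat_step(x)
    return x

# Each heat step is linear, so the whole channel is y = A x and the gradient
# of 0.5 * ||A x - y||^2 is A^T (A x - y). A differentiable PDE solver
# generalizes this to nonlinear channels such as the NLSE; here A is built
# column by column for a hand-derived sketch.
n = 64
A = np.stack([forward(e) for e in np.eye(n)], axis=1)

rng = np.random.default_rng(0)
x_true = rng.choice([-1.0, 1.0], size=n)          # BPSK-like symbols
y = forward(x_true) + 0.01 * rng.normal(size=n)   # received: diffused + noisy

x = np.zeros(n)
for _ in range(500):              # gradient flow toward the transmitted signal
    x -= 0.5 * A.T @ (A @ x - y)
bits = np.sign(x)
print((bits == x_true).mean())    # fraction of symbols recovered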
|
2501.15718
|
CENSOR: Defense Against Gradient Inversion via Orthogonal Subspace
Bayesian Sampling
|
cs.LG cs.CR
|
Federated learning collaboratively trains a neural network on a global
server, where each local client receives the current global model weights and
sends back parameter updates (gradients) based on its local private data. The
process of sending these model updates may leak clients' private data.
Existing gradient inversion attacks can exploit this vulnerability
to recover private training instances from a client's gradient vectors.
Recently, researchers have proposed advanced gradient inversion techniques that
existing defenses struggle to handle effectively. In this work, we present a
novel defense tailored for large neural network models. Our defense capitalizes
on the high dimensionality of the model parameters to perturb gradients within
a subspace orthogonal to the original gradient. By leveraging cold posteriors
over orthogonal subspaces, our defense implements a refined gradient update
mechanism. This enables the selection of an optimal gradient that not only
safeguards against gradient inversion attacks but also maintains model utility.
We conduct comprehensive experiments across three different datasets and
evaluate our defense against various state-of-the-art attacks and defenses.
Code is available at https://censor-gradient.github.io.
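The core geometric idea, perturbing a gradient only within the subspace orthogonal to itself so the descent direction is preserved, can be sketched as below. This illustrates that single step only and is not the CENSOR defense, which additionally selects the update via cold-posterior Bayesian sampling over the orthogonal subspace.

```python
import numpy as np

def perturb_orthogonal(grad, noise_scale=0.1, rng=None):
    """Add noise to a gradient after projecting out its component along the
    gradient direction, so the perturbation lives entirely in the subspace
    orthogonal to the original gradient (a sketch of the idea, not CENSOR)."""
    rng = rng or np.random.default_rng(0)
    noise = rng.normal(scale=noise_scale, size=grad.shape)
    g = grad.ravel()
    n = noise.ravel()
    # Remove the component of the noise along the gradient direction.
    n_orth = n - (n @ g) / (g @ g) * g
    return grad + n_orth.reshape(grad.shape)

g = np.array([1.0, 2.0, 3.0])
g_def = perturb_orthogonal(g, noise_scale=0.5)
# The perturbation is orthogonal to g, so inner products with g are unchanged.
print(np.allclose((g_def - g) @ g, 0.0))  # True
print(np.allclose(g_def @ g, g @ g))      # True
```

In high-dimensional models the orthogonal complement is vast, which is why such a perturbation can hide training data from inversion while barely affecting the descent step.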
|
2501.15720
|
ESGSenticNet: A Neurosymbolic Knowledge Base for Corporate
Sustainability Analysis
|
cs.CL
|
Evaluating corporate sustainability performance is essential to drive
sustainable business practices, amid the need for a more sustainable economy.
However, this is hindered by the complexity and volume of corporate
sustainability data (i.e. sustainability disclosures), not least by the
limited effectiveness of the NLP tools used to analyse them. To this end, we
identify
three primary challenges - immateriality, complexity, and subjectivity, that
exacerbate the difficulty of extracting insights from sustainability
disclosures. To address these issues, we introduce ESGSenticNet, a publicly
available knowledge base for sustainability analysis. ESGSenticNet is
constructed from a neurosymbolic framework that integrates specialised concept
parsing, GPT-4o inference, and semi-supervised label propagation, together with
a hierarchical taxonomy. This approach culminates in a structured knowledge
base of 44k knowledge triplets - ('halve carbon emission', supports, 'emissions
control'), for effective sustainability analysis. Experiments indicate that
ESGSenticNet, when deployed as a lexical method, more effectively captures
relevant and actionable sustainability information from sustainability
disclosures compared to state of the art baselines. Besides capturing a high
number of unique ESG topic terms, ESGSenticNet outperforms baselines on the ESG
relatedness and ESG action orientation of these terms by 26% and 31%
respectively. These metrics describe the extent to which topic terms are
related to ESG, and depict an action toward ESG. Moreover, when deployed as a
lexical method, ESGSenticNet requires no training, a key advantage in
simplicity for non-technical stakeholders.
|
2501.15722
|
INRet: A General Framework for Accurate Retrieval of INRs for Shapes
|
cs.LG
|
Implicit neural representations (INRs) have become an important method for
encoding various data types, such as 3D objects or scenes, images, and videos.
They have proven to be particularly effective at representing 3D content, e.g.,
3D scene reconstruction from 2D images, novel 3D content creation, as well as
the representation, interpolation, and completion of 3D shapes. With the
widespread generation of 3D data in an INR format, there is a need to support
effective organization and retrieval of INRs saved in a data store. A key
aspect of retrieval and clustering of INRs in a data store is the formulation
of similarity between INRs that would, for example, enable retrieval of similar
INRs using a query INR. In this work, we propose INRet, a method for
determining similarity between INRs that represent shapes, thus enabling
accurate retrieval of similar shape INRs from an INR data store. INRet flexibly
supports different INR architectures such as INRs with octree grids, triplanes,
and hash grids, as well as different implicit functions including
signed/unsigned distance function and occupancy field. We demonstrate that our
method is more general and accurate than the existing INR retrieval method,
which only supports simple MLP INRs and requires the same architecture between
the query and stored INRs. Furthermore, compared to converting INRs to other
representations (e.g., point clouds or multi-view images) for 3D shape
retrieval, INRet achieves higher accuracy while avoiding the conversion
overhead.
|
2501.15724
|
A Survey on Computational Pathology Foundation Models: Datasets,
Adaptation Strategies, and Evaluation Tasks
|
cs.CV cs.AI
|
Computational pathology foundation models (CPathFMs) have emerged as a
powerful approach for analyzing histopathological data, leveraging
self-supervised learning to extract robust feature representations from
unlabeled whole-slide images. These models, categorized into uni-modal and
multi-modal frameworks, have demonstrated promise in automating complex
pathology tasks such as segmentation, classification, and biomarker discovery.
However, the development of CPathFMs presents significant challenges, such as
limited data accessibility, high variability across datasets, the necessity for
domain-specific adaptation, and the lack of standardized evaluation benchmarks.
This survey provides a comprehensive review of CPathFMs in computational
pathology, focusing on datasets, adaptation strategies, and evaluation tasks.
We analyze key techniques, such as contrastive learning and multi-modal
integration, and highlight existing gaps in current research. Finally, we
explore future directions from four perspectives for advancing CPathFMs. This
survey serves as a valuable resource for researchers, clinicians, and AI
practitioners, guiding the advancement of CPathFMs toward robust and clinically
applicable AI-driven pathology solutions.
|
2501.15726
|
Vision-Aided Channel Prediction Based on Image Segmentation at Street
Intersection Scenarios
|
cs.IT eess.SP math.IT
|
Intelligent vehicular communication with vehicle road collaboration
capability is a key technology enabled by 6G, and the integration of various
visual sensors on vehicles and infrastructures plays a crucial role. Moreover,
accurate channel prediction is foundational to realizing intelligent vehicular
communication. Traditional methods are still limited: they cannot balance
accuracy and operability, as they rely on substantial spectrum resource
consumption and highly refined descriptions of the environment. Therefore,
leveraging out-of-band information introduced by visual sensors provides a new
solution and is increasingly applied across various communication tasks. In
this paper, we propose a computer vision (CV)-based prediction model for
vehicular communications, realizing accurate channel characterization
prediction including path loss, Rice K-factor and delay spread based on image
segmentation. First, we conduct extensive vehicle-to-infrastructure measurement
campaigns, collecting channel and visual data from various street intersection
scenarios. The image-channel dataset is generated after a series of data
post-processing steps. The image data consist of individual segmentations of
the target user produced by the YOLOv8 network. The established dataset is then
used to train and test the ResNet-32 prediction network, where segmented images
serve as the network input and various channel characteristics are treated as
labels or target outputs. Finally, self-validation and cross-validation experiments
are performed. The results indicate that models trained with segmented images
achieve high prediction accuracy and remarkable generalization performance
across different streets and target users. The model proposed in this paper
offers novel solutions for achieving intelligent channel
prediction in vehicular communications.
|
2501.15727
|
Gensors: Authoring Personalized Visual Sensors with Multimodal
Foundation Models and Reasoning
|
cs.HC cs.AI
|
Multimodal large language models (MLLMs), with their expansive world
knowledge and reasoning capabilities, present a unique opportunity for
end-users to create personalized AI sensors capable of reasoning about complex
situations. A user could describe a desired sensing task in natural language
(e.g., "alert if my toddler is getting into mischief"), with the MLLM analyzing
the camera feed and responding within seconds. In a formative study, we found
that users saw substantial value in defining their own sensors, yet struggled
to articulate their unique personal requirements and debug the sensors through
prompting alone. To address these challenges, we developed Gensors, a system
that empowers users to define customized sensors supported by the reasoning
capabilities of MLLMs. Gensors 1) assists users in eliciting requirements
through both automatically-generated and manually created sensor criteria, 2)
facilitates debugging by allowing users to isolate and test individual criteria
in parallel, 3) suggests additional criteria based on user-provided images, and
4) proposes test cases to help users "stress test" sensors on potentially
unforeseen scenarios. In a user study, participants reported significantly
greater sense of control, understanding, and ease of communication when
defining sensors using Gensors. Beyond addressing model limitations, Gensors
supported users in debugging, eliciting requirements, and expressing unique
personal requirements to the sensor through criteria-based reasoning; it also
helped uncover users' "blind spots" by exposing overlooked criteria and
revealing unanticipated failure modes. Finally, we discuss how unique
characteristics of MLLMs--such as hallucinations and inconsistent
responses--can impact the sensor-creation process. These findings contribute to
the design of future intelligent sensing systems that are intuitive and
customizable by everyday users.
|
2501.15728
|
Integrating Personalized Federated Learning with Control Systems for
Enhanced Performance
|
cs.LG cs.SY eess.SY
|
In the expanding field of machine learning, federated learning has emerged as
a pivotal methodology for distributed data environments, ensuring privacy while
leveraging decentralized data sources. However, the heterogeneity of client
data and the need for tailored models necessitate the integration of
personalization techniques to enhance learning efficacy and model performance.
This paper introduces a novel framework that amalgamates personalized federated
learning with robust control systems, aimed at optimizing both the learning
process and the control of data flow across diverse networked environments. Our
approach harnesses personalized algorithms that adapt to the unique
characteristics of each client's data, thereby improving the relevance and
accuracy of the model for individual nodes without compromising the overall
system performance. To manage and control the learning process across the
network, we employ a sophisticated control system that dynamically adjusts the
parameters based on real-time feedback and system states, ensuring stability
and efficiency. Through rigorous experimentation, we demonstrate that our
integrated system not only outperforms standard federated learning models in
terms of accuracy and learning speed but also maintains system integrity and
robustness in the face of varying network conditions and data distributions. The
experimental results, obtained from a multi-client simulated environment with
non-IID data distributions, underscore the benefits of integrating control
systems into personalized federated learning frameworks, particularly in
scenarios demanding high reliability and precision.
|
2501.15729
|
Measurement-Based Non-Stationary Markov Tapped Delay Line Channel Model
for 5G-Railways
|
cs.IT math.IT
|
5G for Railways (5G-R) is globally recognized as a promising next-generation
railway communication system designed to meet increasing demands. Channel
modeling serves as the foundation of communication system design; tapped delay
line (TDL) models are widely utilized in system simulations due to their
simplicity and practicality and are a crucial component of various standards
such as 3GPP. However, existing TDL models applicable to 5G-R systems are
limited. Most fail to capture non-stationarity, a critical characteristic of
railway communications, while others are unsuitable for the specific frequency
bands and bandwidths of 5G-R. In this paper, a channel measurement campaign for
a dedicated 5G-R network is carried out, resulting in a measurement-based 5-tap
TDL model that utilizes a first-order two-state Markov chain to represent
channel non-stationarity. Key model parameters, including the number of taps,
the statistical distributions of amplitude, phase, and Doppler shift, and the
state transition probability matrix, are extracted. The correlations between
tap amplitudes are also obtained. Finally, the accuracy of the model is
validated through comparisons with measurement data and the 3GPP model. These
findings are
expected to offer valuable insights for design, optimization, and link-level
simulation and validation of 5G-R systems.
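A TDL channel whose taps switch state via a first-order two-state Markov chain can be simulated as below. The tap gains, transition matrix, and Rayleigh fading are illustrative assumptions, not the measured 5G-R parameters reported by the paper.

```python
import numpy as np

def simulate_markov_tdl(n_samples, trans, tap_gains_db, rng=None):
    """Toy tapped-delay-line channel whose taps switch on/off via a
    first-order two-state Markov chain (state 1 = tap active).
    trans: 2x2 matrix with trans[i, j] = P(next state = j | current = i)."""
    rng = rng or np.random.default_rng(0)
    n_taps = len(tap_gains_db)
    gains = 10 ** (np.array(tap_gains_db) / 20.0)  # dB -> linear amplitude
    states = np.zeros((n_samples, n_taps), dtype=int)
    states[0] = 1  # start with all taps active
    for t in range(1, n_samples):
        for k in range(n_taps):
            row = trans[states[t - 1, k]]      # transition probs from current state
            states[t, k] = rng.choice(2, p=row)
    # Rayleigh-fading complex tap amplitudes, zeroed while the tap is "off".
    fading = (rng.normal(size=(n_samples, n_taps)) +
              1j * rng.normal(size=(n_samples, n_taps))) / np.sqrt(2)
    return states, gains * fading * states

trans = np.array([[0.90, 0.10],    # off -> off / on
                  [0.05, 0.95]])   # on  -> off / on
states, taps = simulate_markov_tdl(1000, trans, tap_gains_db=[0, -3, -6, -9, -12])
print(taps.shape)            # (1000, 5)
print(states[:, 0].mean())   # fraction of time the first tap is active
```

Swapping in measured transition matrices and tap statistics turns this sketch into the kind of link-level simulation component the abstract targets.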
|
2501.15731
|
Renewable Energy Prediction: A Comparative Study of Deep Learning Models
for Complex Dataset Analysis
|
cs.LG cs.AI
|
The increasing focus on predicting renewable energy production aligns with
advancements in deep learning (DL). The inherent variability of renewable
sources and the complexity of prediction methods require robust approaches,
such as DL models, in the renewable energy sector. DL models are preferred over
traditional machine learning (ML) because they capture complex, nonlinear
relationships in renewable energy datasets. This study examines key factors
influencing DL technique accuracy, including sampling and hyperparameter
optimization, by comparing various methods and training and test ratios within
a DL framework. Seven deep learning methods, namely LSTM, Stacked LSTM, CNN,
CNN-LSTM, DNN, Time-Distributed MLP (TD-MLP), and Autoencoder (AE), are
evaluated using a dataset combining weather and photovoltaic power output data
from 12 locations. Regularization techniques such as early stopping, neuron
dropout, L1 and L2 regularization are applied to address overfitting. The
results demonstrate that the combination of early stopping, dropout, and L1
regularization provides the best performance to reduce overfitting in the CNN
and TD-MLP models with larger training set, while the combination of early
stopping, dropout, and L2 regularization is the most effective to reduce the
overfitting in CNN-LSTM and AE models with smaller training set.
|
2501.15733
|
Leveraging Video Vision Transformer for Alzheimer's Disease Diagnosis
from 3D Brain MRI
|
eess.IV cs.AI cs.CV
|
Alzheimer's disease (AD) is a neurodegenerative disorder affecting millions
worldwide, necessitating early and accurate diagnosis for optimal patient
management. In recent years, advancements in deep learning have shown
remarkable potential in medical image analysis. Methods In this study, we
present "ViTranZheimer," an AD diagnosis approach which leverages video vision
transformers to analyze 3D brain MRI data. By treating the 3D MRI volumes as
videos, we exploit the temporal dependencies between slices to capture
intricate structural relationships. The video vision transformer's
self-attention mechanisms enable the model to learn long-range dependencies and
identify subtle patterns that may indicate AD progression. Our proposed deep
learning framework seeks to enhance the accuracy and sensitivity of AD
diagnosis, empowering clinicians with a tool for early detection and
intervention. We validate the performance of the video vision transformer using
the ADNI dataset and conduct comparative analyses with other relevant models.
Results The proposed ViTranZheimer model is compared with two hybrid models,
CNN-BiLSTM and ViT-BiLSTM. CNN-BiLSTM is the combination of a convolutional
neural network (CNN) and a bidirectional long-short-term memory network
(BiLSTM), while ViT-BiLSTM is the combination of a vision transformer (ViT)
with BiLSTM. The accuracy levels achieved in the ViTranZheimer, CNN-BiLSTM, and
ViT-BiLSTM models are 98.6%, 96.479%, and 97.465%, respectively. ViTranZheimer
demonstrated the highest accuracy at 98.6%, outperforming the other models on
this evaluation metric. Conclusion This research advances the understanding of
applying deep learning techniques in neuroimaging and Alzheimer's disease
research, paving the way for earlier and less invasive clinical diagnosis.
|
2501.15735
|
Selective Experience Sharing in Reinforcement Learning Enhances
Interference Management
|
cs.LG eess.SP
|
We propose a novel multi-agent reinforcement learning (RL) approach for
inter-cell interference mitigation, in which agents selectively share their
experiences with other agents. Each base station is equipped with an agent,
which receives signal-to-interference-plus-noise ratio from its own associated
users. This information is used to evaluate and selectively share experiences
with neighboring agents. The idea is that even a few pertinent experiences from
other agents can lead to effective learning. This approach enables fully
decentralized training and execution, minimizes information sharing between
agents and significantly reduces communication overhead, which is typically the
burden of interference management. The proposed method outperforms
state-of-the-art multi-agent RL techniques where training is done in a
decentralized manner. Furthermore, with a 75% reduction in experience sharing,
the proposed algorithm achieves 98% of the spectral efficiency obtained by
algorithms sharing all experiences.
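The selective-sharing rule can be sketched as a simple relevance filter on locally generated transitions. The low-SINR threshold criterion and all numbers below are illustrative stand-ins for the paper's actual sharing rule and learning loop.

```python
import random
from collections import deque

class SelectiveSharingAgent:
    """Sketch of selective experience sharing: an agent forwards only its
    most pertinent transitions to neighbors. Here "pertinent" means a
    low-SINR (high-interference) event, an illustrative criterion."""

    def __init__(self, share_threshold_db=5.0, maxlen=1000):
        self.buffer = deque(maxlen=maxlen)  # local replay buffer
        self.share_threshold_db = share_threshold_db

    def observe(self, transition, sinr_db):
        self.buffer.append(transition)
        # Share only experiences from low-SINR events; return None otherwise.
        return transition if sinr_db < self.share_threshold_db else None

    def receive(self, transition):
        if transition is not None:
            self.buffer.append(transition)

agents = [SelectiveSharingAgent() for _ in range(3)]
rng = random.Random(0)
shared = 0
for step in range(100):
    for i, agent in enumerate(agents):
        t = ("state", "action", rng.random(), "next_state")  # dummy transition
        out = agent.observe(t, sinr_db=rng.uniform(0, 20))
        if out is not None:
            shared += 1
            for j, other in enumerate(agents):
                if j != i:
                    other.receive(out)
print(shared)  # far fewer shared transitions than the 300 generated locally
```

Each agent then trains on its own buffer plus the few received transitions, which is how the approach keeps training decentralized while cutting communication overhead.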
|
2501.15737
|
Geometric Deep Learning for Automated Landmarking of Maxillary Arches on
3D Oral Scans from Newborns with Cleft Lip and Palate
|
eess.IV cs.LG
|
Rapid advances in 3D model scanning have enabled the mass digitization of
dental clay models. However, most clinicians and researchers continue to use
manual morphometric analysis methods on these models such as landmarking. This
is a significant step in treatment planning for craniomaxillofacial conditions.
We aimed to develop and test a geometric deep learning model that would
accurately and reliably label landmarks on a complicated and specialized
patient population -- infants -- as accurately as a human specialist, without a
large amount of training data. Our developed pipeline demonstrated an accuracy
of 94.44% with an absolute mean error of 1.676 +/- 0.959 mm on a set of 100
models acquired from newborn babies with cleft lip and palate. Our proposed
pipeline has the potential to serve as a fast, accurate, and reliable
quantifier of maxillary arch morphometric features, as well as an integral step
towards a future fully automated dental treatment pipeline.
|
2501.15738
|
Towards Interoperable Data Spaces: Comparative Analysis of Data Space
Implementations between Japan and Europe
|
cs.DB
|
The rapid evolution of data spaces is transforming the landscape of secure
and interoperable data sharing across industries and geographies. In Europe,
the concept of data spaces, supported by initiatives such as the European Data
Strategy, emphasises the importance of trust, sovereignty, and
interoperability. Meanwhile, Japan has been developing its approach to data
sharing, in line with global trends but also to address unique domestic
challenges. Despite these parallel advances, achieving interoperability between
European and Japanese data spaces remains a critical challenge due to
differences in governance, technology standards, and authentication frameworks.
This paper undertakes a comparative analysis of DATA-EX and Catena-X to explore
the challenges and opportunities for achieving interoperability between
Japanese and European data spaces. By examining common data exchange processes,
key objects such as participants, datasets, and data catalogs, and specific
evaluation criteria, the study identifies gaps and proposes actionable
solutions. Through this analysis, the paper aims to contribute to the ongoing
discourse on global data interoperability: it proposes an interoperable
architecture that bridges regional differences while addressing common
challenges, and it identifies the remaining obstacles to achieving
interoperability.
|
2501.15739
|
Automatic Machine Learning Framework to Study Morphological Parameters
of AGN Host Galaxies within $z < 1.4$ in the Hyper Suprime-Cam Wide Survey
|
astro-ph.GA astro-ph.IM cs.LG
|
We present a composite machine learning framework to estimate posterior
probability distributions of bulge-to-total light ratio, half-light radius, and
flux for Active Galactic Nucleus (AGN) host galaxies within $z<1.4$ and $m<23$
in the Hyper Suprime-Cam Wide survey. We divide the data into five redshift
bins: low ($0<z<0.25$), mid ($0.25<z<0.5$), high ($0.5<z<0.9$), extra
($0.9<z<1.1$) and extreme ($1.1<z<1.4$), and train our models independently in
each bin. We use PSFGAN to decompose the AGN point source light from its host
galaxy, and invoke the Galaxy Morphology Posterior Estimation Network (GaMPEN)
to estimate morphological parameters of the recovered host galaxy. We first
trained our models on simulated data, and then fine-tuned our algorithm via
transfer learning using labeled real data. To create training labels for
transfer learning, we used GALFIT to fit $\sim 20,000$ real HSC galaxies in
each redshift bin. We comprehensively verified that the predicted values from
our final models agree well with the GALFIT values for the vast majority of
cases. Our PSFGAN + GaMPEN framework runs at least three orders of magnitude
faster than traditional light-profile fitting methods, and can be easily
retrained for other morphological parameters or on other datasets with diverse
ranges of resolutions, seeing conditions, and signal-to-noise ratios, making it
an ideal tool for analyzing AGN host galaxies from large surveys coming soon
from the Rubin-LSST, Euclid, and Roman telescopes.
|
2501.15740
|
Propositional Interpretability in Artificial Intelligence
|
cs.AI
|
Mechanistic interpretability is the program of explaining what AI systems are
doing in terms of their internal mechanisms. I analyze some aspects of the
program, along with setting out some concrete challenges and assessing progress
to date. I argue for the importance of propositional interpretability, which
involves interpreting a system's mechanisms and behavior in terms of
propositional attitudes: attitudes (such as belief, desire, or subjective
probability) to propositions (e.g. the proposition that it is hot outside).
Propositional attitudes are the central way that we interpret and explain human
beings and they are likely to be central in AI too. A central challenge is what
I call thought logging: creating systems that log all of the relevant
propositional attitudes in an AI system over time. I examine currently popular
methods of interpretability (such as probing, sparse auto-encoders, and chain
of thought methods) as well as philosophical methods of interpretation
(including those grounded in psychosemantics) to assess their strengths and
weaknesses as methods of propositional interpretability.
|
2501.15742
|
Intuition and importance of feedback control through laboratory
experiences
|
eess.SY cs.SY
|
This work aims to raise awareness among engineering students from different
disciplines on the importance of feedback control. The proposal consists in
comparing the performance of different control strategies in a laboratory
session, considering Matlab/Simulink simulations of the non-linear pendulum
model. First, students attempt to make the pendulum stop at the unstable
equilibrium by controlling the torque input with a joystick connected to the
computer via an Arduino board. Different friction scenarios are considered for
students to explore the dissipation in the system response. Then, as a second
task, the Arduino is used to introduce the position reference, while students
implement different control strategies, such as Bang-Bang, PID
(proportional-integral-derivative) and FPID (fractional PID) controllers,
analyzing the system response by inspecting the signals in a scope and in a 3D
animated model. The dynamic model follows from the laws of rotational motion,
and the control methods are explained from an intuitive
point of view, focusing on the meaning and motivation of the control actions,
with the intention to develop intuition about PID and FPID control methods.
|
2501.15743
|
Z-Stack Scanning can Improve AI Detection of Mitosis: A Case Study of
Meningiomas
|
eess.IV cs.CV
|
Z-stack scanning is an emerging whole slide imaging technology that captures
multiple focal planes along the z-axis of a glass slide. Because z-stacking
can offer enhanced depth information compared to single-layer whole slide
imaging, this technology can be particularly useful in analyzing small-scale
histopathological patterns. However, its actual clinical impact remains debated
with mixed results. To clarify this, we investigate the effect of z-stack
scanning on artificial intelligence (AI) mitosis detection of meningiomas. With
the same set of 22 Hematoxylin and Eosin meningioma glass slides scanned by
three different digital pathology scanners, we tested the performance of three
AI pipelines on both single-layer and z-stacked whole slide images (WSIs).
Results showed that in all scanner-AI combinations, z-stacked WSIs
significantly increased the AI's sensitivity (+17.14%) in mitosis detection
with only a marginal impact on precision. Our findings provide quantitative
evidence that highlights z-stack scanning as a promising technique for AI
mitosis detection, paving the way for more reliable AI-assisted pathology
workflows, which can ultimately benefit patient management.
|
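The sensitivity and precision figures above come from standard detection metrics; the sketch below uses hypothetical confusion counts (not the study's data) to show how such a sensitivity gain would be computed:

```python
def sensitivity_precision(tp, fp, fn):
    # Sensitivity (recall): fraction of true mitoses that were detected.
    sensitivity = tp / (tp + fn)
    # Precision: fraction of detections that are true mitoses.
    precision = tp / (tp + fp)
    return sensitivity, precision

# Hypothetical counts for a single-layer vs. a z-stacked WSI run:
single_sens, single_prec = sensitivity_precision(tp=60, fp=10, fn=40)
zstack_sens, zstack_prec = sensitivity_precision(tp=77, fp=14, fn=23)
print(f"sensitivity gain: {zstack_sens - single_sens:+.2%}")
```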
2501.15747
|
IndicMMLU-Pro: Benchmarking Indic Large Language Models on Multi-Task
Language Understanding
|
cs.CL cs.AI
|
Spoken by more than 1.5 billion people in the Indian subcontinent, Indic
languages present unique challenges and opportunities for natural language
processing (NLP) research due to their rich cultural heritage, linguistic
diversity, and complex structures. IndicMMLU-Pro is a comprehensive benchmark
designed to evaluate Large Language Models (LLMs) across Indic languages,
building upon the MMLU Pro (Massive Multitask Language Understanding)
framework. Covering major languages such as Hindi, Bengali, Gujarati, Marathi,
Kannada, Punjabi, Tamil, Telugu, and Urdu, our benchmark addresses the unique
challenges and opportunities presented by the linguistic diversity of the
Indian subcontinent. This benchmark encompasses a wide range of tasks in
language comprehension, reasoning, and generation, meticulously crafted to
capture the intricacies of Indian languages. IndicMMLU-Pro provides a
standardized evaluation framework to push the research boundaries in Indic
language AI, facilitating the development of more accurate, efficient, and
culturally sensitive models. This paper outlines the benchmark's design
principles, task taxonomy, and data collection methodology, and presents
baseline results from state-of-the-art multilingual models.
|
2501.15749
|
LLM-powered Multi-agent Framework for Goal-oriented Learning in
Intelligent Tutoring System
|
cs.AI cs.MA
|
Intelligent Tutoring Systems (ITSs) have revolutionized education by offering
personalized learning experiences. However, as goal-oriented learning, which
emphasizes efficiently achieving specific objectives, becomes increasingly
important in professional contexts, existing ITSs often struggle to deliver
this type of targeted learning experience. In this paper, we propose GenMentor,
an LLM-powered multi-agent framework designed to deliver goal-oriented,
personalized learning within ITSs. GenMentor begins by accurately mapping
learners' goals to required skills using a fine-tuned LLM trained on a custom
goal-to-skill dataset. After identifying the skill gap, it schedules an
efficient learning path using an evolving optimization approach, driven by a
comprehensive and dynamic profile of learners' multifaceted status.
Additionally, GenMentor tailors learning content with an
exploration-drafting-integration mechanism to align with individual learner
needs. Extensive automated and human evaluations demonstrate GenMentor's
effectiveness in learning guidance and content quality. Furthermore, we have
deployed it in practice and implemented it as an application. A practical
human study with professional learners further highlights its effectiveness in
goal alignment and resource targeting, leading to enhanced personalization.
Supplementary resources are available at
https://github.com/GeminiLight/gen-mentor.
|
2501.15751
|
A Privacy Model for Classical & Learned Bloom Filters
|
cs.CR cs.LG
|
The Classical Bloom Filter (CBF) is a class of Probabilistic Data Structures
(PDS) for handling Approximate Membership Queries (AMQ). The Learned Bloom Filter
(LBF) is a recently proposed class of PDS that combines the Classical Bloom
Filter with a Learning Model while preserving the Bloom Filter's one-sided
error guarantees. Bloom Filters have been used in settings where inputs are
sensitive and need to be private in the presence of an adversary with access to
the Bloom Filter through an API or in the presence of an adversary who has
access to the internal state of the Bloom Filter. Prior work has investigated
the privacy of the Classical Bloom Filter providing attacks and defenses under
various privacy definitions. In this work, we formulate a stronger differential
privacy-based model for the Bloom Filter. We propose constructions of the
Classical and Learned Bloom Filter that satisfy $(\epsilon, 0)$-differential
privacy. This is also the first work that analyses and addresses the privacy of
the Learned Bloom Filter under any rigorous model, which is an open problem.
|
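As background for the construction above, a minimal Classical Bloom Filter can be sketched as follows. The hash scheme and parameters are illustrative, and no privacy mechanism is included; the sketch only demonstrates the one-sided error guarantee (no false negatives, possible false positives):

```python
import hashlib

class BloomFilter:
    """Minimal Classical Bloom Filter: k hash functions over m bits."""

    def __init__(self, m=1024, k=3):
        self.m, self.k = m, k
        self.bits = [False] * m

    def _indices(self, item):
        # Derive k indices by salting a single cryptographic hash.
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(h, 16) % self.m

    def add(self, item):
        for idx in self._indices(item):
            self.bits[idx] = True

    def query(self, item):
        # True may be a false positive; False is always correct.
        return all(self.bits[idx] for idx in self._indices(item))

bf = BloomFilter()
bf.add("alice")
print(bf.query("alice"))  # True -- inserted items are always found
```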
2501.15753
|
Scale-Insensitive Neural Network Significance Tests
|
stat.ML cs.LG econ.EM
|
This paper develops a scale-insensitive framework for neural network
significance testing, substantially generalizing existing approaches through
three key innovations. First, we replace metric entropy calculations with
Rademacher complexity bounds, enabling the analysis of neural networks without
requiring bounded weights or specific architectural constraints. Second, we
weaken the regularity conditions on the target function to require only Sobolev
space membership $H^s([-1,1]^d)$ with $s > d/2$, significantly relaxing
previous smoothness assumptions while maintaining optimal approximation rates.
Third, we introduce a modified sieve space construction based on moment bounds
rather than weight constraints, providing a more natural theoretical framework
for modern deep learning practices. Our approach achieves these generalizations
while preserving optimal convergence rates and establishing valid asymptotic
distributions for test statistics. The technical foundation combines
localization theory, sharp concentration inequalities, and scale-insensitive
complexity measures to handle unbounded weights and general Lipschitz
activation functions. This framework better aligns theoretical guarantees with
contemporary deep learning practice while maintaining mathematical rigor.
|
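The Rademacher complexity bounds mentioned above can be estimated empirically for a toy finite function class. This Monte-Carlo sketch is illustrative only and is unrelated to the paper's exact bounds:

```python
import numpy as np

def empirical_rademacher(function_outputs, n_draws=2000, seed=0):
    """Monte-Carlo estimate of the empirical Rademacher complexity
    R_hat = E_sigma[ sup_f (1/n) * sum_i sigma_i * f(x_i) ]
    for a finite function class, given as an (F, n) array of each
    candidate function's values on the n sample points."""
    rng = np.random.default_rng(seed)
    _, n = function_outputs.shape
    sups = []
    for _ in range(n_draws):
        sigma = rng.choice([-1.0, 1.0], size=n)  # Rademacher signs
        sups.append(float((function_outputs @ sigma / n).max()))
    return float(np.mean(sups))

# Two constant functions {+1, -1}: complexity is E|mean(sigma)| ~ 1/sqrt(n).
outs = np.vstack([np.ones(100), -np.ones(100)])
```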
2501.15754
|
Weight-based Analysis of Detokenization in Language Models:
Understanding the First Stage of Inference Without Inference
|
cs.CL
|
According to the stages-of-inference hypothesis, early layers of language
models map their subword-tokenized input, which does not necessarily correspond
to a linguistically meaningful segmentation, to more meaningful representations
that form the model's "inner vocabulary". Prior analysis of this detokenization
stage has predominantly relied on probing and interventions such as path
patching, which involve selecting particular inputs, choosing a subset of
components that will be patched, and then observing changes in model behavior.
Here, we show that several important aspects of the detokenization stage can be
understood purely by analyzing model weights, without performing any model
inference steps. Specifically, we introduce an analytical decomposition of
first-layer attention in GPT-2. Our decomposition yields interpretable terms
that quantify the relative contributions of position-related, token-related,
and mixed effects. By focusing on terms in this decomposition, we discover
weight-based explanations of attention bias toward close tokens and attention
for detokenization.
|
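The weight-based decomposition described above can be illustrated on toy matrices: by bilinearity, an attention logit over summed token and position embeddings splits exactly into token-token, position-position, and mixed terms. This is a simplified sketch (ignoring layer norm, biases, and scaling), not GPT-2's exact first-layer computation:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8  # toy model dimension (assumption; GPT-2 uses 768)
Wq, Wk = rng.normal(size=(d, d)), rng.normal(size=(d, d))
tok_q, pos_q = rng.normal(size=d), rng.normal(size=d)  # query token/position
tok_k, pos_k = rng.normal(size=d), rng.normal(size=d)  # key token/position

M = Wq @ Wk.T  # effective bilinear form for query-key logits

# Full pre-softmax attention logit for one query-key pair
full = (tok_q + pos_q) @ M @ (tok_k + pos_k)

# Bilinearity splits it exactly into interpretable terms
token_token = tok_q @ M @ tok_k                       # token-related
pos_pos     = pos_q @ M @ pos_k                       # position-related
mixed       = tok_q @ M @ pos_k + pos_q @ M @ tok_k   # mixed effects

assert np.isclose(full, token_token + pos_pos + mixed)
```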
2501.15755
|
GraphICL: Unlocking Graph Learning Potential in LLMs through Structured
Prompt Design
|
cs.LG
|
The growing importance of textual and relational systems has driven interest
in enhancing large language models (LLMs) for graph-structured data,
particularly Text-Attributed Graphs (TAGs), where samples are represented by
textual descriptions interconnected by edges. While research has largely
focused on developing specialized graph LLMs through task-specific instruction
tuning, a comprehensive benchmark for evaluating LLMs solely through prompt
design remains surprisingly absent. Without such a carefully crafted evaluation
benchmark, most, if not all, tailored graph LLMs are compared against general
LLMs using simplistic queries (e.g., zero-shot reasoning with LLaMA), which can
obscure both their advantages and their unexpected shortcomings. To achieve
more general evaluations and unveil the true potential of LLMs
for graph tasks, we introduce Graph In-context Learning (GraphICL) Benchmark, a
comprehensive benchmark comprising novel prompt templates designed to capture
graph structure and handle limited label knowledge. Our systematic evaluation
shows that general-purpose LLMs equipped with our GraphICL outperform
state-of-the-art specialized graph LLMs and graph neural network models in
resource-constrained settings and out-of-domain tasks. These findings highlight
the significant potential of prompt engineering to enhance LLM performance on
graph learning tasks without training and offer a strong baseline for advancing
research in graph LLMs.
|
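A structured prompt in the spirit of GraphICL might serialize a target node's text, its neighbors, and a few labeled examples; the template below is hypothetical, not the benchmark's actual format:

```python
def build_graph_prompt(task, examples, neighbors, target_text):
    """Assemble an in-context prompt for text-attributed-graph node
    classification. Hypothetical template for illustration only."""
    lines = [f"Task: {task}", ""]
    for text, label in examples:  # few-shot labeled examples
        lines.append(f"Node: {text}\nLabel: {label}\n")
    lines.append("Neighbors of the target node:")  # serialized structure
    lines.extend(f"- {t}" for t in neighbors)
    lines.append(f"\nNode: {target_text}\nLabel:")
    return "\n".join(lines)

prompt = build_graph_prompt(
    task="Classify the paper's subject area.",
    examples=[("A study of graph neural networks...", "cs.LG")],
    neighbors=["A survey of message passing...", "Attention is all you need..."],
    target_text="Prompt design for graph learning with LLMs...",
)
```

The assembled string would then be sent to a general-purpose LLM as-is, with no fine-tuning.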
2501.15757
|
Efficiency Bottlenecks of Convolutional Kolmogorov-Arnold Networks: A
Comprehensive Scrutiny with ImageNet, AlexNet, LeNet and Tabular
Classification
|
cs.CV cs.AI
|
Algorithmic developments such as Convolutional Neural Networks, transformers,
the attention mechanism, and Retrieval-Augmented Generation have transformed
Artificial Intelligence. One recent such development is the Kolmogorov-Arnold
Network, which challenges the fundamental design of neural networks and thus
offers an alternative to Multilayer Perceptrons and Convolutional Neural
Networks. KANs were well received for scientific modeling, yet showed drawbacks
in terms of efficiency. In this paper, we
train Convolutional Kolmogorov-Arnold Networks (CKANs) on the ImageNet-1k
dataset with 1.3 million images, the MNIST dataset with 60k images, and a
tabular biological-science MoA dataset, and test the promise of CKANs in terms
of FLOPs, inference time, number of trainable parameters, and training time,
as well as the accuracy, precision, recall, and F1 score they produce, against
standard industry-practice CNN models. We show that CKANs perform fairly well,
though more slowly than CNNs, on small datasets such as MoA and MNIST, but are
not nearly comparable as the dataset gets larger and more complex, as with
ImageNet. The code implementation of this paper can be found at:
\href{https://github.com/ashimdahal/Study-of-Convolutional-Kolmogorov-Arnold-networks}{https://github.com/ashimdahal/Study-of-Convolutional-Kolmogorov-Arnold-networks}
|
2501.15758
|
Risk-Aware Distributional Intervention Policies for Language Models
|
cs.LG cs.CL math.OC
|
Language models are prone to occasional undesirable generations, such as
harmful or toxic content, despite their impressive capability to produce text
that appears accurate and coherent. This paper presents a new two-stage approach
to detect and mitigate undesirable content generations by rectifying
activations. First, we train an ensemble of layerwise classifiers to detect
undesirable content using activations by minimizing a smooth surrogate of the
risk-aware score. Then, for content detected as undesirable, we
propose layerwise distributional intervention policies that perturb the
attention heads minimally while probabilistically guaranteeing the
effectiveness of the intervention. Benchmarks on several language models and
datasets show that our method outperforms baselines in reducing the generation
of undesirable output.
|
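A minimal sketch of activation rectification: if a linear probe flags an activation as undesirable, the smallest L2 shift that moves its score below zero has a closed form. This illustrates the idea of a minimal intervention only; the paper's layerwise distributional policies are more sophisticated:

```python
import numpy as np

def rectify_activation(a, w, b, margin=0.1):
    """If a linear probe w.a + b >= 0 flags activation a as undesirable,
    shift a by the smallest L2 step that moves the score to -margin.
    Illustrative sketch, not the paper's intervention policy."""
    score = float(w @ a + b)
    if score < 0:
        return a  # already classified as desirable: no intervention
    step = (score + margin) / float(w @ w)
    return a - step * w  # closed-form minimal L2 perturbation

rng = np.random.default_rng(1)
a, w = rng.normal(size=16), rng.normal(size=16)
a_fixed = rectify_activation(a, w, b=1.0)
```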
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.