| id | title | categories | abstract |
|---|---|---|---|
2501.06240
|
The Convergence of Dynamic Routing between Capsules
|
cs.LG math.OC
|
Capsule networks (CapsNets) are recently proposed neural network models with
new processing layers, designed specifically for entity representation and
discovery in images. It is well known that CapsNets have some advantages over
traditional neural networks, especially in generalization capability. At the
same time, some studies report negative experimental results, and the causes of
this contradiction have not been thoroughly analyzed. Preliminary experimental
results show that the routing algorithms do not always behave as expected; in
most cases, different routing algorithms do not change the classification
results but simply polarize the link strengths, especially when they are
iterated without stopping. To realize the true potential of CapsNets, a deep
mathematical analysis of the routing algorithms is crucial. In this paper, we
give the objective function minimized by the dynamic routing algorithm and show
that it is a concave function. The dynamic routing algorithm can then be
regarded as a nonlinear gradient method for solving an optimization problem
under linear constraints, and we give a mathematically rigorous proof of
convergence for this class of iterative routing procedures. We analyze in
detail the relation between the objective function and the constraints solved
by the dynamic routing algorithm, and perform corresponding routing experiments
to illustrate our convergence results.
|
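The routing behavior described above, with logits polarizing as iterations repeat, can be seen in a minimal pure-Python sketch of the dynamic routing procedure of Sabour et al.; the dimensions, iteration count, and toy prediction vectors below are illustrative only:

```python
import math

def squash(v):
    # Squash nonlinearity: preserves direction, maps the norm into [0, 1).
    norm2 = sum(x * x for x in v)
    scale = norm2 / (1 + norm2) / math.sqrt(norm2 + 1e-9)
    return [scale * x for x in v]

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def dynamic_routing(u, iters=3):
    # u[i][j] is the prediction vector from input capsule i to output capsule j.
    n_in, n_out, dim = len(u), len(u[0]), len(u[0][0])
    b = [[0.0] * n_out for _ in range(n_in)]      # routing logits
    for _ in range(iters):
        c = [softmax(row) for row in b]           # coupling coefficients
        s = [[sum(c[i][j] * u[i][j][k] for i in range(n_in)) for k in range(dim)]
             for j in range(n_out)]
        v = [squash(sj) for sj in s]              # output capsule vectors
        for i in range(n_in):
            for j in range(n_out):
                b[i][j] += sum(u[i][j][k] * v[j][k] for k in range(dim))
    return v, b

# Two inputs agree on output 0 and disagree on output 1: the logits for
# output 0 grow while those for output 1 stay flat, i.e. routing polarizes.
u = [[[1.0, 0.0], [1.0, 0.0]], [[1.0, 0.0], [-1.0, 0.0]]]
v, b = dynamic_routing(u)
print(b[0][0] > b[0][1])  # True
```

Running more iterations only strengthens the agreement logit, matching the "polarize the link strength" observation in the abstract.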
2501.06241
|
Predicting House Rental Prices in Ghana Using Machine Learning
|
cs.LG stat.AP
|
This study investigates the efficacy of machine learning models for
predicting house rental prices in Ghana, addressing the need for accurate and
accessible housing market information. Utilising a comprehensive dataset of
rental listings, we trained and evaluated various models, including CatBoost,
XGBoost, and Random Forest. CatBoost emerged as the best-performing model,
achieving an $R^2$ of 0.876, demonstrating its ability to effectively capture
complex relationships within the housing market. Feature importance analysis
revealed that location-based features, number of bedrooms, bathrooms, and
furnishing status are key drivers of rental prices. Our findings provide
valuable insights for stakeholders, including real estate professionals,
investors, and policymakers, while also highlighting opportunities for future
research, such as incorporating temporal data and exploring regional
variations.
|
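As a small aside on the headline metric: the $R^2$ of 0.876 reported above is the coefficient of determination, which can be computed directly (the rental figures below are illustrative, not from the paper's dataset):

```python
def r_squared(y_true, y_pred):
    # R^2 = 1 - SS_res / SS_tot: fraction of variance explained by the model.
    mean_y = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean_y) ** 2 for t in y_true)
    return 1 - ss_res / ss_tot

# Hypothetical monthly rents (true vs. predicted), for illustration only.
y_true = [1000, 1500, 2000, 2500]
y_pred = [1100, 1400, 2100, 2400]
print(round(r_squared(y_true, y_pred), 3))  # 0.968
```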
2501.06242
|
Intelligent Task Offloading: Advanced MEC Task Offloading and Resource
Management in 5G Networks
|
cs.NI cs.AI cs.DC
|
5G technology enhances industries with high-speed, reliable, low-latency
communication, revolutionizing mobile broadband and supporting massive IoT
connectivity. With the increasing complexity of applications on User Equipment
(UE), offloading resource-intensive tasks to robust servers is essential for
improving latency and speed. The 3GPP's Multi-access Edge Computing (MEC)
framework addresses this challenge by processing tasks closer to the user,
highlighting the need for an intelligent controller to optimize task offloading
and resource allocation. This paper introduces a novel methodology to
efficiently allocate both communication and computational resources among
individual UEs. Our approach integrates two critical 5G service imperatives:
Ultra-Reliable Low Latency Communication (URLLC) and Massive Machine Type
Communication (mMTC), embedding them into the decision-making framework.
Central to this approach is the utilization of Proximal Policy Optimization,
providing a robust and efficient solution to the challenges posed by the
evolving landscape of 5G technology. The proposed model is evaluated in a
simulated 5G MEC environment. According to the reported simulation results, it
reduces processing time by 4% for URLLC users under strict latency constraints
and decreases power consumption by 26% for mMTC users compared to existing
baseline models. These improvements showcase the model's adaptability and
superior performance in meeting diverse QoS requirements in 5G networks.
|
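The decision-making core above is Proximal Policy Optimization; its clipped surrogate objective, which keeps each policy update conservative, can be sketched in a few lines (the epsilon of 0.2 is PPO's common default, not a value taken from this paper):

```python
def ppo_clip_objective(ratio, advantage, eps=0.2):
    # Clipped surrogate objective from PPO: take the pessimistic minimum of
    # the raw importance-weighted advantage and its clipped counterpart,
    # so the new policy cannot move too far from the old one in one step.
    clipped = max(min(ratio, 1 + eps), 1 - eps)
    return min(ratio * advantage, clipped * advantage)

# A ratio of 1.5 with positive advantage is clipped down to 1.2.
print(ppo_clip_objective(1.5, 1.0))  # 1.2
```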
2501.06243
|
Agent TCP/IP: An Agent-to-Agent Transaction System
|
cs.AI cs.MA cs.NI
|
Autonomous agents represent an inevitable evolution of the internet. Current
agent frameworks do not embed a standard protocol for agent-to-agent
interaction, leaving existing agents isolated from their peers. As intellectual
property is the native asset ingested by and produced by agents, a true agent
economy requires equipping agents with a universal framework for engaging in
binding contracts with each other, including the exchange of valuable training
data, personality, and other forms of Intellectual Property. A purely
agent-to-agent transaction layer would transcend the need for human
intermediation in multi-agent interactions. The Agent Transaction Control
Protocol for Intellectual Property (ATCP/IP) introduces a trustless framework
for exchanging IP between agents via programmable contracts, enabling agents to
initiate, trade, borrow, and sell agent-to-agent contracts on the Story
blockchain network. These contracts not only represent auditable onchain
execution but also contain a legal wrapper that allows agents to express and
enforce their actions in the offchain legal setting, creating legal personhood
for agents. Via ATCP/IP, agents can autonomously sell their training data to
other agents, license confidential or proprietary information, and collaborate
on content based on their unique skills, all of which constitutes an emergent
knowledge economy.
|
2501.06244
|
Microservice Deployment in Space Computing Power Networks via Robust
Reinforcement Learning
|
cs.NI cs.AI cs.DC cs.LG
|
With the growing demand for Earth observation, it is important to provide
reliable real-time remote sensing inference services to meet the low-latency
requirements. The Space Computing Power Network (Space-CPN) offers a promising
solution by providing onboard computing and extensive coverage capabilities for
real-time inference. This paper presents a deployment framework for remote
sensing artificial intelligence applications, designed for Low Earth Orbit
satellite constellations to achieve real-time inference performance. The
framework employs the microservice architecture, decomposing monolithic
inference tasks into reusable, independent modules to address high latency and
resource heterogeneity. This distributed approach enables optimized
microservice deployment, minimizing resource utilization while meeting quality
of service and functional requirements. We introduce Robust Optimization to the
deployment problem to address data uncertainty. Additionally, we model the
Robust Optimization problem as a Partially Observable Markov Decision Process
and propose a robust reinforcement learning algorithm to handle the
semi-infinite Quality of Service constraints. Our approach yields sub-optimal
solutions that minimize accuracy loss while maintaining acceptable
computational costs. Simulation results demonstrate the effectiveness of our
framework.
|
2501.06246
|
A partition cover approach to tokenization
|
cs.CL cs.AI cs.DS
|
Tokenization is the process of encoding strings into tokens from a fixed
vocabulary of size $k$ and is widely utilized in Natural Language Processing
applications. The leading tokenization algorithm today is Byte Pair Encoding
(BPE), which formulates the tokenization problem as a compression problem and
tackles it by performing sequences of merges. In this work, we formulate
tokenization as an optimization objective, show that it is NP-hard via a simple
reduction from vertex cover, and propose a polynomial-time greedy algorithm
GreedTok. Our formulation naturally relaxes to the well-studied weighted
maximum coverage problem which has a simple $(1 - 1/e)$-approximation algorithm
GreedWMC. Through empirical evaluations on real-world corpora, we show that
GreedTok outperforms BPE, while achieving a comparable objective score as
GreedWMC (which could have achieved a higher score due to relaxation).
|
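The relaxation described above targets weighted maximum coverage, whose standard greedy $(1 - 1/e)$-approximation looks like this (the toy sets and weights are illustrative; this is not GreedTok's actual token-candidate construction):

```python
def greedy_weighted_max_coverage(sets, weights, k):
    # Greedy (1 - 1/e)-approximation: repeatedly pick the set that adds
    # the most uncovered weight, k times.
    covered = set()
    chosen = []
    for _ in range(k):
        best = max(sets, key=lambda s: sum(weights[e] for e in sets[s] - covered))
        chosen.append(best)
        covered |= sets[best]
    return chosen, sum(weights[e] for e in covered)

# Candidate "tokens" covering weighted text positions (made-up example).
sets = {"ab": {1, 2}, "bc": {2, 3}, "cd": {3, 4}}
weights = {1: 5, 2: 1, 3: 1, 4: 5}
chosen, total = greedy_weighted_max_coverage(sets, weights, 2)
print(chosen, total)  # ['ab', 'cd'] 12
```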
2501.06247
|
A Survey on Algorithmic Developments in Optimal Transport Problem with
Applications
|
cs.DS cs.AI cs.LG math.OC stat.ML
|
Optimal Transport (OT) has established itself as a robust framework for
quantifying differences between distributions, with applications that span
fields such as machine learning, data science, and computer vision. This paper
offers a detailed examination of the OT problem, beginning with its theoretical
foundations, including the classical formulations of Monge and Kantorovich and
their extensions to modern computational techniques. It explores cutting-edge
algorithms, including Sinkhorn iterations, primal-dual strategies, and
reduction-based approaches, emphasizing their efficiency and scalability in
addressing high-dimensional problems. The paper also highlights emerging
trends, such as integrating OT into machine learning frameworks, the
development of novel problem variants, and ongoing theoretical advancements.
Applications of OT are presented across a range of domains, with particular
attention to its innovative application in time series data analysis via
Optimal Transport Warping (OTW), a robust alternative to methods like Dynamic
Time Warping. Despite the significant progress made, challenges related to
scalability, robustness, and ethical considerations remain, necessitating
further research. The paper underscores OT's potential to bridge theoretical
depth and practical utility, fostering impactful advancements across diverse
disciplines.
|
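The Sinkhorn iterations surveyed above alternately rescale the rows and columns of a Gibbs kernel until the transport plan matches the prescribed marginals; a minimal pure-Python sketch on a 2x2 problem (matrix sizes and the regularization strength are illustrative):

```python
import math

def sinkhorn(cost, r, c, reg=0.1, iters=200):
    # Entropic-regularized OT: build the Gibbs kernel K = exp(-cost / reg)
    # and alternately rescale rows toward marginal r and columns toward c.
    n, m = len(r), len(c)
    K = [[math.exp(-cost[i][j] / reg) for j in range(m)] for i in range(n)]
    u, v = [1.0] * n, [1.0] * m
    for _ in range(iters):
        u = [r[i] / sum(K[i][j] * v[j] for j in range(m)) for i in range(n)]
        v = [c[j] / sum(K[i][j] * u[i] for i in range(n)) for j in range(m)]
    # Transport plan P = diag(u) K diag(v).
    return [[u[i] * K[i][j] * v[j] for j in range(m)] for i in range(n)]

# Identity-like cost: almost all mass stays on the diagonal.
P = sinkhorn([[0.0, 1.0], [1.0, 0.0]], [0.5, 0.5], [0.5, 0.5])
print(P[0][0] > P[0][1])  # True
```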
2501.06248
|
Utility-inspired Reward Transformations Improve Reinforcement Learning
Training of Language Models
|
cs.LG cs.AI cs.CL econ.GN q-fin.EC
|
Current methods that train large language models (LLMs) with reinforcement
learning feedback often resort to averaging the outputs of multiple reward
functions during training. This overlooks crucial aspects of individual reward
dimensions and inter-reward dependencies, which can lead to sub-optimal
outcomes in generations. In this work, we show how linear aggregation of
rewards exhibits vulnerabilities that can lead to undesired properties of
generated text. We then propose a transformation of reward functions inspired
by the economic theory of utility functions (specifically the Inada conditions)
that enhances sensitivity to low reward values while diminishing sensitivity to
already-high values. We compare our approach to existing baseline methods
that linearly aggregate rewards and show how the Inada-inspired reward feedback
is superior to traditional weighted averaging. We quantitatively and
qualitatively analyse the difference in the methods, and see that models
trained with Inada-transformations score as more helpful while being less
harmful.
|
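The Inada conditions referenced above require marginal utility to blow up near zero and vanish at infinity; a concave power transform is one simple function with this shape (the exponent below is an illustrative choice, not the paper's actual transformation):

```python
def inada_transform(r, alpha=0.3, eps=1e-6):
    # Concave power transform: steep near zero reward (high sensitivity to
    # low rewards), nearly flat for already-high rewards, mimicking the
    # Inada conditions f'(0) -> inf and f'(inf) -> 0. alpha is illustrative.
    return (r + eps) ** alpha

# The same reward increment is worth much more at the low end of the scale.
low_gain = inada_transform(0.1) - inada_transform(0.0)
high_gain = inada_transform(1.0) - inada_transform(0.9)
print(low_gain > high_gain)  # True: more sensitive to low rewards
```

Aggregating transformed rewards therefore penalizes a generation that scores terribly on any single dimension, which linear averaging can mask.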
2501.06249
|
Scalable Cosmic AI Inference using Cloud Serverless Computing with FMI
|
cs.CV astro-ph.IM
|
Large-scale astronomical image data processing and prediction is essential
for astronomers, providing crucial insights into celestial objects, the
universe's history, and its evolution. While modern deep learning models offer
high predictive accuracy, they often demand substantial computational
resources, making them resource-intensive and limiting accessibility. We
introduce the Cloud-based Astronomy Inference (CAI) framework to address these
challenges. This scalable solution integrates pre-trained foundation models
with serverless cloud infrastructure through a Function-as-a-Service (FaaS)
Message Interface (FMI). CAI enables efficient and scalable inference on
astronomical images without extensive hardware. Using a foundation model for
redshift prediction as a case study, our extensive experiments cover user
devices, HPC (High-Performance Computing) servers, and cloud environments.
CAI's significant scalability improvement on large data sizes makes it an
accessible and effective tool for the astronomy community. The code is
accessible at
https://github.com/UVA-MLSys/AI-for-Astronomy.
|
2501.06250
|
Generative AI for Cel-Animation: A Survey
|
cs.CV cs.AI cs.HC
|
The traditional Celluloid (Cel) Animation production pipeline encompasses
multiple essential steps, including storyboarding, layout design, keyframe
animation, inbetweening, and colorization, which demand substantial manual
effort, technical expertise, and significant time investment. These challenges
have historically impeded the efficiency and scalability of Cel-Animation
production. The rise of generative artificial intelligence (GenAI),
encompassing large language models, multimodal models, and diffusion models,
offers innovative solutions by automating tasks such as inbetween frame
generation, colorization, and storyboard creation. This survey explores how
GenAI integration is revolutionizing traditional animation workflows by
lowering technical barriers, broadening accessibility for a wider range of
creators through tools like AniDoc, ToonCrafter, and AniSora, and enabling
artists to focus more on creative expression and artistic innovation. Despite
its potential, issues such as maintaining visual consistency, ensuring
stylistic coherence, and addressing ethical considerations continue to pose
challenges. Furthermore, this paper discusses future directions and explores
potential advancements in AI-assisted animation. For further exploration and
resources, please visit our GitHub repository:
https://github.com/yunlong10/Awesome-AI4Animation
|
2501.06252
|
Transformer-Squared: Self-adaptive LLMs
|
cs.LG cs.AI cs.CL
|
Self-adaptive large language models (LLMs) aim to address the challenges posed
by traditional fine-tuning methods, which are often computationally intensive
and static in their ability to handle diverse tasks. We introduce
Transformer-Squared, a novel self-adaptation framework that adapts LLMs for
unseen tasks in real-time by selectively adjusting only the singular components
of their weight matrices. During inference, Transformer-Squared employs a
two-pass mechanism: first, a dispatch system identifies the task properties,
and then task-specific 'expert' vectors, trained using reinforcement learning,
are dynamically mixed to obtain targeted behavior for the incoming prompt. Our
method consistently outperforms ubiquitous approaches such as LoRA, with fewer
parameters and greater efficiency. Furthermore, Transformer-Squared
demonstrates versatility across different LLM architectures and modalities,
including vision-language tasks. Transformer-Squared represents a significant
leap forward, offering a scalable, efficient solution for enhancing the
adaptability and task-specific performance of LLMs, paving the way for truly
dynamic, self-organizing AI systems.
|
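The second pass described above mixes task-specific expert vectors to adapt the singular components of the weight matrices; a minimal sketch of such a blend (the expert names, values, and dispatch weights are hypothetical, not the paper's API):

```python
def mix_experts(expert_vectors, weights):
    # Blend task-specific "expert" vectors (each one scales the singular
    # values of a weight matrix) with dispatch weights, yielding a
    # prompt-specific adaptation vector. Illustrative only.
    dim = len(expert_vectors[0])
    return [sum(w * ev[k] for w, ev in zip(weights, expert_vectors))
            for k in range(dim)]

# Hypothetical experts for two task families, mixed 75/25 by the dispatcher.
math_expert = [1.2, 0.8, 1.0]
code_expert = [0.9, 1.1, 1.3]
mixed = mix_experts([math_expert, code_expert], [0.75, 0.25])
print(mixed)
```

Because only a vector of singular-value scalings is stored per expert, the per-task footprint is far smaller than a full low-rank adapter.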
2501.06253
|
The State of Post-Hoc Local XAI Techniques for Image Processing:
Challenges and Motivations
|
cs.CV cs.AI
|
As complex AI systems become an integral part of our lives, a persistent and
critical problem is their underlying black-box nature. In the pursuit of
productivity enhancements, one must not forget the need for technologies that
boost the overall trustworthiness of such AI systems. One example, studied
extensively in this work, is the domain of Explainable Artificial Intelligence
(XAI). Research in this scope centres on making AI systems more transparent and
interpretable, to further boost reliability and trust in using them. In this
work, we discuss the various motivations for XAI and its approaches, the
underlying challenges that XAI faces, and some open problems that we believe
deserve further effort. We also provide a brief discussion of various XAI
approaches for image processing, and finally discuss some future directions, in
the hope of motivating positive development of the XAI research space.
|
2501.06254
|
Rethinking Evaluation of Sparse Autoencoders through the Representation
of Polysemous Words
|
cs.CL cs.AI cs.LG
|
Sparse autoencoders (SAEs) have gained a lot of attention as a promising tool
to improve the interpretability of large language models (LLMs) by mapping the
complex superposition of polysemantic neurons into monosemantic features and
composing a sparse dictionary of words. However, traditional performance
metrics like Mean Squared Error and L0 sparsity ignore the evaluation of the
semantic representational power of SAEs -- whether they can acquire
interpretable monosemantic features while preserving the semantic relationship
of words. For instance, it is not obvious whether a learned sparse feature
could distinguish different meanings in one word. In this paper, we propose a
suite of evaluations for SAEs to analyze the quality of monosemantic features
by focusing on polysemous words. Our findings reveal that SAEs developed to
improve the MSE-L0 Pareto frontier do not necessarily enhance the extraction of
monosemantic features. The analysis of SAEs with polysemous words also sheds
light on the internal mechanisms of LLMs: deeper layers and the attention
module contribute to distinguishing polysemy within a word. Our
semantics-focused evaluation offers new insights into polysemy and the existing
SAE objective, and contributes to the development of more practical SAEs.
|
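The two traditional metrics the abstract argues are insufficient, reconstruction MSE and L0 sparsity, are straightforward to compute, which is part of why they dominate SAE evaluation (toy vectors below):

```python
def sae_metrics(x, x_hat, z):
    # Standard SAE training metrics: reconstruction MSE between the input x
    # and its reconstruction x_hat, and L0 sparsity (the number of active
    # features in the latent code z). Neither says anything about whether
    # the active features are semantically monosemantic.
    mse = sum((a - b) ** 2 for a, b in zip(x, x_hat)) / len(x)
    l0 = sum(1 for v in z if v != 0)
    return mse, l0

mse, l0 = sae_metrics([1.0, 2.0], [1.0, 1.0], [0.0, 3.0, 0.0])
print(mse, l0)  # 0.5 1
```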
2501.06255
|
Progressive Supervision via Label Decomposition: A Long-Term and
Large-Scale Wireless Traffic Forecasting Method
|
cs.LG cs.AI
|
Long-term and Large-scale Wireless Traffic Forecasting (LL-WTF) is pivotal
for strategic network management and comprehensive planning on a macro scale.
However, LL-WTF poses greater challenges than short-term ones due to the
pronounced non-stationarity of extended wireless traffic and the vast number of
nodes distributed at the city scale. To cope with this, we propose a
Progressive Supervision method based on Label Decomposition (PSLD).
Specifically, we first introduce a Random Subgraph Sampling (RSS) algorithm
designed to sample a tractable subset from large-scale traffic data, thereby
enabling efficient network training. Then, PSLD employs label decomposition to
obtain multiple easy-to-learn components, which are learned progressively at
shallow layers and combined at deep layers to effectively cope with the
non-stationarity posed by LL-WTF tasks. Finally, we compare the proposed method
with various state-of-the-art (SOTA) methods on three large-scale WT datasets.
Extensive experimental results demonstrate that the proposed PSLD significantly
outperforms existing methods, with average performance improvements of 2%, 4%,
and 11% on the three WT datasets, respectively. In addition, we built an
open-source library for WT forecasting (WTFlib) to facilitate related research;
it contains numerous SOTA methods and provides a strong benchmark. Experiments
can be reproduced through https://github.com/Anoise/WTFlib.
|
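The label decomposition idea above, splitting a hard non-stationary target into easier components, can be sketched with a simple moving-average split (a generic decomposition for illustration, not necessarily the one PSLD uses):

```python
def decompose_label(y, window=3):
    # Split a traffic series into a smooth (easy-to-learn) moving-average
    # component and a residual; under progressive supervision the smooth
    # part could be supervised at shallow layers, the residual deeper.
    trend = []
    for i in range(len(y)):
        lo, hi = max(0, i - window + 1), i + 1
        trend.append(sum(y[lo:hi]) / (hi - lo))
    residual = [a - b for a, b in zip(y, trend)]
    return trend, residual

trend, residual = decompose_label([1.0, 2.0, 3.0, 4.0])
print([round(t + r, 6) for t, r in zip(trend, residual)])  # recombines exactly
```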
2501.06256
|
What Matters for In-Context Learning: A Balancing Act of Look-up and
In-Weight Learning
|
cs.CL cs.AI cs.LG
|
Large Language Models (LLMs) have demonstrated impressive performance in
various tasks, including In-Context Learning (ICL), where the model performs
new tasks by conditioning solely on the examples provided in the context,
without updating the model's weights. While prior research has explored the
roles of pretraining data and model architecture, the key mechanism behind ICL
remains unclear. In this work, we systematically uncover properties present in
LLMs that support the emergence of ICL. To disambiguate these factors, we
conduct a study with a controlled dataset and data sequences using a deep
autoregressive model. We show that conceptual repetitions in the data sequences
are crucial for ICL, more so than previously indicated training data properties
like burstiness or long-tail distribution. Conceptual repetitions could refer
to $n$-gram repetitions in textual data or exact image copies in image sequence
data. Such repetitions also offer other previously overlooked benefits such as
reduced transiency in ICL performance. Furthermore, we show that the emergence
of ICL depends on balancing the in-weight learning objective with the
in-context solving ability during training.
|
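The $n$-gram repetitions highlighted above can be counted with a small helper, a rough textual proxy for the paper's broader notion of conceptual repetition:

```python
from collections import Counter

def ngram_repetitions(tokens, n=2):
    # Count how many n-grams recur within a sequence: each extra occurrence
    # beyond the first counts as one repetition.
    grams = Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))
    return sum(c - 1 for c in grams.values() if c > 1)

print(ngram_repetitions(["a", "b", "c", "a", "b"]))  # ("a", "b") recurs once -> 1
```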
2501.06258
|
Contextual Bandit Optimization with Pre-Trained Neural Networks
|
cs.LG
|
Bandit optimization is a difficult problem, especially if the reward model is
high-dimensional. When rewards are modeled by neural networks, sublinear regret
has only been shown under strong assumptions, usually when the network is
extremely wide. In this thesis, we investigate how pre-training can help us in
the regime of smaller models. We consider a stochastic contextual bandit with
the rewards modeled by a multi-layer neural network. The last layer is a linear
predictor, and the layers before it are a black box neural architecture, which
we call a representation network. We model pre-training as an initial guess of
the weights of the representation network provided to the learner. To leverage
the pre-trained weights, we introduce a novel algorithm we call Explore Twice
then Commit (E2TC). During its two stages of exploration, the algorithm first
estimates the last layer's weights using Ridge regression, and then runs
Stochastic Gradient Descent jointly on all the weights. For a locally convex
loss function, we provide conditions on the pre-trained weights under which the
algorithm can learn efficiently. Under these conditions, we show sublinear
regret of E2TC when the dimension of the last layer and number of actions $K$
are much smaller than the horizon $T$. In the weak training regime, when only
the last layer is learned, the problem reduces to a misspecified linear bandit.
We introduce a measure of misspecification $\epsilon_0$ for this bandit and use
it to provide bounds $O(\epsilon_0\sqrt{d}KT+(KT)^{4 /5})$ or
$\tilde{O}(\epsilon_0\sqrt{d}KT+d^{1 /3}(KT)^{2 /3})$ on the regret, depending
on regularization strength. The first of these bounds has a
dimension-independent sublinear term, made possible by the stochasticity of
contexts. We also run experiments to evaluate the regret of E2TC and sample
complexity of its exploration in practice.
|
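E2TC's first exploration stage estimates the last layer's weights by ridge regression; in one dimension the closed form reduces to a single division (an illustrative reduction, not the thesis's full multi-dimensional estimator):

```python
def ridge_1d(xs, ys, lam=1.0):
    # Closed-form ridge estimate for a single weight:
    # w = sum(x * y) / (sum(x^2) + lambda), the 1-D case of
    # w = (X^T X + lambda I)^{-1} X^T y.
    return sum(x * y for x, y in zip(xs, ys)) / (sum(x * x for x in xs) + lam)

# With no regularization the true slope 2.0 is recovered exactly.
print(ridge_1d([1.0, 2.0, 3.0], [2.0, 4.0, 6.0], lam=0.0))  # 2.0
```

Larger `lam` shrinks the estimate toward zero, the regularization-strength trade-off that the regret bounds in the abstract depend on.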
2501.06259
|
Quantum Down Sampling Filter for Variational Auto-encoder
|
cs.CV
|
Variational Autoencoders (VAEs) are essential tools in generative modeling
and image reconstruction, with their performance heavily influenced by the
encoder-decoder architecture. This study aims to improve the quality of
reconstructed images by enhancing their resolution and preserving finer
details, particularly when working with low-resolution inputs (16x16 pixels),
where traditional VAEs often yield blurred or inaccurate results. To address
this, we propose a hybrid model that combines quantum computing techniques in
the VAE encoder with convolutional neural networks (CNNs) in the decoder. By
upscaling the resolution from 16x16 to 32x32 during the encoding process, our
approach evaluates how the model reconstructs images with enhanced resolution
while maintaining key features and structures. This method tests the model's
robustness in handling image reconstruction and its ability to preserve
essential details despite training on lower-resolution data. We evaluate our
proposed downsampling filter for the Quantum VAE (Q-VAE) on the MNIST and USPS
datasets and compare it with classical VAEs and a variant called Classical
Direct Passing VAE (CDP-VAE), which uses windowing pooling filters in the
encoding process. Performance is assessed using metrics such as the Frechet
Inception Distance (FID) and Mean Squared Error (MSE), which measure the
fidelity of reconstructed images. Our results demonstrate that the Q-VAE
consistently outperforms both the Classical VAE and CDP-VAE, achieving
significantly lower FID and MSE scores. Additionally, the CDP-VAE yields better
performance than the classical VAE. These findings highlight the potential of
quantum-enhanced VAEs to improve image reconstruction quality by enhancing
resolution and preserving essential features, offering a promising direction
for future applications in computer vision and synthetic data generation.
|
2501.06261
|
CAMs as Shapley Value-based Explainers
|
cs.CV cs.GT
|
Class Activation Mapping (CAM) methods are widely used to visualize neural
network decisions, yet their underlying mechanisms remain incompletely
understood. To enhance the understanding of CAM methods and improve their
explainability, we introduce the Content Reserved Game-theoretic (CRG)
Explainer. This theoretical framework clarifies the theoretical foundations of
GradCAM and HiResCAM by modeling the neural network prediction process as a
cooperative game. Within this framework, we develop ShapleyCAM, a new method
that leverages gradients and the Hessian matrix to provide more precise and
theoretically grounded visual explanations. Due to the computational
infeasibility of exact Shapley value calculation, ShapleyCAM employs a
second-order Taylor expansion of the cooperative game's utility function to
derive a closed-form expression. Additionally, we propose the Residual Softmax
Target-Class (ReST) utility function to address the limitations of pre-softmax
and post-softmax scores. Extensive experiments across 12 popular networks on
the ImageNet validation set demonstrate the effectiveness of ShapleyCAM and its
variants. Our findings not only advance CAM explainability but also bridge the
gap between heuristic-driven CAM methods and compute-intensive Shapley
value-based methods. The code is available at
\url{https://github.com/caihuaiguang/pytorch-shapley-cam}.
|
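Exact Shapley values, whose cost motivates ShapleyCAM's second-order Taylor approximation above, average each player's marginal contribution over all orderings; for a two-player toy game the computation is tiny, but it grows factorially with the number of players:

```python
from itertools import permutations

def shapley_values(players, value):
    # Exact Shapley values: average each player's marginal contribution
    # over every ordering of the players. This is the exponential-cost
    # computation that ShapleyCAM sidesteps with a Taylor expansion.
    totals = {p: 0.0 for p in players}
    orders = list(permutations(players))
    for order in orders:
        coalition = set()
        for p in order:
            before = value(frozenset(coalition))
            coalition.add(p)
            totals[p] += value(frozenset(coalition)) - before
    return {p: t / len(orders) for p, t in totals.items()}

# Toy utility: value 1.0 only when both "players" (e.g. feature maps) join.
v = lambda s: 1.0 if len(s) == 2 else 0.0
sv = shapley_values(["a", "b"], v)
print(sv)  # credit split equally: 0.5 each
```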
2501.06262
|
Towards smart and adaptive agents for active sensing on edge devices
|
cs.RO cs.AI eess.IV
|
TinyML has made deploying deep learning models on low-power edge devices
feasible, creating new opportunities for real-time perception in constrained
environments. However, the adaptability of such deep learning methods remains
limited to data drift adaptation, lacking broader capabilities that account for
the environment's underlying dynamics and inherent uncertainty. Deep learning's
scaling laws, which counterbalance this limitation by massively up-scaling data
and model size, cannot be applied when deploying on the Edge, where deep
learning limitations are further amplified as models are scaled down for
deployment on resource-constrained devices.
This paper presents a smart agentic system capable of performing on-device
perception and planning, enabling active sensing on the edge. By incorporating
active inference into our solution, our approach extends beyond deep learning
capabilities, allowing the system to plan in dynamic environments while
operating in real time with a modest total model size of 2.3 MB. We showcase
our proposed system by creating and deploying a saccade agent connected to an
IoT camera with pan and tilt capabilities on an NVIDIA Jetson embedded device.
The saccade agent controls the camera's field of view following optimal
policies derived from the active inference principles, simulating human-like
saccadic motion for surveillance and robotics applications.
|
2501.06263
|
GelBelt: A Vision-based Tactile Sensor for Continuous Sensing of Large
Surfaces
|
cs.CV cs.RO
|
Scanning large-scale surfaces is widely demanded in surface reconstruction
applications and in detecting defects during industrial quality control and
maintenance. Traditional vision-based tactile sensors have shown promising
performance in high-resolution shape reconstruction while suffering from
limitations such as small sensing areas or susceptibility to damage when slid
across surfaces, making them unsuitable for continuous sensing on large
surfaces. To address these shortcomings, we introduce a novel vision-based
tactile sensor designed for continuous surface sensing applications. Our design
uses an elastomeric belt and two wheels to continuously scan the target
surface. The proposed sensor showed promising results in both shape
reconstruction and surface fusion, indicating its applicability. The dot
product of the estimated and reference surface normal map is reported over the
sensing area and for different scanning speeds. Results indicate that the
proposed sensor can rapidly scan large-scale surfaces with high accuracy at
speeds up to 45 mm/s.
|
2501.06265
|
AgoraSpeech: A multi-annotated comprehensive dataset of political
discourse through the lens of humans and AI
|
cs.CL
|
Political discourse datasets are important for gaining political insights,
analyzing communication strategies or social science phenomena. Although
numerous political discourse corpora exist, comprehensive, high-quality,
annotated datasets are scarce. This is largely due to the substantial manual
effort, multidisciplinarity, and expertise required for the nuanced annotation
of rhetorical strategies and ideological contexts. In this paper, we present
AgoraSpeech, a meticulously curated, high-quality dataset of 171 political
speeches from six parties during the Greek national elections in 2023. The
dataset includes annotations (per paragraph) for six natural language
processing (NLP) tasks: text classification, topic identification, sentiment
analysis, named entity recognition, polarization and populism detection. A
two-step annotation was employed, starting with ChatGPT-generated annotations
and followed by exhaustive human-in-the-loop validation. The dataset was
initially used in a case study to provide insights during the pre-election
period. However, it has general applicability by serving as a rich source of
information for political and social scientists, journalists, or data
scientists, while it can be used for benchmarking and fine-tuning NLP and large
language models (LLMs).
|
2501.06268
|
Cluster Catch Digraphs with the Nearest Neighbor Distance
|
cs.LG stat.ME stat.ML
|
We introduce a new method for clustering based on Cluster Catch Digraphs
(CCDs). The new method addresses the limitations of RK-CCDs by employing a new
variant of a spatial randomness test that uses the nearest neighbor distance
(NND) instead of Ripley's K function used by RK-CCDs. We conduct a
comprehensive Monte Carlo analysis to assess the performance of our method,
considering factors such as dimensionality, data set size, number of clusters,
cluster volumes, and inter-cluster distance. Our method is particularly
effective for high-dimensional data sets, comparable to or outperforming
KS-CCDs and RK-CCDs, which rely on a KS-type statistic or Ripley's K function.
We also evaluate our methods on real and complex data sets,
comparing them to well-known clustering methods. Again, our methods exhibit
competitive performance, producing high-quality clusters with desirable
properties.
Keywords: Graph-based clustering, Cluster catch digraphs, High-dimensional
data, The nearest neighbor distance, Spatial randomness test
|
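The NND statistic at the heart of the method above is just the average distance from each point to its nearest neighbor, which is then compared against its expectation under complete spatial randomness (toy 2-D points below):

```python
import math

def mean_nn_distance(points):
    # Mean nearest-neighbor distance (NND): for each point, find the
    # distance to its closest other point, then average.
    dists = []
    for i, p in enumerate(points):
        best = min(math.dist(p, q) for j, q in enumerate(points) if j != i)
        dists.append(best)
    return sum(dists) / len(dists)

print(mean_nn_distance([(0, 0), (1, 0), (5, 0)]))  # (1 + 1 + 4) / 3 = 2.0
```

Clustered data pulls this average down relative to a random point pattern, which is what the spatial randomness test detects.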
2501.06269
|
OpenAI ChatGPT interprets Radiological Images: GPT-4 as a Medical Doctor
for a Fast Check-Up
|
cs.CV
|
OpenAI released version GPT-4 on March 14, 2023, following the success of
ChatGPT, which was announced in November 2022. In addition to the existing
GPT-3 features, GPT-4 can interpret images. To achieve this, the processing
power and the model have been significantly improved. The ability to process
and interpret images greatly extends the range of applications and the
effectiveness of artificial intelligence. In this study, we first explored the
interpretation of
radiological images in healthcare using artificial intelligence (AI). Then, we
experimented with the image interpretation capability of the GPT-4. In this
way, we addressed the question of whether artificial intelligence (AI) can
replace a healthcare professional (e.g., a medical doctor) or whether it can be
used as a decision-support tool that makes decisions easier and more reliable.
Our results show that ChatGPT is not sufficiently accurate to analyze chest
X-ray images on its own, but it can provide interpretations that assist medical
doctors or clinicians.
|
2501.06271
|
Large Language Models for Bioinformatics
|
q-bio.QM cs.AI cs.CE
|
With the rapid advancements in large language model (LLM) technology and the
emergence of bioinformatics-specific language models (BioLMs), there is a
growing need for a comprehensive analysis of the current landscape,
computational characteristics, and diverse applications. This survey aims to
address this need by providing a thorough review of BioLMs, focusing on their
evolution, classification, and distinguishing features, alongside a detailed
examination of training methodologies, datasets, and evaluation frameworks. We
explore the wide-ranging applications of BioLMs in critical areas such as
disease diagnosis, drug discovery, and vaccine development, highlighting their
impact and transformative potential in bioinformatics. We identify key
challenges and limitations inherent in BioLMs, including data privacy and
security concerns, interpretability issues, biases in training data and model
outputs, and domain adaptation complexities. Finally, we highlight emerging
trends and future directions, offering valuable insights to guide researchers
and clinicians toward advancing BioLMs for increasingly sophisticated
biological and clinical applications.
|
2501.06273
|
Underwater Image Enhancement using Generative Adversarial Networks: A
Survey
|
eess.IV cs.CV
|
In recent years, there has been a surge of research focused on underwater
image enhancement using Generative Adversarial Networks (GANs), driven by the
need to overcome the challenges posed by underwater environments. Issues such
as light attenuation, scattering, and color distortion severely degrade the
quality of underwater images, limiting their use in critical applications.
Generative Adversarial Networks (GANs) have emerged as a powerful tool for
enhancing underwater photos due to their ability to learn complex
transformations and generate realistic outputs. These advancements have been
applied to real-world applications, including marine biology and ecosystem
monitoring, coral reef health assessment, underwater archaeology, and
autonomous underwater vehicle (AUV) navigation. This paper explores all major
approaches to underwater image enhancement, from physical and physics-free
models to Convolutional Neural Network (CNN)-based models and state-of-the-art
GAN-based methods. It provides a comprehensive analysis of these methods,
evaluation metrics, datasets, and loss functions, offering a holistic view of
the field. Furthermore, the paper delves into the limitations and challenges
faced by current methods, such as generalization issues, high computational
demands, and dataset biases, while suggesting potential directions for future
research.
|
2501.06274
|
Polarized Patterns of Language Toxicity and Sentiment of Debunking Posts
on Social Media
|
cs.CY cs.AI cs.CL
|
The rise of misinformation and fake news in online political discourse poses
significant challenges to democratic processes and public engagement. While
debunking efforts aim to counteract misinformation and foster fact-based
dialogue, these discussions often involve language toxicity and emotional
polarization. We examined over 86 million debunking tweets and more than 4
million Reddit debunking comments to investigate the relationship between
language toxicity, pessimism, and social polarization in debunking efforts.
Focusing on discussions of the 2016 and 2020 U.S. presidential elections and
the QAnon conspiracy theory, our analysis reveals three key findings: (1)
peripheral participants (1-degree users) play a disproportionate role in
shaping toxic discourse, driven by lower community accountability and emotional
expression; (2) platform mechanisms significantly influence polarization, with
Twitter amplifying partisan differences and Reddit fostering higher overall
toxicity due to its structured, community-driven interactions; and (3) a
negative correlation exists between language toxicity and pessimism, with
increased interaction reducing toxicity, especially on Reddit. We show that
platform architecture affects informational complexity of user interactions,
with Twitter promoting concentrated, uniform discourse and Reddit encouraging
diverse, complex communication. Our findings highlight the importance of user
engagement patterns, platform dynamics, and emotional expressions in shaping
polarization in debunking discourse. This study offers insights for
policymakers and platform designers to mitigate harmful effects and promote
healthier online discussions, with implications for understanding
misinformation, hate speech, and political polarization in digital
environments.
|
2501.06276
|
PROEMO: Prompt-Driven Text-to-Speech Synthesis Based on Emotion and
Intensity Control
|
cs.SD cs.CL eess.AS
|
Speech synthesis has significantly advanced from statistical methods to deep
neural network architectures, leading to various text-to-speech (TTS) models
that closely mimic human speech patterns. However, capturing nuances such as
emotion and style in speech synthesis is challenging. To address this
challenge, we introduce an approach centered on prompt-based emotion control.
The proposed architecture incorporates emotion and intensity control across
multi-speakers. Furthermore, we leverage large language models (LLMs) to
manipulate speech prosody while preserving linguistic content. By embedding
emotional cues, regulating intensity levels, and guiding prosodic variations
with prompts, our approach infuses synthesized speech with human-like
expressiveness and variability. Lastly, we demonstrate the effectiveness of our
approach through a systematic exploration of the control mechanisms mentioned
above.
|
2501.06277
|
Environmental large language model Evaluation (ELLE) dataset: A
Benchmark for Evaluating Generative AI applications in Eco-environment Domain
|
cs.CL cs.IR
|
Generative AI holds significant potential for ecological and environmental
applications such as monitoring, data analysis, education, and policy support.
However, its effectiveness is limited by the lack of a unified evaluation
framework. To address this, we present the Environmental Large Language model
Evaluation (ELLE) question answer (QA) dataset, the first benchmark designed to
assess large language models and their applications in ecological and
environmental sciences. The ELLE dataset includes 1,130 question-answer pairs
across 16 environmental topics, categorized by domain, difficulty, and type.
This comprehensive dataset standardizes performance assessments in these
fields, enabling consistent and objective comparisons of generative AI
performance. By providing a dedicated evaluation tool, the ELLE dataset promotes
the development and application of generative AI technologies for sustainable
environmental outcomes. The dataset and code are available at
https://elle.ceeai.net/ and https://github.com/CEEAI/elle.
|
2501.06278
|
Aligning Brain Activity with Advanced Transformer Models: Exploring the
Role of Punctuation in Semantic Processing
|
cs.CL cs.LG
|
This research examines the congruence between neural activity and advanced
transformer models, emphasizing the semantic significance of punctuation in
text understanding. Utilizing an innovative approach originally proposed by
Toneva and Wehbe, we evaluate four advanced transformer models (RoBERTa,
DistilBERT, ALBERT, and ELECTRA) against neural activity data. Our findings
indicate that RoBERTa exhibits the closest alignment with neural activity,
surpassing BERT in accuracy. Furthermore, we investigate the impact of
punctuation removal on model performance and neural alignment, revealing that
BERT's accuracy improves in the absence of punctuation. This study contributes
to the comprehension of how neural networks represent language and the
influence of punctuation on semantic processing within the human brain.
|
2501.06280
|
Visualizing Uncertainty in Image-Guided Surgery: A Review
|
cs.CV cs.GR
|
During tumor resection surgery, surgeons rely on neuronavigation to locate
tumors and other critical structures in the brain. Most neuronavigation is
based on preoperative images, such as MRI and ultrasound, to navigate through
the brain. Neuronavigation acts like GPS for the brain, guiding neurosurgeons
during the procedure. However, brain shift, a dynamic deformation caused by
factors such as osmotic concentration, fluid levels, and tissue resection, can
invalidate the preoperative images and introduce registration uncertainty.
Considering and effectively visualizing this uncertainty has the potential to
help surgeons trust the navigation again. Uncertainty has been studied in
various domains since the 19th century. Considering uncertainty requires two
essential components: 1) quantifying uncertainty; and 2) conveying the
quantified values to the observer. There has been growing interest in both of
these research areas during the past few decades.
|
2501.06282
|
MinMo: A Multimodal Large Language Model for Seamless Voice Interaction
|
cs.CL cs.AI cs.HC cs.SD eess.AS
|
Recent advancements in large language models (LLMs) and multimodal
speech-text models have laid the groundwork for seamless voice interactions,
enabling real-time, natural, and human-like conversations. Previous models for
voice interactions are categorized as native and aligned. Native models
integrate speech and text processing in one framework but struggle with issues
like differing sequence lengths and insufficient pre-training. Aligned models
maintain text LLM capabilities but are often limited by small datasets and a
narrow focus on speech tasks. In this work, we introduce MinMo, a Multimodal
Large Language Model with approximately 8B parameters for seamless voice
interaction. We address the main limitations of prior aligned multimodal
models. We train MinMo through multiple stages of speech-to-text alignment,
text-to-speech alignment, speech-to-speech alignment, and duplex interaction
alignment, on 1.4 million hours of diverse speech data and a broad range of
speech tasks. After the multi-stage training, MinMo achieves state-of-the-art
performance across various benchmarks for voice comprehension and generation
while maintaining the capabilities of text LLMs, and also facilitates
full-duplex conversation, that is, simultaneous two-way communication between
the user and the system. Moreover, we propose a novel and simple voice decoder
that outperforms prior models in voice generation. The enhanced
instruction-following capabilities of MinMo support controlling speech
generation based on user instructions, with various nuances including emotions,
dialects, and speaking rates, and mimicking specific voices. For MinMo, the
speech-to-text latency is approximately 100 ms, and the full-duplex latency is
approximately 600 ms in theory and 800 ms in practice. The MinMo project web page
is https://funaudiollm.github.io/minmo, and the code and models will be
released soon.
|
2501.06283
|
Dafny as Verification-Aware Intermediate Language for Code Generation
|
cs.SE cs.AI cs.CL cs.LO cs.PL
|
Using large language models (LLMs) to generate source code from natural
language prompts is a popular and promising idea with a wide range of
applications. One of its limitations is that the generated code can be faulty
at times, often in a subtle way, despite being presented to the user as
correct. In this paper, we explore ways in which formal methods can assist with
increasing the quality of code generated by an LLM. Instead of emitting code in
a target language directly, we propose that the user guides the LLM to first
generate an opaque intermediate representation, in the verification-aware
language Dafny, that can be automatically validated for correctness against
agreed-upon specifications. The correct Dafny program is then compiled to the
target language and returned to the user. All user-system interactions
throughout the procedure occur via natural language; Dafny code is never
exposed. We describe our current prototype and report on its performance on the
HumanEval Python code generation benchmarks.
|
2501.06286
|
Bactrainus: Optimizing Large Language Models for Multi-hop Complex
Question Answering Tasks
|
cs.CL cs.AI
|
In recent years, the use of large language models (LLMs) has significantly
increased, and these models have demonstrated remarkable performance in a
variety of general language tasks. However, the evaluation of their performance
in domain-specific tasks, particularly those requiring deep natural language
understanding, has received less attention. In this research, we evaluate the
ability of large language models in performing domain-specific tasks, focusing
on the multi-hop question answering (MHQA) problem using the HotpotQA dataset.
This task, due to its requirement for reasoning and combining information from
multiple textual sources, serves as a challenging benchmark for assessing the
language comprehension capabilities of these models. To tackle this problem, we
have designed a two-stage selector-reader architecture, where each stage
utilizes an independent LLM. In addition, methods such as Chain of Thought
(CoT) and question decomposition have been employed to investigate their impact
on improving the model's performance. The results of the study show that the
integration of large language models with these techniques can lead to up to a
4% improvement in F1 score for finding answers, providing evidence of the
models' ability to handle domain-specific tasks and their understanding of
complex language.
|
2501.06293
|
LensNet: Enhancing Real-time Microlensing Event Discovery with Recurrent
Neural Networks in the Korea Microlensing Telescope Network
|
astro-ph.IM astro-ph.EP astro-ph.GA cs.AI
|
Traditional microlensing event vetting methods require highly trained human
experts, and the process is both complex and time-consuming. This reliance on
manual inspection often leads to inefficiencies and constrains the ability to
scale for widespread exoplanet detection, ultimately hindering discovery rates.
To address the limits of traditional microlensing event vetting, we have
developed LensNet, a machine learning pipeline specifically designed to
distinguish legitimate microlensing events from false positives caused by
instrumental artifacts, such as pixel bleed trails and diffraction spikes. Our
system operates in conjunction with a preliminary algorithm that detects
increasing trends in flux. These flagged instances are then passed to LensNet
for further classification, allowing for timely alerts and follow-up
observations. Tailored for the multi-observatory setup of the Korea
Microlensing Telescope Network (KMTNet) and trained on a rich dataset of
manually classified events, LensNet is optimized for early detection and
warning of microlensing occurrences, enabling astronomers to organize follow-up
observations promptly. The internal model of the pipeline employs a
multi-branch Recurrent Neural Network (RNN) architecture that evaluates
time-series flux data with contextual information, including sky background,
the full width at half maximum of the target star, flux errors, PSF quality
flags, and air mass for each observation. We demonstrate a classification
accuracy above 87.5%, and anticipate further improvements as we expand our
training set and continue to refine the algorithm.
|
2501.06300
|
Tensorization of neural networks for improved privacy and
interpretability
|
math.NA cs.LG cs.NA physics.comp-ph quant-ph
|
We present a tensorization algorithm for constructing tensor train
representations of functions, drawing on sketching and cross interpolation
ideas. The method only requires black-box access to the target function and a
small set of sample points defining the domain of interest. Thus, it is
particularly well-suited for machine learning models, where the domain of
interest is naturally defined by the training dataset. We show that this
approach can be used to enhance the privacy and interpretability of neural
network models. Specifically, we apply our decomposition to (i) obfuscate
neural networks whose parameters encode patterns tied to the training data
distribution, and (ii) estimate topological phases of matter that are easily
accessible from the tensor train representation. Additionally, we show that
this tensorization can serve as an efficient initialization method for
optimizing tensor trains in general settings, and that, for model compression,
our algorithm achieves a superior trade-off between memory and time complexity
compared to conventional tensorization methods of neural networks.
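The paper's sketching- and cross-interpolation-based tensorization works from black-box function access and is not reproduced here. As background on what a tensor train representation is, the classical TT-SVD below decomposes a fully materialized array into a chain of 3-way cores (function names are illustrative):

```python
import numpy as np

def tt_svd(tensor, max_rank):
    """Decompose a d-way array into a tensor train via sequential
    truncated SVDs (classical TT-SVD, requires the full tensor)."""
    dims = tensor.shape
    d = len(dims)
    cores, rank = [], 1
    mat = tensor.reshape(dims[0], -1)
    for k in range(d - 1):
        U, S, Vt = np.linalg.svd(mat, full_matrices=False)
        r = min(max_rank, len(S))
        cores.append(U[:, :r].reshape(rank, dims[k], r))
        rank = r
        # carry the remainder S @ Vt forward, folding in the next mode
        mat = (S[:r, None] * Vt[:r]).reshape(rank * dims[k + 1], -1)
    cores.append(mat.reshape(rank, dims[-1], 1))
    return cores

def tt_to_full(cores):
    """Contract the train back into a full array (for checking)."""
    out = cores[0]
    for core in cores[1:]:
        out = np.tensordot(out, core, axes=([-1], [0]))
    return out.reshape([c.shape[1] for c in cores])
```

With `max_rank` large enough the reconstruction is exact; truncating `max_rank` trades accuracy for the memory/time savings the abstract refers to.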
|
2501.06308
|
Uncertainty Estimation for Path Loss and Radio Metric Models
|
cs.LG stat.ML
|
This research leverages Conformal Prediction (CP) in the form of Conformal
Predictive Systems (CPS) to accurately estimate uncertainty in a suite of
machine learning (ML)-based radio metric models [1] as well as in a 2-D
map-based ML path loss model [2]. Utilizing diverse difficulty estimators, we
construct 95% confidence prediction intervals (PIs) that are statistically
robust. Our experiments demonstrate that CPS models, trained on Toronto
datasets, generalize effectively to other cities such as Vancouver and
Montreal, maintaining high coverage and reliability. Furthermore, the employed
difficulty estimators identify challenging samples, leading to measurable
reductions in RMSE as dataset difficulty decreases. These findings highlight
the effectiveness of scalable and reliable uncertainty estimation through CPS
in wireless network modeling, offering important potential insights for network
planning, operations, and spectrum management.
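Conformal Predictive Systems output full predictive distributions and the paper normalizes by difficulty estimators; neither is reproduced here. As a simplified, hypothetical sketch of the underlying guarantee, plain split-conformal prediction turns calibration residuals into a 95% prediction interval:

```python
import numpy as np

def split_conformal_interval(cal_residuals, test_pred, alpha=0.05):
    """Split-conformal PI: a finite-sample-corrected quantile of the
    absolute calibration residuals widens each test prediction into a
    (1 - alpha) interval. Illustrative sketch, not the paper's CPS."""
    n = len(cal_residuals)
    level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
    q = np.quantile(np.abs(cal_residuals), level)
    return test_pred - q, test_pred + q
```

Under exchangeability of calibration and test residuals, the interval covers the truth with probability at least 1 - alpha, regardless of the underlying ML model.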
|
2501.06312
|
Towards Iris Presentation Attack Detection with Foundation Models
|
cs.CV
|
Foundation models are becoming increasingly popular due to their strong
generalization capabilities resulting from being trained on huge datasets.
These generalization capabilities are attractive in areas such as NIR Iris
Presentation Attack Detection (PAD), in which databases are limited in the
number of subjects and diversity of attack instruments, and there is no
correspondence between the bona fide and attack images because, most of the
time, they do not belong to the same subjects. This work explores an iris PAD
approach based on two foundation models, DinoV2 and VisualOpenClip. The results
show that fine-tuning these models with a small neural network as a prediction
head surpasses the state-of-the-art performance of deep-learning-based
approaches. However, systems trained from scratch still achieve better results
when both bona fide and attack images are available.
|
2501.06314
|
BioAgents: Democratizing Bioinformatics Analysis with Multi-Agent
Systems
|
cs.AI cs.MA
|
Creating end-to-end bioinformatics workflows requires diverse domain
expertise, which poses challenges for both junior and senior researchers as it
demands a deep understanding of both genomics concepts and computational
techniques. While large language models (LLMs) provide some assistance, they
often fall short in providing the nuanced guidance needed to execute complex
bioinformatics tasks, and require expensive computing resources to achieve high
performance. We thus propose a multi-agent system built on small language
models, fine-tuned on bioinformatics data, and enhanced with retrieval-augmented
generation (RAG). Our system, BioAgents, enables local operation and
personalization using proprietary data. We observe performance comparable to
human experts on conceptual genomics tasks, and suggest next steps to enhance
code generation capabilities.
|
2501.06316
|
Trends in urban flows: A transfer entropy approach
|
cs.IT math.IT stat.ME
|
The accurate estimation of human activity in cities is one of the first steps
towards understanding the structure of the urban environment. Human activities
are highly granular and dynamic in spatial and temporal dimensions. Estimating
confidence is crucial for decision-making in numerous applications such as
urban management, retail, transport planning and emergency management.
Detecting general trends in the flow of people between spatial locations is
neither obvious nor easy due to the high cost of capturing these movements
without compromising the privacy of those involved. This research intends to
address this problem by examining the movement of people in a
SmartStreetSensors network at a fine spatial and temporal resolution using a
Transfer Entropy approach.
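As a hypothetical sketch of the core quantity (not the authors' estimator), transfer entropy from X to Y can be estimated by discretizing both series and applying the plug-in formula TE = sum p(y_{t+1}, y_t, x_t) log2[ p(y_{t+1} | y_t, x_t) / p(y_{t+1} | y_t) ]:

```python
import numpy as np
from collections import Counter

def transfer_entropy(x, y, bins=4):
    """Plug-in estimate of lag-1 transfer entropy TE_{X->Y} in bits,
    after quantile-binning both series. Illustrative sketch only."""
    edges = lambda s: np.quantile(s, np.linspace(0, 1, bins + 1)[1:-1])
    xd, yd = np.digitize(x, edges(x)), np.digitize(y, edges(y))
    n = len(yd) - 1
    triples = Counter(zip(yd[1:], yd[:-1], xd[:-1]))   # (y_{t+1}, y_t, x_t)
    pairs_yx = Counter(zip(yd[:-1], xd[:-1]))
    pairs_yy = Counter(zip(yd[1:], yd[:-1]))
    singles = Counter(yd[:-1])
    te = 0.0
    for (y1, y0, x0), c in triples.items():
        p_full = c / pairs_yx[(y0, x0)]                # p(y1 | y0, x0)
        p_marg = pairs_yy[(y1, y0)] / singles[y0]      # p(y1 | y0)
        te += (c / n) * np.log2(p_full / p_marg)
    return te
```

A series that drives another yields clearly positive TE in the causal direction and near-zero TE in the reverse direction, which is what makes the measure useful for detecting directed flows between sensor locations.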
|
2501.06317
|
Understanding How Paper Writers Use AI-Generated Captions in Figure
Caption Writing
|
cs.HC cs.AI cs.CL
|
Figures and their captions play a key role in scientific publications.
However, despite their importance, many captions in published papers are poorly
crafted, largely due to a lack of attention by paper authors. While prior AI
research has explored caption generation, it has mainly focused on
reader-centered use cases, where users evaluate generated captions rather than
actively integrating them into their writing. This paper addresses this gap by
investigating how paper authors incorporate AI-generated captions into their
writing process through a user study involving 18 participants. Each
participant rewrote captions for two figures from their own recently published
work, using captions generated by state-of-the-art AI models as a resource. By
analyzing video recordings of the writing process through interaction analysis,
we observed that participants often began by copying and refining AI-generated
captions. Paper writers favored longer, detail-rich captions that integrated
textual and visual elements but found current AI models less effective for
complex figures. These findings highlight the nuanced and diverse nature of
figure caption composition, revealing design opportunities for AI systems to
better support the challenges of academic writing.
|
2501.06320
|
TTS-Transducer: End-to-End Speech Synthesis with Neural Transducer
|
eess.AS cs.AI cs.CL cs.SD
|
This work introduces TTS-Transducer - a novel architecture for
text-to-speech, leveraging the strengths of audio codec models and neural
transducers. Transducers, renowned for their superior quality and robustness in
speech recognition, are employed to learn monotonic alignments, avoiding the
need for explicit duration predictors. Neural audio codecs efficiently
compress audio into discrete codes, revealing the possibility of applying text
modeling approaches to speech generation. However, the complexity of predicting
multiple tokens per frame from several codebooks, as necessitated by audio
codec models with residual quantizers, poses a significant challenge. The
proposed system first uses a transducer architecture to learn monotonic
alignments between tokenized text and speech codec tokens for the first
codebook. Next, a non-autoregressive Transformer predicts the remaining codes
using the alignment extracted from transducer loss. The proposed system is
trained end-to-end. We show that TTS-Transducer is a competitive and robust
alternative to contemporary TTS systems.
|
2501.06322
|
Multi-Agent Collaboration Mechanisms: A Survey of LLMs
|
cs.AI
|
With recent advances in Large Language Models (LLMs), Agentic AI has become
phenomenal in real-world applications, moving toward multiple LLM-based agents
to perceive, learn, reason, and act collaboratively. These LLM-based
Multi-Agent Systems (MASs) enable groups of intelligent agents to coordinate
and solve complex tasks collectively at scale, transitioning from isolated
models to collaboration-centric approaches. This work provides an extensive
survey of the collaborative aspect of MASs and introduces an extensible
framework to guide future research. Our framework characterizes collaboration
mechanisms based on key dimensions: actors (agents involved), types (e.g.,
cooperation, competition, or coopetition), structures (e.g., peer-to-peer,
centralized, or distributed), strategies (e.g., role-based or model-based), and
coordination protocols. Through a review of existing methodologies, our
findings serve as a foundation for demystifying and advancing LLM-based MASs
toward more intelligent and collaborative solutions for complex, real-world use
cases. In addition, various applications of MASs across diverse domains,
including 5G/6G networks, Industry 5.0, question answering, and social and
cultural settings, are also investigated, demonstrating their wider adoption
and broader impacts. Finally, we identify key lessons learned, open challenges,
and potential research directions of MASs towards artificial collective
intelligence.
|
2501.06326
|
On Creating A Brain-To-Text Decoder
|
cs.LG eess.IV eess.SP
|
Brain decoding has emerged as a rapidly advancing and extensively utilized
technique within neuroscience. This paper centers on the application of raw
electroencephalogram (EEG) signals for decoding human brain activity, offering
a more expedited and efficient methodology for enhancing our understanding of
the human brain. The investigation specifically scrutinizes the efficacy of
brain-computer interfaces (BCI) in deciphering neural signals associated with
speech production, with particular emphasis on the impact of vocabulary size,
electrode density, and training data on the framework's performance. The study
reveals the competitive word error rates (WERs) achievable on the LibriSpeech
benchmark through pre-training on unlabelled data for speech processing.
Furthermore, the study evaluates the efficacy of voice recognition under
configurations with limited labeled data, surpassing previous state-of-the-art
techniques while utilizing significantly fewer labels. Additionally, the
research provides a comprehensive analysis of error patterns in voice
recognition and the influence of model size and unlabelled training data. It
underscores the significance of factors such as vocabulary size and electrode
density in enhancing BCI performance, advocating for an increase in
microelectrodes and refinement of language models.
|
2501.06332
|
Aggregating Low Rank Adapters in Federated Fine-tuning
|
cs.LG cs.AI
|
Fine-tuning large language models requires high computational and memory
resources, and is therefore associated with significant costs. When training on
federated datasets, an increased communication effort is also needed. For this
reason, parameter-efficient methods (PEFT) are becoming increasingly important.
In this context, very good results have already been achieved by fine-tuning
with low-rank adaptation methods (LoRA). The application of LoRA methods in
Federated Learning, and especially the aggregation of adaptation matrices, is a
current research field. In this article, we propose a novel aggregation method
and compare it with different existing aggregation methods of low rank adapters
trained in a federated fine-tuning of large machine learning models and
evaluate their performance with respect to selected GLUE benchmark datasets.
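The article's novel aggregation method is not reproduced here. As background on why aggregating low-rank adapters is nontrivial, the sketch below (illustrative names and shapes) contrasts naive factor averaging with averaging the full updates and refactoring: in general, avg(B_i @ A_i) differs from avg(B_i) @ avg(A_i):

```python
import numpy as np

def aggregate_naive(As, Bs, weights):
    """Average A and B factors separately. Simple, but biased: the
    product of averages is not the average of products."""
    A = sum(w * A_i for w, A_i in zip(weights, As))
    B = sum(w * B_i for w, B_i in zip(weights, Bs))
    return A, B

def aggregate_refactor(As, Bs, weights, rank):
    """Average the full updates B_i @ A_i, then refactor to the target
    rank via truncated SVD (exact if the averaged update has rank <= rank)."""
    delta = sum(w * (B_i @ A_i) for w, A_i, B_i in zip(weights, As, Bs))
    U, S, Vt = np.linalg.svd(delta, full_matrices=False)
    return S[:rank, None] * Vt[:rank], U[:, :rank]  # new A, new B
```

The refactoring route is faithful to the averaged update at the cost of an SVD per round and a possibly larger rank, which is one axis along which federated LoRA aggregation schemes differ.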
|
2501.06335
|
A Comparison of Strategies to Embed Physics-Informed Neural Networks in
Nonlinear Model Predictive Control Formulations Solved via Direct
Transcription
|
eess.SY cs.SY math.OC
|
This study aims to benchmark candidate strategies for embedding neural
network (NN) surrogates in nonlinear model predictive control (NMPC)
formulations that are subject to systems described with partial differential
equations and that are solved via direct transcription (i.e., simultaneous
methods). This study focuses on the use of physics-informed NNs and
physics-informed convolutional NNs as the internal (surrogate) models within
the NMPC formulation. One strategy embeds NN models as explicit algebraic
constraints, leveraging the automatic differentiation (AD) of an algebraic
modelling language (AML) to evaluate the derivatives. Alternatively, the solver
can be provided with derivatives computed external to the AML via the AD
routines of the machine learning environment the NN is trained in. The three
numerical experiments considered in this work reveal that replacing mechanistic
models with NN surrogates may not always offer computational advantages when
smooth activation functions are used in conjunction with a local nonlinear
solver (e.g., Ipopt), even with highly nonlinear systems. Moreover, in this
context, the external function evaluation of the NN surrogates often
outperforms the embedding strategies that rely on explicit algebraic
constraints, likely due to the difficulty in initializing the auxiliary
variables and constraints introduced by explicit algebraic reformulations.
|
2501.06336
|
MEt3R: Measuring Multi-View Consistency in Generated Images
|
cs.CV cs.LG eess.IV
|
We introduce MEt3R, a metric for multi-view consistency in generated images.
Large-scale generative models for multi-view image generation are rapidly
advancing the field of 3D inference from sparse observations. However, due to
the nature of generative modeling, traditional reconstruction metrics are not
suitable to measure the quality of generated outputs and metrics that are
independent of the sampling procedure are desperately needed. In this work, we
specifically address the aspect of consistency between generated multi-view
images, which can be evaluated independently of the specific scene. Our
approach uses DUSt3R to obtain dense 3D reconstructions from image pairs in a
feed-forward manner, which are used to warp image contents from one view into
the other. Then, feature maps of these images are compared to obtain a
similarity score that is invariant to view-dependent effects. Using MEt3R, we
evaluate the consistency of a large set of previous methods for novel view and
video generation, including our open, multi-view latent diffusion model.
|
2501.06339
|
On The Statistical Complexity of Offline Decision-Making
|
cs.LG cs.AI stat.ML
|
We study the statistical complexity of offline decision-making with function
approximation, establishing (near) minimax-optimal rates for stochastic
contextual bandits and Markov decision processes. The performance limits are
captured by the pseudo-dimension of the (value) function class and a new
characterization of the behavior policy that strictly subsumes all the
previous notions of data coverage in the offline decision-making literature. In
addition, we seek to understand the benefits of using offline data in online
decision-making and show nearly minimax-optimal rates in a wide range of
regimes.
|
2501.06346
|
Large Language Models Share Representations of Latent Grammatical
Concepts Across Typologically Diverse Languages
|
cs.CL
|
Human bilinguals often use similar brain regions to process multiple
languages, depending on when they learned their second language and their
proficiency. In large language models (LLMs), how are multiple languages
learned and encoded? In this work, we explore the extent to which LLMs share
representations of morphosyntactic concepts such as grammatical number, gender,
and tense across languages. We train sparse autoencoders on Llama-3-8B and
Aya-23-8B, and demonstrate that abstract grammatical concepts are often encoded
in feature directions shared across many languages. We use causal interventions
to verify the multilingual nature of these representations; specifically, we
show that ablating only multilingual features decreases classifier performance
to near-chance across languages. We then use these features to precisely modify
model behavior in a machine translation task; this demonstrates both the
generality and selectivity of these features' roles in the network. Our
findings suggest that even models trained predominantly on English data can
develop robust, cross-lingual abstractions of morphosyntactic concepts.
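The sparse autoencoder training itself is not reproduced here; as a hypothetical sketch of the causal intervention described above, ablating a feature amounts to projecting its direction out of the activations:

```python
import numpy as np

def ablate_direction(acts, direction):
    """Remove the component of each activation vector along a feature
    direction (mean-ablation and SAE specifics omitted; sketch only)."""
    v = direction / np.linalg.norm(direction)
    # subtract the projection of every row onto the unit direction
    return acts - np.outer(acts @ v, v)
```

After ablation the activations carry no component along the feature direction, so any behavior change downstream can be attributed to that direction, which is the logic behind the near-chance classifier results reported above.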
|
2501.06348
|
Why Automate This? Exploring the Connection between Time Use, Well-being
and Robot Automation Across Social Groups
|
cs.HC cs.RO
|
Understanding the motivations underlying the human inclination to automate
tasks is vital to developing truly helpful robots integrated into daily life.
Accordingly, we ask: are individuals more inclined to automate chores based on
the time they consume or the feelings experienced while performing them? This
study explores these preferences and whether they vary across different social
groups (i.e., gender category and income level). Leveraging data from the
BEHAVIOR-1K dataset, the American Time-Use Survey, and the American Time-Use
Survey Well-Being Module, we investigate the relationship between the desire
for automation, time spent on daily activities, and their associated feelings -
Happiness, Meaningfulness, Sadness, Painfulness, Stressfulness, or Tiredness.
Our key findings show that, despite common assumptions, time spent does not
strongly relate to the desire for automation for the general population. For
the feelings analyzed, only happiness and pain are key indicators. Significant
differences by gender and economic level also emerged: Women prefer to automate
stressful activities, whereas men prefer to automate those that make them
unhappy; mid-income individuals prioritize automating less enjoyable and
meaningful activities, while low- and high-income groups show no significant
correlations. We hope our research helps motivate technologies to develop
robots that match the priorities of potential users, moving domestic robotics
toward more socially relevant solutions. We open-source all the data, including
an online tool that enables the community to replicate our analysis and explore
additional trends at https://hri1260.github.io/why-automate-this.
|
2501.06353
|
Event Constrained Programming
|
math.OC cs.SY eess.SY
|
In this paper, we present event constraints as a new modeling paradigm that
generalizes joint chance constraints from stochastic optimization to (1)
enforce a constraint on the probability of satisfying a set of constraints
aggregated via application-specific logic (constituting an event) and (2) to be
applied to general infinite-dimensional optimization (InfiniteOpt) problems
(i.e., time, space, and/or uncertainty domains). This new constraint class
offers significant modeling flexibility in posing InfiniteOpt constraints that
are enforced over a certain portion of their domain (e.g., to a certain
probability level), but can be challenging to reformulate/solve due to
difficulties in representing arbitrary logical conditions and specifying a
probabilistic measure on a collection of constraints. To address these
challenges, we derive a generalized disjunctive programming (GDP)
representation of event constrained optimization problems, which readily
enables us to pose logical event conditions in a standard form and allows us to
draw from a suite of GDP solution strategies that leverage the special
structure of this problem class. We also extend several approximation
techniques from the chance constraint literature to provide a means to
reformulate certain event constraints without the use of binary variables. We
illustrate these findings with case studies in stochastic optimal power flow,
dynamic disease control, and optimal 2D diffusion.
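In the probabilistic (uncertainty-domain) case, an event constraint of the kind described can be written schematically as below, where $\mathcal{E}$ denotes the application-specific logical aggregation of the individual constraints; taking $\mathcal{E}$ to be pure conjunction recovers a standard joint chance constraint (the notation is illustrative, not the paper's):

```latex
\mathbb{P}_{\xi}\!\left[\, \mathcal{E}\big(\{\, g_i(x,\xi) \le 0 \,\}_{i \in I}\big) \,\right] \;\ge\; \alpha
```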
|
2501.06355
|
Low-Complexity Detection of Multiple Preambles in the Presence of
Mobility and Delay Spread
|
eess.SP cs.IT math.IT
|
Current wireless infrastructure is optimized to support downlink
applications. This paper anticipates the emergence of applications where
engineering focus shifts from downlink to uplink. The current paradigm of
scheduling users on reserved uplink resources is not able to deal efficiently
with unpredictable traffic patterns. As a result, 3GPP introduced the 2-step
RACH as a mechanism to enable grant-free (random) initial access. The first of
the two steps is preamble detection in a RACH slot, and in this paper we
describe a low-complexity algorithm for simultaneous detection of multiple
preambles in the presence of mobility and delay spread. We provide a pathway to
standards adoption by choosing ZC sequences as preambles, as ZC sequences
already appear in 5G standards. We construct preambles by using the discrete
Zak transform to pass from a ZC sequence of length MN in the time domain (TD)
to a quasi-periodic MxN array in the delay-Doppler (DD) domain. There are MN
quasi-periodic Dirac
pulses, each corresponding to a Zak-OTFS carrier waveform, and the ZC preamble
is simply the corresponding sum of Zak-OTFS carrier waveforms. We detect
multiple preambles in the presence of mobility and delay spread by sampling the
received signal on the MxN period grid in the DD domain. We approach detection
as a compressed sensing problem. We represent a preamble as a column of length
MN in the DD domain and apply discrete shifts in delay and Doppler to produce a
block with O(MN) columns in the compressed sensing matrix. The superposition of
multiple preambles determines a block sparse sum of columns in the sensing
matrix. The correlation properties of ZC sequences result in a highly
structured compressed sensing matrix, making it possible to identify
constituent preambles using OST, which has complexity O(M^3N^3). In this paper,
we describe an algorithm with complexity that is O(M^2N^2) in the size of an
individual column.
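The ideal correlation property of ZC sequences that underpins the highly structured sensing matrix can be checked directly. The sketch below builds a root-u Zadoff-Chu sequence of prime length and verifies that its cyclic autocorrelation vanishes at every nonzero shift; the root and length are arbitrary choices for illustration.

```python
import numpy as np

def zadoff_chu(u, N):
    """Root-u Zadoff-Chu sequence of odd length N (gcd(u, N) = 1)."""
    n = np.arange(N)
    return np.exp(-1j * np.pi * u * n * (n + 1) / N)

def cyclic_autocorr(z, shift):
    # Normalized cyclic autocorrelation at the given shift.
    return np.vdot(z, np.roll(z, shift)) / len(z)

N = 139                                   # prime length, chosen arbitrarily
z = zadoff_chu(u=7, N=N)
peaks = [abs(cyclic_autocorr(z, s)) for s in range(N)]
# Ideal autocorrelation: unit peak at zero shift, zero everywhere else.
print(abs(peaks[0] - 1.0) < 1e-9, max(peaks[1:]) < 1e-9)  # → True True
```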
|
2501.06356
|
Ultrasound Image Synthesis Using Generative AI for Lung Ultrasound
Detection
|
eess.IV cs.AI cs.CV
|
Developing reliable healthcare AI models requires training with
representative and diverse data. In imbalanced datasets, model performance
tends to plateau on the more prevalent classes while remaining low on less
common cases. To overcome this limitation, we propose DiffUltra, the first
generative AI technique capable of synthesizing realistic Lung Ultrasound (LUS)
images with extensive lesion variability. Specifically, we condition the
generative AI by the introduced Lesion-anatomy Bank, which captures the
lesion's structural and positional properties from real patient data to guide
the image synthesis. We demonstrate that DiffUltra improves consolidation
detection by 5.6% in AP compared to the models trained solely on real patient
data. More importantly, DiffUltra increases data diversity and prevalence of
rare cases, leading to a 25% AP improvement in detecting rare instances such as
large lung consolidations, which make up only 10% of the dataset.
|
2501.06357
|
Mix-QViT: Mixed-Precision Vision Transformer Quantization Driven by
Layer Importance and Quantization Sensitivity
|
cs.CV
|
In this paper, we propose Mix-QViT, an explainability-driven mixed-precision
quantization (MPQ) framework
that systematically allocates bit-widths to each layer based on two criteria:
layer importance, assessed via Layer-wise Relevance Propagation (LRP), which
identifies how much each layer contributes to the final classification, and
quantization sensitivity, determined by evaluating the performance impact of
quantizing each layer at various precision levels while keeping other layers
at a baseline. Additionally, for post-training quantization (PTQ), we introduce
a clipped channel-wise quantization method designed to reduce the effects of
extreme outliers in post-LayerNorm activations by removing severe inter-channel
variations. We validate our approach by applying Mix-QViT to ViT, DeiT, and
Swin Transformer models across multiple datasets. Our experimental results for
PTQ demonstrate that both fixed-bit and mixed-bit methods outperform existing
techniques, particularly at 3-bit, 4-bit, and 6-bit precision. Furthermore, in
quantization-aware training, Mix-QViT achieves superior performance with 2-bit
mixed-precision.
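The clipped channel-wise quantization idea can be sketched as follows: clip each channel at a per-channel percentile to suppress extreme outliers, then quantize uniformly within the clipped range. The percentile, bit-width, and tensor shapes are illustrative; this is not the paper's exact procedure.

```python
import numpy as np

def clipped_channelwise_quant(x, bits=4, pct=99.9):
    """Per-channel (last axis) uniform quantization after percentile clipping."""
    lo = np.percentile(x, 100 - pct, axis=0, keepdims=True)
    hi = np.percentile(x, pct, axis=0, keepdims=True)
    xc = np.clip(x, lo, hi)                  # suppress extreme outliers
    scale = (hi - lo) / (2**bits - 1)
    q = np.round((xc - lo) / scale)          # integer levels 0 .. 2^bits - 1
    return q * scale + lo                    # dequantized values

rng = np.random.default_rng(1)
acts = rng.normal(size=(512, 8))             # toy post-LayerNorm activations
acts[:, 3] *= 50                             # one outlier-heavy channel
deq = clipped_channelwise_quant(acts, bits=4)
levels = [len(np.unique(deq[:, c])) for c in range(8)]
print(max(levels) <= 2**4)                   # → True (at most 16 levels per channel)
```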
|
2501.06362
|
Repeat-bias-aware Optimization of Beyond-accuracy Metrics for Next
Basket Recommendation
|
cs.IR
|
In next basket recommendation (NBR) a set of items is recommended to users
based on their historical basket sequences. In many domains, the recommended
baskets consist of both repeat items and explore items. Some state-of-the-art
NBR methods are heavily biased to recommend repeat items so as to maximize
utility. The evaluation and optimization of beyond-accuracy objectives for NBR,
such as item fairness and diversity, has attracted increasing attention. How
can such beyond-accuracy objectives be pursued in the presence of heavy repeat
bias? We find that only optimizing diversity or item fairness without
considering repeat bias may cause NBR algorithms to recommend more repeat
items. To solve this problem, we propose a model-agnostic repeat-bias-aware
optimization algorithm to post-process the recommended results obtained from
NBR methods with the objective of mitigating repeat bias when optimizing
diversity or item fairness. We consider multiple variations of our optimization
algorithm to cater to multiple NBR methods. Experiments on three real-world
grocery shopping datasets show that the proposed algorithms can effectively
improve diversity and item fairness, and mitigate repeat bias at acceptable
Recall loss.
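A minimal sketch of repeat-bias-aware post-processing (not the paper's algorithm): re-rank candidate items for the next basket while discounting previously purchased (repeat) items by a penalty, so explore items can win. Item names, scores, and the penalty weight are invented for illustration.

```python
def rerank_basket(scored_items, repeat_items, k, lam=0.35):
    """Pick a size-k basket from (item, score) candidates, discounting
    repeat items by `lam` to mitigate repeat bias."""
    adjusted = {i: s - lam * (i in repeat_items) for i, s in scored_items}
    return sorted(adjusted, key=adjusted.get, reverse=True)[:k]

cands = [("milk", 0.9), ("eggs", 0.85), ("tofu", 0.7), ("kale", 0.6)]
history = {"milk", "eggs"}                  # the user's repeat items
print(rerank_basket(cands, history, k=2))   # → ['tofu', 'kale']
```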
|
2501.06363
|
On the Rate-Distortion-Perception Function for Gaussian Processes
|
cs.IT math.IT
|
In this paper, we investigate the rate-distortion-perception function (RDPF)
of a source modeled by a Gaussian Process (GP) on a measure space $\Omega$
under mean squared error (MSE) distortion and squared Wasserstein-2 perception
metrics. First, we show that the optimal reconstruction process is itself a GP,
characterized by a covariance operator that shares the same eigenvectors as
the source covariance operator. As with the classical rate-distortion
function, this allows us to formulate the RDPF problem in terms of the
Karhunen-Lo\`eve transform coefficients of the involved GPs. Leveraging the
similarities with the finite-dimensional Gaussian RDPF, we formulate an
analytical tight upper bound for the RDPF for GPs, which recovers the optimal
solution in the "perfect realism" regime. Lastly, in the case where the source
is a stationary GP and $\Omega$ is the interval $[0, T]$ equipped with the
Lebesgue measure, we derive an upper bound on the rate and the distortion for a
fixed perceptual level and $T \to \infty$ as a function of the spectral density
of the source process.
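For reference, the finite-dimensional RDPF being generalized here takes the standard form below: the least mutual information achievable subject to an MSE distortion budget $D$ and a squared Wasserstein-2 perception budget $P$ (notation schematic):

```latex
R(D, P) \;=\; \inf_{P_{\hat{X}\mid X}} \; I(X; \hat{X})
\quad \text{s.t.} \quad
\mathbb{E}\big[\|X - \hat{X}\|^2\big] \le D,
\qquad
W_2^2\big(P_X, P_{\hat{X}}\big) \le P .
```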
|
2501.06365
|
Gender-Neutral Large Language Models for Medical Applications: Reducing
Bias in PubMed Abstracts
|
cs.CL cs.AI cs.IR
|
This paper presents a pipeline for mitigating gender bias in large language
models (LLMs) used in medical literature by neutralizing gendered occupational
pronouns. A dataset of 379,000 PubMed abstracts from 1965-1980 was processed to
identify and modify pronouns tied to professions. We developed a BERT-based
model, ``Modern Occupational Bias Elimination with Refined Training,'' or
``MOBERT,'' trained on these neutralized abstracts, and compared its
performance with ``1965Bert,'' trained on the original dataset. MOBERT achieved
a 70\% inclusive replacement rate, while 1965Bert reached only 4\%. A further
analysis of MOBERT revealed that pronoun replacement accuracy correlated with
the frequency of occupational terms in the training data. We propose expanding
the dataset and refining the pipeline to improve performance and ensure more
equitable language modeling in medical applications.
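The neutralization step of such a pipeline can be sketched with a toy rule-based pass that rewrites gendered pronouns in sentences containing an occupational term. The word lists, pronoun mapping, and regex here are illustrative stand-ins, not the pipeline described in the paper.

```python
import re

# Toy neutralization: rewrite gendered pronouns only when an occupational
# term appears in the sentence. All lists below are illustrative.
PRONOUN_MAP = {"he": "they", "she": "they", "his": "their",
               "her": "their", "him": "them"}
OCCUPATIONS = {"nurse", "surgeon", "physician"}

def neutralize(sentence):
    words = sentence.split()
    if not any(w.lower().strip(".,") in OCCUPATIONS for w in words):
        return sentence                      # no occupational term: leave as-is
    return re.sub(r"\b(he|she|his|her|him)\b",
                  lambda m: PRONOUN_MAP[m.group(1).lower()],
                  sentence, flags=re.IGNORECASE)

print(neutralize("The surgeon said he would review his notes."))
# → The surgeon said they would review their notes.
```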
|
2501.06366
|
Counterfactually Fair Reinforcement Learning via Sequential Data
Preprocessing
|
stat.ML cs.CY cs.LG stat.ME
|
When applied in healthcare, reinforcement learning (RL) seeks to dynamically
match the right interventions to subjects to maximize population benefit.
However, the learned policy may disproportionately allocate efficacious actions
to one subpopulation, creating or exacerbating disparities in other
socioeconomically-disadvantaged subgroups. These biases tend to occur in
multi-stage decision making and can be self-perpetuating, which if unaccounted
for could cause serious unintended consequences that limit access to care or
treatment benefit. Counterfactual fairness (CF) offers a promising statistical
tool grounded in causal inference to formulate and study fairness. In this
paper, we propose a general framework for fair sequential decision making. We
theoretically characterize the optimal CF policy and prove its stationarity,
which greatly simplifies the search for optimal CF policies by leveraging
existing RL algorithms. The theory also motivates a sequential data
preprocessing algorithm to achieve CF decision making under an additive noise
assumption. We prove and then validate our policy learning approach in
controlling unfairness and attaining optimal value through simulations.
Analysis of a digital health dataset designed to reduce opioid misuse shows
that our proposal greatly enhances fair access to counseling.
|
2501.06368
|
Towards Robust Nonlinear Subspace Clustering: A Kernel Learning Approach
|
cs.LG cs.CV stat.ML
|
Kernel-based subspace clustering, which addresses the nonlinear structures in
data, is an evolving area of research. Despite noteworthy progressions,
prevailing methodologies predominantly grapple with limitations relating to (i)
the influence of predefined kernels on model performance; (ii) the difficulty
of preserving the original manifold structures in the nonlinear space; (iii)
the dependency of spectral-type strategies on the ideal block diagonal
structure of the affinity matrix. This paper presents DKLM, a novel paradigm
for kernel-induced nonlinear subspace clustering. DKLM provides a data-driven
approach that directly learns the kernel from the data's self-representation,
ensuring adaptive weighting and satisfying the multiplicative triangle
inequality constraint, which enhances the robustness of the learned kernel. By
leveraging this learned kernel, DKLM preserves the local manifold structure of
data in a nonlinear space while promoting the formation of an optimal
block-diagonal affinity matrix. A thorough theoretical examination of DKLM
reveals its relationship with existing clustering paradigms. Comprehensive
experiments on synthetic and real-world datasets demonstrate the effectiveness
of the proposed method.
|
2501.06370
|
Towards a Probabilistic Framework for Analyzing and Improving
LLM-Enabled Software
|
cs.SE cs.AI
|
Ensuring the reliability and verifiability of large language model
(LLM)-enabled systems remains a significant challenge in software engineering.
We propose a probabilistic framework for systematically analyzing and improving
these systems by modeling and refining distributions over clusters of
semantically equivalent outputs. This framework facilitates the evaluation and
iterative improvement of Transference Models -- key software components that
utilize LLMs to transform inputs into outputs for downstream tasks. To
illustrate its utility, we apply the framework to the autoformalization
problem, where natural language documentation is transformed into formal
program specifications. Our case illustrates how probabilistic analysis enables
the identification of weaknesses and guides focused alignment improvements,
resulting in more reliable and interpretable outputs. This principled approach
offers a foundation for addressing critical challenges in the development of
robust LLM-enabled systems.
|
2501.06374
|
AFRIDOC-MT: Document-level MT Corpus for African Languages
|
cs.CL
|
This paper introduces AFRIDOC-MT, a document-level multi-parallel translation
dataset covering English and five African languages: Amharic, Hausa, Swahili,
Yor\`ub\'a, and Zulu. The dataset comprises 334 health and 271 information
technology news documents, all human-translated from English to these
languages. We conduct document-level translation benchmark experiments by
evaluating neural machine translation (NMT) models and large language models
(LLMs) for translations between English and these languages, at both the
sentence and pseudo-document levels. These outputs are realigned to form
complete documents for evaluation. Our results indicate that NLLB-200 achieved
the best average performance among the standard NMT models, while GPT-4o
outperformed general-purpose LLMs. Fine-tuning selected models led to
substantial performance gains, but models trained on sentences struggled to
generalize effectively to longer documents. Furthermore, our analysis reveals
that some LLMs exhibit issues such as under-generation, repetition of words or
phrases, and off-target translations, especially for African languages.
|
2501.06376
|
On the Partial Identifiability in Reward Learning: Choosing the Best
Reward
|
cs.LG stat.ML
|
In Reward Learning (ReL), we are given feedback on an unknown *target
reward*, and the goal is to use this information to find it. When the feedback
is not informative enough, the target reward is only *partially identifiable*,
i.e., there exists a set of rewards (the feasible set) that are
equally-compatible with the feedback. In this paper, we show that there exists
a choice of reward, not necessarily contained in the feasible set, that,
*depending on the ReL application*, improves the performance w.r.t. selecting
the reward arbitrarily among the feasible ones. To this aim, we introduce a new
*quantitative framework* to analyze ReL problems in a simple yet expressive
way. We exemplify the framework in a *reward transfer* use case, for which we
devise three provably-efficient ReL algorithms.
|
2501.06382
|
Dynamics of "Spontaneous" Topic Changes in Next Token Prediction with
Self-Attention
|
cs.CL cs.AI stat.ML
|
Human cognition can spontaneously shift conversation topics, often triggered
by emotional or contextual signals. In contrast, self-attention-based language
models depend on structured statistical cues from input tokens for next-token
prediction, lacking this spontaneity. Motivated by this distinction, we
investigate the factors that influence the next-token prediction to change the
topic of the input sequence. We define concepts of topic continuity, ambiguous
sequences, and change of topic, based on defining a topic as a set of token
priority graphs (TPGs). Using a simplified single-layer self-attention
architecture, we derive analytical characterizations of topic changes.
Specifically, we demonstrate that (1) the model maintains the priority order of
tokens related to the input topic, (2) a topic change can occur only if
lower-priority tokens outnumber all higher-priority tokens of the input topic,
and (3) unlike human cognition, longer context lengths and overlapping topics
reduce the likelihood of spontaneous redirection. These insights highlight
differences between human cognition and self-attention-based models in
navigating topic changes and underscore the challenges in designing
conversational AI capable of handling "spontaneous" conversations more
naturally. To the best of our knowledge, no prior work has explored these
questions with a focus as closely aligned to human conversation and thought.
|
2501.06386
|
Using Pre-trained LLMs for Multivariate Time Series Forecasting
|
cs.LG cs.CL
|
Pre-trained Large Language Models (LLMs) encapsulate large amounts of
knowledge and take enormous amounts of compute to train. We make use of this
resource, together with the observation that LLMs are able to transfer
knowledge and performance from one domain or even modality to another
seemingly-unrelated area, to help with multivariate demand time series
forecasting. Attention in transformer-based methods requires something worth
attending to -- more than just samples of a time-series. We explore different
methods to map multivariate input time series into the LLM token embedding
space. In particular, our novel multivariate patching strategy to embed time
series features into decoder-only pre-trained Transformers produces results
competitive with state-of-the-art time series forecasting models. We also use
recently-developed weight-based diagnostics to validate our findings.
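One simple form of multivariate patching (a sketch under assumed shapes, not necessarily the paper's strategy) flattens channel values within each fixed-length window and projects the result into the token-embedding dimension; the random projection below stands in for a learned embedding layer.

```python
import numpy as np

def patchify(series, patch_len, d_model, rng):
    """Split a (T, C) multivariate series into non-overlapping patches of
    length patch_len, flatten channels, and project each patch into the
    LLM token-embedding space of dimension d_model."""
    T, C = series.shape
    n = T // patch_len
    patches = series[: n * patch_len].reshape(n, patch_len * C)
    W = rng.normal(size=(patch_len * C, d_model)) / np.sqrt(patch_len * C)
    return patches @ W            # (n_patches, d_model) pseudo-token embeddings

rng = np.random.default_rng(0)
x = rng.normal(size=(96, 5))                      # 96 steps, 5 variables
tokens = patchify(x, patch_len=16, d_model=64, rng=rng)
print(tokens.shape)                               # → (6, 64)
```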
|
2501.06389
|
Kolmogorov-Arnold networks for metal surface defect classification
|
cs.LG cs.AI cs.NE
|
This paper presents the application of Kolmogorov-Arnold Networks (KAN) in
classifying metal surface defects. Specifically, steel surfaces are analyzed to
detect defects such as cracks, inclusions, patches, pitted surfaces, and
scratches. Drawing on the Kolmogorov-Arnold theorem, KAN provides a novel
approach compared to conventional multilayer perceptrons (MLPs), facilitating
more efficient function approximation by utilizing spline functions. The
results show that KAN networks can achieve better accuracy than convolutional
neural networks (CNNs) with fewer parameters, resulting in faster convergence
and improved performance in image classification.
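The core KAN building block, a learnable univariate function on each network edge, can be illustrated with degree-1 splines, where the coefficients are simply the function values on a fixed grid. Real KAN implementations use higher-order B-splines plus a base activation; this toy uses linear interpolation.

```python
import numpy as np

def edge_function(x, coeffs, grid):
    """KAN-style learnable 1-d edge function: a combination of
    piecewise-linear (degree-1 B-spline) basis functions on `grid`.
    coeffs[i] is the function value at grid[i]."""
    return np.interp(x, grid, coeffs)

grid = np.linspace(-1, 1, 5)
coeffs = np.array([0.0, 0.5, 1.0, 0.5, 0.0])   # a learnable "bump"
x = np.array([-1.0, 0.0, 1.0])
print(edge_function(x, coeffs, grid))          # → [0. 1. 0.]
```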
|
2501.06394
|
UniSpeaker: A Unified Approach for Multimodality-driven Speaker
Generation
|
cs.SD cs.AI eess.AS
|
Recent advancements in personalized speech generation have brought synthetic
speech increasingly close to the realism of target speakers' recordings, yet
multimodal speaker generation is still emerging. This paper introduces
UniSpeaker, a unified approach for multimodality-driven speaker generation.
Specifically, we propose a unified voice aggregator based on KV-Former,
applying soft contrastive loss to map diverse voice description modalities into
a shared voice space, ensuring that the generated voice aligns more closely
with the input descriptions. To evaluate multimodality-driven voice control, we
build the first multimodality-based voice control (MVC) benchmark, focusing on
voice suitability, voice diversity, and speech quality. UniSpeaker is evaluated
across five tasks using the MVC benchmark, and the experimental results
demonstrate that UniSpeaker outperforms previous modality-specific models.
Speech samples are available at \url{https://UniSpeaker.github.io}.
|
2501.06399
|
Has an AI model been trained on your images?
|
cs.CV cs.AI cs.CR cs.CY cs.LG
|
From a simple text prompt, generative-AI image models can create stunningly
realistic and creative images bounded, it seems, by only our imagination. These
models have achieved this remarkable feat thanks, in part, to the ingestion of
billions of images collected from nearly every corner of the internet. Many
creators have understandably expressed concern over how their intellectual
property has been ingested without their permission or a mechanism to opt out
of training. As a result, questions of fair use and copyright infringement have
quickly emerged. We describe a method that allows us to determine if a model
was trained on a specific image or set of images. This method is
computationally efficient and assumes no explicit knowledge of the model
architecture or weights (so-called black-box membership inference). We
anticipate that this method will be crucial for auditing existing models and,
looking ahead, ensuring the fairer development and deployment of generative AI
models.
|
2501.06400
|
Mathematics of Digital Twins and Transfer Learning for PDE Models
|
cs.LG cs.NA math.NA stat.ML
|
We define a digital twin (DT) of a physical system governed by partial
differential equations (PDEs) as a model for real-time simulations and control
of the system behavior under changing conditions. We construct DTs using the
Karhunen-Lo\`{e}ve Neural Network (KL-NN) surrogate model and transfer learning
(TL). The surrogate model allows fast inference and differentiability with
respect to control parameters for control and optimization. TL is used to
retrain the model for new conditions with minimal additional data. We employ
the moment equations to analyze TL and identify parameters that can be
transferred to new conditions. The proposed analysis also guides the control
variable selection in DT to facilitate efficient TL.
For linear PDE problems, the non-transferable parameters in the KL-NN
surrogate model can be exactly estimated from a single solution of the PDE
corresponding to the mean values of the control variables under new target
conditions. Retraining an ML model with a single solution sample is known as
one-shot learning, and our analysis shows that the one-shot TL is exact for
linear PDEs. For nonlinear PDE problems, transferring any parameters
introduces errors. For a nonlinear diffusion PDE model, we find that for a
relatively small range of control variables, some surrogate model parameters
can be transferred without introducing a significant error, some can be
approximately estimated from the mean-field equation, and the rest can be found
using a linear residual least square problem or an ordinary linear least square
problem if a small labeled dataset for new conditions is available. The former
approach results in a one-shot TL while the latter approach is an example of a
few-shot TL. Both methods are approximate for the nonlinear PDEs.
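The KL-NN surrogate idea can be sketched on a toy problem: compress solution snapshots with a Karhunen-Loeve (PCA/SVD) basis and learn a map from control variables to KL coefficients. The toy "solutions", the two-mode basis, and the use of an affine fit in place of a neural network are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
params = rng.uniform(-1, 1, size=(200, 2))        # control variables
xs = np.linspace(0, 1, 50)
# Toy "PDE solutions": each snapshot is a combination of two spatial modes.
snapshots = np.array([p[0] * np.sin(np.pi * xs) + p[1] * np.sin(2 * np.pi * xs)
                      for p in params])

mean = snapshots.mean(axis=0)
_, _, Vt = np.linalg.svd(snapshots - mean, full_matrices=False)
basis = Vt[:2]                                    # two KL modes capture everything here
coeffs = (snapshots - mean) @ basis.T             # KL coefficients per snapshot

# Stand-in for the NN: affine map from control variables to KL coefficients.
X = np.hstack([params, np.ones((len(params), 1))])
W, *_ = np.linalg.lstsq(X, coeffs, rcond=None)

recon = mean + (X @ W) @ basis                    # surrogate reconstruction
print(np.max(np.abs(recon - snapshots)) < 1e-8)   # → True (exact up to round-off)
```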
|
2501.06404
|
A Hybrid Framework for Reinsurance Optimization: Integrating Generative
Models and Reinforcement Learning
|
econ.EM cs.AI cs.LG stat.ML
|
Reinsurance optimization is critical for insurers to manage risk exposure,
ensure financial stability, and maintain solvency. Traditional approaches often
struggle with dynamic claim distributions, high-dimensional constraints, and
evolving market conditions. This paper introduces a novel hybrid framework that
integrates {Generative Models}, specifically Variational Autoencoders (VAEs),
with {Reinforcement Learning (RL)} using Proximal Policy Optimization (PPO).
The framework enables dynamic and scalable optimization of reinsurance
strategies by combining the generative modeling of complex claim distributions
with the adaptive decision-making capabilities of reinforcement learning.
The VAE component generates synthetic claims, including rare and catastrophic
events, addressing data scarcity and variability, while the PPO algorithm
dynamically adjusts reinsurance parameters to maximize surplus and minimize
ruin probability. The framework's performance is validated through extensive
experiments, including out-of-sample testing, stress-testing scenarios (e.g.,
pandemic impacts, catastrophic events), and scalability analysis across
portfolio sizes. Results demonstrate its superior adaptability, scalability,
and robustness compared to traditional optimization techniques, achieving
higher final surpluses and computational efficiency.
Key contributions include the development of a hybrid approach for
high-dimensional optimization, dynamic reinsurance parameterization, and
validation against stochastic claim distributions. The proposed framework
offers a transformative solution for modern reinsurance challenges, with
potential applications in multi-line insurance operations, catastrophe
modeling, and risk-sharing strategy design.
|
2501.06405
|
FocusDD: Real-World Scene Infusion for Robust Dataset Distillation
|
cs.CV cs.AI
|
Dataset distillation has emerged as a strategy to compress real-world
datasets for efficient training. However, it struggles with large-scale and
high-resolution datasets, limiting its practicality. This paper introduces a
novel resolution-independent dataset distillation method, Focused Dataset
Distillation (FocusDD), which achieves diversity and realism in distilled data
by identifying key information patches, thereby ensuring the generalization
capability of the distilled dataset across different network architectures.
Specifically, FocusDD leverages a pre-trained Vision Transformer (ViT) to
extract key image patches, which are then synthesized into a single distilled
image. These distilled images, which capture multiple targets, are suitable not
only for classification tasks but also for dense tasks such as object
detection. To further improve the generalization of the distilled dataset, each
synthesized image is augmented with a downsampled view of the original image.
Experimental results on the ImageNet-1K dataset demonstrate that, with 100
images per class (IPC), ResNet50 and MobileNet-v2 achieve validation accuracies
of 71.0% and 62.6%, respectively, outperforming state-of-the-art methods by
2.8% and 4.7%. Notably, FocusDD is the first method to use distilled datasets
for object detection tasks. On the COCO2017 dataset, with an IPC of 50,
YOLOv11n and YOLOv11s achieve 24.4% and 32.1% mAP, respectively, further
validating the effectiveness of our approach.
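Key-patch extraction of the kind described can be sketched as ranking non-overlapping patches by an attention score and keeping the top-k. The attention map, patch size, and scoring rule below are invented for illustration; FocusDD's actual pipeline uses a pre-trained ViT.

```python
import numpy as np

def select_key_patches(image, attn, patch=4, k=4):
    """Rank non-overlapping patches of `image` by the mean of a ViT-style
    attention map over each patch, and return the k highest-scoring ones."""
    H, W = image.shape
    scores, patches = [], []
    for i in range(0, H, patch):
        for j in range(0, W, patch):
            patches.append(image[i:i + patch, j:j + patch])
            scores.append(attn[i:i + patch, j:j + patch].mean())
    top = np.argsort(scores)[-k:]
    return [patches[t] for t in top]

rng = np.random.default_rng(0)
img = rng.normal(size=(16, 16))
attn = rng.random(size=(16, 16))                 # stand-in attention map
keep = select_key_patches(img, attn, patch=4, k=4)
print(len(keep), keep[0].shape)                  # → 4 (4, 4)
```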
|
2501.06408
|
Computational and Statistical Asymptotic Analysis of the JKO Scheme for
Iterative Algorithms to update distributions
|
stat.ML cs.LG
|
The seminal paper of Jordan, Kinderlehrer, and Otto introduced what is now
widely known as the JKO scheme, an iterative algorithmic framework for
computing distributions. This scheme can be interpreted as a Wasserstein
gradient flow and has been successfully applied in machine learning contexts,
such as deriving policy solutions in reinforcement learning. In this paper, we
extend the JKO scheme to accommodate models with unknown parameters.
Specifically, we develop statistical methods to estimate these parameters and
adapt the JKO scheme to incorporate the estimated values. To analyze the
adapted statistical JKO scheme, we establish an asymptotic theory via
stochastic partial differential equations that describes its limiting dynamic
behavior. Our framework allows both the sample size used in parameter
estimation and the number of algorithmic iterations to go to infinity. This
study offers a unified framework for joint computational and statistical
asymptotic analysis of the statistical JKO scheme. On the computational side,
we examine the scheme's dynamic behavior as the number of iterations increases,
while on the statistical side, we investigate the large-sample behavior of the
resulting distributions computed through the scheme. We conduct numerical
simulations to evaluate the finite-sample performance of the proposed methods
and validate the developed asymptotic theory.
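For reference, the JKO update being extended is the minimizing-movement step (for step size $\tau$ and free-energy functional $F$):

```latex
\rho_{k+1} \;\in\; \operatorname*{arg\,min}_{\rho \in \mathcal{P}_2(\mathbb{R}^d)}
\left\{ \frac{1}{2\tau}\, W_2^2(\rho, \rho_k) \;+\; F(\rho) \right\}.
```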
|
2501.06410
|
Task Delay and Energy Consumption Minimization for Low-altitude MEC via
Evolutionary Multi-objective Deep Reinforcement Learning
|
cs.LG cs.NE cs.NI
|
The low-altitude economy (LAE), driven by unmanned aerial vehicles (UAVs) and
other aircraft, has revolutionized fields such as transportation, agriculture,
and environmental monitoring. In the upcoming sixth-generation (6G) era,
UAV-assisted mobile edge computing (MEC) is particularly crucial in challenging
environments such as mountainous or disaster-stricken areas. The computation
task offloading problem is one of the key issues in UAV-assisted MEC, primarily
addressing the trade-off between minimizing the task delay and the energy
consumption of the UAV. In this paper, we consider a UAV-assisted MEC system
where the UAV carries the edge servers to facilitate task offloading for ground
devices (GDs), and formulate a calculation delay and energy consumption
multi-objective optimization problem (CDECMOP) to simultaneously improve the
performance and reduce the cost of the system. Then, by modeling the formulated
problem as a multi-objective Markov decision process (MOMDP), we propose a
multi-objective deep reinforcement learning (DRL) algorithm within an
evolutionary framework to dynamically adjust the weights and obtain
non-dominated policies. Moreover, to ensure stable convergence and improve
performance, we incorporate a target distribution learning (TDL) algorithm.
Simulation results demonstrate that the proposed algorithm can better balance
multiple optimization objectives and obtain superior non-dominated solutions
compared to other methods.
|
2501.06414
|
IPP-Net: A Generalizable Deep Neural Network Model for Indoor Pathloss
Radio Map Prediction
|
eess.SP cs.LG
|
In this paper, we propose a generalizable deep neural network model for
indoor pathloss radio map prediction (termed IPP-Net). IPP-Net is based on a
UNet architecture and learned from both large-scale ray tracing simulation data
and a modified 3GPP indoor hotspot model. The performance of IPP-Net is
evaluated in the First Indoor Pathloss Radio Map Prediction Challenge in ICASSP
2025. The evaluation results show that IPP-Net achieves a weighted root mean
square error of 9.501 dB on three competition tasks and obtains the second
overall ranking.
|
2501.06416
|
Influencing Humans to Conform to Preference Models for RLHF
|
cs.LG cs.AI cs.HC
|
Designing a reinforcement learning from human feedback (RLHF) algorithm to
approximate a human's unobservable reward function requires assuming,
implicitly or explicitly, a model of human preferences. A preference model that
poorly describes how humans generate preferences risks learning a poor
approximation of the human's reward function. In this paper, we conduct three
human studies to assess whether one can influence the expression of real human
preferences to more closely conform to a desired preference model. Importantly,
our approach does not seek to alter the human's unobserved reward function.
Rather, we change how humans use this reward function to generate preferences,
such that they better match whatever preference model is assumed by a
particular RLHF algorithm. We introduce three interventions: showing humans the
quantities that underlie a preference model, which is normally unobservable
information derived from the reward function; training people to follow a
specific preference model; and modifying the preference elicitation question.
All intervention types show significant effects, providing practical tools to
improve preference data quality and the resultant alignment of the learned
reward functions. Overall we establish a novel research direction in model
alignment: designing interfaces and training interventions to increase human
conformance with the modeling assumptions of the algorithm that will learn from
their input.
|
2501.06417
|
DiscQuant: A Quantization Method for Neural Networks Inspired by
Discrepancy Theory
|
cs.LG cs.AI cs.DS
|
Quantizing the weights of a neural network has two steps: (1) Finding a good
low bit-complexity representation for weights (which we call the quantization
grid) and (2) Rounding the original weights to values in the quantization grid.
In this paper, we study the problem of rounding optimally given any
quantization grid. The simplest and most commonly used way to round is
Round-to-Nearest (RTN). By rounding in a data-dependent way instead, one can
improve the quality of the quantized model significantly.
We study the rounding problem from the lens of \emph{discrepancy theory},
which studies how well we can round a continuous solution to a discrete
solution without affecting solution quality too much. We prove that given
$m=\mathrm{poly}(1/\epsilon)$ samples from the data distribution, we can round
all but $O(m)$ model weights such that the expected approximation error of the
quantized model on the true data distribution is $\le \epsilon$ as long as the
space of gradients of the original model is approximately low rank (which we
empirically validate).
Our proof, which is algorithmic, inspired a simple and practical rounding
algorithm called \emph{DiscQuant}. In our experiments, we demonstrate that
DiscQuant significantly improves over the prior state-of-the-art rounding
method called GPTQ and the baseline RTN over a range of benchmarks on
Phi3mini-3.8B and Llama3.1-8B. For example, rounding Phi3mini-3.8B to a fixed
quantization grid with 3.25 bits per parameter using DiscQuant gets 64\%
accuracy on the GSM8k dataset, whereas GPTQ achieves 54\% and RTN achieves 31\%
(the original model achieves 84\%). We make our code available at
https://github.com/jerry-chee/DiscQuant.
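The Round-to-Nearest baseline described above can be sketched in a few lines; the grid values here are illustrative, and this is the naive baseline, not the DiscQuant algorithm itself:

```python
def round_to_nearest(weights, grid):
    # RTN: independently map each weight to the closest value in the
    # quantization grid, ignoring the data distribution entirely.
    return [min(grid, key=lambda g: abs(g - w)) for w in weights]

grid = [-1.0, -0.5, 0.0, 0.5, 1.0]               # illustrative quantization grid
print(round_to_nearest([0.3, -0.7, 0.9], grid))  # [0.5, -0.5, 1.0]
```

Data-dependent methods such as DiscQuant instead choose the roundings jointly, so that the quantized model's outputs stay close to the original model's outputs on real data.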
|
2501.06423
|
AlgoPilot: Fully Autonomous Program Synthesis Without Human-Written
Programs
|
cs.AI
|
Program synthesis has traditionally relied on human-provided specifications,
examples, or prior knowledge to generate functional algorithms. Existing
methods either emulate human-written algorithms or solve specific tasks without
generating reusable programmatic logic, limiting their ability to create novel
algorithms. We introduce AlgoPilot, a groundbreaking approach for fully
automated program synthesis without human-written programs or trajectories.
AlgoPilot leverages reinforcement learning (RL) guided by a Trajectory Language
Model (TLM) to synthesize algorithms from scratch. The TLM, trained on
trajectories generated by random Python functions, serves as a soft constraint
during the RL process, aligning generated sequences with patterns likely to
represent valid algorithms. Using sorting as a test case, AlgoPilot
demonstrates its ability to generate trajectories that are interpretable as
classical algorithms, such as Bubble Sort, while operating without prior
algorithmic knowledge. This work establishes a new paradigm for algorithm
discovery and lays the groundwork for future advancements in autonomous program
synthesis.
|
2501.06425
|
Tensor Product Attention Is All You Need
|
cs.CL cs.AI cs.LG
|
Scaling language models to handle longer input sequences typically
necessitates large key-value (KV) caches, resulting in substantial memory
overhead during inference. In this paper, we propose Tensor Product Attention
(TPA), a novel attention mechanism that uses tensor decompositions to represent
queries, keys, and values compactly, significantly shrinking KV cache size at
inference time. By factorizing these representations into contextual low-rank
components (contextual factorization) and seamlessly integrating with RoPE, TPA
achieves improved model quality alongside memory efficiency. Based on TPA, we
introduce the Tensor ProducT ATTenTion Transformer (T6), a new model
architecture for sequence modeling. Through extensive empirical evaluation of
language modeling tasks, we demonstrate that T6 exceeds the performance of
standard Transformer baselines including MHA, MQA, GQA, and MLA across various
metrics, including perplexity and a range of renowned evaluation benchmarks.
Notably, TPA's memory efficiency enables the processing of significantly longer
sequences under fixed resource constraints, addressing a critical scalability
challenge in modern language models. The code is available at
https://github.com/tensorgi/T6.
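The memory saving from factorizing keys and values can be sketched with a rough cache-size accounting; the cost model below is a simplified assumption for illustration, not TPA's exact parameterization:

```python
def kv_cache_floats(seq_len, n_heads, head_dim, rank=None):
    """Approximate per-layer KV-cache size in floats.

    Standard attention caches a full (n_heads x head_dim) K and V per token.
    A rank-R tensor-product factorization instead caches R head-factors
    (length n_heads) and R token-factors (length head_dim) per token for
    each of K and V (simplified accounting).
    """
    if rank is None:
        per_token = 2 * n_heads * head_dim            # full K and V
    else:
        per_token = 2 * rank * (n_heads + head_dim)   # factored K and V
    return seq_len * per_token

full = kv_cache_floats(4096, 32, 128)
low_rank = kv_cache_floats(4096, 32, 128, rank=2)
print(full // low_rank)  # roughly 12x smaller under this toy model
```

Under these assumed shapes, the factored cache is over an order of magnitude smaller, which is the kind of reduction that lets longer sequences fit under a fixed memory budget.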
|
2501.06429
|
Reliable Imputed-Sample Assisted Vertical Federated Learning
|
cs.LG stat.ML
|
Vertical Federated Learning (VFL) is a well-known FL variant that enables
multiple parties to collaboratively train a model without sharing their raw
data. Existing VFL approaches focus on overlapping samples among different
parties, while their performance is constrained by the limited number of these
samples, leaving numerous non-overlapping samples unexplored. Some previous
work has explored techniques for imputing missing values in samples, but often
without adequate attention to the quality of the imputed samples. To address
this issue, we propose a Reliable Imputed-Sample Assisted (RISA) VFL framework
to effectively exploit non-overlapping samples by selecting reliable imputed
samples for training VFL models. Specifically, after imputing non-overlapping
samples, we introduce evidence theory to estimate the uncertainty of imputed
samples, and only samples with low uncertainty are selected. In this way,
high-quality non-overlapping samples are utilized to improve the VFL model.
Experiments on two widely used datasets demonstrate the significant performance
gains achieved by the RISA, especially with the limited overlapping samples,
e.g., a 48% accuracy gain on CIFAR-10 with only 1% overlapping samples.
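The selection step described above (keep only low-uncertainty imputed samples) reduces to a simple filter. In the sketch below, the threshold value and scores are hypothetical placeholders; in the paper the uncertainty would come from the evidence-theory estimator:

```python
def select_reliable(samples, uncertainties, tau=0.2):
    # Keep only imputed samples whose estimated uncertainty is below tau;
    # everything else is excluded from VFL training.
    return [s for s, u in zip(samples, uncertainties) if u < tau]

imputed = ["s1", "s2", "s3", "s4"]
scores = [0.05, 0.60, 0.15, 0.90]
print(select_reliable(imputed, scores))  # ['s1', 's3']
```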
|
2501.06430
|
Open Eyes, Then Reason: Fine-grained Visual Mathematical Understanding
in MLLMs
|
cs.CV
|
Current multimodal large language models (MLLMs) often underperform on
mathematical problem-solving tasks that require fine-grained visual
understanding. The limitation is largely attributable to inadequate perception
of geometric primitives during image-level contrastive pre-training (e.g.,
CLIP). While recent efforts to improve math MLLMs have focused on scaling up
mathematical visual instruction datasets and employing stronger LLM backbones,
they often overlook persistent errors in visual recognition. In this paper, we
systematically evaluate the visual grounding capabilities of state-of-the-art
MLLMs and reveal a significant negative correlation between visual grounding
accuracy and problem-solving performance, underscoring the critical role of
fine-grained visual understanding. Notably, advanced models like GPT-4o exhibit
a 70% error rate when identifying geometric entities, highlighting that this
remains a key bottleneck in visual mathematical reasoning. To address this, we
propose a novel approach, SVE-Math (Selective Vision-Enhanced Mathematical
MLLM), featuring a geometric-grounded vision encoder and a feature router that
dynamically adjusts the contribution of hierarchical visual feature maps. Our
model recognizes accurate visual primitives and generates precise visual
prompts tailored to the language model's reasoning needs. In experiments,
SVE-Math-Qwen2.5-7B outperforms other 7B models by 15% on MathVerse and is
comparable with GPT-4V on MathVista. Despite being trained on smaller datasets,
SVE-Math-7B achieves competitive performance on GeoQA, rivaling models trained
on significantly larger datasets. Our findings emphasize the importance of
incorporating fine-grained visual understanding into MLLMs and provide a
promising direction for future research.
|
2501.06431
|
Aug3D: Augmenting large scale outdoor datasets for Generalizable Novel
View Synthesis
|
cs.CV cs.AI cs.RO
|
Recent photorealistic Novel View Synthesis (NVS) advances have increasingly
gained attention. However, these approaches remain constrained to small indoor
scenes. While optimization-based NVS models have attempted to address this,
generalizable feed-forward methods, offering significant advantages, remain
underexplored. In this work, we train PixelNeRF, a feed-forward NVS model, on
the large-scale UrbanScene3D dataset. We propose four training strategies to
cluster and train on this dataset, highlighting that performance is hindered by
limited view overlap. To address this, we introduce Aug3D, an augmentation
technique that leverages reconstructed scenes using traditional
Structure-from-Motion (SfM). Aug3D generates well-conditioned novel views
through grid and semantic sampling to enhance feed-forward NVS model learning.
Our experiments reveal that reducing the number of views per cluster from 20 to
10 improves PSNR by 10%, but the performance remains suboptimal. Aug3D further
addresses this by combining the newly generated novel views with the original
dataset, demonstrating its effectiveness in improving the model's ability to
predict novel views.
|
2501.06432
|
Deep Learning on Hester Davis Scores for Inpatient Fall Prediction
|
cs.LG cs.AI
|
Fall risk prediction among hospitalized patients is a critical aspect of
patient safety in clinical settings, and accurate models can help prevent
adverse events. The Hester Davis Score (HDS) is commonly used to assess fall
risk, with current clinical practice relying on a threshold-based approach. In
this method, a patient is classified as high-risk when their HDS exceeds a
predefined threshold. However, this approach may fail to capture dynamic
patterns in fall risk over time. In this study, we model the threshold-based
approach and propose two machine learning approaches for enhanced fall
prediction: one-step-ahead fall prediction and sequence-to-point fall
prediction. The one-step-ahead model uses the HDS at the current timestamp to
predict the risk at the next timestamp, while the sequence-to-point model
leverages all preceding HDS values to predict fall risk using deep learning. We
compare these approaches to assess their accuracy in fall risk prediction,
demonstrating that deep learning can outperform the traditional threshold-based
method by capturing temporal patterns and improving prediction reliability.
These findings highlight the potential for data-driven approaches to enhance
patient safety through more reliable fall prevention strategies.
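The threshold-based baseline described above reduces to a per-timestamp comparison; the threshold in this sketch is a hypothetical placeholder, not a clinically validated cutoff:

```python
def flag_high_risk(hds_series, threshold=15):
    # Classify each timestamp as high-risk when the HDS exceeds the cutoff.
    # Note this treats every timestamp independently and ignores any
    # temporal pattern in the score sequence.
    return [score > threshold for score in hds_series]

print(flag_high_risk([10, 14, 16, 12, 18]))  # [False, False, True, False, True]
```

The sequence-to-point approach in the abstract replaces this memoryless rule with a model that conditions on the full history of preceding HDS values.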
|
2501.06434
|
Synthetic Feature Augmentation Improves Generalization Performance of
Language Models
|
cs.CL cs.AI
|
Training and fine-tuning deep learning models, especially large language
models (LLMs), on limited and imbalanced datasets poses substantial challenges.
These issues often result in poor generalization, where models overfit to
dominant classes and underperform on minority classes, leading to biased
predictions and reduced robustness in real-world applications. To overcome
these challenges, we propose augmenting features in the embedding space by
generating synthetic samples using a range of techniques. By upsampling
underrepresented classes, this method improves model performance and alleviates
data imbalance. We validate the effectiveness of this approach across multiple
open-source text classification benchmarks, demonstrating its potential to
enhance model robustness and generalization in imbalanced data scenarios.
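One simple way to generate synthetic samples in the embedding space, as described above, is to interpolate between embeddings of two minority-class examples (a SMOTE-style sketch; the mixing coefficient and vectors are illustrative, not the paper's exact techniques):

```python
def interpolate_embeddings(a, b, alpha=0.5):
    # Synthesize a new feature vector on the segment between two
    # minority-class embeddings; alpha controls the mixing ratio.
    return [alpha * x + (1 - alpha) * y for x, y in zip(a, b)]

print(interpolate_embeddings([0.0, 2.0], [1.0, 0.0], alpha=0.25))  # [0.75, 0.5]
```

Repeating this for random minority-class pairs upsamples the underrepresented class without touching the raw text.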
|
2501.06438
|
Qffusion: Controllable Portrait Video Editing via Quadrant-Grid
Attention Learning
|
cs.CV
|
This paper presents Qffusion, a dual-frame-guided framework for portrait
video editing. Specifically, we consider a design principle of ``animation for
editing'', and train Qffusion as a general animation framework from two still
reference images while we can use it for portrait video editing easily by
applying modified start and end frames as references during inference.
Leveraging the powerful generative capability of Stable Diffusion, we propose a
Quadrant-grid Arrangement (QGA) scheme for latent re-arrangement, which
arranges the latent codes of two reference images and that of four facial
conditions into a four-grid fashion, separately. Then, we fuse features of
these two modalities and use self-attention for both appearance and temporal
learning, where representations at different times are jointly modeled under
QGA. Our Qffusion can achieve stable video editing without additional networks
or complex training stages, where only the input format of Stable Diffusion is
modified. Further, we propose a Quadrant-grid Propagation (QGP) inference
strategy, which enjoys a unique advantage on stable arbitrary-length video
generation by processing reference and condition frames recursively. Through
extensive experiments, Qffusion consistently outperforms state-of-the-art
techniques on portrait video editing.
|
2501.06440
|
UCloudNet: A Residual U-Net with Deep Supervision for Cloud Image
Segmentation
|
cs.CV eess.IV
|
Recent advancements in meteorology involve the use of ground-based sky
cameras for cloud observation. Analyzing images from these cameras helps in
calculating cloud coverage and understanding atmospheric phenomena.
Traditionally, cloud image segmentation relied on conventional computer vision
techniques. However, with the advent of deep learning, convolutional neural
networks (CNNs) are increasingly applied for this purpose. Despite their
effectiveness, CNNs often require many epochs to converge, posing challenges
for real-time processing in sky camera systems. In this paper, we introduce a
residual U-Net with deep supervision for cloud segmentation, which provides
better accuracy than previous approaches with less training cost. By utilizing
residual connections in the encoders of UCloudNet, the feature extraction
ability is further improved.
|
2501.06441
|
CPDR: Towards Highly-Efficient Salient Object Detection via Crossed
Post-decoder Refinement
|
cs.CV
|
Most of the current salient object detection approaches use deeper networks
with large backbones to produce more accurate predictions, which results in a
significant increase in computational complexity. A great number of network
designs follow the pure UNet and Feature Pyramid Network (FPN) architecture,
which has limited feature extraction and aggregation ability. This motivated us
to design a lightweight post-decoder refinement module, the crossed
post-decoder refinement (CPDR) to enhance the feature representation of a
standard FPN or U-Net framework. Specifically, we introduce the Attention Down
Sample Fusion (ADF), which employs channel attention mechanisms with attention
maps generated by high-level representation to refine the low-level features,
and Attention Up Sample Fusion (AUF), leveraging the low-level information to
guide the high-level features through spatial attention. Additionally, we
propose the Dual Attention Cross Fusion (DACF) upon ADFs and AUFs, which
reduces the number of parameters while maintaining the performance. Experiments
on five benchmark datasets demonstrate that our method outperforms previous
state-of-the-art approaches.
|
2501.06442
|
ARES: Auxiliary Range Expansion for Outlier Synthesis
|
cs.AI
|
Recent successes of artificial intelligence and deep learning often depend on
the well-collected training dataset which is assumed to have an identical
distribution with the test dataset. However, this assumption, which is called
closed-set learning, is hard to meet in realistic scenarios for deploying deep
learning models. As one of the solutions to mitigate this assumption, research
on out-of-distribution (OOD) detection has been actively explored in various
domains. In OOD detection, we assume that we are given the data of a new class
that was not seen in the training phase, i.e., outlier, at the evaluation
phase. The ultimate goal of OOD detection is to detect and classify such unseen
outlier data as a novel "unknown" class. Among various research branches for
OOD detection, generating a virtual outlier during the training phase has been
proposed. However, conventional generation-based methodologies utilize the
in-distribution training dataset to imitate outlier instances, which limits the
quality of the synthesized virtual outlier instances. In this paper, we
propose a novel methodology for OOD detection named Auxiliary Range Expansion
for Outlier Synthesis, or ARES. ARES models the region for generating
out-of-distribution instances by escaping from the given in-distribution
region instead of remaining near its boundary. ARES consists of several stages
that ultimately generate valuable OOD-like virtual instances. The energy
score-based discriminator is then trained to effectively
separate in-distribution data and outlier data. Quantitative experiments on
broad settings show the improvement of performance by our method, and
qualitative results provide logical explanations of the mechanism behind it.
|
2501.06444
|
On the Computational Capability of Graph Neural Networks: A Circuit
Complexity Bound Perspective
|
cs.LG cs.AI cs.CC
|
Graph Neural Networks (GNNs) have become the standard approach for learning
and reasoning over relational data, leveraging the message-passing mechanism
that iteratively propagates node embeddings through graph structures. While
GNNs have achieved significant empirical success, their theoretical limitations
remain an active area of research. Existing studies primarily focus on
characterizing GNN expressiveness through Weisfeiler-Lehman (WL) graph
isomorphism tests. In this paper, we take a fundamentally different approach by
exploring the computational limitations of GNNs through the lens of circuit
complexity. Specifically, we analyze the circuit complexity of common GNN
architectures and prove that under constraints of constant-depth layers, linear
or sublinear embedding sizes, and polynomial precision, GNNs cannot solve key
problems such as graph connectivity and graph isomorphism unless $\mathsf{TC}^0
= \mathsf{NC}^1$. These results reveal the intrinsic expressivity limitations
of GNNs behind their empirical success and introduce a novel framework for
analyzing GNN expressiveness that can be extended to a broader range of GNN
models and graph decision problems.
|
2501.06446
|
Cross-Technology Interference: Detection, Avoidance, and Coexistence
Mechanisms in the ISM Bands
|
cs.NI cs.LG
|
A large number of heterogeneous wireless networks share the unlicensed
spectrum designated as the ISM (Industry, Scientific, and Medicine) radio band.
These networks do not adhere to a common medium access rule and differ in their
specifications considerably. As a result, when concurrently active, they cause
cross-technology interference (CTI) on each other. The effect of this
interference is not reciprocal: networks using high transmission power and
advanced transmission schemes often cause disproportionate disruptions to
those with modest communication and computation resources. CTI corrupts
packets, incurs packet retransmission costs, introduces end-to-end latency and
jitter, and makes networks unpredictable. The purpose of this paper is to
closely examine its impact on low-power networks which are based on the IEEE
802.15.4 standard. It discusses the latest developments in CTI detection,
coexistence, and avoidance mechanisms, as well as in messaging schemes that
attempt to enable heterogeneous networks to communicate directly with one
another to coordinate packet transmission and channel assignment.
|
2501.06448
|
Discovering an Image-Adaptive Coordinate System for Photography
Processing
|
cs.CV
|
Curve & Lookup Table (LUT) based methods directly map a pixel to the target
output, making them highly efficient tools for real-time photography
processing. However, due to extreme memory complexity to learn full RGB space
mapping, existing methods either sample a discretized 3D lattice to build a 3D
LUT or decompose into three separate curves (1D LUTs) on the RGB channels.
Here, we propose a novel algorithm, IAC, to learn an image-adaptive Cartesian
coordinate system in the RGB color space before performing curve operations.
This end-to-end trainable approach enables us to efficiently adjust images with
a jointly learned image-adaptive coordinate system and curves. Experimental
results demonstrate that this simple strategy achieves state-of-the-art (SOTA)
performance in various photography processing tasks, including photo
retouching, exposure correction, and white-balance editing, while also
maintaining a lightweight design and fast inference speed.
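The 1D-LUT curve operation mentioned above can be sketched as linear interpolation over a uniformly sampled per-channel curve (a generic sketch of curve-based mapping, not IAC's learned coordinate system):

```python
def apply_1d_lut(value, lut):
    # Map a channel value in [0, 1] through a curve sampled at uniform
    # positions, linearly interpolating between neighboring samples.
    pos = value * (len(lut) - 1)
    lo = int(pos)
    hi = min(lo + 1, len(lut) - 1)
    frac = pos - lo
    return lut[lo] * (1 - frac) + lut[hi] * frac

brighten = [0.0, 0.6, 0.85, 1.0]   # illustrative brightening curve
print(apply_1d_lut(0.5, brighten))
```

Applying one such curve per RGB channel is the 1D-LUT decomposition the abstract contrasts with full 3D LUTs; IAC's contribution is learning the coordinate system in which these curves operate.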
|
2501.06454
|
Reinforcement Learning for Enhancing Sensing Estimation in Bistatic ISAC
Systems with UAV Swarms
|
eess.SP cs.LG
|
This paper introduces a novel Multi-Agent Reinforcement Learning (MARL)
framework to enhance integrated sensing and communication (ISAC) networks using
unmanned aerial vehicle (UAV) swarms as sensing radars. By framing the
positioning and trajectory optimization of UAVs as a Partially Observable
Markov Decision Process, we develop a MARL approach that leverages centralized
training with decentralized execution to maximize the overall sensing
performance. Specifically, we implement a decentralized cooperative MARL
strategy to enable UAVs to develop effective communication protocols, therefore
enhancing their environmental awareness and operational efficiency.
Additionally, we augment the MARL solution with a transmission power adaptation
technique to mitigate interference between the communicating drones and
optimize the efficiency of the learned communication protocol. Despite the
increased complexity,
our solution demonstrates robust performance and adaptability across various
scenarios, providing a scalable and cost-effective enhancement for future ISAC
networks.
|
2501.06457
|
Automated Detection and Analysis of Minor Deformations in Flat Walls Due
to Railway Vibrations Using LiDAR and Machine Learning
|
cs.LG
|
This study introduces an advanced methodology for automatically identifying
minor deformations in flat walls caused by vibrations from nearby railway
tracks. It leverages high-density Terrestrial Laser Scanner (TLS) LiDAR surveys
and AI/ML techniques to collect and analyze data. The scan data is processed
into a detailed point cloud, which is segmented to distinguish ground points,
trees, buildings, and other objects. The analysis focuses on identifying
sections along flat walls and estimating their deformations relative to the
ground orientation.
Findings from the study, conducted at the RGIPT campus, reveal significant
deformations in walls close to the railway corridor, with the highest
deformations ranging from 7 to 8 cm and an average of 3 to 4 cm. In contrast,
walls further from the corridor show negligible deformations. The developed
automated process for feature extraction and deformation monitoring
demonstrates potential for structural health monitoring. By integrating LiDAR
data with machine learning, the methodology provides an efficient system for
identifying and analyzing structural deformations, highlighting the importance
of continuous monitoring for ensuring structural integrity and public safety in
urban infrastructure. This approach represents a substantial advancement in
automated feature extraction and deformation analysis, contributing to more
effective management of urban infrastructure.
|
2501.06458
|
O1 Replication Journey -- Part 3: Inference-time Scaling for Medical
Reasoning
|
cs.CL
|
Building upon our previous investigations of O1 replication (Part 1: Journey
Learning [Qin et al., 2024] and Part 2: Distillation [Huang et al., 2024]),
this work explores the potential of inference-time scaling in large language
models (LLMs) for medical reasoning tasks, ranging from diagnostic
decision-making to treatment planning. Through extensive experiments on medical
benchmarks of varying complexity (MedQA, Medbullets, and JAMA Clinical
Challenges), our investigation reveals several key insights: (1) Increasing
inference time does lead to improved performance. With a modest training set of
500 samples, our model yields substantial performance improvements of 6%-11%.
(2) Task complexity directly correlates with the required length of reasoning
chains, confirming the necessity of extended thought processes for challenging
problems. (3) The differential diagnoses generated by our model adhere to the
principles of the hypothetico-deductive method, producing a list of potential
conditions that may explain a patient's symptoms and systematically narrowing
these possibilities by evaluating the evidence. These findings demonstrate the
promising synergy between inference-time scaling and journey learning in
advancing LLMs' real-world clinical reasoning capabilities.
|
2501.06461
|
Assessing instructor-AI cooperation for grading essay-type questions in
an introductory sociology course
|
cs.AI
|
This study explores the use of artificial intelligence (AI) as a
complementary tool for grading essay-type questions in higher education,
focusing on its consistency with human grading and potential to reduce biases.
Using 70 handwritten exams from an introductory sociology course, we evaluated
generative pre-trained transformers (GPT) models' performance in transcribing
and scoring students' responses. GPT models were tested under various settings
for both transcription and grading tasks. Results show high similarity between
human and GPT transcriptions, with GPT-4o-mini outperforming GPT-4o in
accuracy. For grading, GPT demonstrated strong correlations with the human
grader scores, especially when template answers were provided. However,
discrepancies remained, highlighting GPT's role as a "second grader" to flag
inconsistencies for assessment reviewing rather than fully replace human
evaluation. This study contributes to the growing literature on AI in
education, demonstrating its potential to enhance fairness and efficiency in
grading essay-type questions.
|
2501.06465
|
MedCT: A Clinical Terminology Graph for Generative AI Applications in
Healthcare
|
cs.CL cs.AI
|
We introduce the world's first clinical terminology for the Chinese
healthcare community, namely MedCT, accompanied by a clinical foundation model
MedBERT and an entity linking model MedLink. The MedCT system enables
standardized and programmable representation of Chinese clinical data,
successively stimulating the development of new medicines, treatment pathways,
and better patient outcomes for the populous Chinese community. Moreover, the
MedCT knowledge graph provides a principled mechanism to minimize the
hallucination problem of large language models (LLMs), therefore achieving
significant levels of accuracy and safety in LLM-based clinical applications.
By leveraging the LLMs' emergent capabilities of generativeness and
expressiveness, we were able to rapidly build a production-quality terminology
system and deploy it in real-world clinical settings within three months,
whereas classical terminologies like SNOMED CT have gone through more than
twenty years of development. Our experiments show that the MedCT system achieves
state-of-the-art (SOTA) performance in semantic matching and entity linking
tasks, not only for Chinese but also for English. We also conducted a
longitudinal field experiment by applying MedCT and LLMs in a representative
spectrum of clinical tasks, including electronic health record (EHR)
auto-generation and medical document search for diagnostic decision making. Our
study shows a multitude of values of MedCT for clinical workflows and patient
outcomes, especially in the new genre of clinical LLM applications. We present
our approach in sufficient engineering detail, such that implementing a
clinical terminology for other non-English societies should be readily
reproducible. We openly release our terminology, models, and algorithms, along
with real-world clinical datasets, to support further development.
|
2501.06466
|
CNN-powered micro- to macro-scale flow modeling in deformable porous
media
|
physics.flu-dyn cs.CV cs.LG
|
This work introduces a novel application for predicting the macroscopic
intrinsic permeability tensor in deformable porous media, using a limited set
of micro-CT images of real microgeometries. The primary goal is to develop an
efficient, machine-learning (ML)-based method that overcomes the limitations of
traditional permeability estimation techniques, which often rely on
time-consuming experiments or computationally expensive fluid dynamics
simulations. The novelty of this work lies in leveraging Convolutional Neural
Networks (CNN) to predict pore-fluid flow behavior under deformation and
anisotropic flow conditions. Particularly, the described approach employs
binarized CT images of porous micro-structure as inputs to predict the
symmetric second-order permeability tensor, a critical parameter in continuum
porous media flow modeling. The methodology comprises four key steps: (1)
constructing a dataset of CT images from Bentheim sandstone at different
volumetric strain levels; (2) performing pore-scale simulations of single-phase
flow using the lattice Boltzmann method (LBM) to generate permeability data;
(3) training the CNN model with the processed CT images as inputs and
permeability tensors as outputs; and (4) exploring techniques to improve model
generalization, including data augmentation and alternative CNN architectures.
Examples are provided to demonstrate the CNN's capability to accurately predict
the permeability tensor, a crucial parameter in various disciplines such as
geotechnical engineering, hydrology, and material science. An exemplary source
code is made available for interested readers.
|
2501.06467
|
Retrieval-Augmented Dialogue Knowledge Aggregation for Expressive
Conversational Speech Synthesis
|
cs.CL
|
Conversational speech synthesis (CSS) aims to take the current dialogue (CD)
history as a reference to synthesize expressive speech that aligns with the
conversational style. Unlike CD, stored dialogue (SD) contains preserved
dialogue fragments from earlier stages of user-agent interaction, which include
style expression knowledge relevant to scenarios similar to those in CD. Note
that this knowledge plays a significant role in enabling the agent to
synthesize expressive conversational speech that generates empathetic feedback.
However, prior research has overlooked this aspect. To address this issue, we
propose a novel Retrieval-Augmented Dialogue Knowledge Aggregation scheme for
expressive CSS, termed RADKA-CSS, which includes three main components: 1) To
effectively retrieve dialogues from SD that are similar to CD in terms of both
semantics and style, we first build a stored dialogue semantic-style database
(SDSSD) which includes the text and audio samples. Then, we design a
multi-attribute retrieval scheme to match the dialogue semantic and style
vectors of the CD with the stored dialogue semantic and style vectors in the
SDSSD, retrieving the most similar dialogues. 2) To effectively utilize the
style knowledge from CD and SD, we propose adopting the multi-granularity graph
structure to encode the dialogue and introducing a multi-source style knowledge
aggregation mechanism. 3) Finally, the aggregated style knowledge is fed into
the speech synthesizer to help the agent synthesize expressive speech that
aligns with the conversational style. We conducted a comprehensive and in-depth
experiment based on the DailyTalk dataset, which is a benchmarking dataset for
the CSS task.
Both objective and subjective evaluations demonstrate that RADKA-CSS
outperforms baseline models in expressiveness rendering. Code and audio samples
can be found at: https://github.com/Coder-jzq/RADKA-CSS.
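The multi-attribute retrieval step matches the CD's semantic and style vectors against those stored in the SDSSD. A minimal sketch of such a scheme is below, using cosine similarity and a weighting `alpha` between the two attributes; the exact combination rule is an assumption, not the paper's formulation.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8)

def retrieve(cd_sem, cd_style, sd_sems, sd_styles, alpha=0.5, top_k=2):
    """Score each stored dialogue by a weighted sum of semantic and style
    similarity to the current dialogue; return indices of the top-k matches."""
    scores = [alpha * cosine(cd_sem, s) + (1 - alpha) * cosine(cd_style, t)
              for s, t in zip(sd_sems, sd_styles)]
    return list(np.argsort(scores)[::-1][:top_k])
```

Retrieving on both attributes jointly avoids returning dialogues that are semantically close but stylistically mismatched (or vice versa).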
|
2501.06468
|
First Token Probability Guided RAG for Telecom Question Answering
|
cs.CL cs.AI
|
Large Language Models (LLMs) have garnered significant attention for their
impressive general-purpose capabilities. For applications requiring intricate
domain knowledge, Retrieval-Augmented Generation (RAG) has shown a distinct
advantage in incorporating domain-specific information into LLMs. However,
existing RAG research has not fully addressed the challenges of Multiple Choice
Question Answering (MCQA) in telecommunications, particularly in terms of
retrieval quality and mitigating hallucinations. To tackle these challenges, we
propose a novel first token probability guided RAG framework. This framework
leverages confidence scores to optimize key hyperparameters, such as chunk
number and chunk window size, while dynamically adjusting the context. Our
method starts by retrieving the most relevant chunks and generates a single
token as the potential answer. The probabilities of all options are then
normalized to serve as confidence scores, which guide the dynamic adjustment of
the context. By iteratively optimizing the hyperparameters based on these
confidence scores, we can continuously improve RAG performance. We conducted
experiments to validate the effectiveness of our framework, demonstrating its
potential to enhance accuracy in domain-specific MCQA tasks.
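The core confidence signal is easy to sketch: the first-token logits of the answer options are softmax-normalized into probabilities, and the resulting confidence drives the context adjustment. The sketch below is a minimal illustration; the specific adjustment rule (`adjust_chunks`) and its thresholds are hypothetical, not the paper's tuned values.

```python
import numpy as np

def option_confidences(first_token_logits, options=("A", "B", "C", "D")):
    """Normalize the first-token logits of the answer options into
    confidence scores via a softmax; return the best option and scores."""
    z = np.array(first_token_logits, dtype=float)
    z -= z.max()  # numerical stability
    p = np.exp(z) / np.exp(z).sum()
    return options[int(np.argmax(p))], p

def adjust_chunks(confidence, n_chunks, c_low=0.5, max_chunks=8):
    """Hypothetical rule: widen the retrieved context when confidence is low."""
    return min(n_chunks + 2, max_chunks) if confidence < c_low else n_chunks
```

Iterating retrieval with this feedback loop lets low-confidence answers trigger a broader context before the final answer is committed.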
|
2501.06469
|
SP-SLAM: Neural Real-Time Dense SLAM With Scene Priors
|
cs.CV
|
Neural implicit representations have recently shown promising progress in
dense Simultaneous Localization And Mapping (SLAM). However, existing works
fall short in reconstruction quality and real-time performance, mainly due to
inflexible scene representation strategies that do not leverage any prior
information. In this paper, we introduce SP-SLAM, a novel neural RGB-D
SLAM system that performs tracking and mapping in real-time. SP-SLAM computes
depth images and establishes sparse voxel-encoded scene priors near the
surfaces to achieve rapid convergence of the model. Subsequently, the encoded
voxels computed from each single-frame depth image are fused into a global volume,
which facilitates high-fidelity surface reconstruction. Simultaneously, we
employ tri-planes to store scene appearance information, striking a balance
between achieving high-quality geometric texture mapping and minimizing memory
consumption. Furthermore, in SP-SLAM, we introduce an effective optimization
strategy for mapping, allowing the system to continuously optimize the poses of
all historical input frames during runtime without increasing computational
overhead. We conduct extensive evaluations on five benchmark datasets (Replica,
ScanNet, TUM RGB-D, Synthetic RGB-D, 7-Scenes). The results demonstrate that,
compared to existing methods, we achieve superior tracking accuracy and
reconstruction quality, while running at a significantly faster speed.
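The abstract does not spell out the tri-plane lookup, so the sketch below shows one common aggregation used with tri-plane appearance storage: project a 3D query point onto the three axis-aligned feature planes, bilinearly interpolate each, and sum the features. The resolution, normalization to the unit cube, and summation choice are assumptions for illustration.

```python
import numpy as np

def bilerp(plane, u, v):
    """Bilinearly interpolate a feature plane of shape (R, R, C)
    at continuous coordinates (u, v) in [0, R-1]."""
    R = plane.shape[0]
    u0, v0 = int(np.floor(u)), int(np.floor(v))
    u1, v1 = min(u0 + 1, R - 1), min(v0 + 1, R - 1)
    du, dv = u - u0, v - v0
    return ((1 - du) * (1 - dv) * plane[u0, v0] +
            du * (1 - dv) * plane[u1, v0] +
            (1 - du) * dv * plane[u0, v1] +
            du * dv * plane[u1, v1])

def triplane_features(plane_xy, plane_xz, plane_yz, p, res):
    """Project a 3D point in [0,1]^3 onto the three axis-aligned planes,
    interpolate each, and sum the per-plane features."""
    x, y, z = np.clip(p, 0.0, 1.0) * (res - 1)
    return (bilerp(plane_xy, x, y) +
            bilerp(plane_xz, x, z) +
            bilerp(plane_yz, y, z))
```

Storing appearance on three 2D planes instead of a dense 3D grid is what keeps memory consumption roughly quadratic rather than cubic in resolution.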
|
2501.06471
|
The Internet of Large Language Models: An Orchestration Framework for
LLM Training and Knowledge Exchange Toward Artificial General Intelligence
|
cs.AI
|
This paper explores the multi-dimensional challenges faced during the
development of Large Language Models (LLMs), including the massive scale of
model parameters and file sizes, the complexity of development environment
configuration, the singularity of model functionality, and the high costs of
computational resources. To address these challenges, this paper proposes three
core technical solutions: LLM sharing protocol, LLM universal environment
framework, and Agent optimal path module. To address the computational
resource constraints in the early stages of research, we further propose a
joint mining mechanism that achieves bilateral value sharing between computing
power providers and model designers, including breakthrough rewards for
optimal model paths and long-term profit distribution. This provides
researchers with cost-optimized computational resource support and promotes
the continued development of LLM research and applications.
|
2501.06472
|
YO-CSA-T: A Real-time Badminton Tracking System Utilizing YOLO Based on
Contextual and Spatial Attention
|
cs.CV cs.AI
|
The 3D trajectory of a shuttlecock required for a badminton rally robot for
human-robot competition demands real-time performance with high accuracy.
However, the shuttlecock's fast flight speed, various visual effects, and its
tendency to blend with environmental elements such as court lines and lighting
present challenges for rapid and accurate 2D detection. In
this paper, we first propose the YO-CSA detection network, which optimizes and
reconfigures the YOLOv8s model's backbone, neck, and head by incorporating
contextual and spatial attention mechanisms to enhance the model's ability to
extract and integrate both global and local features. Next, we integrate
three major subtasks, detection, prediction, and compensation, into a real-time
3D shuttlecock trajectory detection system. Specifically, our system maps the
2D coordinate sequence extracted by YO-CSA into 3D space using stereo vision,
then predicts the future 3D coordinates based on historical information, and
re-projects them onto the left and right views to update the position
constraints for 2D detection. Additionally, our system includes a compensation
module to fill in missing intermediate frames, ensuring a more complete
trajectory. We conduct extensive experiments on our own dataset to evaluate
both YO-CSA's performance and system effectiveness. Experimental results show
that YO-CSA achieves a high accuracy of 90.43% mAP@0.75, surpassing both
YOLOv8s and YOLO11s. Our system performs excellently, maintaining a speed of
over 130 fps across 12 test sequences.
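The 2D-to-3D mapping and re-projection steps follow the standard rectified pinhole stereo model: depth comes from disparity via Z = f·b/d, and predicted 3D points are projected back into both views as position constraints. The sketch below illustrates that model; the camera parameters are hypothetical values, not the system's calibration.

```python
import numpy as np

def triangulate(uL, vL, uR, f, baseline, cx, cy):
    """Recover a 3D point from a rectified stereo correspondence:
    disparity d = uL - uR, depth Z = f * baseline / d."""
    d = uL - uR
    Z = f * baseline / d
    X = (uL - cx) * Z / f
    Y = (vL - cy) * Z / f
    return np.array([X, Y, Z])

def reproject(P, f, baseline, cx, cy):
    """Project a (predicted) 3D point back into the left and right views,
    yielding the 2D position constraint for the next detection step."""
    X, Y, Z = P
    uL = f * X / Z + cx
    uR = f * (X - baseline) / Z + cx
    v = f * Y / Z + cy
    return (uL, v), (uR, v)
```

Round-tripping a detection through `triangulate` and `reproject` recovers the original pixel coordinates, which is what makes the re-projected prediction usable as a search constraint.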
|
2501.06475
|
Enhancing Multi-Modal Video Sentiment Classification Through
Semi-Supervised Clustering
|
cs.CV cs.LG
|
Understanding emotions in videos is a challenging task. However, videos
contain several modalities which make them a rich source of data for machine
learning and deep learning tasks. In this work, we aim to improve video
sentiment classification by focusing on three key modalities: the video itself, the
accompanying text, and the acoustic features. To address the limitations of
relying on large labeled datasets, we are developing a method that utilizes
clustering-based semi-supervised pre-training to extract meaningful
representations from the data. This pre-training step identifies patterns in
the video and text data, allowing the model to learn underlying structures and
relationships without requiring extensive labeled information at the outset.
Once these patterns are established, we fine-tune the system in a supervised
manner to classify the sentiment expressed in videos. We believe that this
multi-modal approach, combining clustering with supervised fine-tuning, will
lead to more accurate and insightful sentiment classification, especially in
cases where labeled data is limited.
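One simple form of clustering-based pseudo-labeling for such pre-training is plain k-means over fused features, with cluster assignments serving as surrogate targets. The sketch below is an illustration, not the authors' method; the first-k initialization and the idea of fusing video/text/audio features by concatenation before clustering are simplifying assumptions.

```python
import numpy as np

def kmeans(X, k, iters=20):
    """Plain k-means; the returned labels can act as pseudo-labels for
    semi-supervised pre-training. Initializes from the first k points
    (a simplification of the usual random or k-means++ init)."""
    centers = X[:k].astype(float).copy()
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        # Assign each point (e.g. a concatenated multi-modal feature)
        # to its nearest center.
        d = np.linalg.norm(X[:, None] - centers[None], axis=-1)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return centers, labels
```

After this step, a classifier head would be fine-tuned on the limited labeled data, with the clustering having already shaped the representation.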
|