| id | title | categories | abstract |
|---|---|---|---|
2501.10122
|
Integrating Mediumband with Emerging Technologies: Unified Vision for 6G
and Beyond Physical Layer
|
cs.IT math.IT
|
In this paper, we present a vision for the physical layer of 6G and beyond,
where emerging physical layer technologies integrate to drive wireless links
toward mediumband operation, addressing a major challenge: deep fading, a
prevalent, and perhaps the most consequential, obstacle to wireless
communication link performance. By leveraging recent insights into wireless
channel fundamentals and advancements in computing, multi-modal sensing, and
AI, we articulate how reflecting surfaces (RS), sensing, digital twins (DTs),
ray-tracing, and AI can work synergistically to lift the burden of deep fading
in future wireless communication networks. This refreshingly new approach
promises transformative improvements in reliability, spectral efficiency,
energy efficiency, and network resilience, positioning 6G for truly superior
performance.
|
2501.10124
|
Gene Regulatory Network Inference in the Presence of Selection Bias and
Latent Confounders
|
cs.LG
|
Gene Regulatory Network Inference (GRNI) aims to identify causal
relationships among genes using gene expression data, providing insights into
regulatory mechanisms. A significant yet often overlooked challenge is
selection bias, a process where only cells meeting specific criteria, such as
gene expression thresholds, survive or are observed, distorting the true joint
distribution of genes and thus biasing GRNI results. Furthermore, gene
expression is influenced by latent confounders, such as non-coding RNAs, which
add complexity to GRNI. To address these challenges, we propose GISL (Gene
Regulatory Network Inference in the presence of Selection bias and Latent
confounders), a novel algorithm to infer true regulatory relationships in the
presence of selection and confounding issues. Leveraging data obtained via
multiple gene perturbation experiments, we show that the true regulatory
relationships, as well as the selection processes and latent confounders, can be
partially identified without strong parametric models and under mild graphical
assumptions. Experimental results on both synthetic and real-world single-cell
gene expression datasets demonstrate the superiority of GISL over existing
methods.
|
2501.10128
|
FECT: Classification of Breast Cancer Pathological Images Based on
Fusion Features
|
eess.IV cs.CV
|
Breast cancer is one of the most common cancers among women globally, with
early diagnosis and precise classification being crucial. With the advancement
of deep learning and computer vision, the automatic classification of breast
tissue pathological images has emerged as a research focus. Existing methods
typically rely on singular cell or tissue features and lack design
considerations for morphological characteristics of challenging-to-classify
categories, resulting in suboptimal classification performance. To address
these problems, we propose a novel breast cancer tissue classification model
that fuses features of Edges, Cells, and Tissues (FECT), employing the
ResMTUNet and an attention-based aggregator to extract and aggregate these
features. Extensive testing on the BRACS dataset demonstrates that our model
surpasses current advanced methods in terms of classification accuracy and F1
scores. Moreover, due to its feature fusion that aligns with the diagnostic
approach of pathologists, our model exhibits interpretability and holds promise
for significant roles in future clinical applications.
|
2501.10129
|
Spatio-temporal Graph Learning on Adaptive Mined Key Frames for
High-performance Multi-Object Tracking
|
cs.CV cs.AI
|
In the realm of multi-object tracking, the challenge of accurately capturing
the spatial and temporal relationships between objects in video sequences
remains a significant hurdle. This is further complicated by frequent
occurrences of mutual occlusions among objects, which can lead to tracking
errors and reduced performance in existing methods. Motivated by these
challenges, we propose a novel adaptive key frame mining strategy that
addresses the limitations of current tracking approaches. Specifically, we
introduce a Key Frame Extraction (KFE) module that leverages reinforcement
learning to adaptively segment videos, thereby guiding the tracker to exploit
the intrinsic logic of the video content. This approach allows us to capture
structured spatial relationships between different objects as well as the
temporal relationships of objects across frames. To tackle the issue of object
occlusions, we have developed an Intra-Frame Feature Fusion (IFF) module.
Unlike traditional graph-based methods that primarily focus on inter-frame
feature fusion, our IFF module uses a Graph Convolutional Network (GCN) to
facilitate information exchange between the target and surrounding objects
within a frame. This innovation significantly enhances target
distinguishability and mitigates tracking loss caused by occlusions and
appearance similarity. By combining the strengths of both long and short trajectories and
considering the spatial relationships between objects, our proposed tracker
achieves impressive results on the MOT17 dataset, i.e., 68.6 HOTA, 81.0 IDF1,
66.6 AssA, and 893 IDS, proving its effectiveness and accuracy.
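For reference, the message-passing step that an IFF-style module builds on can be sketched with a generic GCN layer; this is the standard formulation, not the authors' exact module, and the node set (the target plus surrounding objects within one frame) is an assumption for illustration:

```python
# Generic GCN layer (standard formulation, not the paper's IFF module).
# Nodes: the target and surrounding objects within one frame; A encodes their relations.
import torch

def gcn_layer(H, A, W):
    """H: (n, d) node features; A: (n, n) adjacency; W: (d, d_out) weights."""
    A_hat = A + torch.eye(A.shape[0])           # add self-loops
    deg_inv_sqrt = A_hat.sum(dim=1).pow(-0.5)   # D^{-1/2} from node degrees
    A_norm = deg_inv_sqrt[:, None] * A_hat * deg_inv_sqrt[None, :]
    return torch.relu(A_norm @ H @ W)           # symmetric-normalized propagation
```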
|
2501.10131
|
ACE: Anatomically Consistent Embeddings in Composition and Decomposition
|
cs.CV
|
Medical images acquired from standardized protocols show consistent
macroscopic or microscopic anatomical structures, and these structures consist
of composable/decomposable organs and tissues, but existing self-supervised
learning (SSL) methods do not appreciate such composable/decomposable structure
attributes inherent to medical images. To overcome this limitation, this paper
introduces a novel SSL approach called ACE to learn anatomically consistent
embedding via composition and decomposition with two key branches: (1) global
consistency, capturing discriminative macro-structures via extracting global
features; (2) local consistency, learning fine-grained anatomical details from
composable/decomposable patch features via corresponding matrix matching.
Experimental results across 6 datasets and 2 backbones, evaluated in few-shot
learning, fine-tuning, and property analysis, show ACE's superior robustness,
transferability, and clinical potential. The innovations of our ACE lie in
grid-wise image cropping, leveraging the intrinsic properties of
compositionality and decompositionality of medical images, bridging the
semantic gap from high-level pathologies to low-level tissue anomalies, and
providing a new SSL method for medical imaging.
|
2501.10132
|
ComplexFuncBench: Exploring Multi-Step and Constrained Function Calling
under Long-Context Scenario
|
cs.CL
|
Enhancing large language models (LLMs) with real-time APIs can help generate
more accurate and up-to-date responses. However, evaluating the function
calling abilities of LLMs in real-world scenarios remains under-explored due to
the complexity of data collection and evaluation. In this work, we introduce
ComplexFuncBench, a benchmark for complex function calling across five
real-world scenarios. Compared to existing benchmarks, ComplexFuncBench
encompasses multi-step and constrained function calling, which requires
long parameter filling, parameter value reasoning, and 128k long context.
Additionally, we propose an automatic framework, ComplexEval, for
quantitatively evaluating complex function calling tasks. Through comprehensive
experiments, we demonstrate the deficiencies of state-of-the-art LLMs in
function calling and suggest future directions for optimizing these
capabilities. The data and code are available at
\url{https://github.com/THUDM/ComplexFuncBench}.
|
2501.10134
|
Exploring the Impact of Generative Artificial Intelligence in Education:
A Thematic Analysis
|
cs.AI cs.HC cs.LG
|
The recent advancements in Generative Artificial Intelligence (GenAI)
technology have been transformative for the field of education. Large Language
Models (LLMs) such as ChatGPT and Bard can be leveraged to automate boilerplate
tasks, create content for personalised teaching, and handle repetitive tasks to
allow more time for creative thinking. However, it is important to develop
guidelines, policies, and assessment methods in the education sector to ensure
the responsible integration of these tools. In this article, thematic analysis
has been performed on seven essays obtained from professionals in the education
sector to understand the advantages and pitfalls of using GenAI models such as
ChatGPT and Bard in education. Exploratory Data Analysis (EDA) has been
performed on the essays to extract further insights from the text. The study
found several themes which highlight benefits and drawbacks of GenAI tools, as
well as suggestions to overcome these limitations and ensure that students are
using these tools in a responsible and ethical manner.
|
2501.10137
|
Visual Exploration of Stopword Probabilities in Topic Models
|
cs.HC cs.LG
|
Stopword removal is a critical stage in many Machine Learning methods but
often receives little consideration; mishandled stopwords interfere with model
visualizations and disrupt user confidence. Inappropriately chosen or hastily
omitted stopwords not only lead to suboptimal performance but also
significantly affect the quality of models, thus reducing the willingness of
practitioners and stakeholders to rely on the output visualizations. This paper
proposes a novel extraction method that provides a corpus-specific
probabilistic estimation of stopword likelihood and an interactive
visualization system to support their analysis. We evaluated our approach and
interface using real-world data, a commonly used Machine Learning method (Topic
Modelling), and a comprehensive qualitative experiment probing user confidence.
The results of our work show that our system increases user confidence in the
credibility of topic models by (1) returning reasonable probabilities, (2)
generating an appropriate and representative extension of common stopword
lists, and (3) providing an adjustable threshold for estimating and analyzing
stopwords visually. Finally, we discuss insights, recommendations, and best
practices to support practitioners while improving the output of Machine
Learning methods and topic model visualizations with robust stopword analysis
and removal.
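As a rough illustration only (a hedged proxy, not the extraction method proposed in the paper), a corpus-specific stopword likelihood could combine document frequency with how uniformly a term spreads across topics:

```python
# Illustrative stopword-likelihood proxy: frequent terms spread near-uniformly
# across topics score high. Not the paper's actual estimator.
import numpy as np

def stopword_scores(doc_term_counts, topic_term_dist):
    """doc_term_counts: (n_docs, n_terms); topic_term_dist: (n_topics, n_terms)."""
    df = (doc_term_counts > 0).mean(axis=0)            # document frequency in [0, 1]
    p = topic_term_dist / topic_term_dist.sum(axis=0)  # per-term distribution over topics
    entropy = -(p * np.log(p + 1e-12)).sum(axis=0)     # high if term is topic-agnostic
    entropy /= np.log(topic_term_dist.shape[0])        # normalize to [0, 1]
    return df * entropy                                # threshold adjustably, as in the UI
```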
|
2501.10139
|
Conformal Prediction Sets with Improved Conditional Coverage using Trust
Scores
|
cs.LG cs.AI stat.ME stat.ML
|
Standard conformal prediction offers a marginal guarantee on coverage, but
for prediction sets to be truly useful, they should ideally ensure coverage
conditional on each test point. Unfortunately, it is impossible to achieve
exact, distribution-free conditional coverage in finite samples. In this work,
we propose an alternative conformal prediction algorithm that targets coverage
where it matters most--in instances where a classifier is overconfident in its
incorrect predictions. We start by dissecting miscoverage events in
marginally-valid conformal prediction, and show that miscoverage rates vary
based on the classifier's confidence and its deviation from the Bayes optimal
classifier. Motivated by this insight, we develop a variant of conformal
prediction that targets coverage conditional on a reduced set of two variables:
the classifier's confidence in a prediction and a nonparametric trust score
that measures its deviation from the Bayes classifier. Empirical evaluation on
multiple image datasets shows that our method generally improves conditional
coverage properties compared to standard conformal prediction, including
class-conditional coverage, coverage over arbitrary subgroups, and coverage
over demographic groups.
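For context, a minimal split-conformal baseline with only a marginal guarantee looks as follows; the paper's variant additionally conditions on the classifier's confidence and a trust score, which this sketch does not implement:

```python
# Minimal split-conformal prediction sets (marginal coverage only).
import numpy as np

def conformal_sets(cal_probs, cal_labels, test_probs, alpha=0.1):
    """cal_probs: (n, K) softmax outputs on held-out calibration data."""
    n = len(cal_labels)
    scores = 1.0 - cal_probs[np.arange(n), cal_labels]    # nonconformity of true labels
    level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)  # finite-sample correction
    q = np.quantile(scores, level, method="higher")
    return [np.where(1.0 - p <= q)[0] for p in test_probs]  # labels kept per test point
```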
|
2501.10141
|
Enhancing UAV Path Planning Efficiency Through Accelerated Learning
|
cs.LG cs.AI
|
Unmanned Aerial Vehicles (UAVs) are increasingly essential in various fields
such as surveillance, reconnaissance, and telecommunications. This study aims
to develop a learning algorithm for the path planning of UAV wireless
communication relays, which can reduce storage requirements and accelerate Deep
Reinforcement Learning (DRL) convergence. Assuming the system possesses terrain
maps of the area and can estimate user locations using localization algorithms
or direct GPS reporting, it can input these parameters into the learning
algorithms to achieve optimized path planning performance. However, higher
resolution terrain maps are necessary to extract topological information such
as terrain height, object distances, and signal blockages. This requirement
increases memory and storage demands on UAVs while also lengthening convergence
times in DRL algorithms. Similarly, defining the telecommunication coverage map
in UAV wireless communication relays using these terrain maps and user position
estimations demands higher memory and storage utilization for the learning path
planning algorithms. Our approach reduces path planning training time by
applying a dimensionality reduction technique based on Principal Component
Analysis (PCA), sample combination, Prioritized Experience Replay (PER), and
the combination of Mean Squared Error (MSE) and Mean Absolute Error (MAE) loss
calculations in the coverage map estimates, thereby enhancing a Twin Delayed
Deep Deterministic Policy Gradient (TD3) algorithm. The proposed solution
reduces the convergence episodes needed for basic training by approximately
four times compared to the traditional TD3.
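A minimal sketch of the PCA step, assuming flattened coverage/terrain maps as input and an illustrative component count (the other ingredients, PER and the MSE+MAE loss, are not shown):

```python
# Hedged sketch: compress flattened coverage maps with PCA before the TD3 state.
import numpy as np
from sklearn.decomposition import PCA

maps = np.random.rand(500, 64 * 64)            # toy stand-in for flattened coverage maps
pca = PCA(n_components=32).fit(maps)           # 32 components is illustrative only
state = pca.transform(maps[:1])                # low-dimensional input for the DRL agent
approx_map = pca.inverse_transform(state)      # approximate reconstruction, if needed
```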
|
2501.10143
|
A Worrying Reproducibility Study of Intent-Aware Recommendation Models
|
cs.IR
|
Lately, we have observed a growing interest in intent-aware recommender
systems (IARS). The promise of such systems is that they are capable of
generating better recommendations by predicting and considering the underlying
motivations and short-term goals of consumers. From a technical perspective,
various sophisticated neural models were recently proposed in this emerging and
promising area. In the broader context of complex neural recommendation models,
a growing number of research works unfortunately indicate that (i) reproducing
such works is often difficult and (ii) that the true benefits of such models
may be limited in reality, e.g., because the reported improvements were
obtained through comparisons with untuned or weak baselines. In this work, we
investigate if recent research in IARS is similarly affected by such problems.
Specifically, we tried to reproduce five contemporary IARS models that were
published in top-level outlets, and we benchmarked them against a number of
traditional non-neural recommendation models. In two of the cases, running the
provided code with the optimal hyperparameters reported in the paper did not
reproduce the reported results. Worryingly, we find that all examined
IARS approaches are consistently outperformed by at least one traditional
model. These findings point to sustained methodological issues and to a
pressing need for more rigorous scholarly practices.
|
2501.10144
|
A Vision-Language Framework for Multispectral Scene Representation Using
Language-Grounded Features
|
cs.CV
|
Scene understanding in remote sensing often faces challenges in generating
accurate representations for complex environments such as various land use
areas or coastal regions, which may also include snow, clouds, or haze. To
address this, we present a vision-language framework named Spectral LLaVA,
which integrates multispectral data with vision-language alignment techniques
to enhance scene representation and description. Using the BigEarthNet v2
dataset from Sentinel-2, we establish a baseline with RGB-based scene
descriptions and further demonstrate substantial improvements through the
incorporation of multispectral information. Our framework optimizes a
lightweight linear projection layer for alignment while keeping the vision
backbone of SpectralGPT frozen. Our experiments encompass scene classification
using linear probing and language modeling for jointly performing scene
classification and description generation. Our results highlight Spectral
LLaVA's ability to produce detailed and accurate descriptions, particularly for
scenarios where RGB data alone proves inadequate, while also enhancing
classification performance by refining SpectralGPT features into semantically
meaningful representations.
|
2501.10150
|
Dual Debiasing: Remove Stereotypes and Keep Factual Gender for Fair
Language Modeling and Translation
|
cs.CL cs.AI
|
Mitigation of biases, such as language models' reliance on gender
stereotypes, is a crucial endeavor required for the creation of reliable and
useful language technology. A key aspect of debiasing is to ensure that
the models preserve their versatile capabilities, including their ability to
solve language tasks and equitably represent various genders. To address this
issue, we introduce a streamlined Dual Debiasing Algorithm through Model
Adaptation (2DAMA). Novel Dual Debiasing enables robust reduction of
stereotypical bias while preserving desired factual gender information encoded
by language models. We show that 2DAMA effectively reduces gender bias in
English and is one of the first approaches facilitating the mitigation of
stereotypical tendencies in translation. The proposed method's key advantage is
the preservation of factual gender cues, which are useful in a wide range of
natural language processing tasks.
|
2501.10151
|
Topology-Driven Attribute Recovery for Attribute Missing Graph Learning
in Social Internet of Things
|
cs.AI
|
With the advancement of information technology, the Social Internet of Things
(SIoT) has fostered the integration of physical devices and social networks,
deepening the study of complex interaction patterns. Text Attribute Graphs
(TAGs) capture both topological structures and semantic attributes, enhancing
the analysis of complex interactions within the SIoT. However, existing graph
learning methods are typically designed for complete attributed graphs, and the
common issue of missing attributes in Attribute Missing Graphs (AMGs) increases
the difficulty of analysis tasks. To address this, we propose the
Topology-Driven Attribute Recovery (TDAR) framework, which leverages
topological data for AMG learning. TDAR introduces an improved pre-filling
method for initial attribute recovery using native graph topology.
Additionally, it dynamically adjusts propagation weights and incorporates
homogeneity strategies within the embedding space to suit AMGs' unique
topological structures, effectively reducing noise during information
propagation. Extensive experiments on public datasets demonstrate that TDAR
significantly outperforms state-of-the-art methods in attribute reconstruction
and downstream tasks, offering a robust solution to the challenges posed by
AMGs. The code is available at https://github.com/limengran98/TDAR.
|
2501.10152
|
Quantum Advantage in Private Multiple Hypothesis Testing
|
quant-ph cs.IT math.IT
|
For multiple hypothesis testing based on classical data samples, we
demonstrate a quantum advantage in the optimal privacy-utility trade-off (PUT),
where the privacy and utility measures are set to (quantum) local differential
privacy and the pairwise-minimum Chernoff information, respectively. To show
the quantum advantage, we consider some class of hypotheses that we coin
smoothed point masses. For such hypotheses, we derive an upper bound of the
optimal PUT achieved by classical mechanisms, which is tight for some cases,
and propose a certain quantum mechanism which achieves a better PUT than the
upper bound. The proposed quantum mechanism consists of a classical-quantum
channel whose outputs are pure states corresponding to a symmetric
informationally complete positive operator-valued measure (SIC-POVM), and a
depolarizing channel.
|
2501.10153
|
Region-wise stacking ensembles for estimating brain-age using MRI
|
cs.LG cs.AI
|
Predictive modeling using structural magnetic resonance imaging (MRI) data is
a prominent approach to study brain-aging. Machine learning algorithms and
feature extraction methods have been employed to improve predictions and
explore healthy and accelerated aging, e.g., in neurodegenerative and
psychiatric disorders. The high-dimensional MRI data pose challenges to
building generalizable and interpretable models, as well as to data privacy. Common
practices are resampling or averaging voxels within predefined parcels, which
reduces anatomical specificity and biological interpretability as voxels within
a region may differently relate to aging. Effectively, naive fusion by
averaging can result in information loss and reduced accuracy. We present a
conceptually novel two-level stacking ensemble (SE) approach. The first level
comprises regional models for predicting individuals' age based on voxel-wise
information, fused by a second-level model yielding final predictions. Eight
data fusion scenarios were explored using as input Gray matter volume (GMV)
estimates from four datasets covering the adult lifespan. Performance, measured
using mean absolute error (MAE), R2, correlation and prediction bias, showed
that SE outperformed the region-wise averages. The best performance was
obtained when first-level regional predictions were obtained as out-of-sample
predictions on the application site with second-level models trained on
independent and site-specific data (MAE=4.75 vs baseline regional mean GMV
MAE=5.68). Performance improved as more datasets were used for training.
First-level predictions showed improved and more robust aging signal providing
new biological insights and enhanced data privacy. Overall, the SE improves
accuracy compared to the baseline while preserving or enhancing data privacy.
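A minimal sketch of the two-level stacking idea, assuming synthetic data and ridge models (the study itself uses regional GMV features and site-aware out-of-sample first-level predictions):

```python
# Two-level stacking: regional models first, a fusion model on their predictions.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
regions = {r: rng.normal(size=(200, 50)) for r in range(10)}  # toy voxel features per region
age = rng.uniform(20, 80, size=200)

# Level 1: one model per region; out-of-sample predictions via cross-validation.
level1 = np.column_stack(
    [cross_val_predict(Ridge(alpha=1.0), X, age, cv=5) for X in regions.values()]
)

# Level 2: fuse regional age predictions into the final estimate.
meta = Ridge(alpha=1.0).fit(level1, age)
final_pred = meta.predict(level1)
```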
|
2501.10156
|
Tethered Variable Inertial Attitude Control Mechanisms through a Modular
Jumping Limbed Robot
|
cs.RO
|
This paper presents the concept of a tethered variable inertial attitude
control mechanism for a modular jumping-limbed robot designed for planetary
exploration in low-gravity environments. The system, named SPLITTER, comprises
two sub-10 kg quadrupedal robots connected by a tether, capable of executing
successive jumping gaits and stabilizing in-flight using inertial morphing
technology. Through model predictive control (MPC), attitude control was
demonstrated by adjusting the limbs and tether length to modulate the system's
principal moments of inertia. Our results indicate that this control strategy
allows the robot to stabilize during flight phases without needing traditional
flywheel-based systems or relying on aerodynamics, making the approach
mass-efficient and ideal for small-scale planetary robots' successive jumps.
The paper outlines the dynamics, MPC formulation for inertial morphing,
actuator requirements, and simulation results, illustrating the potential of
agile exploration for small-scale rovers in low-gravity environments like the
Moon or asteroids.
|
2501.10157
|
Structure-guided Deep Multi-View Clustering
|
cs.CV
|
Deep multi-view clustering seeks to utilize the abundant information from
multiple views to improve clustering performance. However, most of the existing
clustering methods often neglect to fully mine multi-view structural
information and fail to explore the distribution of multi-view data, limiting
clustering performance. To address these limitations, we propose a
structure-guided deep multi-view clustering model. Specifically, we introduce a
positive sample selection strategy based on neighborhood relationships, coupled
with a corresponding loss function. This strategy constructs multi-view nearest
neighbor graphs to dynamically redefine positive sample pairs, enabling the
mining of local structural information within multi-view data and enhancing the
reliability of positive sample selection. Additionally, we introduce a Gaussian
distribution model to uncover latent structural information and design a
loss function to reduce discrepancies between view embeddings. These two
strategies explore multi-view structural information and data distribution from
different perspectives, enhancing consistency across views and increasing
intra-cluster compactness. Experimental evaluations demonstrate the efficacy of
our method, showing significant improvements in clustering performance on
multiple benchmark datasets compared to state-of-the-art multi-view clustering
approaches.
|
2501.10160
|
CSSDM Ontology to Enable Continuity of Care Data Interoperability
|
cs.AI
|
The rapid advancement of digital technologies and recent global pandemic
scenarios have led to a growing focus on how these technologies can enhance
healthcare service delivery and workflow to address crises. Action plans that
consolidate existing digital transformation programs are being reviewed to
establish core infrastructure and foundations for sustainable healthcare
solutions. Reforming health and social care to personalize home care, for
example, can help avoid treatment in overcrowded acute hospital settings and
improve the experiences and outcomes for both healthcare professionals and
service users. In this information-intensive domain, addressing the
interoperability challenge through standards-based roadmaps is crucial for
enabling effective connections between health and social care services. This
approach facilitates safe and trustworthy data workflows between different
healthcare system providers. In this paper, we present a methodology for
extracting, transforming, and loading data through a semi-automated process
using a Common Semantic Standardized Data Model (CSSDM) to create personalized
healthcare knowledge graphs (KGs). The CSSDM is grounded in the formal ontology
of ISO 13940 ContSys and incorporates FHIR-based specifications to support
structural attributes for generating KGs. We propose that the CSSDM facilitates
data harmonization and linking, offering an alternative approach to
interoperability. This approach promotes a novel form of collaboration between
companies developing health information systems and cloud-enabled health
services. Consequently, it provides multiple stakeholders with access to
high-quality data and information sharing.
|
2501.10162
|
Convex Physics Informed Neural Networks for the Monge-Amp\`ere Optimal
Transport Problem
|
math.NA cs.LG cs.NA
|
Optimal transportation of raw material from suppliers to customers is an
issue arising in logistics that is addressed here with a continuous model
relying on optimal transport theory. A physics-informed neural network method
is advocated here for the solution of the corresponding generalized
Monge-Amp\`ere equation. Convex neural networks are employed to enforce the convexity of the
solution to the Monge-Amp\`ere equation and obtain a suitable approximation of
the optimal transport map. A particular focus is set on the enforcement of
transport boundary conditions in the loss function. Numerical experiments
illustrate the solution to the optimal transport problem in several
configurations, and sensitivity analyses are performed.
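A minimal input-convex network sketch (an assumed architecture, not the authors' code): convexity in x follows from non-negative weights on the hidden path and convex, non-decreasing activations, so the gradient of the learned potential can serve as a candidate transport map:

```python
# Input-convex neural network (ICNN) sketch; the potential's gradient gives T = grad u.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ICNN(nn.Module):
    def __init__(self, dim=2, width=64, depth=3):
        super().__init__()
        self.wx = nn.ModuleList([nn.Linear(dim, width) for _ in range(depth)])
        self.wz = nn.ModuleList([nn.Linear(width, width, bias=False) for _ in range(depth - 1)])
        self.out_x = nn.Linear(dim, 1)
        self.out_z = nn.Linear(width, 1, bias=False)

    def forward(self, x):
        z = F.softplus(self.wx[0](x))                     # softplus: convex, non-decreasing
        for wx, wz in zip(self.wx[1:], self.wz):
            z = F.softplus(wx(x) + F.linear(z, wz.weight.clamp(min=0)))  # non-negative weights
        return self.out_x(x) + F.linear(z, self.out_z.weight.clamp(min=0))

x = torch.randn(8, 2, requires_grad=True)
u = ICNN()(x).sum()
T = torch.autograd.grad(u, x)[0]                          # candidate optimal transport map
```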
|
2501.10163
|
Invariant Theory and Magic State Distillation
|
quant-ph cs.IT hep-th math.IT
|
We show that the performance of a linear self-orthogonal $GF(4)$ code for
magic state distillation of Bravyi and Kitaev's $|T\rangle$-state is
characterized by its simple weight enumerator. We compute weight enumerators of
all such codes with fewer than 20 qubits and find none whose threshold exceeds
that of the 5-qubit code. Using constraints on weight enumerators from
invariant theory and linear programming, we establish bounds on the exponent
characterizing noise suppression of a $|T\rangle$-state distillation protocol.
We also obtain new non-negativity constraints on such weight enumerators by
demanding consistency of the associated magic state distillation routine. These
constraints yield new bounds on the distances of classical Hermitian self-dual
and maximal self-orthogonal linear $GF(4)$ codes, notably proving the
nonexistence of such codes with parameters $[12m, 6m, 4m+2]_{GF(4)}$.
|
2501.10165
|
MechIR: A Mechanistic Interpretability Framework for Information
Retrieval
|
cs.IR
|
Mechanistic interpretability is an emerging diagnostic approach for neural
models that has gained traction in broader natural language processing domains.
This paradigm aims to provide attribution to components of neural systems where
causal relationships between hidden layers and output were previously
uninterpretable. As the use of neural models in IR for retrieval and evaluation
becomes ubiquitous, we need to ensure that we can interpret why a model
produces a given output for both transparency and the betterment of systems.
This work comprises a flexible framework for diagnostic analysis and
intervention within these highly parametric neural systems specifically
tailored for IR tasks and architectures. In providing such a framework, we look
to facilitate further research in interpretable IR with a broader scope for
practical interventions derived from mechanistic interpretability. We provide
preliminary analysis and look to demonstrate our framework through an axiomatic
lens to show its applications and ease of use for those IR practitioners
inexperienced in this emerging paradigm.
|
2501.10166
|
Implementing Finite Impulse Response Filters on Quantum Computers
|
eess.SP cs.IT math.IT quant-ph
|
While signal processing is a mature area, its connections with quantum
computing have received less attention. In this work, we propose approaches
that perform classical discrete-time signal processing using quantum systems.
Our approaches encode the classical discrete-time input signal into quantum
states, and design unitaries to realize classical concepts of finite impulse
response (FIR) filters. We also develop strategies to cascade lower-order
filters to realize higher-order filters through designing appropriate unitary
operators. Finally, a few directions for processing quantum states on classical
systems after converting them to classical signals are suggested for future
work.
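As a classical reference for what the proposed unitaries must reproduce, an FIR filter is a tapped-delay-line convolution, and cascading low-order filters amounts to convolving their taps (values below are illustrative):

```python
# Classical FIR filtering: y[n] = sum_k h[k] x[n - k].
import numpy as np

h = np.array([0.25, 0.5, 0.25])                 # example low-pass taps (assumed values)
x = np.sin(2 * np.pi * 0.05 * np.arange(64))    # toy input signal
y = np.convolve(x, h)[: len(x)]                 # output the quantum circuit should match
h_cascaded = np.convolve(h, h)                  # cascading low-order filters -> higher order
```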
|
2501.10172
|
Mean and Variance Estimation Complexity in Arbitrary Distributions via
Wasserstein Minimization
|
cs.LG
|
Parameter estimation is a fundamental challenge in machine learning, crucial
for tasks such as neural network weight fitting and Bayesian inference. This
paper focuses on the complexity of estimating translation $\boldsymbol{\mu} \in
\mathbb{R}^l$ and shrinkage $\sigma \in \mathbb{R}_{++}$ parameters for a
distribution of the form $\frac{1}{\sigma^l} f_0 \left( \frac{\boldsymbol{x} -
\boldsymbol{\mu}}{\sigma} \right)$, where $f_0$ is a known density in
$\mathbb{R}^l$ given $n$ samples. We highlight that while the problem is
NP-hard for Maximum Likelihood Estimation (MLE), it is possible to obtain
$\varepsilon$-approximations for arbitrary $\varepsilon > 0$ within
$\text{poly} \left( \frac{1}{\varepsilon} \right)$ time using the Wasserstein
distance.
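A one-dimensional ($l = 1$) illustration of the idea, assuming $f_0$ is a standard normal with computable quantiles: for a location-scale family, minimizing the Wasserstein-2 distance between the empirical law and the model reduces to regressing sorted samples on the quantiles of $f_0$:

```python
# 1-D Wasserstein-2 fit of (mu, sigma) by quantile matching; f0 assumed standard normal.
import numpy as np
from scipy.stats import norm

def fit_location_scale(samples, base_ppf=norm.ppf):
    x = np.sort(samples)
    n = len(x)
    q = base_ppf((np.arange(1, n + 1) - 0.5) / n)   # quantiles of the base density f0
    sigma, mu = np.polyfit(q, x, 1)                 # least squares: slope = sigma, intercept = mu
    return mu, max(sigma, 1e-12)

mu, sigma = fit_location_scale(np.random.normal(3.0, 2.0, size=1000))
```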
|
2501.10173
|
Optimal Restart Strategies for Parameter-dependent Optimization
Algorithms
|
math.OC cs.NE
|
This paper examines restart strategies for algorithms whose successful
termination depends on an unknown parameter $\lambda$. After each restart,
$\lambda$ is increased, until the algorithm terminates successfully. It is
assumed that there is a unique, unknown, optimal value for $\lambda$. For the
algorithm to run successfully, this value must be reached or surpassed. The key
question is whether there exists an optimal strategy for selecting $\lambda$
after each restart taking into account that the computational costs (runtime)
increases with $\lambda$. In this work, potential restart strategies are
classified into parameter-dependent strategy types. A loss function is
introduced to quantify the wasted computational cost relative to the optimal
strategy. A crucial requirement for any efficient restart strategy is that its
loss, relative to the optimal $\lambda$, remains bounded. To this end, upper
and lower bounds of the loss are derived. Using these bounds it will be shown
that not all strategy types are bounded. However, for a particular strategy
type, where $\lambda$ is increased multiplicatively by a constant factor $a$,
the relative loss function is bounded. Furthermore, it will be demonstrated
that within this strategy type, there exists an optimal value for $a$ that
minimizes the maximum relative loss. In the asymptotic limit, this optimal
choice of $a$ does not depend on the unknown optimal $\lambda$.
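A minimal sketch of this multiplicative strategy type, with a stylized cost model (runtime proportional to $\lambda$) as an assumption:

```python
# Restart with lam <- a * lam until success; 'a' is the constant factor analyzed above.
def restart_until_success(run, lam0=1.0, a=2.0):
    """run(lam) -> True iff the algorithm terminates successfully at parameter lam."""
    lam, total_cost = lam0, 0.0
    while True:
        total_cost += lam              # stylized cost: runtime grows with lam
        if run(lam):
            return lam, total_cost     # loss compares total_cost to the optimal lam's cost
        lam *= a                       # multiplicative increase between restarts
```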
|
2501.10175
|
Multi-stage Training of Bilingual Islamic LLM for Neural Passage
Retrieval
|
cs.CL
|
This study examines the use of Natural Language Processing (NLP) technology
within the Islamic domain, focusing on developing an Islamic neural retrieval
model. By leveraging the robust XLM-R model, the research employs a language
reduction technique to create a lightweight bilingual large language model
(LLM). Our approach for domain adaptation addresses the unique challenges faced
in the Islamic domain, where substantial in-domain corpora exist only in Arabic
and remain limited in other languages, including English.
The work utilizes a multi-stage training process for retrieval models,
incorporating large retrieval datasets, such as MS MARCO, and smaller,
in-domain datasets to improve retrieval performance. Additionally, we have
curated an in-domain retrieval dataset in English by employing data
augmentation techniques and involving a reliable Islamic source. This approach
enhances the domain-specific dataset for retrieval, leading to further
performance gains.
The findings suggest that combining domain adaptation and a multi-stage
training method for the bilingual Islamic neural retrieval model enables it to
outperform monolingual models on downstream retrieval tasks.
|
2501.10179
|
A Simple but Effective Closed-form Solution for Extreme Multi-label
Learning
|
cs.IR cs.AI cs.CL cs.LG
|
Extreme multi-label learning (XML) is a task of assigning multiple labels
from an extremely large set of labels to each data instance. Many current
high-performance XML models involve many hyperparameters, which
complicates the tuning process. Additionally, the models themselves are adapted
specifically to XML, which complicates their reimplementation. To remedy this
problem, we propose a simple method based on ridge regression for XML. The
proposed method not only has a closed-form solution but also is composed of a
single hyperparameter. Since there are no precedents on applying ridge
regression to XML, this paper verifies the performance of the method using
various XML benchmark datasets. Furthermore, we enhanced the prediction of
low-frequency labels in XML, which hold informative content. This prediction is
essential yet challenging because of the limited amount of data. Here, we
employed a simple frequency-based weighting. This approach greatly simplifies
the process compared with existing techniques. Experimental results revealed
that it can achieve levels of performance comparable to, or even exceeding,
those of models with numerous hyperparameters. Additionally, we found that the
frequency-based weighting significantly improved the predictive performance for
low-frequency labels, while requiring almost no changes in implementation. The
source code for the proposed method is available on GitHub at
https://github.com/cars1015/XML-ridge.
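A minimal sketch of the closed-form ridge solution for multi-label data, with an assumed inverse-frequency weighting for low-frequency labels (the paper's exact weighting may differ):

```python
# Closed-form ridge for XML: W = (X^T X + lam I)^{-1} X^T Y, one hyperparameter lam.
import numpy as np

def xml_ridge(X, Y, lam=1.0):
    """X: (n, d) features; Y: (n, L) binary label matrix; returns W: (d, L)."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ Y)

def weighted_scores(X_test, W, Y_train):
    freq = Y_train.sum(axis=0).clip(min=1)     # label frequencies on training data
    return (X_test @ W) / freq                 # illustrative up-weighting of rare labels
```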
|
2501.10181
|
Improved learning rates in multi-unit uniform price auctions
|
cs.GT cs.LG
|
Motivated by the strategic participation of electricity producers in
the electricity day-ahead market, we study the problem of online learning in
repeated multi-unit uniform price auctions focusing on the adversarial opposing
bid setting. The main contribution of this paper is the introduction of a new
modeling of the bid space. Indeed, we prove that a learning algorithm
leveraging the structure of this problem achieves a regret of
$\tilde{O}(K^{4/3}T^{2/3})$ under bandit feedback, improving over the bound of
$\tilde{O}(K^{7/4}T^{3/4})$ previously obtained in the literature. This
improved regret rate is tight up to logarithmic terms. Inspired by electricity
reserve markets, we further introduce a different feedback model under which
all winning bids are revealed. This feedback interpolates between the
full-information and bandit scenarios depending on the auctions' results. We
prove that, under this feedback, the algorithm that we propose achieves regret
$\tilde{O}(K^{5/2}\sqrt{T})$.
|
2501.10185
|
Modeling the drying process in hard carbon electrodes based on the
phase-field method
|
cs.CE
|
The present work addresses the simulation of pore emptying during the drying
of battery electrodes. For this purpose, a model based on the multiphase-field
method (MPF) is used, since it is an established approach for modeling and
simulating multiphysical problems. A model based on phase fields is introduced
that takes into account fluid flow, capillary effects, and wetting behavior,
all of which play an important role in drying. In addition, the MPF makes it
possible to track the movement of the liquid-air interface without
computationally expensive adaptive mesh generation. The presented model is used
for the first time to investigate pore emptying in real hard carbon
microstructures. For this purpose, the microstructures of real dried electrodes
are used as input for the simulations. The simulations performed here
demonstrate the importance of considering the resolved microstructural
information compared to models that rely only on statistical geometry
parameters such as pore size distributions. The influence of various parameters
such as different microstructures, fluid viscosity, and the contact angle on
pore emptying are investigated. In addition, this work establishes a
correlation between the capillary number and the breakthrough time of the
solvent as well as the height difference of the solvent front at the time of
breakthrough. The results indicate that the drying process can be optimized by
doping the particle surface, which changes the contact angle between the fluids
and the particles.
|
2501.10186
|
Generative Artificial Intelligence: Implications for Biomedical and
Health Professions Education
|
cs.AI
|
Generative AI has had a profound impact on biomedicine and health, both in
professional work and in education. Based on large language models (LLMs),
generative AI has been found to perform as well as humans in simulated
situations taking medical board exams, answering clinical questions, solving
clinical cases, applying clinical reasoning, and summarizing information.
Generative AI is also being used widely in education, performing well in
academic courses and their assessments. This review summarizes the successes of
LLMs and highlights some of their challenges in the context of education, most
notably aspects that may undermine the acquisition of knowledge and skills for
professional work. It then provides recommendations for best practices to
overcome these shortcomings in the use of LLMs in education. Although there are
challenges to using generative AI in education, all students and faculty, in
biomedicine and health and beyond, must understand it and be competent in its use.
|
2501.10187
|
Good things come in small packages: Should we adopt Lite-GPUs in AI
infrastructure?
|
cs.AR cs.AI cs.DC
|
To match the blooming demand of generative AI workloads, GPU designers have
so far been trying to pack more and more compute and memory into single complex
and expensive packages. However, there is growing uncertainty about the
scalability of individual GPUs and thus AI clusters, as state-of-the-art GPUs
are already displaying packaging, yield, and cooling limitations. We propose to
rethink the design and scaling of AI clusters through efficiently-connected
large clusters of Lite-GPUs, GPUs with single, small dies and a fraction of the
capabilities of larger GPUs. We think recent advances in co-packaged optics can
be key in overcoming the communication challenges of distributing AI workloads
onto more Lite-GPUs. In this paper, we present the key benefits of Lite-GPUs on
manufacturing cost, blast radius, yield, and power efficiency; and discuss
systems opportunities and challenges around resource, workload, memory, and
network management.
|
2501.10190
|
Temporal Causal Reasoning with (Non-Recursive) Structural Equation
Models
|
cs.AI cs.LO
|
Structural Equation Models (SEM) are the standard approach to representing
causal dependencies between variables in causal models. In this paper we
propose a new interpretation of SEMs when reasoning about Actual Causality, in
which SEMs are viewed as mechanisms transforming the dynamics of exogenous
variables into the dynamics of endogenous variables. This allows us to combine
counterfactual causal reasoning with existing temporal logic formalisms, and to
introduce a temporal logic, CPLTL, for causal reasoning about such structures.
We show that the standard restriction to so-called \textit{recursive} models
(with no cycles in the dependency graph) is not necessary in our approach,
allowing us to reason about mutually dependent processes and feedback loops.
Finally, we introduce new notions of model equivalence for temporal causal
models, and show that CPLTL has an efficient model-checking procedure.
|
2501.10193
|
Surrogate-based multiscale analysis of experiments on thermoplastic
composites under off-axis loading
|
math.NA cond-mat.mtrl-sci cs.LG cs.NA
|
In this paper, we present a surrogate-based multiscale approach to model
constant strain-rate and creep experiments on unidirectional thermoplastic
composites under off-axis loading. In previous contributions, these experiments
were modeled through a single-scale micromechanical simulation under the
assumption of macroscopic homogeneity. Although efficient and accurate in many
scenarios, simulations at low off-axis angles showed significant
discrepancies with the experiments. It was hypothesized that the mismatch was
caused by macroscopic inhomogeneity, which would require a multiscale approach
to capture it. However, full-field multiscale simulations remain
computationally prohibitive. To address this issue, we replace the micromodel
with a Physically Recurrent Neural Network (PRNN), a surrogate model that
combines data-driven components with embedded constitutive models to capture
history-dependent behavior naturally. The explainability of the latent space of
this network is also explored in a transfer learning strategy that requires no
re-training. With the surrogate-based simulations, we confirm the hypothesis
raised on the inhomogeneity of the macroscopic strain field and gain insights
into the influence of adjustment of the experimental setup with oblique
end-tabs. Results from the surrogate-based multiscale approach show better
agreement with experiments than the single-scale micromechanical approach over
a wide range of settings, although with limited accuracy on the creep
experiments, where macroscopic test effects were implicitly taken into account
in the material properties calibration.
|
2501.10195
|
Contributions to the Decision Theoretic Foundations of Machine Learning
and Robust Statistics under Weakly Structured Information
|
stat.ML cs.LG
|
This habilitation thesis is cumulative and, therefore, is collecting and
connecting research that I (together with several co-authors) have conducted
over the last few years. Thus, the absolute core of the work is formed by the
ten publications listed on page 5 under the name Contributions 1 to 10. The
references to the complete versions of these articles are also found in this
list, making them as easily accessible as possible for readers wishing to dive
deep into the different research projects. The chapters following this thesis,
namely Parts A to C and the concluding remarks, serve to place the articles in
a larger scientific context, to (briefly) explain their respective content on a
less formal level, and to highlight some interesting perspectives for future
research in their respective contexts. Naturally, therefore, the following
presentation has neither the level of detail nor the formal rigor that can
(hopefully) be found in the papers. The purpose of the following text is to
provide the reader with easy, high-level access to this interesting and
important research field as a whole, thereby advertising it to a broader
audience.
|
2501.10196
|
Pricing Mechanisms versus Non-Pricing Mechanisms for Demand Side
Management in Microgrids
|
eess.SY cs.SY
|
In this paper, we compare pricing and non-pricing mechanisms for implementing
demand-side management (DSM) mechanisms in a neighborhood in Helsinki, Finland.
We compare load steering based on peak load-reduction using the profile
steering method, and load steering based on market price signals, in terms of
peak loads, losses, and device profiles. We found that there are significant
differences between the two methods; the peak-load reduction control strategies
contribute to reducing peak power and improving power flow stability, while
strategies primarily based on prices result in higher peaks and increased grid
losses. Our results highlight the need to potentially move away from
market-price-based DSM to DSM incentivization and control strategies that are
based on peak load reductions and other system requirements.
|
2501.10197
|
CSHNet: A Novel Information Asymmetric Image Translation Method
|
cs.CV
|
Despite advancements in cross-domain image translation, challenges persist in
asymmetric tasks such as SAR-to-Optical and Sketch-to-Instance conversions,
which involve transforming data from a less detailed domain into one with
richer content. Traditional CNN-based methods are effective at capturing fine
details but struggle with global structure, leading to unwanted merging of
image regions. To address this, we propose the CNN-Swin Hybrid Network
(CSHNet), which combines two key modules: Swin Embedded CNN (SEC) and CNN
Embedded Swin (CES), forming the SEC-CES-Bottleneck (SCB). SEC leverages CNN's
detailed feature extraction while integrating the Swin Transformer's structural
bias. CES, in turn, preserves the Swin Transformer's global integrity,
compensating for CNN's lack of focus on structure. Additionally, CSHNet
includes two components designed to enhance cross-domain information retention:
the Interactive Guided Connection (IGC), which enables dynamic information
exchange between SEC and CES, and Adaptive Edge Perception Loss (AEPL), which
maintains structural boundaries during translation. Experimental results show
that CSHNet outperforms existing methods in both visual quality and performance
metrics across scene-level and instance-level datasets. Our code is available
at: https://github.com/XduShi/CSHNet.
|
2501.10199
|
Adaptive Clustering for Efficient Phenotype Segmentation of UAV
Hyperspectral Data
|
cs.CV eess.IV
|
Unmanned Aerial Vehicles (UAVs) combined with Hyperspectral imaging (HSI)
offer potential for environmental and agricultural applications by capturing
detailed spectral information that enables the prediction of invisible features
like biochemical leaf properties. However, the data-intensive nature of HSI
poses challenges for remote devices, which have limited computational resources
and storage. This paper introduces an Online Hyperspectral Simple Linear
Iterative Clustering algorithm (OHSLIC) framework for real-time tree phenotype
segmentation. OHSLIC reduces inherent noise and computational demands through
adaptive incremental clustering and a lightweight neural network, which
phenotypes trees using leaf contents such as chlorophyll, carotenoids, and
anthocyanins. A hyperspectral dataset is created using a custom simulator that
incorporates realistic leaf parameters and light interactions. Results
demonstrate that OHSLIC achieves superior regression accuracy and segmentation
performance compared to pixel- or window-based methods while significantly
reducing inference time. The method's adaptive clustering enables dynamic
trade-offs between computational efficiency and accuracy, paving the way for
scalable edge-device deployment in HSI applications.
|
2501.10201
|
ODMA-Based Cell-Free Unsourced Random Access with Successive
Interference Cancellation
|
cs.ET cs.IT cs.SY eess.SY math.IT
|
We consider the unsourced random access problem with multiple receivers and
propose a cell-free type solution. In our proposed scheme, the active
users transmit their signals to the access points (APs) distributed in a
geographical area and connected to a central processing unit (CPU). The
transmitted signals are composed of a pilot and polar codeword, where the polar
codeword bits occupy a small fraction of the data part of the transmission
frame. The receiver operations of pilot detection and channel and symbol
estimation take place at the APs, while the actual message bits are detected at
the CPU by combining the symbol estimates from the APs forwarded over the
fronthaul. The effect of the successfully decoded messages is then subtracted
at the APs. Numerical examples illustrate that the proposed scheme can support
up to 1400 users with a high energy efficiency, and the distributed structure
decreases the error probability by more than two orders of magnitude.
|
2501.10202
|
Provably Safeguarding a Classifier from OOD and Adversarial Samples: an
Extreme Value Theory Approach
|
stat.ML cs.LG
|
This paper introduces a novel method, Sample-efficient Probabilistic
Detection using Extreme Value Theory (SPADE), which transforms a classifier
into an abstaining classifier, offering provable protection against
out-of-distribution and adversarial samples. The approach is based on a
Generalized Extreme Value (GEV) model of the training distribution in the
classifier's latent space, enabling the formal characterization of OOD samples.
Interestingly, under mild assumptions, the GEV model also allows for formally
characterizing adversarial samples. The abstaining classifier, which rejects
samples based on their assessment by the GEV model, provably avoids OOD and
adversarial samples. The empirical validation of the approach, conducted on
various neural architectures (ResNet, VGG, and Vision Transformer) and medium
and large-sized datasets (CIFAR-10, CIFAR-100, and ImageNet), demonstrates its
frugality, stability, and efficiency compared to the state of the art.
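A hedged sketch of the core mechanism, assuming per-sample latent-space scores (e.g., distances to the training distribution) and scipy's GEV implementation; details necessarily differ from the paper:

```python
# Fit a GEV model to latent-space scores and abstain beyond a high tail quantile.
import numpy as np
from scipy.stats import genextreme

def fit_gev(train_scores):
    return genextreme.fit(train_scores)                 # returns (shape, loc, scale)

def abstain_mask(test_scores, gev_params, alpha=0.01):
    threshold = genextreme.ppf(1 - alpha, *gev_params)  # tail quantile of the GEV model
    return test_scores > threshold                      # True -> abstain (flag OOD/adversarial)
```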
|
2501.10209
|
Hypercone Assisted Contour Generation for Out-of-Distribution Detection
|
cs.CV cs.LG
|
Recent advances in the field of out-of-distribution (OOD) detection have
placed great emphasis on learning better representations suited to this task.
While there are distance-based approaches, distributional awareness has seldom
been exploited for better performance. We present HAC$_k$-OOD, a novel OOD
detection method that makes no distributional assumption about the data, but
automatically adapts to its distribution. Specifically, HAC$_k$-OOD constructs
a set of hypercones by maximizing the angular distance to neighbors in a given
data-point's vicinity to approximate the contour within which in-distribution
(ID) data-points lie. Experimental results show state-of-the-art FPR@95 and
AUROC performance on Near-OOD detection and on Far-OOD detection on the
challenging CIFAR-100 benchmark without explicitly training for OOD
performance.
|
2501.10212
|
Disharmony: Forensics using Reverse Lighting Harmonization
|
cs.CV
|
Content generation and manipulation approaches based on deep learning methods
have seen significant advancements, leading to an increased need for techniques
to detect whether an image has been generated or edited. Another area of
research focuses on the insertion and harmonization of objects within images.
In this study, we explore the potential of using harmonization data in
conjunction with a segmentation model to enhance the detection of edited image
regions. These edits can be either manually crafted or generated using deep
learning methods. Our findings demonstrate that this approach can effectively
identify such edits. Existing forensic models often overlook the detection of
harmonized objects in relation to the background, but our proposed Disharmony
Network addresses this gap. By utilizing an aggregated dataset of harmonization
techniques, our model outperforms existing forensic networks in identifying
harmonized objects integrated into their backgrounds, and shows potential for
detecting various forms of edits, including virtual try-on tasks.
|
2501.10214
|
Temporal Graph MLP Mixer for Spatio-Temporal Forecasting
|
cs.LG
|
Spatiotemporal forecasting is critical in applications such as traffic
prediction, climate modeling, and environmental monitoring. However, the
prevalence of missing data in real-world sensor networks significantly
complicates this task. In this paper, we introduce the Temporal Graph MLP-Mixer
(T-GMM), a novel architecture designed to address these challenges. The model
combines node-level processing with patch-level subgraph encoding to capture
localized spatial dependencies while leveraging a three-dimensional MLP-Mixer
to handle temporal, spatial, and feature-based dependencies. Experiments on the
AQI, ENGRAD, PV-US and METR-LA datasets demonstrate the model's ability to
effectively forecast even in the presence of significant missing data. While
not surpassing state-of-the-art models in all scenarios, the T-GMM exhibits
strong learning capabilities, particularly in capturing long-range
dependencies. These results highlight its potential for robust, scalable
spatiotemporal forecasting.
|
2501.10216
|
The Relevance of AWS Chronos: An Evaluation of Standard Methods for Time
Series Forecasting with Limited Tuning
|
cs.LG
|
We present a systematic comparison of Chronos, a transformer-based time series
forecasting framework, against traditional approaches including ARIMA and
Prophet. We evaluate these models across multiple time horizons and user
categories, with a focus on the impact of historical context length. Our
analysis reveals that while Chronos demonstrates superior performance for
longer-term predictions and maintains accuracy with increased context,
traditional models show significant degradation as context length increases. We
find that prediction quality varies systematically between user classes,
suggesting that underlying behavior patterns strongly influence model
performance. This study provides a case for deploying Chronos in real-world
applications where limited model tuning is feasible, especially in scenarios
requiring longer prediction horizons.
|
2501.10219
|
Robust Egoistic Rigid Body Localization
|
eess.SP cs.CV
|
We consider a robust and self-reliant (or "egoistic") variation of the rigid
body localization (RBL) problem, in which a primary rigid body seeks to
estimate the pose (i.e., location and orientation) of another rigid body (or
"target"), relative to its own, without the assistance of external
infrastructure, without prior knowledge of the shape of the target, and taking
into account the possibility that the available observations are incomplete.
Three complementary contributions are then offered for such a scenario. The
first is a method to estimate the translation vector between the center points
of the two rigid bodies, which unlike existing techniques does not require that
both objects have the same shape or even the same number of landmark points.
This technique is shown to significantly outperform the state-of-the-art (SotA)
under complete information, but to be sensitive to data erasures, even when
enhanced by matrix completion methods. The second contribution, designed to
offer improved performance in the presence of incomplete information, offers a
robust alternative to the latter, at the expense of a slight relative loss
under complete information. Finally, the third contribution is a scheme for the
estimation of the rotation matrix describing the relative orientation of the
target rigid body with respect to the primary. Comparisons of the proposed
schemes and SotA techniques demonstrate the advantage of the contributed
methods in terms of root mean square error (RMSE) performance under both
complete and incomplete information conditions.
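As a reference point for the rotation-estimation contribution, the classical orthogonal Procrustes (Kabsch) solution for matched, centered landmarks is sketched below; the paper's scheme differs, notably in coping with incomplete observations:

```python
# Kabsch/orthogonal-Procrustes baseline: rotation R minimizing ||P R - Q||_F.
import numpy as np

def kabsch_rotation(P, Q):
    """P, Q: (n, 3) centered, matched landmark sets; returns proper rotation R."""
    U, _, Vt = np.linalg.svd(P.T @ Q)
    d = np.sign(np.linalg.det(U @ Vt))          # reflection guard: enforce det(R) = +1
    return U @ np.diag([1.0, 1.0, d]) @ Vt
```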
|
2501.10221
|
Modelling Activity Scheduling Behaviour with Deep Generative Machine
Learning
|
cs.LG
|
We model human activity scheduling behaviour using a deep generative machine
learning approach. Activity schedules, which represent the activities and
associated travel behaviours of individuals, are a core component of many
applied models in the transport, energy and epidemiology domains. Our
data-driven approach learns human preferences and scheduling logic without the
need for complex interacting combinations of sub-models and custom rules, which
makes our approach significantly faster and simpler to operate than existing
approaches. We find activity schedule data combines aspects of both continuous
image data and also discrete text data, requiring novel approaches. We
additionally contribute a novel schedule representation and comprehensive
evaluation framework for generated schedules. Evaluation shows our approach is
able to rapidly generate large, diverse and realistic synthetic samples of
activity schedules.
|
2501.10227
|
Joint Active and Passive Beamforming Optimization for Beyond Diagonal
RIS-aided Multi-User Communications
|
eess.SP cs.IT math.IT
|
Benefiting from its capability to generalize existing reconfigurable
intelligent surface (RIS) architectures and provide additional design
flexibility via interactions between RIS elements, beyond-diagonal RIS (BD-RIS)
has attracted considerable research interest recently. However, due to the
symmetric and unitary passive beamforming constraint imposed on BD-RIS,
existing joint active and passive beamforming optimization algorithms for
BD-RIS either exhibit high computational complexity to achieve near optimal
solutions or rely on heuristic algorithms with substantial performance loss. In
this paper, we address this issue by proposing an efficient optimization
framework for BD-RIS assisted multi-user multi-antenna communication networks.
Specifically, we solve the weighted sum rate maximization problem by
introducing a novel beamforming optimization algorithm that alternately
optimizes active and passive beamforming matrices using iterative closed-form
solutions. Numerical results demonstrate that our algorithm significantly
reduces computational complexity while still attaining a high-quality,
albeit sub-optimal, solution.
|
2501.10229
|
Amortized Bayesian Mixture Models
|
stat.ML cs.LG stat.CO
|
Finite mixtures are a broad class of models useful in scenarios where
observed data is generated by multiple distinct processes but without explicit
information about the responsible process for each data point. Estimating
Bayesian mixture models is computationally challenging due to issues such as
high-dimensional posterior inference and label switching. Furthermore,
traditional methods such as MCMC are applicable only if the likelihoods for
each mixture component are analytically tractable.
Amortized Bayesian Inference (ABI) is a simulation-based framework for
estimating Bayesian models using generative neural networks. This allows the
fitting of models without explicit likelihoods, and provides fast inference.
ABI is therefore an attractive framework for estimating mixture models. This
paper introduces a novel extension of ABI tailored to mixture models. We
factorize the posterior into a distribution of the parameters and a
distribution of (categorical) mixture indicators, which allows us to use a
combination of generative neural networks for parameter inference, and
classification networks for mixture membership identification. The proposed
framework accommodates both independent and dependent mixture models, enabling
filtering and smoothing. We validate and demonstrate our approach through
synthetic and real-world datasets.
|
2501.10234
|
Counterfactual Explanations for k-means and Gaussian Clustering
|
cs.LG
|
Counterfactuals have been recognized as an effective approach to explain
classifier decisions. Nevertheless, they have not yet been considered in the
context of clustering. In this work, we propose the use of counterfactuals to
explain clustering solutions. First, we present a general definition for
counterfactuals for model-based clustering that includes plausibility and
feasibility constraints. Then we consider the counterfactual generation problem
for k-means and Gaussian clustering assuming Euclidean distance. Our approach
takes as input the factual, the target cluster, a binary mask indicating
actionable or immutable features, and a plausibility factor specifying how far
from the cluster boundary the counterfactual should be placed. In the k-means
clustering case, analytical mathematical formulas are presented for computing
the optimal solution, while in the Gaussian clustering case (assuming full,
diagonal, or spherical covariances) our method requires the numerical solution
of a nonlinear equation with a single parameter only. We demonstrate the
advantages of our approach through illustrative examples and quantitative
experimental comparisons.
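A minimal line-search sketch of the k-means case conveys the idea (the paper
instead derives closed-form formulas; the function and variable names here are
our own illustrative choices):

import numpy as np

def kmeans_counterfactual(x, centroids, target, mask, plausibility=0.0, steps=1000):
    # Only actionable features (mask == 1) may move; immutable ones stay fixed.
    direction = (centroids[target] - x) * mask
    for t in np.linspace(0.0, 1.0, steps):
        cand = x + t * direction
        if np.argmin(np.linalg.norm(centroids - cand, axis=1)) == target:
            # Push slightly past the cluster boundary, per the plausibility factor.
            return x + min(t + plausibility, 1.0) * direction
    return None  # target cluster unreachable with the given actionable features

C = np.array([[0.0, 0.0], [4.0, 0.0]])          # two cluster centers
x = np.array([0.5, 0.2])                         # factual point, in cluster 0
print(kmeans_counterfactual(x, C, target=1, mask=np.array([1.0, 0.0]),
                            plausibility=0.1))   # moves only along feature 0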
|
2501.10235
|
SpaceTime: Causal Discovery from Non-Stationary Time Series
|
cs.LG
|
Understanding causality is challenging and often complicated by changing
causal relationships over time and across environments. Climate patterns, for
example, shift over time with recurring seasonal trends, while also depending
on geographical characteristics such as ecosystem variability. Existing methods
for discovering causal graphs from time series either assume stationarity, do
not permit both temporal and spatial distribution changes, or are unaware of
locations with the same causal relationships. In this work, we therefore unify
the three tasks of causal graph discovery in the non-stationary multi-context
setting, of reconstructing temporal regimes, and of partitioning datasets and
time intervals into those where invariant causal relationships hold. To
construct a consistent score that forms the basis of our method, we employ the
Minimum Description Length principle. Our resulting algorithm SPACETIME
simultaneously accounts for heterogeneity across space and non-stationarity
over time. Given multiple time series, it discovers regime changepoints and a
temporal causal graph using non-parametric functional modeling and kernelized
discrepancy testing. We also show that our method provides insights into
real-world phenomena such as river-runoff measured at different catchments and
biosphere-atmosphere interactions across ecosystems.
|
2501.10236
|
Actively Coupled Sensor Configuration and Planning in Unknown Dynamic
Environments
|
eess.SY cs.SY
|
We address the problem of path-planning for an autonomous mobile vehicle,
called the ego vehicle, in an unknown and time-varying environment. The
objective is for the ego vehicle to minimize exposure to a
spatiotemporally-varying unknown scalar field called the threat field. Noisy
measurements of the threat field are provided by a network of mobile sensors.
We address the problem of optimally configuring (placing) these sensors in the
environment. To this end, we propose sensor reconfiguration by maximizing a
reward function composed of three different elements. First, the reward
includes an information measure that we call context-relevant mutual
information (CRMI). Unlike typical sensor placement techniques that maximize
mutual information of the measurements and environment state, CRMI directly
quantifies uncertainty reduction in the ego path cost while it moves in the
environment. Therefore, the CRMI introduces active coupling between the ego
vehicle and the sensor network. Second, the reward includes a penalty on the
distances traveled by the sensors. Third, the reward includes a measure of
proximity of the sensors to the ego vehicle. Although we do not consider
communication issues in this paper, such proximity is of relevance for future
work that addresses communications between the sensors and the ego vehicle. We
illustrate and analyze the proposed technique via numerical simulations.
|
2501.10240
|
Challenges and recommendations for Electronic Health Records data
extraction and preparation for dynamic prediction modelling in hospitalized
patients -- a practical guide
|
cs.LG cs.AI
|
Dynamic predictive modeling using electronic health record (EHR) data has
gained significant attention in recent years. The reliability and
trustworthiness of such models depend heavily on the quality of the underlying
data, which is largely determined by the stages preceding the model
development: data extraction from EHR systems and data preparation. We list
over forty challenges encountered during these stages and provide actionable
recommendations for addressing them. These challenges are organized into four
categories: cohort definition, outcome definition, feature engineering, and
data cleaning. This list is designed to serve as a practical guide for data
extraction engineers and researchers, supporting better practices and improving
the quality and real-world applicability of dynamic prediction models in
clinical settings.
|
2501.10243
|
Random-Key Algorithms for Optimizing Integrated Operating Room
Scheduling
|
cs.NE cs.AI math.CO
|
Efficient surgery room scheduling is essential for hospital efficiency,
patient satisfaction, and resource utilization. This study addresses this
challenge by introducing the novel concept of a Random-Key Optimizer (RKO),
rigorously tested on literature instances and on new, real-world inspired
instances. Our
combinatorial optimization problem incorporates multi-room scheduling,
equipment scheduling, and complex availability constraints for rooms, patients,
and surgeons, facilitating rescheduling and enhancing operational flexibility.
The RKO approach represents solutions as points in a continuous space, which
are then mapped to the problem solution space via a deterministic function
known as a decoder. The core idea is to operate metaheuristics and heuristics
in the random-key space, unaware of the original solution space. We design the
Biased Random-Key Genetic Algorithm with $Q$-Learning, Simulated Annealing, and
Iterated Local Search for use within an RKO framework, employing a single
decoder function. The proposed metaheuristics are complemented by lower-bound
formulations, providing optimal gaps for evaluating the effectiveness of the
heuristic results. Our results demonstrate significant lower and upper bounds
improvements for the literature instances, notably proving one optimal result.
Furthermore, the best-proposed metaheuristic efficiently generates schedules
for the newly introduced instances, even in highly constrained scenarios. This
research offers valuable insights and practical solutions for improving surgery
scheduling processes, offering tangible benefits to hospitals by optimising
resource allocation, reducing patient wait times, and enhancing overall
operational efficiency.
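A toy sketch of the random-key idea, assuming a simplified decoder that only
handles room availability (the paper's decoder also covers equipment and
surgeon/patient constraints):

import numpy as np

def decode(keys, durations, n_rooms, horizon):
    # Map a random-key vector to a schedule: sort surgeries by key, then
    # greedily place each in the room that becomes free earliest.
    order = np.argsort(keys)               # random keys -> priority order
    room_free = np.zeros(n_rooms)          # next free time per room
    schedule = []
    for j in order:
        r = int(np.argmin(room_free))      # earliest-available room
        start = room_free[r]
        if start + durations[j] <= horizon:
            schedule.append((j, r, start))
            room_free[r] = start + durations[j]
    return schedule

keys = np.random.rand(5)                   # a solution is a point in [0,1)^5
print(decode(keys, durations=np.array([2, 1, 3, 2, 1]), n_rooms=2, horizon=8))

The metaheuristics (BRKGA with Q-Learning, Simulated Annealing, Iterated Local
Search) only ever see the continuous key vector; all problem knowledge lives in
the decoder.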
|
2501.10245
|
Over-the-Air Multi-Sensor Inference with Neural Networks Using
Memristor-Based Analog Computing
|
cs.LG cs.DC cs.IT math.IT
|
Deep neural networks provide reliable solutions for many classification and
regression tasks; however, their application in real-time wireless systems with
simple sensor networks is limited due to high energy consumption and
significant bandwidth needs. This study proposes a multi-sensor wireless
inference system with memristor-based analog computing. Given the sensors'
limited computational capabilities, the features from the network's front end
are transmitted to a central device where an $L_p$-norm inspired approximation
of the maximum operation is employed to achieve transformation-invariant
features, enabling efficient over-the-air transmission. We also introduce a
trainable over-the-air sensor fusion method based on $L_p$-norm inspired
combining function that customizes sensor fusion to match the network and
sensor distribution characteristics, enhancing adaptability. To address the
energy constraints of sensors, we utilize memristors, known for their
energy-efficient in-memory computing, enabling analog-domain computations that
reduce energy use and computational overhead in edge computing. This dual
approach of memristors and $L_p$-norm inspired sensor fusion fosters
energy-efficient computational and transmission paradigms and serves as a
practical energy-efficient solution with minimal performance loss.
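A minimal sketch of the $L_p$-norm-inspired maximum approximation (the exponent
and array shapes are illustrative; the paper's trainable fusion additionally
learns the combining parameters):

import numpy as np

def lp_max(features, p=8.0):
    # Smooth surrogate for the element-wise maximum across sensors:
    # (sum_k |z_k|^p)^(1/p) -> max_k |z_k| as p -> infinity. The inner
    # summation is exactly what the multiple-access channel computes
    # "over the air", which makes this surrogate suit analog aggregation.
    return np.sum(np.abs(features) ** p, axis=0) ** (1.0 / p)

z = np.array([[0.2, 1.0], [0.9, 0.3], [0.5, 0.8]])  # 3 sensors, 2 features
print(lp_max(z))          # close to the true column-wise max
print(z.max(axis=0))      # [0.9, 1.0]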
|
2501.10251
|
The Distributed Multi-User Point Function
|
cs.IT cs.CR math.IT
|
In this paper, we study the problem of information-theoretic distributed
multi-user point function, involving a trusted master node, $N \in \mathbb{N}$
server nodes, and $K\in \mathbb{N}$ users, where each user has access to the
contents of a subset of the storages of server nodes. Each user is associated
with an independent point function $f_{X_k,Z_k}: \{1,2,\hdots,T\}
\rightarrow GF(q^{m R_k})$, where $T, m R_k \in \mathbb{N}$. Using these point
functions,
the trusted master node encodes and places functional shares
$G_1,G_2,\hdots,G_N \in GF(q^{M})$, $M \in \mathbb{N}$, in the storage nodes such
that each user can correctly recover its point function result from the
response transmitted to itself and gains no information about the point
functions of any other user, even with knowledge of all responses transmitted
from its connected servers. For the first time, we propose a multi-user scheme
that satisfies the correctness and information-theoretic privacy constraints,
ensuring recovery for all point functions. We also characterize the inner and
outer bounds on the capacity -- the maximum achievable rate defined as the size
of the range of each point function $mR_k$ relative to the storage size of the
servers $M$ -- of the distributed multi-user point function scheme by
presenting a novel converse argument.
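For intuition, a point function is nonzero at exactly one input (a toy sketch
with integer outputs; in the paper the outputs live in $GF(q^{m R_k})$):

def point_function(X, Z):
    # f_{X,Z}(x) = Z if x == X, else 0: nonzero at a single point of the domain.
    return lambda x: Z if x == X else 0

f = point_function(X=3, Z=42)
print([f(x) for x in range(1, 6)])   # [0, 0, 42, 0, 0]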
|
2501.10256
|
Unsupervised Rhythm and Voice Conversion of Dysarthric to Healthy Speech
for ASR
|
eess.AS cs.AI cs.LG cs.SD
|
Automatic speech recognition (ASR) systems are well known to perform poorly
on dysarthric speech. Previous works have addressed this by speaking rate
modification to reduce the mismatch with typical speech. Unfortunately, these
approaches rely on transcribed speech data to estimate speaking rates and
phoneme durations, which might not be available for unseen speakers. Therefore,
we combine unsupervised rhythm and voice conversion methods based on
self-supervised speech representations to map dysarthric to typical speech. We
evaluate the outputs with a large ASR model pre-trained on healthy speech
without further fine-tuning and find that the proposed rhythm conversion
especially improves performance for speakers of the Torgo corpus with more
severe cases of dysarthria. Code and audio samples are available at
https://idiap.github.io/RnV .
|
2501.10258
|
DADA: Dual Averaging with Distance Adaptation
|
math.OC cs.LG
|
We present a novel universal gradient method for solving convex optimization
problems. Our algorithm -- Dual Averaging with Distance Adaptation (DADA) -- is
based on the classical scheme of dual averaging and dynamically adjusts its
coefficients based on observed gradients and the distance between iterates and
the starting point, eliminating the need for problem-specific parameters. DADA
is a universal algorithm that simultaneously works for a broad spectrum of
problem classes, provided the local growth of the objective function around its
minimizer can be bounded. Particular examples of such problem classes are
nonsmooth Lipschitz functions, Lipschitz-smooth functions, H\"older-smooth
functions, functions with high-order Lipschitz derivative,
quasi-self-concordant functions, and $(L_0,L_1)$-smooth functions. Crucially,
DADA is applicable to both unconstrained and constrained problems, even when
the domain is unbounded, without requiring prior knowledge of the number of
iterations or desired accuracy.
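A simplified sketch of the distance-adaptation idea in the unconstrained
Euclidean case (the coefficient rule below is an illustrative stand-in, not the
paper's exact choice):

import numpy as np

def dada_sketch(grad, x0, iters=500, r0=1e-6):
    # Dual averaging: iterates are expressed relative to the start x0 via the
    # running gradient sum; the coefficient uses only observed gradient norms
    # and the largest distance travelled from x0 -- no problem constants.
    x, g_sum, sq_sum, r_max = x0.copy(), np.zeros_like(x0), 0.0, r0
    x_avg = x0.copy()
    for k in range(1, iters + 1):
        g = grad(x)
        g_sum += g
        sq_sum += float(np.dot(g, g))
        r_max = max(r_max, float(np.linalg.norm(x - x0)))   # distance adaptation
        x = x0 - r_max * g_sum / (np.sqrt(sq_sum) + 1e-12)  # dual-averaging step
        x_avg += (x - x_avg) / (k + 1)                      # averaged iterate
    return x_avg

# Nonsmooth test: f(x) = ||x - a||_1 with a = [1, -2]; no stepsize tuning needed
a = np.array([1.0, -2.0])
print(dada_sketch(lambda x: np.sign(x - a), np.zeros(2)))   # approaches a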
|
2501.10261
|
Logarithmic Regret for Nonlinear Control
|
cs.LG
|
We address the problem of learning to control an unknown nonlinear dynamical
system through sequential interactions. Motivated by high-stakes applications
in which mistakes can be catastrophic, such as robotics and healthcare, we
study situations where it is possible for fast sequential learning to occur.
Fast sequential learning is characterized by the ability of the learning agent
to incur logarithmic regret relative to a fully-informed baseline. We
demonstrate that fast sequential learning is achievable in a diverse class of
continuous control problems where the system dynamics depend smoothly on
unknown parameters, provided the optimal control policy is persistently
exciting. Additionally, we derive a regret bound which grows with the square
root of the number of interactions for cases where the optimal policy is not
persistently exciting. Our results provide the first regret bounds for
controlling nonlinear dynamical systems depending nonlinearly on unknown
parameters. We validate the trends our theory predicts in simulation on a
simple dynamical system.
|
2501.10262
|
Deployment of an Aerial Multi-agent System for Automated Task Execution
in Large-scale Underground Mining Environments
|
cs.RO cs.SY eess.SY
|
In this article, we present a framework for deploying an aerial multi-agent
system in large-scale subterranean environments with minimal infrastructure for
supporting multi-agent operations. The multi-agent objective is to optimally
and reactively allocate and execute inspection tasks in a mine, which are
entered by a mine operator on-the-fly. The assignment of currently available
tasks to the team of agents is accomplished through an auction-based system,
where the agents bid for available tasks, and the bids are used by a central
auctioneer to optimally assign tasks to agents. A mobile Wi-Fi mesh supports
inter-agent communication and bi-directional communication between the agents
and the task allocator, while the task execution is performed completely
infrastructure-free. Given a task to be accomplished, a reliable and modular
agent behavior is synthesized by generating behavior trees from a pool of agent
capabilities, using a back-chaining approach. The auction system in the
proposed framework is reactive and supports addition of new operator-specified
tasks on-the-go, at any point through a user-friendly operator interface. The
framework has been validated in a real underground mining environment using
three aerial agents, with several inspection locations spread in an environment
of almost 200 meters. The proposed framework can be utilized for missions
involving rapid inspection, gas detection, distributed sensing, and mapping in
subterranean environments. The proposed framework and its field deployment
contribute towards furthering reliable automation in large-scale subterranean
environments to offload both routine and dangerous tasks from human operators
to autonomous aerial robots.
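A toy sketch of one auction round, assuming bids are simple cost estimates (the
names and the greedy awarding rule are our own simplification of the paper's
auction system):

def auction_assign(tasks, agents, bid):
    # One allocation round: every idle agent bids on every open task and the
    # auctioneer repeatedly awards the cheapest (agent, task) pair.
    open_tasks, idle_agents, assignment = set(tasks), set(agents), {}
    while open_tasks and idle_agents:
        agent, task = min(((a, t) for a in idle_agents for t in open_tasks),
                          key=lambda pair: bid(*pair))
        assignment[task] = agent
        idle_agents.remove(agent)
        open_tasks.remove(task)
    return assignment

# Bids as 1-D distances between agent and task positions (purely illustrative)
pos = {"uav1": 0.0, "uav2": 5.0, "uav3": 9.0, "inspect_A": 4.0, "inspect_B": 8.5}
print(auction_assign(["inspect_A", "inspect_B"], ["uav1", "uav2", "uav3"],
                     bid=lambda a, t: abs(pos[a] - pos[t])))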
|
2501.10266
|
MutualForce: Mutual-Aware Enhancement for 4D Radar-LiDAR 3D Object
Detection
|
cs.CV
|
Radar and LiDAR have been widely used in autonomous driving as LiDAR provides
rich structure information, and radar demonstrates high robustness under
adverse weather. Recent studies highlight the effectiveness of fusing radar and
LiDAR point clouds. However, challenges remain due to the modality misalignment
and information loss during feature extractions. To address these issues, we
propose a 4D radar-LiDAR framework to mutually enhance their representations.
Initially, the indicative features from radar are utilized to guide both radar
and LiDAR geometric feature learning. Subsequently, to mitigate their sparsity
gap, the shape information from LiDAR is used to enrich radar BEV features.
Extensive experiments on the View-of-Delft (VoD) dataset demonstrate our
approach's superiority over existing methods, achieving the highest mAP of
71.76% across the entire area and 86.36% within the driving corridor.
Especially for cars, we improve the AP by 4.17% and 4.20% due to the strong
indicative features and symmetric shapes.
|
2501.10273
|
SEANN: A Domain-Informed Neural Network for Epidemiological Insights
|
cs.LG cs.AI
|
In epidemiology, traditional statistical methods such as logistic regression,
linear regression, and other parametric models are commonly employed to
investigate associations between predictors and health outcomes. However,
non-parametric machine learning techniques, such as deep neural networks
(DNNs), coupled with explainable AI (XAI) tools, offer new opportunities for
this task. Despite their potential, these methods face challenges due to the
limited availability of high-quality, high-quantity data in this field. To
address these challenges, we introduce SEANN, a novel approach for informed
DNNs that leverages a prevalent form of domain-specific knowledge: Pooled
Effect Sizes (PES). PESs are commonly found, in various forms, in published
meta-analysis studies and represent a quantitative form of scientific
consensus. By integrating them directly into the learning procedure using a
custom
loss, we experimentally demonstrate significant improvements in the
generalizability of predictive performances and the scientific plausibility of
extracted relationships compared to a domain-knowledge agnostic neural network
in a scarce and noisy data setting.
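A rough sketch of how a PES could enter a custom loss: penalize the gap between
the model's average marginal effect for a predictor and the pooled effect size
(the marginal-effect estimate and penalty form are illustrative assumptions,
not the paper's exact loss):

import torch

def seann_style_loss(model, x, y, pes_value, feature_idx, lam=1.0):
    # Usual prediction loss ...
    pred_loss = torch.nn.functional.mse_loss(model(x).squeeze(-1), y)
    # ... plus a domain-knowledge penalty: the model's average marginal effect
    # for one predictor should stay close to the published pooled effect size.
    x_plus = x.clone()
    x_plus[:, feature_idx] += 1.0                    # unit increase in predictor
    marginal = (model(x_plus) - model(x)).mean()     # average marginal effect
    return pred_loss + lam * (marginal - pes_value) ** 2

# Usage with any regression net:
#   loss = seann_style_loss(net, x, y, pes_value=0.3, feature_idx=2)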
|
2501.10282
|
Computational Protein Science in the Era of Large Language Models (LLMs)
|
cs.CE cs.CL q-bio.BM
|
Considering the significance of proteins, computational protein science has
always been a critical scientific field, dedicated to revealing knowledge and
developing applications within the protein sequence-structure-function
paradigm. In the last few decades, Artificial Intelligence (AI) has made
significant impacts in computational protein science, leading to notable
successes in specific protein modeling tasks. However, those previous AI models
still face limitations, such as difficulty in comprehending the semantics of
protein sequences and an inability to generalize across a wide range of
protein modeling tasks. Recently, LLMs have emerged as a milestone in AI due to
their unprecedented language processing and generalization capability. They can
promote comprehensive progress across entire fields rather than solving
individual tasks.
As a result, researchers have actively introduced LLM techniques in
computational protein science, developing protein Language Models (pLMs) that
skillfully grasp the foundational knowledge of proteins and can be effectively
generalized to solve a diversity of sequence-structure-function reasoning
problems. Amid these prosperous developments, it is necessary to present a
systematic overview of computational protein science empowered by LLM
techniques. First, we summarize existing pLMs into categories based on their
mastered protein knowledge, i.e., underlying sequence patterns, explicit
structural and functional information, and external scientific languages.
Second, we introduce the utilization and adaptation of pLMs, highlighting their
remarkable achievements in promoting protein structure prediction, protein
function prediction, and protein design studies. Then, we describe the
practical application of pLMs in antibody design, enzyme design, and drug
discovery. Finally, we specifically discuss the promising future directions in
this fast-growing field.
|
2501.10283
|
GSTAR: Gaussian Surface Tracking and Reconstruction
|
cs.CV
|
3D Gaussian Splatting techniques have enabled efficient photo-realistic
rendering of static scenes. Recent works have extended these approaches to
support surface reconstruction and tracking. However, tracking dynamic surfaces
with 3D Gaussians remains challenging due to complex topology changes, such as
surfaces appearing, disappearing, or splitting. To address these challenges, we
propose GSTAR, a novel method that achieves photo-realistic rendering, accurate
surface reconstruction, and reliable 3D tracking for general dynamic scenes
with changing topology. Given multi-view captures as input, GSTAR binds
Gaussians to mesh faces to represent dynamic objects. For surfaces with
consistent topology, GSTAR maintains the mesh topology and tracks the meshes
using Gaussians. In regions where topology changes, GSTAR adaptively unbinds
Gaussians from the mesh, enabling accurate registration and the generation of
new surfaces based on these optimized Gaussians. Additionally, we introduce a
surface-based scene flow method that provides robust initialization for
tracking between frames. Experiments demonstrate that our method effectively
tracks and reconstructs dynamic surfaces, enabling a range of applications. Our
project page with the code release is available at
https://eth-ait.github.io/GSTAR/.
|
2501.10290
|
Pairwise Elimination with Instance-Dependent Guarantees for Bandits with
Cost Subsidy
|
cs.LG
|
Multi-armed bandits (MAB) are commonly used in sequential online
decision-making when the reward of each decision is an unknown random variable.
In practice, however, the typical goal of maximizing total reward may be less
important than minimizing the total cost of the decisions taken, subject to a
reward constraint. For example, we may seek to make decisions that have at
least the reward of a reference ``default'' decision, with as low a cost as
possible. This problem was recently introduced in the Multi-Armed Bandits with
Cost Subsidy (MAB-CS) framework. MAB-CS is broadly applicable to problem
domains where a primary metric (cost) is constrained by a secondary metric
(reward), and the rewards are unknown. In our work, we address variants of
MAB-CS including ones with reward constrained by the reward of a known
reference arm or by the subsidized best reward. We introduce the
Pairwise-Elimination (PE) algorithm for the known reference arm variant and
generalize PE to PE-CS for the subsidized best reward variant. Our
instance-dependent analysis of PE and PE-CS reveals that both algorithms have
an order-wise logarithmic upper bound on Cost and Quality Regret, making our
policies the first with such a guarantee. Moreover, by comparing our upper and
lower bound results we establish that PE is order-optimal for all known
reference arm problem instances. Finally, experiments are conducted using the
MovieLens 25M and Goodreads datasets for both PE and PE-CS, revealing the
effectiveness of PE and the superior balance between performance and
reliability offered by PE-CS compared to baselines from the literature.
|
2501.10300
|
An Ontology for Social Determinants of Education (SDoEd) based on
Human-AI Collaborative Approach
|
cs.AI
|
The use of computational ontologies is well-established in the field of
Medical Informatics. The topic of Social Determinants of Health (SDoH) has also
received extensive attention. Work at the intersection of ontologies and SDoH
has been published. However, a standardized framework for Social Determinants
of Education (SDoEd) is lacking. In this paper, we close this gap by
introducing an SDoEd ontology for creating a precise conceptualization of the
interplay between life circumstances of students and their possible educational
achievements. The ontology was developed utilizing suggestions from
ChatGPT-3.5-010422 and validated using peer-reviewed research articles. The
first version of the developed ontology was evaluated by human experts in the field
of education and validated using standard ontology evaluation software. This
version of the SDoEd ontology contains 231 domain concepts, 10 object
properties, and 24 data properties.
|
2501.10309
|
Entropic versions of Bergstr\"om's and Bonnesen's inequalities
|
cs.IT math.FA math.IT
|
We establish analogues of the Bergstr\"om and Bonnesen inequalities, related
to determinants and volumes respectively, for the entropy power and for the
Fisher information. The obtained inequalities strengthen the well-known
convolution inequality for the Fisher information as well as the entropy power
inequality in dimensions $d>1$, while they reduce to the former in $d=1$. Our
results recover the original Bergstr\"om inequality and generalize a proof of
Bergstr\"om's inequality given by Dembo, Cover and Thomas. We characterize the
equality case in our entropic Bonnesen inequality.
|
2501.10316
|
Know Your Mistakes: Towards Preventing Overreliance on Task-Oriented
Conversational AI Through Accountability Modeling
|
cs.CL
|
Recent LLMs have enabled significant advancements for conversational agents.
However, they are also well known to hallucinate, producing responses that seem
plausible but are factually incorrect. On the other hand, users tend to
over-rely on LLM-based AI agents, accepting the AI's suggestions even when they
are wrong. Adding positive friction, such as providing explanations or eliciting
user
confirmations, has been proposed as a mitigation in AI-supported
decision-making systems. In this paper, we propose an accountability model for
LLM-based task-oriented dialogue agents to address user overreliance via
friction turns in cases of model uncertainty and errors associated with
dialogue state tracking (DST). The accountability model is an augmented LLM
with an additional accountability head that functions as a binary classifier to
predict the relevant slots of the dialogue state mentioned in the conversation.
We perform our experiments with multiple backbone LLMs on two established
benchmarks (MultiWOZ and Snips). Our empirical findings demonstrate that the
proposed approach not only enables reliable estimation of AI agent errors but
also guides the decoder in generating more accurate actions. We observe around
3% absolute improvement in joint goal accuracy (JGA) of DST output by
incorporating accountability heads into modern LLMs. Self-correcting the
detected errors further increases the JGA from 67.13 to 70.51, achieving
state-of-the-art DST performance. Finally, we show that error correction
through user confirmations (friction turn) achieves a similar performance gain,
highlighting its potential to reduce user overreliance.
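A minimal sketch of such an accountability head, assuming last-token pooling
and illustrative sizes (the paper's head and training details may differ):

import torch
import torch.nn as nn

class AccountabilityHead(nn.Module):
    # Binary classifier over dialogue-state slots, attached to the backbone
    # LLM's final hidden states: one logit per slot, predicting whether that
    # slot is mentioned in the conversation.
    def __init__(self, hidden_size, n_slots):
        super().__init__()
        self.classifier = nn.Linear(hidden_size, n_slots)

    def forward(self, last_hidden):
        pooled = last_hidden[:, -1, :]      # last-token pooling (an assumption)
        return self.classifier(pooled)      # (batch, n_slots) logits

head = AccountabilityHead(hidden_size=4096, n_slots=30)
h = torch.randn(2, 128, 4096)               # (batch, seq, hidden) from the LLM
slot_logits = head(h)
loss = nn.BCEWithLogitsLoss()(slot_logits, torch.zeros(2, 30))  # multi-label target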
|
2501.10318
|
HiMix: Reducing Computational Complexity in Large Vision-Language Models
|
cs.CV
|
Benefiting from recent advancements in large language models and modality
alignment techniques, existing Large Vision-Language Models (LVLMs) have
achieved prominent performance across a wide range of scenarios. However, the
excessive computational complexity limits the widespread use of these models in
practical applications. We argue that one main bottleneck in computational
complexity is caused by the involvement of redundant vision sequences in model
computation. This is inspired by a reassessment of the efficiency of vision and
language information transmission in the language decoder of LVLMs. Then, we
propose a novel hierarchical vision-language interaction mechanism called
Hierarchical Vision injection for Mixture Attention (HiMix). In HiMix, only the
language sequence undergoes full forward propagation, while the vision sequence
interacts with the language at specific stages within each language decoder
layer. Strikingly, this approach significantly reduces computational
complexity with minimal performance loss. Specifically, HiMix achieves a 10x
reduction in the computational cost of the language decoder across multiple
LVLM models while maintaining comparable performance. This highlights the
advantages of our method, and we hope our research brings new perspectives to
the field of vision-language understanding. Project Page:
https://xuange923.github.io/HiMix
|
2501.10319
|
Natural Language Processing of Privacy Policies: A Survey
|
cs.CL
|
Natural Language Processing (NLP) is an essential subset of artificial
intelligence. It has become effective in several domains, such as healthcare,
finance, and media, to identify perceptions, opinions, and misuse, among
others. Privacy is no exception, and initiatives have been taken to address the
challenges of usable privacy notifications to users with the help of NLP. To
this end, we conduct a literature review by analyzing 109 papers at the
intersection of NLP and privacy policies. First, we provide a brief
introduction to privacy policies and discuss various facets of associated
problems, which necessitate the application of NLP to elevate the current state
of privacy notices and disclosures to users. Subsequently, we a) provide an
overview of the implementation and effectiveness of NLP approaches for better
privacy policy communication; b) identify the methodologies that can be further
enhanced to provide robust privacy policies; and c) identify the gaps in the
current state-of-the-art research. Our systematic analysis reveals that several
research papers focus on annotating and classifying privacy texts for analysis
but do not adequately address other aspects of NLP applications, such as
summarization. More specifically, ample research opportunities exist in this
domain, covering aspects such as corpus generation, summarization vectors,
contextualized word embedding, identification of privacy-relevant statement
categories, fine-grained classification, and domain-specific model tuning.
|
2501.10321
|
Towards Human-Guided, Data-Centric LLM Co-Pilots
|
cs.LG stat.ML
|
Machine learning (ML) has the potential to revolutionize various domains, but
its adoption is often hindered by the disconnect between the needs of domain
experts and translating these needs into robust and valid ML tools. Despite
recent advances in LLM-based co-pilots to democratize ML for non-technical
domain experts, these systems remain predominantly focused on model-centric
aspects while overlooking critical data-centric challenges. This limitation is
problematic in complex real-world settings where raw data often contains
complex issues, such as missing values, label noise, and domain-specific
nuances requiring tailored handling. To address this, we introduce CliMB-DC, a
human-guided, data-centric framework for LLM co-pilots that combines advanced
data-centric tools with LLM-driven reasoning to enable robust, context-aware
data processing. At its core, CliMB-DC introduces a novel, multi-agent
reasoning system that combines a strategic coordinator for dynamic planning and
adaptation with a specialized worker agent for precise execution. Domain
expertise is then systematically incorporated to guide the reasoning process
using a human-in-the-loop approach. To guide development, we formalize a
taxonomy of key data-centric challenges that co-pilots must address.
Thereafter, to address the dimensions of the taxonomy, we integrate
state-of-the-art data-centric tools into an extensible, open-source
architecture, facilitating the addition of new tools from the research
community. Empirically, using real-world healthcare datasets we demonstrate
CliMB-DC's ability to transform uncurated datasets into ML-ready formats,
significantly outperforming existing co-pilot baselines for handling
data-centric challenges. CliMB-DC promises to empower domain experts from
diverse domains -- healthcare, finance, social sciences and more -- to actively
participate in driving real-world impact using ML.
|
2501.10322
|
Hierarchical Autoregressive Transformers: Combining Byte- and Word-Level
Processing for Robust, Adaptable Language Models
|
cs.CL cs.AI cs.LG
|
Tokenization is a fundamental step in natural language processing, breaking
text into units that computational models can process. While learned subword
tokenizers have become the de-facto standard, they present challenges such as
large vocabularies, limited adaptability to new domains or languages, and
sensitivity to spelling errors and variations. To overcome these limitations,
we investigate a hierarchical architecture for autoregressive language
modelling that combines character-level and word-level processing. It employs a
lightweight character-level encoder to convert character sequences into word
embeddings, which are then processed by a word-level backbone model and decoded
back into characters via a compact character-level decoder. This method retains
the sequence compression benefits of word-level tokenization without relying on
a rigid, predefined vocabulary. We demonstrate, at scales up to 7 billion
parameters, that hierarchical transformers match the downstream task
performance of subword-tokenizer-based models while exhibiting significantly
greater robustness to input perturbations. Additionally, during continued
pretraining on an out-of-domain language, our model trains almost twice as
fast, achieves superior performance on the target language, and retains more of
its previously learned knowledge. Hierarchical transformers pave the way for
NLP systems that are more robust, flexible, and generalizable across languages
and domains.
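A structural sketch of the hierarchy, with illustrative sizes and causal masks
omitted for brevity (not the paper's exact architecture):

import torch
import torch.nn as nn

class HierarchicalLM(nn.Module):
    # Character encoder pools each word's characters into one embedding, a
    # word-level backbone models the word sequence, and a small head emits
    # per-word character logits (a real decoder would emit characters
    # autoregressively).
    def __init__(self, n_chars=256, d=512):
        super().__init__()
        self.char_emb = nn.Embedding(n_chars, d)
        enc = nn.TransformerEncoderLayer(d, nhead=8, batch_first=True)
        self.char_encoder = nn.TransformerEncoder(enc, num_layers=2)
        bb = nn.TransformerEncoderLayer(d, nhead=8, batch_first=True)
        self.backbone = nn.TransformerEncoder(bb, num_layers=4)
        self.char_decoder = nn.Linear(d, n_chars)

    def forward(self, chars):               # chars: (batch, words, chars_per_word)
        b, w, c = chars.shape
        h = self.char_encoder(self.char_emb(chars.view(b * w, c)))
        word_emb = h.mean(dim=1).view(b, w, -1)   # pool characters -> word embedding
        word_states = self.backbone(word_emb)     # word-level modelling
        return self.char_decoder(word_states)     # per-word character logits

model = HierarchicalLM()
tokens = torch.randint(0, 256, (2, 10, 12))       # 2 sequences, 10 words, 12 chars
print(model(tokens).shape)                        # torch.Size([2, 10, 256])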
|
2501.10324
|
New Fashion Products Performance Forecasting: A Survey on Evolutions,
Models and Emerging Trends
|
cs.LG cs.CV
|
The fast fashion industry's insatiable demand for new styles and rapid
production cycles has led to a significant environmental burden.
Overproduction, excessive waste, and harmful chemicals have contributed to the
negative environmental impact of the industry. To mitigate these issues, a
paradigm shift that prioritizes sustainability and efficiency is urgently
needed. Integrating learning-based predictive analytics into the fashion
industry represents a significant opportunity to address environmental
challenges and drive sustainable practices. By forecasting fashion trends and
optimizing production, brands can reduce their ecological footprint while
remaining competitive in a rapidly changing market. However, one of the key
challenges in forecasting fashion sales is the dynamic nature of consumer
preferences. Fashion is acyclical, with trends constantly evolving and
resurfacing. In addition, cultural changes and unexpected events can disrupt
established patterns. This problem is also known as New Fashion Products
Performance Forecasting (NFPPF), and it has recently gained increasing
interest in the global research landscape. Given its multidisciplinary nature,
the field of NFPPF has been approached from many different angles. This
comprehensive survey aims to provide an up-to-date overview that focuses on
learning-based NFPPF strategies. The survey is based on the Preferred Reporting
Items for Systematic Reviews and Meta-Analyses (PRISMA) methodological flow,
allowing for a systematic and complete literature review. In particular, we
propose the first taxonomy that covers the learning panorama for NFPPF,
examining in detail the different methodologies used to increase the amount of
multimodal information, as well as the state-of-the-art available datasets.
Finally, we discuss the challenges and future directions.
|
2501.10325
|
DiffStereo: High-Frequency Aware Diffusion Model for Stereo Image
Restoration
|
cs.CV
|
Diffusion models (DMs) have achieved promising performance in image
restoration but have not been explored for stereo images. The application of DMs
in stereo image restoration is confronted with a series of challenges. The need
to reconstruct two images exacerbates DM's computational cost. Additionally,
existing latent DMs usually focus on semantic information and remove
high-frequency details as redundancy during latent compression, which is
precisely what matters for image restoration. To address the above problems, we
propose a high-frequency aware diffusion model, DiffStereo for stereo image
restoration as the first attempt at DM in this domain. Specifically, DiffStereo
first learns latent high-frequency representations (LHFR) of HQ images. DM is
then trained in the learned space to estimate LHFR for stereo images, which are
fused into a transformer-based stereo image restoration network providing
beneficial high-frequency information of corresponding HQ images. The
resolution of LHFR is kept the same as input images, which preserves the
inherent texture from distortion, while the channel-wise compression alleviates
the computational burden of the DM. Furthermore, we devise a position encoding
scheme when integrating the LHFR into the restoration network, enabling
distinctive guidance in different depths of the restoration network.
Comprehensive experiments verify that by combining generative DM and
transformer, DiffStereo achieves both higher reconstruction accuracy and better
perceptual quality on stereo super-resolution, deblurring, and low-light
enhancement compared with state-of-the-art methods.
|
2501.10326
|
Large language models for automated scholarly paper review: A survey
|
cs.AI cs.CL cs.DL
|
Large language models (LLMs) have significantly impacted human society,
influencing various domains. Among them, academia is not simply a domain
affected by LLMs, but it is also the pivotal force in the development of LLMs.
In academic publishing, this phenomenon is reflected in the incorporation of
LLMs into the peer review mechanism for reviewing manuscripts. We proposed the
concept of automated scholarly paper review (ASPR) in our previous paper. As
this incorporation grows, we are now entering the coexistence phase of ASPR and
peer review, as described in that paper. LLMs hold
transformative potential for the full-scale implementation of ASPR, but they
also pose new issues and challenges that need to be addressed. In this survey
paper, we aim to provide a holistic view of ASPR in the era of LLMs. We begin
with a survey to find out which LLMs are used to conduct ASPR. Then, we review
what ASPR-related technological bottlenecks have been solved with the
incorporation of LLM technology. After that, we move on to explore new methods,
new datasets, new source code, and new online systems that come with LLMs for
ASPR. Furthermore, we summarize the performance and issues of LLMs in ASPR, and
investigate the attitudes and reactions of publishers and academia to ASPR.
Lastly, we discuss the challenges associated with the development of LLMs for
ASPR. We hope this survey can serve as an inspirational reference for
researchers and promote the progress of ASPR toward its actual implementation.
|
2501.10328
|
BoK: Introducing Bag-of-Keywords Loss for Interpretable Dialogue
Response Generation
|
cs.CL
|
The standard language modeling (LM) loss by itself has been shown to be
inadequate for effective dialogue modeling. As a result, various training
approaches, such as auxiliary loss functions and leveraging human feedback, are
being adopted to enrich open-domain dialogue systems. One such auxiliary loss
function is Bag-of-Words (BoW) loss, defined as the cross-entropy loss for
predicting all the words/tokens of the next utterance. In this work, we propose
a novel auxiliary loss named Bag-of-Keywords (BoK) loss to capture the central
thought of the response through keyword prediction and leverage it to enhance
the generation of meaningful and interpretable responses in open-domain
dialogue systems. BoK loss upgrades the BoW loss by predicting only the
keywords or critical words/tokens of the next utterance, intending to estimate
the core idea rather than the entire response. We incorporate BoK loss in both
encoder-decoder (T5) and decoder-only (DialoGPT) architecture and train the
models to minimize the weighted sum of BoK and LM (BoK-LM) loss. We perform our
experiments on two popular open-domain dialogue datasets, DailyDialog and
Persona-Chat. We show that the inclusion of BoK loss improves the dialogue
generation of backbone models while also enabling post-hoc interpretability. We
also study the effectiveness of BoK-LM loss as a reference-free metric and
observe comparable performance to the state-of-the-art metrics on various
dialogue evaluation datasets.
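A minimal sketch of the weighted BoK-LM objective (tensor names and the
keyword-NLL form are illustrative assumptions):

import torch
import torch.nn.functional as F

def bok_lm_loss(lm_logits, lm_targets, bok_logits, keyword_ids, w=0.5):
    # Standard LM cross-entropy over all next-utterance tokens ...
    lm = F.cross_entropy(lm_logits.view(-1, lm_logits.size(-1)),
                         lm_targets.view(-1))
    # ... plus BoK: negative log-likelihood of only the gold keywords, scored
    # from one utterance-level distribution over the vocabulary.
    log_p = F.log_softmax(bok_logits, dim=-1)       # (batch, vocab)
    bok = -log_p.gather(1, keyword_ids).mean()      # keyword_ids: (batch, n_kw)
    return lm + w * bok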
|
2501.10332
|
Agent4Edu: Generating Learner Response Data by Generative Agents for
Intelligent Education Systems
|
cs.CY cs.AI
|
Personalized learning represents a promising educational strategy within
intelligent educational systems, aiming to enhance learners' practice
efficiency. However, the discrepancy between offline metrics and online
performance significantly impedes their progress. To address this challenge, we
introduce Agent4Edu, a novel personalized learning simulator leveraging recent
advancements in human intelligence through large language models (LLMs).
Agent4Edu features LLM-powered generative agents equipped with learner profile,
memory, and action modules tailored to personalized learning algorithms. The
learner profiles are initialized using real-world response data, capturing
practice styles and cognitive factors. Inspired by human psychology theory, the
memory module records practice facts and high-level summaries, integrating
reflection mechanisms. The action module supports various behaviors, including
exercise understanding, analysis, and response generation. Each agent can
interact with personalized learning algorithms, such as computerized adaptive
testing, enabling a multifaceted evaluation and enhancement of customized
services. Through a comprehensive assessment, we explore the strengths and
weaknesses of Agent4Edu, emphasizing the consistency and discrepancies in
responses between agents and human learners. The code, data, and appendix are
publicly available at https://github.com/bigdata-ustc/Agent4Edu.
|
2501.10337
|
Uncertainty-Aware Digital Twins: Robust Model Predictive Control using
Time-Series Deep Quantile Learning
|
eess.SY cs.SY
|
Digital Twins, virtual replicas of physical systems that enable real-time
monitoring, model updates, predictions, and decision-making, present novel
avenues for proactive control strategies for autonomous systems. However,
achieving real-time decision-making in Digital Twins considering uncertainty
necessitates an efficient uncertainty quantification (UQ) approach and
optimization driven by accurate predictions of system behaviors, which remains
a challenge for learning-based methods. This paper presents a simultaneous
multi-step robust model predictive control (MPC) framework that incorporates
real-time decision-making with uncertainty awareness for Digital Twin systems.
Leveraging a multistep ahead predictor named Time-Series Dense Encoder (TiDE)
as the surrogate model, this framework differs from conventional MPC models
that provide only one-step ahead predictions. In contrast, TiDE can predict
future states within the prediction horizon in one shot, significantly
accelerating MPC. Furthermore, quantile regression is employed with the
training of TiDE to perform flexible yet computationally efficient UQ on data
uncertainty. Consequently, with the deep learning quantiles, the robust MPC
problem is formulated as a deterministic optimization problem and provides a
safety buffer that accommodates disturbances to enhance the constraint
satisfaction rate. As a result, the proposed method outperforms existing robust
MPC methods
by providing less-conservative UQ and has demonstrated efficacy in an
engineering case study involving Directed Energy Deposition (DED) additive
manufacturing. This proactive yet uncertainty-aware control capability
positions the proposed method as a potent tool for future Digital Twin
applications and real-time process control in engineering systems.
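For intuition, the quantile (pinball) loss underlying the UQ step can be
sketched as follows (a standard formulation with illustrative usage; TiDE
specifics are omitted):

import torch

def pinball_loss(pred, target, q):
    # Under-prediction is weighted by q, over-prediction by (1 - q), so the
    # minimizer is the q-th conditional quantile of the data distribution.
    err = target - pred
    return torch.mean(torch.maximum(q * err, (q - 1) * err))

# Train one output head per quantile, e.g. q = 0.05 and q = 0.95, to obtain
# the uncertainty band that the robust MPC uses as a safety buffer.
pred, target = torch.zeros(8), torch.randn(8)
print(pinball_loss(pred, target, q=0.95))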
|
2501.10342
|
Hybrid Deep Learning Model for epileptic seizure classification by using
1D-CNN with multi-head attention mechanism
|
cs.LG
|
Epilepsy is a prevalent neurological disorder globally, impacting around 50
million people \cite{WHO_epilepsy_50million}. Epileptic seizures result from
sudden abnormal electrical activity in the brain, which can be read as sudden
and significant changes in the EEG signal of the brain. The signal can vary in
severity and frequency, which can result in loss of consciousness and muscle
contractions for a short period of time \cite{epilepsyfoundation_myoclonic}.
Individuals with epilepsy often face significant employment challenges due to
safety concerns in certain work environments. Many jobs that involve working at
heights, operating heavy machinery, or in other potentially hazardous settings
may be restricted for people with seizure disorders. This limits job
options and economic opportunities for those living with epilepsy.
|
2501.10343
|
3rd Workshop on Maritime Computer Vision (MaCVi) 2025: Challenge Results
|
cs.CV cs.AI
|
The 3rd Workshop on Maritime Computer Vision (MaCVi) 2025 addresses maritime
computer vision for Unmanned Surface Vehicles (USVs) and underwater settings.
This report
offers a comprehensive overview of the findings from the challenges. We provide
both statistical and qualitative analyses, evaluating trends from over 700
submissions. All datasets, evaluation code, and the leaderboard are available
to the public at https://macvi.org/workshop/macvi25.
|
2501.10344
|
FC-Datalog as a Framework for Efficient String Querying
|
cs.LO cs.DB cs.FL
|
Core spanners are a class of document spanners that capture the core
functionality of IBM's AQL. FC is a logic on strings built around word
equations that, when extended with constraints for regular languages, can be seen
as a logic for core spanners. The recently introduced FC-Datalog extends FC
with recursion, which allows us to define recursive relations for core
spanners. Additionally, as FC-Datalog captures P, it is also a tractable
version of Datalog on strings. This presents an opportunity for optimization.
We propose a series of FC-Datalog fragments with desirable properties in
terms of complexity of model checking, expressive power, and efficiency of
checking membership in the fragment. This leads to a range of fragments that
all capture LOGSPACE, which we further restrict to obtain linear combined
complexity. This gives us a framework to tailor fragments for particular
applications. To showcase this, we simulate deterministic regex in a tailored
fragment of FC-Datalog.
|
2501.10347
|
ColNet: Collaborative Optimization in Decentralized Federated Multi-task
Learning Systems
|
cs.LG
|
The integration of Federated Learning (FL) and Multi-Task Learning (MTL) has
been explored to address client heterogeneity, with Federated Multi-Task
Learning (FMTL) treating each client as a distinct task. However, most existing
research focuses on data heterogeneity (e.g., addressing non-IID data) rather
than task heterogeneity, where clients solve fundamentally different tasks.
Additionally, much of the work relies on centralized settings with a server
managing the federation, leaving the more challenging domain of decentralized
FMTL largely unexplored. Thus, this work bridges this gap by proposing ColNet,
a framework designed for heterogeneous tasks in decentralized federated
environments. ColNet divides models into the backbone and task-specific layers,
forming groups of similar clients, with group leaders performing
conflict-averse cross-group aggregation. A pool of experiments with different
federations demonstrates that ColNet outperforms the compared aggregation
schemes in decentralized settings under label and task heterogeneity scenarios.
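A toy sketch of the backbone/head split, with plain averaging standing in for
the paper's conflict-averse aggregation:

import torch

def leader_aggregate(group):
    # Group leader's step: average the shared backbone weights across the
    # group's clients while each task-specific head stays local.
    keys = group[0]["backbone"].keys()
    avg = {k: torch.stack([m["backbone"][k] for m in group]).mean(dim=0)
           for k in keys}
    for m in group:
        m["backbone"] = {k: v.clone() for k, v in avg.items()}
    return group

# Two clients sharing a 1-layer backbone but with different task heads
clients = [{"backbone": {"w": torch.ones(3)}, "head": {"w": torch.randn(2)}},
           {"backbone": {"w": torch.zeros(3)}, "head": {"w": torch.randn(5)}}]
print(leader_aggregate(clients)[0]["backbone"]["w"])   # tensor([0.5, 0.5, 0.5])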
|
2501.10348
|
Credit Risk Identification in Supply Chains Using Generative Adversarial
Networks
|
cs.LG
|
Credit risk management within supply chains has emerged as a critical
research area due to its significant implications for operational stability and
financial sustainability. The intricate interdependencies among supply chain
participants mean that credit risks can propagate across networks, with impacts
varying by industry. This study explores the application of Generative
Adversarial Networks (GANs) to enhance credit risk identification in supply
chains. GANs enable the generation of synthetic credit risk scenarios,
addressing challenges related to data scarcity and imbalanced datasets. By
leveraging GAN-generated data, the model improves predictive accuracy while
effectively capturing dynamic and temporal dependencies in supply chain data.
The research focuses on three representative industries: manufacturing (steel),
distribution (pharmaceuticals), and services (e-commerce), to assess
industry-specific credit risk contagion. Experimental results demonstrate that
the GAN-based model outperforms traditional methods, including logistic
regression, decision trees, and neural networks, achieving superior accuracy,
recall, and F1 scores. The findings underscore the potential of GANs in
proactive risk management, offering robust tools for mitigating financial
disruptions in supply chains. Future research could expand the model by
incorporating external market factors and supplier relationships to further
enhance predictive capabilities. Keywords: Generative Adversarial Networks
(GANs); Supply Chain Risk; Credit Risk Identification; Machine Learning; Data
Augmentation
|
2501.10356
|
DexForce: Extracting Force-informed Actions from Kinesthetic
Demonstrations for Dexterous Manipulation
|
cs.RO
|
Imitation learning requires high-quality demonstrations consisting of
sequences of state-action pairs. For contact-rich dexterous manipulation tasks
that require fine-grained dexterity, the actions in these state-action pairs
must produce the right forces. Current widely-used methods for collecting
dexterous manipulation demonstrations are difficult to use for demonstrating
contact-rich tasks due to unintuitive human-to-robot motion retargeting and the
lack of direct haptic feedback. Motivated by this, we propose DexForce, a
method for collecting demonstrations of contact-rich dexterous manipulation.
DexForce leverages contact forces, measured during kinesthetic demonstrations,
to compute force-informed actions for policy learning. We use DexForce to
collect demonstrations for six tasks and show that policies trained on our
force-informed actions achieve an average success rate of 76% across all tasks.
In contrast, policies trained directly on actions that do not account for
contact forces have near-zero success rates. We also conduct a study ablating
the inclusion of force data in policy observations. We find that while using
force data never hurts policy performance, it helps the most for tasks that
require an advanced level of precision and coordination, like opening an
AirPods case and unscrewing a nut.
|
2501.10357
|
Zero-Shot Monocular Scene Flow Estimation in the Wild
|
cs.CV
|
Large models have shown generalization across datasets for many low-level
vision tasks, like depth estimation, but no such general models exist for scene
flow. Even though scene flow has wide potential use, it is not used in practice
because current predictive models do not generalize well. We identify three key
challenges and propose solutions for each. First, we create a method that
jointly estimates geometry and motion for accurate prediction. Second, we
alleviate scene flow data scarcity with a data recipe that affords us 1M
annotated training samples across diverse synthetic scenes. Third, we evaluate
different parameterizations for scene flow prediction and adopt a natural and
effective parameterization. Our resulting model outperforms existing methods as
well as baselines built on large-scale models in terms of 3D end-point error,
and shows zero-shot generalization to the casually captured videos from DAVIS
and the robotic manipulation scenes from RoboTAP. Overall, our approach makes
scene flow prediction more practical in the wild.
|
2501.10360
|
FaceXBench: Evaluating Multimodal LLMs on Face Understanding
|
cs.CV
|
Multimodal Large Language Models (MLLMs) demonstrate impressive
problem-solving abilities across a wide range of tasks and domains. However,
their capacity for face understanding has not been systematically studied. To
address this gap, we introduce FaceXBench, a comprehensive benchmark designed
to evaluate MLLMs on complex face understanding tasks. FaceXBench includes
5,000 multimodal multiple-choice questions derived from 25 public datasets and
a newly created dataset, FaceXAPI. These questions cover 14 tasks across 6
broad categories, assessing MLLMs' face understanding abilities in bias and
fairness, face authentication, recognition, analysis, localization and tool
retrieval. Using FaceXBench, we conduct an extensive evaluation of 26
open-source MLLMs alongside 2 proprietary models, revealing the unique
challenges in complex face understanding tasks. We analyze the models across
three evaluation settings: zero-shot, in-context task description, and
chain-of-thought prompting. Our detailed analysis reveals that current MLLMs,
including advanced models like GPT-4o and GeminiPro 1.5, show significant room
for improvement. We believe FaceXBench will be a crucial resource for
developing MLLMs equipped to perform sophisticated face understanding. Code:
https://github.com/Kartik-3004/facexbench
|
2501.10361
|
How Large Language Models (LLMs) Extrapolate: From Guided Missiles to
Guided Prompts
|
cs.CY cs.CL
|
This paper argues that we should perceive LLMs as machines of extrapolation.
Extrapolation is a statistical function for predicting the next value in a
series. Extrapolation contributes to both GPT successes and controversies
surrounding its hallucination. The term hallucination implies a malfunction,
yet this paper contends that it in fact indicates the chatbot's efficiency at
extrapolation, albeit an excess of it. This article bears a historical
dimension: it traces extrapolation to the nascent years of cybernetics. In
1941, when Norbert Wiener transitioned from missile science to communication
engineering, the pivotal concept he adopted was none other than extrapolation.
Soviet mathematician Andrey Kolmogorov, renowned for his compression logic that
inspired OpenAI, had developed in 1939 another extrapolation project that
Wiener later found strikingly similar to his own. This paper uncovers the connections
between hot war science, Cold War cybernetics, and the contemporary debates on
LLM performance.
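To make the statistical sense of "extrapolation" invoked above concrete, here is a minimal numerical illustration (not from the paper) of fitting a trend to a series and predicting its next value.

```python
import numpy as np

# Fit a line to an observed series and extrapolate one step ahead, the
# elementary statistical sense of "extrapolation" invoked above.
t = np.arange(5)
series = np.array([2.0, 4.1, 5.9, 8.2, 10.0])
slope, intercept = np.polyfit(t, series, deg=1)
print(f"extrapolated value at t=5: {slope * 5 + intercept:.2f}")
```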
|
2501.10362
|
Reviewing Uses of Regulatory Compliance Monitoring
|
cs.CY cs.DB
|
In order to deliver their services and products to customers, organizations
need to manage numerous business processes. One important consideration thereby
lies in the adherence to regulations such as laws, guidelines, or industry
standards. In order to monitor adherence of their business processes to
regulations - in other words, their regulatory compliance - organizations make
use of various techniques that draw on process execution data of IT systems
that support these processes. While previous research has investigated
conformance checking, an operation of process mining, with respect to the
domains in which it is applied, its operationalization of regulations, the
techniques it uses, and the presentation of the results it produces, other
techniques for compliance monitoring, which we summarize as compliance
checking techniques, have not yet been investigated in a structured manner.
To this end, this work presents a
systematic literature review on uses of regulatory compliance monitoring of
business processes, thereby offering insights into the various techniques being
used, their application and the results they generate. We highlight
commonalities and differences between the approaches and find that various
steps are performed manually; we also provide further impulses for research on
compliance monitoring and its use in practice.
|
2501.10365
|
Can LLMs Identify Gaps and Misconceptions in Students' Code
Explanations?
|
cs.CY cs.AI cs.SE
|
This paper investigates various approaches using Large Language Models (LLMs)
to identify gaps and misconceptions in students' self-explanations of specific
instructional material, in our case explanations of code examples. This
research is a part of our larger effort to automate the assessment of students'
freely generated responses, focusing specifically on their self-explanations of
code examples during activities related to code comprehension. In this work, we
experiment with zero-shot prompting, Supervised Fine-Tuning (SFT), and
preference alignment of LLMs to identify gaps in students' self-explanations.
With simple prompting, GPT-4 consistently outperformed LLaMA3 and Mistral in
identifying gaps and misconceptions, as confirmed by human evaluations.
Additionally, our results suggest that fine-tuned large language models are
more effective at identifying gaps in students' explanations compared to
zero-shot and few-shot prompting techniques. Furthermore, our findings show
that the preference optimization approach using Odds Ratio Preference
Optimization (ORPO) outperforms SFT in identifying gaps and misconceptions in
students' code explanations.
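As an illustration of the zero-shot setting described above, the snippet below sketches one plausible prompt construction for gap and misconception detection; the paper's actual prompts and rubric are not given in the abstract, so this wording is hypothetical.

```python
def gap_identification_prompt(code_example: str, student_explanation: str) -> str:
    """Hypothetical zero-shot prompt for flagging gaps and misconceptions in
    a student's self-explanation of a code example. The paper's actual
    prompts are not given in the abstract; this wording is illustrative."""
    return (
        "You are a computer-science tutor.\n"
        f"Code example:\n{code_example}\n\n"
        f"Student's explanation:\n{student_explanation}\n\n"
        "List (1) concepts in the code the explanation omits (gaps) and "
        "(2) statements in the explanation that are incorrect "
        "(misconceptions). If there are none, say 'none'."
    )

print(gap_identification_prompt("for i in range(3):\n    print(i)",
                                "This prints the numbers 1 to 3."))
```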
|
2501.10366
|
Participatory Assessment of Large Language Model Applications in an
Academic Medical Center
|
cs.CY cs.AI cs.LG
|
Although Large Language Models (LLMs) have shown promising performance in
healthcare-related applications, their deployment in the medical domain poses
unique challenges of ethical, regulatory, and technical nature. In this study,
we employ a systematic participatory approach to investigate the needs and
expectations regarding clinical applications of LLMs at Lausanne University
Hospital, an academic medical center in Switzerland. Having identified
potential LLM use-cases in collaboration with thirty stakeholders, including
clinical staff across 11 departments as well as nursing and patient
representatives, we assess the current feasibility of these use-cases taking
into account the regulatory frameworks, data protection regulation, bias,
hallucinations, and deployment constraints. This study provides a framework for
a participatory approach to identifying institutional needs with respect to
introducing advanced technologies into healthcare practice, and a realistic
analysis of the technology readiness level of LLMs for medical applications,
highlighting the issues that would need to be overcome for LLMs in healthcare
to be ethical and regulatory-compliant.
|
2501.10367
|
GTDE: Grouped Training with Decentralized Execution for Multi-agent
Actor-Critic
|
cs.MA cs.AI
|
The rapid advancement of multi-agent reinforcement learning (MARL) has given
rise to diverse training paradigms to learn the policies of each agent in the
multi-agent system. The paradigms of decentralized training and execution
(DTDE) and centralized training with decentralized execution (CTDE) have been
proposed and widely applied. However, as the number of agents increases, the
inherent limitations of these frameworks significantly degrade performance
metrics such as win rate and total reward. To reduce the influence of the
increasing number of agents on the performance metrics, we propose a novel
training paradigm of grouped training with decentralized execution (GTDE). This
framework eliminates the need for a centralized module and relies solely on
local information, effectively meeting the training requirements of large-scale
multi-agent systems. Specifically, we first introduce an adaptive grouping
module, which divides the agents into different groups based on their
observation histories. To implement end-to-end training, GTDE uses Gumbel-Sigmoid
for efficient point-to-point sampling on the grouping distribution while
ensuring gradient backpropagation. To adapt to the uncertainty in the number of
members in a group, two methods are used to implement a group information
aggregation module that merges member information within the group. Empirical
results show that in a cooperative environment with 495 agents, GTDE increased
the total reward by an average of 382% compared to the baseline. In a
competitive environment with 64 agents, GTDE achieved a 100% win rate against
the baseline.
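The Gumbel-Sigmoid trick mentioned above has a standard form; the sketch below shows a common implementation of differentiable binary group-membership sampling with an optional straight-through hard sample. It illustrates the general technique, not GTDE's exact grouping module, and the toy logits are placeholders.

```python
import torch

def gumbel_sigmoid(logits: torch.Tensor, tau: float = 1.0,
                   hard: bool = False) -> torch.Tensor:
    """Differentiable relaxation of Bernoulli sampling. The difference of two
    i.i.d. Gumbel(0,1) samples is logistic noise; adding it to the logits and
    applying a temperature-scaled sigmoid gives a reparameterised sample, so
    gradients flow back to the logits as the abstract requires."""
    u1 = torch.rand_like(logits).clamp_min(1e-20)
    u2 = torch.rand_like(logits).clamp_min(1e-20)
    g1 = -torch.log(-torch.log(u1))
    g2 = -torch.log(-torch.log(u2))
    y_soft = torch.sigmoid((logits + g1 - g2) / tau)
    if hard:
        # Straight-through: hard 0/1 on the forward pass, soft gradients back.
        return (y_soft > 0.5).float() + y_soft - y_soft.detach()
    return y_soft

# Toy usage: membership logits of 4 agents over 3 candidate groups.
logits = torch.randn(4, 3, requires_grad=True)
membership = gumbel_sigmoid(logits, tau=0.5, hard=True)
membership.sum().backward()   # gradients reach the logits
print(membership)
```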
|
2501.10368
|
The Potential of Answer Classes in Large-scale Written Computer-Science
Exams -- Vol. 2
|
cs.CY cs.AI
|
Students' answers to tasks provide a valuable source of information in
teaching as they result from applying cognitive processes to a learning content
addressed in the task. Due to steadily increasing course sizes, analyzing
student answers is frequently the only means of obtaining evidence about
student performance. However, in many cases, resources are limited, and when
evaluating exams, the focus is solely on identifying correct or incorrect
answers. This overlooks the value of analyzing incorrect answers, which can
help improve teaching strategies or identify misconceptions to be addressed in
the next cohort.
In teacher training for secondary education, assessment guidelines are
mandatory for every exam, including anticipated errors and misconceptions. We
applied this concept to a university exam with 462 students and 41 tasks. For
each task, the instructors developed answer classes -- classes of expected
responses, to which student answers were mapped during the exam correction
process. The experiment resulted in a shift in mindset among the tutors and
instructors responsible for the course: after initially harboring great
reservations about whether the significant additional effort would yield an
appropriate benefit, they subsequently found the procedure extremely
valuable.
The concept presented and the experience gained from the experiment were
cast into a system with which it is possible to correct paper-based exams on
the basis of answer classes. This updated version of the paper provides an
overview and new potential in the course of using the digital version of the
approach.
|
2501.10369
|
Creative Loss: Ambiguity, Uncertainty and Indeterminacy
|
cs.CY cs.AI cs.HC cs.LG
|
This article evaluates how creative uses of machine learning can address
three adjacent terms: ambiguity, uncertainty and indeterminacy. Through the
progression of these concepts it reflects on increasing ambitions for machine
learning as a creative partner, illustrated with research from Unit 21 at the
Bartlett School of Architecture, UCL. Through indeterminacy, it points toward
potential future approaches to machine learning and design.
|
2501.10370
|
Harnessing Large Language Models for Mental Health: Opportunities,
Challenges, and Ethical Considerations
|
cs.CY cs.AI cs.LG
|
Large Language Models (LLMs) are transforming mental health care by enhancing
accessibility, personalization, and efficiency in therapeutic interventions.
These AI-driven tools empower mental health professionals with real-time
support, improved data integration, and the ability to encourage care-seeking
behaviors, particularly in underserved communities. By harnessing LLMs,
practitioners can deliver more empathetic, tailored, and effective support,
addressing longstanding gaps in mental health service provision. However, their
implementation comes with significant challenges and ethical concerns.
Performance limitations, data privacy risks, biased outputs, and the potential
for generating misleading information underscore the critical need for
stringent ethical guidelines and robust evaluation mechanisms. The sensitive
nature of mental health data further necessitates meticulous safeguards to
protect patient rights and ensure equitable access to AI-driven care.
Proponents argue that LLMs have the potential to democratize mental health
resources, while critics warn of risks such as misuse and the diminishment of
human connection in therapy. Achieving a balance between innovation and ethical
responsibility is imperative. This paper examines the transformative potential
of LLMs in mental health care, highlights the associated technical and ethical
complexities, and advocates for a collaborative, multidisciplinary approach to
ensure these advancements align with the goal of providing compassionate,
equitable, and effective mental health support.
|
2501.10371
|
What we learned while automating bias detection in AI hiring systems for
compliance with NYC Local Law 144
|
cs.CY cs.AI
|
Since July 5, 2023, New York City's Local Law 144 requires employers to
conduct independent bias audits for any automated employment decision tools
(AEDTs) used in hiring processes. The law outlines a minimum set of bias tests
that AI developers and implementers must perform to ensure compliance. Over the
past few months, we have collected and analyzed audits conducted under this
law, identified best practices, and developed a software tool to streamline
employer compliance. Our tool, ITACA_144, tailors our broader bias auditing
framework to meet the specific requirements of Local Law 144. While automating
these legal mandates, we identified several critical challenges that merit
attention to ensure AI bias regulations and audit methodologies are both
effective and practical. This document presents the insights gained from
automating compliance with NYC Local Law 144. It aims to support other cities
and states in crafting similar legislation while addressing the limitations of
the NYC framework. The discussion focuses on key areas including data
requirements, demographic inclusiveness, impact ratios, effective bias
metrics, and data reliability.
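For concreteness, the snippet below computes impact ratios in the sense commonly used in Local Law 144 audits, each category's selection rate relative to the most-selected category; the counts are hypothetical and the function is illustrative, not the ITACA_144 implementation.

```python
def impact_ratios(selected: dict[str, int], total: dict[str, int]) -> dict[str, float]:
    """Each category's selection rate divided by the rate of the
    most-selected category. A ratio below 0.8 is the conventional
    four-fifths-rule flag; Local Law 144 itself mandates computing and
    publishing the ratios rather than enforcing a threshold."""
    rates = {g: selected[g] / total[g] for g in total}
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}

# Hypothetical audit counts by demographic category.
print(impact_ratios(selected={"group_a": 40, "group_b": 25},
                    total={"group_a": 100, "group_b": 100}))
# {'group_a': 1.0, 'group_b': 0.625}
```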
|
2501.10373
|
DK-PRACTICE: An Intelligent Educational Platform for Personalized
Learning Content Recommendations Based on Students Knowledge State
|
cs.CY cs.AI
|
This study introduces DK-PRACTICE (Dynamic Knowledge Prediction and
Educational Content Recommendation System), an intelligent online platform that
leverages machine learning to provide personalized learning recommendations
based on student knowledge state. Students participate in a short, adaptive
assessment using the question-and-answer method regarding key concepts in a
specific knowledge domain. The system dynamically selects the next question for
each student based on the correctness and accuracy of their previous answers.
After the test is completed, DK-PRACTICE analyzes students' interaction history
to recommend learning materials that strengthen the student's knowledge state
in the identified gap areas. Both question selection and learning material
recommendations are based on machine learning models trained using anonymized
data from a real learning environment. To provide self-assessment and monitor
learning progress, DK-PRACTICE allows students to take two tests: one
pre-teaching and one post-teaching. After each test, a report is generated with
detailed results. In addition, the platform offers functions to visualize
learning progress based on recorded test statistics. DK-PRACTICE promotes
adaptive and personalized learning by empowering students with self-assessment
capabilities and providing instructors with valuable information about
students' knowledge levels. DK-PRACTICE can be extended to various educational
environments and knowledge domains, provided the necessary data is available
according to the educational topics. A subsequent paper will present the
methodology for the experimental application and evaluation of the platform.
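The abstract does not disclose the trained models behind question selection, so the following is only a toy illustration of the adaptive idea: match the next question's difficulty to the student's running accuracy. The question bank and difficulty scores are hypothetical.

```python
def select_next_question(history: list[tuple[str, bool]],
                         bank: dict[str, float]) -> str:
    """Toy adaptive selector: pick the unanswered question whose difficulty
    (in [0, 1]) is closest to the student's running accuracy, so the test
    drifts harder after successes and easier after failures."""
    answered = {qid for qid, _ in history}
    accuracy = sum(ok for _, ok in history) / len(history) if history else 0.5
    candidates = [qid for qid in bank if qid not in answered]
    return min(candidates, key=lambda qid: abs(bank[qid] - accuracy))

bank = {"q1": 0.2, "q2": 0.5, "q3": 0.8}
print(select_next_question([("q1", True)], bank))   # "q3": harder after a success
```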
|
2501.10375
|
DAOP: Data-Aware Offloading and Predictive Pre-Calculation for Efficient
MoE Inference
|
cs.DC cs.LG
|
Mixture-of-Experts (MoE) models, though highly effective for various machine
learning tasks, face significant deployment challenges on memory-constrained
devices. While GPUs offer fast inference, their limited memory compared to CPUs
means not all experts can be stored on the GPU simultaneously, necessitating
frequent, costly data transfers from CPU memory, often negating GPU speed
advantages. To address this, we present DAOP, an on-device MoE inference engine
to optimize parallel GPU-CPU execution. DAOP dynamically allocates experts
between CPU and GPU based on per-sequence activation patterns, and selectively
pre-calculates predicted experts on CPUs to minimize transfer latency. This
approach enables efficient resource utilization across various expert cache
ratios while maintaining model accuracy through a novel graceful degradation
mechanism. Comprehensive evaluations across various datasets show that DAOP
outperforms traditional expert caching and prefetching methods by up to 8.20x
and offloading techniques by 1.35x while maintaining accuracy.
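DAOP's allocator is described only at a high level here, so the sketch below shows a simplified, frequency-based expert placement in the same spirit; the predictive pre-calculation step is deliberately omitted, and the activation trace is hypothetical.

```python
from collections import Counter

def place_experts(activation_trace: list[int], num_experts: int,
                  gpu_slots: int) -> tuple[set[int], set[int]]:
    """Keep the experts activated most often for the current sequence on the
    GPU; leave the rest on the CPU. DAOP additionally predicts upcoming
    experts and pre-calculates them on the CPU, which this toy omits."""
    counts = Counter(activation_trace)
    ranked = sorted(range(num_experts), key=lambda e: counts[e], reverse=True)
    return set(ranked[:gpu_slots]), set(ranked[gpu_slots:])

# Hypothetical per-sequence activation pattern over 8 experts, 3 GPU slots.
trace = [0, 3, 3, 5, 3, 0, 7, 3, 0]
gpu, cpu = place_experts(trace, num_experts=8, gpu_slots=3)
print("GPU:", sorted(gpu), "CPU:", sorted(cpu))
# GPU: [0, 3, 5] CPU: [1, 2, 4, 6, 7]
```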
|
2501.10376
|
Energy-Constrained Information Storage on Memristive Devices in the
Presence of Resistive Drift
|
cs.ET cs.IT cs.LG eess.SP math.IT
|
In this paper, we examine the problem of information storage on memristors
affected by resistive drift noise under energy constraints. We introduce a
novel, fundamental trade-off between the information lifetime of memristive
states and the energy that must be expended to bring the device into a
particular state. We then treat the storage problem as one of communication
over a noisy, energy-constrained channel, and propose a joint source-channel
coding (JSCC) approach to storing images in an analogue fashion. To design an
encoding scheme for natural images and to model the memristive channel, we make
use of data-driven techniques from the field of deep learning for
communications, namely deep joint source-channel coding (DeepJSCC), employing a
generative model of resistive drift as a computationally tractable
differentiable channel model for end-to-end optimisation. We introduce a
modified version of generalised divisive normalisation (GDN), a biologically
inspired form of normalisation, which we call conditional GDN (cGDN), allowing
for conditioning on continuous channel characteristics, including the initial
resistive state and the delay between storage and reading. Our results show
that the delay-conditioned network is able to learn an energy-aware coding
scheme that achieves a higher and more balanced reconstruction quality across a
range of storage delays.
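The sketch below shows one plausible way to realise the conditional GDN idea, with the beta and gamma parameters produced by a small MLP from the continuous conditions (initial state, delay); the MLP sizes and the conditioning interface are assumptions, and the authors' exact parameterisation may differ.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConditionalGDN(nn.Module):
    """Sketch of a conditional GDN layer: classic GDN normalises each channel
    by a learned weighted norm of all channels; here beta and gamma are
    generated by a small MLP from continuous channel conditions (e.g. initial
    resistive state and storage delay). Illustrative only."""
    def __init__(self, channels: int, cond_dim: int):
        super().__init__()
        self.channels = channels
        self.mlp = nn.Sequential(
            nn.Linear(cond_dim, 64), nn.ReLU(),
            nn.Linear(64, channels + channels * channels),
        )

    def forward(self, x: torch.Tensor, cond: torch.Tensor) -> torch.Tensor:
        # x: (B, C, H, W); cond: (B, cond_dim)
        params = self.mlp(cond)
        beta = F.softplus(params[:, :self.channels])    # (B, C), positive
        gamma = F.softplus(params[:, self.channels:])   # (B, C*C), positive
        gamma = gamma.view(-1, self.channels, self.channels)
        # Denominator: beta_i + sum_j gamma_ij * x_j^2, per spatial location.
        norm = torch.einsum("bij,bjhw->bihw", gamma, x * x)
        norm = norm + beta[:, :, None, None]
        return x * torch.rsqrt(norm + 1e-6)

x = torch.randn(2, 8, 16, 16)
cond = torch.rand(2, 2)   # e.g. [initial_state, delay], normalised to [0, 1]
print(ConditionalGDN(8, 2)(x, cond).shape)   # torch.Size([2, 8, 16, 16])
```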
|
2501.10377
|
The Three Social Dimensions of Chatbot Technology
|
cs.CY cs.AI cs.CL
|
The development and deployment of chatbot technology, while spanning decades
and employing different techniques, require innovative frameworks to understand
and interrogate their functionality and implications. A mere technocentric
account of the evolution of chatbot technology does not fully illuminate how
conversational systems are embedded in societal dynamics. This study presents a
structured examination of chatbots across three societal dimensions,
highlighting their roles as objects of scientific research, commercial
instruments, and agents of intimate interaction. Through furnishing a
dimensional framework for the evolution of conversational systems, from
laboratories to marketplaces to private lives, this article contributes to the
wider scholarly inquiry into chatbot technology and its impact on lived human
experiences and dynamics.
|
2501.10384
|
Nirvana AI Governance: How AI Policymaking Is Committing Three Old
Fallacies
|
cs.CY cs.HC cs.LG
|
This research applies Harold Demsetz's concept of the nirvana approach to the
realm of AI governance and debunks three common fallacies in various AI policy
proposals--"the grass is always greener on the other side," "free lunch," and
"the people could be different." Through this, I expose fundamental flaws in
the current AI regulatory proposals. First, some commentators intuitively
believe that people are more reliable than machines and that government works
better at risk control than companies' self-regulation, but they do not fully
compare the differences between the status quo and the proposed replacements.
Second, when proposing some regulatory tools, some policymakers and researchers
do not realize and even gloss over the fact that harms and costs are also
inherent in their proposals. Third, some policy proposals are initiated based
on a false comparison between the AI-driven world, where AI does lead to some
risks, and an entirely idealized world, where no risk exists at all. However,
the appropriate approach is to compare the world where AI causes risks to the
real world where risks are everywhere, but people can live well with these
risks. The prevalence of these fallacies in AI governance underscores a broader
issue: the tendency to idealize potential solutions without fully considering
their real-world implications. This idealization can lead to regulatory
proposals that are not only impractical but potentially harmful to innovation
and societal progress.
|
2501.10385
|
Autonomous Microscopy Experiments through Large Language Model Agents
|
cs.CY cond-mat.mtrl-sci cs.AI physics.ins-det
|
The emergence of large language models (LLMs) has accelerated the development
of self-driving laboratories (SDLs) for materials research. Despite their
transformative potential, current SDL implementations rely on rigid, predefined
protocols that limit their adaptability to dynamic experimental scenarios
across different labs. A significant challenge persists in measuring how
effectively AI agents can replicate the adaptive decision-making and
experimental intuition of expert scientists. Here, we introduce AILA
(Artificially Intelligent Lab Assistant), a framework that automates atomic
force microscopy (AFM) through LLM-driven agents. Using AFM as an experimental
testbed, we develop AFMBench, a comprehensive evaluation suite that challenges
AI agents based on language models like GPT-4o and GPT-3.5 to perform tasks
spanning the scientific workflow: from experimental design to results analysis.
Our systematic assessment shows that state-of-the-art language models struggle
even with basic tasks such as documentation retrieval, leading to a significant
decline in performance in multi-agent coordination scenarios. Further, we
observe that LLMs exhibit a tendency not to adhere to instructions or even to
divagate into additional tasks beyond the original request, raising serious
concerns regarding safety alignment aspects of AI agents for SDLs. Finally, we
demonstrate the application of AILA on increasingly complex, open-ended
experiments: automated AFM calibration, high-resolution feature
detection, and mechanical property measurement. Our findings emphasize the
necessity for stringent benchmarking protocols before deploying AI agents as
laboratory assistants across scientific disciplines.
|
2501.10388
|
Beyond the Sum: Unlocking AI Agents Potential Through Market Forces
|
cs.CY cs.AI cs.CL cs.GT cs.MA
|
The emergence of Large Language Models has fundamentally transformed the
capabilities of AI agents, enabling a new class of autonomous agents capable of
interacting with their environment through dynamic code generation and
execution. These agents possess the theoretical capacity to operate as
independent economic actors within digital markets, offering unprecedented
potential for value creation through their distinct advantages in operational
continuity, perfect replication, and distributed learning capabilities.
However, contemporary digital infrastructure, architected primarily for human
interaction, presents significant barriers to their participation.
This work presents a systematic analysis of the infrastructure requirements
necessary for AI agents to function as autonomous participants in digital
markets. We examine four key areas - identity and authorization, service
discovery, interfaces, and payment systems - to show how existing
infrastructure actively impedes agent participation. We argue that addressing
these infrastructure challenges represents more than a technical imperative; it
constitutes a fundamental step toward enabling new forms of economic
organization. Much as traditional markets enable human intelligence to
coordinate complex activities beyond individual capability, markets
incorporating AI agents could dramatically enhance economic efficiency through
continuous operation, perfect information sharing, and rapid adaptation to
changing conditions. The infrastructure challenges identified in this work
represent key barriers to realizing this potential.
|